plugin.properties mechanism in Eclipse RCP - Java

My project includes multiple plugins, and every plugin contains a plugin.properties file with nearly 20 translations.
The MANIFEST.MF file defines the name of the properties files where the external plugin strings are stored.
Bundle-Localization: plugin
I define the name of the plugin like
%plugin.name
At runtime, Eclipse looks up "%plugin.name" in the plugin.properties file.
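For illustration, the lookup ties together entries like these (the value below is made up):
MANIFEST.MF:
Bundle-Name: %plugin.name
Bundle-Localization: plugin
plugin.properties (plus plugin_de.properties, plugin_fr.properties, ... for the translations):
plugin.name = My Example Plugin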
Which class reads the Bundle-Localization entry from the MANIFEST.MF, and at which point is the string starting with the '%' prefix looked up in the "plugin.properties" file?
I want to find and patch that class so that I can first look into some other directories/files for the "%plugin.name" identifier. With this new mechanism I could add fragments to my product and override single lines of a "plugin.properties" file without changing the original plugin.
With this mechanism I could create a build process for multiple customers just by adding different fragments. The fragments would contain the customer names and the specific strings they want to change.
I want to do it this way because the fragment mechanism only adds files to the original plugin. When a "plugin.properties" file already exists in the plugin, the fragment's "plugin.properties" files are ignored.
UPDATE 1:
The method
class ManifestLocalization{
...
protected ResourceBundle getResourceBundle(String localeString) {
}
...
}
returns the ResourceBundle of the properties file for the given locale string.
If somebody knows how I can first look into the fragment to get the resource path, please post it.
UPDATE 2:
The method in class ManifestLocalization
private URL findInResolved(String filePath, AbstractBundle bundleHost) {
URL result = findInBundle(filePath, bundleHost);
if (result != null)
return result;
return findInFragments(filePath, bundleHost);
}
This method searches for the properties file and caches it. The translations can then be read from the cached file. The problem is that the complete file is cached, not individual translations.
A solution would be to read the fragment file first, then the bundle file. When both files exist, merge them into one file and write the new properties file to disk. The URL of the new properties file is returned, so that the new properties file can be cached.
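A minimal sketch of that merge step (the method name, temp-file handling and the java.io/java.net/java.util.Properties usage are my own, not part of Equinox) could look like this:
private URL mergeAndCache(URL bundleUrl, URL fragmentUrl) throws IOException {
    Properties merged = new Properties();
    if (bundleUrl != null) {
        try (InputStream in = bundleUrl.openStream()) {
            merged.load(in); // host bundle translations first
        }
    }
    if (fragmentUrl != null) {
        try (InputStream in = fragmentUrl.openStream()) {
            merged.load(in); // fragment entries override host entries with the same key
        }
    }
    File mergedFile = File.createTempFile("merged-plugin", ".properties");
    try (OutputStream out = new FileOutputStream(mergedFile)) {
        merged.store(out, "host plugin.properties merged with fragment overrides");
    }
    return mergedFile.toURI().toURL(); // this URL could then be cached instead of the original one
}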

Although I may have got the information wrong, I had exactly the same problem. The plugin is not activated twice, and I cannot get at the fragment's Bundle-Localization key.
I want all my language translations in the plugin.properties (I know this is frowned upon but it is much easier to manage a single file).
I (half)solved the problem by using
public void populate(Bundle bundle) {
String localisation = (String) bundle.getHeaders().get("Bundle-Localization");
Locale locale = Locale.getDefault();
populate(bundle.getEntry(getFileName(localisation)));
populate(bundle.getEntry(getFileName(localisation, locale.getLanguage())));
populate(bundle.getEntry(getFileName(localisation, locale.getLanguage(), locale.getCountry())));
populate(bundle.getResource(getFileName("fragment")));
populate(bundle.getResource(getFileName("fragment", locale.getLanguage())));
populate(bundle.getResource(getFileName("fragment", locale.getLanguage(), locale.getCountry())));
}
and simply name my fragment localisation file 'fragment.properties'.
This is not particularly elegant, but it works.
By the way, to get files from the fragment you need getResource(); it seems that fragment files are on the classpath, or are only found when using getResource().
If someone has a better approach, please correct me.
All the best,
Mark.

/**
* The Hacked NLS (National Language Support) system.
* <p>
* Singleton.
*
* @author mima
*/
public final class HackedNLS {
private static final HackedNLS instance = new HackedNLS();
private final Map<String, String> translations;
private final Set<String> knownMissing;
/**
* Create the NLS singleton.
*/
private HackedNLS() {
translations = new HashMap<String, String>();
knownMissing = new HashSet<String>();
}
/**
* Populates the NLS key/value pairs for the current locale.
* <p>
* Plugin localization files may have any name as long as it is declared in the Manifest under
* the Bundle-Localization key.
* <p>
* Fragments <b>MUST</b> define their localization using the base name 'fragment'.
* This is due to the fact that I have no access to the Bundle-Localization key for the
* fragment.
* This may change.
*
* @param bundle The bundle to use for population.
*/
public void populate(Bundle bundle) {
String baseName = (String) bundle.getHeaders().get("Bundle-Localization");
populate(getLocalizedEntry(baseName, bundle));
populate(getLocalizedEntry("fragment", bundle));
}
private URL getLocalizedEntry(String baseName, Bundle bundle) {
Locale locale = Locale.getDefault();
URL entry = bundle.getEntry(getFileName(baseName, locale.getLanguage(), locale.getCountry()));
if (entry == null) {
entry = bundle.getResource(getFileName(baseName, locale.getLanguage(), locale.getCountry()));
}
if (entry == null) {
entry = bundle.getEntry(getFileName(baseName, locale.getLanguage()));
}
if (entry == null) {
entry = bundle.getResource(getFileName(baseName, locale.getLanguage()));
}
if (entry == null) {
entry = bundle.getEntry(getFileName(baseName));
}
if (entry == null) {
entry = bundle.getResource(getFileName(baseName));
}
return entry;
}
private String getFileName(String baseName, String...arguments) {
String name = baseName;
for (int index = 0; index < arguments.length; index++) {
name += "_" + arguments[index];
}
return name + ".properties";
}
private void populate(URL resourceUrl) {
if (resourceUrl != null) {
Properties props = new Properties();
InputStream stream = null;
try {
stream = resourceUrl.openStream();
props.load(stream);
} catch (IOException e) {
warn("Could not open the resource file " + resourceUrl, e);
} finally {
if (stream != null) { // guard against openStream() having failed before the stream was assigned
try {
stream.close();
} catch (IOException e) {
warn("Could not close stream for resource file " + resourceUrl, e);
}
}
}
for (Object key : props.keySet()) {
translations.put((String) key, (String) props.get(key));
}
}
}
/**
* @param key The key to translate.
* @param arguments Array of arguments to format into the translated text. May be empty.
* @return The formatted translated string.
*/
public String getTranslated(String key, Object...arguments) {
String translation = translations.get(key);
if (translation != null) {
if (arguments != null) {
translation = MessageFormat.format(translation, arguments);
}
} else {
translation = "!! " + key;
if (!knownMissing.contains(key)) {
warn("Could not find any translation text for " + key, null);
knownMissing.add(key);
}
}
return translation;
}
private void warn(String string, Throwable cause) {
Status status;
if (cause == null) {
status = new Status(
IStatus.ERROR,
MiddlewareActivator.PLUGIN_ID,
string);
} else {
status = new Status(
IStatus.ERROR,
MiddlewareActivator.PLUGIN_ID,
string,
cause);
}
MiddlewareActivator.getDefault().getLog().log(status);
}
/**
* @return The NLS instance.
*/
public static HackedNLS getInstance() {
return instance;
}
/**
* @param key The key to translate.
* @param arguments Array of arguments to format into the translated text. May be empty.
* @return The formatted translated string.
*/
public static String getText(String key, Object...arguments) {
return getInstance().getTranslated(key, arguments);
}
}
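For completeness, a hypothetical way to wire this up from a plug-in's Activator (the key name below is made up) would be:
public void start(BundleContext context) throws Exception {
    super.start(context);
    // load the bundle's own Bundle-Localization file plus any fragment.properties contributed by fragments
    HackedNLS.getInstance().populate(context.getBundle());
    String pluginName = HackedNLS.getText("plugin.name");
}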

Change the name of your fragment's plugin.properties to something else, e.g. fragment.properties.
In your fragment manifest, change the
Bundle-Localization: plugin
to
Bundle-Localization: fragment
Your plugin will be activated twice, the first time using the plugin.properties, the second using the fragment.properties.

Plugin activation is handled by the OSGi runtime, Equinox. However, I would strongly discourage trying to patch any files there to create specific behavior. The approach suggested by Mark seems a much saner solution to your problem.

One way is to attach a bundle listener, listen for installations of bundles (and perhaps also look at already installed bundles), and for each bundle generate/provide - and install - a fragment with the wanted property files. If this is done before the application starts up, it should take effect.
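A rough sketch of such a listener (assuming you have access to the BundleContext, e.g. in an early-starting bundle's activator; the fragment generation itself is left out) could be:
context.addBundleListener(new BundleListener() {
    public void bundleChanged(BundleEvent event) {
        if (event.getType() == BundleEvent.INSTALLED) {
            Bundle installed = event.getBundle();
            // generate/provide - and install - a fragment with the wanted
            // property files for 'installed' here
        }
    }
});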

Related

How to load a Quarkus Qute template dynamically without inject?

I had the following problem: I have a service where I want to dynamically render templates using Qute, whose names I don't know in advance (because they are passed via the endpoint).
Unfortunately Quarkus itself doesn't give you the possibility to say "Template t = new Template()"; you always have to define templates via inject at the beginning of a class. After a long time of searching and thinking about it, I arrived at the following solution:
The solution is to inject the Quarkus template Engine instead of a Template. The Engine can render a template directly. Then we only have to read our template file as a String (Java 11 can read it with Files.readString(path, encoding)) and render it with our data map.
@Path("/api")
public class YourResource {
public static final String TEMPLATE_DIR = "/templates/";
@Inject
Engine engine;
@POST
public String loadTemplateDynamically(String locale, String templateName, Map<String, String> data) {
File currTemplate = new File(YourResource.class.getResource("/").getPath() + TEMPLATE_DIR + locale + "/" + templateName + ".html"); // this generates a String like <yourResources Folder> + /templates/<locale>/<templateName>.html
try {
Template t = engine.parse(Files.readString(currTemplate.getAbsoluteFile().toPath(), StandardCharsets.UTF_8));
// this renders your data into the template; you could also specify individual values
return t.data(data).render();
} catch (IOException e) {
e.printStackTrace();
}
return "template does not exist";
}
}

How to invoke a model from TensorFlow Java?

The following python code passes ["hello", "world"] into the universal sentence encoder and returns an array of floats denoting their encoded representation.
import tensorflow as tf
import tensorflow_hub as hub
module = hub.KerasLayer("https://tfhub.dev/google/universal-sentence-encoder/4")
model = tf.keras.Sequential(module)
print("model: ", model(["hello", "world"]))
This code works but I'd now like to do the same thing using the Java API. I've successfully loaded the module, but I am unable to pass inputs into the model and extract the output. Here is what I've got so far:
import org.tensorflow.Graph;
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.Tensors;
import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.GPUOptions;
import org.tensorflow.framework.GraphDef;
import org.tensorflow.framework.MetaGraphDef;
import org.tensorflow.framework.NodeDef;
import org.tensorflow.util.SaverDef;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
public final class NaiveBayesClassifier
{
public static void main(String[] args)
{
new NaiveBayesClassifier().run();
}
protected SavedModelBundle loadModule(Path source, String... tags) throws IOException
{
return SavedModelBundle.load(source.toAbsolutePath().normalize().toString(), tags);
}
public void run()
{
try (SavedModelBundle module = loadModule(Paths.get("universal-sentence-encoder"), "serve"))
{
Graph graph = module.graph();
try (Session session = new Session(graph, ConfigProto.newBuilder().
setGpuOptions(GPUOptions.newBuilder().setAllowGrowth(true)).
setAllowSoftPlacement(true).
build().toByteArray()))
{
Tensor<String> input = Tensors.create(new byte[][]
{
"hello".getBytes(StandardCharsets.UTF_8),
"world".getBytes(StandardCharsets.UTF_8)
});
List<Tensor<?>> result = session.runner().feed("serving_default_inputs", input).
addTarget("???").run();
}
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
I used https://stackoverflow.com/a/51952478/14731 to scan the model for possible input/output nodes. I believe the input node is "serving_default_inputs" but I can't figure out the output node. More importantly, I don't have to specify any of these values when invoking the code in python through Keras so is there a way to do the same using the Java API?
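For reference, the scan from that answer boils down to iterating the graph operations, roughly like this inside the try-with-resources block above (this additionally needs org.tensorflow.Operation and java.util.Iterator imports):
Iterator<Operation> operations = module.graph().operations();
while (operations.hasNext()) {
    Operation op = operations.next();
    // print each node's name and type to spot candidate input/output nodes
    System.out.println(op.name() + " : " + op.type());
}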
UPDATE: Thanks to roywei I can now confirm that the input node is serving_default_input and the output node is StatefulPartitionedCall_1, but when I plug these names into the aforementioned code I get:
2020-05-22 22:13:52.266287: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:809 : Failed precondition: Table not initialized.
Exception in thread "main" java.lang.IllegalStateException: [_Derived_]{{function_node __inference_pruned_6741}} {{function_node __inference_pruned_6741}} Error while reading resource variable EncoderDNN/DNN/ResidualHidden_0/dense/kernel/part_25 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/EncoderDNN/DNN/ResidualHidden_0/dense/kernel/part_25/class tensorflow::Var does not exist.
[[{{node EncoderDNN/DNN/ResidualHidden_0/dense/kernel/ConcatPartitions/concat/ReadVariableOp_25}}]]
[[StatefulPartitionedCall_1/StatefulPartitionedCall]]
at libtensorflow@1.15.0/org.tensorflow.Session.run(Native Method)
at libtensorflow@1.15.0/org.tensorflow.Session.access$100(Session.java:48)
at libtensorflow@1.15.0/org.tensorflow.Session$Runner.runHelper(Session.java:326)
at libtensorflow@1.15.0/org.tensorflow.Session$Runner.run(Session.java:276)
Meaning, I still cannot invoke the model. What am I missing?
I figured it out after roywei pointed me in the right direction.
I needed to use SavedModelBundle.session() instead of constructing my own instance. This is because the loader initializes the graph variables.
Instead of passing a ConfigProto to the Session constructor, I passed it into the SavedModelBundle loader instead.
I needed to use fetch() instead of addTarget() to retrieve the output tensor.
Here is the working code:
public final class NaiveBayesClassifier
{
public static void main(String[] args)
{
new NaiveBayesClassifier().run();
}
public void run()
{
try (SavedModelBundle module = loadModule(Paths.get("universal-sentence-encoder"), "serve"))
{
try (Tensor<String> input = Tensors.create(new byte[][]
{
"hello".getBytes(StandardCharsets.UTF_8),
"world".getBytes(StandardCharsets.UTF_8)
}))
{
MetaGraphDef metadata = MetaGraphDef.parseFrom(module.metaGraphDef());
Map<String, Shape> nameToInput = getInputToShape(metadata);
String firstInput = nameToInput.keySet().iterator().next();
Map<String, Shape> nameToOutput = getOutputToShape(metadata);
String firstOutput = nameToOutput.keySet().iterator().next();
System.out.println("input: " + firstInput);
System.out.println("output: " + firstOutput);
System.out.println();
List<Tensor<?>> result = module.session().runner().feed(firstInput, input).
fetch(firstOutput).run();
for (Tensor<?> tensor : result)
{
{
float[][] array = new float[tensor.numDimensions()][tensor.numElements() /
tensor.numDimensions()];
tensor.copyTo(array);
System.out.println(Arrays.deepToString(array));
}
}
}
}
catch (IOException e)
{
e.printStackTrace();
}
}
/**
* Loads a graph from a file.
*
* @param source the directory to load the model from
* @param tags the model variant(s) to load
* @return the graph
* @throws NullPointerException if any of the arguments are null
* @throws IOException if an error occurs while reading the file
*/
protected SavedModelBundle loadModule(Path source, String... tags) throws IOException
{
// https://stackoverflow.com/a/43526228/14731
try
{
return SavedModelBundle.loader(source.toAbsolutePath().normalize().toString()).
withTags(tags).
withConfigProto(ConfigProto.newBuilder().
setGpuOptions(GPUOptions.newBuilder().setAllowGrowth(true)).
setAllowSoftPlacement(true).
build().toByteArray()).
load();
}
catch (TensorFlowException e)
{
throw new IOException(e);
}
}
/**
* @param metadata the graph metadata
* @return the first signature, or null
*/
private SignatureDef getFirstSignature(MetaGraphDef metadata)
{
Map<String, SignatureDef> nameToSignature = metadata.getSignatureDefMap();
if (nameToSignature.isEmpty())
return null;
return nameToSignature.get(nameToSignature.keySet().iterator().next());
}
/**
* @param metadata the graph metadata
* @return the output signature
*/
private SignatureDef getServingSignature(MetaGraphDef metadata)
{
return metadata.getSignatureDefOrDefault("serving_default", getFirstSignature(metadata));
}
/**
* @param metadata the graph metadata
* @return a map from an output name to its shape
*/
protected Map<String, Shape> getOutputToShape(MetaGraphDef metadata)
{
Map<String, Shape> result = new HashMap<>();
SignatureDef servingDefault = getServingSignature(metadata);
for (Map.Entry<String, TensorInfo> entry : servingDefault.getOutputsMap().entrySet())
{
TensorShapeProto shapeProto = entry.getValue().getTensorShape();
List<Dim> dimensions = shapeProto.getDimList();
long firstDimension = dimensions.get(0).getSize();
long[] remainingDimensions = dimensions.stream().skip(1).mapToLong(Dim::getSize).toArray();
Shape shape = Shape.make(firstDimension, remainingDimensions);
result.put(entry.getValue().getName(), shape);
}
return result;
}
/**
* @param metadata the graph metadata
* @return a map from an input name to its shape
*/
protected Map<String, Shape> getInputToShape(MetaGraphDef metadata)
{
Map<String, Shape> result = new HashMap<>();
SignatureDef servingDefault = getServingSignature(metadata);
for (Map.Entry<String, TensorInfo> entry : servingDefault.getInputsMap().entrySet())
{
TensorShapeProto shapeProto = entry.getValue().getTensorShape();
List<Dim> dimensions = shapeProto.getDimList();
long firstDimension = dimensions.get(0).getSize();
long[] remainingDimensions = dimensions.stream().skip(1).mapToLong(Dim::getSize).toArray();
Shape shape = Shape.make(firstDimension, remainingDimensions);
result.put(entry.getValue().getName(), shape);
}
return result;
}
}
There are two ways to get the names:
1) Using Java:
You can read the input and output names from the org.tensorflow.proto.framework.MetaGraphDef stored in saved model bundle.
Here is an example on how to extract the information:
https://github.com/awslabs/djl/blob/master/tensorflow/tensorflow-engine/src/main/java/ai/djl/tensorflow/engine/TfSymbolBlock.java#L149
2) Using python:
load the saved model in tensorflow python and print the names
loaded = tf.saved_model.load("path/to/model/")
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
I recommend taking a look at the Deep Java Library; it automatically handles the input and output names.
It supports TensorFlow 2.1.0 and allows you to load Keras models as well as TF Hub SavedModels. Take a look at the documentation here and here.
Feel free to open an issue if you have problems loading your model.
You can load the TF model with Deep Java Library:
System.setProperty("ai.djl.repository.zoo.location", "https://storage.googleapis.com/tfhub-modules/google/universal-sentence-encoder/1.tar.gz?artifact_id=encoder");
Criteria<NDList, NDList> criteria =
Criteria.builder()
.setTypes(NDList.class, NDList.class)
.optArtifactId("ai.djl.localmodelzoo:encoder")
.build();
ZooModel<NDList, NDList> model = ModelZoo.loadModel(criteria);
See https://github.com/awslabs/djl/blob/master/docs/load_model.md#load-model-from-a-url for detail
I need to do the same, but there still seem to be lots of missing pieces regarding DJL usage. E.g., what to do after this?:
ZooModel<NDList, NDList> model = ModelZoo.loadModel(criteria);
I finally found an example in the DJL source code. The key take-away is to not use NDList for the input/output at all:
Criteria<String[], float[][]> criteria =
Criteria.builder()
.optApplication(Application.NLP.TEXT_EMBEDDING)
.setTypes(String[].class, float[][].class)
.optModelUrls(modelUrl)
.build();
try (ZooModel<String[], float[][]> model = ModelZoo.loadModel(criteria);
Predictor<String[], float[][]> predictor = model.newPredictor()) {
return predictor.predict(inputs.toArray(new String[0]));
}
See https://github.com/awslabs/djl/blob/master/examples/src/main/java/ai/djl/examples/inference/UniversalSentenceEncoder.java for the complete example.

Android doesn't localize strings properly unless they are 'preferred' on the device. Is there a workaround?

In my Android app I want to give users the ability to change the app language regardless of system settings. I've read multiple tutorials on the subject and arrived at a solution that works... but only if all in-app languages are already on the system's preferred languages list. This is the settings screen I'm talking about:
https://imgur.com/a/chXjhvv
If any of my in-app languages is not there, then the default string resources are used by Android. Has anyone had the same problem? Are there any workarounds?
This is my setup:
1) I have three strings.xml files in three separate folders (values, values-en and values-ru)
2) My in-app language helper method looks like this:
@Nullable
public static Context getContextLocalized(@Nullable Context context) {
if (context != null) {
Configuration configuration = new Configuration(context.getResources().getConfiguration());
configuration.setLocale(getLocale());
return context.createConfigurationContext(configuration);
} else {
return null;
}
}
It basically takes the locale selected by the user (the getLocale() method) and creates a new context with that locale.
3) I have three predefined locales:
new Locale("pl", "PL")
new Locale("en", "GB")
new Locale("ru", "RU")
that user can choose from.
4) The way I use getContextLocalized() looks like this:
@NonNull
public static String getStringLocalized(@Nullable Context context, @StringRes int stringId, Object... args) {
Context contextLocalized = getContextLocalized(context);
if (contextLocalized != null) {
try {
return contextLocalized.getString(stringId, args);
} catch (Exception e) {
Log.e(TAG, "getStringLocalized: ERROR " +
contextLocalized.getResources().getResourceName(stringId) +
"! " + e.getMessage());
e.printStackTrace();
return "ERROR";
}
}
return "NULL";
}
This method is what I use in my Fragments and Activities to extract the desired string.
This solution works great if my system language list contains all my in-app languages. However, if I delete Russian from the preferred languages list, then my app reverts to Polish (which is stored in the default values folder) whenever the user chooses Russian as the in-app language.
My min SDK version is 20. I tested this using API 26 emulator.
I have written this test method:
public static void test(@NonNull Context appContext) {
if (!BuildConfig.DEBUG) {
return;
}
Resources appRes = appContext.getResources();
Configuration appConfig = appRes.getConfiguration();
Log.d("LanguageUtilsTest", "DEFAULT -> " + appContext.getString(R.string.cancel));
Configuration configEN = new Configuration(appConfig);
configEN.setLocale(new Locale("en", "GB"));
Context contextEN = appContext.createConfigurationContext(configEN);
Log.d("LanguageUtilsTest", "EN -> " + contextEN.getString(R.string.cancel));
Configuration configRU = new Configuration(appConfig);
configRU.setLocale(new Locale("ru", "RU"));
Context contextRU = appContext.createConfigurationContext(configRU);
Log.d("LanguageUtilsTest", "RU -> " + contextRU.getString(R.string.cancel));
}
If my system language list looks like this: https://imgur.com/a/chXjhvv
The results are:
DEFAULT -> Anuluj
EN -> Cancel
RU -> Отменить
If I remove all languages except Polish (Polski).
The results are:
DEFAULT -> Anuluj
EN -> Anuluj
RU -> Anuluj
I've tried adding resConfigs "en", "ru" in my gradle but it didn't help.
Any advice?

jAudiotagger - How to create custom TXXX tags

I want to create/add a custom ID3 tag to an MP3 (ID3v2.3 or ID3v2.4). There is a TXXX frame for this purpose, but I don't know how to create it using the jAudiotagger library.
I just found out myself; the following code is not properly tested / not clean, but it does the task:
/**
* This will write a custom ID3 tag (TXXX).
* This works only with MP3 files (FLAC with ID3 tag not tested).
* @param description The description of the custom tag, e.g. "catalognr".
* There can only be one custom TXXX tag with that description in one MP3 file.
* @param text The actual text to be written into the new tag field
* @return True if the tag has been properly written, false otherwise
*/
public boolean setCustomTag(AudioFile audioFile, String description, String text){
FrameBodyTXXX txxxBody = new FrameBodyTXXX();
txxxBody.setDescription(description);
txxxBody.setText(text);
// Get the tag from the audio file
// If there is no ID3Tag create an ID3v2.3 tag
Tag tag = audioFile.getTagOrCreateAndSetDefault();
// If there is only a ID3v1 tag, copy data into new ID3v2.3 tag
if(!(tag instanceof ID3v23Tag || tag instanceof ID3v24Tag)){
Tag newTagV23 = null;
if(tag instanceof ID3v1Tag){
newTagV23 = new ID3v23Tag((ID3v1Tag)audioFile.getTag()); // Copy old tag data
}
if(tag instanceof ID3v22Tag){
newTagV23 = new ID3v23Tag((ID3v22Tag)audioFile.getTag()); // Copy old tag data
}
audioFile.setTag(newTagV23);
}
AbstractID3v2Frame frame = null;
if(tag instanceof ID3v23Tag){
frame = new ID3v23Frame("TXXX");
}
else if(tag instanceof ID3v24Tag){
frame = new ID3v24Frame("TXXX");
}
frame.setBody(txxxBody);
try {
tag.addField(frame);
} catch (FieldDataInvalidException e) {
e.printStackTrace();
return false;
}
try {
audioFile.commit();
} catch (CannotWriteException e) {
e.printStackTrace();
return false;
}
return true;
}
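A hypothetical call site (the file path, description and value below are placeholders) could look like this:
public void tagExample() throws Exception {
    // jAudiotagger reads the file and its existing tags
    AudioFile audioFile = AudioFileIO.read(new File("/path/to/song.mp3"));
    boolean written = setCustomTag(audioFile, "catalognr", "CAT-12345");
    System.out.println("TXXX tag written: " + written);
}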

Database backed i18n for java web-app

I'd like to use a database to store i18n key/value pairs so we can modify / reload the i18n data at runtime. Has anyone done this? Or does anyone have an idea of how to implement this? I've read several threads on this, but I haven't seen a workable solution.
I'm specifically referring to something that would work with the JSTL tags such as
<fmt:setLocale>
<fmt:bundle>
<fmt:setBundle>
<fmt:message>
I think this will involve extending ResourceBundle, but when I tried this I ran into problems that had to do with the way the jstl tags get the resource bundle.
I finally got this working with danb's help above.
This is my resource bundle class and resource bundle control class.
I used this code from danb's answer:
ResourceBundle bundle = ResourceBundle.getBundle("AwesomeBundle", locale, DbResourceBundle.getMyControl());
javax.servlet.jsp.jstl.core.Config.set(actionBeanContext.getRequest(), Config.FMT_LOCALIZATION_CONTEXT, new LocalizationContext(bundle, locale));
and wrote this class.
public class DbResourceBundle extends ResourceBundle
{
private Properties properties;
public DbResourceBundle(Properties inProperties)
{
properties = inProperties;
}
@Override
@SuppressWarnings(value = { "unchecked" })
public Enumeration<String> getKeys()
{
return properties != null ? ((Enumeration<String>) properties.propertyNames()) : null;
}
@Override
protected Object handleGetObject(String key)
{
return properties.getProperty(key);
}
public static ResourceBundle.Control getMyControl()
{
return new ResourceBundle.Control()
{
@Override
public List<String> getFormats(String baseName)
{
if (baseName == null)
{
throw new NullPointerException();
}
return Arrays.asList("db");
}
@Override
public ResourceBundle newBundle(String baseName, Locale locale, String format, ClassLoader loader, boolean reload) throws IllegalAccessException,
InstantiationException, IOException
{
if ((baseName == null) || (locale == null) || (format == null) || (loader == null))
throw new NullPointerException();
ResourceBundle bundle = null;
if (format.equals("db"))
{
Properties p = new Properties();
DataSource ds = (DataSource) ContextFactory.getApplicationContext().getBean("clinicalDataSource");
Connection con = null;
Statement s = null;
ResultSet rs = null;
try
{
con = ds.getConnection();
StringBuilder query = new StringBuilder();
query.append("select label, value from i18n where bundle='" + StringEscapeUtils.escapeSql(baseName) + "' ");
if (locale != null)
{
if (StringUtils.isNotBlank(locale.getCountry()))
{
query.append("and country='" + escapeSql(locale.getCountry()) + "' ");
}
if (StringUtils.isNotBlank(locale.getLanguage()))
{
query.append("and language='" + escapeSql(locale.getLanguage()) + "' ");
}
if (StringUtils.isNotBlank(locale.getVariant()))
{
query.append("and variant='" + escapeSql(locale.getVariant()) + "' ");
}
}
s = con.createStatement();
rs = s.executeQuery(query.toString());
while (rs.next())
{
p.setProperty(rs.getString(1), rs.getString(2));
}
}
catch (Exception e)
{
e.printStackTrace();
throw new RuntimeException("Can not build properties: " + e);
}
finally
{
DbUtils.closeQuietly(con, s, rs);
}
bundle = new DbResourceBundle(p);
}
return bundle;
}
@Override
public long getTimeToLive(String baseName, Locale locale)
{
return 1000 * 60 * 30;
}
@Override
public boolean needsReload(String baseName, Locale locale, String format, ClassLoader loader, ResourceBundle bundle, long loadTime)
{
return true;
}
};
}
}
Are you just asking how to store UTF-8/16 characters in a DB? In MySQL it's just a matter of making sure you build with UTF-8 support and setting that as the default, or specifying it at the column or table level. I've done this in Oracle and MySQL before. Create a table, cut and paste some i18n data into it, and see what happens... you might be set already.
Or am I completely missing your point?
edit:
To be more explicit... I usually implement a three-column table: language, key, value, where "value" contains potentially foreign-language words or phrases, "language" contains some language key, and "key" is an English key (e.g. login.error.password.dup). language and key are indexed.
I've then built interfaces on a structure like this that show each key with all of its translations (values). It can get fancy and include audit trails and "dirty" markers and all the other stuff you need to enable translators and data-entry folk to make use of it.
Edit 2:
Now that you've added the info about the JSTL tags, I understand a bit more... I've never done that myself, but I found this old info on TheServerSide...
HttpSession session = .. [get hold of the session]
ResourceBundle bundle = new PropertyResourceBundle(toInputStream(myOwnProperties)) [toInputStream just stores the properties into an inputstream]
Locale locale = .. [get hold of the locale]
javax.servlet.jsp.jstl.core.Config.set(session, Config.FMT_LOCALIZATION_CONTEXT, new LocalizationContext(bundle ,locale));
We have a database table with key/language/term where key is an integer and forms a combined primary key together with language.
We are using Struts, so we ended up writing our own PropertyMessageResources implementation, which allows us to do something like <bean:message key="impressum.text" />.
It works very well and gives us the flexibility to dynamically switch languages in the front end as well as update the translations on the fly.
Actually, what ScArcher2 needed is david's response, which is not marked as correct or helpful.
The solution ScArcher2 chose to use is, in my opinion, a terrible mistake :) Loading ALL the translations at once will kill any bigger application, loading thousands of translations on each request...
david's method is more commonly used in real production environments.
Sometimes, to limit DB calls (there is one for every message translated), you can create groups of translations by topic, functionality etc. and preload them. But this is a little more complex and can be substituted with a good cache system.
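As an illustration of that last point, a minimal per-bundle/locale cache (class and method names here are my own, not from the answers above) could look like this:
import java.util.Locale;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TranslationCache {
    private final ConcurrentMap<String, Properties> cache = new ConcurrentHashMap<>();

    public String get(String bundle, Locale locale, String key) {
        String cacheKey = bundle + "_" + locale;
        // load the whole translation group once, then serve lookups from memory
        Properties group = cache.computeIfAbsent(cacheKey, k -> loadFromDatabase(bundle, locale));
        return group.getProperty(key);
    }

    /** Call this after translators update the i18n table to force a reload. */
    public void invalidate() {
        cache.clear();
    }

    private Properties loadFromDatabase(String bundle, Locale locale) {
        Properties props = new Properties();
        // query the i18n table (see the DbResourceBundle example above) and fill
        // 'props' with the key/value pairs for this bundle and locale
        return props;
    }
}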
