So I have an annotation that, although it can be declared multiple times, generally needs to access the same properties file. Currently I am using a static registry in my annotation processor to track whether the file has been created, and to store the writer for the file. So far this works fine, but my problem is figuring out when to save the file. Currently I am calling roundEnv.processingOver() at the end of my annotation processor, and if it returns true I save the file. The problem I am having is that, if source files are generated and the compiler goes into a new round of processing that does not contain any of my annotations, the file will not be saved. I was thinking about adding an entirely new processor that supports every annotation but claims none, and making the check there. I just feel like this is a slightly clunky option, especially since the spec indicates a processor should only support annotations it will actually process. Is there some better way of doing this, or should I just go with the additional-processor model?
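For reference, a minimal sketch of the setup described above; MyAnnotation, MyProcessor and SharedRegistry are hypothetical names, and processingOver() is the RoundEnvironment method in question:

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;

// Sketch only: accumulates entries in a shared static registry across rounds
// and flushes the properties file once the final round is reached.
@SupportedAnnotationTypes("com.example.MyAnnotation")
public class MyProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (Element e : roundEnv.getElementsAnnotatedWith(MyAnnotation.class)) {
            SharedRegistry.record(e);                 // gather entries for the shared file
        }
        if (roundEnv.processingOver()) {              // true only in the final round
            SharedRegistry.writeProperties(processingEnv.getFiler());
        }
        return false;                                 // don't claim the annotation exclusively
    }
}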
I'm in the middle of a massive refactoring project; the code has a 5,000-line main class which was injected into everything, stored everything, and held all of the common code.
I'm no expert on analysis and design but I've separated out things to the best of my ability and I'm about 80% through refactoring the classes that depend on the main class to use the new classes I've created.
There are some types of data which are initialised when the application starts and accessed by pretty much everything throughout the life of the application. For instance there is a Config class which holds hundreds of parameters.
The approach I've taken is to create several singletons; the two most central are GUIData and ClientData. GUIData contains a reference to the main frame of the application, and ClientData maintains references to the Config and other similar classes.
This allows me to call ClientData.getInstance().getConfig().getParam("param") from anywhere in the code, but I don't feel like this is the best approach.
I considered individual static classes instead of these data singletons that hold instances of the classes, but some of the classes do need constructors.
I've been googling on and off for a week trying to find a better way to do this, but somehow I always end up in threads talking about database caching.
Immutable (configuration) instances provide "thread-safe application-wide data access".
Typesafe's config (as suggested in a comment by Brian Kent) does exactly that.
Note that this does not involve static classes or singletons. Static classes and singletons may serve your purposes now, but they could prove bothersome in the future. They can be handy of course, but try to limit their use.
Initialization will have to be done after reading and parsing the configuration data. It is typically done at application startup, before other processing threads are started. The initialization will have to validate the configuration data as much as possible in order to fail fast and terminate the program if the configuration data is no good.
Having a lot of configuration data bundled together can create "hidden lines of communication". E.g. you update one value and the application fails because it required updates to other values as well. It's perfectly fine to put all configuration data in one file and load it from there, but your application (with hundreds of configuration options) should divide the configuration data into sets that are used by different parts of your application. This improves isolation, helps unit testing and makes it possible to change the application in the future without getting too many nasty surprises.
There are two ways to use a set of configuration data:
from within an object call a singleton Settings.getInstance().getConfigForThisModule().
provide each object that uses configuration data with the configuration data via the constructor or via setConfig(ConfigForThisModule config).
The first approach depends on a convention not to call Settings.getInstance().getConfigForACompletelyUnrelatedModule() which could be a weakness. The second approach is more in line with "dependency injection" and could be more future proof.
You could mix both approaches while you are refactoring, just make sure to be consistent (e.g. only use the singleton approach for configuration data that is used in all parts of the application).
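As a rough sketch of the second (constructor-injection) approach - the ReportConfig and ReportGenerator names and the getConfigForReports() getter are placeholders, not part of any library:

// Each object receives only the configuration set it actually uses.
public class ReportGenerator {
    private final ReportConfig config;

    public ReportGenerator(ReportConfig config) {   // injected via the constructor
        this.config = config;
    }

    public void run() {
        int pageSize = config.getPageSize();        // no Settings.getInstance() lookup here
        // ...
    }
}

// Wiring, typically done once at startup:
// new ReportGenerator(Settings.getInstance().getConfigForReports()).run();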
To further improve your design for using the configuration data, keep the following (likely) future functional requirement in mind: when the configuration file is updated, configuration data is reloaded and used in the application. Most logging frameworks manage to support this functional requirement without affecting the performance of multi-threaded applications. Among other things, it requires the following of your application:
if the new configuration data is no good, the program is not terminated; instead an error is logged and the old configuration data remains in use. Your initialization procedure will need to handle both the "load at fresh start" and "reload" scenarios. The main thing to take away from this is that your initialization procedure needs to be reusable and should not affect other (running) parts of your application (isolation, again).
long-lived objects may not keep a local copy of the configuration data or a reference to an instance of ConfigForThisModule; instead Settings.getInstance()... (or some other method that can return an updated instance) should be called regularly.
replacing old configuration with new configuration may not result in errors. Technically, replacing the configuration is as simple as updating an AtomicReference with a new configuration instance returned by Settings.getInstance().... But this is also where the isolation of the configuration data sets is tested: there should be no problem using an old set in one module and a new set in another module at the same time.
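A minimal sketch of such a reload-friendly Settings holder, assuming your own Config class with a hypothetical isValid() check:

import java.util.concurrent.atomic.AtomicReference;

// Sketch only: Config is your own parsed configuration class.
public final class Settings {
    private static final Settings INSTANCE = new Settings();
    private final AtomicReference<Config> current = new AtomicReference<>();

    public static Settings getInstance() { return INSTANCE; }

    public Config getConfig() {             // call this regularly instead of caching the result
        return current.get();
    }

    public void reload(Config candidate) {  // used both at startup and when the file changes
        if (!candidate.isValid()) {         // hypothetical validation hook
            // log an error and keep the old configuration in use
            return;
        }
        current.set(candidate);             // atomic swap; readers pick it up on their next getConfig()
    }
}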
Configuration data can be seen as a sort of "global state". With that in mind, further design points on what to do and what to avoid (partially blatantly copied to this answer) are discussed in the following two questions:
Why is Global State so Evil?
How are globals any different from a database?
Sorry, the question is a bit vague: are you looking to store the config, or the cached objects used by other parts of your program?
Since you have hundreds of parameters, start by splitting up the config into manageable blocks:
1) Split up your configuration parameters into logical blocks that have a 1:1 correspondence with a simple properties file - it's going to take some time.
2) These properties files must be externalized so that you can change them at any point in time; make sure you pass the base location to the program via an environment variable.
3) Write a utility class (singleton) that wraps Apache Commons Configuration to hold your config (read *.properties from the base location and merge the properties into one configuration object). This must be done before any threads are kicked off.
4) Refer to configuration parameters in your code using config.getXXXX() methods.
Apache Commons Configuration also has the ability to reload the config when your properties files change on the filesystem.
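A rough sketch of such a singleton wrapper using Apache Commons Configuration 1.x; the CONFIG_BASE environment variable and the one-file-per-block layout are assumptions, not fixed APIs:

import java.io.File;
import org.apache.commons.configuration.CompositeConfiguration;
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;

// Sketch only: merges every *.properties file found under the externalized base
// location into one configuration object, with reload-on-change enabled.
public final class AppConfig {
    private static final AppConfig INSTANCE = new AppConfig();
    private final CompositeConfiguration config = new CompositeConfiguration();

    private AppConfig() {
        File base = new File(System.getenv("CONFIG_BASE"));   // assumes the directory exists
        try {
            for (File f : base.listFiles((dir, name) -> name.endsWith(".properties"))) {
                PropertiesConfiguration pc = new PropertiesConfiguration(f);
                pc.setReloadingStrategy(new FileChangedReloadingStrategy()); // reload when the file changes
                config.addConfiguration(pc);
            }
        } catch (ConfigurationException e) {
            throw new IllegalStateException("Bad configuration", e);        // fail fast at startup
        }
    }

    public static AppConfig getInstance() { return INSTANCE; }

    public String getString(String key) { return config.getString(key); }
    public int getInt(String key)       { return config.getInt(key); }
}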
Once this is done, use a DI container like Spring or Guice to cache the configured objects.
If it's just String property values you need, you don't even need a class for that - a global facility exists for you already: System.getProperties()
All you need do is first load the property values on start up:
System.setProperty("myKey", "myValue"); // see below how to load properties from a file
Then read it anywhere in your code:
String myValue = System.getProperty("myKey");
or
String myValue = System.getProperty("myKey", "my desired default");
If your container doesn't support property loading out of the box, then to load properties from an external file that looks like this:
key1=value
key2=some other value
etc...
you can use this code:
Files.lines(Paths.get("path/to/file"))
     .filter(line -> !line.startsWith("#") && line.contains("="))       // skip comments and blank lines
     .map(line -> line.split("=", 2))                                   // split into key/value
     .forEach(kv -> System.setProperty(kv[0].trim(), kv[1].trim()));    // load as a system property
You can use the Java Properties utility class; it is basically a Hashtable.
reference : https://docs.oracle.com/javase/7/docs/api/java/util/Properties.html
You create a file fileName.properties and store your data in key-value pairs, for example:
username=your name
port=8080
Then you load it into a Properties object and read the data like this:
Properties prop = new Properties();
try (InputStream in = new FileInputStream("fileName.properties")) {
    prop.load(in); // load the file
}
String userName = prop.getProperty("username");
String port = prop.getProperty("port"); // you can parse it to an int if needed
What I suggest is to create a properties file for each type of configuration, for example:
clientData.properties
appConfig.properties
You can follow this simple tutorial:
http://www.mkyong.com/java/java-properties-file-examples/
There is a Java application and I have no permission to alter the Java code apart from annotating classes or methods (with custom or existing annotations).
Using annotations and annotations only, I have to invoke code, which means that every time an instance of an annotated class is created or an annotated method is called, some extra Java code must be executed (e.g. a call to a REST web service). So my question is: how can I do this?
To preempt answers I have already checked, I will give some solutions that seem to work but are not satisfactory:
Aspect-oriented programming (e.g. AspectJ) can do this (execute code before and after the call of an annotated method; see the sketch after this list), but I don't really want the runtime overhead.
Use the solution provided here, which actually uses reflection. This is exactly what I need, except that it alters the original code beyond just annotating it, so I cannot use it.
Use an annotation processor for source code generation, as suggested here by the last answer. However, this still means altering the source code, which I don't want.
What I would really like is a way to simply include a Java file that will somehow execute some Java lines every time the annotated element is triggered.
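For completeness, the AOP option mentioned above would look roughly like this; the @Tracked annotation and the aspect itself are hypothetical examples, not existing code:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// Sketch only: runs extra code before every execution of a method annotated with @Tracked.
@Aspect
public class TrackedCallAspect {

    @Before("execution(@com.example.Tracked * *(..))")
    public void beforeTracked(JoinPoint jp) {
        // e.g. call the REST web service here
        System.out.println("Entering " + jp.getSignature());
    }
}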
Why not skip annotations completely and use Byteman to inject code at runtime into the entry points of your code?
I have to agree with the comment above though, that this sort of restriction is ridiculous and should be challenged.
Someone thought it would be a good idea to store Objects in the database in a blob column using Java's default serialization methods.
The structure of these objects is controlled by another group and they changed a field type from BigDecimal to a Long,
but the data in our database remains the same.
Now we can't read the objects back because it causes ClassCastExceptions.
I tried to override it by writing my own readObject method, but that throws a StreamCorruptedException because it doesn't match what the default writeObject method wrote.
How do I make my readObject call behave like Java's default one?
Is there a certain number of bytes I can skip to get to my data?
Externalizable allows you to take full control of serialization/deserialization, but it means you're responsible for writing and reading every field.
Where it gets difficult, though, is when something was written out using the default serialization and you want to read it via Externalizable. (Or rather, it's impossible: if you try to read an object serialized with the default method using Externalizable, it'll just throw an exception.)
If you've got absolutely no control over the output, your only option is to keep two versions of the class: use the default deserialization for the old version, then convert to the new one. The upside of this solution is that it keeps the "dirty" code in one place, separate from your nice and clean objects.
Again, unless you want to make things really complicated, your best option is to keep the old class as the "transport" bean and rename the class your code really uses to something else.
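A rough sketch of that two-class approach; LegacyRecord (the class exactly as it was when the blobs were written, with the BigDecimal field) and Record (the class the rest of the code uses) are placeholder names:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

// Sketch only: deserialize with the old "transport" class, then convert.
public final class BlobMigration {

    public static Record read(byte[] blob) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(blob))) {
            LegacyRecord legacy = (LegacyRecord) in.readObject();  // old class, BigDecimal field
            return new Record(legacy.getAmount().longValue());     // new class, long field
        }
    }
}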
If you want to read what's already in your database your only option is to get them to change the class back again, and to institute some awareness that you're relying on the class definition as it was when the class was serialized. Merely implementing your own readObject() call can't fix this, and if the class is under someone else's control you can't do that anyway.
If you're prepared to throw away the existing data you have many other choices starting with custom Serialization, writeReplace()/readResolve(), Externalizable, ... or a different mechanism such as XML.
But if you're going to have third parties changing things whenever they feel like it you're always going to have problems of one kind or another.
BigDecimal to Long sounds like a retrograde step anyway.
Implement the readObject and readObjectNoData methods in your class.
Read the appropriate type using ObjectInputStream.readObject and convert it to the new type.
See the Serializable interface API for details.
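As a hedged sketch (not a definitive fix), such a readObject could use ObjectInputStream.readFields() to pull the raw value out of the stream and convert it; the field name "amount" is an example, and this assumes the declared serialVersionUID still matches the old streams:

// Inside the serialized class, where the field "amount" changed from BigDecimal to Long.
private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
    java.io.ObjectInputStream.GetField fields = in.readFields();  // read what the stream actually contains
    Object raw = fields.get("amount", null);
    if (raw instanceof java.math.BigDecimal) {
        this.amount = ((java.math.BigDecimal) raw).longValue();   // old streams: convert
    } else {
        this.amount = (Long) raw;                                 // new streams: use as-is
    }
}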
More Details
You can only fix this easily if you control the source of the class that was serialized into the blob.
If you do not control this class,
then you have only a few limited and difficult options:
Have the controlling party give you a version of the class that reads the old format and writes the new format.
Write your own form of serialization (as in, you read the blob and convert the bytes to classes) that can read the old format and generate new versions of the classes.
Write your own version of the class in question (removing the other from the classpath) which reads the old format and produces some intermediate form (perhaps JSON).
Next, you have to do one of these:
Convince the powers that be that the blob technique is shitty and should be done away with. Use the current class change as evidence. Almost any technique is better than this. Writing JSON to the blob in the DB is better.
Stop depending on shitty classes from other people ("shitty" is a judgement which I can only suspect, not know, to be true). Instead, create a suite of classes that represent the data in the database and convert from the externally controlled classes to the new data classes before writing to the database.
I see that the Configuration class in Hadoop is writable http://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html. However, I do not see any of the methods that it has exposed that can be used to add a writable object (I see a lot of methods to set and get primitive types like int, long). Let us say, I have my own writable object and I want to add it to the configuration for all my mappers and reduces to use, how do I do this?
Thanks,
Venkat
The configuration is really not for passing entire objects. The configuration should be used more for setting simple parameters that are needed for the setup of the mappers/reducers. Think of the conf as the place where you set variables at the beginning of the job. If you make changes to the configuration during the middle of a run, they most likely won't be there at the end, as it's not really meant to pass data dynamically.
What you are looking for, if you want to pass entire objects around between nodes, is the DistributedCache. Technically speaking these are files, but you can use standard object serialization to add them. About the Distributed Cache.
*Apologies for linking to different Hadoop versions; their pages are a bit muddled and it's sometimes hard to find what you need.
You can check the HBase sources (starting from HBase 0.94.6): the MultiTableInputFormat.setConf() method and the related TableMapReduceUtil code (for example initTableMapperJob()). They pass Scan objects through the configuration. The earlier TableInputFormat.setConf() uses very similar mechanics.
Usually only minimal attributes are passed through the config, but this case is probably closer to yours.
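If you want to go that route, one option (the helper below is my own sketch, not a Hadoop API) is to serialize the Writable yourself and store it in the Configuration as a Base64 string, which is roughly what TableMapReduceUtil does with Scan:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Base64;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;

// Sketch only: store and read a Writable in the job Configuration as a Base64 string.
public final class WritableConfUtil {

    public static void set(Configuration conf, String key, Writable value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        value.write(new DataOutputStream(bytes));
        conf.set(key, Base64.getEncoder().encodeToString(bytes.toByteArray()));
    }

    public static <T extends Writable> T get(Configuration conf, String key, T instance) throws IOException {
        byte[] bytes = Base64.getDecoder().decode(conf.get(key));
        instance.readFields(new DataInputStream(new ByteArrayInputStream(bytes)));
        return instance;   // the caller supplies an empty instance to fill in
    }
}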
Hope it will help.
Does anybody know of a way to convert xdoclet to annotations in an automated fashion? It seems to me that it should be possible to have equivalent annotations/annotation preprocessors for anything that xdoclet does but manually converting things is really tedious on large systems.
Offhand I don't know of anything that does this. However, it would be possible to write one: every Javadoc object (such as MethodDoc) provides the position() method, which gives the source position of the associated declaration. Read the entire source file into an ArrayList of lines, prepend the appropriate annotation to the associated line for each tag (you don't want to add new lines to the list, because that would throw off the counts), then write the file back out.
An interesting solution, but I suspect that it will be better over the long run to do it manually, one set of tags at a time.
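For what it's worth, a rough sketch of that idea using the old com.sun.javadoc doclet API; the xdoclet tag and the replacement annotation below are made-up examples:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import com.sun.javadoc.ClassDoc;
import com.sun.javadoc.MethodDoc;
import com.sun.javadoc.RootDoc;

// Sketch only: prepend an annotation on the declaration line of every method
// carrying a given xdoclet tag, without adding or removing lines.
public class XDocletMigrationDoclet {

    public static boolean start(RootDoc root) throws IOException {
        for (ClassDoc cls : root.classes()) {
            Path source = cls.position().file().toPath();
            List<String> lines = new ArrayList<>(Files.readAllLines(source));
            for (MethodDoc method : cls.methods()) {
                if (method.tags("ejb.interface-method").length > 0) {
                    int line = method.position().line();                         // 1-based
                    lines.set(line - 1, "@RemoteMethod " + lines.get(line - 1)); // prepend, keep the line count
                }
            }
            Files.write(source, lines);
        }
        return true;
    }
}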