Even after spending a good amount of time on it, I am unable to understand the purpose of annotation processing.
I understand why annotations are needed at run time; the simplest examples I can think of are:
Replacement of marker interfaces.
Replacement of marker properties of a type (e.g. transient).
In general, anything useful that can be done at runtime.
But unfortunately, I could not find any practical example/reason for using annotations at compile time (except for the default annotations provided by the JDK, e.g. @Override).
I could not understand what the purpose/need of 'generating code' using annotation processors is.
Edit: Javadoc/custom Javadoc is one utility I can think of as a purpose of using annotation processors.
This can be used for all sorts of things.
Two simple examples:
The Lombok project. Tired of writing thousands of getters and setters? Why not let an annotation processor do it at compile time.
AOP. You can use something like AspectJ to weave in code depending on annotations. This is done post-compile, but as part of the compilation process. For example, Spring AOP uses the @Transactional annotation in combination with AspectJ to weave transaction code around methods marked with the annotation.
There are many other uses, but they generally break down into two categories:
To reduce boilerplate code (a minimal processor sketch for this category follows below).
For cross-cutting concerns.
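To make the first category concrete, here is a minimal sketch of a source-generating processor built on the standard javax.annotation.processing API. The @GenerateCompanion annotation and the com.example package are hypothetical, invented just for this illustration:

    import java.io.IOException;
    import java.io.Writer;
    import java.util.Set;

    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;
    import javax.tools.JavaFileObject;

    // Generates a trivial companion class for each type annotated with the
    // hypothetical @GenerateCompanion annotation. The processor is picked up
    // by javac when listed in
    // META-INF/services/javax.annotation.processing.Processor.
    @SupportedAnnotationTypes("com.example.GenerateCompanion")
    @SupportedSourceVersion(SourceVersion.RELEASE_8)
    public class CompanionProcessor extends AbstractProcessor {

        @Override
        public boolean process(Set<? extends TypeElement> annotations,
                               RoundEnvironment roundEnv) {
            for (TypeElement annotation : annotations) {
                for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                    String name = e.getSimpleName() + "Companion";
                    try {
                        // Annotation processors may only ADD new sources;
                        // they never modify the class being compiled.
                        JavaFileObject file = processingEnv.getFiler()
                                .createSourceFile("com.example." + name, e);
                        try (Writer w = file.openWriter()) {
                            w.write("package com.example;\n"
                                  + "public class " + name + " { /* generated */ }\n");
                        }
                    } catch (IOException ex) {
                        processingEnv.getMessager().printMessage(
                                Diagnostic.Kind.ERROR, ex.getMessage(), e);
                    }
                }
            }
            return true; // claim the annotation; no other processor needs it
        }
    }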
There are two main purposes of the annotation processing environment: analysis and code generation.
Analysis permits you to extend the capabilities of the Java compiler, analyzing program elements as they are being compiled, possibly adding additional constraints and validations, and reporting errors and warnings for violations of those constraints.
Code generation permits you to generate additional supplementary code from signals in your existing hand-written code, primarily (though not exclusively) keyed off of Annotations.
Some examples include Dagger, a system for compile-time-analyzed dependency injection that reports, during compilation, the errors and warnings normally only discovered at run time. Dagger also generates all the code that would otherwise be done with reflection or hand-written glue code, providing substantial performance benefits (in some cases) as well as infrastructure code that is available for step-through debugging, etc.
Another example is the Checker Framework, which performs a variety of checks against your code, including null-safety checks.
A third example is AutoValue, intended to make small value types nearly trivial to write.
One thing the annotation processing environment is decidedly not suited for is mutation of existing code in place, or modification of code currently under compilation. While some projects do this, they are not actually using the annotation processor APIs but casting to internal compiler types to do so. While this is clearly possible, it's potentially brittle, and may not work reliably from version to version, or compiler to compiler, requiring custom handling for each version and compiler vendor.
Related
I was browsing the source code of the Lombok project, as I'm learning about annotations and AOP in general, and I thought it would be a good example to draw inspiration from.
However, I don't understand: AllArgsConstructor only defines the annotation - that part I get from what I've grasped so far - but where is the code that actually adds the constructor? And likewise for all the other annotations.
Let me first note that if you want to learn about annotation processing, Lombok is not a good example to start with. Lombok is not a regular annotation processor (which only adds new source files to the compilation). Instead, it modifies existing Java sources. That is not what annotation processors typically do, and it's not something the annotation processing in javac was designed for. Lombok uses the API of javac to modify and enrich an abstract syntax tree. That makes it complex and difficult to understand.
To answer your question, the logic that generates the code for Lombok annotations is located in so-called handlers. In your case, it's the HandleConstructor classes (there are two of them: one for javac, one for the Eclipse compiler).
The Java 8 type annotations (JSR 308) allow type checkers to perform static code analysis. For example, the Checker Framework can check for possible nullness via @NonNull annotations.
Various projects define their own NonNull annotations, for example:
org.checkerframework.checker.nullness.qual.NonNull
edu.umd.cs.findbugs.annotations.NonNull
javax.annotation.Nonnull
javax.validation.constraints.NotNull
lombok.NonNull
org.eclipse.jdt.annotation.NonNull
etc. (see The Checker Framework Manual, section 3.7)
For such annotations, I would expect the @interface to have @Retention(RetentionPolicy.CLASS), because they are usually not needed at runtime. Most importantly, the code then does not have any runtime dependencies on the respective library.
While org.eclipse.jdt.annotation.NonNull follows this approach, most other NonNull annotations, like javax.annotation.Nonnull (JSR 305) and org.checkerframework.checker.nullness.qual.NonNull itself, have @Retention(RetentionPolicy.RUNTIME). Is there any particular reason for RetentionPolicy.RUNTIME in these annotations?
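For reference, the kind of declaration I would expect looks like this (a sketch, not the source of any particular library):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Class retention: recorded in the .class file (so separately compiled
    // libraries still carry it), but not readable via reflection at run time.
    @Retention(RetentionPolicy.CLASS)
    @Target(ElementType.TYPE_USE)
    public @interface NonNull {}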
Clarification: The Checker Framework supports annotations in comments for backward compatibility. However, using those in Java 8 just to avoid runtime dependencies seems like a dirty hack.
This is a good question.
For the purpose of static checking at compile time, CLASS retention would be sufficient. Note that SOURCE retention would not be sufficient, because of separate compilation: when type-checking a class, the compiler needs to read the annotations on libraries that it uses, and separately-compiled libraries are available to the compiler only as class files.
The annotation designers used RUNTIME retention to permit tools to perform run-time operations. This could include checking the annotations (like an assert statement), type-checking of dynamically-loaded code, checking of casts and instanceof operations, resolving reflection more precisely, and more. Not many such tools exist today, but the annotation designers wanted to accommodate them in the future.
You remarked that with @Retention(RetentionPolicy.CLASS), "the code does not have any runtime dependencies on the respective library." This is actually true with @Retention(RetentionPolicy.RUNTIME), too! See this Stack Overflow question:
Why doesn't a missing annotation cause a ClassNotFoundException at runtime?
In summary, using RUNTIME retention costs a negligible amount of space at run time, enables more potential uses in the future, and does not introduce a run-time dependency.
In the case of the Checker Framework, it offers run-time tests such as isRegex(String). If your code uses such methods, it will depend on the Checker Framework runtime library (which is smaller than the entire Checker Framework and has a more permissive license).
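As an illustration of such a run-time test, here is a hedged sketch; the exact package of RegexUtil has varied between Checker Framework releases, so treat the import as an assumption:

    import java.util.regex.Pattern;

    // NOTE: the package of RegexUtil has moved between Checker Framework
    // releases; adjust the import to match your version.
    import org.checkerframework.checker.regex.util.RegexUtil;

    public class RegexCheck {
        static Pattern compileUserSupplied(String s) {
            // isRegex is a run-time test that the Regex Checker also
            // understands statically: inside this branch, s is known
            // to be a valid regular expression.
            if (RegexUtil.isRegex(s)) {
                return Pattern.compile(s);
            }
            throw new IllegalArgumentException("Not a valid regex: " + s);
        }
    }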
Each annotation has its purpose!
javax.validation.constraints.NotNull
This one is defined by the Bean Validation specification and is used to perform non-null checks at run time, so it needs to be retained at runtime in order to perform, for example, form validation.
RetentionPolicy.SOURCE => usually used for documentation.
RetentionPolicy.CLASS => allows giving information to the compiler but not to the JVM (for example, to perform code generation during compilation).
RetentionPolicy.RUNTIME => allows retrieving annotation information at the JVM level (i.e. at run time).
I'm having a hard time understanding the importance and benefits of annotations, and so I have two questions regarding them:
What are the benefits of annotations as compared to XML configuration?
How do annotations work internally?
Is it fair to say that annotations bind the application tightly, whereas with XML configuration the application is loosely coupled?
I would appreciate a pros-and-cons comparison with XML configuration, with an example, as that would make it much easier for me to understand.
For your 1st question,
XML configuration versus annotation-based configuration
Personally, I feel there are two criteria:
Can annotations simplify the metadata?
If annotations do not reduce the amount of metadata that you have to provide (in most cases they do), then you shouldn't use annotations.
Can changes to the metadata break behavior in your application?
If not, then you can feel comfortable applying the change while the system is running in production. External config files are the best place for the metadata in this case, because you don't want to have to recompile your code to make the change. A schematic comparison of the two styles follows below.
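To make the trade-off concrete, here is a schematic, Spring-flavored comparison; OrderService and OrderRepository are hypothetical names used only for illustration:

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;

    // Annotation-based configuration: the wiring metadata lives in the
    // same file as the code, so changing it requires recompilation.
    @Service
    public class OrderService {
        private final OrderRepository repository; // hypothetical collaborator

        @Autowired
        public OrderService(OrderRepository repository) {
            this.repository = repository;
        }
    }

    /* The equivalent XML keeps the same metadata in an external file
       that can be edited without recompiling:

       <bean id="orderService" class="com.example.OrderService">
           <constructor-arg ref="orderRepository"/>
       </bean>
    */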
For your 2nd question,
How Do Annotations Work?
Important link:
What are annotations and how do they actually work for frameworks like Spring?
Both annotations and XML descriptors are used to describe metadata on top of regular code. The primary difference is that in the case of annotations you only have to deal with one file, which contains both code and metadata. This is also the big advantage of annotations, as it reduces the number of moving parts and increases productivity.
On the other hand, the drawback of annotations is that they bind together the code and the system or framework that operates using those annotations. That makes it harder to separate those in future.
For example, if you use Hibernate Annotations, you bind your model objects to Hibernate. If you choose to switch to a different framework, you will have to rip the Hibernate annotations out of the code.
But practically, it's not that likely that you will be changing frameworks often, and there are usually many other reasons why changing frameworks on an existing code base is hard. So annotations are often a good choice.
As to how they work, annotations are a part of the language and are processed by compiler and other tools and, depending on retention, can be included in produced bytecode for use at runtime. Ultimately, it's up to consumer to decide on how to use annotations.
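To illustrate the Hibernate example above, here is a hedged sketch of an entity mapped with JPA annotations (which Hibernate implements); Customer and its columns are invented for the illustration:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;

    // The mapping metadata is compiled into the class itself; moving to a
    // different persistence framework means editing every annotated entity.
    @Entity
    public class Customer {

        @Id
        private Long id;

        @Column(name = "full_name", nullable = false)
        private String name;

        // getters and setters omitted
    }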
To answer the first question, IMO the greatest benefit is the potential for compiler integration. I can write an annotation processor that can validate some semantics related to the application of the annotation. That kind of compile-time checking is not possible (or would at least be way more difficult) if the same information was instead part of an XML document.
To answer the second question, annotations don't really "work" internally, per se, in the sense that they don't have any inherent execution semantics. They are source-level entities that may or may not be retained in the class file. They can be processed during compilation of the source and, if retained in the class file, can be accessed via reflection.
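A minimal sketch of that reflective access; the Audited annotation is hypothetical:

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.reflect.Method;

    public class ReflectionDemo {

        // RUNTIME retention is what makes the annotation visible below;
        // with SOURCE or CLASS retention, getAnnotation would return null.
        @Retention(RetentionPolicy.RUNTIME)
        @interface Audited {
            String value() default "";
        }

        @Audited("payments")
        public void charge() {}

        public static void main(String[] args) throws Exception {
            Method m = ReflectionDemo.class.getMethod("charge");
            Audited a = m.getAnnotation(Audited.class);
            System.out.println(a == null ? "not retained" : a.value()); // prints "payments"
        }
    }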
Is it possible to create preprocessor-like functionality, as is available in C and provided by Antenna? Can we use the APT tool to achieve this? Are there any articles or links on similar topics?
Annotations are not meant as a tool to transform code; they just add metadata to code. You can't use annotations for conditional compilation, for example.
As Sun's tutorial on annotations says:
Annotations provide data about a program that is not part of the program itself. They have no direct effect on the operation of the code they annotate.
Wikipedia says:
When Java source code is compiled, annotations can be processed by compiler plug-ins called annotation processors. Processors can produce informational messages or create additional Java source files or resources, which in turn may be compiled and processed, but processors cannot modify the annotated code itself.
So an annotation processor plug-in is not going to be able to give you all of the functionality that the C preprocessor has.
You can perform compile-time tasks using the annotation processing framework. It's not as powerful as a preprocessor, since you can't do things like:

    @RunOnlyOn(OS.Mac) public void someMethod() { ... }
Some good use cases for annotation processors are:
creating mapping files from annotated classes, e.g. create a Hibernate mapping file;
creating indexes of classes which have a certain annotation, e.g. create TestNG XML files from a source folder of test classes;
enforcing compile-time constraints not usually available, e.g. requiring a no-arg constructor (a sketch of this one follows below).
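A hedged sketch of that last use case; the @RequiresNoArgConstructor annotation and the com.example package are hypothetical:

    import java.util.Set;

    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.lang.model.util.ElementFilter;
    import javax.tools.Diagnostic;

    // Fails the build when a class annotated with the hypothetical
    // @RequiresNoArgConstructor annotation lacks a zero-argument constructor.
    @SupportedAnnotationTypes("com.example.RequiresNoArgConstructor")
    @SupportedSourceVersion(SourceVersion.RELEASE_8)
    public class NoArgConstructorProcessor extends AbstractProcessor {

        @Override
        public boolean process(Set<? extends TypeElement> annotations,
                               RoundEnvironment roundEnv) {
            for (TypeElement annotation : annotations) {
                for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                    boolean hasNoArg = ElementFilter
                            .constructorsIn(e.getEnclosedElements())
                            .stream()
                            .anyMatch(c -> c.getParameters().isEmpty());
                    if (!hasNoArg) {
                        // Raising an ERROR makes the compilation fail.
                        processingEnv.getMessager().printMessage(
                                Diagnostic.Kind.ERROR,
                                e.getSimpleName() + " must declare a no-arg constructor",
                                e);
                    }
                }
            }
            return true;
        }
    }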
Please note that as of Java 6 APT is no longer needed, since all properly declared annotation processors take part in the compilation.
I'm trying to write rules for detecting some errors in annotated multi-threaded Java programs. As a toy example, I'd like to detect if any method annotated with @ThreadSafe calls a method without such an annotation, without synchronization. I'm looking for a tool that would allow me to write such a test.
I've looked at source analyzers like Checkstyle and PMD, and they don't really have cross-class analysis capabilities. Bytecode analyzers like FindBugs and JLint seem rather difficult to extend.
I'd settle for a solution to something even simpler, but posing the same difficulty: writing a custom rule that checks whether each overridden method is annotated with @Override.
Have you tried FindBugs? It actually supports a set of annotations for thread safety (the same as those used in Java Concurrency in Practice). Also, you can write your own custom rules. I'm not sure whether you can do cross-class analysis, but I believe so.
Peter Veentjer has a concurrency checking tool (that uses ASM) to detect stuff like this. I'm not sure if he's released it publicly, but he might be able to help you.
And I believe Coverity's static/dynamic analysis tools for thread safety do checking like this.
You can do cross-class analysis in PMD (though I've never used it for this specific purpose). I think it's possible using this visitor pattern that they document, though I'll leave the specifics to you.
A simple tool for checking up on annotations is apt (http://java.sun.com/j2se/1.5.0/docs/guide/apt/, also part of the Java 6 API in javax.annotation.processing). However, it only has type information (i.e. I couldn't find a quick way to get at the inheritance hierarchy using the javax.lang.model API; however, if you can load the class, you can get that information using reflection).
Try javap + regexes (e.g. Perl).