Java annotation - how to get properties and class in processor

For the following custom Java annotation:
@CustomAnnotation(clazz=SomeClass.class)
public class MyApplicationCode
{
...
}
I basically want to be able to grab both the Class object for MyApplicationCode and the clazz parameter at compile time, to confirm some coding-convention consistencies (another story). In other words, I want to be able to access MyApplicationCode.class and SomeClass.class in the annotation processor. I'm almost there, but I'm missing something. I have:
@Target({ElementType.TYPE})
@Retention(RetentionPolicy.SOURCE)
public @interface CustomAnnotation
{
    public Class clazz();
}
Then I have for the processor:
public class CustomAnnotationProcessor extends AbstractProcessor
{
    private ProcessingEnvironment processingEnvironment;

    @Override
    public synchronized void init(ProcessingEnvironment processingEnvironment)
    {
        this.processingEnvironment = processingEnvironment;
    }

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment environment)
    {
        Set<? extends Element> elements = environment.getElementsAnnotatedWith(CustomAnnotation.class);
        for(Element e : elements)
        {
            Annotation annotation = e.getAnnotation(CustomAnnotation.class);
            Class clazz = ((CustomAnnotation)annotation).clazz();
            // How do I get the actual CustomAnnotation clazz?
            // When I try to do clazz.getName() I get the following ERROR:
            // Attempt to access Class object for TypeMirror SomeClass
            // Also, how do I get the Class object for the class that has the annotation within it?
            // In other words, how do I get MyApplicationCode.class?
        }
        return true;
    }
}
So what I'm trying to do in the process method is grab SomeClass.class and MyApplicationCode.class from the original code below, to do some custom validation at compile time. I can't for the life of me figure out how to get those two values...
@CustomAnnotation(clazz=SomeClass.class)
public class MyApplicationCode
Update: The following post has a lot more details and is much closer, but the problem is that you still end up with a TypeMirror object to pull the class object from, which it doesn't explain: http://blog.retep.org/2009/02/13/getting-class-values-from-annotations-in-an-annotationprocessor/
Update 2: You can get the qualified name of MyApplicationCode by doing
String classname = ((TypeElement)e).getQualifiedName().toString();

I was going to point you in the direction of the blog post http://blog.retep.org/2009/02/13/getting-class-values-from-annotations-in-an-annotationprocessor/, but it looks like you already found that one.
I see you figured out how to access the MyApplicationCode Element, so I won't cover that.
The exception you see actually contains the type of the annotation property within it, so you can retrieve the annotation's clazz value when you catch the exception:
public class CustomAnnotationProcessor extends AbstractProcessor
{
    private ProcessingEnvironment processingEnvironment;

    @Override
    public synchronized void init(ProcessingEnvironment processingEnvironment)
    {
        this.processingEnvironment = processingEnvironment;
    }

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment environment)
    {
        Set<? extends Element> elements = environment.getElementsAnnotatedWith(CustomAnnotation.class);
        for(Element e : elements)
        {
            CustomAnnotation annotation = e.getAnnotation(CustomAnnotation.class);
            TypeMirror clazzType = null;
            try {
                annotation.clazz();
            } catch (MirroredTypeException mte) {
                clazzType = mte.getTypeMirror();
            }
            System.out.println(clazzType); // should print out SomeClass
        }
        return true;
    }
}
Yes, this is a total hack of a solution, and I'm not sure why the API developers decided to go this direction with the annotation processor feature. However, I have seen a number of people implement this (including myself), and the article mentioned describes this technique as well. This seems to be an acceptable solution at the moment.
In terms of "grabbing" the class values for MyApplicationCode and SomeClass, you will not be able to do so if they are classes being compiled. You can, however, use the Element and TypeMirror representations to perform some high level validation on your classes (Method, Field, Class names, annotations present, etc)

After reading this related SO question, I found this excellent page about the Java Annotation Processing Tool (APT). It's from 2005, so it may not be the best way to do this these days.
APT [...] is an annotation processing tool for Java. More specifically, APT allows you to plug code in to handle annotations in a source file as the code compilation is occurring - and in that process, you can emit notes, warnings, and errors.
More information about APT in Java 6 from Oracle's docs.
Interesting blog post from someone at Oracle about APT.
Another example usage of APT -- this time from 2009.
This is only for Oracle's JDK.

It is compile time; the compiler may not even have finished compiling the source code. You retrieve such information from the element representing the annotated construct, which gives you relevant information about the type you have annotated, but not its runtime properties. Those are not yet available, since the relevant class files have not yet been loaded by a virtual machine. The compiler is not even guaranteed to be running inside a Java virtual machine, so it is not required to be able to load class files; its only requirement is to produce bytecode that any particular virtual machine can read.
So go check the Mirror API, and for any relevant information on the class/method/field you have annotated, check the element representing that construct.
As a side note: this information is just what I reasoned out, so it might not be the actual truth.

Related

Java - Why is RetentionPolicy.CLASS the default

In the javadoc for java.lang.annotation.RetentionPolicy it says that:
SOURCE: Annotations are to be discarded by the compiler.
CLASS: Annotations are to be recorded in the class file by the compiler but need not be retained by the VM at run time. This is the default behavior.
RUNTIME: Annotations are to be recorded in the class file by the compiler and retained by the VM at run time, so they may be read reflectively.
I understand that RUNTIME is used to access annotations through the reflection API, SOURCE for compiler-related information (and maybe documentation), and CLASS, as far as I could find out, for special cases like bytecode-manipulation tools and also compiler-related stuff like @FunctionalInterface.
But why is CLASS the default? I expect most annotations to be annotated with RUNTIME, because I think most programmers use annotations to specify metadata that should be read through the reflection API at runtime, since the average programmer doesn't play around with the generated bytecode (at least I've never done it).
So why is RUNTIME not the default? Is there any use case for CLASS that I'm not aware of? Or is this just another case of a decision made long ago, for reasons now unknown or irrelevant, that can't be changed because that would break stuff?
At least for beginners, it may be very confusing and can lead to bugs that the code
package test;

import java.lang.annotation.Annotation;
import test.Test.Example;

@Example("example")
public class Test {
    public static void main(String[] args) {
        Annotation[] annotations = Test.class.getAnnotations();
        if (annotations.length == 0) {
            System.out.println("Class Test has no annotations");
        } else {
            System.out.println("Class Test has the following annotations:");
            for (Annotation annotation : annotations) {
                System.out.println("\t" + annotation.toString());
            }
        }
    }

    // @Retention(RetentionPolicy.RUNTIME)
    public static @interface Example {
        public String value();
    }
}
outputs "Class Test has no annotations" without the #Retention meta annotation.

Java: Safety of rewriting bytecode to add interface

To try to forestall any concern, this is for a personal project. I'm more interested in learning about the internals of the JVM than I am in hearing about what a horrible idea this is :)
Normally, Java does not allow you to implement a generic interface multiple times with different generic parameters.
For example, the following code will not compile:
interface Foo<T>
{ ... }

interface MyFoo
    extends Foo<MyObject>
{ ... }

final class CannotDoThis<T>
    implements Foo<T>,
               MyFoo // Error: Foo cannot be inherited with different arguments
{ ... }
However, I was wondering if there was a way to get around this restriction. For example:
interface Supplier<T>
{ T get(); }

interface MySupplier
    extends Supplier<MyObject>
{ }

final class NullSupplier<T>
    implements Supplier<T>,
               MySupplier // Still an error...
{ public T get() { return null; } } // ...but after erasure this should be safe
I found that by manually disassembling and reassembling the bytecode for NullSupplier.class, I was able to forcibly implement the MySupplier interface on NullSupplier. When I add the patched class file to the resources directory for my project, it seems to compile and run without issue.
Are there any runtime issues that could arise from rewriting the bytecode in such a manner?
One oddity that immediately sticks out to me is that MySupplier.get should have a return type of MyObject whereas the return type of NullSupplier.get is Object. However, this doesn't seem to stop the code from executing properly (I wouldn't expect a ClassCastException, but I was surprised that there weren't any classloader errors).
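A small usage sketch of why this can appear to work at runtime (assuming the patched NullSupplier.class is on the compile and runtime classpath, and MyObject is the placeholder type from the question): the cast to MySupplier only checks the interface list recorded in the patched class file, and the checkcast to MyObject that the compiler emits at the get() call site always passes for null.
public class PatchedSupplierDemo {
    public static void main(String[] args) {
        Object raw = new NullSupplier<MyObject>();

        // Succeeds at runtime because the patched class file lists MySupplier
        // among NullSupplier's implemented interfaces.
        MySupplier supplier = (MySupplier) raw;

        // Compiles against Supplier<MyObject>.get(), so the bytecode is
        // invokeinterface get()Ljava/lang/Object; followed by checkcast MyObject.
        // The erased NullSupplier.get() returns null, which passes any checkcast.
        MyObject value = supplier.get();
        System.out.println(value); // prints "null"
    }
}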

Issue with ASM getMergedType and getCommonSuperClass

I use ASM to update the class stack map, but when ASM calls getMergedType, the following exception occurs:
java.lang.RuntimeException:
java.io.IOException: Resource not found for IntefaceImplA.
If I don't modify the class's methods with ASM, everything works fine.
I have defined an interface, IntefaceA, and two implementations: IntefaceImplA and IntefaceImplB.
My environment source code:
IntefaceA.java
public interface IntefaceA {
    void inteface();
}
IntefaceImplA.java
public class IntefaceImplA implements IntefaceA {
    @Override
    public void inteface() {
    }
}
IntefaceImplB.java
public class IntefaceImplB implements IntefaceA {
    @Override
    public void inteface() {
    }
}
Test.java
public class Test {
    public IntefaceA getImpl(boolean b) {
        IntefaceA a = b ? new IntefaceImplA() : new IntefaceImplB();
        return a;
    }
}
Main.java
public class Main {
    public static void main(String args[]) {
        ....
        if (a instanceof Test) {
            ..
            ...
        }
    }
}
After compiling a runnable jar, I manually deleted IntefaceImplA.class and IntefaceA.class from the jar. (Why would I want to delete those class files? Because Spring likes to do this kind of thing.)
The jar runs fine without ASM, but with ASM an exception occurs, because ASM wants to getMergedType for IntefaceImplA and IntefaceImplB, and IntefaceImplA was deleted by me.
After investigating the ASM ClassWriter source code, I found the code below:
protected String getCommonSuperClass(String type1, String type2)
{
    ClassLoader classLoader = this.getClass().getClassLoader();
    Class c;
    Class d;
    try {
        c = Class.forName(type1.replace('/', '.'), false, classLoader);
        d = Class.forName(type2.replace('/', '.'), false, classLoader);
    } catch (Exception var7) {
        throw new RuntimeException(var7.toString());
    }
    if(c.isAssignableFrom(d)) {
        return type1;
    } else if(d.isAssignableFrom(c)) {
        return type2;
    } else if(!c.isInterface() && !d.isInterface()) {
        do {
            c = c.getSuperclass();
        } while(!c.isAssignableFrom(d));
        return c.getName().replace('.', '/');
    } else {
        return "java/lang/Object";
    }
}
Since I deleted the related class files, the class loader cannot find the class; but without ASM the program still runs normally.
Should I override the getCommonSuperClass method so that it returns java/lang/Object when an exception occurs? That seems odd.
Generally, overriding getCommonSuperClass to use a different strategy, e.g. one that works without loading the class, is a valid use case. As its documentation states:
The default implementation of this method loads the two given classes and uses the java.lang.Class methods to find the common super class. It can be overridden to compute this common super type in other ways, in particular without actually loading any class, or to take into account the class that is currently being generated by this ClassWriter, which can of course not be loaded since it is under construction.
Besides the possibility that either or both arguments are classes you are currently constructing (or changing substantially), it might be the case that the context of the code-transforming tool is not the context in which the classes will eventually run, so they don't have to be accessible via Class.forName in that context. Since Class.forName uses the caller's context, which here is ASM's ClassWriter, it is even possible that ASM can't access a class even though it is available in the context of the code using ASM (if different class loaders are involved).
Another valid scenario is to have a more efficient way to resolve the request by using already available meta information without actually loading the class.
But, of course, it is not a valid resolution to just return "java/lang/Object". While this is indeed a common super type of every argument, it isn’t necessarily the right type for the code. To stay with your example,
public IntefaceA getImpl(boolean b) {
    IntefaceA a = b ? new IntefaceImplA() : new IntefaceImplB();
    return a;
}
the common super type of IntefaceImplA and IntefaceImplB is not only required to verify the validity of assigning either type to it; it is also the result type of the conditional expression, which must be assignable to the return type of the method. If you use java/lang/Object as the common super type, a verifier will reject the code, as it is not assignable to IntefaceA.
The original stack map, very likely reporting IntefaceA as the common super type, will be accepted by the verifier, as that type is identical to the method's return type and can therefore be considered assignable even without loading the type. The test of whether IntefaceImplA or IntefaceImplB is assignable to that specified common type might be postponed to the point where these types are actually loaded, and since you said you deleted IntefaceA, this can never happen.
A method whose declared return type is absent can't work at all. The only explanation for your observation that "without ASM the program works normally" is that this method was never invoked during your test. You most probably created a time bomb in your software by deleting classes that are still in use.
It's not clear why you did this. Your explanation, "since Spring always likes to do this stuff," is far from comprehensible.
But you can use the overriding approach to get the same behavior as with the unmodified code. It just doesn't work by returning java/lang/Object. You could use:
@Override
protected String getCommonSuperClass(String type1, String type2) {
    if(type1.matches("IntefaceImpl[AB]") && type2.matches("IntefaceImpl[AB]"))
        return "IntefaceA";
    return super.getCommonSuperClass(type1, type2);
}
Of course, if you deleted more class files, you have to add more special cases.
An entirely different approach is not to use the COMPUTE_FRAMES option. That option implies that ASM will recompute all stack map frames from scratch, which is great for the lazy programmer, but it implies a lot of unnecessary work if you are only doing small code transformations on an existing class and, of course, it creates the requirement of a working getCommonSuperClass method.
Without that option, the ClassWriter will just reproduce the frames the ClassReader reports, so all unchanged methods will also have unchanged stack maps. You will have to take care of the methods whose code you change, but for a lot of typical code-transformation tasks, you can still keep the original frames. E.g. if you just redirect method calls to signature-compatible targets or inject logging statements that leave the stack in the same state it was in before them, you can keep the original frames, which happens automatically. Note the existence of the ClassWriter(ClassReader,int) constructor, which allows an even more efficient transfer of the methods you don't change.
Only if you change the branch structure or insert code containing branches do you have to take care of frames. But even then, it's often worth learning how to do this, as the automatic calculation is quite expensive, while you usually already have the necessary information when doing a code transformation.
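A minimal sketch of that approach (class and method names here are placeholders, and Opcodes.ASM9 should be whatever API level matches your ASM version): the writer is built from the reader and without COMPUTE_FRAMES, so existing frames are copied through unchanged and getCommonSuperClass is never consulted for methods you don't touch.
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.Opcodes;

public class FramePreservingTransformer {
    public static byte[] transform(byte[] originalClassFile) {
        ClassReader reader = new ClassReader(originalClassFile);
        // No COMPUTE_FRAMES: frames reported by the reader are written back as-is.
        ClassWriter writer = new ClassWriter(reader, 0);
        ClassVisitor visitor = new ClassVisitor(Opcodes.ASM9, writer) {
            // Override visitMethod etc. here; as long as the transformation keeps
            // the stack shape and branch structure, the original frames stay valid.
        };
        reader.accept(visitor, 0);
        return writer.toByteArray();
    }
}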

Android app cannot install

While developing an app in AIDE for Android, I have come across this error. The app compiles successfully but won't install, reporting this error:
Could not run the App directly as root. Consider disabling direct running in the settings.
WARNING: linker: app_process has text relocations. This is wasting memory and is a security risk. Please fix.
pkg: /storage/sdcard/AppProjects/MyProgram/bin/MyProgram.apk
Failure [INSTALL_FAILED_DEXOPT]
exit with 0
I researched what could cause this and mainly came across explanations like "certificate error, try re-signing the package" and "setting a permission twice in the manifest" and other things, none of which have helped.
Your problem: Java thinks you have defined two methods with the same signature.
Java method signature definition: https://docs.oracle.com/javase/tutorial/java/javaOO/methods.html
Method declarations have six components, in order:
1. Modifiers - such as public, private, and others you will learn about later.
2. The return type - the data type of the value returned by the method, or void if the method does not return a value.
3. The method name - the rules for field names apply to method names as well, but the convention is a little different.
4. The parameter list in parentheses - a comma-delimited list of input parameters, preceded by their data types, enclosed by parentheses, (). If there are no parameters, you must use empty parentheses.
5. An exception list - to be discussed later.
6. The method body, enclosed between braces - the method's code, including the declaration of local variables, goes here.
As you can see above, the generic type arguments are NOT part of the Java method signature (after erasure). Therefore Java detects two add methods with the same signature.
I found where the problem resides. It was in some code which looked very much like this:
public class Builder<T extends Base> {
    private final List<Def1> subDefs1 = new ArrayList<>();
    private final List<Def2> subDefs2 = new ArrayList<>();

    public Builder<T> add(final Collection<Def1> ds) {
        subDefs1.addAll(ds);
        return this;
    }

    public Builder<T> add(final Collection<Def2> ds) {
        subDefs2.addAll(ds);
        return this;
    }
}

interface Base {}
final class Def1 implements Base {}
final class Def2 implements Base {}
I had these add methods, which both take a Collection of some kind. The problem must be something to do with Java's lacklustre generics and the dexing process, I guess...
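One way around the clash, as a sketch (the names addDefs1/addDefs2 are made up here), is to give the two overloads distinct names so their erased signatures no longer collide:
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class Builder<T extends Base> {
    private final List<Def1> subDefs1 = new ArrayList<>();
    private final List<Def2> subDefs2 = new ArrayList<>();

    // After erasure these become addDefs1(Collection) and addDefs2(Collection),
    // which no longer share a signature.
    public Builder<T> addDefs1(final Collection<Def1> ds) {
        subDefs1.addAll(ds);
        return this;
    }

    public Builder<T> addDefs2(final Collection<Def2> ds) {
        subDefs2.addAll(ds);
        return this;
    }
}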

can't cast to implemented interface

I'm very confused...
I have a class which directly implements an interface:
public class Device implements AutocompleteResult
{...}
Here is proof that I'm looking at the right variables:
Object match = ...;
log.debug(match.getClass()); // Outputs 'Device'
log.debug(match.getClass().getInterfaces()[0]); // Outputs 'AutocompleteResult'
Yet when I try to cast an instance of the class to the interface:
AutocompleteResult result = (AutocompleteResult) match;
I get a ClassCastException!
ClassCastException: Device cannot be cast to AutocompleteResult
Also, isAssignableFrom returns false and I'm not sure why:
log.debug(AutocompleteResult.class.isAssignableFrom(Device.class));
from the doc:
Determines if the class or interface represented by this Class object is either the same as, or is a superclass or superinterface of, the class or interface represented by the specified Class parameter.
Shouldn't I always be able to cast an object to an interface its class implements?
Thanks.
This can happen if two different classloaders load a class named AutocompleteResult.
These two classes are then treated as entirely different classes, even if they have the same package and name (and even implementation/fields/methods).
A common cause for this is if you use some kind of plugin system and both your base classes and the plugin classes provide the same class.
To check for this issue, print the value returned by Class.getClassLoader() on both offending classes (i.e. the class of the interface implemented by Device and AutocompleteResult.class), as in the sketch below.
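A minimal diagnostic sketch (reusing the match variable and log from the question); if two different loaders are involved, the last line prints false:
// The AutocompleteResult that Device actually implements:
Class<?> implemented = match.getClass().getInterfaces()[0];
// The AutocompleteResult this code was compiled and linked against:
Class<?> expected = AutocompleteResult.class;

log.debug(implemented.getClassLoader());
log.debug(expected.getClassLoader());
log.debug(implemented == expected); // false when two class loaders each defined the interface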
AKA when Java apparently doesn't Java.
I hit this problem recently with Play Framework 2.6.3, what helped me was this:
https://www.playframework.com/documentation/2.6.x/ThreadPools#Application-class-loader
I leave this info here for the people that might have the same problem.
To make it clearer, what helps is injecting Application into an eager singleton and then using its class loader to load the classes I was having issues with:
public class Module extends AbstractModule {
    @Override
    public void configure() {
        bind(TheClassLoaderLoader.class).asEagerSingleton();
    }

    public static class TheClassLoaderLoader {
        @Inject
        public TheClassLoaderLoader(Application application) throws ClassNotFoundException {
            ClassLoader classloader = application.classloader();
            Class<?> interfaceClass = classloader.loadClass(InterfaceClass.class.getName());
            classloader.loadClass(ImplementsInterfaceClass.class.getName()).asSubclass(interfaceClass);
        }
    }
}
The example here https://playframework.com/documentation/2.6.x/JavaDependencyInjection#Configurable-bindings, which uses Environment, often throws a frustrating ClassNotFoundException.
Cheers
