I have an annotation processor for an annotation of retention policy=SOURCE.
I have no idea how to step-debug it.
I have added print statements and logger calls, and when I run mvn install, compile, or package, or ant javac, I see their output in the compile log.
However, I have no idea how to step-debug the processor in Eclipse. I mean, how do you step-debug something that runs at compile time?
A more recent option is to use something like http://github.com/google/compile-testing, which lets you invoke a compilation job against arbitrary annotation processors; in that job you can set breakpoints, step through, etc.
@Test
public void testStuff() {
    // Create a source file to process, or load one from disk.
    JavaFileObject file = JavaFileObjects.forSourceLines("test.Foo",
        "package test;",
        "",
        "import bar.*;",
        "",
        "@MyAnnotation(blah=false)",
        "interface TestInterface {",
        "    Bar someBar();",
        "}");
    // Assert conditions following a compilation in the context of MyProcessor.
    assert_().about(javaSource()).that(file)
        .processedWith(new MyProcessor())
        .failsToCompile()
        .withErrorContaining("some error message").in(file).onLine(5);
}
This test expects some error message because @MyAnnotation is incorrectly declared in the test source above. If this assertion fails, you can run the test in debug mode in your IDE, set breakpoints in MyProcessor, and step through with a full compiler environment active during debugging.
For unit testing specific methods within your processor, you can also use the @Rule called CompilationRule, from which you can obtain the Elements and Types utility classes in order to test specific pieces of your compiler logic in a more isolated way.
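As a minimal sketch of that approach (the test class name and the looked-up type are arbitrary; your own processor helpers would replace the assertions):

import static com.google.common.truth.Truth.assertThat;

import com.google.testing.compile.CompilationRule;
import javax.lang.model.element.TypeElement;
import javax.lang.model.util.Elements;
import javax.lang.model.util.Types;
import org.junit.Rule;
import org.junit.Test;

public class MyProcessorLogicTest {

    // Provides real Elements/Types instances backed by a live compiler invocation.
    @Rule public final CompilationRule compilationRule = new CompilationRule();

    @Test
    public void resolvesTypeElement() {
        Elements elements = compilationRule.getElements();
        Types types = compilationRule.getTypes();

        // Look up a type the same way your processor logic would during processing.
        TypeElement stringType = elements.getTypeElement("java.lang.String");
        assertThat(stringType).isNotNull();
        assertThat(types.isSameType(stringType.asType(), stringType.asType())).isTrue();
    }
}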
You have to invoke the Java compiler from Eclipse, using a debug configuration (you'll need to create the configuration manually, from the "Debug Configurations..." menu choice).
The "correct" way to invoke the Java compiler under JDK 1.6 or above is to use the JavaCompiler interface in javax.tools, which you get from the ToolProvider (I include all the links because there's a decent amount of class/package documentation that you should read).
The "quick-and-dirty" way (that should work, but I make no guarantees) is to invoke com.sun.tools.javac.Main.main(), passing it your normal command-line arguments. To do this, you'll need tools.jar on your classpath (it's found in $JAVA_HOME/lib).
Annotation processing occurs during compilation, so normal run/debug of your application won't reach it. If you want to debug it in the context of your project, you can use Eclipse remote debugging while running Gradle or Maven in debug mode. Then you can put breakpoints in the annotation processor's files.
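For example (a sketch; the exact ports depend on your setup), start the build in debug mode and attach an Eclipse "Remote Java Application" configuration to it:

$ mvnDebug clean install                                 # Maven waits for a debugger on port 8000
$ ./gradlew build --no-daemon -Dorg.gradle.debug=true    # Gradle waits for a debugger on port 5005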
See Debugging an Annotation Processor in any project.
Disclaimer: I wrote the post.
Related
Is there a way to get dagger to spit out why it didn't generate a particular component?
I tried refactoring some of our modules and ended up breaking something, but I have literally no idea what I broke! All I see is that all my DaggerFoo components are missing, because dagger is apparently silently failing.
I've tried compiling with verbosity & higher max errors, but I still see absolutely nothing from Dagger itself saying what went wrong.
-Xdiags:verbose
-Xmaxerrs=1000
I have no relevant errors to share, because none are printed!
How the heck do you debug Dagger2?
Dagger runs as an annotation processor, so its error messages will manifest as compiler errors. These will often look like this message ("X cannot be provided...").
error: some.injected.ClassName cannot be provided without an @Inject constructor
       or from an @Provides- or @Produces-annotated method.
some.injected.ClassName is injected at
some.class.that.InjectsIt
some.class.that.InjectsThatAbove
some.class.that.FurtherInjectsThat
If you're not sure where to look for compiler errors, you can see some other answers here:
Android Studio: Where is the Compiler Error Output Window?
How to view the list of compile errors in IntelliJ?
eclipse annotation processor not working. Where are errors shown?
If you've edited your project configuration or Gradle definition, it is also possible that Dagger is no longer running at all, or that it hasn't run for a while and has been working only based on its previous output. If so, check your Gradle file or Eclipse project definition to ensure that you are including Dagger as an annotationProcessor, and that you have at least one @Component for that annotation processor to find.
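For reference, a hedged sketch of what that Gradle dependency setup typically looks like (replace 2.x with the actual Dagger version you use):

dependencies {
    implementation 'com.google.dagger:dagger:2.x'
    annotationProcessor 'com.google.dagger:dagger-compiler:2.x'
    // for Kotlin projects using kapt instead of annotationProcessor:
    // kapt 'com.google.dagger:dagger-compiler:2.x'
}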
I have written a simple annotation processor (just for fun) that generates some boilerplate code I had been writing by hand in a previous project. It generates a module like the following by collecting annotations on Activity classes:
@Module
abstract class ActivityInjectorModule {

    @ContributesAndroidInjector
    abstract fun providesMain2Activity(): Main2Activity

    @ContributesAndroidInjector
    abstract fun providesMainActivity(): MainActivity
}
However, when I run it with Dagger, Dagger can't seem to find the classes generated by my annotation processor. The class is generated and present in the generated directory, and I can use it in my source code, but on compilation Dagger produces the following error. Any expert suggestion?
error: cannot find symbol
@dagger.Component(modules = {dagger.android.AndroidInjectionModule.class, com.mallaudin.daggietest.di.AppModule.class, ActivityInjectorModule.class})
^
  symbol: class ActivityInjectorModule
This is the main app component.
@Singleton
@Component(
    modules = [
        AndroidInjectionModule::class,
        AppModule::class,
        ActivityInjectorModule::class
    ]
)
interface AppComponent : AndroidInjector<App> {

    @Component.Builder
    interface Builder {
        fun addContext(@BindsInstance ctx: Context): Builder
        fun build(): AppComponent
    }
}
The ActivityInjectorModule class is generated by the annotation processor and exists in the generated directory.
The Application class:
class App : DaggerApplication() {
    override fun applicationInjector(): AndroidInjector<out DaggerApplication> {
        return DaggerAppComponent.builder().addContext(this).build()
    }
}
Everything works perfectly if I create the generated class myself.
Somehow, at compile time, Dagger is unable to find the class when it is generated by my annotation processor.
After Yuriy Kulikov's answer:
You can see the generated file is in the same package and is also referenced by its fully qualified name. Still, Dagger reports the errors.
Here is the link to the GitHub repository if someone wants to experiment.
Solution:
Generate Java code; kapt does not support multiple rounds.
Write the generated files in the earliest possible round.
Explanation:
The javac annotation-processing model uses rounds instead of defining a processor order. A simplified version of the algorithm looks like this:
Gather all java sources
Run all annotation processors. Any annotation processor can generate new files using Filer.
Gather all generated files and if there are any, run step 2 again.
If there are no files generated, run one more round where RoundEnvironment.processingOver() returns true, signaling this is the last round.
Here is a pretty good explanation of the process
Now a bit about kapt. Kapt uses javac to run annotation processors. To make that possible, it runs the Kotlin compiler first to generate Java stub files, then runs javac on them. Currently kapt does not support multiple rounds, meaning it does not generate Java stubs for Kotlin classes that were generated by annotation processors.
Note: javac still uses multiple rounds; it just can't pick up generated Kotlin sources.
So, back to your question. One possible option is to move your generated classes into a separate module, as described here.
But the easiest option is to generate Java code directly; your generated Java classes will be picked up by javac automatically, launching a second round of annotation processing, in which Dagger will process them.
Just a few more notes:
Do not generate your code when RoundEnvironment.processingOver() == true; that will not trigger another round. Generate it during the same round in which you see the annotation.
To make the generated code visible to annotation processing, write it using Filer (see the sketch below).
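As an illustrative sketch (the annotation and class names are made up), a processor that emits a source file through Filer in the same round it sees the annotation:

import java.io.IOException;
import java.io.Writer;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;
import javax.tools.JavaFileObject;

@SupportedAnnotationTypes("com.example.GenerateModule")
public class ModuleGeneratingProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        // Do NOT wait for processingOver(): emit the file in the round that sees the annotation,
        // so javac schedules another round and downstream processors (e.g. Dagger) can see it.
        if (roundEnv.processingOver() || annotations.isEmpty()) {
            return false;
        }
        try {
            // Writing through Filer (rather than straight to disk) is what makes the
            // generated source part of the next annotation-processing round.
            JavaFileObject file =
                processingEnv.getFiler().createSourceFile("com.example.ActivityInjectorModule");
            try (Writer writer = file.openWriter()) {
                writer.write("package com.example;\n\npublic abstract class ActivityInjectorModule {}\n");
            }
        } catch (IOException e) {
            processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR, e.getMessage());
        }
        return false;
    }
}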
New answer
I had somehow missed that you are using kapt. Kapt can process your classes even without fully qualified names (which is remarkable) if you add this to your build.gradle:
kapt {
    arguments {
        arg("argumentIncremental", 'true')
    }
    correctErrorTypes = true
}
More info about this: https://kotlinlang.org/docs/reference/kapt.html#non-existent-type-correction
The previous answer can be useful if someone has the same issue with annotationProcessor (apt) in Gradle.
Short answer: use the fully qualified name for ActivityInjectorModule:
@dagger.Component(modules = {dagger.android.AndroidInjectionModule.class, com.mallaudin.daggietest.di.AppModule.class, com.mallaudin.daggietest.di.ActivityInjectorModule.class})
Alternatively put both files in the same package.
Long answer: Dagger is an annotation processor; it runs before your code is compiled and (potentially) before your other annotation processor runs. The order in which processors run is not defined.
The Dagger annotation processor will process the TypeElement annotated with @dagger.Component and try to find all modules, including ActivityInjectorModule.class. The thing is, ActivityInjectorModule might not have been generated yet, so ActivityInjectorModule will not have a package at this point. Dagger will assume that ActivityInjectorModule resides in the same package as the Component class and will not add an import. The usual workaround is to use fully qualified names for generated classes when they are referenced by other annotation processors. Sometimes it also makes sense to move annotation processing to a different Gradle module, but I don't think that is what you want.
There may be a more elegant way to solve this, but the simplest and most reliable solution is to do two passes with javac: one to run just your annotation processor, and a second to do everything javac normally does.
The javac documentation specifies two options which should help you out.
-proc: {none,only}
Controls whether annotation processing and/or compilation is done. -proc:none means that compilation takes place without annotation processing. -proc:only means that only annotation processing is done, without any subsequent compilation.
-processor class1[,class2,class3...]
Names of the annotation processors to run. This bypasses the default discovery process.
The first pass (to run only your own annotation processor) is
javac -proc:only -processor com.foo.bar.MyProcessor MyProject/src/*
and the second pass (a regular build) is
javac MyProject/src/*
If you’re using something like Ant or Maven, you should be able to update the build instructions to have two compiler passes with only a minimal amount of effort.
Edit: here’s my attempt at Gradle instructions
I have no experience with Gradle, but it seems like you need to do something like this.
In your Gradle build script, you need to define the preprocessing task and add a dependency on it from the compileJava task.
task myAnnotationTask(type: JavaCompile) {
    // configure source, classpath, and destinationDir as in your main compile task
    options.compilerArgs << '-proc:only' << '-processor' << 'com.foo.bar.MyAnnotationProcessor'
}

compileJava.dependsOn myAnnotationTask
The first Maven project contains the sources with the annotated classes.
The second Maven project contains the annotation processor (javax.annotation.processing.AbstractProcessor).
I would like the second project, at compile time, to process the annotated sources of the first project and do some work.
How should I approach it?
I am guessing an annotation processor is not the right choice, as it is required to be bound to a compiler...
The other option is to scan all Java files in the first project, load them (with Class.forName), and process the annotations.
Can you suggest something else?
You can supply the -proc:only command-line argument to avoid compilation -- no .class files will be output.
The javac documentation says:
-proc: {none,only}
Controls whether annotation processing and/or compilation is done. -proc:none means that compilation takes place without annotation processing. -proc:only means that only annotation processing is done, without any subsequent compilation.
I'm facing a problem with JUnit tests. I have written a JUnitRunner which is used to execute the WrapperTest.
The WrapperTest generates a plain JUnit test and a needed file. If I want to execute the methods of the generated test, my runner searches the development workspace for the "NeededClass".
I'm generating the needed class in the JUnit workspace, and I want the tests to use this generated class file so I can delete the file from my development workspace.
So, how do I execute the generated test in the JUnit workspace? (It should look in the JUnit workspace for the needed file.)
Edit: OK, I found out it's a ClassLoader problem... The development workspace has a different ClassLoader than the JUnit workspace, which causes weird errors, for example a "class isn't the identical class" exception (java.lang.ClassCastException: org.junit.runner.JUnitCore cannot be cast to org.junit.runner.JUnitCore). Looks like I have to fix this problem with reflection, which is very dirty.
Look into Maven and its build lifecycle. You can wire the code generation you are doing into the generate-test-sources phase and then have it participate as normal in the test phase.
See this question for an example.
I'm trying to fiddle with Foursquare's HeapAudit, and am attempting to set it up using IntelliJ IDEA. I have managed to get it to build just fine, using the dependencies from the pom.xml.
However, when I actually try to run the JUnit tests, basically all of them fail. I'm guessing this is because using HeapAudit requires the JVM to be started with it as a -javaagent, according to the github:
$ java -javaagent:heapaudit.jar MyTest
Presumably the tests would pass if I put this line in and referenced the heapaudit.jar I downloaded/built earlier. However, it seems to me that if I make changes to the source, I'm going to need to re-package this silly .jar file to see whether it works.
Is there any way of running the tests with a -javaagent without going through the whole rigmarole of compile -> package-into-jar every testing cycle? Perhaps getting IntelliJ to attach the newly compiled .class files as a -javaagent before running the tests?
1) Have a jar just with a META-INF/MANIFEST.MF
The manifest must be properly configured with Premain-Class and other attributes. The jar doesn't need any other files. Use this jar with the -javaagent. Provided that the agent classes are in the classpath, the agent will start normally.
This might fail when using maven-surefire-plugin with forkMode=never because by default the application classes are loaded in a child ClassLoader.
Works fine with Eclipse and IntelliJ.
If doing this, double-check the manifest syntax (I once spent a long time figuring out that a package name was wrong).
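For reference, a minimal sketch of such an agent entry point (the package and class names are placeholders); the jar's manifest points at it via the Premain-Class attribute:

package com.example;

import java.lang.instrument.Instrumentation;

// The agent jar's META-INF/MANIFEST.MF would contain:
//   Premain-Class: com.example.MyAgent
public class MyAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        // Runs before main(); install ClassFileTransformers or stash the Instrumentation instance here.
        System.out.println("agent loaded with args: " + agentArgs);
    }
}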
2) Use ea-agent-loader
It allows you to load the agent (any agent) at runtime (it uses VM.attach()). However, VM.attach() sometimes disrupts debugging, and breakpoints might fail to trigger.
It has the same issue with surefire in forkMode=never.
3) Load the agent at runtime.
Write your own code to load the agent at runtime and call it from your @BeforeClass. You will still need a jar (which you can generate at runtime if you want).
You just need to call this (only once):
AgentLoader.loadAgentClass(YourAgentClass.class.getName());
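As a hedged sketch of how that call might sit in a test (AgentLoader comes from the ea-agent-loader library, and YourAgentClass is a placeholder for your agent's entry-point class, exactly as in the line above):

import com.ea.agentloader.AgentLoader;
import org.junit.BeforeClass;
import org.junit.Test;

public class InstrumentedTest {

    @BeforeClass
    public static void attachAgent() {
        // Attaches the agent to the currently running JVM once, before any test executes.
        AgentLoader.loadAgentClass(YourAgentClass.class.getName());
    }

    @Test
    public void someTest() {
        // ... code that relies on the instrumentation installed by the agent
    }
}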