AnnotationProcessor using multiple source-files to create one file - java

I have two interfaces with methods, and I want to combine the methods of the two into one interface.
@Service("ITestService")
public interface ITest1
{
    @Export
    void method1();
}

@Service("ITestService")
public interface ITest2
{
    @Export
    void method2();
}
Result should be:
public interface ITestService extends Remote
{
    void method1();
    void method2();
}
The first run of my AnnotationProcessor generates the correct output (because the RoundEnvironment contains both classes).
But if I edit one of the classes (for example, adding a new method), the RoundEnvironment contains only the edited class, so the result is the following (after adding newMethod() to interface ITest1):
public interface ITestService extends Remote
{
    void method1();
    void newMethod();
}
Now method2 is missing. I don't know how to fix this. Is there a way (an Environment) to access all classes in the project? Or is there another way to solve this?
The code that generates the class is pretty long, so here is a short description of how I generate it. I iterate through the elements with env.getElementsAnnotatedWith(Service.class), extract the methods, and write them into the new file with:
JavaFileObject file = filer.createSourceFile("com.test." + serviceName);
file.openWriter().append(serviceContent).close();
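The aggregation step itself boils down to collecting method signatures from every annotated interface and emitting one combined source string. A minimal sketch of that string-building step (the class name, method, and plain-string approach are illustrative, not the asker's actual code):

```java
import java.util.List;

public class ServiceSourceBuilder {
    // Builds the combined interface source from the collected method signatures.
    static String buildSource(String pkg, String serviceName, List<String> methodSignatures) {
        StringBuilder sb = new StringBuilder();
        sb.append("package ").append(pkg).append(";\n\n");
        sb.append("public interface ").append(serviceName).append(" extends java.rmi.Remote {\n");
        for (String signature : methodSignatures) {
            sb.append("    ").append(signature).append(";\n");
        }
        sb.append("}\n");
        return sb.toString();
    }
}
```

The resulting string would then be passed to the Filer call above as serviceContent. Note that this sketch does nothing to solve the incremental-compilation problem itself; it only shows the content-building step.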

--- Option 1 - Manual compilation from command line ---
I tried to do what you want, which is to access all the classes from a processor. As people commented, javac always compiles all classes, and from RoundEnvironment I do have access to all classes being compiled, every time (even when no files changed), with one small detail: only as long as they all appear on the list of classes to be compiled.
I've done a few tests with two interfaces where one (A) depends on the other (B) via extends, and I have the following scenarios:
If I ask the compiler to explicitly compile only the interface that has the dependency (A), passing the full path to the java file into the command line, and adding the output folder to the classpath, only the interface I passed into the command line gets processed.
If I explicitly compile only (A) and don't add the output folder to the classpath, the compiler still only processes interface (A). But it also gives me the warning: Implicitly compiled files were not subject to annotation processing.
If I use * or pass both classes to the compiler on the command line, then I get the expected result: both interfaces get processed.
If you set the compiler to be verbose, you'll get an explicit message showing which classes will be processed in each round. This is what I got when I explicitly passed interface (A):
Round 1:
input files: {com.bearprogrammer.test.TestInterface}
annotations: [com.bearprogrammer.annotation.Service]
last round: false
And this is what I've got when I added both classes:
Round 1:
input files: {com.bearprogrammer.test.AnotherInterface, com.bearprogrammer.test.TestInterface}
annotations: [com.bearprogrammer.annotation.Service]
last round: false
In both cases I see that the compiler parses both classes, but in a different order. For the first case (only one interface added):
[parsing started RegularFileObject[src\main\java\com\bearprogrammer\test\TestInterface.java]]
[parsing completed 15ms]
[search path for source files: src\main\java]
[search path for class files: ...]
[loading ZipFileIndexFileObject[lib\processor.jar(com/bearprogrammer/annotation/Service.class)]]
[loading RegularFileObject[src\main\java\com\bearprogrammer\test\AnotherInterface.java]]
[parsing started RegularFileObject[src\main\java\com\bearprogrammer\test\AnotherInterface.java]]
For the second case (all interfaces added):
[parsing started RegularFileObject[src\main\java\com\bearprogrammer\test\AnotherInterface.java]]
...
[parsing started RegularFileObject[src\main\java\com\bearprogrammer\test\TestInterface.java]]
[search path for source files: src\main\java]
[search path for class files: ...]
...
The important detail here is that in the first case the compiler loads the dependency as an implicit object of the compilation. In the second case it loads it as one of the to-be-compiled objects (you can see this because it starts searching other paths for files only after the provided classes are parsed). And it seems that implicit objects aren't included in the annotation-processing list.
For more details on the compilation process, check this Compilation Overview, which does not explicitly say which files are picked up for processing.
The solution in this case would be to always pass all classes to the compiler on the command line.
--- Option 2 - Compiling from Eclipse ---
If you are compiling from Eclipse, incremental build will make your processor fail (I haven't tested it). But I would think you can work around that by asking for a clean build (Project > Clean..., also untested) or by writing an Ant build that always cleans the classes directory and setting up an Ant Builder in Eclipse.
--- Option 3 - Using build tools ---
If you are using some other build tool like Ant, Maven or Gradle, the best solution would be to have the source generation in a step separate from compilation. You would also need to have your processor compiled in a separate, earlier step (or a separate subproject if using a multi-project build in Maven/Gradle). This would be the best scenario because:
For the processing step, you can always do a full, clean "compilation" without actually compiling the code (using javac's -proc:only option to run only annotation processing).
With the generated source code in place, Gradle is smart enough not to recompile generated source files that didn't change. Ant and Maven would only recompile the needed files (the generated ones and their dependencies).
For this third option you could also set up an Ant build script, run from Eclipse as a builder that executes before your Java builder, to generate those files. Generate the source files into some special folder and add that folder to your classpath/build path in Eclipse.

The NetBeans @Messages annotation generates a single Bundle.java file for all classes in the same package. It works correctly with incremental compilation thanks to the following trick in the annotation processor:
Set<Element> toProcess = new HashSet<Element>();
for (Element e : roundEnv.getElementsAnnotatedWith(Messages.class)) {
    PackageElement pkg = findPkg(e);
    for (Element elem : pkg.getEnclosedElements()) {
        if (elem.getAnnotation(Messages.class) != null) {
            toProcess.add(elem);
        }
    }
}
// now process all package elements in toProcess
// rather than just those provided by the roundEnv

PackageElement findPkg(Element e) {
    for (;;) {
        if (e instanceof PackageElement) {
            return (PackageElement) e;
        }
        e = e.getEnclosingElement();
    }
}
By doing this one can be sure all (top level) elements in a package are processed together even if the compilation has only been invoked on a single source file in the package.
In case you know where to look for your annotation (top-level elements in a package, or even any element in a package), you should always be able to get a list of all such elements.


Getting the resource path of the java project from a custom gradle plugin

Trying to create a custom Gradle plugin in Java: how do I get the resources path from inside the task class?
public class MyCustomPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        project.getTasks().register("doStuff", CustomTask.class);
    }
}
public class CustomTask extends DefaultTask {
    // How do I get the java project resources dir from here?
    @Inject
    public CustomTask(ProjectLayout projectLayout) {
        directoryProperty = projectLayout.getBuildDirectory();
    }

    @TaskAction
    public void execute() {
        ...
    }
}
I would recommend not getting the directory inside the task, because the plugin that provides it might not be applied. Instead, I would do it from within the plugin that registers the task; this way you can also ensure that the necessary plugin is actually applied. Gradle will display an error if the task is used without a value assigned to the input, explaining that nothing was assigned.
With the kotlin-dsl:
@CacheableTask
abstract class CustomTask : DefaultTask() {
    @get:InputFiles
    abstract val resources: ConfigurableFileCollection
    //...
}
I cannot say whether @InputFiles is the right annotation for your use case, because I don't know what you want to do with the resources. Refer to the Gradle documentation for more information on the available annotations and what they do.
plugins {
    java
}
tasks.register<CustomTask>("customTask") {
    resources.setFrom(sourceSets.main.map { it.resources })
}
Notice the map {}, which ensures that our task has a dependency on the processResources task. This is done automatically for us because we stick to Gradle's provider API for everything.
Note that the resources are by default in one directory, but they don't have to be. This is why the resources are defined as a SourceDirectorySet and not as a Provider<Directory>. The same is true for anything that originates from the SourceSetContainer. It is easier to explain with Java source code: imagine you have Java and Kotlin; then you will have src/main/java and src/main/kotlin, hence two directories. The former will have a **/*.java include filter, whereas the latter has a **/*.kt include filter. If we just want all sources, we use sourceSets.main.map { it.java.sourceDirectories }, and if we want only one of the two it gets complicated. 😝
First, you'd have to ensure this is a Java project: either applying the "java" plugin from your plugin (project.getPluginManager().apply("java")), or only registering the task when the "java" plugin has been applied by the user (project.getPluginManager().withPlugin("java", ignored -> { project.getTasks().register(…); });).
You could then get the resources from the main source set:
SourceSetContainer sourceSets = project.getExtensions().getByType(SourceSetContainer.class);
// Use named() instead of getByName() if you prefer/need to use providers
SourceSet mainSourceSet = sourceSets.getByName(SourceSet.MAIN_SOURCE_SET_NAME);
SourceDirectorySet resources = mainSourceSet.getResources();
BTW, the best practice is to have tasks only declare their inputs and outputs (e.g. "I need a set of directories or files as inputs, and my outputs will be one single file, or go into one single directory") and have the actual wiring with default values done by the plugin.
You could have the plugin unconditionally register the task and then, when the "java" plugin is applied, conditionally configure its inputs to the project resources; or conditionally register the task; or unconditionally apply the "java" plugin, as I showed above.
You can access the sources through the project.sourceSets.
@Inject
public CustomTask(Project project) {
    directoryProperty = project.projectLayout.getBuildDirectory();
    sourceSet = project.sourceSets.main
}
See also the reference documentation here: https://docs.gradle.org/current/userguide/java_plugin.html#sec:java_project_layout

How to provide an external path for step definition files in Citrus Cucumber

I have code which is working fine until I change the path of the glue code. Here is the working code:
@RunWith(Cucumber.class)
@CucumberOptions(features = "D:/citrus/Feature",
    strict = true,
    glue = { "todo" },
    plugin = { "com.consol.citrus.cucumber.CitrusReporter" })
public class TodoFeatureTest {
}
I am able to specify an external path (i.e., outside of the Eclipse project) for the features, but when I specify an external path for the glue = {} option I get an unimplemented-steps error. What can I do about this? I want to keep the step definition files outside of the project.
cucumber.glue= # comma separated package names. example: com.example.glue
see - https://cucumber.io/docs/cucumber/api/#junit
the values for glue must be package names in which to find the step definitions, not paths
it really does not matter where you implement the step definitions (same project or external jars), as long as they are on the test classpath. Make sure the correct dependency is added for the test.
Try using:
features={"D:/citrus/Feature"}
Instead of:
features="D:/citrus/Feature"

Multi-module annotation processing in Android Studio

I have a project with multiple modules in Android Studio. A module may have a dependency on another module, for example:
Module PhoneApp -> Module FeatureOne -> Module Services
I've included my annotation processing in the root module, but the android-apt annotation processing occurs only at the topmost level (PhoneApp), so it should theoretically have access to all the modules at compile time. However, what I'm seeing in the generated java file is only the classes annotated in PhoneApp and none from the other modules.
PhoneApp/build/generated/source/apt/debug/.../GeneratedClass.java
In the other modules, I am finding a generated file in the intermediates directory that contains only the annotated files from that module.
FeatureOne/build/intermediates/classes/debug/.../GeneratedClass.class
FeatureOne/build/intermediates/classes/debug/.../GeneratedClass.java
My goal is to have a single generated file in PhoneApp that lets me access the annotated files from all modules. I'm not entirely sure why the code generation process runs for each module and fails to aggregate all annotations in PhoneApp. Any help appreciated.
The code is fairly simple and straightforward so far; checkIsValid() is omitted as it works correctly.
Annotation Processor:
@Override
public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
    try {
        for (Element annotatedElement : roundEnv.getElementsAnnotatedWith(GuiceModule.class)) {
            if (checkIsValid(annotatedElement)) {
                AnnotatedClass annotatedClass = new AnnotatedClass((TypeElement) annotatedElement);
                if (!annotatedClasses.containsKey(annotatedClass.getSimpleTypeName())) {
                    annotatedClasses.put(annotatedClass.getSimpleTypeName(), annotatedClass);
                }
            }
        }
        if (roundEnv.processingOver()) {
            generateCode();
        }
    } catch (ProcessingException e) {
        error(e.getElement(), e.getMessage());
    } catch (IOException e) {
        error(null, e.getMessage());
    }
    return true;
}
private void generateCode() throws IOException {
    PackageElement packageElement = elementUtils.getPackageElement(getClass().getPackage().getName());
    String packageName = packageElement.isUnnamed() ? null : packageElement.getQualifiedName().toString();
    ClassName moduleClass = ClassName.get("com.google.inject", "Module");
    ClassName contextClass = ClassName.get("android.content", "Context");
    TypeName arrayOfModules = ArrayTypeName.of(moduleClass);
    MethodSpec.Builder methodBuilder = MethodSpec.methodBuilder("juice")
            .addParameter(contextClass, "context")
            .addModifiers(Modifier.PUBLIC, Modifier.STATIC)
            .returns(arrayOfModules);
    methodBuilder.addStatement("$T<$T> collection = new $T<>()", List.class, moduleClass, ArrayList.class);
    for (String key : annotatedClasses.keySet()) {
        AnnotatedClass annotatedClass = annotatedClasses.get(key);
        ClassName className = ClassName.get(annotatedClass.getElement().getEnclosingElement().toString(),
                annotatedClass.getElement().getSimpleName().toString());
        if (annotatedClass.isContextRequired()) {
            methodBuilder.addStatement("collection.add(new $T(context))", className);
        } else {
            methodBuilder.addStatement("collection.add(new $T())", className);
        }
    }
    methodBuilder.addStatement("return collection.toArray(new $T[collection.size()])", moduleClass);
    TypeSpec classTypeSpec = TypeSpec.classBuilder("FreshlySqueezed")
            .addModifiers(Modifier.PUBLIC, Modifier.FINAL)
            .addMethod(methodBuilder.build())
            .build();
    JavaFile.builder(packageName, classTypeSpec)
            .build()
            .writeTo(filer);
}
This is just for a demo of annotation processing that works with Guice, if anyone is curious.
So how can I get the annotated classes from all modules included in the generated PhoneApp .java file?
It's never too late to answer a question on SO, so...
I have faced a very similar complication during one of tasks at work.
And I was able to resolve it.
Short version
All you need to know about generated classes from moduleB in moduleA is the package and class name. That can be stored in some kind of generated MyClassesRegistrar class placed in a known package. Use suffixes to avoid name clashes, get the registrars by package, instantiate them, and use the data they carry.
Long version
First of all, you will NOT be able to include your compile-time-only dependency ONLY in the topmost module (let's call it the "app" module, as in a typical Android project structure). Annotation processing just does not work that way and, as far as I could find out, nothing can be done about this.
Now to the details. My task was this:
I have human-written annotated classes; I'll name them "events". At compile time I need to generate helper classes for those events to incorporate their structure and content, both statically available (annotation values, constants, etc.) and runtime available (I pass event objects to the helpers when using them). A helper's class name depends on the event class name plus a suffix, so I don't know it until code generation has finished.
So after the helpers are generated, I create a factory and generate code that provides a new helper instance for a given MyEvent.class. Here's the problem: I only needed one factory, in the app module, but it should be able to provide helpers for events from library modules too, and this can't be done straightforwardly.
What I did was:
skip generating the factory for modules that my app module depends upon;
in non-app modules, generate a so-called HelpersRegistrar implementation(s):
– they all share the same package (you'll see why later);
– their names don't clash, thanks to a suffix (see below);
– differentiation between the app module and a library module is done via the javac "-Amylib.suffix=MyModuleName" param, which the user MUST set (a limitation, but a minor one). No suffix is specified for the app module;
– the generated HelpersRegistrar implementation can provide all I need for the future factory code generation: event class name, helper class name, and package (the event and helper share a package for package visibility), all as Strings in a POJO;
in the app module, I generate helpers as usual, then obtain the HelpersRegistrars by their package, instantiate them, and run through their content to enrich my factory with code that provides helpers from the other modules. All I needed for this was the class names and a package.
Voilà! My factory can provide instances of helpers both from app module and from other modules.
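The app-module side of this pattern can be sketched with plain reflection. Everything below is invented for illustration: the registrar interface, the naming convention, and the sample "generated" registrar (hand-written here to stand in for a processor's output):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class RegistrarLookup {
    // Contract that every generated registrar implements.
    public interface HelpersRegistrar {
        Map<String, String> eventToHelper(); // event FQCN -> helper FQCN
    }

    // Loads registrars by convention: a fixed base class name plus a per-module suffix.
    static List<HelpersRegistrar> loadRegistrars(String baseName, List<String> moduleSuffixes) {
        List<HelpersRegistrar> registrars = new ArrayList<>();
        for (String suffix : moduleSuffixes) {
            try {
                Class<?> clazz = Class.forName(baseName + suffix);
                registrars.add((HelpersRegistrar) clazz.getDeclaredConstructor().newInstance());
            } catch (ReflectiveOperationException e) {
                // That module generated no registrar; skip it.
            }
        }
        return registrars;
    }
}

// What a processor in a library module might emit (hand-written here for the sketch).
class HelpersRegistrarFeature implements RegistrarLookup.HelpersRegistrar {
    public Map<String, String> eventToHelper() {
        return Collections.singletonMap("com.example.LoginEvent", "com.example.LoginEventHelper");
    }
}
```

The generated factory in the app module would then iterate over eventToHelper() of every registrar it found and emit one instantiation line per entry.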
The only uncertainty left is the order of creating and running processor-class instances in the app module and in other modules. I have not found any solid info on this, but running my example shows that the compiler (and, therefore, code generation) first runs in the module that we depend upon, and then in the app module (otherwise compilation of the app module would fail). This gives us reason to expect a known order of processor executions across modules.
Another, slightly similar approach: skip the registrars, generate factories in all modules, and make the factory in the app module use the other factories, which you find and name the same way as the registrars above.
Example can be seen here: https://github.com/techery/janet-analytics - this is a library where I applied this approach (the one without registrars since I have factories, but that can be not the case for you).
P.S.: the suffix param can be switched to a simpler "-Amylibraryname.library=true", and factory/registrar names can be autogenerated/incremented.
Instead of using the Filer to save the generated file, use regular Java file writing. You will need to serialize objects to temp files during processing, because even static variables won't survive between modules. Configure Gradle to delete the temp files before compilation.
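A minimal sketch of that temp-file persistence, assuming one plain-text file listing fully qualified class names (the store location and format are invented for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.TreeSet;

public class AnnotatedClassStore {
    private final Path store;

    public AnnotatedClassStore(Path store) {
        this.store = store;
    }

    // Merge newly discovered class names with whatever earlier rounds/modules already recorded.
    public void add(Collection<String> classNames) throws IOException {
        TreeSet<String> all = new TreeSet<>(load());
        all.addAll(classNames);
        Files.write(store, all);
    }

    // Everything recorded so far, possibly by another module's compilation.
    public List<String> load() throws IOException {
        return Files.exists(store) ? Files.readAllLines(store) : Collections.emptyList();
    }
}
```

Deleting the store file at the start of a clean build is exactly the "configure Gradle to delete the temp files" step: it prevents stale entries from removed classes surviving across builds.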

How do I get all the class names and method names of a project?

I have downloaded a huge project written in Java. I wish to know the classes, and the methods of every class, available in the project (for further analysis). How can I recover this information? Can I try javadoc in Eclipse?
I guess you may ask about changing SVN properties.
Follow these steps if so:
press Alt + Shift + Q
select Show View (view: Outline)
then under that you can see all the details
I wrote a custom doclet to list each class name and its methods:
public class ListClassAndMethods {
    public static boolean start(RootDoc root) {
        ClassDoc[] classes = root.classes();
        for (ClassDoc clazz : classes) {
            System.out.println("Class Name: " + clazz);
            System.out.println("--------------------------");
            for (MethodDoc methodz : clazz.methods()) {
                System.out.println(methodz.name());
            }
        }
        return true;
    }
}
You need to create a jar of this class and reference it when generating Javadoc using the Eclipse IDE.
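Note that the old com.sun.javadoc doclet API used above was deprecated in JDK 9 and removed in newer JDKs. If the classes are already compiled and on the classpath, plain reflection produces the same listing; a minimal sketch (class names are passed as arguments):

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class MethodLister {
    // Collects the declared method names of a class, mirroring the doclet's output.
    static List<String> declaredMethodNames(Class<?> clazz) {
        List<String> names = new ArrayList<>();
        for (Method method : clazz.getDeclaredMethods()) {
            names.add(method.getName());
        }
        return names;
    }

    public static void main(String[] args) throws ClassNotFoundException {
        for (String className : args) {
            Class<?> clazz = Class.forName(className);
            System.out.println("Class Name: " + clazz.getName());
            System.out.println("--------------------------");
            declaredMethodNames(clazz).forEach(System.out::println);
        }
    }
}
```

For a whole project's jar you would enumerate its entries (e.g. with java.util.jar.JarFile) and feed each class name through the same loop.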
I would extract all the class source files (.java) with find (if you're on a *nix implementation) and create an empty NetBeans project with just one package and all the classes inside it. NetBeans will correct the package declarations, and you can easily auto-generate Javadoc to get a navigable web archive listing all the classes and public/protected methods.
Of course the code may not run anymore but you'll get what you want in minutes.

When will the "jar" command refuse to add a class to a .jar file?

I have 204 total classes (most of the classes are inner classes). For months, I have been building fine with SCons (SCons just calls the jar command).
For some reason, it stopped adding the last inner class for a particular class. For example, suppose I have the following classes:
class1
class2
class3
class4
class5
class6
...
class79
class80
Before this last change, SCons would jar everything fine. But NOW... it specifically does not add class80 to its jar command (I see the omission of class80 in the jar command).
Is there an instance where the jar command just ignores certain classes?
----------- EDIT. I found the culprit. For some reason this inner class is not recognized by SCons!
vehicleFilter = new RowFilter<Object, Object>() {
    public boolean include(Entry<? extends Object, ? extends Object> entry) {
        return false;
    }
};
You need to add JAVAVERSION='1.6' as an argument to your env.Java() call:
env.Java(target='classes', source='src', JAVAVERSION='1.6')
Without this, if you're compiling with a current javac, SCons won't determine the correct names for anonymous inner classes, so when those bad class file names get passed to jar, it will fail.
Rather than pass a whole list of class files to the Jar command, you can pass a directory. This avoids problems with SCons's java parser as SCons will scan the directory for files and jar up anything it finds.
Something like the following will compile files in "src" to the directory "classes", then create a jar from the contents of "classes":
env = Environment(tools=['javac', 'jar'])
env.Java(target='classes', source='src')
env.Jar(target='foo.jar', source=['classes', 'Manifest.txt'],
JARCHDIR='$SOURCE')
The manifest file "Manifest.txt" is in the root of your project here. The only requirement is that it begins with the text "Manifest-Version".
SCons may construct a command line listing every class to jar, and that command line may get too long (either a platform limitation or a heuristic inside SCons).
You need to peek inside the SCons package to see what goes on.
Any particular reason you don't just use ant?
