General question
I have two projects A and B; B has a dependency on A. I want to generate some code in B with an Annotation Processor, based on annotations on objects in A. When I run the compilation with the correct Processor implementation, only the annotated objects from B are picked up.
I understand that scanning other JARs must be disabled by default, because you usually don't want to do an annotation scan for all your dependencies. I also understand that it may be impossible to do what I want to do because of compiler magic - which I don't know a lot about - but I'm hoping it's not.
Specific case
My projects are called DB and WEB. WEB obviously depends on DB for its JPA access; this is configured in Maven. Due to a number of architectural choices, DB must remain a separate JAR. DB doesn't use Spring except for some annotations which are consumed by WEB; WEB uses Spring MVC.
I'm trying to generate the CrudRepository interfaces for all my JPA entities with an Annotation Processor. The @Repository objects are supposed to go in a repo package in the WEB project, so they can be used with @Autowired anywhere in my WEB application. The annotation I'm performing the scan for is @javax.persistence.Entity, but I've also tried a custom annotation, with the same results.
@SupportedAnnotationTypes("javax.persistence.Entity")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class RepositoryFactory extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (Element e : roundEnv.getElementsAnnotatedWith(Entity.class)) {
            if (e.getKind() != ElementKind.CLASS) {
                continue;
            }
            // TODO: implement logic to skip manually implemented Repos
            try {
                String name = e.getSimpleName().toString();
                TypeElement clazz = (TypeElement) e;
                JavaFileObject f = processingEnv.getFiler()
                        .createSourceFile("blagae.web.repo." + name + "Repo");
                try (Writer w = f.openWriter()) {
                    PrintWriter pw = new PrintWriter(w);
                    pw.println("package blagae.web.repo;");
                    pw.println("import org.springframework.data.repository.CrudRepository;");
                    pw.printf("import %s;\n", clazz.toString());
                    pw.println("import org.springframework.stereotype.Repository;");
                    pw.println("@Repository");
                    pw.printf("public interface %sRepo extends CrudRepository<%s, Long> {}\n", name, name);
                    pw.flush();
                }
            } catch (IOException ex) {
                Logger.getLogger(RepositoryFactory.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
        return false;
    }
}
Ideally, I'd love for someone to tell me about an annotation that would be as simple as
@ComponentScan(basePackages = "blagae.db.*")
But of course, I'm not counting on that because it would probably be documented somewhere. As a workaround, I could just add the Spring dependency to the db and generate the classes there, but they only serve a purpose in the Spring MVC app. I'm also wary of the config it might take to make this work.
UPDATE
Some extra info: I'm using the maven-processor-plugin, which I've verified to work well in the WEB project for classes that are defined there. However, I specifically want to access classes annotated in the dependency project DB. I have looked into the method AbstractProcessor::getSupportedOptions, but it's unclear to me what I could do there.
Maven config:
<plugin>
    <groupId>org.bsc.maven</groupId>
    <artifactId>maven-processor-plugin</artifactId>
    <version>2.2.4</version>
    <configuration>
        <processors>
            <processor>blagae.utils.RepositoryFactory</processor>
        </processors>
    </configuration>
    <executions>
        <execution>
            <id>process</id>
            <goals>
                <goal>process</goal>
            </goals>
            <phase>generate-sources</phase>
        </execution>
    </executions>
</plugin>
SUGGESTION
Another random thought I had would be to run a JavaCompiler process for the DB project inside WEB, but how would I inject my Processor?
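For what it's worth, the javax.tools.JavaCompiler API does allow injecting a processor explicitly via CompilationTask.setProcessors, so the compiler never has to discover it from the classpath. A self-contained sketch, using a no-op processor as a stand-in for the RepositoryFactory from the question:

```java
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.TypeElement;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Set;

public class CompileWithProcessor {

    // Stand-in for RepositoryFactory; does nothing but gets invoked by the compiler.
    @SupportedAnnotationTypes("*")
    public static class NoOpProcessor extends AbstractProcessor {
        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
            return false;
        }

        @Override
        public SourceVersion getSupportedSourceVersion() {
            return SourceVersion.latestSupported();
        }
    }

    // Compile one source file with an explicitly injected annotation processor.
    public static boolean compile(Path source) {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        try (StandardJavaFileManager fm = compiler.getStandardFileManager(null, null, null)) {
            Iterable<? extends JavaFileObject> units = fm.getJavaFileObjects(source.toFile());
            JavaCompiler.CompilationTask task = compiler.getTask(null, fm, null, null, null, units);
            task.setProcessors(List.of(new NoOpProcessor())); // <-- processor injection
            return task.call();
        } catch (IOException e) {
            return false;
        }
    }

    // Writes a trivial source file to a temp dir and compiles it.
    public static boolean demo() {
        try {
            Path dir = Files.createTempDirectory("apt-demo");
            Path src = dir.resolve("Hello.java");
            Files.writeString(src, "public class Hello {}");
            return compile(src);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Pointing the compilation units at the DB sources this way would run the processor over them, but it amounts to re-compiling DB yourself, which is exactly what the answers below suggest doing via the build instead.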
An annotation processor runs during the compilation phase of your project (WEB in your case), and the compiler only compiles that project. The dependencies of the current project are already compiled, so the compiler (and as a result your annotation processor) doesn't touch (and has no access to) third-party libraries such as DB.
You can try to extract the annotation processor into a separate project/jar and use it in both the WEB and DB projects. In that case the annotation processor will create the CrudRepository interfaces during the compilation phase of each concrete project, and all the classes generated in the DB project will be available in WEB.
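For instance, after extracting the processor into its own module, both the WEB and DB poms could declare it; the coordinates below are placeholders:

```xml
<!-- in WEB/pom.xml and DB/pom.xml; groupId/artifactId are hypothetical -->
<dependency>
    <groupId>blagae.utils</groupId>
    <artifactId>repository-processor</artifactId>
    <version>1.0-SNAPSHOT</version>
    <!-- only needed at compile time -->
    <scope>provided</scope>
</dependency>
```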
Personally, I would extract the annotation processor in a separate maven module and add a dependency to it from the WEB module.
However, this doesn't matter that much for successfully triggering an annotation processor.
In order to have an annotation processor working, there are two things you have to provide:
a class that extends the javax.annotation.processing.AbstractProcessor class.
a special file, nested in the META-INF/services of the project.
Since you mentioned that currently no classes are generated, I would assume that you're missing the meta file. So, open your WEB project and navigate to the src/main/resources folder. Within, you have to create a META-INF folder with a nested services folder in it. Then, in services, create a file named javax.annotation.processing.Processor. The content of the file should list the fully-qualified class name(s) of your annotation processor(s). If there is more than one annotation processor, the fully-qualified class names should be on separate lines. But since you have just one, you'd have something like:
com.yourdomain.processor.RepositoryFactory
Note that you will have to change this line to the actual fully-qualified class name of your annotation processor.
In the end, you should end up with similar structure:
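A sketch of that layout (the original answer showed a screenshot of the project tree at this point):

```
src/main/resources
└── META-INF
    └── services
        └── javax.annotation.processing.Processor
```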
This meta file is important, because without it the compiler is not aware of any user-defined annotation processors. With it in place, the compiler will make use of all the registered processors.
After that, when you do a mvn clean install, all your modules will be cleaned and built. However, since the compiler will now be aware of your annotation processor, it will trigger it. All the generated sources will be located (by default) in the target/generated-sources folder. Moreover, they will all be under the package you've configured in the annotation processor, i.e. blagae.web.repo.
In order to use the generated sources within your code, you will have to add target/generated-sources to the project classpath. If you don't want to rely on the IDE to do this, you can extend the maven <build> by adding target/generated-sources to the classpath. Something like:
<build>
    <resources>
        ...
        <resource>
            <directory>${project.build.directory}/generated-sources</directory>
        </resource>
    </resources>
</build>
In your project A include the META-INF/beans.xml file, which will contain the following:
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       version="1.1" bean-discovery-mode="all">
</beans>
and give it a try. You should use JavaEE 7/CDI 1.1. Refer to Java EE 7 Deployment Descriptors for more information.
You can also refer to this related question: How to #Inject object from different project module included as jar
Related
We have a multi-module Maven project in git, developed in IntelliJ. We use hibernate-jpamodelgen for the Criteria Builder API.
We have a web project that uses the entities library as a Maven dependency; the annotated classes are generated in the entities module's target folder.
When launching install and test from the mvn terminal it works like a charm, but the issue is that if we want to debug a test, we have to launch it with the
-Dmaven.surefire.debug
option and attach a remote debugger. That takes time and is not efficient. The problem is that when we try to launch with the right Maven configuration (in IntelliJ), or directly in the class by clicking the "play" button next to the method name, it does not work: it creates all the annotated classes in the web project, but with empty bodies.
For example, here is what is in the dependency lib:
@Generated(value = "org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor", date = "2022-12-30T10:11:22.168+0400")
@SuppressWarnings({ "deprecation", "rawtypes" })
@StaticMetamodel(FixedAsset.class)
public abstract class FixedAsset_ extends com.seanergie.persistence.ObjectWithUnidAndVersion_ {
    public static volatile SingularAttribute<FixedAsset, LocalDate> purchaseDate;
    public static volatile SingularAttribute<FixedAsset, String> serialNumber;
    public static volatile SingularAttribute<FixedAsset, MutableMoney> cost;
    public static volatile SingularAttribute<FixedAsset, String> notes;
    public static volatile SingularAttribute<FixedAsset, FixedAssetFilesStore> filesStore;
and here is what is created in the other dependency's target annotated-classes folder if I try to launch from IntelliJ:
@Generated(value = "org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor", date = "2022-12-30T10:12:05.141+0400")
@SuppressWarnings({ "deprecation", "rawtypes" })
@StaticMetamodel(FixedAsset.class)
public abstract class FixedAsset_ extends com.seanergie.persistence.ObjectWithUnidAndVersion_ {
}
As you can see, the class is empty, so compilation does not work.
We should be able to launch directly without creating a test configuration and modifying it (we have more than 100 test classes, so we can't create a conf for each one).
We also see that it tries to compile all projects on each test run.
We tried adding this to the pom.xml:
<dependency><!-- For launching tests directly from the IntelliJ play button -->
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-jpamodelgen</artifactId>
    <exclusions>
        <exclusion>
            <groupId>com.sun.activation</groupId>
            <artifactId>jakarta.activation</artifactId>
        </exclusion>
    </exclusions>
    <scope>provided</scope>
</dependency>
We also tried deactivating and activating the annotation processor in the IntelliJ settings, but nothing works.
Here is the Hibernate mapping in the web project:
<properties>
    <mainClass>com.intranet.Main</mainClass>
    <datasource.uri>jdbc:postgresql://localhost:5432/intranet?charSet=utf-8&amp;ApplicationName=${DATASOURCE_APPLICATION_NAME}</datasource.uri>
    <datasource.pool_size.min>15</datasource.pool_size.min>
    <datasource.pool_size.max>30</datasource.pool_size.max>
    <hibernate.mapping.files>
        <![CDATA[
        <mapping-file>com/intranet-base.entities.xml</mapping-file>
        <mapping-file>com/intranet-webapp.entities.xml</mapping-file>
        <mapping-file>com/intranet-intranet.entities.xml</mapping-file>
        <mapping-file>com/intranet-webapp.entities.xml</mapping-file>
        <mapping-file>com/intranet2-intranet.entities.xml</mapping-file>
        ]]>
    </hibernate.mapping.files>
    <databaseInitializationHook.class>com.intranet.persistence.initialization.DatabaseInitializationHook</databaseInitializationHook.class>
    <test.databaseInitializationHook.class>com.intranet.persistence.initialization.DatabaseInitializationHook</test.databaseInitializationHook.class>
</properties>
So finally: when we launch mvn clean install, even from IntelliJ (not only the terminal), it creates all the annotated classes correctly in each library where we defined entities. But when we launch a test, it creates the same classes a second time, with empty bodies, in projects where the entities are not defined (but which use the others as dependencies!).
What is the right way to make this work?
I do believe I have solved your puzzle.
The appearance of "empty" metamodel classes could be caused by one of the following:
one of the many IDEA plugins interferes with the compile process and, obviously, fails
the jpamodelgen annotation processor performs extra work
And the last reason seems to be the actual one:
persistenceXml:
Per default the processor looks in /META-INF for persistence.xml. With this option, a persistence.xml file from a different location can be specified (it has to be on the classpath)
ormXml:
Allows to specify additional entity mapping files. The specified value for this option is a comma separated string of mapping file names. Even when this option is specified /META-INF/orm.xml is implicit.
fullyAnnotationConfigured:
If set to true the processor will ignore orm.xml and persistence.xml.
I have checked, and indeed placing a META-INF/orm.xml with "foreign" entities defined into the resources folder causes jpamodelgen to generate empty metamodel classes - to me it looks like a bug.
You have the following options (which one depends on your project structure):
remove jpamodelgen from the module's dependencies if the module does not define entity classes; also make sure the dependency on jpamodelgen is defined as non-transitive, i.e. <scope>provided</scope>
rename the extra xml mapping files - if you are on Spring, it is possible to specify those mappings via spring.jpa.mapping-resources
configure the annotation processor as described in the JBoss guide
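If you are on Spring Boot, the mapping-resources option above might look like this in application.properties (the file names are placeholders):

```properties
# replaces the implicit META-INF/orm.xml lookup with explicitly listed files
spring.jpa.mapping-resources=META-INF/base.entities.xml,META-INF/webapp.entities.xml
```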
UPD: based on @cyril's comment.
First of all, defining dependencies in a parent pom (rather than just imposing versions via <dependencyManagement>) is a terrible idea - you have no chance to exclude such a dependency in child modules, so your parent pom is definitely broken.
However, I do believe you can meet your objectives (i.e. running tests via "green play button") even with broken parent pom - just follow JBoss guide and configure maven-compiler-plugin like:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <compilerArgs>
            <arg>-AfullyAnnotationConfigured=true</arg>
        </compilerArgs>
    </configuration>
</plugin>
IntelliJ does recognise such configuration.
We want to make a plugin-type main program based on Spring. The main program can load other Spring jars and non-Spring jars as plugins. Each plugin is based on IPlugin, and the plugin's IPlugin class is the same as the main program's IPlugin class.
We got the non-Spring plugins to work via URLClassLoader, but that way does not work for the Spring plugin.
In the TestPlugin project, the implementation is named PluginTest and executes SpringApplication.run(PluginTest.class, args); in the init(String[]) function.
A ClassNotFoundException occurred when loading the class PluginTest (because of the structure of Spring jars):
String pluginClassName = "com.example.demo.PluginTest";
c = newClassLoader.loadClass(pluginClassName);
So we replaced the pom's plugins section with the following:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>3.1.0</version>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <finalName>${project.artifactId}-${project.version}</finalName>
        <appendAssemblyId>false</appendAssemblyId>
        <attach>false</attach>
        <archive>
            <manifest>
                <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
            </manifest>
        </archive>
    </configuration>
    <executions>
        <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>
But we got another error: Caused by: java.lang.NoClassDefFoundError: com/zaxxer/hikari/HikariConfig (JDBC is used in the test project).
I don't know how to properly load other Spring jars at run time.
Whole code of the plugin loader:
@Component
public class Initializer implements ApplicationRunner
{
    @Override
    public void run(ApplicationArguments args)
    {
        String jarPath = "e:/tmp/demo-0.0.1-SNAPSHOT.jar";
        File file = new File(jarPath);
        IPlugin p;
        try
        {
            ClassLoader oldClassLoader = Thread.currentThread().getContextClassLoader();
            URLClassLoader newClassLoader = new URLClassLoader(new URL[]{file.toURI().toURL()}, oldClassLoader);
            Thread.currentThread().setContextClassLoader(newClassLoader);
            String pluginClassName = "com.example.demo.PluginTest";
            Class<?> c = newClassLoader.loadClass(pluginClassName);
            Object pluginTest = c.getDeclaredConstructor().newInstance(); // Class.newInstance() is deprecated
            p = (IPlugin) pluginTest;
            p.init(new String[]{
                    "--spring.config.location=e:/tmp/application.properties"
            });
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
Thanks!
In short - if your requirement is to load external jars at runtime and extend your Spring application context with them - then this is not possible, for a couple of reasons:
The Spring Application Context is built at application startup by scanning the Classpath, instantiating all beans and wiring them to each other (plus - doing a lot of fancy stuff in addition). Once setup, that application context is mostly static and will not be reevaluated.
It's possible to change a whole lot in Spring via configuration as code (e.g. how database transactions work, or enabling/disabling certain Spring features). Therefore, modifying that configuration at runtime would be extremely hard, as all possible combinations of changes would need to be considered - and back-integrated into the existing context. In addition, there are also potential error cases that can't be resolved while the application is running (e.g. consider introducing a circular dependency: that would normally prevent the application from even starting up - but now the application is already started - what should happen?)
There are massive security and stability issues with dynamically loading and executing code from external jars at runtime.
Even if Spring would be able to take care of all these things, there'd still be the problem that the application would need to be implemented dynamically. (e.g. do not cache references to other beans or information locally. also consider that beans may just not be there yet)
In short, Spring is not designed for that kind of dynamicity. If that is what you really need, consider application platforms that are more suitable for that sort of requirement. OSGI (Spring DM was built on OSGI) might be a solution, but be warned that building OSGI applications involves gigantic complexity and overhead, which the platform requires from an application developer in order to solve the challenges mentioned above.
I would instead really recommend evaluating whether you can work with a model that can live without dynamic code loading, as this makes things a lot easier. For instance, Spring has a very usable auto-configuration system built in that requires absolutely minimal overhead. What's necessary, though, is that your libraries are present in the classpath at runtime. For more details, you can read the documentation here.
I want to write a piece of Java code which can be executed with 2 different kinds of dependencies (or versions of a dependency) - namely org.apache.poi. The code must run on a system with version 2 as well as version 3 of org.apache.poi.
Unfortunately, between versions 2 & 3 some interfaces have changed, the code must be built slightly differently, and there is no way to upgrade both systems to the same org.apache.poi version.
So my questions are:
Is there a way to compile the code with both versions to not run into compiler errors?
Is there a way to execute the right code based on the available org.apache.poi version?
What would be an appropriate approach to solve this issue?
As an amendment:
I'm building a code which shall work for two applications which provides an interface in different versions (maven scope of the dependency is provided).
If I have both dependencies in maven, it takes one of them, and the if clauses fail to compile because Cell.CELL_TYPE_STRING or CellType.STRING is not available in the chosen dependency.
And I would like to have the code working regardless of which dependency is plugged in the application.
// working with old poi interface
if (cell != null && cell.getCellType() == Cell.CELL_TYPE_STRING
&& cell.getRichStringCellValue().getString().trim().equals(cellContent)) {
return row;
}
// working with new poi interface
if (cell != null && cell.getCellType() == CellType.STRING
&& cell.getRichStringCellValue().getString().trim().equals(cellContent)) {
return row;
}
This is probably opinion based, but it seems legit.
First, you will have to create a common interface that you will use to do your job.
Second, you will have to create adapter classes that implement that interface and do the required job using a particular version of the POI library.
Third, you will write an adapter factory that returns the proper adapter instance.
The adapter itself should provide an isSupported method that detects whether the given adapter can be used, based on what kind of POI classes are currently loaded (detected by reflection - there must be some version-specific classes or other markers).
Then you will put each adapter into a separate maven module, so each module can be compiled independently (thus you will have no class conflicts). Each module will have the POI dependency in "provided" scope, in the version that this adapter is going to support.
Either each module registers itself with the factory in your main module, or the factory itself detects all adapters that are available (like @ComponentScan in Spring).
Then you will pack everything into a single app bundle. The main module will use only the common interface. All in all, it will be a kind of extensible plugin system.
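A minimal sketch of the interface/adapter/factory idea, using the marker names from the question (the Cell.CELL_TYPE_STRING constant for the old API, the CellType enum for the new one) as the version probes; the adapters here are stubs, the real ones would live in their own modules:

```java
import java.util.List;
import java.util.Optional;

// Common interface the rest of the code base programs against.
interface PoiAdapter {
    boolean isSupported(); // can this adapter run against the POI version on the classpath?
    String name();
}

// Probes for the old API: Cell.CELL_TYPE_STRING was a constant field.
class OldPoiAdapter implements PoiAdapter {
    @Override
    public boolean isSupported() {
        try {
            Class.forName("org.apache.poi.ss.usermodel.Cell").getField("CELL_TYPE_STRING");
            return true;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    @Override
    public String name() { return "poi-old"; }
}

// Probes for the new API: the CellType enum only exists in newer POI versions.
class NewPoiAdapter implements PoiAdapter {
    @Override
    public boolean isSupported() {
        try {
            Class.forName("org.apache.poi.ss.usermodel.CellType");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    @Override
    public String name() { return "poi-new"; }
}

public class PoiAdapterFactory {
    // Returns the first adapter whose marker classes are present on the classpath.
    public static Optional<PoiAdapter> pick() {
        return List.<PoiAdapter>of(new NewPoiAdapter(), new OldPoiAdapter())
                .stream()
                .filter(PoiAdapter::isSupported)
                .findFirst();
    }

    public static void main(String[] args) {
        System.out.println(pick().map(PoiAdapter::name).orElse("no POI on classpath"));
    }
}
```

With no POI jar on the classpath at all, pick() comes back empty, which is exactly the signal the factory needs to fail fast with a clear error.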
I do not think there is a single "best way".
Nonetheless, we faced a similar issue in a few of our apps that share a common library. I ended up with a variant of @Antoniossss's approach, except that the library itself does not use dependency injection (the parent app may or may not, but the library is free of it).
To be more specific, and due to transitive dependencies, some of our apps need a certain version of Apache Lucene (e.g. 7.x.y or later) and others are stuck on older versions (5.5.x).
So we needed a way to build one of our libs against those versions, using maven in our case.
What we ended up with uses the following principles:
We share some code, which is common between all versions of Lucene
We have specific code, for each target version of Lucene that has an incompatible API (e.g. package change, non existing methods, ...)
We build as many jars as there are supported versions of lucene, with a naming scheme such as groupId:artifact-luceneVersion:version
Where the lib is used, direct access to the Lucene API is replaced by access to our specific classes
For example, in Lucene v5 there is an org.apache.lucene.analysis.synonym.SynonymFilterFactory facility. In v7 the same functionality is implemented using org.apache.lucene.analysis.synonym.SynonymGraphFilterFactory - i.e. same package, but a different class.
What we end up with is providing a com.mycompany.SynonymFilterFactoryAdapter. In the v5 JAR, this class extends the Lucene v5 class, and respectively for v7 or any other version.
In the final app, we always instantiate the com.mycompany object, which behaves just the same as the native org.apache class.
Project structure
The build system being maven, we build it as follow
project root
|- pom.xml
|-- shared
|---|- src/main/java
|---|- src/test/java
|-- v5
|---|- pom.xml
|-- v7
|---|- pom.xml
Root pom
The root pom is a classic multimodule pom, but it does not declare the shared folder (notice that the shared folder has no pom).
<modules>
    <module>v5</module>
    <module>v7</module>
</modules>
The shared folder
The shared folder stores all non-version-specific code and the tests. On top of that, when a version-specific class is needed, it does not code against the API of this class (e.g. it does not import org.apache.VersionSpecificStuff), it codes against com.mycompany.VersionSpecificStuffAdapter.
The implementation of this Adapter is left to the version-specific folders.
Version specific folders
The v5 folder declares in its artifact id the Lucene version it compiles against, and of course declares it as a dependency:
....
<artifactId>myartifact-lucene-5.5.0</artifactId>
....
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers-common</artifactId>
    <version>5.5.0</version>
</dependency>
But the real "trick" is that it declares an external source folder for classes and tests using the build-helper-maven-plugin : see below how the source code from the shared folder is imported "as if" it was from this project itself.
<build>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>build-helper-maven-plugin</artifactId>
            <version>3.0.0</version>
            <executions>
                <execution>
                    <id>add-5.5.0-src</id>
                    <phase>generate-sources</phase>
                    <goals>
                        <goal>add-source</goal>
                    </goals>
                    <configuration>
                        <sources>
                            <source>../shared/src/main/java</source>
                        </sources>
                    </configuration>
                </execution>
                <execution>
                    <id>add-5.5.0-test</id>
                    <phase>generate-test-sources</phase>
                    <goals>
                        <goal>add-test-source</goal>
                    </goals>
                    <configuration>
                        <sources>
                            <source>../shared/src/test/java</source>
                        </sources>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
For the whole implementation to work, it provides the Adapter implementations in its own source folder src/main/java, e.g.
package com.mycompany;

public class VersionSpecificStuffAdapter extends org.apache.VersionSpecificStuff {
}
If both the v5 and the v7 package do it the same way, then client code using the com.mycompany.xxxAdapter will always compile, and under the hood, get the corresponding implementation of the library.
This is one way to do it. You can also, as already suggested, define your whole new interfaces and have clients of your lib code against your own interface. This is kind of cleaner, but depending on the case, may imply more work.
In your edit, you mention referring to constants that are not defined the same way across versions, e.g. CellType.TYPE_XX.
In the version-specific code, you could produce another constant MyCellType.TYPE_XX that duplicates the actual constant under a stable name.
In the case of an enum, you could create a CellTypeChecker util with a method isCellTypeXX(cell) that is implemented in a version-specific way.
v7 folder
It's pretty much the same structure, you just swap what changed between v5 and v7.
Caveats
This may not always scale.
If you have hundreds of types you need to adapt, this is cumbersome to say the least.
If you have 2 or more libs you need to cross-compile against (e.g. mylib-poi-1.0-lucene-5.5-guava-19-....) it's a mess.
If you have final classes to adapt, it gets harder.
You have to test to make sure every JAR has all the adapters. I do that by testing each Adapted class in the shared test folder.
First of all, I have a multi-module maven hierarchy like that:
├── project (parent pom.xml)
│ ├── service
│ ├── api-library
So now to the problem:
I am writing a JAX-RS Endpoint in the service module which uses classes in the api-library.
When I start quarkus, I am getting this warning:
13:01:18,784 WARN [io.qua.dep.ste.ReflectiveHierarchyStep] Unable to properly register the hierarchy of the following classes for reflection as they are not in the Jandex index:
- com.example.Fruit
- com.example.Car
Consider adding them to the index either by creating a Jandex index for your dependency or via quarkus.index-dependency properties.
These two classes, com.example.Fruit and com.example.Car, are located in the api-library module.
So I think I need to add them to the Jandex index-dependency in the application.properties.
But how can I add Jandex index-dependencies into quarkus?
Quarkus automatically indexes the main module but, when you have additional modules containing CDI beans, entities, objects serialized as JSON, you need to explicitly index them.
There are a couple of different (easy to implement) options to do so.
Using the Jandex Maven plugin
Just add the following to the additional module pom.xml:
<build>
    <plugins>
        <plugin>
            <groupId>org.jboss.jandex</groupId>
            <artifactId>jandex-maven-plugin</artifactId>
            <version>1.2.3</version>
            <executions>
                <execution>
                    <id>make-index</id>
                    <goals>
                        <goal>jandex</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
It's the most beneficial option if your dependency is external to your project and you want to build the index once and for all.
Using the Gradle Jandex plugin
If you are using Gradle, there is a third party plugin allowing to generate a Jandex index: https://github.com/kordamp/jandex-gradle-plugin .
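A minimal build.gradle sketch for the dependency module; the plugin id comes from that project's README, and the version below is a placeholder, so check the repository for the current release:

```groovy
plugins {
    id 'org.kordamp.gradle.jandex' version '1.1.0'
}
```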
Adding an empty META-INF/beans.xml
If you add an empty META-INF/beans.xml file in the additional module's src/main/resources, the classes will also be indexed.
In this case, the classes will be indexed by Quarkus itself.
Indexing other dependencies
If you can't modify the dependency (think of a third-party dependency, for instance), you can still index it by adding an entry to your application.properties:
quarkus.index-dependency.<name>.group-id=
quarkus.index-dependency.<name>.artifact-id=
quarkus.index-dependency.<name>.classifier=(this one is optional)
with <name> being a name you choose to identify your dependency.
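For example, to index the api-library module from the question (the coordinates are assumptions, substitute your own):

```properties
# "api" is just the chosen <name>
quarkus.index-dependency.api.group-id=com.example
quarkus.index-dependency.api.artifact-id=api-library
```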
Edit (11/02/2020)
Now in my microservices I am extensively using the targets property of the @RegisterForReflection annotation.
This is the property explanation according to the documentation:
/**
* Alternative classes that should actually be registered for reflection instead of the current class.
*
* This allows for classes in 3rd party libraries to be registered without modification or writing an
* extension. If this is set then the class it is placed on is not registered for reflection, so this should
* generally just be placed on an empty class that is not otherwise used.
*/
This works fine on quarkus-based projects and can handle the basic cases where you want to register a handful of POJOs for reflection. The @RegisterForReflection annotation will register the POJO itself, but is not going to register the return types of the POJO's methods.
A more advanced way is to use the @AutomaticFeature annotation as described here. I am using it with the Reflections library and with a custom-made utility wrapper: ReflectUtils
Now I can do more complex tasks:
@AutomaticFeature
@RegisterForReflection(targets = {
        com.hotelbeds.hotelapimodel.auto.convert.json.DateSerializer.class,
        TimeDeserializer.class,
        DateSerializer.class,
        TimeSerializer.class,
        RateSerializer.class,
})
public class HotelBedsReflection implements Feature {

    public static Logger log = Utils.findLogger(Reflections.class);

    @Override
    public void beforeAnalysis(BeforeAnalysisAccess access) {
        ReflectUtils.registerPackage(LanguagesRQ.class.getPackage().getName(), Object.class);
        ReflectUtils.registerPackage(AvailabilityRQ.class.getPackage().getName(), Object.class);
        ReflectUtils.registerPackage(Occupancy.class.getPackage().getName(), Object.class);
    }
}
Initial Answer
I tried adding a Jandex index, adding beans.xml, and also indexing other dependencies as described in @emre-işık's answer; however, my third-party class (EpAutomationRs) wasn't registered for reflection in native mode.
So I ended up with the quick and dirty solution for registering it (see below).
I created an unused REST JSON endpoint which returns the class.
/**
 * The purpose of this method is to register the EpAutomationRs class for reflection.
 *
 * @return
 */
@GET
@Path(GET_EMPTY_RS)
@Produces(MediaType.APPLICATION_JSON)
public EpAutomationRs entry() {
    return new EpAutomationRs();
}
For gradle users, you can use this plugin in the build.gradle of the module you depend on.
I am new to Hibernate and Spring Boot. My project deals with a search engine that is composed of 2 independent modules + 1 base module common to both (where the IndexSetup class resides).
There is one module for indexing (JavaFX) and another for searching via the web browser (Spring Boot).
The indexing module involves an "IndexSetup" class that has the details on how / what should be indexed :
@Entity
@Table(name = "IndexSetups")
@Access(AccessType.PROPERTY)
public class IndexSetup {

    private final SimpleIntegerProperty id = new SimpleIntegerProperty();

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO) // For H2, AUTO is required to auto increment the id
    public int getId() {
        return id.get();
    }

    //... other properties, getters and setters
}
So it works great, the data is indexed and can be retrieved via a search method within the indexing module.
However when I run the Spring Boot server and do the same search I get
java.lang.IllegalArgumentException: Not an entity: class my.package.IndexSetup
By the way, there is no build error. Before the modules were parts of a parent pom project, they were in the same project, with the server class in a subfolder, and it worked. I decided to separate them for convenience during development and to offer two independent modules in production.
So why did it work when everything was under the same Netbeans project, and why, now that the modules are in 2 different subfolders (but in the same group id package "my.package"), do I get this "Not an entity"? What should I do to solve this - where should I look?
Please note: I already tried this, without success ("null pointer exception, cannot load the database").
Edit 1:
I also tried to add @EntityScan, following this, but I still get Not an entity: class my.package.IndexSetup:
@SpringBootApplication
@EntityScan(basePackages = {"my.package"})
public class ServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServerApplication.class, args);
    }
}
Edit 2:
The architecture of the project is as follows:
- Parent project (my.package)
  |- Module Base (with the IndexSetup class)
  |- Module Indexing (that depends on Base)
  |- Module Server (that also depends on Base)
The parent pom.xml reads as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>my.package</groupId>
    <artifactId>MyApp</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <!--According to https://stackoverflow.com/questions/10665936/maven-how-to-build-multiple-independent-maven-projects-from-one-project-->
    <modules>
        <module>Base</module> <!-- Common resources, a dependency of Indexer and Server -->
        <module>Indexer</module> <!-- Indexing part with JavaFx -->
        <module>Server</module> <!-- Server (Spring Boot) part -->
    </modules>
    <name>MyApp</name>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <compilerArguments>
                        <bootclasspath>${sun.boot.class.path}${path.separator}${java.home}/lib/jfxrt.jar</bootclasspath>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.16</version>
                <configuration>
                    <additionalClasspathElements>
                        <additionalClasspathElement>${java.home}/lib/jfxrt.jar</additionalClasspathElement>
                    </additionalClasspathElements>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
Edit 3:
The problem originates from the line where the table to query is specified:
Root<IndexSetup> from = criteriaQuery.from(IndexSetup.class);
Looking at the Hibernate sources, "Not an entity" is thrown whenever entityType == null. So I don't understand why the entity type is null here, whereas it works outside of Spring Boot.
Edit 4:
If I remove SpringApplication.run(ServerApplication.class, args); from the Server class's main method, then the same call that was causing the issue, i.e.:
LocalDatabase.getInstance(false) // no GUI
.getAllIndexSetups();
now works perfectly. Of course, that does not solve anything, since I still need Spring Boot for the search! So for me it means that Spring Boot does not pick up the Hibernate configuration. I opened a new question to present the problem more accurately.
Any help appreciated,
I think you should add the package of your entities in the second project/module to your @EntityScan annotation.
First, some checks:
Is all your configuration built only with annotations in ServerApplication (or other Java classes), or are there other external configurations in XML/YML files? Maybe look for conflicts. It is preferable not to mix XML and annotation configuration, if possible.
Try to remove @Serializable (it is not really mandatory).
Try to move your entity to your root package (just as a test).
Check that @Entity is imported from the correct package.
Question: what do you call a "module"? Is it a subpackage, a Maven module, or something else? Could we have the package names involved?
Edit:
Since this is a multi-module project, did you follow the recommendations from spring.io about multi-module projects? Did you import the Spring BOM (or starter) in your submodules, and did you try the Spring Boot Maven Plugin?
Can you provide your application.properties (or application.yml) with your datasource configuration? You should check that your datasource (JPA, driver class, ...) is correctly defined; see spring.io.
It turned out that I was not using Spring Boot's capabilities correctly. Here are the steps I followed. Please keep in mind the architecture of the project:
- Parent maven project (my.package)
  |- Module Base (with the IndexSetup class and [initially] hibernate.cfg.xml in /resources; it also initially had the LocalDatabase class to access the local db via Hibernate)
  |- Module Indexing (that depends on Base)
  |- Module Server (that also depends on Base)
  |- Database file (myLocalDB)
1) First I removed hibernate.cfg.xml from Base and dropped it into the resources of the Indexing module, because Spring Boot has its own configuration mechanism. I also removed the LocalDatabase class from Base (since Spring Boot would not need it) and moved it into the Indexing module (where it is actually used).
2) Following [this](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-sql.html), I added spring-boot-starter-data-jpa to the Server module's pom.xml.
3) Following this tutorial, I created a JPA repository, IndexSetupRepository, with barely a single line of code:
public interface IndexSetupRepository extends CrudRepository<IndexSetup, Integer> {
}
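With that interface in place, Spring Data generates the implementation at runtime and the repository can be injected anywhere in the Server module. A minimal wiring sketch (the controller class and the /indexSetups path are illustrative, not from the question):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller showing the repository being autowired;
// findAll() is inherited from CrudRepository, no implementation needed.
@RestController
public class IndexSetupController {

    @Autowired
    private IndexSetupRepository repository;

    @GetMapping("/indexSetups")
    public Iterable<IndexSetup> all() {
        return repository.findAll();
    }
}
```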
4) In the Server application.properties I added these lines:
# Embedded database configuration
# The embedded database file is one level above the Server folder (ie directly within the parent project)
# we use the auto server mode to be able to use the database simultaneously in the indexer and the server
spring.datasource.url=jdbc:h2:file:../myLocalDB;AUTO_SERVER=TRUE
spring.datasource.username=myName
# This parameter helped me discover that Spring Boot was not targeting the right table name.
spring.jpa.hibernate.ddl-auto=validate
5) As Spring Boot was telling me it could not find a table named index_setup (note the camel case converted to _), I had to add this line to application.properties:
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
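The mismatch in step 5 comes from Spring Boot's default physical naming strategy, which converts camel case names to snake_case. A rough plain-Java illustration of that conversion (a hypothetical helper, not Hibernate's actual implementation):

```java
// Demonstrates why the entity "IndexSetup" is looked up as table "index_setup"
// under Spring Boot's default naming strategy.
public class NamingDemo {

    static String toSnakeCase(String name) {
        StringBuilder sb = new StringBuilder();
        for (char c : name.toCharArray()) {
            if (Character.isUpperCase(c)) {
                // Insert an underscore before every uppercase letter except the first
                if (sb.length() > 0) sb.append('_');
                sb.append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("IndexSetup")); // index_setup
    }
}
```

Switching to PhysicalNamingStrategyStandardImpl, as above, makes Hibernate use the name from @Table verbatim instead.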
6) Now, as I got "Entity not managed", I eventually added the @EntityScan annotation to the Server main class, as many of you advised:
@EntityScan("my.package.Entities")
Please note that @EntityScan should point to the package containing the entity class, not the entity class itself; i.e. @EntityScan("my.package.Entities.IndexSetup") did not work.