I am attempting to get a WAR file to run inside a Karaf OSGi container. The application runs correctly in stand-alone Jetty 6.1.26, but when it runs inside Karaf, I get the following exception and the Karaf instance freezes:
WARN org.hibernate.ejb.packaging.InputStreamZippedJarVisitor - Unable to find
file (ignored): bundle://125.0:240/ java.lang.NullPointerException: in is null
Note that the application does not rely on Hibernate in a separate OSGi bundle; it includes the Hibernate jars in WEB-INF/lib.
I have examined the information in this post: Equinox (OSGi) and JPA/Hibernate - Finding Entities. However, the application uses JPA rather than Hibernate directly. Its configuration is much like the second option described in this post: Difference between configuring data source in persistence.xml and in spring configuration files. As such, I don't have a handle on a Hibernate SessionFactory that would let me set the annotatedClasses property.
Any ideas on how to get past the exception?
I worked in parallel with the author, and I'll post our solution here for anyone who runs into this in the future.
The exception is thrown because Hibernate tries to unzip its jar to look for the persistence classes. As other posts mention, OSGi does not allow Hibernate to act as a classloader, so this fails. The solution was to specify by hand every class it needed to load and then tell it not to try to load anything else.
We used a persistence.xml file and an orm.xml file (we used the default names so we didn't have to specify either in our applicationContext.xml).
Our persistence.xml file simply pointed to the orm.xml using the <mapping-file> tag. It also included the <exclude-unlisted-classes/> tag to keep Hibernate from trying to load additional classes.
Our orm.xml file used <entity class="path.to.my.class" metadata-complete="false"/> to call out every entity class we needed to load. The metadata-complete="false" part tells Hibernate to use the annotations found in the class to complete the configuration.
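For illustration, a minimal sketch of the two files, assuming a JPA 2.0 persistence unit (the unit name, provider line, and entity class names are placeholders for whatever the real project uses):

persistence.xml:

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="my-pu">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <mapping-file>META-INF/orm.xml</mapping-file>
    <exclude-unlisted-classes/>
  </persistence-unit>
</persistence>

orm.xml:

<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm" version="2.0">
  <entity class="path.to.my.FirstEntity" metadata-complete="false"/>
  <entity class="path.to.my.SecondEntity" metadata-complete="false"/>
</entity-mappings>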
Related
Still getting this in my bare project (set up only for @Pattern exercises on @Entity classes, without any Spring framework or servlets):
Exception in thread "main" javax.validation.NoProviderFoundException: Unable to create a Configuration, because no Bean Validation provider could be found ...
I've decided to try it with the Jakarta libraries and found, in the Hibernate Validator reference manual, section 1.1.3 "Running with a security manager", some additional suggested configuration lines to put in the Java policy file
(see: https://docs.jboss.org/hibernate/stable/validator/reference/en-US/html_single/#section-getting-started-security-manager),
but before I begin configuration tests with my $JAVA_HOME/lib/security/default.policy file, I would like to gather some information: how do I refer, in this file, to the libraries I want to grant the appropriate accesses to?
Should I give the direct path to the jar files, or point only to the fully-qualified class name? And what is the correct syntax in default.policy?
I don't know what you think you are doing, but the error clearly says that there is no Bean Validation implementation available on the class-/module-path. The reason it can't find a provider depends on your project. Maybe you didn't specify it as a dependency? Anyway, if you need further help, you will have to post more information about your project and runtime setup.
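For what it's worth, the call below is the one that throws NoProviderFoundException when no provider can be located; the jars named in the comments are the usual suspects, but the exact artifacts and versions depend on your setup and are only assumptions here:

import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;

public class ValidationSmokeTest {
    public static void main(String[] args) {
        // Fails with NoProviderFoundException unless a Bean Validation provider
        // (e.g. the hibernate-validator jar) and an Expression Language
        // implementation (e.g. a GlassFish EL jar) are on the classpath.
        ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
        Validator validator = factory.getValidator();
        System.out.println("Provider in use: " + validator.getClass().getName());
        factory.close();
    }
}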
Scenario:
I'm supporting an enterprise application that runs in WildFly 10. The application (.war) uses J2EE technologies (EJBs, JPA, JAX-RS) and Spring Boot features (Spring MVC, Spring REST, Spring Data, Spring Data REST) ... Both stacks co-exist "happily" because they don't interact with each other; however, they do share common classes, such as utility or entity classes (both stacks map to the same database model). Why the application uses both stacks is outside the scope of the question.
Currently, I'm trying to improve the performance of a @RestController that pulls some data from the database using a Spring Data JPA repository. I found that we suffer from the N + 1 queries problem when calling the @RestController. In past projects (where there were only J2EE technologies), I have used the @BatchSize Hibernate annotation to mitigate this problem with total success.
But in this project, Spring seems to be skipping that annotation. How do I know? Because I turned on Hibernate SQL logging (hibernate.show_sql) and I can see the N + 1 queries still happening ...
Key Points:
Here are some insights about the application that you should know before providing (or trying to guess) any answer:
The application has many sub-modules packaged as libraries inside the WAR file (/WEB-INF/lib) ... Some of these libraries are the jars that contain the entity classes; others are the jars that contain the REST services (which may be JAX-RS services or Spring controllers).
The Spring configuration is done in classes defined in the WAR artifact: there we have a class (extending SpringBootServletInitializer) annotated with @SpringBootApplication, and another class (extending RepositoryRestConfigurerAdapter) annotated with @Configuration. Spring customization is done in the latter class.
The application works with multiple datasources, which are defined in the WildFly server. Spring Data JPA must route every query to the right datasource. To meet this requirement, the application (Spring) was configured like this:
@Bean(destroyMethod = "")
@ConfigurationProperties(prefix = "app.datasource")
public DataSource dataSource() {
    // The following class extends AbstractRoutingDataSource
    // and resolves datasources using JNDI names (the WildFly way!)
    return new DataSourceRouter();
}

@Bean("entityManagerFactory")
public LocalContainerEntityManagerFactoryBean getEntityManagerFactoryBean() {
    LocalContainerEntityManagerFactoryBean lemfb = new LocalContainerEntityManagerFactoryBean();
    lemfb.setPersistenceUnitName("abcd-pu");
    lemfb.setDataSource(dataSource());
    return lemfb;
}
The last @Bean declaration favors the use of a persistence.xml file, which we do have under /WEB-INF/classes/META-INF/ (i.e. Spring does find this file!) ... In that file we declare our domain classes, so that Spring JPA can see those entities. We can also define JPA properties such as hibernate.show_sql and hibernate.use_sql_comments without issues (this is how I detected the N + 1 queries problem in the first place) ...
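For reference, a stripped-down sketch of what that persistence.xml looks like (the unit name matches the Java configuration above; the entity class names are placeholders, and transaction/datasource details are omitted):

<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
  <persistence-unit name="abcd-pu">
    <class>com.example.domain.SomeEntity</class>
    <class>com.example.domain.OtherEntity</class>
    <properties>
      <property name="hibernate.show_sql" value="true"/>
      <property name="hibernate.use_sql_comments" value="true"/>
    </properties>
  </persistence-unit>
</persistence>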
What have I done so far?
I tried adding the @BatchSize annotation to the problematic collection. No luck!
I created a new JAX-RS service whose purpose was to mimic the behavior of the @RestController. I confirmed that the @BatchSize annotation does work in the application's deployment, at least in JAX-RS services! (NOTE: the service uses its own persistence.xml) ...
Test details (updated 2020/07/30): What I did here was create a new JAX-RS service and deploy it inside the WAR application, next to the @RestController that presents the problem (that is, the same WAR and the same physical JVM). Both services pull from the database the same entity (same class, same classloader), which has a lazy collection annotated with @BatchSize! ... If I invoke both services, the JAX-RS service honors the @BatchSize and pulls the collection using the expected strategy, while the @RestController does not ... So, what is happening here? The only difference between the services is that each one has a different persistence.xml: the persistence.xml for the JAX-RS service is picked up by WildFly directly, while the other one is picked up by Spring and delegated to WildFly (I guess) ...
I tried adding the properties hibernate.batch_fetch_style (=dynamic) and hibernate.default_batch_fetch_size (=10) to the persistence.xml read by Spring ... No luck. I debugged the Spring startup process and saw that these properties are passed to the Spring engine, but Spring does not seem to care about them. The weird thing is that properties like hibernate.show_sql are honored ... For those asking "what do these properties do?": they are the global equivalent of applying @BatchSize to every JPA lazy collection or proxy without declaring the annotation on any entity.
I set up a small Spring Boot project using the same Spring version as the enterprise application (which is 1.5.8.RELEASE, by the way), and both the annotation and the properties approach worked as expected.
I've been stuck on this issue for two days; any help to fix it will be appreciated ... thanks!
There are two or three possible issues that I can think of.
For some reason, whatever you modify isn't picked up by WildFly. WildFly classpath resolution is a separate topic, and missing configuration can cause you a nightmare. You can identify this if you are able to debug the query: if you put a breakpoint in the constructor of your entity class, you will get a chance to evaluate the entity configuration being used somewhere in the execution context.
BatchSize doesn't work on OneToOne; it only works on OneToMany relationships.
A typical way to define BatchSize is along with lazy loading, as mentioned in the example here. If you are not using lazy fetching, Hibernate assumes that you want an eager load and issues another select query to fetch all the details. Please confirm you are using the same syntax as given in the example above; a short sketch follows below.
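A minimal sketch of that combination (the entity and field names are made up for illustration):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import org.hibernate.annotations.BatchSize;

@Entity
public class Author {

    @Id
    private Long id;

    // Lazy collection + @BatchSize: when the collections of several loaded
    // Authors are accessed, Hibernate initializes them in groups of up to 10
    // with a single IN query instead of one query per Author (the N + 1 case).
    @OneToMany(mappedBy = "author", fetch = FetchType.LAZY)
    @BatchSize(size = 10)
    private Set<Book> books = new HashSet<>();
}

@Entity
class Book {

    @Id
    private Long id;

    @ManyToOne
    private Author author;
}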
New Addition:
Put conditional breakpoints in the PropertyBinder#setLazy() method, and maybe backtrace it and put relevant breakpoints in CollectionBinder and AnnotationBinder. Then restart/redeploy the server and see what data you get for the relevant properties. That will give you a fair idea of where it is failing.
Why a conditional breakpoint? Because you will have thousands of properties, and if you do not add a condition to the breakpoint, you will take an hour to reach the breakpoint you actually care about.
What should the condition be? For PropertyBinder, the condition should be something like `this.name.equals("myProperty")`, with the name of the property you are investigating in place of myProperty. You can use the same approach for the other classes as well.
Sorry for the overly detailed description of conditional breakpoints; you might find it redundant.
It looks like the only way to debug your problem is to debug the Hibernate framework from server startup; only then will we be able to find the root cause.
I have a project structure with three jar files. All are loaded onto one classpath and then executed. I am using Spring Core, JPA, and Hibernate. @Service/@Autowired beans are working fine, as are entities and repositories (all against a MySQL database).
Now I want the project to be able to send and receive messages over the network/internet. So I asked some people how I could achieve this without breaking my current structure, and I was told that Spring Boot is the architecture for me because I would not need a web server (Tomcat or GlassFish) for it.
But now I am not sure this is correct, because I did not find any sources that say the same thing. Because of that, I tried implementing it in order to verify it myself.
The important changes I made to my project (all pom.xml files and my configuration class) can be found here: http://84.141.90.123:9910/
From what I read, I need the @SpringBootApplication annotation for Spring Boot. It is an equivalent of @Configuration, @ComponentScan, @EnableAutoConfiguration, @EnableWebMvc.
The first two are already in my structure. But when I add the last two annotations, I get different errors:
When I add @EnableAutoConfiguration I get:
Caused by: java.lang.IllegalArgumentException: No auto configuration classes found in META-INF/spring.factories. If you are using a custom packaging, make sure that file is correct.
When I add @EnableWebMvc I get:
Caused by: java.lang.IllegalArgumentException: A ServletContext is required to configure default servlet handling
From my limited knowledge of English, the @EnableWebMvc error seems to say that the application needs a web server (Tomcat or GlassFish).
So is the main statement wrong, and I cannot start Spring Boot without a web server?
Because I do not use any XML files and/or property files for Spring (they did not work), I rely only on Java-based configuration for Spring, JPA, and Hibernate. Therefore there are only very few tutorials/threads that help. Most of the time they just say add this or add that to your XML, but because I don't have one, it is a little pain in the ass.
Also, I compile with AspectJ, so I cannot use the Spring parent POM. And I am not able to modify the main class/method, because the main class is in an outer jar file that is not programmed by me.
So, concretely:
Can a Spring Boot application in a standalone jar run without a web server wrapping it?
If yes, what am I doing wrong? Am I missing a dependency, an annotation or a configuration?
If you want to receive messages over HTTP (i.e. run a REST API), then you need a web server.
If you just want to send messages over HTTP, then you only need an HTTP client.
Spring Boot has the option of running an embedded web server (Tomcat by default), so you don't need to run a separate application server.
To work out the issues with your build, I would start by generating a project using Spring Initializr.
You can select the dependencies you want (try using the advanced version link at the bottom), and it will build a Maven/Gradle project for you with the right structure, build file, and compatible dependencies.
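As a sketch of the "sending only needs an HTTP client" point, something like the following works in a plain, non-web application, assuming spring-web is on the classpath (the URL and payload are made up):

import org.springframework.web.client.RestTemplate;

public class MessageSender {
    public static void main(String[] args) {
        // A plain HTTP client call; no servlet container or embedded server is involved.
        RestTemplate restTemplate = new RestTemplate();
        String response = restTemplate.postForObject(
                "http://example.org/api/messages",  // hypothetical endpoint
                "hello from a standalone jar",      // request body
                String.class);
        System.out.println(response);
    }
}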
Change:

public static void main(String[] args) {
    SpringApplication.run(ActiveMqApplication.class, args);
}

to:

public static void main(String[] args) {
    // WebApplicationType.NONE tells Spring Boot not to start an embedded web server.
    new SpringApplicationBuilder(ActiveMqApplication.class)
            .web(WebApplicationType.NONE)
            .run(args);
}
I'm trying to create a portlet with Liferay 6.2, using Spring. If I create a bean without using constructor-arg or factory-method, then everything works fine. But if I use either of these, I get exceptions when the portlet is deployed.
an example:
the exception I'm getting is:
01:28:21,884 ERROR [ContextLoader:323] Context initialization failed
java.lang.IncompatibleClassChangeError: class org.springframework.core.LocalVariableTableParameterNameDiscoverer$ParameterNameDiscoveringVisitor has interface org.springframework.asm.ClassVisitor as super class
I realize that this can be caused by having two versions of ASM, but I'm using the Spring jars that come with Liferay.
You give an option yourself: duplicate classes. But without knowing how you build and what you're doing, there's hardly anything to do apart from asking you to make extra, extra, extra sure that you don't have duplicate resources on the classpath:
Check your deployed web application (once it's deployed to your application server) and its WEB-INF/lib folder for such duplicates. They might come in only during the build process, e.g. they might not be in your IDE's workspace. Or Liferay might inject them (due to declared dependencies) during deployment.
You'll have to figure out how (and in which phase) those resources get there, then eliminate that option (e.g. through a proper Maven scope, such as "provided").
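For example, if the Spring artifacts are declared in the portlet's own pom.xml, marking them as provided keeps them out of the WAR's WEB-INF/lib so only Liferay's copies are used (the artifact and version below are placeholders):

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-core</artifactId>
  <version>${spring.version}</version>
  <scope>provided</scope>
</dependency>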
I am creating a Java application that uses a JPA-annotated model (the core model). On top of these entities, at runtime, I would like to add a jar file from an external source that contains some other JPA class definitions and mappings. The imported archive might change its class structure and mappings, but it is the application's duty to refresh the entire schema when that happens.
However, when trying to add the jar to the Hibernate Configuration, I get a
org.hibernate.service.spi.ServiceException: Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
The inner exception is related to the hibernate dialect:
org.hibernate.HibernateException: Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set
However, I am sure I have specified the hibernate.dialect property in the persistence.xml file. Below is the code I am using in my application:
// Native Hibernate bootstrap: add the external jar's entities to the core configuration.
org.hibernate.cfg.Configuration cfg = new org.hibernate.cfg.Configuration();
cfg.addJar(new java.io.File("path/to/jar.jar"));
org.hibernate.SessionFactory sessionFactory = cfg.buildSessionFactory();
What am I doing wrong?
Also, could you please tell me whether you find this a good approach for creating a dynamically updateable schema shared between multiple applications?
I managed to solve the problem. The main point is that, when using an EntityManagerFactory (the JPA API), the Hibernate persistence provider only reads the persistence.xml configuration files and loads the persistence units that are specified there.
When using the Hibernate native Configuration API, however, Hibernate does not read the persistence.xml files, so one has to explicitly specify all aspects such as the dialect, connection parameters, etc. in the hibernate.cfg.xml file.
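To make the distinction concrete, here is a minimal sketch of the two bootstrap paths (the persistence-unit name and config file name are placeholders):

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class BootstrapPaths {
    public static void main(String[] args) {
        // JPA bootstrap: reads META-INF/persistence.xml for the named unit,
        // including hibernate.dialect and the connection settings.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit");

        // Hibernate native bootstrap: ignores persistence.xml; the dialect and
        // connection settings must come from hibernate.cfg.xml (or be set on cfg).
        Configuration cfg = new Configuration().configure("hibernate.cfg.xml");
        SessionFactory sessionFactory = cfg.buildSessionFactory();
    }
}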
However, I managed to work around this issue. In the dynamically loaded jar file, one must export the folders (especially META-INF) and configure a persistence.xml file in there too. Note, though, that if two persistence units are given the same name, their corresponding classes will not be merged, and neither will any other properties: by default, Hibernate loads the first persistence unit it finds and treats identically named ones as different. So I created a more flexible core schema that allows access to multiple persistence units while caching them in something similar to dictionaries. Consequently, for each schema in my application, I load the corresponding persistence unit and store all of them in a dictionary-style container, which allows the application to get notified should any changes occur to the underlying jar file.