I am using Hibernate 4.3.6.Final with JPA and Spring 4.0.6.RELEASE in my project with Java Configuration.
I have two jar files, module1.jar and module2.jar, and each of them contains some entities. I can't use a module1.jar entity in module2.jar without using
persistence.xml and
<jar-file>module1.jar</jar-file>
Is it necessary to have persistence.xml at all, given that I am using
entityManagerFactoryBean.setPackagesToScan("com.mydomain") to scan the entities from all jar files?
No, it is not necessary to use persistence.xml if you are configuring Spring's LocalContainerEntityManagerFactoryBean with setPackagesToScan().
From New Features and Enhancements in Spring 3.1:
3.1.12 JPA EntityManagerFactory bootstrapping without persistence.xml
In standard JPA, persistence units get defined through META-INF/persistence.xml files in specific jar files which will in turn get searched for @Entity classes. In many cases, persistence.xml does not contain more than a unit name and relies on defaults and/or external setup for all other concerns (such as the DataSource to use, etc). For that reason, Spring 3.1 provides an alternative: LocalContainerEntityManagerFactoryBean accepts a 'packagesToScan' property, specifying base packages to scan for @Entity classes. This is analogous to AnnotationSessionFactoryBean's property of the same name for native Hibernate setup, and also to Spring's component-scan feature for regular Spring beans. Effectively, this allows for XML-free JPA setup at the mere expense of specifying a base package for entity scanning: a particularly fine match for Spring applications which rely on component scanning for Spring beans as well, possibly even bootstrapped using a code-based Servlet 3.0 initializer.
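For illustration, a minimal Java-configuration sketch of such a setup (the base package is taken from the question; the DataSource bean and the Hibernate vendor adapter are assumptions):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Configuration
public class JpaConfig {

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        // scans this base package for @Entity classes across every jar on the classpath,
        // so neither persistence.xml nor <jar-file> entries are needed
        emf.setPackagesToScan("com.mydomain");
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return emf;
    }
}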
Scenario:
I'm supporting an enterprise application that runs in Wildfly 10. The application (.war) uses J2EE technologies (EJBs, JPA, JAX-RS) and Spring Boot features (like Spring MVC, Spring REST, Spring Data, Spring Data REST) ... Both stacks co-exist "happily" because they don't interact with each other; however, they do share common classes, like utility or entity classes (both stacks map to the same database model). Why the application uses those stacks is out of the scope of the question.
Currently, I'm trying to improve the performance of a @RestController that pulls some data from the database using a Spring Data JPA repository. I found that we're suffering from the N + 1 queries problem when calling the @RestController. In past projects (where there were only J2EE technologies), I have used the @BatchSize Hibernate annotation to mitigate this problem with total success.
But in this project, Spring seems to be skipping that annotation. How do I know that? Because I turned on Hibernate SQL logging (hibernate.show_sql) and I can see the N + 1 queries still happening ...
Key Points:
Here are some insights about the application that you must know before providing (or trying to guess) any answer:
The application has many sub-modules packaged as libraries inside the WAR file (/WEB-INF/lib) ... Some of these libraries are the jars that contain the entity classes; others are the jars that contain the REST services (which can be JAX-RS services or Spring controllers).
The Spring configuration is done in classes defined in the WAR artifact: there we have a class (that extends SpringBootServletInitializer) annotated with @SpringBootApplication and another class (that extends RepositoryRestConfigurerAdapter) annotated with @Configuration. Spring customization is done in that class.
The application works with multiple datasources, which are defined in the Wildfly server. Spring Data JPA must route every query to the right datasource. To meet this requirement, the application (Spring) was configured like this (a sketch of what the routing class might look like follows after these key points):
@Bean(destroyMethod = "")
@ConfigurationProperties(prefix = "app.datasource")
public DataSource dataSource() {
    // the following class extends AbstractRoutingDataSource
    // and resolves datasources using JNDI names (the Wildfly mode!)
    return new DataSourceRouter();
}

@Bean("entityManagerFactory")
public LocalContainerEntityManagerFactoryBean getEntityManagerFactoryBean() {
    LocalContainerEntityManagerFactoryBean lemfb = new LocalContainerEntityManagerFactoryBean();
    lemfb.setPersistenceUnitName("abcd-pu");
    lemfb.setDataSource(dataSource());
    return lemfb;
}
The last @Bean declaration favors the use of a persistence.xml file, which we do have under /WEB-INF/classes/META-INF/ (i.e. Spring does find this file!) ... In that file we define our domain classes, so that Spring JPA can see such entities. We can also define special JPA properties like hibernate.show_sql and hibernate.use_sql_comments without issues (this is how I detected the N + 1 queries problem in the first place) ...
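The DataSourceRouter itself is not shown in the question; purely as an illustration of what an AbstractRoutingDataSource subclass resolving Wildfly JNDI names could look like (all lookup keys and JNDI names below are made-up assumptions):

import java.util.HashMap;
import java.util.Map;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;

public class DataSourceRouter extends AbstractRoutingDataSource {

    // purely illustrative routing key, e.g. set per request by the calling service
    private static final ThreadLocal<String> CURRENT_KEY = new ThreadLocal<>();

    public static void setCurrentKey(String key) {
        CURRENT_KEY.set(key);
    }

    public DataSourceRouter() {
        // illustrative JNDI names; the real ones are not part of the question
        JndiDataSourceLookup lookup = new JndiDataSourceLookup();
        Map<Object, Object> targets = new HashMap<>();
        targets.put("moduleA", lookup.getDataSource("java:jboss/datasources/ModuleADS"));
        targets.put("moduleB", lookup.getDataSource("java:jboss/datasources/ModuleBDS"));
        setTargetDataSources(targets);
        setDefaultTargetDataSource(lookup.getDataSource("java:jboss/datasources/DefaultDS"));
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // decides which of the target datasources handles the current call
        return CURRENT_KEY.get();
    }
}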
What have I done so far?
I tried to add the @BatchSize annotation to the problematic collection. No luck!
I created a new JAX-RS service whose purpose was to mimic the behavior of the @RestController. I confirmed that the @BatchSize annotation does work in the application's deployment, at least in JAX-RS services! (NOTE: the service uses its own persistence.xml) ...
Test details (Updated 2020/07/30): What I did here was to create a new JAX-RS service and deploy it inside the WAR application, next to the @RestController that presents the problem (I mean, it is the same WAR and the same physical JVM). Both services pull the same entity from the database (same class, same classloader), which has a lazy collection annotated with @BatchSize! ... If I invoke both services, the JAX-RS service honors the @BatchSize and pulls the collection using the expected strategy, while the @RestController does not ... So, what is happening here? The only thing different between the services is that each one has a different persistence.xml: the persistence.xml for the JAX-RS service is picked up by Wildfly directly, while the other one is picked up by Spring and delegated to Wildfly (I guess) ...
I tried to add the properties hibernate.batch_fetch_style (=dynamic) and hibernate.default_batch_fetch_size (=10) to the persistence.xml read by Spring ... No luck. I debugged the Spring startup process and saw that such properties are passed to the Spring engine, but Spring does not seem to care about them. The weird thing here is that Spring does honor properties like hibernate.show_sql ... For those asking "What do these properties do?": they are the global equivalent of applying @BatchSize to every JPA lazy collection or proxy without declaring the annotation on any entity (see the sketch after this list).
I set up a small Spring Boot project using the same Spring version as the enterprise application (which is 1.5.8.RELEASE, by the way), and both the annotation approach and the properties approach worked as expected.
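For reference, and purely as a sketch (not something the question reports trying), those two properties could also be handed to the Spring-managed factory programmatically instead of via persistence.xml, based on the getEntityManagerFactoryBean() shown earlier:

import java.util.Properties;

@Bean("entityManagerFactory")
public LocalContainerEntityManagerFactoryBean getEntityManagerFactoryBean() {
    LocalContainerEntityManagerFactoryBean lemfb = new LocalContainerEntityManagerFactoryBean();
    lemfb.setPersistenceUnitName("abcd-pu");
    lemfb.setDataSource(dataSource());
    Properties jpaProps = new Properties();
    // global batch fetching, equivalent to @BatchSize on every lazy association
    jpaProps.setProperty("hibernate.batch_fetch_style", "dynamic");
    jpaProps.setProperty("hibernate.default_batch_fetch_size", "10");
    lemfb.setJpaProperties(jpaProps); // merged into the persistence unit's properties
    return lemfb;
}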
I've been stuck with this issue for two days, any help to fix this will be appreciated ... thanks!
There are two or three possible issues that I can think of.
For some reason, whatever you modify isn't being picked up by Wildfly. Wildfly classpath resolution is a separate topic, and some missing configuration can cause you a nightmare. You can identify this if you are able to debug the query: if you put a breakpoint in the constructor of your entity class, you will get a chance to evaluate the entity configuration being used somewhere in the execution context.
@BatchSize doesn't work on @OneToOne; it only works on @OneToMany relationships.
A typical way to define @BatchSize is to use it together with lazy loading, as mentioned in the example here. If you are not using lazy fetching, Hibernate assumes that you want an eager load and makes another select query to fetch all the details. Please confirm you are using the same syntax as given in the example above.
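For reference, a minimal sketch of that combination; the entity and field names are made up for illustration:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import org.hibernate.annotations.BatchSize;

@Entity
public class Author {

    @Id
    @GeneratedValue
    private Long id;

    // lazy collection fetched in batches: instead of one SELECT per Author,
    // Hibernate loads the books of up to 10 authors with a single IN query
    @OneToMany(mappedBy = "author", fetch = FetchType.LAZY)
    @BatchSize(size = 10)
    private List<Book> books = new ArrayList<>();
}

@Entity
class Book {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Author author;
}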
New Addition:
Put conditional breakpoints in the PropertyBinder#setLazy() method, and maybe backtrace it and put relevant breakpoints in CollectionBinder and AnnotationBinder. Then restart/redeploy the server and see what data you are getting for the relevant properties. That will give you a fair idea of where it is failing.
Why a conditional breakpoint? Because you will have thousands of properties, and if you do not add a condition to the breakpoint, you will take an hour to reach your actual breakpoint.
What should the condition be? If it's PropertyBinder, the condition should be something like `this.name == . For the other classes you can use the same approach.
Sorry for the overly detailed description of conditional breakpoints; you might find it redundant.
It looks like the only way to debug your problem is to debug the Hibernate framework from server startup; only then will we be able to find the root cause.
I am a new user of the Spring framework. I am facing some confusion in understanding the difference between the core Spring framework and Spring Boot. As far as I understand, Spring Boot is a framework which performs the initial setup automatically (like setting up Maven dependencies and downloading the jar files) and comes with an embedded Tomcat server, which makes it ready to deploy in just one click, whereas Spring MVC requires manual setup. All the tutorials that I watched for core Spring show bean configuration using a bean factory which configures the beans using an XML file. In Spring Boot, this bean configuration file is absent. My question is: what is the use of this bean configuration file? I did not find any legitimate use of this file in making a REST service with Spring. I didn't see any use of the ApplicationContext or BeanFactory in creating a web application. Can someone point out how a bean factory can be used in Spring web apps? Is there any fundamental difference between core Spring and Spring Boot other than the additional components?
The Spring application context is essentially the "pool" of beans (service objects, which include controllers, converters, data-access objects, and so on) and related information that define an application; I recommend the reference introduction. In theory, you can get complicated with the context setup and have hierarchical organization and such, but in most real-world cases you just have a single plain context.
Inside this context you need to install all of the beans that provide the logic for your application. There are several possible ways to do this, but the two main ways are providing XML files with directives like bean (define an individual bean) or component-scan (automatically search for classes with certain annotations, including @Controller), and using Java classes annotated with @Configuration, which can use annotations and @Bean methods.
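For example, a minimal Java-based configuration class might look like this (the package, class, and bean names are made up for illustration):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan("com.example.myapp") // Java equivalent of <context:component-scan>
public class AppConfig {

    @Bean // Java equivalent of an individual <bean> element
    public GreetingService greetingService() {
        return new GreetingService();
    }
}

// hypothetical service class, only here to keep the example self-contained
class GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

Such a class can serve as the root of a context, for example via new AnnotationConfigApplicationContext(AppConfig.class).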
The XML style is generally older, and newer applications mostly use Java configuration, but both provide entries that are collected into the context, and you can use both simultaneously. However, in any application, you have to provide some way of getting the registration started, and you will typically have one "root" XML file or configuration class that then imports other XML files and/or configuration classes. In a legacy web.xml-based application, you specify this in your servlet configuration file.
Spring Boot is, as you said, essentially a collection of ready-to-go configuration classes along with a mechanism for automatically detecting configurations and activating them. Even this requires a configuration root, though! This is the @EnableAutoConfiguration instruction, frequently used through its composite @SpringBootApplication. The application context and configuration mechanisms work normally once Boot finds them and pulls them in. Spring knows where to get started because you give it an explicit instruction to build a context starting with that entry point, usually with SpringApplication.run(MyApplication.class, args).
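A minimal Boot entry point, for illustration (the class name is assumed):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication // combines @Configuration, @EnableAutoConfiguration and @ComponentScan
public class MyApplication {

    public static void main(String[] args) {
        // builds the application context starting from this configuration root
        SpringApplication.run(MyApplication.class, args);
    }
}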
The embedded-server configuration just happens to be a particular set of configuration that is really useful and comes with one of the Boot starter packages. There's nothing there that you couldn't do in a non-Boot application.
I am creating a Java application that uses a JPA-annotated model (the core model). On top of these entities, at runtime, I would like to add a jar file from an external source that contains some other JPA class definitions and mappings. The imported archive might change its class structure and mappings, but it is the application's duty to refresh the entire schema when that happens.
However, when trying to add the jar to the Hibernate Configuration, I get a
org.hibernate.service.spi.ServiceException: Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
The inner exception is related to the Hibernate dialect:
org.hibernate.HibernateException: Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set
However, I am sure to have specified the hibernate.dialect property in the persistence.xml file. Below is the code I am using in my application:
org.hibernate.cfg.Configuration cfg = new org.hibernate.cfg.Configuration();
cfg.addJar(new File("path/to/jar.jar"));
cfg.buildSessionFactory();
What am I doing wrong?
Also, could you please tell me whether you consider this a good approach for creating a dynamically updatable schema shared between multiple applications?
I managed to solve the problem. The main point is that, when using EntityManagerFactory (the JPA API), the Hibernate persistence provider only reads the persistence.xml configuration files and loads the persistence units that are specified therein.
When using the Hibernate API configuration, however, Hibernate does not read the persistence.xml files, so one has to explicitly specify all aspects such as the dialect, connection parameters, etc. in the hibernate.cfg.xml file.
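To illustrate why the original snippet failed, here is a rough sketch of supplying those settings programmatically on the Configuration object instead of through hibernate.cfg.xml; the dialect and connection values below are placeholders, not taken from the question:

org.hibernate.cfg.Configuration cfg = new org.hibernate.cfg.Configuration();
// without persistence.xml, the dialect and connection settings must be given explicitly
cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
cfg.setProperty("hibernate.connection.driver_class", "org.h2.Driver");
cfg.setProperty("hibernate.connection.url", "jdbc:h2:mem:coredb");
cfg.setProperty("hibernate.connection.username", "sa");
cfg.setProperty("hibernate.connection.password", "");
cfg.addJar(new java.io.File("path/to/jar.jar"));
org.hibernate.SessionFactory sessionFactory = cfg.buildSessionFactory();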
However, I managed to work around this issue. In the dynamically loaded jar file, one must export the folders (especially META-INF) and configure a persistence.xml file in there too. Note that if two persistence units are given the same name, their corresponding classes will not get merged, and neither will any other properties: by default, Hibernate loads the first persistence unit it finds and treats identically-named ones as different. So I created a more flexible core schema that allows access to multiple persistence units, caching them in something similar to dictionaries. Consequently, for each schema in my application, I load the corresponding persistence unit while storing all of them in a dictionary-style container, which allows the application to be notified should any changes occur to the underlying jar file.
In Spring, I'd like to do the following:
<import resource="${resourceFile}" />
However, 'resourceFile' is not evaluated by the import.
The reason I need this to work is that I have defined two different resource files:
resources-serviceA.xml
resources-serviceB.xml
Each of the above files defines a different set of beans. When running ServiceA, I don't need the beans required only for ServiceB, and hence I do not want to create them.
Any pointers on how to accomplish this?
We are using Spring 3.0.
Spring 3.0 cannot evaluate properties inside the import tag; evaluating them was one of the new features of Spring 3.1 (in 2011).
See Spring 3.1 M1: Unified Property Management
So basically you should use a current version of Spring. Spring 3.1+ also introduced bean profiles, so you can define the ServiceA and ServiceB beans in different profiles.
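With profiles, the split could look roughly like this in Java configuration (the class and bean names are illustrative; the original files are XML, but the idea is the same):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// hypothetical service classes, only here to keep the sketch self-contained
class ServiceA {}
class ServiceB {}

@Configuration
@Profile("serviceA") // registered only when the "serviceA" profile is active
class ServiceAConfig {
    @Bean
    ServiceA serviceA() { return new ServiceA(); }
}

@Configuration
@Profile("serviceB") // registered only when the "serviceB" profile is active
class ServiceBConfig {
    @Bean
    ServiceB serviceB() { return new ServiceB(); }
}

The active profile can then be selected at startup, for example with -Dspring.profiles.active=serviceA.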
If you are interested in how this problem was solved by users of Spring 3.0, you can look at Import Spring config file based on property in .properties file, but keep in mind that Spring 3.0 is 3 years old now; it is risky to make changes to the basic bootstrapping configuration of a three-year-old project, so consider switching to Spring 4.0+.
I am attempting to get a WAR file to run inside of a Karaf OSGi container. The application runs correctly in stand-alone Jetty 6.1.26, but when the application is run inside of Karaf, I get the following exception and the Karaf instance freezes:
WARN org.hibernate.ejb.packaging.InputStreamZippedJarVisitor - Unable to find
file (ignored): bundle://125.0:240/ java.lang.NullPointerException: in is null
Note that the application is not relying on Hibernate in a separate OSGi bundle; it includes the hibernate jars in WEB-INF/lib.
I have examined the information on this post: Equinox (OSGi) and JPA/Hibernate - Finding Entities. However, the application is using JPA, rather than using Hibernate directly. The application's configuration is much like the 2nd option found in this post: Difference between configuring data source in persistence.xml and in spring configuration files. As such, I don't have a handle on a Hibernate SessionFactory that allows me to set the annotatedClasses property.
Any ideas on how to get past the exception?
I worked in parallel with the author and I'll post our solution here for anyone that runs into this in the future.
The exception is thrown because Hibernate tries to unzip its jar to look for the persistence classes. As other posts mention, OSGi does not allow Hibernate to act like a classloader, so this fails. The solution was to specify by hand all of the classes that it needed to load and then tell it not to try to load anything else.
We used a persistence.xml file and an orm.xml file (we used default names so we didn't have to specify either in our applicationContext.xml).
Our persistence.xml file simply pointed to the orm.xml using the <mapping-file> tag. It also included the <exclude-unlisted-classes/> tag to keep Hibernate from trying to load additional classes.
Our orm.xml file used <entity class="path.to.my.class" metadata-complete="false"/> to call out every entity class that we needed to load. The metadata-complete part tells Hibernate to use the annotations found in the class to complete the configuration.