I have an application using Java servlets/JSPs. There are multiple clients using my app; however, each client has a separate database. All the databases have the same schema. I would like to determine which database connection to use at the time a user logs into the system.
For example client A logs in, I determine that client A belongs to database C, grab the connection for database C and continue on my merry way.
I am using JPA with Hibernate as my JPA provider. Is it possible to do this using multiple persistence units and determining which unit to use at login time? Is there a better/preferred way to do this?
Edited to add:
I am using annotations and EJBs, so the persistence context is being set in the EJB with @PersistenceContext(unitName = "blahblah"). Can this be determined at login time? Can I change the unitName at runtime?
Thanks
1) Create several persistence units in your persistence.xml with different names.
2) Create the necessary number of EntityManagerFactory beans (one per persistence unit) and specify which persistence unit each factory should use:
<bean id="authEntityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
<property name="persistenceUnitName" value="SpringSecurityManager"/>
</bean>
3) Create the necessary number of TransactionManagers:
<bean id="authTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="authEntityManagerFactory" />
</bean>
4) In your DAO classes, specify which persistence unit (and so which EntityManagerFactory) you want to work with:
public class AbstractAuthDao<T> {

    @PersistenceContext(unitName = "SpringSecurityManager")
    protected EntityManager em;
    ...
}
5) In your service objects, specify which TransactionManager should be used (this feature is supported only as of Spring 3.0):
@Transactional(value = "authTransactionManager", readOnly = true)
public class UserServiceImpl implements UserService {
    ...
}
6) If you have an OpenEntityManagerInViewFilter in your web.xml, specify the name of the required EntityManagerFactory in its init-param (or create several filters with corresponding init-param blocks):
<init-param>
    <param-name>entityManagerFactoryBeanName</param-name>
    <param-value>authEntityManagerFactory</param-value>
</init-param>
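With those beans in place, you can also resolve a factory programmatically at login time and open an EntityManager for the client's database. The following is only a rough sketch; the resolver class and the idea of looking the factory up by bean name are my own assumptions, not part of the configuration above:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// Hedged sketch: the bean name passed in (e.g. "authEntityManagerFactory")
// must match one of the EntityManagerFactory beans declared above; mapping
// the logged-in user to a factory name is up to your login logic.
@Service
public class ClientEntityManagerResolver {

    @Autowired
    private BeanFactory beanFactory;

    public EntityManager entityManagerFor(String factoryBeanName) {
        EntityManagerFactory emf =
                beanFactory.getBean(factoryBeanName, EntityManagerFactory.class);
        return emf.createEntityManager(); // caller is responsible for closing it
    }
}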
I have a Spring-based multi-module application. In my DAO module, the DB (embedded Derby) is started and created by a class that implements ApplicationListener.
The problem is that the logs contain a huge stack trace from Spring saying that there is no DB (it couldn't get a connection).
Still, my application works without any problems. The stack trace appears before the ApplicationListener is invoked and the DB is created. Actually, I only see it when starting the application on a machine for the first time, because the DB is created only then; afterwards it is just reused.
So my question is: how do I avoid this exception in the logs? Maybe there is a Spring or Hibernate setting to not connect to the DB before the application context is fully loaded? Or should the code that creates the DB be invoked by some other listener?
Well, here is the way I do it: the ROOT context contains the datasource, the DAO, the service and the transaction manager. In XML config, the declaration of the datasource is:
<bean id="datasource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"
p:url="jdbc:derby:/path/to/database;create=TRUE"
p:username="user" p:password="pwd"
p:driverClassName="org.apache.derby.jdbc.EmbeddedDriver"/>
It can then be used to declare a session factory for Hibernate and an associated DAO:
<bean class="org.springframework.orm.hibernate4.LocalSessionFactoryBean"
id="sessionFactory" p:dataSource-ref="datasource">
<!-- hibernate config -->
...
</bean>
<bean class="org.springframework.orm.hibernate4.HibernateTransactionManager"
name="transactionManager" p:sessionFactory-ref="sessionFactory"/>
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean id="myDao" class="... .myDaoImpl" p:sessionFactory-ref="sessionFactory" .../>
That way everything is created by Spring, which ensures that the creation order is correct. Of course, the same is possible in Java config with the same logic, as sketched below.
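For reference, here is a rough Java-config equivalent of the XML above. It is only a sketch: the scanned entity package and any extra Hibernate properties are placeholders, not taken from the original configuration:

import javax.sql.DataSource;
import org.hibernate.SessionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.orm.hibernate4.HibernateTransactionManager;
import org.springframework.orm.hibernate4.LocalSessionFactoryBean;
import org.springframework.transaction.annotation.EnableTransactionManagement;

// Sketch of the same wiring in Java config; URL and credentials mirror the
// XML above, the scanned package is a placeholder.
@Configuration
@EnableTransactionManagement
public class RootConfig {

    @Bean
    public DataSource datasource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.apache.derby.jdbc.EmbeddedDriver");
        ds.setUrl("jdbc:derby:/path/to/database;create=TRUE");
        ds.setUsername("user");
        ds.setPassword("pwd");
        return ds;
    }

    @Bean
    public LocalSessionFactoryBean sessionFactory() {
        LocalSessionFactoryBean sf = new LocalSessionFactoryBean();
        sf.setDataSource(datasource());
        sf.setPackagesToScan("com.example.model"); // placeholder entity package
        return sf;
    }

    @Bean
    public HibernateTransactionManager transactionManager(SessionFactory sessionFactory) {
        return new HibernateTransactionManager(sessionFactory);
    }
}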
I suppose you are fetching some data from the database from inside Spring beans that are still being created, perhaps through @PostConstruct or some other way. Remember that until the Spring context is fully loaded, some beans may have not-yet-initialized beans injected into them.
So do not use the DB and do not call any DAOs until you are sure that the Spring context is fully initialized.
To make such initial calls to DAOs, try a pattern like the following, which guarantees that the Spring context is complete:
@Component
public class SpringContextMonitor implements ApplicationListener<ApplicationEvent> {

    @Autowired
    private SomeDao dao;
    ...

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ContextRefreshedEvent) {
            onStart((ContextRefreshedEvent) event);
        }
    }

    private void onStart(ContextRefreshedEvent event) {
        // do your initialization here
        dao.getSomething();
        dao2.getSomething();
        ...
    }
    ...
}
The onStart method in the above example is the place where you can be sure that all beans are fully initialized.
I have a project which deals with two different database instances.
Each access to a database is transactional, but transactions on database1 do not need to be linked to transactions on database2.
I am using Hibernate and spring-tx 4.0.3.RELEASE, Spring IoC 4 and Hibernate 4.
I use the @Transactional annotation in my DAO services.
So I configure two datasource beans, two sessionFactory beans and two HibernateTransactionManager beans.
But doing so, I get a UniqueBeanException, as TransactionAspectSupport.determineTransactionManager tries to find exactly one instance of a class implementing the PlatformTransactionManager interface.
I have seen that I can make my Java configuration class implement TransactionManagementConfigurer, so that I can specifically tell which transaction-manager bean to use, and I was hoping to implement a ProxyTransactionManager that could delegate to the appropriate transaction manager depending on which database the current call needs to be made against.
The problem is implementing such ProxyPlatformTransactionManager methods: how can I know which database, or which SessionFactory, is being accessed? Otherwise I cannot know which PlatformTransactionManager to use.
Has anyone faced that type of issue yet?
Thanks,
Mel
In your application context, you need to define two transaction managers, as below:
<bean id="txMngr1" class="org.springframework.orm.hibernate5.HibernateTransactionManager"
p:sessionFactory-ref="sessionFactory1">
<qualifier value="txMngr1"/>
</bean>
<bean id="txMngr2" class="org.springframework.orm.hibernate5.HibernateTransactionManager"
p:sessionFactory-ref="sessionFactory2">
<qualifier value="txMngr2"/>
</bean>
Then use the transaction-manager qualifier with your DAOs/services:
@Transactional("txMngr2")
FYI: you can access multiple session factories from your code using qualifiers as well:
@Autowired
@Qualifier(value = "sessionFactory2")
private SessionFactory sessionFactory;
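A short sketch of what that looks like in a service (class and method names are made up; only the qualifier values must match the bean definitions above):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Sketch: each method runs in a transaction bound to the transaction manager
// named in its @Transactional qualifier, and therefore to the corresponding
// session factory / database.
@Service
public class ReportingService {

    @Transactional("txMngr1")
    public void workOnDatabase1() {
        // Hibernate access through sessionFactory1 goes here
    }

    @Transactional("txMngr2")
    public void workOnDatabase2() {
        // Hibernate access through sessionFactory2 goes here
    }
}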
I'm using Spring 3.0.6, with Hibernate 3.2.7.GA, in a Java-based webapp. I'm declaring transactions with @Transactional annotations on the controllers (as opposed to in the service layer). Most of the views are read-only.
The problem is, I've got some DAOs which are using JdbcTemplate to query the database directly with SQL, and they're being called outside of a transaction. Which means they're not reusing the Hibernate SessionFactory's connection. The reason they're outside the transaction is that I'm using converters on method parameters in the controller, like so:
@Controller
@Transactional
public class MyController {

    @RequestMapping(value = "/foo/{fooId}", method = RequestMethod.GET)
    public ModelAndView get(@PathVariable("fooId") Foo foo) {
        // do something with foo, and return a new ModelAndView
    }
}
public class FooConverter implements Converter<String, Foo> {

    @Override
    public Foo convert(String fooId) {
        // call FooService, which calls FooJdbcDao to look up the Foo for fooId
    }
}
My JDBC DAO relies on SimpleJdbcDaoSupport to have the jdbcTemplate injected:
#Repository("fooDao")
public class FooJdbcDao extends SimpleJdbcDaoSupport implements FooDao {
public Foo findById(String fooId) {
getJdbcTemplate().queryForObject("select * from foo where ...", new FooRowMapper());
// map to a Foo object, and return it
}
}
and my applicationContext.xml wires it all together:
<mvc:annotation-driven conversion-service="conversionService"/>
<bean id="conversionService" class="org.springframework.context.support.ConversionServiceFactoryBean">
<property name="converters">
<set>
<bean class="FooConverter"/>
<!-- other converters -->
</set>
</property>
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"
p:sessionFactory-ref="sessionFactory" />
FooConverter (which converts a path variable String to a Foo object) gets called before MyController#get() is called, so the transaction hasn't been started yet. Thus when FooJdbcDao is called to query the database, it has no way of reusing the SessionFactory's connection, and has to check out its own connection from the pool.
So my questions are:
Is there any way to share a database connection between the SessionFactory and my JDBC DAOs? I'm using HibernateTransactionManager, and from looking at Spring's DataSourceUtils it appears that sharing a transaction is the only way to share the connection.
If the answer to #1 is no, then is there a way to configure OpenSessionInViewFilter to just start a transaction for us, at the beginning of the request? I'm using "on_close" for the hibernate.connection.release_mode, so the Hibernate Session and Connection are already staying open for the life of the request.
The reason this is important to me is that I'm experiencing problems under heavy load where each thread is checking out 2 connections from the pool: the first is checked out by hibernate and saved for the whole length of the thread, and the 2nd is checked out every time a JDBC DAO needs one for a query outside of a transaction. This causes deadlocks when the 2nd connection can't be checked out because the pool is empty, but the first connection is still held. My preferred solution is to make all JDBC DAOs participate in Hibernate's transaction, so that TransactionSynchronizationManager will correctly share the one single connection.
Is there any way to share a database connection between the SessionFactory and my JDBC DAOs? I'm using HibernateTransactionManager, and from looking at Spring's DataSourceUtils it appears that sharing a transaction is the only way to share the connection.
--> Well, you can share a database connection between the SessionFactory and JdbcTemplate. What you need to do is share the same DataSource between the two; the connection pool is then also shared. I am using this in my application.
What you need to do is configure a HibernateTransactionManager to handle transactions for both.
Add a JdbcDao class (with jdbcTemplate and dataSource properties and their getters/setters) to your existing package structure (in the DAO package/layer), and have your JDBC implementation classes extend JdbcDao. If you have already configured a hibernateTxManager for Hibernate, you will not need to configure another one.
The problem is, I've got some DAOs which are using JdbcTemplate to query the database directly with SQL, and they're being called outside of a transaction. Which means they're not reusing the Hibernate SessionFactory's connection.
--> You may be wrong here. You may already be using the same connection; I think the only problem may lie in the HibernateTransactionManager configuration.
Check the HibernateTransactionManager javadoc: "This transaction manager is appropriate for applications that use a single Hibernate SessionFactory for transactional data access, but it also supports direct DataSource access within a transaction (i.e. plain JDBC code working with the same DataSource). This allows for mixing services which access Hibernate and services which use plain JDBC (without being aware of Hibernate)!"
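So, assuming your JdbcTemplate and your SessionFactory are built on the same DataSource, a transactional service method can mix both. The following is only an illustrative sketch; FooQueryService, FooHibernateDao and the query are made-up names:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Sketch: because HibernateTransactionManager exposes the transactional
// connection for the shared DataSource, the JdbcTemplate call below reuses
// the same connection as the Hibernate-based DAO.
@Service
public class FooQueryService {

    @Autowired
    private FooHibernateDao fooHibernateDao; // made-up Hibernate-based DAO

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Transactional(readOnly = true)
    public int loadAndCount(String fooId) {
        fooHibernateDao.findById(fooId); // goes through the Hibernate Session
        // plain JDBC on the same DataSource joins the same transaction
        return jdbcTemplate.queryForObject("select count(*) from foo", Integer.class);
    }
}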
Check my question: Using Hibernate and Jdbc both in Spring Framework 3.0
Configuration: add DAO classes and service classes alongside your current Hibernate classes and do not make separate packages for them if you want to work with the existing configuration. Otherwise, configure HibernateTransactionManager in the XML configuration and use the @Transactional annotation.
Mistake in your code:
@Controller
@Transactional
public class MyController { ...
Use the @Transactional annotation in service classes (best practice).
Correction:
@Transactional(readOnly = true)
public class FooServiceImpl implements FooService {

    public Foo getFoo(String fooName) {
        // do something
    }

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // do something
    }
}
I am encountering some problems with a web app that uses Spring, Hibernate and JPA. The problems are very high memory consumption which increases over time and never seems to decrease. They most likely stem from an incorrect usage of the EntityManager. I have searched around but I haven't found something for sure yet.
We are using DAOs which all extend the following GenericDAO where our ONLY EntityManager is injected:
public abstract class GenericDAOImpl<E extends AbstractEntity<P>, P>
        implements GenericDAO<E, P> {

    @PersistenceContext
    @Autowired
    private EntityManager entityManager;
    [...]
The generic DAO is used because it has methods to get entities by ID and so on which would be a pain to implement in all ~40 DAOs.
The EntityManager is configured as a Spring bean in the following way:
<bean class="org.springframework.orm.jpa.JpaTransactionManager"
id="transactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<tx:annotation-driven mode="aspectj"
transaction-manager="transactionManager" />
<bean
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
id="entityManagerFactory">
<property name="persistenceUnitName" value="persistenceUnit" />
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="entityManager" factory-bean="entityManagerFactory"
factory-method="createEntityManager" scope="singleton" />
The biggest problem, I think, is using this shared EntityManager for everything. In the service classes, we are using the @Transactional annotation for methods which require a transaction. From what I read, this flushes the EntityManager automatically, but flushing is different from clearing, so I guess the objects are still in memory.
We noticed an increase in memory after each automatic import of data into the DB, which happens every day (~7 files of 25k lines each, where a lot of linked objects are created), but also during normal operation, when retrieving lots of data (say 100-200 objects at a time per request).
Does anyone have any idea how I could improve the current situation (because it's kind of bad at this point...)?
Edit: I have run a profiler on the deployed app and this is what it found:
One instance of "org.hibernate.impl.SessionFactoryImpl" loaded by "org.apache.catalina.loader.WebappClassLoader @ 0xc3217298" occupies 15,256,880 (20.57%) bytes. The memory is accumulated in one instance of "org.hibernate.impl.SessionFactoryImpl" loaded by "org.apache.catalina.loader.WebappClassLoader @ 0xc3217298".
Is this probably because the EntityManager is not cleared?
I'm inclined to agree with your assessment. EntityManagers aren't really designed to be used as singletons. Flushing the EntityManager doesn't clear anything from memory, it only synchronizes entities with the database.
What is likely happening is the EntityManager is keeping reference to all of the objects in the persistence context and you're never closing the context. (This guy had a similar issue.) Clearing it will indeed remove all references from EntityManager to your entities, however, you should probably re-evaluate how you use your EntityManager in general if you find yourself constantly having to call clear(). If you are just wanting to avoid LazyInitializationExceptions, consider the OpenSessionInViewFilter from Spring*. This allows you to lazily load entities while still letting Spring manage the lifecycle of your beans. Lifecycle management of your beans is one of the great advantages of the Spring Framework, so you need to make sure that overriding that behavior is really what you want.
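For the daily import in particular, a common mitigation is to flush and clear the persistence context every N entities so it never accumulates a whole file's worth of managed objects. This is only a sketch under that assumption; the class name, batch size and method shape are placeholders:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Sketch: periodically flushing and clearing keeps the persistence context
// small during a large import, so already-written entities can be garbage
// collected instead of staying attached until the transaction ends.
@Service
public class BatchImportService {

    private static final int BATCH_SIZE = 50; // placeholder value

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void importEntities(List<?> entities) {
        int count = 0;
        for (Object entity : entities) {
            entityManager.persist(entity);
            if (++count % BATCH_SIZE == 0) {
                entityManager.flush(); // push pending inserts to the database
                entityManager.clear(); // detach what was just written
            }
        }
    }
}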
There are indeed some cases where you want a long-lived EntityManager, but those cases are relatively few and require a great deal of understanding to implement properly.
*NOTE: OpenSessionInView requires great care to avoid the N+1 problem. It's such a big issue that some call Open Session in View an AntiPattern. Use with caution.
Edit
Also, you don't need to annotate @PersistenceContext elements with @Autowired as well; @PersistenceContext does the wiring itself.
On a non-JEE-compliant application server, you should not be using @Autowired/@PersistenceContext private EntityManager entityManager;!
What you should be doing is something like this:
class SomeClass {

    @PersistenceUnit
    private EntityManagerFactory emf;

    public void myMethod() {
        EntityManager em = null;
        try {
            em = emf.createEntityManager();
            // do work with em
        } catch (SomeExceptions e) {
            // do rollbacks, logs, whatever if needed
        } finally {
            if (em != null && em.isOpen()) {
                // close this sucker
                em.clear();
                em.close();
            }
        }
    }
}
Some notes:
This applies to non-full JEE app servers with Spring + Hibernate.
I've tested it with JDK 1.7 and 1.8, no difference in terms of leaks.
Regular Apache Tomcat is not a true JEE app server (TomEE, however, is).
List of Java EE Compliant App Servers
You should delete the @Autowired annotation from private EntityManager entityManager; above, and remove the entityManager bean definition from your context definition file. Also, if you don't use the <context:annotation-config/> and <context:component-scan/> XML tags, you must define a PersistenceAnnotationBeanPostProcessor bean in your context.
I have a web application which connects to an Oracle database. The application is now going to have a new set of users, and a new DB is being planned for them. Is it possible to connect to the appropriate DB based on the user who logs in? As of now, the database configuration is done through a JNDIName entry in an XML file.
Absolutely. For a given DAO class (assuming you're using DAOs), create two bean definitions, one for each database, and then pick which DAO bean you want to use in your business logic:
<bean id="dao1" class="com.app.MyDaoClass">
<property name="dataSource" ref="dataSource1"/>
</bean>
<bean id="dao2" class="com.app.MyDaoClass">
<property name="dataSource" ref="dataSource2"/>
</bean>
Here dataSource1 and dataSource2 are the DataSource beans representing your two different databases.
At runtime, your business logic selects dao1 or dao2 appropriately.
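For example (a sketch only; the service class and the rule that maps a user to a database are made up, and the qualifiers refer to the bean ids above):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;

// Sketch: picks one of the two DAO beans declared above depending on which
// database the logged-in user belongs to.
@Service
public class UserAwareService {

    @Autowired
    @Qualifier("dao1")
    private MyDaoClass dao1;

    @Autowired
    @Qualifier("dao2")
    private MyDaoClass dao2;

    public MyDaoClass daoFor(String username) {
        return belongsToFirstDatabase(username) ? dao1 : dao2;
    }

    private boolean belongsToFirstDatabase(String username) {
        // placeholder: look this up from configuration or a routing table
        return username.startsWith("a");
    }
}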
I'd suggest injecting both data sources into your DAOs and then, within the DAO, deciding which data source to use based on the current user. The current user can be passed to the DAO from your presentation/service layer.
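A sketch of that variant (all names are made up; the routing rule is whatever your application uses to decide which database a user belongs to):

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

// Sketch: the DAO receives both DataSources and routes each call to the right
// one based on the user passed down from the service layer. Spring's
// AbstractRoutingDataSource is another way to achieve the same effect.
public class RoutingUserDao {

    private final DataSource dataSource1;
    private final DataSource dataSource2;

    public RoutingUserDao(DataSource dataSource1, DataSource dataSource2) {
        this.dataSource1 = dataSource1;
        this.dataSource2 = dataSource2;
    }

    public int countOrdersFor(String username) {
        JdbcTemplate template = new JdbcTemplate(dataSourceFor(username));
        return template.queryForObject(
                "select count(*) from orders where username = ?",
                Integer.class, username);
    }

    private DataSource dataSourceFor(String username) {
        // placeholder routing rule
        return isExistingUser(username) ? dataSource1 : dataSource2;
    }

    private boolean isExistingUser(String username) {
        // placeholder: replace with your real user-to-database lookup
        return !username.endsWith("@newusers.example");
    }
}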