I'm using Hibernate 3.2.7 (same problem on 3.2.5) with Spring 3.0.1, deployed on WebLogic 10.3, with an Oracle 10g database. I'm using JTA transaction management, and the transaction is distributed (it is actually started and ended in another application; this code runs in between).
The Hibernate configuration is declared in my persistence.xml as follows:
<property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
<property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.WeblogicTransactionManagerLookup"/>
<property name="hibernate.query.factory_class" value="org.hibernate.hql.classic.ClassicQueryTranslatorFactory"/>
<property name="hibernate.current_session_context_class" value="jta"/>
<property name="hibernate.connection.release_mode" value="auto"/>
The spring configuration regarding the transaction manager is the following:
<!-- Instructs Spring to perform declarative transaction management on annotated classes -->
<tx:annotation-driven transaction-manager="txManager" proxy-target-class="true"/>
<!-- Transaction manager and session factory configuration -->
<bean id="txManager" class="org.springframework.transaction.jta.WebLogicJtaTransactionManager">
<property name="transactionManagerName" value="javax.transaction.TransactionManager"/>
<property name="defaultTimeout" value="${app.transaction.timeOut}"/>
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<!-- the persistence unit omits the JTA data source so that the application server does not
create the EntityManagerFactory; Spring will create its own LocalContainerEntityManagerFactoryBean, overriding the data source -->
<property name="dataSource" ref="myDataSource"/>
<!-- specific properties like the JPA provider and its settings are in the persistence unit -->
<property name="persistenceUnitName" value="my.persistence.unit"/>
</bean>
<!-- define data source in application server -->
<jee:jndi-lookup id="myDataSource" jndi-name="${db.jndiName}"/>
I'm using a generic CrudDao with an update method that looks like this:
public void update(Object entity) {
    // entityManager injected via @PersistenceContext
    entityManager.merge(entity);
    entityManager.flush();
}

public Object getById(Object id, Class entityClass) throws PersistenceException {
    return entityManager.find(entityClass, id);
}
UPDATED: added the getById method.
The code that does not work as expected looks like this:
MyObject myObj = getMyObjectThroughSomeOneToManyRelation(idOne, idOther);
// till now was null
myObj.setSomeDateAttr(someDate);
genericDao.update(myObj);
MyObject myObjFromDB = genericDao.getById(myObj.getId(), MyObject.class);
The result: if I print myObj.getSomeDateAttr() it returns the value of someDate, but myObjFromDB.getSomeDateAttr() still returns null.
I've tried changing the update method to:
org.hibernate.Session s = (org.hibernate.Session) entityManager.getDelegate();
s.evict(entity);
s.update(entity);
s.flush();
And it still doesn't work.
With Hibernate's show_sql flag turned on, I don't see any UPDATE statement issued on flush, nor when I query the entity manager for the object with the same id. The SELECTs are all visible.
UPDATE:
At the end of the transaction the UPDATE is actually issued and everything is written to the DB. So my problem is "just" during the transaction.
I'm afraid the problem may be linked with the configuration of the transaction manager on spring and on hibernate.
Hope that someone can help me as I have already lost a day and a half with no luck.
You need to look closely at Hibernate's merge behaviour. As per the documentation:
if there is a persistent instance with the same identifier currently
associated with the session, copy the state of the given object onto
the persistent instance
if there is no persistent instance currently associated with the session, try to load it from the database, or create a new persistent instance
the persistent instance is returned
the given instance does not become associated with the session, it
remains detached
Going by the SQL queries in your log, it looks like
MyObject myObj = getMyObjectThroughSomeOneToManyRelation(idOne, idOther);
returns the persistent object, but when you modify it (it becomes dirty) and call merge, the new state is copied onto the persistent object currently in the session. As the third point says, merge returns a new manageable persistent instance, and that is the object you need to use in subsequent operations.
When you call find, Hibernate returns the persistent object in the session, not the manageable persistent instance returned by merge; that's why you don't see the changes in the object returned by find.
To fix your problem, change the return type of the update method:
public Object update(Object entity) {
    // entityManager injected via @PersistenceContext
    return entityManager.merge(entity);
}
and in the service use it as below:
MyObject myObj = getMyObjectThroughSomeOneToManyRelation(idOne, idOther);
// till now was null
myObj.setSomeDateAttr(someDate);
// You can reuse myObj here instead of myNewObj
MyObject myNewObj = genericDao.update(myObj);
// No need to call getById
// MyObject myObjFromDB = genericDao.getById(myObj.getId(), MyObject.class);
System.out.println("Updated value: " + myNewObj.getSomeDateAttr());
Have a look at this article as well.
Related
I'm using Hibernate 5.1.0.Final with ehcache and Spring 3.2.11.RELEASE. I have the following @Cacheable annotation set up in one of my DAOs:
@Override
@Cacheable(value = "main")
public Item findItemById(String id)
{
    return entityManager.find(Item.class, id);
}
The item being returned has a number of associations, some of which are lazy. So, for instance, it (eventually) references the field:
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(name = "product_category", joinColumns = { @JoinColumn(name = "PRODUCT_ID") }, inverseJoinColumns = { @JoinColumn(name = "CATEGORY_ID") })
private List<Category> categories;
I notice that within one of my methods marked @Transactional, when the result of the above method is retrieved from the second-level cache, I get the exception below when trying to iterate over the categories field:
@Transactional(readOnly=true)
public UserContentDto getContent(String itemId, String pageNumber) throws IOException
{
    Item item = contentDao.findItemById(itemId);
    …
    // Below line causes a "LazyInitializationException"
    for (Category category : item.getParent().getProduct().getCategories())
    {
The stack trace is:
16:29:42,557 INFO [org.directwebremoting.log.accessLog] (ajp-/127.0.0.1:8009-18) Method execution failed: : org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: org.mainco.subco.ecom.domain.Product.standardCategories, could not initialize proxy - no Session
at org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:579) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:203) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:558) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:131) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.hibernate.collection.internal.PersistentBag.iterator(PersistentBag.java:277) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.mainco.subco.ebook.service.ContentServiceImpl.getCorrelationsByItem(ContentServiceImpl.java:957) [myproject-90.0.0-SNAPSHOT.jar:]
at org.mainco.subco.ebook.service.ContentServiceImpl.getContent(ContentServiceImpl.java:501) [myproject-90.0.0-SNAPSHOT.jar:]
at sun.reflect.GeneratedMethodAccessor819.invoke(Unknown Source) [:1.6.0_65]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [rt.jar:1.6.0_65]
at java.lang.reflect.Method.invoke(Method.java:597) [rt.jar:1.6.0_65]
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96) [spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260) [spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94) [spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at com.sun.proxy.$Proxy126.getContent(Unknown Source)
I understand that the Hibernate session is closed; I do not care about why this happens. Also, it is NOT an option to make the above association eager (instead of lazy). Given that, how can I solve this problem?
Edit: Here is how my ehcache.xml is configured …
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="../config/ehcache.xsd" updateCheck="false">
<!-- This is a default configuration for 256Mb of cached data using the JVM's heap, but it must be adjusted
according to specific requirement and heap sizes -->
<defaultCache maxElementsInMemory="10000"
eternal="false"
timeToIdleSeconds="86400"
timeToLiveSeconds="86400"
overflowToDisk="false"
memoryStoreEvictionPolicy="LRU">
</defaultCache>
<cache name="main" maxElementsInMemory="10000" />
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
multicastGroupPort=4446, timeToLive=32"/>
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="hostName=localhost, port=40001,
socketTimeoutMillis=2000"/>
</ehcache>
and here is how I’m plugging it into my Spring context …
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="packagesToScan" value="org.mainco.subco" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
</property>
<property name="dataSource" ref="dataSource"/>
<property name="jpaPropertyMap" ref="jpaPropertyMap" />
</bean>
<cache:annotation-driven key-generator="cacheKeyGenerator" />
<bean id="cacheKeyGenerator" class="org.mainco.subco.myproject.util.CacheKeyGenerator" />
<bean id="cacheManager"
class="org.springframework.cache.ehcache.EhCacheCacheManager"
p:cacheManager-ref="ehcache"/>
<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
p:configLocation="classpath:ehcache.xml"
p:shared="true" />
<util:map id="jpaPropertyMap">
<entry key="hibernate.show_sql" value="false" />
<entry key="hibernate.hbm2ddl.auto" value="validate"/>
<entry key="hibernate.dialect" value="org.hibernate.dialect.MySQL5InnoDBDialect"/>
<entry key="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JBossTransactionManagerLookup" />
<entry key="hibernate.cache.region.factory_class" value="org.hibernate.cache.ehcache.EhCacheRegionFactory"/>
<entry key="hibernate.cache.provider_class" value="org.hibernate.cache.EhCacheProvider"/>
<entry key="hibernate.cache.use_second_level_cache" value="true" />
<entry key="hibernate.cache.use_query_cache" value="false" />
<entry key="hibernate.generate_statistics" value="false" />
</util:map>
<bean id="entityManager" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
Take a look at a similar question. Basically, your cache is not a Hibernate second-level cache. You are accessing a lazy uninitialized association on a detached entity instance, so a LazyInitializationException is expected to be thrown.
You can try to play around with hibernate.enable_lazy_load_no_trans, but the recommended approach is to configure Hibernate second level cache so that:
Cached entities are automatically attached to the subsequent sessions in which they are loaded.
Cached data is automatically refreshed/invalidated in the cache when they are changed.
Changes to the cached instances are synchronized taking the transaction semantics into consideration. Changes are visible to other sessions/transactions with the desired level of cache/db consistency guarantees.
Cached instances are automatically fetched from the cache when they are navigated to from the other entities which have associations with them.
EDIT
If you nevertheless want to use Spring cache for this purpose, or your requirements are such that this is an adequate solution, then keep in mind that Hibernate managed entities are not thread-safe, so you will have to store and return detached entities to/from the custom cache. Also, prior to detachment you would need to initialize all lazy associations that you expect to be accessed on the entity while it is detached.
To achieve this you could:
Explicitly detach the managed entity with EntityManager.detach. You would need to detach (or cascade the detach operation to) the associated entities as well, and make sure that references to the detached entities from other managed entities are handled appropriately; a sketch of this option follows the example below.
Or, you could execute this in a separate transaction to make sure that everything is detached and that you don't reference detached entities from the managed ones in the current persistence context:
@Override
@Cacheable(value = "main")
@Transactional(propagation = Propagation.REQUIRES_NEW)
public Item findItemById(String id) {
    Item result = entityManager.find(Item.class, id);
    Hibernate.initialize(result.getAssociation1());
    Hibernate.initialize(result.getAssociation2());
    return result;
}
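For the first option (explicit detach), a minimal sketch under the same assumptions (the association getters are hypothetical):
@Override
@Cacheable(value = "main")
public Item findItemById(String id) {
    Item result = entityManager.find(Item.class, id);
    // initialize everything that will be accessed while the entity is detached
    Hibernate.initialize(result.getAssociation1());
    Hibernate.initialize(result.getAssociation2());
    // detach so that no managed instance ever ends up in the cache
    entityManager.detach(result);
    return result;
}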
Because the Spring transaction proxy (interceptor) may be executed before the cache proxy (both have the same default order value), you would always start the new transaction, be it to really fetch the entity or just to return the cached instance.
While we may conclude that the performance penalty for starting unneeded transactions is small, the issue is that you leave a small time window in which a managed instance is present in the cache.
To avoid that, you could change the default order values:
<tx:annotation-driven order="200"/>
<cache:annotation-driven order="100"/>
so that the cache interceptor is always placed before the transaction one.
Or, to avoid changing the ordering configuration, you could simply delegate the call from the @Cacheable method to a @Transactional(propagation = Propagation.REQUIRES_NEW) method on another bean, as sketched below.
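A minimal sketch of that delegation (bean and method names are hypothetical):
@Service
public class CachingItemService {

    @Autowired
    private ItemDao itemDao; // the bean holding the transactional method

    @Cacheable(value = "main")
    public Item findItemById(String id) {
        // the REQUIRES_NEW transaction is only started on an actual cache miss
        return itemDao.findAndInitializeItem(id);
    }
}

@Repository
public class ItemDao {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public Item findAndInitializeItem(String id) {
        Item result = entityManager.find(Item.class, id);
        Hibernate.initialize(result.getAssociation1());
        return result;
    }
}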
What you implemented in your code snippets is a custom cache based on spring-cache. With your implementation you need to take care of cache eviction yourself and make sure that object graphs are fully loaded at the point they are cached. Once they are cached and the original Hibernate session that loaded them is closed, they become detached and you can no longer navigate unfetched lazy associations. Also, your custom cache in its current state caches entire entity graphs, which is probably not what you want: any part of that graph might change at any time, and your cache would need to watch for changes in all parts of that graph to handle eviction properly.
The configuration you posted in your question is not a Hibernate second-level cache.
Managing a cache is a complex endeavor, and I don't recommend doing it yourself unless you're absolutely sure you know what you're doing (but then you wouldn't be asking this question on Stack Overflow).
Let me explain what happens when you get the LazyInitializationException: you marked one of your DAO methods with @org.springframework.cache.annotation.Cacheable. What happens in this case is the following:
Spring attaches an interceptor to your managed bean. The interceptor intercepts the DAO method call, creates a cache key based on the intercepted method and the actual method arguments (this can be customized), and looks up the cache to see if there's an entry for that key. If there is an entry, it returns that entry without actually invoking your method. If there is no cache entry for that key, it invokes your method, serializes the return value, and stores it in the cache.
In the case where there was no cache entry for the key, your method gets invoked. Your method uses a Spring-provided singleton proxy to the thread-bound EntityManager, which was assigned earlier when Spring encountered the first @Transactional method invocation; in your case, the getContent(...) method of another Spring service bean. So your method loads an entity with EntityManager.find(). This gives you a partially loaded entity graph, containing uninitialized proxies and collections for associated entities not yet loaded by the persistence context.
Your method returns the partially loaded entity graph, and Spring immediately serializes it for you and stores it in the cache. Note that a serialized partially loaded entity graph deserializes to a partially loaded entity graph.
On the second invocation of the DAO method marked with @Cacheable with the same arguments, Spring finds that there is indeed an entry in the cache for that key, loads it, and deserializes it. Your DAO method is not called, since the cached entry is used. Now you hit the problem: your deserialized cached entity graph was only partially loaded when you stored it in the cache, and as soon as you touch any uninitialized part of the graph you get the LazyInitializationException. A deserialized entity is always detached, so even if the original EntityManager were still open (which it is not), you would still get the same exception.
Now the question is: what can you do to avoid the LazyInitializationException? My recommendation is that you forget about implementing a custom cache and just configure Hibernate to do the caching for you; I will describe how below. If you want to stick with the custom cache you tried to implement, here's what you need to do:
Go through your whole code base and find all invocations of your @Cacheable DAO method. Follow all possible code paths where the loaded entity graph is passed around, and mark every part of the entity graph that is ever touched by client code. Now go back to your @Cacheable method and modify it so that it loads and initializes all parts of the entity graph that could ever possibly be touched. Once the graph is returned, serialized, and later deserialized, it will always be in a detached state, so you had better make sure all possible graph paths are properly loaded. You can already see how impractical this will be. If that still hasn't convinced you not to go down this path, here's another argument.
Since you load a potentially big chunk of the database, you will have a snapshot of that part of the database at the moment it was loaded and cached. Whenever you use a cached version of this big chunk, there is a risk that you are using a stale version of the data. To defend against this, you would need to watch for any changes to the current version of that chunk of the database and evict the whole entity graph from the cache. So you pretty much need to keep track of which entities are part of your entity graph, set up event listeners for whenever those entities change, and evict the whole graph. None of these issues exist with the Hibernate second-level cache.
Now back to my recommendation: set up Hibernate second-level cache
The Hibernate second-level cache is managed by Hibernate, and you get eviction management from Hibernate automatically. With the second-level cache enabled, Hibernate caches the data needed to reconstruct your entities and, when seeking to load an entity from the database, first checks whether it has a valid cache entry for the entity; on a hit it skips the database and reconstructs the entity from its cache. (Note the difference from caching an entity graph with its possibly unfetched associations and uninitialized proxies, as in your custom cache solution.) It also replaces stale cache entries when you update an entity. It does all sorts of things related to managing the cache so that you don't have to worry about it.
Here's how you can enable the Hibernate second-level cache, on top of your existing configuration:
In addition to the second-level cache properties you already have, namely
<entry key="hibernate.cache.region.factory_class" value="org.hibernate.cache.ehcache.EhCacheRegionFactory"/>
<entry key="hibernate.cache.provider_class" value="org.hibernate.cache.EhCacheProvider"/>
<entry key="hibernate.cache.use_second_level_cache" value="true" />
add the following entry:
<entry key="javax.persistence.sharedCache.mode" value="ENABLE_SELECTIVE" />
Alternatively, you could add the shared-cache-mode option to your persistence.xml (since you didn't post it, I assumed you don't use one, hence the previous alternative; this one is preferred though):
<persistence-unit name="default">
<!-- other configuration lines stripped -->
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
<!-- other configuration lines stripped -->
</persistence-unit>
Add the @javax.persistence.Cacheable annotation to the @Entity classes you want to be cacheable.
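For example (entity body elided):
@Entity
@javax.persistence.Cacheable
public class Item {
    // fields, associations, getters/setters ...
}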
If you want to add caching for collection-valued associations, which Hibernate does not cache by default, you can add a @org.hibernate.annotations.Cache annotation (with a proper cache concurrency strategy) to each such collection:
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(name = "product_category", joinColumns = { @JoinColumn(name = "PRODUCT_ID") }, inverseJoinColumns = { @JoinColumn(name = "CATEGORY_ID") })
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
private List<Category> categories;
See Improving performance/The Second Level Cache in the Hibernate Reference Documentation for further details.
This is a nice informative article about the subject: Pitfalls of the Hibernate Second-Level / Query Caches
I have put together a small project based on your posted code snippets which you can check out to see Hibernate second-level cache in action.
The problem is that you are caching references to objects that are loaded lazily. Cache the object once it is fully loaded, or do not use the cache at all.
Here is how you could load the categories manually before caching it:
Item item = entityManager.find(Item.class, id);
// touch the lazy collection so it is initialized before the item is cached
item.getParent().getProduct().getCategories().size();
return item;
Also, a better caching strategy would be to cache at the service level of your application instead of the DAO level, or not to cache at all.
Your issue is caused by the following events:
An Item is retrieved without its categories and put in the cache in transaction 1. In transaction 2, you call the same method, get the cached Item, and try to read its categories. At that moment Hibernate tries to load the categories through the session from transaction 1, which is still associated with the Item object, but transaction 1 is already completed, so it fails.
I used the simple cache type with the config below:
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
spring.jpa.open-in-view=true
spring.cache.type=simple
I'm using Spring 3.0.6, with Hibernate 3.2.7.GA in a Java-based webapp. I'm declaring transactions with #Transactional annotations on the controllers (as opposed to in the service layer). Most of the views are read-only.
The problem is, I've got some DAOs which are using JdbcTemplate to query the database directly with SQL, and they're being called outside of a transaction. Which means they're not reusing the Hibernate SessionFactory's connection. The reason they're outside the transaction is that I'm using converters on method parameters in the controller, like so:
@Controller
@Transactional
public class MyController {
    @RequestMapping(value="/foo/{fooId}", method=RequestMethod.GET)
    public ModelAndView get(@PathVariable("fooId") Foo foo) {
        // do something with foo, and return a new ModelAndView
    }
}
public class FooConverter implements Converter<String, Foo> {
    @Override
    public Foo convert(String fooId) {
        // call FooService, which calls FooJdbcDao to look up the Foo for fooId
    }
}
My JDBC DAO relies on SimpleJdbcDaoSupport to have the jdbcTemplate injected:
#Repository("fooDao")
public class FooJdbcDao extends SimpleJdbcDaoSupport implements FooDao {
public Foo findById(String fooId) {
getJdbcTemplate().queryForObject("select * from foo where ...", new FooRowMapper());
// map to a Foo object, and return it
}
}
and my applicationContext.xml wires it all together:
<mvc:annotation-driven conversion-service="conversionService"/>
<bean id="conversionService" class="org.springframework.context.support.ConversionServiceFactoryBean">
<property name="converters">
<set>
<bean class="FooConverter"/>
<!-- other converters -->
</set>
</property>
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"
p:sessionFactory-ref="sessionFactory" />
FooConverter (which converts a path variable String to a Foo object) gets called before MyController#get() is called, so the transaction hasn't been started yet. Thus when FooJdbcDao is called to query the database, it has no way of reusing the SessionFactory's connection and has to check out its own connection from the pool.
So my questions are:
Is there any way to share a database connection between the SessionFactory and my JDBC DAOs? I'm using HibernateTransactionManager, and from looking at Spring's DataSourceUtils it appears that sharing a transaction is the only way to share the connection.
If the answer to #1 is no, then is there a way to configure OpenSessionInViewFilter to just start a transaction for us, at the beginning of the request? I'm using "on_close" for the hibernate.connection.release_mode, so the Hibernate Session and Connection are already staying open for the life of the request.
The reason this is important to me is that I'm experiencing problems under heavy load where each thread is checking out 2 connections from the pool: the first is checked out by hibernate and saved for the whole length of the thread, and the 2nd is checked out every time a JDBC DAO needs one for a query outside of a transaction. This causes deadlocks when the 2nd connection can't be checked out because the pool is empty, but the first connection is still held. My preferred solution is to make all JDBC DAOs participate in Hibernate's transaction, so that TransactionSynchronizationManager will correctly share the one single connection.
Is there any way to share a database connection between the SessionFactory and my JDBC DAOs? I'm using HibernateTransactionManager, and from looking at Spring's DataSourceUtils it appears that sharing a transaction is the only way to share the connection.
--> You can share a database connection between the SessionFactory and JdbcTemplate. What you need to do is share the same DataSource between the two; the connection pool is then shared as well. I am using this in my application.
What you need to do is configure the HibernateTransactionManager to manage transactions for both.
Add a JdbcDao class (with jdbcTemplate and dataSource properties and getters/setters) in your existing package structure (in the DAO package/layer), and extend your JDBC implementation classes from JdbcDao. If you have already configured a HibernateTransactionManager for Hibernate, you will not need to configure another transaction manager.
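A minimal sketch of such a JdbcDao base class (a sketch of the layout described above; names are assumptions):
public class JdbcDao {

    // the same DataSource instance that the SessionFactory uses
    private DataSource dataSource;
    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public DataSource getDataSource() {
        return dataSource;
    }

    protected JdbcTemplate getJdbcTemplate() {
        return jdbcTemplate;
    }
}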
The problem is, I've got some DAOs which are using JdbcTemplate to query the database directly with SQL, and they're being called outside of a transaction. Which means they're not reusing the Hibernate SessionFactory's connection.
--> You may be wrong here. You may already be using the same connection; I think the problem may lie in the HibernateTransactionManager configuration.
Check the HibernateTransactionManager javadoc: This transaction manager is appropriate for applications that use a single Hibernate SessionFactory for transactional data access, but it also supports direct DataSource access within a transaction (i.e. plain JDBC code working with the same DataSource). This allows for mixing services which access Hibernate and services which use plain JDBC (without being aware of Hibernate)!
Check my question : Using Hibernate and Jdbc both in Spring Framework 3.0
Configuration: Add your DAO and service classes alongside your current Hibernate classes (do not make separate packages for them) if you want to work with the existing configuration. Otherwise, configure the HibernateTransactionManager in the XML configuration and use the @Transactional annotation; a sketch follows.
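For illustration, a sketch of a service that mixes Hibernate and plain JDBC in one transaction (class name and SQL are hypothetical; both beans must be wired to the same DataSource):
@Service
public class MixedFooService {

    @Autowired
    private SessionFactory sessionFactory;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Transactional
    public void updateFoo(Foo foo) {
        // Hibernate write, goes through the current session
        sessionFactory.getCurrentSession().update(foo);
        // plain JDBC write; joins the same transaction via HibernateTransactionManager
        jdbcTemplate.update("update foo set audit_flag = 1 where id = ?", foo.getId());
    }
}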
Mistake in your code:
@Controller
@Transactional
public class MyController {......
Use the @Transactional annotation in service classes (best practice).
Correction:
@Transactional(readOnly = true)
public class FooServiceImpl implements FooService {

    public Foo getFoo(String fooName) {
        // do something
    }

    // these settings take precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // do something
    }
}
I have a situation where I have to handle multiple clients in one app, and each client has a separate database. To support that I'm using a Spring custom scope, quite similar to the built-in request scope. A user authenticates in each request, and the context client ID is set based on the passed credentials. The scoping itself seems to be working properly.
So I used my custom scope to create a scoped proxy for my DataSource to support a different database per client, and I get connections to the proper databases.
Then I created a scoped proxy for the EntityManagerFactory to use JPA. This part also looks OK.
Then I added a scoped proxy for the PlatformTransactionManager for declarative transaction management. I use @Transactional on my service layer and it propagates nicely to my Spring Data powered repository layer.
Everything works correctly as long as I use only JPA. I can even switch context to a different client within the request (I use ThreadLocals under the hood) and transactions against both databases are handled correctly.
The problems start when I try to use JdbcTemplate in one of my custom repositories. At first glance all looks OK, as no exceptions are thrown, but when I check the database for the objects I thought I had inserted with my custom JDBC-based repository, they're not there!
I know for sure that I can use JPA and JDBC together by declaring only a JpaTransactionManager and passing both the DataSource and EntityManagerFactory to it; I checked it without the scoped proxies and it works.
So the question is: how do I make JDBC work together with JPA using the JpaTransactionManager when I have scope-proxied the DataSource, EntityManagerFactory and PlatformTransactionManager beans? I repeat that using only JPA works perfectly, but adding plain JDBC into the mix does not.
UPDATE 1: One more thing: all read-only (SELECT) operations work fine with JDBC too; only writes (INSERT, UPDATE, DELETE) end up not committed or rolled back.
UPDATE 2: As @Tomasz suggested, I've removed the scoped proxies from the EntityManagerFactory and PlatformTransactionManager, as those are indeed not needed and add more confusion than anything else.
The real problem seems to be switching the scope context within a transaction. The TransactionSynchronizationManager binds transactional resources (i.e. EMF or DS) to the thread at transaction start. It can unwrap the scoped proxy, so it binds the actual instance of the resource from the scope active at the time the transaction starts. When I then change the context within the transaction, it all gets messed up.
It seems I need to suspend the active transaction and set aside the current transaction context so I can clear it upon entering another scope, making Spring think it's not inside a transaction any more and forcing it to create a new one for the new scope when needed. Then, when leaving the scope, I'd have to restore the previously suspended transaction. Unfortunately, I have been unable to come up with a working implementation yet. Any hints appreciated.
And below is some code of mine, but it's pretty standard, except for the scoped-proxies.
The DataSource:
<!-- provides database name based on client context -->
<bean id="clientDatabaseNameProvider"
class="com.example.common.spring.scope.ClientScopedNameProviderImpl"
c:clientScopeHolder-ref="clientScopeHolder"
p:databaseName="${base.db.name}" />
<!-- an extension of org.apache.commons.dbcp.BasicDataSource that
uses proper database URL based on database name given by above provider -->
<bean id="jpaDataSource" scope="client"
class="com.example.common.spring.datasource.MysqlDbInitializingDataSource"
destroy-method="close"
p:driverClassName="${mysql.driver}"
p:url="${mysql.url}"
p:databaseNameProvider-ref="clientDatabaseNameProvider"
p:username="${mysql.username}"
p:password="${mysql.password}"
p:defaultAutoCommit="false"
p:connectionProperties="sessionVariables=storage_engine=InnoDB">
<aop:scoped-proxy proxy-target-class="false" />
</bean>
The EntityManagerFactory:
<bean id="jpaVendorAdapter"
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
p:database="MYSQL"
p:generateDdl="true"
p:showSql="true" />
<util:properties id="jpaProperties">
<!-- omitted for readability -->
</util:properties>
<bean id="jpaDialect"
class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:packagesToScan="com.example.model.core"
p:jpaVendorAdapter-ref="jpaVendorAdapter"
p:dataSource-ref="jpaDataSource"
p:jpaDialect-ref="jpaDialect"
p:jpaProperties-ref="jpaProperties" />
The PlatformTransactionManager:
<bean id="transactionManager"
class="org.springframework.orm.jpa.JpaTransactionManager"
p:dataSource-ref="jpaDataSource"
p:entityManagerFactory-ref="entityManagerFactory" />
<tx:annotation-driven proxy-target-class="false" mode="proxy"
transaction-manager="transactionManager" />
I have code with this structure (there are a lot of classes, but the schema is like this):
void f() {
    MyObj o = db.getById(id);
    o.setField1(value);
    db.update(o);
    o = db.getById(id);
    assertEquals(value, o.getField1());
}
The update and get methods use the same data source, injected by Spring. get works via JdbcTemplate, and update just takes a connection from the DataSource and uses raw JDBC.
update is marked with the @Transactional annotation.
Here is the transaction manager definition from the Spring config:
<tx:annotation-driven transaction-manager="TransactionManager"/>
<bean id="TransactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
The issue is that if I call update and then get in different invocations of the web service methods that use them, the result is correct and I get the updated values.
But if I call them sequentially in one unit-test method, after update I don't see the updated value.
I can't post the whole read/write code here, because it is large and split into many files, but perhaps you have some ideas on how to fix it.
Thanks.
You have to flush the update before you can see it in the select.
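A minimal sketch of the idea, assuming the write goes through a Hibernate session and the read uses a JdbcTemplate on the same DataSource, all inside one transaction (names are hypothetical):
// inside one @Transactional method
session.update(o);   // the UPDATE is only queued in the session so far
session.flush();     // the SQL UPDATE is actually executed now
MyObj fromDb = jdbcTemplate.queryForObject(
        "select * from my_obj where id = ?", new MyObjRowMapper(), o.getId());
// fromDb now reflects the update within the same transaction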
You can try:
entityManager.refresh(yourEntity);
This way, you will get the entity's most recent state wherever you use it after this line.
I'm working on a project that has in the last couple of months developed an incredibly annoying bug involving object updates. Certain objects (most notably users), when updated in Hibernate, are never marked as dirty and flushed.
Session session = factory.openSession(interceptor);
Transaction tx = session.beginTransaction();
Object object = session.load(someId);
// Modify object ...
session.update(object);
tx.commit();
session.flush();
session.close();
All the appropriate methods on the interceptor get called by Hibernate except Interceptor.onFlushDirty and Interceptor.findDirty. My initial assumption was that this was an issue with detached objects, as we were storing the user object in an HTTP session; however, a refactor that removed all detached objects did not solve the problem.
The transaction is definitely getting committed and the session is being flushed and closed on completion. I also double checked to ensure the session wasn't in read-only mode.
I have also tried using Session.merge in place of Session.update to no effect. When using Session.merge the object returned contains the correct updated information but again the database is never updated.
I have found this SO question that seems to describe a similar issue (relevant because the object I'm working with stores a custom enum field) and a good description of Hibernate's dirty checking mechanism, but other than that, information has been sparse.
My cfg.xml looks like this:
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.default_batch_fetch_size">1024</property>
<property name="hibernate.order_inserts">true</property>
<property name="hibernate.order_updates">true</property>
<property name="hibernate.show_sql">false</property>
<property name="hibernate.c3p0.aquire_increment">1</property>
<property name="hibernate.c3p0.initial_pool_size">1</property>
<property name="hibernate.c3p0.min_size">4</property>
<property name="hibernate.c3p0.max_size">32</property>
<property name="hibernate.c3p0.idle_test_period">100</property> <!-- seconds -->
<property name="hibernate.c3p0.timeout">600</property> <!-- seconds -->
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">true</property>
<property name="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.jdbc.fetch_size">1024</property>
<property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hibernate.search.worker.execution">async</property>
<property name="hibernate.search.default.directory_provider">org.hibernate.search.store.RAMDirectoryProvider</property>
<property name="hibernate.search.default.indexwriter.batch.ram_buffer_size">256</property>
<property name="hibernate.search.default.optimizer.transaction_limit.max">1000</property>
UPDATE: I have tried both disabling and enabling Hibernate's second-level cache as Finbarr suggested, to no effect.
Anyone have any suggestions on other things I might try?
Try disabling Hibernate's second-level cache. Actually, can you post your cfg.xml?
Finbarr was partially correct: it is almost certainly a caching issue, but not the second-level cache. By forcing Hibernate to do the update irrespective of whether the object changed, everything now saves properly.
session.evict(object);
session.update(object);
In the context of my project, I've stumbled upon a piece of code that did just what you've mentioned above: evict, then update.
Did further investigation shed more insight on this issue? Was it a bug or not, after all?
Have you reported this behavior/bug to Hibernate, and do you have a reference to that JIRA issue?
Which version of Hibernate was this?
On the potential cause:
1. You've mentioned a read-only session, but not read-only at the query or object level.
2. Have you checked that the object was not getting evicted from the session (EvictEventListener)?
3. Have you checked that the object was not being changed back to its original state, hence requiring no update, or that update wasn't called again with an older (detached) version of the object?
Any input appreciated.
Regards,
Christophe
The problem is in the code: you are committing the transaction before flushing the session!
You must flush the session, which will 'synchronize' the session state with the current transaction, and THEN commit the transaction:
Session session = factory.openSession(interceptor);
Transaction tx = session.beginTransaction();
Object object = session.load(someId);
// Modify object ...
session.update(object);
session.flush();
tx.commit();
session.close();