I'm trying to implement a shared second-level Hibernate cache using JCache and Hazelcast.
The goal is to have multiple servers joined in a Hazelcast cluster sharing the same Hibernate second-level cache,
so when Hibernate on one of the servers (nodes) updates the cache, all other servers (nodes) have their second-level cache updated as well.
I have managed to establish a Hazelcast cluster with two nodes, where each one "sees" the other's second-level cache.
The problem is that each node is still using its own cache, so when one of them updates the cache,
the other continues to fetch old (unchanged) entries from its outdated copy.
In other words, I have two second-level caches distributed between two nodes, with each node using a different cache.
I'm using Hazelcast 4.2, Hibernate 5.4, Spring Boot 2.4.8
These are my Spring Boot properties:
spring.jpa.properties.hibernate.generate_statistics = true
spring.jpa.properties.hibernate.cache.use_second_level_cache = true
spring.jpa.properties.hibernate.cache.use_query_cache = true
spring.jpa.properties.javax.persistence.sharedCache.mode = ENABLE_SELECTIVE
spring.jpa.properties.hibernate.cache.region.factory_class = jcache
spring.jpa.properties.hibernate.javax.cache.provider = com.hazelcast.cache.impl.HazelcastServerCachingProvider
spring.jpa.properties.hibernate.javax.cache.uri = classpath:hazelcast.xml
Sample cache configuration in hazelcast.xml:
<cache name="jobsCache">
<statistics-enabled>true</statistics-enabled>
<management-enabled>true</management-enabled>
<eviction size="200" max-size-policy="ENTRY_COUNT" eviction-policy="LRU" />
<expiry-policy-factory>
<timed-expiry-policy-factory expiry-policy-type="CREATED" duration-amount="10" time-unit="MINUTES"/>
</expiry-policy-factory>
</cache>
Am I missing some configuration, or have I done something wrong?
Thank you!
You need to consider the cache concurrency strategy (CacheConcurrencyStrategy).
If your cached data is updated frequently and you want updates to be visible on all instances, use CacheConcurrencyStrategy.READ_WRITE.
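For example, here is a minimal sketch (the entity and its fields are hypothetical; the region name matches the jobsCache from your hazelcast.xml) of marking an entity cacheable with that strategy:

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Hypothetical entity: with READ_WRITE, an update on one node invalidates
// the shared JCache entry, so the other nodes re-read fresh data.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "jobsCache")
public class Job {
    // id and other fields omitted
}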
We have used a class-level @Transactional annotation to enable the rollback mechanism in Spring Batch.
The code is given below:
@Transactional(rollbackFor = { DaoException.class, LogicRuntimeException.class })
public class ClassA {}
Here it is expected that whenever any of these exceptions is thrown from any method of ClassA, all operations already executed (but not yet committed) in that class's transaction are rolled back.
We have observed that when we execute this with the local setup it works perfectly, but when the WAR is deployed to the server, where the global setup is used, the rollback does not work at all.
The main difference is that for the local system we have used a plain JDBC data source:
<beans:property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
and for the server (global) setup we have used a JNDI lookup:
org.springframework.jndi.JndiObjectFactoryBean
Also, FYI, to set auto-commit to false, the following code is common to both the local and global setups:
SqlSession session = sqlSessionFactory.openSession();
session.getConnection().setAutoCommit(false);
Here sqlSessionFactory is an instance of org.mybatis.spring.SqlSessionFactoryBean, to which all the mappers and the data source are wired.
I just want to know if I am missing any specific configuration for global transactions, perhaps to set auto-commit to false, or any other details.
I'm using Hibernate 5.1.0.Final with Ehcache and Spring 3.2.11.RELEASE. I have the following @Cacheable annotation set up in one of my DAOs:
@Override
@Cacheable(value = "main")
public Item findItemById(String id)
{
    return entityManager.find(Item.class, id);
}
The item being returned has a number of associations, some of which are lazy. So for instance, it (eventually) references the field:
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(name = "product_category", joinColumns = { @JoinColumn(name = "PRODUCT_ID") }, inverseJoinColumns = { @JoinColumn(name = "CATEGORY_ID") })
private List<Category> categories;
I notice that within one of my methods marked @Transactional, when the result of the above method is retrieved from the second-level cache, I get the below exception when trying to iterate over the categories field:
@Transactional(readOnly = true)
public UserContentDto getContent(String itemId, String pageNumber) throws IOException
{
    Item item = contentDao.findItemById(itemId);
    …
    // The line below throws a LazyInitializationException
    for (Category category : item.getParent().getProduct().getCategories())
    {
The stack trace is:
16:29:42,557 INFO [org.directwebremoting.log.accessLog] (ajp-/127.0.0.1:8009-18) Method execution failed: : org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: org.mainco.subco.ecom.domain.Product.standardCategories, could not initialize proxy - no Session
at org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:579) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:203) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:558) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:131) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.hibernate.collection.internal.PersistentBag.iterator(PersistentBag.java:277) [hibernate-myproject-5.1.0.Final.jar:5.1.0.Final]
at org.mainco.subco.ebook.service.ContentServiceImpl.getCorrelationsByItem(ContentServiceImpl.java:957) [myproject-90.0.0-SNAPSHOT.jar:]
at org.mainco.subco.ebook.service.ContentServiceImpl.getContent(ContentServiceImpl.java:501) [myproject-90.0.0-SNAPSHOT.jar:]
at sun.reflect.GeneratedMethodAccessor819.invoke(Unknown Source) [:1.6.0_65]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [rt.jar:1.6.0_65]
at java.lang.reflect.Method.invoke(Method.java:597) [rt.jar:1.6.0_65]
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96) [spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260) [spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94) [spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) [spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at com.sun.proxy.$Proxy126.getContent(Unknown Source)
I understand that the Hibernate session is closed; I do not care about why this happens. Also, it is NOT an option to make the above association eager (instead of lazy). Given that, how can I solve this problem?
Edit: Here is how my ehcache.xml is configured …
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="../config/ehcache.xsd" updateCheck="false">
<!-- This is a default configuration for 256Mb of cached data using the JVM's heap, but it must be adjusted
according to specific requirement and heap sizes -->
<defaultCache maxElementsInMemory="10000"
eternal="false"
timeToIdleSeconds="86400"
timeToLiveSeconds="86400"
overflowToDisk="false"
memoryStoreEvictionPolicy="LRU">
</defaultCache>
<cache name="main" maxElementsInMemory="10000" />
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
multicastGroupPort=4446, timeToLive=32"/>
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="hostName=localhost, port=40001,
socketTimeoutMillis=2000"/>
</ehcache>
and here is how I’m plugging it into my Spring context …
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="packagesToScan" value="org.mainco.subco" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
</property>
<property name="dataSource" ref="dataSource"/>
<property name="jpaPropertyMap" ref="jpaPropertyMap" />
</bean>
<cache:annotation-driven key-generator="cacheKeyGenerator" />
<bean id="cacheKeyGenerator" class="org.mainco.subco.myproject.util.CacheKeyGenerator" />
<bean id="cacheManager"
class="org.springframework.cache.ehcache.EhCacheCacheManager"
p:cacheManager-ref="ehcache"/>
<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
p:configLocation="classpath:ehcache.xml"
p:shared="true" />
<util:map id="jpaPropertyMap">
<entry key="hibernate.show_sql" value="false" />
<entry key="hibernate.hbm2ddl.auto" value="validate"/>
<entry key="hibernate.dialect" value="org.hibernate.dialect.MySQL5InnoDBDialect"/>
<entry key="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JBossTransactionManagerLookup" />
<entry key="hibernate.cache.region.factory_class" value="org.hibernate.cache.ehcache.EhCacheRegionFactory"/>
<entry key="hibernate.cache.provider_class" value="org.hibernate.cache.EhCacheProvider"/>
<entry key="hibernate.cache.use_second_level_cache" value="true" />
<entry key="hibernate.cache.use_query_cache" value="false" />
<entry key="hibernate.generate_statistics" value="false" />
</util:map>
<bean id="entityManager" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
Take a look at a similar question. Basically, your cache is not a Hibernate second-level cache. You are accessing a lazy uninitialized association on a detached entity instance, so a LazyInitializationException is expected to be thrown.
You can try to play around with hibernate.enable_lazy_load_no_trans, but the recommended approach is to configure the Hibernate second-level cache so that:
Cached entities are automatically attached to the subsequent sessions in which they are loaded.
Cached data is automatically refreshed/invalidated in the cache when it is changed.
Changes to the cached instances are synchronized taking the transaction semantics into consideration. Changes are visible to other sessions/transactions with the desired level of cache/db consistency guarantees.
Cached instances are automatically fetched from the cache when they are navigated to from the other entities which have associations with them.
EDIT
If you nevertheless want to use Spring cache for this purpose, or your requirements are such that this is an adequate solution, then keep in mind that Hibernate-managed entities are not thread-safe, so you will have to store and return detached entities to/from the custom cache. Also, prior to detachment you would need to initialize all lazy associations that you expect to be accessed on the entity while it is detached.
To achieve this you could:
Explicitly detach the managed entity with EntityManager.detach. You would need to detach the associated entities as well (or cascade the detach operation to them), and make sure that references to the detached entities from other managed entities are handled appropriately.
Or, you could execute this in a separate transaction to make sure that everything is detached and that you don't reference detached entities from the managed ones in the current persistence context:
@Override
@Cacheable(value = "main")
@Transactional(propagation = Propagation.REQUIRES_NEW)
public Item findItemById(String id) {
    Item result = entityManager.find(Item.class, id);
    Hibernate.initialize(result.getAssociation1());
    Hibernate.initialize(result.getAssociation2());
    return result;
}
Because it may happen that the Spring transaction proxy (interceptor) is executed before the cache proxy (both have the same default order value), you would then always start a nested transaction, whether to really fetch the entity or to just return the cached instance.
While we may conclude that the performance penalty for starting unneeded nested transactions is small, the issue here is that you leave a small time window during which a managed instance is present in the cache.
To avoid that, you could change the default order values:
<tx:annotation-driven order="200"/>
<cache:annotation-driven order="100"/>
so that the cache interceptor is always placed before the transaction one.
Or, to avoid ordering configuration changes, you could simply delegate the call from the @Cacheable method to a @Transactional(propagation = Propagation.REQUIRES_NEW) method on another bean.
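A hedged sketch of that delegation (the bean split, names, and the association getters from the snippet above are illustrative):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.hibernate.Hibernate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CachingItemDao {

    @Autowired
    private TransactionalItemLoader loader;

    // No @Transactional here: cache hits are answered directly, and the
    // nested transaction below is started only on an actual cache miss.
    @Cacheable(value = "main")
    public Item findItemById(String id) {
        return loader.loadFullyInitialized(id);
    }
}

@Service
class TransactionalItemLoader {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public Item loadFullyInitialized(String id) {
        Item result = entityManager.find(Item.class, id);
        Hibernate.initialize(result.getAssociation1()); // placeholder associations
        Hibernate.initialize(result.getAssociation2());
        return result; // detached once the new transaction ends
    }
}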
What you implemented in your code snippets is a custom cache based on spring-cache. With your implementation you would need to take care of cache eviction, and of making sure that the object graphs are properly loaded at the point when they get cached. Once they are cached and the original Hibernate session that loaded them is closed, they become detached, and you can no longer navigate unfetched lazy associations. Also, your custom cache solution in its current state would cache entity graphs, which is probably not what you want, since any part of that graph might change at a given time, and your cache solution would need to watch for changes in all parts of that graph to properly handle evictions.
The configuration you posted in your question is not a Hibernate second-level cache.
Managing a cache is a complex endeavor, and I don't recommend doing it by yourself unless you're absolutely sure what you're doing (but then you wouldn't be asking this question on Stack Overflow).
Let me explain what is happening when you get the LazyInitializationException: you marked one of your DAO methods with @org.springframework.cache.annotation.Cacheable. What happens in this case is the following:
Spring attaches an interceptor to your managed bean. The interceptor intercepts the DAO method call, creates a cache key based on the intercepted method and the actual method arguments (this can be customized), and looks up the cache to see whether there is an entry for that key. If there is an entry, it returns that entry without actually invoking your method. If there is no cache entry for that key, it invokes your method, serializes the return value, and stores it in the cache.
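As an aside, the key-generation step can be customized by supplying a KeyGenerator (your context wires a cacheKeyGenerator bean). Here is a minimal sketch of what such a class might look like against the Spring 3.2-era contract; the key scheme is illustrative, not your actual implementation:

import java.lang.reflect.Method;
import java.util.Arrays;
import org.springframework.cache.interceptor.KeyGenerator;

public class IllustrativeCacheKeyGenerator implements KeyGenerator {

    @Override
    public Object generate(Object target, Method method, Object... params) {
        // Combine the method identity with the actual arguments, which is
        // what the cache interceptor uses to look up an entry.
        return method.getDeclaringClass().getName() + "." + method.getName()
                + Arrays.deepToString(params);
    }
}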
For the case when there was no cache entry for the key, your method will get invoked. Your method uses a Spring-provided singleton proxy to the thread-bound EntityManager, which was assigned earlier when Spring encountered the first @Transactional method invocation. In your case this was the getContent(...) method of another Spring service bean. So your method loads an entity with EntityManager.find(). This will give you a partially loaded entity graph containing uninitialized proxies and collections for other associated entities not yet loaded by the persistence context.
Your method returns with the partially loaded entity graph, and Spring will immediately serialize it for you and store it in the cache. Note that a serialized partially loaded entity graph will deserialize to a partially loaded entity graph.
On the second invocation of the DAO method marked with @Cacheable with the same arguments, Spring will find that there is indeed an entry in the cache corresponding to that key, and will load and deserialize the entry. Your DAO method will not be called, since the cached entry is used. Now you encounter the problem: your deserialized cached entity graph was only partially loaded when you stored it in the cache, and as soon as you touch any uninitialized part of the graph you'll get the LazyInitializationException. A deserialized entity will always be detached, so even if the original EntityManager were still open (which it is not), you would still get the same exception.
Now the question is: what can you do to avoid the LazyInitializationException? Well, my recommendation is that you forget about implementing a custom cache and just configure Hibernate to do the caching for you. I will talk about how to do that later. If you want to stick with the custom cache you tried to implement, here's what you need to do:
Go through your whole code base and find all invocations of your @Cacheable DAO method. Follow all possible code paths where the loaded entity graph is passed around, and mark every part of the entity graph that ever gets touched by client code. Now go back to your @Cacheable method and modify it so that it loads and initializes all parts of the entity graph that could possibly be touched. Once you return the graph and it gets serialized, and later deserialized, it will always be in a detached state, so you had better make sure all possible graph paths are properly loaded. You should already sense how impractical this will end up. If that still didn't convince you not to follow this direction, here's another argument.
Since you load up a potentially big chunk of the database, you will have a snapshot of that part of the database at the time it was actually loaded and cached. Whenever you use a cached version of this big chunk of the database, there is a risk that you are using a stale version of that data. To defend against this, you would need to watch for any changes in the current version of that big chunk of the database and evict the whole entity graph from the cache. So you pretty much need to take into account which entities are part of your entity graph, set up event listeners for whenever those entities change, and evict the whole graph. None of these issues exist with the Hibernate second-level cache.
Now back to my recommendation: set up Hibernate second-level cache
The Hibernate second-level cache is managed by Hibernate, and you get eviction management from Hibernate automatically. If you have the second-level cache enabled, Hibernate will cache the data needed to reconstruct your entities and, if it finds a valid cache entry when seeking to load an entity from the database, it will skip hitting the database and reconstruct your entity from its cache. (Note the difference from caching an entity graph with its possibly unfetched associations and uninitialized proxies, as in your custom cache solution.) It will also replace stale cache entries when you update an entity. It does all sorts of things related to managing the cache, so that you don't have to worry about them.
Here's how you can enable the Hibernate second-level cache. In addition to the Hibernate properties you already have for second-level cache management, namely
<entry key="hibernate.cache.region.factory_class" value="org.hibernate.cache.ehcache.EhCacheRegionFactory"/>
<entry key="hibernate.cache.provider_class" value="org.hibernate.cache.EhCacheProvider"/>
<entry key="hibernate.cache.use_second_level_cache" value="true" />
add the following entry:
<entry key="javax.persistence.sharedCache.mode" value="ENABLE_SELECTIVE" />
Alternatively, you could add a shared-cache-mode configuration option to your persistence.xml (since you didn't post one, I assumed you don't use it, hence the previous alternative; the following one is preferred, though):
<persistence-unit name="default">
<!-- other configuration lines stripped -->
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
<!-- other configuration lines stripped -->
</persistence-unit>
Add the javax.persistence.Cacheable annotation (@Cacheable) to the @Entity classes you want to be cacheable.
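For example, a minimal sketch using the Item entity from the question (fields omitted):

import javax.persistence.Cacheable;
import javax.persistence.Entity;

@Entity
@Cacheable // opted in, since shared-cache-mode is ENABLE_SELECTIVE
public class Item {
    // fields, associations, getters and setters omitted
}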
If you want to add caching for collection-valued associations, which Hibernate doesn't cache by default, you can add a @org.hibernate.annotations.Cache annotation (with a proper cache concurrency strategy choice) to each such collection:
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(name = "product_category", joinColumns = { @JoinColumn(name = "PRODUCT_ID") },
        inverseJoinColumns = { @JoinColumn(name = "CATEGORY_ID") })
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
private List<Category> categories;
See Improving performance/The Second Level Cache in the Hibernate Reference Documentation for further details.
This is a nice informative article about the subject: Pitfalls of the Hibernate Second-Level / Query Caches
I have put together a small project based on your posted code snippets which you can check out to see Hibernate second-level cache in action.
The problem is that you are caching references to objects which are loaded lazily. Cache the object once it is fully loaded, or do not use the cache at all.
Here is how you could load the categories manually before caching it:
Item item = entityManager.find(Item.class, id);
// Touch the lazy chain while the session is open; calling size() (or
// Hibernate.initialize) forces the collection to actually load.
item.getParent().getProduct().getCategories().size();
return item;
Also, a better caching strategy would be to cache at the service level of your application instead of the DAO level, or not to cache at all.
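A hedged sketch of that idea (ContentService and ItemDto are hypothetical): cache a fully populated, plain DTO at the service level, so nothing lazy can leak out of the cache:

import java.util.ArrayList;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ContentService {

    @Autowired
    private ContentDao contentDao; // the DAO from the question

    @Transactional(readOnly = true)
    @Cacheable(value = "main")
    public ItemDto getItemDto(String id) {
        Item item = contentDao.findItemById(id);
        List<String> categoryNames = new ArrayList<String>();
        // Walk the lazy associations while the session is still open.
        for (Category category : item.getParent().getProduct().getCategories()) {
            categoryNames.add(category.getName());
        }
        return new ItemDto(id, categoryNames); // plain data, safe to cache
    }
}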
Your issue is caused by the following events:
An Item is retrieved without its categories and then put in the cache in transaction 1. In transaction 2, you call the same method, retrieve the Item, and try to read its categories. At that moment Hibernate tries to read the categories through the session from transaction 1, which is associated with the Item object, but transaction 1 is already completed, so it fails.
I used a simple-type cache with the following config:
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
spring.jpa.open-in-view=true
spring.cache.type=simple
In my ActiveMQ configuration I would like to change the default DB lock transaction isolation level to TRANSACTION_REPEATABLE_READ.
The API documentation says:
public void setTransactionIsolation(int transactionIsolation)
Set the transaction isolation level to something other than TRANSACTION_READ_UNCOMMITTED. This allowable dirty isolation level may not be achievable in clustered DB environments, so a more restrictive and expensive option may be needed, like TRANSACTION_REPEATABLE_READ. See the isolation level constants in Connection.
In the XML configuration, the jdbcPersistenceAdapter's transactionIsolation attribute accepts only integer-type values, so I cannot use the Connection.TRANSACTION_REPEATABLE_READ constant directly, but only its value (4):
<persistenceAdapter>
<jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#mysql-ds" transactionIsolation="4" lockKeepAlivePeriod="5000">
<locker>
<lease-database-locker lockAcquireSleepInterval="10000"/>
</locker>
</jdbcPersistenceAdapter>
</persistenceAdapter>
Is there a way, that I could specify the constant instead of hardcoding number "4"?
As ActiveMQ is Spring-based, I thought I could try to assign it somehow by using <util:constant>, but I could not find out how to do it...
Try it this way:
<util:constant id="transactionType" static-field="java.sql.Connection.TRANSACTION_REPEATABLE_READ" />
EDIT:
The problem is not in Spring but in the ActiveMQ XML schema:
<xs:attribute name="transactionIsolation" type="xs:integer">
So it won't accept any value other than a hardcoded int. You can try putting a property placeholder here:
transactionIsolation="#{myproperty}"
but I'm not sure if this will work.
A workaround for this problem is to configure ActiveMQ with pure Spring beans (<bean id="...">) without using the dedicated AMQ tags.
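For example, a hedged sketch of that route done programmatically (the wiring is illustrative; JDBCPersistenceAdapter and BrokerService are the classes the XML tags configure), which lets you use the Connection constant directly:

import java.sql.Connection;
import javax.sql.DataSource;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;

public class BrokerConfig {

    public BrokerService broker(DataSource mysqlDs) throws Exception {
        JDBCPersistenceAdapter adapter = new JDBCPersistenceAdapter();
        adapter.setDataSource(mysqlDs);
        // The constant instead of the hardcoded number 4:
        adapter.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(adapter);
        return broker;
    }
}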
EDIT 2: Here is a sample config with pure Spring tags: http://activemq.apache.org/jms-and-jdbc-operations-in-one-transaction.html
I'm using an extended persistence context (an EntityManager injected into a SFSB) and have additionally set @TransactionManagement(value = TransactionManagementType.BEAN) on the SFSB to have full control over the UserTransaction.
The transaction is controlled on the client side, where I look up the SFSBs containing references to the entity beans.
SymbolischeWerte sbw = (SymbolischeWerte) symbolischeWerteHome.findByPrimaryKey(BigDecimal.valueOf(24704578762L));
System.out.println(symbolischeWerteHome.getSEQ_ID() + "\t\t" + symbolischeWerteHome.getName());
symbolischeWerteHome.beginTransaction();
symbolischeWerteHome.setName(symbolischeWerteHome.getName().concat("A"));
symbolischeWerteHome.commitTransaction();
that works so far!
After enabling JBoss Cache and multiple clients, only the first client causes a database select. The others get the entity from cache.
perfect!
The problem:
Two clients (CLIENTA, CLIENTB) concurrently look up an entity with the same primary key. While CLIENTA runs through the program, CLIENTB is manually halted after findByPrimaryKey.
When CLIENTA has finished (its value is successfully persisted), CLIENTB's system out shows the old value, which is then modified and stored to the database too.
So I'm losing CLIENTA's values!!
Is this a JBoss Cache configuration problem or is this a general problem of my systems design?
Cache config for entity:
@Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL, region = "com.culturall.pension.system.SymbolischeWerteEntity")
Cache config in persistence.xml
<property name="hibernate.cache.region.factory_class" value="org.hibernate.cache.jbc2.JndiMultiplexedJBossCacheRegionFactory"/>
<property name="hibernate.cache.region.jbc2.cachefactory" value="java:CacheManager"/>
<property name="hibernate.cache.region.jbc2.cfg.entity" value="mvcc-entity"/>
<property name="hibernate.cache.region.jbc2.cfg.query" value="local-query"/>
Thx for ANY advice!
If I read you right, you configured the cache to be transactional. This by definition means clients in different transactions see different versions of data; if the data was modified in another transaction, you need to refresh the data from the DB explicitly (thus discarding your changes) to see those changes.
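If you stay with the transactional strategy, here is a minimal sketch of that explicit refresh (assuming direct EntityManager access; the entity and key follow the question):

import java.math.BigDecimal;
import javax.persistence.EntityManager;

public class RefreshBeforeUpdate {

    public void updateName(EntityManager entityManager) {
        SymbolischeWerte sbw =
                entityManager.find(SymbolischeWerte.class, BigDecimal.valueOf(24704578762L));
        // Discard possibly stale cached state and re-read the current DB row
        // before applying this client's change.
        entityManager.refresh(sbw);
        sbw.setName(sbw.getName().concat("A"));
    }
}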