Wildfly - Infinispan Transactions configuration - java

I am using Wildfly 8.2 with its included Infinispan (6.0.2) and I am trying to cache all values from an Oracle database table in an Infinispan cache. In most cases this works, but sometimes it does not: when accessing cache.values() (which may not be a good idea for performance, but serves as an example), it sometimes appears to be empty and sometimes contains the values correctly. I therefore suspect a problem with the configuration of the transactions; when I make the Infinispan cache non-transactional, the problem disappears.
The service which accesses the cache and the DB is an EJB bean with container-managed transactions. On initialization of the service, all data is loaded from the DB (it does not contain many entries).
According to what's new in EJB 3.2, it should be possible to access the DB transactionally in an EJB singleton bean.
Is the configuration of the data source and the Infinispan cache correct? Can I use a non-XA datasource with Infinispan and expect it to work consistently? According to the Infinispan docs, NON_XA means that Infinispan registers as a Synchronization, which should be fine, shouldn't it?
The cache is configured in the standalone-full.xml as follows (when removing <transaction mode="NON_XA" locking="PESSIMISTIC"/> the problem disappears, at the price of having no transactional cache):
<cache-container name="cacheContainer" start="EAGER">
    <local-cache name="my_table_cache">
        <locking isolation="REPEATABLE_READ"/>
        <transaction mode="NON_XA" locking="PESSIMISTIC"/>
    </local-cache>
</cache-container>
The Oracle DS is defined as follows
<datasource jndi-name="java:jboss/datasources/myDataSource" pool-name="dataSource" enabled="true">
    <connection-url>jdbc:oracle:thin:@127.0.0.1:1523:orcl</connection-url>
    <driver>ojdbc7.jar</driver>
    <pool>
        <max-pool-size>25</max-pool-size>
    </pool>
    <security>
        <user-name>myuser</user-name>
        <password>myuser</password>
    </security>
    <timeout>
        <blocking-timeout-millis>5000</blocking-timeout-millis>
    </timeout>
</datasource>
My service class (the DAO uses plain JDBC operations, not Hibernate or similar):
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class MyService {

    @Resource(lookup = "java:jboss/infinispan/container/cacheContainer")
    protected EmbeddedCacheManager cm;

    @Inject
    protected MyDao myDao; // DAO using plain JDBC

    protected Cache<String, MyEntity> cache;

    @PostConstruct
    private void init() {
        try {
            cache = getCache();
        } catch (SQLException ex) {
            log.fatal("could not initialize caches", ex);
            throw new IllegalStateException(ex);
        }
    }

    public Cache<String, MyEntity> getCache() {
        Cache<String, MyEntity> cache = cm.getCache(getCacheName(), true);
        fillCache(cache);
        return cache;
    }

    protected void fillCache(Cache<String, MyEntity> cache) {
        List<MyEntity> entities = myDao.getEntities();
        for (MyEntity e : entities) {
            cache.put(e.getKey(), e);
        }
    }

    public MyEntity getEntity(String key) {
        return cache.get(key);
    }

    public void insert(MyEntity entity) {
        myDao.insert(entity);
        cache.put(entity.getKey(), entity);
    }

    public void debug() {
        log.debug(cache.values());
    }
}

When using NON_XA transactions, a failure to commit the transaction in the cache may still let the overall transaction commit, and you would not get any exception telling you that the cache is inconsistent.
As for cache.values(): prior to Infinispan 7.0 it returns only local entries, but that should not matter in your case - with a local cache all entries are local. The transactional consistency of this operation should hold. I don't see anything wrong in your configuration.
Generally, I would recommend using the Infinispan module in Hibernate ORM rather than trying to do the caching on your own, as you show here.

According to the accepted answer, the configuration is correct.
If anyone has similar problems:
The problem seemed to be that one of the read methods, such as MyService.getEntity(), was in some application-specific contexts being called extremely often (which I was not aware of). Using optimistic locking and READ_COMMITTED instead of REPEATABLE_READ seems to make it work as expected.
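For reference, the adjusted cache configuration might look roughly like this (a sketch against the Wildfly 8.2 Infinispan subsystem; verify the attribute values against your subsystem schema version):

```xml
<cache-container name="cacheContainer" start="EAGER">
    <local-cache name="my_table_cache">
        <!-- READ_COMMITTED reduces locking overhead for very frequent reads -->
        <locking isolation="READ_COMMITTED"/>
        <!-- OPTIMISTIC defers lock acquisition to prepare/commit time -->
        <transaction mode="NON_XA" locking="OPTIMISTIC"/>
    </local-cache>
</cache-container>
```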

Related

Hibernate random "Session is closed error" with 2 databases

I have a requirement to use 2 different databases within single DAO class. One of the databases is read/write while the other is read only.
I have created 2 data sources, 2 session factories and 2 transaction managers (transaction manager for the read/write database is the platform transaction manager) for these databases. I am using #Transactional on the service method to configure Spring for transaction management.
We are getting random Session is closed! exceptions when we call sessionFactory.getCurrentSession() in the DAO class (I cannot always reproduce it; sometimes it works fine, sometimes it fails):
org.hibernate.SessionException: Session is closed!
at org.hibernate.internal.AbstractSessionImpl.errorIfClosed(AbstractSessionImpl.java:133)
at org.hibernate.internal.SessionImpl.setFlushMode(SessionImpl.java:1435)
at org.springframework.orm.hibernate4.SpringSessionContext.currentSession(SpringSessionContext.java:99)
at org.hibernate.internal.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:1014)
I don't have a requirement to use global transaction (XA), I just want to query 2 different databases.
I have read this thread; it suggests injecting two separate session factories in the DAO layer, as we do now: Session factories to handle multiple DB connections
Also, AbstractRoutingDataSource does not work for a single DAO class, as per this answer: https://stackoverflow.com/a/7379048/572380
Example code from my dao looks like this:
Criteria criteria = sessionFactory1.getCurrentSession().createCriteria(MyClass.class);
criteria.add(Restrictions.eq("id", id));
criteria.list();
criteria = sessionFactory2.getCurrentSession().createCriteria(MyClass2.class); // generates random "Session is closed!" error.
criteria.add(Restrictions.eq("id", id));
criteria.list();
I have also tried using the "doInHibernate" method, but the session passed to it also randomly throws "Session is closed!" exceptions:
@Autowired
protected HibernateTemplate hibernateTemplate;

@SuppressWarnings("unchecked")
protected List<Map<String, Object>> executeStaticQuery(final String sql) {
    HibernateCallback<List<Map<String, Object>>> hibernateCallback = new HibernateCallback<List<Map<String, Object>>>() {
        @Override
        public List<Map<String, Object>> doInHibernate(Session session) throws HibernateException {
            SQLQuery query = session.createSQLQuery(sql);
            query.setResultTransformer(CriteriaSpecification.ALIAS_TO_ENTITY_MAP);
            return query.list();
        }
    };
    return hibernateTemplate.execute(hibernateCallback);
}
Do you have the code below in your application? If you don't, you should add it; its absence might be causing the problem.
<bean id="transactionManager"
      class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
<tx:annotation-driven/>
Remove this property as mentioned below
<property name="current_session_context_class">thread</property>
You are overriding Spring which sets this to SpringSessionContext.class. This is almost certainly at least part of your problem.
Spring manages your session objects. These session objects that it manages are tied to Spring transactions. So the fact that you are getting that error means to me that it is most likely due to how you are handling transactions.
In other words, don't do this:
Transaction tx = session.beginTransaction();
unless you want to manage the life cycle of the session yourself, in which case you need to call session.open() and session.close().
Instead, let the framework handle transactions. I would take advantage of Spring aspects and the declarative approach using @Transactional, as I described earlier; it's both cleaner and simpler. But if you want to do it programmatically, you can do that with Spring as well. Follow the example outlined in the reference manual at the link below:
http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/orm.html#orm-hibernate-tx-programmatic
The above error suggests you are not able to get the session because it is sometimes closed. You can use the openSession() method instead of getCurrentSession().
Session session = this.getSessionFactory().openSession();
try {
    session.beginTransaction();
    // Your code here.
    session.getTransaction().commit();
} finally {
    session.close();
}
The drawback of this approach is that you explicitly need to close the session.
In a single-threaded environment it is slower than getCurrentSession().
See also this link: Hibernate Session is closed
The problem is that you have a single Hibernate session and two data stores. The session is bound to the transaction. If you open a new transaction towards the other database, this effectively opens a new session for that database and its entity manager.
This is equivalent to @Transactional(propagation = Propagation.REQUIRES_NEW).
You need to ensure that there are two different transactions/sessions bound to each of the persistence operations towards the two databases.
If all configurations are correct, then everything should work without error.
I think you missed @Qualifier(value="sessionFactory1") and @Qualifier(value="sessionFactory2") in your DAO.
Kindly look at these examples:
Hibernate configuring multiple datasources and multiple session factories
https://medium.com/#joeclever/using-multiple-datasources-with-spring-boot-and-spring-data-6430b00c02e7
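A sketch of the qualifier-based injection (field and bean names sessionFactory1/sessionFactory2 are assumptions matching the setup described in the question):

```java
@Repository
public class MyDao {

    @Autowired
    @Qualifier("sessionFactory1") // read/write database
    private SessionFactory sessionFactory1;

    @Autowired
    @Qualifier("sessionFactory2") // read-only database
    private SessionFactory sessionFactory2;

    // each method must then use the session factory bound to the
    // transaction manager of the database it actually targets
}
```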
HibernateTemplate usage is already discouraged. A clear explanation is given here: https://stackoverflow.com/a/18002931/1840818
As stated there, declarative transaction management has to be used.

ClassCastException while reading from Infinispan Cache after redeployment on Wildfly 8.2

I have a simple Infinispan local-cache (also tried distributed cache) on Wildfly 8.2. Everything is working fine until I do redeployment of my .WAR. After redeployment of my .WAR I get the following error:
Caused by: java.lang.ClassCastException: my.package.MyClass cannot be cast to my.package.MyClass
Full stacktrace: https://gist.github.com/bagges/07af1842a874f7c99ef3
I lookup the Cache in a CDI Bean like this:
@Path("/mypath")
@Stateless
public class MyServiceClass {

    @Resource(lookup = "java:jboss/infinispan/myContainer")
    private CacheContainer container;

    private Cache<Integer, MyCacheObject> myCache;

    @PostConstruct
    public void start() {
        myCache = container.getCache("myCache");
    }

    @GET
    public String get() {
        if (!myCache.containsKey(1)) {
            myCache.put(1, new MyCacheObject(1, "Hello Cache"));
        }
        return myCache.get(1).getName();
    }
}
Wildfly-Config:
<cache-container name="myContainer" jndi-name="java:jboss/infinispan/myContainer" start="EAGER">
<local-cache name="myCache"/>
</cache-container>
I know that the error occurs because of different classloaders: Infinispan tries to cast the entity stored with the previous deployment's classloader, which cannot work. But how do I avoid this?
Don't use start="EAGER". That will fix your problem.
We've removed this from WildFly 9, since its misuse has been the source of many user headaches.
Also, I recommend injecting your cache directly (instead of just the cache container). That way the cache lifecycle will be bound to the lifecycle of your deployment.
e.g.
#Resource(lookup = "java:jboss/infinispan/cache/myContainer/myCache")
private Cache<Integer, MyCacheObject> myCache;
Lastly, feel free to use a resource-ref to avoid referencing a vendor-specific jndi namespace in your application.
You should be able to share the cache if you enable store-as-binary in the Infinispan configuration and force the cache to use the application's classloader instead of the one in the GlobalConfiguration:
Cache<Integer, MyCacheObject> appSpecificCache = cacheFromJndi.getAdvancedCache().with(applicationClassLoader);
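In the Wildfly subsystem configuration, enabling store-as-binary might look like this (a sketch; check the exact element name against your Wildfly version's Infinispan subsystem schema):

```xml
<cache-container name="myContainer" jndi-name="java:jboss/infinispan/myContainer">
    <local-cache name="myCache">
        <!-- keys and values are stored serialized and deserialized on access,
             so they can be read with the current deployment's classloader -->
        <store-as-binary/>
    </local-cache>
</cache-container>
```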

JPA cache not invalidated

I am developing an application using the Eclipse IDE, EclipseLink, JPA and MySQL. During the initial launch of the app, I need to delete a table's contents. However, after the deletion, the application, making a new connection, still reads the old data from the now-empty table.
My initial approach was to create a new EntityManager each time an operation was performed.
private EntityManager entityManager;

public FacadeFactory() {
    entityManager = DBConnection.connect();
}
After disabling the JPA caching, the problem was solved.
Due to performance issues, the FacadeFactory was changed to a singleton in order to open only one connection to the database.
private static FacadeFactory instance;
private EntityManager entityManager;

private FacadeFactory() {
    entityManager = DBConnection.connect();
}

public static FacadeFactory getInstance() {
    if (instance == null) {
        instance = new FacadeFactory();
    }
    return instance;
}
Now, I have the same problem as before even if the cache is still disabled. I tried to disable the caching both from persistence.xml and from code, but none of them works for me.
<property name="eclipselink.cache.shared.default" value="false"/>
entityManager.getEntityManagerFactory().getCache().evictAll();
Can anyone please help me?
entityManager.getEntityManagerFactory().getCache().evictAll(); clears the shared cache, and the property eclipselink.cache.shared.default=false likewise affects only the shared cache. The shared cache is also known as the second-level cache - the first level being the cache used within the EntityManager itself to track managed entities. Because you are using a single EntityManager for everything, everything gets put into that first-level cache.
Either create a new EntityManager as required, or occasionally call em.clear() to clear the cache inside the EntityManager, detaching your entities.
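A minimal sketch of the second option (the getEntityManager() accessor on FacadeFactory is a hypothetical name, not from the original code):

```java
// Sketch: clearing the first-level cache so subsequent reads hit the database.
EntityManager em = FacadeFactory.getInstance().getEntityManager(); // hypothetical accessor
em.clear(); // detaches all managed entities; the first-level cache is emptied
MyEntity fresh = em.find(MyEntity.class, id); // re-reads from the database
```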

Why does DataSourceTransactionManager not roll back while HibernateTransactionManager does?

<bean id="transactionManager"
      class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
      lazy-init="true">
    <property name="dataSource" ref="dataSource"/>
</bean>
While building an app using Spring and Hibernate, if I use DataSourceTransactionManager, then on exception it does not roll back; it seems to use auto-commit in a different session. However, if I change the transaction manager to org.springframework.orm.hibernate3.HibernateTransactionManager, the rollback works as expected.
Or is it that if we use Hibernate, then we need to use HibernateTransactionManager?
N.B.: My service is annotated with @Transactional(rollbackFor = { Throwable.class })
When working with plain Hibernate, the following is needed to manage transactions:
Session s = sessionfactory.openSession();
Transaction tx = null;
try {
    tx = s.beginTransaction();
    // Your data manipulation here
    tx.commit();
} catch (Exception e) {
    if (tx != null) { tx.rollback(); }
} finally {
    s.close();
}
This is also what the HibernateTransactionManager does (open a session if needed, start transaction, afterwards commit/rollback).
Now what you are trying to do is the following (which is kind-of similair to the DataSourceTransactionManager, that operates on the `DataSource instead of the session.)
Session s = sessionfactory.openSession();
Connection conn = null;
try {
    conn = s.connection();
    // Your data manipulation here
    conn.commit();
} catch (Exception e) {
    if (conn != null) {
        try {
            conn.rollback();
        } catch (SQLException ignored) {}
    }
} finally {
    s.close();
}
This isn't going to work, because the actual transactional unit, the session, is never informed of the commit or rollback. So in the worst case, depending on your flush mode, everything (or part of it) gets committed.
In short, always use the transaction manager that fits your technology.
When using plain Hibernate, use the HibernateTransactionManager; when using JPA, use the JpaTransactionManager. Don't use the DataSourceTransactionManager in those cases, as it is only suitable where plain JDBC alone is used.
The DataSourceTransactionManager clearly states that it operates on the DataSource and the underlying connection, whereas with Hibernate the transaction is controlled by the Hibernate Session - the level at which the HibernateTransactionManager operates. For JPA this is the EntityManager, which is what the JpaTransactionManager recognizes.
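As a sketch of that rule of thumb (bean ids and refs are illustrative; pick exactly one manager per application):

```xml
<!-- plain JDBC only -->
<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>

<!-- plain Hibernate -->
<bean id="txManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>

<!-- JPA -->
<bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
```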
According to the Spring documentation:
PlatformTransactionManager implementations normally require knowledge of the environment in which they work: JDBC, JTA, Hibernate, and so on.
If you use JTA in a Java EE container then you use a container DataSource, obtained through JNDI, in conjunction with Spring's JtaTransactionManager.
You can also use Hibernate local transactions easily...
In this case, you need to define a Hibernate LocalSessionFactoryBean, which your application code will use to obtain Hibernate Session instances ... in this case is of the HibernateTransactionManager type.
In the same way as the DataSourceTransactionManager needs a reference to the DataSource, the HibernateTransactionManager needs a reference to the SessionFactory.
Although:
DataSourceTransactionManager binds a JDBC Connection from the specified DataSource to the current thread, potentially allowing for one thread-bound Connection per DataSource.
the Session won't be bound to the current transaction, and you need both for local transactions.
This is what a Hibernate- or JPA-specific transaction manager does for you: it associates the persistence context and one connection per transaction per thread.
If you choose JTA transactions, then an external transaction manager coordinates transactions. DB connections are released aggressively after each statement, which is fine as long as the external TM always returns the same connection to the same thread during a global transaction's life.

What's the best way to share a connection between Hibernate's SessionFactory and a JDBC DAO?

I'm using Spring 3.0.6, with Hibernate 3.2.7.GA in a Java-based webapp. I'm declaring transactions with @Transactional annotations on the controllers (as opposed to in the service layer). Most of the views are read-only.
The problem is, I've got some DAOs which are using JdbcTemplate to query the database directly with SQL, and they're being called outside of a transaction. Which means they're not reusing the Hibernate SessionFactory's connection. The reason they're outside the transaction is that I'm using converters on method parameters in the controller, like so:
@Controller
@Transactional
public class MyController {

    @RequestMapping(value="/foo/{fooId}", method=RequestMethod.GET)
    public ModelAndView get(@PathVariable("fooId") Foo foo) {
        // do something with foo, and return a new ModelAndView
    }
}

public class FooConverter implements Converter<String, Foo> {

    @Override
    public Foo convert(String fooId) {
        // call FooService, which calls FooJdbcDao to look up the Foo for fooId
    }
}
My JDBC DAO relies on SimpleJdbcDaoSupport to have the jdbcTemplate injected:
@Repository("fooDao")
public class FooJdbcDao extends SimpleJdbcDaoSupport implements FooDao {

    public Foo findById(String fooId) {
        // map to a Foo object, and return it
        return getJdbcTemplate().queryForObject("select * from foo where ...", new FooRowMapper());
    }
}
and my applicationContext.xml wires it all together:
<mvc:annotation-driven conversion-service="conversionService"/>
<bean id="conversionService" class="org.springframework.context.support.ConversionServiceFactoryBean">
    <property name="converters">
        <set>
            <bean class="FooConverter"/>
            <!-- other converters -->
        </set>
    </property>
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
    <property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"
p:sessionFactory-ref="sessionFactory" />
FooConverter (which converts a path variable String to a Foo object) gets called before MyController#get() is called, so the transaction hasn't been started yet. Thus when FooJdbcDAO is called to query the database, it has no way of reusing the SessionFactory's connection, and has to check out its own connection from the pool.
So my questions are:
Is there any way to share a database connection between the SessionFactory and my JDBC DAOs? I'm using HibernateTransactionManager, and from looking at Spring's DataSourceUtils it appears that sharing a transaction is the only way to share the connection.
If the answer to #1 is no, then is there a way to configure OpenSessionInViewFilter to just start a transaction for us, at the beginning of the request? I'm using "on_close" for the hibernate.connection.release_mode, so the Hibernate Session and Connection are already staying open for the life of the request.
The reason this is important to me is that I'm experiencing problems under heavy load where each thread is checking out 2 connections from the pool: the first is checked out by hibernate and saved for the whole length of the thread, and the 2nd is checked out every time a JDBC DAO needs one for a query outside of a transaction. This causes deadlocks when the 2nd connection can't be checked out because the pool is empty, but the first connection is still held. My preferred solution is to make all JDBC DAOs participate in Hibernate's transaction, so that TransactionSynchronizationManager will correctly share the one single connection.
Is there any way to share a database connection between the SessionFactory and my JDBC DAOs? I'm using HibernateTransactionManager, and from looking at Spring's DataSourceUtils it appears that sharing a transaction is the only way to share the connection.
You can share a database connection between the SessionFactory and the JdbcTemplate. What you need to do is share the same DataSource between the two; the connection pool is then shared as well. I am using this in my application.
What you need to do is configure the HibernateTransactionManager for both kinds of transactions.
Add a JdbcDao class (with jdbcTemplate and dataSource properties plus getters/setters) in your existing package structure (in the DAO package/layer), and extend your JDBC implementation classes from JdbcDao. If you have already configured a Hibernate transaction manager, you will not need to configure another one.
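As a sketch (bean names are illustrative), sharing one DataSource between the SessionFactory and the JdbcTemplate, with a single HibernateTransactionManager, could look like this:

```xml
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <!-- driver, URL, credentials, pool settings omitted -->
</bean>

<!-- Hibernate uses the shared DataSource -->
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
</bean>

<!-- the JDBC DAOs use the same DataSource, hence the same pool -->
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
    <property name="dataSource" ref="dataSource"/>
</bean>

<!-- one transaction manager covers both Hibernate and plain-JDBC access -->
<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
```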
The problem is, I've got some DAOs which are using JdbcTemplate to query the database directly with SQL, and they're being called outside of a transaction. Which means they're not reusing the Hibernate SessionFactory's connection.
You may be wrong here. You may in fact be using the same connection; I think the problem may only lie in the HibernateTransactionManager configuration.
Check HibernateTransactionManager javadoc : This transaction manager is appropriate for applications that use a single Hibernate SessionFactory for transactional data access, but it also supports direct DataSource access within a transaction (i.e. plain JDBC code working with the same DataSource). This allows for mixing services which access Hibernate and services which use plain JDBC (without being aware of Hibernate)!
Check my question : Using Hibernate and Jdbc both in Spring Framework 3.0
Configuration: add your DAO classes and service classes alongside your current Hibernate classes; do not make separate packages for them if you want to work with the existing configuration. Otherwise, configure the HibernateTransactionManager in the XML configuration and use the @Transactional annotation.
Mistake in your code:
@Controller
@Transactional
public class MyController {......
Use the @Transactional annotation on service classes (best practice).
Correction:
@Transactional(readOnly = true)
public class FooServiceImpl implements FooService {

    public Foo getFoo(String fooName) {
        // do something
    }

    // these settings take precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // do something
    }
}
