MySQL or Hibernate caching issue: receiving old data - java

I am facing a weird issue: when I update the table and then, a couple of seconds later, try to fetch the updated row, I still receive the old data. When I run the same query again a couple of seconds after that, I receive the refreshed data. Basically, it takes some time before fresh data is returned.
I have disabled all caching in Hibernate; while fetching I also call session.clear() and mark the query as non-cacheable.
I also looked into the MySQL query log and confirmed that Hibernate is querying MySQL, yet I am still receiving old data.
How can I make sure that at any given point in time I receive refreshed data only?
Below is my hibernate config file
<hibernate-configuration>
  <session-factory>
    <property name="dialect">org.hibernate.dialect.MySQL5InnoDBDialect</property>
    <property name="show_sql">true</property>
    <property name="connection.url">jdbc:mysql://127.0.0.1:4804/aluminidb?autoReconnect=true</property>
    <property name="connection.username">root</property>
    <property name="connection.password">root</property>
    <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
    <!-- Disable the second-level cache and the query cache -->
    <property name="hibernate.cache.use_second_level_cache">false</property>
    <property name="hibernate.cache.use_query_cache">false</property>
    <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
    <mapping resource="com/alumini/spring/model/Alumini.hbm.xml"/>
    <mapping resource="com/alumini/spring/model/Question.hbm.xml"/>
    <mapping resource="com/alumini/spring/model/Events.hbm.xml"/>
  </session-factory>
</hibernate-configuration>
Below is the code to fetch the object
@Override
public Alumini login(String email, String password) {
    Session session = sessionFactory.openSession();
    session.clear();
    Transaction t;
    try {
        t = session.beginTransaction();
        Query query = session.getNamedQuery("chkLogIn");
        query.setParameter("email", email);
        query.setParameter("password", password);
        query.setCacheMode(CacheMode.REFRESH);
        query.setCacheable(false);
        List<Alumini> aluminiList = query.list();
        if (aluminiList != null && aluminiList.size() > 0) {
            System.out.println(aluminiList.get(0).getLastUpdated());
            t.commit();
            return aluminiList.get(0);
        } else {
            t.rollback();
            return null;
        }
    } finally {
        session.close();
    }
}
So I am clearing the session, and in my config I have disabled all caching. Still, when I update the record and then fetch it within a couple of seconds using the method above, I receive the old data once. After that, it gives me the latest data.

If some entities are loaded in the current Session and you run a native query, the Session might not flush automatically.
The Hibernate Session offers application-level repeatable reads, so if another database transaction changes an entity, Hibernate will not refresh the current entity state.
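Since the Session will not refresh an entity it already holds, one hedged workaround on the fetch side is to force a reload (a sketch; `session` and the `alumini` variable are illustrative, not from the posted code):

```java
// Sketch: discard the stale in-memory snapshot and re-read the row.
session.refresh(alumini); // re-selects this entity's current database state
```

This only helps for entities already associated with the Session; a query returning a cached instance still needs the Session-management fixes discussed below.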
Now, since you did not post the UPDATE part, it's hard to tell what you are doing there. The best way to address these issues is to simply log all JDBC statements, as explained in this article. Then you will know for sure whether the update was executed or not.
More, the way you do transaction and Session management is flawed as well. You don't even roll back in a finally block, and since you are using MySQL, this can lead to locks being held and to deadlocks.
Just use a framework like Spring or Java EE to handle the Persistence Context and transaction management for you.
In your example:
Session session=sessionFactory.openSession();
session.clear();
How can one tell whether this is a new Session (in which case calling clear makes no sense) or the same Session you used for the update?
From this code, I would assume that this is the case:
if(aluminiList!=null && aluminiList.size()>0){
System.out.println(aluminiList.get(0).getLastUpdated());
t.commit();
return aluminiList.get(0);
}else{
t.rollback();
return null;
}
But it suggests that you might have skipped the Hibernate User Guide and jumped straight into coding with Hibernate.
The aluminiList can never be null. It can only be empty.
Logging via System.out is wrong. Use a Logging framework for that.
Why do you want to commit after the query was executed? Maybe the change was not flushed at all, and the query did not trigger a flush, because either you set FlushMode.MANUAL or the query is native SQL rather than JPQL. Check out this article for more details about the difference.
You call rollback in the else branch? What's the point? You don't trust the database to have issued the UPDATE properly, so now you want to roll back that change? Or you suspect that Hibernate did not flush; but then why would you roll back a change that didn't happen? And if it did happen, you should read your own writes, because that's how ACID isolation levels work.
All in all, there are many issues in the code that you posted. So, read the Hibernate User Guide and these tutorials, and you will fix all your issues. There's no other way.
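As a hedged illustration of the framework advice above, a Spring-managed version of the login method might look roughly like this (a sketch, not the poster's code; the DAO class name and the injected SessionFactory are assumptions):

```java
// Sketch: Spring opens/closes the Session and commits/rolls back for us.
@Repository
public class AluminiDao {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional(readOnly = true)
    public Alumini login(String email, String password) {
        @SuppressWarnings("unchecked")
        List<Alumini> result = sessionFactory.getCurrentSession()
                .getNamedQuery("chkLogIn")
                .setParameter("email", email)
                .setParameter("password", password)
                .list();
        return result.isEmpty() ? null : result.get(0);
    }
}
```

No manual openSession(), clear(), commit(), or close(): the transaction boundary is declared once and applied consistently.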

Related

How to intercept JDBC queries with Hibernate/Spring/Tomcat?

I'm trying to implement the solution outlined in this answer. The short of it is: I want to set the role for each database connection in order to provide better data separation for different customers. This requires intercepting JDBC queries or transactions, setting the user before the query runs and resetting it afterwards. This is mainly done to comply with some regulatory requirements.
Currently I'm using Tomcat and Tomcat's JDBC pool connecting to a PostgreSQL database. The application is built with Spring and Hibernate. So far I couldn't find any point for intercepting the queries.
I tried JDBC interceptors for Tomcat's built-in pool, but they have to be global, and I need to access data from my Web application in order to correlate requests to database users. As far as I can see, Hibernate's interceptors work only on entities, which is too high-level for this use case.
What I need is something like the following:
class ConnectionPoolCallback {

    void onConnectionRetrieved(Connection conn) {
        conn.execute("SET ROLE " + getRole()); // getRole is some magic
    }

    void onConnectionReturned(Connection conn) {
        conn.execute("RESET ROLE");
    }
}
And now I need a place to register this callback... Does anybody have any idea how to implement something like this?
Hibernate 4 has multi-tenancy support. For plain SQL you will need data source routing, which I believe Spring now provides either built in or as an add-on.
I would not mess with (i.e. extend) the pool library.
Option 1:
As Adam mentioned, use Hibernate 4's multi-tenant support. Read the docs on Hibernate multi-tenancy and then implement the MultiTenantConnectionProvider and CurrentTenantIdentifierResolver interfaces.
In the getConnection method, call SET ROLE as you've done above. Although it's at the Hibernate level, this hook is pretty close in functionality to what you asked for in your question.
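A minimal sketch of that provider might look like this (assumptions on my part: the tenant identifier doubles as the database role name, and the underlying DataSource is wired in elsewhere; the `isUnwrappableAs`/`unwrap` methods are omitted for brevity):

```java
// Sketch of a Hibernate 4 MultiTenantConnectionProvider that sets the role
// when a connection is borrowed and resets it when the connection is released.
public class RoleSettingConnectionProvider implements MultiTenantConnectionProvider {

    private DataSource dataSource; // assumed to be configured/injected

    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {
        Connection conn = dataSource.getConnection();
        try (Statement st = conn.createStatement()) {
            // tenantIdentifier must come from trusted app code, never user input
            st.execute("SET ROLE " + tenantIdentifier);
        }
        return conn;
    }

    @Override
    public void releaseConnection(String tenantIdentifier, Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("RESET ROLE"); // don't leak the role back into the pool
        }
        conn.close();
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection conn) throws SQLException {
        conn.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false; // keep the connection (and its role) for the whole session
    }
}
```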
Option 2:
I tried JDBC interceptors for Tomcat's built in pool but they have to
be global, and I need to access data from my Web application in order to
correlate requests to database users.
If you can reconfigure your app to define the connection pool as a Spring bean rather than obtain it from Tomcat, you can probably add your own hook by proxying the data source:
<!-- I like c3p0, but use whatever pool you want -->
<bean id="actualDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
    <property name="jdbcUrl" value="${db.url}"/>
    <property name="user" value="${db.user}" />
    .....
</bean>

<!-- uses the actual data source. name it "dataSource". i believe the Spring tx
     stuff looks for a bean named "dataSource". -->
<bean id="dataSource" class="com.musiKk.RoleSettingDSProxy">
    <property name="actualDataSource"><ref bean="actualDataSource" /></property>
</bean>

<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="dataSource"><ref bean="dataSource" /></property>
    ....
</bean>
And then build com.musiKk.RoleSettingDSProxy like this:
public class RoleSettingDSProxy implements DataSource {

    private DataSource actualDataSource;

    public Connection getConnection() throws SQLException {
        Connection con = actualDataSource.getConnection();
        // do your thing here. reference a thread local set by
        // a servlet filter to get the current tenant and set the role
        return con;
    }

    public void setActualDataSource(DataSource actualDataSource) {
        this.actualDataSource = actualDataSource;
    }

    // the remaining DataSource methods should simply delegate to actualDataSource
}
Note that I haven't actually tried option 2, it's just an idea. I can't immediately think of any reason why it wouldn't work, but it may unravel on you for some reason if you try to implement it.
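The thread-local mentioned in the proxy's comment could be a trivial holder like this (a hypothetical helper, not part of the original answer; a servlet filter would set the role at the start of each request and clear it in a finally block):

```java
// Hypothetical per-request role holder. A servlet filter sets the role before
// the request runs; the DataSource proxy reads it inside getConnection().
class TenantRoleHolder {
    private static final ThreadLocal<String> ROLE = new ThreadLocal<>();

    static void setRole(String role) { ROLE.set(role); }

    static String getRole() { return ROLE.get(); }

    static void clear() { ROLE.remove(); } // always clear, or pooled threads leak roles
}
```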
One solution that comes to mind is to utilize Hibernate's listeners/callbacks. But do beware that this is very low-level and quite error-prone. I use it myself to get a certain degree of automated audit logging going; it was not a pretty development cycle to get it to work reliably. Unfortunately I can't share code since I don't own it.
http://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html/listeners.html

AbstractRoutingDataSource & Transactional Managers

I currently have a program which has two data sources. Each of the data source is tied to one transactional manager.
<bean id="tM" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="ds1" />
</bean>
<bean id="tM2" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="ds2" />
</bean>
If I had a function that accesses both data sources and an error occurs, and one data source rolls back, would the second data source also roll back?
Thanks!
If your function accesses the data stores sequentially (that is, it COMMITs to the first data store and then tries to COMMIT to the second one), then if an error occurs after the first COMMIT, the second data source will ROLLBACK, but the first will stay COMMITTED.
So you must either use one data store or a JTATransactionManager.
Spring can't roll back a committed JDBC statement. This is what XADataSources and two-phase commit are for (usually through a JTA TX manager).
You are asking for data inconsistency by trying to manage this yourself, because it may or may not work depending on what fails when. For example, assume this flow:
Start TX
Do work with ds1
Do work with ds2
End TX
commit ds2
commit ds1
If the commit on ds1 fails, then ds2 will stay committed. But if the commit on ds2 fails, then the whole tx will fail and ds1 will roll back.
Also, are you sure you are always closing the DataSources in the same order they were opened (first used)? Spring may take care of this, but I am not sure.
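For the JTA route, a hedged configuration sketch (the Atomikos classes and bean names here are examples I'm assuming, not part of the original answers) would replace the two DataSourceTransactionManagers with a single coordinator:

```xml
<!-- Sketch: one JTA transaction manager coordinating both XA data sources.
     Requires XA-capable drivers and a standalone JTA provider such as Atomikos. -->
<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager" />

<bean id="ds1" class="com.atomikos.jdbc.AtomikosDataSourceBean">
    <property name="uniqueResourceName" value="ds1" />
    <property name="xaDataSourceClassName"
              value="com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" />
    <!-- connection properties elided -->
</bean>
<!-- ds2 configured the same way with its own uniqueResourceName -->
```

With this in place, a single transactional method touching both ds1 and ds2 commits or rolls back both via two-phase commit.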
Autocommit may be on by default. Consider turning it off and managing the commits yourself.

Why do I get org.hibernate.HibernateException: No CurrentSessionContext configured

I'm writing a simple project: a business app written in Swing, using Hibernate for the back end. I come from Spring, which gave me easy ways to use Hibernate and transactions. Anyway, I managed to get Hibernate working. Yesterday, while writing some code to delete a bean from the DB, I got this:
org.hibernate.HibernateException: Illegal attempt to associate a collection with two open sessions
The deletion code is simply:
Session sess = HibernateUtil.getSession();
Transaction tx = sess.beginTransaction();
try {
    tx.begin();
    sess.delete(ims);
} catch (Exception e) {
    tx.rollback();
    throw e;
}
tx.commit();
sess.flush();
and my HibernateUtil.getSession() is:
public static Session getSession() throws HibernateException {
    Session sess = null;
    try {
        sess = sessionFactory.getCurrentSession();
    } catch (org.hibernate.HibernateException he) {
        sess = sessionFactory.openSession();
    }
    return sess;
}
Additional details: I never close a Hibernate session in my code, only on application closing. Is this wrong? Why do I get this on delete (only for that bean; others work), and not on other operations (insert, query, update)?
I read around and tried modifying my getSession method to simply call sessionFactory.getCurrentSession(), but then I got: org.hibernate.HibernateException: No CurrentSessionContext configured!
Hibernate conf:
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
    <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="hibernate.connection.url">jdbc:mysql://localhost/joptel</property>
    <property name="hibernate.connection.username">root</property>
    <property name="hibernate.connection.password">******</property>
    <property name="hibernate.connection.pool_size">1</property>
    <property name="show_sql">true</property>
    <property name="hibernate.hbm2ddl.auto">update</property>
    ..mappings..
  </session-factory>
</hibernate-configuration>
I wanted to ask you one thing: why are you trying to use the openSession() method?
public static Session getSession() throws HibernateException {
    Session sess = null;
    try {
        sess = sessionFactory.getCurrentSession();
    } catch (org.hibernate.HibernateException he) {
        sess = sessionFactory.openSession();
    }
    return sess;
}
You don't have to call openSession(), because getCurrentSession() always returns the current session (bound to the thread, in case you have configured it that way).
I got it!...
You have to specify the current session context in your hibernate.cfg.xml file. It should be:
<property name="hibernate.current_session_context_class">thread</property>
No CurrentSessionContext configured
Read the reference guide on Contextual Sessions. You're required to configure some provided or custom strategy for this. In a hibernate.cfg.xml, you'd configure it with
<property name="hibernate.current_session_context_class">...</property>
You'd probably want to use "thread" as the value to get per-thread sessions. When using Spring, it automatically sets this to a SpringSessionContext, allowing Spring to easily integrate Hibernate with its transaction management framework.
I come from Spring, that gave me easy ways to use hibernate and transactions.
If you're familiar with Spring, why aren't you using it to manage Hibernate here? You must already know how simple and foolproof it makes it.
I never close a hibernate session in my code, just on application closing. Is this wrong?
Yes, this is very wrong. Every session not closed is an open database connection, so your app is currently hemorrhaging connections.
Illegal attempt to associate a collection with two open sessions
That means exactly what it says. You tried to do some persistence operation (save(), update(), delete()) on something that was already associated with a different session. That's what will happen when you go randomly opening new sessions whenever, which is what's happening here, since SessionFactory.getCurrentSession() will always fail when no "current session context" is set. In general, never open a session just because one wasn't already there. You need well-defined strategies for opening and closing sessions, and you must never let anything open a session outside of those strategies. Otherwise you're on a sure path to resource leaks and errors like the one you've encountered.
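As a sketch of such a well-defined strategy with the "thread" session context (variable names follow the question; this is illustrative, not a drop-in fix):

```java
// Sketch: with hibernate.current_session_context_class=thread, the Session is
// bound to the current thread and is closed automatically on commit/rollback.
Session sess = sessionFactory.getCurrentSession(); // never openSession() "just in case"
Transaction tx = sess.beginTransaction();
try {
    sess.delete(ims); // `ims` must belong to this session, not an older one
    tx.commit();      // flushes and closes the thread-bound session
} catch (RuntimeException e) {
    tx.rollback();    // also releases the session/connection
    throw e;
}
```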
I faced the same problem while working on a portal where I use Spring remoting with Hibernate.
This kind of problem arises only when the called service method contains multiple DAO calls that hit the database with the Hibernate session.
The solution is to set the @Transactional annotation on those methods with multiple DAO calls. (This implies that all the DAO calls within such a method run under one transaction.)

Hibernate not flushing modified objects to database

I'm working on a project that has, in the last couple of months, developed an incredibly annoying bug involving object updates. Certain objects (most notably users), when updated in Hibernate, are never marked as dirty and flushed. For example:
Session session = factory.openSession(interceptor);
Transaction tx = session.beginTransaction();
Object object = session.load(someId);
// Modify object ...
session.update(object);
tx.commit();
session.flush();
session.close();
All the appropriate methods on the interceptor get called by Hibernate except Interceptor.onFlushDirty and Interceptor.findDirty. My initial assumption was that this was an issue with detached objects, as we were storing the user object in an HTTP session; however, a refactor that removed all detached objects did not solve the problem.
The transaction is definitely getting committed, and the session is being flushed and closed on completion. I also double-checked to ensure the session wasn't in read-only mode.
I have also tried using Session.merge in place of Session.update, to no effect. When using Session.merge, the returned object contains the correct updated information, but again the database is never updated.
I have found this SO question that seems to describe a similar issue (relevant because the object I'm working with stores a custom enum field) and a good description of Hibernate's dirty-checking mechanism, but other than that, information has been kind of sparse.
My cfg.xml looks like this:
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.default_batch_fetch_size">1024</property>
<property name="hibernate.order_inserts">true</property>
<property name="hibernate.order_updates">true</property>
<property name="hibernate.show_sql">false</property>
<property name="hibernate.c3p0.aquire_increment">1</property>
<property name="hibernate.c3p0.initial_pool_size">1</property>
<property name="hibernate.c3p0.min_size">4</property>
<property name="hibernate.c3p0.max_size">32</property>
<property name="hibernate.c3p0.idle_test_period">100</property> <!-- seconds -->
<property name="hibernate.c3p0.timeout">600</property> <!-- seconds -->
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">true</property>
<property name="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.jdbc.fetch_size">1024</property>
<property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hibernate.search.worker.execution">async</property>
<property name="hibernate.search.default.directory_provider">org.hibernate.search.store.RAMDirectoryProvider</property>
<property name="hibernate.search.default.indexwriter.batch.ram_buffer_size">256</property>
<property name="hibernate.search.default.optimizer.transaction_limit.max">1000</property>
UPDATE: I have tried both disabling and enabling Hibernate's second-level cache, as Finbarr suggested, to no effect.
Anyone have any suggestions on other things I might try?
Try disabling hibernate's second level cache. Actually, can you post your cfg.xml?
Finbarr was partially correct: it is almost certainly a caching issue, but not the second-level cache. By forcing Hibernate to do the update irrespective of whether it detected a change, everything is now saving properly.
session.evict(object);
session.update(object);
In the context of my project, I've stumbled upon a piece of code that did just what you've mentioned above: evict then update.
Did further investigation shed more insight on this issue? Bug or no bug, after all?
Have you reported this behavior/bug to Hibernate, and/or do you have a reference to that JIRA?
What was your version of Hibernate?
On the potential causes:
1. You've mentioned a read-only session, but not read-only at the query level or object level.
2. Have you checked that the object was not getting evicted from the session (EvictEventListener)?
3. Have you checked that the object was not being changed back to its original state (hence requiring no update), or that update was not being called again with an older (detached) version of the object?
Any input appreciated.
Regards,
Christophe
The problem is in the code: you are committing the transaction before flushing the session!
You must flush the session, which will 'synchronize' the session state with the current transaction, and THEN commit the transaction:
Session session = factory.openSession(interceptor);
Transaction tx = session.beginTransaction();
Object object = session.load(someId);
// Modify object ...
session.update(object);
session.flush();
tx.commit();
session.close();

JBoss Cache Configuration

I'm using an extended persistence context (an EntityManager injected into a SFSB) and have additionally set @TransactionManagement(value=TransactionManagementType.BEAN) on the SFSB to have full control over the UserTransaction.
The transaction is controlled on the client side, where I start with a lookup for the SFSBs containing a reference to the entity beans.
SymbolischeWerte sbw = (SymbolischeWerte) symbolischeWerteHome.findByPrimaryKey(BigDecimal.valueOf(24704578762L));
System.out.println(symbolischeWerteHome.getSEQ_ID() + "\t\t" + symbolischeWerteHome.getName());

symbolischeWerteHome.beginTransaction();
symbolischeWerteHome.setName(symbolischeWerteHome.getName().concat("A"));
symbolischeWerteHome.commitTransaction();
that works so far!
After enabling JBoss Cache and multiple clients, only the first client causes a database select. The others get the entity from cache.
perfect!
The problem:
Two clients (CLIENTA, CLIENTB) concurrently look up an entity with the same primary key; while CLIENTA runs through the program, CLIENTB is halted manually after findByPrimaryKey.
When CLIENTA has finished (its value is successfully persisted), CLIENTB's System.out shows the old value, which is then modified and stored into the database as well.
So I'm losing CLIENTA's values!!
Is this a JBoss Cache configuration problem or is this a general problem of my systems design?
Cache config for entity:
@Cache(usage=CacheConcurrencyStrategy.TRANSACTIONAL, region="com.culturall.pension.system.SymbolischeWerteEntity")
Cache config in persistence.xml
<property name="hibernate.cache.region.factory_class" value="org.hibernate.cache.jbc2.JndiMultiplexedJBossCacheRegionFactory"/>
<property name="hibernate.cache.region.jbc2.cachefactory" value="java:CacheManager"/>
<property name="hibernate.cache.region.jbc2.cfg.entity" value="mvcc-entity"/>
<property name="hibernate.cache.region.jbc2.cfg.query" value="local-query"/>
Thx for ANY advice!
If I read you right, you configured the cache to be transactional. This by definition means clients in different transactions see different versions of the data; if the data was modified in another transaction, you need to refresh the data from the DB explicitly (thus discarding your changes) to see those changes.
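Beyond refreshing, the usual guard against this read-modify-write race (CLIENTB overwriting CLIENTA's committed change) is optimistic locking. A hedged sketch, assuming a JPA entity; the @Version field is an addition for illustration, not part of the original entity:

```java
// Sketch: a version column makes a stale UPDATE fail instead of silently
// overwriting the other client's committed value.
@Entity
public class SymbolischeWerte {

    @Id
    private BigDecimal seqId;

    private String name;

    @Version              // checked in the UPDATE's WHERE clause;
    private long version; // a stale write raises an optimistic lock failure
}
```

With this in place, CLIENTB's commit would fail with an optimistic locking exception rather than losing CLIENTA's values, and the client can re-read and retry.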
