I currently have a program with two data sources. Each data source is tied to its own transaction manager.
<bean id="tM" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="ds1" />
</bean>
<bean id="tM2" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="ds2" />
</bean>
If I have a function that accesses both data sources and an error occurs, when one data source rolls back, will the second data source also roll back?
Thanks!
If your function accesses the data stores sequentially (that is, it COMMITs to the first data store and then tries to COMMIT to the second one), then if an error occurs after the first COMMIT, the second data source will ROLLBACK, but the first stays COMMITTED.
So you must use a single data store, or a JtaTransactionManager.
Spring can't roll back a committed JDBC statement. This is what XADataSources and two-phase commit are for (usually through a JTA transaction manager).
You are asking for data inconsistency by trying to manage this yourself, because it may or may not work depending on what fails and when. For example, assume this flow:
Start TX
Do work with ds1
Do work with ds2
End TX
commit ds2
commit ds1
If the commit on ds1 fails, then ds2 will stay committed. But if the commit on ds2 fails, then the whole transaction will fail and ds1 will roll back.
Also, are you sure you are always closing the DataSources in the same order they were opened (first used)? Spring may take care of this, but I am not sure.
Autocommit may be on by default. Consider turning it off and managing the commits yourself.
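If you do go the JTA route, the wiring could look roughly like this (a sketch only, shown as Java config; the JNDI lookup and the requirement that ds1 and ds2 be XA-capable are assumptions about your environment, not part of your original setup):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class TxConfig {

    // A single JTA transaction manager replaces tM and tM2. Both data
    // sources must be XA-capable and enlisted with the JTA provider
    // (container-managed, or standalone such as Atomikos) so that a
    // failure in either branch rolls back both.
    @Bean
    public PlatformTransactionManager transactionManager() {
        // With no properties set, Spring looks up the UserTransaction
        // at java:comp/UserTransaction from JNDI on startup.
        return new JtaTransactionManager();
    }
}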
I am facing a weird issue: when I update the table and then fetch the row a couple of seconds later, I still receive the old data. When I run the same query again a couple of seconds after that, I receive the refreshed data. Basically, it takes some time before the fresh data is returned.
I have disabled all caching in Hibernate; while fetching I also call session.clear() and mark the query as uncacheable.
I also looked into the MySQL query log and confirmed that Hibernate is querying MySQL, yet I still receive the old data.
How can I make sure that at any given point in time I receive only refreshed data?
Below is my hibernate config file
<hibernate-configuration>
<session-factory>
<property name="dialect">org.hibernate.dialect.MySQL5InnoDBDialect</property>
<property name="show_sql">true</property>
<property name="connection.url">jdbc:mysql://127.0.0.1:4804/aluminidb?autoReconnect=true</property>
<property name="connection.username">root</property>
<property name="connection.password">root</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<!-- Example mapping file inclusion -->
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
<!-- Disable the second-level cache -->
<property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
<mapping resource="com/alumini/spring/model/Alumini.hbm.xml"/>
<mapping resource="com/alumini/spring/model/Question.hbm.xml"/>
<mapping resource="com/alumini/spring/model/Events.hbm.xml"/>
</session-factory>
</hibernate-configuration>
Below is the code to fetch the object
@Override
public Alumini login(String email, String password) {
    Session session = sessionFactory.openSession();
    session.clear();
    Transaction t;
    try {
        t = session.beginTransaction();
        Query query = session.getNamedQuery("chkLogIn");
        query.setParameter("email", email);
        query.setParameter("password", password);
        query.setCacheMode(CacheMode.REFRESH);
        query.setCacheable(false);
        List<Alumini> aluminiList = query.list();
        if (aluminiList != null && aluminiList.size() > 0) {
            System.out.println(aluminiList.get(0).getLastUpdated());
            t.commit();
            return aluminiList.get(0);
        } else {
            t.rollback();
            return null;
        }
    } finally {
        session.close();
    }
}
So I am clearing the session, and in my config all caching is disabled. Still, when I update the record and fetch it within a couple of seconds using the above method, I receive the old data once. After that it gives me the latest data.
If some entities are loaded in the current Session and you run a native query, the Session might not flush automatically.
The Hibernate Session offers application-level repeatable reads, so if another database transaction changes an entity, Hibernate will not refresh the current entity state.
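If you really need the latest database state for an entity already loaded in the same Session, an explicit refresh is one option (a minimal sketch; alumini stands for the already-loaded entity):
// Re-reads the row from the database and overwrites the in-memory state,
// bypassing the Session's application-level repeatable reads for this entity.
session.refresh(alumini);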
Now, since you did not post the UPDATE part, it's hard to tell what you are doing there. The best way to address this issue is to simply log all JDBC statements, as explained in this article. Then you will know for sure whether the update was executed or not.
What's more, the way you do transaction and Session management is flawed as well. You don't even roll back in a finally block, and since you are using MySQL, this can lead to locks being held and to deadlocks.
Just use a framework like Spring or Java EE to handle the Persistence Context and transaction management for you.
In your example:
Session session=sessionFactory.openSession();
session.clear();
How can one tell whether this is a new Session (in which case calling clear makes no sense) or the same Session you used for the update?
From this code, I would assume that this is the case:
if (aluminiList != null && aluminiList.size() > 0) {
    System.out.println(aluminiList.get(0).getLastUpdated());
    t.commit();
    return aluminiList.get(0);
} else {
    t.rollback();
    return null;
}
But it suggests that you might have skipped the Hibernate User Guide and jumped straight into coding with Hibernate.
The aluminiList can never be null. It can only be empty.
Logging via System.out is wrong. Use a logging framework for that.
Why do you want to commit after the query was executed? Maybe the change was not flushed at all, and the query did not trigger the flush, because either you set FlushMode.MANUAL or the query is native SQL, not JPQL. Check out this article for more details about the difference.
You call rollback in the else branch? What's the point? You don't trust the database to have issued the UPDATE properly, so now you want to roll back that change? Or you suspect that Hibernate did not flush, but then why would you roll back a change that didn't happen? And if it did happen, then you should read your own writes, because that's how ACID isolation levels work.
All in all, there are many issues in the code that you posted. So, read the Hibernate User Guide and these tutorials, and you will fix all your issues. There's no other way.
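For illustration only, a minimal sketch of the same lookup with Spring managing the transaction boundaries (the SessionFactory wiring is assumed; the query and parameter names come from your code):
import java.util.List;

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class AluminiDao {

    @Autowired
    private SessionFactory sessionFactory;

    // Spring opens, commits, and closes for us; getCurrentSession()
    // returns the Session bound to the surrounding transaction.
    @Transactional(readOnly = true)
    public Alumini login(String email, String password) {
        @SuppressWarnings("unchecked")
        List<Alumini> result = sessionFactory.getCurrentSession()
                .getNamedQuery("chkLogIn")
                .setParameter("email", email)
                .setParameter("password", password)
                .list();
        return result.isEmpty() ? null : result.get(0);
    }
}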
I have a situation where I have to handle multiple clients in one app, and each client has a separate database. To support that, I'm using a Spring custom scope, quite similar to the built-in request scope. A user authenticates on each request and can set the context client ID based on the passed credentials. The scoping itself seems to be working properly.
So I used my custom scope to create a scoped proxy for my DataSource to support a different database per client. And I get connections to the proper databases.
Then I created a scoped proxy for the EntityManagerFactory to use JPA. This part also looks OK.
Then I added a scoped proxy for the PlatformTransactionManager for declarative transaction management. I use @Transactional on my service layer, and it gets propagated nicely to my Spring Data powered repository layer.
All is fine and works correctly as long as I use only JPA. I can even switch the context to a different client within the request (I use ThreadLocals under the hood), and transactions to both databases are handled correctly.
The problems start when I try to use JdbcTemplate in one of my custom repositories. At first glance everything looks OK there too, as no exceptions are thrown. But when I check the database for the objects I thought I had inserted with my custom JDBC-based repository, they're not there!
I know for sure that I can use JPA and JDBC together by declaring only a JpaTransactionManager and passing both the DataSource and the EntityManagerFactory to it; I checked it without the scoped proxies, and it works.
So the question is: how do I make JDBC work together with JPA using the JpaTransactionManager when I have scoped-proxied the DataSource, EntityManagerFactory and PlatformTransactionManager beans? I repeat: using only JPA works perfectly, but adding plain JDBC into the mix does not.
UPDATE 1: One more thing: all read-only (SELECT) operations work fine with JDBC too; only writes (INSERT, UPDATE, DELETE) end up not committed or rolled back.
UPDATE 2: As @Tomasz suggested, I've removed the scoped proxies from the EntityManagerFactory and PlatformTransactionManager, as those are indeed not needed and cause more confusion than anything else.
The real problem seems to be switching the scope context within a transaction. The TransactionSynchronizationManager binds transactional resources (i.e. the EMF or DS) to the thread at transaction start. It is able to unwrap the scoped proxy, so it binds the actual instance of the resource from the scope active at the time the transaction starts. Then, when I change the context within a transaction, it all gets messed up.
It seems I need to suspend the active transaction and set aside the current transaction context, so that I can clear it upon entering another scope, making Spring think it's not inside a transaction any more and forcing it to create a new one for the new scope when needed. Then, when leaving the scope, I'd have to restore the previously suspended transaction. Unfortunately, I have not been able to come up with a working implementation yet. Any hints appreciated.
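For the record, the direction I'm experimenting with looks roughly like this (untested sketch; ClientScopeHolder is my ThreadLocal-based holder, and its getClientId/setClientId methods are assumed names):
import java.util.function.Supplier;

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

// Untested sketch: run work for another client in its own transaction so
// the resources bound by TransactionSynchronizationManager never span two
// scopes. ClientScopeHolder is the ThreadLocal-based holder mentioned above.
public class ClientScopedExecutor {

    private final PlatformTransactionManager txManager;
    private final ClientScopeHolder scopeHolder;

    public ClientScopedExecutor(PlatformTransactionManager txManager,
                                ClientScopeHolder scopeHolder) {
        this.txManager = txManager;
        this.scopeHolder = scopeHolder;
    }

    public <T> T runAs(String clientId, Supplier<T> work) {
        TransactionTemplate template = new TransactionTemplate(txManager);
        // REQUIRES_NEW suspends the caller's transaction (unbinding its
        // resources from the thread) and starts a fresh one, which resolves
        // the scoped DataSource proxy in the new client context.
        template.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
        String previous = scopeHolder.getClientId();
        scopeHolder.setClientId(clientId);
        try {
            return template.execute(status -> work.get());
        } finally {
            scopeHolder.setClientId(previous);
        }
    }
}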
Below is some of my code, but it's pretty standard, except for the scoped proxies.
The DataSource:
<!-- provides database name based on client context -->
<bean id="clientDatabaseNameProvider"
class="com.example.common.spring.scope.ClientScopedNameProviderImpl"
c:clientScopeHolder-ref="clientScopeHolder"
p:databaseName="${base.db.name}" />
<!-- an extension of org.apache.commons.dbcp.BasicDataSource that
uses proper database URL based on database name given by above provider -->
<bean id="jpaDataSource" scope="client"
class="com.example.common.spring.datasource.MysqlDbInitializingDataSource"
destroy-method="close"
p:driverClassName="${mysql.driver}"
p:url="${mysql.url}"
p:databaseNameProvider-ref="clientDatabaseNameProvider"
p:username="${mysql.username}"
p:password="${mysql.password}"
p:defaultAutoCommit="false"
p:connectionProperties="sessionVariables=storage_engine=InnoDB">
<aop:scoped-proxy proxy-target-class="false" />
</bean>
The EntityManagerFactory:
<bean id="jpaVendorAdapter"
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
p:database="MYSQL"
p:generateDdl="true"
p:showSql="true" />
<util:properties id="jpaProperties">
<!-- omitted for readability -->
</util:properties>
<bean id="jpaDialect"
class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:packagesToScan="com.example.model.core"
p:jpaVendorAdapter-ref="jpaVendorAdapter"
p:dataSource-ref="jpaDataSource"
p:jpaDialect-ref="jpaDialect"
p:jpaProperties-ref="jpaProperties" />
The PlatformTransactionManager:
<bean id="transactionManager"
class="org.springframework.orm.jpa.JpaTransactionManager"
p:dataSource-ref="jpaDataSource"
p:entityManagerFactory-ref="entityManagerFactory" />
<tx:annotation-driven proxy-target-class="false" mode="proxy"
transaction-manager="transactionManager" />
I have a Camel project, and after we create a control bean we want to clean up a DB log table. So each time we run the application, we TRUNCATE a table called agent_orders. This is set up in an entity object as a named query.
@NamedNativeQuery(name="cleanOrderTable", query="TRUNCATE agent_orders", resultClass=AgentOrderEntity.class)
The code that calls this query looks like:
#Component("mgr")
public class Controller{
#PersistenceContext(unitName="camel")
private EntityManager em;
.......
#Transactional
public void clearHistoricalOrders() throws Exception{
Query query = em.createNamedQuery("cleanOrderTable");
query.executeUpdate();
}
}
Calling the clear-history method, we get the error javax.persistence.TransactionRequiredException: Executing an update/delete query.
I have tried everything (UserTransaction, em.getTransaction().begin()), but nothing works. Any idea how I can run this query?
We have the following transaction manager setup in our app context.xml:
<tx:annotation-driven transaction-manager="txManager" />
<bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager"
p:dataSource-ref="dataSource">
<property name="entityManagerFactory" ref="emFactory" />
</bean>
Try debugging and check whether your controller is proxied and whether there is transaction-related code executed before your method. Try enabling the database server logs to check which queries really get executed.
Make sure you don't have any ServletFilters that set up a read-only transaction before the request reaches your Controller.
Make sure your entity manager is the one that's passed to the transaction manager.
Also, I've found some info advising against using @PersistenceContext in servlets: http://weblogs.java.net/blog/ss141213/archive/2005/12/dont_use_persis.html
Hope this helps!
I'd try executing the query with a TransactionTemplate, just to check whether the @Transactional annotation really isn't having any effect.
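Something along these lines (a sketch; it assumes the txManager bean from your context can be injected next to the EntityManager):
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

// Sketch: run the update through a programmatic transaction to rule out
// problems with the @Transactional proxying. Assumes the txManager bean
// from the question's context can be injected here.
@Component
public class OrderCleaner {

    @PersistenceContext(unitName = "camel")
    private EntityManager em;

    @Autowired
    private PlatformTransactionManager txManager;

    public void clearHistoricalOrders() {
        new TransactionTemplate(txManager).execute(status ->
                em.createNamedQuery("cleanOrderTable").executeUpdate());
    }
}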
Also, what's up with resultClass=AgentOrderEntity.class? Why does a query that truncates a table need to return anything?
I'm working on a project that has, in the last couple of months, developed an incredibly annoying bug involving object updates. Certain objects (most notably users), when updated in Hibernate, are never marked as dirty and flushed. For example:
Session session = factory.openSession(interceptor);
Transaction tx = session.beginTransaction();
Object object = session.load(SomeEntity.class, someId); // load() needs the entity class; SomeEntity is a stand-in
// Modify object ...
session.update(object);
tx.commit();
session.flush();
session.close();
All the appropriate methods on the interceptor get called by Hibernate except Interceptor.onFlushDirty and Interceptor.findDirty. My initial assumption was that this was an issue with detached objects, as we were storing the user object in an HTTP session; however, a refactor that removed all detached objects did not solve the problem.
The transaction is definitely getting committed, and the session is flushed and closed on completion. I also double-checked to ensure the session wasn't in read-only mode.
I have also tried using Session.merge in place of Session.update, to no effect. When using Session.merge, the returned object contains the correct updated information, but again the database is never updated.
I have found this SO question that seems to describe a similar issue (relevant because the object I'm working with stores a custom enum field) and a good description of Hibernate's dirty-checking mechanism, but other than that, information has been sparse.
My cfg.xml looks like this:
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.default_batch_fetch_size">1024</property>
<property name="hibernate.order_inserts">true</property>
<property name="hibernate.order_updates">true</property>
<property name="hibernate.show_sql">false</property>
<property name="hibernate.c3p0.aquire_increment">1</property>
<property name="hibernate.c3p0.initial_pool_size">1</property>
<property name="hibernate.c3p0.min_size">4</property>
<property name="hibernate.c3p0.max_size">32</property>
<property name="hibernate.c3p0.idle_test_period">100</property> <!-- seconds -->
<property name="hibernate.c3p0.timeout">600</property> <!-- seconds -->
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">true</property>
<property name="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.jdbc.fetch_size">1024</property>
<property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hibernate.search.worker.execution">async</property>
<property name="hibernate.search.default.directory_provider">org.hibernate.search.store.RAMDirectoryProvider</property>
<property name="hibernate.search.default.indexwriter.batch.ram_buffer_size">256</property>
<property name="hibernate.search.default.optimizer.transaction_limit.max">1000</property>
UPDATE: I have tried both disabling and enabling Hibernate's second-level cache, as Finbarr suggested, to no effect.
Anyone have any suggestions on other things I might try?
Try disabling Hibernate's second-level cache. Actually, can you post your cfg.xml?
Finbarr was partially correct: it is almost certainly a caching issue, but not the second-level cache. By forcing Hibernate to do the update irrespective of whether it thinks the object has changed, everything is now saving properly:
session.evict(object);
session.update(object);
In the context of my project, I've stumbled upon a piece of code that did just what you've mentioned above: evict, then update.
Did further investigation shed more insight on this issue? Was it a bug or not, after all?
Have you reported this behavior/bug to Hibernate, and/or do you have a reference to that JIRA issue?
What version of Hibernate were you using?
On the potential cause:
1. You've mentioned a read-only session, but not read-only at the query level or object level.
2. Have you checked that the object was not getting evicted from the session (EvictEventListener)?
3. Have you checked that the object was not being changed back to its original state (hence requiring no update), or that update was not called again with an older (detached) version of the object?
Any input appreciated,
Regards,
Christophe
The problem is in the code: you are committing the transaction before flushing the session!
You must flush the session, which will 'synchronize' the session state with the current transaction, and THEN commit the transaction:
Session session = factory.openSession(interceptor);
Transaction tx = session.beginTransaction();
Object object = session.load(SomeEntity.class, someId); // SomeEntity stands in for the mapped class, as above
// Modify object ...
session.update(object);
session.flush();
tx.commit();
session.close();
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="data.emf" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
<bean id="transactionManager2" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="data.emf" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager2" />
In my service layer, can I use @Transactional(name="transactionManager2") to identify which transaction manager to use if I have multiple transaction managers?
You can specify which transaction manager to use with @Transactional via its value attribute:
A qualifier value for the specified transaction. May be used to determine the target transaction manager, matching the qualifier value (or the bean name) of a specific PlatformTransactionManager bean definition.
For example:
#Transactional("txManager1");
Alternatively, you can use the more explicit TransactionProxyFactoryBean, which gives you finer-grained control over which objects get proxied by which transaction managers. It still uses the annotations, but it doesn't auto-detect beans; it's configured explicitly on a bean-by-bean basis.
This normally isn't an issue, but it's not wise to have multiple transaction managers unless you have a very good reason to do so. If you find yourself needing two transaction managers, it's usually better to see if you can make do with one. For example, if you have two data sources configured in your app server, you can incorporate both into a single JtaTransactionManager rather than two separate JpaTransactionManagers or DataSourceTransactionManagers.
More on the need for more than one transaction manager: you might be trying to do nested transactions, or separate transactions in sequence; in that case, different propagation settings let you achieve that with a single transaction manager. See Transaction propagation.
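For example, a minimal sketch with a single transaction manager and two propagation settings (class and method names are illustrative, not from the question):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Illustrative sketch: one transaction manager, two propagation settings.
@Service
class AuditService {

    // Always runs in its own transaction: the audit row commits even if
    // the caller's transaction later rolls back.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void writeAuditRecord(String message) {
        // ... insert the audit row ...
    }
}

@Service
class OrderService {

    private final AuditService audit;

    @Autowired
    OrderService(AuditService audit) {
        this.audit = audit;
    }

    // REQUIRED is the default: joins the caller's transaction or starts one.
    @Transactional
    public void placeOrder() {
        // ... business logic ...
        audit.writeAuditRecord("order placed");
    }
}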