Spring @Transactional on one service method spanning two Hibernate transaction managers

I was wondering if it is possible to use two transaction managers in one service method.
Due to a limitation of the client's MySQL DB configuration, we have two different data sources within one database, i.e. one user/password/URL per schema. That's why I had to configure two transaction managers. Now I have a problem when it comes to the service implementation. See the following code:
public class DemoService {
...
    @Transactional(value = "t1")
    public void doOne() {
        doTwo();
    }

    @Transactional(value = "t2")
    public void doTwo() {
    }
...
}
If I use this code pattern, I always get the exception
org.hibernate.HibernateException: No Session found for current thread
If I run the two methods separately, it works fine.
Did I miss something? Or is there another workaround here?
Any advice would be appreciated.
BTW, here is some of my configuration:
<bean id="sessionFactorySso" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="mappingLocations">
<list>
<value>classpath*:sso.vo/*.hbm.xml</value>
</list>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.show_sql">true</prop>
<prop key="generateDdl">true</prop>
<prop key="hibernate.dialect">${dialect} </prop>
</props>
</property>
<property name="dataSource" ref="dataSourceSso"/>
</bean>
<bean id="dataSourceSso" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass" value="${driver}"/>
<property name="jdbcUrl" value="${sso.url}"/>
<property name="user" value="${sso.username}"/>
<property name="password" value="${sso.password}"/>
<!-- these are C3P0 properties -->
<property name="acquireIncrement" value="2" />
<property name="minPoolSize" value="1" />
<property name="maxPoolSize" value="2" />
<property name="automaticTestTable" value="test_c3p0" />
<property name="idleConnectionTestPeriod" value="300" />
<property name="testConnectionOnCheckin" value="true" />
<property name="testConnectionOnCheckout" value="true" />
<property name="autoCommitOnClose" value="true" />
<property name="checkoutTimeout" value="1000" />
<property name="breakAfterAcquireFailure" value="false" />
<property name="maxIdleTime" value="0" />
</bean>
<bean id="transactionManagerSso" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactorySso"/>
<qualifier value="sso" />
</bean>
<tx:annotation-driven transaction-manager="transactionManagerSso" />

Because you want to enlist two data sources in one transaction, you need an XA (global) transaction.
Therefore you need to:
Set up the Spring JTA transaction manager.
Your Hibernate properties should use the JTA platform settings.
Your data source connections should be XA compliant.
You need an application server JTA transaction manager or a stand-alone transaction manager (Bitronix, Atomikos, JOTM).
You will need two session factory configurations, one for each individual data source.
And you won't have two transaction managers, t1 and t2; instead you will have two XA data sources that are automatically enlisted in the same global transaction, meaning two XA connections participating in the same global transaction. The XA transaction will use the 2PC protocol to commit both resources at commit time.
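As a rough illustration, here is a minimal Java-config sketch of such a JTA setup with Bitronix; the bean names, pool size and connection settings are assumptions for illustration, not taken from the question's configuration:

import bitronix.tm.BitronixTransactionManager;
import bitronix.tm.TransactionManagerServices;
import bitronix.tm.resource.jdbc.PoolingDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
@EnableTransactionManagement
public class JtaConfig {

    // One XA-capable pool per schema; a second PoolingDataSource for the other
    // schema would be declared the same way with its own uniqueName and credentials.
    @Bean(initMethod = "init", destroyMethod = "close")
    public PoolingDataSource ssoDataSource() {
        PoolingDataSource ds = new PoolingDataSource();
        ds.setClassName("com.mysql.jdbc.jdbc2.optional.MysqlXADataSource");
        ds.setUniqueName("ssoDataSource");
        ds.setMaxPoolSize(5);
        ds.getDriverProperties().setProperty("url", "jdbc:mysql://localhost/sso");
        ds.getDriverProperties().setProperty("user", "ssoUser");
        ds.getDriverProperties().setProperty("password", "ssoPassword");
        return ds;
    }

    // A single JTA transaction manager replaces the two HibernateTransactionManagers;
    // Bitronix exposes both the JTA TransactionManager and the UserTransaction via one object.
    @Bean
    public PlatformTransactionManager transactionManager() {
        BitronixTransactionManager btm = TransactionManagerServices.getTransactionManager();
        return new JtaTransactionManager(btm, btm);
    }
}

Each LocalSessionFactoryBean would then point at its own XA pool and configure the Hibernate JTA platform property accordingly.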
Check out this Bitronix Hibernate example.

You have a few options:
1. Inject the bean into itself and use that reference to call doTwo(). This really goes against the whole idea of IoC and AOP, so I don't recommend it.
2. Switch to compile-time weaving. Rather than using proxies, Spring (actually the AspectJ compiler) will add the bytecode that starts/stops transactions to your class at compile time. There are pros and cons to this approach. See this page for more details.
3. Use load-time weaving. Same as #2 except that your classes are modified as they are loaded rather than at compile time. IMO, Java classloading is complicated enough. I'm sure this works great for some folks, but I personally would avoid this.
4. As Vlad pointed out, you can use JTA and XA.
5. Start a new transaction against transaction manager 2 within doOne() before calling doTwo(). RTFM on programmatic transaction management (see the sketch after this list).
6. Check out ChainedTransactionManager. It essentially aggregates multiple transaction managers and does a "best effort" with commit/rollback. This is NOT a true two-phase commit like Vlad's solution.
All of these except Vlad's solution (#4) have the potential to leave the databases in an inconsistent state. You need JTA/XA/two-phase commit to ensure consistency in the event that one of the TX managers throws an exception at commit time.
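To illustrate option 5, here is a hedged sketch of programmatic transaction management inside doOne(); the qualifier names and the way the second transaction manager is injected are assumptions based on the question's setup, not a confirmed configuration:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class DemoService {

    // the second transaction manager, injected by its qualifier (name assumed)
    @Autowired
    @Qualifier("t2")
    private PlatformTransactionManager txManager2;

    @Transactional(value = "t1")
    public void doOne() {
        // ... work against data source 1 runs in the declarative "t1" transaction ...

        // run the work against data source 2 in its own programmatic transaction
        new TransactionTemplate(txManager2).execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                doTwoWork();
            }
        });
    }

    private void doTwoWork() {
        // ... work against data source 2 ...
    }
}

Note that the two transactions commit independently, so this has the same consistency caveat mentioned above.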

Related

How to configure a Spring data source to avoid participation in transaction synchronization?

I'm observing seemingly weird behavior related to how Spring handles operations with multiple data sources, especially participation in transaction synchronization.
My Spring context has two data sources configured:
1) dataSource1 - MySQL database
2) dataSource2 - Apache Impala database
Both data sources have a JdbcTemplate configured for them. Additionally, dataSource1 has a transaction manager configured; dataSource2 has none (Apache Impala does not have transactions by design, it just happens to expose SQL-like querying functionality via a JDBC connector).
I'm using this configuration in a Spring Batch application - dataSource1 and its transactionManager are set to be used to store Spring Batch job meta-info.
Next I have a Spring Batch job configured with a single custom Tasklet step. In this step I'm accessing dataSource2 via its JdbcTemplate. And this is where the problem occurs - to my surprise, the connection to dataSource2 starts to participate in transaction synchronization.
2018-06-06 10:41:08,179 DEBUG [main] org.springframework.jdbc.datasource.DataSourceUtils - Registering transaction synchronization for JDBC Connection
I understand that upon the start of a Spring Batch step a transaction related to dataSource1 is started; however, why should dataSource2 be involved in that at all?
A negative side effect is that once the connection to dataSource2 starts to participate in transaction synchronization, it is not closed after e.g. a jdbcTemplate.execute(...) call completes. Essentially, what I'm observing is that open connections to dataSource2 are closed only when the outer transaction completes.
Is there a way to configure the Spring context and dataSource2 so that it does not participate in transaction synchronization?
Configuration
<bean id="dataSource1" class="com.zaxxer.hikari.HikariDataSource" destroy-method="close">
<constructor-arg ref="hikariConfig" />
</bean>
<bean id="hikariConfig" class="com.zaxxer.hikari.HikariConfig">
<property name="poolName" value="springHikariCP" />
<property name="dataSourceClassName" value="com.mysql.jdbc.jdbc2.optional.MysqlDataSource" />
...
</bean>
<bean id="dataSource1JdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource1"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager" p:entityManagerFactory-ref="entityManagerFactory"/>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" p:dataSource-ref="dataSource1">
<property name="packagesToScan" value="..."/>
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
</property>
<property name="sharedCacheMode" value="DISABLE_SELECTIVE"/>
</bean>
<bean id="dataSource2" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass" value="org.apache.hive.jdbc.HiveDriver"/>
...
</bean>
<bean id="dataSource2JdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource2"/>
</bean>

Multiple connection pool configurations for different modules in an application

Our application has multiple modules, and each module uses its own schema in the same MySQL database. Now I need different connection pool configurations for each module because of their different DB resource consumption, i.e. one module may have 20 active connections at a point in time, while another may have at most 1. I have searched here and other forums but couldn't find a solution; note that this topic is not about multi-tenancy or multiple databases, as all schemas are in the same DB.
Here's the config we have:
<bean id="dataSource" class="our.own.package.RoutingDataSource"> <!-- RoutingDataSource extends spring AbstractDataSource -->
<property name="master" ref="masterDS"/>
</bean>
<bean id="abstractDataSource" abstract="true">
<property name="driverClass" value="com.mysql.jdbc.Driver" />
<property name="initialPoolSize" value="#initial.pool.size#" />
<property name="minPoolSize" value="#min.pool.size#" />
<property name="maxPoolSize" value="#max.pool.size#" /> <!-- I want to have different configs for each module in our application -->
</bean>
<bean id="masterDS" class="com.mchange.v2.c3p0.ComboPooledDataSource" parent="abstractDataSource">
<property name="jdbcUrl" value="jdbc:mysql://#host#/" />
<property name="user" value="#user#" />
<property name="password" value="#pwd#" />
<property name="dataSourceName" value="#dbName#" />
</bean>
So now I have two questions:
1) Is it possible to have different connection pool configurations for one datasource in Spring?
2) If I have to go with the multiple-datasource approach (one datasource per module), is implementing Spring's AbstractRoutingDataSource the correct way to go?
Thank you!
Ad 1. Your data source is in fact a connection pool, so you would be stacking multiple pools on top of another pool. You can do it, but you will face many other problems.
Ad 2. Yes, definitely. You already have a RoutingDataSource, so it should be an implementation of AbstractRoutingDataSource. You probably already have logic there that determines the current data source routing key used to do the lookup.
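For reference, a minimal sketch of what such an AbstractRoutingDataSource implementation could look like; the ModuleContextHolder class and the idea of keying the lookup by module name are illustrative assumptions, not taken from the question's code:

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Hypothetical ThreadLocal holder set by each module's entry point.
class ModuleContextHolder {
    private static final ThreadLocal<String> CURRENT_MODULE = new ThreadLocal<String>();
    static void setCurrentModule(String module) { CURRENT_MODULE.set(module); }
    static String getCurrentModule() { return CURRENT_MODULE.get(); }
    static void clear() { CURRENT_MODULE.remove(); }
}

public class RoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // the returned key is matched against the map passed to setTargetDataSources(...),
        // where each entry is a separately configured per-module connection pool
        return ModuleContextHolder.getCurrentModule();
    }
}

Each per-module ComboPooledDataSource (with its own minPoolSize/maxPoolSize) would then be registered in the targetDataSources map of the routing bean.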

DataSource in an OSGi container

I have a simple Spring app that connects to a DB via an EntityManager,
so I have the following configuration:
<bean id="domainEntityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitName" value="TheManager" />
<property name="dataSource" ref="domainDataSource" />
<property name="packagesToScan" value="com.conztanz.persistence.stock.model" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
</property>
<property name="jpaProperties">
<props>
<prop key="hibernate.hbm2ddl.auto">create-drop</prop>
<prop key="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</prop>
</props>
</property>
</bean>
<bean id="domainDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost:5433/dbName" />
<property name="username" value="xxxx" />
<property name="password" value="xxxx" />
</bean>
This works fine when launched via a main class (loading the ApplicationContext manually).
But once deployed into ServiceMix I get the following error:
Property 'driverClassName' threw exception; nested exception is java.lang.IllegalStateException: Could not load JDBC driver class [org.postgresql.Driver]
I've read somewhere that OSGi and DriverManager don't mix well, but I fail to understand why.
The solution that seems to be good practice is to expose the dataSource as an OSGi bundle. Do you agree? And in that case, how would you get access to it from a Spring context so as to have an EntityManager, for example?
DriverManager does not work well in OSGi. The easiest way is to use a DataSource directly. Most DB drivers have such a class. If you instantiate it in your app context, then it will work. The downside, though, is that it binds your application to the DB driver, as it will then import the packages of the DataSource implementation.
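As a sketch of that first approach (assuming the PostgreSQL driver bundle is deployed; Java config is shown for brevity, and the host, port and credentials are placeholders taken loosely from the question):

import javax.sql.DataSource;
import org.postgresql.ds.PGSimpleDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource domainDataSource() {
        // instantiate the driver's own DataSource class instead of relying on
        // a DriverManager-based lookup, which does not work well under OSGi
        PGSimpleDataSource ds = new PGSimpleDataSource();
        ds.setServerName("localhost");
        ds.setPortNumber(5433);
        ds.setDatabaseName("dbName");
        ds.setUser("xxxx");
        ds.setPassword("xxxx");
        return ds;
    }
}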
A more loosely coupled way is to use OPS4J Pax JDBC. It allows you to create a DataSource as an OSGi service from a configuration in Config Admin. In your app context you then just add a dependency on a DataSource service, so your application is not bound to a specific DB driver. One typical use case is to use H2 in tests and Oracle in production.

Hibernate/Spring session rolls back updates after some period of time

I run my Tomcat app with Spring 3 + Hibernate 3 (MySQL).
Hibernate creates the session. I have no problem with inserts.
But after that I try to make an update in a SQL IDE (e.g. SQL Developer):
update t_user set password='123' where id=1;
The row updates, but (!) my update is rolled back after some time.
I think it happens when the session flushes. Please correct me if I am wrong.
I also tried to execute that update in different variations via Java code.
For instance:
@Override
public void addUser(User u) {
    sessionFactory.getCurrentSession().save(u);
}
sessionFactory.getCurrentSession().update(u);
sessionFactory.getCurrentSession().saveOrUpdate(u);
So the only way to make updates in my database is to stop my application and execute the query.
I think the session could be configured wrongly:
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass" value="${jdbc.driverClassName}"/>
<property name="jdbcUrl" value="${jdbc.databaseurl}"/>
<property name="user" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
<property name="minPoolSize" value="1"/>
<property name="maxPoolSize" value="15"/>
<property name="maxStatements" value="100"/>
<property name="maxIdleTime" value="120"/>
<property name="testConnectionOnCheckout" value="true"/>
<property name="idleConnectionTestPeriod" value="300"/>
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="configLocation">
<value>classpath:hibernate.cfg.xml</value>
</property>
<property name="configurationClass">
<value>org.hibernate.cfg.AnnotationConfiguration</value>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
<prop key="hibernate.show_sql">true</prop>
<prop key="hibernate.hbm2ddl.auto">${hibernate.hbm2ddl}</prop>
</props>
</property>
</bean>
I also tried flushing the session:
session.flush();
I have this property in the hibernate.cfg.xml file:
<property name="hibernate.flushMode">ALWAYS</property>
Or maybe the problem is with MySQL...
Has anybody encountered this problem?
Please advise.
UPDATE:
I am using a transaction manager:
<bean id="transactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
hibernate.cfg.xml:
<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
"-//Hibernate/Hibernate Configuration DTD//EN"
"http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<property name="hibernate.flushMode">ALWAYS</property>
<mapping class="com.dao.model.user.User"/>
</session-factory>
</hibernate-configuration>
You're missing the actual transaction demarcation so that Spring can create the transactional context that the SessionFactory should use.
Regular use case
You could, for instance, annotate your transactional method with @Transactional and trigger its processing with <tx:annotation-driven/>, as you are already using XML configuration (@EnableTransactionManagement if you are using Java config).
You shouldn't call flush yourself as the infrastructure will do that automatically when it needs to by default.
Check the transaction management section of the documentation for more information.
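For instance, a minimal sketch of what the transactional demarcation could look like for the addUser method from the question (the @Service/@Autowired wiring is an assumption about how the beans are wired); with <tx:annotation-driven transaction-manager="transactionManager"/> in place, Spring binds a session to the thread and flushes/commits when the method returns:

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional
    public void addUser(User u) {
        // runs inside a Spring-managed transaction: the session is bound to the
        // thread, and the change is flushed and committed at method exit
        sessionFactory.getCurrentSession().saveOrUpdate(u);
    }
}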
Open session in view
You may just as well have all this in combination with OpenSessionInViewFilter. In that case, the same principle applies: having the session open for your request is not enough, check this note from the javadoc of OpenSessionInViewFilter:
NOTE: This filter will by default not flush the Hibernate Session, with the flush mode set to FlushMode.NEVER. It assumes to be used in combination with service layer transactions that care for the flushing: The active transaction manager will temporarily change the flush mode to FlushMode.AUTO during a read-write transaction, with the flush mode reset to FlushMode.NEVER at the end of each transaction. If you intend to use this filter without transactions, consider changing the default flush mode (through the "flushMode" property).
So you need to flag your method as transactional so that Spring's transaction abstraction can take over and flush when it needs to. Adding @Transactional to your transactional method and enabling its processing through <tx:annotation-driven/> are probably the only things that you need to do. Please read the documentation first to understand when you need to add this annotation.

EntityManager configuration in each DAO

I understand that this is a very long question, but I wanted to ask everything because I have been stuck on these things for more than 2 weeks and I'm in a situation where I have to solve this within this week. Please guide me on this matter.
I'm using EclipseLink JPA 2, Spring 3, JDK 6, MySQL 5 and Tomcat 7.
I have configured the following in each of my DAO classes:
@PersistenceContext
private EntityManager em;
I have the following in my Spring XML:
<bean class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" id="dataSource">
<property name="url" value="jdbc:mysql://localhost:3306/xxxxx"/>
<property name="username" value="xxxx"/>
<property name="password" value="xxxx"/>
<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
</bean>
<bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="entityManagerFactory">
<property name="dataSource" ref="dataSource"/>
<property name="jpaVendorAdapter" ref="jpaVendorAdapter"/>
<property name="jpaDialect" ref="jpaDialect"/>
</bean>
<bean class="org.springframework.orm.jpa.JpaTransactionManager" id="transactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
<property name="jpaDialect" ref="jpaDialect"/>
</bean>
<bean id="jpaVendorAdapter" class="org.springframework.orm.jpa.vendor.EclipseLinkJpaVendorAdapter" >
<property name="showSql" value="true"/>
<property name="generateDdl" value="true" />
</bean>
<bean id="jpaDialect" class="org.springframework.orm.jpa.vendor.EclipseLinkJpaDialect"/>
From Persistence.xml:
<persistence-unit name="xxxxx" transaction-type="RESOURCE_LOCAL">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<!-- class mappings -->
</persistence-unit>
I've got a few points of confusion about what I have done:
1. Is the EntityManager injected by Spring? (I understand that @PersistenceContext is a Java EE annotation, so I'm wondering whether it is injected without Spring's contribution.)
2. As I have already mentioned, I have injected an EntityManager in all the DAO classes. Is this good practice? Or should I make it a singleton by having a separate class like PersistenceManager, which has the EntityManager attribute wired and a getEntityManager() method?
3. As you can see above, I have configured Spring transactions. But when I do CRUD operations continuously 2-3 times, the application gets stuck and fails with an EclipseLink exception saying unable to get lock, timeout, etc. Am I doing anything wrong here or missing any transaction configuration?
4. With the above configuration, I can only use the @Transactional annotation with its default values, which are PROPAGATION_REQUIRED and ISOLATION_DEFAULT. If I change these to any other values, such as @Transactional(PROPAGATION_REQUIRED, ISOLATION_SERIALIZABLE) etc., the application throws the exception "Custom isolation levels are not supported". Again, am I missing any configuration?
Thanks.
1. Yes, Spring recognizes the @PersistenceContext annotation and injects the entity manager.
2. Spring takes care of that - it injects the same EntityManager instance into all DAOs. In fact, it injects a proxy, so that each request uses a different entity manager.
3. Normally everything should run fine. You need <tx:annotation-driven /> in order to use @Transactional.
4. JPA only supports the default isolation level. You can work around this by customizing the Spring JpaDialect, but there's nothing built-in. The way to go is to extend XJpaDialect (in your case X = EclipseLink), override beginTransaction, obtain the Connection (in an EclipseLink-specific way), set the desired isolation level (accessible through the transaction definition), and configure this as a property of your LocalContainerEntityManagerFactoryBean:
<property name="jpaDialect">
<bean class="com.foo.util.persistence.EclipseLinkExtendedJpaDialect" />
</property>
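A rough, untested sketch of what such a dialect extension might look like; the way the JDBC Connection is obtained from EclipseLink here (via unwrap) is an assumption and may need adjusting for your EclipseLink version:

import java.sql.Connection;
import java.sql.SQLException;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceException;
import org.springframework.orm.jpa.vendor.EclipseLinkJpaDialect;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionException;

public class EclipseLinkExtendedJpaDialect extends EclipseLinkJpaDialect {

    @Override
    public Object beginTransaction(EntityManager entityManager, TransactionDefinition definition)
            throws PersistenceException, SQLException, TransactionException {
        if (definition.getIsolationLevel() != TransactionDefinition.ISOLATION_DEFAULT) {
            // EclipseLink-specific step: obtain the underlying JDBC connection and
            // apply the isolation level requested on @Transactional
            Connection connection = entityManager.unwrap(Connection.class);
            if (connection != null) {
                connection.setTransactionIsolation(definition.getIsolationLevel());
            }
        }
        entityManager.getTransaction().begin();
        return null;
    }
}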
