How to use the same connection for two queries in Spring? - java

I have the following code in a Spring JdbcTemplate-based DAO:
getJdbcTemplate().update("Record Insert Query...");
int recordId = getJdbcTemplate().queryForInt("SELECT last_insert_id()");
The problem is that sometimes my update and queryForInt queries get executed using different connections from the connection pool.
This results in an incorrect recordId being returned, since MySQL's last_insert_id() must be called from the same connection that issued the insert query.
I have considered SingleConnectionDataSource, but I don't want to use it since it degrades application performance. I only want a single connection for these two queries, not for all requests across all services.
So I have two questions:
Can I manage the connection used by the template class?
Does JdbcTemplate perform automatic transaction management? If I manually apply a transaction to my DAO method, does that mean two transactions will be created per query?
Hoping that you guys can shed some light on the topic.
Update - I tried nwinkler's approach and wrapped my service layer in a transaction. I was surprised to see the same bug pop up again after some time. Digging into the Spring source code, I found this:
public <T> T execute(PreparedStatementCreator psc, PreparedStatementCallback<T> action)
        throws DataAccessException {
    // Lots of code
    Connection con = DataSourceUtils.getConnection(getDataSource());
    // Lots of code
}
So contrary to what I thought, there isn't necessarily one database connection per transaction, but one connection for each query executed.
Which brings me back to my problem. I want to execute two queries from the same connection. :-(
Update -
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
      destroy-method="close">
    <property name="driverClassName" value="${db.driver}" />
    <property name="url" value="${db.jdbc.url}" />
    <property name="username" value="${db.user}" />
    <property name="password" value="${db.password}" />
    <property name="maxActive" value="${db.max.active}" />
    <property name="initialSize" value="20" />
</bean>

<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate"
      autowire="byName">
    <property name="dataSource">
        <ref local="dataSource" />
    </property>
</bean>

<bean id="transactionManager"
      class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource" />
</bean>

<tx:advice id="transactionAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <tx:method name="*" propagation="REQUIRES_NEW" rollback-for="java.lang.Exception" timeout="30" />
    </tx:attributes>
</tx:advice>

<aop:config>
    <aop:pointcut id="pointcut" expression="execution(* service.*.*(..))" />
    <aop:pointcut id="pointcut2" expression="execution(* *.ws.*.*(..))" />
    <aop:advisor pointcut-ref="pointcut" advice-ref="transactionAdvice" />
    <aop:advisor pointcut-ref="pointcut2" advice-ref="transactionAdvice" />
</aop:config>

Make sure your DAO is wrapped in a transaction (e.g. by using Spring's Interceptors for Transactions). The same connection will then be used for both calls.
Even better would be to have the transactions one level higher, at the service layer.
Documentation: http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/transaction.html
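For illustration, here is a minimal sketch of the service-layer approach using annotation-driven transactions instead of the XML advice; the class, table, and column names are made up, not from the question. Because both statements run inside one transaction, Spring binds a single connection to the thread and reuses it for both calls:
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Transactional;

public class RecordService {

    private final JdbcTemplate jdbcTemplate;

    public RecordService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // One transaction -> one thread-bound connection for both statements
    @Transactional
    public int insertRecord(String name) {
        jdbcTemplate.update("INSERT INTO record (name) VALUES (?)", name);
        return jdbcTemplate.queryForObject("SELECT last_insert_id()", Integer.class);
    }
}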
Update:
If you take a look at the JavaDoc of the DataSourceUtils.getConnection() method that you referenced in your update, you will see that it obtains the connection associated with the current thread:
Is aware of a corresponding Connection bound to the current thread, for example
when using {@link DataSourceTransactionManager}. Will bind a Connection to the
thread if transaction synchronization is active, e.g. when running within a
{@link org.springframework.transaction.jta.JtaTransactionManager JTA} transaction.
According to this, it should work like you have set it up. I have used this pattern plenty of times, and never ran into any issues like you described...
Please also take a look at this thread, where someone was dealing with similar issues: Spring Jdbc declarative transactions created but not doing anything

This is my approach:
namedJdbcTemplate.execute(savedQuery, map, new PreparedStatementCallback<Object>() {
    @Override
    public Object doInPreparedStatement(PreparedStatement paramPreparedStatement)
            throws SQLException, DataAccessException {
        paramPreparedStatement.execute("SET @userLogin = 'blabla123'");
        paramPreparedStatement.executeUpdate();
        return null;
    }
});
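Another option, closer to the original question, is JdbcTemplate's ConnectionCallback, which hands you a single Connection for as many statements as you need. A rough sketch with a made-up DAO class and table name:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.springframework.jdbc.core.ConnectionCallback;
import org.springframework.jdbc.core.JdbcTemplate;

public class SameConnectionDao {

    private final JdbcTemplate jdbcTemplate;

    public SameConnectionDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // The INSERT and last_insert_id() both run on the one Connection passed
    // to the callback, regardless of how the pool hands out connections elsewhere.
    public int insertAndReturnId(final String name) {
        return jdbcTemplate.execute(new ConnectionCallback<Integer>() {
            @Override
            public Integer doInConnection(Connection con) throws SQLException {
                try (PreparedStatement ps =
                             con.prepareStatement("INSERT INTO record (name) VALUES (?)")) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                }
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT last_insert_id()")) {
                    rs.next();
                    return rs.getInt(1);
                }
            }
        });
    }
}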

Related

DBCP Connection pool

Below is my DBCP Connection Pool configuration,
<property name="maxWait" value="30000"/>
<property name="maxActive" value="100"/>
<property name="minIdle" value="0"/>
<property name="minEvictableIdleTimeMillis" value="60000"/>
<property name="defaultAutoCommit" value="true"/>
<property name="validationQuery" value="select sysdate from dual" />
<property name="testOnBorrow" value="true" />
<property name="tryRecoveryInMinutes" value="0.25" />
However, I am getting the stack trace below in a thread dump file.
"mythread-10444" prio=10 tid=0x00007ff098de9800 nid=0x77c runnable [0x00007ff0fd289000]
java.lang.Thread.State: RUNNABLE
at oracle.jdbc.driver.T2CStatement.t2cParseExecuteDescribe(Native Method)
at oracle.jdbc.driver.T2CStatement.executeForDescribe(T2CStatement.java:703)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
- locked <0x00000000e434a3c0> (a oracle.jdbc.driver.T2CConnection)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at org.apache.commons.dbcp.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:658)
at org.apache.commons.dbcp.PoolableConnectionFactory.validateObject(PoolableConnectionFactory.java:635)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1165)
at org.apache.commons.dbcp.AbandonedObjectPool.borrowObject(AbandonedObjectPool.java:79)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
Initially it works fine, but after some time my application hangs completely. Could you please let me know what the issue is?
The thread dump clearly shows that your thread is still running and your connection is locked while it is busy executing a query.
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
- locked <0x00000000e434a3c0> (a oracle.jdbc.driver.T2CConnection)
My concern would be to find out which query is running that long (before the timeout) and to optimize it. Based on the stack trace, you are doing a DESCRIBE; the Oracle RDBMS holds a lock for that query, and it is still executing while you try to run another query.
Considering a Spring environment, did you properly define a transaction-manager bean in your Spring configuration XML?
<!-- Spring transaction manager -->
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="emf" />
</bean>

<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <tx:method name="*" propagation="REQUIRED" />
    </tx:attributes>
</tx:advice>

<!-- Spring transaction management per transactional-annotation -->
<tx:annotation-driven transaction-manager="transactionManager" />
In my team we had a similar issue a few weeks ago: we had not noticed that this section was wrapped in a comment in our Spring XML. As a result, a bunch of transactions never got committed and sat idling in front of the database. Hope this helps.
I had similar issues with my application using DBCP, and it turned out that the connections were not closed properly. On exceptions the connections were leaked, eventually leading to deadlocks.
I have written a full explanation here
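A minimal sketch of the fix for that leak pattern, assuming plain JDBC access outside of Spring's templates (the DAO class, table name, and query are made up):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

public class LeakFreeDao {

    private final DataSource dataSource;

    public LeakFreeDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // try-with-resources returns the connection to the pool even when the
    // query throws, so an exception can no longer leak a pooled connection.
    public int countRows() throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM my_table");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}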

Spring @Transactional on one service method spanning over two Hibernate transaction managers

I was wondering if it is possible to use two transaction managers in one service method.
Due to a limitation of the client's MySQL configuration, we have two different datasources within one database, i.e. one user/password/URL per schema. That's why I have configured two transaction managers. Now I have a problem when it comes to the service implementation. See the following code:
public class DemoService {
    ...
    @Transactional(value = "t1")
    public void doOne() {
        doTwo();
    }

    @Transactional(value = "t2")
    public void doTwo() {
    }
    ...
}
If I use this code pattern, I always get the exception
org.hibernate.HibernateException: No Session found for current thread
If I run the two methods separately, it works fine.
Did I miss something? Or is there another workaround?
Any advice would be appreciated.
BTW, here is some of my configuration:
<bean id="sessionFactorySso" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="mappingLocations">
        <list>
            <value>classpath*:sso.vo/*.hbm.xml</value>
        </list>
    </property>
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.show_sql">true</prop>
            <prop key="generateDdl">true</prop>
            <prop key="hibernate.dialect">${dialect}</prop>
        </props>
    </property>
    <property name="dataSource" ref="dataSourceSso"/>
</bean>

<bean id="dataSourceSso" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
    <property name="driverClass" value="${driver}"/>
    <property name="jdbcUrl" value="${sso.url}"/>
    <property name="user" value="${sso.username}"/>
    <property name="password" value="${sso.password}"/>
    <!-- these are C3P0 properties -->
    <property name="acquireIncrement" value="2" />
    <property name="minPoolSize" value="1" />
    <property name="maxPoolSize" value="2" />
    <property name="automaticTestTable" value="test_c3p0" />
    <property name="idleConnectionTestPeriod" value="300" />
    <property name="testConnectionOnCheckin" value="true" />
    <property name="testConnectionOnCheckout" value="true" />
    <property name="autoCommitOnClose" value="true" />
    <property name="checkoutTimeout" value="1000" />
    <property name="breakAfterAcquireFailure" value="false" />
    <property name="maxIdleTime" value="0" />
</bean>

<bean id="transactionManagerSso" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactorySso"/>
    <qualifier value="sso" />
</bean>

<tx:annotation-driven transaction-manager="transactionManagerSso" />
Because you want to enlist two data sources in one transaction, you need an XA (global) transaction.
Therefore you need to:
Set up the Spring JTA transaction manager
Your Hibernate properties should use the JTA platform settings
Your data source connections should be XA compliant
You need an application server JTA transaction manager or a stand-alone transaction manager (Bitronix, Atomikos, JOTM)
You will need two session factory configurations, one for each individual data source.
And you won't have two transaction managers (t1 and t2). Instead, the two XA data sources are automatically enlisted in the same global transaction, meaning two XA connections participate in one global transaction. The XA transaction uses the 2PC protocol to commit both resources at commit time.
Check out this Bitronix Hibernate example.
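For illustration only, a minimal Java-config sketch of the Spring side with Bitronix as the stand-alone JTA provider; the configuration class name is made up, and the two XA data sources are assumed to be defined elsewhere (e.g. as Bitronix PoolingDataSources):
import bitronix.tm.BitronixTransactionManager;
import bitronix.tm.TransactionManagerServices;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
@EnableTransactionManagement
public class JtaConfig {

    // One JTA transaction manager replaces the two HibernateTransactionManagers;
    // both XA data sources enlist in the global transaction it starts.
    @Bean
    public JtaTransactionManager transactionManager() {
        BitronixTransactionManager btm = TransactionManagerServices.getTransactionManager();
        // Bitronix's transaction manager implements both UserTransaction and TransactionManager
        return new JtaTransactionManager(btm, btm);
    }
}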
You have a few options:
Inject the bean into itself and use the reference to call doTwo(). This really goes against the whole idea of IoC and AOP so I don't recommend it.
Switch to compile time weaving. Rather than using proxies, Spring (actually the AspectJ compiler) will add the bytecode to start/stop transactions to your class at compile time. There are pros and cons to this approach. See this page for more details.
Use load time weaving. Same as #2 except that your classes are modified as they are loaded rather than at compile time. IMO, Java classloading is complicated enough. I'm sure this works great for some folks but I personally would avoid this.
As Vlad pointed out, you can use JTA and XA.
Start a new transaction against transaction manager 2 within doOne() before calling doTwo(); see the programmatic transaction management docs, and the sketch after this list.
Check out ChainedTransactionManager. It essentially aggregates multiple transaction managers and does a "best effort" with commit/rollback. This is NOT a true two-phase commit like Vlad's solution.
All of these except for Vlad's solution (#4) have the potential to leave the databases in an inconsistent state. You need to use JTA/XA/two-phase commit to ensure consistency in the event that one of the TX managers throws an exception at commit time.
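A minimal sketch of the programmatic option with TransactionTemplate, assuming the second transaction manager can be injected under the qualifier "t2" (the field name and constructor wiring are made up):
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class DemoService {

    private final TransactionTemplate txTemplate2;

    public DemoService(@Qualifier("t2") PlatformTransactionManager transactionManager2) {
        this.txTemplate2 = new TransactionTemplate(transactionManager2);
    }

    @Transactional(value = "t1")
    public void doOne() {
        // ... work against datasource 1 ...

        // Programmatically start a transaction against transaction manager 2
        // instead of relying on the @Transactional proxy for the nested call.
        txTemplate2.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                doTwo();
            }
        });
    }

    void doTwo() {
        // ... work against datasource 2 ...
    }
}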

Is it possible to write transactional unit tests that test transactional code?

I am writing an application using Spring. I want to write my JDBC code to be transactional, which I can achieve by using AOP:
<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <tx:method name="get*" read-only="true" rollback-for="MyCustomException"/>
        <tx:method name="*" rollback-for="MyCustomException" />
    </tx:attributes>
</tx:advice>

<aop:config>
    <aop:pointcut id="pc" expression="execution(* com.me.jdbc.*.*(..))" />
    <aop:advisor advice-ref="txAdvice" pointcut-ref="pc" />
</aop:config>

<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource" />
</bean>

<bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
    <constructor-arg ref="simpleDataSource" />
</bean>

<bean id="simpleDataSource" class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
    <property name="driverClass" value="org.h2.Driver" />
    <property name="username" value="sa" />
    <property name="password" value="" />
    <property name="url" value="jdbc:h2:mem:aas;MODE=Oracle;DB_CLOSE_DELAY=-1;MVCC=true" />
</bean>
So all my methods in com.me.jdbc should be transactional and roll back if a MyCustomException occurs. This works so far.
I now want to write a unit test, but I want my unit test to be transactional, such that once the test is complete the entire transaction rolls back and leaves the database the way it was at the start.
I am able to achieve this by declaring:
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class, TransactionalTestExecutionListener.class })
@Transactional
on my test class. The problem, however, is that I lose the transactions in the application code: after each test the test transaction is rolled back, but if an exception occurs in the application code the insert is not rolled back, and my unit test fails because I expect the table to be empty, but it isn't.
How would I go about achieving these two levels of transactions? Is it possible? I can work around the problem by writing a setup method for my unit tests that clears out all of the tables before each test. I am fine with this, but I thought it would be good to be able to use the rollback approach to achieve the same thing.
As posted in the comments section, the issue is that by making the test method encapsulate a transaction, the rollback will only occur if the test throws the exception. This makes it impossible to verify the behaviour under test before the exception is thrown. I have gone with the approach of just emptying all the tables in the setup method (before each test).
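For the table-clearing workaround, spring-test's JdbcTestUtils (Spring 3.2+) can keep the setup method short; a sketch, where the test class, context file, and table names are all made up:
import javax.sql.DataSource;

import org.junit.Before;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.jdbc.JdbcTestUtils;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical test context
public class MyDaoTest {

    @Autowired
    private DataSource dataSource;

    // Wipes the tables the tests touch before each test, replacing the
    // rollback-based cleanup that conflicted with application-level transactions.
    @Before
    public void cleanDatabase() {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        JdbcTestUtils.deleteFromTables(jdbcTemplate, "orders", "customers");
    }
}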

How to synchronize JOOQ with SpringBatch JdbcTemplate

I'm working with Spring Batch and trying to build a tasklet with two 'ORM' frameworks:
JdbcTemplate for simple queries, and the jOOQ framework for more complex queries.
Here is a part of spring config:
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSourceDbcp_MySQL" />
</bean>

<bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
    <property name="transactionManager" ref="transactionManager" />
</bean>

<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
    <property name="jobRepository" ref="jobRepository" />
</bean>

<job id="importProductsJob" xmlns="http://www.springframework.org/schema/batch">
    <step id="readWrite">
        <tasklet transaction-manager="transactionManager">
            <chunk reader="multiResourceReader"
                   processor="itemProcessor"
                   writer="itemWriter"
                   commit-interval="250" />
        </tasklet>
    </step>
</job>

<bean id="itemWriter" class="com.myexample.writer.JdbcSequenceWriter">
    <property name="dataSourceTransactionManager" ref="transactionManager" />
    <!-- and some more specific properties -->
</bean>
I initialise my ORMs in a setter in com.myexample.writer.JdbcSequenceWriter:
private JdbcTemplate jdbcTemplate;
private JdbcTemplate jdbcTemplate2;
private DSLContext dslContext;

public void setDataSourceTransactionManager(DataSourceTransactionManager trxManager) {
    this.jdbcTemplate = new JdbcTemplate(trxManager.getDataSource());
    this.jdbcTemplate2 = new JdbcTemplate(trxManager.getDataSource());
    dslContext = DSL.using(trxManager.getDataSource(), SQLDialect.MYSQL);
}
Both JdbcTemplates share one session: I can INSERT a record using jdbcTemplate and then find that record with a SELECT using jdbcTemplate2. But if I INSERT a record with either JdbcTemplate and try to find it using dslContext (jOOQ), I get an empty result.
I understand that Spring Batch uses a tricky transaction manager which rolls back all the operations if the writer can't finish all of its work. But how can I synchronize another framework with a single transaction manager?
The generally expected behavior is what you observe between JdbcTemplate and dslContext: each acquires a separate DB connection from the DataSource.
Your first finding, with two JdbcTemplate instances, is special behavior that JdbcTemplate was specifically designed for: it obtains connections through Spring's transaction-aware DataSourceUtils, so within one transaction all instances end up on the same thread-bound connection.
You can make both work (probably) by first acquiring a JDBC Connection from the DataSource and then passing it to both the JdbcTemplate and dslContext. This is not the supported usage pattern and there may still be problems with it.
Once you have acquired a connection, naturally you will be in charge of releasing it as well.
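A rough sketch of that idea, using Spring's SingleConnectionDataSource to share one physical connection between both frameworks; the writer class, table name, and queries are made up, and whether this plays nicely with the Spring Batch chunk transaction is not guaranteed:
import java.sql.Connection;

import javax.sql.DataSource;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.jdbc.datasource.SingleConnectionDataSource;

public class SharedConnectionWriter {

    private final DataSource dataSource;

    public SharedConnectionWriter(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void writeChunk() {
        // Obtain one connection; inside a Spring-managed transaction this is
        // the thread-bound connection rather than a fresh one from the pool.
        Connection con = DataSourceUtils.getConnection(dataSource);
        try {
            // Wrap it so JdbcTemplate and jOOQ both reuse the same connection.
            SingleConnectionDataSource shared = new SingleConnectionDataSource(con, true);
            JdbcTemplate jdbcTemplate = new JdbcTemplate(shared);
            DSLContext dsl = DSL.using(con, SQLDialect.MYSQL);

            jdbcTemplate.update("INSERT INTO my_table (name) VALUES (?)", "example");
            // jOOQ now sees the uncommitted row because it runs on the same connection.
            dsl.fetch("SELECT * FROM my_table WHERE name = ?", "example");
        } finally {
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}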

Spring Data + JPA with multiple datasources but only one set of Repositories

I've been researching this a bunch today and I'm starting to think that what I want to do may not be possible, so I am turning to you, o mighty Stackoverflow, for help.
I'm building a RESTful services platform in Java, with Spring Data 3.1.2 + JPA as my persistence layer (as documented here). My data model objects are all implemented as interfaces that extend the Spring JpaRepository interface. I've got everything wired up and working nicely with a single datasource, as shown in this example (note that the datasource shown is Derby, but that's just for development purposes; in production, we'll be using Oracle):
<jpa:repositories base-package="com.my.cool.package.repository"/>

<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="loadTimeWeaver">
        <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver" />
    </property>
    <property name="packagesToScan" value="com.my.cool.package" />
    <property name="jpaVendorAdapter" ref="jpaVendorAdapter" />
</bean>

<bean name="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="emf" />
</bean>

<tx:annotation-driven transaction-manager="transactionManager" />

<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="org.apache.derby.jdbc.EmbeddedDriver" />
    <property name="url" value="jdbc:derby:derbyDB" />
    <property name="username" value="dev" />
    <property name="password" value="notARealPassword" />
</bean>

<bean id="jpaVendorAdapter" class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
    <property name="databasePlatform" value="org.hibernate.dialect.DerbyTenSevenDialect" />
</bean>
The problem is that this application will need to connect to several (Oracle) databases. The credentials included with each incoming request will contain a field that tells the application which database to go to in order to fulfill that request. The schemas for each database are the same, so there's no need for separate repository interfaces for each database.
After a fair amount of Googling, it's clear that this is a common scenario. To wit:
multiple databases with Spring Data JPA
Spring + Hibernate + JPA + multiple databases
how to setup spring data jpa with multiple datasources
And here's a blog post by a (former?) Spring developer, which isn't actually relevant to the topic at hand, but someone brings it up in the comments, and the author responds with some info:
http://blog.springsource.org/2011/04/26/advanced-spring-data-jpa-specifications-and-querydsl/#comment-198835
The theme that seems to be emerging is that the way to solve this problem is to define multiple EntityManagerFactories, and to wire each one to the appropriate repositories like so:
<jpa:repositories base-package="com.my.cool.package.repository1" entity-manager-factory-ref="firstEntityManagerFactory" />
<jpa:repositories base-package="com.my.cool.package.repository2" entity-manager-factory-ref="secondEntityManagerFactory" />
However, as I've mentioned, I want to reuse my repository across all of the datasources, so this approach doesn't seem like it would work.
I know that there's no way around having logic in my code that takes the relevant piece of information from the request, and uses it to determine which datasource (or EntityManagerFactory) to use. The part I'm struggling with is how to get a handle to that datasource/EntityManagerFactory and "inject" it into my repository objects. Any ideas?
If you're really using the different DataSources in a multi-tenant kind of way (essentially assigning a request to a DataSource and sticking with it for the entire request), you should have a look at AbstractRoutingDataSource. It essentially provides a way to keep a Map of DataSources as well as a callback method that returns the key used to look up the DataSource to use eventually. The implementation of this method usually looks up some thread-bound key and returns that (or maps it onto a DataSource map key in turn). You just have to make sure some web component binds that key to the thread in the first place.
If you have that in place, your Spring configuration just sets up a bean for your subclass of AbstractRoutingDataSource and pipes the map of DataSources into it. Your Spring Data JPA setup stays the default: the EntityManagerFactoryBean refers to the AbstractRoutingDataSource, and you keep a single <jpa:repositories /> element.
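A minimal sketch of such a routing DataSource, assuming a thread-bound tenant key set by some web component earlier in the request; the TenantContext class and names below are made up for illustration:
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Holds the tenant key for the current request; a servlet filter would set it
// from the request credentials and clear it afterwards.
class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<String>();

    static void set(String tenant) { CURRENT.set(tenant); }
    static String get() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    // Called by Spring on every connection lookup; the returned key selects
    // one of the target DataSources configured via setTargetDataSources().
    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.get();
    }
}
The bean definition would then map tenant keys to the individual Oracle DataSources via targetDataSources, and the LocalContainerEntityManagerFactoryBean would point at this routing bean instead of a concrete pool.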
