Below is my DBCP connection pool configuration:
<property name="maxWait" value="30000"/>
<property name="maxActive" value="100"/>
<property name="minIdle" value="0"/>
<property name="minEvictableIdleTimeMillis" value="60000"/>
<property name="defaultAutoCommit" value="true"/>
<property name="validationQuery" value="select sysdate from dual" />
<property name="testOnBorrow" value="true" />
<property name="tryRecoveryInMinutes" value="0.25" />
However, I am seeing the stack trace below in a thread dump file.
"mythread-10444" prio=10 tid=0x00007ff098de9800 nid=0x77c runnable [0x00007ff0fd289000]
java.lang.Thread.State: RUNNABLE
at oracle.jdbc.driver.T2CStatement.t2cParseExecuteDescribe(Native Method)
at oracle.jdbc.driver.T2CStatement.executeForDescribe(T2CStatement.java:703)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
- locked <0x00000000e434a3c0> (a oracle.jdbc.driver.T2CConnection)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at org.apache.commons.dbcp.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:658)
at org.apache.commons.dbcp.PoolableConnectionFactory.validateObject(PoolableConnectionFactory.java:635)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1165)
at org.apache.commons.dbcp.AbandonedObjectPool.borrowObject(AbandonedObjectPool.java:79)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
Initially it works fine, but after some time my application hangs completely. Could you please let me know what the issue is?
The thread dump shows that your thread is still running and the connection is locked while it is busy executing a query.
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
- locked <0x00000000e434a3c0> (a oracle.jdbc.driver.T2CConnection)
My concern would be to find out which query is running that long (before the timeout) and to optimize it. Based on the stack trace, the driver is performing a DESCRIBE, the Oracle RDBMS still holds a lock for that statement, and it is still executing while the pool tries to validate the connection by running another query.
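If the validation itself is what blocks (as in the stack trace above, where validateConnection() hangs inside executeQuery()), it can also help to reclaim abandoned connections and to bound how long any single read can take. This is only a sketch against Commons DBCP 1.x; the URL, credentials, timeout values, and the oracle.jdbc.ReadTimeout property (thin driver, milliseconds) are assumptions you should verify for your own driver and pool version.
import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {
    public static BasicDataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.driver.OracleDriver"); // assumption: thin driver
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL");        // placeholder URL
        ds.setUsername("user");
        ds.setPassword("secret");
        ds.setMaxActive(100);
        ds.setMaxWait(30000);                        // same maxWait as in the question
        ds.setValidationQuery("select sysdate from dual");
        ds.setTestOnBorrow(true);
        // Reclaim connections the application "forgot" to close so the pool cannot drain
        ds.setRemoveAbandoned(true);
        ds.setRemoveAbandonedTimeout(300);           // seconds a borrowed connection may sit unused
        ds.setLogAbandoned(true);                    // log the code that borrowed it
        // Socket-level read timeout so a hung validation or query cannot block forever
        ds.addConnectionProperty("oracle.jdbc.ReadTimeout", "60000");
        return ds;
    }
}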
Considering a Spring environment, did you properly define a transaction-manager bean in your Spring configuration XML?
<!-- Spring transaction manager -->
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="emf" />
</bean>
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="*" propagation="REQUIRED" />
</tx:attributes>
</tx:advice>
<!-- Spring transaction management per transactional-annotation -->
<tx:annotation-driven transaction-manager="transactionManager" />
In my team we had a similar issue a few weeks ago, not noticing that this section was wrapped in a comment in our Spring XML. As a result, a bunch of transactions never got committed and sat idling in front of the database. Hope this helps.
I had similar issues with my application using DBCP, and it turned out that the connections were not being closed properly. On exceptions the connections were leaked, eventually leading to deadlocks.
I have written a full explanation here
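The usual culprit is a code path that obtains a connection and only returns it on the happy path. Here is a minimal sketch of the leak-proof pattern, assuming plain JDBC against the pooled DataSource and Java 7+; the class, table, and method names are made up for illustration.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao {
    private final DataSource dataSource;

    public CustomerDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countCustomers() throws SQLException {
        // try-with-resources returns the connection to the pool even when an exception is thrown
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("select count(*) from customers");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}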
Related
I have a Java Spring application connecting to a SQL Server database.
The connection settings are:
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
destroy-method="close">
<property name="driverClass" value="net.sourceforge.jtds.jdbc.Driver" />
<property name="jdbcUrl"
value="jdbc:jtds:sqlserver://${db.host}:1433/TestDB" />
<property name="user" value="${db.user}" />
<property name="password" value="${db.pass}" />
<!-- these are connection pool properties for C3P0 -->
<property name="minPoolSize" value="10" />
<property name="maxPoolSize" value="100" />
<property name="acquireIncrement" value="5"/>
<property name="maxIdleTime" value="30000" />
</bean>
Everything works fine, but sometimes I get the following error:
Could not open JDBC Connection for the transaction; nested exception is java.sql.SQLException: I/O Error: Read timed out
I have searched a lot but can't find any clue. Any idea or help?
I'm using
<bean id="sqlSession" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg index="0" ref="sqlSessionFactory" />
</bean>
in my spring-config xml to get my sqlSession, and in the DAO I use:
@Autowired
SqlSession sqlSession;
and then I execute the queries I want. Is it possible that this error occurs because the connection is not closed?
In my case connections were getting dropped when the nightly DB backup job ran. I'm using jTDS/SQL Server as well. Here is what I did to fix it:
Create/set up a health-check cron job that executes a simple query from within your application, like a short SELECT. Call it every 10 minutes or so and log the result. It will give you some feedback about when and why this is happening.
Reduce the idle time parameter (maxIdleTime) in your configuration so that old connections get automatically discarded.
Keep in mind that if you don't change the maxIdleTime and you keep multiple connections open, some of them may remain in a bad state even if you are using the health-check function. Quoting from the c3p0 documentation:
By default, pools will never expire Connections. If you wish Connections to be expired over time in order to maintain "freshness", set maxIdleTime and/or maxConnectionAge. maxIdleTime defines how many seconds a Connection should be permitted to go unused before being culled from the pool. maxConnectionAge forces the pool to cull any Connections that were acquired from the database more than the set number of seconds in the past.
Another way to set up a health check is the idleConnectionTestPeriod parameter. Also check this answer, which can give you more ideas on how to set it up.
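For reference, here is a minimal sketch of the in-pool keep-alive settings, assuming c3p0's ComboPooledDataSource; the concrete values and the test query are illustrative, not taken from the question.
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolKeepAlive {
    // Apply testing/keep-alive settings to an already configured c3p0 pool
    public static void configureKeepAlive(ComboPooledDataSource ds) {
        ds.setIdleConnectionTestPeriod(300);   // test idle connections every 5 minutes
        ds.setPreferredTestQuery("SELECT 1");  // cheap test query for SQL Server via jTDS
        ds.setTestConnectionOnCheckin(true);   // verify connections when they are returned
        ds.setMaxIdleTime(1800);               // note: c3p0 takes seconds here, not milliseconds
    }
}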
I have an existing app that utilizes Spring 3.0.3, Hibernate 3.6.0 and an Oracle DB.
I've got it set up and running c3p0 but I noticed something strange that I can't really figure out.
This is my Spring setup:
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass" value="${xxgglom.driver}" />
<property name="jdbcUrl" value="${url}" />
<property name="user" value="${username}" />
<property name="password" value="${password}" />
<property name="minPoolSize" value='5' />
<property name="maxPoolSize" value="40" />
<property name="maxIdleTime" value="240" />
<property name="maxIdleTimeExcessConnections" value="180" />
<property name="maxStatements" value="50" />
<property name="testConnectionOnCheckin" value="true" />
<property name="testConnectionOnCheckout" value="false" />
<property name="idleConnectionTestPeriod" value="300" />
I check the database v$session and I see it creates the 5 connections in the pool. I'll start using the app and it will increase the pool size when needed, so I can tell C3P0 is working from checking the logs. The one issue I'm having is that there are inactive connections that are past the maxIdleTime.
I check their alive times and they're way past 240 seconds. I check the database again and they all show inactive, but the logs tell me this:
trace com.mchange.v2.resourcepool.BasicResourcePool@282fafdd [managed: 5, unused: 4, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection@5f9f7637)
I'm not sure what's going on exactly, but after a while these idle connections start piling up and they don't seem to be getting culled from the connection pool. Any suggestions on what to do?
maxIdleTime doesn't guarantee that Connections will be culled at any particular time. As long as each Connection is used at least once every 4 minutes under your config, it won't be culled. If you want to put an unconditional limit on a Connection's lifetime (I don't know why you would), you can use maxConnectionAge.
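To illustrate the difference between the two settings, here is a small sketch assuming ComboPooledDataSource; the values are arbitrary.
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class ConnectionExpiry {
    public static void configureExpiry(ComboPooledDataSource ds) {
        ds.setMaxIdleTime(240);        // cull a connection only after 240 seconds with no use at all
        ds.setMaxConnectionAge(3600);  // cull a connection one hour after acquisition, used or not
    }
}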
I'm using the Quartz scheduler for scheduling a Spring Batch job.
The application starts without any exception but it never fires any job.
Just let me to explain my scenario:
If I run the job (with the scheduler) through a main method using MapJobRepositoryFactoryBean it works perfectly, but after integrating the scheduler with the Spring MVC web app it showed a version update error, so I switched to JobRepositoryFactoryBean, which uses the database for storing job state.
So I added the JobRepositoryFactoryBean bean and the other DB changes, but it never triggers the job.
Below is a snippet of the log:
2015-02-10 19:14:45 INFO context.support.XmlWebApplicationContext - Bean 'jobRegistry' of type [class org.springframework.batch.core.configuration.support.MapJobRegistry] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2015-02-10 19:14:45 INFO jdbc.datasource.DriverManagerDataSource - Loaded JDBC driver: com.mysql.jdbc.Driver
2015-02-10 19:14:45 INFO launch.support.SimpleJobLauncher - No TaskExecutor has been set, defaulting to synchronous executor.
2015-02-10 19:14:46 INFO context.support.DefaultLifecycleProcessor - Starting beans in phase 2147483647
2015-02-10 19:14:46 INFO scheduling.quartz.SchedulerFactoryBean - Starting Quartz Scheduler now
2015-02-10 19:14:46 INFO web.servlet.DispatcherServlet - FrameworkServlet 'mvc-dispatcher': initialization completed in 2155 ms
Here is my job configuration
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
<bean
class="org.springframework.batch.core.configuration.support.JobRegistryBeanPostProcessor">
<property name="jobRegistry" ref="jobRegistry" />
</bean>
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean"
p:dataSource-ref="dataSource" p:transactionManager-ref="transactionManager">
<property name="databaseType" value="reconConfig!{batch.databaseType}" />
<property name="isolationLevelForCreate" value="ISOLATION_DEFAULT" />
</bean>
<bean id="mapJobRepository"
class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean"
lazy-init="true" autowire-candidate="false" />
<bean id="jobOperator"
class="org.springframework.batch.core.launch.support.SimpleJobOperator"
p:jobLauncher-ref="jobLauncher" p:jobExplorer-ref="jobExplorer"
p:jobRepository-ref="jobRepository" p:jobRegistry-ref="jobRegistry" />
<bean id="jobExplorer"
class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean"
p:dataSource-ref="dataSource" />
<bean id="jobRegistry"
class="org.springframework.batch.core.configuration.support.MapJobRegistry" />
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="appDataSource" />
</bean>
<bean class="org.springframework.batch.core.scope.StepScope" />
<bean id="reconConfigPlaceholderProperties"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="ignoreUnresolvablePlaceholders" value="true" />
<property name="location" value="classpath:batchDb.properties" />
<property name="placeholderPrefix" value="reconConfig!{" />
<property name="placeholderSuffix" value="}" />
</bean>
</beans>
It was running successfully, but after some more development it stopped working. I'm unable to figure out what exactly I changed in the configuration that caused this.
Can anyone please suggest what to check when using JobRepositoryFactoryBean, in case I'm missing something or the problem is elsewhere?
If this is your entire configuration for job scheduling, I believe you are missing the Cron scheduling part entirely...
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="triggers">
<bean id="cronTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="jobDetail" />
<property name="cronExpression" value="*/10 * * * * ?" />
</bean>
</property>
</bean>
Please read through the Spring documentation and the Quartz scheduling section here.
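For completeness, the trigger also needs a jobDetail to fire. A minimal sketch of a Quartz job class that launches the Spring Batch job follows; the class name, parameter key, and wiring are assumptions for illustration, not taken from your configuration.
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.scheduling.quartz.QuartzJobBean;

public class LaunchBatchJob extends QuartzJobBean {

    private JobLauncher jobLauncher;
    private Job job;

    // QuartzJobBean populates these from the job data map / scheduler context
    public void setJobLauncher(JobLauncher jobLauncher) { this.jobLauncher = jobLauncher; }
    public void setJob(Job job) { this.job = job; }

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        try {
            // A unique parameter so Spring Batch creates a new job instance on every firing
            JobParameters params = new JobParametersBuilder()
                    .addLong("run.ts", System.currentTimeMillis())
                    .toJobParameters();
            jobLauncher.run(job, params);
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }
}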
We had a similar or perhaps the same problem. Look into the DB repository. The repository is not protected against different instances of the application server (e.g. a testing and a development environment). That means when two or more applications are connected to the same DB, you can have a problem: the applications start competing for triggers and jobs, and jobs that are unregistered in one application are marked as ERROR and blocked, and vice versa.
Two tables are important in this case.
Select from XXX_SCHEDULER_STATE. Is there more than one row? Then there can be a conflict. (Can you not identify your app server there? If so, you are connected to a different DB than you think; that is a frequent but trivial problem.)
Select XXX_TRIGGERS.TRIGGER_STATE. Is there an ERROR? If yes, try to change it from any SQL tool:
update XXX_TRIGGERS set TRIGGER_STATE = 'WAITING' where TRIGGER_STATE = 'ERROR';
Restart the application server. If you are lucky, the failed trigger fires and works after the restart. If not, try to shut down the concurrent app server or change the repository.
I'm using WebLogic 10.3.3 with Oracle 11g and face a weird problem with Spring Batch as soon as I switch from Spring's ResourcelessTransactionManager (which is mainly for testing) to the production DataSourceTransactionManager. First I used WebLogic's default driver oracle.jdbc.xa.client.OracleXADataSource, but this one fails because Spring can't set the isolation level; this is also documented here.
I'm fine with that since I don't need global transactions anyway, so I switched to oracle.jdbc.driver.OracleDriver. Now I'm getting the error message
ORA-01453: SET TRANSACTION must be first statement of transaction
I can't find a lot of information on this; there was a bug, but it should have been fixed in Oracle 7 a long time ago. It looks like a transaction is started before (?) the actual job gets added to the JobRepository and is not closed properly, or something like that.
I was able to solve this by setting the isolation level for all transactions to READ_COMMITTED. By default, Spring Batch sets it to SERIALIZABLE, which is very strict (but perfectly fine). That didn't work on my machine, although Oracle should support it:
http://www.oracle.com/technetwork/issue-archive/2005/05-nov/o65asktom-082389.html
Here's my code - first for the configuration:
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name="transactionManager" ref="transactionManager" />
<property name="dataSource" ref="dataSource" />
<property name="isolationLevelForCreate" value="ISOLATION_READ_COMMITTED" />
</bean>
...and this is for the job itself (simplified):
public class MyFancyBatchJob {
@Transactional(isolation = Isolation.READ_COMMITTED)
public void addJob() throws Exception {
JobParameters params = new JobParametersBuilder().toJobParameters();
Job job = jobRegistry.getJob("myFancyJob");
JobExecution execution = jobLauncher.run(job, params);
}
}
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" >
<property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"></property>
<property name="url" value="jdbc:oracle:thin:<username>/<password>#<host>:1521:<sid>" />
</bean>
<jdbc:initialize-database data-source="dataSource">
<jdbc:script location="org/springframework/batch/core/schema-drop-oracle10g.sql" />
<jdbc:script location="org/springframework/batch/core/schema-oracle10g.sql" />
</jdbc:initialize-database>
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="transactionManager" ref="transactionManager" />
<property name="databaseType" value="oracle" />
<property name="tablePrefix" value="BATCH_"/>
<property name="isolationLevelForCreate" value="ISOLATION_DEFAULT"/>
</bean>
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
(The above configuration is for Spring Batch with Oracle 10g and 11g.)
I have the following code in a Spring JdbcTemplate based DAO:
getJdbcTemplate().update("Record Insert Query...");
int recordId = getJdbcTemplate().queryForInt("SELECT last_insert_id()");
The problem is that sometimes my update and queryForInt queries get executed using different connections from the connection pool.
This results in an incorrect recordId being returned, since MySQL's last_insert_id() is supposed to be called from the same connection that issued the insert query.
I have considered the SingleConnectionDataSource but do not want to use it since it degrades the application performance. I only want single connection for these two queries. Not for all the requests for all the services.
So I have two questions:
Can I manage the connection used by the template class?
Does JdbcTemplate perform automatic transaction management? If I manually apply a transaction to my DAO method, does that mean two transactions will be created per query?
Hoping that you guys can shed some light on the topic.
Update - I tried nwinkler's approach and wrapped my service layer in a transaction. I was surprised to see the same bug pop up again after some time. Digging into the Spring source code, I found this:
public <T> T execute(PreparedStatementCreator psc, PreparedStatementCallback<T> action)
throws DataAccessException {
//Lots of code
Connection con = DataSourceUtils.getConnection(getDataSource());
//Lots of code
}
So contrary to what I thought, there isn't necessarily one database connection per transaction, but one connection for each query executed.
Which brings me back to my problem. I want to execute two queries from the same connection. :-(
Update -
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="${db.driver}" />
<property name="url" value="${db.jdbc.url}" />
<property name="username" value="${db.user}" />
<property name="password" value="${db.password}" />
<property name="maxActive" value="${db.max.active}" />
<property name="initialSize" value="20" />
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate"
autowire="byName">
<property name="dataSource">
<ref local="dataSource" />
</property>
</bean>
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<tx:advice id="transactionAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="*" propagation="REQUIRES_NEW" rollback-for="java.lang.Exception" timeout="30" />
</tx:attributes>
</tx:advice>
<aop:config>
<aop:pointcut id="pointcut" expression="execution(* service.*.*(..))" />
<aop:pointcut id="pointcut2" expression="execution(* *.ws.*.*(..))" />
<aop:advisor pointcut-ref="pointcut" advice-ref="transactionAdvice" />
<aop:advisor pointcut-ref="pointcut2" advice-ref="transactionAdvice" />
</aop:config>
Make sure your DAO is wrapped in a transaction (e.g. by using Spring's Interceptors for Transactions). The same connection will then be used for both calls.
Even better would be to have the transactions one level higher, at the service layer.
Documentation: http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/transaction.html
Update:
If you take a look at the JavaDoc of the DataSourceUtils.getConnection() method that you referenced in your update, you will see that it obtains the connection associated with the current thread:
Is aware of a corresponding Connection bound to the current thread, for example
when using {@link DataSourceTransactionManager}. Will bind a Connection to the
thread if transaction synchronization is active (e.g. when running within a
{@link org.springframework.transaction.jta.JtaTransactionManager JTA} transaction).
According to this, it should work like you have set it up. I have used this pattern plenty of times, and never ran into any issues like you described...
Please also take a look at this thread, someone was dealing with similar issues there: Spring Jdbc declarative transactions created but not doing anything
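To make the pattern concrete, here is a minimal sketch of what the answer describes: run both statements inside one transaction so that DataSourceUtils hands the same thread-bound connection to both calls. The service and table names are made up for illustration, and queryForInt is used to match the question's Spring 3-era API.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class RecordService {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // One connection is bound to the thread for the whole method, so both statements share it
    @Transactional
    public int insertAndReturnId() {
        jdbcTemplate.update("INSERT INTO my_table (name) VALUES (?)", "example");
        // Same thread-bound connection as the insert above, so last_insert_id() is reliable here
        return jdbcTemplate.queryForInt("SELECT last_insert_id()");
    }
}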
This is my approach to doing this:
namedJdbcTemplate.execute(savedQuery, map, new PreparedStatementCallback<Object>() {
@Override
public Object doInPreparedStatement(PreparedStatement paramPreparedStatement)
throws SQLException, DataAccessException {
paramPreparedStatement.execute("SET #userLogin = 'blabla123'");
paramPreparedStatement.executeUpdate();
return null;
}
});
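Another option for the original last_insert_id() problem is to let Spring return the generated key from the same statement, so no second query (and no second connection) is needed at all. This is only a sketch; the table and column names are placeholders.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.PreparedStatementCreator;
import org.springframework.jdbc.support.GeneratedKeyHolder;
import org.springframework.jdbc.support.KeyHolder;

public class InsertWithGeneratedKey {

    // Returns the auto-generated key from the insert itself, so no follow-up query is needed
    public int insertAndGetId(JdbcTemplate jdbcTemplate) {
        KeyHolder keyHolder = new GeneratedKeyHolder();
        jdbcTemplate.update(new PreparedStatementCreator() {
            @Override
            public PreparedStatement createPreparedStatement(Connection con) throws SQLException {
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO my_table (name) VALUES (?)", // placeholder table/column
                        Statement.RETURN_GENERATED_KEYS);
                ps.setString(1, "example");
                return ps;
            }
        }, keyHolder);
        return keyHolder.getKey().intValue();
    }
}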