I have an application that uses Spring + Hibernate + c3p0 as the connection pool. If I start the application while the database is down, Tomcat hangs for a long time without giving any feedback. Is there some property I can set to avoid this? For example, if it can't get a connection after 30 seconds, throw a connection timeout exception.
By default, it should take about 30 seconds before c3p0 signals a failure if it cannot acquire a Connection. You can control the length of time by modifying either the number of acquisition attempts c3p0 makes against the database or the interval between attempts.
See c3p0.acquireRetryAttempts and c3p0.acquireRetryDelay.
If you set c3p0.acquireRetryAttempts to 1, c3p0 won't retry and connection attempts will fail immediately.
See also Configuring Recovery From Database Outages.
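For example, a minimal sketch of configuring this fail-fast behavior programmatically on c3p0's ComboPooledDataSource (the driver, URL, and exact timings are assumptions, not taken from the question):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class FailFastPool {
    public static ComboPooledDataSource create() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");        // placeholder driver
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        // 6 attempts with 5000 ms between them: roughly 30 seconds before giving up
        ds.setAcquireRetryAttempts(6);
        ds.setAcquireRetryDelay(5000);
        // setAcquireRetryAttempts(1) would make a failed attempt throw immediately
        return ds;
    }
}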
We are using H2 started as a database server process, listening on the standard TCP/IP port 9092.
Our application is deployed in Tomcat and uses connection pooling. We do a purge during idle time, which ultimately results in closing all connections to H2. From time to time we observe errors when the application tries to open a connection to H2 again:
SCHEDULERSERVICE schedule: Exception: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
org.h2.jdbc.JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.engine.Engine.openSession(Engine.java:209)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
at org.h2.engine.Engine.createSession(Engine.java:161)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:160)
at java.lang.Thread.run(Thread.java:748)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.engine.SessionRemote.done(SessionRemote.java:607)
at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:143)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:431)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:317)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
at org.h2.Driver.connect(Driver.java:69)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
The problem occurs when the Tomcat connection pool closes all idle (unused) connections and one connection still in use is closed afterwards.
The next attempt to open a new connection fails; a retry succeeds after some wait time.
Under which circumstances does this exception happen?
What does the exception mean?
Are there any recommendations to follow to avoid the problem?
It sounds to me like H2 closes the database after the last connection has been closed.
When does the database close occur?
How can database closures be controlled?
Thanks in advance,
Thorsten
An embedded database in a web application needs careful handling of its lifecycle.
You can add a javax.servlet.ServletContextListener implementation (marked with the @WebListener annotation or declared in web.xml) and add an explicit database shutdown to its contextDestroyed() method.
You can force database shutdown there with connection.createStatement().execute("SHUTDOWN"). If your application needs to write something to the database during unload, it should do so before that command.
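A minimal sketch of such a listener, assuming an embedded database and a plain DriverManager connection (the URL and credentials are placeholders for your own settings):

import java.sql.Connection;
import java.sql.DriverManager;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class H2ShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // placeholder embedded URL; in a real application take the
        // connection from the application's own DataSource instead
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/mydb", "sa", "")) {
            conn.createStatement().execute("SHUTDOWN");
        } catch (Exception e) {
            // the database may already be closed at this point; nothing more to do
        }
    }
}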
Without an explicit shutdown, H2 closes the database when all connections to it are closed, unless some other behavior was configured explicitly (with parameters in the JDBC URL, for example). For instance, DB_CLOSE_DELAY sets an additional delay; maybe your application uses that setting, so H2 doesn't close the database immediately, or maybe the application doesn't close all connections immediately.
Anyway, when you're trying to update the web application on the fly, Tomcat tries to initialize the new version before the old version is unloaded. If H2 is in the classpath of the web application itself, the new version will be unable to connect to the database during the short period when the new version is already online but the old version isn't unloaded yet.
If you don't like that, you can run a standalone H2 server process and use remote connections to it from your web applications.
Another option is to move H2 to the classpath of Tomcat itself and configure the connection pool as a resource in server.xml; in that case it shouldn't be affected by the lifecycle of your applications.
In both these cases you shouldn't use the SHUTDOWN command.
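If you choose the standalone-server route, a minimal sketch of starting one with H2's bundled tools (the port matches the question; the -tcpAllowOthers flag is an assumption, only needed for remote clients):

import org.h2.tools.Server;

public class H2ServerLauncher {
    public static void main(String[] args) throws Exception {
        // start a standalone H2 TCP server on port 9092
        Server server = Server.createTcpServer("-tcpPort", "9092", "-tcpAllowOthers").start();
        System.out.println("H2 server running at " + server.getURL());
    }
}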
UPDATED
With client-server connections to a remote server, this exception means that the server decided to close the database because there were no active connections. This operation can't be interrupted or reverted in the middle. An attempt to open a new connection to the same database during this process waits at most 1 minute for the process to complete before re-opening the database. This timeout is not configurable.
There are two possible solutions.
The DB_CLOSE_DELAY setting can be used with some large value in seconds. When all connections are closed, the database will stay online for the specified number of seconds. -1 can also be used to set an infinite timeout.
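A minimal sketch of this first option, assuming a server-mode URL like the one in the question (host, database path, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class KeepH2Open {
    public static void main(String[] args) throws Exception {
        // DB_CLOSE_DELAY=-1 keeps the database open until the server process exits;
        // a positive value keeps it open for that many seconds after the last connection closes
        String url = "jdbc:h2:tcp://localhost:9092/~/mydb;DB_CLOSE_DELAY=-1";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            System.out.println("connected to " + conn.getMetaData().getURL());
        }
    }
}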
You can try to speed up the shutdown process, but you will have to figure out for yourself what takes so much time. The file compaction procedure is limited to 200 milliseconds by default; it may take somewhat longer, but I think it shouldn't be that long. Maybe you have a lot of temporary objects or uncommitted data, or very high fragmentation of the database file. It's hard to say what's going wrong without further investigation.
I have created a connection using a connection pool in the init() method of a servlet, and I close / return the connection in the destroy() method. The basic idea is to use a single connection for all time and for all users of the application. Now my question is: if users are idle for some time, say 15 minutes, does Oracle trigger any error related to idle timeout (the idle timeout is set to 10 minutes)? And if yes, how could I prevent that?
It seems that by default there are no timeouts. See this answer for details. In short, there are no timeouts, but you can configure them. You can also configure dead connection detection.
But if I'm not mistaken, you're asking whether you have the opportunity to disable them. In short: you don't need to do anything and everything will work as you wish. But why? You should configure such idle timeouts and dead connection detection if you don't want some user to occasionally hang your application with a single connection.
That was the first point. The other one: you should actually have a connection pool, not a single connection, in order to serve many requests concurrently.
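A minimal sketch of that pattern, borrowing a connection per request from a pool looked up via JNDI (the JNDI name is a placeholder for whatever is configured in your context.xml):

import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class ReportServlet extends HttpServlet {
    private DataSource dataSource; // the pool, shared by all requests

    @Override
    public void init() throws ServletException {
        try {
            // placeholder JNDI name; matches a pool configured in context.xml
            dataSource = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDB");
        } catch (NamingException e) {
            throw new ServletException(e);
        }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // borrow a connection for this request only; try-with-resources returns it to the pool
        try (Connection conn = dataSource.getConnection()) {
            // ... run queries for this request ...
        } catch (SQLException e) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
        }
    }
}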
So, sorry, the answer is the following: there are no such timeouts by default, but I don't understand your intentions :) Hope this helps.
We are using a MySQL database along with c3p0's ComboPooledDataSource for pooling. Below is the configuration:
minPoolSize=10
maxPoolSize=50
maxStatements=50
idleConnectionTestPeriod=3600
acquireIncrement=5
maxIdleTime=600
numHelperThreads=6
unreturnedConnectionTimeout=3600
maxIdleTimeExcessConnections=600
As per the documentation, a connection should be closed automatically after maxIdleTime, and this is what happens when we debug locally. But on the production server we see the MySQL connection count (show processlist) exceed 50, with connections in sleep mode for the last 2400 seconds, when ideally they should be closed after 600 seconds. Because of this, the application is not able to establish connections through the pooled data source, as it exceeds maxPoolSize. Please let me know what can be done to debug and resolve this.
This is almost certainly a Connection leak. Please be sure to reliably close your Connections, ideally using something like try-with-resources, as in the sketch below.
If you don't understand where your application is failing to close Connections, see e.g. C3P0 Spring Hibernate: Pool maxed out. How to debug?
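A minimal sketch of the leak-free pattern (the table name and query are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class UserDao {
    private final DataSource ds;

    public UserDao(DataSource ds) {
        this.ds = ds;
    }

    public int countUsers() throws SQLException {
        // Connection, PreparedStatement, and ResultSet are all closed automatically,
        // even when an exception is thrown, so the connection always returns to the pool
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM users");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}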
I have been getting these exceptions for a number of weeks with no solution...
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
The last packet successfully received from the server was 179,695,604 milliseconds ago.
The last packet sent successfully to the server was 179,695,604 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem
So I have altered my application's Context.xml to set the autoReconnect=true property for my databases' connection pools in Tomcat 7. I have even set wait_timeout to infinity in the context file.
What am I missing? Is this a common problem? There seems to be only a small amount of information about it on the net, and when I follow the guides, the same thing happens the next day after a period of inactivity.
The more I use the server, the less this happens. I think it is expiration of the pooled connections, but how can I stop them expiring if wait_timeout isn't helping? Any ideas on how to diagnose the problem, or which config files to check?
I was facing a similar problem: with autoReconnect=true, the driver throws the CommunicationsException but then creates a new connection to MySQL, so the next request succeeds. This behavior would continue, and the first request after an idle period would always fail.
To add to what Alex has already answered, I added the following parameters to my JDBC connection pool configuration and I don't see the error any more.
testOnBorrow="true" validationQuery="SELECT 1" validationInterval="60000"
The description of testOnBorrow explains it sufficiently. And the good thing is that I did not have to make any changes to my code.
References:
https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
http://www.tomcatexpert.com/blog/2010/04/01/configuring-jdbc-pool-high-concurrency
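For reference, a sketch of the same validation settings applied programmatically to a Tomcat JDBC pool (URL, driver class, and credentials are placeholders, not taken from the question):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class ValidatedPool {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername("user");                        // placeholder credentials
        p.setPassword("pass");
        p.setTestOnBorrow(true);            // validate a connection before handing it out
        p.setValidationQuery("SELECT 1");   // cheap query used for the validation
        p.setValidationInterval(60000);     // validate at most once per 60 seconds
        return new DataSource(p);
    }
}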
The MySQL Connector/J documentation says about autoReconnect:
If enabled the driver will throw an exception for queries issued on a stale or dead connection, which belong to the current transaction, but will attempt reconnect before the next query issued on the connection in a new transaction.
Meaning you will still get exceptions.
Libraries such as JDBI get around this by adding optional test queries on idle, on borrow from the pool, and on return to the pool. Perhaps you could add something to your own JDBC connection wrapper to do the same. Alternatively, follow the documentation for autoReconnect and properly catch the SQLExceptions arising from dead/stale connections.
There are further useful references in this answer on using DBCP and c3p0.
My application uses Tomcat JDBC connection pool, with MySQL DB.
It seems that a process that runs during the night (an antivirus scan?) causes the memory and CPU usage on the machine to increase, and as a result connections from the pool get stuck in the active state until the connection pool can't respond to any connection request.
I'm getting errors like:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after statement closed.
Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:100; busy:97; idle:0; lastwait:10000]. (That's weird, where are the remaining 3?)
Looking at a chart I generate of the active connection state, it is flat until at some point it starts increasing, reaches the maximum, and stays there.
My connection pool is configured to remove unclosed connections (setRemoveAbandoned = true).
Do you have any idea how can I solve this issue?
I think this is because your application is not closing connections after use. Please check your code and make sure all connections are closed after use.
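A sketch of the abandoned-connection settings on a Tomcat JDBC pool that can help find the leak (URL, driver class, and credentials are placeholders; the 60-second timeout is an assumption you should tune):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class LeakHuntingPool {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername("user");                        // placeholder credentials
        p.setPassword("pass");
        p.setMaxActive(100);               // matches the pool size from the error message
        p.setRemoveAbandoned(true);        // reclaim connections held longer than the timeout
        p.setRemoveAbandonedTimeout(60);   // seconds a connection may be held before reclaiming
        p.setLogAbandoned(true);           // log a stack trace showing who borrowed the connection
        return new DataSource(p);
    }
}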