MySQL connection isn't closed by C3P0 - Java

We are using a MySQL database along with C3P0ComboPooledDataSource for pooling. Below is the configuration:
minPoolSize=10
maxPoolSize=50
maxStatements=50
idleConnectionTestPeriod=3600
acquireIncrement=5
maxIdleTime=600
numHelperThreads=6
unreturnedConnectionTimeout=3600
maxIdleTimeExcessConnections=600
As per the documentation, a connection should be closed automatically after maxIdleTime, and this is what happens when we debug locally. On the production server, however, the MySQL connection count (show processlist) exceeds 50 and connections sit in sleep mode for 2400+ seconds, even though they should be closed after 600 seconds. Because of this, the application cannot obtain a connection through the pooled data source once maxPoolSize is exceeded. Please let me know what can be done to debug this and how we can resolve it.
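For reference, this is roughly how the settings above are applied to our ComboPooledDataSource (the JDBC URL, driver class and credentials below are placeholders, not our real values):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfig {
    public static ComboPooledDataSource createDataSource() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");        // assumed driver class
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        // Settings as listed above:
        ds.setMinPoolSize(10);
        ds.setMaxPoolSize(50);
        ds.setMaxStatements(50);
        ds.setIdleConnectionTestPeriod(3600);
        ds.setAcquireIncrement(5);
        ds.setMaxIdleTime(600);                  // idle connections should be closed after 600 s
        ds.setNumHelperThreads(6);
        ds.setUnreturnedConnectionTimeout(3600);
        ds.setMaxIdleTimeExcessConnections(600);
        return ds;
    }
}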

This is almost certainly a Connection leak. Be sure to reliably close your Connections, ideally using something like try-with-resources.
If you can't tell where your application fails to close Connections, see e.g. C3P0 Spring Hibernate: Pool maxed out. How to debug?
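For example, a minimal sketch of the pattern (the DataSource is assumed to be the one backed by your c3p0 pool, and the table name is made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class LeakFreeQuery {
    // Every JDBC resource is declared in try-with-resources, so the Connection is
    // closed (i.e. returned to the pool) even if an exception is thrown.
    public static int countRows(DataSource ds) throws SQLException {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM some_table"); // made-up table
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}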


JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute"

We are using H2 started as a database server process, listening on the standard TCP/IP port 9092.
Our application is deployed in Tomcat using connection pooling. We run a purge during idle time which ultimately closes all connections to H2. From time to time we observe errors when the application tries to open a connection to H2 again:
SCHEDULERSERVICE schedule: Exception: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
org.h2.jdbc.JdbcSQLNonTransientConnectionException: Database may be already in use: "Waited for database closing longer than 1 minute". Possible solutions: close all other connection(s); use the server mode [90020-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.engine.Engine.openSession(Engine.java:209)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
at org.h2.engine.Engine.createSession(Engine.java:161)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:160)
at java.lang.Thread.run(Thread.java:748)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.engine.SessionRemote.done(SessionRemote.java:607)
at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:143)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:431)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:317)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
at org.h2.Driver.connect(Driver.java:69)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
The problem occurs when the Tomcat connection pool closes all idle (unused) connections and one connection that is still in use is closed afterwards.
The next attempt to open a new connection fails; a retry succeeds after some wait time.
Under which circumstances does this exception happen?
What does the exception mean?
Are there any recommendations to follow to avoid the problem?
It sounds to me like H2 closes the database after the last connection has been closed.
When does the database close occur?
How can database closures be controlled?
Thanks in advance
Thorsten
An embedded database in a web application needs careful handling of its lifecycle.
You can add a javax.servlet.ServletContextListener implementation (marked with the @WebListener annotation or registered in web.xml) and perform an explicit database shutdown in its contextDestroyed() method.
You can force the database shutdown there with connection.createStatement().execute("SHUTDOWN"). If your application needs to write something to the database during unload, it should do so before that command.
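A minimal sketch of such a listener (how the DataSource is obtained is application-specific; the lookup below is a placeholder):

import java.sql.Connection;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import javax.sql.DataSource;

@WebListener
public class H2ShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        DataSource ds = lookupDataSource();
        try (Connection connection = ds.getConnection()) {
            // Write anything that still needs to be persisted before this point,
            // then shut the database down explicitly.
            connection.createStatement().execute("SHUTDOWN");
        } catch (Exception e) {
            // Log and continue; the container is unloading the application anyway.
        }
    }

    private DataSource lookupDataSource() {
        // Placeholder: e.g. a JNDI lookup or a reference held by the application.
        throw new UnsupportedOperationException("wire in your DataSource here");
    }
}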
Without the explicit shutdown, H2 closes the database when all connections are closed, unless some other behavior was configured explicitly (with parameters in the JDBC URL, for example). DB_CLOSE_DELAY, for instance, sets an additional delay; maybe your application uses that setting and therefore H2 doesn't close the database immediately, or the application doesn't close all connections immediately.
Anyway, when you're trying to update the web application on the fly, Tomcat tries to initialize the new version before the old version is unloaded. If H2 is in the classpath of the web application itself, the new version will be unable to connect to the database during the short period of time when the new version is already online but the old version isn't unloaded yet.
If you don't like it, you can run the standalone H2 Server process and use remote connections to it in your web applications.
Another option is to move H2 to the classpath of Tomcat itself and configure the connection pool as resource in the server.xml, in that case it shouldn't be affected by the lifecycle of your applications.
In both these cases you shouldn't use the SHUTDOWN command.
UPDATED
With client-server connections to a remote server, such an exception means that the server decided to close the database because there are no active connections. This operation can't be interrupted or reverted in the middle. An attempt to open a new connection to the same database during this process waits at most 1 minute for it to complete before re-opening the database. This timeout is not configurable.
There are two possible solutions.
The DB_CLOSE_DELAY setting can be used with some large value in seconds. When all connections are closed, the database will stay online for the specified number of seconds. -1 can also be used to set an infinite timeout.
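For example, a sketch with a remote H2 server like the one described above (host, port, database path and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class H2KeepOpen {
    public static Connection open() throws SQLException {
        // DB_CLOSE_DELAY=-1 keeps the database open on the server
        // even while no connections to it exist.
        String url = "jdbc:h2:tcp://localhost:9092/~/mydb;DB_CLOSE_DELAY=-1";
        return DriverManager.getConnection(url, "sa", "");
    }
}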
You can try to speed up the shutdown process, but you have to figure out by yourself what takes so much time. The file compaction procedure is limited to 200 milliseconds by default; it may take somewhat longer, but I think it shouldn't be that long. Maybe you have a lot of temporary objects or uncommitted data, or a very high fragmentation of the database file. It's hard to say what's going wrong without further investigation.

Proper way to keep database connections always alive

I'm working on a backend web server using MyBatis to query a MySQL DB.
The web server only receives requests during limited (but not fixed) time periods, so nearly every day I get a warning log like the one below.
pooled.PooledDataSource: Execution of ping query 'SELECT 1' failed: The last packet successfully received from the server was 51,861,027 milliseconds ago. The last packet sent successfully to the server was 51,861,027 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
autoReconnect=true (Why does autoReconnect=true not seem to work?) looks close to my problem but it's not what I expected.
I'm wondering whether it's possible to make connections NEVER time out by pinging them when they have been idle for some time (maybe 1 hour).
Since MyBatis uses a connection pool, it's hard to force a ping on a specific idle connection and make it never time out.
I've done some searching on Google, but it looks like it's not easy to hack into MyBatis.
My questions are:
Are there any suggestions, references, or alternative libraries for this problem?
Or
Are there reasons I should not try to keep connections always alive? (potential risks? violating best practices? etc.)
Use a connection pool manager like C3P0. You will be able to configure permanent connections. It works just as you described: it "pings" connections with a test query like SELECT 1 to keep them alive when a connection has been idle for some N seconds (that's configurable).
Some guidance here http://gbif.blogspot.com/2011/08/using-c3p0-with-mybatis.html or here http://edwin.baculsoft.com/2012/09/connecting-mybatis-orm-to-c3p0-connection-pooling/. C3P0's configuration options can be googled.
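A minimal sketch of the approach those posts describe: expose c3p0 to MyBatis by extending its UnpooledDataSourceFactory (the class name below is just an example):

import com.mchange.v2.c3p0.ComboPooledDataSource;
import org.apache.ibatis.datasource.unpooled.UnpooledDataSourceFactory;

// Reference this class as the dataSource type in mybatis-config.xml and pass the
// c3p0 properties (idleConnectionTestPeriod, preferredTestQuery, ...) as <property> entries.
public class C3P0DataSourceFactory extends UnpooledDataSourceFactory {
    public C3P0DataSourceFactory() {
        this.dataSource = new ComboPooledDataSource();
    }
}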
Periodically execute a "select" simply to keep the DB connection alive.
When connecting to a MySQL server, the default wait_timeout is 8 hours (28800 seconds), which means that if a connection is idle for more than eight hours, MySQL will automatically disconnect it. This can be checked using show variables like '%timeout%';
But the Java side does not know that the connection has been closed by the DB.
To address this issue you have multiple options:
Increase the value of the wait_timeout property. This affects all applications connecting to the database and might not be the best solution:
set interactive_timeout=432000;
set wait_timeout=432000;
Use a connection pool manager like C3P0. It will ping the DB using preferredTestQuery at regular intervals for you, and then you will have permanent connections.
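A minimal sketch of that c3p0 keep-alive configuration (the URL, driver class and the one-hour test interval are illustrative values, not recommendations):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class KeepAlivePool {
    public static ComboPooledDataSource create() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");        // assumed driver class
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        ds.setPreferredTestQuery("SELECT 1");   // cheap keep-alive query
        ds.setIdleConnectionTestPeriod(3600);   // test idle connections every hour (example value)
        ds.setTestConnectionOnCheckout(true);   // optionally also validate on checkout
        return ds;
    }
}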

how does Hikari recycle connections?

I'm asking you all for an answer I could not find in the HikariCP doc. Given that I set the following pool parameters:
minimumIdle 1
idleTimeout 10 minutes
maxLifeTime 20 minutes
When my app stays idle (with nobody making requests) during the night, I expect Hikari to close each connection 10 minutes after that connection's last request, create a new one (and hold it in the pool) once the last connection has been closed, and then close and re-create this idle connection every 20 minutes.
Did I understand correctly?
The fact is that after an idle period of my app, I see (upon the next request) the following exception:
WARN c.z.hikari.proxy.ConnectionProxy - Connection oracle.jdbc.driver.T4CConnection#3c63f80e <POOL_NAME> marked as broken because of SQLSTATE(08003), ErrorCode(17008).
java.sql.SQLRecoverableException: Closed Connection
The connection has probably been closed by Oracle and cannot be used. I'll try to avoid this using the above config. Another point is that I do not understand why I get a closed connection from the pool. This should never happen, as Hikari does test the connection's state before returning it...
Note: I am not the owner of the DB, I cannot configure it or let it be reconfigured to suit my needs. I also have no access to its config.
Our set-up is Spring 4.1.6, Hibernate 4.3.7 with JPA 2.1 API, Hikari 2.1.0
Your understanding of the pool parameters is correct. It is possible that the Oracle instance has a shorter idle timeout than 10 minutes. One thing you can do is to enable DEBUG level logging for the com.zaxxer.hikari package. You'll get a lot more information about what is going on internally within HikariCP, without being too noisy.
Feel free to post the log as an issue on Github and we'll take a look at it.
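A minimal sketch of that configuration with the parameters from the question (JDBC URL and credentials are placeholders); keeping maxLifetime below any database- or network-imposed idle limit is the usual way to avoid being handed a connection the server has already dropped:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.concurrent.TimeUnit;

public class HikariSetup {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder URL
        config.setUsername("user");                                   // placeholder credentials
        config.setPassword("secret");
        config.setMinimumIdle(1);
        config.setIdleTimeout(TimeUnit.MINUTES.toMillis(10)); // 10 minutes, as in the question
        config.setMaxLifetime(TimeUnit.MINUTES.toMillis(20)); // 20 minutes; keep below any DB-side limit
        return new HikariDataSource(config);
    }
}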
We faced an almost identical issue and resolved it. I reported this issue to the HikariCP team. Here is the link: https://github.com/brettwooldridge/HikariCP/issues/683

Tomcat JDBC Connection Pool - Connections stuck on active

My application uses Tomcat JDBC connection pool, with MySQL DB.
It seems like a process that runs during the night (anti-virus scan?) causes the memory and CPU usage on the machine to increase, and as a result connections from the pool get stuck as active until the connection pool can't respond to any connection request.
I'm getting errors like:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after statement closed.
Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:100; busy:97; idle:0; lastwait:10000]. (That's weird, where are the remaining 3?)
Looking at a chart I generate describing the active connection state, it is flat until at some point it starts increasing, reaches the maximum, and stays there.
My connection pool is configured to remove unclosed connections (setRemoveAbandoned = true).
Do you have any idea how can I solve this issue?
I think this is because your application is not closing connections after use. Please check your code and make sure all connections are closed after use.
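A minimal sketch of the relevant Tomcat JDBC pool settings (URL, driver, credentials and the timeout value are placeholders); logAbandoned makes the pool log where a leaked connection was borrowed:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class TomcatPoolSetup {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/mydb");  // placeholder URL
        p.setDriverClassName("com.mysql.jdbc.Driver"); // assumed driver class
        p.setUsername("user");                         // placeholder credentials
        p.setPassword("secret");
        p.setRemoveAbandoned(true);      // reclaim connections the application forgot to close
        p.setRemoveAbandonedTimeout(60); // seconds a connection may stay checked out (example value)
        p.setLogAbandoned(true);         // log the stack trace of the code that borrowed the leaked connection
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}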

java mysql server getting slower

I manage my connections with a JDBC connection pool (BoneCP) and I always close the Connection, the PreparedStatement and the ResultSet.
But when my program has been running for several days, the MySQL server gets slower and slower (for testing, I let my program insert an entry every second). After 2 days there were several seconds between the entries, which is why I think the MySQL server is getting slower and can't handle the incoming transactions. Am I right?
The MySQL server also uses much more RAM and does not release the resources. Does anyone know how I could find the error causing this behaviour? Thanks in advance!
Use MySQL Workbench to detect open connections. It also gives you a host of options to see the performance of your database server.
Also [I might be mistaken about this part of your question], when you say
I use connection pooling
why do you close the connection? Isn't that the opposite of the purpose of connection pooling?
