My application uses the Tomcat JDBC connection pool with a MySQL database.
It seems that a process running during the night (an anti-virus scan?) causes memory and CPU usage on the machine to increase, and as a result connections from the pool get stuck as active until the pool can no longer respond to any connection request.
I'm getting errors like:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after statement closed.
Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:100; busy:97; idle:0; lastwait:10000]. (That's weird, where are the remaining 3?)
Looking at a chart I generate of the active connection count, the line is flat until at some point it starts increasing, reaches the maximum, and stays there.
My connection pool is configured to remove abandoned (unclosed) connections (setRemoveAbandoned = true).
Do you have any idea how I can solve this issue?
I think this is because your application is not closing connections after use. Please check your code and make sure all connections are closed after use.
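For what it's worth, here is a minimal sketch of a Tomcat JDBC pool configuration that reclaims and, more importantly, logs abandoned connections. The URL, driver, and timeout values are placeholders, not your actual settings:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties p = new PoolProperties();
p.setUrl("jdbc:mysql://localhost:3306/mydb");   // placeholder URL
p.setDriverClassName("com.mysql.jdbc.Driver");
p.setMaxActive(100);
p.setRemoveAbandoned(true);        // reclaim connections checked out for too long
p.setRemoveAbandonedTimeout(60);   // seconds a connection may stay checked out
p.setLogAbandoned(true);           // log where each abandoned connection was borrowed
p.setTestOnBorrow(true);           // validate connections before handing them out
p.setValidationQuery("SELECT 1");
DataSource ds = new DataSource(p);

With logAbandoned enabled, the pool prints the stack trace of the code that borrowed each leaked connection, which usually points straight at the spot that never closed it.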
Related
We are using a MySQL database along with C3P0's ComboPooledDataSource for pooling. Below is the configuration:
minPoolSize=10
maxPoolSize=50
maxStatements=50
idleConnectionTestPeriod=3600
acquireIncrement=5
maxIdleTime=600
numHelperThreads=6
unreturnedConnectionTimeout=3600
maxIdleTimeExcessConnections=600
As per the documentation, after maxIdleTime the connection should close automatically, and that is what happens when we debug locally. But on the production server we see that the MySQL connection count (show processlist) exceeds 50 and the connections have been in sleep mode for the last 2400 seconds, when ideally they should be closed after 600 seconds. Because of this the application is not able to establish a connection through the pooled data source, as it exceeds maxPoolSize. Please let me know what can be done to debug this and how we can resolve it.
This is almost certainly a Connection leak. Please be sure to reliably close your Connections, ideally using something like a try-with-resources statement.
If you don't understand where your application is failing to close Connections reliably, see e.g. C3P0 Spring Hibernate: Pool maxed out. How to debug?
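A minimal sketch of what that looks like; the DataSource, SQL, and column names here are placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

void printUserIds(DataSource dataSource) throws SQLException {
    try (Connection con = dataSource.getConnection();
         PreparedStatement ps = con.prepareStatement("SELECT id FROM users WHERE name = ?")) {
        ps.setString(1, "alice");
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getLong("id"));
            }
        }
    } // Connection, PreparedStatement and ResultSet are all closed here, even on an exception
}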
I'm working on a backend web server that uses MyBatis to query a MySQL database.
The web server only receives requests during limited (but not fixed) time periods, so nearly every day I get a warning log like the one below.
pooled.PooledDataSource: Execution of ping query 'SELECT 1' failed: The last packet successfully received from the server was 51,861,027 milliseconds ago. The last packet sent successfully to the server was 51,861,027 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
autoReconnect=true (Why does autoReconnect=true not seem to work?) looks close to my problem but it's not what I expected.
I'm wondering whether it is possible to make connections NEVER time out by pinging them once they have been idle for some time (maybe 1 hour).
Since MyBatis uses a connection pool, it's hard to force a ping on a specific idle connection so that it never times out.
I've done some searching on Google, but it looks like it's not easy to hack into MyBatis.
My questions are:
Are there any suggestions, references, or alternative libraries for this problem?
Or
Are there reasons why I should not try to keep connections alive forever? (potential risks? violating best practices? etc.)
Use a connection pool manager like C3P0. You will be able to configure permanent connections. It works just like you described: it "pings" connections with a sample query like SELECT 1 to keep them alive whenever a connection has been idle for N seconds (that's configurable).
Some guidance is available here http://gbif.blogspot.com/2011/08/using-c3p0-with-mybatis.html or here http://edwin.baculsoft.com/2012/09/connecting-mybatis-orm-to-c3p0-connection-pooling/. The configuration options of C3P0 are easy to look up.
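As an illustration, a minimal C3P0 sketch of the idle-connection "ping" described above; the URL and credentials are placeholders, and the intervals should be tuned to your server's wait_timeout:

import com.mchange.v2.c3p0.ComboPooledDataSource;

ComboPooledDataSource ds = new ComboPooledDataSource();
ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
ds.setUser("app");
ds.setPassword("secret");
ds.setIdleConnectionTestPeriod(300);  // test idle connections every 5 minutes
ds.setPreferredTestQuery("SELECT 1"); // the "ping" query
ds.setTestConnectionOnCheckout(true); // also validate right before handing a connection out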
Periodically execute a simple SELECT just to keep the DB connection alive.
When connecting to a MySQL server, the default wait_timeout is 8 hours (28,800 seconds), which means that if a connection is idle for more than eight hours, MySQL will automatically close it. This can be checked using show variables like '%timeout%';
But the Java side does not know that the connection has been closed by the DB.
To address this issue you have multiple options:
Increase the value of the wait_timeout property. This will affect all applications connecting to the database and might not be the best solution:
set interactive_timeout=432000;
set wait_timeout=432000;
Use a connection pool manager like C3P0. It will ping the DB using preferredTestQuery at regular intervals for you, and you will then have permanent connections.
I have a web application which needs to connect to the database every now and then, and I am sure that I am closing every connection that I open.
The issue is that I have a lot of inactive sessions in the Oracle DB for the same user. I have tried pooling and I have tried closing all sessions, but nothing seems to work. I have searched for possible solutions on Stack Overflow but unfortunately did not find an answer to my problem. The closest I got is Inactive session in Oracle by JDBC, where the person asking the question answered it themselves by saying that they modified the code.
Any answer or recommendation would be appreciated.
I have tried pooling...
Without a connection pool, your application has direct control over opening and closing database connections. This is not the typical setup, as acquiring a physical connection is a costly operation.
A connection pool optimizes this by keeping a certain number of connections open and handing them to the application on request.
If a connection is closed by the application, it is not closed in the DB; it is returned to the pool as idle. Among other parameters, you can control how many idle connections should be kept in the pool. E.g. for DBCP, check the parameters minIdle and maxIdle. Except for some special cases with invalid connections, the number of idle connections you see (those connections are INACTIVE) should stay within this limit.
If you systematically see a higher number (or even an increasing number) of INACTIVE sessions, the most probable explanation is that the application gets a connection from the pool and "forgets" to return it; those sessions are INACTIVE as well.
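For reference, a minimal DBCP 2 sketch showing the idle-connection limits and the abandoned-connection tracking mentioned above; the URL, credentials, and limits are placeholders:

import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setUrl("jdbc:oracle:thin:@//localhost:1521/XE"); // placeholder URL
ds.setUsername("app");
ds.setPassword("secret");
ds.setMinIdle(2);                     // idle connections kept ready in the pool
ds.setMaxIdle(8);                     // upper bound on idle connections
ds.setMaxTotal(20);                   // upper bound on all connections
ds.setRemoveAbandonedOnBorrow(true);  // reclaim connections the application "forgot"
ds.setRemoveAbandonedTimeout(60);     // seconds a connection may stay checked out
ds.setLogAbandoned(true);             // log the stack trace of the leaking code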
I have a project in which I use HikariCP for JDBC connection pooling, and HikariCP works great for my needs. It also logs the pool stats like below.
2014-12-03 10:16:08 DEBUG HikariPool:559 - Before cleanup pool stats loginPool (total=8, inUse=0, avail=8, waiting=0)
2014-12-03 10:16:08 DEBUG HikariPool:559 - After cleanup pool stats loginPool (total=7, inUse=1, avail=7, waiting=0)
Just for experimental purposes I closed all the MySQL connections for the configured database using MySQL Workbench. But I still see HikariCP logging the stats as before, even though there are no actual connections to the database. When there was a request for a connection, it immediately re-established the connections (the initial 8), so everything works fine.
So, my question is: how are these connections managed or implemented? I think the reason HikariCP logs stats as if there were connections is that it holds valid in-memory references to connection objects which no longer exist on the database side.
Is my understanding correct?
When you close the connections using MySQL Workbench, you are closing them on the server end. On the JDBC (client) side, the previously established connections will remain in existence until the client code attempts to use them. At that point, they will be found to be "broken"; i.e. the client will get exceptions when it tries to use them.
The client-side JDBC Connection objects only get closed or recycled when they are returned to the connection pool by your Java application code.
The connection pool created 8 connections at startup. You say you disconnected them using the workbench. Most connection pools won't know the connection is disconnected until it gets used.
Your assumption is correct. You manually killed the connections, but the pool still holds handles to 8 sockets which it assumes are connected. Given time, your connection pool may have checked the connections for validity and attempted to reconnect them. I can't speak for HikariCP specifically, but this is what modern connection pools do.
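For context, a minimal HikariCP configuration sketch (the URL and credentials are placeholders) showing the settings that govern how long the pool keeps connections around; HikariCP also checks a connection's validity before handing it out, which is when a manually killed connection would normally be noticed:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
config.setUsername("app");
config.setPassword("secret");
config.setMaximumPoolSize(8);
config.setIdleTimeout(600_000);      // retire connections idle for more than 10 minutes
config.setMaxLifetime(1_800_000);    // retire any connection after 30 minutes
config.setConnectionTimeout(10_000); // wait at most 10 seconds for a free connection
HikariDataSource ds = new HikariDataSource(config);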
Does anyone have any information comparing performance characteristics of different ConnectionPool implementations?
Background: I have an application that runs DB updates in background threads against a MySQL instance on the same box. Using the DataSource com.mchange.v2.c3p0.ComboPooledDataSource would give us occasional SocketExceptions:
com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.net.SocketException
MESSAGE: Broken pipe
STACKTRACE:
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
Increasing the mysql connection timeout increased the frequency of these errors.
These errors disappeared after switching to a different connection pool (com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource); however, the performance may be worse, and the memory profile is noticeably worse (we get fewer, and much larger, GCs than with the c3p0 pool).
Whatever connection pool you use, you need to assume that the connection could be randomly closed at any moment and make your application deal with it.
In the case of a long-standing DB connection on a "trusted" network, what often happens is that the OS applies a time limit to how long connections can stay open, or periodically runs some "connection cleanup" code. But the cause doesn't matter too much -- it's just part of networking life that you should assume the connection can be "pulled from under your feet", and deal with this scenario accordingly.
So given that, I really can't see the point of a connection pool framework that doesn't allow you to handle this case programmatically.
(Incidentally, this is another of those cases where I'm glad I just write my own connection pool code; no black boxes mysteriously eating memory, and no fishing around to find the "magic parameter"...)
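As an example of handling it programmatically, here is a rough sketch (the method names and retry count are my own, not from any particular pool) of treating a checked-out connection as potentially dead and retrying with a fresh one:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

class RetryingUpdate {
    // Hypothetical unit of work; replace with whatever the application actually does.
    static void runUpdate(Connection con) throws SQLException { /* ... */ }

    static void updateWithRetry(DataSource dataSource) throws SQLException {
        int attempts = 0;
        while (true) {
            try (Connection con = dataSource.getConnection()) {
                if (!con.isValid(5)) {          // JDBC 4 validity check, 5-second timeout
                    throw new SQLException("connection no longer valid");
                }
                runUpdate(con);
                return;
            } catch (SQLException e) {
                if (++attempts >= 3) throw e;   // give up after a few broken connections
                // otherwise loop: the pool hands out (or creates) another connection
            }
        }
    }
}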
You may want to have a look at some benchmark numbers up at http://jolbox.com - the site hosting BoneCP, a connection pool that is faster than both C3P0 and DBCP.
I had this error pop up with MySQL & c3p0 as well. I tried various things and eventually made it go away. I can't remember exactly, but what might have solved it was the autoReconnect flag, à la
url="jdbc:mysql://localhost:3306/database?autoReconnect=true"
Have you tried Apache DBCP? I don't know about c3p0, but DBCP can handle idle connections in different ways:
It can remove idle connections from the pool
It can run a query on idle connections after a certain period of inactivity
It can also test whether a connection is valid just before giving it to the application, by running a query on it; if it gets an exception, it discards that connection and tries another one (or creates a new one if it can). Way more robust; see the configuration sketch after this list.
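A minimal DBCP 2 sketch of those three behaviours; the URL and the interval values are illustrative, not recommendations:

import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setUrl("jdbc:mysql://localhost:3306/mydb");   // placeholder URL
ds.setValidationQuery("SELECT 1");
ds.setTimeBetweenEvictionRunsMillis(300_000);    // run the idle-connection evictor every 5 minutes
ds.setMinEvictableIdleTimeMillis(600_000);       // remove connections idle for more than 10 minutes
ds.setTestWhileIdle(true);                       // run the validation query on idle connections
ds.setTestOnBorrow(true);                        // validate just before handing a connection to the application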
Broken pipe
That roughly means that the other side has aborted/timed out/closed the connection. Aren't you keeping connections open for too long? Ensure that your code properly closes all JDBC resources (Connection, Statement and ResultSet) in the finally block.
Increasing the mysql connection timeout increased the frequency of these errors.
Take care that this timeout doesn't exceed the DB's own timeout setting.