We have a Java Spring MVC 2.5 application using Tomcat 6 and MySQL 5.0. We have a bizarre scenario where, for whatever reason, the number of connections used in the c3p0 connection pool starts spiraling out of control and eventually brings Tomcat down.
We monitor the c3p0 connection pool through JMX, and most of the time connections are barely used. When this spiraling situation happens, our Tomcat connection pool maxes out and Apache starts queuing threads.
In the spiraling scenario, the database has low load and is not reporting any errors or any other obvious problem.
We are starting to run out of ideas on how to detect this issue. I don't think a Tomcat stack dump would do me any good once the situation is already spiraling out of control, and I am not sure how I could catch it before it does.
We also use Terracotta, which, judging by the logs, does not appear to be doing anything odd.
Any ideas would be greatly appreciated!
Cheers!
Somewhere you're leaking a connection. This can happen when you explicitly retrieve Hibernate sessions from the session factory rather than getting the connection associated with an in-process transaction (can't remember the exact method names).
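As an illustration, here is a sketch of the leak pattern being described, assuming Hibernate's standard SessionFactory API (openSession() versus getCurrentSession()); the method body is a placeholder:

import org.hibernate.Session;
import org.hibernate.SessionFactory;

void doWork(SessionFactory sessionFactory) {
    // Leak-prone pattern: openSession() pulls a fresh Connection from the pool;
    // if close() is skipped (e.g. by an exception), that Connection never returns.
    Session session = sessionFactory.openSession();
    try {
        // ... queries and updates ...
    } finally {
        session.close(); // without this, the pooled Connection is leaked
    }

    // Usually safer: getCurrentSession() binds the Session to the active
    // transaction, and it is closed automatically on commit/rollback.
    Session bound = sessionFactory.getCurrentSession();
}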
C3P0 will let you debug this situation with two configuration options (the following is copied from the docs, which are part of the download package):
unreturnedConnectionTimeout defines a limit (in seconds) to how long a Connection may remain checked out. If set to a nonzero value, unreturned, checked-out Connections that exceed this limit will be summarily destroyed and then replaced in the pool.
If you set debugUnreturnedConnectionStackTraces to true, a stack trace will be captured each time a Connection is checked out. Whenever an unreturned Connection times out, that stack trace will be printed, revealing where a Connection was checked out that was not checked in promptly. debugUnreturnedConnectionStackTraces is intended to be used only for debugging, as capturing a stack trace can slow down Connection check-out.
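For example, a minimal sketch of setting both options programmatically on a ComboPooledDataSource (the JDBC URL, credentials and 5-minute timeout are placeholders, not recommendations):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public static ComboPooledDataSource leakDebugPool() throws Exception {
    ComboPooledDataSource ds = new ComboPooledDataSource();
    ds.setDriverClass("com.mysql.jdbc.Driver");        // MySQL 5.0-era driver
    ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
    ds.setUser("appuser");                             // placeholder credentials
    ds.setPassword("secret");

    // Destroy any Connection that stays checked out longer than 5 minutes...
    ds.setUnreturnedConnectionTimeout(300);
    // ...and print the stack trace of the checkout that leaked it (debug only).
    ds.setDebugUnreturnedConnectionStackTraces(true);
    return ds;
}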
Is there any way to check the number of connections used by a data source?
I am getting a ConnectionWaitTimeoutException in my web application, so to work out which function is using too many connections or not releasing them, I want to check the number of connections used by the data source at any point in time.
Depending on the implementation of your DataSource, you probably have access to some configurable property. For example, BasicDataSource.removeAbandoned from Apache Commons DBCP allows a connection to be reclaimed back into the pool if it has been checked out for too long.
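A minimal sketch of those settings on a DBCP 1.x BasicDataSource (the 60-second timeout is an arbitrary example value):

import org.apache.commons.dbcp.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
// Reclaim any connection that has been checked out for over 60 seconds...
ds.setRemoveAbandoned(true);
ds.setRemoveAbandonedTimeout(60);
// ...and log a stack trace of the code that abandoned it.
ds.setLogAbandoned(true);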
But of course you should go back to your code and make sure any manually borrowed Connections are properly closed in a try-catch-finally block (or with the Java 7 try-with-resources syntax).
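For instance, a sketch using try-with-resources; the dataSource parameter and the query are placeholders for whatever your application actually uses:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

static void printUserIds(DataSource dataSource) throws SQLException {
    // The Connection, PreparedStatement and ResultSet are closed automatically,
    // in reverse order, even if an exception is thrown mid-block.
    try (Connection con = dataSource.getConnection();
         PreparedStatement ps = con.prepareStatement("SELECT id FROM users WHERE name = ?")) {
        ps.setString(1, "alice");
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getLong("id"));
            }
        }
    }
}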
In the WebSphere Application Server you have at least two features that may help you:
1) PMI (Performance monitoring infrastructure)
You can enable various counters on your datasource and monitor them. See the JDBC counters for more details. The most useful for me are FreePoolSize and PercentUsed to see the pool utilization, WaitingThreadCount and WaitTime to see whether any threads are waiting and for how long, and JDBCTime and UseTime to compare the average query time with the time the connection is being held.
2) Connection leak detection
WebSphere contains trace settings that will allow you to dump the connection pool when a ConnectionWaitTimeoutException appears. See the following PDF for a detailed description of how to configure and use it - Understanding and Resolving ConnectionWaitTimeoutExceptions
This is shameful, but we know there are some ActiveMQ connection leaks. The code is old and has many twists and turns, which makes finding the leaky flow very hard.
We fire many short-lived jobs from a batch machine. We know that not all paths close the ActiveMQ connection properly. When a connection is not closed but the job terminates, ActiveMQ holds that connection for some amount of time. Ultimately, some critical applications get impacted because ActiveMQ's maximum connection limit is exceeded.
Is it possible to set a connection name or other identifying information so that an improperly closed connection will appear in ActiveMQ's log files? That would tell us which log files to examine. The sheer number of jobs makes it very hard to find out which exact job caused the problem. However, once we know the job, we can deduce enough information from the logs to find and fix the connection leaks.
Right now all we see is the IP address the connection originated from, and since all the jobs originate from the same machine, that is not enough to find out which job caused the problem.
If you add jms.clientID=something to your connection URL and turn on DEBUG logging in your conf/log4j.properties, you will get the client ID in your debug log on AMQ. You could then write something to analyze your log, find the AMQ ID for a given clientID, and match the logs that way.
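For example, a sketch of baking a per-job identifier into the broker URL on the client side (the host, port and ID are placeholders; each job should use its own ID, since client IDs must be unique per active connection):

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

void runJob() throws JMSException {
    // jms.clientID tags the connection so it shows up identifiably in AMQ's DEBUG log
    String url = "tcp://amq-host:61616?jms.clientID=nightly-report-job";
    Connection connection = new ActiveMQConnectionFactory(url).createConnection();
    try {
        connection.start();
        // ... job work ...
    } finally {
        connection.close(); // always close, even when the job fails
    }
}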
If your process is truly exiting, though, your connection should be going away at that point (i.e. you can't keep the connection alive if there's no process to service it).
If you are running on Linux, you can do a netstat -anp | grep 61616 (or whatever your AMQ port is) to see which PIDs still have connections to AMQ, and then a ps to see what those processes are.
Since from time to time the application throws java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time, I have enabled monitoring of the JDBC Connection Pool on the relevant server instance from the GlassFish admin web console.
Then I remote-debugged the application, holding a breakpoint after a connection is obtained but before it is closed. When I refreshed the web administration console and checked the value of the NumConnFree resource statistic, it still showed the initial value of 8. Since I am currently using a connection, it should be 7, right?
Has anyone faced this kind of situation? I am not sure whether it is a problem with the administration web console.
Also, what are other good ways to monitor connection leaks? My goal was to check the value of the NumPotentialConnLeak property and then check the logs for any specific leaks. But given the problem above, I am not sure the administration console shows correct data.
Not sure if you are still watching this thread, but I found this very useful:
http://pe-kay.blogspot.ca/2011/10/using-glassfish-monitoring-and-finding.html?m=1
I use c3p0 pooling with my GUI application. I have the following configuration:
overrides.put("maxStatementsPerConnection", 30);
overrides.put("maxPoolSize",70);
overrides.put("checkoutTimeout", 50000);
Occasionally I get into a situation where an attempt to get a connection times out:
java.sql.SQLException: An attempt by a client to checkout a Connection has timed out.
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:527)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
at com.jthink.jaikoz.db.Db.createConnection(Db.java:402)
This happens even though I'm sure I have no other connections open. In fact, I previously had some additional options enabled (debugUnreturnedConnectionStackTraces, unreturnedConnectionTimeout) to try to identify problems with connections not being closed, and found none. This problem rarely occurs and only happens after the application has been running for some time. I'm using it with an embedded Derby database.
As luck would have it, when it failed this time I was running with the YourKit profiler enabled, so I could do monitor profiling. I found that we have three c3p0 threads all waiting on each other, which is why I think there is actually a deadlock here:
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread#0
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread#1
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread#2
Is this related to the numHelperThreads setting?
I took a screen dump of this.
Have I found a problem with c3p0, and can I code around it to recover?
The three threads you see are indeed the helper threads. These perform slow JDBC operations asynchronously, e.g. closing unused connections. The last line of the stack trace
com.jthink.jaikoz.db.Db.createConnection(Db.java:402)
seems to indicate that c3p0 is trying to open a new connection but the database refuses to create one. I assume the 'Jaikoz' database is refusing the connection; the c3p0 connection pool is not the issue here.
Simon
Does anyone have any information comparing performance characteristics of different ConnectionPool implementations?
Background: I have an application that runs DB updates in background threads against a MySQL instance on the same box. Using the DataSource com.mchange.v2.c3p0.ComboPooledDataSource would give us occasional SocketExceptions:
com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.net.SocketException
MESSAGE: Broken pipe
STACKTRACE:
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
Increasing the MySQL connection timeout increased the frequency of these errors.
These errors disappeared after switching to a different connection pool (com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource); however, the performance may be worse, and the memory profile is noticeably worse (we get fewer, but much larger, GCs than with the c3p0 pool).
Whatever connection pool you use, you need to assume that the connection could be randomly closed at any moment and make your application deal with it.
In the case of a long-standing DB connection on a "trusted" network, what often happens is that the OS applies a time limit to how long connections can be open, or periodically runs some "connection cleanup" code. But the precise cause doesn't matter too much -- it's just part of networking life that you should assume the connection can be "pulled from under your feet", and deal with that scenario accordingly.
So given that, I really can't see the point of a connection pool framework that doesn't allow you to handle this case programmatically.
(Incidentally, this is another of those cases where I'm glad I just write my own connection pool code: no black boxes mysteriously eating memory, and no fishing around to find the "magic parameter"...)
You may want to have a look at some benchmark numbers up at http://jolbox.com - the site hosting BoneCP, a connection pool that is faster than both C3P0 and DBCP.
I had this error pop up with MySQL and c3p0 as well - I tried various things and eventually made it go away. I can't remember exactly, but what might have solved it was the autoReconnect flag, à la
url="jdbc:mysql://localhost:3306/database?autoReconnect=true"
Have you tried Apache DBCP? I don't know about c3p0, but DBCP can handle idle connections in different ways (see the sketch after this list):
It can remove idle connections from the pool
It can run a query on idle connections after a certain period of inactivity
It can also test if a connection is valid just before giving it to the application, by running a query on it; if it gets an exception, it discards that connection and tries with another one (or creates a new one if it can). Way more robust.
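A minimal sketch of those idle-connection settings on a DBCP 1.x BasicDataSource (the URL, credentials and timings are placeholder values, not recommendations):

import org.apache.commons.dbcp.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("com.mysql.jdbc.Driver");
ds.setUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
ds.setUsername("appuser");                     // placeholder credentials
ds.setPassword("secret");

// Periodically evict connections that have sat idle for over 5 minutes...
ds.setTimeBetweenEvictionRunsMillis(60 * 1000);
ds.setMinEvictableIdleTimeMillis(5 * 60 * 1000);
// ...run a query on idle connections to keep them alive...
ds.setTestWhileIdle(true);
// ...and validate each connection before handing it to the application.
ds.setTestOnBorrow(true);
ds.setValidationQuery("SELECT 1");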
Broken pipe
That roughly means that the other side has aborted/timed out/closed the connection. Are you keeping connections open that long? Ensure that your code properly closes all JDBC resources (Connection, Statement and ResultSet) in the finally block.
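For example, a pre-Java 7 sketch of the classic finally-block idiom (the dataSource parameter and the query are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

static void queryOnce(DataSource dataSource) throws SQLException {
    Connection con = null;
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        con = dataSource.getConnection();
        ps = con.prepareStatement("SELECT 1");
        rs = ps.executeQuery();
        // ... use the ResultSet ...
    } finally {
        // Close in reverse order of acquisition; swallow close() failures so
        // one failure doesn't stop the remaining resources from being released.
        if (rs != null) try { rs.close(); } catch (SQLException ignore) {}
        if (ps != null) try { ps.close(); } catch (SQLException ignore) {}
        if (con != null) try { con.close(); } catch (SQLException ignore) {}
    }
}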
Increasing the MySQL connection timeout increased the frequency of these errors.
Take care that this timeout doesn't exceed the DB's own timeout setting.