c3p0 helper threads appear to be deadlocking - java

I use c3p0 pooling in my GUI application. I have the following configuration overrides:
overrides.put("maxStatementsPerConnection", 30);
overrides.put("maxPoolSize",70);
overrides.put("checkoutTimeout", 50000);
Occasionally I get into a situation where an attempt to get a connection times out
java.sql.SQLException: An attempt by a client to checkout a Connection has timed out.
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:527)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
at com.jthink.jaikoz.db.Db.createConnection(Db.java:402)
This happens even though I'm sure I have no other connections open. In fact I previously had some additional options enabled (debugUnreturnedConnectionStackTraces, unreturnedConnectionTimeout) to try to identify connections that were not being closed, and found no problems. The issue occurs rarely and only after the application has been running for some time. I'm using it with an embedded Derby database.
As luck would have it, when it failed this time I was running with the YourKit profiler enabled and could do monitor profiling. I found three c3p0 threads all waiting on each other, which is why I think there is actually a deadlock here:
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread#0
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread#1
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread#2
Is this analogous to the numHelperThreads setting?
I took a screenshot of this.
Have I found a problem with c3p0, and can I write code to recover from it?

The three threads you see are indeed the helper threads. They perform slow JDBC operations asynchronously, e.g. closing unused connections. The last line of the stack trace
com.jthink.jaikoz.db.Db.createConnection(Db.java:402)
seems to indicate that C3P0 is trying to open a new connection but the database refuses to create one. I assume the 'Jaikoz' database is refusing the connection; the C3P0 connection pool is not the issue here.
Simon

Related

Lots of SHOW TRANSACTION ISOLATION LEVEL queries in Postgres

I am using Hibernate 4, PostgreSQL and C3P0.
In my web application, after some time I start getting multiple SHOW TRANSACTION ISOLATION LEVEL queries in the database, due to which my server hangs. In my code all my connections are properly closed.
Is it due to a connection leak?
You should also check the state of each query; if it's idle, it's most likely nothing problematic.
pg_stat_activity will show the last query that was executed by each open connection, and c3p0 uses SHOW TRANSACTION ISOLATION LEVEL to keep the connection open (normal and expected behavior).
This is what's happening:
A connection is opened.
SHOW TRANSACTION ISOLATION LEVEL is executed to keep the connection open.
The connection pool sends this query periodically (for example every 10 minutes) to keep the connection open.
Those queries show up in pg_stat_activity because in some cases they were the last queries executed via a given connection. They will also show up as idle because the connection is not in active use.
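If you want to double-check from the application side, a small JDBC query against pg_stat_activity (PostgreSQL 9.2+ column names; the connection details here are illustrative) will show each backend's state and last query:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PgActivityCheck {
    public static void main(String[] args) throws SQLException {
        // Connect directly (outside the pool) so we can observe the pool's connections server-side.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT pid, state, query FROM pg_stat_activity")) {
            while (rs.next()) {
                // Idle backends whose last query was SHOW TRANSACTION ISOLATION LEVEL
                // are just pooled connections being kept alive, not a problem.
                System.out.printf("pid=%d state=%s query=%s%n",
                        rs.getInt("pid"), rs.getString("state"), rs.getString("query"));
            }
        }
    }
}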
It sounds like you may be churning through the Connections in your Connection pool way too fast.
This could be because you have set an overly aggressive maxIdleTime or maxConnectionAge, or because Connections are failing Connection tests and getting evicted, or because your application mistakenly reconstructs the pool when it asks for Connections rather than holding and using a stable pool. (That's a very bad but surprisingly common mistake.)
c3p0 checks Connection isolation levels one time per Connection acquired. Since acquired Connections should have a long lifetime in the pool, the amortized overhead of that is negligible.
But if, due to some configuration problem or bug, your application has c3p0 continually acquiring Connections, one per client, or much worse if you are reconstructing the pool for each client, then the transaction isolation checks might become the visible symptom of a worse underlying problem.
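To make that concrete, here is a rough sketch of the stable-pool pattern versus the rebuild-per-request anti-pattern (class and method names are illustrative; pool settings are assumed to come from c3p0.properties):

import java.sql.Connection;
import java.sql.SQLException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class Dao {
    // GOOD: one pool for the life of the application, held in a field.
    private static final ComboPooledDataSource POOL = new ComboPooledDataSource();

    public void goodPattern() throws SQLException {
        try (Connection c = POOL.getConnection()) { // checkout from the stable pool
            // ... use the connection ...
        }                                           // close() just returns it to the pool
    }

    public void badPattern() throws SQLException {
        // BAD: builds a whole new pool per request, so every request pays for fresh
        // physical Connections, each with its own isolation-level check.
        ComboPooledDataSource perRequestPool = new ComboPooledDataSource();
        try (Connection c = perRequestPool.getConnection()) {
            // ... use the connection ...
        }
    }
}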
multiple SHOW TRANSACTION ISOLATION LEVEL queries in the database, due to which my server hangs.
It's really hard (I would say impossible) for your server to hang due to multiple queries like this. If your server hangs, you should check your configuration and make sure you are using the latest minor patch available for your version.
SHOW TRANSACTION ISOLATION LEVEL is executed every time the application calls Connection.getTransactionIsolation(), and C3P0 calls getTransactionIsolation() every time it creates a connection.
If the connection pooler is creating and destroying lots of connections, you end up with many SHOW TRANSACTION ISOLATION LEVEL queries hitting the database, because the PgJDBC driver executes the query every single time getTransactionIsolation() is called.
Change testConnectionOnCheckin and testConnectionOnCheckout to false in c3p0.
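Programmatically that would be something like the following (the factory method is illustrative; the same properties can also be set in c3p0.properties):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfig {
    public static ComboPooledDataSource withoutConnectionTests() {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        // Skip the extra validation round-trips when connections are checked in and out.
        cpds.setTestConnectionOnCheckin(false);
        cpds.setTestConnectionOnCheckout(false);
        return cpds;
    }
}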
I saw the same problem. It seemed to happen when using a newer PostgreSQL version. I fixed it by upgrading the PostgreSQL JDBC driver to 42.2.6.

How to detect a Tomcat connection pooling issue

We have a Java Spring MVC 2.5 application using Tomcat 6 and MySQL 5.0. We have a bizarre scenario where, for whatever reason, the number of connections used in the c3p0 connection pool starts spiraling out of control and eventually brings Tomcat down.
We monitor the c3p0 connection pool through JMX and, most of the time, connections are barely used. When this spiraling situation happens, our Tomcat connection pool maxes out and Apache starts queuing threads.
In the spiraling scenario, the database has low load and is not giving any errors or any obvious bad situation.
I'm starting to run out of ideas on how to detect this issue. I don't think doing a Tomcat stack dump would do me any good when the situation is already spiraling out of control, and I'm not sure how I could catch it before it does.
We also use Terracotta, which, judging by the logs, I don't believe is doing anything odd.
Any ideas would be greatly appreciated!
Cheers!
Somewhere you're leaking a connection. This can happen when you explicitly retrieve Hibernate sessions from the session factory, rather than getting the connection associated with an in-process transaction (can't remember the exact method names).
C3P0 will let you debug this situation with two configuration options (the following is copied from the docs, which are part of the download package):
unreturnedConnectionTimeout defines a limit (in seconds) to how long a Connection may remain checked out. If set to a nonzero value, unreturned, checked-out Connections that exceed this limit will be summarily destroyed, and then replaced in the pool.
If you set debugUnreturnedConnectionStackTraces to true, then a stack trace will be captured each time a Connection is checked out. Whenever an unreturned Connection times out, that stack trace will be printed, revealing where a Connection was checked out that was not checked in promptly. debugUnreturnedConnectionStackTraces is intended to be used only for debugging, as capturing a stack trace can slow down Connection checkout.
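Programmatically that might look like the sketch below (the class is illustrative, the same settings can go in c3p0.properties or c3p0-config.xml, and the 300-second timeout is just an example):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class LeakDebugConfig {
    public static ComboPooledDataSource withLeakDetection() {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        // Destroy any Connection that stays checked out for more than 5 minutes...
        cpds.setUnreturnedConnectionTimeout(300);
        // ...and print the stack trace of the checkout site that leaked it.
        cpds.setDebugUnreturnedConnectionStackTraces(true);
        return cpds;
    }
}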

Relationship between JDBC sessions and Oracle processes

We are having a problem with too many Oracle processes being created (over 2,000) when connections are limited to 1,100 (using C3P0)
Two questions:
What's the relationship between an Oracle process and a JDBC connection? Is one Oracle process created for each session? Is one created for every JDBC statement? No relationship at all?
Did you ever face this scenario, where you are creating more processes than JDBC connections?
Any comment would be really appreciated.
There is one session per connection. This sounds like you have a connection leak: somewhere you're opening a new connection and not closing it properly. One possibility is that you open, use and close a connection inside a try block and are handling an exception in a catch, or returning early for some other reason. If so, you need to make sure the connection close is done in a finally block, or it may not happen, leaving the connection (and thus the session) hanging. Opening two connections in the same scope without an explicit close in between can also do this.
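For example, with try-with-resources (equivalent to closing in finally, just harder to get wrong; the SQL, class and DataSource here are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class UserLookup {
    private final DataSource ds;

    public UserLookup(DataSource ds) {
        this.ds = ds;
    }

    public String findName(long id) throws SQLException {
        // Connection, statement and result set are all closed automatically, in reverse
        // order, even if an exception is thrown or the method returns early.
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}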
I'm not familiar with c3p0 so I don't know how connections are handled, or where and how your 1,100 limit is imposed; if it (or you) maintains a connection pool and the 1,100 you refer to is the maximum pool size, then this doesn't sound like the issue, as you'd hit the pool cap before the session cap.
You can look in v$session to confirm that all the sessions are coming from JDBC, and there isn't something else connecting.
Maybe you want to check if your server runs in dedicated or shared mode (you probably want to switch it to shared mode if you want to decrease the number of active processes).
You can check that by doing
select server from v$session
More information about process architecture
http://docs.oracle.com/cd/B19306_01/server.102/b14220/process.htm
Shared/Dedicated server mode
http://docs.oracle.com/cd/B10501_01/server.920/a96521/manproc.htm

java mysql server getting slower

I manage my connections with a JDBC connection pool (BoneCP) and I always close the Connection, the PreparedStatement and the ResultSet.
But when my program has been running for several days, the MySQL server gets slower and slower (for testing, I let my program insert an entry every second). After 2 days there were several seconds between the entries, which is why I think the MySQL server is getting slower and can no longer handle the incoming transactions. Am I right?
The MySQL server also uses much more RAM and does not release the resources. Does anyone know how I could find the error causing this behaviour? Thanks in advance!
Use the MySQL Workbench to detect open connections. It also gives you a host of options to see performance of your database server.
Also [I might be mistaken about this part of your question], when you say
I use connection pooling
why do you close the connection? Isn't that the opposite of the purpose of connection pooling?

Performance comparison of JDBC connection pools

Does anyone have any information comparing performance characteristics of different ConnectionPool implementations?
Background: I have an application that runs DB updates in background threads against a MySQL instance on the same box. Using the DataSource com.mchange.v2.c3p0.ComboPooledDataSource would give us occasional SocketExceptions:
com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.net.SocketException
MESSAGE: Broken pipe
STACKTRACE:
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
Increasing the mysql connection timeout increased the frequency of these errors.
These errors have disappeared since switching to a different connection pool (com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource); however, the performance may be worse and the memory profile is noticeably worse (we get fewer, and much larger, GCs than with the c3p0 pool).
Whatever connection pool you use, you need to assume that the connection could be randomly closed at any moment and make your application deal with it.
In the case with a long-standing DB connection on a "trusted" network, what often happens is that the OS applies a time limit to how long connections can be open, or periodically runs some "connection cleanup" code. But the cause doesn't matter too much -- it's just part of networking life that you should assume the connection can be "pulled from under your feet", and deal with this scenario accordingly.
So given that, I really can't see the point of a connection pool framework that doesn't allow you to handle this case programmatically.
(Incidentally, this is another of my cases where I'm glad I just write my own connection pool code; no black boxes mysteriously eating memory, and no having to fish around to find the "magic parameter"...)
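As a rough illustration of 'dealing with it' at the application level, one simple approach is to retry a failed operation once against a fresh checkout from the pool (table, class and column names here are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ResilientWriter {
    private final DataSource pool;

    public ResilientWriter(DataSource pool) {
        this.pool = pool;
    }

    public void insert(String message) throws SQLException {
        SQLException last = null;
        // If the first attempt dies on a stale/broken connection ("Broken pipe"),
        // the second attempt checks out a different (or freshly created) one.
        for (int attempt = 0; attempt < 2; attempt++) {
            try (Connection conn = pool.getConnection();
                 PreparedStatement ps = conn.prepareStatement("INSERT INTO log (msg) VALUES (?)")) {
                ps.setString(1, message);
                ps.executeUpdate();
                return;
            } catch (SQLException e) {
                last = e;
            }
        }
        throw last;
    }
}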
You may want to have a look at some benchmark numbers up at http://jolbox.com - the site hosting BoneCP, a connection pool that is faster than both C3P0 and DBCP.
I had this error pop up with MySQL & c3p0 as well - I tried various things and eventually made it go away. I can't remember exactly, but what might have solved it was the autoReconnect flag, à la
url="jdbc:mysql://localhost:3306/database?autoReconnect=true"
Have you tried Apache DBCP? I don't know about c3p0, but DBCP can handle idle connections in different ways (a configuration sketch follows this list):
It can remove idle connections from the pool
It can run a query on idle connections after a certain period of inactivity
It can also test if a connection is valid just before giving it to the application, by running a query on it; if it gets an exception, it discards that connection and tries with another one (or creates a new one if it can). Way more robust.
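A rough configuration sketch of those three behaviours on a DBCP BasicDataSource (package name is commons-dbcp 1.x; the URL, credentials, query and timings are all illustrative):

import org.apache.commons.dbcp.BasicDataSource;

public class DbcpPool {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/mydb");
        ds.setUsername("user");
        ds.setPassword("password");
        // 1. Evict connections that have sat idle in the pool for too long.
        ds.setTimeBetweenEvictionRunsMillis(60000);    // run the evictor every minute
        ds.setMinEvictableIdleTimeMillis(300000);      // evict after 5 minutes idle
        // 2. Run a validation query on idle connections during evictor runs.
        ds.setTestWhileIdle(true);
        ds.setValidationQuery("SELECT 1");
        // 3. Validate a connection right before handing it to the application;
        //    broken ones are discarded and replaced.
        ds.setTestOnBorrow(true);
        return ds;
    }
}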
Broken pipe
That roughly means that the other side has aborted/timed out/closed the connection. Aren't you keeping connections open for too long? Ensure that your code properly closes all JDBC resources (Connection, Statement and ResultSet) in the finally block.
Increasing the mysql connection timeout increased the frequency of these errors.
Take care that this timeout doesn't exceed the DB's own timeout setting.
