Hello, I am extracting the connection object from the EclipseLink context by calling: Connection con = entityManager.unwrap(Connection.class);
Am I responsible for releasing the Connection so that it goes back into the pool? I also need to know whether the extracted connection keeps the original configuration set by EclipseLink (number of connections, maximum number of connections, and so on). If so, then once it is returned to EclipseLink it should perform the same as when working normally.
I ask because someone's experience might help me decide whether getting the connection this way will keep performance as good as EclipseLink's native JPA. Thanks in advance.
You can only unwrap the Connection within the scope of a transaction, so you will get the same connection that the EntityManager is bound to (taken from the pool). You must not release this connection; EclipseLink will release it when the transaction ends.
So, to be clear, you are NOT responsible for releasing the connection.
I am using Hibernate 4, PostgreSQL and C3P0.
In my web application, after some time I see multiple SHOW TRANSACTION ISOLATION LEVEL queries in the database, due to which my server hangs. In my code all connections are properly closed.
Is it due to a connection leak?
You should also check the state of each query; if it's idle, it's most likely nothing problematic.
pg_stat_activity shows the last query executed by each open connection, and c3p0 uses SHOW TRANSACTION ISOLATION LEVEL to keep the connection open (normal and expected behavior).
This is what's happening:
Connection is opened.
SHOW TRANSACTION ISOLATION LEVEL is executed to keep the connection open.
The connection pool will send this query periodically (for example, every 10 minutes) to keep the connection open.
Those queries show up in pg_stat_activity because in some cases they were the last queries executed via a given connection. They will also show as idle because the connection is not in active use.
It sounds like you may be churning through the Connections in your connection pool far too fast.
This could be because you have set an overly aggressive maxIdleTime or maxConnectionAge, or because Connections are failing Connection tests and getting evicted, or because your application mistakenly reconstructs the pool when it asks for Connections rather than holding and using a stable pool. (That's a very bad but surprisingly common mistake.)
c3p0 checks the Connection isolation level once per Connection acquired. Since acquired Connections should have a long lifetime in the pool, the amortized overhead of that check is negligible.
But if, due to some configuration problem or bug, your application has c3p0 continually acquiring Connections, one per client (or much worse if you are reconstructing the pool for each client), then the transaction-isolation checks might become the visible symptom of a worse underlying problem.
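For reference, these are the c3p0 expiration knobs the answer refers to. The values below are illustrative only and should be tuned for your workload:

```properties
# c3p0.properties -- illustrative values only, tune for your workload.
# maxIdleTime/maxConnectionAge of 0 means "never expire", which avoids churn.
c3p0.maxIdleTime=1800
c3p0.maxConnectionAge=0
# Prefer periodic idle testing over aggressive expiration:
c3p0.idleConnectionTestPeriod=300
```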
multiple SHOW TRANSACTION ISOLATION LEVEL queries in the database due to which my server hangs.
It's really hard (I would say impossible) for your server to hang due to multiple queries like this. If your server hangs, you should check your configuration and make sure you are using the latest minor patch available for your version.
SHOW TRANSACTION ISOLATION LEVEL is executed every time the application calls Connection.getTransactionIsolation(), and C3P0 calls getTransactionIsolation() every time it creates a connection.
So if the connection pooler is creating and destroying lots of connections, you end up with many SHOW TRANSACTION ISOLATION LEVEL queries hitting the database, because the PgJDBC driver executes that query every single time getTransactionIsolation() is called.
Change the test-connection-on-checkin and test-connection-on-checkout settings to false in c3p0.
I saw the same problem. It seemed to happen when using a newer PostgreSQL version. I fixed it by upgrading the PostgreSQL JDBC driver to 42.2.6.
Until now, whenever I query the database I open a fresh connection. How do I arrange things so that once I open a connection I can reuse it?
Also, please tell me whether I could leak resources this way.
Basically you need a JDBC connection pool, typically implementing the DataSource interface. Have a look at DBCP and c3p0. Chances are your container/server already provides a connection pooling implementation.
When you use a connection pool, every time you open a connection you are actually taking one from the pool (or opening a new one if the pool is empty). When you close the connection, it is actually returned to the pool. A leak can only occur if you forget the latter (or forget to close the ResultSet, Statement, and so on).
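The borrow/return semantics described above can be sketched in a few lines of Java. This is a deliberately minimal illustration, not a real pool (TinyPool and PoolDemo are made-up names; real pools like c3p0 and DBCP add validation, timeouts, and growth):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// A deliberately tiny pool, only to show the borrow/return semantics.
class TinyPool<T> {
    private final BlockingQueue<T> idle;

    TinyPool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) idle.add(factory.get());
    }

    // "opening" a connection really just borrows one from the pool
    T borrow() throws InterruptedException { return idle.take(); }

    // "closing" returns it; forgetting this call is exactly the leak
    void release(T conn) { idle.add(conn); }
}

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        TinyPool<Object> pool = new TinyPool<>(1, Object::new);
        Object first = pool.borrow();
        pool.release(first);
        Object second = pool.borrow();
        System.out.println(first == second); // the very same object is reused
    }
}
```

The point of the demo: what looks like "open" and "close" to the caller is really "take from the queue" and "put back on the queue".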
You can (and should) reuse DB connections. Connection pooling is one technique for this. A thorough tutorial on connection pooling can be read here: http://java.sun.com/developer/onlineTraining/Programming/JDCBook/conpool.html
How do I configure Hibernate so that each time I call sessionFactory.openSession() it connects with a new connection from the connection pool? The connection pool is managed by Websphere Application Server and is a JDBC Data Source.
Thanks
How do I configure Hibernate so that each time I call sessionFactory.openSession() it connects with a new connection from the connection pool?
This is the default behavior, each session will get a dedicated connection from the connection pool.
Right now, it appears that both sessions are using the same connection, because when the first session is closed (by manually calling session.close()), the other session sometimes throws a "session closed" exception when trying to run more queries.
No, they are not. But maybe the second connection gets released at the end of the transaction initiated for the request. Have a look at the hibernate.connection.release_mode configuration parameter; you might want to use on_close. But without more details on your transaction strategy, it's impossible to say more.
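For completeness, the parameter mentioned above goes into the Hibernate configuration. A one-line sketch (on_close is merely what this answer suggests trying, not necessarily the right choice for every transaction strategy):

```properties
# hibernate.properties -- hold the JDBC connection until Session.close()
hibernate.connection.release_mode=on_close
```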
The second session is opened by a child thread, which means the child thread can keep living even after the (HTTP) request is complete.
Take my previous advice with a grain of salt; you should simply not spawn unmanaged threads, and I don't know how the application server will behave. I explain in this other answer what the right way would be.
Situation: we have a web service running on Tomcat accessing a DB2 database on AS/400, and we are using JTOpen drivers for JNDI connections handled by Tomcat. For handling transactions and database access we are using Spring.
For each select, the system takes a JDBC connection from JNDI (i.e. from the connection pool), does the selection, and at the end closes the ResultSet and the Statement and releases the Connection, in that order. That works fine; the shared lock on the table disappears.
When we do an update the same way as the select (except for the ResultSet object, which we don't have in that situation), the lock on the table stays even after releasing the Connection back to JNDI.
If we set maxIdle=0 for the number of connections in the JNDI configuration, the problem disappears, but that degrades performance. We have about 100 users online on that service, so we need a few connections to stay alive in the pool.
What do you suggest?
It sounds as if auto-commit is disabled by default and the code isn't calling connection.commit() anywhere. To fix this, either configure the connection pool so that it only returns connections with autoCommit = true, or change the JDBC code so that it commits the transaction at the end of the try block in which the SQL action takes place.
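A minimal sketch of the commit-then-close ordering. The fake java.sql.Connection below is built with a dynamic proxy purely for illustration (a real Connection would come from the JNDI DataSource); it just records calls, showing that commit() runs before close() hands the connection back to the pool:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.ArrayList;
import java.util.List;

public class CommitDemo {
    // records method calls so the commit-before-close ordering is visible
    static final List<String> calls = new ArrayList<>();

    // illustrative fake; in real code this is dataSource.getConnection()
    static Connection fakeConnection() {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    calls.add(method.getName());
                    Class<?> r = method.getReturnType();
                    if (r == boolean.class) return false;
                    if (r == int.class) return 0;
                    return null;
                });
    }

    public static void main(String[] args) throws Exception {
        try (Connection con = fakeConnection()) {
            // ... execute the UPDATE with a Statement here ...
            con.commit(); // releases the locks held by the transaction
        }                 // close() then returns the connection to the pool
        System.out.println(calls);
    }
}
```

With auto-commit disabled, leaving out the commit() line is exactly what keeps the table lock alive after the connection goes back to the pool.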
Take a look at this.
It helped me in the same case.
I'm in the process of adding connection pooling to our java app.
The app can work with different rdbmses, and both as a desktop app and as a headless webservice. The basic implementation works fine in desktop mode (rdbms = derby). When running as a webservice (rdbms = mysql), we see that connection pooling is needed to get good concurrent behaviour.
My initial approach was to use dependency injection to decide between a basic DataSource or a connection pooling DataSource at startup time.
The problem in this scenario is that I don't know when to call connection.close(). My understanding is that when using connection pooling you should close() after each query so the connection object can be recycled, but the current non-pooling implementation tries to hold on to the Connection object as long as possible.
Is there a solution to this problem? Is there a way to recycle a basic connection object? What happens if I don't call connection.close() on a pooled connection object?
Or is it simply a bad design to mix these two connection strategies?
If I understand you correctly, you are essentially doing your own pooling implementation by holding on to the connection as long as possible. If this strategy is successful (that is, the program behaves as you describe), then you already have your own pool. The only thing adding a real pool will gain you is improved response time when you make a new connection (as you won't really be making one; you will be getting it from the pool), which apparently is not happening all that often.
So I would cycle back to the assumption underlying this question: is it in fact the case that your concurrent performance problems are related to connection pooling? If you are using transactions in MySQL but not in Derby, that could be a big cause of concurrency issues, as an example of a different potential cause.
To answer your question directly: use a connection pool all the time. Pools add very little overhead, and of course change the code to release connections quickly (that is, when the request is done, not for as long as the user has a screen open) rather than holding on to them forever.
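The "release quickly" pattern can be sketched against the javax.sql.DataSource interface, which is also what resolves the dependency-injection dilemma from the question: the calling code is identical whether the injected DataSource pools or not. The proxy-based fakes below are illustrative stand-ins for a real driver or pool; the fake just counts close() calls, whereas a pooling implementation intercepts close() to recycle the connection:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicInteger;
import javax.sql.DataSource;

public class DataSourceDemo {
    static final AtomicInteger closes = new AtomicInteger();

    // illustrative fake Connection that only counts close() calls
    static Connection fakeConnection() {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    if (method.getName().equals("close")) closes.incrementAndGet();
                    Class<?> r = method.getReturnType();
                    if (r == boolean.class) return false;
                    if (r == int.class) return 0;
                    return null;
                });
    }

    // illustrative fake DataSource; inject a pooled or plain one here
    static DataSource fakeDataSource() {
        return (DataSource) Proxy.newProxyInstance(
                DataSource.class.getClassLoader(),
                new Class<?>[]{DataSource.class},
                (proxy, method, args) ->
                        method.getName().equals("getConnection") ? fakeConnection() : null);
    }

    // the calling code never knows whether ds pools or not
    static void handleRequest(DataSource ds) throws Exception {
        try (Connection con = ds.getConnection()) {
            // ... run this request's query ...
        } // always released here; a pooling DataSource recycles it on close()
    }

    public static void main(String[] args) throws Exception {
        DataSource ds = fakeDataSource();
        handleRequest(ds);
        handleRequest(ds);
        System.out.println(closes.get());
    }
}
```

Acquire per unit of work, close when it ends, and let the wiring decide whether close() destroys the connection or returns it to a pool.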