Large number of inactive sessions in Oracle from a Java application

I have a web application which needs to connect to the database every now and then, and I am sure that I am closing every connection that I open.
The issue is that I have a lot of inactive sessions in the Oracle DB for the same user. I have tried pooling and I have tried closing all sessions, but nothing seems to work. I have searched for possible solutions on Stack Overflow but unfortunately did not find an answer to my problem. The closest I got was Inactive session in Oracle by JDBC, where the person asking the question answered it themselves by saying that they modified the code.
Any answer or recommendation would be appreciated.

I have tried pooling...
Without a connection pool your application has direct control over opening and closing the database connections. This is not the typical setup, because acquiring a physical connection is a costly operation.
A connection pool optimizes this by keeping a certain number of connections open and handing them out to the application on request.
If a connection is closed by the application, it is not closed in the DB; it is made available in the pool as idle. Among other parameters, you can control how many idle connections should be kept in the pool. E.g. for DBCP check the parameters minIdle and maxIdle. Except for some special cases with invalid connections, the number of idle connections (those connections are INACTIVE) you see should stay within this limit.
If you see a systematically higher number (or even an increasing number) of INACTIVE sessions, the most probable explanation is that the application gets a connection from the pool and "forgets" to return it - those sessions are INACTIVE as well (see the sketch below).
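As a rough illustration (a minimal sketch with made-up method and table names, not your actual code), the first method is the pattern that leaks connections and lets the number of INACTIVE sessions grow; the second keeps them bounded by the pool's idle settings:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

class PoolReturnExample {

    // Leaks: the pooled connection is borrowed but never given back, so the
    // pool keeps creating new physical connections and the number of
    // INACTIVE sessions in the DB keeps growing.
    static void leaky(DataSource ds) throws SQLException {
        Connection con = ds.getConnection();   // borrowed from the pool
        Statement st = con.createStatement();
        st.executeUpdate("UPDATE some_table SET processed = 1");
        // con.close() is missing - the connection is never returned
    }

    // Fine: try-with-resources always returns the connection to the pool,
    // so the idle (INACTIVE) sessions stay within minIdle/maxIdle.
    static void fixed(DataSource ds) throws SQLException {
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement()) {
            st.executeUpdate("UPDATE some_table SET processed = 1");
        } // close() here hands the connection back to the pool as idle
    }
}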

Related

How costly is opening and closing of a DB connection in Connection Pool?

If we use a connection pooling framework or the Tomcat JDBC pool, how costly is it to open and close a DB connection?
Is it good practice to frequently open and close the DB connection whenever DB operations are required?
Or can the same connection be carried across different methods for DB operations?
A JDBC connection goes over the network, usually via TCP/IP and optionally with SSL. You can read this post to find out why that is expensive.
You can use a single connection across multiple methods for different DB operations; for each operation you create a Statement on that connection and execute it.
Connection pooling avoids the overhead of creating connections during a request and should be used whenever possible. Hikari is one of the fastest.
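For illustration, a minimal sketch (the DAO-style method names, table names and SQL are made up) of carrying one pooled connection across several methods, each creating its own statement:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

class SingleConnectionAcrossMethods {

    static void processOrder(DataSource ds, long orderId) throws SQLException {
        // One connection (borrowed from the pool) is reused by both helper methods.
        try (Connection con = ds.getConnection()) {
            int items = countItems(con, orderId);
            markProcessed(con, orderId, items);
        } // returned to the pool here
    }

    static int countItems(Connection con, long orderId) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT COUNT(*) FROM order_items WHERE order_id = ?")) {
            ps.setLong(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }
    }

    static void markProcessed(Connection con, long orderId, int items) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE orders SET processed = 1, item_count = ? WHERE id = ?")) {
            ps.setInt(1, items);
            ps.setLong(2, orderId);
            ps.executeUpdate();
        }
    }
}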
The answer is: it is almost always recommended to re-use DB connections. That's the whole reason why connection pools exist - not only for performance, but also for DB stability. For instance, if you don't limit the number of connections and mistakenly open hundreds of DB connections, the DB might go down. And if DB connections don't get closed for some reason (out-of-memory error / shutdown / unhandled exception, etc.), you have an even bigger issue: not only would this affect your application, it could also drag down other services using the shared DB. A connection pool contains such catastrophes.
What people don't realize is that behind a simple ORM API there are often hundreds of raw SQL statements. Imagine running those SQLs without a connection pool - we are talking about a very large overhead.
I couldn't fathom running a commercial DB application without using Connection Pools.
Some good resources on this topic:
https://www.cockroachlabs.com/blog/what-is-connection-pooling/
https://stackoverflow.blog/2020/10/14/improve-database-performance-with-connection-pooling/
Whether the maintenance (opening, closing, testing) of the database connections in a DB connection pool affects the working performance of the application depends on the implementation of the pool and, to some extent, on the underlying hardware.
A pool can be implemented to run in its own thread, or to initialise all connections during startup (of the container), or both. If the hardware provides enough cores, the working thread (the "business payload") will not be affected by the activities of the pool at all.
Other connection pools are implemented to create a new connection only on demand (a connection is requested, but currently none is available in the pool) and within the thread of the caller. In this case the creation of that connection reduces the performance of the working thread – but only that one time. It should not happen too often; otherwise your application needs too many connections and/or does not return them fast enough.
But whether you really need a database connection pool at all depends on the kind of application!
If we talk about a typical server application that is intended to run forever and to serve a permanently changing crowd of multiple clients at the same time, it will definitely benefit from a connection pool.
If we talk about a tool type application that starts, performs a more or less linear task in a defined amount of time, and terminates when done, then using a connection pool for the database connection(s) may cause more overhead than it provides advantages. For such an application it might be better to keep the connection open for the whole runtime.
From the RDBMS point of view there is no difference: in both cases the connections are seen as open.
If performance is a key parameter, it is better to switch to the Hikari connection pool. If you are using Spring Boot, the Hikari connection pool is used by default and you do not need to add any dependency. The nice thing about the Hikari pool is that its entire lifecycle is managed for you and you do not have to do anything.
Also, it is always recommended to close the connection so that it returns to the connection pool and other threads can use it, especially in multi-tenant environments. The best way to do this is with try-with-resources, so the connection is always closed:
try(Connection con = datasource.getConnection()){
// your code here.
}
To create your data source you can pass the credentials, for example:
DataSource dataSource = DataSourceBuilder.create()
.driverClassName(JDBC_DRIVER)
.url(url)
.username(username)
.password(password)
.build();
Link: https://github.com/brettwooldridge/HikariCP
If you want to know the answer in your case, just write two implementations (one with a pool, one without) and benchmark the difference.
Exactly how costly it is depends on so many factors that it is hard to tell without measuring.
But in general, a pool will be more efficient.
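A very rough sketch of such a measurement (the JDBC URL, credentials and pool choice are placeholders/assumptions - adapt them to your environment; it is a micro-benchmark, so treat the numbers as indicative only):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import com.zaxxer.hikari.HikariDataSource;

class PoolVsNoPoolBenchmark {

    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"; // placeholder
        String user = "app_user";                                // placeholder
        String pass = "secret";                                  // placeholder

        // Variant 1: a brand-new physical connection for every operation.
        long t1 = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            try (Connection con = DriverManager.getConnection(url, user, pass);
                 Statement st = con.createStatement()) {
                st.execute("SELECT 1 FROM dual");
            }
        }
        System.out.println("no pool: " + (System.nanoTime() - t1) / 1_000_000 + " ms");

        // Variant 2: connections borrowed from a pool (HikariCP as an example).
        try (HikariDataSource ds = new HikariDataSource()) {
            ds.setJdbcUrl(url);
            ds.setUsername(user);
            ds.setPassword(pass);
            long t2 = System.nanoTime();
            for (int i = 0; i < 100; i++) {
                try (Connection con = ds.getConnection();
                     Statement st = con.createStatement()) {
                    st.execute("SELECT 1 FROM dual");
                }
            }
            System.out.println("pool:    " + (System.nanoTime() - t2) / 1_000_000 + " ms");
        }
    }
}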
How costly it is always depends on the impact you consider.
Consider you have the following environment:
a web application where a UI transaction (a user click) causes a thread on the web server, and this thread is coupled to one connection/session on the database. Now compare different request rates:
10 connections per 60000ms / 1min or better to say 0.167 connections/s
10 connections per 1000ms / 1sec => 10 connections/s
10 connections per 100ms / 0.1sec => 100 connections/s
10 connections per 10ms / 0.01sec => 1000 connections/s
I have worked in even bigger environments.
And believe me: the more you exceed 100 conn/s by factors of 10^x, the more pain you will feel without a clean connection pool.
The more connections you create per second, the higher the latency you generate and the higher the impact on the database. And the more bandwidth you burn by building, over and over, a new "water pipeline" just to push a few drops of water from one side to the other.
Now, coming back to the question: if you fetch an existing connection from a connection pool, it is a matter of microseconds or a few milliseconds to get the database connection. Considered on its own, that is no real impact at all.
If there is a network in between, creating a new connection will probably take on the order of 10 to 100 ms.
Consider also the impact on your web server: each user blocks a thread, memory and a network connection, so this affects the web server load as well. In high-load environments you typically run into web-server thread-pool issues (e.g. reverse proxy Apache + Tomcat, or Tomcat alone) if the connections get exhausted or take too long (10–100 ms) to create.
Now considering also the database.
If you keep connections open, each connection is typically mapped to a thread (or process) on the DB. The DB can then use per-session caches for prepared statements and reuse pre-calculated access plans, which makes access to the data very fast.
You may lose this benefit if you have to recreate the connection over and over again.
But as said, if you are at up to roughly 10 connections per second you should not face any bigger issue without a connection pool, apart from the additional delay of the first access to the DB.
If you get into higher levels, you will have to manage the resources better and avoid any useless I/O delay such as recreating the connection.
Experience hints:
It does not cost you anything to use a connection pool. In all my previous performance-tuning projects, issues with the connection pool were a matter of bad configuration.
You can configure, for example (see the sketch after this list):
a connection check to validate the connection (use a real SQL statement that reads a real DB field), so that on every new access the connection gets checked and, if defective, is kicked out of the connection pool
a lifetime for connections, so that you get a new connection after a defined time
=> all this together ensures that even if your admins are doing crap and not informing you (killing connections/threads on the DB), the pool gets rebuilt quickly and the impact stays very low. Read the docs of your connection pool.
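A minimal sketch of those two settings, using HikariCP as an example (the URL and credentials are placeholders; property names differ per pool - e.g. DBCP calls the check validationQuery/testOnBorrow):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

class PoolHygieneConfig {

    static HikariDataSource buildPool() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"); // placeholder
        cfg.setUsername("app_user");                                // placeholder
        cfg.setPassword("secret");                                  // placeholder

        cfg.setMaximumPoolSize(10); // upper bound for sessions created by this pool
        cfg.setMinimumIdle(2);      // idle connections kept ready (seen as INACTIVE in the DB)

        // Connection check: validate a connection before handing it out.
        // With a JDBC4 driver Hikari uses Connection.isValid() by default;
        // an explicit test query is only needed for older drivers.
        cfg.setConnectionTestQuery("SELECT 1 FROM dual");

        // Connection lifetime: retire and replace connections after 30 minutes,
        // so stale or killed sessions do not linger in the pool.
        cfg.setMaxLifetime(30 * 60 * 1000L);

        return new HikariDataSource(cfg);
    }
}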
Is one connection pool better than another?
A clear no - it only starts to matter if you get into the high end, into distributed environments/clusters, or into cloud-based environments. If you already have a connection pool and it is still maintained, stick to it and become a pro on its settings.

Will DB connection leakage cause too many inactive sessions in Oracle?

I was assigned a task to fix too many inactive sessions in an Oracle database used by our Java application. We found a Java method which didn't close the JDBC connection. I know this is a DB connection leak, but I am not sure if it is the cause of the many inactive sessions. I also don't know how to find out which Java process is causing this issue. Can someone help me?
I know this is a DB connection leakage but I am not sure if it is the reason causing the too many inactive sessions.
Probably.
If each time you get a JDBC Connection you actually create a new connection, then you also start a new session, and when you do not close the connection you are left with an inactive session and the number of sessions will grow.
If you are using connection pooling then, when you close the connection, the connection is not actually closed but is returned to the pool. When the next connection is required, it is requested from the pool and you reuse the previous connection and the previous connection's session. In this case you should not see an increase in the total number of sessions while the connections are reused from the pool, but you might see inactive sessions that are the pooled connections not currently in use.
It sounds like you are not using connection pooling, in which case the number of sessions will directly correlate with the number of connections.
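As for finding the code that leaks: the answer above does not cover this, but if you move to a pool such as HikariCP (mentioned elsewhere on this page), its leak-detection setting logs the stack trace of any caller that does not return a connection within a threshold, which is exactly what you need to pinpoint the Java method that forgets to close. A minimal sketch (URL and credentials are placeholders):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

class LeakDetectionExample {

    static HikariDataSource buildPool() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"); // placeholder
        cfg.setUsername("app_user");                                // placeholder
        cfg.setPassword("secret");                                  // placeholder

        // If a borrowed connection is not returned within 30 seconds,
        // HikariCP logs a warning with the stack trace of the borrowing code,
        // pointing you at the Java method that leaks the connection.
        cfg.setLeakDetectionThreshold(30_000);

        return new HikariDataSource(cfg);
    }
}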

Relationship between JDBC sessions and Oracle processes

We are having a problem with too many Oracle processes being created (over 2,000) when connections are limited to 1,100 (using C3P0)
Two questions:
What's the relationship between an Oracle process and a JDBC connection? Is one Oracle process created for each session? Is one created for every JDBC statement? Or is there no relationship at all?
Have you ever faced this scenario, where more processes are created than there are JDBC connections?
Any comment would be really appreciated.
There is one session per connection. This sounds like you have a connection leak: somewhere you're opening a new connection and not closing it properly. One possibility is that you open, use and close a connection inside a try block and handle an exception in a catch, or return early for some other reason. If so, you need to make sure the connection close is done in finally, or it may not happen, leaving the connection (and thus the session) hanging. Opening two connections in the same scope without an explicit close in between can also do this. A sketch of that pattern follows.
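As an illustration (a minimal sketch with made-up SQL and method names): the first variant can leak the connection when an exception is thrown before close() is reached; the second closes it on every code path.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

class FinallyCloseExample {

    // Risky: if executeUpdate() throws, close() is never reached and the
    // connection (and its Oracle session/process) is left hanging.
    static void risky(String url, String user, String pass) {
        try {
            Connection con = DriverManager.getConnection(url, user, pass);
            Statement st = con.createStatement();
            st.executeUpdate("UPDATE accounts SET active = 1");
            st.close();
            con.close();
        } catch (SQLException e) {
            e.printStackTrace(); // the connection is still open here
        }
    }

    // Safe: the connection is closed in finally on every code path
    // (closing the connection also closes its statements).
    static void safe(String url, String user, String pass) {
        Connection con = null;
        try {
            con = DriverManager.getConnection(url, user, pass);
            Statement st = con.createStatement();
            st.executeUpdate("UPDATE accounts SET active = 1");
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            if (con != null) {
                try { con.close(); } catch (SQLException ignore) { }
            }
        }
    }
}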
I'm not familiar with C3P0, so I don't know how connections are handled or where and how your 1,100 limit is imposed; if it (or you) have a connection pool and the 1,100 you refer to is the maximum pool size, then this doesn't sound like the issue, as you'd hit the pool cap before the session cap.
You can look in v$session to confirm that all the sessions are coming from JDBC, and there isn't something else connecting.
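For example, a sketch of such a check over JDBC (run it as a user with privileges on v$session; the URL and credentials are placeholders). Sessions opened by the Oracle thin driver typically show up with a PROGRAM of "JDBC Thin Client", so grouping by program and machine shows where the sessions come from:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class SessionOriginCheck {

    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"; // placeholder
        try (Connection con = DriverManager.getConnection(url, "monitoring_user", "secret");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT program, machine, status, COUNT(*) AS cnt " +
                 "FROM v$session WHERE type = 'USER' " +
                 "GROUP BY program, machine, status ORDER BY cnt DESC");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.printf("%-30s %-30s %-10s %d%n",
                        rs.getString("program"), rs.getString("machine"),
                        rs.getString("status"), rs.getInt("cnt"));
            }
        }
    }
}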
Maybe you want to check if your server runs in dedicated or shared mode (you probably want to switch it to shared mode if you want to decrease the number of active processes).
You can check that by doing
select server from v$session
More information about process architecture
http://docs.oracle.com/cd/B19306_01/server.102/b14220/process.htm
Shared/Dedicated server mode
http://docs.oracle.com/cd/B10501_01/server.920/a96521/manproc.htm

Sleeping connections in SQL Server

Not being a database administrator (even less of a MS database admin :), I have received complaints that a piece of code I've written leaves "sleeping connections" behind in the database.
My code is Java, and uses Apache Commons DBCP for connection pooling. I also use Spring's JdbcTemplate to manage the connection's state, so not closing the connections is out of the question (since the library is doing that for me).
My main question is, from a DBA's point of view, can these connections cause outages or poor performance?
This question is related, currently the settings were left as they were there (infinite active/idle connections in the pool).
Really, to answer your question, an idea of the number of these "sleeping" connections would be good. It also matters whether this server's primary purpose is serving your application, or whether your application is one of many. Also relevant is whether there are multiple instances of your app (eg on multiple web servers), or whether it's just the one.
In my experience, there is little to no overhead associated with idle connections on modern hardware, as long as you don't reach into the hundreds. That said, looking at your previous question, allowing the pool to spawn an unbounded number of connections does not sound wise - I'd recommend setting a cap, even if you set it in the hundreds.
I can tell you from at least one painful situation with leaking connection pools, that having a thousand open connections to a single SQL server is expensive, even if they're idle. I seem to recall the server started losing it (failing to accept new connections, simple queries timing out, etc) when nearing the 2,000-connection range (this was SQL 2000 on mid-range hardware a few years ago).
Hope this helps!
Apache DBCP defaults to a maxIdle setting of 8 and a maxActive setting of 8. This means that up to 8 active connections and 8 idle connections can exist in the pool. DBCP reuses connections when a connection is requested. You can set these values according to your requirements (see the sketch after the link below) and refer to the documentation:
DBCP Configuration - Apache
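A minimal sketch of capping the pool with DBCP (class and setter names are from commons-dbcp 1.x, where the cap is called maxActive; in DBCP 2 it is maxTotal; the SQL Server driver, URL and credentials are placeholders):

import org.apache.commons.dbcp.BasicDataSource;

class DbcpSizedPool {

    static BasicDataSource buildPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver"); // placeholder
        ds.setUrl("jdbc:sqlserver://dbhost:1433;databaseName=appdb");          // placeholder
        ds.setUsername("app_user");                                            // placeholder
        ds.setPassword("secret");                                              // placeholder

        ds.setMaxActive(20); // hard cap on connections handed out at once (maxTotal in DBCP 2)
        ds.setMaxIdle(10);   // at most 10 idle ("sleeping") connections kept in the pool
        ds.setMinIdle(2);    // keep a couple warm so requests don't pay the setup cost
        return ds;
    }
}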

Performance comparison of JDBC connection pools

Does anyone have any information comparing performance characteristics of different ConnectionPool implementations?
Background: I have an application that runs DB updates in background threads against a MySQL instance on the same box. Using the data source com.mchange.v2.c3p0.ComboPooledDataSource would give us occasional SocketExceptions:
com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.net.SocketException
MESSAGE: Broken pipe
STACKTRACE:
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
Increasing the mysql connection timeout increased the frequency of these errors.
These errors have disappeared since switching to a different connection pool (com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource); however, the performance may be worse and the memory profile is noticeably worse (we get fewer, and much larger, GCs than with the c3p0 pool).
Whatever connection pool you use, you need to assume that the connection could be randomly closed at any moment and make your application deal with it.
In the case of a long-standing DB connection on a "trusted" network, what often happens is that the OS applies a time limit to how long connections can be open, or periodically runs some "connection cleanup" code. But the cause doesn't matter too much - it's just part of networking life that you should assume the connection can be "pulled from under your feet", and deal with this scenario accordingly.
So given that, I really can't see the point of a connection pool framework that doesn't allow you to handle this case programmatically.
(Incidentally, this is another of my cases where I'm glad I just write my own connection pool code; no black boxes mysteriously eating memory, and no having to fish around to find the "magic parameter"...)
You may want to have a look at some benchmark numbers up at http://jolbox.com - the site hosting BoneCP, a connection pool that is faster than both C3P0 and DBCP.
I had this error pop up with MySQL & c3p0 as well - I tried various things and eventually made it go away. I can't remember exactly, but what might have solved it was the autoReconnect flag, e.g.
url="jdbc:mysql://localhost:3306/database?autoReconnect=true"
Have you tried Apache DBCP? I don't know about c3p0, but DBCP can handle idle connections in different ways (see the sketch after this list):
It can remove idle connections from the pool
It can run a query on idle connections after a certain period of inactivity
It can also test if a connection is valid just before giving it to the application, by running a query on it; if it gets an exception, it discards that connection and tries with another one (or creates a new one if it can). Way more robust.
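For reference, a sketch of those three behaviours with DBCP's BasicDataSource (setter names from commons-dbcp 1.x; the MySQL URL and credentials are placeholders):

import org.apache.commons.dbcp.BasicDataSource;

class DbcpIdleHandling {

    static BasicDataSource buildPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost:3306/database"); // placeholder
        ds.setUsername("app_user");                        // placeholder
        ds.setPassword("secret");                          // placeholder

        // 1. Remove idle connections from the pool after a period of inactivity.
        ds.setTimeBetweenEvictionRunsMillis(60_000);  // run the evictor every minute
        ds.setMinEvictableIdleTimeMillis(5 * 60_000); // evict connections idle for 5+ minutes

        // 2. Run a query on idle connections after a period of inactivity.
        ds.setTestWhileIdle(true);
        ds.setValidationQuery("SELECT 1");

        // 3. Test a connection just before handing it to the application;
        //    if the test fails, the connection is discarded and another one is used.
        ds.setTestOnBorrow(true);
        return ds;
    }
}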
Broken pipe
That roughly means that the other side has aborted/timed out/closed the connection. Aren't you keeping connections open too long? Ensure that your code properly closes all JDBC resources (Connection, Statement and ResultSet) in the finally block.
Increasing the mysql connection timeout increased the frequency of these errors.
Take care that this timeout doesn't exceed the DB's own timeout setting.
