Spring UncategorizedSQLException: ORA-01012 - java

I am trying to retrieve data from an Oracle database using JDBC (ojdbc14.jar). I have a limited number of concurrent connections to the database, and these connections are managed by a WebSphere connection pool.
Sometimes when I make the call I see an UncategorizedSQLException thrown in my logs with one of the following Oracle codes:
ORA-01012 (not logged on)
ORA-17410 (connection timed out, socket empty)
ORA-02396 (exceeded maximum idle time, please connect again)
Other times I get no exceptions and it works fine.
Anyone understand what might be happening here?
In WebSphere I have my statement cache size set to 10. I'm not sure if that is relevant in this situation, when it looks like the connection is being dropped.

It looks like the database is deciding to drop the connection. It's a good idea to write your code in a way that doesn't require a connection to be held open forever. A better choice is to have the program obtain a connection, do its work, and disconnect. This eliminates the problem of the database deciding to drop the connection due to inactivity, server overload, or whatever, and the program then needing to detect this and make a reasonable stab at reconnecting.
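For illustration, a minimal sketch of that pattern against a container-managed DataSource (the JNDI name jdbc/MyDS, the table and the query are made up, not from the question): each call borrows a connection from the WebSphere pool, uses it, and hands it straight back, so nothing is held open across the idle periods that trigger ORA-02396.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CustomerLookup {

    public String findName(long customerId) throws Exception {
        // Look up the pool-backed DataSource that the application server manages.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyDS");
        Connection conn = ds.getConnection();   // borrow a connection for this call only
        try {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT name FROM customers WHERE id = ?");
            try {
                ps.setLong(1, customerId);
                ResultSet rs = ps.executeQuery();
                return rs.next() ? rs.getString(1) : null;
            } finally {
                ps.close();
            }
        } finally {
            conn.close();                        // return the connection to the pool immediately
        }
    }
}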
I hope this helps.

Related

Do multiple open/close connections to SQL via Java affect performance?

I am working on some basic code that runs a query against a database for an alert system. I understand that the database at work (Oracle based) automatically creates a pool of connections, and I wanted to know: if I connect, execute the query and close the connection every 5-15 seconds, would performance drop dramatically? Is that the correct way to do it, or should I leave the connection open until the infinite loop ends?
Someone at work is telling me that closing the connection would result in the database having to look up the query from scratch each time, whereas if I leave it open the query will be in a cache somewhere on the database.
ResultSet rs1 = MyStatement.executeQuery(QUERY_1);
while (rs1.next()) {
    // do something
}
rs1.close();

rs1 = MyStatement.executeQuery(QUERY_2);
while (rs1.next()) {
    // do something
}
rs1.close();
Oracle can't pool connections; this has to be done on the client side.
You aren't closing any connections in the code example you posted.
Opening a connection is a rather slow process, so either use a fixed set of connections (typically of size one for things like fat-client applications) or a connection pool that keeps open connections around for reuse. See this question for viable connection pool options: Connection pooling options with JDBC: DBCP vs C3P0. If you are running on an application server it will probably already provide a connection pooling solution; check the documentation.
Closing resources like the ResultSet in your code, or the Connection (not in your code), should be done in a finally block. Doing the closing (and the necessary exception handling) correctly is actually rather difficult. Consider using something like Spring's JdbcTemplate (http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/jdbc.html); a minimal sketch follows after these points.
If you aren't using something like VPD (Virtual Private Database), Oracle will cache execution plans of statements no matter whether they come from the same connection or not. Recently accessed data is also kept in memory, which makes queries over similar data fast. So the performance decrease really comes from the latency of establishing the connection. There is some overhead on the DB side for creating connections, which in theory could affect the performance of the whole database, but it is likely to be irrelevant.
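Picking up the JdbcTemplate suggestion above, here is a minimal sketch (the DataSource would come from your pool; the alerts table and message column are invented for the example). The template borrows a connection per call and closes the Statement, ResultSet and Connection for you, translating SQLExceptions along the way.

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

public class AlertDao {

    private final JdbcTemplate jdbcTemplate;

    public AlertDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public List<String> unacknowledgedAlerts() {
        // All resource handling happens inside the template.
        return jdbcTemplate.query(
                "SELECT message FROM alerts WHERE acknowledged = 0",
                new RowMapper<String>() {
                    public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                        return rs.getString("message");
                    }
                });
    }
}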
Every time a client connects to the database that connection has to be authenticated. This is obviously an overhead. Furthermore the database listener can only process a limited number of connections at the same time; if the number of simultaneous connection attempts exceeds that threshold they get put into a queue. That is also an overhead.
So the general answer is, yes, opening and closing connections is an expensive operation.
It is always beneficial to use DB connection pools, especially if you are running in a Java EE app server. Using the connection pool that comes out of the box with the Java EE app server is also a good option, as it will have been optimized and performance-tested by the app server's development team.

Relationship between JDBC sessions and Oracle processes

We are having a problem with too many Oracle processes being created (over 2,000) when connections are limited to 1,100 (using C3P0).
Two questions:
What's the relationship between an Oracle process and a JDBC connection? Is one Oracle process created for each session? Is one created for every JDBC statement? Or is there no relationship at all?
Have you ever faced this scenario, where more processes are being created than there are JDBC connections?
Any comment would be really appreciated.
There is one session per connection. It sounds like you have a connection leak: somewhere you're opening a new connection and not closing it properly. One possibility is that you open, use and close a connection inside a try block and are handling an exception in a catch block, or returning early for some other reason. If so, you need to make sure the close is done in a finally block or it may not happen, leaving the connection (and thus the session) hanging. Opening two connections in the same scope without an explicit close in between can also do this.
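To illustrate the leak described above, a sketch assuming a javax.sql.DataSource in front of the pool (the table name is made up): the first method loses the connection on any exception, the second always returns it.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class LeakExample {

    // Leaky: if executeQuery() throws, close() is never reached, the pooled
    // connection is never returned, and the Oracle session behind it stays open.
    int countLeaky(DataSource ds) throws Exception {
        Connection conn = ds.getConnection();
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders");
        rs.next();
        int count = rs.getInt(1);
        rs.close();
        stmt.close();
        conn.close();
        return count;
    }

    // Safe: the finally block runs on every exit path (normal return, early
    // return, or exception), so the connection always goes back to the pool.
    int countSafely(DataSource ds) throws Exception {
        Connection conn = null;
        try {
            conn = ds.getConnection();
            Statement stmt = conn.createStatement();
            try {
                ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders");
                rs.next();
                return rs.getInt(1);
            } finally {
                stmt.close();
            }
        } finally {
            if (conn != null) {
                conn.close();
            }
        }
    }
}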
I'm not familiar with C3P0, so I don't know how it handles connections, or where and how your 1,100 limit is imposed; if it (or you) maintain a connection pool and the 1,100 you refer to is the maximum pool size, then this doesn't sound like the issue, as you'd hit the pool cap before the session cap.
You can look in v$session to confirm that all the sessions are coming from JDBC, and there isn't something else connecting.
Maybe you want to check if your server runs in dedicated or shared mode (you probably want to switch it to shared mode if you want to decrease the number of active processes).
You can check that by doing
select server from v$session
More information about process architecture
http://docs.oracle.com/cd/B19306_01/server.102/b14220/process.htm
Shared/Dedicated server mode
http://docs.oracle.com/cd/B10501_01/server.920/a96521/manproc.htm

How to fix error: [BEA][SQLServer JDBC Driver]No more data available to read

My Java application uses DB connection pooling. One piece of functionality started failing today with this error:
[BEA][SQLServer JDBC Driver]No more data available to read
This doesn't occur daily. Once I restart my application server things look fine for some days, and then the error comes back.
Has anyone encountered this error? The reasons might vary, but I would like to know the various possibilities so I can mitigate my issue.
Is it possible that the database or network connection has briefly had an outage? You might expect any currently open result sets then to become invalid with resulting errors.
I've never seen this particular error, but then I don't work with BEA or SQL Server; a quick Google search does show other folks suggesting such a cause, though.
When you're using a connection pool, if you do get such a glitch, then all connections in the pool become "stale" or invalid. My application server (WebSphere) has the option to discard the entire connection pool after particular errors are detected. The result then is that one unlucky request sees the error, but subsequent requests get a new connection and recover. If you don't discard the whole pool, you get a failure each time a stale connection is used and discarded.
I suggest you investigate a) whether your app server has such a capability, and b) how your application responds if the database is bounced; if that replicates the error then maybe you've found the cause.

java mysql server getting slower

I manage my connections with a JDBC connection pool (BoneCP) and I always close the Connection, the PreparedStatement and the ResultSet.
But when my program has been running for several days, the MySQL server gets slower and slower (for testing, I have my program insert an entry every second). After 2 days there were several seconds between the entries, which is why I think the MySQL server is getting slower and can no longer handle the incoming transactions. Am I right?
The MySQL server also uses much more RAM and does not release the resources. Does anyone know how I could find the error causing this behaviour? Thanks in advance!
Use MySQL Workbench to detect open connections. It also gives you a host of options for monitoring the performance of your database server.
Also [I might be mistaken about this part of your question], when you say
I use connection pooling
why do you close the connection? Isn't that the opposite of the purpose of connection pooling?

Performance comparison of JDBC connection pools

Does anyone have any information comparing performance characteristics of different ConnectionPool implementations?
Background: I have an application that runs DB updates in background threads against a MySQL instance on the same box. Using the DataSource com.mchange.v2.c3p0.ComboPooledDataSource would give us occasional SocketExceptions:
com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.net.SocketException
MESSAGE: Broken pipe
STACKTRACE:
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
Increasing the mysql connection timeout increased the frequency of these errors.
These errors have disappeared since switching to a different connection pool (com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource); however, performance may be worse, and the memory profile is noticeably worse (we get fewer, and much larger, GCs than with the c3p0 pool).
Whatever connection pool you use, you need to assume that the connection could be randomly closed at any moment and make your application deal with it.
In the case of a long-standing DB connection on a "trusted" network, what often happens is that the OS applies a time limit to how long connections can be open, or periodically runs some "connection cleanup" code. But the cause doesn't matter too much -- it's just part of networking life that you should assume the connection can be "pulled from under your feet", and deal with this scenario accordingly.
So given that, I really can't see the point of a connection pool framework that doesn't allow you to handle this case programmatically.
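As one sketch of handling it programmatically (the class, table and column names here are invented, not any framework's API): attempt the work, and if it fails with an SQLException that suggests a dead connection, retry once on a freshly borrowed connection instead of failing the whole request.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RetryingUpdate {

    private final DataSource dataSource; // the pool's DataSource

    public RetryingUpdate(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // If the first attempt hits a broken pipe / server-side timeout, the
    // second attempt runs on a freshly borrowed connection.
    public int recordHeartbeat(long id) throws SQLException {
        try {
            return doUpdate(id);
        } catch (SQLException probablyStaleConnection) {
            return doUpdate(id);
        }
    }

    private int doUpdate(long id) throws SQLException {
        Connection conn = dataSource.getConnection();
        try {
            PreparedStatement ps = conn.prepareStatement(
                    "UPDATE heartbeats SET last_seen = NOW() WHERE id = ?");
            try {
                ps.setLong(1, id);
                return ps.executeUpdate();
            } finally {
                ps.close();
            }
        } finally {
            conn.close(); // hand the connection back to the pool
        }
    }
}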
(Incidentally, this is another of my cases where I'm glad I just write my own connection pool code; no black boxes mysteriously eating memory, and no having to fish around to find the "magic parameter"...)
You may want to have a look at some benchmark numbers up at http://jolbox.com - the site hosting BoneCP, a connection pool that is faster than both C3P0 and DBCP.
I had this error pop up with MySQL and c3p0 as well. I tried various things and eventually made it go away. I can't remember exactly, but what might have solved it was the autoReconnect flag, a la
url="jdbc:mysql://localhost:3306/database?autoReconnect=true"
Have you tried Apache DBCP? I don't know about c3p0, but DBCP can handle idle connections in different ways:
It can remove idle connections from the pool
It can run a query on idle connections after a certain period of inactivity
It can also test whether a connection is valid just before giving it to the application, by running a query on it; if that throws an exception, it discards the connection and tries another one (or creates a new one if it can). That's way more robust.
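Roughly, those three behaviours map onto commons-dbcp BasicDataSource settings like the following sketch (driver, URL and credentials are placeholders):

import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;

public class PoolFactory {

    public static DataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost:3306/database");
        ds.setUsername("app");
        ds.setPassword("secret");

        ds.setValidationQuery("SELECT 1");                    // query used to test a connection
        ds.setTestOnBorrow(true);                             // validate right before handing it out
        ds.setTestWhileIdle(true);                            // periodically validate idle connections
        ds.setTimeBetweenEvictionRunsMillis(5 * 60 * 1000L);  // evictor runs every 5 minutes
        ds.setMinEvictableIdleTimeMillis(30 * 60 * 1000L);    // drop connections idle for 30+ minutes
        return ds;
    }
}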
Broken pipe
That roughly means that the other side has aborted/timed out/closed the connection. Aren't you keeping connections open for too long? Ensure that your code properly closes all JDBC resources (Connection, Statement and ResultSet) in a finally block.
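For example, on Java 7 and later a try-with-resources block does this for you; a sketch with made-up table and column names:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderQueries {

    // Each resource is closed automatically, in reverse order, whether the
    // block completes normally or throws.
    public int countOpenOrders(DataSource ds) throws SQLException {
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT COUNT(*) FROM orders WHERE status = ?")) {
            ps.setString(1, "OPEN");
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}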
Increasing the mysql connection timeout increased the frequency of these errors.
Take care that this timeout doesn't exceed the DB's own timeout setting.
