Java MySQL server getting slower

I manage my connections with a JDBC connection pool (BoneCP), and I always close the Connection, the PreparedStatement and the ResultSet.
But when my program has been running for several days, the MySQL server gets slower and slower (for testing, I let my program insert an entry every second). After 2 days there were several seconds between the entries, which is why I think the MySQL server is getting slower and cannot handle the incoming transactions. Am I right?
The MySQL server also uses much more RAM and does not release the resources. Does anyone know how I could find the error causing this behaviour? Thanks in advance!

Use MySQL Workbench to detect open connections. It also gives you a host of options for seeing the performance of your database server.
Also [I might be mistaken about this part of your question], when you say
I use connection pooling
why do you close the connection? Isn't that the opposite of the purpose of connection pooling?
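For reference, with most pools (BoneCP included) calling close() on a connection obtained from the pool normally returns it to the pool rather than physically closing it, so closing in a finally block or try-with-resources is still the right habit. A minimal sketch of that pattern, assuming a BoneCP data source and hypothetical connection details:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import com.jolbox.bonecp.BoneCPDataSource;

public class PooledInsertExample {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details; adjust to your environment.
        BoneCPDataSource ds = new BoneCPDataSource();
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/testdb");
        ds.setUsername("app");
        ds.setPassword("secret");

        // try-with-resources closes the statement and the connection in order;
        // close() on a pooled connection hands it back to the pool instead of
        // tearing down the underlying MySQL session.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO events (created_at) VALUES (NOW())")) {
            ps.executeUpdate();
        }

        ds.close(); // shut the whole pool down when the application exits
    }
}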

Related

Connecting to multiple databases using the same JDBC driver: performance

I have a requirement to write a Java web application that connects to Oracle 11g databases (currently 10-12 different Oracle databases) and reads some data from them (all SELECT queries).
I iterate over an ArrayList of database details; for each entry I connect to that database, fire the SELECT query (the same query for all databases), fetch the records, put them into one global collection, close the connection, and continue the same process over the loop.
Currently I am using Executors to connect to the multiple databases,
e.g. ExecutorService executor = Executors.newFixedThreadPool(20);
One thing surprises me: the first database connection is logged immediately, but the subsequent connections are logged only after 60 seconds, so I am not getting why the remaining connections take 60 seconds or more.
Logically, each connection should take about the same time as the first one.
Please suggest performance improvement for this application.
Opening a database connection is an expensive operation; if possible you should connect to each database once and reuse the connection for all queries made to that database (also known as connection pooling). It's not clear if this is what you're already doing.
Another thing that seems strange is that you have made the openConnection method synchronized. I see no need for doing that, and depending on how your code is structured it may mean that only one thread can be opening a connection at a given time. Since connecting takes a long time you'd want each thread to make its connection in parallel.
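A rough sketch of that idea: one task per database submitted to the executor, with nothing synchronized around obtaining the connection, so all connections are made in parallel. The DataSource list, query handling and return type here are illustrative, not taken from the question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.sql.DataSource;

public class MultiDbQuery {
    // One pooled DataSource per database, created once at startup and reused.
    private final List<DataSource> dataSources;

    public MultiDbQuery(List<DataSource> dataSources) {
        this.dataSources = dataSources;
    }

    public List<String> fetchAll(String sql) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(20);
        List<Future<List<String>>> futures = new ArrayList<>();
        // Submit one task per database; nothing is synchronized here,
        // so the connections can be obtained in parallel.
        for (DataSource ds : dataSources) {
            futures.add(executor.submit(() -> query(ds, sql)));
        }
        List<String> combined = new ArrayList<>();
        for (Future<List<String>> f : futures) {
            combined.addAll(f.get()); // gather results into one global list
        }
        executor.shutdown();
        return combined;
    }

    private List<String> query(DataSource ds, String sql) throws Exception {
        List<String> rows = new ArrayList<>();
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                rows.add(rs.getString(1)); // illustrative: read the first column
            }
        }
        return rows;
    }
}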

Do multiple open/close connections to SQL via Java affect performance?

I am working on some basic code that runs a query against a database for an alert system. I understand that the database at work (Oracle based) automatically creates a pool of connections, and I wanted to know: if I connect, execute the query and close the connection every 5-15 seconds, would the performance drop dramatically, and is that the correct way to do it, or would I have to leave the connection open until the infinite loop ends?
I have someone at work telling me that closing the connection would result in the database having to look up the query from scratch each time, but that if I leave it open the query will be in a cache somewhere on the database.
ResultSet rs1 = MyStatement.executeQuery(QUERY_1);
while (rs1.next()) {
    // do something
}
rs1.close();

rs1 = MyStatement.executeQuery(QUERY_2);
while (rs1.next()) {
    // do something
}
rs1.close();
Oracle can't pool connections; this has to be done on the client side.
You aren't closing any connections in the code example you posted
Opening a connection is a rather slow process, so use either a fixed set of connections (typically the set has size one for things like fat-client applications) or a connection pool that keeps open connections for reuse. See this question for viable connection pool options: Connection pooling options with JDBC: DBCP vs C3P0. If you are running on an application server, it will probably already provide a connection pooling solution; check the documentation.
Closing resources like the ResultSet in your code, or a Connection (not in your code), should be done in a finally block; there is a sketch of that pattern after this answer. Doing the closing (and the necessary exception handling) correctly is actually rather difficult. Consider using something like Spring's JdbcTemplate (http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/jdbc.html).
If you aren't using features like VPD (Virtual Private Database), Oracle will cache execution plans of statements regardless of whether they come from the same connection or not. Recently accessed data is also kept in memory to make queries against similar data fast. So the performance decrease really comes from the latency of establishing the connection. There is some overhead on the DB side for creating connections, which in theory could affect the performance of the whole database, but it is likely to be irrelevant.
Every time a client connects to the database, that connection has to be authenticated, which is obviously an overhead. Furthermore, the database listener can only process a limited number of connections at the same time; if the number of simultaneous connection attempts exceeds that threshold, they get put into a queue. That is also an overhead.
So the general answer is, yes, opening and closing connections is an expensive operation.
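Here is a sketch of the finally-block pattern mentioned above, applied to the alert-loop case. The query text, the DataSource and the class name are placeholders; the point is the guarded closes in reverse order (try-with-resources is the simpler alternative on Java 7+):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class AlertQuery {

    // Illustrative query; the real QUERY_1 / QUERY_2 are not shown in the question.
    private static final String QUERY_1 = "SELECT id, message FROM alerts";

    public void runOnce(DataSource ds) throws SQLException {
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            con = ds.getConnection();          // borrowed from the pool
            ps = con.prepareStatement(QUERY_1);
            rs = ps.executeQuery();
            while (rs.next()) {
                // do something with the row
            }
        } finally {
            // Close in reverse order; each close is guarded so one failure
            // does not prevent the remaining resources from being released.
            if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
            if (ps != null) try { ps.close(); } catch (SQLException ignored) {}
            if (con != null) try { con.close(); } catch (SQLException ignored) {}
        }
    }
}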
It is always beneficial to use DB connection pools, especially if you are using a Java EE app server. Using the connection pool that comes out of the box with the Java EE app server is optimal, as it will have been tuned and performance-tested by the app server's development team.

Relationship between JDBC sessions and Oracle processes

We are having a problem with too many Oracle processes being created (over 2,000) when connections are limited to 1,100 (using C3P0).
Two questions:
What's the relationship between an Oracle process and a JDBC connection? Is one Oracle process created for each session? Is one created for every JDBC statement? No relationship at all?
Did you ever face this scenario, where you are creating more processes than JDBC connections?
Any comment would be really appreciated.
There is one session per connection. This sounds like you have a connection leak: somewhere you're opening a new connection and not closing it properly. One possibility is that you open, use and close a connection inside a try block and are handling an exception in a catch, or returning early for some other reason. If so, you need to make sure the connection close is done in a finally block, or it may not happen, leaving the connection (and thus the session) hanging; a sketch of both the leaky and the safe pattern follows after this answer. Opening two connections in the same scope without an explicit close in between can also do this.
I'm not familiar with C3P0, so I don't know how connections are handled, or where and how your 1,100 limit is imposed; if it (or you) has a connection pool and the 1,100 you refer to is the maximum pool size, then this doesn't sound like the issue, as you'd hit the pool cap before the session cap.
You can look in v$session to confirm that all the sessions are coming from JDBC, and there isn't something else connecting.
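To illustrate the leak pattern described above: if the close sits at the end of the try block, any exception (or early return) skips it and the session stays open on the Oracle side. The fix is to close in finally (or use try-with-resources). The class, method and query names here are made up for the example:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class LeakExample {

    // Leaky: if executeQuery() throws, the close() calls at the end of the try
    // block are skipped and the connection (and its Oracle session) hangs.
    static void leaky(DataSource ds) {
        try {
            Connection con = ds.getConnection();
            PreparedStatement ps = con.prepareStatement("SELECT 1 FROM dual");
            ps.executeQuery();
            ps.close();
            con.close();
        } catch (SQLException e) {
            e.printStackTrace(); // connection is never closed on this path
        }
    }

    // Safe: the finally blocks run on every path, so the session is always
    // released even when the query fails.
    static void safe(DataSource ds) throws SQLException {
        Connection con = ds.getConnection();
        try {
            PreparedStatement ps = con.prepareStatement("SELECT 1 FROM dual");
            try {
                ps.executeQuery();
            } finally {
                ps.close();
            }
        } finally {
            con.close();
        }
    }
}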
Maybe you want to check if your server runs in dedicated or shared mode (you probably want to switch it to shared mode if you want to decrease the number of active processes).
You can check that by doing
select server from v$session
More information about process architecture
http://docs.oracle.com/cd/B19306_01/server.102/b14220/process.htm
Shared/Dedicated server mode
http://docs.oracle.com/cd/B10501_01/server.920/a96521/manproc.htm

Java MySQL connection time

I have a Java program running 24/7. It accesses a MySQL database every 3 seconds, but only from 9 AM to 3 PM. In this case, when should I open and close the MySQL connection?
Should I open and close it every 3 seconds?
Should I open it at 9 AM and close it at 3 PM?
Should I open it once when the program starts and never close it, reconnecting only when the connection is dropped automatically and an exception is thrown?
Why don't you simply use a connection pool? If that is too tedious, then since the connection will be used frequently you can reuse the same one, IMHO.
While it is true that setting up and tearing down a MySQL connection is relatively cheap (when compared to, for example, Oracle), doing it every 3 seconds is a waste of resources. I'd cache the connection and save the overhead of creating a new database connection every time.
This depends very much on the situation. Do you connect over a WAN? Is the MySQL server shared with other applications, or will you be the only user (or at least will your application create most of the load)? If the database is mostly yours and it is near enough, there is little benefit in setting up and tearing down the connection daily.
This is what most applications do and this is what I'd recommend you do by default.
If you do not want to leave connections open overnight, you might be able to configure your connection pool to open connections on demand and close them when they have been idle for a certain period of time -- say, 15 minutes. This would give you the benefit of being able to query the database whenever you wish without keeping too many idle connections.
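For instance, with c3p0 (mentioned elsewhere on this page) the pool can be told to keep no idle connections overnight and to close connections that have been idle for 15 minutes. The property values below are illustrative assumptions; check them against the c3p0 version you use:

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class OnDemandPool {
    public static ComboPooledDataSource newPool() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");        // assumed MySQL driver class
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/appdb"); // hypothetical URL/credentials
        ds.setUser("app");
        ds.setPassword("secret");

        ds.setInitialPoolSize(0);   // nothing opened until first use
        ds.setMinPoolSize(0);       // no connections kept open overnight
        ds.setAcquireIncrement(1);  // open on demand, one at a time
        ds.setMaxPoolSize(5);
        ds.setMaxIdleTime(15 * 60); // close connections idle for 15 minutes (value in seconds)
        return ds;
    }
}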

Spring UncategorizedSQLException: ORA-01012

I am trying to retrieve data from an Oracle database using JDBC (ojdbc14.jar). I have a limited number of concurrent connections when connecting to the database, and these connections are managed by a WebSphere connection pool.
Sometimes when I make the call I see an UncategorizedSQLException thrown in my logs with one of the following Oracle codes:
ORA-01012 (not logged in)
ORA-17410 (connection timed out, socket empty)
ORA-02396 (exceeded maximum idle time, please connect again)
Other times I get no exceptions and it works fine.
Anyone understand what might be happening here?
In WebSphere I have my statement cache size set to 10. Not sure if it is relevant in this situation, when it looks like the connection is being dropped.
It looks like the database is deciding to drop the connection. It's a good idea to write your code in a way that doesn't require that a connection be held forever. A better choice is to have the program connect to the database, do its work, and disconnect. This eliminates the problem of the database deciding to disconnect the application due to inactivity/server overload/whatever, and the program needing to figure this out and make a reasonable stab at reconnecting.
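A sketch of that connect-do-work-disconnect approach, with a single retry when the database has dropped the connection since the last call. The query, retry count and class name are invented for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RetriedLookup {

    // Illustrative query and retry policy; adapt to your schema and needs.
    private static final String QUERY = "SELECT status FROM orders WHERE id = ?";
    private static final int MAX_ATTEMPTS = 2;

    public String fetchStatus(DataSource ds, long orderId) throws SQLException {
        SQLException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            // A fresh connection is requested for each unit of work, so a
            // connection the database dropped for inactivity (ORA-01012 etc.)
            // only costs one retry instead of a long-lived broken handle.
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(QUERY)) {
                ps.setLong(1, orderId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            } catch (SQLException e) {
                last = e; // stale or dropped connection: try once more with a new one
            }
        }
        throw last;
    }
}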
I hope this helps.
