I am writing a small program that is going to be launched on an Apache web server (not Tomcat) through CGI in response to a POST request.
The program does the following:
read the XML sent via HTTP in the request
execute a stored procedure in a database with the data extracted from the XML
return the result of the stored procedure as the response to the POST request
The database is Oracle. I use JDBC OCI to access it.
Class.forName("oracle.jdbc.OracleDriver");
String dbCS = "jdbc:oracle:oci:@//ip:port/service";
Connection conn = DriverManager.getConnection(dbCS, dbUserId, dbPwd);
CallableStatement cs = conn.prepareCall("{ call ? := my_pkg.my_sp(?,?,?,?)}");
cs.registerOutParameter(pReturnValue, OracleTypes.NUMBER);
cs.setInt("p1", p1);
cs.setString("p2", p2);
cs.setString("p3", p3);
cs.registerOutParameter("p_out", Types.VARCHAR);
try {
cs.executeQuery();
return cs.getString(pReqResponse);
} finally {
try {
cs.close();
} catch (SQLException ex) {
//....
}
}
While handling a single request, it worked fine (the whole program finished in 2 seconds). However, when I sent multiple POST requests at once, all of them stalled for an amount of time that depended on the number of requests (approximately 10 seconds for 10 requests, 15 seconds for 15 requests).
I tried to determine which part of the code caused the delay. It turned out to be these two lines:
Connection conn = DriverManager.getConnection(dbConnectionString, dbUserId, dbPwd);
CallableStatement cs = conn.prepareCall("{ call ? := my_pkg.my_sp(?,?,?,?)}");
The execution itself finished almost immediately.
Why is this so?
P.S.: I ran the same experiment on Windows 7. Of course, it wasn't launched from a web server, but just as a simple console process. It also had to read the XML from a file on the hard drive. All concurrently launched instances of the program finished within a second, all together.
What prevents it from working as fast on Linux through Apache?
Based on comments
I tried to set pooling properties for my connection, but all in vain. I tried the following:
While specifying the user ID and password in the URL:
jdbc:oracle:oci:login/password@//ip:port/service
I tried to set the connection properties:
Properties p = new Properties();
p.setProperty("Pooling", "true");
p.setProperty("Min Pool Size", "1");
p.setProperty("Max Pool Size", "10");
p.setProperty("Incr Pool Size", "4");
Connection conn = DriverManager.getConnection(dbConnectionString, p);
I tried to use OCI Connection Pooling:
OracleOCIConnectionPool cpool = new OracleOCIConnectionPool();
cpool.setUser("user");
cpool.setPassword("pwd");
cpool.setURL(dbConnectionString);
Properties p = new Properties();
p.put(OracleOCIConnectionPool.CONNPOOL_MIN_LIMIT, "1");
p.put(OracleOCIConnectionPool.CONNPOOL_MAX_LIMIT, "5");
p.put(OracleOCIConnectionPool.CONNPOOL_INCREMENT, "2");
p.put(OracleOCIConnectionPool.CONNPOOL_TIMEOUT, "10");
p.put(OracleOCIConnectionPool.CONNPOOL_NOWAIT, "true");
cpool.setPoolConfig(p);
Connection conn = (OracleOCIConnection) cpool.getConnection();
I tried to use the apache DBCP component:
basicDataSource = new BasicDataSource();
basicDataSource.setUsername("user");
basicDataSource.setPassword("pwd");
basicDataSource.setDriverClassName("oracle.jdbc.OracleDriver");
basicDataSource.setUrl(dbConnectionString);
Connection conn = basicDataSource.getConnection();
The behaviour remained the same, i.e. a big delay on getConnection in all concurrent requests.
All these attempts seem to me to address some other problem, since in my case all connections are established from separate processes, and it is not obvious how connections from one pool could be shared among different processes (am I mistaken here?).
What options do I have? Or did I perhaps do something wrong?
Also, I should say I am quite new to Java in general, so I may be missing some basic things.
Could this be an OS or web server issue? Perhaps something should be set up there, not in the code?
I also tried to use the thin driver instead of OCI. However, it behaved even more strangely: the first request finished in a second, while the second was delayed for a minute.
"Poor concurrency with Oracle JDBC drivers" describes a problem similar to mine.
In the end we found out that the processes launched by Apache through CGI occupied 100% of the CPU (and the lion's share of memory), so they simply did not have enough resources. Unfortunately, I do not know why a very simple and basic program (reading an XML file and establishing one connection to the DB to execute a stored procedure), launched simultaneously only 20 times, eats all the resources.
However, the solution turned out to be very obvious indeed. I refactored it into a Java web application using servlets, we deployed it on Apache Tomcat, and, like magic, it started working as expected, with no visible effect on resources.
I think the problem is with CGI. When you make a CGI request, it starts a new OS process to handle the request. Each new request also runs in a new JVM, so connection pooling is not an option.
Even so, it should be quicker than that to get a connection. Maybe Oracle itself has config options governing the number of concurrent connections you can have, but I'm no Oracle expert.
Related
Vert.x outlines that this is the normal way to connect to a database here: https://vertx.io/docs/vertx-jdbc-client/java/
String databaseFile = "sqlite.db";
JDBCPool pool = JDBCPool.pool(
this.context.getVertx(),
new JDBCConnectOptions()
.setJdbcUrl("jdbc:sqlite:".concat(databaseFile)),
new PoolOptions()
.setMaxSize(1)
.setConnectionTimeout(CONNECTION_TIMEOUT)
);
This application I am writing has interprocess communication, so I want to use WAL mode with synchronous=NORMAL to avoid heavy disk usage. The WAL pragma (PRAGMA journal_mode=WAL) is persistent and applies to the database file itself, so I don't need to worry about it on application startup. However, the synchronous pragma is set per connection, so I need to set that when the application starts. Currently that looks like this:
// await this future
pool
.preparedQuery("PRAGMA synchronous=NORMAL")
.execute()
I can confirm that later on the synchronous pragma is set on the database connection.
pool
.preparedQuery("PRAGMA synchronous")
.execute()
.map(rows -> {
for (Row row : rows) {
System.out.println("pragma synchronous is " + row.getInteger("synchronous"));
}
})
and since I enforce a single connection in the pool, this should be fine. However, I can't help but feel that there is a better way of doing this.
As a side note, I chose a single connection because SQLite is synchronous in nature: only one write to the database can ever happen at a time. Creating write contention within a single application sounds detrimental rather than helpful, and I have designed my application to have as few concurrent writes within a single process as possible, though inter-process concurrency is real.
So these aren't definitive answers, but I have tried a few other options and want to outline them here.
For instance, Vert.x can instantiate a SQLClient without an explicit pool:
JsonObject config = new JsonObject()
.put("url", "jdbc:sqlite:"+databaseFile)
.put("driver_class", "org.sqlite.JDBC")
.put("max_pool_size", 1);
Vertx vertx = Vertx.vertx();
SQLClient client = JDBCClient.create(vertx, config);
though this still uses a connection pool, so I have to make the same adjustments to set a single connection in the pool, so that the pragma sticks.
There is also a SQLiteConfig class from the SQLite library, but I have no idea how to connect that to the Vert.x JDBC wrappers:
org.sqlite.SQLiteConfig config = new org.sqlite.SQLiteConfig();
config.setSynchronous(SynchronousMode.NORMAL);
Is a pool required with Vert.x? I did try running the SQLite JDBC driver directly, without a Vert.x wrapper, but that ran into all kinds of SQLITE_BUSY exceptions.
I'm creating an app for library management with Java and MySQL (JDBC to connect with the DB), and I have a problem. I checked a lot of topics, books, and websites, but I didn't find a good answer for me. Is this a good way to deal with connections? I think that one connection for the entire app is a good option in this case. My idea is that every function in every class that needs a Connection object will take a connection parameter. In the main class I'll create a manager object, 'Man' for example, pass Man.getMyConn() as this parameter to every constructor etc., and call Man.close() when the main frame is closed. Is it a bad idea? Maybe I should use the singleton pattern or a connection pool?
Sorry for my English, I'm still learning.
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class manager {
private Connection myConn;
public manager() throws Exception {
Properties props = new Properties();
props.load(new FileInputStream("app.properties"));
String user = props.getProperty("user");
String password = props.getProperty("password");
String dburl = props.getProperty("dburl");
myConn = DriverManager.getConnection(dburl, user, password);
System.out.println("DB connection successful to: " + dburl);
}
public Connection getMyConn() {
return myConn;
}
//close class etc.
}
Usually not. The answer depends on the type of application. If you're making a web application, you should definitely go with a connection pool. If you're making e.g. a desktop application (where only one user accesses it at a time), you can open and close a connection for each request.
I have working applications that do it your way. As @Branislav says, it's not adequate if you want to do multiple concurrent queries. There's also a danger that the connection to the database might be lost, and you would need to restart your application to get a new one, unless you write code to catch that and recreate the connection.
Using a singleton would be overcomplicated. Having a getConnection() method (as you have done) is very important as it means you can easily change your code to use a pool later if you find you need to.
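That "change to use a pool later" idea can be illustrated with a toy pool. This is a sketch only: the ToyPool class is invented for the example, and a real application should use a battle-tested pool such as DBCP or C3P0 rather than anything like this.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Toy illustration of what a connection pool does: objects are created
// up front, borrowed by callers, and returned for reuse instead of being
// destroyed. A real Connection would replace StringBuilder here.
public class ToyPool<T> {
    private final BlockingQueue<T> idle;

    public ToyPool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // "connections" created eagerly
        }
    }

    // Borrow an object, or null if the pool is exhausted (no-wait style).
    public T borrow() {
        return idle.poll();
    }

    // "Closing" returns the object to the pool instead of destroying it.
    public void release(T obj) {
        idle.add(obj);
    }

    public int available() {
        return idle.size();
    }

    public static void main(String[] args) {
        ToyPool<StringBuilder> pool = new ToyPool<>(2, StringBuilder::new);
        StringBuilder conn = pool.borrow();   // take one out
        System.out.println(pool.available()); // 1
        pool.release(conn);                   // give it back
        System.out.println(pool.available()); // 2
    }
}
```

The key property is that release() is cheap, whereas creating a real database connection involves a network handshake and authentication, which is exactly the cost a pool amortizes.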
For inexplicable reasons, however, the performance of two of my queries that used to be slow improved this morning. I have no idea why.
I have no authority over the server, maybe someone changed something.
The problem is no more.
In a nutshell:
s.executeQuery(sql) runs extremely slowly within a Tomcat servlet on the server
The same query runs fine without the servlet (a simple Java program) on the same machine
Not all queries are slow within the servlet; only a few of the bigger ones are
The same servlet runs fast on another machine
UPDATES
Please read the updates below the text!
I have a servlet that executes SQL requests and sends back the results via JSON. For some reason, some requests take a huge amount of time to execute, but when I run them in any Oracle SQL Client, they are executed in no time.
I am talking about a difference of 1 second vs 5 minutes for the same SQL (that is not that complex).
How can this be explained ?
Is there a way to improve the performance of a java based SQL request ?
I am using the traditional way of executing queries:
java.sql.Connection conn = null;
java.sql.Statement s = null;
ResultSet rs = null;
String dbDriver = "oracle.jdbc.driver.OracleDriver";
String dbConnectionString = "jdbc:oracle:thin:@" + dbHost + ":" + dbPort + ":" + dbSid;
Class.forName(dbDriver).newInstance();
conn = DriverManager.getConnection(dbConnectionString, dbUser, dbPass);
s = conn.createStatement();
s.setQueryTimeout(9999);
rs = s.executeQuery(newStatement);
ResultSetMetaData rsmd = rs.getMetaData();
// Get the results
while (rs.next()) {
// collect the results
}
// close connections
I tried with ojdbc14 and ojdbc6 but there was no difference.
UPDATE 1:
I tried the same SQL in a local Java project (not a servlet) on my client machine, and I get the results immediately. So I assume the problem comes from my servlet or the Tomcat configuration?
UPDATE 2:
The culprit is indeed rs = s.executeQuery(mySql); I tried to use a PreparedStatement instead, but there is no difference.
UPDATE 3:
I created a new servlet running on a local Tomcat and the query comes back fast. The problem therefore comes from my production server or its Tomcat config. Any ideas what config items could affect this?
UPDATE 4:
I tried the same code in a normal Java program instead of a servlet (still on the same server) and the results come back fast. Ergo, the problem comes from the servlet itself (or Tomcat?). I still don't know what to do, but I've narrowed it down :)
UPDATE 5:
jstack shows the following (it starts where my servlet is; I cut the rest):
"http-8080-3" daemon prio=3 tid=0x00eabc00 nid=0x2e runnable [0xaa9ee000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at oracle.net.ns.Packet.receive(Packet.java:311)
at oracle.net.ns.DataPacket.receive(DataPacket.java:105)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:305)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:249)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:171)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:89)
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:123)
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:79)
at oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:429)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:397)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:210)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:30)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:762)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:925)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1104)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1309)
- locked <0xe7198808> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:422)
So I am stuck at java.net.SocketInputStream.socketRead0(Native Method)?
In some cases (not sure if this applies to yours), setting fetchSize on the Statement object yields great performance improvements. It depends on the size of the ResultSet being fetched.
Try playing with it by setting it to something bigger than Oracle's default of 10 (see this link).
See Statement.setFetchSize.
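To see why this matters: with Oracle's default fetch size of 10, every 10 rows of the result set cost roughly one network round trip, so raising the value, e.g. with s.setFetchSize(500) before executeQuery(), shrinks the trip count dramatically. A back-of-the-envelope model (plain arithmetic, no JDBC involved):

```java
public class FetchSizeModel {
    // Number of network round trips needed to pull `rows` rows
    // when the driver fetches `fetchSize` rows per trip.
    public static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize; // ceiling division
    }

    public static void main(String[] args) {
        // 10,000 rows with Oracle's default fetch size of 10: 1000 trips.
        System.out.println(roundTrips(10_000, 10));   // 1000
        // Raising the fetch size to 500 cuts that to 20 trips.
        System.out.println(roundTrips(10_000, 500));  // 20
    }
}
```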
Given your symptoms, I believe that your issue is not with your SQL client code and you are in fact looking at issues with your server. The stack shows that your client is waiting for a response. This tallies with the fact that you can run the client without any problem in a separate process.
So what you probably need to look at is systemic reasons why the SQL server is running slowly and how that may be tied to Tomcat. My experience in cases like this is that it's usually the disk, so I'd be inclined to check whether you are paging due to a lack of RAM when Tomcat is loaded, or suffering much higher disk ops due to a reduced disk cache. Assuming you are running on a UNIX variant, I'd look at vmstat and iostat for a working case and a broken case to eliminate such issues.
For inexplicable reasons however, this morning the performance increased and my problem is no more. I have no idea why. I have no authority over the server, maybe someone changed something.
Since your thread is waiting on a socket read, which means it is waiting for a response from the database server, I would:
Check database performance; make sure neither the instance nor the query is being impacted at some point during the day.
Check the network latency between the Java and DB servers. Same as above. Probably traceroute?
Since you have not posted the query, I can give you a scenario where this is possible. If you use a function in your query, such as to_char, then your table's indexes may not be used when executing the query via JDBC, even though they are used when you run it in a console. I don't know exactly why, but there's something about the JDBC driver. I had the exact same issue in DB2, and I resolved it by removing the use of functions.
Another scenario could be that a huge number of records is being fetched and proper fetching/batching is not implemented.
I'm pretty new to Servlets and JSP and to using databases.
At the moment I have a class in the 'model' part of my web app which has lots of methods that I've written to perform database queries and updates. At the moment, in each of those methods I am creating a database connection, doing the SQL stuff, then closing the connection.
This works fine while I'm just making small apps for myself, but I'm starting to realise that if lots of people used my app concurrently, it would become apparent that creating and closing a database connection for each method call is a costly process. So I need to change the way I do things.
In Head First Servlets & JSP by Basham, Sierra & Bates, they describe how it's possible to use a ServletContextListener implementation to create an object on deployment of the web app that is added as an attribute of the ServletContext. The authors don't go into it, but imply that people often add a database connection as an attribute of the ServletContext. I thought I would implement this for myself, but after reading this Stack Overflow article on database connection management I'm not so sure.
However as I'm just starting with servlets and JSP, let alone the rest of J2EE, a lot of that article is beyond me.
The points that stand out for me from that article are:
Something could happen to break the database connection, and if we rely only on that single connection, we would need to redeploy our app in order to restart it. Is this correct?
We should rely on the container to manage the database connections for us. Great, but how is this achieved? How can I communicate with the container? (Please bear in mind that I've just started with servlets and JSP.)
In terms of servlet design in general, I have one servlet class per request type, and that normally makes only one type of call to the database, i.e. a specific update or query. Instead of having one class with all the methods for querying the database, is it a better design to have the methods within their respective servlets, or would that contravene the Model-View-Controller pattern?
I can't imagine that I'll be having too many problems with too many users slowing down the user experience just yet :) but I'd like to start doing things right if possible.
Many thanks in advance for your comments
Joe
The following page on Tomcat's website describes how to connect Tomcat and MySQL in detail. You do not want to roll your own; there are too many DataSource pools already available that have been debugged and tried in production environments.
The main thing about using a pool is that a connection is not terminated when you call close(); instead, it is just returned to the pool. Therefore it is important to make sure that you close your resources in a try/finally block. Look here for a sample.
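On Java 7 and later, try-with-resources gives the same guarantee with less nesting: resources are closed in reverse order of declaration even if an exception is thrown. A self-contained sketch with stand-in resources (no real JDBC objects; the Resource class below is invented for the demo):

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {
    // Records the order in which resources are closed.
    static List<String> closed = new ArrayList<>();

    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    // Mimics: try (Connection c = ...; Statement s = ...; ResultSet r = ...)
    public static List<String> run() {
        closed.clear();
        try (Resource conn = new Resource("connection");
             Resource stmt = new Resource("statement");
             Resource rs   = new Resource("resultset")) {
            // work with the resources here
        }
        return closed; // closed in reverse order of declaration
    }

    public static void main(String[] args) {
        System.out.println(run()); // [resultset, statement, connection]
    }
}
```

With a pooled DataSource, that automatic close() on the Connection is what hands it back to the pool.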
I would check out connection pooling and specifically frameworks like C3P0 or Apache Commons DBCP.
Both these packages will look after maintaining and managing a collection of database connections for you.
Typically connections are established in advance, and handed out to requesting threads as they require them. Connections can be validated prior to being handed out (and remade in advance of the client using them if the connection is broken).
The way to go in a web application is to have a connection pool manage your connections. This will allow your threads of execution to share the database connections, which is an important point as connecting to a database is usually a costly operation. Using a connection pool is usually just a configuration task as most containers support managing a connection pool.
From the viewpoint of your code, there are very few changes. Basically:
Connection pools are accessed through the DataSource interface, while non-pooled connections may be accessed through the old DriverManager.
To get the DataSource you will usually have to use JNDI, as this is the standard method for publishing connection pools in a J2EE application.
You want to close() Connection objects as soon as possible. This returns the connection to the pool without disconnecting from the DB, so that other threads can use it.
As always, you should call close() on every JDBC resource (connections, statements, resultsets) to avoid leaking. This is specially important in a server application because they are infrequently restarted so leaks accumulate over time and eventually will make your application malfunction.
This is some example code from http://download.oracle.com/javase/1.4.2/docs/guide/jdbc/getstart/datasource.html (NOTE: not exception-safe). As you can see, once you get the Connection reference there is nothing really special.
Context ctx = new InitialContext();
DataSource ds = (DataSource)ctx.lookup("jdbc/AcmeDB");
Connection con = ds.getConnection("genius", "abracadabra");
con.setAutoCommit(false);
PreparedStatement pstmt = con.prepareStatement(
"SELECT NAME, TITLE FROM PERSONNEL WHERE DEPT = ?");
pstmt.setString(1, "SALES");
ResultSet rs = pstmt.executeQuery();
System.out.println("Sales Department:");
while (rs.next()) {
String name = rs.getString("NAME");
String title = rs.getString("TITLE");
System.out.println(name + " ; ;" + title);
}
pstmt.setString(1, "CUST_SERVICE");
rs = pstmt.executeQuery();
System.out.println("Customer Service Department:");
while (rs.next()) {
String name = rs.getString("NAME");
String title = rs.getString("TITLE");
System.out.println(name + " ; ;" + title);
}
rs.close();
pstmt.close();
con.close();
The authors don't go into it, but imply that people often add a database connection as an attribute of the ServletContext.
That's not the standard way to handle this. The traditional approach is to use a connection pool, i.e. a pool of ready-to-use connections. Applications then borrow connections from the pool and return them when done.
There are several standalone connection pool implementations available (C3P0, Commons DBCP, Bone CP) that you can bundle in your application. But when using a Servlet or Java EE container, I would use the connection pool provided by the container. Then, obtain a DataSource (a handle on a connection pool) via JNDI from the application to get a JDBC connection from it (and close it to return it to the pool).
The way to configure a pool is obviously container specific so you need to refer to the documentation of your container. The good news is that Tomcat provides several examples showing how to do this, how to obtain a datasource via JNDI and how to write proper JDBC code (read until the bottom of the page).
References
Apache Tomcat User Guide
JDBC Data Sources
JNDI Datasource HOW-TO
I recently wrote and deployed a Java web application to a server and I'm finding an unusual problem which didn't appear during development or testing.
When a user logs in after some time away and goes to display data from the database, the page indicates that there are no records to see. But upon refreshing the page, the first x records are shown according to the pagination rules.
Checking the logs, I find:
ERROR|19 09 2009|09 28 54|http-8080-4|myDataSharer.database_access.Database_Metadata_DBA| - Error getting types of columns of tabular Dataset 12
com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.io.EOFException
STACKTRACE:
java.io.EOFException
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1956)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2368)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2867)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1616)
And so on for several hundred lines.
The application is currently set up for about 100 users but is not yet in full use. It uses connection pooling between the Apache Tomcat servlets/JSPs and a MySQL database, with the following code example forming the general shape of a database operation, of which there are typically several per page:
// Gets a Dataset.
public static Dataset getDataset(int DatasetNo) {
ConnectionPool_DBA pool = ConnectionPool_DBA.getInstance();
Connection connection = pool.getConnection();
PreparedStatement ps = null;
ResultSet rs = null;
String query = ("SELECT * " +
"FROM Dataset " +
"WHERE DatasetNo = ?;");
try {
ps = connection.prepareStatement(query);
ps.setInt(1, DatasetNo);
rs = ps.executeQuery();
if (rs.next()) {
Dataset d = new Dataset();
d.setDatasetNo(rs.getInt("DatasetNo"));
d.setDatasetName(rs.getString("DatasetName"));
...
return d;
}
else {
return null;
}
}
catch(Exception ex) {
logger.error("Error getting Dataset " + DatasetNo + "\n", ex);
return null;
}
finally {
DatabaseUtils.closeResultSet(rs);
DatabaseUtils.closePreparedStatement(ps);
pool.freeConnection(connection);
}
}
Is anyone able to advise a way of correcting this problem?
I believe it is due to MySQL leaving connection pool connections open for up to eight hours, but I am not certain.
Thanks
Martin O'Shea.
Just to clarify one point made about my method of connection pooling: it isn't Oracle's class that I'm using in my application but a class of my own, as follows:
package myDataSharer.database_access;
import java.sql.*;
import javax.sql.DataSource;
import javax.naming.InitialContext;
import org.apache.log4j.Logger;
public class ConnectionPool_DBA {
static Logger logger = Logger.getLogger(ConnectionPool_DBA.class.getName());
private static ConnectionPool_DBA pool = null;
private static DataSource dataSource = null;
public synchronized static ConnectionPool_DBA getInstance() {
if (pool == null) {
pool = new ConnectionPool_DBA();
}
return pool;
}
private ConnectionPool_DBA() {
try {
InitialContext ic = new InitialContext();
dataSource = (DataSource) ic.lookup("java:/comp/env/jdbc/myDataSharer");
}
catch(Exception ex) {
logger.error("Error getting a connection pool's datasource\n", ex);
}
}
public void freeConnection(Connection c) {
try {
c.close();
}
catch (Exception ex) {
logger.error("Error terminating a connection pool connection\n", ex);
}
}
public Connection getConnection() {
try {
return dataSource.getConnection();
}
catch (Exception ex) {
logger.error("Error getting a connection pool connection\n", ex);
return null;
}
}
}
I think the mention of Oracle is due to me using a similar name.
There are a few pointers on avoiding this situation, obtained from other sources, especially from the connection pool implementations of other drivers and from other application servers. Some of the information is already available in the Tomcat documentation on JNDI Data Sources.
Establish a cleanup/reaper schedule that will close connections in the pool, if they are inactive beyond a certain period. It is not good practice to leave a connection to the database open for 8 hours (the MySQL default). On most application servers, the inactive connection timeout value is configurable and is usually less than 15 minutes (i.e. connections cannot be left in the pool for more than 15 minutes unless they are being reused time and again). In Tomcat, when using a JNDI DataSource, use the removeAbandoned and removeAbandonedTimeout settings to do the same.
When a new connection is returned from the pool to the application, ensure that it is tested first. For instance, most application servers that I know of can be configured so that connections to an Oracle database are tested by executing "SELECT 1 FROM dual". In Tomcat, use the validationQuery property to set the appropriate query for MySQL; I believe this is "SELECT 1" (without quotes). Setting the validationQuery property helps because, if the query fails to execute, the connection is dropped from the pool and a new one is created in its place.
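For reference, both validationQuery and the abandoned-connection settings go on the <Resource> element in Tomcat's context.xml. A sketch only: the resource name, URL, and credentials below are placeholders, and the available attributes vary slightly by Tomcat version.

```xml
<Context>
  <!-- JNDI DataSource backed by Commons DBCP (placeholder values) -->
  <Resource name="jdbc/myDataSharer"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/mydatabase"
            username="dbuser"
            password="dbpass"
            maxActive="20"
            maxIdle="10"
            validationQuery="SELECT 1"
            testOnBorrow="true"
            removeAbandoned="true"
            removeAbandonedTimeout="60"
            logAbandoned="true"/>
</Context>
```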
As far as the behavior of your application is concerned, the user is probably seeing the result of the pool returning a stale connection to the application the first time. The second time around, the pool probably returns a different connection that can service the application's queries.
Tomcat JNDI Data Sources are based on Commons DBCP, so the configuration properties applicable to DBCP will apply to Tomcat as well.
I'd wonder why you're using ConnectionPool_DBA in your code instead of letting Tomcat handle the pooling and simply looking up the connection using JNDI.
Why are you using an Oracle connection pool with MySQL? When I do JNDI lookups and connection pooling, I prefer the Apache DBCP library. I find that it works very well.
I'd also ask whether your DatabaseUtils methods throw any exceptions, because if either of the calls prior to your call to pool.freeConnection() throws one, you'll never free up that connection.
I don't like your code much because a class that performs SQL operations should have its Connection instance passed into it, and should not have the dual responsibility of acquiring and using the Connection. A persistence class can't know if it's being used in a larger transaction. Better to have a separate service layer that acquires the Connection, manages the transaction, marshals the persistence classes, and cleans up when it's complete.
UPDATE:
Google turned up the Oracle class with the same name as yours. Now I really don't like your code, because you wrote something of your own when a better alternative was readily available. I'd ditch yours right away and redo this using DBCP and JNDI.
This error indicates that the server closed the connection unexpectedly. This can occur in the following two cases:
MySQL closes an idle connection after a certain time (the default is 8 hours). When this occurs, no thread is responsible for closing the connection, so it goes stale. This is most likely the cause if the error only happens after a long idle period.
If you don't completely read all the responses, the connection may be returned to the pool in a busy state. The next time a command is sent to MySQL, it closes the connection because of the wrong state. If the error occurs quite frequently, this is probably the cause.
Meanwhile, setting up an eviction thread will help alleviate the problem. Add something like this to the DataSource:
...
removeAbandoned="true"
removeAbandonedTimeout="120"
logAbandoned="true"
testOnBorrow="false"
testOnReturn="false"
timeBetweenEvictionRunsMillis="60000"
numTestsPerEvictionRun="5"
minEvictableIdleTimeMillis="30000"
testWhileIdle="true"
validationQuery="select now()"
Is there a router between the web server and the database that transparently closes idle TCP/IP connections?
If so, you must have your connection pool either discard connections that have been unused for more than XX minutes, or do some kind of ping on the connection every YY minutes to keep it active.
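The discard-unused-connections idea can be sketched as timestamped pool entries plus a periodic reaper pass. Toy code: the PooledEntry class and the millisecond values are invented for the example; real pools expose this through settings such as minEvictableIdleTimeMillis and timeBetweenEvictionRunsMillis.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IdleEviction {
    // A pooled entry remembers when it was last returned to the pool.
    static class PooledEntry {
        final String name;      // stand-in for a real Connection
        final long idleSince;   // millisecond timestamp
        PooledEntry(String name, long idleSince) {
            this.name = name;
            this.idleSince = idleSince;
        }
    }

    // Drop entries idle longer than maxIdleMillis; a reaper thread would
    // call this periodically (cf. timeBetweenEvictionRunsMillis).
    public static int evict(List<PooledEntry> pool, long now, long maxIdleMillis) {
        int evicted = 0;
        Iterator<PooledEntry> it = pool.iterator();
        while (it.hasNext()) {
            if (now - it.next().idleSince > maxIdleMillis) {
                it.remove(); // real code would also close the connection
                evicted++;
            }
        }
        return evicted;
    }

    public static void main(String[] args) {
        List<PooledEntry> pool = new ArrayList<>();
        pool.add(new PooledEntry("conn-1", 0L));      // idle since t=0
        pool.add(new PooledEntry("conn-2", 9_000L));  // recently returned
        int n = evict(pool, 10_000L, 5_000L);         // now = t+10s, limit 5s
        System.out.println(n + " evicted, " + pool.size() + " left");
    }
}
```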
On the off chance you haven't found your answer I've been dealing with this for the last day. I am essentially doing the same thing you are except that I'm basing my pooling off of apache.commons.pool. Same exact error you are seeing EOF. Check your mysqld error log file which is most likely in your data directory. Look for mysqld crashing. mysqld_safe will restart your mysqld quickly if it crashes so it won't be apparent that this is the case unless you look in its logfile. /var/log is not help for this scenario.
Connections that were created before the crash will EOF after the crash.