I have the following java code fragment:
Connection conn = DriverManager.getConnection(connString, user, password);
Statement stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                      java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
stmt.setMaxRows(100);
stmt.setQueryTimeout(2);
try {
    stmt.execute("SELECT SLEEP(100)"); // (1)
} finally {
    stmt.close();
    conn.close();
}
It is designed to simulate a long-running query that will time out. However, it raises an exception that is not the expected one:
java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic# is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
Looking closer, it appears the error happens when the close method attempts to reset MaxRows to its default value. However, if I replace line (1) with
stmt.executeQuery("SELECT SLEEP(100)");
the proper exception, com.mysql.jdbc.exceptions.MySQLTimeoutException, is raised.
I would like to run execute, stream the results, have a limit on the number of rows, and not have this error crop up. Any suggestions?
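For what it's worth, here is a minimal sketch of one possible workaround, under the assumption that the row cap can be pushed into the SQL itself (a LIMIT clause) so that setMaxRows() never has to be reset while the streaming result set is still open. The table and column names are hypothetical:
// Sketch: stream with TYPE_FORWARD_ONLY / CONCUR_READ_ONLY and fetchSize = Integer.MIN_VALUE,
// but cap the row count with LIMIT in the SQL instead of Statement.setMaxRows(),
// and close the streaming ResultSet before closing the Statement.
// Assumes: import java.sql.*;
Connection conn = DriverManager.getConnection(connString, user, password);
Statement stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                      java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE); // streaming mode in MySQL Connector/J
stmt.setQueryTimeout(2);              // keep the short timeout if simulating a timeout is still desired
try {
    ResultSet rs = stmt.executeQuery("SELECT col FROM some_table LIMIT 100"); // hypothetical table/column
    try {
        while (rs.next()) {
            // process each streamed row
        }
    } finally {
        rs.close(); // release the streaming result set first
    }
} finally {
    stmt.close();
    conn.close();
}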
Related
I have a connection to a PostgreSQL DB (using the PostgreSQL JDBC Driver), and I set the network timeout on it (using the setNetworkTimeout method), and there is something weird about it.
When I use the connection for simple queries like select * from table, which take a lot of time, it works fine (it waits for the query to return a result). But when I use the connection for queries that use functions (like select max(a) from table), which also take a lot of time, it throws an exception as a result of a timeout.
example code:
// Queries which take more than 5 seconds
String bigQuery = "select * from data.bigtable tb1 inner join data.bigtable tb2 on tb1.a like '%a%'";
String bigQueryWithFunction = "select max(tb1.a) from data.bigtable tb1 inner join data.bigtable tb2 on tb1.a like '%a%'";
// Creating a connection with 5 seconds network timeout
Connection con = source.getConnection();
con.setNetworkTimeout(Executors.newSingleThreadExecutor(), 5000);
con.setAutoCommit(false);
Statement st2 = con.createStatement();
st2.execute(bigQueryWithFunction); // This line DOES throw an exception
st2.execute(bigQuery); // This line DOES NOT throw an exception
(Ignore the logic of the queries.)
Can someone explain to me why it happens?
PostgreSQL streams the result rows to the client as soon as they become available.
In your first query, the first result row will be returned quite soon, even though it takes the query a long time to finish. The JDBC driver collects the results and waits until the query is done, but the network connection is never idle for long.
The second query takes about as long to complete as the first one, but it cannot return its first (and only) result row until all result rows from the join have been calculated. So there is a long idle time on the network connection, which causes the timeout.
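If the intent is to bound the total running time of the query rather than the idle time on the socket, a statement-level timeout may be a better fit than setNetworkTimeout. Here is a minimal sketch using the standard JDBC Statement.setQueryTimeout (seconds), which pgjdbc should enforce regardless of when the first row arrives; it reuses the question's variables and is a sketch, not a drop-in fix:
// Sketch: cap total execution time of the query, independent of network idle time.
// Reuses source and bigQueryWithFunction from the question; assumes import java.sql.*;
// and an enclosing method that declares throws SQLException.
try (Connection con = source.getConnection();
     Statement st = con.createStatement()) {
    st.setQueryTimeout(5); // seconds of total execution time, not seconds of socket idleness
    try {
        st.execute(bigQueryWithFunction);
    } catch (SQLException e) {
        // expected to surface as a timeout/cancel error if the query exceeds 5 seconds
        System.err.println("Query timed out: " + e.getMessage());
    }
}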
I need to extract data from a remote SQL Server database. I am using the MSSQL JDBC driver.
I noticed that often, when retrieving rows from the database, the process suddenly hangs, giving no errors. It simply remains stuck, and no more rows are processed.
The code to read from the database is the following:
String connectionUrl = "jdbc:sqlserver://10.10.10.28:1433;databaseName=MYDB;user=MYUSER;password=MYPWD;selectMethod=direct;sendStringParametersAsUnicode=false;responseBuffering=adaptive;";
String query = "SELECT * FROM MYTABLE";
try (Connection sourceConnection = DriverManager.getConnection(connectionUrl);
     Statement stmt = sourceConnection.createStatement(SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY,
                                                       SQLServerResultSet.CONCUR_READ_ONLY)) {
    stmt.setFetchSize(100);
    ResultSet resultSet = stmt.executeQuery(query);
    while (resultSet.next()) {
        // Often, after retrieving some rows, the process remains stuck here
    }
}
Usually the connection is established correctly and some rows are fetched; then, at some point, the process can become stuck retrieving the next batch of rows, giving no errors and not processing any new rows. This happens some of the time; other times it completes successfully.
AFAIK the only reason I can see is that at some point a connection problem occurs with the remote machine, but shouldn't I be notified of this by the driver?
I am not sure how I should handle these types of situations... Is there anything I can do on my side to let the process complete even if there is a temporary connection problem with the remote server (of course, if the connection is not recoverable, there is nothing I can do)?
As another test, instead of the Java JDBC driver I've tried the bcp utility to extract data from the remote database, and even with this native utility I can observe the same problem: sometimes it completes successfully, other times it retrieves some rows (say 20000) and then becomes stuck, with no errors and no more rows processed.
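One thing that may help, sketched below, is to make the hang fail fast instead of blocking forever: a standard JDBC query timeout on the statement, plus (as an assumption to verify against the driver documentation for your version) the socketTimeout connection property, which bounds how long a blocked socket read may last:
// Sketch: fail with an exception instead of hanging when the connection silently dies.
// socketTimeout (ms) is a Microsoft JDBC driver connection property; treat it as an
// assumption to verify for your driver version. Assumes import java.sql.*; and the
// com.microsoft.sqlserver.jdbc.SQLServerResultSet constants used in the question.
String connectionUrl = "jdbc:sqlserver://10.10.10.28:1433;databaseName=MYDB;user=MYUSER;password=MYPWD;"
        + "selectMethod=direct;sendStringParametersAsUnicode=false;responseBuffering=adaptive;"
        + "socketTimeout=60000;";
String query = "SELECT * FROM MYTABLE";
try (Connection sourceConnection = DriverManager.getConnection(connectionUrl);
     Statement stmt = sourceConnection.createStatement(SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY,
                                                       SQLServerResultSet.CONCUR_READ_ONLY)) {
    stmt.setFetchSize(100);
    stmt.setQueryTimeout(300); // seconds; bound waiting on the statement instead of hanging indefinitely
    try (ResultSet resultSet = stmt.executeQuery(query)) {
        while (resultSet.next()) {
            // process row; a dead connection should now end in an exception that can be retried
        }
    }
}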
I'm running into a strange situation with a prepared statement hitting a MySQL database using MySQL Connector/J. In certain environments, I periodically have issues with longer existing (> 5 minutes) prepared statements. I frequently get an exception when calling executeBatch that reads:
"No operations allowed after statement closed"
However, there is no code that could be closing the statement that I can see. The code looks something like the following:
private void execute(MyClass myObj, List<MyThing> things) throws SQLException {
    Connection con = null;
    PreparedStatement pstmt = null;
    try {
        con = ConnectionHelper.getConnection();
        pstmt = con.prepareStatement(INSERT_SQL);
        int c = 0;
        for (MyThing thing : things) {
            pstmt.setInt(1, myObj.getA());
            pstmt.setLong(2, thing.getB());
            pstmt.addBatch();
            if (++c % 500 == 0) {
                pstmt.executeBatch();
            }
        }
        pstmt.executeBatch();
    }
    finally {
        ConnectionHelper.close(pstmt, con);
    }
}
ConnectionHelper.close essentially just calls close on the statement and the connection. ConnectionHelper.getConnection is a bit of a rabbit hole -- it roughly retrieves a connection from a pool using java.sql.DriverManager and proxool, then wraps it with Spring DataSourceUtils.
Usually it will fail on the last pstmt.executeBatch(), but it will sometimes fail in other places. I've checked, and wait_timeout and interactive_timeout are configured to their defaults (definitely > 5 minutes). Moreover, in most cases the connection and statement are used in the loop, but then a few seconds later the statement fails outside of the loop. The DB server and the app server are running on the same subnet, so network issues seem unlikely.
Looking for any tips on how to debug this issue. At the moment I'm trying to dig into the MySQL Connector/J code to see if I can somehow get some additional debugging statements out. Unfortunately I can't attach a debugger, as the problem can only be reproduced in a select couple of environments.
Take a look at the line:
if (++c % 500 == 0) {
    pstmt.executeBatch();
}
What happens if that gets executed on the last iteration and then the loop terminates? You then call pstmt.executeBatch() again with nothing in the batch.
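To make that concrete, here is a minimal sketch of the guard being suggested, reusing the (hypothetical) names from the question's code; the final flush only runs when a partial batch is actually pending:
// Sketch: skip the trailing executeBatch() when the loop ended exactly on a multiple of 500,
// so an empty batch is never sent.
int c = 0;
for (MyThing thing : things) {
    pstmt.setInt(1, myObj.getA());
    pstmt.setLong(2, thing.getB());
    pstmt.addBatch();
    if (++c % 500 == 0) {
        pstmt.executeBatch();   // flush every full batch of 500
    }
}
if (c % 500 != 0) {
    pstmt.executeBatch();       // flush the remaining partial batch only if it is non-empty
}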
I am supporting some legacy code, and it chugged along fine until recently. Is there a setting for a JDBC Oracle thin connection where I can specify the idle timeout via Java (no connection pooling)? A lot of resources online refer to connection pooling... is it even possible in my case (to specify an idle timeout in a non-pooling situation)? Or is idle time a setting on the specific DB user account?
Updates + Questions
I was able to log in as the user and ran a query to find out the resource limits: select * from USER_RESOURCE_LIMITS; However, everything came back "UNLIMITED". Is it possible for another value (say, from the JDBC connection) to override the "UNLIMITED"?
So the job holds onto the connection while we actively query another system via DB links for a good ~2+ hours... Now, why would the idle timeout even come into play?
Update #2
We switched to a different account (that has the same kind of DB link setup) and the job was able to finish like it did before, which sort of points to something wonky with the Oracle user profile. But like I said, querying USER_RESOURCE_LIMITS shows both users as having "UNLIMITED" idle time, and the DBA pretty much confirmed that too. What else could be causing this difference?
Update #3
Stack trace and such.
java.sql.SQLException: ORA-02396: exceeded maximum idle time, please connect again
ORA-06512: at line 1
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:639)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:184)
at oracle.jdbc.driver.T4CCallableStatement.execute_for_rows(T4CCallableStatement.java:873)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1086)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:2984)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3076)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4273)
at com.grocery.stand.Helper.getAccess(Helper.java:216)
at com.grocery.stand.fruitbasket.Dao.getPriceData(Dao.java:216)
at com.grocery.stand.fruitbasket.Dao.getPricees(Dao.java:183)
at com.grocery.stand.fruitbasket.UpdatePrice.updateAllFruitPrices(UpdatePrice.java:256)
at com.grocery.stand.fruitbasket.UpdatePrice.main(UpdatePrice.java:58)
SQL Exception while getting Data from SYSTEM_B
Exception while updating pricing : ORA-01012: not logged on
Exception in thread "main" java.sql.SQLException: ORA-01012: not logged on
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:277)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:272)
at oracle.jdbc.driver.T4C7Ocommoncall.receive(T4C7Ocommoncall.java:129)
at oracle.jdbc.driver.T4CConnection.do_rollback(T4CConnection.java:478)
at oracle.jdbc.driver.PhysicalConnection.rollback(PhysicalConnection.java:1045)
at com.grocery.stand.Helper.rollBack(Helper.java:75)
at com.grocery.stand.fruitbasket.UpdatePrice.updatePartNumbers(UpdatePrice.java:291)
at com.grocery.stand.fruitbasket.UpdatePrice.main(UpdatePrice.java:58)
Connection Code
public static Connection openConnection() throws SQLException {
    String userName = propBundle.getString(DB_UID);
    String password = propBundle.getString(DB_PWD);
    String url = propBundle.getString(DB_URL);
    Connection conn = null;
    try {
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        conn = DriverManager.getConnection(url, userName, password);
        conn.setAutoCommit(false);
    } catch (SQLException sqle) {
        sqle.printStackTrace(System.out);
        throw sqle;
    }
    return conn;
}
The error occurs on the execute() line:
public static void getSystemAccess(Connection dbConnection) throws SQLException {
    try {
        CallableStatement authStmt = null;
        String authorize = "CALL ABC.ACCESS_PROCEDURE#some_db_link()";
        authStmt = dbConnection.prepareCall(authorize);
        authStmt.execute();
        authStmt.close();
    } catch (SQLException sqle1) {
        sqle1.printStackTrace();
        throw new SQLException(sqle1.getMessage());
    }
}
I'm not sure that I understand the question you're asking.
The error you are getting indicates that the Oracle user you are using to connect to the database has a profile configured (in Oracle) that limits the amount of time the connection can be idle. Oracle is killing your connection when it remains idle for too long.
Normally, the solution to this sort of problem is to go to the DBA and ask for the idle time to be increased, or to look through your code and see why the connection is open and unused for so long. If you were using a connection pool (which it doesn't appear you are), it would make sense for some connections to remain open and idle for long periods of time.
Since it doesn't appear that you are using a connection pool, the question is whether it makes sense for the application to hold the connection open for long periods without doing anything. If the application opens a connection when the user logs in at 9am and doesn't close it until the user shuts down at 5pm, it may make sense to adjust the IDLE_TIME setting for this user in the database. Otherwise, you may want to investigate whether it makes logical sense for the application to hold the database connection open for so long without doing something, or whether the application can be modified to close the connection when it is no longer needed.
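If the profile limit cannot be raised, one defensive option is to treat the idle-timeout error as a signal to reconnect. Here is a rough sketch, assuming ORA-02396 surfaces as SQLException error code 2396 and reusing openConnection() and getSystemAccess() from the question:
// Sketch: retry once with a fresh session if the old one was killed for exceeding IDLE_TIME.
// Assumes: import java.sql.*; and an enclosing method that declares throws SQLException.
Connection dbConnection = openConnection();
try {
    getSystemAccess(dbConnection);
} catch (SQLException e) {
    if (e.getErrorCode() == 2396) {              // ORA-02396: exceeded maximum idle time
        try { dbConnection.close(); } catch (SQLException ignore) { }
        dbConnection = openConnection();         // open a fresh session
        getSystemAccess(dbConnection);           // retry the call once
    } else {
        throw e;
    }
}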
I am facing an issue while executing queries. I use the same ResultSet and Statement for executing all the queries. Now I face an intermittent SQLException saying that the connection is already closed. We have to either use a separate ResultSet for each query or use a lock-like structure. Can anyone tell me which is better? I think introducing locks will slow down the process. Am I right?
Update:
To be clearer: the error may happen because the finally block gets called before all the queries have executed, so the connection gets closed and the exception is thrown.
This is the exception I get
java.sql.SQLException: Connection has already been closed.
    at weblogic.jdbc.wrapper.PoolConnection.checkConnection(PoolConnection.java:81)
    at weblogic.jdbc.wrapper.ResultSet.preInvocationHandler(ResultSet.java:68)
    at weblogic.jdbc.wrapper.ResultSet_com_informix_jdbc_IfxResultSet.next(Unknown Source)
    at com.test.test.execute(test.java:76)
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:413)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:225)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1858)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:459)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1077)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:465)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:348)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:7047)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3902)
    at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2773)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
Sample code:
ResultSet rst = null;
Statement stmt = null;
Connection con = DBConnection.getConnection();
try {
    stmt = con.createStatement();
    rst = stmt.executeQuery("select * from dual");
    while (rst.next()) {
        // some code
    }
    rst = stmt.executeQuery("select * from doctor where degree = 'BM'");
    while (rst.next()) {
        // blah blah
    }
} finally {
    // close con, rst and stmt
}
You are not reusing the ResultSet; you are leaking ResultSets.
rst = stmt.executeQuery(...) generates a new ResultSet, and the previous ResultSet is never closed :(
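As an illustration, here is a minimal sketch of the same sample code with each result set scoped and closed deterministically via try-with-resources (this assumes Java 7+; the queries are taken from the question):
// Sketch: one try-with-resources per query so every ResultSet (and the Statement and
// Connection) is closed exactly once, instead of being overwritten and leaked.
// Assumes an enclosing method that declares throws SQLException.
try (Connection con = DBConnection.getConnection();
     Statement stmt = con.createStatement()) {

    try (ResultSet rst = stmt.executeQuery("select * from dual")) {
        while (rst.next()) {
            // some code
        }
    }

    try (ResultSet rst = stmt.executeQuery("select * from doctor where degree = 'BM'")) {
        while (rst.next()) {
            // blah blah
        }
    }
}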
It appears that the code in question has issues in a multi-threaded environment.
DBConnection.getConnection() is probably returning the same connection to all threads. When multiple threads are processing multiple requests, the first thread that finishes execution of the method will close the connection, leaving all the other threads high and dry.
I'm speculating here, but it appears that the connection object returned by DBConnection is an instance member of the DBConnection object, and that would qualify as a bad practice for a connection manager in a multi-threaded environment.
A code fix would avoid the usage of instance members for Connection, Statement (and the like), and the ResultSet objects.
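For illustration, here is a hypothetical sketch of what a thread-safe variant of such a helper could look like, with the connection created per call instead of being held as an instance member (the class shape, URL, and credentials are placeholders, not taken from the question):
// Sketch: no shared Connection field, so nothing can be closed out from under another thread.
// Assumes: import java.sql.*;
public final class DBConnection {
    private static final String URL  = "jdbc:yourdb://host:port/db"; // placeholder
    private static final String USER = "user";                       // placeholder
    private static final String PWD  = "password";                   // placeholder

    private DBConnection() { }

    public static Connection getConnection() throws SQLException {
        // Each caller gets its own connection and is responsible for closing it.
        return DriverManager.getConnection(URL, USER, PWD);
    }
}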
I'm not sure what's going on without knowing more about your code. Is it threaded? Is the underlying database going down (or are you losing connectivity to it)?
One thing I would do is implement connection pooling (via Apache DBCP, say). This framework will maintain a pool of connections to your database and validate those connections before handing them out to you. You would ask for a new connection each time you make a query (or perhaps a set of queries), but because they're pooled this shouldn't be a major overhead.
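For reference, a rough sketch of wiring up such a pool with Apache Commons DBCP2; the URL, credentials, and pool sizing below are placeholders, not values from the question:
// Sketch: one shared, pooled DataSource; each request borrows a validated connection
// and close() simply returns it to the pool. Assumes commons-dbcp2 on the classpath
// (org.apache.commons.dbcp2.BasicDataSource) and import java.sql.*;
BasicDataSource ds = new BasicDataSource();
ds.setUrl("jdbc:yourdb://host:port/db");  // placeholder JDBC URL
ds.setUsername("user");                   // placeholder credentials
ds.setPassword("password");
ds.setMaxTotal(10);                       // cap on pooled connections
ds.setTestOnBorrow(true);                 // validate a connection before handing it out

try (Connection con = ds.getConnection();
     Statement stmt = con.createStatement();
     ResultSet rst = stmt.executeQuery("select * from dual")) {
    while (rst.next()) {
        // process row
    }
}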
Unless your connection to the database has really been closed, I think you did something more like this:
try {
return resultSet.getBoolean("SUCCESS");
} finally {
resultSet.close();
}
This code will actually close the connection before your result set has been evaluated, resulting in the exception you show.
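If that is the pattern in play, one small adjustment (a sketch, not necessarily the only fix) is to read the value into a local variable while everything is still open and release the resources afterwards:
// Sketch: read first, then close, then return, so nothing is evaluated after close().
boolean success;
try {
    success = resultSet.getBoolean("SUCCESS");
} finally {
    resultSet.close();   // the value has already been read at this point
}
return success;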