I have a connection to a PostgreSQL DB (using the PostgreSQL JDBC driver) on which I set a network timeout (via the setNetworkTimeout method), and I am seeing some odd behavior.
When I use the connection for a simple query like select * from table that takes a long time, it works fine: it waits for the query to return a result. But when I use the connection for a query with an aggregate function (like select max(a) from table) that also takes a long time, it throws an exception as a result of the timeout.
example code:
// Queries that take more than 5 seconds
String bigQuery = "select * from data.bigtable tb1 inner join data.bigtable tb2 on tb1.a like '%a%'";
String bigQueryWithFunction = "select max(tb1.a) from data.bigtable tb1 inner join data.bigtable tb2 on tb1.a like '%a%'";
// Creating a connection with 5 seconds network timeout
Connection con = source.getConnection();
con.setNetworkTimeout(Executors.newSingleThreadExecutor(), 5000);
con.setAutoCommit(false);
Statement st2 = con.createStatement();
st2.execute(bigQueryWithFunction); // This line DOES throw an exception
st2.execute(bigQuery); // This line does NOT throw an exception
(Ignore the logic of the queries.)
Can someone explain to me why it happens?
PostgreSQL streams result rows to the client as soon as they become available.
In your first query, the first result row is returned quite soon, even though the query takes a long time to finish. The JDBC driver collects the results and waits until the query is done, but the network connection is never idle for long.
The second query takes about as long to complete as the first one, but it cannot return its first (and only) result row until all rows from the join have been processed. So there is a long idle period on the network connection, and that is what triggers the timeout.
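The distinction here is between total query time and idle time on the wire: a network timeout (like setNetworkTimeout, or a plain socket read timeout) only fires when no bytes arrive for the configured period, regardless of how long the whole operation takes. A minimal sketch of that mechanism with plain sockets, no database involved (the port, byte values, and delays are made up purely for illustration):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class IdleTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // "Streaming" server: sends a byte every 500 ms (like rows trickling in),
            // then goes silent for 3 s (like an aggregate that blocks until the end).
            Thread producer = new Thread(() -> {
                try (Socket s = server.accept(); OutputStream out = s.getOutputStream()) {
                    for (int i = 0; i < 3; i++) {
                        out.write(i);
                        out.flush();
                        Thread.sleep(500);
                    }
                    Thread.sleep(3000);  // long idle gap on the connection
                    out.write(99);
                    out.flush();
                } catch (Exception ignored) {
                    // client may close the socket first; irrelevant for the demo
                }
            });
            producer.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.setSoTimeout(1500);  // read timeout, analogous to setNetworkTimeout
                InputStream in = client.getInputStream();
                for (int i = 0; i < 3; i++) {
                    in.read();  // fine: data keeps arriving well within 1.5 s
                }
                try {
                    in.read();  // the 3 s idle gap exceeds the 1.5 s read timeout
                    System.out.println("no timeout");
                } catch (SocketTimeoutException e) {
                    System.out.println("timeout during idle gap");
                }
            }
            producer.join();
        }
    }
}
```

The total transfer time is roughly the same in both phases of the demo; only the silent stretch trips the timeout, which mirrors why the streaming join survives a network timeout while the max(...) query does not.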
Related
I need to extract data from a remote Sql server database. I am using the mssql jdbc driver.
I noticed that often, when retrieving rows from the database, the process suddenly hangs with no errors. It simply remains stuck and no more rows are processed.
The code to read from the database is the following:
String connectionUrl = "jdbc:sqlserver://10.10.10.28:1433;databaseName=MYDB;user=MYUSER;password=MYPWD;selectMethod=direct;sendStringParametersAsUnicode=false;responseBuffering=adaptive;";
String query = "SELECT * FROM MYTABLE";
try (Connection sourceConnection = DriverManager.getConnection(connectionUrl);
Statement stmt = sourceConnection.createStatement(SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY, SQLServerResultSet.CONCUR_READ_ONLY) ) {
stmt.setFetchSize(100);
ResultSet resultSet = stmt.executeQuery(query);
while (resultSet.next()) {
// Often, after retrieving some rows, process remains stuck here
}
}
Usually the connection is established correctly and some rows are fetched; then at some point the process can become stuck retrieving the next batch of rows, giving no errors and processing no new rows. Sometimes this happens, other times the run completes successfully.
The only reason I can see is that at some point a connection problem occurs with the remote machine, but shouldn't the driver notify me of this?
I am not sure how to handle this type of situation. Is there anything I can do on my side to let the process complete even if there is a temporary connection problem with the remote server? (Of course, if the connection is not recoverable, there is nothing I can do.)
As another test, instead of the Java JDBC driver I've tried the bcp utility to extract data from the remote database, and even with this native utility I observe the same problem: sometimes it completes successfully, other times it retrieves some rows (say 20,000) and then becomes stuck, with no errors and no more rows processed.
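One thing that may be worth ruling out (an assumption, not a confirmed fix for this case): the Microsoft JDBC driver supports a socketTimeout connection property, in milliseconds. By default it is infinite, so a connection that dies mid-fetch can block forever inside resultSet.next() without any error; with socketTimeout set, a stalled read eventually surfaces as an exception you can catch, reconnect from, and resume. Appended to the URL from the question, it would look like:

```
jdbc:sqlserver://10.10.10.28:1433;databaseName=MYDB;user=MYUSER;password=MYPWD;selectMethod=direct;sendStringParametersAsUnicode=false;responseBuffering=adaptive;socketTimeout=60000;
```

The 60000 (one minute) is an arbitrary illustrative value; it should be comfortably larger than the longest pause you expect between successfully fetched batches. The fact that bcp shows the same hang suggests the underlying cause is on the network or server side, but a socket timeout at least turns a silent hang into a detectable failure.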
I have the following java code fragment:
Connection conn = DriverManager.getConnection(connString, user, password);
Statement stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
stmt.setMaxRows(100);
stmt.setQueryTimeout(2);
try {
stmt.execute("SELECT SLEEP(100)"); // (1)
} finally {
stmt.close();
conn.close();
}
It is designed to simulate a long running query that will time out. However, it raises an exception that is not the expected one:
java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic# is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
Looking closer, it appears the error happens as the close method attempts to reset the MaxRows to its default value. However, if I replace line (1) with
stmt.executeQuery("SELECT SLEEP(100)");
The proper exception, com.mysql.jdbc.exceptions.MySQLTimeoutException, is raised.
I would like to run execute, stream the results, have a limit on the number of rows, and not have this error crop up. Any suggestions?
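One workaround worth trying (a sketch only, assuming the still-open streaming result set is indeed what trips up close(); this needs a live MySQL server, so it is untested here): explicitly close whatever result set the statement has open before closing the statement, so that close() does not have to reset maxRows while a streaming result set is still active:

```
try {
    stmt.execute("SELECT SLEEP(100)");
} catch (MySQLTimeoutException e) {
    // the timeout we actually wanted to observe
} finally {
    ResultSet rs = stmt.getResultSet();
    if (rs != null) {
        rs.close(); // release the streaming result set before stmt.close() touches maxRows
    }
    stmt.close();
    conn.close();
}
```

Whether getResultSet() still returns the active stream after the timeout fires is an assumption about Connector/J's behavior, so treat this as a direction to test rather than a guaranteed fix.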
I have a Java webapp accessing MSSQL database on a MS SQL Server 2012 running on the same machine.
Some of the queries fail after exactly 3 seconds with:
com.microsoft.sqlserver.jdbc.SQLServerException: The query has timed out.
It happens a couple of times a day, in the mornings, when the app is not under heavy load.
On average the queries take less than 50 ms.
I'm using Microsoft JDBC Driver 4.0 and the queries will fail after exactly 3 seconds even if I use statement.setQueryTimeout(0);
Remote query timeout on the server is set to its default value (600s).
Any idea why the queries fail after 3s?
Edit:
Here are some of the queries:
UPDATE CampaignCalls SET note = 'Short Text' WHERE (saveTime IS NULL) AND (agent = ?)
This one updates no more than 50 rows
INSERT INTO CampaignCustomers (campaignId, clientId, completed, callTime)
SELECT ?, clientId, 0, callTime
FROM CampaignCustomers WITH (NOLOCK) WHERE campaignId = ?
This one copies no more than 1500 rows.
The connection to the server doesn't break. I'm reusing it a moment later with no problems.
I am wondering: why 3 seconds? Is there another timeout setting I am not seeing? Even if the table is locked for some reason, why does the query time out after exactly 3 seconds?
Thank you all!
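Two things may be worth ruling out (both assumptions, since nothing in the question pins down where the 3 seconds comes from): a session-level lock timeout, and a query timeout being applied by some layer other than your own code, such as a connection pool or framework that calls setQueryTimeout on statements it hands out. The lock timeout is easy to inspect from the same session:

```sql
SELECT @@LOCK_TIMEOUT;  -- -1 means wait forever (the default); 3000 would explain the behavior
```

One caveat: a lock timeout normally surfaces as SQL Server error 1222 ("Lock request time out period exceeded") rather than the driver's "The query has timed out" message, which is why a queryTimeout set somewhere else in the stack is the other candidate to check.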
I'm running into a strange situation with a prepared statement hitting a MySQL database using MySQL Connector/J. In certain environments, I periodically have issues with longer-lived (> 5 minutes) prepared statements. I frequently get an exception when calling executeBatch that reads:
"No operations allowed after statement closed"
However, there is no code that could be closing the statement that I can see. The code looks something like the following:
private void execute(MyClass myObj, List<MyThing> things) throws SQLException {
Connection con = null;
PreparedStatement pstmt = null;
try {
con = ConnectionHelper.getConnection();
pstmt = con.prepareStatement(INSERT_SQL);
int c = 0;
for (MyThing thing : things) {
pstmt.setInt(1, myObj.getA());
pstmt.setLong(2, thing.getB());
pstmt.addBatch();
if (++c % 500 == 0) {
pstmt.executeBatch();
}
}
pstmt.executeBatch();
}
finally {
ConnectionHelper.close(pstmt, con);
}
}
ConnectionHelper.close essentially just calls close on the statement and the connection. ConnectionHelper.getConnection is a bit of a rabbit hole -- it roughly retrieves a connection from a pool using java.sql.DriverManager and proxool, then wraps it with Spring DataSourceUtils.
Usually it fails on the last pstmt.executeBatch(), but it sometimes fails in other places. I've checked that wait_timeout and interactive_timeout are at their defaults (definitely > 5 minutes). Moreover, in most cases the connection and statement are used inside the loop, but then a few seconds later the statement fails outside of it. The DB server and the app server are running on the same subnet, so network issues seem unlikely.
Looking for any tips on how to debug this issue. At the moment, I'm trying to dig into the MySQL Connector/J code to see if I can somehow get some additional debugging statements out. Unfortunately I can't attach a debugger, as the problem can only be reproduced in a select couple of environments at the moment.
Take a look at the line:
if (++c % 500 == 0) {
pstmt.executeBatch();
}
What happens if that executes on the final iteration and then the loop terminates? You then call pstmt.executeBatch again with nothing in the batch.
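The batching logic can be simulated without a database to see the off-by-one. The helper below (names are made up for illustration) counts how many executeBatch() calls the loop would issue for a given number of rows, with and without a guard on the final flush:

```java
public class BatchFlushDemo {
    // Mirrors the loop in the question: flush every batchSize rows,
    // then flush once more unconditionally after the loop.
    static int unguardedCalls(int rows, int batchSize) {
        int calls = 0;
        for (int i = 1; i <= rows; i++) {
            if (i % batchSize == 0) calls++;
        }
        return calls + 1; // final flush happens even when the batch is empty
    }

    // Guarded version: only flush at the end if rows are still pending.
    static int guardedCalls(int rows, int batchSize) {
        int calls = 0, pending = 0;
        for (int i = 1; i <= rows; i++) {
            pending++;
            if (i % batchSize == 0) { calls++; pending = 0; }
        }
        if (pending > 0) calls++;
        return calls;
    }

    public static void main(String[] args) {
        // With 1000 rows and a batch of 500, the unguarded loop issues a
        // third, empty executeBatch() after the loop ends; the guarded
        // version stops at two.
        System.out.println(unguardedCalls(1000, 500));
        System.out.println(guardedCalls(1000, 500));
    }
}
```

An empty executeBatch() is usually harmless per the JDBC contract, so this alone may not explain the "statement closed" error, but tracking a pending counter and guarding the final flush removes the redundant call either way.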
I am doing an update to a DB2 table like this (java code):
// Some code ripped out for brevity...
sql.append("UPDATE " + TABLE_THREADS + " ");
sql.append("SET STATUS = ? ");
sql.append("WHERE ID = ?");
conn = getConn();
pstmt = conn.prepareStatement(sql.toString());
int idx1 = 0;
pstmt.setInt(++idx1, status);
pstmt.setInt(++idx1, id);
int rowsUpdated = pstmt.executeUpdate();
return rowsUpdated;
After a long while, I get a rollback and an error message:
UNSUCCESSFUL EXECUTION CAUSED BY DEADLOCK OR TIMEOUT. REASON CODE 00C9008E, TYPE OF RESOURCE 00000302, AND RESOURCE NAME SOME.THING.X'000002'. SQLCODE=-913, SQLSTATE=57033, DRIVER=3.57.82
The documentation for error -913 says this REASON CODE means it is a timeout. The resource type, 00000302 is a table space page, and I do not recognize the resource name at all.
When I run the SQL by itself, it works fine:
UPDATE MY.THREADS
SET STATUS = 1
WHERE ID = 156
I can SELECT and see the status has been updated. (Although when I run this SQL during the long wait period before the timeout, I have the same issue. It takes forever and I just cancel it).
There are several things happening in the transaction and I don't see any other updates to this table or record. There are create/delete triggers on the table, but no update triggers. I don't see any selects with cursors, or weird isolation level changes. I don't see much else in the transaction that would cause this.
Why am I getting this error? What else should I look for in the transaction?
EDIT:
I stepped through the code from the beginning of the request to where it gets 'stuck'. It seems as if there are 2 DAO's and both of them are creating a transaction. I think that might be the problem.
Sorry to answer my own question, but I found out the problem. This is a somewhat homemade framework where a DAO keeps track of its own connection.
conn = getConn();
This will return the same connection for each DAO method while in an explicit transaction.
While I was stepping through the code, I found out that a method I was calling in my transaction was creating a new transaction, a new DAO, and therefore a new DB connection. So now I have 2 transactions open and 2 connections. It's easy to see at this point, that I am in fact deadlocking myself.
This caught me a little by surprise since the previous app I worked on allowed nested transactions. (Using the same DB connection for both transactions)
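The self-deadlock can be reproduced in miniature without DB2. In the sketch below (all names are illustrative), a Semaphore stands in for the row/page lock: once "transaction 1" holds it, a second transaction on a second connection in the same thread can only wait until its lock timeout fires, much like the -913:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SelfDeadlockDemo {
    public static void main(String[] args) throws Exception {
        Semaphore rowLock = new Semaphore(1); // stands in for the DB lock on the row/page

        rowLock.acquire(); // "transaction 1" (connection 1) updates the row, holds the lock

        // Same thread opens a second transaction on a second connection and
        // tries to touch the same row. It cannot proceed: the lock holder is
        // this very thread, which is now blocked waiting, so only the lock
        // timeout can break the wait.
        boolean acquired = rowLock.tryAcquire(2, TimeUnit.SECONDS);
        System.out.println(acquired ? "acquired" : "timed out waiting on our own lock");
    }
}
```

This is why frameworks that support nested transactions reuse the same connection for the inner transaction: on one connection the inner update simply sees the locks the outer transaction already holds.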
SQL connections will time out, so you need to tell the pool to test a connection before executing a query; it can then reconnect if the connection is no longer open.
I only have code for Apache Commons DBCP pooling, but here is what I do with my own connections. The important lines are connectionPool.setTestOnBorrow(true); and specifying the validation query with factory.setValidationQuery("select 1");.
GenericObjectPool connectionPool = new GenericObjectPool(null);
...
connectionPool.setTestOnBorrow(true); // test each connection before it is borrowed from the pool
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(connectURI, username, password);
KeyedObjectPoolFactory statementPool = new GenericKeyedObjectPoolFactory(null);
final String validationQuery = null; // set below via the setter instead
PoolableConnectionFactory factory = new PoolableConnectionFactory(connectionFactory, connectionPool,
        statementPool, validationQuery, defaultReadOnly, defaultAutoCommit);
factory.setValidationQuery("select 1"); // validate each borrowed connection with this statement