I'm attempting to provide a useful error message to users of a system where some queries may take a long time to execute. I've set a transaction timeout using Spring's @Transactional(timeout = 5).
This works as expected, but the caller of the annotated method receives a JpaSystemException ("could not extract ResultSet"), caused by a GenericJDBCException ("could not extract ResultSet"), in turn caused by a PSQLException ("ERROR: canceling statement due to user request").
As far as I can tell, the exception is a result of a statement being cancelled by Hibernate and the JDBC driver once the timeout has been exceeded.
Is there any way I can determine that the exception was a result of the transaction timeout being exceeded so that I can provide an error message to the user about why their query failed?
The application is using Spring Framework 4.2.9, Hibernate 4.3.11, and Postgres JDBC driver 9.4.1209. I realise these are quite old; if newer versions make handling this situation easier, I would be interested to know.
How about checking the exception message of the cause of the cause, either against a regex pattern or simply for a known substring? For example:
exception.getCause().getCause().getMessage().equals("ERROR: canceling statement due to user request")
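Calling getCause().getCause() directly will throw a NullPointerException if the chain is ever shorter than expected, so a safer variant of the same idea is to walk the entire cause chain. Below is a minimal sketch (the class and method names are mine, not from the question); besides matching the message text, it also checks for PostgreSQL's SQLSTATE 57014, the code for query_canceled, which is less brittle than string comparison:

import java.sql.SQLException;

public final class QueryTimeoutDetector {

    // PostgreSQL SQLSTATE for "query_canceled"
    private static final String QUERY_CANCELED_SQLSTATE = "57014";

    // Returns true if any exception in the cause chain indicates a canceled statement.
    public static boolean isStatementCanceled(Throwable t) {
        for (Throwable cause = t; cause != null; cause = cause.getCause()) {
            if (cause instanceof SQLException
                    && QUERY_CANCELED_SQLSTATE.equals(((SQLException) cause).getSQLState())) {
                return true;
            }
            String message = cause.getMessage();
            if (message != null && message.contains("canceling statement due to user request")) {
                return true;
            }
        }
        return false;
    }
}

The caller can then catch the JpaSystemException, pass it to isStatementCanceled, and show a "your query timed out" message instead of the raw JDBC error.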
I have a REST API in Spring Boot using Hikari for connection pooling. Hikari is used with the default configuration (10 connections in the pool, 30-second timeout waiting for a connection). The API itself is very simple:
It first makes a JPA repository query to fetch some data from a Postgres DB. This part takes about 15-20 milliseconds.
It then sends this data to a remote REST API that is slow and can take upwards of 120 seconds.
Once the remote API responds, my API returns the result to the client. A simplified version is shown below.
public ResponseEntity<Analysis> analyseData(int companyId) {
    Company company = companyRepository.findById(companyId).orElseThrow(); // takes ~20 ms
    Analysis analysis = callRemoteRestAPI(company.data);                   // takes ~120 s
    return ResponseEntity.status(200).body(analysis);
}
The code does not have any @Transactional annotations. I find that the JDBC connection is held for the entire duration of my API call (i.e., ~120 s), so if we get more than 10 concurrent requests, the rest time out waiting on the Hikari connection pool (30 s). But strictly speaking, my API does not need the connection after the JPA query is done (step 1 above).
Is there a way to get Spring to release this connection immediately after the query instead of holding it until the entire API call finishes processing? Can Spring be configured to acquire a connection for every JPA query? That way, if I have multiple JPA queries interspersed with very slow operations, server throughput is not affected and it can handle more than 10 concurrent API requests.
Essentially the problem is caused by Spring's OpenSessionInViewFilter, which "binds a Hibernate Session to the thread for the entire processing of the request". This effectively acquires a connection from the pool when the first JPA query is executed and then holds on to it until the request has been processed.
This page, https://www.baeldung.com/spring-open-session-in-view, provides a clear and concise explanation of the feature. It has its pros and cons, and current opinion seems to be divided on its use.
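If you decide the open-session-in-view behaviour isn't needed, Spring Boot exposes a single property to switch it off; the connection is then returned to the pool as soon as the repository call completes (assuming there is no surrounding @Transactional holding it open):

# application.properties: disable Open Session in View so the Hibernate
# session (and its pooled JDBC connection) is not held for the whole request
spring.jpa.open-in-view=false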
I am getting this exception when I use the below code from Java:
db.getCollection(collectionName).renameCollection(session, new MongoNamespace(db.getName(), newCollectionName));
I went through the MongoDB documentation, which mentions that some operations are restricted in multi-document transactions.
If the operation is restricted, why does this method take a session as input?
How can I execute this in the middle of a transaction?
I get the same error with listCollectionNames and createCollection as well.
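For what it's worth, a hedged sketch of the usual workaround: renameCollection is one of the DDL-style operations MongoDB disallows inside a multi-document transaction, so it has to run outside the transaction (the session-taking overload exists because ClientSession is also used for causal consistency, not only for transactions). Assuming the variables from the snippet above:

// Commit (or abort) the transaction first, then rename outside of it.
session.commitTransaction();
db.getCollection(collectionName)
        .renameCollection(new MongoNamespace(db.getName(), newCollectionName));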
All,
I am working on a project to upgrade Hibernate from 4.1.4.FINAL to 5.2.17.FINAL. We have a bunch of Sybase stored procedures executed using org.hibernate.jdbc.Work. These stored procedures raise errors with valid error codes like 20010; the raised error messages are caught and displayed in the UI. Here is the Sybase syntax for raising errors:
raiserror 20005 'Invalid'
I see that the new version of Hibernate delegates SQLExceptions to be converted into a specific exception in the JDBCException hierarchy. See:
org.hibernate.exception.internal.StandardSQLExceptionConverter
If it doesn't find a specific exception, it creates a GenericJDBCException with a default message. For example, see
org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork
Here the SQLException is caught and the convert method is called with the message 'error executing work', so genericJDBCException.getMessage() returns this generic message.
I know that GenericJDBCException.getSQLException().getMessage() will give the actual SQLException message, but it is not feasible to change the existing calling code.
Is there a way to add our own delegate so that I can check the error code and return an exception carrying the message from the SQLException? Or is there a better way to handle this?
Thanks
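For what it's worth, here is a hedged sketch of one possible approach based on the Hibernate 5.2 SPI: a Dialect can supply a custom SQLExceptionConversionDelegate, which StandardSQLExceptionConverter consults before falling back to its generic handling. The dialect base class and the error-code cutoff below are assumptions; verify them against the Sybase dialect and error-code range you actually use.

import java.sql.SQLException;

import org.hibernate.JDBCException;
import org.hibernate.dialect.SybaseASE15Dialect;
import org.hibernate.exception.GenericJDBCException;
import org.hibernate.exception.spi.SQLExceptionConversionDelegate;

public class CustomSybaseDialect extends SybaseASE15Dialect {

    @Override
    public SQLExceptionConversionDelegate buildSQLExceptionConversionDelegate() {
        return new SQLExceptionConversionDelegate() {
            @Override
            public JDBCException convert(SQLException sqlException, String message, String sql) {
                // Sybase user-defined raiserror codes start at 20000; keep the
                // original message instead of Hibernate's "error executing work".
                if (sqlException.getErrorCode() >= 20000) {
                    return new GenericJDBCException(sqlException.getMessage(), sqlException, sql);
                }
                return null; // not handled here: fall through to standard conversion
            }
        };
    }
}

Registering this class as the hibernate.dialect would then make getMessage() on the converted exception return the raiserror text, without touching the calling code.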
We are having a problem with a prepared statement in Java. The exception seems to be very clear:
Root Exception stack trace:
com.microsoft.sqlserver.jdbc.SQLServerException: The statement must be executed before any results can be obtained.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getGeneratedKeys(SQLServerStatement.java:1973)
at org.apache.commons.dbcp.DelegatingStatement.getGeneratedKeys(DelegatingStatement.java:315)
It basically states that we are trying to fetch the query results before the statement has been executed. Sounds plausible. Now, the code causing this exception is as follows:
...
preparedStatement.executeUpdate();
ResultSet resultSet = preparedStatement.getGeneratedKeys();
if (resultSet.next()) {
    retval = resultSet.getLong(1);
}
...
As you can see, we fetch the query result after we have executed the statement.
In this case, we try to get the generated key from the ResultSet of the INSERT query we just successfully executed.
Problem
We run this code on three different servers (load balanced, in Docker containers). Strangely enough, this exception only occurs on the third Docker server; the other two have never run into it.
Extra: the failing query is executed approximately 13,000 times per day (4,500 of those processed by server 3). Most of the time the query works fine on server 3 as well. Sometimes, let's say 20 times per day, the query fails. Always the same query, always the same server, never one of the other servers.
What we've tried
We checked the software versions, but these are all the same because all servers run the same Docker image.
We updated to the newest Microsoft SQL Server JDBC driver for Java.
We checked that all our PreparedStatements were constructed with the PreparedStatement.RETURN_GENERATED_KEYS parameter.
It looks like a server-configuration-related problem, since the Docker images are all the same, but we can't find the cause. Does anyone have suggestions as to what the problem might be? Or has anyone ever run into this problem as well?
As far as I know, getGeneratedKeys() is not supported by SQL Server in the case of batch execution.
Here is the feature request, which has not been implemented yet: https://github.com/Microsoft/mssql-jdbc/issues/245
My suggestion is that if, for some reason, the insert was executed as a batch on your third server, this could cause the exception you mention (while on the other two only single items were inserted).
You can try logging the SQL statements to check this.
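To make the suspected failure mode concrete, here is a hedged sketch (the table and column names are made up for illustration): with the Microsoft JDBC driver, getGeneratedKeys() works after a single executeUpdate() but, per the issue linked above, not after executeBatch().

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

void illustrate(Connection connection) throws Exception {
    try (PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO orders (customer_id) VALUES (?)",
            Statement.RETURN_GENERATED_KEYS)) {

        // Single execution: the generated key is available.
        ps.setLong(1, 42L);
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            if (keys.next()) {
                long id = keys.getLong(1);
            }
        }

        // Batch execution: with the SQL Server driver this is where
        // "The statement must be executed before any results can be
        // obtained." would be raised.
        ps.setLong(1, 43L);
        ps.addBatch();
        ps.executeBatch();
        ps.getGeneratedKeys();
    }
}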
The setup:
2-node Cassandra 1.2.6 cluster
replicas=2
very large CQL3 table with no secondary index
Row key is UUID.randomUUID().toString()
read consistency set to ONE
Using the DataStax Java driver 1.0
The request:
Attempting to do a table scan by "SELECT some-col from schema.table LIMIT nnn;"
The fail:
Once I go beyond a certain LIMIT nnn, I start getting NoHostAvailableExceptions from the driver.
It reads like this:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64)
at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:214)
at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:169)
at com.jpmc.es.rtm.storage.impl.EventExtract.main(EventExtract.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:98)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:165)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
Given: this is probably not the most enlightened thing to do to a large table with millions of rows, but this is how I learn what not to do, so I would really appreciate it if someone could volunteer how this kind of error can be debugged.
For example, when this happens, there is no indication that the nodes in the cluster ever had an issue with the request (there is nothing in the logs on either node indicating any timeout or failure). Also, I enabled tracing on the driver, which gives you some nice autotrace (à la Oracle) info as long as the query succeeds. But in this case the driver throws a NoHostAvailableException and no ExecutionInfo is available, so tracing has provided no benefit here.
I also find it interesting that this does not seem to be recorded as a timeout (my JMX consoles tell me no timeouts have occurred). So I am left not understanding WHERE the failure is actually occurring, with the idea that it is the driver that is having a problem, but I don't know how to debug it (and I would really like to).
I have read several posts from folks stating that querying for result sets of more than 10,000 rows is probably not a good idea, and I am willing to accept this, but I would like to understand what is causing the exception and where it is happening.
FWIW, I also tried bumping the timeout properties in cassandra.yaml, but this made no difference whatsoever.
I welcome any suggestions, anecdotes, insults, or monetary contributions for my registration in the house of moron-developers.
Regards!!
My guess (and perhaps others can confirm) is that the query puts too high a load on the cluster, which causes the timeout. So yes, it's a little difficult to debug, as the root cause isn't obvious: was the limit I set too large, or is the cluster actually down?
You want to avoid setting large limits on the amount of data you request in a single query, typically by setting a reasonable limit and paging through the results, e.g.:
SELECT * FROM messages WHERE user_id = 101 LIMIT 1000;
SELECT * FROM messages WHERE user_id = 101 AND msg_id > [Last message ID received] LIMIT 1000;
The automatic paging functionality added to the DataStax java-driver (see this document, from which the code example in this answer is copied) is a big improvement, as it removes the need to page manually and lets you do the following:
Statement stmt = new SimpleStatement("SELECT * FROM images");
stmt.setFetchSize(100);
ResultSet rs = session.execute(stmt);
for (Row row : rs) {
    // process each row; the driver fetches further pages of 100 transparently
}
While this won't necessarily solve your problem, it will minimise the possibility that it was a "too-big" query.