Java Scheduled Job, timeout during long SQL transaction

Overview
I have a @Scheduled job (using a cron expression) that calls a service method which runs some transactions against an Oracle database.
Problem
The problem is that in some cases (big data processing) those transactions may last a long time (~17 minutes).
With transactions of that duration, I get the following exceptions:
SQLTimeoutException: ORA-01013: user requested cancel of current operation
QueryTimeoutException: PreparedStatementCallback;
The problem occurs both with Spring Boot and on a standalone Tomcat server.
Question
How can I avoid this behavior in my case, and which approach would be best?
As far as I know, it is possible to set a query timeout, but according to
this, there is no query timeout limit by default.
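For reference, a minimal sketch of making those timeouts explicit instead of relying on defaults. JdbcTemplate.setQueryTimeout and @Transactional(timeout = ...) are standard Spring APIs; the service name, cron expression, SQL, and the 30-minute budget are placeholders, not the asker's actual code:

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class BatchService { // hypothetical name

        private final JdbcTemplate jdbcTemplate;

        public BatchService(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
            // 0 means "no limit" per java.sql.Statement.setQueryTimeout;
            // set explicitly so a pool or framework default cannot surprise you
            this.jdbcTemplate.setQueryTimeout(0);
        }

        @Scheduled(cron = "0 0 2 * * *") // hypothetical schedule
        @Transactional(timeout = 30 * 60) // give the ~17-minute run 30 minutes of headroom
        public void processBigData() {
            jdbcTemplate.update("UPDATE big_table SET processed = 1"); // placeholder SQL
        }
    }

Whether a transaction-level timeout like this is appropriate depends on where the ORA-01013 cancel is actually coming from (driver, pool, or framework), so treat this as one knob to check rather than the definitive fix.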

Related

Why there is much time cost between took_millis and timeout in elasticsearch slow query

I am seeing some slow queries in the production environment, and I configured the slow log to capture information about them, like this (a query_string query with a 500ms timeout):
[2021-06-21T10:43:33,930][DEBUG][index.search.slowlog.query.xxxx] [xxx][g][2] took[1s], took_millis[1043], total_hits[424690], types[top], stats[], search_type[QUERY_THEN_FETCH], total_shards[6], source[{query_string}]
In this case, the query timeout is 500ms, and the took_millis in the response is 1043ms.
As far as I know, the timeout applies only to the query phase, and the took value represents the execution time inside Elasticsearch, excluding some external phases (see "Query timing: 'took' value and what I'm measuring"). I have two questions:
First, why is there a 543ms gap (1043 - 500 = 543) between the timeout and took_millis?
Second, how can I find out in detail where the time between the timeout and took_millis was spent?
Thanks a lot!
Setting a timeout on a query doesn't ensure that the query is actually cancelled when its execution time surpasses that timeout. The Elasticsearch documentation states:
"By default, a running search only checks if it is cancelled or not on
segment boundaries, therefore the cancellation can be delayed by large
segments."
https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search.html#global-search-timeout
Check issues 3627, 4586 and 2929
This can explain the 543ms gap between timeout and took_millis: your query simply takes that long, and it is not cancelled in time.
To analyze the query execution and see what might be causing these long delays, you can rerun the query with the profile API. Note that if the slow execution of your query cannot be reproduced, this won't help you solve the issue. If your query runs fine most of the time, try to correlate these slow-running queries with external factors such as server load.
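As an illustration, a minimal sketch of re-running the query with "profile": true via the Elasticsearch low-level Java REST client, which returns a per-shard timing breakdown. The host, index name, and query text are placeholders:

    import org.apache.http.HttpHost;
    import org.apache.http.util.EntityUtils;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class ProfileQuery {
        public static void main(String[] args) throws Exception {
            try (RestClient client = RestClient.builder(
                    new HttpHost("localhost", 9200, "http")).build()) {
                Request request = new Request("GET", "/my-index/_search"); // placeholder index
                // "profile": true adds a detailed timing breakdown to the response
                request.setJsonEntity(
                    "{ \"profile\": true, \"query\": "
                    + "{ \"query_string\": { \"query\": \"foo AND bar\" } } }");
                Response response = client.performRequest(request);
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }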

How to SET LOCK MODE in a Java application

I am working on a Java web application that uses Weblogic to connect to an Informix database. In the application we have multiple threads creating records in a table.
It happens pretty often that it fails and the following error is thrown:
java.sql.SQLException: Could not do a physical-order read to fetch next row....
Caused by: java.sql.SQLException: ISAM error: record is locked.
I am assuming that both threads are trying to insert or update when the record is locked.
I did some research and found that there is a database option so that, instead of throwing an error, it waits for the lock to be released:
SET LOCK MODE TO WAIT;
SET LOCK MODE TO WAIT 17;
I don't think there is an option in JDBC to use this setting. How do I go about using it in my Java web app?
You can always just send that SQL straight up: create a Statement with createStatement() and execute exactly that SQL.
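A minimal sketch of that approach; dataSource is assumed to be whatever pool or driver source the application already has:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.sql.DataSource;

    public final class LockModeSetup {
        // Run once per connection: the lock mode is a session-scoped setting
        static void runWithLockWait(DataSource dataSource) throws SQLException {
            try (Connection conn = dataSource.getConnection();
                 Statement st = conn.createStatement()) {
                st.execute("SET LOCK MODE TO WAIT 17"); // wait up to 17s instead of failing fast
                // ... run the inserts/updates that were hitting "record is locked" here,
                //     on this same connection ...
            }
        }
    }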
The more 'normal' / modern approach to this problem is a combination of MVCC, the transaction level 'SERIALIZABLE', retry, and random backoff.
I have no idea if Informix is anywhere near that advanced, though. Modern DBs such as Postgres are (MySQL does not count as modern for the purposes of MVCC/serializable/retry/backoff and transactional safety).
Doing MVCC/Serializable/Retry/Backoff in raw JDBC is very complicated; use a library such as JDBI or JOOQ.
MVCC: A mechanism whereby transactions are shallow clones of the underlying data. 2 separate transactions can both read and write to the same records in the same table without getting in each other's way. Things aren't 'saved' until you commit the transaction.
SERIALIZABLE: A transaction level (also called isolation level), settable with jdbcDbObj.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE) - the safest level. If you know how version control systems work: you're asking the database to aggressively rebase everything so that the entire chain of commits is ordered into a single long line of events, each transaction acting as if it ran after the previous transaction completed. The simplest way to implement this level is to globally lock everything, which is, of course, very detrimental to multithreaded performance. In practice, good DB engines (such as Postgres) are smarter than that: multiple threads can run transactions simultaneously without being frozen waiting for locks; the DB engine instead checks whether what each transaction did (not just writing, also reading) is conflict-free with the simultaneous transactions. If yes, it is all allowed; if not, all but one of the conflicting transactions throw a retry exception. This is the only level that lets you do this sequence of events safely:
1. Fetch the balance of isaace's bank account.
2. Fetch the balance of rzwitserloot's bank account.
3. Subtract €10,- from isaace's number, failing if the balance is insufficient.
4. Add €10,- to rzwitserloot's number.
5. Write isaace's new balance to the DB.
6. Write rzwitserloot's new balance to the DB.
7. Commit the transaction.
Any level less than SERIALIZABLE will silently fail the job: if multiple threads do the above simultaneously, no SQLExceptions occur, but the sum of the balances of isaace and rzwitserloot will change over time (money is lost or created: between steps 1-2 and steps 5-7, another thread sets new balances, but those new balances are lost due to the writes in steps 5-7). With SERIALIZABLE, that cannot happen.
RETRY: The way smart DBs solve the problem is by failing (with a 'retry' error) all but one transaction: they check whether all SELECTs done by the entire transaction are unaffected by any transactions committed to the DB after this transaction was opened. If they are affected (some selects would have gone differently), the transaction fails. The point of this error is to tell the code that ran the transaction to just start from the top and do it again. Most likely this time there won't be a conflict and it will work. The assumption is that conflicts CAN occur but usually do not, so it is better to assume 'fair weather' (no locks, just do your stuff), check afterwards, and try again in the exotic scenario that there was a conflict, versus trying to lock rows and tables. Note that, for example, Ethernet works the same way (assume fair weather, recover from errors afterwards).
BACKOFF: One problem with retry is that computers are too consistent: If 2 threads get in the way of each other, they can both fail, both try again, just to fail again, forever. The solution is that the threads twiddle their thumbs for a random amount of time, to guarantee that at some point, one of the two conflicting retriers 'wins'.
In other words, if you want to do it 'right' (see the bank account example) but also relatively 'fast' (no global locking), get a DB that can do this, and use JDBI or JOOQ. Otherwise, you'd have to write code that runs all DB work in a lambda block, catches the SQLException, checks the SQLState to see whether it indicates you should retry (SQLState codes are DB-engine specific), and if so, reruns that lambda after waiting an exponentially increasing amount of time that also includes a random factor. That's fairly complicated, which is why I strongly advise you to rely on JOOQ or JDBI to take care of this for you.
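To make the shape of that loop concrete, a rough sketch in raw JDBC. The SQLSTATE "40001" check, the 5-attempt cap, and the backoff numbers are assumptions; as noted above, serialization-failure codes are DB-engine specific:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.concurrent.ThreadLocalRandom;
    import javax.sql.DataSource;

    public final class SerializableRetry {

        @FunctionalInterface
        public interface TxWork<T> { T run(Connection conn) throws SQLException; }

        public static <T> T inSerializableTx(DataSource ds, TxWork<T> work)
                throws SQLException, InterruptedException {
            long backoffMillis = 50;
            for (int attempt = 1; ; attempt++) {
                try (Connection conn = ds.getConnection()) {
                    conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                    conn.setAutoCommit(false);
                    try {
                        T result = work.run(conn);
                        conn.commit();
                        return result;
                    } catch (SQLException e) {
                        conn.rollback();
                        // "40001" is the usual serialization-failure SQLSTATE,
                        // but check your driver's documentation
                        if (!"40001".equals(e.getSQLState()) || attempt >= 5) {
                            throw e;
                        }
                    }
                }
                // exponential backoff with a random factor, so two conflicting
                // retriers eventually diverge and one of them wins
                Thread.sleep(backoffMillis + ThreadLocalRandom.current().nextLong(backoffMillis));
                backoffMillis *= 2;
            }
        }
    }

This is roughly the bookkeeping that JDBI or JOOQ would otherwise do for you.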
If you aren't ready for that level of DB usage, just create a statement and send "SET LOCK MODE TO WAIT 17;" as an SQL statement straight up, at the start of opening any connection. If you're using a connection pool, there is usually a place where you can configure SQL statements to be run on connection start.
The Informix JDBC driver does allow you to automatically set the lock wait mode when you connect to the server.
Simply pass the following parameter via the DataSource or the connection URL:
IFX_LOCK_MODE_WAIT=17
The values for JDBC are:
-1: wait forever
0: do not wait (the default)
> 0: wait this many seconds
See https://www.ibm.com/support/knowledgecenter/SSGU8G_14.1.0/com.ibm.jdbc.doc/ids_jdbc_040.htm
Connection conn = DriverManager.getConnection(
    "jdbc:Informix-sqli://cleo:1550:IFXHOST=cleo;PORTNO=1550;"
    + "user=rdtest;password=my_passwd;IFX_LOCK_MODE_WAIT=17");

Hibernate internal functioning

I'm working on Spring + Hibernate applications. We have applied transactions to the service class methods. My questions:
Can Hibernate execute all DB statements of the service method over a single connection/session, or does it use multiple connections? If it uses multiple connections, is it possible to roll back DB statements across multiple connections in case of a runtime exception?
Suppose the service method's business logic takes more time than the removeAbandonedTimeout value; how does the commit/rollback happen on the abandoned connection?
Please correct me if I'm wrong anywhere. Thanks in advance.
UPDATE:
If a query takes more time than removeAbandonedTimeout, it throws an exception. Suppose my service method has two DB calls, with some business logic (no DB calls) between them. Before executing the first query it creates a DB connection; assume the first DB call takes 1 second and the business logic then takes 60 seconds. If the connection is abandoned at this moment (with removeAbandonedTimeout set to 60 seconds), does it create another connection to execute the second query? If the second query fails, it has to roll back the first query, as they both share the same transaction. How could that happen with an abandoned connection?
Hibernate will do what you told it to:
When integrating with Spring, it will use Spring-managed transactions. Check where you are opening/closing the transaction (use of @Transactional on a public method, or directly using TransactionTemplate): all Hibernate queries inside will run in the same transaction.
This is a question about your connection pool (DBCP?). Do not activate removeAbandonedTimeout / removeAbandonedOnMaintenance; from the documentation:
"Setting one or both of these to true can recover db connections from poorly written applications which fail to close connections."
If you are using the typical Spring/Hibernate coding pattern, you are not writing your application poorly (from a database-resource point of view), so there is no need to activate this flag: let your long-running thread keep its transaction. If an operation runs so slowly without issuing any DB query that it would trigger the DBCP cleanup, you have two options:
Don't keep the connection to the database: split your transaction in two, one before the long-running process and one after.
If you need to keep the transaction open (because you hold a database lock you do not want to lose, for example), try to optimize the code in between, using a CPU sampling tool for example.
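To make the first point of this answer concrete, here is a minimal sketch of the usual Spring-managed transaction pattern; the service and entity are hypothetical. All statements inside the @Transactional method run on one Spring-managed session/connection, and a RuntimeException rolls them back together:

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Entity
    class Account { // hypothetical entity
        @Id Long id;
        long balanceCents;
    }

    @Service
    public class TransferService { // hypothetical service

        @PersistenceContext
        private EntityManager em;

        // Both finds and both updates share the same connection and transaction;
        // a RuntimeException here rolls all of it back as one unit.
        @Transactional
        public void transfer(long fromId, long toId, long amountCents) {
            Account from = em.find(Account.class, fromId);
            Account to = em.find(Account.class, toId);
            from.balanceCents -= amountCents;
            to.balanceCents += amountCents;
            // Hibernate dirty checking flushes both updates at commit time
        }
    }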
Answers to your questions:
All DB statements of the service run on a single connection; we cannot roll back DB statements across multiple connections at a time.
In general, the removeAbandonedTimeout value should be set longer than the longest-running query your application might have. If removeAbandonedTimeout is exceeded and removeAbandoned = true, the DB connection can be reclaimed.
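For reference, a sketch of those pool settings using Apache Commons DBCP2 property names; the URL is a placeholder, and older DBCP 1.x used slightly different flags such as removeAbandoned:

    import org.apache.commons.dbcp2.BasicDataSource;

    public final class PoolConfig {
        static BasicDataSource dataSource() {
            BasicDataSource ds = new BasicDataSource();
            ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder URL
            // Leave abandoned-connection reclamation off for well-behaved code:
            ds.setRemoveAbandonedOnMaintenance(false);
            ds.setRemoveAbandonedOnBorrow(false);
            // If you do enable it, size the timeout past your longest unit of work:
            // ds.setRemoveAbandonedTimeout(300); // seconds
            ds.setLogAbandoned(true); // log a stack trace when a connection is reclaimed
            return ds;
        }
    }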

How to resolve java.sql.SQLException distributed transaction waiting for lock

We are using Oracle 11G and JDK1.8 combination.
In our application we are using XAConnection, XAResource for DB transaction.
i.e. distributed transactions.
On a few occasions we need to kill our Java process to stop the application.
After the kill, if we restart our application, we get the below exception while doing DB transactions:
java.sql.SQLException: ORA-02049: timeout: distributed transaction waiting for lock
After this, for a few hours, we are unable to use our application until the lock is released.
Can someone provide a solution so that we can continue working instead of waiting for the lock to be released?
I have tried the below option:
a) Fetched the SID and killed the session using the ALTER SYSTEM KILL SESSION command. Even after this, the table lock is not released.
I am dealing with a very small amount of data.
I followed a topic similar to this, with tips about what to do with distributed connections.
Oracle connections remain open until you end your local session or until the number of database links for your session exceeds the value of OPEN_LINKS. To reduce the network overhead associated with keeping a database link open, use this clause to close the link explicitly if you do not plan to use it again in your session.
I believe that, by closing your connections and sessions after DDL execution, this issue should not happen.
Another possibility is given in this question:
One possible way might be to increase the INIT.ORA parameter for distributed_lock_timeout to a larger value. This would then give you a longer time to observe the v$lock table as the locks would last for longer.
To achieve automation of this, you can either:
- Run an SQL job every 5-10 seconds that logs the values of v$lock, or the query that sandos has given above, into a table, and then analyze it to see which session was causing the lock.
- Run a STATSPACK or an AWR report. The sessions that got locked should show up with high elapsed time and hence can be identified.
v$session has 3 more columns, blocking_instance, blocking_session and blocking_session_status, that can be added to the query above to give a picture of what is getting locked.
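For illustration, a small JDBC sketch that reads those columns; it assumes a session with SELECT privileges on v$session:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public final class BlockerReport {
        static void printBlockedSessions(Connection conn) throws SQLException {
            String sql = "SELECT sid, username, blocking_session, blocking_session_status "
                       + "FROM v$session WHERE blocking_session IS NOT NULL";
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("session %d (user %s) blocked by session %d (%s)%n",
                        rs.getLong("sid"), rs.getString("username"),
                        rs.getLong("blocking_session"),
                        rs.getString("blocking_session_status"));
                }
            }
        }
    }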
I hope I helped you, my friend.

How do I kill a long-running servlet process from another servlet?

I have a Java servlet that runs a database query that may take several minutes to run before attempting to write to the response stream. During the database query the user may have disconnected (thus making the query useless).
Since I can't kill it in the database thread, I'm trying to kill it from another thread. Do any of the servlet containers provide a recommended way of "canceling" or "killing" a request thread? If I carry a reference to the Thread around, can I force an interrupt or similar?
Your question is not really about Java threads; it is about killing the database query in the database. As far as I understand, the client sends an HTTP request to a servlet that opens a JDBC connection and runs a query that takes a long time. So Java is not doing the work during this time; the DB is. This means you have to kill the DB query in the DB. How to do this depends on your database. MySQL, for example, has a command-line shell that allows retrieving the list of current queries and terminating them. So your second servlet may connect to MySQL, retrieve the running queries, identify which one should be killed (this is application-specific functionality), and kill it. I believe that once you do this, the first servlet will get a JDBC exception and can exit.
This is how to show the list of running queries:
http://www.electrictoolbox.com/show-running-queries-mysql/
Here is how to kill query:
http://dev.mysql.com/doc/refman/5.0/en/kill.html
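Sketched in JDBC, those two steps amount to something like the following; the fragment-matching is a hypothetical, application-specific way to identify the victim, and KILL QUERY terminates the statement while leaving the connection alive:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public final class QueryKiller {
        static void killMatching(Connection admin, String fragment) throws SQLException {
            long victimId = -1;
            try (Statement st = admin.createStatement();
                 ResultSet rs = st.executeQuery("SHOW FULL PROCESSLIST")) {
                while (rs.next()) {
                    String info = rs.getString("Info"); // the running SQL text, if any
                    if (info != null && info.contains(fragment)) {
                        victimId = rs.getLong("Id");
                    }
                }
            }
            if (victimId != -1) {
                try (Statement st = admin.createStatement()) {
                    st.execute("KILL QUERY " + victimId);
                }
            }
        }
    }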
And a last note that probably should be the first: check why your query is taking so long. IMHO, in most cases it means that your schema is not optimal or some index is missing. Generally, if your query takes more than 0.1 seconds, check your DB schema.
If you are running an hour-long DB query, you should not be calling it from a servlet in the first place, as your response stream will time out and you will get a 504.
May I know what this query is doing? Something involving calculation and large updates or inserts?
You should try placing this query in a DB job.
You can use java.sql.Statement.cancel(). You will have to have the running statements registered somewhere (ServletContext or whatever structure you find fit) and unregistered upon completion.
The JDBC driver must support cancel() as well; I don't know whether PostgreSQL supports it, though.
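A rough sketch of that register/cancel pattern across two servlets; the jobId parameter, the attribute naming, the placeholder query, and the Db helper are assumptions for illustration:

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ReportServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String jobId = req.getParameter("jobId");
            try (Connection conn = Db.getConnection(); // hypothetical helper
                 PreparedStatement ps = conn.prepareStatement("SELECT * FROM big_table")) { // placeholder
                getServletContext().setAttribute("stmt:" + jobId, ps); // register for cancellation
                try (ResultSet rs = ps.executeQuery()) { // blocks; throws SQLException if cancelled
                    // ... write results to resp ...
                } finally {
                    getServletContext().removeAttribute("stmt:" + jobId); // unregister on completion
                }
            } catch (Exception e) {
                resp.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
                    "query failed or was cancelled");
            }
        }
    }

    class CancelServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            Statement running = (Statement)
                getServletContext().getAttribute("stmt:" + req.getParameter("jobId"));
            if (running != null) {
                try { running.cancel(); } catch (Exception ignored) {} // ask the driver to cancel
            }
            resp.getWriter().println("cancel requested");
        }
    }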
