I have a Java servlet that runs a database query that may take several minutes to run before attempting to write to the response stream. During the database query the user may have disconnected (thus making the query useless).
Since I can't kill it in the database thread, I'm trying to kill it from another thread. Do any of the servlet containers provide a recommended way of "canceling" or "killing" a request thread? If I carry a reference to the Thread around, can I force an interrupt or similar?
Your question is not really about Java threads; it is about killing the query inside the database. As far as I understand, the client sends an HTTP request to a servlet, which opens a JDBC connection and runs a query that takes a long time. During that time Java is not doing the work; the database is. This means you have to kill the query in the database itself. How you do that depends on your database. MySQL, for example, has a command-line shell that allows retrieving the list of currently running queries and terminating them. So a second servlet can connect to MySQL, retrieve the running queries, identify which one should be killed (this is application-specific functionality), and kill it. I believe that once you do this, the first servlet will get an SQLException and can exit.
This is the way to show list of running queries:
http://www.electrictoolbox.com/show-running-queries-mysql/
Here is how to kill query:
http://dev.mysql.com/doc/refman/5.0/en/kill.html
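The second-servlet approach can be sketched with plain JDBC. `SHOW PROCESSLIST` and `KILL QUERY <id>` are real MySQL statements; the `isTarget` predicate is a hypothetical placeholder for your application-specific logic that decides which query to kill.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.function.Predicate;

public class QueryKiller {

    /**
     * Scans MySQL's process list and issues KILL QUERY for every row whose
     * SQL text matches the supplied predicate. Returns the number of
     * queries killed.
     */
    public static int killMatching(Connection adminConn, Predicate<String> isTarget)
            throws SQLException {
        int killed = 0;
        try (Statement list = adminConn.createStatement();
             ResultSet rs = list.executeQuery("SHOW PROCESSLIST")) {
            while (rs.next()) {
                long id = rs.getLong("Id");
                String info = rs.getString("Info"); // the running SQL; may be null
                if (info != null && isTarget.test(info)) {
                    try (Statement kill = adminConn.createStatement()) {
                        // KILL QUERY aborts the statement but keeps the session alive
                        kill.executeUpdate("KILL QUERY " + id);
                        killed++;
                    }
                }
            }
        }
        return killed;
    }
}
```

The executing servlet's thread then sees an SQLException and can clean up.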
And one last note that probably should have been the first: check why your query takes so long in the first place. IMHO, in most cases it means your schema is not optimal or some index is missing. Generally, if your query takes more than 0.1 seconds, review your DB schema.
If you are running an hour-long DB query, you should not be calling it from a servlet in the first place: your response stream will time out and you will get a 504.
May I know what this query is doing? Something involving calculations and large updates or inserts?
You should try placing this query in a DB job instead.
You can use java.sql.Statement.cancel(). You will have to register the running statements somewhere (the ServletContext or whatever structure you find fit) and unregister them upon completion.
The JDBC driver must support this method (cancel()) as well; I don't know whether PostgreSQL's driver supports it, though.
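A minimal sketch of that registration idea. The `StatementRegistry` class and its method names are my own invention; in a servlet container the map would typically live in the `ServletContext`, keyed by something like a request or session id:

```java
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StatementRegistry {

    // One entry per in-flight request; in a servlet container this map
    // would typically be stored in the ServletContext.
    private final Map<String, Statement> running = new ConcurrentHashMap<>();

    /** Register a statement just before executing it. */
    public void register(String requestId, Statement stmt) {
        running.put(requestId, stmt);
    }

    /** Unregister when the statement completes (call from a finally block). */
    public void unregister(String requestId) {
        running.remove(requestId);
    }

    /**
     * Cancel the statement of a disconnected request, if it is still
     * running. Statement.cancel() is called from another thread; the
     * executing thread receives an SQLException when the driver aborts
     * the query.
     */
    public boolean cancel(String requestId) {
        Statement stmt = running.remove(requestId);
        if (stmt == null) {
            return false;
        }
        try {
            stmt.cancel();
            return true;
        } catch (SQLException e) {
            return false; // the driver may not support cancel()
        }
    }
}
```

The query thread should still wrap its execute call in a try/finally that calls `unregister`, so completed statements don't linger in the map.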
If query execution takes a long time, the result never reaches the Java layer. If we run the query in Oracle SQL Developer, the query gets timed out there as well. If anybody can help with this, that would be great.
Two things can happen:
1) In your Java layer it takes too much time to turn those DB results into actual POJOs. You might think the results are not coming, but most of the time it is the processing of that data that takes too long, because the same query executes really fast in Oracle dev tools.
2) Oracle has locks, and maybe another thread or a request from a web application is holding on to a database record, so the entire process waits for the table or row to be released.
Overview
I have an @Scheduled job (using cron) that calls a service method which performs some transactions against an Oracle database.
Problem
The problem is that in some cases (big data processing) those transactions may last a long time (~17 minutes).
With that duration, I get the following exceptions:
SQLTimeoutException: ORA-01013: user requested cancel of current
operation
QueryTimeoutException: PreparedStatementCallback;
The problem appears both under spring-boot and on a Tomcat server.
Question
How can I avoid that behavior in my case, and which way would be best?
As far as I know, it is possible to set up a query timeout, but according to
this, by default there is no limit on query time.
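If the timeout comes from the framework configuration rather than from Oracle itself, it is usually adjustable. For a Spring Boot application, the following two properties (both standard Spring Boot properties, in seconds) control the auto-configured JdbcTemplate query timeout and the default `@Transactional` timeout; the 1200-second value is only an illustrative choice for an ~17-minute job, not a recommendation:

```properties
# Query timeout applied by the auto-configured JdbcTemplate, in seconds
spring.jdbc.template.query-timeout=1200

# Default transaction timeout, in seconds, for @Transactional methods
spring.transaction.default-timeout=1200
```

Check your own configuration, connection pool, and any custom JdbcTemplate/TransactionManager beans first, since any of them may set a shorter limit.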
Is it possible to call a stored procedure asynchronously using Hibernate? The used connection should be released after triggering the stored procedure while the query continues to run in the background on PostgreSQL. The stored procedure should then write a result in a dedicated table from which we can collect it later. Main motive is to prevent connection exhaustion of c3p0 connection pool.
I'm not sure you can do that with Hibernate. If you call the procedure, the control flow of your program will pause until it finishes. However, you could try to make the stored procedure itself asynchronous. That way, when you make the call with Hibernate, it will ask the DB to start an async job and your connection will be released immediately. You will also have to run some queries to find out whether your stored procedure has finished.
There is no support within JDBC for asynchronous execution, so Hibernate does not offer such a feature either. If you want to return directly from a long-running stored procedure call, make that call in a separate thread, perhaps using a ThreadPoolExecutor. This lets the thread which originally triggered the call proceed. In this case you can most likely also process the result of the stored procedure in the executing thread, instead of writing it into a temp table and then reading from there. However, this will depend on your use case.
You will, however, need a connection for that, so it won't help you with running out of connections in the pool. Is this an actual problem? Have you configured the c3p0 pool correctly, as well as the underlying database?
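The separate-thread approach can be sketched like this. The `Supplier` passed to `callAsync` is a hypothetical stand-in for your Hibernate/JDBC stored procedure call; the class and method names are my own:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Supplier;

public class AsyncProcCaller {

    // Small pool dedicated to long-running procedure calls; size it to the
    // number of concurrent calls you can afford connections for.
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    /**
     * Triggers the (long-running) call on a worker thread and returns
     * immediately. The caller can poll or block on the Future later.
     */
    public <T> Future<T> callAsync(Supplier<T> callProcedure) {
        return pool.submit(callProcedure::get);
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

Note that the worker thread still holds a JDBC connection while the procedure runs, which is why this does not by itself relieve connection pool exhaustion.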
I have a typical scenario and need to understand the best possible way to handle it, so here it goes:
I'm developing a solution that retrieves data from a remote SOAP-based web service and then pushes this data to an Oracle database on the network.
Also, this will be a scheduled task that will execute every 15 minutes.
I have event queues on the remote service that contain the INSERT/UPDATE/DELETE operations done since the last retrieval; once I retrieve the events for the last 15 minutes, it again adds events for the next retrieval.
Now, it's just pushing data to Oracle, so all my interactions are INSERT and UPDATE statements.
There are around 60 tables in Oracle, some of them having 100+ columns. Moreover, for every 15-minute cycle there would be around 60-70 inserts, 100+ updates and 10-20 deletes.
This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle.
So, I need to understand how I should handle WRITE operations (best practices) to improve the performance of this application as a whole.
Current Test Code (on every cycle) -
Connects to the remote service to get the events.
Creates a connection to the DB (a single connection object).
Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it was done.
After that, calls the respective method based on the type of operation and the table.
Uses a PreparedStatement with positional parameters, retrieves each column value from the remote service, and assigns it to the statement parameters.
Commits the statement and returns to the event-retrieval class to process the next event.
The above is repeated until all the retrieved events are processed, after which the program closes and then starts again on the next cycle, and everything repeats.
Thanks for the help!
If you are inserting or updating one row at a time, consider executing a batch insert or a batch update instead. It is well established that once you batch beyond a certain number of rows, you get much better performance.
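That batching can be sketched with plain JDBC `addBatch`/`executeBatch`. The SQL, table, and column names below are hypothetical; each `Object[]` holds one row's positional parameter values:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchWriter {

    /**
     * Inserts all rows in one batch round-trip instead of one statement
     * per row. Returns the per-row update counts reported by the driver.
     */
    public static int[] insertBatch(Connection conn, String sql, List<Object[]> rows)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (Object[] row : rows) {
                for (int i = 0; i < row.length; i++) {
                    ps.setObject(i + 1, row[i]); // JDBC parameters are 1-based
                }
                ps.addBatch();
            }
            return ps.executeBatch(); // single round-trip for the whole batch
        }
    }
}
```

With Oracle it can also pay to commit once per batch rather than once per row.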
The number of DB operations you are talking about (200 every 15 minutes) is tiny and will be easy to finish in less than 15 minutes. Some concrete suggestions:
You should profile your application to understand where it is spending its time. If you don't do this, then you don't know what to optimize next and you don't know if something you did helped or hurt.
If possible, try to get all of the events in one round-trip to the remote server.
You should reuse the connection to the remote service (probably by using a library that supports connection persistence and reuse).
You should reuse the DB connections by using a connection pooling library rather than creating a new connection for each insert/update/delete. Believe it or not, creating the connection probably takes 100+ times as long as doing your DB operation once you have the connection in hand.
You should consider doing multiple (or all) of the database operations in the same transaction rather than creating a new transaction for each row that is changed. However, you should carefully consider your failure modes such that you don't lose any events (if that is an important consideration).
You should consider utilizing prepared statement caching. This may help, but maybe not if Oracle is configured properly.
You should consider trying to analyze your operations to find any that can be batched together. This can be a lot faster if you have some "hot" operations that get done often.
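The transaction-grouping suggestion above can be sketched as follows. The `EventApplier` interface and `applyCycle` naming are my own, standing in for your per-event insert/update/delete methods:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;

public class CycleWriter {

    public interface EventApplier {
        void apply(Connection conn, Object event) throws SQLException;
    }

    /**
     * Applies all events of one 15-minute cycle in a single transaction:
     * one commit instead of one per row. On any failure the whole cycle
     * is rolled back, so no event is half-applied.
     */
    public static void applyCycle(Connection conn, List<?> events, EventApplier applier)
            throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try {
            for (Object event : events) {
                applier.apply(conn, event);
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback(); // nothing from this cycle is persisted
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}
```

The all-or-nothing rollback is exactly the failure-mode consideration mentioned above: if losing a partial cycle is unacceptable, this keeps the events replayable as a unit.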
"I've a typical scenario"
No you haven't. You have a bespoke architecture, with a unique data model, unique data and unique business requirements. That's not a bad thing, it's the state of pretty much every computer system that's not been bought off-the-shelf (and even some of them).
So, it's an experiment and you must approach it as such. There is no "best practice". Try various things and see what works best.
"need to understand best possible way to handle this"
You will improve your chances of success enormously by hiring somebody who understands Oracle databases.
I have an interface where users can select a stored proc and pass parameters to it.
Based on what the user selects, the query can run for a long time.
If the query takes more than 5 minutes I want to stop the query and send an email to the user asking him to contact the developer.
Basically, how do we pass a timeout parameter to the query?
It is not possible to do this inside the same SQL stored procedure, because execution there is sequential, and there is no way to fork the connection or perform a parallel execution.
You could create an external stored procedure in Java or C that creates a thread for monitoring purposes and then triggers termination of the job if it is taking too much time.
Also, you could create an infinite loop in an SP that wakes up each minute to check the processes and kill the ones that have been running more than a certain amount of time, but this is NOT recommended.
You can use the built-in module UTL_MAIL to send an email and terminate a process via ADMIN_CMD, but you have to create a monitoring process in parallel, and that is not possible from the same connection.
You can check Serge Rielau's article on his blog, which could give you many ideas: https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/sleep?lang=en
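On the Java client side, the same timeout-then-notify idea can be sketched with a Future. The `Supplier` stands in for your query call and the `Runnable` for the email notification; both are hypothetical placeholders, and a real implementation should additionally call `Statement.cancel()` so the database actually stops working on the query:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class TimedQuery {

    /**
     * Runs the query task, waits up to timeoutMillis, and on timeout
     * cancels the task and invokes the notification callback. Returns
     * the result, or null if the task timed out.
     */
    public static <T> T runWithTimeout(Supplier<T> runQuery, long timeoutMillis,
                                       Runnable notifyUser) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<T> future = pool.submit(runQuery::get);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);  // interrupts the worker thread
            notifyUser.run();     // e.g. send the "contact the developer" mail
            return null;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }
}
```

For the 5-minute requirement in the question, `timeoutMillis` would be 5 * 60 * 1000.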
Query timeout is a client configuration parameter, set via db2cli.ini, db2dsdriver.cfg, or the JDBC connection property, depending on the client version and type. To enforce query run time limit on the server side, where the stored procedure runs, you will need to use Workload Manager.
In either case I don't think you'll be able to trigger an email notification to the client.
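As a complement on the application side, standard JDBC also offers `Statement.setQueryTimeout` (in seconds); whether and how it is enforced depends on the driver. A minimal sketch, with a hypothetical class name:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TimeoutQueryRunner {

    /**
     * Executes the query with a driver-enforced time limit. If the limit
     * is exceeded, the driver aborts the statement and throws an
     * SQLException (SQLTimeoutException in newer drivers). The caller is
     * responsible for closing the returned ResultSet and its Statement.
     */
    public static ResultSet queryWithLimit(Connection conn, String sql, int seconds)
            throws SQLException {
        Statement stmt = conn.createStatement();
        stmt.setQueryTimeout(seconds); // 0 means no limit
        return stmt.executeQuery(sql);
    }
}
```

This aborts the statement from the client side, but as noted above it does not by itself send any email; that notification still has to be wired up in the application.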