I'm working on Spring + Hibernate applications. We have applied transactions to our service class methods. My questions:
Can Hibernate execute all db statements of a service method with a single connection/session, or does it use multiple connections? If it uses multiple connections, is it possible to roll back db statements over multiple connections in case of a runtime exception?
Suppose the service method's business logic takes more time than the removeAbandonedTimeout value; how does the commit/rollback happen on the abandoned connection?
Please correct me if I'm wrong anywhere. Thanks in advance.
UPDATE:
If a query takes more time than removeAbandonedTimeout, it throws an exception. Suppose my service method has two db calls, with some business logic (no db calls) between them. Before executing the first query it creates a db connection; assume the first db call took 1 second, and then the business logic execution took 60 seconds. If the connection is abandoned at this moment (say we set removeAbandonedTimeout to 60 seconds), does it create another connection to execute the second query? If the second query execution fails, it has to roll back the first query, as they both share the same transaction. How could that happen with an abandoned connection?
Hibernate will do what you told it to:
When integrating with Spring, it will use Spring-managed transactions. Check where you are opening/closing the tx (use of @Transactional on a public method, or directly using TransactionTemplate): all Hibernate queries inside will run in the same transaction.
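A minimal sketch of that pattern (class and method names are made up for illustration, and a HibernateTransactionManager is assumed to be configured): everything inside the annotated method runs on the single session/connection bound to the transaction, and a RuntimeException rolls all of it back.

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional
    public void placeOrder(Order order, Invoice invoice) {
        Session session = sessionFactory.getCurrentSession();
        session.persist(order);   // statement 1: same session/connection
        session.persist(invoice); // statement 2: same session/connection
    } // commit on normal return, rollback on RuntimeException
}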
This is a question related to your connection pool (dbcp?). Do not activate the abandoned-connection cleanup flags (removeAbandonedOnBorrow / removeAbandonedOnMaintenance). From the documentation:
Setting one or both of these to true can recover db connections
from poorly written applications which fail to close connections.
If you are using the typical Spring/Hibernate coding pattern, you are not writing your application poorly (from a database-resource point of view), so there is no need to activate these flags: let your long-running thread keep its transaction. If an operation runs so long without issuing any db query that it would trigger dbcp cleanup, you have two options:
Don't keep the connection to the database: cut your transaction in two, one before the long-running process and one after (see the sketch after this list)
If you need to keep the tx open (because you have a database lock you do not want to lose, for example), try to optimize the code in between, using a CPU sampling tool for example.
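A sketch of the first option (all names are placeholders; the two transactional methods live on a separate bean, because self-invocation would bypass Spring's transactional proxy):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    @Autowired
    private ReportDao dao; // loadPart() and savePart() are each @Transactional on this bean

    public void process(long id) {
        Data data = dao.loadPart(id);     // tx 1: connection held only for this call
        Result r = longComputation(data); // no transaction: no connection held here
        dao.savePart(r);                  // tx 2: a (possibly different) connection
    }
}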
Answers to your questions:
With a single connection we can execute all db statements of the service method. We cannot roll back db statements spread over multiple connections at a time.
In general, the removeAbandonedTimeout value should be set to the duration of the longest-running query your application might have. If removeAbandonedTimeout is exceeded and removeAbandoned = true, the db connections can be recovered.
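A configuration sketch with commons-dbcp2's BasicDataSource (the 300-second value is an assumption; size it above your longest-running query):

import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setRemoveAbandonedOnMaintenance(true); // reclaim connections that were never closed
ds.setRemoveAbandonedTimeout(300);        // seconds a borrowed connection may sit unused before reclaim
ds.setLogAbandoned(true);                 // log a stack trace of the code that borrowed it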
Related
I have a simple controller /hello which internally calls the repository layer for a transaction using the default propagation level, i.e. Propagation.REQUIRED.
@Controller
public class HelloController {
    // delegates to the repository layer
    public void hello(Data data) { repository.doTransaction(data); }
}

@Repository
public class DataRepository {
    @Transactional
    public void doTransaction(Data data) throws InterruptedException {
        jdbcTemplateInsertA(data); // insert via JdbcTemplate
        Thread.sleep(15 * 1000);   // simulate slow work between the two calls
        jdbcTemplateUpdateB(data); // update via JdbcTemplate
    }
}
As per the online docs:
REQUIRED is the default propagation. Spring checks if there is an active transaction, and if nothing exists, it creates a new one. Otherwise, the business logic appends to the currently active transaction:
Now let's say I hit the API 2 times, with data1 and then data2; the API was hit the 2nd time before request 1 had finished.
Would there be only a single transaction, or two separate independent transactions?
Also, I want to know: is using @Transactional with JdbcTemplate valid?
I have a doubt about how JdbcTemplate internally gets the same connection object that was used by the @Transactional annotation, considering a pool size >= 2.
UPDATE:
You will have a transaction per thread. -> Any docs/resources where I can find this?
For the 2nd part, let me rephrase my question.
Let's say we are using a Hikari pool with size 5. As per my understanding, @Transactional would pick some connection from the pool, let's say it picked up connectionId1554. Now, when I call jdbc.execute(), how does Spring ensure that it uses connectionId1554 and not any other available connection from the pool?
You will have a transaction per thread. If you have two requests, then you will have two threads (from your web container, like Tomcat), and therefore you'll have two separate transactions.
Based on the code and (lack of) configuration it is hard to tell how you set up your JdbcTemplate. Most likely the connection used by the JdbcTemplate comes from a database connection pool. If you've configured two connections then you can only use two connections, and the usage of those connections will block other requests to the database if they are in use. Now, that being said, database connection pools are generally programmed to be super smart, and one database connection can be reused across multiple database requests/transactions depending on configuration settings.
Now let's say I hit the API 2 times, with data1 and then data2; the API was hit the 2nd time before request 1 had finished.
Would there be only a single transaction, or two separate independent transactions?
Each incoming request to your API will result in an independent transaction being started. If you think this through, it makes sense: if you have 2 users each making a request at the same time, you would want each of their requests to be handled independently; it would be odd if both of their requests joined and were handled as a single transaction.
I want to know: is using @Transactional with JdbcTemplate valid? I have a doubt about how JdbcTemplate internally gets the same connection object that was used by the @Transactional annotation
This depends on how you have configured a DataSource and a connection pool. Are you running this on an appserver like WebSphere, WebLogic or WildFly? They provide DataSource connection pooling integrated with a TransactionManager too.
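As for how JdbcTemplate finds the transaction's connection: it never calls dataSource.getConnection() directly, but goes through Spring's DataSourceUtils, which first returns the connection the transaction manager has bound to the current thread. A rough sketch of that lookup (not the actual Spring source):

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class ConnectionLookupSketch {

    private final DataSource dataSource;

    public ConnectionLookupSketch(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void execute(String sql) throws Exception {
        // Returns the thread-bound connection if a transaction is active,
        // otherwise fetches a fresh connection from the pool.
        Connection con = DataSourceUtils.getConnection(dataSource);
        try (Statement stmt = con.createStatement()) {
            stmt.execute(sql);
        } finally {
            // A no-op for a transactional connection; really closes it otherwise.
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}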
I am working on a Java web application that uses WebLogic to connect to an Informix database. In the application we have multiple threads creating records in a table.
It happens pretty often that it fails and the following error is thrown:
java.sql.SQLException: Could not do a physical-order read to fetch next row....
Caused by: java.sql.SQLException: ISAM error: record is locked.
I am assuming that both threads are trying to insert or update when the record is locked.
I did some research and found that there is an option to set on the database so that, instead of throwing an error, it waits for the lock to be released.
SET LOCK MODE TO WAIT;
SET LOCK MODE TO WAIT 17;
I don't think there is an option in JDBC to use this setting. How do I go about using it in my Java web app?
You can always just send that SQL straight up: create a Statement with createStatement() and execute that exact SQL.
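For example (assuming conn is a connection obtained from your WebLogic DataSource):

try (java.sql.Statement st = conn.createStatement()) {
    st.execute("SET LOCK MODE TO WAIT 17"); // wait up to 17 seconds for a lock
}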
The more 'normal' / modern approach to this problem is a combination of MVCC, the transaction level 'SERIALIZABLE', retry, and random backoff.
I have no idea if Informix is anywhere near that advanced, though. Modern DBs such as Postgres are (mysql does not count as modern for the purposes of MVCC/serializable/retry/backoff, and transactional safety).
Doing MVCC/Serializable/Retry/Backoff in raw JDBC is very complicated; use a library such as JDBI or JOOQ.
MVCC: A mechanism whereby transactions are shallow clones of the underlying data. 2 separate transactions can both read and write to the same records in the same table without getting in each other's way. Things aren't 'saved' until you commit the transaction.
SERIALIZABLE: A transaction level (also called isolation level), settable with jdbcDbObj.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); - the safest level. If you know how version control systems work: you're asking the database to aggressively rebase everything so that the entire chain of commits is ordered into a single long line of events: each transaction acts as if it was done after the previous transaction was completed. The simplest way to implement this level is to globally lock all the things. This is, of course, very detrimental to multithread performance. In practice, good DB engines (such as Postgres) are smarter than that: multiple threads can simultaneously run transactions without just being frozen and waiting for locks; the DB engine instead checks whether the things that the transaction did (not just writing, also reading) are conflict-free with simultaneous transactions. If yes, it's all allowed. If not, all but one of the simultaneous transactions throw a retry exception. This is the only level that lets you do this sequence of events safely:
Fetch the balance of isaace's bank account.
Fetch the balance of rzwitserloot's bank account.
Subtract €10,- from isaace's number, failing if the balance is insufficient.
Add €10,- to rzwitserloot's number.
Write isaace's new balance to the db.
Write rzwitserloot's new balance to the db.
Commit the transaction.
Any level less than SERIALIZABLE will silently fail the job: if multiple threads do the above simultaneously, no SQLExceptions occur, but the sum of the balances of isaace and rzwitserloot will change over time (money is lost or created: in between steps 1 & 2 vs. steps 5/6/7, another thread sets new balances, but those new balances are lost due to the update in 5/6/7). With SERIALIZABLE, that cannot happen.
RETRY: The way smart DBs solve the problem is by failing (with a 'retry' error) all but one transaction, by checking whether all SELECTs done by the entire transaction are unaffected by any transactions that were committed to the db after this transaction was opened. If the answer is no (some selects would have gone differently), the transaction fails. The point of this error is to tell the code that ran the transaction to just.. start from the top and do it again. Most likely this time there won't be a conflict and it will work. The assumption is that conflicts CAN occur but usually do not, so it is better to assume 'fair weather' (no locks, just do your stuff), check afterwards, and try again in the exotic scenario that it conflicted, vs. trying to lock rows and tables. Note that, for example, Ethernet works the same way (assume fair weather, recover from errors afterwards).
BACKOFF: One problem with retry is that computers are too consistent: If 2 threads get in the way of each other, they can both fail, both try again, just to fail again, forever. The solution is that the threads twiddle their thumbs for a random amount of time, to guarantee that at some point, one of the two conflicting retriers 'wins'.
In other words, if you want to do it 'right' (see the bank account example), but also relatively 'fast' (not globally locking), get a DB that can do this, and use JDBI or JOOQ; otherwise, you'd have to write code to run all DB stuff in a lambda block, catch the SQLException, check the SqlState to see if it is indicating that you should retry (sqlstate codes are DB-engine specific), and if yes, rerun that lambda, after waiting an exponentially increasing amount of time that also includes a random factor. That's fairly complicated, which is why I strongly advise you rely on JOOQ or JDBI to take care of this for you.
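A bare-bones sketch of that retry loop in raw JDBC (SQLSTATE "40001" is the standard serialization-failure code, but check what your engine actually reports; the attempt count and backoff values here are arbitrary):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.ThreadLocalRandom;
import javax.sql.DataSource;

public class SerializableRetry {

    interface TxBlock<T> { T run(Connection con) throws SQLException; }

    static <T> T inTransaction(DataSource ds, TxBlock<T> block) throws Exception {
        long backoffMs = 50;
        for (int attempt = 0; attempt < 5; attempt++) {
            try (Connection con = ds.getConnection()) {
                con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                con.setAutoCommit(false);
                try {
                    T result = block.run(con);
                    con.commit();
                    return result;
                } catch (SQLException e) {
                    con.rollback();
                    if (!"40001".equals(e.getSQLState())) throw e; // not a retry case
                }
            }
            // random backoff so two conflicting retriers don't collide forever
            Thread.sleep(backoffMs + ThreadLocalRandom.current().nextLong(backoffMs));
            backoffMs *= 2; // exponential growth
        }
        throw new SQLException("transaction still conflicting after retries");
    }
}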
If you aren't ready for that level of DB usage, just create a statement and send "SET LOCK MODE TO WAIT 17;" as an SQL statement straight up, at the start of opening any connection. If you're using a connection pool, there is usually a place where you can configure SQL statements to be run on connection start.
The Informix JDBC driver does allow you to automatically set the lock wait mode when you connect to the server.
Simply pass the following parameter via the DataSource or connection URL:
IFX_LOCK_MODE_WAIT=17
The values for JDBC are
(-1) Wait forever
(0) Do not wait (default)
(> 0) Wait this many seconds
See https://www.ibm.com/support/knowledgecenter/SSGU8G_14.1.0/com.ibm.jdbc.doc/ids_jdbc_040.htm
Connection conn = DriverManager.getConnection(
    "jdbc:Informix-sqli://cleo:1550:IFXHOST=cleo;PORTNO=1550;"
    + "user=rdtest;password=my_passwd;IFX_LOCK_MODE_WAIT=17");
Overview
I have an @Scheduled job (using cron) that calls a service method which runs some transactions against an Oracle database.
Problem
The problem is that in some cases (big data processing) those transactions may last a long time (~17 minutes).
With that duration, I get these exceptions:
SQLTimeoutException: ORA-01013: user requested cancel of current operation
QueryTimeoutException: PreparedStatementCallback;
The problem appears with Spring Boot and on a Tomcat server as well.
Question
How may I avoid this behavior in my case, and which way would be best?
As far as I know, it is possible to set a query timeout, but according to this, there is no query-timeout limitation by default.
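For reference, the Spring-side timeout can be raised (or left at the driver default) on the JdbcTemplate itself; a minimal sketch, assuming dataSource is your pool:

JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
// seconds; the default of -1 passes no timeout on to the driver
jdbcTemplate.setQueryTimeout(20 * 60);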
Problem:
Program uses com.mchange.v2.c3p0.ComboPooledDataSource to connect to a Sybase server
Program executes 2 methods, runSQL1() and runSQL2(), in sequence
runSQL1() executes SQL which creates a #temptable
SELECT * INTO #myTemp FROM TABLE1 WHERE X=2
runSQL2() executes SQL which reads from this #temptable
SELECT * FROM #myTemp WHERE Y=3
PROBLEM: runSQL2() gets handed a different DB connection from the pool than the one handed to runSQL1().
However, Sybase #temptables are connection-specific, therefore runSQL2() fails when it can't find the table.
The most obvious solution I can think of (aside from the degenerate one of making the pool size 1, at which point we don't even need a pool) is to somehow remember which specific connection from the pool was used by runSQL1(), and have runSQL2() request the same connection.
Is there a way to do this in com.mchange.v2.c3p0.ComboPooledDataSource?
If possible, I'd like an answer which is concurrency-safe (in other words, if the connection used in runSQL1() is being used by another thread, runSQL2()'s call to get the connection will wait until that connection is released by the other thread).
However, if that's impossible, I'm OK with an answer which assumes that the DB connections (the ones I care about) all happen in one single thread, and therefore any connection requested by runSQL2() will be 100% available if it was available to runSQL1().
I'm also welcoming of any solutions that address the problem some other way, as long as they don't involve "stop using #temptables" as part of the solution.
The easiest and most obvious way to do that is to request a connection from the pool once and then run both runSQL1() and runSQL2() with that connection. The usage pattern suggested in the question goes against the general design principles of connection pool managers, as it would effectively promote them to some kind of transaction manager.
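A sketch, assuming runSQL1()/runSQL2() can be changed to accept the connection as a parameter:

try (java.sql.Connection con = comboPooledDataSource.getConnection()) {
    runSQL1(con); // SELECT * INTO #myTemp ... (temp table is bound to this connection)
    runSQL2(con); // SELECT * FROM #myTemp ... (same connection, so the table is visible)
}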
There are Java frameworks that might aid in the above. For example, in Spring, @Transactional or TransactionTemplate can be used to demarcate transaction boundaries, and it will guarantee that a single connection is used within the transaction on that thread (or more precisely, according to the transaction propagation annotations). Spring can use many transaction managers, but probably the simplest would be DataSourceTransactionManager, and it can also be configured to use c3p0 as the DataSource.
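A sketch of that Spring variant (bean wiring shortened to constructors; executeWithoutResult needs Spring 5.2+):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

DataSourceTransactionManager txManager = new DataSourceTransactionManager(comboPooledDataSource);
TransactionTemplate tx = new TransactionTemplate(txManager);
JdbcTemplate jdbc = new JdbcTemplate(comboPooledDataSource);

tx.executeWithoutResult(status -> {
    // both statements run on the single connection bound to this transaction
    jdbc.execute("SELECT * INTO #myTemp FROM TABLE1 WHERE X=2");
    jdbc.queryForList("SELECT * FROM #myTemp WHERE Y=3");
});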
A single-threaded standalone Java app entering a deadlock? Yes, with EclipseLink this is very easy. After a week of debugging a very complex multi-threaded app, I managed to narrow down the problem to this:
If you have only 1 thread in the application, you'd assume that you need a DB connection pool of size 1 and that everything should work. Well, not in this case:
public void singleThreadedMethod() {
    User u;

    // -- start JPA transaction / create EM (1) --
    u = em.find(User.class, userId);
    // -- end JPA transaction / close EM --

    // -- start JPA transaction / create EM (2) --
    u.getCompany(); // lazy assoc (3)
    // -- end JPA transaction / close EM --
}
EclipseLink will take 1 connection for (1), then return it to the pool. It then takes another connection for (2), then tries to take yet another connection for (3), although it looks like the code is inside a single transaction. The trick is that u is not managed in transaction (2), but still "manages" to get to the DB to fetch its lazy association. If the DB pool size is 1, this creates a deadlock, since (2) is holding the only connection while (3) (inside (2)) is waiting for one. Note that this is NOT a deadlock in the DB, but in the connection pool in Java.
This creates a debugging nightmare when you have a pool of 20 connections and fewer than 20 threads and you wonder where that deadlock comes from!
I guess it's a long shot, but is there any way to stop EclipseLink from doing what it does? I.e., to stop it from fetching lazy associations when not inside a persistence context? Or to make it use the thread's existing persistence context if there is one.
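One workaround sketch in plain JPA (nothing EclipseLink-specific): re-fetch the entity inside the second transaction, so the lazy association loads on a managed instance over that transaction's own connection rather than through the detached one:

// -- start JPA transaction / create EM (2) --
User managed = em.find(User.class, userId); // managed in THIS persistence context
managed.getCompany(); // lazy load now uses this EM's connection, no extra one needed
// -- end JPA transaction / close EM --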