Dealing with RollbackException in Java

Is there any way to "replay" a transaction?
I mean, sometimes I get a RollbackException and roll back the transaction. Can I then "clone" the transaction and try again, or is the transaction lost once rollback has been called?
I really need the changes, and I really don't want to trace every change for rerunning later...
thanks,
udi

Why do you get the exception in the first place? This seems to me to be the crux of the matter.
Are you relying on optimistic locking? If so, then you'll have to wrap your database writes in some form of loop, incorporating (perhaps) a backoff and a number of retries. You can't do this automatically, unfortunately (unless you investigate some form of AOP solution wrapping your database writes with a retry strategy).
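A minimal sketch of such a retry loop, assuming a hypothetical doWrites() method that opens the transaction, applies the changes and commits (shown with the JPA flavour of the exception; swap in javax.transaction.RollbackException if that's the one you get):

import javax.persistence.RollbackException;

// Minimal sketch of retry-with-backoff around a transactional write.
// doWrites() is hypothetical: it opens a transaction, applies the
// changes, and commits, throwing RollbackException on a conflict.
public void writeWithRetry(int maxRetries) throws InterruptedException {
    long backoffMillis = 50;
    for (int attempt = 1; ; attempt++) {
        try {
            doWrites();          // open transaction, apply changes, commit
            return;              // success: stop retrying
        } catch (RollbackException e) {
            if (attempt >= maxRetries) {
                throw e;         // out of retries: propagate the failure
            }
            Thread.sleep(backoffMillis);
            backoffMillis *= 2;  // exponential backoff before the next try
        }
    }
}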

That depends on where that transaction comes from. In Java/JDBC, a transaction is tied to a connection. You start one by calling setAutoCommit(false) (otherwise, every statement becomes its own little transaction).
There is nothing preventing you from reusing the connection after a transaction has failed (i.e. after you called rollback()).
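For example, a bare-JDBC sketch (the table and columns are made up for illustration):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// A failed transaction does not invalidate the connection itself.
void transfer(Connection conn) throws SQLException {
    conn.setAutoCommit(false);                   // start a transaction
    try (Statement st = conn.createStatement()) {
        st.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
        st.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
        conn.commit();
    } catch (SQLException e) {
        conn.rollback();  // undo this transaction only...
        // ...the connection remains usable: start a new transaction
        // on it, retry, or fall back to auto-committed statements.
    }
}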
Things get more tricky when you use Spring. Spring wraps your methods in a transaction handler, and this handler tries to guess what it should do with the current transaction based on the exceptions that get thrown in the method. The next question is: which wrapper created the current transaction? I just had a case where I would call a method foo() which would in turn call bar(), both @Transactional.
I wanted to catch errors from bar() in foo() and save them into the DB. That didn't work because the transaction was created for foo() (so I was still in a transaction which Spring considered broken by the exception in bar()) and it wouldn't let me save the error.
The solution was to create baz(), make it @Transactional(propagation=Propagation.REQUIRES_NEW) and call it from foo(). baz() would get a new, fresh transaction and would be able to write to the DB even though it was called from foo(), which already had a (broken) transaction.
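A sketch of that layout, keeping the names from the story, with each class in its own file so the Spring proxies actually intercept the calls (the repository call in baz() is left as a comment):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class FooService {

    @Autowired
    private BarService barService;

    @Autowired
    private ErrorService errorService;

    @Transactional
    public void foo() {
        try {
            barService.bar();               // joins foo()'s transaction
        } catch (RuntimeException e) {
            // foo()'s transaction is now marked rollback-only, so writing
            // through it would be lost; use a fresh transaction instead.
            errorService.baz(e.getMessage());
        }
        // note: foo()'s own transaction will still roll back at the end
    }
}

@Service
public class BarService {

    @Transactional
    public void bar() {
        throw new RuntimeException("something broke");
    }
}

@Service
public class ErrorService {

    // REQUIRES_NEW suspends the caller's (broken) transaction and opens a
    // fresh one, so the error record commits even though foo() rolls back.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void baz(String message) {
        // save the error into the DB, e.g. errorRepository.save(...)
    }
}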

Another alternative is to use JDBC savepoints to partly roll back.
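A short JDBC sketch of a savepoint (again with an illustrative table):

import java.sql.Connection;
import java.sql.Savepoint;
import java.sql.SQLException;
import java.sql.Statement;

void partialRollback(Connection conn) throws SQLException {
    conn.setAutoCommit(false);
    try (Statement st = conn.createStatement()) {
        st.executeUpdate("INSERT INTO audit_log(msg) VALUES ('step 1')");
        Savepoint sp = conn.setSavepoint();  // a point we can return to
        try {
            st.executeUpdate("INSERT INTO audit_log(msg) VALUES ('step 2')");
        } catch (SQLException e) {
            conn.rollback(sp);               // undo step 2 only; step 1 survives
        }
        conn.commit();                       // commits whatever was not rolled back
    }
}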

Related

How does the default @Transactional work at the low level?

Does a propagation-REQUIRED default @Transactional collect all queries and execute them at the end of the method altogether, or does it open a DB transaction, execute BEGIN, then every query as it encounters it, and execute COMMIT when the transaction finishes?
Is this what is referred to as logical vs. physical transactions?
I am wondering because I am using a @Transactional test that executes a GET endpoint + DELETE endpoint + GET endpoint with READ_UNCOMMITTED; the behavior works out well, but I see no trace of the delete queries in the logs, only selects.
I would have expected to see all the queries issued and then a rollback, but I have the feeling that the transaction is just modifying the managed entities of the persistence context and only tries to save at the end of the test...
If I should be seeing all the delete queries as the repository.remove() calls are executed, then it might be that for some reason Hibernate only logs queries from a readOnly=false transaction.
Maybe this answer helps you: JPA flush vs commit
If there is an active transaction, JPA/Hibernate will execute the flush method when the transaction is committed. In the meantime, all changes applied to entities are collected in the Unit of Work.
With flush(), the changes to the data are reflected in the database immediately, but they are still inside the transaction. flush() MUST be enclosed in a transaction context, and you don't have to call it explicitly unless needed (in rare cases); EntityTransaction.commit() does that for you.
You can change this behavior by changing the flush strategy.
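A small sketch of the difference with a plain EntityManager (the entity is just an Object parameter here):

import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;

// flush() sends the SQL to the database; commit() makes it durable.
void flushVsCommit(EntityManager em, Object entity) {
    EntityTransaction tx = em.getTransaction();
    tx.begin();
    em.persist(entity);  // so far only recorded in the persistence context
    em.flush();          // INSERT is issued now, but still inside the
                         // transaction: visible here, not to other sessions
    tx.commit();         // flushes any remaining changes, then COMMITs
}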

Spring @Transactional read-only mode rollback behaviour

I have a service-layer method annotated with @Transactional(readOnly=true), and this method quite often causes some RuntimeException (let's say a NotFoundException).
I'm also using Hibernate (ORM) for the DB interaction.
Is it a legal pattern to do so?
What is the default behaviour in this case in terms of roll-back? Can it somehow badly influence the connection's state or lead to any problems?
It isn't something like "why not try it yourself?". I have a suspicion that this could lead to the "Transaction rolled back because it has been marked as rollback-only" error in the same method after some number of exceptions. This could be a very specific JDBC PostgreSQL driver error. That is why I'm wondering about this design in general: is it legal or illegal to do so?
So as far as I understand, you are worried about the roll-back. In this case a readOnly transaction is a select statement, and usually there is nothing to roll back from a read. The only place where this is handy is when you read under a lock, and when the transaction finishes you release that lock.
AFAIK readOnly will set the flush mode to FlushMode.NEVER, and that is good and bad at the same time. Good, because there will be no dirty checking, as described here. Bad, because if you call a read/write transaction within a readOnly transaction, the transaction will silently fail to commit because the session is not flushed. This is easily testable, btw - and I hope things have not changed since I tried this.
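A sketch of that trap (service and repository names are made up; the inner method uses the default REQUIRED propagation):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    @Autowired
    private AuditService auditService;

    @Transactional(readOnly = true)
    public void view(long reportId) {
        // ... read the report ...
        auditService.recordAccess(reportId);  // joins the read-only transaction!
    }
}

@Service
public class AuditService {

    // REQUIRED (the default) participates in the caller's read-only
    // transaction; with the flush mode forced to NEVER/MANUAL, this insert
    // is never flushed and silently disappears instead of committing.
    @Transactional
    public void recordAccess(long reportId) {
        // e.g. auditRepository.save(new AuditEntry(reportId));
    }
}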
Then there is the pool of connections. I know that C3P0's default policy is to roll back any uncommitted work. The flag that controls this is autoCommitOnClose.
Then there is this link about readOnly and Postgres - which I have not worked with and can't really give an opinion on.
Now to your point about "Transaction rolled back because it has been marked as rollback-only". For a readOnly transaction there might be nothing to roll back, as I said before, so this really depends on how you chain your @Transactional methods, IMO.

Synchronizing spring transaction

We use Spring and Hibernate in our project and have a layered architecture: Controller -> Service -> Manager -> DAO. Transactions start in the Manager layer. A method in the service layer which updates an object in the DB is called by many threads, and this causes a stale object exception to be thrown. So I made this method synchronized, but I still see the stale object exception thrown. What am I doing wrong here? Is there a better way to handle this case?
Thanks for the help in advance.
The stale object exception is thrown when an entity has been modified between the time it was read and the time it's updated. This can happen inside a single transaction, but may also happen when you read an object in a transaction, modify it (in the controller layer, for example), then start another transaction and merge/update it (in this case, minutes or hours can separate the read and the update).
The exception is thrown to help you avoid conflicts between users.
If you don't care about conflicts (i.e. the last update always wins and replaces what the previous ones have written), then don't use optimistic locking. If you're concerned about conflicts, then StaleObjectExceptions will happen, and you should pop up a meaningful message to the end user, asking them to reload the data and try to modify it again. There's no way to avoid them. You just have to be optimistic and hope that they won't happen often.
Note that your synchronized trick will work only if:
- the exception happens only when reading and writing in the same transaction,
- updates to the entity are only made by this service,
- your application is not clustered.
It might also reduce the throughput dramatically, because you forbid any concurrent updates, regardless of which entities are updated by the concurrent transactions. It's as if you locked the whole table for the duration of the whole transaction.
My guess is that you would need to configure optimistic locking on the Hibernate side.
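For reference, a minimal sketch of what optimistic locking looks like on the JPA/Hibernate side, via a version column (entity name is illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private String name;

    // Hibernate increments this on every update and adds "AND version = ?"
    // to the UPDATE's WHERE clause; if another transaction bumped it first,
    // zero rows match and a StaleObjectStateException is thrown.
    @Version
    private int version;
}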

What's the best practice around when to start a new session or transaction for batch jobs using Spring/Hibernate?

I have a set of batch/cron jobs in Java that call my service classes. I'm using Hibernate and Spring as well.
Originally the batch layer always created an outer transaction, and the batch job would call a service to get a list of objects from the DB with the same session, then call a service to process each object separately. There's tx-advice set for my service layer to roll back on any throwable. So if there's an exception on the 5th object, the first 4 objects that were processed get rolled back too, because they were all part of the same transaction.
So I was thinking this outer transaction created in the batch layer was unnecessary. I removed it, and now I call a service to get a list of objects, then call another service to process each object separately; if one of those objects fails, the others will still persist, because it's a new transaction/session for each service call. But the problem I have now is that after getting the list of objects, when I pass each object to a service to process, if I try to get one of its properties I get a lazy initialization error, because the session used to load that object (from the list) is closed.
One option I thought of was to just get a list of IDs in the batch job and pass each ID to a service, and the service would retrieve the whole object in that one session and process it, as sketched below. Another is to set lazy loading to false for that object's attributes, but this would load everything every time, even when the nested attributes aren't needed.
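A rough sketch of that first option (ItemService and its methods are hypothetical):

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;

// Sketch of the id-per-transaction option: the batch layer holds no session
// of its own; each service call runs in its own transaction, so one failure
// does not roll back the others.
public class NightlyJob {

    @Autowired
    private ItemService itemService;  // hypothetical @Transactional service

    public void run() {
        List<Long> ids = itemService.findPendingIds();  // short read-only tx
        for (Long id : ids) {
            try {
                // loads the full object and processes it inside one fresh
                // transaction/session, so lazy loading works throughout
                itemService.processOne(id);
            } catch (RuntimeException e) {
                // log and continue: only this object's work rolls back
            }
        }
    }
}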
I could always go back to the way it was originally, with the outer transaction around every batch job, and then create another transaction in the batch job before each call to the service that processes each individual object...
What's the best practice for something like this?
Well, I would say that you've listed every possible option except OpenSessionInView. That would keep your session alive across transactions, but it's difficult to implement properly. So difficult that it's considered an anti-pattern by many.
However, since you're not implementing a web interface and you aren't dealing with a highly threaded environment, I would say that's the way to go. It's not like you're passing entities to views. Your biggest fear is an N+1 call to the database while iterating through a collection, but since this is a cron job, performance may not be a major issue compared with code cleanliness. If you're really worried about it, just make sure you get all of your collections via a call to a DAO that can do a select *.
Additionally, you were effectively doing an Open Session In View before, when you were doing everything in the same transaction. In Spring, sessions are opened on a per-transaction basis, so keeping a transaction open for a long period of time is effectively the same as keeping a session open for a long period of time. The only real difference in your case is that you can commit periodically without fear of a lazy initialization error down the road.
Edit
All that being said, it takes a bit of time to set up an Open Session in View, so unless you have any particular issues against doing everything in the same transaction, you might consider just going back to that.
Also, I just noticed that you mentioned opening a transaction in the batch layer and then opening "mini transactions" in the service layer. This is most emphatically NOT a good idea. Spring's annotation-driven transactions will piggyback on any currently open transaction in the session. This means that transactions that are supposed to be read-only will suddenly become read-write if the currently open transaction is read-write. Additionally, the session won't be flushed until the outermost transaction is finished anyway, so there's no point in marking the service layer with @Transactional. Putting @Transactional on multiple layers only leads to a false sense of security.
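A sketch of that piggybacking (class names are illustrative; both are assumed to be separate Spring beans):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BatchService {

    @Autowired
    private StatsService statsService;

    @Transactional                  // outer read-write transaction
    public void runBatch() {
        statsService.summarize();   // does NOT get a transaction of its own
    }
}

@Service
public class StatsService {

    // REQUIRED (the default) piggybacks on the already-open read-write
    // transaction, so readOnly = true is silently ignored here, and nothing
    // is flushed until the outermost transaction finishes.
    @Transactional(readOnly = true)
    public void summarize() {
        // ...
    }
}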
I actually blogged about this issue some time ago.

Hibernate: A long read-only transaction will now require a small DB update in the middle

I have written quite a complicated engine of sorts which navigates up and down a large series of objects read in from the database.
So I have code that looks something like this:
public void go(long id) {
    try {
        beginTransaction();
        Foo foo = someDao.find(id);
        anotherObject.doSomething(foo);
        commitTransaction();
    } catch (Exception e) {
        rollbackTransaction();
    }
}
The code in doSomething(...) will call methods to get child objects of Foo and pass those child objects off to other classes and so on.
Prior to my problem, this used to be just a long read-only transaction. Now, however, somewhere in the middle of all of this, there needs to be an update to the database. It is important that this update is committed straight away. As Hibernate doesn't support nested transactions, how would I deal with a situation like this, so that I can continue to pass my object around and still call getter methods to access children, while having that database update committed?
I thought of removing the long-running transaction and having small transactions all over the place. Unfortunately, my code at the moment passes Foo and other child objects around everywhere, assuming they are still bound to the session. If this is my only solution, would that mean I would end up with ugly merge() calls everywhere just to re-attach objects to the session so the getter methods work again? I'm sure there must be a more elegant solution.
Do the database update within your transaction, i.e. pass the required information to the thread performing your long transaction.
Alternatively, use entity listeners to signal what needs update, and then use the EntityManager.refresh method.
This will get a bit ugly with multi-threading and all, but note that you probably do not want the transaction to 'just update' at some random point in time, as in many cases that will yield unpredictable results, like breaking for-loops and such.
And if this is an n-level algorithm, is there any way of doing m levels at a time, saving the state (say, the IDs of the current scope), and running the next iteration in a new transaction? For this you can use one method without a transaction, which calls EJB methods that are confined within their own transactions, returning state.
If you must stick with Hibernate (and cannot consider accessing the underlying JDBC driver, Spring Transactions or JTA), you can probably just spawn a thread to do the update and have the main thread wait until it's done (Thread.join()).
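A rough sketch of that workaround, reusing the helper names from the snippet above plus a hypothetical someDao.updateStatus() (this relies on the session being thread-bound, as it usually is with Hibernate's "current session"):

// The new thread gets its own session/transaction, which commits
// independently of the long-running read-only one.
void commitUpdateNow(final long id) throws InterruptedException {
    Thread updater = new Thread(new Runnable() {
        public void run() {
            beginTransaction();                           // fresh transaction on this thread
            try {
                someDao.updateStatus(id, "IN_PROGRESS");  // hypothetical update
                commitTransaction();                      // committed straight away
            } catch (RuntimeException e) {
                rollbackTransaction();
            }
        }
    });
    updater.start();
    updater.join();  // wait until the update has committed before continuing
}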
I've bitten the bullet, and I believe splitting the big transaction up into smaller transactions to get more atomicity is best. This required some manual eager loading in the code, but my nested transaction issue is gone.
