ApplicationException - Java - Hibernate - rollback related

My question is related to Transactions and Exceptions
Requirements:
I have 10 records to insert into a database table, and after inserting each record I insert data into a second table. If the insert into the second table fails, I want to roll back only that record.
Ex.
Say we handle a cash transfer (from one account to another) for 10 persons at a time.
pseudo code:
------------- Start of EJB method
for (int i = 0; i < transferRecords.length; i++)
{
    try
    {
        // Deduct cash from transferRecords[i].accountFrom -- uses the Hibernate Session
        // Add cash to transferRecords[i].accountTo -- uses the Hibernate Session
    }
    catch (AppException exception)
    {
        // Roll back the transaction only for this particular transfer (i).
        // But here, when I go on to the next record, it says the session is closed.
    }
}
------------- End of EJB method
Here AppException is created with the @ApplicationException(rollback=true) annotation.
The functionality we want is: even if the transaction fails for TransferRecord 2, the data should still be committed for records 0, 1, 3, 4, and so on (just not for record 2).
But the issue is: when TransferRecord 2 fails and I move on to TransferRecord 3, I get a "Session Closed" error.
My doubts are:
1. Is this the right approach, or should I run the for loop (one iteration per TransferRecord) outside of the EJB?
2. How can I make sure the session is not closed, but the transaction is rolled back only for that particular failed transfer?
Thank you in advance.
I am using EJB3, Hibernate 3.x, JBoss 4.2.x, and Container Managed Transactions.

Is this the right approach?
No. With CMT, your method is your transactional unit, so here all your TransferRecords are handled in one and the same transaction.
By the way, how do you roll back the transaction? Do you propagate a RuntimeException or do you call setRollbackOnly()? I'm just curious.
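For reference, the two usual options look roughly like this; a minimal sketch reusing the TransferRecord and AppException names from the question (TransferBean and doTransfer() are hypothetical):

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;

@Stateless
public class TransferBean {

    @Resource
    private SessionContext sessionContext;

    public void transfer(TransferRecord record) {
        try {
            doTransfer(record); // debit accountFrom, credit accountTo
        } catch (AppException e) {
            // Option 1: mark the current CMT transaction rollback-only explicitly
            sessionContext.setRollbackOnly();
            // Option 2 (instead of option 1): just rethrow; with
            // @ApplicationException(rollback = true) the container
            // marks the transaction rollback-only on its own
            // throw e;
        }
    }

    private void doTransfer(TransferRecord record) throws AppException {
        // ... Hibernate Session work elided ...
    }
}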
Or should I run the for loop (for each TransferRecord) outside of the EJB?
Why outside? Nothing forces you to do that. If you want to process each TransferRecord in its own transaction, you should pass them to another EJB method (the code below is inspired by this answer):
// supposing processRecords is defined on MyStatelessRemote1 and process on MyStatelessLocal1
import java.util.List;

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class MyStatelessBean1 implements MyStatelessLocal1, MyStatelessRemote1 {

    @EJB
    private MyStatelessLocal1 myBean;

    public void processRecords(List<TransferRecord> records) {
        // No transactional stuff, so no need for a transaction here
        for (TransferRecord record : records) {
            try {
                this.myBean.process(record); // runs in its own transaction
            } catch (AppException e) {
                // this record's transaction was rolled back; carry on with the next one
            }
        }
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void process(TransferRecord transferRecord) throws AppException {
        // Transactional stuff performed in its own transaction
        // ...
    }
}
How can I make sure that the session is not closed, but the transaction is rolled back only for that particular failed transfer?
I think I covered that part.

The only options you have here are either to use a user transaction instead of a container-managed transaction, or to loop outside the bean, so that every time you enter the bean you get a fresh entity manager with an associated transaction and connection (basically a session).
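The user-transaction variant could look something like this; a minimal sketch with bean-managed transactions, where TransferRecord and processOne() are placeholder names from the question:

import java.util.List;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class TransferBmtBean {

    @Resource
    private UserTransaction utx;

    public void processRecords(List<TransferRecord> records) throws Exception {
        for (TransferRecord record : records) {
            utx.begin();
            try {
                processOne(record); // Hibernate work for this record
                utx.commit();
            } catch (Exception e) {
                utx.rollback(); // only this record's work is discarded
            }
        }
    }

    private void processOne(TransferRecord record) {
        // ... debit/credit via the Session bound to the current JTA transaction ...
    }
}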

I think you can create two separate transactions: the first for TransferRecord(1) (doing a commit once everything is fine), and then starting another TX for each TransferRecord(i+1).
Another approach is using savepoints, which let you roll back and discard everything past that savepoint (but I prefer the first approach).
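The savepoint idea, sketched with plain JDBC since Hibernate does not expose savepoints directly; insertRecord() is a hypothetical helper doing both inserts for one record:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.util.List;

public class SavepointTransfer {

    void transferAll(Connection conn, List<TransferRecord> records) throws SQLException {
        conn.setAutoCommit(false);
        for (TransferRecord record : records) {
            Savepoint beforeRecord = conn.setSavepoint();
            try {
                insertRecord(conn, record);
            } catch (SQLException e) {
                conn.rollback(beforeRecord); // discard only this record's work
            }
        }
        conn.commit(); // commits every record that was not rolled back
    }

    private void insertRecord(Connection conn, TransferRecord record) throws SQLException {
        // ... INSERT into both tables for this record ...
    }
}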
Regards.

Related

Hibernate Session read after commit not reflecting changed data

I'm working on some legacy code and trying to understand what's going on. EJB 3 + Hibernate 3.2 (pray for my poor soul)
There are 2 EJBs (A and B):
EJB-A retrieves an ObjectA and modifies some field data.
SessionFactory sf = PersistenceManager.getSessionFactory();
Transaction tx = sessionContext.getUserTransaction();
tx.begin();
Session s = sf.getCurrentSession();
ObjectA a = (ObjectA) s.get(ObjectA.class, id);
a.setField(value);
s.update(a);
// other stuff happens and then the transaction is committed
tx.commit();
// starts a new transaction?
tx = sessionContext.getUserTransaction();
tx.begin();
// re-initializes the object
a = (ObjectA) s.get(ObjectA.class, id);
EJB-B b = getService(EJB-b); // sorry, pseudo coding this
b.performOperation();
Inside EJB-B, it reads the piece of data set in EJB-A. What I'm seeing is that the data is not updated when EJB-B reads it. I'm a bit confused; I assumed that committing the transaction would force a flush and make the session push the data to the database.
Can someone explain to me what could be going on? Is it possible the commit call is performing some async call and is taking longer to actually persist the data than it takes to get to the next EJB and retrieve the data? Is there any way to get around that? Would it be possible to call tx.wasCommitted in a loop or something to wait until proceeding?
Thank you in advance for any insight.
Hibernate doesn't flush the data instantaneously; it flushes when the internal cache reaches some limit or when the transaction is completed.
You can use flush() here; see Correct use of flush() in JPA/Hibernate.
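A minimal sketch of that, reusing the Session s and ObjectA from the question:

ObjectA a = (ObjectA) s.get(ObjectA.class, id);
a.setField(value);
s.flush(); // forces the pending UPDATE to the database right away
// Note: other transactions still won't see the row until tx.commit(),
// so EJB-B must run inside the same transaction or after the commit.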

Should I avoid big transactions and exclude read-only queries from the transaction?

I've seen articles saying that we should try to limit the scope of transaction, e.g. instead of doing this:
@Transactional
public void save(User user) {
    queryData();
    addData();
    updateData();
}
We should exclude queryData from the transaction by using Spring's TransactionTemplate (or just move it out of the transactional method):
@Autowired
private TransactionTemplate transactionTemplate;

public void save(final User user) {
    queryData();
    transactionTemplate.execute(status -> {
        addData();
        updateData();
        return Boolean.TRUE;
    });
}
But my understanding is that since JDBC always needs a transaction for any operation, with the second approach there will be two transactions opened and closed: one for queryData (opened by JDBC), and another, opened by our class, for the code inside transactionTemplate.execute. If so, isn't this a waste of resources, now that one transaction has been split into two?
If a transaction starts, it uses up one DB connection. So we generally want the transaction to complete as fast as possible, and to delay starting it as long as we can, until we really need to access the DB, so that the connection pool has more time to provide available connections to other requests.
So if part of the workflow in your function takes some time to finish and does not need DB access, it is true that it is better to limit the scope of the transaction to exclude that part of the code.
But in your example, as both transactions are executed in series and both need to access the DB, I don't see any point in separating them into two different transactions.
Also, in terms of Hibernate, it is very normal to load and update entities in the same transaction, so that you do not have to deal with detached entities, which is what you get when the entities you update were loaded in another, already-closed transaction. Dealing with detached entities is not easy if you are not familiar with Hibernate.
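For illustration, here is roughly what the two situations look like with a Spring-managed EntityManager; User, setName(), and the service wiring are assumptions, not code from the question:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public void updateInOneTransaction(long id, String name) {
        User user = em.find(User.class, id); // managed entity
        user.setName(name);                  // dirty checking writes this at commit
    }

    @Transactional
    public void updateDetached(User detachedUser) {
        // an entity loaded in an earlier, already-closed transaction
        // must be merged back into the current persistence context
        em.merge(detachedUser);
    }
}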

Why isn't my Hibernate insert reflected in my Hibernate query?

I've been asked to write some coded tests for a hibernate-based data access object.
I figured I'd start with a trivial test: when I save a model, it should be in the collection returned by dao.getTheList(). The problem is that, no matter what, when I call dao.getTheList(), it is always an empty collection.
The application code is already working in production, so let's assume that the problem is just with my test code.
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    session.save(model);
    session.flush();

    final Collection<Model> actual = dao.getTheList();

    assertEquals(1, actual.size());
}
The test output is expected:<1> but was:<0>
So far, I've tried explicitly committing after the insert, and disabling the cache, but that hasn't worked.
I'm not looking to become a master of Hibernate, and I haven't been given enough time to read the entire documentation. Without really knowing where to start, it seemed like this might be a good question for the community.
What can I do to make sure that my Hibernate insert is flushed/committed/de-cached/or whatever it is, before the verification step of the test executes?
[edit] Some additional info on what I've tried. I tried manually committing the transaction between the insert and the call to dao.getTheList(), but I just get the error Could not roll back Hibernate transaction; nested exception is org.hibernate.TransactionException: Transaction not successfully started
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    final Transaction firstTransaction = session.beginTransaction();
    session.save(model);
    session.flush();
    firstTransaction.commit();

    final Transaction secondTransaction = session.beginTransaction();
    final Collection<Model> actual = dao.getTheList();
    secondTransaction.commit();

    assertEquals(1, actual.size());
}
I've also tried taking the @Transactional annotation off the test method and annotating each of two helper methods instead, one for each Hibernate job. With that, though, I get the error: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here.
[/edit]
I think the underlying DBMS might hide the change from other transactions as long as the changing transaction is not completed yet. Is getTheList running in a separate transaction? Are you using Oracle or Postgres?

Why didn't the JPA find() method read uncommitted changes?

I am puzzled by a JPA behavior that I did not expect (using EclipseLink).
I run a stateless session EJB (3.2) on WildFly 10 (JDK 8). My method call is - by default - encapsulated in a transaction.
Now my business method, when reading and updating an entity bean, does not recognize updates - especially not the version number of the entity. So my call results in an
org.eclipse.persistence.exceptions.OptimisticLockException
My code looks simplified as this:
public ItemCollection process(MyData workitem) {
    ....
    // load the document from JPA
    persistedDocument = manager.find(Document.class, id);
    logger.info("#version=" + persistedDocument.getVersion());
    // prints e.g. 3

    // change some data
    ....
    manager.flush();
    logger.info("#version=" + persistedDocument.getVersion());
    // prints e.g. 4
    ....
    // load the document from JPA once again
    persistedDocument = manager.find(Document.class, id);
    logger.info("#version=" + persistedDocument.getVersion());
    // prints e.g. 3 (!!)

    // change some data
    ....
    manager.flush();
    // throws OptimisticLockException !!
    // ...Document#1fbf7c8e] cannot be updated because it has changed or been deleted since it was last read
    ...
}
If I put the code (which changes the data and flushes the entity bean) in a method annotated with
@TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW)
everything works as expected.
But why does the second call to find() in my code not read the new version number? I would expect version 4 after the flush() and find() calls.
After all, it looks like calling
manager.clear();
solves the problem. I thought that detaching the object should do the same, but in my case only calling clear() fixed the problem.
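In the method above, the workaround looks roughly like this (manager being the injected EntityManager from the question):

manager.flush();  // writes the pending UPDATE; the row's version becomes 4
manager.clear();  // detaches everything, emptying the persistence context
// the next find() now hits the database instead of the first-level cache
persistedDocument = manager.find(Document.class, id);
logger.info("#version=" + persistedDocument.getVersion()); // prints 4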
More findings:
After all, it seems not to be a good idea to call the methods detach() and flush() from a service layer. I did this because I wanted to get the new version id of my entity before leaving my business method, so I could return that id to the client. I changed my strategy in this case and removed all the 'bad stuff' with detaching and flushing my entity beans. The code became clearer, and its complexity was reduced dramatically.
And of course the entityManager now behaves correctly: if I query the same entity bean several times in one transaction, the entityManager returns the correct, updated version.
So the answer to my own question is: leave out flush() and clear() as long as there is no really good reason to use them.

Best way to handle multiple inserts

Currently we are using Play 1.2.5 with Java and MySQL. We have a simple JPA model (a Play entity extending the Model class) that we save to the database.
SimpleModel test = new SimpleModel();
test.foo = "bar";
test.save();
At each web request we save multiple instances of the SimpleModel, for example:
JPAPlugin.startTx(false);
for (int i = 0; i < 5000; i++) {
    SimpleModel test = new SimpleModel();
    test.foo = "bar";
    test.save();
}
JPAPlugin.closeTx(false);
We are using the JPAPlugin.startTx and closeTx to manually start and end the transaction.
Everything works fine if there is only one request executing the transaction.
What we noticed is that if a second request tries to execute the loop simultaneously, the second request gets a "Lock wait timeout exceeded; try restarting transaction javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not insert: [SimpleModel]" since the first request locks the table but is not done until the second request times out.
This results in multiple:
ERROR AssertionFailure:45 - an assertion failure occured (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session)
org.hibernate.AssertionFailure: null id in SimpleModel entry (don't flush the Session after an exception occurs)
Another side effect is that the CPU usage during the inserts goes crazy.
To fix this, I'm thinking of creating a transaction-aware queue to insert the entities sequentially, but that would result in huge insert times.
What is the correct way to handle this situation?
JPAPlugin on Play Framework 1.2.5 is not thread-safe, and you will not resolve this using this version of Play.
That problem is fixed in Play 2.x, but if you can't migrate, try using Hibernate directly.
You should not need to handle transactions yourself in this scenario.
Instead, either put your inserts in a controller method or in an asynchronous job if the task is time-consuming.
Jobs and controllers both handle transactions.
However, check that this is really what you are trying to achieve. Each HTTP request creating 5000 records does not seem realistic. Perhaps it would make more sense to have a container model with a collection, as sketched below?
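That container model could look something like this in Play 1.x; SimpleModelBatch is a hypothetical name, and the cascade is what lets one save() persist the whole collection:

import java.util.ArrayList;
import java.util.List;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.OneToMany;

import play.db.jpa.Model;

@Entity
public class SimpleModelBatch extends Model {

    // one save() on the batch cascades to every SimpleModel in the list
    @OneToMany(cascade = CascadeType.ALL)
    public List<SimpleModel> items = new ArrayList<SimpleModel>();
}

Saving a single SimpleModelBatch from the controller would then persist all the items in one go, instead of 5000 separate saves.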
Do you really need a transaction for the entire insert? Does it matter if the database is not locked during the data import?
You can simply create a job and execute it for each insert:
for (int i = 0; i < 5000; i++) {
    new Job() {
        @Override
        public void doJob() {
            SimpleModel test = new SimpleModel();
            test.foo = "bar";
            test.save();
        }
    }.now();
}
This will create a single transaction for each insert and get rid of your database lock issue.
