Transaction rollback on CannotAcquireLockException - java

I am using Spring AOP + the Hibernate transaction manager for managing my transactions. In my production environment I am getting a CannotAcquireLockException because some jobs run in parallel.
I have a header table and an item table; when I insert into the header table, the items are inserted automatically by Hibernate's cascade functionality. When I run into a CannotAcquireLockException on the item table, only the header gets saved and not the items, even though they are in the same transaction.
Unfortunately, I am not supposed to share my code, but please let me know if you need any details.
For any other exception, the transaction is rolled back correctly.

This is a definite deadlock situation, and it is more a DB issue than a Hibernate/Spring problem with your classes. I faced a similar situation where one thread was doing a SELECT while another thread was trying to INSERT/UPDATE the same row. Some quick solutions:
Use a SELECT ... FOR UPDATE query: this acquires a lock on the selected rows until the operation is done (a sketch follows below).
On the DB side: creating appropriate indexes also helps.
Hope this helps. More details here
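For illustration, a minimal sketch of the SELECT ... FOR UPDATE approach with Hibernate's pessimistic locking; the Header entity, sessionFactory, and headerId are assumptions, not from the original post:

import org.hibernate.LockMode;
import org.hibernate.Session;

// LockMode.UPGRADE issues SELECT ... FOR UPDATE and holds the row lock
// until the transaction ends (on Hibernate 3.6+, LockMode.PESSIMISTIC_WRITE
// is the preferred equivalent).
Session session = sessionFactory.getCurrentSession();
Header header = (Header) session.get(Header.class, headerId, LockMode.UPGRADE);
// ... modify header and its cascaded items; the lock releases at commit ...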

Related

Trouble making transactional call

In my code I am saving data to a Microsoft SQL Server database, version 10.50.4042.0.
I am using hibernate-validator version 4.2.0.CR1 and hibernate-commons-annotations version 3.2.0.Final.
I am working across several Maven projects built on the Spring framework. In my test project (using JUnit 4.8.2) I am trying to save data into the database by selecting several XML files and converting them into database rows (one row at a time). Inside the project that deals with SQL transactions I send the data to the database using this annotation:
@Transactional(readOnly = false, propagation = Propagation.REQUIRED, isolation = Isolation.READ_COMMITTED)
I think the problem occurs inside Hibernate's transactional processing, but there are no asynchronous calls and the XML structure is valid. I have tried different approaches, but the problem occurs seemingly at random, without any specific pattern, and it fails to save only one random data row.
The error message I get is:
2016-06-09 12:41:01,578: ERROR [http-8080-Processor3] Could not synchronize database state with session
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:85)
at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:70)
at org.hibernate.jdbc.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:47)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2574)
at org.hibernate.persister.entity.AbstractEntityPersister.updateOrInsert(AbstractEntityPersister.java:2478)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2805)
at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:114)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:268)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:260)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:180)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:656)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
at com.sun.proxy.$Proxy48.save(Unknown Source)
There are no other SQL calls updating the row at the same time.
Can you help if you have come across the same problem in Hibernate? Any other suggestions might help as well.
Thank you
Actually this problem means that Hibernate was unable to find a record with the given id.
To avoid this I always read the record with the same id first; if the record comes back I call update, otherwise I throw a "record not found" exception (a sketch follows below).
If you have deleted an object and then tried to update it, you need to clear the session: session.clear();
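For illustration, a minimal sketch of that read-before-update pattern; the Member entity, memberId, and newName are placeholders, not from the original answer:

import org.hibernate.Session;

// Verify the row exists before updating; names here are illustrative.
Session session = sessionFactory.getCurrentSession();
Member existing = (Member) session.get(Member.class, memberId);
if (existing == null) {
    throw new IllegalStateException("record not found: " + memberId);
}
// The instance is attached, so changes are flushed automatically at commit.
existing.setName(newName);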
Actual Answer:
In the Hibernate mapping for the id property, if you use a generator class, you should not set that property's value explicitly via a setter method. If you set the value of the id property explicitly, it leads to the error above; check this to avoid the error (see the sketch below).
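A minimal annotated sketch of that rule; the Invoice entity name is illustrative:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Invoice {
    @Id
    @GeneratedValue   // the generator assigns the id on save()
    private Long id;

    public Long getId() { return id; }
    // Deliberately no setId() call before saving: assigning the id manually
    // makes Hibernate issue an UPDATE for a row that doesn't exist, which
    // produces exactly the StaleStateException above.
}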
Investigation Procedure:
However, to get a better handle on what causes the problem, try the following:
In your Hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem.
Set the log levels for Spring and Hibernate to DEBUG; again, this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
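For the first step, a programmatic equivalent (a sketch; most setups put these in hibernate.cfg.xml or the Spring session-factory properties instead):

import org.hibernate.cfg.Configuration;

// Assumes a hibernate.cfg.xml on the classpath; the property keys are the
// standard Hibernate ones.
Configuration cfg = new Configuration().configure();
cfg.setProperty("hibernate.show_sql", "true");    // log every executed SQL statement
cfg.setProperty("hibernate.format_sql", "true");  // pretty-print the logged SQL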
Solution:
This exception can arise in different scenarios around Hibernate's save and saveOrUpdate methods, typically when an object that still needs to be saved is transient. Conditions:
Flushing the data before committing the object may clear all objects pending for persist.
If the object has an auto-generated primary key and you force an assigned key, it may cause the exception.
If you clean (clear) the object from the session before committing it to the database, it may lead to this exception.
Zero or incorrect ID: Hibernate expects the primary key/id to be null (not initialized at save time), as per point 2, to mean the object was not saved yet. If you set the ID to zero or something else, Hibernate will try to update instead of insert, which may throw this exception.
Object doesn't exist: this is the easiest to determine: has the object been deleted somehow? If so, trying to delete it again will throw this exception.
Object is stale: Hibernate caches objects in the session. If the object was modified and Hibernate doesn't know about it, it will throw this exception; note the StaleStateException part of the exception name.
So flush, clear, or clean objects after committing them to the database, not before.
As far as I know, it can be one of the three reasons below:
The primary key of the database table is not correctly mapped in your code.
The update is being triggered for an object with no backing row in the database. You can easily find this out by looking at the object's id in debug mode, or by printing the id in the logs and checking whether a record with that id actually exists in the database at that time.
The isolation level. I see you are using READ_COMMITTED here. If the two points above do not make the exception go away, try setting DEFAULT as the isolation level (sketched below); DEFAULT lets the isolation level be determined by the database tier, which in your case is Microsoft SQL Server.
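A minimal sketch of that change on the annotated method; the method name and RowData type are illustrative, not from the original post:

import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Same attributes as the original annotation, but with isolation DEFAULT so
// the database (here Microsoft SQL Server) decides the isolation level.
@Transactional(readOnly = false, propagation = Propagation.REQUIRED, isolation = Isolation.DEFAULT)
public void saveRow(RowData row) {   // RowData is a hypothetical entity
    // ... existing save logic ...
}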
Share your code if none of the three points above solves your issue.
Reading your explanation, it also seems to me that this is a problem related to the isolation level of your transaction.
The READ_COMMITTED isolation level specifies that a transaction may only access records that are not part of another open transaction. So I think that in your case another transaction occasionally accesses one or more of the records being batch-updated, which raises the exception.
One solution is to commit one row at a time and verify the row's DB state each time; alternatively, you may catch and handle this exception in a way that avoids halting your whole process or transaction.

How to force commit Spring - hibernate transaction safely

We are using Spring and Hibernate for a web application.
The application has a shopping cart where the user can place items. In order to keep the items viewable across different logins, the item values in the shopping cart are stored in tables. When the shopping cart is submitted, the items are saved into a different table, where we need to generate the order number.
When we insert the values into that table, we get the order number by taking the max order number and adding 1 to it. We are using the Spring transaction manager and Hibernate; in the code flow we get the order number and update the Hibernate object to hold the order number value. When I debug, I notice that the order number entity bean is only inserted once the complete transaction is committed.
The issue is that when two requests are submitted to the server at the same time, the same order number is used, and only one request's data gets inserted; the other request's value, which is again a unique one, cannot be inserted.
The order number in the table is unique.
I noticed when debugging that the persistence layer's changes are not inserted into the database even after issuing a session flush:
session.flush()
It just updates memory and inserts the data into the DB only at the end of the Spring transaction. I tried explicitly issuing a commit on the transaction:
session.getTransaction().commit();
This inserted the values into the database immediately, but further along in the code flow it reported that a transaction could not be started.
Any help is highly appreciated.
Added:
I used an Oracle database.
There is a sequence number which is unique for that table, and the order number maps to it.
Follow these steps:
1) Create a service method with propagation REQUIRES_NEW in a different service class.
2) Move the code you want flushed to the DB into this new method.
3) Call this method from the existing API. (Because of Spring's proxying, the new service method must be called from a different class; otherwise REQUIRES_NEW will not take effect, and REQUIRES_NEW is what ensures your data is flushed and committed.)
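A minimal sketch of that layout; the class, method, and return value are hypothetical, not from the original post:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderNumberService {

    // Runs in its own transaction and commits independently of the caller's
    // transaction, so the allocated number is durable immediately.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public long allocateOrderNumber() {
        // ... read max order number, increment, persist, return it ...
        return 0L; // placeholder
    }
}

// In the existing service (a *different* class, so the Spring proxy is used):
// long orderNum = orderNumberService.allocateOrderNumber();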
I would set the order number with a database trigger that runs in the same transaction as the shopping cart insert.
After you save the shopping cart, to see the updated order count, you'll have to call:
session.refresh(cart);
The count shouldn't be managed by Hibernate (insertable/updatable = false, or @Transient).
Your first problem is that of serializing access around the number generation when multiple threads execute the same logic. If you could use Oracle sequences, this would be taken care of automatically at the database level, since sequences are guaranteed to return unique values no matter how many times they are called. Since this instead needs to be managed on the server side, you need a synchronization mechanism around your number-generation logic (select max and increment by one) across the transaction boundary. You can make the service method synchronized (your service class would be a Spring-managed singleton) and declare the transaction boundary around it. However, please note that this has performance implications and is usually bad for scalability.
Another option is a variation of this: store the id to be allocated in a separate table with one column, "currentVal", and use a pessimistic lock to get the next number (see the sketch below). This way the main table does not take any big lock; the lock is held only on the sequence-generator row until the main entity-creation transaction completes. The main idea behind these techniques is to serialize access to the sequence generator and hold the lock until the main entity transaction commits. Also, delay the number generation as late as possible.
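A minimal sketch of that counter-table variant using native SQL; the ORDER_NUMBER_GENERATOR table and CURRENT_VAL column are hypothetical names:

// FOR UPDATE serializes concurrent allocators on this single row; the lock
// is released when the surrounding transaction commits.
Number current = (Number) session
        .createSQLQuery("SELECT CURRENT_VAL FROM ORDER_NUMBER_GENERATOR WHERE ID = 1 FOR UPDATE")
        .uniqueResult();
long next = current.longValue() + 1;
session.createSQLQuery("UPDATE ORDER_NUMBER_GENERATOR SET CURRENT_VAL = :next WHERE ID = 1")
        .setParameter("next", next)
        .executeUpdate();
// use 'next' as the order number for the new row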
The solution suggested by @Vlad is a good one if using triggers is fine in your design.
Regarding your question about the flush behaviour: the SQL is sent to the database at the flush call, but the data is not committed until the transaction is committed, either declaratively or via a manual commit. The transaction can see the data it proposes to change, but other transactions may not, depending on the transaction's isolation level.
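A short sketch of that distinction with a plain Hibernate session ('order' is an illustrative entity):

Transaction tx = session.beginTransaction();
session.save(order);
session.flush();   // SQL is sent to the DB now, but it is still uncommitted:
                   // this transaction sees the row, others (usually) do not
// ... further work ...
tx.commit();       // only now does the row become visible to other transactions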

How can I programmatically set the rollback point for a transaction in Spring?

How can I specify the rollback point for a transaction in Spring?
Assume the following scenario: I have to perform a really long insert into the DB which takes quite some time (several minutes). This insert operation is wrapped in a transaction, which ensures that if a problem occurs, the transaction is aborted and the database is restored to the state preceding the beginning of the transaction.
However, this solution affects the performance of the application, since other transactions cannot access the DB while the long transaction is being executed. I solved this by splitting the large transaction into several smaller transactions that perform the same operation. However, if one of these small transactions fails, the database rolls back only to the state preceding that last transaction, which would leave the database in an incorrect state. I want that if an error occurs in any of these smaller transactions, the database rolls back to the state before the first small transaction (i.e., exactly the state it would roll back to if the operation were performed as a single transaction).
Do you have any suggestion how I can achieve this using Spring transactions?
You should look at http://docs.spring.io/spring/docs/4.0.3.RELEASE/javadoc-api/org/springframework/transaction/TransactionStatus.html .
It has the required functionality:
- create savepoint
- release savepoint
- rollback to savepoint
Of course, your transaction manager (and the underlying JDBC driver and DB) must support this functionality; a sketch follows below.
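A minimal sketch of the savepoint API via TransactionTemplate; the Chunk type and insertChunk method are hypothetical:

import org.springframework.transaction.support.TransactionTemplate;

transactionTemplate.execute(status -> {
    for (Chunk chunk : chunks) {               // 'Chunk' is an illustrative type
        Object savepoint = status.createSavepoint();
        try {
            insertChunk(chunk);                // hypothetical insert logic
            status.releaseSavepoint(savepoint);
        } catch (RuntimeException e) {
            status.rollbackToSavepoint(savepoint); // undo only this chunk
            status.setRollbackOnly();              // or abort the whole transaction
            break;
        }
    }
    return null;
});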
If you can use the same primary key sequence for the staging and production tables, then you can move the data from staging to production in batches. When a small transaction fails, you can use the keys in the staging table to delete the corresponding rows from the production table; that way you can restore the production table to its original state.

TX-row lock contention: Inserting duplicate values for a unique key

We are getting a TX-row lock contention error while trying to insert data.
It happens while running a job which processes an XML file with almost 10,000 records, inserting the data into a table.
We have a unique key constraint on one of the columns in the table, and the request contains duplicate values. This is causing the locks, and thereby the job is taking more time.
We are using Hibernate and Spring. The DAO method we use is the Hibernate template's save, annotated with the Spring transaction manager's @Transactional.
Any suggestions, please?
It's not clear whether you're getting locking problems or errors.
"TX-row lock contention" is an event indicating that two sessions are trying to insert the same value into a primary or unique constraint column set -- an error is not raised until the first one commits, then the second one gets the error. So you definitely have multiple sessions inserting rows. If you just had one session then you'd receive the error immediately, with no "TX-row lock contention" event being raised.
Suggestions:
Insert into a temporary table without the constraint, then load to the real table using logic that eliminates the duplicates
Eliminate the duplicates as part of the read of the XML (a sketch follows below).
Use Oracle's error-logging syntax; there is an example here: http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm#SQLRF55004
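A minimal sketch of the second suggestion: dropping duplicates while reading the XML, keyed on the unique column. The XmlRecord type and its accessor are hypothetical:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Keep only the first record for each unique-key value before saving.
Set<String> seen = new HashSet<>();
List<XmlRecord> unique = new ArrayList<>();    // XmlRecord is illustrative
for (XmlRecord r : parsedRecords) {
    if (seen.add(r.getUniqueColumnValue())) {  // add() returns false for duplicates
        unique.add(r);
    }
}
// save 'unique' via the DAO instead of 'parsedRecords'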

Hibernate Delete Error: Batch Update Returned Unexpected Row Count

I wrote the method below that is supposed to delete a member record from the database, but when I use it in my servlet it returns an error.
MemberDao Class
public static void deleteMember(Member member) {
    Session hibernateSession = HibernateUtil.getSessionFactory().getCurrentSession();
    Transaction tx = hibernateSession.beginTransaction();
    hibernateSession.delete(member);
    tx.commit();
}
Controller Part
if (delete != null) {
    HttpSession httpSession = request.getSession();
    Member member = (Member) httpSession.getAttribute("member");
    MemberDao.deleteMember(member);
    nextPage = "ledenlijst.jsp";
}
HTTP Status 500
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
Sometimes it even throws this error when I try to execute the page multiple times.
org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
Does anybody know what exactly is causing these errors?
The error can be caused by several things. I'm not taking the credit for it; I found it here.
Flushing the data before committing the object may clear all objects pending for persist.
If the object has an auto-generated primary key and you are forcing an assigned key.
If you are cleaning the object before committing it to the database.
Zero or incorrect ID: if you set the ID to zero or something else, Hibernate will try to update instead of insert.
Object is stale: Hibernate caches objects from the session. If the object was modified and Hibernate doesn't know about it, it will throw this exception; note the StaleStateException.
Also look at this answer by beny23 which gives a few further hints to find the problem.
In your hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem.
Set the log levels for Spring and Hibernate to DEBUG, again this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
The exception
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
tends to be thrown when Hibernate notices that the entity it wants to flush to the database isn't exactly as it was at the beginning of the transaction.
I described two different use cases that happened to me in more detail here.
In my case this exception was caused by a wrong entity mapping. There was no cascade on the relation, and the referenced child entity wasn't saved before being referenced from the parent. Changing it to
@OneToMany(cascade = CascadeType.ALL)
fixed the issue (see the sketch below).
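A minimal sketch of that mapping in context; the Parent/Child entity names are illustrative:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class Parent {
    @Id @GeneratedValue
    private Long id;

    // CascadeType.ALL makes Hibernate persist new children together with
    // the parent, so the child rows exist before they are referenced.
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "parent")
    private List<Child> children = new ArrayList<>();
}

@Entity
class Child {
    @Id @GeneratedValue
    private Long id;

    @ManyToOne
    private Parent parent;
}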
Surely the best way to find the cause of this exception is to set show_sql and the DEBUG log level; it will stop right at the SQL that caused the problem.
I was facing the same issue. The code was working in the testing environment, but not in the staging environment.
org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 3; expected: 1
The problem was that in the testing DB the table had a single entry for each primary key, but in the staging DB there were multiple entries for the same primary key. (The staging DB's table didn't have any primary key constraint, and there were multiple entries.)
So every update operation failed: Hibernate tries to update a single record and expects an update count of 1, but since there were 3 records in the table for the same primary key, the actual update count was 3. Since the expected and actual update counts didn't match, it threw the exception and rolled back.
After I removed all the records with duplicate primary keys and added the primary key constraint, it worked fine.
This was the solution in my case; maybe it will help you!
It was actually a conversion problem between a database field type (timestamp on PostgreSQL) and its equivalent property type (Calendar) in the Hibernate XML mapping file.
When Hibernate issued the update, it didn't find the row because the query used a badly converted Calendar value.
So I simply replaced the property type "Calendar" with "Date" in the Hibernate XML file and the problem was fixed.
I recently experienced this, and what happened was that I used the update method and it threw an exception because there was no existing record. I changed the method to saveOrUpdate and it worked.
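For illustration, the one-line difference (the session and entity are assumed context):

// update() throws StaleStateException when no row exists for the id;
// saveOrUpdate() inserts in that case and updates otherwise.
session.saveOrUpdate(entity);   // instead of session.update(entity)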
I experienced this same issue with Hibernate/JPA 2.1 when using memcached as the secondary cache. You would get the above exception along with a StaleStateException. The resolution was different from what has been noted previously.
I noticed that if you have an operation that interleaves deletes and selects (finds) on the same table within the same transaction, Hibernate can become overwhelmed and report that stale state exception. It would only occur for us in production, as multiple identical operations on different entities would hit the same table; you would see the system time out and throw exceptions.
The solution is simply to be more efficient. Rather than interleaving in a loop, resolve which items need to be read and read them, preferably in one operation, then perform the deletes in a separate operation (see the sketch below). Keep it all in the same transaction, but don't pepper Hibernate with read/delete/read/delete operations.
This is much faster and reduces the housekeeping load on Hibernate considerably. The problem went away. It only occurs when you are using a secondary cache; without one, the load falls on the database for resolution, which is another issue.
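A minimal sketch of that restructuring with a plain Hibernate session; the Item entity and its 'expired' property are illustrative:

import java.util.List;

// One read resolving everything to delete, then one delete pass: no
// interleaved find/delete calls for the second-level cache to reconcile.
List<?> toDelete = session
        .createQuery("from Item i where i.expired = :expired")
        .setParameter("expired", Boolean.TRUE)
        .list();
for (Object item : toDelete) {
    session.delete(item);
}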
I have had this problem. I checked my code and there weren't any problems there, but when I checked my data I found I had two entities with the same id!
So the flush() could not work, because for batch updates it processes rows one by one and it found 2 rows; therefore it didn't update and threw the exception. That was my problem; I don't know whether it applies to you!
