Trouble making transactional call - java
In my code I am saving data to a Microsoft SQL database version 10.50.4042.0
I am using hibernate-validator version 4.2.0.CR1 and hibernate-commons-annotations version 3.2.0.Final
I am working across several Maven projects built on the Spring framework. In my test project (using JUnit 4.8.2) I am trying to save data into the database by selecting several XML files and converting them into database rows (one row at a time). Inside the project that deals with SQL transactions I am sending the data to the database using this annotation:
@Transactional(readOnly = false, propagation = Propagation.REQUIRED, isolation = Isolation.READ_COMMITTED)
I think the problem occurs inside Hibernate's transactional processing, but there are no asynchronous calls and the XML structure is completely valid. I have tried different approaches, but the problem occurs seemingly at random, without any specific pattern, and it fails to save only one random data row.
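For context, the transactional save path looks roughly like the sketch below (the class, entity, and method names are simplified placeholders rather than the actual code):

    import org.springframework.orm.hibernate3.HibernateTemplate;
    import org.springframework.transaction.annotation.Isolation;
    import org.springframework.transaction.annotation.Propagation;
    import org.springframework.transaction.annotation.Transactional;

    public class ImportDao {

        private HibernateTemplate hibernateTemplate;

        public void setHibernateTemplate(HibernateTemplate hibernateTemplate) {
            this.hibernateTemplate = hibernateTemplate;
        }

        // One XML file is converted into one entity and saved per call.
        @Transactional(readOnly = false, propagation = Propagation.REQUIRED, isolation = Isolation.READ_COMMITTED)
        public void save(ImportedRecord record) {
            hibernateTemplate.saveOrUpdate(record);
        }
    }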
The error message I get is:
2016-06-09 12:41:01,578: ERROR [http-8080-Processor3] Could not synchronize database state with session
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:85)
at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:70)
at org.hibernate.jdbc.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:47)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2574)
at org.hibernate.persister.entity.AbstractEntityPersister.updateOrInsert(AbstractEntityPersister.java:2478)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2805)
at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:114)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:268)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:260)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:180)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:656)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
at com.sun.proxy.$Proxy48.save(Unknown Source)
There are no other SQL calls that are updating the row at the same time.
Can you help if you have come across the same problem in Hibernate? Any other suggestions would also help.
Thank you
Actually, this problem means that Hibernate was unable to find a record with the given id.
To avoid this, I always read the record with the same id first; if the record is found I call update, otherwise I throw a "record not found" exception.
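A minimal sketch of that read-before-update pattern (the entity class, id type, and setter below are illustrative assumptions, not from the original code):

    import org.hibernate.Session;

    public class SafeUpdater {

        public void safeUpdate(Session session, Long id, String newValue) {
            // Re-read the row first; get() returns null when no row exists for this id.
            MyEntity existing = (MyEntity) session.get(MyEntity.class, id);
            if (existing == null) {
                throw new IllegalStateException("Record not found for id " + id);
            }
            existing.setValue(newValue);
            session.update(existing);
        }
    }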
Another cause: you have deleted an object and then tried to update it. In that case you need to clear the session with session.clear();
Actual Answer:
In the Hibernate mapping file, if you use any generator class for the id property, you should not set that property's value explicitly with a setter method.
If you set the value of the id property explicitly, it leads to the error above, so check for this to avoid the error.
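For example, with an annotated entity and an auto-generated id (the entity below is only an illustration):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class ImportedRecord {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;            // let Hibernate assign this; do not call a setter for it

        private String description;

        public Long getId() { return id; }

        public String getDescription() { return description; }

        public void setDescription(String description) { this.description = description; }

        // Wrong: record.setId(5L); session.save(record);
        // Hibernate then treats the object as an existing row and issues an UPDATE,
        // which matches 0 rows and produces exactly this StaleStateException.
    }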
Investigation Procedure:
However, to get a better handle on what causes the problem, try the following:
In your Hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem.
Set the log levels for Spring and Hibernate to DEBUG; again, this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring (see the sketch below). This should give you a better idea of the offending line of code.
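A rough outline of such a test, driving a plain Hibernate session so Spring's transaction proxy is out of the picture (HibernateUtil and ImportedRecord are placeholder names):

    import org.hibernate.Session;
    import org.hibernate.Transaction;
    import org.junit.Test;

    public class SaveWithoutSpringTxTest {

        @Test
        public void saveSingleRowDirectly() {
            // Plain Hibernate session, no Spring transaction manager involved.
            Session session = HibernateUtil.getSessionFactory().openSession();
            Transaction tx = session.beginTransaction();
            try {
                ImportedRecord record = new ImportedRecord();
                record.setDescription("row parsed from one XML file");
                session.save(record);
                tx.commit();   // the flush happens here, so failures surface at this line
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }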
Solution:
This exception arises in different scenarios with Hibernate's save and saveOrUpdate methods.
If the object is transient and needs to be saved, this exception may occur.
Conditions:
Flushing the data before committing the object may clear all objects pending persistence.
If the object has an auto-generated primary key and you are forcing an assigned key, this may cause the exception.
If you are clearing the object (session) before committing the object to the database, this may cause the exception.
Zero or incorrect ID: Hibernate expects a primary key or id of null (not initialized before saving), as per the second condition above, to mean the object was not saved. If you set the ID to zero or something else, Hibernate will try to update instead of insert, or it may throw this exception.
Object doesn't exist: this is the easiest to determine: has the object been deleted somehow? If so, trying to delete it again will throw this exception.
Object is stale: Hibernate caches objects from the session. If the object was modified and Hibernate doesn't know about it, it will throw this exception; note the StaleStateException part of the exception.
So flush, clear, or clean the session after committing the object to the database, not before (see the sketch below).
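In code, that ordering looks roughly like this (session handling is simplified for illustration):

    import org.hibernate.Session;
    import org.hibernate.Transaction;

    public class SaveThenClear {

        public void saveThenClear(Session session, Object entity) {
            Transaction tx = session.beginTransaction();
            session.saveOrUpdate(entity);
            tx.commit();      // commit first: the pending insert/update is flushed here
            session.clear();  // only clear the session afterwards, never before the commit
        }
    }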
As far as I know, it can be one of the three reasons mentioned below:
The primary key of the table in the database is not correctly mapped in your code.
The update is being triggered for an object that has no backing row in the database. You can easily find this out by looking at the id of the object in debug mode, or by printing the id in the logs, and checking whether a record with that id actually exists in the database at that time.
The isolation level. I see you are using READ_COMMITTED as the isolation level here. If the two points above do not make the exception go away, try setting DEFAULT as the isolation level. DEFAULT lets the isolation level be determined by the database tier, which in your case is Microsoft SQL Server (see the snippet below).
Share your code if none of the three points mentioned above solves your issue.
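For the third point, the change is just the isolation attribute on the annotation; a hypothetical sketch (the service and entity names are placeholders, not from the question):

    import org.springframework.transaction.annotation.Isolation;
    import org.springframework.transaction.annotation.Propagation;
    import org.springframework.transaction.annotation.Transactional;

    public class ImportService {

        // Isolation.DEFAULT defers to the database's own default isolation level.
        @Transactional(readOnly = false, propagation = Propagation.REQUIRED, isolation = Isolation.DEFAULT)
        public void save(ImportedRecord record) {
            // ... the existing save logic goes here unchanged ...
        }
    }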
Reading your explanation, it also seems to me that this is a problem related to the isolation level of your transaction.
The READ_COMMITTED isolation level specifies that a transaction only sees records that have already been committed, so I think that in your case it randomly happens that another transaction touches one or more of the records being batch updated, and that raises the exception.
I think one solution is to commit one row at a time, verifying the row's state in the database each time; alternatively, you could handle this exception in a way that avoids halting your whole process or transaction (one possible shape is sketched below).
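One way to commit row by row with Spring is to give each row its own transaction, for example (class and method names are illustrative):

    import org.springframework.transaction.annotation.Propagation;
    import org.springframework.transaction.annotation.Transactional;

    public class RowByRowImporter {

        // Each call runs in its own transaction, so a failure on one row only
        // rolls back that row; the calling loop can log it and continue.
        @Transactional(propagation = Propagation.REQUIRES_NEW)
        public void saveSingleRow(ImportedRecord record) {
            // ... persist one record here ...
        }
    }

Note that with proxy-based Spring AOP this method has to be called through the Spring proxy (from another bean), otherwise REQUIRES_NEW is not applied.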
Related
Transaction rollback on CannotAcquireLockException
I am using Spring AOP + the Hibernate transaction manager for managing my transactions. In my production environment I am getting CannotAcquireLockException because some jobs run in parallel. I have a header table and an item table; when I insert into the header table, the items are inserted automatically by Hibernate's cascade functionality. So when I run into CannotAcquireLockException on the item table, only the header is getting saved and not the item, even though they are in the same transaction. Unfortunately I am not allowed to share my code, but please let me know if you need any details. When I get any other exception, the transaction is rolled back.
This is a definite deadlock situation. It is more a DB issue than a Hibernate/Spring problem with your classes. I faced a similar situation where one thread was doing a SELECT and another thread was trying to INSERT/UPDATE the same row. Some quick solutions: use a SELECT ... FOR UPDATE SQL query, which normally acquires a lock on the particular index until the operation is done; on the DB side, creating indexes also helps. Hope this helps. More details here
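With the Hibernate 3 API used elsewhere in this thread, a SELECT ... FOR UPDATE can be expressed through a lock mode, roughly like this (the Item entity and its fields are placeholders):

    import org.hibernate.LockMode;
    import org.hibernate.Session;

    public class LockingDao {

        public void updateWithRowLock(Session session, Long itemId) {
            // LockMode.UPGRADE issues SELECT ... FOR UPDATE, so a concurrent writer
            // blocks until this transaction commits instead of deadlocking halfway.
            Item item = (Item) session.get(Item.class, itemId, LockMode.UPGRADE);
            if (item != null) {
                item.setQuantity(item.getQuantity() + 1);
                session.update(item);
            }
        }
    }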
Mysql transaction getting 'Lock wait timeout exceeded' because of pending transaction FK insert
In my Java application, I'm running a Transaction I where an insert is done on table A. Transaction I also calls a method which uses a separate connection to run a separate Transaction II in order to create a record in table B. (This table is used to keep track of internal ID generation, and it is important that, whatever happens to Transaction I, Transaction II is committed.) Now, because of things, table B actually had a reference to the PK of table A (which is actually the problem). However, since the record in A is just being created in the "outer" Transaction I, it is not yet committed, and the value to insert as a reference in B (pointing to A) thus does not exist for Transaction II. (Or in other words, I would suspect that this value is not visible to Transaction II since it isn't committed.) In this situation I'd expect to get an FK constraint violation error, but I am getting com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Lock wait timeout exceeded; try restarting transaction. Testing the same scenario on a PostgreSQL db raises the expected org.postgresql.util.PSQLException: ERROR: insert or update on table "B" violates foreign key constraint. I suspect that the MySQL database knows about the insert of the first transaction and actually tries to wait in the second transaction for the first to finish, to actually be able to insert the record of the second transaction? Of course that can never happen, since the first transaction also waits for the second to complete. So the "inner" transaction aborts with the timeout. Is there a reason the two db systems behave differently, or am I missing something? Update: I think this has to do with the different default isolation levels of MySQL (REPEATABLE_READ) and PostgreSQL (READ_COMMITTED).
You cannot have two separate connections and expect it to work. You should write the code so that you can pass the current connection object to the method, so that the connection is reused in that method; then it will not complain about the FK.
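A sketch of what "pass the current connection" could look like, so both inserts share one transaction (the table and column names are made up for illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class IdTrackingDao {

        // Reuse the caller's connection instead of opening a second one, so the
        // insert into B runs in the same transaction that created the row in A
        // and can see the still-uncommitted parent row.
        public void trackGeneratedId(Connection connection, long tableAId) throws SQLException {
            PreparedStatement ps = connection.prepareStatement("INSERT INTO B (a_id) VALUES (?)");
            try {
                ps.setLong(1, tableAId);
                ps.executeUpdate();
            } finally {
                ps.close();
            }
        }
    }

Note that this also ties the two inserts to one transaction, which conflicts with the stated requirement that the B record always be committed; that trade-off is part of why the fix below simply drops the FK.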
The solution for my specific problem was to simply remove the FK constraint between table B and table A, since it is not needed anyway. The different behaviour between MySQL and PostgreSQL definitely derives from their default settings regarding isolation levels.
How to synchronize a code which is deployed on a cluster?
We have a stateless EJB which persists some data in an object-oriented database. Unfortunately, today our persistence object does not have a unique key for some unknown reason, and altering the PO is also not possible today. So we decided to synchronize the code: we check whether there is an object already persisted with the name (which we consider should be unique), and then we decide whether to persist or not. Later we realized that the code is deployed on a cluster which has three JBoss instances. Can anyone please suggest an approach that does not allow persisting objects with the same name?
If you have a single database behind the JBoss cluster you can just apply a unique constraint to the column, for example (I am assuming it's an SQL database): ALTER TABLE your_table ADD CONSTRAINT unique_name UNIQUE (column_name); Then in the application code you may want to catch the SQL exception and let the user know they need to try again, or whatever. Update: If you cannot alter the DB schema then you can achieve the same result by performing a SELECT query before the insert to check for duplicate entries; if you are worried about two inserts happening at the same time, you can look at applying a WRITE_LOCK to the row in question.
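On the application side, catching the violation with Hibernate could look roughly like this (the entity and method names are examples, not from the question):

    import org.hibernate.Session;
    import org.hibernate.Transaction;
    import org.hibernate.exception.ConstraintViolationException;

    public class UniqueNameSaver {

        public boolean saveIfNameIsFree(Session session, PersistentObject po) {
            Transaction tx = session.beginTransaction();
            try {
                session.save(po);
                tx.commit();
                return true;
            } catch (ConstraintViolationException e) {
                // Another cluster node inserted the same name first; the DB constraint caught it.
                tx.rollback();
                return false;   // tell the caller/user to retry with a different name
            }
        }
    }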
TX-row lock contention : Inserting Duplicate values for Unique Key
We are getting a TX-row lock contention error while trying to insert data. It happens while running a job which processes an XML with almost 10000 records, inserting the data into a table. We have a unique key constraint on one of the columns in the table, and in the request we are getting duplicate values. This is causing the locks, and thereby the job is taking more time. We are using Hibernate and Spring. The DAO method we use is the Hibernate template's 'save', annotated with the Spring Transaction Manager's @Transactional. Any suggestions, please?
It's not clear whether you're getting locking problems or errors. "TX-row lock contention" is an event indicating that two sessions are trying to insert the same value into a primary or unique constraint column set; an error is not raised until the first one commits, then the second one gets the error. So you definitely have multiple sessions inserting rows. If you just had one session, you'd receive the error immediately, with no "TX-row lock contention" event being raised. Suggestions: insert into a temporary table without the constraint, then load into the real table using logic that eliminates the duplicates; eliminate the duplicates as part of the read of the XML (see the sketch below); or use Oracle's error logging syntax, example here: http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm#SQLRF55004
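For the second suggestion, eliminating duplicates while reading the XML can be as simple as keying the parsed records on the unique column before saving (the record type and getter are placeholders):

    import java.util.Collection;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class XmlDeduplicator {

        public Collection<Record> dropDuplicates(List<Record> parsedFromXml) {
            // Keep only the first record seen for each unique-key value,
            // so the batch never tries to insert the same key twice.
            Map<String, Record> byUniqueKey = new LinkedHashMap<String, Record>();
            for (Record record : parsedFromXml) {
                if (!byUniqueKey.containsKey(record.getUniqueKey())) {
                    byUniqueKey.put(record.getUniqueKey(), record);
                }
            }
            return byUniqueKey.values();
        }
    }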
Hibernate Delete Error: Batch Update Returned Unexpected Row Count
I wrote the method below that is supposed to delete a member record from the database, but when I use it in my servlet it returns an error.
MemberDao class:

    public static void deleteMember(Member member) {
        Session hibernateSession = HibernateUtil.getSessionFactory().getCurrentSession();
        Transaction tx = hibernateSession.beginTransaction();
        hibernateSession.delete(member);
        tx.commit();
    }

Controller part:

    if (delete != null) {
        HttpSession httpSession = request.getSession();
        Member member = (Member) httpSession.getAttribute("member");
        MemberDao.deleteMember(member);
        nextPage = "ledenlijst.jsp";
    }

HTTP Status 500: org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
Sometimes it even throws this error when I try to execute the page multiple times: org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
Does anybody know what exactly is causing these errors?
The error can be caused by several things. I'm not taking the credit for it; I found it here.
Flushing the data before committing the object may clear all objects pending persistence.
The object has an auto-generated primary key and you are forcing an assigned key.
You are clearing the object before committing the object to the database.
Zero or incorrect ID: if you set the ID to zero or something else, Hibernate will try to update instead of insert.
Object is stale: Hibernate caches objects from the session. If the object was modified and Hibernate doesn't know about it, it will throw this exception; note the StaleStateException.
Also look at this answer by beny23, which gives a few further hints to find the problem:
In your Hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem.
Set the log levels for Spring and Hibernate to DEBUG; again, this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
The exception org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1 is typically thrown when Hibernate notices that the entity it wants to flush to the database isn't exactly as it was at the beginning of the transaction. I described two different use cases that happened to me in more detail here.
In my case this exception was caused by a wrong entity mapping. There was no cascade on the relation, and the referenced child entity wasn't saved before it was referenced from the parent. Changing it to @OneToMany(cascade = CascadeType.ALL) fixed the issue. Surely the best way to find the cause of this exception is to set show_sql and the DEBUG level for the logs; it will stop right at the SQL that caused the problem.
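The mapping fix mentioned above looks roughly like this (Parent and Child are stand-ins for the real entities):

    import java.util.ArrayList;
    import java.util.List;
    import javax.persistence.CascadeType;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;

    @Entity
    public class Parent {

        @Id
        @GeneratedValue
        private Long id;

        // Cascade makes Hibernate save the children together with the parent,
        // so the parent never references an unsaved child row.
        @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
        private List<Child> children = new ArrayList<Child>();
    }

    @Entity
    class Child {

        @Id
        @GeneratedValue
        private Long id;

        @ManyToOne
        private Parent parent;
    }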
I was facing the same issue. The code was working in the testing environment, but it was not working in the staging environment. org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 3; expected: 1 The problem was that in the testing DB the table had a single entry for each primary key, but in the staging DB there were multiple entries for the same primary key. (The problem was that in the staging DB the table didn't have any primary key constraint, and there were multiple entries.) So every update operation failed: it tries to update a single record and expects an update count of 1, but since there were 3 records in the table for the same primary key, the resulting update count was 3. Since the expected and actual update counts didn't match, it threw the exception and rolled back. After I removed all the records with a duplicate primary key and added the primary key constraint, it worked fine.
This is the solution for my case; maybe it will help you! It was actually a conversion problem between a database field type (timestamp on PostgreSQL) and its equivalent property type (Calendar) in the Hibernate XML mapping file. When Hibernate issued this update request, it didn't retrieve the row because the query used a badly converted Calendar value. So I simply replaced the property type "Calendar" with "Date" in the Hibernate XML file and the problem was fixed.
I recently experienced this, and what happened was that I used the update method and it was throwing an exception because there was no existing record. I changed the method to saveOrUpdate and it worked.
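The change was essentially the following (session handling is simplified):

    import org.hibernate.Session;

    public class MemberStore {

        public void store(Session session, Member member) {
            // update() fails with StaleStateException when no row exists yet;
            // saveOrUpdate() inserts in that case and updates otherwise.
            session.saveOrUpdate(member);
        }
    }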
I experienced this same issue with Hibernate/JPA 2.1 when using memcached as the second-level cache. You would get the above exception along with a StaleStateException. The resolution was different than what has been noted previously. I noticed that if you have an operation that interleaves deletes and selects (finds) within the same table and transaction, Hibernate can become overwhelmed and report that stale state exception. It would only occur for us in production, as multiple identical operations on different entities would hit the same table. You would see the system time out and throw exceptions. The solution is simply to be more efficient: rather than interleaving in a loop, work out which items need to be read and read them, preferably in one operation, then perform the deletes in a separate operation (a rough outline follows below). Again, all in the same transaction, but don't pepper Hibernate with read/delete/read/delete operations. This is much faster and reduces Hibernate's housekeeping load considerably, and the problem went away. It occurs when you are using a second-level cache and won't occur otherwise, since without one the load falls on the database for resolution, but that's another issue.
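In outline, the reshuffled flow reads everything first and deletes afterwards instead of alternating (the query and the Item entity are illustrative):

    import java.util.List;
    import org.hibernate.Session;

    public class BatchDeleter {

        public void deleteCandidates(Session session, List<Long> candidateIds) {
            // Single read for all candidates instead of one find per delete.
            List<Item> items = session.createQuery("from Item i where i.id in (:ids)")
                                      .setParameterList("ids", candidateIds)
                                      .list();
            // Separate pass for the deletes; no more read/delete interleaving.
            for (Item item : items) {
                session.delete(item);
            }
        }
    }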
I have had this problem. I checked my code and there weren't any problems, but when I checked my data I found out I had two entities with the same id! So the flush() could not work, because it works row by row for batch updates and it found 2 rows. Therefore it didn't update and threw the exception, but that was my own problem. I don't know whether it applies to you or not!