I have a method in a DAO class that returns a List<Object[]>, and I am using a named query:
public List<Object[]> getListByCustomer(Session session, int customerId, List<Integer> strIds) {
Query namedQuery = session.createSQLQuery(QueryConstants.EXPORT);
namedQuery.setParameter("customer", customerId);
namedQuery.setParameter("stringId", strIds);
List<Object[]> objects = namedQuery.list();
return objects;
}
I want to pass the List<Integer> strIds as the stringId parameter of the named query, which is defined as follows:
public class QueryConstants {
public static final String EXPORT =
"SELECT sv.NAME, sv.TYPE, sv.CLIENT_ADDRESS, sv.NAME_REDUNDANT, sv.DEPARTURE_DATE, s1.CODE,sv.STATE, sv.CODE "
+ "FROM VIEW sv, PROCESS p1, SET s1 "
+ "WHERE sv.R_ID = p1.R_ID and p1.ISSUER_ID = s1.USER_ID and sv.CUSTOMER_ID = :customer and sv.R_ID IN (:stringId)";
}
But I get ORA-00932: inconsistent datatypes: expected NUMBER got BINARY.
Also, when I remove sv.R_ID IN (:stringId) from the query it works fine, and
when I pass a single Integer instead of the List<Integer> strIds it also works fine.
I'm using Oracle 10g.
This is a very misleading error and can have different root causes. In my case I was setting a parameter that was supposed to be a number, but at runtime it was null, hence it was bound as binary. On a separate occasion I got this error because of a bean creation error in Spring, which also meant the parameter was not being set correctly.
I think you just need to use
IN :stringId
instead of
IN (:stringId)
For JPA
namedQuery.setParameter("stringId", strIds);
is correct, but for Hibernate you should use
namedQuery.setParameterList("stringId", strIds);
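Applied to the DAO method from the question, a minimal sketch of the Hibernate variant could look like this (same names as in the question):
Query namedQuery = session.createSQLQuery(QueryConstants.EXPORT);
namedQuery.setParameter("customer", customerId);
// setParameterList expands the collection so each element is bound as a separate NUMBER value
namedQuery.setParameterList("stringId", strIds);
List<Object[]> objects = namedQuery.list();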
I encountered this same exception and found the reason below.
In my entity, a field was mapped to a custom object (a parent-child relationship with @ManyToOne). Later, the relationship annotation was removed by a developer but the data type was not changed.
After removing the @ManyToOne annotation, the @Column annotation should have been used with the appropriate data type (Integer).
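A minimal sketch of what the corrected mapping could look like (field and column names hypothetical):
// Before: the relationship annotation was removed, but the field type was left as the old parent entity
// @ManyToOne
// private ParentEntity parent;
// After: a plain column mapped with a matching simple type
@Column(name = "PARENT_ID")
private Integer parentId;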
If your parameter is a list and the list is empty, the error is raised; check that the list is not empty before binding it.
If your parameter is a single value, wrap it with TO_NUMBER(:your_param) to avoid the error.
This worked for me.
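A minimal sketch of the empty-list guard, reusing the names from the question above (and assuming the collection is bound with setParameterList):
if (strIds == null || strIds.isEmpty()) {
    // an empty IN (...) list is invalid SQL, so return early instead of binding it
    return Collections.emptyList();
}
namedQuery.setParameterList("stringId", strIds);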
In my case, I was using HQL (in a Spring Data repository) with an entity field mapped with @Enumerated(EnumType.ORDINAL). I was trying to use the enum object directly in the WHERE clause. The solution was to use TO_NUMBER(:your_param), as mentioned by the member above.
I got the same error but for a different reason. In my case it was because the parameters were supplied in a different order from the one defined in the query. I assumed (wrongly) that because the parameters were named, the order didn't matter. Sadly, it seems it does.
This has been on my mind for a long time: I still wonder why Spring JDBC made EmptyResultDataAccessException a runtime exception instead of a checked exception that would force the calling method to handle it. I personally ran into an issue when I first implemented Spring JDBC. Take this first scenario:
public List<User> getUsers(String firstName) {
    JdbcTemplate jd = this.getJdbcTemplate();
    // placeholder query matching users by first name
    List<User> userLst = jd.query("select * from user where first_name = ?",
            new Object[] { firstName }, new BeanPropertyRowMapper<User>(User.class));
    return userLst;
}
In the above scenario, even if the select query doesn't return any rows, Spring JDBC still creates a new List<User> and returns it with size 0. So here Spring JDBC does not throw EmptyResultDataAccessException; instead it returns a new, empty list object if no record is fetched from the DB.
Secondly, when querying for a single object, it behaves differently:
User user = jd.queryForObject("select * from user where user_id = ?", new Object[] { 1 }, new BeanPropertyRowMapper<User>(User.class));
Here Spring JDBC throws EmptyResultDataAccessException in case it doesn't find any record for user_id = 1.
Moreover, since EmptyResultDataAccessException is a runtime exception, I was not forced to catch it and act on it, so most of the time developers are puzzled and it goes unnoticed.
As I said for the first scenario, when I first coded it I was expecting the userLst object to be null, but Spring JDBC actually creates a new (empty) list, which we hadn't handled.
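A minimal sketch of how the single-row case can be handled explicitly (query and names follow the example above):
User user;
try {
    user = jd.queryForObject("select * from user where user_id = ?",
            new Object[] { 1 }, new BeanPropertyRowMapper<User>(User.class));
} catch (EmptyResultDataAccessException e) {
    // queryForObject expects exactly one row; map "no row found" to null explicitly
    user = null;
}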
Posting this to make this aspect clear. Thanks.
There is most probably no way to make everyone happy. Take the opposite case: you know perfectly well that the row exists because it's a valid ID. Yet calling this method would force you to catch an exception that will never be thrown, so you end up with an empty catch block, which is bad.
This goes back to the use of runtime exception vs. checked exception, there are plenty of references on this site and elsewhere.
I run a unit test with two test methods: one creates an entity on the H2 database, the other one finds it by some select criteria and deletes it afterwards. Both methods wrap all database interactions in JTA user transactions (one per method).
Now after some (unknown) changes in the backend, the delete method fails with an optimistic lock exception:
Caused by: org.hibernate.OptimisticLockException: Newer version [null] of entity [[com.example.entities.MyEntity#10001]] found in database
at org.hibernate.action.internal.EntityVerifyVersionProcess.doBeforeTransactionCompletion(EntityVerifyVersionProcess.java:54)
at org.hibernate.engine.spi.ActionQueue$BeforeTransactionCompletionProcessQueue.beforeTransactionCompletion(ActionQueue.java:699)
at org.hibernate.engine.spi.ActionQueue.beforeTransactionCompletion(ActionQueue.java:321)
at org.hibernate.internal.SessionImpl.beforeTransactionCompletion(SessionImpl.java:613)
at org.hibernate.engine.transaction.synchronization.internal.SynchronizationCallbackCoordinatorImpl.beforeCompletion(SynchronizationCallbackCoordinatorImpl.java:122)
at org.hibernate.engine.transaction.synchronization.internal.RegisteredSynchronization.beforeCompletion(RegisteredSynchronization.java:53)
at bitronix.tm.BitronixTransaction.fireBeforeCompletionEvent(BitronixTransaction.java:532)
at bitronix.tm.BitronixTransaction.commit(BitronixTransaction.java:235)
... 97 more
The entity has a version property annotated with @Version. The entity's version value is 0, and there isn't actually a newer version of that entity in the database. The finder seems to work as expected (it finds the persisted entity).
Actually, the validator does not find a "current version". I was able to debug my way through the Hibernate classes until I found the prepared statement that fetches the current version (in AbstractEntityPersister):
public Object getCurrentVersion(Serializable id, SessionImplementor session) throws HibernateException {
// ...
try {
PreparedStatement st = session.getTransactionCoordinator()
.getJdbcCoordinator()
.getStatementPreparer()
.prepareStatement( getVersionSelectString() );
try {
getIdentifierType().nullSafeSet( st, id, 1, session );
ResultSet rs = session.getTransactionCoordinator().getJdbcCoordinator().getResultSetReturn().extract( st );
try {
if ( !rs.next() ) {
return null; // <- that's where I end up: version = null
}
The statement is correct, the id is correct too but the query result is empty.
prep68: select version from my_table where my_id =? {1: 10001}
But now the version number 0 is compared to null, they're not equal and that raises the OptimisticLockException.
Any help, tips, ideas and explanations are highly welcome.
It would appear this was a bug in Hibernate. When the transaction ends the entities being altered (with the remove() being one possible form of that) are fetched again to compare the database version number to that of the loaded entity and see if there's a difference. A difference implies the entity has been altered in the database during the transaction so it is aborted. Apparently, though, the entity would not be found exactly due to being removed. Of course at that point it's only removed in the entity manager, with the delete not yet being committed. I don't know whether that was the result of using the entity manager where it shouldn't be, or due to the deletes having been flushed and, although not committed yet, considered done within that transaction. In any case, the end result is comparing an actual version number with null and thus failing the lock test.
This has been fixed from Hibernate versions 4.3.8 and 5.0.0.Beta1 onwards. The issue can be found here: https://hibernate.atlassian.net/browse/HHH-9419
This is an old question but it took one and a half year from it being asked to a fix being available. Most people are likely to be using newer Hibernate versions now (or using EclipseLink which at that point did exhibit correct behaviour), but there's a project where I'm forced to use an older version for legacy reasons and just got stung by this.
I have a java project that runs on a webserver. I always hit this exception.
I read some documentation and found that pessimistic locking (or optimistic, but I read that pessimistic is better) is the best way to prevent this exception.
But I couldn't find any clear example that explains how to use it.
My method is like:
@Transactional
public void test(Email email, String subject) {
getEmailById(String id);
email.setSubject(subject);
updateEmail(email);
}
where:
Email is a Hibernate entity class (it maps to a table in the database)
getEmailById(String id) is a function that returns an email (this method is not annotated with @Transactional)
updateEmail(email) is a method that updates the email.
Note: I use Hibernate for save, update and so on (for example: session.getCurrentSession().save(email))
The exception:
ERROR 2011-12-21 15:29:24,910 Could not synchronize database state with session [myScheduler-1]
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [email#21]
at org.hibernate.persister.entity.AbstractEntityPersister.check(AbstractEntityPersister.java:1792)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2435)
at org.hibernate.persister.entity.AbstractEntityPersister.updateOrInsert(AbstractEntityPersister.java:2335)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2635)
at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:115)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:168)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:656)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
at $Proxy130.generateEmail(Unknown Source)
at com.admtel.appserver.tasks.EmailSender.run(EmailNotificationSender.java:33)
at com.admtel.appserver.tasks.EmailSender$$FastClassByCGLIB$$ea0d4fc2.invoke(<generated>)
at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:149)
at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:688)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:55)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at org.springframework.aop.framework.adapter.AfterReturningAdviceInterceptor.invoke(AfterReturningAdviceInterceptor.java:50)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at org.springframework.aop.framework.adapter.MethodBeforeAdviceInterceptor.invoke(MethodBeforeAdviceInterceptor.java:50)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:621)
at com.admtel.appserver.tasks.EmailNotificationSender$$EnhancerByCGLIB$$33eb7303.run(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:273)
at org.springframework.scheduling.support.MethodInvokingRunnable.run(MethodInvokingRunnable.java:65)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:51)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
ERROR 2011-12-21 15:29:24,915 [ exception thrown < EmailNotificationSender.run() > exception message Object of class [Email] with identifier [211]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [Email#21] with params ] [myScheduler-1]
org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [Email] with identifier [21]: optimistic locking failed; nested exception is
Pessimistic locking is generally not recommended; it is very costly in terms of database performance. In the code you have posted, a few things are not clear, such as:
Whether your code is being accessed by multiple threads at the same time.
How you are creating the session object (it is not clear whether you are using Spring).
Hibernate Session objects are NOT thread-safe. So if there are multiple threads accessing the same session and trying to update the same database entity, your code can potentially end up in an error situation like this.
So what happens here is that more than one thread tries to update the same entity; one thread succeeds, and when the next thread goes to commit its data, it sees that the row has already been modified and throws StaleObjectStateException.
EDIT:
There is a way to use pessimistic locking in Hibernate. Check out this link. But there seems to be some issue with this mechanism; I came across a bug report against Hibernate (HHH-5275). The scenario mentioned in the bug is as follows:
Two threads are reading the same database record; one of those threads
should use pessimistic locking thereby blocking the other thread. But
both threads can read the database record causing the test to fail.
This is very close to what you are facing. Please try it; if it does not work, the only way I can think of is using native SQL queries, where you can achieve pessimistic locking in a PostgreSQL database with a SELECT ... FOR UPDATE query.
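If you first want to try pessimistic locking at the Hibernate Session level, here is a minimal sketch, assuming it runs inside a single transaction (entity and variable names taken from the question, the rest assumed):
// issues a SELECT ... FOR UPDATE and holds the row lock until the transaction ends
Email email = (Email) session.get(Email.class, emailId,
        new LockOptions(LockMode.PESSIMISTIC_WRITE));
email.setSubject(subject);
// the change is flushed and the lock released when the transaction commits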
We have a queue manager that polls data and gives it to handlers for processing. To avoid picking up the same events again, the queue manager locks the record in the database with a LOCKED state.
void poll() {
record = dao.getLockedEntity();
queue(record);
}
This method wasn't transactional, but dao.getLockedEntity() was transactional with propagation REQUIRED.
All good and live; after a few months in production, it failed with an optimistic locking exception.
After lots of debugging and detailed checking, we found out that someone had changed the code like this:
@Transactional(propagation = Propagation.REQUIRED, readOnly = false)
void poll() {
record = dao.getLockedEntity();
queue(record);
}
So the record was queued even before the transaction in dao.getLockedEntity() was committed (it now shared the transaction of the poll method), and the object was changed underneath by the handlers (different threads) by the time the poll() method's transaction committed.
We fixed the issue and it looks good now. I thought of sharing it because optimistic lock exceptions can be confusing and are difficult to debug.
It doesn't appear that you are actually using the email you retrieve from the database, but rather an older copy that you get as a parameter. Whatever is used for version control on the row has changed between when that previous version was retrieved and when you do the update.
You probably want your code to look more like:
@Transactional
public void test(String id, String subject) {
Email email = getEmailById(id);
email.setSubject(subject);
updateEmail(email);
}
I had this problem on my project.
After I implemented optimistic locking, I got the same exception.
My mistake was that I did not remove the setter of the field that became the @Version field. Because the setter was being called from Java code, the value of the field no longer matched the one generated by the DB, so the version values did not match anymore. At that point any modification of the entity resulted in:
org.hibernate.StaleObjectStateException: Row was updated or deleted by
another transaction (or unsaved-value mapping was incorrect)
I am using an H2 in-memory DB and Hibernate.
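A minimal sketch of how the version field should look (field name hypothetical); the value is managed entirely by Hibernate, so there is no setter:
@Version
private Long version;

public Long getVersion() {
    return version;
}
// no setVersion(...): Hibernate increments this value itself on each update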
This exception is probably caused by optimistic locking (or by a bug in your code). You're probably using it without knowing. And your pseudo-code (which should be replaced by real code to be able to diagnose the problem) is wrong. Hibernate saves all the modifications done to attached entities automatically. You shouldn't ever call update, merge or saveOrUpdate on an attached entity. Just do
Email email = (Email) session.get(Email.class, emailId);
email.setSubject(subject);
No need to call update. Hibernate will flush the changes automatically before committing the transaction.
I had problems with the same error in more than one Spring project.
For me, a general solution was to split my service method so that each INSERT, UPDATE and DELETE action got its own method annotated with @Transactional.
I think this problem relates to Spring's internal transaction management, where the database interactions are executed at the end of the method, and in my opinion this is the point where the exception is triggered.
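A minimal sketch of such a split (assuming an injected Hibernate SessionFactory; method names hypothetical):
@Transactional
public void updateSubject(String id, String subject) {
    Email email = (Email) sessionFactory.getCurrentSession().get(Email.class, id);
    email.setSubject(subject); // flushed automatically when this transaction commits
}

@Transactional
public void deleteEmail(Email email) {
    sessionFactory.getCurrentSession().delete(email);
}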
Update and further solutions.
My problem was that I queried an @Entity class object and changed a value without saving it, because, strictly speaking, it had been updated by another query (outside the session's scope). But since this object was held internally by the session in a map, it now had a different value, and the next request was blocked with this message.
So I created a variable, saved the new values there, and then passed them to the update query, so Hibernate did not register any unsaved changes and the row could be updated.
Hibernate seems to send a lock statement to the database every time an object of the @Entity class is changed, or at least to lock the row locally by primary key.
I had the same problem, and in my case it was caused by a missing and/or incorrect equals implementation for some of the field types in the entity object. At commit time, Hibernate checks ALL entities loaded in the session to see whether they are dirty. If any of the entities is dirty, Hibernate tries to persist it, regardless of the fact that the actual object on which the save operation was requested is not related to the other entities.
Dirty checking is done by comparing every property of the given object (with its equals method), or with UserType.equals if the property has an associated org.hibernate.UserType.
Another thing that surprised me: in my transaction (using the Spring annotation @Transactional) I was dealing with a single entity, yet Hibernate was complaining about some random entity unrelated to the one being saved. What I realized is that there is an outermost transaction created at the REST controller level, so the scope of the session is too big and hence all objects ever loaded as part of request processing get checked for dirtiness.
Hope this helps someone, some day.
Thanks Rags
Just in case someone checked this thread and had the same issue as mine...
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
I'm using NHibernate, and I received the same error during the creation of an object.
I was passing the key manually, and I had also specified a GUID generator in the mapping, so NHibernate produced this exact same error for me.
Once I removed the GUID generator and left the field empty, everything went just fine.
This answer may not help you, but it will help someone like me who just viewed your thread because of the same error.
Check whether the object exists in the DB; if it exists, get the object and refresh it:
if (getEntityManager().contains(instance)) {
getEntityManager().refresh(instance);
return instance;
}
If the above if condition fails, find the object by its id in the DB and do the operation you need; that way the changes will be reflected correctly:
if (....) {
} else if (null != identity) {
E dbInstance = (E) getEntityManager().find(instance.getClass(), identity);
return dbInstance;
}
I experienced the same issue in different contexts in my project, in scenarios such as:
- the object is accessed from various sources (server side and client)
- the same object is accessed from different places with no interval in between
In the first case, when I issued a server call, there was already a call from JS trying to save the same object from another place before it was saved; the JS call was firing two or three times (I think event binding caused the issue). I solved it with
e.preventDefault()
In the second case, I used
object.lock()
I was also receiving such an exception, but the problem was with my entity identifier. I am using UUIDs, and there are some problems in the way Spring works with them. So I just added this line to my entity identifier and it started working:
@Column(columnDefinition = "BINARY(16)")
Here you can find a little bit more information.
This error occurred for me when I was trying to update the same row from two different sessions. I updated a field in one browser while a second browser was open and had already stored the original object in its session. When I attempted the update from this second, stale session, I got the stale object error. To correct this, I now re-fetch the object to be updated from the database before setting the value to be updated, then save it as normal.
I also ran into this error when attempting to update an existing row after creating a new one, and spent ages scratching my head, digging through transaction and version logic, until I realised that I had used the wrong type for one of my primary key columns.
I used LocalDate when I should have been using LocalDateTime – I think this was causing Hibernate to be unable to distinguish entities, leading to this error.
After changing the key to be a LocalDateTime, the error went away. Also, updating individual rows began to work as well – previously it would fail to find a row for updating, and testing this separate issue was actually what led me to my conclusions regarding the primary key mapping.
Don't set an id on the object you are saving, as the id will be auto-generated.
I had the same issue, and for me the case was a bit different: I was using Spring Data JPA, the entity class was annotated with @Entity and @Table, and the ID field had the @Id annotation, but I had missed adding @GeneratedValue even though the DB table had an auto-increment identity column.
The issue appeared when we did a bulk insert of these entities: since there was no generator specified on the ID field, all entities had the default value (0) as the id, and it started throwing this exception:
javax.persistence.OptimisticLockException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) :[dao.entity.OrderAssortmentReportEntity#0]
We added @GeneratedValue(strategy = GenerationType.IDENTITY) along with @Id and it worked.
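A minimal sketch of the corrected id mapping (entity name taken from the exception message above; table and field details assumed):
@Entity
@Table(name = "ORDER_ASSORTMENT_REPORT")
public class OrderAssortmentReportEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // let the DB identity column assign the id
    private Long id;

    // ... other fields
}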
I had the same problem in my Grails project. The bug was that I had overridden the getter method of a collection field, which always returned a new version of the collection to other threads.
class Entity {
List collection
List getCollection() {
return collection.unique()
}
}
The solution was to rename the getter method:
class Entity {
List collection
List getUniqueCollection() {
return collection.unique()
}
}
If you are using Hibernate with Dropwizard, this could happen if you are using an auto-generated id.
Remove @GeneratedValue.
1. Reason for error
There is another situation: erroneous data.
#Column(name = "ID", unique = true, nullable = false, length = 32)
private String id;
One of the rows has a blank or null id. When the value from the front end is saved, you get:
{
"cause": {
"cause": null,
"message": "Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.xxx#]"
},
"message": "Object of class [com.xxx] with identifier []: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.xxx#]"
}
2. Solution
Delete the erroneous data.
This problem happens if you are trying to update an object which is the same instance but was retrieved from a different List/Map and so on, from a different sub-thread.
In order to prevent StaleObjectStateException, add the following to your hbm file:
<timestamp name="lstUpdTstamp" column="LST_UPD_TSTAMP" source="db"/>
First check your imports: when you use Session and Transaction, they should come from org.hibernate. Also remove the @Transactional annotation. Most importantly, if the entity class uses @GeneratedValue(strategy = GenerationType.AUTO) or any other generation strategy, you should not set the id yourself when creating the model/entity object.
The final conclusion: if you want to pass the id field (the PK) yourself, remove @GeneratedValue from the entity class.
Hibernate uses versioning to know that the modified object you have is older than the one currently persisted.
So when you update an entity, don't include the version in the JSON body if it isn't needed; just annotate the version column with @Version.
I had this problem in one of my apps. Now, I know this is an old thread, but here is my solution: by looking at the data in the debugger I figured out that the JVM actually hadn't loaded the entity's fields properly when Hibernate was trying to update the database (that actually happens in a different thread), so I added the keyword volatile to every field of the entities. It has some performance cost, but better that than heavy objects being thrown around...