I have an application that does:
void deleteObj(Long id) {
    MyObj obj = getObjById(id);
    if (obj == null) {
        throw new CustomException("doesn't exist");
    }
    em.remove(obj); // em is a javax.persistence.EntityManager
}
I haven't explicitly configured optimistic locking with a version field. However, when two requests run in parallel trying to delete the same object, I sometimes get a HibernateOptimisticLockingFailureException and other times the CustomException.
Is it normal to get HibernateOptimisticLockingFailureException without explicitly setting up optimistic locking? Does Hibernate apply some default optimistic locking to detached objects?
How do you handle this HibernateOptimisticLockingFailureException? Retry, or inform the user with a generic message like "server busy"?
First of all, HibernateOptimisticLockingFailureException is a result of Spring's persistence exception translation mechanism. It's thrown in response to StaleStateException, whose javadoc says:
Thrown when a version number or timestamp check failed, indicating that the Session contained stale data (when using long transactions with versioning). Also occurs if we try delete or update a row that does not exist.
Common sense suggests that an optimistic lock exception occurs when a data modification statement returns an unexpected number of affected rows. That can be caused by a mismatch of the version value as well as by the absence of the row altogether.
To make sure the entity was actually removed, you can flush the persistence context with em.flush() right after removing it and catch the exception thrown there (note that it should be a subclass of PersistenceException with StaleStateException as its cause).
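For illustration, here is a minimal sketch of the delete with an explicit flush, reusing the getObjById helper, the em field, and CustomException from the question (the Long id type is an assumption, since the question omits it):

void deleteObj(Long id) {
    MyObj obj = getObjById(id);
    if (obj == null) {
        throw new CustomException("doesn't exist");
    }
    try {
        em.remove(obj);
        em.flush(); // forces the DELETE now; a concurrent delete surfaces here
    } catch (javax.persistence.PersistenceException e) {
        if (e.getCause() instanceof org.hibernate.StaleStateException) {
            // the row was already gone: treat it like the not-found case
            throw new CustomException("doesn't exist");
        }
        throw e;
    }
}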
Related
My code looks something like this:
@Transactional
public void save(Citizen citizen) {
    this.saveCitizen(citizen);
}

private void saveCitizen(Citizen citizen) {
    try {
        citizenRepository.save(citizen);
    } catch (DataIntegrityViolationException exception) {
        // Exception on the line below
        Citizen existingCitizen = citizenRepository.findById(citizen.getId());
        existingCitizen.setAge(50);
    }
}
I'm first trying to save the citizen. If the exception is thrown, it's because the citizen already exists in the database, in which case I want to update the existing row instead. However, in the code above I get another exception when calling citizenRepository.findById(citizen.getId()). Here's a snippet of the terminal:
[26-04-2020 00:35] WARN [o.h.engine.jdbc.spi.SqlExceptionHelper] - SQL Error: 1062, SQLState: 23000
[26-04-2020 00:35] ERROR [o.h.engine.jdbc.spi.SqlExceptionHelper] - Duplicate entry '10-2020-1' for key 'UKe4wgjj1wdqag5qhbcgnxhbvuj'
[26-04-2020 00:35] ERROR [org.hibernate.AssertionFailure] - HHH000099: an assertion failure occurred
(this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session):
org.hibernate.AssertionFailure: null id in dk.rsyd.mature.entities.WeeklyCare entry (don't flush the
Session after an exception occurs)
What is happening here? Is it not possible to continue with a transaction after catching an exception? I have tried adding @Transactional(noRollbackFor = DataIntegrityViolationException.class), but that didn't help.
A different approach could be used: first call findById and check whether it returns a value. If it does, the row already exists and you can carry out the setAge operation; otherwise, you save the citizen. This way you always do a preliminary check and avoid triggering the exception by inserting a duplicate.
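A minimal sketch of that order of operations, reusing the question's repository and assuming findById returns the entity or null as it does there (note the race window is only narrowed, not removed: two concurrent requests can still both miss the lookup and both try to insert):

@Transactional
public void save(Citizen citizen) {
    Citizen existing = citizenRepository.findById(citizen.getId());
    if (existing != null) {
        existing.setAge(50); // as in the question's catch block; flushed as an UPDATE on commit
    } else {
        citizenRepository.save(citizen); // no row yet, so a plain INSERT
    }
}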
If "The Citizen Object" that you submit to citizenReposiory.save() already have the primary key inside. Maybe you can just call saveOrUpdate() simply.
private void saveCitizen(Citizen citizen) {
    citizenRepository.saveOrUpdate(citizen);
}
FYI
Hibernate saveOrUpdate behavior
Hibernate save() and saveOrUpdate() methods
In a recent task, after I created an object I flushed the result to the database. The database table had a unique constraint, meaning that if I tried to flush the same record a second time, I would get a ConstraintViolationException. A sample snippet is shown below:
createEntityAndFlush(result);
sendAsyncRequestToThirdSystem(param);
The code for the createEntityAndFlush:
private T createEntityAndFlush(final T entity) throws ServiceException {
    log.debug("Persisting {}", entity.getClass().getSimpleName());
    getEntityManager().persist(entity);
    getEntityManager().flush();
    return entity;
}
The reason I used flush was that I wanted a ConstraintViolationException to be thrown before the transaction finished, and thus before sendAsyncRequestToThirdSystem was called. But that was not the case: sendAsyncRequestToThirdSystem was still called, and the exception only surfaced afterwards.
To test the code under race conditions, I used a ManagedExecutorService and created two Runnable tasks (Future<?> submit(Runnable task)) to replicate the incoming request.
Eventually the problem was solved by acquiring a lock on a new table for each unique request id, but I would like to know where I went wrong in my first approach (e.g. a wrong use of flush, or the ManagedExecutorService being responsible for the awkward behaviour). Thanks in advance!
The issue is that while flush() does flush the changes to the database, the transaction is still open, and the unique constraint may not be checked until the transaction is committed (this depends on the database and on whether the constraint is deferred; with Postgres, for instance, a deferred constraint is only checked at commit).
So you will need to make sure that createEntityAndFlush(result); runs in its own transaction, for example with @Transactional(propagation = Propagation.REQUIRES_NEW) (or the equivalent, if you are not using Spring), to see whether the unique index is violated.
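A hedged sketch of that arrangement, reusing the question's createEntityAndFlush. Note that with Spring proxies the method must be called on another bean (or via self-injection) for REQUIRES_NEW to take effect; a plain this.createEntityAndFlush(...) call bypasses the proxy:

// imports assumed: org.springframework.transaction.annotation.Transactional,
// org.springframework.transaction.annotation.Propagation
@Transactional(propagation = Propagation.REQUIRES_NEW)
public <T> T createEntityAndFlush(final T entity) {
    getEntityManager().persist(entity);
    // the INSERT runs on flush, and this new transaction commits when the
    // method returns, so a unique-key violation surfaces here, before
    // sendAsyncRequestToThirdSystem runs
    getEntityManager().flush();
    return entity;
}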
I have a method which updates an Item's quantity.
MyEntity has a @Version-annotated version property of type long.
There is an item list endpoint, /items.
There is also an item update endpoint, /item/update (think of product stock: buying an item).
So N concurrent users want to update the same item.
But while updating, it throws org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1.
And at the same time, the /items endpoint cannot return data, or returns to the user with far too much latency (if too many users are updating at that moment, it also gets a timeout exception).
So how can I handle that situation without losing anything? (What would be a good implementation?)
Unfortunately, JPA/Hibernate does not play nicely with batch inserts when there is contention: whenever any exception is thrown in the context of a Hibernate session, you're out of luck.
See section 13.2.3, Exception handling, of https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch13.html#transactions-optimistic
Specifically:
No exception thrown by Hibernate can be treated as recoverable. Ensure
that the Session will be closed by calling close() in a finally block.
In the past I have had to migrate JPA code to QueryDSL or fall back to raw SQL and JdbcTemplate (something like How to do multiple inserts in database using spring JDBC Template batch?).
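As an illustration of the JdbcTemplate fallback, a sketch under the assumption of an item table with name and quantity columns (the table, columns, and the items list and jdbcTemplate fields are all made up for the example):

// imports assumed: java.util.List, java.util.stream.Collectors,
// org.springframework.jdbc.core.JdbcTemplate
List<Object[]> batchArgs = items.stream()
        .map(i -> new Object[] { i.getName(), i.getQuantity() })
        .collect(Collectors.toList());
// plain batched SQL, so one failing row does not poison a Hibernate session
jdbcTemplate.batchUpdate(
        "INSERT INTO item (name, quantity) VALUES (?, ?)", // illustrative schema
        batchArgs);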
I am working on the Play Framework using JPA. I have a field with a unique constraint. After trying to persist an entity with a repeated value, the framework shows an error page like this:
[screenshot: error page]
When I try to catch this exception...
try {
    JPA.em().persist(nArtist);
} catch (Exception e) {
    form.reject("username", "user already exist");
    return badRequest(create_artist.render(form));
}
The page still shows the message... (I already tried with the rollback exception.)
P.S.: that JPA.em() is the only time I call the em.
The call to EntityManager.persist does not guarantee that changes are flushed to the database immediately (which is the point at which constraint violations would surface). If you want to force a flush, call EntityManager.flush right after persist.
Do not use exceptions to handle conditions that can occur normally in your application and, above all, do not catch the generic java.lang.Exception. The exceptions thrown from the persistence layer at persist time can mean many more things than the specific constraint violation you're after.
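Putting both points together, a sketch using the question's own Play helpers (JPA.em(), form, nArtist, and create_artist are taken from the question; the exact cause chain is an assumption to verify against your provider):

try {
    JPA.em().persist(nArtist);
    JPA.em().flush(); // the INSERT runs now, so the violation is raised inside the try
} catch (javax.persistence.PersistenceException e) {
    // with Hibernate this typically wraps org.hibernate.exception.ConstraintViolationException
    form.reject("username", "user already exists");
    return badRequest(create_artist.render(form));
}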
I am using Hibernate in a listener of Spring's DefaultMessageListenerContainer.
When I let the listener run with multiple threads, I often encounter this StaleStateException for a read-only operation:
Query q = session.createQuery("SELECT k FROM Keyword k WHERE k.name = :name")
        .setParameter("name", keywordName);
List<Keyword> kws = q.list();
The exception is thrown at q.list():
optimistic locking failed; nested exception is
org.hibernate.StaleObjectStateException: Row was updated or deleted by
another transaction (or unsaved-value mapping was incorrect)
Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [com.aurora.common.model.Keyword#7550]
at org.hibernate.persister.entity.AbstractEntityPersister.check(AbstractEntityPersister.java:1934)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2578)
at org.hibernate.persister.entity.AbstractEntityPersister.updateOrInsert(AbstractEntityPersister.java:2478)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2805)
at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:114)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:267)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:259)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:64)
at org.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:1175)
at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1251)
at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102)
It is really strange, as a read operation should read a fresh copy from the DB rather than check for a version conflict and throw a StaleObjectStateException.
The name attribute is not the primary key of Keyword object.
UPDATE:
My data access code: I am using Spring's HibernateTransactionManager, which supports thread-bound Hibernate sessions. The Hibernate session is retrieved through the SessionFactory.getCurrentSession() method.
Each transaction wraps one invocation of the listener, by assigning the transaction manager to the MessageListenerContainer:
<jms:listener-container connection-factory="connectionFactory" concurrency="3-3" prefetch="6" transaction-manager="transactionManager">
    <jms:listener destination="${requests}" response-destination="${replies}" ref="chunkHandler" method="handleChunk" />
</jms:listener-container>
UPDATE:
As the suggested answer points out, there might be other operations causing the StaleObjectStateException.
I have tried logging Session.isDirty() for all the operations prior to that one; they are all read operations. Interestingly, the session is actually marked as dirty after the select-keyword-by-name operation. The actual code is something like this:
for (String n : keywordNames) {
    Keyword k = keywordDao.getKeywordByName(n);
}
The session is dirty after the first iteration. (The KeywordDao.getKeywordByName implementation is as above.)
Any ideas? Thanks,
Khue.
I believe the other answers given are not correct. Accessing a row that does not exist does not produce a StaleObjectStateException, and simply querying an entity does not trigger an optimistic lock on that entity either.
Further inspection of the stack trace gives some hints about the cause:
at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102) — this is your call to query.list().
at org.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:1175) — Hibernate determines whether an auto-flush of the session is required, and for some reason it believes it is (probably because you previously updated some Keyword entity in the same session, or other entities; that's something I cannot tell honestly).
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2805) — Hibernate then flushes all the changes in the session to the DB, and the StaleObjectStateException occurs here, meaning an optimistic concurrency check failed. That failure may or may not relate to the Keyword entity, since Hibernate is simply flushing every updated entity in the session to the DB; in your case, however, it is actually related to the Keyword entity (Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [com.ncs.singtel.aurora.common.model.Keyword#7550]).
Please verify the cause of the optimistic concurrency failure. Normally we simply rethrow the optimistic concurrency exception to the caller and let the caller decide whether to invoke the function again, but it all depends on your design.
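If the goal is only to stop this particular query from triggering the auto-flush, a hedged sketch using Hibernate's Query.setFlushMode (this hides the symptom rather than fixing the stale entity, which will still be flushed at commit):

// imports assumed: org.hibernate.Query, org.hibernate.FlushMode
Query q = session.createQuery("SELECT k FROM Keyword k WHERE k.name = :name")
        .setParameter("name", keywordName)
        .setFlushMode(FlushMode.COMMIT); // skip the auto-flush before this query
List<Keyword> kws = q.list();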
The StaleStateException occurs when we try to access a row that doesn't exist. Check what your keyword.getName() returns.
Some other transaction could be updating the Keyword entity at the same time as your read, and your read operation could then end up with stale objects.
This is optimistic locking. You could consider pessimistic locking instead, but it will seriously affect performance.
I would suggest catching StaleObjectStateException and trying the read again.
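A minimal sketch of that retry, assuming the question's keywordDao and a hypothetical MAX_RETRIES bound. Per the documentation quoted earlier, no Hibernate exception is recoverable, so each attempt must run in a fresh session/transaction:

private static final int MAX_RETRIES = 3; // illustrative bound

Keyword getKeywordWithRetry(String name) {
    for (int attempt = 1; ; attempt++) {
        try {
            // assumed to open its own transaction/session on each call
            return keywordDao.getKeywordByName(name);
        } catch (org.hibernate.StaleObjectStateException e) {
            if (attempt >= MAX_RETRIES) {
                throw e; // give up and let the caller decide
            }
        }
    }
}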