I'm using JBoss 7.1.1 and the default implementation of Hibernate that comes with it (4.0.1).
I have a message-driven bean that, within a single transaction, creates an entity and persists it using the entity manager. After that (still in the same transaction) I find the newly created entity and try to use the entity manager to lock it with PESSIMISTIC_WRITE, but I get an OptimisticLockException. Its root cause is as follows:
Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [some.package.name.EntityName#aaa1a1a0-d568-11e1-9f99-d5a00a0a12b6]
at org.hibernate.dialect.lock.PessimisticWriteSelectLockingStrategy.lock(PessimisticWriteSelectLockingStrategy.java:95)
at org.hibernate.persister.entity.AbstractEntityPersister.lock(AbstractEntityPersister.java:1785)
at org.hibernate.event.internal.AbstractLockUpgradeEventListener.upgradeLock(AbstractLockUpgradeEventListener.java:99)
at org.hibernate.event.internal.DefaultLockEventListener.onLock(DefaultLockEventListener.java:85)
at org.hibernate.internal.SessionImpl.fireLock(SessionImpl.java:693)
at org.hibernate.internal.SessionImpl.fireLock(SessionImpl.java:686)
at org.hibernate.internal.SessionImpl.access$1100(SessionImpl.java:160)
at org.hibernate.internal.SessionImpl$LockRequestImpl.lock(SessionImpl.java:2164)
at org.hibernate.ejb.AbstractEntityManagerImpl.lock(AbstractEntityManagerImpl.java:1093)
... 202 more
Any ideas why I can't lock the newly created entity? Also, how can I make it available for locking right after it is created? Using the EM's merge method doesn't seem to help ...
My understanding of your question is that within your message-driven bean's transaction you're doing the following:
1. Create entityA
2. Persist entityA
3. entityB = find entityA
4. lock(entityB, PESSIMISTIC_WRITE)
and step 4 is throwing an exception.
I think Hibernate may not have flushed the persist between steps 2 and 3, so at that point A (and B) have version 0. Hibernate then flushes the persist of A at the start of the lock(), which means B now has a stale version.
You could try flushing the persist before the find (i.e. entityManager.flush() after step 2), as in the sketch below.
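A minimal sketch of that ordering (entity and field names are placeholders, not from your code):
import javax.persistence.LockModeType;

// inside the MDB's container-managed transaction
EntityA entityA = new EntityA();
entityManager.persist(entityA);
entityManager.flush(); // push the INSERT so the row (and its version) exists in the database
EntityA entityB = entityManager.find(EntityA.class, entityA.getId());
entityManager.lock(entityB, LockModeType.PESSIMISTIC_WRITE);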
Or you should be able to skip the find entirely, since entityManager.persist(entityA) makes entityA a managed object, so the following sequence (sketched in code after the list) may work:
1. Create entityA
2. Persist entityA
3. lock(entityA, PESSIMISTIC_WRITE)
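In code (again a sketch with placeholder names):
EntityA entityA = new EntityA();
entityManager.persist(entityA); // entityA is now managed
entityManager.lock(entityA, LockModeType.PESSIMISTIC_WRITE); // lock the managed instance directly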
Related
The JPA optimistic locking doesn't throw an OptimisticLockException/StaleStateException where I would expect it.
Here is my setup:
I am using Spring Boot with Spring Data Envers, so my repositories are versioned; this should not influence the optimistic locking behaviour. In my entities the version property (Long) is annotated with @Version. My application consists of 3 layers:
persistence-layer
business-layer
transfer-layer
To map objects between the layers I use MapStruct.
When a request is received by the controller in the transfer-layer, the JSON payload is mapped to a business-layer object so that business rules can be applied to it. The version is mapped through the whole lifecycle.
When I reach the persistence-layer, I use the ID of the object to find the corresponding entity in my database. My save method looks like this:
@Transactional
public Entity saveEntity(BOEntity boEntity) {
    // findById returns an Optional in Spring Data JPA
    Entity e = entityRepository.findById(boEntity.getId())
            .orElseThrow(EntityNotFoundException::new);
    entityMapper.updateEntity(boEntity, e);
    return entityRepository.save(e); // return was missing; the method declares Entity
}
When the same entity is loaded by two clients (e.g. two browser tabs), each of them has the same version of the entity. Changes are made and saved in both clients.
The version is contained in the boEntity object and mapped into the entity.
Due to the findById call the entity is managed. The EntityManager will try to merge the entity and succeeds in doing so for both requests.
The state of the entity from the first request is merged (with version 1). Hibernate calls the executeUpdate method and writes to the database; the version is increased to 2.
Now the second request delivers the entity in its former state, with version 1. The save method is called and the entity is retrieved from the persistence context. It has version 2, which is overwritten by the boEntity object with version 1.
When the EntityManager now merges the entity, no exception is thrown.
My expectation was that the second request would fail because of the old version.
Isn't it possible to overwrite the version of the entity?
I have already read a lot of blog entries, but couldn't find any hint on how to achieve this.
The default JPA optimistic locking mechanism only works when a managed object is flushed after the underlying row was changed in the meantime; the version attribute of a managed entity is maintained by the provider, and setting it manually is simply ignored. What you want therefore has to be coded manually. Just add the check to your saveEntity method:
@Transactional
public Entity saveEntity(BOEntity boEntity) {
    Entity e = entityRepository.findById(boEntity.getId())
            .orElseThrow(EntityNotFoundException::new);
    // compare with equals(): the versions are Longs, so != would compare references
    if (!boEntity.getVersion().equals(e.getVersion())) {
        throw new OptimisticLockException();
    }
    entityMapper.updateEntity(boEntity, e);
    return entityRepository.save(e);
}
While trying to update an entity, I first retrieve it from the database, then map the TO from the frontend onto it using Orika Mapper.
Then I try to retrieve some data unrelated to this entity using a JpaRepository and its findAllByOrderByCode method. During this operation I get a strange error saying: "An unexpected exception occurred: detached entity passed to persist".
The error refers not to a basic field of the entity but to an object from one of the entity's collections.
To summarize:
I have entity A, which has a bidirectional one-to-many mapping to entity B:
class A {
List<B> b;
}
Then I want to update the whole of A with an object from the frontend, which I map using Orika Mapper.
And while trying to fetch some unrelated data I get the error.
I found that Orika by default makes a deep copy of collections. So after
entityA = customsClearanceOrderRepository.findById(requestTo.getId());
the list of B entities inside entityA, which is tracked and included in the persistence context, is replaced with a deep copy; the copies have different identities, which means they are no longer tracked by Hibernate.
So I tried to map those collections myself, just updating the fields instead of creating new objects, and then the problem was gone (a sketch of this is below).
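A sketch of that manual in-place mapping (the DTO type and field names are assumptions, not the real ones):
// Copy fields onto the already-managed B instances instead of letting
// the mapper replace the collection with deep copies.
for (BDto dto : requestTo.getBs()) {
    for (B managed : entityA.getB()) {
        if (managed.getId().equals(dto.getId())) {
            managed.setSomeField(dto.getSomeField()); // update in place, identity preserved
        }
    }
}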
Everything would be fine, except that when I removed the line
List<SthTo> all = someRefersToDb.findAllByOrderByCode(); // error appears here
the problem also disappeared, even though I was again using Orika with its deep copy. I understand why that works: saveAndFlush in fact performs EntityManager.merge(entity) during the update, and the changed identities of the entities are not a problem for merge (since it copies the state of an untracked object into the persistence context).
entityA = entityARepository.findById(requestTo.getId());
entityAMapper.map(requestTo, entityA);
List<SthTo> all = someRefersToDb.findAllByOrderByCode(); // error appears here
EntityA entityASaved = entityARepository.saveAndFlush(entityA);
So I want to know what's going on in someRefersToDb.findAllByOrderByCode();
Is there some kind of check on the state of entityA?
Everything is left at the defaults; I mean there is no magical @Transactional(propagation = Propagation.REQUIRES_NEW) or anything like that.
I found out why!
Hibernate, while running someRefersToDb.findAllByOrderByCode(), in fact also calls session.flush(), which is used to synchronize session data with the database. And since Orika changed the identities of the entities, they are no longer part of the persistence context and the synchronization fails.
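This is the standard JPA auto-flush behaviour; a minimal illustration reusing the snippets above (FlushModeType.AUTO is the default, shown explicitly here):
import javax.persistence.FlushModeType;

entityManager.setFlushMode(FlushModeType.AUTO); // the default: queries flush pending changes first
entityAMapper.map(requestTo, entityA);          // collection elements replaced by deep copies
someRefersToDb.findAllByOrderByCode();          // auto-flush runs here and fails on the detached copies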
I have a critical section of code where I need to read and lock an entity by id with pessimistic lock.
This section of code looks like this right now:
MyEntity entity = entityManager.find(MyEntity.class, key);
entityManager.refresh(entity, LockModeType.PESSIMISTIC_WRITE);
It works OK, but as I understand it, when the entity is not in Hibernate's cache we end up making two round trips to the database: the first to find the entity by id, and another to refresh and lock it.
Is it possible to use only one transaction in such scenario?
I would imagine something like:
boolean skipCache = true;
MyEntity entity = entityManager.find(MyEntity.class, key,
LockModeType.PESSIMISTIC_WRITE, skipCache);
But there is no such parameter as skipCache. Is there another approach to read an entity by id directly from the database using the EntityManager?
UPDATE:
This query will hit the first-level cache if the entity already exists there. Thus it may potentially return outdated data, which is why it isn't suitable for critical sections where any read should be blocked:
MyEntity entity = entityManager.find(MyEntity.class, key, LockModeType.PESSIMISTIC_WRITE);
The question is about skipping the cache and not about locking.
I've just found the method getReference in the EntityManager, which returns an instance whose state may be lazily fetched. As the documentation says:
Get an instance, whose state may be lazily fetched. If the requested
instance does not exist in the database, the EntityNotFoundException
is thrown when the instance state is first accessed. (The persistence
provider runtime is permitted to throw the EntityNotFoundException
when getReference is called.) The application should not expect that
the instance state will be available upon detachment, unless it was
accessed by the application while the entity manager was open.
As a possible solution for finding and locking an up-to-date entity by id in one query, we can use the following code:
MyEntity entity = entityManager.getReference(MyEntity.class, key);
entityManager.refresh(entity, LockModeType.PESSIMISTIC_WRITE);
This call creates a proxy (no database query); the refresh then loads and locks the entity in a single select.
Why not directly pass the requested lock mode along with the query itself?
MyEntity entity = entityManager.find(MyEntity.class, key, LockModeType.PESSIMISTIC_WRITE);
As far as I understand, this does exactly what you want. (documentation)
You can also set an EntityManager property just before you call find to avoid hitting the cache:
Specifying the Cache Mode
entityManager.setProperty("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
MyEntity entity = entityManager.find(MyEntity.class, key);
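For completeness, JPA 2.0 also lets you pass such properties per call via an overload of find, so the store mode is scoped to a single lookup (a sketch reusing MyEntity and key from above):
import java.util.Collections;
import java.util.Map;
import javax.persistence.CacheStoreMode;
import javax.persistence.LockModeType;

Map<String, Object> props =
        Collections.singletonMap("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
// the four-argument find applies the properties to this call only
MyEntity entity = entityManager.find(MyEntity.class, key, LockModeType.PESSIMISTIC_WRITE, props);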
The code is shared on GitHub. The situation looks like this: I have an open transaction. I create an ExampleEntity and an ExampleChildEntity, connected by a bidirectional reference. The steps I take:
exampleEntity.setChild(child);
saveAndFlush(); // insert goes to db
exampleEntity.setChild(null);
saveAndFlush(); // delete goes to db
exampleEntity.setChild(child);
saveAndFlush(); // insert goes to db
exampleEntity.setChild(null);
saveAndFlush(); // no delete here
I tagged hibernate because I think it's a Hibernate problem (I'm using Spring Data JPA): when I switch the JPA provider to EclipseLink everything works. Am I doing something wrong, or is it a bug? I have also tried setting the child's reference to the parent to null, but that does not work either. Example project:
https://github.com/pustypawel/delete-twice-bug
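For reference, the steps above as plain code (the repository and setter names follow the example project and are assumptions):
ExampleEntity entity = new ExampleEntity();
ExampleChildEntity child = new ExampleChildEntity();
child.setParent(entity); // bidirectional reference

entity.setChild(child);
repository.saveAndFlush(entity); // insert goes to db

entity.setChild(null);
repository.saveAndFlush(entity); // delete goes to db (orphanRemoval)

entity.setChild(child);
repository.saveAndFlush(entity); // insert goes to db again

entity.setChild(null);
repository.saveAndFlush(entity); // expected delete, but none is issued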
The entity I'm trying to save is a parent with children, i.e. the parent and children are saved at the same time. With normal execution (and in debug mode it happens every time) I get a HibernateOptimisticLockingFailureException thrown during session flushing. The testing is on my local machine, single-threaded, and nobody else is changing the entity while I'm saving it.
We are using the following:
MySQL v5.5.x
Hibernate 4.3.11
Java 8
Spring 4.1.0
Key points:
The relationship between the parent and child is a bidirectional one-to-many.
We use optimistic locking, with the version column being a timestamp created by MySQL either during insert or during update. On the version field we specify @Generated(GenerationTime.ALWAYS) to ensure that the version value is obtained from the database automatically (avoiding the time-precision mismatch between Java and MySQL). A sketch of this mapping is shown after the child mapping below.
While saving a new entity (id = 0), I can see in the logs that the entity is inserted into the database, and I can also see the child entities being inserted (via the Hibernate logs). During this process I can also see that a select is done to get the version value from the database.
Soon after the entities are inserted and the session is being flushed, dirty checking is done on the collection and I see a message in the log that the collection is unreferenced. Straight after this, I see an update statement on the parent entity's table, and this is where the problem occurs: the version value used in the update statement differs from what is in the database, and the HibernateOptimisticLockingFailureException is thrown.
Hibernate Code
getHibernateTemplate().saveOrUpdate(parentEntity);
// a break point here and wait for 1 sec before executing
// always get the HibernateOptimisticLockingFailureException
getHibernateTemplate().flush();
Parent mapping
@Access(AccessType.FIELD)
@OneToMany(mappedBy="servicePoint", fetch=FetchType.EAGER, cascade={CascadeType.ALL}, orphanRemoval=true, targetEntity=ServicingMeter.class)
private List<ServicingMeter> meters = new ArrayList<ServicingMeter>();
Child mapping
@Access(AccessType.FIELD)
@ManyToOne(fetch=FetchType.EAGER, targetEntity=ServicePoint.class)
@JoinColumn(name="service_point_id", nullable=false)
private ServicePoint servicePoint;
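For reference, the version mapping described in the key points would look roughly like this (a sketch; the column name and type are assumptions):
@Version
@Generated(GenerationTime.ALWAYS)
@Column(name = "last_updated", insertable = false, updatable = false)
private Timestamp version; // value produced by MySQL on insert/update; Hibernate re-reads it after each write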
Questions:
1. Why is there an update statement on the parent table?
2. How can I prevent this update from happening?
3. Is there something wrong with the way my one-to-many mapping is set up?
The annotated log file can be found here