Entity Manager find after merge - java

We are facing an intermittent bug in our app: when we call EntityManager.find after updating an entity, it sometimes retrieves the outdated version of that entity.
Is this happening because the transaction takes time to commit the changes? And why does it not happen every time?
The code for fetching the entity and updating it based on the primary key:
Order order = em.find(Order.class, orderId);
em.refresh(order);
order.setStatus("OPEN");
em.merge(order);
em.flush();
The code after updating in another method (same order id):
Order order = em.find(Order.class, orderId);
if (order == null) return;
if (!"OPEN".equals(order.getStatus())) {
    throw new Exception(...);
} else {
    // some logic
}
Sometimes the exception is thrown, meaning the order status has not been updated yet.
We are using JTA on WebLogic, with EclipseLink as the JPA implementation.
If anyone has any clue what might be causing this, I would be grateful. If any extra info is required, feel free to ask.

Are the two methods in the same transaction? If not, it is possible that the first transaction has not committed yet when you call the second method.
The second thing to verify is the cache: EclipseLink enables the second-level (shared) cache by default, and a stale shared cache can cause errors like this.
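As a quick check, you can bypass the shared cache for that one lookup. A minimal sketch using the standard JPA 2.0 cache-bypass hint (adapt to your own code; the hint map and this find overload are the only additions):

    import java.util.Collections;
    import java.util.Map;
    import javax.persistence.CacheRetrieveMode;

    // Ask the provider to skip the shared (second-level) cache and hit the database.
    Map<String, Object> hints = Collections.<String, Object>singletonMap(
            "javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
    Order order = em.find(Order.class, orderId, hints);

If the stale reads disappear with this hint, the shared cache (or the first transaction not having committed yet) is the likely culprit; you could then disable caching for the entity with @Cacheable(false) or set <shared-cache-mode> in persistence.xml.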

Related

Should Hibernate Session#merge do an insert when receiving an entity with an ID?

This seems like it would come up often, but I've Googled to no avail.
Suppose you have a Hibernate entity User. You have one User in your DB with id 1.
You have two threads running, A and B. They do the following:
A gets user 1 and closes its Session
B gets user 1 and deletes it
A changes a field on user 1
A gets a new Session and merges user 1
All my testing indicates that the merge attempts to find user 1 in the DB (it can't, obviously), so it inserts a new user with id 2.
My expectation, on the other hand, would be that Hibernate would see that the user being merged was not new (because it has an ID). It would try to find the user in the DB, which would fail, so it would not attempt an insert or an update. Ideally it would throw some kind of concurrency exception.
Note that I am using optimistic locking through @Version, and that does not help matters.
So, questions:
Is my observed Hibernate behaviour the intended behaviour?
If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
If the answer to 2. is yes, why is nobody complaining about it?
Please see the text from the Hibernate documentation below.
Copy the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of and return it as a newly persistent instance.
It clearly states that the state (data) of the object is copied onto the matching row in the database, and if the object is not there, a copy of that data is saved. When Hibernate saves a copy, it always creates a record with a new identifier.
Hibernate's merge works roughly as follows:
It checks the status of the entity (attached or detached to the session) and finds it detached.
It then tries to load the entity by its identifier, but does not find it in the database.
Since the entity is not found, it is treated as transient.
A transient entity always results in a new database record with a new identifier.
Optimistic locking only applies to attached entities. If the entity is detached, Hibernate will load it first, and the version value gets updated.
Locking is used to control concurrency problems, but this is not a concurrency issue.
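A minimal sketch of the scenario this describes, using the Hibernate Session API (the User entity and its fields are hypothetical):

    Session s1 = sessionFactory.openSession();
    User detached = (User) s1.get(User.class, 1L); // load user 1
    s1.close();                                    // user 1 is now detached

    // a concurrent session deletes user 1 here

    detached.setName("changed");                   // modify the detached instance

    Session s2 = sessionFactory.openSession();
    s2.beginTransaction();
    // No row with id 1 exists any more, so merge() treats the instance as
    // transient and issues an INSERT with a freshly generated identifier.
    User merged = (User) s2.merge(detached);
    s2.getTransaction().commit();
    s2.close();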
I've been looking at JSR-220, from which Session#merge claims to get its semantics. The JSR is sadly ambiguous, I have found.
It does say:
Optimistic locking is a technique that is used to insure that updates
to the database data corresponding to the state of an entity are made
only when no intervening transaction has updated that data since the
entity state was read.
If you take "updates" to include general mutation of the database data, including deletes, and not just a SQL UPDATE, which I do, I think you can make an argument that the observed behaviour is not compliant with optimistic locking.
Many people agree, given the comments on my question and the subsequent discovery of this bug.
From a purely practical point of view, the behaviour, compliant or not, could lead to quite a few bugs, because it is contrary to many developers' expectations. There does not seem to be an easy fix for it. In fact, Spring Data JPA seems to ignore this issue completely by blindly using EM#merge. Maybe other JPA providers handle this differently, but with Hibernate this could cause issues.
I'm actually working around this by using Session#update currently. It's really ugly, and requires code to handle the case when you try to update an entity that is detached, and there's a managed copy of it already. But, it won't lead to spurious inserts either.
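A sketch of that workaround with the Hibernate Session API (the catch block is the ugly part mentioned above):

    try {
        // update() never inserts: if the row has been deleted, the later flush
        // fails with a StaleStateException instead of silently creating a new row.
        session.update(detachedUser);
    } catch (org.hibernate.NonUniqueObjectException e) {
        // A managed copy with the same identifier already lives in this session;
        // copy the changes onto that managed instance by hand.
    }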
1. Is my observed Hibernate behaviour the intended behaviour?
The behavior is correct. You are simply performing operations that are not protected against concurrent data modification. :) If you have to split the operation into two sessions, find the object again in the second one and check whether it is still there; throw an exception if not. If it is there, lock it using em.find(entityClass, primaryKey, lockModeType), or use @Version or @Entity(optimisticLock = OptimisticLockType.ALL/DIRTY/VERSION) to protect the object until the end of the transaction.
2. If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
Probably: yes.
3. If the answer to 2. is yes, why is nobody complaining about it?
Because if you protect your operations with pessimistic or optimistic locking, the problem disappears. :)
The problem you are trying to solve is called: non-repeatable read.
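A minimal sketch of that re-check-and-lock step in the second transaction, using the standard JPA API (entity and field names are hypothetical):

    // Re-read the row inside the second transaction and lock it so no
    // concurrent transaction can delete or change it before we commit.
    User current = em.find(User.class, userId, LockModeType.PESSIMISTIC_WRITE);
    if (current == null) {
        throw new IllegalStateException("User " + userId + " was deleted concurrently");
    }
    current.setName(editedName); // apply the user's edits to the managed instance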

JPA not taking update into account within single Transaction

Within a transactional service method, I loop, querying the database for the first 10 entities of type A that match a criterion.
I update each A entity from the list so that it no longer matches the criterion, and call flush() to make sure the changes are applied.
The second call to the query within the loop returns the exact same set of A entities.
Why isn't the flushed change to the entities taken into account?
I'm using JPA 2.0 with Hibernate 4.1.7.
The same process using Hibernate directly (without JPA) seems to work.
I've turned off the second-level cache and the query cache, to no avail.
I'm using a rather plain configuration: JpaTransactionManager, Spring over JPA over Hibernate. The main method is annotated with @Transactional.
The code would be something like this:
do {
    modelList = exportContributionDao.getContributionListToExport(10);
    for (M m : modelList) {
        // export m to a file
        m.setIsToBeExported(false);
        super.flush();
    }
} while (modelList.size() == 10);
With each iteration of the loop, the DAO method always returns the same 10 results; JPA is not taking the updated 'isToBeExported' attribute into account.
I'm not trying to solve a problem; rather, I want to understand why JPA is not behaving as expected here.
I expect this to be a 'classic' problem.
No doubt it would be solved if the transaction were committed at each iteration.
AFAIK, the L1 cache, i.e. the Session with Hibernate as the underlying JPA provider, should be up to date, and the query in the second iteration should take the updated entities into account, even though the changes haven't been committed yet.
So my question is: why isn't that the case? Misconfiguration or known behavior?
Flush does not necessarily commit the changes to the database. What do you want to achieve? From what I understand, you do something like:
Loop over the entities
Within the loop, change an entity
Call flush on the entity
Read the entity back again
You wonder why the data did not change in the database?
If this is correct, why do you re-read the changes instead of just working with the elements you already have? After leaving the transaction, the changes will be made persistent automatically.
This should definitely be working.
This was a configuration problem on our part.
Apologies for the question; it was pretty hard to spot the reason on our side, but I hope the answer will at least be useful to some:
JPA definitely takes into account changes made on entities within a single transaction.
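For reference, a minimal sketch of the expected behaviour, using standalone JPA with a resource-local transaction (the entity and field names are hypothetical):

    em.getTransaction().begin();
    Contribution c = em.find(Contribution.class, 1L);
    c.setIsToBeExported(false);
    em.flush(); // issues the UPDATE, so the query below no longer returns this row

    List<Contribution> remaining = em.createQuery(
            "select c from Contribution c where c.isToBeExported = true",
            Contribution.class)
        .getResultList();
    em.getTransaction().commit();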

Hibernate's StaleObjectStateException still thrown after entity reloaded

I am playing with a standard optimistic concurrency control scenario with extended session / automatic versioning. I have an entity which I load in the first transaction, present to user for modification and save in the second one, both transactions sharing the same session. After the entity is somehow modified session.flush() at the end of the second transaction may throw a StaleObjectStateException in case a version inconsistency is detected meaning that a concurrent transaction has saved a next version of the entity in between.
I want to handle such an error in a most simple way -- just to reload entity losing current changes and continue with editing and saving again. First I tried this:
session.refresh(entity);
but after I modify and attempt to save this refreshed entity, I still get the same StaleObjectStateException, even though it does get refreshed and the version number appears consistent; yes, I know that using refresh() in extended sessions is discouraged, but I don't understand why. Is this behavior related to the reason it is discouraged?
Next I tried the following way to avoid using session.refresh():
session.evict(entity);
entity = session.load(MyEntity.class, id);
but it still results in a StaleObjectStateException being raised when saving the entity, which is in fact not stale.
The only way I managed to cope with the exception is this:
session.clear();
entity = session.load(MyEntity.class, id);
but isn't session.clear() the same as session.evict() pertaining to my concrete entity?
To resume, my questions are:
Why is StaleObjectStateException still thrown on a reloaded entity unless session.clear() is done?
What is the correct way to reload an entity which has already been loaded in the same session and why is refresh() bad? Is there something wrong with this approach to implement conversation?
I'm using Hibernate 4.1.7.Final, with no second-level cache.
My apologies if this is a duplicate question, but I failed to find a thorough explanation...
When you get an exception in a session, that session instance is broken. You cannot use the instance any more; you have to throw it away and create a new one. The exception state is not reset (as you can see, you get the same exception again, though logically this should not happen). This is a general rule for using Hibernate sessions. The reason is that Hibernate does not always know why an exception occurred, and the state of the session instance may be inconsistent.
I don't know why it works after clear(); that may be accidental. It is more prudent to use a new instance.
If you use a StatelessSession, then you don't have this restriction, but stateless sessions have other disadvantages, for example no caching.
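A sketch of discarding the broken session and starting over, using the plain Hibernate API (sessionFactory and the entity name are assumptions):

    try {
        session.flush(); // may throw StaleObjectStateException on a version conflict
    } catch (StaleObjectStateException e) {
        session.close();                        // the session is broken; discard it
        session = sessionFactory.openSession(); // continue with a fresh session
        entity = (MyEntity) session.get(MyEntity.class, id); // reload the current state
    }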

JPA EntityManager: merge() is trying to create a new row in the db - why?

I'm using JPA through the Play Framework.
I'm checking whether a User object is cached, and if so I retrieve it and merge() it so that I can update fields and save the changes later:
user = (User) Cache.get("user-auth-" + sessionAuthToken);
if (user != null) {
    user = user.merge(); // I believe this is the same as EntityManager.merge()
}
However when I do this I get the following error:
PersistenceException occured :
org.hibernate.exception.ConstraintViolationException:
could not insert: [models.User]
...
Caused by: com.mysql.jdbc.exceptions.jdbc4.
MySQLIntegrityConstraintViolationException:
Duplicate entry '1235411688335416533' for key 'authToken'
It seems like it's trying to insert a new user, even though this user should be, and is, already in the database. Why would merge() do that?
Or perhaps I'm going about this entirely the wrong way - advice would be appreciated.
I think it may be a problem with hashCode() and equals(). If they are not implemented correctly, a new entity may be inserted instead of updating an existing one.
I believe your problem is the way Play manages the JPA environment (and transactions).
Once it receives a request, the framework immediately creates the JPA manager and a transaction. From that moment onwards all your Model entities are automatically linked to the manager.
Play facilitates working with this model in two ways:
You have to explicitly indicate that you want to save changes to an object (via save())
The transaction is committed automatically unless there is an exception or you flag it for rollback (JPA.setRollbackOnly())
By calling merge you are trying to add to the manager an entity which is already there, which causes the unique-key exception. If you just load the entity from the cache, you will be able to modify it and call save() once done, and it will work.
See What is the proper way to re-attach detached objects in Hibernate?. Merge tries to write the stale state to the db in order to overwrite possible other concurrent updates. The linked question mentions session.lock(entity, LockMode.NONE); as a possible solution, I haven't tried it though.
If authToken is not a primary key, then perhaps primary key of the User instance being merged doesn't match the primary key of its counterpart in the database, therefore merge() thinks that it's a new User and tries to insert it.
So, check the primary key of the User; perhaps it has been corrupted or lost somehow.
It's an entity definition problem, specifically with regard to primary key / unsaved values.
The entity definition needs to be correct, for Hibernate to recognize it as 'already saved'. For example, having 'null' in a nullable version field can cause Hibernate to disregard any existing ID & regard it as unsaved.
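For example, a sketch of the kind of mapping detail that can trip this up (field names are illustrative):

    @Entity
    public class User {
        @Id
        @GeneratedValue
        private Long id;

        // If a nullable version field is null on the instance being merged,
        // Hibernate can disregard the existing id and treat the entity as unsaved.
        @Version
        private Integer version;

        @Column(unique = true)
        private String authToken;
    }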
This is a Hibernate question, not just JPA. JPA is the interface -- you're having trouble with a specific implementation.

Hibernate calls flush on find- causes not-null error

I have a process which updates a tree in the database and, in doing so, performs reads to check for duplicate entities.
I'm finding that trying to do a criteria.uniqueResult() midway through this process causes the following error:
org.hibernate.PropertyValueException:
not-null property references a null or
transient value
Digging through the stack trace, I see that the uniqueResult() is flushing the session, attempting to perform updates that aren't ready to go to the database yet.
at org.hibernate.engine.Cascade.cascade(Cascade.java:153)
at org.hibernate.event.def.AbstractFlushingEventListener.cascadeOnFlush(AbstractFlushingEventListener.java:154)
at org.hibernate.event.def.AbstractFlushingEventListener.prepareEntityFlushes(AbstractFlushingEventListener.java:145)
at org.hibernate.event.def.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:88)
at org.hibernate.event.def.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:58)
at org.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:996)
at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1589)
at org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:306)
at org.hibernate.impl.CriteriaImpl.uniqueResult(CriteriaImpl.java:328)
at com.inversion.dal.BaseDAO.findUniqueByCriterion(BaseDAO.java:59)
Have I set something up wrong here?
Any help greatly appreciated.
Marty
Hibernate remembers which objects need to be saved. When issuing a select, Hibernate flushes these changes; this ensures that the select returns the correct results.
Setting the flush mode to anything other than FlushMode.AUTO will prevent this behaviour. But the error is in your code, where you pass an incomplete object to Hibernate to persist or update. So the correct solution is to pass the object to Hibernate later, when it is complete.
Turn off auto-flushing on the Session object to fix this exception.
Session s;
// if you're doing transactional work
s.setFlushMode(FlushMode.COMMIT);
// if you want to control flushes directly
s.setFlushMode(FlushMode.MANUAL);
This is not your error however. Something earlier in your code is causing the in memory objects to be in an invalid state which is trying to be persisted to the DB during the autoflush.
I've pulled my hair out many times trying to get to the bottom of these issues; the problem is that it's so difficult to get to the root cause.
If you're nearing your wits' end, you can also execute the supporting queries, such as findAll, in a new session:
Domain.withNewSession { session -> ... }
That has usually circumvented the problem for me in most cases.
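In plain Hibernate/Java terms, the equivalent idea is to run the duplicate check in a separate, short-lived session so it cannot auto-flush your main session (entity and property names are illustrative):

    Session lookup = sessionFactory.openSession();
    try {
        Long count = (Long) lookup.createQuery(
                "select count(e) from MyEntity e where e.name = :name")
            .setParameter("name", name)
            .uniqueResult();
        boolean duplicate = count != null && count > 0;
        // decide whether to proceed with the insert based on 'duplicate'
    } finally {
        lookup.close();
    }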
