Hibernate merge and flush do not persist changes to an object - java

This is a Hibernate/JPA question around updating an object in the database.
I have an object that I created in another transaction using entityManager.persist(object).
This object is returned, and then some values are updated. A call is then made to another method that is annotated with @Transactional. This method calls a merge method in a base class (shown below).
In the following code, rvalue has the modified information up until refresh is called. Once refresh is called, the data is back to the original values.
I would expect that the call to flush() would persist the information to the database.
I tried commenting out the refresh call, thinking the save would not happen until I left the @Transactional method. However, I am still finding that the changes are not persisted.
Any ideas on what may be happening would be greatly appreciated!
@Override
public T merge(T object) {
    T rvalue = entityManager.merge(object);
    entityManager.flush();
    entityManager.refresh(rvalue);
    return rvalue;
}

I figured out what the problem was. It turns out the issue had nothing to do with the entityManager or the @Transactional settings. It was the entity: one of the fields I was trying to update was marked as updatable=false. Once I found and changed that parameter, all was well. I am used to seeing this setting on the @Column annotation on the getter where it is defined, but found that it was overridden with an @AttributeOverride annotation in the derived class.
While this turned out to be an easy and uncomplicated fix, the path to get there had its issues. In all my troubleshooting (stepping through code, turning up the logging for Hibernate and Spring, etc.) I never saw an exception indicating that the update failed because of this constraint.
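For anyone hitting the same wall, here is a minimal sketch of the kind of mapping that causes this (class and field names are made up for illustration): the base class maps the column normally, and a subclass silently overrides it with updatable=false.

import javax.persistence.*;

@MappedSuperclass
public abstract class BaseRecord {
    // updatable by default; generated UPDATE statements include this column
    @Column(name = "status")
    private String status;

    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}

@Entity
@AttributeOverride(name = "status",
        column = @Column(name = "status", updatable = false)) // silently blocks UPDATEs
public class DerivedRecord extends BaseRecord {
    @Id
    private Long id;
}

Hibernate raises no error when you modify such a field; the column is simply left out of the generated UPDATE, which matches the silent behavior described above.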

Merge creates a new instance of your entity, copies the state from the supplied entity, and makes the new copy managed. The instance you pass in will not be managed (any changes you make will not be part of the transaction - unless you call merge again).
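A short sketch of what that means in practice (entity and variable names are illustrative):

// The instance you pass in stays detached; only the returned copy is managed.
MyEntity detached = loadFromEarlierTransaction();  // hypothetical helper
detached.setName("updated");

MyEntity managed = entityManager.merge(detached);  // copies state into a managed instance
managed.setName("tracked");                        // part of the transaction, will be flushed
detached.setName("ignored");                       // NOT tracked; 'detached' stays detached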

I had this problem because I called session.setDefaultReadOnly(true); on the Hibernate Session.


What collateral effects will Propagation.REQUIRES_NEW cause in my app

I was getting an error like this:
a different object with the same identifier value was already associated with the session:
I searched and found that it could be fixed with CascadeType.MERGE, or by refactoring a lot of code to prevent the same database object from becoming two instances within the session.
I can't refactor it.
CascadeType.MERGE worked, but that means I would have to write a lot of code to handle remove operations, since it was CascadeType.ALL before, right?
I got it working by putting
@Transactional(propagation = Propagation.REQUIRES_NEW)
above a method of a class annotated with @Service that queries the database; this was the method throwing the exception I mentioned.
I need help understanding whether this newly annotated method can bring me any future headaches like the one I have now.
It is being called from some cron jobs besides the action I'm fixing.
In fact, by annotating the method with @Transactional(propagation = Propagation.REQUIRES_NEW) you create a separate transaction for the method call.
This means that if an exception is thrown out of the surrounding method, the DB changes made inside the inner transaction (saves etc.) have already been committed and are not rolled back. This can significantly damage the business logic and can be a source of inconsistent data in the DB.
I would reconsider applying Propagation.REQUIRES_NEW.
Merge sounds much more suitable in this case.
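To make the REQUIRES_NEW risk concrete, here is a hedged sketch (class and method names are invented) of how an inner transaction's changes survive a rollback of the surrounding one:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OuterService {
    @Autowired
    private AuditService auditService;

    @Transactional
    public void doWork() {
        auditService.recordAttempt();       // commits in its own, inner transaction
        throw new RuntimeException("boom"); // outer transaction rolls back,
                                            // but the audit row stays in the DB
    }
}

@Service
class AuditService {
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void recordAttempt() {
        // save an audit entity here; this commit is independent of the caller
    }
}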
None of the solutions you listed is acceptable IMHO.
Deferring part of the processing to a new transaction breaks the atomicity (all or nothing) of your unit of work, and changing the cascade type means you would have to manually handle all the operations that were cascaded automatically before.
The right approach is to understand why Hibernate encounters two different object instances with the same identifier. The most common cause is manually persisting (saving) a detached or transient object with a fixed identifier while a managed object with the same identifier already exists in the session.
You could try to manually re-attach (merge / update / saveOrUpdate) the detached object instance causing the problem.
You have to be aware of the entity lifecycle to properly understand what happens here.
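A minimal sketch of that re-attachment (names are illustrative), so the session ends up with a single managed instance per identifier:

// 'detached' was loaded in a previous session and carries a fixed identifier.
// Calling save()/persist() on it could collide with a managed instance that
// already has the same id; merge() resolves the two into one managed copy.
Item detached = itemsFromPreviousRequest.get(0);
detached.setPrice(newPrice);
Item managed = (Item) session.merge(detached);  // continue working with 'managed'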

update() and merge() behave differently when updating an item in a OneToMany collection

I have a class like below:
@Entity
@Table(name="work")
public class Work {

    @Id
    @Column(name="id")
    private String id;

    @OneToMany(orphanRemoval=true, mappedBy="work", cascade=CascadeType.ALL, fetch=FetchType.EAGER)
    private List<PersonRole> personRoleList;
}
As mine is a web application, when I update a personRoleList item (the change comes from the client) and call:
session.update(work); // `work` is in the detached state
it does not update the existing personRoleList item; it actually adds a new one.
Some other people are also having the same problem. REF:
using-saveorupdate-in-hibernate-creates-new-records-instead-of-updating-existi
jpa-onetomany-not-deleting-child
I tried all the suggested solutions, but none of them worked for me.
But then I just tried:
session.merge(work); // replacing session.update(work)
And it works as expected!
This is where I get confused, because I can't find any explanation for this difference in behavior in the case of a OneToMany relationship (or maybe I missed it). I read some threads to understand the differences between update() and merge() and went through the docs. REF:
what-are-the-differences-between-the-different-saving-methods-in-hibernate
differences-among-save-update-saveorupdate-merge-methods-in-session
But it is still not clear what behavior/logic/steps create this difference.
Merge attempts to associate a currently transient object with a persistent object currently under management by the session by 'merging' them into one entity. Its intended use is when you have a detached object and an attached object and wish to resolve them.
In a merge(), Hibernate will read the entity from the database if there isn't already a managed instance in the session. In your example, this will result in Hibernate eagerly loading the collection (due to fetch=FetchType.EAGER). Then when your session ends, Hibernate will check for changes in the collection (due to cascade=CascadeType.ALL) and will perform the appropriate UPDATE in the database.
This differs from the update() scenario because in an update Hibernate always (by default) assumes the object is dirty and schedules an UPDATE. This update is likely what's causing creation of a new element in your collection - Hibernate hasn't looked in the database to bring the collection into session before issuing the UPDATE.
I'd bet you can get the desired behavior out of update() by setting
select-before-update="true"
in your class mapping, or by using the lock() method to re-attach your object to the session before making changes (a sketch follows after the quote below).
From Chapter 9 of Java Persistence with Hibernate
It doesn't matter if the item object is modified before or after it's passed to update(). The important thing here is that the call to update() is reattaching the detached instance to the new Session (and persistence context). Hibernate always treats the object as dirty and schedules an SQL UPDATE, which will be executed during flush. You can see the same unit of work in figure 9.8.
You may be surprised, and probably hoped, that Hibernate could know that you modified the detached item's description (or that Hibernate should know you did not modify anything). However, the new Session and its fresh persistence context don't have this information. Neither does the detached object contain some internal list of all the modifications you've made. Hibernate has to assume that an UPDATE in the database is needed. One way to avoid this UPDATE statement is to configure the class mapping of Item with the select-before-update="true" attribute. Hibernate then determines whether the object is dirty by executing a SELECT statement and comparing the object's current state to the current database state.
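Putting the two suggestions together, a hedged sketch using the asker's Work entity and the classic Hibernate Session API (whether the annotation is available depends on your Hibernate version):

// Option 1: the annotation counterpart of select-before-update="true"
// (org.hibernate.annotations.SelectBeforeUpdate in recent Hibernate versions):
@Entity
@org.hibernate.annotations.SelectBeforeUpdate
@Table(name="work")
public class Work { /* mapping as shown in the question */ }

// Option 2: re-attach the detached instance without scheduling an UPDATE,
// then modify it as a normal managed object:
session.lock(work, LockMode.NONE);                  // re-attach; no version check, no UPDATE
work.getPersonRoleList().get(0).setName("updated"); // hypothetical setter; now tracked normally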

Making sure a transaction commits before another one starts when using different classes in Seam

I have a page with two buttons; clicking each calls a method in one of two different classes, both of which have an entityManager injected into them.
Now when the method save() is called in class 1, an entity is updated with the most recent values. Before the method returns, I call entityManager.flush() so that the changes are flushed to the database.
Immediately after that, if I click the other button, which calls the method advance() of class 2, and load the same entity using entityManager.find(Entity.class, Long.valueOf(entityId)), the fields that were updated in the previous method call show up as null.
Do I need to do any configuration to make sure this does not happen? Or how can I share the EntityManager between these two classes, so that calls after the flush operation work on the updated database?
The transaction is committed. That's not the problem.
From the java doc of the EntityManager.find() method:
If the entity instance is contained in the persistence context, it is returned from there.
It means that find() won't fetch the object from the DB if it is already present in your entityManager.
To refresh the entity, simply call refresh(entity):
MyEntity myEntity = entityManager.find(MyEntity.class, Long.valueOf(entityId));
entityManager.refresh(myEntity);

EntityManager refresh

I have a web application using JPA. The entity manager keeps a bunch of entities, and suddenly I update the database from the other side; I use MySQL, and I change some rows through phpMyAdmin.
How do I tell the entity manager to re-synchronize, e.g. to forget all the entities in its cache?
I know there is the refresh(Object) method, but is there any way to do a refreshAll(), or something with the same result?
I am sure this is an expensive operation, but it has to be done.
entityManager.getEntityManagerFactory().getCache().evictAll()
Refresh is something different since it modifies your object. This line will just empty the cache, so if you fetch objects changed outside the entity manager, it will do an actual database query instead of using the outdated cached value.
I had a similar issue and the evictAll() line above worked for me.
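For completeness, the standard JPA Cache API (since JPA 2.0) also allows selective eviction instead of clearing everything (MyEntity and someId are placeholders):

Cache cache = entityManager.getEntityManagerFactory().getCache();
cache.evict(MyEntity.class);          // evict every cached MyEntity
cache.evict(MyEntity.class, someId);  // or a single instance by id
cache.evictAll();                     // or empty the shared cache entirely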
Alternatively, the @Cache annotation on the entity class worked too, with the benefit of being able to control caching parameters:
@Cache(coordinationType=CacheCoordinationType.INVALIDATE_CHANGED_OBJECTS)
See: http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
If you are using EclipseLink instead of Hibernate, the hint is:
em.createNamedQuery("SomeEntity.SomeNamedQuery")
.setHint(QueryHints.REFRESH, true)
.getResultList();
For those who (like me) tried adding factory.getCache().evictAll(); without success and are using JPA + Hibernate: to force a query to bypass the cache, set the org.hibernate.cacheMode hint to IGNORE. Example:
em.createNamedQuery("SomeEntity.SomeNamedQuery")
.setHint("org.hibernate.cacheMode", "IGNORE")
.getResultList();
cache.evictAll() is not working for me, so to retrieve data pushed from another app, I perform:
em.getTransaction().begin();
em.getTransaction().commit();
After that, my find query retrieves refreshed data. I don't know if it's a very safe solution, but it works properly.
When you read an object into an EntityManager, it becomes part of the persistence context, and the same object will remain in the EntityManager until you either clear() it or get a new EntityManager.
So if you update the database directly, the EntityManager will not see the change unless you call refresh() on the object or clear() the EntityManager. This behavior comes from the persistence context (L1), not the shared cache (L2). If you are also using a shared cache and updating the database directly, then your shared cache will be out of date: you need to refresh() the object, or mark it as invalid so that it is refreshed the next time it is queried.
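A minimal sketch of those two options (em, Order, and orderId are illustrative):

// Option 1: overwrite one managed instance with the current database state
em.refresh(order);

// Option 2: discard the whole persistence context and re-read from the database
em.clear();
Order fresh = em.find(Order.class, orderId);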
The code must follow this order:
DETACH
REFRESH
MERGE
FLUSH

StaleObjectStateException: row was updated or deleted by

I am getting this exception in a controller of a web application based on the Spring Framework, using Hibernate. I have tried many ways to counter this but could not resolve it.
In the controller's method, handleRequestInternal, calls are made to the database, mainly for 'read' operations, unless it is a submit action.
I had been using Spring's Session but moved to getHibernateTemplate(), and the problem still remains.
Basically, it is the second call to the database that throws this exception. That is:
1) getEquipmentsByNumber(number): first an equipment is fetched from the DB based on the 'number'. It has a list of properties, and each property has a list of values. I loop through those values (primitive String objects) to read them into variables.
2) getMaterialById(id): fetches materials based on the id.
I do understand that the second call most probably makes the session "flush", but I am only 'reading' objects, so why does the second call throw the stale object state exception on the Equipment property if nothing has changed?
I cannot clear the cache after the call, since that causes LazyInitializationExceptions on objects that I pass to the view.
I have read this:
https://forums.hibernate.org/viewtopic.php?f=1&t=996355&start=0
but could not solve the problem based on the suggestions provided.
How can I solve this issue? Any ideas and thoughts are appreciated.
UPDATE:
What I just tested: in the function getEquipmentsByNumber(), after reading the variables from the list of properties, I call getHibernateTemplate().flush(); and now the exception is thrown on this line rather than on the call that fetches the material (that is, getMaterialById(id)).
UPDATE:
Before explicitly calling flush, I am evicting the object from the session cache so that no stale object remains in the cache.
getHibernateTemplate().evict(equipment);
getHibernateTemplate().flush();
OK, so now the problem has moved to the next fetch from the DB after I did this. I suppose I would have to mark the methods as synchronized and evict the objects as soon as I am finished reading their contents! That doesn't sound very good.
UPDATE:
Made the handleRequestInternal method synchronized. The error disappeared. Of course, this is not the best solution, but what to do!
I also tried closing the current session and opening a new one in handleRequestInternal, but that caused other parts of the app to stop working properly. Using ThreadLocal did not work either.
You're mis-using Hibernate in some way that causes it to think you're updating or deleting objects from the database.
That's why calling flush() is throwing an exception.
One possibility: you're incorrectly "sharing" a Session or entities via member field(s) of your servlet or controller. This is the main reason 'synchronized' would change your error symptoms. Short solution: don't ever do this. Sessions and entities shouldn't and don't work this way; each request should get processed independently.
Another possibility: unsaved-value defaults to 0 for "int" PK fields. You may be able to type these as "Integer" instead, if you really want to use 0 as a valid PK value (a sketch follows below).
Third suggestion: use the Hibernate Session explicitly and learn to write simple, correct code that works; then load the Java source for the Hibernate/Spring libraries so you can read and understand what these libraries are actually doing for you.
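To illustrate the second suggestion: a brand-new object with a primitive int id carries the value 0, which the default unsaved-value check can misread, whereas a boxed Integer can be null instead (a hedged sketch, field name illustrative):

@Id
@GeneratedValue
private Integer id;   // null => transient (unsaved); non-null => detached/persistent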
I also have been struggling with this exception, but when it continued to recur even when I put a lock on the object (and in a test environment, where I knew I was the only process touching the object), I decided to give the parenthetical in the stack trace its due consideration.
org.hibernate.StaleObjectStateException: Row was updated or deleted by
another transaction (or unsaved-value mapping was incorrect):
[com.rc.model.mexp.MerchantAccount#59132]
In our case it turned out that the mapping was wrong; we had type="text" in the mapping for one field that was a mediumtext type in the database, and it seems that Hibernate really hates that, at least under certain circumstances. We removed the type specification altogether from the mapping for this field, and the problem was resolved.
Now the weird thing is that in our production environment, with the supposedly problematic mapping in place, we do NOT get this exception. Does anybody have any idea why this might be? We are using the same version of MySQL - "5.0.22-log" (I don't know what the "-log" means) - in dev and production envs.
Here are 3 possibilities (as I do not know exactly which kind of Hibernate session handling you are using). Add one after another and test:
Use a bi-directional mapping with inverse=true between the parent object and the child object, so a change in the parent or the child is propagated to the other end of the relation properly.
Add support for optimistic locking using a timestamp or version column (a minimal sketch follows after this list).
Use a join query to fetch the whole object graph [parent + children] together, to avoid the second call altogether.
Lastly, if and only if nothing works:
Load the parent again by Id (you have that already) and populate modified data then update.
Life will be good! :)
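For the optimistic-locking suggestion above, a minimal @Version sketch (field name is illustrative):

@Version
@Column(name = "version")
private Long version;   // Hibernate compares and increments this column on every UPDATE;
                        // a concurrent modification then surfaces as StaleObjectStateException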
This problem was something that I had experienced and was quite frustrating, although there has to be something a little odd going on in your DAO/Hibernate calls, because if you're doing a lookup by ID there is no reason to get a stale state, since that is just a simple lookup for an object.
First, make sure all your methods are annotated with @Transactional(required=true) // you'll have to look up the exact syntax
However, this exception is usually thrown when you try to make changes to an object that has been detached from the session it was retrieved from. The solution is often not simple and would require more code to be posted so we can see exactly what is going on; my general suggestion would be to create a @Service that performs these kinds of operations within a single transaction.
