I was getting an error like this:
a different object with the same identifier value was already associated with the session
I've searched and found that it could be fixed with CascadeType.MERGE, or by refactoring a lot of code to prevent the same database object from becoming two instances within the session.
I can't refactor it.
CascadeType.MERGE worked, but that means I would have to write a lot of code to handle remove operations myself, since the cascade was CascadeType.ALL before, right?
I got it working by putting
@Transactional(propagation = Propagation.REQUIRES_NEW)
above a method of a class annotated with @Service. That method queries the database and was the one throwing the exception I mentioned.
I need help understanding whether this newly annotated method can bring me any future headaches, like the one I have now.
It is being called from some cron jobs besides the action I'm fixing.
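For context, here is a minimal sketch of the setup described above, with hypothetical class and method names (EquipmentService, loadEquipment are not from the original code):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EquipmentService { // hypothetical name

    // Each call gets its own transaction; the caller's transaction,
    // if any, is suspended while this method runs.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void loadEquipment(String number) { // hypothetical signature
        // ... query the database here ...
    }
}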
By annotating the method with @Transactional(propagation = Propagation.REQUIRES_NEW), you do in fact create a separate transaction for the method call.
It means the DB changes made inside the method (saving etc.) are committed on their own; if an exception is thrown outside the method afterwards, they are not rolled back. This could significantly damage the business logic and could be a source of inconsistent data in the DB.
I would reconsider applying Propagation.REQUIRES_NEW.
Merge sounds much more suitable in this case.
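To illustrate the risk, here is a hedged sketch (the OrderService/AuditService names are hypothetical): the inner REQUIRES_NEW transaction commits on its own, so its changes survive even though the outer transaction rolls back.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService { // hypothetical

    private final AuditService auditService;

    public OrderService(AuditService auditService) {
        this.auditService = auditService;
    }

    @Transactional
    public void placeOrder() {
        auditService.recordAttempt();             // commits in its own transaction
        throw new IllegalStateException("boom");  // outer transaction rolls back,
                                                  // but the audit row stays committed
    }
}

@Service
class AuditService { // hypothetical

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void recordAttempt() {
        // ... INSERT an audit row; committed when this method returns ...
    }
}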
None of the solutions you listed are acceptable, IMHO.
Deferring part of the processing to a new transaction will break the atomicity (all or nothing) of your unit of work, and changing the cascade type will imply that you manually handle all the operations that were automatically cascaded before.
The right approach is to understand why Hibernate encounters two different object instances with the same identifier. The most common cause is that you manually persist (save) a detached or transient object with a fixed identifier while a managed object with the same identifier already exists in the session.
You could try to manually re-attach (merge / update / saveOrUpdate) the detached object instance causing the problem.
You have to be aware of the entity lifecycle to properly understand what happens here.
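As a minimal sketch of that re-attachment, assuming a hypothetical User entity:

import org.hibernate.Session;

public class ReattachExample { // hypothetical

    // 'detached' was loaded in an earlier session and modified afterwards.
    public static User reattach(Session session, User detached) {
        // merge() copies the detached state onto the managed instance with
        // the same identifier (loading it if necessary) and returns it.
        User managed = (User) session.merge(detached);
        // Keep working with the returned managed copy, not the detached original.
        return managed;
    }
}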
This seems like it would come up often, but I've Googled to no avail.
Suppose you have a Hibernate entity User. You have one User in your DB with id 1.
You have two threads running, A and B. They do the following:
A gets user 1 and closes its Session
B gets user 1 and deletes it
A changes a field on user 1
A gets a new Session and merges user 1
All my testing indicates that the merge attempts to find user 1 in the DB (it can't, obviously), so it inserts a new user with id 2.
My expectation, on the other hand, would be that Hibernate would see that the user being merged was not new (because it has an ID). It would try to find the user in the DB, which would fail, so it would not attempt an insert or an update. Ideally it would throw some kind of concurrency exception.
Note that I am using optimistic locking through @Version, and that does not help matters.
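For reference, a hedged sketch of the sequence above as test code, assuming a hypothetical User entity with a generated id:

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class MergeAfterDeleteRepro { // hypothetical

    static void reproduce(SessionFactory sessionFactory) {
        // "Thread A": load user 1, then close the session (the entity is now detached).
        Session a = sessionFactory.openSession();
        User user = (User) a.get(User.class, 1L);
        a.close();

        // "Thread B": delete user 1 in a separate session.
        Session b = sessionFactory.openSession();
        b.beginTransaction();
        b.delete(b.get(User.class, 1L));
        b.getTransaction().commit();
        b.close();

        // "Thread A": modify the detached user and merge it in a new session.
        user.setName("changed");
        Session c = sessionFactory.openSession();
        c.beginTransaction();
        // Observed behavior: merge cannot load id 1, treats the entity as
        // transient, and INSERTs a new row with a freshly generated id.
        User merged = (User) c.merge(user);
        c.getTransaction().commit();
        c.close();
    }
}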
So, questions:
Is my observed Hibernate behaviour the intended behaviour?
If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
If the answer to 2. is yes, why is nobody complaining about it?
Please see the text from the Hibernate documentation below.
Copy the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of and return it as a newly persistent instance.
It clearly states that merge copies the state (data) of the object onto the persistent object with the same identifier; if the object is not there, it saves a copy of that data. And when Hibernate saves a copy, it always creates a record with a new identifier.
Hibernate's merge function works roughly as follows:
It checks the status (attached or detached to the session) of the entity and finds it detached.
Then it tries to load the entity by its identifier, but does not find it in the database.
As the entity is not found, it treats the entity as transient.
A transient entity always creates a new database record with a new identifier.
Locking is only applied to attached entities. If the entity is detached, Hibernate will always load it first, and the version value gets updated on that load.
Locking is used to control concurrency problems, but what you are seeing is not a concurrency issue.
I've been looking at JSR-220, from which Session#merge claims to get its semantics. The JSR is sadly ambiguous, I have found.
It does say:
Optimistic locking is a technique that is used to insure that updates to the database data corresponding to the state of an entity are made only when no intervening transaction has updated that data since the entity state was read.
If you take "updates" to include general mutation of the database data, including deletes, and not just a SQL UPDATE, which I do, I think you can make an argument that the observed behaviour is not compliant with optimistic locking.
Many people agree, given the comments on my question and the subsequent discovery of this bug.
From a purely practical point of view, the behaviour, compliant or not, could lead to quite a few bugs, because it is contrary to many developers' expectations. There does not seem to be an easy fix for it. In fact, Spring Data JPA seems to ignore this issue completely by blindly using EM#merge. Maybe other JPA providers handle this differently, but with Hibernate this could cause issues.
I'm actually working around this by using Session#update currently. It's really ugly and requires code to handle the case where you try to update a detached entity while a managed copy of it is already in the session. But it won't lead to spurious inserts, either.
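A hedged sketch of that workaround (the User entity and helper are hypothetical): update() throws NonUniqueObjectException when a managed copy with the same id is already in the session, so that case has to be handled explicitly.

import org.hibernate.NonUniqueObjectException;
import org.hibernate.Session;

public class UpdateWorkaround { // hypothetical

    static void saveChanges(Session session, User detached) {
        try {
            // Unlike merge(), update() will not INSERT a new row for a deleted
            // entity; the flush fails with StaleObjectStateException instead.
            session.update(detached);
        } catch (NonUniqueObjectException e) {
            // A managed instance with the same id is already in the session:
            // evict it first, then re-attach the detached one.
            session.evict(session.get(User.class, detached.getId()));
            session.update(detached);
        }
    }
}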
1. Is my observed Hibernate behaviour the intended behaviour?
The behavior is correct. You are just trying to do operations that are not protected against concurrent data modification. :) If you have to split the operation into two sessions, just find the object again before updating and check whether it is still there; throw an exception if it is not. If it is there, lock it by using em.find(entityClass, primaryKey, lockModeType), or use @Version or @Entity(optimisticLock = OptimisticLockType.ALL/DIRTY/VERSION) to protect the object until the end of the transaction.
2. If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
Probably: yes.
3. If the answer to 2 is yes, why is nobody complaining about it?
Because if you protect your operations using pessimistic or optimistic locking, the problem will disappear. :)
The problem you are trying to solve is called: non-repeatable read.
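For illustration, a hedged sketch of the pessimistic variant using the standard JPA API (the User entity and method are hypothetical):

import javax.persistence.EntityManager;
import javax.persistence.EntityNotFoundException;
import javax.persistence.LockModeType;

public class LockBeforeUpdate { // hypothetical

    static void updateSafely(EntityManager em, Long id, String newName) {
        // Re-find the row and take a pessimistic write lock for the rest of
        // the transaction; find() returns null if the row has been deleted.
        User user = em.find(User.class, id, LockModeType.PESSIMISTIC_WRITE);
        if (user == null) {
            throw new EntityNotFoundException("User " + id + " was deleted concurrently");
        }
        user.setName(newName); // safe until the transaction commits
    }
}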
This is a Hibernate/JPA question around updating an object in the database.
I have an object that I created in another transaction using entityManager.persist(object).
This object is returned, and then some values are updated. A call is then made to another method that is annotated with @Transactional. This method calls a merge method in a base class (shown below).
In the following code, rvalue has the modified information up until refresh is called. Once refresh is called, the data is back to the original values.
I would expect that the call to flush() would persist the information to the database.
I tried commenting out the refresh call, thinking the save does not happen until I leave the method wrapped in @Transactional. However, I am still finding that the changes are not being persisted.
Any ideas on what may be happening would be greatly appreciated!
@Override
public T merge(T object) {
    T rvalue = entityManager.merge(object);  // returns the managed copy
    entityManager.flush();                   // push pending changes to the DB
    entityManager.refresh(rvalue);           // re-read the entity's state from the DB
    return rvalue;
}
I figured out what the problem was. It turns out the issue did not have to do with the entityManager or the @Transactional settings. It was the entity. One of the fields I was trying to update was marked as updatable = false. Once I found and changed that parameter, all was well. I am used to seeing this setting on the @Column annotation on the getter where it is defined, but found that it was overridden with an @AttributeOverride annotation in the derived class.
While this turned out to be an easy and uncomplicated fix, the path to get there had its issues. In all my troubleshooting (stepping through code, turning up logs for Hibernate and Spring, etc.) I never ran into any exception indicating that the update failed due to this constraint.
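To illustrate the trap, here is a hedged sketch with hypothetical class names (BaseRecord/DerivedRecord); the override in the subclass silently makes the column read-only:

import javax.persistence.AttributeOverride;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;

@MappedSuperclass
class BaseRecord { // hypothetical
    @Id
    Long id;

    @Column(name = "status") // writable as declared here...
    String status;
}

@Entity
@AttributeOverride(name = "status",
        column = @Column(name = "status", updatable = false)) // ...made read-only here
class DerivedRecord extends BaseRecord { // hypothetical
    // UPDATEs to "status" are silently skipped for this entity;
    // no exception is raised when merge/flush tries to change it.
}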
Merge creates a new instance of your entity, copies the state from the supplied entity, and makes the new copy managed. The instance you pass in will not be managed (any changes you make will not be part of the transaction - unless you call merge again).
Source is Here
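A small sketch of that distinction, assuming a hypothetical User entity:

import javax.persistence.EntityManager;

public class MergeCopyExample { // hypothetical

    static void demo(EntityManager em, User detached) {
        User managed = em.merge(detached); // managed copy; state copied from 'detached'

        detached.setName("ignored");   // NOT tracked: 'detached' stays unmanaged
        managed.setName("persisted");  // tracked: flushed with the transaction
    }
}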
I had this problem because I called session.setDefaultReadOnly(true); on the Hibernate Session.
I am playing with a standard optimistic concurrency control scenario with an extended session / automatic versioning. I have an entity which I load in the first transaction, present to the user for modification, and save in the second one, both transactions sharing the same session. After the entity has been modified somehow, session.flush() at the end of the second transaction may throw a StaleObjectStateException if a version inconsistency is detected, meaning that a concurrent transaction has saved a newer version of the entity in between.
I want to handle such an error in the simplest way possible: just reload the entity, losing the current changes, and continue with editing and saving again. First I tried this:
session.refresh(entity);
but after I modify and attempt to save this refreshed entity, I still get the same StaleObjectStateException, even though the entity does get refreshed and the version number appears consistent. Yes, I know that using refresh() in extended sessions is discouraged, but I don't understand why. Is this behavior related to the reason it is discouraged?
Next I tried the following way to avoid using session.refresh():
session.evict(entity);
entity = session.load(MyEntity.class, id);
but it still results in a StaleObjectStateException being raised when saving the entity, which is not in fact stale.
The only way I managed to cope with the exception is this:
session.clear();
entity = session.load(MyEntity.class, id);
but isn't session.clear() equivalent to session.evict() as far as my concrete entity is concerned?
To summarize, my questions are:
Why is StaleObjectStateException still thrown for a reloaded entity unless session.clear() is called?
What is the correct way to reload an entity which has already been loaded in the same session, and why is refresh() bad? Is there something wrong with this approach to implementing a conversation?
I'm using Hibernate 4.1.7.Final, with no second-level cache.
My apologies if my question is a repeat, but I have failed to find a thorough explanation...
When you get an exception in a session, that session instance is broken. You cannot use that instance any more; you have to throw it away and create a new instance. The exception state is not reset (as you can see, you get the same exception again, though logically this should not happen). This is a general rule for using Hibernate sessions. The reason for this is that Hibernate does not always know why an exception occurred, and the state of the session instance may be inconsistent.
I don't know why it works after clear(). That may be accidental. It is more prudent to use a new instance.
If you use a StatelessSession, then you don't have this restriction, but stateless sessions have other disadvantages, for example no caching.
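A hedged sketch of that advice, reusing the MyEntity name from the question; after a StaleObjectStateException, discard the whole session and reload in a fresh one:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.StaleObjectStateException;

public class StaleRecovery { // hypothetical

    static MyEntity saveOrReload(SessionFactory factory, Session session,
                                 MyEntity entity, Long id) {
        try {
            session.update(entity);
            session.flush();
            return entity;
        } catch (StaleObjectStateException e) {
            // The session is now in an undefined state: throw it away entirely
            // instead of trying to evict()/refresh() within it.
            session.close();
            Session fresh = factory.openSession();
            // Reload the current state; the user's changes are lost.
            return (MyEntity) fresh.get(MyEntity.class, id);
        }
    }
}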
The question title basically says it all. Is it possible in JPA/Hibernate to gracefully prevent the deletion of an entity from the database? What I would like is to flag the entity as "hidden" instead of actually removing it.
I also want the cascade semantics to be preserved, such that if I try to delete an entity that owns a collection of some other entity, the owning entity and every entity in its collection get marked as hidden without any extra work on my part, beyond implementing the @PreRemove handler that prevents the deletion and marks the entity as hidden.
Is this possible, or do I need to figure out some other approach?
Is it possible in JPA/Hibernate to gracefully prevent the deletion of an entity from the database?
Yes, as long as you avoid using EntityManager.remove(entity), this is possible. If you do use EntityManager.remove(), the JPA provider will flag the object for deletion using a corresponding SQL DELETE statement, meaning that an elegant solution will not be possible once you have flagged the object for deletion.
In Hibernate, you can achieve this using the @SQLDelete and @Where annotations (sketched below). However, this will not play well with JPA, as EntityManager.find() is known to ignore the filter specified in the @Where annotation.
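A hedged sketch of the Hibernate-specific approach, with a hypothetical Document entity and a hypothetical "hidden" column:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.SQLDelete;
import org.hibernate.annotations.Where;

@Entity
// Replace the SQL DELETE with an UPDATE that sets the flag...
@SQLDelete(sql = "UPDATE Document SET hidden = true WHERE id = ?")
// ...and filter hidden rows out of Hibernate-issued queries.
@Where(clause = "hidden = false")
class Document { // hypothetical
    @Id
    Long id;

    boolean hidden;
}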
A JPA-only solution would therefore involve adding a flag, i.e. a column, to the entity classes to distinguish logically deleted entities in the database from "alive" ones. You will need to use appropriate queries (JPQL and native) to ensure that logically deleted entities do not appear in result sets. You can use the @PreUpdate and @PrePersist annotations to hook onto the entity lifecycle events and ensure the flag is updated on persist and update events. Again, you will need to make sure you never invoke the EntityManager.remove method.
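A hedged sketch of that portable variant, reusing the hypothetical Document entity from above; note that every query has to filter on the flag itself:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

public class SoftDeleteDao { // hypothetical

    private final EntityManager em;

    SoftDeleteDao(EntityManager em) { this.em = em; }

    // "Delete" by flipping the flag; never call em.remove().
    void hide(Document doc) {
        doc.hidden = true; // flushed as an UPDATE
    }

    // Every query must exclude logically deleted rows explicitly.
    List<Document> findAlive() {
        TypedQuery<Document> q = em.createQuery(
                "SELECT d FROM Document d WHERE d.hidden = false", Document.class);
        return q.getResultList();
    }
}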
I would have suggested using the @PreRemove annotation to hook onto the lifecycle event that is triggered on removal of entities, but using an entity listener to prevent deletion is fraught with trouble, for the reasons stated below:
If you need to prevent the SQL DELETE from occurring in a logical sense, you will need to persist the object in the same transaction to recreate it*. The only problem is that it is not a good design decision to reference the EntityManager in an EntityListener, and by inference to invoke EntityManager.persist in the listener. The rationale is quite simple: you might end up obtaining a different EntityManager reference in the EntityListener, and this will only result in vague and confusing behavior in your application.
If you need to prevent the SQL DELETE in the transaction itself, then you must throw an exception in your EntityListener. This usually ends up rolling back the transaction (especially if the exception is a RuntimeException or an application exception declared to cause rollbacks) and does not offer any benefit, for the entire transaction will be rolled back.
If you have the option of using EclipseLink instead of Hibernate, then it appears that an elegant solution is possible if you define an appropriate DescriptorCustomizer or use the AdditionalCriteria annotation. Both of these appear to work well with the EntityManager.remove and EntityManager.find invocations. However, you might still need to write your JPQL or native queries to account for the logically deleted entities.
* This is outlined in the JPA Wikibook on the topic of cascading Persist:
if you remove an object to have it deleted, if you then call persist on the object, it will resurrect the object, and it will become persistent again. This may be desired if it is intentional, but the JPA spec also requires this behavior for cascade persist. So if you remove an object, but forget to remove a reference to it from a cascade persist relationship, the remove will be ignored.
I am getting this exception in a controller of a web application based on the Spring framework, using Hibernate. I have tried many ways to counter it but could not resolve it.
In the controller's method, handleRequestInternal, calls are made to the database, mainly for 'read', unless it's a submit action.
I have been using Spring's Session but moved to getHibernateTemplate(), and the problem still remains.
Basically, it is the second call to the database that throws this exception. That is:
1) getEquipmentsByNumber(number): first, an equipment is fetched from the DB based on the 'number'; it has a list of properties, and each property has a list of values. I loop through those values (primitive objects, Strings) to read them into variables.
2) getMaterialById(id): fetches materials based on id.
I do understand that the second call is most probably making the session 'flush', but I am only 'reading' objects, so why does the second call throw the stale object state exception on the Equipment property if nothing has changed?
I cannot clear the cache after the call, since doing so causes lazy-initialization exceptions on objects that I pass to the view.
I have read this:
https://forums.hibernate.org/viewtopic.php?f=1&t=996355&start=0
but could not solve the problem based on the suggestions provided.
How can I solve this issue? Any ideas and thoughts are appreciated.
UPDATE:
What I just tested: in the function getEquipmentsByNumber(), after reading the variables from the list of properties, I call getHibernateTemplate().flush(); and now the exception occurs on this line rather than on the call that fetches material (that is, getMaterialById(id)).
UPDATE:
Before explicitly calling flush, I am removing the object from session cache so that no stale object remains in the cache.
getHibernateTemplate().evict(equipment);
getHibernateTemplate().flush();
OK, so now the problem has moved to the next fetch from the DB after I did this. I suppose I would have to mark the methods as synchronized and evict the objects as soon as I have finished reading their contents! That doesn't sound very good.
UPDATE:
Made the handleRequestInternal method "synchronized", and the error disappeared. Of course this is not the best solution, but what to do!
I tried closing the current session and opening a new one in handleRequestInternal, but that caused other parts of the app to stop working properly. I also tried using ThreadLocal; that did not work either.
You're mis-using Hibernate in some way that causes it to think you're updating or deleting objects from the database.
That's why calling flush() is throwing an exception.
One possibility: you're incorrectly "sharing" a Session or entities via member field(s) of your servlet or controller. This is the main reason 'synchronized' would change your error symptoms. Short solution: don't ever do this. Sessions and entities shouldn't, and don't, work this way; each request should be processed independently.
Another possibility: unsaved-value defaults to 0 for "int" PK fields. You may be able to type these as "Integer" instead, if you really want to use 0 as a valid PK value.
Third suggestion: use the Hibernate Session explicitly, learn to write simple correct code that works, and then read the Java source for the Hibernate/Spring libraries so you can understand what these libraries are actually doing for you.
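A hedged sketch of the second point, with a hypothetical Material entity; with a primitive int id, a brand-new instance carries id 0, which the default unsaved-value logic can misread:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
class Material { // hypothetical

    // With 'int', a new instance has id == 0, and Hibernate's unsaved-value
    // check may mistake it for an existing row (or a saved row with id 0 for
    // a new one). With 'Integer', null unambiguously means "not yet saved".
    @Id
    @GeneratedValue
    Integer id;

    String name;
}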
I also have been struggling with this exception, but when it continued to recur even when I put a lock on the object (and in a test environment, where I knew I was the only process touching the object), I decided to give the parenthetical in the stack trace its due consideration.
org.hibernate.StaleObjectStateException: Row was updated or deleted by
another transaction (or unsaved-value mapping was incorrect):
[com.rc.model.mexp.MerchantAccount#59132]
In our case it turned out that the mapping was wrong; we had type="text" in the mapping for one field that was of type mediumtext in the database, and it seems that Hibernate really hates that, at least under certain circumstances. We removed the type specification from the mapping for this field altogether, and the problem was resolved.
Now the weird thing is that in our production environment, with the supposedly problematic mapping in place, we do NOT get this exception. Does anybody have any idea why this might be? We are using the same version of MySQL - "5.0.22-log" (I don't know what the "-log" means) - in dev and production envs.
Here are 3 possibilities (as I do not know exactly which kind of Hibernate session handling you are using). Add one after another and test:
Use a bi-directional mapping with inverse="true" between the parent object and child object, so a change in the parent or child propagates properly to the other end of the relation (see the sketch at the end of this answer).
Add support for optimistic locking using a timestamp or version column.
Use a join query to fetch the whole object graph [parent + children] together, to avoid the second call altogether.
Lastly, if and only if nothing works:
Load the parent again by id (you have it already), populate the modified data, then update.
Life will be good! :)
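For the first possibility, a hedged sketch with hypothetical Parent/Child entities; in JPA annotations, mappedBy on the parent side plays the role of inverse="true" in XML mappings:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
class Parent { // hypothetical
    @Id
    Long id;

    // The child side owns the foreign key; changes propagate consistently.
    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    List<Child> children = new ArrayList<>();
}

@Entity
class Child { // hypothetical
    @Id
    Long id;

    @ManyToOne
    Parent parent;
}

For the third possibility, a JPQL query along the lines of "SELECT p FROM Parent p JOIN FETCH p.children WHERE p.id = :id" would load the whole graph in one round trip.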
This problem was something I had experienced too, and it was quite frustrating. That said, there has to be something a little odd going on in your DAO/Hibernate calls, because if you're doing a lookup by ID there is no reason to get a stale state; that is just a simple lookup of an object.
First, make sure all your methods are annotated with @Transactional (you'll have to look up the exact attributes you need).
However, this exception is usually thrown when you try to make changes to an object that has been detached from the session it was retrieved from. The solution is often not simple and would require more code to be posted so we can see exactly what is going on; my general suggestion would be to create a @Service that performs these kinds of operations within a single transaction.