Within a transactional service method, I loop on querying a database to get the first 10 instances of entity A that match a criterion.
I update each A entity from the list so that it no longer matches the criterion, and call flush() to make sure the changes are made.
The second call to the query within the loop returns the exact same set of A entities.
Why isn't the flushed change on the entities taken into account?
I'm using JPA 2.0 with Hibernate 4.1.7.
The same process using Hibernate alone (the native API, without JPA) seems to work.
I've turned off the second level cache and the query cache, to no avail.
I'm using a rather plain configuration: JpaTransactionManager, Spring over JPA over Hibernate. The main method is annotated with @Transactional.
The code would be something like this:
do {
    modelList = exportContributionDao.getContributionListToExport(10);
    for (M m : modelList) {
        // export m to a file
        m.setToBeExported(false); // clear the flag so m no longer matches the criterion
        super.flush();
    }
} while (modelList.size() == 10);
With each iteration of the loop, the DAO method always returns the same 10 results; JPA does not take into account the updated 'isToBeExported' attribute.
I'm not trying to solve a problem; rather, I want to understand why JPA is not behaving as expected here.
I expect this to be a 'classic' problem.
No doubt it would be solved if the transaction were committed at each iteration.
AFAIK, the L1 cache, i.e. the Session with Hibernate as the underlying JPA provider, should be up to date, and the second iteration's query should take into account the updated entities, even though the changes haven't been committed yet.
So my question is: why isn't that the case? Misconfiguration or known behavior?
Flush does not necessarily commit the changes to the database. What do you want to achieve? From what I understand, you do something like:
Loop over the entities
Within the loop, change an entity
Call flush()
Read the entity back again
And you wonder why the data did not change in the database?
If this is correct, why do you re-read the entities at all instead of just working with the elements you already have? After leaving the transaction, the changes will automagically be made persistent.
This should definitely be working.
This was a configuration problem on our part.
Apologies for the question; it was pretty hard to spot the reason on our side, but I hope the answer will at least be useful to some:
JPA definitely takes into account changes made on entities within a single transaction.
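For reference, a minimal sketch (entity and field names are assumed, not taken from the original code) of the expected behavior: within one transaction, a JPQL query issued after flush() sees the updated state.

    // Assumed entity 'Contribution' with a boolean 'toBeExported' field.
    Contribution c = em.createQuery(
            "select c from Contribution c where c.toBeExported = true",
            Contribution.class)
        .setMaxResults(1)
        .getSingleResult();

    c.setToBeExported(false);
    em.flush(); // pushes the UPDATE to the database within the open transaction

    long remaining = em.createQuery(
            "select count(c) from Contribution c where c.toBeExported = true",
            Long.class)
        .getSingleResult(); // c is no longer counted: the flushed change is visible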
Related
I am using Hibernate Envers for auditing.
It works fine, but today I realized that it doesn't if I create entities in a for-loop.
After enabling SQL logging I figured out that the rev tables are not updated after each iteration. Somehow Hibernate collects all changes and fires the audit commands at the end of the request. How can I make Hibernate perform auditing after each iteration in my for-loop?
What I already tried:
for (...) {
    Obj a = new Obj();
    objRepository.save(a);
    entityManager.flush();
    entityManager.clear();
}
As @gtosto points out, Hibernate Envers operates on a transaction boundary basis and therefore audit records won't be flushed and persisted until commit.
One way to synchronize this would be to manually control the transaction boundary yourself as part of the for-loop, so that you persist small buckets of the list and commit each one.
The downside here is that this can be performance intensive, particularly if the list of objects you're trying to persist is quite large.
The jira issue HHH-9622 outlines a request to make the AuditProcess flushable; however, there are consequences to introducing such behavior that need to be considered.
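A hedged sketch of that bucket-commit approach using Spring's TransactionTemplate (objRepository comes from the question; the object source, injected transactionManager, and bucket size are illustrative):

    // Each bucket is saved in its own transaction, so Envers writes its
    // revision rows at every commit instead of once at the end.
    TransactionTemplate tx = new TransactionTemplate(transactionManager);
    List<Obj> objects = buildObjects(); // hypothetical source of objects
    int bucketSize = 50; // arbitrary; tune for your workload
    for (int i = 0; i < objects.size(); i += bucketSize) {
        List<Obj> bucket = objects.subList(i, Math.min(i + bucketSize, objects.size()));
        tx.execute(status -> {
            bucket.forEach(objRepository::save);
            return null; // the transaction commits when execute() returns
        });
    }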
In fact the problem was that I added the @Transactional annotation to the respective class. Remove it, and Hibernate will fire the audit commands as soon as you call objRepository.save(a). No need for the entity manager.
This seems like it would come up often, but I've Googled to no avail.
Suppose you have a Hibernate entity User. You have one User in your DB with id 1.
You have two threads running, A and B. They do the following:
A gets user 1 and closes its Session
B gets user 1 and deletes it
A changes a field on user 1
A gets a new Session and merges user 1
All my testing indicates that the merge attempts to find user 1 in the DB (it can't, obviously), so it inserts a new user with id 2.
My expectation, on the other hand, would be that Hibernate would see that the user being merged was not new (because it has an ID). It would try to find the user in the DB, which would fail, so it would not attempt an insert or an update. Ideally it would throw some kind of concurrency exception.
Note that I am using optimistic locking through @Version, and that does not help matters.
So, questions:
1. Is my observed Hibernate behaviour the intended behaviour?
2. If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
3. If the answer to 2 is yes, why is nobody complaining about it?
Please see the text from the Hibernate documentation below.
Copy the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of and return it as a newly persistent instance.
It clearly states that merge copies the state (data) of the object onto the persistent object in the database; if the object is not there, it saves a copy of that data. And when we say "save a copy", Hibernate always creates a record with a new identifier.
Hibernate's merge function works roughly as follows:
It checks the status (attached or detached to the session) of the entity and finds it detached.
Then it tries to load the entity by its identifier, but does not find it in the database.
As the entity is not found, it treats the entity as transient.
A transient entity always creates a new database record with a new identifier.
Locking is only applied to attached entities. If the entity is detached, Hibernate will always load it first, and the version value gets updated.
Locking is used to control concurrency problems; it is not itself the concurrency issue here.
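To make those steps concrete, a hedged sketch of the scenario from the question (the helper variables are placeholders):

    // A read user 1 earlier and closed its session; B then deleted the row.
    User detached = userLoadedInEarlierSession; // id = 1, row now gone
    detached.setName("changed");                // A edits the detached copy

    Session s = sessionFactory.openSession();
    s.beginTransaction();
    // merge() tries to load id 1, finds nothing, treats the entity as
    // transient, and saves a copy: a new row with a new id is inserted.
    User managed = (User) s.merge(detached);
    s.getTransaction().commit();
    // managed.getId() is a brand-new identifier, not 1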
I've been looking at JSR-220, from which Session#merge claims to get its semantics. The JSR is sadly ambiguous, I have found.
It does say:
Optimistic locking is a technique that is used to insure that updates to the database data corresponding to the state of an entity are made only when no intervening transaction has updated that data since the entity state was read.
If you take "updates" to include general mutation of the database data, including deletes, and not just a SQL UPDATE, which I do, I think you can make an argument that the observed behaviour is not compliant with optimistic locking.
Many people agree, given the comments on my question and the subsequent discovery of this bug.
From a purely practical point of view, the behaviour, compliant or not, could lead to quite a few bugs, because it is contrary to many developers' expectations. There does not seem to be an easy fix for it. In fact, Spring Data JPA seems to ignore this issue completely by blindly using EM#merge. Maybe other JPA providers handle this differently, but with Hibernate this could cause issues.
I'm actually working around this by using Session#update currently. It's really ugly, and requires code to handle the case where you try to update a detached entity while a managed copy of it already exists. But it won't lead to spurious inserts either.
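A hedged sketch of that Session#update workaround (the exception handling is the "ugly" part mentioned above):

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    try {
        // update() schedules an UPDATE for the existing row and never
        // falls back to INSERT, unlike merge().
        session.update(detachedUser);
        tx.commit(); // fails with StaleStateException if the row was deleted
    } catch (NonUniqueObjectException e) {
        // a managed copy with the same id is already in the session:
        // the case that needs the extra handling code mentioned above
        tx.rollback();
    } finally {
        session.close();
    }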
1. Is my observed Hibernate behaviour the intended behaviour?
The behavior is correct. You are just trying to do operations that are not protected against concurrent data modification :) If you have to split the operation into two sessions, just find the object again before the update and check whether it is still there; throw an exception if not. If it is there, lock it using em.find(entityClass, primaryKey, lockModeType), or use @Version or @Entity(optimisticLock=OptimisticLockType.ALL/DIRTY/VERSION) to protect the object until the end of the transaction.
2. If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
Probably: yes.
3. If the answer to 2 is yes, why is nobody complaining about it?
Because if you protect your operations using pessimistic or optimistic locking, the problem will disappear :)
The problem you are running into is called a non-repeatable read.
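For illustration, a sketch of the "find again and lock" suggestion (entity class and id are placeholders):

    // Re-load the row with a pessimistic lock; if another transaction
    // deleted it, find() returns null and we can fail fast.
    User user = em.find(User.class, userId, LockModeType.PESSIMISTIC_WRITE);
    if (user == null) {
        throw new EntityNotFoundException("User " + userId + " was deleted concurrently");
    }
    user.setName(newName); // safe: the row stays locked until the transaction ends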
Background
I have a java/spring system where transactions are managed manually via a custom HandlerInterceptor. That is to say:
at the beginning of every request a transaction is opened (an unfortunate part of the system is that any request might result in a write to the db)
an EntityManager instance joins the transaction
the entity manager is used to load entities which are modified. The EntityManager tracks all changes
at the end of every request the EntityManager is flushed and committed
Yes, this is not ideal, but I did not create this system, and it's simple enough to allow us to work within its confines; I'm not looking to change it without good reason.
I am not used to commit-all-tracked-entities-on-flush behavior and so have been doing something like:
//change entity
if(ovalValidator.isValid(entity))
em.persist(entity);
I need to fix this to work with my new understanding, and switching the above to this seems to work:
//change entity
if(!ovalValidator.isValid(entity))
em.detach(entity);
My question
It is my understanding that this just removes the entity from the flush queue even if it IS marked as dirty. Is this correct? Is there a better way to achieve what I am trying to (don't save changes to that entity)? Is there anything I need to look out for if I'm doing this?
detach() removes the entity from the session (change tracking, lazy loading, ...); it does what you want. You could also implement an interceptor that removes the dirty mark from invalid entities, but I think your solution works just as well.
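For that interceptor alternative, a hedged sketch using Hibernate's findDirty() callback (the Validatable interface is hypothetical):

    import java.io.Serializable;
    import org.hibernate.EmptyInterceptor;
    import org.hibernate.type.Type;

    public class SkipInvalidEntitiesInterceptor extends EmptyInterceptor {
        @Override
        public int[] findDirty(Object entity, Serializable id, Object[] currentState,
                               Object[] previousState, String[] propertyNames, Type[] types) {
            // Returning an empty array tells Hibernate the entity has no
            // dirty properties, so it is skipped at flush time.
            if (entity instanceof Validatable && !((Validatable) entity).isValid()) {
                return new int[0];
            }
            return null; // null falls back to Hibernate's default dirty checking
        }
    }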
I have a web application using JPA. The entity manager keeps a bunch of entities, and suddenly I update the database from the other side: I use MySQL, and I change some rows via phpMyAdmin.
How do I tell the entity manager to re-synchronize, e.g. to forget all the entities in its cache?
I know there is the refresh(Object) method, but is there any way to do a refreshAll() or something with the same result?
This is surely an expensive operation, but so be it if it has to be done.
entityManager.getEntityManagerFactory().getCache().evictAll();
Refresh is something different since it modifies your object. This line will just empty the cache, so if you fetch objects changed outside the entity manager, it will do an actual database query instead of using the outdated cached value.
I had a similar issue and the evictAll() line above worked for me.
Alternatively, the @Cache annotation on the entity class worked too, with the benefit of being able to control caching parameters:
@Cache(coordinationType=CacheCoordinationType.INVALIDATE_CHANGED_OBJECTS)
See: http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
If you are using EclipseLink instead of Hibernate the hint is:
em.createNamedQuery("SomeEntity.SomeNamedQuery")
.setHint(QueryHints.REFRESH, true)
.getResultList();
Well, for some people (like me) who tried adding factory.getCache().evictAll(); and found it didn't work, and who are using JPA + Hibernate: to refresh a query, add the hint org.hibernate.cacheMode set to IGNORE. Example:
em.createNamedQuery("SomeEntity.SomeNamedQuery")
.setHint("org.hibernate.cacheMode", "IGNORE")
.getResultList();
cache.evictAll() is not working for me. So to retrieve data pushed from another app, I perform:
em.getTransaction().begin();
em.getTransaction().commit();
After that, my find query retrieves refreshed data. I don't know if it's a very safe solution, but it works properly.
When you read an object into an EntityManager, it becomes part of the persistence context, and the same object will remain in the EntityManager until you either clear() it or get a new EntityManager.
So if you update the database directly, the EntityManager will not see the change unless you call refresh() on the object or clear() the EntityManager. This is persistence-context (L1) behavior, independent of the shared cache (L2). If you are also using a shared cache and updating the database directly, then your shared cache will be out of date; you need to refresh() the object, or mark it as invalid so that it is refreshed the next time it is queried.
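A short sketch of the two options described (entity name and id are illustrative):

    // Option 1: refresh a single managed instance from the database
    em.refresh(someEntity); // overwrites its in-memory state with the current row

    // Option 2: drop the whole persistence context and re-query
    em.clear(); // all managed entities become detached
    SomeEntity fresh = em.find(SomeEntity.class, someId); // reads again (DB, or shared cache if enabled)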
The code must follow a sequence like this:
DETACH
REFRESH
MERGE
FLUSH
We are using Hibernate and Spring MVC with the OpenSessionInView filter.
Here is the problem we are running into (pseudo-code):
transaction 1
    load object foo
transaction 1 end
update foo's properties (not calling session.save or session.update, only foo's setters)
validate foo (using Hibernate Validator)
if validation fails:
    go back to edit screen
    transaction 2 (read only)
        load form backing objects from db
    transaction 2 end
    go to view
else:
    transaction 3
        session.update(foo)
    transaction 3 end
The problem we have is when the validation fails:
foo is marked "dirty" in the Hibernate session (since we use OpenSessionInView, we only have one session throughout the HTTP request). When we load the form backing objects (like a list of entities via an HQL query), Hibernate checks for dirty objects in the session before performing the query; it sees that foo is dirty and flushes it, so when transaction 2 is committed the updates are written to the database.
The problem is that even though transaction 2 is read-only, and even though foo wasn't updated in transaction 2, Hibernate has no knowledge of which object was updated in which transaction, and so does not flush only the objects from that transaction.
Any suggestions? Has somebody run into a similar problem before?
Update: this post sheds some more light on the problem: http://brian.pontarelli.com/2007/04/03/hibernate-pitfalls-part-2/
You can run a get on foo to put it into the hibernate session, and then replace it with the object you created elsewhere. But for this to work, you have to know all the ids for your objects so that the ids will look correct to Hibernate.
There are a couple of options here. First, you don't actually need transaction 2: since the session is open, you could just load the backing objects from the db, thus avoiding the dirty check on the session. The other option is to evict foo from the session after it is retrieved, and later use session.merge() to reattach it when you want your changes to be stored, as in the sketch below.
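A minimal sketch of that evict-then-merge option (foo's type and the form-binding step are illustrative):

    Foo foo = (Foo) session.get(Foo.class, fooId);
    session.evict(foo);          // detach: edits below can't be auto-flushed
    bindFormValues(foo);         // hypothetical: apply the user's edits
    if (validator.isValid(foo)) {
        session.merge(foo);      // reattach and schedule the update
    }
    // if validation failed, foo stays detached and nothing is written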
With Hibernate it is important to understand what exactly is going on under the covers. At every commit boundary it will attempt to flush all changes to objects in the current session, regardless of whether the changes were made in the current transaction or in any transaction at all, for that matter. This is why you don't actually need to call session.update() for any object that is already in the session.
Hope this helps
There is a design issue here. Do you think an ORM is a transparent abstraction of your datastore, or do you think it's a set of data manipulation libraries? I would say that Hibernate is the former. Its whole reason for existing is to remove the distinction between your in-memory object state and your database state. It does provide low-level mechanisms to allow you to pry the two apart and deal with them separately, but by doing so you're removing a lot of Hibernate's value.
So very simply - Hibernate = your database. If you don't want something persisted, don't change your persistent objects.
Validate your data before you update your domain objects. By all means validate domain objects as well, but that's a last line of defense. If you do get a validation error on a persistent object, don't swallow the exception. Unless you prevent it, Hibernate will do the right thing, which is to close the session there and then.
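A sketch of that validate-first approach (the form binding and validator names are illustrative):

    // Validate the incoming form before touching the managed entity, so the
    // session never holds invalid dirty state that a query could flush.
    FooForm form = bindRequest(request);   // hypothetical form binding
    if (!validator.isValid(form)) {
        return editView(form);             // foo was never modified
    }
    foo.setName(form.getName());           // safe: values already validated
    foo.setAmount(form.getAmount());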
What about using Session.clear() and/or Session.evict()?
What about setting singleSession=false on the filter? That would put your operations into separate sessions, so you don't have to deal with the first-level cache issues. Otherwise you will probably want to detach/attach your objects manually, as suggested above. You could also change the FlushMode on your session if you don't want things being flushed automatically (FlushMode.MANUAL).
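A hedged sketch of the FlushMode.MANUAL suggestion (the query is illustrative; session is the OpenSessionInView session):

    session.setFlushMode(FlushMode.MANUAL); // no automatic flush before queries

    // loading form backing objects no longer triggers a dirty-check flush of foo
    List<?> backingObjects = session.createQuery("from SomeEntity").list();

    // later, once foo has passed validation:
    session.update(foo);
    session.flush(); // explicit, controlled write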
Implement a service layer, take a look at Spring's @Transactional annotation, and mark your methods as @Transactional(readOnly=true) where applicable.
Your flush mode is probably set to AUTO, which means you don't really have control over when the session synchronizes with the database.
You could set your flush mode to MANUAL instead, and your services/repositories will only try to synchronize the db with your app when you tell them to.
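Combining those two suggestions, a hedged sketch (service and repository names are hypothetical; with Hibernate underneath, Spring puts the session into a manual/never flush mode for read-only transactions):

    @Service
    public class FormService {

        // readOnly=true makes Spring switch the Hibernate session to a
        // manual flush mode for this method, so the dirty foo is not
        // flushed by the queries issued here.
        @Transactional(readOnly = true)
        public List<BackingObject> loadFormBackingObjects() {
            return backingObjectRepository.findAll();
        }
    }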