My question is about strange read/select behavior, where the same query returns different results on each call. A description of my situation is below:
I have the following code, which returns a list of documents from the DB:
@RequestMapping(value = {"/docs"}, method = RequestMethod.GET)
@ResponseBody
public ArrayList<Document> getMetaData(ModelMap modelMap) {
    return (ArrayList<Document>) documentDAO.getDocuments();
}
DocumentDAO.getDocuments looks like
public List<Document> getDocuments() {
    Query query = entityManager.createQuery("from Document");
    List<Document> list = query.getResultList();
    for (Document doc : list) System.out.println(doc.getName() + " " + doc.isSigned());
    return list;
}
In another controller, I'm also retrieving a Document and changing a boolean property with:
Document doc = documentDAO.getDocumentById(id);
doc.setSigned(true);
documentDAO.updateDocument(doc); // IS IT NECESSARY??
getById and updateDocument are the following:
public Document getDocumentById(Long id) {
    return entityManager.find(Document.class, id);
}

@Transactional
public void updateDocument(Document document) {
    entityManager.merge(document);
    entityManager.flush();
}
Questions:
As far as I know, setting a property of a managed object is enough to propagate the change to the DB. But I want to flush changes immediately. Is my approach with an extra call to update an appropriate solution, or is calling the setter enough to make immediate changes in the DB? By extra update I mean documentDAO.updateDocument(doc); // IS IT NECESSARY??
How does JPA store managed objects: in some internal data structure, or simply by keeping references like Document doc;? An internal structure most likely makes duplicate/same-ID managed objects impossible, while plain references would most likely allow multiple managed objects with the same id and other properties.
How does merge work internally? Does it try to find a managed object with the same ID in the internal storage and, if it detects one, refresh its fields, or does it simply update the DB?
If the internal storage really exists (most likely this is the persistence context, further PC), what is the criterion for distinguishing managed objects? The @Id-annotated field of the Hibernate model?
My main problem is different results of entityManager.createQuery("from Document");
System.out.println(doc.getName()+" "+doc.isSigned()); shows isSigned true on odd calls and false on even calls.
I suspect that the first select-all query returns entities with isSigned=false and puts them into the PC; after that the user performs some operation which grabs an entity by ID, sets isSigned=true, and the just-extracted entity conflicts with the one already present in the PC. The first object has isSigned=false, the second has isSigned=true, and the confused PC returns the different managed objects in rotation. But how is that possible? In my mind, the PC has mechanisms to prevent such confusing, ambiguous situations by keeping only one managed object for each unique id.
First of all, you want to enroll both the read and the write in a single transactional service method:
@Transactional
public void signDocument(Long id) {
    Document doc = documentDAO.getDocumentById(id);
    doc.setSigned(true);
}
So this code should reside on the Service side, not in your web Controller.
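For illustration, a minimal sketch of how this might be wired; the service/controller class names and the request mapping below are assumptions, not taken from your code:

@Service
public class DocumentService {

    @Autowired
    private DocumentDAO documentDAO;

    // The whole read-modify cycle runs in one transaction, so the loaded
    // Document stays managed and dirty checking flushes the change on commit.
    @Transactional
    public void signDocument(Long id) {
        Document doc = documentDAO.getDocumentById(id);
        doc.setSigned(true);
    }
}

@Controller
public class DocumentController {

    @Autowired
    private DocumentService documentService;

    @RequestMapping(value = "/docs/{id}/sign", method = RequestMethod.POST)
    @ResponseBody
    public void sign(@PathVariable Long id) {
        documentService.signDocument(id);
    }
}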
As far as I know, setting a property of a managed object is enough to propagate the change to the DB. But I want to flush changes immediately. Is my approach with an extra call to update an appropriate solution, or is calling the setter enough to make immediate changes in the DB? By extra update I mean documentDAO.updateDocument(doc); // IS IT NECESSARY??
This applies only to managed entities, as long as the Persistence Context is still open. That's why you need a transactional service method instead.
How does JPA store managed objects: in some internal data structure, or simply by keeping references like Document doc;? An internal structure most likely makes duplicate/same-ID managed objects impossible, while plain references would most likely allow multiple managed objects with the same id and other properties.
The JPA 1st level cache simply stores entities as they are; it doesn't use any other data representation. In a Persistence Context you can have one and only one entity representation per (Class, Identifier) pair. Within a JPA Persistence Context, managed entity equality is the same as entity identity.
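For example, here is a minimal sketch of that guarantee, reusing the Document entity from the question (the identifier value 1L is just an assumption for illustration):

// Inside a single transaction / Persistence Context
Document byId = entityManager.find(Document.class, 1L);
Document byQuery = entityManager
        .createQuery("from Document d where d.id = :id", Document.class)
        .setParameter("id", 1L)
        .getSingleResult();

// Both variables point to the very same managed instance
assert byId == byQuery;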
How does merge work internally? Does it try to find a managed object with the same ID in the internal storage and, if it detects one, refresh its fields, or does it simply update the DB?
The merge operation makes sense for reattaching detached entities. A managed entity's state is automatically synchronized with the database at flush time; the automatic dirty checking mechanism takes care of this.
If the internal storage really exists (most likely this is the persistence context, further PC), what is the criterion for distinguishing managed objects? The @Id-annotated field of the Hibernate model?
The PersistenceContext is a session-level cache. The managed objects always have an identifier and an associated database row.
I suspect that the first select-all query returns entities with isSigned=false and puts them into the PC; after that the user performs some operation which grabs an entity by ID, sets isSigned=true, and the just-extracted entity conflicts with the one already present in the PC.
In the same Persistence Context scope this can never happen. If you load an entity through a query, the entity gets cached in the 1st level cache. If you try to load it again with another query or with EntityManager.find(), you will still get the same object reference, which is already cached.
If the first query happens against one Persistence Context and the second query/find is issued against a second Persistence Context, then each Persistence Context will cache its own copy of the entities being queried.
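A rough sketch of that situation (the factory variable and identifier are assumptions): two EntityManagers, i.e. two Persistence Contexts, each end up holding their own copy of the same row.

EntityManager em1 = entityManagerFactory.createEntityManager();
EntityManager em2 = entityManagerFactory.createEntityManager();

Document fromFirstContext = em1.find(Document.class, 1L);
Document fromSecondContext = em2.find(Document.class, 1L);

// Equal by identifier, but not the same object reference:
// each Persistence Context caches its own copy
assert fromFirstContext != fromSecondContext;

em1.close();
em2.close();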
The first object has isSigned=false, the second has isSigned=true, and the confused PC returns the different managed objects in rotation. But how is that possible?
This can't happen. The Persistence Context always maintains entity object integrity.
Related
I would like to make this question as generic as possible, without submitting extensive code and configuration samples, so that answer submitters can cover a wide range of possibilities and the question stays somewhat "academic".
I have two entity classes, Foo and Bar. They are wired to the persistence store (in my case PostgreSQL but I think that shouldn't matter) using JPA with Hibernate as the provider. They are managed by FooDao and BarDao respectively and both DAOs extend a BaseDao which contains a save method:
public T save(T object)
{
    return (T) hibernateTemplate.merge(object);
}
which neither DAO overrides (meaning they use the superclass method as is).
The problem is that when I call myFooDao.save(myFoo), it actually persists the object to the DB, but when I call myBarDao.save(myBar), the object is not persisted, YET NO EXCEPTION IS THROWN.
All of this runs within a Spring context and both DAOs are injected. I should also add that both tables have primary keys, each tied to its own sequence. While the Bar insertion never actually gets persisted, the associated sequence does get incremented every time, which is odd. So Hibernate does prepare a transaction by getting the next value from the sequence, which increments it, but the new row never shows up in the database.
I am looking to explore some general circumstances under which this anomaly can occur. For one, could it be that the configuration is set so that Foo is auto-committed but Bar is not, and I should dive into the context configs to find discrepancies? Or could it be that Hibernate thinks the write was successfully committed because the DB engine does not report the failure properly?
Hibernate does not necessarily persist your changes after each updating query (saveOrUpdate, merge for instance).
Its behavior regarding persistence is defined by the FlushMode of the Session tied to your HibernateTemplate. The possible FlushModes are described here: https://docs.jboss.org/hibernate/orm/3.5/api/org/hibernate/FlushMode.html
By default, a Hibernate Session is set to FlushMode.AUTO. This means that unless absolutely and explicitly needed by subsequent queries (to maintain database consistency), no persistent changes are written, except for the allocation of ids by incrementing sequences.
That is the result you observed.
To answer your question, if you want to persist your change immediately after a merge, you will need one of the following:
1) Changing the flush strategy of the Session tied to your HibernateTemplate to "ALWAYS" before merging (or when instantiating the HibernateTemplate).
hibernateTemplate.setFlushModeName("FLUSH_ALWAYS");
2) Explicitly flushing the Session after merging.
hibernateTemplate.flush();
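Taken together, a minimal sketch of both options (the template wiring and the myBar variable are assumed for illustration):

// Option 1: flush after every Hibernate operation
HibernateTemplate hibernateTemplate = new HibernateTemplate(sessionFactory);
hibernateTemplate.setFlushModeName("FLUSH_ALWAYS");
hibernateTemplate.merge(myBar);

// Option 2: keep the default flush mode and flush explicitly after merging
hibernateTemplate.merge(myBar);
hibernateTemplate.flush();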
But you should also note that HibernateTemplate is a deprecated approach to interacting with databases through Hibernate, in particular because HibernateTemplate does not lead people to deal properly with database transactions.
In the first place, a merge used inside a transaction would have been persisted automatically when the transaction committed, even with FlushMode.AUTO.
In a Spring application, you could use a #Transactional annotation, which implicitly executes all the logic included in the annotated method through a transaction.
@Autowired
private SessionFactory sessionFactory;

@Transactional
public void doUpdate(Object myObject) {
    Session hibSession = sessionFactory.getCurrentSession();
    hibSession.merge(myObject);
}
See the complete explanation of Spring transaction management here: http://docs.spring.io/spring-framework/docs/4.2.x/spring-framework-reference/html/transaction.html (section 16.5.6 for the @Transactional annotation).
What is the state of your entity at the time of the merge? If the entity is in the persistence context (e.g. the session), then an update will occur if any changes have been made to the object (if there are no changes, Hibernate will quietly ignore the merge).
If the entity is not in the persistence context but it is stored in the DB, then a new row will be inserted, so you'll have a duplicate.
Also, please ensure that you are implementing equals() and hashCode() methods for your entity.
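For instance, a hedged sketch of an equals()/hashCode() pair for the Foo entity, based on an assumed business key (the code field below is an invention for illustration, not part of your model):

@Entity
public class Foo {

    @Id
    @GeneratedValue
    private Long id;

    // Assumed natural/business key, used for equality instead of the generated id
    private String code;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Foo)) return false;
        Foo other = (Foo) o;
        return code != null && code.equals(other.code);
    }

    @Override
    public int hashCode() {
        return code != null ? code.hashCode() : 0;
    }
}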
I have a class like the one below:
@Entity
@Table(name = "work")
public class Work {

    @Id
    @Column(name = "id")
    private String id;

    @OneToMany(orphanRemoval = true, mappedBy = "work", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private List<PersonRole> personRoleList;
}
As mine is a web application, when I update a personRoleList item (the change comes from the client) and call:
session.update(work); //`work` is in detached state
It does not update the existing personRoleList item; it actually adds a new one.
Some other people are also having the same problem. REF:
using-saveorupdate-in-hibernate-creates-new-records-instead-of-updating-existi
jpa-onetomany-not-deleting-child
I tried all the suggested solutions, but none of them worked for me.
But then I just tried:
session.merge(work); //replacing session.update(work)
And it works as expected!
This is where I get confused, because I can't find any explanation for this difference in behavior in the case of a OneToMany relationship (or maybe I missed it). I read some threads to understand the differences between update() and merge() and went through the docs. REF:
what-are-the-differences-between-the-different-saving-methods-in-hibernate
differences-among-save-update-saveorupdate-merge-methods-in-session
But it is still not clear what behavioral pattern/logic/steps create this difference.
Merge attempts to associate a currently transient object with a persistent object currently under management by the session by 'merging' them into one entity. Its intended use is when you have a detached object and an attached object and wish to resolve them.
In a merge(), Hibernate will read the entity from the database if there isn't already a managed instance in the session. In your example, this will result in Hibernate eagerly loading the collection (due to fetch=FetchType.EAGER). Then when your session ends, Hibernate will check for changes in the collection (due to cascade=CascadeType.ALL) and will perform the appropriate UPDATE in the database.
This differs from the update() scenario because in an update Hibernate always (by default) assumes the object is dirty and schedules an UPDATE. This update is likely what's causing creation of a new element in your collection - Hibernate hasn't looked in the database to bring the collection into session before issuing the UPDATE.
I'd bet you can get the desired behavior of update() by setting
select-before-update="true"
in your class mapping or by using the lock method to re-attach your object to the session before making changes.
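A hedged sketch of both options, assuming Hibernate 4+ annotations (the reattach method is an invented helper for illustration):

// Option 1: annotation equivalent of select-before-update="true"
// (org.hibernate.annotations.SelectBeforeUpdate)
@Entity
@Table(name = "work")
@SelectBeforeUpdate
public class Work {
    // ... fields as in the question
}

// Option 2: re-attach the detached instance without scheduling an UPDATE,
// then modify it while it is managed so dirty checking decides what to write
void reattachAndModify(Session session, Work work) {
    session.lock(work, LockMode.NONE);
    // ... modify work / its personRoleList here
}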
From Chapter 9 of Java Persistence with Hibernate
It doesn't matter if the item object is modified before or after it's passed to update(). The important thing here is that the call to update() is reattaching the detached instance to the new Session (and persistence context). Hibernate always treats the object as dirty and schedules an SQL UPDATE, which will be executed during flush. You can see the same unit of work in figure 9.8.
You may be surprised and probably hoped that Hibernate could know that you modified the detached item's description (or that Hibernate should know you did not modify anything). However, the new Session and its fresh persistence context don't have this information. Neither does the detached object contain some internal list of all the modifications you've made. [...] UPDATE in the database is needed. One way to avoid this UPDATE statement is to configure the class mapping of Item with the select-before-update="true" attribute. Hibernate then determines whether the object is dirty by executing a SELECT statement and comparing the object's current state to the current database state.
I am trying to model a transient operations solution schema in Hibernate, and I am unsure how to get the object graph and behavior I want from the model.
The table structure uses a correlation table (many-to-many) to create lists of users for the operation:
Operation       OperationUsers      Users
---------       --------------      -------
op_id           op_id               user_id
...             user_id             ...
In modeling the persistent class Operation.java using hibernate annotations, I created:
@ManyToMany(fetch = FetchType.LAZY)
@JoinColumn(name = "op_id")
public List<User> users() { return userlist; }
So far, I have the following questions:
When a user is removed from the list, how do I avoid Hibernate deleting the user from the Users table? It should just be removed from the correlation table, not the Users table. I cannot see a valid CascadeType to accomplish this.
Do I need to put anything more in the method body?
Do I need to add more annotation arguments?
I am expecting to do this without futzing with the User class.
Please tell me that I do not have to mess with User.java!
It's possible I'm overthinking this, but that's the nature of learning... Thanks in advance for any help you can offer!
From the documentation:
Hibernate defines and supports the following object states:
*Transient - an object is transient if it has just been instantiated using the new operator, and it is not associated with a Hibernate Session. It has no persistent representation in the database and no identifier value has been assigned. Transient instances will be destroyed by the garbage collector if the application does not hold a reference anymore. Use the Hibernate Session to make an object persistent (and let Hibernate take care of the SQL statements that need to be executed for this transition).
*Persistent - a persistent instance has a representation in the database and an identifier value. It might just have been saved or loaded, however, it is by definition in the scope of a Session. Hibernate will detect any changes made to an object in persistent state and synchronize the state with the database when the unit of work completes. Developers do not execute manual UPDATE statements, or DELETE statements when an object should be made transient.
*Detached - a detached instance is an object that has been persistent, but its Session has been closed. The reference to the object is still valid, of course, and the detached instance might even be modified in this state. A detached instance can be reattached to a new Session at a later point in time, making it (and all the modifications) persistent again. This feature enables a programming model for long running units of work that require user think-time. We call them application transactions, i.e., a unit of work from the point of view of the user.
As explained in this answer, you can detach your entity using Session.evict() to prevent Hibernate from updating the database, or simply clone it and make the needed changes on the copy.
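A minimal sketch of the evict() approach; the entity, identifier and setter below are assumed for illustration:

// Load the entity, then detach it so further changes are not synchronized
User user = (User) session.get(User.class, userId);
session.evict(user);           // user is now detached; Hibernate stops tracking it
user.setName("scratch value"); // assumed setter; this change is NOT flushed to the DB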
It turns out that the specific answer to my primary question (#1 and the main topic) is: "Do not specify any CascadeType on the property."
The answer is mentioned sorta sideways in the answer to this question.
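In other words, a join-table mapping along these lines (a hedged sketch; the table and column names follow the schema above) removes only the row in the correlation table when a User is removed from the list, and leaves the Users table alone:

@ManyToMany(fetch = FetchType.LAZY)   // note: no cascade attribute at all
@JoinTable(name = "OperationUsers",
           joinColumns = @JoinColumn(name = "op_id"),
           inverseJoinColumns = @JoinColumn(name = "user_id"))
public List<User> getUsers() { return userlist; }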
In a container-managed transaction I get a detached object and merge it so that the detached object is brought into the managed state. My initial question is: in terms of the cost/time involved in getting the data from the DB, is it better to cache the POJOs and merge them to get the object into the session, or to fetch the data from the DB to get it into the session context? If I perform a merge at the start to get the object into the session context and make my modifications on this merged object, will Hibernate take care of generating all the required SQL statements, and will everything be taken care of at the end?
Please comment back on which is the better approach to get the entity into the session: merging the cached detached object, or fetching the data from the DB. Which one takes less time?
When you call detach and then merge, merge returns the attached entity in the context. It's a common mistake for users to keep using the entity they passed to the merge operation, hoping it is now managed, but that is not the case. You have to use the entity returned from merge, which will be managed by Hibernate; subsequent changes will be flushed automatically at the end of the transaction.
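A short sketch of that pitfall, reusing the Document entity from earlier (the variable names are assumptions):

// WRONG: 'detachedDoc' stays detached, so this change is silently lost
entityManager.merge(detachedDoc);
detachedDoc.setSigned(true);

// RIGHT: work with the instance returned by merge()
Document managedDoc = entityManager.merge(detachedDoc);
managedDoc.setSigned(true);   // flushed automatically when the transaction ends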
It doesn't matter much when you load your entity, because Hibernate will fire a select anyway if it is not already loaded in the context. Also, even if you keep making changes to your managed entity, Hibernate will fire the update only when you exit the transaction or call flush() explicitly.
Copy the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of and return it as a newly persistent instance. The given instance does not become associated with the session. This operation cascades to associated instances if the association is mapped with cascade="merge".
According to the API, it saves a copy when you perform the merge and then returns a new instance. Based on my experience, it's always better to merge at the end, after you have performed all the updates on the objects in the detached state. It's better because you call the merge operation only at the end, when the object state is ready to be persisted.
This will also perform better because the object is moved into the persistence context only at the end, so Hibernate does not have to come into the picture until then.
I have an entity defined as follows:
public class Version {

    @Id
    private Long id;

    private String content;

    @Transient
    private Model model;

    //...
}
From what I can see, when a find operation is done on the Entity Manager, it makes a SELECT on the underlying database only once, and then the entity is cached in the Entity Manager. However, I see that if I assign a Model to the model property, this change is not reflected in the cached entity. E.g. if in one call a find operation is done and a Model is assigned, then when I do find again from another EJB, the model property is null again. Is this change not reflected in the cached entity? Perhaps because it's @Transient?
The entity manager maintains a first level cache, and this first level cache is thrown away as soon as the transaction has ended. Else, the cache would return stale values, since other transactions, in the same application or in another one, could modify or remove the cached entities.
Moreover, concurrent transactions each have their own session-level cache, and thus their own instance of the same entity.
If in a subsequent transaction, you find the same entity, a new SQL query will be issued, and a different instance of the entity will be returned.
If something must be remembered across transactions for a given entity, then it should be made persistent in the database. That's the point of a database.
I have to disagree with @JB Nizet. JPA's EntityManager and Hibernate's Session offer an extended Persistence Context. It is not at all true that the "first level cache is thrown away as soon as the transaction has ended".
Persistence Context can be either Transaction Scoped -- the Persistence Context 'lives' for the length of the transaction, or Extended -- the Persistence Context spans multiple transactions.
https://web.archive.org/web/20131212234524/https://blogs.oracle.com/carolmcdonald/entry/jpa_caching
The solution, however, is correct: you have to persist changes to the object if you want them to be reflected in the cache.
If you are using EclipseLink, then the merging of transient fields into the shared cache can be configured in two ways.
If a @CloneCopyPolicy is used, then the object from the persistence context will be cloned into the shared cache, preserving the transient fields.
If an @InstantiationCopyPolicy is used, then a new instance will be created for the shared cache, and transients will not be preserved.
If you are using weaving and field access, then the default is @CloneCopyPolicy; otherwise it is @InstantiationCopyPolicy. You can also configure this using
You can also control what is merged into the shared cache using a DescriptorEventListener and the postMerge/postClone events.