JPA: getReference() vs new "mock" object with ID only?

I wonder whether it is acceptable to associate an entity with a child entity not through a proxy object, but by creating a new object and setting its ID manually, like this:
@Transactional
public void save(@NonNull String name, @NonNull Long roleId) {
    User user = new User();
    user.setName(name);

    Role role = new Role();
    role.setRoleId(roleId);
    // Instead of:
    // roleRepository.getOne(roleId);
    user.setRole(role);

    userRepository.save(user);
}
I know that the accepted and well-documented way to do it is by calling something like:
em.getReference(Role.class, roleId);
or, with Spring Data:
roleRepository.getOne(roleId);
or the Hibernate-native way:
session.load(Role.class, roleId);
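For completeness, the reference-based version of the same save() method would look roughly like this (a sketch only, using the same repositories as above):
@Transactional
public void save(@NonNull String name, @NonNull Long roleId) {
    User user = new User();
    user.setName(name);
    // getOne() returns an uninitialized proxy; no SELECT is issued
    // unless a non-id property of the proxy is accessed.
    Role role = roleRepository.getOne(roleId);
    user.setRole(role);
    userRepository.save(user);
}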
So the question is: what bad consequences can one face by cheating the JPA provider like this and using a new object with the ID set? Note that the only reason to call getOne() is to associate a newly created entity with an existing one. The Role mock object is never managed, so there is no fear of losing any data; it simply does its job of connecting the two entities.
From the Hibernate documentation:
getReference() obtains a reference to the entity. The state may or may
not be initialized. If the entity is already associated with the
current running Session, that reference (loaded or not) is returned.
If the entity is not loaded in the current Session and the entity
supports proxy generation, an uninitialized proxy is generated and
returned, otherwise the entity is loaded from the database and
returned.
After testing, I found that getReference() basically does not even hit the database to check that the ID exists, and save() would fail at commit anyway if the FK constraint is violated. The getOne() approach just requires an additional dependency to autowire (RoleRepository).
So why should I have this proxy fetched by calling getOne() instead of using a mock object created with new, if my case is as simple as this one? What may go wrong with this approach, and when?
Thank you for clarifying things.
EDIT:
Hibernate/JPA, save a new entity while only setting id on #OneToOne association
This related topic doesn't answer the question. I am asking why calling JPA's getReference() is better, and what may go wrong if I adopt this practice of creating new "mock" objects with a given ID via the new operator.

Related

Hibernate Update Exception: a different object with the same identifier value was already associated with the session [duplicate]

I have two user Objects and while I try to save the object using
session.save(userObj);
I am getting the following error:
Caused by: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session:
[com.pojo.rtrequests.User#com.pojo.rtrequests.User#d079b40b]
I am creating the session using
BaseHibernateDAO dao = new BaseHibernateDAO();
rtsession = dao.getSession(userData.getRegion(),
BaseHibernateDAO.RTREQUESTS_DATABASE_NAME);
rttrans = rtsession.beginTransaction();
rttrans.begin();
rtsession.save(userObj1);
rtsession.save(userObj2);
rtsession.flush();
rttrans.commit();
rtsession.close(); // in finally block
I also tried doing the session.clear() before saving, still no luck.
This is the first time I am getting the session object when a user request comes in, so I do not understand why it says the object is already present in the session.
Any suggestions?
I have had this error many times and it can be quite hard to track down...
Basically, what hibernate is saying is that you have two objects which have the same identifier (same primary key) but they are not the same object.
I would suggest you break down your code, i.e. comment out bits until the error goes away, then put the code back until it returns; that should lead you to the error.
It most often happens via cascading saves where there is a cascade save between object A and B, but object B has already been associated with the session but is not on the same instance of B as the one on A.
What primary key generator are you using?
The reason I ask is that this error is related to how you're telling Hibernate to ascertain the persistent state of an object (i.e. whether an object is persistent or not). The error could be happening because Hibernate is trying to persist an object that is already persistent. In fact, if you use save, Hibernate will try to persist that object, and maybe there is already an object with that same primary key associated with the session.
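If that is the case, one option is to merge the objects instead of saving them, so that Hibernate reuses whatever instance is already associated with the session. A sketch, using the userObj variables from the question:
// merge() returns the managed instance; if a copy with the same id is already
// in the session, its state is updated instead of a second instance being attached.
User managedUser1 = (User) rtsession.merge(userObj1);
User managedUser2 = (User) rtsession.merge(userObj2);
rtsession.flush();
rttrans.commit();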
Example
Assume you have a Hibernate entity class for a table with 10 rows, keyed by a composite primary key (column 1 and column 2). At some point you removed 5 rows from the table. If you now try to add the same 10 rows again, the 5 rows that had been removed will be persisted without errors, while the remaining 5 rows, which still exist, will throw this exception.
So the easy approach is to check whether you have updated or removed rows in the table involved and are later trying to insert the same objects again.
This is only one point where hibernate makes more problems than it solves.
In my case there are many objects with the same identifier 0, because they are new and don't have one yet; the database generates the identifiers. Somewhere I have read that 0 signals "ID not set". The intuitive way to persist them is to iterate over them and tell Hibernate to save the objects. But you can't do that - "of course you should know that Hibernate works this and that way, therefore you have to...".
So now I can try to change the IDs to Long instead of long and see whether it works then.
In the end it's easier to do it with a simple mapper of your own, because Hibernate is just an additional opaque burden.
Another example: trying to read parameters from one database and persist them in another forces you to do nearly all the work manually. But if you have to do it anyway, using Hibernate is just additional work.
Use session.evict(object). The evict() method removes an instance from the session cache. So when saving an object for the first time, call session.save(object) before evicting it from the cache; in the same way, update the object by calling session.saveOrUpdate(object) or session.update(object) before calling evict().
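A rough sketch of that sequence (MyEntity and its setter are purely illustrative):
MyEntity object = new MyEntity();
session.save(object);        // first-time save: the instance is now in the session cache
session.evict(object);       // then remove it from the first-level cache

object.setName("changed");   // later modification
session.update(object);      // reattach/update before evicting again (or saveOrUpdate)
session.evict(object);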
This can happen when you have used the same session object for both reading and writing. How?
Say you have created one session.
You read a record from the employee table with primary key Emp_id=101.
Now you have modified the record in Java.
And you are going to save the employee record to the database.
We have not closed the session anywhere here.
Since the object that was read is still present in the session, it conflicts with the object we wish to write. Hence this error occurs.
As somebody already pointed out above, I ran into this problem when I had cascade=all on both ends of a one-to-many relationship. So assume A --> B (one-to-many from A and many-to-one from B); updating an instance of B inside A and then calling saveOrUpdate(A) resulted in a circular save request: the save of A triggers the save of B, which triggers the save of A... and on the third pass, as the entity (of A) was being added to the session's persistence context, the duplicate-object exception was thrown. I could solve it by removing the cascade from one end.
You can use session.merge(obj) if you are saving the same persistent object, with the same identifier, through different sessions.
It worked; I had the same issue before.
I ran into this problem by:
Deleting an object (using HQL)
Immediately storing a new object with the same id
I resolved it by flushing the results after the delete, and clearing the cache before saving the new object
String delQuery = "DELETE FROM OasisNode";
session.createQuery( delQuery ).executeUpdate();
session.flush();
session.clear();
This problem occurs when we update the same object in the session that we used to fetch it from the database.
You can use Hibernate's merge method instead of update.
E.g. first use session.get() and then session.merge(object). This approach does not create any problem. We can also use merge() to update the object in the database.
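A minimal sketch of that get-then-merge flow, with a hypothetical Employee entity (the names are illustrative only):
public Employee applyChanges(Session session, Employee detached) {
    // session.get() first, as suggested above: it loads the current managed copy
    // (and its database state) into the persistence context.
    Employee current = (Employee) session.get(Employee.class, detached.getId());
    if (current == null) {
        throw new IllegalArgumentException("No Employee row for id " + detached.getId());
    }
    // merge() copies the detached state onto the managed copy and returns the managed
    // instance; no NonUniqueObjectException is thrown even though two copies share the id.
    return (Employee) session.merge(detached);
}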
I also ran into this problem and had a hard time finding the error.
The problem I had was the following:
The object had been read by a DAO with a different hibernate session.
To avoid this exception, simply re-read the object with the DAO that is going to save/update it later on.
so:
class A {
    void readFoo() {
        someDaoA.read(myBadAssObject); // different session than in class B
    }
}

class B {
    void saveFoo() {
        someDaoB.read(myBadAssObjectAgain); // different session than in class A
        // [...]
        myBadAssObjectAgain.fooValue = "bar";
        persist();
    }
}
Hope that save some people a lot of time!
Get the object inside the session, here an example:
MyObject ob = null;
ob = (MyObject) session.get(MyObject.class, id);
By default it was using the identity strategy, but I fixed it by adding
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
Are your ID mappings correct? If the database is responsible for creating the ID through an identity column, you need to map your user object to that.
Check if you forgot to put @GeneratedValue for the @Id column.
I had the same problem with a many-to-many relationship between Movie and Genre. The program threw
Hibernate Error: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session
I found out later that I just had to make sure @GeneratedValue was present on the GenreId getter method.
I encountered this problem with deleting an object, neither evict nor clear helped.
/**
* Deletes the given entity, even if hibernate has an old reference to it.
* If the entity has already disappeared due to a db cascade then noop.
*/
public void delete(final Object entity) {
Object merged = null;
try {
merged = getSession().merge(entity);
}
catch (ObjectNotFoundException e) {
// disappeared already due to cascade
return;
}
getSession().delete(merged);
}
Before the position where the repetitive objects begin, you should close the session and then start a new one:
session.close();
session = HibernateUtil.getSessionFactory().openSession();
This way, within one session there is never more than one entity with the same identifier.
I had a similar problem. In my case I had forgotten to set the increment_by value in the database to the same value as the one used by cache_size and allocationSize (the arrows point to the attributes in question).
SQL:
CREATED 26.07.16
LAST_DDL_TIME 26.07.16
SEQUENCE_OWNER MY
SEQUENCE_NAME MY_ID_SEQ
MIN_VALUE 1
MAX_VALUE 9999999999999999999999999999
INCREMENT_BY 20 <-
CYCLE_FLAG N
ORDER_FLAG N
CACHE_SIZE 20 <-
LAST_NUMBER 180
Java:
@SequenceGenerator(name = "mySG", schema = "my",
    sequenceName = "my_id_seq", allocationSize = 20 <-)
Late to the party, but may help for coming users -
I got this issue when I selected a record using getSession() and then updated another record with the same identifier using the same session. The code is below.
Customer existingCustomer = getSession().get(Customer.class, 1);
Customer customerFromUi; // this customer's details come from the UI, with identifier 1
getSession().update(customerFromUi); // here the issue occurs
This should never be done. The solution is to either evict the session before the update or change the business logic.
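A sketch of the first option, continuing the Customer snippet above:
Customer existingCustomer = getSession().get(Customer.class, 1);
// read whatever is needed from existingCustomer here
getSession().evict(existingCustomer);    // detach the copy loaded by the session first
getSession().update(customerFromUi);     // now only one instance with id 1 is attached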
Just check whether the ID is null or 0, like:
if (offersubformtwo.getId() != null && offersubformtwo.getId() != 0)
in the add or update code where the form content is mapped onto the POJO.
I'm new to NHibernate, and my problem was that I used a different session to query my object than the one I used to save it, so the saving session didn't know about the object.
It seems obvious, but from reading the previous answers I was looking everywhere for two objects, not two sessions.
@GeneratedValue(strategy=GenerationType.IDENTITY), adding this annotation to the primary key property in your entity bean should solve this issue.
I resolved this problem.
It was happening because we forgot to define the generator type for the PK property in the bean class. So declare it with a strategy, for example:
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
private int id;
When the bean objects are persisted without it, every object acquires the same ID, so the first object is saved, and when another object is persisted the Hibernate framework throws this kind of exception: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session.
The problem happens because in the same Hibernate session you are trying to save two objects with the same identifier. There are two solutions:
This happens when you have not configured your mapping.xml file correctly for the id field, as below:
<id name="id">
<column name="id" sql-type="bigint" not-null="true"/>
<generator class="hibernateGeneratorClass"/>
</id>
Overload the getSession method to accept a parameter such as isSessionClear, and clear the session before returning the current session, as below:
public static Session getSession(boolean isSessionClear) {
if (session.isOpen() && isSessionClear) {
session.clear();
return session;
} else if (session.isOpen()) {
return session;
} else {
return sessionFactory.openSession();
}
}
This will cause the existing session objects to be cleared, and even if Hibernate doesn't generate a unique identifier, assuming you have configured your database properly for the primary key (using something like AUTO_INCREMENT), it should work for you.
Apart from what wbdarby said, it can even happen when an object is fetched by passing its identifier to an HQL query. If you then try to modify the object's fields and save it back to the DB (the modification could be an insert, delete or update) within the same session, this error will appear. Try clearing the Hibernate session before saving your modified object, or create a brand new session.
Hope I helped ;-)
I had the same error; I was replacing my Set with a new one obtained from Jackson.
To solve this I keep the existing set: I remove from the old set the elements that are unknown to the new list, using retainAll.
Then I add the new ones with addAll.
this.oldSet.retainAll(newSet);
this.oldSet.addAll(newSet);
No need to have the Session and manipulate it.
Try this. The below worked for me!
In the hbm.xml file
We need to set the dynamic-update attribute of class tag to true:
<class dynamic-update="true">
Set the class attribute of the generator tag under unique column to identity:
<generator class="identity">
Note: Set the unique column to identity rather than assigned.
I just had the same problem. I solved it by adding this line:
@GeneratedValue(strategy=GenerationType.IDENTITY)
Another thing that worked for me was to make the instance variable Long in place of long.
I had my primary key declared as long id;
changing it to Long id; worked.
All the best.
You can always do a session flush.
Flush will synchronize the state of all the objects in the session (please, someone correct me if I'm wrong), and in some cases it may solve your problem.
Implementing your own equals and hashCode may help you too.
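A minimal sketch of an identifier-based equals/hashCode pair, as hinted at above (MyEntity and its Long id field are hypothetical; unsaved instances with a null id are never considered equal):
@Override
public boolean equals(Object other) {
    if (this == other) return true;
    if (!(other instanceof MyEntity)) return false;
    MyEntity that = (MyEntity) other;
    // two transient instances (id == null) are never equal
    return id != null && id.equals(that.id);
}

@Override
public int hashCode() {
    // constant hash keeps the contract stable before and after the id is assigned
    return 31;
}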
You can check your cascade settings. The cascade settings on your models could be causing this. I removed the cascade settings (essentially not allowing cascading inserts/updates) and this solved my problem.
I found this error as well. What worked for me is to make sure that the (auto-generated) primary key is not a primitive type (i.e. long, int, etc.) but a wrapper object (i.e. Long, Integer, etc.).
When you create your object to save it, make sure you pass null and not 0.
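A sketch of the id mapping this answer describes (Invoice is a hypothetical entity; the wrapper type lets a new, unsaved instance carry null instead of 0):
@Entity
public class Invoice {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;    // wrapper type: stays null until the row is inserted

    // other fields, getters and setters omitted for brevity
}

// when creating a new instance, leave the id as null and never set it to 0
Invoice invoice = new Invoice();
session.save(invoice);  // the database assigns the identifier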

How to persist many-to-many collection in JPA2 if I have only IDs?

Here is My many-to-many collection:
@ManyToMany
@JoinTable(name="affiliated_databases",
joinColumns=
@JoinColumn(name="database_id", referencedColumnName="id"),
inverseJoinColumns=
@JoinColumn(name="affiliated_database_id", referencedColumnName="id")
)
public Set<Database> affiliatedOrgs;
And in my service class method I have only IDs of this collection.
Is there any good solution to persist this collection without reading its elements from database?
I'm trying to do something like this:
for (Long affId : affIds) {
    Database affDatabase = new Database();
    affDatabase.setId(affId);
    target.getAffiliatedOrgs().add(affDatabase);
}
dao.save(target);
It works, but 1) it looks somewhat inelegant to me;
2) it may potentially cause errors if this target object is used somewhere else in the future... Or maybe it's a good solution and my doubts are in vain?
So, is there a more elegant way to persist this collection without reading all its elements from the DB and without provoking errors in the future?
You may want to use EntityManager.getReference(). It creates an entity "proxy" object, with all of its properties lazily fetched (if needed).
Get an instance, whose state may be lazily fetched. If the requested instance does not exist in the database, the EntityNotFoundException is thrown when the instance state is first accessed. (The persistence provider runtime is permitted to throw the EntityNotFoundException when getReference is called.) The application should not expect that the instance state will be available upon detachment, unless it was accessed by the application while the entity manager was open.
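Applied to the loop from the question, a sketch using getReference() instead of new Database() might look like this (assuming an injected EntityManager em):
for (Long affId : affIds) {
    // returns an uninitialized proxy; no SELECT is issued unless its state is accessed
    Database affDatabase = em.getReference(Database.class, affId);
    target.getAffiliatedOrgs().add(affDatabase);
}
dao.save(target);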

How to create new rows without JPA considering it an update?

I'm working on a project right now; here is a piece of code:
public boolean getAll() {
TypedQuery<Tag> query = em.createQuery("SELECT c FROM Tag c WHERE (c.tagName !=?1 AND c.tagName !=?2 AND c.tagName !=?3) ", Tag.class);
query.setParameter(1, "Complete");
query.setParameter(2, "GroupA");
query.setParameter(3, "GroupB");
List<Tag> Tag= query.getResultList();
But when I try to do something like this:
Tag.get(2).setTagName("Hello");
em.persist(Tag.get(2));
it considers it an update instead of a create. How can I make JPA understand that this object should no longer be tied to the database - detach it from the database and create a new row, only changing its name, for example?
Thanks a lot for any help!
Best regards!
EDIT:
Using em.detach just before changing its values and persisting each element of the list worked just fine!
Thanks everyone!
You haven't shown us how you are obtaining your list, but there are two key points here:
Everything read in from an EntityManager is managed - JPA checks these managed objects for changes and will synchronize them with the database when required (either when the transaction commits or when flush is called).
Calling persist on a managed entity is a no-op - the entity is already managed, and will be synchronized with the database if it isn't in there yet.
So the first Tag.get(2).setTagName("Hello"); call is what causes your update, while the persist is a no-op.
What you need to do instead is create a new instance of your tag object and set the field. Create a clone method on your object that copies everything but the ID field, and then call persist on the result to get an insert for a new entity.
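A sketch of that approach with the Tag entity from the question (copyWithoutId() is a hypothetical helper, tags stands for the query result list, and the getter/setter names follow the tagName field):
// hypothetical copy helper inside Tag: copies every field except the generated id
public Tag copyWithoutId() {
    Tag copy = new Tag();
    copy.setTagName(this.getTagName());
    // copy any further fields here, but leave the id unset
    return copy;
}

// usage in the calling code
Tag fresh = tags.get(2).copyWithoutId();   // transient instance, id is null
fresh.setTagName("Hello");
em.persist(fresh);                         // INSERT; the original row stays untouched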
The decision whether to update or create a new entity object is made based on the primary key. You're probably reusing an existing ID on the object. Change or remove it and then persist; this should create a new entry.
If that doesn't work, you might need to detach the object from the Entity Manager first:
em.detach(tagObj);
and persist it afterwards:
em.persist(tagObj);
You can also force an update instead of creation by using
em.merge(tagObj)
There is no equivalent for forced creation AFAIK. persist will do both depending on PK.

One DAO per entity - how to handle references?

I am writing an application that has two typical entities: User and UserGroup. The latter may contain one or more instances of the former. I have the following (more or less) mapping for that:
User:
public class User {
@Id
@GeneratedValue
private long id;
@ManyToOne(cascade = {CascadeType.MERGE})
@JoinColumn(name="GROUP_ID")
private UserGroup group;
public UserGroup getGroup() {
return group;
}
public void setGroup(UserGroup group) {
this.group = group;
}
}
User group:
public class UserGroup {
@Id
@GeneratedValue
private long id;
@OneToMany(mappedBy="group", cascade = {CascadeType.REMOVE}, targetEntity = User.class)
private Set<User> users;
public void setUsers(Set<User> users) {
this.users = users;
}
}
Now I have a separate DAO class for each of these entities (UserDao and UserGroupDao). All my DAOs have an EntityManager injected using the @PersistenceContext annotation, like this:
@Transactional
public class SomeDao<T> {
private Class<T> persistentClass;
@PersistenceContext
private EntityManager em;
public T findById(long id) {
return em.find(persistentClass, id);
}
public void save(T entity) {
em.persist(entity);
}
}
With this layout I want to create a new user and assign it to existing user group. I do it like this:
UserGroup ug = userGroupDao.findById(1);
User u = new User();
u.setName("john");
u.setGroup(ug);
userDao.save(u);
Unfortunately I get following exception:
object references an unsaved transient instance - save the transient
instance before flushing: x.y.z.model.User.group ->
x.y.z.model.UserGroup
I investigated it and I think it happens because each DAO instance has a different entityManager assigned (I checked: the references to the entity manager in each DAO are different), and the User's entityManager does not manage the passed UserGroup instance.
I've tried merging the user group assigned to the user into the UserDao's entity manager. There are two problems with that:
It still doesn't work - the entity manager wants to overwrite the existing UserGroup and gets an exception (obviously).
Even if it worked, I would end up writing merge code for each related entity.
The described case works when both the find and the persist are performed with the same entity manager. This leads to the following questions:
Is my design broken? I think it is pretty similar to the one recommended in this answer. Should there be a single EntityManager for all DAOs (the web claims otherwise)?
Or should the group assignment be done inside the DAO? In that case I would end up writing a lot of code in the DAOs.
Should I get rid of the DAOs? If yes, how do I handle data access nicely?
Any other solution?
I am using Spring as container and Hibernate as JPA implementation.
Different instances of EntityManager are normal in Spring. It creates proxies that dynamically use the entity manager currently bound to a transaction if one exists; otherwise, a new one is created.
The problem is that your transactions are too short. Retrieving the user group executes in a transaction (because the findById method is implicitly @Transactional). But then the transaction commits and the group becomes detached. When you save the new user, a new transaction is created, which fails because the user references a detached entity.
The way to solve this (and to do such things in general) is to create a method that performs the whole operation in a single transaction. Just create that method in a service class (any Spring-managed component will work) and annotate it with @Transactional as well.
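For example, a sketch of such a service (UserService is a hypothetical component; the DAOs are the ones from the question):
@Service
public class UserService {

    @Autowired private UserDao userDao;
    @Autowired private UserGroupDao userGroupDao;

    @Transactional
    public User createUserInGroup(String name, long groupId) {
        // both calls now run inside the same transaction, so the group
        // stays managed while the new user is persisted
        UserGroup group = userGroupDao.findById(groupId);
        User user = new User();
        user.setName(name);
        user.setGroup(group);
        userDao.save(user);
        return user;
    }
}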
I don't know Spring, but the JPA issue is that you are persisting a User that has a reference to a UserGroup, and JPA thinks the UserGroup is transient.
Transient is one of the life-cycle states a JPA entity can be in. It means the object has just been created with the new operator but has not been persisted yet (it does not have a persistent identity yet).
Since you obtain your UserGroup instance via a DAO, it seems like something is wrong there. Your instance should not be transient, but detached. Can you print the ID of the UserGroup instance just after you received it from the DAO? And perhaps also show the findById implementation?
You don't have cascade persist on the group relation, so this should normally just work if the entity was indeed detached. With a transient (new) entity, JPA simply has no way to set the FK correctly, since it would need the ID of the UserGroup instance here and that (seemingly) doesn't exist.
A merge should also not "overwrite" your detached entity. What is the exception that you're getting here?
I only partially agree with the answers given by others here about having to put everything in one transaction. Yes, this may indeed be more convenient, as the UserGroup instance will still be 'attached', but it should not be necessary. JPA is perfectly capable of persisting new entities with references to either other new entities or existing (detached) entities that were obtained in another transaction. See e.g. JPA cascade persist and references to detached entities throws PersistentObjectException. Why?
I am not sure how, but I've managed to solve this. The user group I was trying to assign the user to had a NULL version field in the database (the field annotated with @Version). I figured out this was the issue while testing GWT RequestFactory against this table. When I set the field to 1, everything started to work (no changes in transaction handling were needed).
If the NULL version field really caused the problem, then this would be one of the most misleading exception messages I have ever got.

What is the proper way to re-attach detached objects in Hibernate?

I have a situation in which I need to re-attach detached objects to a hibernate session, although an object of the same identity MAY already exist in the session, which will cause errors.
Right now, I can do one of two things.
getHibernateTemplate().update( obj )
This works if and only if an object doesn't already exist in the hibernate session. Exceptions are thrown stating an object with the given identifier already exists in the session when I need it later.
getHibernateTemplate().merge( obj )
This works if and only if an object exists in the hibernate session. Exceptions are thrown when I need the object to be in a session later if I use this.
Given these two scenarios, how can I generically attach sessions to objects? I don't want to use exceptions to control the flow of this problem's solution, as there must be a more elegant solution...
So it seems that there is no way to reattach a stale detached entity in JPA.
merge() will push the stale state to the DB,
and overwrite any intervening updates.
refresh() cannot be called on a detached entity.
lock() cannot be called on a detached entity,
and even if it could, and it did reattach the entity,
calling 'lock' with argument 'LockMode.NONE'
implying that you are locking, but not locking,
is the most counterintuitive piece of API design I've ever seen.
So you are stuck.
There's a detach() method, but no attach() or reattach().
An obvious step in the object lifecycle is not available to you.
Judging by the number of similar questions about JPA, it seems that even if JPA does claim to have a coherent model, it most certainly does not match the mental model of most programmers, who have been cursed to waste many hours trying to understand how to get JPA to do the simplest things, and end up with cache management code all over their applications.
It seems the only way to do it is to discard your stale detached entity and do a find query with the same id, which will hit the L2 cache or the DB.
Mik
All of these answers miss an important distinction. update() is used to (re)attach your object graph to a Session. The objects you pass it are the ones that are made managed.
merge() is actually not a (re)attachment API. Notice merge() has a return value? That's because it returns you the managed graph, which may not be the graph you passed it. merge() is a JPA API and its behavior is governed by the JPA spec. If the object you pass in to merge() is already managed (already associated with the Session) then that's the graph Hibernate works with; the object passed in is the same object returned from merge(). If, however, the object you pass into merge() is detached, Hibernate creates a new object graph that is managed and it copies the state from your detached graph onto the new managed graph. Again, this is all dictated and governed by the JPA spec.
In terms of a generic strategy for "make sure this entity is managed, or make it managed", it kind of depends on if you want to account for not-yet-inserted data as well. Assuming you do, use something like
if ( session.contains( myEntity ) ) {
// nothing to do... myEntity is already associated with the session
}
else {
session.saveOrUpdate( myEntity );
}
Notice I used saveOrUpdate() rather than update(). If you do not want not-yet-inserted data handled here, use update() instead...
Entity states
JPA defines the following entity states:
New (Transient)
A newly created object that hasn’t ever been associated with a Hibernate Session (a.k.a Persistence Context) and is not mapped to any database table row is considered to be in the New (Transient) state.
To become persisted we need to either explicitly call the EntityManager#persist method or make use of the transitive persistence mechanism.
Persistent (Managed)
A persistent entity has been associated with a database table row and it’s being managed by the currently running Persistence Context. Any change made to such an entity is going to be detected and propagated to the database (during the Session flush-time).
With Hibernate, we no longer have to execute INSERT/UPDATE/DELETE statements. Hibernate employs a transactional write-behind working style and changes are synchronized at the very last responsible moment, during the current Session flush-time.
Detached
Once the currently running Persistence Context is closed all the previously managed entities become detached. Successive changes will no longer be tracked and no automatic database synchronization is going to happen.
Entity state transitions
You can change the entity state using various methods defined by the EntityManager interface.
To understand the JPA entity state transitions better, consider the following diagram:
When using JPA, to reassociate a detached entity to an active EntityManager, you can use the merge operation.
When using the native Hibernate API, apart from merge, you can reattach a detached entity to an active Hibernate Session using the update method, as demonstrated by the following diagram:
Merging a detached entity
The merge is going to copy the detached entity state (source) to a managed entity instance (destination).
Consider we have persisted the following Book entity, and now the entity is detached as the EntityManager that was used to persist the entity got closed:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
return book;
});
While the entity is in the detached state, we modify it as follows:
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
Now, we want to propagate the changes to the database, so we can call the merge method:
doInJPA(entityManager -> {
Book book = entityManager.merge(_book);
LOGGER.info("Merging the Book entity");
assertFalse(book == _book);
});
And Hibernate is going to execute the following SQL statements:
SELECT
b.id,
b.author AS author2_0_,
b.isbn AS isbn3_0_,
b.title AS title4_0_
FROM
book b
WHERE
b.id = 1
-- Merging the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
If the merging entity has no equivalent in the current EntityManager, a fresh entity snapshot will be fetched from the database.
Once there is a managed entity, JPA copies the state of the detached entity onto the one that is currently managed, and during the Persistence Context flush, an UPDATE will be generated if the dirty checking mechanism finds that the managed entity has changed.
So, when using merge, the detached object instance will continue to remain detached even after the merge operation.
Reattaching a detached entity
Hibernate, but not JPA, supports reattaching through the update method.
A Hibernate Session can only associate one entity object for a given database row. This is because the Persistence Context acts as an in-memory cache (first level cache) and only one value (entity) is associated with a given key (entity type and database identifier).
An entity can be reattached only if there is no other JVM object (matching the same database row) already associated with the current Hibernate Session.
Considering we have persisted the Book entity and that we modified it when the Book entity was in the detached state:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
return book;
});
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
We can reattach the detached entity like this:
doInJPA(entityManager -> {
Session session = entityManager.unwrap(Session.class);
session.update(_book);
LOGGER.info("Updating the Book entity");
});
And Hibernate will execute the following SQL statement:
-- Updating the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
The update method requires you to unwrap the EntityManager to a Hibernate Session.
Unlike merge, the provided detached entity is going to be reassociated with the current Persistence Context, and an UPDATE is scheduled during flush whether the entity has been modified or not.
To prevent this, you can use the @SelectBeforeUpdate Hibernate annotation, which triggers a SELECT statement that fetches the loaded state, which is then used by the dirty-checking mechanism.
#Entity(name = "Book")
#Table(name = "book")
#SelectBeforeUpdate
public class Book {
//Code omitted for brevity
}
Beware of the NonUniqueObjectException
One problem that can occur with update is if the Persistence Context already contains an entity reference with the same id and of the same type as in the following example:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(book);
return book;
});
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
try {
doInJPA(entityManager -> {
Book book = entityManager.find(
Book.class,
_book.getId()
);
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(_book);
});
} catch (NonUniqueObjectException e) {
LOGGER.error(
"The Persistence Context cannot hold " +
"two representations of the same entity",
e
);
}
Now, when executing the test case above, Hibernate is going to throw a NonUniqueObjectException because the second EntityManager already contains a Book entity with the same identifier as the one we pass to update, and the Persistence Context cannot hold two representations of the same entity.
org.hibernate.NonUniqueObjectException:
A different object with the same identifier value was already associated with the session : [com.vladmihalcea.book.hpjp.hibernate.pc.Book#1]
at org.hibernate.engine.internal.StatefulPersistenceContext.checkUniqueness(StatefulPersistenceContext.java:651)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performUpdate(DefaultSaveOrUpdateEventListener.java:284)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.entityIsDetached(DefaultSaveOrUpdateEventListener.java:227)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:92)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:73)
at org.hibernate.internal.SessionImpl.fireSaveOrUpdate(SessionImpl.java:682)
at org.hibernate.internal.SessionImpl.saveOrUpdate(SessionImpl.java:674)
Conclusion
The merge method is to be preferred if you are using optimistic locking, as it allows you to prevent lost updates.
The update method is good for batch updates, as it can avoid the additional SELECT statement generated by the merge operation, therefore reducing the batch-update execution time.
Undiplomatic answer: You're probably looking for an extended persistence context. This is one of the main reasons behind the Seam Framework... If you're struggling to use Hibernate in Spring in particular, check out this piece of Seam's docs.
Diplomatic answer: This is described in the Hibernate docs. If you need more clarification, have a look at Section 9.3.2 of Java Persistence with Hibernate called "Working with Detached Objects." I'd strongly recommend you get this book if you're doing anything more than CRUD with Hibernate.
If you are sure that your entity has not been modified (or if you agree that any modification will be lost), then you may reattach it to the session with lock:
session.lock(entity, LockMode.NONE);
It will lock nothing, but it will get the entity from the session cache or (if not found there) read it from the DB.
It's very useful for preventing a LazyInitializationException when you are navigating relations from an "old" entity (taken from the HttpSession, for example). You first "re-attach" the entity.
Using get may also work, except when you have inheritance mapped (which will already throw an exception on the getId()).
entity = session.get(entity.getClass(), entity.getId());
I went back to the JavaDoc for org.hibernate.Session and found the following:
Transient instances may be made persistent by calling save(), persist() or
saveOrUpdate(). Persistent instances may be made transient by calling delete(). Any instance returned by a get() or load() method is persistent. Detached instances may be made persistent by calling update(), saveOrUpdate(), lock() or replicate(). The state of a transient or detached instance may also be made persistent as a new persistent instance by calling merge().
Thus update(), saveOrUpdate(), lock(), replicate() and merge() are the candidate options.
update(): Will throw an exception if there is a persistent instance with the same identifier.
saveOrUpdate(): Either save or update
lock(): Deprecated
replicate(): Persist the state of the given detached instance, reusing the current identifier value.
merge(): Returns a persistent object with the same identifier. The given instance does not become associated with the session.
Hence, lock() should not be used straight away, and one or more of the others can be chosen based on the functional requirements.
I did it that way in C# with NHibernate, but it should work the same way in Java:
public virtual void Attach()
{
if (!HibernateSessionManager.Instance.GetSession().Contains(this))
{
ISession session = HibernateSessionManager.Instance.GetSession();
using (ITransaction t = session.BeginTransaction())
{
session.Lock(this, NHibernate.LockMode.None);
t.Commit();
}
}
}
At first, Lock was called on every object because Contains was always false. The problem is that NHibernate compares objects by database id and type, while Contains uses the equals method, which compares by reference if it is not overridden. With the following equals method it works without any exceptions:
public override bool Equals(object obj)
{
if (this == obj) {
return true;
}
if (GetType() != obj.GetType()) {
return false;
}
if (Id != ((BaseObject)obj).Id)
{
return false;
}
return true;
}
Session.contains(Object obj) checks the reference and will not detect a different instance that represents the same row and is already attached to it.
Here is my generic solution for entities with an identifier property.
public static void update(final Session session, final Object entity)
{
// if the given instance is in session, nothing to do
if (session.contains(entity))
return;
// check if there is already a different attached instance representing the same row
final ClassMetadata classMetadata = session.getSessionFactory().getClassMetadata(entity.getClass());
final Serializable identifier = classMetadata.getIdentifier(entity, (SessionImplementor) session);
final Object sessionEntity = session.load(entity.getClass(), identifier);
// override changes, last call to update wins
if (sessionEntity != null)
session.evict(sessionEntity);
session.update(entity);
}
This is one of the few aspects of .NET's Entity Framework that I like: the different attach options regarding changed entities and their properties.
I came up with a solution to "refresh" an object from the persistence store that will account for other objects which may already be attached to the session:
public void refreshDetached(T entity, Long id)
{
// Check for any OTHER instances already attached to the session since
// refresh will not work if there are any.
T attached = (T) session.load(getPersistentClass(), id);
if (attached != entity)
{
session.evict(attached);
session.lock(entity, LockMode.NONE);
}
session.refresh(entity);
}
Sorry, I cannot seem to add comments (yet?).
Using Hibernate 3.5.0-Final.
Whereas the Session#lock method is deprecated, the javadoc does suggest using Session#buildLockRequest(LockOptions)#lock(entity), and if you make sure your associations have cascade=lock, lazy loading isn't an issue either.
So, my attach method looks a bit like this:
MyEntity attach(MyEntity entity) {
    if (getSession().contains(entity)) return entity;
    getSession().buildLockRequest(LockOptions.NONE).lock(entity);
    return entity;
}
Initial tests suggest it works a treat.
Perhaps it behaves slightly differently on EclipseLink. To re-attach detached objects without getting stale data, I usually do:
Object obj = em.find(obj.getClass(), id);
and, as an optional second step (to invalidate caches):
em.refresh(obj)
try getHibernateTemplate().replicate(entity,ReplicationMode.LATEST_VERSION)
In the original post, there are two methods, update(obj) and merge(obj) that are mentioned to work, but in opposite circumstances. If this is really true, then why not test to see if the object is already in the session first, and then call update(obj) if it is, otherwise call merge(obj).
The test for existence in the session is session.contains(obj). Therefore, I would think the following pseudo-code would work:
if (session.contains(obj))
{
session.update(obj);
}
else
{
session.merge(obj);
}
To reattach this object, you must use merge().
This method accepts your detached entity as a parameter and returns an entity that is attached and reloaded from the database.
Example :
Lot objAttach = em.merge(oldObjDetached);
objAttach.setEtat(...);
em.persist(objAttach);
Calling merge() first (to update the persistent instance), then lock(LockMode.NONE) (to attach the current instance, not the one returned by merge()) seems to work for some use cases.
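A sketch of that sequence (detachedEntity is illustrative; lock(Object, LockMode) is deprecated in newer Hibernate versions):
session.merge(detachedEntity);                 // pushes the detached state to the database
session.lock(detachedEntity, LockMode.NONE);   // re-associates the original instance, not the merge() result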
The property hibernate.allow_refresh_detached_entity did the trick for me. But it is a global setting, so it is not very suitable if you only want this behaviour in some cases. I hope it helps.
Tested on Hibernate 5.4.9 (the setting is handled in SessionFactoryOptionsBuilder).
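For reference, a sketch of switching that property on when building the SessionFactory with the classic Configuration API (the property name is the one mentioned above and is assumed to take a boolean value):
SessionFactory sessionFactory = new Configuration()
        .configure()   // loads hibernate.cfg.xml as usual
        .setProperty("hibernate.allow_refresh_detached_entity", "true")
        .buildSessionFactory();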
try getHibernateTemplate().saveOrUpdate()
