I have a situation in which I need to re-attach detached objects to a hibernate session, although an object of the same identity MAY already exist in the session, which will cause errors.
Right now, I can do one of two things.
getHibernateTemplate().update( obj )
This works only if an object with the same identifier doesn't already exist in the Hibernate session; otherwise, exceptions are thrown later stating that an object with the given identifier already exists in the session.
getHibernateTemplate().merge( obj )
This works only when a matching object already exists in the Hibernate session. If I use this, exceptions are thrown later when I need the object to be attached to a session, because the instance I passed in never becomes attached.
Given these two scenarios, how can I generically attach objects to a session? I don't want to use exceptions to control the flow of this problem's solution, as there must be a more elegant solution...
So it seems that there is no way to reattach a stale detached entity in JPA.
merge() will push the stale state to the DB and overwrite any intervening updates. refresh() cannot be called on a detached entity. lock() cannot be called on a detached entity, and even if it could, and it did reattach the entity, calling lock with LockMode.NONE, implying that you are locking but not locking, is the most counterintuitive piece of API design I've ever seen.
So you are stuck.
There's a detach() method, but no attach() or reattach(). An obvious step in the object lifecycle is not available to you.
Judging by the number of similar questions about JPA, it seems that even if JPA does claim to have a coherent model, it most certainly does not match the mental model of most programmers, who have been cursed to waste many hours trying to understand how to get JPA to do the simplest things, and who end up with cache management code all over their applications.
It seems the only way to do it is to discard your stale detached entity and do a find query with the same id, which will hit the L2 cache or the DB.
Mik
All of these answers miss an important distinction. update() is used to (re)attach your object graph to a Session. The objects you pass it are the ones that are made managed.
merge() is actually not a (re)attachment API. Notice merge() has a return value? That's because it returns you the managed graph, which may not be the graph you passed it. merge() is a JPA API and its behavior is governed by the JPA spec. If the object you pass in to merge() is already managed (already associated with the Session) then that's the graph Hibernate works with; the object passed in is the same object returned from merge(). If, however, the object you pass into merge() is detached, Hibernate creates a new object graph that is managed and it copies the state from your detached graph onto the new managed graph. Again, this is all dictated and governed by the JPA spec.
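A rough sketch of that distinction (MyEntity and detachedEntity are placeholder names, and the Session is assumed to be open):
// Alternative 1 - merge(): the returned instance is the managed one; the instance
// you pass in is not attached by the call (unless it was already managed).
MyEntity managed = (MyEntity) session.merge(detachedEntity);
// Alternative 2 - update(): no return value; the instance you pass in is itself
// (re)attached, but it fails if another instance with the same identifier is
// already associated with the Session.
// session.update(detachedEntity);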
In terms of a generic strategy for "make sure this entity is managed, or make it managed", it kind of depends on if you want to account for not-yet-inserted data as well. Assuming you do, use something like
if ( session.contains( myEntity ) ) {
// nothing to do... myEntity is already associated with the session
}
else {
session.saveOrUpdate( myEntity );
}
Notice I used saveOrUpdate() rather than update(). If you do not want not-yet-inserted data handled here, use update() instead...
Entity states
JPA defines the following entity states:
New (Transient)
A newly created object that hasn’t ever been associated with a Hibernate Session (a.k.a Persistence Context) and is not mapped to any database table row is considered to be in the New (Transient) state.
To become persisted we need to either explicitly call the EntityManager#persist method or make use of the transitive persistence mechanism.
Persistent (Managed)
A persistent entity has been associated with a database table row and it’s being managed by the currently running Persistence Context. Any change made to such an entity is going to be detected and propagated to the database (during the Session flush-time).
With Hibernate, we no longer have to execute INSERT/UPDATE/DELETE statements. Hibernate employs a transactional write-behind working style and changes are synchronized at the very last responsible moment, during the current Session flush-time.
Detached
Once the currently running Persistence Context is closed all the previously managed entities become detached. Successive changes will no longer be tracked and no automatic database synchronization is going to happen.
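As a brief illustration of these states, here is a minimal sketch assuming a Book entity mapped as in the examples below and an EntityManagerFactory named emf:
EntityManager entityManager = emf.createEntityManager();
entityManager.getTransaction().begin();
Book book = entityManager.find(Book.class, 1L); // Persistent (Managed)
book.setTitle("New title");                     // no explicit UPDATE call needed:
                                                // dirty checking schedules it at flush time
entityManager.getTransaction().commit();
entityManager.close();                          // the Persistence Context is closed
book.setAuthor("Someone else");                 // book is now Detached: this change
                                                // is no longer tracked or synchronized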
Entity state transitions
You can change the entity state using various methods defined by the EntityManager interface.
To understand the JPA entity state transitions better, consider the following diagram:
When using JPA, to reassociate a detached entity to an active EntityManager, you can use the merge operation.
When using the native Hibernate API, apart from merge, you can reattach a detached entity to an active Hibernate Session using the update method, as demonstrated by the following diagram:
Merging a detached entity
The merge is going to copy the detached entity state (source) to a managed entity instance (destination).
Consider that we have persisted the following Book entity, and that the entity is now detached because the EntityManager used to persist it has been closed:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
return book;
});
While the entity is in the detached state, we modify it as follows:
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
Now, we want to propagate the changes to the database, so we can call the merge method:
doInJPA(entityManager -> {
Book book = entityManager.merge(_book);
LOGGER.info("Merging the Book entity");
assertFalse(book == _book);
});
And Hibernate is going to execute the following SQL statements:
SELECT
b.id,
b.author AS author2_0_,
b.isbn AS isbn3_0_,
b.title AS title4_0_
FROM
book b
WHERE
b.id = 1
-- Merging the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
If the merging entity has no equivalent in the current EntityManager, a fresh entity snapshot will be fetched from the database.
Once there is a managed entity, JPA copies the state of the detached entity onto the one that is currently managed, and during the Persistence Context flush, an UPDATE will be generated if the dirty checking mechanism finds that the managed entity has changed.
So, when using merge, the detached object instance will remain detached even after the merge operation.
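A small sketch of that last point, reusing the doInJPA helper and the _book instance from the example above:
doInJPA(entityManager -> {
    Book managedCopy = entityManager.merge(_book);
    assertFalse(entityManager.contains(_book));      // the original instance stays detached
    assertTrue(entityManager.contains(managedCopy)); // only the copy returned by merge is managed
});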
Reattaching a detached entity
Hibernate, but not JPA, supports reattaching through the update method.
A Hibernate Session can only associate one entity object for a given database row. This is because the Persistence Context acts as an in-memory cache (first level cache) and only one value (entity) is associated with a given key (entity type and database identifier).
An entity can be reattached only if there is no other JVM object (matching the same database row) already associated with the current Hibernate Session.
Considering we have persisted the Book entity and that we modified it when the Book entity was in the detached state:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
return book;
});
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
We can reattach the detached entity like this:
doInJPA(entityManager -> {
Session session = entityManager.unwrap(Session.class);
session.update(_book);
LOGGER.info("Updating the Book entity");
});
And Hibernate will execute the following SQL statement:
-- Updating the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
The update method requires you to unwrap the EntityManager to a Hibernate Session.
Unlike merge, the provided detached entity is going to be reassociated with the current Persistence Context, and an UPDATE is scheduled during flush whether the entity has been modified or not.
To prevent this, you can use the @SelectBeforeUpdate Hibernate annotation, which triggers a SELECT statement that fetches the loaded state, which is then used by the dirty checking mechanism.
#Entity(name = "Book")
#Table(name = "book")
#SelectBeforeUpdate
public class Book {
//Code omitted for brevity
}
Beware of the NonUniqueObjectException
One problem that can occur with update is if the Persistence Context already contains an entity reference with the same id and of the same type as in the following example:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(book);
return book;
});
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
try {
doInJPA(entityManager -> {
Book book = entityManager.find(
Book.class,
_book.getId()
);
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(_book);
});
} catch (NonUniqueObjectException e) {
LOGGER.error(
"The Persistence Context cannot hold " +
"two representations of the same entity",
e
);
}
Now, when executing the test case above, Hibernate is going to throw a NonUniqueObjectException because the second EntityManager already contains a Book entity with the same identifier as the one we pass to update, and the Persistence Context cannot hold two representations of the same entity.
org.hibernate.NonUniqueObjectException:
A different object with the same identifier value was already associated with the session : [com.vladmihalcea.book.hpjp.hibernate.pc.Book#1]
at org.hibernate.engine.internal.StatefulPersistenceContext.checkUniqueness(StatefulPersistenceContext.java:651)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performUpdate(DefaultSaveOrUpdateEventListener.java:284)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.entityIsDetached(DefaultSaveOrUpdateEventListener.java:227)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:92)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:73)
at org.hibernate.internal.SessionImpl.fireSaveOrUpdate(SessionImpl.java:682)
at org.hibernate.internal.SessionImpl.saveOrUpdate(SessionImpl.java:674)
Conclusion
The merge method is to be preferred if you are using optimistic locking as it allows you to prevent lost updates.
The update method is good for batch processing, as it avoids the additional SELECT statement generated by the merge operation, therefore reducing the batch update execution time.
Undiplomatic answer: You're probably looking for an extended persistence context. This is one of the main reasons behind the Seam Framework... If you're struggling to use Hibernate in Spring in particular, check out this piece of Seam's docs.
Diplomatic answer: This is described in the Hibernate docs. If you need more clarification, have a look at Section 9.3.2 of Java Persistence with Hibernate called "Working with Detached Objects." I'd strongly recommend you get this book if you're doing anything more than CRUD with Hibernate.
If you are sure that your entity has not been modified (or if you agree any modification will be lost), then you may reattach it to the session with lock.
session.lock(entity, LockMode.NONE);
It will lock nothing, but it will get the entity from the session cache or (if not found there) read it from the DB.
It's very useful for preventing a LazyInitializationException when you are navigating relations from an "old" entity (one coming from the HttpSession, for example). You first "re-attach" the entity.
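For example, a sketch of that re-attach-then-navigate pattern, assuming a hypothetical Order entity with a lazily loaded items collection:
// 'order' comes from the HttpSession and is currently detached
session.lock(order, LockMode.NONE);      // re-associates it (from the session cache or the DB)
int itemCount = order.getItems().size(); // the lazy collection can now be initialized
                                         // without a LazyInitializationException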
Using get may also work, except when your entity uses mapped inheritance (in which case getId() will already throw an exception).
entity = session.get(entity.getClass(), entity.getId());
I went back to the JavaDoc for org.hibernate.Session and found the following:
Transient instances may be made persistent by calling save(), persist() or
saveOrUpdate(). Persistent instances may be made transient by calling delete(). Any instance returned by a get() or load() method is persistent. Detached instances may be made persistent by calling update(), saveOrUpdate(), lock() or replicate(). The state of a transient or detached instance may also be made persistent as a new persistent instance by calling merge().
Thus update(), saveOrUpdate(), lock(), replicate() and merge() are the candidate options.
update(): Will throw an exception if there is a persistent instance with the same identifier.
saveOrUpdate(): Either save or update
lock(): Deprecated
replicate(): Persist the state of the given detached instance, reusing the current identifier value.
merge(): Returns a persistent object with the same identifier. The given instance does not become associated with the session.
Hence, lock() should not be used straight away, and one or more of the others can be chosen based on the functional requirement.
I did it that way in C# with NHibernate, but it should work the same way in Java:
public virtual void Attach()
{
if (!HibernateSessionManager.Instance.GetSession().Contains(this))
{
ISession session = HibernateSessionManager.Instance.GetSession();
using (ITransaction t = session.BeginTransaction())
{
session.Lock(this, NHibernate.LockMode.None);
t.Commit();
}
}
}
At first, Lock was called on every object because Contains was always false. The problem is that NHibernate compares objects by database id and type, while Contains uses the Equals method, which compares by reference if it's not overridden. With the following Equals method it works without any exceptions:
public override bool Equals(object obj)
{
if (this == obj) {
return true;
}
if (obj == null || GetType() != obj.GetType()) {
return false;
}
if (Id != ((BaseObject)obj).Id)
{
return false;
}
return true;
}
Session.contains(Object obj) checks the reference and will not detect a different instance that represents the same row and is already attached to it.
Here is my generic solution for entities with an identifier property.
public static void update(final Session session, final Object entity)
{
// if the given instance is in session, nothing to do
if (session.contains(entity))
return;
// check if there is already a different attached instance representing the same row
final ClassMetadata classMetadata = session.getSessionFactory().getClassMetadata(entity.getClass());
final Serializable identifier = classMetadata.getIdentifier(entity, (SessionImplementor) session);
final Object sessionEntity = session.load(entity.getClass(), identifier);
// override changes, last call to update wins
if (sessionEntity != null)
session.evict(sessionEntity);
session.update(entity);
}
This is one of the few aspects of .NET Entity Framework I like: the different attach options regarding changed entities and their properties.
I came up with a solution to "refresh" an object from the persistence store that will account for other objects which may already be attached to the session:
public void refreshDetached(T entity, Long id)
{
// Check for any OTHER instances already attached to the session since
// refresh will not work if there are any.
T attached = (T) session.load(getPersistentClass(), id);
if (attached != entity)
{
session.evict(attached);
session.lock(entity, LockMode.NONE);
}
session.refresh(entity);
}
Sorry, cannot seem to add comments (yet?).
Using Hibernate 3.5.0-Final
Whereas the Session#lock method is deprecated, the javadoc does suggest using Session#buildLockRequest(LockOptions).lock(entity), and if you make sure your associations have cascade=lock, lazy loading isn't an issue either.
So, my attach method looks a bit like
MyEntity attach(MyEntity entity) {
if (getSession().contains(entity)) return entity;
getSession().buildLockRequest(LockOptions.NONE).lock(entity);
return entity;
}
Initial tests suggest it works a treat.
Perhaps it behaves slightly differently on EclipseLink. To re-attach detached objects without getting stale data, I usually do:
Object obj = em.find(obj.getClass(), id);
and, as an optional second step (to get caches invalidated):
em.refresh(obj)
try getHibernateTemplate().replicate(entity,ReplicationMode.LATEST_VERSION)
In the original post, there are two methods, update(obj) and merge(obj) that are mentioned to work, but in opposite circumstances. If this is really true, then why not test to see if the object is already in the session first, and then call update(obj) if it is, otherwise call merge(obj).
The test for existence in the session is session.contains(obj). Therefore, I would think the following pseudo-code would work:
if (session.contains(obj))
{
session.update(obj);
}
else
{
session.merge(obj);
}
To reattach this object, you must use merge(). This method accepts your detached entity as a parameter and returns an entity that will be attached and reloaded from the database.
Example :
Lot objAttach = em.merge(oldObjDetached);
objAttach.setEtat(...);
em.persist(objAttach);
Calling merge() first (to update the persistent instance), then lock(LockMode.NONE) (to attach the current instance, not the one returned by merge()) seems to work for some use cases.
The hibernate.allow_refresh_detached_entity property did the trick for me. But it is a global setting, so it is not very suitable if you only want this behavior in some cases. I hope it helps.
Tested on Hibernate 5.4.9; see SessionFactoryOptionsBuilder.
try getHibernateTemplate().saveOrUpdate()
Related
I wonder whether it is viable to associate an entity with a child entity not by using a proxy object, but by creating a new object and setting its Id manually. Like this:
@Transactional
public void save(@NonNull String name, @NonNull Long roleId) {
User user = new User();
user.setName(name);
Role role = new Role(); role.setRoleId(roleId);
// Instead of:
// roleRepository.getOne(roleId);
user.setRole(role);
userRepository.save(user);
}
I know that the accepted and well-documented way to do it is by calling something like:
em.getReference(Role.class, roleId) ;
or if use Spring Data
roleRepository.getOne(roleId);
or the Hibernate-ish way:
session.load(Role.class, roleId)
So the question is: what bad consequences can one face when cheating the JPA provider this way and using a new object with the Id set manually? Note that the only reason to call getOne() is to associate a newly created entity with an existing one. The Role mock object is not managed, so there is no fear of losing any data; it simply does its job of connecting the two entities.
From the Hibernate documentation:
getReference() obtains a reference to the entity. The state may or may
not be initialized. If the entity is already associated with the
current running Session, that reference (loaded or not) is returned.
If the entity is not loaded in the current Session and the entity
supports proxy generation, an uninitialized proxy is generated and
returned, otherwise the entity is loaded from the database and
returned.
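A short sketch of that behavior, assuming the Role entity from the question also has a name attribute (the exact point at which the proxy gets initialized can vary by provider and mapping):
Role roleRef = entityManager.getReference(Role.class, roleId);
// typically no SELECT yet: just an uninitialized proxy that knows its id
Long id = roleRef.getRoleId();   // usually still no SELECT, the id is held by the proxy
String name = roleRef.getName(); // first access to another attribute initializes the proxy,
                                 // hitting the database (or failing if the row does not exist)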
So after testing I found that getOne() basically does not even hit the database to check the presence of the ID, and save() would fail at commit if the FK constraint is violated. It just requires an additional dependency to auto-wire (RoleRepository).
So why should I have the proxy fetched by invoking getOne() instead of the mock object created with new, if my case is as simple as this one? What may go wrong with this approach, and when?
Thank you for clarifying things.
EDIT:
Hibernate/JPA, save a new entity while only setting id on @OneToOne association
This related topic doesn't answer the question. I am asking why calling JPA's getReference() API is better, and what may go wrong if I adopt this practice of creating new "mock" objects with a given Id using the new operator.
I have a custom EmptyInterceptor that I use to set information on the creation date, last modification date, created by user, and last modified user by overriding onSave and onFlushDirty.
This system has worked pretty well for us but we just found an issue where we have code that is blindly calling the setters on an entity using some input data. In this case, even if the data has not changed, hibernate will fire our custom interceptor and we'll set the last modified user and date. This causes Hibernate to update the entity in the database. We know this because we have an insert and update trigger on this table. If we disable the interceptor, Hibernate will not update the entity in the database.
We'd like to avoid setting the user and date if the entity hasn't really changed so no update happens in those situations. I've read about the way Hibernate does its dirty entity checking and I have the previousState and currentState arrays passed into onFlushDirty and I can loop through to do this check myself. Is there any easier way to do that?
I looked at HibernateSession.isDirty() but that doesn't tell me if this particular entity has changed, just if there's a changed entity in the session.
UPDATE
It turns out that the offending code blindly calling setters was not the issue. It was the offending code setting a Collection of child objects instead of modifying the Collection that was already there. In this case, Hibernate thinks the entity has changed - or at least thinks so enough to call the Interceptor.
From the design perspective, this check should really be done on the front-end/client side. If the front-end determines that the record is modified by the user, then it should submit an update to the server.
If you want to do this in the middle tier (server-side), then you should think about the lifecycle of a Hibernate entity: Transient, Persistent, Detached, Removed, and also think about how session.save() is different from session.merge() and session.saveOrUpdate().
Furthermore, you also have to consider the different design patterns when managing session, such as "session-per-operation" anti-pattern, session-per-request, or session-per-conversation patterns, etc...
If you have an open session and your entity is already in the session (from another operation), then Hibernate can in fact do the dirty checking for you. But if you have a detached entity that doesn't exist in the session and you do a merge, Hibernate will first fetch that entity from the data store (database) by issuing a SELECT and put the managed entity in the persistence context. It then merges the two entities (the one you provide and the one it fetched into the persistence context) and checks whether anything has changed, hence dirty checking.
In your case, since you want to exclude the last-modified-user name and time from dirty checking, you might as well get the entity by ID (since you presumably have the ID on the detached entity) and then use equals() or hashCode() to do your own version of dirty checking, and if they are not equal, then call merge.
You may think this is an extra trip to the DB, but it's not: even in the normal case, Hibernate still makes that trip if the entity is not already in the persistence context (i.e. the session), and if it is already there and you do a get-by-id, Hibernate will just return what it already has in the session and won't hit the DB.
This argument doesn't apply to saveOrUpdate, in which case, Hibernate will simply push the updates to the DB without dirty checking (if the entity is not already in the session), and if it is already in the session, it will throw an exception saying that the entity is already in the session.
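A sketch of that get-by-id approach, assuming the detached entity exposes its id and has a meaningful equals() implementation (MyEntity and detached are placeholder names):
MyEntity current = session.get(MyEntity.class, detached.getId()); // served from the session if already loaded
if (current != null && current.equals(detached)) {
    // nothing really changed: skip the merge, so no UPDATE (and no interceptor call) happens
} else {
    session.merge(detached); // copies the detached state onto a managed instance
}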
In case anyone needs to solve the same problem without changing the code that causes this issue, I implemented the compare as:
/**
* Called upon entity UPDATE. For our BaseEntity, populates updated by with logged in user ID and
* updated date-time.
* <p>
* Note that this method is called even if an object has NOT changed but someone's called a setter on
* a Collection related to a child object (as opposed to modifying the Collection itself). Because of
* that, the comparisons below are necessary to make sure the entity has really changed before setting
* the update date and user.
*
* @see org.hibernate.EmptyInterceptor#onFlushDirty(java.lang.Object, java.io.Serializable, java.lang.Object[], java.lang.Object[], java.lang.String[], org.hibernate.type.Type[])
*/
@Override
public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState, Object[] previousState,
String[] propertyNames, Type[] types)
{
boolean changed = false;
if (entity instanceof BaseEntity) {
logger.debug("onFlushDirty method called on " + entity.getClass().getCanonicalName());
// Check to see if this entity really changed(see Javadoc above).
boolean reallyChanged = false;
for (int i = 0; i < propertyNames.length; i++) {
// Don't care about the collection types because those can change and we still don't consider
// this object changed when that happens.
if (!(types[i] instanceof CollectionType)) {
boolean equals = Objects.equals(previousState[i], currentState[i]);
if (!equals) {
reallyChanged = true;
break;
}
}
}
if (reallyChanged) {
String userId = somehowGetUserIdForTheUserThatMadeTheRequest();
Date dateTimeStamp = new Date();
// Locate the correct field and update it.
for (int i = 0; i < propertyNames.length; i++) {
if (UPDATE_BY_FIELD_NAME.equals(propertyNames[i])) {
currentState[i] = userId;
changed = true;
}
if (UPDATE_DATE_FIELD_NAME.equals(propertyNames[i])) {
currentState[i] = dateTimeStamp;
changed = true;
}
}
}
}
return changed;
}
I'm working on a legacy code base that uses JPA (not JPA-2), and have come across the following method in a DAO implementation class to retrieve a single entity by ID (which is also its primary key):
public EmailTemplate findEmailTemplateById(long id) {
LOG.debug("Entering findEmailTemplateById(id='" + id + "')");
// Construct JPQL query
String queryString = "SELECT a FROM EmailTemplate a " +
"WHERE templateId = :templateId";
Query query = entityManager.createQuery(queryString);
query.setParameter("templateId", id);
LOG.debug("Using query " + queryString);
List<EmailTemplate> resultList = query.getResultList();
LOG.debug("Exiting findEmailTemplateByName(id='" + id + "') results size " + resultList.size() + " ( returns null if 0 )");
if (resultList.isEmpty() || resultList.size() == 0) {
return null;
} else {
return resultList.get(0);
}
}
I now need to write a similar DAO class for a different entity, and my method to find the entity by its primary key looks a lot simpler:
@Override
public EmailTemplateEdit findEmailTemplateEditById(long id) {
LOG.debug("Entering findEmailTemplateEditById(id={})", id);
return entityManager.find(EmailTemplateEdit.class, id);
}
The original author is not around to ask, so I'm wondering if anyone can suggest reasons as to why he constructed a JPQL query rather than simply using EntityManager#find(Class<T> entityClass, Object primaryKey)?
The javadoc for the find method says:
If the entity instance is contained in the persistence context, it is
returned from there.
which suggests some form of caching and/or delayed writes. The javadocs for the createQuery and getResultList methods don't say anything like this.
I am unaware of any business or technical requirement in this application that would preclude caching, or of any issues resulting from stale entities or similar. I will check these with the rest of the project team when available, but I just thought I'd canvass the opinion of the SO community to see if there might be other reasons why a query was constructed and executed instead of simply using find.
(I've seen this: When use createQuery() and find() methods of EntityManager?. Whilst it answers the question re: difference between createQuery and find, it doesn't answer it in context of finding entities by primary key)
Updated with additional info
From looking at the other methods in the original DAO class, it looks like there has been a deliberate/conscious decision not to take advantage of JPA managed objects. As above, the method to find by primary key uses a JPQL query. The method to delete an entity also uses a JPQL query. And the method to update an entity makes a copy of the passed-in entity object and calls EntityManager#merge with the copy (the managed object returned by merge is never used or returned from the method).
Weird ....
Short answer: there is no difference between find and a select query.
Your question suggests that you are not entirely familiar with what an EntityManager and a Persistence Context are.
EntityManager implementations are not required to be thread safe. If the EntityManager is injected by Spring or an EJB container, it is thread safe (because it is a thread-local proxy); if it is application managed (you created it by calling EntityManagerFactory.createEntityManager()), it is not thread safe, and you can't store it in a field but have to create a new one every time.
The Persistence Context is where entities live: whenever you create a new EntityManager you get a new Persistence Context (there are exceptions to this rule). When you persist an entity, or load an existing entity from the DB (using find or a query), it will be managed by the Persistence Context. When you commit a transaction, JPA runs through ALL entities managed by the Persistence Context and checks the state of each entity to find out which statements should be sent to the database.
The Persistence Context can be seen as a first-level cache on top of the database. It is meant to have a short lifespan, typically no longer than the transaction. If you re-use the same EntityManager for multiple transactions, its size could grow as more data is loaded; this is bad because every transaction has to run through all entities in the Persistence Context.
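A minimal sketch of that short-lived, application-managed usage (MyEntity and its setter are placeholders; emf is an EntityManagerFactory):
EntityManager em = emf.createEntityManager();       // new EntityManager => new Persistence Context
try {
    em.getTransaction().begin();
    MyEntity entity = em.find(MyEntity.class, 1L);  // now managed by this Persistence Context
    entity.setName("updated");                      // tracked by dirty checking
    em.getTransaction().commit();                   // flush: the necessary UPDATE is sent
} finally {
    em.close();                                     // context discarded; 'entity' is now detached
}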
The title may not exactly reflect my question, but I don't know how to express it better. I have a JPA entity (VatOperatorBalance, which has a field saleBalance). Let's say I retrieve the entity for the first time and get an instance (VatOperatorBalance#3d6396f5) whose saleBalance is 100.0. Now other operations modify the saleBalance to 200, and when I query the database again I get a new instance (VatOperatorBalance#10f8ed) whose saleBalance is, as expected, 200.0. However, what confuses me is that the saleBalance of the old instance (VatOperatorBalance#3d6396f5) is also 200.0.
All these queries and operations are in a single transaction, and the query doesn't use EntityManager.find(java.lang.Class<T> entityClass, java.lang.Object primaryKey), which would return the entity from the cache.
Below is my code
@Rollback(true)
@Test
public void testSale_SingleBet_OK() throws Exception {
// prepare request ...
// query the VatOperatorBalance first
VatOperatorBalance oldBalance = this.getVatOperatorBalanceDao().findByOperator("OPERATOR-111");
//this.entityManager.detach(oldBalance);
logger.debug("------ oldBalance(" + oldBalance + ").");
// the operation which will modify the oldBalance
Context saleReqCtx = this.getDefaultContext(TransactionType.SELL_TICKET.getRequestType(),
clientTicket);
saleReqCtx.setGameTypeId(GameType.VAT.getType() + "");
Context saleRespCtx = doPost(this.mockRequest(saleReqCtx));
RaffleTicket respTicket = (RaffleTicket) saleRespCtx.getModel();
this.entityManager.flush();
this.entityManager.clear();
// assert vat sale balance
VatOperatorBalance newBalance = this.getVatOperatorBalanceDao().findByOperator("OPERATOR-111");
logger.debug("------ newBalance(" + newBalance + ").");
assertEquals(oldBalance.getSaleBalance().add(respTicket.getTotalAmount()).doubleValue(), newBalance
.getSaleBalance().doubleValue(), 0);
}
This test case fails, and I don't understand why. Will the JPA entity manager update all entities of the same entity type? The oldBalance entity and the newBalance entity have the same entityId, but they are different Java instances; what happened inside the JPA entity manager? If I detach the oldBalance entity from the EntityManager, the test case passes.
Note: my test is using Spring 4.0.5 and JPA 2.1.
@piet.t Since the entityManager would recognize it is the same entity by its primary key (feel free to try it), all changes made to this entity through the same entityManager will affect the same Java instance.
So in an entity manager, for a given entity type and primary key, there should be only one Java instance or managed entity (if you query from the entity manager, no matter what the query criteria are, by id or not, the same Java instance (managed entity) will be returned).
However, in my test case the entity 'oldBalance' is updated by "the operation which will modify the oldBalance", and then the call to entityManager.clear() detaches all entities managed by this entity manager, which means 'oldBalance' is detached too.
And 'newBalance' is then the managed entity, which is why they have different Java instance identities. If 'oldBalance' were managed, for example by calling entityManager.merge(), it would be the same instance as 'newBalance'.
I think most of your confusion does arise from the flush()-call in your code.
Calling flush will always store the changed values to the database - that's the whole point of calling flush. When using transactions, the changed values might still not be visible to other connections due to the database's transaction mechanism, but your entityManager will only see the changed values.
Without the clear-call, your query - even though it is not using find - would still return the same instance that was previously created (VatOperatorBalance#3d6396f5), since the entityManager would recognize it is the same entity by its primary key (feel free to try it). So all changes made to this entity through the same entityManager will affect the same Java instance, while modifications through another entity manager will most likely cause an exception because the entity was updated from another transaction.
Some queries might cause an implicit flush, since the cached changes might influence the query-result, so all changes have to be written to the database before executing the query to get a correct result-set.
I hope that does help a bit.
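A condensed sketch of the identity behavior described above, using the same DAO lookup as in the test (assertSame/assertNotSame are the usual JUnit assertions):
VatOperatorBalance first = this.getVatOperatorBalanceDao().findByOperator("OPERATOR-111");
VatOperatorBalance second = this.getVatOperatorBalanceDao().findByOperator("OPERATOR-111");
assertSame(first, second);      // same Persistence Context + same row => same Java instance,
                                // even though the lookup is a query rather than a find
this.entityManager.flush();
this.entityManager.clear();     // detaches everything, including 'first'
VatOperatorBalance third = this.getVatOperatorBalanceDao().findByOperator("OPERATOR-111");
assertNotSame(first, third);    // a new managed instance; 'first' no longer tracks changes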
What would be the easiest way to detach a specific JPA Entity Bean that was acquired through an EntityManager. Alternatively, could I have a query return detached objects in the first place so they would essentially act as 'read only'?
The reason why I want to do this is because I want to modify the data within the bean - within my application only, but not ever have it persisted to the database. In my program, I eventually have to call flush() on the EntityManager, which would persist all changes from attached entities to the underlying database, but I want to exclude specific objects.
(may be too late to answer, but can be useful for others)
I'm developing my first system with JPA right now. Unfortunately I'm faced with this problem when this system is almost complete.
Simply put: use Hibernate, or wait for JPA 2.0.
In Hibernate, you can use 'session.evict(object)' to remove one object from session. In JPA 2.0, in draft right now, there is the 'EntityManager.detach(object)' method to detach one object from persistence context.
No matter which JPA implementation you use, Just use entityManager.detach(object) it's now in JPA 2.0 and part of JEE6.
If you need to detach an object from the EntityManager and you are using Hibernate as your underlying ORM layer you can get access to the Hibernate Session object and use the Session.evict(Object) method that Mauricio Kanada mentioned above.
public void detach(Object entity) {
org.hibernate.Session session = (Session) entityManager.getDelegate();
session.evict(entity);
}
Of course this would break if you switched to another ORM provider, but I think it is preferable to trying to make a deep copy.
Unfortunately, there's no way to disconnect one object from the entity manager in the current JPA implementation, AFAIR.
EntityManager.clear() will disconnect all the JPA objects, so that might not be an appropriate solution in all the cases, if you have other objects you do plan to keep connected.
So your best bet would be to clone the objects and pass the clones to the code that changes the objects. Since primitive and immutable object fields are taken care of by the default cloning mechanism in a proper way, you won't have to write a lot of plumbing code (apart from deep cloning any aggregated structures you might have).
As far as I know, the only direct ways to do it are:
Commit the txn - Probably not a reasonable option
Clear the Persistence Context - EntityManager.clear() - This is brutal, but would clear it out
Copy the object - Most of the time your JPA objects are serializable, so this should be easy (if not particularly efficient).
If using EclipseLink you also have the options,
Use the query hint "eclipselink.maintain-cache"="false" - all returned objects will be detached.
Use the EclipseLink JpaEntityManager copy() API to copy the object to the desired depth.
If there aren't too many properties in the bean, you might just create a new instance and set all of its properties manually from the persisted bean.
This could be implemented as a copy constructor, for example:
public Thing(Thing oldBean) {
this.setPropertyOne(oldBean.getPropertyOne());
// and so on
}
Then:
Thing newBean = new Thing(oldBean);
This is quick and dirty, but you can also serialize and deserialize the object.
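For example, a minimal (and not particularly efficient) sketch of that serialization round-trip, assuming the entity graph implements Serializable (MyEntity and managedEntity are placeholders):
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
    out.writeObject(managedEntity);            // writes the whole reachable object graph
}
MyEntity detachedCopy;
try (ObjectInputStream in = new ObjectInputStream(
        new ByteArrayInputStream(buffer.toByteArray()))) {
    detachedCopy = (MyEntity) in.readObject(); // a deep copy the EntityManager knows nothing about
}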
Since I am using Seam and JPA 1.0, and my system has a functionality that needs to log all field changes, I have created a value object (data transfer object) with the same fields as the entity that needs to be logged. The constructor of the new POJO is:
public DocumentoAntigoDTO(Documento documentoAtual) {
Method[] metodosDocumento = Documento.class.getMethods();
for(Method metodo:metodosDocumento){
if(metodo.getName().contains("get")){
try {
Object resultadoInvoke = metodo.invoke(documentoAtual,null);
Method[] metodosDocumentoAntigo = DocumentoAntigoDTO.class.getMethods();
for(Method metodoAntigo : metodosDocumentoAntigo){
String metodSetName = "set" + metodo.getName().substring(3);
if(metodoAntigo.getName().equals(metodSetName)){
metodoAntigo.invoke(this, resultadoInvoke);
}
}
} catch (IllegalArgumentException e) {
e.printStackTrace();
} catch (IllegalAccessException e) {
e.printStackTrace();
} catch (InvocationTargetException e) {
e.printStackTrace();
}
}
}
}
In JPA 1.0 (tested using EclipseLink) you could retrieve the entity outside of a transaction. For example, with container managed transactions you could do:
public MyEntity myMethod(long id) {
final MyEntity myEntity = retrieve(id);
// myEntity is detached here
return myEntity;
}
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public MyEntity retrieve(long id) {
return entityManager.find(MyEntity.class, id);
}
To deal with a similar case I have created a DTO object that extends the persistent entity object, as follows:
class MyEntity
{
public static class MyEntityDO extends MyEntity {}
}
Finally, a scalar query will retrieve the desired non-managed attributes:
(Hibernate) select p.id, p.name from MyEntity p
(JPA) select new MyEntity(p.id, p.name) from MyEntity p
If you got here because you actually want to pass an entity across a remote boundary, then you just put some code in to fool Hibernate:
// touch the lazy collection so it is initialized before the session closes
for (RssItem i : result.getChannel().getItem()) {
}
Cloneable won't work because it actually copies the PersistentBag across.
And forget about using Serializable with byte-array streams and piped streams; creating threads to avoid deadlocks kills the entire concept.
I think there is a way to evict a single entity from the second-level (shared) cache via the EntityManagerFactory by calling this:
EntityManagerFactory emf;
emf.getCache().evict(Entity.class, entityId); // entityId = primary key of the entity to evict
This will remove the particular entity from that cache.
I'm using entityManager.detach(returnObject); which worked for me.
I think you can also use the method EntityManager.refresh(Object o) if the primary key of the entity has not been changed. This method will restore the original state of the entity.