I have a generic JPA repository implementation dealing with many types of entities as follows:
@Component
@Transactional(transactionManager = "ubldbTransactionManager")
public class CatalogueRepositoryImpl {
...
@PersistenceContext(unitName = eu.nimble.utility.Configuration.UBL_PERSISTENCE_UNIT_NAME)
private EntityManager em;
public <T> void deleteEntity(T entity) {
if(!em.contains(entity)) {
entity = em.merge(entity);
}
em.remove(entity);
}
public <T> List<T> getEntities(String queryStr) {
Query query = em.createQuery(queryStr);
List<T> result = query.getResultList();
return result;
}
...
}
At some point I realized that some of the entities have not been deleted. Then, I found out that some managed entities cause cancellation of the removal, as described at: https://stackoverflow.com/a/16901857/502059
Since the method is generic, it deals with various types of entities. As a workaround, I wanted to get rid of the entities causing the cancellation of deletion, so I added em.flush() and em.clear() at the beginning of the deleteEntity method. Although this works, it feels like a dirty workaround.
So, I'm asking for best practices for such a case. For example, would creating a new EntityManager in deleteEntity be an alternative? I'd rather not do that, since I want Spring to manage the scope of EntityManagers and transactions.
One last point about the Spring-managed EntityManager: I also wonder whether the em in the example, managed by Spring, is application-scoped. If so, wouldn't it keep all the retrieved entities and grow continuously?
If you're using Hibernate, you don't need to do the merge prior to deleting the entity. That's only a JPA requirement, but Hibernate is more lenient in that regard.
You could also do something like:
entity = em.getReference(entity.getClass(), entity.getId());
em.remove(entity);
If that does not work, it could be because you are not cascading the REMOVE operation to child associations.
You can make the T argument extend an Identifiable interface which defines the getId method and have your entities implement this interface so that your method is more generic.
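A minimal sketch of that idea; the Identifiable interface, the Long id type, and the extra Class parameter are assumptions for illustration, not existing code:
// Hypothetical marker interface exposing the identifier.
public interface Identifiable {
    Long getId();
}

// Generic delete working on a reference loaded by id, so only the identifier
// of the passed (possibly detached) entity is used.
public <T extends Identifiable> void deleteEntity(Class<T> entityClass, T entity) {
    T reference = em.getReference(entityClass, entity.getId());
    em.remove(reference);
}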
I would like to hear from experts on best practices for editing JPA entities from a JSF UI.
So, a couple of words about the problem.
Imagine I have the persisted object MyEntity and I fetch it for editing. In the DAO layer I use
return em.find(MyEntity.class, id);
which returns a MyEntity instance with proxies on "parent" entities - imagine one of them is MyParent. MyParent is fetched as a proxy thanks to @Access(AccessType.PROPERTY):
@Entity
public class MyParent {
@Id
@Access(AccessType.PROPERTY)
private Long id;
//...
}
and MyEntity has the reference to it:
@ManyToOne(fetch = FetchType.LAZY)
@LazyToOne(LazyToOneOption.PROXY)
private MyParent myParent;
So far so good. In the UI I use the fetched object directly, without creating any value objects, and use the parent object in a select list:
<h:selectOneMenu value="#{myEntity.myParent.id}" id="office">
<f:selectItems value="#{parents}"/>
</h:selectOneMenu>
Everything is rendered fine and no LazyInitializationException occurs. But when I save the object I receive the
LazyInitializationException: could not initialize proxy - no Session
on the MyParent proxy's setId() method.
I can easily fix the problem if I change the MyParent relation to EAGER
@ManyToOne(fetch = FetchType.EAGER)
private MyParent myParent;
or fetch the object using left join fetch p.myParent (actually that's how I do it now). In this case the save operation works fine and the relation is changed to the new MyParent object transparently. No additional actions (manual copies, manual reference setting) are needed. Very simple and convenient.
BUT: if the object references 10 other objects, the em.find() will result in 10 additional joins, which isn't good for the database, especially when I don't use the referenced objects' state at all. All I need are links to the objects, not their state.
This is a general issue. I would like to know how JSF specialists deal with JPA entities in their applications and which strategy is best for avoiding both the extra joins and the LazyInitializationException.
Extended persistence context isn't ok for me.
Thanks!
You should provide exactly the model the view expects.
If the JPA entity happens to match exactly the needed model, then just use it right away.
If the JPA entity happens to have too few or too many properties, then use a DTO (subclass) and/or a constructor expression with a more specific JPQL query, if necessary with an explicit FETCH JOIN, or perhaps with Hibernate specific fetch profiles or EclipseLink specific attribute groups. Otherwise, it may either cause lazy initialization exceptions all over the place, or consume more memory than necessary.
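For illustration, a constructor expression could look like this; the UserListItem DTO, its fields, and the package name are made up here and not part of the answer:
// Hypothetical DTO with exactly the fields the view renders.
public class UserListItem {

    private final Long id;
    private final String name;

    public UserListItem(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}
The query then selects only those columns, so no entity proxies reach the view model:
// JPQL constructor expression: only id and name are selected.
List<UserListItem> items = em.createQuery(
        "SELECT NEW com.example.UserListItem(u.id, u.name) FROM User u",
        UserListItem.class)
    .getResultList();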
The "open session in view" pattern is a poor design. You're basically keeping a single DB transaction open during the entire HTTP request-response processing. Control over whether to start a new DB transaction or not is completely taken away from you. You cannot spawn multiple transactions during the same HTTP request when the business logic requires so. Keep in mind that when a single query fails during a transaction, then the entire transaction is rolled back. See also When is it necessary or convenient to use Spring or EJB3 or all of them together?
In JSF perspective, the "open session in view" pattern also implies that it's possible to perform business logic while rendering the response. This doesn't go very well together with among others exception handling whereby the intent is to show a custom error page to the enduser. If a business exception is thrown halfway rendering the response, whereby the enduser has thus already received the response headers and a part of the HTML, then the server cannot clear out the response anymore in order to show a nice error page. Also, performing business logic in getter methods is a frowned upon practice in JSF as per Why JSF calls getters multiple times.
Just prepare exactly the model the view needs via usual service method calls in managed bean action/listener methods, before render response phase starts. For example, a common situation is having an existing (unmanaged) parent entity at hands with a lazy loaded one-to-many children property, and you'd like to render it in the current view via an ajax action, then you should just let the ajax listener method fetch and initialize it in the service layer.
<f:ajax listener="#{bean.showLazyChildren(parent)}" render="children" />
public void showLazyChildren(Parent parent) {
someParentService.fetchLazyChildren(parent);
}
public void fetchLazyChildren(Parent parent) {
parent.setLazyChildren(em.merge(parent).getLazyChildren()); // Becomes managed.
parent.getLazyChildren().size(); // Triggers lazy initialization.
}
Specifically in JSF UISelectMany components, there's another, completely unexpected, probable cause for a LazyInitializationException: during saving the selected items, JSF needs to recreate the underlying collection before filling it with the selected items, however if it happens to be a persistence layer specific lazy loaded collection implementation, then this exception will also be thrown. The solution is to explicitly set the collectionType attribute of the UISelectMany component to the desired "plain" type.
<h:selectManyCheckbox ... collectionType="java.util.ArrayList">
This is in detail asked and answered in org.hibernate.LazyInitializationException at com.sun.faces.renderkit.html_basic.MenuRenderer.convertSelectManyValuesForModel.
See also:
LazyInitializationException in selectManyCheckbox on #ManyToMany(fetch=LAZY)
What is lazy loading in Hibernate?
For Hibernate >= 4.1.6 read this https://stackoverflow.com/a/11913404/3252285
Using the OpenSessionInView filter (design pattern) is very useful, but in my opinion it doesn't solve the problem completely. Here's why:
If we have an entity stored in the session, or handled by a session bean, or retrieved from the cache, and one of its collections was not initialized during the same loading request, then we could get the exception at any time we access it later, even if we use the OSIV design pattern.
Let's detail the problem:
Any Hibernate proxy needs to be attached to an open Session to work correctly.
Hibernate does not offer any tool (listener or handler) to reattach a proxy when its session is closed or when it has been detached from its session.
Why doesn't Hibernate offer that?
Because it's not easy to identify which Session the proxy should be reattached to, but in many cases we could.
So how do we reattach the proxy when the LazyInitializationException happens?
In my ERP, I modified these classes: JavassistLazyInitializer and AbstractPersistentCollection, and since then I've never had to care about this exception (used for 3 years without any bug):
class JavassistLazyInitializer{
@Override
public Object invoke(
final Object proxy,
final Method thisMethod,
final Method proceed,
final Object[] args) throws Throwable {
if ( this.constructed ) {
Object result;
try {
result = this.invoke( thisMethod, args, proxy );
}
catch ( Throwable t ) {
throw new Exception( t.getCause() );
}
if ( result == INVOKE_IMPLEMENTATION ) {
Object target = null;
try{
target = getImplementation();
}catch ( LazyInitializationException lze ) {
/* Catch the LazyInitializationException and reattach the proxy to the right Session */
EntityManager em = ContextConfig.getCurrent().getDAO(
BaseBean.getWcx(),
HibernateProxyHelper.getClassWithoutInitializingProxy(proxy)).
getEm();
((Session)em.getDelegate()).refresh(proxy);// attaching the proxy
}
try{
if (target==null)
target = getImplementation();
.....
}
....
}
and the
class AbstractPersistentCollection{
private <T> T withTemporarySessionIfNeeded(LazyInitializationWork<T> lazyInitializationWork) {
SessionImplementor originalSession = null;
boolean isTempSession = false;
boolean isJTA = false;
if ( session == null ) {
if ( allowLoadOutsideTransaction ) {
session = openTemporarySessionForLoading();
isTempSession = true;
}
else {
/* Let's try to reattach the proxy to the right Session */
try{
session = ((SessionImplementor)ContextConfig.getCurrent().getDAO(
BaseBean.getWcx(), HibernateProxyHelper.getClassWithoutInitializingProxy(
owner)).getEm().getDelegate());
SessionFactoryImplementor impl = (SessionFactoryImplementor) ((SessionImpl)session).getSessionFactory();
((SessionImpl)session).getPersistenceContext().addUninitializedDetachedCollection(
impl.getCollectionPersister(role), this);
}catch(Exception e){
e.printStackTrace();
}
if (session==null)
throwLazyInitializationException( "could not initialize proxy - no Session" );
}
}
if (session==null)
throwLazyInitializationException( "could not initialize proxy - no Session" );
....
}
...
}
NB:
I didn't handle all the possibilities, like JTA or other cases.
This solution works even better when you activate the cache.
A very common approach is to create an open entity manager in view filter. Spring provides one (check here).
I can't see that you're using Spring, but that's not really a problem; you can adapt the code in that class for your needs. You can also check the Open Session in View filter, which does the same but keeps a Hibernate Session open rather than an EntityManager.
This approach might not be good for your application; there are a few discussions on SO about whether this is a pattern or an antipattern. Link1. I think that for most applications (smallish, fewer than 20 concurrent users) this solution works just fine.
Edit
There's a Spring class that ties in better with JSF here.
There is no standard support for open session in view in EJB3, see this answer.
The fetch type of a mapping is just a default; it can be overridden at query time. Here is an example:
select g from Group g join fetch g.students
So an alternative in plain EJB3 is to make sure that all the data necessary for rendering the view is loaded before the render starts, by explicitly querying for the needed data.
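A sketch of such an EJB service method, assuming a Group entity with a lazy students collection (the names are illustrative, not from the question):
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class GroupService {

    @PersistenceContext
    private EntityManager em;

    // Loads the group together with its students in a single query, so the
    // view never touches an uninitialized collection during rendering.
    public Group findWithStudents(Long groupId) {
        return em.createQuery(
                "select distinct g from Group g join fetch g.students where g.id = :id",
                Group.class)
            .setParameter("id", groupId)
            .getSingleResult();
    }
}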
Lazy loading is an important feature that can boost performance nicely. However, its usability is much worse than it should be.
Especially when you start to deal with AJAX requests and encounter uninitialized collections, the annotation is only useful for telling Hibernate not to load the collection right away. Hibernate doesn't take care of anything else; it will just throw a LazyInitializationException at you, as you experienced.
My solution to this - which might not be perfect, or might even be a nightmare overall - works in any scenario by applying the following rule (I have to admit that this was written at the very beginning, but it has worked ever since):
Every entity that uses fetch = FetchType.LAZY has to extend LazyEntity and call initializeCollection() in the getter of the collection in question before it is returned. (A custom validator takes care of these constraints, reporting missing extensions and/or calls to initializeCollection.)
Example-Class (User, which has groups loaded lazy):
public class User extends LazyEntity{
@OneToMany(mappedBy = "user", fetch = FetchType.LAZY)
@BatchSize(size = 5)
List<Group> groups;
public List<Group> getGroups(){
initializeCollection(this.groups);
return this.groups;
}
}
The implementation of initializeCollection(Collection collection) looks like the following. The inline comments should give you an idea of what is required for which scenario. The method is synchronized to avoid two active sessions transferring ownership of an entity while another session is currently fetching data (this only occurs when concurrent AJAX requests hit the same instance).
public abstract class LazyEntity {
@SuppressWarnings("rawtypes")
protected synchronized void initializeCollection(Collection collection) {
if (collection instanceof AbstractPersistentCollection) {
//Already loaded?
if (!Hibernate.isInitialized(collection)) {
AbstractPersistentCollection ps = (AbstractPersistentCollection) collection;
//Is current Session closed? Then this is an ajax call, need new session!
//Else, Hibernate will know what to do.
if (ps.getSession() == null) {
//get an OPEN em. This needs to be handled according to your application.
EntityManager em = ContextHelper.getBean(ServiceProvider.class).getEntityManager();
//get any Session to obtain SessionFactory
Session anySession = em.unwrap(Session.class);
SessionFactory sf = anySession.getSessionFactory();
//get a new session
Session newSession = sf.openSession();
//move "this" to the new session.
newSession.update(this);
//let hibernate do its work on the current session.
Hibernate.initialize(collection);
//done, we can abandon the "new Session".
newSession.close();
}
}
}
}
}
But be aware that this approach requires you to validate whether an entity is associated with the CURRENT session whenever you save it; otherwise you have to move the whole object tree to the current session again before calling merge(), roughly as sketched below.
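A hedged sketch of that check before saving; the ContextHelper/ServiceProvider lookup mirrors the snippet above and is an assumption about your setup:
// Reassociate the entity with the current persistence context before saving;
// otherwise merge the whole detached graph back in.
public <T> T saveSafely(T entity) {
    EntityManager em = ContextHelper.getBean(ServiceProvider.class).getEntityManager(); // assumed lookup
    if (em.contains(entity)) {
        return entity;           // already managed by the current session
    }
    return em.merge(entity);     // copies the detached state onto a managed instance
}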
The Open Session in View design pattern can easily be implemented in a Java EE environment (with no dependency on Hibernate, Spring, or anything else outside Java EE). It is mostly the same as OpenSessionInView, but instead of a Hibernate Session you use a JTA transaction:
@WebFilter(urlPatterns = {"*"})
public class JTAFilter implements Filter{
@Resource
private UserTransaction ut;
@Override
public void init(FilterConfig filterConfig) throws ServletException {
}
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
try{
ut.begin();
chain.doFilter(request, response);
}catch(NotSupportedException | SystemException e){
throw new ServletException("", e);
} finally {
try {
if(ut.getStatus()!= Status.STATUS_MARKED_ROLLBACK){
ut.commit();
}
} catch (Exception e) {
throw new ServletException("", e);
}
}
}
@Override
public void destroy() {
}
}
What happens when committing a transaction from the entity manager that doesn't contain any dirty object? Can it be that no COMMIT command is sent to the DB?
I was having some test cases failing now and then without a reasonable cause. After some investigation, I have now a theory which I would like to have confirmed here.
I have a small fixture framework to prepare the data in the DB for each test. The fixtures use a method like the following to store objects in the DB using JPA (Hibernate):
public <R> R doInTransaction(final Function<EntityManager, R> whatToDo) {
final EntityManager em = emf.createEntityManager();
final R result;
try {
try {
em.getTransaction().begin();
result = whatToDo.apply(em);
em.getTransaction().commit();
} finally {
if (em.getTransaction().isActive()) {
em.getTransaction().rollback();
}
}
} finally {
em.close();
}
return result;
}
So, the fixture calls this method, passing the whatToDo function where objects are persisted, and the method wraps a transaction around the passed function. My failing test cases use a fixture that relies on legacy code that uses stored procedures and stores the objects directly via JDBC, i.e. instead of using em.persist(), I use the following in the passed function to call the stored procedures:
em.unwrap(Session.class).doWork(connection -> {
// stored procedures are called here directly over JDBC
});
So, my theory is that in this circumstance JPA is not committing immediately, as there are no dirty JPA objects managed by the EntityManager. Consequently, the actual commit occurs only later, i.e. after the assertion of my test, and the test fails. Could that be?
What is the transactional behaviour of Hibernate when "unwrapping" the connection out of the EntityManager?
I've now added an em.flush() before the em.getTransaction().commit() and it seems to help, but I'm still not 100% confident that this solves the issue. Can somebody confirm?
The behavior is the same regardless of unwrapping the connection. If you are not using JTA, the alternative is the underlying transaction provided by JDBC, i.e. a local transaction (or you can implement your own managed transaction provider).
When you unwrap the connection and deal with JDBC directly, you still get the same connection that was initially obtained by this session/entity manager, so the effect is the same.
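So if the JDBC code (or the stored procedures it calls) has to see changes that are still pending in the persistence context, flushing first is enough to push them onto that shared connection; a minimal sketch:
em.getTransaction().begin();

em.persist(someEntity);   // someEntity is a placeholder; the INSERT is only queued so far

em.flush();               // forces the pending SQL onto the shared JDBC connection now

em.unwrap(Session.class).doWork(connection -> {
    // JDBC / stored procedure calls here run on the same connection and in the
    // same (still uncommitted) transaction, so they see the flushed rows
});

em.getTransaction().commit();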
I have quite complex methods which create different entities during their execution and use them. For instance, I create some images and then add them to an article:
@Transactional
public void createArticle() {
List<Image> images = ...
for (int i = 0; i < 10; i++) {
// creating some new images, method annotated @Transactional
images.add(repository.createImage(...));
}
Article article = getArticle();
article.addImages(images);
em.merge(article);
}
This works correctly: the images have their IDs and are then added to the article. The problem is that during this execution the database is locked and nothing can be modified. This is very inconvenient, because the images might be processed by some graphics processor and that might take some time.
So we might try to remove the @Transactional from the main method. This could be good.
What happens then is that the images are correctly created and have their IDs. But once I try to add them to the article and call merge, I get a javax.persistence.EntityNotFoundException for Image with ID XXXX. The entity manager can't see that the image was created and has its ID. So the database is not locked, but we can't do anything either.
So what can I do? I don't want to have the database locked during the whole execution and I want to be able to access the created entities!
I am using current version of Spring and Hibernate, everything defined by Annotations. I don't use session factory, I am accessing everything via javax.persistence.EntityManager.
Consider leveraging the Hibernate cascading functionality for persisting object trees in one go with minimal database locking:
@Entity
public class Article {
@OneToMany(cascade = CascadeType.MERGE)
private List<Image> images;
}
@Transactional
public void createArticle() {
//images created as Java objects in memory, no DAOs called yet
List<Image> images = ...
Article article = getArticle();
article.addImages(images);
// cascading will save the article AND the images
em.merge(article);
}
This way the article AND its images get persisted at the end of the transaction, in a single transaction with a minimal lifetime. Up until then, no locking occurs on the database.
Alternatively, split createArticle into two @Transactional business methods, one createImages and the other addImagesToArticle, and call them one after the other from a third method in another bean:
@Service
public class OtherBean {
@Autowired
private YourService yourService;
// note that no transactional annotation is used, this is intentional
public void otherMethod() {
yourService.createImages(); // first transaction - images are committed
yourService.addImagesToArticle(); // second transaction - images are added to article
}
}
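The two service methods could then look roughly like this; the method bodies, the images being passed between the calls, and the getArticle() lookup are assumptions filling in what the question describes:
import java.util.ArrayList;
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class YourService {

    @PersistenceContext
    private EntityManager em;

    // First transaction: the images are persisted and committed as soon as this method returns.
    @Transactional
    public List<Image> createImages() {
        List<Image> images = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            Image image = new Image();   // e.g. produced by the graphics processor
            em.persist(image);           // the image gets its ID here
            images.add(image);
        }
        return images;
    }

    // Second, short transaction: the already committed images are attached to the article.
    @Transactional
    public void addImagesToArticle(List<Image> images) {
        Article article = getArticle();
        article.addImages(images);
        em.merge(article);
    }

    private Article getArticle() {
        return null; // placeholder for however the article is obtained in the question
    }
}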
You could try setting the transaction isolation on your datasource to READ_UNCOMMITTED, though that can lead to inconsistencies so it is generally not a recommended thing to do.
My best guess is that your transaction isolation level is SERIALIZABLE. That's why the DB locks affected tables for the whole duration of a transaction.
If that's the case change the level to READ_COMMITTED. Hibernate (or any JPA provider) works nicely with this one.
It won't lock anything unless you explicitly call entityManager.lock(someEntity, LockModeType.SomeLockType).
Also when you choose transaction boundaries firstly think in terms of atomicity. If createArticle() is an atomic unit of work it just has to be made transactional, breaking it into smaller transactions for the sake of 'optimization' is wrong.
I have a situation in which I need to re-attach detached objects to a Hibernate session, even though an object with the same identity MAY already exist in the session, which will cause errors.
Right now, I can do one of two things.
getHibernateTemplate().update( obj )
This works if and only if an object doesn't already exist in the hibernate session. Exceptions are thrown stating an object with the given identifier already exists in the session when I need it later.
getHibernateTemplate().merge( obj )
This works if and only if an object exists in the hibernate session. Exceptions are thrown when I need the object to be in a session later if I use this.
Given these two scenarios, how can I generically reattach detached objects to a session? I don't want to use exceptions to control the flow of this problem's solution, as there must be a more elegant solution...
So it seems that there is no way to reattach a stale detached entity in JPA.
merge() will push the stale state to the DB and overwrite any intervening updates.
refresh() cannot be called on a detached entity.
lock() cannot be called on a detached entity, and even if it could, and it did reattach the entity, calling 'lock' with argument 'LockMode.NONE', implying that you are locking but not locking, is the most counterintuitive piece of API design I've ever seen.
So you are stuck. There's a detach() method, but no attach() or reattach(). An obvious step in the object lifecycle is not available to you.
Judging by the number of similar questions about JPA, it seems that even if JPA does claim to have a coherent model, it most certainly does not match the mental model of most programmers, who have been cursed to waste many hours trying to understand how to get JPA to do the simplest things, and who end up with cache management code all over their applications.
It seems the only way to do it is to discard your stale detached entity and do a find query with the same id, which will hit the L2 cache or the DB.
Mik
All of these answers miss an important distinction. update() is used to (re)attach your object graph to a Session. The objects you pass it are the ones that are made managed.
merge() is actually not a (re)attachment API. Notice merge() has a return value? That's because it returns you the managed graph, which may not be the graph you passed it. merge() is a JPA API and its behavior is governed by the JPA spec. If the object you pass in to merge() is already managed (already associated with the Session) then that's the graph Hibernate works with; the object passed in is the same object returned from merge(). If, however, the object you pass into merge() is detached, Hibernate creates a new object graph that is managed and it copies the state from your detached graph onto the new managed graph. Again, this is all dictated and governed by the JPA spec.
In terms of a generic strategy for "make sure this entity is managed, or make it managed", it kind of depends on whether you want to account for not-yet-inserted data as well. Assuming you do, use something like
if ( session.contains( myEntity ) ) {
// nothing to do... myEntity is already associated with the session
}
else {
session.saveOrUpdate( myEntity );
}
Notice I used saveOrUpdate() rather than update(). If you do not want not-yet-inserted data handled here, use update() instead...
Entity states
JPA defines the following entity states:
New (Transient)
A newly created object that hasn’t ever been associated with a Hibernate Session (a.k.a Persistence Context) and is not mapped to any database table row is considered to be in the New (Transient) state.
To become persisted we need to either explicitly call the EntityManager#persist method or make use of the transitive persistence mechanism.
Persistent (Managed)
A persistent entity has been associated with a database table row and it’s being managed by the currently running Persistence Context. Any change made to such an entity is going to be detected and propagated to the database (during the Session flush-time).
With Hibernate, we no longer have to execute INSERT/UPDATE/DELETE statements. Hibernate employs a transactional write-behind working style and changes are synchronized at the very last responsible moment, during the current Session flush-time.
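A small illustration of that write-behind behaviour, reusing the doInJPA helper and the Book entity from the examples further below:
doInJPA(entityManager -> {
    Book book = entityManager.find(Book.class, 1L);  // managed entity

    book.setTitle("High-Performance Java Persistence, 2nd edition");

    // No explicit save or update call is needed: dirty checking detects the
    // change and the UPDATE is issued at flush time, right before the commit.
});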
Detached
Once the currently running Persistence Context is closed all the previously managed entities become detached. Successive changes will no longer be tracked and no automatic database synchronization is going to happen.
Entity state transitions
You can change the entity state using various methods defined by the EntityManager interface.
To understand the JPA entity state transitions better, consider the following diagram:
When using JPA, to reassociate a detached entity to an active EntityManager, you can use the merge operation.
When using the native Hibernate API, apart from merge, you can reattach a detached entity to an active Hibernate Session using the update method, as demonstrated by the following diagram:
Merging a detached entity
The merge is going to copy the detached entity state (source) to a managed entity instance (destination).
Consider we have persisted the following Book entity, and now the entity is detached as the EntityManager that was used to persist the entity got closed:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
return book;
});
While the entity is in the detached state, we modify it as follows:
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
Now, we want to propagate the changes to the database, so we can call the merge method:
doInJPA(entityManager -> {
Book book = entityManager.merge(_book);
LOGGER.info("Merging the Book entity");
assertFalse(book == _book);
});
And Hibernate is going to execute the following SQL statements:
SELECT
b.id,
b.author AS author2_0_,
b.isbn AS isbn3_0_,
b.title AS title4_0_
FROM
book b
WHERE
b.id = 1
-- Merging the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
If the merging entity has no equivalent in the current EntityManager, a fresh entity snapshot will be fetched from the database.
Once there is a managed entity, JPA copies the state of the detached entity onto the one that is currently managed, and during the Persistence Context flush, an UPDATE will be generated if the dirty checking mechanism finds that the managed entity has changed.
So, when using merge, the detached object instance will continue to remain detached even after the merge operation.
Reattaching a detached entity
Hibernate, but not JPA, supports reattaching through the update method.
A Hibernate Session can only associate one entity object for a given database row. This is because the Persistence Context acts as an in-memory cache (first level cache) and only one value (entity) is associated with a given key (entity type and database identifier).
An entity can be reattached only if there is no other JVM object (matching the same database row) already associated with the current Hibernate Session.
Considering we have persisted the Book entity and that we modified it when the Book entity was in the detached state:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
return book;
});
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
We can reattach the detached entity like this:
doInJPA(entityManager -> {
Session session = entityManager.unwrap(Session.class);
session.update(_book);
LOGGER.info("Updating the Book entity");
});
And Hibernate will execute the following SQL statement:
-- Updating the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
The update method requires you to unwrap the EntityManager to a Hibernate Session.
Unlike merge, the provided detached entity is going to be reassociated with the current Persistence Context, and an UPDATE is scheduled during flush whether the entity has been modified or not.
To prevent this, you can use the @SelectBeforeUpdate Hibernate annotation, which triggers a SELECT statement that fetches the loaded state, which is then used by the dirty checking mechanism.
@Entity(name = "Book")
@Table(name = "book")
@SelectBeforeUpdate
public class Book {
//Code omitted for brevity
}
Beware of the NonUniqueObjectException
One problem that can occur with update is if the Persistence Context already contains an entity reference with the same id and of the same type as in the following example:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(book);
return book;
});
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
try {
doInJPA(entityManager -> {
Book book = entityManager.find(
Book.class,
_book.getId()
);
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(_book);
});
} catch (NonUniqueObjectException e) {
LOGGER.error(
"The Persistence Context cannot hold " +
"two representations of the same entity",
e
);
}
Now, when executing the test case above, Hibernate is going to throw a NonUniqueObjectException because the second EntityManager already contains a Book entity with the same identifier as the one we pass to update, and the Persistence Context cannot hold two representations of the same entity.
org.hibernate.NonUniqueObjectException:
A different object with the same identifier value was already associated with the session : [com.vladmihalcea.book.hpjp.hibernate.pc.Book#1]
at org.hibernate.engine.internal.StatefulPersistenceContext.checkUniqueness(StatefulPersistenceContext.java:651)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performUpdate(DefaultSaveOrUpdateEventListener.java:284)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.entityIsDetached(DefaultSaveOrUpdateEventListener.java:227)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:92)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:73)
at org.hibernate.internal.SessionImpl.fireSaveOrUpdate(SessionImpl.java:682)
at org.hibernate.internal.SessionImpl.saveOrUpdate(SessionImpl.java:674)
Conclusion
The merge method is to be preferred if you are using optimistic locking as it allows you to prevent lost updates.
The update method is good for batch updates, as it can prevent the additional SELECT statement generated by the merge operation, therefore reducing the batch update execution time.
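For reference, optimistic locking just means the entity carries a version attribute that Hibernate checks on every UPDATE; a minimal sketch:
@Entity(name = "Book")
@Table(name = "book")
public class Book {

    @Id
    private Long id;

    // Hibernate adds "and version = ?" to the UPDATE and reports an optimistic
    // locking failure if another transaction already changed the row.
    @Version
    private int version;

    // other attributes and accessors omitted for brevity
}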
Undiplomatic answer: You're probably looking for an extended persistence context. This is one of the main reasons behind the Seam Framework... If you're struggling to use Hibernate in Spring in particular, check out this piece of Seam's docs.
Diplomatic answer: This is described in the Hibernate docs. If you need more clarification, have a look at Section 9.3.2 of Java Persistence with Hibernate called "Working with Detached Objects." I'd strongly recommend you get this book if you're doing anything more than CRUD with Hibernate.
If you are sure that your entity has not been modified (or if you agree any modification will be lost), then you may reattach it to the session with lock.
session.lock(entity, LockMode.NONE);
It will lock nothing, but it will get the entity from the session cache or (if not found there) read it from the DB.
It's very useful for preventing a LazyInitializationException when you are navigating relations from "old" entities (coming from the HttpSession, for example). You first "re-attach" the entity.
Using get may also work, except when you have mapped inheritance (which will already throw an exception on getId()):
entity = session.get(entity.getClass(), entity.getId());
I went back to the JavaDoc for org.hibernate.Session and found the following:
Transient instances may be made persistent by calling save(), persist() or
saveOrUpdate(). Persistent instances may be made transient by calling delete(). Any instance returned by a get() or load() method is persistent. Detached instances may be made persistent by calling update(), saveOrUpdate(), lock() or replicate(). The state of a transient or detached instance may also be made persistent as a new persistent instance by calling merge().
Thus update(), saveOrUpdate(), lock(), replicate() and merge() are the candidate options.
update(): Will throw an exception if there is a persistent instance with the same identifier.
saveOrUpdate(): Either save or update
lock(): Deprecated
replicate(): Persist the state of the given detached instance, reusing the current identifier value.
merge(): Returns a persistent object with the same identifier. The given instance does not become associated with the session.
Hence, lock() should not be used straight away; based on the functional requirements, one or more of the others can be chosen.
I did it that way in C# with NHibernate, but it should work the same way in Java:
public virtual void Attach()
{
if (!HibernateSessionManager.Instance.GetSession().Contains(this))
{
ISession session = HibernateSessionManager.Instance.GetSession();
using (ITransaction t = session.BeginTransaction())
{
session.Lock(this, NHibernate.LockMode.None);
t.Commit();
}
}
}
At first, Lock was called on every object because Contains was always false. The problem is that NHibernate compares objects by database id and type, whereas Contains uses the Equals method, which compares by reference if it's not overridden. With the following Equals method it works without any exceptions:
public override bool Equals(object obj)
{
if (this == obj) {
return true;
}
if (GetType() != obj.GetType()) {
return false;
}
if (Id != ((BaseObject)obj).Id)
{
return false;
}
return true;
}
Session.contains(Object obj) checks the reference and will not detect a different instance that represents the same row and is already attached to the session.
Here is my generic solution for entities with an identifier property.
public static void update(final Session session, final Object entity)
{
// if the given instance is in session, nothing to do
if (session.contains(entity))
return;
// check if there is already a different attached instance representing the same row
final ClassMetadata classMetadata = session.getSessionFactory().getClassMetadata(entity.getClass());
final Serializable identifier = classMetadata.getIdentifier(entity, (SessionImplementor) session);
final Object sessionEntity = session.load(entity.getClass(), identifier);
// override changes, last call to update wins
if (sessionEntity != null)
session.evict(sessionEntity);
session.update(entity);
}
This is one of the few aspects of .Net EntityFramework I like, the different attach options regarding changed entities and their properties.
I came up with a solution to "refresh" an object from the persistence store that will account for other objects which may already be attached to the session:
public void refreshDetached(T entity, Long id)
{
// Check for any OTHER instances already attached to the session since
// refresh will not work if there are any.
T attached = (T) session.load(getPersistentClass(), id);
if (attached != entity)
{
session.evict(attached);
session.lock(entity, LockMode.NONE);
}
session.refresh(entity);
}
Sorry, cannot seem to add comments (yet?).
Using Hibernate 3.5.0-Final
Whereas the Session#lock method is deprecated, the javadoc does suggest using Session#buildLockRequest(LockOptions)#lock(entity), and if you make sure your associations have cascade=lock, lazy loading isn't an issue either.
So, my attach method looks a bit like
MyEntity attach(MyEntity entity) {
    if (getSession().contains(entity)) return entity;
    getSession().buildLockRequest(LockOptions.NONE).lock(entity);
    return entity;
}
Initial tests suggest it works a treat.
Perhaps it behaves slightly differently on EclipseLink. To re-attach detached objects without getting stale data, I usually do:
obj = em.find(obj.getClass(), id);
and, as an optional second step (to get caches invalidated):
em.refresh(obj)
try getHibernateTemplate().replicate(entity,ReplicationMode.LATEST_VERSION)
In the original post, there are two methods, update(obj) and merge(obj) that are mentioned to work, but in opposite circumstances. If this is really true, then why not test to see if the object is already in the session first, and then call update(obj) if it is, otherwise call merge(obj).
The test for existence in the session is session.contains(obj). Therefore, I would think the following pseudo-code would work:
if (session.contains(obj))
{
session.update(obj);
}
else
{
session.merge(obj);
}
To reattach this object, you must use merge().
This method accepts your detached entity as a parameter and returns an entity that is attached and reloaded from the database.
Example :
Lot objAttach = em.merge(oldObjDetached);
objAttach.setEtat(...);
em.persist(objAttach);
Calling merge() first (to update the persistent instance), then lock(LockMode.NONE) (to attach the current instance, not the one returned by merge()) seems to work for some use cases.
The property hibernate.allow_refresh_detached_entity did the trick for me. But it is a global setting, so it is not very suitable if you want to do this only in some cases. I hope it helps.
Tested on Hibernate 5.4.9.
See SessionFactoryOptionsBuilder.
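If you go this route, the flag can also be passed as a regular configuration property when bootstrapping the persistence unit; a sketch (the unit name is a placeholder):
Map<String, Object> props = new HashMap<>();
props.put("hibernate.allow_refresh_detached_entity", "true");

EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("my-unit", props);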
try getHibernateTemplate().saveOrUpdate()