JPA - create-if-not-exists entity?

I have several mapped objects in my JPA / Hibernate application. On the network I receive packets that represent updates to these objects, or may in fact represent new objects entirely.
I'd like to write a method like
<T> T getOrCreate(Class<T> klass, Object primaryKey)
that returns an object of the provided class if one exists in the database with pk primaryKey, and otherwise creates a new object of that class, persists it and returns it.
The very next thing I'll do with the object will be to update all its fields, within a transaction.
Is there an idiomatic way to do this in JPA, or is there a better way to solve my problem?

I'd like to write a method like <T> T getOrCreate(Class<T> klass, Object primaryKey)
This won't be easy.
A naive approach would be to do something like this (assuming the method is running inside a transaction):
public <T> T findOrCreate(Class<T> entityClass, Object primaryKey) {
    T entity = em.find(entityClass, primaryKey);
    if (entity != null) {
        return entity;
    } else {
        try {
            entity = entityClass.newInstance();
            /* use more reflection to set the pk (probably need a base entity) */
            em.persist(entity);
            return entity;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
But in a concurrent environment, this code could fail due to some race condition:
T1: BEGIN TX;
T2: BEGIN TX;
T1: SELECT w/ id = 123; //returns null
T2: SELECT w/ id = 123; //returns null
T1: INSERT w/ id = 123;
T1: COMMIT; //row inserted
T2: INSERT w/ id = 123;
T2: COMMIT; //constraint violation
And if you are running multiple JVMs, in-JVM synchronization won't help either. Short of acquiring a table lock (which is pretty horrible), I don't really see how you could solve this.
In that case, I wonder if it wouldn't be better to systematically insert first and, if that fails, handle the exception by performing a subsequent select (in a new transaction), as sketched below.
You should probably add some details regarding the mentioned constraints (multi-threading? distributed environment?).
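To illustrate that insert-first idea, here is a minimal sketch assuming a resource-local EntityManagerFactory and a hypothetical MyEntity whose primary key is assigned by the application; it is only an outline of the approach, not a drop-in implementation:
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceException;

public class InsertFirstRepository {

    private final EntityManagerFactory emf;

    public InsertFirstRepository(EntityManagerFactory emf) {
        this.emf = emf;
    }

    // Try the INSERT first; if it fails (most likely a duplicate key),
    // fall back to a SELECT in a fresh EntityManager / transaction.
    public MyEntity insertOrFind(Object primaryKey) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            MyEntity entity = new MyEntity(primaryKey); // hypothetical entity with an assigned id
            em.persist(entity);
            em.getTransaction().commit();
            return entity;
        } catch (PersistenceException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            // Assume someone else inserted the row first; fall through to the lookup below.
        } finally {
            em.close();
        }
        EntityManager em2 = emf.createEntityManager();
        try {
            return em2.find(MyEntity.class, primaryKey);
        } finally {
            em2.close();
        }
    }
}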

Using pure JPA, one can solve this optimistically in a multi-threaded environment with nested entity managers (really we just need nested transactions, but I don't think that is possible with pure JPA). Essentially one needs to create a micro-transaction that encapsulates the find-or-create operation. The performance won't be fantastic and this isn't suitable for large batched creates, but it should be sufficient for most cases.
Prerequisites:
The entity must have a unique constraint that will be violated if two instances are created
You have some kind of finder to find the entity (it can find by primary key with EntityManager.find or by some query); we will refer to this as the finder
You have some kind of factory method to create a new entity should the one you are looking for not exist; we will refer to this as the factory.
I'm assuming that the given findOrCreate method would exist on some repository object and it is called in the context of an existing entity manager and an existing transaction.
If the transaction isolation level is serializable or snapshot, this won't work. If it is repeatable read, then you must not have already attempted to read the entity in the current transaction.
I'd recommend breaking the logic below into multiple methods for maintainability.
Code:
public <T> T findOrCreate(Supplier<T> finder, Supplier<T> factory) {
    EntityManager innerEntityManager = entityManagerFactory.createEntityManager();
    innerEntityManager.getTransaction().begin();
    try {
        //Try the naive find-or-create in our inner entity manager
        if (finder.get() == null) {
            T newInstance = factory.get();
            innerEntityManager.persist(newInstance);
        }
        innerEntityManager.getTransaction().commit();
    } catch (PersistenceException ex) {
        //This may be a unique constraint violation or it could be some
        //other issue. We will attempt to determine which it is by trying
        //to find the entity. Either way, our attempt failed and we
        //roll back the tx.
        innerEntityManager.getTransaction().rollback();
        T entity = finder.get();
        if (entity == null) {
            //Must have been some other issue
            throw ex;
        } else {
            //Either it was a unique constraint violation or we don't
            //care because someone else has succeeded
            return entity;
        }
    } catch (Throwable t) {
        innerEntityManager.getTransaction().rollback();
        throw t;
    } finally {
        innerEntityManager.close();
    }
    //If we didn't hit an exception then we successfully created it
    //in the inner transaction. We now need to find the entity in
    //our outer transaction.
    return finder.get();
}
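For illustration, a call site could look like the following; Customer, customerId and em are hypothetical names standing in for your own entity, key and outer EntityManager:
Customer customer = findOrCreate(
    () -> em.find(Customer.class, customerId),   // finder
    () -> new Customer(customerId)               // factory
);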

I must point out that there is a flaw in @gus an's answer: it can lead to problems in concurrent situations. If two threads both read the count, they will both get 0 and then both perform the insertion, so duplicate rows are created.
My suggestion here is to write your native query like the one below:
insert into af_label (content, previous_level_id, interval_begin, interval_end)
select 'test', 32, 9, 13
from dual
where not exists (select * from af_label where previous_level_id = 32 and interval_begin = 9 and interval_end = 13)
It's just like an optimistic lock in the application, but we let the database engine decide and detect duplicates based on your chosen attributes.
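If you want to issue that statement from JPA, it can be run as a native query; a sketch, assuming em is an open EntityManager inside an active transaction and the table is the af_label from the example above:
int rows = em.createNativeQuery(
        "insert into af_label (content, previous_level_id, interval_begin, interval_end) " +
        "select 'test', 32, 9, 13 from dual " +
        "where not exists (select * from af_label " +
        "  where previous_level_id = 32 and interval_begin = 9 and interval_end = 13)")
    .executeUpdate();
// rows == 1 means the row was inserted, rows == 0 means an equivalent row already existed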

How about using orElse after findByKeyword? You can return a new instance if no record is found.
SearchCount searchCount = searchCountRepository.findByKeyword(keyword)
    .orElse(SearchCount.builder()
        .keyword(keyword)
        .count(0)
        .build());
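This assumes the repository's finder returns an Optional, e.g. a Spring Data interface roughly like the sketch below; note that the instance built by orElse is still transient and only hits the database once you save it:
import java.util.Optional;
import org.springframework.data.jpa.repository.JpaRepository;

public interface SearchCountRepository extends JpaRepository<SearchCount, Long> {

    // Derived query; Spring Data wraps the result in an Optional when no row matches
    Optional<SearchCount> findByKeyword(String keyword);
}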

There is an easy solution to have this tackled even in a concurrent environment.
Use optimistic locking on your entities with
@Version private Long version; in the entity, and <column name="version" type="BIGINT"/> in your Liquibase table creation.
If you try to save a new entity whose (composite) primary key already exists, a DataIntegrityViolationException will be thrown up to the JpaRepository. So there's no need to worry about concurrent locking in your service layer - the database will know whether the entity already exists.
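A minimal sketch of such an entity, with illustrative names (the id here is assigned by the application, e.g. taken from the incoming data):
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Packet {

    @Id
    private Long id;        // assigned externally, not generated

    @Version
    private Long version;   // maintained by the JPA provider for optimistic locking

    // getters and setters omitted
}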

How to check special conditions before saving data with Hibernate

Sample Scenario
I have a limit that controls the total value of a column. If I make a save that exceeds this limit, I want it to throw an exception. For example, suppose LIMIT = 20 and I have already added the following data:
id | code | value
1  | A    | 15
2  | A    | 5
3  | B    | 12
4  | B    | 3
If I insert (A, 2), the total for A exceeds the limit and I want to get an exception
If I insert (B, 4), the transaction should succeed since the limit is not exceeded
code and value are interrelated
What can I do?
I can check this scenario with the required queries. For example, I can write a method for it and call it in the save method. That's it.
However, I'm looking for a more useful solution than this.
For example, is there any annotation I can use when designing the entity?
Can I do this without having to call the checking method explicitly every time?
Examples of the kind of thing I mean:
@UniqueConstraint, which checks whether the same values are being added
Using a transaction
The most common and long-accepted way is to simply abstract in a suitable form (in a class, a library, a service, ...) the business rules that govern the behavior you describe, within a transaction:
@Transactional(propagation = Propagation.REQUIRED)
public RetType operation(ReqType args) {
    ...
    perform operations;
    ...
    if (fail post conditions)
        throw ...;
    ...
}
In this case, if there is already an open transaction when the method is called, that transaction will be used (and there will be no deadlocks); if no transaction has been created yet, a new one will be created so that both the operations and the postcondition check are performed within the same transaction.
Note that with this strategy the operation and the invariant check can span multiple transactional resources managed by the TransactionManager (e.g. Redis, MySQL, MQS, ...) simultaneously and in a coordinated manner.
Using only the database
This approach has not been used much for a long time (in favor of the first way), but using TRIGGERS was the canonical option some decades ago for checking postconditions. Such a solution is usually coupled to the specific database engine (e.g. PostgreSQL or MySQL).
It can be useful when the client making the modifications is unable, or cannot be trusted, to check the postconditions within a transaction (e.g. bash processes), but nowadays this is infrequent.
The use of TRIGGERS may also be preferable in certain scenarios where efficiency is required, as there are certain optimization options within the database scripts.
Neither Hibernate nor Spring Data JPA have anything built-in for this scenario. You have to program the transaction logic in your repository yourself:
@PersistenceContext
EntityManager em;

public void addValue(String code, int value) {
    var checkQuery = em.createQuery(
        "SELECT COALESCE(SUM(e.value), 0) FROM Entity e WHERE e.code = :code", Long.class);
    checkQuery.setParameter("code", code);
    if (checkQuery.getSingleResult() + value > 20) {
        throw new LimitExceededException("attempted to exceed limit for " + code);
    }
    var newEntity = new Entity();
    newEntity.setCode(code);
    newEntity.setValue(value);
    em.persist(newEntity);
}
Then (it's important!) you have to define the SERIALIZABLE isolation level on the @Transactional annotations for the methods that work with this table.
Read more about serializable isolation level here, they have an oddly similar example.
Note that you have to consider retrying the failed transaction. No idea how to do this with Spring though.
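One possibility for the Spring side, offered here as a hedged sketch rather than a verified recipe, is spring-retry: keep @Transactional(isolation = Isolation.SERIALIZABLE) on the repository method above and wrap the call in a retrying bean, roughly like this:
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class ValueFacade {

    private final ValueRepository repository; // hypothetical bean containing the @Transactional addValue(...)

    public ValueFacade(ValueRepository repository) {
        this.repository = repository;
    }

    // Re-invokes the whole (serializable) transaction if the database aborts it.
    // Requires @EnableRetry on a configuration class; in real code you would restrict
    // the retried exceptions to concurrency failures instead of retrying everything.
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 50))
    public void addValueWithRetry(String code, int value) {
        repository.addValue(code, value);
    }
}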
You should use a singleton (javax.ejb.Singleton):
@Singleton
public class Register {

    @Lock(LockType.WRITE)
    public void register(String code, int value) {
        if (i_can_insert_modify(code, value)) {
            // use entityManager or some dao
        } else {
            // do something
        }
    }
}

Good practice for saving only unique by property records to the database

I would like to know what good practice is in the following situation:
@Entity
@Table(name = "word")
class WordEntity(
    uuid: UUID? = null,
    @Column(nullable = false, unique = true) val name: String
) : BaseEntity(uuid)
If I have an Iterable of this entity and call saveAll to persist it to the database, a ConstraintViolationException can be thrown.
So my goal is to add only unique records to the database. I can loop and do something like this:
fun saveAll(words: List<WordRequest>): List<WordDTO> {
    ...
    for (wordEntity in wordEntities) {
        try {
            result.add(wordRepository.save(wordEntity))
        } catch (e: RuntimeException) { }
    }
    ...
}
Or I can do a findByName on every iteration to check whether it exists.
So my question is: which option should I go for, and is there a better way to handle this?
You've got 2 options:
Some databases support INSERT ... IF NOT EXISTS syntax, which does exactly what you need, but it means you have to write native SQL.
If you want to stick with the ORM, you'll have to open a new session (and start a new transaction) for each of the records, because once an exception is thrown you can't keep relying on the same Session (EntityManager); its behavior after that is undefined. A sketch of this is shown below.
Checking whether a record already exists may lower the number of failed INSERTs (you can do this in one SELECT using an in() clause), but it doesn't guarantee anything - there's a window between your SELECT and your INSERT, and another transaction could INSERT in between. Well, unless you use the Serializable isolation level.
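A rough Java sketch of the second option, assuming access to the EntityManagerFactory and a Java equivalent of the WordEntity above (transaction-per-record, so one duplicate cannot poison the rest of the batch):
import java.util.ArrayList;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceException;

public class WordBatchSaver {

    private final EntityManagerFactory emf;

    public WordBatchSaver(EntityManagerFactory emf) {
        this.emf = emf;
    }

    // Persists each word in its own EntityManager and transaction,
    // skipping the ones that violate the unique constraint on name.
    public List<WordEntity> saveAllIgnoringDuplicates(List<WordEntity> words) {
        List<WordEntity> saved = new ArrayList<>();
        for (WordEntity word : words) {
            EntityManager em = emf.createEntityManager();
            try {
                em.getTransaction().begin();
                em.persist(word);
                em.getTransaction().commit();
                saved.add(word);
            } catch (PersistenceException e) {
                // Most likely the unique constraint; skip this word.
                if (em.getTransaction().isActive()) {
                    em.getTransaction().rollback();
                }
            } finally {
                em.close();
            }
        }
        return saved;
    }
}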

What query should I use in Hibernate to fetch a POJO?

I learnt Hibernate and used it to reduce my Java code to a great extent, and I was also able to reduce the time spent on database work. Now, what type of query should I use to finish my operations: getting a list from the DB to be displayed, updating and deleting?
My code for deletion is
String newToken = "DELETEUSER";
if (!TokenManager.checkRoleToken(newToken)) {
    return;
}
Session session = Main.getSession(); // calling the main method to get session
Leavetable table = new Leavetable(); // initialisation of object table
try {
    Transaction tr = session.beginTransaction();
    table = session.createQuery(); // what query should go here?
    session.delete(table); // deletion of the object and its properties for the selected leaveID
    tr.commit();
} finally {
    session.close();
}
My code for DB update is
public void updateLeaveTable(Leavetable leave) {
    String newToken = "ADDUSER";
    if (!TokenManager.checkRoleToken(newToken)) {
        return;
    }
    Session session = Main.getSession(); // calling the main method to get session
    try {
        Transaction tr = session.beginTransaction();
        session.saveOrUpdate(leave); // here, without a query, the table gets updated. How?
        tr.commit();
    } finally {
        session.close();
    }
}
What type of query should I follow? This is my final task before going onto a project, and once I know this I will start my life as a developer. Any suggestions please.
Do you mean an HQL query? Well, a typical query on your Leavetable entity would look like this:
Query q = session.createQuery("from Leavetable t where t.someField = :value");
q.setParameter("value", foo);
List<Leavetable> results = q.list();
However, if you just want to retrieve an entity by identifier, see Session#load() or Session#get(). I don't want to make things too confusing, but while both methods are similar, there is an important difference between them. Quoting the Hibernate Forums:
Retrieving objects by identifier
The following Hibernate code snippet retrieves a User object from the database:
User user = (User) session.get(User.class, userID);
The get() method is special because the identifier uniquely identifies a single instance of a class. Hence it’s common for applications to use the identifier as a convenient handle to a persistent object. Retrieval by identifier can use the cache when retrieving an object, avoiding a database hit if the object is already cached. Hibernate also provides a load() method:
User user = (User) session.load(User.class, userID);
The load() method is older; get() was added to Hibernate’s API due to user request. The difference is trivial:
If load() can’t find the object in the cache or database, an exception is thrown. The load() method never returns null. The get() method returns null if the object can’t be found.
The load() method may return a proxy instead of a real persistent instance. A proxy is a placeholder that triggers the loading of the real object when it’s accessed for the first time; we discuss proxies later in this section. On the other hand, get() never returns a proxy.
Choosing between get() and load() is easy: If you’re certain the persistent object exists, and nonexistence would be considered exceptional, load() is a good option. If you aren’t certain there is a persistent instance with the given identifier, use get() and test the return value to see if it’s null.
Using load() has a further implication: The application may retrieve a valid reference (a proxy) to a persistent instance without hitting the database to retrieve its persistent state. So load() might not throw an exception when it doesn’t find the persistent object in the cache or database; the exception would be thrown later, when the proxy is accessed.
Of course, retrieving an object by identifier isn’t as flexible as using arbitrary queries.
See also the Hibernate Documentation (links below).
Reference
Hibernate Core Reference Guide
10.3. Loading an object
Chapter 14. HQL: The Hibernate Query Language

hibernate column uniqueness question

I'm still in the process of learning hibernate/hql and I have a question that's half best practices question/half sanity check.
Let's say I have a class A:
@Entity
public class A
{
    @Id @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(unique = true)
    private String name = "";

    // getters, setters, etc. omitted for brevity
}
I want to enforce that every instance of A that gets saved has a unique name (hence the @Column annotation), but I also want to be able to handle the case where there's already an A instance saved that has that name. I see two ways of doing this:
1) I can catch the org.hibernate.exception.ConstraintViolationException that could be thrown during the session.saveOrUpdate() call and try to handle it.
2) I can query for existing instances of A that already have that name in the DAO before calling session.saveOrUpdate().
Right now I'm leaning towards approach 2, because in approach 1 I don't know how to programmatically figure out which constraint was violated (there are a couple of other unique members in A). Right now my DAO.save() code looks roughly like this:
public void save(A a) throws DataAccessException, NonUniqueNameException
{
    Session session = sessionFactory.getCurrentSession();
    try
    {
        session.beginTransaction();
        Query query = null;
        //if id isn't null, make sure we don't count this object as a duplicate
        if (a.getId() == null)
        {
            query = session.createQuery("select count(a) from A a where a.name = :name")
                .setParameter("name", a.getName());
        }
        else
        {
            query = session.createQuery("select count(a) from A a where a.name = :name " +
                    "and a.id != :id")
                .setParameter("name", a.getName())
                .setParameter("id", a.getId());
        }
        Long numNameDuplicates = (Long) query.uniqueResult();
        if (numNameDuplicates > 0)
            throw new NonUniqueNameException();

        session.saveOrUpdate(a);
        session.getTransaction().commit();
    }
    catch (RuntimeException e)
    {
        session.getTransaction().rollback();
        throw new DataAccessException(e); //my own class
    }
}
Am I going about this in the right way? Can hibernate tell me programmatically (i.e. not as an error string) which value is violating the uniqueness constraint? By separating the query from the commit, am I inviting thread-safety errors, or am I safe? How is this usually done?
Thanks!
I think that your second approach is best.
To be able to catch the ConstraintViolation exception with any certainty that this particular object caused it, you would need to flush the session immediately after the call to saveOrUpdate. This could introduce performance problems if you need to insert a number of these objects at a time.
Even though you would be testing if the name already exists in the table on every save action, this would still be faster than flushing after every insert. (You could always benchmark to confirm.)
This also allows you to structure your code in such a way that you could call a 'validator' from a different layer. For example, if this unique property is the email of a new user, from the web interface you can call the validation method to determine if the email address is acceptable. If you went with the first option, you would only know if the email was acceptable after trying to insert it.
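For comparison, approach 1 needs an explicit flush so the violation surfaces at the save call rather than at commit; a hedged sketch, reusing the sessionFactory and exception types from the question:
import org.hibernate.Session;
import org.hibernate.exception.ConstraintViolationException;

public void saveCatchingViolation(A a) throws DataAccessException, NonUniqueNameException
{
    Session session = sessionFactory.getCurrentSession();
    try
    {
        session.beginTransaction();
        session.saveOrUpdate(a);
        session.flush(); // force the INSERT/UPDATE now so a violation is raised here
        session.getTransaction().commit();
    }
    catch (ConstraintViolationException e)
    {
        session.getTransaction().rollback();
        // e.getConstraintName() is dialect-dependent and may be null, which is
        // exactly why approach 2 is easier when you need to know which column failed
        throw new NonUniqueNameException();
    }
    catch (RuntimeException e)
    {
        session.getTransaction().rollback();
        throw new DataAccessException(e);
    }
}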
Approach 1 would be ok if:
There is only one constraint in the entity.
There is only one dirty object in the session.
Remember that the object may not be saved until flush() is called or the transaction is committed.
For best error reporting I would:
Use approach two for every constraint, so I can give a specific error for each of them.
Implement an interceptor that, in case of a constraint exception, retries the transaction (up to a maximum number of times), to cover any violation that slips past those checks. This is only needed depending on the transaction isolation level.

What is the proper way to re-attach detached objects in Hibernate?

I have a situation in which I need to re-attach detached objects to a hibernate session, although an object of the same identity MAY already exist in the session, which will cause errors.
Right now, I can do one of two things.
getHibernateTemplate().update( obj )
This works if and only if an object doesn't already exist in the hibernate session. Exceptions are thrown stating an object with the given identifier already exists in the session when I need it later.
getHibernateTemplate().merge( obj )
This works if and only if an object exists in the hibernate session. Exceptions are thrown when I need the object to be in a session later if I use this.
Given these two scenarios, how can I generically attach sessions to objects? I don't want to use exceptions to control the flow of this problem's solution, as there must be a more elegant solution...
So it seems that there is no way to reattach a stale detached entity in JPA.
merge() will push the stale state to the DB and overwrite any intervening updates.
refresh() cannot be called on a detached entity.
lock() cannot be called on a detached entity, and even if it could, and it did reattach the entity, calling 'lock' with argument 'LockMode.NONE', implying that you are locking but not locking, is the most counterintuitive piece of API design I've ever seen.
So you are stuck. There's a detach() method, but no attach() or reattach(). An obvious step in the object lifecycle is not available to you.
Judging by the number of similar questions about JPA, it seems that even if JPA does claim to have a coherent model, it most certainly does not match the mental model of most programmers, who have been cursed to waste many hours trying to understand how to get JPA to do the simplest things, and end up with cache management code all over their applications.
It seems the only way to do it is to discard your stale detached entity and do a find query with the same id, which will hit the L2 cache or the DB.
All of these answers miss an important distinction. update() is used to (re)attach your object graph to a Session. The objects you pass it are the ones that are made managed.
merge() is actually not a (re)attachment API. Notice merge() has a return value? That's because it returns you the managed graph, which may not be the graph you passed it. merge() is a JPA API and its behavior is governed by the JPA spec. If the object you pass in to merge() is already managed (already associated with the Session) then that's the graph Hibernate works with; the object passed in is the same object returned from merge(). If, however, the object you pass into merge() is detached, Hibernate creates a new object graph that is managed and it copies the state from your detached graph onto the new managed graph. Again, this is all dictated and governed by the JPA spec.
In terms of a generic strategy for "make sure this entity is managed, or make it managed", it kind of depends on if you want to account for not-yet-inserted data as well. Assuming you do, use something like
if (session.contains(myEntity)) {
    // nothing to do... myEntity is already associated with the session
}
else {
    session.saveOrUpdate(myEntity);
}
Notice I used saveOrUpdate() rather than update(). If you do not want not-yet-inserted data handled here, use update() instead...
Entity states
JPA defines the following entity states:
New (Transient)
A newly created object that hasn’t ever been associated with a Hibernate Session (a.k.a Persistence Context) and is not mapped to any database table row is considered to be in the New (Transient) state.
To become persisted we need to either explicitly call the EntityManager#persist method or make use of the transitive persistence mechanism.
Persistent (Managed)
A persistent entity has been associated with a database table row and it’s being managed by the currently running Persistence Context. Any change made to such an entity is going to be detected and propagated to the database (during the Session flush-time).
With Hibernate, we no longer have to execute INSERT/UPDATE/DELETE statements. Hibernate employs a transactional write-behind working style and changes are synchronized at the very last responsible moment, during the current Session flush-time.
Detached
Once the currently running Persistence Context is closed all the previously managed entities become detached. Successive changes will no longer be tracked and no automatic database synchronization is going to happen.
Entity state transitions
You can change the entity state using various methods defined by the EntityManager interface.
To understand the JPA entity state transitions better, consider how the EntityManager operations move an entity between these states.
When using JPA, to reassociate a detached entity to an active EntityManager, you can use the merge operation.
When using the native Hibernate API, apart from merge, you can also reattach a detached entity to an active Hibernate Session using the update methods.
Merging a detached entity
The merge is going to copy the detached entity state (source) to a managed entity instance (destination).
Consider we have persisted the following Book entity, and now the entity is detached as the EntityManager that was used to persist the entity got closed:
Book _book = doInJPA(entityManager -> {
    Book book = new Book()
        .setIsbn("978-9730228236")
        .setTitle("High-Performance Java Persistence")
        .setAuthor("Vlad Mihalcea");
    entityManager.persist(book);
    return book;
});
While the entity is in the detached state, we modify it as follows:
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
Now, we want to propagate the changes to the database, so we can call the merge method:
doInJPA(entityManager -> {
Book book = entityManager.merge(_book);
LOGGER.info("Merging the Book entity");
assertFalse(book == _book);
});
And Hibernate is going to execute the following SQL statements:
SELECT
b.id,
b.author AS author2_0_,
b.isbn AS isbn3_0_,
b.title AS title4_0_
FROM
book b
WHERE
b.id = 1
-- Merging the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
If the merging entity has no equivalent in the current EntityManager, a fresh entity snapshot will be fetched from the database.
Once there is a managed entity, JPA copies the state of the detached entity onto the one that is currently managed, and during the Persistence Context flush, an UPDATE will be generated if the dirty checking mechanism finds that the managed entity has changed.
So, when using merge, the detached object instance will continue to remain detached even after the merge operation.
Reattaching a detached entity
Hibernate, but not JPA, supports reattaching through the update method.
A Hibernate Session can only associate one entity object for a given database row. This is because the Persistence Context acts as an in-memory cache (first level cache) and only one value (entity) is associated with a given key (entity type and database identifier).
An entity can be reattached only if there is no other JVM object (matching the same database row) already associated with the current Hibernate Session.
Considering we have persisted the Book entity and that we modified it when the Book entity was in the detached state:
Book _book = doInJPA(entityManager -> {
    Book book = new Book()
        .setIsbn("978-9730228236")
        .setTitle("High-Performance Java Persistence")
        .setAuthor("Vlad Mihalcea");
    entityManager.persist(book);
    return book;
});

_book.setTitle(
    "High-Performance Java Persistence, 2nd edition"
);
We can reattach the detached entity like this:
doInJPA(entityManager -> {
    Session session = entityManager.unwrap(Session.class);
    session.update(_book);
    LOGGER.info("Updating the Book entity");
});
And Hibernate will execute the following SQL statement:
-- Updating the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
The update method requires you to unwrap the EntityManager to a Hibernate Session.
Unlike merge, the provided detached entity is going to be reassociated with the current Persistence Context and an UPDATE is scheduled during flush whether the entity has been modified or not.
To prevent this, you can use the @SelectBeforeUpdate Hibernate annotation, which will trigger a SELECT statement that fetches the loaded state, which is then used by the dirty checking mechanism.
@Entity(name = "Book")
@Table(name = "book")
@SelectBeforeUpdate
public class Book {

    //Code omitted for brevity
}
Beware of the NonUniqueObjectException
One problem that can occur with update is if the Persistence Context already contains an entity reference with the same id and of the same type as in the following example:
Book _book = doInJPA(entityManager -> {
    Book book = new Book()
        .setIsbn("978-9730228236")
        .setTitle("High-Performance Java Persistence")
        .setAuthor("Vlad Mihalcea");
    Session session = entityManager.unwrap(Session.class);
    session.saveOrUpdate(book);
    return book;
});

_book.setTitle(
    "High-Performance Java Persistence, 2nd edition"
);

try {
    doInJPA(entityManager -> {
        Book book = entityManager.find(
            Book.class,
            _book.getId()
        );
        Session session = entityManager.unwrap(Session.class);
        session.saveOrUpdate(_book);
    });
} catch (NonUniqueObjectException e) {
    LOGGER.error(
        "The Persistence Context cannot hold " +
        "two representations of the same entity",
        e
    );
}
Now, when executing the test case above, Hibernate is going to throw a NonUniqueObjectException because the second EntityManager already contains a Book entity with the same identifier as the one we pass to update, and the Persistence Context cannot hold two representations of the same entity.
org.hibernate.NonUniqueObjectException:
A different object with the same identifier value was already associated with the session : [com.vladmihalcea.book.hpjp.hibernate.pc.Book#1]
at org.hibernate.engine.internal.StatefulPersistenceContext.checkUniqueness(StatefulPersistenceContext.java:651)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performUpdate(DefaultSaveOrUpdateEventListener.java:284)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.entityIsDetached(DefaultSaveOrUpdateEventListener.java:227)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:92)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:73)
at org.hibernate.internal.SessionImpl.fireSaveOrUpdate(SessionImpl.java:682)
at org.hibernate.internal.SessionImpl.saveOrUpdate(SessionImpl.java:674)
Conclusion
The merge method is to be preferred if you are using optimistic locking as it allows you to prevent lost updates.
The update is good for batch updates as it can prevent the additional SELECT statement generated by the merge operation, therefore reducing the batch update execution time.
Undiplomatic answer: You're probably looking for an extended persistence context. This is one of the main reasons behind the Seam Framework... If you're struggling to use Hibernate in Spring in particular, check out this piece of Seam's docs.
Diplomatic answer: This is described in the Hibernate docs. If you need more clarification, have a look at Section 9.3.2 of Java Persistence with Hibernate called "Working with Detached Objects." I'd strongly recommend you get this book if you're doing anything more than CRUD with Hibernate.
If you are sure that your entity has not been modified (or if you agree any modification will be lost), then you may reattach it to the session with lock.
session.lock(entity, LockMode.NONE);
It will lock nothing, but it will get the entity from the session cache or (if not found there) read it from the DB.
It's very useful to prevent LazyInitException when you are navigating relations from an "old" (from the HttpSession for example) entities. You first "re-attach" the entity.
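A small usage sketch of that pattern; Order and getLines() are hypothetical, order comes from an earlier request and session is a freshly opened Session:
// Reassociate the detached entity without any version check or DML,
// then lazy associations can be initialized as usual.
session.lock(order, LockMode.NONE);
int lineCount = order.getLines().size(); // no LazyInitializationException anymore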
Using get may also work, except when your entity uses mapped inheritance (in which case the getId() call will already throw an exception).
entity = session.get(entity.getClass(), entity.getId());
I went back to the JavaDoc for org.hibernate.Session and found the following:
Transient instances may be made persistent by calling save(), persist() or
saveOrUpdate(). Persistent instances may be made transient by calling delete(). Any instance returned by a get() or load() method is persistent. Detached instances may be made persistent by calling update(), saveOrUpdate(), lock() or replicate(). The state of a transient or detached instance may also be made persistent as a new persistent instance by calling merge().
Thus update(), saveOrUpdate(), lock(), replicate() and merge() are the candidate options.
update(): Will throw an exception if there is a persistent instance with the same identifier.
saveOrUpdate(): Either save or update
lock(): Deprecated
replicate(): Persist the state of the given detached instance, reusing the current identifier value.
merge(): Returns a persistent object with the same identifier. The given instance does not become associated with the session.
Hence, lock() should not be used straight away, and based on the functional requirements one or more of the others can be chosen.
I did it that way in C# with NHibernate, but it should work the same way in Java:
public virtual void Attach()
{
    if (!HibernateSessionManager.Instance.GetSession().Contains(this))
    {
        ISession session = HibernateSessionManager.Instance.GetSession();
        using (ITransaction t = session.BeginTransaction())
        {
            session.Lock(this, NHibernate.LockMode.None);
            t.Commit();
        }
    }
}
At first, Lock was called on every object because Contains was always false. The problem is that NHibernate compares objects by database id and type, whereas Contains uses the Equals method, which compares by reference if it is not overridden. With the following Equals method it works without any exceptions:
public override bool Equals(object obj)
{
    if (this == obj)
    {
        return true;
    }
    if (GetType() != obj.GetType())
    {
        return false;
    }
    if (Id != ((BaseObject)obj).Id)
    {
        return false;
    }
    return true;
}
Session.contains(Object obj) checks by reference and will not detect a different instance that represents the same row and is already attached to the session.
Here is my generic solution for entities with an identifier property.
public static void update(final Session session, final Object entity)
{
    // if the given instance is in the session, nothing to do
    if (session.contains(entity))
        return;

    // check if there is already a different attached instance representing the same row
    final ClassMetadata classMetadata = session.getSessionFactory().getClassMetadata(entity.getClass());
    final Serializable identifier = classMetadata.getIdentifier(entity, (SessionImplementor) session);
    final Object sessionEntity = session.load(entity.getClass(), identifier);

    // override changes, last call to update wins
    if (sessionEntity != null)
        session.evict(sessionEntity);

    session.update(entity);
}
This is one of the few aspects of .Net EntityFramework I like, the different attach options regarding changed entities and their properties.
I came up with a solution to "refresh" an object from the persistence store that will account for other objects which may already be attached to the session:
public void refreshDetached(T entity, Long id)
{
    // Check for any OTHER instances already attached to the session since
    // refresh will not work if there are any.
    T attached = (T) session.load(getPersistentClass(), id);
    if (attached != entity)
    {
        session.evict(attached);
        session.lock(entity, LockMode.NONE);
    }
    session.refresh(entity);
}
Using Hibernate 3.5.0-Final
Whereas the Session#lock method is deprecated, the javadoc does suggest using Session#buildLockRequest(LockOptions).lock(entity), and if you make sure your associations have cascade=lock, lazy loading isn't an issue either.
So, my attach method looks a bit like
MyEntity attach(MyEntity entity) {
    if (getSession().contains(entity)) return entity;
    getSession().buildLockRequest(LockOptions.NONE).lock(entity);
    return entity;
}
Initial tests suggest it works a treat.
Perhaps it behaves slightly differently on EclipseLink. To re-attach detached objects without getting stale data, I usually do:
Object obj = em.find(obj.getClass(), id);
and as an optional a second step (to get caches invalidated):
em.refresh(obj)
try getHibernateTemplate().replicate(entity,ReplicationMode.LATEST_VERSION)
In the original post, there are two methods, update(obj) and merge(obj) that are mentioned to work, but in opposite circumstances. If this is really true, then why not test to see if the object is already in the session first, and then call update(obj) if it is, otherwise call merge(obj).
The test for existence in the session is session.contains(obj). Therefore, I would think the following pseudo-code would work:
if (session.contains(obj))
{
    session.update(obj);
}
else
{
    session.merge(obj);
}
To reattach this object, you must use merge().
This method accepts your detached entity as a parameter and returns an entity that will be attached and reloaded from the database.
Example:
Lot objAttach = em.merge(oldObjDetached);
objAttach.setEtat(...);
em.persist(objAttach);
Calling merge() first (to update the persistent instance), then lock(LockMode.NONE) (to attach the current instance, not the one returned by merge()) seems to work for some use cases.
The property hibernate.allow_refresh_detached_entity did the trick for me. But it is a global setting, so it is not very suitable if you only want this behaviour in some cases. I hope it helps.
Tested on Hibernate 5.4.9
SessionFactoryOptionsBuilder
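For reference, a minimal sketch of supplying that property when bootstrapping with the classic Configuration API (the setting name comes from the answer above; whether this fits your bootstrap style is an assumption):
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionFactoryProvider {

    public static SessionFactory build() {
        return new Configuration()
            .configure() // reads hibernate.cfg.xml
            .setProperty("hibernate.allow_refresh_detached_entity", "true") // allow refresh() on detached entities
            .buildSessionFactory();
    }
}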
try getHibernateTemplate().saveOrUpdate()
