HibernateOptimisticLockingFailureException when saving a new entity - java

The entity I'm trying to save is a parent with its children. When I save the entity (i.e. the parent and children are saved at the same time), a HibernateOptimisticLockingFailureException is thrown during session flushing under normal execution (and every time in debug mode). The testing is on my local machine, single-threaded, and nobody else is changing the entity while I'm saving it.
We are using the following:
MySQL v5.5.x
Hibernate 4.3.11
Java 8
Spring 4.1.0
Key points:
The relationship between the parent and child is bi-directional one-to-many.
We use optimistic locking, with the version column being a timestamp generated by MySQL on both insert and update. On the version field we specify @Generated(GenerationTime.ALWAYS) to ensure that the version value is obtained from the database automatically (avoiding the time-precision mismatch between Java and MySQL).
When saving a new entity (id = 0), I can see in the logs that the entity is inserted into the database, and I can also see the child entities being inserted (via the Hibernate logs). During this process, I can also see that a select is done to get the version details from the database.
Soon after the entities are inserted and the session is being flushed, dirty checking is done on the collection and I see a message in the log that the collection is unreferenced. Straight after this, I see an update statement on the parent entity's table, and this is where the problem occurs: the version value used in the update statement is different from what is in the database, and the HibernateOptimisticLockingFailureException is thrown.
Hibernate Code
getHibernateTemplate().saveOrUpdate(parentEntity);
// a break point here and wait for 1 sec before executing
// always get the HibernateOptimisticLockingFailureException
getHibernateTemplate().flush();
Parent mapping
@Access(AccessType.FIELD)
@OneToMany(mappedBy="servicePoint", fetch=FetchType.EAGER, cascade={CascadeType.ALL}, orphanRemoval=true, targetEntity=ServicingMeter.class)
private List<ServicingMeter> meters = new ArrayList<ServicingMeter>();
Child mapping
@Access(AccessType.FIELD)
@ManyToOne(fetch=FetchType.EAGER, targetEntity=ServicePoint.class)
@JoinColumn(name="service_point_id", nullable=false)
private ServicePoint servicePoint;
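For reference, here is a minimal sketch of how the database-generated version field described in the key points above might be mapped on the parent; the column name, timestamp type, and column definition are assumptions for illustration, not taken from the actual mapping:

import java.util.Date;
import javax.persistence.*;
import org.hibernate.annotations.Generated;
import org.hibernate.annotations.GenerationTime;

@Entity
@Table(name = "service_point")
public class ServicePoint {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    // Version timestamp populated by MySQL on insert and update; GenerationTime.ALWAYS
    // makes Hibernate re-select the value after each write, so the Java side never has
    // to reproduce MySQL's timestamp precision. Column definition is an assumption.
    @Version
    @Generated(GenerationTime.ALWAYS)
    @Column(name = "version", insertable = false, updatable = false,
            columnDefinition = "TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP")
    private Date version;

    // ... meters collection and other fields as shown above ...
}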
Questions:
1. Why is there an update statement on the parent table?
2. How can I avoid this update from happening?
3. Is there something wrong with the way my one-to-many mapping is set up?
The annotated log file can be found here

Related

Hibernate JPA cannot fetch a new record immediately after saving it (within ms) by non-primary key (email, type) but can fetch it by primary key (id)

My system has a patient entity that contains email (string) and type (integer) fields, both non-primary, not unique, and not null, along with other fields and of course a primary key id.
After I save a new patient entity, searching for it in the database by the JPA query findById works perfectly fine and fetches the new entity that was just saved a few ms earlier.
But when I search for the newly saved entity by email and type with the JPA query findByEmailAndTypeAndEmailIsNotNull, it returns nothing; however, if I run the very same findByEmailAndTypeAndEmailIsNotNull query one second after saving the new entity, it returns the newly saved entity.
Can someone diagnose the problem? Is it even related to JPA, or to the database itself?
Edit:
@Transactional
public synchronized Patient addPatient(PatientProfileDto patientProfileDto, Integer facilityId)
        throws ResourceAlreadyExistsException, EntityNotFoundException, ClientException {
    // some code
    performPatientCreationValidations(ownerDto, ownerDemographicDto.getNationality().getId(), Boolean.FALSE, Boolean.FALSE);
    // patient creation
    patientRepository.saveAndFlush(patient);
    // some code to link patient to other entities
}

private void performPatientCreationValidations(...params)
        throws ResourceAlreadyExistsException, ClientException {
    if (patientRepository.findByEmailAndTypeAndEmailIsNotNull(patientDto.getEmail(), PatientType.OWNER.getId()).isPresent()) {
        // throw error
    }
}
If I hit the API 5 times in a row with negligible delay, then 2 duplicate patients get created; on the last 3 API hits it throws the error as it should. It should also have thrown the error on the 2nd API hit. Also note that addPatient is a synchronized method, so only after one API hit has finished saving the patient does the next API hit acquire the lock on the method, and so on.
After I save a new patient entity, when I search the entity in the database by jpa query findById it works perfectly fine
JpaRepository.findById() works fine because it takes the entity from Hibernate's 1st-level cache, not from the database; inserting into the database is usually deferred until necessary, e.g. until your session is flushed. An entity can be fetched from the 1st-level cache only by its id.
So you have to either flush the Session manually with JpaRepository.flush(), use JpaRepository.saveAndFlush() instead of JpaRepository.save(), or execute your operations within one transaction. In that case your requests share one session and Hibernate will flush its cache as soon as it gets a new query for the same entity.
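A minimal sketch of the flushing options described above, reusing the repository and entity names from the question; the exact arguments are illustrative:

// Option 1: save, then explicitly push the pending INSERT to the database.
Patient saved = patientRepository.save(patient);
patientRepository.flush();

// Option 2: do both in one call.
Patient alsoSaved = patientRepository.saveAndFlush(patient);

// Once the INSERT has been flushed (and is visible to the querying session),
// the derived query can find the row by its non-primary-key columns:
Optional<Patient> found = patientRepository.findByEmailAndTypeAndEmailIsNotNull(
        patient.getEmail(), PatientType.OWNER.getId());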

Spring JPA always caches data [duplicate]

This question already has answers here:
Spring Data JPA Update @Query not updating?
(5 answers)
Let's suppose to have this situation:
We have Spring Data configured in the standard way: there is a Repository object, an Entity object, and everything works well.
Now, for some complex reasons, I have to use the EntityManager (or JdbcTemplate, whatever is at a lower level than Spring Data) directly to update the table associated with my Entity, with a native SQL query. So I'm not using the Entity object, but simply doing a database update manually on the table I use as the entity (it's more correct to say the table from which I get values, see the next rows).
The reason is that I had to bind my Spring Data Entity to a MySQL view that makes a UNION of multiple tables, not directly to the table I need to update.
What happens is:
In a functional test, I call the "manual" update method (on the table from which the MySQL view is created) as previously described (through the entity manager), and if I then make a simple Repository.findOne(objectId), I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
Is there a way to "synchronize" (out of the box) objects (or force some refresh) in Spring Data? Or am I asking for a miracle?
I'm not being ironic; maybe I'm just not expert enough, maybe (or probably) it's my ignorance. If so, please explain why and (if you want) share some advanced knowledge about this amazing framework.
If I make a simple Repository.findOne(objectId) I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
The first-level cache is active for the duration of a session. Any entity previously retrieved in the context of a session will be returned from the first-level cache unless there is a reason to go back to the database.
Is there a reason to go back to the database after your SQL update? Well, as the book Pro JPA 2 notes (p. 199) regarding bulk update statements (whether via JPQL or SQL):
The first issue for developers to consider when using these [bulk update] statements
is that the persistence context is not updated to reflect the results
of the operation. Bulk operations are issued as SQL against the
database, bypassing the in-memory structures of the persistence
context.
which is what you are seeing. That is why you need to call refresh to force the entity to be reloaded from the database as the persistence context is not aware of any potential modifications.
The book also notes the following about using Native SQL statements (rather than JPQL bulk update):
CAUTION: Native SQL update and delete operations should not be executed on tables mapped by an entity. The JP QL operations tell the provider what cached entity state must be invalidated in order to remain consistent with the database. Native SQL operations bypass such checks and can quickly lead to situations where the in-memory cache is out of date with respect to the database.
Essentially, then, if you have a 2nd-level cache configured, updating any entity currently in the cache via a native SQL statement is likely to result in stale data in the cache.
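A minimal sketch of the pattern discussed above: issuing the native update through the EntityManager and then refreshing the affected entity so the persistence context picks up the new state. The table and column names are placeholders, not taken from the question:

// The native update bypasses the persistence context entirely...
entityManager.createNativeQuery(
        "UPDATE backing_table SET some_column = ? WHERE id = ?")
        .setParameter(1, "new value")
        .setParameter(2, entity.getId())
        .executeUpdate();

// ...so findOne()/find() may still hand back the stale, cached entity.
// Force a reload from the database:
entityManager.refresh(entity);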
In Spring Boot JpaRepository:
If our modifying query changes entities contained in the persistence context, then this context becomes outdated. In order to fetch the entities from the database with their latest state, use @Modifying(clearAutomatically = true).
The @Modifying annotation has a clearAutomatically attribute which defines whether it should clear the underlying persistence context after executing the modifying query.
Example:
@Modifying(clearAutomatically = true)
@Query("UPDATE NetworkEntity n SET n.network_status = :network_status WHERE n.network_id = :network_id")
int expireNetwork(@Param("network_id") Integer network_id, @Param("network_status") String network_status);
Based on the way you described your usage, fetching from the repository should retrieve the updated object without needing to refresh it, as long as the method which used the entity manager to merge is annotated with @Transactional.
Here's a sample test:
@DirtiesContext(classMode = ClassMode.AFTER_CLASS)
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ApplicationConfig.class)
@EnableJpaRepositories(basePackages = "com.foo")
public class SampleSegmentTest {

    @Resource
    SampleJpaRepository segmentJpaRepository;

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    @Test
    public void test() {
        Segment segment = new Segment();
        ReflectionTestUtils.setField(segment, "value", "foo");
        ReflectionTestUtils.setField(segment, "description", "bar");
        segmentJpaRepository.save(segment);

        assertNotNull(segment.getId());
        assertEquals("foo", segment.getValue());
        assertEquals("bar", segment.getDescription());

        ReflectionTestUtils.setField(segment, "value", "foo2");
        entityManager.merge(segment);

        Segment updatedSegment = segmentJpaRepository.findOne(segment.getId());
        assertEquals("foo2", updatedSegment.getValue());
    }
}

Hibernate Envers Get Revision right after Persisting Data

I'm using Spring JPA to persist data in my application. I also use Hibernate Envers to create a history for every record I enter into my core table. I would like to get the revision immediately after the write transaction, and show the user what revision was created for the change(s) s/he just made.
In other words:
Step 1: entity -- persisted --> entity table -- envers --> audit table
Step 2: return me the audit version just created
I have taken the approach of persisting the data first and then retrieving the latest revision info from the audit table in a separate call. This will eventually become inconsistent as the number of users increases.
MyEntity mySavedEntity = myEntityRepository.save(myEntity);
AuditReader reader = AuditReaderFactory.get(entityManager);
List<Number> revisions = reader.getRevisions(MyEntity.class, mySavedEntity.getId());
// ... get the latest revision and pass it back to the user ...
How do I attack this problem? - Thank you
You can make Envers log the version column of your entities by setting org.hibernate.envers.do_not_audit_optimistic_locking_field to false.
Then use that column and the value of the entity's version attribute after the transaction to retrieve the revision.
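A rough sketch of that idea, reusing the AuditReader from the question; it assumes the entity exposes a getVersion() accessor and that the audited version property is named "version":

// hibernate/persistence configuration (so Envers audits the @Version column):
// org.hibernate.envers.do_not_audit_optimistic_locking_field=false

AuditReader reader = AuditReaderFactory.get(entityManager);

// Look up the revision whose audit row carries the version value the entity
// ended up with after the save.
Number revision = (Number) reader.createQuery()
        .forRevisionsOfEntity(MyEntity.class, false, true)
        .add(AuditEntity.id().eq(mySavedEntity.getId()))
        .add(AuditEntity.property("version").eq(mySavedEntity.getVersion()))
        .addProjection(AuditEntity.revisionNumber())
        .getSingleResult();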
Maybe you can use the ValidityAuditStrategy (you can check it here); this will let you know your last valid entry and, in this case, get the last revision.

Create and find a jpa entity in the same transaction?

I'm using JBoss 7.1.1 and the default implementation of Hibernate that comes with it (4.0.1).
I have a message-driven bean that, in the same transaction, creates an entity and persists it using the entity manager. After that (still in the same transaction) I find the newly created entity and try to use the entity manager to lock it with PESSIMISTIC_WRITE, but I get an OptimisticLockException. Its root cause is as follows:
Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [some.package.name.EntityName#aaa1a1a0-d568-11e1-9f99-d5a00a0a12b6]
at org.hibernate.dialect.lock.PessimisticWriteSelectLockingStrategy.lock(PessimisticWriteSelectLockingStrategy.java:95)
at org.hibernate.persister.entity.AbstractEntityPersister.lock(AbstractEntityPersister.java:1785)
at org.hibernate.event.internal.AbstractLockUpgradeEventListener.upgradeLock(AbstractLockUpgradeEventListener.java:99)
at org.hibernate.event.internal.DefaultLockEventListener.onLock(DefaultLockEventListener.java:85)
at org.hibernate.internal.SessionImpl.fireLock(SessionImpl.java:693)
at org.hibernate.internal.SessionImpl.fireLock(SessionImpl.java:686)
at org.hibernate.internal.SessionImpl.access$1100(SessionImpl.java:160)
at org.hibernate.internal.SessionImpl$LockRequestImpl.lock(SessionImpl.java:2164)
at org.hibernate.ejb.AbstractEntityManagerImpl.lock(AbstractEntityManagerImpl.java:1093)
... 202 more
Any ideas why I can't look up the newly created entity? Also, how can I make it available for searching right after it is created? Using the merge method of the EM doesn't seem to help ...
My understanding of your question is that within your message driven bean's transaction you're doing the following:
1. Create entityA
2. Persist entityA
3. entityB = find entityA
4. lock(entityB, PESSIMISTIC_WRITE)
and step 4 is throwing an exception.
I think Hibernate may not have flushed the persist between 2 and 3 so at that point A (and B) have version 0. Hibernate is then flushing the persist of A at the start of the lock(), which means B now has a stale version.
You could try flushing the persist before the find (so entityManager.flush() after 2).
Or you should be able to skip the find, since entityManager.persist(entityA) makes entityA a managed object, so the following sequence may work:
1. Create entityA
2. Persist entityA
3. lock(entityA, PESSIMISTIC_WRITE)
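A minimal sketch of both suggestions inside the bean's transaction; EntityA and its id accessor are placeholders:

EntityA entityA = new EntityA();
entityManager.persist(entityA);

// Option 1: flush first, so the row (and its version) exists before re-finding it.
entityManager.flush();
EntityA entityB = entityManager.find(EntityA.class, entityA.getId());
entityManager.lock(entityB, LockModeType.PESSIMISTIC_WRITE);

// Option 2: skip the find; persist() already made entityA a managed instance.
entityManager.lock(entityA, LockModeType.PESSIMISTIC_WRITE);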

Hibernate/Spring HibernateTemplate.findByCriteria(DetachedCriteria dc) executes an SQL update on a view

I am trying to search a view based on given criteria. This view has a few fields for multiple different entities in my application that a user may want to search for.
When I enter the name of an entity I want to search for, I add a restriction for the name field to the detached criteria before calling .findByCriteria(). This causes .findByCriteria() to retrieve a list of results with the name I am looking for.
Also, when I look through my log, I can see Hibernate issuing a select statement.
I have now added another entity to my view, with a few searchable fields. When I try to search for a field related to this new entity, I get an exception in my log.
When I look through the log containing the exception, I can see Hibernate issuing a select statement with an update statement right after the select (I am not trying to update a record, just retrieve it in a list).
So why is Hibernate issuing an update when I am calling .findByCriteria() for my new entity?
org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:90)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
SQL that is executed:
Hibernate:
select
*
from
( select
this_.SEARCH_ID as SEARCH1_35_0_,
this_.ST_NM as ST24_35_0_
from
SEARCH_RESULT this_
where
this_.LOAN_TYPE=? )
where
rownum <= ?
DEBUG 2012-03-21 11:37:19,332 142195 (http-8181-3:org.springframework.orm.hibernate3.HibernateTemplate):
[org.springframework.orm.hibernate3.HibernateAccessor.flushIfNecessary(HibernateAccessor.java:389)]
Eagerly flushing Hibernate session
DEBUG 2012-03-21 11:37:19,384 142247 (http-8181-3:org.hibernate.SQL):
[org.hibernate.jdbc.util.SQLStatementLogger.logStatement(SQLStatementLogger.java:111)]
update
SEARCH_RESULT
set
ADDR_LINE1=?,
ASSGND_REGION=?,
BASE_DEAL_ID=?,
ST_NM=?
where
SEARCH_ID=?
There is probably an update happening because Hibernate is set up to do an autoflush before executing queries, so if the persistence context thinks it has dirty data, it will try to update it. Without seeing the code I can't be sure, but I'd guess that even though SEARCH_RESULT is a view, your corresponding Java object is annotated on the getters and has matching setters. Hibernate doesn't make a distinction between tables and views, and if you call a setter, Hibernate will assume it has data changes to update.
You can tweak how you map your Java objects to views by adding the @Immutable annotation (or @org.hibernate.annotations.Entity(mutable = false), depending on which version you're using). This should be enough to tell Hibernate not to flush changes. You can also annotate the fields directly and get rid of your setters so that consumers of the SearchResult object know it is read-only.
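A rough sketch of the read-only mapping suggested above, using the view and column names that appear in the logged SQL; treat it as illustrative rather than the poster's actual class:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.Immutable;

@Entity
@Immutable                      // Hibernate will never dirty-check or flush this entity
@Table(name = "SEARCH_RESULT")  // the database view
public class SearchResult {

    @Id
    @Column(name = "SEARCH_ID")
    private Long searchId;

    @Column(name = "ST_NM")
    private String stateName;

    // Getters only; with no setters, callers cannot put the object in a "dirty" state.
    public Long getSearchId() { return searchId; }
    public String getStateName() { return stateName; }
}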
