Prevent reading the same record with Hibernate - java

I use Hibernate 5 and Oracle 12.
With the query below I want to randomly select an Entity from a set of Entities:
Query query = getSession().createQuery("SELECT e FROM Entity e ... <CONDITIONS> ... AND ROWNUM = 1");
Optional<Entity> entity = query.list().stream().findAny();
// Change the entity in some way. The changes will also make sure that the entity won't appear in the next query run based on <CONDITIONS>
...
This works, but only if all the transactions that execute the code run sequentially. So I also want to make sure that an entity that has already been read won't be read by another transaction.
I tried it with locking:
Query query = getSession().createQuery("SELECT e FROM Entity e ... <CONDITIONS> ... AND ROWNUM = 1")
.setLockMode("this", LockMode.PESSIMISTIC_READ);
But it seems that Hibernate converts this construct to SELECT ... FOR UPDATE, which doesn't prevent the other transaction from reading the entity: it merely waits until the transaction holding the lock commits, and then applies its own changes on top.
Is it possible to set some kind of lock on the entity so that it disappears guaranteed from the query result in another transaction?
I've written some experimental code to understand how locking works in Hibernate. It simulates two transactions whose key steps (select and commit) can be executed in different order by adjusting the parameters of the transaction() method. This time Field is used instead of Entity, but that doesn't matter. Each transaction reads the same Field, updates its description attribute and commits.
private static final LockMode lockMode = LockMode.PESSIMISTIC_WRITE;

enum Order {T1_READS_EARLIER_COMMITS_LATER, T2_READS_EARLIER_COMMITS_LATER}

@Test
public void firstReadsTheOtherRejected() {
    ExecutorService es = Executors.newFixedThreadPool(3);
    // It looks like the transaction that commits first is the only transaction that can make changes.
    // The changes of the other one will be ignored.
    final Order order = Order.T1_READS_EARLIER_COMMITS_LATER;
    // final Order order = Order.T2_READS_EARLIER_COMMITS_LATER;
    es.execute(() -> {
        switch (order) {
            case T1_READS_EARLIER_COMMITS_LATER:
                transaction("T1", 1, 8);
                break;
            case T2_READS_EARLIER_COMMITS_LATER:
                transaction("T1", 4, 1);
                break;
        }
    });
    es.execute(() -> {
        switch (order) {
            case T1_READS_EARLIER_COMMITS_LATER:
                transaction("T2", 4, 1);
                break;
            case T2_READS_EARLIER_COMMITS_LATER:
                transaction("T2", 1, 8);
                break;
        }
    });
    es.shutdown();
    try {
        es.awaitTermination(1, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
private void transaction(String name, int delayBeforeRead, int delayBeforeCommit) {
    Transaction tx = null;
    Session session = null;
    try {
        session = factory.openSession();
        tx = session.beginTransaction();
        try {
            TimeUnit.SECONDS.sleep(delayBeforeRead);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        Query query = session.createQuery("SELECT f FROM Field f WHERE f.description=?1")
                .setLockMode("this", lockMode);
        query.setString("1", DESC);
        Field field = (Field) query.uniqueResult();
        String description1 = field.getDescription();
        System.out.println(name + " : FIELD READ " + description1);
        try {
            TimeUnit.SECONDS.sleep(delayBeforeCommit);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        field.setDescription(name);
        session.update(field);
        System.out.println(name + " : FIELD UPDATED");
        tx.commit();
    } catch (Exception e) {
        if (tx != null) {
            tx.rollback();
        }
        fail(); // fail() throws an AssertionError, so roll back before calling it
    } finally {
        session.close();
    }
    System.out.println(name + " : COMMITTED");
}
and the output:
T1 : FIELD READ This is a field for testing
Apr 19, 2019 5:28:01 PM org.hibernate.loader.Loader determineFollowOnLockMode
WARN: HHH000445: Alias-specific lock modes requested, which is not currently supported with follow-on locking; all acquired locks will be [PESSIMISTIC_WRITE]
Apr 19, 2019 5:28:01 PM org.hibernate.loader.Loader shouldUseFollowOnLocking
WARN: HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
Hibernate: select field0_.ID as ID1_9_, field0_.DESCRIPTION as DESCRIPTION2_9_, field0_.NAME as NAME3_9_, field0_.TYPE as TYPE4_9_ from FIELD field0_ where field0_.DESCRIPTION=?
Hibernate: select ID from FIELD where ID =? for update
T1 : FIELD UPDATED
Hibernate: update FIELD set DESCRIPTION=?, NAME=?, TYPE=? where ID=?
T2 : FIELD READ This is a field for testing
T1 : COMMITTED
Apr 19, 2019 5:28:07 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl stop
T2 : FIELD UPDATED
Hibernate: update FIELD set DESCRIPTION=?, NAME=?, TYPE=? where ID=?
INFO: HHH000030: Cleaning up connection pool [jdbc:oracle:thin:@localhost:1521:oracle]
T2 : COMMITTED
Process finished with exit code 0
After the execution the column description contains T2. It looks like PESSIMISTIC_WRITE mode works: the transaction that wrote first won, and that was T2. But what happened to T1? T1 : COMMITTED also appears in the output. As long as T1 doesn't change anything that's acceptable, but I need an indicator that T1 failed, so that I can retry the read/select.
I was wrong. I ran the code multiple times and got different results: sometimes the column description contains T1, sometimes T2.

You say you want to make sure that other transactions will NOT READ the queried entities.
For that you need LockMode.PESSIMISTIC_WRITE: it blocks both reads (other locking reads) and updates. LockMode.PESSIMISTIC_READ blocks only updates.
A lock with LockModeType.PESSIMISTIC_WRITE can be obtained on an entity instance to force serialization among transactions attempting to update the entity data.

A lock with LockModeType.PESSIMISTIC_WRITE can be used when querying data and there is a high likelihood of deadlock or update failure among concurrent updating transactions.
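For reference, a minimal sketch of acquiring that lock through the standard JPA API; the entity name and the `processed` condition are assumptions standing in for the question's `<CONDITIONS>`:

```java
// Sketch only: assumes an open EntityManager "em" inside an active transaction.
// On Oracle, PESSIMISTIC_WRITE is rendered as SELECT ... FOR UPDATE, so a second
// transaction running the same query BLOCKS on the locked row until commit and
// then sees the committed changes -- it does not silently skip the row.
Entity next = em.createQuery(
        "SELECT e FROM Entity e WHERE e.processed = false", Entity.class)
    .setLockMode(LockModeType.PESSIMISTIC_WRITE)
    .setMaxResults(1)
    .getSingleResult();
```

Note that combining a row limit with a pessimistic lock is exactly what triggers the follow-on-locking warnings (HHH000444/HHH000445) seen in the log above, because Oracle cannot lock and limit in one statement.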

Related

How to continue processing after a constraint violation exception?

We have a Kafka consumer in a Java Spring Boot service that has multiple instances in multiple clusters (all with the same consumer group id). The consumer saves data to the database (SQL Server).
The code checks whether a record exists in the ItemSet table before saving to the database. The actual data in the payload gets saved in a child table ItemValue, not in the parent ItemSet. The relational table hierarchy is one-to-many in this order: ItemSet -> ItemName -> ItemValue. The ItemSet table has a unique constraint on the (department id, season) combination to prevent duplicate adds.
I need to do some processing after catching this exception to ensure that the incoming data still gets saved under the existing ItemSet and doesn't get lost. I am using Spring Data JPA and as soon as I catch the exception and try to retrieve the existing record I end up getting:
org.hibernate.AssertionFailure: null id in ItemSet entry (don't flush the Session after an exception occurs).
The getItemSet() in the catch block blows up.
What is the best way to overcome these race conditions?
ItemSet savedItemSet = null;
try {
    String seasonName = itemSet.getSeasonName();
    Long seasonYear = itemSet.getSeasonYear();
    Long departmentId = itemSet.getDepartment().getId();
    List<ItemSet> itemSets = attributeSetRepository.findBySeasonNameAndSeasonYearAndDepartmentId(
            seasonName, seasonYear, departmentId);
    LOGGER.info("Found {} item sets corresponding to season name : {}, season year : {}, "
            + "department id : {}", itemSets.size(), seasonName, seasonYear, departmentId);
    if (CollectionUtils.isEmpty(itemSets)) {
        savedItemSet = itemSetRepository.save(itemSet);
    } else {
        return new CreatedItemSet(itemSets.get(0).getId());
    }
} catch (PersistenceException | DataIntegrityViolationException e) {
    LOGGER.error("An exception occurred while saving itemSet set", e);
    if (e.getCause() instanceof ConstraintViolationException) {
        String seasonName = itemSet.getSeasonName();
        Long seasonYear = itemSet.getSeasonYear();
        Long deptId = itemSet.getDepartment().getId();
        LOGGER.info("A duplicate item set found in the database corresponding "
                + "to season name : {}, season year : {} and department : {}",
                seasonName, seasonYear, deptId);
        ExistingItemSet existingItemSet = getItemSet(seasonName,
                seasonYear, deptId);
        if (existingItemSet == null) {
            LOGGER.info("No item set found");
            return null;
        }
        return new CreatedItemSet(existingItemSet.getId());
    }
}
You can't "continue": after a constraint violation the transaction is marked for rollback and the persistence context is unusable.
You can either try to avoid the constraint violation by checking whether the DB contains the entry before persisting/updating, or you run the recovery code in a separate transaction when a constraint violation happens.
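The separate-transaction route can be sketched with Spring as follows. This is a sketch, not the original poster's code: the service shape, the method names, and the self-injection trick (so the transactional proxy is used, sometimes requiring `@Lazy`) are all assumptions.

```java
import java.util.Optional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ItemSetService {

    @Autowired private ItemSetRepository itemSetRepository;
    @Autowired private ItemSetService self; // call through the proxy so REQUIRES_NEW applies

    public CreatedItemSet getOrCreate(ItemSet itemSet) {
        try {
            return new CreatedItemSet(self.saveInNewTx(itemSet).getId());
        } catch (DataIntegrityViolationException e) {
            // The failed transaction is already rolled back; look up the winning
            // row in a fresh transaction with a fresh persistence context.
            return self.findExistingInNewTx(itemSet)
                    .map(existing -> new CreatedItemSet(existing.getId()))
                    .orElse(null);
        }
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public ItemSet saveInNewTx(ItemSet itemSet) {
        return itemSetRepository.save(itemSet);
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public Optional<ItemSet> findExistingInNewTx(ItemSet itemSet) {
        return itemSetRepository
                .findBySeasonNameAndSeasonYearAndDepartmentId(
                        itemSet.getSeasonName(), itemSet.getSeasonYear(),
                        itemSet.getDepartment().getId())
                .stream().findFirst();
    }
}
```

The point is only the transaction boundaries: the insert and the fallback lookup each run in their own transaction, so the `AssertionFailure` from reusing a dead session cannot occur.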

SELECT FOR UPDATE in Hibernate and Oracle with multiple threads

I am having trouble getting SELECT FOR UPDATE to work in Hibernate and Oracle.
When I have two threads with one EntityManager per thread, the second thread seems to be able to read the same row as the first. I can see this by adding traces, which show that the second thread reads the same row while the first is in between query.getSingleResult() and entityManager.getTransaction().commit(). My expectation was that once a SELECT FOR UPDATE has been issued, no one else should be able to read the same row until it is committed by the first thread. But this is not happening.
I can resort to an alternative implementation. What I want to achieve is that only one process can read and update a row in an Oracle table, so that it behaves like a queue, given that the consumer processes can be on different machines.
Here is the minimum example of my code:
public MyMessage getNextMessage() {
    String sql = "SELECT * FROM MESSAGE WHERE MESSAGE_STATUS = 'Pending' AND rownum=1 FOR UPDATE OF MESSAGE_STATUS";
    entityManager.getTransaction().begin();
    Query query = entityManager.createNativeQuery(sql, MyMessage.class);
    query.setLockMode(LockModeType.PESSIMISTIC_WRITE);
    MyMessage msg = null;
    try {
        msg = (MyMessage) query.getSingleResult();
    } catch (NoResultException nodatafound) {
        // Ignore when no data found, just return null
    }
    if (msg != null) {
        msg.setMessageStatus("In Progress");
        entityManager.persist(msg);
    }
    entityManager.getTransaction().commit();
    return msg;
}
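Oracle supports the queue behavior described here directly with `FOR UPDATE SKIP LOCKED`: rows locked by one consumer are skipped, not waited on, by the others. A sketch based on the code above, under the same assumptions about the MESSAGE table and MyMessage entity:

```java
public MyMessage getNextMessage() {
    // SKIP LOCKED makes rows locked by other consumers invisible to this poller
    // instead of blocking on them. Caveat: ROWNUM is applied before the lock
    // attempt, so if the first matching row is locked this call can return null
    // even though other pending rows exist; a dedicated queue (e.g. Oracle AQ)
    // avoids that edge case.
    String sql = "SELECT * FROM MESSAGE WHERE MESSAGE_STATUS = 'Pending' "
            + "AND ROWNUM = 1 FOR UPDATE SKIP LOCKED";
    entityManager.getTransaction().begin();
    MyMessage msg = null;
    try {
        msg = (MyMessage) entityManager
                .createNativeQuery(sql, MyMessage.class)
                .getSingleResult();
        msg.setMessageStatus("In Progress");
    } catch (NoResultException noRowFreeToLock) {
        // nothing pending, or the only candidate row is locked by another consumer
    }
    entityManager.getTransaction().commit();
    return msg;
}
```

Hibernate also exposes this lock flavor as `LockMode.UPGRADE_SKIPLOCKED` if you prefer to stay off native SQL.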

How to Do Batch Update in Hibernate Effectively

I have read many articles and found several ways to do batch processing.
One of them is using flush and clear; the code follows:
long t1 = System.currentTimeMillis();
Session session = getSession();
Transaction transaction = session.beginTransaction();
try {
    Query query = session.createQuery("FROM PersonEntity WHERE id > " + lastMaxId + " ORDER BY id");
    query.setMaxResults(1000);
    rows = query.list();
    int count = 0;
    if (rows == null || rows.size() == 0) {
        return;
    }
    LOGGER.info("fetched {} rows from db", rows.size());
    for (Object row : rows) {
        PersonEntity personEntity = (PersonEntity) row;
        personEntity.setName(randomAlphaNumeric(30));
        lastMaxId = personEntity.getId();
        session.saveOrUpdate(personEntity);
        if (++count % 50 == 0) {
            session.flush();
            session.clear();
            LOGGER.info("Flushed and Cleared");
        }
    }
} finally {
    if (session != null && session.isOpen()) {
        LOGGER.info("Closing Session and committing transaction");
        transaction.commit();
        session.close();
    }
}
long t2 = System.currentTimeMillis();
LOGGER.info("time taken {}s", (t2 - t1) / 1000);
In the above code we process records in batches of 1000 and update them in the same transaction.
That is fine when we only have to do the batch update.
But I have the following questions regarding it:
There can be a case where some other thread (T2) is accessing the same set of rows for runtime update operations; in that case T2 stays stuck until the batch of 1000 is committed.
So how should we handle this case?
Possible thoughts/solutions from me:
I think we can do the update in a different session with a small batch of, say, 50.
Use a different StatelessSession for the update and commit the transactions one by one, closing the session when a batch of 1000 completes.
Please help me find a better solution.
Do you mean to say this:
there is a batch update in progress inside a transaction
in the meanwhile another thread starts updating one of the records that's in the batch as well
because of this, the batch waits until the update in point 2 is complete, which causes the rest of the records in the batch to wait too.
So far, it all appears fine. However, the important point here is that the transaction was used to make the update to a large set of records "faster". Usually, transactions are used to ensure consistency/atomicity.
How does one design this piece: fast updates to multiple records in one go, with atomicity not being the primary criterion, while a likely update to a record in the batch is also requested by another thread?
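One common compromise is to make the transaction scope equal to one small chunk, so locks are held for at most one chunk's worth of work. The chunking itself is plain Java and can be sketched independently of Hibernate; where the flush/commit goes is indicated in the comments:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedUpdate {

    // Split the fetched rows into chunks. Each chunk would then be updated and
    // committed in its own short transaction, so another thread touching one of
    // these rows waits at most for one chunk, not for the whole 1000-row batch.
    public static <T> List<List<T>> partition(List<T> rows, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(rows.subList(i, Math.min(i + chunkSize, rows.size()))));
        }
        return chunks;
    }
}
```

Usage would look like: for each chunk of, say, 50 rows, begin a transaction, apply the updates, commit. The price is that atomicity now holds per chunk rather than per batch, which matches the premise above that atomicity is not the primary criterion.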

EntityManager throws OptimisticLockException when try to delete locked entity in same transaction

Here is my code:
EntityManager em = JPAUtil.createEntityManager();
try {
    EntityTransaction tx = em.getTransaction();
    try {
        // do some stuff here
        tx.begin();
        List<MyEntity> es = em.createNamedQuery("getMyEntities", MyEntity.class).getResultList();
        for (MyEntity e : es) {
            em.lock(e, LockModeType.OPTIMISTIC);
        }
        if (es.size() != 0) {
            em.remove(es.get(0));
        }
        tx.commit();
    } finally {
        if (tx.isActive()) {
            tx.rollback();
        }
    }
} finally {
    em.close();
}
When I execute that code I get:
...
..........
Caused by: javax.persistence.OptimisticLockException: Newer version [null] of entity [[MyEntity#63]] found in database
at org.hibernate.ejb.AbstractEntityManagerImpl.wrapLockException(AbstractEntityManagerImpl.java:1427)
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1324)
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1300)
at org.hibernate.ejb.TransactionImpl.commit(TransactionImpl.java:80)
... 23 more
Can anybody explain to me why that happens?
I suppose that you added the @Version-annotated column after you already had some entries in the database, so null values were created for the already-existing records.
Now Hibernate can't compare the versions.
I would try setting the version column to 1 for all null-versioned entities.
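If that is the cause, a one-off backfill makes the existing rows versionable again. A sketch; the table and column names are assumptions, since the real mapping isn't shown:

```java
// Give pre-existing rows a starting version so optimistic locking has something
// to compare. Run once, inside a transaction, outside normal application traffic.
int fixed = em.createNativeQuery(
        "UPDATE MY_ENTITY SET VERSION = 1 WHERE VERSION IS NULL")
    .executeUpdate();
```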
I think this error is thrown because I try to delete a record that has a lock on it. Deleting the row sets the version to null, while the version in the database still holds the former number. Hibernate core apparently considers a null value unreliable for this kind of operation.
If I have to do this kind of operation, I have to release the lock on this entity first.
Anyone with better knowledge of this should clarify the issue.

Obtain id of an insert in the same statement [duplicate]

This question already has answers here:
How to get the insert ID in JDBC?
(14 answers)
Closed 7 years ago.
Is there any way to insert a row in a table and get the newly generated ID, in only one statement? I want to use JDBC, and the ID will be generated by a sequence or an autoincrement field.
Thanks for your help.
John Pollancre
using getGeneratedKeys() (the statement must be prepared with Statement.RETURN_GENERATED_KEYS):
PreparedStatement pstmt = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS);
pstmt.executeUpdate();
ResultSet resultSet = pstmt.getGeneratedKeys();
if (resultSet != null && resultSet.next()) {
    lastId = resultSet.getInt(1);
}
You can use the RETURNING clause to get the value of any column you have updated or inserted into. It works with triggers (i.e. you get the values actually inserted, after the execution of triggers). Consider:
SQL> CREATE TABLE a (ID NUMBER PRIMARY KEY);
Table created
SQL> CREATE SEQUENCE a_seq;
Sequence created
SQL> VARIABLE x NUMBER;
SQL> BEGIN
2 INSERT INTO a VALUES (a_seq.nextval) RETURNING ID INTO :x;
3 END;
4 /
PL/SQL procedure successfully completed
x
---------
1
SQL> /
PL/SQL procedure successfully completed
x
---------
2
Actually, I think nextval followed by currval does work. Here's a bit of code that simulates this behaviour with two threads: one first does a nextval, then a currval, while a second thread does a nextval in between.
public void checkSequencePerSession() throws Exception {
    final Object semaphore = new Object();
    Runnable thread1 = new Runnable() {
        public void run() {
            try {
                Connection con = getConnection();
                Statement s = con.createStatement();
                ResultSet r = s.executeQuery("SELECT SEQ_INV_BATCH_DWNLD.nextval AS val FROM DUAL ");
                r.next();
                System.out.println("Session1 nextval is: " + r.getLong("val"));
                synchronized (semaphore) {
                    semaphore.notify();
                }
                synchronized (semaphore) {
                    semaphore.wait();
                }
                r = s.executeQuery("SELECT SEQ_INV_BATCH_DWNLD.currval AS val FROM DUAL ");
                r.next();
                System.out.println("Session1 currval is: " + r.getLong("val"));
                con.commit();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    Runnable thread2 = new Runnable() {
        public void run() {
            try {
                synchronized (semaphore) {
                    semaphore.wait();
                }
                Connection con = getConnection();
                Statement s = con.createStatement();
                ResultSet r = s.executeQuery("SELECT SEQ_INV_BATCH_DWNLD.nextval AS val FROM DUAL ");
                r.next();
                System.out.println("Session2 nextval is: " + r.getLong("val"));
                con.commit();
                synchronized (semaphore) {
                    semaphore.notify();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    Thread t1 = new Thread(thread1);
    Thread t2 = new Thread(thread2);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
}
The result is as follows:
Session1 nextval is: 47
Session2 nextval is: 48
Session1 currval is: 47
I couldn't comment, otherwise I would have added to Vinko Vrsalovic's post:
The id generated by a sequence can be obtained via
insert into table values (sequence.NextVal, otherval)
select sequence.CurrVal
ran in the same transaction so as to get a consistent view.
Updating the sequence after getting a nextval from it is an autonomous transaction. Otherwise another session would get the same value from the sequence. So getting currval will not get the inserted id if another session has selected from the sequence in between the insert and the select.
Regards,
Rob
The value of the auto-generated ID is not known until after the INSERT is executed, because other statements could be executing concurrently and the RDBMS gets to decide how to schedule which one goes first.
Any function you call in an expression in the INSERT statement would have to be evaluated before the new row is inserted, and therefore it can't know what ID value is generated.
I can think of two options that are close to what you're asking:
Write a trigger that runs AFTER INSERT, so you have access to the generated ID key value.
Write a procedure to wrap the insert, so you can execute other code in the procedure and query the last generated ID.
However, I suspect what you're really asking is whether you can query for the last generated ID value by your current session even if other sessions are also inserting rows and generating their own ID values. You can be assured that every RDBMS that offers an auto-increment facility offers a way to query this value, and it tells you the last ID generated in your current session scope. This is not affected by inserts done in other sessions.
I think you'll find this helpful:
I have a table with an auto-incrementing id. From time to time I want to insert rows into this table, but want to be able to know what the pk of the newly inserted row is.
String query = "BEGIN INSERT INTO movement (doc_number) VALUES ('abc') RETURNING id INTO ?; END;";
OracleCallableStatement cs = (OracleCallableStatement) conn.prepareCall(query);
cs.registerOutParameter(1, OracleTypes.NUMBER );
cs.execute();
System.out.println(cs.getInt(1));
Source: Thread: Oracle / JDBC Error when Returning values from an Insert
I couldn't comment, otherwise I would have just added to dfa's post, but the following is an example of this functionality with straight JDBC:
http://www.ibm.com/developerworks/java/library/j-jdbcnew/
However, if you are using something such as Spring, it will mask a lot of the gory details for you. If that is of any assistance, see Spring's reference documentation, Chapter 11, which covers JDBC. Using it has saved me a lot of headaches.
