How to handle optimistic-locked item quantity updates with concurrent users? - java

I have a method which updates an Item's quantity.
MyEntity has a @Version-annotated version property of type long.
There is an item list endpoint, /items.
There is also an item update endpoint, /item/update (think of it as product stock: buying an item).
N concurrent users want to update the same item.
While updating, Hibernate throws org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1.
At the same time, the /items endpoint either fails to return data or responds with very high latency (and if enough users are updating at once, it fails with a timeout exception).
How can I handle this situation without losing any updates? (A good implementation sketch would be welcome.)

Unfortunately, JPA/Hibernate does not play nice with batch updates when there is contention: whenever any exception is thrown in the context of a Hibernate session, you're out of luck.
See section 13.2.3, "Exception handling", of https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch13.html#transactions-optimistic
Specifically:
No exception thrown by Hibernate can be treated as recoverable. Ensure
that the Session will be closed by calling close() in a finally block.
In the past I have had to migrate JPA code to QueryDSL or fall back to raw SQL and JdbcTemplate (something like How to do multiple inserts in database using spring JDBC Template batch?).
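For the stock-decrement case specifically, one way to sidestep optimistic-lock retries entirely is to let the database do the check in a single atomic UPDATE. Here is a minimal sketch with Spring's JdbcTemplate; the item table, its quantity column, and the StockRepository class are assumptions for illustration, not part of the original question:
import org.springframework.jdbc.core.JdbcTemplate;

public class StockRepository {
    private final JdbcTemplate jdbcTemplate;

    public StockRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Returns true if the purchase succeeded, false if the stock ran out.
    public boolean buyOne(long itemId) {
        int updated = jdbcTemplate.update(
            "UPDATE item SET quantity = quantity - 1 WHERE id = ? AND quantity > 0",
            itemId);
        return updated == 1; // 0 rows: a concurrent buyer won, or no stock left
    }
}
Because the guard (quantity > 0) and the decrement happen in one statement, concurrent buyers never race on a stale version; losers simply get 0 affected rows, which you can map to an "out of stock" response instead of an exception.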

Related

JPA use flush to trigger exception and halt execution

In a recent task, after I created an object I flushed the result to the database. The database table had a unique constraint, meaning that if I tried to flush the same record for the second time, I would get a ConstraintViolationException. A sample snippet is shown below:
createEntityAndFlush(result);
sendAsyncRequestToThirdSystem(param);
The code for the createEntityAndFlush:
private T createEntityAndFlush(final T entity) throws ServiceException {
    log.debug("Persisting {}", entity.getClass().getSimpleName());
    getEntityManager().persist(entity);
    getEntityManager().flush();
    return entity;
}
The reason I used flush was that I wanted to make sure a ConstraintViolationException would be thrown before the transaction finished, and thus before sendAsyncRequestToThirdSystem was called. But that was not the case: the exception surfaced only later, and sendAsyncRequestToThirdSystem had already been called by then.
To test the code in racing conditions, I used the ManagedExecutorService and created two runnable tasks (Future<?> submit(Runnable task)) to replicate the incoming request.
Eventually the problem was solved by taking a lock on a new table for each unique request id, but I would like to know where I went wrong in my first approach (e.g. wrong use of flush, or the ManagedExecutorService being responsible for the awkward behaviour). Thanks in advance!
The issue is that while flush() does flush the changes to the database, the transaction is still open, and the unique constraint is only checked when the transaction is committed (this may depend on the database, but it is at least the case with Postgres and any MVCC-based DB).
So you will need to make sure that createEntityAndFlush(result); runs in its own transaction, possibly with @Transactional(propagation = Propagation.REQUIRES_NEW) (or the equivalent, if not using Spring), to see whether the unique index is violated.
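A minimal sketch of that suggestion, assuming Spring transaction management; the method name is a placeholder:
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Persist-and-flush in its own transaction, so the commit (and the final
// unique-constraint check) completes before control returns to the caller.
@Transactional(propagation = Propagation.REQUIRES_NEW)
public <T> T createEntityInNewTransaction(final T entity) {
    getEntityManager().persist(entity);
    getEntityManager().flush();
    return entity;
}
One caveat: with Spring's proxy-based transactions, the REQUIRES_NEW method must be invoked through another bean's reference, not via this, or the new transaction will never start.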

hibernate returning null for auto-generated timestamps from mysql

I have a table in MySQL with a TIMESTAMP column that gets a default value of CURRENT_TIMESTAMP upon insertion, and another TIMESTAMP column that gets the current time upon update. I have these so that the timestamp columns are updated automatically on insert and update (which works fine).
I am using CXF, Hibernate/JPA, MySQL, and Jackson to build a web service.
I am simply creating a new record and retrieving it right away, as the code below shows.
Session session = getSession(); // sessionFactory.getCurrentSession();
String accountId = (String)session.save(account);
Account newAccount = (Account)session.load(Account.class, accountId);
logger.info("created timestamp=" + newAccount.getCreatedTimestamp());
After the above code runs, I can see that a new record is created in MySQL with the correct value for createdTimestamp. However, the logger.info() line above throws an exception because newAccount.getCreatedTimestamp() returns null. If I remove the logger.info() line, I can see that the newAccount object is populated with correct values except for createdTimestamp, which is null.
What's more odd is that after the above code runs (as part of an HTTP POST operation), I call an HTTP GET service which just fetches the record I inserted by doing
session.get(Account.class, accountId);
and it correctly shows the timestamps!
I tried sleeping before session.load() or session.get(), thinking there might be a delay in inserting the timestamp, but that didn't do much. Is there something special about Hibernate session management that does not retrieve columns that MySQL generates? What am I missing here? Please help.
Your actual save isn't committed until the session is flushed. Hibernate doesn't send anything to the database until the session is flushed or closed; that way, if an exception is thrown, a rollback doesn't have to touch the physical database, because the changes were simply never sent. However, if Hibernate detects that a query is going to receive stale data, it will automatically flush before running that query.
For example, you add a record to the database and immediately call a SELECT COUNT(*) query. Hibernate will flush the session (committing the record in the process) and then perform the SELECT COUNT(*) query on the now clean session ensuring that you get correct data. Hibernate didn't do this in your case because it saw that you were requesting the same object that you were trying to insert (in the same session) so it just returned you that reference.
If you are letting Hibernate manage its sessions (using a session factory or similar), I don't think you have to explicitly close sessions. I know that I don't, but I'm using Hibernate with Spring and the @Transactional annotation, which manages the actual Hibernate session. If you want an immediate insert, make your call to save() the last call in the method. Usually, once the method exits, a commit() will be called automatically.
All the load() will be doing is giving you the same instance of Account that you passed into session.save(). Either close or flush the session, then try the load() again, and your value should be set.
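If you need the database-generated value immediately, one common option (a sketch, not the only way) is to flush the insert and then refresh the entity so Hibernate re-reads the row:
Session session = getSession();
String accountId = (String) session.save(account);
session.flush();          // pushes the INSERT so MySQL fills in the column defaults
session.refresh(account); // re-reads the row, picking up createdTimestamp
logger.info("created timestamp=" + account.getCreatedTimestamp());
Alternatively, Hibernate's @Generated annotation on the timestamp properties tells it to re-select database-generated columns automatically after insert or update.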

Best way to handle multiple inserts

Currently we are using play 1.2.5 with Java and MySQL. We have a simple JPA model (a Play entity extending Model class) we save to the database.
SimpleModel test = new SimpleModel();
test.foo = "bar";
test.save();
At each web request we save multiple instances of the SimpleModel, for example:
JPAPlugin.startTx(false);
for (int i = 0; i < 5000; i++) {
    SimpleModel test = new SimpleModel();
    test.foo = "bar";
    test.save();
}
JPAPlugin.closeTx(false);
We are using the JPAPlugin.startTx and closeTx to manually start and end the transaction.
Everything works fine if there is only one request executing the transaction.
What we noticed is that if a second request tries to execute the loop simultaneously, the second request gets a "Lock wait timeout exceeded; try restarting transaction javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not insert: [SimpleModel]" since the first request locks the table but is not done until the second request times out.
This results in multiple:
ERROR AssertionFailure:45 - an assertion failure occured (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session)
org.hibernate.AssertionFailure: null id in SimpleModel entry (don't flush the Session after an exception occurs)
Another side effect is that CPU usage goes crazy during the inserts.
To fix this, I'm thinking to create a transaction aware queue to insert the entities sequentially but this will result in huge inserting times.
What is the correct way to handle this situation?
JPAPlugin in Play Framework 1.2.5 is not thread-safe, and you will not resolve this using this version of Play.
That problem is fixed in Play 2.x, but if you can't migrate, try using Hibernate directly.
You should not need to handle transactions yourself in this scenario.
Instead either put your inserts in a controller method or in an asynchronous job if the task is time consuming.
Jobs and controllers both handle transactions.
However check that this is really what you are trying to achieve. Each http request creating 5000 records does not seem realistic. Perhaps it would make more sense to have a container model with a collection?
Do you really need a transaction for the entire insert? Does it matter if the database is not locked during the data import?
You can simply create a job and execute it for each insert:
for (int i = 0; i < 5000; i++) {
    new Job() {
        public void doJob() {
            SimpleModel test = new SimpleModel();
            test.foo = "bar";
            test.save();
        }
    }.now();
}
This will create a single transaction for each insert and get rid of your database lock issue.
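If you do keep all the inserts in one request, another angle (a sketch, with assumptions: Play's JPA.em() for the current EntityManager, and hibernate.jdbc.batch_size configured, e.g. to 50) is JDBC batching with a periodic flush and clear, which keeps the session small and the lock window short:
// Assumes hibernate.jdbc.batch_size=50 in the Hibernate configuration.
JPAPlugin.startTx(false);
for (int i = 0; i < 5000; i++) {
    SimpleModel test = new SimpleModel();
    test.foo = "bar";
    test.save();
    if (i % 50 == 0) {
        JPA.em().flush(); // send the current batch of INSERTs to MySQL
        JPA.em().clear(); // detach flushed entities so the session stays small
    }
}
JPAPlugin.closeTx(false);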

StaleObjectStateException with Hibernate in read operation?

I am using Hibernate in a listener of Spring DefaultMessageLisenerContainer.
When I let the listener run with multiple threads, I often encounter this StaleObjectStateException for a read-only operation:
Query q = session.createQuery("SELECT k FROM Keyword k WHERE k.name = :name").setParameter("name", keywordName);
List<Keyword> kws = q.list();
The exception is thrown at q.list():
optimistic locking failed; nested exception is
org.hibernate.StaleObjectStateException: Row was updated or deleted by
another transaction (or unsaved-value mapping was incorrect)
Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [com.aurora.common.model.Keyword#7550]
at org.hibernate.persister.entity.AbstractEntityPersister.check(AbstractEntityPersister.java:1934)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2578)
at org.hibernate.persister.entity.AbstractEntityPersister.updateOrInsert(AbstractEntityPersister.java:2478)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2805)
at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:114)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:267)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:259)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:64)
at org.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:1175)
at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1251)
at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102)
It is really strange, as a read operation should read a fresh copy from the DB rather than check for a version conflict and throw StaleObjectStateException.
The name attribute is not the primary key of the Keyword object.
UPDATE:
My data access code: I am using Spring's HibernateTransactionManager, which supports thread-bound Hibernate sessions. The Hibernate session is retrieved through the SessionFactory.getCurrentSession() method.
Each transaction wraps one invocation of the listener, by assigning the HibernateTransactionManager to the MessageListenerContainer:
<jms:listener-container connection-factory="connectionFactory" concurrency="3-3" prefetch="6" transaction-manager="transactionManager">
    <jms:listener destination="${requests}" response-destination="${replies}" ref="chunkHandler" method="handleChunk" />
</jms:listener-container>
UPDATE:
As the suggested answer says, there might be other operations causing the StaleObjectStateException.
I have tried logging Session.isDirty() for all the operations prior to that point. They are all read operations. Interestingly, the session is actually marked as dirty after the select-keyword-by-name operation. The actual code is something like this:
for (String n : keywordNames) {
    Keyword k = keywordDao.getKeywordByName(n);
}
The session is dirty after the first iteration. (The KeywordDao.getKeywordByName implementation is as above.)
Any idea ? Thanks,
Khue.
I believe the other answers given are not correct. Accessing a row that does not exist does not give a StaleObjectStateException, and simply querying an entity is not going to trigger an optimistic lock on that entity either.
Further inspection on the stack trace will give some hints for the cause:
at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102): this is where you call query.list().
at org.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:1175): Hibernate determines whether an auto-flush of the session is required, and for some reason it believes it is (probably because you have previously updated some Keyword entity, or other entities, in the same session; that is something I cannot tell, honestly).
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2805): Hibernate then flushes all the changes in the session to the DB, and this is where the StaleObjectStateException occurs, meaning an optimistic concurrency check failed. That failure may or may not relate to the Keyword entity, since the flush simply writes every updated entity in the session to the DB. In your case, however, it is indeed the Keyword entity (Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [com.aurora.common.model.Keyword#7550]).
Please verify what caused the optimistic concurrency failure. Normally we simply rethrow the optimistic concurrency exception to the caller and let the caller decide whether to invoke the function again, but it all depends on your design.
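Since the exception is triggered by the auto-flush that runs before the query, a read-only path can avoid it by lowering the session's flush mode. A sketch, assuming the Hibernate 3 Session API that appears in the stack trace and a sessionFactory reference as in the question's setup:
import java.util.List;
import org.hibernate.FlushMode;
import org.hibernate.Query;
import org.hibernate.Session;

Session session = sessionFactory.getCurrentSession();
session.setFlushMode(FlushMode.COMMIT); // flush only at commit, never before queries

Query q = session.createQuery("SELECT k FROM Keyword k WHERE k.name = :name")
                 .setParameter("name", keywordName);
List<Keyword> kws = q.list(); // no longer auto-flushes the dirty Keyword
This suppresses the auto-flush on the read path, but it does not explain why a just-read Keyword is considered dirty in the first place; that is still worth investigating separately.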
The StaleStateException occurs when we try to access a row that doesn't exist. Check what your keyword.getName() returns.
Some other transaction could be updating the Keyword entity at the same time as your read, and your read operation could then see stale objects.
This is optimistic locking. You could consider pessimistic locking, but it will seriously affect performance.
I would suggest catching StaleObjectStateException and trying the read again.
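A minimal retry wrapper along those lines; each attempt must run in a fresh transaction and session, since a session that has thrown is not reusable. The keywordService bean and its findByName method are placeholders assumed to be @Transactional:
import java.util.List;
import org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException;

// Each call to findByName() opens a fresh transaction, so a failed
// attempt never reuses the broken session.
List<Keyword> kws = null;
for (int attempt = 1; attempt <= 3; attempt++) {
    try {
        kws = keywordService.findByName(keywordName);
        break;
    } catch (HibernateOptimisticLockingFailureException e) {
        if (attempt == 3) {
            throw e; // give up; with JMS, the rollback triggers redelivery anyway
        }
    }
}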

Does Hibernate do default optimistic locking for detached objects?

I have an application that does:
void deleteObj(Long id) {
    MyObj obj = getObjById(id);
    if (obj == null) {
        throw new CustomException("doesn't exist");
    }
    em.remove(obj); // em is a javax.persistence.EntityManager
}
I haven't explicitly configured optimistic locking with a version field. However, if two requests run in parallel, trying to delete the same object, then I sometimes get a HibernateOptimisticLockingFailureException and other times the CustomException.
Is it normal to get HibernateOptimisticLockingFailureException without explicitly setting up optimistic locking? Does Hibernate do default optimistic locking for detached objects?
What do you do to handle this HibernateOptimisticLockingFailureException? Retry, or inform the user with a default message like "server busy"?
First of all, HibernateOptimisticLockingFailureException is a result of Spring's persistence exception translation mechanism. It's thrown in response to StaleStateException, whose javadoc says:
Thrown when a version number or timestamp check failed, indicating that the Session contained stale data (when using long transactions with versioning). Also occurs if we try delete or update a row that does not exist.
Common sense says an optimistic lock exception occurs when a data modification statement returns an unexpected number of affected rows. That may be caused by a mismatch of the version value, or by the absence of the row altogether.
To make sure the entity was actually removed, you can flush the context with em.flush() right after removing, and catch the exception thrown by it (note that it should be a subclass of PersistenceException with StaleStateException as its cause).
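A sketch of that suggestion applied to the original method; the exception handling is deliberately coarse, and the parameter type is an assumption:
import javax.persistence.PersistenceException;

void deleteObj(Long id) {
    MyObj obj = getObjById(id);
    if (obj == null) {
        throw new CustomException("doesn't exist");
    }
    em.remove(obj);
    try {
        em.flush(); // forces the DELETE now instead of at commit
    } catch (PersistenceException e) {
        // typically wraps org.hibernate.StaleStateException when a
        // concurrent request removed the same row first
        throw new CustomException("doesn't exist");
    }
}
With this, both racing requests end up with the same CustomException instead of one of them surfacing a raw optimistic-locking error.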
