JPA - Pessimistic Lock - What happens when the lock exists?

Background Info:
I have an issue that is symptomatic of an entity update not going through. Reviewing my logs, I can see the update SQL statements I expected, but they are almost simultaneous (0.012 seconds apart), and the application uses a pessimistic read lock when updating the entity.
That leads me to my question:
What is the expected behavior when a pessimistic lock exists? Should I still expect to see multiple update queries? I should expect a PessimisticLockException to be thrown, right? Are there any other indicators I should look for?
Hibernate is my JPA implementation.

Pessimistic locks are actually propagated to the DB level as SQL queries (for example SELECT ... FOR UPDATE on many databases), so check the executed queries to compare.
If a pessimistic lock exists, the application waits on the database until the lock is released, so it is not mandatory for an exception to be thrown (but one can be, for example on a lock timeout).
Now about the exceptions:
/*
PessimisticLockException if pessimistic locking fails and the transaction is rolled back
LockTimeoutException if pessimistic locking fails and only the statement is rolled back
*/
public <T> T find(Class<T> entityClass, Object primaryKey, LockModeType lockMode);
For the other EntityManager methods, these two exceptions are thrown in similar situations.
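As a rough sketch (assuming a resource-local EntityManager em and a hypothetical entity MyEntity), acquiring the lock with a bounded wait and handling both failure modes could look like this; the javax.persistence.lock.timeout hint is standard JPA 2, but support varies by provider and database:
import javax.persistence.LockModeType;
import javax.persistence.LockTimeoutException;
import javax.persistence.PessimisticLockException;
import java.util.HashMap;
import java.util.Map;

Map<String, Object> hints = new HashMap<String, Object>();
hints.put("javax.persistence.lock.timeout", 5000); // milliseconds to wait for the row lock

em.getTransaction().begin();
try {
    MyEntity e = em.find(MyEntity.class, id, LockModeType.PESSIMISTIC_WRITE, hints);
    e.setValue(newValue);          // competing updaters block on the row lock meanwhile
    em.getTransaction().commit();  // the lock is released here
} catch (LockTimeoutException ex) {
    // only the statement was rolled back; the transaction itself may still be usable
} catch (PessimisticLockException ex) {
    // the whole transaction was rolled back; discard the EntityManager
} finally {
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
}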

Pessimistic locking prevents objects from being updated simultaneously. Instead, the updates form a sort of chain: if the lock already exists, the next update waits until the lock is released.
Thus, an exception is not the expected outcome of a pessimistic lock. The expected behavior is the elimination of the concurrency described above.
For further reading you can refer to this and this.
In your case it seems that your update is not going through because it is overwritten by some later update.

Related

Avoid optimistic locking in java web application

I have a problem concerning a Java optimistic locking exception. I have a service class that is instantiated (by Spring) for every new user session, and it contains a non-static method that performs DB operations. I wonder how I can avoid an optimistic locking exception on the entity that is read/written to the DB. I would like to achieve a similar result as a synchronized method would, but I guess using "synchronized" is out of the question, since the method is not static and would have no effect when users have their own instances of the service? Can I somehow detect whether a new version of the entity has been saved to the DB, and then retrieve, edit, and save the new version? I want the transaction to hold until it succeeds, even if that implies the transaction has to wait for other transactions. My first idea was to put the transaction code into a try-catch block and retry the transaction (read and write) if an optimistic locking exception is thrown. Is that solution "too easy", or?
Optimistic locking is used to improve performance while still avoiding messing up the data.
If there is an optimistic lock failure, the user that failed the update needs to decide whether to do the operation again. You can't automate that, since it depends entirely on what was changed and how.
So no, retrying the transaction with a try/catch is not a "too easy" solution. It's not a solution at all; it would be a serious (and dumb) bug.
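For what it's worth, if you do catch the failure, the sane pattern is to roll back, re-read the current state, and hand the conflict back to the user rather than silently replaying the write. A rough sketch, with a hypothetical Item entity (carrying a @Version attribute) and an EntityManagerFactory emf:
EntityManager em = emf.createEntityManager();
try {
    em.getTransaction().begin();
    Item item = em.find(Item.class, itemId);
    item.setPrice(userEnteredPrice);       // the user's intended change
    em.getTransaction().commit();          // the version check happens here
} catch (RollbackException e) {
    // with resource-local transactions the version conflict usually arrives
    // wrapped: e.getCause() is the OptimisticLockException
    EntityManager fresh = emf.createEntityManager();
    Item current = fresh.find(Item.class, itemId);  // re-read the latest version
    // show `current` to the user and let them decide whether their change
    // still makes sense against what somebody else just saved
    fresh.close();
} finally {
    em.close();   // the failed persistence context must be discarded either way
}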

JPA Optimistic locking

I have some trouble understanding the OPTIMISTIC lock mode.
Let's consider the following scenario: "Thread A creates a transaction and reads a list of all Users from table USERS. Thread B updates a user in table USERS. Thread B commits. Thread A commits."
Assume I am using OPTIMISTIC locking. Would the second commit in this case cause an OptimisticLockException to be thrown?
Because according to this docu: "During commit (and flush), ObjectDB checks every database object that has to be updated or deleted, and compares the version number of that object in the database to the version number of the in-memory object being updated. The transaction fails and an OptimisticLockException is thrown if the version numbers do not match".
No exception should be thrown, because the version number is checked only for those entities that have to be updated or deleted.
BUT
This docu is saying: "JPA Optimistic locking allows anyone to read and update an entity, however a version check is made upon commit and an exception is thrown if the version was updated in the database since the entity was read. "
According to this description the exception should be thrown, because the version check is made upon commit (I assume they mean every commit, including commits after a plain read).
I want the described scenario not to throw any concurrency exception; it is no problem if Thread A returns a list of users that is not the most recent. So is it correct to use optimistic locking, and if not, which lock type should I use?
The two links you gave say the same thing. If an entity is being updated in TransactionA and it has been modified in the DB by TransactionB since TransactionA read it, then an OptimisticLockException will be thrown.
In your case you are retrieving a list of all Users in threadA, but you update only one. You will get the OptimisticLockException only if the same entity was changed and committed (or a commit was attempted) in threadB.
You would want an exception to be thrown in this case, because otherwise only one of the updates would succeed: the last one to commit would simply override the earlier commit, and which one ends up last is somewhat indeterminate (sometimes threadA, sometimes threadB), so the DB contents would really not be as intended. Locking prevents this undesired behaviour.
If your application's transactions regularly collide on the same data, consider using pessimistic locking, also described in https://blogs.oracle.com/carolmcdonald/entry/jpa_2_0_concurrency_and
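To make the collision concrete, here is a rough sketch of the interleaving using two persistence contexts in place of the two threads (a hypothetical User entity with a @Version attribute is assumed; a sketch of such an entity follows the next answer):
EntityManager emA = emf.createEntityManager();   // plays threadA
EntityManager emB = emf.createEntityManager();   // plays threadB

emA.getTransaction().begin();
User a = emA.find(User.class, 1L);               // threadA reads, version = 1

emB.getTransaction().begin();
User b = emB.find(User.class, 1L);               // threadB reads, version = 1
b.setName("changed by B");
emB.getTransaction().commit();                   // succeeds, version becomes 2

a.setName("changed by A");
try {
    emA.getTransaction().commit();               // version check: expected 1, found 2
} catch (RollbackException e) {
    // the cause is an OptimisticLockException: the row changed after emA read it
}

// Had threadA only read the User and never modified it, its commit would
// succeed: the version check applies only to updated or deleted entities.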
Optimistic locking is very simple to understand:
Each entity has a timestamp / version number attribute.
Each time an entity is updated, the timestamp / version number is updated too.
When you update an entity, the persistence layer first reads the current timestamp in the database; if it has changed since the time you loaded the entity, an OptimisticLockException is thrown, otherwise the entity is updated along with the new timestamp / version number.
If you have no risk of concurrent updates, then you shouldn't use any lock mechanism, because even the optimistic one has a performance impact (the timestamp has to be checked before updating the entity).
Pessimistic locking is a scalability issue because it allows only one access for update at a time on a given resource (and so other non-read-only accesses are blocked), but it prevents operations from failing. If you can't afford to lose an operation, go with pessimistic locking if scalability is not an issue; otherwise handle the concurrency mitigation at the business level.
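In JPA the timestamp / version number attribute described above is a field annotated with @Version; a minimal sketch of such an entity (names are hypothetical):
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class User {

    @Id
    private Long id;

    private String name;

    @Version               // maintained by the provider: incremented on every update
    private long version;  // and compared against the database copy at flush/commit

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}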

Entity classes and Record locking

I am looking at the EntityManager API, and I am trying to understand the order in which I should lock a record. Basically, when a user decides to edit a record, my code is:
entityManager.getTransaction().begin();
r = entityManager.find(Route.class, r.getPrimaryKey());
r.setRoute(txtRoute.getText());
entityManager.persist(r);
entityManager.getTransaction().commit();
From my trial and error, it appears I need to set WWEntityManager.entityManager.lock(r, LockModeType.PESSIMISTIC_READ); after the .begin().
I naturally assumed that I would use WWEntityManager.entityManager.lock(r, LockModeType.NONE); after the commit, but it gave me this:
Exception Description: No transaction is currently active
I haven't tried putting it before the commit yet, but wouldn't that defeat the purpose of locking the record, since my goal is to avoid colliding records in case 50 users try to commit a change at once?
Any help as to how to I can lock the record for the duration of the edit, is greatly appreciated!
Thank You!
Performing locking inside a transaction makes perfect sense. The lock is automatically released at the end of the transaction (commit / rollback). Locking outside of a transaction (in the context of JPA) does not make sense, because releasing the lock is tied to the end of the transaction. Besides, locking after the changes are performed and the transaction is committed would not make much sense anyway.
It may be that you are using pessimistic locks for a purpose other than what they are really for. If my assumption is wrong, then you can ignore the end of this answer. When your transaction holds a pessimistic read lock on an entity (row), the following is guaranteed:
No dirty reads: other transactions cannot see the results of operations you performed on locked rows.
Repeatable reads: no modifications from other transactions.
If your transaction modifies the locked entity, PESSIMISTIC_READ is upgraded to PESSIMISTIC_WRITE, or the transaction fails if the lock cannot be upgraded.
The following coarsely describes the scenario, obtaining the lock at the beginning of the transaction:
entityManager.getTransaction().begin();
r = entityManager.find(Route.class, r.getPrimaryKey(),
LockModeType.PESSIMISTIC_READ);
//from this moment on we can safely read r again and expect no changes
r.setRoute(txtRoute.getText());
entityManager.persist(r);
//When changes are flushed to database, provider must convert lock to
//PESSIMISTIC_WRITE, which can fail if concurrent update
entityManager.getTransaction().commit();
Often databases do not have separate support for pessimistic read locks, so you may actually be holding a write lock on the row from the PESSIMISTIC_READ onwards. Also, using PESSIMISTIC_READ makes sense only if no changes to the locked row are expected. In the case above changes are always made, so using PESSIMISTIC_WRITE from the beginning is reasonable, because it saves you from the risk of a failed lock upgrade on a concurrent update.
In many cases it also makes sense to use optimistic instead of pessimistic locking. Good examples and some comments about choosing between the locking strategies can be found in: Locking and Concurrency in Java Persistence 2.0
Great work attempting to be safe in write locking your changing data. :) But you might be going overboard / doing it the long way.
First a minor point: the call to persist() isn't needed. For an update, just modify the attributes of the entity returned from find(); the entityManager automatically knows about the changes and writes them to the DB during commit. persist() is only needed when you create a new object and write it to the DB for the first time (or add a new child object to a parent relation and wish to cascade the persist via cascade=PERSIST).
Most applications have a low probability of 'clashing' concurrent updates to the same data by different threads which have their own separate transactions and separate persistent contexts. If this is true for you and you would like to maximise scalability, then use an optimistic write lock, rather than a pessimistic read or write lock. This is the case for the vast majority of web applications. It gives exactly the same data integrity, much better performance/scalability, but you must (infrequently) handle an OptimisticLockException.
Optimistic write locking is built in automatically: simply add a short/integer/long/timestamp attribute to the DB table and the entity, and annotate it in the entity with @Version; you do not need to call entityManager.lock() in that case.
If you were satisfied with the above and added a @Version attribute to your entity, your code would be:
try {
    entityManager.getTransaction().begin();
    r = entityManager.find(Route.class, r.getPrimaryKey());
    r.setRoute(txtRoute.getText());
    entityManager.getTransaction().commit();
} catch (OptimisticLockException e) {
    // Logging and (maybe) some error handling here.
    // In your case you are lucky - you could simply rerun the whole method.
    // Although often automatic recovery is difficult and possibly dangerous/undesirable,
    // in which case we need to report the error back to the user for manual recovery.
}
i.e. no explicit locking at all - the entity manager handles it automagically.
If you had a strong need to avoid concurrent data update "clashes", and are happy for your code to have limited scalability, then serialise data access via pessimistic write locking:
try {
    entityManager.getTransaction().begin();
    r = entityManager.find(Route.class, r.getPrimaryKey(), LockModeType.PESSIMISTIC_WRITE);
    r.setRoute(txtRoute.getText());
    entityManager.getTransaction().commit();
} catch (PessimisticLockException e) {
    // log & rethrow
}
In both cases, a successful commit or an exception with automatic rollback means that any locking carried out is automatically cleared.
Cheers.

JPA: How does Read Lock work?

I am trying to understand what the effect of calling EntityManager.lock(entity, LockModeType.READ) is. The API documentation is very confusing to me.
If I have two concurrent threads and Thread 1 calls lock(entity, LockModeType.READ), can Thread 2 still read and write the entity?
What I have learned so far:
The lock type READ in JPA 1 is the same as OPTIMISTIC in JPA 2. If such a lock is set, the EntityManager checks the version attribute before committing the transaction, but does not update it. I found an explanation of the OPTIMISTIC lock mode: Link. Search for "OPTIMISTIC (READ) LockMode Example".
As far as I understand this, setting a read lock in Thread 1 has no effect on Threads 2 ... n. All other threads can still read and write the entity. But when the transaction in Thread 1 commits and another thread has updated the entity in the meantime, the transaction in Thread 1 is rolled back.
Am I understanding this correct?
READ is currently deprecated anyway, but just for your understanding:
A READ lock will ensure that the state of the object does not change on commit. Because the READ lock still allows other transactions to update or delete the entity, when Thread 1 makes some change and then commits, the state (the version) of the entity is checked first: if it matches, the commit goes through; if not, the commit is not allowed.
So basically your understanding is correct.
There is also OPTIMISTIC, which is the modern (JPA 2) name for READ (and OPTIMISTIC_FORCE_INCREMENT for the old WRITE).
UPDATE
OK, this article helped me a lot in understanding; I hope it helps you too.
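For completeness, a rough sketch of how the JPA 2 equivalent is requested (entity and id are hypothetical):
em.getTransaction().begin();
User u = em.find(User.class, 1L);
em.lock(u, LockModeType.OPTIMISTIC);   // JPA 2 name for the old READ mode

// ... read u without necessarily modifying it ...

// On commit the provider re-checks u's version against the database; if
// another transaction updated (and so re-versioned) the row in the meantime,
// this commit fails even though the current transaction never wrote to u.
em.getTransaction().commit();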

How can I configure Hibernate to immediately apply all saves, updates, and deletes?

How can I configure Hibernate to apply all saves, updates, and deletes to the database server immediately after the session executes each operation? By default, Hibernate enqueues all save, update, and delete operations and submits them to the database server only after a flush() operation, committing the transaction, or the closing of the session in which these operations occur.
One benefit of immediately flushing database "write" operations is that a program can catch and handle any database exceptions (such as a ConstraintViolationException) in the code block in which they occur. With late or auto-flushing, these exceptions may occur long after the corresponding Hibernate operation that caused the SQL operation.
Update:
According to the Hibernate API documentation for interface Session, the benefit of catching and handling a database exception before the session ends may be of no benefit at all: "If the Session throws an exception, the transaction must be rolled back and the session discarded. The internal state of the Session might not be consistent with the database after the exception occurs."
Perhaps, then, the benefit of surrounding an "immediate" Hibernate session write operation with a try-catch block is to catch and log the exception as soon as it occurs. Does immediate flushing of these operations have any other benefits?
How can I configure Hibernate to apply all saves, updates, and deletes to the database server immediately after the session executes each operation?
To my knowledge, Hibernate doesn't offer any facility for that. However, it looks like Spring does: you can give some data access operations FLUSH_EAGER semantics by setting that flush mode on their HibernateTemplate or HibernateInterceptor (source).
But I warmly suggest reading the javadoc carefully (I'll come back to this).
By default, Hibernate enqueues all save, update, and delete operations and submits them to the database server only after a flush() operation, committing the transaction, or the closing of the session in which these operations occur.
Closing the session doesn't flush.
One benefit of immediately flushing database "write" operations is that a program can catch and handle any database exceptions (such as a ConstraintViolationException) in the code block in which they occur. With late or auto-flushing, these exceptions may occur long after the corresponding Hibernate operation that caused the SQL operation
First, DBMSs vary as to whether a constraint violation comes back on the insert (or update) or on the subsequent commit (this is known as immediate vs. deferred constraints). So there is no guarantee, and your DBA might not even want immediate constraints (even though that should be the default behavior).
Second, I personally see more drawbacks with immediate flushing than benefits, as spelled out in black and white in the javadoc of FLUSH_EAGER:
Eager flushing leads to immediate synchronization with the database, even if in a transaction. This causes inconsistencies to show up and throw a respective exception immediately, and JDBC access code that participates in the same transaction will see the changes as the database is already aware of them then. But the drawbacks are:
additional communication roundtrips with the database, instead of a single batch at transaction commit;
the fact that an actual database rollback is needed if the Hibernate transaction rolls back (due to already submitted SQL statements).
And believe me, increasing the database roundtrips and losing the batching of statements can cause major performance degradation.
Also keep in mind that once you get an exception, there is not much you can do apart from throwing your session away.
To sum up, I'm very happy that Hibernate enqueues the various actions, and I would certainly not use this FLUSH_EAGER flush mode as a general setting (but maybe only for the specific operations that actually require eager flushing, if any).
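If eager error detection is only needed around specific operations, an explicit flush() scoped to just those operations avoids changing the flush mode globally. A rough sketch, assuming a resource-local EntityManager em and a hypothetical Order entity:
em.getTransaction().begin();
try {
    em.persist(newOrder);
    em.flush();   // pushes the INSERT to the database right now, so a constraint
                  // violation surfaces here instead of at commit time
} catch (PersistenceException e) {
    // in Hibernate this typically wraps a ConstraintViolationException; log it,
    // then roll back and discard the session, as the javadoc quoted above advises
    em.getTransaction().rollback();
    em.close();
    throw e;
}
em.getTransaction().commit();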
Look into autocommit, though it is not recommended: if your work includes more than one update or insert SQL statement, you autocommit some of the work, and then a statement fails, you have the potentially arduous task of undoing the first part of the action. It gets really fun when the 'undo' operation fails.
Anyway, here's a link that shows how to do it.
