Entity classes and Record locking - java

I am looking at the EntityManager API, and I am trying to understand the order in which I should lock a record. Basically, when a user decides to edit a record, my code is:
entityManager.getTransaction().begin();
r = entityManager.find(Route.class, r.getPrimaryKey());
r.setRoute(txtRoute.getText());
entityManager.persist(r);
entityManager.getTransaction().commit();
From my trial and error, it appears I need to call WWEntityManager.entityManager.lock(r, LockModeType.PESSIMISTIC_READ); after the .begin().
I naturally assumed that I would call WWEntityManager.entityManager.lock(r, LockModeType.NONE); after the commit, but it gave me this:
Exception Description: No transaction is currently active
I haven't tried putting it before the commit yet, but wouldn't that defeat the purpose of locking the record, since my goal is to avoid colliding records in case 50 users try to commit a change at once?
Any help as to how I can lock the record for the duration of the edit is greatly appreciated!
Thank You!

Performing locking inside a transaction makes perfect sense. A lock is automatically released at the end of the transaction (commit / rollback). Locking outside of a transaction (in the context of JPA) does not make sense, because releasing the lock is tied to the end of the transaction. Besides, locking after the changes have been performed and the transaction has been committed would not achieve much anyway.
It can be that you are using pessimistic locking for a purpose other than what it is really meant for. If my assumption is wrong, you can ignore the end of this answer. When your transaction holds a pessimistic read lock on an entity (row), the following is guaranteed:
No dirty reads: other transactions cannot see the results of operations you performed on the locked rows.
Repeatable reads: no modifications from other transactions.
If your transaction modifies the locked entity, PESSIMISTIC_READ is upgraded to PESSIMISTIC_WRITE, or the transaction fails if the lock cannot be upgraded.
The following coarsely describes the scenario of obtaining the lock at the beginning of the transaction:
entityManager.getTransaction().begin();
r = entityManager.find(Route.class, r.getPrimaryKey(),
        LockModeType.PESSIMISTIC_READ);
// From this moment on we can safely read r again and expect no changes
r.setRoute(txtRoute.getText());
entityManager.persist(r);
// When changes are flushed to the database, the provider must convert the
// lock to PESSIMISTIC_WRITE, which can fail on a concurrent update
entityManager.getTransaction().commit();
Often databases do not have separate support for pessimistic read locks, so you may actually be holding a write lock on the row from the moment of the PESSIMISTIC_READ. Also, using PESSIMISTIC_READ makes sense only if no changes to the locked row are expected. In the case above changes are always made, so using PESSIMISTIC_WRITE from the beginning is reasonable, because it saves you from the risk of a failed lock upgrade on a concurrent update.
In many cases it also makes sense to use optimistic instead of pessimistic locking. Good examples and some comments about choosing between locking strategies can be found in: Locking and Concurrency in Java Persistence 2.0

Great work on trying to be safe by write-locking the data you are changing. :) But you might be going overboard / doing it the long way.
First a minor point: the call to persist() isn't needed. For an update, just modify the attributes of the entity returned from find(). The entityManager automatically knows about the changes and writes them to the DB during commit. persist() is only needed when you create a new object and write it to the DB for the first time (or add a new child object to a parent relation and wish to cascade the persist via cascade=PERSIST).
Most applications have a low probability of 'clashing' concurrent updates to the same data by different threads which have their own separate transactions and separate persistence contexts. If this is true for you and you would like to maximise scalability, then use an optimistic write lock, rather than a pessimistic read or write lock. This is the case for the vast majority of web applications. It gives exactly the same data integrity, much better performance/scalability, but you must (infrequently) handle an OptimisticLockException.
Optimistic write locking is built in automatically: simply have a short/Integer/Long/Timestamp attribute in the DB and the entity, and annotate it in the entity with @Version; you do not need to call entityManager.lock() in that case.
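For illustration, a minimal sketch of such an entity (annotations from javax.persistence; the class and field names are assumptions based on your snippet, not your real class):
@Entity
public class Route {

    @Id
    private Long primaryKey;

    private String route;

    // Maintained by the JPA provider: incremented on every update and
    // compared at commit time to detect concurrent modifications.
    @Version
    private Long version;

    public Long getPrimaryKey() { return primaryKey; }
    public void setRoute(String route) { this.route = route; }
}
Any update to a managed Route then bumps the version automatically at commit.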
If you were satisfied with the above and added a @Version attribute to your entity, your code would be:
try {
    entityManager.getTransaction().begin();
    r = entityManager.find(Route.class, r.getPrimaryKey());
    r.setRoute(txtRoute.getText());
    entityManager.getTransaction().commit();
} catch (OptimisticLockException e) {
    // Logging and (maybe) some error handling here.
    // In your case you are lucky - you could simply rerun the whole method.
    // Although often automatic recovery is difficult and possibly
    // dangerous/undesirable, in which case we need to report the error
    // back to the user for manual recovery.
}
i.e. no explicit locking at all - the entity manager handles it automagically.
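If you did decide to retry automatically, a bounded loop around the transactional code is one possible shape. A rough sketch, where updateRoute() is a hypothetical method wrapping the try/commit block above and the retry count is arbitrary:
for (int attempt = 1; attempt <= 3; attempt++) {
    try {
        updateRoute();   // the find / modify / commit block shown above
        break;           // success - stop retrying
    } catch (OptimisticLockException e) {
        if (attempt == 3) {
            throw e;     // give up and report back to the user
        }
        // otherwise loop again: the next find() fetches fresh data
    }
}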
IF you had a strong need to avoid concurrent data update "clashes", and are happy for your code to have limited scalability, then serialise data access via pessimistic write locking:
try {
    entityManager.getTransaction().begin();
    r = entityManager.find(Route.class, r.getPrimaryKey(), LockModeType.PESSIMISTIC_WRITE);
    r.setRoute(txtRoute.getText());
    entityManager.getTransaction().commit();
} catch (PessimisticLockException e) {
    // log & rethrow
}
In both cases, a successful commit or an exception with automatic rollback means that any locking carried out is automatically cleared.
Cheers.

Related

How to properly implement Optimistic Locking at the application layer?

I am a little confused as to why Optimistic Locking is actually safe. If I am checking the version at the time of retrieval with the version at the time of update, it seems like I can still have two requests enter the update block if the OS issues an interrupt and swaps the processes before the commit actually occurs. For example:
latestVersion = vehicle.getVersion();
if (vehicle.getVersion() == latestVersion) {
    // update record in database
} else {
    // don't update record
}
In this example, I am trying to manually use optimistic locking in a Java application without using JPA / Hibernate. However, it seems like two requests can enter the if block at the same time. Can you please help me understand how to do this properly? For context, I am also using the Java Design Patterns website as an example.
Well... that's the optimistic part. The optimism is that it is safe. If you have to be certain it's safe, then that's not optimistic.
The example you show definitely is susceptible to a race condition. Not only because of thread scheduling, but also due to transaction isolation level.
A simple read in MySQL, in the default transaction isolation level of REPEATABLE READ, will read the data that was committed at the time your transaction started.
Whereas updating data will act on the data that is committed at the time of the update. If some other concurrent session has updated the row in the database in the meantime, and committed it, then your update will "see" the latest committed row, not the row viewed by your get method.
The way to avoid the race condition is to not be optimistic. Instead, force exclusive access to the record. Doveryai, no proveryai: trust, but verify.
If you only have one app instance, you might use a critical section for this.
If you have multiple app instances, critical sections cannot coordinate across instances, so you need to coordinate in the database. You can do this with pessimistic locking: either read the record using a locking read query, or use MySQL's user-defined locks.
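As a sketch of what a locking read can look like in plain JDBC (java.sql), assuming a vehicle table with id and version columns and an available DataSource; all names here are illustrative, not from the question:
try (Connection con = dataSource.getConnection()) {
    con.setAutoCommit(false);
    // SELECT ... FOR UPDATE takes a row lock that is held until
    // commit/rollback, so no other session can update the row in between.
    long version;
    try (PreparedStatement read = con.prepareStatement(
            "SELECT version FROM vehicle WHERE id = ? FOR UPDATE")) {
        read.setLong(1, vehicleId);
        try (ResultSet rs = read.executeQuery()) {
            rs.next();
            version = rs.getLong("version");
        }
    }
    try (PreparedStatement write = con.prepareStatement(
            "UPDATE vehicle SET version = ? WHERE id = ?")) {
        write.setLong(1, version + 1);
        write.setLong(2, vehicleId);
        write.executeUpdate();
    }
    con.commit(); // releases the row lock
}
Everything between the locking read and the commit is protected: concurrent sessions trying to lock the same row will block until the lock is released.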

Avoid optimistic locking in java web application

I have a problem concerning the Java optimistic locking exception. I have a service class that is instantiated (by Spring) for every new user session, and it contains a non-static method that performs DB operations. I wonder how I can avoid an optimistic locking exception on the entity that is read/written to the DB. I would like to achieve a result similar to a synchronized method, but I guess using "synchronized" is out of the question, since the method is not static and would have no effect when users have their own instances of the service? Can I somehow detect whether a new version of the entity has been saved to the DB, and then retrieve the new version and edit and save that one? I want the transaction to hold until it succeeds, even if that means it has to wait for other transactions. My first idea was to put the transaction code into a try-catch block and retry the transaction (read & write) if an optimistic locking exception is thrown. Is that solution "too easy", or?
Optimistic locking is used to improve performance, but still avoid messing up the data.
If there's an Optimistic lock failure, the user (that failed the update) needs to decide if he wants to do his operation again. You can't automate that, since it depends entirely on what was changed and how.
So no, your idea of retrying the transaction with a try/catch is not a "too easy" solution. It's not a solution at all; it would be a serious (and dumb) bug.

JPA Optimistic locking

I have some trouble understanding the OPTIMISTIC LockMode.
Let's consider in the following scenario: "Thread A creates a Transaction and reads a list of all Users from Table USERS. Thread B updates a user in Table USERS. Thread B commits. Thread A commits".
Assume I am using OPTIMISTIC locking. Would the 2nd commit in this case cause an OptimisticLockException to be thrown?
Because according to this docu: "During commit (and flush), ObjectDB checks every database object that has to be updated or deleted, and compares the version number of that object in the database to the version number of the in-memory object being updated. The transaction fails and an OptimisticLockException is thrown if the version numbers do not match".
No exception should be thrown, because the version number is checked only for those entities which have to be updated or deleted.
BUT
This docu is saying: "JPA Optimistic locking allows anyone to read and update an entity, however a version check is made upon commit and an exception is thrown if the version was updated in the database since the entity was read. "
According to this description the exception should be thrown, because the version check is made upon commit (I assume they mean every commit, including commits that only follow reads).
I want the described scenario not to throw any concurrency exception; it's no problem if Thread A returns a list of users which is not the most recent. So is it correct to use optimistic locking, or if not, which LockType should I use?
The two links you gave say the same thing. If an entity is being updated in TransactionA and it has been modified in the DB by TransactionB since TransactionA read the entity, then an OptimisticLockException will be thrown.
In your case you are retrieving a list of all Users in threadA, but you update only one. You will get the OptimisticLockException only if the same entity was changed and committed (or a commit was attempted) in threadB.
You would want an exception to be thrown in this case; otherwise only one of the updates would succeed, as the last one to commit would simply override the earlier commit. Which one commits last is somewhat indeterminate (sometimes threadA, sometimes threadB), and the DB contents would really not be as intended. So locking prevents this undesired behaviour.
If your application's transactions regularly collide on the same data, consider using pessimistic locking, also described in https://blogs.oracle.com/carolmcdonald/entry/jpa_2_0_concurrency_and
Optimistic locking is very simple to understand:
Each entity has a timestamp / version number attribute.
Each time an entity is updated, the timestamp / version number is updated too.
When you update an entity, the current timestamp is first read from the persistence layer (the database); if it has changed since the time you loaded the entity, an OptimisticLockException is thrown, otherwise the entity is updated along with the new timestamp / version number.
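Conceptually, a provider enforces this with a version-guarded update rather than a separate read-then-write. A hand-rolled JPQL equivalent might look like the following (SomeEntity and its fields are hypothetical; real providers do this internally during flush):
int rows = entityManager.createQuery(
        "UPDATE SomeEntity e SET e.data = :data, e.version = e.version + 1 "
      + "WHERE e.id = :id AND e.version = :expectedVersion")
        .setParameter("data", newData)
        .setParameter("id", id)
        .setParameter("expectedVersion", expectedVersion)
        .executeUpdate();
if (rows == 0) {
    // zero rows matched: someone else changed the row first
    throw new OptimisticLockException();
}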
If you have no risk of concurrent updates then you shouldn't use any locking mechanism, because even the optimistic one has a performance impact (you have to check the timestamp before updating the entity).
Pessimistic locking is a scalability issue because it allows only one update access at a time to a given resource (so other, non-read-only accesses are blocked), but it prevents operations from failing. If you can't afford to lose an operation and scalability is not an issue, go with pessimistic locking; otherwise handle the concurrency mitigation at the business level.

How to handle race conditions at the database level with Spring and Hibernate?

I have a bank project in which customer balances should be updated by parallel threads in parallel applications. I hold customer balances in an Oracle database. My Java applications will be implemented with Spring and Hibernate.
How can I handle the race condition between parallel applications? Should my solution be at the database level or at the application level?
I assume what you would like to know is how to handle concurrency, preventing race conditions which can occur when two parts of the application modify and accidentally overwrite the same data.
You have mostly two strategies for this: pessimistic locking and optimistic locking.
Pessimistic locking
Here you assume that the likelihood of two threads overwriting the same data is high, so you would like to handle it in a transparent way. To handle this, increase the isolation level of your Spring transactions from its default value of READ_COMMITTED to, for example, REPEATABLE_READ, which should be sufficient in most cases:
@Transactional(isolation = Isolation.REPEATABLE_READ)
public void yourBusinessMethod() {
    ...
}
In this case, if you read some data at the beginning of the method, you are sure that no one can overwrite that data in the database while your method is ongoing. Note that it's still possible for another thread to insert extra records matching a query you made (a problem known as phantom reads), but not to change the records you already read.
If you want to protect against phantom reads, you need to upgrade the isolation level to SERIALIZABLE. The improved isolation comes at a performance cost: your program will run slower and will more frequently 'hang' waiting for other parts of the program to finish.
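For completeness, the stricter variant is the same annotation with a different isolation attribute (the method name is just a placeholder):
@Transactional(isolation = Isolation.SERIALIZABLE)
public void yourStrictestBusinessMethod() {
    // queries here are also protected against phantom reads,
    // at the cost of more blocking and lower throughput
}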
Optimistic Locking
Here you assume that data access collisions are rare, and that in the rare cases where they occur they are easily recoverable by the application. In this mode, you keep all your business methods at their default isolation level.
Then each Hibernate entity is marked with a version column:
@Entity
public class SomeEntity {
    ...
    @Version
    private Long version;
}
With this, each entity read from the database is versioned using the version column. When Hibernate writes changes to an entity back to the database, it checks whether the version was incremented since the last time that transaction read the entity.
If so, it means someone else modified the data and decisions were made based on stale data. In this case a StaleObjectStateException is thrown, which needs to be caught by the application and handled, ideally at a central place.
In the case of a GUI, you usually catch the exception and show a message saying "user xyz changed this data while you were also editing it; your changes are lost. Press OK to reload the new data."
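A rough sketch of that handling (someService and dialogs are hypothetical placeholders for your own service layer and UI helper):
try {
    someService.save(entity);
} catch (StaleObjectStateException e) {
    // Another user committed first: discard the stale changes,
    // tell the user, and reload the current state from the DB
    dialogs.showWarning("User xyz changed this data while you were editing it. "
            + "Your changes are lost. Press OK to reload the new data.");
    entity = someService.reload(entity.getId());
}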
With optimistic locking your program will run faster, but the application needs to handle some concurrency aspects that would otherwise be transparent with pessimistic locking: versioning entities and catching exceptions.
The most frequently used method is optimistic locking, as it seems to be acceptable in most applications. With pessimistic locking it's very easy to cause performance problems, especially when data access collisions are rare and can be solved in a simple way.
There are no constraints on mixing the two concurrency handling methods in the same application if needed.

Hibernate Session in Thread with UserThread and Serialization

I have the following case:
I have a thread which uses the session to save or update:
public void run()
{
    Session session = DAO.getInstance().getCurrentSession();
    Transaction tx = null;
    try
    {
        tx = session.beginTransaction();
        session.saveOrUpdate(entity);
    } catch.....
}
But in the meantime, during the serialization with session.saveOrUpdate, I change the entity object...
So the user thread will change the data during the session serialization.
How can I overcome this problem? Is there a simple way in Hibernate?
EDIT:
The biggest problem lies in the UserThread changing some data in the entity object during the saveOrUpdate method.
It sounds like you'd be interested in optimistic concurrency control using versioning.
Optimistic Concurrency Control
If you haven't come across it before, it's a similar idea to compare-and-swap, whereby Hibernate will manage a version along with the entity. By incrementing the version number during updates and checking that it hasn't changed afterwards, Hibernate can detect conflicts and raise an error. It optimistically assumes that actual contention is rare and leaves it to the developer to handle the exceptions. I've generally found this to be the case, and as the Hibernate docs put it;
The only approach that is consistent with high concurrency and high
scalability, is optimistic concurrency control with versioning.
You can tweak Hibernate's transaction visibility and isolation level to affect the finer details, see
http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/transactions.html#transactions-optimistic
Transaction Demarcation
I can't tell from the question's code snippet, but it may also be worth considering the transaction boundary. Usually, I'll start a transaction (beginTransaction) at the start of a business operation or request and commit on completion. All updates are performed in this session (following Hibernate's session-per-thread model). I still have each business operation or request processed on its own thread and rely on the usual Hibernate isolation levels etc. to manage conflicts.
I mention it because there may be a chance to step back and consider why you make updates from multiple threads. It may be that your application doesn't suit the approach I've tried to outline, but it may be worth considering whether it can be shifted around to avoid genuine multiple-thread updates.
Failing that, it's certainly worth understanding whether conflicts are likely to be frequent in production. Testing this could help you understand if you really need to worry about it, or if you can rely on the usual transaction control to detect conflicts and handle them in other ways.
One way is to synchronize on the entity object:
public void run()
{
    Session session = DAO.getInstance().getCurrentSession();
    Transaction tx = null;
    try
    {
        tx = session.beginTransaction();
        // Hold the entity's monitor while Hibernate flushes it, so the
        // user thread (which must also synchronize on entity when
        // mutating it) cannot change it mid-save. Note this only helps
        // within a single JVM.
        synchronized (entity) {
            session.saveOrUpdate(entity);
        }
        tx.commit();
    } catch (RuntimeException e) {
        if (tx != null) tx.rollback();
        throw e;
    }
}
