As per How to do optimistic locking in hibernate, we need to enable optimistic locking with the version element or the @Version annotation in Hibernate. I am clear up to this point.
What I am not sure about is the purpose of LockMode.OPTIMISTIC.
In what kind of scenario should a developer use it?
To understand why you would want optimistic locking, you first need to understand what no locking and pessimistic locking mean. I'm no Hibernate expert, so I'll explain it without a focus on Hibernate.
When two processes/users update the same object, the one who updates it last wins, silently overwriting the first update. So you need to find a way to prevent this. One way is pessimistic locking: you put a lock on the object at the moment you load it from the database ("select for update"). Until your transaction is committed or rolled back, nobody else can "select for update" this object. Now the problem is: when you load an entity via Hibernate, you nowhere specify whether you want it for read-only purposes or whether you intend to modify it.
So here comes optimistic locking. This concept optimistically assumes that everything will go fine in most cases. When two processes/users update the same object, the second one will not win but will get an exception on commit.
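A minimal sketch of the versioned entity this relies on (the Item class and its fields are made up for illustration):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Item {

    @Id
    private Long id;

    private String name;

    // Hibernate increments this column on every update and compares it in
    // the WHERE clause; a mismatch means another transaction got there first.
    @Version
    private int version;

    // getters and setters omitted for brevity
}

With that mapping in place, the second of two competing commits fails with an optimistic locking exception instead of silently overwriting the first.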
Related
I have read a lot about locking, but it is still not very clear to me.
I believe PESSIMISTIC_READ is what I need, but I need more clarification on it and on how to use it.
I want to acquire a lock on a table so that others can read from the table but cannot write to it until my transaction finishes. Also, if somebody wants to update the table during my transaction, he should wait and face no exception.
Is this possible via JPA's LockModeType?
Please suggest whether a different lock type is required.
I am using Oracle Database 12.1.
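For reference, this is roughly how such a lock is requested through JPA (a sketch; em and the Account entity are hypothetical placeholders):

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

// Inside an active transaction: the lock is held until commit/rollback.
Account account = em.find(Account.class, accountId,
        LockModeType.PESSIMISTIC_READ);
// ... read and work with the row ...

One caveat: Oracle has no shared row locks, so Hibernate typically issues SELECT ... FOR UPDATE for PESSIMISTIC_READ there, making it behave like PESSIMISTIC_WRITE. Plain readers are still never blocked on Oracle, and competing writers wait for the lock rather than getting an exception, which matches the requirement above.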
I found a lot of posts regarding this topic, but all answers were just links to documentation with no example code, i.e., no demonstration of how to use concurrency in practice.
My situation: I have an entity House with (for simplification) two attributes: number (the id) and owner. The database is initialized with 10 houses with numbers 1-10 and owner always null.
I want to assign a new owner to the house that currently has no owner and the smallest number. My code looks like this:
@Transactional
void assignNewOwner(String newOwner) {
    // this is flagged as @Transactional too
    House tmp = houseDao.getHouseWithoutOwnerAndSmallestNumber();
    tmp.setOwner(newOwner);
    // this is flagged as @Transactional too
    houseDao.update(tmp);
}
To my understanding, although @Transactional is used, the same house could be assigned twice to different owners if two requests fetch the same empty house as tmp. How do I ensure this cannot happen?
I know that including the update in the selection of the empty house would solve the issue, but in the near future I want to modify/work with the tmp object more.
Optimistic
If you add a version column to your entity/table, then you can take advantage of a mechanism called optimistic locking. It is the most efficient way of making sure that the state of an entity has not changed since we obtained it in a transactional context.
Once you have created a query from the session, you can call setLockMode(LockModeType.OPTIMISTIC);
Then, just before the transaction is committed, the persistence provider queries for the current version of that entity and checks whether it has been incremented by another transaction. If so, you get an OptimisticLockException and a transaction rollback.
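For example, a sketch with a JPA EntityManager (query and entity names are illustrative; the entity needs a @Version attribute for this lock mode to work):

import javax.persistence.LockModeType;
import javax.persistence.TypedQuery;

TypedQuery<House> query = em.createQuery(
        "select h from House h where h.owner is null order by h.number",
        House.class);
query.setLockMode(LockModeType.OPTIMISTIC);
House house = query.setMaxResults(1).getSingleResult();
// At commit, the provider re-checks the version of house and rolls back
// with an OptimisticLockException if another transaction changed it.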
Pessimistic
If you do not version your rows, then you are left with pessimistic locking, which basically means that you physically create a lock on the queried entities at the database level, and other transactions cannot read/update those rows.
You achieve that by setting this on the Query object:
setLockMode(LockModeType.PESSIMISTIC_READ);
or
setLockMode(LockModeType.PESSIMISTIC_WRITE);
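Applied to the House example from the question, a pessimistic version could look roughly like this sketch (assuming an injected EntityManager em; the query itself is an assumption):

import javax.persistence.LockModeType;
import javax.persistence.TypedQuery;

@Transactional
public void assignNewOwner(String newOwner) {
    // The selected row stays locked until the transaction commits, so a
    // second request cannot fetch the same ownerless house concurrently.
    TypedQuery<House> query = em.createQuery(
            "select h from House h where h.owner is null order by h.number",
            House.class);
    query.setLockMode(LockModeType.PESSIMISTIC_WRITE);
    House house = query.setMaxResults(1).getSingleResult();

    house.setOwner(newOwner);
    // No explicit update call is needed: the managed entity is flushed on commit.
}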
Actually it's pretty easy - at least in my opinion - and I am going to abstract away from what Hibernate will generate when you say pessimistic/optimistic. You might think this is SELECT FOR UPDATE - but that's not always the case; MSSQL AFAIK does not have that...
These are JPA annotations and they guarantee some functionality, not the implementation.
Fundamentally they are entirely different things - PESSIMISTIC vs OPTIMISTIC locking. When you do pessimistic locking, you sort of do a synchronized block, at least logically - you can do whatever you want and you are safe within the scope of the transaction. Now, whether the lock is held on the row, the table or even the page is unspecified, so a bit dangerous. Databases may also escalate locks; MSSQL does that, if I recall correctly.
Obviously lock starvation is an issue, so you might think that OPTIMISTIC locking would help. As a side note, this is what transactional memory in modern CPUs is; they use the same thinking process.
So optimistic locking is like saying: I will mark this row with an ID/date etc., then I will take a snapshot of that and work with it - before committing I will check if that ID has changed. Obviously there is contention on that ID, but not on the data. In practice the provider typically does this with a guarded statement along the lines of UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?. If the ID has changed - abort (aka throw OptimisticLockException); otherwise commit the work.
The thing that bothers everyone IMO is that OptimisticLockException - how do you recover from that? And here is something you are not going to like - it depends. There are apps where a simple retry would be enough, there are apps where this would be impossible. I have used it in rare scenarios.
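Where a simple retry is enough, the recovery can be a plain loop like this sketch (houseService and the retry count are made-up placeholders; each attempt must run in its own transaction):

import javax.persistence.OptimisticLockException;

int attempts = 3;
while (true) {
    try {
        houseService.assignNewOwner("alice"); // starts a fresh transaction
        break;
    } catch (OptimisticLockException e) {
        if (--attempts == 0) {
            throw e; // give up and surface the conflict to the caller
        }
        // Another transaction won the race; state is reloaded on the next attempt.
    }
}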
I usually go with pessimistic locking (unless optimistic is totally not an option). At the same time I would look at what Hibernate generates for that query. For example, you might need an index on the column by which the entry is retrieved for the DB to actually lock just the row - because ultimately that is what you want.
This seems like it would come up often, but I've Googled to no avail.
Suppose you have a Hibernate entity User. You have one User in your DB with id 1.
You have two threads running, A and B. They do the following:
A gets user 1 and closes its Session
B gets user 1 and deletes it
A changes a field on user 1
A gets a new Session and merges user 1
All my testing indicates that the merge attempts to find user 1 in the DB (it can't, obviously), so it inserts a new user with id 2.
My expectation, on the other hand, would be that Hibernate would see that the user being merged was not new (because it has an ID). It would try to find the user in the DB, which would fail, so it would not attempt an insert or an update. Ideally it would throw some kind of concurrency exception.
Note that I am using optimistic locking through @Version, and that does not help matters.
So, questions:
Is my observed Hibernate behaviour the intended behaviour?
If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
If the answer to 2. is yes, why is nobody complaining about it?
Please see the text from the Hibernate documentation below.
Copy the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of and return it as a newly persistent instance.
It clearly states: copy the state (data) of the object onto the persistent object in the database; if the object is not there, then save a copy of that data. When we say "save a copy", Hibernate always creates a record with a new identifier.
Hibernate's merge function works roughly as follows.
It checks the status (attached or detached to the session) of the entity and finds it detached.
Then it tries to load the entity by its identifier, but does not find it in the database.
As the entity is not found, it treats the entity as transient.
A transient entity always creates a new database record with a new identifier.
Locking is always applied to attached entities. If the entity is detached, Hibernate will always load it, and the version value gets updated.
Locking is used to control concurrency problems; what you are seeing is not a concurrency issue.
I've been looking at JSR-220, from which Session#merge claims to get its semantics. The JSR is sadly ambiguous, I have found.
It does say:
Optimistic locking is a technique that is used to insure that updates to the database data corresponding to the state of an entity are made only when no intervening transaction has updated that data since the entity state was read.
If you take "updates" to include general mutation of the database data, including deletes, and not just a SQL UPDATE, which I do, I think you can make an argument that the observed behaviour is not compliant with optimistic locking.
Many people agree, given the comments on my question and the subsequent discovery of this bug.
From a purely practical point of view, the behaviour, compliant or not, could lead to quite a few bugs, because it is contrary to many developers' expectations. There does not seem to be an easy fix for it. In fact, Spring Data JPA seems to ignore this issue completely by blindly using EM#merge. Maybe other JPA providers handle this differently, but with Hibernate this could cause issues.
I'm actually working around this by using Session#update currently. It's really ugly, and requires code to handle the case when you try to update an entity that is detached, and there's a managed copy of it already. But, it won't lead to spurious inserts either.
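For illustration, the workaround looks roughly like this (a sketch with trimmed-down exception handling; session is the open Hibernate Session and detachedUser is a placeholder):

import org.hibernate.NonUniqueObjectException;
import org.hibernate.StaleStateException;

try {
    // Reattach the detached instance; unlike merge(), update() will not
    // fall back to an INSERT when the underlying row has been deleted.
    session.update(detachedUser);
    session.flush(); // a deleted row surfaces here as a StaleStateException
} catch (NonUniqueObjectException e) {
    // A managed copy with the same id is already in this session;
    // copy the changed fields onto that instance instead.
} catch (StaleStateException e) {
    // The row was deleted (or its version changed) concurrently.
}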
1. Is my observed Hibernate behaviour the intended behaviour?
The behavior is correct. You are just trying to do operations that are not protected against concurrent data modification :) If you have to split the operation into two sessions, then find the object again in the second session and check whether it is still there; throw an exception if not. If it is there, lock it using em.find(entityClass, primaryKey, lockModeType), or use @Version or @Entity(optimisticLock = OptimisticLockType.ALL/DIRTY/VERSION) to protect the object till the end of the transaction.
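A sketch of that re-find-and-lock step (the User entity and variable names are placeholders):

import javax.persistence.LockModeType;
import javax.persistence.OptimisticLockException;

// Inside the second transaction: re-fetch and lock in one call.
User user = em.find(User.class, userId, LockModeType.PESSIMISTIC_WRITE);
if (user == null) {
    // The row was deleted by a concurrent transaction.
    throw new OptimisticLockException("User " + userId + " no longer exists");
}
user.setName(newName); // safe until commit: the row is locked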
2. If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
Probably: yes
3. If the answer to 2. is yes, why is nobody complaining about it?
Because if you protect your operations using pessimistic or optimistic locking, the problem will disappear :)
The problem you are trying to solve is called: Non-repeatable read
In my web application I have several threads that potentially access the same data concurrently, which is why I decided to implement optimistic (versioning) and pessimistic locking with Hibernate.
Currently I use the following pattern to lock an entity and perform write operations on it (using Spring's transaction manager and transaction demarcation with @Transactional):
@Transactional
public void doSomething(Entity entity) {
    session.lock(entity, LockMode.UPGRADE);
    session.refresh(entity);
    // I change the entity itself as well as entities in a relationship.
    entity.setBar(...);
    for (Child childEntity : entity.getChildren()) {
        childEntity.setFoo(...);
    }
}
However, sometimes I get a StaleObjectStateException when @Transactional is flushing, telling me that a child entity has been modified concurrently and now has a wrong version.
I guess I am not correctly refreshing the entity and its children, so I am working with stale data. Can someone point out how to achieve this correctly? Some thoughts of mine included clearing the persistence context (the session) or calling session.lock(entity, LockMode.READ) again, but I am not sure what is correct here.
Thanks for your help!
You may want to take a look at this Hibernate issue: LockMode.UPGRADE doesn't refresh entity values.
In short: Hibernate does NOT perform a select after a successful lock if the given entity was already loaded. You need to call refresh on the entity yourself after you have received the lock.
Why do you make LockMode.UPGRADE and optimistic locking live together? They seem like contradictory things.
Hibernate never locks objects in memory and always uses the locking mechanism of the database. Also, "if the requested lock mode is not supported by the database, Hibernate uses an appropriate alternate mode instead of throwing an exception. This ensures that applications are portable." It means that if your database doesn't support SELECT ... FOR UPDATE, most probably you will get these exceptions.
Another possible reason is that you haven't used "org.hibernate.annotations.CascadeType.LOCK" for children.
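If that is the cause, the mapping would need something along these lines (a sketch; the Parent/Child names are made up):

import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import org.hibernate.annotations.Cascade;

@Entity
public class Parent {

    @Id
    private Long id;

    // Cascade lock requests to the children so they are locked
    // together with the parent.
    @OneToMany(mappedBy = "parent")
    @Cascade(org.hibernate.annotations.CascadeType.LOCK)
    private Set<Child> children;
}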
I have a situation in which I read a record from a database, and if everything is OK I modify a few properties and commit the transaction.
But when two threads do the same, they will both update the same record.
How do I handle this in Hibernate?
You can use optimistic locking: give your entities a version, let Hibernate throw an exception, and try again later if the version isn't the same because something else (another thread, another node in a cluster, or even some independent SQL script that bothers to update the version) changed the same entity.
Or you can use pessimistic locking: really lock the entities in the database.
See the Transactions and Concurrency chapter in the Hibernate documentation for more details.