hibernate's transaction read and subsequent update - java

I have a situation in which I read a record from a database, and if everything is OK I modify a few properties and commit the transaction.
But if two threads do the same thing, they will update the same record.
How do I handle this in Hibernate?

You can use optimistic locking: give your entities a version, and if the version has changed because something else (another thread, another node in a cluster, or even an independent SQL script that bothers to update the version) modified the same entity, let Hibernate throw an exception and try again later.
Or you can use pessimistic locking: really lock the entities in the database.
See the Transactions and Concurrency chapter in the hibernate documentation for more details.
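A minimal sketch of the version-based approach (the Account entity and its fields are hypothetical; the only essential part is the @Version field):

```java
import java.math.BigDecimal;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private BigDecimal balance;

    // Hibernate increments this on every update and appends
    // "where version = ?" to the UPDATE statement; if another
    // transaction changed the row first, the update matches zero
    // rows and Hibernate reports an optimistic-lock failure.
    @Version
    private long version;

    // getters and setters omitted
}
```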

Related

Hibernate read only entities says saves memory by deleting database snapshots

While going through the Hibernate documentation and reading about read-only entities, I found that the official documentation says,
Hibernate does some optimizing for read-only entities:
It saves execution time by not dirty-checking simple properties or single-ended associations.
It saves memory by deleting database snapshots.
I don't understand what is meant by deleting database snapshots.
Is it referring to some optimization that happens in the database? If so, how does Hibernate inform/hint the DB to do that optimization? Is this optimization a database-specific feature and therefore not guaranteed across databases?
Or is it referring to an optimization that happens within the Hibernate library? I doubt this is the case because, whether it is readOnly or not, the query fired by Hibernate to fetch the records is the same, but I want to make sure I am not missing anything here.
UPDATE: As per the answer from @tgdavies, it helps Hibernate avoid keeping the snapshots, as dirty checking is not needed.
Subsequently I would like to understand whether there is any link between JDBC readOnly and Hibernate readOnly that enables a DB optimization. Connection.html#setReadOnly says: "Puts this connection in read-only mode as a hint to the driver to enable database optimizations." And what are those hints?
Can someone shed some light on how this optimization is actually achieved?
When Hibernate loads an object into a Session it creates a state snapshot of the current database state of the object, so that it can perform dirty checking against the snapshot.
As a read only object will never be modified, this snapshot is not needed and memory can be saved.
This is not an optimisation related to database access, but to reducing the memory used by the Session.
I doubt that Hibernate sets the JDBC connection to read only -- Hibernate doesn't know what else will happen in the Session. You could log the SQL Hibernate is sending to make sure: How to log final SQL queries with hibernate
I'm not sure what optimisations the database can perform on a read only connection -- probably taking fewer locks in some isolation modes, but that's just hand-waving on my part.
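For reference, Hibernate exposes the read-only switch at three levels; a sketch (the session, the Event entity and the someEvent variable are assumptions, not from the question):

```java
// 1. Everything loaded by this session is read-only by default:
session.setDefaultReadOnly(true);

// 2. Only the entities returned by one query are read-only:
List<Event> events = session.createQuery("from Event", Event.class)
        .setReadOnly(true)
        .list();

// 3. A single, already-loaded entity:
session.setReadOnly(someEvent, true);
```

In all three cases Hibernate simply skips creating (or discards) the state snapshot in the Session; nothing extra is communicated to the database.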

Concurrency with Hibernate in Spring

I found a lot of posts regarding this topic, but all answers were just links to documentation with no example code, i.e., no examples of how to use concurrency control in practice.
My situation: I have an entity House with (for simplification) two attributes: number (the id) and owner. The database is initialized with 10 Houses with numbers 1 to 10 and owner always null.
I want to assign a new owner to the house with currently no owner, and the smallest number. My code looks like this:
@Transactional
void assignNewOwner(String newOwner) {
    // this is flagged as @Transactional too
    House tmp = houseDao.getHouseWithoutOwnerAndSmallestNumber();
    tmp.setOwner(newOwner);
    // this is flagged as @Transactional too
    houseDao.update(tmp);
}
As I understand it, even though @Transactional is used, the same House could be assigned to two different owners if two requests fetch the same empty House as tmp. How do I ensure this cannot happen?
I know that including the update in the selection of the empty House would solve the issue, but in the near future I want to modify/work with the tmp object more.
Optimistic
If you add a version column to your entity/table, you can take advantage of a mechanism called optimistic locking. This is the most efficient way of making sure that the state of an entity has not changed since you obtained it in a transactional context.
Once you create a query using the session, you can call setLockMode(LockModeType.OPTIMISTIC);
Then, just before the transaction is committed, the persistence provider queries for the current version of that entity and checks whether it has been incremented by another transaction. If so, you get an OptimisticLockException and a transaction rollback.
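For example (JPA API; the House entity and the number and newOwner variables come from the question above, the entityManager is assumed to be injected):

```java
// Read the row under LockModeType.OPTIMISTIC; no database lock is
// taken, but the provider re-checks the version column at commit
// time and throws OptimisticLockException if it was incremented.
House house = entityManager
        .createQuery("select h from House h where h.number = :n", House.class)
        .setParameter("n", number)
        .setLockMode(LockModeType.OPTIMISTIC)
        .getSingleResult();
house.setOwner(newOwner);
```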
Pessimistic
If you do not version your rows, then you are left with pessimistic locking, which basically means that you physically create a lock on the queried entities at the database level, and other transactions cannot read/update those rows.
You achieve that by setting this on the Query object:
setLockMode(LockModeType.PESSIMISTIC_READ);
or
setLockMode(LockModeType.PESSIMISTIC_WRITE);
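Applied to the House example from the question, the pessimistic variant might look like this (a sketch; the query string and the entityManager are assumptions):

```java
// The selected row stays locked until the surrounding transaction
// commits or rolls back; a second transaction running the same
// query blocks, then sees the row already has an owner.
House house = entityManager
        .createQuery("select h from House h where h.owner is null order by h.number", House.class)
        .setMaxResults(1)
        .setLockMode(LockModeType.PESSIMISTIC_WRITE)
        .getSingleResult();
house.setOwner(newOwner);
```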
Actually it's pretty easy, at least in my opinion, and I am going to abstract away from what Hibernate generates when you say pessimistic/optimistic. You might think this is SELECT ... FOR UPDATE, but that's not always the case; MSSQL, AFAIK, does not have that...
These are JPA annotations and they guarantee some functionality, not the implementation.
Fundamentally they are entirely different things, PESSIMISTIC vs OPTIMISTIC locking. When you do pessimistic locking you effectively get a synchronized block, at least logically: you can do whatever you want and you are safe within the scope of the transaction. However, whether the lock is held on the row, the table or even the page is unspecified, so it is a bit dangerous. Databases may also escalate locks; MSSQL does that, if I recall correctly.
Obviously lock starvation is an issue, so you might think that OPTIMISTIC locking would help. As a side note, this is what transactional memory is in modern CPU; they use the same thinking process.
So optimistic locking is like saying: I will mark this row with an ID/date, then I will take a snapshot of it and work with that; before committing I will check whether that ID has changed. Obviously there is contention on that ID, but not on the data. If it has changed, abort (i.e. throw an OptimisticLockException); otherwise commit the work.
The thing that bothers everyone IMO is that OptimisticLockException - how do you recover from that? And here is something you are not going to like - it depends. There are apps where a simple retry would be enough, there are apps where this would be impossible. I have used it in rare scenarios.
I usually go with pessimistic locking (unless optimistic is totally not an option). At the same time I would look at what Hibernate generates for that query. For example, you might need an index on the column by which the entry is retrieved so that the DB actually locks just the row, because ultimately that is what you want.

what is the purpose of LockMode OPTIMISTIC?

As per How to do optimistic locking in hibernate, we need to enable optimistic locking with the version element or @Version annotation in Hibernate. I am clear up to this point.
I am not sure what the purpose of LockMode.OPTIMISTIC is.
In what kind of scenario should a developer use it?
To understand why you would want optimistic locking, you first need to understand what no locking and pessimistic locking mean. I'm no Hibernate expert, so I'll explain it without a focus on Hibernate.
When two processes/users update the same object, the one who updates it last wins. So you need a way to prevent this. One way is pessimistic locking: you put a lock on the object at the moment you load it from the database ("select for update"). Until your transaction is committed or rolled back, nobody else can "select for update" this object. Now the problem is: when you load an entity via Hibernate, you nowhere specify whether you want to load it for read-only purposes or whether you want to modify it.
So here comes optimistic locking. This concept optimistically assumes that everything will go fine in most cases. When two processes/users update the same object, the second one will not win, but will get an exception on commit.
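Because the loser only finds out at commit time, optimistic locking is usually paired with a retry. A plain-Java sketch of such a retry wrapper (the class and method names are my own, not a Hibernate API):

```java
import java.util.function.Supplier;

public class OptimisticRetry {

    /**
     * Runs the given unit of work, retrying up to maxAttempts times
     * whenever it fails with the given exception type (e.g. an
     * optimistic-lock exception). Each attempt must reload fresh
     * state, which is why the work is passed as a Supplier.
     */
    public static <T> T withRetry(int maxAttempts,
                                  Class<? extends RuntimeException> retryOn,
                                  Supplier<T> work) {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.get();
            } catch (RuntimeException e) {
                if (!retryOn.isInstance(e)) {
                    throw e;      // unrelated failure: do not retry
                }
                last = e;         // stale version: loop and try again
            }
        }
        throw last;
    }
}
```

With Hibernate/JPA you would call something like withRetry(3, OptimisticLockException.class, () -> service.updateInNewTransaction(id)), where each attempt runs in its own transaction and re-reads the entity.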

hibernate simultaneous updates

Consider a scenario: two applications access/update a single database. One of the applications uses Hibernate; it has fetched some records from the DB, will now process them and save them back. But before it saves, the same set of records is updated by the other application. What will happen in this scenario?
Will Hibernate throw an error on saving, or is Hibernate intelligent enough to sync the updated records?
Hibernate will throw a StaleObjectException. Here is why.
Hibernate uses optimistic locking to handle database concurrency. A StaleObjectException is thrown if the data to be updated was modified by another transaction before the current transaction commits its changes.
EDIT:
and how does hibernate identify that the state of object in memory is stale?
Hibernate uses a version field to track changes to the entity. This version field is updated on every commit. If the version number just before commit does not match the version number from when the entity was read at the beginning of the transaction, a StaleObjectException is thrown.

How to lock and reload an entity correctly

In my web application several threads potentially access the same data concurrently, which is why I decided to implement optimistic (versioning) and pessimistic locking with Hibernate.
Currently I use the following pattern to lock an entity and perform write operations on it (using Spring's transaction manager and transaction demarcation with @Transactional):
@Transactional
public void doSomething(entity) {
    session.lock(entity, LockMode.UPGRADE);
    session.refresh(entity);
    // I change the entity itself as well as entities in a relationship.
    entity.setBar(...);
    for (Child childEntity : entity.getChildren()) {
        childEntity.setFoo(...);
    }
}
However, I sometimes get a StaleObjectException when @Transactional flushes, telling me that a ChildEntity has been modified concurrently and now has a wrong version.
I guess I am not correctly refreshing entity and its children, so I am working with stale data. Can someone point out how to achieve this correctly? Some thoughts of mine included clearing the persistence context (the session) or calling session.lock(entity, LockMode.READ) again, but I am not sure what is correct here.
Thanks for your help!
You may want to take a look at this Hibernate issue: LockMode.UPGRADE doesn't refresh entity values.
In short: Hibernate does NOT perform a select after a successful lock if the given entity was already preloaded. You need to call refresh on the entity yourself after you receive the lock.
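On Hibernate 3.6 and later, the lock and the reload can be combined in one call (a sketch; entity is the object from the question):

```java
// Acquires the pessimistic lock and re-reads the current row state
// in a single step, replacing the separate lock() + refresh() pair:
session.refresh(entity, new LockOptions(LockMode.PESSIMISTIC_WRITE));
```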
Why do you make LockMode.UPGRADE and optimistic locking live together? They seem like contradictory things.
Hibernate never locks objects in memory and always uses the locking mechanism of the database. Also, "if the requested lock mode is not supported by the database, Hibernate uses an appropriate alternate mode instead of throwing an exception. This ensures that applications are portable." This means that if your database doesn't support SELECT ... FOR UPDATE, you will most probably get these exceptions.
Another possible reason is that you haven't used "org.hibernate.annotations.CascadeType.LOCK" for children.
