Pessimistic Lock doesn't work - java

I'm developing an application with JPA 2.1, and I have the following problem.
I'm trying to lock an entity this way:
Book book = em.find(Book.class, 12);
em.lock(book, LockModeType.PESSIMISTIC_WRITE);
but if I try to access the entity with id=12 from another browser window or client, the system doesn't throw a PessimisticLockException.
What am I doing wrong?

The lock will be effective for the duration of the transaction, but certainly not across multiple request-response cycles (unless you have configured your entity manager and transaction manager to handle long-running transactions).
A transaction MUST be short-lived (for performance reasons).
A pessimistic write lock means that the book row will not be modified by any other transaction between the lock instruction and the end of your transaction. The book object itself may outlive the transaction, of course.
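For illustration, here is a minimal sketch (the Book entity is from the question; the resource-local EntityManager and the setTitle() setter are assumptions) showing that the lock exists only between begin() and commit():

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

void updateBookExclusively(EntityManager em) {
    em.getTransaction().begin();
    // Passing the lock mode to find() acquires the row lock atomically with the read
    Book book = em.find(Book.class, 12, LockModeType.PESSIMISTIC_WRITE);
    book.setTitle("new title"); // concurrent PESSIMISTIC_WRITE attempts block here
    em.getTransaction().commit(); // the database lock is released here
    // After commit() any other client can lock id=12 again; no
    // PessimisticLockException is thrown for a lock that no longer exists.
}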

I suppose that in another window/browser you try the same thing: to acquire a PESSIMISTIC_WRITE lock.
The problem you have is that the lock is released when the method returns (as the transaction ends), meaning that by the time you open the second browser/window, there is no lock anymore.
You should probably explain to us the problem/scenario that you are trying to solve/test.
For the general situation:
Another possible cause could be that your database table does not support row-level locking. For example, in MySQL only the InnoDB storage engine supports SELECT ... FOR UPDATE (which is what the PESSIMISTIC_WRITE lock is translated into).
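If you are unsure which engine a table uses, you can check it from JDBC before relying on row locks (a sketch; the connection variable and the book table name are assumptions):

import java.sql.ResultSet;
import java.sql.Statement;

try (Statement st = connection.createStatement();
     ResultSet rs = st.executeQuery(
         "SELECT ENGINE FROM information_schema.TABLES " +
         "WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'book'")) {
    if (rs.next() && !"InnoDB".equalsIgnoreCase(rs.getString(1))) {
        // e.g. MyISAM silently ignores FOR UPDATE row locks
        System.out.println("Table 'book' does not support row-level locking");
    }
}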

Related

Is an attached Record thread-safe?

Is an attached jOOQ Record (UpdatableRecord) thread-safe, i.e. can I attach (fetch) a Record in one thread, and store it later in another thread without negative effects? Should I detach it in the original thread and attach it back in the new thread?
I know about the jOOQ manual page about thread-safety of the DSLContext. I'm using the Spring Boot Autoconfiguration of jOOQ, so that should all be thread-safe (with Spring's DataSourceTransactionManager and Hikari pooling).
But the following questions remain:
How does an attached Record behave when a transaction in the original thread is opened, and store() is called in another thread either before or after the original transaction has been committed? Does jOOQ open a new connection every time for each operation?
Would the attached Record be keeping a connection open across threads, which might then lead to resource leaks?
A jOOQ record is not thread-safe. It is a simple mutable container backed by an ordinary Object[]. As such, all the usual issues may arise when sharing mutable state across threads.
But your question isn't really about the thread safety of the record.
How does an attached Record behave when a transaction in the original thread is opened, and store() is called in another thread either before or after the original transaction has been committed? Does jOOQ open a new connection every time for each operation?
This has nothing to do with Record, but with how you configure jOOQ's ConnectionProvider. jOOQ doesn't hold a connection or even open one. You do that, explicitly or implicitly, by passing jOOQ a connection via a ConnectionProvider (probably via some Spring-configured DataSource). jOOQ will, for each database interaction, acquire() a connection, and release() it again after the interaction. The Record doesn't know how this connection is obtained. It just runs jOOQ queries that acquire and release connections.
In fact, jOOQ doesn't even really care about your transactions (unless you're using jOOQ's transaction API, but you aren't).
Would the attached Record be keeping a connection open across threads, which might then lead to resource leaks?
No, a Record is "attached" to a Configuration, not a connection. That Configuration contains a ConnectionProvider, which does whatever you configured it to do.
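As a minimal sketch of that wiring (DataSourceConnectionProvider and DefaultConfiguration are real jOOQ classes; dataSource, BOOK and BookRecord are assumed generated/injected artifacts):

import org.jooq.Configuration;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DataSourceConnectionProvider;
import org.jooq.impl.DefaultConfiguration;

Configuration configuration = new DefaultConfiguration()
    .set(new DataSourceConnectionProvider(dataSource)) // acquire()/release() per interaction
    .set(SQLDialect.POSTGRES);

BookRecord book = DSL.using(configuration)
    .fetchOne(BOOK, BOOK.ID.eq(12)); // borrows a pooled connection, then returns it

// Later, possibly on another thread: store() borrows a new connection from
// the pool for just this one statement; nothing was held open in between.
book.setTitle("New title");
book.store();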

How to properly implement Optimistic Locking at the application layer?

I am a little confused as to why Optimistic Locking is actually safe. If I am checking the version at the time of retrieval with the version at the time of update, it seems like I can still have two requests enter the update block if the OS issues an interrupt and swaps the processes before the commit actually occurs. For example:
latestVersion = vehicle.getVersion();
if (vehicle.getVersion() == latestVersion) {
    // update record in database
} else {
    // don't update record
}
In this example, I am trying to manually use Optimistic Locking in a Java application without using JPA / Hibernate. However, it seems like two requests can enter the if block at the same time. Can you please help me understand how to do this properly? For context, I am using the Java Design Patterns website as an example.
Well... that's the optimistic part. The optimism is that it is safe. If you have to be certain it's safe, then that's not optimistic.
The example you show definitely is susceptible to a race condition. Not only because of thread scheduling, but also due to transaction isolation level.
A simple read in MySQL, in the default transaction isolation level of REPEATABLE READ, will read the data that was committed at the time your transaction started.
Whereas updating data will act on the data that is committed at the time of the update. If some other concurrent session has updated the row in the database in the meantime, and committed it, then your update will "see" the latest committed row, not the row viewed by your get method.
The way to avoid the race condition is to not be optimistic. Instead, force exclusive access to the record. Trust, but verify.
If you only have one app instance, you might use a critical section for this.
If you have multiple app instances, critical sections cannot coordinate across instances, so you need to coordinate in the database. You can do this by using pessimistic locking: either read the record using a locking read query, or use MySQL's user-defined locks.
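For example, a minimal JDBC sketch of the locking-read approach (MySQL/InnoDB assumed; connection, vehicleId and the vehicle table are placeholder names):

connection.setAutoCommit(false);
try (PreparedStatement lock = connection.prepareStatement(
         "SELECT version FROM vehicle WHERE id = ? FOR UPDATE")) {
    lock.setLong(1, vehicleId);
    try (ResultSet rs = lock.executeQuery()) {
        if (rs.next()) {
            // We hold the row lock until commit, so the update below
            // cannot interleave with a concurrent writer's check.
            try (PreparedStatement update = connection.prepareStatement(
                     "UPDATE vehicle SET version = version + 1 WHERE id = ?")) {
                update.setLong(1, vehicleId);
                update.executeUpdate();
            }
        }
    }
    connection.commit(); // releases the row lock
} catch (SQLException e) {
    connection.rollback();
    throw e;
}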

JPA pessimistic lock logic for client-server application

I am learning JPA pessimistic lock. I found the following explanation
PESSIMISTIC_READ - The entity is locked on the database; prevents any other transaction from acquiring a PESSIMISTIC_WRITE lock.
PESSIMISTIC_WRITE - The entity is locked on the database; prevents any other transaction from acquiring a PESSIMISTIC_READ or PESSIMISTIC_WRITE lock.
If I understand it right, then if we have three users (A, B, C) and user A gets a READ lock, user B can get a READ lock too, but user C can't get a WRITE lock until users A and B release their locks. And if user A gets a WRITE lock, then users B and C can't get anything until user A releases the lock.
However, for my client-server application I want the following logic: if users only want to read an entity, they open it in READ-ONLY mode (an unlimited number of users can do this at the same time). If some user wants to edit the entity, he opens it in WRITE mode - no one else can open the same entity in WRITE mode (until the user releases the WRITE lock), but all others can still open the entity in READ-ONLY mode.
And I have two questions:
Is my understanding of JPA pessimistic lock right?
Is it possible to make JPA do the logic I need (using JPA lock mechanisms)?
Is my understanding of JPA pessimistic lock right?
Yes, that's exactly how read/write locking works.
...but all other can still open the entity in READ-ONLY mode
I'm not exactly sure what you mean. We are still talking about multiple transactions executing simultaneously, right (I have a strange feeling that's not what you mean)? If that's the case, then in your logic, holding a 'READ_ONLY' lock accomplishes nothing.
Locking means 'I'm freezing this resource so that certain other transactions cannot proceed until I'm done'. But, in the logic you described, when you're holding the 'READ_ONLY' lock, both a transaction holding the 'READ_ONLY' lock and the transaction holding the 'WRITE' lock are allowed to proceed.
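To make the compatibility rules concrete, here is a sketch with three concurrent transactions (em1/em2/em3 are separate EntityManagers; the Book entity and the lock-timeout hint value are assumptions, and hint support varies by provider and database):

em1.getTransaction().begin();
em1.find(Book.class, 12, LockModeType.PESSIMISTIC_READ); // shared lock

em2.getTransaction().begin();
em2.find(Book.class, 12, LockModeType.PESSIMISTIC_READ); // OK: read locks are shared

em3.getTransaction().begin();
try {
    // Ask for a write lock, but fail fast instead of blocking:
    em3.find(Book.class, 12, LockModeType.PESSIMISTIC_WRITE,
             Collections.singletonMap("javax.persistence.lock.timeout", 0));
} catch (LockTimeoutException e) {
    // The readers are still holding their shared locks.
}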

How to lock PostgreSQL database via JDBC?

In my Java webapp, each instance is checking on startup if the database is up-to-date via a JDBC connection. If the DB is not up-to-date, it performs an update routine by executing SQL scripts.
I can't control when instances get started. Therefore, I need to ensure that only a single instance is performing a database update at a time. Ideally, I would need to lock the complete database, but according to
http://www.postgresql.org/docs/8.4/static/explicit-locking.html
and
http://wiki.postgresql.org/wiki/Lock_database
PostgreSQL doesn't support it (I'm still using version 8.4).
What other options do I have?
If you control the code for all the instances, then you can create a table in the database where each instance that starts looks for a record with a timestamp. Let's call it your "lock" record.
If a process finds that the lock record does not exist, then it inserts the record and processes the data you require.
If a process finds that the lock record does exist, then you can assume that another process has created it, and do nothing, busy-wait, or whatever is appropriate.
With this design you are effectively creating a "lock" in the database to synchronize your processes with. You code it, so all processes know they have to adhere to the logic of the lock record.
Once the process that holds the lock has completed its processing, it should clear the lock record so the next restart behaves correctly. You also need to think about the situation where the lock has not been cleared due to a server or execution error. Typically, if the lock is older than n minutes you can consider it "stale", so delete it and create it again (or just update it).
When dealing with the "lock" record be sure to utilise the Serializable isolation level on your DB connection in order to guarantee atomicity.
The service layer of your Java code can enforce your locking strategy prior to calling your data access layer. It won't matter whether you use Hibernate or not, as it's just application logic.
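A sketch of the pattern in JDBC (the db_lock table is hypothetical and needs a primary key or unique constraint on name, so that a second INSERT fails; runSchemaUpdate() stands in for your update routine):

connection.setAutoCommit(false);
boolean lockAcquired;
try (Statement st = connection.createStatement()) {
    // Succeeds only if no lock record exists yet (unique constraint on name)
    st.executeUpdate(
        "INSERT INTO db_lock (name, locked_at) VALUES ('schema_update', now())");
    connection.commit();
    lockAcquired = true;
} catch (SQLException duplicateKey) {
    connection.rollback(); // another instance holds (or held) the lock;
    lockAcquired = false;  // check locked_at here to detect a stale lock
}
if (lockAcquired) {
    runSchemaUpdate();
    // Finally, delete the lock record so the next restart behaves correctly
}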
Ideally, I would need to lock the complete database.
Does it really matter what your lock applies to, as long as you're effectively serializing access? Just acquire an exclusive lock on any table, or row for that matter.
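For example (a sketch; db_version stands in for any table your application already has):

connection.setAutoCommit(false);
try (Statement st = connection.createStatement()) {
    // Blocks until no other instance holds the lock; released at commit/rollback
    st.execute("LOCK TABLE db_version IN ACCESS EXCLUSIVE MODE");
    // ... check the schema version and run the update scripts here ...
    connection.commit();
}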

Entity classes and Record locking

I am looking at the EntityManager API, and I am trying to understand the order in which I would lock a record. Basically, when a user decides to edit a record, my code is:
entityManager.getTransaction().begin();
r = entityManager.find(Route.class, r.getPrimaryKey());
r.setRoute(txtRoute.getText());
entityManager.persist(r);
entityManager.getTransaction().commit();
From my trial and error, it appears I need to call WWEntityManager.entityManager.lock(r, LockModeType.PESSIMISTIC_READ); after the .begin().
I naturally assumed that I would use WWEntityManager.entityManager.lock(r, LockModeType.NONE); after the commit, but it gave me this:
Exception Description: No transaction is currently active
I haven't tried putting it before the commit yet, but wouldn't that defeat the purpose of locking the record, since my goal is to avoid colliding records in case 50 users try to commit a change at once?
Any help as to how to I can lock the record for the duration of the edit, is greatly appreciated!
Thank You!
Performing locking inside a transaction makes perfect sense. The lock is automatically released at the end of the transaction (commit/rollback). Locking outside of a transaction (in the context of JPA) does not make sense, because releasing the lock is tied to the end of the transaction. Locking after the changes are performed and the transaction is committed would not make much sense either.
It may be that you are using pessimistic locking for a purpose other than what it is really for. If my assumption is wrong, then you can ignore the end of this answer. When your transaction holds a pessimistic read lock on an entity (row), the following is guaranteed:
No dirty reads: other transactions cannot see the results of operations you performed on locked rows.
Repeatable reads: no modifications from other transactions.
If your transaction modifies the locked entity, PESSIMISTIC_READ is upgraded to PESSIMISTIC_WRITE, or the transaction fails if the lock cannot be upgraded.
The following coarsely describes the scenario of obtaining the lock at the beginning of the transaction:
entityManager.getTransaction().begin();
r = entityManager.find(Route.class, r.getPrimaryKey(),
        LockModeType.PESSIMISTIC_READ);
// From this moment on we can safely read r again and expect no changes
r.setRoute(txtRoute.getText());
entityManager.persist(r);
// When changes are flushed to the database, the provider must convert the
// lock to PESSIMISTIC_WRITE, which can fail on a concurrent update
entityManager.getTransaction().commit();
Often databases do not have separate support for pessimistic read locks, so you are actually holding a write lock on the row from the moment you acquire PESSIMISTIC_READ. Also, using PESSIMISTIC_READ makes sense only if no changes to the locked row are expected. In the case above changes are always made, so using PESSIMISTIC_WRITE from the beginning is reasonable, because it saves you from the risk of a failed lock upgrade on a concurrent update.
In many cases it also makes sense to use optimistic instead of pessimistic locking. Good examples and some comments about choosing between locking strategies can be found in Locking and Concurrency in Java Persistence 2.0.
Great work attempting to be safe by write-locking your changing data. :) But you might be going overboard / doing it the long way.
First, a minor point: the call to persist() isn't needed. For an update, just modify the attributes of the entity returned from find(). The entityManager automatically knows about the changes and writes them to the db during commit. persist() is only needed when you create a new object and write it to the db for the first time (or add a new child object to a parent relation and wish to cascade the persist via cascade=PERSIST).
Most applications have a low probability of 'clashing' concurrent updates to the same data by different threads which have their own separate transactions and separate persistence contexts. If this is true for you and you would like to maximise scalability, then use an optimistic write lock, rather than a pessimistic read or write lock. This is the case for the vast majority of web applications. It gives exactly the same data integrity, much better performance/scalability, but you must (infrequently) handle an OptimisticLockException.
Optimistic write locking is built in automatically: simply have a short/integer/long/Timestamp attribute in the db and entity and annotate it in the entity with @Version; you do not need to call entityManager.lock() in that case.
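For example, a minimal sketch of the versioned entity (the Route name comes from your code; the other fields are illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Route {
    @Id
    private Long id;

    @Version              // JPA checks and increments this on every update;
    private long version; // a stale value causes an OptimisticLockException

    private String route;

    public String getRoute() { return route; }
    public void setRoute(String route) { this.route = route; }
}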
If you were satisfied with the above, and you added a @Version attribute to your entity, your code would be:
try {
    entityManager.getTransaction().begin();
    r = entityManager.find(Route.class, r.getPrimaryKey());
    r.setRoute(txtRoute.getText());
    entityManager.getTransaction().commit();
} catch (OptimisticLockException e) {
    // Logging and (maybe) some error handling here.
    // In your case you are lucky - you could simply rerun the whole method.
    // Although often automatic recovery is difficult and possibly dangerous/undesirable,
    // in which case we need to report the error back to the user for manual recovery.
}
i.e. no explicit locking at all - the entity manager handles it automagically.
If you had a strong need to avoid concurrent data update "clashes", and are happy for your code to have limited scalability, then serialise data access via pessimistic write locking:
try {
    entityManager.getTransaction().begin();
    r = entityManager.find(Route.class, r.getPrimaryKey(), LockModeType.PESSIMISTIC_WRITE);
    r.setRoute(txtRoute.getText());
    entityManager.getTransaction().commit();
} catch (PessimisticLockException e) {
    // log & rethrow
}
In both cases, a successful commit or an exception with automatic rollback means that any locking carried out is automatically cleared.
Cheers.
