Locking all the rows used in the transaction in Java

I have a scenario where I read from a set of tables in a Java service.
I've annotated the service class with @Transactional.
Is there any way to lock the rows I read, in all the tables I use, for the duration of my transaction and release them at the end of the transaction?
PS: I'm using Spring with Hibernate, and I'm new to this locking concept.
Any links to material/examples would be of much help.
Thanks

This depends on the underlying database engine and the selected transaction isolation level.
Some database systems lock rows on reads, while others use MVCC, meaning your transaction operates on a snapshot of the data taken at its start and your updates won't be visible to other transactions until your transaction finishes.
So the simple answer is: choose an appropriately high transaction isolation level (e.g. SERIALIZABLE) for your needs, and a database engine that supports it.
http://en.wikipedia.org/wiki/Isolation_(database_systems)
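Beyond the isolation level, JPA (2.0 and later) also lets you request explicit row locks that are held until the transaction ends. A minimal sketch, assuming a hypothetical Order entity and a Spring-managed EntityManager:

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    @PersistenceContext
    private EntityManager em;

    // PESSIMISTIC_READ takes a shared row lock (often SELECT ... FOR SHARE);
    // PESSIMISTIC_WRITE takes an exclusive one (SELECT ... FOR UPDATE).
    // Either lock is held until the transaction commits or rolls back.
    @Transactional
    public Order loadAndLock(Long id) {
        return em.find(Order.class, id, LockModeType.PESSIMISTIC_READ);
    }
}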

Related

How can we set the Read Uncommitted isolation level with JPA and Hibernate?

In the famous book "Java Persistence with Hibernate" we can read the following:
"A persistence context is a cache of persistent entity instances.... Automatic dirty checking is one of the benefits of this caching. Another benefit is repeatable read for entities and the performance advantage of a unit of work-scoped cache... You don't have to do anything special to enable the persistence context cache. It's always on and, for the reasons shown, can't be turned off."
Does this mean that one can never achieve a transaction isolation level "Read uncommitted" with Hibernate?
Does this mean that one can never achieve a transaction isolation level "Read uncommitted" with Hibernate?
No, it doesn't. Hibernate offers application-level repeatable reads for entities. That's different from DB-level repeatable reads, which apply to any query.
So, if you want a custom isolation level, like REPEATABLE_READ, for all the queries executed by a given transaction, not just for fetching entities, then you can set it like this:
@Transactional(isolation = Isolation.REPEATABLE_READ)
public void orderProduct(Long productId) {
...
}
Now, your question title says:
(How) can we achieve isolation level Read Uncommitted with Hibernate / JPA?
If you are using Oracle or PostgreSQL, you cannot do that since Read Uncommitted is not supported there; you'll get READ_COMMITTED instead.
For SQL Server and MySQL, set it like this:
@Transactional(isolation = Isolation.READ_UNCOMMITTED)
Indeed, Hibernate does offer repeatable reads through its first-level cache (the Persistence Context), as cited in the book "Transactions and concurrency control" by Vlad Mihalcea, as follows:
Some ORM frameworks (e.g. JPA/Hibernate) offer application-level repeatable reads. The first snapshot of any retrieved entity is cached in the currently running Persistence Context. Any successive query returning the same database row is going to use the very same object that was previously cached. This way, the fuzzy reads may be prevented even in Read Committed isolation level.
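A minimal sketch of that behaviour, assuming a hypothetical Product entity and an open persistence context:

Product first = em.find(Product.class, 1L);
Product second = em.find(Product.class, 1L);
// Both lookups return the very same instance from the first-level cache,
// so a change committed by another transaction in between is not visible.
assert first == second;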
However, according to the same book, it seems that using Spring with JPA/Hibernate allows customizing the transaction isolation level. In the book we can also read the following:
Spring supports transaction-level isolation levels when using the JpaTransactionManager. For JTA transactions, the JtaTransactionManager follows the Java EE standard and disallows overriding the default isolation level. As a workaround, the Spring framework provides extension points, so the application developer can customize the default behavior and implement a mechanism to set isolation levels on a transaction basis.
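A minimal configuration sketch under which such per-transaction isolation settings take effect, assuming RESOURCE_LOCAL transactions (the class and bean names are illustrative):

import javax.persistence.EntityManagerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class TxConfig {

    // JpaTransactionManager applies @Transactional(isolation = ...) to the
    // underlying JDBC connection; a JtaTransactionManager would disallow it.
    @Bean
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}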

How to handle row lock contention at application level

I have two applications (Spring and Hibernate with Boot) using the same Oracle database (11g). Both apps hit a specific table consistently, and there is a huge number of hits on this table. We can see row lock contention exceptions in the DB logs, and the applications have to be restarted each time we get these, or when a deadlock-like situation arises.
We are using the JPA EntityManager in these applications.
I need help with this issue.
According to this link:
http://www.dba-oracle.com/t_enq_tx_row_lock_contention.htm
this error occurs because a transaction is waiting for another transaction to commit or roll back. That behavior is correct from the database's point of view and in terms of data consistency. But if availability/fulfillment is a concern for you, you might need a workaround, including:
1. Make separate tables for each application, then update the main table with the data offline (but you will sacrifice data consistency).
2. Make a separate thread to log and retry unsuccessful transactions.
3. Bear the availability issue (latency) if consistency is a big concern.
Also, there are some general tips to consider:
1. Keep the transaction minimal. Think about every process included in the transaction and whether it is mandatory or can be moved outside.
2. Tune transaction demarcation. You might find a transaction held open for long with no reason but bad coding.
3. Don't perform read operations inside transactions.
4. Avoid extended persistence contexts (prefer stateless) whenever possible.
5. You might choose to use a non-JTA transactional data source for reporting and read-only queries.
6. Check the lock types you are using and try to avoid, depending on your case, anything but OPTIMISTIC (see the sketch below).
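A minimal sketch of tip 6, assuming a hypothetical Account entity; with a version attribute, conflicting updates fail fast instead of queueing on row locks:

import java.math.BigDecimal;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private BigDecimal balance;

    // Hibernate increments this on every UPDATE and adds it to the WHERE
    // clause; if another transaction updated the row first, zero rows match
    // and an OptimisticLockException is thrown instead of waiting on a lock.
    @Version
    private long version;

    // getters/setters omitted
}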
But finally, you'll agree with me that we shouldn't blame the database for blocking two transactions from modifying the same row.

Spring Transaction Isolation Level

Most of us might be using Spring and Hibernate for data access.
I am trying to understand a few of the internals of the Spring Transaction Manager.
According to the Spring API, it supports different isolation levels - doc
But I couldn't find clear-cut information on which occasions these really help to gain performance improvements.
I am aware that the readOnly parameter of Spring transactions can help us use different TxManagers for read-only data and leverage good performance. But it locks the table to get the data, to avoid dirty reads/non-committed reads - doc.
Assume, on a few occasions, we might want to blindly insert records into a table and retrieve the information without locking the table, a case where we never update the table data, we just insert and read [append-only]. Can we use a better isolation level to gain any performance?
As you can see from one of the reference links, do we really need to implement/write our own custom JPA dialect?
What's the better isolation level for my requirement?
Read-only allows certain optimizations like disabling dirty checking and you should totally use it when you don't plan on changing an entity.
Each isolation level defines how much locking the database has to impose to prevent data anomalies.
Most databases use MVCC (Oracle, PostgreSQL, MySQL), so readers don't block writers and writers don't block readers. Only writers block writers.
REPEATABLE_READ doesn't have to hold a lock to prevent a concurrent transaction from modifying the rows your current transaction has loaded. The MVCC engine allows other transactions to read the committed state of a row even if your current transaction has changed it but hasn't yet committed (MVCC uses the undo logs to recover the previous version of a pending changed row).
In your use case you should use READ_COMMITTED as it scales better than other more strict isolation levels and you should use optimistic locking for preventing lost updates in long conversations.
Update
Setting @Transactional(isolation = Isolation.SERIALIZABLE) on a Spring bean has a different behaviour, depending on the current transaction type:
For RESOURCE_LOCAL transactions, the JpaTransactionManager can apply the specific isolation level for the current running transaction.
For JTA resources, the transaction-scoped isolation level doesn't propagate to the underlying database connection, as this is the default JTA transaction manager behavior. You could override this, following the example of the WebLogicJtaTransactionManager.
Actually, readOnly=true doesn't cause any lock contention on the database table, because simply no locking is required: the database is able to revert to previous versions of the records, ignoring all new changes.
With readOnly set to true, the flush mode is set to FlushMode.NEVER in the current Hibernate Session, preventing the session from committing the transaction. In addition, setReadOnly(true) will be called on the JDBC Connection, which is also a hint to the underlying database not to commit changes.
So readOnly=true is exactly what you are looking for (e.g. with the SERIALIZABLE isolation level).
Here is a good explanation.
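As a concrete illustration of the readOnly behaviour described above, a minimal sketch; the repository and entity names are illustrative:

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    private final ReportRepository reportRepository;

    public ReportService(ReportRepository reportRepository) {
        this.reportRepository = reportRepository;
    }

    // readOnly = true lets Hibernate skip dirty checking and flushing, and
    // setReadOnly(true) is passed down to the JDBC Connection as a hint.
    @Transactional(readOnly = true)
    public List<Report> findReports() {
        return reportRepository.findAll();
    }
}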

Java EE: Why do we need to know about Concurrency?

I am extracting the following lines from the famous book - Mastering Enterprise JavaBeans™ 3.0.
Concurrent Access and Locking: Concurrent access to data in the database is always protected by transaction isolation, so you need not design additional concurrency controls to protect your data in your applications if transactions are used appropriately. Unless you make specific provisions, your entities will be protected by container-managed transactions using the isolation levels that are configured for your persistence provider and/or EJB container's transaction service. However, it is important to understand the concurrency control requirements and semantics of your applications.
Then it talks about Java Transaction API, Container Managed and Bean Managed Transaction, different TransactionAttributes, different Isolation Levels. It also states that -
The Java Persistence specification defines two important features that can be tuned for entities that are accessed concurrently:
1. Optimistic locking using a version attribute
2. Explicit read and write locks
OK, I read everything and understood it well. But the question is: in which scenarios do I need to use all these techniques? If I use container-managed transactions and they do everything for me, why do I need to bother about all these details? I know the significance of the TransactionAttributes (REQUIRED, REQUIRES_NEW) and know in which cases to use them, but what about the others? More specifically:
1. Why do I need bean-managed transactions?
2. Why do we need read and write locks on entity classes?
3. Why do we need a version attribute?
For Q2 and Q3: I think entity classes are not thread-safe and hence we need locking there. But the database is managed at the EJB class level by the JTA API (as stated in the first paragraph), so why do we need to manage the entity classes separately? I know how Lock and Version work and why they are required. But why do they come into the picture when JTA is already present?
Can you please provide any answer to them? If you give me some URLs even that will be very highly appreciated.
Many thanks in advance.
You don't need locking because entity classes are not thread-safe. Entities must not be shared between threads, that's all.
Your database comes with ACID guarantees, but that is not always sufficient, and you sometimes need to explicitly lock rows to get what you need. Imagine the following scenario:
transaction A reads employee 1 from database
transaction B reads employee 1 from database
transaction A sets employee 1 salary to 3000
transaction B sets employee 1 salary to 4000
transaction A commits
transaction B commits
The end result is that the salary is 4000. The user that started transaction A is completely unaware that even though he set the salary to 3000, another user, concurrently, set it to 4000. Depending on which transaction writes last, the end result is different (and thus unpredictable). That's the kind of situation that can be avoided using optimistic locking.
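A minimal sketch of how optimistic locking catches this, assuming the Employee entity carries a @Version attribute as described above:

// Transactions A and B both read employee 1 with, say, version = 1.
// Transaction A commits first: UPDATE ... SET salary = 3000, version = 2
// WHERE id = 1 AND version = 1 matches one row and succeeds.
// Transaction B then flushes the same UPDATE with version = 1 in the WHERE
// clause, matches zero rows, and fails instead of silently overwriting.
Employee employee = em.find(Employee.class, 1L);
employee.setSalary(new BigDecimal("4000"));
em.flush(); // throws javax.persistence.OptimisticLockException on conflict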
Next scenario: you want to generate purely sequential invoice numbers, without gaps and without duplicates. You could imagine reading and incrementing a value in the database to do that. But two transactions might both read the same value concurrently and then increment it, and you would thus have a duplicate. Taking a lock on the table row holding the next number avoids this situation.
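For the invoice-number scenario, a minimal sketch using a pessimistic lock; the InvoiceSequence entity is illustrative:

// LockModeType.PESSIMISTIC_WRITE issues a SELECT ... FOR UPDATE, so a
// concurrent transaction blocks on the find() until this one commits:
// no two transactions can read and increment the same value at once.
@Transactional
public int nextInvoiceNumber() {
    InvoiceSequence seq = em.find(
            InvoiceSequence.class, 1L, LockModeType.PESSIMISTIC_WRITE);
    int next = seq.getNextValue();
    seq.setNextValue(next + 1);
    return next;
}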

EntityManager doesn't see changes made in other transactions

I'm writing an application for GlassFish 2.1.1 (Java EE 5, JPA 1.0, as far as I know). I have the following code in my servlet (which I mostly borrowed from some sample on the Internet):
@PersistenceContext(name = "persistence/em", unitName = "pu")
private EntityManager em;

@Resource
private UserTransaction utx;

@Override
protected void doPost(...) {
    utx.begin();
    . . . perform retrieving operations on em . . .
    utx.rollback();
}
web.xml has the following in it:
<persistence-context-ref>
<persistence-context-ref-name>persistence/em</persistence-context-ref-name>
<persistence-unit-name>pu</persistence-unit-name>
</persistence-context-ref>
The problem is, the em doesn't see changes that have been made in another, outside transaction. Roughly: I make a request to my servlet from a web browser, see the data, perform some DML in an SQL console, reload the servlet page, and it doesn't show any change. I've tried many combinations of em.flush, utx.rollback, and em.joinTransaction, but none seem to do any good.
The situation is complicated by me being a total newbie in JPA, so I do not have a clear understanding of how the underlying machinery works. So any help and, more importantly, explanations/links about what is happening there would be very appreciated. Thanks!
The JPA implementation maintains a cache of entities that have been accessed. When you perform operations in a different transaction without going through JPA, the cache is no longer up to date, and hence you never see the changes made outside it.
If you do wish to see the changes, you will have to refresh the cache, in which case all entities will be evicted from it. Of course, you'll need to know when to do this (after the other transaction has completed); otherwise you'll continue to see stale entities. If this is your business need, then JPA is possibly not a good fit for your problem domain.
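A minimal sketch of the two refresh options, with a hypothetical MyEntity:

// Option 1: evict everything from the persistence context, so the next
// find() re-reads the row from the database instead of the cache.
em.clear();
MyEntity fresh = em.find(MyEntity.class, id);

// Option 2: overwrite the state of one already-managed entity with
// whatever is currently committed in the database row.
em.refresh(entity);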
Related:
Are entities cached in jpa by default ?
Invalidating JPA EntityManager session
As axtavt says, you need to commit the transaction in the console. Assuming you did that, it is also possible the data is still being cached by the PersistenceManager (or underlying infrastructure).
To prevent trouble with caching, you can evict by hand (which may be tricky, as you have to know when to evict) or you can switch to pessimistic locking. Pessimistic locking can have a huge impact on performance, but if you have multiple independent connections to the database you may not have a choice.
If your process has concurrent reads/writes from different sources the whole time, you may really need pessimistic locks. If you sometimes have a batch update from an external source, you may try to signal your JPA application, from that batch job, that it should evict, perhaps via a web service or so. That way you would not incur the pessimistic-locking performance degradation the entire time.
The wise lesson here is that synchronization of processes can be really complicated :)
Perhaps you need to commit the transaction made in the SQL console.
