I have two applications (Spring Boot with Hibernate) using the same Oracle database (11g). Both apps hit a specific table constantly, and the number of hits on this table is huge. We can see row lock contention exceptions in the DB logs, and the applications have to be restarted each time we get these or when a deadlock-like situation arises.
We are using the JPA EntityManager in these applications.
I need help with this issue.
According to this link:
http://www.dba-oracle.com/t_enq_tx_row_lock_contention.htm
This error occurs because a transaction is waiting for another transaction to commit or roll back. This behavior is correct from the database's point of view, and it makes sense if you care about data consistency. But if availability is a concern for you, you might need to apply some workarounds, including:
1. Make a separate table for each application and update the main table with the data offline (but you will sacrifice data consistency).
2. Use a separate thread to log and retry unsuccessful transactions.
3. Accept the availability issue (latency) if consistency is the bigger concern.
There are also some general tips to consider:
1. Keep the transaction minimal. Think about every step included in the transaction and whether it is really mandatory or can be moved outside.
2. Tune the transaction demarcation. You might find transactions held open for a long time for no reason other than bad coding.
3. Don't perform read operations inside transactions (see the sketch after this list).
4. Avoid an extended persistence context (stay stateless) whenever possible.
5. You might choose to use a non-JTA transactional data source for reporting and read-only queries.
6. Check the lock types you are using and, depending on your case, try to avoid anything but OPTIMISTIC.
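For tips 2 and 3 in particular, here is a minimal Spring/JPA sketch of what tighter demarcation can look like (the OrderService class, the Order entity, and the method names are invented for illustration):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service: reads run in a short read-only transaction, and the
// write transaction contains only the update itself, so row locks are held
// as briefly as possible.
@Service
public class OrderService {

    @PersistenceContext
    private EntityManager em;

    // Read path: nothing is modified, so no row locks are taken.
    @Transactional(readOnly = true)
    public Order findOrder(long id) {
        return em.find(Order.class, id);
    }

    // Write path: do any expensive preparation before calling this method,
    // so the transaction (and the row lock taken by the UPDATE) stays short.
    @Transactional
    public void updateStatus(long id, String newStatus) {
        Order order = em.find(Order.class, id);
        order.setStatus(newStatus);
    }
}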
But finally, you will agree with me that we shouldn't blame the database for blocking two transactions from modifying the same row.
I am quoting the following lines from the famous book Mastering Enterprise JavaBeans™ 3.0.
Concurrent Access and Locking: Concurrent access to data in the database is always protected by transaction isolation, so you need not design additional concurrency controls to protect your
data in your applications if transactions are used appropriately. Unless you make specific provisions, your entities will be protected by container-managed transactions using the isolation levels that are configured for your persistence provider and/or EJB container’s transaction service. However, it is important to understand the concurrency control requirements and semantics of your applications.
Then it talks about the Java Transaction API, Container-Managed and Bean-Managed Transactions, the different TransactionAttributes, and the different Isolation Levels. It also states that -
The Java Persistence specification defines two important features that can be
tuned for entities that are accessed concurrently:
1. Optimistic locking using a version attribute
2. Explicit read and write locks
OK, I read everything and understood it well. But the question is: in which scenarios do I need to use all these techniques? If I use container-managed transactions and they do everything for me, why do I need to bother about all these details? I know the significance of the TransactionAttributes (REQUIRED, REQUIRES_NEW) and in which cases to use them, but what about the others? More specifically -
Why do I need Bean Managed transaction?
Why do we need Read and Write Lock on Entity classes?
Why do we need version attribute?
For Q2 and Q3: I think entity classes are not thread-safe, and hence we need locking there. But the database is managed at the EJB level by the JTA API (as stated in the first paragraph), so why do we need to manage the entity classes separately? I know how the locks and the version attribute work and why they are required. But why do they come into the picture when JTA is already present?
Can you please provide answers to these? If you can give me some URLs, even that would be highly appreciated.
Many thanks in advance.
You don't need locking because entity classes are not thread-safe; entities must simply never be shared between threads, that's all.
Your database comes with ACID guarantees, but that is not always sufficient, and you sometimes need to explicitly lock rows to get what you need. Imagine the following scenarios:
transaction A reads employee 1 from database
transaction B reads employee 1 from database
transaction A sets employee 1 salary to 3000
transaction B sets employee 1 salary to 4000
transaction A commits
transaction B commits
The end result is that the salary is 4000. The user who started transaction A is completely unaware that, even though he set the salary to 3000, another user concurrently set it to 4000. Depending on which transaction writes last, the end result is different (and thus unpredictable). That's the kind of situation that can be avoided using optimistic locking.
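A minimal JPA sketch of how optimistic locking catches this (the Employee entity below is just an illustration): with a @Version attribute, whichever transaction commits second fails with an OptimisticLockException instead of silently overwriting the other update, and can then be retried against fresh data.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Employee {

    @Id
    private Long id;

    private int salary;

    // The provider increments this column on every update and adds it to the
    // WHERE clause of the UPDATE; if another transaction bumped it in the
    // meantime, the update matches zero rows and an OptimisticLockException
    // (StaleObjectStateException in native Hibernate) is thrown.
    @Version
    private long version;

    public int getSalary() { return salary; }
    public void setSalary(int salary) { this.salary = salary; }
}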
Next scenario: you want to generate purely sequential invoice numbers, without gaps and without duplicates. You could imagine reading and incrementing a value in the database to do that. But two transactions might both read the same value concurrently and then increment it, and you would end up with a duplicate. Holding a lock on the table row that stores the next number avoids this situation.
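A sketch of that idea with a JPA pessimistic lock (the InvoiceCounter entity and its single row keyed by "INVOICE" are assumptions): PESSIMISTIC_WRITE translates to SELECT ... FOR UPDATE, so a concurrent transaction blocks on the find() until the first one commits, and then reads the already-incremented value.

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class InvoiceNumberGenerator {

    // Must be called inside a transaction: the row lock is held until commit.
    public long nextInvoiceNumber(EntityManager em) {
        // SELECT ... FOR UPDATE on the counter row; concurrent callers block here.
        InvoiceCounter counter =
                em.find(InvoiceCounter.class, "INVOICE", LockModeType.PESSIMISTIC_WRITE);
        long next = counter.getNextValue();
        counter.setNextValue(next + 1);
        return next;
    }
}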
I have a scenario where I read from a set of tables in a Java service.
I've annotated the service class with @Transactional.
Is there any way to lock the rows I read, in all the tables I use, within my transaction, and release the locks at the end of the transaction?
PS: I'm using Spring with Hibernate, and I'm new to this locking concept.
Any material/example links would be of much help.
Thanks
This depends on the underlying database engine and selected transaction isolation level.
Some database systems do locking for reads, while others use MVCC, which means your updates won't be visible to other transactions until your transaction finishes, and your transaction will operate on a snapshot of the data taken at the start of the transaction.
So a simple answer is: choose an appropriately high transaction isolation level (e.g. SERIALIZABLE) for your needs, and a database engine that supports it.
http://en.wikipedia.org/wiki/Isolation_(database_systems)
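If you are using Spring, the isolation level can be requested per transaction. A minimal sketch (the ReportService class and its contents are placeholders, and depending on your transaction manager and connection pool the custom isolation level may need extra configuration):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    // SERIALIZABLE asks the database to run this unit of work as if it were
    // the only transaction; whether that is done with read locks or MVCC
    // snapshots depends on the engine.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void runConsistentRead() {
        // read from the tables here; all reads see one consistent view
    }
}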
We are using Hibernate 3.6.0.Final with JPA 2 and Spring 3.0.5 for a large-scale enterprise application running on Tomcat 7 and MySQL 5.5. Most transactions in the application live for less than a second and update 5-10 entities, but in some use cases we need to update more than 10-20K entities in a single transaction, which takes a few minutes, and hence more than 70% of the time such transactions fail with a StaleObjectStateException because some of those entities were updated by another transaction.
We generally maintain a version column in all tables, and in case of a StaleObjectStateException we retry, but since these transactions are so long anyway, I am not at all sure that retrying will let us escape the StaleObjectStateException.
Also, a lot of activity keeps updating these entities during busy hours, so we cannot go with a pessimistic approach, because it could potentially halt many activities in the system.
Please suggest how to fix this long-transaction issue, because we cannot spawn thousands of small independent transactions: we cannot afford messed-up data if some of them fail and some succeed.
Modifying 20,000 entities in one transaction is really a lot, much more than normal.
I can't give you a general solution, but here are some ideas on how to approach the problem.
1) Use LockMode.UPGRADE (see pessimistic locking). This explicitly generates a SELECT ... FOR UPDATE, which stops other users from modifying the rows while they are locked.
This should avoid your problem, but if you have too many large transactions it can produce deadlocks (depending on your code) or timeouts.
2) Change your data model to avoid these large transactions. Why do you have to update 10,000 rows? Perhaps you can put the information that is updated in so many rows into a new table and have it merely referenced, so that you only have to update a few rows in the new table.
3) Use a StatelessSession instead of a Session. In that case you are not forced to roll back after an exception; instead you can correct the problem and continue (in your case, reload the entity that was modified in the meantime and apply the large transaction's modification to the reloaded entity). This may let you handle the critical event (a row modified in the meantime) on a row-by-row basis instead of for the complete large transaction.
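For idea 3), a rough Hibernate sketch (the MyEntity class, its status field, and the reload-and-reapply policy are assumptions, not a drop-in solution):

import org.hibernate.SessionFactory;
import org.hibernate.StaleStateException;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

public class BulkUpdater {

    private final SessionFactory sessionFactory;

    public BulkUpdater(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void markProcessed(Iterable<Long> ids) {
        StatelessSession session = sessionFactory.openStatelessSession();
        Transaction tx = session.beginTransaction();
        try {
            for (Long id : ids) {
                MyEntity entity = (MyEntity) session.get(MyEntity.class, id);
                entity.setStatus("PROCESSED");
                try {
                    session.update(entity);            // versioned UPDATE
                } catch (StaleStateException e) {
                    // The row was changed in the meantime: reload it and
                    // reapply this change instead of failing the whole batch.
                    MyEntity fresh = (MyEntity) session.get(MyEntity.class, id);
                    fresh.setStatus("PROCESSED");
                    session.update(fresh);
                }
            }
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}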
I am working on an application that uses Oracle's built in authentication mechanisms to manage user accounts and passwords. The application also uses row level security. Basically every user that registers through the application gets an Oracle username and password instead of the typical entry in a "USERS" table. The users also receive labels on certain tables. This type of functionality requires that the execution of DML and DDL statements be combined in many instances, but this poses a problem because the DDL statements perform implicit commits. If an error occurs after a DDL statement has executed, the transaction management will not roll everything back. For example, when a new user registers with the system the following might take place:
1. Start a transaction.
2. Insert the person's details into a table (first name, last name, etc.). (DML)
3. Create an Oracle account (CREATE USER testuser IDENTIFIED BY password;). (DDL: implicit commit; the transaction ends.)
4. A new transaction begins.
5. Perform more DML statements (inserts, updates, etc.).
6. An error occurs, and the transaction only rolls back to step 4.
I understand that the above logic is working as designed, but I'm finding it difficult to unit test this type of functionality and to manage it in the data access layer. I have had the database go down, or errors occur during the unit tests, and this left the test schema contaminated with test data that should have been rolled back. It's easy enough to wipe the test schema when this happens, but I'm worried about database failures in a production environment. I'm looking for strategies to manage this.
This is a Java/Spring application. Spring is providing the transaction management.
First off I have to say: it's a bad idea doing it this way, for two reasons:
Connections are based on the user. That means you largely lose the benefits of connection pooling. It also doesn't scale terribly well: if you have 10,000 users online at once, you're going to be continually opening and closing hard connections (rather than drawing on soft connection pools); and
As you've discovered, creating and removing users is DDL not DML and thus you lose "transactionality".
I'm not sure why you've chosen to do it this way, but I would strongly recommend you implement users at the application layer and not the database layer.
As for how to solve your problem: basically, you can't. It's the same as if you were creating a table or an index in the middle of your sequence.
You should use Oracle proxy authentication in combination with row level security.
Read this: http://www.oracle.com/technology/pub/articles/dikmans-toplink-security.html
I'll disagree with some of the previous comments and say that there are a lot of advantages to using the built-in Oracle account security. If you have to augment this with some sort of shadow table of users holding additional information, how about wrapping the Oracle account creation in a separate package that is declared PRAGMA AUTONOMOUS_TRANSACTION and returns a success/failure status to the code that is doing the insert into the shadow table? I believe this would isolate the Oracle account creation from the main transaction.
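From the Java side that could look roughly like the sketch below. The user_admin package, its create_db_account procedure, the p_* parameter names, and the app_users shadow table are all assumptions for illustration; the PL/SQL procedure itself would carry the PRAGMA AUTONOMOUS_TRANSACTION declaration and the CREATE USER statement.

import java.util.Map;

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;
import org.springframework.transaction.annotation.Transactional;

public class RegistrationService {

    private final JdbcTemplate jdbcTemplate;
    private final SimpleJdbcCall createAccountCall;

    public RegistrationService(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
        // Calls a PL/SQL procedure that creates the Oracle account inside an
        // autonomous transaction and reports success/failure via p_status.
        this.createAccountCall = new SimpleJdbcCall(dataSource)
                .withCatalogName("user_admin")           // assumed package name
                .withProcedureName("create_db_account"); // assumed procedure name
    }

    @Transactional
    public void register(String username, String password, String firstName, String lastName) {
        // The shadow-table insert stays inside the Spring-managed transaction.
        jdbcTemplate.update(
                "INSERT INTO app_users (username, first_name, last_name) VALUES (?, ?, ?)",
                username, firstName, lastName);

        Map<String, Object> out = createAccountCall.execute(new MapSqlParameterSource()
                .addValue("p_username", username)
                .addValue("p_password", password));

        Number status = (Number) out.get("p_status");
        if (status == null || status.intValue() != 0) {
            // Rolls back the shadow-table insert; the account created by the
            // autonomous transaction would need to be compensated separately.
            throw new IllegalStateException("Oracle account creation failed");
        }
    }
}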
Perhaps this question is not very clear, but I didn't find better words for the heading, which briefly describes the problem I'd like to deal with.
I want to restrict access from a java desktop application to postgres.
The background:
Suppose you have two apps running, and the first application has to do some complex calculations based on data in the DB. To guarantee that the data in the DB stays immutable during the calculation, I'd like to lock the DB against insert, update, and delete operations. On the client side I think it's impossible to handle this behaviour satisfactorily. So I thought about using a small Java app on the server side which works like a proxy: it passes CRUD (Create, Read, Update, Delete) operations through until it gets a command to lock. After a lock it rejects all CUD operations until it gets an unlock command from the locking client or a timeout is reached.
Questions:
What do you think about this approach?
Is it possible to lock a Database while using such an approach?
Would you prefer Java SE or Java EE for the server-side Java app?
Thanks in advance.
Why not use transactions in your operations? The database has features to maintain data integrity itself, rather than resorting to a brute-force operation such as a total database lock.
The locking mechanism you describe sounds like it would be a pain for the users. Are the users initiating the lock, or is the software itself? If it's the users, you can expect some problems when Bob hits lock and then goes to lunch for 2 hours, forgetting to unlock the database first...
Indeed... there are a few proper ways to deal with this problem.
Just lock the tables in your code. PostgreSQL has commands for locking entire tables that you could run from your client application.
Pick a transaction isolation level that doesn't have the problem of reading data that was committed after your transaction started (BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ).
Of these, by far the most efficient is to use REPEATABLE READ as your isolation level. Postgres supports this quite efficiently, and it will give you a consistent view of the data without such heavy locking of the DB.
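A minimal JDBC sketch of the second option, assuming a DataSource and an invented area_of_responsibility table: the whole calculation reads from one consistent snapshot, while other clients can keep writing.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.DataSource;

public class CalculationRunner {

    private final DataSource dataSource;

    public CalculationRunner(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void runCalculation() throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            // REPEATABLE READ: every query in this transaction sees the same
            // snapshot of the data, regardless of concurrent commits.
            con.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT id, amount FROM area_of_responsibility")) {
                while (rs.next()) {
                    // feed the rows into the complex calculation
                }
                con.commit();
            } catch (SQLException e) {
                con.rollback();
                throw e;
            }
        }
    }
}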
Yeah, I thought about transactions, but in this case I can't use them. I'm sorry I didn't mention it exactly. So assume the following simple case:
A calculation closes one area of responsibility. After the calculation a new one is opened, and new inserts are dedicated to it. But while the calculation process is running, inserts, updates, or deletes are not allowed on the data of the (currently calculated) area of responsibility. Moreover, deletes are strictly prohibited because the data has to be archived.
So IMO the use of transactions doesn't fit this requirement. Or did I miss something?
PS (off topic) @jsight: I recently read that internally Postgres maps "repeatable read" to "serializable", so using "repeatable read" gives you more restrictions than you would perhaps expect.