TX-row lock contention: Inserting duplicate values for unique key - java

We are getting a TX-row lock contention error while trying to insert data. It happens while running a job which processes an XML file with almost 10,000 records, inserting the data into a table.
We have a unique key constraint on one of the columns in the table, and the request contains duplicate values. This is causing the locks, and thereby the job is taking more time.
We are using Hibernate and Spring. The DAO method we use is the Hibernate template's 'save', annotated with Spring Transaction Manager's @Transactional.
Any suggestions, please?

It's not clear whether you're getting locking problems or errors.
"TX-row lock contention" is an event indicating that two sessions are trying to insert the same value into a primary or unique constraint column set -- an error is not raised until the first one commits; then the second one gets the error. So you definitely have multiple sessions inserting rows. If you had just one session, you'd receive the error immediately, with no "TX-row lock contention" event being raised.
Suggestions:
Insert into a temporary table without the constraint, then load into the real table using logic that eliminates the duplicates.
Eliminate the duplicates as part of the read of the XML.
Use Oracle's error logging syntax -- example here: http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm#SQLRF55004
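For the error-logging route, here is a minimal JDBC sketch. Table names, connection details, and the staging table are illustrative, and the ERR$_RECORDS table would be created beforehand with DBMS_ERRLOG.CREATE_ERROR_LOG:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch only: duplicate rows are diverted into ERR$_RECORDS instead of
// blocking on "TX - row lock contention" and failing the whole job.
public class ErrorLoggingInsert {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             Statement st = con.createStatement()) {
            int inserted = st.executeUpdate(
                "INSERT INTO records (unique_col, payload) " +
                "SELECT unique_col, payload FROM staging_records " +
                "LOG ERRORS INTO err$_records ('xml-job') REJECT LIMIT UNLIMITED");
            System.out.println("Rows inserted: " + inserted);
        }
    }
}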

Related

Upsert in Spring Data

There is a table with
columns: id (PK), name, attribute
and a unique constraint on (name, attribute).
There are a number of threads which insert into the table if a record is not there. Spring Data is used for that, and it's done in a transaction which could take some time. The records could be the same, meaning the same (name, attribute), simultaneously in a couple of threads. From time to time a race condition happens: thread A tries to commit a new record whereas thread B committed the same one before thread A read it.
Are there any approaches for how to do an upsert in this kind of situation?
If there are other suggestions to resolve this issue, I would be happy to hear them.
Either do it the JPA way (sketched below):
Try to find the entity; if it is not there, save it.
If it is there, there is nothing to do, but of course you could update it by manipulating the found entity.
Alternatively, go SQL and write an actual upsert/merge statement, which many database dialects support.
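A hedged sketch of the JPA way, with recovery when two threads race on the same (name, attribute). The entity, repository, and finder names are invented for illustration:

import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

// Sketch: find first; if absent, insert in a short separate transaction, and
// fall back to re-reading when the unique constraint says we lost the race.
@Service
public class RecordService {

    private final RecordRepository repository; // Spring Data repository (assumed)
    private final TransactionTemplate txTemplate;

    public RecordService(RecordRepository repository,
                         PlatformTransactionManager txManager) {
        this.repository = repository;
        this.txTemplate = new TransactionTemplate(txManager);
        // Own transaction, so a failed insert doesn't mark the caller rollback-only.
        this.txTemplate.setPropagationBehavior(
                TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    public Record findOrCreate(String name, String attribute) {
        return repository.findByNameAndAttribute(name, attribute)
                .orElseGet(() -> insertOrReread(name, attribute));
    }

    private Record insertOrReread(String name, String attribute) {
        try {
            return txTemplate.execute(status ->
                    repository.saveAndFlush(new Record(name, attribute)));
        } catch (DataIntegrityViolationException e) {
            // Another thread committed the same (name, attribute) first.
            return repository.findByNameAndAttribute(name, attribute)
                             .orElseThrow(() -> e);
        }
    }
}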

Multiple Consumers are Causing LockAcquisitionException somewhere in RestImpl class

My application receives JMSTextMessages from IBM MQ in bulk. For each message I am calling a REST API to process it. Somewhere while the API is processing these messages I am getting the following exception logs, intermittently but very frequently:
java.lang.Exception: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.LockAcquisitionException: could not execute statement
2018-04-03 17:54:10.614 ERROR 5259 --- [.0-8083-exec-59] o.h.engine.jdbc.spi.SqlExceptionHelper : ORA-00060: deadlock detected while waiting for resource
It prints neither the table nor the query it is trying to execute. To resolve this issue I tried pushing all database processing code in my REST API service inside a synchronized block, but still the same issue. Due to this issue every second message fails to get processed.
Based on a lot of material available online, it appears that each message received triggers a new thread (at the REST API service method end), thus causing some deadlock-like behavior in my app. Sadly, I am unable to figure out which piece of code exactly is causing this, and we have a single thread when it comes to service application startup.
So now I have decided to find out if I can introduce a delay before I start processing another message at the listener end. Again, I read a lot, and everywhere I see XML property changes to handle this, but nothing comes in handy if I want to do it via Spring annotations. My app does not have any XML configs.
Here is my pseudo code:
@JmsListener(destination = "queue_name")
public void receiveMessage(Object message) {
    myRestService.processMessage(message);
}
ServiceImpl code
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void processMessage(Object message) {
    Long id = getIdFromMessage(message);
    X x = readFromDB(id);
    updateDataInX(x, message);
    saveX(x);
}
I would also like to highlight that I have a parent-child relationship in the database, where the PRIMARY_KEY of the parent table is the PRIMARY_KEY (as well as an FK) of the child table, and it is indexed as well. While I am updating values in the above method (updateDataInX), some columns are updated in the child table row and some in the parent table row (mapped to the child row). The instance x is of the CHILD class.
Apart from introducing a delay, how can I fix this? The producer is a separate application, and within a second they send multiple messages -- which the consumer is clearly failing to process. This is entirely a back-end update, so I don't mind introducing a delay as a fail-safe mechanism, but I did not find any proper solution with Spring annotations. Also, can you explain what exactly happens with threads at the receiver side and the REST API side? I have been thoroughly confused reading about it for the last 3 weeks.
My code also involved adding AUDIT logging data to one of the other CHILD tables, which referred to a NON-PRIMARY key of the same PARENT table (hierarchy of X).
While inserting a new row into the CHILD table, Oracle was taking a full table lock, which caused all subsequent inserts to fail with a deadlock issue.
The reason for taking a full table lock on the CHILD table was that it had a foreign key column REFERENCE_NO which refers to the PARENT table.
When SQL statements from multiple sessions involve parent/child tables, a deadlock may occur with Oracle's default locking mechanism if the foreign key column in the child table is not indexed. Oracle also takes a full table lock on the child table when a primary key value is changed in the parent table.
In our case PARENT.REFERENCE_NO is a non-primary key which is referenced as a foreign key in the CHILD table. Every time we do a status update on the PARENT table, as in the pseudo code above, Hibernate fires an update on REFERENCE_NO, which results in locking the entire CHILD table, making it unavailable for AUDIT logging insertions.
So the key is: Oracle recommends indexing such foreign key columns, so that Oracle does not take a full table lock.
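For reference, the fix itself is a one-time piece of DDL; a sketch via JDBC (the index name is arbitrary; CHILD and REFERENCE_NO are the table/column from the description above):

import java.sql.Connection;
import java.sql.Statement;

// Sketch: index the foreign key so Oracle takes row-level locks instead of a
// full table lock on CHILD when the referenced PARENT key is touched.
public final class AddChildFkIndex {
    public static void createIndex(Connection connection) throws Exception {
        try (Statement st = connection.createStatement()) {
            st.execute("CREATE INDEX idx_child_reference_no ON child (reference_no)");
        }
    }
}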

How to avoid violation of constraints for inserts and updates in Oracle DB

I am building a web application which inserts records into a database. I validate records before inserting them. If, between the time of my validation check and the insertion, another application puts the DB into a state such that a unique key constraint violation would occur when I attempt to insert the records I have just validated, how can I avoid this kind of problem? I am using an Oracle database and my development language is Java.
Basically, you can't, unless you change your constraints. You have several ways to go:
You keep the unique constraint and deal with the database exception in your Java code (see the sketch after this list). Race conditions can happen, and you have to deal with them.
You lock the entire table as soon as someone enters "insertion mode" in your app, effectively limiting inserts to one at a time. This would mean blocking other users in your application from entering edit mode until the first one is done. Probably not a good idea, but it can work when you have very few users.
Remove the constraint. Now this might seem drastic, but think about it. Do you really need globally unique entries in some fields? Or can you work around that by including an additional column in your key? This could be an artificial counter, effectively making each row unique again, or maybe just the user ID, so that the unique constraint is only checked within each user's data.
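A minimal sketch of the first option in plain JDBC. The table and column names are made up; depending on the driver version you may need to inspect the SQL state or ORA-00001 error code instead of the exception subclass:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;

// Sketch: attempt the insert and treat a duplicate as an expected outcome.
public final class RecordInserter {
    /** Returns true if the row was inserted, false if the key already existed. */
    public static boolean insertIfAbsent(Connection con, long id, String value)
            throws Exception {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO records (id, value) VALUES (?, ?)")) {
            ps.setLong(1, id);
            ps.setString(2, value);
            ps.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException e) {
            // Another session inserted the same key between validation and insert.
            return false;
        }
    }
}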

Prevent violation of UNIQUE constraint with Hibernate

I have a table like (id INTEGER, sometext VARCHAR(255), ....) with id as the primary key and a UNIQUE constraint on sometext. It is used in a web server, where a request needs to find the id corresponding to a given sometext if it exists; otherwise a new row gets inserted.
This is the only operation on this table. There are no updates and no other operations. Its sole purpose is to persistently number the encountered values of sometext. This means that I can't drop the id and use sometext as the PK.
I do the following:
First, I consult my own cache to avoid any DB access. Nearly always this works, and I'm done.
Otherwise, I use Hibernate Criteria to find the row by sometext. Usually this works and, again, I'm done.
Otherwise, I need to insert a new row.
This works fine, except when there are two overlapping requests with the same sometext. Then a ConstraintViolationException results. I'd need something like INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE (MySQL syntax) or MERGE (Firebird syntax).
I wonder what the options are.
AFAIK Hibernate's merge works on the PK only, so it's inappropriate. I guess a native query might or might not help, as it may or may not be committed when the second INSERT takes place.
Just let the database handle the concurrency. Start a secondary transaction purely for inserting the new row. If it fails with a ConstraintViolationException, just roll that transaction back and read the new row (sketched below).
Not sure this scales well if the likelihood of a duplicate is high: it is a lot of extra work if some percentage (depending on the database) of transactions have to fail the insert and then re-select.
A secondary transaction minimizes the length of time the transaction to add the new text takes. Assuming the database supports it correctly, it is possible for the thread 1 transaction to cause the thread 2 select/insert to hang until the thread 1 transaction is committed or rolled back. Overall database design might also affect transaction throughput.
I don't necessarily question why sometext can't be a PK; I wonder why you need to break it out at all. Of course, with large volumes it might substantially save space if sometext records are large; it almost seems like you're trying to emulate a Lucene index to give you a complete list of text values.
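A sketch of that secondary-transaction shape with plain Hibernate. SomeText is an invented entity for the (id, sometext) table; depending on the Hibernate version, the constraint violation may arrive wrapped in a PersistenceException:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.exception.ConstraintViolationException;

// Sketch: short dedicated transaction for the insert attempt; on a duplicate,
// roll back and read the row the other request committed.
public class SomeTextDao {

    private final SessionFactory sessionFactory;

    public SomeTextDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public long findOrInsert(String sometext) {
        try {
            return insertInNewTransaction(sometext);
        } catch (ConstraintViolationException e) {
            // Lost the race: another request committed the same value first.
            return findId(sometext);
        }
    }

    // A failure here does not poison any outer session.
    private long insertInNewTransaction(String sometext) {
        try (Session session = sessionFactory.openSession()) {
            Transaction tx = session.beginTransaction();
            try {
                SomeText row = new SomeText(sometext);
                session.persist(row);
                tx.commit();
                return row.getId();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            }
        }
    }

    private long findId(String sometext) {
        try (Session session = sessionFactory.openSession()) {
            return session.createQuery(
                    "select t.id from SomeText t where t.sometext = :s", Long.class)
                .setParameter("s", sometext)
                .getSingleResult();
        }
    }
}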

Hibernate Delete Error: Batch Update Returned Unexpected Row Count

I wrote the method below, which is supposed to delete a member record from the database. But when I use it in my servlet it returns an error.
MemberDao Class
public static void deleteMember(Member member) {
    Session hibernateSession = HibernateUtil.getSessionFactory().getCurrentSession();
    Transaction tx = hibernateSession.beginTransaction();
    hibernateSession.delete(member);
    tx.commit();
}
Controller Part
if (delete != null) {
    HttpSession httpSession = request.getSession();
    Member member = (Member) httpSession.getAttribute("member");
    MemberDao.deleteMember(member);
    nextPage = "ledenlijst.jsp";
}
HTTP Status 500
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
Sometimes it even throws this error when I try to execute the page multiple times.
org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
Does anybody know what exactly is causing these errors?
The error can be caused by several things. I'm not taking credit for it; I found it here.
Flushing the data before committing the object may clear all objects pending persistence.
The object has a primary key which is auto-generated, but you are forcing an assigned key.
You are cleaning the object before committing it to the database.
Zero or incorrect ID: if you set the ID to zero or something else wrong, Hibernate will try to update instead of insert.
The object is stale: Hibernate caches objects from the session. If the object was modified and Hibernate doesn't know about it, it will throw this exception -- note the StaleStateException.
Also look at this answer by beny23, which gives a few further hints to find the problem:
In your Hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem (see the sketch after this list).
Set the log levels for Spring and Hibernate to DEBUG; again, this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
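For completeness, a sketch of the Hibernate properties the first hint refers to. They can equally be set in hibernate.cfg.xml, a Spring session-factory bean, or application.properties:

import java.util.Properties;

// Sketch: properties that make Hibernate print the SQL it executes.
public final class DebugHibernateProps {
    public static Properties debugProperties() {
        Properties props = new Properties();
        props.setProperty("hibernate.show_sql", "true");   // log executed SQL
        props.setProperty("hibernate.format_sql", "true"); // pretty-print it
        return props;
    }
}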
The exception
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
is thrown when Hibernate notices that the entity it wants to flush to the database isn't exactly as it was at the beginning of the transaction.
I described in more detail two different use cases that happened to me here.
In my case this exception was caused by a wrong entity mapping. There was no cascade for the relation, and the referenced child entity wasn't saved before being referenced from the parent. Changing it to
@OneToMany(cascade = CascadeType.ALL)
fixed the issue (a fuller mapping sketch follows below).
Surely the best way to find the cause of this exception is to set show_sql and the DEBUG level for logs -- it will stop just at the SQL that caused the problem.
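A minimal mapping sketch of that fix. The entity names are illustrative, and mappedBy assumes the child side owns the relation:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Sketch: cascading persists from parent to children means the child rows are
// written before anything references them, avoiding the row-count mismatch.
@Entity
public class Parent {

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    private List<Child> children = new ArrayList<>();
}

@Entity
class Child {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    private Parent parent;
}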
I was facing the same issue. The code was working in the testing environment, but it was not working in the staging environment.
org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 3; expected: 1
The problem was that the table had a single entry for each primary key in the testing DB, but in the staging DB there were multiple entries for the same primary key. (In the staging DB the table didn't have any primary key constraints, and there were multiple entries.)
So every update operation failed: it tries to update a single record and expects an update count of 1, but since there were 3 records in the table for the same primary key, the resulting update count was 3. Since the expected update count and the actual update count didn't match, it threw the exception and rolled back.
After I removed all the records with duplicate primary keys and added primary key constraints, it worked fine.
This is the solution for my case; maybe it will help you!
It was actually a conversion problem between a database field type (timestamp on PostgreSQL) and its equivalent property type (Calendar) in the Hibernate XML file.
When Hibernate issued the update request, it didn't retrieve the row, because the request queried with a badly converted Calendar value.
So I simply replaced the property type "Calendar" with "Date" in the Hibernate XML file, and the problem was fixed.
I recently experienced this, and what happened was that I used the update method and it was throwing an exception because there was no existing record. I changed the method to saveOrUpdate, and it worked.
I experienced this same issue with Hibernate/JPA 2.1 when using memcached as the second-level cache. You would get the above exception along with a StaleStateException. The resolution was different from what has been noted previously.
I noticed that if you have an operation that interleaves deletes and selects (finds) on the same table within the same transaction, Hibernate can become overwhelmed and report that stale state exception. It would only occur for us in production, as multiple identical operations on different entities would occur on the same table. You would see the system time out and throw exceptions.
The solution is simply to be more efficient. Rather than interleaving in a loop, resolve which items need to be read and read them, preferably in one operation. Then perform the delete in a separate operation (see the sketch below). Again, all in the same transaction, but don't pepper Hibernate with read/delete/read/delete operations.
This is much faster and reduces the housekeeping load on Hibernate considerably. The problem went away. This occurs when you are using a second-level cache and won't occur otherwise, as without a second-level cache the load falls on the database for resolution. That's another issue.
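A sketch of that shape in JPA. The entity name and the isObsolete predicate are invented; the point is one read and one bulk delete instead of a loop of alternating finds and deletes:

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import javax.persistence.EntityManager;

// Sketch: read everything once, decide in memory, then delete in one statement.
public final class BulkCleanup {
    public static void deleteObsolete(EntityManager em, Set<Long> candidateIds) {
        List<Item> items = em.createQuery(
                "select i from Item i where i.id in :ids", Item.class)
            .setParameter("ids", candidateIds)
            .getResultList();

        List<Long> toDelete = items.stream()
            .filter(Item::isObsolete)          // hypothetical predicate
            .map(Item::getId)
            .collect(Collectors.toList());

        if (!toDelete.isEmpty()) {
            em.createQuery("delete from Item i where i.id in :ids")
                .setParameter("ids", toDelete)
                .executeUpdate();
        }
    }
}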
I have had this problem. I checked my code; there weren't any problems, but when I checked my data I found out that I had two entities with the same ID!
So the flush() could not work, because it works one by one for batch updates, and it found 2 rows. Therefore it didn't update and threw the exception. That was my problem; I don't know whether it applies to you!
