My application is receiving JMS TextMessages from IBM MQ in bulk. For each message I call a REST API to process it. While the API is processing these messages I am getting the following exception logs, intermittently but very frequently:
java.lang.Exception: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.LockAcquisitionException: could not execute statement
2018-04-03 17:54:10.614 ERROR 5259 --- [.0-8083-exec-59] o.h.engine.jdbc.spi.SqlExceptionHelper : ORA-00060: deadlock detected while waiting for resource
It prints neither the table nor the query it is trying to execute. To resolve this issue I tried wrapping all the database-processing code in my REST API service inside a synchronized block, but the issue persists. Because of it, every second message fails to get processed.
Based on a lot of material available online, it appears that each received message triggers a new thread (on the REST API service side), which causes this deadlock-like behaviour in my app. Sadly, I am unable to figure out which piece of code exactly is causing it, and the service application itself starts with a single thread.
So now I have decided to find out whether I can introduce a delay before the listener starts processing the next message. Again I read a lot, and everywhere I see XML property changes to handle this, but nothing handy if I want to do it via Spring annotations. My app does not have any XML configs.
Here is my pseudo code:
@JmsListener(destination = "queue_name")
public void receiveMessage(Object message) {
    myRestService.processMessage(message);
}
ServiceImpl code
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void processMessage(Object message) {
    Long id = getIdFromMessage(message);
    X x = readFromDB(id);
    updateDataInX(x, message);
    saveX(x);
}
I would also like to highlight that I have a parent-child relationship in the database, where the PRIMARY_KEY of the parent table is also the PRIMARY_KEY (and FK) of the child table, and it is indexed as well. While I am updating values in the method above (updateDataInX), some columns are updated in the child table row and some in the parent table row mapped to that child row. The instance x is of the CHILD class.
Apart from introducing a delay, how can I fix this? The producer is a separate application that sends multiple messages within a second, which the consumer is clearly failing to process. This is entirely a back-end update, so I don't mind introducing a delay as a fail-safe mechanism, but I did not find any proper solution using Spring annotations. Also, can you explain what exactly happens with threads on the receiver side and on the REST API side? I have been thoroughly confused reading about this for the last three weeks.
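Update: for the annotation-only throttling part of my question, the closest thing I found (a minimal sketch, not yet verified against my app) is capping the number of consumer threads on the listener container factory in Java config, so messages are handled one at a time:

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
@EnableJms
public class JmsConfig {

    // "jmsListenerContainerFactory" is the default bean name @JmsListener looks up
    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // a single consumer thread: the queue is drained strictly one message at a time
        factory.setConcurrency("1-1");
        return factory;
    }
}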
My code also inserts AUDIT logging data into another CHILD table, which refers to a NON-PRIMARY key of the same PARENT table (the hierarchy of X).
While inserting a new row into that CHILD table, Oracle was taking a full table lock, which caused all subsequent inserts to fail with a deadlock.
The reason for the full table lock on the CHILD table was its foreign key column REFERENCE_NO, which refers to the PARENT table.
When SQL statements from multiple sessions involve parent/child tables, a deadlock may occur with Oracle's default locking mechanism if the foreign key column in the child table is not indexed. Oracle also takes a full table lock on the child table when the primary key value is changed in the parent table.
In our case PARENT.REFERENCE_NO is a non-primary key that is referenced as a foreign key in the CHILD table. Every time we do a status update on the PARENT table, as in the pseudo code above, Hibernate fires an update on REFERENCE_NO, which locks the entire CHILD table and makes it unavailable for AUDIT logging inserts.
So the key is: Oracle recommends indexing such foreign key columns, so that Oracle does not take a full table lock.
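For illustration, here is a sketch of how such an index can be declared at the JPA level (the entity, table and index names are placeholders, not our real schema). On an existing schema the equivalent CREATE INDEX has to be run directly, since Hibernate only creates the index when it generates the DDL:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.Table;

@Entity
@Table(name = "CHILD_AUDIT",
       indexes = @Index(name = "IDX_CHILD_AUDIT_REF_NO", columnList = "REFERENCE_NO"))
public class ChildAudit {

    @Id
    private Long id;

    // foreign key column pointing at PARENT.REFERENCE_NO; indexing it is what stops
    // Oracle from taking a full table lock on this child table
    @Column(name = "REFERENCE_NO")
    private Long referenceNo;
}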
Related
In my java application, I'm running a Transaction I where an insert is done on table A.
Transaction I also calls a method which uses a separate connection to have a separate Transaction II in order to create a record in table B. (This table is used to keep track of internal ID generation, and it is important that, whatever happens to Transaction I, Transaction II is committed.)
Now, because of things, table B actually had a reference to the PK of table A (which is actually the problem).
However, since the record in A is just being created in the "outer" Transaction I, it is not yet committed, and the value to insert as a reference in B (pointing to A) thus does not exist in Transaction II. (Or in other words, I would suspect that this value is not visible to Transaction II since it isn't committed.)
Now in this situation I'd expect to get a FK constraint violation error, but I am getting a
com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Lock wait timeout exceeded; try restarting transaction.
Testing the same scenario on a postgresql db raises the expected:
org.postgresql.util.PSQLException: ERROR: insert or update on table "B" violates foreign key constraint
I suspect that the MySQL database knows about the insert from the first transaction and, in the second transaction, actually waits for the first one to finish so that it can insert the record of the second transaction?
Of course that can't ever happen since the first transaction also waits for the second to complete. So the "inner" transaction aborts with the timeout.
Is there a reason the two DB systems behave differently, or am I missing something?
Update: I think this has to do with the different default isolation levels of MySQL (REPEATABLE_READ) and PostgreSQL (READ_COMMITTED).
You cannot have two separate connections and expect it to work.
You should write the code so that you can pass the current connection object into the method; the connection is then reused in that method and will not complain about the FK.
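A rough sketch of what that could look like (table, column and method names are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TableBDao {

    // reuses the caller's connection, so the INSERT into B runs inside Transaction I
    // and can see the not-yet-committed row in A
    public void insertIntoB(Connection connection, long idFromA) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO B (a_id) VALUES (?)")) {
            ps.setLong(1, idFromA);
            ps.executeUpdate();
        }
        // no commit here: the caller commits (or rolls back) both inserts together
    }
}

Note that this ties the fate of the table B record to Transaction I, which is exactly the independence the question wanted to keep.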
The solution for my specific problem was to simply remove the FK constraint between Table B and Table A since it is not needed anyway.
The different behaviour between MySQL and PostgreSQL indeed derives from their different default isolation levels.
I'm running two instances of SymmetricDS with MySQL.
I have to start and stop synchronizations, for that I use:
update sym_channel set enabled=0/1;
For some reason when they synchronize (enabled=1), I get the following error:
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails (`test_db`.`defectstdreference`, CONSTRAINT `Relationship72` FOREIGN KEY (`improve_notice_doc_id`, `defect_id`, `client_id`) REFERENCES `improvementnoticedefect` (`doc_id`, `defect_id`, `client)
Yet after some time the synchronization finishes successfully, so all this exception does is significantly slow down the process.
Do you have any idea what may have caused this?
Have you created your own channels or are you using default?
If you created your own, they can synchronize independently of each other. As a result, if you have a foreign key between two tables and the parent table uses channelA while the child table uses channelB, it is possible that changes on channelB synchronize before channelA, causing a foreign key error. At other times channelB may process after channelA, which would explain the intermittent behaviour. SymmetricDS will retry any batches in error, so eventually it gets them in order. To avoid these errors altogether, make sure that if you are using custom channels, all related tables participate in the same channel.
We have a stateless EJB which persists some data in an object-oriented database. Unfortunately, our persistence object currently does not have a unique key for some unknown reason, and altering the PO is not possible at the moment.
So we decided to synchronize the code: we check whether there is already a persisted object with the name (which we consider should be unique), and then decide whether to persist or not.
Later we realized that the code is deployed on a cluster which has three JBoss instances.
Can anyone please suggest an approach that prevents persisting objects with the same name?
If you have a single database behind the JBoss cluster you can just apply a unique constraint to the column, for example (I am assuming it's an SQL database):
ALTER TABLE your_table ADD CONSTRAINT unique_name UNIQUE (column_name);
Then in the application code you may want to catch the SQL exception and let the user know they need to try again or whatever.
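As a sketch (assuming JPA over that SQL database; the entity type and the exact exception to unwrap are placeholders and depend on the provider and driver):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceException;

public class UniqueNameService {

    public void createUniqueByName(EntityManager em, MyEntity entity) {
        try {
            em.persist(entity);
            em.flush(); // force the INSERT now so a unique-constraint violation surfaces here
        } catch (PersistenceException e) {
            // with Hibernate the cause is typically org.hibernate.exception.ConstraintViolationException
            throw new IllegalStateException("An object with this name already exists", e);
        }
    }
}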
Update:
If you cannot alter the DB schema, then you can achieve a similar result by performing a SELECT query before the insert to check for duplicate entries. If you are worried about two inserts happening at the same time, you can look at applying a WRITE_LOCK to the row in question.
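A sketch of that variant with JPA's pessimistic locking (entity and query are placeholders). Note the lock only helps once a matching row exists; across the three JBoss instances the unique constraint above remains the only airtight guarantee:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class DuplicateCheck {

    public void persistIfAbsent(EntityManager em, String name) {
        List<MyEntity> existing = em
                .createQuery("SELECT e FROM MyEntity e WHERE e.name = :name", MyEntity.class)
                .setParameter("name", name)
                .setLockMode(LockModeType.PESSIMISTIC_WRITE) // SELECT ... FOR UPDATE on most databases
                .getResultList();
        if (existing.isEmpty()) {
            em.persist(new MyEntity(name));
        }
    }
}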
We are getting a TX-row lock contention error while trying to insert data.
It happens while running a job which processes an XML file with almost 10,000 records, inserting the data into a table.
We have a unique key constraint on one of the columns in the table, and the request contains duplicate values. This is causing the locks, and thereby the job is taking more time.
We are using Hibernate and Spring. The DAO method we use is HibernateTemplate's save, annotated with Spring Transaction Manager's @Transactional.
Any suggestions please?
It's not clear whether you're getting locking problems or errors.
"TX-row lock contention" is an event indicating that two sessions are trying to insert the same value into a primary or unique constraint column set -- an error is not raised until the first one commits, then the second one gets the error. So you definitely have multiple sessions inserting rows. If you just had one session then you'd receive the error immediately, with no "TX-row lock contention" event being raised.
Suggestions:
Insert into a temporary table without the constraint, then load into the real table using logic that eliminates the duplicates.
Eliminate the duplicates as part of the read of the XML (see the sketch below).
Use Oracle's error logging syntax -- an example is here: http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm#SQLRF55004
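As a sketch of the second suggestion (the record type, getter and DAO are placeholders), deduplicating on the unique column while the XML is read means the constraint is never hit:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

Set<String> seenKeys = new HashSet<>();
List<Record> toInsert = new ArrayList<>();
for (Record record : parsedXmlRecords) {
    // Set.add returns false for a value we have already seen, so duplicates are skipped
    if (seenKeys.add(record.getUniqueColumnValue())) {
        toInsert.add(record);
    }
}
recordDao.saveAll(toInsert);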
I wrote the method below that is supposed to delete a member record from the database, but when I use it in my servlet it returns an error.
MemberDao Class
public static void deleteMember(Member member) {
    Session hibernateSession = HibernateUtil.getSessionFactory().getCurrentSession();
    Transaction tx = hibernateSession.beginTransaction();
    hibernateSession.delete(member);
    tx.commit();
}
Controller Part
if (delete != null) {
    HttpSession httpSession = request.getSession();
    Member member = (Member) httpSession.getAttribute("member");
    MemberDao.deleteMember(member);
    nextPage = "ledenlijst.jsp";
}
HTTP Status 500
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
Sometimes it even throws this error when I try to execute the page multiple times.
org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
Does anybody know what exactly is causing these errors?
The error can be caused by several things. I'm not taking credit for this; I found it here.

- Flushing the data before committing the object may clear all objects pending for persist.
- The object has an auto-generated primary key and you are forcing an assigned key.
- You are cleaning the object before committing it to the database.
- Zero or incorrect ID: if you set the ID to zero or something else, Hibernate will try to update instead of insert.
- The object is stale: Hibernate caches objects from the session. If the object was modified and Hibernate doesn't know about it, it will throw this exception (note the StaleStateException).
Also look at this answer by beny23 which gives a few further hints to find the problem.
In your Hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem (a sketch of these settings follows this list).
Set the log levels for Spring and Hibernate to DEBUG, again this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
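For the first hint, a minimal sketch of those settings applied programmatically on a plain Hibernate Configuration (the same property keys work in persistence.xml or in Spring's Hibernate properties):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateDebugConfig {

    public static SessionFactory buildDebugSessionFactory() {
        Configuration cfg = new Configuration().configure(); // loads hibernate.cfg.xml
        cfg.setProperty("hibernate.show_sql", "true");       // print every statement Hibernate executes
        cfg.setProperty("hibernate.format_sql", "true");     // pretty-print for readability in the log
        return cfg.buildSessionFactory();
    }
}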
The exception
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
tends to be thrown when Hibernate notices that the entity it wants to flush to the database isn't exactly as it was at the beginning of the transaction.
I described in more detail two different use cases that happened to me here.
In my case this exception was caused by a wrong entity mapping: there was no cascade on the relation, and the referenced child entity wasn't saved before it was referenced from the parent. Changing the mapping to
@OneToMany(cascade = CascadeType.ALL)
fixed the issue.
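A sketch of the kind of mapping this refers to (entity names are placeholders; the real relation may of course be bidirectional). With the cascade in place, persisting the parent also persists any new children, so Hibernate never flushes a reference to an unsaved child:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.OneToMany;

@Entity
public class Parent {

    @Id
    @GeneratedValue
    private Long id;

    // without the cascade, adding a new (unsaved) Child here and saving only the
    // Parent leads to the flush problem described in this answer
    @OneToMany(cascade = CascadeType.ALL)
    @JoinColumn(name = "parent_id")
    private List<Child> children = new ArrayList<>();
}

@Entity
class Child {

    @Id
    @GeneratedValue
    private Long id;
}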
Surely the best way to find the cause of this exception is to set show_sql and the DEBUG level for logs - it will stop right at the SQL that caused the problem.
I was facing the same issue.
The code was working in the testing environment, but it was not working in the staging environment.
org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 3; expected: 1
The problem was that in the testing DB the table had a single entry for each primary key, but in the staging DB there were multiple entries for the same primary key. (The staging DB table did not have any primary key constraint, which is why multiple entries existed.)
So every update operation failed: it tries to update a single record and expects an update count of 1, but since there were 3 records in the table for the same primary key, the actual update count was 3. Since the expected and actual update counts did not match, it threw the exception and rolled back.
After I removed all the records with duplicate primary keys and added the primary key constraint, it worked fine.
This is the solution for my case; maybe it will help you!
It was actually a conversion problem between a database field type (timestamp on PostgreSQL) and its equivalent property type (Calendar) in the Hibernate XML mapping file.
When Hibernate issued the update request, it didn't match the row, because the query used a badly converted Calendar value.
So I simply replaced the property type "Calendar" with "Date" in the Hibernate XML file, and the problem was fixed.
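The fix above was made in the hbm.xml mapping; for annotation-based mappings the equivalent change would look roughly like this (field and column names are placeholders):

import java.util.Date;
import javax.persistence.Column;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

// map the PostgreSQL timestamp column to java.util.Date instead of Calendar
@Column(name = "created_at")
@Temporal(TemporalType.TIMESTAMP)
private Date createdAt;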
I recently experienced this: I used the update method and it was throwing the exception because there was no existing record. I changed the method to saveOrUpdate and it worked.
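In code, the change was essentially this (the session and entity variables are placeholders):

Session session = sessionFactory.getCurrentSession();
// session.update(entity);      // fails with the row-count error when no row exists yet
session.saveOrUpdate(entity);   // inserts when the row is new, updates otherwise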
I experienced this same issue with Hibernate/JPA 2.1 when using memcached as the second-level cache. You would get the above exception along with a StaleStateException. The resolution was different from what has been noted previously.
I noticed that if you have an operation that interleaves deletes and selects (finds) within the same table and transaction, Hibernate can become overwhelmed and report that stale state exception. It would only occur for us in production, as multiple identical operations on different entities would hit the same table. You would see the system time out and throw exceptions.
The solution is simply to be more efficient. Rather than interleaving in a loop, work out which items need to be read and read them, preferably in one operation. Then perform the deletes in a separate operation. Again, all in the same transaction, but don't pepper Hibernate with read/delete/read/delete operations.
This is much faster and reduces the housekeeping load on Hibernate considerably. The problem went away. It occurs when you are using a second-level cache and won't occur otherwise, since without one the load falls on the database. That's another issue.
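A rough sketch of that restructuring (the entity and query are placeholders; everything still runs in one transaction):

import java.util.List;
import org.hibernate.Session;

public class CleanupStep {

    @SuppressWarnings("unchecked")
    public void deleteBatch(Session session, Long batchId) {
        // one read pass: resolve everything that has to go
        List<Item> toDelete = session
                .createQuery("FROM Item i WHERE i.batchId = :batchId")
                .setParameter("batchId", batchId)
                .list();

        // one delete pass, with no finds interleaved in between
        for (Item item : toDelete) {
            session.delete(item);
        }
    }
}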
I have had this problem.
I checked my code and there weren't any problems, but when I checked my data I found out that I had two entities with the same ID!
So the flush() could not work, because it processes batch updates one by one and it found 2 rows. Therefore it didn't update, and threw the exception. That was my problem; I don't know whether it applies to your case or not!