I'm running two instances of SymmetricDS with MySQL.
I have to start and stop synchronization; for that I use:
update sym_channel set enabled=0/1;
For some reason when they synchronize (enabled=1), I get the following error:
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails (`test_db`.`defectstdreference`, CONSTRAINT `Relationship72` FOREIGN KEY (`improve_notice_doc_id`, `defect_id`, `client_id`) REFERENCES `improvementnoticedefect` (`doc_id`, `defect_id`, `client)
Yet, after some time the synchronization finishes successfully, so all this exception does is significantly slow down the process.
Do you have any idea what may have caused this?
Have you created your own channels or are you using default?
If you created your own, they can synchronize independently of each other. As a result, if you have a foreign key between two tables and the parent table uses channelA while the child table uses channelB, it's possible that changes in channelB could synchronize before channelA, thus causing a foreign key error. At times channelB may process after channelA, which would explain the intermittent behavior. SymmetricDS will retry any batches in error, so eventually it gets them in order. To avoid these errors altogether, make sure that if you're using custom channels, all related tables participate in the same channel.
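For illustration, here's a rough JDBC sketch of moving both tables onto one shared channel (the channel name is hypothetical and must already exist in sym_channel; sym_trigger and sym_channel are real SymmetricDS configuration tables, everything else here is an assumption):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SameChannelConfig {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; point this at the registration database.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test_db", "sym_user", "sym_pass");
             Statement st = con.createStatement()) {

            // Put the parent and child tables on the same channel so their batches
            // are extracted and loaded together, in capture order.
            st.executeUpdate(
                "update sym_trigger set channel_id = 'defect_channel', last_update_time = now() "
              + "where source_table_name in ('improvementnoticedefect', 'defectstdreference')");
            // Bumping last_update_time lets the sync-triggers job pick up the change;
            // alternatively restart the nodes or run that job manually.
        }
    }
}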
My application is receiving JMSTextMessages from IBM MQ in bulk. For each message I am calling a REST API to process these messages. Somewhere while the API is processing these messages I am getting the following exception logs, intermittently but very frequently:
java.lang.Exception: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.LockAcquisitionException: could not execute statement
2018-04-03 17:54:10.614 ERROR 5259 --- [.0-8083-exec-59] o.h.engine.jdbc.spi.SqlExceptionHelper : ORA-00060: deadlock detected while waiting for resource
It neither prints the table nor the query it is trying to execute. To resolve this issue I tried pushing all the database-processing code in my REST API service inside a synchronized block, but the issue persists. Because of it, every second message fails to get processed.
Based on a lot of material available online, it appears that each received message triggers a new thread (at the REST API service method end), causing some deadlock-like behavior in my app. Sadly, I am unable to figure out which piece of code exactly is causing this, and we have a single thread when it comes to service application startup.
So now I have decided to find out whether I can introduce a delay before I start processing the next message at the listener end. Again, I have read a lot, and everywhere I see XML property changes to handle this, but nothing comes in handy if I want to do it via Spring annotations. My app does not have any XML configs.
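For what it's worth, a minimal annotation-only sketch (the factory bean name and the injected ConnectionFactory are assumptions, not from the question) would pin the listener container to a single consumer, so messages are processed strictly one at a time instead of relying on an artificial delay:

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
@EnableJms
public class JmsConfig {

    @Bean
    public DefaultJmsListenerContainerFactory singleThreadedFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // "1-1" means exactly one concurrent consumer, so messages are handled serially.
        factory.setConcurrency("1-1");
        return factory;
    }
}

The listener would then reference it with @JmsListener(destination = "queue_name", containerFactory = "singleThreadedFactory").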
Here is my pseudo code:
@JmsListener(destination = "queue_name")
public void receiveMessage(Object message) {
    myRestService.processMessage(message);
}
ServiceImpl code
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void processMessage(Object message) {
    Long id = getIdFromMessage(message);
    X x = readFromDB(id);
    updateDataInX(x, message);
    savex(x);
}
I would also like to highlight that I have a parent-child relationship in the database, where the PRIMARY KEY of the parent table is also the PRIMARY KEY (and an FK) of the child table, and it is indexed as well. While I am updating values in the above method (updateDataInX), some columns are updated in the child table row and some in the parent table row (mapped to the child row). The instance x is of the CHILD class.
Apart from introducing a delay, how can I fix this? The producer is a separate application, and within a second it sends multiple messages, which the consumer is clearly failing to process. This is entirely a back-end update, so I don't mind introducing a delay as a fail-safe mechanism, but I did not find any proper solution with Spring annotations. Also, can you explain what exactly happens with threads at the receiver side and the REST API side? I have been thoroughly confused reading about this for the last 3 weeks.
My code also involved adding AUDIT logging data in one of the other CHILD tables, which was referring to a NON-PRIMARY key of the same PARENT table (Hierarchy of X).
While inserting a new row into the CHILD table, Oracle was taking a full table lock, which caused all subsequent inserts to fail with a deadlock.
The reason for the full table lock on the CHILD table was its foreign key column REFERENCE_NO, which refers to the PARENT table.
When SQL statements from multiple sessions involve parent/child tables, a deadlock may occur with Oracle's default locking mechanism if the foreign key column in the child table is not indexed. Oracle also takes a full table lock on the child table when the primary key value is changed in the parent table.
In our case PARENT.REFERENCE_NO is a non-primary key which is referenced as a foreign key in the CHILD table. Every time we do a status update on the PARENT table as in the pseudo code above, Hibernate fires an update on REFERENCE_NO, which locks the entire CHILD table and makes it unavailable for the AUDIT logging insertions.
So the key is: Oracle recommends indexing such foreign key columns, so that it does not take a full table lock.
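A minimal sketch of that fix, assuming a JPA-annotated child entity (the class name and the id column are placeholders; REFERENCE_NO is the foreign key column from above):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.Table;

// Declaring an index on the foreign key column lets Oracle lock only the relevant
// index entries instead of escalating to a full table lock on CHILD.
@Entity
@Table(name = "CHILD",
       indexes = @Index(name = "IDX_CHILD_REFERENCE_NO", columnList = "REFERENCE_NO"))
public class Child {

    @Id
    @Column(name = "CHILD_ID")
    private Long id;

    @Column(name = "REFERENCE_NO")
    private String referenceNo;

    // getters and setters omitted
}

If the schema is not generated by Hibernate, the equivalent is simply creating the index directly in the database on CHILD(REFERENCE_NO).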
I am building a web application which inserts records into a database. I validate records before inserting them. If, between the time of my validation check and the insertion, another application puts the database into a state where a unique key constraint violation occurs when I attempt to insert the records I have just validated, how can I avoid this kind of problem? I am using an Oracle database and my development language is Java.
Basically, you can't unless you change your constraints. You have several options:
You keep the unique constraint and deal with the database exception in your Java code (see the sketch after this list). Race conditions can happen and you have to deal with them.
You lock the entire table as soon as someone enters "insertion mode" in your app, effectively limiting inserts to one at a time. This would mean blocking other users in your application from entering edit mode until the first one is done. Probably not a good idea, but can work when you have very few users.
Remove the constraint. Now this might seem difficult, but think about it: do you really need globally unique entries in some fields? Or can you work around that by including an additional column in your key? This could be an artificial counter, effectively making each row unique again, or maybe just the UserID, so that the unique constraint is only checked within each user.
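For the first option, a rough sketch of catching the violation around the insert (the table, column, and method are hypothetical; ORA-00001 is Oracle's error code for a unique constraint violation):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class MemberInserter {

    // Returns true if the row was inserted, false if another session won the race.
    public boolean insertIgnoringDuplicates(Connection con, String email) throws SQLException {
        String sql = "insert into members (email) values (?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, email);
            ps.executeUpdate();
            return true;
        } catch (SQLException e) {
            if (e.getErrorCode() == 1) {
                // ORA-00001: someone inserted the same key between our validation
                // and our insert. Treat it as "already exists" and carry on.
                return false;
            }
            throw e;
        }
    }
}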
I am experiencing an issue while trying to use write-behind on caches connected to tables which have foreign key constraints between them. Seemingly the write-behind mechanism is not executing the updates/inserts in a deterministic order, but rather is trying to push all the collected changes per each cache consecutively in some unknown order. But as we have foreign keys in the tables, the order of the operation matters, so parent objects should be inserted/updated first, and children only after that (otherwise foreign key violations are thrown from the DB).
It seems that the current implementation tries to work around this problem on a trial-and-error basis (org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore:888): it periodically retries flushing the changes for the caches where a constraint violation occurred. So the "child" cache will keep retrying its flush until the "parent" cache gets flushed first. This ultimately gets the data into the DB, but it also means a lot of unsuccessful attempts for complex hierarchical tables until the correct order is "found". This results in poor performance and unnecessary hammering of the DB.
Do you have any suggestions on how could I circumvent this issue?
(Initially I was trying write-through, but it resulted in VERY poor performance, because CacheAbstractJdbcStore seemingly opens a new prepared statement for each insert/update operation.)
With write-behind the order of store updates is undefined because each node writes independently and asynchronously. If you have foreign key constraints, you should use write-through.
As for write-through performance, CacheAbstractJdbcStore operates with a configurable DataSource, so it depends on its implementation whether a new connection is opened each time or not. If you use a pooled version, this will not happen.
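As a rough sketch of that wiring (HikariCP as the pool, Long/Object as placeholder key and value types, and the store factory details are assumptions to adapt to your own mappings):

import com.zaxxer.hikari.HikariDataSource;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteThroughCacheConfig {

    public static CacheConfiguration<Long, Object> parentCacheConfiguration() {
        // Pooled DataSource so the JDBC store reuses connections instead of
        // opening a new one for every insert/update.
        HikariDataSource pool = new HikariDataSource();
        pool.setJdbcUrl("jdbc:mysql://localhost:3306/test_db");
        pool.setUsername("app");
        pool.setPassword("secret");

        CacheJdbcPojoStoreFactory<Long, Object> storeFactory = new CacheJdbcPojoStoreFactory<>();
        storeFactory.setDataSourceFactory(() -> pool);
        // In a real deployment you would also configure the dialect and the JdbcType
        // mappings here, or reference a DataSource bean instead of a local pool.

        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("parentCache");
        cfg.setCacheStoreFactory(storeFactory);
        // Synchronous write-through preserves the parent-before-child ordering.
        cfg.setWriteThrough(true);
        cfg.setWriteBehindEnabled(false);
        return cfg;
    }
}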
We are getting a TX-row lock contention error while trying to insert data.
It happens while running a job which processes an XML with almost 10000 records, inserting the data into a table.
We have a unique key constraint on one of the columns in the table, and in the request we are getting duplicate values. This is causing the locks, and thereby the job is taking more time.
We are using Hibernate and Spring. The DAO method we use is the Hibernate template's 'save', annotated with Spring Transaction Manager's @Transactional.
Any suggestions please?
It's not clear whether you're getting locking problems or errors.
"TX-row lock contention" is an event indicating that two sessions are trying to insert the same value into a primary or unique constraint column set -- an error is not raised until the first one commits, then the second one gets the error. So you definitely have multiple sessions inserting rows. If you just had one session then you'd receive the error immediately, with no "TX-row lock contention" event being raised.
Suggestions:
Insert into a temporary table without the constraint, then load to the real table using logic that eliminates the duplicates
Eliminate the duplicates as part of the read of the XML.
Use Oracle's error logging syntax -- an example is in the documentation: http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm#SQLRF55004
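In case that link moves, here is a rough JDBC sketch of the approach (table names are hypothetical; DBMS_ERRLOG and the LOG ERRORS clause are standard Oracle features):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ErrorLoggingLoad {

    public void loadWithErrorLogging(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            // One-time setup: creates ERR$_TARGET_TABLE to capture rejected rows.
            st.execute("BEGIN DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET_TABLE'); END;");

            // Duplicate keys violate the unique constraint, are written to the error
            // table tagged 'xml-load', and the rest of the insert continues instead
            // of the whole statement (and job) failing.
            st.executeUpdate(
                "INSERT INTO target_table (id, payload) "
              + "SELECT id, payload FROM staging_table "
              + "LOG ERRORS INTO err$_target_table ('xml-load') REJECT LIMIT UNLIMITED");
        }
    }
}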
I wrote this method below that is supposed to delete a member record from the database. But when I use it in my servlet it returns an error.
MemberDao Class
public static void deleteMember(Member member) {
    Session hibernateSession = HibernateUtil.getSessionFactory().getCurrentSession();
    Transaction tx = hibernateSession.beginTransaction();
    hibernateSession.delete(member);
    tx.commit();
}
Controller Part
if (delete != null) {
    HttpSession httpSession = request.getSession();
    Member member = (Member) httpSession.getAttribute("member");
    MemberDao.deleteMember(member);
    nextPage = "ledenlijst.jsp";
}
HTTP Status 500
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
Sometimes it even throws this error when I try to execute the page multiple times.
org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
Does anybody know what exactly is causing these errors?
The error can be caused by several things. I'm not taking credit for it; I found it here.
Flushing the data before committing the object may clear all objects pending persist.
If the object has an auto-generated primary key and you are forcing an assigned key.
If you are cleaning the object before committing it to the database.
Zero or incorrect ID: if you set the ID to zero or something else, Hibernate will try to update instead of insert.
Object is stale: Hibernate caches objects from the session. If the object was modified and Hibernate doesn't know about it, it will throw this exception (note the StaleStateException).
Also look at this answer by beny23 which gives a few further hints to find the problem.
In your hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem.
Set the log levels for Spring and Hibernate to DEBUG, again this will give you a better idea as to which line causes the problem.
Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
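If you configure Hibernate programmatically rather than only via hibernate.cfg.xml, the SQL-logging part of the above boils down to something like this sketch (the property keys are standard Hibernate settings; the DEBUG level for org.hibernate.SQL is set in your logging framework's own config):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class DebugSessionFactory {

    public static SessionFactory build() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        // Print every SQL statement Hibernate executes, formatted, so the statement
        // behind the "Batch update returned unexpected row count" error is visible.
        cfg.setProperty("hibernate.show_sql", "true");
        cfg.setProperty("hibernate.format_sql", "true");
        cfg.setProperty("hibernate.use_sql_comments", "true");
        return cfg.buildSessionFactory();
    }
}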
The exception
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
is typically thrown when Hibernate notices that the entity it wants to flush to the database isn't exactly as it was at the beginning of the transaction.
I described in more detail two different use cases that happened to me here.
In my case this exception was caused by a wrong entity mapping. There was no cascade for the relation, and the referenced child entity wasn't saved before it was referenced from the parent. Changing it to
@OneToMany(cascade = CascadeType.ALL)
fixed the issue.
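For context, the corrected mapping looks roughly like this (Parent and Child are placeholder entities, not the question's classes):

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
class Parent {

    @Id
    @GeneratedValue
    private Long id;

    // With CascadeType.ALL, saving the parent also saves any new children,
    // so the child rows exist before anything references them.
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "parent")
    private List<Child> children = new ArrayList<>();
}

@Entity
class Child {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    private Parent parent;
}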
Surely the best way to find the cause of this exception is to set show_sql and the DEBUG level for logs - it will stop right at the SQL that caused the problem.
I was facing the same issue.
The code was working in the testing environment, but it was not working in the staging environment.
org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 3; expected: 1
The problem was that in the testing DB the table had a single entry per primary key, but in the staging DB there were multiple entries for the same primary key. (The staging DB table also had no primary key constraint, which is how the duplicates got in.)
So every update operation failed: it tries to update a single record and expects an update count of 1, but since there were 3 records for the same key, the actual update count was 3. Since the expected and actual update counts didn't match, it threw the exception and rolled back.
After I removed all the records with duplicate primary keys and added the primary key constraint, it worked fine.
This is the solution for my case; maybe it will help you!
Actually it was a conversion problem between a database field type (timestamp on PostgreSQL) and its equivalent property type (Calendar) in the Hibernate xml file.
When Hibernate issued the update, it didn't match the row because the query used a badly converted Calendar value.
So I simply replaced the property type "Calendar" with "Date" in the Hibernate xml file and the problem was fixed.
I recently experienced this, and what happened was that I used the update method, which was throwing the exception because there was no existing record. I changed the method to saveOrUpdate and it worked.
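In code it was roughly this change, shown here in the style of the question's MemberDao (the saveMember method itself is hypothetical):

import org.hibernate.Session;
import org.hibernate.Transaction;

public class MemberDao {

    public static void saveMember(Member member) {
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        Transaction tx = session.beginTransaction();
        // update() throws StaleStateException if no row with this id exists yet;
        // saveOrUpdate() inserts in that case and updates otherwise.
        session.saveOrUpdate(member);
        tx.commit();
    }
}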
I experienced this same issue with Hibernate/JPA 2.1 when using memcached as the second-level cache. You would get the above exception along with a StaleStateException. The resolution was different from what has been noted previously.
I noticed that if you have an operation that interleaves deletes and selects (finds) on the same table within the same transaction, Hibernate can become overwhelmed and report that stale state exception. It only occurred for us in production, where multiple identical operations on different entities would hit the same table. You would see the system time out and throw exceptions.
The solution is simply to be more efficient. Rather than interleaving in a loop, work out which items need to be read and read them, preferably in one operation, then perform the delete in a separate operation. All still in the same transaction, but don't pepper Hibernate with read/delete/read/delete operations.
This is much faster and reduces the housekeeping load on Hibernate considerably, and the problem went away. It only shows up when you are using a second-level cache; without one the load falls on the database instead, which is another issue.
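A rough sketch of that reshuffling (the entity, field, and service names are invented for illustration):

import java.util.List;
import javax.persistence.EntityManager;

public class OrderCleanupService {

    private final EntityManager em;

    public OrderCleanupService(EntityManager em) {
        this.em = em;
    }

    // Instead of find/delete/find/delete in a loop, resolve everything to delete
    // with one query, then issue one bulk delete. Same transaction, but far fewer
    // round trips through Hibernate and the second-level cache.
    public void purgeExpired(List<Long> candidateIds) {
        List<Long> toDelete = em.createQuery(
                "select o.id from PurchaseOrder o where o.id in :ids and o.expired = true",
                Long.class)
            .setParameter("ids", candidateIds)
            .getResultList();

        if (!toDelete.isEmpty()) {
            em.createQuery("delete from PurchaseOrder o where o.id in :ids")
              .setParameter("ids", toDelete)
              .executeUpdate();
        }
    }
}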
I have had this problem.
I checked my code and there weren't any problems, but when I checked my data I found out I had two entities with the same id!
So the flush() could not work, because it processes batch updates one by one and it found 2 rows. Therefore it didn't update and threw the exception. That was my problem; I don't know whether it is the same for you.