What does rollback actually do? - java

I am trying to find out what rollback() actually does.
I have a couple of scenarios:
Rolling back after a successful commit.
Connection conn = getConnection();
try {
    executesSomeQuery(conn);
    conn.commit();
} catch (Exception e) {
    // assume no exception
} finally {
    conn.rollback();
}
Rolling back after an unsuccessful commit. If three queries q1, q2, q3 are executed in order and committed with the same commit(), what happens to q1 if q2 fails? How does conn.rollback() help? Will conn.commit() roll back on its own, without the need for rollback()?
Connection conn = getConnection();
try {
    executesSomeQuery(conn); // runs three queries in order: q1, q2, q3; q2 fails and causes an error
    conn.commit();
} catch (Exception e) {
    // assume an exception is thrown because the commit failed due to q2
    conn.rollback();
}

As the name suggests, rollback() rolls back the current transaction, so none of its changes are applied to the database.
All the statements in your try block are executed sequentially against the DB. If any of them throws an exception, the code does not commit the changes; instead, it reverts them all and leaves the DB in its previous, unaffected state.
This is how you achieve atomicity for multiple statements within a single transaction.

Commit and rollback inform the database to commit or rollback (i.e. undo) the current transaction. At the Java level, they don't actually do much at all.
How this is done on the database side depends on the actual database implementation.
In your case, if q2 fails, whatever effects q1 had on the database will be undone.
And, naturally, q3 won't be executed at all.

The idea of the rollback is that, whenever you are inserting/updating data in the database, if there are any errors, the rollback will save you from writing wrong or incomplete data into the database. Looking at your code, snippet 1 is wrong: you should roll back in the catch block. If an error is caught, the rollback must happen there; finally is last in execution, so by that time the data has already been committed and there is nothing to roll back.
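For illustration, a minimal sketch of that corrected pattern, assuming auto-commit has been disabled and reusing the hypothetical getConnection()/executesSomeQuery() helpers from the question:
Connection conn = getConnection();
conn.setAutoCommit(false);          // q1, q2, q3 become part of one transaction
try {
    executesSomeQuery(conn);        // q1, q2, q3 run here
    conn.commit();                  // all three become visible together
} catch (SQLException e) {
    conn.rollback();                // undoes q1 (and anything else) if q2 fails
    throw e;
} finally {
    conn.close();                   // just release the connection here, no rollback
}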

Related

Auto attached records get detached after commit

I want to fetch a list of Relation records using .fetchInto(RELATION), then iterate over the list and commit each iteration to the database. This doesn't seem to be working for me, because I get Caused by: java.sql.SQLException: Connection is closed while updating the record. I don't have this problem with regular jOOQ queries, and it doesn't seem like any connections are being closed.
When I use contact.attach(jooq().configuration()) it seems to be working again. How can I prevent it from detaching?
I start and commit a transaction through JPA.em().getTransaction().*.
org.jooq.exception.DataAccessException: SQL [update `Relation` set `Relation`.`organizationName` = ? where `Relation`.`id` = ?]; Connection is closed
at org.jooq_3.15.1.MYSQL.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2979)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:643)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349)
at org.jooq.impl.UpdatableRecordImpl.storeMergeOrUpdate0(UpdatableRecordImpl.java:331)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:228)
at org.jooq.impl.UpdatableRecordImpl.lambda$storeUpdate$1(UpdatableRecordImpl.java:220)
at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:143)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:219)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:156)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:151)
at worker.taskprocessors.BulkContactEditWorker.execute(BulkContactEditWorker.java:144)
Example:
var contacts = jooq()
    .selectFrom(
        RELATION
            .join(BULK_CONTACT_EDIT_CONTACTS)
            .on(
                BULK_CONTACT_EDIT_CONTACTS
                    .CONTACT_ID
                    .eq(RELATION.ID)
                    .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
                    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

for (RelationRecord contact : contacts) {
    contact.attach(jooq().configuration()); // I have to add this line to make it work.
    contact.setOrganizationName("OrganizationName");
    contact.update();

    JPA.em().getTransaction().commit();
    if (!JPA.em().getTransaction().isActive()) {
        JPA.em().getTransaction().begin();
    }
}
What's the problem
You're fetching your jOOQ RelationRecord values outside of a JPA transactional context, so that fetching runs in its own transaction (this is independent of jOOQ). jOOQ will always try to acquire() and release() a connection for every query, and when that happens outside of a transactional context, then the connection will be effectively closed (e.g. returned to the connection pool). You should switch the order of starting a transaction and running the jOOQ query:
// start transaction
// run jOOQ query
// iterate jOOQ results
// run updates
// commit transaction
If you're using declarative transactions (e.g. Spring's #Transactional annotation), then you're less likely to run into this problem as your initial jOOQ query will more likely also be in the same transaction already.
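Applied to the snippet from the question, that reordering would look roughly like this (just a sketch, reusing the question's own jooq() and JPA.em() helpers and committing once at the end):
JPA.em().getTransaction().begin();        // 1. start the transaction first

var contacts = jooq()                     // 2. the fetch now runs inside that transaction
    .selectFrom(RELATION
        .join(BULK_CONTACT_EDIT_CONTACTS)
        .on(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID.eq(RELATION.ID)
            .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
            .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

for (RelationRecord contact : contacts) { // 3. records can be updated without re-attaching
    contact.setOrganizationName("OrganizationName");
    contact.update();
}

JPA.em().getTransaction().commit();       // 4. commit once at the end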
Why did attaching work
When you explicitly attach the RelationRecord to the configuration, you take a record that was attached to the previously closed connection and re-attach it to the new configuration with the transactional connection. I'm assuming that your jooq() method produces a new DSLContext instance, which wraps the currently active DataSource or Connection.
Switch to bulk updates
However, if your example is all there is to your actual logic (i.e. you haven't simplified it for this Stack Overflow question), then why not just run a bulk update? It will be simpler and much faster.
jooq()
    .update(RELATION)
    .set(RELATION.ORGANIZATION_NAME, "OrganizationName")
    .where(RELATION.ID.in(
        select(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID)
        .from(BULK_CONTACT_EDIT_CONTACTS)
        .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
        .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    ))
    // Since you're using MySQL, you have native UPDATE .. LIMIT support
    .limit(batchSize)
    .execute();
I'm assuming your actual example is a bit more complex, because you need to set the PROCESSED flag to true somewhere, too, but it's always good to keep this option in mind.
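If setting the PROCESSED flag is all that is missing, a second bulk statement in the same transaction could cover it; a sketch based only on the table and columns named in the question:
jooq()
    .update(BULK_CONTACT_EDIT_CONTACTS)
    .set(BULK_CONTACT_EDIT_CONTACTS.PROCESSED, true)
    .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    .execute();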

How to fix 'SQLRecoverableException: Closed Connection' in Java

We are working on an ecommerce site built with the Hybris framework, and currently we have an issue with database connections (I suppose) and no idea how to solve it. It happens only in the production environment and only on the servers that are used by the ESB (2 servers out of a total of 40).
Basically, sometimes (1-3 times a day), we discover sessions waiting on an idle session (SQL*Net message from client). We can only kill the holder manually in order to free these sessions.
All the servers share the same application code and the main difference between ESB and Frontend servers is in the controllers that are called and in the requests count.
ESB Server: 10 requests per minute
Frontend Server: 300 requests per minute
In the application log I found a lot of Closed Connection errors on these 2 servers and I think that this is related to our problem but actually I don't know why.
In access.log I have this request:
[26/Mar/2019:09:04:39 +0100] "GET /blockorder?orderCode=XXXX&access_token=XXXX HTTP/1.1" 400 122 "-" "AHC/1.0"
and in the console.log I have this:
hybrisHTTP8 2019-03-26 09:04:39,184 ERROR [[10.125.31.2] ] () [de.hybris.platform.jdbcwrapper.ConnectionImpl] error resetting AutoCommit
java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.PhysicalConnection.setAutoCommit(PhysicalConnection.java:3763)
at de.hybris.platform.jdbcwrapper.ConnectionImpl.doSetAutoCommit(ConnectionImpl.java:431)
at de.hybris.platform.jdbcwrapper.ConnectionImpl.restoreAutoCommit(ConnectionImpl.java:185)
at de.hybris.platform.jdbcwrapper.ConnectionImpl.unsetTxBound(ConnectionImpl.java:175)
at de.hybris.platform.tx.Transaction.unsetTxBoundConnection(Transaction.java:920)
at de.hybris.platform.tx.Transaction.clearTxBoundConnectionAndNotify(Transaction.java:897)
at de.hybris.platform.tx.Transaction.clearTxBoundConnectionAndNotifyRollback(Transaction.java:887)
at de.hybris.platform.tx.Transaction.rollbackOuter(Transaction.java:1084)
at de.hybris.platform.tx.Transaction.rollback(Transaction.java:1028)
at de.hybris.platform.tx.Transaction.commit(Transaction.java:690)
at de.hybris.platform.tx.Transaction.finishExecute(Transaction.java:1218)
at de.hybris.platform.tx.Transaction.execute(Transaction.java:1205)
at de.hybris.platform.tx.Transaction.execute(Transaction.java:1160)
at de.hybris.platform.jalo.Item.setAllAttributes(Item.java:2082)
at de.hybris.platform.jalo.Item.setAllAttributes(Item.java:2057)
at de.hybris.platform.servicelayer.internal.converter.impl.ItemModelConverter.storeAttributes(ItemModelConverter.java:1503)
at de.hybris.platform.servicelayer.internal.converter.impl.ItemModelConverter.save(ItemModelConverter.java:730)
at de.hybris.platform.servicelayer.internal.model.impl.wrapper.ModelWrapper.save(ModelWrapper.java:336)
at de.hybris.platform.servicelayer.internal.model.impl.ResolvingModelPersister.saveOthers(ResolvingModelPersister.java:64)
at de.hybris.platform.servicelayer.internal.model.impl.ResolvingModelPersister.persist(ResolvingModelPersister.java:49)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.saveViaJalo(DefaultModelService.java:1059)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.doJaloPersistence(DefaultModelService.java:648)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.persistWrappers(DefaultModelService.java:1002)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.performPersistenceOperations(DefaultModelService.java:626)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.saveAllInternal(DefaultModelService.java:620)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.saveAll(DefaultModelService.java:600)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.save(DefaultModelService.java:548)
at com.test.fulfilment.process.impl.DefaultOrderProcessService.requestForcedOrderCancellation(DefaultOrderProcessService.java:131)
at com.test.application.order.facades.impl.DefaultOrderFacade.forcedOrderCancel(DefaultOrderFacade.java:62)
at com.test.application.controllers.OrderController.blockOrder(OrderController.java:520)
Our pool config is the following:
{
"maxIdle": 90,
"minIdle": 2,
"maxActive": 90,
"maxWait": 10000,
"whenExhaustedAction": 1,
"testOnBorrow": true,
"testOnReturn": true,
"testWhileIdle": true,
"timeBetweenEvictionRunsMillis": 10000,
"numTestsPerEvictionRun": 100,
"minEvictableIdleTimeMillis": 300000,
"softMinEvictableIdleTimeMillis": -1,
"lifo": true
}
Our tomcat config is:
tomcat.generaloptions.JDBC=-Doracle.jdbc.ReadTimeout=60000
tomcat.generaloptions.TIMEOUT=-Dsun.net.client.defaultConnectTimeout\=60000 -Dsun.net.client.defaultReadTimeout\=60000
tomcat.ajp.acceptCount=100
tomcat.ajp.maxThreads=400
tomcat.maxthreads=400
tomcat.minsparethreads=50
tomcat.maxidletime=10000
tomcat.connectiontimeout=120000
tomcat.acceptcount=100
We tried to remove the oracle.jdbc.ReadTimeout but the result was that we started to see Closed Connections on the other servers.
The code that triggers this error is pretty simple (and it works 95% of the time):
@Override
public boolean requestForcedOrderCancellation(final OrderModel order) {
    Transaction.current().begin();
    try {
        modelService.lock(order.getPk());
        modelService.refresh(order);
        order.setForcedCancelled(true);
        modelService.save(order);
        Transaction.current().commit();
        return true;
    } catch (Exception e) {
        LOG.error(e.getMessage(), e);
        Transaction.current().rollback();
        return false;
    }
}
We also tried without explicit locking and the problem is exactly the same.
It seems like the connection is already closed and we cannot roll back (or commit) the transactions that are still waiting in the DB.
I would like to avoid these locks and these closed connection errors.
Your connection pool is probably fixing this for you already. Try to increase the logging to see whether it does.
Background: databases hate long-living connections because they can starve them, so they tend to close a connection after some time. Another culprit is firewalls, which tend to drop idle connections from their tables. Connection pools know how to handle this by testing the connections (all those test* options in your config above).
Sometimes, you need to tell your pool how to test a connection. Check the documentation. For Oracle, a good test is select 1 from dual.
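For reference, a minimal sketch of what such a test boils down to in plain JDBC (conn is a hypothetical java.sql.Connection; your pool normally does this for you via a configured validation query):
// run the Oracle validation query manually
try (Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery("select 1 from dual")) {
    boolean alive = rs.next();
}

// or, since JDBC 4, without any SQL at all:
boolean stillAlive = conn.isValid(5); // timeout in seconds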
I think your real problem is those stuck sessions. Find out what they are waiting for by looking at a Java thread dump, which you can create with the jstack tool that comes with the JDK.
We found that the issue was due to an uncaught exception/error in transactional code.
The server answered with an error and Hybris did not roll back the transaction, which remained open.
The same thread is reused some time later (maybe days later) and the old transaction is still open.
When this corrupted thread is used to lock some rows in the database, even if we commit the transaction in the code, nothing is committed to the database, because internally Hybris keeps a transaction counter to handle nested transactions (e.g. ones begun in called methods). The transaction is committed/rolled back on the DB only when the commit/rollback method is called while the transaction counter is 1.
Request 1:
Transaction.begin()      // Hybris counter = 1
doSomething()            // throws an exception, the request exits, Hybris counter is still 1
try {
    Transaction.commit()
} catch (Exception e) {
    Transaction.rollback();
}
Request 2 on the same thread:
Transaction.begin()      // Hybris counter is now 2
doSomething()            // works OK, Hybris counter is still 2
try {
    Transaction.commit() // Hybris counter -= 1
    // Transaction is not committed to the DB because the Hybris counter is now 1
} catch (Exception e) {
    Transaction.rollback();
}
Request 3 on the same thread:
Transaction.begin()      // Hybris counter is now 2
lockRow()
// Row is locked for the whole transaction (the same one opened in Request 1)
// Everything looks OK
try {
    Transaction.commit() // Hybris counter -= 1
    // Transaction is not committed to the DB because the Hybris counter is now 1
    // Row is still locked
    // Subsequent requests for the same row will wait on the lock forever
} catch (Exception e) {
    Transaction.rollback();
}
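A defensive pattern that avoids leaving that counter dangling (just a sketch, using only the Transaction.current() calls already shown in the question) is to balance every begin(), even when the business logic throws:
Transaction.current().begin();
boolean success = false;
try {
    doSomething();                        // business logic that may throw
    Transaction.current().commit();
    success = true;
} finally {
    if (!success) {
        Transaction.current().rollback(); // always balance the begin(), even on unexpected errors
    }
}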

Locked entities in postgres after retrieval

We have a table in Postgres that we use for our system configurations, from which I want to delete configurations with certain ids. The problem is that just before the deletion, we have some code that fetches some of the configuration entries, and in the next step (the deletion) I can't delete them because the rows are locked. I've checked the pg_locks table, and each configuration retrieval's transaction stays in Postgres for around a minute. It stays in status "idle in transaction" while the deletion step is waiting on it.
This is how we retrieve the configuration entities
Query query = getEntityManager().createQuery("some query");
...
list = query.getResultList();
There's no explicit transaction involved; we don't add any transactional JTA or EJB annotations. I guess one is added automatically, and it's after this query that the rows get locked.
And this is how we try to delete the rows
Query query = getEntityManager().createNamedQuery(namedQuery);
query.executeUpdate();
That's it. Nothing special, yet since some rows are locked, the query fails while waiting for the transaction to finish. Any ideas?
I've tried seemingly everything: setting Hibernate's autocommit mode to true, setting the lock type on the queries to pessimistic read/write, sending query hints with maximum transaction times, executing the queries in separate JTA transactions, executing them in the same transaction. Nothing seems to work; the delete query always 'hangs'.

Hibernate Delayed Write

I am wondering if there is a possibility of Hibernate delaying its writes to the DB. I have Hibernate configured for MySQL. The scenario I need to support is 80% reads and 20% writes, so I do not want to optimize my writes; I would rather have the client wait until the DB has been written to than return a bit earlier. My tests currently run 100 clients in parallel, and the CPU sometimes maxes out. I need this flush method to write to the DB immediately and return only when the data has been written.
On my client side, I send a write request and then a read request, but the read request sometimes returns null. I suspect Hibernate is not writing to the DB immediately.
public final ThreadLocal session = new ThreadLocal();

public Session currentSession() {
    Session s = (Session) session.get();
    // Open a new Session, if this thread has none yet
    if (s == null || !s.isOpen()) {
        s = sessionFactory.openSession();
        // Store it in the ThreadLocal variable
        session.set(s);
    }
    return s;
}

public synchronized void flush(Object dataStore) throws DidNotSaveRequestSomeRandomError {
    Transaction txD;
    Session session;
    session = currentSession();
    txD = session.beginTransaction();
    session.save(dataStore);
    try {
        txD.commit();
    } catch (ConstraintViolationException e) {
        e.printStackTrace();
        throw new DidNotSaveRequestSomeRandomError(dataStore, feedbackManager);
    } catch (TransactionException e) {
        log.debug("txD state isActive" + txD.isActive() + " txD is participating" + txD.isParticipating());
        log.debug(e);
    } finally {
        // session.flush();
        txD = null;
        session.close();
    }
    // mySession.clear();
}
@Siddharth Hibernate does not really delay writing, and your code does not suggest that either. I have faced a similar issue before and suspect you might be hitting the same thing: when there are numerous write requests, many threads share the same instance of your DB session, and even with consecutive commits by Hibernate you don't really see any changes.
You can also catch this by simply looking at your MySQL logs during the transaction and seeing what exactly went wrong!
Thanks for your hint. Took me some time to debug. Mysql logs are amazing.
This is what I run to check the timestamp of my inserts and MySQL writes. MySQL logs all DB operations in a binlog; to read it we need to use the tool called mysqlbinlog, and my.cnf also needs to be configured to enable the binlog, as described here: http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpm7%2Fm7enablelogs.html
I check which is the latest MySQL binlog file and run the following, grepping with one line of context to get the timestamp. Then in Java, I call Calendar.getInstance().getTimeInMillis() to compare with that timestamp.
sudo mysqlbinlog mysql/mysql-bin.000004 | grep "mystring" -1
So I debugged my problem. It was a delayed-write problem, so I implemented a synchronous write instead of an all-async one; in other words, the server call won't return until the DB has been flushed for this object.

How to restart a transaction after a deadlock or timeout in Java?

How to restart a transaction (so that it executes at least once) when we get:
( com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException:Deadlock found when trying to get lock; Try restarting transaction ) OR ( transaction times out ) ?
I'm using MySQL (InnoDB engine) and Java. Please help, and also link any useful resources or code.
Whenever you catch this type of exception in your catch block:
catch (Exception e) {
    if (e instanceof TransactionRollbackException) {
        // retrigger your transaction
    }
    // log your exception or rethrow it, depending on your implementation
}
If you use plain JDBC, you have to do it manually, in a loop (and don't forget to check the pre-conditions every time).
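A minimal sketch of such a loop (conn and doTransaction are hypothetical placeholders; java.sql.SQLTransactionRollbackException is the standard parent of MySQL's MySQLTransactionRollbackException):
int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
        conn.setAutoCommit(false);
        doTransaction(conn);          // your statements; re-check preconditions on every attempt
        conn.commit();
        break;                        // success, stop retrying
    } catch (SQLTransactionRollbackException e) {
        conn.rollback();              // deadlock or lock wait timeout: undo and try again
        if (attempt == maxAttempts) {
            throw e;                  // give up after the last attempt
        }
    }
}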
If you use Spring, "How to restart transactions on deadlock/lock-timeout in Spring?" should help.
