How to restart a transaction after a deadlock or timeout in Java?

How do I restart a transaction (so that it still executes at least once) when I get either:
com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
or a transaction timeout?
I'm using MySQL (InnoDB engine) and Java. Please help, and also link any useful resources or code.

Whenever you catch this type of exception in your catch block (the MySQL exception above extends java.sql.SQLTransactionRollbackException):
catch (Exception e) {
    if (e instanceof SQLTransactionRollbackException) {
        // re-trigger your transaction here
    }
    // log the exception or rethrow it; that's up to your implementation
}

If you use plain JDBC, you have to do it manually, in a loop (and don't forget to re-check the pre-conditions on every attempt); see the sketch below.
If you use Spring, "How to restart transactions on deadlock/lock-timeout in Spring?" should help.
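A minimal sketch of that manual retry loop, assuming plain JDBC; the DataSource, retry limit, table, and doWork body are illustrative, not taken from the question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLTransactionRollbackException;
import javax.sql.DataSource;

public class DeadlockRetry {

    private static final int MAX_RETRIES = 3; // illustrative limit

    // dataSource is assumed to be configured elsewhere (e.g. a connection pool)
    public void runWithRetry(DataSource dataSource) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (Connection con = dataSource.getConnection()) {
                con.setAutoCommit(false);
                try {
                    // re-check your pre-conditions here on every attempt,
                    // because another transaction may have changed the data
                    doWork(con);
                    con.commit();
                    return; // success
                } catch (SQLTransactionRollbackException e) {
                    // deadlock or lock-wait timeout: InnoDB has already rolled
                    // the transaction back, so we can simply start over
                    con.rollback();
                    if (attempt >= MAX_RETRIES) {
                        throw e; // give up after too many attempts
                    }
                }
            }
        }
    }

    private void doWork(Connection con) throws SQLException {
        // hypothetical statement; stands in for your real transaction body
        try (PreparedStatement ps =
                con.prepareStatement("UPDATE account SET balance = balance - 1 WHERE id = ?")) {
            ps.setLong(1, 42L);
            ps.executeUpdate();
        }
    }
}

Bounding the number of attempts keeps the loop from spinning forever under heavy contention.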

Related

logging issue while using jdbc

I'm using JDBC for the first time, reading from a file which contains SQL queries I wrote earlier. Although the queries are properly executed and my program goes on without any apparent issue, after I call statement.executeBatch(); I cannot write anything into my log anymore.
I'm using java.util.logging for my logging, with a FileHandler. To read my ".sql" file, I'm using a BufferedReader and a FileReader. I know I'm not sharing a lot of code to fully understand the context, but that's all I have from memory. I'm closing all the readers after use.
Any ideas what could be the problem?
MyLogger.log(Level.WARNING, "it does write");
statement.executeBatch();
MyLogger.log(Level.WARNING, "it doesn't write anymore");
statement.close();
MyLogger.log(Level.WARNING, "still doesn't");
Thanks
edit: MyLogger is a class with a static log method
edit2: @Tim Biegeleisen
statement.executeBatch() returns an array of int, one for each batched statement. I tried:
try {
    int[] results = statement.executeBatch();
    for (int result : results) {
        if (result == Statement.EXECUTE_FAILED) {
            MyLogger.log(Level.SEVERE, "batch failed, but driver seems to still be alive.");
            System.out.println("batch failed, but driver seems to still be alive.");
        }
    }
} catch (SQLException e) {
    MyLogger.log(Level.SEVERE, "the batch failed, and the driver died too.");
    System.out.println("the batch failed, and the driver died too.");
}
and it printed and logged nothing.
edit3: I guess I was asking too much of my shutdown hook. I'm not familiar with it, so I'm not sure what precisely the problem was.
The explanation which seems most likely to me is that your code is falling over when it hits executeBatch(). After this, the subsequent calls to the logger appear not to work because they are never reached at all. One simple way to test this would be to surround your call to executeBatch() with a try catch block:
try {
    int[] results = statement.executeBatch();
    for (int result : results) {
        if (result == Statement.EXECUTE_FAILED) {
            MyLogger.log(Level.SEVERE, "batch failed, but driver seems to still be alive.");
        }
    }
} catch (SQLException e) {
    MyLogger.log(Level.SEVERE, "the batch failed, and the driver died too.");
}
(Note that executeBatch() returns an int[], and java.util.logging has no Level.ERROR; SEVERE is its closest equivalent.)
"I guess I was asking too much of my shutdown hook. I'm not familiar with it so I'm not sure what precisely the problem was."
See: LogManager$Cleaner() can prevent logging in other shutdown hooks.
If you are trying to perform logging in a shutdown hook then you are racing with the LogManager$Cleaner thread.
As a workaround to creating a custom shutdown hook, you can create a custom handler and install it on the root logger.
The first action of the LogManager$Cleaner is to close all the handlers installed on the logger.
Once the cleaner calls close() on the custom handler, you can then do one of the following:
Have the LogManager$Cleaner run your shutdown code inside the handler.
Find your custom shutdown hook using the Thread API and join with it, which will block the cleaner thread.
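A minimal sketch of that handler-based workaround; ShutdownHandler and performShutdownWork are illustrative names, not part of any API:

import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class ShutdownHandler extends Handler {

    public static void install() {
        // install on the root logger so the LogManager$Cleaner will
        // call close() on this handler before it resets the loggers
        Logger.getLogger("").addHandler(new ShutdownHandler());
    }

    @Override
    public void publish(LogRecord record) {
        // no-op: this handler exists only for its close() callback
    }

    @Override
    public void flush() {
    }

    @Override
    public void close() {
        // runs on the LogManager$Cleaner thread during shutdown; do your
        // final logging/cleanup here, or locate your own shutdown hook
        // thread and join() it to block the cleaner until it finishes
        performShutdownWork();
    }

    private void performShutdownWork() {
        // hypothetical placeholder for the shutdown work described above
    }
}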

Hibernate open-close SessionFactory on every query vs on app instance

I am currently working on a Java Swing application in NetBeans with Hibernate, guided by this wonderful repo from GitHub.
From the example code found there, it basically urges new programmers to open and close the SessionFactory every time a batch of queries has been executed:
try {
    HibernateSessionFactory.Builder.configureFromDefaultHibernateCfgXml()
            .createSessionFactory();
    new MySqlExample().doSomeDatabaseStuff();
} catch (Throwable th) {
    th.printStackTrace();
} finally {
    HibernateSessionFactory.closeSessionFactory();
}

private void doSomeDatabaseStuff() {
    deleteAllUsers();
    insertUsers();
    countUsers();
    User user = findUser(USER_LOGIN_A);
    LOG.info("User A: " + user);
}
Is this good programming practice? Isn't it more efficient to open the SessionFactory on app startup and close it on the WindowClosing event? What are the drawbacks of each method?
Thanks.
Using a persistent connection means you are going to have as many open connections on your database as open clients, plus you'll have to make sure each one stays open (very often it will be closed if it stays idle for a long time).
On the other hand, executing a query will be significantly faster if the connection is already open.
So it really depends on how often your clients will use the database. If they use it very rarely, a persistent connection is useless.
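For the open-at-startup approach the question suggests, a common pattern is one shared SessionFactory behind a small utility class. A sketch, assuming a hibernate.cfg.xml on the classpath; HibernateUtil is a conventional name, not something Hibernate requires:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public final class HibernateUtil {

    // built once at startup; a SessionFactory is thread-safe and is
    // meant to be shared for the whole life of the application
    private static final SessionFactory SESSION_FACTORY =
            new Configuration().configure() // reads hibernate.cfg.xml
                    .buildSessionFactory();

    private HibernateUtil() {
    }

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }

    // call once, e.g. from your WindowClosing handler
    public static void shutdown() {
        SESSION_FACTORY.close();
    }
}

Individual Sessions (and their underlying connections) are still opened and closed per unit of work; only the factory lives as long as the application.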

Hibernate Delayed Write

I am wondering if there is a possibility of Hibernate delaying its writes to the DB. I have Hibernate configured for MySQL. The scenario I hope to support is 80% reads and 20% writes, so I do not want to optimize my writes; I would rather have the client wait until the DB has been written to than return a bit earlier. My tests currently have 100 clients in parallel, and the CPU sometimes maxes out. I need this flush method to write to the DB immediately and return only when the data is written.
On the client side, I send a write request and then a read request, but the read request sometimes returns null. I suspect Hibernate is not writing to the DB immediately.
public final ThreadLocal<Session> session = new ThreadLocal<Session>();

public Session currentSession() {
    Session s = session.get();
    // Open a new Session if this thread has none yet
    if (s == null || !s.isOpen()) {
        s = sessionFactory.openSession();
        // Store it in the ThreadLocal variable
        session.set(s);
    }
    return s;
}

public synchronized void flush(Object dataStore) throws DidNotSaveRequestSomeRandomError {
    Transaction txD;
    Session session;
    session = currentSession();
    txD = session.beginTransaction();
    session.save(dataStore);
    try {
        txD.commit();
    } catch (ConstraintViolationException e) {
        e.printStackTrace();
        throw new DidNotSaveRequestSomeRandomError(dataStore, feedbackManager);
    } catch (TransactionException e) {
        log.debug("txD state isActive" + txD.isActive() + " txD is participating" + txD.isParticipating());
        log.debug(e);
    } finally {
        // session.flush();
        txD = null;
        session.close();
    }
    // mySession.clear();
}
@Siddharth Hibernate does not really delay its writes, and your code does not suggest that either. I have faced a similar issue before, and I suspect you might be facing the same thing: when there are numerous write requests, many threads share the same instance of your DB session, and even though Hibernate commits consecutively you don't really see the changes.
You can also catch this by simply looking at your MySQL logs during the transaction and seeing what exactly went wrong!
Thanks for your hint. Took me some time to debug, and the MySQL logs are amazing.
This is what I run to check the time stamp on my inserts against the MySQL writes. MySQL logs all DB operations in a binlog; to read it we need the tool called mysqlbinlog, and my.cnf needs to be configured to enable the logs: http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpm7%2Fm7enablelogs.html
I check which is the latest MySQL binlog file, and run this to grep for one line above the match, to get the time stamp. Then, in Java, I call Calendar.getInstance().getTimeInMillis() to compare with that time stamp (a sketch follows the command below):
sudo mysqlbinlog mysql/mysql-bin.000004 | grep "mystring" -1
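A sketch of that comparison; the commented-out write call is a placeholder for the flush method shown above:

public class WriteTimestampCheck {
    public static void main(String[] args) {
        // take timestamps just before and just after the write, then compare
        // them with the timestamp mysqlbinlog prints for the matching insert
        long before = System.currentTimeMillis(); // same value Calendar.getInstance().getTimeInMillis() returns
        // ... perform the write here, e.g. flush(dataStore) from the code above ...
        long after = System.currentTimeMillis();
        System.out.println("write issued between " + before + " and " + after);
    }
}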
So I debugged my problem. It was a delayed-write problem, so I implemented a synchronous write instead of an all-async one. In other words, the server call won't return until the DB has been flushed for this object.

Writing selenium errors to a log file and handling errors

I have a Selenium script and I need to write the failures to a log file, for example when a page is not found or selenium.waitForPageToLoad expires. Instead of going straight to tearDown(), I would like to log why my script stopped.
selenium.open("/confluence/login.action?logout=true");
selenium.type("os_username", "login");
selenium.type("os_password", "pw");
selenium.click("loginButton");
selenium.waitForPageToLoad("10000");
selenium.click("title-heading");
selenium.click("spacelink-INTR");
selenium.waitForPageToLoad("10000");
selenium.click("link=Create Issue Links");
selenium.waitForPageToLoad("10000");
selenium.click("quick-search-query");
selenium.type("quick-search-query", "create issue links");
selenium.click("quick-search-submit");
selenium.waitForPageToLoad("100000");
stoptime = System.currentTimeMillis();
Also, would it be possible to skip a step if it fails? Right now, if anything at all fails, it goes straight to the tearDown() method.
I am using Java.
What you are asking about is exception handling. If any of the steps in your test fails, Selenium will throw an exception and the test will stop. If you handle the exceptions using try/catch, you should be able to achieve what you are looking for. As an example, see the code below, which handles the initial page-load timeout: even if selenium.open fails, the script will handle the exception and move ahead to the next statement. You should read more about exception handling to find the best way to handle these exceptions.
try {
    selenium.open("/confluence/login.action?logout=true");
} catch (Exception e) {
    // code to write the failure to a file goes here
    System.out.println("Selenium open did not work");
}
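For the file-writing part, java.util.logging (which needs no extra dependencies) works well. A sketch; the SeleniumLog class and the log file name are illustrative choices:

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class SeleniumLog {

    private static final Logger LOG = Logger.getLogger(SeleniumLog.class.getName());

    static {
        try {
            // append failures to selenium-errors.log in the working directory
            FileHandler handler = new FileHandler("selenium-errors.log", true);
            handler.setFormatter(new SimpleFormatter());
            LOG.addHandler(handler);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void failure(String step, Exception e) {
        LOG.log(Level.SEVERE, "step failed: " + step, e);
    }
}

The catch block above would then call SeleniumLog.failure("open login page", e); instead of only printing to the console.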

spring ibatis mysql intermittent asynchronous problem

I'm using ibatis in spring to write to mysql.
I have an intermittent bug. On each cycle of a process I write two rows to the db. The next cycle I read in the rows from the previous cycle. Sometimes (one time in 30, sometimes more frequently, sometimes less) I only get back one row from the db.
I have turned off all caching I can think of. My sqlmap-config.xml just says:
<sqlMapConfig>
    <settings enhancementEnabled="false" statementCachingEnabled="false" classInfoCacheEnabled="false"/>
    <sqlMap resource="ibatis/model/cognitura_core.xml"/>
</sqlMapConfig>
Is there some asynchrony, or else caching to spring or ibatis or the mysql driver that I'm missing?
Using spring 3.0.5, mybatis 2.3.5, mysql-connector-java 5.0.5
EDIT 1:
Could it be because I'm using a pool of connections (c3p0)? Is it possible the insert is still running while I'm reading? It's weird, though; I thought everything would occur synchronously unless I explicitly declared it asynchronous.
Are you calling SqlSession.commit() after the inserts? C3P0 asynchronously "closes" the connections, which may be calling commit under the covers. That could explain the behavior you are seeing.
I'm getting similar behavior. This is what I'm doing; I have an old version of iBATIS I don't plan on upgrading. You can easily move this into a decorator.
SqlMapSession session = client.openSession();
try {
    try {
        session.startTransaction();
        // do work
        session.commitTransaction();
        // The transaction should be committed now, but it doesn't always happen.
        session.getCurrentConnection().commit(); // commit again :/
    } finally {
        session.endTransaction();
    }
} finally {
    session.close(); // would be nice if it was 'AutoCloseable'
}
