Why doesn't MySQL JDBC use "Start transaction" query to start transactions? - java

I have a Java app which uses JTA (Apache Geronimo) to manage transactions. The database is MySQL. The app has a lot of begin and commit calls, yet looking at the MySQL general log I cannot find a single "START TRANSACTION" query/command. The log is full of SET autocommit=1 and SET autocommit=0, along with COMMIT and ROLLBACK. Because of this I am unable to tell from the log at what point a transaction began. I am not a Java guy and I could not find any helpful resource on this either.

START TRANSACTION and COMMIT statements are used with MySQL's InnoDB engine. In MyISAM these commands are not valid, so you need to use SET autocommit = 0 instead of START TRANSACTION and SET autocommit = 1 in place of COMMIT.
InnoDB allows both forms, but MyISAM allows only SET autocommit. Also note that these commands perform somewhat similar work, but they are not identical, and the use of SET autocommit is not recommended with InnoDB.
For more information, refer to this question on Stack Overflow.

MySQL's JDBC driver implements the Java JDBC API. The java.sql.Connection interface does not have a method to start a transaction.
A transaction begins implicitly when you execute an SQL query.
If the driver is in autocommit mode, the transaction is committed automatically once the SQL query finishes.
If the driver is not in autocommit mode, the transaction started by your query remains active until you call Connection.commit() or Connection.rollback().
See also How to start a transaction in JDBC?
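The sequence you see in the MySQL general log can be reproduced with the usual plain-JDBC transaction pattern. A minimal sketch follows; the dynamic-proxy `Connection` stub exists only so the example runs without a real database driver, but a transaction manager like Geronimo drives a real connection the same way:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class TxDemo {

    interface SqlWork { void run(Connection c) throws SQLException; }

    // The standard JDBC transaction pattern: no START TRANSACTION statement,
    // just autocommit toggling around the work -- which is exactly what
    // shows up in the MySQL general log.
    static void inTransaction(Connection conn, SqlWork work) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);   // transaction starts implicitly with the next statement
        try {
            work.run(conn);
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);  // back to autocommit mode
        }
    }

    // Stub Connection (dynamic proxy) that records calls, so the sketch
    // runs without a database driver.
    static List<String> demo() throws SQLException {
        List<String> calls = new ArrayList<>();
        boolean[] autoCommit = { true };
        Connection conn = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, args) -> {
                    switch (method.getName()) {
                        case "getAutoCommit": return autoCommit[0];
                        case "setAutoCommit":
                            autoCommit[0] = (Boolean) args[0];
                            calls.add("setAutoCommit=" + args[0]);
                            return null;
                        case "commit":   calls.add("commit");   return null;
                        case "rollback": calls.add("rollback"); return null;
                        default:         return null;
                    }
                });
        inTransaction(conn, c -> calls.add("INSERT ..."));
        return calls;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
        // [setAutoCommit=false, INSERT ..., commit, setAutoCommit=true]
    }
}
```

The recorded call sequence matches the log in the question: SET autocommit=0, the statements, COMMIT, then SET autocommit=1. The "start" of the transaction is the SET autocommit=0 followed by the first statement.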

Related

SQL UPDATE over JDBC in SAP Hana is not persistent

We are currently migrating from Oracle to SAP Hana, and we are using Java-programs that access the database via JDBC.
When I do an SQL UPDATE against the Hana database from Java, the change is not written to the database.
After the update, I use SQuirreL to check whether the table has been changed, and the change is not visible!
How can I write data with UPDATE, INSERT and DELETE in Hana via JDBC?
As SipCat found out himself, this behavior is caused by
using autocommit=false
and NOT COMMITing the changes made by the application.
In this situation, the transaction will be rolled back when the application disconnects the DB session.
Several SQL editor/query UI tools (like SQuirreL SQL) use autocommit=true or automatically issue a COMMIT after each command, so it may appear that no COMMIT is required or that no transaction handling is involved.
That is a false impression. In fact, even SELECTs (which "just" read data) technically always run in a transaction context.
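The fix, then, is an explicit commit() after the update. A minimal sketch (table and column names are made up; the proxy stubs below only replace a real Hana connection so the example runs standalone):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class CommitDemo {

    // With autocommit=false, an UPDATE is only durable once commit() is
    // called; otherwise the server rolls the change back when the session
    // disconnects.
    static void renameCustomer(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("UPDATE customers SET name = 'New name' WHERE id = 42");
        }
        conn.commit();   // without this line the UPDATE never becomes visible
    }

    // Recording stubs so the sketch runs without a Hana driver.
    static List<String> demo() throws SQLException {
        List<String> calls = new ArrayList<>();
        Statement st = (Statement) Proxy.newProxyInstance(
                Statement.class.getClassLoader(), new Class<?>[] { Statement.class },
                (p, m, a) -> {
                    if (m.getName().equals("executeUpdate")) { calls.add("executeUpdate"); return 1; }
                    if (m.getName().equals("close")) { calls.add("close"); return null; }
                    return null;
                });
        Connection conn = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(), new Class<?>[] { Connection.class },
                (p, m, a) -> {
                    if (m.getName().equals("createStatement")) return st;
                    if (m.getName().equals("commit")) { calls.add("commit"); return null; }
                    return null;
                });
        renameCustomer(conn);
        return calls;   // executeUpdate, close, commit -- commit comes last
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```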

DML commit via jdbc

Hi, I have executed DML queries via a JDBC Statement without touching autocommit, and the inserts, updates and deletes happen. Can anyone help me understand how this works?
For example
Sample program
In JDBC, the autocommit property is enabled by default when the connection is created. You need to disable the property explicitly if you want the transaction to be completed only when you call the commit() method.
For more information, refer to the following link:
http://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html#disable_auto_commit

Batch update in Postgresql JDBC driver rolls back in autocommit

I am doing batch inserts using postgres 9.3-1100-jdbc41 JDBC4 driver.
According to the JDBC specification, it's up to the application to disable autocommit and to commit or rollback the transaction.
In my case, I am not using any transaction (i.e., autocommit is true), but the inserts are still rolled back if one of the inserts in the batch fails.
According to the JDBC specification, "If one of the commands in a batch update fails to execute properly, this method throws a BatchUpdateException, and a JDBC driver may or may not continue to process the remaining commands in the batch." It does not say that previously executed commands will be rolled back.
Is my understanding wrong? If not, why is the driver behaving this way, and if it is, what is the right behavior according to the specification?
As far as I can tell the spec essentially leaves this up to the driver; it doesn't specify whether or not already-processed work is committed if the batch fails.
PgJDBC executes the batch in a transaction, so if any part of the batch fails then it will all be aborted.
If you feel this behaviour is incorrect, the first thing you need to do is write a test case demonstrating that other drivers consistently behave differently from PgJDBC and submit it to the PgJDBC issue tracker. We do not have time to research the behaviour of other drivers, so you need to write the test case and run it against some other popular databases (MS SQL Server, Oracle, DB2, MySQL, etc.), or arrange to have it run by others. If you show that PgJDBC's behaviour differs from how other drivers handle batches, then it will be worth thinking about adding an option to change the behaviour (and working on making it the default eventually).
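Whatever the driver does, the portable way to see how much of a failed batch it processed is to inspect BatchUpdateException.getUpdateCounts(). A short sketch (the exceptions here are constructed by hand to simulate both driver behaviours the spec allows; note this only shows what the driver reports, not whether the reported rows survived the transaction, which is exactly where PgJDBC differs):

```java
import java.sql.BatchUpdateException;
import java.sql.Statement;

public class BatchFailureDemo {

    // getUpdateCounts() tells you how far the driver got: its length shows
    // whether the driver stopped at the failure or kept going, and
    // Statement.EXECUTE_FAILED marks commands that were attempted but failed.
    static String summarize(BatchUpdateException e) {
        int[] counts = e.getUpdateCounts();
        int ok = 0, failed = 0;
        for (int c : counts) {
            if (c == Statement.EXECUTE_FAILED) failed++;
            else ok++;   // a row count or SUCCESS_NO_INFO
        }
        return ok + " succeeded, " + failed + " failed, "
                + counts.length + " commands reported";
    }

    public static void main(String[] args) {
        // Simulate a driver that processed two commands, hit a failure,
        // and stopped (counts array shorter than the batch).
        BatchUpdateException stopped =
                new BatchUpdateException("duplicate key", new int[] { 1, 1 });
        System.out.println(summarize(stopped));   // 2 succeeded, 0 failed, 2 commands reported

        // Simulate a driver that continued past the failure.
        BatchUpdateException continued = new BatchUpdateException(
                "duplicate key", new int[] { 1, Statement.EXECUTE_FAILED, 1 });
        System.out.println(summarize(continued)); // 2 succeeded, 1 failed, 3 commands reported
    }
}
```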

Are batch inserts not working only because of the MySQL driver? What about others?

Earlier I was trying to get batch inserts working in Hibernate. I tried everything: in the configuration I set batch_size(50), order_inserts(true), order_updates(true), use_second_level_cache(false) and use_query_cache(false). On the session I used setCacheMode(CacheMode.IGNORE) and setFlushMode(FlushMode.MANUAL). Still, the MySQL query log showed that each insert was coming in separately.
The ONLY thing that worked was setting rewriteBatchedStatements=true in the JDBC connection string. This worries me, as my application is supposed to support any JDBC database and I'm trying to avoid DB-specific optimizations.
Is the only reason Hibernate can't actually use batch statements that the MySQL driver doesn't support them by default? What about other drivers; do I have to add options to the connection string so they can support batched inserts? If you need specific DBs, think SQL Server, SQLite, Postgres, etc.
One reason it could not be working is that Hibernate disables batching if you use the identity id generation strategy.
Also, MySQL doesn't support JDBC batched prepared statements the same way as other databases without turning on the rewrite option.
I don't see it as a problem to turn this flag on, though: if you are setting up your application for a different database you will have to change settings such as the dialect, driver name, etc. anyway, and as this is part of the JDBC connection string you are isolated from the configuration.
Basically, I think you are doing the right thing.
As batch insert (or bulk insert) is part of the SQL standard, ORM frameworks like Hibernate support and implement it. Please see Chapter 13. Batch Processing and Hibernate / MySQL Bulk insert problem.
Basically, you need to set the JDBC batch size via the property named hibernate.jdbc.batch_size to a reasonable value. Also don't forget to flush() and clear() the session periodically while processing the batch.
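For reference, the settings discussed above might look like this in a hibernate.properties file (the batch size of 50 and the URL are just illustrative values taken from the question):

```properties
# Hibernate-side batching (as configured in the question)
hibernate.jdbc.batch_size=50
hibernate.order_inserts=true
hibernate.order_updates=true

# MySQL Connector/J only: rewrite the batch into multi-row INSERTs.
# Other drivers (e.g. PostgreSQL) batch without needing this flag.
hibernate.connection.url=jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true
```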

How to assert that database connection is in a transaction?

I'm using an object/relational mapper to talk to the database. The library in my case is iBatis which is also used for transaction management.
However, I recently had a case where iBatis didn't actually start a transaction even though startTransaction() etc. was called as documented.
After some debugging I found out that there was a configuration mistake on my side.
One might blame iBatis but I would like to avoid such misconceptions in the future.
So here's the question:
How can I programmatically assert that the current database connection is running in a transaction?
The databases I'm using are Oracle, MySQL and H2 (for testing).
I'm not 100% sure this is absolutely indicative of being in a tx, but Connection.getAutoCommit() tells you whether the connection is in auto-commit mode, and auto-commit "on" means "no transaction".
There may well be cases where this assertion does not hold, but most JDBC-based frameworks will use that setting to control transactions.
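The check described above could be sketched as a helper like the following (the proxy stub is only there so the example runs without Oracle/MySQL/H2; note the limitation already mentioned: this can prove a transaction *cannot* be active, not that one is):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;

public class TxAssert {

    // Heuristic from the answer above: if autocommit is still on, no
    // framework-managed transaction can be active on this connection.
    static void assertInTransaction(Connection conn) throws SQLException {
        if (conn.getAutoCommit()) {
            throw new IllegalStateException(
                    "autocommit is on: connection is not in a managed transaction");
        }
    }

    // Stub connection with a fixed autocommit flag, so the check is
    // runnable without a real database.
    static Connection stub(boolean autoCommit) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(), new Class<?>[] { Connection.class },
                (p, m, a) -> m.getName().equals("getAutoCommit") ? autoCommit : null);
    }

    public static void main(String[] args) throws Exception {
        assertInTransaction(stub(false));   // passes: autocommit is off
        try {
            assertInTransaction(stub(true));
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

In practice you would call assertInTransaction() right after iBatis's startTransaction(), which would have caught the configuration mistake described in the question.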