Hi, I have executed DML queries via a JDBC Statement without setting autocommit, and the insert, update, and delete all happen. Can anyone help me understand how this works?
In JDBC, the autocommit property is enabled by default when the connection is created. You need to disable it explicitly if you want the transaction to be completed only when you call the commit() method.
For more information, refer to the following link:
http://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html#disable_auto_commit
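A minimal sketch of that pattern, assuming an illustrative accounts table (the table and column names are made up for the example):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TransactionSketch {
    // Run two updates atomically. With autocommit left at its default
    // (true), each executeUpdate() would commit on its own; disabling it
    // groups both updates into one transaction.
    static void transfer(Connection conn) throws SQLException {
        boolean previous = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
            st.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
            conn.commit();          // both updates become visible together
        } catch (SQLException e) {
            conn.rollback();        // neither update is applied
            throw e;
        } finally {
            conn.setAutoCommit(previous);   // restore the default for reuse
        }
    }
}
```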
Related
I have a Java app which uses JTA (Apache Geronimo) to manage transactions. The database in use is MySQL. The app has a lot of begin and commit calls. However, looking at the MySQL general log, I could not find a single "START TRANSACTION" query/command. The log is full of SET autocommit=1 and SET autocommit=0, along with commit and rollback. Because of this, I cannot tell from the logs at what point a transaction began. I am not a Java guy and I could not find any helpful resources on this either.
START TRANSACTION and COMMIT statements are used with MySQL's InnoDB engine. With MyISAM, these commands are not valid, so you need to use SET autocommit = 0 in place of START TRANSACTION and SET autocommit = 1 in place of COMMIT.
InnoDB allows both forms, but MyISAM allows only SET autocommit. Also note that these commands do somewhat similar work, but they are not identical, and using SET autocommit is not recommended with InnoDB.
For more information, refer to this question on Stack Overflow.
MySQL's JDBC driver implements the Java JDBC API. The java.sql.Connection interface does not have a method to start a transaction.
A transaction begins implicitly when you execute an SQL query.
If the driver is in autocommit mode, the transaction is committed automatically once the SQL query finishes.
If the driver is not in autocommit mode, the transaction started by your query remains active until you call Connection.commit() or Connection.rollback().
See also How to start a transaction in JDBC?
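The sequence described above looks like this in code (a sketch; the table name t is illustrative):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class AutoCommitModes {
    static void demo(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            // Default (autocommit on): each statement is its own
            // transaction and is committed as soon as it completes.
            st.executeUpdate("INSERT INTO t VALUES (1)");  // committed here

            // Autocommit off: the next statement implicitly begins a
            // transaction; there is no "start transaction" method in JDBC.
            conn.setAutoCommit(false);
            st.executeUpdate("INSERT INTO t VALUES (2)");  // transaction begins
            st.executeUpdate("INSERT INTO t VALUES (3)");  // same transaction
            conn.commit();  // both rows become visible together
        }
    }
}
```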
I need clarification on the points below:
Is it mandatory to set the hibernate.hbm2ddl.auto property if we are using Hibernate?
Will hibernate.hbm2ddl.auto set to create-drop affect anything in the production DB?
I am using Spring's LocalSessionFactoryBuilder to build the Hibernate session. The queries all execute fine using @Transactional, but after query execution I get an invalid SQL grammar data-access exception. I am assuming Hibernate is trying to update something in the DB and failing.
That's why I'm asking for help on the hbm2ddl.auto property.
No. The default is fine: it does nothing to the database.
Well, it will drop your database (you'll lose everything) when the session factory is closed, and recreate the schema when it restarts. So you really, really don't want that in production.
If you want help with your exception, you should show your code, the stack trace of the exception, and the relevant information (what the relevant tables look like, for example). I doubt hbm2ddl.auto has anything to do with that exception. Usually, when an exception occurs, it's a problem with your code, not a problem with Hibernate. That's true of any framework/library.
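For reference, the common values look like this in a properties file (the property name is Hibernate's own; the sketch below is illustrative):

```properties
# validate: check the mappings against the existing schema; change nothing.
hibernate.hbm2ddl.auto=validate

# Other values:
#   (unset)     - do nothing (the default, and the safe choice in production)
#   update      - alter the schema to match the mappings
#   create      - drop and recreate the schema on startup
#   create-drop - like create, and also drop the schema when the
#                 SessionFactory closes; never use this in production
```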
I am doing batch inserts using the PostgreSQL 9.3-1100-jdbc41 JDBC4 driver.
According to the JDBC specification, it's up to the application to disable autocommit and to commit or roll back the transaction.
In my case, I am not using any transaction (i.e., autocommit is true), but the inserts are still rolled back if one of the inserts in the batch fails.
The JDBC specification says: "If one of the commands in a batch update fails to execute properly, this method throws a BatchUpdateException, and a JDBC driver may or may not continue to process the remaining commands in the batch." It does not say that previously executed commands will be rolled back.
Is my understanding wrong? If not, why is the driver behaving this way, and if it is, what is the correct behavior according to the specification?
As far as I can tell the spec essentially leaves this up to the driver; it doesn't specify whether or not already-processed work is committed if the batch fails.
PgJDBC executes the batch in a transaction, so if any part of the batch fails then it will all be aborted.
If you feel this behaviour is incorrect, the first thing you need to do is write a test case demonstrating that other drivers consistently behave differently from PgJDBC, and submit it to the PgJDBC issue tracker. We do not have time to research the behaviour of other drivers, so you need to write the test case and run it against other popular databases (MS SQL Server, Oracle, DB2, MySQL, etc.), or arrange to have it run by others. If you show that PgJDBC's behaviour differs from how other drivers handle batches, then it will be worth thinking about adding an option to change the behaviour (and working on making it the default eventually).
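What the specification does pin down is how a batch failure is reported: through the update-count array of the BatchUpdateException. A runnable sketch of inspecting it (the exception is constructed by hand here to simulate what a driver might report for a three-command batch):

```java
import java.sql.BatchUpdateException;
import java.sql.Statement;

public class BatchCounts {
    // Summarise which commands in a failed batch actually ran. The driver
    // decides what the array contains: per-command row counts,
    // Statement.EXECUTE_FAILED, or an array shorter than the batch.
    static String describe(BatchUpdateException e) {
        int[] counts = e.getUpdateCounts();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < counts.length; i++) {
            sb.append("command ").append(i).append(": ")
              .append(counts[i] == Statement.EXECUTE_FAILED
                      ? "failed" : counts[i] + " row(s)")
              .append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Two successful inserts, then a failure on the third command.
        int[] counts = {1, 1, Statement.EXECUTE_FAILED};
        BatchUpdateException e = new BatchUpdateException("duplicate key", counts);
        System.out.print(describe(e));
    }
}
```

Whether the rows behind the successful counts survive is exactly the part the driver chooses; PgJDBC wraps the batch in a transaction, so in its case they do not.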
Earlier I was trying to get batch inserts working in Hibernate. I tried everything: for the config I set batch_size(50), order_inserts(true), order_updates(true), use_second_level_cache(false), use_query_cache(false). For the session I used setCacheMode(CacheMode.IGNORE) and setFlushMode(FlushMode.MANUAL). Still, the MySQL query log showed that each insert was coming in separately.
The ONLY thing that worked was setting rewriteBatchedStatements=true in the JDBC connection string. This worries me, as my application is supposed to support any JDBC database and I'm trying to avoid DB-specific optimizations.
Is the only reason Hibernate can't actually use batch statements that the MySQL driver doesn't support them by default? What about other drivers: do I have to add options to the connection string so they can support batched inserts? If you need specific DBs, think SQL Server, SQLite, Postgres, etc.
One reason it could not be working is that hibernate disables batching if you use the Identity id generation strategy.
Also MySQL doesn't support JDBC batch prepared statements the same way as other databases without turning on the rewrite option.
I don't see that it is a problem to turn this flag on, though. If you are setting up your application for a different database, you will have to change settings such as the dialect, driver name, etc. anyway, and as this flag is part of the JDBC connection string, you are isolated from that configuration.
Basically I think you are doing the right thing.
As batch insert (or bulk insert) is part of the SQL standard, ORM frameworks like Hibernate support and implement it. Please see Chapter 13. Batch Processing and Hibernate / MySQL Bulk insert problem.
Basically, you need to set the JDBC batch size via the property named hibernate.jdbc.batch_size to a reasonable value. Also, don't forget to flush() and clear() the session periodically to end each batch.
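For example, the relevant settings might look like this (a sketch; the database name and the MySQL-specific rewriteBatchedStatements flag are illustrative):

```properties
# Send inserts/updates to the driver in groups of 50.
hibernate.jdbc.batch_size=50
hibernate.order_inserts=true
hibernate.order_updates=true

# MySQL only: make Connector/J rewrite the batch into a single
# multi-row INSERT instead of sending each statement separately.
hibernate.connection.url=jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true
```

In the insert loop, call session.flush() and session.clear() once every batch_size entities, so the batch is sent to the driver and the first-level cache does not grow without bound.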
I have a transaction-based application and it rolls the operation back on an error. However, in some cases the rollback doesn't happen (though it is getting called from the application). Ours is a very complex application, and there is a chance that some code is directly committing the data. Is there a way to debug commits to the database, either from Java or from the database? From Java we are not able to do this, because java.sql.Connection is an interface and the implementation is provided by Sybase jConnect, for which we don't have the source.
I am not sure this will help, but p6spy helps in tracing all the DML operations you perform on the database, including commit and rollback. When you use p6spy, it logs every database hit into a log file, from which you can easily figure out where your application is performing a commit.
Apart from the above suggestion, I feel every database comes with some sort of monitoring tool with which you can see which DML statements were fired within a span of time.
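The p6spy setup is roughly this (the driver class and URL prefix are p6spy's documented ones; the Sybase URL and log-file name are illustrative, and property keys vary between p6spy versions):

```properties
# 1. Put the p6spy jar on the classpath and point the application at
#    its driver instead of the real one:
#      driver class: com.p6spy.engine.spy.P6SpyDriver
#      url:          jdbc:p6spy:sybase:...   (p6spy forwards to the real driver)
#
# 2. spy.properties -- log every statement, including commit/rollback:
appender=com.p6spy.engine.spy.appender.FileLogger
logfile=spy.log
```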