We are currently migrating from Oracle to SAP HANA, and we are using Java programs that access the database via JDBC.
When I execute a SQL UPDATE against the HANA database from Java, the change is not written to the database.
After the update, I use SQuirreL SQL to check whether the table has changed, and the change is not visible!
How can I write data with UPDATE, INSERT and DELETE in Hana via JDBC?
As SipCat found out himself, this behavior is caused by
using autocommit=false
and not COMMITting the changes made by the application.
In this situation, the transaction will be rolled back when the application disconnects the DB session.
Several SQL editor/query UI tools (like SQuirreL SQL) use autocommit=true or automatically issue a COMMIT after each command, so it may appear that no COMMIT is required or that no transaction handling is involved.
That is a false impression. In fact, even SELECTs (which "just" read data) always run in a transaction context.
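A minimal sketch of the resulting pattern (the helper name is made up; the table and parameter values are placeholders): with auto-commit off, every write must be paired with an explicit commit() or it is discarded when the session disconnects.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class CommitExample {

    // Runs one DML statement and commits explicitly; rolls back on failure.
    // With autocommit=false, skipping conn.commit() means the change is
    // thrown away when the application disconnects the DB session.
    public static int updateAndCommit(Connection conn, String sql, Object... params)
            throws SQLException {
        boolean previousAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < params.length; i++) {
                ps.setObject(i + 1, params[i]);
            }
            int rows = ps.executeUpdate();
            conn.commit(); // without this, the update never becomes visible
            return rows;
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(previousAutoCommit);
        }
    }
}
```

Called e.g. as updateAndCommit(conn, "UPDATE my_table SET status = ? WHERE id = ?", "DONE", 42).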
Related
I'm creating a Spring web application that uses a MySQL database with Spring's JdbcTemplate. The problem is that I want to record any changes to the data stored in the MySQL database. I couldn't find any solution for using Spring Data Envers with JdbcTemplate to record the changes.
What is the best way to record changes to the data in the database? Or should I simply write them to a text file in the Spring app?
Envers, on which Spring Data Envers builds, is an add-on to Hibernate and uses its change detection mechanism to trigger writing revisions to the database.
JdbcTemplate doesn't have any of that; it just eases the execution of SQL statements by abstracting away repetitive tasks like exception handling or iterating over the ResultSet of a query. JdbcTemplate has no knowledge of what the statement it executes actually does.
As so often, you have a couple of options:
Put triggers on your database that record changes.
Use a database-specific feature like Oracle's Change Data Capture.
Create a wrapper around JdbcTemplate that analyses the SQL statement and produces a log entry. This is only feasible when you need very limited information, such as what kind of statement was executed and which table was affected.
If you need more semantic information, it is probably best to gather it at a higher level of your application stack, such as the controller or service, and write it to the database from there, probably using JdbcTemplate as well.
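For the wrapper option, the SQL analysis can be as simple as a regular expression that classifies the statement and extracts the table name. A sketch (the class and method names are hypothetical, and the parsing is deliberately naive; it will not handle every SQL dialect):

```java
import java.util.Locale;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Derives a minimal audit-log entry from a SQL string. */
public final class SqlAuditParser {

    // Matches the leading DML verb and the table name that follows it.
    private static final Pattern TABLE = Pattern.compile(
            "(?i)^\\s*(INSERT\\s+INTO|UPDATE|DELETE\\s+FROM)\\s+([\\w.]+)");

    /** Returns "VERB table" for simple DML, or null for anything it cannot classify. */
    public static String describe(String sql) {
        Matcher m = TABLE.matcher(sql);
        if (!m.find()) {
            return null;
        }
        String verb = m.group(1).split("\\s+")[0].toUpperCase(Locale.ROOT);
        return verb + " " + m.group(2);
    }
}
```

A wrapper around JdbcTemplate could call describe() on every statement it is given and write the non-null results to a log table.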
I'm developing a Spring MVC web application using Windows 7, Eclipse Juno, EclipseLink JPA as the ORM, and GlassFish as the application server, with Oracle 11g. While working with EclipseLink I noticed that when I update a table manually by executing an UPDATE statement, it doesn't have any effect on entities already retrieved by EclipseLink until I restart the server, although I disabled the EclipseLink cache by putting <shared-cache-mode>NONE</shared-cache-mode> in persistence.xml and using EntityManager.clear(), EntityManager.close() and @Cacheable(false).
Then I noticed that when I update tables using the Oracle SQL Developer table designer, it works fine and the entities show the updated information. So I checked the SQL Developer log to see what query it uses to update rows, and I saw that it uses ORA_ROWSCN and ROWID in the WHERE clause. After that, I used exactly the same WHERE clause as SQL Developer to update the tables, but the entities still showed old information.
I'm wondering what factors are involved here that keep EclipseLink from fetching real-time data from the database, while after updating the table with the SQL Developer designer EclipseLink does show real-time data. It seems that modifying table data with the SQL Developer table designer also marks the table as changed via some database feature, and EclipseLink reads that mark before hitting the table.
Also, for more clarification: does anyone know what steps EclipseLink goes through before it decides to hit the database when the user executes a TypedQuery? I'm curious where it stores cached entities, since the cache resets only when I restart the computer; I tried restarting GlassFish, killing the Java process and logging off the current user, but none of them worked. Why is EclipseLink still caching entities when I configured it not to use any caching? Is it possible to completely turn off caching in EclipseLink?
Earlier I was trying to get batch inserts working in Hibernate. I tried everything: for the configuration I set batch_size(50), order_inserts(true), order_updates(true), use_second_level_cache(false) and use_query_cache(false). For the session I used setCacheMode(CacheMode.IGNORE) and setFlushMode(FlushMode.MANUAL). Still, the MySQL query log showed that each insert arrived separately.
The ONLY thing that worked was setting rewriteBatchedStatements=true in the JDBC connection string. This worries me, as my application is supposed to support any JDBC database and I'm trying to avoid DB-specific optimizations.
Is the only reason Hibernate can't actually use batch statements that the MySQL driver doesn't support them by default? What about other drivers; do I have to add options to the connection string so they support batched inserts? If you need specific DBs, think SQL Server, SQLite, Postgres, etc.
One reason it could not be working is that Hibernate disables batching if you use the IDENTITY ID generation strategy.
Also, MySQL doesn't support JDBC batched prepared statements the same way as other databases unless you turn on the rewrite option.
I don't see a problem with turning this flag on, though: if you set up your application for a different database, you will have to change settings such as the dialect, driver name, etc. anyway, and as this flag is part of the JDBC connection string, you are isolated from the rest of the configuration.
Basically I think you are doing the right thing.
As batch insert (or bulk insert) is a standard database technique, ORM frameworks like Hibernate support and implement it. Please see Chapter 13, Batch Processing, of the Hibernate documentation and the question "Hibernate / MySQL Bulk insert problem".
Basically, you need to set the JDBC batch size via the property hibernate.jdbc.batch_size to a reasonable value. Also, don't forget to flush() and clear() the session at the end of each batch.
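Putting the pieces of both answers together, a typical configuration looks roughly like this (property names are Hibernate's; the rewriteBatchedStatements flag only applies to MySQL's Connector/J driver, and host/database names are placeholders):

```properties
hibernate.jdbc.batch_size=50
hibernate.order_inserts=true
hibernate.order_updates=true
# MySQL-specific: lets Connector/J rewrite batched inserts into multi-row inserts
hibernate.connection.url=jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true
```

On other databases the same Hibernate properties apply, but the URL flag is simply dropped.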
I have a transaction-based application and it rolls the operation back on an error. However, in some cases the rollback doesn't happen (though it is being called from the application). Ours is a very complex application, and there is a chance that some code is committing data directly. Is there a way to debug commits to the database, either from Java or from the database? From Java we are not able to do this, because java.sql.Connection is an interface and the implementation is provided by Sybase jConnect, for which we don't have the source.
I am not sure this will help, but p6spy traces all the DML operations you perform on the database, including commit and rollback. When you use p6spy, it logs every database hit to a log file, from which you can easily figure out where your application performs a commit.
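Wiring p6spy in is mostly a matter of prefixing the JDBC URL and pointing the log at a file. A sketch (the exact property keys and the jConnect driver class name may differ between versions; check the spy.properties reference for yours):

```properties
# JDBC URL becomes e.g. jdbc:p6spy:sybase:... ,
# with the driver class com.p6spy.engine.spy.P6SpyDriver

# spy.properties
driverlist=com.sybase.jdbc4.jdbc.SybDriver
appender=com.p6spy.engine.spy.appender.FileLogger
logfile=spy.log
```

Every statement, commit and rollback then shows up in spy.log with a timestamp.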
Apart from the above suggestion, I feel every database comes with some sort of monitoring tool with which you can see which DML statements were fired within a span of time.
I'm using an object/relational mapper to talk to the database. The library in my case is iBatis which is also used for transaction management.
However, I recently had a case where iBatis didn't actually start a transaction, even though startTransaction() etc. were called as documented.
After some debugging I found out that there was a configuration mistake on my side.
One might blame iBatis, but I would like to avoid such misconceptions in the future.
So here's the question:
How can I programmatically assert that the current database connection is running in a transaction?
The databases I'm using are Oracle, MySQL and H2 (for testing).
I'm not 100% sure whether this is absolutely indicative of being in a transaction, but Connection.getAutoCommit() tells you whether the connection is in auto-commit mode, where auto-commit "on" means "no transaction".
There may well be cases where this assertion does not hold, but most JDBC-based frameworks will use that setting to control transactions.
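A small assertion helper built on that idea (the class name is made up; note that getAutoCommit() returning false only tells you statements won't auto-commit, not that the transaction has done any work yet):

```java
import java.sql.Connection;
import java.sql.SQLException;

public final class TxAssert {

    // Fails fast if the connection is in auto-commit mode, which for most
    // JDBC-based frameworks means no explicit transaction is active.
    public static void assertInTransaction(Connection conn) throws SQLException {
        if (conn.getAutoCommit()) {
            throw new IllegalStateException(
                    "Connection is in auto-commit mode; no explicit transaction is active");
        }
    }
}
```

Calling this at the start of a DAO method on Oracle, MySQL and H2 would have flagged the iBatis misconfiguration immediately.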