I have a transaction-based application that rolls an operation back on error. However, in some cases the rollback doesn't happen (even though it is being called from the application). Ours is a very complex application, and it is possible that some code is committing data directly. Is there a way to debug commits to the database, either from Java or from the database side? From Java we are not able to do this, because java.sql.Connection is an interface and the implementation is provided by Sybase jConnect, for which we don't have the source.
I am not sure whether this will help, but p6spy traces all the DML operations you perform on the database, including commit and rollback. When you use p6spy, it logs every database hit to a log file, from which you can easily figure out where your application is performing a commit.
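To illustrate the p6spy route, the wiring looks roughly like this (a sketch: the Sybase URL is a placeholder, and spy.properties must name your real driver and log file as described in the p6spy documentation):

import java.sql.Connection;
import java.sql.DriverManager;

public class P6SpyExample {
    public static void main(String[] args) throws Exception {
        // p6spy registers itself and delegates to the real Sybase driver
        Class.forName("com.p6spy.engine.spy.P6SpyDriver");

        // prefix the real URL with "p6spy:"; host/port/db are placeholders
        Connection con = DriverManager.getConnection(
                "jdbc:p6spy:sybase:Tds:localhost:5000/mydb", "user", "password");

        con.setAutoCommit(false);
        // ... DML here ...
        con.commit(); // this call, like every statement, ends up in spy.log
        con.close();
    }
}

Every commit then shows up in spy.log with a timestamp, which you can correlate with your application logs.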
Apart from the above suggestion, I feel every database comes with some sort of monitoring tool with which you can see which DML statements were fired within a given span of time.
I am doing batch inserts using the postgres 9.3-1100-jdbc41 JDBC4 driver.
According to the JDBC specification, it's up to the application to disable autocommit and to commit or roll back the transaction.
In my case, I am not using any transaction (i.e., autocommit is true), but the inserts are still rolled back if one of the inserts in the batch fails.
According to the JDBC specification, "If one of the commands in a batch update fails to execute properly, this method throws a BatchUpdateException, and a JDBC driver may or may not continue to process the remaining commands in the batch." It does not say that previously executed commands will be rolled back.
Is my understanding wrong? If not, why is the driver behaving this way? And if so, what is the correct behavior according to the specification?
As far as I can tell the spec essentially leaves this up to the driver; it doesn't specify whether or not already-processed work is committed if the batch fails.
PgJDBC executes the batch in a transaction, so if any part of the batch fails then it will all be aborted.
If you feel this behaviour is incorrect, the first thing you need to do is write a test case demonstrating that other drivers consistently behave differently from PgJDBC, and submit it to the PgJDBC issue tracker. We do not have time to research the behaviour of other drivers, so you need to write the test case and run it against some other popular databases (MS SQL Server, Oracle, DB2, MySQL, etc.), or arrange to have it run by others. If you show that PgJDBC's behaviour differs from how other drivers handle batches, then it'll be worth thinking about adding an option to change the behaviour (and eventually working on making it the default).
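A test case along those lines could look like this (a sketch; the table, URL, and credentials are placeholders; run it against each driver and compare which rows survive):

import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Arrays;

public class BatchBehaviorTest {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "password")) {
            con.setAutoCommit(true); // no explicit transaction
            try (Statement st = con.createStatement()) {
                st.addBatch("INSERT INTO t(id) VALUES (1)");
                st.addBatch("INSERT INTO t(id) VALUES (1)"); // duplicate key, fails
                st.addBatch("INSERT INTO t(id) VALUES (2)");
                st.executeBatch();
            } catch (BatchUpdateException e) {
                // updateCounts shows how far the driver got; whether the
                // already-executed inserts are kept is driver-dependent
                System.out.println(Arrays.toString(e.getUpdateCounts()));
            }
            // with PgJDBC, SELECTing from t afterwards shows no rows at all
        }
    }
}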
While trying to understand why a connection session lasted for six hours, held a lock, and blocked a thread, the question came up whether a connection from an XADataSource (Oracle driver) needs a live Java reference at all to be kept alive.
With a distributed transaction configured to be kept until it is explicitly ended (keep-xa-conn-till-tx-complete=true), I wonder how a driver could decide whether to close the connection and commit the transaction once the GC has recycled the connection object. Is there even a way for the DBMS to do so?
So the question is: how does a DBMS decide whether or not to abandon a distributed transaction?
The DBMS does not decide to end the transaction, the transaction manager does.
The transaction manager is set up to run in your application containers, whether they are Spring, J2EE application servers, or something else. They have to "know" they are distributed transaction managers by being configured correctly. In a distributed transaction environment, where you have multiple transactional applications and/or services interacting to complete a transaction, they must all be able to support distributed transactions to make proper use of the XADataSources and send the right signals up and down the chain for commit vs rollback.
Presuming that you have the above situation, multiple applications and/or services participating in a distributed transaction, it sounds as if one of them is not configured correctly. Something in the chain is not truly a distributed transaction, so that when it completes it's only completing locally and not sending the signal back down the line. So the distributed transaction never completes, holding the connection open.
Check the configuration of every application and/or service participating in the transaction. Add logging output to each that details whether the transaction is a distributed one or not. If you can't do that, then add logging output to each that records the start and end of the transaction. Find the latest point in the sequence of actions where you see an open-transaction but no matching close-transaction. If you see that, it's likely the node right after it in the chain that is doing something wrong.
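One way to get that logging without touching every code path is to hook into JTA itself; a sketch, assuming a standard Java EE environment where the registry is bound under its usual JNDI name:

import javax.naming.InitialContext;
import javax.transaction.Status;
import javax.transaction.Synchronization;
import javax.transaction.TransactionSynchronizationRegistry;

public class TxBoundaryLogger {
    // call this once at the start of each transactional operation
    public static void register() throws Exception {
        TransactionSynchronizationRegistry tsr =
                (TransactionSynchronizationRegistry) new InitialContext()
                        .lookup("java:comp/TransactionSynchronizationRegistry");

        System.out.println("tx started, key=" + tsr.getTransactionKey());

        tsr.registerInterposedSynchronization(new Synchronization() {
            public void beforeCompletion() {
                System.out.println("tx completing");
            }
            public void afterCompletion(int status) {
                // tells you whether the distributed transaction really ended
                System.out.println("tx ended, committed="
                        + (status == Status.STATUS_COMMITTED));
            }
        });
    }
}

A transaction that starts but never logs the "tx ended" line is your six-hour session.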
Good luck.
Recently I was asked a question which left me thinking; I want to get the community's views on it.
I have a CustomerEJB which has say a createCustomer method. My EJB is exposed as a web service and hence createCustomer is one of its operations.
When a request hits createCustomer, two operations need to be performed:
An SQL INSERT into the database, adding data that came in the input request
Creation of a text file (say a .txt) in the file system
Now the question is: I want to couple these two tasks into one transaction. If either task fails, the other task is rolled back as well.
Without using any hot technologies like Spring/Hibernate, what approach can I follow for transaction management?
My thoughts:
1. I can use JTA, demarcate the transaction boundaries, and perform commit and rollback accordingly. JDBC can be used for the SQL task.
2. I can use DAOs.
Inviting your kind suggestions/comments
You would need to wrap the file creation in an XA-capable JCA connector (not sure whether there's a ready-made one out there; a quick Google only found this fsconnector, which doesn't support transactions yet), use an XA driver for your DB transaction (most DBs will be able to handle this), and then wrap your EJB in an XA transaction (should be straightforward).
As long as both resources can handle the XA transactions, you'll get the benefit of 2-phase commits, which is what you're after.
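If a transactional file-system connector isn't available, a common fallback is compensation: demarcate a JTA transaction around the database work, as suggested in the question, and undo the file write by hand if anything fails. This is not a real two-phase commit, just a sketch with hypothetical names (the datasource JNDI name, table, and path are all made up):

import java.io.File;
import java.nio.file.Files;
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class CustomerEJB {

    @Resource
    private UserTransaction utx;

    @Resource(lookup = "java:/jdbc/customerDS") // hypothetical datasource
    private DataSource ds;

    public void createCustomer(String name) throws Exception {
        File file = new File("/tmp/" + name + ".txt"); // hypothetical path
        utx.begin();
        try {
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO customer(name) VALUES (?)")) {
                ps.setString(1, name);
                ps.executeUpdate();
            }
            Files.write(file.toPath(), ("customer: " + name).getBytes("UTF-8"));
            utx.commit();
        } catch (Exception e) {
            utx.rollback();                       // undoes the INSERT
            Files.deleteIfExists(file.toPath());  // compensates the file write
            throw e;
        }
    }
}

The weakness, compared to real XA, is the window where the file already exists but the commit has not happened yet; a crash at that instant leaves an orphaned file.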
I have a bunch of tests in a Hibernate/Spring application. Yesterday, I transitioned them from using the JUnit 3.8 base test class provided by Spring to the JUnit 4.4 one.
Everything works great, because now, my tests are wrapped in transactions, and data created/modified is automatically rolled back (instead of me writing code to delete newly-created entities).
The only problem is that I cannot peek into the database during test execution. If a test fails, I often add breakpoints near the end and peer into the MySQL database via SQLyog to see what's going on. But now, I just see empty tables. (I mean in integration tests that simulate production very closely and actually touch the database.)
I tried setting the global isolation level to read uncommitted, but it didn't change the fact that I can't see the data. How can I configure Spring/Hibernate to allow me to view the data from another process?
I had the same issue, and found that setting the session isolation level while using SQLyog sometimes helped:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
This only uncovered another disturbing issue: while running tests, Hibernate didn't actually execute some of the statements unless I called HibernateTemplate.flush() after every Hibernate operation.
As this was very annoying, I finally configured Hibernate to always flush, like this:
HibernateTemplate hibernateTemplate;
...
// flush after every operation so changes are visible in the database immediately
hibernateTemplate.setFlushMode(HibernateTemplate.FLUSH_ALWAYS);
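If the goal is only to inspect the data after a failing test, another option (assuming your tests run on the Spring TestContext framework that backs the JUnit 4.4 support) is to switch off the automatic rollback for the one test you are debugging, then clean up manually:

import org.junit.Test;
import org.springframework.test.annotation.Rollback;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

@ContextConfiguration("classpath:test-context.xml") // hypothetical config location
public class CustomerIntegrationTest
        extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    @Rollback(false) // commit instead of rolling back, so SQLyog can see the rows
    public void createCustomerWritesToDatabase() {
        // ... exercise the code under test, then inspect the tables ...
    }
}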
I have a scenario where the unit of work is defined as:
Update table T1 in database server S1
Update table T2 in database server S2
And I want the above unit of work to happen either completely or not at all (as with any database transaction). How can I do this? I searched extensively and found this post, which is close to what I'm looking for, but it seems very specific to Hibernate.
I am using Spring, iBatis and Tomcat (6.x) as the container.
It really depends on how robust a solution you need. The minimal level of reliability for such a thing is XA transactions. To use them, you need a database and JDBC driver that support XA for starters; then you can configure Spring to use it (here is an outline).
If XA isn't robust enough for you (XA has failure scenarios, e.g., when something such as a hardware failure goes wrong during the second phase of the commit), then what you really need to do is put all the data in one database and have a separate process propagate it. The data may be temporarily inconsistent, but it is recoverable.
Edit: What I mean is: put the whole of the data into one database, either the first database or a different one set aside for this purpose. That database essentially becomes a queue from which the final data view is fed. The write to that database (assuming a decent database product) will either complete or fail completely. A separate thread then polls that database and distributes any missing data to the other databases. If the process fails, the thread continues the distribution when it starts up again. The data may not exist everywhere you want it right away, but nothing gets lost.
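A rough sketch of that propagation thread, using the queue idea above (the table and column names are invented; updates to the target must be idempotent, since a crashed run is simply re-applied):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class Propagator implements Runnable {
    private final DataSource queueDb;  // the database holding the authoritative writes
    private final DataSource targetDb; // server S2 in the question

    public Propagator(DataSource queueDb, DataSource targetDb) {
        this.queueDb = queueDb;
        this.targetDb = targetDb;
    }

    public void run() {
        try (Connection q = queueDb.getConnection();
             Connection t = targetDb.getConnection()) {
            // pick up rows that have not been distributed yet
            try (PreparedStatement sel = q.prepareStatement(
                     "SELECT id, payload FROM outbox WHERE propagated = false");
                 ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    long id = rs.getLong("id");
                    // apply the change to the target database...
                    try (PreparedStatement upd = t.prepareStatement(
                            "UPDATE t2 SET data = ? WHERE id = ?")) {
                        upd.setString(1, rs.getString("payload"));
                        upd.setLong(2, id);
                        upd.executeUpdate();
                    }
                    // ...then mark it done; a crash between the two steps
                    // means the row is re-applied on the next run, never lost
                    try (PreparedStatement done = q.prepareStatement(
                            "UPDATE outbox SET propagated = true WHERE id = ?")) {
                        done.setLong(1, id);
                        done.executeUpdate();
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace(); // the next poll retries any unfinished work
        }
    }
}

Schedule it with, for example, a ScheduledExecutorService so it polls every few seconds.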
You want a distributed transaction manager. I like Atomikos, which can run inside your JVM.
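A minimal standalone sketch with Atomikos, assuming XA-capable drivers on both servers (the class names are from Atomikos' TransactionsEssentials; the URLs, resource names, and SQL are placeholders):

import java.sql.Connection;
import java.sql.Statement;
import java.util.Properties;

import com.atomikos.icatch.jta.UserTransactionManager;
import com.atomikos.jdbc.AtomikosDataSourceBean;

public class TwoServerUpdate {

    // helper: an XA-aware pool around one database server
    private static AtomikosDataSourceBean xaDataSource(String name, String url) {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        ds.setUniqueResourceName(name);
        ds.setXaDataSourceClassName("org.postgresql.xa.PGXADataSource"); // example driver
        Properties p = new Properties();
        p.setProperty("url", url);
        ds.setXaProperties(p);
        return ds;
    }

    public static void main(String[] args) throws Exception {
        AtomikosDataSourceBean s1 = xaDataSource("S1", "jdbc:postgresql://s1/db");
        AtomikosDataSourceBean s2 = xaDataSource("S2", "jdbc:postgresql://s2/db");

        UserTransactionManager tm = new UserTransactionManager();
        tm.init();
        tm.begin();
        try {
            try (Connection c1 = s1.getConnection();
                 Connection c2 = s2.getConnection();
                 Statement st1 = c1.createStatement();
                 Statement st2 = c2.createStatement()) {
                st1.executeUpdate("UPDATE t1 SET x = 1"); // on server S1
                st2.executeUpdate("UPDATE t2 SET y = 2"); // on server S2
            }
            tm.commit();   // two-phase commit across both servers
        } catch (Exception e) {
            tm.rollback(); // neither update is kept
            throw e;
        } finally {
            tm.close();
        }
    }
}

In a Spring/iBatis setup you would typically register Atomikos behind Spring's JtaTransactionManager instead of calling it directly, but the two-phase commit mechanics are the same.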