Need help with the JDBC transaction control mechanism in Java.
Issue:
There are certain stored procedures in our Sybase DB that need to be run in unchained mode. Since we are updating our data in two different databases (unfortunately, both Sybase), we need to be able to roll back all of our previous transactions if there is any failure.
But running in unchained mode (auto-commit on) is not helping us with the rollbacks, as some of the SPs have already committed their transactions.
Connection connection = getConnection();
PreparedStatement ps = null;
try{
String sql = getQuery(); // SQL Chained Mode
ps = connection.prepareStatement(sql);
ps.executeUpdate(); //Step 1
.
.
sql = getTransctionQuery(); // SQL Unchained Mode
connection.setAutoCommit(true); //Step 2
ps = connection.prepareStatement(sql);
ps.executeUpdate();
connection.setAutoCommit(false);
.
.
sql = getQuery(); // SQL Chained Mode
ps = connection.prepareStatement(sql);
ps.executeUpdate(); //Step 3 This step fails.
connection.commit();
}catch(SQLException e){
connection.rollback(); //Doesn’t rollback step 1 and understandably step 2.
}
finally{
connection.close(); //cleanup code
}
We would ideally like to roll back both Step 1 and Step 2 if Step 3 fails.
Current Solution:
Our idea is to reinvent the wheel and write our own version of rollback (deleting the inserted records and reverting the updated values from Java).
Need an effective solution
Since this solution is effort-intensive and not foolproof, we would like to know if there are any better solutions.
Thanks
You need to issue an explicit BEGIN TRANSACTION statement. Otherwise, every DML statement is a transaction by itself, which you cannot control. Obviously, auto-commit must be off as well.
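For illustration, a minimal sketch of holding all three steps in one explicit transaction at the JDBC level, reusing the asker's getQuery()/getTransctionQuery() helpers and assuming the enclosing method declares throws SQLException; whether the unchained-mode procedures can take part in that transaction depends on the Sybase server and driver settings:

Connection connection = getConnection();
try {
    connection.setAutoCommit(false);                  // one transaction for all steps

    try (PreparedStatement step1 = connection.prepareStatement(getQuery())) {
        step1.executeUpdate();                        // Step 1
    }
    try (PreparedStatement step2 = connection.prepareStatement(getTransctionQuery())) {
        step2.executeUpdate();                        // Step 2, no setAutoCommit(true) here
    }
    try (PreparedStatement step3 = connection.prepareStatement(getQuery())) {
        step3.executeUpdate();                        // Step 3
    }

    connection.commit();                              // all changes become visible at once
} catch (SQLException e) {
    connection.rollback();                            // undoes Steps 1-3 together
    throw e;
} finally {
    connection.close();
}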
Related
I have to execute multiple insert queries using JDBC, for which I am trying to execute a batch statement. Everything works fine in my code, but when I try to see the values in the table, the table is empty.
Here is the code :
SessionImpl sessionImpl = (SessionImpl) getSessionFactory().openSession();
Connection conn = (Connection) sessionImpl.connection();
Statement statement = (Statement) conn.createStatement();
for (String query : queries) {
statement.addBatch(query);
}
statement.executeBatch();
statement.close();
conn.close();
And the List<String> queries contains insert queries like:
insert into demo values (null,'Sharmzad','10006','http://demo.com','3 Results','some values','$44.00','10006P2','No Ratings','No Reviews','Egypt','Duration: 8 hours','tour','Day Cruises');
And the table structure is like:
create table demo (
  ID INTEGER PRIMARY KEY AUTO_INCREMENT,
  supplierName varchar(200),
  supplierId varchar(200),
  supplierUrl varchar(200),
  totalActivities varchar(200),
  activityName varchar(200),
  activityPrice varchar(200),
  tourCode varchar(200),
  starRating varchar(200),
  totalReviews varchar(200),
  geography varchar(200),
  duration varchar(200),
  category varchar(200),
  subCategory varchar(200)
);
No exception is thrown anywhere but no value is inserted. Can someone explain?
Most JDBC drivers use auto-commit, but some of them do not. If you don't know, you should either call .setAutoCommit(true) before the transaction or .commit() after it.
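For example, a minimal sketch of that applied to the batch from the question (conn and queries as in the original code, assuming the enclosing method declares throws SQLException):

conn.setAutoCommit(false);                    // take explicit control of the transaction
try (Statement statement = conn.createStatement()) {
    for (String query : queries) {
        statement.addBatch(query);
    }
    statement.executeBatch();
    conn.commit();                            // without this, the inserted rows are never published
} catch (SQLException e) {
    conn.rollback();                          // discard the partial batch on failure
    throw e;
} finally {
    conn.close();
}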
This could be a transaction issue. Perhaps you're not committing your transaction? If so, it is normal not to see anything in the database.
You can check whether this is the case by running a client in READ_UNCOMMITTED transaction mode right after .executeBatch() (but before close()) and seeing if there are any rows.
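For instance, a second, purely hypothetical checking connection (checkConn is not part of the original code) could be switched to READ_UNCOMMITTED, assuming the database and driver support that isolation level:

checkConn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
try (Statement check = checkConn.createStatement();
     ResultSet rs = check.executeQuery("select count(*) from demo")) {
    if (rs.next()) {
        System.out.println("Rows visible before commit: " + rs.getInt(1));
    }
}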
You shouldn't assign a value to ID; instead, supply all the other column names:
insert into demo
(
supplierName
,supplierId
,supplierUrl
,totalActivities
,activityName
,activityPrice
,tourCode
,starRating
,totalReviews
,geography
,duration
,category
,subCategory
)
values (
'Sharmzad'
,'10006'
,'http://demo.com'
,'3 Results'
,'some values'
,'$44.00'
,'10006P2'
,'No Ratings'
,'No Reviews'
,'Egypt'
,'Duration: 8 hours'
,'tour'
,'Day Cruises'
);
and add a commit to your code.
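Put together, a sketch of the same insert as a parameterized batch with an explicit column list (column names taken from the create table statement above) and a final commit; conn is the connection from the question:

String sql = "insert into demo (supplierName, supplierId, supplierUrl, totalActivities, "
        + "activityName, activityPrice, tourCode, starRating, totalReviews, geography, "
        + "duration, category, subCategory) values (?,?,?,?,?,?,?,?,?,?,?,?,?)";

conn.setAutoCommit(false);
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, "Sharmzad");
    ps.setString(2, "10006");
    ps.setString(3, "http://demo.com");
    ps.setString(4, "3 Results");
    ps.setString(5, "some values");
    ps.setString(6, "$44.00");
    ps.setString(7, "10006P2");
    ps.setString(8, "No Ratings");
    ps.setString(9, "No Reviews");
    ps.setString(10, "Egypt");
    ps.setString(11, "Duration: 8 hours");
    ps.setString(12, "tour");
    ps.setString(13, "Day Cruises");
    ps.addBatch();                 // repeat the setString/addBatch calls for each row
    ps.executeBatch();
    conn.commit();                 // the missing commit from the original code
}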
To resolve the issue mentioned here, we are creating and using two identical JDBC singleton connections (regular and proxy).
But by doing so, we are facing a deadlock when we try to use both connections consecutively on the same table for multiple inserts and updates.
When this happens, I cannot run any queries from the DB tool (Aqua Data Studio) either.
My assumption is that it waits indefinitely for the other connection to release its lock.
Note: We are not dealing with multi-threading here.
Issue:
// Auto-commit false
// Singleton
Connection connection = getConnection(); // same
// Auto-commit true
// Singleton
Connection proxyConnection = getConnection(); // same
PreparedStatement ps = null;
try{
connection.setAutoCommit(false);
//Step 1
String sql = getQuery();
ps = proxyConnection.prepareStatement(sql);
ps.executeUpdate();
.
.
//Step 2
// if I don't execute this step everything works fine.
sql = getTransctionQuery();
ps = connection.prepareStatement(sql);
ps.executeUpdate();
.
.
//Step 3
sql = getQuery();
ps = proxyConnection.prepareStatement(sql);
ps.executeUpdate(); // this line never completes (if Step 2 runs)
}catch(SQLException e){
connection.rollback(); //Doesn’t rollback step 1 and understandably step 2.
}
finally{
connection.close(); //cleanup code
proxyConnection.close();
}
Question:
How can we resolve this issue?
How can we make sure that different connections, even though they are created using the same class loader, won't lock the database/table?
Thanks
I'm no expert here, but I used to have problems with Oracle DB when running a query and then forgetting to commit (or roll back). So I think that the fact that you didn't commit after Step 2 is what locks the database for the next access.
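If that is the cause, one possible sketch is to end the transaction on connection (commit or rollback) before going back to the same table through proxyConnection; getQuery() and getTransctionQuery() are the asker's own helpers:

// Step 1 on the auto-commit (proxy) connection
try (PreparedStatement ps1 = proxyConnection.prepareStatement(getQuery())) {
    ps1.executeUpdate();
}

// Step 2 on the transactional connection
try (PreparedStatement ps2 = connection.prepareStatement(getTransctionQuery())) {
    ps2.executeUpdate();
}
connection.commit();   // releases the locks taken in Step 2

// Step 3 no longer waits on Step 2's locks
try (PreparedStatement ps3 = proxyConnection.prepareStatement(getQuery())) {
    ps3.executeUpdate();
}

Committing mid-way does give up atomicity across the three steps; running all three steps on a single connection inside one transaction avoids the lock contention entirely.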
Question background:
1. The database is Neo4j 2.3.1, accessed via the JDBC driver.
2. The DB connection is initialized as a class member; the default is auto-commit (not changed).
To avoid inserting duplicates, I query before inserting. After the program stopped, I found duplicates. Why?
code:
String query = "CREATE (n:LABEL {name:'jack'})";
System.out.println(query);
Statement stmt = dbConnection.createStatement();
stmt.executeUpdate(query);
stmt.close();
Use MERGE + unique constraints instead.
How do you "check"?
You would have to check in the same transaction and also take a write lock.
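A sketch of that suggestion through the same JDBC connection as the question; the constraint syntax here assumes Neo4j 2.3-era Cypher:

// One-time setup: the unique constraint backs MERGE with an index and rejects duplicates.
try (Statement setup = dbConnection.createStatement()) {
    setup.executeUpdate("CREATE CONSTRAINT ON (n:LABEL) ASSERT n.name IS UNIQUE");
}

// MERGE matches an existing node or creates it atomically, so no separate pre-check query is needed.
try (Statement stmt = dbConnection.createStatement()) {
    stmt.executeUpdate("MERGE (n:LABEL {name:'jack'})");
}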
After debugging, I found that for neo4j-jdbc (v2.1.4) the default connection transaction isolation level is TRANSACTION_NONE. I then set it to TRANSACTION_READ_COMMITTED, and the above issue disappeared. So I think TRANSACTION_READ_COMMITTED forces the previous insert to be committed, though this is not the recommended way. For isolation levels, refer to: Difference between read committed and repeatable read.
I want some advice on some concurrency issues regarding JDBC. I basically need to update a value and then retrieve that value using an update followed by a select. I'm assuming that by turning auto-commit off, no other transaction can access this table, hence other transactions won't be able to perform update and select queries until this one has been committed.
Below is some example code. Do you think this will work, and does anyone have a better solution for implementing this?
int newVal=-1;
con.setAutoCommit(false);
PreparedStatement statement = con.prepareStatement("UPDATE atable SET val=val+1 WHERE id=?");
statement.setInt(1, id);
int result = statement.executeUpdate();
if (result != 1) {
throw new SQLException("Nothing updated");
} else {
statement = con.prepareStatement("SELECT val FROM atable WHERE id=?");
statement.setInt(1, id);
ResultSet resultSet = statement.executeQuery();
if (resultSet.next()) {
newVal = resultSet.getInt("val");
}
}
statement.close();
con.commit();
con.setAutoCommit(true);
Thanks.
Assuming you use some form of data source, you may configure transactionality and the isolation level there. But to be explicit:
try (Connection con = ds.getConnection()) {
    con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    con.setAutoCommit(false);
    //...
} catch (SQLException sqle) {
    throw new MyModelException(sqle);
}
Now, you could trigger pessimistic locking by updating a version (or timestamp) field in your table. This will trigger a lock in the database (most likely at the record level):
try (PreparedStatement pStm = con.prepareStatement(
        "update atable set version=version+1 where id=?")) {
    pStm.setInt(1, id);   // restrict the update so only this record is locked
    pStm.executeUpdate();
}
At this point, if another user is trying to update the same record simultaneously, this connection will either wait or timeout, so you must be ready for both things. The record will not be unlocked until your transaction ends (commit or rollback).
Then, you can safely select and update whatever you want and be sure that nobody else is touching your record as you process your data. If anybody else tries they will be put on wait until you finish (or they will timeout depending on connection configuration).
Alternatively, you could use optimistic locking. In this case you read your record and modify it, but in the update you make sure nobody else has changed it since you read it, by checking that the version/timestamp field is still the one you originally read. Here you must be prepared to retry the transaction (or abort it altogether) if you realize you have stale/outdated data.
e.g. update atable set afield=? where id=? and version=1
If the number of rows affected is 0, then you know it is probable that the record was updated between your read and your update, and that the record is no longer at version 1.
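A sketch of that optimistic check in JDBC; newValue and versionReadEarlier are illustrative placeholders for the data and the version read earlier in the same unit of work:

try (PreparedStatement update = con.prepareStatement(
        "update atable set afield=?, version=version+1 where id=? and version=?")) {
    update.setString(1, newValue);
    update.setInt(2, id);
    update.setInt(3, versionReadEarlier);     // the version seen when the row was read
    if (update.executeUpdate() == 0) {
        con.rollback();                       // stale data: someone else changed the row
        // retry the whole read-modify-write cycle, or report a conflict to the caller
    } else {
        con.commit();
    }
}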
Setting autocommit=false on your connection will not prevent other connections/threads from changing the row in the database! It will only disable automatic commits after each JDBC operation on that specific connection.
You will need to lock the row, e.g. with select ... for update, to prevent other transactions from modifying the row, and you will also need to do your selects and updates within a single transaction.
Cheers,
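As a sketch, the asker's update-then-select could be turned into a locking read followed by an update; support for FOR UPDATE and its exact behaviour depend on the database, and the enclosing method is assumed to declare throws SQLException:

con.setAutoCommit(false);
try (PreparedStatement lock = con.prepareStatement(
        "SELECT val FROM atable WHERE id=? FOR UPDATE")) {
    lock.setInt(1, id);
    try (ResultSet rs = lock.executeQuery()) {
        if (!rs.next()) {
            throw new SQLException("Nothing to update");
        }
        newVal = rs.getInt("val") + 1;        // the row stays locked until commit/rollback
    }
}
try (PreparedStatement update = con.prepareStatement(
        "UPDATE atable SET val=? WHERE id=?")) {
    update.setInt(1, newVal);
    update.setInt(2, id);
    update.executeUpdate();
}
con.commit();                                 // releases the row lock; roll back instead on failure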
I have two blocks of queries using PreparedStatement.
This is the first:
String sql = "update cikan_malzeme set miktar = ? where proje_id = ? and malzeme_id = ?";
PreparedStatement prep = dbConnect.connection.prepareStatement(sql);
prep.setFloat(1, toplam);
prep.setInt(2, pid);
prep.setInt(3, mid);
prep.executeUpdate();
And this is the second:
String sql2 = "update malzemeler set miktar = ? where malz_adi = ?";
PreparedStatement prep2 = dbConnect.connection.prepareStatement(sql2);
prep2.setFloat(1, fark);
prep2.setString(2, malzemeadi);
prep2.executeUpdate();
Now I want to execute them inside a transaction, with BEGIN; and COMMIT;.
How can I handle a transaction with PreparedStatement?
You should use Connection.setAutoCommit(false) to disable auto-commit, and then Connection.commit() and Connection.rollback() to end the transaction.
When auto-commit is disabled, a transaction is started automatically the first time you execute a command or query that requires one.
You should not be using the database specific transaction control commands, as the driver will most likely do additional cleanup of resources when a commit() or rollback() is issued.
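Applied to the two statements from the question, a minimal sketch (dbConnect.connection, toplam, pid, mid, fark, and malzemeadi as in the original code, with the enclosing method assumed to declare throws SQLException):

Connection con = dbConnect.connection;
con.setAutoCommit(false);                 // implicitly begins the transaction
try (PreparedStatement prep = con.prepareStatement(
         "update cikan_malzeme set miktar = ? where proje_id = ? and malzeme_id = ?");
     PreparedStatement prep2 = con.prepareStatement(
         "update malzemeler set miktar = ? where malz_adi = ?")) {

    prep.setFloat(1, toplam);
    prep.setInt(2, pid);
    prep.setInt(3, mid);
    prep.executeUpdate();

    prep2.setFloat(1, fark);
    prep2.setString(2, malzemeadi);
    prep2.executeUpdate();

    con.commit();                         // both updates succeed or neither does
} catch (SQLException e) {
    con.rollback();                       // undo the first update if the second fails
    throw e;
} finally {
    con.setAutoCommit(true);              // restore the previous mode
}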
Set auto commit to false.
Put your PreparedStatements in a try block. Commit at the end; rollback in the catch block.
That's how it's usually done in bare-bones JDBC.
http://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html
If you use EJB3 or Spring you can add a transaction manager and specify them declaratively. That's more sophisticated and flexible.
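For comparison, a rough sketch of the declarative style with Spring; the class, method name, and injected JdbcTemplate are illustrative and assume a configured transaction manager:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Transactional;

public class MalzemeService {

    private final JdbcTemplate jdbcTemplate;

    public MalzemeService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Both updates run in one transaction; a runtime exception rolls both back.
    @Transactional
    public void updateMiktar(float toplam, int pid, int mid, float fark, String malzemeadi) {
        jdbcTemplate.update(
            "update cikan_malzeme set miktar = ? where proje_id = ? and malzeme_id = ?",
            toplam, pid, mid);
        jdbcTemplate.update(
            "update malzemeler set miktar = ? where malz_adi = ?",
            fark, malzemeadi);
    }
}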