Informix JDBC hold cursors over commit - java

Using Informix 12.10FC5DE and JDBC driver 4.10.JC5DE, I am trying to write a Java program that will perform a "controlled" delete on a table.
The database containing the table is logged and the table has "lock mode row".
The program will receive a maximum number of rows to delete and perform periodic commits every X rows (to limit the number of locks and prevent long transactions).
Using SPL, I can declare a cursor with hold and do a foreach loop with "delete where current of".
Using JDBC, I can use the ResultSet methods .next() and .deleteRow() to do what I want (I set up the connection and statement not to autocommit or close the ResultSet on commit).
It works, but it is slow (under the hood the driver sends something like "DELETE FROM TABLE WHERE COLUMN = ?").
The code I have is something similar to the following:
Class.forName("com.informix.jdbc.IfxDriver");
Connection conn = DriverManager.getConnection(url);
conn.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT);
conn.setAutoCommit(false);
String cmd = "SELECT id FROM teste_001 FOR UPDATE;";
Statement stmt = conn.createStatement(
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_UPDATABLE,
        ResultSet.HOLD_CURSORS_OVER_COMMIT);
stmt.setFetchSize(100);
stmt.setCursorName("myowncursor");
ResultSet resultados = stmt.executeQuery(cmd); // Get the ResultSet and cursor
int maximo = 2000;
int passo = 100;
int cTotal = 0;
int cIter = 0;
String cmd2 = "DELETE FROM teste_001 WHERE CURRENT OF " +
        resultados.getCursorName();
PreparedStatement stmtDel2 = conn.prepareStatement(cmd2);
while (resultados.next())
{
    if (cIter < maximo)
    {
        int resultCode2 = stmtDel2.executeUpdate();
        if (resultCode2 == 1)
        {
            cTotal++;
        }
        cIter++;
        if ((cIter % passo) == 0)
        {
            conn.commit(); // Perform periodic commit
        }
    }
    else
    {
        break; // maximum number of rows reached
    }
}
conn.commit(); // Perform final commit
stmtDel2.close();
resultados.close();
stmt.close();
conn.close();
The problem is that when I perform the 1st periodic commit, I get this error when I try the next delete:
java.sql.SQLException: There is no current row for UPDATE/DELETE cursor.
SQLCODE = -266 ; MESSAGE: There is no current row for UPDATE/DELETE cursor. ; SQLSTATE = 24000
It seems the cursor is being closed even though I set "HOLD_CURSORS_OVER_COMMIT".
Does anyone know if what I am trying is possible with the Informix JDBC driver?
EDIT:
Since Informix supports the IBM DRDA protocol, I configured a DRDA listener on my test Informix instance and used the DB2 UDB JDBC Universal Driver.
The code is pretty much the same, only the driver changes:
Class.forName("com.ibm.db2.jcc.DB2Driver");
With the DRDA driver, the cursor is kept open over commits and the program behaves as expected. Tracing the session on the Informix instance, I see statements of this form:
DELETE FROM test_001 WHERE CURRENT OF SQL_CURSH600C1
So, Informix does support "HOLD_CURSORS_OVER_COMMIT" with the DRDA driver, but I still cannot make it work with the "IfxDriver".
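For reference, the working loop can be factored into a reusable method. This is a sketch only: it assumes a driver that honours HOLD_CURSORS_OVER_COMMIT (the DB2 JCC driver over DRDA did in the edit above, the IfxDriver did not), and it reuses the table and variable names from the code above.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ControlledDelete {

    // Pure helper: commit after every 'step' successful deletes.
    static boolean shouldCommit(int deleted, int step) {
        return deleted > 0 && deleted % step == 0;
    }

    // Deletes at most 'max' rows from teste_001, committing every 'step' rows.
    // Assumes the connection's driver keeps the hold cursor open across commits.
    static int controlledDelete(Connection conn, int max, int step) throws SQLException {
        conn.setAutoCommit(false);
        conn.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT);
        try (Statement stmt = conn.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY,
                     ResultSet.CONCUR_UPDATABLE,
                     ResultSet.HOLD_CURSORS_OVER_COMMIT)) {
            stmt.setCursorName("myowncursor");
            try (ResultSet rs = stmt.executeQuery("SELECT id FROM teste_001 FOR UPDATE");
                 PreparedStatement del = conn.prepareStatement(
                         "DELETE FROM teste_001 WHERE CURRENT OF " + rs.getCursorName())) {
                int deleted = 0;
                while (deleted < max && rs.next()) {
                    if (del.executeUpdate() == 1) {
                        deleted++;
                        if (shouldCommit(deleted, step)) {
                            conn.commit(); // periodic commit; the cursor must survive this
                        }
                    }
                }
                conn.commit(); // final commit
                return deleted;
            }
        }
    }
}
```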

Related

How to minimise deadlock on database table when executing operations from JAVA?

I am facing this exception:
com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 493) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
when a high number of users hit a particular transaction on my site. This happens because one request holds a lock on a table while others are waiting to acquire a lock on the same table.
This transaction touches about 20 tables, and for at least five of them a delete query runs first and then fresh data is inserted, which might hold the tables for a long time and cause the deadlock. Below is sample code.
public void save2() throws SQLException {
    con = DBConnFactory.getConnection();
    con.setAutoCommit(false);
    String deleteQuery1 = "delete from TEST_TABLE1";
    String insertQuery1 = "insert into TEST_TABLE1 values ('66','7')";
    String deleteQuery2 = "delete from TEST_TABLE2";
    String insertQuery2 = "insert into TEST_TABLE2 values ('66','7')";
    String deleteQuery3 = "delete from TEST_TABLE3";
    String insertQuery3 = "insert into TEST_TABLE3 values ('66','7')";
    String deleteQuery4 = "delete from TEST_TABLE4";
    String insertQuery4 = "insert into TEST_TABLE4 values ('66','7')";
    ps1 = con.prepareStatement(deleteQuery1);
    ps2 = con.prepareStatement(insertQuery1);
    ps1.executeUpdate();
    ps2.executeUpdate();
    ps1 = con.prepareStatement(deleteQuery2);
    ps2 = con.prepareStatement(insertQuery2);
    ps1.executeUpdate();
    ps2.executeUpdate();
    ps1 = con.prepareStatement(deleteQuery3);
    ps2 = con.prepareStatement(insertQuery3);
    ps1.executeUpdate();
    ps2.executeUpdate();
    ps1 = con.prepareStatement(deleteQuery4);
    ps2 = con.prepareStatement(insertQuery4);
    ps1.executeUpdate();
    ps2.executeUpdate();
    System.out.println("success2");
    con.commit();
}
public class DBConnFactory {
    public static Connection getConnection() {
        Connection conn = null;
        try {
            Context initContext = new InitialContext();
            Context envContext = (Context) initContext.lookup("java:/comp/env");
            DataSource dataSource = (DataSource) envContext.lookup("jdbc/DBConnection");
            if ((conn == null) || conn.isClosed()) {
                conn = dataSource.getConnection();
            }
        } catch (NamingException e) {
            LOG.error("DBConnFactory - JNDI naming error in getConnection => " + e.getMessage());
        } catch (SQLException e) {
            LOG.error("DBConnFactory - SQL error in getConnection => " + e.getMessage());
        }
        return conn;
    }
}
I was thinking about deleting all the tables' data in a stored procedure and then inserting through Java based on my business logic. Would that help?
Please suggest how to resolve this; do I need to change my approach?
Since you are deleting all of the rows in the table, try using TRUNCATE TABLE TEST_TABLE1 instead. TRUNCATE is much faster than a DELETE, though there are additional restrictions on its use since it is a DDL statement.
String deleteQuery1 = "truncate table TEST_TABLE1;";
Another approach you can use is to combine your delete and insert statements into a single batch:
String query1 = "delete from TEST_TABLE1; insert into TEST_TABLE1 values ('66','7');";
A DELETE statement without a WHERE clause takes a full table lock, which will prevent other delete or insert statements from executing. By combining the two statements into a single batch, you reduce the number of round trips and the overall latency.
You can combine all of the delete and insert statements into a single batch.
One significant concern is your use of DELETE without a WHERE clause. Since you are running into deadlocks, you have multiple connections hitting the same objects at the same time. For example, if connectionA is inserting into TEST_TABLE1 while connectionB is deleting from it, you run a real risk of a soft conflict in which connectionB deletes the data connectionA just inserted before it can be used for anything else. You really should not use DELETE without a WHERE clause in a transactional system like this.
If you are trying to delete the specific value you're about to insert, you should add a Where clause to limit it. That may also change the locking level from Table to Row level, assuming you have proper indexing, and eliminate the deadlocks.
String query1 = "delete from TEST_TABLE1 where ID = '66'; insert into TEST_TABLE1 values ('66','7');";
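To make the combined-batch idea concrete, here is a small helper that builds one delete-plus-insert batch string per table. It is a sketch using the hypothetical table names and values from the question; SQL Server accepts multiple semicolon-separated statements in one batch, but for real code you would prefer parameterized statements over string concatenation.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchBuilder {

    // Builds a single "delete then insert" statement pair for one table,
    // so the pair travels to the server in one round trip.
    static String replaceRow(String table, String id, String value) {
        return "delete from " + table + " where ID = '" + id + "'; "
             + "insert into " + table + " values ('" + id + "','" + value + "');";
    }

    // One batch string per table; these could also be joined into one big batch.
    static List<String> buildBatches(List<String> tables, String id, String value) {
        List<String> batches = new ArrayList<>();
        for (String table : tables) {
            batches.add(replaceRow(table, id, value));
        }
        return batches;
    }
}
```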

Using the JDBC getGeneratedKeys function in multithreaded environment

I have a web application that uses the AUTO INCREMENT value of one table to insert into other tables. I need to ensure that the value read for the auto-increment column is correct in the presence of potential concurrent INSERTs into that table. Since each thread will have its own connection (from the container pool), do I still have to put the code within a transaction?
PreparedStatement ps = null;
ResultSet rs = null;
String sql = "INSERT INTO KYC_RECORD ....";
int autoIncKeyFromApi = -1;
Connection connection = ....
try {
    connection.setAutoCommit(false);
    ps = connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS);
    ps.setString( ... );
    ps.executeUpdate();
    rs = ps.getGeneratedKeys();
    if (rs.next()) {
        autoIncKeyFromApi = rs.getInt(1);
    } else {
        // throw an exception from here
    }
    connection.commit();
}
The auto-increment value of the column is managed at the database level, and getGeneratedKeys() returns the key generated by your own statement on your own connection. Therefore you can fetch the value via getGeneratedKeys() without worry in a multithreaded environment.
The transaction is started as soon as you execute the update SQL statement; this happens at the database level. Unless autocommit is enabled, it stays open until you commit it manually.
If you need more info about transactions, see the Java Tutorial.
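A sketch of the same insert with try-with-resources, separating the key extraction into a small helper. The table and column names here are hypothetical; the point is that getGeneratedKeys() is scoped to this statement on this connection, so concurrent inserts on other connections cannot hand you someone else's key.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InsertWithKey {

    // Pulls the first generated key out of the keys result set,
    // failing loudly if the driver returned none.
    static int readKey(ResultSet keys) throws SQLException {
        if (!keys.next()) {
            throw new SQLException("no generated key returned");
        }
        return keys.getInt(1);
    }

    // Inserts a row and returns its auto-increment key (hypothetical schema).
    static int insertAndGetKey(Connection conn, String name) throws SQLException {
        String sql = "INSERT INTO KYC_RECORD (NAME) VALUES (?)";
        try (PreparedStatement ps = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, name);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                return readKey(keys);
            }
        }
    }
}
```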

Operation not allowed after ResultSet closed when deleting from a database

I am currently trying to delete items from my database using JDBC but am getting an error that I cannot figure out how to get rid of. The error is:
Caused by: java.sql.SQLException: Operation not allowed after ResultSet closed
Here is the java:
System.out.println("Connecting database for Delete...");
Integer deletedCount = 0;
Statement deleteStatement = null;
try (Connection conn = DriverManager.getConnection(url, username, password)) {
    System.out.println("Database connected!");
    deleteStatement = conn.createStatement();
    String selectDelete = "SELECT id FROM table WHERE end <= (NOW() - INTERVAL 1 HOUR)";
    ResultSet rs = deleteStatement.executeQuery(selectDelete);
    while (rs.next()) {
        String eventid = rs.getString("id");
        String deleteSQL = "DELETE FROM table WHERE id = " + eventid;
        deleteStatement.executeUpdate(deleteSQL);
        deletedCount++;
        System.out.println(deletedCount);
    }
    System.out.println("Completed Delete! " + deletedCount + " deleted!");
} catch (SQLException e) {
    throw new IllegalStateException("Cannot connect the database!", e);
}
What I am doing here is first selecting all the items whose date has already passed and reading them into a result set. I then go through a while loop in an attempt to delete them from the database. It runs through one time and then gets the error, which I will put in full below. Would I need to create a new delete statement every time I go through the loop? I cannot figure out a way to do this properly.
Here is the full error:
Connecting database for Delete...
Database connected!
1
Exception in thread "main" java.lang.IllegalStateException: Cannot connect the database!
at JDBC.deleteOverDueEvents(JDBC.java:56)
at EventJSON.main(EventJSON.java:31)
Caused by: java.sql.SQLException: Operation not allowed after ResultSet closed
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:963)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:896)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:885)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:860)
at com.mysql.jdbc.ResultSetImpl.checkClosed(ResultSetImpl.java:743)
at com.mysql.jdbc.ResultSetImpl.next(ResultSetImpl.java:6313)
at JDBC.deleteOverDueEvents(JDBC.java:45)
... 1 more
Why SELECT, loop, and DELETE when
String deleteSQL = "DELETE FROM table WHERE end <= (NOW() - INTERVAL 1 HOUR)";
Statement deleteStatement = conn.createStatement();
deleteStatement.executeUpdate(deleteSQL);
would work more efficiently?
When the line
deleteStatement.executeUpdate(deleteSQL);
is executed, then ResultSet rs is automatically closed:
A ResultSet object is automatically closed when the Statement object
that generated it is closed, re-executed, or used to retrieve the next
result from a sequence of multiple results.
But you can solve this simply by running
String deleteCmd = "DELETE FROM table WHERE end <= (NOW() - INTERVAL 1 HOUR)";
int deletedCount = deleteStatement.executeUpdate(deleteCmd);
instead of selecting the records and deleting each of them.
I think this is happening because you're deleting the rows while looping over the result set, and the logic is flawed anyway. You're selecting all ids with "end <= (NOW() - INTERVAL 1 HOUR)" and then deleting them one by one afterward. Why not delete them all in one statement, like this: "DELETE FROM table WHERE end <= (NOW() - INTERVAL 1 HOUR)"? If you need the count, run a SELECT first and then the delete query.
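If per-row processing really is needed (logging, notifications, and so on), the fix for the original code is to use one Statement for the SELECT and a separate PreparedStatement for the DELETE, so executing the delete does not invalidate the SELECT's ResultSet. A sketch, keeping the question's (placeholder) table and column names as SQL constants:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DeleteOverdue {

    static final String SELECT_SQL =
            "SELECT id FROM table WHERE end <= (NOW() - INTERVAL 1 HOUR)";
    static final String DELETE_SQL =
            "DELETE FROM table WHERE id = ?"; // parameterized: no string concatenation

    // Deletes overdue rows one by one; the separate PreparedStatement keeps
    // the SELECT's ResultSet open while the deletes execute.
    static int deleteOverdue(Connection conn) throws SQLException {
        int deleted = 0;
        try (Statement select = conn.createStatement();
             ResultSet rs = select.executeQuery(SELECT_SQL);
             PreparedStatement delete = conn.prepareStatement(DELETE_SQL)) {
            while (rs.next()) {
                delete.setString(1, rs.getString("id"));
                deleted += delete.executeUpdate();
                // per-row work would go here
            }
        }
        return deleted;
    }
}
```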

How to manage hundreds of insertion?

What is the best way to avoid ORA-01000: maximum open cursors exceeded when we cannot change the number of cursors?
Is there a better way than:
Connection connection = DriverManager.getConnection(/* Oracle Driver */);
Statement st = null;
st = connection.createStatement();
for (/* a lot of iteration with counter */) {
    st.executeUpdate(insertSql);
    if (counter % 500 == 0) {
        st.close();
        connection.commit();
        st = connection.createStatement();
    }
}
Which method call consumes a cursor: executeUpdate or createStatement?
I think it is the executeUpdate method, which is why I added this counter.
The Oracle version I work on:
select * from v$version;
Result:
BANNER
----------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
You are closing only every 500th statement...
for (/* a lot of iteration with counter */) {
    st = connection.createStatement(); // you create a statement each iteration!
    //...
    // This whole thing is not right here
    if (counter % 500 == 0) {
        st.close(); // not right here -- up to 499 statements stay open by now
        connection.commit(); // this too
    }
}
You should rather use a prepared statement and batched inserts for this amount of data.
PreparedStatement statement = connection.prepareStatement(insertTableSQL);
for (<the objects you have>) {
    // set parameters into the insert query
    statement.setString(1, <paramValue>); // or the appropriate setXxx method
    statement.addBatch(); // add it to the batch
}
statement.executeBatch(); // execute the whole batch
connection.commit();
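For very large loads, the batch itself can also grow without bound; a common refinement, sketched below, is to flush and commit every N rows. The table and column names are hypothetical. Each open Statement or PreparedStatement holds one Oracle cursor, so reusing a single PreparedStatement keeps the cursor count constant no matter how many rows are inserted.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class ChunkedInsert {

    static final int CHUNK = 500;

    // Pure helper: flush the batch after every CHUNK rows.
    static boolean flushPoint(int rowsAdded) {
        return rowsAdded > 0 && rowsAdded % CHUNK == 0;
    }

    // Inserts all values with one reused PreparedStatement (one cursor),
    // flushing the batch and committing every CHUNK rows.
    static void insertAll(Connection conn, List<String> values) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO SOME_TABLE (VAL) VALUES (?)")) { // hypothetical table
            int added = 0;
            for (String value : values) {
                ps.setString(1, value);
                ps.addBatch();
                added++;
                if (flushPoint(added)) {
                    ps.executeBatch();
                    conn.commit();
                }
            }
            ps.executeBatch(); // flush the tail
            conn.commit();
        }
    }
}
```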
You only need one statement for all actions, so you can create the statement outside the loop.
Connection connection = DriverManager.getConnection(/* Oracle Driver */);
Statement statement = connection.createStatement();
for (/* a lot of iteration with counter */) {
    // do some INSERT, SELECT, UPDATE
}
statement.close();
connection.close();
Now, inside the loop you can run your queries, for example:
statement.executeUpdate("query");

How to create a database deadlock using jdbc and JUNIT

I am trying to create a database deadlock and I am using JUnit. I have two concurrent tests running which are both updating the same row in a table over and over again in a loop.
My idea is that you update say row A in Table A and then row B in Table B over and over again in one test. Then at the same time you update row B table B and then row A Table A over and over again. From my understanding this should eventually result in a deadlock.
Here is the code For the first test.
public static void testEditCC()
{
    try {
        int rows = 0;
        int counter = 0;
        int large = 10000000;
        Connection c = DataBase.getConnection();
        while (counter < large)
        {
            int pid = 87855;
            int cCode = 655;
            String newCountry = "Egypt";
            int bpl = 0;
            stmt = c.createStatement();
            rows = stmt.executeUpdate("UPDATE main " + // create lock on main table
                    "SET BPL=" + cCode +
                    " WHERE ID=" + pid);
            rows = stmt.executeUpdate("UPDATE BPL SET DESCRIPTION='SomeWhere' WHERE ID=602"); // create lock on bpl table
            counter++;
        }
        assertTrue(rows == 1);
        //rows = stmt.executeUpdate("Insert into BPL (ID, DESCRIPTION) VALUES ("+cCode+", '"+newCountry+"')");
    }
    catch (SQLException ex)
    {
        ex.printStackTrace();
        //ex.getMessage();
    }
}
And here is the code for the second test.
public static void testEditCC()
{
    try {
        int rows = 0;
        int counter = 0;
        int large = 10000000;
        Connection c = DataBase.getConnection();
        while (counter < large)
        {
            int pid = 87855;
            int cCode = 655;
            String newCountry = "Jordan";
            int bpl = 0;
            stmt = c.createStatement();
            //stmt.close();
            rows = stmt.executeUpdate("UPDATE BPL SET DESCRIPTION='SomeWhere' WHERE ID=602"); // create lock on bpl table
            rows = stmt.executeUpdate("UPDATE main " + // create lock on main table
                    "SET BPL=" + cCode +
                    " WHERE ID=" + pid);
            counter++;
        }
        assertTrue(rows == 1);
        //rows = stmt.executeUpdate("Insert into BPL (ID, DESCRIPTION) VALUES ("+cCode+", '"+newCountry+"')");
    }
    catch (SQLException ex)
    {
        ex.printStackTrace();
    }
}
I am running these two separate JUnit tests at the same time and am connecting to an Apache Derby database that I am running in network mode within Eclipse. Can anyone help me figure out why a deadlock is not occurring? Perhaps I am using JUnit wrong.
You should check the transaction isolation level, as it determines whether or not the DB locks rows touched by a transaction. If the isolation level is too low, no locking occurs, so no deadlock either.
Update: according to this page, the default tx isolation level for Derby is read committed, which should be OK. The page is worth reading btw, as it explains tx isolation and its different levels, and what problems it solves.
Next question then: what is DataBase in your code? This seems to be a nonstandard way to get a connection.
Update2: I think I got it. Quote from the API doc:
Note: By default a Connection object is in auto-commit mode, which means that it automatically commits changes after executing each statement. If auto-commit mode has been disabled, the method commit must be called explicitly in order to commit changes; otherwise, database changes will not be saved.
In other words, rows are not locked because your effective transactions last only for the lifetime of individual updates. You should switch off autocommit before starting to work with your connection:
Connection c=DataBase.getConnection();
c.setAutoCommit(false);
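The opposite-order updates in the two tests are the classic lock-ordering deadlock: once autocommit is off, each update holds its row lock until commit, exactly like two in-process locks taken in opposite order by two threads. A minimal in-process analogue, with java.util.concurrent locks standing in for the row locks, showing the standard fix of acquiring in one fixed global order:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderDemo {

    static final ReentrantLock rowA = new ReentrantLock(); // stands in for the row in main
    static final ReentrantLock rowB = new ReentrantLock(); // stands in for the row in BPL

    // Both "transactions" acquire in the same global order (A then B), so
    // neither can hold B while waiting for A: no deadlock is possible.
    // Acquiring in opposite orders, as the two tests do, is what deadlocks.
    static void updateBoth(Runnable work) {
        rowA.lock();
        try {
            rowB.lock();
            try {
                work.run();
            } finally {
                rowB.unlock();
            }
        } finally {
            rowA.unlock();
        }
    }

    // Runs two concurrent workers; returns the total number of updates done.
    static int runConcurrently(int iterations) throws InterruptedException {
        AtomicInteger updates = new AtomicInteger();
        Runnable worker = () -> {
            for (int i = 0; i < iterations; i++) {
                updateBoth(updates::incrementAndGet);
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return updates.get();
    }
}
```

With fixed ordering both workers always run to completion; flipping the acquisition order in one worker would reproduce the hang the question is trying to achieve against the database.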
