I have the code below, which I'd like to have reviewed; in particular, I want to understand how I can add a rollback step when something fails.
/**
* Insert some data into some table in bulk
*
* @param requestData requested data
*/
private void insertingRowsByBatches(RequestData requestData) {
try (
Connection connection = myDataSource.getConnection();
Statement deleteStatement = connection.createStatement()
) {
connection.setAutoCommit(TRUE);
String stagingDeleteSql = buildMessage("DELETE FROM my_table");
int rowsAffected = deleteStatement.executeUpdate(stagingDeleteSql);
log.info("[{}] records deleted", rowsAffected);
String stagingInsertSql = "INSERT INTO my_table(SOME_DATA) values(?)";
Lists.partition(Optional.ofNullable(requestData.getData()).orElse(emptyList()), MAX_ROWS_PER_INSERT)
.forEach(recordIds -> {
try (PreparedStatement pstmt = connection.prepareStatement(stagingInsertSql, RETURN_GENERATED_KEYS)) {
for (String recordId: recordIds ) {
pstmt.setString(1, recordId);
pstmt.addBatch();
}
long start = currentTimeMillis();
pstmt.executeBatch();
long end = currentTimeMillis();
log.info("Total time taken to insert [{}] rows: {}ms", recordIds.size(), (end - start));
} catch (SQLException ex) {
log.error("Staging job failed due to an exception: {}", ex.getMessage());
}
});
connection.commit();
} catch (Exception e) {
throw new GenericRuntimeException(e);
}
}
I'm a bit concerned about how this code will behave if I get a SQLException in the loop for some reason. I want to roll back all changes in a proper, ACID way.
UPDATE
After looking at the suggestions, this is what I came up with:
/**
* Insert some data into some table in bulk
*
* @param requestData requested data
*/
public void insertingRowsByBatches(RequestData requestData) {
try (Connection connection = myDataSource.getConnection()) {
loadInStagingTable(requestData, connection);
} catch (SQLException e) {
throw new GenericRuntimeException(e);
}
}
/**
* Insert given data to staging table
*
* @param requestData requested data
* @param connection instance of {@link Connection}
* @throws SQLException exception
*/
private void loadInStagingTable(RequestData requestData, Connection connection) throws SQLException {
try (
PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM my_table");
PreparedStatement insertStatement = connection.prepareStatement("INSERT INTO my_table(SOME_DATA) values(?)", RETURN_GENERATED_KEYS)
) {
connection.setAutoCommit(false);
log.info("Deleting any existing records from staging table...");
deleteStatement.executeUpdate();
log.info("Inserting given records in staging table...");
long start = currentTimeMillis();
List<String> records = requestData.getData();
List<List<String>> partitions = partition(records , MAX_ROWS_PER_INSERT);
int partitionCount = 0;
for (List<String> recordIds: partitions) {
log.info("Partition [{}/{}] - Inserting [{}] records in staging table", ++partitionCount, partitions.size(), recordIds.size());
for (String recordId: recordIds) {
insertStatement.setString(1, recordId);
insertStatement.addBatch();
}
insertStatement.executeBatch();
}
connection.commit();
log.debug("Total time taken to insert [{}] rows: {}ms", records.size(), (currentTimeMillis() - start));
} catch (SQLException e) {
connection.rollback();
throw new GenericRuntimeException(e);
}
}
According to the JDBC specification:
In the example, auto-commit mode is disabled to prevent the driver from
committing the transaction when Statement.executeBatch is called. Disabling
auto-commit allows an application to decide whether or not to commit the
transaction in the event that an error occurs and some of the commands in a batch
cannot be processed successfully. For this reason, auto-commit should always be
turned off when batch updates are done. The commit behavior of executeBatch is
always implementation-defined when an error occurs and auto-commit is true.
(section 14.1.1)
When auto-commit is disabled, each transaction must be explicitly committed by
calling the Connection method commit or explicitly rolled back by calling the
Connection method rollback, respectively.
(section 10.1.1)
Although section 14.1.1 is specifically about Statements, it notes that the same behavior should be assumed for PreparedStatements (which are described in section 14.1.4): if auto-commit is true, executeBatch commits the transaction, but the error behavior depends on the driver implementation and should not be relied upon. Therefore, the proper way (as suggested by the spec) is to set auto-commit to false.
I wish to rollback all changes in proper ACID way.
What do you mean by "all"? If you are referring to an individual partition, you can call connection.rollback() within the catch block. If you are referring to all partitions, you could set a flag within the catch block (perhaps via an AtomicBoolean) and then commit or roll back depending on the value of the flag. In my opinion, in this case it would be better not to use forEach but a for loop, and break out early once an error occurs:
connection.setAutoCommit(false);
for (List<String> recordIds : partitions) {
try (PreparedStatement pstmt = ...) {
for (String recordId : recordIds) {
pstmt.setString(1, recordId);
pstmt.addBatch();
}
pstmt.executeBatch();
} catch (SQLException ex) {
connection.rollback();
throw new GenericRuntimeException(ex);
}
}
connection.commit();
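To see the control flow of this pattern without a real database, here is a minimal sketch. The Connection is a stand-in built with java.lang.reflect.Proxy, and the simulated partition failure is purely illustrative (neither is part of the answer above); the point is only that rollback() runs, commit() never does, and the exception still propagates:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class BatchRollbackDemo {
    // Records the transaction-control calls made by the pattern below.
    static final List<String> calls = new ArrayList<>();

    // A stand-in Connection (dynamic proxy) so the flow runs without a database.
    static Connection fakeConnection() {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    String name = method.getName();
                    if (name.equals("setAutoCommit") || name.equals("commit")
                            || name.equals("rollback")) {
                        calls.add(name);
                    }
                    return null;  // all recorded methods return void
                });
    }

    // One transaction around all partitions: roll back and rethrow as soon
    // as any partition fails, commit only if every partition succeeded.
    static void insertAllOrNothing(Connection con, List<List<String>> partitions)
            throws SQLException {
        con.setAutoCommit(false);
        for (List<String> recordIds : partitions) {
            try {
                insertPartition(recordIds);  // stands in for the addBatch/executeBatch block
            } catch (SQLException ex) {
                con.rollback();
                throw ex;
            }
        }
        con.commit();
    }

    // Simulated partition insert that fails on a marker value.
    static void insertPartition(List<String> recordIds) throws SQLException {
        if (recordIds.contains("bad")) {
            throw new SQLException("simulated batch failure");
        }
    }

    public static void main(String[] args) {
        try {
            insertAllOrNothing(fakeConnection(),
                    List.of(List.of("a", "b"), List.of("bad")));
        } catch (SQLException expected) {
            // the failure propagates after the rollback
        }
        System.out.println(calls);  // [setAutoCommit, rollback]
    }
}
```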
The Code:
protected List<Object> getAll(Pair<String, Object> primaryKey, String columnLabel) {
List<Object> objects = new ArrayList<>();
try {
ResultSet resultSet = getRows(primaryKey);
while (resultSet.next()) {
objects.add(resultSet.getObject(columnLabel));
}
} catch (Exception e) {
e.printStackTrace();
} finally {
return objects;
}
}
protected ResultSet getRows(Pair<String, Object> primaryKey) {
try (Connection con = DataSource.getConnection();
PreparedStatement pst = con.prepareStatement("SELECT * FROM " + table + " WHERE `" + primaryKey.fst + "` = ?")) {
pst.setObject(1, primaryKey.snd);
return pst.executeQuery();
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
Error
The table's columns are: id (int) | player (string) | friend (string), where id is auto-increment.
The error points to (line 5):
while (resultSet.next()) {
When does the ResultSet get closed?
SQLEmmber#getAll(Pair<String, Object> primaryKey, String columnLabel) gets called in SQLFriends.java
Your method getRows() uses a try-with-resources block that contains the connection and the prepared statement. At the end of the try-with-resources block, this will explicitly close the prepared statement and the connection.
Closing the prepared statement will close the result set (and closing the connection will close any open statements, and therefore also close any open result sets).
As documented in ResultSet:
A ResultSet object is automatically closed when the Statement
object that generated it is closed, re-executed, or used to retrieve
the next result from a sequence of multiple results.
You will need to restructure your code so you only close the connection after you have retrieved all rows from the result set.
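One way to restructure this is to read all rows into a List inside the try-with-resources block and return the list instead of the ResultSet. A sketch of that idea, using a proxy-backed stand-in ResultSet (illustrative only, so it runs without a database):

```java
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ReadAllRowsDemo {
    static List<Object> result;  // filled by main so the behavior is observable

    // Materialize one column into a List while the statement/connection are
    // still open; the caller returns the list, never the ResultSet itself.
    static List<Object> readColumn(ResultSet rs, String columnLabel) throws SQLException {
        List<Object> out = new ArrayList<>();
        while (rs.next()) {
            out.add(rs.getObject(columnLabel));
        }
        return out;
    }

    // A stand-in ResultSet backed by a fixed list (the column label is
    // ignored here), so the pattern runs without a database.
    static ResultSet fakeResultSet(List<Object> rows) {
        int[] pos = {-1};
        return (ResultSet) Proxy.newProxyInstance(
                ResultSet.class.getClassLoader(),
                new Class<?>[]{ResultSet.class},
                (proxy, method, args) -> {
                    switch (method.getName()) {
                        case "next":      return ++pos[0] < rows.size();
                        case "getObject": return rows.get(pos[0]);
                        default:          return null;  // close() etc.
                    }
                });
    }

    public static void main(String[] args) {
        try {
            result = readColumn(fakeResultSet(List.of("alice", "bob")), "player");
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
        System.out.println(result);  // [alice, bob]
    }
}
```

With a real driver, readColumn would be called inside the try-with-resources block of getRows, so the connection is only closed after every row has been copied out.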
This is the code where I'm trying to execute a second query on the resultSet of my first lengthy query. I need to upload this
data somewhere.
Is it the right thing to do?
Or is there a better approach apart from querying the database again?
public String createQuery() throws SQLException {
StringBuilder Query = new StringBuilder();
try {
Query.append(" SELECT ...... ");
} catch (Exception e) {
e.printStackTrace();
}
return Query.toString();
}
private void openPreparedStatements() throws SQLException {
myQuery = createQuery();
try {
QueryStatement = dbConnection.prepareStatement(myQuery);
} catch (SQLException e) {
e.printStackTrace();
return;
}
}
public ResultSet selectData(String timestamp) throws SQLException {
openConnection();
ResultSet result = null;
ResultSet rs_new=null;
try {
result = QueryStatement.executeQuery();
while (result.next()) {
String query = "SELECT * FROM " + result + " WHERE " + "ID" + " =" + "ABC";
rs_new =QueryStatementNew.executeQuery(query);
System.out.print(rs_new);
}
} catch (SQLException e) {
LOGGER.info("Exception", e);
}
return result;
}
Instead of running two separate queries (when you don't need the intermediate one) you can combine them.
For example you can do:
SELECT *
FROM (
-- first query here
) x
WHERE ID = 'ABC'
Whether you can keep two result sets open at the same time on one connection depends on the driver and its cursor settings. You can either open another database connection and execute the second statement on that connection, or iterate through the result set from the first statement and store the values you need (e.g. in an array/collection), then close that statement and run the second one, this time retrieving the values from the array/collection you saved them in. Refer to Java generating query from resultSet and executing the new query
Make Db2 keep your intermediate result set in a Global Temporary Table, if you are able to use one and your application uses the same database connection session.
DECLARE GLOBAL TEMPORARY TABLE SESSION.TMP_RES AS
(
SELECT ID, ... -- Your first lengthy query text goes here
) WITH DATA WITH REPLACE ON COMMIT PRESERVE ROWS NOT LOGGED;
You may send the result of subsequent SELECT ... FROM SESSION.TMP_RES to FTP, and the result of SELECT * FROM SESSION.TMP_RES WHERE ID = 'ABC' to elastic.
I'm trying to run a query from Java against PostgreSQL and I get an error from stmt.execute(sql).
I would like to execute a new query to help me print out the specific row that is failing, but by the time I reach the catch (Exception e) block the transaction is already aborted.
I can't create a new transaction because I'm working with temp tables. How do I prevent the transaction from aborting?
org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
try (Statement stmt = data.db.getConnection().createStatement()) {
// data.db.getConnection().setSavepoint("sp01");
// insert to fact table
TableSchema factTableSchema = factInfo.getTableSchema();
// build SQL
String sql = "Select * From....";
try {
stmt.execute(sql); // this row is failing
}
catch (Exception e) {
try {
// now I would like to run a query only in the case we arrive here, but the transaction is closed
// how could I prevent the transaction from closing?
ResultSet rs = stmt.executeQuery(" SELECT Bla,Bla From..");
Log.debug("");
}
catch (Exception e2) {
Log.debug("");
}
}
Creating a new statement won't help here: in PostgreSQL, once any statement in a transaction fails, the server rejects every further command on that connection until the transaction ends, which is exactly what the error message says. Instead, set a savepoint before the risky statement (the commented-out setSavepoint call is the right idea) and roll back to that savepoint in the catch block. The transaction becomes usable again, and temp tables created earlier in it are preserved. A ResultSet also needs its statement and connection to stay open while you call next().
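Since PostgreSQL rejects every command on the connection until the aborted transaction ends, the commented-out setSavepoint line hints at the usable pattern: set a savepoint before the risky statement and roll back to it in the catch block, which keeps the surrounding transaction (and its temp tables) alive. A sketch, using an illustrative proxy Connection so it runs without a database:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;

public class SavepointDemo {
    static String lastRollback = "";  // what kind of rollback the pattern issued

    // Stand-in Connection: setSavepoint returns a proxy Savepoint, and a
    // rollback with an argument is recorded as a partial rollback.
    static Connection fakeConnection() {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    if (method.getName().equals("setSavepoint")) {
                        return (Savepoint) Proxy.newProxyInstance(
                                Savepoint.class.getClassLoader(),
                                new Class<?>[]{Savepoint.class},
                                (p2, m2, a2) -> m2.getName().equals("getSavepointName")
                                        ? "sp01" : 1);
                    }
                    if (method.getName().equals("rollback")) {
                        lastRollback = (args == null) ? "full" : "to savepoint";
                    }
                    return null;
                });
    }

    // Pattern: set a savepoint before the risky statement; if it fails, roll
    // back to the savepoint so the surrounding transaction stays usable.
    static void tryRisky(Connection con, Runnable risky) throws SQLException {
        Savepoint sp = con.setSavepoint("sp01");
        try {
            risky.run();
        } catch (RuntimeException e) {
            con.rollback(sp);  // only the failed statement's work is undone
        }
    }

    public static void main(String[] args) {
        try {
            tryRisky(fakeConnection(), () -> { throw new RuntimeException("bad sql"); });
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
        System.out.println(lastRollback);  // to savepoint
    }
}
```

With a real PostgreSQL connection the same shape applies: con.setSavepoint() before stmt.execute(sql), con.rollback(sp) in the catch block, and only then run the diagnostic query.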
I am trying to deal with multithreading in Java.
I have read many articles and questions (here on Stack Overflow) but didn't find any clear examples of how to use it.
I have a Unique_Numbers table in an HSQLDB database. There are 2 columns: NUMBER and QTY.
My task is to check whether a number exists, increase its QTY if it does, and insert the number if it doesn't.
So, here is what I've got.
This is my configuration of Database
private final ComboPooledDataSource dataSource;
public Database(String url, String userName, String password) throws PropertyVetoException {
dataSource = new ComboPooledDataSource();
dataSource.setDriverClass("org.hsqldb.jdbcDriver");
dataSource.setJdbcUrl(url);
dataSource.setUser(userName);
dataSource.setPassword(password);
dataSource.setMaxPoolSize(10);
dataSource.setMaxStatements(180);
dataSource.setMinPoolSize(5);
dataSource.setAcquireIncrement(5);
}
This is my logic:
public void insertRow(String number) throws SQLException {
int cnt = getCount(number);
if (cnt == 0) {
insert(number);
} else if (cnt > 0) {
update(number);
}
}
Get count of number in the table
private int getCount(String number) {
int cnt = 0;
String sql = "select count(number) as cnt from \"PUBLIC\".UNIQUE_NUMBER where number='" + number + "'";
try {
Statement sta;
try (Connection connection = dataSource.getConnection()) {
sta = connection.createStatement();
ResultSet rs = sta.executeQuery(sql);
if (rs.next()) {
cnt = rs.getInt("cnt");
}
}
sta.close();
} catch (Exception e) {
LOGGER.error("error select cnt by number" + e.toString());
}
return cnt;
}
Insert and update
private boolean insert(String number) throws SQLException {
String sql = "insert into \"PUBLIC\".UNIQUE_NUMBER (number, qty) values(?, ?)";
try (Connection connection = dataSource.getConnection()) {
connection.setAutoCommit(false);
try (PreparedStatement ps = connection.prepareStatement(sql)) {
ps.setString(1, number);
ps.setInt(2, 0);
ps.addBatch();
ps.executeBatch();
try {
connection.commit();
} catch (Exception e) {
connection.rollback();
LOGGER.error(e.toString());
return false;
}
}
}
return true;
}
private boolean update(String number) throws SQLException {
String sql = "update \"PUBLIC\".UNIQUE_NUMBER set (qty) = (?) where number = ?";
int qty = selectQtyByNumber(number) + 1;
try (Connection connection = dataSource.getConnection()) {
connection.setAutoCommit(false);
try (PreparedStatement ps = connection.prepareStatement(sql)) {
ps.setInt(1, qty);
ps.setString(2, number);
ps.executeUpdate();
try {
connection.commit();
} catch (Exception e) {
connection.rollback();
LOGGER.error(e.toString());
return false;
}
}
}
return true;
}
As I read, I must use a connection pool. It is important to give each thread its own connection.
When I start my application, I get a constraint exception or a rollback exception: serialization failed.
What am I doing wrong?
Here are my logs:
[INFO] [generate38] ERROR se.homework.hwbs.tasks.un.server.threads.InsertRowThread - exception while inserting numberintegrity constraint violation: check constraint; SYS_CT_10114 table: UNIQUE_NUMBER
[INFO] [generate38] ERROR se.homework.hwbs.tasks.un.server.database.Database - error select cnt by number java.sql.SQLTransactionRollbackException: transaction rollback: serialization failure
[INFO] [generate38] ERROR se.homework.hwbs.tasks.un.server.threads.InsertRowThread - exception while inserting numbertransaction rollback: serialization failure
[INFO] [generate38] ERROR se.homework.hwbs.tasks.un.server.database.Database - error select cnt by number java.sql.SQLTransactionRollbackException: transactionrollback: serialization failure
the non-transactional way
Do the increment first:
update UNIQUE_NUMBER set qty = qty + 1 where number = ?
Check whether it updated any row, and insert the number if it didn't:
try (PreparedStatement update = connection.prepareStatement(
        "update UNIQUE_NUMBER set qty = qty + 1 where number = ?")) {
    update.setString(1, number);
    int rowsMatched = update.executeUpdate();
    if (rowsMatched == 0) {
        try (PreparedStatement insert = connection.prepareStatement(
                "insert into UNIQUE_NUMBER (number, qty) values (?, 0)")) {
            insert.setString(1, number);
            insert.executeUpdate();
        } catch (SQLException e) {
            // the insert will fail if another thread has already inserted
            // the same number; if that's the case, increment instead
            if (isCauseUniqueConstraint(e)) {
                update.setString(1, number);
                update.executeUpdate();
            } else {
                throw e;
            }
        }
    }
}
No transaction handling (setAutoCommit(false), commit() or rollback()) required.
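The isCauseUniqueConstraint(e) helper above is left undefined; a minimal sketch might check SQLState codes (class "23" is "integrity constraint violation" in the SQL standard, and "23505" is the unique-violation code used by HSQLDB and PostgreSQL, among others). The helper name and the exact codes to match are assumptions you should verify against your driver:

```java
import java.sql.SQLException;

public class UniqueConstraintCheck {
    // Heuristic: treat any SQLState in class "23" (integrity constraint
    // violation) as a constraint failure. Walk the cause chain in case the
    // driver or a framework wraps the original SQLException.
    static boolean isCauseUniqueConstraint(SQLException e) {
        for (Throwable t = e; t != null; t = t.getCause()) {
            if (t instanceof SQLException) {
                String state = ((SQLException) t).getSQLState();
                if (state != null && state.startsWith("23")) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isCauseUniqueConstraint(
                new SQLException("duplicate key", "23505")));  // true
        System.out.println(isCauseUniqueConstraint(
                new SQLException("syntax error", "42601")));   // false
    }
}
```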
the transactional way
If you still want to do this in a transactional way, you need to do all steps within a single transaction, like @EJP suggested:
connection.setAutoCommit(false);
// check if number exists
// increment if it does
// insert if it doesn't
// commit, rollback & repeat in case of error
connection.setAutoCommit(true);
Set auto commit back to true if this code shares the connection pool with other code (as that's the default state others will expect the connection to be in) or make it clear that connections in the pool will always be in transactional mode.
In your code, getCount will sometimes get a connection in auto commit mode (first use) and sometimes get a connection in transactional mode (reused after insert and/or update) - that's why you see rollback exceptions in getCount.
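The "set auto-commit back to true" advice amounts to a small try/finally pattern. Here is a sketch of it; the Connection is an illustrative dynamic proxy that only tracks the auto-commit flag, so the example runs without a pool or database:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;

public class AutoCommitResetDemo {
    // Pooled connections are expected back in auto-commit mode (the default).
    static boolean autoCommit = true;

    static Connection fakeConnection() {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    switch (method.getName()) {
                        case "setAutoCommit": autoCommit = (Boolean) args[0]; return null;
                        case "getAutoCommit": return autoCommit;
                        default:              return null;  // commit/rollback are void
                    }
                });
    }

    // Run work in a transaction and always hand the connection back in
    // auto-commit mode, whether the work committed or rolled back.
    static void transactional(Connection con, Runnable work) throws SQLException {
        con.setAutoCommit(false);
        try {
            work.run();
            con.commit();
        } catch (RuntimeException e) {
            con.rollback();
        } finally {
            con.setAutoCommit(true);  // restore the state the pool expects
        }
    }

    public static void main(String[] args) {
        try {
            transactional(fakeConnection(), () -> { throw new RuntimeException("boom"); });
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
        System.out.println(autoCommit);  // true, even though the work failed
    }
}
```

The finally block is what prevents the "sometimes auto-commit, sometimes transactional" mix that getCount was seeing.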
I am practicing JDBC batch processing and I am getting errors:
error 1: Unsupported feature
error 2: Execute cannot be empty or null
The property file includes:
itemsdao.updateBookName = Update Books set bookname = ? where books.id = ?
itemsdao.updateAuthorName = Update books set authorname = ? where books.id = ?
I know I can execute both DML statements in one update, but I am practicing batch processing in JDBC.
Below is my method
public void update(Item item) {
String query = null;
try {
connection = DbConnector.getConnection();
property = SqlPropertiesLoader.getProperties("dml.properties");
connection.setAutoCommit(false);
if ( property == null )
{
Logging.log.debug("dml.properties does not exist. Check property loader or file name is spelled right");
return;
}
query = property.getProperty("itemsdao.updateBookName");
statement = connection.prepareStatement(query);
statement.setString(1, item.getBookName());
statement.setInt(2, item.getId());
statement.addBatch(query);
query = property.getProperty("itemsdao.updateAuthorName");
statement = connection.prepareStatement(query);
statement.setString(1, item.getAuthorName());
statement.setInt(2, item.getId());
statement.addBatch(query);
statement.executeBatch();
connection.commit();
}catch (ClassNotFoundException e) {
Logging.log.error("Connection class does not exist", e);
}
catch (SQLException e) {
Logging.log.error("Violating PK constraint",e);
}
//helper class th
finally {
DbUtil.close(connection);
DbUtil.closePreparedStatement(statement);
}
}
You are mixing together methods of Statement and PreparedStatement classes:
addBatch(String sql) belongs to Statement and cannot be called on a PreparedStatement or CallableStatement;
addBatch() is to be used with PreparedStatement (as your tutorial shows).
Oracle implements both its own and the standard (JDBC 2.0) batch processing. From the Standard Update Batching docs:
In Oracle JDBC applications, update batching is intended for use with
prepared statements that are being processed repeatedly with different
sets of bind values.
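Putting the fix together: prepare each statement once, bind values, and queue them with the no-arg addBatch(). A sketch of the corrected shape, using an illustrative stand-in PreparedStatement (a dynamic proxy that just records batch calls, since there is no database here):

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class PreparedBatchDemo {
    static final List<String> recorded = new ArrayList<>();

    // Stand-in PreparedStatement that records batch-related calls and
    // rejects the Statement-only addBatch(String) overload.
    static PreparedStatement fakeStatement() {
        return (PreparedStatement) Proxy.newProxyInstance(
                PreparedStatement.class.getClassLoader(),
                new Class<?>[]{PreparedStatement.class},
                (proxy, method, args) -> {
                    switch (method.getName()) {
                        case "addBatch":
                            if (args != null) {  // addBatch(String sql)
                                throw new SQLException("addBatch(String) not allowed on PreparedStatement");
                            }
                            recorded.add("addBatch");
                            return null;
                        case "executeBatch":
                            recorded.add("executeBatch");
                            return new int[0];
                        default:
                            return null;  // setString/setInt etc. are void
                    }
                });
    }

    // Correct use: one prepared statement, processed repeatedly with
    // different bind values, addBatch() with no argument.
    static void updateBookNames(PreparedStatement ps, List<String> names, int id)
            throws SQLException {
        for (String name : names) {
            ps.setString(1, name);
            ps.setInt(2, id);
            ps.addBatch();  // no-arg form: queues the current bind values
        }
        ps.executeBatch();
    }

    public static void main(String[] args) {
        try {
            updateBookNames(fakeStatement(), List.of("Dune", "Emma"), 7);
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
        System.out.println(recorded);  // [addBatch, addBatch, executeBatch]
    }
}
```

For the two different update statements in dml.properties, each would get its own PreparedStatement and its own executeBatch() call; batching only groups executions of the same SQL text.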