I have a method that can be described with the following steps:
Insert rows into temporary table 1.
Insert rows into temporary table 2.
Insert (inner join of table 1 + table 2) into temporary table 3.
Select rows of temporary table 3.
The steps are executed sequentially. However, the method is slow, and I want to parallelize step 1 and step 2, because they are independent. It is important to know that the three temporary tables are declared with the clause "ON COMMIT DELETE ROWS", so all the steps must be performed in a single transaction.
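For reference, ON COMMIT DELETE ROWS temporary tables are declared along these lines (Oracle/PostgreSQL-style syntax; the column lists here are invented for illustration):

```sql
-- Global temporary tables: rows are private to the session
-- and vanish at COMMIT, which is why everything must run
-- inside one transaction.
CREATE GLOBAL TEMPORARY TABLE table1 (name VARCHAR2(100)) ON COMMIT DELETE ROWS;
CREATE GLOBAL TEMPORARY TABLE table2 (name VARCHAR2(100)) ON COMMIT DELETE ROWS;
CREATE GLOBAL TEMPORARY TABLE table3 (name VARCHAR2(100)) ON COMMIT DELETE ROWS;
```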
private void temporaryTables() {
    String st1 = "insert into table1(name) values('joe')";
    String st2 = "insert into table2(name) values('foo')";
    jdbcTemplate.update(st1);
    jdbcTemplate.update(st2);
    // Parallel attempt:
    // Arrays.asList(st1, st2).parallelStream().forEach(x -> jdbcTemplate.update(x));
    // If I use the parallel stream and then select from both tables, one table is empty.
}
@Transactional
public List<Response> method() {
temporaryTables();
return jdbcTemplate.query(SELECT_TABLE_3, new BeanPropertyRowMapper<>(Response.class));
}
If I uncomment the parallel code, it doesn't work as expected: only the statement run on the caller thread joins the transaction. The other thread does not execute in the same transaction, and because of that, step 3 fails because one temporary table is empty.
I also tried raw JDBC transactions, but I can't share the Connection object across threads because access to it is synchronized.
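I believe the underlying reason is that Spring keeps transactional resources (such as the Connection) in a ThreadLocal bound to the calling thread, so statements issued from parallel-stream worker threads run outside my transaction. A minimal stand-alone sketch of that mechanism (plain Java, not Spring's actual classes):

```java
// Sketch: a ThreadLocal-bound "resource" is invisible to other threads,
// which is exactly why a parallel stream's worker threads do not see the
// caller's transaction-bound Connection.
public class ThreadLocalDemo {
    static final ThreadLocal<String> TX_RESOURCE = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        TX_RESOURCE.set("caller-connection");
        System.out.println("caller sees: " + TX_RESOURCE.get());

        final String[] seenByWorker = new String[1];
        Thread worker = new Thread(() -> seenByWorker[0] = TX_RESOURCE.get());
        worker.start();
        worker.join();
        System.out.println("worker sees: " + seenByWorker[0]); // null
    }
}
```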
How can I solve this problem?
I would like to create a batch delete, something like:
DELETE t WHERE t.my_attribute = ?
First try was:
private void deleteRecord() {
    // loop
    final MyRecord myRecord = new MyRecord();
    myRecord.setMyAttribute(1234);
    getDslContext().batchDelete(myRecord).execute();
}
But here the generated SQL always contains the primary key instead of my attribute.
My second try was to create a delete statement with a bind value, but I found no way to create the WHERE clause with a ?:
//loop
getDslContext().delete( MY_RECORD ).where( ???)
.bind( 12234 );
Can anybody help me further?
The DELETE statement itself
Just add your comparison predicate as you would in SQL:
getDslContext()
.delete(T)
.where(T.MY_ATTRIBUTE.eq(12234))
.execute();
This is assuming you are using the code generator, so you can static import your com.example.generated.Tables.T table reference.
Batching that
You have two options of batching such statements in jOOQ:
1. Using the explicit batch API
As explained here, create a query with a dummy bind value as I've shown above, but don't execute it directly, use the Batch API instead:
// Assuming these are your input attributes
List<Integer> attributes = ...
Query query = getDslContext().delete(T).where(T.MY_ATTRIBUTE.eq(0));
getDslContext()
.batch(query)
.bind(attributes
.stream().map(a -> new Object[] { a }).toArray(Object[][]::new)
).execute();
2. Collect individual executions in a batched connection
You can always use the convenient batched collection in jOOQ to transparently collect executed SQL and delay it into a batch:
getDslContext().batched(c -> {
    for (Integer attribute : attributes)
        c.dsl()
         .delete(T)
         .where(T.MY_ATTRIBUTE.eq(attribute))
         .execute(); // Doesn't execute the query yet
}); // Now the entire batch is executed
In the latter case, the SQL string might be re-generated for every single execution, so the former is probably better for simple batches.
Bulk execution
However, why batch when you can run a single query? Just do this, perhaps?
getDslContext()
.delete(T)
.where(T.MY_ATTRIBUTE.in(attributes))
.execute();
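One caveat with the single-query approach: if attributes can grow large, some databases cap IN-list sizes (Oracle famously at 1,000 elements). A small helper to split the list into chunks before issuing one delete per chunk (the Chunker class is my own illustration, not part of jOOQ):

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split a list into consecutive sublists of at most `size` elements.
    public static <T> List<List<T>> chunks(List<T> list, int size) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < list.size(); i += size) {
            result.add(new ArrayList<>(list.subList(i, Math.min(i + size, list.size()))));
        }
        return result;
    }
}
```

You would then run the delete once per chunk, e.g. `for (List<Integer> chunk : Chunker.chunks(attributes, 1000)) { ... .where(T.MY_ATTRIBUTE.in(chunk)).execute(); }`.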
I am running JUnit tests on an AM.
The thing is that, in some cases, the tests change attribute values in a row, and sometimes they even have to delete a whole row from the table.
How can I restore the row programmatically in Java at the end of each test case? I don't want to change the data in the DB.
Thanks!
Use MERGE for specific rows; it will update or insert records as needed. Example:
MERGE INTO bonuses D
USING (SELECT employee_id, salary, department_id
       FROM employees
       WHERE department_id = 80) S
ON (D.employee_id = S.employee_id)
WHEN MATCHED THEN UPDATE
  SET D.bonus = D.bonus + S.salary * .01
  DELETE WHERE (S.salary > 8000)
WHEN NOT MATCHED THEN INSERT (D.employee_id, D.bonus)
  VALUES (S.employee_id, S.salary * .01)
  WHERE (S.salary <= 8000);
In Java, call it with a PreparedStatement, for example:
preparedStatement = dbConnection.prepareStatement("Merge ..");
Another option, which isn't for use in production, is Flashback, which lets you create a restore point you can return to.
The scenario is simple.
I have a somewhat large MySQL database containing two tables:
-- Table 1
id (primary key) | some other columns without constraints
-----------------+--------------------------------------
1 | foo
2 | bar
3 | foobar
... | ...
-- Table 2
id_src | id_trg | some other columns without constraints
-------+--------+---------------------------------------
1 | 2 | ...
1 | 3 | ...
2 | 1 | ...
2 | 3 | ...
2 | 5 | ...
...
On table1, only id is a primary key. This table contains about 12M entries.
On table2, id_src and id_trg together form the primary key; both have foreign key constraints on table1's id, with ON DELETE CASCADE enabled. This table contains about 110M entries.
OK, now all I'm doing is creating a list of ids that I want to remove from table1 and then executing a simple DELETE FROM table1 WHERE id IN (<the list of ids>);
As you may have guessed, the latter deletes the corresponding rows from table2 as well. So far so good, but the problem is that when I run this in a multi-threaded environment, I get many deadlocks!
A few notes:
There is no other process running at the same time, nor will there be (for the time being)
I want this to be fast! I'm using about 24 threads (if that makes any difference to the answer)
I have already tried almost all transaction isolation levels (except TRANSACTION_NONE): Java sql connection transaction isolation
I don't think ordering/sorting the ids would help!
I have already tried SELECT ... FOR UPDATE, but a simple DELETE then takes up to 30 seconds (so there is no point in using it):
DELETE FROM table1
WHERE id IN (
SELECT id FROM (
SELECT * FROM table1
WHERE id='some_id'
FOR UPDATE) AS x);
How can I fix this?
I would appreciate any help and thanks in advance :)
Edit:
Using InnoDB engine
On a single thread this process would take a dozen hours, maybe even a whole day, but I'm aiming for a few hours!
I'm already using a connection pool manager: java.util.concurrent
For explanation on double nested SELECTs please refer to MySQL can’t specify target table for update in FROM clause
The list to be deleted from the DB may contain a couple of million entries in total, divided into chunks of 200
I use the FOR UPDATE clause because I've heard that it locks a single row instead of the whole table
The app uses Spring's batchUpdate(String sqlQuery) method, so the transactions are managed automatically
All ids are indexed, and the ids are unique, 50 chars max!
ON DELETE CASCADE on id_src and id_trg (each separately) means that every delete of table1 id=x leads to deletes on table2 where id_src=x and id_trg=x
Some code as requested:
public void write(List data) {
    try {
        List<String> idsToDelete = getIdsToDelete();
        String query = "DELETE FROM table1 WHERE id IN (" + idsToDelete + ")";
        mysqlJdbcTemplate.getJdbcTemplate().batchUpdate(query);
    } catch (Exception e) {
        LOG.error(e);
    }
}
and mysqlJdbcTemplate is just an abstract class that extends JdbcDaoSupport.
First of all, your simple delete query with a list of ids should not cause problems as long as you keep the list small (up to about 1,000 ids, with the number of affected child rows also staying in that range rather than 10,000 or more). If you pass 50,000 ids or more, it can create locking issues.
To avoid deadlocks, you can follow the approach below (assuming the bulk deletion will not be part of the production system):
Step 1: Fetch all ids with a select query and keep them in a cursor.
Step 2: Delete the ids stored in the cursor one by one in a stored procedure.
Note: To check why the deletion acquires locks, you have to check several things: how many ids you are passing, the transaction isolation level set at the DB level, your MySQL configuration in my.cnf, etc.
It may be dangerous to delete many (> 10,000) parent records each having child records deleted by cascade, because the more records you delete at a time, the higher the chance of a lock conflict leading to a deadlock or rollback.
If it is acceptable (meaning you can make a direct JDBC connection to the database), you should (no threading involved here):
compute the list of ids to delete
delete them in batches (between 10 and 100 a priori), committing every 100 or 1000 records
As the heavy lifting is on the database side, I strongly doubt that threading will help here. If you want to try it, I would recommend:
one single thread (with a dedicated database connection) computing the list of ids to delete and feeding them into a synchronized queue
a small number of threads (4, maybe 8), each with its own database connection, that:
use a prepared DELETE FROM table1 WHERE id = ? in batches
take ids from the queue and prepare the batches
send a batch to the database every 10 or 100 records
do a commit every 10 or 100 batches
I cannot imagine that the whole process could take more than several minutes.
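The threading layout above can be sketched with a BlockingQueue. In this skeleton the database calls are replaced by a counter so it stays self-contained; the connection handling, the prepared "DELETE FROM table1 WHERE id = ?", executeBatch() and commit are the parts you would fill in:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DeleteCoordinator {
    static final int BATCH_SIZE = 100;

    // Workers drain ids from the queue in batches of BATCH_SIZE. The
    // counter stands in for executing the prepared DELETE batch and
    // committing on the worker's own connection.
    public static int drain(BlockingQueue<String> queue, int workers)
            throws InterruptedException {
        AtomicInteger deleted = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                List<String> batch = new ArrayList<>();
                String id;
                while ((id = queue.poll()) != null) {
                    batch.add(id);
                    if (batch.size() == BATCH_SIZE) {
                        deleted.addAndGet(batch.size()); // executeBatch() + commit
                        batch.clear();
                    }
                }
                deleted.addAndGet(batch.size()); // flush the final partial batch
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return deleted.get();
    }
}
```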
After some further reading, it looks like I was used to old systems and my numbers are really conservative.
OK, here's what I did. It might not actually avoid deadlocks, but it was my only option at the time.
This solution is actually a way of handling MySQL Deadlocks using Spring.
Catch and retry Deadlocks:
public void write(List data) {
    try {
        List<String> idsToDelete = getIdsToDelete();
        String query = "DELETE FROM table1 WHERE id IN (" + idsToDelete + ")";
        try {
            mysqlJdbcTemplate.getJdbcTemplate().batchUpdate(query);
        } catch (org.springframework.dao.DeadlockLoserDataAccessException e) {
            LOG.info("Caught DEADLOCK : " + e);
            retryDeadlock(new String[] { query }); // Retry them!
        }
    } catch (Exception e) {
        LOG.error(e);
    }
}
public void retryDeadlock(final String[] sqlQuery) {
RetryTemplate template = new RetryTemplate();
TimeoutRetryPolicy policy = new TimeoutRetryPolicy();
policy.setTimeout(30000L);
template.setRetryPolicy(policy);
try {
template.execute(new RetryCallback<int[]>() {
public int[] doWithRetry(RetryContext context) {
LOG.info("Retrying DEADLOCK " + context);
return mysqlJdbcTemplate.getJdbcTemplate().batchUpdate(sqlQuery);
}
});
} catch (Exception e1) {
e1.printStackTrace();
}
}
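If you'd rather not depend on Spring Retry, the same idea is a few lines of plain Java. In this sketch the exception type and the callable body are placeholders; in real code the catch would target DeadlockLoserDataAccessException and the callable would wrap the batchUpdate call:

```java
import java.util.concurrent.Callable;

public class DeadlockRetry {
    // Retry `action` up to maxAttempts times, sleeping between attempts;
    // rethrows the last failure if all attempts are exhausted.
    public static <T> T retry(Callable<T> action, int maxAttempts, long backoffMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) { // real code: catch DeadlockLoserDataAccessException
                last = e;
                Thread.sleep(backoffMs * attempt); // linear backoff
            }
        }
        throw last;
    }
}
```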
Another solution could be to use Spring Batch's multi-step mechanism: split the DELETE into three steps, with the first step deleting the blocking column and the other two steps deleting the remaining columns respectively.
Step1: Delete id_trg from child table;
Step2: Delete id_src from child table;
Step3: Delete id from parent table;
Of course, the last two steps could be merged into one, but in that case two distinct ItemWriters would be needed!
How can I delete a list of records with a single query in jOOQ? Is this possible with the jOOQ API, or do I have to delete records one by one, firing a query for each?
Yes, you can do that!
Using a batch statement
You can batch-delete records through
DSLContext.batchDelete(UpdatableRecord...) or
DSLContext.batchDelete(Collection<? extends UpdatableRecord<?>>)
Example:
MyTableRecord record1 = //...
MyTableRecord record2 = //...
DSL.using(configuration).batchDelete(record1, record2).execute();
This will generate a JDBC batch statement, which can be executed much faster than single deletes.
Using a single DELETE statement with an IN predicate
Another option would be to create a single DELETE statement as such:
DSLContext create = DSL.using(configuration);
// This intermediate result is only used to extract ID values later on
Result<MyTableRecord> result = create.newResult(MY_TABLE);
result.add(record1);
result.add(record2);
create.delete(MY_TABLE)
      .where(MY_TABLE.ID.in(result.getValues(MY_TABLE.ID)))
      .execute();
I am using Apache DBCP for connection pooling and iBATIS for database transactions, with Spring support. The scenario I am trying to work out is:
create a BasicDataSource with a maximum of 5 initial connections
Create a temp table
Write bulk of records in temp table.
Write the records onto actual table.
Delete the temp table
The issue is that steps 2-5 run in multi-threaded mode. Also, since I am using connection pooling, I cannot guarantee that steps 2, 3, 4 and 5 will get the same connection object from the pool, and hence I see in steps 3/4/5 that temp table XYZ is not found.
How can I guarantee that the same connection is reused across the four operations? Here's the code for steps 3 and 4. I am not considering a global temp table.
@Transactional
public final void insertInBulk(final List<Rows> rows) {
    getSqlMapClientTemplate().execute(new SqlMapClientCallback<Object>() {
        public Object doInSqlMapClient(SqlMapExecutor exe) throws SQLException {
            exe.startBatch();
            for (Rows row : rows) {
                for (Object item : row.getMultiRows()) {
                    exe.insert("##TEMPTABLE.insert", item);
                }
            }
            exe.executeBatch();
            return null;
        }
    });
}
public void copyValuesToActualTable() {
    final Map<String, Object> procInput = new HashMap<String, Object>();
    procInput.put("tableName", "MYTABLE");
    getSqlMapClientTemplate().queryForObject("##TEMPTABLE.NAME", procInput);
}
I am thinking of improving the design further by creating the temp table just once, when the connection is initialized, and truncating it instead of dropping it, but that is for later; I would still have the issue with steps 3 and 4. The reason for the temp table is that I don't have permission to modify the actual table directly, only via the temp table.
I would actually create the temp table (step 2) in the main thread, then break the workload of inserting records into the temp table (steps 3 and 4) into chunks and spawn a thread for each chunk.
JDK 7 provides the ForkJoin framework for this step, which you may find interesting.
Once the insertion into the temp and actual tables is done, delete the temp table, again in the main thread.
This way, you don't need to ensure that the same connection is used everywhere. You can use different connection objects to the same database and perform steps 3 & 4 in parallel.
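The chunk-per-thread layout can be sketched as below. The task body is a placeholder: in real code each task would borrow its own connection from the pool and run the iBATIS batch insert for its chunk; here it just reports the chunk size so the skeleton stays self-contained:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkedInsert {
    // Split `rows` into roughly `chunkCount` parts and process them in
    // parallel. The Callable stands in for "insert this chunk into the
    // temp table on the task's own connection".
    public static int insertAll(List<String> rows, int chunkCount) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(chunkCount);
        List<Callable<Integer>> tasks = new ArrayList<>();
        int per = (rows.size() + chunkCount - 1) / chunkCount; // ceiling division
        for (int i = 0; i < rows.size(); i += per) {
            final List<String> chunk = rows.subList(i, Math.min(i + per, rows.size()));
            tasks.add(() -> chunk.size()); // placeholder for the real batch insert
        }
        int total = 0;
        for (Future<Integer> f : pool.invokeAll(tasks)) total += f.get();
        pool.shutdown();
        return total;
    }
}
```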
Hope this helps.