I would like to create a batch delete, something like:
DELETE FROM t WHERE t.my_attribute = ?
My first try was:
private void deleteRecord() {
    // loop
    final MyRecord myRecord = new MyRecord();
    myRecord.setMyAttribute(1234);
    getDslContext().batchDelete(myRecord).execute();
}
But here the generated SQL always uses the primary key instead of my attribute.
My second try was to create a delete statement with a bind value, but I found no way to create a WHERE clause with a ? placeholder:
// loop
getDslContext().delete(MY_RECORD).where( ??? )
    .bind(12234);
Can anybody help me further?
The DELETE statement itself
Just add your comparison predicate as you would in SQL:
getDslContext()
.delete(T)
.where(T.MY_ATTRIBUTE.eq(12234))
.execute();
This assumes you are using the code generator, so you can statically import your com.example.generated.Tables.T table reference.
Batching that
You have two options for batching such statements in jOOQ:
1. Using the explicit batch API
As explained here, create a query with a dummy bind value as shown above, but don't execute it directly; use the batch API instead:
// Assuming these are your input attributes
List<Integer> attributes = ...

Query query = getDslContext().delete(T).where(T.MY_ATTRIBUTE.eq(0));

getDslContext()
    .batch(query)
    .bind(attributes
        .stream().map(a -> new Object[] { a }).toArray(Object[][]::new)
    ).execute();
2. Collect individual executions in a batched connection
You can always use jOOQ's convenient batched() API to transparently collect executed SQL and delay it into a batch:
getDslContext().batched(c -> {
    for (Integer attribute : attributes)
        c.dsl()
         .delete(T)
         .where(T.MY_ATTRIBUTE.eq(attribute))
         .execute(); // Doesn't execute the query yet
}); // Now the entire batch is executed
In the latter case, the SQL string might be re-generated for every single execution, so the former is probably better for simple batches.
Bulk execution
However, why batch when you can run a single query? Just do this, perhaps?
getDslContext()
.delete(T)
.where(T.MY_ATTRIBUTE.in(attributes))
.execute();
Related
I have a table with the following columns:
Task_Track
    TaskID  Number (PK, auto-generated sequence)
    TaskCd  Varchar2
    RefCd   Varchar2
    RefID   Varchar2
    Params  Varchar2
    ... etc.
I am working on a scenario where I run a select query on this table and get the result set.
Select * from Task_Track where RefCd = ? and RefID = ? and TaskCd = ?;
If I don't have any results, I will insert a new task with the RefCd, RefID, TaskCd, and Params values. Params is usually a person_id related to the task.
If I get a result set, I will append the new param and update it.
if (resultSet != null && resultSet.length() > 0)
    // update params logic
else
    // insert new task logic
This is working as expected in a sequential run.
But when I have two parallel queues running that get the same RefCd, RefID, and TaskCd values at the same time, the first queue finds the result set and goes on to perform the update logic as expected, while the second queue cannot find the result and goes into the insert logic.
From what I understand, even if the first queue has locked the row for the update, the second queue should have no problem reading it; it should only fail while updating, because of the lock, if the first queue hasn't released it yet. But my read itself is failing: it doesn't throw any exception, but returns an empty result set (length = 0), because of which it moves into the insert logic.
Is it possible that the read is affected by the update happening in parallel? If so how should I resolve it?
Note: I am using Oracle 11g and Java 8 with WebSphere 9.
Thank you
You need to cache the ResultSet before making another request. Try this:
CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
crs.populate(myResultSet);
I want to fetch the 10 latest records from the BATCH_JOB_EXECUTION table joined with the BATCH_JOB_INSTANCE table.
So how can I access these tables?
In this application I have used Spring Data JPA. It's another application which uses Spring Batch and created these tables. In other words, I would just like to run a JOIN query and map it directly to my custom object with just the necessary fields. As far as possible, I would like to avoid making separate models for the two tables. But I don't know the best approach here.
If you want to do it from Spring Batch code you need to use JobExplorer and apply filters on either START_TIME or END_TIME. Alternatively, just send an SQL query with your desired JOIN to the DB using JDBC. The DDLs of the metadata tables can be found here.
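For the plain JDBC route, a minimal sketch might look like the following. It assumes a Spring JdbcTemplate is available and uses a hypothetical JobRunSummary DTO holding just the fields you need; the table and column names are the standard Spring Batch metadata ones, but verify them against the DDL for your schema.
// Hypothetical DTO with only the necessary fields
public class JobRunSummary {
    public final String jobName;
    public final Timestamp startTime;
    public final Timestamp endTime;
    public final String status;

    public JobRunSummary(String jobName, Timestamp startTime, Timestamp endTime, String status) {
        this.jobName = jobName;
        this.startTime = startTime;
        this.endTime = endTime;
        this.status = status;
    }
}

// Fetch the 10 latest executions joined with their job instances
List<JobRunSummary> latest = jdbcTemplate.query(
    "SELECT i.JOB_NAME, e.START_TIME, e.END_TIME, e.STATUS " +
    "FROM BATCH_JOB_EXECUTION e " +
    "JOIN BATCH_JOB_INSTANCE i ON e.JOB_INSTANCE_ID = i.JOB_INSTANCE_ID " +
    "ORDER BY e.START_TIME DESC " +
    "FETCH FIRST 10 ROWS ONLY",                 // use LIMIT 10 on MySQL/older PostgreSQL
    (rs, rowNum) -> new JobRunSummary(
        rs.getString("JOB_NAME"),
        rs.getTimestamp("START_TIME"),
        rs.getTimestamp("END_TIME"),
        rs.getString("STATUS")));
This keeps everything in one query and one custom object, without modelling the two tables separately.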
EDIT
If you want to try to do it in Spring Batch, I guess you need to iterate through the JobExecutions and find the ones that interest you, then do your thing, something like:
List<JobInstance> jobInstances = jobExplorer.getJobInstances(jobName);
for (JobInstance jobInstance : jobInstances) {
    List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstance);
    for (JobExecution jobExecution : jobExecutions) {
        if (/* jobExecution.getWhatever()... */) {
            // do your thing...
        }
    }
}
Good Luck!
Since JobExplorer no longer has the .getJobInstances(jobName) method, I have done this (an example with BatchStatus as the condition), adapted with streams:
List<JobInstance> lastExecutedJobs = jobExplorer.getJobInstances(jobName, 0, Integer.MAX_VALUE);
Optional<JobExecution> jobExecution = lastExecutedJobs
    .stream()
    .map(jobExplorer::getJobExecutions)
    .flatMap(jes -> jes.stream())
    .filter(je -> BatchStatus.COMPLETED.equals(je.getStatus()))
    .findFirst();
To return N elements, you could use other capabilities of streams (limit, max, collectors, ...).
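For example, to get the 10 latest executions, a rough sketch along these lines should work, assuming "latest" means ordering by JobExecution.getStartTime() (imports: java.util.Comparator, java.util.stream.Collectors; jobExplorer and jobName as above):
List<JobExecution> tenLatest = jobExplorer.getJobInstances(jobName, 0, Integer.MAX_VALUE)
    .stream()
    .map(jobExplorer::getJobExecutions)
    .flatMap(List::stream)
    // newest first; executions that never started are skipped
    .filter(je -> je.getStartTime() != null)
    .sorted(Comparator.comparing(JobExecution::getStartTime).reversed())
    .limit(10)
    .collect(Collectors.toList());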
I have a Spring Batch project running in Spring Boot that is working perfectly fine. For my reader I'm using JdbcPagingItemReader with a MySqlPagingQueryProvider.
@Bean
public ItemReader<Person> reader(DataSource dataSource) {
    MySqlPagingQueryProvider provider = new MySqlPagingQueryProvider()
    provider.setSelectClause(ScoringConstants.SCORING_SELECT_STATEMENT)
    provider.setFromClause(ScoringConstants.SCORING_FROM_CLAUSE)
    provider.setSortKeys("p.id": Order.ASCENDING)

    JdbcPagingItemReader<Person> reader = new JdbcPagingItemReader<Person>()
    reader.setRowMapper(new PersonRowMapper())
    reader.setDataSource(dataSource)
    reader.setQueryProvider(provider)

    // Setting these caused the exception
    reader.setParameterValues(
        startDate: new Date() - 31,
        endDate: new Date()
    )

    reader.afterPropertiesSet()
    return reader
}
However, when I modified my query with some named parameters to replace previously hard-coded date values and set these parameter values on the reader as shown above, I get the following exception on the second page read (the first page works fine because the _id parameter isn't yet used by the paging query provider):
org.springframework.dao.InvalidDataAccessApiUsageException: No value supplied for the SQL parameter '_id': No value registered for key '_id'
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:336)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.getPreparedStatementCreator(NamedParameterJdbcTemplate.java:374)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:192)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:199)
at org.springframework.batch.item.database.JdbcPagingItemReader.doReadPage(JdbcPagingItemReader.java:218)
at org.springframework.batch.item.database.AbstractPagingItemReader.doRead(AbstractPagingItemReader.java:108)
Here is an example of the SQL, which has no WHERE clause by default. One does get created automatically when the second page is read:
select *, (select id from family f where date_created between :startDate and :endDate and f.creator_id = p.id) from person p
On the second page, the SQL is modified to the following; however, it seems that the named parameter _id was not supplied:
select *, (select id from family f where date_created between :startDate and :endDate and f.creator_id = p.id) from person p WHERE id > :_id
I'm wondering if I simply can't use the MySqlPagingQueryProvider sort keys together with additional named parameters set in JdbcPagingItemReader. If not, what is the best alternative to solving this problem? I need to be able to supply parameters to the query and also page it (vs. using the cursor). Thank you!
I solved this problem with some intense debugging. It turns out that MySqlPagingQueryProvider uses a method getSortKeysWithoutAliases() when it builds up the SQL query to run for the first page and for subsequent pages. It therefore appends and (id > :_id) instead of and (p.id > :_p.id). Later on, when the second page's sort values are created and stored in JdbcPagingItemReader's startAfterValues field, it uses the original "p.id" String and eventually puts the pair ("_p.id", 10) into the named parameter map. So when the reader tries to fill in _id in the query, no value exists for that key, because the value was registered under the key that still contains the alias.
Long story short, I had to remove the alias reference when defining my sort keys.
provider.setSortKeys("p.id": Order.ASCENDING)
had to change to the following in order for everything to work nicely together:
provider.setSortKeys("id": Order.ASCENDING)
I had the same issue and got another possible solution.
My table T has a primary key field INTERNAL_ID.
The query in JdbcPagingItemReader was like this:
SELECT INTERNAL_ID, ... FROM T WHERE ... ORDER BY INTERNAL_ID ASC
So the key is: under some conditions, the query returned no results, and then the error above (No value supplied for...) was raised.
The solution is:
Check in a Spring Batch decider element whether there are rows.
If there are, continue with the chunk: reader-processor-writer.
If there are not, go to another step (a minimal decider sketch follows below).
Please note that these are two different scenarios:
At the beginning there are rows. You get them by paging and eventually there are no more rows. This causes no problem and the decider trick is not required.
At the beginning there are no rows. Then this error is raised, and the decider solves it.
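Here is a minimal sketch of such a decider, assuming a plain JdbcTemplate is available for a quick existence check; the table name T and the status names ROWS_FOUND / NO_ROWS are placeholders you would wire into your flow (e.g. from(decider).on("ROWS_FOUND").to(chunkStep)):
// org.springframework.batch.core.job.flow.{JobExecutionDecider, FlowExecutionStatus}
public class RowsExistDecider implements JobExecutionDecider {

    private final JdbcTemplate jdbcTemplate;

    public RowsExistDecider(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        // Placeholder existence check; add the same WHERE conditions your reader uses
        Integer count = jdbcTemplate.queryForObject("SELECT COUNT(*) FROM T", Integer.class);
        return (count != null && count > 0)
            ? new FlowExecutionStatus("ROWS_FOUND")   // continue with the chunk step
            : new FlowExecutionStatus("NO_ROWS");     // go to another step
    }
}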
Hope this helps.
How can I delete a list of records with a single query in jOOQ? Is this possible with the jOOQ API, or do I have to delete the records one by one, i.e. get one record, fire a query, and so on?
Yes, you can do that!
Using a batch statement
You can batch-delete records through
DSLContext.batchDelete(UpdatableRecord...) or
DSLContext.batchDelete(Collection<? extends UpdatableRecord<?>>)
Example:
MyTableRecord record1 = //...
MyTableRecord record2 = //...
DSL.using(configuration).batchDelete(record1, record2).execute();
This will generate a JDBC batch statement, which can be executed much faster than single deletes.
Using a single DELETE statement with an IN predicate
Another option would be to create a single DELETE statement as such:
DSLContext create = DSL.using(configuration);

// This intermediate result is only used to extract ID values later on
Result<MyTableRecord> result = create.newResult(MY_TABLE);
result.add(record1);
result.add(record2);

create.delete(MY_TABLE)
      .where(MY_TABLE.ID.in(result.getValues(MY_TABLE.ID)))
      .execute();
I'm currently using Jooq for a project, but I need a way to ignore duplicate keys on insert.
I've got an array of objects I want to write into a table, but if they already exist (as determined by a composite unique index on START_TS and EVENT_TYPE), I want the insert to silently fail.
My Code looks something like this:
InsertValuesStep<MyRecord> query = fac.insertInto(MY_REC,
    MY_REC.START_TS,
    MY_REC.STOP_TS,
    MY_REC.EVENT_DATA,
    MY_REC.EVENT_TYPE,
    MY_REC.PUBLISHED_TS,
    MY_REC.MY_ID
);

for (int i = 0; i < recs.length; i++) {
    MyClass evt = recs[i];
    query.values(
        new java.sql.Date(evt.startTS.getTime()),
        (evt.stopTS == null) ? null : new java.sql.Date(evt.stopTS.getTime()),
        evt.eventData,
        evt.type.name(),
        date,
        id);
}

query.execute();
A solution like this would be ideal: https://stackoverflow.com/a/4920619/416338
I figure I need to add something like:
.onDuplicateKeyUpdate().set(MY_REC.EVENT_TYPE,MY_REC.EVENT_TYPE);
But whatever I add, it still seems to throw an error on duplicates.
Support for MySQL's INSERT IGNORE INTO syntax is on the roadmap for jOOQ 2.3.0. This had been discussed recently on the jOOQ user group. This syntax will be simulated in all other SQL dialects that support the SQL MERGE statement.
In the meantime, as a workaround, you could try to insert one record at a time.
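A rough sketch of that workaround, reusing the names from your own code (fac, MY_REC, recs, date, id) and assuming the duplicate-key violation surfaces as jOOQ's org.jooq.exception.DataAccessException, which you then ignore:
for (MyClass evt : recs) {
    try {
        fac.insertInto(MY_REC,
               MY_REC.START_TS, MY_REC.STOP_TS, MY_REC.EVENT_DATA,
               MY_REC.EVENT_TYPE, MY_REC.PUBLISHED_TS, MY_REC.MY_ID)
           .values(
               new java.sql.Date(evt.startTS.getTime()),
               (evt.stopTS == null) ? null : new java.sql.Date(evt.stopTS.getTime()),
               evt.eventData,
               evt.type.name(),
               date,
               id)
           .execute();
    }
    catch (DataAccessException e) {
        // Assumed to be the unique-index violation on (START_TS, EVENT_TYPE): ignore and continue
    }
}
This loses the performance benefit of a multi-row insert, but it gets you the "silently fail on duplicates" behaviour until INSERT IGNORE support is available.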