Is Spring Batch appropriate for my service code? - java

From a high level, my application flow looks like
A REST controller @RequestMapping is triggered by a POST request. The REST controller calls a method in a service class.
@RequestMapping(value = "/eventreports", method = RequestMethod.POST, produces = "application/json")
public @ResponseBody List<EventReports> addReportIds(@RequestBody List<Integer> reportIds) {
    List<EventReports> eventReports = railAgentCollectorServiceImpl.addReportIds(reportIds);
    return eventReports;
}
The service method calls a method in a DAO class.
@Override
public List<EventReports> addReportIds(List<Integer> reportIds) {
    List<EventReports> eventReports = eventReportsDAOImpl.listEventReportsInJsonRequest(reportIds);
    return eventReports;
}
The DAO method executes a StoredProcedureQuery against a SQL data source and returns the results as an ArrayList of domain objects. The service class passes this ArrayList of domain objects back to the REST controller, which returns it as a JSON string.
@Override
public List<EventReports> listEventReportsInJsonRequest(List<Integer> reportIds) {
    ArrayList<EventReports> erArr = new ArrayList<EventReports>();
    try {
        StoredProcedureQuery q = em.createStoredProcedureQuery("sp_get_event_reports", "eventReportsResult");
        q.registerStoredProcedureParameter("reportIds", String.class, ParameterMode.IN);
        // the procedure parameter is registered as a String, so pass the ids as a comma-delimited string
        String reportIdsParam = reportIds.stream().map(String::valueOf).collect(Collectors.joining(","));
        q.setParameter("reportIds", reportIdsParam);
        boolean isResultSet = q.execute(); // try catch here
        erArr = (ArrayList<EventReports>) q.getResultList();
    } catch (Exception e) {
        System.out.println("No event reports found for list " + reportIds);
    }
    return erArr;
}
I've been investigating integrating Spring Batch into the above pattern. I've been looking at the Spring getting started guide for batch processing here: https://spring.io/guides/gs/batch-processing/ - paying particular attention to the source code for BatchConfiguration.java. I'm uncertain whether my application is suited for Spring Batch; maybe my incomplete knowledge of Spring Batch and the various ways it can be implemented is preventing me from conceptualizing it. The BatchConfiguration.java code below suggests to me that Spring Batch may be best suited to iterating through a list of items, reading them one by one, processing them one by one, and writing them one by one - whereas my service code is based on gathering and writing a list of objects all at once.
@Bean
public FlatFileItemReader<Person> reader() {
    FlatFileItemReader<Person> reader = new FlatFileItemReader<Person>();
    reader.setResource(new ClassPathResource("sample-data.csv"));
    reader.setLineMapper(new DefaultLineMapper<Person>() {{
        setLineTokenizer(new DelimitedLineTokenizer() {{
            setNames(new String[] { "firstName", "lastName" });
        }});
        setFieldSetMapper(new BeanWrapperFieldSetMapper<Person>() {{
            setTargetType(Person.class);
        }});
    }});
    return reader;
}
@Bean
public PersonItemProcessor processor() {
    return new PersonItemProcessor();
}
@Bean
public JdbcBatchItemWriter<Person> writer() {
    JdbcBatchItemWriter<Person> writer = new JdbcBatchItemWriter<Person>();
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<Person>());
    writer.setSql("INSERT INTO people (first_name, last_name) VALUES (:firstName, :lastName)");
    writer.setDataSource(dataSource);
    return writer;
}
// end::readerwriterprocessor[]
// tag::jobstep[]
@Bean
public Job importUserJob(JobCompletionNotificationListener listener) {
    return jobBuilderFactory.get("importUserJob")
            .incrementer(new RunIdIncrementer())
            .listener(listener)
            .flow(step1())
            .end()
            .build();
}
@Bean
public Step step1() {
    return stepBuilderFactory.get("step1")
            .<Person, Person> chunk(10)
            .reader(reader())
            .processor(processor())
            .writer(writer())
            .build();
}
Is this true? Could I still take advantage of resume-ability, scheduling and synchronization provided by Spring Batch for my existing code? Any suggestions appreciated.

I think the main thing you need to consider is synchronous versus asynchronous behavior. Batch processing is used for long-running tasks, so consider whether your task is long running or not. If it is, you can use Spring Batch, and the flow becomes asynchronous: the request comes in, starts the job, and a response goes back to the user immediately.
The batch job then runs to completion and writes its result back to the database. The user will either have to poll for the result (for example with AJAX), or you may have to implement a push notification mechanism to report the state of the task and avoid polling.
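A minimal sketch of that asynchronous pattern, assuming a JobLauncher backed by an asynchronous TaskExecutor and a hypothetical eventReportsJob bean that performs the stored-procedure work; the endpoint, class and parameter names below are illustrative, not the asker's actual code:
import java.util.List;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.http.ResponseEntity;
import org.springframework.util.StringUtils;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EventReportsBatchController {

    private final JobLauncher jobLauncher; // configured with an async TaskExecutor so run() returns immediately
    private final Job eventReportsJob;     // hypothetical job that does the stored-procedure work

    public EventReportsBatchController(JobLauncher jobLauncher, Job eventReportsJob) {
        this.jobLauncher = jobLauncher;
        this.eventReportsJob = eventReportsJob;
    }

    @PostMapping("/eventreports")
    public ResponseEntity<Long> addReportIds(@RequestBody List<Integer> reportIds) throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addString("reportIds", StringUtils.collectionToCommaDelimitedString(reportIds))
                .addLong("requestedAt", System.currentTimeMillis()) // makes each request a new job instance
                .toJobParameters();
        JobExecution execution = jobLauncher.run(eventReportsJob, params);
        // respond right away; the client polls for the result using the returned id
        return ResponseEntity.accepted().body(execution.getJobId());
    }
}
The client can then poll a status endpoint (or receive a push notification) keyed by the returned id.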

It's true that a Spring Batch chunk consisting of a reader -> processor -> writer reads one item and processes one item at a time, but writes a chunk of items at once, according to the defined chunk size.
So you can send thousands of items to the writer in one go to write to storage, depending on your defined chunk size.
Having said that, a reader hands over one item at a time, but it is not required to read only one item from the source (file, DB, etc.) per call. There are readers that read a large quantity of items from the source in one go, hold them in an internal list, and hand them to the processor one by one until the list is exhausted.
One such reader is JdbcPagingItemReader: it reads a few thousand rows from the database in one go according to the configured page size (which reduces DB calls significantly), then keeps handing rows to the processor one by one; the processed outputs accumulate until the chunk size is reached and are then handed to the writer in bulk.
It may also happen that nothing off the shelf in the API fits your requirement - in that case, you will have to write your own ItemReader.
Look at the code of JdbcPagingItemReader to get ideas.
For your situation, the writer side of Spring Batch doesn't seem to be a problem at all; it already writes in bulk with just a simple configuration. You will have to feed the controller's output into a reader that works along similar lines to JdbcPagingItemReader.
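A rough sketch of that idea, assuming the report ids are passed to the job as a comma-delimited job parameter and reusing the DAO call from the question; the bean wiring and parameter name are assumptions, and eventReportsDAOImpl is assumed to be injected into the batch configuration class. It wraps the pre-fetched list in Spring Batch's ListItemReader so the chunk-oriented step hands items to the processor one by one:
@Bean
@StepScope
public ListItemReader<EventReports> eventReportsReader(
        @Value("#{jobParameters['reportIds']}") String reportIdsParam) {
    // parse the comma-delimited ids passed as a job parameter (assumed format)
    List<Integer> reportIds = Arrays.stream(reportIdsParam.split(","))
            .map(String::trim)
            .map(Integer::valueOf)
            .collect(Collectors.toList());
    // reuse the existing DAO call to fetch everything in one go,
    // then let the chunk-oriented step iterate over the list item by item
    List<EventReports> eventReports = eventReportsDAOImpl.listEventReportsInJsonRequest(reportIds);
    return new ListItemReader<>(eventReports);
}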
All I want to say is that in-memory processing is one item at a time (and that is very fast), but I/O can be done in bulk in Spring Batch (if you choose so).
Hope it helps !!

Related

Spring Batch Reader Writer transfer of data not working as expected

I have created a generic Spring Batch job for processing data and storing it into a CSV. I need some data from the reader passed into the writer, which I am trying to do using JobExecution. However, surprisingly, the code seems to call the getWriter() function before the getReader() function.
My config is given below. Could someone explain why this is happening and whether there is an alternative way to pass data from the reader to the writer?
@Bean
@StepScope
public ItemReader<Map<String, Object>> getDataReader() throws Exception {
    return springBatchReader.getReader();
}
@Bean
@StepScope
public FlatFileItemWriter<Map<String, Object>> getDataWriter() throws Exception {
    return (FlatFileItemWriter<Map<String, Object>>) springBatchWriter.getWriter();
}
@Bean
public Job SpringBatchJob(Step generateReport) throws Exception {
    return jobBuilderFactory.get("SpringBatchJob" + System.currentTimeMillis())
            .preventRestart()
            .incrementer(new RunIdIncrementer())
            .flow(generateReport)
            .end()
            .build();
}
@Bean
public Step generateReport() throws Exception {
    return stepBuilderFactory.get("generateReport")
            .<Map<String, Object>, Map<String, Object>>chunk(batchSize)
            .reader(getDataReader())
            .writer(getDataWriter())
            .build();
}
The data I want to pass from the reader to the writer is the column names for the CSV. My reader runs variable SQL queries (the query to run is passed as a command-line argument), so the result set columns are not static and vary with the given query. Providing the writer with the column names to write for that particular execution via setHeaderCallback was the rationale behind sending data from the reader to the writer.
The reader simply runs the given query and puts the data into a Map<String, Object> rather than a POJO, due to the variable nature of the data. The key of the map is the column name, and the corresponding value holds that column's data. So essentially I want the writer's setHeaderCallback to be able to access the keys of the passed map, or to pass the keys from the reader to the writer somehow.
The Writer Code is as follows:
public FlatFileItemWriter<Map<String, Object>> getWriter() throws Exception {
    String reportName = getReportName();
    saveToContext(reportName, reportPath);
    FileSystemResource resource = new FileSystemResource(String.join(File.separator, reportPath, getReportName()));
    FlatFileItemWriter<Map<String, Object>> flatFileItemWriter = new FlatFileItemWriter<>();
    flatFileItemWriter.setResource(resource);
    // NEED HELP HERE...HOW TO SET THE HEADER TO BE THE KEYS OF THE MAP
    // flatFileItemWriter.setHeaderCallback();
    flatFileItemWriter.setLineAggregator(new DelimitedLineAggregator<Map<String, Object>>() {
        {
            setDelimiter(delimiter);
            setFieldExtractor(new PassThroughFieldExtractor<>());
        }
    });
    flatFileItemWriter.afterPropertiesSet();
    return flatFileItemWriter;
}
The execution order of those methods does not matter. You should not be looking for a way to pass data from the reader to the writer using the execution context; the chunk-oriented tasklet implementation provided by Spring Batch will do that for you.
The execution context could be used to pass data from one step to another, but not from the reader to the writer within the same step.
EDIT: update answer based on comments:
Your issue is that you are calling saveToContext(reportName, reportPath); in the getWriter method. This method is called at configuration time and not at runtime.
What you really need is to provide the column names either via job parameters, or to put them in the execution context from a step, then use a step-scoped header callback configured with those headers.
You can find an example here: https://stackoverflow.com/a/56719077/5019386. This example is for the lineMapper but you can do the same for the headerCallback. If you don't want to use the job parameters approach, you can create a tasklet step that determines column names and puts them in the execution context, then configure the step-scoped header callback with those names from the execution context, something like:
@Bean
@StepScope
public FlatFileHeaderCallback headerCallback(@Value("#{jobExecutionContext['columnNames']}") String columnNames) {
    return new FlatFileHeaderCallback() {
        @Override
        public void writeHeader(Writer writer) throws IOException {
            // use columnNames here
        }
    };
}
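And a minimal sketch of the tasklet step that determines the column names and promotes them to the job execution context, assuming a stepBuilderFactory like the one in the question; resolveColumnNames(sqlQuery) is a hypothetical helper that inspects the result set metadata of the query passed on the command line:
@Bean
public Step columnNamesStep() {
    // promotes the key from the step execution context to the job execution context
    ExecutionContextPromotionListener promotionListener = new ExecutionContextPromotionListener();
    promotionListener.setKeys(new String[] { "columnNames" });
    return stepBuilderFactory.get("columnNamesStep")
            .tasklet((contribution, chunkContext) -> {
                // determine the column names for the given query (hypothetical helper)
                String columnNames = resolveColumnNames(sqlQuery);
                chunkContext.getStepContext()
                        .getStepExecution()
                        .getExecutionContext()
                        .putString("columnNames", columnNames);
                return RepeatStatus.FINISHED;
            })
            .listener(promotionListener)
            .build();
}
The step-scoped headerCallback above then resolves #{jobExecutionContext['columnNames']} against the promoted value at runtime.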

Master/slave Job architectural design in spring batch using modular job approach

I hope you're doing great.
I'm facing a design problem in Spring Batch.
Let me explain:
I have a modular Spring Batch job architecture;
each job has its own config file and context.
I am designing a master job to launch the subjobs (50+ types of subjobs).
An X obj has, among other fields, a name, a state and a blob which contains the CSV file attached to it.
The X obj will be updated after being processed.
I first followed the approach of fetching all X objs and then looping (in a Java stream) to call the appropriate job.
But this approach has a lot of limitations.
So I designed a masterJob with a reader, processor and writer.
The masterJob should read an X obj, call the appropriate subJob, and then update the state of the X obj.
The masterJobReader calls a custom service to get a list of X objs.
I started by trying to launch the subJob from within the masterJob processor, but it did not work.
I did some research and found that JobStep could be more adequate for this scenario.
But I'm stuck on how to pass the item read by the masterJobReader to the JobStep as a parameter.
I did see DefaultJobParametersExtractor and I tried to put the item read into the stepExecutionContext, but it's not working.
My question: how do I pass a parameter from the master job to the subjob using the JobStep approach?
If there is a better way to deal with this, I'm all ears!
I'm using Java config and Spring Batch 4.3.
Edit to provide sample code:
@Configuration
public class MasterJob {

    @Value("${defaultCompletionPolicy}")
    private Integer defaultCompletionPolicy;

    @Autowired
    protected StepBuilderFactory masterStepBuilderFactory;

    private Logger logger = LoggerFactory.getLogger(MasterJob.class);

    @Autowired
    protected JobRepository jobRepo;

    @Autowired
    protected PlatformTransactionManager transactionManager;

    @Autowired
    @Qualifier("JOB_NAME1")
    private Job JOB_NAME1; // this should change to be dynamic as there are around 50 types of job

    @Bean(name = "masterJob")
    protected Job masterBatchJob() throws ApiException {
        return new JobBuilderFactory(jobRepo).get("masterJob")
                .incrementer(new RunIdIncrementer())
                .start(masterJobStep(masterJobReader(), masterJobWriter()))
                .next(jobStepJobStep1(null))
                .next(masterUpdateStep()) // update the state of objX
                .build();
    }

    @Bean(name = "masterJobStep")
    protected Step masterJobStep(@Qualifier("masterJobReader") MasterJobReader masterReader,
            @Qualifier("masterJobWriter") MasterJobWriter masterWriter) throws ApiException {
        logger.debug("inside masterJobStep");
        return this.masterStepBuilderFactory.get("masterJobStep")
                .<Customer, Customer>chunk(defaultCompletionPolicy)
                .reader(masterJobReader())
                .processor(masterJobProcessor())
                .writer(masterJobWriter())
                .transactionManager(transactionManager)
                .listener(new MasterJobWriter()) // I set the parameter inside this.
                .listener(masterPromotionListener())
                .build();
    }

    @Bean(name = "masterJobWriter", destroyMethod = "")
    @StepScope
    protected MasterJobWriter masterJobWriter() {
        return new MasterJobWriter();
    }

    @Bean(name = "masterJobReader", destroyMethod = "")
    @StepScope
    protected MasterJobReader masterJobReader() throws ApiException {
        return new MasterJobReader();
    }

    protected FieldSetMapper<Customer> mapper() {
        return new CustomerMapper();
    }

    @Bean(name = "masterPromotionListener")
    public ExecutionContextPromotionListener masterPromotionListener() {
        ExecutionContextPromotionListener listener = new ExecutionContextPromotionListener();
        listener.setKeys(new String[] {
                "inputFile",
                "outputFile",
                "customerId",
                "comments",
                "customer"
        });
        // listener.setStrict(true);
        return listener;
    }

    @Bean(name = "masterUpdateStep")
    public Step masterUpdateStep() {
        return this.masterStepBuilderFactory.get("masterCleanStep").tasklet(new MasterUpdateTasklet()).build();
    }

    @Bean(name = "masterJobProcessor", destroyMethod = "")
    @StepScope
    protected MasterJobProcessor masterJobProcessor() {
        return new MasterJobProcessor();
    }
    @Bean
    public Step jobStepJobStep1(JobLauncher jobLauncher) {
        return this.masterStepBuilderFactory.get("jobStepJobStep1")
                .job(JOB_NAME1)
                .launcher(jobLauncher)
                .parametersExtractor(jobParametersExtractor())
                .build();
    }

    @Bean
    public DefaultJobParametersExtractor jobParametersExtractor() {
        DefaultJobParametersExtractor extractor = new DefaultJobParametersExtractor();
        extractor.setKeys(new String[] { "inputFile", "outputFile", "customerId", "comments", "customer" });
        return extractor;
    }
}
This is how I set the parameters from within the MasterJobWriter:
String inputFile = fetchInputFile(customer);
String outputFile = buildOutputFileName(customer);
Comments comments = "comments"; // from business logic
ExecutionContext stepContext = this.stepExecution.getExecutionContext();
stepContext.put("inputFile", inputFile);
stepContext.put("outputFile", outputFile);
stepContext.put("customerId", customer.getCustomerId());
stepContext.put("comments", new CustomJobParameter<Comments>(comments));
stepContext.put("customer", new CustomJobParameter<Customer>(customer));
I followed this section of the Spring Batch documentation.
My question: how do I pass a parameter from the master job to the subjob using the JobStep approach?
The JobParametersExtractor is what you are looking for. It allows you to extract parameters from the main job and pass them to the subjob. You can find an example here.
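As a rough sketch (separate from the linked example), a custom JobParametersExtractor could copy the values that the master step promoted to the job execution context into the sub-job's parameters; the keys below mirror the ones used in the question's promotion listener, and treating them as Strings is an assumption:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.step.job.JobParametersExtractor;
import org.springframework.batch.item.ExecutionContext;

public class ExecutionContextJobParametersExtractor implements JobParametersExtractor {

    @Override
    public JobParameters getJobParameters(Job job, StepExecution stepExecution) {
        // start from the parent job's parameters
        JobParametersBuilder builder = new JobParametersBuilder(
                stepExecution.getJobExecution().getJobParameters());
        // copy values promoted to the job execution context by the previous step
        ExecutionContext jobContext = stepExecution.getJobExecution().getExecutionContext();
        if (jobContext.containsKey("inputFile")) {
            builder.addString("inputFile", jobContext.getString("inputFile"));
        }
        if (jobContext.containsKey("outputFile")) {
            builder.addString("outputFile", jobContext.getString("outputFile"));
        }
        return builder.toJobParameters();
    }
}
Such an extractor would be plugged into the JobStep via parametersExtractor(...) in place of the DefaultJobParametersExtractor shown above.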
EDIT: Adding suggestions based on comments
I have a list of X objs in the DB. An X obj has, among other fields, an id, a type (of work), a name, a state and a blob which contains the CSV file attached to it. The blob's CSV depends on the type field, so there is no single CSV pattern. I need to process each X obj, save the content of its CSV file in the DB, generate a result CSV file containing the original data plus a comment field, and update the X obj's state, with the result CSV file attached to the X obj, among other fields.
As you can see, the process is already complex for a single X object, so trying to process all X objects in the same job of jobs is too complex IMHO. So much complexity in software comes from trying to make one thing do two things.
If there is a better way to deal with this, I'm all ears!
Since you are open for suggestions, I will recommend two options:
Option 1:
If it were up to me, I would create a job instance per X obj. This way, I can 1) parallelize things and 2) in case of failure, restart only the failed job. These two characteristics (Scalability and Restartability) are almost impossible with the job of jobs approach. Even if you have a lot of X objects, this is not a problem. You can use one of the scaling techniques provided by Spring Batch to process things in parallel.
Option 2:
If you really can't or don't want to use different job instances, you can use a single job with a chunk-oriented step that iterates over X objects list. The processing logic seems independent from one record to another, so this step should be easily scalable with multiple threads.
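A minimal sketch of option 1, where XObject, xObjectRepository.findPendingObjects() and processXObjectJob are hypothetical names for your own domain object, lookup and per-object job:
public void launchJobPerXObject() throws Exception {
    for (XObject x : xObjectRepository.findPendingObjects()) {
        JobParameters parameters = new JobParametersBuilder()
                .addLong("xObjectId", x.getId()) // identifying parameter: one job instance per X obj
                .toJobParameters();
        // each instance can be restarted individually if it fails
        jobLauncher.run(processXObjectJob, parameters);
    }
}
The X obj's type could also be used here to select which of the 50+ job beans to launch for that object.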

Spring Batch Transactions in StepExecutionListener

I have a Spring Batch job that reads data from a web service, does some enriching in a processor, and then saves to a DB. If someone runs the same job twice with the same set of params, I want to delete the old data in the DB and then rewrite it as part of this job.
I have written the delete logic in a StepExecutionListener's beforeStep method.
How can I make my step transactional so that if there is an error in the job, the delete operation is rolled back?
this.stepBuilderFactory.get("xStep")
        .<Item, Item>chunk(1000)
        .reader(xReader)
        .processor(xProcessor)
        .writer(xWriter)
        .listener(xStepExecutionListenerForDelete)
        .build();
How can I make my step transactional so that if there is an error in the job, the delete operation is rolled back?
You can write the delete logic as part of the item writer which is called inside the transaction driven by Spring Batch. If the transaction is rolled back for any reason, your delete operation will be rolled back. Note that an item writer is not only used for inserting data, but can be used to update data and delete it as well (MongoItemWriter#setDelete is an example).
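A minimal sketch of that approach, assuming Spring Batch 4's List-based ItemWriter contract, a JdbcTemplate for the delete, and placeholder table/column names (x_table, id); the delegate is whatever writer the step already uses:
import java.util.List;
import org.springframework.batch.item.ItemWriter;
import org.springframework.jdbc.core.JdbcTemplate;

public class DeleteThenWriteItemWriter implements ItemWriter<Item> {

    private final JdbcTemplate jdbcTemplate;
    private final ItemWriter<Item> delegate;

    public DeleteThenWriteItemWriter(JdbcTemplate jdbcTemplate, ItemWriter<Item> delegate) {
        this.jdbcTemplate = jdbcTemplate;
        this.delegate = delegate;
    }

    @Override
    public void write(List<? extends Item> items) throws Exception {
        // runs inside the chunk transaction, so a failure rolls back
        // both the deletes and the writes of this chunk
        for (Item item : items) {
            jdbcTemplate.update("DELETE FROM x_table WHERE id = ?", item.getId());
        }
        delegate.write(items);
    }
}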
Job Parameters
The job won't begin if there is already an instance with the same "identifying" job parameters.
The job will shut down with the following exception:
org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException
You could create a job parameter that is a randomly generated value, or perhaps a date. In your listener, you could verify the job parameters of the previous job instance excluding your "unique" job parameter.
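For example, the launching code could add the current timestamp under a "unique" key (the key the listener below strips out before comparing), so every run creates a new job instance; businessParam and its value are placeholders:
public void launchWithUniqueParameter(JobLauncher jobLauncher, Job job, String businessValue) throws Exception {
    JobParameters parameters = new JobParametersBuilder()
            .addString("businessParam", businessValue)     // the "real" parameters
            .addLong("unique", System.currentTimeMillis()) // identifying, so each run is a new job instance
            .toJobParameters();
    jobLauncher.run(job, parameters);
}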
Transactional Listener
Add the #Transactional annotation to the listener to have it wrapped in a transaction.
Example
@Transactional
public class DataErasureListener implements StepExecutionListener {

    @Autowired
    JobExplorer jobExplorer;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        String jobName = stepExecution.getJobExecution().getJobInstance().getJobName();
        Map<String, JobParameter> currentJobParameters = stepExecution.getJobParameters().getParameters();
        int jobInstanceCount = jobExplorer.getJobInstanceCount(jobName);
        if (jobInstanceCount == 1) {
            // No prior run
            return;
        }
        // The list of JobInstances is in descending order of creation. Grab the 2nd one.
        JobInstance priorJobInstance = jobExplorer.getJobInstances(jobName, 0, jobInstanceCount).get(1);
        JobExecution priorJobExecution = jobExplorer.getLastJobExecution(priorJobInstance);
        Map<String, JobParameter> priorJobParameters = priorJobExecution.getJobParameters().getParameters();
        // Compare prior job parameters excluding "unique" job parameters
        currentJobParameters.remove("unique");
        priorJobParameters.remove("unique");
        if (currentJobParameters.equals(priorJobParameters)) {
            // Delete old data
        }
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return stepExecution.getExitStatus();
    }
}

Faster method to write items to mariadb other than jpaitemwriter

I have a batch job that reads from an Oracle database with a JPA item reader and writes to MariaDB with a JPA item writer. Is there a way to do a bulk insert into MariaDB, or a bulk execute the way MongoDB has the BulkOperations.execute() method?
I've used the provided JpaItemWriter as follows:
@Bean
@Transactional
public JpaItemWriter<entity.maria.class> classJpaItemWriter() {
    JpaItemWriter<entity.maria.class> writer = new JpaItemWriter<>();
    writer.setEntityManagerFactory(mariaEntityManager.getObject());
    return writer;
}
The reader is:
public JpaPagingItemReader<PojoClass> classJpaReader() throws Exception {
    String jpqlQuery = "SELECT t FROM PojoClass t where rownum < 15001";
    JpaPagingItemReader<PojoClass> reader = new JpaPagingItemReader<>();
    reader.setQueryString(jpqlQuery);
    reader.setEntityManagerFactory(oracleEntityManager.getObject());
    reader.setPageSize(100000);
    reader.afterPropertiesSet();
    reader.setSaveState(true);
    return reader;
}
The step configuration is:
@Bean
public Step classStep() throws Exception {
    Step auditStep = stepBuilderFactory.get("entity.oracle.PojoClass")
            .<class, entity.maria.class>chunk(10000)
            .reader(classJpaReader())
            .writer(classJpaItemWriter())
            .transactionManager(mariaTransactionManager)
            .listener(auditWriterListener())
            .faultTolerant()
            .skipLimit(10000)
            .skip(Exception.class)
            .build();
    return auditStep;
}
I would like to write a custom writer that bulk-inserts values into MariaDB, to reduce the insert/upsert time. Currently the insertion of 15000 records takes 326 seconds, which seems lengthy.
Any suggestions?
You can try the JdbcBatchItemWriter, which uses JdbcTemplate#batchUpdate behind the scenes. This is usually faster than the JPA item writer, as it does not interact with a persistence context, first/second-level caches, etc.
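A minimal sketch along the lines of the guide's writer shown in the first question, where MariaEntity, maria_table and the columns are placeholders for your own entity and schema:
@Bean
public JdbcBatchItemWriter<MariaEntity> mariaJdbcWriter(DataSource mariaDataSource) {
    JdbcBatchItemWriter<MariaEntity> writer = new JdbcBatchItemWriter<>();
    writer.setDataSource(mariaDataSource);
    // one multi-row JDBC batch per chunk instead of one JPA persist per item
    writer.setSql("INSERT INTO maria_table (col_a, col_b) VALUES (:colA, :colB)");
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
    writer.afterPropertiesSet();
    return writer;
}
Depending on the JDBC driver, connection options such as rewriteBatchedStatements may reduce round trips further.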
Hope this helps.

How to safely pass params from Tasklet to step when running parallel jobs

I am trying to safely pass params from a tasklet to a step in the same job.
My job consists of 3 tasklets (step1, step2, step3) one after another, and at the end a step4 (reader, processor, writer).
This job is executed many times in parallel.
In step1, inside the tasklet, I evaluate a param (hashId) via a web service, then I pass it along my chain until my reader (which is in step4).
In step3 I create a new param called filePath, which is based on hashId, and I send it over to step4 (the reader) as a file resource location.
I am using the stepExecution context to pass these params (hashId and filePath).
I tried 3 ways of doing it via the tasklet.
To pass the param (hashId from step1 into step2 and from step2 into step3) I do this:
chunkContext.getStepContext()
.getStepExecution()
.getExecutionContext()
.put("hashId", hashId);
In step3 (the DownloadFileTasklet) I populate filePath based on hashId and pass it this way to my last step (which is a reader, processor and writer):
public class DownloadFileTasklet implements Tasklet, StepExecutionListener {

    private StepExecution stepExecution;
    ...

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws IOException {
        String hashId = (String) chunkContext.getStepContext().getStepExecution().getJobExecution().getExecutionContext().get("hashId");
        ...
        String filePath = "..." + hashId + ".csv";
        // I used executionContextPromotionListener here in order to promote those keys
        chunkContext.getStepContext()
                .getStepExecution()
                .getExecutionContext()
                .put("filePath", filePath);
        logger.info(filePath + " for hashId=" + hashId);
        return RepeatStatus.FINISHED;
    }

    @Override
    public void beforeStep(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }
}
Note that I print the hashId and filePath values right before I finish that step (step3). According to the logs they are consistent and populated as expected.
I also added logs within my reader to log the params that I get.
@Bean
@StepScope
public ItemStreamReader<MyDTO> reader(@Value("#{jobExecutionContext['filePath']}") String filePath) {
    logger.info("test filePath=" + filePath);
    return itemReader;
}
When I execute this job ~10 times in parallel, I can see that the filePath param is populated with other jobs' filePath values.
This is how I promote the job's keys with executionContextPromotionListener:
job definition:
@Bean
public Job processFileJob() throws Exception {
    return this.jobs.get("processFileJob")
            .start(step1)
            .next(step2)
            .next(downloadFileTaskletStep()) // step3
            .next(processSnidFileStep()) // step4
            .build();
}
Step 3 definition:
public Step downloadFileTaskletStep() {
    return this.steps.get("downloadFileTaskletStep")
            .tasklet(downloadFileTasklet())
            .listener(executionContextPromotionListener())
            .build();
}
@Bean
public org.springframework.batch.core.listener.ExecutionContextPromotionListener executionContextPromotionListener() {
    ExecutionContextPromotionListener executionContextPromotionListener = new ExecutionContextPromotionListener();
    executionContextPromotionListener.setKeys(new String[] { "filePath" });
    return executionContextPromotionListener;
}
Same result: the threads are messing up the params.
I can track the results via the Spring Batch database table batch_job_execution_context.short_context.
Here you can see that the filePath, which is built from the hashId, is not identical to the original hashId:
// incorrect record
{"map":[{"entry":[{"string":"totalRecords","int":5},{"string":"segmentId","long":13},{"string":["filePath","/etc/mydir/services/notification_processor/files/2015_04_22/f1c7b0f2180b7e266d36f87fcf6fb7aa.csv"]},{"string":["hashId","20df39d201fffc7444423cfdf2f43789"]}]}]}
Now if we check the other records they seem fine, but there are always one or two that are messed up:
//correct records
{"map":[{"entry":[{"string":"totalRecords","int":5},{"string":"segmentId","long":13},{"string":["filePath","\/etc\/mydir\/services\/notification_processor\/files\/2015_04_22\/**c490c8282628b894727fc2a4d6fc0cb5**.csv"]},{"string":["hashId","**c490c8282628b894727fc2a4d6fc0cb5**"]}]}]}
{"map":[{"entry":[{"string":"totalRecords","int":5},{"string":"segmentId","long":13},{"string":["filePath","\/etc\/mydir\/services\/notification_processor\/files\/2015_04_22\/**2b21d3047208729192b87e90e4a868e4**.csv"]},{"string":["hashId","**2b21d3047208729192b87e90e4a868e4**"]}]}]}
Any idea why I have these threading issues?
To review your attempted methods:
Method 1 - Editing JobParameters
JobParameters are immutable in a job, so they should not be modified during job execution.
Method 2 - Editing JobParameters v2
Method 2 is really the same as method 1; you're just getting the reference to the JobParameters a different way.
Method 3 - Using the ExecutionContextPromotionListener. This is the correct way, but you're doing things incorrectly. The ExecutionContextPromotionListener looks at the step's ExecutionContext and copies the keys you specify over to the job's ExecutionContext. You're adding the keys directly to the job's ExecutionContext, which is a bad idea.
So in short, method 3 is the closest to correct, but you should be adding the properties you want to share to the step's ExecutionContext and then configuring the ExecutionContextPromotionListener to promote the appropriate keys to the job's ExecutionContext.
The code would be updated as follows:
chunkContext.getStepContext()
.getStepExecution()
.getExecutionContext()
.put("filePath", filePath);
