@Configuration
@EnableBatchProcessing(modular=true)
public class ModularJobConfiguration {
@Bean
public ApplicationContextFactory job() {
return new GenericApplicationContextFactory(FlatfileToDbJobConfiguration.class);
}
@Bean
public ApplicationContextFactory anotherJob() {
return new GenericApplicationContextFactory(FlatfileToDbJobAnotherConfiguration.class);
}
}
For brevity, here is only one of the job configurations; the other one looks similar.
@Configuration
public class FlatfileToDbJobConfiguration {
@Autowired
private JobBuilderFactory jobBuilders;
@Autowired
private StepBuilderFactory stepBuilders;
@Bean
public Job flatfileToDbJob(){
return jobBuilders.get("flatfileToDbJob")
.listener(protocolListener())
.start(step())
.build();
}
@Bean
public Step step(){
return stepBuilders.get("step")
.<Partner,Partner>chunk(1)
.reader(reader())
.processor(processor())
.writer(writer())
.build();
}
// ...
// rest part of code
// ...
}
Everything works fine, but all the bean methods in the configuration files are called twice: first, I think, in the common parent context and then again in the individual child context. Is this normal? I can't use @PostConstruct because it is also called twice. How can I keep modular=true and have only one bean initialization? Thanks!
I found the answer. I need @Lazy initialization; then the configuration is only initialized when the individual child context is created.
@Configuration
@Lazy
public class FlatfileToDbJobConfiguration {
@Autowired
private JobBuilderFactory jobBuilders;
@Autowired
private StepBuilderFactory stepBuilders;
@Bean
public Job flatfileToDbJob(){
return jobBuilders.get("flatfileToDbJob")
.listener(protocolListener())
.start(step())
.build();
}
@Bean
public Step step(){
return stepBuilders.get("step")
.<Partner,Partner>chunk(1)
.reader(reader())
.processor(processor())
.writer(writer())
.build();
}
// ...
// rest part of code
// ...
}
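As a side note, with modular=true each child context's jobs end up in the shared JobRegistry, so one way to sanity-check that every job was created and registered exactly once is to list the registry after startup. A minimal sketch (the listener class and log line are purely illustrative, not part of the original setup):
@Component
public class JobRegistryLogger implements ApplicationListener<ContextRefreshedEvent> {
@Autowired
private JobRegistry jobRegistry;
@Override
public void onApplicationEvent(ContextRefreshedEvent event) {
// the event fires once per context; log only for the root (parent-less) one
if (event.getApplicationContext().getParent() == null) {
System.out.println(jobRegistry.getJobNames()); // e.g. [flatfileToDbJob, ...]
}
}
}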
I'm trying to inject parameters from an outside context into an ItemReader using Spring Batch.
Below I have my code that trigger the job:
Date date = Date.from(advanceSystemDateEvent.getReferenceDate().atStartOfDay(ZoneId.systemDefault()).toInstant());
JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
jobParametersBuilder.addLong("uniqueness", System.nanoTime());
jobParametersBuilder.addDate("date", date);
jobLauncher.run(remuneradorJob, jobParametersBuilder.toJobParameters());
My job:
@EnableBatchProcessing
@Configuration
public class JobsConfig {
@Autowired
private JobBuilderFactory jobBuilderFactory;
@Bean
public Job remuneradorJob(Step remuneradorStep) {
return jobBuilderFactory
.get("remuneradorJob")
.start(remuneradorStep)
.incrementer(new RunIdIncrementer())
.build();
}
}
My step:
@Configuration
public class StepsConfig {
@Autowired
private StepBuilderFactory stepBuilderFactory;
@Bean
public Step remuneradorStep(ItemReader<Entity> remuneradorReader, ItemWriter<Entity> remuneradorWriter) {
return stepBuilderFactory
.get("remuneradorStep")
.<Entity, Entity>chunk(1)
.reader(remuneradorReader)
.writer(remuneradorWriter)
.build();
}
}
My itemreader:
@Configuration
public class RemuneradorReaderConfig {
@Autowired
private EntityManagerFactory entityManagerFactory;
@Bean
@StepScope
public JpaPagingItemReader<Entity> remuneradorReader(@Value("#{jobParameters}") Map jobParameters) {
//LocalDate localDate = date.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
LocalDate localDate = LocalDate.of(2011,5,16);
JpaPagingItemReader<Entity> databaseReader = new JpaPagingItemReader<>();
databaseReader.setEntityManagerFactory(entityManagerFactory);
databaseReader.setQueryString("...");
databaseReader.setPageSize(1000);
databaseReader.setParameterValues(Collections.<String, Object>singletonMap("limit", localDate));
return databaseReader;
}
}
I got the error:
Error creating bean with name 'scopedTarget.remuneradorReader': Scope 'step' is not active for the current thread; consider defining a scoped proxy for this bean if you intend to refer to it from a singleton; nested exception is java.lang.IllegalStateException: No context holder available for step scope
I tried replacing @StepScope with @JobScope, but I got the same error.
I saw this issue:
Spring batch scope issue while using spring boot
And finally, the application runs.
But now I'm facing another problem, with the code below:
@Bean
@StepScope
public JpaPagingItemReader<Entity> remuneradorReader(@Value("#{jobParameters}") Map jobParameters) {
JpaPagingItemReader<Entity> databaseReader = new JpaPagingItemReader<>();
databaseReader.setEntityManagerFactory(entityManagerFactory);
databaseReader.setQueryString("select o from Object o");
databaseReader.setPageSize(1000);
return databaseReader;
}
When I execute this reader, it gives me:
Deposit{Deposit_ID='null', Legacy_ID ='null', Valor_Depósito='10000', Saldo='10000'}
The idDeposit and idLegacy come back null.
But when I remove @StepScope and @Value("#{jobParameters}") Map jobParameters from the ItemReader, as in the code below:
@Bean
public JpaPagingItemReader<Entity> remuneradorReader() {
JpaPagingItemReader<Entity> databaseReader = new JpaPagingItemReader<>();
databaseReader.setEntityManagerFactory(entityManagerFactory);
databaseReader.setQueryString("select o from Object o");
databaseReader.setPageSize(1000);
return databaseReader;
}
The reader gives me the correct response:
Deposit{Deposit_ID='98', Legacy_ID ='333', Valor_Depósito='10000', Saldo='10000'}
I think something else is missing.
Can anyone help me?
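For reference, a common way to bind a single job parameter in a step-scoped bean is a typed SpEL expression rather than the whole map. A sketch only, reusing the names from the code above (not the asker's final solution):
@Bean
@StepScope
public JpaPagingItemReader<Entity> remuneradorReader(@Value("#{jobParameters['date']}") Date date) {
LocalDate localDate = date.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
JpaPagingItemReader<Entity> databaseReader = new JpaPagingItemReader<>();
databaseReader.setEntityManagerFactory(entityManagerFactory);
databaseReader.setQueryString("..."); // query omitted, as above
databaseReader.setPageSize(1000);
databaseReader.setParameterValues(Collections.<String, Object>singletonMap("limit", localDate));
return databaseReader;
}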
I have a need to share relatively large amounts of data between job steps in a Spring Batch implementation. I am aware of StepExecutionContext and JobExecutionContext as mechanisms for this. However, I read that these must be limited in size (less than 2500 characters), which is too small for my needs. In my novice one-step Spring Batch implementation, my single-step job is as below:
@Configuration
@EnableBatchProcessing
public class BatchConfig {
@Autowired
public JobBuilderFactory jobBuilderFactory;
@Autowired
public StepBuilderFactory stepBuilderFactory;
private static final String GET_DATA =
" SELECT " +
"stuffA, " +
"stuffB, " +
"FROM STUFF_TABLE " +
"ORDER BY stuffA ASC";
@Bean
public ItemReader<StuffDto> databaseCursorItemReader(DataSource dataSource) {
return new JdbcCursorItemReaderBuilder<StuffDto>()
.name("cursorItemReader")
.dataSource(dataSource)
.sql(GET_DATA)
.rowMapper(new BeanPropertyRowMapper<>(StuffDto.class))
.build();
}
@Bean
ItemProcessor<StuffDto, StuffDto> databaseXmlItemProcessor() {
return new QueryLoggingProcessor();
}
@Bean
public ItemWriter<StuffDto> databaseCursorItemWriter() {
return new LoggingItemWriter();
}
@Bean
public Step databaseCursorStep(@Qualifier("databaseCursorItemReader") ItemReader<StuffDto> reader,
@Qualifier("databaseCursorItemWriter") ItemWriter<StuffDto> writer,
StepBuilderFactory stepBuilderFactory) {
return stepBuilderFactory.get("databaseCursorStep")
.<StuffDto, StuffDto>chunk(1)
.reader(reader)
.writer(writer)
.build();
}
@Bean
public Job databaseCursorJob(@Qualifier("databaseCursorStep") Step exampleJobStep,
JobBuilderFactory jobBuilderFactory) {
return jobBuilderFactory.get("databaseCursorJob")
.incrementer(new RunIdIncrementer())
.flow(exampleJobStep)
.end()
.build();
}
}
This works fine in the sense that I can successfully read from the database and, in the writer step, write to a LoggingItemWriter like this:
public class LoggingItemWriter implements ItemWriter<StuffDto> {
private static final Logger LOGGER = LoggerFactory.getLogger(LoggingItemWriter.class);
@Override
public void write(List<? extends StuffDto> list) throws Exception {
LOGGER.info("Writing stuff: {}", list);
}
}
However, I need to make that StuffDto (or equivalent) and its data available to a second step that would perform some processing against it rather than just logging it.
I would be grateful for any ideas on how that could be accomplished if you assume that the step and job contexts are too limited. Thanks.
If you do not want to write the data to the database or filesystem, one way to achieve this is as below:
Create your own job context bean in your config class with the required properties and annotate it with @JobScope.
Implement the org.springframework.batch.core.step.tasklet.Tasklet interface in your reader, processor and writer classes. If you want more control over the steps you can also implement org.springframework.batch.core.StepExecutionListener with it.
Autowire your own context object and use its getters and setters to store and retrieve the data.
Sample Code:
Config.java
@Autowired
private Processor processor;
@Autowired
private Reader reader;
@Autowired
private Writer writer;
@Autowired
private JobBuilderFactory jobBuilderFactory;
@Autowired
private StepBuilderFactory stepBuilderFactory;
@Bean
@JobScope
public JobContext getJobContexts() {
JobContext newJobContext = new JobContext();
return newJobContext;
}
@Bean
public Step reader() {
return stepBuilderFactory.get("reader")
.tasklet(reader)
.build();
}
@Bean
public Step processor() {
return stepBuilderFactory.get("processor")
.tasklet(processor)
.build();
}
@Bean
public Step writer() {
return stepBuilderFactory.get("writer")
.tasklet(writer)
.build();
}
public Job testJob() {
return jobBuilderFactory.get("testJob")
.start(reader())
.next(processor())
.next(writer())
.build();
}
//Below will start the job
@Scheduled(fixedRate = 1000)
public void starJob(){
Map<String, JobParameter> confMap = new HashMap<>();
confMap.put("time", new JobParameter(new Date()));
JobParameters jobParameters = new JobParameters(confMap);
monitorJobLauncher.run(testJob(), jobParameters);
}
JobContext.java
public class JobContext {
private List<StuffDto> dataToProcess = new ArrayList<>();
private List<StuffDto> dataToWrite = new ArrayList<>();
// getters and setters for both lists
}
SampleReader.java
@Component
public class SampleReader implements Tasklet, StepExecutionListener {
@Autowired
private JobContext context;
@Override
public void beforeStep(StepExecution stepExecution) {
//logic that you need to perform before the execution of this step.
}
@Override
public ExitStatus afterStep(StepExecution stepExecution) {
//logic that you need to perform after the execution of this step.
return stepExecution.getExitStatus();
}
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext){
// Whatever code is here will get executed for reader.
// Fetch StuffDto object from database and add it to jobContext
//dataToProcess list.
return RepeatStatus.FINISHED;
}
}
SampleProcessor.java
@Component
public class SampleProcessor implements Tasklet{
@Autowired
private JobContext context;
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext){
// Whatever code is here will get executed for processor.
// context.getDataToProcessList();
// apply business logic and set the data to write.
return RepeatStatus.FINISHED;
}
}
The writer class works the same way.
Note: Please note that here you need to write the database-related boilerplate code on your own. But this way you have more control over your logic and don't have to worry about the context size limit. All the data is held in memory, so as soon as the operation is done it can be garbage collected. I hope this conveys the idea.
To read more about Tasklet vs Chunk read this.
I did some searching but couldn't find example code for Spring Batch reading from a REST API (which I have done) and writing multiple records per read to a single DB table using JdbcBatchItemWriter.
Below is my BatchConfig code, but it writes only one record. I think I have to make my processor return a List of Registration objects and the JdbcBatchItemWriter has to write multiple records.
Code:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {
@Autowired
public JobBuilderFactory jobBuilderFactory;
@Autowired
public StepBuilderFactory stepBuilderFactory;
@Autowired
private Environment environment;
@Bean
RestTemplate restTemplate() {
return new RestTemplate();
}
//my reader
@Bean
ItemReader<EmployeeEmploymentDTO> restEmployeeReader(Environment environment,
RestTemplate restTemplate) {
return new RESTEmployeeReader(
environment.getRequiredProperty("rest.api.to.listemployees.ugs.api.url"),
restTemplate
);
}
//my processor which is a separate class
@Bean
public RegistrationItemProcessor processor() {
return new RegistrationItemProcessor();
}
//my writer, which now inserts only one record per read, but I want to insert a varying number of records per read
@Bean
public JdbcBatchItemWriter<Registration> writer(DataSource dataSource) {
return new JdbcBatchItemWriterBuilder<Registration>()
.itemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>())
.sql("INSERT INTO registration ...") // rest of the insert statement omitted
.dataSource(dataSource)
.build();
}
@Bean
public Job importUserJob(JobCompletionNotificationListener listener,
Step step1) {
return jobBuilderFactory.get("importUserJob")
.incrementer(new RunIdIncrementer())
.listener(listener)
.flow(step1)
.end()
.build();
}
@Bean
public Step step1(JdbcBatchItemWriter<Registration> writer) {
return stepBuilderFactory.get("step1")
.<EmployeeEmploymentDTO, Registration> chunk(10)
.reader(restEmployeeReader(environment,restTemplate()))
.processor(processor())
.writer(writer)
.build();
}
}
My processor now returns a List of Registration objects (so the writer receives a list of lists).
My writer is as below:
public class MultiOutputItemWriter implements ItemWriter<List<Registration>> {
ItemWriter<Registration> itemWriter;
@Autowired
NamedParameterJdbcTemplate namedParamJdbcTemplate;
@Override
public void write(List<? extends List<Registration>> items) throws Exception {
for (List<Registration> registrations : items) {
final String SQL_INSERT_INTO_REGISTRATION="INSERT INTO registration (employee_id, ....";
final List<MapSqlParameterSource> params = new ArrayList<>();
for (Registration registration : registrations) {
MapSqlParameterSource param = new MapSqlParameterSource();
param.addValue("employeeId", registration.getEmployeeId());
param.addValue("startDate", registration.getStartDate());
param.addValue("user", registration.getUser());
param.addValue("endTime", registration.getEndTime());
params.add(param);
}
namedParamJdbcTemplate.batchUpdate(SQL_INSERT_INTO_REGISTRATION,params.toArray(new MapSqlParameterSource[params.size()]));
}
}
}
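With a processor that returns a List<Registration>, the step's chunk types have to change to match. A minimal sketch of how the step from the configuration above might then be wired (bean names reused from the question; the processor is assumed to be an ItemProcessor<EmployeeEmploymentDTO, List<Registration>>):
@Bean
public Step step1(MultiOutputItemWriter writer) {
return stepBuilderFactory.get("step1")
// each item handed to the writer is now a whole list of registrations
.<EmployeeEmploymentDTO, List<Registration>>chunk(10)
.reader(restEmployeeReader(environment, restTemplate()))
.processor(processor())
.writer(writer)
.build();
}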
I am using Spring Boot + Spring Batch (annotation based) and have come across a scenario where I have to run two jobs.
I have Employee and Salary records which need to be updated using Spring Batch. I have configured the BatchConfiguration classes by following the spring-batch getting started tutorial for the Employee and Salary objects, named BatchConfigurationEmployee and BatchConfigurationSalary respectively.
I have defined the ItemReader, ItemProcessor, ItemWriter and Job by following the tutorial mentioned above.
When I start my Spring Boot application, only one of the jobs runs. I want to run the jobs from both batch configuration classes. How can I achieve this?
********* BatchConfigurationEmployee.java *************
@Configuration
@EnableBatchProcessing
public class BatchConfigurationEmployee {
public ItemReader<Employee> reader() {
return new EmployeeItemReader();
}
@Bean
public ItemProcessor<Employee, Employee> processor() {
return new EmployeeItemProcessor();
}
@Bean
public Job Employee(JobBuilderFactory jobs, Step s1) {
return jobs.get("Employee")
.incrementer(new RunIdIncrementer())
.flow(s1)
.end()
.build();
}
@Bean
public Step step1(StepBuilderFactory stepBuilderFactory, ItemReader<Employee> reader,
ItemProcessor<Employee, Employee> processor) {
return stepBuilderFactory.get("step1")
.<Employee, Employee> chunk(1)
.reader(reader)
.processor(processor)
.build();
}
}
The Salary class is here:
@Configuration
@EnableBatchProcessing
public class BatchConfigurationSalary {
public ItemReader<Salary> reader() {
return new SalaryItemReader();
}
@Bean
public ItemProcessor<Salary, Salary> processor() {
return new SalaryItemProcessor();
}
@Bean
public Job salary(JobBuilderFactory jobs, Step s1) {
return jobs.get("Salary")
.incrementer(new RunIdIncrementer())
.flow(s1)
.end()
.build();
}
@Bean
public Step step1(StepBuilderFactory stepBuilderFactory, ItemReader<Salary> reader,
ItemProcessor<Salary, Salary> processor) {
return stepBuilderFactory.get("step1")
.<Salary, Salary> chunk(1)
.reader(reader)
.processor(processor)
.build();
}
}
The names of the Beans have to be unique in the whole Spring Context.
In both jobs, you are instantiating the reader, writer and processor with the same method name. The method name is what is used to identify the bean in the context.
In both job-definitions, you have reader(), writer() and processor(). They will overwrite each other. Give them unique names like readerEmployee(), readerSalary() and so on.
That should solve your problem.
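For example, the renamed bean methods could look like this (a sketch based on the classes in the question):
// in BatchConfigurationEmployee
@Bean
public ItemReader<Employee> readerEmployee() {
return new EmployeeItemReader();
}
@Bean
public ItemProcessor<Employee, Employee> processorEmployee() {
return new EmployeeItemProcessor();
}
// in BatchConfigurationSalary
@Bean
public ItemReader<Salary> readerSalary() {
return new SalaryItemReader();
}
@Bean
public ItemProcessor<Salary, Salary> processorSalary() {
return new SalaryItemProcessor();
}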
Your jobs are not annotated with @Bean, so the Spring context doesn't know about them.
Have a look at the class JobLauncherCommandLineRunner. All beans in the Spring context implementing the Job interface will be injected, and all jobs that are found will be executed (this happens inside the method executeLocalJobs in JobLauncherCommandLineRunner).
If, for some reason, you don't want to have them as beans in the context, then you have to register your jobs with the JobRegistry (the method executeRegisteredJobs of JobLauncherCommandLineRunner will take care of launching the registered jobs).
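A minimal sketch of such a manual registration (assuming an autowired JobRegistry and a Job instance; the method name is made up for illustration):
@Autowired
private JobRegistry jobRegistry;

public void registerSalaryJob(Job salaryJob) throws DuplicateJobException {
// makes the job discoverable by name without exposing it as a bean in the context
jobRegistry.register(new ReferenceJobFactory(salaryJob));
}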
BTW, you can control which jobs should be launched on startup with the property:
spring.batch.job.names= # Comma-separated list of job names to execute on startup (for instance `job1,job2`). By default, all jobs found in the context are executed.
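For the two jobs above that would be something like this in application.properties (assuming the job names "Employee" and "Salary" passed to jobs.get(...)):
spring.batch.job.names=Employee,Salary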
I feel that this is also a pretty good way to run multiple jobs.
I am making use of a JobLauncher to configure and execute the jobs, and independent CommandLineRunner implementations to run them. These are ordered to make sure they are executed sequentially in the required order.
Apologies for the big post, but I wanted to give a clear picture of what can be achieved using JobLauncher configurations with multiple command line runners.
This is the current BeanConfiguration that I have
@Configuration
public class BeanConfiguration {
@Autowired
DataSource dataSource;
@Autowired
PlatformTransactionManager transactionManager;
@Bean(name="jobOperator")
public JobOperator jobOperator(JobExplorer jobExplorer,
JobRegistry jobRegistry) throws Exception {
SimpleJobOperator jobOperator = new SimpleJobOperator();
jobOperator.setJobExplorer(jobExplorer);
jobOperator.setJobRepository(createJobRepository());
jobOperator.setJobRegistry(jobRegistry);
jobOperator.setJobLauncher(jobLauncher());
return jobOperator;
}
/**
* Configure the jobLauncher to execute jobs asynchronously
* using the ThreadPoolTaskExecutor
* @return
* @throws Exception
*/
@Bean
public JobLauncher jobLauncher() throws Exception {
SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
jobLauncher.setJobRepository(createJobRepository());
jobLauncher.setTaskExecutor(taskExecutor());
jobLauncher.afterPropertiesSet();
return jobLauncher;
}
// Read the datasource and set in the job repo
protected JobRepository createJobRepository() throws Exception {
JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setDataSource(dataSource);
factory.setTransactionManager(transactionManager);
factory.setIsolationLevelForCreate("ISOLATION_SERIALIZABLE");
//factory.setTablePrefix("BATCH_");
factory.setMaxVarCharLength(10000);
return factory.getObject();
}
@Bean
public RestTemplateBuilder restTemplateBuilder() {
return new RestTemplateBuilder().additionalInterceptors(new CustomRestTemplateLoggerInterceptor());
}
@Bean(name=AppConstants.JOB_DECIDER_BEAN_NAME_EMAIL_INIT)
public JobExecutionDecider jobDecider() {
return new EmailInitJobExecutionDecider();
}
@Bean
public ThreadPoolTaskExecutor taskExecutor() {
ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setCorePoolSize(15);
taskExecutor.setMaxPoolSize(20);
taskExecutor.setQueueCapacity(30);
return taskExecutor;
}
}
I have set up the database to hold the job execution details in Postgres, and hence the DatasourceConfiguration looks like this (two different beans for two different profiles/environments):
@Configuration
public class DatasourceConfiguration implements EnvironmentAware {
private Environment env;
@Bean
@Qualifier(AppConstants.DB_BEAN)
@Profile("dev")
public DataSource getDataSource() {
HikariDataSource ds = new HikariDataSource();
boolean isAutoCommitEnabled = env.getProperty("spring.datasource.hikari.auto-commit") != null ? Boolean.parseBoolean(env.getProperty("spring.datasource.hikari.auto-commit")):false;
ds.setAutoCommit(isAutoCommitEnabled);
// Connection test query is for legacy connections
//ds.setConnectionInitSql(env.getProperty("spring.datasource.hikari.connection-test-query"));
ds.setPoolName(env.getProperty("spring.datasource.hikari.pool-name"));
ds.setDriverClassName(env.getProperty("spring.datasource.driver-class-name"));
long timeout = env.getProperty("spring.datasource.hikari.idleTimeout") != null ? Long.parseLong(env.getProperty("spring.datasource.hikari.idleTimeout")): 40000;
ds.setIdleTimeout(timeout);
long maxLifeTime = env.getProperty("spring.datasource.hikari.maxLifetime") != null ? Long.parseLong(env.getProperty("spring.datasource.hikari.maxLifetime")): 1800000 ;
ds.setMaxLifetime(maxLifeTime);
ds.setJdbcUrl(env.getProperty("spring.datasource.url"));
ds.setPoolName(env.getProperty("spring.datasource.hikari.pool-name"));
ds.setUsername(env.getProperty("spring.datasource.username"));
ds.setPassword(env.getProperty("spring.datasource.password"));
int poolSize = env.getProperty("spring.datasource.hikari.maximum-pool-size") != null ? Integer.parseInt(env.getProperty("spring.datasource.hikari.maximum-pool-size")): 10;
ds.setMaximumPoolSize(poolSize);
return ds;
}
@Bean
@Qualifier(AppConstants.DB_PROD_BEAN)
@Profile("prod")
public DataSource getProdDatabase() {
HikariDataSource ds = new HikariDataSource();
boolean isAutoCommitEnabled = env.getProperty("spring.datasource.hikari.auto-commit") != null ? Boolean.parseBoolean(env.getProperty("spring.datasource.hikari.auto-commit")):false;
ds.setAutoCommit(isAutoCommitEnabled);
// Connection test query is for legacy connections
//ds.setConnectionInitSql(env.getProperty("spring.datasource.hikari.connection-test-query"));
ds.setPoolName(env.getProperty("spring.datasource.hikari.pool-name"));
ds.setDriverClassName(env.getProperty("spring.datasource.driver-class-name"));
long timeout = env.getProperty("spring.datasource.hikari.idleTimeout") != null ? Long.parseLong(env.getProperty("spring.datasource.hikari.idleTimeout")): 40000;
ds.setIdleTimeout(timeout);
long maxLifeTime = env.getProperty("spring.datasource.hikari.maxLifetime") != null ? Long.parseLong(env.getProperty("spring.datasource.hikari.maxLifetime")): 1800000 ;
ds.setMaxLifetime(maxLifeTime);
ds.setJdbcUrl(env.getProperty("spring.datasource.url"));
ds.setPoolName(env.getProperty("spring.datasource.hikari.pool-name"));
ds.setUsername(env.getProperty("spring.datasource.username"));
ds.setPassword(env.getProperty("spring.datasource.password"));
int poolSize = env.getProperty("spring.datasource.hikari.maximum-pool-size") != null ? Integer.parseInt(env.getProperty("spring.datasource.hikari.maximum-pool-size")): 10;
ds.setMaximumPoolSize(poolSize);
return ds;
}
public void setEnvironment(Environment environment) {
// TODO Auto-generated method stub
this.env = environment;
}
}
Make sure that the initial app launcher catches the application exit once the job execution terminates (either failed or completed) so that you can gracefully shut down the JVM. Otherwise, using the JobLauncher keeps the JVM alive even after all jobs have completed.
@SpringBootApplication
@ComponentScan(basePackages="com.XXXX.Feedback_File_Processing.*")
@EnableBatchProcessing
public class FeedbackFileProcessingApp
{
public static void main(String[] args) throws Exception {
ApplicationContext appContext = SpringApplication.run(FeedbackFileProcessingApp.class, args);
// The batch job has finished by this point because the
// ApplicationContext is not 'ready' until the job is finished
// Also, use System.exit to force the Java process to finish with the exit code returned from the Spring App
System.exit(SpringApplication.exit(appContext));
}
}
And so on: you can configure your own decider and your own jobs/steps as described above for two different configurations, like below, and use them separately in command line runners (since the post is getting long, I am giving the details of just the jobs and the command line runners).
These are the two jobs
@Configuration
public class DefferalJobConfiguration {
@Autowired
JobLauncher joblauncher;
@Autowired
private JobBuilderFactory jobFactory;
@Autowired
private StepBuilderFactory stepFactory;
@Bean
@StepScope
public Tasklet newSampleTasklet() {
return ((stepExecution, chunkContext) -> {
System.out.println("execution of step after flow");
return RepeatStatus.FINISHED;
});
}
@Bean
public Step sampleStep() {
return stepFactory.get("sampleStep").listener(new CustomStepExecutionListener())
.tasklet(newSampleTasklet()).build();
}
@Autowired
@Qualifier(AppConstants.FLOW_BEAN_NAME_EMAIL_INITIATION)
private Flow emailInitFlow;
@Autowired
@Qualifier(AppConstants.JOB_DECIDER_BEAN_NAME_EMAIL_INIT)
private JobExecutionDecider jobDecider;
@Autowired
@Qualifier(AppConstants.STEP_BEAN_NAME_ITEMREADER_FETCH_DEFERRAL_CONFIG)
private Step deferralConfigStep;
@Bean(name=AppConstants.JOB_BEAN_NAME_DEFERRAL)
public Job deferralJob() {
return jobFactory.get(AppConstants.JOB_NAME_DEFERRAL)
.start(emailInitFlow)
.on("COMPLETED").to(sampleStep())
.next(jobDecider).on("COMPLETED").to(deferralConfigStep)
.on("FAILED").fail()
.end().build();
}
}
@Configuration
public class TestFlowJobConfiguration {
@Autowired
private JobBuilderFactory jobFactory;
@Autowired
@Qualifier("testFlow")
private Flow testFlow;
@Bean(name = "testFlowJob")
public Job testFlowJob() {
return jobFactory.get("testFlowJob").start(testFlow).end().build();
}
}
Here are the command line runners. (I am making sure that the first job is completed before the second job is initialized, but it is totally up to the user to execute them in parallel following a different strategy.)
@Component
@Order(1)
public class DeferralCommandLineRunner implements CommandLineRunner, EnvironmentAware{
// If the jobLauncher is not used, then by default jobs are launched using SimpleJobLauncher
// with default configuration (assumption),
// hence the jobLauncher was modified with the values set in the BeanConfiguration
// of Spring Batch
private Environment env;
@Autowired
JobLauncher jobLauncher;
@Autowired
@Qualifier(AppConstants.JOB_BEAN_NAME_DEFERRAL)
Job deferralJob;
@Override
public void run(String... args) throws Exception {
// TODO Auto-generated method stub
JobParameters jobparams = new JobParametersBuilder()
.addString("run.time", LocalDateTime.now().
format(DateTimeFormatter.ofPattern(AppConstants.JOB_DATE_FORMATTER_PATTERN)).toString())
.addString("instance.name",
(deferralJob.getName() != null) ?deferralJob.getName()+'-'+UUID.randomUUID().toString() :
UUID.randomUUID().toString())
.toJobParameters();
jobLauncher.run(deferralJob, jobparams);
}
@Override
public void setEnvironment(Environment environment) {
// TODO Auto-generated method stub
this.env = environment;
}
}
@Component
@Order(2)
public class TestJobCommandLineRunner implements CommandLineRunner {
@Autowired
JobLauncher jobLauncher;
@Autowired
@Qualifier("testFlowJob")
Job testjob;
@Autowired
@Qualifier("jobOperator")
JobOperator operator;
@Override
public void run(String... args) throws Exception {
// TODO Auto-generated method stub
JobParameters jobParam = new JobParametersBuilder().addString("name", UUID.randomUUID().toString())
.toJobParameters();
System.out.println(operator.getJobNames());
try {
Set<Long> deferralExecutionIds = operator.getRunningExecutions(AppConstants.JOB_NAME_DEFERRAL);
System.out.println("deferralExceutuibuds:" + deferralExecutionIds);
operator.stop(deferralExecutionIds.iterator().next());
} catch (NoSuchJobException | NoSuchJobExecutionException | JobExecutionNotRunningException e) {
// just add a logging here
System.out.println("exception caught:" + e.getMessage());
}
jobLauncher.run(testjob, jobParam);
}
}
Hope this gives a complete idea of how it can be done. I am using spring-boot-starter-batch:jar:2.0.0.RELEASE
I am trying to write a IntegrationFlow test. It goes something like this:
JMS(in) -> (find previous versions in db) -> reduce(in,1...n) -> (to db) -> JMS(out)
So, no surprise: I want to mock the DB calls; they are DAO beans. But I also want it to pick up other beans through component scan; I will selectively scan all packages except dao.
Create a test config and mock the DAOs. No problem.
Follow the Spring Boot instructions for testing to get component-scanned beans. No problem.
I just want to verify the sequence of steps and the resultant output as the outbound JMS queue would see it. Can someone just help me fill in the blanks?
This CAN'T be tough! The use of mocks seems to be problematic because plenty of essential fields are final. I am reading everywhere about this and just not coming up with a clear path. I inherited this code, BTW.
My error:
org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
Here is my code
@Configuration
@ImportResource("classpath:retry-context.xml")
public class LifecycleConfig {
@Autowired
private MessageProducerSupport inbound;
@Autowired
private MessageHandler outbound;
@Autowired
@Qualifier("reducer")
private GenericTransformer<Collection<ExtendedClaim>,ExtendedClaim> reducer;
@Autowired
@Qualifier("claimIdToPojo")
private GenericTransformer<String,ClaimDomain> toPojo;
@Autowired
@Qualifier("findPreviousVersion")
private GenericTransformer<ExtendedClaim,Collection<ExtendedClaim>> previousVersions;
@Autowired
@Qualifier("saveToDb")
private GenericHandler<ExtendedClaim> toDb;
@Bean
public DirectChannel getChannel() {
return new DirectChannel();
}
@Bean
@Autowired
public StandardIntegrationFlow processClaim() {
return IntegrationFlows.from(inbound).
channel(getChannel()).
transform(previousVersions).
transform(reducer).
handle(ExtendedClaim.class,toDb).
transform(toPojo).
handle(outbound).get();
}
}
Test Config
@Configuration
public class TestConfig extends AbstractClsTest {
#Bean(name = "claimIdToPojo")
public ClaimIdToPojo getClaimIdToPojo() {
return spy(new ClaimIdToPojo());
}
@Bean
public ClaimToId getClaimToIdPojo() {
return spy(new ClaimToId());
}
@Bean(name = "findPreviousVersion")
public FindPreviousVersion getFindPreviousVersion() {
return spy(new FindPreviousVersion());
}
@Bean(name = "reducer")
public Reducer getReducer() {
return spy(new Reducer());
}
@Bean(name = "saveToDb")
public SaveToDb getSaveToDb() {
return spy(new SaveToDb());
}
@Bean
public MessageProducerSupport getInbound() {
MessageProducerSupport mock = mock(MessageProducerSupport.class);
// when(mock.isRunning()).thenReturn(true);
return mock;
}
@Bean
public PaymentDAO getPaymentDao() {
return mock(PaymentDAO.class);
}
@Bean
public ClaimDAO getClaimDao() {
return mock(ClaimDAO.class);
}
@Bean
public MessageHandler getOutbound() {
return new CaptureHandler<ExtendedClaim>();
}
}
Actual test won't load
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {TestConfig.class, LifecycleConfig.class})
public class ClaimLifecycleApplicationTest extends AbstractClsTest {
@Autowired
private MessageHandler outbound;
@Autowired
@Qualifier("reducer")
private GenericTransformer<Collection<ExtendedClaim>,ExtendedClaim> reducer;
@Autowired
@Qualifier("claimIdToPojo")
private GenericTransformer<String,ClaimDomain> toPojo;
@Autowired
@Qualifier("findPreviousVersion")
private GenericTransformer<ExtendedClaim,Collection<ExtendedClaim>> previousVersions;
@Autowired
@Qualifier("saveToDb")
private GenericHandler<ExtendedClaim> toDb;
@Autowired
private DirectChannel defaultChannel;
@Test
public void testFlow() throws Exception {
ExtendedClaim claim = getClaim();
Message<ExtendedClaim> message = MessageBuilder.withPayload(claim).build();
List<ExtendedClaim> previousClaims = Arrays.asList(claim);
defaultChannel.send(message);
verify(previousVersions).transform(claim);
verify(reducer).transform(previousClaims);
verify(toDb).handle(claim, anyMap());
verify(toPojo).transform(claim.getSubmitterClaimId());
verify(outbound);
}
}
There are a lot of domain-specific objects, so I can't run your code to reproduce the problem or find some other issue with it.
But I see that you don't use @EnableIntegration on your @Configuration classes.
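For example, adding it to the configuration from the question would look like this (a sketch of the suggested fix; the body of the class stays as posted):
@Configuration
@EnableIntegration
@ImportResource("classpath:retry-context.xml")
public class LifecycleConfig {
// ... autowired endpoints and the processClaim() flow exactly as above ...
}
The annotation needs to be present on at least one of the configuration classes that the test context loads (TestConfig or LifecycleConfig).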