Run a Spring Boot application every 1 minute and synchronise the controllers - java

I developed a Spring Boot application with 4 controllers. I need to run the first one; after it finishes, the second controller should launch to do some work, then the third, and finally the fourth. I put @Scheduled(fixedRate = 30000) on every controller, but with this solution there is no synchronisation between the 4 controllers. I need help with a solution to run them in sync, or another way to automatically run all the controllers together with priority. I attached the code architecture below.
@CrossOrigin("*")
@RestController
public class Collector {

    @Autowired
    DataSource datasource;

    @Scheduled(fixedRate = 40000)
    public void collector() {
        // the methods
    }
}

@CrossOrigin("*")
@RestController
public class Loader {

    @Autowired
    DataSource datasource;

    @Scheduled(fixedRate = 40000)
    public void loador() {
        // the methods
    }
}

@CrossOrigin("*")
@RestController
public class Export {

    @Autowired
    DataSource datasource;

    @Scheduled(fixedRate = 40000)
    public void export() {
        // the methods
    }
}

@CrossOrigin("*")
@RestController
public class Send {

    @Autowired
    DataSource datasource;

    @Scheduled(fixedRate = 40000)
    public void send() {
        // the methods
    }
}

This is not how schedulers in Spring work. Since you want to execute each method synchronously, right after the previous one has finished, schedule just the one that triggers the chain and call the others from inside the scheduled method.
@Scheduled(fixedRate = 40000)
public void collector() {
    // body of the 1st one
    this.loader.loador(); // 2nd one
    this.export.export(); // 3rd one
    this.send.send();     // 4th one
}
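Put together, a minimal sketch of that orchestrating component could look as follows, assuming Loader, Export and Send are injectable beans and that @EnableScheduling is present on a configuration class (the class name Pipeline and the 60-second rate are illustrative, not from the original post):

@Component
public class Pipeline {

    @Autowired
    private Loader loader;
    @Autowired
    private Export export;
    @Autowired
    private Send send;

    // One schedule drives the whole chain; each step starts only
    // after the previous one has returned, so the order is guaranteed.
    @Scheduled(fixedRate = 60000) // every minute, as the title asks
    public void collector() {
        // body of the 1st step (the collector logic)
        loader.loador(); // 2nd step
        export.export(); // 3rd step
        send.send();     // 4th step
    }
}

Note that these steps do not need to be @RestController classes at all; plain @Service beans are enough unless you also expose them over HTTP.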

Related

How to write a Spring Boot unit test for a method which calls another @Async method inside it?

I need to write an integration test for the processEvent method, which calls an @Async method inside it. I tried writing a test for this method, but the issue was that it did not save the Student object to the DB. However, removing the @Async annotation allows me to save the object. I want to know how I should write test cases for this method, eliminating the @Async issue; I want the student object to be saved while testing. I have attached my code below.
Here is ClassA, which has the method I want to test:
@Service
public class ClassA {

    private final ConfigurationProcessor<Student> configurationProcessor;

    @Autowired
    public ClassA(ConfigurationProcessor<Student> configurationProcessor) {
        this.configurationProcessor = configurationProcessor;
    }

    public void processEvent(Configuration configuration) {
        configurationProcessor.process(configuration);
    }
}
This is the ConfigurationProcessor interface:
public interface ConfigurationProcessor<T> {
    void process(Configuration configuration);
}
And this is its Impl class:
@Service
public class ConfigurationProcessoeStudents10Impl implements ConfigurationProcessor<Student> {

    private final StudentRepository studentRepository;

    @Autowired
    public ConfigurationProcessoeStudents10Impl(StudentRepository studentRepository) {
        this.studentRepository = studentRepository;
    }

    @Override
    @Async
    public void process(Configuration configuration) {
        studentRepository.save(Student.builder().Name(configuration.name).Age(configuration.age).build());
    }
}
This is the test I have written so far:
@EnableAutoConfiguration
@SpringBootTest
public class AudienceC10IT {

    @Autowired
    ClassA classA;

    @Test
    @Tag("VerifyProcess")
    @DisplayName("Verify kafka event consumer from configuration manager")
    void verifyProcess() {
        Configuration configuration = new Configuration("lal", 12);
        classA.processEvent(configuration);
    }
}
If you have set up a ThreadPoolTaskExecutor bean, you can autowire it in your test.
Then, after you call your async method, you can await termination.
Then you can check for the expected behaviour / result.
Something like this:
@Autowired
private ThreadPoolTaskExecutor asyncTaskExecutor;

@Test
void test() throws InterruptedException {
    callAsyncMethod();
    boolean terminated = asyncTaskExecutor.getThreadPoolExecutor().awaitTermination(1, TimeUnit.SECONDS);
    assertAsyncBehaviour();
}
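Applied to the processEvent case above, a sketch could look like the following. It assumes @Async is enabled and backed by a ThreadPoolTaskExecutor bean, and that StudentRepository is reachable in the test context; the one-second wait and the count() assertion are illustrative choices, not from the original answer:

@SpringBootTest
public class ClassAAsyncIT {

    @Autowired
    private ThreadPoolTaskExecutor asyncTaskExecutor;
    @Autowired
    private ClassA classA;
    @Autowired
    private StudentRepository studentRepository;

    @Test
    void verifyProcessPersistsStudent() throws InterruptedException {
        classA.processEvent(new Configuration("lal", 12));
        // Without shutdown(), awaitTermination simply blocks up to the timeout,
        // which here acts as a bounded wait for the queued async task to finish.
        asyncTaskExecutor.getThreadPoolExecutor().awaitTermination(1, TimeUnit.SECONDS);
        // The async save should have completed by now.
        assertEquals(1, studentRepository.count());
    }
}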

Adding a scheduler for the @Scheduled annotation in Spring without using XML configuration

I have several methods with the annotation @Scheduled. For each annotation, or a group of them, I want a different scheduler to be used. For example:
Group A has 3 methods with the @Scheduled annotation which need to use scheduler X.
Group B has 5 methods with the @Scheduled annotation which need to use scheduler Y.
From what I have read in "Does spring @Scheduled annotated methods runs on different threads?", if the scheduler is not specified then only one of those methods will run at a time.
I know how this wiring can be done using XML-based configuration. But is there a way that this can be done using Java-based configuration only?
It can be done using Java config, but not using annotation attributes.
You could have a look at the Spring API docs for an extended example.
For example:
@Configuration
@EnableScheduling
public class AppConfig implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        taskRegistrar.setScheduler(taskScheduler());
    }

    @Bean(destroyMethod = "shutdown")
    public Executor taskScheduler() {
        return Executors.newScheduledThreadPool(42);
    }
}
A scheduler group for @Scheduled is not yet supported; see this open issue.
If you want to use more than one scheduler, you have to create and configure them programmatically. For example:
@Configuration
@EnableScheduling
public class AppConfig implements SchedulingConfigurer {
    [...]
    @Bean(destroyMethod = "shutdown", name = "taskSchedulerA")
    public ScheduledExecutorService taskSchedulerA() {
        return Executors.newScheduledThreadPool(42);
    }

    @Bean(destroyMethod = "shutdown", name = "taskSchedulerB")
    public ScheduledExecutorService taskSchedulerB() {
        return Executors.newScheduledThreadPool(42);
    }
}
@Service
public class MyService {

    @Autowired @Qualifier("taskSchedulerA")
    private ScheduledExecutorService taskSchedulerA;

    @Autowired @Qualifier("taskSchedulerB")
    private ScheduledExecutorService taskSchedulerB;

    @PostConstruct
    public void schedule() {
        // schedule group A on its dedicated scheduler (delay arguments elided in the original)
        taskSchedulerA.schedule(new Runnable() {
            @Override
            public void run() {
                functionOfGroupA();
            }
        }, ..);
    }
}
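A short usage sketch of the two injected schedulers, assuming illustrative fixed rates; functionOfGroupA and functionOfGroupB stand in for the real group members:

@PostConstruct
public void schedule() {
    // group A runs on scheduler X every 30 seconds
    taskSchedulerA.scheduleAtFixedRate(this::functionOfGroupA, 0, 30, TimeUnit.SECONDS);
    // group B runs on scheduler Y every 5 minutes
    taskSchedulerB.scheduleAtFixedRate(this::functionOfGroupB, 0, 5, TimeUnit.MINUTES);
}

Declaring the beans as ScheduledExecutorService (rather than plain Executor) is what makes the schedule/scheduleAtFixedRate methods available at the injection points.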

How to run multiple jobs in spring batch using annotations

I am using Spring Boot + Spring Batch (annotations) and have come across a scenario where I have to run 2 jobs.
I have Employee and Salary records which need to be updated using Spring Batch. I have configured BatchConfiguration classes by following the spring-batch getting started tutorial for the Employee and Salary objects, respectively named BatchConfigurationEmployee & BatchConfigurationSalary.
I have defined the ItemReader, ItemProcessor, ItemWriter and Job by following the tutorial mentioned above.
When I start my Spring Boot application only one of the jobs runs; I want to run both of the BatchConfiguration classes. How can I achieve this?
********* BatchConfigurationEmployee.java *************
@Configuration
@EnableBatchProcessing
public class BatchConfigurationEmployee {

    public ItemReader<Employee> reader() {
        return new EmployeeItemReader();
    }

    @Bean
    public ItemProcessor<Employee, Employee> processor() {
        return new EmployeeItemProcessor();
    }

    @Bean
    public Job Employee(JobBuilderFactory jobs, Step s1) {
        return jobs.get("Employee")
                .incrementer(new RunIdIncrementer())
                .flow(s1)
                .end()
                .build();
    }

    @Bean
    public Step step1(StepBuilderFactory stepBuilderFactory, ItemReader<Employee> reader,
            ItemProcessor<Employee, Employee> processor) {
        return stepBuilderFactory.get("step1")
                .<Employee, Employee>chunk(1)
                .reader(reader)
                .processor(processor)
                .build();
    }
}
The Salary class is here:
@Configuration
@EnableBatchProcessing
public class BatchConfigurationSalary {

    public ItemReader<Salary> reader() {
        return new SalaryItemReader();
    }

    @Bean
    public ItemProcessor<Salary, Salary> processor() {
        return new SalaryItemProcessor();
    }

    @Bean
    public Job salary(JobBuilderFactory jobs, Step s1) {
        return jobs.get("Salary")
                .incrementer(new RunIdIncrementer())
                .flow(s1)
                .end()
                .build();
    }

    @Bean
    public Step step1(StepBuilderFactory stepBuilderFactory, ItemReader<Salary> reader,
            ItemProcessor<Salary, Salary> processor) {
        return stepBuilderFactory.get("step1")
                .<Salary, Salary>chunk(1)
                .reader(reader)
                .processor(processor)
                .build();
    }
}
The names of the beans have to be unique in the whole Spring context.
In both jobs, you are instantiating the reader, writer and processor with the same method name. The method name is what identifies the bean in the context.
In both job definitions you have reader(), writer() and processor(); they will overwrite each other. Give them unique names like readerEmployee(), readerSalary() and so on.
That should solve your problem.
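For instance, the Employee configuration could be renamed like this (a minimal sketch following the advice above; EmployeeItemWriter is a hypothetical writer mirroring the reader and processor, since the question elides the writer code):

@Bean
public ItemReader<Employee> readerEmployee() {
    return new EmployeeItemReader();
}

@Bean
public ItemProcessor<Employee, Employee> processorEmployee() {
    return new EmployeeItemProcessor();
}

@Bean
public ItemWriter<Employee> writerEmployee() {
    return new EmployeeItemWriter(); // hypothetical, not shown in the question
}

@Bean
public Step stepEmployee(StepBuilderFactory stepBuilderFactory) {
    return stepBuilderFactory.get("stepEmployee")
            .<Employee, Employee>chunk(1)
            .reader(readerEmployee())
            .processor(processorEmployee())
            .writer(writerEmployee())
            .build();
}

The Salary configuration gets the same treatment with readerSalary(), processorSalary(), writerSalary() and stepSalary().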
Your jobs are not annotated with @Bean, so the Spring context doesn't know about them.
Have a look at the class JobLauncherCommandLineRunner: all beans in the Spring context implementing the Job interface will be injected, and all jobs that are found will be executed (this happens inside the method executeLocalJobs in JobLauncherCommandLineRunner).
If, for some reason, you don't want to have them as beans in the context, then you have to register your jobs with the JobRegistry (the method executeRegisteredJobs of JobLauncherCommandLineRunner will take care of launching the registered jobs).
BTW, you can control which jobs should be launched with this property:
spring.batch.job.names= # Comma-separated list of job names to execute on startup (for instance
`job1,job2`). By default, all jobs found in the context are executed.
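For the two jobs above, whose names are "Employee" and "Salary" (from jobs.get("Employee") and jobs.get("Salary")), that would look like this in application.properties, assuming those are the only jobs you want run on startup:

spring.batch.job.names=Employee,Salary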
I feel that this is also a pretty good way to run multiple jobs.
I make use of a JobLauncher to configure and execute the jobs, and independent CommandLineRunner implementations to run them. These are ordered to make sure they are executed sequentially, in the required order.
Apologies for the big post, but I wanted to give a clear picture of what can be achieved using JobLauncher configurations with multiple command line runners.
This is the current BeanConfiguration that I have
@Configuration
public class BeanConfiguration {

    @Autowired
    DataSource dataSource;

    @Autowired
    PlatformTransactionManager transactionManager;

    @Bean(name = "jobOperator")
    public JobOperator jobOperator(JobExplorer jobExplorer, JobRegistry jobRegistry) throws Exception {
        SimpleJobOperator jobOperator = new SimpleJobOperator();
        jobOperator.setJobExplorer(jobExplorer);
        jobOperator.setJobRepository(createJobRepository());
        jobOperator.setJobRegistry(jobRegistry);
        jobOperator.setJobLauncher(jobLauncher());
        return jobOperator;
    }

    /**
     * Configure the JobLauncher so that execution is done asynchronously,
     * using the ThreadPoolTaskExecutor.
     * @return
     * @throws Exception
     */
    @Bean
    public JobLauncher jobLauncher() throws Exception {
        SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
        jobLauncher.setJobRepository(createJobRepository());
        jobLauncher.setTaskExecutor(taskExecutor());
        jobLauncher.afterPropertiesSet();
        return jobLauncher;
    }

    // Read the datasource and set it in the job repository
    protected JobRepository createJobRepository() throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(transactionManager);
        factory.setIsolationLevelForCreate("ISOLATION_SERIALIZABLE");
        //factory.setTablePrefix("BATCH_");
        factory.setMaxVarCharLength(10000);
        return factory.getObject();
    }

    @Bean
    public RestTemplateBuilder restTemplateBuilder() {
        return new RestTemplateBuilder().additionalInterceptors(new CustomRestTemplateLoggerInterceptor());
    }

    @Bean(name = AppConstants.JOB_DECIDER_BEAN_NAME_EMAIL_INIT)
    public JobExecutionDecider jobDecider() {
        return new EmailInitJobExecutionDecider();
    }

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setCorePoolSize(15);
        taskExecutor.setMaxPoolSize(20);
        taskExecutor.setQueueCapacity(30);
        return taskExecutor;
    }
}
I have set up the database to hold the job execution details in Postgres, and hence the DatasourceConfiguration looks like this (two different beans for two different profiles - env):
@Configuration
public class DatasourceConfiguration implements EnvironmentAware {

    private Environment env;

    @Bean
    @Qualifier(AppConstants.DB_BEAN)
    @Profile("dev")
    public DataSource getDataSource() {
        HikariDataSource ds = new HikariDataSource();
        boolean isAutoCommitEnabled = env.getProperty("spring.datasource.hikari.auto-commit") != null
                ? Boolean.parseBoolean(env.getProperty("spring.datasource.hikari.auto-commit")) : false;
        ds.setAutoCommit(isAutoCommitEnabled);
        // Connection test query is for legacy connections
        //ds.setConnectionInitSql(env.getProperty("spring.datasource.hikari.connection-test-query"));
        ds.setPoolName(env.getProperty("spring.datasource.hikari.pool-name"));
        ds.setDriverClassName(env.getProperty("spring.datasource.driver-class-name"));
        long timeout = env.getProperty("spring.datasource.hikari.idleTimeout") != null
                ? Long.parseLong(env.getProperty("spring.datasource.hikari.idleTimeout")) : 40000;
        ds.setIdleTimeout(timeout);
        long maxLifeTime = env.getProperty("spring.datasource.hikari.maxLifetime") != null
                ? Long.parseLong(env.getProperty("spring.datasource.hikari.maxLifetime")) : 1800000;
        ds.setMaxLifetime(maxLifeTime);
        ds.setJdbcUrl(env.getProperty("spring.datasource.url"));
        ds.setUsername(env.getProperty("spring.datasource.username"));
        ds.setPassword(env.getProperty("spring.datasource.password"));
        int poolSize = env.getProperty("spring.datasource.hikari.maximum-pool-size") != null
                ? Integer.parseInt(env.getProperty("spring.datasource.hikari.maximum-pool-size")) : 10;
        ds.setMaximumPoolSize(poolSize);
        return ds;
    }

    @Bean
    @Qualifier(AppConstants.DB_PROD_BEAN)
    @Profile("prod")
    public DataSource getProdDatabase() {
        HikariDataSource ds = new HikariDataSource();
        boolean isAutoCommitEnabled = env.getProperty("spring.datasource.hikari.auto-commit") != null
                ? Boolean.parseBoolean(env.getProperty("spring.datasource.hikari.auto-commit")) : false;
        ds.setAutoCommit(isAutoCommitEnabled);
        // Connection test query is for legacy connections
        //ds.setConnectionInitSql(env.getProperty("spring.datasource.hikari.connection-test-query"));
        ds.setPoolName(env.getProperty("spring.datasource.hikari.pool-name"));
        ds.setDriverClassName(env.getProperty("spring.datasource.driver-class-name"));
        long timeout = env.getProperty("spring.datasource.hikari.idleTimeout") != null
                ? Long.parseLong(env.getProperty("spring.datasource.hikari.idleTimeout")) : 40000;
        ds.setIdleTimeout(timeout);
        long maxLifeTime = env.getProperty("spring.datasource.hikari.maxLifetime") != null
                ? Long.parseLong(env.getProperty("spring.datasource.hikari.maxLifetime")) : 1800000;
        ds.setMaxLifetime(maxLifeTime);
        ds.setJdbcUrl(env.getProperty("spring.datasource.url"));
        ds.setUsername(env.getProperty("spring.datasource.username"));
        ds.setPassword(env.getProperty("spring.datasource.password"));
        int poolSize = env.getProperty("spring.datasource.hikari.maximum-pool-size") != null
                ? Integer.parseInt(env.getProperty("spring.datasource.hikari.maximum-pool-size")) : 10;
        ds.setMaximumPoolSize(poolSize);
        return ds;
    }

    @Override
    public void setEnvironment(Environment environment) {
        this.env = environment;
    }
}
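For reference, a minimal application.properties sketch covering the keys this configuration reads (the property names come from the code above; all values are illustrative):

spring.datasource.url=jdbc:postgresql://localhost:5432/batchdb
spring.datasource.username=batch_user
spring.datasource.password=secret
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.hikari.auto-commit=false
spring.datasource.hikari.pool-name=batch-pool
spring.datasource.hikari.idleTimeout=40000
spring.datasource.hikari.maxLifetime=1800000
spring.datasource.hikari.maximum-pool-size=10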
Make sure that the initial app launcher catches the exit code returned once the job executions terminate (either failed or completed), so that you can gracefully shut down the JVM. Otherwise, using the JobLauncher keeps the JVM alive even after all jobs are completed:
@SpringBootApplication
@ComponentScan(basePackages = "com.XXXX.Feedback_File_Processing.*")
@EnableBatchProcessing
public class FeedbackFileProcessingApp {

    public static void main(String[] args) throws Exception {
        ApplicationContext appContext = SpringApplication.run(FeedbackFileProcessingApp.class, args);
        // The batch jobs have finished by this point because the
        // ApplicationContext is not 'ready' until the jobs are finished.
        // Also, use System.exit to force the Java process to finish with
        // the exit code returned from the Spring app.
        System.exit(SpringApplication.exit(appContext));
    }
}
And so on: you can configure your own decider and your own jobs/steps, as said above, for two different configurations like below, and use them separately in command line runners (since the post is getting bigger, I am giving the details of just the jobs and the command line runners).
These are the two jobs
@Configuration
public class DefferalJobConfiguration {

    @Autowired
    JobLauncher joblauncher;

    @Autowired
    private JobBuilderFactory jobFactory;

    @Autowired
    private StepBuilderFactory stepFactory;

    @Bean
    @StepScope
    public Tasklet newSampleTasklet() {
        return ((stepExecution, chunkContext) -> {
            System.out.println("execution of step after flow");
            return RepeatStatus.FINISHED;
        });
    }

    @Bean
    public Step sampleStep() {
        return stepFactory.get("sampleStep").listener(new CustomStepExecutionListener())
                .tasklet(newSampleTasklet()).build();
    }

    @Autowired
    @Qualifier(AppConstants.FLOW_BEAN_NAME_EMAIL_INITIATION)
    private Flow emailInitFlow;

    @Autowired
    @Qualifier(AppConstants.JOB_DECIDER_BEAN_NAME_EMAIL_INIT)
    private JobExecutionDecider jobDecider;

    @Autowired
    @Qualifier(AppConstants.STEP_BEAN_NAME_ITEMREADER_FETCH_DEFERRAL_CONFIG)
    private Step deferralConfigStep;

    @Bean(name = AppConstants.JOB_BEAN_NAME_DEFERRAL)
    public Job deferralJob() {
        return jobFactory.get(AppConstants.JOB_NAME_DEFERRAL)
                .start(emailInitFlow)
                .on("COMPLETED").to(sampleStep())
                .next(jobDecider).on("COMPLETED").to(deferralConfigStep)
                .on("FAILED").fail()
                .end().build();
    }
}

@Configuration
public class TestFlowJobConfiguration {

    @Autowired
    private JobBuilderFactory jobFactory;

    @Autowired
    @Qualifier("testFlow")
    private Flow testFlow;

    @Bean(name = "testFlowJob")
    public Job testFlowJob() {
        return jobFactory.get("testFlowJob").start(testFlow).end().build();
    }
}
Here are the command line runners (I am making sure that the first job is completed before the second job is initialized, but it is totally up to the user to execute them in parallel following a different strategy):
@Component
@Order(1)
public class DeferralCommandLineRunner implements CommandLineRunner, EnvironmentAware {
    // If the jobLauncher is not used, then by default jobs are launched using SimpleJobLauncher
    // with default configuration (assumption);
    // hence the jobLauncher here is the modified one, with the values set in the
    // BeanConfiguration of Spring Batch.

    private Environment env;

    @Autowired
    JobLauncher jobLauncher;

    @Autowired
    @Qualifier(AppConstants.JOB_BEAN_NAME_DEFERRAL)
    Job deferralJob;

    @Override
    public void run(String... args) throws Exception {
        JobParameters jobparams = new JobParametersBuilder()
                .addString("run.time", LocalDateTime.now()
                        .format(DateTimeFormatter.ofPattern(AppConstants.JOB_DATE_FORMATTER_PATTERN)))
                .addString("instance.name",
                        (deferralJob.getName() != null) ? deferralJob.getName() + '-' + UUID.randomUUID().toString()
                                : UUID.randomUUID().toString())
                .toJobParameters();
        jobLauncher.run(deferralJob, jobparams);
    }

    @Override
    public void setEnvironment(Environment environment) {
        this.env = environment;
    }
}
@Component
@Order(2)
public class TestJobCommandLineRunner implements CommandLineRunner {

    @Autowired
    JobLauncher jobLauncher;

    @Autowired
    @Qualifier("testFlowJob")
    Job testjob;

    @Autowired
    @Qualifier("jobOperator")
    JobOperator operator;

    @Override
    public void run(String... args) throws Exception {
        JobParameters jobParam = new JobParametersBuilder().addString("name", UUID.randomUUID().toString())
                .toJobParameters();
        System.out.println(operator.getJobNames());
        try {
            Set<Long> deferralExecutionIds = operator.getRunningExecutions(AppConstants.JOB_NAME_DEFERRAL);
            System.out.println("deferralExecutionIds: " + deferralExecutionIds);
            operator.stop(deferralExecutionIds.iterator().next());
        } catch (NoSuchJobException | NoSuchJobExecutionException | JobExecutionNotRunningException e) {
            // just add logging here
            System.out.println("exception caught: " + e.getMessage());
        }
        jobLauncher.run(testjob, jobParam);
    }
}
Hope this gives a complete idea of how it can be done. I am using spring-boot-starter-batch:jar:2.0.0.RELEASE.

Configure Spring task scheduler to run at a fixedDelay or run once based on a boolean

I have code which runs at a regular interval. Below is the code for that:
@EnableScheduling
@Component
public class Cer {

    @Autowired
    private A a;

    @Autowired
    private CacheManager cacheManager;

    @Scheduled(fixedDelayString = "${xvc}")
    public void getData() {
        getCat();
        getB();
        return;
    }
}
I want to change @Scheduled(fixedDelayString = "${xvc}") based on a boolean, say runOnce: if runOnce is true, the @Scheduled method should run only once, say on application startup.
Any advice on how to achieve this?
Thanks in advance.
I would place the functionality that you want to schedule in a component:
@Component
public class ServiceToSchedule {

    public void methodThatWillBeScheduled() {
        // whatever
        return;
    }
}
And have two additional components that will be instantiated depending on a profile: one of them schedules the task, and the other one just executes it once.
#Profile("!scheduled")
#Component
public class OnceExecutor {
#Autowired
private ServiceToSchedule service;
#PostConstruct
public void executeOnce() {
// will just be execute once
service.methodThatWillBeScheduled();
}
}
#Profile("scheduled")
#Component
#EnableScheduling
public class ScheduledExecutor {
#Autowired
private ServiceToSchedule service;
#Scheduled(fixedRate = whateverYouDecide)
public void schedule() {
// will be scheduled
service.methodThatWillBeScheduled();
}
}
Depending on which profile is active, your method will be executed just once (the scheduled profile is not active) or will be scheduled (the scheduled profile is active).
You set the Spring profile by (for example) starting your service with:
-Dspring.profiles.active=scheduled
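If a boolean property is preferred over profiles, Spring Boot's @ConditionalOnProperty can gate the same two components directly on the runOnce flag from the question (a sketch; the class names are illustrative, and fixedDelayString reuses the "${xvc}" placeholder from the question):

@Component
@ConditionalOnProperty(name = "runOnce", havingValue = "true")
public class RunOnceExecutor {

    @Autowired
    private ServiceToSchedule service;

    @PostConstruct
    public void executeOnce() {
        // runs exactly once, on startup
        service.methodThatWillBeScheduled();
    }
}

@Component
@EnableScheduling
@ConditionalOnProperty(name = "runOnce", havingValue = "false", matchIfMissing = true)
public class FixedDelayExecutor {

    @Autowired
    private ServiceToSchedule service;

    @Scheduled(fixedDelayString = "${xvc}")
    public void schedule() {
        // runs repeatedly at the configured delay
        service.methodThatWillBeScheduled();
    }
}

Setting runOnce=true in application.properties (or leaving it unset for the scheduled behaviour, thanks to matchIfMissing) then switches between the two modes without profiles.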

Mocking config with spring boot isn't picking up property files

I am trying to write an IntegrationFlow test. It goes something like this:
JMS(in) -> (find previous versions in db) -> reduce(in,1...n) -> (to db) -> JMS(out)
So, no surprise: I want to mock the DB calls; they are DAO beans. But I also want it to pick up other beans through component scan; I will selectively scan all packages except dao.
Create a test config and mock the DAOs. No problem.
Follow the Spring Boot instructions for testing to get component-scanned beans. No problem.
I just want to verify the sequence of steps and the resultant output as the outbound JMS queue would see it. Can someone just help me fill in the blanks?
This CAN'T be tough! The use of mocks seems to be problematic because plenty of essential fields are final. I am reading everywhere about this and just not coming up with a clear path. I inherited this code, BTW.
My error:
org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
Here is my code
@Configuration
@ImportResource("classpath:retry-context.xml")
public class LifecycleConfig {

    @Autowired
    private MessageProducerSupport inbound;

    @Autowired
    private MessageHandler outbound;

    @Autowired
    @Qualifier("reducer")
    private GenericTransformer<Collection<ExtendedClaim>, ExtendedClaim> reducer;

    @Autowired
    @Qualifier("claimIdToPojo")
    private GenericTransformer<String, ClaimDomain> toPojo;

    @Autowired
    @Qualifier("findPreviousVersion")
    private GenericTransformer<ExtendedClaim, Collection<ExtendedClaim>> previousVersions;

    @Autowired
    @Qualifier("saveToDb")
    private GenericHandler<ExtendedClaim> toDb;

    @Bean
    public DirectChannel getChannel() {
        return new DirectChannel();
    }

    @Bean
    @Autowired
    public StandardIntegrationFlow processClaim() {
        return IntegrationFlows.from(inbound)
                .channel(getChannel())
                .transform(previousVersions)
                .transform(reducer)
                .handle(ExtendedClaim.class, toDb)
                .transform(toPojo)
                .handle(outbound).get();
    }
}
Test Config
@Configuration
public class TestConfig extends AbstractClsTest {

    @Bean(name = "claimIdToPojo")
    public ClaimIdToPojo getClaimIdToPojo() {
        return spy(new ClaimIdToPojo());
    }

    @Bean
    public ClaimToId getClaimToIdPojo() {
        return spy(new ClaimToId());
    }

    @Bean(name = "findPreviousVersion")
    public FindPreviousVersion getFindPreviousVersion() {
        return spy(new FindPreviousVersion());
    }

    @Bean(name = "reducer")
    public Reducer getReducer() {
        return spy(new Reducer());
    }

    @Bean(name = "saveToDb")
    public SaveToDb getSaveToDb() {
        return spy(new SaveToDb());
    }

    @Bean
    public MessageProducerSupport getInbound() {
        MessageProducerSupport mock = mock(MessageProducerSupport.class);
        // when(mock.isRunning()).thenReturn(true);
        return mock;
    }

    @Bean
    public PaymentDAO getPaymentDao() {
        return mock(PaymentDAO.class);
    }

    @Bean
    public ClaimDAO getClaimDao() {
        return mock(ClaimDAO.class);
    }

    @Bean
    public MessageHandler getOutbound() {
        return new CaptureHandler<ExtendedClaim>();
    }
}
The actual test won't load:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {TestConfig.class, LifecycleConfig.class})
public class ClaimLifecycleApplicationTest extends AbstractClsTest {

    @Autowired
    private MessageHandler outbound;

    @Autowired
    @Qualifier("reducer")
    private GenericTransformer<Collection<ExtendedClaim>, ExtendedClaim> reducer;

    @Autowired
    @Qualifier("claimIdToPojo")
    private GenericTransformer<String, ClaimDomain> toPojo;

    @Autowired
    @Qualifier("findPreviousVersion")
    private GenericTransformer<ExtendedClaim, Collection<ExtendedClaim>> previousVersions;

    @Autowired
    @Qualifier("saveToDb")
    private GenericHandler<ExtendedClaim> toDb;

    @Autowired
    private DirectChannel defaultChannel;

    @Test
    public void testFlow() throws Exception {
        ExtendedClaim claim = getClaim();
        Message<ExtendedClaim> message = MessageBuilder.withPayload(claim).build();
        List<ExtendedClaim> previousClaims = Arrays.asList(claim);
        defaultChannel.send(message);
        verify(previousVersions).transform(claim);
        verify(reducer).transform(previousClaims);
        verify(toDb).handle(claim, anyMap());
        verify(toPojo).transform(claim.getSubmitterClaimId());
        verify(outbound);
    }
}
There are a lot of domain-specific objects, so I can't run this to reproduce or find some other issue with your code.
But I see that you don't use @EnableIntegration on your @Configuration classes.
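Concretely, the fix the answer points at would be adding that annotation to the configuration shown above (a sketch of just the class header; the bean definitions stay as they are):

@Configuration
@EnableIntegration
@ImportResource("classpath:retry-context.xml")
public class LifecycleConfig {
    // ... the same channel, transformer and flow beans as above
}

@EnableIntegration registers the Spring Integration infrastructure (channel initializers, the bus, etc.) that the DSL flow needs; without it, messages sent to the DirectChannel can end up with no subscribers, matching the "Dispatcher has no subscribers" error.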
