I have a piece of code where I am using an integration flow:
@Bean
public IntegrationFlow avlFlow() {
    return IntegrationFlows.from(channelName)
            .split(splitter)
            // An executor channel, with a fixed thread pool size.
            .channel(c -> c.executor(SRThreadUtils
                    .getNamedFixedSizeExecutorService(maxThreads, "ThreadPoolname")))
            .handle(requestHandler)
            .resequence()
            .aggregate(new Consumer<AggregatorSpec>() {
                @Override
                public void accept(final AggregatorSpec t) {
                    t.outputProcessor(requestAggregator);
                }
            })
            .get();
}
The flow creates threads via this executor, which never gets shut down, so those threads stay alive forever. How can the executor service be closed in this integration flow?
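A minimal sketch of one possible fix (my assumption, not from the original post): register the executor as a Spring-managed bean with a destroy method, so it is shut down when the application context closes, and reference that bean from the flow. The bean name avlExecutor is hypothetical; SRThreadUtils, maxThreads and the rest are taken from the question.
@Bean(destroyMethod = "shutdown")
public ExecutorService avlExecutor() {
    // Same pool as before, but its lifecycle is now owned by Spring.
    return SRThreadUtils.getNamedFixedSizeExecutorService(maxThreads, "ThreadPoolname");
}
@Bean
public IntegrationFlow avlFlow(ExecutorService avlExecutor) {
    return IntegrationFlows.from(channelName)
            .split(splitter)
            .channel(c -> c.executor(avlExecutor)) // reuse the managed executor
            .handle(requestHandler)
            .resequence()
            .aggregate(a -> a.outputProcessor(requestAggregator))
            .get();
}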
I currently have an SQS listener implemented in a Spring Boot project running on Fargate.
It's possible, though, that under the hood the SqsAsyncClient, which appears to be a listener, is actually polling.
Separately, as a PoC, I implemented a Lambda function trigger on a different queue. It would be invoked when there are items in the queue and would post to my service. This seems unnecessarily complex to me, but it removes a single point of failure if I were to have only one instance of the service.
I guess my major point of confusion is whether I am needlessly worrying about polling vs. listening on an SQS queue, and whether it matters.
Code for example purposes:
@Component
@Slf4j
@RequiredArgsConstructor
public class SqsListener {
private final SqsAsyncClient sqsAsyncClient;
private final Environment environment;
private final SmsMessagingServiceImpl smsMessagingService;
@PostConstruct
public void continuousListener() {
String queueUrl = environment.getProperty("aws.sqs.sms.queueUrl");
Mono<ReceiveMessageResponse> responseMono = receiveMessage(queueUrl);
Flux<Message> messages = getItems(responseMono);
messages.subscribe(message -> disposeOfFlux(message, queueUrl));
}
protected Flux<Message> getItems(Mono<ReceiveMessageResponse> responseMono) {
return responseMono.repeat().retry() // re-subscribes indefinitely: effectively an infinite receive loop
.map(ReceiveMessageResponse::messages)
.map(Flux::fromIterable)
.flatMap(messageFlux -> messageFlux);
}
protected void disposeOfFlux(Message message, String queueUrl) {
log.info("Inbound SMS Received from SQS with MessageId: {}", message.messageId());
if (someConditionIsMet())
deleteMessage(queueUrl, message);
}
protected Mono<ReceiveMessageResponse> receiveMessage(String queueUrl) {
return Mono.fromFuture(() -> sqsAsyncClient.receiveMessage(
ReceiveMessageRequest.builder()
.maxNumberOfMessages(5)
.messageAttributeNames("All")
.queueUrl(queueUrl)
.waitTimeSeconds(10) // long polling: SQS holds the request open for up to 10s
.visibilityTimeout(30)
.build()));
}
protected void deleteMessage(String queueUrl, Message message) {
sqsAsyncClient.deleteMessage(DeleteMessageRequest.builder()
.queueUrl(queueUrl)
.receiptHandle(message.receiptHandle())
.build())
.thenAccept(deleteMessageResponse -> log.info("deleted message with handle {}", message.receiptHandle()));
}
}
I have a Spring Integration flow that gets triggered once a day; it pulls all parties from the database and sends each party to an executor channel.
The next flow pulls data for each party and then processes them in parallel by sending them to a different executor channel.
The challenge I'm facing is how to know when this entire process ends. Any ideas on how to achieve this?
Here's my pseudo-code of the executor channels and integration flows.
@Bean
public IntegrationFlow fileListener() {
    return IntegrationFlows.from(Files.inboundAdapter(new
            File("pathtofile"))).channel("mychannel").get();
}
@Bean
public IntegrationFlow flowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("mychannel").handle("serviceHandlerOne",
            "handle").nullChannel();
}
@Bean
public IntegrationFlow parallelFlowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelOne").handle("parallelServiceHandlerOne",
            "handle").nullChannel();
}
@Bean
public IntegrationFlow parallelFlowTwo() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelTwo").handle("parallelServiceHandlerTwo",
            "handle").nullChannel();
}
@Bean
public MessageChannel executorChannelOne() {
    return new ExecutorChannel(
            Executors.newFixedThreadPool(10));
}
@Bean
public MessageChannel executorChannelTwo() {
    return new ExecutorChannel(
            Executors.newFixedThreadPool(10));
}
@Component
@Scope("prototype")
public class ServiceHandlerOne {
    @Autowired
    MessageChannel executorChannelOne;
    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("parties");
        rowDatas.stream().forEach(data -> {
            // Renamed from "message" to avoid shadowing the method parameter.
            Message<?> rowMessage = MessageBuilder.withPayload(data).build();
            executorChannelOne.send(rowMessage);
        });
        return message;
    }
}
@Component
@Scope("prototype")
public class ParallelServiceHandlerOne {
    @Autowired
    MessageChannel executorChannelTwo;
    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("party");
        rowDatas.stream().forEach(data -> {
            Message<?> rowMessage = MessageBuilder.withPayload(data).build();
            executorChannelTwo.send(rowMessage);
        });
        return message;
    }
}
First of all, there is no reason to make your services @Scope("prototype"): I don't see any state held in your services, so they are stateless and can simply be singletons. Second: since you end your flows with nullChannel(), there is no point in returning anything from your service methods. Just make them void and the flow will end there naturally.
Another observation: you call executorChannelOne.send(message) directly in the code of your service method. The same thing can be achieved by simply returning that new message from your service method and making executorChannelOne the next .channel() in your flow definition after handle("parallelServiceHandlerOne", "handle").
Since it looks like you do that in a loop, you might consider adding a .split() in between: the handler returns your List<?> rowDatas, and the splitter takes care of iterating over that data, sending each item to that executorChannelOne, as sketched below.
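A minimal sketch of that restructuring (my arrangement, using the bean names from the question; serviceHandlerOne.handle() is assumed to now return the List<?> instead of sending messages itself):
@Bean
public IntegrationFlow flowOne() {
    return IntegrationFlows.from("mychannel")
            .handle("serviceHandlerOne", "handle") // returns List<?> rowDatas
            .split()                               // one message per item, with sequence headers
            .channel("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            .get();
}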
Now about your original question.
There is really no easy way to tell that your executors are not busy any more. They might merely look idle at the moment you ask, because a message for a task has not reached an executor channel yet.
Typically we recommend using some async synchronizer for your data. The aggregator is a good way to correlate several in-flight messages: it collects a group and does not emit a reply until that group is complete.
The splitter I mentioned above adds sequence-detail headers by default, so a subsequent aggregator can track a message group easily.
Since you have layers in your flow, it looks like you need several aggregators: two for your executor channels after splitting, and one top-level aggregator for the file. Those two reply to the top-level one for the final, per-file grouping.
You might also think about making those parties and party calls in parallel using a PublishSubscribeChannel, which can also be configured with applySequence=true. This info will then be used by the top-level aggregator for the per-file grouping.
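Putting that together, a hedged sketch of the completion signal (partyProcessing and the log call are hypothetical; this is my arrangement, not a drop-in flow): the splitter's default sequence headers let a downstream .aggregate() release exactly one message once every split item has been processed, and that release is the "entire process has ended" event.
@Bean
public IntegrationFlow partyProcessing() {
    return IntegrationFlows.from("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            .aggregate() // correlates on the splitter's default headers
            .handle(msg -> log.info("All parties for this run have been processed"))
            .get();
}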
See more in docs:
https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#splitter
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregator
I'm actively using ApplicationEventPublisher in my app, and the main result of some method executions is publishing an event with ApplicationEventPublisher.
I am using a simple trap for events in the test environment in order to collect events and verify them:
@Singleton
public class MessageListenerTestHelper {
    private final ConcurrentLinkedQueue<Object> queue = new ConcurrentLinkedQueue<>();
    @Async
    @EventListener
    public void onEvent(Object event) {
        queue.add(event);
    }
    public Queue<Object> getQueue() {
        return queue;
    }
    public <T> Future<T> getEventFromQueue(Class<T> eventClass) {
        CompletableFuture<T> future = new CompletableFuture<>();
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(() -> {
            Optional<Object> eventOpt = queue.stream()
                    .filter(eventClass::isInstance)
                    .findAny();
            if (eventOpt.isPresent()) {
                future.complete(eventClass.cast(eventOpt.get()));
            }
        }, 100, 100, TimeUnit.MILLISECONDS);
        // Stop the poller once the future completes, so each call does not leak a thread.
        future.whenComplete((result, error) -> executor.shutdown());
        return future;
    }
}
But my tests are flaky: they usually fail in GitHub Actions but work on my computer. So I want to fix this by mocking ApplicationEventPublisher, but the @Replaces annotation doesn't work. I tried it both in the test and in a factory available only in the test environment, but neither worked.
I am about to give up on the @MicronautTest annotation and inject mocks manually. But maybe there is another choice?
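For reference, a hedged sketch of the usual Micronaut testing pattern (assuming micronaut-test with Mockito on the classpath; SomeService and the test names are hypothetical): a @MockBean factory method declared inside the @MicronautTest class replaces the bean within that test's context, which often works where a standalone @Replaces bean does not:
@MicronautTest
class SomeServiceTest {
    @Inject
    ApplicationEventPublisher eventPublisher; // the mock gets injected here

    // Replaces the real publisher for this test's application context only.
    @MockBean(ApplicationEventPublisher.class)
    ApplicationEventPublisher eventPublisher() {
        return Mockito.mock(ApplicationEventPublisher.class);
    }

    @Test
    void publishesEvent(SomeService service) {
        service.doWork();
        Mockito.verify(eventPublisher).publishEvent(Mockito.any());
    }
}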
I am trying to create a few instances of a state machine from one UML model. I use stateMachineFactory. I would like these machines to work independently and asynchronously.
Everything works great if I use only "base" states. Machine instances can independently go to stateB and stateC. However, when I use regions and sub-states (stateD), machine instances execute the action (insideStateD1) one after another. Please look at .
I've found that states are executed via the stateMachineTaskExecutor (which defaults to SyncTaskExecutor) but sub-states are executed via the taskScheduler (which defaults to ConcurrentTaskScheduler).
This is the configuration:
@Configuration
@EnableStateMachineFactory
public class StateMachineConfig extends StateMachineConfigurerAdapter<String, String> {
    @Autowired
    StateMachineComponentResolver<String, String> stateMachineComponentResolver;
    @Bean
    public StateMachineModelFactory<String, String> modelFactory() {
        UmlStateMachineModelFactory umlStateMachineModelFactory = new UmlStateMachineModelFactory("classpath:uml/testSM1.uml");
        umlStateMachineModelFactory.setStateMachineComponentResolver(stateMachineComponentResolver);
        return umlStateMachineModelFactory;
    }
    @Override
    public void configure(StateMachineModelConfigurer<String, String> model) throws Exception {
        model
            .withModel()
            .factory(modelFactory());
    }
    @Override
    public void configure(StateMachineConfigurationConfigurer<String, String> config) throws Exception {
        config
            .withConfiguration()
            // .taskExecutor() // I tried various taskExecutors
            // .taskScheduler() // I tried various taskSchedulers
        ;
    }
}
What is the correct way to achieve many instances of a state machine from the same model?
Multiple instances of a SM can be obtained from the StateMachineFactory:
stateMachineFactory.getStateMachine(); // builds a new state machine
The configuration that you created in StateMachineConfig applies to all SM instances.
Spring State Machine uses a TaskExecutor for region executions (no matter whether top-level or nested regions), and by default it is synchronous. To achieve async execution you need to override the default task executor. This can be done in the configuration:
@Override
public void configure(StateMachineConfigurationConfigurer<States, Events> config) throws Exception {
    config
        .withConfiguration()
        //other configs
        .taskExecutor(myAsyncTaskExecutor());
}
public TaskExecutor myAsyncTaskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(5);
    taskExecutor.initialize(); // required when the executor is not a Spring bean
    return taskExecutor;
}
or by declaring a bean:
@Bean(name = StateMachineSystemConstants.TASK_EXECUTOR_BEAN_NAME)
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(5);
    return taskExecutor;
}
TaskScheduler is used for action executions (actions associated with states or transitions) and not for sub-states.
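If action execution itself ever needs tuning, the scheduler can seemingly be overridden the same way; a sketch mirroring the commented-out .taskScheduler() line in the question's config (the pool size is an arbitrary example):
@Override
public void configure(StateMachineConfigurationConfigurer<String, String> config) throws Exception {
    ThreadPoolTaskScheduler taskScheduler = new ThreadPoolTaskScheduler();
    taskScheduler.setPoolSize(5);
    taskScheduler.initialize(); // required when created outside the Spring context
    config
        .withConfiguration()
        .taskScheduler(taskScheduler);
}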
Can anybody tell me whether there is a way of using the Spring Framework's @Async annotation without blocking/waiting on the result? Here is some code to clarify my question:
@Service
public class AsyncServiceA {
    @Autowired
    private AsyncServiceB asyncServiceB;
    @Async
    public CompletableFuture<String> a() {
        ThreadUtil.silentSleep(1000);
        return asyncServiceB.b();
    }
}
@Service
public class AsyncServiceB {
    @Async
    public CompletableFuture<String> b() {
        ThreadUtil.silentSleep(1000);
        return CompletableFuture.completedFuture("Yeah, I come from another thread.");
    }
}
and the configuration:
@SpringBootApplication
@EnableAsync
public class Application implements AsyncConfigurer {
    private static final Log LOG = LogFactory.getLog(Application.class);
    private static final int THREAD_POOL_SIZE = 1;
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
    @Bean
    public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
        return args -> {
            final AsyncServiceA bean = ctx.getBean(AsyncServiceA.class);
            bean.a().whenComplete(LOG::info);
        };
    }
    @Override
    @Bean(destroyMethod = "shutdown")
    public ThreadPoolTaskExecutor getAsyncExecutor() {
        final ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(THREAD_POOL_SIZE);
        executor.setMaxPoolSize(THREAD_POOL_SIZE);
        executor.initialize();
        return executor;
    }
    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        // omitted
    }
}
When I run the application, the executor goes through calling AsyncServiceA.a() and leaves, but it still holds the thread from the pool, waiting on the CompletableFuture.get() method. Since there is just a single thread in the pool, AsyncServiceB.b() cannot be executed. What I expect is for that thread to be returned to the pool after it executes AsyncServiceA.a() and then be available to execute AsyncServiceB.b().
Is there a way to do that?
Note 1: I've tried with ListenableFuture also, but the result is the same.
Note 2: I successfully did it manually (without @Async) by giving the executor to each method, like so:
AsyncServiceA
public CompletableFuture<String> manualA(Executor executor) {
return CompletableFuture.runAsync(() -> {
LOG.info("manualA() working...");
ThreadUtil.silentSleep(1000);
}, executor)
.thenCompose(x -> asyncServiceB.manualB(executor));
}
AsyncServiceB
public CompletableFuture<String> manualB(Executor executor) {
return CompletableFuture.runAsync(() -> {
LOG.info("manualB() working...");
ThreadUtil.silentSleep(1000);
}, executor)
.thenCompose(x -> CompletableFuture
.supplyAsync(() -> "Yeah, I come from another thread.", executor));
}
Here is the ThreadUtil if someone was wondering.
public class ThreadUtil {
public static void silentSleep(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
}
Update: Added issue for non-blocking Async annotation https://jira.spring.io/browse/SPR-15401
The @Async support has been part of Spring since Spring 3.0, which was way before the existence of Java 8 (or 7, for that matter). Although support for CompletableFuture was added in later versions, it is still meant for simple async execution of a method call. (The initial implementation reflects/shows the call.)
The async support was never designed or intended for composing callbacks and operating in a non-blocking fashion.
For non-blocking support you will want to wait for Spring 5 with its reactive/non-blocking core; next to that, you can always submit a ticket for non-blocking support in the async support.
I've responded on the ticket https://jira.spring.io/browse/SPR-15401, but I'll respond here as well to qualify the response by M. Deinum.
@Async, by virtue of how it works (decorating the method call via AOP), can only do one thing: turn the entire method from sync to async. That means the method has to be sync, not a mix of sync and async.
So ServiceA, which does some sleeping and then delegates to the async ServiceB, would have to wrap the sleeping part in some @Async ServiceC and then compose on ServiceB and C. That way ServiceA becomes async and does not need to have the @Async annotation itself, as sketched below.
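A hedged sketch of that restructuring (AsyncServiceC is hypothetical; it follows the pattern of the question's own services):
@Service
public class AsyncServiceC {
    @Async
    public CompletableFuture<Void> sleep() {
        ThreadUtil.silentSleep(1000); // the blocking part, isolated in its own @Async method
        return CompletableFuture.completedFuture(null);
    }
}
@Service
public class AsyncServiceA {
    @Autowired
    private AsyncServiceB asyncServiceB;
    @Autowired
    private AsyncServiceC asyncServiceC;
    // No @Async here: this method only composes futures and returns immediately,
    // so it never parks a pool thread waiting on another task.
    public CompletableFuture<String> a() {
        return asyncServiceC.sleep()
                .thenCompose(x -> asyncServiceB.b());
    }
}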