I don't understand how SQS QueueListener works.
This is my config:
/**
* AWS Credentials Bean
*/
@Bean
public AWSCredentials awsCredentials() {
return new BasicAWSCredentials(accessKey, secretAccessKey);
}
/**
* AWS Client Bean
*/
@Bean
public AmazonSQS amazonSQSAsyncClient() {
AmazonSQS sqsClient = new AmazonSQSClient(awsCredentials());
sqsClient.setRegion(Region.getRegion(Regions.US_EAST_1));
return sqsClient;
}
/**
* AWS Connection Factory
*/
@Bean
public SQSConnectionFactory connectionFactory() {
SQSConnectionFactory.Builder factoryBuilder = new SQSConnectionFactory.Builder(
Region.getRegion(Regions.US_EAST_1));
factoryBuilder.setAwsCredentialsProvider(new AWSCredentialsProvider() {
@Override
public AWSCredentials getCredentials() {
return awsCredentials();
}
@Override
public void refresh() {
}
});
return factoryBuilder.build();
}
/**
* Registering QueueListener for queueName
*/
@Bean
public DefaultMessageListenerContainer defaultMessageListenerContainer() {
DefaultMessageListenerContainer messageListenerContainer = new DefaultMessageListenerContainer();
messageListenerContainer.setConnectionFactory(connectionFactory());
messageListenerContainer.setDestinationName(queueName);
messageListenerContainer.setMessageListener(new MessageListenerAdapter(new LabQueueListener()));
messageListenerContainer.setErrorHandler(new QueueListenerErrorHandler());
messageListenerContainer.setTaskExecutor(Executors.newFixedThreadPool(3));
return messageListenerContainer;
}
As you can see, I have configured my DefaultMessageListenerContainer with Executors.newFixedThreadPool(3).
This way I expect to have three concurrent task executions in my queue listener at any one time.
This is my listener logic:
public class QueueListener {
public void handleMessage(String messageContent) {
try {
logger.info(String.format("message received: %s", messageContent));
logger.info("wait 30 sec");
Thread.sleep(1000 * 30);
logger.info("done");
} catch (Throwable th) {
throw new QueueListenerException(messageContent, th);
}
}
}
Right now each handleMessage invocation blocks (Thread.sleep(1000 * 30)) for 30 seconds, and only one handleMessage executes at a time.
What am I doing wrong?
How can I achieve concurrent handleMessage invocations?
With the current configuration I expect three handleMessage invocations to run simultaneously.
You can enable concurrent execution in the bean for DefaultMessageListenerContainer by adding messageListenerContainer.setConcurrency("3-10"); this means it will start with 3 consumer threads and scale up to 10.
Alternatively, the number of concurrent consumers can be fixed with messageListenerContainer.setConcurrentConsumers(3);
Refer: https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/listener/DefaultMessageListenerContainer.html#setConcurrency-java.lang.String-
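Putting that together, here is a minimal sketch of the container bean above with concurrency configured (assuming the same connectionFactory(), queueName, listener, and error handler as in the question). Note that by default the container starts a single consumer, so supplying a 3-thread task executor by itself does not increase concurrency:
@Bean
public DefaultMessageListenerContainer defaultMessageListenerContainer() {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory());
    container.setDestinationName(queueName);
    container.setMessageListener(new MessageListenerAdapter(new LabQueueListener()));
    container.setErrorHandler(new QueueListenerErrorHandler());
    // Start with 3 concurrent consumers and scale up to 10 under load;
    // setConcurrentConsumers(3) would pin the count at exactly 3 instead.
    container.setConcurrency("3-10");
    return container;
}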
A related question:
The configuration class (partial):
public static RabbitQueueConfig clubProNotAvailableConfig =
new RabbitQueueConfig("club-pro-not-available", "club-pro-not-available", "club-pro-not-available-status", "3-3");
@Bean
public SimpleMessageListenerContainer listenerContainer5(ClubProNotAvailableListener listener, ConnectionFactory connectionFactory) {
return initListenerContainer(listener, clubProNotAvailableConfig, connectionFactory);
}
private SimpleMessageListenerContainer initListenerContainer(
ChannelAwareMessageListener listener,
RabbitQueueConfig config,
ConnectionFactory connectionFactory
) {
SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
listenerContainer.setConnectionFactory(connectionFactory);
listenerContainer.setQueueNames(config.getQueue());
listenerContainer.setMessageListener(listener);
listenerContainer.setAcknowledgeMode(AcknowledgeMode.MANUAL);
listenerContainer.setConcurrency(config.getThreadPoolSize());
listenerContainer.setPrefetchCount(1);
return listenerContainer;
}
Method of sending a message:
try {
success = clientRepository.updateAnketa(privatePersonProfile.getProfileId(), clubProAnketa, null);
} catch (ClubProNotAvailableException e) {
ClubProNotAvailableRabbit clubProNotAvailableRabbit = new ClubProNotAvailableRabbit();
clubProNotAvailableRabbit.setRequestContextRabbit(RequestContextRabbit.createContext(requestContextService.getContext()));
clubProNotAvailableRabbit.setCountRetry(0L);
clubProNotAvailableRabbit.setProfileId(privatePersonProfile.getProfileId());
clubProNotAvailableRabbit.setNameMethod(ChangeMethod.CHANGE_ANKETA);
clubProNotAvailableRabbit.setChangeAnketaData(anketa);
rabbitTemplate.convertAndSend(config.getExchange(), config.getRoutingKey(), clubProNotAvailableRabbit, new MessagePostProcessor() {
@Override
public Message postProcessMessage(Message message) throws AmqpException {
message.getMessageProperties().setHeader("x-delay", 10000);
return message;
}
});
throw new ClubProNotAvailableException();
}
Configuration in the broker (the queue and exchange screenshots from the original post are omitted).
I've read the documentation, tried a couple of options, but I can't apply it to my code.
What am I doing wrong? I will be very grateful for your help.
It looks like you don't have the delayed exchange plugin installed; you have also declared the exchange as a simple fanout rather than a delayed-message exchange (the original answer showed a screenshot of the correct declaration).
Also, to set the delay when sending, you should use:
template.convertAndSend(exchangeName, queue.getName(), "foo", message -> {
message.getMessageProperties().setDelay(1000);
return message;
});
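For reference, a sketch of how such an exchange can be declared with Spring AMQP, assuming the rabbitmq_delayed_message_exchange plugin is enabled (the exchange, queue, and routing-key names are taken from the config above):
@Bean
public CustomExchange clubProNotAvailableExchange() {
    Map<String, Object> args = new HashMap<>();
    // The plugin needs to know which routing behavior to emulate once the delay expires.
    args.put("x-delayed-type", "fanout");
    // The "x-delayed-message" type marks this as a delayed-message exchange.
    return new CustomExchange("club-pro-not-available", "x-delayed-message", true, false, args);
}
@Bean
public Binding clubProNotAvailableBinding() {
    return BindingBuilder.bind(new Queue("club-pro-not-available"))
            .to(clubProNotAvailableExchange())
            .with("club-pro-not-available")
            .noargs();
}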
I am trying to implement an integration flow for an SQS queue using a void async service activator, but the handling logic is never triggered.
The message is received in the flow and successfully converted by my custom transformer, but the async handling never completes.
This is my configuration class:
@Configuration
public class SqsConfiguration {
/**
...
...
**/
@Bean("amazonSQSClientConfiguration")
ClientConfiguration getAmazonSQSClientConfiguration() {
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setConnectionTimeout(connectionTimeout);
clientConfiguration.setMaxConnections(maxConnections);
clientConfiguration.setSocketTimeout(socketTimeout);
clientConfiguration.setMaxConsecutiveRetriesBeforeThrottling(maxConsecutiveRetriesBeforeThrottling);
return clientConfiguration;
}
@Bean("amazonSQSAsync")
AmazonSQSAsync getAmazonSQSAsync() {
return AmazonSQSAsyncClientBuilder.standard()
.withClientConfiguration(getAmazonSQSClientConfiguration())
.withRegion(this.region)
.build();
}
@Bean("amazonSQSRequestListenerContainerConsumerPool")
protected ThreadPoolTaskExecutor amazonSQSRequestListenerContainerConsumerPool() {
int maxSize = (int) Math.round(concurrentHandlers * poolSizeFactor);
int queueCapacity = (int) Math.round(concurrentHandlers * poolQueueSizeFactor);
ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setCorePoolSize(concurrentHandlers);
taskExecutor.setMaxPoolSize(maxSize);
taskExecutor.setKeepAliveSeconds(poolKeepAliveTimeSeconds);
taskExecutor.setQueueCapacity(queueCapacity);
taskExecutor.setThreadFactory(new NamedDaemonThreadFactory("AmazonSQSRequestHandler"));
taskExecutor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
log.info(
String.format(
"Amazon SQS request handler pool settings: {coreSize: %d, maxSize: %d, queueCapacity: %d}",
concurrentHandlers,
maxSize,
queueCapacity
)
);
return taskExecutor;
}
@Bean("sqsMessageDrivenChannelAdapter")
public MessageProducerSupport sqsMessageDrivenChannelAdapter() {
SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(getAmazonSQSAsync(), this.queueName);
adapter.setMaxNumberOfMessages(this.maxNumberOfMessages);
adapter.setVisibilityTimeout(this.visibilityTimeout);
adapter.setSendTimeout(this.sendTimeout);
adapter.setWaitTimeOut(this.waitTimeOut);
adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.ON_SUCCESS);
adapter.setTaskExecutor(amazonSQSRequestListenerContainerConsumerPool());
return adapter;
}
@Bean
@SuppressWarnings("unchecked")
IntegrationFlow sqsRequestIntegrationFlow() {
SqsEventHandlerDispatcher commandHandler = applicationContext.getBean(SqsEventHandlerDispatcher.class);
return IntegrationFlows.from(sqsMessageDrivenChannelAdapter())
.transform(converter::toEvent)
.log()
.handle(commandHandler, "handle", a -> a.async(true))
.log()
.get();
}
}
This is my handler:
@Slf4j
@Component
@MessageEndpoint
public class SqsEventHandlerDispatcher {
/**
...
...
**/
public ListenableFuture<?> handle(EventMessage event) {
// Note: this creates a FutureTask, but nothing ever calls run() on it,
// which is why doHandle() below is never reached.
return new ListenableFutureTask<Void>(() -> doHandle(event), null);
}
private void doHandle(EventMessage event) {
//my handling logic
}
}
The logic in the doHandle() method is never reached.
The same integration flow with a sync handler that returns void works perfectly:
@Bean
@SuppressWarnings("unchecked")
IntegrationFlow sqsRequestIntegrationFlow() {
SqsEventHandlerDispatcher commandHandler = applicationContext.getBean(SqsEventHandlerDispatcher.class);
return IntegrationFlows.from(sqsMessageDrivenChannelAdapter())
.transform(converter::toEvent)
.log()
.handle(commandHandler, "handle")
.log()
.get();
}
===============================================================================
@Slf4j
@Component
@MessageEndpoint
public class SqsEventHandlerDispatcher {
public void handle(EventMessage event) {
//my handling logic
}
}
Am I missing something? Or can I achieve this by using Mono?
I don't have much experience with either Spring Integration or async processing.
I found a solution using reactive Java (Project Reactor).
This is how my service activator looks now:
public Mono handle(EventMessage event, @Header(AwsHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
return Mono.fromRunnable(() -> doHandle(event)).subscribeOn(Schedulers.elastic())
.doOnSuccess(r -> {
log.trace("Message successfully processed. Will delete it now!");
acknowledgment.acknowledge();
});
}
private void doHandle(EventMessage event) {
//my handling logic
}
I've also updated the SQS message deletion policy to NEVER and acknowledge manually once a message has been successfully processed and can be deleted; the async support subscribes to the returned Mono, which is why doHandle() now runs.
adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.NEVER);
I'm running into this bug (more info here), which appears to mean that for multi-threaded batches using remote chunking you can't use a common response channel. I'm not exactly sure how to proceed, but surely there's a way to get this working; without it I can't see much benefit to remote chunking.
Here's my DSL config that creates a JobRequest:
@Bean
IntegrationFlow newPollingJobsAdapter(JobLaunchingGateway jobLaunchingGateway) {
// Start by polling the DB for new PollingJobs according to the polling rate
return IntegrationFlows.from(jdbcPollingChannelAdapter(),
c -> c.poller(Pollers.fixedRate(10000)
// Do the polling on one of 10 threads.
.taskExecutor(Executors.newFixedThreadPool(10))
// pull out up to 100 new ids for each poll.
.maxMessagesPerPoll(100)))
.log(LoggingHandler.Level.WARN)
// The polling adapter above returns a list of ids. Split them out into
// individual ids
.split()
// Now push each one onto a separate thread for batch processing.
.channel(MessageChannels.executor(Executors.newFixedThreadPool(10)))
.log(LoggingHandler.Level.WARN)
// Transform each one into a JobLaunchRequest
.<Long, JobLaunchRequest>transform(id -> {
logger.warn("Creating job for ID {}", id);
JobParametersBuilder builder = new JobParametersBuilder()
.addLong("polling-job-id", id, true);
return new JobLaunchRequest(job, builder.toJobParameters());
})
.handle(jobLaunchingGateway)
// TODO: Notify somebody? No idea yet
.<JobExecution>handle(exec -> System.out.println("GOT EXECUTION: " + exec))
.get();
}
Nothing in here is particularly special, no odd configs that I'm aware of.
The job itself is pretty straight-forward, too:
/**
* This is the definition of the entire batch process that runs polling.
* @return
*/
@Bean
Job pollingJobJob() {
return jobBuilderFactory.get("pollingJobJob")
.incrementer(new RunIdIncrementer())
// Ship it down to the slaves for actual processing
.start(remoteChunkingStep())
// Now mark it as complete
.next(markCompleteStep())
.build();
}
/**
* Sends the job to a remote slave via an ActiveMQ-backed JMS queue.
*/
@Bean
TaskletStep remoteChunkingStep() {
return stepBuilderFactory.get("polling-job-step-remote-chunking")
.<Long, String>chunk(20)
.reader(runningPollingJobItemReader)
.processor(toJsonProcessor())
.writer(chunkWriter)
.build();
}
/**
* This step just marks the PollerJob as Complete.
*/
@Bean
Step markCompleteStep() {
return stepBuilderFactory.get("polling-job-step-mark-complete")
// We want each PollerJob instance to be a separate job in batch, and the
// reader is using the id passed in via job params to grab the one we want,
// so we don't need a large chunk size. One at a time is fine.
.<Long, Long>chunk(1)
.reader(runningPollingJobItemReader)
.processor(new PassThroughItemProcessor<Long>())
.writer(this.completeStatusWriter)
.build();
}
Here's the chunk writer config:
/**
* This is part of the bridge between the spring-batch and spring-integration. Nothing special or weird is going
* on, so see the RemoteChunkHandlerFactoryBean for a description.
*/
@Bean
RemoteChunkHandlerFactoryBean<PollerJob> remoteChunkHandlerFactoryBean() {
RemoteChunkHandlerFactoryBean<PollerJob> factory = new RemoteChunkHandlerFactoryBean<>();
factory.setChunkWriter(chunkWriter);
factory.setStep(remoteChunkingStep());
return factory;
}
/**
* This is the writer that will actually send the chunk to the slaves. Note that it also configures the
* internal channel on which replies are expected.
*/
@Bean
@StepScope
ChunkMessageChannelItemWriter<String> chunkWriter() {
ChunkMessageChannelItemWriter<String> writer = new ChunkMessageChannelItemWriter<>();
writer.setMessagingOperations(batchMessagingTemplate());
writer.setReplyChannel(batchResponseChannel());
writer.setThrottleLimit(1000);
return writer;
}
The problem seems to be that the last section sets up the ChunkMessageChannelItemWriter such that the replyChannel is the same one used by all of the writers, despite each writer being step-scoped. It seems I need to add a replyChannel header to one of the messages, but I'm not sure where in the chain to do that or how to process it (if I need to at all).
Also, this is being sent to the slaves via JMS/ActiveMQ and I'd like to avoid having just a stupid number of nearly-identical queues on ActiveMQ just to support this.
What are my options?
Given that you are using a shared JMS infrastructure, you will need a router to get the responses back to the correct chunk writer.
If you use prototype scope on the batchResponseChannel() @Bean, you'll get a unique channel for each writer.
I don't have time to figure out how to set up a chunked batch job, so the following simulates your environment (a non-singleton bean that needs a unique reply channel for each instance). Hopefully it's self-explanatory...
@SpringBootApplication
public class So44806067Application {
public static void main(String[] args) {
ConfigurableApplicationContext context = SpringApplication.run(So44806067Application.class, args);
SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker1 = context
.getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker2 = context
.getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
if (chunker1.equals(chunker2)) {
throw new IllegalStateException("Expected different instances");
}
chunker1.sendSome();
chunker2.sendSome();
ChunkResponse results = chunker1.getResults();
if (results == null) {
throw new IllegalStateException("No results1");
}
if (results.getJobId() != 1L) {
throw new IllegalStateException("Incorrect routing1");
}
results = chunker2.getResults();
if (results == null) {
throw new IllegalStateException("No results2");
}
if (results.getJobId() != 2L) {
throw new IllegalStateException("Incorrect routing2");
}
context.close();
}
@Bean
public Map<Long, PollableChannel> registry() {
// TODO: should clean up entry for jobId when job completes.
return new ConcurrentHashMap<>();
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker() {
MessagingTemplate template = template();
final PollableChannel replyChannel = replyChannel();
SomeNonSingletonNeedingDistinctRequestAndReplyChannels bean =
new SomeNonSingletonNeedingDistinctRequestAndReplyChannels(template, replyChannel);
AbstractSubscribableChannel requestChannel = (AbstractSubscribableChannel) template.getDefaultDestination();
requestChannel.addInterceptor(new ChannelInterceptorAdapter() {
@Override
public Message<?> preSend(Message<?> message, MessageChannel channel) {
registry().putIfAbsent(((ChunkRequest<?>) message.getPayload()).getJobId(), replyChannel);
return message;
}
});
BridgeHandler bridge = bridge();
requestChannel.subscribe(bridge);
return bean;
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public MessagingTemplate template() {
MessagingTemplate messagingTemplate = new MessagingTemplate();
messagingTemplate.setDefaultChannel(requestChannel());
return messagingTemplate;
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public DirectChannel requestChannel() {
return new DirectChannel();
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public PollableChannel replyChannel() {
return new QueueChannel();
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public BridgeHandler bridge() {
BridgeHandler bridgeHandler = new BridgeHandler();
bridgeHandler.setOutputChannel(outboundChannel());
return bridgeHandler;
}
@Bean
public DirectChannel outboundChannel() {
return new DirectChannel();
}
@Bean
public DirectChannel masterReplyChannel() {
return new DirectChannel();
}
@ServiceActivator(inputChannel = "outboundChannel")
public void simulateJmsChannelAdapterPair(ChunkRequest<?> request) {
masterReplyChannel()
.send(new GenericMessage<>(new ChunkResponse(request.getSequence(), request.getJobId(), null)));
}
@Router(inputChannel = "masterReplyChannel")
public MessageChannel route(ChunkResponse reply) {
// TODO: error checking - missing reply channel for jobId
return registry().get(reply.getJobId());
}
public static class SomeNonSingletonNeedingDistinctRequestAndReplyChannels {
private final static AtomicLong jobIds = new AtomicLong();
private final long jobId = jobIds.incrementAndGet();
private final MessagingTemplate template;
private final PollableChannel replyChannel;
public SomeNonSingletonNeedingDistinctRequestAndReplyChannels(MessagingTemplate template,
PollableChannel replyChannel) {
this.template = template;
this.replyChannel = replyChannel;
}
public void sendSome() {
ChunkRequest<String> cr = new ChunkRequest<>(0, Collections.singleton("foo"), this.jobId, null);
this.template.send(new GenericMessage<>(cr));
}
public ChunkResponse getResults() {
@SuppressWarnings("unchecked")
Message<ChunkResponse> received = (Message<ChunkResponse>) this.replyChannel.receive(10_000);
if (received != null) {
if (received.getPayload().getJobId().equals(this.jobId)) {
System.out.println("Got the right one");
}
else {
System.out.println(
"Got the wrong one " + received.getPayload().getJobId() + " instead of " + this.jobId);
}
return received.getPayload();
}
return null;
}
}
}
I have 1500 records that I'm breaking up into smaller groups (~250) for asynchronous processing with JMS.
1500 is not a fixed value, though; for each client it can be more or less, and in some cases there can be 8000 products or more. I will have N clients doing this operation one, two, three, or four times per day.
I have been breaking the records into smaller groups to avoid a single transaction with 1500 records.
I need to start a follow-up task only when all parts have been processed (all 1500).
How can I do this? I'm using Spring 4, JMS 2, and HornetQ, and for now I'm using annotation-based config.
Maybe JMS is not the right tool for this problem; I need help with that too. I have an XML file (from a web service) with 1500 products (code, price, stock, stock_local, title) and I have to persist all of them.
After, and only after, all of them are processed, I need to start the task that updates the Stock and Price values of each product (in a remote system), based on the newly stored values (along with some other conditions).
The code:
// In some RestController:
Lists.partition(newProducts, 250).forEach(listPart ->
        myQueue.add(createMessage(Lists.newArrayList(listPart))));
// Called several times; each message contains a list of 250 products to persist.
public void add(ProductsMessage message) {
    this.jmsTemplate.send(QUEUE_NAME, session -> session.createObjectMessage(message));
}
@JmsListener(destination = QUEUE_NAME)
public void importProducts(ProductsMessage message) {
    // Here I read message.getList() and persist all 250 products.
}
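One way to picture the completion requirement (a hypothetical sketch, not from the original post): have each message carry an import id and the total number of parts, and let a shared counter fire the follow-up task when importProducts persists the last part. The class and method names here are illustrative only:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ImportCompletionTracker {

    private final ConcurrentMap<String, AtomicInteger> processed = new ConcurrentHashMap<>();

    // Called by the JMS listener after a group of products has been persisted.
    public void groupDone(String importId, int totalGroups, Runnable onComplete) {
        int done = processed
                .computeIfAbsent(importId, id -> new AtomicInteger())
                .incrementAndGet();
        if (done == totalGroups) {
            processed.remove(importId);
            onComplete.run(); // e.g. start the remote stock/price update
        }
    }
}
With transactional listeners and message redelivery, a database-backed counter would be more robust, but the counting idea is the same.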
Current JMS config:
@Configuration
@EnableJms
public class JmsConfig {
public static final int DELIVERY_DELAY = 1000;
public static final int SESSION_CACHE_SIZE = 10;
@Bean
@Autowired
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(PlatformTransactionManager transactionManager) {
DefaultJmsListenerContainerFactory factory =
new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setDestinationResolver(destinationResolver());
factory.setConcurrency("1-2");
factory.setTransactionManager(transactionManager);
return factory;
}
@Bean
public DestinationResolver destinationResolver() {
return new DynamicDestinationResolver();
}
@Bean
public ConnectionFactory connectionFactory() {
TransportConfiguration transport = new TransportConfiguration(InVMConnectorFactory.class.getName());
ConnectionFactory originalConnectionFactory = HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transport);
CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
connectionFactory.setTargetConnectionFactory(originalConnectionFactory);
connectionFactory.setSessionCacheSize(SESSION_CACHE_SIZE);
return connectionFactory;
}
@Bean
public JmsTemplate template(ConnectionFactory connectionFactory) {
JmsTemplate template = new JmsTemplate();
template.setConnectionFactory(connectionFactory);
template.setDeliveryDelay(DELIVERY_DELAY);
template.setSessionTransacted(true);
return template;
}
/**
 * Starts an embedded JMS broker.
 */
@Bean(initMethod = "start", destroyMethod = "stop")
public EmbeddedJMS startJmsBroker() {
return new EmbeddedJMS();
}
}
I have a controller with a WebAsyncTask, and further on I'm using a timeout callback.
As written here, I should have an option to notify the Callable to cancel processing. However, I don't see any way to do so.
@Controller
public class UserDataProviderController {
private static final Logger log = LoggerFactory.getLogger(UserDataProviderController.class.getName());
@Autowired
private Collection<UserDataService> dataServices;
@RequestMapping(value = "/client/{socialSecurityNumber}", method = RequestMethod.GET)
public @ResponseBody
WebAsyncTask<ResponseEntity<CustomDataResponse>> process(@PathVariable final String socialSecurityNumber) {
final Callable<ResponseEntity<CustomDataResponse>> callable = new Callable<ResponseEntity<CustomDataResponse>>() {
@Override
public ResponseEntity<CustomDataResponse> call() throws Exception {
CustomDataResponse customDataResponse = CustomDataResponse.newInstance();
// Find user data
for (UserDataService dataService : dataServices) {
List<? extends DataClient> clients = dataService.findBySsn(socialSecurityNumber);
customDataResponse.put(dataService.getDataSource(), UserDataConverter.convert(clients));
}
// test long execution
Thread.sleep(4000);
log.info("Execution thread continued and shall be terminated: " + Thread.currentThread().getName());
HttpHeaders responseHeaders = new HttpHeaders();
responseHeaders.setContentType(new MediaType("application", "json", Charset.forName("UTF-8")));
return new ResponseEntity<>(customDataResponse, responseHeaders, HttpStatus.OK);
}
};
final Callable<ResponseEntity<CustomDataResponse>> callableTimeout = new Callable<ResponseEntity<CustomDataResponse>>() {
@Override
public ResponseEntity<CustomDataResponse> call() throws Exception {
// Error response
HttpHeaders responseHeaders = new HttpHeaders();
responseHeaders.setContentType(new MediaType("application", "json", Charset.forName("UTF-8")));
return new ResponseEntity("Request has timed out!",responseHeaders,HttpStatus.INTERNAL_SERVER_ERROR);
}
};
WebAsyncTask<ResponseEntity<CustomDataResponse>> task = new WebAsyncTask<>(3000,callable);
task.onTimeout(callableTimeout);
return task;
}
}
My WebConfig:
@Configuration
@EnableWebMvc
class WebAppConfig extends WebMvcConfigurerAdapter {
@Override
public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(5);
executor.setMaxPoolSize(10);
executor.setKeepAliveSeconds(60 * 60);
executor.afterPropertiesSet();
configurer.registerCallableInterceptors(new TimeoutCallableProcessingInterceptor());
configurer.setTaskExecutor(executor);
}
}
And a quite standard interceptor:
public class TimeoutCallableProcessingInterceptor extends CallableProcessingInterceptorAdapter {
@Override
public <T> Object handleTimeout(NativeWebRequest request, Callable<T> task) {
throw new IllegalStateException("[" + task.getClass().getName() + "] timed out");
}
}
Everything works as it should, but the Callable from the controller always runs to completion, which is to be expected; but how do I stop processing there?
You can use WebAsyncTask to implement the timeout control and thread management to stop the new async thread gracefully:
1. Implement a Callable to run the process.
2. In this method (which runs in a different thread), store the current Thread in a local variable of the controller.
3. Implement another Callable to handle the timeout event.
4. In that method, retrieve the previously stored Thread and interrupt it by calling its interrupt() method; also throw a TimeoutException to stop the controller processing.
5. In the running process, check whether the thread has been interrupted with Thread.currentThread().isInterrupted(); if so, roll back the transaction by throwing an exception.
Controller:
public WebAsyncTask<ResponseEntity<BookingFileDTO>> confirm(@RequestBody final BookingConfirmationRQDTO bookingConfirmationRQDTO)
throws AppException,
ProductException,
ConfirmationException,
BeanValidationException {
final Long startTimestamp = System.currentTimeMillis();
// The compiler requires the local variable shared with the callable to be final, hence the single-element array
final Thread[] asyncTaskThread = new Thread[1];
/**
* Asynchronous execution of the service's task
* Implemented without a dedicated ThreadPool; we're using Tomcat's ThreadPool.
* To implement a specific ThreadPool take a look at http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#mvc-ann-async-configuration-spring-mvc
*/
Callable<ResponseEntity<BookingFileDTO>> callableTask = () -> {
//Stores the thread of the newly started asynchronous task
asyncTaskThread[0] = Thread.currentThread();
log.debug("Running saveBookingFile task on thread {}", asyncTaskThread[0].getName());
BookingFileDTO bookingFileDTO = bookingFileService.saveBookingFile(
bookingConfirmationRQDTO,
MDC.get(HttpHeader.XB3_TRACE_ID))
.getValue();
if (log.isDebugEnabled()) {
log.debug("The saveBookingFile task took {} ms",
System.currentTimeMillis() - startTimestamp);
}
return new ResponseEntity<>(bookingFileDTO, HttpStatus.OK);
};
/**
* This method is executed if a timeout occurs
*/
Callable<ResponseEntity<BookingFileDTO>> callableTimeout = () -> {
String msg = String.format("Timeout detected at %d ms during confirm operation",
System.currentTimeMillis() - startTimestamp);
log.error("{}: informing BookingFileService.", msg);
// Informs the service that the time has run out
asyncTaskThread[0].interrupt();
// Interrupts the controller call
throw new TimeoutException(msg);
};
WebAsyncTask<ResponseEntity<BookingFileDTO>> webAsyncTask = new WebAsyncTask<>(timeoutMillis, callableTask);
webAsyncTask.onTimeout(callableTimeout);
log.debug("Timeout set to {} ms", timeoutMillis);
return webAsyncTask;
}
Service implementation:
/**
* If the service has been informed that the time has run out,
* throws a TimeoutException to roll back the transaction
*/
private void rollbackOnTimeout() throws TimeoutException {
if(Thread.currentThread().isInterrupted()) {
log.error(TIMEOUT_DETECTED_MSG);
throw new TimeoutException(TIMEOUT_DETECTED_MSG);
}
}
@Transactional(rollbackFor = TimeoutException.class, propagation = Propagation.REQUIRES_NEW)
DTOSimpleWrapper<BookingFileDTO> saveBookingFile(BookingConfirmationRQDTO bookingConfirmationRQDTO, String traceId) {
// Database operations
// ...
return retValue;
}