I want to use RxJava in my project, so I wrote a simple asynchronous method and am now trying to test it. I cannot test it with an Executor because I get: java.lang.AssertionError: Not completed! (0 completions)
Service method:
@Override
public Observable<User> addOrderAsynchronously(String username, Order order) {
    if (username != null && order != null) {
        return Observable
                .fromCallable(() -> userRepository.findByUsername(username))
                .filter(Objects::nonNull)
                .map(user -> {
                    synchronized (this) {
                        user.getOrders().add(order);
                        order.setUser(user);
                    }
                    return user;
                })
                .map(userRepository::save);
    }
    return Observable.empty();
}
Working test without Executor:
@Test
public void shouldSaveUserSynchronouslyTest() throws Exception {
    // given
    TestSubscriber<User> subscriber = new TestSubscriber<>();
    final Order order = new Order();
    final User user = new User();
    when(userRepository.findByUsername(user.getUsername()))
            .thenReturn(user);
    when(userRepository.save(any(User.class)))
            .thenReturn(user);
    // when
    userService.addOrderAsynchronously(user.getUsername(), order)
            .subscribe(subscriber);
    // then
    subscriber.assertCompleted();
    subscriber.assertNoErrors();
    assertThat(subscriber.getOnNextEvents()).isEqualTo(Collections.singletonList(user));
}
And now I want to test it with the executor:
@Test
public void shouldSaveUserAsynchronouslyTest() throws Exception {
    // given
    TestSubscriber<User> subscriber = new TestSubscriber<>();
    final Order order = new Order();
    final User user = new User();
    when(userRepository.findByUsername(user.getUsername()))
            .thenReturn(user);
    when(userRepository.save(any(User.class)))
            .thenReturn(user);
    // when
    userService.addOrderAsynchronously(user.getUsername(), order)
            .subscribeOn(Schedulers.from(executor))
            .subscribe(subscriber);
    // then
    subscriber.assertCompleted();
    subscriber.assertNoErrors();
    assertThat(subscriber.getOnNextEvents()).isEqualTo(Collections.singletonList(user));
}
This test doesn't work: I get java.lang.AssertionError: Not completed! (0 completions).
The executor comes from this @Bean method:
@Bean
public Executor executor() {
    final ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(8);
    executor.setMaxPoolSize(8);
    executor.setQueueCapacity(500);
    executor.initialize();
    return executor;
}
How can I write a unit test with the executor?
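For context on why the second test fails: with subscribeOn(Schedulers.from(executor)) the whole chain runs on a pool thread, so the assertions execute before onCompleted has fired. A minimal sketch of one fix, assuming RxJava 1.x (which the TestSubscriber/assertCompleted API suggests), is to block the test thread until a terminal event arrives:

// Hedged sketch (RxJava 1.x): wait for the terminal event before asserting,
// because the chain now completes on an executor thread, not the test thread.
userService.addOrderAsynchronously(user.getUsername(), order)
        .subscribeOn(Schedulers.from(executor))
        .subscribe(subscriber);

// Block up to 5 seconds for onCompleted/onError before running the assertions.
subscriber.awaitTerminalEvent(5, TimeUnit.SECONDS);

subscriber.assertCompleted();
subscriber.assertNoErrors();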
I am new to vertx and async programming.
I have 2 verticles communicating via an event bus as follows:
// API Verticle
public class SearchAPIVerticle extends AbstractVerticle {

    public static final String GET_USEARCH_DOCS = "get.usearch.docs";

    @Autowired
    private Integer defaultPort;

    private void sendSearchRequest(RoutingContext routingContext) {
        final JsonObject requestMessage = routingContext.getBodyAsJson();
        final EventBus eventBus = vertx.eventBus();
        eventBus.request(GET_USEARCH_DOCS, requestMessage, reply -> {
            if (reply.succeeded()) {
                Logger.info("Search Result = " + reply.result().body());
                routingContext.response()
                        .putHeader("content-type", "application/json")
                        .setStatusCode(200)
                        .end((String) reply.result().body());
            } else {
                Logger.info("Document Search Request cannot be processed");
                routingContext.response()
                        .setStatusCode(500)
                        .end();
            }
        });
    }

    @Override
    public void start() throws Exception {
        Logger.info("Starting the Gateway service (Event Sender) verticle");
        // Create a Router
        Router router = Router.router(vertx);
        // Added BodyHandler so we can process JSON messages via the event bus
        router.route().handler(BodyHandler.create());
        // Mount the handler for incoming requests
        // Find documents
        router.post("/api/search/docs/*").handler(this::sendSearchRequest);
        // Create an HTTP server using default options
        HttpServer server = vertx.createHttpServer();
        // Handle every request using the router
        server.requestHandler(router)
                // start listening on port 8083
                .listen(config().getInteger("http.port", 8083)).onSuccess(msg -> {
                    Logger.info("*************** Search Gateway Server started on "
                            + server.actualPort() + " *************");
                });
    }

    @Override
    public void stop() {
        // housekeeping
    }
}
// Below is the target verticle that should make the multiple web client calls and merge the responses
@Component
public class SolrCloudVerticle extends AbstractVerticle {

    public static final String GET_USEARCH_DOCS = "get.usearch.docs";

    @Autowired
    private SearchRepository searchRepositoryService;

    @Override
    public void start() throws Exception {
        Logger.info("Starting the Solr Cloud Search Service (Event Consumer) verticle");
        super.start();
        ConfigStoreOptions fileStore = new ConfigStoreOptions().setType("file")
                .setConfig(new JsonObject().put("path", "conf/config.json"));
        ConfigRetrieverOptions configRetrieverOptions = new ConfigRetrieverOptions()
                .addStore(fileStore);
        ConfigRetriever configRetriever = ConfigRetriever.create(vertx, configRetrieverOptions);
        configRetriever.getConfig(ar -> {
            if (ar.succeeded()) {
                JsonObject configJson = ar.result();
                EventBus eventBus = vertx.eventBus();
                eventBus.<JsonObject>consumer(GET_USEARCH_DOCS).handler(getDocumentService(searchRepositoryService, configJson));
                Logger.info("Completed search service event processing");
            } else {
                Logger.error("Failed to retrieve the config");
            }
        });
    }
    private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService, JsonObject configJson) {
        return requestMessage -> vertx.<String>executeBlocking(future -> {
            try {
                // I need to incorporate the logic here that adds futures to a list and composes the CompositeFuture
                /*
                // Below is my logic to populate the future list
                WebClient client = WebClient.create(vertx);
                List<Future> futureList = new ArrayList<>();
                for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
                    Future<String> future1 = client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
                            .expect(ResponsePredicate.SC_OK)
                            .sendJsonObject(requestMessage.body())
                            .map(HttpResponse::bodyAsString).recover(error -> {
                                System.out.println(error.getMessage());
                                return Future.succeededFuture();
                            });
                    futureList.add(future1);
                }
                */
                // Below is the CompositeFuture logic, but the logic and construct do not make sense to me.
                // What goes as the first and second argument of the executeBlocking method?
                /*
                CompositeFuture.join(futureList)
                        .onSuccess(result -> {
                            result.list().forEach(x -> {
                                if (x != null) {
                                    requestMessage.reply(result.result());
                                }
                            });
                        })
                        .onFailure(error -> {
                            System.out.println("We should not fail");
                        });
                */
                future.complete("DAO returns a Json String");
            } catch (Exception e) {
                future.fail(e);
            }
        }, result -> {
            if (result.succeeded()) {
                requestMessage.reply(result.result());
            } else {
                requestMessage.reply(result.cause().toString());
            }
        });
    }
}
I was able to use org.springframework.web.reactive.function.client.WebClient calls to compose my search result from multiple web client calls, as against using Future<io.vertx.ext.web.client.WebClient> with CompositeFuture.
I was trying to avoid mixing Spring Boot and Vert.x, but unfortunately the Vert.x CompositeFuture did not work for me here:
// This method supplies the parameter for the future.complete(..) line in getDocumentService(SearchRepository, JsonObject)
private List<JsonObject> findByQueryParamsAndDataSources(SearchRepository searchRepositoryService,
                                                         JsonObject configJson,
                                                         JsonObject requestMessage)
        throws SolrServerException, IOException {
    List<JsonObject> searchResultList = new ArrayList<>();
    for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
        searchResultList.add(new JsonObject(doSearchPerCollection(collection.toString(), requestMessage.toString())));
    }
    return aggregateMultiCollectionSearchResults(searchResultList);
}

public String doSearchPerCollection(String collection, String message) {
    org.springframework.web.reactive.function.client.WebClient client =
            org.springframework.web.reactive.function.client.WebClient.create();
    return client.post()
            .uri("http://127.0.0.1:8983/solr/" + collection + "/query")
            .contentType(MediaType.APPLICATION_JSON)
            .accept(MediaType.APPLICATION_JSON)
            .body(BodyInserters.fromValue(message))
            .retrieve()
            .bodyToMono(String.class)
            .block();
}

private List<JsonObject> aggregateMultiCollectionSearchResults(List<JsonObject> searchList) {
    // TODO: Search result aggregation
    return searchList;
}
My use case is that the second verticle should make multiple Vert.x web client calls and combine the responses.
If an API call fails, I want to log the error and still continue processing and merging the responses from the other calls.
Please, any help on how my code above could be adapted to handle this use case?
I am looking at Vert.x CompositeFuture, but I have made no headway and found no useful example yet!
What you are looking for can be done with Future coordination and a little bit of additional handling:
CompositeFuture.join(future1, future2, future3).onComplete(ar -> {
    if (ar.succeeded()) {
        // All succeeded
    } else {
        // All completed and at least one failed
    }
});
The join composition waits until all futures are completed, either with a success or a failure. CompositeFuture.join takes several futures arguments (up to 6) and returns a future that is succeeded when all the futures are succeeded, and failed when all the futures are completed and at least one of them is failed.
Using join you will wait for all the futures to complete; the issue is that if one of them fails, you will not be able to obtain responses from the others, because the CompositeFuture will have failed. To avoid this, you should add Future<T> recover(Function<Throwable, Future<T>> mapper) to each of your futures, in which you log the error and return an empty response so that the future does not fail.
Here is a short example:
Future<String> response1 = client.post(8887, "localhost", "work").expect(ResponsePredicate.SC_OK).send()
        .map(HttpResponse::bodyAsString).recover(error -> {
            System.out.println(error.getMessage());
            return Future.succeededFuture();
        });
Future<String> response2 = client.post(8887, "localhost", "error").expect(ResponsePredicate.SC_OK).send()
        .map(HttpResponse::bodyAsString).recover(error -> {
            System.out.println(error.getMessage());
            return Future.succeededFuture();
        });
CompositeFuture.join(response2, response1)
        .onSuccess(result -> {
            result.list().forEach(x -> {
                if (x != null) {
                    System.out.println(x);
                }
            });
        })
        .onFailure(error -> {
            System.out.println("We should not fail");
        });
Edit 1:
The limit for CompositeFuture.join(Future...) is 6 futures; if you need more, you can use CompositeFuture.join(Arrays.asList(future1, future2, future3)), which accepts an unlimited number of futures.
I created a cache with the following parameters:
cacheTempFiles = CacheBuilder.newBuilder()
        .maximumSize(250)
        .expireAfterWrite(15, TimeUnit.SECONDS)
        .removalListener(new RemovalListener<String, Path>() {
            @Override
            public void onRemoval(RemovalNotification<String, Path> notification) {
                deleteTemporaryFile(notification.getValue());
            }
        })
        .build();
Moreover, I'm calling cacheTempFiles.cleanUp() every 2 minutes. However, it seems that onRemoval is never called.
What is missing in my implementation?
It definitely should work; see the example below:
@Test
public void shouldCallRemovalListener() {
    AtomicInteger counter = new AtomicInteger();
    MutableClock clock = MutableClock.epochUTC();
    Ticker ticker = new Ticker() {
        @Override
        public long read() {
            return TimeUnit.MILLISECONDS.toNanos(clock.millis());
        }
    };
    Path tmpPath = Path.of("/tmp");
    Cache<String, Path> cacheTempFiles = CacheBuilder.newBuilder()
            .ticker(ticker)
            .maximumSize(250)
            .expireAfterWrite(15, TimeUnit.SECONDS)
            .removalListener(
                    (RemovalNotification<String, Path> notification) ->
                            System.out.println(String.format(
                                    "Delete '%s -> %s' (%d times)",
                                    notification.getKey(), notification.getValue(), counter.incrementAndGet())))
            .build();

    cacheTempFiles.put("tmp", tmpPath);
    assertThat(cacheTempFiles.asMap()).containsOnly(Assertions.entry("tmp", tmpPath));
    assertThat(counter).hasValue(0);

    clock.add(Duration.ofSeconds(20));
    cacheTempFiles.cleanUp();

    assertThat(cacheTempFiles.asMap()).isEmpty();
    assertThat(counter).hasValue(1);
}
The test passes and outputs Delete 'tmp -> /tmp' (1 times).
The unit test keeps giving me: Wanted but not invoked: However, there were exactly 3 interactions with this mock.
All I am trying to do is test the timeout for a method execution: if the method takes too long, terminate it and publish a count (to understand the timed-out response rate) as a metric.
@Test
public void testTimeoutFunction() throws Exception {
    Response response = getResponseForTest();
    when(processor
            .process(any(Request.class)))
            .thenAnswer((Answer<Response>) invocation -> {
                Thread.sleep(100);
                return response;
            });
    when(itemRequest.getRequestContext()).thenReturn(itemRequestContext);

    testClass = spy(new TestClass(processor, executorService));
    List<Item> output = testClass.getItemList(ID, itemRequest);

    verify(testClass, times(1)).responseTimedOutCount();
    assertTrue(output.isEmpty());
    verify(testClass, timeout(EXECUTION_TIMEOUT)).buildResponse(itemRequest);
    verify(testClass, times(1)).buildResponse(itemRequest);
}
This is the method I am testing:
public class TestClass {

    @VisibleForTesting
    void responseTimedOutCount() {
        // log metrics
    }

    private CompletableFuture<Response> getResponseAsync(final ScheduledExecutorService delayer,
                                                         final ItemRequest itemRequest) {
        return timeoutWithTimeoutFunction(delayer, EXECUTION_TIMEOUT, TimeUnit.MILLISECONDS,
                CompletableFuture.supplyAsync(() -> getResponseWithTimeoutFunction(itemRequest), executorService),
                Response.emptyResponse(), () -> responseTimedOutCount());
    }

    private Response getResponseWithTimeoutFunction(final ItemRequest itemRequest) {
        // do something and return response
    }

    public List<Item> getItemList(final String id, final ItemRequest itemRequest) throws Exception {
        final ScheduledExecutorService delayer = Executors.newScheduledThreadPool(1);
        Response response;
        if (validateItemId(id)) {
            try {
                response = getResponseAsync(delayer, itemRequest).get();
            } catch (final Throwable t) {
                response = Response.emptyResponse();
            } finally {
                delayer.shutdown();
            }
            return transform(response, id).getItems();
        } else {
            return null;
        }
    }
}
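The timeoutWithTimeoutFunction helper called in getResponseAsync isn't shown in the question. For reference, here is a minimal sketch of what such a helper typically looks like; the body below is my assumption, not the asker's actual code:

// Hypothetical sketch of the unshown timeoutWithTimeoutFunction helper.
// Assumed behavior: race the supplied future against a scheduled default,
// firing the timeout callback only when the deadline wins.
// (Requires java.util.function.Function.)
private static <T> CompletableFuture<T> timeoutWithTimeoutFunction(
        ScheduledExecutorService delayer, long timeout, TimeUnit unit,
        CompletableFuture<T> future, T defaultValue, Runnable onTimeout) {
    CompletableFuture<T> timeoutFuture = new CompletableFuture<>();
    delayer.schedule(() -> {
        // Best-effort check: only count a timeout if the real work is still running.
        if (!future.isDone() && timeoutFuture.complete(defaultValue)) {
            onTimeout.run(); // e.g. responseTimedOutCount()
        }
    }, timeout, unit);
    return future.applyToEither(timeoutFuture, Function.identity());
}

Under a helper like this, responseTimedOutCount() is only invoked when processing actually exceeds EXECUTION_TIMEOUT, which is worth checking against the 100 ms sleep stubbed in the test.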
Exception from JUnit, for this assert:
verify(testClass, times(1)).responseTimedOutCount();
Wanted but not invoked:
testClass.responseTimedOutCount();
However, there were exactly 3 interactions with this mock:
testClass.getItemList(ID, itemRequest);
testClass.validateItemId(ID);
testClass.getResponseWithTimeoutFunction(itemRequest);
I'm running into this bug (more info here) which appears to mean that for multi-threaded batches using remote chunking you can't use a common response channel. I'm not exactly sure how to proceed to get this working. Surely there's a way to get this working, because without it I can't see much benefit to remote chunking.
Here's my DSL config that creates a JobRequest:
@Bean
IntegrationFlow newPollingJobsAdapter(JobLaunchingGateway jobLaunchingGateway) {
    // Start by polling the DB for new PollingJobs according to the polling rate
    return IntegrationFlows.from(jdbcPollingChannelAdapter(),
            c -> c.poller(Pollers.fixedRate(10000)
                    // Do the polling on one of 10 threads.
                    .taskExecutor(Executors.newFixedThreadPool(10))
                    // pull out up to 100 new ids for each poll.
                    .maxMessagesPerPoll(100)))
            .log(LoggingHandler.Level.WARN)
            // The polling adapter above returns a list of ids. Split them out into
            // individual ids
            .split()
            // Now push each one onto a separate thread for batch processing.
            .channel(MessageChannels.executor(Executors.newFixedThreadPool(10)))
            .log(LoggingHandler.Level.WARN)
            // Transform each one into a JobLaunchRequest
            .<Long, JobLaunchRequest>transform(id -> {
                logger.warn("Creating job for ID {}", id);
                JobParametersBuilder builder = new JobParametersBuilder()
                        .addLong("polling-job-id", id, true);
                return new JobLaunchRequest(job, builder.toJobParameters());
            })
            .handle(jobLaunchingGateway)
            // TODO: Notify somebody? No idea yet
            .<JobExecution>handle(exec -> System.out.println("GOT EXECUTION: " + exec))
            .get();
}
Nothing in here is particularly special, no odd configs that I'm aware of.
The job itself is pretty straight-forward, too:
/**
 * This is the definition of the entire batch process that runs polling.
 * @return
 */
@Bean
Job pollingJobJob() {
    return jobBuilderFactory.get("pollingJobJob")
            .incrementer(new RunIdIncrementer())
            // Ship it down to the slaves for actual processing
            .start(remoteChunkingStep())
            // Now mark it as complete
            .next(markCompleteStep())
            .build();
}

/**
 * Sends the job to a remote slave via an ActiveMQ-backed JMS queue.
 */
@Bean
TaskletStep remoteChunkingStep() {
    return stepBuilderFactory.get("polling-job-step-remote-chunking")
            .<Long, String>chunk(20)
            .reader(runningPollingJobItemReader)
            .processor(toJsonProcessor())
            .writer(chunkWriter)
            .build();
}

/**
 * This step just marks the PollerJob as Complete.
 */
@Bean
Step markCompleteStep() {
    return stepBuilderFactory.get("polling-job-step-mark-complete")
            // We want each PollerJob instance to be a separate job in batch, and the
            // reader is using the id passed in via job params to grab the one we want,
            // so we don't need a large chunk size. One at a time is fine.
            .<Long, Long>chunk(1)
            .reader(runningPollingJobItemReader)
            .processor(new PassThroughItemProcessor<Long>())
            .writer(this.completeStatusWriter)
            .build();
}
Here's the chunk writer config:
/**
 * This is part of the bridge between spring-batch and spring-integration. Nothing special or weird is going
 * on, so see the RemoteChunkHandlerFactoryBean for a description.
 */
@Bean
RemoteChunkHandlerFactoryBean<PollerJob> remoteChunkHandlerFactoryBean() {
    RemoteChunkHandlerFactoryBean<PollerJob> factory = new RemoteChunkHandlerFactoryBean<>();
    factory.setChunkWriter(chunkWriter);
    factory.setStep(remoteChunkingStep());
    return factory;
}

/**
 * This is the writer that will actually send the chunk to the slaves. Note that it also configures the
 * internal channel on which replies are expected.
 */
@Bean
@StepScope
ChunkMessageChannelItemWriter<String> chunkWriter() {
    ChunkMessageChannelItemWriter<String> writer = new ChunkMessageChannelItemWriter<>();
    writer.setMessagingOperations(batchMessagingTemplate());
    writer.setReplyChannel(batchResponseChannel());
    writer.setThrottleLimit(1000);
    return writer;
}
The problem seems to be that the last section sets up the ChunkMessageChannelItemWriter such that the replyChannel is the same one used by all of the writers, despite each writer being step-scoped. It would seem that I need to add a replyChannel header to one of the messages, but I'm not sure where in the chain to do that or how to process that (if I need to at all?).
Also, this is being sent to the slaves via JMS/ActiveMQ and I'd like to avoid having just a stupid number of nearly-identical queues on ActiveMQ just to support this.
What are my options?
Given that you are using a shared JMS infrastructure, you will need a router to get the responses back to the correct chunk writer.
If you use prototype scope on the batchResponseChannel() @Bean, you'll get a unique channel for each writer.
I don't have time to figure out how to set up a chunked batch job so the following simulates your environment (non-singleton bean that needs a unique reply channel for each instance). Hopefully it's self-explanatory...
@SpringBootApplication
public class So44806067Application {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(So44806067Application.class, args);
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker1 = context
                .getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker2 = context
                .getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
        if (chunker1.equals(chunker2)) {
            throw new IllegalStateException("Expected different instances");
        }
        chunker1.sendSome();
        chunker2.sendSome();
        ChunkResponse results = chunker1.getResults();
        if (results == null) {
            throw new IllegalStateException("No results1");
        }
        if (results.getJobId() != 1L) {
            throw new IllegalStateException("Incorrect routing1");
        }
        results = chunker2.getResults();
        if (results == null) {
            throw new IllegalStateException("No results2");
        }
        if (results.getJobId() != 2L) {
            throw new IllegalStateException("Incorrect routing2");
        }
        context.close();
    }

    @Bean
    public Map<Long, PollableChannel> registry() {
        // TODO: should clean up entry for jobId when job completes.
        return new ConcurrentHashMap<>();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker() {
        MessagingTemplate template = template();
        final PollableChannel replyChannel = replyChannel();
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels bean =
                new SomeNonSingletonNeedingDistinctRequestAndReplyChannels(template, replyChannel);
        AbstractSubscribableChannel requestChannel = (AbstractSubscribableChannel) template.getDefaultDestination();
        requestChannel.addInterceptor(new ChannelInterceptorAdapter() {

            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                registry().putIfAbsent(((ChunkRequest<?>) message.getPayload()).getJobId(), replyChannel);
                return message;
            }

        });
        BridgeHandler bridge = bridge();
        requestChannel.subscribe(bridge);
        return bean;
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public MessagingTemplate template() {
        MessagingTemplate messagingTemplate = new MessagingTemplate();
        messagingTemplate.setDefaultChannel(requestChannel());
        return messagingTemplate;
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public DirectChannel requestChannel() {
        return new DirectChannel();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public PollableChannel replyChannel() {
        return new QueueChannel();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public BridgeHandler bridge() {
        BridgeHandler bridgeHandler = new BridgeHandler();
        bridgeHandler.setOutputChannel(outboundChannel());
        return bridgeHandler;
    }

    @Bean
    public DirectChannel outboundChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel masterReplyChannel() {
        return new DirectChannel();
    }

    @ServiceActivator(inputChannel = "outboundChannel")
    public void simulateJmsChannelAdapterPair(ChunkRequest<?> request) {
        masterReplyChannel()
                .send(new GenericMessage<>(new ChunkResponse(request.getSequence(), request.getJobId(), null)));
    }

    @Router(inputChannel = "masterReplyChannel")
    public MessageChannel route(ChunkResponse reply) {
        // TODO: error checking - missing reply channel for jobId
        return registry().get(reply.getJobId());
    }

    public static class SomeNonSingletonNeedingDistinctRequestAndReplyChannels {

        private final static AtomicLong jobIds = new AtomicLong();

        private final long jobId = jobIds.incrementAndGet();

        private final MessagingTemplate template;

        private final PollableChannel replyChannel;

        public SomeNonSingletonNeedingDistinctRequestAndReplyChannels(MessagingTemplate template,
                PollableChannel replyChannel) {
            this.template = template;
            this.replyChannel = replyChannel;
        }

        public void sendSome() {
            ChunkRequest<String> cr = new ChunkRequest<>(0, Collections.singleton("foo"), this.jobId, null);
            this.template.send(new GenericMessage<>(cr));
        }

        public ChunkResponse getResults() {
            @SuppressWarnings("unchecked")
            Message<ChunkResponse> received = (Message<ChunkResponse>) this.replyChannel.receive(10_000);
            if (received != null) {
                if (received.getPayload().getJobId().equals(this.jobId)) {
                    System.out.println("Got the right one");
                }
                else {
                    System.out.println(
                            "Got the wrong one " + received.getPayload().getJobId() + " instead of " + this.jobId);
                }
                return received.getPayload();
            }
            return null;
        }
    }
}
I have a controller with a WebAsyncTask, and I'm using a timeout callback.
As written here, I should have the option to notify the Callable to cancel processing. However, I don't see any way to do so.
@Controller
public class UserDataProviderController {

    private static final Logger log = LoggerFactory.getLogger(UserDataProviderController.class.getName());

    @Autowired
    private Collection<UserDataService> dataServices;

    @RequestMapping(value = "/client/{socialSecurityNumber}", method = RequestMethod.GET)
    public @ResponseBody
    WebAsyncTask<ResponseEntity<CustomDataResponse>> process(@PathVariable final String socialSecurityNumber) {

        final Callable<ResponseEntity<CustomDataResponse>> callable = new Callable<ResponseEntity<CustomDataResponse>>() {

            @Override
            public ResponseEntity<CustomDataResponse> call() throws Exception {
                CustomDataResponse CustomDataResponse = CustomDataResponse.newInstance();
                // Find user data
                for (UserDataService dataService : dataServices) {
                    List<? extends DataClient> clients = dataService.findBySsn(socialSecurityNumber);
                    CustomDataResponse.put(dataService.getDataSource(), UserDataConverter.convert(clients));
                }
                // test long execution
                Thread.sleep(4000);
                log.info("Execution thread continued and shall be terminated:" + Thread.currentThread().getName());
                HttpHeaders responseHeaders = new HttpHeaders();
                responseHeaders.setContentType(new MediaType("application", "json", Charset.forName("UTF-8")));
                return new ResponseEntity(CustomDataResponse, responseHeaders, HttpStatus.OK);
            }
        };

        final Callable<ResponseEntity<CustomDataResponse>> callableTimeout = new Callable<ResponseEntity<CustomDataResponse>>() {

            @Override
            public ResponseEntity<CustomDataResponse> call() throws Exception {
                // Error response
                HttpHeaders responseHeaders = new HttpHeaders();
                responseHeaders.setContentType(new MediaType("application", "json", Charset.forName("UTF-8")));
                return new ResponseEntity("Request has timed out!", responseHeaders, HttpStatus.INTERNAL_SERVER_ERROR);
            }
        };

        WebAsyncTask<ResponseEntity<CustomDataResponse>> task = new WebAsyncTask<>(3000, callable);
        task.onTimeout(callableTimeout);
        return task;
    }
}
My WebConfig:
@Configuration
@EnableWebMvc
class WebAppConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setKeepAliveSeconds(60 * 60);
        executor.afterPropertiesSet();
        configurer.registerCallableInterceptors(new TimeoutCallableProcessingInterceptor());
        configurer.setTaskExecutor(executor);
    }
}
And a quite standard interceptor:
public class TimeoutCallableProcessingInterceptor extends CallableProcessingInterceptorAdapter {

    @Override
    public <T> Object handleTimeout(NativeWebRequest request, Callable<T> task) {
        throw new IllegalStateException("[" + task.getClass().getName() + "] timed out");
    }
}
Everything works as it should, but the Callable from the controller always runs to completion, which is expected; but how can I stop processing there?
You can use WebAsyncTask to implement the timeout control and Thread management to stop the new async thread gracefully.
1. Implement a Callable to run the process.
2. In this method (which runs in a different thread), store the current Thread in a local variable of the controller.
3. Implement another Callable to handle the timeout event.
4. In this method, retrieve the previously stored Thread and interrupt it by calling its interrupt() method. Also throw a TimeoutException to stop the controller processing.
5. In the running process, check whether the thread was interrupted with Thread.currentThread().isInterrupted(); if so, roll back the transaction by throwing an exception.
Controller:
public WebAsyncTask<ResponseEntity<BookingFileDTO>> confirm(@RequestBody final BookingConfirmationRQDTO bookingConfirmationRQDTO)
        throws AppException,
        ProductException,
        ConfirmationException,
        BeanValidationException {

    final Long startTimestamp = System.currentTimeMillis();

    // The compiler obliges us to define the local variable shared with the callable as a final array
    final Thread[] asyncTaskThread = new Thread[1];

    /**
     * Asynchronous execution of the service's task.
     * Implemented without a ThreadPool; we're using Tomcat's ThreadPool.
     * To implement a specific ThreadPool take a look at http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#mvc-ann-async-configuration-spring-mvc
     */
    Callable<ResponseEntity<BookingFileDTO>> callableTask = () -> {
        // Stores the thread of the newly started asynchronous task
        asyncTaskThread[0] = Thread.currentThread();
        log.debug("Running saveBookingFile task at `{}` thread", asyncTaskThread[0].getName());
        BookingFileDTO bookingFileDTO = bookingFileService.saveBookingFile(
                bookingConfirmationRQDTO,
                MDC.get(HttpHeader.XB3_TRACE_ID))
                .getValue();
        if (log.isDebugEnabled()) {
            log.debug("The saveBookingFile task took {} ms",
                    System.currentTimeMillis() - startTimestamp);
        }
        return new ResponseEntity<>(bookingFileDTO, HttpStatus.OK);
    };

    /**
     * This method is executed if a timeout occurs
     */
    Callable<ResponseEntity<BookingFileDTO>> callableTimeout = () -> {
        String msg = String.format("Timeout detected at %d ms during confirm operation",
                System.currentTimeMillis() - startTimestamp);
        log.error("{}: informing BookingFileService.", msg);
        // Informs the service that the time has run out
        asyncTaskThread[0].interrupt();
        // Interrupts the controller call
        throw new TimeoutException(msg);
    };

    WebAsyncTask<ResponseEntity<BookingFileDTO>> webAsyncTask = new WebAsyncTask<>(timeoutMillis, callableTask);
    webAsyncTask.onTimeout(callableTimeout);
    log.debug("Timeout set to {} ms", timeoutMillis);
    return webAsyncTask;
}
Service implementation:
/**
 * If the service has been informed that the time has run out,
 * throws a TimeoutException to roll back transactions
 */
private void rollbackOnTimeout() throws TimeoutException {
    if (Thread.currentThread().isInterrupted()) {
        log.error(TIMEOUT_DETECTED_MSG);
        throw new TimeoutException(TIMEOUT_DETECTED_MSG);
    }
}

@Transactional(rollbackFor = TimeoutException.class, propagation = Propagation.REQUIRES_NEW)
DTOSimpleWrapper<BookingFileDTO> saveBookingFile(BookingConfirmationRQDTO bookingConfirmationRQDTO, String traceId) {
    // Database operations
    // ...
    return retValue;
}
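To make the interrupt effective, the long-running service has to poll for it between its steps. A hypothetical sketch of how rollbackOnTimeout() might be woven between the database operations elided above; the persist* helpers and the throws clause are my invention, not the original code:

@Transactional(rollbackFor = TimeoutException.class, propagation = Propagation.REQUIRES_NEW)
DTOSimpleWrapper<BookingFileDTO> saveBookingFile(BookingConfirmationRQDTO bookingConfirmationRQDTO, String traceId)
        throws TimeoutException {
    // Hypothetical steps standing in for the elided database operations
    DTOSimpleWrapper<BookingFileDTO> retValue = persistBookingHeader(bookingConfirmationRQDTO); // assumed helper
    rollbackOnTimeout(); // abort here if the timeout callable interrupted this thread
    persistBookingLines(bookingConfirmationRQDTO); // assumed helper
    rollbackOnTimeout(); // checked again between long-running steps
    return retValue;
}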