@Async method throws TaskRejectedException when running all JUnit tests - java

This might be a difficult one. I have a method (public void someMethod()) in my Spring Boot project, annotated with @Async("MyExecutor"), where MyExecutor is defined as:
@Configuration
@EnableAsync(mode = AdviceMode.ASPECTJ)
public class VideoStreamingConfig implements AsyncConfigurer {

    @Override
    @Bean(name = "MyExecutor")
    public Executor getAsyncExecutor() {
        final ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(12);
        executor.setQueueCapacity(50);
        executor.initialize();
        return executor;
    }
}
This works as expected during normal program execution, and also when running one or more tests against the annotated async method.
However, if I run 'All Tests' via IntelliJ, the test that calls this method fails with the following exception:
org.springframework.core.task.TaskRejectedException: Executor [java.util.concurrent.ThreadPoolExecutor@5c3cfe93[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]] did not accept task: java.util.concurrent.CompletableFuture$AsyncSupply@5fb0f41a
The executor appears to be shutting down before this test executes. I don't understand why this only happens when all tests are run. I can run all tests within the package & sub-package containing the method, and it works fine.
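The mechanics of this failure can be reproduced with plain java.util.concurrent, independent of Spring: once a ThreadPoolExecutor has been shut down (here, presumably because the application context that owns MyExecutor was closed by an earlier test), any further submission is rejected. A minimal sketch, with no Spring types involved:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class ShutdownRejectionDemo {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        executor.shutdown(); // simulate the pool being terminated before the test runs

        try {
            executor.submit(() -> "work"); // rejected: a shut-down pool accepts no new tasks
        } catch (RejectedExecutionException e) {
            // Spring's TaskRejectedException wraps exactly this exception
            System.out.println("rejected: " + e.getClass().getSimpleName());
        }
    }
}
```

This is consistent with the [Terminated, pool size = 0] state in the message above; one common suspect (an assumption, not confirmed by the question) is another test in the suite closing the cached application context, e.g. via @DirtiesContext.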
I'm not sure how to post a minimal reproducible example, as this error only occurs when I execute the complete test suite, and I don't want to post my entire project.
Thanks for any help offered.

Related

Do I need to shutdown ExecutorService that processes data on a consistent basis in java

I have a Spring Boot application that runs tasks in separate threads every 24 hours. The tasks take a little time, after which the app sits idle until the next batch triggers the API the following day. Demo code is as follows:
private ExecutorService es = Executors.newFixedThreadPool(2);

@PostMapping
public void startTest(@RequestBody DummyModel dummy) throws InterruptedException {
    int i = 0;
    while (i < 3) {
        es.execute(new ListProcessing(dummy));
        es.execute(new AnotherListProcessing(dummy));
        i++;
    }
    // because the methods are async, this line is reached in an instant, before processing is done
}
Now as you can see, I have no es.shutdown() after my while loop. In most articles and discussions, there seems to be an emphasis on how important a .shutdown() call is once you have completed your work. Adding it after my while loop would mean that the next POST request I make results in an error (which makes sense, since .shutdown() will not accept new tasks once the existing tasks are completed).
Now, I want to know: is it really important to call .shutdown() here? My app will receive a POST request once a day, every day, so the ExecutorService will be used frequently. Are there downsides to not shutting down your Executor for a prolonged period of time? And if I really do need to shut it down every time the ExecutorService is used, how can I do it so that the app is ready to receive a new request the following day?
I was thinking of adding these lines after my while loop:
es.shutdown();
es = Executors.newFixedThreadPool(2);
It works, but that seems:
a) unnecessary (why shut it down and waste effort recreating it?), and
b) just wrong, somehow. There has to be a better way.
Update
So it seems that you can either create a custom ThreadPoolExecutor (which, for my simple use case, seems like overkill), or use the CachedThreadPool option of ExecutorService (though it will attempt to use as many threads as are available, so if you only need n of them, that option may not be for you).
Update 2
As explained by Thomas and Deinum, we can use a custom-defined executor. It does exactly what ExecutorService does, plus clean-up, and it also allows for a quick & easy way to configure it. For anyone curious, here is how I implemented it:
@Autowired
private TaskExecutor taskExecutor;

@PostMapping
public void startTest(@RequestBody DummyModel dummy) throws InterruptedException {
    taskExecutor.execute(new ProcessList());
    taskExecutor.execute(new AnotherProcessList());
    taskExecutor.execute(new YetAnotherProcessList());
}
where taskExecutor is a bean defined in my main class (or in any class with the @Configuration annotation). It is as follows:
@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);  // min number of threads that are always there
    executor.setMaxPoolSize(10);  // if the threads and the queue are full, additional threads are created up to this limit
    executor.setQueueCapacity(5); // the number of tasks that can wait in the queue (caution: queues require memory; larger queue = more memory)
    return executor;
}
Your proposed solution won't work reliably.
es.shutdown();
es = Executors.newFixedThreadPool(2);
This assumes that startTest is only ever invoked by one incoming request at a time, and that the next request always arrives after the executor has been shut down and recreated.
That solution would only work if you also created the ExecutorService inside the method, scoped to the method. However, that is also problematic: if 100 requests come in, you will create 200 concurrent threads, and each thread takes up resources. So you have effectively created a potential resource leak (or at least an attack vector against your application).
General rule of thumb: if you create the Executor yourself in the same scope, you should close it; if not, leave it untouched. In your case you are basically using a shared thread pool and should only shut it down on application stop. You could do that in a @PreDestroy method in your controller:
@PreDestroy
public void cleanUp() {
    es.shutdown();
}
However, instead of adding this to your controller, you could also define the ExecutorService as a bean and configure a destroy method.
@Bean(destroyMethod = "shutdown")
public ExecutorService taskExecutor() {
    return Executors.newFixedThreadPool(2);
}
You could now dependency inject the ExecutorService in your controller.
@RestController
public class YourController {

    private final ExecutorService es;

    public YourController(ExecutorService es) {
        this.es = es;
    }
}
Finally, I suspect you are even better off using the Spring (Boot) provided TaskExecutor, which taps into the Spring context lifecycle automatically. You can simply inject this into your controller instead of the ExecutorService.
@RestController
public class YourController {

    private final TaskExecutor executor;

    public YourController(TaskExecutor executor) {
        this.executor = executor;
    }
}
Spring Boot provides one by default, which will be injected; you can control it using the spring.task.execution.pool.* properties.
spring.task.execution.pool.max-size=10
spring.task.execution.pool.core-size=5
spring.task.execution.pool.queue-capacity=15
Or you could define a bean yourself; this would override the default TaskExecutor as well.
@Bean
public ThreadPoolTaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(5);
    return executor;
}

Spring scheduled tasks to run on separate threads

I have 4 jobs scheduled in my Spring Boot application using the @Scheduled annotation. The problem is that I want to run them on different threads, and also put a timeout on each scheduled job execution: if a job doesn't complete within the given time, kill it and execute the next instance.
The way I have implemented this in following way
MAIN CLASS:
@SpringBootApplication
@EnableJpaAuditing
@EnableCaching
@EnableJms
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT50S")
@ComponentScan
public class mainApplication {
    .......

    @Bean
    public LockProvider lockProvider(DataSource dataSource) {
        return new JdbcTemplateLockProvider(dataSource, "shedlock");
    }
}
JOB FILE:
@Scheduled(cron = "0 0/59 * * * ?")
@SchedulerLock(name = "task_1", lockAtLeastFor = "PT120S", lockAtMostFor = "PT600S")
public void periodicTask() throws Exception {
    // Execution code
}
Similar to periodicTask(), I have 3 different tasks running. Currently all of these run on the same thread. How do I make them run on different threads, and also put a timeout on each task?

Should I have a singleton ThreadPoolTaskExecutor shared between different services?

I have declared a ThreadPoolTaskExecutor as a @Bean in my application context, as follows:
@Configuration
@ConfigurationProperties(prefix = "application")
@EnableCaching
public class ApplicationConfig {

    private static final int POOL_SIZE = 2;

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor pool = new ThreadPoolTaskExecutor();
        pool.setCorePoolSize(POOL_SIZE);
        return pool;
    }
}
I have 2 different services that each need an instance of ThreadPoolTaskExecutor wired in. Each service will submit a Runnable that does some service-specific job.
For example, these are the 2 services:
@Service
public class TerminatedContractsService {

    @Autowired
    private ThreadPoolTaskExecutor taskExec;

    public void notifyTerminatedContracts(Date d) {
        // do some contract-specific work
        taskExec.submit(() -> System.out.println("emailing terminated contracts..."));
    }
}

@Service
public class SalaryCalculationService {

    @Autowired
    private ThreadPoolTaskExecutor taskExec;

    public void calculateSalary(Date d) {
        // do some salary-related work
        taskExec.submit(() -> System.out.println("calculating salaries..."));
    }
}
It should be safe to share the same ThreadPoolTaskExecutor instance (since it's a singleton) between both services, right?
Do you foresee any issues with this, and should I use prototype scope instead?
Yes, it's ok for multiple services to use the same executor. There isn't any state kept by the executor that would make it a good idea to throw it away and create a new one.
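As an illustration of why sharing is safe, here is a plain-Java sketch (the class name and messages are made up for the example): two callers submit to one shared pool concurrently, which the thread-safe submit() contract supports.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SharedPoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService shared = Executors.newFixedThreadPool(2); // singleton-style shared pool

        // Two "services" submitting to the same executor; submit() is thread-safe.
        Future<String> emails = shared.submit(() -> "emailing terminated contracts...");
        Future<String> salaries = shared.submit(() -> "calculating salaries...");

        System.out.println(emails.get());
        System.out.println(salaries.get());
        shared.shutdown();
    }
}
```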
There can be things to look out for, though. If you submit tasks of varying duration to the same executor, short tasks can be blocked if they are queued up behind long-running ones. You may want to make sure tasks submitted to a given executor have similar durations.
Also if you have some category of task that you need to execute predictably and reliably you might want to reserve a dedicated executor for it. Otherwise if those tasks share a queue with others and there's an issue that prevents those tasks from completing or just slows them down, then the tasks you need executed reliably may be stuck queued up behind them.
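The isolation point can be sketched with two plain-Java fixed pools (the pool names are illustrative): a long task saturating one pool does not delay a task on the dedicated pool.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DedicatedPools {
    public static void main(String[] args) throws Exception {
        ExecutorService bulkPool = Executors.newFixedThreadPool(1);     // long-running batch work
        ExecutorService criticalPool = Executors.newFixedThreadPool(1); // must stay responsive

        // Occupy the bulk pool's only thread for a while.
        bulkPool.submit(() -> {
            try { Thread.sleep(5_000); } catch (InterruptedException ignored) { }
        });

        long start = System.nanoTime();
        criticalPool.submit(() -> { }).get(); // runs immediately, not queued behind the sleep
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("critical task waited ~" + elapsedMs + " ms");

        bulkPool.shutdownNow();
        criticalPool.shutdown();
    }
}
```

Had both tasks shared the single-threaded bulk pool, the second one would have waited the full five seconds.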
But no, prototype scope shouldn't be necessary.

Spring Boot, Scheduled task, double invocation

Got a pretty standard Spring Boot (1.3.5) application.
Enabled scheduling with @EnableScheduling (tried on the main application entry point and on a @Configuration-annotated class).
Created a simple class with a @Scheduled method (simple fixedDelay schedule).
Scheduled task executes twice (always).
From what I have gathered so far, it is probably because two contexts are being loaded, thus picking up my beans twice.
Ok.
So how do I fix/prevent this double execution, since all the config is basically hidden Spring Boot magic?
Framework versions:
Spring Boot 1.3.5
Spring Cloud Brixton SR1
Main application:
@SpringBootApplication
@EnableDiscoveryClient
@EnableAsync
@EnableCircuitBreaker
public class AlertsApplication {

    public static void main(final String[] args) {
        SpringApplication.run(AlertsApplication.class, args);
    }
}
My task class (the HookCreateRequest list is pulled in from application.yml - I do not believe that is relevant here, but it can be provided if required):
@ConditionalOnProperty(name = "init.runner", havingValue = "InitRunner")
@ConfigurationProperties(prefix = "webhook")
public class InitRunner /*implements CommandLineRunner*/ {

    private final List<HookCreateRequest> receivers = new ArrayList<>();

    @Autowired
    private WebHookService hookService;

    @Scheduled(fixedRate = 300000)
    public void run() throws Exception {
        getReceivers().stream().forEach(item -> {
            log.debug("Request : {}", item);
            hookService.create(item);
        });
    }

    public List<HookCreateRequest> getReceivers() {
        return receivers;
    }
}
There is zero xml configuration.
Not sure what else might be relevant?
EDIT 2016/07/04
I have modified the task to log the scheduled instance when it runs (I suspected that two different instances were being created). However, the logs seem to indicate it is the SAME instance of the task object.
logs:
15:01:16.170 DEBUG - scheduled.ScheduleHookRecreation - Schedule task running: scheduled.ScheduleHookRecreation@705a651b
...task stuff happening
...first run completes, then:
15:01:39.050 DEBUG - scheduled.ScheduleHookRecreation - Schedule task running: scheduled.ScheduleHookRecreation@705a651b
So it would seem it is the same task instance (@705a651b). Now why, in the name of all things sweet, would it be executed twice?
EDIT 2016/07/05
I added a @PostConstruct method, containing just some logging output, to the class that carries the scheduled method. By doing that I could verify that the @PostConstruct method is being called twice - which seems to confirm that the bean is being picked up twice, which presumably means it is fed to the scheduler twice. So how do I prevent this?
I had the same problem; in my case the cause was the absence of the @Scheduled annotation's initialDelay parameter - the method was also called on application start.
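The effect described can be seen with a plain ScheduledExecutorService (no Spring needed): with a zero initial delay, the first execution fires immediately at startup, which can look like a doubled or premature run. A small sketch:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class InitialDelayDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger runs = new AtomicInteger();

        // initialDelay = 0: the task runs once immediately, then every 10 s thereafter.
        // A non-zero initialDelay would postpone that first startup execution.
        ses.scheduleWithFixedDelay(runs::incrementAndGet, 0, 10, TimeUnit.SECONDS);

        Thread.sleep(200);
        System.out.println("runs shortly after startup: " + runs.get());
        ses.shutdownNow();
    }
}
```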

Do Spring @Scheduled annotated methods run on different threads?

I have several methods annotated with @Scheduled(fixedDelay = 10000).
In the application context, I have this annotation-driven setup:
<task:annotation-driven />
The problem is, sometimes some of the method executions get delayed by seconds and even minutes.
I'm assuming that even if a method takes a while to finish executing, the other methods would still execute. So I don't understand the delay.
Is there a way to maybe lessen or even remove the delay?
For completeness, code below shows the simplest possible way to configure scheduler with java config:
@Configuration
@EnableScheduling
public class SpringConfiguration {

    @Bean(destroyMethod = "shutdown")
    public Executor taskScheduler() {
        return Executors.newScheduledThreadPool(5);
    }
    ...
When more control is desired, a @Configuration class may implement SchedulingConfigurer.
The documentation about scheduling says:
If you do not provide a pool-size attribute, the default thread pool will only have a single thread.
So if you have many scheduled tasks, you should configure the scheduler, as explained in the documentation, to have a pool with more threads, to make sure one long task doesn't delay all the other ones.
If you're using Spring Boot:
There is also a property you can set in your application properties file that increases the pool size:
spring.task.scheduling.pool.size=10
Seems to be there since Spring Boot 2.1.0.
A method annotated with @Scheduled is meant to be run separately, on a different thread, at a given moment in time.
If you haven't provided a TaskScheduler in your configuration, Spring will use
Executors.newSingleThreadScheduledExecutor();
which returns a ScheduledExecutorService that runs on a single thread. As such, if you have multiple @Scheduled methods, then although they are scheduled, each needs to wait for the thread to finish executing the previous task. You may keep getting bigger and bigger delays as the queue fills up faster than it empties.
Make sure you configure your scheduling environment with an appropriate amount of threads.
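The queuing behaviour described above can be demonstrated with the same single-threaded scheduler that Spring falls back to (plain java.util.concurrent, no Spring types):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class SingleThreadSchedulerDelay {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();

        // A long task occupies the scheduler's only thread for ~1 s...
        ses.schedule(() -> {
            try { Thread.sleep(1_000); } catch (InterruptedException ignored) { }
        }, 0, TimeUnit.MILLISECONDS);

        long start = System.nanoTime();
        // ...so a task due only 10 ms later actually waits until the first one finishes.
        ScheduledFuture<?> f = ses.schedule(() -> { }, 10, TimeUnit.MILLISECONDS);
        f.get();
        System.out.println("second task delayed by ~" + (System.nanoTime() - start) / 1_000_000 + " ms");

        ses.shutdown();
    }
}
```

With a pool size larger than one, the second task would run on another thread close to its scheduled time.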
The @EnableScheduling annotation's documentation provides the key information and how to resolve it:
By default, will be searching for an associated scheduler definition:
either a unique TaskScheduler bean in the context, or a TaskScheduler
bean named "taskScheduler" otherwise; the same lookup will also be
performed for a ScheduledExecutorService bean. If neither of the two
is resolvable, a local single-threaded default scheduler will be
created and used within the registrar.
When more control is desired, a @Configuration class may implement
SchedulingConfigurer. This allows access to the underlying
ScheduledTaskRegistrar instance. For example, the following example
demonstrates how to customize the Executor used to execute scheduled
tasks:
@Configuration
@EnableScheduling
public class AppConfig implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        taskRegistrar.setScheduler(taskExecutor());
    }

    @Bean(destroyMethod = "shutdown")
    public Executor taskExecutor() {
        return Executors.newScheduledThreadPool(100);
    }
}
(emphasis added)
You can use:
@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler taskScheduler = new ThreadPoolTaskScheduler();
    taskScheduler.setPoolSize(2);
    return taskScheduler;
}
See the link below for a reference with a great explanation and implementation:
https://crmepham.github.io/spring-boot-multi-thread-scheduling/#:~:text=By%20default%20Spring%20Boot%20will,there%20is%20enough%20threads%20available).
Using an XML file, add the lines below:
<task:scheduler id="taskScheduler" pool-size="15" />
<task:scheduled-tasks scheduler="taskScheduler" >
....
</task:scheduled-tasks>
By default, Spring uses a single thread for scheduled tasks. You can use @Configuration on a class that implements SchedulingConfigurer. Reference: https://crmepham.github.io/spring-boot-multi-thread-scheduling/
We need to pass our own thread pool scheduler, otherwise the default single-threaded executor will be used. I added the code below to fix it:
@Bean
public Executor scheduledTaskThreadPool() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(10);
    executor.setMaxPoolSize(10);
    executor.setThreadNamePrefix("name-");
    executor.initialize();
    return executor;
}
