I have an application where multiple threads read messages from a JMS destination. The listener thread reads a message, makes some changes to it, and calls several methods of different classes. These methods are annotated with @Async so that they all execute in parallel using a custom ThreadPoolTaskExecutor.
@Override
public Executor getAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(corePoolSize);
    executor.setMaxPoolSize(maxPoolSize);
    executor.setQueueCapacity(queueCapacity);
    executor.setKeepAliveSeconds(keepAliveSeconds);
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    executor.setTaskDecorator(new LoggingTaskDecorator());
    executor.initialize();
    return executor;
}
Until now, all messages were considered to be of equal priority and everything was fine, as messages simply went into the LinkedBlockingQueue whenever none of the executor threads were available.
Now there is a requirement that a particular type of message read from the destination be given higher priority than any other message.
Currently I am using org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor, which doesn't provide any method to set a priority queue as the blocking queue implementation.
Could you please help me solve this scenario?
Or is it that the existing design of the system cannot accommodate this change?
Or what would be the best solution to handle such scenarios?
Thanks!
You can do this by simply overriding the createQueue method. You should also use an @Bean method to create the executor instance so that Spring can properly manage its lifecycle, a small but important detail (otherwise shutdown wouldn't work properly).
@Override
public Executor getAsyncExecutor() {
    return taskExecutor();
}

@Bean
public ThreadPoolTaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor() {
        @Override
        protected BlockingQueue<Runnable> createQueue(int queueCapacity) {
            // Use a priority queue instead of the default LinkedBlockingQueue.
            return new PriorityBlockingQueue<>(queueCapacity);
        }
    };
    executor.setCorePoolSize(corePoolSize);
    executor.setMaxPoolSize(maxPoolSize);
    executor.setQueueCapacity(queueCapacity);
    executor.setKeepAliveSeconds(keepAliveSeconds);
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    executor.setTaskDecorator(new LoggingTaskDecorator());
    return executor;
}
Something like this should work. The createQueue method now creates a PriorityBlockingQueue instead of the default LinkedBlockingQueue.
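One caveat worth adding (my note, not part of the original answer): PriorityBlockingQueue throws a ClassCastException unless its elements are mutually comparable, so the queued tasks need to implement Comparable or the queue needs a Comparator. A minimal sketch of a priority-carrying wrapper (the class name is hypothetical):
// Hypothetical wrapper to give submitted work an explicit priority.
static class PrioritizedRunnable implements Runnable, Comparable<PrioritizedRunnable> {

    private final int priority;       // lower value = higher priority
    private final Runnable delegate;

    PrioritizedRunnable(int priority, Runnable delegate) {
        this.priority = priority;
        this.delegate = delegate;
    }

    @Override
    public void run() {
        delegate.run();
    }

    @Override
    public int compareTo(PrioritizedRunnable other) {
        return Integer.compare(this.priority, other.priority);
    }
}
Also keep in mind that @Async hands work to the executor via submit(), so what typically ends up in the queue is a FutureTask wrapper rather than your own Runnable; making the priority visible there may require a custom Comparator or extra unwrapping.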
Related
I have a Spring Boot application that runs tasks in separate threads every 24 hours. The tasks take a little while, but after that the app sits idle until the next batch triggers the API the following day. Demo code is as follows:
private ExecutorService es = Executors.newFixedThreadPool(2);

@PostMapping
public void startTest(@RequestBody DummyModel dummy) throws InterruptedException {
    int i = 0;
    while (i < 3) {
        es.execute(new ListProcessing(dummy));
        es.execute(new AnotherListProcessing(dummy));
        i++;
    }
    // because the methods are async, this line is reached in an instant, before processing is done
}
Now, as you can see, I have no es.shutdown() after my while loop. In most articles and discussions there seems to be an emphasis on how important a .shutdown() call is once you have completed your work. Adding it after my while loop would mean that the next POST request results in an error (which makes sense, since .shutdown() states that it will not allow new tasks after the existing tasks are completed).
Now, I want to know: is it really important to call .shutdown() here? My app will receive a POST request once a day, every day, so the ExecutorService will be used frequently. Are there downsides to not shutting down your executor for a prolonged period of time? And if I really do need to shut it down every time the ExecutorService was used, how can I do it so that the app is ready to receive a new request the following day?
I was thinking of adding these lines after my while loop:
es.shutdown();
es = Executors.newFixedThreadPool(2);
It works, but that seems:
a) unnecessary (why shut it down and waste effort recreating it)
b) just wrong; it looks and feels wrong, and there has to be a better way.
Update
So it seems that you can either create a custom ThreadPoolExecutor (which for my simple use case feels like overkill) or you can use the cached thread pool flavour of ExecutorService (though it will create as many threads as it needs, so if you only want to use n of them that option may not be for you).
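For reference (my addition, not from the original post), a cached pool is created like this:
// Grows on demand, reuses idle threads, and reclaims them after 60 seconds of idleness;
// the total number of threads is not bounded.
private final ExecutorService es = Executors.newCachedThreadPool();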
Update 2
As explained by Thomas and Deinum, we can use a custom-defined executor. It does exactly what ExecutorService does, plus the cleanup, and it also allows for a quick and easy way to configure it. For anyone curious, here is how I implemented it:
@Autowired
private TaskExecutor taskExecutor;

@PostMapping
public void startTest(@RequestBody DummyModel dummy) throws InterruptedException {
    taskExecutor.execute(new ProcessList());
    taskExecutor.execute(new AnotherProcessList());
    taskExecutor.execute(new YetAnotherProcessList());
}
where taskExecutor is a bean defined in my main class (or it can be defined in any class annotated with @Configuration). It is as follows:
@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);   // minimum number of threads that are always kept
    executor.setMaxPoolSize(10);   // if the core threads are busy and the queue is full, additional threads are created up to this limit
    executor.setQueueCapacity(5);  // number of tasks that can wait in the queue (caution: queues consume memory; larger queue = more memory)
    return executor;
}
Your proposed solution won't work reliably.
es.shutdown();
es = Executors.newFixedThreadPool(2);
This assumes that the startTest method is only ever invoked by one incoming request at a time and that the next request always arrives after the executor has been shut down and recreated.
That solution would only work if you also created the ExecutorService inside the method, scoped to the method. However, that is also problematic: if 100 requests come in, you will create 200 concurrent threads, and each thread takes up resources. So you have effectively created a potential resource leak (or at least an attack vector for your application).
General rule of thumb: if you create the executor yourself in the same scope, then you should close it; if not, leave it untouched. In your case you basically use a shared thread pool and should only shut it down on application stop. You could do that in a @PreDestroy method in your controller:
@PreDestroy
public void cleanUp() {
    es.shutdown();
}
However instead of adding this to your controller you could also define the ExecutorService as a bean and configure a destroy method.
@Bean(destroyMethod = "shutdown")
public ExecutorService taskExecutor() {
    return Executors.newFixedThreadPool(2);
}
You could now dependency inject the ExecutorService in your controller.
@RestController
public class YourController {

    private final ExecutorService es;

    public YourController(ExecutorService es) {
        this.es = es;
    }
}
Finally, I suspect you are even better off using the Spring (Boot) provided TaskExecutor, which taps into the Spring context lifecycle automatically. You can simply inject this into your controller instead of the ExecutorService.
@RestController
public class YourController {

    private final TaskExecutor executor;

    public YourController(TaskExecutor executor) {
        this.executor = executor;
    }
}
Spring Boot provides one by default, which will be injected; you can control it using the spring.task.execution.pool.* properties.
spring.task.execution.pool.max-size=10
spring.task.execution.pool.core-size=5
spring.task.execution.pool.queue-capacity=15
Or you could define a bean; this would override the default TaskExecutor as well.
@Bean
public ThreadPoolTaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(5);
    return executor;
}
Could you tell me what the default parameters are for the Spring @Async ThreadPoolTaskExecutor, or how I can find them on my own?
What are the default values for maxPoolSize, corePoolSize, and queueCapacity?
Should I override them to improve my application or is it just fine to use default values?
I assume you would like to use the @EnableAsync (javadoc) annotation to support async task execution in Spring.
In this case the documentation states the following:
By default, Spring will be searching for an associated thread pool definition: either a unique org.springframework.core.task.TaskExecutor bean in the context, or a java.util.concurrent.Executor bean named "taskExecutor" otherwise. If neither of the two is resolvable, an org.springframework.core.task.SimpleAsyncTaskExecutor will be used to process async method invocations.
Now, if you want to provide your own customization, you can implement AsyncConfigurer (javadoc), which basically allows you to define an executor and an exception handler (the latter is out of scope for this question).
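For illustration, a minimal AsyncConfigurer implementation could look like the sketch below (the pool sizes are placeholder values I chose):
@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);      // placeholder values
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(50);
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new SimpleAsyncUncaughtExceptionHandler();
    }
}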
Regarding ThreadPoolTaskExecutor's defaults, you can check the implementation in the Spring Framework GitHub repository (ThreadPoolTaskExecutor):
private int corePoolSize = 1;
private int maxPoolSize = Integer.MAX_VALUE;
private int queueCapacity = Integer.MAX_VALUE;
I think you need @EnableAsync to enable the @Async annotation, and by default it will use the SimpleAsyncTaskExecutor implementation.
SimpleAsyncTaskExecutor does not reuse any threads; rather, it starts up a new thread for each invocation. However, it does support a concurrency limit, which will block any invocations that are over the limit until a slot has been freed up.
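As a small sketch of that concurrency limit (the limit value is arbitrary):
SimpleAsyncTaskExecutor executor = new SimpleAsyncTaskExecutor("async-");
// Blocks further submissions once 10 invocations are running concurrently.
executor.setConcurrencyLimit(10);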
You can define your own ThreadPoolTaskExecutor like this:
@Configuration
public class ThreadConfig {

    @Bean("otherExecutor")
    public TaskExecutor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(16);
        executor.setMaxPoolSize(32);
        executor.initialize();
        return executor;
    }
}
And refer to it in @Async:
#Async("otherExecutor")
void doSomething(String s) {
// this will be executed asynchronously by "otherExecutor"
}
According to the Spring sources, the @EnableAsync annotation actually falls back to SimpleAsyncTaskExecutor, which doesn't reuse threads, and the number of threads used at any time isn't limited by default.
There's a queue between the process that submits jobs and the thread pool. If all threads are occupied, the job is simply queued. If the queue is full and the threads are also occupied, the new task is rejected. There are a couple of rejection policies you can choose from (for example, caller-runs).
If you are looking for true pooling, look at SimpleThreadPoolTaskExecutor and ThreadPoolTaskExecutor.
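To illustrate that queueing and rejection behaviour (the sizes are placeholders, and CallerRunsPolicy is just one of the available policies):
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(4);
executor.setMaxPoolSize(8);
executor.setQueueCapacity(100);   // tasks wait here while all threads are busy
// When the queue is full and the maximum number of threads is reached, the submitting
// thread runs the task itself instead of the task being rejected with an exception.
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
executor.initialize();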
The bean name for the task executor that @Async uses is applicationTaskExecutor.
The properties for applicationTaskExecutor are defined in TaskExecutionProperties:
private int queueCapacity = Integer.MAX_VALUE;
private int coreSize = 8;
private int maxSize = Integer.MAX_VALUE;
These defaults are defined in TaskExecutionProperties; Spring Boot's auto-configuration uses that class rather than the ThreadPoolTaskExecutor defaults mentioned in the other answer.
I'm using Spring async tasks in my application and I have a problem with one task that requires all of the server's resources.
In particular, I have this configuration:
@Override
public Executor getAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(10);
    executor.setMaxPoolSize(100);
    executor.setQueueCapacity(100);
    executor.initialize();
    return executor;
}
I'm using async tasks on three methods. Two are very simple and fast, but one is complex because it spawns a process in which a MATLAB routine takes from a few seconds to several minutes and consumes a huge amount of resources. For this task only, I would like to have one thread and queue all the other requests, so they execute sequentially.
With the configuration above I manage all the threads of my application; is there a way to limit only this specific @Async method?
If that is not possible, would the best solution be to use a Semaphore or an ExecutorService?
What you want is to create another custom thread pool for the specific long-running task, so it does not block your other threads from running.
@Bean(name = "myExecutor")
public Executor getCustomAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(10);
    executor.setMaxPoolSize(100);
    executor.setQueueCapacity(100);
    executor.initialize();
    return executor;
}
And set it for your async method:
#Async("myExecutor")
I have declared a ThreadPoolTaskExecutor as a @Bean in my application context as follows:
@Configuration
@ConfigurationProperties(prefix = "application")
@EnableCaching
public class ApplicationConfig {

    private static final int POOL_SIZE = 2;

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor pool = new ThreadPoolTaskExecutor();
        pool.setCorePoolSize(POOL_SIZE);
        return pool;
    }
}
I have two different services that each need an instance of ThreadPoolTaskExecutor wired in. Each service will submit a Runnable that does some service-specific job.
For example, these are the 2 services:
@Service
public class TerminatedContractsService {

    @Autowired
    private ThreadPoolTaskExecutor taskExec;

    public void notifyTerminatedContracts(Date d) {
        // do some contract-specific work
        taskExec.submit(() -> System.out.println("emailing terminated contracts..."));
    }
}

@Service
public class SalaryCalculationService {

    @Autowired
    private ThreadPoolTaskExecutor taskExec;

    public void calculateSalary(Date d) {
        // do some salary-related work
        taskExec.submit(() -> System.out.println("calculating salaries..."));
    }
}
It should be safe to share the same ThreadPoolTaskExecutor instance (since it's a singleton) between both services, right?
Do you foresee any issues with this, or should I use prototype scope instead?
Yes, it's ok for multiple services to use the same executor. There isn't any state kept by the executor that would make it a good idea to throw it away and create a new one.
There can be things to look out for. If you have tasks of varying duration that you submit to the same executor, short duration tasks can be blocked if they are queued up behind long running ones. You may want to make sure tasks submitted to an executor have similar durations.
Also if you have some category of task that you need to execute predictably and reliably you might want to reserve a dedicated executor for it. Otherwise if those tasks share a queue with others and there's an issue that prevents those tasks from completing or just slows them down, then the tasks you need executed reliably may be stuck queued up behind them.
But no, prototype scope shouldn't be necessary.
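If you do decide to reserve a dedicated executor for a critical category of work, as suggested above, a minimal sketch could look like this (the bean names and sizes are hypothetical):
@Configuration
public class ExecutorConfig {

    @Bean("contractsExecutor")        // shared pool for ordinary work
    public ThreadPoolTaskExecutor contractsExecutor() {
        ThreadPoolTaskExecutor pool = new ThreadPoolTaskExecutor();
        pool.setCorePoolSize(2);
        return pool;
    }

    @Bean("salaryExecutor")           // dedicated pool for the critical salary jobs
    public ThreadPoolTaskExecutor salaryExecutor() {
        ThreadPoolTaskExecutor pool = new ThreadPoolTaskExecutor();
        pool.setCorePoolSize(2);
        return pool;
    }
}
Each service would then inject its own pool with @Qualifier (for example @Qualifier("salaryExecutor")) so that a backlog in one pool cannot delay the other.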
I have several methods annotated with @Scheduled(fixedDelay = 10000).
In the application context, I have this annotation-driven setup:
<task:annotation-driven />
The problem is, sometimes some of the method executions get delayed by seconds and even minutes.
I'm assuming that even if a method takes a while to finish executing, the other methods would still execute. So I don't understand the delay.
Is there a way to maybe lessen or even remove the delay?
For completeness, the code below shows the simplest possible way to configure the scheduler with Java config:
@Configuration
@EnableScheduling
public class SpringConfiguration {

    @Bean(destroyMethod = "shutdown")
    public Executor taskScheduler() {
        return Executors.newScheduledThreadPool(5);
    }
    ...
When more control is desired, a @Configuration class may implement SchedulingConfigurer.
The documentation about scheduling says:
If you do not provide a pool-size attribute, the default thread pool will only have a single thread.
So if you have many scheduled tasks, you should configure the scheduler, as explained in the documentation, to have a pool with more threads, to make sure one long task doesn't delay all the other ones.
If you're using Spring Boot:
There is also a property you can set in your application properties file that increases the pool size:
spring.task.scheduling.pool.size=10
Seems to be there since Spring Boot 2.1.0.
A method annotated with @Scheduled is meant to be run separately, on a different thread, at a given moment in time.
If you haven't provided a TaskScheduler in your configuration, Spring will use
Executors.newSingleThreadScheduledExecutor();
which returns a ScheduledExecutorService that runs on a single thread. As such, if you have multiple @Scheduled methods, although they are scheduled, each needs to wait for the thread to finish executing the previous task. You might keep getting bigger and bigger delays as the queue fills up faster than it empties out.
Make sure you configure your scheduling environment with an appropriate amount of threads.
The @EnableScheduling annotation documentation provides the key information and how to resolve it:
By default, will be searching for an associated scheduler definition: either a unique TaskScheduler bean in the context, or a TaskScheduler bean named "taskScheduler" otherwise; the same lookup will also be performed for a ScheduledExecutorService bean. If neither of the two is resolvable, a local single-threaded default scheduler will be created and used within the registrar.
When more control is desired, a @Configuration class may implement SchedulingConfigurer. This allows access to the underlying ScheduledTaskRegistrar instance. For example, the following example demonstrates how to customize the Executor used to execute scheduled tasks:
@Configuration
@EnableScheduling
public class AppConfig implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        taskRegistrar.setScheduler(taskExecutor());
    }

    @Bean(destroyMethod = "shutdown")
    public Executor taskExecutor() {
        return Executors.newScheduledThreadPool(100);
    }
}
(emphasis added)
You can use:
@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler taskScheduler = new ThreadPoolTaskScheduler();
    taskScheduler.setPoolSize(2);
    return taskScheduler;
}
See the link below for reference; it has a great explanation and implementation:
https://crmepham.github.io/spring-boot-multi-thread-scheduling/#:~:text=By%20default%20Spring%20Boot%20will,there%20is%20enough%20threads%20available).
If you're using an XML file, add the lines below:
<task:scheduler id="taskScheduler" pool-size="15" />
<task:scheduled-tasks scheduler="taskScheduler" >
....
</task:scheduled-tasks>
By default, Spring uses a single thread for scheduled tasks. You can use a @Configuration class that implements SchedulingConfigurer. Reference: https://crmepham.github.io/spring-boot-multi-thread-scheduling/
We need to provide our own thread pool scheduler, otherwise it will use the default single-threaded executor. I added the code below to fix it:
@Bean
public Executor scheduledTaskThreadPool() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(10);
    executor.setMaxPoolSize(10);
    executor.setThreadNamePrefix("name-");
    executor.initialize();
    return executor;
}