How to configure ExecutorService newFixedThreadPool() behavior? - java

Oracle's documentation describes newFixedThreadPool as follows:
Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue.
Can I fix the queue size, for example at 1, so that new submissions are held back until the current task finishes executing? Could I even use a stack instead of a queue? When working in a time-sensitive way, the first tasks may become invalid after a while, so a fixed-size stack might be what I actually need.

You can't. Executors provides some commonly used static factory methods like
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
But if that's not what you need, make your own ThreadPoolExecutor. E.g.
public static ExecutorService newFixedThreadPoolWithBoundQueue(
        int nThreads,
        int capacity) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>(capacity));
}
or even use a stack instead of a queue
I would not try that.
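For completeness, here is a rough sketch of how the bounded-queue factory above behaves in use; the task body, the capacity of 1, and the "throttled" pool are illustrative assumptions, not part of the original answer. Note that with the default AbortPolicy, submissions beyond the capacity are rejected rather than blocked.
ExecutorService service = newFixedThreadPoolWithBoundQueue(1, 1);

Runnable slow = () -> {
    try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
};

service.submit(slow); // picked up by the single worker thread
service.submit(slow); // waits in the queue (capacity 1)
service.submit(slow); // queue full: default AbortPolicy throws RejectedExecutionException

// To throttle the submitter instead of failing, pass a RejectedExecutionHandler,
// e.g. CallerRunsPolicy, which runs the overflow task on the submitting thread:
ExecutorService throttled = new ThreadPoolExecutor(1, 1,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(1),
        new ThreadPoolExecutor.CallerRunsPolicy());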

Related

Executor Thread Pool - limit queue size and dequeue oldest

I am using a fixed thread pool for a consumer of produced messages within a Spring Boot application. My producer produces messages (a lot) faster than the consumer can handle them, so the queue of the thread pool seems to be "flooding".
What would be the best way to limit the queue size? The intended queue behaviour would be "if the queue is full, remove the head and insert the new Runnable". Is it possible to configure an Executors thread pool like this?
ThreadPoolExecutor supports this function via ThreadPoolExecutor.DiscardOldestPolicy:
A handler for rejected tasks that discards the oldest unhandled
request and then retries execute, unless the executor is shut down, in
which case the task is discarded.
You need to construct the pool with this policy manually, for example:
int poolSize = ...;
int queueSize = ...;
RejectedExecutionHandler handler = new ThreadPoolExecutor.DiscardOldestPolicy();
ExecutorService executorService = new ThreadPoolExecutor(poolSize, poolSize,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<>(queueSize),
        handler);
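As a rough sketch of what that does in practice (the pool size, queue capacity, and task bodies below are made up for illustration, not part of the original answer):
ThreadPoolExecutor discarding = new ThreadPoolExecutor(1, 1,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<>(2),
        new ThreadPoolExecutor.DiscardOldestPolicy());

for (int i = 0; i < 10; i++) {
    final int id = i;
    discarding.execute(() -> {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        System.out.println("consumed message " + id);
    });
}
// With one worker and a queue of 2, most of the 10 submissions push an older queued
// task out; roughly the first task and the last few are the only ones that run.
discarding.shutdown();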
Executors.newFixedThreadPool will create a thread pool of the size that you pass:
ExecutorService service = Executors.newFixedThreadPool(THREAD_SIZE);
This internally creates an instance of ThreadPoolExecutor, which implements ExecutorService.
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
To create a custom thread pool, you can just do:
ExecutorService service = new ThreadPoolExecutor(5, 5,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(10));
Here we can specify the size of the queue, using the overloaded constructor of the LinkedBlockingQueue.
public LinkedBlockingQueue(int capacity) {
    if (capacity <= 0) throw new IllegalArgumentException();
    this.capacity = capacity;
    last = head = new Node<E>(null);
}
Hope this helps. Cheers!
For example, if you are working with a database (PostgreSQL) that can handle 100 connections at a time, and each task may take around 2000 ms ...
int THREADS = 50;
ExecutorService exe = new ThreadPoolExecutor(THREADS,
        50,
        0L,
        TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(10),
        new ThreadPoolExecutor.CallerRunsPolicy());

SynchronousQueue does not block when offered task by ThreadPoolExecutor

I use a pretty much default newCachedThreadPool, but I want to limit thread creation, so I create the ExecutorService like this:
new ThreadPoolExecutor(0, Runtime.getRuntime().availableProcessors() * 2,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<>());
After reading the javadocs I expected it to work in such a way that when I submit a Runnable, the executor blocks when offering the SynchronousQueue a new task until there is an available thread to execute it, and then a handoff occurs. Unfortunately, after reaching the thread pool capacity and when all threads are busy, the executor throws RejectedExecutionException. I know I can pass a RejectedExecutionHandler that will block, but I'm just surprised that it seems I have to. Can someone explain whether it is really working as intended, or am I doing something wrong?
This code reproduces my case:
public static void main(String[] args) {
    ThreadPoolExecutor executor = new ThreadPoolExecutor(0, Runtime.getRuntime().availableProcessors() * 2,
            60L, TimeUnit.SECONDS,
            new SynchronousQueue<>());
    while (true) {
        executor.submit(() -> System.out.println("bla"));
    }
}
This is in accordance with ThreadPoolExecutor API:
• If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected.
As to why the SynchronousQueue does not block: ThreadPoolExecutor uses queue.offer() instead of put().
SynchronousQueue doesn't block until something takes the waiting element because offer is used. It just fails to add the element to the queue. The blocking queue part is the take method, which blocks until an element is added.
public static void main(String[] args) throws InterruptedException {
    SynchronousQueue<Integer> que = new SynchronousQueue<>();
    System.out.println(que.offer(1));
    Object lock = new Object();
    synchronized (lock) {
        new Thread(() -> {
            synchronized (lock) {
                lock.notify();
            }
            try {
                que.take();
            } catch (Exception e) {
            }
        }).start();
        lock.wait(); // wait() throws InterruptedException, hence the throws clause above
    }
    System.out.println(que.offer(1));
}
This example will output false, then maybe (slight race condition) true. The first add just fails because nobody is waiting to take the offered element.
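If the goal is simply to stop this cached-style pool from rejecting work once all threads are busy, one hedged sketch (keeping the bounds from the question, everything else illustrative) is to keep the SynchronousQueue but add a handler. CallerRunsPolicy makes the submitting thread execute the overflow task itself, which throttles submission without writing a custom blocking handler:
ExecutorService executor = new ThreadPoolExecutor(
        0, Runtime.getRuntime().availableProcessors() * 2,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<>(),
        new ThreadPoolExecutor.CallerRunsPolicy());

while (true) {
    executor.submit(() -> System.out.println("bla")); // never rejected; the caller runs
                                                      // the task when all workers are busy
}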

How to use preexisting runnables, limiting the number of runnables to create?

Problem statement:
I have 5000 IDs that point to rows in a database. [It could be more than 5000.]
Each Runnable retrieves the row in the database for a given ID and performs some time-consuming tasks.
public class BORunnable implements Callable<Properties> {
    private String branchID;

    public BORunnable(String branchID) {
        this.branchID = branchID;
    }

    public void setBranchId(String branchID) {
        this.branchID = branchID;
    }

    public Properties call() {
        // Get the branchID
        // Do some time-consuming tasks; merely takes 1 sec to complete
        return propObj;
    }
}
I am going to submit these runnables to the executor service.
For that, I need to create and submit 5000 or even more runnables to the executor service. Creating that many runnables could, in my environment, throw an out-of-memory exception.
[5000 is just an example.]
So I came up with an approach; I would be thankful if you suggest anything different:
Created a thread pool of fixed size 10.
int corePoolSize = 10;
ThreadPoolExecutor executor = new ThreadPoolExecutor(corePoolSize,
        corePoolSize + 5, 10, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>());
Collection<Future<Properties>> futuresCollection =
        new LinkedList<Future<Properties>>();
Added all of the branchIDs to the branchIdQueue
Queue<String> branchIdQueue = new LinkedList<String>();
Collections.addAll(branchIdQueue, branchIDs);
I am trying to reuse the runnables, so I create a bunch of them up front.
Now I want this number of elements to be dequeued, creating a runnable for each:
int noOfElementsToDequeue = Math.min(corePoolSize, branchIdQueue.size());
ArrayList<BORunnable> runnablesList = dequeueAndSubmitRunnable(
        branchIdQueue, noOfElementsToDequeue);
ArrayList<BORunnable> dequeueAndSubmitRunnable(Queue<String> branchIdQueue,
        int noOfElementsToDequeue) {
    ArrayList<BORunnable> runnablesList = new ArrayList<BORunnable>();
    for (int i = 0; i < noOfElementsToDequeue; i++) {
        // Create this number of runnables
        runnablesList.add(new BORunnable(branchIdQueue.remove()));
    }
    return runnablesList;
}
Submitting the retrieved runnables to the executor
for (BORunnable boRunnableObj : runnablesList) {
    futuresCollection.add(executor.submit(boRunnableObj));
}
If the queue is empty, I have already created all the runnables I needed. If it's not, I want to reuse the runnables and submit them to the executor.
Here I compute the number of runnables to be reused as the total count minus the current active count
[an approximation is enough for me]:
int coreSize = executor.getCorePoolSize();
while (!branchIdQueue.isEmpty()) {
    // Total size - current active count
    int runnablesToBeReused = coreSize - executor.getActiveCount();
    if (runnablesToBeReused != 0) {
        ArrayList<String> branchIDsTobeReset = removeElementsFromQueue(
                branchIdQueue, runnablesToBeReused);
        ArrayList<BORunnable> boRunnableToBeReusedList =
                getBORunnableToBeReused(boRunnableList, runnablesToBeReused);
        for (BORunnable aRunnable : boRunnableList) {
            //aRunnable.set(branchIDSTobeRest.get(0));
        }
    }
}
My problem is:
I am not able to find out which Runnable has been released by the thread pool, so that I could reuse it for the next submission.
Hence, I randomly take a few runnables and try to set the branchId on them, but then a thread race problem may occur. [I don't want to use volatile.]
Reusing the Runnables makes no sense as the problem is not the cost of creating or freeing the runnable instances. These come almost for free in Java.
What you want to do is to limit the number of pending jobs which is easy to achieve: just provide a limit to the queue you are passing to the executor service. That’s as easy as passing an int value (the limit) to the LinkedBlockingQueue’s constructor. Note that you can also use an ArrayBlockingQueue then as a LinkedBlockingQueue does not provide an advantage for bounded queue usage.
When you have provided a limit to the queue, the executor will reject queuing up new jobs. The only thing left to do is to provide an appropriate RejectedExecutionHandler to the executor. E.g. CallerRunsPolicy would be sufficient to prevent the caller from creating more new jobs while the threads are all busy and the queue is full.
After execution, the Runnables are subject to garbage collection.
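A minimal sketch of that bounded-queue approach, reusing the names from the question (branchIdQueue, BORunnable); the pool size and the queue capacity of 20 are illustrative values, not recommendations:
int corePoolSize = 10;
int queueCapacity = 20; // illustrative bound; tune to what your memory allows

ExecutorService executor = new ThreadPoolExecutor(
        corePoolSize, corePoolSize,
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<Runnable>(queueCapacity),
        new ThreadPoolExecutor.CallerRunsPolicy());

Collection<Future<Properties>> futures = new LinkedList<>();
while (!branchIdQueue.isEmpty()) {
    // At most corePoolSize running plus queueCapacity queued BORunnable instances
    // exist at any moment; once the queue is full, CallerRunsPolicy runs the task on
    // this thread, so the loop throttles itself instead of building 5000 objects up front.
    futures.add(executor.submit(new BORunnable(branchIdQueue.remove())));
}
executor.shutdown();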

ExecutorService vs ThreadPoolExecutor using LinkedBlockingQueue

I am working on a multithreaded project in which I need to spawn multiple threads to measure the end-to-end performance of my client code, as I'm doing load and performance testing. So I created the code below, which uses ExecutorService.
Below is the code with ExecutorService:
public class MultithreadingExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(20);
        for (int i = 0; i < 100; i++) {
            executor.submit(new NewTask());
        }
        executor.shutdown();
        executor.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    }
}

class NewTask implements Runnable {
    @Override
    public void run() {
        // Measure the end-to-end latency of my client code
    }
}
Problem statement:
Now I was reading some articles on the Internet and found out that there is ThreadPoolExecutor as well, so I got confused about which one I should be using.
If I replace my above code from:
ExecutorService executor = Executors.newFixedThreadPool(20);
for (int i = 0; i < 100; i++) {
    executor.submit(new NewTask());
}
to:
BlockingQueue<Runnable> threadPool = new LinkedBlockingQueue<Runnable>();
ThreadPoolExecutor tpExecutor = new ThreadPoolExecutor(20, 2000, 0L, TimeUnit.MILLISECONDS, threadPool);
tpExecutor.prestartAllCoreThreads();
for (int i = 0; i < 100; i++) {
    tpExecutor.execute(new NewTask());
}
will this make any difference? I am trying to understand the difference between my original code using ExecutorService and the new code using ThreadPoolExecutor. Some of my teammates said the second one (ThreadPoolExecutor) is the right way to use it.
Can anyone clarify this for me?
Here is the source of Executors.newFixedThreadPool:
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
It internally uses the ThreadPoolExecutor class with a default configuration, as you can see above. Now there are scenarios where the default configuration is not suitable, say when a priority queue needs to be used instead of a LinkedBlockingQueue, etc. In such cases the caller can work directly on the underlying ThreadPoolExecutor by instantiating it and passing the desired configuration to it.
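For instance, here is a rough sketch of the priority-queue case mentioned above; the PriorityTask class and its ordering are hypothetical, not part of the question. Queued tasks must be Comparable (or the queue needs a Comparator), and they should be handed to execute() rather than submit(), because submit() wraps tasks in a FutureTask, which is not Comparable:
class PriorityTask implements Runnable, Comparable<PriorityTask> {
    private final int priority; // lower value = runs earlier while still queued
    private final Runnable work;

    PriorityTask(int priority, Runnable work) {
        this.priority = priority;
        this.work = work;
    }

    @Override
    public int compareTo(PriorityTask other) {
        return Integer.compare(this.priority, other.priority);
    }

    @Override
    public void run() {
        work.run();
    }
}

ThreadPoolExecutor priorityExecutor = new ThreadPoolExecutor(
        1, 1, 0L, TimeUnit.MILLISECONDS,
        new PriorityBlockingQueue<Runnable>());

priorityExecutor.execute(new PriorityTask(5, () -> System.out.println("low")));
priorityExecutor.execute(new PriorityTask(1, () -> System.out.println("high")));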
will this make any difference?
It will make your code more complicated for little benefit.
I am trying to understand what the difference is between my original code, which uses ExecutorService, and the new code I pasted, which uses ThreadPoolExecutor?
Next to nothing. Executors creates a ThreadPoolExecutor to do the real work.
Some of my teammates said the second one (ThreadPoolExecutor) is the right way to use?
Just because it's more complicated doesn't mean it's the right thing to do. The designers provided the Executors.newXxxx methods to make life simpler for you and because they expected you to use those methods. I suggest you use them as well.
Executors#newFixedThreadPool(int nThreads)
ExecutorService executor = Executors.newFixedThreadPool(20);
is basically
return new ThreadPoolExecutor(20, 20,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());
2.
BlockingQueue<Runnable> threadPool = new LinkedBlockingQueue<Runnable>();
ThreadPoolExecutor tpExecutor = new ThreadPoolExecutor(20, 2000, 0L,
        TimeUnit.MILLISECONDS, threadPool);
In the second case, you are just increasing the maxPoolSize to 2000, which I doubt you would need.
I believe one more advantage is with the RejectionHandler. Correct me if I'm wrong.
In the first example, you have created just 20 threads with the statement below:
ExecutorService executor = Executors.newFixedThreadPool(20);
In the second example, you have set the thread limit to a range between 20 and 2000:
ThreadPoolExecutor tpExecutor = new ThreadPoolExecutor(20, 2000, 0L,
        TimeUnit.MILLISECONDS, threadPool);
In theory more threads are available for processing, but you have configured the task queue as an unbounded queue; an unbounded LinkedBlockingQueue never fills up, so the pool will in practice never grow beyond the 20 core threads.
ThreadPoolExecutor is more useful when you customize many or all of the parameters below:
ThreadPoolExecutor(int corePoolSize,
                   int maximumPoolSize,
                   long keepAliveTime,
                   TimeUnit unit,
                   BlockingQueue<Runnable> workQueue,
                   ThreadFactory threadFactory,
                   RejectedExecutionHandler handler)
RejectedExecutionHandler is useful when you set a maximum capacity for the workQueue and more tasks are submitted to the executor than the workQueue can hold.
Have a look at the "Rejected tasks" section of the ThreadPoolExecutor javadoc for more details.
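As a hedged sketch of a pool customized through that constructor; the sizes, the thread-name prefix, and the handler choice here are placeholders for illustration, not recommendations:
BlockingQueue<Runnable> workQueue = new ArrayBlockingQueue<>(100);

ThreadFactory namedFactory = new ThreadFactory() {
    private final AtomicInteger counter = new AtomicInteger();
    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, "load-test-worker-" + counter.incrementAndGet());
        t.setDaemon(false);
        return t;
    }
};

ThreadPoolExecutor customExecutor = new ThreadPoolExecutor(
        20,                      // corePoolSize
        40,                      // maximumPoolSize (only used once workQueue is full)
        30L, TimeUnit.SECONDS,   // keepAliveTime for threads above the core size
        workQueue,
        namedFactory,
        new ThreadPoolExecutor.CallerRunsPolicy()); // applied when queue and pool are both full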
After 2 days of GC out-of-memory exceptions, ThreadPoolExecutor saved my life. :)
As Balaji said,
[..] one more advantage is with RejectionHandler.
In my case I had a lot of RejectedExecutionExceptions, and specifying the discard policy (as follows) solved all my problems.
private ThreadPoolExecutor executor = new ThreadPoolExecutor(1, cpus, 1, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), new ThreadPoolExecutor.DiscardPolicy());
But be careful! It works only if you don't need to execute every task that you submit to the executor, since DiscardPolicy silently drops the rejected ones.
For further information about ThreadPoolExecutor take a look at Darren's answer

ThreadPoolExecutor fixed thread pool with custom behaviour

I'm new to this topic. I'm using a ThreadPoolExecutor created with Executors.newFixedThreadPool(10), and after the pool is full I'm starting to get a RejectedExecutionException.
Is there a way to "force" the executor to put the new task in a "waiting" state instead of rejecting it, and to start it when the pool is freed?
Thanks
Issue regarding this
https://github.com/evilsocket/dsploit/issues/159
Line of code involved https://github.com/evilsocket/dsploit/blob/master/src/it/evilsocket/dsploit/net/NetworkDiscovery.java#L150
If you use Executors.newFixedThreadPool(10), it queues the tasks and they wait until a thread is ready.
This method is
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
As you can see, the queue used is unbounded (which can be a problem in itself) but it means the queue will never fill and you will never get a rejection.
BTW, if you have CPU-bound tasks, an optimal number of threads can be:
int processors = Runtime.getRuntime().availableProcessors();
ExecutorService es = Executors.newFixedThreadPool(processors);
A test class which might illustrate the situation
public static void main(String... args) {
    ExecutorService es = Executors.newFixedThreadPool(2);
    for (int i = 0; i < 1000 * 1000; i++)
        es.submit(new SleepOneSecond());
    System.out.println("Queue length " + ((ThreadPoolExecutor) es).getQueue().size());
    es.shutdown();
    System.out.println("After shutdown");
    try {
        es.submit(new SleepOneSecond());
    } catch (Exception e) {
        e.printStackTrace(System.out);
    }
}

static class SleepOneSecond implements Callable<Void> {
    @Override
    public Void call() throws Exception {
        Thread.sleep(1000);
        return null;
    }
}
prints
Queue length 999998
After shutdown
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@e026161 rejected from java.util.concurrent.ThreadPoolExecutor@3e472e76[Shutting down, pool size = 2, active threads = 2, queued tasks = 999998, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2013)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
    at Main.main(Main.java:17)
It is very possible that a thread calls exit, which sets mStopped to true and shuts down the executor, but:
your running thread might be in the middle of the while (!mStopped) loop and try to submit a task to the executor, which has been shut down by exit;
the condition in the while still returns true because the change made to mStopped is not visible (you don't use any form of synchronization around that flag).
I would suggest:
make mStopped volatile
handle the case where the executor is shut down while you are in the middle of the loop (for example by catching RejectedExecutionException, or probably better: shut down your executor after your while loop instead of shutting it down in your exit method).
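A minimal sketch of those two suggestions; the field and method names mirror the ones discussed above, while scanHost() and the loop body are illustrative placeholders for the real scanning code:
private volatile boolean mStopped = false;   // volatile so the write in exit() is visible here
private final ExecutorService mExecutor = Executors.newFixedThreadPool(10);

public void run() {
    while (!mStopped) {
        // ... pick the next host/work item (placeholder for the real loop body) ...
        mExecutor.execute(() -> scanHost());  // scanHost() is an illustrative placeholder
    }
    // Shut down only after the loop has ended, so the loop can never submit
    // to an already-shut-down executor.
    mExecutor.shutdown();
}

public void exit() {
    mStopped = true;   // just flip the flag; do not shut the executor down here
}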
Building on earlier suggestions, you can use a blocking queue to construct a fixed size ThreadPoolExecutor. If you then supply your own RejectedExecutionHandler which adds tasks to the blocking queue, it will behave as described.
Here's an example of how you could construct such an executor:
int corePoolSize = 10;
int maximumPoolSize = 10;
int keepAliveTime = 0;
int maxWaitingTasks = 10;

ThreadPoolExecutor blockingThreadPoolExecutor = new ThreadPoolExecutor(
        corePoolSize, maximumPoolSize,
        keepAliveTime, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(maxWaitingTasks),
        new RejectedExecutionHandler() {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                try {
                    executor.getQueue().put(r);
                } catch (InterruptedException e) {
                    throw new RuntimeException("Interrupted while submitting task", e);
                }
            }
        });
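Used like this (the loop count and the sleeping task are just an illustration), a fast producer pauses instead of failing:
for (int i = 0; i < 1000; i++) {
    blockingThreadPoolExecutor.execute(() -> {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    });
    // Once 10 tasks are running and 10 more are waiting, execute() triggers the
    // handler above, which blocks in getQueue().put(r) until a slot frees up.
}
blockingThreadPoolExecutor.shutdown();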
If I understand correctly, you have created your thread pool with a fixed number of threads, but you might have more tasks to submit to the thread pool. I would calculate the keepAliveTime based on the request and set it dynamically. That way you would not get a RejectedExecutionException.
For example
long keepAliveTime = ((applications.size() * 60) / FIXED_NUM_OF_THREADS) * 1000;
threadPoolExecutor.setKeepAliveTime(keepAliveTime, TimeUnit.MILLISECONDS);
where applications is a collection of tasks that could be different every time.
That should solve your problem if you know the average time the tasks take.
