ExecutorService vs ThreadPoolExecutor using LinkedBlockingQueue - java

I am working on a multithreaded project in which I need to spawn multiple threads to measure the end-to-end performance of my client code, as I'm doing load and performance testing. So I created the code below using ExecutorService.
Here is the code with ExecutorService:
public class MultithreadingExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(20);
        for (int i = 0; i < 100; i++) {
            executor.submit(new NewTask());
        }
        executor.shutdown();
        executor.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    }
}

class NewTask implements Runnable {
    @Override
    public void run() {
        // Measure the end-to-end latency of my client code
    }
}
Problem statement:
Now I was reading some articles on the Internet and found out there is a ThreadPoolExecutor as well, so I got confused about which one I should be using.
If I replace my above code from:
ExecutorService executor = Executors.newFixedThreadPool(20);
for (int i = 0; i < 100; i++) {
    executor.submit(new NewTask());
}
to:
BlockingQueue<Runnable> threadPool = new LinkedBlockingQueue<Runnable>();
ThreadPoolExecutor tpExecutor = new ThreadPoolExecutor(20, 2000, 0L, TimeUnit.MILLISECONDS, threadPool);
tpExecutor.prestartAllCoreThreads();
for (int i = 0; i < 100; i++) {
    tpExecutor.execute(new NewTask());
}
will this make any difference? I am trying to understand the difference between my original code using ExecutorService and the new code using ThreadPoolExecutor. Some of my teammates said the second one (ThreadPoolExecutor) is the right way to use it.
Can anyone clarify this for me?

Here is the source of Executors.newFixedThreadPool:
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
As you can see above, it internally uses the ThreadPoolExecutor class with a default configuration. There are scenarios where the default configuration is not suitable, say when a priority queue should be used instead of a LinkedBlockingQueue. In such cases the caller can work directly with the underlying ThreadPoolExecutor by instantiating it and passing in the desired configuration.
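For example, a minimal sketch of a pool backed by a PriorityBlockingQueue (the PriorityTask class and its priority field are hypothetical, just for illustration):
class PriorityTask implements Runnable, Comparable<PriorityTask> {
    final int priority;
    PriorityTask(int priority) { this.priority = priority; }

    @Override
    public void run() { /* do the work */ }

    @Override
    public int compareTo(PriorityTask other) {
        return Integer.compare(other.priority, this.priority); // higher priority runs first
    }
}

// execute() is used instead of submit(): submit() wraps tasks in FutureTask,
// which is not Comparable (you would get a ClassCastException once tasks start queuing).
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS,
        new PriorityBlockingQueue<Runnable>());
executor.execute(new PriorityTask(1));
executor.execute(new PriorityTask(10));
executor.shutdown();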

will this make any difference?
It will make your code more complicated for little benefit.
I am trying to understand the difference between my original code using ExecutorService and the new code using ThreadPoolExecutor.
Next to nothing. Executors creates a ThreadPoolExecutor to do the real work.
Some of my teammates said the second one (ThreadPoolExecutor) is the right way to use it?
Just because it's more complicated doesn't mean it's the right thing to do. The designers provided the Executors.newXxxx methods to make life simpler for you, and they expected you to use them. I suggest you use them as well.

Executors#newFixedThreadPool(int nThreads)
ExecutorService executor = Executors.newFixedThreadPool(20);
is basically
return new ThreadPoolExecutor(20, 20,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
2.
BlockingQueue<Runnable> threadPool = new LinkedBlockingQueue<Runnable>();
ThreadPoolExecutor tpExecutor = new ThreadPoolExecutor(20, 2000, 0L,
TimeUnit.MILLISECONDS, threadPool);
In the second case, you are only increasing the maximumPoolSize to 2000, which I doubt you would need. And because the LinkedBlockingQueue is unbounded, it never fills up, so the pool never grows past the 20 core threads anyway; extra threads are only created when the queue rejects an offer.
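A quick sketch (with hypothetical sleeping tasks) that shows the pool staying at its core size when the queue is unbounded:
ThreadPoolExecutor tpExecutor = new ThreadPoolExecutor(
        20, 2000, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());
for (int i = 0; i < 100; i++) {
    tpExecutor.execute(() -> {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
    });
}
Thread.sleep(100); // give the pool a moment to start its workers
System.out.println("Pool size:    " + tpExecutor.getPoolSize());     // prints 20, not 2000
System.out.println("Queued tasks: " + tpExecutor.getQueue().size()); // roughly 80
tpExecutor.shutdown();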

I believe one more advantage is the RejectedExecutionHandler. Correct me if I'm wrong.

In the first example, you created just 20 threads with the statement below:
ExecutorService executor = Executors.newFixedThreadPool(20);
In the second example, you set the thread count to range between 20 and 2000:
ThreadPoolExecutor tpExecutor = new ThreadPoolExecutor(20, 2000, 0L,
TimeUnit.MILLISECONDS,threadPool);
In theory more threads would be available for processing, but you have configured the task queue as an unbounded queue, so threads beyond the 20 core threads are never actually created: new threads are only added once the queue is full, and an unbounded LinkedBlockingQueue never is.
ThreadPoolExecutor becomes more useful when you customize several or all of the parameters below.
ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
ThreadFactory threadFactory,
RejectedExecutionHandler handler)
A RejectedExecutionHandler is useful when you set a maximum capacity on the workQueue and more tasks are submitted to the Executor than the pool and queue together can hold.
Have a look at the Rejected tasks section of the ThreadPoolExecutor documentation for more details.
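A minimal sketch of that combination, assuming a 20-thread pool with room for 50 waiting tasks (the handler body is just illustrative logging):
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        20, 20,                                   // core and maximum pool size
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(50),    // bounded work queue
        Executors.defaultThreadFactory(),
        new RejectedExecutionHandler() {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
                // called once all 20 threads are busy and 50 tasks are already waiting
                System.err.println("Task rejected: " + r);
            }
        });
The built-in policies (AbortPolicy, CallerRunsPolicy, DiscardPolicy, DiscardOldestPolicy) can be passed here instead of a hand-written handler.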

After two days of GC out-of-memory exceptions, ThreadPoolExecutor saved my life. :)
As Balaji said,
[..] one more advantage is with RejectionHandler.
In my case I had a lot of RejectedExecutionExceptions, and specifying the discard policy (as follows) solved all my problems.
private ThreadPoolExecutor executor = new ThreadPoolExecutor(1, cpus, 1, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), new ThreadPoolExecutor.DiscardPolicy());
But be careful! It only works if you don't need to execute every task that you submit to the executor.
For further information about ThreadPoolExecutor take a look at Darren's answer

Related

Executor Thread Pool - limit queue size and dequeue oldest

I am using a fixed thread pool for the consumer of produced messages within a Spring Boot application. My producer produces (a lot) faster than the consumer can handle a message, so the queue of the thread pool seems to be flooding.
What would be the best way to limit the queue size? The intended queue behaviour would be: if the queue is full, remove the head and insert the new Runnable. Is it possible to configure the Executors thread pool like this?
ThreadPoolExecutor supports this function via ThreadPoolExecutor.DiscardOldestPolicy:
A handler for rejected tasks that discards the oldest unhandled
request and then retries execute, unless the executor is shut down, in
which case the task is discarded.
You need to construct the pool with this policy manually, for example:
int poolSize = ...;
int queueSize = ...;
RejectedExecutionHandler handler = new ThreadPoolExecutor.DiscardOldestPolicy();
ExecutorService executorService = new ThreadPoolExecutor(poolSize, poolSize,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<>(queueSize),
handler);
This will create a thread pool for you of the size that you pass.
ExecutorService service = Executors.newFixedThreadPool(THREAD_SIZE);
This internally creates an instance of ThreadPoolExecutor, which implements ExecutorService.
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
To create a custom thread pool, you can just do:
ExecutorService service = new ThreadPoolExecutor(5, 5,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>(10));
Here we can specify the size of the queue, using the overloaded constructor of the LinkedBlockingQueue.
public LinkedBlockingQueue(int capacity) {
if (capacity <= 0) throw new IllegalArgumentException();
this.capacity = capacity;
last = head = new Node<E>(null);
}
Hope this helps. Cheers !!!
For example, if you are working with a database (psql) that can handle only 100 connections at a time, and each task may take around 2000 ms, you can cap the load with a bounded queue and CallerRunsPolicy: when the pool and the bounded queue are both full, the submitting thread runs the task itself, which naturally throttles submission.
int THREADS = 50;
ExecutorService exe = new ThreadPoolExecutor(THREADS,
        50,
        0L,
        TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(10),
        new ThreadPoolExecutor.CallerRunsPolicy());

SynchronousQueue does not block when offered task by ThreadPoolExecutor

I use a pretty much default newCachedThreadPool, but I want to limit thread creation, so I create the ExecutorService like this:
new ThreadPoolExecutor(0, Runtime.getRuntime().availableProcessors() * 2,
60L, TimeUnit.SECONDS,
new SynchronousQueue<>());
After reading the javadocs I expected it to work this way: when I submit a Runnable, the executor blocks while offering the SynchronousQueue the new task until there is a thread available to execute it, and then a handoff occurs. Unfortunately, after reaching the thread pool capacity, when all threads are busy, the executor throws RejectedExecutionException. I know I can pass a RejectedExecutionHandler that will block, but I'm just surprised that it seems I have to. Can someone explain whether it is really working as intended or whether I am doing something wrong?
This code reproduces my case:
public static void main(String[] args) {
    ThreadPoolExecutor executor = new ThreadPoolExecutor(0, Runtime.getRuntime().availableProcessors() * 2,
            60L, TimeUnit.SECONDS,
            new SynchronousQueue<>());
    while (true) {
        executor.submit(() -> System.out.println("bla"));
    }
}
This is in accordance with ThreadPoolExecutor API:
• If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected.
As to why SynchronousQueue does not block: ThreadPoolExecutor uses queue.offer() instead of put().
SynchronousQueue doesn't block until something takes the waiting element because offer is used. It just fails to add the element to the queue. The blocking queue part is the take method, which blocks until an element is added.
SynchronousQueue<Integer> que = new SynchronousQueue<>();
System.out.println(que.offer(1));
Object lock = new Object();
synchronized (lock) {
    new Thread(() -> {
        synchronized (lock) {
            lock.notify();
        }
        try {
            que.take();
        } catch (Exception e) {
        }
    }).start();
    lock.wait();
}
System.out.println(que.offer(1));
This example will output false, then maybe (slight race condition) true. The first offer simply fails because nobody is waiting to take the offered element.
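If you do want submission to block until a worker is free, one common workaround (just a sketch, not the only option) is a RejectedExecutionHandler that falls back to the queue's blocking put():
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        0, Runtime.getRuntime().availableProcessors() * 2,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<>(),
        (r, e) -> {
            try {
                // offer() failed because no thread was free and no new thread could be
                // created; put() blocks until a worker thread takes the task
                e.getQueue().put(r);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new RejectedExecutionException("Interrupted while waiting to submit", ie);
            }
        });
Be aware that the handler is also invoked when the executor has been shut down, in which case put() may block indefinitely, so this is only safe while the pool is running.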

How to configure ExecutorService newFixedThreadPool() behavior?

Oracle defines newFixedThreadPool(1) method as follows,
Creates a thread pool that reuses a fixed number of threads operating
off a shared unbounded queue
Can I fix the queue size, for example to 1, so that new tasks are blocked from being processed until the current task finishes? Or can I even use a stack instead of a queue? When working in a timely manner, the first tasks might be invalid after a while, so a fixed-size stack might be needed.
You can't. Executors provides some commonly used static factory methods like
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
But if that's not what you need, make your own ThreadPoolExecutor. E.g.
public static ExecutorService newFixedThreadPoolWithBoundQueue(
int nThreads,
int capacity) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>(capacity));
}
or even use stack instead of queue
I would not try that.
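For completeness, the helper above could be used like this; the long-running task is just a placeholder:
Runnable longRunningTask = () -> {
    try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
};

ExecutorService executor = newFixedThreadPoolWithBoundQueue(1, 1);
executor.submit(longRunningTask);  // starts running immediately
executor.submit(longRunningTask);  // waits in the bounded queue
executor.submit(longRunningTask);  // throws RejectedExecutionException: pool and queue are both full
executor.shutdown();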

What is the best way to wait for the completion of all workers in a thread pool?

Imagine I have following code:
final ExecutorService threadPool = Executors.newFixedThreadPool(
        NUMBER_OF_WORKERS);
for (int i = 0; i < NUMBER_OF_WORKERS; i++)
{
    final Worker worker = new BirthWorker(...);
    threadPool.execute(worker);
}
Now I need a piece of code which waits until all workers have completed their work.
Options I'm aware of:
while (!threadPool.isTerminated()) {}
Modify the code like that:
final List<Future<?>> futures = new ArrayList<>(NUMBER_OF_WORKERS);
final ExecutorService threadPool = Executors.newFixedThreadPool(NUMBER_OF_WORKERS);
for (int i = 0; i < NUMBER_OF_WORKERS; i++)
{
    final Worker worker = new Worker(...);
    futures.add(threadPool.submit(worker));
}
for (final Future<?> future : futures) {
    future.get();
}
// When we arrive here, all workers are guaranteed to have completed their work.
What is the best practice to wait for the completion of all workers?
I would suggest you use a CountDownLatch (assuming this is a one-time activity): in its constructor you specify how many threads you want to wait for, you share that instance across the threads, the waiting thread calls await() (blocking fully or with a timeout), and each worker calls countDown() when it is done.
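A minimal sketch of that approach (the worker body is a placeholder):
final int NUMBER_OF_WORKERS = 8;
final CountDownLatch latch = new CountDownLatch(NUMBER_OF_WORKERS);
final ExecutorService threadPool = Executors.newFixedThreadPool(NUMBER_OF_WORKERS);
for (int i = 0; i < NUMBER_OF_WORKERS; i++) {
    threadPool.execute(() -> {
        try {
            // ... do the actual work ...
        } finally {
            latch.countDown();  // always count down, even if the work throws
        }
    });
}
latch.await();                  // blocks until every worker has counted down
threadPool.shutdown();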
Another option would be to call join() on each thread to wait for its completion, if you have access to each and every thread that you want to wait for.
I would use ThreadPoolExecutor.invokeAll(Collection<? extends Callable<T>> tasks)
API: Executes the given tasks, returning a list of Futures holding their status and results when all complete
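A sketch of the invokeAll() variant; note that the workers have to be Callables, and the call itself blocks until every task has finished:
List<Callable<Void>> workers = new ArrayList<>();
for (int i = 0; i < NUMBER_OF_WORKERS; i++) {
    workers.add(() -> {
        // ... do the actual work ...
        return null;
    });
}
ExecutorService threadPool = Executors.newFixedThreadPool(NUMBER_OF_WORKERS);
// invokeAll() returns only when all tasks have completed
// (or use the overload that takes a timeout)
List<Future<Void>> results = threadPool.invokeAll(workers);
threadPool.shutdown();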
A CountDownLatch, as stated above, would do the job well; just keep in mind that you want to shut down the executor after you're done:
final ExecutorService threadPool = Executors.newFixedThreadPool(
        NUMBER_OF_WORKERS);
for (int i = 0; i < NUMBER_OF_WORKERS; i++)
{
    final Worker worker = new BirthWorker(...);
    threadPool.execute(worker);
}
threadPool.shutdown();
Unless you shut it down, threadPool.isTerminated() will stay false even when all the workers are done.
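If you would rather not poll isTerminated() in a loop at all, the usual pattern is shutdown() followed by awaitTermination():
threadPool.shutdown();                                   // stop accepting new tasks
if (!threadPool.awaitTermination(1, TimeUnit.HOURS)) {   // block until the workers finish
    threadPool.shutdownNow();                            // give up and interrupt them
}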

ThreadPoolExecutor fixed thread pool with custom behaviour

I'm new to this topic ... I'm using a ThreadPoolExecutor created with Executors.newFixedThreadPool(10), and after the pool is full I start getting a RejectedExecutionException.
Is there a way to "force" the executor to put the new task into a "waiting" state instead of rejecting it, and to start it when the pool frees up?
Thanks
The issue regarding this:
https://github.com/evilsocket/dsploit/issues/159
Line of code involved: https://github.com/evilsocket/dsploit/blob/master/src/it/evilsocket/dsploit/net/NetworkDiscovery.java#L150
If you use Executors.newFixedThreadPool(10), it queues the tasks and they wait until a thread is ready.
This method is
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
As you can see, the queue used is unbounded (which can be a problem in itself), but it means the queue will never fill up and you will never get a rejection.
BTW: if you have CPU-bound tasks, an optimal number of threads can be
int processors = Runtime.getRuntime().availableProcessors();
ExecutorService es = Executors.newFixedThreadPool(processors);
A test class which might illustrate the situation
public static void main(String... args) {
    ExecutorService es = Executors.newFixedThreadPool(2);
    for (int i = 0; i < 1000 * 1000; i++)
        es.submit(new SleepOneSecond());
    System.out.println("Queue length " + ((ThreadPoolExecutor) es).getQueue().size());
    es.shutdown();
    System.out.println("After shutdown");
    try {
        es.submit(new SleepOneSecond());
    } catch (Exception e) {
        e.printStackTrace(System.out);
    }
}

static class SleepOneSecond implements Callable<Void> {
    @Override
    public Void call() throws Exception {
        Thread.sleep(1000);
        return null;
    }
}
prints
Queue length 999998
After shutdown
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@e026161 rejected from java.util.concurrent.ThreadPoolExecutor@3e472e76[Shutting down, pool size = 2, active threads = 2, queued tasks = 999998, completed tasks = 0]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2013)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
at Main.main(Main.java:17)
It is very possible that a thread calls exit, which sets mStopped to true and shuts down the executor, but:
• your running thread might be in the middle of the while (!mStopped) loop and try to submit a task to the executor that has already been shut down by exit, or
• the condition in the while still returns true because the change made to mStopped is not visible (you don't use any form of synchronization around that flag).
I would suggest:
• make mStopped volatile;
• handle the case where the executor is shut down while you are in the middle of the loop (for example by catching RejectedExecutionException, or probably better: shut down your executor after your while loop instead of shutting it down in your exit method), as in the sketch below.
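A minimal sketch of both suggestions combined; the class and method names here are placeholders, not the actual dsploit source:
class NetworkScanner implements Runnable {
    private volatile boolean mStopped = false;  // volatile: the write in exit() is visible to the loop
    private final ExecutorService mExecutor = Executors.newFixedThreadPool(10);

    @Override
    public void run() {
        while (!mStopped) {
            mExecutor.execute(this::scanNextHost);  // placeholder task
        }
        // shut down only after the submitting loop has exited,
        // so we never submit to an already-terminated executor
        mExecutor.shutdown();
    }

    public void exit() {
        mStopped = true;  // the loop sees this on its next check
    }

    private void scanNextHost() { /* placeholder for the real work */ }
}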
Building on earlier suggestions, you can use a blocking queue to construct a fixed size ThreadPoolExecutor. If you then supply your own RejectedExecutionHandler which adds tasks to the blocking queue, it will behave as described.
Here's an example of how you could construct such an executor:
int corePoolSize = 10;
int maximumPoolSize = 10;
int keepAliveTime = 0;
int maxWaitingTasks = 10;
ThreadPoolExecutor blockingThreadPoolExecutor = new ThreadPoolExecutor(
        corePoolSize, maximumPoolSize,
        keepAliveTime, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(maxWaitingTasks),
        new RejectedExecutionHandler() {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                try {
                    executor.getQueue().put(r);
                } catch (InterruptedException e) {
                    throw new RuntimeException("Interrupted while submitting task", e);
                }
            }
        });
If I understand correctly, you have your thread pool created with a fixed number of threads, but you might have more tasks to be submitted to the pool. I would calculate the keepAliveTime based on the request and set it dynamically. That way you would not get a RejectedExecutionException.
For example
long keepAliveTime = ((applications.size() * 60) / FIXED_NUM_OF_THREADS) * 1000;
threadPoolExecutor.setKeepAliveTime(keepAliveTime, TimeUnit.MILLISECONDS);
where applications is a collection of tasks that could be different every time.
That should solve your problem, if you know the average time the tasks take.
