My ThreadPoolExecutor is failing to create new threads. I wrote a somewhat hacky LinkedBlockingQueue that accepts any task (i.e. it is unbounded) but calls an additional handler, which in my application logs a warning that the pool is falling behind. That gives me very explicit evidence that the TPE is refusing to create new threads even though the queue has thousands of entries in it. My constructor is as follows:
private final ExecutorService s3UploadPool =
new ThreadPoolExecutor(1, 40, 1, TimeUnit.HOURS, unboundedLoggingQueue);
Why is it not creating new threads?
This gotcha is covered in this blog post:
This construction of the thread pool will simply not work as expected. This is due to the logic within ThreadPoolExecutor, where new threads are added only if the attempt to offer a task to the queue fails. In our case we use an unbounded LinkedBlockingQueue, where we can always offer a task to the queue. It effectively means that the pool will never grow above the core pool size toward the maximum pool size.
If you also need to decouple the minimum from the maximum pool size, you will have to do some extended coding. I am not aware of a solution in the Java libraries or Apache Commons. The solution is to create a coupled BlockingQueue that is aware of the TPE and will go out of its way to reject a task if it knows the TPE has no threads available, then manually requeue it. It is covered in more detail in the linked post. Ultimately your construction will look like:
public static ExecutorService newScalingThreadPool(int min, int max, long keepAliveTime) {
    ScalingQueue queue = new ScalingQueue();
    ThreadPoolExecutor executor =
        new ScalingThreadPoolExecutor(min, max, keepAliveTime, TimeUnit.MILLISECONDS, queue);
    executor.setRejectedExecutionHandler(new ForceQueuePolicy());
    queue.setThreadPoolExecutor(executor);
    return executor;
}
However, it is simpler to just set corePoolSize equal to maximumPoolSize and not worry about this nonsense at all.
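A minimal sketch of that simpler approach, assuming the 40-thread maximum from the question is the real requirement (the keep-alive value is carried over from the original constructor):

private final ExecutorService s3UploadPool =
    new ThreadPoolExecutor(40, 40,                // corePoolSize == maximumPoolSize
        1, TimeUnit.HOURS,                        // keep-alive only matters if core-thread timeout is enabled
        new LinkedBlockingQueue<Runnable>());     // a plain unbounded queue is now fine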
There is a workaround to this problem. Consider the following implementation:
int corePoolSize = 40;
int maximumPoolSize = 40;
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(corePoolSize, maximumPoolSize,
60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
threadPoolExecutor.allowCoreThreadTimeOut(true);
By setting allowCoreThreadTimeOut(true), the threads in the pool are allowed to terminate after the specified idle timeout (60 seconds in this example). With this solution, it is the corePoolSize constructor argument that determines the maximum pool size in practice, because the thread pool will grow up to corePoolSize and then start adding jobs to the queue. The pool will likely never grow bigger than that, because it will not spawn new threads until the queue is full (which, given that the LinkedBlockingQueue has a capacity of Integer.MAX_VALUE, may never happen). Consequently, there is little point in setting maximumPoolSize to a larger value than corePoolSize.
Consideration: the thread pool will have 0 idle threads after the timeout has expired, which means there will be some latency before new threads are created (normally, you would always have corePoolSize threads available).
More details can be found in the JavaDoc of ThreadPoolExecutor.
As mentioned by @djechlin, this is part of the (surprising to many) defined behavior of ThreadPoolExecutor. I believe I've found a somewhat elegant solution around this behavior, which I show in my answer here:
How to get the ThreadPoolExecutor to increase threads to max before queueing?
Basically you extend LinkedBlockingQueue so that queue.offer(...) always returns false, which will add additional threads to the pool if necessary. If the pool is already at max threads and they are all busy, the RejectedExecutionHandler is called; it is the handler which then does the put(...) into the queue.
See my code there.
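For reference, a minimal, hedged sketch of that technique (my own reconstruction, not the code from the linked answer; the class and method names are made up):

import java.util.concurrent.*;

public class ScaleFirstPools {
    // offer() always reports "full", so the pool keeps adding threads up to maximumPoolSize;
    // once the pool is saturated, the rejection handler really enqueues the task.
    public static ThreadPoolExecutor newScaleFirstPool(int core, int max) {
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
            @Override
            public boolean offer(Runnable r) {
                return false; // force the executor to create a thread instead of queueing
            }
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(core, max, 60L, TimeUnit.SECONDS, queue);
        pool.setRejectedExecutionHandler((r, executor) -> {
            try {
                queue.put(r); // the pool is at max and busy: now actually queue the task
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RejectedExecutionException("Interrupted while queueing task", e);
            }
        });
        return pool;
    }
}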
Related
At the moment I'm creating fixed thread pool using the Executor service like
executor = Executors.newFixedThreadPool(coreAmount);
While this is fine, I was wondering if it's possible to keep the behaviour of creating a set number of threads but also raise the max pool limit, so that if all the threads are in use a new thread is created and used, instead of the task waiting for one of the existing threads to finish before it can start.
So for example if 8 threads are created and are being used, 9th task enters I want it to create a new thread in addition to the 8 currently in use.
It seems newCachedThreadPool() has this behaviour, but I also want the ability to create a fixed number of threads, similar to newFixedThreadPool(int nThreads).
Maybe you can use the ThreadPoolExecutor class directly. It is an ExecutorService and has the concept of a core pool count and a max pool count. It also has other features that make it more customizable than the objects returned by Executors.
Below is an example.
int coreAmount = 8;
ExecutorService executor;
//executor = Executors.newFixedThreadPool(coreAmount);
int overrun = 4;
int maxWorkCount = 1_000;
executor = new ThreadPoolExecutor(coreAmount, coreAmount + overrun, 1, TimeUnit.MINUTES, new ArrayBlockingQueue<>(maxWorkCount));
Here is more info about the params passed in the constructor in the example above.
corePoolSize - the number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
maximumPoolSize - the maximum number of threads to allow in the pool
keepAliveTime - when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating
unit - the time unit for the keepAliveTime argument
workQueue - the queue to use for holding tasks before they are executed. This queue will hold only the Runnable tasks submitted by the execute method.
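A small usage sketch for the executor constructed above (the task body is made up for illustration). Note that with the bounded ArrayBlockingQueue, the extra "overrun" threads are only started once all maxWorkCount queue slots are occupied:

for (int i = 0; i < 20; i++) {
    final int taskId = i;
    executor.execute(() -> System.out.println("task " + taskId + " ran on " + Thread.currentThread().getName()));
}
executor.shutdown(); // stop accepting new tasks; already-submitted tasks still run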
Like you said, a cached thread pool is exactly what you're looking for. From its documentation,
Creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when they are available. These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks. Calls to execute will reuse previously constructed threads if available. If no existing thread is available, a new thread will be created and added to the pool. Threads that have not been used for sixty seconds are terminated and removed from the cache. Thus, a pool that remains idle for long enough will not consume any resources.
(Emphasis: mine)
So for example if 8 threads are created and are being used, 9th task enters I want it to create a new thread in addition to the 8 currently in use.
This is exactly the case with Executors#newCachedThreadPool, as seen from its documentation above.
Here is what you can use if you want to emulate a cached thread pool, but with a minimum amount of 8 threads:
ExecutorService service = new ThreadPoolExecutor(8, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>());
This is available in ThreadPoolExecutor using the core pool and maximum pool size values:
From the javadoc, you can see that:
If there are more than corePoolSize but less than maximumPoolSize threads running, a new thread will be created only if the queue is full.
By setting corePoolSize and maximumPoolSize the same, you create a fixed-size thread pool.
By setting maximumPoolSize to an essentially unbounded value such as Integer.MAX_VALUE, you allow the pool to accommodate an arbitrary number of concurrent tasks.
Most typically, core and maximum pool sizes are set only upon construction, but they may also be changed dynamically using setCorePoolSize(int) and setMaximumPoolSize(int).
So, in your example, you would set 'corePoolSize' to 8. You would then set 'maximumPoolSize', which serves as the upper bound of the pool.
Also, as in the javadoc, these values can be altered dynamically.
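A minimal sketch of adjusting those bounds at runtime (the numbers are purely illustrative):

ThreadPoolExecutor pool = new ThreadPoolExecutor(8, 16, 60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());
pool.setCorePoolSize(10);    // grow the core on the fly
pool.setMaximumPoolSize(32); // raise the upper bound as well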
I checked the documentation; there is no parameter to specify the number of threads when using a cached thread pool. Does that mean the thread count will keep increasing under heavy load until system resources are used up?
While I'm not sure it would consume all resources, it is unbounded (well, unless you can spin up Integer.MAX_VALUE threads). It will still reuse threads and remove unused ones where possible. However you can simply use the constructor yourself:
ThreadPoolExecutor myPool = new ThreadPoolExecutor(30, 30,  // 30 thread cap (core == max, so the cap actually takes effect)
        60L, TimeUnit.SECONDS,                              // idle thread expiration time
        new LinkedBlockingDeque<>(),                        // infinite queue; other BlockingQueue implementations work too
        r -> new Thread(r, "myPool-thread"));               // thread factory (the name here is just an example)
myPool.allowCoreThreadTimeOut(true);                        // so the capped threads are still removed when idle
The idea behind newCachedThreadPool is to reuse previously created threads for short-lived tasks instead of constructing a new thread for every task.
Still, let's assume a scenario where the number of threads could approach Integer.MAX_VALUE while the number of available CPU cores is, say, 4.
As per the code below:
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
The newCachedThreadPool() method internally creates a new ThreadPoolExecutor with maximumPoolSize set to Integer.MAX_VALUE and a keep-alive time of 60 seconds.
The javadoc for keepAliveTime suggests:
keepAliveTime when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating.
This means that any thread that has been idle for 60 seconds will be terminated (with a core size of 0, every idle thread counts as "excess").
So newCachedThreadPool() should not normally cause system resources to be fully consumed, because idle threads do not linger.
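A tiny usage sketch of that behaviour (the task body is made up): a burst of tasks makes the pool grow, and the extra threads are reclaimed roughly 60 seconds after the burst ends.

ExecutorService cached = Executors.newCachedThreadPool();
for (int i = 0; i < 100; i++) {
    cached.execute(() -> {
        try {
            Thread.sleep(100); // simulate a short-lived task
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
}
cached.shutdown(); // idle threads die off after the 60-second keep-alive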
This is the cached thread pool:
new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
and this is the fixed ThreadPoolExecutor:
new ThreadPoolExecutor( 0, 20, 60L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
The first one can create Integer.MAX_VALUE threads, which is undesirable in my case.
The second one is incorrect: you cannot combine a minimum of zero threads and a maximum of 20 threads with a LinkedBlockingQueue.
From the documentation:
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html
Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn't have any effect.)
In the first case (the cached thread pool), the SynchronousQueue really serves no purpose as a queue: it just hands tasks off, so the pool only creates threads as needed and therefore needs a high upper bound.
A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.
Now, what I am after is this:
What I want is for an executor that you can submit work to, that uses a queue if maxThreads are all busy, but that also allows the threads to go idle, and not take up any resources when there are no work.
The use of:
ThreadPoolExecutor ex = new ThreadPoolExecutor(0, threads, 60L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
ex.setKeepAliveTime(idletime, TimeUnit.MILLISECONDS);
I am not sure what implications that has. The documentation only seems to discuss the use of an unbounded LinkedBlockingQueue, and I am not really sure what that means, since the no-argument constructor creates one with a capacity of Integer.MAX_VALUE.
The documentation also states:
(And the value of the maximumPoolSize therefore doesn't have any effect.)
What I want is a minimum thread pool size and a maximum thread pool size that queues up work and lets threads go idle when there is no work.
EDIT
Reading this question and the last part made me consider whether one should create a
new ThreadPoolExecutor(20, ??irrelevant??, 60L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue());
Which would create 20 threads that go idle if
ex.setKeepAliveTime(60L, TimeUnit.MILLISECONDS);
is set.
Is this a correct assumption?
Maybe this helps: https://stackoverflow.com/a/19538899/999043 (How to get the ThreadPoolExecutor to increase threads to max before queueing?)
What I want is a minimum thread pool size and a maximum thread pool size that queues up work and lets threads go idle when there is no work.
This is precisely what a fixed-size thread pool with an unbounded queue gives you. In short, defining a thread pool like
new ThreadPoolExecutor(20, 20, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
Is all you need. What you get is a thread-pool which has at most 20 threads, and a work queue which can accept an 'infinite' number of tasks.
So what if the queue is empty? The ThreadPool will have the 20 threads still alive and waiting on the queue. When they wait on the queue the threads suspend and do no work.
The one update I would make is to change 0L, TimeUnit.MILLISECONDS to something a bit higher like 1L, TimeUnit.SECONDS. This is the thread-idle period, you should let the thread stay alive for a little longer before you shut it down.
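If you also want the idle workers to actually terminate after that idle period, rather than just wait on the queue, a hedged variation (my own addition, not part of the suggestion above) is to enable core-thread timeout:

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        20, 20,                               // fixed size: core == max
        1L, TimeUnit.SECONDS,                 // idle period before a worker is reclaimed
        new LinkedBlockingQueue<Runnable>()); // unbounded work queue
pool.allowCoreThreadTimeOut(true);            // let even core threads time out when idle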
In short, a thread-pool does what I believe you want. There may be something I am missing if so let me know in comments.
In the Oracle documentation (http://docs.oracle.com/javase/7/docs/api/?java/util/concurrent/ThreadPoolExecutor.html), under the Queuing section, it's mentioned that "If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread."
A lot of our code has corePoolSize set to 1 and maximumPoolSize set to something like 10 or 20.
(1) Is it bad practice, or rather not an optimal way to use thread pools?
(2) Would you recommend having corePoolSize and maximumPoolSize be the same value when creating a ThreadPoolExecutor?
Another question is about which BlockingQueue to use when creating a ThreadPoolExecutor: LinkedBlockingQueue or ConcurrentLinkedQueue? I was looking at it mostly from a locking and performance point of view. Running a test shows that inserting into a ConcurrentLinkedQueue is quite fast compared to the other. Any thoughts?
Edited:
The first part has been answered in various questions. The one I liked:
How to get the ThreadPoolExecutor to increase threads to max before queueing?
Please find the points below.
If a new task is submitted to the list of tasks to be executed, and less than corePoolSize threads are running, a new thread is created to handle the request. Incoming tasks are queued in case corePoolSize or more threads are running.
If a request can't be queued or there are less than corePoolSize threads running, a new thread is created unless this would exceed maximumPoolSize.
If the pool currently has more than corePoolSize threads, excess threads will be terminated if they have been idle for more than the keepAliveTime.
If a new task is submitted to the list of tasks to be executed and there are more than corePoolSize but less than maximumPoolSize threads running, a new thread will be created only if the queue is full. This means that when using an unbounded queue, no more threads than corePoolSize will be running and keepAliveTime has no influence.
If corePoolSize and maximumPoolSize are the same, a fixed-size thread pool is used.
Your ThreadPool Size
Core Pool size is 1.
Max Pool size is 10 or 20.
Question 1: Is it bad practice, or rather not an optimal way to use thread pools?
It depends on which queue is used with the pool.
If you are using a bounded queue, new threads will be created up to the maximum pool size once the bounded queue reaches its limit; until then the pool runs with corePoolSize threads. If you are using an unbounded queue, the pool will only ever use corePoolSize threads.
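A small sketch of the bounded-queue case described above (the sizes are illustrative): with corePoolSize 1, maximumPoolSize 10 and a queue capacity of 5, threads 2 through 10 are only created once all 5 queue slots are full, and once all 10 threads are busy and the queue is full, further submissions are rejected.

ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 10, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(5));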
Question 2: Would you recommend having corePoolSize and maximumPoolSize be the same value when creating a ThreadPoolExecutor?
Yes. That way you get the same number of threads regardless of which kind of blocking queue you use; even with a direct-handoff queue you still get the maximum number of threads.
Question 3: ConcurrentLinkedQueue vs LinkedBlockingQueue
You cannot use a ConcurrentLinkedQueue here, since ThreadPoolExecutor only accepts a BlockingQueue. LinkedBlockingQueue is the usual choice; note that it is only bounded if you give it a capacity (by default its capacity is Integer.MAX_VALUE, i.e. effectively unbounded).
Well, that should not happen. The reason the TPE puts requests (Runnables) into the queue is so that it does not consume too much memory, while still guaranteeing execution of all the Runnables.
The purpose of the thread pool executor is to keep memory usage under control.
If you want the pool to grow to its maximum size before tasks are added to the queue, then I think you would have to extend ThreadPoolExecutor.
I'm new to ScheduledThreadPoolExecutor (as I usually use the simple Timer, but people have been advising against it), and I don't quite understand what would be the appropriate integer value to pass to the ScheduledThreadPoolExecutor(int) constructor.
Could anyone explain this?
Thank you
In the case of ScheduledThreadPoolExecutor, corePoolSize is the maximum number of threads that will be created to perform scheduled actions.
This thread pool is fixed-size and idle threads are kept alive.
DrunkenRabbit's answer is simply invalid, because the ScheduledThreadPoolExecutor docs say explicitly (there will be no thread-count spikes at all):
While this class inherits from ThreadPoolExecutor, a few of the inherited tuning methods are not useful for it. In particular, because it acts as a fixed-sized pool using corePoolSize threads and an unbounded queue, adjustments to maximumPoolSize have no useful effect.
Now, as for the value, a reasonable number would be the number of CPU cores the application is running on.
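A minimal sketch along those lines (the pool-size choice is only a suggestion):

int cores = Runtime.getRuntime().availableProcessors();
ScheduledExecutorService scheduler = new ScheduledThreadPoolExecutor(cores);
scheduler.scheduleAtFixedRate(
        () -> System.out.println("tick on " + Thread.currentThread().getName()),
        0, 1, TimeUnit.SECONDS); // start immediately, then run every second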
Essentially, the corePoolSize is the number of Threads to maintain in the pool.
e.g. if you expect 10 concurrent requests on a regular basis but peaks of 20, the corePoolSize should be 10 and the max 20. This way the executor will create up to 10 new threads, even if idle threads are available for use.
As stated in the JavaDocs
Core and maximum pool sizes
A ThreadPoolExecutor will automatically adjust the pool size (see getPoolSize()) according to the bounds set by corePoolSize (see getCorePoolSize()) and maximumPoolSize (see getMaximumPoolSize()). When a new task is submitted in method execute(java.lang.Runnable), and fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle. If there are more than corePoolSize but less than maximumPoolSize threads running, a new thread will be created only if the queue is full. By setting corePoolSize and maximumPoolSize the same, you create a fixed-size thread pool. By setting maximumPoolSize to an essentially unbounded value such as Integer.MAX_VALUE, you allow the pool to accommodate an arbitrary number of concurrent tasks. Most typically, core and maximum pool sizes are set only upon construction, but they may also be changed dynamically using setCorePoolSize(int) and setMaximumPoolSize(int).