I thought that with a ThreadPoolExecutor we can submit Runnables to be executed either via the BlockingQueue passed to the constructor or via the execute method.
My understanding was also that if a task is available, it will be executed.
What I don't understand is the following:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MyThreadPoolExecutor {

    private static ThreadPoolExecutor executor;

    public MyThreadPoolExecutor(int min, int max, int idleTime, BlockingQueue<Runnable> queue) {
        executor = new ThreadPoolExecutor(min, max, idleTime, TimeUnit.MINUTES, queue);
        //executor.prestartAllCoreThreads();
    }

    public static void main(String[] main) {
        BlockingQueue<Runnable> q = new LinkedBlockingQueue<Runnable>();
        final String[] names = {"A", "B", "C", "D", "E", "F"};
        for (int i = 0; i < names.length; i++) {
            final int j = i;
            q.add(new Runnable() {
                @Override
                public void run() {
                    System.out.println("Hi " + names[j]);
                }
            });
        }

        new MyThreadPoolExecutor(10, 20, 1, q);

        try {
            TimeUnit.SECONDS.sleep(5);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        /*executor.execute(new Runnable() {
            @Override
            public void run() {
                System.out.println("++++++++++++++");
            }
        });*/

        for (int i = 0; i < 100; i++) {
            final int j = i;
            q.add(new Runnable() {
                @Override
                public void run() {
                    System.out.println("Hi " + j);
                }
            });
        }
    }
}
This code does absolutely nothing unless I either uncomment executor.prestartAllCoreThreads(); in the constructor, OR I call execute with the Runnable that prints System.out.println("++++++++++++++"); (it is also commented out).
Why?
Quote (my emphasis):
By default, even core threads are initially created and started only
when new tasks arrive, but this can be overridden dynamically using
method prestartCoreThread() or prestartAllCoreThreads(). You probably
want to prestart threads if you construct the pool with a non-empty
queue.
Ok. So my queue is not empty. But I create the executor, I sleep, and then I add new Runnables to the queue (in the loop to 100).
Doesn't this loop count as "new tasks arrive"?
Why doesn't it work, and why do I have to either prestart the threads or explicitly call execute?
Worker threads are spawned as tasks arrive via execute, and those workers are the ones that interact with the underlying work queue. You need to prestart the workers if you begin with a non-empty work queue. See the implementation in OpenJDK 7.
I repeat: the workers are the ones that interact with the work queue. They are only spawned on demand, when tasks are passed in via execute (or the layers above it, e.g. invokeAll, submit, etc.). If they are not started, it will not matter how much work you add to the queue, since there is nothing checking it: no workers have been started.
ThreadPoolExecutor does not spawn worker threads until necessary or if you pre-empt their creation by the methods prestartAllCoreThreads and prestartCoreThread. If there are no workers started, then there is no way any of the work in your queue is going to be done.
The reason adding an initial execute works is that it forces the creation of a sole core worker thread, which then can begin processing the work from your queue. You could also call prestartCoreThread and receive similar behavior. If you want to start all the workers, you must call prestartAllCoreThreads or submit that number of tasks via execute.
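For the example in the question, a minimal sketch of the fix (reusing the names from the code above, with illustrative pool sizes) looks like this:

BlockingQueue<Runnable> q = new LinkedBlockingQueue<Runnable>();
q.add(() -> System.out.println("queued before any worker exists"));

ThreadPoolExecutor executor =
        new ThreadPoolExecutor(10, 20, 10, TimeUnit.MINUTES, q);
// Without this call no worker exists yet, so nothing drains the queue.
executor.prestartAllCoreThreads();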
See the code for execute below.
/**
 * Executes the given task sometime in the future. The task
 * may execute in a new thread or in an existing pooled thread.
 *
 * If the task cannot be submitted for execution, either because this
 * executor has been shutdown or because its capacity has been reached,
 * the task is handled by the current {@code RejectedExecutionHandler}.
 *
 * @param command the task to execute
 * @throws RejectedExecutionException at discretion of
 *         {@code RejectedExecutionHandler}, if the task
 *         cannot be accepted for execution
 * @throws NullPointerException if {@code command} is null
 */
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    /*
     * Proceed in 3 steps:
     *
     * 1. If fewer than corePoolSize threads are running, try to
     * start a new thread with the given command as its first
     * task. The call to addWorker atomically checks runState and
     * workerCount, and so prevents false alarms that would add
     * threads when it shouldn't, by returning false.
     *
     * 2. If a task can be successfully queued, then we still need
     * to double-check whether we should have added a thread
     * (because existing ones died since last checking) or that
     * the pool shut down since entry into this method. So we
     * recheck state and if necessary roll back the enqueuing if
     * stopped, or start a new thread if there are none.
     *
     * 3. If we cannot queue task, then we try to add a new
     * thread. If it fails, we know we are shut down or saturated
     * and so reject the task.
     */
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    else if (!addWorker(command, false))
        reject(command);
}
A BlockingQueue is not a magic thread dispatcher. If you submit Runnable objects to the queue and there are no running threads to consume those tasks, they of course will not be executed. The execute method on the other hand will automatically dispatch threads according to the thread pool configuration if it needs to. If you pre-start all of the core threads, there will be threads there to consume tasks from the queue.
Related
I have a ThreadPoolExecutor created as follows:
ThreadPoolExecutor executor = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());
The tasks are executed as follows
executor.execute(task)
Now each task may also submit more tasks to the same executor, and those new tasks can submit more tasks.
The problem is that I want the main thread to wait until all tasks are executed and then call shutdown.
Is the following approach guaranteed to work (i.e. block the main thread until all tasks are completed)?
while (executor.getCompletedTaskCount() < executor.getTaskCount()) {
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        LOGGER.error("Exception in atomic Count wait thread sleep", e);
        break;
    }
}
Will this eventually break out of the loop? From preliminary testing, I found that it works even when tasks throw exceptions.
P.S.
I cannot use a latch, because I don't know the number of tasks beforehand,
nor can I use the accepted answer here.
You should probably keep the futures that get submitted.
Deque<Future<?>> futures = new ConcurrentLinkedDeque<>();
Then every time you submit a task:

futures.add(executor.submit(runnable, "Doesn't Really Matter, but Can be Useful"));

Then in your main thread that is waiting:

while (futures.size() > 0) {
    futures.pop().get();
}
This guarantees that get() will not return until the task has finished, and if more tasks are added by another task, futures will reflect the change before the original task completes.
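A self-contained sketch of this approach, assuming every producer (including the tasks themselves) goes through one submit helper; the helper and class names are illustrative, not from the original answer:

import java.util.concurrent.*;

public class WaitForAllTasks {

    static final ExecutorService executor = new ThreadPoolExecutor(
            0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());

    // Every submission is recorded here so the main thread can drain it.
    static final ConcurrentLinkedDeque<Future<?>> futures = new ConcurrentLinkedDeque<>();

    static void submit(Runnable task) {
        futures.add(executor.submit(task));
    }

    public static void main(String[] args) throws Exception {
        submit(() -> {
            System.out.println("parent task");
            // The child is registered in futures before the parent's run()
            // returns, hence before the parent's Future completes.
            submit(() -> System.out.println("child task"));
        });
        // get() blocks until each task finishes; tasks added in the meantime
        // are already visible in the deque.
        while (!futures.isEmpty()) {
            futures.pop().get();
        }
        executor.shutdown();
    }
}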
In my opinion, getting an exact count of tasks is non-deterministic, because while tasks are being submitted the execute method is called and one of the three conditions below may occur:
1. The task starts executing (it is added to the Workers).
2. The task is enqueued (added to the WorkQueue).
3. The task is rejected because the work queue capacity, the worker capacity, and resources are exhausted.
(See the execute implementation quoted earlier in this section.)
The getTaskCount() and getCompletedTaskCount() methods are guarded by mainLock, so we do not know whether internal threads that are still submitting tasks to the executor will be done by the time the check in main (while (executor.getCompletedTaskCount() < executor.getTaskCount())) executes. The condition may momentarily yield a false positive, ending in a wrong result.
/**
 * Returns the approximate total number of tasks that have ever been
 * scheduled for execution. Because the states of tasks and
 * threads may change dynamically during computation, the returned
 * value is only an approximation.
 *
 * @return the number of tasks
 */
public long getTaskCount() {
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        long n = completedTaskCount;
        for (Worker w : workers) {
            n += w.completedTasks;
            if (w.isLocked())
                ++n;
        }
        return n + workQueue.size();
    } finally {
        mainLock.unlock();
    }
}

/**
 * Returns the approximate total number of tasks that have
 * completed execution. Because the states of tasks and threads
 * may change dynamically during computation, the returned value
 * is only an approximation, but one that does not ever decrease
 * across successive calls.
 *
 * @return the number of tasks
 */
public long getCompletedTaskCount() {
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        long n = completedTaskCount;
        for (Worker w : workers)
            n += w.completedTasks;
        return n;
    } finally {
        mainLock.unlock();
    }
}
Code snippets used here are from JDK 1.8.0_222.
The methods used to get the completed count and the submitted count, i.e. executor.getCompletedTaskCount() and executor.getTaskCount(), do not always provide a 100% accurate count (as per the Java 8 docs), so this approach may not always work:
public long getTaskCount()
Returns the approximate total number of tasks that have ever been
scheduled for execution. Because the states of tasks and threads may
change dynamically during computation, the returned value is only an
approximation.
public long getCompletedTaskCount()
Returns the approximate total number of tasks that have completed
execution. Because the states of tasks and threads may change
dynamically during computation, the returned value is only an
approximation, but one that does not ever decrease across successive
calls.
I have a servlet in AEM (Adobe Experience Manager) that creates a new thread per request. I use an Apache Sling thread pool to manage the threads. Everything is fine, but if the servlet receives hundreds of requests within a few seconds, no new thread is created and the thread pool becomes useless; I have to restart the AEM instance to recover it. The thread pool uses an unbounded queue with a core pool size of ten. The configuration is the following:
ThreadPool configuration
Debugging the java.util.concurrent.ThreadPoolExecutor class (its execute implementation is the same one quoted earlier in this section), my code enters the third "if" but then enters neither the if branch nor the else branch, so the thread is not created.
I have read the documentation but do not understand why this occurs.
I am using a ThreadPoolExecutor to manage the number of threads. I can catch the event of the creation of a new Thread for the pool via ThreadFactory.newThread(), but I do not know how to catch the event of a Thread being killed after it has stayed idle for two minutes, as in the configuration below. I have searched for a listener method but could not find one.
public abstract class ThreadPoolEventProcessor<E> implements ThreadFactory {

    private BlockingQueue<Runnable> taskQueue;
    private ThreadPoolExecutor executor;

    protected ThreadPoolEventProcessor(int coreThreadSize, int maxQueueSize) {
        taskQueue = new LinkedBlockingQueue<Runnable>(maxQueueSize);
        executor = new ThreadPoolExecutor(coreThreadSize, coreThreadSize * 5, 2L,
                TimeUnit.MINUTES, taskQueue, this);
        executor.prestartAllCoreThreads();
    }

    public Thread newThread(Runnable r) {
        return new Thread(r, getWorkerName());
    }
}
The ThreadPoolExecutor does not kill threads. It will retrieve new threads from the ThreadFactory and have them run a Worker. All this worker does is loop, attempting to retrieve a Runnable from an underlying BlockingQueue.
If it gets one, it invokes run on it.
If allowCoreThreadTimeOut is true, or there are more workers than the core amount, the keepAliveTime value is used to poll that underlying BlockingQueue. If the poll returns null, then the worker is (potentially) removed. Some extra cleanup happens, and the various method invocations are popped from the stack as the methods return, until eventually the Worker#run() method terminates and the containing thread finishes.
Nowhere in that flow does ThreadPoolExecutor offer any hooks for notifications.
You can poll ThreadPoolExecutor#getPoolSize() and ThreadPoolExecutor#getLargestPoolSize() for information periodically.
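There is no callback to hook into, but a hedged sketch of that polling approach could look like this (the executor field is assumed from the question's class):

// Sample the pool periodically to observe idle workers being reclaimed.
ScheduledExecutorService monitor = Executors.newSingleThreadScheduledExecutor();
monitor.scheduleAtFixedRate(() -> System.out.printf(
        "pool size=%d, largest=%d, active=%d%n",
        executor.getPoolSize(),
        executor.getLargestPoolSize(),
        executor.getActiveCount()),
        0, 30, TimeUnit.SECONDS);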
In the ThreadPoolExecutor class there is a set of Workers, which are runnable classes run by the threads in the pool.
When a worker is done, the workerDone callback executes, and there you see a tryTerminate method being called; that is the method that decides whether to terminate a thread. You should be able to debug at that point. (Note that the snippet below is from an older JDK implementation than the execute code quoted earlier.)
/**
 * Performs bookkeeping for an exiting worker thread.
 * @param w the worker
 */
void workerDone(Worker w) {
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        completedTaskCount += w.completedTasks;
        workers.remove(w);
        if (--poolSize == 0)
            tryTerminate();
    } finally {
        mainLock.unlock();
    }
}

/* Termination support. */

/**
 * Transitions to TERMINATED state if either (SHUTDOWN and pool
 * and queue empty) or (STOP and pool empty), otherwise unless
 * stopped, ensuring that there is at least one live thread to
 * handle queued tasks.
 *
 * This method is called from the three places in which
 * termination can occur: in workerDone on exit of the last thread
 * after pool has been shut down, or directly within calls to
 * shutdown or shutdownNow, if there are no live threads.
 */
private void tryTerminate() {
    if (poolSize == 0) {
        int state = runState;
        if (state < STOP && !workQueue.isEmpty()) {
            state = RUNNING; // disable termination check below
            addThread(null);
        }
        if (state == STOP || state == SHUTDOWN) {
            runState = TERMINATED;
            termination.signalAll();
            terminated();
        }
    }
}
I'm looking for a ThreadPoolExecutor that will block when its task queue is full; the current Java implementation rejects new tasks if the underlying queue is full (the execute implementation is the same one quoted earlier in this section).
Would changing this line:

if (isRunning(c) && workQueue.offer(command)) {

to

if (isRunning(c) && workQueue.put(command)) {

do the trick? Am I missing something?
SOLUTION (might help the next person):
public class BlockingThreadPoolExecutor extends ThreadPoolExecutor {

    private final Semaphore runLock;

    public BlockingThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
            long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, int maxTasks) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
        runLock = new Semaphore(maxTasks);
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        runLock.acquireUninterruptibly();
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        runLock.release();
    }
}
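Usage might look like the following sketch (parameter values are illustrative). Note that because beforeExecute runs on the worker thread, this bounds the number of tasks that workers will start concurrently rather than blocking the submitting thread itself:

BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
ThreadPoolExecutor executor = new BlockingThreadPoolExecutor(
        4, 8, 60L, TimeUnit.SECONDS, queue, 100 /* maxTasks */);
executor.execute(() -> System.out.println("task"));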
It depends on the ThreadPoolExecutor's state and settings, because not all task submissions pass through the BlockingQueue. Usually you just want to change the RejectedExecutionHandler of the ThreadPoolExecutor to ThreadPoolExecutor.CallerRunsPolicy, which will throttle submissions. If you really want to block on submit, you should use a CompletionService and call its take method when you want to block. You can also create a subclass and use a Semaphore to block the execute method. See JDK-6648211: Need for blocking ThreadPoolExecutor for more information.
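A minimal sketch of the CallerRunsPolicy approach (pool sizes and the queue bound are illustrative):

// When the bounded queue is full, the submitting thread runs the task itself,
// which throttles producers without rejecting work.
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        4, 8, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(100),
        new ThreadPoolExecutor.CallerRunsPolicy());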
I'm working with a java.util.concurrent.ThreadPoolExecutor to process a number of items in parallel. Although the threading itself works fine, at times we've run into other resource constraints due to actions happening in the threads, which made us want to dial down the number of Threads in the pool.
I'd like to know if there's a way to dial down the number of threads while they are actually working. I know that you can call setMaximumPoolSize() and/or setCorePoolSize(), but these only resize the pool once threads become idle, and they don't become idle until there are no tasks waiting in the queue.
You absolutely can. Calling setCorePoolSize(int) will change the core size of the pool. Calls to this method are thread-safe and override settings provided to the constructor of ThreadPoolExecutor. If you are trimming the pool size, the remaining threads will shut down once their current job queue is completed (if they are idle, they will shut down immediately). If you are increasing the pool size, new threads will be allocated as soon as possible. The timeframe for the allocation of new threads is undocumented; in the implementation, allocation of new threads is performed upon each call to the execute method.
To pair this with a runtime-tunable job-farm, you can expose this property (either by wrapper or using a dynamic MBean exporter) as a read-write JMX attribute to create a rather nice, on-the-fly tunable batch processor.
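A hedged sketch of that JMX exposure (the MBean interface, class, and object name are illustrative, not from the original answer; each type would live in its own file):

// Standard MBean convention: the interface name is the class name + "MBean".
public interface PoolSizerMBean {
    int getCorePoolSize();
    void setCorePoolSize(int size);
}

public class PoolSizer implements PoolSizerMBean {
    private final ThreadPoolExecutor executor;

    public PoolSizer(ThreadPoolExecutor executor) {
        this.executor = executor;
    }

    public int getCorePoolSize() {
        return executor.getCorePoolSize();
    }

    public void setCorePoolSize(int size) {
        executor.setCorePoolSize(size); // thread-safe, takes effect on the fly
    }
}

// Registration, e.g. at application startup:
// ManagementFactory.getPlatformMBeanServer().registerMBean(
//         new PoolSizer(executor), new ObjectName("app:type=PoolSizer"));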
To reduce the pool size forcibly at runtime (which is your request), you must subclass the ThreadPoolExecutor and add a disruption to the beforeExecute(Thread, Runnable) method. Interrupting the thread is not a sufficient disruption, since that only interacts with wait states, and during processing the ThreadPoolExecutor task threads do not enter an interruptible state.
I recently had the same problem, trying to get a thread pool to forcibly terminate before all submitted tasks are executed. To make this happen, I interrupted the thread by throwing a runtime exception, but only after replacing the UncaughtExceptionHandler of the thread with one that expects my specific exception and discards it.
/**
 * A runtime exception used to prematurely terminate threads in this pool.
 */
static class ShutdownException extends RuntimeException {
    ShutdownException(String message) {
        super(message);
    }
}

/**
 * This uncaught exception handler is used only as threads are entered into
 * their shutdown state.
 */
static class ShutdownHandler implements UncaughtExceptionHandler {

    private UncaughtExceptionHandler handler;

    /**
     * Create a new shutdown handler.
     *
     * @param handler The original handler to delegate non-shutdown
     *                exceptions to.
     */
    ShutdownHandler(UncaughtExceptionHandler handler) {
        this.handler = handler;
    }

    /**
     * Quietly ignore {@link ShutdownException}.
     * <p>
     * Do nothing if this is a ShutdownException; this is just to prevent
     * logging an uncaught exception which is expected. Otherwise forward
     * it to the thread group handler (which may hand it off to the default
     * uncaught exception handler).
     * </p>
     */
    public void uncaughtException(Thread thread, Throwable throwable) {
        if (!(throwable instanceof ShutdownException)) {
            /* Use the original exception handler if one is available,
             * otherwise use the group exception handler.
             */
            if (handler != null) {
                handler.uncaughtException(thread, throwable);
            }
        }
    }
}
/**
 * Configure the given job as a spring bean.
 *
 * <p>Given a runnable task, configure it as a prototype spring bean,
 * injecting any necessary dependencies.</p>
 *
 * @param thread The thread the task will be executed in.
 * @param job The job to configure.
 *
 * @throws IllegalStateException if any error occurs.
 */
protected void beforeExecute(final Thread thread, final Runnable job) {
    /* If we're in shutdown, it's because spring is in singleton shutdown
     * mode. This means we must not attempt to configure the bean, but
     * rather we must exit immediately (prematurely, even).
     */
    if (!this.isShutdown()) {
        if (factory == null) {
            throw new IllegalStateException(
                "This class must be instantiated by spring"
            );
        }
        factory.configureBean(job, job.getClass().getName());
    }
    else {
        /* If we are in shutdown mode, replace the job on the queue so the
         * next process will see it and it won't get dropped. Further,
         * interrupt this thread so it will no longer process jobs. This
         * deviates from the existing behavior of shutdown().
         */
        workQueue.add(job);
        thread.setUncaughtExceptionHandler(
            new ShutdownHandler(thread.getUncaughtExceptionHandler())
        );
        /* Throwing a runtime exception is the only way to prematurely
         * cause a worker thread from the ThreadPoolExecutor to exit.
         */
        throw new ShutdownException("Terminating thread");
    }
}
In your case, you may want to create a Semaphore (used purely as a thread-safe counter) with no permits, and when shutting threads down, release to it a number of permits corresponding to the delta between the previous core pool size and the new pool size (requiring you to override the setCorePoolSize(int) method). This will allow you to terminate your threads after their current tasks complete.
private Semaphore terminations = new Semaphore(0);

protected void beforeExecute(final Thread thread, final Runnable job) {
    if (terminations.tryAcquire()) {
        /* Replace this item in the queue so it may be executed by another
         * thread
         */
        queue.add(job);
        thread.setUncaughtExceptionHandler(
            new ShutdownHandler(thread.getUncaughtExceptionHandler())
        );
        /* Throwing a runtime exception is the only way to prematurely
         * cause a worker thread from the ThreadPoolExecutor to exit.
         */
        throw new ShutdownException("Terminating thread");
    }
}

public void setCorePoolSize(final int size) {
    int delta = getActiveCount() - size;
    super.setCorePoolSize(size);
    if (delta > 0) {
        terminations.release(delta);
    }
}
This should interrupt n threads for f(n) = active - requested. If there is any problem, ThreadPoolExecutor's allocation strategy is fairly durable: it bookkeeps premature termination using a finally block, which guarantees execution. For this reason, even if you terminate too many threads, they will repopulate.
As far as I can tell, this is not possible in a nice clean way.
You can implement the beforeExecute method to check some boolean value and force threads to halt temporarily. Keep in mind, they will contain a task which will not be executed until they are re-enabled.
Alternatively, you can implement afterExecute to throw a RuntimeException when you are saturated. This will effectively cause the Thread to die and since the Executor will be above the max, no new one would be created.
I don't recommend you do either. Instead, try to find some other way of controlling concurrent execution of the tasks which are causing you a problem. Possibly by executing them in a separate thread pool with a more limited number of workers.
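A minimal sketch of that last suggestion (pool sizes are illustrative): cap the problematic work with its own small pool instead of resizing the main one.

ExecutorService mainPool  = Executors.newFixedThreadPool(16);
ExecutorService heavyPool = Executors.newFixedThreadPool(2); // the real constraint

mainPool.execute(() -> {
    // ... normal work ...
    heavyPool.execute(() -> {
        // ... work that hits the constrained resource ...
    });
});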
The solution is to drain the ThreadPoolExecutor queue, set the ThreadPoolExecutor size as needed, and then add the drained tasks back, one by one, as the others end.
The method to drain the queue in the ThreadPoolExecutor class is private, so you have to create it yourself. Here is the code:
/**
 * Drains the task queue into a new list. Used by shutdownNow.
 * Call only while holding main lock.
 */
public static List<Runnable> drainQueue() {
    List<Runnable> taskList = new ArrayList<Runnable>();
    BlockingQueue<Runnable> workQueue = executor.getQueue();
    workQueue.drainTo(taskList);
    /*
     * If the queue is a DelayQueue or any other kind of queue
     * for which poll or drainTo may fail to remove some elements,
     * we need to manually traverse and remove remaining tasks.
     * To guarantee atomicity wrt other threads using this queue,
     * we need to create a new iterator for each element removed.
     */
    while (!workQueue.isEmpty()) {
        Iterator<Runnable> it = workQueue.iterator();
        try {
            if (it.hasNext()) {
                Runnable r = it.next();
                if (workQueue.remove(r))
                    taskList.add(r);
            }
        } catch (ConcurrentModificationException ignore) {
        }
    }
    return taskList;
}
Before calling this method you need to acquire the main lock, and release it afterwards.
To do this you need to use Java reflection, because the "mainLock" field is private.
Again, here is the code:
private Field getMainLock() throws NoSuchFieldException {
    Field mainLock = executor.getClass().getDeclaredField("mainLock");
    mainLock.setAccessible(true);
    return mainLock;
}
Where "executor" is your ThreadPoolExecutor.
Now you need lock/unlock methods:
public void lock() {
    try {
        Field mainLock = getMainLock();
        Method lock = mainLock.getType().getDeclaredMethod("lock", (Class[]) null);
        lock.invoke(mainLock.get(executor), (Object[]) null);
    } catch (Exception e) {
        // ...
    }
}

public void unlock() {
    try {
        Field mainLock = getMainLock();
        mainLock.setAccessible(true);
        Method unlock = mainLock.getType().getDeclaredMethod("unlock", (Class[]) null);
        unlock.invoke(mainLock.get(executor), (Object[]) null);
    } catch (Exception e) {
        // ...
    }
}
Finally, you can write your "setThreadsNumber" method; it works both when increasing and when decreasing the ThreadPoolExecutor size:
public void setThreadsNumber(int intValue) {
    boolean increasing = intValue > executor.getPoolSize();
    executor.setCorePoolSize(intValue);
    executor.setMaximumPoolSize(intValue);
    if (increasing) {
        if (drainedQueue != null && (drainedQueue.size() > 0)) {
            executor.submit(drainedQueue.remove(0));
        }
    } else {
        if (drainedQueue == null) {
            lock();
            drainedQueue = drainQueue();
            unlock();
        }
    }
}
Note: obviously, if you are executing N parallel threads and you change this number to N-1, all N threads will continue to run. When the first thread ends, no new thread will be started. From then on, the number of parallel threads will be the one you have chosen.
I was in need of the same solution too, and it seems that in JDK 8 setCorePoolSize() and setMaximumPoolSize() do indeed produce the desired result.
I made a test case where I submit 4 tasks to the pool and they execute concurrently; I shrink the pool size while they are running and submit another runnable that I want to run alone. Then I restore the pool back to its original size. Here is the test source: https://gist.github.com/southerton81/96e141b8feede3fe0b8f88f679bef381
It produces the following output (thread "50" is the one that should be executed in isolation):
run:
test thread 2 enter
test thread 1 enter
test thread 3 enter
test thread 4 enter
test thread 1 exit
test thread 2 exit
test thread 3 exit
test thread 4 exit
test thread 50 enter
test thread 50 exit
test thread 1 enter
test thread 2 enter
test thread 3 enter
test thread 4 enter
test thread 1 exit
test thread 2 exit
test thread 3 exit
test thread 4 exit