Detect when idle ThreadPoolExecutor thread is killed - java

I am using ThreadPoolExecutor to manage the number of threads. I can catch the event of a new Thread being created for the pool via ThreadFactory#newThread(), but I do not know how to catch the event of a Thread being killed after it stays idle for 2 minutes, as in the following configuration.
I have searched for a listener method but could not find one.
public abstract class ThreadPoolEventProcessor<E> implements ThreadFactory {

    private BlockingQueue<Runnable> taskQueue;
    private ThreadPoolExecutor executor;

    protected ThreadPoolEventProcessor(int coreThreadSize, int maxQueueSize) {
        taskQueue = new LinkedBlockingQueue<Runnable>(maxQueueSize);
        executor = new ThreadPoolExecutor(coreThreadSize, coreThreadSize * 5,
                2L, TimeUnit.MINUTES, taskQueue, this);
        executor.prestartAllCoreThreads();
    }

    public Thread newThread(Runnable r) {
        return new Thread(r, getWorkerName());
    }

    protected abstract String getWorkerName();
}

The ThreadPoolExecutor does not kill threads. It retrieves new threads from the ThreadFactory and has them run a Worker. All this worker does is loop, attempting to retrieve a Runnable from the underlying BlockingQueue.
If it gets one, it invokes run on it.
If allowCoreThreadTimeOut is true, or there are more workers than the core amount, the keepAliveTime value is used to poll that underlying BlockingQueue. If the poll returns null, the worker is (potentially) removed. Some extra cleanup happens, the various method invocations are popped from the stack as the methods return, and eventually the Worker#run() method terminates and the containing thread finishes.
Nowhere in that flow does ThreadPoolExecutor offer any hooks for notifications.
You can poll ThreadPoolExecutor#getPoolSize() and ThreadPoolExecutor#getLargestPoolSize() for information periodically.
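If you need an explicit signal when a pooled thread exits, one workaround is to have your ThreadFactory wrap the Runnable it is given, so that your own code runs when the worker loop returns. A minimal sketch, reusing the question's getWorkerName() (the logging call is just a placeholder):

public Thread newThread(final Runnable r) {
    Runnable wrapped = new Runnable() {
        public void run() {
            try {
                r.run(); // the pool's Worker loop runs inside here
            } finally {
                // reached when the worker exits, e.g. after the 2-minute idle timeout
                System.out.println(Thread.currentThread().getName() + " terminated");
            }
        }
    };
    return new Thread(wrapped, getWorkerName());
}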

In the ThreadPoolExecutor class there is a set of Workers, which are runnable classes run by the threads in the pool.
When a worker is done, the workerDone callback executes, and there you see a tryTerminate method being called. That is the method that decides whether to terminate the thread. You should be able to debug at that point.
/**
 * Performs bookkeeping for an exiting worker thread.
 * @param w the worker
 */
void workerDone(Worker w) {
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        completedTaskCount += w.completedTasks;
        workers.remove(w);
        if (--poolSize == 0)
            tryTerminate();
    } finally {
        mainLock.unlock();
    }
}
/* Termination support. */

/**
 * Transitions to TERMINATED state if either (SHUTDOWN and pool
 * and queue empty) or (STOP and pool empty), otherwise unless
 * stopped, ensuring that there is at least one live thread to
 * handle queued tasks.
 *
 * This method is called from the three places in which
 * termination can occur: in workerDone on exit of the last thread
 * after pool has been shut down, or directly within calls to
 * shutdown or shutdownNow, if there are no live threads.
 */
private void tryTerminate() {
    if (poolSize == 0) {
        int state = runState;
        if (state < STOP && !workQueue.isEmpty()) {
            state = RUNNING; // disable termination check below
            addThread(null);
        }
        if (state == STOP || state == SHUTDOWN) {
            runState = TERMINATED;
            termination.signalAll();
            terminated();
        }
    }
}

Related

Free all waiting threads

I'm writing a class that simulates a barrier point. When a thread reaches this barrier point it cannot proceed until the other threads have also reached it. I am using a counter to keep track of the number of threads that have arrived. Assume the class is expecting N+1 threads but is only given N. In this case the program will keep all the threads waiting because it thinks one more thread has yet to arrive.
I want to write a method that will free all of the waiting threads, regardless of whether the program thinks there are still more threads to arrive at the barrier point.
My program to wait for all threads:
public volatile int count;
public static boolean cycle = false;
public static Lock lock = new ReentrantLock();
public static Condition cv = lock.newCondition();

public void barrier() throws InterruptedException {
    boolean cycle;
    System.out.println("lock");
    lock.lock();
    try {
        cycle = this.cycle;
        if (--this.count == 0) {
            System.out.println("releasing all threads");
            this.cycle = !this.cycle;
            cv.signalAll();
        } else {
            while (cycle == this.cycle) {
                System.out.println("waiting at barrier");
                cv.await(); // Line 20
            }
        }
    } finally {
        System.out.println("unlock");
        lock.unlock();
    }
}
I was thinking I could simply create a method that calls signalAll() and all the threads would be freed. However, the problem I am having is that if the program is expecting more threads, it will hold a lock because it will be waiting at line 20.
Is there a way to get around this lock? How should I approach this problem?
A better idea: use the standard java.util.concurrent primitive CyclicBarrier, which has a reset() method:
/**
 * Resets the barrier to its initial state. If any parties are
 * currently waiting at the barrier, they will return with a
 * {@link BrokenBarrierException}. Note that resets <em>after</em>
 * a breakage has occurred for other reasons can be complicated to
 * carry out; threads need to re-synchronize in some other way,
 * and choose one to perform the reset. It may be preferable to
 * instead create a new barrier for subsequent use.
 */
public void reset()
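A minimal sketch of how reset() frees stuck waiters; the party count and the sleep are arbitrary choices for the demo:

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierResetDemo {
    public static void main(String[] args) throws InterruptedException {
        // The barrier expects 3 parties but only 2 threads will arrive,
        // so without intervention both would wait forever.
        final CyclicBarrier barrier = new CyclicBarrier(3);

        for (int i = 0; i < 2; i++) {
            final int id = i;
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        System.out.println("thread " + id + " waiting at barrier");
                        barrier.await();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } catch (BrokenBarrierException e) {
                        // reset() lands the waiters here: they are freed
                        System.out.println("thread " + id + " released");
                    }
                }
            }).start();
        }

        Thread.sleep(500); // crude: give both threads time to reach the barrier
        barrier.reset();   // frees the waiters with BrokenBarrierException
    }
}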

Semaphore-implemented Producer-Consumer oriented thread pool

I am currently working on an educational assignment in which I have to implement a thread-safe thread pool using semaphores only.
During the assignment I must not use synchronized, wait, notify, sleep, or any thread-safe APIs.
Firstly, without getting too much into the code, I have:
Implemented a thread-safe queue (no two threads can enqueue/dequeue at the same time). (I have also tested with ConcurrentLinkedQueue and the problem persists.)
The design itself:
Shared:
Tasks semaphore = 0
Available semaphore = 0
Tasks_Queue queue
Available_Queue queue
Worker Threads:
Blocked semaphore = 0
General Info:
Only the manager (single thread) can dequeue Tasks_Queue and Available_Queue
Only App-Main (single thread) can enqueue tasks in Tasks_Queue
Each worker thread can enqueue itself in Available_Queue
So we have a mix of a single producer, a single manager and several consumers.
When the app first starts, each worker thread starts, immediately enqueues itself in Available_Queue, releases the Available semaphore, and blocks acquiring its personal Blocked semaphore.
Whenever App-Main queues a new task it releases the Tasks semaphore.
Whenever the Manager wishes to execute a new task it must first acquire both the Tasks and Available semaphores.
My question:
During the app's runtime the function dequeue_worker() returns a null worker, even though a semaphore is placed to protect access to the queue when it is known that there are no available worker threads.
I have "solved" the problem by recursively calling dequeue_worker() if it draws a null worker, BUT doing so is supposed to make the acquisition of a semaphore permit lost forever. Yet when I limit the number of workers to 1, the worker does not get blocked forever.
1) What is the breaking point of my original design?
2) Why doesn't my "solution" break the design even further?
Code snippets:
// only gets called by Worker threads: enqueue_worker(this);
private void enqueue_worker(Worker worker) {
    available_queue.add(worker);
    available.release();
}

// only gets called by App-Main (a single thread)
public void enqueue_task(Query query) {
    tasks_queue.add(query);
    tasks.release();
}

// only gets called by Manager (a single thread)
private Worker dequeue_worker() {
    Worker worker = null;
    try {
        available.acquire();
        worker = available_queue.poll();
    } catch (InterruptedException e) {
        // shouldn't happen
    } // **** the solution: ****
    if (worker == null) worker = dequeue_worker(); // TODO: find out why
    return worker;
}

// only gets called by Manager (a single thread)
private Query dequeue_task() {
    Query query = null;
    try {
        tasks.acquire();
        query = tasks_queue.poll();
    } catch (InterruptedException e) {
        // shouldn't happen
    }
    return query;
}

// gets called by Manager (a single thread)
private void execute() { // check if a task is available and execute it
    Worker worker = dequeue_worker(); // available.down()
    Query query = dequeue_task();     // tasks.down()
    worker.setData(query);
    worker.blocked.release();
}
And finally Worker's Run() method:
while (true) { // main infinite loop
    enqueue_worker(this);
    acquire(); // blocked.acquire();
    // <C.S> (critical section)
    available.release();
}
You are calling available.release() twice: once in enqueue_worker and a second time in the main loop. The extra release makes the semaphore's count exceed the number of workers actually sitting in Available_Queue, so available.acquire() can succeed while the queue is empty and poll() returns null. Your recursive "fix" merely consumes the surplus permit and acquires again, which is why it appears to work rather than breaking things further.
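A sketch of the corrected worker loop, with the extra release removed so the semaphore's count always matches the number of workers actually in Available_Queue:

while (true) { // main infinite loop
    enqueue_worker(this); // adds this worker to available_queue and releases 'available' once
    acquire();            // blocked.acquire(): wait until the manager assigns us a query
    // <C.S> (critical section: run the assigned query)
    // no available.release() here: enqueue_worker() already signals availability
}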

Cancellation and Interruption in java

In Java Concurrency in Practice there is an explanation of how to use cancellation and interruption in threads. The example is on page 21 of Chapter 7, Cancellation and Shutdown, which states:
Listing 7.3. Unreliable Cancellation that can Leave Producers Stuck in a Blocking Operation. Don't Do this.
Here they tell us that, in order to stop a thread's operation, we can create a volatile flag that the thread checks; depending on the status of that flag, thread execution stops.
There is a program demonstrating this. It works fine there; below is the example:
public class PrimeGenerator implements Runnable {
    @GuardedBy("this")
    private final List<BigInteger> primes = new ArrayList<BigInteger>();
    private volatile boolean cancelled;

    public void run() {
        BigInteger p = BigInteger.ONE;
        while (!cancelled) {
            p = p.nextProbablePrime();
            synchronized (this) {
                primes.add(p);
            }
        }
    }

    public void cancel() {
        cancelled = true;
    }

    public synchronized List<BigInteger> get() {
        return new ArrayList<BigInteger>(primes);
    }

    List<BigInteger> aSecondOfPrimes() throws InterruptedException {
        PrimeGenerator generator = new PrimeGenerator();
        new Thread(generator).start();
        try {
            SECONDS.sleep(1);
        } finally {
            generator.cancel();
        }
        return generator.get();
    }
}
In the above code, cancelled is the volatile flag which we check for cancellation; thread execution stops if it is true.
But if we do the same operation using a BlockingQueue, there is a problem:
If, however, a task that uses this approach calls a blocking method such as BlockingQueue.put(), we could have a more serious problem: the task might never check the cancellation flag and therefore might never terminate.
BrokenPrimeProducer in below program illustrates this problem. The producer thread generates primes and places them on a blocking queue. If the producer gets ahead of the consumer, the queue will fill up and put() will block. What happens if the consumer tries to cancel the producer task while it is blocked in put()? It can call cancel which will set the cancelled flag but the producer will never check the flag because it will never emerge from the blocking put() (because the consumer has stopped retrieving primes from the queue).
Here is the code for the same:
class BrokenPrimeProducer extends Thread {
    private final BlockingQueue<BigInteger> queue;
    private volatile boolean cancelled = false;

    BrokenPrimeProducer(BlockingQueue<BigInteger> queue) {
        this.queue = queue;
    }

    public void run() {
        try {
            BigInteger p = BigInteger.ONE;
            while (!cancelled) {
                queue.put(p = p.nextProbablePrime());
            }
        } catch (InterruptedException consumed) {
        }
    }

    public void cancel() {
        cancelled = true;
    }

    void consumePrimes() throws InterruptedException {
        BlockingQueue<BigInteger> primes = ...;
        BrokenPrimeProducer producer = new BrokenPrimeProducer(primes);
        producer.start();
        try {
            while (needMorePrimes()) {
                consume(primes.take());
            }
        } finally {
            producer.cancel();
        }
    }
}
I am not able to understand why cancellation will not work with the blocking queue in the second code example. Can someone explain?
This is precisely because BlockingQueue#put(E) will block if it needs to while placing a value into the queue. The code isn't in a position to check the flag again, because it is sitting in a blocked state, so the fact that the flag is set to a different value at some other time is invisible to the currently blocked thread.
The only real way to address the issue is to interrupt the thread, which will end the blocking operation.
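For reference, the book's own fix replaces the flag with interruption (its PrimeProducer listing); a sketch along those lines:

class PrimeProducer extends Thread {
    private final BlockingQueue<BigInteger> queue;

    PrimeProducer(BlockingQueue<BigInteger> queue) {
        this.queue = queue;
    }

    public void run() {
        try {
            BigInteger p = BigInteger.ONE;
            // interruption is observed both by this check and inside the blocking put()
            while (!Thread.currentThread().isInterrupted()) {
                queue.put(p = p.nextProbablePrime());
            }
        } catch (InterruptedException consumed) {
            // allow the thread to exit
        }
    }

    public void cancel() {
        interrupt(); // wakes the thread even if it is blocked in put()
    }
}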
When using a flag to cancel, there's no way to make the thread quit sleeping or waiting if it has already started sleeping or waiting; instead you have to wait for the sleep time to expire or for the wait to be ended by a notification. Blocking means a consumer thread sits in a wait state until something gets enqueued in an empty queue, or a producer thread sits in a wait state until there's room to put something in a full queue. The blocked thread never leaves the wait method; it's as if you had a breakpoint on the line with the sleep or wait, and the thread is frozen on that line until the sleep time expires or until the thread gets a notification (not getting into spurious wakeups). The thread can't get to the line where it checks the flag.
Using interruption signals the thread to wake up if it is waiting or sleeping. You can't do that with a flag.
Cancellation flags need to be checked. Whereas interruption immediately causes a blocked thread to throw InterruptedException, a flag is only noticed on the next iteration of the while loop, that is, once the thread unblocks and continues.
See the problem? The thread won't know that another thread set the flag. It's blocked. It can't reach the next iteration.
When needMorePrimes() returns false, the consumer calls producer.cancel(). At the same time the producer may have filled the BlockingQueue, so it blocks on queue.put(p = p.nextProbablePrime()) and can never check the cancelled status. That is the problem.

ThreadPoolExecutor and the queue

I thought that with ThreadPoolExecutor we can submit Runnables to be executed either by placing them in the BlockingQueue passed to the constructor or by using the execute method.
Also, my understanding was that if a task is available it will be executed.
What I don't understand is the following:
public class MyThreadPoolExecutor {
    private static ThreadPoolExecutor executor;

    public MyThreadPoolExecutor(int min, int max, int idleTime, BlockingQueue<Runnable> queue) {
        executor = new ThreadPoolExecutor(min, max, 10, TimeUnit.MINUTES, queue);
        //executor.prestartAllCoreThreads();
    }

    public static void main(String[] main) {
        BlockingQueue<Runnable> q = new LinkedBlockingQueue<Runnable>();
        final String[] names = {"A", "B", "C", "D", "E", "F"};
        for (int i = 0; i < names.length; i++) {
            final int j = i;
            q.add(new Runnable() {
                @Override
                public void run() {
                    System.out.println("Hi " + names[j]);
                }
            });
        }
        new MyThreadPoolExecutor(10, 20, 1, q);
        try {
            TimeUnit.SECONDS.sleep(5);
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        /*executor.execute(new Runnable() {
            @Override
            public void run() {
                System.out.println("++++++++++++++");
            }
        }); */
        for (int i = 0; i < 100; i++) {
            final int j = i;
            q.add(new Runnable() {
                @Override
                public void run() {
                    System.out.println("Hi " + j);
                }
            });
        }
    }
}
This code does absolutely nothing unless I either uncomment the executor.prestartAllCoreThreads(); in the constructor OR call execute with the Runnable that prints System.out.println("++++++++++++++"); (it is also commented out).
Why?
Quote (my emphasis):
By default, even core threads are initially created and started only when new tasks arrive, but this can be overridden dynamically using method prestartCoreThread() or prestartAllCoreThreads(). You probably want to prestart threads if you construct the pool with a non-empty queue.
OK. So my queue is not empty. But I create the executor, I sleep, and then I add new Runnables to the queue (in the loop to 100).
Doesn't this loop count as "new tasks arrive"?
Why doesn't it work, so that I have to either prestart or explicitly call execute?
Worker threads are spawned as tasks arrive by execute, and these are the ones that interact with the underlying work queue. You need to prestart the workers if you begin with a non-empty work queue. See the implementation in OpenJDK 7.
I repeat, the workers are the ones that interact with the work queue. They are only spawned on demand when passed via execute. (or the layers above it, e.g. invokeAll, submit, etc.) If they are not started, it will not matter how much work you add to the queue, since there is nothing checking it as there are no workers started.
ThreadPoolExecutor does not spawn worker threads until necessary or if you pre-empt their creation by the methods prestartAllCoreThreads and prestartCoreThread. If there are no workers started, then there is no way any of the work in your queue is going to be done.
The reason adding an initial execute works is that it forces the creation of a sole core worker thread, which then can begin processing the work from your queue. You could also call prestartCoreThread and receive similar behavior. If you want to start all the workers, you must call prestartAllCoreThreads or submit that number of tasks via execute.
See the code for execute below.
/**
 * Executes the given task sometime in the future. The task
 * may execute in a new thread or in an existing pooled thread.
 *
 * If the task cannot be submitted for execution, either because this
 * executor has been shutdown or because its capacity has been reached,
 * the task is handled by the current {@code RejectedExecutionHandler}.
 *
 * @param command the task to execute
 * @throws RejectedExecutionException at discretion of
 *         {@code RejectedExecutionHandler}, if the task
 *         cannot be accepted for execution
 * @throws NullPointerException if {@code command} is null
 */
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    /*
     * Proceed in 3 steps:
     *
     * 1. If fewer than corePoolSize threads are running, try to
     * start a new thread with the given command as its first
     * task. The call to addWorker atomically checks runState and
     * workerCount, and so prevents false alarms that would add
     * threads when it shouldn't, by returning false.
     *
     * 2. If a task can be successfully queued, then we still need
     * to double-check whether we should have added a thread
     * (because existing ones died since last checking) or that
     * the pool shut down since entry into this method. So we
     * recheck state and if necessary roll back the enqueuing if
     * stopped, or start a new thread if there are none.
     *
     * 3. If we cannot queue task, then we try to add a new
     * thread. If it fails, we know we are shut down or saturated
     * and so reject the task.
     */
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    else if (!addWorker(command, false))
        reject(command);
}
A BlockingQueue is not a magic thread dispatcher. If you submit Runnable objects to the queue and there are no running threads to consume those tasks, they of course will not be executed. The execute method on the other hand will automatically dispatch threads according to the thread pool configuration if it needs to. If you pre-start all of the core threads, there will be threads there to consume tasks from the queue.
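So, to make the question's example do work without prestarting anything, hand the tasks to the executor rather than to the queue; a minimal sketch:

ThreadPoolExecutor executor = new ThreadPoolExecutor(
        10, 20, 1, TimeUnit.MINUTES, new LinkedBlockingQueue<Runnable>());
for (int i = 0; i < 100; i++) {
    final int j = i;
    executor.execute(new Runnable() { // execute() spawns worker threads on demand
        @Override
        public void run() {
            System.out.println("Hi " + j);
        }
    });
}
executor.shutdown(); // let the workers drain the queue, then exit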

Dynamic resizing of java.util.concurrent.ThreadPoolExecutor while it has waiting tasks

I'm working with a java.util.concurrent.ThreadPoolExecutor to process a number of items in parallel. Although the threading itself works fine, at times we've run into other resource constraints due to actions happening in the threads, which made us want to dial down the number of Threads in the pool.
I'd like to know if there's a way to dial down the number of threads while they are actually working. I know that you can call setMaximumPoolSize() and/or setCorePoolSize(), but these only resize the pool once threads become idle, and they don't become idle until there are no tasks waiting in the queue.
You absolutely can. Calling setCorePoolSize(int) will change the core size of the pool. Calls to this method are thread-safe and override settings provided to the constructor of ThreadPoolExecutor. If you are trimming the pool size, the remaining threads will shut down once their current job completes (if they are idle, they will shut down immediately). If you are increasing the pool size, new threads will be allocated as soon as possible. The timeframe for the allocation of new threads is undocumented; in the implementation, allocation of new threads is performed upon each call to the execute method.
To pair this with a runtime-tunable job-farm, you can expose this property (either by wrapper or using a dynamic MBean exporter) as a read-write JMX attribute to create a rather nice, on-the-fly tunable batch processor.
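A minimal sketch of this cooperative resize; the sizes, timeout, and queue choice are arbitrary:

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        8, 8, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
// ... work is submitted elsewhere ...

// Dial the pool down: idle workers exit immediately; busy workers exit
// after finishing their current task. Set core before max when shrinking
// so the invariant core <= max holds at every step.
pool.setCorePoolSize(2);
pool.setMaximumPoolSize(2);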
To reduce the pool size forcibly at runtime (which is your request), you must subclass ThreadPoolExecutor and add a disruption to the beforeExecute(Thread, Runnable) method. Interrupting the thread is not a sufficient disruption, since that only interacts with wait states, and during processing the ThreadPoolExecutor task threads do not go into an interruptible state.
I recently had the same problem, trying to get a thread pool to forcibly terminate before all submitted tasks were executed. To make this happen, I interrupted the thread by throwing a runtime exception, but only after replacing the thread's UncaughtExceptionHandler with one that expects my specific exception and discards it.
/**
 * A runtime exception used to prematurely terminate threads in this pool.
 */
static class ShutdownException extends RuntimeException {
    ShutdownException(String message) {
        super(message);
    }
}

/**
 * This uncaught exception handler is used only as threads are entered into
 * their shutdown state.
 */
static class ShutdownHandler implements UncaughtExceptionHandler {
    private UncaughtExceptionHandler handler;

    /**
     * Create a new shutdown handler.
     *
     * @param handler The original handler to delegate non-shutdown
     *                exceptions to.
     */
    ShutdownHandler(UncaughtExceptionHandler handler) {
        this.handler = handler;
    }

    /**
     * Quietly ignore {@link ShutdownException}.
     * <p>
     * Do nothing if this is a ShutdownException, this is just to prevent
     * logging an uncaught exception which is expected. Otherwise forward
     * it to the thread group handler (which may hand it off to the default
     * uncaught exception handler).
     * </p>
     */
    public void uncaughtException(Thread thread, Throwable throwable) {
        if (!(throwable instanceof ShutdownException)) {
            /* Use the original exception handler if one is available,
             * otherwise use the group exception handler.
             */
            if (handler != null) {
                handler.uncaughtException(thread, throwable);
            }
        }
    }
}
/**
 * Configure the given job as a spring bean.
 *
 * <p>Given a runnable task, configure it as a prototype spring bean,
 * injecting any necessary dependencies.</p>
 *
 * @param thread The thread the task will be executed in.
 * @param job    The job to configure.
 *
 * @throws IllegalStateException if any error occurs.
 */
protected void beforeExecute(final Thread thread, final Runnable job) {
    /* If we're in shutdown, it's because spring is in singleton shutdown
     * mode. This means we must not attempt to configure the bean, but
     * rather we must exit immediately (prematurely, even).
     */
    if (!this.isShutdown()) {
        if (factory == null) {
            throw new IllegalStateException(
                "This class must be instantiated by spring"
            );
        }
        factory.configureBean(job, job.getClass().getName());
    }
    else {
        /* If we are in shutdown mode, replace the job on the queue so the
         * next process will see it and it won't get dropped. Further,
         * interrupt this thread so it will no longer process jobs. This
         * deviates from the existing behavior of shutdown().
         */
        workQueue.add(job);
        thread.setUncaughtExceptionHandler(
            new ShutdownHandler(thread.getUncaughtExceptionHandler())
        );
        /* Throwing a runtime exception is the only way to prematurely
         * cause a worker thread from the ThreadPoolExecutor to exit.
         */
        throw new ShutdownException("Terminating thread");
    }
}
In your case, you may want to create a semaphore (just for use as a threadsafe counter) which has no permits, and when shutting down threads release to it a number of permits that corresponds to the delta of the previous core pool size and the new pool size (requiring you override the setCorePoolSize(int) method). This will allow you to terminate your threads after their current task completes.
private Semaphore terminations = new Semaphore(0);

protected void beforeExecute(final Thread thread, final Runnable job) {
    if (terminations.tryAcquire()) {
        /* Replace this item in the queue so it may be executed by another
         * thread
         */
        queue.add(job);
        thread.setUncaughtExceptionHandler(
            new ShutdownHandler(thread.getUncaughtExceptionHandler())
        );
        /* Throwing a runtime exception is the only way to prematurely
         * cause a worker thread from the ThreadPoolExecutor to exit.
         */
        throw new ShutdownException("Terminating thread");
    }
}

public void setCorePoolSize(final int size) {
    int delta = getActiveCount() - size;
    super.setCorePoolSize(size);
    if (delta > 0) {
        terminations.release(delta);
    }
}
This should terminate n threads, where n = active - requested. If there is any problem, ThreadPoolExecutor's allocation strategy is fairly durable: it bookkeeps premature termination in a finally block, which guarantees execution. For this reason, even if you terminate too many threads, they will repopulate.
As far as I can tell, this is not possible in a nice clean way.
You can implement the beforeExecute method to check some boolean value and force threads to halt temporarily. Keep in mind, they will contain a task which will not be executed until they are re-enabled.
Alternatively, you can implement afterExecute to throw a RuntimeException when you are saturated. This will effectively cause the Thread to die and since the Executor will be above the max, no new one would be created.
I don't recommend you do either. Instead, try to find some other way of controlling concurrent execution of the tasks which are causing you a problem. Possibly by executing them in a separate thread pool with a more limited number of workers.
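For instance, one way to realize that suggestion without a second pool is to gate just the troublesome tasks with a shared Semaphore; heavyTask and executor here are hypothetical placeholders for your own task and pool:

// Wraps the hypothetical resource-heavy task so that at most 'gate.availablePermits()'
// of them execute at once; other workers stay free for light tasks.
static Runnable gated(final Runnable heavyTask, final Semaphore gate) {
    return new Runnable() {
        @Override
        public void run() {
            try {
                gate.acquire(); // blocks only this worker, not the whole pool
                try {
                    heavyTask.run();
                } finally {
                    gate.release();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore interrupt status
            }
        }
    };
}

// Usage: share one gate across all heavy tasks.
// Semaphore gate = new Semaphore(2); // at most 2 heavy tasks concurrently (arbitrary limit)
// executor.execute(gated(heavyTask, gate));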
The solution is to drain the ThreadPoolExecutor queue, set the ThreadPoolExecutor size as needed, and then add the tasks back, one by one, as the others end.
The method that drains the queue in the ThreadPoolExecutor class is private, so you have to write it yourself. Here is the code:
/**
 * Drains the task queue into a new list. Used by shutdownNow.
 * Call only while holding main lock.
 */
public static List<Runnable> drainQueue() {
    List<Runnable> taskList = new ArrayList<Runnable>();
    BlockingQueue<Runnable> workQueue = executor.getQueue();
    workQueue.drainTo(taskList);
    /*
     * If the queue is a DelayQueue or any other kind of queue
     * for which poll or drainTo may fail to remove some elements,
     * we need to manually traverse and remove remaining tasks.
     * To guarantee atomicity wrt other threads using this queue,
     * we need to create a new iterator for each element removed.
     */
    while (!workQueue.isEmpty()) {
        Iterator<Runnable> it = workQueue.iterator();
        try {
            if (it.hasNext()) {
                Runnable r = it.next();
                if (workQueue.remove(r))
                    taskList.add(r);
            }
        } catch (ConcurrentModificationException ignore) {
        }
    }
    return taskList;
}
Before calling this method you need to acquire, and afterwards release, the main lock.
To do this you need to use Java reflection, because the field "mainLock" is private.
Again, here is the code:
private Field getMainLock() throws NoSuchFieldException {
    Field mainLock = executor.getClass().getDeclaredField("mainLock");
    mainLock.setAccessible(true);
    return mainLock;
}
Where "executor" is your ThreadPoolExecutor.
Now you need lock/unlock methods:
public void lock() {
    try {
        Field mainLock = getMainLock();
        Method lock = mainLock.getType().getDeclaredMethod("lock", (Class[]) null);
        lock.invoke(mainLock.get(executor), (Object[]) null);
    } catch (Exception e) {
        ...
    }
}

public void unlock() {
    try {
        Field mainLock = getMainLock();
        mainLock.setAccessible(true);
        Method unlock = mainLock.getType().getDeclaredMethod("unlock", (Class[]) null);
        unlock.invoke(mainLock.get(executor), (Object[]) null);
    } catch (Exception e) {
        ...
    }
}
Finally you can write your setThreadsNumber method; it works both when increasing and when decreasing the ThreadPoolExecutor size:
public void setThreadsNumber(int intValue) {
    boolean increasing = intValue > executor.getPoolSize();
    executor.setCorePoolSize(intValue);
    executor.setMaximumPoolSize(intValue);
    if (increasing) {
        if (drainedQueue != null && (drainedQueue.size() > 0)) {
            executor.submit(drainedQueue.remove(0));
        }
    } else {
        if (drainedQueue == null) {
            lock();
            drainedQueue = drainQueue();
            unlock();
        }
    }
}
Note: obviously, if you execute N parallel threads and then change this number to N-1, all N threads will continue to run. When the first thread ends, no new thread will be started. From then on, the number of parallel threads will be the one you have chosen.
I was in need of the same solution, and it seems that in JDK 8 setCorePoolSize() and setMaximumPoolSize() do indeed produce the desired result.
I made a test case where I submit 4 tasks to the pool and they execute concurrently; I shrink the pool size while they are running and submit another runnable that I want to run by itself. Then I restore the pool to its original size. Here is the test source: https://gist.github.com/southerton81/96e141b8feede3fe0b8f88f679bef381
It produces the following output (thread "50" is the one that should execute in isolation):
run:
test thread 2 enter
test thread 1 enter
test thread 3 enter
test thread 4 enter
test thread 1 exit
test thread 2 exit
test thread 3 exit
test thread 4 exit
test thread 50 enter
test thread 50 exit
test thread 1 enter
test thread 2 enter
test thread 3 enter
test thread 4 enter
test thread 1 exit
test thread 2 exit
test thread 3 exit
test thread 4 exit
