Java ForkJoinPool - order of tasks in queues

I would like to understand the order in which tasks are processed in Java fork-join pool.
So far, the only relevant information I've found in the docs is about a parameter called the "asyncMode", which is "true if this pool uses local first-in-first-out scheduling mode for forked tasks that are never joined".
My interpretation of this statement is that every worker has its own task queue; workers take tasks from the front of their own queue, or steal from the backs of other workers' queues if their own queues are empty; workers add newly-forked tasks to the back (resp. front) of their own queues if asyncMode is true (resp. false).
Please correct me if my interpretation is wrong!
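(For reference, asyncMode here is the flag passed to the four-argument ForkJoinPool constructor; a minimal sketch of the two modes I mean, nothing more than the standard constructor call:)
import java.util.concurrent.ForkJoinPool;

public class AsyncModeDemo {
    public static void main(String[] args) {
        // Default mode: asyncMode = false, local LIFO scheduling for forked tasks.
        ForkJoinPool lifoPool = new ForkJoinPool(4);

        // asyncMode = true: local FIFO scheduling, intended for event-style tasks that are never joined.
        ForkJoinPool fifoPool = new ForkJoinPool(
                4,                                               // parallelism
                ForkJoinPool.defaultForkJoinWorkerThreadFactory, // default worker thread factory
                null,                                            // no UncaughtExceptionHandler
                true);                                           // asyncMode
        System.out.println(lifoPool.getAsyncMode() + " " + fifoPool.getAsyncMode()); // prints: false true
    }
}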
Now, this raises a couple of questions:
1) What is the ordering for forked tasks that are joined?
My guess is that, when a task is forked, it is added to the worker's queue as described in my interpretation above. Now, suppose the task is joined...
If, when join is called, the task has not yet been started, the worker calling join will pull the task out of the queue and start working on it immediately.
If, when join is called, the task has already been stolen by another worker, then the worker calling join will work on other tasks in the meantime (following the ordering for getting tasks described in my interpretation above), until the task that it is joining has been finished by the worker that stole it.
This guess is based on writing simple test code with print statements, and observing the way in which changing the order of join calls influences the order in which tasks are processed. Could someone please tell me if my guess is correct?
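(For context, my test code looks roughly like the hypothetical task below - it just prints which worker thread runs which task, so swapping the two join calls shows how the processing order changes:)
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class JoinOrderTest {
    static class Labelled extends RecursiveTask<Integer> {
        final String label;
        final int depth;
        Labelled(String label, int depth) { this.label = label; this.depth = depth; }

        @Override
        protected Integer compute() {
            System.out.println(label + " running on " + Thread.currentThread().getName());
            if (depth == 0) return 1;
            Labelled left = new Labelled(label + ".L", depth - 1);
            Labelled right = new Labelled(label + ".R", depth - 1);
            left.fork();
            right.fork();
            // Swap the two joins below and watch how the order of processing changes.
            return right.join() + left.join();
        }
    }

    public static void main(String[] args) {
        System.out.println(new ForkJoinPool(2).invoke(new Labelled("root", 3)));
    }
}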
2) What is the ordering for tasks that are submitted externally?
According to the answer to this question, fork-join pools do not use external queues. (I'm using Java 8, by the way.)
So am I to understand that when a task is submitted externally, the task is added to a randomly-selected worker queue?
If so, is the externally-submitted task added to the back or the front of the queue?
Finally, does this depend on whether the task is submitted by calling pool.execute(task) or by calling pool.invoke(task)? And does this depend on whether the thread calling pool.execute(task) or pool.invoke(task) is an external thread or a thread within this fork-join pool?

Your guess is correct.
As you can read in the "Implementation Overview" comment in the ForkJoinPool source:
* Joining Tasks
* =============
*
* Any of several actions may be taken when one worker is waiting
* to join a task stolen (or always held) by another. Because we
* are multiplexing many tasks on to a pool of workers, we can't
* just let them block (as in Thread.join). We also cannot just
* reassign the joiner's run-time stack with another and replace
* it later, which would be a form of "continuation", that even if
* possible is not necessarily a good idea since we may need both
* an unblocked task and its continuation to progress. Instead we
* combine two tactics:
*
* Helping: Arranging for the joiner to execute some task that it
* would be running if the steal had not occurred.
*
* Compensating: Unless there are already enough live threads,
* method tryCompensate() may create or re-activate a spare
* thread to compensate for blocked joiners until they unblock.
2. ForkJoinPool.invoke and ForkJoinPool.execute submit the task in exactly the same way, as you can see in the code:
public <T> T invoke(ForkJoinTask<T> task) {
    if (task == null)
        throw new NullPointerException();
    externalPush(task);
    return task.join();
}

public void execute(ForkJoinTask<?> task) {
    if (task == null)
        throw new NullPointerException();
    externalPush(task);
}
In externalPush you can see that the task is added to a randomly-selected queue in the workQueues array, with ThreadLocalRandom used to pick the slot. Moreover, it is pushed onto the top (the push end) of that queue:
final void externalPush(ForkJoinTask<?> task) {
    WorkQueue[] ws; WorkQueue q; int m;
    int r = ThreadLocalRandom.getProbe();
    int rs = runState;
    if ((ws = workQueues) != null && (m = (ws.length - 1)) >= 0 &&
        (q = ws[m & r & SQMASK]) != null && r != 0 && rs > 0 &&
        U.compareAndSwapInt(q, QLOCK, 0, 1)) {
        ForkJoinTask<?>[] a; int am, n, s;
        if ((a = q.array) != null &&
            (am = a.length - 1) > (n = (s = q.top) - q.base)) {
            int j = ((am & s) << ASHIFT) + ABASE;
            U.putOrderedObject(a, j, task);
            U.putOrderedInt(q, QTOP, s + 1);
            U.putIntVolatile(q, QLOCK, 0);
            if (n <= 1)
                signalWork(ws, q);
            return;
        }
        U.compareAndSwapInt(q, QLOCK, 1, 0);
    }
    externalSubmit(task);
}
I am not sure what you mean by this:
And does this depend on whether the thread calling pool.execute(task) or pool.invoke(task) is an external thread or a thread within this fork-join pool?

Related

ThreadPoolExecutor dynamic task execution - wait until all tasks complete

I have a ThreadPoolExecutor as such
ThreadPoolExecutor executor = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());
The tasks are executed as follows:
executor.execute(task);
Now each task may also submit more tasks to the same executor, and those new tasks can submit yet more tasks.
The problem is that I want the main thread to wait until all tasks have executed and then call shutdown.
Is the following approach guaranteed to work? (i.e. block/wait the main thread until all tasks are completed)
while (executor.getCompletedTaskCount() < executor.getTaskCount()) {
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        LOGGER.error("Exception in atomic Count wait thread sleep", e);
        break;
    }
}
Will this eventually break out of the loop? From preliminary testing, it seems to work even when tasks throw exceptions.
P.S.
I cannot use a CountDownLatch, because I don't know the number of tasks beforehand, nor can I use the accepted answer here.
You should probably keep the futures that get submitted.
Deque<Future<?>> futures = new ConcurrentLinkedDeque<>();
Then, every time you submit a task:
futures.add(executor.submit( runnable, "Doesn't Really Matter, but Can be Useful"));
Then, in your main thread that is waiting:
while (futures.size() > 0) {
    futures.pop().get();
}
This will offer you a guarantee that .get will not complete until a task has finished, and if more tasks are added by another task then futures will reflect the change before the original task completes.
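A fuller sketch of the idea (the helper method and class names are just illustrative): every task, including tasks spawned by other tasks, goes through the same submit helper, and the main thread drains the deque before shutting down.
import java.util.Deque;
import java.util.concurrent.*;

public class DrainFutures {
    static final ExecutorService executor =
            new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());
    static final Deque<Future<?>> futures = new ConcurrentLinkedDeque<>();

    static void submit(Runnable task) {
        futures.add(executor.submit(task));
    }

    public static void main(String[] args) throws Exception {
        submit(() -> {
            System.out.println("parent task");
            submit(() -> System.out.println("child task")); // a task submitting another task
        });
        while (!futures.isEmpty()) {
            futures.pop().get(); // does not return until that particular task has finished
        }
        executor.shutdown();
    }
}
The child's future is added to the deque before the parent task completes, so the draining loop is guaranteed to see it.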
In my opinion, getting the actual count of tasks is non-deterministic: while tasks are being submitted, the execute method is called and one of the three conditions below may happen.
1. Task starts executing (added to Workers)
2. Task is enqueued (added to WorkQueue)
3. Task is rejected because the work queue capacity, the worker capacity, and resources are exhausted
/**
 * Executes the given task sometime in the future. The task
 * may execute in a new thread or in an existing pooled thread.
 *
 * If the task cannot be submitted for execution, either because this
 * executor has been shutdown or because its capacity has been reached,
 * the task is handled by the current {@code RejectedExecutionHandler}.
 *
 * @param command the task to execute
 * @throws RejectedExecutionException at discretion of
 *         {@code RejectedExecutionHandler}, if the task
 *         cannot be accepted for execution
 * @throws NullPointerException if {@code command} is null
 */
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    /*
     * Proceed in 3 steps:
     *
     * 1. If fewer than corePoolSize threads are running, try to
     * start a new thread with the given command as its first
     * task. The call to addWorker atomically checks runState and
     * workerCount, and so prevents false alarms that would add
     * threads when it shouldn't, by returning false.
     *
     * 2. If a task can be successfully queued, then we still need
     * to double-check whether we should have added a thread
     * (because existing ones died since last checking) or that
     * the pool shut down since entry into this method. So we
     * recheck state and if necessary roll back the enqueuing if
     * stopped, or start a new thread if there are none.
     *
     * 3. If we cannot queue task, then we try to add a new
     * thread. If it fails, we know we are shut down or saturated
     * and so reject the task.
     */
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    else if (!addWorker(command, false))
        reject(command);
}
The getTaskCount() and getCompletedTaskCount() methods are guarded by mainLock, but we do not know whether internal threads that are still submitting tasks to the executor will have done so by the time the check in main (while (executor.getCompletedTaskCount() < executor.getTaskCount())) executes. The condition may therefore be a false positive for a moment, ending in a wrong result.
/**
 * Returns the approximate total number of tasks that have ever been
 * scheduled for execution. Because the states of tasks and
 * threads may change dynamically during computation, the returned
 * value is only an approximation.
 *
 * @return the number of tasks
 */
public long getTaskCount() {
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        long n = completedTaskCount;
        for (Worker w : workers) {
            n += w.completedTasks;
            if (w.isLocked())
                ++n;
        }
        return n + workQueue.size();
    } finally {
        mainLock.unlock();
    }
}
/**
 * Returns the approximate total number of tasks that have
 * completed execution. Because the states of tasks and threads
 * may change dynamically during computation, the returned value
 * is only an approximation, but one that does not ever decrease
 * across successive calls.
 *
 * @return the number of tasks
 */
public long getCompletedTaskCount() {
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        long n = completedTaskCount;
        for (Worker w : workers)
            n += w.completedTasks;
        return n;
    } finally {
        mainLock.unlock();
    }
}
Code snippets used here are from JDK 1.8.0_222.
The methods used to get the completed count and submitted count, i.e. executor.getCompletedTaskCount() and executor.getTaskCount(), do not always provide a 100% accurate count as per the Java 8 docs, so the approach may not always work.
public long getTaskCount()
Returns the approximate total number of tasks that have ever been
scheduled for execution. Because the states of tasks and threads may
change dynamically during computation, the returned value is only an
approximation.
public long getCompletedTaskCount()
Returns the approximate total number of tasks that have completed
execution. Because the states of tasks and threads may change
dynamically during computation, the returned value is only an
approximation, but one that does not ever decrease across successive
calls.
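If you still want a count-style wait without knowing the number of tasks up front, one alternative (a sketch of my own, not something from the question) is a Phaser: each task is registered before it is submitted and deregisters when it finishes, including tasks submitted by other tasks.
import java.util.concurrent.*;

public class PhaserWait {
    static final ExecutorService executor =
            new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());
    static final Phaser phaser = new Phaser(1); // the one pre-registered party is the main thread itself

    static void submit(Runnable task) {
        phaser.register();                      // count the task before it can possibly finish
        executor.execute(() -> {
            try {
                task.run();
            } finally {
                phaser.arriveAndDeregister();   // always counted, even if the task throws
            }
        });
    }

    public static void main(String[] args) {
        submit(() -> submit(() -> System.out.println("nested task done")));
        phaser.arriveAndAwaitAdvance();         // blocks until every registered task has arrived
        executor.shutdown();
        System.out.println("all tasks finished");
    }
}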

Java ThreadPoolExecutor doesn't create new threads

I have a servlet in AEM (Adobe Experience Manager) that creates a new thread per request. I use an Apache Sling thread pool to manage the threads. Everything is fine, but if the servlet receives hundreds of requests within a few seconds, no new thread is created and the thread pool becomes useless; I have to restart the AEM instance to recover it. The thread pool uses an unbounded queue with a core pool size of ten. The configuration is as follows:
ThreadPool configuration
Debugging the java.util.concurrent.ThreadPoolExecutor class, I see that my code enters the third "if" but then enters neither the inner if nor the else, so no thread is created.
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    /*
     * Proceed in 3 steps:
     *
     * 1. If fewer than corePoolSize threads are running, try to
     * start a new thread with the given command as its first
     * task. The call to addWorker atomically checks runState and
     * workerCount, and so prevents false alarms that would add
     * threads when it shouldn't, by returning false.
     *
     * 2. If a task can be successfully queued, then we still need
     * to double-check whether we should have added a thread
     * (because existing ones died since last checking) or that
     * the pool shut down since entry into this method. So we
     * recheck state and if necessary roll back the enqueuing if
     * stopped, or start a new thread if there are none.
     *
     * 3. If we cannot queue task, then we try to add a new
     * thread. If it fails, we know we are shut down or saturated
     * and so reject the task.
     */
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    else if (!addWorker(command, false))
        reject(command);
}
I read the documentation but do not understand why this occurs.

In Java's ForkJoinTask, does the order of fork/join matter?

Let's say we have a RecursiveTask subclass called MyRecursiveTask.
Then two sub-tasks are created within the scope of a ForkJoinTask:
MyRecursiveTask t1 = new MyRecursiveTask();
MyRecursiveTask t2 = new MyRecursiveTask();
t1.fork();
t2.fork();
I think "t2" will then be at the top of the WorkQueue (which is a deque, used as a stack by the worker itself), since the implementation of the fork method looks like this:
public final ForkJoinTask<V> fork() {
    Thread t;
    if ((t = Thread.currentThread()) instanceof ForkJoinWorkerThread)
        ((ForkJoinWorkerThread)t).workQueue.push(this);
    else
        ForkJoinPool.common.externalPush(this);
    return this;
}
If so, is there any performance difference between the two expressions below?
Expression1:
t1.join() + t2.join()
Expression2:
t2.join() + t1.join()
I think it may matter. t1.join() will always block (if there is no work stealing) until t2.join() has finished, because only the task at the top of the WorkQueue can be popped. (In other words, t2 has to be popped before t1 can be popped.) Below is the code for doJoin and tryUnpush.
private int doJoin() {
    int s; Thread t; ForkJoinWorkerThread wt; ForkJoinPool.WorkQueue w;
    return (s = status) < 0 ? s :
        ((t = Thread.currentThread()) instanceof ForkJoinWorkerThread) ?
        (w = (wt = (ForkJoinWorkerThread)t).workQueue).
        tryUnpush(this) && (s = doExec()) < 0 ? s :
        wt.pool.awaitJoin(w, this, 0L) :
        externalAwaitDone();
}

/**
 * Pops the given task only if it is at the current top.
 * (A shared version is available only via FJP.tryExternalUnpush)
 */
final boolean tryUnpush(ForkJoinTask<?> t) {
    ForkJoinTask<?>[] a; int s;
    if ((a = array) != null && (s = top) != base &&
        U.compareAndSwapObject
        (a, (((a.length - 1) & --s) << ASHIFT) + ABASE, t, null)) {
        U.putOrderedInt(this, QTOP, s);
        return true;
    }
    return false;
}
Does anyone have ideas about this? Thanks!
Whether you're using Java 7 or Java 8 is important. In Java 7 the framework creates continuation threads for join(). In Java 8 the framework mostly stalls for join(). See here. I've been writing a critique of this framework since 2010.
The recommendation for using a RecursiveTask (from the JavaDoc):
return f2.compute() + f1.join();
This way the splitting thread continues the operation itself.
Relying on the F/J code for direction is not recommended since this code changes frequently. For instance, in Java 8 nested parallel streams caused so many compensation threads that the code was reworked in Java 8u40, only to cause more problems. See here.
If you must do multiple joins, then it really doesn't matter what order you join(). Each fork() makes the task available for any thread.
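A minimal sketch of that recommended pattern (a toy Fibonacci, purely illustrative): fork one child, compute the other in the splitting thread, then join.
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class Fib extends RecursiveTask<Integer> {
    final int n;
    Fib(int n) { this.n = n; }

    @Override
    protected Integer compute() {
        if (n <= 1) return n;
        Fib f1 = new Fib(n - 1);
        f1.fork();                        // make one half available to any worker
        Fib f2 = new Fib(n - 2);
        return f2.compute() + f1.join();  // compute the other half ourselves, then join
    }

    public static void main(String[] args) {
        System.out.println(ForkJoinPool.commonPool().invoke(new Fib(20))); // prints 6765
    }
}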
If you have enough cores, both tasks run in parallel and it doesn't matter which one is started first, because the completion time is what matters: whichever one finishes first has to wait for the other to finish before the result can be calculated. If you have one core, then what you're thinking could be true, but with one core why do you need to parallelize the job?

ThreadPoolExecutor and the queue

I thought that with a ThreadPoolExecutor we can submit Runnables to be executed either via the BlockingQueue passed to the constructor or via the execute method.
Also my understanding was that if a task is available it will be executed.
What I don't understand is the following:
public class MyThreadPoolExecutor {
    private static ThreadPoolExecutor executor;

    public MyThreadPoolExecutor(int min, int max, int idleTime, BlockingQueue<Runnable> queue) {
        executor = new ThreadPoolExecutor(min, max, 10, TimeUnit.MINUTES, queue);
        //executor.prestartAllCoreThreads();
    }

    public static void main(String[] main) {
        BlockingQueue<Runnable> q = new LinkedBlockingQueue<Runnable>();
        final String[] names = {"A", "B", "C", "D", "E", "F"};
        for (int i = 0; i < names.length; i++) {
            final int j = i;
            q.add(new Runnable() {
                @Override
                public void run() {
                    System.out.println("Hi " + names[j]);
                }
            });
        }
        new MyThreadPoolExecutor(10, 20, 1, q);
        try {
            TimeUnit.SECONDS.sleep(5);
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        /*executor.execute(new Runnable() {
            @Override
            public void run() {
                System.out.println("++++++++++++++");
            }
        }); */
        for (int i = 0; i < 100; i++) {
            final int j = i;
            q.add(new Runnable() {
                @Override
                public void run() {
                    System.out.println("Hi " + j);
                }
            });
        }
    }
}
This code does absolutely nothing unless I either uncomment the executor.prestartAllCoreThreads(); in the constructor OR call execute with the runnable that prints "++++++++++++++" (it is also commented out).
Why?
Quote (my emphasis):
By default, even core threads are initially created and started only
when new tasks arrive, but this can be overridden dynamically using
method prestartCoreThread() or prestartAllCoreThreads(). You probably
want to prestart threads if you construct the pool with a non-empty
queue.
Ok. So my queue is not empty. But I create the executor, sleep, and then add new Runnables to the queue (in the loop to 100).
Doesn't this loop count as new tasks arriving?
Why doesn't it work, and why do I have to either prestart the threads or explicitly call execute?
Worker threads are spawned as tasks arrive by execute, and these are the ones that interact with the underlying work queue. You need to prestart the workers if you begin with a non-empty work queue. See the implementation in OpenJDK 7.
I repeat, the workers are the ones that interact with the work queue. They are only spawned on demand when passed via execute. (or the layers above it, e.g. invokeAll, submit, etc.) If they are not started, it will not matter how much work you add to the queue, since there is nothing checking it as there are no workers started.
ThreadPoolExecutor does not spawn worker threads until necessary or if you pre-empt their creation by the methods prestartAllCoreThreads and prestartCoreThread. If there are no workers started, then there is no way any of the work in your queue is going to be done.
The reason adding an initial execute works is that it forces the creation of a sole core worker thread, which then can begin processing the work from your queue. You could also call prestartCoreThread and receive similar behavior. If you want to start all the workers, you must call prestartAllCoreThreads or submit that number of tasks via execute.
See the code for execute below.
/**
 * Executes the given task sometime in the future. The task
 * may execute in a new thread or in an existing pooled thread.
 *
 * If the task cannot be submitted for execution, either because this
 * executor has been shutdown or because its capacity has been reached,
 * the task is handled by the current {@code RejectedExecutionHandler}.
 *
 * @param command the task to execute
 * @throws RejectedExecutionException at discretion of
 *         {@code RejectedExecutionHandler}, if the task
 *         cannot be accepted for execution
 * @throws NullPointerException if {@code command} is null
 */
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    /*
     * Proceed in 3 steps:
     *
     * 1. If fewer than corePoolSize threads are running, try to
     * start a new thread with the given command as its first
     * task. The call to addWorker atomically checks runState and
     * workerCount, and so prevents false alarms that would add
     * threads when it shouldn't, by returning false.
     *
     * 2. If a task can be successfully queued, then we still need
     * to double-check whether we should have added a thread
     * (because existing ones died since last checking) or that
     * the pool shut down since entry into this method. So we
     * recheck state and if necessary roll back the enqueuing if
     * stopped, or start a new thread if there are none.
     *
     * 3. If we cannot queue task, then we try to add a new
     * thread. If it fails, we know we are shut down or saturated
     * and so reject the task.
     */
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    else if (!addWorker(command, false))
        reject(command);
}
A BlockingQueue is not a magic thread dispatcher. If you submit Runnable objects to the queue and there are no running threads to consume those tasks, they of course will not be executed. The execute method on the other hand will automatically dispatch threads according to the thread pool configuration if it needs to. If you pre-start all of the core threads, there will be threads there to consume tasks from the queue.
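Applied to the code in the question, either of the following (sketched against the posted class, not tested) gives the pre-filled queue a consumer:
// Option 1: prestart the core workers right after constructing the pool,
// so they immediately start taking from the queue that was filled beforehand.
executor = new ThreadPoolExecutor(min, max, 10, TimeUnit.MINUTES, queue);
executor.prestartAllCoreThreads();

// Option 2: submit at least one task through the executor itself;
// execute() spawns a core worker, which then also drains the rest of the queue.
executor.execute(new Runnable() {
    @Override
    public void run() {
        System.out.println("first task via execute()");
    }
});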

Is adding tasks to BlockingQueue of ThreadPoolExecutor advisable?

The JavaDoc for ThreadPoolExecutor is unclear on whether it is acceptable to add tasks directly to the BlockingQueue backing the executor. The docs say calling executor.getQueue() is "intended primarily for debugging and monitoring".
I'm constructing a ThreadPoolExecutor with my own BlockingQueue. I retain a reference to the queue so I can add tasks to it directly. The same queue is returned by getQueue() so I assume the admonition in getQueue() applies to a reference to the backing queue acquired through my means.
Example
General pattern of the code is:
int n = ...; // number of threads
queue = new ArrayBlockingQueue<Runnable>(queueSize);
executor = new ThreadPoolExecutor(n, n, 1, TimeUnit.HOURS, queue);
executor.prestartAllCoreThreads();
// ...
while (...) {
    Runnable job = ...;
    queue.offer(job, 1, TimeUnit.HOURS);
}
while (jobsOutstanding.get() != 0) {
    try {
        Thread.sleep(...);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
executor.shutdownNow();
queue.offer() vs executor.execute()
As I understand it, the typical use is to add tasks via executor.execute(). The approach in my example above has the benefit of blocking on the queue whereas execute() fails immediately if the queue is full and rejects my task. I also like that submitting jobs interacts with a blocking queue; this feels more "pure" producer-consumer to me.
An implication of adding tasks to the queue directly: I must call prestartAllCoreThreads() otherwise no worker threads are running. Assuming no other interactions with the executor, nothing will be monitoring the queue (examination of ThreadPoolExecutor source confirms this). This also implies for direct enqueuing that the ThreadPoolExecutor must additionally be configured for > 0 core threads and mustn't be configured to allow core threads to timeout.
tl;dr
Given a ThreadPoolExecutor configured as follows:
core threads > 0
core threads aren't allowed to timeout
core threads are prestarted
hold a reference to the BlockingQueue backing the executor
Is it acceptable to add tasks directly to the queue instead of calling executor.execute()?
Related
This question ( producer/consumer work queues ) is similar, but doesn't specifically cover adding to the queue directly.
One trick is to implement a custom subclass of ArrayBlockingQueue and to override the offer() method to call your blocking version, then you can still use the normal code path.
queue = new ArrayBlockingQueue<Runnable>(queueSize) {
    @Override public boolean offer(Runnable runnable) {
        try {
            return offer(runnable, 1, TimeUnit.HOURS);
        } catch (InterruptedException e) {
            // return interrupt status to caller
            Thread.currentThread().interrupt();
        }
        return false;
    }
};
(As you can probably guess, I think calling offer directly on the queue as your normal code path is probably a bad idea.)
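Used together with the executor, the override above keeps the normal submission path but makes it block instead of rejecting when the queue is full (a sketch; n and queueSize are illustrative):
ThreadPoolExecutor executor = new ThreadPoolExecutor(n, n, 1, TimeUnit.HOURS, queue);

// Normal code path: execute() calls queue.offer(), which now waits up to an hour when the queue is full.
executor.execute(new Runnable() {
    public void run() {
        System.out.println("job ran on " + Thread.currentThread().getName());
    }
});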
If it were me, I would prefer using Executor#execute() over Queue#offer(), simply because I'm using everything else from java.util.concurrent already.
Your question is a good one, and it piqued my interest, so I took a look at the source for ThreadPoolExecutor#execute():
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    if (poolSize >= corePoolSize || !addIfUnderCorePoolSize(command)) {
        if (runState == RUNNING && workQueue.offer(command)) {
            if (runState != RUNNING || poolSize == 0)
                ensureQueuedTaskHandled(command);
        }
        else if (!addIfUnderMaximumPoolSize(command))
            reject(command); // is shutdown or saturated
    }
}
We can see that execute itself calls offer() on the work queue, but not before doing some nice, tasty pool manipulations if necessary. For that reason, I'd think that it'd be advisable to use execute(); not using it may (although I don't know for certain) cause the pool to operate in a non-optimal way. However, I don't think that using offer() will break the executor - it looks like tasks are pulled off the queue using the following (also from ThreadPoolExecutor):
Runnable getTask() {
    for (;;) {
        try {
            int state = runState;
            if (state > SHUTDOWN)
                return null;
            Runnable r;
            if (state == SHUTDOWN) // Help drain queue
                r = workQueue.poll();
            else if (poolSize > corePoolSize || allowCoreThreadTimeOut)
                r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);
            else
                r = workQueue.take();
            if (r != null)
                return r;
            if (workerCanExit()) {
                if (runState >= SHUTDOWN) // Wake up others
                    interruptIdleWorkers();
                return null;
            }
            // Else retry
        } catch (InterruptedException ie) {
            // On interruption, re-check runState
        }
    }
}
This getTask() method is just called from within a loop, so if the executor is not shutting down, it will block until a new task is given to the queue (regardless of where it came from).
Note: Even though I've posted code snippets from source here, we can't rely on them for a definitive answer - we should only be coding to the API. We don't know how the implementation of execute() will change over time.
One can actually configure behavior of the pool when the queue is full, by specifying a RejectedExecutionHandler at instantiation. ThreadPoolExecutor defines four policies as inner classes, including AbortPolicy, DiscardOldestPolicy, DiscardPolicy, as well as my personal favorite, CallerRunsPolicy, which runs the new job in the controlling thread.
For example:
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(
        nproc,                                        // core size
        nproc,                                        // max size
        60,                                           // idle timeout
        TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(4096, true), // Fairness = true guarantees FIFO
        new ThreadPoolExecutor.CallerRunsPolicy());   // If we have to reject a task, run it in the calling thread.
The behavior desired in the question can be obtained using something like:
public class BlockingPolicy implements RejectedExecutionHandler {
    @Override public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            executor.getQueue().put(r); // Self contained, no queue reference needed.
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
At some point the queue must be accessed. The best place to do so is in a self-contained RejectedExecutionHandler, which saves any code duplication or potential bugs arising from direct manipulation of the queue at the scope of the pool object. Note that the handlers included in ThreadPoolExecutor themselves use getQueue().
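Wiring that handler into the pool is then just another constructor argument (a sketch; the sizes mirror the example above):
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        nproc, nproc,                          // fixed-size pool, as in the example above
        60, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(4096),
        new BlockingPolicy());                 // a full queue now blocks the submitting thread in put()

pool.execute(new Runnable() {
    public void run() { System.out.println("job"); } // normal submission path, no direct queue access
});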
It's a very important question if the queue you're using is a completely different implementation from the standard in-memory LinkedBlockingQueue or ArrayBlockingQueue.
For instance if you're implementing the producer-consumer pattern using several producers on different machines, and use a queuing mechanism based on a separate persistence subsystem (like Redis), then the question becomes relevant on its own, even if you don't want a blocking offer() like the OP.
So the given answer, that prestartAllCoreThreads() has to be called (or enough times prestartCoreThread()) for the worker threads to be available and running, is important enough to be stressed.
If required, we can also use a "parking lot" that separates main processing from rejected tasks:
final CountDownLatch taskCounter = new CountDownLatch(TASKCOUNT);
final List<Runnable> taskParking = new LinkedList<Runnable>();
BlockingQueue<Runnable> taskPool = new ArrayBlockingQueue<Runnable>(1);
RejectedExecutionHandler rejectionHandler = new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.err.println(Thread.currentThread().getName() + " -->rejection reported - adding to parking lot " + r);
        taskCounter.countDown();
        taskParking.add(r);
    }
};
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(5, 10, 1000, TimeUnit.SECONDS, taskPool, rejectionHandler);
for (int i = 0; i < TASKCOUNT; i++) {
    // main
    threadPoolExecutor.submit(getRandomTask());
}
taskCounter.await(TASKCOUNT * 5, TimeUnit.SECONDS);
System.out.println("Checking the parking lot..." + taskParking);
while (taskParking.size() > 0) {
    Runnable r = taskParking.remove(0);
    System.out.println("Running from parking lot..." + r);
    if (taskParking.size() > LIMIT) {
        waitForSometime(...);
    }
    threadPoolExecutor.submit(r);
}
threadPoolExecutor.shutdown();
