Java ExecutorService: awaitTermination of all recursively created tasks - java

I use an ExecutorService to execute a task. This task can recursively create other tasks which are submitted to the same ExecutorService and those child tasks can do that, too.
I now have the problem that I want to wait until all the tasks are done (that is, all tasks are finished and they did not submit new ones) before I continue.
I cannot call ExecutorService.shutdown() in the main thread because this prevents new tasks from being accepted by the ExecutorService.
And calling ExecutorService.awaitTermination() seems to do nothing unless shutdown() has been called.
So I am kinda stuck here. It can't be that hard for the ExecutorService to see that all workers are idle, can it? The only inelegant solution I could come up with is to directly use a ThreadPoolExecutor and query its getPoolSize() every once in a while. Is there really no better way to do that?

This really is an ideal candidate for a Phaser. Java 7 is coming out with this new class. It's a flexible CountDownLatch/CyclicBarrier. You can get a stable version at the JSR 166 Interest Site.
It is a more flexible CountDownLatch/CyclicBarrier because it not only supports an unknown number of parties (threads) but is also reusable (that's where the phase part comes in).
For each task you submit you would register, when that task is completed you arrive. This can be done recursively.
Phaser phaser = new Phaser();
ExecutorService e = //

Runnable recursiveRunnable = new Runnable() {
    public void run() {
        // do work recursively if you have to
        if (shouldBeRecursive) {
            phaser.register();           // register the child task before submitting it
            e.submit(recursiveRunnable);
        }
        phaser.arrive();                 // this task is done
    }
};

public void doWork() {
    int phase = phaser.getPhase();

    phaser.register();                   // register the root task
    e.submit(recursiveRunnable);

    phaser.awaitAdvance(phase);          // blocks until arrivals == registrations for this phase
}
Edit: Thanks @depthofreality for pointing out the race condition in my previous example. I am updating it so that the executing thread only awaits advancement of the current phase while it blocks for the recursive function to complete.
The phase number won't trip until the number of arrivals == registrations. Since register() is invoked prior to each recursive call, the phase only advances once all invocations are complete.

If the number of tasks in the tree of recursive tasks is initially unknown, perhaps the easiest way is to implement your own synchronization primitive, some kind of "inverse semaphore", and share it among your tasks. Before submitting each task you increment a value; when a task is completed, it decrements that value; and you wait until the value is 0.
Implementing it as a separate primitive explicitly called from tasks decouples this logic from the thread pool implementation and allows you to submit several independent trees of recursive tasks into the same pool.
Something like this:
public class InverseSemaphore {
    private int value = 0;
    private Object lock = new Object();

    public void beforeSubmit() {
        synchronized (lock) {
            value++;
        }
    }

    public void taskCompleted() {
        synchronized (lock) {
            value--;
            if (value == 0) lock.notifyAll();
        }
    }

    public void awaitCompletion() throws InterruptedException {
        synchronized (lock) {
            while (value > 0) lock.wait();
        }
    }
}
Note that taskCompleted() should be called inside a finally block, to make it immune to possible exceptions.
Also note that beforeSubmit() should be called by the submitting thread before the task is submitted, not by the task itself, to avoid possible "false completion" when old tasks are completed and new ones not started yet.
EDIT: Important problem with usage pattern fixed.
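To make that usage pattern concrete, here is a minimal sketch; the pool size and the task body are placeholders, and any recursive submission would repeat the same two calls:
InverseSemaphore pending = new InverseSemaphore();
ExecutorService pool = Executors.newFixedThreadPool(4);

pending.beforeSubmit();                 // called by the submitting thread, before the submit
pool.submit(new Runnable() {
    public void run() {
        try {
            // do the work; for each child: pending.beforeSubmit(); pool.submit(child);
        } finally {
            pending.taskCompleted();    // always runs, even if the work throws
        }
    }
});

pending.awaitCompletion();              // blocks until the counter is back to 0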

Wow, you guys are quick:)
Thank you for all the suggestions. Futures don't easily integrate with my model because I don't know how many Runnables are scheduled beforehand. So if I keep a parent task alive just to wait for its recursive child tasks to finish, I have a lot of garbage lying around.
I solved my problem using the AtomicInteger suggestion. Essentially, I subclassed ThreadPoolExecutor and increment the counter on calls to execute() and decrement it on calls to afterExecute(). When the counter reaches 0 I call shutdown(). This seems to work for my problem; I'm not sure if it is a generally good way to do it. In particular, I assume that you only use execute() to add Runnables.
As a side note: I first tried to check in afterExecute() the number of Runnables in the queue and the number of workers that are active, and to shut down when both are 0; but that didn't work because not all Runnables showed up in the queue, and getActiveCount() didn't do what I expected either.
Anyhow, here's my solution: (if anybody finds serious problems with this, please let me know:)
public class MyThreadPoolExecutor extends ThreadPoolExecutor {

    private final AtomicInteger executing = new AtomicInteger(0);

    public MyThreadPoolExecutor(int corePoolSize, int maxPoolSize, long keepAliveTime,
            TimeUnit unit, BlockingQueue<Runnable> queue) {
        super(corePoolSize, maxPoolSize, keepAliveTime, unit, queue);
    }

    @Override
    public void execute(Runnable command) {
        // intercepting beforeExecute() is too late!
        // execute() is called in the parent thread before it terminates
        executing.incrementAndGet();
        super.execute(command);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        int count = executing.decrementAndGet();
        if (count == 0) {
            this.shutdown();
        }
    }
}

You could create your own thread pool which extends ThreadPoolExecutor. You want to know when a task has been submitted and when it completes.
public class MyThreadPoolExecutor extends ThreadPoolExecutor {

    private int counter = 0;

    public MyThreadPoolExecutor() {
        super(1, 1, 0, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
    }

    @Override
    public synchronized void execute(Runnable command) {
        counter++;
        super.execute(command);
    }

    @Override
    protected synchronized void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        counter--;
        notifyAll();
    }

    public synchronized void waitForExecuted() throws InterruptedException {
        while (counter > 0)    // wait while tasks are still outstanding
            wait();
    }
}
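Usage might then look roughly like this (rootTask is a placeholder for your initial Runnable, which may itself call pool.execute(...) recursively):
MyThreadPoolExecutor pool = new MyThreadPoolExecutor();
pool.execute(rootTask);     // rootTask may submit further tasks via pool.execute(...)
pool.waitForExecuted();     // returns once every execute()d task has finished
pool.shutdown();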

Use a Future for your tasks (instead of submitting plain Runnables); a callback updates its state when it completes, so you can use Future.isDone() to track the state of all your tasks.
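A minimal sketch of that idea follows; taskA and taskB are placeholders, and the surrounding method is assumed to declare the checked exceptions thrown by get(). Note that with recursively created tasks, each child would have to add its own Future to a shared, thread-safe collection, which is exactly the difficulty the asker mentions below.
ExecutorService pool = Executors.newFixedThreadPool(4);
List<Future<?>> futures = new ArrayList<Future<?>>();
futures.add(pool.submit(taskA));   // taskA/taskB stand in for your Runnables
futures.add(pool.submit(taskB));

for (Future<?> f : futures) {
    f.get();                       // blocks until that task is done; isDone() is the non-blocking check
}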

(mea culpa: it's a 'bit' past my bedtime ;) but here's a first attempt at a dynamic latch):
package oss.alphazero.sto4958330;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class DynamicCountDownLatch {

    @SuppressWarnings("serial")
    private static final class Sync extends AbstractQueuedSynchronizer {
        private final CountDownLatch toplatch;

        public Sync() {
            setState(0);
            this.toplatch = new CountDownLatch(1);
        }

        @Override
        protected int tryAcquireShared(int acquires) {
            try {
                toplatch.await();       // wait until at least one task has join()ed
            }
            catch (InterruptedException e) {
                throw new RuntimeException("Interrupted", e);
            }
            return getState() == 0 ? 1 : -1;
        }

        public boolean tryReleaseShared(int releases) {
            for (;;) {
                int c = getState();
                if (c == 0)
                    return false;
                int nextc = c - 1;
                if (compareAndSetState(c, nextc))
                    return nextc == 0;
            }
        }

        public boolean tryExtendState(int acquires) {
            for (;;) {
                int s = getState();
                int exts = s + 1;
                if (compareAndSetState(s, exts)) {
                    toplatch.countDown();   // unblock awaiters once the first task joins
                    return exts > 0;
                }
            }
        }
    }

    private final Sync sync;

    public DynamicCountDownLatch() {
        this.sync = new Sync();
    }

    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);
    }

    public boolean await(long timeout, TimeUnit unit) throws InterruptedException {
        return sync.tryAcquireSharedNanos(1, unit.toNanos(timeout));
    }

    public void countDown() {
        sync.releaseShared(1);
    }

    public void join() {
        sync.tryExtendState(1);
    }
}
This latch introduces a new method join() to the existing (cloned) CountDownLatch API, which is used by tasks to signal their entry into the larger task group.
The latch is passed around from parent task to child task. Each task would, per Suraj's pattern, first join() the latch, do its task(), and then countDown().
To address situations where the main thread launches the task group and then immediately await()s -- before any of the task threads have had a chance to even join() -- the topLatch is used in the inner Sync class. This is a latch that gets counted down on each join(); only the first countdown is of course significant, as all subsequent ones are no-ops.
The initial implementation above does introduce a semantic wrinkle of sorts, since tryAcquireShared(int) is not supposed to throw an InterruptedException, but we do need to deal with the interrupt on the wait on the topLatch.
Is this an improvement over the OP's own solution using atomic counters? I would say probably not if he insists on using Executors, but it is, I believe, an equally valid alternative approach using AQS in that case, and it is usable with plain threads as well.
Crit away fellow hackers.
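To illustrate the intended pattern described above, a rough sketch follows; the task body and pool are placeholders, the variables are declared final so the anonymous class can capture them, and the enclosing method is assumed to declare throws InterruptedException for await():
final DynamicCountDownLatch latch = new DynamicCountDownLatch();
final ExecutorService pool = Executors.newCachedThreadPool();

pool.submit(new Runnable() {
    public void run() {
        latch.join();                // signal entry into the task group
        try {
            // do work; each recursively submitted child follows the same join()/countDown() pattern
        } finally {
            latch.countDown();       // signal completion
        }
    }
});

latch.await();                       // the topLatch covers an await() issued before the first join()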

If you want to use JSR166y classes - e.g. Phaser or Fork/Join, either of which might work for you - you can always download the Java 6 backport of them from http://gee.cs.oswego.edu/dl/concurrency-interest/ and use that as a basis rather than writing a completely homebrew solution. Then when Java 7 comes out you can just drop the dependency on the backport and change a few package names.
(Full disclosure: We've been using the LinkedTransferQueue in prod for a while now. No issues)

I must say that the solutions described above for the problem of recursively created tasks and waiting for their subtasks don't satisfy me. Here is my solution, inspired by the original Oracle documentation for CountDownLatch and the example there: Human resources CountDownLatch.
The first thread in the process, an instance of class HRManagerCompact, has a latch waiting for its two daughter threads, which in turn have latches waiting for their own two daughter threads, and so on.
Of course, the latch can be set to a value other than 2 (in the CountDownLatch constructor), and the number of runnable objects can be produced in an iteration, e.g. over an ArrayList, but the two must correspond (the number of countdowns must equal the parameter of the CountDownLatch constructor).
Be careful: the number of latches, like the number of objects, increases exponentially according to the restriction condition
'level.get() < 2': objects 1, 2, 4, 8, 16... and latches 0, 1, 2, 4... As you can see, for four levels (level.get() < 4) there will be 15 waiting threads and 7 latches at the moment when the peak of 16 threads is running.
package processes.countdownlatch.hr;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

/** Recursively latching running classes to wait for the peak threads
 *
 * @author hariprasad
 */
public class HRManagerCompact extends Thread {

    final int N = 2; // number of daughter tasks per latch
    CountDownLatch countDownLatch;
    CountDownLatch originCountDownLatch;
    AtomicInteger level = new AtomicInteger(0);
    AtomicLong order = new AtomicLong(0); // id of the latched thread being waited for
    HRManagerCompact techLead1 = null;
    HRManagerCompact techLead2 = null;
    HRManagerCompact techLead3 = null;

    // constructor
    public HRManagerCompact(CountDownLatch countDownLatch, String name,
            AtomicInteger level, AtomicLong order) {
        super(name);
        this.originCountDownLatch = countDownLatch;
        this.level = level;
        this.order = order;
    }

    private void doIt() {
        countDownLatch = new CountDownLatch(N);
        AtomicInteger leveli = new AtomicInteger(level.get() + 1);
        AtomicLong orderi = new AtomicLong(Thread.currentThread().getId());
        techLead1 = new HRManagerCompact(countDownLatch, "first", leveli, orderi);
        techLead2 = new HRManagerCompact(countDownLatch, "second", leveli, orderi);
        //techLead3 = new HRManagerCompact(countDownLatch, "third", leveli);
        techLead1.start();
        techLead2.start();
        //techLead3.start();
        try {
            synchronized (Thread.currentThread()) { // to prevent print and latch in the same thread
                System.out.println("*** HR Manager waiting for recruitment to complete... " + level + ", " + order + ", " + orderi);
                countDownLatch.await(); // wait in the current thread
            }
            System.out.println("*** Distribute Offer Letter, it means finished. " + level + ", " + order + ", " + orderi);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void run() {
        try {
            System.out.println(Thread.currentThread().getName() + ": working... " + level + ", " + order + ", " + Thread.currentThread().getId());
            Thread.sleep(10 * level.intValue());
            if (level.get() < 2) doIt();
            Thread.yield();
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("--- " + Thread.currentThread().getName() + ": recruted " + level + ", " + order + ", " + Thread.currentThread().getId());
        originCountDownLatch.countDown(); // count down
    }

    public static void main(String args[]) {
        AtomicInteger levelzero = new AtomicInteger(0);
        HRManagerCompact hr = new HRManagerCompact(null, "zero", levelzero, new AtomicLong(levelzero.longValue()));
        hr.doIt();
    }
}
Possible annotated output (one of several possible interleavings):
first: working... 1, 1, 10 // thread 1, first daughter's task (10)
second: working... 1, 1, 11 // thread 1, second daughter's task (11)
first: working... 2, 10, 12 // thread 10, first daughter's task (12)
first: working... 2, 11, 14 // thread 11, first daughter's task (14)
second: working... 2, 11, 15 // thread 11, second daughter's task (15)
second: working... 2, 10, 13 // thread 10, second daughter's task (13)
--- first: recruted 2, 10, 12 // finished 12
--- first: recruted 2, 11, 14 // finished 14
--- second: recruted 2, 10, 13 // finished 13 (now can be opened latch 10)
--- second: recruted 2, 11, 15 // finished 15 (now can be opened latch 11)
*** HR Manager waiting for recruitment to complete... 0, 0, 1
*** HR Manager waiting for recruitment to complete... 1, 1, 10
*** Distribute Offer Letter, it means finished. 1, 1, 10 // latch on 10 opened
--- first: recruted 1, 1, 10 // finished 10
*** HR Manager waiting for recruitment to complete... 1, 1, 11
*** Distribute Offer Letter, it means finished. 1, 1, 11 // latch on 11 opened
--- second: recruted 1, 1, 11 // finished 11 (now can be opened latch 1)
*** Distribute Offer Letter, it means finished. 0, 0, 1 // latch on 1 opened

Use CountDownLatch.
Pass the CountDownLatch object to each of your tasks and code your tasks something like below.
public void doTask() {
    // do your task
    latch.countDown();
}
Whereas the thread which needs to wait should execute the following code:
public void doWait() {
    latch.await();
}
But of course, this assumes you already know the number of child tasks, so that you can initialize the latch's count.
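A short sketch of that setup; the executor and task names are placeholders, the count must match the number of tasks, and the enclosing method is assumed to declare throws InterruptedException:
int childTaskCount = 5;                                    // must be known in advance
CountDownLatch latch = new CountDownLatch(childTaskCount);

for (int i = 0; i < childTaskCount; i++) {
    executor.submit(task);   // each task ends by calling latch.countDown(), as in doTask() above
}

latch.await();               // the doWait() side: blocks until the count reaches zero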

The only inelegant solution I could come up with is to directly use a ThreadPoolExecutor and query its getPoolSize() every once in a while. Is there really no better way to do that?
You have to use the shutdown(), awaitTermination(), and shutdownNow() methods in a proper sequence.
shutdown(): Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted.
awaitTermination(): Blocks until all tasks have completed execution after a shutdown request, or the timeout occurs, or the current thread is interrupted, whichever happens first.
shutdownNow(): Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution.
Recommended way from oracle documentation page of ExecutorService:
void shutdownAndAwaitTermination(ExecutorService pool) {
    pool.shutdown(); // Disable new tasks from being submitted
    try {
        // Wait a while for existing tasks to terminate
        if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
            pool.shutdownNow(); // Cancel currently executing tasks
            // Wait a while for tasks to respond to being cancelled
            if (!pool.awaitTermination(60, TimeUnit.SECONDS))
                System.err.println("Pool did not terminate");
        }
    } catch (InterruptedException ie) {
        // (Re-)Cancel if current thread also interrupted
        pool.shutdownNow();
        // Preserve interrupt status
        Thread.currentThread().interrupt();
    }
}
You can replace the if condition with a while loop in case tasks take a long time to complete, as below:
Change
if (!pool.awaitTermination(60, TimeUnit.SECONDS))
To
while (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
    Thread.sleep(60000);
}
You can refer to other alternatives (except join(), which can be used with standalone threads) in:
wait until all threads finish their work in java

You could use a runner that keeps track of running threads:
Runner runner = Runner.runner(numberOfThreads);
runner.runIn(2, SECONDS, callable);
runner.run(callable);
// blocks until all tasks are finished (or failed)
runner.waitTillDone();
// and reuse it
runner.runRunnableIn(500, MILLISECONDS, runnable);
runner.waitTillDone();
// and then just kill it
runner.shutdownAndAwaitTermination();
To use it, you just add a dependency:
compile 'com.github.matejtymes:javafixes:1.3.0'

Multi threaded program using newFixedThreadPool doesn't run as expected when the thread pool size is less than the number of tasks executed

package com.playground.concurrency;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MyRunnable implements Runnable {

    private String taskName;

    public String getTaskName() {
        return taskName;
    }

    public void setTaskName(String taskName) {
        this.taskName = taskName;
    }

    private int processed = 0;

    public MyRunnable(String name) {
        this.taskName = name;
    }

    private boolean keepRunning = true;

    public boolean isKeepRunning() {
        return keepRunning;
    }

    public void setKeepRunning(boolean keepRunning) {
        this.keepRunning = keepRunning;
    }

    private BlockingQueue<Integer> elements = new LinkedBlockingQueue<Integer>(10);

    public BlockingQueue<Integer> getElements() {
        return elements;
    }

    public void setElements(BlockingQueue<Integer> elements) {
        this.elements = elements;
    }

    @Override
    public void run() {
        while (keepRunning || !elements.isEmpty()) {
            try {
                Integer element = elements.take();
                Thread.sleep(10);
                System.out.println(taskName + " :: " + elements.size());
                System.out.println("Got :: " + element);
                processed++;
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        System.out.println("Exiting thread");
    }

    public int getProcessed() {
        return processed;
    }

    public void setProcessed(int processed) {
        this.processed = processed;
    }
}
package com.playground.concurrency.service;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import com.playground.concurrency.MyRunnable;

public class TestService {

    public static void main(String[] args) throws InterruptedException {
        int roundRobinIndex = 0;
        int noOfProcess = 10;
        List<MyRunnable> processes = new ArrayList<MyRunnable>();
        for (int i = 0; i < noOfProcess; i++) {
            processes.add(new MyRunnable("Task : " + i));
        }
        ExecutorService threadPoolExecutor = Executors.newFixedThreadPool(5);
        for (MyRunnable process : processes) {
            threadPoolExecutor.execute(process);
        }
        int totalMessages = 1000;
        long start = System.currentTimeMillis();
        for (int i = 1; i <= totalMessages; i++) {
            processes.get(roundRobinIndex++).getElements().put(i);
            if (roundRobinIndex == noOfProcess) {
                roundRobinIndex = 0;
            }
        }
        System.out.println("Done putting all the elements");
        for (MyRunnable process : processes) {
            process.setKeepRunning(false);
        }
        threadPoolExecutor.shutdown();
        try {
            threadPoolExecutor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        long totalProcessed = 0;
        for (MyRunnable process : processes) {
            System.out.println("task " + process.getTaskName() + " processed " + process.getProcessed());
            totalProcessed += process.getProcessed();
        }
        long end = System.currentTimeMillis();
        System.out.println("total time" + (end - start));
    }
}
I have a simple task that reads elements from a LinkedBlockingQueue. I create multiple instances of this task and execute them with an ExecutorService. This program works as expected when noOfProcess and the thread pool size are the same (for example: noOfProcess=10 and thread pool size=10).
However, if noOfProcess=10 and the thread pool size is 5, then the main thread keeps waiting at the line below after processing a few items.
processes.get(roundRobinIndex++).getElements().put(i);
What am I doing wrong here?
Ah yes. The good old deadlock.
What happens is: you submit 10 tasks to the ExecutorService, and then send jobs via .put(i). This blocks for Task 5 as expected when its queue is full. Now Task 5 is not currently being executed, and as a matter of fact never will be, since Tasks 0 to 4 are still clogging up your FixedThreadPool, blocking at .take() in the run() method waiting for new jobs from .put(i), which they will never get.
This error is a fundamental design flaw within your code and there are myriad ways to fix it, one of which is increasing the thread pool size.
My suggestion is that you go back to the drawing board and rethink the structure in the main method.
And since you posted your code, have some tips:
1.:
Posting your entire code can be interpreted as a call to 'pls fix my code', and you are encouraged to omit all unnecessary details (like all those getters and setters). Maybe check https://stackoverflow.com/help/minimal-reproducible-example
2.:
Posting two classes in the same body made things kinda complicated. Split it next time.
3.: (nitpick)
processes.get(roundRobinIndex++).getElements().put(i);
Combining two operations like you did here is bad style since it makes your code less readable for others. You could just have written:
processes.get(i % noOfProcesses).getElements().put(i);
To fix the behavior, you need to do one of the following:
have enough Runnables, each with enough queue capacity to take all 1,000 messages (for example: 100 Runnables with capacity 10 or more; or 10 Runnables with capacity 100 or more), or
have a thread pool that is large enough to accomodate all of your Runnables so that each of them can start running.
Without one of those happening, the ExecutorService will not start the extra Runnables. The main worker thread will continue adding items to each queue, including those of non-running Runnables, until it encounters a queue that is full, at which point it blocks. With 10 Runnables and a thread pool size of 5, the first queue to fill up will be that of the 6th Runnable. This would be the same if you had just 6 Runnables. The significant point is that you have at least one more Runnable than you have room for in your thread pool.
From newFixedThreadPool() Javadoc:
If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available.
Consider a simpler example of 2 processes and thread pool size of 1. You'll be allowed to create the first process and submit it to the ExecutorService (so the ExecutorService will start and run it). The second process however, will not be allowed to run by the ExecutorService. Your main thread does not pay attention to this, however, and it will continue putting elements into the queue for the second process even though nothing is consuming it.
Your code is ok with noOfProcess=10 and thread pool size=5 – if you also change your queue size to 100, like this: new LinkedBlockingQueue<>(100).
You can observe this behavior – where the queue of a non-running Runnable fills up – if you change this line:
processes.get(roundRobinIndex++).getElements().put(i);
to this (which is the same logical code, but has object references saved for use inside the println() output):
MyRunnable runnable = processes.get(roundRobinIndex++);
BlockingQueue<Integer> elements = runnable.getElements();
System.out.println("attempt to put() for " + runnable.getTaskName() + " with " + elements.size() + " elements");
elements.put(i);

How does interrupting a future work with single thread executors?

How does Executors.newSingleThreadExecutor() behave if I am frequently scheduling tasks to run that are being cancelled with future.cancel(true)?
Does the single thread spawned by the executor get interrupted (so the future code needs to clear the interrupt), or does the interrupt flag get automatically cleared when the next future starts up?
Does the Executor need to spawn an additional thread on every interrupt to be used by the remaining task queue?
Is there a better way?
Good question, I don't find this documented anywhere, so I would say it is implementation dependent.
For example OpenJDK does reset the interrupted flag before every executed task:
// If pool is stopping, ensure thread is interrupted;
// if not, ensure thread is not interrupted. This
// requires a recheck in second case to deal with
// shutdownNow race while clearing interrupt
if ((runStateAtLeast(ctl.get(), STOP) ||
     (Thread.interrupted() &&
      runStateAtLeast(ctl.get(), STOP))) &&
    !wt.isInterrupted())
    wt.interrupt();
Snippet from the OpenJDK jdk8u ThreadPoolExecutor#runWorker source.
The following sample program demonstrates that interrupt() is called on the thread if you call the cancel method with true. You can even see that it is reusing the same thread. cancel() returns a boolean which indicates whether the cancellation was successful. The javadoc of this method is also clear enough.
class Task implements Callable<String> {
    @Override
    public String call() throws Exception {
        try {
            System.out.println("Thread name = " + Thread.currentThread().getName());
            Thread.sleep(Integer.MAX_VALUE);
        } catch (InterruptedException e) {
            System.out.println("Interrupted");
            return "Interrupted";
        }
        return "X";
    }
}

public class Testy {

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        int count = 0;
        while (true) {
            System.out.println("Iteration " + count++);
            Future<String> submit = executorService.submit(new Task());
            Thread.sleep(500);
            submit.cancel(true);
        }
    }
}
Output looks like below
Iteration 0
Thread name = pool-1-thread-1
Iteration 1
Interrupted
Thread name = pool-1-thread-1
Iteration 2
Interrupted

Java Concurrency in Practice: race condition in BoundedExecutor?

There's something odd about the implementation of the BoundedExecutor in the book Java Concurrency in Practice.
It's supposed to throttle task submission to the Executor by blocking the submitting thread when there are enough threads either queued or running in the Executor.
This is the implementation (after adding the missing rethrow in the catch clause):
public class BoundedExecutor {
    private final Executor exec;
    private final Semaphore semaphore;

    public BoundedExecutor(Executor exec, int bound) {
        this.exec = exec;
        this.semaphore = new Semaphore(bound);
    }

    public void submitTask(final Runnable command) throws InterruptedException, RejectedExecutionException {
        semaphore.acquire();
        try {
            exec.execute(new Runnable() {
                @Override public void run() {
                    try {
                        command.run();
                    } finally {
                        semaphore.release();
                    }
                }
            });
        } catch (RejectedExecutionException e) {
            semaphore.release();
            throw e;
        }
    }
}
When I instantiate the BoundedExecutor with an Executors.newCachedThreadPool() and a bound of 4, I would expect the number of threads instantiated by the cached thread pool to never exceed 4. In practice, however, it does. I've gotten this little test program to create as many as 11 threads:
public static void main(String[] args) throws Exception {
    class CountingThreadFactory implements ThreadFactory {
        int count;
        @Override public Thread newThread(Runnable r) {
            ++count;
            return new Thread(r);
        }
    }
    List<Integer> counts = new ArrayList<Integer>();
    for (int n = 0; n < 100; ++n) {
        CountingThreadFactory countingThreadFactory = new CountingThreadFactory();
        ExecutorService exec = Executors.newCachedThreadPool(countingThreadFactory);
        try {
            BoundedExecutor be = new BoundedExecutor(exec, 4);
            for (int i = 0; i < 20000; ++i) {
                be.submitTask(new Runnable() {
                    @Override public void run() {}
                });
            }
        } finally {
            exec.shutdown();
        }
        counts.add(countingThreadFactory.count);
    }
    System.out.println(Collections.max(counts));
}
I think there's a tiny little time frame between the release of the semaphore and the task ending, where another thread can acquire a permit and submit a task while the releasing thread hasn't finished yet. In other words, it has a race condition.
Can someone confirm this?
BoundedExecutor was indeed intended as an illustration of how to throttle task submission, not as a way to place a bound on thread pool size. There are more direct ways to achieve the latter, as at least one comment pointed out.
But the other answers don't mention the text in the book that says to use an unbounded queue and to
set the bound on the semaphore to be equal to the pool size plus the
number of queued tasks you want to allow, since the semaphore is
bounding the number of tasks both currently executing and awaiting
execution. [JCiP, end of section 8.3.3]
By mentioning unbounded queues and pool size, we were implying (apparently not very clearly) the use of a thread pool of bounded size.
What has always bothered me about BoundedExecutor, however, is that it doesn't implement the ExecutorService interface. A modern way to achieve similar functionality and still implement the standard interfaces would be to use Guava's listeningDecorator method and ForwardingListeningExecutorService class.
You are correct in your analysis of the race condition. There are no synchronization guarantees between the ExecutorService and the Semaphore.
However, I do not know whether throttling the number of threads is what the BoundedExecutor is used for. I think it is more for throttling the number of tasks submitted to the service. Imagine you have 5 million tasks that need to be submitted; if you submit more than 10,000 of them at once you run out of memory.
Well, you will only ever have 4 threads running at any given time, so why would you want to try to queue up all 5 million tasks? You can use a construct like this to throttle the number of tasks queued up at any given time. What you should get out of this is that at any given time only 4 tasks are running.
Obviously the resolution to this is to use Executors.newFixedThreadPool(4).
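Combined with the book's advice quoted above (semaphore bound = pool size plus the queue backlog you want to allow), a rough sketch of that setup might look like the following; the concrete numbers are illustrative:
// e.g. at most 4 threads running and up to 10 tasks queued (bound = 4 + 10), per JCiP 8.3.3
ExecutorService pool = Executors.newFixedThreadPool(4);
BoundedExecutor bounded = new BoundedExecutor(pool, 4 + 10);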
I see as many as 9 threads created at once. I suspect there is a race condition which causes more threads to be created than required.
This could be because there is work to be done before and after the task runs. This means that even though only 4 threads are inside your block of code, a number of other threads are stopping a previous task or getting ready to start a new one.
i.e. the thread does a release() while it is still running. Even though it's the last thing you do, it's not the last thing the thread does before acquiring a new task.

Electing a thread for barrier action execution - Java CyclicBarrier

Looking at the javadocs for CyclicBarrier I found the following statement in the class documentation that I don't completely understand. From the javadoc:
If the barrier action does not rely on the parties being suspended when it is executed, then any of the threads in the party could execute that action when it is released. To facilitate this, each invocation of await() returns the arrival index of that thread at the barrier. You can then choose which thread should execute the barrier action, for example:
if (barrier.await() == 0) {
// log the completion of this iteration
}
Can someone explain how to designate a specific thread for execution of the barrier action once all the parties have called .await() and perhaps provide an example?
OK, pretend RuPaul wanted some worker threads, but only the 3rd one that finished is supposed to do the barrier task (Say "Sashay, Chante").
import java.util.Random;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;

public class Main
{
    private static class Worker implements Runnable {
        private CyclicBarrier barrier;

        public Worker(CyclicBarrier b) {
            barrier = b;
        }

        public void run() {
            final String threadName = Thread.currentThread().getName();
            System.out.printf("%s: You better work!%n", threadName);

            // simulate the workin' it part
            Random rnd = new Random();
            int secondsToWorkIt = rnd.nextInt(10) + 1;
            try {
                TimeUnit.SECONDS.sleep(secondsToWorkIt);
            } catch (InterruptedException ex) { /* ... */ }

            System.out.printf("%s worked it, girl!%n", threadName);
            try {
                int n = barrier.await();
                final int myOrder = barrier.getParties() - n;
                System.out.printf("Turn number: %s was %s%n", myOrder, threadName);
                // MAGIC CODE HERE!!!
                if (myOrder == 3) { // the third one that finished
                    System.out.printf("%s: Sashay Chante!%n", myOrder);
                }
                // END MAGIC CODE
            }
            catch (BrokenBarrierException ex) { /* ... */ }
            catch (InterruptedException ex) { /* ... */ }
        }
    }

    private final int numThreads = 5;

    public void work() {
        /*
         * I want the 3rd thread that finished to say "Sashay Chante!"
         * when everyone has called await.
         * So I'm not going to put my "barrier action" in the CyclicBarrier constructor,
         * where only the last thread will run it! I'm going to put it in the Runnable
         * that calls await.
         */
        CyclicBarrier b = new CyclicBarrier(numThreads);
        for (int i = 0; i < numThreads; i++) {
            Worker task = new Worker(b);
            Thread thread = new Thread(task);
            thread.start();
        }
    }

    public static void main(String[] args)
    {
        Main main = new Main();
        main.work();
    }
}
Here is an example of the output:
Thread-0: You better work!
Thread-4: You better work!
Thread-2: You better work!
Thread-1: You better work!
Thread-3: You better work!
Thread-1 worked it, girl!
Thread-4 worked it, girl!
Thread-0 worked it, girl!
Thread-3 worked it, girl!
Thread-2 worked it, girl!
Turn number: 5 was Thread-2
Turn number: 3 was Thread-0
3: Sashay Chante!
Turn number: 1 was Thread-1
Turn number: 4 was Thread-3
Turn number: 2 was Thread-4
As you can see, the thread that finished 3rd was Thread-0, so Thread-0 was the one that did the "barrier action".
Say you are able to name your threads:
thread.setName("My Thread " + i);
Then you can perform the action on the thread of that name...I don't know how feasible that is for you.
I think that section of the documentation is about an alternative to the barrier action Runnable, not a particular way of using it. Note how it says (emphasis mine):
If the barrier action does not rely on the parties being suspended when it is executed
If you specify a barrier action as a runnable, then it ...
is run once per barrier point, after the last thread in the party arrives, but before any threads are released
So, while the threads are suspended (although since it's run by the last thread to arrive, that one isn't suspended; but at least its normal flow of execution is suspended until the barrier action finishes).
The business about using the return value of await() is something you can do if you don't need your action to run while the threads are suspended.
The documentation's examples are indicative. The example using a Runnable barrier action is coordinating the work of some other threads - merging the rows and checking if the job is done. The other threads need to wait for it to know if they have more work to do. So, it has to run while they're suspended. The example using the return value from await() is some logging. The other threads don't depend on the logging having being done. So, it can happen while the other threads have started doing more work.
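For contrast, the barrier-action form described above looks like this; numThreads and the merge step are placeholders. The action runs once per trip, executed by the last thread to arrive, before any thread is released:
CyclicBarrier barrier = new CyclicBarrier(numThreads, new Runnable() {
    public void run() {
        // e.g. merge the partial results while all parties are still held at the barrier
    }
});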
CyclicBarrier enables designating a thread by ORDER:
Designating a thread that returns at a SPECIFIC order is possible if, as you say, you enclose the barrier completion logic in a conditional which is specific to a thread index. Thus, your implementation above will work according to the documentation you cited.
However, the point of confusion here is that the documentation is talking about thread identity in terms of the order of arrival at the barrier, rather than thread object identity. Thus, thread 0 refers to the 0th thread to complete.
Alternative: designating a thread using other mechanisms.
If you wanted a specific thread to carry out a specific action after the other work completes, you might use a different mechanism, like a semaphore, for example. If you desire this behavior, you may not really need the cyclic barrier.
To inspect what is meant by the documentation, run the class (modified from http://programmingexamples.wikidot.com/cyclicbarrier) below, where I've incorporated your snippet.
Example of what is meant by the docs for the CyclicBarrier
package thread;

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class CyclicBarrierExample
{
    private static int matrix[][] =
    {
        { 1 },
        { 2, 2 },
        { 3, 3, 3 },
        { 4, 4, 4, 4 },
        { 5, 5, 5, 5, 5 } };

    static final int rows = matrix.length;
    private static int results[] = new int[rows];
    static int threadId = 0;

    private static class Summer extends Thread
    {
        int row;
        CyclicBarrier barrier;

        Summer(CyclicBarrier barrier, int row)
        {
            this.barrier = barrier;
            this.row = row;
        }

        public void run()
        {
            int columns = matrix[row].length;
            int sum = 0;
            for (int i = 0; i < columns; i++)
            {
                sum += matrix[row][i];
            }
            results[row] = sum;
            System.out.println("Results for row " + row + " are : " + sum);

            // wait for the others
            // Try commenting the below block, and watch what happens.
            try
            {
                int w = barrier.await();
                if (w == 0)
                {
                    System.out.println("merging now !");
                    int fullSum = 0;
                    for (int i = 0; i < rows; i++)
                    {
                        fullSum += results[i];
                    }
                    System.out.println("Results are: " + fullSum);
                }
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }

    public static void main(String args[])
    {
        /*
         * public CyclicBarrier(int parties, Runnable barrierAction)
         * Creates a new CyclicBarrier that will trip when the given number
         * of parties (threads) are waiting upon it, and which will execute
         * the merger task when the barrier is tripped, performed
         * by the last thread entering the barrier.
         */
        CyclicBarrier barrier = new CyclicBarrier(rows);
        for (int i = 0; i < rows; i++)
        {
            System.out.println("Creating summer " + i);
            new Summer(barrier, i).start();
        }
        System.out.println("Waiting...");
    }
}

wait until all threads finish their work in java

I'm writing an application that has 5 threads that get some information from the web simultaneously and fill 5 different fields in a buffer class.
I need to validate the buffer data and store it in a database when all threads have finished their job.
How can I do this (get alerted when all threads have finished their work)?
The approach I take is to use an ExecutorService to manage pools of threads.
ExecutorService es = Executors.newCachedThreadPool();
for (int i = 0; i < 5; i++)
    es.execute(new Runnable() { /* your task */ });
es.shutdown();
boolean finished = es.awaitTermination(1, TimeUnit.MINUTES);
// all tasks have finished or the time has been reached.
You can join to the threads. The join blocks until the thread completes.
for (Thread thread : threads) {
    thread.join();
}
Note that join throws an InterruptedException. You'll have to decide what to do if that happens (e.g. try to cancel the other threads to prevent unnecessary work being done).
Have a look at various solutions.
The join() API was introduced in early versions of Java. Some good alternatives have been available in the java.util.concurrent package since the JDK 1.5 release.
ExecutorService#invokeAll()
Executes the given tasks, returning a list of Futures holding their status and results when everything is completed.
Refer to this related SE question for code example:
How to use invokeAll() to let all thread pool do their task?
CountDownLatch
A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately. This is a one-shot phenomenon -- the count cannot be reset. If you need a version that resets the count, consider using a CyclicBarrier.
Refer to this question for usage of CountDownLatch
How to wait for a thread that spawns it's own thread?
ForkJoinPool or newWorkStealingPool() in Executors
Iterate through all Future objects created after submitting to the ExecutorService (see the sketch after this list)
Wait/block the main thread until the other threads complete their work.
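As a rough sketch of the Future-iteration option mentioned above; the pool size and the tasks collection are placeholders, and checked exceptions are declared rather than handled:
static void waitForAll(ExecutorService pool, List<Runnable> tasks)
        throws InterruptedException, ExecutionException {
    List<Future<?>> futures = new ArrayList<Future<?>>();
    for (Runnable task : tasks) {
        futures.add(pool.submit(task));
    }
    for (Future<?> f : futures) {
        f.get();   // blocks until that task has finished (or rethrows its failure)
    }
}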
As @Ravindra babu said, it can be achieved in various ways; below are examples.
java.lang.Thread.join() (since 1.0)
public static void joiningThreads() throws InterruptedException {
    Thread t1 = new Thread(new LatchTask(1, null), "T1");
    Thread t2 = new Thread(new LatchTask(7, null), "T2");
    Thread t3 = new Thread(new LatchTask(5, null), "T3");
    Thread t4 = new Thread(new LatchTask(2, null), "T4");

    // Start all the threads
    t1.start();
    t2.start();
    t3.start();
    t4.start();

    // Wait till all threads complete
    t1.join();
    t2.join();
    t3.join();
    t4.join();
}
java.util.concurrent.CountDownLatch Since:1.5
.countDown() « Decrements the count of the latch group.
.await() « The await methods block until the current count reaches zero.
If you created latchGroupCount = 4, then countDown() should be called 4 times to make the count 0, so that await() will release the blocked threads.
public static void latchThreads() throws InterruptedException {
    int latchGroupCount = 4;
    CountDownLatch latch = new CountDownLatch(latchGroupCount);
    Thread t1 = new Thread(new LatchTask(1, latch), "T1");
    Thread t2 = new Thread(new LatchTask(7, latch), "T2");
    Thread t3 = new Thread(new LatchTask(5, latch), "T3");
    Thread t4 = new Thread(new LatchTask(2, latch), "T4");
    t1.start();
    t2.start();
    t3.start();
    t4.start();

    //latch.countDown();
    latch.await(); // block until latchGroupCount is 0.
}
Example code of the threaded class LatchTask. To test the approach, use joiningThreads()
and latchThreads() from the main method.
class LatchTask extends Thread {
    CountDownLatch latch;
    int iterations = 10;

    public LatchTask(int iterations, CountDownLatch latch) {
        this.iterations = iterations;
        this.latch = latch;
    }

    @Override
    public void run() {
        String threadName = Thread.currentThread().getName();
        System.out.println(threadName + " : Started Task...");

        for (int i = 0; i < iterations; i++) {
            System.out.println(threadName + " : " + i);
            MainThread_Wait_TillWorkerThreadsComplete.sleep(1);
        }
        System.out.println(threadName + " : Completed Task");

        // countDown() « Decrements the count of the latch group.
        if (latch != null)
            latch.countDown();
    }
}
CyclicBarriers: A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point. CyclicBarriers are useful in programs involving a fixed-sized party of threads that must occasionally wait for each other. The barrier is called cyclic because it can be re-used after the waiting threads are released.
CyclicBarrier barrier = new CyclicBarrier(3);
barrier.await();
For example, refer to this Concurrent_ParallelNotifyies class.
Executor framework: we can use ExecutorService to create a thread pool and track the progress of the asynchronous tasks with Future.
submit(Runnable) and submit(Callable) return a Future object. By using the future.get() method we can block the main thread until the worker threads complete their work.
invokeAll(...) - returns a list of Future objects via which you can obtain the results of the executions of each Callable.
Find an example of using the Runnable and Callable interfaces with the Executor framework.
See also:
Find out thread is still alive?
Apart from Thread.join() suggested by others, Java 5 introduced the executor framework. There you don't work with Thread objects. Instead, you submit your Callable or Runnable objects to an executor. There's a special executor that is meant to execute multiple tasks and return their results out of order. That's the ExecutorCompletionService:
ExecutorCompletionService<Object> executor =
        new ExecutorCompletionService<Object>(Executors.newFixedThreadPool(5));
for (Runnable yourRunnable : yourRunnables) {   // yourRunnables: your collection of tasks
    executor.submit(Executors.callable(yourRunnable));
}
Then you can repeatedly call take() until there are no more Future<?> objects to return, which means all of them are completed.
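A sketch of that take() loop follows, assuming the enclosing method declares throws InterruptedException and that you know how many tasks were submitted:
int submitted = yourRunnables.size();   // one take() per submitted task
for (int i = 0; i < submitted; i++) {
    executor.take();                    // blocks until some task completes, in completion order
}
// all submitted tasks have completed at this point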
Another thing that may be relevant, depending on your scenario is CyclicBarrier.
A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point. CyclicBarriers are useful in programs involving a fixed sized party of threads that must occasionally wait for each other. The barrier is called cyclic because it can be re-used after the waiting threads are released.
Another possibility is the CountDownLatch object, which is useful for simple situations: since you know in advance the number of threads, you initialize it with the relevant count and pass the reference of the object to each thread.
Upon completion of its task, each thread calls CountDownLatch.countDown() which decrements the internal counter. The main thread, after starting all others, should do the CountDownLatch.await() blocking call. It will be released as soon as the internal counter has reached 0.
Pay attention that with this object, an InterruptedException can be thrown as well.
You do
for (Thread t : new Thread[] { th1, th2, th3, th4, th5 })
    t.join();
After this for loop, you can be sure all threads have finished their jobs.
Store the Thread-objects into some collection (like a List or a Set), then loop through the collection once the threads are started and call join() on the Threads.
You can use the Thread#join method for this purpose.
Although not relevant to OP's problem, if you are interested in synchronization (more precisely, a rendez-vous) with exactly one thread, you may use an Exchanger
In my case, I needed to pause the parent thread until the child thread did something, e.g. completed its initialization. A CountDownLatch also works well.
I created a small helper method to wait for a few Threads to finish:
public static void waitForThreadsToFinish(Thread... threads) {
    try {
        for (Thread thread : threads) {
            thread.join();
        }
    }
    catch (InterruptedException e) {
        e.printStackTrace();
    }
}
An executor service can be used to manage multiple threads including status and completion. See http://programmingexamples.wikidot.com/executorservice
Try this; it will work.
Thread[] threads = new Thread[10];
List<Thread> allThreads = new ArrayList<Thread>();
for (Thread thread : threads) {
    if (null != thread) {
        if (thread.isAlive()) {
            allThreads.add(thread);
        }
    }
}
while (!allThreads.isEmpty()) {
    Iterator<Thread> ite = allThreads.iterator();
    while (ite.hasNext()) {
        Thread thread = ite.next();
        if (!thread.isAlive()) {
            ite.remove();
        }
    }
}
I had a similar problem and ended up using Java 8 parallelStream.
requestList.parallelStream().forEach(req -> makeRequest(req));
It's super simple and readable.
Behind the scenes it uses the JVM's default fork-join pool, which means that it will wait for all the threads to finish before continuing. For my case it was a neat solution, because it was the only parallelStream in my application. If you have more than one parallelStream running simultaneously, please read the link below.
More information about parallel streams here.
The existing answers said you could join() each thread.
But there are several ways to get the thread array / list:
Add the Thread into a list on creation.
Use ThreadGroup to manage the threads.
The following code uses the ThreadGroup approach. It creates a group first, then specifies that group in each thread's constructor when creating it, and later gets the thread array via ThreadGroup.enumerate().
Code
SyncBlockLearn.java
import org.testng.Assert;
import org.testng.annotations.Test;

/**
 * synchronized block - learn,
 *
 * @author eric
 * @date Apr 20, 2015 1:37:11 PM
 */
public class SyncBlockLearn {
    private static final int TD_COUNT = 5; // thread count
    private static final int ROUND_PER_THREAD = 100; // round for each thread,
    private static final long INC_DELAY = 10; // delay of each increase,

    // sync block test,
    @Test
    public void syncBlockTest() throws InterruptedException {
        Counter ct = new Counter();
        ThreadGroup tg = new ThreadGroup("runner");

        for (int i = 0; i < TD_COUNT; i++) {
            new Thread(tg, ct, "t-" + i).start();
        }

        Thread[] tArr = new Thread[TD_COUNT];
        tg.enumerate(tArr); // get threads,

        // wait all runner to finish,
        for (Thread t : tArr) {
            t.join();
        }

        System.out.printf("\nfinal count: %d\n", ct.getCount());
        Assert.assertEquals(ct.getCount(), TD_COUNT * ROUND_PER_THREAD);
    }

    static class Counter implements Runnable {
        private final Object lkOn = new Object(); // the object to lock on,
        private int count = 0;

        @Override
        public void run() {
            System.out.printf("[%s] begin\n", Thread.currentThread().getName());

            for (int i = 0; i < ROUND_PER_THREAD; i++) {
                synchronized (lkOn) {
                    System.out.printf("[%s] [%d] inc to: %d\n", Thread.currentThread().getName(), i, ++count);
                }
                try {
                    Thread.sleep(INC_DELAY); // wait a while,
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            System.out.printf("[%s] end\n", Thread.currentThread().getName());
        }

        public int getCount() {
            return count;
        }
    }
}
The main thread will wait for all threads in the group to finish.
I had a similar situation, where I had to wait till all child threads completed their execution; only then could I get the status result for each of them. Hence I needed to wait till all child threads completed.
Below is my code, where I did the multi-threading:
public static void main(String[] args) {
    List<RunnerPojo> testList = ExcelObject.getTestStepsList(); //.parallelStream().collect(Collectors.toList());
    int threadCount = ConfigFileReader.getInstance().readConfig().getParallelThreadCount();
    System.out.println("Thread count is : ========= " + threadCount); // 5
    ExecutorService threadExecutor = new DriverScript().threadExecutor(testList, threadCount);
    boolean isProcessCompleted = waitUntilCondition(() -> threadExecutor.isTerminated()); // Here I used the waitUntil condition
    if (isProcessCompleted) {
        testList.forEach(x -> {
            System.out.println("Test Name: " + x.getTestCaseId());
            System.out.println("Test Status : " + x.getStatus());
            System.out.println("======= Test Steps ===== ");
            x.getTestStepsList().forEach(y -> {
                System.out.println("Step Name: " + y.getDescription());
                System.out.println("Test caseId : " + y.getTestCaseId());
                System.out.println("Step Status: " + y.getResult());
                System.out.println("\n ============ ==========");
            });
        });
    }
}
The method below distributes the list for parallel processing:
// This method will split my list and run it in a parallel process with multiple threads
private ExecutorService threadExecutor(List<RunnerPojo> testList, int threadSize) {
    ExecutorService exec = Executors.newFixedThreadPool(threadSize);
    testList.forEach(tests -> {
        exec.submit(() -> {
            driverScript(tests);
        });
    });
    exec.shutdown();
    return exec;
}
This is my wait-until method: here you can wait until your condition is satisfied within a do-while loop. In my case I waited for some maximum timeout.
It keeps checking until threadExecutor.isTerminated() is true, with a polling period of 5 seconds.
static boolean waitUntilCondition(Supplier<Boolean> function) {
    Double timer = 0.0;
    Double maxTimeOut = 20.0;
    boolean isFound;
    do {
        isFound = function.get();
        if (isFound) {
            break;
        } else {
            try {
                Thread.sleep(5000); // Sleeping for 5 sec (main thread will sleep for 5 sec)
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            timer++;
            System.out.println("Waiting for condition to be true .. waited .." + timer * 5 + " sec.");
        }
    } while (timer < maxTimeOut + 1.0);
    return isFound;
}
Use this in your main thread: while(!executor.isTerminated());
Put this line of code after starting all the threads from the executor service. The main thread will only continue after all the threads started by the executor have finished. Make sure to call executor.shutdown() before the above loop.
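Putting that advice together gives roughly the following sketch, where the tasks collection is a placeholder. Note that this is a busy-wait that spins a core, so the awaitTermination() variants shown earlier in this thread are usually gentler on the CPU:
ExecutorService executor = Executors.newFixedThreadPool(5);
for (Runnable task : tasks) {
    executor.execute(task);
}
executor.shutdown();                  // must come before the loop below
while (!executor.isTerminated());     // spins until every task has finished
// ... main thread continues here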
