What is the Java equivalent of Golang's WaitGroup?

Golang has something called a WaitGroup, which is sort of like Java's CompletionService, CountDownLatch, or Semaphore, or some combination of them.
I'm not entirely sure how you would implement a WaitGroup in Java. I would imagine a custom CompletionService with some sort of poison message would be the route to go (since queues can't say when they are done), but perhaps there is a better concurrent data structure or lock?
EDIT: I posted a possible solution below using Semaphore that I think is more analogous than using Thread.join.

WaitGroup has an Add(delta) method that can be called after the WaitGroup has been created. CountDownLatch doesn't support this; the number of tasks needs to be specified in advance. The JDK 7 Phaser can be used instead in this case:
phaser.register = wg.Add(1)
phaser.arrive = wg.Done
phaser.await = wg.Wait

public class WaitGroup {

    private int jobs = 0;

    public synchronized void add(int i) {
        jobs += i;
    }

    public synchronized void done() {
        if (--jobs == 0) {
            notifyAll();
        }
    }

    public synchronized void await() throws InterruptedException {
        while (jobs > 0) {
            wait();
        }
    }
}

Thanks to @kostya's answer.
I wrote a WaitGroup class using Phaser:
public class WaitGroup {

    private final Phaser phaser = new Phaser(1);

    public void add() {
        phaser.register();
    }

    public void done() {
        phaser.arrive();
    }

    public void await() {
        phaser.arriveAndAwaitAdvance();
    }
}
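For illustration, here is a hypothetical usage sketch of the Phaser-backed class above; the pool size, task count, and the printed message are arbitrary choices of mine, not part of the original answer.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WaitGroupDemo {
    public static void main(String[] args) {
        final WaitGroup wg = new WaitGroup();            // the Phaser-backed class above
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            wg.add();                                    // wg.Add(1) before handing off the task
            pool.execute(new Runnable() {
                @Override public void run() {
                    try {
                        System.out.println("working");   // stand-in for real work
                    } finally {
                        wg.done();                       // wg.Done()
                    }
                }
            });
        }
        wg.await();                                      // wg.Wait(): returns once every done() has arrived
        pool.shutdown();
    }
}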

After looking at the Go docs and confirming that Semaphore won't break with an enormous number of permits, I think a Semaphore initialized with Integer.MAX_VALUE permits is the closest thing to a Go WaitGroup.
Thread.join is probably more similar to how you would use a WaitGroup with goroutines, since it deals with the cleanup of the threads; however, an isolated WaitGroup, just like a Semaphore, is agnostic of what increments it.
CountDownLatch doesn't work because you need to know a priori how many threads you want to run, and you cannot increment its count.
Assuming the semaphore is set to Integer.MAX_VALUE:
wg.Add(n) == semaphore.acquire(n)
wg.Done() == semaphore.release()
and in your thread where you want everything to stop:
wg.Wait() == semaphore.acquire(Integer.MAX_VALUE)
However, I'm not sure all the semantics carry over, so I'm not going to mark this correct for now.
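For what it's worth, here is a minimal sketch of that mapping as a class. The extra release in await() is my addition so the group can be waited on without permanently holding all permits; I have not verified every WaitGroup corner case against it.
import java.util.concurrent.Semaphore;

// Sketch: a WaitGroup backed by a Semaphore with Integer.MAX_VALUE permits.
public class SemaphoreWaitGroup {

    private final Semaphore sem = new Semaphore(Integer.MAX_VALUE);

    public void add(int n) throws InterruptedException {
        sem.acquire(n);                    // wg.Add(n): take n permits away
    }

    public void done() {
        sem.release();                     // wg.Done(): give one permit back
    }

    public void await() throws InterruptedException {
        sem.acquire(Integer.MAX_VALUE);    // wg.Wait(): only succeeds when all permits are back
        sem.release(Integer.MAX_VALUE);    // my addition: hand them back so the group can be reused
    }
}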

No, there is no 'CountDownLatch' in Go.
sync.WaitGroup may cover the 'wait for tasks to finish' use case, but its Add() does not happen-before Done().

Related

Is there better way to use wait/notify with connection with AtomicInteger

I have this kind of code:
public class RecursiveQueue {

    //@Inject
    private QueueService queueService;

    public static void main(String[] args) {
        RecursiveQueue test = new RecursiveQueue();
        test.enqueue(new Node("X"), true);
        test.enqueue(new Node("Y"), false);
        test.enqueue(new Node("Z"), false);
    }

    private void enqueue(final Node node, final boolean waitTillFinished) {
        final AtomicLong totalDuration = new AtomicLong(0L);
        final AtomicInteger counter = new AtomicInteger(0);

        AfterCallback callback = new AfterCallback() {
            @Override
            public void onFinish(Result result) {
                for (Node aNode : result.getChildren()) {
                    counter.incrementAndGet();
                    queueService.requestProcess(aNode, this);
                }

                totalDuration.addAndGet(result.getDuration());

                if (counter.decrementAndGet() <= 0) { // last one
                    System.out.println("Processing of " + node.toString() + " has finished in " + totalDuration.get() + " ms");
                    if (waitTillFinished) {
                        counter.notify();
                    }
                }
            }
        };

        counter.incrementAndGet();
        queueService.requestProcess(node, callback);

        if (waitTillFinished) {
            try {
                counter.wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
Imagine there is a queueService which uses a blocking queue and a few consumer threads to process nodes, i.e. it calls a DAO to fetch the children of each node (it's a tree).
So the requestProcess method just enqueues the node and does not block.
Is there some better/safer way to avoid using wait/notify in this sample?
According to some findings I could use a Phaser (but I am on Java 6) or conditions (but I'm not using locks).
There is no synchronized anything in your example. You mustn't call o.wait() or o.notify() except from within a synchronized(o) {...} block.
Your call to wait() is not in a loop. It may never happen in your JVM, but the language spec permits wait() to return prematurely (that's known as a spurious wakeup). More generally, it is good practice to always use a loop: it's a familiar design pattern, a while statement costs no more than an if, you need it because of the possibility of spurious wakeups, and you absolutely must have it in a multi-consumer situation, so you might as well always write it that way.
Since you must use synchronized blocks in order to use wait() and notify(), there is probably no reason to use Atomic anything.
This "recursive" thing seems awfully complicated, what with the callback adding more items to the queue. How deep can that go?
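For reference, here is a minimal sketch of what those rules add up to when applied to the counter in the question; the class and method names are hypothetical.
// Sketch (Java 6 compatible): a counter guarded by an explicit lock object,
// following the rules above: synchronized block, wait() in a loop, notifyAll().
class PendingTasks {

    private final Object lock = new Object();
    private int pending = 0;               // replaces the AtomicInteger

    void taskSubmitted() {
        synchronized (lock) { pending++; }
    }

    void taskFinished() {
        synchronized (lock) {
            if (--pending == 0) {
                lock.notifyAll();          // wake waiters only when everything is done
            }
        }
    }

    void awaitAll() throws InterruptedException {
        synchronized (lock) {
            while (pending > 0) {          // loop guards against spurious wakeups
                lock.wait();
            }
        }
    }
}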
I think you are looking for CountDownLatch.
You actually do use locks, or, let's put it this way, you should be using them if you try to use wait/notify, as James pointed out. Since you are bound to Java 1.6 and ForkJoin or Phaser are not available to you, the choice is either implementing wait/notify properly or using a Condition with an explicit lock. This is a matter of personal preference.
Another alternative is to try and restructure your algorithm so you first get to know the entire set of steps you would need to execute. It is not always possible though.
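For the Condition-with-explicit-lock route mentioned above, here is a comparable sketch (again with names of my choosing), which also works on Java 6.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: the same counter, but with an explicit ReentrantLock and Condition
// instead of synchronized/wait/notify.
class PendingTasksWithCondition {

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition allDone = lock.newCondition();
    private int pending = 0;

    void taskSubmitted() {
        lock.lock();
        try { pending++; } finally { lock.unlock(); }
    }

    void taskFinished() {
        lock.lock();
        try {
            if (--pending == 0) {
                allDone.signalAll();       // wake waiters only when everything is done
            }
        } finally {
            lock.unlock();
        }
    }

    void awaitAll() throws InterruptedException {
        lock.lock();
        try {
            while (pending > 0) {          // await() can also wake spuriously
                allDone.await();
            }
        } finally {
            lock.unlock();
        }
    }
}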

Java: is CountDownLatch threadsafe

In the docs for CountDownLatch I see something like:
public void run() {
    try {
        startSignal.await();
        doWork();
        doneSignal.countDown();
    } catch (InterruptedException ex) {} // return;
}
Here startSignal and doneSignal are CountDownLatch objects.
The docs don't mention anything about the class being thread-safe or not.
As it is designed to be used by multiple threads, it is fair to assume that it is thread-safe by most meanings of the term.
There is even a happens-before commitment (from your link):
Memory consistency effects: Until the count reaches zero, actions in a thread prior to calling countDown() happen-before actions following a successful return from a corresponding await() in another thread.
With reference to your specific question, 'What if two threads call countDown at the same time? Wouldn't it just do the count-down action only once, effectively?': no, two countDown() calls will be actioned every time.
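A toy sketch of that claim (not from the original answer): two threads count down the same latch of size 2, after which await() returns and the count reads zero.
import java.util.concurrent.CountDownLatch;

public class TwoCountDowns {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(2);
        Runnable worker = new Runnable() {
            @Override public void run() {
                done.countDown();      // each call is counted, even if they race
            }
        };
        new Thread(worker).start();
        new Thread(worker).start();
        done.await();                  // returns once both countDown() calls have landed
        System.out.println("count is now " + done.getCount()); // prints 0
    }
}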
Yes, the class, or rather the methods you call on a CountDownLatch object, are thread-safe.
In order to make operations such as countDown() and await() thread-safe, the implementation does not use synchronized blocks or methods. Rather, it uses a compare-and-swap (CAS) strategy.
Below is part of the source code, which demonstrates this:
// CountDownLatch.countDown() delegates to its internal AQS-based Sync:
sync.releaseShared(1);

// AbstractQueuedSynchronizer.releaseShared(int):
public final boolean releaseShared(int arg) {
    if (tryReleaseShared(arg)) {
        doReleaseShared();
        return true;
    }
    return false;
}

// CountDownLatch.Sync.tryReleaseShared(int):
protected boolean tryReleaseShared(int releases) {
    // Decrement count; signal when transition to zero
    for (;;) {
        int c = getState();
        if (c == 0)
            return false;
        int nextc = c - 1;
        if (compareAndSetState(c, nextc))
            return nextc == 0;
    }
}
The above is only part of the implementation; you can check the source of other methods, such as await(), as well.

Synchronized Block inside the run method

Does using a synchronized block inside the run method make any sense? I thought it does, as long as I'm using a relevant lock, not the instance of the Runnable containing this run method. Reading the answers to similar questions on Stack Overflow seemed to confirm this. I tried to write some simple code to test it, and the synchronized block inside the run method doesn't prevent data corruption:
public class Test {

    public Test() {
        ExecutorService es = Executors.newCachedThreadPool();
        for (int i = 0; i < 1000; i++) {
            es.execute(new Runnable() {
                @Override
                public void run() {
                    synchronized (lock) {
                        sum += 1;
                    }
                }
            });
        }
        es.shutdown();
        while (!es.isTerminated()) {
        }
    }

    private int sum = 0;
    private final Object lock = new Object();

    public static void main(String[] args) {
        Test t = new Test();
        System.out.println(t.sum);
    }
}
Why does this code generate incorrect results? Is it because of the synchronized block or some other mistake? I feel like I'm missing something basic here.
It's possible your executor encounters some sort of unexpected error. If that happens, you won't know it, because you are not getting any return value to check.
Try switching to submit() instead of execute() and store the list of Future instances the executor gives you. If the final sum is less than 1000, iterate the futures and get() each one; if an exception was raised, you'll see what happened with that particular runnable task.
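A rough sketch of that suggestion applied to the loop in the question, reusing the es, lock, and sum fields from the Test class above (and removing the need for the isTerminated() busy-wait); it also needs imports for List, ArrayList, Future, and ExecutionException.
// Sketch: collect Futures so a failed task surfaces its exception via get().
List<Future<?>> futures = new ArrayList<Future<?>>();
for (int i = 0; i < 1000; i++) {
    futures.add(es.submit(new Runnable() {
        @Override public void run() {
            synchronized (lock) {
                sum += 1;
            }
        }
    }));
}
for (Future<?> f : futures) {
    try {
        f.get();                              // rethrows anything the task threw
    } catch (ExecutionException e) {
        e.getCause().printStackTrace();       // shows what went wrong in that task
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
es.shutdown();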
Apart from your simple example, which looks OK, you should be careful with synchronization in Runnables: one Runnable can block while waiting for a resource that will only be released by another Runnable sitting later in the queue, which has not started yet and never will, because the waiting Runnable must finish first.
With enough worker threads executing the jobs this is less likely to occur, though.
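To illustrate that pitfall, here is a toy example of my own (not from the answer) where the only worker thread waits on a latch that can only be released by a task queued behind it, so the program hangs.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy illustration: with a single worker thread, the first task waits for a
// latch that only the second, still-queued task can release, so neither finishes.
public class QueueDeadlockDemo {
    public static void main(String[] args) {
        final CountDownLatch latch = new CountDownLatch(1);
        ExecutorService es = Executors.newSingleThreadExecutor();
        es.execute(new Runnable() {
            @Override public void run() {
                try {
                    latch.await();        // blocks the only worker thread forever
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        es.execute(new Runnable() {
            @Override public void run() {
                latch.countDown();        // never runs: the worker is stuck above
            }
        });
        es.shutdown();
    }
}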

Mutually exclusive methods

I am on my way to learning Java multithreaded programming. I have the following logic:
Suppose I have a class A:
class A {

    ConcurrentMap<K, V> map;

    public void someMethod1() {
        // operation 1 on map
        // operation 2 on map
    }

    public void someMethod2() {
        // operation 3 on map
        // operation 4 on map
    }
}
Now I don't need synchronization of the operations in "someMethod1" or "someMethod2". This means if there are two threads calling "someMethod1" at the same time, I don't need to serialize these operations (because the ConcurrentMap will do the job).
But I hope "someMethod1" and "someMethod2" are mutex of each other, which means when some thread is executing "someMethod1", another thread should wait to enter "someMethod2" (but another thread should be allowed to enter "someMethod1").
So, in short, is there a way that I can make "someMethod1" and "someMethod2" not mutex of themselves but mutex of each other?
I hope I stated my question clearly enough...
Thanks!
I tried a couple of approaches with higher-level constructs, but nothing quite came to mind. I think this may be an occasion to drop down to the low-level APIs:
EDIT: I actually think you're trying to set up a problem which is inherently tricky (see second to last paragraph) and probably not needed (see last paragraph). But that said, here's how it could be done, and I'll leave the color commentary for the end of this answer.
private int someMethod1Invocations = 0;
private int someMethod2Invocations = 0;

public void someMethod1() throws InterruptedException {
    synchronized (this) {
        // Wait for there to be no someMethod2 invocations -- but
        // don't wait on any someMethod1 invocations.
        // Once all someMethod2s are done, increment someMethod1Invocations
        // to signify that we're running, and proceed.
        while (someMethod2Invocations > 0)
            wait();
        someMethod1Invocations++;
    }

    // your code here

    synchronized (this) {
        // We're done with this method, so decrement someMethod1Invocations
        // and wake up any threads that were waiting for that to hit 0.
        someMethod1Invocations--;
        notifyAll();
    }
}

public void someMethod2() throws InterruptedException {
    // comments are all ditto the above
    synchronized (this) {
        while (someMethod1Invocations > 0)
            wait();
        someMethod2Invocations++;
    }

    // your code here

    synchronized (this) {
        someMethod2Invocations--;
        notifyAll();
    }
}
One glaring problem with the above is that it can lead to thread starvation. For instance, someMethod1() is running (and blocking someMethod2()s), and just as it's about to finish, another thread comes along and invokes someMethod1(). That proceeds just fine, and just as it finishes another thread starts someMethod1(), and so on. In this scenario, someMethod2() will never get a chance to run. That's actually not directly a bug in the above code; it's a problem with your very design needs, one which a good solution should actively work to solve. I think a fair AbstractQueuedSynchronizer could do the trick, though that is an exercise left to the reader. :)
Finally, I can't resist interjecting an opinion: given that ConcurrentHashMap operations are pretty darn quick, you might be better off just putting a single mutex around both methods and being done with it. So yes, threads will have to queue up to invoke someMethod1(), but each thread will finish its turn (and thus let other threads proceed) extremely quickly. It shouldn't be a problem.
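A minimal sketch of that single-mutex suggestion, with a class name of my own and the map operations left as placeholders:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: serialize both methods behind one mutex. Callers of someMethod1 no
// longer run concurrently with each other, but each critical section is short.
class SerializedA<K, V> {

    private final Object mutex = new Object();
    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<K, V>();

    public void someMethod1() {
        synchronized (mutex) {
            // operation 1 on map
            // operation 2 on map
        }
    }

    public void someMethod2() {
        synchronized (mutex) {
            // operation 3 on map
            // operation 4 on map
        }
    }
}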
I think this should work
class A {

    Lock lock = new Lock();

    private static class Lock {
        int m1;
        int m2;
    }

    public void someMethod1() throws InterruptedException {
        synchronized (lock) {
            while (lock.m2 > 0) {
                lock.wait();
            }
            lock.m1++;
        }
        // someMethod1 and someMethod2 cannot be here simultaneously
        synchronized (lock) {
            lock.m1--;
            lock.notifyAll();
        }
    }

    public void someMethod2() throws InterruptedException {
        synchronized (lock) {
            while (lock.m1 > 0) {
                lock.wait();
            }
            lock.m2++;
        }
        // someMethod1 and someMethod2 cannot be here simultaneously
        synchronized (lock) {
            lock.m2--;
            lock.notifyAll();
        }
    }
}
This probably can't work (see comments) - leaving it for information.
One way would be to use Semaphores:
one semaphore sem1, with one permit, linked to method1
one semaphore sem2, with one permit, linked to method2
when entering method1, try to acquire sem2's permit, and if available release it immediately.
See this post for an implementation example.
Note: in your code, even if ConcurrentMap is thread safe, operation 1 and operation 2 (for example) are not atomic - so it is possible in your scenario to have the following interleaving:
Thread 1 runs operation 1
Thread 2 runs operation 1
Thread 2 runs operation 2
Thread 1 runs operation 2
First of all: your map is thread-safe, as it is a ConcurrentMap. This means that operations on this map, like add, contains, etc., are thread-safe.
Secondly, this doesn't guarantee that your methods (someMethod1 and someMethod2) are also thread-safe. So your methods are not mutually exclusive, and two threads can access them at the same time.
Now you want these to be mutually exclusive of each other: one approach could be to put all the operations (operation 1 ... operation 4) in a single method and call each based on a condition.
I don't think you can do this without a custom synchronizer. I whipped one up; I called it TrafficLight, since it allows threads with a particular state to pass while halting others, until the state changes:
public class TrafficLight<T> {

    private final int maxSequence;
    private final ReentrantLock lock = new ReentrantLock(true);
    private final Condition allClear = lock.newCondition();
    private int registered;
    private int leftInSequence;
    private T openState;

    public TrafficLight(int maxSequence) {
        this.maxSequence = maxSequence;
    }

    public void acquire(T state) throws InterruptedException {
        lock.lock();
        try {
            while ((this.openState != null && !this.openState.equals(state)) || leftInSequence == maxSequence) {
                allClear.await();
            }
            if (this.openState == null) {
                this.openState = state;
            }
            registered++;
            leftInSequence++;
        } finally {
            lock.unlock();
        }
    }

    public void release() {
        lock.lock();
        try {
            registered--;
            if (registered == 0) {
                openState = null;
                leftInSequence = 0;
                allClear.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }
}
acquire() will block if another state is active, until it becomes inactive.
The maxSequence is there to help prevent thread starvation, allowing only a maximum number of threads to pass in sequence (then they'll have to queue like the others). You could make a variant that uses a time window instead.
For your problem someMethod1() and someMethod2() would call acquire() with a different state each at the start, and release() at the end.
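Here is one possible wiring of the original class to this synchronizer; the state strings, the maxSequence value, and the class name are my choices, and this is an untested sketch.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: guard the two methods with the TrafficLight above, using a per-method
// state so same-method callers can overlap but the two methods cannot.
class GuardedA<K, V> {

    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<K, V>();
    private final TrafficLight<String> light = new TrafficLight<String>(64); // maxSequence is arbitrary

    public void someMethod1() throws InterruptedException {
        light.acquire("method1");
        try {
            // operation 1 on map
            // operation 2 on map
        } finally {
            light.release();
        }
    }

    public void someMethod2() throws InterruptedException {
        light.acquire("method2");
        try {
            // operation 3 on map
            // operation 4 on map
        } finally {
            light.release();
        }
    }
}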

java concurrency: lightweight nonblocking semaphore?

I have a situation where I have a callback that I want to execute once. For the sake of argument let's say it looks like this:
final X once = new X(1);
Runnable r = new Runnable() {
    @Override public void run() {
        if (once.use())
            doSomething();
    }
};
where X is some concurrent object with the following behavior:
constructor: X(int N) -- allocates N use permits
boolean use(): If there is at least 1 use permit, consume one of them and return true. Otherwise return false. This operation is atomic with respect to multiple threads.
I know I can use java.util.concurrent.Semaphore for this, but I don't need the blocking/waiting aspect of it, and I want this to be a one-time use thing.
AtomicInteger doesn't look sufficient unless I do something like
class NTimeUse {

    private final AtomicInteger count;

    public NTimeUse(int N) {
        this.count = new AtomicInteger(N);
    }

    public boolean use() {
        while (true) {
            int n = this.count.get();
            if (n == 0)
                return false;
            if (this.count.compareAndSet(n, n - 1))
                return true;
        }
    }
}
and I feel queasy about the while loop.
CountDownLatch won't work, because the countDown() method has no return value and can't be executed atomically w/r/t getCount().
Should I just use Semaphore or is there a more appropriate class?
In the case of a single permit you can use AtomicBoolean:
final AtomicBoolean once = new AtomicBoolean(true);
Runnable r = new Runnable() {
    @Override public void run() {
        if (once.getAndSet(false))
            doSomething();
    }
};
If you need many permits, use your solution with compareAndSet(). Don't worry about the loop; getAndIncrement() works the same way under the covers.
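For reference, in older JDKs getAndIncrement() is itself a CAS retry loop of essentially this shape (paraphrased from the pre-Java-8 AtomicInteger, not an exact copy of the source):
// Same retry pattern as the use() method above: read, compute, compareAndSet, repeat.
public final int getAndIncrement() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return current;
    }
}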
Yes, AtomicInteger is non-blocking. You can use getAndDecrement().
You can use something like
if (counter.getAndDecrement() > 0) {
    // something
} else {
    counter.set(0);
}
This will work provided you don't call it two billion times between the decrement and the set, i.e. you would need two billion threads to stop between these two statements.
Again you can use AtomicLong for extra paranoia.
// This implements an unfair locking scheme:
while (mayContinue()) {
    // acquire the permit and check if it was legally obtained
    if (counter.decrementAndGet() > 0)
        return true;

    // return the illegally acquired permit
    counter.incrementAndGet();
}
return false;
Setting the counter back to zero if you discover the permit was illegally obtained creates a race condition when another thread releases a permit. This only works for situations where there are 2 or 3 threads at most. Some other backoff or latching mechanism needs to be added if you have more.
