I am trying to understand a particular detail of the ReentrantLock::lock method. Looking at the source, I see:
final void lock() {
    if (!initialTryLock()) {
        acquire(1);
    }
}
So it first tries initialTryLock (I will look at the NonfairSync version), which does this:
it does a compareAndSetState(0, 1), meaning if no one holds the lock (state 0) and I can grab it (set the state to 1), I hold the lock now.
if the above fails, it checks whether the thread requesting the lock is already the owner (a reentrant acquire).
if that also fails, it returns false, meaning I could not acquire the lock.
Let's assume the above failed. It then goes on and calls acquire in AbstractQueuedSynchronizer:
public final void acquire(int arg) {
    if (!tryAcquire(arg))
        acquire(null, arg, false, false, false, 0L);
}
It calls tryAcquire first in NonfairSync:
protected final boolean tryAcquire(int acquires) {
    if (getState() == 0 && compareAndSetState(0, acquires)) {
        setExclusiveOwnerThread(Thread.currentThread());
        return true;
    }
    return false;
}
You can see that it tries to acquire the lock again, even though initialTryLock has already failed. In theory, this tryAcquire could have simply returned false, right?
I see this as a potential retry, because between the calls to initialTryLock and tryAcquire the lock might have been released. The benefit would be that the step following a failed tryAcquire is the expensive enqueueing of this thread, so one more cheap retry before paying that cost seems worthwhile. Is that why it is done this way?
Just to add to the answer above.
tryAcquire could have simply returned false, right?
No.
This implementation:
boolean tryAcquire(int acquires) {
    return false;
}
would break the work of AbstractQueuedSynchronizer.
The reason is that tryAcquire() is the only way to take the lock in AbstractQueuedSynchronizer.
Even acquire() in the end uses tryAcquire().
So if tryAcquire() always returned false then acquire() would never acquire the lock.
And acquire() is used when several threads contend for the lock.
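To see this concretely, here is a hypothetical demo class (not JDK code): a synchronizer whose tryAcquire() always fails. Calling acquire(1) on it parks the calling thread forever, even though nobody holds the "lock":
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Hypothetical demo: because tryAcquire() never succeeds, AQS's acquire()
// has no way to ever take the lock, so the caller stays parked.
class AlwaysFalseSync extends AbstractQueuedSynchronizer {
    @Override
    protected boolean tryAcquire(int acquires) {
        return false; // never grants the lock
    }
}

// new AlwaysFalseSync().acquire(1); // blocks (parks) forever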
initialTryLock() contains reentrancy functionality:
javadoc:
/**
* Checks for reentrancy and acquires if lock immediately
* available under fair vs nonfair rules. Locking methods
* perform initialTryLock check before relaying to
* corresponding AQS acquire methods.
*/
abstract boolean initialTryLock();
source code in NonfairSync:
final boolean initialTryLock() {
    Thread current = Thread.currentThread();
    if (compareAndSetState(0, 1)) { // first attempt is unguarded
        setExclusiveOwnerThread(current);
        return true;
    } else if (getExclusiveOwnerThread() == current) {
        int c = getState() + 1;
        if (c < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(c);
        return true;
    } else
        return false;
}
Here:
the 1st if takes the lock if it is free (a CAS of the state from 0 to 1)
the 2nd if checks if the taken lock belongs to the current thread — this is the reentrancy logic.
tryAcquire() must be implemented by any class extending AbstractQueuedSynchronizer that supports exclusive acquisition
what the method must do is described in its javadoc in AbstractQueuedSynchronizer
/**
* Attempts to acquire in exclusive mode. This method should query
* if the state of the object permits it to be acquired in the
* exclusive mode, and if so to acquire it.
* ...
*/
protected boolean tryAcquire(int arg) {
    throw new UnsupportedOperationException();
}
The implementation in NonfairSync does exactly that (and contains no reentrancy logic, because initialTryLock has already handled that case):
/**
* Acquire for non-reentrant cases after initialTryLock prescreen
*/
protected final boolean tryAcquire(int acquires) {
    if (getState() == 0 && compareAndSetState(0, acquires)) {
        setExclusiveOwnerThread(Thread.currentThread());
        return true;
    }
    return false;
}
Related
I'm writing this class that simulates a barrier point. When a thread reaches this barrier point it cannot proceed until the other threads have also reached this point. I am using a counter to keep track of the number of threads that have arrived at this point. Assume that the class is expecting N+1 threads, but is only given N threads. In this case the program will keep all the threads waiting because it thinks that there is still one more thread to arrive.
I want to write a method that will allow me to free all of the waiting threads regardless of whether or not the program thinks there are still more threads to arrive at the barrier point.
My program to wait for all threads:
public volatile int count;
public static boolean cycle = false;
public static Lock lock = new ReentrantLock();
public static Condition cv = lock.newCondition();

public void barrier() throws InterruptedException {
    boolean cycle;
    System.out.println("lock");
    lock.lock();
    try {
        cycle = this.cycle;
        if (--this.count == 0) {
            System.out.println("releasing all threads");
            this.cycle = !this.cycle;
            cv.signalAll();
        } else {
            while (cycle == this.cycle) {
                System.out.println("waiting at barrier");
                cv.await(); // Line 20
            }
        }
    } finally {
        System.out.println("unlock");
        lock.unlock();
    }
}
I was thinking I could simply create a method that calls the signalAll() method and all the threads would be free. However, a problem I am having is that if the program is expecting more threads it will maintain a lock because it will be waiting at line 20.
Is there a way to get around this lock? How should I approach this problem?
A better idea is to use the standard java.util.concurrent primitive CyclicBarrier, which has a reset() method:
/**
* Resets the barrier to its initial state. If any parties are
* currently waiting at the barrier, they will return with a
* {@link BrokenBarrierException}. Note that resets <em>after</em>
* a breakage has occurred for other reasons can be complicated to
* carry out; threads need to re-synchronize in some other way,
* and choose one to perform the reset. It may be preferable to
* instead create a new barrier for subsequent use.
*/
public void reset()
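A minimal sketch of how that could look for your case (the class and method names here are just suggestions, not from your code): threads wait on the barrier instead of a hand-rolled count/Condition, and a "free everything" method simply resets the barrier, which makes every waiting thread return with a BrokenBarrierException.
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class ResettableBarrier {

    private final CyclicBarrier barrier;

    public ResettableBarrier(int parties) {
        this.barrier = new CyclicBarrier(parties);
    }

    public void barrier() throws InterruptedException {
        try {
            barrier.await(); // blocks until 'parties' threads have arrived
        } catch (BrokenBarrierException e) {
            // we were released early by releaseAll(); treat as "freed"
        }
    }

    // Frees all currently waiting threads, even if fewer than 'parties' arrived.
    public void releaseAll() {
        barrier.reset(); // waiters return with BrokenBarrierException
    }
}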
I am working with the classic producer/consumer problem, and I have an issue that is driving me around the bend: how to avoid race conditions when inserting into/removing from a circular buffer. I appreciate any help in advance!
Sample code for circular buffer for example purposes. Similar to my implementation (Note: I cannot use collection types, only arrays for this):
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class BoundedBuffer {

    private final String[] buffer;
    private final int capacity;
    private int front;
    private int rear;
    private int count;

    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedBuffer(int capacity) {
        super();
        this.capacity = capacity;
        buffer = new String[capacity];
    }

    public void deposit(String data) throws InterruptedException {
        lock.lock();
        try {
            while (count == capacity) {
                notFull.await();
            }
            buffer[rear] = data;
            rear = (rear + 1) % capacity;
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public String fetch() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();
            }
            String result = buffer[front];
            front = (front + 1) % capacity;
            count--;
            notFull.signal();
            return result;
        } finally {
            lock.unlock();
        }
    }
}
What I need to know is: how can I implement a method for checking whether the buffer is full/empty? This method needs to be included in BoundedBuffer and must be called from another class (Producer/Consumer) before giving the go-ahead for the insert/fetch methods.
Pseudocode for method in Producer class.
if (!bufferFull) {
buffer.addelement;
}
else {
thread.sleep(5)
threadHasSleptFor++;
}
I am using threads, and there are multiple producers/consumers (In this case 2 producers/consumers, but I may require more). I need it so that if the buffer is full, the thread has to wait until it becomes available for insertion, and the time it waits needs to be stored for output purposes (Not debug, part of the core features). The issue I am having is this:
Thread 1 (producer) checks the bufferFull condition; it's false.
The scheduler switches to Thread 2 midway.
Thread 2 also checks the bufferFull condition; it's false.
Thread 2 proceeds to insert.
The scheduler switches back to Thread 1.
Thread 1 now goes on to the insert line, since it already checked, but Thread 2 beat it to it.
Boom.
I am somewhat new to Java, though as I understand it this is the "time-of-check/time-of-use" race condition issue.
Can someone please advise how this can be implemented safely, and how I would loop the code so the threadHasSleptFor variable keeps incrementing on every failed attempt (providing the methods would be great)? I want only the thread that performed the check to be able to insert the item; the second producer must wait for the lock.
Thanks.
This is by definition impossible to do without higher level locking.
You have to guarantee that the check of whether the buffer is full and the following insert are atomic from the thread's perspective, which means you have to acquire some common lock around both. This general problem is indeed called time-of-check to time-of-use (TOCTOU) and leads to many interesting race conditions down the line.
The solution to these problems is not to check whether you can do an operation and then do it, but to just try the operation and handle the failure case. So if you don't want deposit to block when the buffer is full, just implement a tryDeposit method that throws an exception if it can't store a value, or returns a boolean success value.
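For example, a non-blocking variant added to the BoundedBuffer above could look roughly like this (the name tryDeposit is only a suggestion):
// Sketch only: returns false instead of blocking when the buffer is full.
public boolean tryDeposit(String data) {
    lock.lock();
    try {
        if (count == capacity) {
            return false; // buffer full; the caller decides what to do (sleep, count, retry)
        }
        buffer[rear] = data;
        rear = (rear + 1) % capacity;
        count++;
        notEmpty.signal();
        return true;
    } finally {
        lock.unlock();
    }
}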
Although in your case, if you only have to record how long it took before you could push the value into the buffer, I don't see why a simple:
long start = System.nanoTime();
queue.deposit(data);
long end = System.nanoTime();
wouldn't do the trick as well.
If I understand you correctly, you are asking how to make a thread wait until it's OK to call deposit() or wait until it's OK to call fetch(). But, there's no need for that. Your deposit() method will block the calling thread until there is room in the queue, and your fetch() method will block the caller until there is something to fetch. That's what the notFull.await() and notEmpty.await() calls do.
await() unlocks the lock, sleeps until the condition is signalled by another thread, and then it re-locks the lock. The condition may or may not still be true when the caller finally gets the lock again, but that's why you have the await() calls in loops, so that the thread keeps trying until finally it has the lock and the condition is true. Then it does its work (add an item or remove an item), unlocks the lock, and returns.
I am on my way to learning Java multithreaded programming. I have the following logic:
Suppose I have a class A
class A {
    ConcurrentMap<K, V> map;

    public void someMethod1() {
        // operation 1 on map
        // operation 2 on map
    }

    public void someMethod2() {
        // operation 3 on map
        // operation 4 on map
    }
}
Now I don't need synchronization of the operations in "someMethod1" or "someMethod2". This means if there are two threads calling "someMethod1" at the same time, I don't need to serialize these operations (because the ConcurrentMap will do the job).
But I hope "someMethod1" and "someMethod2" are mutex of each other, which means when some thread is executing "someMethod1", another thread should wait to enter "someMethod2" (but another thread should be allowed to enter "someMethod1").
So, in short, is there a way that I can make "someMethod1" and "someMethod2" not mutex of themselves but mutex of each other?
I hope I stated my question clear enough...
Thanks!
I tried a couple of attempts with higher-level constructs, but nothing quite came to mind. I think this may be an occasion to drop down to the low-level APIs:
EDIT: I actually think you're trying to set up a problem which is inherently tricky (see second to last paragraph) and probably not needed (see last paragraph). But that said, here's how it could be done, and I'll leave the color commentary for the end of this answer.
private int someMethod1Invocations = 0;
private int someMethod2Invocations = 0;

public void someMethod1() throws InterruptedException {
    synchronized (this) {
        // Wait for there to be no someMethod2 invocations -- but
        // don't wait on any someMethod1 invocations.
        // Once all someMethod2s are done, increment someMethod1Invocations
        // to signify that we're running, and proceed.
        while (someMethod2Invocations > 0)
            wait();
        someMethod1Invocations++;
    }

    // your code here

    synchronized (this) {
        // We're done with this method, so decrement someMethod1Invocations
        // and wake up any threads that were waiting for that to hit 0.
        someMethod1Invocations--;
        notifyAll();
    }
}

public void someMethod2() throws InterruptedException {
    // comments are all ditto the above
    synchronized (this) {
        while (someMethod1Invocations > 0)
            wait();
        someMethod2Invocations++;
    }

    // your code here

    synchronized (this) {
        someMethod2Invocations--;
        notifyAll();
    }
}
One glaring problem with the above is that it can lead to thread starvation. For instance, someMethod1() is running (and blocking someMethod2()s), and just as it's about to finish, another thread comes along and invokes someMethod1(). That proceeds just fine, and just as it finishes another thread starts someMethod1(), and so on. In this scenario, someMethod2() will never get a chance to run. That's actually not directly a bug in the above code; it's a problem with your very design needs, one which a good solution should actively work to solve. I think a fair AbstractQueuedSynchronizer could do the trick, though that is an exercise left to the reader. :)
Finally, I can't resist but to interject an opinion: given that ConcurrentHashMap operations are pretty darn quick, you could be better off just putting a single mutex around both methods and just being done with it. So yes, threads will have to queue up to invoke someMethod1(), but each thread will finish its turn (and thus let other threads proceed) extremely quickly. It shouldn't be a problem.
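A minimal sketch of that simpler alternative, mirroring the question's class A (note that it also makes each method exclusive with itself, which is exactly the trade-off being argued for here):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class A<K, V> {
    private final Object mutex = new Object();
    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();

    public void someMethod1() {
        synchronized (mutex) {
            // operation 1 on map
            // operation 2 on map
        }
    }

    public void someMethod2() {
        synchronized (mutex) {
            // operation 3 on map
            // operation 4 on map
        }
    }
}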
I think this should work
class A {
    Lock lock = new Lock();

    private static class Lock {
        int m1;
        int m2;
    }

    public void someMethod1() throws InterruptedException {
        synchronized (lock) {
            while (lock.m2 > 0) {
                lock.wait();
            }
            lock.m1++;
        }

        // someMethod1 and someMethod2 cannot be here simultaneously

        synchronized (lock) {
            lock.m1--;
            lock.notifyAll();
        }
    }

    public void someMethod2() throws InterruptedException {
        synchronized (lock) {
            while (lock.m1 > 0) {
                lock.wait();
            }
            lock.m2++;
        }

        // someMethod1 and someMethod2 cannot be here simultaneously

        synchronized (lock) {
            lock.m2--;
            lock.notifyAll();
        }
    }
}
This probably can't work (see comments) - leaving it for information.
One way would be to use Semaphores:
one semaphore sem1, with one permit, linked to method1
one semaphore sem2, with one permit, linked to method2
when entering method1, try to acquire sem2's permit, and if available release it immediately.
See this post for an implementation example.
Note: in your code, even if ConcurrentMap is thread safe, operation 1 and operation 2 (for example) are not atomic - so it is possible in your scenario to have the following interleaving:
Thread 1 runs operation 1
Thread 2 runs operation 1
Thread 2 runs operation 2
Thread 1 runs operation 2
First of all: your map is thread safe since it is a ConcurrentMap. This means that individual operations on this map, like put, containsKey etc., are thread safe.
Secondly,
this doesn't guarantee that your methods (someMethod1 and someMethod2) are thread safe as a whole. So your methods are not mutually exclusive, and two threads can be inside them at the same time.
Now, you want these to be mutexes of each other: one approach could be to put all operations (operation 1, ..., operation 4) in a single method and call each based on a condition.
I think you cannot do this without a custom synchronizer. I've whipped one up; I call it TrafficLight, since it allows threads with a particular state to pass while halting others, until it changes state:
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class TrafficLight<T> {

    private final int maxSequence;
    private final ReentrantLock lock = new ReentrantLock(true);
    private final Condition allClear = lock.newCondition();
    private int registered;
    private int leftInSequence;
    private T openState;

    public TrafficLight(int maxSequence) {
        this.maxSequence = maxSequence;
    }

    public void acquire(T state) throws InterruptedException {
        lock.lock();
        try {
            while ((this.openState != null && !this.openState.equals(state)) || leftInSequence == maxSequence) {
                allClear.await();
            }
            if (this.openState == null) {
                this.openState = state;
            }
            registered++;
            leftInSequence++;
        } finally {
            lock.unlock();
        }
    }

    public void release() {
        lock.lock();
        try {
            registered--;
            if (registered == 0) {
                openState = null;
                leftInSequence = 0;
                allClear.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }
}
acquire() will block if another state is active, until it becomes inactive.
The maxSequence is there to help prevent thread starvation, allowing only a maximum number of threads to pass in sequence (then they'll have to queue like the others). You could make a variant that uses a time window instead.
For your problem someMethod1() and someMethod2() would call acquire() with a different state each at the start, and release() at the end.
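A rough usage sketch for the someMethod1/someMethod2 case from the question (the state tokens "M1" and "M2" are arbitrary):
private final TrafficLight<String> light = new TrafficLight<>(100);

public void someMethod1() throws InterruptedException {
    light.acquire("M1"); // blocks while "M2" threads are inside
    try {
        // operations on the map
    } finally {
        light.release();
    }
}

public void someMethod2() throws InterruptedException {
    light.acquire("M2"); // blocks while "M1" threads are inside
    try {
        // operations on the map
    } finally {
        light.release();
    }
}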
A thread can use Object.wait() to block until another thread calls notify() or notifyAll() on that object.
But what if a thread wants to wait until one of multiple objects is signaled? For example, my thread must wait until either a) bytes become available to read from an InputStream or b) an item is added to an ArrayList.
How can the thread wait for either of these events to occur?
EDIT
This question deals with waiting for multiple threads to complete -- my case involves a thread waiting for one of many objects to be signaled.
You are in for a world of pain. Use a higher level abstraction, such as a blocking message queue, from which the thread can consume messages such as 'more bytes available' or 'item added'.
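A minimal sketch of that suggestion (the event type and names are mine): both sources post to one queue, and the waiting thread just blocks on take().
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class EventPump {

    enum Event { BYTES_AVAILABLE, ITEM_ADDED }

    private final BlockingQueue<Event> events = new LinkedBlockingQueue<>();

    // Called by the stream-reading side and the list-adding side respectively.
    void onBytesAvailable() { events.offer(Event.BYTES_AVAILABLE); }
    void onItemAdded()      { events.offer(Event.ITEM_ADDED); }

    // Blocks until either source has posted an event.
    Event waitForEither() throws InterruptedException {
        return events.take();
    }
}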
They could all use the same monitor. Your consumer waits on that monitor, and both of the other threads notify on it when the first can proceed.
A thread cannot wait on more than one object at a time.
The wait() and notify() methods are object-specific. The wait() method suspends the current thread of execution and tells the object to keep track of the suspended thread. The notify() method tells the object to wake up one of the suspended threads it is currently keeping track of (notifyAll() wakes them all).
Useful link : Can a thread call wait() on two locks at once in Java (6) ?
A little late, but it's a very interesting question!
It would seem that you can indeed wait for multiple conditions, with the same performance and no extra threads; it's just a matter of defining the problem! I took the time to write a more detailed explanation within the comments of the code below. By request I will extract the abstraction:
So in fact waiting on multiple objects is the same as waiting on multiple conditions. The next step is to merge your sub-conditions into a net condition, a single condition. When any component of the condition would cause it to become true, you flip a boolean and notify the lock (like any other wait-notify condition).
My approach:
Any condition can only result in two values (true and false). How that value is produced is irrelevant. In your case your "functional condition" is that either one of two values is true: (value_a || value_b). I call this "functional condition" the "nexus point". If you apply the perspective that any complex condition, no matter how complex, always yields a simple result (true or false), then what you're really asking is "What will cause my net condition to become true?" (assuming the logic is "wait until true"). Thus, when a thread causes a component of your condition to become true (setting value_a or value_b to true, in your case), and you know it will cause your desired net condition to be met, then you can simplify your approach to the classical one, in that it flips a single boolean flag and notifies a lock. With this concept, you can apply an object-oriented approach to aid the clarity of your overall logic:
import java.util.HashSet;
import java.util.Set;
/**
* The concept is that all control-flow operations converge
* to a single value: true or false. In the case of N
* components which create the resulting value, the
* theory is the same. So I believe this is a matter of
* perspective and permitting 'simple complexity'. For example:
*
* given the statement:
* while(condition_a || condition_b || ...) { ... }
*
* you could think of it as:
* let C = the boolean -resulting- value of (condition_a || condition_b || ...),
* so C = (condition_a || condition_b || ...);
*
* Now if we were to re-write the statement in layman's terms:
* while(C) { ... }
*
* Now if you recognise this form, you'll notice its just the standard
* syntax for any control-flow statement?
*
* while(condition_is_not_met) {
* synchronized (lock_for_condition) {
* lock_for_condition.wait();
* }
* }
*
* So in theory, even if the said condition evolved from some
* complex form, it should be treated as nothing more than if it
* were in the simplest form. So whenever a component of the condition
* changes in a way that causes the net condition (the resulting value
* of the complex condition) to be met, you simply flip the boolean and
* notify a lock to un-park whoever is waiting on it, just like in the
* standard fashion.
*
* So thinking ahead, if you were to think of your given condition as a
* function whose result is true or false, and which takes the states it
* is comprised of as parameters ( f(...) = (state_a || state_b && state_c), for example ),
* then you would recognize: "If I enter a state which I know would
* cause that condition/lock to become true, I should just flip the switch
* and notify".
*
* So in your example, your 'functional condition' is:
* while(!state_a && !state_b) {
*     wait until state a or state b is true ....
* }
*
* So armed with this mindset, using a simple/assertive form,
* you would recognize that the overall question:
* -> What would cause my condition to be true? : if state_a is true OR state_b is true
* Ok... So, that means: When state_a or state_b turn true, my overall condition is met!
* So... I can just simplify this thing:
*
* boolean net_condition = ...
* final Object lock = new Lock();
*
* void await() {
* synchronized(lock) {
* while(!net_condition) {
* lock.wait();
* }
* }
* }
*
* Almighty, so whenever I turn state_a true, I should just flip and notify
* the net_condition!
*
*
*
* Now for a more expanded form of the SAME THING, just more direct and clear:
*
* @author Jamie Meisch
*/
public class Main {
/**
*
* The equivalent if one was to "Wait for one of many condition/lock to
* be notify me when met" :
*
* synchronized(lock_a,lock_b,lock_c) {
* while(!condition_a || !condition_b || !condition_c) {
* condition_a.wait();
* condition_b.wait();
* condition_c.wait();
* }
* }
*
*/
public static void main(String... args) {
OrNexusLock lock = new OrNexusLock();
// The workers register themselves, each as its own variable, as part of the overall
// condition, which is defined by the OrNexusLock custom implementation and is true
// if any of the registered variables is true.
SpinningWarrior warrior_a = new SpinningWarrior(lock,1000,5);
SpinningWarrior warrior_b = new SpinningWarrior(lock,1000,20);
SpinningWarrior warrior_c = new SpinningWarrior(lock,1000,50);
new Thread(warrior_a).start();
new Thread(warrior_b).start();
new Thread(warrior_c).start();
// So... if any one of these guys reaches 1000, stop waiting:
// ^ As defined by our implement within the OrNexusLock
try {
System.out.println("Waiting for one of these guys to be done, or two, or all! does not matter, whoever comes first");
lock.await();
System.out.println("WIN: " + warrior_a.value() + ":" + warrior_b.value() + ":" + warrior_c.value());
} catch (InterruptedException ignored) {
}
}
// For those not using Java 8 :)
public interface Condition {
boolean value();
}
/**
* A variable which the net lock's 'condition function'
* uses to determine its overall -net- state.
*/
public static class Variable {
private final Object lock;
private final Condition con;
private Variable(Object lock, Condition con) {
this.lock = lock;
this.con = con;
}
public boolean value() {
return con.value();
}
//When the value of the condition changes, this should be called
public void valueChanged() {
synchronized (lock) {
lock.notifyAll();
}
}
}
/**
*
* The lock has a custom function from which it derives its resulting
* -overall- state (met, or not met). The form of the function does
* not matter, but it only has boolean variables to work from. The
* conditions are in their abstract form (a boolean value, however
* that sub-condition is met). It's important to retain the idea
* that complex conditions yield a simple result. Expressing a
* complex statement such as ( field * 5 > 20 ) as a simple
* true-or-false condition/variable is what this approach is
* about. Also, by centralizing the overall logic, it is much
* clearer than the raw -simplest- form (listed above), and just
* as fast!
*/
public static abstract class NexusLock {
private final Object lock;
public NexusLock() {
lock = new Object();
}
//Any complex condition you can fathom!
//Plus I prefer it be consolidated into a nexus point,
// and not asserted by assertive wake-ups
protected abstract boolean stateFunction();
protected Variable newVariable(Condition condition) {
return new Variable(lock, condition);
}
//Wait for the overall condition to be met
public void await() throws InterruptedException {
synchronized (lock) {
while (!stateFunction()) {
lock.wait();
}
}
}
}
// An implementation whose condition is met if any registered variable is true
public static class OrNexusLock extends NexusLock {
private final Set<Variable> vars = new HashSet<>();
public OrNexusLock() {
}
public Variable newVar(Condition con) {
Variable var = newVariable(con);
vars.add(var); // register it as a component of our net condition; we should notify since our functional condition has changed/evolved:
synchronized (lock) { lock.notifyAll(); }
return var;
}
@Override
public boolean stateFunction() { //Our condition for this lock
// if any variable is true: if(var_a || var_b || var_c || ...)
for(Variable var : vars) {
if(var.value() == true) return true;
}
return false;
}
}
//increments a value with delay, the condition is met when the provided count is reached
private static class SpinningWarrior implements Runnable, Condition {
private final int count;
private final long delay;
private final Variable var;
private int tick = 0;
public SpinningWarrior(OrNexusLock lock, int count, long delay) {
this.var = lock.newVar(this);
this.count = count; //What to count to?
this.delay = delay;
}
@Override
public void run() {
while (state_value==false) { //We're still counting up!
tick++;
chkState();
try {
Thread.sleep(delay);
} catch (InterruptedException ignored) {
break;
}
}
}
/**
* Though redundant value-change notifications are OK,
* it's best to prevent them. As such it is made clear
* that we will only ever change state once.
*/
private boolean state_value = false;
private void chkState() {
if(state_value ==true) return;
if(tick >= count) {
state_value = true;
var.valueChanged(); //Our value has changed
}
}
@Override
public boolean value() {
return state_value; //We could compute our condition in here, but for example sake.
}
}
}
It appears that in your case you're waiting for "notifications" from two different sources. You may not have to "wait" (as in normal java synchronized(object) object.wait()) on those two objects per se, but have them both talk to a queue or what not (as the other answers mention, some blocking collection like LinkedBlockingQueue).
If you really want to "wait" on two different java objects, you might be able to do so by applying some of the principles from this answer: https://stackoverflow.com/a/31885029/32453 (basically new up a thread each to do a wait on each of the objects you're waiting for, have them notify the main thread when the object itself is notified) but it might not be easy to manage the synchronized aspects.
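A rough sketch of that forwarding-thread idea (all names here are mine; note also that a notify() which happens before a forwarder starts waiting is simply lost, which is exactly the kind of thing that is hard to manage):
import java.util.concurrent.CountDownLatch;

public class WaitOnEither {
    public static void main(String[] args) throws InterruptedException {
        final Object monitorA = new Object();
        final Object monitorB = new Object();
        final CountDownLatch either = new CountDownLatch(1);

        // One forwarding thread per monitor: wait on it, then trip the shared latch.
        for (Object monitor : new Object[] { monitorA, monitorB }) {
            Thread forwarder = new Thread(() -> {
                synchronized (monitor) {
                    try {
                        monitor.wait(); // woken by notify()/notifyAll() on this monitor
                    } catch (InterruptedException ignored) {
                    }
                }
                either.countDown(); // forward the signal to the main thread
            });
            forwarder.setDaemon(true);
            forwarder.start();
        }

        // Simulate one of the sources signalling after a while.
        new Thread(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
            synchronized (monitorB) { monitorB.notifyAll(); }
        }).start();

        either.await(); // returns as soon as either monitor was notified
        System.out.println("one of the two objects was signalled");
    }
}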
Lock in both cases on the same object. In case a) or case b), call notify() on that same object.
You can wait only on one monitor. So notifiers must notify this one monitor. There is no other way in this low level synchronization.
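A minimal sketch of that shared-monitor approach (field names are mine): both sources flip their own flag and notify the one common lock, and the waiting thread re-checks both flags in the usual while loop.
public class EitherEvent {

    private final Object lock = new Object();
    private boolean bytesAvailable;
    private boolean itemAdded;

    public void onBytesAvailable() {
        synchronized (lock) { bytesAvailable = true; lock.notifyAll(); }
    }

    public void onItemAdded() {
        synchronized (lock) { itemAdded = true; lock.notifyAll(); }
    }

    // Blocks until at least one of the two events has happened.
    public void awaitEither() throws InterruptedException {
        synchronized (lock) {
            while (!bytesAvailable && !itemAdded) {
                lock.wait();
            }
        }
    }
}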
In order to handle the termination of any thread from a given set without waiting for all of them to finish, a dedicated common object (lastExited below) can be used as a monitor (wait() and notify() in synchronized blocks). Further monitors are required to ensure that at any time at most one thread is exiting (notifyExitMutex) and at most one thread is waiting for any thread to exit (waitAnyExitMutex); thus the wait()/notify() pairs always pertain to different blocks.
Example (all thread terminations are handled in the order the threads finished):
import java.util.Random;
public class ThreadMonitor {
private final Runnable[] lastExited = { null };
private final Object notifyExitMutex = new Object();
public void startThread(final Runnable runnable) {
(new Thread(new Runnable() { public void run() {
try { runnable.run(); } catch (Throwable t) { }
synchronized (notifyExitMutex) {
synchronized (lastExited) {
while (true) {
try {
if (lastExited[0] != null) lastExited.wait();
lastExited[0] = runnable;
lastExited.notify();
return;
}
catch (InterruptedException e) { }
}
}
}
}})).start();
}
private final Object waitAnyExitMutex = new Object();
public Runnable waitAnyExit() throws InterruptedException {
synchronized (waitAnyExitMutex) {
synchronized (lastExited) {
if (lastExited[0] == null) lastExited.wait();
Runnable runnable = lastExited[0];
lastExited[0] = null;
lastExited.notify();
return runnable;
}
}
}
private static Random random = new Random();
public static void main(String[] args) throws InterruptedException {
ThreadMonitor threadMonitor = new ThreadMonitor();
int threadCount = 0;
while (threadCount != 100) {
Runnable runnable = new Runnable() { public void run() {
try { Thread.sleep(1000 + random.nextInt(100)); }
catch (InterruptedException e) { }
}};
threadMonitor.startThread(runnable);
System.err.println(runnable + " started");
threadCount++;
}
while (threadCount != 0) {
Runnable runnable = threadMonitor.waitAnyExit();
System.err.println(runnable + " exited");
threadCount--;
}
}
}
I have a requirement to manipulate two queues atomically and am not sure what the correct synchronization strategy is. This is what I was trying:
public class transfer {

    BlockingQueue firstQ;
    BlockingQueue secondQ;

    public void moveToSecond() throws InterruptedException {
        synchronized (this) {
            Object a = firstQ.take();
            secondQ.put(a);
        }
    }

    public void moveToFirst() throws InterruptedException {
        synchronized (this) {
            Object a = secondQ.take();
            firstQ.put(a);
        }
    }
}
Is this the correct pattern? In the method moveToSecond(), if firstQ is empty, the method will wait on firstQ.take(), but it still holds the lock on this object. This will prevent moveToFirst() to have a chance to execute.
I am confused about lock release during a wait: does the thread release all locks [both this and the BlockingQueue's internal lock]? What is the correct pattern to provide atomicity when dealing with multiple blocking queues?
You are using the correct approach in using a common mutex to synchronize across both queues. However, to avoid the situation you describe with the first queue being empty, I'd suggest reimplementing moveToFirst() and moveToSecond() to use poll() rather than take(); e.g.
public boolean moveToSecond() throws InterruptedException {
    boolean success;
    // Synchronize on a simple mutex; could use a Lock here but probably
    // not worth the extra dev. effort.
    synchronized (queueLock) {
        // Will return immediately, returning null if the queue is empty.
        Object o = firstQ.poll();
        if (o != null) {
            // put could block if the queue is full. If you're using a bounded
            // queue you could use add(Object) instead to avoid any blocking, but
            // you would need to handle the exception somehow.
            secondQ.put(o);
            success = true;
        } else {
            success = false;
        }
    }
    return success;
}
Another failure condition you didn't mention is if firstQ is not empty but secondQ is full, the item will be removed from firstQ but there will be no place to put it.
So the only correct way is to use poll and offer with timeouts and code to return things to the way they were before any failure (important!), then retry after a random time until both poll and offer are successful.
This is an optimistic approach; efficient in normal operation but quite inefficient when deadlocks are frequent (average latency depends on the timeout chosen)
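A rough sketch of that retry-with-rollback idea, reusing the two queue fields from the question (the timeout and back-off values are arbitrary):
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Sketch only: move one element from firstQ to secondQ, undoing the take if
// the put side cannot accept it, and backing off before retrying.
public void moveToSecond() throws InterruptedException {
    for (;;) {
        Object a = firstQ.poll(100, TimeUnit.MILLISECONDS);
        if (a != null) {
            if (secondQ.offer(a, 100, TimeUnit.MILLISECONDS)) {
                return; // moved successfully
            }
            firstQ.put(a); // roll back (note: re-inserts at the tail, and may itself block)
        }
        Thread.sleep(ThreadLocalRandom.current().nextLong(50)); // random back-off before retrying
    }
}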
You should use the Lock-mechanism from java.util.concurrency, like this:
Lock lock = new ReentrantLock();
....
lock.lock();
try {
    secondQ.put(firstQ.take());
} finally {
    lock.unlock();
}
Do the same for firstQ.put(secondQ.take()), using the same lock object.
There is no need to use the lowlevel wait/notify methods on the Object class anymore, unless you are writing new concurrency primitives.
In your code, while the thread is blocked on BlockingQueue.take() it is holding on to the lock on this. The lock isn't released until either the code leaves the synchronized block or this.wait() is called.
Here I assume that moveToFirst() and moveToSecond() should block, and that your class controls all access to the queues.
private final BlockingQueue<Object> firstQ = new LinkedBlockingQueue<>();
private final Semaphore firstSignal = new Semaphore(0);
private final BlockingQueue<Object> secondQ = new LinkedBlockingQueue<>();
private final Semaphore secondSignal = new Semaphore(0);
private final Object monitor = new Object();

public void moveToSecond() throws InterruptedException {
    int moved = 0;
    while (moved == 0) {
        // block until someone adds to the queue
        firstSignal.acquire();
        // attempt to move an item from one queue to another atomically
        synchronized (monitor) {
            moved = firstQ.drainTo(secondQ, 1);
        }
    }
}

public void putInFirst(Object object) throws InterruptedException {
    firstQ.put(object);
    // notify any blocking threads that the queue has an item
    firstSignal.release();
}
You would have similar code for moveToFirst() and putInSecond(). The while is only needed if some other code might remove items from the queue. If you want the method that removes from the queue to wait for pending moves, it should acquire a permit from the semaphore, and the semaphore should be created as a fair Semaphore, so the first thread to call acquire will be released first:
firstSignal = new Semaphore(0, true);
If you don't want moveToFirst() to block you have a few options
Have the method do its work in a Runnable sent to an Executor
Pass a timeout to moveToFirst() and use BlockingQueue.poll(int, TimeUnit)
Use BlockingQueue.drainTo(secondQ, 1) and modify moveToFirst() to return a boolean indicating whether it was successful (see the sketch below).
For the above three options, you wouldn't need the semaphore.
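A sketch of the third option (written here for moveToFirst(), i.e. draining secondQ into firstQ), reusing the monitor and queue fields above; it never blocks and reports whether an element was actually moved. This assumes firstQ is unbounded, as above, since drainTo adds with add():
// Non-blocking variant: drainTo moves at most one element and never waits.
public boolean moveToFirst() {
    synchronized (monitor) {
        return secondQ.drainTo(firstQ, 1) == 1;
    }
}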
Finally, I question the need to make the move atomic. If multiple threads are adding or removing from the queues, then an observing queue wouldn't be able to tell whether moveToFirst() was atomic.