I am playing around with Conditions on a ReentrantLock in the context of a resource pool; from what I can see they simplify thread communication. My question is, I end up organically writing strange Conditions such as acquiredMapEmpty, freeQueueNotEmpty, and change, that await and signal different things. Technically they could all be replaced by one Condition, or be broken up into even more Conditions -- is there a rule of thumb for:
Identifying the Conditionals
Figuring out if you have too many or too few
Knowing when you're on the right track or way off course
Here is an example of removing a resource.
public boolean remove(R resource) throws InterruptedException {
    System.out.println("Remove, resource: " + resource + " with thread: " + Thread.currentThread().getName());
    if (resource == null) {
        throw new NullPointerException();
    }
    mainLock.lock();
    try {
        if (!isOpen) {
            throw new IllegalStateException("Not open");
        }
        Object lock = locks.get(resource);
        if (lock == null) {
            return false;
        }
        if (freeQueue.remove(resource)) {
            locks.remove(resource);
            if (!freeQueue.isEmpty()) {
                freeQueueNotEmpty.signalAll();
            }
            return true;
        }
        while (!freeQueue.contains(resource)) {
            change.await();
            if (!isOpen) {
                throw new IllegalStateException("Not open");
            }
            lock = locks.get(resource);
            if (lock == null) {
                return false;
            }
        }
        if (freeQueue.remove(resource)) {
            locks.remove(resource);
            if (!freeQueue.isEmpty()) {
                freeQueueNotEmpty.signalAll();
            }
            return true;
        }
        return false;
    } finally {
        mainLock.unlock();
    }
}
Well, as a rule of thumb, I tend to have as many condition variables as there are reasons for a thread to block. The rationale is that when you signal a condition variable you want to wake up a thread that is waiting for that specific change in the state you're signalling and you really, really want to avoid the "thundering herd" syndrome - waking up all the threads, blocked on a condition, only to have one of them make progress and all the others going back to sleep, having wasted precious CPU time and thrashed caches in the meantime.
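To illustrate that rule of thumb with a minimal sketch (not taken from the question's pool code): a bounded buffer where producers and consumers block for different reasons, so each reason gets its own Condition, and a signal wakes only a thread that can actually make progress.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Two reasons to block -> two Conditions on one lock.
// Producers wait on notFull, consumers wait on notEmpty, so a put()
// never wakes a producer and a take() never wakes a consumer.
final class BoundedBuffer<E> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Queue<E> items = new ArrayDeque<>();
    private final int capacity;

    BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();          // block for the "buffer full" reason
            }
            items.add(e);
            notEmpty.signal();            // wake exactly one consumer
        } finally {
            lock.unlock();
        }
    }

    E take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();         // block for the "buffer empty" reason
            }
            E e = items.remove();
            notFull.signal();             // wake exactly one producer
            return e;
        } finally {
            lock.unlock();
        }
    }
}
```

With a single Condition you would have to signalAll() and let every waiter re-check its predicate; with one Condition per reason, signal() suffices.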
In my humble opinion, there is no rule of thumb here.
It really depends on use-cases, and synchronization is not an easy topic at all.
Of course you should not "exhaust" your system with locks - locks are an expensive resource.
If you feel you need to coordinate threads and to protect shared resources, then you have no choice but to use synchronization objects.
Each time you use a synchronization object, such as a lock or a condition obtained from a lock, you should ask yourself what the use-case is, whether you really need the lock, and which other threads need to be coordinated (what their flows are).
I want to take this question a bit off-topic and give you an example: I once discovered that we had several threads using the synchronized keyword, but some performed reads and some performed writes, so I switched to a ReadWriteLock. The same reasoning should apply in your case:
don't use all kinds of synchronization objects just because they are cool; carefully understand if and where they are really needed.
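For instance, the read/write split mentioned above might look like this (a minimal sketch; the cache class and its contents are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Many readers may hold the read lock at once; a writer takes the
// exclusive write lock only when mutating. Compared to synchronized,
// this lets concurrent reads proceed in parallel.
final class SettingsCache {
    private final ReadWriteLock rw = new ReentrantReadWriteLock();
    private final Map<String, String> map = new HashMap<>();

    String get(String key) {
        rw.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    void put(String key, String value) {
        rw.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```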
I was about to write something about this, but maybe it is better to have a second opinion before appearing like a fool...
So the idea in the next piece of code (Android's Room package v2.4.1, RoomTrackingLiveData) is that the winning thread is kept alive and is forced to check for contention that may have entered the process (coming from losing threads) while it was computing.
Meanwhile, the failed CAS operations performed by those losing threads keep them from entering and executing the code, preventing repeated signals (mComputeFunction.call() or postValue()).
final Runnable mRefreshRunnable = new Runnable() {
    @WorkerThread
    @Override
    public void run() {
        if (mRegisteredObserver.compareAndSet(false, true)) {
            mDatabase.getInvalidationTracker().addWeakObserver(mObserver);
        }
        boolean computed;
        do {
            computed = false;
            if (mComputing.compareAndSet(false, true)) {
                try {
                    T value = null;
                    while (mInvalid.compareAndSet(true, false)) {
                        computed = true;
                        try {
                            value = mComputeFunction.call();
                        } catch (Exception e) {
                            throw new RuntimeException("Exception while computing database"
                                    + " live data.", e);
                        }
                    }
                    if (computed) {
                        postValue(value);
                    }
                } finally {
                    mComputing.set(false);
                }
            }
        } while (computed && mInvalid.get());
    }
};
final Runnable mInvalidationRunnable = new Runnable() {
    @MainThread
    @Override
    public void run() {
        boolean isActive = hasActiveObservers();
        if (mInvalid.compareAndSet(false, true)) {
            if (isActive) {
                getQueryExecutor().execute(mRefreshRunnable);
            }
        }
    }
};
The most obvious thing here is that atomics are being used for everything they are not good at:
Identifying losers and ignoring winners (what reactive patterns need).
AND a "happens once" behavior, performed by the losing thread.
So this is completely counter-intuitive to what atomics are able to achieve: they are extremely good at defining winners, and anything that requires a "happens once" guarantee becomes impossible to keep state-consistent (that last point is suitable for starting a philosophical debate about concurrency, and I will definitely agree with any conclusion).
If atomics are used as "contention checkers" and "contention blockers", then we can implement the exact same principle with a volatile read of the atomic's value after a successful CAS,
checking that value against the snapshot/witness during every other step of the process.
private final AtomicInteger invalidationCount = new AtomicInteger();

private final IntFunction<Runnable> invalidationRunnableFun = invalidationVersion -> (Runnable) () -> {
    if (invalidationVersion != invalidationCount.get()) return;
    try {
        T value = computeFunction.call();
        if (invalidationVersion != invalidationCount.get()) return; // in case computation takes too long...
        postValue(value);
    } catch (Exception e) {
        e.printStackTrace();
    }
};

getQueryExecutor().execute(invalidationRunnableFun.apply(invalidationCount.incrementAndGet()));
In this case, each thread is left with the individual responsibility of checking its position in the contention lane; if its position has moved and it is no longer at the front, that means a newer thread has entered the process, and it should stop further processing.
This alternative is so laughably simple that my first question is:
Why didn't they do it like this?
Maybe my solution has a flaw... but the thing about the first alternative (the nested spin-lock) is that it follows the idea that an atomic CAS operation cannot be verified a second time, and that verification can only be achieved with another cmpxchg... which is false.
It also follows the common (but wrong) belief that whatever you do after a successful CAS is the sacred word of GOD... as I've seen, code seldom checks for concurrency issues once it enters the if body.
if (mInvalid.compareAndSet(false, true)) {
    // Ummm... yes... mInvalid is still true...
    // Let's use a second AtomicReference just in case...
}
It also follows common code conventions that involve "double-<enter something>" in concurrency scenarios.
It is only because the first code follows those ideas that I am inclined to believe my solution is a valid and better alternative.
Even though there is an argument in favor of the "nested spin-lock" option, it does not hold up well:
The first alternative is "safer" precisely because it is SLOWER, so it has MORE time to identify contention at the tail of a stream of incoming threads.
BUT it is not even 100% safe, because of the "happens once" behavior that is impossible to ensure.
There is also a quirk in that code: when it reaches the end of a continuous stream of incoming threads, two signals are dispatched one after the other, the second-to-last one and then the last one.
But IF it is safer because it is slower, wouldn't that defeat the goal of using atomics, since their whole point is supposed to be better performance in the first place?
Deadlocks only seem possible if there is a cyclic dependency created by the possibility of one or more threads creating a loop through lockable resources.
One option is to avoid these cycles through careful static analysis or through a design pattern for acquiring locks.
However, can we prevent deadlocks by using tryLock on the Lock interface?
tryLock attempts to acquire the lock atomically and returns true if successful; if the lock is already held, it returns false, so we can simply skip over the critical section.
int sharedStateA = 0;
int sharedStateB = 0;
Lock lockA = new ReentrantLock();
Lock lockB = new ReentrantLock();

// possible deadlock safe solution
// (note: unlock() must only be called when tryLock() succeeded,
// otherwise it throws IllegalMonitorStateException)
// executed by thread 1
void deadLockSafeUpdateAthenB() {
    if (lockA.tryLock()) {
        try {
            sharedStateA = sharedStateA + 1;
            if (lockB.tryLock()) {
                try {
                    sharedStateB = sharedStateB + 1;
                } finally {
                    lockB.unlock();
                }
            }
        } finally {
            lockA.unlock();
        }
    }
}

// executed by thread 2
void deadLockSafeUpdateBthenA() {
    if (lockB.tryLock()) {
        try {
            sharedStateB = sharedStateB + 1;
            if (lockA.tryLock()) {
                try {
                    sharedStateA = sharedStateA + 1;
                } finally {
                    lockA.unlock();
                }
            }
        } finally {
            lockB.unlock();
        }
    }
}
Your code with Lock.tryLock() is deadlock safe, but you should try to use the other overload,
public boolean tryLock(long timeout, TimeUnit unit)
if your threads have short run times. The call tryLock(0, TimeUnit.SECONDS) is better than Lock.tryLock() because it honors the fairness policy, i.e. the lock's waiting queue is honored, while tryLock() doesn't honor it.
Even if a static analysis tells us that a piece of code is deadlock-prone, it is not guaranteed to actually produce deadlocks, since it is all an unlucky-timing game; so your target with tryLock() should be to produce a program that is functionally the same as the deadlock-prone one, assuming the deadlock does not occur.
Fixing one problem shouldn't introduce other issues, and in your code it looks quite possible that, with unlucky timing, one thread might not perform its update at all; so I suggest using the timed tryLock instead of the barging tryLock if it is mandatory for lock acquisition to happen in that order.
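For illustration, a minimal sketch of the timed variant (names loosely mirror the question; the 50 ms timeout is an arbitrary choice, and the fair-lock constructor argument is what makes the waiting queue matter):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

final class TimedTryLockExample {
    // Fair lock: the timed tryLock will respect the waiting queue.
    private final Lock lockA = new ReentrantLock(true);
    private int sharedStateA = 0;

    // unlock() is only reached when tryLock() succeeded, so a false
    // return can never lead to IllegalMonitorStateException.
    boolean updateA() throws InterruptedException {
        if (lockA.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                sharedStateA++;
                return true;
            } finally {
                lockA.unlock();
            }
        }
        return false; // could not acquire in time; caller may retry
    }
}
```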
Hope it helps!
This program attempts to print the numbers 1 to 10 sequentially: one thread prints the odd numbers and a second thread prints the even numbers.
I have been reading JCIP book and it says:
Ensure that the state variables making up the condition predicate are guarded by the lock associated with the condition queue.
In the below program, the condition queue corresponds to the static member obj1, while the state variable that makes up the condition predicate is the static volatile member count. (Let me know if I am wrong in my interpretation of the condition queue, state variable, and condition predicate.)
The below program works correctly but clearly violates the above idiom. Have I understood what the author is trying to say? Is the code really poor programming practice (which just happens to work correctly)?
Can you give me an example where not following the above idiom will make me run into problems?
public class OddEvenSynchronized implements Runnable {
    static Object obj1 = new Object(); // monitor to share data
    static volatile int count = 1;     // condition predicate
    boolean isEven;

    public OddEvenSynchronized(boolean isEven) { // constructor
        this.isEven = isEven;
    }

    public void run() {
        while (count <= 10) {
            if (this.isEven == true) {
                printEven(); // print an even number
            } else {
                printOdd();  // print an odd number
            }
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(new OddEvenSynchronized(true));
        Thread t2 = new Thread(new OddEvenSynchronized(false));
        // start the 2 threads
        t1.start();
        t2.start();
    }

    void printEven() {
        synchronized (obj1) {
            while (count % 2 != 0) {
                try {
                    obj1.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
        System.out.println("Even" + count);
        count++; // unguarded increment (violation)
        synchronized (obj1) {
            obj1.notifyAll();
        }
    } // end method

    void printOdd() {
        synchronized (obj1) {
            while (count % 2 == 0) {
                try {
                    obj1.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
        System.out.println("Odd" + count);
        count++; // unguarded increment (violation)
        synchronized (obj1) {
            obj1.notifyAll();
        }
    } // end method
} // end class
Do not read from or write to count unless you're synchronized on obj1. That's a no-no! The prints and the increments should be done from inside the synchronized blocks.
synchronized (obj1) {
    while (count % 2 != 0) {
        try {
            obj1.wait();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    System.out.println("Even" + count);
}
synchronized (obj1) {
    count++;
    obj1.notifyAll();
}
You'll notice that there's no reason to drop the synchronization now. Combine the two blocks.
synchronized (obj1) {
    while (count % 2 != 0) {
        try {
            obj1.wait();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    System.out.println("Even" + count);
    count++;
    obj1.notifyAll();
}
The below program works correctly but is clearly violating the above idiom.
The insidious danger of multithreaded programming is that a buggy program can appear to work correctly most of the time. Race conditions can be quite devious because they often require very tight timing conditions which rarely happen.
It's really, really important to follow the rules to the letter. It's very difficult to get multithreaded programming right. It's a near certainty that any time you deviate from the rules and try to be clever you will introduce subtle bugs.
The only explanation I have been able to come up with for this question, as hinted at by my discussion with John Kugelman in his answer (please correct me if something is wrong):
1st key insight: in Java, there is only one condition queue associated with an object's monitor. Although the two threads share that condition queue, their condition predicates are different. This sharing results in unnecessary wakeup -> check condition predicate -> sleep again cycles. So, although inefficient, the threads still behave as if they had separate condition queues, provided the wait is coded properly ( while (conditionPredicate) { obj.wait(); } ).
In the above program, the condition predicates
count%2 == 0
count%2 != 0
are different, although they are part of the same condition queue (i.e. doing a notify() on this object's monitor will wake both of them, however only one would be able to proceed at a time).
The 2nd key insight:
The volatile count variable ensures memory visibility.
Conclusion:
As soon as we introduce another thread with the same condition predicate, the program will be susceptible to race conditions (if not other defects).
Also, note that the wait()/notify() mechanism is usually employed for objects with the same condition predicate, for example waiting for a resource lock. The above program is common in interviews, and I doubt it would appear in real-life code.
So, if there are two or more threads in the same condition queue with different condition predicates, and the condition predicate variable is volatile (and hence ensures memory visibility), then ignoring the above advice can produce a correct program. Although this is of little significance, this really helped me understand multi-threading better.
I have this kind of code:
public class RecursiveQueue {

    // @Inject
    private QueueService queueService;

    public static void main(String[] args) {
        RecursiveQueue test = new RecursiveQueue();
        test.enqueue(new Node("X"), true);
        test.enqueue(new Node("Y"), false);
        test.enqueue(new Node("Z"), false);
    }

    private void enqueue(final Node node, final boolean waitTillFinished) {
        final AtomicLong totalDuration = new AtomicLong(0L);
        final AtomicInteger counter = new AtomicInteger(0);
        AfterCallback callback = new AfterCallback() {
            @Override
            public void onFinish(Result result) {
                for (Node aNode : result.getChildren()) {
                    counter.incrementAndGet();
                    queueService.requestProcess(aNode, this);
                }
                totalDuration.addAndGet(result.getDuration());
                if (counter.decrementAndGet() <= 0) { // last one
                    System.out.println("Processing of " + node.toString() + " has finished in " + totalDuration.get() + " ms");
                    if (waitTillFinished) {
                        counter.notify();
                    }
                }
            }
        };
        counter.incrementAndGet();
        queueService.requestProcess(node, callback);
        if (waitTillFinished) {
            try {
                counter.wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
Imagine there is a queueService which uses a blocking queue and a few consumer threads to process nodes, i.e. it calls a DAO to fetch the children of each node (it's a tree).
So the requestProcess method just enqueues the node and does not block.
Is there some better/safer way to avoid using wait/notify in this sample?
According to my findings I could use a Phaser (but I am on Java 6) or Conditions (but I'm not using locks).
There is no synchronized anything in your example. You mustn't call o.wait() or o.notify() except from within a synchronized(o) {...} block.
Your call to wait() is not in a loop. A premature return may never happen in your JVM, but the language spec permits wait() to return prematurely (that's known as a spurious wakeup). More generally, it is good practice to always use a loop because it's a familiar design pattern: a while statement costs no more than an if, you should have it because of the possibility of spurious wakeups, you absolutely must have it in a multi-consumer situation, so you might as well just always write it that way.
Since you must use synchronized blocks in order to use wait() and notify(), there probably is no reason to use Atomic anything.
This "recursive" thing seems awfully complicated, what with the callback adding more items to the queue. How deep can that go?
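For reference, the wait-in-a-loop pattern those points describe looks like this (a minimal sketch; the done flag is a stand-in for your real completion condition, e.g. the in-flight counter reaching zero):

```java
final class Waiter {
    private final Object monitor = new Object();
    private boolean done = false; // condition predicate, guarded by monitor

    void awaitDone() throws InterruptedException {
        synchronized (monitor) {          // wait only while holding the monitor
            while (!done) {               // loop guards against spurious wakeups
                monitor.wait();
            }
        }
    }

    void markDone() {
        synchronized (monitor) {          // notify only while holding the monitor
            done = true;
            monitor.notifyAll();
        }
    }
}
```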
I think you are looking for CountDownLatch.
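A minimal sketch of how the latch could fit here (names are made up; it keeps an AtomicInteger for in-flight counting, like your code, but replaces wait/notify with a one-shot latch that the last finishing task releases; CountDownLatch has been available since Java 5, so it fits your Java 6 constraint):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

final class EnqueueAndWait {
    private final AtomicInteger inFlight = new AtomicInteger(0);
    private final CountDownLatch done = new CountDownLatch(1);

    void taskStarted() {                  // call before requestProcess(...)
        inFlight.incrementAndGet();
    }

    void taskFinished() {                 // call at the end of onFinish(...)
        if (inFlight.decrementAndGet() <= 0) {
            done.countDown();             // safe to call more than once
        }
    }

    void awaitAll() throws InterruptedException {
        done.await();                     // blocks until the last task finishes
    }
}
```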
You actually use locks, or, let's put it this way, you should be using them if you try to use wait/notify, as James pointed out. Since you are bound to Java 6 and ForkJoin or Phaser are not available to you, the choice is either implementing wait/notify properly or using a Condition with an explicit Lock. That is a matter of personal preference.
Another alternative is to try to restructure your algorithm so that you know the entire set of steps to execute up front. That is not always possible, though.
I am fairly new to Java and especially to concurrency, so probably/hopefully this is a fairly straightforward problem.
Basically from my main thread I have this:
public void playerTurn(Move move)
{
    // Wait until able to move
    while (!gameRoom.game.getCurrentPlayer().getAllowMove())
    {
        try {
            Thread.sleep(200);
            trace("waiting for player to be available");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    gameRoom.getGame().handle(move);
}
gameRoom.getGame() is on its own thread.
gameRoom.getGame().handle() is synchronized
gameRoom.game.getCurrentPlayer() is a variable of gameRoom.getGame(); it is on the same thread.
allowMoves is set to false as soon as handle(move) is called, and back to true once it has finished processing the move.
I call playerTurn() multiple times. I actually call it from a SmartFoxServer extension, as and when it receives a request, often in quick succession.
My problem is, most of the time it works, but SOMETIMES it issues multiple handle(move) calls even though allowMoves should be false; it's not waiting for it to become true again. I thought it was possible that the game thread didn't have a chance to set allowMoves before another handle(move) was called, so I added volatile to allowMoves and made the functions on the game thread synchronized. But the problem is still happening.
These are in my Game class:
synchronized public void handle(Object msg)
{
    lastMessage = msg;
    notify();
}

synchronized public Move move() throws InterruptedException
{
    while (true)
    {
        allowMoves = true;
        System.out.print(" waiting for move()...");
        wait();
        allowMoves = false;
        if (lastMessage instanceof Move)
        {
            System.out.print(" process move()...");
            Move m = (Move) lastMessage;
            return m;
        }
    }
}

public volatile boolean allowMoves;

synchronized public boolean getAllowMoves()
{
    return allowMoves;
}
As I said, I am new to this and probably a little ahead of myself (as per usual, but its kinda my style to jump into the deep end, great for a quick learning curve anyway).
Cheers for your help.
Not sure if this will help, but what if you used AtomicBoolean instead of synchronized and volatile? It is documented as lock-free and thread-safe.
The problem is that you are using synchronized methods on two different objects:
gameRoom.game.getCurrentPlayer().getAllowMove() <-- synchronized on the CurrentPlayer instance.
gameRoom.getGame().handle(move) <-- synchronized on gameRoom.getGame().
This is your issue. You don't need the synchronized keyword for getAllowMoves, since the field is volatile, and volatile guarantees visibility semantics.
public boolean getAllowMoves() {
    return allowMoves;
}
There is a primitive dedicated to resource management: Semaphore.
You need to:
create a semaphore with permits set to 1
use acquire when looking for a move
use release after the move is complete
That way you will never face two concurrent invocations of the handle method.
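A minimal sketch of those steps (the class and method names are made up; the Runnable stands in for the real handle(move) logic):

```java
import java.util.concurrent.Semaphore;

final class MoveGate {
    // One permit: at most one move is handled at a time.
    private final Semaphore permit = new Semaphore(1);

    void handleMove(Runnable move) throws InterruptedException {
        permit.acquire();     // blocks until the previous move completes
        try {
            move.run();       // placeholder for the real handle(move) body
        } finally {
            permit.release(); // always return the permit, even on failure
        }
    }
}
```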