Mutually exclusive methods - java

I am learning Java multithreaded programming, and I have the following logic:
Suppose I have a class A
class A {
    ConcurrentMap<K, V> map;

    public void someMethod1() {
        // operation 1 on map
        // operation 2 on map
    }

    public void someMethod2() {
        // operation 3 on map
        // operation 4 on map
    }
}
Now I don't need synchronization of the operations in "someMethod1" or "someMethod2". This means if there are two threads calling "someMethod1" at the same time, I don't need to serialize these operations (because the ConcurrentMap will do the job).
But I hope "someMethod1" and "someMethod2" are mutex of each other, which means when some thread is executing "someMethod1", another thread should wait to enter "someMethod2" (but another thread should be allowed to enter "someMethod1").
So, in short, is there a way that I can make "someMethod1" and "someMethod2" not mutex of themselves but mutex of each other?
I hope I stated my question clear enough...
Thanks!

I tried a couple of approaches with higher-level constructs, but nothing quite came to mind. I think this may be an occasion to drop down to the low-level APIs:
EDIT: I actually think you're trying to set up a problem which is inherently tricky (see second to last paragraph) and probably not needed (see last paragraph). But that said, here's how it could be done, and I'll leave the color commentary for the end of this answer.
private int someMethod1Invocations = 0;
private int someMethod2Invocations = 0;

public void someMethod1() throws InterruptedException {
    synchronized (this) {
        // Wait for there to be no someMethod2 invocations -- but
        // don't wait on any someMethod1 invocations.
        // Once all someMethod2s are done, increment someMethod1Invocations
        // to signify that we're running, and proceed.
        while (someMethod2Invocations > 0)
            wait();
        someMethod1Invocations++;
    }

    // your code here

    synchronized (this) {
        // We're done with this method, so decrement someMethod1Invocations
        // and wake up any threads that were waiting for that to hit 0.
        someMethod1Invocations--;
        notifyAll();
    }
}

public void someMethod2() throws InterruptedException {
    // comments are all ditto the above
    synchronized (this) {
        while (someMethod1Invocations > 0)
            wait();
        someMethod2Invocations++;
    }

    // your code here

    synchronized (this) {
        someMethod2Invocations--;
        notifyAll();
    }
}
One glaring problem with the above is that it can lead to thread starvation. For instance, someMethod1() is running (and blocking someMethod2()s), and just as it's about to finish, another thread comes along and invokes someMethod1(). That proceeds just fine, and just as it finishes another thread starts someMethod1(), and so on. In this scenario, someMethod2() will never get a chance to run. That's actually not directly a bug in the above code; it's a problem with your very design needs, one which a good solution should actively work to solve. I think a fair AbstractQueuedSynchronizer could do the trick, though that is an exercise left to the reader. :)
Finally, I can't resist but to interject an opinion: given that ConcurrentHashMap operations are pretty darn quick, you could be better off just putting a single mutex around both methods and just being done with it. So yes, threads will have to queue up to invoke someMethod1(), but each thread will finish its turn (and thus let other threads proceed) extremely quickly. It shouldn't be a problem.
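For reference, here is a minimal sketch of that simpler alternative, using a private lock object so callers cannot interfere with the locking (the class is assumed here to be generic in K and V):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class A<K, V> {
    private final Object mutex = new Object();
    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();

    public void someMethod1() {
        synchronized (mutex) {
            // operation 1 on map
            // operation 2 on map
        }
    }

    public void someMethod2() {
        synchronized (mutex) {
            // operation 3 on map
            // operation 4 on map
        }
    }
}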

I think this should work
class A {
    private final Lock lock = new Lock();

    private static class Lock {
        int m1;
        int m2;
    }

    public void someMethod1() throws InterruptedException {
        synchronized (lock) {
            while (lock.m2 > 0) {
                lock.wait();
            }
            lock.m1++;
        }
        // someMethod1 and someMethod2 cannot be here simultaneously
        synchronized (lock) {
            lock.m1--;
            lock.notifyAll();
        }
    }

    public void someMethod2() throws InterruptedException {
        synchronized (lock) {
            while (lock.m1 > 0) {
                lock.wait();
            }
            lock.m2++;
        }
        // someMethod1 and someMethod2 cannot be here simultaneously
        synchronized (lock) {
            lock.m2--;
            lock.notifyAll();
        }
    }
}

This probably can't work (see comments) - leaving it for information.
One way would be to use Semaphores:
one semaphore sem1, with one permit, linked to method1
one semaphore sem2, with one permit, linked to method2
when entering method1, try to acquire sem2's permit, and if available release it immediately.
See this post for an implementation example.
Note: in your code, even if ConcurrentMap is thread safe, operation 1 and operation 2 (for example) are not atomic - so it is possible in your scenario to have the following interleaving:
Thread 1 runs operation 1
Thread 2 runs operation 1
Thread 2 runs operation 2
Thread 1 runs operation 2

First of all: your map is thread safe, since it is a ConcurrentMap. This means that individual operations on this map, such as put or containsKey, are thread safe.
Secondly, this does not guarantee that your methods (someMethod1 and someMethod2) are thread safe, so your methods are not mutually exclusive and two threads can be in them at the same time.
Now, you want these to be mutually exclusive: one approach could be to put all operations (operation 1, ..., operation 4) in a single method and call each based on a condition.

I don't think you can do this without a custom synchronizer. I've whipped one up; I call it TrafficLight, since it allows threads with a particular state to pass while halting others, until it changes state:
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class TrafficLight<T> {
    private final int maxSequence;
    private final ReentrantLock lock = new ReentrantLock(true);
    private final Condition allClear = lock.newCondition();
    private int registered;
    private int leftInSequence;
    private T openState;

    public TrafficLight(int maxSequence) {
        this.maxSequence = maxSequence;
    }

    public void acquire(T state) throws InterruptedException {
        lock.lock();
        try {
            while ((this.openState != null && !this.openState.equals(state)) || leftInSequence == maxSequence) {
                allClear.await();
            }
            if (this.openState == null) {
                this.openState = state;
            }
            registered++;
            leftInSequence++;
        } finally {
            lock.unlock();
        }
    }

    public void release() {
        lock.lock();
        try {
            registered--;
            if (registered == 0) {
                openState = null;
                leftInSequence = 0;
                allClear.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }
}
acquire() will block if another state is active, until it becomes inactive.
The maxSequence is there to help prevent thread starvation, allowing only a maximum number of threads to pass in sequence (then they'll have to queue like the others). You could make a variant that uses a time window instead.
For your problem someMethod1() and someMethod2() would call acquire() with a different state each at the start, and release() at the end.
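A rough sketch of how the asker's class might wire this up, assuming an enum to name the two states and an arbitrary maxSequence of 5:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class A<K, V> {
    private enum Method { ONE, TWO }

    private final TrafficLight<Method> light = new TrafficLight<>(5);
    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();

    public void someMethod1() throws InterruptedException {
        light.acquire(Method.ONE);
        try {
            // operations 1 and 2 on map; other someMethod1 callers may run concurrently,
            // but someMethod2 callers are held back until release()
        } finally {
            light.release();
        }
    }

    public void someMethod2() throws InterruptedException {
        light.acquire(Method.TWO);
        try {
            // operations 3 and 4 on map
        } finally {
            light.release();
        }
    }
}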

Related

How is `hold count` value useful in Reentrant Lock?

ReentrantLock (https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantLock.html) exposes how strongly a particular thread holds the lock through its 'hold count'. The count is initialized when a thread acquires the lock, incremented each time that thread re-acquires it, and decremented each time the thread invokes the unlock method on the lock.
Only a single thread at a time can be the owner of a ReentrantLock, so a simple boolean flag would seem to make more sense than an integer count. A thread that already owns the lock is the only one that can re-acquire it, so the count seems to be of little (any) use.
What is the usefulness of the hold count? What are its use cases? One such use case could be to check whether the current thread is holding the lock (hold count > 0), but there are dedicated APIs for that, such as isHeldByCurrentThread().
The API documentation for that method explains it:
The hold count information is typically only used for testing and debugging purposes.
So it's basically a method that can help you track down instances where your code fails to call unlock(). This is especially true for cases where you have reentrant usage of the lock.
Suppose you have a method that includes a locked block and can be called from different places, and the method should do different things depending on the hold count. Then you can make use of getHoldCount:
import java.util.concurrent.locks.ReentrantLock;

public class Example {
    ReentrantLock lock = new ReentrantLock();

    void method1() {
        lock.lock();
        try {
            if (lock.getHoldCount() == 1) {
                System.out.println("call method1 directly");
            } else if (lock.getHoldCount() == 2) {
                System.out.println("call method1 by invoking it inside method2");
            }
        } finally {
            lock.unlock();
        }
    }

    void method2() {
        lock.lock();
        try {
            method1();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        Example example = new Example();
        example.method1(); // call method1 directly
        example.method2(); // call method1 by invoking it inside method2
    }
}

Java: two WAITING + one BLOCKED threads, notify() leads to a livelock, notifyAll() doesn't, why?

I was trying to implement something similar to Java's bounded BlockingQueue interface using Java synchronization "primitives" (synchronized, wait(), notify()) when I stumbled upon some behavior I don't understand.
I create a queue capable of storing 1 element, create two threads that wait to fetch a value from the queue, start them, then try to put two values into the queue in a synchronized block in the main thread. Most of the time it works, but sometimes the two threads waiting for a value start seemingly waking up each other and not letting the main thread enter the synchronized block.
Here's my (simplified) code:
import java.util.LinkedList;
import java.util.Queue;

public class LivelockDemo {
    private static final int MANY_RUNS = 10000;

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < MANY_RUNS; i++) { // to increase the probability
            final MyBoundedBlockingQueue ctr = new MyBoundedBlockingQueue(1);
            Thread t1 = createObserver(ctr, i + ":1");
            Thread t2 = createObserver(ctr, i + ":2");
            t1.start();
            t2.start();
            System.out.println(i + ":0 ready to enter synchronized block");
            synchronized (ctr) {
                System.out.println(i + ":0 entered synchronized block");
                ctr.addWhenHasSpace("hello");
                ctr.addWhenHasSpace("world");
            }
            t1.join();
            t2.join();
            System.out.println();
        }
    }

    public static class MyBoundedBlockingQueue {
        private Queue<Object> lst = new LinkedList<Object>();
        private int limit;

        private MyBoundedBlockingQueue(int limit) {
            this.limit = limit;
        }

        public synchronized void addWhenHasSpace(Object obj) throws InterruptedException {
            boolean printed = false;
            while (lst.size() >= limit) {
                printed = __heartbeat(':', printed);
                notify();
                wait();
            }
            lst.offer(obj);
            notify();
        }

        // waits until something has been set and then returns it
        public synchronized Object getWhenNotEmpty() throws InterruptedException {
            boolean printed = false;
            while (lst.isEmpty()) {
                printed = __heartbeat('.', printed); // show progress
                notify();
                wait();
            }
            Object result = lst.poll();
            notify();
            return result;
        }

        // just to show progress of waiting threads in a reasonable manner
        private static boolean __heartbeat(char c, boolean printed) {
            long now = System.currentTimeMillis();
            if (now % 1000 == 0) {
                System.out.print(c);
                printed = true;
            } else if (printed) {
                System.out.println();
                printed = false;
            }
            return printed;
        }
    }

    private static Thread createObserver(final MyBoundedBlockingQueue ctr,
            final String name) {
        return new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println(name + ": saw " + ctr.getWhenNotEmpty());
                } catch (InterruptedException e) {
                    e.printStackTrace(System.err);
                }
            }
        }, name);
    }
}
Here's what I see when it "blocks":
(skipped a lot)
85:0 ready to enter synchronized block
85:0 entered synchronized block
85:2: saw hello
85:1: saw world
86:0 ready to enter synchronized block
86:0 entered synchronized block
86:2: saw hello
86:1: saw world
87:0 ready to enter synchronized block
............................................
..........................................................................
..................................................................................
(goes "forever")
However, if I change the notify() calls inside the while(...) loops of addWhenHasSpace and getWhenNotEmpty methods to notifyAll(), it "always" passes.
My question is this: why does the behavior vary between notify() and notifyAll() methods in this case, and also why is the behavior of notify() the way it is?
I would expect both methods to behave in the same way in this case (two threads WAITING, one BLOCKED), because:
it seems to me that in this case notifyAll() would only wake up the other thread, same as notify();
it looks like the choice of the method which wakes up a thread affects how the thread that is woken up (and becomes RUNNABLE I guess) and the main thread (that has been BLOCKED) later compete for the lock — not something I would expect from the javadoc as well as searching the internet on the topic.
Or maybe I'm doing something wrong altogether?
Without looking too deeply into your code, I can see that you are using a single condition variable to implement a queue with one producer and more than one consumer. That's a recipe for trouble: If there's only one condition variable, then when a consumer calls notify(), there's no way of knowing whether it will wake the producer or wake the other consumer.
There are two ways out of that trap: The simplest is to always use notifyAll().
The other way is to stop using synchronized, wait(), and notify(), and instead use the facilities in java.util.concurrent.locks.
A single ReentrantLock object can give you two (or more) condition variables. Use one exclusively for the producer to notify the consumers, and use the other exclusively for the consumers to notify the producer.
Note: The names change when you switch to using ReentrantLocks: o.wait() becomes c.await(), and o.notify() becomes c.signal().
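A rough sketch of that arrangement (a standalone bounded queue for illustration, not the asker's exact class):

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class TwoConditionQueue<E> {
    private final Queue<E> items = new ArrayDeque<>();
    private final int limit;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // the producer waits here
    private final Condition notEmpty = lock.newCondition();  // consumers wait here

    TwoConditionQueue(int limit) {
        this.limit = limit;
    }

    void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() >= limit) {
                notFull.await();
            }
            items.offer(e);
            notEmpty.signal();   // wakes a consumer, never another producer
        } finally {
            lock.unlock();
        }
    }

    E take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();
            }
            E e = items.poll();
            notFull.signal();    // wakes the producer, never another consumer
            return e;
        } finally {
            lock.unlock();
        }
    }
}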
There appears to be some kind of fairness/barging going on with intrinsic locking - probably due to some optimization. I am guessing that the native code checks whether the current thread has notified the monitor it is about to wait on and allows it to win.
Replace the synchronized blocks with a ReentrantLock and it should work as you expect. The difference here is how the ReentrantLock handles waiters of a lock it has notified on.
Update:
Interesting find here. What you are seeing is a race between the main thread entering
synchronized (ctr) {
    System.out.println(i + ":0 entered synchronized block");
    ctr.addWhenHasSpace("hello");
    ctr.addWhenHasSpace("world");
}
while the other two threads enter their respective synchronized regions. If the main thread does not get into its sync region before at least one of the two, you will experience the live-lock output you are describing.
What appears to be happening is that if both consumer threads hit the sync block first, they will ping-pong with each other on notify and wait. It may be the case that the JVM gives priority for the monitor to threads that are waiting over threads that are blocked.

How to implement synchronized checks for Bounded Buffer to avoid Race Conditions?

Working with the classic multiple Consumer/Producer problem, and I have an issue that is driving me around the bend, regarding how to avoid race conditions when inserting/removing from a circular buffer. Appreciate any help in advance!
Sample code for circular buffer for example purposes. Similar to my implementation (Note: I cannot use collection types, only arrays for this):
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final String[] buffer;
    private final int capacity;
    private int front;
    private int rear;
    private int count;

    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedBuffer(int capacity) {
        super();
        this.capacity = capacity;
        buffer = new String[capacity];
    }

    public void deposit(String data) throws InterruptedException {
        lock.lock();
        try {
            while (count == capacity) {
                notFull.await();
            }
            buffer[rear] = data;
            rear = (rear + 1) % capacity;
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public String fetch() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();
            }
            String result = buffer[front];
            front = (front + 1) % capacity;
            count--;
            notFull.signal();
            return result;
        } finally {
            lock.unlock();
        }
    }
}
What I need to know is: how can I implement a method for checking whether the buffer is full or empty? This method needs to be part of BoundedBuffer and must be called from another class (Producer/Consumer) before giving the go-ahead for the insert/remove calls.
Pseudocode for method in Producer class.
if (!bufferFull) {
    buffer.addElement;
} else {
    thread.sleep(5);
    threadHasSleptFor++;
}
I am using threads, and there are multiple producers/consumers (In this case 2 producers/consumers, but I may require more). I need it so that if the buffer is full, the thread has to wait until it becomes available for insertion, and the time it waits needs to be stored for output purposes (Not debug, part of the core features). The issue I am having is this:
Thread 1 (producer) checks the bufferFull condition; it's false.
Scheduler switches to Thread 2 midway.
Thread 2 also checks the bufferFull condition; it's false.
Thread 2 proceeds to insert.
Scheduler switches back to Thread 1.
Thread 1 now goes to the insert line, as it already checked, but Thread 2 beat it.
Booom.
Somewhat new to Java, though as I understand this is the "time-of-check/time-of-use" race condition issue.
Can someone please advise how this can be implemented safely, and how I would loop the code so that the threadHasSleptFor variable keeps incrementing on every failed attempt (providing the methods would be great)? I want it so that only the thread that has performed the check can begin to insert the item; the second producer must wait for the lock.
Thanks.
This is by definition impossible to do without higher level locking.
You have to guarantee that the check of whether the buffer is full and the following insert are atomic from the thread's perspective, which means you have to acquire some common lock around both. This general problem is indeed called time-of-check to time-of-use (TOCTOU) and leads to many interesting race conditions down the line.
The solution to these problems is to not check whether you can do an operation and then do it, but to just try the operation and handle the error case. So if you don't want your operation to block when the buffer is full, just implement a tryDeposit method that throws an exception if it can't store the value, or returns a boolean success value.
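For example, a non-blocking variant added to the BoundedBuffer above might look like this (a sketch; the check and the insert happen under the same lock, so there is no time-of-check/time-of-use window):

public boolean tryDeposit(String data) {
    lock.lock();
    try {
        if (count == capacity) {
            return false;          // buffer was full at the time of the call
        }
        buffer[rear] = data;
        rear = (rear + 1) % capacity;
        count++;
        notEmpty.signal();
        return true;
    } finally {
        lock.unlock();
    }
}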
Although in your case, if you have to record the time spent before the value could be pushed into the buffer, I don't see why a simple:
long start = System.nanoTime();
queue.deposit(data);
long end = System.nanoTime();
wouldn't do the trick as well.
If I understand you correctly, you are asking how to make a thread wait until it's OK to call deposit() or wait until it's OK to call fetch(). But, there's no need for that. Your deposit() method will block the calling thread until there is room in the queue, and your fetch() method will block the caller until there is something to fetch. That's what the notFull.await() and notEmpty.await() calls do.
await() unlocks the lock, sleeps until the condition is signalled by another thread, and then it re-locks the lock. The condition may or may not still be true when the caller finally gets the lock again, but that's why you have the await() calls in loops, so that the thread keeps trying until finally it has the lock and the condition is true. Then it does its work (add an item or remove an item), unlocks the lock, and returns.
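For instance, here is a minimal sketch of a producer thread and a consumer thread driving the BoundedBuffer from the question, relying only on the blocking behavior of deposit() and fetch():

public class BoundedBufferDemo {
    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer buffer = new BoundedBuffer(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    buffer.deposit("item-" + i);          // blocks while the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println(buffer.fetch());   // blocks while the buffer is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}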

JAVA Concurrency - waiting for process to complete

I am fairly new to Java and especially to concurrency, so probably/hopefully this is a fairly straightforward problem.
Basically from my main thread I have this:
public void playerTurn(Move move) {
    // Wait until able to move
    while (!gameRoom.game.getCurrentPlayer().getAllowMove()) {
        try {
            Thread.sleep(200);
            trace("waiting for player to be available");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    gameRoom.getGame().handle(move);
}
gameRoom.getGame() is on its own thread.
gameRoom.getGame().handle() is synchronized
gameRoom.game.getCurrentPlayer() is a variable of gameRoom.getGame(); it is on the same thread
allowMoves is set to false as soon as handle(move) is called, and back to true once it has finished processing the move.
I call playerTurn() multiple times. I actually call it from a SmartFoxServer extension, as and when it receives a request, often in quick succession.
My problem is, most times it works. However, SOMETIMES it issues multiple handle(move) calls even though allowMoves should be false. It's not waiting for it to be true again. I thought it was possible that the game thread didn't have a chance to set allowMoves before another handle(move) was called. I added volatile to allowMoves, and ensured the functions on the game thread were set to synchronized. But the problem is still happening.
These are in my Game class:
synchronized public void handle(Object msg) {
    lastMessage = msg;
    notify();
}

synchronized public Move move() throws InterruptedException {
    while (true) {
        allowMoves = true;
        System.out.print(" waiting for move()...");
        wait();
        allowMoves = false;
        if (lastMessage instanceof Move) {
            System.out.print(" process move()...");
            Move m = (Move) lastMessage;
            return m;
        }
    }
}

public volatile boolean allowMoves;

synchronized public boolean getAllowMoves() {
    return allowMoves;
}
As I said, I am new to this and probably a little ahead of myself (as per usual, but its kinda my style to jump into the deep end, great for a quick learning curve anyway).
Cheers for your help.
Not sure if this will help, but what if you use an AtomicBoolean instead of synchronized and volatile? It is documented as being lock-free and thread-safe.
The problem is that you are calling synchronized methods on two different objects.
gameRoom.game.getCurrentPlayer().getAllowMove() <-- this is synchronized on the CurrentPlayer instance.
gameRoom.getGame().handle(move) <-- this is synchronized on gameRoom.getGame().
This is your issue. You don't need the synchronized keyword for getAllowMoves, since the field is volatile and volatile guarantees visibility semantics.
public boolean getAllowMoves() {
    return allowMoves;
}
There is a primitive dedicated to resource management: the Semaphore.
You need to:
create a semaphore with the number of permits set to 1
use acquire when looking for a move
use release after the move is complete
That way you will never have two concurrent invocations of the handle method; see the sketch below.
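A minimal sketch of that idea applied to the asker's code (the field name moveSlot is illustrative):

import java.util.concurrent.Semaphore;

// One permit: a new move may only be submitted once the previous one has been
// fully processed by the game thread.
private final Semaphore moveSlot = new Semaphore(1);

// Caller side, replacing the busy-wait loop in playerTurn():
public void playerTurn(Move move) throws InterruptedException {
    moveSlot.acquire();                  // blocks until the game thread has released the permit
    gameRoom.getGame().handle(move);
}

// Game thread side, after the move returned by move() has been applied:
//     moveSlot.release();               // lets the next playerTurn() proceed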

Two Synchronized blocks execution in Java

Why is it that two synchronized blocks can't be executed simultaneously by two different threads in Java?
EDIT
public class JavaApplication4 {

    public static void main(String[] args) {
        new JavaApplication4();
    }

    public JavaApplication4() {
        Thread t1 = new Thread() {
            @Override
            public void run() {
                if (Thread.currentThread().getName().equals("Thread-1")) {
                    test(Thread.currentThread().getName());
                } else {
                    test1(Thread.currentThread().getName());
                }
            }
        };
        Thread t2 = new Thread(t1);
        t2.start();
        t1.start();
    }

    public synchronized void test(String msg) {
        for (int i = 0; i < 10; i++) {
            try {
                Thread.sleep(100);
            } catch (InterruptedException ex) {
            }
            System.out.println(msg);
        }
    }

    public synchronized void test1(String msg) {
        for (int i = 0; i < 10; i++) {
            try {
                Thread.sleep(100);
            } catch (InterruptedException ex) {
            }
            System.out.println(msg + " from test1");
        }
    }
}
Your statement is false. Any number of synchronized blocks can execute in parallel as long as they don't contend for the same lock.
But if your question is about blocks contending for the same lock, then it is wrong to ask "why is it so" because that is the purpose of the whole concept. Programmers need a mutual exclusion mechanism and they get it from Java through synchronized.
Finally, you may be asking "Why would we ever need to mutually exclude code segments from executing in parallel". The answer to that would be that there are many data structures that only make sense when they are organized in a certain way and when a thread updates the structure, it necessarily does it part by part, so the structure is in a "broken" state while it's doing the update. If another thread were to come along at that point and try to read the structure, or even worse, update it on its own, the whole thing would fall apart.
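As a tiny illustration of such a "broken while updating" state (the Range class here is hypothetical):

class Range {
    private int lo, hi;                // invariant: lo <= hi

    synchronized void set(int newLo, int newHi) {
        lo = newLo;                    // between these two writes the invariant may not hold
        hi = newHi;
    }

    synchronized int width() {
        return hi - lo;                // without the common lock, a reader could see a half-updated pair
    }
}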
EDIT
I saw your example and your comments, and now it's obvious what is troubling you: the semantics of the synchronized modifier on a method. It means that the method will contend for a lock on the monitor of this. All synchronized methods of the same object will contend for the same lock.
That is the whole concept of synchronization: if you take a lock on an object (or a class), no other thread can enter any of the synchronized blocks guarded by that same lock.
Example
class A {

    public void method1() {
        synchronized (this) { // Block 1, taking a lock on the object
            // do something
        }
    }

    public void method2() {
        synchronized (this) { // Block 2, taking a lock on the object
            // do something
        }
    }
}
If one thread of an object enters any of the synchronized blocks, all other threads of the same object will have to wait for that thread to come out of the synchronized block before they can enter any of the synchronized blocks. If there are N such blocks, only one thread of the object can access only one block at a time. Please note my emphasis on threads of the same object. The concept does not apply if we are dealing with threads from different objects.
Let me also add that if you take a lock on the class, the above concept expands to every object of the class. So if instead of saying synchronized(this) I had used synchronized(A.class), the code would instruct the JVM to make the thread wait for the other thread to finish the synchronized block, irrespective of which object each thread belongs to.
Edit: Please understand that when you take a lock (by using the synchronized keyword), you are not just taking a lock on one block. You are taking a lock on the object. That means you are telling the JVM "hey, this thread is doing some critical work which might change the state of the object (or class), so don't let any other thread do any other critical work". Critical work here refers to all the code in synchronized blocks which take a lock on that particular object (or class), not only the code in one synchronized block.
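A tiny sketch contrasting the two cases described above (the class name is illustrative):

class B {
    void perInstance() {
        synchronized (this) {          // excludes other synchronized (this) blocks on the same instance only
            // critical work on instance state
        }
    }

    void perClass() {
        synchronized (B.class) {       // excludes synchronized (B.class) blocks across all instances
            // critical work on static, class-wide state
        }
    }
}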
This is not absolutely true. If you are dealing with locks on different objects then multiple threads can execute those blocks.
synchronized (obj1) {
    // your code here
}

synchronized (obj2) {
    // your code here
}
In the above case one thread can execute the first block and a second thread can execute the second block; the point here is that the threads are working with different locks.
Your statement is correct only if the threads are dealing with the same lock. Every object is associated with exactly one monitor lock in Java; if one thread has acquired that lock and is executing, any other thread has to wait until the first thread releases it. The lock can be acquired with a synchronized block or a synchronized method.
Two threads can execute synchronized blocks simultaneously as long as they are not locking the same object.
In case the blocks are synchronized on different objects... they can execute simultaneously.
synchronized (object1) {
    ...
}

synchronized (object2) {
    ...
}
EDIT:
Please reason about the output of http://pastebin.com/tcJT009i
In your example when you are invoking synchronized methods the lock is acquired over the same object. Try creating two objects and see.
