Use a semaphore in writer/reader - Java

So I'm attending a course in multithreaded development and am currently learning about semaphores. In our latest assignment we are supposed to use three threads and two queues. The writer thread writes chars to the first queue, then an "encryptor" thread reads chars from that queue, encrypts each char, and adds it to the second queue. Finally, a reader thread reads from the second queue. To handle synchronization we are supposed to use semaphores and a mutex, but I managed without any:
import java.util.LinkedList;
import java.util.Queue;

public class Buffer {

    private Queue<Character> qPlain = new LinkedList<Character>();
    private Queue<Character> qEncrypt = new LinkedList<Character>();
    private final int CAPACITY = 3;

    public Buffer() {
        System.out.println("New Buffer!");
    }

    public synchronized void addPlain(char c) {
        while (qPlain.size() == CAPACITY) {
            try {
                wait();
                System.out.println("addPlain is waiting to add Data");
            } catch (InterruptedException e) {
            }
        }
        qPlain.add(c);
        notifyAll();
        System.out.println("addPlain Adding Data-" + c);
    }

    public synchronized char removePlain() {
        while (qPlain.size() == 0) {
            try {
                wait();
                System.out.println("----------removePlain is waiting to return Data.");
            } catch (InterruptedException e) {
            }
        }
        notifyAll();
        char c = qPlain.remove();
        System.out.println("---------------removePlain Returning Data-" + c);
        return c;
    }

    public synchronized void addEncrypt(char c) {
        while (qEncrypt.size() == CAPACITY) {
            try {
                wait();
                System.out.println("addEncrypt is waiting to add Data");
            } catch (InterruptedException e) {
            }
        }
        qEncrypt.add(c);
        notifyAll();
        System.out.println("addEncrypt Adding Data-" + c);
    }

    public synchronized char removeEncrypt() {
        while (qEncrypt.size() == 0) {
            try {
                wait();
                System.out.println("----------------removeEncrypt is waiting to return Data.");
            } catch (InterruptedException e) {
            }
        }
        notifyAll();
        char c = qEncrypt.remove();
        System.out.println("--------------removeEncrypt Returning Data-" + c);
        return c;
    }
}
So this works fine, but I'm not going to pass, as I haven't used any semaphore. I do understand the concept, but I just don't see the point of using one in this case: I have two queues and just one reader and one writer for each.
EDIT: Updated with semaphores instead. It almost works; the problem arises when the removePlain() method gets called while the queue is empty. I'm pretty sure I should block it, but I'm lost here. Could I not just use a mutex here instead?
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class Buffer {

    private Semaphore encryptedSem = new Semaphore(0);
    private Semaphore decryptedSem = new Semaphore(0);
    private final Queue<Character> qPlain = new LinkedList<Character>();
    private final Queue<Character> qEncrypt = new LinkedList<Character>();
    private final int CAPACITY = 3;
    private boolean startedWrite = false;
    private boolean startedRead = false;

    /**
     * Adds a character to the queue containing non-encrypted chars.
     *
     * @param c
     */
    public void addPlain(char c) {
        // Makes sure that this writer executes first.
        if (!startedWrite) {
            startedWrite = true;
            encryptedSem = new Semaphore(1);
        }
        if (qPlain.size() < CAPACITY) {
            acquireLock(encryptedSem);
            System.out.println("addPlain has lock");
            qPlain.add(c);
            releaseLock(encryptedSem);
        }
    }

    /**
     * Removes and returns the next char in the non-encrypted queue.
     *
     * @return
     */
    public char removePlain() {
        // TODO Need to fix what happens when the queue is 0. Right now it just
        // returns a char that is 0. This needs to be blocked somehow.
        char c = 0;
        if (qPlain.size() > 0) {
            acquireLock(encryptedSem);
            System.out.println("removePlain has lock");
            c = qPlain.remove();
            releaseLock(encryptedSem);
        } else {
            System.out.println("REMOVEPLAIN CALLED WHEN qPlain IS EMPTY");
        }
        return c;
    }

    /**
     * Adds a character to the queue containing the encrypted chars.
     *
     * @param c
     */
    public void addEncrypt(char c) {
        if (!startedRead) {
            startedRead = true;
            decryptedSem = new Semaphore(1);
        }
        if (qEncrypt.size() < CAPACITY) {
            acquireLock(decryptedSem);
            System.out.println("addEncrypt has lock");
            qEncrypt.add(c);
            releaseLock(decryptedSem);
        }
    }

    /**
     * Removes and returns the next char in the encrypted queue.
     *
     * @return
     */
    public char removeEncrypt() {
        char c = 0;
        if (qEncrypt.size() > 0) {
            acquireLock(decryptedSem);
            System.out.println("removeEncrypt has lock");
            c = qEncrypt.remove();
            releaseLock(decryptedSem);
        }
        return c;
    }

    /**
     * Acquires a permit from the given semaphore.
     *
     * @param sem
     */
    private void acquireLock(Semaphore sem) {
        try {
            sem.acquire();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    /**
     * Releases a permit on the given semaphore.
     *
     * @param sem
     */
    private void releaseLock(Semaphore sem) {
        sem.release();
    }
}

OK, so trying to address your concerns, without doing your homework :-)
About your first sample
At first sight, this is a working sample. You are using a form of mutual exclusion through the synchronized keyword, which allows you to use this.wait/notify correctly. It also provides safeguards, seeing that every thread synchronizes on the same monitor, which provides adequate happens-before safety.
In other words, thanks to this single monitor, you are assured that everything in the synchronized methods is executed exclusively and that each method's side effects are visible inside the other methods.
My only minor gripe is that your queues are not final, which, according to safe object publication guidelines and depending on how your whole system/threads are bootstrapped, might lead to visibility issues. Rule of thumb in multithreaded code (and maybe even generally): whatever can be made final should be.
The real problem with your code is that it does not fulfill your requirements: use semaphores.
About your second sample
Unsafe boolean mutation
This one has real issues. First, your startedWrite/startedRead booleans: you mutate them (change their true/false value) outside of any synchronization (lock, semaphore, synchronized... nothing at all). This is unsafe; under the Java memory model it would be legal for a thread that has not performed the mutation to never see the mutated value. Put another way, the first write could set startedWrite to true, and it could be that all other threads never see that true value.
Some discussions on this:
- https://docs.oracle.com/javase/tutorial/essential/concurrency/memconsist.html
- Java's happens-before and synchronization
So anything that relies on these booleans is inherently flawed in your sample. That includes your Semaphore assignments, for one thing.
There are several ways to correct this:
- Always mutate shared state under a synchronization tool of some sort (in your first sample it was the synchronized keyword; here it could be your semaphores), and make sure that the same tool is used by all threads mutating or accessing the variable
- Or use a concurrency-safe type, like AtomicBoolean in this case, which guarantees that any mutation is made visible to other threads (see the sketch below)
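A minimal sketch of that second option (only startedWrite and addPlain come from your sample; the rest is illustrative):
import java.util.concurrent.atomic.AtomicBoolean;

private final AtomicBoolean startedWrite = new AtomicBoolean(false);

public void addPlain(char c) {
    // compareAndSet atomically flips false -> true for exactly one thread,
    // and the new value is guaranteed to be visible to all other threads
    if (startedWrite.compareAndSet(false, true)) {
        // one-time initialization goes here
    }
    // ... rest of the method
}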
Race conditions
Another issue with your second code sample is that you check the sizes of your queues before taking a lock and modifying them, that is:
if (qPlain.size() > 0) {
    acquireLock(encryptedSem);
    ...
    c = qPlain.remove();
    releaseLock(encryptedSem);
} else {
    System.out.println("REMOVEPLAIN CALLED WHEN qPlain IS EMPTY");
}
Two concurrent threads could perform the check on the first line at the same time and behave wrongly. A typical scenario is:
- qPlain has a size of 1
- Thread 1 arrives at the if, checks that qPlain is not empty, the check succeeds, and thread 1 is paused by the OS scheduler right here and now
- Thread 2 arrives at the same if and the same check succeeds for the same reason
- Threads 1 and 2 resume from there on; both think they are allowed to take one element out of qPlain, which is wrong, because qPlain actually has a size of 1.
It will fail. You should have had mutual exclusion of some sort. You cannot (rule of thumb again) perform a check outside a lock and then perform the mutation under the lock; both the check and the mutation should happen under, broadly speaking, the same lock. (Unless you are a very advanced multithreading kind of person and you know optimistic locking and stuff like that really well.) A sketch follows.
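Concretely, the check and the mutation must live in the same critical section. In the style of your first sample, that shape would be (a sketch, not the full solution):
public synchronized char removePlain() {
    if (qPlain.size() > 0) {     // the check...
        return qPlain.remove();  // ...and the mutation, atomically together
    }
    return 0; // still needs a real blocking strategy for the empty case
}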
Possible deadlock
Another rule of thumb: any time you acquire and release a lock and/or a resource at the same call site, you should use a try/finally pattern.
That is, no matter how it is done, your code should always look like
acquireSemaphore();
try {
    // do something
} finally {
    releaseSemaphore();
}
Same goes for locks, input or output streams, sockets, ... Failure to do so may lead to your semaphore being acquired but never released, especially in case of an uncaught exception. So do use try and finally around your resources.
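With java.util.concurrent.Semaphore, the concrete idiom looks like this (a minimal sketch; sem is assumed to be shared by the cooperating threads):
Semaphore sem = new Semaphore(1);
// ...
sem.acquire();       // kept outside the try: if acquire itself fails, there is nothing to release
try {
    // work on the shared state
} finally {
    sem.release();   // always runs, even if the work above throws
}
Note that acquire() is deliberately placed before the try block; putting it inside would risk releasing a permit that was never obtained.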
Conclusions
With such serious flaws, I did not really read your code to see if the "spirit" of it works. Maybe it does, but at this point it's not worth checking further.
Going forward with your assignment
You are asked to use two tools: semaphores and mutual exclusion (e.g. synchronized, or Lock I guess). The two are not exactly the same thing!
You probably get mutual exclusion, as your first sample showed. Probably not semaphores yet. The point of semaphores is that they (safely) manage a number of "permits". One (a thread) can ask for a permit, and if the semaphore has one available and grants it, one can proceed with one's work. Otherwise, one is put in a "holding pattern" (a wait) until a permit is available. At some point, one* is expected to give the permit back to the semaphore, for others to use.
(*Please note: it is not mandatory for a semaphore to work that the threads performing permit acquisition are the ones to perform permit release. That is part of what makes a lock and a semaphore so different, and it's a good thing.)
Let's start simple: a semaphore that has only one permit can be used as a mutual exclusion. Bonus point: it can be released by a different thread than the one that acquired it. That makes it ideal for message passing between threads: they can exchange permits.
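To make that concrete, here is a tiny sketch (all names invented) where one thread releases a permit it never acquired, unblocking another thread; a plain lock cannot express this hand-off:
import java.util.concurrent.Semaphore;

public class Handoff {
    public static void main(String[] args) throws InterruptedException {
        Semaphore ready = new Semaphore(0); // zero permits to start with

        new Thread(() -> {
            // ... do some preparatory work ...
            ready.release(); // this thread gives a permit it never took
        }).start();

        ready.acquire(); // blocks until the other thread has released
        System.out.println("main proceeds after the signal");
    }
}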
What does that remind us of? Of wait/notify, of course!
A possible path to a solution
So we have a semaphore, and it has a number of permits. What could the meaning of this number be? A natural candidate: have a semaphore hold the number of elements inside the queue. At first, this could be zero.
Each time somebody puts an element in the queue, it raises the number of permits by one.
Each time somebody takes an element off the queue, it lowers the number of permits.
Then: trying to take an element off an empty queue means trying to acquire a permit from an empty semaphore, which will automatically block the caller. This seems to be what you want.
But!
We have yet to define "putting an element on top of a full queue". That is because semaphores are not bounded in permits: one can start with an empty semaphore and call release a thousand times, and end up with 1000 permits available. We would blow past our maximum capacity without any kind of bound.
Let's say we have a workaround for that; we're still not done: we have not yet made sure that readers and writers do not modify the queue at the same time. And this is crucial for correctness!
So we need other ideas.
Well, issue #2 is easy: we are allowed to use exclusive locks for this exercise, so we'll use them. Just make sure that any manipulation of the list itself is under a synchronized block using the same monitor.
Issue #1... Well, we have a semaphore representing a "not empty" condition. That's one of the two pairs of wait/notify you had in your first sample. OK, cool: let's make another semaphore representing a "not full" condition, the other wait/notify pair of your code sample!
So, to recap: use a semaphore for each couple of wait/notify in your original sample. Keep a mutual exclusion to actually modify the contents of the queue object. And be very careful about the articulation of the mutual-exclusion part with the semaphores; it is the crux of the matter.
And I'll stop there to let you walk down this path if you want.
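(For reference only, and deliberately generic rather than a finished solution to your assignment: the recap above, with one counting semaphore per wait/notify couple and a synchronized block as the mutual exclusion, typically takes this textbook shape. All names here are made up.)
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

class SemaphoreBuffer {

    private final Queue<Character> q = new LinkedList<>();
    private final Semaphore notFull = new Semaphore(3);  // permits = free slots ("not full")
    private final Semaphore notEmpty = new Semaphore(0); // permits = available elements ("not empty")

    void add(char c) throws InterruptedException {
        notFull.acquire();             // wait for a free slot; bounds the queue
        synchronized (q) {             // exclusive access to the queue itself
            q.add(c);
        }
        notEmpty.release();            // signal: one more element available
    }

    char remove() throws InterruptedException {
        notEmpty.acquire();            // wait for an element
        char c;
        synchronized (q) {
            c = q.remove();
        }
        notFull.release();             // signal: one more free slot
        return c;
    }
}
Note how each method acquires from one semaphore and releases the other: producers consume "free slot" permits and create "available element" permits, and consumers do the reverse.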
Bonus point
You should not have to code the same thing twice here. In your samples, you coded the same logic twice (once for the "clear text", once for the "encrypted"): basically, wait for (at least) one free spot before stacking a char, and wait for the presence of (at least) one char before popping it.
This should be one and the same code/methods. Write it once and you'll get it right (or wrong, of course) everywhere. Write it twice and you double the chance of mistakes.
Future thoughts
This is all still very complex for something that could be done using a BlockingQueue, but then again, homework does have another purpose :-).
A bit more complex: this message-passing pattern of signaling, with one thread waiting for a "notEmpty" signal while the other waits on a "notFull" signal, is the exact use case of the JDK Condition object, which mimics the use of wait/notify.

Related

java concurrency: BlockingQueue based producer/consumer does not seem to work well with compound actions that needs synchronized code block?

It came as a surprise to me when I tried to implement some compound actions with a BlockingQueue-based producer/consumer pattern, which makes me think I have most likely missed something obvious.
1. In short
I need
my consumer to make sequences of actions of the form 'take obj from the queue + do more consumer operations on the obj' atomic, and
my producer to make sequences of actions of the form 'offer obj onto the queue + do more producer operations on the obj' atomic, and
the two atomic sequences above synchronized on the same obj, obviously.
Without such atomicity, problems may occur; see 'PROBLEM!!' in the comments of the producer code in section 2 for an example.
But I can't simply put a synchronized block around the call to take() and its associated consumer operations: when the queue is empty, this consumer would be stuck there FOREVER, since it would still hold the sync lock while waiting for the producer to fill the queue with an obj, and the consumer's possession of that lock would in turn stop the producer from entering the corresponding critical region to do any 'producing'.
2. Specifically, simplified example code is as follows:
Common code known to the producer and consumer classes:
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;

Queue<QObj> nbq = new ConcurrentLinkedQueue<>();
BlockingQueue<QObj> bq = new LinkedBlockingQueue<>();
List<String> idList = new LinkedList<>();
Object lockObj = idList;
int Idx = 1;

public static class QObj {
    public String id;
    public String content;

    public QObj(String id, String content) {
        this.id = id;
        this.content = content;
    }
}
Main logic in producer class:
public void produceBlocking() {
    QObj o = new QObj(String.valueOf(Idx), "Content_" + Idx++);
    // synchronized (lockObj) {
    // no point in including the Queue.offer(...) call in a synchronized block, as we
    // won't be able to use synchronized() in the corresponding consumer anyway,
    // for the reason described above
    bq.offer(o);
    synchronized (lockObj) {
        // PROBLEM!! by now, 'o' could have been 'consumed' already,
        // hence we shouldn't do the following operations:
        // do the associated part of the compound action of the 'producer'
        idList.add(o.id);
        // do some more operations as part of this compound action ...
    }
    // }
}
Main logic in consumer class:
public void consumeBlocking() {
    while (true) {
        try {
            // synchronized (lockObj) {
            // can't simply put synchronized() here to make the following compound action atomic
            // - when the queue is empty, this consumer will be stuck here forever, since it still
            // possesses the lockObj, which stops the producer from entering the critical region
            // to do any 'producing'
            QObj o = bq.take();
            synchronized (lockObj) {
                // do the associated part of the compound action of the 'consumer'
                idList.remove(o.id);
                // do some more operations as part of this compound action ...
            }
            // }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
3. Why has this not been a common problem?
I feel this must be a commonly occurring problem when people use BlockingQueue, and the fact that I couldn't really locate anything addressing a similar problem directly affirms my belief that I might have gotten something fundamentally wrong.
Can someone give some hint about a direct solution or point out where I thought wrong about this problem?
4. Alternative Ideas
I did think of a few ideas as alternatives, but I feel none of them addresses this issue directly, and all have drawbacks (highlighted with 'DRAWBACK!!' in the comments in the code).
4.1 -
Do a check using Queue.contains() before continuing
public void produceBlockingWithCheck() {
    QObj o = new QObj(String.valueOf(Idx), "Content_" + Idx++);
    bq.offer(o);
    synchronized (lockObj) {
        // First, check if the obj could have already been consumed.
        // DRAWBACK!!: this could be very costly, e.g.
        // when 'bq' is a LinkedBlockingQueue, contains(...) always triggers
        // a sequential traversal, and the queue itself can be very large
        if (bq.contains(o)) {
            // do the associated part of the compound action of the 'producer'
            idList.add(o.id);
            // do some more operations as part of this compound action ...
        }
    }
}
4.2 -
Adjust the order of ops in the producer: move the Queue.offer() call to the end
public void produceBlockingOrderAdjusted() {
    QObj o = new QObj(String.valueOf(Idx), "Content_" + Idx++);
    // do the associated part of the compound action of the 'producer', but only before
    // calling BlockingQueue.offer(...)
    // DRAWBACK!!: even if this works for this simple case, such order adjustment
    // won't be logically possible in all cases, will it?
    synchronized (lockObj) {
        idList.add(o.id);
        // do some more operations as part of this compound action ...
    }
    bq.offer(o);
}
4.3 -
Use non-blocking queues instead.
public void produceNonBlocking() {
    QObj o = new QObj(String.valueOf(Idx), "Content_" + Idx++);
    synchronized (lockObj) {
        nbq.offer(o);
        // do the associated part of the compound action of the 'producer'
        idList.add(o.id);
        // do some more operations as part of this compound action ...
    }
}

public void consumeNonBlocking() {
    while (true) {
        synchronized (lockObj) {
            // kind of doing our own blocking.
            QObj o = nbq.poll();
            if (o != null) {
                // do the associated part of the compound action of the 'consumer'
                idList.remove(o.id);
                // do some more operations as part of this compound action ...
            }
            // DRAWBACK!!: if the 'producers' don't produce faster than the 'consumers'
            // consume, this 'miss' could happen too often and get costly
        }
    }
}
Why has this not been a common problem?
Multi-threading is like the old board game "Othello," which was marketed with the tag line, "A minute to learn, a lifetime to master." Modern threading libraries make it easy to get started writing multi-threaded code, but it's not easy to design algorithms that use multi-threading effectively. Sometimes, the same design principles that underlie efficient, single-threaded algorithms can be completely inappropriate in multi-threaded code.
An experienced designer knows that when thread A puts some object in a queue to be "consumed" by thread B, it's best to let thread A be done with that object for good. Simply taking the object out of the queue should be enough for thread B to have exclusive use of it. If you can't do that without adding complexity to your design,... Well, that's the price you pay for using multiple threads.
A multi-threaded, parallel computation that's only half as efficient as a single-threaded implementation could still run four times as fast on an eight-core machine.
I need
my consumer to make sequences of actions of the form 'take obj from the queue + do more consumer operations on the obj' atomic, and
my producer to make sequences of actions of the form 'offer obj onto the queue + do more producer operations on the obj' atomic, and
the two atomic sequences above synchronized on the same obj, obviously
You can use wait+notifyAll for that.
Try to read this article: it explains wait+notifyAll in detail.
But I can’t simply put a synchronized block around the call to take() and its associated consumer operations as when the queue is empty, this consumer will be stuck there FOREVER since it will still possess the sync lock while it waits on the producer to fill the queue with an obj, and that sync lock possession of consumer will in turn stop the producer from entering corresponding critical region to do any 'producing'.
wait+notifyAll solves this problem because a thread that is waiting inside wait() releases the lock (and later, when wait() needs to return, the thread acquires the lock again).
Also, you can look at the Condition javadocs.
Condition is the same concept as wait+notify, but for the Lock interface (which is a more flexible and powerful version of synchronized).
Again, look at the BoundedBuffer example in the javadocs - it seems like it could be modified to do what you want in your code.
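A minimal sketch of that direction, assuming a hand-rolled queue (reusing the question's QObj and idList) so that the take and the consumer-side bookkeeping share a single monitor; it is unbounded for brevity:
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;

class CompoundQueue {

    private final Queue<QObj> q = new LinkedList<>();
    private final List<String> idList = new LinkedList<>();

    public synchronized void produce(QObj o) {
        q.offer(o);
        idList.add(o.id);    // same critical section as the offer
        notifyAll();         // wake any consumer blocked in consume()
    }

    public synchronized QObj consume() throws InterruptedException {
        while (q.isEmpty()) {
            wait();          // releases the monitor while sleeping, so produce() can run
        }
        QObj o = q.remove();
        idList.remove(o.id); // same critical section as the take
        return o;
    }
}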

Nested spin-lock vs volatile check

I was about to write something about this, but maybe it is better to have a second opinion before appearing like a fool...
So the idea in the next piece of code (Android's Room package v2.4.1, RoomTrackingLiveData) is that the winner thread is kept alive and forced to check for contention that may have entered the process (coming from losing threads) while it was computing.
Meanwhile, failed CAS operations performed by these losing threads keep them from entering and executing code, preventing repeated signals (mComputeFunction.call() or postValue()).
final Runnable mRefreshRunnable = new Runnable() {
    @WorkerThread
    @Override
    public void run() {
        if (mRegisteredObserver.compareAndSet(false, true)) {
            mDatabase.getInvalidationTracker().addWeakObserver(mObserver);
        }
        boolean computed;
        do {
            computed = false;
            if (mComputing.compareAndSet(false, true)) {
                try {
                    T value = null;
                    while (mInvalid.compareAndSet(true, false)) {
                        computed = true;
                        try {
                            value = mComputeFunction.call();
                        } catch (Exception e) {
                            throw new RuntimeException("Exception while computing database"
                                    + " live data.", e);
                        }
                    }
                    if (computed) {
                        postValue(value);
                    }
                } finally {
                    mComputing.set(false);
                }
            }
        } while (computed && mInvalid.get());
    }
};
final Runnable mInvalidationRunnable = new Runnable() {
    @MainThread
    @Override
    public void run() {
        boolean isActive = hasActiveObservers();
        if (mInvalid.compareAndSet(false, true)) {
            if (isActive) {
                getQueryExecutor().execute(mRefreshRunnable);
            }
        }
    }
};
The most obvious thing here is that atomics are being used for everything they are not good at:
- identifying losers and ignoring winners (what reactive patterns need), AND
- a happens-once behavior, performed by the loser thread.
So this is completely counterintuitive to what atomics are able to achieve: they are extremely good at defining winners, and anything that requires a "happens once" makes state consistency impossible to ensure (the last point is suitable for starting a philosophical debate about concurrency, and I will definitely agree with any conclusion).
If atomics are used as "contention checkers" and "contention blockers", then we can implement the exact same principle with a volatile check of an atomic reference after a successful CAS, checking this volatile against the snapshot/witness during every other step of the process.
private final AtomicInteger invalidationCount = new AtomicInteger();

private final IntFunction<Runnable> invalidationRunnableFun = invalidationVersion -> (Runnable) () -> {
    if (invalidationVersion != invalidationCount.get()) return;
    try {
        T value = computeFunction.call();
        if (invalidationVersion != invalidationCount.get()) return; // in case computation takes too long...
        postValue(value);
    } catch (Exception e) {
        e.printStackTrace();
    }
};

getQueryExecutor().execute(invalidationRunnableFun.apply(invalidationCount.incrementAndGet()));
In this case, each thread is left with the individual responsibility of checking its position in the contention lane; if its position has moved and it is not at the front anymore, that means a new thread has entered the process, and it should stop further processing.
This alternative is so laughably simple that my first question is:
Why didn't they do it like this?
Maybe my solution has a flaw... but the thing about the first alternative (the nested spin-lock) is that it follows the idea that an atomic CAS operation cannot be verified a second time, and that verification can only be achieved with a cmpxchg process... which is... false.
It also follows the common (but wrong) belief that what you define after a successful CAS is the sacred word of GOD... as I've seen code seldom check for concurrency issues once it enters the if body.
if (mInvalid.compareAndSet(false, true)) {
    // Ummm... yes... mInvalid is still true...
    // Let's use a second AtomicReference just in case...
}
It also follows common code conventions that involve "double-<enter something>" in concurrency scenarios.
So it is only because the first code follows those ideas that I am inclined to believe my solution is a valid and better alternative.
There is an argument in favor of the "nested spin-lock" option, but it does not hold up much:
The first alternative is "safer" precisely because it is SLOWER, so it has MORE time to identify contention at the tail end of a stream of incoming threads.
BUT it is not even 100% safe, because of the "happens once" thing that is impossible to ensure.
There is also a behavior in that code whereby, when it reaches the end of a continuous flow of incoming threads, two signals are dispatched one after the other: the second-to-last one, and then the last one.
But IF it is safer because it is slower, wouldn't that defeat the goal of using atomics, since they are supposed to be the better-performing alternative in the first place?

Do I need the "synchronized" keyword when i only invoke the "size()" method of a collection [duplicate]

I know that the documentation says that the object is thread safe, but does that mean that all access to it from all methods is thread safe? So if I call put() on it from many threads at once, and take() on the same instance at the same time, will nothing bad happen?
I ask because this answer is making me second guess:
https://stackoverflow.com/a/22006181/4164238
The quick answer is yes, they are thread safe. But let's not leave it there...
Firstly, a little housekeeping: BlockingQueue is an interface, and any implementation that is not thread safe will be breaking the documented contract. The link that you included was referring to LinkedBlockingQueue, which has some cleverness to it.
The link that you included makes an interesting observation: yes, there are two locks within LinkedBlockingQueue. However, it fails to understand that the edge case that a 'simple' implementation would have fallen foul of was in fact being handled, which is why the take and put methods are more complicated than one would at first expect.
LinkedBlockingQueue is optimized to avoid using the same lock for both reading and writing. This reduces contention, but for correct behavior it relies on the queue not being empty. When the queue has elements in it, the push and the pop points are not at the same region of memory and contention can be avoided. However, when the queue is empty, the contention cannot be avoided, so extra code is required to handle this common 'edge' case. This is a common trade-off between code complexity and performance/scalability.
How, then, does LinkedBlockingQueue know when the queue is empty/not empty and thus handle the threading? The answer is that it uses an AtomicInteger and a Condition as two extra concurrent data structures. The AtomicInteger is used to check whether the length of the queue is zero, and the Condition is used to wait for a signal notifying a waiting thread when the queue is probably in the desired state. This extra coordination does have an overhead; however, measurements have shown that when ramping up the number of concurrent threads, the overhead of this technique is lower than the contention introduced by using a single lock.
Below I have copied the code from LinkedBlockingQueue and added comments explaining how it works. At a high level, take() first locks out all other calls to take() and then signals put() as necessary. put() works in a similar way: first it blocks out all other calls to put(), and then signals take() if necessary.
From the put() method:
// putLock coordinates the calls to put() only; further coordination
// between put() and take() follows below
putLock.lockInterruptibly();
try {
    // block while the queue is full; count is shared between put() and take()
    // and is safely visible between cores, but prone to change between calls;
    // a while loop is used because state can change between signals, which is
    // why signals get rechecked and resent.. read on to see more of that
    while (count.get() == capacity) {
        notFull.await();
    }
    // we know that the queue is not full, so add
    enqueue(e);
    c = count.getAndIncrement();
    // if the queue is still not full, send a signal to wake up
    // any thread that is possibly waiting for the queue to be a little
    // emptier -- note that this is logically part of take(), but it
    // has to be here because take() blocks itself
    if (c + 1 < capacity)
        notFull.signal();
} finally {
    putLock.unlock();
}
if (c == 0)
    signalNotEmpty();
From take():
takeLock.lockInterruptibly();
try {
    // wait for the queue to stop being empty
    while (count.get() == 0) {
        notEmpty.await();
    }
    // remove element
    x = dequeue();
    // decrement shared count
    c = count.getAndDecrement();
    // send a signal that the queue is not empty
    // note that this is logically part of put(), but
    // for thread coordination reasons it is here
    if (c > 1)
        notEmpty.signal();
} finally {
    takeLock.unlock();
}
if (c == capacity)
    signalNotFull();
Yes, all implementations of BlockingQueue are thread safe for put and take and all actions.
The link only goes halfway... and does not cover the full details. It is thread safe.
That answer is a little strange - for a start, BlockingQueue is an interface, so it doesn't have any locks. Implementations such as ArrayBlockingQueue use the same lock for add() and take(), so they would be fine. Generally, if any implementation is not thread safe, then it is a buggy implementation.
I think @Chris K has missed some points. "When the queue has elements within it, then the push and the pop points are not at the same region of memory and contention can be avoided." Notice that when the queue has one element, head.next and tail point to the same node, and put() and take() can both acquire their locks and execute.
I think the empty and full conditions can be handled by synchronizing put() and take(). However, when it comes to a single element, the LinkedBlockingQueue has a null dummy head node, which may have something to do with the thread safety.
I tried this implementation on LeetCode:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;

class FooBar {

    private final BlockingQueue<Object> line = new LinkedBlockingDeque<>(1);
    private static final Object PRESENT = new Object();
    private int n;

    public FooBar(int n) {
        this.n = n;
    }

    public void foo(Runnable printFoo) throws InterruptedException {
        for (int i = 0; i < n; i++) {
            line.put(PRESENT);
            // printFoo.run() outputs "foo". Do not change or remove this line.
            printFoo.run();
        }
    }

    public void bar(Runnable printBar) throws InterruptedException {
        for (int i = 0; i < n; i++) {
            line.take();
            // printBar.run() outputs "bar". Do not change or remove this line.
            printBar.run();
        }
    }
}
With n = 3, most times I get the correct response of foobarfoobarfoobar, but sometimes I get barbarfoofoofoobar, which is quite surprising.
I resolved to using ReentrantLock and Condition instead. @chris-k, can you shed more light?

What happens when few threads trying to call the same synchronized method?

So I've got this horse race, and when a horse gets to the finishing line, I invoke an arrival method. Let's say I've got 10 threads, one for each horse, and the first horse that arrives indeed invokes 'arrive':
public class FinishingLine {

    List<Horse> arrivals;

    public FinishingLine() {
        arrivals = new ArrayList<Horse>();
    }

    public synchronized void arrive(Horse horse) {
        arrivals.add(horse);
    }
}
Of course I set the arrive method to synchronized, but I don't completely understand what could happen if it weren't synchronized; the professor just said it wouldn't be safe.
Another thing that I would like to understand better: how is it decided which thread will run after the first one has finished? After the first thread finishes 'arrive' and the method gets unlocked, which thread will run next?
1) It is undefined what the behaviour would be, but you should assume that it is not what you would want it to do in any way that you can rely upon.
If two threads try to add at the same time, you might get both elements added (in either order), only one element added, or maybe even neither.
The pertinent quote from the Javadoc is:
Note that this implementation is not synchronized. If multiple threads access an ArrayList instance concurrently, and at least one of the threads modifies the list structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more elements, or explicitly resizes the backing array; merely setting the value of an element is not a structural modification.)
2) This is down to how the OS schedules the threads. There is no guarantee of "fairness" (execution in arrival order) for regular synchronized blocks, although there are certain classes (Semaphore is one) which give you the choice of a fair execution order.
e.g. you can implement a fair execution order by using a Semaphore:
public class FinishingLine {

    List<Horse> arrivals;
    final Semaphore semaphore = new Semaphore(1, true); // true = fair (FIFO) ordering

    public FinishingLine() {
        arrivals = new ArrayList<Horse>();
    }

    public void arrive(Horse horse) throws InterruptedException {
        semaphore.acquire();
        try {
            arrivals.add(horse);
        } finally {
            semaphore.release();
        }
    }
}
However, it would be easier to do this with a fair blocking queue, which handles the concurrent access for you:
public class FinishingLine {

    final BlockingQueue<Horse> queue = new ArrayBlockingQueue<>(NUM_HORSES, true);

    public void arrive(Horse horse) {
        queue.add(horse);
    }
}

How to implement synchronized checks for Bounded Buffer to avoid Race Conditions?

Working on the classic multiple consumer/producer problem, I have an issue that is driving me around the bend, regarding how to avoid race conditions when inserting into/removing from a circular buffer. I appreciate any help in advance!
Sample code for a circular buffer, for example purposes. Similar to my implementation (note: I cannot use collection types, only arrays, for this):
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {

    private final String[] buffer;
    private final int capacity;
    private int front;
    private int rear;
    private int count;

    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedBuffer(int capacity) {
        super();
        this.capacity = capacity;
        buffer = new String[capacity];
    }

    public void deposit(String data) throws InterruptedException {
        lock.lock();
        try {
            while (count == capacity) {
                notFull.await();
            }
            buffer[rear] = data;
            rear = (rear + 1) % capacity;
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public String fetch() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();
            }
            String result = buffer[front];
            front = (front + 1) % capacity;
            count--;
            notFull.signal();
            return result;
        } finally {
            lock.unlock();
        }
    }
}
What I need to know is: how can I implement a method for checking whether the buffer is full/empty? This method needs to be included in this BoundedBuffer and must be called from another class (producer/consumer) before giving the go-ahead for/calling the inserting/writing methods.
Pseudocode for the method in the producer class:
if (!bufferFull) {
    buffer.addElement;
} else {
    thread.sleep(5);
    threadHasSleptFor++;
}
I am using threads, and there are multiple producers/consumers (in this case 2 producers/consumers, but I may require more). I need it so that if the buffer is full, the thread has to wait until it becomes available for insertion, and the time it waits needs to be stored for output purposes (not debug, part of the core features). The issue I am having is this:
- Thread 1 (producer) checks the bufferFull condition; it's false.
- The scheduler switches to Thread 2 midway.
- Thread 2 also checks the bufferFull condition; it's false.
- Thread 2 proceeds to insert.
- The scheduler switches back to Thread 1.
- Thread 1 now goes straight to the insert line, as it has already checked, but Thread 2 beat it to it.
- Boom.
I'm somewhat new to Java, though as I understand it, this is the "time-of-check/time-of-use" race condition issue.
Can someone please advise how this can be implemented safely, and how I would loop the code so the threadHasSleptFor variable keeps incrementing on every failed attempt (providing the methods would be great)? I want it so that only the thread that has passed the check can begin to insert an item; the second producer must wait for the lock.
Thanks.
This is by definition impossible to do without higher-level locking.
You have to guarantee that the check of whether the buffer is full and the following insert are atomic from the thread's perspective, which means you have to acquire some common lock around both. This general problem is indeed called "time of check to time of use" and leads to many interesting race conditions down the line.
The solution to these problems is not to check whether you can do an operation and then do it, but to just try the operation and handle the error case. So if you don't want your operation to block when the buffer is full, just implement a tryDeposit method that throws an exception if it can't store a value, or returns a boolean success value.
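A sketch of such a tryDeposit, dropped into the question's BoundedBuffer (tryDeposit is just the suggested name from above, not an existing API):
public boolean tryDeposit(String data) {
    lock.lock();
    try {
        if (count == capacity) {
            return false;                // full: report failure instead of blocking
        }
        buffer[rear] = data;
        rear = (rear + 1) % capacity;
        count++;
        notEmpty.signal();               // same bookkeeping as deposit()
        return true;
    } finally {
        lock.unlock();
    }
}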
Although in your case, if you have to store the time spent waiting before you could push the value into the buffer, I don't see why a simple:
long start = System.nanoTime();
queue.deposit(data);
long end = System.nanoTime();
wouldn't do the trick as well.
If I understand you correctly, you are asking how to make a thread wait until it's OK to call deposit() or wait until it's OK to call fetch(). But there's no need for that: your deposit() method will block the calling thread until there is room in the queue, and your fetch() method will block the caller until there is something to fetch. That's what the notFull.await() and notEmpty.await() calls do.
await() unlocks the lock, sleeps until the condition is signalled by another thread, and then re-locks the lock. The condition may or may not still be true when the caller finally gets the lock again, but that's why you have the await() calls in loops: the thread keeps trying until it finally holds the lock and the condition is true. Then it does its work (adds or removes an item), unlocks the lock, and returns.
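For completeness, a small sketch of a producer thread that records how long deposit() blocked it (the buffer size and item here are illustrative):
BoundedBuffer buffer = new BoundedBuffer(10);

Thread producer = new Thread(() -> {
    try {
        long start = System.nanoTime();
        buffer.deposit("item");          // blocks while the buffer is full
        long waitedNanos = System.nanoTime() - start;
        System.out.println("deposit blocked for " + waitedNanos + " ns");
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
    }
});
producer.start();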
