From the Java Condition Docs
class BoundedBuffer<E> {
    final Lock lock = new ReentrantLock();
    final Condition notFull = lock.newCondition();
    final Condition notEmpty = lock.newCondition();

    final Object[] items = new Object[100];
    int putptr, takeptr, count;

    public void put(E x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();
            E x = (E) items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
Suppose a thread Produce calls put, so Produce now owns the lock lock. But the while condition is true, so Produce calls notFull.await(). My question is: if a thread Consume now calls take, what exactly happens at the line that says lock.lock()?
I am kind of confused, because we let the lock go inside the critical section, and now it needs to be acquired by a different thread.
If you look closer at the Javadoc for Condition.await(), you'll see that await() atomically releases the lock and suspends the calling thread:
"The lock associated with this Condition is atomically released and the current thread becomes disabled for thread scheduling purposes and lies dormant until one of four things happens..."
Is this code a good implementation of a synchronization mechanism using a semaphore-style counter in Java? I'm not sure whether the length variable is safe, because we have two methods there. I think this is the right way, but I need someone to confirm it for me.
Thank you very much for your time!
public class BlockingQueue<T> {
    static int length;
    T[] contents;
    int capacity;

    public BlockingQueue(int capacity) {
        contents = (T[]) new Object[capacity];
        this.capacity = capacity;
    }

    public synchronized void enqueue(T item) throws InterruptedException {
        while (length == this.capacity) {
            wait();
        }
        contents[length++] = item;
        if (contents.length == 1) {
            notifyAll();
        }
    }

    public synchronized T dequeue() throws InterruptedException {
        while (length == 0) {
            wait();
        }
        if (length == capacity) {
            notifyAll();
        }
        T it = contents[0];
        for (int i = 1; i < length; i++) {
            contents[i - 1] = contents[i]; // shifting left
        }
        length--;
        return it;
    }
}
The implementation looks correct. The structure you have with length and capacity essentially implements a semaphore. A few things could be improved, I believe:
if (contents.length == 1) {
    notifyAll();
}
If there are multiple producers, this is correct. If there is only one producer, notify() should suffice.
Instead of shifting contents, consider allocating capacity + 1 slots and using a circular queue, as sketched below.
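For illustration only, here is a minimal sketch of the circular-queue idea (my own code; the class name and the head/tail field names are not part of the original). Because it keeps an explicit length counter, it gets away with exactly capacity slots rather than capacity + 1, and for simplicity it calls notifyAll() unconditionally:

public class CircularBlockingQueue<T> {

    private final T[] contents;
    private int head = 0;   // next index to dequeue from
    private int tail = 0;   // next index to enqueue into
    private int length = 0; // number of elements currently stored

    @SuppressWarnings("unchecked")
    public CircularBlockingQueue(int capacity) {
        contents = (T[]) new Object[capacity];
    }

    public synchronized void enqueue(T item) throws InterruptedException {
        while (length == contents.length) {
            wait();
        }
        contents[tail] = item;
        tail = (tail + 1) % contents.length;  // wrap around instead of shifting
        length++;
        notifyAll();                          // waiters re-check their conditions
    }

    public synchronized T dequeue() throws InterruptedException {
        while (length == 0) {
            wait();
        }
        T item = contents[head];
        contents[head] = null;                // allow the element to be garbage-collected
        head = (head + 1) % contents.length;
        length--;
        notifyAll();                          // waiters re-check their conditions
        return item;
    }
}

The capacity + 1 trick is only needed when "full" and "empty" are distinguished by the head and tail positions alone, without a separate counter.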
I found the following source code in LinkedBlockingQueue:
public E take() throws InterruptedException {
    E x;
    int c = -1;
    final AtomicInteger count = this.count;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lockInterruptibly();
    try {
        while (count.get() == 0) {
            notEmpty.await();
        }
        x = dequeue();
        c = count.getAndDecrement();
        if (c > 1)
            notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
    if (c == capacity)
        signalNotFull();
    return x;
}
The await method releases the lock, and when it is signalled and goes around the while loop again, it seems like it does not hold the lock. Yet the documentation for Condition (the type of notEmpty) says an IllegalMonitorStateException may be thrown if the lock is not held when calling await.
This confuses me: does it end up holding the lock or not?
Of course it holds the lock. await() releases the lock while the thread is dormant, but re-acquires it before await() returns; that is why the loop can safely re-check whether data can be taken from the queue.
If the queue is still empty, the thread awaits again until it is notified and the queue's count is non-zero.
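A quick way to convince yourself is ReentrantLock.isHeldByCurrentThread(). The sketch below (my own code, not JDK source; the sleep is just a crude way to let the consumer reach await() first) prints true right after await() returns:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class AwaitReacquiresLockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private int count = 0;

    void take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();   // lock released while dormant...
                // ...and re-acquired before await() returns:
                System.out.println("holding lock after await: " + lock.isHeldByCurrentThread()); // true
            }
            count--;
        } finally {
            lock.unlock();
        }
    }

    void put() {
        lock.lock();
        try {
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AwaitReacquiresLockDemo queue = new AwaitReacquiresLockDemo();
        Thread consumer = new Thread(() -> {
            try {
                queue.take();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        Thread.sleep(100);   // crude: let the consumer reach await() first
        queue.put();
        consumer.join();
    }
}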
I implemented my own BlockingQueue<T> and compared it with java.util.concurrent's ArrayBlockingQueue.
Here is my implementation:
public class CustomBlockingQueue<T> implements BlockingQueue<T> {

    private final T[] table;
    private final int capacity;
    private int head = 0;
    private int tail = 0;
    private volatile int size;

    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();

    @SuppressWarnings("unchecked")
    public CustomBlockingQueue(final int capacity) {
        this.capacity = capacity;
        this.table = (T[]) new Object[this.capacity];
        size = 0;
    }

    @Override
    public void add(final T item) throws InterruptedException {
        lock.lock();
        try {
            while (size >= table.length) {
                notFull.await();
            }
            if (tail == table.length) { // wrap around
                tail = 0;
            }
            table[tail] = item;
            size++;
            tail++;
            if (size == 1) { // queue just became non-empty
                notEmpty.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }

    @Override
    public T poll() throws InterruptedException {
        lock.lock();
        try {
            while (size == 0) {
                notEmpty.await();
            }
            if (head == table.length) { // wrap around
                head = 0;
            }
            final T result = table[head];
            table[head] = null;
            size--;
            head++;
            if (size == capacity - 1) { // queue just stopped being full
                notFull.signalAll();
            }
            return result;
        } finally {
            lock.unlock();
        }
    }

    @Override
    public int size() {
        return size;
    }
}
My implementation is backed by an array.
I'm not asking you to review the code, but to help me clarify the difference between mine and Java's.
In my code I call notEmpty.signalAll() or notFull.signalAll() inside an if clause, but the java.util.concurrent one simply invokes signal() in each case.
What is the reason for notifying another thread every time, even when it isn't necessary?
If a thread is blocked until it can read from or add to the queue, the best place for it is in the waitset for the applicable condition. That way it isn't actively contending for the lock and isn't being context-switched in.
If only one item gets added to the queue, we want to signal only one consumer. We don't want to signal more consumers than we have items in the queue, because it makes more work for the system to manage and give timeslices to all the threads that can't make progress anyway.
That's why ArrayBlockingQueue signals one waiter each time an item is enqueued or dequeued, in order to avoid unnecessary wakeups. In your implementation, everybody in the waitset gets woken up on the transition (from empty to non-empty, or from full to not full), regardless of how many of those threads will actually be able to get their work done.
This gets more significant as more threads hit the queue concurrently. Imagine a system with 100 threads waiting to consume something from the queue, but only one item added every 10 seconds. It would be better not to kick 100 threads out of the waitset just to have 99 of them go straight back in.
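As a concrete illustration, here is roughly how your add and poll could be switched to that signal-per-operation style (a sketch only, not the actual ArrayBlockingQueue source; it assumes the same fields as your CustomBlockingQueue above):

// Drop-in replacements for add() and poll() in the CustomBlockingQueue above
// (same fields: lock, notFull, notEmpty, table, head, tail, size).
@Override
public void add(final T item) throws InterruptedException {
    lock.lock();
    try {
        while (size >= table.length) {
            notFull.await();
        }
        if (tail == table.length) {
            tail = 0;
        }
        table[tail] = item;
        size++;
        tail++;
        notEmpty.signal();   // wake at most one consumer: exactly one new item is available
    } finally {
        lock.unlock();
    }
}

@Override
public T poll() throws InterruptedException {
    lock.lock();
    try {
        while (size == 0) {
            notEmpty.await();
        }
        if (head == table.length) {
            head = 0;
        }
        final T result = table[head];
        table[head] = null;
        size--;
        head++;
        notFull.signal();    // wake at most one producer: exactly one slot has been freed
        return result;
    } finally {
        lock.unlock();
    }
}

signal() is enough here only because producers and consumers wait on separate Condition objects and every successful add or poll issues its own signal; with a single monitor shared by both kinds of waiters, a lone notify() could wake the "wrong" kind of thread.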
The following code is an example straight out of the Oracle Java documentation.
class BoundedBuffer {
    final Lock lock = new ReentrantLock();
    final Condition notFull = lock.newCondition();
    final Condition notEmpty = lock.newCondition();

    final Object[] items = new Object[100];
    int putptr, takeptr, count;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();
            Object x = items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
I can see how put() and take() handshake through notFull and notEmpty. What I cannot find is how notFull, notEmpty, and count are initialized. Where would "empty" be set, "full" be cleared, and count zeroed? You can create other conditions, but I don't see where they are initialized either. Shouldn't this happen in the constructor?
Thanks to Robert Harvey: the int field count defaults to 0, and the Condition objects carry no empty/full state of their own, so there is nothing to initialize in a constructor. That makes it work.
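A tiny sketch of that point (my own demo class, assuming the BoundedBuffer above is compiled alongside it): the buffer is usable straight after construction because all of its state lives in the zero-initialized int fields.

class BoundedBufferDefaultsDemo {
    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer buffer = new BoundedBuffer(); // count, putptr, takeptr all default to 0
        buffer.put("first");                        // does not block: count (0) != items.length (100)
        System.out.println(buffer.take());          // prints "first"; the buffer is "empty" again because count is back to 0
    }
}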
I have a question about synchronized blocks and multithreading. I have an "alternate" situation in which two synchronized blocks should be executed before and after a non-synchronized block, and those blocks should not interfere with each other.
Here is (a portion of) the code:
boolean calculate = false;
Double oldSim = null;
synchronized (this) {
    oldSim = struct.get(pair);
    if (oldSim == null || oldSim < maxThreshold)
        calculate = true;
}
if (calculate) {
    // This part should run in parallel
    Double newSim = calculateSimilarity(...);
    synchronized (this) {
        if (newSim > minThreshold && (oldSim == null || newSim > oldSim))
            struct.put(pair, newSim);
    }
}
The problem is that, this way, one thread could execute the first synchronized block while another thread is executing the second one. So I thought of this solution:
double maxThreshold = 1.0;

if (checkAndwriteSimilarity(pair, null, false)) {
    Double newSim = calculateSimilarity(table, pKey1, pKey2, pKey1Val, pKey2Val, c);
    checkAndwriteSimilarity(pair, newSim, true);
}

private synchronized boolean checkAndwriteSimilarity(Pair pair, Double newSim, boolean write) {
    Double oldSim = struct.get(pair);
    if (!write) {
        if (oldSim == null || oldSim < maxThreshold)
            return true;
        else
            return false;
    } else {
        if (newSim > minThreshold && (oldSim == null || newSim > oldSim))
            struct.put(pair, newSim);
        return true;
    }
}
Do you think this is the right solution? I honestly don't like this method very much...
Do you have alternative solutions to suggest?
Thank you