The following code is an example taken right out of the Oracle Java documentation.
class BoundedBuffer {
    final Lock lock = new ReentrantLock();
    final Condition notFull = lock.newCondition();
    final Condition notEmpty = lock.newCondition();

    final Object[] items = new Object[100];
    int putptr, takeptr, count;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();
            Object x = items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
I can see how put() and take() handshake through notFull and notEmpty. What I cannot find is how notFull, notEmpty, and count are initialized. Where would notEmpty be set, notFull be cleared, and count zeroed? You can create other conditions, but I don't see where they are initialized either. Shouldn't this happen in the constructor?
Thanks to Robert Harvey: the int field count defaults to 0 (as do putptr and takeptr), and that makes it work. The lock and the two conditions are initialized right where they are declared.
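To spell it out with a sketch: the object-typed fields are initialized at their declarations, and the int fields get Java's default value of 0, so an explicit constructor would only restate those defaults (shown here purely for illustration):

class BoundedBuffer {
    final Lock lock = new ReentrantLock();          // created at the declaration
    final Condition notFull = lock.newCondition();  // created from the lock, also at the declaration
    final Condition notEmpty = lock.newCondition();
    final Object[] items = new Object[100];
    int putptr, takeptr, count;                     // instance ints default to 0

    // Redundant: this only repeats what Java already guarantees for fields.
    BoundedBuffer() {
        putptr = 0;
        takeptr = 0;
        count = 0;
    }

    // put() and take() as in the example above ...
}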
Is this code a good implementation of a blocking queue using Java's synchronized mechanism as a semaphore? I'm not sure whether the length variable is safe, because we have two methods there. I think this is the right way, but I need someone to confirm it for me.
Thank you very much for your time!
public class BlockingQueue<T> {
    static int length;
    T[] contents;
    int capacity;

    public BlockingQueue(int capacity) {
        contents = (T[]) new Object[capacity];
        this.capacity = capacity;
    }

    public synchronized void enqueue(T item) throws InterruptedException {
        while (length == this.capacity) {
            wait();
        }
        contents[length++] = item;
        if (length == 1) {           // queue just became non-empty, wake waiting consumers
            notifyAll();
        }
    }

    public synchronized T dequeue() throws InterruptedException {
        while (length == 0) {
            wait();
        }
        if (length == capacity) {    // a slot is about to free up, wake waiting producers
            notifyAll();
        }
        T it = contents[0];
        for (int i = 1; i < length; i++) {
            contents[i - 1] = contents[i]; // shifting left
        }
        length--;
        return it;
    }
}
The implementation looks correct. The structure you have with length and capacity essentially implements a semaphore. I believe a few items can be improved:
if (length == 1) {
    notifyAll();
}
If there are multiple producers, this is correct. If there is only one producer, notify() should suffice.
Instead of shifting contents on every dequeue, consider allocating capacity + 1 slots and using a circular queue.
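A minimal sketch of that second suggestion (the class name and the head/tail fields are mine, not from the question); it signals unconditionally, which keeps the logic simple:

public class CircularBlockingQueue<T> {
    private final T[] contents;
    private int head = 0;                 // index of the next item to dequeue
    private int tail = 0;                 // index of the next free slot

    @SuppressWarnings("unchecked")
    public CircularBlockingQueue(int capacity) {
        // One spare slot lets head == tail mean "empty" and
        // (tail + 1) % contents.length == head mean "full".
        contents = (T[]) new Object[capacity + 1];
    }

    public synchronized void enqueue(T item) throws InterruptedException {
        while ((tail + 1) % contents.length == head) {   // full
            wait();
        }
        contents[tail] = item;
        tail = (tail + 1) % contents.length;
        notifyAll();   // wake any consumer waiting for an item
    }

    public synchronized T dequeue() throws InterruptedException {
        while (head == tail) {                           // empty
            wait();
        }
        T item = contents[head];
        contents[head] = null;                           // allow the item to be garbage collected
        head = (head + 1) % contents.length;
        notifyAll();   // wake any producer waiting for a free slot
        return item;
    }
}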
I have the following ArrayList-style implementation and driver code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LongArrayListUnsafe {
    public static void main(String[] args) {
        LongArrayList dal1 = LongArrayList.withElements();
        for (int i = 0; i < 1000; i++)
            dal1.add(i);
        // Runtime.getRuntime().availableProcessors()
        ExecutorService executorService = Executors.newFixedThreadPool(4);
        long start = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            executorService.execute(new Runnable() {
                public void run() {
                    for (int i = 0; i < 1000; i++)
                        dal1.size();
                    for (int i = 0; i < 1000; i++)
                        dal1.get(i % 100);
                }
            });
        }
        executorService.shutdown();
        try {
            executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
        } catch (InterruptedException e) {
            System.out.println("major disaster!");
        }
    }
}

class LongArrayList {
    private long[] items;
    private int size;

    public LongArrayList() {
        reset(); // reset() (not shown in the question) presumably allocates items and sets size = 0
    }

    public static LongArrayList withElements(long... initialValues) {
        LongArrayList list = new LongArrayList();
        for (long l : initialValues)
            list.add(l);
        return list;
    }

    // Number of items in the list
    public synchronized int size() {
        return size;
    }

    // Return item number i
    public synchronized long get(int i) {
        if (0 <= i && i < size)
            return items[i];
        else
            throw new IndexOutOfBoundsException(String.valueOf(i));
    }

    // Add item x to end of list
    public synchronized LongArrayList add(long x) {
        if (size == items.length) {
            long[] newItems = new long[items.length * 2];
            for (int i = 0; i < items.length; i++)
                newItems[i] = items[i];
            items = newItems;
        }
        items[size] = x;
        size++;
        return this;
    }
}
Now, this concurrent driver code simply reads from the list, which has already been built, and that goes pretty fast. But I was wondering whether it would be possible to make this read-only workload faster with a ReadWriteLock. In size and get, that looks like this:
synchronized public int size() {
    readWriteLock.readLock().lock();
    int ret = this.size.get();
    readWriteLock.readLock().unlock();
    return ret;
}
and
public long get(int i) {
    readWriteLock.readLock().lock();
    if (0 <= i && i < size.get()) {
        long ret = items.get(i);
        readWriteLock.readLock().unlock();
        return ret;
    } else {
        throw new IndexOutOfBoundsException(String.valueOf(i));
    }
}
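(The readWriteLock field itself is not shown; presumably it is a java.util.concurrent.locks.ReentrantReadWriteLock declared roughly like this:)

private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();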
However, using the ReadWriteLock is way slower, and it gets even slower as I add more threads. Why is this? When my driver code is only reading, shouldn't the threads have more or less unlimited access to the methods?
A java.util.concurrent.locks.ReadWriteLock is an inherently more complex thing than a mutual-exclusion lock like synchronized; the documentation of the class states this. The overhead of the read-write semantics is likely bigger than the cost of return this.size; or return this.items[i];, even with the surrounding bounds check.
Let's also look at your proposal in particular. You want to replace the original
public synchronized int size() {
return size;
}
with the proposal
synchronized public int size() {          // <-- locks exclusively/mutually on "this"
    readWriteLock.readLock().lock();      // <-- locks on readWriteLock.readLock()
    int ret = this.size.get();            // <-- is size an AtomicInteger now?
    readWriteLock.readLock().unlock();
    return ret;
}
I assume the use of synchronized was a typo, or it would add another lock to the equation. I also assume this.size.get() should be this.size (using an AtomicInteger for size makes no sense in this context and adds additional cost). If my assumptions are correct, your actual proposal would be:
public int size() {
    readWriteLock.readLock().lock();
    int ret = this.size;
    readWriteLock.readLock().unlock();
    return ret;
}

public long get(int i) {
    readWriteLock.readLock().lock();
    if (0 <= i && i < this.size) {
        long ret = items[i];
        readWriteLock.readLock().unlock();
        return ret;
    } else {
        throw new IndexOutOfBoundsException(String.valueOf(i));
    }
}

public LongArrayList add(long x) {
    readWriteLock.writeLock().lock();
    if (size == items.length) {
        long[] newItems = new long[items.length * 2];
        for (int i = 0; i < items.length; i++)
            newItems[i] = items[i];
        this.items = newItems;
    }
    items[size] = x;
    size++;
    readWriteLock.writeLock().unlock();
    return this;
}
The implementation of get(int) is dangerous: if an IndexOutOfBoundsException is thrown, the read lock remains locked forever. That won't slow down further reads, but it keeps all future calls to add(long) waiting. If you use a lock, it is advisable to combine it with finally to ensure it is always unlocked:
public long get(int i) {
    readWriteLock.readLock().lock();
    try {
        if (0 <= i && i < size) {
            return items[i];
        }
        throw new IndexOutOfBoundsException(String.valueOf(i));
    } finally {
        readWriteLock.readLock().unlock();
    }
}

public LongArrayList add(long x) {
    readWriteLock.writeLock().lock();
    try {
        if (size == items.length) {
            long[] newItems = new long[items.length * 2];
            for (int i = 0; i < items.length; i++)
                newItems[i] = items[i];
            items = newItems;
        }
        items[size] = x;
        size++;
    } finally {
        readWriteLock.writeLock().unlock();
    }
    return this;
}
As mentioned, even if you read far more often than you write, plain synchronized can be more performant here, because the protected operations are so cheap.
I found the following source code in LinkedBlockingQueue:
public E take() throws InterruptedException {
    E x;
    int c = -1;
    final AtomicInteger count = this.count;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lockInterruptibly();
    try {
        while (count.get() == 0) {
            notEmpty.await();
        }
        x = dequeue();
        c = count.getAndDecrement();
        if (c > 1)
            notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
    if (c == capacity)
        signalNotFull();
    return x;
}
The await method releases the lock, and after the thread is signalled it is back in the while loop, where it seemingly does not hold the lock. But the documentation of notEmpty (a Condition) specifies that an IllegalMonitorStateException is thrown if the lock is not held when calling await. This confuses me: does it hold the lock or not, eventually?
Of course it holds the lock. await() releases the lock while the thread is waiting and re-acquires it before returning. The loop is there to judge whether data can actually be taken from the queue: if the queue is still empty after waking up, the thread awaits again until it is notified and the queue count is no longer zero.
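To make that concrete, here is a minimal stand-alone sketch of the same await-in-a-loop pattern (the class and field names are illustrative, not the actual LinkedBlockingQueue source), with comments marking where the lock is held:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class AwaitLoopExample {
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private int count = 0;

    int take() throws InterruptedException {
        lock.lock();                 // lock acquired here
        try {
            while (count == 0) {
                // await() atomically releases the lock and parks this thread.
                // When the thread is signalled (or interrupted), it re-acquires
                // the lock before await() returns, so the loop condition is
                // always re-checked while holding the lock.
                notEmpty.await();
            }
            count--;                 // still holding the lock here
            return count;
        } finally {
            lock.unlock();           // released on every exit path
        }
    }

    void put() {
        lock.lock();
        try {
            count++;
            notEmpty.signal();       // the lock must be held to signal, too
        } finally {
            lock.unlock();
        }
    }
}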
I implemented my custom BlockingQueue<T> and compared it with java.util.concurrent's ArrayBlockingQueue.
Here is my implementation:
public class CustomBlockingQueue<T> implements BlockingQueue<T> {

    private final T[] table;
    private final int capacity;
    private int head = 0;
    private int tail = 0;
    private volatile int size;

    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();

    @SuppressWarnings("unchecked")
    public CustomBlockingQueue(final int capacity) {
        this.capacity = capacity;
        this.table = (T[]) new Object[this.capacity];
        size = 0;
    }

    @Override
    public void add(final T item) throws InterruptedException {
        lock.lock();
        try {
            while (size >= table.length) {
                notFull.await();
            }
            if (tail == table.length) {
                tail = 0;
            }
            table[tail] = item;
            size++;
            tail++;
            if (size == 1) {
                notEmpty.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }

    @Override
    public T poll() throws InterruptedException {
        lock.lock();
        try {
            while (size == 0) {
                notEmpty.await();
            }
            if (head == table.length) {
                head = 0;
            }
            final T result = table[head];
            table[head] = null;
            size--;
            head++;
            if (size == capacity - 1) {
                notFull.signalAll();
            }
            return result;
        } finally {
            lock.unlock();
        }
    }

    @Override
    public int size() {
        return size;
    }
}
My implementation is based on an array.
I'm not asking you to review the code, but to help me clarify the difference between mine and Java's.
In my code I call notEmpty.signalAll() or notFull.signalAll() only inside an if clause, but the java.util.concurrent implementation simply invokes signal() in each case.
What is the reason for notifying another thread every time, even when there is no need for it?
If a thread is blocking until it can read from or add to the queue, the best place for it is in the waitset for the applicable condition. That way it isn't contending actively for the lock and isn't getting context-switched into.
If only one item gets added to the queue, we want to signal only one consumer. We don't want to signal more consumers than we have items in the queue, because it makes more work for the system to have to manage and give timeslices to all the threads that can't make progress regardless.
That's why ArrayBlockingQueue signals one at a time for each time an item is enqueued or dequeued, in order to avoid unnecessary wakeups. In your implementation everybody in the waitset gets woken up on the transition (from empty to non-empty, or from full to not full), regardless of how many of those threads will be able to get their work accomplished.
This gets more significant as more threads are hitting this concurrently. Imagine a system with 100 threads waiting to consume something from the queue, but only one item is added every 10 seconds. It would be better not to kick out 100 threads from the waitset just to have 99 of them have to go back in.
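To make the contrast concrete, here is a sketch of your add/poll with ArrayBlockingQueue-style signalling, i.e. at most one waiter woken per item enqueued or dequeued (based on your code, not on the actual JDK source):

@Override
public void add(final T item) throws InterruptedException {
    lock.lock();
    try {
        while (size >= table.length) {
            notFull.await();
        }
        if (tail == table.length) {
            tail = 0;
        }
        table[tail] = item;
        size++;
        tail++;
        notEmpty.signal();   // every enqueued item wakes at most one waiting consumer
    } finally {
        lock.unlock();
    }
}

@Override
public T poll() throws InterruptedException {
    lock.lock();
    try {
        while (size == 0) {
            notEmpty.await();
        }
        if (head == table.length) {
            head = 0;
        }
        final T result = table[head];
        table[head] = null;
        size--;
        head++;
        notFull.signal();    // every freed slot wakes at most one waiting producer
        return result;
    } finally {
        lock.unlock();
    }
}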
From the Java Condition docs:
class BoundedBuffer<E> {
    final Lock lock = new ReentrantLock();
    final Condition notFull = lock.newCondition();
    final Condition notEmpty = lock.newCondition();

    final Object[] items = new Object[100];
    int putptr, takeptr, count;

    public void put(E x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();
            E x = (E) items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
Suppose a thread Produce calls put, so Produce now owns the lock lock. But the while condition is true, so Produce calls notFull.await(). My question is: if a thread Consume now calls take, what exactly happens at the line that says lock.lock()?
I am kind of confused, because we let go of the lock inside the critical section, and it now needs to be acquired by a different thread.
If you look closer at the Javadoc for Condition.await(), you'll see that the await() method atomically releases the lock and suspends the current thread:
"The lock associated with this Condition is atomically released and the current thread becomes disabled for thread scheduling purposes and lies dormant until one of four things happens..."