PriorityQueue and PriorityBlockingQueue - java

Which of these two programs should I choose, and why? More generally: why should I choose PriorityBlockingQueue over PriorityQueue?
PriorityBlockingQueue
import java.util.concurrent.PriorityBlockingQueue;
public class PriorityBlockingQueueExample {
static PriorityBlockingQueue<String> priorityQueue = new PriorityBlockingQueue<String>();
public static void main(String[] args) {
new Thread(){
public void run(){
try {
System.out.println(priorityQueue.take() +" is removed from priorityQueue object");
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}.start();
new Thread(){
public void run(){
priorityQueue.add("string variable");
System.out.println("Added an element to the queue");
}
}.start();
}
}
PriorityQueue
import java.util.PriorityQueue;
public class PriorityQueueTest {
static PriorityQueue<String> priorityQueue = new PriorityQueue<String>();
private static Object lock = new Object();
public static void main(String[] args) {
new Thread(){
public void run(){
synchronized(lock){
try {
while(priorityQueue.isEmpty()){lock.wait();}
System.out.println(priorityQueue.remove() +" is removed from priorityQueue object");
lock.notify();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}.start();
new Thread(){
public void run(){
synchronized(lock){
priorityQueue.add("string variable");
System.out.println("Added an element to the queue");
lock.notify();
}
}
}.start();
}
}

A normal Queue returns null (from poll()) when it is empty, while a BlockingQueue's take() blocks until a value becomes available.
The priority part of the queues you are using simply means that items are read from the queue in a specific order (either natural ordering, if they implement Comparable, or the order imposed by a Comparator).
Typically your code should depend on the abstract type, either PriorityQueue or BlockingQueue. If your code requires knowledge of both of these concepts, a re-think may be needed.
There are numerous reasons why you might need a PriorityQueue, and they boil down to message ordering. For example, on a queue of jobs, you might want to be able to give those jobs priority. That said, typically the code processing the jobs should be agnostic to the order.
With a BlockingQueue you're typically in the realm of worker threads picking up queued work; when there's no work to do, those threads can be blocked until work becomes available. As with the PriorityQueue example, the calling code could be agnostic to this, though since you may want to use some sort of wait timeout, that's not always the case.
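A minimal sketch to make that difference concrete (class and variable names are just illustrative): poll() on an empty PriorityQueue returns null right away, while take() on a PriorityBlockingQueue blocks the calling thread until another thread supplies an element.
import java.util.PriorityQueue;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingVsNonBlocking {
    public static void main(String[] args) throws InterruptedException {
        PriorityQueue<String> plain = new PriorityQueue<>();
        System.out.println(plain.poll()); // prints "null" immediately: nothing to return

        PriorityBlockingQueue<String> blocking = new PriorityBlockingQueue<>();
        new Thread(() -> {
            try {
                TimeUnit.SECONDS.sleep(1);
                blocking.put("hello"); // a producer adds an element one second later
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        // take() blocks the main thread until the element above becomes available.
        System.out.println(blocking.take() + " arrived after roughly one second");
    }
}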

PriorityBlockingQueue was added with the concurrent package in JDK 5; see: http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/package-summary.html
Under the hood it basically does the extra work you wrote by hand for PriorityQueue: adding the commonly necessary synchronize/wait/notify around your queue. The "Blocking" part of the name is there to imply that a thread will block, waiting until an item is available on the queue.
If your app can run on JDK 5 or newer, I'd use PriorityBlockingQueue.

I know this is an old topic, but I saw that you didn't consider a concurrent implementation of a priority queue.
Although Java's collections framework does not have one, it has enough building blocks to create one:
import java.util.Collection;
import java.util.Comparator;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Queue;
import java.util.concurrent.ConcurrentSkipListMap;
public class ConcurrentSkipListPriorityQueue<T> implements Queue<T> {
private ConcurrentSkipListMap<T, Boolean> values;
public ConcurrentSkipListPriorityQueue(Comparator<? super T> comparator) {
values = new ConcurrentSkipListMap<>(comparator);
}
public ConcurrentSkipListPriorityQueue() {
values = new ConcurrentSkipListMap<>();
}
@Override
public boolean add(T e) {
values.put(e, Boolean.TRUE);
return true;
}
@Override
public boolean offer(T e) {
return add(e);
}
@Override
public T remove() {
while (true) {
final T v = values.firstKey(); // throws NoSuchElementException if the queue is empty, as remove() should
if (values.remove(v) != null) { // another thread may have removed it first; if so, retry
return v;
}
}
}
@Override
public T poll() {
try {
while (true) {
if (values.isEmpty()) {
return null;
}
final T v = values.firstKey();
if (values.remove(v) != null) { // retry if another thread removed it first
return v;
}
}
} catch (NoSuchElementException ex) {
return null; // poll should not throw an exception..
}
}
@Override
public T element() {
return values.firstKey();
}
@Override
public T peek() {
if (values.isEmpty()) {
return null;
}
try {
return element();
} catch (NoSuchElementException ex) {
return null;
}
}
@Override
public int size() {
return values.size();
}
@Override
public boolean isEmpty() {
return values.isEmpty();
}
@Override
public boolean contains(Object o) {
return values.containsKey(o);
}
@Override
public Iterator<T> iterator() {
return values.keySet().iterator();
}
@Override
public Object[] toArray() {
return values.keySet().toArray();
}
@Override
public <T> T[] toArray(T[] a) {
return values.keySet().toArray(a);
}
@Override
public boolean remove(Object o) {
return values.remove(o) != null; // Map.remove returns the previous value (or null), not a boolean
}
@Override
public boolean containsAll(Collection<?> c) {
return values.keySet().containsAll(c);
}
@Override
public boolean addAll(Collection<? extends T> c) {
boolean changed = false;
for (T i : c) {
changed |= add(i);
}
return changed;
}
@Override
public boolean removeAll(Collection<?> c) {
return values.keySet().removeAll(c);
}
@Override
public boolean retainAll(Collection<?> c) {
return values.keySet().retainAll(c);
}
@Override
public void clear() {
values.clear();
}
}
This queue is based on a skip list, delegating all of its operations to the ConcurrentSkipListMap class. It allows non-blocking concurrent access from multiple threads.
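A quick usage sketch of the class above (the demo class name is made up for illustration). One caveat worth noting: because elements are stored as map keys, duplicate elements (as defined by the ordering) collapse into a single entry, unlike in a PriorityQueue.
import java.util.Comparator;

public class SkipListQueueDemo {
    public static void main(String[] args) {
        // Natural ordering: the smallest element is returned first.
        ConcurrentSkipListPriorityQueue<Integer> q = new ConcurrentSkipListPriorityQueue<>();
        q.add(5);
        q.add(1);
        q.add(3);
        System.out.println(q.poll()); // 1
        System.out.println(q.poll()); // 3

        // Custom ordering via a Comparator (here: reversed, so the largest comes out first).
        ConcurrentSkipListPriorityQueue<Integer> reversed =
                new ConcurrentSkipListPriorityQueue<>(Comparator.reverseOrder());
        reversed.add(1);
        reversed.add(2);
        System.out.println(reversed.poll()); // 2
    }
}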

Related

Implementing a Semaphore with a Queue

I am trying to create a basic Semaphore implementation using a Queue. The idea is: there is a database, and there are 10 writers. Writers can only write to the database in mutual exclusion. I am using a Queue because I want to be able to implement both First In First Out and Last In First Out ordering of the waiting writers.
Using Semaphore, I can't notify a specific thread to wake up. So my idea is: for every Writer, I create an object, tell the Writer to wait on that object, and put the object in a queue. On release, I remove an object from the queue and notify the thread that is waiting on it. In this way, I think I can make a FIFO or LIFO implementation.
I need help with the actual code implementation:
1. When I run the code below, it gives me a lot of IllegalMonitorStateExceptions.
2. The FIFO and LIFO code (my FIFO code seems incorrect, while for the LIFO code I'm thinking of using a Stack instead of a Queue).
public class Test {
public static void main(String [] args) {
Database db = new Database();
for (int i = 0; i < 10; i++)
(new Thread(new Writer(db))).start();
}
}
public class Writer implements Runnable {
private Database database;
public Writer(Database database) {
this.database = database;
}
public void run() {
this.database.acquireWriteLock();
this.database.write();
this.database.releaseWriteLock();
}
}
public class Database {
private Semaphore lockQueue;
public Database() {
this.lockQueue = new Semaphore();
}
public void write() {
try {
Thread.sleep(1000);
} catch (InterruptedException ie) {}
}
public void acquireWriteLock() {
lockQueue.acquire();
}
public void releaseWriteLock() {
lockQueue.release();
}
}
import java.util.Queue;
import java.util.LinkedList;
public class Semaphore {
private Queue<Object> queue;
public Semaphore() {
this.queue = new LinkedList<Object>();
}
public synchronized void acquire() {
Object object = new Object();
try {
if (this.queue.size() > 0) {
object.wait();
this.queue.add(object);
}
} catch (InterruptedException ie) {}
this.queue.add(object);
}
public synchronized void release() {
Object object = this.queue.remove();
object.notify();
}
}
You need to hold the object's monitor (be inside a synchronized block on that object) before you can call wait() and notify() on it.
Check whether the following code works for you:
import java.util.LinkedList;
import java.util.Queue;
public class Semaphore {
private Queue<Object> queue;
private int state;
public Semaphore() {
this.queue = new LinkedList<Object>();
}
public void acquire() {
Object object = new Object();
synchronized (object) {
try {
if (this.state > 0) {
this.queue.add(object);
object.wait();
} else {
state++;
}
} catch (InterruptedException ie) {
}
}
}
public void release() {
Object object = this.queue.poll();
state--;
if(null == object) {
return;
}
synchronized (object) {
object.notify();
}
}
}
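As a side note beyond this answer: if FIFO hand-off between waiting writers is the main goal, the built-in java.util.concurrent.Semaphore already provides it through its fairness flag, so a hand-rolled wait/notify queue may not be needed at all. A minimal sketch (the sleep time and thread count are illustrative):
import java.util.concurrent.Semaphore;

public class FairWriterDemo {
    // A fair binary semaphore: blocked threads acquire permits in FIFO (arrival) order.
    private static final Semaphore writeLock = new Semaphore(1, true);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    writeLock.acquire(); // granted in the order the threads asked for it
                    try {
                        Thread.sleep(1000); // simulate the exclusive database write
                    } finally {
                        writeLock.release();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}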

Setting and accessing a variable from two different threads

I have two threads, one setting a variable of a class, and the other one accessing the variable by a get method.
public class Parent {
private int value = -1;
public int getValue() {
return this.value;
}
public void setValue(int value){
this.value = value;
}
private class UpdatingVaribale extends Thread {
public void run() {
while (!Thread.currentThread().isInterrupted()) {
setValue(2);
Thread.currentThread().interrupt();
}
}
}
private class GettingVaribale extends Thread {
public void run() {
while (getValue() == -1) {
try{
System.out.println(getValue());
Thread.sleep(500);
} catch (InterruptedException e) {
}
}
System.out.println(getValue());
}
}
}
The problem is that the condition of the while loop in the second thread is always true: System.out.println(getValue()) always prints -1. I am wondering why the second thread never sees the new value of value, which is 2. I didn't think synchronization mattered here, since one thread only sets the variable and the other one only reads it.
There are some solutions here:
Use the standard Java class AtomicInteger for storing your value in a thread-safe way; it is usually the best and fastest option (a sketch of this follows the code below).
Add the synchronized keyword to your getValue and setValue methods.
Add the volatile keyword to the value field definition.
The source of your problem is that the value field can appear different in different threads because of CPU caching and memory optimizations; you have to tell the JVM not to apply those optimizations here, so that the latest value becomes visible to all threads.
UPDATE code for testing
public class SyncProblem {
public static void main(String[] args) {
Parent parent = new Parent();
new Thread(parent.new GettingVaribale()).start();
new Thread(parent.new UpdatingVaribale()).start();
}
}
class Parent {
private volatile int value = -1;
public int getValue() {
return this.value;
}
public void setValue(int value) {
this.value = value;
}
class UpdatingVaribale implements Runnable {
@Override
public void run() {
while (!Thread.currentThread().isInterrupted()) {
setValue(2);
Thread.currentThread().interrupt();
}
}
}
class GettingVaribale implements Runnable {
@Override
public void run() {
while (getValue() == -1) {
try {
System.out.println(getValue());
Thread.sleep(500);
} catch (InterruptedException e) {
}
}
System.out.println(getValue());
}
}
}
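For comparison with the volatile version above, here is a minimal sketch of option 1 (AtomicInteger), keeping the same getter/setter shape; the class name is made up for illustration:
import java.util.concurrent.atomic.AtomicInteger;

class AtomicParent {
    // AtomicInteger guarantees visibility across threads and atomic updates without locking.
    private final AtomicInteger value = new AtomicInteger(-1);

    public int getValue() {
        return value.get();
    }

    public void setValue(int newValue) {
        value.set(newValue);
    }
}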

ThreadPoolExecutor's queuing behavior customizable to prefer new thread creation over queuing?

ThreadPoolExecutor doc says
If corePoolSize or more threads are running, the Executor always
prefers queuing a request rather than adding a new thread.
If there are more than corePoolSize but less than maximumPoolSize
threads running, a new thread will be created only if the queue is
full.
Is there a way to get the executor to prefer new thread creation until the max is reached, even if more than core-size threads are running, and only then start queuing? Tasks would get rejected if the queue reached its maximum size. It would be nice if the timeout setting would kick in and remove threads down to core size after a busy burst has been handled. I see the reason behind preferring to queue, as it allows for throttling; however, this customization would additionally allow the queue to act mainly as a list of tasks yet to be run.
There is no way to get this exact behavior from a ThreadPoolExecutor.
But here are a couple of solutions:
Consider,
If fewer than corePoolSize threads are running, a new thread will be created for every submitted task until corePoolSize threads are running.
A new thread will only be created if the queue is full and fewer than maximumPoolSize threads are running.
So, wrap a ThreadPoolExecutor in a class which monitors how fast items are being queued. Then, change the core pool size to a higher value when many items are being submitted. This will cause a new thread to be created each time a new item is submitted.
When the submission burst is done, the core pool size needs to be manually reduced again so the threads can naturally time out. If you're worried the busy burst could end abruptly, causing the manual approach to fail, be sure to use allowCoreThreadTimeOut(true).
Create a fixed thread pool, and call allowCoreThreadTimeOut(true)
Unfortunately this uses more threads during low submission bursts, and stores no idle threads during zero traffic.
Use the first solution if you have the time, need, and inclination, as it handles a wider range of submission frequencies and so is more flexible.
Otherwise use the 2nd solution.
Just do what Executors.newFixedThreadPool does and set core and max to the same value. Here's the newFixedThreadPool source from Java 6:
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
What you can do if you have an existing one:
ThreadPoolExecutor tpe = ... ;
tpe.setCorePoolSize(tpe.getMaximumPoolSize());
Edit: As William points out in the comments, this means that all threads are core threads, so none of the threads will time out and terminate. To change this behavior, just use ThreadPoolExecutor.allowCoreThreadTimeOut(true). This makes it so that the threads can time out and be swept away when the executor isn't in use.
It seems that your preference is minimal latency during times of low activity. For that I would just set the corePoolSize to the max and let the extra threads hang around. During high-activity times these threads will be there anyway. During low-activity times their existence won't have much impact. You can set the core thread timeout if you want them to die, though.
That way all the threads will always be available to execute a task as soon as possible.
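A minimal sketch of that configuration (the pool size, queue bound and timeout are illustrative values, not recommendations):
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EagerThreadPoolDemo {
    public static void main(String[] args) {
        int maxThreads = 100;
        // core == max, so a thread is created for each task until the limit is reached
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                maxThreads, maxThreads,
                60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(500)); // bounded queue takes the overflow
        // allow idle "core" threads to die after the keep-alive, so the pool shrinks when quiet
        executor.allowCoreThreadTimeOut(true);

        executor.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
        executor.shutdown();
    }
}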
CustomBlockingQueue
package com.gunjan;
import java.util.concurrent.BlockingQueue;
public abstract class CustomBlockingQueue<E> implements BlockingQueue<E> {
public BlockingQueue<E> blockingQueue;
public CustomBlockingQueue(BlockingQueue blockingQueue) {
this.blockingQueue = blockingQueue;
}
@Override
final public boolean offer(E e) {
// Always report "queue full" so the executor keeps creating threads up to maximumPoolSize;
// the RejectedExecutionHandler below then calls customOffer() to really enqueue the task.
return false;
}
final public boolean customOffer(E e) {
return blockingQueue.offer(e);
}
}
ThreadPoolBlockingQueue
package com.gunjan;
import java.util.Collection;
import java.util.Iterator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
public class ThreadPoolBlockingQueue<E> extends CustomBlockingQueue<E> {
public ThreadPoolBlockingQueue(BlockingQueue blockingQueue) {
super(blockingQueue);
}
@Override
public E remove() {
return this.blockingQueue.remove();
}
@Override
public E poll() {
return this.blockingQueue.poll();
}
@Override
public E element() {
return this.blockingQueue.element();
}
@Override
public E peek() {
return this.blockingQueue.peek();
}
@Override
public int size() {
return this.blockingQueue.size();
}
@Override
public boolean isEmpty() {
return this.blockingQueue.isEmpty();
}
@Override
public Iterator<E> iterator() {
return this.blockingQueue.iterator();
}
@Override
public Object[] toArray() {
return this.blockingQueue.toArray();
}
@Override
public <T> T[] toArray(T[] a) {
return this.blockingQueue.toArray(a);
}
@Override
public boolean containsAll(Collection<?> c) {
return this.blockingQueue.containsAll(c);
}
@Override
public boolean addAll(Collection<? extends E> c) {
return this.blockingQueue.addAll(c);
}
@Override
public boolean removeAll(Collection<?> c) {
return this.blockingQueue.removeAll(c);
}
@Override
public boolean retainAll(Collection<?> c) {
return this.blockingQueue.retainAll(c);
}
@Override
public void clear() {
this.blockingQueue.clear();
}
@Override
public boolean add(E e) {
return this.blockingQueue.add(e);
}
@Override
public void put(E e) throws InterruptedException {
this.blockingQueue.put(e);
}
@Override
public boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException {
return this.blockingQueue.offer(e, timeout, unit);
}
@Override
public E take() throws InterruptedException {
return this.blockingQueue.take();
}
@Override
public E poll(long timeout, TimeUnit unit) throws InterruptedException {
return this.blockingQueue.poll(timeout, unit);
}
@Override
public int remainingCapacity() {
return this.blockingQueue.remainingCapacity();
}
@Override
public boolean remove(Object o) {
return this.blockingQueue.remove(o);
}
@Override
public boolean contains(Object o) {
return this.blockingQueue.contains(o);
}
@Override
public int drainTo(Collection<? super E> c) {
return this.blockingQueue.drainTo(c);
}
@Override
public int drainTo(Collection<? super E> c, int maxElements) {
return this.blockingQueue.drainTo(c, maxElements);
}
}
RejectedExecutionHandlerImpl
package com.gunjan;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
public class RejectedExecutionHandlerImpl implements RejectedExecutionHandler {
@Override
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
boolean inserted = ((CustomBlockingQueue) executor.getQueue()).customOffer(r);
if (!inserted) {
throw new RejectedExecutionException();
}
}
}
CustomThreadPoolExecutorTest
package com.gunjan;
import java.util.concurrent.*;
public class CustomThreadPoolExecutorTest {
public static void main(String[] args) throws InterruptedException {
LinkedBlockingQueue linkedBlockingQueue = new LinkedBlockingQueue<Runnable>(500);
CustomBlockingQueue customLinkedBlockingQueue = new ThreadPoolBlockingQueue<Runnable>(linkedBlockingQueue);
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(5, 100, 60, TimeUnit.SECONDS,
customLinkedBlockingQueue, new RejectedExecutionHandlerImpl());
for (int i = 0; i < 750; i++) {
try {
threadPoolExecutor.submit(new Runnable() {
@Override
public void run() {
try {
Thread.sleep(1000);
System.out.println(threadPoolExecutor);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
});
} catch (RejectedExecutionException e) {
e.printStackTrace();
}
}
threadPoolExecutor.shutdown();
threadPoolExecutor.awaitTermination(Integer.MAX_VALUE, TimeUnit.MINUTES);
System.out.println(threadPoolExecutor);
}
}

Need Fool Proof Synchronization of ArrayList in A Multi-threaded Environment

I've been at this for a week now doing my research on how to properly synchronize an ArrayList.
My main issue in a nutshell is I have a "master" ArrayList of objects. Different threads may come in and add/set/remove from this list. I need to be sure that when one thread is iterating through the ArrayList, another is not changing it.
Now I've read many articles on the "best" way of handling this:
use Collections.synchronizedList
use CopyOnWriteArrayList
use synchronized() blocks in conjunction with Collections.synchronizedList
use Vector (many people are against this)
Using synchronized blocks around every iteration and every add/set/remove block seems to be kind of what I want, but people have said there is a lot of overhead.
So then I started playing with CopyOnWriteArrayList (I do way more reads than writes for my master ArrayList). This is fine for reading, but what a lot of forum threads neglect to mention is that elements cannot be added, set, or removed through the iterator itself. For example (a basic version, but imagine it in a multi-threaded environment):
public static void main(String[] args) {
class TestObject{
private String s = "";
public TestObject(String s){
this.s = s;
}
public void setTheString(String s){
this.s = s;
}
public String getTheString(){
return s;
}
}
CopyOnWriteArrayList<TestObject> list = new CopyOnWriteArrayList<TestObject>();
list.add(new TestObject("A"));
list.add(new TestObject("B"));
list.add(new TestObject("C"));
list.add(new TestObject("D"));
list.add(new TestObject("E"));
ListIterator<TestObject> litr = list.listIterator();
while(litr.hasNext()){
TestObject test = litr.next();
if(test.getTheString().equals("B")){
litr.set(new TestObject("TEST"));
}
}
}
the line "litr.set(new TestObject("TEST"));" would throw a
java.lang.UnsupportedOperationException
And looking at the Java Documentation there is a specific line describing this behavior:
"Element-changing operations on iterators themselves (remove, set, and add) are not supported. These methods throw UnsupportedOperationException."
So then you are forced to modify that list by using
list.set(litr.previousIndex(), new TestObject("TEST"));
Now, technically, shouldn't this present a synchronization issue? If another thread were to come in at the same time and, say, remove all elements from "list", the iterator would not see that; it would go to set "list" at a given index and would throw an exception because the element at that position no longer exists. I just don't understand the point of CopyOnWriteArrayList if you can't add an element through the iterator itself.
Am I missing the point with using CopyOnWriteArrayList?
Do I wrap every iterator that ends up having to add/set/remove an element in a synchronized block?
This HAS to be a common issue with multi-threading. I would have thought someone would have made a class that could handle all this without worry...
Thanks in advance for having a look at this!
As you found out yourself, CopyOnWriteArrayList is NOT ABLE to make completely safe changes while someone else is processing the data, especially not while iterating over the list.
Because: whenever you are working on the data, there is nothing to ensure that your complete block of statements accessing the list executes before someone else changes the list data.
Therefore you MUST have some context (like synchronization) around all your access operations (also for reading!) that spans your whole data-accessing block. For example:
ArrayList<String> list = getList();
synchronized (list) {
int index = list.indexOf("test");
// if the whole block would not be synchronized,
// the index could be invalid after an external change
list.remove(index);
}
Or for iterators:
synchronized (list) {
for (String s : list) {
System.out.println(s);
}
}
But now comes the big problem with this type of synchronization: it is slow and doesn't allow multiple concurrent readers.
Therefore it would be useful to build your own context for data access. I am going to use ReentrantReadWriteLock to allow multiple concurrent readers and improve performance.
I'm very interested in this topic and will make such a context for the ArrayList and attach it here after I've finished it.
20.10.2012 | 18:30 - EDIT:
I created my own access context using ReentrantReadWriteLock for a secure ArrayList. First I will insert the whole SecureArrayList class (most of the first operations are just overriding and protecting), then I will insert my tester class with an explanation of the usage.
I just tested the access with one thread, not with many at the same time, but I'm pretty sure it works! If not, please tell me.
SecureArrayList:
package mydatastore.collections.concurrent;
import java.util.ArrayList;
import java.util.Collection;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;
import java.util.ListIterator;
import java.util.NoSuchElementException;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;
/**
* @date 19.10.2012
* @author Thomas Jahoda
*
* uses ReentrantReadWriteLock
*/
public class SecureArrayList<E> extends ArrayList<E> {
protected final ReentrantReadWriteLock rwLock;
protected final ReadLock readLock;
protected final WriteLock writeLock;
public SecureArrayList() {
super();
this.rwLock = new ReentrantReadWriteLock();
readLock = rwLock.readLock();
writeLock = rwLock.writeLock();
}
// write operations
@Override
public boolean add(E e) {
try {
writeLock.lock();
return super.add(e);
} finally {
writeLock.unlock();
}
}
@Override
public void add(int index, E element) {
try {
writeLock.lock();
super.add(index, element);
} finally {
writeLock.unlock();
}
}
@Override
public boolean addAll(Collection<? extends E> c) {
try {
writeLock.lock();
return super.addAll(c);
} finally {
writeLock.unlock();
}
}
@Override
public boolean addAll(int index, Collection<? extends E> c) {
try {
writeLock.lock();
return super.addAll(index, c);
} finally {
writeLock.unlock();
}
}
@Override
public boolean remove(Object o) {
try {
writeLock.lock();
return super.remove(o);
} finally {
writeLock.unlock();
}
}
@Override
public E remove(int index) {
try {
writeLock.lock();
return super.remove(index);
} finally {
writeLock.unlock();
}
}
@Override
public boolean removeAll(Collection<?> c) {
try {
writeLock.lock();
return super.removeAll(c);
} finally {
writeLock.unlock();
}
}
@Override
protected void removeRange(int fromIndex, int toIndex) {
try {
writeLock.lock();
super.removeRange(fromIndex, toIndex);
} finally {
writeLock.unlock();
}
}
@Override
public E set(int index, E element) {
try {
writeLock.lock();
return super.set(index, element);
} finally {
writeLock.unlock();
}
}
@Override
public void clear() {
try {
writeLock.lock();
super.clear();
} finally {
writeLock.unlock();
}
}
@Override
public boolean retainAll(Collection<?> c) {
try {
writeLock.lock();
return super.retainAll(c);
} finally {
writeLock.unlock();
}
}
@Override
public void ensureCapacity(int minCapacity) {
try {
writeLock.lock();
super.ensureCapacity(minCapacity);
} finally {
writeLock.unlock();
}
}
@Override
public void trimToSize() {
try {
writeLock.lock();
super.trimToSize();
} finally {
writeLock.unlock();
}
}
//// now the read operations
@Override
public E get(int index) {
try {
readLock.lock();
return super.get(index);
} finally {
readLock.unlock();
}
}
@Override
public boolean contains(Object o) {
try {
readLock.lock();
return super.contains(o);
} finally {
readLock.unlock();
}
}
@Override
public boolean containsAll(Collection<?> c) {
try {
readLock.lock();
return super.containsAll(c);
} finally {
readLock.unlock();
}
}
@Override
public Object clone() {
try {
readLock.lock();
return super.clone();
} finally {
readLock.unlock();
}
}
@Override
public boolean equals(Object o) {
try {
readLock.lock();
return super.equals(o);
} finally {
readLock.unlock();
}
}
@Override
public int hashCode() {
try {
readLock.lock();
return super.hashCode();
} finally {
readLock.unlock();
}
}
@Override
public int indexOf(Object o) {
try {
readLock.lock();
return super.indexOf(o);
} finally {
readLock.unlock();
}
}
@Override
public Object[] toArray() {
try {
readLock.lock();
return super.toArray();
} finally {
readLock.unlock();
}
}
@Override
public boolean isEmpty() { // not sure if I have to override this because the size is temporarily stored in every case...
// it could happen that the size is accessed when it just gets assigned a new value,
// and the thread is switched after assigning 16 bits or smth... i dunno
try {
readLock.lock();
return super.isEmpty();
} finally {
readLock.unlock();
}
}
@Override
public int size() {
try {
readLock.lock();
return super.size();
} finally {
readLock.unlock();
}
}
@Override
public int lastIndexOf(Object o) {
try {
readLock.lock();
return super.lastIndexOf(o);
} finally {
readLock.unlock();
}
}
@Override
public List<E> subList(int fromIndex, int toIndex) {
try {
readLock.lock();
return super.subList(fromIndex, toIndex);
} finally {
readLock.unlock();
}
}
@Override
public <T> T[] toArray(T[] a) {
try {
readLock.lock();
return super.toArray(a);
} finally {
readLock.unlock();
}
}
@Override
public String toString() {
try {
readLock.lock();
return super.toString();
} finally {
readLock.unlock();
}
}
////// iterators
@Override
public Iterator<E> iterator() {
return new SecureArrayListIterator();
}
@Override
public ListIterator<E> listIterator() {
return new SecureArrayListListIterator(0);
}
@Override
public ListIterator<E> listIterator(int index) {
return new SecureArrayListListIterator(index);
}
// delegated lock mechanisms
public void lockRead() {
readLock.lock();
}
public void unlockRead() {
readLock.unlock();
}
public void lockWrite() {
writeLock.lock();
}
public void unlockWrite() {
writeLock.unlock();
}
// getters
public ReadLock getReadLock() {
return readLock;
}
/**
* The writeLock also has access to reading, so when holding write, the
* thread can also obtain the readLock. But while holding the readLock and
* attempting to lock write, it will result in a deadlock.
*
* @return
*/
public WriteLock getWriteLock() {
return writeLock;
}
protected class SecureArrayListIterator implements Iterator<E> {
int cursor; // index of next element to return
int lastRet = -1; // index of last element returned; -1 if no such
@Override
public boolean hasNext() {
return cursor != size();
}
@Override
public E next() {
// checkForComodification();
int i = cursor;
if (i >= SecureArrayList.super.size()) {
throw new NoSuchElementException();
}
cursor = i + 1;
lastRet = i;
return SecureArrayList.super.get(lastRet);
}
@Override
public void remove() {
if (!writeLock.isHeldByCurrentThread()) {
throw new IllegalMonitorStateException("when the iteration uses write operations,"
+ "the complete iteration loop must hold a monitor for the writeLock");
}
if (lastRet < 0) {
throw new IllegalStateException("No element iterated over");
}
try {
SecureArrayList.super.remove(lastRet);
cursor = lastRet;
lastRet = -1;
} catch (IndexOutOfBoundsException ex) {
throw new ConcurrentModificationException(); // impossibru, except for bugged child classes
}
}
// protected final void checkForComodification() {
// if (modCount != expectedModCount) {
// throw new IllegalMonitorStateException("The complete iteration must hold the read or write lock!");
// }
// }
}
/**
* An optimized version of AbstractList.ListItr
*/
protected class SecureArrayListListIterator extends SecureArrayListIterator implements ListIterator<E> {
protected SecureArrayListListIterator(int index) {
super();
cursor = index;
}
@Override
public boolean hasPrevious() {
return cursor != 0;
}
@Override
public int nextIndex() {
return cursor;
}
@Override
public int previousIndex() {
return cursor - 1;
}
@Override
public E previous() {
// checkForComodification();
int i = cursor - 1;
if (i < 0) {
throw new NoSuchElementException("No element iterated over");
}
cursor = i;
lastRet = i;
return SecureArrayList.super.get(lastRet);
}
@Override
public void set(E e) {
if (!writeLock.isHeldByCurrentThread()) {
throw new IllegalMonitorStateException("when the iteration uses write operations,"
+ "the complete iteration loop must hold a monitor for the writeLock");
}
if (lastRet < 0) {
throw new IllegalStateException("No element iterated over");
}
// try {
SecureArrayList.super.set(lastRet, e);
// } catch (IndexOutOfBoundsException ex) {
// throw new ConcurrentModificationException(); // impossibru, except for bugged child classes
// EDIT: or any failed direct editing while iterating over the list
// }
}
@Override
public void add(E e) {
if (!writeLock.isHeldByCurrentThread()) {
throw new IllegalMonitorStateException("when the iteration uses write operations,"
+ "the complete iteration loop must hold a monitor for the writeLock");
}
// try {
int i = cursor;
SecureArrayList.super.add(i, e);
cursor = i + 1;
lastRet = -1;
// } catch (IndexOutOfBoundsException ex) {
// throw new ConcurrentModificationException(); // impossibru, except for bugged child classes
// // EDIT: or any failed direct editing while iterating over the list
// }
}
}
}
SecureArrayList_Test:
package mydatastore.collections.concurrent;
import java.util.Iterator;
import java.util.ListIterator;
/**
* @date 19.10.2012
* @author Thomas Jahoda
*/
public class SecureArrayList_Test {
private static SecureArrayList<String> statList = new SecureArrayList<>();
public static void main(String[] args) {
accessExamples();
// mechanismTest_1();
// mechanismTest_2();
}
private static void accessExamples() {
final SecureArrayList<String> list = getList();
//
try {
list.lockWrite();
//
list.add("banana");
list.add("test");
} finally {
list.unlockWrite();
}
////// independent single statement reading or writing access
String val = list.get(0);
//// ---
////// reading only block (just some senseless unoptimized 'whatever' example)
int lastIndex = -1;
try {
list.lockRead();
//
String search = "test";
if (list.contains(search)) {
lastIndex = list.lastIndexOf(search);
}
// !!! MIND !!!
// inserting writing operations here results in a DEADLOCK!!!
// ... which is just really, really awkward...
} finally {
list.unlockRead();
}
//// ---
////// writing block (can also contain reading operations!!)
try {
list.lockWrite();
//
int index = list.indexOf("test");
if (index != -1) {
String newVal = "banana";
list.add(index + 1, newVal);
}
} finally {
list.unlockWrite();
}
//// ---
////// iteration for reading only
System.out.println("First output: ");
try {
list.lockRead();
//
for (Iterator<String> it = list.iterator(); it.hasNext();) {
String string = it.next();
System.out.println(string);
// !!! MIND !!!
// inserting writing operations called directly on the list will result in a deadlock!
// inserting writing operations called on the iterator will result in an IllegalMonitorStateException!
}
} finally {
list.unlockRead();
}
System.out.println("------");
//// ---
////// iteration for writing and reading
try {
list.lockWrite();
//
boolean firstAdd = true;
for (ListIterator<String> it = list.listIterator(); it.hasNext();) {
int index = it.nextIndex();
String string = it.next();
switch (string) {
case "banana":
it.remove();
break;
case "test":
if (firstAdd) {
it.add("whatever");
firstAdd = false;
}
break;
}
if (index == 2) {
list.set(index - 1, "pretty senseless data and operations but just to show "
+ "what's possible");
}
// !!! MIND !!!
// Note: only because I implemented the iterators to allow direct list editing is this possible;
// other implementations normally throw a ConcurrentModificationException
}
} finally {
list.unlockWrite();
}
//// ---
System.out.println("Complete last output: ");
try {
list.lockRead();
//
for (String string : list) {
System.out.println(string);
}
} finally {
list.unlockRead();
}
System.out.println("------");
////// getting the last element
String lastElement = null;
try {
list.lockRead();
int size = list.size();
lastElement = list.get(size - 1);
} finally {
list.unlockRead();
}
System.out.println("Last element: " + lastElement);
//// ---
}
private static void mechanismTest_1() { // fus, roh
SecureArrayList<String> list = getList();
try {
System.out.print("fus, ");
list.lockRead();
System.out.print("roh, ");
list.lockWrite();
System.out.println("dah!"); // never happens cos of deadlock
} finally {
// also never happens
System.out.println("dah?");
list.unlockRead();
list.unlockWrite();
}
}
private static void mechanismTest_2() { // fus, roh, dah!
SecureArrayList<String> list = getList();
try {
System.out.print("fus, ");
list.lockWrite();
System.out.print("roh, ");
list.lockRead();
System.out.println("dah!");
} finally {
list.unlockRead();
list.unlockWrite();
}
// successful execution
}
private static SecureArrayList<String> getList() {
return statList;
}
}
Edit: I've added a couple test cases to demonstrate the functionality in threads. The above class works perfectly and I'm now using it in my main project (Liam):
private static void threadedWriteLock(){
final ThreadSafeArrayList<String> list = getList();
Thread threadOne;
Thread threadTwo;
final long lStartMS = System.currentTimeMillis();
list.add("String 1");
list.add("String 2");
System.out.println("******* basic write lock test *******");
threadOne = new Thread(new Runnable(){
public void run(){
try {
list.lockWrite();
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
} finally {
list.unlockWrite();
}
}
});
threadTwo = new Thread(new Runnable(){
public void run(){
//give threadOne time to lock (just in case)
try {
Thread.sleep(5);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Expect a wait....");
//if this "add" line is commented out, even the iterator read will be locked.
//So its not only locking on the add, but also the read which is correct.
list.add("String 3");
for (ListIterator<String> it = list.listIterator(); it.hasNext();) {
System.out.println("String at index " + it.nextIndex() + ": " + it.next());
}
System.out.println("ThreadTwo completed in " + (System.currentTimeMillis() - lStartMS) + "ms");
}
});
threadOne.start();
threadTwo.start();
}
private static void threadedReadLock(){
final ThreadSafeArrayList<String> list = getList();
Thread threadOne;
Thread threadTwo;
final long lStartMS = System.currentTimeMillis();
list.add("String 1");
list.add("String 2");
System.out.println("******* basic read lock test *******");
threadOne = new Thread(new Runnable(){
public void run(){
try {
list.lockRead();
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
} finally {
list.unlockRead();
}
}
});
threadTwo = new Thread(new Runnable(){
public void run(){
//give threadOne time to lock (just in case)
try {
Thread.sleep(5);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Expect a wait if adding, but not reading....");
//if this "add" line is commented out, the read will continue without holding up the thread
list.add("String 3");
for (ListIterator<String> it = list.listIterator(); it.hasNext();) {
System.out.println("String at index " + it.nextIndex() + ": " + it.next());
}
System.out.println("ThreadTwo completed in " + (System.currentTimeMillis() - lStartMS) + "ms");
}
});
threadOne.start();
threadTwo.start();
}
Another approach is to protect all access to the list, but with a ReadWriteLock instead of synchronized blocks.
This allows simultaneous reads in a safe manner, and could improve performance a lot in a scenario with many reads and few writes.
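A compact sketch of that approach, guarding a plain ArrayList with a ReentrantReadWriteLock from the outside instead of subclassing it (class and method names are illustrative):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class GuardedList<E> {
    private final List<E> list = new ArrayList<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public void add(E e) {
        lock.writeLock().lock(); // exclusive: blocks readers and other writers
        try {
            list.add(e);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public List<E> snapshot() {
        lock.readLock().lock(); // shared: many readers may hold this at the same time
        try {
            return new ArrayList<>(list); // copy, so callers can iterate without holding the lock
        } finally {
            lock.readLock().unlock();
        }
    }
}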
Use CopyOnWriteArrayList, and synchronize on write operations only
CopyOnWriteArrayList<TestObject> list = ...
final Object writeLock = new Object();
void writeOpA()
{
synchronized(writeLock)
{
// ... read from / write to the list here
}
}
void writeOpB()
{
synchronized(writeLock)
{
// ... read from / write to the list here
}
}
Therefore no two write sessions will overlap with each other.
Reads require no lock. But a read session may see a changing list. If we want a read session to see a snapshot of the list, either use iterator(), or take a snapshot by toArray().
It's probably even better if you do the copy-on-write yourself:
volatile Foo data = new Foo(); // ArrayList in your case
final Object writeLock = new Object();
void writeOpA()
{
synchronized(writeLock)
{
Foo clone = data.clone();
// read/write clone
data = clone;
}
}
void writeOpB()
{
// similar...
}
void readSession()
{
Foo snapshot = data;
// read snapshot
}
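A small illustration of the snapshot-read idea with CopyOnWriteArrayList itself: its iterator is backed by the array as it existed when iterator() was called, so a read session never sees concurrent changes mid-iteration.
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotReadDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("A");
        list.add("B");

        Iterator<String> snapshot = list.iterator(); // fixed view of the list at this moment

        list.add("C"); // a later modification does not affect the snapshot

        while (snapshot.hasNext()) {
            System.out.println(snapshot.next()); // prints only A and B
        }
        System.out.println(list); // the list itself now contains [A, B, C]
    }
}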
If you're modifying during an iteration, yeah, you have to use option 3. None of the others will actually do what you want.
More specifically: given what you want to do, you have to lock the entire list for the length of the iteration, because you might modify it in the middle, which would corrupt any other iterators working on the list at the same time. That means option 3, since the Java language can't just have a "synchronized iterator" -- the iterator itself can only synchronize individual calls to hasNext() or next(), but it can't synchronize across the entire length of the iteration.
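A minimal sketch of option 3: Collections.synchronizedList makes individual calls safe, but its Javadoc requires you to synchronize on the returned list yourself for the whole iteration, which also makes ListIterator.set safe to use.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.ListIterator;

public class SynchronizedIterationDemo {
    public static void main(String[] args) {
        List<String> list = Collections.synchronizedList(new ArrayList<String>());
        list.add("A");
        list.add("B");

        // Hold the list's own monitor for the entire iteration, as the Javadoc requires.
        synchronized (list) {
            ListIterator<String> it = list.listIterator();
            while (it.hasNext()) {
                if (it.next().equals("B")) {
                    it.set("TEST"); // safe here: no other thread can touch the list meanwhile
                }
            }
        }
        System.out.println(list); // [A, TEST]
    }
}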

Synchronization doesn't quite work

I have the following code, in which I'm trying to write an LRU cache. I have a runner class that I run against a random capacity of the cache. However, the cache size is exceeding its capacity. When I make the FixLRU method synchronized, it becomes more accurate when the cache size is more than 100, but it gets slower. When I remove the synchronized keyword, the cache becomes less accurate.
Any ideas how to make this work properly and more accurately?
import java.util.concurrent.ConcurrentHashMap;
public abstract class Cache<TKey, TValue> implements ICache<TKey,TValue>{
private final ConcurrentHashMap<TKey,TValue> _cache;
protected Cache()
{
_cache= new ConcurrentHashMap<TKey, TValue>();
}
protected Cache(int capacity){
_cache = new ConcurrentHashMap<TKey, TValue>(capacity);
}
@Override
public void Put(TKey key, TValue value) {
_cache.put(key, value);
}
@Override
public TValue Get(TKey key) {
TValue value = _cache.get(key);
return value;
}
@Override
public void Delete(TKey key) {
_cache.remove(key);
}
@Override
public void Purge() {
for(TKey key : _cache.keySet()){
_cache.remove(key);
}
}
public void IterateCache(){
for(TKey key: _cache.keySet()){
System.out.println("key:"+key+" , value:"+_cache.get(key));
}
}
public int Count()
{
return _cache.size();
}
}
import java.util.concurrent.ConcurrentLinkedQueue;
public class LRUCache<TKey,TValue> extends Cache<TKey,TValue> implements ICache<TKey, TValue> {
private ConcurrentLinkedQueue<TKey> _queue;
private int capacity;
public LRUCache(){
_queue = new ConcurrentLinkedQueue<TKey>();
}
public LRUCache(int capacity){
this();
this.capacity = capacity;
}
public void Put(TKey key, TValue value)
{
FixLRU(key);
super.Put(key, value);
}
private void FixLRU(TKey key)
{
if(_queue.contains(key))
{
_queue.remove(key);
super.Delete(key);
}
_queue.offer(key);
while(_queue.size() > capacity){
TKey keytoRemove =_queue.poll();
super.Delete(keytoRemove);
}
}
public TValue Get(TKey key){
TValue _value = super.Get(key);
if(_value == null){
return null;
}
FixLRU(key);
return _value;
}
public void Delete(TKey key){
super.Delete(key);
}
}
public class RunningLRU extends Thread{
static LRUCache<String, String> cache = new LRUCache<String, String>(50);
public static void main(String [ ] args) throws InterruptedException{
Thread t1 = new RunningLRU();
t1.start();
Thread t2 = new RunningLRU();
t2.start();
Thread t3 = new RunningLRU();
t3.start();
Thread t4 = new RunningLRU();
t4.start();
try {
t1.join();
t2.join();
t3.join();
t4.join();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.out.println(cache.toString());
cache.IterateCache();
System.out.println(cache.Count());
}
@Override
public void run() {
for(int i=0;i<100000;i++)
cache.Put("test"+i, "test"+i);
}
}
I would clean up the additional entries after adding your entry. This minimises the time that the cache is larger than you wanted. You can also have size() trigger a cleanup.
Any ideas how to make this work properly?
Does your test reflect how your application behaves? It may be that the cache behaves properly (or much closer to it) when you are not hammering it. ;)
If this test does reflect your application behaviour then perhaps an LRUCache is not the best choice.
Your problem seems to be that you aren't using the atomic putIfAbsent() variant of put(). Without it, a compound check-then-act sequence on a ConcurrentHashMap is not atomic, so for that pattern it behaves much like an ordinary Map such as HashMap.
When you use it, you must continue to use only the returned value, so your Put() method doesn't have the correct signature (it should return TValue) to support concurrency. You'll need to redesign your interface.
Also, in Java land, unlike .NET land, we name our methods with a leading lowercase letter, e.g. put(), not Put(). It would behove you to rename your methods accordingly.
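As an alternative sketch, beyond what the answers above cover: a bounded LRU cache can also be built on LinkedHashMap in access order and wrapped with Collections.synchronizedMap; the size then never exceeds the capacity, because eviction happens inside the same locked put. Class and method names here are illustrative.
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> {
    private final Map<K, V> map;

    public SimpleLruCache(final int capacity) {
        // accessOrder = true: iteration order becomes least-recently-used first
        LinkedHashMap<K, V> lru = new LinkedHashMap<K, V>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity; // evict within the same operation that inserted
            }
        };
        // a single lock around every operation keeps size and access order consistent
        this.map = Collections.synchronizedMap(lru);
    }

    public void put(K key, V value) { map.put(key, value); }
    public V get(K key)             { return map.get(key); }
    public int size()               { return map.size(); }
}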
