Consider the following method:
public void add(final List<ReportingSTG> message) {
if(stopRequested.get()) {
synchronized (this) {
if(stopRequested.get()) {
retryQueue.put(message);
}
}
}
messages.add(message);
if(messages.size() >= batchSize && waitingThreads.get() == 0) {
synchronized (this) {
if(messages.size() >= batchSize && waitingThreads.get() == 0) {
final List<List<ReportingSTG>> clone = new ArrayList<List<ReportingSTG>>(messages);
messages.clear();
if(processors.size()>=numOfProcessors) {
waitingThreads.incrementAndGet();
waitForProcessor();
waitingThreads.decrementAndGet();
}
startProcessor(clone);
}
}
}
}
Particularly these 2 lines:
1: final List<List<ReportingSTG>> clone = new ArrayList<List<ReportingSTG>>(messages);
2: messages.clear();
If thread A enters a synchronized block and acquires the lock on the current object, does this mean that the state of this object's instance fields can't be changed by other threads outside the synchronized block (while thread A is in the synchronized block)?
For example: thread A executes line 1 -> thread B enters the method and adds a new list entry (messages.add(message)) -> thread A executes line 2 -> the entry added by thread B is removed (together with the other entries). Is this scenario possible? Or will thread B wait until the lock is released by thread A and only then add the list entry?
messages is a non-static synchronizedList
UPD: updated method, possible solution:
public void add(final List<ReportingSTG> message) {
if(stopRequested.get()) {
synchronized (this) {
if(stopRequested.get()) {
retryQueue.put(message);
}
}
}
while (addLock.get()){
try {
Thread.sleep(1);
} catch (InterruptedException e) {}
}
messages.add(message);
if(messages.size() >= batchSize && waitingThreads.get() == 0) {
synchronized (this) {
if(messages.size() >= batchSize && waitingThreads.get() == 0) {
addLock.set(true);
final List<List<ReportingSTG>> clone = new ArrayList<List<ReportingSTG>>(messages);
messages.clear();
addLock.set(false);
if(processors.size()>=numOfProcessors) {
waitingThreads.incrementAndGet();
waitForProcessor();
waitingThreads.decrementAndGet();
}
startProcessor(clone);
}
}
}
}
addLock is an AtomicBoolean, false by default.
The described scenario is possible, i.e. you may lose messages.
The synchronized keyword ensures that you never have two threads running the synchronized section simultaneously. It doesn't prevent modification by another thread of the objects that are manipulated inside the synchronized block (as long as that other thread has access to them).
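A minimal, self-contained sketch of exactly that situation (hypothetical class and field names, not your production code): thread A holds the object's monitor while thread B still appends to the synchronizedList, because List.add only synchronizes on the list's own internal lock, not on the monitor A is holding.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MonitorDoesNotProtectList {
    static final List<String> messages = Collections.synchronizedList(new ArrayList<String>());
    static final Object monitor = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            synchronized (monitor) {      // thread A holds the monitor, like your synchronized (this)
                sleep(500);               // simulate work between the clone and the clear
                System.out.println("A still holds the monitor and sees " + messages.size() + " entries");
            }
        });
        Thread b = new Thread(() -> {
            sleep(100);
            messages.add("added by B");   // succeeds immediately: B never asks for the monitor
        });
        a.start();
        b.start();
        a.join();
        b.join();
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}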
This is a possible solution, since it synchronizes the add and the clear.
private Object lock = new Object();
public void add(final List<ReportingSTG> message) {
if(stopRequested.get()) {
synchronized (this) {
if(stopRequested.get()) {
retryQueue.put(message);
}
}
}
synchronized(lock){
messages.add(message);
if(messages.size() >= batchSize && waitingThreads.get() == 0) {
final List<List<ReportingSTG>> clone = new ArrayList<List<ReportingSTG>>(messages);
messages.clear();
if(processors.size()>=numOfProcessors) {
waitingThreads.incrementAndGet();
waitForProcessor();
waitingThreads.decrementAndGet();
}
startProcessor(clone);
}
}
}
I put together a DoubleBufferedList class recently. Perhaps using that would avoid your issue completely. As its name suggests, it implements the double-buffering algorithm, but for lists.
This class allows you to have many producer threads and many consumer threads. Each producer thread can add to the current list. Each consumer thread gets the whole current list for processing.
It also uses no locks, just atomics, so it should run efficiently.
Note that much of this is test code. You can remove everything after the // TESTING comment, but you may find the rigour of the tests comforting.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicMarkableReference;
public class DoubleBufferedList<T> {
// Atomic reference so I can atomically swap it through.
// Mark = true means I am adding to it so unavailable for iteration.
private AtomicMarkableReference<List<T>> list = new AtomicMarkableReference<>(newList(), false);
// Factory method to create a new list - may be best to abstract this.
protected List<T> newList() {
return new ArrayList<>();
}
// Get and replace the current list.
public List<T> get() {
// Atomically grab and replace the list with an empty one.
List<T> empty = newList();
List<T> it;
// Replace an unmarked list with an empty one.
if (!list.compareAndSet(it = list.getReference(), empty, false, false)) {
// Failed to replace!
// It is probably marked as being appended to but may have been replaced by another thread.
// Return empty and come back again soon.
return Collections.EMPTY_LIST;
}
// Successfully replaced an unmarked list with an empty list!
return it;
}
// Grab and lock the list in preparation for append.
private List<T> grab() {
List<T> it;
// We cannot fail so spin on get and mark.
while (!list.compareAndSet(it = list.getReference(), it, false, true)) {
// Spin on mark.
}
return it;
}
// Release the list.
private void release(List<T> it) {
// Unmark it. Should never fail because once marked it will not be replaced.
if (!list.attemptMark(it, false)) {
throw new IllegalMonitorStateException("it changed while we were adding to it!");
}
}
// Add an entry to the list.
public void add(T entry) {
List<T> it = grab();
try {
// Successfully marked! Add my new entry.
it.add(entry);
} finally {
// Always release after a grab.
release(it);
}
}
// Add many entries to the list.
public void add(List<T> entries) {
List<T> it = grab();
try {
// Successfully marked! Add my new entries.
it.addAll(entries);
} finally {
// Always release after a grab.
release(it);
}
}
// Add a number of entries.
public void add(T... entries) {
// Make a list of them.
add(Arrays.asList(entries));
}
// TESTING.
// How many testers to run.
static final int N = 10;
// The next one we're waiting for.
static final AtomicInteger[] seen = new AtomicInteger[N];
// The ones that arrived out of order.
static final Set<Widget>[] queued = new ConcurrentSkipListSet[N];
static {
// Populate the arrays.
for (int i = 0; i < N; i++) {
seen[i] = new AtomicInteger();
queued[i] = new ConcurrentSkipListSet();
}
}
// Thing that is produced and consumed.
private static class Widget implements Comparable<Widget> {
// Who produced it.
public final int producer;
// Its sequence number.
public final int sequence;
public Widget(int producer, int sequence) {
this.producer = producer;
this.sequence = sequence;
}
@Override
public String toString() {
return producer + "\t" + sequence;
}
@Override
public int compareTo(Widget o) {
// Sort on producer
int diff = Integer.compare(producer, o.producer);
if (diff == 0) {
// And then sequence
diff = Integer.compare(sequence, o.sequence);
}
return diff;
}
}
// Produces Widgets and feeds them to the supplied DoubleBufferedList.
private static class TestProducer implements Runnable {
// The list to feed.
final DoubleBufferedList<Widget> list;
// My ID
final int id;
// The sequence we're at
int sequence = 0;
// Set this at true to stop me.
public volatile boolean stop = false;
public TestProducer(DoubleBufferedList<Widget> list, int id) {
this.list = list;
this.id = id;
}
@Override
public void run() {
// Just pump the list.
while (!stop) {
list.add(new Widget(id, sequence++));
}
}
}
// Consumes Widgets from the supplied DoubleBufferedList
private static class TestConsumer implements Runnable {
// The list to bleed.
final DoubleBufferedList<Widget> list;
// My ID
final int id;
// Set this at true to stop me.
public volatile boolean stop = false;
public TestConsumer(DoubleBufferedList<Widget> list, int id) {
this.list = list;
this.id = id;
}
@Override
public void run() {
// The list I am working on.
List<Widget> l = list.get();
// Stop when stop == true && list is empty
while (!(stop && l.isEmpty())) {
// Record all items in list as arrived.
arrived(l);
// Grab another list.
l = list.get();
}
}
private void arrived(List<Widget> l) {
for (Widget w : l) {
// Mark each one as arrived.
arrived(w);
}
}
// A Widget has arrived.
private static void arrived(Widget w) {
// Which one is it?
AtomicInteger n = seen[w.producer];
// Don't allow multi-access to the same producer data or we'll end up confused.
synchronized (n) {
// Is it the next to be seen?
if (n.compareAndSet(w.sequence, w.sequence + 1)) {
// It was the one we were waiting for! See if any of the ones in the queue can now be consumed.
for (Iterator<Widget> i = queued[w.producer].iterator(); i.hasNext();) {
Widget it = i.next();
// Is it in sequence?
if (n.compareAndSet(it.sequence, it.sequence + 1)) {
// Done with that one too now!
i.remove();
} else {
// Found a gap! Stop now.
break;
}
}
} else {
// Out of sequence - Queue it.
queued[w.producer].add(w);
}
}
}
}
// Main tester
public static void main(String args[]) {
try {
System.out.println("DoubleBufferedList:Test");
// Create my test buffer.
DoubleBufferedList<Widget> list = new DoubleBufferedList<>();
// All threads running - Producers then Consumers.
List<Thread> running = new LinkedList<>();
// Start some producer tests.
List<TestProducer> producers = new ArrayList<>();
for (int i = 0; i < N; i++) {
TestProducer producer = new TestProducer(list, i);
Thread t = new Thread(producer);
t.setName("Producer " + i);
t.start();
producers.add(producer);
running.add(t);
}
// Start the same number of consumers.
List<TestConsumer> consumers = new ArrayList<>();
for (int i = 0; i < N; i++) {
TestConsumer consumer = new TestConsumer(list, i);
Thread t = new Thread(consumer);
t.setName("Consumer " + i);
t.start();
consumers.add(consumer);
running.add(t);
}
// Wait for a while.
Thread.sleep(5000);
// Close down all.
for (TestProducer p : producers) {
p.stop = true;
}
for (TestConsumer c : consumers) {
c.stop = true;
}
// Wait for all to stop.
for (Thread t : running) {
System.out.println("Joining " + t.getName());
t.join();
}
// What results did we get?
for (int i = 0; i < N; i++) {
// How far did the producer get?
int gotTo = producers.get(i).sequence;
// The consumer's state
int seenTo = seen[i].get();
Set<Widget> queue = queued[i];
if (seenTo == gotTo && queue.isEmpty()) {
System.out.println("Producer " + i + " ok.");
} else {
// Different set consumed as produced!
System.out.println("Producer " + i + " Failed: gotTo=" + gotTo + " seenTo=" + seenTo + " queued=" + queue);
}
}
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
}
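For reference, here is a minimal usage sketch of the class above: one producer appends entries while one consumer repeatedly swaps out and processes whatever has accumulated (the entry strings and counts are just placeholders).
import java.util.List;

public class DoubleBufferedListDemo {
    public static void main(String[] args) throws InterruptedException {
        final DoubleBufferedList<String> buffer = new DoubleBufferedList<>();

        // Producer: keeps appending entries.
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 1000; i++) {
                buffer.add("entry-" + i);
            }
        });

        // Consumer: repeatedly takes the whole current list (atomically replaced with an empty one).
        Thread consumer = new Thread(() -> {
            int consumed = 0;
            while (consumed < 1000) {
                List<String> batch = buffer.get();
                consumed += batch.size();
            }
            System.out.println("Consumed " + consumed + " entries");
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}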
I have to create a hedge simulator. There are e.g. 10 segments of it and each of them should have its own dedicated thread simulating the growth of that segment (each time we are about to decide whether the segment has grown, we should perform a random test).
In addition there should be one additional gardener thread.
The gardener should cut a segment of the hedge when its size reaches 10 (he then cuts its size back to the initial level of 1 and notes it down in his notes).
My attempt to make it work was like this:
public class Segment implements Runnable {
private int currentSize;
@Override
public void run() {
if(Math.random() < 0.3)
incrementSize();
}
private synchronized void incrementSize() {
currentSize++;
}
public synchronized int getCurrentSize() {
return currentSize;
}
public synchronized void setCurrentSize(int newSize) {
currentSize = newSize;
}
}
public class Gardener implements Runnable {
private int[] segmentsCutAmount = new int[10]; //Gardener notes
private Collection<Segment> segments;
public Gardener(Collection<Segment> segmentsToLookAfter) {
segments = segmentsToLookAfter;
}
@Override
public void run() {
while(true) {
//Have no idea how to deal with 10 different segments here
}
}
}
public class Main {
private Collection<Segment> segments = new ArrayList<>();
public static void main(String[] args) {
Main program = new Main();
for(int i = 0; i < 10; i++)
program.addSegment(program.segments);
Thread gardenerThread = new Thread(new Gardener(program.segments));
}
private void addSegment(Collection<Segment> segments) {
Segment segment = new Segment();
Thread segmentThread = new Thread(segment);
segmentThread.start();
segments.add(segment);
}
}
I am not sure what I am supposed to do when a segment reaches its maximum height.
If there were 10 gardeners, every one of them could observe one segment, but unfortunately the gardener is a lone wolf - he has no family and his friends are very busy and not willing to help him. And are you willing to help me? :D
I generally know the basics of synchronization - synchronized methods/blocks, Locks, the wait and notify methods - but this time I have totally no idea what to do :(
It's like a horrible deadlock! Of course I am not expecting to be spoon-fed. Any kind of hint would be very helpful as well. Thank you in advance and have a wonderful day!
About that queue: you can use an ExecutorService for that.
Letting the Hedge grow
So let's say you have a hedge that can grow and be cut.
class Hedge {
private AtomicInteger height = new AtomicInteger(1);
public int grow() {
return height.incrementAndGet();
}
public int cut() {
return height.decrementAndGet();
}
}
And then you have a grower that will let the hedge grow. This simulates the hedge sections; each SectionGrower is responsible for one section only. It will also notify a Consumer<Integer> when the hedge section has grown.
class SectionGrower implements Runnable {
public static final Random RANDOM = new Random();
private final Hedge hedge;
private final Consumer<Integer> hedgeSizeListener;
public SectionGrower(Hedge h, Consumer<Integer> hl) {
hedge = h;
hedgeSizeListener = hl;
}
public void run() {
while (true) { // grow forever
try {
// growing the hedge takes up to 20 seconds
Thread.sleep(RANDOM.nextInt(20)*1000);
int sectionHeight = hedge.grow();
hedgeSizeListener.accept(sectionHeight);
} catch (Exception e) {} // do something here
}
}
}
So at this point, you can do this.
ExecutorService growingExecutor = Executors.newFixedThreadPool(10);
Consumer<Integer> printer = i -> System.out.printf("hedge section has grown to %d\n", i.intValue());
for (int i = 0; i < 10; i++) {
Hedge section = new Hedge();
SectionGrower grower = new SectionGrower(section, printer);
growingExecutor.submit(grower::run);
}
This will grow 10 hedge sections and print the current height for each as they grow.
Adding the Gardener
So now you need a Gardener that can cut the hedge.
class Gardener {
public static final Random RANDOM = new Random();
public void cutHedge(Hedge h) {
try {
// cutting the hedge takes up to 10 seconds
Thread.sleep(RANDOM.nextInt(10)*1000);
h.cut();
} catch (Exception e) {} // do something here
}
}
Now you need some construct to give him work; this is where the work queue comes in (the single-threaded ExecutorService below plays that role). We've already made sure the SectionGrower can notify a Consumer<Integer> after a section has grown, so that's what we can use.
ExecutorService growingExecutor = Executors.newFixedThreadPool(10);
// so this is the queue
ExecutorService gardenerExecutor = Executors.newSingleThreadExecutor();
Gardener gardener = new Gardener();
for (int i = 0; i < 10; i++) {
Hedge section = new Hedge();
Consumer<Integer> cutSectionIfNeeded = height -> {
if (height > 8) { // size exceeded?
// have the gardener cut the section, ie adding item to queue
gardenerExecutor.submit(() -> gardener.cutHedge(section));
}
};
SectionGrower grower = new SectionGrower(section, cutSectionIfNeeded);
growingExecutor.submit(grower::run);
}
So I haven't actually tried this, but it should work with some minor adjustments.
Note that I use an AtomicInteger in the hedge because it might grow and get cut "at the same time", since that happens in different threads.
In the following code the Gardener waits for a Segment to get to an arbitrary value of 9.
When the Segment gets to 9, it notifies the Gardener and waits for the Gardener to finish trimming:
import java.util.ArrayList;
import java.util.Collection;
public class Gardening {
public static void main(String[] args) {
Collection<Segment> segments = new ArrayList<>();
for(int i = 0; i < 2; i++) {
addSegment(segments);
}
Thread gardenerThread = new Thread(new Gardener(segments));
gardenerThread.start();
}
private static void addSegment(Collection<Segment> segments) {
Segment segment = new Segment();
Thread segmentThread = new Thread(segment);
segmentThread.start();
segments.add(segment);
}
}
class Gardener implements Runnable {
private Collection<Segment> segments;
private boolean isStop = false; //add stop flag
public Gardener(Collection<Segment> segmentsToLookAfter) {
segments = segmentsToLookAfter;
}
@Override
public void run() {
for (Segment segment : segments) {
follow(segment);
}
}
private void follow(Segment segment) {
new Thread(() -> {
// the segment thread is already started in addSegment(), so it is not started again here
synchronized (segment) {
while(! isStop) {
try {
segment.wait(); //wait for segment
} catch (InterruptedException ex) { ex.printStackTrace();}
System.out.println("Trimming Segment " + segment.getId()+" size: "
+ segment.getCurrentSize() ); //add size to notes
segment.setCurrentSize(0); //trim size
segment.notify(); //notify so segment continues
}
}
}).start();
}
}
class Segment implements Runnable {
private int currentSize;
private boolean isStop = false; //add stop flag
private static int segmentIdCounter = 0;
private int segmentId = segmentIdCounter++; //add an id to identify thread
@Override
public void run() {
synchronized (this) {
while ( ! isStop ) {
if(Math.random() < 0.0000001) {
incrementSize();
}
if(getCurrentSize() >= 9) {
notify(); //notify so trimming starts
try {
wait(); //wait for gardener to finish
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
}
}
}
private synchronized void incrementSize() {
currentSize++;
System.out.println("Segment " + getId()+" size: "
+ getCurrentSize() );
}
public synchronized int getCurrentSize() { return currentSize; }
public synchronized void setCurrentSize(int newSize) {
currentSize = newSize;
}
public int getId() { return segmentId; }
}
The mutual waiting mechanism can also be implemented with a CountDownLatch.
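A minimal sketch of that CountDownLatch idea, with hypothetical names and a single grow/trim cycle (a CountDownLatch cannot be reset, so repeated cycles would need a fresh pair of latches each time, or a CyclicBarrier/Phaser instead): the segment counts down a "full" latch and waits on a "trimmed" latch, while the gardener does the opposite.
import java.util.concurrent.CountDownLatch;

public class LatchHandshakeDemo {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch segmentFull = new CountDownLatch(1);
        final CountDownLatch segmentTrimmed = new CountDownLatch(1);

        Thread segment = new Thread(() -> {
            try {
                // ... grow until the maximum size is reached ...
                segmentFull.countDown();     // tell the gardener this segment is full
                segmentTrimmed.await();      // wait until the gardener has finished trimming
                System.out.println("Segment: growing again from size 1");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread gardener = new Thread(() -> {
            try {
                segmentFull.await();         // wait for the segment to report it is full
                System.out.println("Gardener: trimming the segment back to 1");
                segmentTrimmed.countDown();  // let the segment continue growing
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        segment.start();
        gardener.start();
        segment.join();
        gardener.join();
    }
}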
Note that my experience with threads is limited. I hope other users comment and suggest improvements.
I am looking for a Java implementation of the following concurrency semantics. I want something similar to ReadWriteLock except symmetrical, i.e. both the read and write sides can be shared amongst many threads, but read excludes write and vice versa.
There are two locks, let's call them A and B.
Lock A is shared, i.e. there may be multiple threads holding it concurrently. Lock B is also shared, there may be multiple threads holding it concurrently.
If any thread holds lock A then no thread may take B – threads attempting to take B shall block until all threads holding A have released A.
If any thread holds lock B then no thread may take A – threads attempting to take A shall block until all threads holding B have released B.
Is there an existing library class that achieves this? At the moment I have approximated the desired functionality with a ReadWriteLock because fortunately the tasks done in the context of lock B are somewhat rarer. It feels like a hack though, and it could affect the performance of my program under heavy load.
Short answer:
In the standard library, there is nothing like what you need.
Long answer:
To easily implement a custom Lock you should subclass or delegate to AbstractQueuedSynchronizer.
The following code is an example of a non-fair lock that implements what you need, including some (non-exhaustive) tests. I called it LeftRightLock because of the binary nature of your requirements.
The concept is pretty straightforward:
AbstractQueuedSynchronizer exposes a method to atomically set an int state using the compare-and-swap idiom (compareAndSetState(int expect, int update)). We can use that state to keep count of the threads holding the lock, setting it to a positive value when the Right lock is held or a negative value when the Left lock is held.
Then we just make sure of the following conditions:
- you can lock Left only if the state of the internal AbstractQueuedSynchronizer is zero or negative
- you can lock Right only if the state of the internal AbstractQueuedSynchronizer is zero or positive
LeftRightLock.java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Lock;
/**
* A binary mutex with the following properties:
*
* Exposes two different {@link Lock}s: LEFT, RIGHT.
*
* When LEFT is held other threads can acquire LEFT but threads trying to acquire RIGHT will be
* blocked. When RIGHT is held other threads can acquire RIGHT but threads trying to acquire LEFT
* will be blocked.
*/
public class LeftRightLock {
public static final int ACQUISITION_FAILED = -1;
public static final int ACQUISITION_SUCCEEDED = 1;
private final LeftRightSync sync = new LeftRightSync();
public void lockLeft() {
sync.acquireShared(LockSide.LEFT.getV());
}
public void lockRight() {
sync.acquireShared(LockSide.RIGHT.getV());
}
public void releaseLeft() {
sync.releaseShared(LockSide.LEFT.getV());
}
public void releaseRight() {
sync.releaseShared(LockSide.RIGHT.getV());
}
public boolean tryLockLeft() {
return sync.tryAcquireShared(LockSide.LEFT) == ACQUISITION_SUCCEEDED;
}
public boolean tryLockRight() {
return sync.tryAcquireShared(LockSide.RIGHT) == ACQUISITION_SUCCEEDED;
}
private enum LockSide {
LEFT(-1), NONE(0), RIGHT(1);
private final int v;
LockSide(int v) {
this.v = v;
}
public int getV() {
return v;
}
}
/**
* <p>
* Keeps count of the threads holding either the LEFT or the RIGHT lock.
* </p>
*
* <li>A state ({@link AbstractQueuedSynchronizer#getState()}) greater than 0 means one or more threads are holding RIGHT lock. </li>
* <li>A state ({@link AbstractQueuedSynchronizer#getState()}) lower than 0 means one or more threads are holding LEFT lock.</li>
* <li>A state ({@link AbstractQueuedSynchronizer#getState()}) equal to zero means no thread is holding any lock.</li>
*/
private static final class LeftRightSync extends AbstractQueuedSynchronizer {
@Override
protected int tryAcquireShared(int requiredSide) {
return (tryChangeThreadCountHoldingCurrentLock(requiredSide, ChangeType.ADD) ? ACQUISITION_SUCCEEDED : ACQUISITION_FAILED);
}
@Override
protected boolean tryReleaseShared(int requiredSide) {
return tryChangeThreadCountHoldingCurrentLock(requiredSide, ChangeType.REMOVE);
}
public boolean tryChangeThreadCountHoldingCurrentLock(int requiredSide, ChangeType changeType) {
if (requiredSide != 1 && requiredSide != -1)
throw new AssertionError("You can either lock LEFT or RIGHT (-1 or +1)");
int curState;
int newState;
do {
curState = this.getState();
if (!sameSide(curState, requiredSide)) {
return false;
}
if (changeType == ChangeType.ADD) {
newState = curState + requiredSide;
} else {
newState = curState - requiredSide;
}
//TODO: protect against int overflow (hopefully you won't have so many threads)
} while (!this.compareAndSetState(curState, newState));
return true;
}
final int tryAcquireShared(LockSide lockSide) {
return this.tryAcquireShared(lockSide.getV());
}
final boolean tryReleaseShared(LockSide lockSide) {
return this.tryReleaseShared(lockSide.getV());
}
private boolean sameSide(int curState, int requiredSide) {
return curState == 0 || sameSign(curState, requiredSide);
}
private boolean sameSign(int a, int b) {
return (a >= 0) ^ (b < 0);
}
public enum ChangeType {
ADD, REMOVE
}
}
}
LeftRightLockTest.java
import org.junit.Test;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
public class LeftRightLockTest {
int logLineSequenceNumber = 0;
private LeftRightLock sut = new LeftRightLock();
@Test(timeout = 2000)
public void acquiringLeftLockExcludeAcquiringRightLock() throws Exception {
sut.lockLeft();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockRight());
assertFalse("I shouldn't be able to acquire the RIGHT lock!", task.get());
}
@Test(timeout = 2000)
public void acquiringRightLockExcludeAcquiringLeftLock() throws Exception {
sut.lockRight();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockLeft());
assertFalse("I shouldn't be able to acquire the LEFT lock!", task.get());
}
@Test(timeout = 2000)
public void theLockShouldBeReentrant() throws Exception {
sut.lockLeft();
assertTrue(sut.tryLockLeft());
}
@Test(timeout = 2000)
public void multipleThreadShouldBeAbleToAcquireTheSameLock_Right() throws Exception {
sut.lockRight();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockRight());
assertTrue(task.get());
}
@Test(timeout = 2000)
public void multipleThreadShouldBeAbleToAcquireTheSameLock_left() throws Exception {
sut.lockLeft();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockLeft());
assertTrue(task.get());
}
@Test(timeout = 2000)
public void shouldKeepCountOfAllTheThreadsHoldingTheSide() throws Exception {
CountDownLatch latchA = new CountDownLatch(1);
CountDownLatch latchB = new CountDownLatch(1);
Thread threadA = spawnThreadToAcquireLeftLock(latchA, sut);
Thread threadB = spawnThreadToAcquireLeftLock(latchB, sut);
System.out.println("Both threads have acquired the left lock.");
try {
latchA.countDown();
threadA.join();
boolean acqStatus = sut.tryLockRight();
System.out.println("The right lock was " + (acqStatus ? "" : "not") + " acquired");
assertFalse("There is still a thread holding the left lock. This shouldn't succeed.", acqStatus);
} finally {
latchB.countDown();
threadB.join();
}
}
@Test(timeout = 5000)
public void shouldBlockThreadsTryingToAcquireLeftIfRightIsHeld() throws Exception {
sut.lockLeft();
CountDownLatch taskStartedLatch = new CountDownLatch(1);
final Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> {
taskStartedLatch.countDown();
sut.lockRight();
return false;
});
taskStartedLatch.await();
Thread.sleep(100);
assertFalse(task.isDone());
}
@Test
public void shouldBeFreeAfterRelease() throws Exception {
sut.lockLeft();
sut.releaseLeft();
assertTrue(sut.tryLockRight());
}
@Test
public void shouldBeFreeAfterMultipleThreadsReleaseIt() throws Exception {
CountDownLatch latch = new CountDownLatch(1);
final Thread thread1 = spawnThreadToAcquireLeftLock(latch, sut);
final Thread thread2 = spawnThreadToAcquireLeftLock(latch, sut);
latch.countDown();
thread1.join();
thread2.join();
assertTrue(sut.tryLockRight());
}
@Test(timeout = 2000)
public void lockShouldBeReleasedIfNoThreadIsHoldingIt() throws Exception {
CountDownLatch releaseLeftLatch = new CountDownLatch(1);
CountDownLatch rightLockTaskIsRunning = new CountDownLatch(1);
Thread leftLockThread1 = spawnThreadToAcquireLeftLock(releaseLeftLatch, sut);
Thread leftLockThread2 = spawnThreadToAcquireLeftLock(releaseLeftLatch, sut);
Future<Boolean> acquireRightLockTask = Executors.newSingleThreadExecutor().submit(() -> {
if (sut.tryLockRight())
throw new AssertionError("The left lock should be still held, I shouldn't be able to acquire right a this point.");
printSynchronously("Going to be blocked on right lock");
rightLockTaskIsRunning.countDown();
sut.lockRight();
printSynchronously("Lock acquired!");
return true;
});
rightLockTaskIsRunning.await();
releaseLeftLatch.countDown();
leftLockThread1.join();
leftLockThread2.join();
assertTrue(acquireRightLockTask.get());
}
private synchronized void printSynchronously(String str) {
System.out.println(logLineSequenceNumber++ + ")" + str);
System.out.flush();
}
private Thread spawnThreadToAcquireLeftLock(CountDownLatch releaseLockLatch, LeftRightLock lock) throws InterruptedException {
CountDownLatch lockAcquiredLatch = new CountDownLatch(1);
final Thread thread = spawnThreadToAcquireLeftLock(releaseLockLatch, lockAcquiredLatch, lock);
lockAcquiredLatch.await();
return thread;
}
private Thread spawnThreadToAcquireLeftLock(CountDownLatch releaseLockLatch, CountDownLatch lockAcquiredLatch, LeftRightLock lock) {
final Thread thread = new Thread(() -> {
lock.lockLeft();
printSynchronously("Thread " + Thread.currentThread() + " Acquired left lock");
try {
lockAcquiredLatch.countDown();
releaseLockLatch.await();
} catch (InterruptedException ignore) {
} finally {
lock.releaseLeft();
}
printSynchronously("Thread " + Thread.currentThread() + " RELEASED left lock");
});
thread.start();
return thread;
}
}
I don't know of any library that does what you want. Even if such a library exists, it possesses little value, because every time your requirements change the library stops doing the magic.
The actual question here is "How do I implement my own lock with a custom specification?"
Java provides a tool for that named AbstractQueuedSynchronizer. It has extensive documentation. Apart from the docs, one would possibly like to look at the CountDownLatch and ReentrantLock sources and use them as examples.
For your particular request see the code below, but beware that it is 1) not fair 2) not tested
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
public class MultiReadWriteLock implements ReadWriteLock {
private final Sync sync;
private final Lock readLock;
private final Lock writeLock;
public MultiReadWriteLock() {
this.sync = new Sync();
this.readLock = new MultiLock(Sync.READ, sync);
this.writeLock = new MultiLock(Sync.WRITE, sync);
}
@Override
public Lock readLock() {
return readLock;
}
@Override
public Lock writeLock() {
return writeLock;
}
private static final class Sync extends AbstractQueuedSynchronizer {
private static final int READ = 1;
private static final int WRITE = -1;
@Override
public int tryAcquireShared(int arg) {
int state, result;
do {
state = getState();
if (state >= 0 && arg == READ) {
// new read
result = state + 1;
} else if (state <= 0 && arg == WRITE) {
// new write
result = state - 1;
} else {
// blocked
return -1;
}
} while (!compareAndSetState(state, result));
return 1;
}
@Override
protected boolean tryReleaseShared(int arg) {
int state, result;
do {
state = getState();
if (state == 0) {
return false;
}
if (state > 0 && arg == READ) {
result = state - 1;
} else if (state < 0 && arg == WRITE) {
result = state + 1;
} else {
throw new IllegalMonitorStateException();
}
} while (!compareAndSetState(state, result));
return result == 0;
}
}
private static class MultiLock implements Lock {
private final int parameter;
private final Sync sync;
public MultiLock(int parameter, Sync sync) {
this.parameter = parameter;
this.sync = sync;
}
@Override
public void lock() {
sync.acquireShared(parameter);
}
@Override
public void lockInterruptibly() throws InterruptedException {
sync.acquireSharedInterruptibly(parameter);
}
@Override
public boolean tryLock() {
return sync.tryAcquireShared(parameter) > 0;
}
@Override
public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
return sync.tryAcquireSharedNanos(parameter, unit.toNanos(time));
}
@Override
public void unlock() {
sync.releaseShared(parameter);
}
@Override
public Condition newCondition() {
throw new UnsupportedOperationException(
"Conditions are unsupported as there are no exclusive access"
);
}
}
}
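A short usage sketch of the class above, treating readLock() and writeLock() simply as the two shared-but-mutually-exclusive sides A and B (the try/finally pattern is the usual Lock idiom):
public class MultiReadWriteLockDemo {
    public static void main(String[] args) {
        MultiReadWriteLock pairedLock = new MultiReadWriteLock();

        // "A side": any number of threads may hold this at the same time.
        pairedLock.readLock().lock();
        try {
            // ... work that must never overlap with the B side ...
        } finally {
            pairedLock.readLock().unlock();
        }

        // "B side": also shared, but mutually exclusive with the A side.
        pairedLock.writeLock().lock();
        try {
            // ... work that must never overlap with the A side ...
        } finally {
            pairedLock.writeLock().unlock();
        }
    }
}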
After my nth attempt at a simple fair implementation, I think I understand why I could not find another library/example of a "mutually exclusive lock-pair": it requires a pretty specific use case. As the OP mentioned, you can get a long way with a ReadWriteLock, and a fair lock-pair is only useful when there are many requests for a lock in quick succession (otherwise you might as well use one normal lock).
The implementation below is more of a "permit dispenser": it is not re-entrant. It can be made re-entrant (if not, I fear I failed to make the code simple and readable), but that requires some additional administration for various cases (e.g. one thread locking A twice still needs to unlock A twice, and the unlock method needs to know when there are no more locks outstanding). An option to throw a deadlock error when one thread holds A and wants to lock B is probably a good idea.
The main idea is that there is an "active lock" that can only be changed by the lock method when there are no (requests for) locks at all, and can be changed by the unlock method when the number of outstanding active locks reaches zero. The rest is basically keeping count of lock requests and making threads wait until the active lock can be changed. Making threads wait involves dealing with InterruptedExceptions, and I made a compromise there: I could not find a good solution that works well in all cases (e.g. application shutdown, one thread that gets interrupted, etc.).
I only did some basic testing (test class at the end); more validation is needed.
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;
/**
* A pair of mutual exclusive read-locks: many threads can hold a lock for A or B, but never A and B.
* <br>Usage:<pre>
* PairedLock plock = new PairedLock();
* plock.lockA();
* try {
* // do stuff
* } finally {
* plock.unlockA();
* }</pre>
* This lock is not reentrant: a lock is not associated with a thread and a thread asking for the same lock
* might be blocked the second time (potentially causing a deadlock).
* <p>
* When a lock for A is active, a lock for B will wait for all locks on A to be unlocked and vice versa.
* <br>When a lock for A is active, and a lock for B is waiting, subsequent locks for A will wait
* until all (waiting) locks for B are unlocked.
* I.e. locking is fair (in FIFO order).
* <p>
* See also
* stackoverflow-java-concurrency-paired-locks-with-shared-access
*
@author vanOekel
*
*/
public class PairedLock {
static final int MAX_LOCKS = 2;
static final int CLOSE_PERMITS = 10_000;
/** Use a fair lock to keep internal state instead of the {@code synchronized} keyword. */
final ReentrantLock state = new ReentrantLock(true);
/** Amount of threads that have locks. */
final int[] activeLocks = new int[MAX_LOCKS];
/** Amount of threads waiting to receive a lock. */
final int[] waitingLocks = new int[MAX_LOCKS];
/** Threads block on a semaphore until locks are available. */
final Semaphore[] waiters = new Semaphore[MAX_LOCKS];
int activeLock;
volatile boolean closed;
public PairedLock() {
super();
for (int i = 0; i < MAX_LOCKS; i++) {
// no need for fair semaphore: unlocks are done for all in one go.
waiters[i] = new Semaphore(0);
}
}
public void lockA() throws InterruptedException { lock(0); }
public void lockB() throws InterruptedException { lock(1); }
public void lock(int lockNumber) throws InterruptedException {
if (lockNumber < 0 || lockNumber >= MAX_LOCKS) {
throw new IllegalArgumentException("Lock number must be 0 or less than " + MAX_LOCKS);
} else if (isClosed()) {
throw new IllegalStateException("Lock closed.");
}
boolean wait = false;
state.lock();
try {
if (nextLockIsWaiting()) {
wait = true;
} else if (activeLock == lockNumber) {
activeLocks[activeLock]++;
} else if (activeLock != lockNumber && activeLocks[activeLock] == 0) {
// nothing active and nobody waiting - safe to switch to another active lock
activeLock = lockNumber;
activeLocks[activeLock]++;
} else {
// with only two locks this means this is the first lock that needs an active-lock switch.
// in other words:
// activeLock != lockNumber && activeLocks[activeLock] > 0 && waitingLocks[lockNumber] == 0
wait = true;
}
if (wait) {
waitingLocks[lockNumber]++;
}
} finally {
state.unlock();
}
if (wait) {
waiters[lockNumber].acquireUninterruptibly();
// there is no easy way to bring this lock back into a valid state when waiters do not get a lock.
// so for now, use the closed state to make this lock unusable any further.
if (closed) {
throw new InterruptedException("Lock closed.");
}
}
}
protected boolean nextLockIsWaiting() {
return (waitingLocks[nextLock(activeLock)] > 0);
}
protected int nextLock(int lockNumber) {
return (lockNumber == 0 ? 1 : 0);
}
public void unlockA() { unlock(0); }
public void unlockB() { unlock(1); }
public void unlock(int lockNumber) {
// unlock is called in a finally-block and should never throw an exception.
if (lockNumber < 0 || lockNumber >= MAX_LOCKS) {
System.out.println("Cannot unlock lock number " + lockNumber);
return;
}
state.lock();
try {
if (activeLock != lockNumber) {
System.out.println("ERROR: invalid lock state: no unlocks for inactive lock expected (active: " + activeLock + ", unlock: " + lockNumber + ").");
return;
}
activeLocks[lockNumber]--;
if (activeLocks[activeLock] == 0 && nextLockIsWaiting()) {
activeLock = nextLock(lockNumber);
waiters[activeLock].release(waitingLocks[activeLock]);
activeLocks[activeLock] += waitingLocks[activeLock];
waitingLocks[activeLock] = 0;
} else if (activeLocks[lockNumber] < 0) {
System.out.println("ERROR: to many unlocks for lock number " + lockNumber);
activeLocks[lockNumber] = 0;
}
} finally {
state.unlock();
}
}
public boolean isClosed() { return closed; }
/**
* All threads waiting for a lock will be unblocked and an {@link InterruptedException} will be thrown.
* Subsequent calls to the lock-method will throw an {@link IllegalStateException}.
*/
public synchronized void close() {
if (!closed) {
closed = true;
for (int i = 0; i < MAX_LOCKS; i++) {
waiters[i].release(CLOSE_PERMITS);
}
}
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder(this.getClass().getSimpleName());
sb.append("=").append(this.hashCode());
state.lock();
try {
sb.append(", active=").append(activeLock).append(", switching=").append(nextLockIsWaiting());
sb.append(", lockA=").append(activeLocks[0]).append("/").append(waitingLocks[0]);
sb.append(", lockB=").append(activeLocks[1]).append("/").append(waitingLocks[1]);
} finally {
state.unlock();
}
return sb.toString();
}
}
The test class (YMMV - works fine on my system, but may deadlock on yours due to faster or slower starting and running of threads):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class PairedLockTest {
private static final Logger log = LoggerFactory.getLogger(PairedLockTest.class);
public static final ThreadPoolExecutor tp = (ThreadPoolExecutor) Executors.newCachedThreadPool();
public static void main(String[] args) {
try {
new PairedLockTest().test();
} catch (Exception e) {
e.printStackTrace();
} finally {
tp.shutdownNow();
}
}
PairedLock mlock = new PairedLock();
public void test() throws InterruptedException {
CountDownLatch start = new CountDownLatch(1);
CountDownLatch done = new CountDownLatch(2);
mlock.lockA();
try {
logLock("la1 ");
mlock.lockA();
try {
lockAsync(start, null, done, 1);
await(start);
logLock("la2 ");
} finally {
mlock.unlockA();
}
lockAsync(null, null, done, 0);
} finally {
mlock.unlockA();
}
await(done);
logLock();
}
void lockAsync(CountDownLatch start, CountDownLatch locked, CountDownLatch unlocked, int lockNumber) {
tp.execute(() -> {
countDown(start);
await(start);
//log.info("Locking async " + lockNumber);
try {
mlock.lock(lockNumber);
try {
countDown(locked);
logLock("async " + lockNumber + " ");
} finally {
mlock.unlock(lockNumber);
//log.info("Unlocked async " + lockNumber);
//logLock("async " + lockNumber + " ");
}
countDown(unlocked);
} catch (InterruptedException ie) {
log.warn(ie.toString());
}
});
}
void logLock() {
logLock("");
}
void logLock(String msg) {
log.info(msg + mlock.toString());
}
static void countDown(CountDownLatch l) {
if (l != null) {
l.countDown();
}
}
static void await(CountDownLatch l) {
if (l == null) {
return;
}
try {
l.await();
} catch (InterruptedException e) {
log.error(e.toString(), e.getCause());
}
}
}
How about
class ABSync {
private int aHolders;
private int bHolders;
public synchronized void lockA() throws InterruptedException {
while (bHolders > 0) {
wait();
}
aHolders++;
}
public synchronized void lockB() throws InterruptedException {
while (aHolders > 0) {
wait();
}
bHolders++;
}
public synchronized void unlockA() {
aHolders = Math.max(0, aHolders - 1);
if (aHolders == 0) {
notifyAll();
}
}
public synchronized void unlockB() {
bHolders = Math.max(0, bHolders - 1);
if (bHolders == 0) {
notifyAll();
}
}
}
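A short usage sketch of ABSync (note that lockA() can throw InterruptedException, so the caller has to deal with that):
ABSync sync = new ABSync();
try {
    sync.lockA();                 // blocks while any thread holds B
    try {
        // ... may run concurrently with other A-holders, but never with a B-holder ...
    } finally {
        sync.unlockA();
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}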
Update: As for "fairness" (or, rather, non-starvation), the OP's requirements don't mention it. In order to implement the OP's requirements plus some form of fairness/non-starvation, it should be specified explicitly (what do you consider fair, how should it behave when streams of requests for the currently dominant and non-dominant locks come in, etc.). One of the ways to implement it would be:
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
class ABMoreFairSync {
private Lock lock = new ReentrantLock(true);
public final Part A, B;
public ABMoreFairSync() {
A = new Part();
B = new Part();
A.other = B;
B.other = A;
}
private class Part {
private Condition canGo = lock.newCondition();
private int currentGeneration, lastGeneration;
private int holders;
private Part other;
public void lock() throws InterruptedException {
lock.lockInterruptibly();
try {
int myGeneration = lastGeneration;
if (other.holders > 0 || currentGeneration < myGeneration) {
if (other.currentGeneration == other.lastGeneration) {
other.lastGeneration++;
}
while (other.holders > 0 || currentGeneration < myGeneration) {
canGo.await();
}
}
holders++;
} finally {
lock.unlock();
}
}
public void unlock() throws InterruptedException {
lock.lockInterruptibly();
try {
holders = Math.max(0, holders - 1);
if (holders == 0) {
currentGeneration++;
other.canGo.signalAll();
}
} finally {
lock.unlock();
}
}
}
}
To be used as in:
sync.A.lock();
try {
...
} finally {
sync.A.unlock();
}
The idea of generations here is taken from "Java Concurrency in Practice", Listing 14.9.
I have two threads that can produce a value and add it to an ArrayList,
and another thread that can access the list to read a value.
My problem is that the producer can access the list at the same time that the consumer is using the data.
This is my code:
public class CommandTree
{
Lock lock = new ReentrantLock();
ArrayList<Command> cmdToSend = null;
JSONObject sendCmdMap;
public CommandTree(JSONObject sendCmdMap)
{
this.cmdToSend = new ArrayList<Command>();
this.sendCmdMap = sendCmdMap;
}
private synchronized void addMacroCmd(String macro, int fmt, int tgt, int sid,int count,JSONArray sli,String paramName,JSONObject params,int function)
{
boolean check = false;
int i = 0;
lock.lock();
try
{
for(i=0; i<cmdToSend.size(); i++)
{
if(cmdToSend.get(i).getMacroName().equalsIgnoreCase(macro))
{
check = true;
break;
}
}
if(check == false)
{
cmdToSend.add(new Command(macro,fmt,tgt,sid,count,function,sli));
}
if(paramName != null)
{
if(check)
cmdToSend.get(i).setParameter(paramName,params);
else
cmdToSend.get(cmdToSend.size()-1).setParameter(paramName,params);
}
}
finally
{
lock.unlock();
}
}
private void addParameter(String macro,int fmt, int tgt, int sid,int count,JSONArray sli,String paramName,JSONObject params,int function)
{
lock.lock();
try
{
this.addMacroCmd(macro, fmt, tgt, sid, count,sli, paramName,params,function);
}
finally
{
lock.unlock();
}
}
public int getSize()
{
return cmdToSend.size();
}
public void reset()
{
lock.lock();
try
{
cmdToSend.clear();
}
finally
{
lock.unlock();
}
}
/*
public Command getNextCommandInLoop()
{
return cmdToSend.;
}
*/
public Command getNextCommand(int i)
{
Command result;
lock.lock();
try
{
result = cmdToSend.get(i);
}
finally
{
lock.unlock();
}
return result;
}
public synchronized boolean populateCommandTree(String i,String target) throws JSONException
{
JSONObject tgtCmd = (JSONObject) sendCmdMap.get(target);
JSONObject cmdObject;
Iterator<String> iter = tgtCmd.keys();
while (iter.hasNext())
{
String key = iter.next();
if(key.equalsIgnoreCase(i))
{
//it is a general commands
JSONObject macro = (JSONObject)tgtCmd.opt(key);
cmdObject = (JSONObject) macro.opt("cmd");
addMacroCmd(key,cmdObject.optInt("fmt"),cmdObject.optInt("tgt"),cmdObject.optInt("sid"),cmdObject.optInt("count"),cmdObject.optJSONArray("sli"),null,null,macro.optInt("function"));
return true;
}
else
{
//It is a parameter, we have to search its general command
cmdObject = (JSONObject)tgtCmd.opt(key);
if(cmdObject == null)
{
continue;
}
JSONObject parameter = cmdObject.optJSONObject("Parameter");
if( parameter == null)
{
//There isn't the requested command, we iterate on the next one
continue;
}
else
{
if(((JSONObject) parameter).optJSONObject(i) != null)
{
JSONObject cmdStructure = (JSONObject)cmdObject.opt("cmd");
//We have found the command, save it in commandSendCache
addMacroCmd(key,cmdStructure.optInt("fmt"),cmdStructure.optInt("tgt"),cmdStructure.optInt("sid"),cmdStructure.optInt("count"),cmdStructure.optJSONArray("sli"),i,parameter.optJSONObject(i),cmdObject.optInt("function"));
return true;//(JSONObject)tgtCmd.opt(key);
}
else
{
continue;
}
}
}
}
return false;
}}
I read some posts about this case, but I don't understand them very well; I thought that by posting my code I could understand it better.
Another problem is that one of the producers is the UI thread, and I am worried about blocking the UI thread for some time.
I also thought about using a ConcurrentLinkedQueue, because sometimes I need to loop over the list and I always extract the value from the first position, but with ConcurrentLinkedQueue I don't know how to implement the loop or how to implement the addMacroCmd method.
In my case I think I should use a lock object and an ArrayList.
Do you have any suggestions? I want to learn concurrency in a better way, but it is not very easy for me :(
EDIT: the following is the part of the code that adds and removes the commands:
public synchronized void readSensorData(String[] sensor, String target)
{
cmdTree.reset();
for(int i=0;i<sensor.length;i++)
{
try
{
cmdTree.populateCommandTree(sensor[i],target);
}
catch (JSONException e)
{
}
}
writeExecutor.execute(this.writeCommandTree);
}
/**
*
* @param i
* @param target
* @return
* @throws JSONException when the command requested doesn't exist
*/
private ByteArrayOutputStream f = new ByteArrayOutputStream();
ExecutorService writeExecutor = Executors.newSingleThreadExecutor();
Semaphore mutex = new Semaphore(0);
volatile boolean diagnostic = false;
volatile int index = 0;
Runnable writeCommandTree = new Runnable()
{
@Override
public void run()
{
while(index < cmdTree.getSize())
{
writeCmd();
try
{
mutex.acquire();
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
sendAnswerBroadcast("answer", answer);
answer = new JSONObject();
index = 0;
}
};
and the mutex is released when a new response arrives.
Additional information:
readSensorData() is called when a button on the UI (UI thread) is pressed, and in some cases from another thread B. writeCommandTree is only executed in the executor (another thread C).
I changed the name of getNextCommand to getCommand.
- getCommand(int i) is called in the callback of the response (which is sometimes in another thread; I forgot about that function ...) and in writeCmd inside writeCommandTree
- getSize is called in writeCommandTree in thread C
Don't get headaches just for synchronizing a list; simply use the Java standard library:
List<Command> commands = Collections.synchronizedList(new ArrayList<>());
By the way, a naive implementation of this would simply be to wrap an unsafe list and add synchronized to all the methods.
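A minimal sketch of that naive wrapper idea (illustrative only, with a hypothetical class name; the real Collections.synchronizedList does the same kind of wrapping for the full List interface):
import java.util.ArrayList;
import java.util.List;

// Every method synchronizes on the wrapper itself, so calls never interleave.
class SynchronizedCommandList<T> {
    private final List<T> inner = new ArrayList<>();

    public synchronized void add(T element) {
        inner.add(element);
    }

    public synchronized T get(int index) {
        return inner.get(index);
    }

    public synchronized int size() {
        return inner.size();
    }

    public synchronized void clear() {
        inner.clear();
    }
}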
You can use a BlockingQueue to achieve the same. Refer to this simple tutorial about BlockingQueue: http://tutorials.jenkov.com/java-util-concurrent/blockingqueue.html
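A minimal sketch of that approach, using plain Strings in place of the real Command objects and a hypothetical demo class: the producer puts, the consumer takes, and take() blocks until something is available.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CommandQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            try {
                queue.put("readSensor");     // put() never blocks on an unbounded queue
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                String next = queue.take();  // take() blocks until an element is available
                System.out.println("Consumed " + next);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}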
There are several problems with this code:
It is unlikely that you need both a ReentrantLock and synchronization.
The getSize method is not synchronized at all. If, e.g., reset is called from a thread other than the one from which getSize is called, the program is incorrect.
sendCmdMap is leaked in CommandTree's constructor. If the thread that creates the CommandTree is different from the thread that calls populateCommandTree, the program is incorrect.
Note, btw, that using a synchronized view of cmdToSend would not fix any of these problems.
What you need to do, here, is this:
Producers need to seize a lock, hand a command to the CommandTree and then delete all references to it.
Consumers need to seize the same lock and get a reference to a command, deleting it from the CommandTree (a sketch follows below).
For problems like this, there is no better reference than "Java Concurrency in Practice"
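A minimal sketch of that hand-off, with a much simplified command type and hypothetical names (not the CommandTree API): the producer and the consumer both take the same lock, and the consumer removes the command from the shared structure as it takes it.
import java.util.ArrayDeque;
import java.util.Deque;

public class CommandHandOff {
    private final Object lock = new Object();
    private final Deque<String> commands = new ArrayDeque<>(); // stand-in for Command objects

    // Producer side: seize the lock, hand the command over, keep no reference to it afterwards.
    public void produce(String command) {
        synchronized (lock) {
            commands.addLast(command);
        }
    }

    // Consumer side: seize the same lock, take the command and delete it from the shared structure.
    public String consume() {
        synchronized (lock) {
            return commands.pollFirst(); // null if nothing is queued
        }
    }
}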
I have been working on some code, but I need help.
I have created one producer and one consumer; however, I need to create multiple consumers which will each consume a specific String from the producer, e.g. I need a consumer that will consume specifically 'Move Left Hand'.
The code contains the buffer, the producer, the consumer and the main. I am not sure how to notify the correct consumer and compare the string that needs to be consumed. As it stands I only have one consumer.
public class iRobotBuffer {
private String message;
private boolean empty = true;
public synchronized String take() {
// Wait until message is
// available.
while (empty) {
try {
wait();
} catch (InterruptedException e) {}
}
// Toggle status.
empty = true;
// Notify producer that
// status has changed.
notifyAll();
return message;
}
public synchronized void put(String message) {
// Wait until message has
// been retrieved.
while (!empty) {
try {
wait();
} catch (InterruptedException e) {}
}
// Toggle status.
empty = false;
// Store message.
this.message = message;
// Notify consumer that status
// has changed.
notifyAll();
}
}
public class iRobotConsumer implements Runnable {
private iRobotBuffer robotBuffer;
public iRobotConsumer(iRobotBuffer robotBuffer){
this.robotBuffer = robotBuffer;
}
public void run() {
Random random = new Random();
for (String message = robotBuffer.take();
! message.equals("DONE");
message = robotBuffer.take()) {
System.out.format("MESSAGE RECEIVED: %s%n", message);
try {
Thread.sleep(random.nextInt(5000));
} catch (InterruptedException e) {}
}
}
}
public class iRobotProducer implements Runnable {
private iRobotBuffer robotBuffer;
private int number;
public iRobotProducer(iRobotBuffer robotBuffer)
{
this.robotBuffer = robotBuffer;
//this.number = number;
}
public void run() {
String commandInstructions[] = {
"Move Left Hand",
"Move Right Hand",
"Move Both Hands",
};
int no = commandInstructions.length;
int randomNo;
Random random = new Random();
for (int i = 0;
i < commandInstructions.length;
i++) {
randomNo =(int)(Math.random()*no);
System.out.println(commandInstructions[randomNo]);
robotBuffer.put(commandInstructions[i]);
try {
Thread.sleep(random.nextInt(5000));
} catch (InterruptedException e) {}
}
robotBuffer.put("DONE");
}
}
public class iRobot
{
public static void main(String[] args)
{
iRobotBuffer robotBuffer = new iRobotBuffer();
(new Thread(new iRobotProducer(robotBuffer))).start();
(new Thread(new iRobotConsumer(robotBuffer))).start();
}//main
}//class
The problem is your iRobotBuffer class. It needs to be a queue to support multiple producers / consumers. I've provided the code for such a queue, but Java already has an implementation (BlockingDeque<E>).
public class BlockingQueue<T> {
private final LinkedList<T> innerList = new LinkedList<>();
private boolean isEmpty = true;
public synchronized T take() throws InterruptedException {
while (isEmpty) {
wait();
}
T element = innerList.removeFirst();
isEmpty = innerList.size() == 0;
return element;
}
public synchronized void put(T element) {
isEmpty = false;
innerList.addLast(element);
notify();
}
}
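A short usage sketch of the queue above with one producer thread and two consumer threads (how to route a specific string to a specific consumer is a separate concern; here each consumer just takes whatever comes next, and the "DONE" marker is re-queued so every consumer eventually sees it):
public class BlockingQueueDemo {
    public static void main(String[] args) {
        final BlockingQueue<String> queue = new BlockingQueue<>();

        Runnable consumer = () -> {
            try {
                while (true) {
                    String message = queue.take();   // blocks until something is available
                    if (message.equals("DONE")) {
                        queue.put("DONE");           // let the other consumer see it too
                        return;
                    }
                    System.out.println(Thread.currentThread().getName() + " got: " + message);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        new Thread(consumer, "consumer-1").start();
        new Thread(consumer, "consumer-2").start();

        queue.put("Move Left Hand");
        queue.put("Move Right Hand");
        queue.put("DONE");
    }
}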
As I understand it, you would like 3 consumers, one for each move instruction.
You can use an ArrayBlockingQueue from the java.util.concurrent package in place of the iRobotBuffer class. By the way, have a look at the other concurrent collections provided - one may suit you better.
Then for the consumer, you can peek() at what is in the queue, test if it matches the requirements, and then poll().
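A minimal sketch of that peek/poll idea (using java.util.concurrent.BlockingQueue, not the hand-rolled class from the other answer), assuming each consumer only cares about one specific command string. Note that peek() followed by poll() is not atomic by itself, so this sketch synchronizes the two calls on the queue object and busy-waits for simplicity; a real implementation would want something less wasteful.
import java.util.concurrent.BlockingQueue;

class SelectiveConsumer implements Runnable {
    private final BlockingQueue<String> queue;  // e.g. a shared new ArrayBlockingQueue<String>(10)
    private final String wanted;                // e.g. "Move Left Hand"

    SelectiveConsumer(BlockingQueue<String> queue, String wanted) {
        this.queue = queue;
        this.wanted = wanted;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            synchronized (queue) {              // make peek + poll a single step between consumers
                if (wanted.equals(queue.peek())) {
                    queue.poll();               // it is ours, remove it
                    System.out.println(wanted + " consumed");
                }
            }
            Thread.yield();                     // crude: give the other consumers a chance
        }
    }
}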
I have a need for a single-permit semaphore object in my Java program where there is an additional acquire method which looks like this:
boolean tryAcquire(int id)
and behaves as follows: if the id has not been encountered before, then remember it and then just do whatever java.util.concurrent.Semaphore does. If the id has been encountered before and that encounter resulted in the lease of the permit then give this thread priority over all other threads who may be waiting for the permit. I'll also want an extra release method like:
void release(int id)
which does whatever the java.util.concurrent.Semaphore does, plus also "forgets" about the id.
I don't really know how to approach this. Here's the start of a possible implementation, but I fear it's going nowhere:
public final class SemaphoreWithMemory {
private final Semaphore semaphore = new Semaphore(1, true);
private final Set<Integer> favoured = new ConcurrentSkipListSet<Integer>();
public boolean tryAcquire() {
return semaphore.tryAcquire();
}
public synchronized boolean tryAcquire(int id) {
if (!favoured.contains(id)) {
boolean gotIt = tryAcquire();
if (gotIt) {
favoured.add(id);
return true;
}
else {
return false;
}
}
else {
// what do I do here???
}
}
public void release() {
semaphore.release();
}
public synchronized void release(int id) {
favoured.remove(id);
semaphore.release();
}
}
EDIT:
I did some experiments. Please see this answer for results.
In principle, Semaphore has a queue of threads internally, so as Andrew says, if you make this queue a priority queue and poll from it to give out permits, it would probably behave the way you want. Note that you can't do this with tryAcquire, because that way threads don't queue up. From what I can see, it looks like you'd have to hack the AbstractQueuedSynchronizer class to do this.
I could also think of a probabilistic approach, like this:
(I'm not saying that the code below would be a good idea! Just brainstorming here.)
public class SemaphoreWithMemory {
private final Semaphore semaphore = new Semaphore(1);
private final Set<Integer> favoured = new ConcurrentSkipListSet<Integer>();
private final ThreadLocal<Random> rng = //some good rng
public boolean tryAcquire() {
for(int i=0; i<8; i++){
Thread.yield();
// Tend to waste more time than tryAcquire(int id)
// would waste.
if(rng.get().nextDouble() < 0.3){
return semaphore.tryAcquire();
}
}
return semaphore.tryAcquire();
}
public boolean tryAcquire(int id) {
if (!favoured.contains(id)) {
boolean gotIt = semaphore.tryAcquire();
if (gotIt) {
favoured.add(id);
return true;
} else {
return false;
}
} else {
return tryAcquire();
}
}
}
Or have the "favoured" threads hang out a little bit longer, like this:
EDIT: Turns out this was a very bad idea (with both a fair and a non-fair semaphore); see my experiment for details.
public boolean tryAcquire(int id) {
if (!favoured.contains(id)) {
boolean gotIt = semaphore.tryAcquire(5,TimeUnit.MILLISECONDS);
if (gotIt) {
favoured.add(id);
return true;
} else {
return false;
}
} else {
return tryAcquire();
}
I guess this way you can bias the way permits are issued, while it won't be fair. Though with this code you'd probably be wasting a lot of time performance wise...
For a blocking acquisition model, what about this:
public class SemWithPreferred {
int max;
int avail;
int preferredThreads;
public SemWithPreferred(int max, int avail) {
this.max = max;
this.avail = avail;
}
synchronized public void get(int id) throws InterruptedException {
boolean thisThreadIsPreferred = idHasBeenServedSuccessfullyBefore(id);
if (thisThreadIsPreferred) {
preferredThreads++;
}
while (! (avail > 0 && (preferredThreads == 0 || thisThreadIsPreferred))) {
wait();
}
System.out.println(String.format("granted, id = %d, preferredThreads = %d", id, preferredThreads));
avail -= 1;
if (thisThreadIsPreferred) {
preferredThreads--;
notifyAll(); // removal of preferred thread could affect other threads' wait predicate
}
}
synchronized public void put() {
if (avail < max) {
avail += 1;
notifyAll();
}
}
boolean idHasBeenServedSuccessfullyBefore(int id) {
// stubbed out, this just treats any id that is a
// multiple of 5 as having been served successfully before
return id % 5 == 0;
}
}
Assuming that you want the threads to wait, I hacked together a solution that is not perfect, but should do.
The idea is to have two semaphores and a "favourite is waiting" flag.
Every thread that tries to acquire the SemaphoreWithMemory first tries to acquire the "favouredSemaphore". A "favoured" thread keeps that semaphore and a non-favoured thread releases it immediately. Thereby the favoured thread blocks all other incoming threads once it has acquired this semaphore.
Then the second "normalSemaphore" has to be acquired to finish up.
But the non-favoured thread then checks again that there is no favoured thread waiting (using a volatile variable). If none is waiting then it simply continues; if one is waiting, it releases the normalSemaphore and recursively calls acquire again.
I am not really sure that there are no race conditions lurking. If you want to be sure, you should perhaps refactor your code to hand off "work items" to a priority queue, where another thread takes the work item with the highest priority and executes that code.
public final class SemaphoreWithMemory {
private volatile boolean favouredAquired = false;
private final Semaphore favouredSemaphore = new Semaphore(1, true);
private final Semaphore normalSemaphore = new Semaphore(1, true);
private final Set<Integer> favoured = new ConcurrentSkipListSet<Integer>();
public void acquire() throws InterruptedException {
normalSemaphore.acquire();
}
public void acquire(int id) throws InterruptedException {
boolean idIsFavoured = favoured.contains(id);
favouredSemaphore.acquire();
if (!idIsFavoured) {
favouredSemaphore.release();
} else {
favouredAquired = true;
}
normalSemaphore.acquire();
// check again that there is no favoured thread waiting
if (!idIsFavoured) {
if (favouredAquired) {
normalSemaphore.release();
acquire(); // starving probability!
} else {
favoured.add(id);
}
}
}
public void release() {
normalSemaphore.release();
if (favouredAquired) {
favouredAquired = false;
favouredSemaphore.release();
}
}
public void release(int id) {
favoured.remove(id);
release();
}
}
I read this article by Ceki and was interested in how biased semaphore acquisition could be (since I felt the "biased locking" behaviour would make sense in semaphores as well). On my hardware with 2 processors and a Sun JVM 1.6, it actually results in a pretty uniform lease.
Anyway, I also tried to "bias" the leasing of the semaphore with the strategy I wrote about in my other answer. It turns out that a simple extra yield statement alone results in significant bias. Your problem is more complicated, but perhaps you can do similar tests with your idea and see what you get :)
NOTE: The code below is based upon Ceki's code here
Code:
import java.util.concurrent.*;
public class BiasedSemaphore implements Runnable {
static ThreadLocal<Boolean> favored = new ThreadLocal<Boolean>(){
private boolean gaveOut = false;
public synchronized Boolean initialValue(){
if(!gaveOut){
System.out.println("Favored " + Thread.currentThread().getName());
gaveOut = true;
return true;
}
return false;
}
};
static int THREAD_COUNT = Runtime.getRuntime().availableProcessors();
static Semaphore SEM = new Semaphore(1);
static Runnable[] RUNNABLE_ARRAY = new Runnable[THREAD_COUNT];
static Thread[] THREAD_ARRAY = new Thread[THREAD_COUNT];
private int counter = 0;
public static void main(String args[]) throws InterruptedException {
printEnvironmentInfo();
execute();
printResults();
}
public static void printEnvironmentInfo() {
System.out.println("java.runtime.version = "
+ System.getProperty("java.runtime.version"));
System.out.println("java.vendor = "
+ System.getProperty("java.vendor"));
System.out.println("java.version = "
+ System.getProperty("java.version"));
System.out.println("os.name = "
+ System.getProperty("os.name"));
System.out.println("os.version = "
+ System.getProperty("os.version"));
}
public static void execute() throws InterruptedException {
for (int i = 0; i < THREAD_COUNT; i++) {
RUNNABLE_ARRAY[i] = new BiasedSemaphore();
THREAD_ARRAY[i] = new Thread(RUNNABLE_ARRAY[i]);
System.out.println("Runnable at "+i + " operated with "+THREAD_ARRAY[i]);
}
for (Thread t : THREAD_ARRAY) {
t.start();
}
// let the threads run for a while
Thread.sleep(10000);
for (int i = 0; i< THREAD_COUNT; i++) {
THREAD_ARRAY[i].interrupt();
}
for (Thread t : THREAD_ARRAY) {
t.join();
}
}
public static void printResults() {
System.out.println("Ran with " + THREAD_COUNT + " threads");
for (int i = 0; i < RUNNABLE_ARRAY.length; i++) {
System.out.println("runnable[" + i + "]: " + RUNNABLE_ARRAY[i]);
}
}
public void run() {
while (!Thread.currentThread().isInterrupted()) {
if (favored.get()) {
stuff();
} else {
Thread.yield();
// try {
// Thread.sleep(1);
// } catch (InterruptedException e) {
// Thread.currentThread().interrupt();
// }
stuff();
}
}
}
private void stuff() {
if (SEM.tryAcquire()) {
//favored.set(true);
counter++;
try {
Thread.sleep(10);
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
SEM.release();
} else {
//favored.set(false);
}
}
public String toString() {
return "counter=" + counter;
}
}
Results:
java.runtime.version = 1.6.0_21-b07
java.vendor = Sun Microsystems Inc.
java.version = 1.6.0_21
os.name = Windows Vista
os.version = 6.0
Runnable at 0 operated with Thread[Thread-0,5,main]
Runnable at 1 operated with Thread[Thread-1,5,main]
Favored Thread-0
Ran with 2 threads
runnable[0]: counter=503
runnable[1]: counter=425
Tried with 30 seconds instead of 10:
java.runtime.version = 1.6.0_21-b07
java.vendor = Sun Microsystems Inc.
java.version = 1.6.0_21
os.name = Windows Vista
os.version = 6.0
Runnable at 0 operated with Thread[Thread-0,5,main]
Runnable at 1 operated with Thread[Thread-1,5,main]
Favored Thread-1
Ran with 2 threads
runnable[0]: counter=1274
runnable[1]: counter=1496
P.S.: It looks like "hanging out" was a very bad idea. When I tried calling SEM.tryAcquire(1, TimeUnit.MILLISECONDS) for favored threads and SEM.tryAcquire() for non-favored threads, the non-favored threads got the permit almost 5 times more often than the favored thread!
Also, I'd like to add that these results were only measured in one particular situation, so it's not clear how these measures behave in other situations.
It strikes me that the simplest way to do this is not to try and combine Semaphores, but to build it from scratch on top of monitors. This is generally risky, but in this case, as there are no good building blocks in java.util.concurrent, it's the clearest way to do it.
Here's what I came up with:
import java.util.HashSet;
import java.util.Set;
public class SemaphoreWithMemory {
private final Set<Integer> favouredIDs = new HashSet<Integer>();
private final Object favouredLock = new Object();
private final Object ordinaryLock = new Object();
private boolean available = true;
private int favouredWaiting = 0;
/**
Acquires the permit. Blocks until the permit is acquired.
*/
public void acquire(int id) throws InterruptedException {
Object lock;
boolean favoured = false;
synchronized (this) {
// fast exit for uncontended lock
if (available) {
doAcquire(favoured, id);
return;
}
favoured = favouredIDs.contains(id);
if (favoured) {
lock = favouredLock;
++favouredWaiting;
}
else {
lock = ordinaryLock;
}
}
while (true) {
synchronized (this) {
if (available) {
doAcquire(favoured, id);
return;
}
}
synchronized (lock) {
lock.wait();
}
}
}
private void doAcquire(boolean favoured, int id) {
available = false;
if (favoured) --favouredWaiting;
else favouredIDs.add(id);
}
/**
Releases the permit.
*/
public synchronized void release() {
available = true;
Object lock = (favouredWaiting > 0) ? favouredLock : ordinaryLock;
synchronized (lock) {
lock.notify();
}
}
}
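Finally, a short usage sketch of this class with a hypothetical request id: the worker acquires with its id, does its work and releases; once an id has been granted the permit, later acquires for that id are preferred over ordinary waiters.
public class SemaphoreWithMemoryDemo {
    public static void main(String[] args) {
        final SemaphoreWithMemory sem = new SemaphoreWithMemory();

        Runnable worker = () -> {
            int id = 42;                 // hypothetical request id
            try {
                sem.acquire(id);         // favoured if this id was granted the permit before
                try {
                    // ... critical section ...
                } finally {
                    sem.release();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        new Thread(worker, "worker-1").start();
        new Thread(worker, "worker-2").start();
    }
}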