I need a semaphore with the following features:
- it should be non-blocking, i.e. if a thread cannot get a permit it should continue without waiting
- it should be non-reentrant, i.e. if the same thread enters the guarded piece of code twice it should take two permits instead of one
I have written the following code:
public class SimpleSemaphore
{
private int permits;
private AtomicLong counter = new AtomicLong();
SimpleSemaphore(int permits)
{
this.permits = permits;
}
boolean acquire()
{
if (counter.incrementAndGet() < permits)
{
return true;
}
else
{
counter.decrementAndGet();
return false;
}
}
void release()
{
counter.decrementAndGet();
}
}
Another option is this Semaphore:
public class EasySemaphore
{
private int permits;
private AtomicLong counter = new AtomicLong();
EasySemaphore(int permits)
{
this.permits = permits;
}
boolean acquire()
{
long index = counter.get();
if (index < permits)
{
if (counter.compareAndSet(index, index + 1))
{
return true;
}
}
return false;
}
void release()
{
counter.decrementAndGet();
}
}
Are both implementations thread-safe and correct?
Which one is better?
How would you go about this task?
Doesn't java.util.concurrent.Semaphore already do all that?
It has a tryAcquire for non-blocking acquire, and it maintains a simple count of remaining permits (of which the same thread could take out more than one).
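For instance, a minimal usage sketch (class and method names invented for this example):
import java.util.concurrent.Semaphore;
class TryAcquireDemo {
    private static final Semaphore SEM = new Semaphore(2); // permits are not tied to a thread

    static void doGuardedWork() {
        if (SEM.tryAcquire()) {          // non-blocking: returns false instead of waiting
            try {
                // guarded code; a nested call from the same thread would take a second permit
            } finally {
                SEM.release();
            }
        } else {
            // no permit available: carry on without waiting
        }
    }
}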
I would say the second one is better, as the counter will never go above the number of permits (and it's slightly more efficient).
I would use a loop, otherwise the method can fail when there are still permits left.
public class EasySemaphore {
private final AtomicInteger counter;
EasySemaphore(int permits) {
counter = new AtomicInteger(permits);
}
boolean acquire() {
// highly unlikely to loop more than once.
while(true) {
int count = counter.get();
if (count <= 0) return false;
if (counter.compareAndSet(count, count -1))
return true;
}
}
void release() {
counter.incrementAndGet();
}
}
I am supposed to be using two custom Semaphore classes (binary and counting) to print off letters in an exact sequence. Here is the standard semaphore.
public class Semaphore {
protected int value;
public Semaphore() {
value = 0;
}
public Semaphore(int initial) {
value = (initial >=0) ? initial : 0;
}
public synchronized void P() throws InterruptedException {
while (value==0) {
wait();
}
value--;
}
public synchronized void V() {
value++;
notify();
}
}
And here is the binary semaphore:
public class BinarySemaphore extends Semaphore {
public BinarySemaphore(boolean unlocked) {super(unlocked ? 1 : 0);}
public synchronized void P() throws InterruptedException{
while(value==0) {
wait();
}
value=0;
}
public synchronized void V() {
value=1;
notify();
}
}
Here is the main bulk of the code, except that for a reason I can't work out, the threads stop after around thirty or so repetitions. Wait isn't called and the criteria for being true are being met, so why aren't they working? Any help is much appreciated.
BinarySemaphore binaryWXSemaphore = new BinarySemaphore(false);
BinarySemaphore binaryYZSemaphore = new BinarySemaphore(false);
Semaphore countingWSemaphore = new Semaphore();
Semaphore countingYZSemaphore = new Semaphore();
Runnable runnableW = () -> {
while(true) {
if (binaryWXSemaphore.value == 0 && countingYZSemaphore.value >= countingWSemaphore.value) {
binaryWXSemaphore.V();
countingWSemaphore.V();
System.out.println("W");
}
}
};
Runnable runnableX = () -> {
while(true) {
if (binaryWXSemaphore.value == 1) {
try {
binaryWXSemaphore.P();
System.out.println("X");
} catch (Exception e) {
e.printStackTrace();
}
}
}
};
Runnable runnableY = () -> {
while(true) {
if (binaryYZSemaphore.value == 0 && countingWSemaphore.value > countingYZSemaphore.value) {
binaryYZSemaphore.V();
countingYZSemaphore.V();
System.out.println("y");
}
}
};
Runnable runnableZ = () -> {
while(true) {
if (binaryYZSemaphore.value == 1 && countingWSemaphore.value > countingYZSemaphore.value) {
try {
binaryYZSemaphore.P();
countingYZSemaphore.V();
System.out.println("z");
} catch (Exception e) {
e.printStackTrace();
}
}
}
};
As @iggy points out, the issue is related to the fact that different threads are reading different values of value, because the way you access it isn't thread-safe. Some threads may be using an old copy of the value. Making it volatile will mean each thread reads a more consistent value:
protected volatile int value;
Or switch to AtomicInteger, which ensures thread-consistent changes to the int stored in value. You'll also need to replace the assignments with the set/get/increment/decrement methods of AtomicInteger:
protected final AtomicInteger value = new AtomicInteger();
// Then use value.set(0 / 1)
// or value.incrementAndGet / decrementAndGet
Unfortunately, even with the above changes, you may find other issues, because value could change in the window between each Runnable's if statement and the operations inside those if branches.
Also: replacing notify() by notifyAll() usually gives better multi-thread handling though I don't think this necessarily helps in your example.
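To make the AtomicInteger suggestion concrete, here is a rough, untested sketch of the counting Semaphore rewritten around it (the direct reads of value in the runnables would likewise have to go through value.get()); it also uses notifyAll() as mentioned above:
import java.util.concurrent.atomic.AtomicInteger;
public class Semaphore {
    protected final AtomicInteger value;
    public Semaphore() {
        this(0);
    }
    public Semaphore(int initial) {
        value = new AtomicInteger(Math.max(initial, 0));
    }
    public synchronized void P() throws InterruptedException {
        while (value.get() == 0) {
            wait();
        }
        value.decrementAndGet();
    }
    public synchronized void V() {
        value.incrementAndGet();
        notifyAll(); // wake every waiter so each can re-check the condition
    }
}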
I am looking for a Java implementation of the following concurrency semantics. I want something similar to ReadWriteLock except symmetrical, i.e. both the read and write sides can be shared amongst many threads, but read excludes write and vice versa.
There are two locks, let's call them A and B.
Lock A is shared, i.e. there may be multiple threads holding it concurrently. Lock B is also shared, there may be multiple threads holding it concurrently.
If any thread holds lock A then no thread may take B – threads attempting to take B shall block until all threads holding A have released A.
If any thread holds lock B then no thread may take A – threads attempting to take A shall block until all threads holding B have released B.
Is there an existing library class that achieves this? At the moment I have approximated the desired functionality with a ReadWriteLock because fortunately the tasks done in the context of lock B are somewhat rarer. It feels like a hack though, and it could affect the performance of my program under heavy load.
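For illustration (not the actual code), the kind of ReadWriteLock approximation described above might look like this sketch: side A maps onto the shared read lock and side B onto the exclusive write lock, which is stricter than required for B.
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
class ApproximatedPairedLock {
    private final ReadWriteLock rw = new ReentrantReadWriteLock();
    void lockA()   { rw.readLock().lock(); }   // shared among A-threads, excludes B
    void unlockA() { rw.readLock().unlock(); }
    void lockB()   { rw.writeLock().lock(); }  // exclusive, so B cannot be shared: the "hack"
    void unlockB() { rw.writeLock().unlock(); }
}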
Short answer:
In the standard library, there is nothing like what you need.
Long answer:
To easily implement a custom Lock you should subclass or delegate to an AbstractQueuedSynchronizer.
The following code is an example of a non-fair lock that implements what you need, including some (non-exhaustive) tests. I called it LeftRightLock because of the binary nature of your requirements.
The concept is pretty straightforward:
AbstractQueuedSynchronizer exposes a method to atomically set an int state using the compare-and-swap idiom (compareAndSetState(int expect, int update)). We can use that exposed state to keep count of the threads holding the lock, setting it to a positive value while the Right lock is held and to a negative value while the Left lock is held.
Then we just make sure of the following conditions:
- you can lock Left only if the state of the internal AbstractQueuedSynchronizer is zero or negative
- you can lock Right only if the state of the internal AbstractQueuedSynchronizer is zero or positive
LeftRightLock.java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Lock;
/**
* A binary mutex with the following properties:
*
* Exposes two different {@link Lock}s: LEFT, RIGHT.
*
* When LEFT is held other threads can acquire LEFT but threads trying to acquire RIGHT will be
* blocked. When RIGHT is held other threads can acquire RIGHT but threads trying to acquire LEFT
* will be blocked.
*/
public class LeftRightLock {
public static final int ACQUISITION_FAILED = -1;
public static final int ACQUISITION_SUCCEEDED = 1;
private final LeftRightSync sync = new LeftRightSync();
public void lockLeft() {
sync.acquireShared(LockSide.LEFT.getV());
}
public void lockRight() {
sync.acquireShared(LockSide.RIGHT.getV());
}
public void releaseLeft() {
sync.releaseShared(LockSide.LEFT.getV());
}
public void releaseRight() {
sync.releaseShared(LockSide.RIGHT.getV());
}
public boolean tryLockLeft() {
return sync.tryAcquireShared(LockSide.LEFT) == ACQUISITION_SUCCEEDED;
}
public boolean tryLockRight() {
return sync.tryAcquireShared(LockSide.RIGHT) == ACQUISITION_SUCCEEDED;
}
private enum LockSide {
LEFT(-1), NONE(0), RIGHT(1);
private final int v;
LockSide(int v) {
this.v = v;
}
public int getV() {
return v;
}
}
/**
* <p>
* Keeps count of the threads holding either the LEFT or the RIGHT lock.
* </p>
*
* <li>A state ({@link AbstractQueuedSynchronizer#getState()}) greater than 0 means one or more threads are holding the RIGHT lock.</li>
* <li>A state ({@link AbstractQueuedSynchronizer#getState()}) lower than 0 means one or more threads are holding the LEFT lock.</li>
* <li>A state ({@link AbstractQueuedSynchronizer#getState()}) equal to zero means no thread is holding any lock.</li>
*/
private static final class LeftRightSync extends AbstractQueuedSynchronizer {
@Override
protected int tryAcquireShared(int requiredSide) {
return (tryChangeThreadCountHoldingCurrentLock(requiredSide, ChangeType.ADD) ? ACQUISITION_SUCCEEDED : ACQUISITION_FAILED);
}
@Override
protected boolean tryReleaseShared(int requiredSide) {
return tryChangeThreadCountHoldingCurrentLock(requiredSide, ChangeType.REMOVE);
}
public boolean tryChangeThreadCountHoldingCurrentLock(int requiredSide, ChangeType changeType) {
if (requiredSide != 1 && requiredSide != -1)
throw new AssertionError("You can either lock LEFT or RIGHT (-1 or +1)");
int curState;
int newState;
do {
curState = this.getState();
if (!sameSide(curState, requiredSide)) {
return false;
}
if (changeType == ChangeType.ADD) {
newState = curState + requiredSide;
} else {
newState = curState - requiredSide;
}
//TODO: protect against int overflow (hopefully you won't have so many threads)
} while (!this.compareAndSetState(curState, newState));
return true;
}
final int tryAcquireShared(LockSide lockSide) {
return this.tryAcquireShared(lockSide.getV());
}
final boolean tryReleaseShared(LockSide lockSide) {
return this.tryReleaseShared(lockSide.getV());
}
private boolean sameSide(int curState, int requiredSide) {
return curState == 0 || sameSign(curState, requiredSide);
}
private boolean sameSign(int a, int b) {
return (a >= 0) ^ (b < 0);
}
public enum ChangeType {
ADD, REMOVE
}
}
}
LeftRightLockTest.java
import org.junit.Test;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
public class LeftRightLockTest {
int logLineSequenceNumber = 0;
private LeftRightLock sut = new LeftRightLock();
@Test(timeout = 2000)
public void acquiringLeftLockExcludeAcquiringRightLock() throws Exception {
sut.lockLeft();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockRight());
assertFalse("I shouldn't be able to acquire the RIGHT lock!", task.get());
}
@Test(timeout = 2000)
public void acquiringRightLockExcludeAcquiringLeftLock() throws Exception {
sut.lockRight();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockLeft());
assertFalse("I shouldn't be able to acquire the LEFT lock!", task.get());
}
@Test(timeout = 2000)
public void theLockShouldBeReentrant() throws Exception {
sut.lockLeft();
assertTrue(sut.tryLockLeft());
}
@Test(timeout = 2000)
public void multipleThreadShouldBeAbleToAcquireTheSameLock_Right() throws Exception {
sut.lockRight();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockRight());
assertTrue(task.get());
}
@Test(timeout = 2000)
public void multipleThreadShouldBeAbleToAcquireTheSameLock_left() throws Exception {
sut.lockLeft();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockLeft());
assertTrue(task.get());
}
@Test(timeout = 2000)
public void shouldKeepCountOfAllTheThreadsHoldingTheSide() throws Exception {
CountDownLatch latchA = new CountDownLatch(1);
CountDownLatch latchB = new CountDownLatch(1);
Thread threadA = spawnThreadToAcquireLeftLock(latchA, sut);
Thread threadB = spawnThreadToAcquireLeftLock(latchB, sut);
System.out.println("Both threads have acquired the left lock.");
try {
latchA.countDown();
threadA.join();
boolean acqStatus = sut.tryLockRight();
System.out.println("The right lock was " + (acqStatus ? "" : "not") + " acquired");
assertFalse("There is still a thread holding the left lock. This shouldn't succeed.", acqStatus);
} finally {
latchB.countDown();
threadB.join();
}
}
@Test(timeout = 5000)
public void shouldBlockThreadsTryingToAcquireLeftIfRightIsHeld() throws Exception {
sut.lockLeft();
CountDownLatch taskStartedLatch = new CountDownLatch(1);
final Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> {
taskStartedLatch.countDown();
sut.lockRight();
return false;
});
taskStartedLatch.await();
Thread.sleep(100);
assertFalse(task.isDone());
}
@Test
public void shouldBeFreeAfterRelease() throws Exception {
sut.lockLeft();
sut.releaseLeft();
assertTrue(sut.tryLockRight());
}
@Test
public void shouldBeFreeAfterMultipleThreadsReleaseIt() throws Exception {
CountDownLatch latch = new CountDownLatch(1);
final Thread thread1 = spawnThreadToAcquireLeftLock(latch, sut);
final Thread thread2 = spawnThreadToAcquireLeftLock(latch, sut);
latch.countDown();
thread1.join();
thread2.join();
assertTrue(sut.tryLockRight());
}
@Test(timeout = 2000)
public void lockShouldBeReleasedIfNoThreadIsHoldingIt() throws Exception {
CountDownLatch releaseLeftLatch = new CountDownLatch(1);
CountDownLatch rightLockTaskIsRunning = new CountDownLatch(1);
Thread leftLockThread1 = spawnThreadToAcquireLeftLock(releaseLeftLatch, sut);
Thread leftLockThread2 = spawnThreadToAcquireLeftLock(releaseLeftLatch, sut);
Future<Boolean> acquireRightLockTask = Executors.newSingleThreadExecutor().submit(() -> {
if (sut.tryLockRight())
throw new AssertionError("The left lock should still be held, I shouldn't be able to acquire right at this point.");
printSynchronously("Going to be blocked on right lock");
rightLockTaskIsRunning.countDown();
sut.lockRight();
printSynchronously("Lock acquired!");
return true;
});
rightLockTaskIsRunning.await();
releaseLeftLatch.countDown();
leftLockThread1.join();
leftLockThread2.join();
assertTrue(acquireRightLockTask.get());
}
private synchronized void printSynchronously(String str) {
System.out.println(logLineSequenceNumber++ + ")" + str);
System.out.flush();
}
private Thread spawnThreadToAcquireLeftLock(CountDownLatch releaseLockLatch, LeftRightLock lock) throws InterruptedException {
CountDownLatch lockAcquiredLatch = new CountDownLatch(1);
final Thread thread = spawnThreadToAcquireLeftLock(releaseLockLatch, lockAcquiredLatch, lock);
lockAcquiredLatch.await();
return thread;
}
private Thread spawnThreadToAcquireLeftLock(CountDownLatch releaseLockLatch, CountDownLatch lockAcquiredLatch, LeftRightLock lock) {
final Thread thread = new Thread(() -> {
lock.lockLeft();
printSynchronously("Thread " + Thread.currentThread() + " Acquired left lock");
try {
lockAcquiredLatch.countDown();
releaseLockLatch.await();
} catch (InterruptedException ignore) {
} finally {
lock.releaseLeft();
}
printSynchronously("Thread " + Thread.currentThread() + " RELEASED left lock");
});
thread.start();
return thread;
}
}
I don't know of any library that does what you want. Even if there were such a library, it would be of little value, because every time your requirements change the library stops doing the magic.
The actual question here is "How do I implement my own lock with a custom specification?"
Java provides a tool for that named AbstractQueuedSynchronizer. It has extensive documentation. Apart from the docs, one would probably want to look at the CountDownLatch and ReentrantLock sources and use them as examples.
For your particular request, see the code below, but beware that it is 1) not fair and 2) not tested:
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
public class MultiReadWriteLock implements ReadWriteLock {
private final Sync sync;
private final Lock readLock;
private final Lock writeLock;
public MultiReadWriteLock() {
this.sync = new Sync();
this.readLock = new MultiLock(Sync.READ, sync);
this.writeLock = new MultiLock(Sync.WRITE, sync);
}
@Override
public Lock readLock() {
return readLock;
}
@Override
public Lock writeLock() {
return writeLock;
}
private static final class Sync extends AbstractQueuedSynchronizer {
private static final int READ = 1;
private static final int WRITE = -1;
@Override
public int tryAcquireShared(int arg) {
int state, result;
do {
state = getState();
if (state >= 0 && arg == READ) {
// new read
result = state + 1;
} else if (state <= 0 && arg == WRITE) {
// new write
result = state - 1;
} else {
// blocked
return -1;
}
} while (!compareAndSetState(state, result));
return 1;
}
@Override
protected boolean tryReleaseShared(int arg) {
int state, result;
do {
state = getState();
if (state == 0) {
return false;
}
if (state > 0 && arg == READ) {
result = state - 1;
} else if (state < 0 && arg == WRITE) {
result = state + 1;
} else {
throw new IllegalMonitorStateException();
}
} while (!compareAndSetState(state, result));
return result == 0;
}
}
private static class MultiLock implements Lock {
private final int parameter;
private final Sync sync;
public MultiLock(int parameter, Sync sync) {
this.parameter = parameter;
this.sync = sync;
}
@Override
public void lock() {
sync.acquireShared(parameter);
}
@Override
public void lockInterruptibly() throws InterruptedException {
sync.acquireSharedInterruptibly(parameter);
}
@Override
public boolean tryLock() {
return sync.tryAcquireShared(parameter) > 0;
}
@Override
public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
return sync.tryAcquireSharedNanos(parameter, unit.toNanos(time));
}
@Override
public void unlock() {
sync.releaseShared(parameter);
}
@Override
public Condition newCondition() {
throw new UnsupportedOperationException(
"Conditions are unsupported as there is no exclusive access"
);
}
}
}
After my nth attempt to make a simple fair implementation, I think I understand why I could not find another library/example of the "mutually exclusive lock-pair": it requires a pretty specific use-case. As the OP mentioned, you can get a long way with a ReadWriteLock, and a fair lock-pair is only useful when there are many requests for a lock in quick succession (otherwise you might as well use one normal lock).
The implementation below is more of a "permit dispenser": it is not re-entrant. It can be made re-entrant though (if not, I fear I failed to make the code simple and readable) but it requires some additional administration for various cases (e.g. one thread locking A twice, still needs to unlock A twice and the unlock-method needs to know when there are no more locks outstanding). An option to throw a deadlock error when one thread locks A and wants to lock B is probably a good idea.
The main idea is that there is an "active lock" that can only be changed by the lock-method when there are no (requests for) locks at all and can be changed by the unlock-method when the active locks outstanding reaches zero. The rest is basically keeping count of lock-requests and making threads wait until the active lock can be changed. Making threads wait involves working with InterruptedExceptions and I made a compromise there: I could not find a good solution that works well in all cases (e.g. application shutdown, one thread that gets interrupted, etc.).
I only did some basic testing (test class at the end), more validation is needed.
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;
/**
* A pair of mutual exclusive read-locks: many threads can hold a lock for A or B, but never A and B.
* <br>Usage:<pre>
* PairedLock plock = new PairedLock();
* plock.lockA();
* try {
* // do stuff
* } finally {
* plock.unlockA();
* }</pre>
* This lock is not reentrant: a lock is not associated with a thread and a thread asking for the same lock
* might be blocked the second time (potentially causing a deadlock).
* <p>
* When a lock for A is active, a lock for B will wait for all locks on A to be unlocked and vice versa.
* <br>When a lock for A is active, and a lock for B is waiting, subsequent locks for A will wait
* until all (waiting) locks for B are unlocked.
* I.e. locking is fair (in FIFO order).
* <p>
* See also
* stackoverflow-java-concurrency-paired-locks-with-shared-access
*
@author vanOekel
*
*/
public class PairedLock {
static final int MAX_LOCKS = 2;
static final int CLOSE_PERMITS = 10_000;
/** Use a fair lock to keep internal state instead of the {@code synchronized} keyword. */
final ReentrantLock state = new ReentrantLock(true);
/** Amount of threads that have locks. */
final int[] activeLocks = new int[MAX_LOCKS];
/** Amount of threads waiting to receive a lock. */
final int[] waitingLocks = new int[MAX_LOCKS];
/** Threads block on a semaphore until locks are available. */
final Semaphore[] waiters = new Semaphore[MAX_LOCKS];
int activeLock;
volatile boolean closed;
public PairedLock() {
super();
for (int i = 0; i < MAX_LOCKS; i++) {
// no need for fair semaphore: unlocks are done for all in one go.
waiters[i] = new Semaphore(0);
}
}
public void lockA() throws InterruptedException { lock(0); }
public void lockB() throws InterruptedException { lock(1); }
public void lock(int lockNumber) throws InterruptedException {
if (lockNumber < 0 || lockNumber >= MAX_LOCKS) {
throw new IllegalArgumentException("Lock number must be at least 0 and less than " + MAX_LOCKS);
} else if (isClosed()) {
throw new IllegalStateException("Lock closed.");
}
boolean wait = false;
state.lock();
try {
if (nextLockIsWaiting()) {
wait = true;
} else if (activeLock == lockNumber) {
activeLocks[activeLock]++;
} else if (activeLock != lockNumber && activeLocks[activeLock] == 0) {
// nothing active and nobody waiting - safe to switch to another active lock
activeLock = lockNumber;
activeLocks[activeLock]++;
} else {
// with only two locks this means this is the first lock that needs an active-lock switch.
// in other words:
// activeLock != lockNumber && activeLocks[activeLock] > 0 && waitingLocks[lockNumber] == 0
wait = true;
}
if (wait) {
waitingLocks[lockNumber]++;
}
} finally {
state.unlock();
}
if (wait) {
waiters[lockNumber].acquireUninterruptibly();
// there is no easy way to bring this lock back into a valid state when waiters do not get a lock.
// so for now, use the closed state to make this lock unusable any further.
if (closed) {
throw new InterruptedException("Lock closed.");
}
}
}
protected boolean nextLockIsWaiting() {
return (waitingLocks[nextLock(activeLock)] > 0);
}
protected int nextLock(int lockNumber) {
return (lockNumber == 0 ? 1 : 0);
}
public void unlockA() { unlock(0); }
public void unlockB() { unlock(1); }
public void unlock(int lockNumber) {
// unlock is called in a finally-block and should never throw an exception.
if (lockNumber < 0 || lockNumber >= MAX_LOCKS) {
System.out.println("Cannot unlock lock number " + lockNumber);
return;
}
state.lock();
try {
if (activeLock != lockNumber) {
System.out.println("ERROR: invalid lock state: no unlocks for inactive lock expected (active: " + activeLock + ", unlock: " + lockNumber + ").");
return;
}
activeLocks[lockNumber]--;
if (activeLocks[activeLock] == 0 && nextLockIsWaiting()) {
activeLock = nextLock(lockNumber);
waiters[activeLock].release(waitingLocks[activeLock]);
activeLocks[activeLock] += waitingLocks[activeLock];
waitingLocks[activeLock] = 0;
} else if (activeLocks[lockNumber] < 0) {
System.out.println("ERROR: too many unlocks for lock number " + lockNumber);
activeLocks[lockNumber] = 0;
}
} finally {
state.unlock();
}
}
public boolean isClosed() { return closed; }
/**
* All threads waiting for a lock will be unblocked and an {@link InterruptedException} will be thrown.
* Subsequent calls to the lock-method will throw an {@link IllegalStateException}.
*/
public synchronized void close() {
if (!closed) {
closed = true;
for (int i = 0; i < MAX_LOCKS; i++) {
waiters[i].release(CLOSE_PERMITS);
}
}
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder(this.getClass().getSimpleName());
sb.append("=").append(this.hashCode());
state.lock();
try {
sb.append(", active=").append(activeLock).append(", switching=").append(nextLockIsWaiting());
sb.append(", lockA=").append(activeLocks[0]).append("/").append(waitingLocks[0]);
sb.append(", lockB=").append(activeLocks[1]).append("/").append(waitingLocks[1]);
} finally {
state.unlock();
}
return sb.toString();
}
}
The test class (YMMV - works fine on my system, but may deadlock on yours due to faster or slower starting and running of threads):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class PairedLockTest {
private static final Logger log = LoggerFactory.getLogger(PairedLockTest.class);
public static final ThreadPoolExecutor tp = (ThreadPoolExecutor) Executors.newCachedThreadPool();
public static void main(String[] args) {
try {
new PairedLockTest().test();
} catch (Exception e) {
e.printStackTrace();
} finally {
tp.shutdownNow();
}
}
PairedLock mlock = new PairedLock();
public void test() throws InterruptedException {
CountDownLatch start = new CountDownLatch(1);
CountDownLatch done = new CountDownLatch(2);
mlock.lockA();
try {
logLock("la1 ");
mlock.lockA();
try {
lockAsync(start, null, done, 1);
await(start);
logLock("la2 ");
} finally {
mlock.unlockA();
}
lockAsync(null, null, done, 0);
} finally {
mlock.unlockA();
}
await(done);
logLock();
}
void lockAsync(CountDownLatch start, CountDownLatch locked, CountDownLatch unlocked, int lockNumber) {
tp.execute(() -> {
countDown(start);
await(start);
//log.info("Locking async " + lockNumber);
try {
mlock.lock(lockNumber);
try {
countDown(locked);
logLock("async " + lockNumber + " ");
} finally {
mlock.unlock(lockNumber);
//log.info("Unlocked async " + lockNumber);
//logLock("async " + lockNumber + " ");
}
countDown(unlocked);
} catch (InterruptedException ie) {
log.warn(ie.toString());
}
});
}
void logLock() {
logLock("");
}
void logLock(String msg) {
log.info(msg + mlock.toString());
}
static void countDown(CountDownLatch l) {
if (l != null) {
l.countDown();
}
}
static void await(CountDownLatch l) {
if (l == null) {
return;
}
try {
l.await();
} catch (InterruptedException e) {
log.error(e.toString(), e.getCause());
}
}
}
How about
class ABSync {
private int aHolders;
private int bHolders;
public synchronized void lockA() throws InterruptedException {
while (bHolders > 0) {
wait();
}
aHolders++;
}
public synchronized void lockB() throws InterruptedException {
while (aHolders > 0) {
wait();
}
bHolders++;
}
public synchronized void unlockA() {
aHolders = Math.max(0, aHolders - 1);
if (aHolders == 0) {
notifyAll();
}
}
public synchronized void unlockB() {
bHolders = Math.max(0, bHolders - 1);
if (bHolders == 0) {
notifyAll();
}
}
}
Update: As for "fairness" (or, rather, non-starvation), the OP's requirements don't mention it. In order to implement the OP's requirements plus some form of fairness/non-starvation, it should be specified explicitly (what you consider fair, how it should behave when flows of requests for the currently dominant and non-dominant locks come in, etc.). One of the ways to implement it would be:
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
class ABMoreFairSync {
private Lock lock = new ReentrantLock(true);
public final Part A, B;
public ABMoreFairSync() {
A = new Part();
B = new Part();
A.other = B;
B.other = A;
}
private class Part {
private Condition canGo = lock.newCondition();
private int currentGeneration, lastGeneration;
private int holders;
private Part other;
public void lock() throws InterruptedException {
lock.lockInterruptibly();
try {
int myGeneration = lastGeneration;
if (other.holders > 0 || currentGeneration < myGeneration) {
if (other.currentGeneration == other.lastGeneration) {
other.lastGeneration++;
}
while (other.holders > 0 || currentGeneration < myGeneration) {
canGo.await();
}
}
holders++;
} finally {
lock.unlock();
}
}
public void unlock() throws InterruptedException {
lock.lockInterruptibly();
try {
holders = Math.max(0, holders - 1);
if (holders == 0) {
currentGeneration++;
other.canGo.signalAll();
}
} finally {
lock.unlock();
}
}
}
}
To be used as in:
sync.A.lock();
try {
...
} finally {
sync.A.unlock();
}
The idea of generations here is taken from "Java Concurrency in Practice", Listing 14.9.
I have a multithreaded program where one thread is reading data and multiple others are doing work on that data. If I have one writing thread continuously adding data (Example.add()) and the other reader threads sequentially reading that data (Example.getData(1), Example.getData(2), ...), what is the best way to block the readers until data at the index they are requesting is available?
This problem is kind of like producer-consumer, but I don't want to "consume" the data.
public class Example {
private ArrayList<Integer> data;
public Example() {
data = new ArrayList<Integer>();
}
public int getData(int i) {
// I want to block here until the element
// index i is available.
return data.get(i);
}
public void add(int n) {
data.add(n);
}
}
This seems to be a reasonable way to synchronize threads:
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantLock.html
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Condition.html
The Condition link shows an example of this:
class BoundedBuffer {
final Lock lock = new ReentrantLock();
final Condition notFull = lock.newCondition();
final Condition notEmpty = lock.newCondition();
final Object[] items = new Object[100];
int putptr, takeptr, count;
public void put(Object x) throws InterruptedException {
lock.lock();
try {
while (count == items.length)
notFull.await();
items[putptr] = x;
if (++putptr == items.length) putptr = 0;
++count;
notEmpty.signal();
} finally {
lock.unlock();
}
}
public Object take() throws InterruptedException {
lock.lock();
try {
while (count == 0)
notEmpty.await();
Object x = items[takeptr];
if (++takeptr == items.length) takeptr = 0;
--count;
notFull.signal();
return x;
} finally {
lock.unlock();
}
}
}
Please don't judge me on the code style; this is a straight copy of the example in the Condition documentation.
In your case you might consider using a single lock which all threads wait on, and that signals when new elements are added. This would cause all threads to wake up and test whether their element is there yet; if not, they go back to waiting for the next signal.
If you want them to specifically wait for one particular element you could keep a signal per element, but that seems overkill.
Something like:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class Example {
    private final Lock lock = new ReentrantLock();
    private final Condition update = lock.newCondition();
    private final List<Integer> data = new ArrayList<>();
    public int getData(int i) throws InterruptedException {
        lock.lock();
        try {
            // wait until the element at index i has been added
            while (i >= data.size()) {
                update.await();
            }
            return data.get(i);
        } finally {
            lock.unlock();
        }
    }
    public void add(int n) {
        lock.lock();
        try {
            data.add(n);
            // the lock must be held to signal; wake all readers so each re-checks its index
            update.signalAll();
        } finally {
            lock.unlock();
        }
    }
}
You can use a blocking queue in Java. When the queue is empty, taking from it blocks until data is available. You can find more information about it here: https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html
Look up some examples of Java blocking queues online and you can solve your issue.
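For illustration, a minimal sketch of that approach with an ArrayBlockingQueue (note that take() removes the element, which is the caveat the question raises):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
class BlockingQueueExample {
    private final BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(100);
    void add(int n) throws InterruptedException {
        queue.put(n);        // blocks while the queue is full
    }
    int next() throws InterruptedException {
        return queue.take(); // blocks while the queue is empty, then removes the head
    }
}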
Is there a way of knowing what is the MAXIMUM number of permits that a semaphore object has ever had in its lifetime?
We initialize it like this:
Semaphore sem = new Semaphore(n);
and at times we acquire, and at times we release what we acquired. But there are certain situations when we need to release more than we acquired in order to increase the number of permits. Is there a way to know the MAXIMUM number of permits that ever was in this semaphore?
The constructor is defined as public Semaphore(int permits). The maximum of an int is 2^31 - 1 = 2147483647, so this is your answer.
Semaphore itself does not keep track of a maximum over its lifetime. Implementing a Semaphore wrapper around it that keeps track of the maximum can be tricky. Here's a quick draft of such an implementation:
public final class MySemaphore {
private final Semaphore semaphore;
private final AtomicReference<MaxCounter> maxCounter = new AtomicReference<>();
public MySemaphore(int initialAvailable) {
this.semaphore = new Semaphore(initialAvailable);
maxCounter.set(new MaxCounter(initialAvailable, initialAvailable));
}
private static final class MaxCounter {
private final int value;
private final int max;
public MaxCounter(int value, int max) {
this.value = value;
this.max = max;
}
public MaxCounter increment() {
return new MaxCounter(value + 1, Math.max(value + 1, max));
}
public MaxCounter decrement() {
return new MaxCounter(value - 1, max);
}
public int getValue() {
return value;
}
public int getMax() {
return max;
}
}
public void acquire() throws InterruptedException {
semaphore.acquire();
for (;;) {
MaxCounter current = maxCounter.get();
if (maxCounter.compareAndSet(current, current.decrement())) {
return;
}
}
}
public void release() {
for (;;) {
MaxCounter current = maxCounter.get();
if (maxCounter.compareAndSet(current, current.increment())) {
break;
}
}
semaphore.release();
}
public int availablePermits() {
return maxCounter.get().getValue();
}
public int getMaximumEverAvailable() {
return maxCounter.get().getMax();
}
}
The MaxCounter may not be exactly in sync with the internally used semaphore. The internal semaphore may get a release/acquire which is handled from an external perspective as acquire/release. To every client of MySemaphore, though, the behavior will be consistent, i.e. availablePermits() will never return a value that is higher than getMaximumEverAvailable().
Disclaimer: code not tested.
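For example, a quick (untested) usage sketch of the wrapper above; releasing more than was acquired raises the maximum, which is the situation described in the question:
public class MySemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        MySemaphore sem = new MySemaphore(2);              // maximum so far: 2
        sem.acquire();                                     // available: 1
        sem.release();                                     // available: 2
        sem.release();                                     // extra release, available: 3
        System.out.println(sem.availablePermits());        // 3
        System.out.println(sem.getMaximumEverAvailable()); // 3
    }
}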
I have a need for a single-permit semaphore object in my Java program where there is an additional acquire method which looks like this:
boolean tryAcquire(int id)
and behaves as follows: if the id has not been encountered before, then remember it and then just do whatever java.util.concurrent.Semaphore does. If the id has been encountered before and that encounter resulted in the lease of the permit then give this thread priority over all other threads who may be waiting for the permit. I'll also want an extra release method like:
void release(int id)
which does whatever the java.util.concurrent.Semaphore does, plus also "forgets" about the id.
I don't really know how to approach this, but here's the start of a possible implementation, though I fear it's going nowhere:
public final class SemaphoreWithMemory {
private final Semaphore semaphore = new Semaphore(1, true);
private final Set<Integer> favoured = new ConcurrentSkipListSet<Integer>();
public boolean tryAcquire() {
return semaphore.tryAcquire();
}
public synchronized boolean tryAcquire(int id) {
if (!favoured.contains(id)) {
boolean gotIt = tryAcquire();
if (gotIt) {
favoured.add(id);
return true;
}
else {
return false;
}
}
else {
// what do I do here???
}
}
public void release() {
semaphore.release();
}
public synchronized void release(int id) {
favoured.remove(id);
semaphore.release();
}
}
EDIT:
Did some experiment. Please see this answer for results.
In principle, Semaphore has a queue of threads internally, so, like Andrew says, if you make this queue a priority queue and poll from it to give out permits, it probably behaves the way you want. Note that you can't do this with tryAcquire, because that way threads don't queue up. From what I see, it looks like you'd have to hack the AbstractQueuedSynchronizer class to do this.
I could also think of a probabilistic approach, like this:
(I'm not saying that the code below would be a good idea! Just brainstorming here. )
public class SemaphoreWithMemory {
private final Semaphore semaphore = new Semaphore(1);
private final Set<Integer> favoured = new ConcurrentSkipListSet<Integer>();
private final ThreadLocal<Random> rng = //some good rng
public boolean tryAcquire() {
for(int i=0; i<8; i++){
Thread.yield();
// Tend to waste more time than tryAcquire(int id)
// would waste.
if(rng.get().nextDouble() < 0.3){
return semaphore.tryAcquire();
}
}
return semaphore.tryAcquire();
}
public boolean tryAcquire(int id) {
if (!favoured.contains(id)) {
boolean gotIt = semaphore.tryAcquire();
if (gotIt) {
favoured.add(id);
return true;
} else {
return false;
}
} else {
return tryAcquire();
}
}
}
Or have the "favoured" threads hang out a little bit longer like this:
EDIT: Turns out this was a very bad idea (with both fair and non-fair semaphores); see my experiment for details.
public boolean tryAcquire(int id) throws InterruptedException {
if (!favoured.contains(id)) {
boolean gotIt = semaphore.tryAcquire(5,TimeUnit.MILLISECONDS);
if (gotIt) {
favoured.add(id);
return true;
} else {
return false;
}
} else {
return tryAcquire();
}
}
I guess this way you can bias the way permits are issued, while it won't be fair. Though with this code you'd probably be wasting a lot of time performance wise...
For a blocking acquisition model, what about this:
public class SemWithPreferred {
int max;
int avail;
int preferredThreads;
public SemWithPreferred(int max, int avail) {
this.max = max;
this.avail = avail;
}
synchronized public void get(int id) throws InterruptedException {
boolean thisThreadIsPreferred = idHasBeenServedSuccessfullyBefore(id);
if (thisThreadIsPreferred) {
preferredThreads++;
}
while (! (avail > 0 && (preferredThreads == 0 || thisThreadIsPreferred))) {
wait();
}
System.out.println(String.format("granted, id = %d, preferredThreads = %d", id, preferredThreads));
avail -= 1;
if (thisThreadIsPreferred) {
preferredThreads--;
notifyAll(); // removal of preferred thread could affect other threads' wait predicate
}
}
synchronized public void put() {
if (avail < max) {
avail += 1;
notifyAll();
}
}
boolean idHasBeenServedSuccessfullyBefore(int id) {
// stubbed out, this just treats any id that is a
// multiple of 5 as having been served successfully before
return id % 5 == 0;
}
}
Assuming that you want the threads to wait, I hacked a solution that is not perfect, but should do.
The idea is to have two semaphores and a "favourite is waiting" flag.
Every thread that tries to acquire the SemaphoreWithMemory first tries to acquire the "favouredSemaphore". A "favoured" thread keeps this semaphore and a non-favoured one releases it immediately. Thereby the favoured thread blocks all other incoming threads once it has acquired this semaphore.
Then the second "normalSemaphore" has to be acquired to finish up.
But the non-favoured thread then checks again that there is no favoured thread waiting (using a volatile variable). If none is waiting then it simply continues; if one is waiting, it releases the normalSemaphore and recursively calls acquire again.
I am not really sure that there are no race conditions lurking. If you want to be sure, you should perhaps refactor your code to hand off "work items" to a priority queue, where another thread takes the work item with the highest priority and executes that code (a rough sketch of that idea follows after the code below).
public final class SemaphoreWithMemory {
private volatile boolean favouredAquired = false;
private final Semaphore favouredSemaphore = new Semaphore(1, true);
private final Semaphore normalSemaphore = new Semaphore(1, true);
private final Set<Integer> favoured = new ConcurrentSkipListSet<Integer>();
public void acquire() throws InterruptedException {
normalSemaphore.acquire();
}
public void acquire(int id) throws InterruptedException {
boolean idIsFavoured = favoured.contains(id);
favouredSemaphore.acquire();
if (!idIsFavoured) {
favouredSemaphore.release();
} else {
favouredAquired = true;
}
normalSemaphore.acquire();
// check again that there is no favoured thread waiting
if (!idIsFavoured) {
if (favouredAquired) {
normalSemaphore.release();
acquire(); // risk of starvation!
} else {
favoured.add(id);
}
}
}
public void release() {
normalSemaphore.release();
if (favouredAquired) {
favouredAquired = false;
favouredSemaphore.release();
}
}
public void release(int id) {
favoured.remove(id);
release();
}
}
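As an aside, the priority-queue hand-off mentioned above could look roughly like this (a hypothetical sketch with invented names): a single worker drains a PriorityBlockingQueue, so higher-priority (favoured) work items simply jump ahead of ordinary ones.
import java.util.concurrent.PriorityBlockingQueue;
class PrioritizedExecutor {
    static final class WorkItem implements Comparable<WorkItem> {
        final int priority;          // lower value = served earlier; favoured ids would submit with 0
        final Runnable task;
        WorkItem(int priority, Runnable task) { this.priority = priority; this.task = task; }
        public int compareTo(WorkItem o) { return Integer.compare(priority, o.priority); }
    }
    private final PriorityBlockingQueue<WorkItem> queue = new PriorityBlockingQueue<>();
    void submit(int priority, Runnable task) {
        queue.put(new WorkItem(priority, task));
    }
    void workerLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            queue.take().task.run();   // the single worker executes items in priority order
        }
    }
}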
I read this article by Ceki and was interested in how biased semaphore acquisition could be (since I felt the "biased locking" behavior would make sense in semaphores as well). On my hardware with 2 processors and a Sun JVM 1.6, it actually results in pretty uniform leasing.
Anyways, I also tried to "bias" the leasing of semaphore with the strategy I wrote in my other answer. Turns out a simple extra yield statement alone results in significant bias. Your problem is more complicated, but perhaps you can do similar tests with your idea and see what you get :)
NOTE The code below is based upon Ceki's code here
Code:
import java.util.concurrent.*;
public class BiasedSemaphore implements Runnable {
static ThreadLocal<Boolean> favored = new ThreadLocal<Boolean>(){
private boolean gaveOut = false;
public synchronized Boolean initialValue(){
if(!gaveOut){
System.out.println("Favored " + Thread.currentThread().getName());
gaveOut = true;
return true;
}
return false;
}
};
static int THREAD_COUNT = Runtime.getRuntime().availableProcessors();
static Semaphore SEM = new Semaphore(1);
static Runnable[] RUNNABLE_ARRAY = new Runnable[THREAD_COUNT];
static Thread[] THREAD_ARRAY = new Thread[THREAD_COUNT];
private int counter = 0;
public static void main(String args[]) throws InterruptedException {
printEnvironmentInfo();
execute();
printResults();
}
public static void printEnvironmentInfo() {
System.out.println("java.runtime.version = "
+ System.getProperty("java.runtime.version"));
System.out.println("java.vendor = "
+ System.getProperty("java.vendor"));
System.out.println("java.version = "
+ System.getProperty("java.version"));
System.out.println("os.name = "
+ System.getProperty("os.name"));
System.out.println("os.version = "
+ System.getProperty("os.version"));
}
public static void execute() throws InterruptedException {
for (int i = 0; i < THREAD_COUNT; i++) {
RUNNABLE_ARRAY[i] = new BiasedSemaphore();
THREAD_ARRAY[i] = new Thread(RUNNABLE_ARRAY[i]);
System.out.println("Runnable at "+i + " operated with "+THREAD_ARRAY[i]);
}
for (Thread t : THREAD_ARRAY) {
t.start();
}
// let the threads run for a while
Thread.sleep(10000);
for (int i = 0; i< THREAD_COUNT; i++) {
THREAD_ARRAY[i].interrupt();
}
for (Thread t : THREAD_ARRAY) {
t.join();
}
}
public static void printResults() {
System.out.println("Ran with " + THREAD_COUNT + " threads");
for (int i = 0; i < RUNNABLE_ARRAY.length; i++) {
System.out.println("runnable[" + i + "]: " + RUNNABLE_ARRAY[i]);
}
}
public void run() {
while (!Thread.currentThread().isInterrupted()) {
if (favored.get()) {
stuff();
} else {
Thread.yield();
// try {
// Thread.sleep(1);
// } catch (InterruptedException e) {
// Thread.currentThread().interrupt();
// }
stuff();
}
}
}
private void stuff() {
if (SEM.tryAcquire()) {
//favored.set(true);
counter++;
try {
Thread.sleep(10);
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
SEM.release();
} else {
//favored.set(false);
}
}
public String toString() {
return "counter=" + counter;
}
}
Results:
java.runtime.version = 1.6.0_21-b07
java.vendor = Sun Microsystems Inc.
java.version = 1.6.0_21
os.name = Windows Vista
os.version = 6.0
Runnable at 0 operated with Thread[Thread-0,5,main]
Runnable at 1 operated with Thread[Thread-1,5,main]
Favored Thread-0
Ran with 2 threads
runnable[0]: counter=503
runnable[1]: counter=425
Tried with 30 seconds instead of 10:
java.runtime.version = 1.6.0_21-b07
java.vendor = Sun Microsystems Inc.
java.version = 1.6.0_21
os.name = Windows Vista
os.version = 6.0
Runnable at 0 operated with Thread[Thread-0,5,main]
Runnable at 1 operated with Thread[Thread-1,5,main]
Favored Thread-1
Ran with 2 threads
runnable[0]: counter=1274
runnable[1]: counter=1496
P.S.: Looks like "hanging out" was a very bad idea. When I tried calling SEM.tryAcquire(1,TimeUnit.MILLISECONDS); for favored threads and SEM.tryAcquire() for non-favored threads, non-favored threads got the permit almost 5 times more than the favored thread!
Also, I'd like to add that these results are only measured under 1 particular situation, so it's not clear how these measures behave in other situations.
It strikes me that the simplest way to do this is not to try and combine Semaphores, but to build it from scratch on top of monitors. This is generally risky, but in this case, as there are no good building blocks in java.util.concurrent, it's the clearest way to do it.
Here's what I came up with:
public class SemaphoreWithMemory {
private final Set<Integer> favouredIDs = new HashSet<Integer>();
private final Object favouredLock = new Object();
private final Object ordinaryLock = new Object();
private boolean available = true;
private int favouredWaiting = 0;
/**
Acquires the permit. Blocks until the permit is acquired.
*/
public void acquire(int id) throws InterruptedException {
Object lock;
boolean favoured = false;
synchronized (this) {
// fast exit for uncontended lock
if (available) {
doAcquire(favoured, id);
return;
}
favoured = favouredIDs.contains(id);
if (favoured) {
lock = favouredLock;
++favouredWaiting;
}
else {
lock = ordinaryLock;
}
}
while (true) {
synchronized (this) {
if (available) {
doAcquire(favoured, id);
return;
}
}
synchronized (lock) {
lock.wait();
}
}
}
private void doAcquire(boolean favoured, int id) {
available = false;
if (favoured) --favouredWaiting;
else favouredIDs.add(id);
}
/**
Releases the permit.
*/
public synchronized void release() {
available = true;
Object lock = (favouredWaiting > 0) ? favouredLock : ordinaryLock;
synchronized (lock) {
lock.notify();
}
}
}