How to implement a semaphore - Java

So I've created a program to demonstrate the dangers of using shared variables. I have three classes: the main class, called DangersOfSharedVariables, plus an Incrementer and a Decrementer class.
The idea is to have two threads running at once, each calling its respective method: the Decrementer class calls the decrementShared() method in the main class, and the Incrementer class calls the incrementShared() method in the main class.
Here's the main method:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package dangersofsharedvariables;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
*
*/
public class DangersOfSharedVariables {
/**
* @param args the command line arguments
*/
private static int sharedValue =0;
private static int numberOfCycles = 2000000000;
public static void main(String[] args) {
// TODO code application logic here
Incrementer in = new Incrementer(numberOfCycles);
Decrementer de = new Decrementer(numberOfCycles);
Semaphore sem = new Semaphore(1);
in.start();
try {
in.join();
} catch (InterruptedException ex) {}
de.start();
try {
de.join();
} catch (InterruptedException ex) {}
System.out.println(sharedValue);
}
public void decrementShared(){
sharedValue -=10;
}
public void incrementShared(){
sharedValue +=10;
}
}
Here's the Incrementer class
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package dangersofsharedvariables;
/**
*
*
*/
public class Incrementer extends Thread {
private int numberOfIncrements;
public Incrementer(int numberOfIncrements) {
this.numberOfIncrements = numberOfIncrements;
}
@Override
public void run() {
DangersOfSharedVariables in = new DangersOfSharedVariables();
for(int i = 0; i < numberOfIncrements; i++){
in.incrementShared();
}
}
}
Decrementer Class:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package dangersofsharedvariables;
/**
*
*
*/
public class Decrementer extends Thread {
private int numberOfDecrements;
public Decrementer(int numberOfDecrements){
this.numberOfDecrements = numberOfDecrements;
}
@Override
public void run(){
DangersOfSharedVariables d = new DangersOfSharedVariables();
for(int i = 0; i < numberOfDecrements; i++){
d.decrementShared();
}
}
}
I was googling, and a safer way to do this would be with the use of a Semaphore class. So I took it upon myself to play around with a semaphore template I found, but I am unsure as to how I'd implement it.
Semaphore Class:
package dangersofsharedvariables;
public class Semaphore {
// *************************************************************
// Class properties.
// Allow for both counting or mutex semaphores.
private int count;
// *************************************************************
// Constructor
public Semaphore(int n) {
count = n;
}
// *************************************************************
// Public class methods.
// Only the standard up and down operators are allowed.
public synchronized void down() {
while (count == 0) {
try {
wait(); // Blocking call.
} catch (InterruptedException exception) {
exception.printStackTrace();
}
}
count--;
}
public synchronized void up() {
count++;
notify();
}
}

Based on your query, here is a brief description of the semaphore data structure. Semaphores are useful for solving a variety of synchronisation problems. The concept was introduced by Dijkstra (1968), who proposed semaphores as part of the operating system in order to synchronise processes with each other and with the hardware.
The structure of a typical semaphore involves 4 stages:
Non-critical region
Entry protocol
Critical region
Exit protocol
The non-critical region is any code which can be carried out concurrently by two or more threads.
The entry protocol is the code which must be executed by a process prior to entering a critical region. It is designed to prevent the process from entering the critical region if another process is already using shared resources.
The critical region is the section of code in which a shared resource is being accessed.
The exit protocol is the code that the process must execute immediately on completion of its critical region.
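As a rough illustration of how these stages could map onto the code in the question, here is a sketch (not a definitive implementation); it assumes the Semaphore class with down()/up() posted above, initialised to 1, and that the same instance is shared by the Incrementer and Decrementer threads:
// Hypothetical sketch: one Semaphore (from the question) shared by both threads.
private static final Semaphore sem = new Semaphore(1);

public void incrementShared() {
    // non-critical region: anything that does not touch sharedValue
    sem.down();              // entry protocol: blocks while another thread is in the critical region
    try {
        sharedValue += 10;   // critical region: the shared resource is accessed
    } finally {
        sem.up();            // exit protocol: release the semaphore and wake a waiting thread
    }
}

public void decrementShared() {
    sem.down();
    try {
        sharedValue -= 10;
    } finally {
        sem.up();
    }
}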
Semaphores can be put to different uses:
for mutual exclusive access to a single shared resource, in which case the semaphore is called a binary semaphore
to protect access to multiple instances of a resource (a counting semaphore)
to synchronise two processes (a blocking semaphore)
The versatility of the semaphore mechanism is achieved through correct initialisation.
For demonstration purposes, please consult the example below, which showcases a very simple binary semaphore implementation:
Semaphore:
package BinarySemaphore;
public class Semaphore{
private static Semaphore semaphore;
private static int resource = 1;
private Semaphore(){}
public synchronized void increment(){
while(isAvailable()){
try {
this.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
resource += 1;
report();
this.notifyAll();
}
public synchronized void decrement(){
while(!isAvailable()){
try {
this.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
resource -= 1;
report();
this.notifyAll();
}
public synchronized final static boolean isAvailable(){
return resource == 1;
}
public synchronized final static void report(){
System.out.println("Resource value: " + resource);
}
public final static Semaphore getInstance(){
if(semaphore == null){
semaphore = new Semaphore();
}
return semaphore;
}
}
Incrementer:
package semaphore;
import BinarySemaphore.Semaphore;
public class Incrementer implements Runnable{
private static Semaphore semaphore = null;
public Incrementer(Semaphore s){
semaphore = s;
}
@Override
public void run() {
for(int i = 0; i < 10; i++){
System.out.println("Incrementing...");
semaphore.increment();
}
}
}
Decrementer:
package semaphore;
import BinarySemaphore.Semaphore;
public class Decrementer implements Runnable{
private static Semaphore semaphore = null;
public Decrementer(Semaphore s) {
semaphore = s;
}
@Override
public void run() {
for(int i = 0; i < 10; i++){
System.out.println("Decrementing...");
semaphore.decrement();
}
}
}
Main:
package semaphore;
import BinarySemaphore.Semaphore;
public class Main {
public static void main(String[] args){
Thread iIncrement = new Thread(new Incrementer(Semaphore.getInstance()));
Thread iDecrement = new Thread(new Decrementer(Semaphore.getInstance()));
iIncrement.start();
iDecrement.start();
}
}
Output:
Decrementing...
Incrementing...
Resource value: 0
Decrementing...
Resource value: 1
Incrementing...
Resource value: 0
Decrementing...
Resource value: 1
Incrementing...
Resource value: 0
Decrementing...
Resource value: 1
Incrementing...
Resource value: 0
Decrementing...
Resource value: 1
Incrementing...
Resource value: 0
Decrementing...
Resource value: 1
Incrementing...
Resource value: 0
Decrementing...
Resource value: 1
Incrementing...
Resource value: 0
Decrementing...
Resource value: 1
Incrementing...
Resource value: 0
Decrementing...
Resource value: 1
Incrementing...
Resource value: 0
Decrementing...
Resource value: 1
Incrementing...
Resource value: 0
Resource value: 1

The name for what you want is a "mutex", which is short for "mutual exclusion". A mutex guards a block of code so that it can only be executed by one thread at a time.
The Java language statement synchronized (foo) { ... } implements mutual exclusion. foo is an expression that yields some object (sometimes called the lock object), and ... are the statements to be protected. The Java language guarantees that no two threads will be allowed to synchronize on the same lock object at the same time.
Semaphore can be used to provide mutual exclusion, but it is more cumbersome, and it's antiquated.
Semaphore was invented before computers had hardware primitives for thread synchronization. It was supposed to be the "primitive" operation upon which other synchronization constructs (e.g., mutexes) could be built.
Today, the Java implementation of Semaphore actually is built on top of the same hardware primitives as the synchronized statement.
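Applied to the program in the question, a minimal sketch (not the exact posted code) could guard both methods with one shared lock object:
public class DangersOfSharedVariables {
    private static int sharedValue = 0;
    // a single shared lock object: every access to sharedValue synchronizes on it
    private static final Object LOCK = new Object();

    public void incrementShared() {
        synchronized (LOCK) {
            sharedValue += 10;
        }
    }

    public void decrementShared() {
        synchronized (LOCK) {
            sharedValue -= 10;
        }
    }
}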

Related

Print alternately with three threads [duplicate]

I am trying to print numbers from 1 to 10 using three threads. Thread 1 prints 1, thread 2 prints 2, thread 3 prints 3, then 4 is printed by thread 1 again, and so on.
I have created a shared printer resource that helps those threads print the number. But I am getting confused as to how I can make the number visible to all threads.
The problem is that each thread is seeing its own copy of the number, while I need the same number to be shared by all threads.
I am trying to create this example for learning purposes. I have seen other pages on SO that had the same kind of problem, but I am not able to get the concept.
Any help is appreciated.
How is this example different from what I am doing?
Printing Even and Odd using two Threads in Java
public class PrintAlternateNumber {
public static void main(String args[]) {
SharedPrinter printer = new SharedPrinter();
Thread t1 = new Thread(new myRunnable2(printer,10,1),"1");
Thread t2 = new Thread(new myRunnable2(printer,10,2),"2");
Thread t3 = new Thread(new myRunnable2(printer,10,3),"3");
t1.start();
t2.start();
t3.start();
}
}
class myRunnable2 implements Runnable {
int max;
SharedPrinter printer;
int threadNumber;
int number=1;
myRunnable2(SharedPrinter printer,int max,int threadNumber) {
this.max=max;
this.printer=printer;
this.threadNumber=threadNumber;
}
@Override
public void run() {
System.out.println(" The thread that just entered run "+ Thread.currentThread().getName());
for(int i =1;i<max;i++){
try {
printer.print(i,threadNumber);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
class SharedPrinter {
boolean canPrintFlag=false;
public synchronized void print(int number,int threadNumber) throws InterruptedException{
if(number%3==threadNumber) {
canPrintFlag=true;
}
while(!canPrintFlag)
{
System.out.println(Thread.currentThread().getName() + " is waiting as it cannot print " + number);
wait();
}
System.out.println(Thread.currentThread().getName()+" printed "+number);
canPrintFlag=false;
notifyAll();
}
}
//output
//The thread that just entered run 2
// The thread that just entered run 3
//The thread that just entered run 1
//3 is waiting as it cannot print 1
//1 printed 1
//1 is waiting as it cannot print 2
//3 is waiting as it cannot print 1
//2 is waiting as it cannot print 1
Second technique:
It is still incomplete, but I am close.
Output:
0printed by0
2printed by2
1printed by1
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
class AlternateNumber {
public static void main(String args[]) {
printerHell ph = new printerHell();
BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(10);
for(int i=0;i<10;i++)
{
queue.add(i);
}
Thread t1 = new Thread(new myRunnableHell(queue,0,ph),"0");
Thread t2 = new Thread(new myRunnableHell(queue,1,ph),"1");
Thread t3 = new Thread(new myRunnableHell(queue,2,ph),"2");
t1.start();
t2.start();
t3.start();
}
}
class myRunnableHell implements Runnable {
BlockingQueue<Integer> queue;
int threadNumber;
printerHell ph;
myRunnableHell(BlockingQueue<Integer> queue, int threadNumber,printerHell ph) {
this.queue=queue;
this.threadNumber=threadNumber;
this.ph=ph;
};
int currentNumber;
@Override
public void run() {
for(int i=0;i<queue.size();i++)
{
currentNumber=queue.remove();
if(threadNumber%3==currentNumber)
{
ph.print(currentNumber);
}
}
}
}
class printerHell {
public synchronized void print(int Number)
{
System.out.println(Number + "printed by" + Thread.currentThread().getName());
}
}
Please see my solutions here:
Using simple wait/notify
https://stackoverflow.com/a/31668619/1044396
Using cyclic barriers:
https://stackoverflow.com/a/23752952/1044396
For your query on how this is different from the even/odd thread problem:
It is almost the same; instead of maintaining two states, have one more state for the third thread. I believe this can be extended to any number of threads.
EDIT:
You may view this approach when you want 'n' threads to do the work sequentially (instead of having different classes t1, t2, t3, etc.):
https://codereview.stackexchange.com/a/98305/78940
EDIT2:
Copying the code here again for the above solution
I tried to solve it using a single class, 'Thrd', which gets initialized with its starting number.
The ThreadConfig class holds the total number of threads you want to create.
The State class maintains the state of the previous thread (to maintain ordering).
Here you go (please review and let me know your views).
EDIT:
How it works:
When a thread Tx gets a chance to execute, it sets the state variable's state to x. So the next thread (Tx+1) that is waiting will get a chance once the state gets updated. This way you can maintain the ordering of threads.
I hope I have been able to explain the code. Please run it and see, or let me know if you have any specific queries on the code below.
1)
package com.kalyan.concurrency;
public class ThreadConfig {
public static final int size = 5;
}
2) package com.kalyan.concurrency;
public class State {
private volatile int state ;
public State() {
this.state =3;
}
public State(int state) {
this.state = state;
}
public int getState() {
return state;
}
public void setState(int state) {
this.state = state;
}
}
3) package com.kalyan.concurrency;
public class Thrd implements Runnable {
int i ;
int name;
int prevThread;
State s;
public Thrd(int i,State s) {
this.i=i;
this.name=i;
this.prevThread=i-1;
if(prevThread == 0) prevThread=ThreadConfig.size;
this.s=s;
}
@Override
public void run() {
while(i<50)
{
synchronized(s)
{
while(s.getState() != prevThread)
{
try {
s.wait();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
synchronized(s)
{
//if(s.getState() ==3)
if(s.getState()==prevThread)
System.out.println("t"+ name+ i);
s.setState(name);
i = i +ThreadConfig.size ;
s.notifyAll();
}
}
}
}
4)
package com.kalyan.concurrency;
public class T1t2t3 {
public static void main(String[] args) {
State s = new State(ThreadConfig.size);
for(int i=1;i<=ThreadConfig.size;i++)
{
Thread T = new Thread(new Thrd(i,s));
T.start();
}
}
}
OUTPUT:
t11
t22
t33
t44
t55
t16
t27
t38
t49
t510
t111
t212
t313
t414
t515
t116..............
I hope I understood you right, but there are two main "features" in Java to make a variable shared between threads:
the volatile keyword
volatile int number = 1;
AtomicInteger (a standard java class -> no library)
AtomicInteger number = new AtomicInteger(1);
These two techniques should both do what you want; however, I have no experience using them. I just came upon this topic, didn't know what it meant, and did some digging.
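As a minimal sketch of the AtomicInteger approach for a counter shared between threads (the class and field names here are only illustrative):
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCounterDemo {
    // one AtomicInteger instance shared by all threads; no synchronized needed for the increments
    private static final AtomicInteger number = new AtomicInteger(1);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                number.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(number.get()); // always prints 2001
    }
}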
Some stuff to read: ;)
volatile for java explained --> http://java.dzone.com/articles/java-volatile-keyword-0
a better explanation (with IMAGES!!) but for c# (which is still the same usage) --> http://igoro.com/archive/volatile-keyword-in-c-memory-model-explained/
And a link to some usages of AtomicInteger --> https://stackoverflow.com/a/4818753/4986655
I hope I could help you or at least send you in the right direction :)
- superfuzzy

Java Concurrency: Paired locks with shared access

I am looking for a Java implementation of the following concurrency semantics. I want something similar to ReadWriteLock except symmetrical, i.e. both the read and write sides can be shared amongst many threads, but read excludes write and vice versa.
There are two locks, let's call them A and B.
Lock A is shared, i.e. there may be multiple threads holding it concurrently. Lock B is also shared, there may be multiple threads holding it concurrently.
If any thread holds lock A then no thread may take B – threads attempting to take B shall block until all threads holding A have released A.
If any thread holds lock B then no thread may take A – threads attempting to take A shall block until all threads holding B have released B.
Is there an existing library class that achieves this? At the moment I have approximated the desired functionality with a ReadWriteLock because fortunately the tasks done in the context of lock B are somewhat rarer. It feels like a hack though, and it could affect the performance of my program under heavy load.
Short answer:
In the standard library, there is nothing like what you need.
Long answer:
To easily implement a custom Lock you should subclass or delegate to an AbstractQueuedSynchronizer.
The following code is an example of a non-fair lock that implements what you need, including some (non-exhaustive) tests. I called it LeftRightLock because of the binary nature of your requirements.
The concept is pretty straightforward:
AbstractQueuedSynchronizer exposes a method to atomically set an int state using the compare-and-swap idiom (compareAndSetState(int expect, int update)). We can use the exposed state to keep a count of the threads holding the lock, setting it to a positive value when the Right lock is being held and a negative value when the Left lock is being held.
Then we just make sure of the following conditions:
- you can lock Left only if the state of the internal AbstractQueuedSynchronizer is zero or negative
- you can lock Right only if the state of the internal AbstractQueuedSynchronizer is zero or positive
LeftRightLock.java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Lock;
/**
* A binary mutex with the following properties:
*
* Exposes two different {@link Lock}s: LEFT, RIGHT.
*
* When LEFT is held, other threads can acquire LEFT but threads trying to acquire RIGHT will be
* blocked. When RIGHT is held, other threads can acquire RIGHT but threads trying to acquire LEFT
* will be blocked.
*/
public class LeftRightLock {
public static final int ACQUISITION_FAILED = -1;
public static final int ACQUISITION_SUCCEEDED = 1;
private final LeftRightSync sync = new LeftRightSync();
public void lockLeft() {
sync.acquireShared(LockSide.LEFT.getV());
}
public void lockRight() {
sync.acquireShared(LockSide.RIGHT.getV());
}
public void releaseLeft() {
sync.releaseShared(LockSide.LEFT.getV());
}
public void releaseRight() {
sync.releaseShared(LockSide.RIGHT.getV());
}
public boolean tryLockLeft() {
return sync.tryAcquireShared(LockSide.LEFT) == ACQUISITION_SUCCEEDED;
}
public boolean tryLockRight() {
return sync.tryAcquireShared(LockSide.RIGHT) == ACQUISITION_SUCCEEDED;
}
private enum LockSide {
LEFT(-1), NONE(0), RIGHT(1);
private final int v;
LockSide(int v) {
this.v = v;
}
public int getV() {
return v;
}
}
/**
* <p>
* Keeps count of the threads holding either the LEFT or the RIGHT lock.
* </p>
*
* <li>A state ({@link AbstractQueuedSynchronizer#getState()}) greater than 0 means one or more threads are holding RIGHT lock. </li>
* <li>A state ({@link AbstractQueuedSynchronizer#getState()}) lower than 0 means one or more threads are holding LEFT lock.</li>
* <li>A state ({@link AbstractQueuedSynchronizer#getState()}) equal to zero means no thread is holding any lock.</li>
*/
private static final class LeftRightSync extends AbstractQueuedSynchronizer {
@Override
protected int tryAcquireShared(int requiredSide) {
return (tryChangeThreadCountHoldingCurrentLock(requiredSide, ChangeType.ADD) ? ACQUISITION_SUCCEEDED : ACQUISITION_FAILED);
}
@Override
protected boolean tryReleaseShared(int requiredSide) {
return tryChangeThreadCountHoldingCurrentLock(requiredSide, ChangeType.REMOVE);
}
public boolean tryChangeThreadCountHoldingCurrentLock(int requiredSide, ChangeType changeType) {
if (requiredSide != 1 && requiredSide != -1)
throw new AssertionError("You can either lock LEFT or RIGHT (-1 or +1)");
int curState;
int newState;
do {
curState = this.getState();
if (!sameSide(curState, requiredSide)) {
return false;
}
if (changeType == ChangeType.ADD) {
newState = curState + requiredSide;
} else {
newState = curState - requiredSide;
}
//TODO: protect against int overflow (hopefully you won't have so many threads)
} while (!this.compareAndSetState(curState, newState));
return true;
}
final int tryAcquireShared(LockSide lockSide) {
return this.tryAcquireShared(lockSide.getV());
}
final boolean tryReleaseShared(LockSide lockSide) {
return this.tryReleaseShared(lockSide.getV());
}
private boolean sameSide(int curState, int requiredSide) {
return curState == 0 || sameSign(curState, requiredSide);
}
private boolean sameSign(int a, int b) {
return (a >= 0) ^ (b < 0);
}
public enum ChangeType {
ADD, REMOVE
}
}
}
LeftRightLockTest.java
import org.junit.Test;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
public class LeftRightLockTest {
int logLineSequenceNumber = 0;
private LeftRightLock sut = new LeftRightLock();
@Test(timeout = 2000)
public void acquiringLeftLockExcludeAcquiringRightLock() throws Exception {
sut.lockLeft();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockRight());
assertFalse("I shouldn't be able to acquire the RIGHT lock!", task.get());
}
@Test(timeout = 2000)
public void acquiringRightLockExcludeAcquiringLeftLock() throws Exception {
sut.lockRight();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockLeft());
assertFalse("I shouldn't be able to acquire the LEFT lock!", task.get());
}
@Test(timeout = 2000)
public void theLockShouldBeReentrant() throws Exception {
sut.lockLeft();
assertTrue(sut.tryLockLeft());
}
@Test(timeout = 2000)
public void multipleThreadShouldBeAbleToAcquireTheSameLock_Right() throws Exception {
sut.lockRight();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockRight());
assertTrue(task.get());
}
@Test(timeout = 2000)
public void multipleThreadShouldBeAbleToAcquireTheSameLock_left() throws Exception {
sut.lockLeft();
Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> sut.tryLockLeft());
assertTrue(task.get());
}
@Test(timeout = 2000)
public void shouldKeepCountOfAllTheThreadsHoldingTheSide() throws Exception {
CountDownLatch latchA = new CountDownLatch(1);
CountDownLatch latchB = new CountDownLatch(1);
Thread threadA = spawnThreadToAcquireLeftLock(latchA, sut);
Thread threadB = spawnThreadToAcquireLeftLock(latchB, sut);
System.out.println("Both threads have acquired the left lock.");
try {
latchA.countDown();
threadA.join();
boolean acqStatus = sut.tryLockRight();
System.out.println("The right lock was " + (acqStatus ? "" : "not") + " acquired");
assertFalse("There is still a thread holding the left lock. This shouldn't succeed.", acqStatus);
} finally {
latchB.countDown();
threadB.join();
}
}
@Test(timeout = 5000)
public void shouldBlockThreadsTryingToAcquireLeftIfRightIsHeld() throws Exception {
sut.lockLeft();
CountDownLatch taskStartedLatch = new CountDownLatch(1);
final Future<Boolean> task = Executors.newSingleThreadExecutor().submit(() -> {
taskStartedLatch.countDown();
sut.lockRight();
return false;
});
taskStartedLatch.await();
Thread.sleep(100);
assertFalse(task.isDone());
}
@Test
public void shouldBeFreeAfterRelease() throws Exception {
sut.lockLeft();
sut.releaseLeft();
assertTrue(sut.tryLockRight());
}
@Test
public void shouldBeFreeAfterMultipleThreadsReleaseIt() throws Exception {
CountDownLatch latch = new CountDownLatch(1);
final Thread thread1 = spawnThreadToAcquireLeftLock(latch, sut);
final Thread thread2 = spawnThreadToAcquireLeftLock(latch, sut);
latch.countDown();
thread1.join();
thread2.join();
assertTrue(sut.tryLockRight());
}
@Test(timeout = 2000)
public void lockShouldBeReleasedIfNoThreadIsHoldingIt() throws Exception {
CountDownLatch releaseLeftLatch = new CountDownLatch(1);
CountDownLatch rightLockTaskIsRunning = new CountDownLatch(1);
Thread leftLockThread1 = spawnThreadToAcquireLeftLock(releaseLeftLatch, sut);
Thread leftLockThread2 = spawnThreadToAcquireLeftLock(releaseLeftLatch, sut);
Future<Boolean> acquireRightLockTask = Executors.newSingleThreadExecutor().submit(() -> {
if (sut.tryLockRight())
throw new AssertionError("The left lock should be still held, I shouldn't be able to acquire right at this point.");
printSynchronously("Going to be blocked on right lock");
rightLockTaskIsRunning.countDown();
sut.lockRight();
printSynchronously("Lock acquired!");
return true;
});
rightLockTaskIsRunning.await();
releaseLeftLatch.countDown();
leftLockThread1.join();
leftLockThread2.join();
assertTrue(acquireRightLockTask.get());
}
private synchronized void printSynchronously(String str) {
System.out.println(logLineSequenceNumber++ + ")" + str);
System.out.flush();
}
private Thread spawnThreadToAcquireLeftLock(CountDownLatch releaseLockLatch, LeftRightLock lock) throws InterruptedException {
CountDownLatch lockAcquiredLatch = new CountDownLatch(1);
final Thread thread = spawnThreadToAcquireLeftLock(releaseLockLatch, lockAcquiredLatch, lock);
lockAcquiredLatch.await();
return thread;
}
private Thread spawnThreadToAcquireLeftLock(CountDownLatch releaseLockLatch, CountDownLatch lockAcquiredLatch, LeftRightLock lock) {
final Thread thread = new Thread(() -> {
lock.lockLeft();
printSynchronously("Thread " + Thread.currentThread() + " Acquired left lock");
try {
lockAcquiredLatch.countDown();
releaseLockLatch.await();
} catch (InterruptedException ignore) {
} finally {
lock.releaseLeft();
}
printSynchronously("Thread " + Thread.currentThread() + " RELEASED left lock");
});
thread.start();
return thread;
}
}
I don't know of any library that does what you want. Even if such a library existed, it would possess little value, because as soon as your requirements change the library stops doing the magic.
The actual question here is "How do I implement my own lock with a custom specification?"
Java provides a tool for that named AbstractQueuedSynchronizer. It has extensive documentation. Apart from the docs, you might like to look at the CountDownLatch and ReentrantLock sources and use them as examples.
For your particular request, see the code below, but beware that it is 1) not fair and 2) not tested.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
public class MultiReadWriteLock implements ReadWriteLock {
private final Sync sync;
private final Lock readLock;
private final Lock writeLock;
public MultiReadWriteLock() {
this.sync = new Sync();
this.readLock = new MultiLock(Sync.READ, sync);
this.writeLock = new MultiLock(Sync.WRITE, sync);
}
@Override
public Lock readLock() {
return readLock;
}
@Override
public Lock writeLock() {
return writeLock;
}
private static final class Sync extends AbstractQueuedSynchronizer {
private static final int READ = 1;
private static final int WRITE = -1;
@Override
public int tryAcquireShared(int arg) {
int state, result;
do {
state = getState();
if (state >= 0 && arg == READ) {
// new read
result = state + 1;
} else if (state <= 0 && arg == WRITE) {
// new write
result = state - 1;
} else {
// blocked
return -1;
}
} while (!compareAndSetState(state, result));
return 1;
}
@Override
protected boolean tryReleaseShared(int arg) {
int state, result;
do {
state = getState();
if (state == 0) {
return false;
}
if (state > 0 && arg == READ) {
result = state - 1;
} else if (state < 0 && arg == WRITE) {
result = state + 1;
} else {
throw new IllegalMonitorStateException();
}
} while (!compareAndSetState(state, result));
return result == 0;
}
}
private static class MultiLock implements Lock {
private final int parameter;
private final Sync sync;
public MultiLock(int parameter, Sync sync) {
this.parameter = parameter;
this.sync = sync;
}
@Override
public void lock() {
sync.acquireShared(parameter);
}
@Override
public void lockInterruptibly() throws InterruptedException {
sync.acquireSharedInterruptibly(parameter);
}
@Override
public boolean tryLock() {
return sync.tryAcquireShared(parameter) > 0;
}
@Override
public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
return sync.tryAcquireSharedNanos(parameter, unit.toNanos(time));
}
@Override
public void unlock() {
sync.releaseShared(parameter);
}
@Override
public Condition newCondition() {
throw new UnsupportedOperationException(
"Conditions are unsupported as there are no exclusive access"
);
}
}
}
After my nth attempt to make a simple fair implementation, I think I understand why I could not find another library/example of a "mutually exclusive lock-pair": it requires a pretty specific use-case. As the OP mentioned, you can get a long way with ReadWriteLock, and a fair lock-pair is only useful when there are many requests for a lock in quick succession (else you might as well use one normal lock).
The implementation below is more of a "permit dispenser": it is not re-entrant. It can be made re-entrant though (if not, I fear I have failed to make the code simple and readable), but that requires some additional administration for various cases (e.g. one thread locking A twice still needs to unlock A twice, and the unlock-method needs to know when there are no more locks outstanding). An option to throw a deadlock error when one thread locks A and then wants to lock B is probably a good idea.
The main idea is that there is an "active lock" that can only be changed by the lock-method when there are no (requests for) locks at all, and can be changed by the unlock-method when the number of outstanding active locks reaches zero. The rest is basically keeping count of lock-requests and making threads wait until the active lock can be changed. Making threads wait involves working with InterruptedExceptions, and I made a compromise there: I could not find a good solution that works well in all cases (e.g. application shutdown, one thread getting interrupted, etc.).
I only did some basic testing (test class at the end), more validation is needed.
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;
/**
* A pair of mutual exclusive read-locks: many threads can hold a lock for A or B, but never A and B.
* <br>Usage:<pre>
* PairedLock plock = new PairedLock();
* plock.lockA();
* try {
* // do stuff
* } finally {
* plock.unlockA();
* }</pre>
* This lock is not reentrant: a lock is not associated with a thread and a thread asking for the same lock
* might be blocked the second time (potentially causing a deadlock).
* <p>
* When a lock for A is active, a lock for B will wait for all locks on A to be unlocked and vice versa.
* <br>When a lock for A is active, and a lock for B is waiting, subsequent locks for A will wait
* until all (waiting) locks for B are unlocked.
* I.e. locking is fair (in FIFO order).
* <p>
* See also
* stackoverflow-java-concurrency-paired-locks-with-shared-access
*
* @author vanOekel
*
*/
public class PairedLock {
static final int MAX_LOCKS = 2;
static final int CLOSE_PERMITS = 10_000;
/** Use a fair lock to keep internal state instead of the {@code synchronized} keyword. */
final ReentrantLock state = new ReentrantLock(true);
/** Amount of threads that have locks. */
final int[] activeLocks = new int[MAX_LOCKS];
/** Amount of threads waiting to receive a lock. */
final int[] waitingLocks = new int[MAX_LOCKS];
/** Threads block on a semaphore until locks are available. */
final Semaphore[] waiters = new Semaphore[MAX_LOCKS];
int activeLock;
volatile boolean closed;
public PairedLock() {
super();
for (int i = 0; i < MAX_LOCKS; i++) {
// no need for fair semaphore: unlocks are done for all in one go.
waiters[i] = new Semaphore(0);
}
}
public void lockA() throws InterruptedException { lock(0); }
public void lockB() throws InterruptedException { lock(1); }
public void lock(int lockNumber) throws InterruptedException {
if (lockNumber < 0 || lockNumber >= MAX_LOCKS) {
throw new IllegalArgumentException("Lock number must be 0 or less than " + MAX_LOCKS);
} else if (isClosed()) {
throw new IllegalStateException("Lock closed.");
}
boolean wait = false;
state.lock();
try {
if (nextLockIsWaiting()) {
wait = true;
} else if (activeLock == lockNumber) {
activeLocks[activeLock]++;
} else if (activeLock != lockNumber && activeLocks[activeLock] == 0) {
// nothing active and nobody waiting - safe to switch to another active lock
activeLock = lockNumber;
activeLocks[activeLock]++;
} else {
// with only two locks this means this is the first lock that needs an active-lock switch.
// in other words:
// activeLock != lockNumber && activeLocks[activeLock] > 0 && waitingLocks[lockNumber] == 0
wait = true;
}
if (wait) {
waitingLocks[lockNumber]++;
}
} finally {
state.unlock();
}
if (wait) {
waiters[lockNumber].acquireUninterruptibly();
// there is no easy way to bring this lock back into a valid state when waiters do not get a lock.
// so for now, use the closed state to make this lock unusable any further.
if (closed) {
throw new InterruptedException("Lock closed.");
}
}
}
protected boolean nextLockIsWaiting() {
return (waitingLocks[nextLock(activeLock)] > 0);
}
protected int nextLock(int lockNumber) {
return (lockNumber == 0 ? 1 : 0);
}
public void unlockA() { unlock(0); }
public void unlockB() { unlock(1); }
public void unlock(int lockNumber) {
// unlock is called in a finally-block and should never throw an exception.
if (lockNumber < 0 || lockNumber >= MAX_LOCKS) {
System.out.println("Cannot unlock lock number " + lockNumber);
return;
}
state.lock();
try {
if (activeLock != lockNumber) {
System.out.println("ERROR: invalid lock state: no unlocks for inactive lock expected (active: " + activeLock + ", unlock: " + lockNumber + ").");
return;
}
activeLocks[lockNumber]--;
if (activeLocks[activeLock] == 0 && nextLockIsWaiting()) {
activeLock = nextLock(lockNumber);
waiters[activeLock].release(waitingLocks[activeLock]);
activeLocks[activeLock] += waitingLocks[activeLock];
waitingLocks[activeLock] = 0;
} else if (activeLocks[lockNumber] < 0) {
System.out.println("ERROR: to many unlocks for lock number " + lockNumber);
activeLocks[lockNumber] = 0;
}
} finally {
state.unlock();
}
}
public boolean isClosed() { return closed; }
/**
* All threads waiting for a lock will be unblocked and an {@link InterruptedException} will be thrown.
* Subsequent calls to the lock-method will throw an {@link IllegalStateException}.
*/
public synchronized void close() {
if (!closed) {
closed = true;
for (int i = 0; i < MAX_LOCKS; i++) {
waiters[i].release(CLOSE_PERMITS);
}
}
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder(this.getClass().getSimpleName());
sb.append("=").append(this.hashCode());
state.lock();
try {
sb.append(", active=").append(activeLock).append(", switching=").append(nextLockIsWaiting());
sb.append(", lockA=").append(activeLocks[0]).append("/").append(waitingLocks[0]);
sb.append(", lockB=").append(activeLocks[1]).append("/").append(waitingLocks[1]);
} finally {
state.unlock();
}
return sb.toString();
}
}
The test class (YMMV - works fine on my system, but may deadlock on yours due to faster or slower starting and running of threads):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class PairedLockTest {
private static final Logger log = LoggerFactory.getLogger(PairedLockTest.class);
public static final ThreadPoolExecutor tp = (ThreadPoolExecutor) Executors.newCachedThreadPool();
public static void main(String[] args) {
try {
new PairedLockTest().test();
} catch (Exception e) {
e.printStackTrace();
} finally {
tp.shutdownNow();
}
}
PairedLock mlock = new PairedLock();
public void test() throws InterruptedException {
CountDownLatch start = new CountDownLatch(1);
CountDownLatch done = new CountDownLatch(2);
mlock.lockA();
try {
logLock("la1 ");
mlock.lockA();
try {
lockAsync(start, null, done, 1);
await(start);
logLock("la2 ");
} finally {
mlock.unlockA();
}
lockAsync(null, null, done, 0);
} finally {
mlock.unlockA();
}
await(done);
logLock();
}
void lockAsync(CountDownLatch start, CountDownLatch locked, CountDownLatch unlocked, int lockNumber) {
tp.execute(() -> {
countDown(start);
await(start);
//log.info("Locking async " + lockNumber);
try {
mlock.lock(lockNumber);
try {
countDown(locked);
logLock("async " + lockNumber + " ");
} finally {
mlock.unlock(lockNumber);
//log.info("Unlocked async " + lockNumber);
//logLock("async " + lockNumber + " ");
}
countDown(unlocked);
} catch (InterruptedException ie) {
log.warn(ie.toString());
}
});
}
void logLock() {
logLock("");
}
void logLock(String msg) {
log.info(msg + mlock.toString());
}
static void countDown(CountDownLatch l) {
if (l != null) {
l.countDown();
}
}
static void await(CountDownLatch l) {
if (l == null) {
return;
}
try {
l.await();
} catch (InterruptedException e) {
log.error(e.toString(), e.getCause());
}
}
}
How about
class ABSync {
private int aHolders;
private int bHolders;
public synchronized void lockA() throws InterruptedException {
while (bHolders > 0) {
wait();
}
aHolders++;
}
public synchronized void lockB() throws InterruptedException {
while (aHolders > 0) {
wait();
}
bHolders++;
}
public synchronized void unlockA() {
aHolders = Math.max(0, aHolders - 1);
if (aHolders == 0) {
notifyAll();
}
}
public synchronized void unlockB() {
bHolders = Math.max(0, bHolders - 1);
if (bHolders == 0) {
notifyAll();
}
}
}
Update: As for "fairness" (or, rather, non-starvation), the OP's requirements don't mention it. In order to implement the OP's requirements plus some form of fairness/non-starvation, it should be specified explicitly (what do you consider fair, how should it behave when streams of requests for the currently dominant and non-dominant locks come in, etc.). One way to implement it would be:
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
class ABMoreFairSync {
private Lock lock = new ReentrantLock(true);
public final Part A, B;
public ABMoreFairSync() {
A = new Part();
B = new Part();
A.other = B;
B.other = A;
}
private class Part {
private Condition canGo = lock.newCondition();
private int currentGeneration, lastGeneration;
private int holders;
private Part other;
public void lock() throws InterruptedException {
lock.lockInterruptibly();
try {
int myGeneration = lastGeneration;
if (other.holders > 0 || currentGeneration < myGeneration) {
if (other.currentGeneration == other.lastGeneration) {
other.lastGeneration++;
}
while (other.holders > 0 || currentGeneration < myGeneration) {
canGo.await();
}
}
holders++;
} finally {
lock.unlock();
}
}
public void unlock() throws InterruptedException {
lock.lockInterruptibly();
try {
holders = Math.max(0, holders - 1);
if (holders == 0) {
currentGeneration++;
other.canGo.signalAll();
}
} finally {
lock.unlock();
}
}
}
}
To be used as in:
sync.A.lock();
try {
...
} finally {
sync.A.unlock();
}
The idea of generations here is taken from "Java Concurrency in Practice", Listing 14.9.

Not wait in case synchronized section is occupied [duplicate]

This question already has answers here:
How do determine if an object is locked (synchronized) so not to block in Java?
(8 answers)
Closed 6 years ago.
I have a synchronisation block in the syncCmd function:
public Object Sync = new Object();
public void syncCmd(String controlCmd) {
synchronized(Sync) {
...
}
}
I need to add some logic for the case where one thread has occupied Sync and is doing its job. In that case I would like to report "too busy" to the system and not get into the queue. What is the best way to know if somebody has occupied the Sync section? How can I know how many threads are waiting on this section? Everything is in Java 1.4.
Have a look at the Lock interface and its implementation ReentrantLock. It allows you to use the tryLock() method, including a variant that waits for some time if the resource is already locked:
private ReentrantLock lock = new ReentrantLock();
public void syncCmd(String controlCmd) {
if (lock.tryLock()) {
try {
// Use your synchronized resource here
} finally {
lock.unlock();
}
} else {
// Failed to lock
}
}
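The timed variant mentioned above could look like this (a sketch; it assumes the same ReentrantLock field as in the snippet above, an import of java.util.concurrent.TimeUnit, and an arbitrary 100 ms timeout):
public void syncCmd(String controlCmd) {
    try {
        // wait up to 100 ms for the lock instead of giving up immediately
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                // use your synchronized resource here
            } finally {
                lock.unlock();
            }
        } else {
            // still busy after the timeout: report "too busy"
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag and give up
    }
}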
Java 1.4, unfortunately, has no java.util.concurrent package, and I think the best choice you have is to implement the same logic by means of synchronized and a double check:
public class Lock {
private final Object lock = new Object();
private volatile boolean locked = false;
public boolean tryLock() {
if (!locked) {
synchronized (lock) {
if (!locked) {
locked = true;
return true;
}
}
}
return false;
}
public void unlock() {
synchronized (lock) {
locked = false;
}
}
}
It will not work as fast as ReentrantLock, which uses a CAS loop backed by processor instructions in modern JVMs, but it will do the job.
This implementation is also not reentrant; you can extend it to track the locking thread and the lock count if you need reentrance.
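For reference, a re-entrant variant could track the owning thread and a hold count; a rough, untested sketch along the same lines (still usable on Java 1.4):
public class ReentrantTryLock {
    private Thread owner = null;
    private int holdCount = 0;

    public synchronized boolean tryLock() {
        Thread current = Thread.currentThread();
        if (owner == null) {          // free: take it
            owner = current;
            holdCount = 1;
            return true;
        }
        if (owner == current) {       // already ours: re-enter
            holdCount++;
            return true;
        }
        return false;                 // held by another thread
    }

    public synchronized void unlock() {
        if (owner != Thread.currentThread()) {
            throw new IllegalMonitorStateException();
        }
        if (--holdCount == 0) {
            owner = null;
        }
    }
}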
Important update: @Stephen C made a good point that double-checked locking is broken in Java 1.4, and one must always keep that in mind. But there are exceptions, for instance short primitive types, so I think it will work in this particular case. For more details, please look at the "Double-Checked Locking is Broken" declaration.
Synchronized blocks / methods and primitive mutexes can't do that in Java.
But if you use a Lock instead (javadoc), you can use tryLock either to never block or to only block for a limited time.
Example:
Lock l = new ReentrantLock();
if (l.tryLock()) {
try {
// access the resource protected by this lock
} finally {
l.unlock();
}
} else {
// report "too busy"
}
But note that it is essential to use "try ... finally" and an explicit unlock() call to ensure that the lock is always released. (Unlike the synchronized constructs, which takes care of that for you automatically.)
Prior to Java 1.5 there is no solution that I am aware of in pure Java. It might be possible with native code trickery, but I don't know how.
You / your management should be looking to ditch support in your products for Java 1.4, and to migrate away from any third-party product that depends on it. Java 1.5 itself was EOL'd many years ago. In fact, all releases prior to Java 1.8 have been EOL'd; see the Oracle Java SE Support Roadmap document.
Two of the answers above talked about java.util.concurrent.locks.ReentrantLock, but it doesn't exist in Java 1.4.
Too bad so sad?
No! If system libraries and 3rd party libraries don't hand you what you want, then write it yourself!
The code below does what you asked for, and absolutely nothing more. I personally would not use it without first adding some features that would make it more usable, more testable, and most importantly, more foolproof.
I'm just offering it to you as an example of where to begin.
public class ExtremelySimplisticNonReentrantLock {
boolean isLocked = false;
/**
* #return true if the lock was acquired, false otherwise.
*/
public synchronized boolean tryToAcquire() {
if (isLocked) {
return false;
}
isLocked = true;
return true;
}
public synchronized void release() {
isLocked = false;
}
}
Share and Enjoy!
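A hypothetical usage in the syncCmd method from the question could look like this (sketch only; the busyLock field name is made up):
private final ExtremelySimplisticNonReentrantLock busyLock =
        new ExtremelySimplisticNonReentrantLock();

public void syncCmd(String controlCmd) {
    if (!busyLock.tryToAcquire()) {
        System.out.println("too busy");  // report and return instead of queuing
        return;
    }
    try {
        // the work previously done inside synchronized(Sync) { ... }
    } finally {
        busyLock.release();
    }
}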
Try this (two classes, Executor and Tracker):
Executor :
package com.example.so.jdk1_4.synch;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Random;
/**
* <p> For http://stackoverflow.com/questions/38671520/not-wait-in-case-synchronized-section-is-occupied </p>
* @author Ravindra HV
*/
public class InUseExample {
public synchronized void execute(String command) {
InUseTracker.obtainClassInstance().setInuse(true);
try {
System.out.println("Executing :"+command);
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}// do work
InUseTracker.obtainClassInstance().setInuse(false);
}
/**
* #param args
*/
public static void main(String[] args) {
System.out.println("Start :"+new Date());
testInUseExample();
System.out.println("Final wait count :"+InUseTracker.obtainClassInstance().waitCount());
System.out.println("End :"+new Date());
}
private static void testInUseExample() {
final InUseExample inUseExample = new InUseExample();
Runnable runnable = new Runnable() {
@Override
public void run() {
try {
InUseTracker.obtainClassInstance().incrementWaitCount();
while(true) {
if( InUseTracker.obtainClassInstance().isInuse() == false ) { // reduces the chances of this thread going to a block mode..
inUseExample.execute(Thread.currentThread().getName());
break;
}
else {
try {
Random random = new Random();
String message = Thread.currentThread().getName()+" - block in use by :"+InUseTracker.obtainClassInstance().getInUseBy();
message = message+" "+". Wait Count :"+InUseTracker.obtainClassInstance().waitCount();
System.out.println(message);
Thread.sleep(random.nextInt(1000));
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
} catch (Exception e) {
e.printStackTrace();
} finally {
InUseTracker.obtainClassInstance().decrementWaitCount();
}
}
};
int threadCount = 10;
List<Thread> threadPoolTemp = new ArrayList<Thread>();
for(int i=0;i<threadCount;i++) {
Thread thread = new Thread(runnable);
threadPoolTemp.add(thread);
}
for (Thread thread : threadPoolTemp) {
thread.start();
}
for (Thread thread : threadPoolTemp) {
try {
thread.join(); // wait until all threads have executed..
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
Tracker :
package com.example.so.jdk1_4.synch;
/**
* <p> For http://stackoverflow.com/questions/38671520/not-wait-in-case-synchronized-section-is-occupied </p>
* @author Ravindra HV
*/
public class InUseTracker {
private boolean inuse;
private int waitCount;
private String inUseBy;
private static InUseTracker DEFAULT_INSTANCE = new InUseTracker();
private InUseTracker() {
}
public static InUseTracker obtainClassInstance() {
return DEFAULT_INSTANCE;
}
public synchronized boolean isInuse() {
return inuse;
}
public synchronized void setInuse(boolean inuse) {
this.inuse = inuse;
if(inuse) {
setInUseBy(Thread.currentThread().getName());
}
else {
setInUseBy("");
}
}
private void setInUseBy(String inUseBy) {
this.inUseBy = inUseBy;
}
public synchronized String getInUseBy() {
return inUseBy;
}
public synchronized void incrementWaitCount() {
waitCount++;
}
public synchronized void decrementWaitCount() {
waitCount--;
}
public synchronized int waitCount() {
return waitCount;
}
}
PS: I guess you'd have to move the
InUseTracker.obtainClassInstance().setInuse(false);
call into a finally block, if or as appropriate.

All threads get locked in wait() state [duplicate]

This question already has answers here:
Notify not getting the thread out of wait state
(3 answers)
Closed 7 years ago.
Basically I have to create 3 classes (2 threaded).
First one holds some cargo (has a minimum capacity (0) and a maximum (200))
Second one supplies the cargo every 500ms.
Third one takes away from cargo every 500ms.
The main program has one cargo object (1), two supplier objects (2) and two subtraction (Taker) objects (3). The problem I'm having is that, one by one, they fall into a wait() state and never get out. Eventually all of them get stuck in the wait() state, with the program running but without them actually doing anything.
First class:
public class Storage {
private int maxCapacity;
private int currentCapacity;
public Storage( int currentCapacity, int maxCapacity ) {
this.currentCapacity = currentCapacity;
this.maxCapacity = maxCapacity;
}
public int getCapacity(){ return this.currentCapacity; }
public void increase( int q ) {
this.currentCapacity += q;
System.out.println("increase" + q + ". Total: " + currentCapacity);
}
public int getMax() { return this.maxCapacity; }
public void decrease( int q ) {
this.currentCapacity -= q;
System.out.println("decrease - " + q + ". Total: " + currentCapacity);
}
}
2nd class (supplier):
public class Supplier implements Runnable {
private int capacity;
private Storage storage;
private volatile boolean run;
public Supplier( int capacity, Storage storage ) {
this.capacity = capacity;
this.storage = storage;
this.run = true;
}
public void kiss_kill() { run = !run; }
public synchronized void add() {
while(storage.getCapacity() + capacity > storage.getMax()) {
try {
System.out.println("wait - supplier");
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
storage.increase(capacity);
notifyAll();
}
public void run() {
synchronized (this) {
while(run) {
add();
Thread.yield(); //would be wait(500), but this just speeds it up
}
}
}
}
3rd class (taker/demander):
public class Taker implements Runnable {
private int capacity;
private Storage storage;
private volatile boolean run;
public Taker( int capacity, Storage storage ) {
this.capacity = capacity;
this.storage = storage;
this.run = true;
}
public void kiss_kill() { run = !run; }
public synchronized void take() {
while(storage.getCapacity() - capacity < 0) {
try {
System.out.println("wait - taker");
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
storage.decrease(capacity);
notifyAll();
}
public void run() {
synchronized (this) {
while(run) {
take();
Thread.yield(); //again, wait(500) should be instead
}
}
}
}
Main is something like this:
public class Main{
public static void main(String... args) {
Storage sk = new Storage(100, 200);
Supplier[] s = { new Supplier(10, sk), new Supplier(15, sk) };
Taker[] p = { new Taker(15, sk), new Taker(20, sk) };
Thread t[] = {
new Thread(s[0]),
new Thread(s[1]),
new Thread(p[0]),
new Thread(p[1]) };
for(Thread th : t) th.start();
try {
Thread.sleep(60000); //program should last for 60s.
} catch (InterruptedException e) {
e.printStackTrace();
}
s[0].kiss_kill(); s[1].kiss_kill(); p[0].kiss_kill(); p[1].kiss_kill();
}
}
Why doesn't notifyAll() release the other objects from their wait() state? What can I do to fix this?
Sorry, I know it's a long example, I hate posting too many classes like this. Thanks for reading!
I translated the code, so if you spot anything that you're unsure about that I've missed, please tell me and I'll fix it right away!
Doing concurrency is easy:
Anyone can slap synchronized on methods and synchronized () {} around blocks of code. It does not mean it is correct. And then they can keep slapping synchronized on everything until it appears to work, until it doesn't.
Doing concurrency correctly is Hard:
You should lock on the data that needs to be consistent, not on the methods making the changes. And you have to use the same lock instance for everything.
In this case that is the currentCapacity in Storage. That is the only thing that is shared and the only thing that needs to be consistent.
What you are doing now is having the classes lock on instances of themselves, which means nothing shared is being protected, because there is no shared lock.
Think about it: if you are not locking on the exact same instance of an object (which must be final), then what are you protecting?
Also, what about code that has access to the object that needs to be consistent but does not request a lock on it? Well, it just does what it wants. synchronized() {} in calling classes is not how you protect shared data from external manipulation.
Thread safe objects are NOT about the synchronized keyword:
Read up on the java.util.concurrent package it has all the things you need already. Use the correct data structure for your use case.
In this particular case, if you use an AtomicInteger for your counter, you do not need any error-prone manual locking; there is no need for synchronized anywhere, as it is already thread safe.
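As a small sketch of what that can look like for a bounded counter (the clamp-to-bounds policy here is an assumption, not taken from the question), a CAS loop removes the need for synchronized entirely:
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a bounded counter without synchronized, using a compare-and-set loop.
public class AtomicBoundedCounter {
    private final AtomicInteger current = new AtomicInteger(0);
    private final int max;

    public AtomicBoundedCounter(int max) { this.max = max; }

    /** Adds delta (which may be negative) and clamps the result into [0, max]. */
    public int add(int delta) {
        while (true) {
            int oldValue = current.get();
            int newValue = Math.min(max, Math.max(0, oldValue + delta));
            if (current.compareAndSet(oldValue, newValue)) {
                return newValue; // nobody changed the value in between, we are done
            }
            // another thread won the race: re-read and retry
        }
    }

    public int get() { return current.get(); }
}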
Immutable Data:
If you work with immutable data exclusively, you do not need any of these silly locking semantics, which are extremely error prone even for those who understand them, and even more so for those who only think they understand them.
Here is a working idiomatic example:
This is a good chance to learn what non-deterministic means and how to use the step debugger in your IDE to debug concurrent programs.
Q33700412.java
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import com.vertigrated.FormattedRuntimeException;
public class Q33700412
{
public static void main(final String[] args)
{
final Storage s = new Storage(100);
final int ap = Runtime.getRuntime().availableProcessors();
final ExecutorService es = Executors.newFixedThreadPool(ap);
for (int i = 0; i < ap; i++)
{
es.execute(new Runnable()
{
final Random r = new Random();
@Override
public void run()
{
while (true)
{
/* this if/else block is NOT thread safe, I did this on purpose
the state can change between s.remainingCapacity() and
the call to s.increase/s.decrease.
This is ok, because the Storage object is internally consistent.
This thread might fail if this happens, this is the educational part.
*/
if (s.remainingCapacity() > 0)
{
if (r.nextBoolean()) { s.increase(r.nextInt(10)); }
else { s.decrease(10); }
System.out.format("Current Capacity is %d", s.getCurrentCapacity());
System.out.println();
}
else
{
System.out.format("Max Capacity %d Reached", s.getMaxCapacity());
System.out.println();
}
try { Thread.sleep(r.nextInt(5000)); }
catch (InterruptedException e) { throw new RuntimeException(e); }
}
}
});
}
es.shutdown();
try
{
Thread.sleep(TimeUnit.MINUTES.toMillis(1));
es.shutdown();
}
catch (InterruptedException e) { System.out.println("Done!"); }
}
public static final class Storage
{
/* AtomicInteger is used so that it can be mutable and final at the same time */
private final AtomicInteger currentCapacity;
private final int maxCapacity;
public Storage(final int maxCapacity) { this(0, maxCapacity); }
public Storage(final int currentCapacity, final int maxCapacity)
{
this.currentCapacity = new AtomicInteger(currentCapacity);
this.maxCapacity = maxCapacity;
}
public int remainingCapacity() { return this.maxCapacity - this.currentCapacity.get(); }
public int getCurrentCapacity() { return this.currentCapacity.get(); }
public void increase(final int q)
{
synchronized (this.currentCapacity)
{
if (this.currentCapacity.get() < this.maxCapacity)
{
this.currentCapacity.addAndGet(q);
}
else
{
throw new FormattedRuntimeException("Max Capacity %d Exceeded!", this.maxCapacity);
}
}
}
public int getMaxCapacity() { return this.maxCapacity; }
public void decrease(final int q)
{
synchronized (this.currentCapacity)
{
if (this.currentCapacity.get() - q >= 0)
{
this.currentCapacity.addAndGet(q * -1);
}
else
{
this.currentCapacity.set(0);
}
}
}
}
}
Notes:
Limit the scope of synchronized blocks to the minimum they need to protect and lock on the object that needs to stay consistent.
The lock object must be marked final or the reference can change and you will be locking on different instances.
The more final the more correct your programs are likely to be the first time.
Jarrod Roberson gave you the "how" half of the answer. Here's the other half--the "why".
Your Supplier object's add() method waits on itself (i.e., on the supplier object), and it notifies itself.
Your Taker object's take() method waits on itself (i.e., on the taker object), and it notifies itself.
The supplier never notifies the taker, and taker never notifies the supplier.
You should do all of your synchronization on the shared object (i.e., on the Storage object).
So I should convert storage into a thread?
No, you don't want Storage to be a thread, you want it to be the lock. Instead of having your Supplier objects and your Taker objects synchronize on themselves, they should all synchronize on the shared Storage object.
E.g., do this:
public void take() {
synchronized(storage) {
while(...) {
try {
storage.wait();
} catch ...
}
...
storage.notifyAll();
}
}
Instead of this:
public synchronized void take() {
while(...) {
try {
wait();
} catch ...
}
...
notifyAll();
}
And do the same for all of your other synchronized methods.

Java: Threads, how to make them all do something

I am trying to implement nodes talking to each other in Java. I am doing this by creating a new thread for every node that wants to talk to the server.
When the given number of nodes (i.e. that many threads) have been created and have connected to the server, I want each thread to execute its next bit of code after adding to the "sharedCounter".
I think I need to use 'locks' on the shared variable, and something like signalAll() or notifyAll() to get all the threads going, but I can't seem to make clear sense of exactly how this works or how to implement it.
Any help explaining these Java concepts would be greatly appreciated :D
Below is roughly the structure of my code:
import java.net.*;
import java.io.*;
public class Node {
public static void main(String[] args) {
...
// Chooses server or client launchers depend on parameters.
...
}
}
class sharedResource {
private int sharedCounter;
public sharedResource(int i) {
sharedCounter = i;
}
public synchronized void incSharedCounter() {
sharedCounter--;
if (sharedCounter == 0)
// Get all threads to do something
}
}
class Server {
...
for (int i = 0; i < numberOfThreads; i++) {
new serverThread(serverSocket.accept()).start();
}
...
sharedResource threadCount = new sharedResource(numberOfThreads);
...
}
class serverThread extends Thread {
...
//some code
Server.threadCount.incSharedCounter();
// Some more code to run when sharedCounter == 0
...
}
class Client {
...
}
     // Get all threads to do something
Threads (or rather Runnables, which you should implement rather than extending Thread) have a run method that contains the code they are expected to execute.
Once you call Thread#start (which in turn calls Runnable#run), the thread will start doing exactly that.
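For example, a minimal Runnable-based worker (the class and variable names below are invented for illustration) looks like this:
public class Worker implements Runnable {
    private final String name;

    public Worker(final String name) {
        this.name = name;
    }

    public void run() {
        // This is the code the thread executes once start() has been called.
        System.out.println(name + " is running on " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        // Wrap the Runnable in a Thread and start it; start() arranges for run() to be called.
        new Thread(new Worker("worker-1")).start();
    }
}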
Since you seem to be new to multi-threading in Java, I recommend reading an introduction to the Concurrency Utilities package, which was introduced in Java 5 to make it easier to implement concurrent operations.
Specifically what you seem to be looking for is a way to "pause" the operation until a condition is met (in your case a counter having reached zero). For this, you should look at a CountDownLatch.
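For illustration, here is a minimal, self-contained sketch of that idea; the class and variable names are made up and not taken from your code:
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) {
        final int numberOfThreads = 5;
        // Every thread counts down once; await() returns only when the count reaches zero.
        final CountDownLatch allConnected = new CountDownLatch(numberOfThreads);

        for (int i = 0; i < numberOfThreads; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    System.out.println("Node " + id + " connected");
                    allConnected.countDown();
                    try {
                        allConnected.await(); // block until every node has connected
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                    System.out.println("Node " + id + " doing its next bit of work");
                }
            }).start();
        }
    }
}
Each thread counts down once and then awaits; as soon as the count reaches zero, every await() returns and all the threads proceed together.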
Indeed, the subject is broad, but I'll try to explain the basics; more details can be found in various blogs and articles, one of which is the Java concurrency trail.
It is best to think of each thread as a runner (a physical person) racing alongside the others. Each runner may perform some task while running, for example taking a cup of water from a table at a given point in the race. Physically, two runners cannot drink from the same cup at once, but in the virtual world it is possible, and that is where the line is drawn.
For example, take two runners again: each of them has to run back and forth along a track and push a button (shared by both runners) at each end, 1'000'000 times in total; the button simply increments a counter by one each time it is pushed. When they have finished, what is the value of the counter? In the physical world it would be 2'000'000, because the runners cannot push the button at the same time; one would wait for the other to let go first... unless they fight over it... Well, that is exactly what two threads do. Consider this code :
public class ThreadTest extends Thread {
static public final int TOTAL_INC = 1000000;
static public int counter = 0;
@Override
public void run() {
for (int i=0; i<TOTAL_INC; i++) {
counter++;
}
System.out.println("Thread stopped incrementing counter " + TOTAL_INC + " times");
}
public static void main(String[] args) throws InterruptedException {
Thread t1 = new ThreadTest();
Thread t2 = new ThreadTest();
t1.start();
t2.start();
t1.join(); // wait for each thread to stop on their own...
t2.join(); //
System.out.println("Final counter is : " + counter + " which should be equal to " + TOTAL_INC * 2);
}
}
An output could be something like
Thread stopped incrementing counter 1000000 times
Thread stopped incrementing counter 1000000 times
Final counter is : 1143470 which should be equal to 2000000
Once in a while, the two threads would increment the same value at the same time, so one of the increments is lost; this is called a race condition.
Synchronizing the run method will not work (each thread would only lock its own instance); you have to use a locking mechanism shared by the threads to prevent this from happening. Consider the following changes in the run method :
static private Object lock = new Object();
@Override
public void run() {
for (int i=0; i<TOTAL_INC; i++) {
synchronized(lock) {
counter++;
}
}
System.out.println("Thread stopped incrementing counter " + TOTAL_INC + " times");
}
Now the expected output is
...
Final counter is : 2000000 which should be equal to 2000000
We have synchronized our counter updates on a shared lock object. This is like forming a queue so that only one runner at a time can reach the button.
NOTE : this locking mechanism is called a mutex. If a resource can be accessed by n threads at once, you might consider using a semaphore.
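For instance, if the button could be held by at most two runners at a time, a sketch using java.util.concurrent.Semaphore (the permit count and names below are just examples) might look like:
import java.util.concurrent.Semaphore;

public class SemaphoreExample {
    // Two permits: at most two threads may be inside the guarded section at once.
    private static final Semaphore semaphore = new Semaphore(2);

    static void useResource(final int id) throws InterruptedException {
        semaphore.acquire();      // blocks while no permit is available
        try {
            System.out.println("Thread " + id + " is using the resource");
            Thread.sleep(100);    // simulate some work
        } finally {
            semaphore.release();  // always give the permit back
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        useResource(id);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }).start();
        }
    }
}
A Semaphore with a single permit behaves like the mutex above; with n permits it lets up to n threads into the guarded section at once.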
Multithreading is also associated with deadlocking. A deadlock occurs when two threads each wait for the other to free some synchronized resource before they can continue. For example :
Thread 1 starts
Thread 2 starts
Thread 1 acquire synchronized object1
Thread 2 acquire synchronized object2
Thread 2 needs to acquire object1 to continue (locked by Thread 1)
Thread 1 needs to acquire object2 to continue (locked by Thread 2)
Program hangs in deadlock
There are many ways to prevent this from happening (it depends on what your threads are doing and how they are implemented), and it is a topic you should read about in particular; one common approach is sketched below.
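One common technique is to make every thread acquire the locks in the same order; a minimal sketch of that idea (the lock and method names below are invented for illustration):
public class LockOrdering {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    // Both methods take lockA before lockB, so two threads can never end up
    // each holding one lock while waiting for the other.
    public static void firstTask() {
        synchronized (lockA) {
            synchronized (lockB) {
                System.out.println("firstTask holds both locks");
            }
        }
    }

    public static void secondTask() {
        synchronized (lockA) {
            synchronized (lockB) {
                System.out.println("secondTask holds both locks");
            }
        }
    }
}
Because every thread acquires lockA before lockB, the circular wait shown in the trace above cannot occur.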
NOTE : the methods wait, notify and notifyAll can only be called while holding the object's monitor, i.e. from code synchronized on that object. For example :
static public final int TOTAL_INC = 10;
static private int counter = 0;
static private Object lock = new Object();
static class Thread1 extends Thread {
@Override
public void run() {
synchronized (lock) {
for (int i=0; i<TOTAL_INC; i++) {
try {
lock.wait();
counter++;
lock.notify();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
}
static class Thread2 extends Thread {
@Override
public void run() {
synchronized (lock) {
for (int i=0; i<TOTAL_INC; i++) {
try {
lock.notify();
counter--;
lock.wait();
} catch (InterruptedException e) {
/* ignored */
}
}
}
}
}
Notice that both threads run their for...loops entirely within the synchronized block (and counter == 0 when both threads end). This works because they "let each other" access the synchronized resource through the lock's wait and notify methods. Without those two methods, the two blocks would simply run one after the other, sequentially rather than alternately.
I hope this shed some light about threads (in Java).
** UPDATE **
Here is a little proof of concept of everything discussed above, using the CountDownLatch class suggested by Thilo earlier :
static class Server {
static public final int NODE_COUNT = 5;
private List<RunnableNode> nodes;
private CountDownLatch startSignal;
private Object lock = new Object();
public Server() {
nodes = Collections.synchronizedList(new ArrayList<RunnableNode>());
startSignal = new CountDownLatch(Server.NODE_COUNT);
}
public Object getLock() {
return lock;
}
public synchronized void connect(RunnableNode node) {
if (startSignal.getCount() > 0) {
startSignal.countDown();
nodes.add(node);
System.out.println("Received connection from node " + node.getId() + " (" + startSignal.getCount() + " remaining...)");
} else {
System.out.println("Client overflow! Refusing connection from node " + node.getId());
throw new IllegalStateException("Too many nodes connected");
}
}
public void shutdown() {
for (RunnableNode node : nodes) {
node.shutdown();
}
}
public void awaitAllConnections() {
try {
startSignal.await();
synchronized (lock) {
lock.notifyAll(); // awake all nodes
}
} catch (InterruptedException e) {
/* ignore */
shutdown(); // properly close any connected node now
}
}
}
static class RunnableNode implements Runnable {
private Server server;
private int id;
private boolean working;
public RunnableNode(int id, Server server) {
this.id = id;
this.server = server;
this.working = true;
}
public int getId() {
return id;
}
public void run() {
try {
Thread.sleep((long) (Math.random() * 5) * 1000); // just wait randomly from 0 to 5 seconds....
synchronized (server.getLock()) {
server.connect(this);
server.getLock().wait();
}
if (!Thread.currentThread().isAlive()) {
throw new InterruptedException();
} else {
System.out.println("Node " + id + " started successfully!");
while (working) {
Thread.yield();
}
}
} catch (InterruptedException e1) {
System.out.print("Ooop! ...");
} catch (IllegalStateException e2) {
System.out.print("Awwww! Too late! ...");
}
System.out.println("Node " + id + " is shutting down");
}
public void shutdown() {
working = false; // shutdown node here...
}
}
static public void main(String...args) throws InterruptedException {
Server server = new Server();
for (int i=0; i<Server.NODE_COUNT + 4; i++) { // create 4 more nodes than needed...
new Thread(new RunnableNode(i, server)).start();
}
server.awaitAllConnections();
System.out.println("All connection received! Server started!");
Thread.sleep(6000);
server.shutdown();
}
This is a broad topic. You might try reading through the official guides for concurrency (i.e. threading, more or less) in Java. This isn't something with cut-and-dried solutions; you have to design something.
