Ensure Java synchronized locks are taken in order?

We have two threads accessing one list via a synchronized method. Can we
a) rely on the runtime to make sure that each of them will receive access to the method in the order in which they tried to, or
b) does the VM follow any other rules?
c) Is there a better way to serialize the requests?

No, synchronized will grant access in any order (it depends on the JVM implementation). This could even cause threads to starve in some scenarios.
You can ensure the order by using ReentrantLock (since Java 5.0) with the fairness option enabled: Lock lock = new ReentrantLock(true);
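For illustration, a minimal sketch of guarding a list with a fair lock (the FairList name and the String element type are just placeholders, not from the question). Note that fair locks typically trade some throughput for the ordering guarantee.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class FairList {
    // true = fair mode: waiting threads acquire the lock roughly in arrival order
    private final ReentrantLock lock = new ReentrantLock(true);
    private final List<String> list = new ArrayList<>();

    public void add(String item) {
        lock.lock();
        try {
            list.add(item);
        } finally {
            lock.unlock(); // always release in a finally block
        }
    }
}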

No, you cannot be sure that two calls to a synchronized method will occur in order. The order is unspecified and implementation dependent.
This is defined in the 17.1 Locks section of the JLS. Notice that it says nothing about the order in which threads waiting on a lock should gain access.

You can't rely on the order in which the method is entered by each thread. With only two threads it may appear to work, but imagine three threads where one has already acquired the lock: when the other two try to enter, they both wait, and either of them may be granted access next, regardless of the order in which they called the method.
So it is not advisable to rely on the order.

c) is there a better way to serialize the requests?
Are you by any chance using the list as a queue, i.e., does the usage pattern look something like this?
while (some condition) {
    synchronized (theList) {
        anItem = get and remove an element from theList
    }
    do some work with anItem
}
If so, you may want to look at the BlockingQueue interface instead of using your own locking schemes. The implementations (like ArrayBlockingQueue) have settings for fairness and more.
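For example, a rough sketch of that producer/consumer pattern with a fair ArrayBlockingQueue (the capacity of 100 and the String items are arbitrary placeholders):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueSketch {
    public static void main(String[] args) throws InterruptedException {
        // capacity 100, fair = true: waiting producers/consumers are served roughly FIFO
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100, true);

        Thread producer = new Thread(() -> {
            try {
                queue.put("work-item"); // blocks if the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        String anItem = queue.take(); // blocks until an element is available
        System.out.println("got " + anItem); // "do some work with anItem"
        producer.join();
    }
}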

I always leave synchronization to the app server or engine unless I need to define my own.

I have solved a similar problem with a couple of instruments. The problem I was trying to solve is a Ping Pong service: two threads, one prints Ping and the other prints Pong, but they have to alternate strictly (no double Ping or double Pong).
I will put one of the implementations here, but you can have a look at the other implementations (6 or 7 different ways so far):
https://github.com/tugrulkarakaya/pingpong
import java.util.concurrent.*;

public class Method2CyclicBarrier {
    ExecutorService service;
    CyclicBarrier c1 = new CyclicBarrier(2);
    CyclicBarrier c2 = new CyclicBarrier(2);

    public static void main(String[] args) {
        Method2CyclicBarrier m = new Method2CyclicBarrier();
        m.runPingPong();
    }

    public void runPingPong() {
        service = Executors.newFixedThreadPool(2);
        service.submit(() -> this.printPing(c1, c2));
        service.submit(() -> this.printPong(c1, c2));
    }

    public void printPing(CyclicBarrier c1, CyclicBarrier c2) {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                c1.await();                 // both threads meet at the first barrier
                System.out.println("PING");
                c2.await();                 // Pong prints only after the second barrier
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (BrokenBarrierException ex) {
                // barrier broken; retry on the next iteration
            }
        }
    }

    public void printPong(CyclicBarrier c1, CyclicBarrier c2) {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                c1.await();
                c2.await();
                System.out.println("PONG");
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            } catch (BrokenBarrierException ex) {
                // barrier broken; retry on the next iteration
            }
        }
    }
}

Yes.
If access to the list is via one synchronized method, concurrent requests from multiple threads will be serialized.
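For completeness, a minimal sketch of what such a synchronized access method could look like (SharedList and its methods are illustrative names only):

import java.util.ArrayList;
import java.util.List;

public class SharedList {
    private final List<String> list = new ArrayList<>();

    // Only one thread at a time can be inside either method,
    // because both synchronize on the same instance (this).
    public synchronized void add(String item) {
        list.add(item);
    }

    public synchronized String removeFirst() {
        return list.isEmpty() ? null : list.remove(0);
    }
}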

Related

How to make Java threads get locks in order they tried to acquire it? [duplicate]


Is there a better way to use wait/notify in connection with AtomicInteger?

I have this kind of code:
public class RecursiveQueue {

    //@Inject
    private QueueService queueService;

    public static void main(String[] args) {
        RecursiveQueue test = new RecursiveQueue();
        test.enqueue(new Node("X"), true);
        test.enqueue(new Node("Y"), false);
        test.enqueue(new Node("Z"), false);
    }

    private void enqueue(final Node node, final boolean waitTillFinished) {
        final AtomicLong totalDuration = new AtomicLong(0L);
        final AtomicInteger counter = new AtomicInteger(0);

        AfterCallback callback = new AfterCallback() {
            @Override
            public void onFinish(Result result) {
                for (Node aNode : result.getChildren()) {
                    counter.incrementAndGet();
                    queueService.requestProcess(aNode, this);
                }
                totalDuration.addAndGet(result.getDuration());
                if (counter.decrementAndGet() <= 0) { // last one
                    System.out.println("Processing of " + node.toString()
                            + " has finished in " + totalDuration.get() + " ms");
                    if (waitTillFinished) {
                        counter.notify();
                    }
                }
            }
        };

        counter.incrementAndGet();
        queueService.requestProcess(node, callback);

        if (waitTillFinished) {
            try {
                counter.wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
Imagine there is a queueService which uses a blocking queue and a few consumer threads to process nodes, i.e., it calls a DAO to fetch the children of each node (it's a tree).
So the requestProcess method just enqueues the node and does not block.
Is there some better/safer way to avoid using wait/notify in this sample?
According to some findings I could use a Phaser (but I work on Java 6) or conditions (but I'm not using locks).
There is no synchronized anything in your example. You mustn't call o.wait() or o.notify() except from within a synchronized(o) {...} block.
Your call to wait() is not in a loop. A premature return may never happen in your JVM, but the language spec permits wait() to return prematurely (that's known as a spurious wakeup). More generally, it is good practice to always use a loop because it's a familiar design pattern: a while statement costs no more than an if, you need it because of the possibility of spurious wakeups, you absolutely must have it in a multi-consumer situation, so you might as well always write it that way.
Since you must use synchronized blocks in order to use wait() and notify(), there probably is no reason to use Atomic anything.
This "recursive" thing seems awfully complicated, what with the callback adding more items to the queue. How deep can that go?
I think you are looking for CountDownLatch.
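As a rough illustration only (adapting the names from the question, not the poster's actual solution), the wait/notify on the counter could be replaced with a latch along these lines:

// java.util.concurrent.CountDownLatch, available since Java 5
final CountDownLatch done = new CountDownLatch(1);

// in the callback, where the question calls counter.notify():
if (counter.decrementAndGet() <= 0) { // last node processed
    done.countDown();
}

// in enqueue(), where the question calls counter.wait():
if (waitTillFinished) {
    try {
        done.await(); // blocks until countDown() is called; no synchronized block needed
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}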
You actually use locks, or let's put it this way: you should be using them if you try to use wait/notify, as James pointed out. As you are bound to Java 6, and ForkJoin or Phaser are not available to you, the choice is either implementing wait/notify properly or using a Condition with an explicit Lock. This is a matter of personal preference.
Another alternative is to try to restructure your algorithm so that you know the entire set of steps to execute up front. That is not always possible, though.
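For illustration, a Condition-based replacement for the wait/notify could look roughly like this (Java 6 compatible; CompletionSignal is an invented helper name, not part of the question's code):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical helper: the callback calls signalFinished() when the counter hits 0,
// and enqueue() calls awaitFinished() instead of counter.wait().
class CompletionSignal {
    private final Lock lock = new ReentrantLock();
    private final Condition finished = lock.newCondition();
    private boolean done = false;

    void signalFinished() {
        lock.lock();
        try {
            done = true;
            finished.signal();
        } finally {
            lock.unlock();
        }
    }

    void awaitFinished() throws InterruptedException {
        lock.lock();
        try {
            while (!done) {        // loop guards against spurious wakeups
                finished.await();  // releases the lock while waiting
            }
        } finally {
            lock.unlock();
        }
    }
}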

Can we say that by synchronizing a block of code we are making the contained statements atomic?

I want to clarify my understanding: if I surround a block of code with a synchronized(this) {} statement, does this mean that I am making those statements atomic?
No, it does not ensure your statements are atomic. For example, if you have two statements inside one synchronized block, the first may succeed but the second may fail. Hence the result is not "all or nothing". But with regard to multiple threads, you do ensure that statements of two threads are not interleaved. In other words: all statements of all threads are strictly serialized, even though there is no guarantee that all or none of a thread's statements get executed.
Have a look at how Atomicity is defined.
Here is an example showing that the reader is able to read a corrupted state; hence the synchronized block was not executed atomically (forgive me the nasty formatting):
import static java.util.concurrent.Executors.newFixedThreadPool;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class Example {

    public static void sleep() {
        try { Thread.sleep(400); } catch (InterruptedException e) {}
    }

    public static void main(String[] args) {
        final Example example = new Example(1);
        ExecutorService executor = newFixedThreadPool(2);
        try {
            Future<?> reader = executor.submit(new Runnable() {
                @Override public void run() {
                    int value;
                    do {
                        value = example.getSingleElement();
                        System.out.println("single value is: " + value);
                    } while (value != 10);
                }
            });
            Future<?> writer = executor.submit(new Runnable() {
                @Override public void run() {
                    for (int value = 2; value < 10; value++) example.failDoingAtomic(value);
                }
            });
            reader.get();
            writer.get();
        } catch (Exception e) {
            e.getCause().printStackTrace();
        } finally {
            executor.shutdown();
        }
    }

    private final Set<Integer> singleElementSet;

    public Example(int singleIntValue) {
        singleElementSet = new HashSet<>(Arrays.asList(singleIntValue));
    }

    public synchronized void failDoingAtomic(int replacement) {
        singleElementSet.clear();
        if (new Random().nextBoolean()) sleep();
        else throw new RuntimeException("I failed badly before adding the new value :-(");
        singleElementSet.add(replacement);
    }

    public int getSingleElement() {
        return singleElementSet.iterator().next();
    }
}
No, synchronization and atomicity are two different concepts.
Synchronization means that a code block can be executed by at most one thread at a time, but other threads (that execute some other code that uses the same data) can see intermediate results produced inside the "synchronized" block.
Atomicity means that other threads do not see intermediate results - they see either the initial or the final state of the data affected by the atomic operation.
It's unfortunate that Java uses synchronized as a keyword. A synchronized block in Java is a "mutex" (short for "mutual exclusion"). It's a mechanism that ensures only one thread at a time can enter the block.
Mutexes are just one of many tools that are used to achieve "synchronization" in a multi-threaded program: broadly speaking, synchronization refers to all of the techniques that are used to ensure that the threads will work in a coordinated fashion to achieve a desired outcome.
Atomicity is what Oleg Estekhin said, above. We usually hear about it in the context of "transactions". Mutual exclusion (i.e., Java's synchronized) guarantees something less than atomicity: namely, it protects invariants.
An invariant is any assertion about the program's state that is supposed to be "always" true. E.g., in a game where players exchange virtual coins, the total number of coins in the game might be an invariant. But it's often impossible to advance the state of the program without temporarily breaking the invariant. The purpose of mutexes is to ensure that only one thread (the one that is doing the work) can see the temporary "broken" state.
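To make the invariant point concrete, a small sketch (the coin amounts and names are invented for illustration): the total is temporarily wrong inside the synchronized method, but no other thread that acquires the same lock can observe that broken state.

public class CoinBank {
    private int aliceCoins = 50;
    private int bobCoins = 50;
    // Invariant: aliceCoins + bobCoins == 100

    public synchronized void transferToBob(int amount) {
        aliceCoins -= amount;
        // The invariant is broken right here, but only this thread can see it,
        // because every reader below also holds the same lock.
        bobCoins += amount;
    }

    public synchronized int totalCoins() {
        return aliceCoins + bobCoins; // always 100 for threads that use the lock
    }
}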
For code that uses synchronized on that object: yes.
For code that doesn't use the synchronized keyword for that object: no.
Can we say that by synchronizing a block of code we are making the contained statements atomic?
You are taking a very big leap there. Atomicity means that an atomic operation completes as a single, indivisible step (conceptually, in one CPU cycle or its equivalent), whereas synchronizing a block means only one thread can enter the critical region at a time. The code in the critical region may take many CPU cycles to execute (which makes it non-atomic).
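For contrast, a small sketch (the counter names are invented): the AtomicInteger increment is one indivisible operation, while the synchronized block is merely mutually exclusive and spans several steps.

import java.util.concurrent.atomic.AtomicInteger;

public class Counters {
    private final AtomicInteger atomicCount = new AtomicInteger();
    private int plainCount;

    public void atomicIncrement() {
        atomicCount.incrementAndGet(); // one indivisible read-modify-write
    }

    public synchronized void blockIncrement() {
        int tmp = plainCount;  // the block is a critical region:
        tmp = tmp + 1;         // only one thread at a time runs these
        plainCount = tmp;      // statements, but they are several steps
    }
}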

Is there a way to include a condition in the acquire method of a Java (or Scala) Semaphore object?

I have a program that holds many threads; let's say six threads as an example. Five of them should be able to use a given resource concurrently, but the last one shouldn't while a given condition holds, and it should wait until that condition is over.
In my understanding a ReentrantLock can't be used because it can only be held by one thread at a time. On the other hand, a Semaphore can be held by many threads at a time, but I can't find a way to attach the condition to the acquire method.
Can these high-level objects do the trick, or will I have to implement this functionality using wait and notify directly?
Eg.
class A {
    getResource { ... }
}

// This Runnable could be spawned many times at the same time
class B implements Runnable {
    run {
        setConditionToTrue
        getResource
        ...
        getResource
        ...
        getResource
        setConditionToFalse
    }
}

// This will be working forever, but only one thread
class C implements Runnable {
    run {
        loop {
            if (Condition == true) wait
            getResource
        }
    }
}
Thanks in advance pals
I am restating your problem here: you want your B threads to access the shared resource concurrently, but your C thread should wait for some condition to occur before using the resource.
If I understand your question correctly, you can use ReentrantLock to solve your problem.
Introduce a new function called getAccess() and make the C thread call this function to get the shared resource. Introduce two more functions to allow and stop access to the shared resource.
class A {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition someCondition = lock.newCondition();
    private boolean bCondition = false;

    // getResource() { ... }   // Your existing method used by B threads

    public void getAccess() throws InterruptedException { // protected access, called by C thread
        lock.lock();
        try {
            while (!bCondition)
                someCondition.await(); // C thread waits here but releases the lock
        } finally {
            lock.unlock();
        }
    }

    public void allowAccess() { // B thread can call this to notify C and allow access
        lock.lock();
        try {
            bCondition = true;
            someCondition.signal(); // decided to release the resource
        } finally {
            lock.unlock();
        }
    }

    public void stopAccess() { // B thread can stop the access
        lock.lock();
        try {
            bCondition = false;
        } finally {
            lock.unlock();
        }
    }
}
If you want several threads to share a resource, you need to be more specific about the meaning of that sharing. Normally this means differentiating between the threads that read the current value of the resource and the threads that change the value. This implies a concurrent-read / exclusive-write ("CREW") pattern if the writes are to be race-free and stable.
In the Java API, this is provided by ReentrantReadWriteLock. There are also other alternatives worthy of consideration, such as the carefully implemented Crew in JCSP.
Using [J]CSP, a different pattern is also available: wrapping the common resource in its own thread and providing access to it via a shared JCSP channel from all the client threads. This client-server pattern is easy to understand and implement, and it has the added benefit that it is formally deadlock-free, given that the thread communication graph is acyclic.
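A minimal sketch of the ReentrantReadWriteLock option mentioned above (the guarded int value is just a placeholder):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedValue {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rwLock.readLock().lock();   // many readers may hold this concurrently
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int newValue) {
        rwLock.writeLock().lock();  // exclusive: blocks all readers and writers
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}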

Thread-safe code without using the `synchronized` keyword?

What are the possible ways to make code thread-safe without using the synchronized keyword?
Actually, lots of ways:
No need for synchronization at all if you don't have mutable state.
No need for synchronization if the mutable state is confined to a single thread. This can be done by using local variables or java.lang.ThreadLocal (see the sketch below).
You can also use built-in synchronizers. java.util.concurrent.locks.ReentrantLock has the same functionality as the lock you access when using synchronized blocks and methods, and it is even more powerful.
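For instance, a small sketch of thread confinement with ThreadLocal (SimpleDateFormat is the classic example of a non-thread-safe object kept per thread):

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatter {
    // Each thread gets its own SimpleDateFormat instance, so nothing is shared.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date); // safe without any locking
    }
}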
Only have variables/references local to methods. Or ensure that any instance variables are immutable.
You can make your code thread-safe by making all the data immutable; if there is no mutability, everything is thread-safe.
Secondly, you may want to have a look at the Java concurrency API, which provides read/write locks that perform better when there are many readers and few writers. The plain synchronized keyword will block two readers as well.
//////////// FIRST METHOD USING A SINGLE boolean //////////////
public class ThreadTest implements Runnable {

    ThreadTest() {
        Log.i("Ayaz", "Constructor..");
    }

    private boolean lockBoolean = false;

    public void run() {
        Log.i("Ayaz", "Thread started.." + Thread.currentThread().getName());
        while (lockBoolean) {
            // busy-wait loop for the other thread while one is accessing
        }
        lockBoolean = true;
        synchronizedMethod();
    }

    /**
     * This method is "synchronized" without using the synchronized keyword
     */
    public void synchronizedMethod() {
        Log.e("Ayaz", "processing...." + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (Exception e) {
            System.out.println("Exp");
        }
        Log.e("Ayaz", "complete.." + Thread.currentThread().getName());
        lockBoolean = false;
    }
} // end of ThreadTest class

// For testing, use the lines below in a main method or in an Activity
ThreadTest threadTest = new ThreadTest();
Thread threadA = new Thread(threadTest, "A thread");
Thread threadB = new Thread(threadTest, "B thread");
threadA.start();
threadB.start();

/////////// SECOND METHOD USING TWO booleans /////////////////
public class ThreadTest implements Runnable {

    ThreadTest() {
        Log.i("Ayaz", "Constructor..");
    }

    private boolean isAnyThreadInUse = false;
    private boolean lockBoolean = false;

    public void run() {
        Log.i("Ayaz", "Thread started.." + Thread.currentThread().getName());
        while (!lockBoolean)
            if (!isAnyThreadInUse) {
                isAnyThreadInUse = true;
                synchronizedMethod();
                lockBoolean = true;
            }
    }

    /**
     * This method is "synchronized" without using the synchronized keyword
     */
    public void synchronizedMethod() {
        Log.e("Ayaz", "processing...." + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (Exception e) {
            System.out.println("Exp");
        }
        Log.e("Ayaz", "complete.." + Thread.currentThread().getName());
        isAnyThreadInUse = false;
    }
} // end of ThreadTest class

// For testing, use the lines below in a main method or in an Activity
ThreadTest threadTest = new ThreadTest();
Thread t1 = new Thread(threadTest, "a thread");
Thread t2 = new Thread(threadTest, "b thread");
t1.start();
t2.start();
To maintain predictability you must either ensure that all access to mutable data happens sequentially or handle the issues caused by parallel access.
The most coarse-grained protection uses the synchronized keyword. Beyond that there are at least two layers of possibilities, each with its own benefits.
Locks/Semaphores
These can be very effective. For example, if you have a structure that is read by many threads but only updated by one, you may find a ReadWriteLock useful.
Locks can be much more efficient if you choose the lock to match the algorithm.
Atomics
Use of AtomicReference, for example, can often provide completely lock-free functionality. This can provide huge benefits.
The reasoning behind atomics is to allow them to fail, but to report that failure in a way you can handle.
For example, if you want to change a value, you can read it and then write its new value so long as it is still the old value. This is called a "compare-and-set" (CAS) and can usually be implemented in hardware, so it is extremely efficient. All you then need is something like:
AtomicLong atomic = new AtomicLong(); // assuming a java.util.concurrent.atomic.AtomicLong counter
long old = atomic.get();
while (!atomic.compareAndSet(old, old + 1)) {
    // The value changed between my get and the CAS. Get it again.
    old = atomic.get();
}
Note, however, that predictability is not always the requirement.
Well, there are many ways you can achieve this, and each comes in many flavors. Java 8 also ships with new concurrency features.
Some ways you could ensure thread safety are:
Semaphores
Locks: ReentrantLock, ReadWriteLock, StampedLock (Java 8)
Why do you need to do it?
Using only local variables/references will not solve most complex business needs.
Also, even if instance variables refer to immutable objects, the references themselves can still be changed by other threads unless they are final.
One option is to use something like SingleThreadModel, but it is highly discouraged and deprecated.
You can also look at the concurrency API, as suggested above by Kal.
