Is ReentrantLock a complete replacement for synchronization? - java

I went through the article "http://www.ibm.com/developerworks/java/library/j-jtp10264/". It says that "The Lock framework is a compatible replacement for synchronization". I understand that with ReentrantLock we can hold a lock across methods and wait for a lock for a limited period of time (neither is possible with synchronized blocks or methods). My doubt is: is it possible to replace an application's synchronization mechanism entirely with ReentrantLock?
For example, I want to implement a thread-safe stack data structure where the push, pop, and getTop methods are all synchronized, so that in a multi-threaded environment only one thread can access a synchronized method at a time (if one thread is using push, no other thread can access push, pop, getTop, or any other synchronized method of the Stack class). Is it possible to implement the same thread-safe stack using ReentrantLock? If so, please provide an example.

Anything you can do with synchronized you can also do with ReentrantLock, but not vice versa. That being said, if all you need are lock/unlock semantics, I would suggest synchronized, as it is, in my opinion, more readable.

The answer is "Yes".
A lock()/unlock() pair is used instead of synchronized (obj) { ... }.
await and signal on a Condition replace wait and notify.
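To make this concrete for the stack example in the question, here is a minimal sketch of a thread-safe stack guarded by a single ReentrantLock. The class name and the backing ArrayDeque are my own choices, not from the question:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch: every public method acquires the same lock, so
// only one thread at a time can be inside push/pop/getTop, exactly
// as if the methods were marked synchronized.
final class LockedStack<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Deque<T> items = new ArrayDeque<>();

    public void push(T item) {
        lock.lock();
        try {
            items.push(item);
        } finally {
            lock.unlock(); // always release, even on exception
        }
    }

    public T pop() {
        lock.lock();
        try {
            return items.pop();
        } finally {
            lock.unlock();
        }
    }

    public T getTop() {
        lock.lock();
        try {
            return items.peek();
        } finally {
            lock.unlock();
        }
    }
}
```

The lock()/try/finally/unlock() pattern replaces the implicit locking of synchronized methods; nothing else about the class has to change.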

Brian Goetz discusses this in "Java Concurrency in Practice" in chapter 13.4:
ReentrantLock is an advanced tool for situations where intrinsic locking is not practical. Use it if you need its advanced features: timed, polled, or interruptible lock acquisition, fair queueing, or non-block-structured locking. Otherwise, prefer synchronized.
I absolutely agree, because IMHO this:
synchronized (lock) {
    // ...
}
is way more readable and less error-prone than this:
lock.lock();
try {
    // ...
} finally {
    lock.unlock();
}
Long story short: from a technical point of view, yes, you could replace synchronized with ReentrantLock, but I wouldn't do it per se.
Also check out these questions:
Synchronization vs Lock
Why use a ReentrantLock if one can use synchronized(this)?

ReentrantLock is one of the alternatives to synchronization.
A reentrant mutual exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities.
Refer to this question for other alternatives to synchronization (Concurrent Collections, Atomic variables, Executors, ThreadLocal variables):
Avoid synchronized(this) in Java?
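One of the "extended capabilities" mentioned in that quote is timed lock acquisition. A minimal sketch (the class name and the 100 ms timeout are arbitrary choices for illustration):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of timed lock acquisition: give up instead of blocking
// forever. A synchronized block has no equivalent of this.
class TimedLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    // Returns true if the critical section ran, false if we timed out.
    boolean tryDoWork() throws InterruptedException {
        if (!lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            return false; // could not get the lock within 100 ms
        }
        try {
            // critical section
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```

When the lock is uncontended, tryLock succeeds immediately; under contention the caller gets a chance to back off, log, or retry instead of blocking indefinitely.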

Related

Is this use of AtomicBoolean a valid replacement for synchronized blocks?

Consider two methods a() and b() that cannot be executed at the same time.
The synchronized keyword can be used to achieve this as below. Can I achieve the same effect using AtomicBoolean, as in the code below it?
final class SynchronizedAB {
    synchronized void a() {
        // code to execute
    }
    synchronized void b() {
        // code to execute
    }
}
Attempt to achieve the same effect as above using AtomicBoolean:
final class AtomicAB {
    private AtomicBoolean atomicBoolean = new AtomicBoolean();

    void a() {
        while (!atomicBoolean.compareAndSet(false, true)) {
        }
        // code to execute
        atomicBoolean.set(false);
    }

    void b() {
        while (!atomicBoolean.compareAndSet(false, true)) {
        }
        // code to execute
        atomicBoolean.set(false);
    }
}
No, since synchronized will block, while with the AtomicBoolean you'll be busy-waiting.
Both will ensure that only a single thread will get to execute the block at a time, but do you want to have your CPU spinning on the while block?
It depends on what you are planning to achieve with the original synchronized version of the code. If synchronized was added just to ensure that only one thread at a time is inside either a or b, then to me both versions of the code behave similarly.
However, there are a few differences, as mentioned by Kayaman. To add to them: with a synchronized block you get a memory barrier, which you would miss with bare atomic CAS loops. But if the body of the method doesn't need such a barrier, that difference is eliminated too.
Whether an atomic CAS loop performs better than a synchronized block in an individual case is something only a performance test can tell, but this is the same technique used in multiple places in the concurrent package to avoid synchronization at the block level.
From a behavioral standpoint, this appears to be a partial replacement for Java's built-in synchronization (monitor locks). In particular, it appears to provide correct mutual exclusion which is what most people are after when they're using locks.
It also appears to provide the proper memory visibility semantics. The Atomic* family of classes has similar memory semantics to volatile, so releasing one of these "locks" will provide a happens-before relationship to another thread's acquisition of the "lock" which will provide the visibility guarantee that you want.
Where this differs from Java's synchronized blocks is that it does not provide automatic unlocking in the case of exceptions. To get similar semantics with these locks, you'd have to wrap the locking and usage in a try-finally statement:
void a() {
    while (!atomicBoolean.compareAndSet(false, true)) { }
    try {
        // code to execute
    } finally {
        atomicBoolean.set(false);
    }
}
(and similar for b)
This construct does appear to provide similar behavior to Java's built-in monitor locks, but overall I have a feeling that this effort is misguided. From your comments on another answer it appears that you are interested in avoiding the OS overhead of blocking threads. There is certainly overhead when this occurs. However, Java's built-in locks have been heavily optimized, providing very inexpensive uncontended locking, biased locking, and adaptive spin-looping in the case of short-term contention. The last of these attempts to avoid OS-level blocking in many cases. By implementing your own locks, you give up these optimizations.
You should benchmark, of course. If your performance is suffering from OS-level blocking overhead, perhaps your locks are too coarse. Reducing the amount of locking, or splitting locks, might be a more fruitful way to reduce contention overhead than to try to implement your own locks.

What is the purpose of using synchronized (Thread.currentThread()){...} in Java?

I faced the following code in our project:
synchronized (Thread.currentThread()) {
    // some code
}
I don't understand the reason to use synchronized on currentThread.
Is there any difference between
synchronized (Thread.currentThread()) {
    // some code
}
and just
// some code
Can you provide an example which shows the difference?
UPDATE
In more detail, the code is as follows:
synchronized (Thread.currentThread()) {
    Thread.currentThread().wait(timeInterval);
}
It looks like just Thread.sleep(timeInterval). Is that true?
Consider this:
Thread t = new Thread() {
    public void run() { // A
        synchronized (Thread.currentThread()) {
            System.out.println("A");
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
            }
        }
    }
};
t.start();
synchronized (t) { // B
    System.out.println("B");
    Thread.sleep(5000);
}
Blocks A and B cannot run simultaneously, so in the given test either the "A" or the "B" output will be delayed by 5 seconds; which one comes first is undefined.
Although this is almost definitely an antipattern and should be solved differently, your immediate question still calls for an answer. If your entire codebase never acquires a lock on any Thread instance other than Thread.currentThread(), then indeed this lock will never be contended. However, if anywhere else you have
synchronized (someSpecificThreadInstance) { ... }
then such a block will have to contend with your shown block for the same lock. It may indeed happen that the thread reaching synchronized (Thread.currentThread()) must wait for some other thread to relinquish the lock.
Basically, there is no difference between the presence and absence of the synchronized block. However, I can think of a situation that could give this usage another meaning.
The synchronized block has an interesting side effect of causing the runtime to create a memory barrier before entering and after leaving the block. A memory barrier is a special instruction to the CPU that enforces that all variables shared between multiple threads return their latest values. Usually, a thread works with its own copy of a shared variable, whose value is visible only to that thread. A memory barrier instructs the thread to update the value in a way that makes the change visible to the other threads.
So the synchronized block in this case does not do any locking (as there will be no real lock-and-wait situation, at least none I can think of, unless the use case mentioned in this answer is addressed), but it does enforce that the shared fields return their latest values. This is only true, however, if the other places in the code that work with the variables in question also use memory barriers (such as the same synchronized block around the update/reassignment operations). Still, this is not a solution for avoiding race conditions.
If you're interested, I recommend you to read this article. It is about memory barriers and locking in C# and the .NET framework, but the problem is similar for Java and the JVM (except for the behavior of volatile fields). It helped me a lot in understanding how threads, volatile fields and locks work in general.
One must take into account some serious considerations in this approach, that were mentioned in comments below this answer.
The memory barrier does not imply locking. The access will still be unsynchronized and subject to race conditions and other potential issues. The only benefit is that the thread is able to read the latest values of the shared memory fields without using locks. Some practices use similar approaches when the working thread only reads values and only cares that they are the most recent ones, while avoiding the overhead of locks - a use case could be a high-performance concurrent data-processing algorithm.
The approach above is unreliable. As per Holger's comment, the compiler could eliminate the lock statements when optimizing, as it could consider them unnecessary. This will also remove the memory barriers. The code then will not issue a lock, and it will not work as expected if a lock was meant to be used, or the purpose was to create a memory barrier.
The approach above is also unreliable because the runtime JVM can remove synchronization when it can prove the monitor will never be acquired by another thread, which is true of this construct if the code never synchronizes on another thread object which is not the current thread's thread object. So even if it works during testing on system A, it might fail under another JVM on system B. Even worse, the code could work for a while and then cease working as optimizations are applied.
The intentions of the code as it stays now are ambiguous, so one should use more explicit and expressive means to achieve its effect (see Marko Topolnik's comment for reference).
You are implementing a recursive mutex: the same thread can re-enter the synchronized block, but other threads cannot.

Difference between intrinsic locking, client-side locking & extrinsic locking?

What is the difference between intrinsic locking, client-side locking, and extrinsic locking?
What is the best way to create a thread-safe class?
Which kind of locking is preferred, and why?
I would highly recommend you to read "Java Concurrency In Practice" by Brian Goetz. It is an excellent book that will help you to understand all the concepts about concurrency!
About your questions, I am not sure I can answer them all, but I can give it a try. Most of the time, if the question is "what is the best way to lock", the answer is: it depends on the problem you are trying to solve.
Question 1:
What you try to compare here are not exactly comparable;
Java provides a built in mechanism for locking, the synchronized block. Every object can implicitly act as a lock for purposes of synchronization; these built-in locks are called intrinsic locks.
What is interesting with the term intrinsic is that the ownership of a lock is per thread and not per method invocation. That means that only one thread can hold the lock at a given time. What you might also find interesting is the term reentrancy, which allows the same thread to acquire the same lock again. Intrinsic locks are reentrant.
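To illustrate reentrancy, here is a small sketch (a hypothetical class) where a synchronized method calls another synchronized method on the same object; because intrinsic locks are reentrant, the second acquisition by the same thread succeeds instead of deadlocking:

```java
// Minimal reentrancy sketch: outer() holds the intrinsic lock on
// this and then calls inner(), which needs the same lock. Because
// intrinsic locks are reentrant, the call does not deadlock.
class ReentrancyDemo {
    synchronized int outer() {
        return inner() + 1; // re-acquires the lock we already hold
    }

    synchronized int inner() {
        return 41;
    }
}
```

If intrinsic locks were not reentrant, outer() would block forever waiting for itself to release the monitor.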
Client-side locking, if I understand what you mean, is something different. When a class is not thread safe, its clients need to take care of this themselves: they need to hold locks so they can make sure there are no race conditions.
Extrinsic locking means using explicit locks instead of the built-in synchronized block mechanism with its implicit locks. It is a more sophisticated way of locking, with several advantages (for example, you can opt for a fair ordering policy). A good starting point is the Java documentation about locks.
Question 2:
It depends :) The easiest for me is to try to keep everything immutable. When something is immutable, I don't need to care about thread safety anymore
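As a small sketch of that idea, an immutable value class (the class and its fields are illustrative) is inherently thread safe because its state can never change after construction:

```java
// Sketch of an immutable value class: final class, final fields,
// no setters. Instances can be shared freely between threads
// without any locking.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int x() { return x; }
    int y() { return y; }

    // "Mutation" returns a new instance instead of changing state.
    Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```

Since no thread can ever observe a Point mid-update, there is nothing to synchronize.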
Question 3:
I kind of answered it on your first question
Explicit - locking using the concurrent lock utilities, such as the Lock interface, e.g. in ConcurrentHashMap.
Intrinsic - locking using synchronized.
Client-side locking - classes like ConcurrentHashMap do not support client-side locking, because the get method does not use the map object's lock; so even if you synchronize on the ConcurrentHashMap instance, other threads can still access it.
Classes whose getters and setters all use the same explicit or intrinsic lock do support client-side locking, since client code can lock on that same object. Below is an example with Vector:
public static Object getLast(Vector list) {
    synchronized (list) {
        int lastIndex = list.size() - 1;
        return list.get(lastIndex);
    }
}
public static void deleteLast(Vector list) {
    synchronized (list) {
        int lastIndex = list.size() - 1;
        list.remove(lastIndex);
    }
}
Here are some links that discuss the different locking schemes:
Explicit versus Intrinsic
Client side locking and when to avoid it
I don't know that there is a "best" way to create a thread safe class, it depends on what you are trying to achieve exactly. Usually you don't have to make the whole class thread safe, only guard the resources that different threads all have access to, such as common lists etc.

What's the best way to use locks: explicit vs implicit?

Is there any good use case favouring implicit locks via the synchronized keyword?
Generally, the thing to consider is that synchronized methods lock on this (or on the Class object, if the method is static). This means that if another class has access to the instance with the synchronized methods, it could lock on the same object.
Therefore it is generally considered best practice to explicitly synchronize / lock on a private final field.
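That practice can be sketched as follows (a hypothetical counter class; the point is that the lock object is invisible to callers):

```java
// Sketch of the private lock object pattern: callers cannot
// synchronize on this object's internal monitor from the outside,
// so the locking protocol stays entirely under this class's control.
final class Counter {
    private final Object lock = new Object(); // no outside code can lock on this
    private int count;

    void increment() {
        synchronized (lock) {
            count++;
        }
    }

    int get() {
        synchronized (lock) {
            return count;
        }
    }
}
```

Compared to synchronized methods, no external code can accidentally (or maliciously) hold the lock that Counter's own methods need.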
If you don't need the tryLock, lockInterruptibly or any other specialised methods that are available through lock objects, then using synchronized is safer and easier to use: when using a Lock, you need to follow a specific unlocking pattern with a finally block and failure to do so could end up in a lock that never gets released.
If you do need those methods then you don't have a choice...

Why does the acquire() method in Semaphores not have to be synchronized?

I am getting into semaphores in Java and was reading this article: http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/Semaphore.html. The only thing I don't get is why the acquire() method is not used in a synchronized context. Looking at the example from the above website:
They create a semaphore:
private Semaphore semaphore = new Semaphore(100);
and get a permit just like this:
semaphore.acquire();
Now, wouldn't it be possible that two or more threads try to acquire() at the same time? If so, there would be a little problem with the count.
Or, does the semaphore itself handle the synchronization?
Or, does the semaphore itself handle the synchronization?
Yes that's basically it. Semaphores are thread safe as explained in the javadoc:
Memory consistency effects: Actions in a thread prior to calling a "release" method such as release() happen-before actions following a successful "acquire" method such as acquire() in another thread.
Most operations on the objects in the java.util.concurrent package are thread safe. More details are provided at the very bottom of the package javadoc.
Semaphores ought to be fast, and therefore use atomic concurrency primitives such as CAS (compare-and-swap), historically via the Unsafe class.
With these primitives, synchronization happens at a much lower level and monitors are not needed (lock-free synchronization).
In fact, the synchronization is performed by a loop that continuously retries the CAS until the expected value matches the value actually read.
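The idea of such a CAS loop can be sketched with an AtomicInteger holding the permit count. This is only an illustration of the technique, not how java.util.concurrent.Semaphore is actually implemented (the real one builds on AbstractQueuedSynchronizer and also supports blocking acquisition):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of lock-free permit accounting, similar in
// spirit to what Semaphore does internally. No monitor is used;
// correctness comes from the CAS retry loop alone.
final class CasPermits {
    private final AtomicInteger permits;

    CasPermits(int initial) {
        permits = new AtomicInteger(initial);
    }

    // Try to take one permit without any monitor lock.
    boolean tryAcquire() {
        for (;;) {
            int available = permits.get();
            if (available == 0) {
                return false; // no permits left
            }
            // CAS succeeds only if no other thread changed the count
            // between our read and this write.
            if (permits.compareAndSet(available, available - 1)) {
                return true;
            }
            // Another thread raced us: loop and retry.
        }
    }

    void release() {
        permits.incrementAndGet();
    }
}
```

If two threads race on the last permit, exactly one CAS succeeds; the loser re-reads the count and sees zero, so the count can never go negative.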
Synchronization is guaranteed by AbstractQueuedSynchronizer with CAS operations; see the javadoc here.
