Must a Java double-checked-locking singleton use the volatile keyword? [duplicate]

From Head First design patterns book, the singleton pattern with double checked locking has been implemented as below:
public class Singleton {
    private volatile static Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
I don't understand why volatile is being used. Doesn't using volatile defeat the purpose of double-checked locking, i.e. performance?

A good resource for understanding why volatile is needed comes from the JCIP book. Wikipedia has a decent explanation of that material as well.
The real problem is that Thread A may assign the memory address to instance before it has finished constructing the object. Thread B will see that non-null reference and try to use it, and fail because it is working with a partially constructed instance.

As @irreputable noted, volatile is not expensive. Even if it were expensive, consistency should take priority over performance.
There is one more clean, elegant way to write lazy singletons.
public final class Singleton {
    private Singleton() {}

    public static Singleton getInstance() {
        return LazyHolder.INSTANCE;
    }

    private static class LazyHolder {
        private static final Singleton INSTANCE = new Singleton();
    }
}
Source article: Initialization-on-demand holder idiom (Wikipedia)
In software engineering, the initialization-on-demand holder idiom is a lazy-loaded singleton. In all versions of Java, the idiom enables safe, highly concurrent lazy initialization with good performance.
Since the outer class Singleton has no static variables to initialize, its initialization completes trivially.
The nested static class LazyHolder is not initialized until the JVM determines that it must be used.
LazyHolder is initialized only when the static method getInstance is invoked on the class Singleton, and the first time this happens the JVM will load and initialize the LazyHolder class.
This solution is thread-safe without requiring special language constructs (i.e. volatile or synchronized).

Well, there is no double-checked locking "for performance" here. Without volatile it is simply a broken pattern.
Leaving emotions aside: volatile is there because without it, by the time a second thread passes the instance == null check, the first thread might not have finished constructing new Singleton() yet. Nothing guarantees that construction of the object happens-before the assignment to instance for any thread other than the one actually creating the object.
volatile, in turn, establishes a happens-before relation between the write and subsequent reads, and fixes the broken pattern.
If you are looking for performance, use holder inner static class instead.

Declaring the variable as volatile guarantees that all accesses to it actually read its current value from memory.
Without volatile, the compiler may optimize away memory accesses to the variable (for example, by keeping its value in a register), so only the first use of the variable reads the actual memory location holding it. This is a problem if the variable is modified by another thread between the first and second access: the first thread has only a copy of the first (pre-modification) value, so the second if statement tests a stale copy of the variable's value.

If you didn't have it, a second thread could get into the synchronized block after the first thread had set it, and your locally cached copy would still think it was null.
The first check is not for correctness (if it were, you would be right that it would be self-defeating) but rather an optimization.

A volatile read is not really expensive in itself.
You can design a test that calls getInstance() in a tight loop to observe the impact of a volatile read, but that test is not realistic: in such a situation the programmer would usually call getInstance() once and cache the instance for the duration of use.
Another implementation uses a final-field wrapper (see Wikipedia). It requires an additional read, which may become more expensive than the volatile version. The final version may be faster in a tight loop, but as argued above that test is moot.
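For reference, here is a minimal sketch of that final-field wrapper variant (class names are illustrative, not taken verbatim from the Wikipedia article): instead of volatile, the singleton is reached through a final field, whose initialization-safety guarantee makes the racy read safe.
public class FinalWrappedSingleton {
    private static FinalWrapper wrapper;               // deliberately NOT volatile

    private FinalWrappedSingleton() {}

    public static FinalWrappedSingleton getInstance() {
        FinalWrapper localCopy = wrapper;               // single racy read
        if (localCopy == null) {
            synchronized (FinalWrappedSingleton.class) {
                if (wrapper == null) {
                    wrapper = new FinalWrapper(new FinalWrappedSingleton());
                }
                localCopy = wrapper;                    // re-read under the lock
            }
        }
        return localCopy.instance;                      // the additional read discussed above
    }

    private static final class FinalWrapper {
        final FinalWrappedSingleton instance;           // final => safe to read even after a data race
        FinalWrapper(FinalWrappedSingleton instance) { this.instance = instance; }
    }
}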

The reason you need volatile is that volatile has two semantics in Java:
visibility of the variable between threads
prevention of re-ordering
So the problem with double-checked locking without volatile is that the statement
instance = new Singleton()
has three main steps, which can be seen in the bytecode produced by javap -c Singleton.class:
17: new #3 // class Singleton
20: dup
21: invokespecial #4 // Method "<init>":()V
Allocate memory for the object (not yet initialized)
Store the reference to that memory in the instance variable
Call the constructor to initialize the object
The CPU or the JIT compiler may perform the second step before the third at runtime, in which case another thread can observe a non-null instance that is not fully initialized yet.
Note that the monitorenter and monitorexit instructions in the listing below come from the synchronized block, not from volatile; what volatile adds is the guarantee that the write of the fully constructed object to instance happens-before any subsequent read of instance, which rules out the reordering described above:
10: monitorenter
11: getstatic #2 // Field instance:LSingleton;
14: ifnonnull 27
17: new #3 // class Singleton
20: dup
21: invokespecial #4 // Method "<init>":()V
24: putstatic #2 // Field instance:LSingleton;
27: aload_0
28: monitorexit
So volatile is required for this singleton.

Double-checked locking is a technique that prevents the creation of more than one instance of a singleton when getInstance() is called from multiple threads.
Pay attention to:
The singleton instance is checked twice before initialization.
The synchronized critical section is entered only after the first check fails, which improves performance.
The volatile keyword on the declaration of the instance member. It tells the compiler to always read the variable from, and write it to, main memory rather than a CPU-local cache. Because volatile guarantees a happens-before relationship, the write of the instance happens before any subsequent read of the instance variable.
Disadvantages
Since it requires the volatile keyword to work properly, it is not compatible with Java 1.4 and earlier, where an out-of-order write may allow the instance reference to be returned before the singleton constructor has finished executing.
There is a performance cost because accesses to a volatile variable bypass some caching optimizations.
The singleton instance is checked twice before initialization.
It's quite verbose and it makes the code difficult to read.
There are several realizations of the singleton pattern, each with its own advantages and disadvantages:
Eager loading singleton
Double-checked locking singleton
Initialization-on-demand holder idiom
The enum based singleton
A detailed description of each of them would be too verbose, so I just put a link to a good article - All you want to know about Singleton.
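As a small illustration, here is a minimal sketch of the enum-based variant from the list above (the class name is made up for the example); the JVM guarantees that the single enum constant is created lazily, exactly once, and in a thread-safe way:
public enum EnumSingleton {
    INSTANCE;

    public void doSomething() {
        // singleton behaviour goes here
    }
}
Usage: EnumSingleton.INSTANCE.doSomething();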

Related

How to create a session scoped thread safe object instance?

I want to have a resettable, thread-safe object instance scoped to a session within my program; an example of a session might be a logged-in user session.
I am currently doing something like this:
public final class ObjectFactory {
    private static volatile NativeObjectWrapper instance = null;

    private ObjectFactory() {}

    public static NativeObjectWrapper getInstance() {
        if (instance == null) {
            synchronized (ObjectFactory.class) {
                if (instance == null) {
                    instance = new NativeObjectWrapper(AuthData);
                }
            }
        }
        return instance;
    }

    public void reset() {
        synchronized (ObjectFactory.class) {
            instance = null;
        }
    }
}
I want to have the object created lazily, with the ability to reset it. Is the above approach thread-safe? If not, is there a common pattern to solve this?
Again, the scoped object here holds some inner data tied to the user session and should therefore be a new instance per user session.
Is the above approach threadsafe?
No, it is not.
Say we have two threads - A and B.
A calls getInstance(), passes the instance==null check, and then there's a context switch to B, which calls reset(). After B finishes executing reset(), A gets the context again and returns instance, which is now null.
if not is there a common pattern to solve this?
I don't remember seeing singletons with a reset method, so I'm not aware of any common pattern for this problem. However, the simplest solution would be to just remove the first if (instance == null) check in getInstance(). That would make your implementation thread-safe, since instance would then always be checked and modified within a synchronized block. In that scenario you could also remove the volatile modifier from instance, because it would only ever be accessed from within a synchronized block. A minimal sketch of that simpler approach is shown below.
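Here is that simpler approach as a sketch; NativeObjectWrapper is only a placeholder standing in for the class in the question, and the constructor argument from the question is omitted:
public final class ObjectFactory {
    static class NativeObjectWrapper {}            // placeholder for the question's real class

    // No volatile needed: the field is only read and written while holding the lock.
    private static NativeObjectWrapper instance = null;

    private ObjectFactory() {}

    public static NativeObjectWrapper getInstance() {
        synchronized (ObjectFactory.class) {
            if (instance == null) {
                instance = new NativeObjectWrapper();
            }
            return instance;
        }
    }

    public static void reset() {
        synchronized (ObjectFactory.class) {
            instance = null;
        }
    }
}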
There are more complex solutions I can think of, but I'd use them only if real-world profiling showed that you're spending too much time blocked on that synchronized block. Note that the JVM has some sophisticated ways of avoiding using "real" locks to minimize blocking.
One trickier approach could be to read the instance field just once:
public static Singleton getInstance() {
    Singleton toReturn = instance;
    if (toReturn == null) {
        synchronized (SingletonFactory.class) {
            if (instance == null) {
                instance = new Singleton();
            }
            toReturn = instance;   // re-read under the lock in either case
        }
    }
    return toReturn;
}
But this could result in returning a stale instance. For example, a thread could execute Singleton toReturn = instance and get a valid instance, then lose the CPU. At that point a thousand other threads could create and reset a thousand other instances before the original thread runs again, at which point it returns an old instance value. It's up to you whether such a case is acceptable.
Is the above approach threadsafe?
The answer depends on what you think "thread safe" means. There is nothing in your reset() method to prevent a thread that previously called getInstance() from continuing to use the old instance.
Is that "thread safe?"
Generally speaking, "thread safe" means that the actions of one thread can never cause other threads to see shared data in an inconsistent or invalid state. But what "inconsistent" or "invalid" mean depends on the structure of the shared data (i.e., on the design of the application.)
Another way of looking at it: if somebody tells you that a class is "thread safe," they are probably telling you that concurrent calls to the class's methods by multiple threads will not do anything that disagrees with the class documentation, and will not do anything that disagrees with how a reasonable programmer thinks the class should behave in cases where the documentation is not absolutely clear.
NOTE: That is a weaker definition of "thread safety" because it glosses over the fact that using thread-safe components to build a system does not guarantee that the system itself will be thread-safe.
Does everybody who uses your class clearly understand that no thread in the program may ever call reset() while any reference to the old singleton still exists? If so, then I would call that a weak design because it is very far from being "junior-programmer-safe," but I would grudgingly admit that, from a strict, language-lawyerly point of view, you could call your ObjectFactory class "thread safe."

Double checked locking without using volatile-keyword and without synchronizing the entire getInstance() method

Following is my singleton class, where I am using double-checked locking without the volatile keyword and without synchronizing the entire getInstance() method:
public class MySingleton {
    private static MySingleton mySingleton;

    public static MySingleton getInstance() {
        if (mySingleton == null) {
            synchronized (MySingleton.class) {
                if (mySingleton == null) {
                    MySingleton temp = new MySingleton();
                    mySingleton = temp;
                }
            }
        }
        return mySingleton;
    }
}
As far as I can tell, this is thread-safe. If anyone thinks it is not thread-safe, can you please elaborate on why?
Thanks.
I wasn't aware of this issue until I read all the comments. The problem is that the various optimization processes (compiler, Hot Spot, whatever) rewrite the code. Your "temp" solution could easily be removed. I find it hard to believe that a constructor could return a partial object, but if knowledgeable contributors are saying so, I'd trust their opinion.
Yes, but I am using a "temp" variable. Doesn't it solve the "partially-created-object" issue?
No. It does not.
Suppose some thread A calls getInstance(), and ends up creating a new instance and assigning the mySingleton variable. Then thread T comes along, calls getInstance() and sees that mySingleton is not null.
At this point, thread T has not used any synchronization. Without synchronization, the Java Language Specification (JLS) does not require that thread T see the assignments made by thread A in the same order that thread A made them.
Let's suppose that the singleton object has some member variables. Thread A obviously must have initialized those variables before it stored the reference into mySingleton. But the JLS allows thread T to see mySingleton != null and yet still see the member variables in their uninitialized state. On some multi-core platforms, it can actually happen that way.
Assigning the object reference to a local temp variable first doesn't change anything. In fact, as Steve11235 pointed out, the temp variable might not even actually exist in the byte codes or in the native instructions because either the Java compiler or the hot-spot compiler could completely optimize it away.
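To make the hazard concrete, here is a minimal sketch of the class above with an invented value field; without volatile (or some other form of safe publication), a thread that reads a non-null mySingleton is not guaranteed to see value == 42 and may still observe the default 0:
public class MySingleton {
    private int value;
    private static MySingleton mySingleton;       // not volatile => unsafe publication

    private MySingleton() {
        value = 42;                                // written by the constructing thread
    }

    public static MySingleton getInstance() {
        if (mySingleton == null) {
            synchronized (MySingleton.class) {
                if (mySingleton == null) {
                    MySingleton temp = new MySingleton();
                    mySingleton = temp;            // the temp variable does not help here
                }
            }
        }
        return mySingleton;                        // racy read by thread T
    }

    public int getValue() {
        return value;                              // thread T may see 0 instead of 42
    }
}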

Is it advisable to always use volatile variables with synchronized blocks/methods?

As I understand it, volatile helps with memory visibility and synchronized helps achieve execution control. volatile just guarantees that a thread reading the variable sees the latest value written to it.
Consider the following:
public class Singleton {
    private static volatile Singleton INSTANCE = null;

    private Singleton() {}

    public static Singleton getInstance() {
        if (INSTANCE == null) {
            synchronized (Integer.class) {
                if (INSTANCE == null) {
                    INSTANCE = new Singleton();
                }
            }
        }
        return INSTANCE;
    }
}
In the above piece of code, we use double-checked locking. It ensures that only one instance of Singleton is created, and the creating thread communicates that to the other threads as soon as possible; that is what the volatile keyword does. We need the synchronized block because the window between a thread reading INSTANCE as null and initializing the object could otherwise cause a race condition.
Now consider the following:
public class Singleton {
    private static Singleton INSTANCE = null;

    private Singleton() {}

    public static synchronized Singleton getInstance() {
        if (INSTANCE == null) {
            INSTANCE = new Singleton();
        }
        return INSTANCE;
    }
}
Say we have two threads, t1 and t2, trying to get the Singleton object. Thread t1 enters the getInstance() method first and creates the INSTANCE object. This newly created object should now be visible to all other threads. If the INSTANCE variable is not volatile, how do we make sure the object is not still sitting only in t1's working memory, invisible to other threads? How soon is the INSTANCE initialized by t1 visible to other threads?
Does this mean that it is advisable to always make variables volatile when used with synchronized?
In what scenarios would we not require the variable to be volatile?
P.S I have read other questions on StackOverflow but could not find the answer to my question. Please comment before down-voting.
My question arises from the explanation given here
I think what you're missing is this from JLS 17.4.4:
An unlock action on monitor m synchronizes-with all subsequent lock actions on m (where "subsequent" is defined according to the synchronization order).
Which is very similar to the bullet about volatile variables:
A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order).
Then in 17.4.5:
If an action x synchronizes-with a following action y, then we also have hb(x, y).
... where hb is the "happens-before" relation.
Then:
If one action happens-before another, then the first is visible to and ordered before the second.
The memory model is incredibly complicated and I don't claim to be an expert, but my understanding is that the implication of the quoted parts is that the second pattern you've shown is safe without the variable being volatile - and indeed any variable which is only modified and read within synchronization blocks for the same monitor is safe without being volatile. The more interesting aspect (to me) is what happens to the variables within the object that the variable's value refers to. If Singleton isn't immutable, you've still potentially got problems there - but that's one step removed.
To put it more concretely, if two threads call getInstance() when INSTANCE is null, one of those threads will lock the monitor first. The write action of a non-null reference to INSTANCE happens-before the unlock operation, and that unlock operation happens-before the lock operation of the other thread. The lock operation happens-before the read of the INSTANCE variable, therefore the write happens-before the read... at which point, we are guaranteed that the write is visible to the reading thread.
This explanation of what is happening here is entirely wrong, as I misunderstood the Java Memory Model. See Jon Skeet's answer.
Safe lazy initialization
The action you are attempting in this case is "lazy-initialization", and that particular pattern is useful for instances, but sub-optimal for static variables. For static variables, the lazy initialization holder class idiom is preferred.
The following quote and code block are copied directly from Item 71 of Effective Java (2nd Edition), by Josh Bloch:
Because there is no locking if the field is already initialized, it is critical that the field be declared volatile.
// Double-check idiom for lazy initialization of instance fields
private volatile FieldType field;

FieldType getField() {
    FieldType result = field;
    if (result == null) {              // First check (no locking)
        synchronized (this) {
            result = field;
            if (result == null)        // Second check (with locking)
                field = result = computeFieldValue();
        }
    }
    return result;
}
In one of his talks, he recommended to copy this structure exactly when performing lazy initialization for instance fields, as it is optimal in such situations, and it is very easy to break it by changing it.
What is actually happening?
EDIT: This section is incorrect.
The volatile keyword means that all read and write operations for the variable are atomic; that is, they happen as one single step from the perspective of anything else. Additionally, volatile variables are always read from and written to main memory, not the processor cache. The combination of these two properties guarantees that, as soon as a volatile variable is modified on one thread, subsequent reads on another thread will see the updated value. This guarantee is not present for non-volatile variables.
The double-check idiom does not guarantee that only one instance is created. Rather, it is so that, once the variable is initialized, future calls to getInstance() do not need to enter a synchronized block, which is expensive.
The guarantee that it is not initialized twice is made by the fact that (a) it is a volatile field, and (b) it is checked (again) inside of the synchronized block. The outer check helps efficiency; the inner check guarantees single initialization.
I highly recommend reading Item 71 of Effective Java (2nd Edition) for a more complete explanation. I also recommend the book in general as being fantastic.
UPDATE:
The local result variable reduces the number of accesses to the volatile field, which improves performance. If the local variable were left out and all reads and writes went directly to the volatile field, the result would be the same, but the method would take slightly longer.

Java Thread Safety of Initialized Objects

Consider the following class:
public class MyClass
{
    private MyObject obj;

    public MyClass()
    {
        obj = new MyObject();
    }

    public void methodCalledByOtherThreads()
    {
        obj.doStuff();
    }
}
Since obj was created on one thread and accessed from another, could obj be null when methodCalledByOtherThreads is called? If so, would declaring obj as volatile be the best way to fix this issue? Would declaring obj as final make any difference?
Edit:
For clarity, I think my main question is:
Can other threads see that obj has been initialized by some main thread or could obj be stale (null)?
For methodCalledByOtherThreads to be called by another thread and cause problems, that thread would have to get a reference to a MyClass object whose obj field is not initialized, i.e. where the constructor has not yet returned.
This would be possible if you leaked the this reference from the constructor. For example
public MyClass()
{
    SomeClass.leak(this);
    obj = new MyObject();
}
If the SomeClass.leak() method starts a separate thread that calls methodCalledByOtherThreads() on the this reference, then you would have problems, but this is true regardless of the volatile.
Since you don't have what I'm describing above, your code is fine.
It depends on whether the reference is published "unsafely". A reference is "published" by being written to a shared variable; another thread reads the variable to get the reference. If there is no relationship of happens-before(write, read), the publication is called unsafe. An example of unsafe publication is through a non-volatile static field.
@chrylis's interpretation of "unsafe publication" is not accurate. Leaking this before constructor exit is orthogonal to the concept of unsafe publication.
Through unsafe publication, another thread may observe the object in an uncertain state (hence the name); in your case, the field obj may appear to be null to another thread. Unless obj is final: then it cannot appear to be null even if the host object is published unsafely.
This is all too technical and it requires further readings to understand. The good news is, you don't need to master "unsafe publication", because it is a discouraged practice anyway. The best practice is simply: never do unsafe publication; i.e. never do data race; i.e. always read/write shared data through proper synchronization, by using synchronized, volatile or java.util.concurrent.
If we always avoid unsafe publication, do we still need final fields? The answer is no. Then why are some objects (e.g. String) designed to be "thread-safe immutable" by using final fields? Because it's assumed that they can be used in malicious code that tries to create uncertain state through deliberate unsafe publication. I think this is an overblown concern. It doesn't make much sense in server environments - if an application embeds malicious code, the server is compromised, period. It probably makes a bit of sense in an applet environment where the JVM runs untrusted code from unknown sources - even then, this is an improbable attack vector; there is no precedent for this kind of attack, and there are plenty of other, more easily exploitable security holes.
This code is fine because the reference to the instance of MyClass can't be visible to any other threads before the constructor returns.
Specifically, the happens-before relation requires that the visible effects of actions occur in the same order as they're listed in the program code, so that in the thread where the MyClass is constructed, obj must be definitely assigned before the constructor returns, and the instantiating thread goes directly from the state of not having a reference to the MyClass object to having a reference to a fully-constructed MyClass object.
That thread can then pass a reference to that object to another thread, but all of the construction will have transitively happened-before the second thread can call any methods on it. This might happen through the constructing thread's launching the second thread, a synchronized method, a volatile field, or the other concurrency mechanisms, but all of them will ensure that all of the actions that took place in the instantiating thread are finished before the memory barrier is passed.
Note that if a reference to this gets passed out of the class inside the constructor somewhere, that reference might go floating around and get used before the constructor is finished. That's what's known as unsafe publishing of the object, but code such as yours that doesn't call non-final methods from the constructor (or directly pass out references to this) is fine.
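A minimal sketch of the safe hand-off described above (MyObject is stubbed out for the example); the Thread.start() call provides the happens-before edge, so the worker thread is guaranteed to see a fully initialized obj:
public class MyClass {
    static class MyObject {                        // stub for the question's MyObject
        void doStuff() { System.out.println("doing stuff"); }
    }

    private MyObject obj;

    public MyClass() {
        obj = new MyObject();
    }

    public void methodCalledByOtherThreads() {
        obj.doStuff();
    }

    public static void main(String[] args) {
        MyClass shared = new MyClass();                           // fully constructed here
        new Thread(shared::methodCalledByOtherThreads).start();  // safe publication via start()
    }
}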
Your other thread could see a null object. A volatile object could possibly help, but an explicit lock mechanism (or a Builder) would likely be a better solution.
Have a look at Java Concurrency in Practice - Sample 14.12
This class (taken as is) is NOT thread-safe. In short: instruction reordering is allowed in Java (see Instruction reordering & happens-before relationship in Java), so when your code instantiates MyClass, under some circumstances you may get the following sequence of instructions:
Allocate memory for the new instance of MyClass;
Return the link to this block of memory;
The link to this not-yet-fully-initialized MyClass becomes visible to other threads, which can call methodCalledByOtherThreads() and get a NullPointerException;
Initialize the internals of MyClass.
In order to prevent this and make MyClass truly thread-safe, you have to mark the obj field either final or volatile. In that case the Java memory model (from Java 5 on) guarantees that the reference to the allocated block of memory becomes visible to other threads only after all internals have been initialized.
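A minimal sketch of that fix (MyObject stubbed out for the example), using final; under the Java memory model's initialization-safety guarantee, even an unsafely published MyClass cannot expose a partially constructed obj through a final field:
public class MyClass {
    static class MyObject { void doStuff() {} }    // stub for the question's MyObject

    private final MyObject obj;                    // 'final' is the key change (volatile is the alternative)

    public MyClass() {
        obj = new MyObject();
    }

    public void methodCalledByOtherThreads() {
        obj.doStuff();
    }
}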
For more details I would strongly recommend the excellent book "Java Concurrency in Practice". Exactly your case is described on pages 50-51 (section 3.5.1). I would even say you just can't write correct multithreaded code without reading that book! :)
The originally accepted answer by @Sotirios Delimanolis is wrong. @ZhongYu's answer is correct.
The concern here is visibility. If MyClass is published unsafely, anything could happen.
Someone in the comment asked for evidence - one can check Listing 3.15 in the book Java Concurrency in Practice:
public class Holder {
    private int n;

    // Initialize in thread A
    public Holder(int n) { this.n = n; }

    // Called in thread B
    public void assertSanity() {
        if (n != n) throw new AssertionError("This statement is false.");
    }
}
Someone came up with an example to verify this piece of code:
coding a proof for potential concurrency issue
As to the specific example of this post:
public class MyClass {
    private MyObject obj;

    // Initialize in thread A
    public MyClass() {
        obj = new MyObject();
    }

    // Called in thread B
    public void methodCalledByOtherThreads() {
        obj.doStuff();
    }
}
If MyClass is initialized in thread A, there is no guarantee that thread B will see that initialization (the change might still sit in the cache of the CPU that thread A runs on and not yet have propagated to main memory).
Just as @ZhongYu has pointed out, because the write and the read happen on two independent threads, there is no happens-before(write, read) relation.
To fix this, as the original author mentioned, we can declare private MyObject obj as volatile, which ensures that the reference itself becomes visible to other threads in a timely manner
(https://www.logicbig.com/tutorials/core-java-tutorial/java-multi-threading/volatile-ref-object.html).
