Is volatile functionality the same as a synchronized getter? [duplicate]

I read some Java code and found these methods:
synchronized void setConnected(boolean connected){
this.connected = connected;
}
synchronized boolean isConnected(){
return connected;
}
I wonder whether synchronization makes any sense here, or whether the author simply didn't understand what the synchronized keyword is for.
I'd assume that synchronized is useless here. Or am I mistaken?

The keyword synchronized is one way of ensuring thread-safety. Beware: there's (way) more to thread-safety than deadlocks, or missing updates because of two threads incrementing an int without synchronization.
Consider the following class:
class Connection {
private boolean connected;
synchronized void setConnected(boolean connected){
this.connected = connected;
}
synchronized boolean isConnected(){
return connected;
}
}
If multiple threads share an instance of Connection and one thread calls setConnected(true), then without synchronized it is possible that other threads keep seeing isConnected() == false. The synchronized keyword guarantees that all threads see the current value of the field.
In more technical terms, the synchronized keyword ensures a memory barrier (hint: google that).
In more details: every write made before releasing a monitor (ie, before leaving a synchronized block) is guaranteed to be seen by every read made after acquiring the same monitor (ie, after entering a block synchronizing on the same object). In Java, there's something called happens-before (hint: google that), which is not as trivial as "I wrote the code in this order, so things get executed in this order". Using synchronized is a way to establish a happens-before relationship and guarantee that threads see the memory as you would expect them to see.
Another way to achieve the same guarantees, in this case, would be to eliminate the synchronized keyword and mark the field volatile. The guarantees provided by volatile are as follows: all writes made by a thread before a volatile write are guaranteed to be visible to a thread after a subsequent volatile read of the same field.
As a final note, in this particular case it might be better to use a volatile field instead of synchronized accessors, because the two approaches provide the same guarantees and the volatile-field approach allows simultaneous accesses to the field from different threads (which might improve performance if the synchronized version has too much contention).

Synchronization is needed here to prevent memory consistency errors; see http://docs.oracle.com/javase/tutorial/essential/concurrency/memconsist.html. In this concrete case, though, volatile would be a much more efficient solution:
private volatile boolean connected;
void setConnected(boolean connected){
this.connected = connected;
}
boolean isConnected(){
return connected;
}

The author has probably designed the code with a multi-threaded approach in mind: the methods are synchronized so that no two threads can execute the synchronized code at the same time on the same object instance.

Related

Doesn't "volatile" ensure that other threads see a consistent value for the variable? [duplicate]

What is the difference between atomic / volatile / synchronized?

How do atomic / volatile / synchronized work internally?
What is the difference between the following code blocks?
Code 1
private int counter;
public int getNextUniqueIndex() {
return counter++;
}
Code 2
private AtomicInteger counter;
public int getNextUniqueIndex() {
return counter.getAndIncrement();
}
Code 3
private volatile int counter;
public int getNextUniqueIndex() {
return counter++;
}
Does volatile work in the following way? Is
volatile int i = 0;
void incIBy5() {
i += 5;
}
equivalent to
Integer i = 5;
void incIBy5() {
int temp;
synchronized(i) { temp = i }
synchronized(i) { i = temp + 5 }
}
I think that two threads cannot enter a synchronized block at the same time... am I right? If this is true then how does atomic.incrementAndGet() work without synchronized? And is it thread-safe?
And what is the difference between internal reading and writing to volatile variables / atomic variables? I read in some article that the thread has a local copy of the variables - what is that?
You are specifically asking about how they internally work, so here you are:
No synchronization
private int counter;
public int getNextUniqueIndex() {
return counter++;
}
It basically reads the value from memory, increments it and puts it back into memory. This works in a single thread, but nowadays, in the era of multi-core, multi-CPU, multi-level caches, it won't work correctly. First of all it introduces a race condition (several threads can read the same value at the same time), but there are also visibility problems. The value might only be stored in "local" CPU memory (some cache) and not be visible to other CPUs/cores (and thus threads). This is why many refer to a thread having a local copy of a variable. It is very unsafe. Consider this popular but broken thread-stopping code:
private boolean stopped;
public void run() {
while(!stopped) {
//do some work
}
}
public void pleaseStop() {
stopped = true;
}
Add volatile to the stopped variable and it works fine - if any other thread modifies stopped via the pleaseStop() method, you are guaranteed to see that change immediately in the working thread's while(!stopped) loop. BTW this is not a good way to interrupt a thread either; see: How to stop a thread that is running forever without any use and Stopping a specific java thread.
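For reference, a minimal runnable sketch of the volatile-based stop flag described above (the Worker class and the main() harness are mine, not from the original answer):
class Worker implements Runnable {
    private volatile boolean stopped; // volatile: the write in pleaseStop() is visible to the loop

    @Override
    public void run() {
        while (!stopped) {
            // do some work
        }
    }

    void pleaseStop() {
        stopped = true;
    }

    public static void main(String[] args) throws InterruptedException {
        Worker worker = new Worker();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(100);   // let the worker spin for a moment
        worker.pleaseStop(); // the loop is guaranteed to observe stopped == true and exit
        t.join();
    }
}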
AtomicInteger
private AtomicInteger counter = new AtomicInteger();
public int getNextUniqueIndex() {
return counter.getAndIncrement();
}
The AtomicInteger class uses CAS (compare-and-swap) low-level CPU operations (no synchronization needed!). They allow you to modify a particular variable only if its present value is equal to an expected value (and the operation reports whether it succeeded). So when you execute getAndIncrement() it actually runs in a loop (simplified real implementation):
int current;
do {
current = get();
} while(!compareAndSet(current, current + 1));
So basically: read; try to store the incremented value; if that is not successful (the value is no longer equal to current), read and try again. compareAndSet() is implemented in native code (assembly).
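As a quick usage sketch (the thread and iteration counts are arbitrary choices of mine), two threads calling getAndIncrement() concurrently never lose an update:
import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounterDemo {
    private static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int n = 0; n < 100_000; n++) {
                counter.getAndIncrement(); // CAS retry loop under the hood, no locking
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get()); // always prints 200000
    }
}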
volatile without synchronization
private volatile int counter;
public int getNextUniqueIndex() {
return counter++;
}
This code is not correct. It fixes the visibility issue (volatile makes sure other threads can see the change made to counter) but still has a race condition. This has been explained multiple times: pre/post-increment is not atomic.
The only side effect of volatile is "flushing" caches so that all other parties see the freshest version of the data. This is too costly for most situations; that is why volatile is not the default.
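To see that race concretely, here is a small sketch (again, the thread and iteration counts are mine); with a plain volatile counter the two increments can interleave and updates get lost:
class VolatileCounterDemo {
    private static volatile int counter; // visible to all threads, but ++ is still read-modify-write

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int n = 0; n < 100_000; n++) {
                counter++; // not atomic: two threads may read the same value and write the same result
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter); // typically less than 200000
    }
}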
volatile without synchronization (2)
volatile int i = 0;
void incIBy5() {
i += 5;
}
The same problem as above, but even worse because i is not private. The race condition is still present. Why is it a problem? If, say, two threads run this code simultaneously, the total increment might be 5 or 10 - one of the updates can be lost. However, you are guaranteed to see the change once it has been made.
Multiple independent synchronized
void incIBy5() {
int temp;
synchronized(i) { temp = i }
synchronized(i) { i = temp + 5 }
}
Surprise, this code is incorrect as well. In fact, it is completely wrong. First of all, you are synchronizing on i, which is about to be changed (moreover, every assignment to i autoboxes a value, so you end up synchronizing on a different Integer object each time...). Completely flawed. You could also write:
synchronized(new Object()) {
//thread-safe, SRSLy?
}
No two threads can enter a block synchronized on the same lock object at the same time. In this case (and similarly in your code) the lock object changes upon every execution, so synchronized effectively has no effect.
Even if you had used a final variable (or this) for synchronization, the code would still be incorrect. Two threads can first both read i into temp (ending up with the same value locally in temp), then the first assigns a new value to i (say, from 1 to 6) and the other one does the same thing (also from 1 to 6), losing one of the updates.
The synchronization must span from the read to the assignment of the new value. Your first synchronization has no effect (reading an int is atomic), and neither does the second. In my opinion, these are the correct forms:
synchronized void incIBy5() {
i += 5;
}
void incIBy5() {
synchronized(this) {
i += 5;
}
}
void incIBy5() {
synchronized(this) {
int temp = i;
i = temp + 5;
}
}
Declaring a variable as volatile means that modifying its value immediately affects the actual memory storage for the variable. The compiler cannot optimize away any references made to the variable. This guarantees that when one thread modifies the variable, all other threads see the new value immediately. (This is not guaranteed for non-volatile variables.)
Declaring an atomic variable guarantees that operations made on the variable occur in an atomic fashion, i.e., that all of the substeps of the operation are completed within the thread in which they are executed and are not interrupted by other threads. For example, an increment-and-test operation requires the variable to be incremented and then compared to another value; an atomic operation guarantees that both of these steps will be completed as if they were a single indivisible/uninterruptible operation.
Synchronizing all accesses to a variable allows only a single thread at a time to access the variable, and forces all other threads to wait for that accessing thread to release its access to the variable.
Synchronized access is similar to atomic access, but the atomic operations are generally implemented at a lower level of programming. Also, it is entirely possible to synchronize only some accesses to a variable and allow other accesses to be unsynchronized (e.g., synchronize all writes to a variable but none of the reads from it).
Atomicity, synchronization, and volatility are independent attributes, but are typically used in combination to enforce proper thread cooperation for accessing variables.
Addendum (April 2016)
Synchronized access to a variable is usually implemented using a monitor or semaphore. These are low-level mutex (mutual exclusion) mechanisms that allow a thread to acquire control of a variable or block of code exclusively, forcing all other threads to wait if they also attempt to acquire the same mutex. Once the owning thread releases the mutex, another thread can acquire the mutex in turn.
Addendum (July 2016)
Synchronization occurs on an object. This means that calling a synchronized method of a class will lock the this object of the call. Static synchronized methods will lock the Class object itself.
Likewise, entering a synchronized block locks the object named in the synchronized(...) expression; synchronized(this) locks the current instance.
This means that a synchronized method (or block) can be executing in multiple threads at the same time if they are locking on different objects, but only one thread can execute a synchronized method (or block) at a time for any given single object.
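A small sketch of that distinction (the class and method names are mine):
class LockScopes {
    private int instanceCount;
    private static int classCount;

    // locks on this particular LockScopes instance
    synchronized void incInstance() {
        instanceCount++;
    }

    // locks on LockScopes.class, independently of any instance lock
    static synchronized void incClass() {
        classCount++;
    }

    // equivalent block form of the instance-locked method
    void incInstanceBlock() {
        synchronized (this) {
            instanceCount++;
        }
    }
}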
volatile:
volatile is a keyword. volatile forces all threads to read the latest value of the variable from main memory instead of from a cache. No locking is required to access volatile variables. All threads can access a volatile variable's value at the same time.
Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable.
This means that changes to a volatile variable are always visible to other threads. What's more, it also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up to the change.
When to use: One thread modifies the data and other threads have to read the latest value of that data. The other threads will take some action based on it, but they won't update the data.
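A minimal sketch of that "one writer, many readers" pattern (the field and method names are mine); because of the happens-before rule quoted above, a reader that sees ready == true is also guaranteed to see data == 42:
class Publisher {
    private int data;               // plain field, written before the volatile write
    private volatile boolean ready; // the volatile flag

    void writer() {    // called by the single writing thread
        data = 42;     // this write happens-before the volatile write below
        ready = true;  // publishing write
    }

    void reader() {    // called by any reading thread
        if (ready) {                  // volatile read
            System.out.println(data); // guaranteed to print 42, never a stale value
        }
    }
}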
AtomicXXX:
AtomicXXX classes support lock-free, thread-safe programming on single variables. These AtomicXXX classes (like AtomicInteger) resolve the memory inconsistency errors and lost updates that can occur when volatile variables are modified from multiple threads.
When to use: Multiple threads can read and modify data.
synchronized:
synchronized is a keyword used to guard a method or code block. Making a method synchronized has two effects:
First, it is not possible for two invocations of synchronized methods on the same object to interleave. When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
When to use: Multiple threads can read and modify the data, and your business logic not only updates the data but also executes compound operations that need to be atomic.
AtomicXXX is the equivalent of volatile + synchronized even though the implementation is different. AtomicXXX classes combine volatile fields with compareAndSet methods but do not use synchronization.
Related SE questions:
Difference between volatile and synchronized in Java
Volatile boolean vs AtomicBoolean
Good articles to read (the above content is taken from these documentation pages):
https://docs.oracle.com/javase/tutorial/essential/concurrency/sync.html
https://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html
https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/package-summary.html
I know that two threads cannot enter a synchronized block at the same time
Two threads cannot enter a block synchronized on the same object at the same time. This means that two threads can enter the same block when they lock on different objects. This confusion can lead to code like this:
private Integer i = 0;
synchronized(i) {
i++;
}
This will not behave as expected as it could be locking on a different object each time.
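A common fix (sketched here; the surrounding class is assumed) is to lock on a dedicated final object, so every thread always locks the same thing:
private int i = 0;
private final Object lock = new Object(); // never reassigned, so all threads share this one lock

void increment() {
    synchronized (lock) {
        i++; // the read-modify-write is now atomic with respect to other callers of increment()
    }
}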
if this is true then how does atomic.incrementAndGet() work without synchronized? And is it thread-safe?
Yes. It doesn't use locking to achieve thread safety.
If you want to know how they work in more detail, you can read the code for them.
And what is the difference between internal reading and writing to volatile variables / atomic variables?
The atomic classes use volatile fields. There is no difference in the field itself. The difference is in the operations performed: the atomic classes use compare-and-swap (CAS) operations.
I read in some article that the thread has a local copy of the variables - what is that?
I can only assume that it is referring to the fact that each CPU has its own cached view of memory, which can be different from every other CPU's. To ensure that your CPU has a consistent view of the data, you need to use thread-safety techniques.
This is only an issue when memory is shared and at least one thread updates it.
Synchronized Vs Atomic Vs Volatile:
volatile and the Atomic classes apply only to variables, while synchronized applies to methods and blocks.
volatile ensures visibility but not atomicity, while the other two ensure both visibility and atomicity.
A volatile variable is always read from main memory and is fast to access, but volatile alone cannot give us thread safety or mutual exclusion without the synchronized keyword.
synchronized comes as a synchronized block or a synchronized method, which the other two do not. With the synchronized keyword we can make multiple lines of code thread-safe as a unit, which we can't achieve with the other two.
synchronized can lock on the same object or on different objects, which the other two can't.
Please correct me if I missed anything.
volatile + synchronized is a foolproof way to make an operation (statement) fully atomic when it compiles down to multiple CPU instructions.
Say, for example: volatile int i = 2; i++, which is nothing but i = i + 1, and which leaves i with the value 3 in memory after the statement executes.
This involves reading the existing value of i from memory (which is 2), loading it into a CPU register, incrementing it (2 + 1 = 3 in the register), and then writing the incremented value back to memory. These operations are not atomic even though i is volatile. i being volatile guarantees only that a SINGLE read or write from memory is atomic, not a sequence of them. Hence we also need synchronized around i++ to keep it a foolproof atomic statement. Remember that a single statement can involve multiple operations.
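A minimal sketch of that combination (the class and method names are mine):
class VolatileCounter {
    private volatile int i = 2; // volatile: individual reads and writes of i are visible to all threads

    synchronized void increment() {
        i++; // synchronized: the whole read-increment-write sequence runs as one atomic step
    }
}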
Hope the explanation is clear enough.
The Java volatile modifier is an example of a special mechanism to guarantee that communication happens between threads. When one thread writes to a volatile variable, and another thread sees that write, the first thread is telling the second about all of the contents of memory up until it performed the write to that volatile variable.
Atomic operations are performed in a single unit of task without interference from other operations. Atomic operations are necessity in multi-threaded environment to avoid data inconsistency.

Do atomic variables guarantee memory visibility?

Small question about memory visibility.
CodeSample1:
class CustomLock {
private boolean locked = false;
public boolean lock() {
if(!locked) {
locked = true;
return true;
}
return false;
}
}
This code is prone to bugs in a multi-threaded environment: first because of the "if-then-act", which is not atomic, and second because of potential memory visibility issues - for example, thread A sets the field to true, but thread B, which later wishes to read the field's value, might not see that and still see the value false.
The simplest solution is to use the synchronized keyword, as in CodeSample2.
CodeSample2:
class CustomLock {
private boolean locked = false;
public synchronized boolean lock() {
if(!locked) {
locked = true;
return true;
}
return false;
}
}
Now, what if I wish to use an atomic variable - for this example an AtomicBoolean (the question applies to all atomic variables)?
CodeSample3:
public static class CustomLock {
private AtomicBoolean locked = new AtomicBoolean(false);
public boolean lock() {
return locked.compareAndSet(false, true);
}
}
Performance considerations aside, we can see that we've now implemented logic similar to the "if-then-act" from CodeSample1, using AtomicBoolean.
It doesn't really matter what the code does logically. The question I have is: if two threads invoke the lock() method in CodeSample3 at about the same time, then while it's clear that any write to the field will now be done atomically, does the use of AtomicBoolean also guarantee memory visibility?
Sorry for the long story; I just wanted to make sure I'm being as clear as possible. Thanks guys...
Yes, according to the javadocs it guarantees:
compareAndSet and all other read-and-update operations such as getAndIncrement have the memory effects of both reading and writing volatile variables.
the question I have is: if two threads invoke the lock() method in CodeSample3 at about the same time, then while it's clear that any write to the field will now be done atomically, does the use of AtomicBoolean also guarantee memory visibility?
For AtomicBoolean to handle multiple operations from different threads at the same time, it has to guarantee memory visibility. It can make that guarantee because it wraps a volatile field. It is the language semantics of volatile that ensure the necessary memory barriers are crossed, so that multiple threads see the most up-to-date value and any updates are published to main memory.
Btw, your lock(...) method should really be called tryLock(...), because it might not get the lock.
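Following that naming suggestion, a minimal sketch (the unlock() counterpart is my addition, not part of the question):
import java.util.concurrent.atomic.AtomicBoolean;

class CustomLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    // "try" because it may fail to acquire the lock, in which case it returns false
    boolean tryLock() {
        return locked.compareAndSet(false, true);
    }

    // releases the lock so it can be acquired again
    void unlock() {
        locked.set(false);
    }
}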

Do we need to synchronize writes if we are synchronizing reads?

I have a few doubts about synchronized blocks.
Before my questions, I would like to share the answer from another related post (Link for Answer to related question). I quote Peter Lawrey from that answer:
synchronized ensures you have a consistent view of the data. This means you will read the latest value and other caches will get the latest value. Caches are smart enough to talk to each other via a special bus (not something required by the JLS, but allowed). This bus means that it doesn't have to touch main memory to get a consistent view.
If you only use synchronized, you wouldn't need volatile. Volatile is useful if you have a very simple operation for which synchronized would be overkill.
With reference to the above, I have three questions:
Q1. Suppose that in a multi-threaded application there is an object or a primitive instance field that is only read in a synchronized block (writes may happen in some other method without synchronization), and the synchronized block locks on some other object. Does declaring the field volatile (even if it is read inside the synchronized block only) make any sense?
Q2. I understand that the state of the object on which synchronization is done is consistent. I am not sure about the state of other objects and the primitive fields read inside the synchronized block. Suppose changes are made without obtaining the lock but reads are done while holding the lock. Will the state of all objects and the values of all primitive fields read inside the synchronized block always be consistent?
Q3. [Update]: Will all fields read in a synchronized block be read from main memory, regardless of what we lock on? [answered by CKing]
I have prepared reference code for my questions above.
public class Test {
private SomeClass someObj;
private boolean isSomeFlag;
private Object lock = new Object();
public SomeClass getObject() {
return someObj;
}
public void setObject(SomeClass someObj) {
this.someObj = someObj;
}
public void executeSomeProcess(){
//some process...
}
// synchronized block is on a private someObj lock.
// inside the lock method does the value of isSomeFlag and state of someObj remain consistent?
public void someMethod(){
synchronized (lock) {
while(isSomeFlag){
executeSomeProcess();
}
if(someObj.isLogicToBePerformed()){
someObj.performSomeLogic();
}
}
}
// this is method without synchronization.
public void setSomeFlag(boolean isSomeFlag) {
this.isSomeFlag = isSomeFlag;
}
}
The first thing you need to understand is that there is a subtle difference between the scenario being discussed in the linked answer and the scenario you are talking about. You speak about modifying a value without synchronization whereas all values are modified within a synchronized context in the linked answer. With this understanding in mind, let's address your questions :
Q1. Suppose that in a multi-threaded application there is an object or a primitive instance field that is only read in a synchronized block (writes may happen in some other method without synchronization), and the synchronized block locks on some other object. Does declaring the field volatile (even if it is read inside the synchronized block only) make any sense?
Yes it does make sense to declare the field as volatile. Since the write is not happening in a synchronized context, there is no guarantee that the writing thread will flush the newly updated value to main memory. The reading thread may still see inconsistent values because of this.
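Applied to the reference code from the question, that suggestion is a one-line change (a sketch of the relevant part of the Test class):
// declaring the flag volatile means the unsynchronized write in setSomeFlag()
// is guaranteed to become visible to the reader inside someMethod()
private volatile boolean isSomeFlag;

public void setSomeFlag(boolean isSomeFlag) { // still unsynchronized, as in the question
    this.isSomeFlag = isSomeFlag;
}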
Suppose changes are made without obtaining the lock but reads are done while holding the lock. Will the state of all objects and the values of all primitive fields read inside the synchronized block always be consistent?
The answer is still no. The reasoning is the same as above.
Bottom line: modifying values outside a synchronized context will not ensure that those values get flushed to main memory (the reader thread may enter the synchronized block before the writer thread does). Threads that read these values in a synchronized context may still end up reading older values, even if they do get them from main memory.
Note that this question talks about primitives, so it is also important to understand that Java provides out-of-thin-air safety for 32-bit primitives (all primitives except long and double), which means that you are at least assured of seeing a valid value (if not a consistent one).
All synchronized does is acquire the lock of the object it is synchronized on. If the lock is already taken, it will wait for its release. It does not in any way assert that that object's internal fields won't change. For that, there is volatile.
When you synchronize on an object monitor A, it is guaranteed that another thread synchronizing on the same monitor A afterwards will see any changes made by the first thread to any object. That's the visibility guarantee provided by synchronized, nothing more.
A volatile variable guarantees visibility (for the variable only, a volatile HashMap doesn't mean the contents of the map would be visible) between threads regardless of any synchronized blocks.

ConcurrentHashMap of Future and double-check locking

Given:
A lazily initialized singleton class implemented with the double-checked locking pattern, with all the relevant volatile and synchronized stuff in getInstance. This singleton launches asynchronous operations via an ExecutorService,
There are seven types of tasks, each one identified by a unique key,
When a task is launched, it is stored in a cache based on a ConcurrentHashMap,
When a client asks for a task: if the task in the cache is done, a new one is launched and cached; if it is still running, it is retrieved from the cache and passed to the client.
Here is an excerpt of the code:
private static volatile TaskLauncher instance;
private ExecutorService threadPool;
private ConcurrentHashMap<String, Future<Object>> tasksCache;
private TaskLauncher() {
threadPool = Executors.newFixedThreadPool(7);
tasksCache = new ConcurrentHashMap<String, Future<Object>>();
}
public static TaskLauncher getInstance() {
if (instance == null) {
synchronized (TaskLauncher.class) {
if (instance == null) {
instance = new TaskLauncher();
}
}
}
return instance;
}
public Future<Object> getTask(String key) {
Future<Object> expectedTask = tasksCache.get(key);
if (expectedTask == null || expectedTask.isDone()) {
synchronized (tasksCache) {
if (expectedTask == null || expectedTask.isDone()) {
// Make some stuff to create a new task
expectedTask = [...];
threadPool.execute(expectedTask);
tasksCache.put(key, expectedTask);
}
}
}
return expectedTask;
}
I got one major question, and another minor one:
Do I need to perform double-checked locking in my getTask method? I know ConcurrentHashMap is thread-safe for read operations, so my get(key) is thread-safe and may not need double-checked locking (but I'm not quite sure of this…). But what about the isDone() method of Future?
How do you choose the right lock object for a synchronized block? I know it must not be null, so I first use the TaskLauncher.class object in getInstance() and then tasksCache, already initialized, in the getTask(String key) method. Does this choice actually matter?
Do I need to perform double-checked locking in my getTask method?
You don't need to do double-checked locking (DCL) here. (In fact, it is very rare that you need to use DCL. In 99.9% of cases, regular locking is just fine. Regular locking on a modern JVM is fast enough that the performance benefit of DCL is usually too small to make a noticeable difference.)
However, synchronization is necessary unless you declared tasksCache to be final. And if tasksCache is not final, then simple locking should be just fine.
I know ConcurrentHashMap is thread-safe for read operations ...
That's not the issue. The issue is whether reading the value of the tasksCache reference is going to give you the right value if the TaskLauncher is created and used on different threads. The thread-safety of fetching a reference from a variable is not affected one way or another by the thread-safety of the referenced object.
But what about the isDone() method of Future?
Again ... that has no bearing on whether or not you need to use DCL or other synchronization.
For the record, the memory semantics "contract" for Future is specified in the javadoc:
"Memory consistency effects: Actions taken by the asynchronous computation happen-before actions following the corresponding Future.get() in another thread."
In other words, no extra synchronization is required when you call get() on a (properly implemented) Future.
How do you choose the right lock object for a synchronized block?
The locking serves to synchronize access to the variables read and written by the different threads while holding the lock.
In theory, you could write your entire application to use just one lock. But if you did that, you would get the situation where one thread waits for another, despite the first thread not needing to use the variables that were used by the other one. So normal practice is use a lock that is associated with the variables.
The other thing you need to ensure is that when two threads need to access the same set of variables, they use the same object (or objects) as locks. If they use different locks, then they don't achieve proper synchronization ...
(There are also issues about whether lock on this or on a private lock, and about the order in which locks should be acquired. But these are beyond the scope of the question you asked.)
Those are the general "rules". To decide in a specific case, you need to understand precisely what you are trying to protect, and choose the lock accordingly.
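For illustration, here is a sketch of getTask with the plain locking this answer recommends; createTask(key) is a hypothetical factory standing in for the "[...]" part of the question's code:
public Future<Object> getTask(String key) {
    synchronized (tasksCache) {
        Future<Object> task = tasksCache.get(key);
        if (task == null || task.isDone()) {
            // createTask(key) is assumed to compute and return the task's result
            FutureTask<Object> newTask = new FutureTask<>(() -> createTask(key));
            threadPool.execute(newTask); // FutureTask implements Runnable
            tasksCache.put(key, newTask);
            task = newTask;
        }
        return task;
    }
}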
The AbstractQueuedSynchronizer used inside FutureTask keeps the task's state in a volatile (thread-safe) variable, so you need not worry about the isDone() method:
private volatile int state;
The choice of lock object depends on the instance type and the situation.
Let's say you have multiple instances whose synchronized blocks lock on TaskLauncher.class: then all those methods across all the instances are synchronized by this single lock (use that approach if you want to share a single piece of state across all the instances).
If each instance has its own state shared between its threads and methods, lock on this instead. Using this also saves you an extra lock object.
In your case you can use TaskLauncher.class, tasksCache or this - since it's a singleton, they are all the same in terms of synchronization.
