I have read an article about atomic operations in Java but still have some doubts that need to be clarified:
volatile int num;

public void doSomething() {
    num = 10;                 // write operation
    System.out.println(num);  // read
    num = 20;                 // write
    System.out.println(num);  // read
}
So I have done four operations (write-read-write-read) in one method. Are they atomic operations? What will happen if multiple threads invoke the doSomething() method simultaneously?
An operation is atomic if no thread will see an intermediate state, i.e. the operation will either have completed fully, or not at all.
Reading an int field is an atomic operation, i.e. all 32 bits are read at once. Writing an int field is also atomic, the field will either have been written fully, or not at all.
However, the method doSomething() is not atomic; a thread may yield the CPU to another thread while the method is executing, and that other thread may observe that some, but not all, of the operations have been executed.
That is, if threads T1 and T2 both execute doSomething(), the following may happen:
T1: num = 10;
T2: num = 10;
T1: System.out.println(num); // prints 10
T1: num = 20;
T1: System.out.println(num); // prints 20
T2: System.out.println(num); // prints 20
T2: num = 20;
T2: System.out.println(num); // prints 20
If doSomething() were synchronized, its atomicity would be guaranteed, and the above scenario impossible.
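For illustration, here is a minimal sketch of that synchronized variant (the enclosing Example class is my own scaffolding, not part of the question):

class Example {
    private volatile int num;

    // The intrinsic lock on 'this' makes the whole method one indivisible
    // unit with respect to other threads that also synchronize on 'this',
    // so the interleaving shown above cannot occur.
    public synchronized void doSomething() {
        num = 10;
        System.out.println(num); // always prints 10 (as long as every writer locks 'this')
        num = 20;
        System.out.println(num); // always prints 20
    }
}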
What the volatile keyword promises: volatile ensures that if you have a thread A and a thread B, any change either of them makes to that variable will be seen by the other. So if thread A changes the value at some point, thread B will see that change when it reads the variable later.
Atomic operations ensure that the execution of the said operation happens "in one step." This is somewhat confusing, because looking at the code x = 10; may appear to be "one step", but it actually requires several steps on the CPU. An atomic operation can be formed in a variety of ways, one of which is by locking using synchronized: the lock of an object (or of the Class in the case of static methods) is acquired, and no two threads can execute the locked code at the same time.
As you asked in a comment earlier, even if you had three separate atomic steps that thread A was executing at some point, there's a chance that thread B could begin executing in the middle of those three steps. To ensure the thread safety of the object, all three steps would have to be grouped together to act like a single step. This is part of the reason locks are used.
A very important thing to note is that if you want to ensure that your object can never be accessed by two threads at the same time, all of your methods must be synchronized. You could create a non-synchronized method on the object that would access the values stored in the object, but that would compromise the thread safety of the class.
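As a minimal sketch of that rule (class and method names here are purely illustrative):

class SafeCounter {
    private int value;

    // Every method that touches 'value' synchronizes on the same lock
    // (this instance), so their bodies cannot interleave.
    public synchronized void increment() {
        value++;
    }

    // Even the read-only accessor is synchronized; an unsynchronized
    // getter could observe a stale value and break the guarantee.
    public synchronized int get() {
        return value;
    }
}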
You may be interested in the java.util.concurrent.atomic library. I'm also no expert on these matters, so I would suggest a book that was recommended to me: Java Concurrency in Practice
Each individual read and write to a volatile variable is atomic. This means that a thread won't see the value of num changing while it's reading it, but it can still change in between each statement. So a thread running doSomething() while other threads are doing the same will print a 10 or 20 followed by another 10 or 20. After all threads have finished calling doSomething(), the value of num will be 20.
My answer has been modified according to Brian Roach's comment.
It's atomic in this case because the variable is an int.
volatile can only guarantee visibility among threads, not atomicity: it lets you see the change to the integer, but it cannot guarantee that a compound change happens as one indivisible step.
For example, without volatile, reads and writes of long and double are not even guaranteed to be atomic, so they can expose an unexpected intermediate state.
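As a sketch of the long case (field and class names are my own; the guarantee comes from JLS 17.7, which makes volatile long and double reads and writes atomic):

class Sampler {
    // Without volatile, a JVM is allowed to split this 64-bit write into
    // two 32-bit writes, so a reader could observe a half-updated value.
    private volatile long lastTimestamp;

    void update(long now) {
        lastTimestamp = now;  // atomic because the field is volatile
    }

    long read() {
        return lastTimestamp; // always a value that some thread actually wrote
    }
}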
Atomic Operations and Synchronization:
An atomic operation is performed as a single unit of work, without interference from other operations. Atomic operations are required in a multi-threaded environment to avoid data inconsistency.
Reading or writing an int value is an atomic operation. But if it happens inside a method that is not synchronized, many threads can interleave there, which can lead to inconsistent values. In particular, int++ is not an atomic operation, so by the time one thread has read the value and incremented it by one, another thread may have read the older value, leading to a wrong result.
To solve the data inconsistency, we have to make sure the increment operation on count is atomic. We can do that using synchronization, but since Java 5, java.util.concurrent.atomic provides wrapper classes for int and long that achieve this atomically without using synchronization.
Using a plain int might create data inconsistencies, as shown below:
public class AtomicClass {
    public static void main(String[] args) throws InterruptedException {
        ThreadProcessing pt = new ThreadProcessing();
        Thread thread_1 = new Thread(pt, "thread_1");
        thread_1.start();
        Thread thread_2 = new Thread(pt, "thread_2");
        thread_2.start();
        thread_1.join();
        thread_2.join();
        System.out.println("Processing count=" + pt.getCount());
    }
}

class ThreadProcessing implements Runnable {
    private int count;

    @Override
    public void run() {
        for (int i = 1; i < 5; i++) {
            processSomething(i);
            count++;
        }
    }

    public int getCount() {
        return this.count;
    }

    private void processSomething(int i) {
        // processing some job
        try {
            Thread.sleep(i * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
OUTPUT: the count value varies between 5, 6, 7, and 8
We can resolve this using java.util.concurrent.atomic, which will always output a count value of 8, because the AtomicInteger method incrementAndGet() atomically increments the current value by one, as shown below:
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicClass {
    public static void main(String[] args) throws InterruptedException {
        ThreadProcessing pt = new ThreadProcessing();
        Thread thread_1 = new Thread(pt, "thread_1");
        thread_1.start();
        Thread thread_2 = new Thread(pt, "thread_2");
        thread_2.start();
        thread_1.join();
        thread_2.join();
        System.out.println("Processing count=" + pt.getCount());
    }
}

class ThreadProcessing implements Runnable {
    private AtomicInteger count = new AtomicInteger();

    @Override
    public void run() {
        for (int i = 1; i < 5; i++) {
            processSomething(i);
            count.incrementAndGet();
        }
    }

    public int getCount() {
        return this.count.get();
    }

    private void processSomething(int i) {
        // processing some job
        try {
            Thread.sleep(i * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Source: Atomic Operations in java
Related
The Java multi-threading tutorial gives an example of memory consistency errors, but I cannot reproduce it. Is there any other way to simulate memory consistency errors?
The example provided in the tutorial:
Suppose a simple int field is defined and initialized:
int counter = 0;
The counter field is shared between two threads, A and B. Suppose thread A increments counter:
counter++;
Then, shortly afterwards, thread B prints out counter:
System.out.println(counter);
If the two statements had been executed in the same thread, it would be safe to assume that the value printed out would be "1". But if the two statements are executed in separate threads, the value printed out might well be "0", because there's no guarantee that thread A's change to counter will be visible to thread B — unless the programmer has established a happens-before relationship between these two statements.
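For reference, a minimal sketch (my own illustration, not from the tutorial) of one way to establish that happens-before relationship, using Thread.join():

public class HappensBeforeDemo {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> counter++); // thread A increments
        a.start();
        a.join(); // join() creates a happens-before edge: everything A wrote
                  // before terminating is visible to the code after join()
        System.out.println(counter); // guaranteed to print 1
    }
}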
I answered a question a while ago about a bug in Java 5. Why doesn't volatile in java 5+ ensure visibility from another thread?
Given this piece of code:
public class Test {
    volatile static private int a;
    static private int b;

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 100; i++) {
            new Thread() {
                @Override
                public void run() {
                    int tt = b; // makes the jvm cache the value of b
                    while (a == 0) {
                    }
                    if (b == 0) {
                        System.out.println("error");
                    }
                }
            }.start();
        }
        b = 1;
        a = 1;
    }
}
The volatile store to a happens after the normal store to b. So when a thread runs and sees a != 0, the rules defined in the JMM guarantee that it must also see b == 1.
The bug in the JRE allowed a thread to reach the error line; it was subsequently fixed. Without a declared as volatile there would be no such guarantee, and this could legitimately fail.
This might reproduce the problem; at least on my computer, I can reproduce it after some loops.
Suppose you have a Holder class:
class Holder {
    boolean flag = false;
    long modifyTime = Long.MAX_VALUE;
}
Let thread_A set flag to true and save the time into modifyTime. Let another thread, say thread_B, read the Holder's flag. If thread_B still gets false even when it checks later than modifyTime, then we can say we have reproduced the problem.
Example code
class Holder {
    boolean flag = false;
    long modifyTime = Long.MAX_VALUE;
}

public class App {
    public static void main(String[] args) {
        while (!test());
    }

    private static boolean test() {
        final Holder holder = new Holder();
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(10);
                    holder.flag = true;
                    holder.modifyTime = System.currentTimeMillis();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();

        long lastCheckStartTime = 0L;
        long lastCheckFailTime = 0L;
        while (true) {
            lastCheckStartTime = System.currentTimeMillis();
            if (holder.flag) {
                break;
            } else {
                lastCheckFailTime = System.currentTimeMillis();
                System.out.println(lastCheckFailTime);
            }
        }

        if (lastCheckFailTime > holder.modifyTime
                && lastCheckStartTime > holder.modifyTime) {
            System.out.println("last check fail time " + lastCheckFailTime);
            System.out.println("modify time " + holder.modifyTime);
            return true;
        } else {
            return false;
        }
    }
}
Result
last check fail time 1565285999497
modify time 1565285999494
This means thread_B got false from the Holder's flag field at time 1565285999497, even though thread_A had set it to true at time 1565285999494 (3 milliseconds earlier).
The example used is poorly suited to demonstrating the memory consistency issue. Making it work requires brittle reasoning and complicated coding, and even then you may not see the problem. Multi-threading issues occur due to unlucky timing, so if you want to increase the chances of observing the issue, you need to increase the chances of unlucky timing.
The following program achieves that.
public class ConsistencyIssue {

    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter);
    }

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++)
                counter++;
        }
    }
}
Execution 1 output: 10963,
Execution 2 output: 14552
The final count should have been 20000, but it is less than that. The reason is that counter++ is a multi-step operation:
1. read counter
2. increment it
3. store it back
Two threads may both read, say, the value 1 at the same time, increment it to 2, and write out 2. In a serial execution it would have been 1 -> 2 -> 3.
We need a way to make all three steps atomic, i.e. executed by only one thread at a time.
Solution 1: Synchronized
Surround the increment with a synchronized block. Since counter is a static variable, you need to use class-level synchronization:
@Override
public void run() {
    for (int i = 1; i <= 10000; i++) {
        synchronized (ConsistencyIssue.class) {
            counter++;
        }
    }
}
Now it outputs: 20000
Solution 2: AtomicInteger
import java.util.concurrent.atomic.AtomicInteger;

public class ConsistencyIssue {

    static AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter.get());
    }

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++)
                counter.incrementAndGet();
        }
    }
}
We could also do this with semaphores or explicit locks, but for this simple code AtomicInteger is enough.
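If you did want the explicit-locking variant mentioned above, a sketch with ReentrantLock (class and method names are illustrative) might look like this:

import java.util.concurrent.locks.ReentrantLock;

class LockedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;

    void increment() {
        lock.lock();       // only one thread at a time past this point
        try {
            counter++;     // the read-modify-write is now atomic
        } finally {
            lock.unlock(); // always release, even if the body throws
        }
    }

    int get() {
        lock.lock();
        try {
            return counter;
        } finally {
            lock.unlock();
        }
    }
}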
Sometimes when I try to reproduce some real concurrency problems, I use the debugger.
Make a breakpoint on the print and a breakpoint on the increment and run the whole thing.
Releasing the breakpoints in different sequences gives different results.
Maybe too simple, but it worked for me.
Please have another look at how the example is introduced in your source.
The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement. To see this, consider the following example.
This example illustrates the fact that multi-threading is not deterministic, in the sense that you get no guarantee about the order in which operations of different threads will be executed, which might result in different observations across several runs. But it does not illustrate a memory consistency error!
To understand what a memory consistency error is, you first need some insight into memory consistency. The simplest model of memory consistency was introduced by Lamport in 1979. Here is the original definition:
The result of any execution is the same as if the operations of all the processes were executed in some sequential order and the operations of each individual process appear in this sequence in the order specified by its program
Now, to see what a real memory consistency error might look like, consider the example multi-threaded program shown in the figure from a more recent research paper about sequential consistency.
To finally answer your question, please note the following points:
A memory consistency error always depends on the underlying memory model (a particular programming language may allow more behaviours for optimization purposes). What the best memory model is remains an open research question.
The example given above shows a sequential consistency violation, but there is no guarantee that you can observe it with your favourite programming language, for two reasons: it depends on the programming language's exact memory model, and due to non-determinism you have no way to force a particular incorrect execution.
Memory models are a wide topic. For more information, you can for example have a look at Torsten Hoefler and Markus Püschel's course at ETH Zürich, from which I understood most of these concepts.
Sources
Leslie Lamport, How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs, 1979
Wei-Yu Chen, Arvind Krishnamurthy, Katherine Yelick, Polynomial-Time Algorithms for Enforcing Sequential Consistency in SPMD Programs with Arrays, 2003
Design of Parallel and High-Performance Computing course, ETH Zürich
I'm learning multithreading. Can anyone tell me why the output here is always 100, even though there are two threads each doing 100 increments?
public class App {

    public static int counter = 0;

    public static void process() {
        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 100; ++i) {
                    ++counter;
                }
            }
        });
        Thread thread2 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 100; ++i) {
                    ++counter;
                }
            }
        });
        thread1.start();
        thread2.start();
    }

    public static void main(String[] args) {
        process();
        System.out.println(counter);
    }
}
The output is 100.
You're only starting the threads, not waiting for them to complete before you print the result. When I run your code, the output is 0, not 100.
You can wait for the threads with
thread1.join();
thread2.join();
(at the end of the process() method). When I add those, I get 200 as output. (Note that Thread.join() throws an InterruptedException, so you have to catch or declare this exception.)
But I'm 'lucky' to get 200 as output, since the actual behaviour is undefined as Stephen C notes. The reason why is one of the main pitfalls of multithreading: your code is not thread safe.
Basically: ++counter is shorthand for
read the value of counter
add 1
write the value of counter
If thread B does step 1 while thread A hasn't finished step 3 yet, it will try to write the same result as thread A, so you'll miss an increment.
One of the ways to solve this is using AtomicInteger, e.g.
public static AtomicInteger counter = new AtomicInteger(0); // requires java.util.concurrent.atomic.AtomicInteger
...
Thread thread1 = new Thread(new Runnable() {
    @Override
    public void run() {
        for (int i = 0; i < 100; ++i) {
            counter.incrementAndGet();
        }
    }
});
Can anyone tell why here the output is always 100, even though there are two threads which are doing 100 increments?
The reason is that you have two threads writing a shared variable and a third reading, all without any synchronization. According to the Java Memory Model, this means that the actual behavior of your example is unspecified.
In reality, your main thread is (probably) printing the output before the second thread starts. (And apparently on some platforms, it prints it before the first one starts. Or maybe, it is seeing a stale value for counter. It is a bit hard to tell. But this is all within the meaning of unspecified)
Apparently, adding join calls before printing the results appears to fix the problem, but I think that is really by luck [1]. If you changed 100 to a large enough number, I suspect that you would find that incorrect counter values would be printed once again.
Another answer suggests using volatile. This isn't a solution. While a read operation following a write operation on a volatile is guaranteed to give the latest value written, that value may have been written by another thread. In fact the counter++ expression is an atomic read followed by an atomic write, but the sequence as a whole is not atomic. If two or more threads do this simultaneously on the same variable, they are liable to lose increments.
The correct solutions to this are to either using an AtomicInteger, or to perform the counter++ operations inside a synchronized block; e.g.
for (int i = 0; i < 100; ++i) {
synchronized(App.class) {
++counter;
}
}
Then it makes no difference that the two threads may or may not be executed in parallel.
[1] What I think happens is that the first thread finishes before the second thread starts. Starting a new thread takes a significant length of time.
In your case, there are three threads executing: the main thread, thread1, and thread2. These three threads are not synchronized, so the behaviour of the poor counter variable will not be well-defined or predictable.
This kind of problem is called a race condition.
Case 1: if I add just one simple print statement before printing counter, like:
process();
System.out.println("counter value:");
System.out.println(counter);
the scenario in this situation will be different, and there are many more such cases.
So in these types of cases, you modify the code according to your requirements.
If you want the main thread to wait for the other threads to finish, use Thread.join(), like:
thread1.join();
thread2.join();
join() is a non-static method of the Thread class, so it is always invoked on a thread object; call join() only after the thread has been started.
If you want to read more about multithreading in Java, please see: https://docs.oracle.com/cd/E19455-01/806-5257/6je9h032e/index.html
You are checking the result before the threads are done.
thread1.start();
thread2.start();
try {
    thread1.join();
    thread2.join();
} catch (InterruptedException e) {
}
And make the counter variable volatile.
I am trying to see how volatile works here. If I declare cc as volatile, I get the output below. I know thread execution output varies from time to time, but I read somewhere that volatile is the same as synchronized, so why do I get this output? And if I use two instances of Thread1 does that matter?
2Thread-0
2Thread-1
4Thread-1
3Thread-0
5Thread-1
6Thread-0
7Thread-1
8Thread-0
9Thread-1
10Thread-0
11Thread-1
12Thread-0
public class Volexample {
    int cc = 0;

    public static void main(String[] args) {
        Volexample ve = new Volexample();
        CountClass count = ve.new CountClass();
        Thread1 t1 = ve.new Thread1(count);
        Thread2 t2 = ve.new Thread2(count);
        t1.start();
        t2.start();
    }

    class Thread1 extends Thread {
        CountClass count = new CountClass();

        Thread1(CountClass count) {
            this.count = count;
        }

        @Override
        public void run() {
            /*for (int i = 0; i <= 5; i++)
                count.countUp();*/
            for (int i = 0; i <= 5; i++) {
                cc++;
                System.out.println(cc + Thread.currentThread().getName());
            }
        }
    }

    class Thread2 extends Thread {
        CountClass count = new CountClass();

        Thread2(CountClass count) {
            this.count = count;
        }

        @Override
        public void run() {
            /*for (int i = 0; i <= 5; i++)
                count.countUp();*/
            for (int i = 0; i <= 5; i++) {
                cc++;
                System.out.println(cc + Thread.currentThread().getName());
            }
        }
    }

    class CountClass {
        volatile int count = 0;

        void countUp() {
            count++;
            System.out.println(count + Thread.currentThread().getName());
        }
    }
}
In Java, the semantics of the volatile keyword are very well defined. They ensure that other threads will see the latest changes to a variable. But they do not make read-modify-write operations atomic.
So, if i is volatile and you do i++, you are guaranteed to read the latest value of i and you are guaranteed that other threads will see your write to i immediately, but you are not guaranteed that two threads won't interleave their read/modify/write operations so that the two increments have the effect of a single increment.
Suppose i is a volatile integer initialized to zero, no other writes have occurred yet, and two threads execute i++. The following can happen:
The first thread reads a zero, the latest value of i.
The second threads reads a zero, also the latest value of i.
The first thread increments the zero it read, getting one.
The second thread increments the zero it read, also getting one.
The first thread writes the one it computed to i.
The second thread writes the one it computed to i.
The latest value written to i is one, so any thread that accesses i now will see one.
Notice that an increment was lost, even though every thread always read the latest value written by any other thread. The volatile keyword gives visibility, not atomicity.
You can use synchronized to form complex atomic operations. If you just need simple ones, you can use the various Atomic* classes that Java provides.
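As a sketch of the "simple ones" (the CasCounter class is my own illustration; the hand-written loop only shows what incrementAndGet() effectively does internally):

import java.util.concurrent.atomic.AtomicInteger;

class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    int increment() {
        return value.incrementAndGet(); // atomic read-modify-write, no lock
    }

    // Equivalent compare-and-set retry loop, shown only to illustrate
    // how the Atomic* classes avoid losing updates without locking:
    int incrementWithCas() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next; // our update won the race
            }
            // another thread updated the value first; retry with the new value
        }
    }
}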
A use-case for volatile is reading/writing memory that is mapped to device registers, for example on a micro-controller where something other than the CPU could be reading and writing values at that "memory" address, so the compiler must not optimise the variable away.
The Java volatile keyword is used to mark a Java variable as "being stored in main memory". That means that every read of a volatile variable will be read from the computer's main memory, not from the CPU cache, and that every write to a volatile variable will be written to main memory, not just to the cache.
It guarantees that you are accessing the newest value of this variable.
P.S. Use larger loops to notice the bugs; for example, try to iterate 10e9 times.
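A typical legitimate use of volatile, sketched below with illustrative names, is a simple stop flag where only visibility matters and no compound read-modify-write is involved:

class Worker implements Runnable {
    // volatile guarantees the loop sees the write made by the stopping
    // thread; since the flag is only ever assigned (never read, modified
    // and written back), volatile alone is sufficient here.
    private volatile boolean running = true;

    public void stop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            // do one unit of work
        }
    }
}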
class Counter {
    public int i = 0;

    public void increment() {
        i++;
        System.out.println("i is " + i);
        System.out.println("i += 22 executing");
        i = i + 22;
        System.out.println("i is (after i + 22) " + i);
        System.out.println("i++ executing");
        i++;
        System.out.println("i is (after i++) " + i);
    }

    public void decrement() {
        i--;
        System.out.println("i is " + i);
        System.out.println("i *= 2 executing");
        i = i * 2;
        System.out.println("i is (after i * 2) " + i);
        System.out.println("i -= 1 executing");
        i = i - 1;
        System.out.println("i is (after i - 1) " + i);
    }

    public int value() {
        return i;
    }
}
class ThreadA {
    public ThreadA(final Counter c) {
        new Thread(new Runnable() {
            public void run() {
                System.out.println("Thread A trying to increment");
                c.increment();
                System.out.println("Increment completed " + c.i);
            }
        }).start();
    }
}

class ThreadB {
    public ThreadB(final Counter c) {
        new Thread(new Runnable() {
            public void run() {
                System.out.println("Thread B trying to decrement");
                c.decrement();
                System.out.println("Decrement completed " + c.i);
            }
        }).start();
    }
}

class ThreadInterference {
    public static void main(String[] args) throws Exception {
        Counter c = new Counter();
        new ThreadA(c);
        new ThreadB(c);
    }
}
In the above code, ThreadA first gets access to the Counter object and increments the value, along with performing some extra operations. The very first time, ThreadA does not have a cached value of i, but after executing i++ (in the first line) it will cache the value. Later on the value is updated and reaches 24. As I understood it, since the variable i is not volatile, the changes should only be made in ThreadA's local cache.
Yet when ThreadB then calls the decrement() method, the value of i is the one updated by ThreadA, i.e. 24. How could that be possible?
Assuming that threads won't see the updates that other threads make to shared data is as inappropriate as assuming that all threads will see each other's updates immediately.
The important thing is to take account of the possibility of not seeing updates - not to rely on it.
There's another issue besides not seeing updates from other threads, mind you: all of your operations act in a "read, modify, write" sense... if another thread modifies the value after you've read it, you'll basically ignore that modification.
So for example, suppose i is 5 when we reach this line:
i = i * 2;
... but half way through it, another thread modifies it to be 4.
That line can be thought of as:
int tmp = i;
tmp = tmp * 2;
i = tmp;
If the second thread changes i to 4 after the first line in the "expanded" version, then even if i is volatile the write of 4 will still be effectively lost - because by that point, tmp is 5, it will be doubled to 10, and then 10 will be written out.
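One way to close that window, as a sketch (the lock object and method names are mine, not from the question's Counter class), is to make each read-modify-write hold the same lock as every other access:

class GuardedCounter {
    private int i = 5;
    private final Object lock = new Object();

    public void doubleValue() {
        synchronized (lock) {
            i = i * 2; // read, multiply and write happen as one unit
        }
    }

    public void setValue(int newValue) {
        synchronized (lock) {
            i = newValue; // a concurrent write now lands entirely before
        }                 // or entirely after doubleValue(), never inside it
    }

    public int value() {
        synchronized (lock) {
            return i;
        }
    }
}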
As specified in JLS 8.3.1.4:
The Java programming language allows threads to access shared variables (§17.1). As a rule, to ensure that shared variables are consistently and reliably updated, a thread should ensure that it has exclusive use of such variables by obtaining a lock that, conventionally, enforces mutual exclusion for those shared variables. [...] A field may be declared volatile, in which case the Java Memory Model ensures that all threads see a consistent value for the variable.
Although not always, there is still a chance that the shared values among threads are not consistently and reliably updated, which can lead to an unpredictable program outcome. In the code given below
class Test {
    static int i = 0, j = 0;

    static void one() { i++; j++; }

    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}
if one thread repeatedly calls the method one (but no more than Integer.MAX_VALUE times in all) and another thread repeatedly calls the method two, then method two could occasionally print a value for j that is greater than the value of i, because the example includes no synchronization and the shared values of i and j might be updated out of order.
But if you declare i and j to be volatile, this allows method one and method two to be executed concurrently while guaranteeing that accesses to the shared values for i and j occur exactly as many times, and in exactly the same order, as they appear to occur during execution of the program text by each thread. Therefore, the shared value for j is never greater than that for i, because each update to i must be reflected in the shared value for i before the update to j occurs.
Now I have come to the conclusion that common objects (objects that are shared by multiple threads) are not cached by those threads; as the object is shared, the Java Memory Model is apparently smart enough to recognise that common objects, if cached per thread, could produce surprising results.
How could that be possible?
Because there is nowhere in the JLS that says values have to be cached within a thread.
This is what the spec does say:
If you have a non-volatile variable x, and it's updated by a thread T1, there is no guarantee that T2 can ever observe the change of x by T1. The only way to guarantee that T2 sees a change of T1 is with a happens-before relationship.
It just so happens that some implementations of Java cache non-volatile variables within a thread in certain cases. In other words, you can't rely on a non-volatile variable being cached.
I have experienced this weird behavior of the volatile keyword recently. As far as I know, the volatile keyword is applied to a variable so that changes made to its data by one thread are reflected to the other threads, and the volatile keyword prevents caching of the data per thread.
I did a small test. I used an integer variable named count and applied the volatile keyword to it. Then I made two different threads that each increment the variable 10000 times, so the end result should be 20000.
But that is not always the case: with the volatile keyword I am not getting 20000 consistently, but 18534, 15000, etc., and only sometimes 20000. When I used the synchronized keyword, however, it just worked fine. Why?
Can anyone please explain this behaviour of the volatile keyword to me? I am posting my code with the volatile keyword as well as the one with the synchronized keyword.
The code below behaves inconsistently with the volatile keyword on the variable count:
public class SynVsVol implements Runnable {

    volatile int count = 0;

    public void go() {
        for (int i = 0; i < 10000; i++) {
            count = count + 1;
        }
    }

    @Override
    public void run() {
        go();
    }

    public static void main(String[] args) {
        SynVsVol s = new SynVsVol();
        Thread t1 = new Thread(s);
        Thread t2 = new Thread(s);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Total Count Value: " + s.count);
    }
}
The following code behaves perfectly with the synchronized keyword on the method go():
public class SynVsVol implements Runnable {

    int count = 0;

    public synchronized void go() {
        for (int i = 0; i < 10000; i++) {
            count = count + 1;
        }
    }

    @Override
    public void run() {
        go();
    }

    public static void main(String[] args) {
        SynVsVol s = new SynVsVol();
        Thread t1 = new Thread(s);
        Thread t2 = new Thread(s);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Total Count Value: " + s.count);
    }
}
count = count + 1 is not atomic. It has three steps:
read the current value of the variable
increment the value
write the new value back to the variable
These three steps are getting interwoven, resulting in different execution paths, resulting in an incorrect value. Use AtomicInteger.incrementAndGet() instead if you want to avoid the synchronized keyword.
So although the volatile keyword acts pretty much as you described it, that only applies to each separate operation, not to all three operations collectively.
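A sketch of the questioner's class rewritten around AtomicInteger (my adaptation, with a new class name, not the original code):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicVsVol implements Runnable {

    final AtomicInteger count = new AtomicInteger();

    public void go() {
        for (int i = 0; i < 10000; i++) {
            count.incrementAndGet(); // atomic read-modify-write, no lock needed
        }
    }

    @Override
    public void run() {
        go();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicVsVol s = new AtomicVsVol();
        Thread t1 = new Thread(s);
        Thread t2 = new Thread(s);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Total Count Value: " + s.count.get()); // always 20000
    }
}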
The volatile keyword is not a synchronization primitive. It merely prevents caching of the value on the thread, but it does not prevent two threads from modifying the same value and writing it back concurrently.
Let's say two threads come to the point when they need to increment the counter, which is now set to 5. Both threads see 5, make 6 out of it, and write it back into the counter. If the counter were not volatile, both threads could have assumed that they know the value is 6, and skip the next read. However, it's volatile, so they both would read 6 back, and continue incrementing. Since the threads are not going in lock-step, you may see a value different from 10000 in the output, but there's virtually no chance that you would see 20000.
The fact that a variable is volatile does not mean every operation it's involved in is atomic. For instance, this line in SynVsVol.Go:
count = count + 1;
will first have count read, then incremented, and the result will then be written back. If some other thread executes it at the same time, the results depend on the interleaving of the commands.
Now, when you add synchronized, SynVsVol.go() executes atomically. Namely, the increment is done as a whole by a single thread, and the other one can't modify count until it is done.
Lastly, caching of member variables that are only modified within a synchronized block is much easier. The compiler can read their value when the monitor is acquired, cache it in a register, have all changes done on that register, and eventually flush it back to main memory when the monitor is released. This is also the case when you call wait in a synchronized block and some other thread notifies you: cached member variables will be synchronized, and your program will remain coherent. That's guaranteed even if the member variable is not declared volatile:
Synchronization ensures that memory writes by a thread before or during a synchronized block are made visible in a predictable manner to other threads which synchronize on the same monitor.
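A small sketch of that guarantee (class and method names are my own invention): a plain, non-volatile field published safely through the monitor and wait/notify:

class Mailbox {
    private String message;           // deliberately NOT volatile
    private boolean delivered = false;

    public synchronized void put(String m) {
        message = m;
        delivered = true;
        notifyAll();                  // wake up any waiting reader
    }

    public synchronized String take() throws InterruptedException {
        while (!delivered) {
            wait();                   // releases the monitor while waiting
        }
        // Visible without volatile: the writer's changes were published
        // when it exited its synchronized block and notified us.
        return message;
    }
}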
Your code is broken because it treats the read-and-increment operation on a volatile as atomic, which it is not. The code doesn't contain a data race, but it does contain a race condition on the int.