I am trying to see how volatile works here. If I declare cc as volatile, I get the output below. I know thread output varies from run to run, but I read somewhere that volatile is the same as synchronized, so why do I get this output? And does it matter if I use two instances of Thread1?
2Thread-0
2Thread-1
4Thread-1
3Thread-0
5Thread-1
6Thread-0
7Thread-1
8Thread-0
9Thread-1
10Thread-0
11Thread-1
12Thread-0
public class Volexample {
    int cc = 0;

    public static void main(String[] args) {
        Volexample ve = new Volexample();
        CountClass count = ve.new CountClass();
        Thread1 t1 = ve.new Thread1(count);
        Thread2 t2 = ve.new Thread2(count);
        t1.start();
        t2.start();
    }

    class Thread1 extends Thread {
        CountClass count = new CountClass();

        Thread1(CountClass count) {
            this.count = count;
        }

        @Override
        public void run() {
            /*for (int i = 0; i <= 5; i++)
                count.countUp();*/
            for (int i = 0; i <= 5; i++) {
                cc++;
                System.out.println(cc + Thread.currentThread().getName());
            }
        }
    }

    class Thread2 extends Thread {
        CountClass count = new CountClass();

        Thread2(CountClass count) {
            this.count = count;
        }

        @Override
        public void run() {
            /*for (int i = 0; i <= 5; i++)
                count.countUp();*/
            for (int i = 0; i <= 5; i++) {
                cc++;
                System.out.println(cc + Thread.currentThread().getName());
            }
        }
    }

    class CountClass {
        volatile int count = 0;

        void countUp() {
            count++;
            System.out.println(count + Thread.currentThread().getName());
        }
    }
}
In Java, the semantics of the volatile keyword are very well defined. They ensure that other threads will see the latest changes to a variable. But they do not make read-modify-write operations atomic.
So, if i is volatile and you do i++, you are guaranteed to read the latest value of i and you are guaranteed that other threads will see your write to i immediately, but you are not guaranteed that two threads won't interleave their read/modify/write operations so that the two increments have the effect of a single increment.
Suppose i is a volatile integer initialized to zero, no other writes have occurred yet, and two threads each do i++;. The following can happen:
The first thread reads a zero, the latest value of i.
The second threads reads a zero, also the latest value of i.
The first thread increments the zero it read, getting one.
The second thread increments the zero it read, also getting one.
The first thread writes the one it computed to i.
The second thread writes the one it computed to i.
The latest value written to i is one, so any thread that accesses i now will see one.
Notice that an increment was lost, even though every thread always read the latest value written by any other thread. The volatile keyword gives visibility, not atomicity.
You can use synchronized to form complex atomic operations. If you just need simple ones, you can use the various Atomic* classes that Java provides.
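For example, a minimal sketch of both options (the class and method names here are just illustrative, not from the question):

import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the two standard fixes for a lost update on i++.
class Counters {
    // Option 1: guard the read-modify-write with a lock.
    private int syncCount = 0;

    synchronized void incrementSync() {
        syncCount++;               // atomic because only one thread holds the lock at a time
    }

    synchronized int getSync() {
        return syncCount;
    }

    // Option 2: let the Atomic* class do the atomic increment.
    private final AtomicInteger atomicCount = new AtomicInteger();

    void incrementAtomic() {
        atomicCount.incrementAndGet();   // atomic read-modify-write, no explicit lock
    }

    int getAtomic() {
        return atomicCount.get();
    }
}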
One use case for volatile is reading/writing memory that is mapped to device registers, for example on a microcontroller where something other than the CPU may be reading or writing that "memory" address, so the compiler must not optimise accesses to that variable away.
The Java volatile keyword is used to mark a Java variable as "being stored in main memory". That means that every read of a volatile variable will be read from the computer's main memory, and not from the cache, and that every write to a volatile variable will be written to main memory, and not just to the cache.
It guarantees that you are accessing the newest value of this variable.
P.S. Use larger loops to make the bug visible. For example, try iterating 10e9 times.
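A small sketch of such a test (class name and loop size are illustrative; with any sufficiently large loop the printed total is usually below the expected value):

// Two threads each increment a volatile int many times.
// Because ++ is a read-modify-write, some increments are usually lost.
public class LostUpdates {
    static volatile int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++;            // not atomic, even though counter is volatile
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 2_000_000, but typically prints a smaller number.
        System.out.println(counter);
    }
}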
I have written a short program to check the effect of a race condition. Class Counter is given below. The class has two methods that update the counter instance variable c. On purpose, I added some extra code in both methods (see the variable i) to increase the probability of interleaved execution when the methods are accessed by two threads.
In the main() method of my program, I put the following code in a loop:
t1 = new Thread() { public void run() { objCounter.increment(); } };
t2 = new Thread() { public void run() { objCounter.decrement(); } };
t1.start();
t2.start();
try {
    t1.join();
    t2.join();
} catch (InterruptedException IE) {}
Then I printed the different values of c in the objCount... In addition to the expected values 1, 0, -1, the program also displays unexpected values: -2, -1, -3, even 4.
I honestly can't see what thread interleaving would lead to the unexpected values given above. Ideally, I should look at the generated code to see how the statements c++ and c-- got translated... Regardless, I think there is another reason behind the unexpected values.
class Counter {
    private volatile int c = 0;

    public void increment() {
        int i = 9;
        i = i + 7;
        c++;
        i = i + 3;
    }

    public void decrement() {
        int i = 9;
        i = i + 7;
        c--;
        i = i + 3;
    }

    public int value() { return c; }
}
Even if you mark an int as volatile, operations of that kind are not atomic. Try replacing your primitive int with a thread-safe class such as:
https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicInteger.html
Or just access it through a synchronized method.
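A minimal sketch of that replacement, keeping the same method names as the question's Counter:

import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    private final AtomicInteger c = new AtomicInteger(0);

    public void increment() {
        c.incrementAndGet();   // atomic read-modify-write
    }

    public void decrement() {
        c.decrementAndGet();   // atomic read-modify-write
    }

    public int value() {
        return c.get();
    }
}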
I put the following code in a loop
You don't show reinitialization of the objCounter variable; this suggests that you're reusing the variable between loop iterations.
As such, you can get -2 from the situation resulting in -1 (e.g. Thread 1 read, Thread 2 read, T1 write, T2 write) happening twice.
In order to avoid reusing the state from previous runs, you should declare and initialize the objCounter variable inside the loop:
for (...) {
    Counter objCounter = new Counter();
    t1 = new Thread() { public void run() { objCounter.increment(); } };
    t2 = new Thread() { public void run() { objCounter.decrement(); } };
    // ... Start/join the threads.
}
It can't be declared before the loop and initialized inside the loop, because then it would not be effectively final, and effective (or actual) finality is required to refer to it inside the anonymous classes of the threads.
On purpose, I added some extra code in both methods (see the variable i) to increase the probability of interleaved execution when the methods are accessed by two threads.
As an aside, this extra code does nothing of the sort.
There is no requirement for Java to execute the statements in program order, only to appear to execute them in the program order from the perspective of the current thread.
These statements may be executed before or after the c++/--, if they are executed at all - they could simply be detected as useless.
You may as well just remove this code; it really only serves to obfuscate.
I am trying to count how many instances of a class are generated during the runtime of a process in a multi-threaded environment. The way I do it is to increment a static counter in the constructor, following this post:
How to Count Number of Instances of a Class
So in a multi-threaded environment, here is how I define the class:
class Television {
    private static volatile int counter = 0;

    public Television() {
        counter++;
    }
}
However, I am not sure whether there is a potential bug in the code above. I think a constructor in Java does not imply synchronization, and counter++ is not atomic, so if two threads create instances simultaneously, is this code buggy? I am not quite sure yet.
There is a bug in this code (specifically, a race condition), because the read of counter and write to counter aren't atomically executed.
In other words, two threads can read the same value of counter, increment that value, and then write the same value back to the variable.
Thread 1     Thread 2
========     ========
Read 0
             Read 0
Increment
             Increment
Write 1
             Write 1
So the value would be 1, not 2, afterwards.
Use AtomicInteger and AtomicInteger.incrementAndGet() instead.
As counter++ is NOT atomic, you can replace it with the JDK's AtomicInteger, which is thread-safe.
You can use AtomicInteger's getAndIncrement() method as shown below:
import java.util.concurrent.atomic.AtomicInteger;

class Television {
    private static final AtomicInteger counter = new AtomicInteger();

    public Television() {
        counter.getAndIncrement();
    }
}
An AtomicInteger is used in applications such as atomically
incremented counters, and cannot be used as a replacement for an
Integer.
There are two ways here to work around "++ on int" not being an atomic operation:
A) as others suggested, use AtomicInteger
B) introduce a common LOCK that all constructors synchronize on, like:
private static final Object LOCK = new Object();

public Television() {
    synchronized (LOCK) {
        counter++;
    }
}
class Counter
{
    public int i = 0;

    public void increment()
    {
        i++;
        System.out.println("i is " + i);
        System.out.println("i += 22 executing");
        i = i + 22;
        System.out.println("i is (after i+22) " + i);
        System.out.println("i++ executing");
        i++;
        System.out.println("i is (after i++) " + i);
    }

    public void decrement()
    {
        i--;
        System.out.println("i is " + i);
        System.out.println("i *= 2 executing");
        i = i * 2;
        System.out.println("i is (after i*2) " + i);
        System.out.println("i -= 1 executing");
        i = i - 1;
        System.out.println("i is (after i-1) " + i);
    }

    public int value()
    {
        return i;
    }
}
class ThreadA
{
    public ThreadA(final Counter c)
    {
        new Thread(new Runnable() {
            public void run()
            {
                System.out.println("Thread A trying to increment");
                c.increment();
                System.out.println("Increment completed " + c.i);
            }
        }).start();
    }
}

class ThreadB
{
    public ThreadB(final Counter c)
    {
        new Thread(new Runnable() {
            public void run()
            {
                System.out.println("Thread B trying to decrement");
                c.decrement();
                System.out.println("Decrement completed " + c.i);
            }
        }).start();
    }
}

class ThreadInterference
{
    public static void main(String args[]) throws Exception
    {
        Counter c = new Counter();
        new ThreadA(c);
        new ThreadB(c);
    }
}
In the above code, ThreadA first gets access to the Counter object and increments the value while performing some extra operations. The very first time, ThreadA does not have a cached value of i, but after executing i++ (in the first line) it caches the value. The value is then updated until it reaches 24. According to my understanding, since the variable i is not volatile, those changes should stay in ThreadA's local cache.
Now when ThreadB accesses the decrement() method, the value of i is the one updated by ThreadA, i.e. 24. How could that be possible?
Assuming that threads won't see the updates that other threads make to shared data is as inappropriate as assuming that all threads will see each other's updates immediately.
The important thing is to take account of the possibility of not seeing updates - not to rely on it.
There's another issue besides not seeing the update from other threads, mind you - all of your operations act in a "read, modify, write" sense... if another thread modifies the value after you've read it, you'll basically ignore it.
So for example, suppose i is 5 when we reach this line:
i = i * 2;
... but half way through it, another thread modifies it to be 4.
That line can be thought of as:
int tmp = i;
tmp = tmp * 2;
i = tmp;
If the second thread changes i to 4 after the first line in the "expanded" version, then even if i is volatile the write of 4 will still be effectively lost - because by that point, tmp is 5, it will be doubled to 10, and then 10 will be written out.
As specified in JLS 8.3.1.4:
The Java programming language allows threads to access shared variables (§17.1). As a rule, to ensure that shared variables are consistently and reliably updated, a thread should ensure that it has exclusive use of such variables by obtaining a lock that, conventionally, enforces mutual exclusion for those shared variables. [...] A field may be declared volatile, in which case the Java Memory Model ensures that all threads see a consistent value for the variable
Although not always, there is still a chance that the shared values among threads are not consistently and reliably updated, which can lead to unpredictable program output. In the code given below
class Test {
    static int i = 0, j = 0;

    static void one() { i++; j++; }

    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}
If one thread repeatedly calls the method one (but no more than Integer.MAX_VALUE times in all), and another thread repeatedly calls the method two, then method two could occasionally print a value for j that is greater than the value of i, because the example includes no synchronization and the shared values of i and j might be updated out of order.
But if you declare i and j to be volatile, method one and method two can still be executed concurrently, but it is guaranteed that accesses to the shared values for i and j occur exactly as many times, and in exactly the same order, as they appear to occur during execution of the program text by each thread. Therefore, the shared value for j is never greater than that for i, because each update to i must be reflected in the shared value for i before the update to j occurs.
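A minimal sketch of the volatile variant being described (it is simply the same Test class with the fields declared volatile):

class Test {
    static volatile int i = 0, j = 0;

    // Each update to i is flushed to the shared value before the update to j,
    // so the stored value of j never runs ahead of the stored value of i.
    static void one() { i++; j++; }

    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}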
Now I have come to believe that common objects (objects shared by multiple threads) are not cached by those threads, and that since the object is shared, the Java Memory Model is smart enough to recognize that caching shared objects per thread could produce surprising results.
How could that be possible?
Because there is nowhere in the JLS that says values have to be cached within a thread.
This is what the spec does say:
If you have a non-volatile variable x, and it's updated by a thread T1, there is no guarantee that T2 can ever observe the change of x by T1. The only way to guarantee that T2 sees a change of T1 is with a happens-before relationship.
It just so happens that some implementations of Java cache non-volatile variables within a thread in certain cases. In other words, you can't rely on a non-volatile variable being cached.
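To illustrate (a hedged sketch, not code from the answer above; the class name is illustrative): the classic case where this lack of guarantee shows up is a busy-wait on a plain flag, which the JIT may hoist out of the loop. Declaring the flag volatile establishes the happens-before edge:

// Without volatile, the reader thread may never observe stop = true,
// because nothing forces it to re-read the field. With volatile it must.
public class VisibilityDemo {
    static volatile boolean stop = false;   // try removing volatile: the loop may spin forever

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy-wait; with a plain boolean the JIT may hoist the read out of the loop
            }
            System.out.println("reader saw stop = true");
        });
        reader.start();
        Thread.sleep(100);   // give the reader time to enter the loop
        stop = true;         // volatile write: happens-before the reader's next volatile read
        reader.join();
    }
}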
I have read an article about atomic operations in Java but still have some doubts that need clarifying:
volatile int num;

public void doSomething() {
    num = 10;                  // write operation
    System.out.println(num);   // read
    num = 20;                  // write
    System.out.println(num);   // read
}
So I have done four operations (write-read-write-read) in one method. Are they atomic operations? What will happen if multiple threads invoke the doSomething() method simultaneously?
An operation is atomic if no thread will see an intermediary state, i.e. the operation will either have completed fully, or not at all.
Reading an int field is an atomic operation, i.e. all 32 bits are read at once. Writing an int field is also atomic: the field will either have been written fully, or not at all.
However, the method doSomething() is not atomic; a thread may yield the CPU to another thread while the method is executing, and that thread may see that some, but not all, of the operations have been executed.
That is, if threads T1 and T2 both execute doSomething(), the following may happen:
T1: num = 10;
T2: num = 10;
T1: System.out.println(num); // prints 10
T1: num = 20;
T1: System.out.println(num); // prints 20
T2: System.out.println(num); // prints 20
T2: num = 20;
T2: System.out.println(num); // prints 20
If doSomething() were synchronized, its atomicity would be guaranteed, and the above scenario impossible.
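A minimal sketch of that synchronized variant (the wrapper class name is just illustrative; only the keyword is added to the method from the question):

class AtomicityDemo {
    volatile int num;

    // With synchronized, only one thread at a time can execute the method,
    // so the write-read-write-read sequence runs as one indivisible unit per caller.
    public synchronized void doSomething() {
        num = 10;
        System.out.println(num);   // always prints 10 for this caller
        num = 20;
        System.out.println(num);   // always prints 20 for this caller
    }
}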
volatile ensures that if you have a thread A and a thread B, any change to that variable will be seen by both. So if at some point thread A changes this value, thread B will see that change when it later reads the variable. That is what the volatile keyword promises.
Atomic operations ensure that the execution of the said operation happens "in one step." This is somewhat confusing because, looking at the code, 'x = 10;' may appear to be "one step" but actually requires several steps on the CPU. An atomic operation can be formed in a variety of ways, one of which is locking with synchronized: the lock of an object (or of the Class in the case of static methods) is acquired, and no two threads can execute the locked code at the same time.
As you asked in a comment earlier, even if you had three separate atomic steps that thread A was executing at some point, there's a chance that thread B could begin executing in the middle of those three steps. To ensure the thread safety of the object, all three steps would have to be grouped together to act like a single step. This is part of the reason locks are used.
A very important thing to note is that if you want to ensure that your object can never be accessed by two threads at the same time, all of your methods must be synchronized. You could create a non-synchronized method on the object that would access the values stored in the object, but that would compromise the thread safety of the class.
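For example, a hedged sketch of that pitfall (the class and method names are illustrative): the increment is guarded, but the unguarded getter is not ordered with respect to it:

class GuardedCounter {
    private int count = 0;

    public synchronized void increment() {
        count++;                  // safe: only one thread at a time
    }

    // Not synchronized: this read is not ordered with respect to increment(),
    // so callers may see a stale value. Making it synchronized (or the field
    // volatile) restores the visibility guarantee.
    public int unsafeGet() {
        return count;
    }
}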
You may be interested in the java.util.concurrent.atomic package. I'm also no expert on these matters, so I would suggest a book that was recommended to me: Java Concurrency in Practice.
Each individual read and write to a volatile variable is atomic. This means that a thread won't see the value of num changing while it's reading it, but it can still change in between each statement. So a thread running doSomething while other threads are doing the same, will print a 10 or 20 followed by another 10 or 20. After all threads have finished calling doSomething, the value of num will be 20.
My answer is modified according to Brian Roach's comment.
Each individual read and write is atomic here because the variable is an int.
volatile only guarantees visibility among threads, not atomicity. volatile lets you see the change to the integer, but it cannot guarantee that a compound change happens as one step.
For example, non-volatile long and double can expose unexpected intermediate states.
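A small sketch of that long case (hedged: the tearing is only permitted for non-volatile 64-bit fields per JLS 17.7, and many JVMs never actually exhibit it; class and field names are illustrative):

class LongTearing {
    // A plain long may legally be written as two separate 32-bit halves (JLS 17.7),
    // so a reader could observe a value that was never written.
    static long plain = 0L;

    // Declaring it volatile makes single reads and writes atomic
    // (it still does NOT make ++ atomic).
    static volatile long safe = 0L;

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            for (long v = 0; v < 1_000_000; v++) {
                plain = -1L - plain;   // alternates between 0 and -1 (all bits flip)
                safe  = -1L - safe;
            }
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                long p = plain;
                if (p != 0L && p != -1L) {
                    System.out.println("Torn read observed: " + p);
                }
            }
        });
        writer.start();
        reader.start();
    }
}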
Atomic Operations and Synchronization:
An atomic execution is performed as a single unit of work without interference from other executions. Atomic operations are required in a multi-threaded environment to avoid data inconsistency.
Reading or writing an int value is an atomic operation. But if it happens inside a method that is not synchronized, many threads can access it, which can lead to inconsistent values. In particular, int++ is not an atomic operation, so by the time one thread reads its value and increments it by one, another thread may have read the older value, leading to a wrong result.
To solve the data inconsistency, we have to make sure the increment operation on count is atomic. We can do that with synchronization, but since Java 5 java.util.concurrent.atomic provides wrapper classes for int and long that achieve this atomically without synchronization.
Using a plain int might create data inconsistencies, as shown below:
public class AtomicClass {
    public static void main(String[] args) throws InterruptedException {
        ThreardProcesing pt = new ThreardProcesing();
        Thread thread_1 = new Thread(pt, "thread_1");
        thread_1.start();
        Thread thread_2 = new Thread(pt, "thread_2");
        thread_2.start();
        thread_1.join();
        thread_2.join();
        System.out.println("Processing count=" + pt.getCount());
    }
}

class ThreardProcesing implements Runnable {
    private int count;

    @Override
    public void run() {
        for (int i = 1; i < 5; i++) {
            processSomething(i);
            count++;
        }
    }

    public int getCount() {
        return this.count;
    }

    private void processSomething(int i) {
        // processing some job
        try {
            Thread.sleep(i * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
OUTPUT: the count value varies between 5, 6, 7 and 8.
We can resolve this using java.util.concurrent.atomic, which always outputs a count value of 8, because AtomicInteger's incrementAndGet() method atomically increments the current value by one, as shown below:
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicClass {
    public static void main(String[] args) throws InterruptedException {
        ThreardProcesing pt = new ThreardProcesing();
        Thread thread_1 = new Thread(pt, "thread_1");
        thread_1.start();
        Thread thread_2 = new Thread(pt, "thread_2");
        thread_2.start();
        thread_1.join();
        thread_2.join();
        System.out.println("Processing count=" + pt.getCount());
    }
}

class ThreardProcesing implements Runnable {
    private AtomicInteger count = new AtomicInteger();

    @Override
    public void run() {
        for (int i = 1; i < 5; i++) {
            processSomething(i);
            count.incrementAndGet();
        }
    }

    public int getCount() {
        return this.count.get();
    }

    private void processSomething(int i) {
        // processing some job
        try {
            Thread.sleep(i * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Source: Atomic Operations in java
I have experienced this weird behavior of the volatile keyword recently. As far as I know:
the volatile keyword is applied to a variable so that changes made to the variable's data by one thread are reflected to the other threads;
the volatile keyword prevents per-thread caching of the data.
I did a small test.
I used an integer variable named count and put the volatile keyword on it.
Then I made two different threads that each increment the variable 10000 times, so the end result should be 20000.
But that's not always the case: with the volatile keyword I am not getting 20000 consistently, but 18534, 15000, etc., and only sometimes 20000.
But when I used the synchronized keyword, it just worked fine. Why?
Can anyone please explain this behaviour of the volatile keyword to me?
I am posting my code with the volatile keyword as well as the one with the synchronized keyword.
The following code behaves inconsistently with the volatile keyword on the variable count:
public class SynVsVol implements Runnable {
    volatile int count = 0;

    public void go() {
        for (int i = 0; i < 10000; i++) {
            count = count + 1;
        }
    }

    @Override
    public void run() {
        go();
    }

    public static void main(String[] args) {
        SynVsVol s = new SynVsVol();
        Thread t1 = new Thread(s);
        Thread t2 = new Thread(s);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        System.out.println("Total Count Value: " + s.count);
    }
}
The following code behaves correctly with the synchronized keyword on the method go().
public class SynVsVol implements Runnable {
    int count = 0;

    public synchronized void go() {
        for (int i = 0; i < 10000; i++) {
            count = count + 1;
        }
    }

    @Override
    public void run() {
        go();
    }

    public static void main(String[] args) {
        SynVsVol s = new SynVsVol();
        Thread t1 = new Thread(s);
        Thread t2 = new Thread(s);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        System.out.println("Total Count Value: " + s.count);
    }
}
count = count + 1 is not atomic. It has three steps:
read the current value of the variable
increment the value
write the new value back to the variable
These three steps from the two threads get interleaved, resulting in lost updates and an incorrect value. Use AtomicInteger.incrementAndGet() instead if you want to avoid the synchronized keyword.
So although the volatile keyword acts pretty much as you described it, that only applies to each separate operation, not to all three operations collectively.
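A minimal sketch of what the question's first version looks like with that change (only the counter type and the increment change):

import java.util.concurrent.atomic.AtomicInteger;

public class SynVsVol implements Runnable {
    final AtomicInteger count = new AtomicInteger(0);

    public void go() {
        for (int i = 0; i < 10000; i++) {
            count.incrementAndGet();   // atomic read-modify-write, no lock needed
        }
    }

    @Override
    public void run() {
        go();
    }

    public static void main(String[] args) throws InterruptedException {
        SynVsVol s = new SynVsVol();
        Thread t1 = new Thread(s);
        Thread t2 = new Thread(s);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Total Count Value: " + s.count.get());   // always 20000
    }
}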
The volatile keyword is not a synchronization primitive. It merely prevents caching of the value on the thread, but it does not prevent two threads from modifying the same value and writing it back concurrently.
Let's say two threads come to the point when they need to increment the counter, which is now set to 5. Both threads see 5, make 6 out of it, and write it back into the counter. If the counter were not volatile, both threads could have assumed that they know the value is 6, and skip the next read. However, it's volatile, so they both would read 6 back, and continue incrementing. Since the threads are not going in lock-step, you may see a value different from 10000 in the output, but there's virtually no chance that you would see 20000.
The fact that a variable is volatile does not mean every operation it's involved in is atomic. For instance, this line in SynVsVol.go():
count = count + 1;
will first read count, then increment it, and then write the result back. If some other thread executes it at the same time, the result depends on the interleaving of the operations.
Now, when you add synchronized, SynVsVol.go() executes atomically: the increment is done as a whole by a single thread, and the other one can't modify count until it is done.
Lastly, caching of member variables that are only modified within a synchronized block is much easier. The compiler can read their value when the monitor is acquired, cache it in a register, apply all changes to that register, and eventually flush it back to main memory when the monitor is released. This also holds when you call wait in a synchronized block and some other thread notifies you: cached member variables will be brought up to date, and your program will remain coherent. That's guaranteed even if the member variable is not declared volatile:
Synchronization ensures that memory writes by a thread before or
during a synchronized block are made visible in a predictable manner
to other threads which synchronize on the same monitor.
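A hedged sketch of what that guarantee looks like in practice (the class and field names are illustrative): a plain field written before notify() is reliably visible to the thread that returns from wait() on the same monitor:

// A plain (non-volatile) field handed from one thread to another via
// wait/notify on a shared monitor. The synchronized blocks provide the
// happens-before edge, so the waiter is guaranteed to see message = "done".
public class MonitorVisibility {
    private static final Object monitor = new Object();
    private static String message;            // deliberately not volatile
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (monitor) {
                while (!ready) {
                    try {
                        monitor.wait();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("waiter saw: " + message);
            }
        });
        waiter.start();

        synchronized (monitor) {
            message = "done";   // written before the monitor is released
            ready = true;
            monitor.notify();
        }
        waiter.join();
    }
}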
Your code is broken because it treats the read-and-increment operation on a volatile as atomic, which it is not. The code doesn't contain a data race, but it does contain a race condition on the int.