The tutorial http://tutorials.jenkov.com/java-concurrency/volatile.html says
Reads from and writes to other variables cannot be reordered to occur
after a write to a volatile variable, if the reads / writes originally
occurred before the write to the volatile variable. The reads / writes
before a write to a volatile variable are guaranteed to "happen
before" the write to the volatile variable.
What is meant by "before the write to the volatile variable"? Does it mean previous read/writes in the same method where we are writing to the volatile variable? Or is it a larger scope (also in methods higher up the call stack)?
The JVM can reorder operations. For example, if we have variables i and j and the code
i = 1;
j = 2;
the JVM can run this in reordered form:
j = 2;
i = 1;
But if j is marked as volatile, then the JVM runs the operations only as
i = 1;
j = 2;
write to i "happens before the write to the volatile variable" j.
The JVM ensures that a write to a volatile variable happens-before any subsequent read of it. Take two threads. It's guaranteed that, for a single thread, execution follows as-if-serial semantics: you can assume an implicit happens-before relationship between two executions in the same thread (even though the compiler is still free to reorder instructions). In other words, a single thread trivially has a total order over its instructions governed by the happens-before relationship.
A multi-threaded program has many such partial orders (every thread has a total order over its own instructions, but there is no order globally across threads), but no total order over the global instruction set. Synchronisation is all about giving your program as much total order as possible.
Coming back to volatile variables: when a thread reads one, the JVM ensures that all writes to it happened-before the read. Because of this order, everything the writing thread did before it wrote to the variable becomes visible to the thread reading from it. So yes, to answer your question, even variables written higher up the call stack will be visible to the reading thread.
I'll try to draw a visual picture. The two threads can be imagined as two parallel rails, and a write to a volatile variable can be one of the sleepers between them. You basically get a
A -----
|
|
------- B
shaped total order b/w the two threads of execution. Everything in A before the sleeper should be visible to B after the sleeper because of this total order.
The JMM is defined in terms of the happens-before relation, which we'll call ->. If a->b, then b should see everything a did. This puts constraints on reordering loads/stores.
If a is a volatile write and b is a subsequent volatile read of the same variable, then a->b. This is called the volatile variable rule.
If a occurs before b in the code, then a->b. This is called the program order rule.
If a->b and b->c, then a->c. This is called the transitivity rule.
So let's apply this to a simple example:
int a;
volatile int b;
void thread1() {
    a = 1;
    b = 1;
}
void thread2() {
    int rb = b;
    int ra = a;
    if (rb == 1 && ra == 0) System.out.println("violation");
}
So the question is: if thread2 sees rb==1, will it see ra==1?
a=1->b=1 due to program order rule.
b=1->rb=b (since we see the value 1) due to the volatile variable rule.
rb=b->ra=a due to program order rule.
Now we can apply the transitivity rule twice and conclude that a=1->ra=a. Therefore ra needs to be 1.
This means that:
a=1 and b=1 can't be reordered.
rb=b and ra=a can't be reordered,
otherwise we could end up with rb=1 and ra=0.
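To make this concrete, here is a runnable sketch of the same idea (the class name and the spin loop are my additions; the reader waits until it observes the volatile write):
class VolatileOrderingDemo {
    static int a;
    static volatile int b;
    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            a = 1;
            b = 1; // volatile write: publishes the earlier write to a
        });
        Thread reader = new Thread(() -> {
            while (b != 1) { }                          // spin until the volatile write is observed
            if (a == 0) System.out.println("violation"); // must never print
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}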
Writes to and reads from a volatile field prevent reordering of reads/writes before and after the volatile access, respectively: variable reads/writes before a write to a volatile variable cannot be reordered to happen after it, and reads/writes after a read from a volatile variable cannot be reordered to happen before it. But what is the scope of this prohibition? As I understand it, a volatile variable prevents reordering only inside the block where it is used; am I right?
Let me give a concrete example for clarity. Let's say we have such code:
int i,j,k;
volatile int l;
boolean flag = true;
void someMethod() {
i = 1;
if (flag) {
j = 2;
}
if (flag) {
k = 3;
l = 4;
}
}
Obviously, the write to l will prevent the write to k from being reordered, but will it prevent reordering of the writes to i and j with respect to l? In other words, can the writes to i and j happen after the write to l?
UPDATE 1
Thanks guys for taking your time and answering my question - I appreciate this. The problem is you're answering the wrong question. My question is about scope, not about the basic concept. The question is basically how far in the code the compiler guarantees the "happens before" relation to the volatile field.
Obviously the compiler can guarantee that inside the same code block, but what about enclosing blocks and peer blocks - that's what my question is about. @Stephen C said that volatile guarantees happens-before behavior inside the whole method body, even in the enclosing block, but I cannot find any confirmation of that. Is he right? Is there confirmation somewhere?
Let me give yet another concrete example about scoping to clarify things:
void setVolatile() {
l = 5;
}
void callTheSet() {
i = 6;
setVolatile();
}
Will the compiler prohibit reordering of the write to i in this case? Or maybe the compiler cannot / is not programmed to track what happens in other methods in the case of volatile, and the write to i can be reordered to happen after setVolatile()? Or maybe the compiler doesn't reorder method calls at all?
I mean, there has got to be a point somewhere where the compiler will not be able to track whether some code should happen before some volatile field write. Otherwise one volatile field write/read might affect the ordering of half the program, if not more. This is a rare case, but it is possible.
Moreover, look at this quote
Under the new memory model, it is still true that volatile variables cannot be reordered with each other. The difference is that it is now no longer so easy to reorder normal field accesses around them.
"Around them". This phrase implies, that there is a scope where volatile field can prevent reordering.
Obviously, the write to l will prevent the write to k from being reordered, but will it prevent reordering of the writes to i and j?
It is not entirely clear what you mean by reordering; see my comments above.
However, in the Java 5+ memory model, we can say that the writes to i and j that happened before the write to l will be visible to another thread after it has read l ... provided that nothing writes to i and j after the write to l.
This does have the effect of constraining any reordering of the instructions that write to i and j. Specifically, they can't be moved to after the memory write barrier following the write to l, because that could lead to them not being visible to the second thread.
But what is the scope of this prohibition?
There isn't a prohibition per se.
You need to understand that instructions, reordering and memory barriers are just details of a specific way of implementing the Java memory model. The model is actually defined in terms of what is guaranteed to be visible in any "well-formed execution".
As I understand volatile prevents reordering inside the block where it is used, am I right?
Actually, no. The blocks don't come into the consideration. What matters is the (program source code) order of the statements within the method.
@Stephen C said that volatile guarantees happens-before behavior inside the whole method body, even in the enclosing block, but I cannot find any confirmation of that.
The confirmation is JLS 17.4.3. It states the following:
Among all the inter-thread actions performed by each thread t, the program order of t is a total order that reflects the order in which these actions would be performed according to the intra-thread semantics of t.
A set of actions is sequentially consistent if all actions occur in a total order (the execution order) that is consistent with program order, and furthermore, each read r of a variable v sees the value written by the write w to v such that:
w comes before r in the execution order, and
there is no other write w' such that w comes before w' and w' comes before r in the execution order.
Sequential consistency is a very strong guarantee that is made about visibility and ordering in an execution of a program. Within a sequentially consistent execution, there is a total order over all individual actions (such as reads and writes) which is consistent with the order of the program, and each individual action is atomic and is immediately visible to every thread.
If a program has no data races, then all executions of the program will appear to be sequentially consistent.
Notice that there is NO mention of blocks or scopes in this definition.
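Applied to the callTheSet example from the update, a sketch (the reader method is my own addition) of why program order, not block or method structure, is what matters:
class CrossMethodExample {
    int i;
    volatile int l;
    void setVolatile() {
        l = 5;              // volatile write
    }
    void callTheSet() {
        i = 6;              // ordinary write, earlier in program order
        setVolatile();      // the volatile write inside the callee still comes
                            // after i = 6, so it publishes that write as well
    }
    void reader() {         // run in another thread
        if (l == 5) {
            System.out.println(i); // guaranteed to print 6 once l == 5 is seen
        }
    }
}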
EDIT 2
volatile ONLY guarantees the happens-before relation.
Why reordering happens in a single thread
Consider we have two fields:
int i = 0;
int j = 0;
We have a method that writes them:
void write() {
i = 1;
j = 2;
}
As you know, the compiler may reorder them. That is because the compiler thinks it does not matter which is written first: within a single thread, they effectively 'happen together'.
Why it can't reorder across threads
But now we have another method to read them in another thread:
void read() {
if(j==2) {
assert i==1;
}
}
If the compiler still reorders them, this assert may fail: j is already 2, but i is unexpectedly not yet 1, which makes it look as if i=1 happens after assert i==1.
What volatile does
volatile only guarantees the happens-before relation.
Now we add volatile
volatile int j = 0;
When we observe that j==2 is true, that means j=2 has happened; and since i=1 comes before it in program order, it must have happened as well. So the assert will never fail now.
'Preventing reordering' is just one approach the compiler uses to provide that guarantee.
Conclusion
The only thing you need to know is happens-before. Please refer to the Java specification link below. Whether reordering happens or not is just a side effect of this guarantee.
Answer to your question
Since l is volatile, the accesses to i and j always come before the access to l in someMethod. In fact, everything before the line l=4 will happen-before it.
EDIT 1
Since the post has been edited, here is a further explanation.
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
happens-before means:
If one action happens-before another, then the first is visible to and ordered before the second.
So the accesses to i and j happen-before the access to l.
reference: https://docs.oracle.com/javase/specs/jls/se10/html/jls-17.html#jls-17.4.5
Original answer
No, volatile only protects itself, though it is not easy to reorder field accesses around a volatile access.
Under the new memory model, it is still true that volatile variables cannot be reordered with each other. The difference is that it is now no longer so easy to reorder normal field accesses around them. Writing to a volatile field has the same memory effect as a monitor release, and reading from a volatile field has the same memory effect as a monitor acquire. In effect, because the new memory model places stricter constraints on reordering of volatile field accesses with other field accesses, volatile or not, anything that was visible to thread A when it writes to volatile field f becomes visible to thread B when it reads f.
The volatile keyword only guarantees that:
A write to a volatile field happens-before every subsequent read of that same volatile field.
reference: http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile
I am curious to know how a volatile variable affects OTHER fields
Volatile variables do affect other fields. The JIT compiler can reorder instructions if it determines that the reordering will not have any impact on the execution output. So if you have 6 independent variable stores, the JIT can reorder those instructions.
However, if you make a variable volatile, i.e. in your case variable l, then the JIT will not reorder any variable STORES to after the volatile STORE. And I think that makes sense, because in a multithreaded program, if I read the value of variable l as 4, then I should see i as 1, because in my program i was written before l, which is essentially program order semantics (if I am not wrong).
Note that volatile variables do two things:
The compiler will not reorder any stores to after the volatile store, and will not reorder any reads to before the volatile read.
It flushes the load/store buffers so that all processors can see the changes.
EDIT:
Good blog here: http://jpbempel.blogspot.com/2013/05/volatile-and-memory-barriers.html
Maybe I know the "real scope" you are in doubt about.
Two types of reordering are the main reasons for out-of-order instruction results:
1. Compiler optimization
2. CPU reordering (mainly caused by cache and main memory synchronization)
The volatile keyword first has to ensure the flushing of the volatile variable; at the same time, other variables are also flushed to main memory. But because of compiler reordering, some write instructions that come before the volatile variable in program order could be reordered to after it, and a reader could then see stale values of those other variables. That is why the rule "write instructions before the volatile variable are forced to run before the volatile write" exists. This optimization is done by the Java compiler or the JIT.
The main point is that compiler optimizations on instructions, like dead-code elimination or instruction reordering, usually operate within a "basic block" (apart from some other optimizations such as constant propagation, etc.). A basic block is a set of instructions without a jump instruction inside it. So in my opinion, the reordering is confined to the range of a basic block.
A basic block in source code is usually a block or the body of a method.
Also, because Java does not have inline functions and a method call is compiled to a dynamic invoke instruction, the reordering should not cross two methods.
So the scope will not be larger than a "method body", or maybe only the area of a "for" body; it is the basic-block range.
This is all my own thinking; I'm not sure if it is right, and someone can help make it more accurate.
So I am reading this book titled Java Concurrency in Practice and I am stuck on this one explanation which I cannot seem to comprehend without an example. This is the quote:
When thread A writes to a volatile variable and subsequently thread B reads that same variable, the values of all variables that were visible to A prior to writing to the volatile variable become visible to B after reading the volatile variable.
Can someone give me an example of why "the values of ALL variables that were visible to A prior to writing to the volatile variable become visible to B AFTER reading the volatile variable"?
I am confused as to why all other non-volatile variables do not become visible to B before it reads the volatile variable.
Declaring a volatile Java variable means:
The value of this variable will never be cached thread-locally: all reads and writes will go straight to "main memory".
Access to the variable acts as though it is enclosed in a synchronized block, synchronized on itself.
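As a rough sketch of that analogy (class and field names are mine; note that synchronized also provides mutual exclusion, which volatile does not):
class VisibilityExamples {
    volatile int value;            // reads and writes go straight to shared memory
    private int guarded;
    synchronized int getGuarded() { return guarded; }        // acts like a volatile read
    synchronized void setGuarded(int v) { guarded = v; }     // acts like a volatile write
}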
Just for your reference, when is volatile needed?
When multiple threads use the same variable, each thread will have its own copy of that variable in its local cache. So, when it updates the value, it is actually updated in the local cache, not in the main variable memory. The other thread which is using the same variable doesn't know anything about the values changed by the other thread. To avoid this problem, if you declare a variable as volatile, then it will not be stored in the local cache. Whenever a thread updates the value, it is updated in main memory. So, other threads can access the updated value.
From JLS §17.4.7 Well-Formed Executions
We only consider well-formed executions. An execution E = < P, A, po, so, W, V, sw, hb > is well formed if the following conditions are true:
Each read sees a write to the same variable in the execution. All reads and writes of volatile variables are volatile actions. For all reads r in A, we have W(r) in A and W(r).v = r.v. The variable r.v is volatile if and only if r is a volatile read, and the variable w.v is volatile if and only if w is a volatile write.
Happens-before order is a partial order. Happens-before order is given by the transitive closure of synchronizes-with edges and program order. It must be a valid partial order: reflexive, transitive and antisymmetric.
The execution obeys intra-thread consistency. For each thread t, the actions performed by t in A are the same as would be generated by that thread in program order in isolation, with each write w writing the value V(w), given that each read r sees the value V(W(r)). Values seen by each read are determined by the memory model. The program order given must reflect the program order in which the actions would be performed according to the intra-thread semantics of P.
The execution is happens-before consistent (§17.4.6).
The execution obeys synchronization-order consistency. For all volatile reads r in A, it is not the case that either so(r, W(r)) or that there exists a write w in A such that w.v = r.v and so(W(r), w) and so(w, r).
Useful Link : What do we really know about non-blocking concurrency in Java?
Thread B may have a CPU-local cache of those variables. A read of a volatile variable ensures that any intermediate cache flush from a previous write to the volatile is observed.
For an example, read the following link, which concludes with "Fixing Double-Checked Locking using Volatile":
http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
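The fixed idiom described in that article looks roughly like this (a sketch; the class name is mine):
class Helper {
    private static volatile Helper instance;   // volatile is what makes this idiom safe
    static Helper getInstance() {
        Helper local = instance;               // first, unsynchronized check
        if (local == null) {
            synchronized (Helper.class) {
                local = instance;
                if (local == null) {
                    local = new Helper();
                    instance = local;          // volatile write publishes the fully
                }                              // constructed object to other threads
            }
        }
        return local;
    }
}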
If a variable is non-volatile, then the compiler and the CPU may re-order instructions freely, as they see fit, in order to optimize for performance.
If the variable is now declared volatile, then the compiler no longer attempts to optimize accesses (reads and writes) to that variable. It may however continue to optimize access for other variables.
At runtime, when a volatile variable is accessed, the JVM generates appropriate memory barrier instructions to the CPU. The memory barrier serves the same purpose: the CPU is also prevented from re-ordering instructions.
When a volatile variable is written to (by thread A), all writes to any other variables are completed (or will at least appear to be) and made visible to A before the write to the volatile variable; this is often due to a memory-write barrier instruction. Likewise, any reads of other variables (by thread B) will be completed (or will appear to be) only after the read of the volatile variable; this is often due to a memory-read barrier instruction. This ordering of instructions enforced by the barrier(s) means that all writes visible to A will be visible to B. It does not, however, mean that no re-ordering of instructions has happened (the compiler may still have re-ordered other instructions); it simply means that if any writes were visible to A, they will be visible to B. In simpler terms, it means that strict program order is not maintained.
I will point to this write-up on Memory Barriers and JVM Concurrency if you want to understand, in finer detail, how the JVM issues memory barrier instructions.
Related questions
What is a memory fence?
What are some tricks that a processor does to optimize code?
Threads are allowed to cache variable values that other threads may have updated since they were read. The volatile keyword forces all threads not to cache the value.
This is simply an additional bonus the memory model gives you, if you work with volatile variables.
Normally (i.e. in the absence of volatile variables and synchronization), the VM can make variables from one thread visible to other threads in any order it wants, or not at all. E.g. the reading thread could see some mixture of earlier versions of another thread's variable assignments. This is caused by the threads possibly running on different CPUs with their own caches, which are only sometimes copied to the "main memory", and additionally by code reordering for optimization purposes.
If you use a volatile variable, then as soon as thread B reads some value X from it, the VM makes sure that anything thread A wrote before it wrote X is also visible to B (and also, transitively, everything that was guaranteed visible to A).
Similar guarantees are given for synchronized blocks and other types of locks.
I am going through a Java threads book. I came across this statement:
Statement 1: "volatile variables can be safely used only for a single load or store operation and can't be applied to long or double variables. These restrictions make the use of volatile variables uncommon"
I did not get what a single load or store operation means here. Why can't volatile be applied to long or double variables?
Statement 2:- "A Volatile integer can not be used with the ++ operator because ++ operator contains
multiple instructions.The AtomicInteger class has a method that allows the integer it holds to be
incremented atomically."
Why Volatile integer can not be used with the ++ operator and how AtomicInteger addresses it?
Statement 1:- "volatile variables can be safely used only for single load or store operation and can't be applied to long or double variales. These restrictions make the use of volatile variables uncommon"
What?! I believe this is simply flat-out wrong. Maybe your book is out of date.
Statement 2:- "A Volatile integer can not be used with the ++ operator because ++ operator contains multiple instructions.The AtomicInteger class has a method that allows the integer it holds to be incremented atomically."
Exactly what it says. The ++ operator actually translates to machine code like this (in Java-like pseudocode):
sync_CPU_caches();
int processorRegister = variable;
processorRegister = processorRegister + 1;
variable = processorRegister;
sync_CPU_caches();
This is not thread-safe, because even though it has a memory barrier, and reads atomically, and writes atomically, it is not guaranteed that you won't get a thread switch in the middle, and processor registers are local to a CPU core (think of them as "local variables" inside the CPU core). But an AtomicInteger is thread-safe: it is probably implemented using special machine code instructions such as compare-and-swap.
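A sketch of the difference (class and field names are mine), assuming several threads call incrementBoth() concurrently:
import java.util.concurrent.atomic.AtomicInteger;
class CounterDemo {
    static volatile int plainCounter;                         // ++ is NOT atomic here
    static final AtomicInteger atomicCounter = new AtomicInteger();
    static void incrementBoth() {
        plainCounter++;                    // read-modify-write: updates can be lost
        atomicCounter.incrementAndGet();   // atomic read-modify-write: no lost updates
    }
}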
The main purpose of volatile variables is not to provide immediate thread-safe access to that variable, but to ensure so-called happens-before safety.
Theoretically, a declaration of
volatile int i = 0;
and
int i = 0;
makes no difference, as a 32-bit word is written atomically anyway (on 32-bit and higher machines, to be precise). Since pointers are 32/64-bit ints internally as well, there is basically only one operation that volatile makes atomic, and that is using a 64-bit long in a 32-bit environment.
The happens-before guarantee, however, is what actually makes a difference in the above example. To understand this, you need to know that threads don't always use the actual memory of the variable in question but might make copies of it to speed up execution, and can re-order statements for optimization. Now if you have something like:
Thread A: value = 1; doIt = true;
Thread B: if (doIt) { doDoIt(value); }
It is possible that in Thread B doIt is true but value is not yet 1, because the order of execution might have been changed by the JVM, or the new value has simply not yet been propagated to Thread B's copy of value.
If doIt is declared volatile instead, then at the moment of accessing it, the JVM ensures that all code before that access has already been executed and its effects made visible. So the above example is the actual reason to use volatile.
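Written out as a class (field and method names are mine), the example above becomes:
class FlagExample {
    int value;
    volatile boolean doIt;       // the volatile write publishes the earlier write to value
    void threadA() {
        value = 1;
        doIt = true;             // everything before this write becomes visible
    }
    void threadB() {
        if (doIt) {              // volatile read
            doDoIt(value);       // guaranteed to see value == 1
        }
    }
    void doDoIt(int v) {
        System.out.println(v);
    }
}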
I am trying to understand why this example is a correctly synchronized program:
a - volatile
Thread1:
x=a
Thread2:
a=5
There are conflicting accesses (there is a write to and a read of a), so in every sequentially consistent execution there must be a happens-before relation between those accesses.
Suppose one of the sequential executions:
1. x=a
2. a=5
Is 1 happens-before 2, why?
Is 1 happens-before 2, why?
I'm not 100% sure I understand your question.
If you have a volatile variable a, and one thread is reading from it and another is writing to it, those accesses can happen in either order. It is a race condition. What is guaranteed by the JVM and the Java Memory Model (JMM) depends on which operation happens first.
The write could have just happened and the read sees the updated value. Or the write could happen after the read. So x could be either 5 or the previous value of a.
every sequential consistency execution must be happens-before relation between that accesses
I'm not sure what this means so I'll try to be specific. The "happens before relation" with volatile means that all previous memory writes to a volatile variable prior to a read of the same variable are guaranteed to have finished. But this guarantee in no way explains the timing between the two volatile operations which is subject to the race condition. The reader is guaranteed to have seen the write, but only if the write happened before the read.
You might think this is a pretty weak guarantee, but in threads, whose performance is dramatically improved by using local CPU cache, reading the value of a field might come from a cached memory segment instead of central memory. The guarantee is critical to ensure that the local thread memory is invalidated and updated when a volatile read occurs so that threads can share data appropriately.
Again, the JVM and the JMM guarantee that if you are reading from a volatile field a, then any writes to the same field that have happened before the read, will be seen by it -- the value written will be properly published and visible to the reading thread. However, this guarantee in no way determines the ordering. It doesn't say that the write has to happen before the read.
No, a volatile read that comes before (in synchronization order) a volatile write of the same variable does not necessarily happen-before the volatile write.
This means they can be in a "data race", because they are "conflicting accesses not ordered by a happens-before relationship". If that's true pretty much all programs contain data races:) But it's probably a spec bug. A volatile read and write should never be considered a data race. If all variables in a program are volatile, all executions are trivially sequentially consistent. see http://cs.oswego.edu/pipermail/concurrency-interest/2012-January/008927.html
Sorry, but you cannot say for certain how the JVM will optimize the code based on its 'memory model'. You have to use the high-level tools of Java to define what you want.
So volatile means only that there will be no "inter-thread cache" used for the variables.
If you want a stricter order, you have to use synchronized blocks.
http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html
Volatile and happens-before are only useful when the read of the field drives some condition. For example:
volatile int a;
int b = 0;
Thread-1:
b = 5;
a = 10;
Thread-2
c = b + a;
In this case there is no happens-before relationship: a can be either 10 or 0 and b can be either 5 or 0, so as a result c could be 0, 5, 10 or 15. If the read of a drives some other condition, then happens-before is established, for instance:
int b = 0;
volatile int a = 0;
Thread-1:
b = 5
a = 10;
Thread 2:
if(a == 10){
c = b + a;
}
In this case you are ensured that c == 15, because the read of a == 10 implies that the write b = 5 happens-before the write a = 10, which happens-before the read of a.
Edit: Updated the addition order, as Gray noted the inconsistency.
Chapter 17 of the JLS introduces a concept: happens-before consistency.
A set of actions A is happens-before consistent if for all reads r in A, where W(r) is the write action seen by r, it is not the case that either hb(r, W(r)) or that there exists a write w in A such that w.v = r.v and hb(W(r), w) and hb(w, r)"
In my understanding, it is equivalent to the following wording:
..., it is the case that neither ... nor ...
So my first two questions are:
is my understanding right?
what does "w.v = r.v" mean?
It also gives an example, 17.4.5-1:
Thread 1 Thread 2
B = 1; A = 2;
r2 = A; r1 = B;
In first execution order:
1: B = 1;
3: A = 2;
2: r2 = A; // sees initial write of 0
4: r1 = B; // sees initial write of 0
The order itself already tells us that the two threads are executed alternately, so my third question is: what does the number on the left mean?
In my understanding, the reason both r2 and r1 can see the initial write of 0 is that neither A nor B is a volatile field. So my fourth question is: is my understanding right?
In second execution order:
1: r2 = A; // sees write of A = 2
3: r1 = B; // sees write of B = 1
2: B = 1;
4: A = 2;
According to the definition of happens-before consistency, it is not difficult to see that this execution order is happens-before consistent (if my first understanding is correct).
So my fifth and sixth questions are: does this situation (reads seeing writes that occur later) exist in the real world? If it does, could you give me a real example?
Each thread can be on a different core with its own private registers, which Java can use to hold the values of variables, unless you force access to coherent shared memory. This means that one thread can write a value that is held in a register, and this value will not be visible to another thread for some time, such as the duration of a loop or a whole function (milliseconds is not uncommon).
A more extreme example is that the reading thread's code is optimised with the assumption that since it never changes the value, it doesn't need to read it from memory. In this case the optimised code never sees the change performed by another thread.
In both cases, the use of volatile ensures that reads and write occur in a consistent order and both threads see the same value. This is sometimes described as always reading from main memory, though it doesn't have to be the case because the caches can talk to each other directly. (So the performance hit is much smaller than you might expect).
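The classic demonstration of that optimisation problem is a stop flag; without volatile, the JIT may hoist the read out of the loop and the worker might never stop. A sketch (names are mine):
class StopFlagDemo {
    static volatile boolean running = true; // remove volatile and the loop may never exit
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) { }             // the volatile read cannot be hoisted away
            System.out.println("stopped");
        });
        worker.start();
        Thread.sleep(100);
        running = false;                    // volatile write becomes visible to the worker
        worker.join();
    }
}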
On normal CPUs, caches are "coherent" (can't hold stale / conflicting values) and transparent, not managed manually. Making data visible between threads just means doing an actual load or store instruction in asm to access memory (through the data caches), and optionally waiting for the store buffer to drain to give ordering wrt. other later operations.
happens-before
Let's take a look at definitions in concurrency theory:
Atomicity - a property of an operation that is executed completely, as a single transaction, and cannot be executed partially. For example, atomic operations.
Visibility - if one thread makes changes, they are visible to other threads. volatile provided this even before Java 5; since Java 5 it also establishes happens-before.
Ordering - the compiler is able to change the ordering of operations/instructions in the source code to perform some optimisations.
For example, happens-before acts as a kind of memory barrier which helps to solve the visibility and ordering issues. Good examples of happens-before are volatile and a synchronized monitor.
A good example of atomicity is the compare-and-swap (CAS) realization of the check-then-act (CTA) pattern, which has to be atomic and allows a variable to be changed safely in a multithreaded environment. You can write your own implementation of CTA using:
volatile + synchronized
java.util.concurrent.atomic (since Java 5), which builds on sun.misc.Unsafe (memory allocation, instantiation without a constructor call, ...) and uses JNI and CPU support.
The CAS algorithm has three parameters: A (address), O (old value), N (new value).
If the value at A (address) == O (old value), then put N (new value) into A (address);
else set O (old value) = the value from A (address) and repeat these actions again.
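A sketch of that CAS loop using java.util.concurrent.atomic instead of touching sun.misc.Unsafe directly (class and method names are mine):
import java.util.concurrent.atomic.AtomicInteger;
class CasIncrement {
    static final AtomicInteger counter = new AtomicInteger(); // A (address)
    static int increment() {
        int oldValue;                                          // O (old value)
        int newValue;                                          // N (new value)
        do {
            oldValue = counter.get();
            newValue = oldValue + 1;
        } while (!counter.compareAndSet(oldValue, newValue));  // retry if another thread won
        return newValue;
    }
}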
Happens-before
Official doc
Two actions can be ordered by a happens-before relationship. If one action happens-before another, then the first is visible to and ordered before the second.
volatile as an example
A write to a volatile field happens-before every subsequent read of that field.
Let's take a look at the example:
// Definitions
int a = 1;
int b = 2;
volatile boolean myVolatile = false;
// Thread A. Program order
{
a = 5;
b = 6;
myVolatile = true; // <-- write
}
//Thread B. Program order
{
//Thread.sleep(1000); //just to show that writing into `myVolatile`(Thread A) was executed before
System.out.println(myVolatile); // <-- read
System.out.println(a); //prints 5, not 1
System.out.println(b); //prints 6, not 2
}
Visibility - when Thread A changes/writes a volatile variable, it also pushes all previous changes into RAM (main memory); as a result, all non-volatile variables will be up to date and visible to other threads.
Ordering:
All operations before the write to the volatile variable in Thread A will be executed before it. The JVM is able to reorder them, but it guarantees that no operation that comes before the write to the volatile variable in Thread A will be executed after it.
All operations after the read of the volatile variable in Thread B will be executed after it. The JVM is able to reorder them, but it guarantees that no operation that comes after the read of the volatile variable in Thread B will be executed before it.
The Java Memory Model defines a partial ordering over all the actions of your program, which is called happens-before.
To guarantee that a thread Y is able to see the side effects of an action X (regardless of whether X occurred in a different thread or not), a happens-before relationship is defined between X and Y.
If such a relationship is not present the JVM may re-order the operations of the program.
Now, if a variable is shared and accessed by many threads, and written by (at least) one thread, and the reads and writes are not ordered by the happens-before relationship, then you have a data race.
In a correct program there are no data races.
An example: two threads A and B synchronized on lock X.
Thread A acquires the lock (now Thread B is blocked), does its write operations, and then releases lock X. Now Thread B acquires lock X, and since all the actions of Thread A were done before releasing lock X, they are ordered before the actions of Thread B, which acquired lock X after Thread A (and they are also visible to Thread B).
Note that this occurs for actions synchronized on the same lock. There is no happens-before relationship among threads synchronized on different locks.
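A sketch of that example (field and lock names are mine):
class LockExample {
    private final Object lockX = new Object();
    private int shared;
    void threadA() {
        synchronized (lockX) {      // acquire lock X
            shared = 42;            // writes made while holding the lock...
        }                           // ...are published when the lock is released
    }
    void threadB() {
        synchronized (lockX) {      // acquiring the same lock afterwards
            System.out.println(shared); // sees 42 if A released the lock first
        }
    }
}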
In substance that is correct. The main thing to take away from this is: unless you use some form of synchronization, there is no guarantee that a read that comes after a write in your program order sees the effect of that write, as the statements might have been reordered.
does it exist this situation (reads see writes that occur later) in real world? If it does, could you give me a real example?
From a wall clock's perspective, obviously, a read can't see the effect of a write that has not happened yet.
From a program order perspective, because statements can be reordered when there isn't proper synchronization (a happens-before relationship), a read that comes before a write in your program could see the effect of that write during execution, because it was actually executed after the write by the JVM.
Q1: is my understanding right?
A: Yes
Q2: what does "w.v = r.v" mean?
A: It means that the write w and the read r are accesses to the same variable v.
Q3: What does left number mean?
A: I think it is a statement ID, as shown in "Table 17.4-A. Surprising results caused by statement reordering - original code". But you can ignore it, because it does not apply to the content of "Another execution order that is happens-before consistent is:". So the number on the left carries no meaning there; do not read too much into it.
Q4: In my understanding, the reason of both r2 and r1 can see initial write of 0 is both A and B are not volatile field. So my fourth quesiton is: whether my understanding is right?
A: That is one reason; reordering can also cause it. "A program must be correctly synchronized to avoid the kinds of counterintuitive behaviors that can be observed when code is reordered."
Q5&6: In second execution order ... So my fifth and sixth questions are: does it exist this situation (reads see writes that occur later) in real world? If it does, could you give me a real example?
A: Yes. With no synchronization in the code, each read by a thread can see either the write of the initial value or the write by the other thread.
time 1: Thread 2: A=2
time 2: Thread 1: B=1 // Without synchronization, B=1 of Thread 1 can be interleaved here
time 3: Thread 2: r1=B // r1 value is 1
time 4: Thread 1: r2=A // r2 value is 2
Note "An execution is happens-before consistent if its set of actions is happens-before consistent"