How is a StoreStore barrier mapped to instructions under x86?

The JSR133 cookbook says that:
StoreStore Barriers The sequence: Store1; StoreStore; Store2 ensures
that Store1's data are visible to other processors (i.e., flushed to
memory) before the data associated with Store2 and all subsequent
store instructions. In general, StoreStore barriers are needed on
processors that do not otherwise guarantee strict ordering of flushes
from write buffers and/or caches to other processors or main memory.
And, from the cookbook we know that, for a synchronized block, the Java compiler will insert some barriers to prevent possible reordering:
MonitorEnter
[LoadLoad]   <== inserted barrier
[LoadStore]  <== inserted barrier
...
[LoadStore]  <== inserted barrier
[StoreStore] <== inserted barrier
MonitorExit
However, x86 does not allow reordering of Read-Read, Read-Write, or Write-Write, so all of the above barriers are mapped to no-ops. That is equivalent to saying that no barriers will be inserted between MonitorEnter and MonitorExit on x86 processors.
My confusion is this: if we map StoreStore to a no-op under x86, how is visibility guaranteed?
To be more detailed: x86 DOES employ a store buffer, so to make writes performed in the critical section visible to other processors, we need to flush the store buffer, hence a write barrier is needed. From the perspective of visibility, shouldn't a StoreStore be mapped to sfence/mfence/LOCK#? But the cookbook says it should be mapped to a no-op from the perspective of reordering prevention.
Or is the key point that the visibility guarantee is provided by MonitorEnter and
MonitorExit themselves? If that is the case, I think they might use a so-called read barrier and write barrier to guarantee visibility, right?

Although we can say that a memory barrier is capable of:
guaranteeing visibility (by flushing the store buffer and/or applying the invalidate queue)
preventing reordering (inhibiting reordering between loads and/or stores that precede or follow the barrier)
this does NOT mean that a memory barrier must always do both things together. The Unsafe.putOrderedXXX methods provided by the Sun JDK are such an example:
SomeClass temp = new SomeClass(); //S1
unsafe.putOrderedObject(this, valueOffset, null);
Object target = temp; //S2
unsafe.putOrderedObject here serves as a StoreStore barrier, hence it prevents
reordering of S1 and S2, but it does NOT guarantee that the result of S1 will be visible to other processors/threads right away (since there is no such need).
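For reference, the same ordered-store behaviour is exposed through the supported API as AtomicReference.lazySet (and, since Java 9, VarHandle.setRelease). A minimal sketch, with invented class and field names:

import java.util.concurrent.atomic.AtomicReference;

class OrderedPublisher {
    private int payload;                                           // plain field
    private final AtomicReference<Object> slot = new AtomicReference<>();

    void publish() {
        payload = 42;        // S1: ordinary store
        slot.lazySet(this);  // ordered store: S1 cannot be reordered below it,
                             // but no StoreLoad fence, so no "flush immediately" guarantee
    }
}

In the cookbook's terms the ordered store behaves like Store1; StoreStore; Store2, without paying for a StoreLoad fence.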
More info on Unsafe and volatile:
Where is sun.misc.Unsafe documented?
http://mishadoff.com/blog/java-magic-part-4-sun-dot-misc-dot-unsafe/
http://jpbempel.blogspot.com/2013/05/volatile-and-memory-barriers.html
That is to say, under x86, for the purpose of reordering prevention a StoreStore can be mapped to a no-op; for the purpose of a visibility guarantee, a StoreStore would have to be mapped to some instruction like sfence/mfence/LOCK#.
Another example is the final keyword. To enforce the semantics of final (an observed field's value cannot appear to change), the compiler must insert a StoreStore barrier between the writes to final fields and the return from the constructor. The reason is to ensure that the writes to the final fields become visible to other processors no later than the write of the constructed object's reference to a reference variable. This is really a requirement on ordering rather than on visibility, so it is not necessary for the writes to the final fields to be flushed (store buffer and/or cache) before the constructor returns. Therefore, under x86, the JVM does NOT insert any barrier for final fields.
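As a minimal sketch of that final-field guarantee (class names invented):

class Config {
    final int threshold;

    Config(int t) {
        threshold = t;
        // conceptually: [StoreStore] here, before the constructed reference can escape
    }
}

class Holder {
    static Config shared;            // deliberately not volatile

    static void publish() {
        shared = new Config(10);     // any thread that later sees shared != null
    }                                // is guaranteed to also see threshold == 10
}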

Does 'volatile' guarantee that any thread reads the most recently written value?

From the book Effective Java:
While the volatile modifier performs no mutual exclusion, it guarantees that any thread that reads the field will see the most recently written value
SO and many other sources claim similar things.
Is this true?
I mean really true, not a close-enough model, or true only on x86, or only in Oracle JVMs, or some definition of "most recently written" that's not the standard English interpretation...
Other sources (SO example) have said that volatile in Java is like acquire/release semantics in C++. Which I think do not offer the guarantee from the quote.
I found that in the JLS 17.4.4 it says "A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order)." But I don't quite understand.
There are quite a few sources both for and against this, so I'm hoping the answer can show that many of those (on one side or the other) are indeed wrong, for example with a reference to the spec or counter-example code.
Is this true?
I mean really true, not a close-enough model, or true only on x86, or only in Oracle JVMs, or some definition of "most recently written" that's not the standard English interpretation...
Yes, at least in the sense that a correct implementation of Java gives you this guarantee.
Unless you are using some exotic, experimental Java compiler/JVM (*), you can essentially take this as true.
From JLS 17.4.5:
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
(*) As Stephen C points out, such an exotic implementation that doesn't implement the memory model semantics described in the language spec can't usefully (or even legally) be described as "Java".
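As a concrete sketch of what that guarantee buys you (field names invented):

class Publication {
    int data;                   // plain field
    volatile boolean ready;     // volatile field

    void writer() {             // runs on thread A
        data = 42;
        ready = true;           // volatile write
    }

    void reader() {             // runs on thread B
        if (ready) {            // volatile read that observes the write above
            System.out.println(data);   // guaranteed to print 42 (happens-before)
        }
    }
}

If reader() observes ready == true, the happens-before edge from the volatile write also makes the earlier plain write to data visible.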
The quote per se is correct in terms of what it tries to prove, but it is incorrect on a broader view.
It tries to make a distinction between sequential consistency and release/acquire semantics, at least in my understanding. The difference between these two is rather "thin", but very important. I have tried to simplify the difference at the beginning of this answer or here.
The author is trying to say that volatile offers sequential consistency, as implied by:
"... it guarantees that any thread ..."
If you look at the JLS, it has this sentence:
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
The tricky part there is "subsequent" and its meaning, and it has been discussed here. What it really means is "subsequent that observes that write". So happens-before is guaranteed when the reader observes the value that the writer has written.
This already implies that a write is not necessarily seen on the next read, and this can be the case where speculative execution is allowed. So in this regard, the quote is misleading.
The quote that you found:
A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order)
is complicated to understand without a much broader context. In simple words, it establishes a synchronizes-with order (and implicitly happens-before) between two threads, where the volatile variable v is the shared variable. Here is an answer with a broader explanation that should make more sense.
It is not true. The JMM is based on sequential consistency, and with sequential consistency real-time ordering isn't guaranteed; for that you need linearizability. In other words, reads and writes can be skewed in time as long as program order isn't violated (or as long as it can't be proven that program order was violated).
A read of a volatile variable a needs to see the most recently written value before it in the memory order. But that doesn't imply real-time ordering.
Good read about the topic:
https://concurrency-interest.altair.cs.oswego.narkive.com/G8KjyUtg/relativity-of-guarantees-provided-by-volatile.
I'll make it concrete:
Imagine there are 2 CPUs and a (volatile) variable A with initial value 0. CPU1 does a store A=1 and CPU2 does a load of A, and both CPUs have the cache line containing A in the SHARED state.
The store is first speculatively executed and written to the store buffer; eventually the store commits and retires, but since the stored value is still in the store buffer, it isn't visible yet to CPU2. Up to this point it wasn't required for the cache line to be in the EXCLUSIVE/MODIFIED state, so the cache line on CPU2 still contains the old value and hence CPU2 can still read the old value.
So in the real-time order, the write of A=1 is ordered before the read of A=0, but in the synchronization order, the write of A=1 is ordered after the read of A=0.
Only when the store leaves the store buffer and wants to enter the L1 cache is a request for ownership (RFO) sent to all other CPUs, which sets the cache line containing A to INVALID on CPU2 (RFO prefetching I'll leave out of the discussion). If CPU2 now reads A, it is guaranteed to see A=1 (the request will block until CPU1 has completed the store to the L1 cache).
On acknowledgement of the RFO, the cache line is set to MODIFIED on CPU1 and the store is written to the L1 cache.
So there is a period of time between when the store is executed/retired and when it is visible to another CPU. The only way to detect this would be to add special measuring equipment to the CPUs.
I believe a similar delaying effect can happen on the reading side with invalidation queues.
In practice this will not be an issue, because store buffers have a limited capacity and need to be drained eventually (so a write can't stay invisible indefinitely). So in day-to-day usage you could say that a volatile read reads the most recent write.
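To put the same point in Java terms, here is a hedged sketch (class name invented); both outputs are legal because the JMM constrains the synchronization order, not wall-clock time:

class RealTimeVsSyncOrder {
    static volatile int a = 0;

    public static void main(String[] args) {
        new Thread(() -> a = 1).start();                   // "CPU1": the store may linger
                                                           // briefly in the store buffer
        new Thread(() -> System.out.println(a)).start();   // "CPU2": printing 0 or 1 are
                                                           // both legal outcomes
    }
}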
A Java volatile write/read provides release/acquire semantics, but keep in mind that a volatile write/read is stronger than release/acquire semantics: a volatile write/read is sequentially consistent, and release/acquire semantics is not.

The Volatile Keyword and CPU Cache Coherence Protocol

The CPU already guarantees cache coherence via protocols like MESI. Why do we also need volatile in some languages (like Java) to ensure visibility between threads?
The likely reason is that those protocols aren't enabled at boot and must be triggered by some instructions like LOCK.
If that is really the case, why doesn't the CPU enable the protocol at boot?
Volatile prevents 3 different flavors of problems:
visibility
reordering
atomicity
I'm assuming x86.
First of all, caches on x86 are always coherent. So it won't happen that, after one CPU commits a store to some variable to the cache, another CPU still loads the old value of that variable. This is the domain of the MESI protocol.
Assuming that every put and get in the Java bytecode is translated (and not optimized away) to a store and a load on the CPU, then even without volatile, every get would see the most recent put to the same address.
The issue here is that the compiler (JIT in this case) has a lot of freedom to optimize code. For example if it detects that the same field is read in a loop, it could decide to hoist that variable out of the loop as is shown below.
for (...) {
    int tmp = a;
    println(tmp);
}
After hoisting:
int tmp = a;
for (...) {
    println(tmp);
}
This is fine if that field is only touched by 1 thread. But if the field is updated by another thread, the first thread will never see the change. Using volatile prevents such visibility problems and this is effectively the behavior of:
C style volatile
the Java volatile before the Java memory model was introduced with JSR-133.
A VarHandle with opaque access mode.
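A minimal sketch of the symptom and the fix (names invented); without volatile the JIT may hoist the read of the flag out of the loop as shown above, while volatile forces every iteration to re-read it:

class Worker implements Runnable {
    private volatile boolean running = true;   // remove 'volatile' and the loop below
                                               // may never observe stop()
    @Override
    public void run() {
        while (running) {
            // do work
        }
    }

    void stop() {
        running = false;
    }
}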
Then there is another very important aspect of volatile: volatile prevents loads and stores to different addresses in the instruction stream executed by some CPU from being reordered. The JIT compiler and the CPU have a lot of liberty to reorder loads and stores, although on x86 only older stores can be reordered with newer loads to a different address, due to store buffers.
So imagine the following code:
int a;
volatile int b;

thread1:
    a = 1;
    b = 1;

thread2:
    if (b == 1) print(a);
The fact that b is volatile prevents the store a=1 from jumping below the store b=1. And it also prevents the load of a from jumping above the load of b. So this way thread 2 is guaranteed to see a=1 when it reads b==1.
So using volatile, you can ensure that non-volatile fields are visible to other threads.
If you want to understand how volatile works, I would suggest digging into the Java memory model, which is expressed in synchronizes-with and happens-before rules as Margeret Bloom already indicated. I have given some low level details, but in the case of Java it is best to work with this high level model instead of thinking in terms of hardware. Thinking exclusively in terms of hardware/fences is only for the experts, non-portable and very fragile.

Memory Barrier Vs CAS

I find that CAS will flush all CPU write cache to main memory. Is this similar to a memory barrier?
If this is true, does this mean CAS can make java Happens-Before work?
To clarify:
The CAS here is a CPU instruction.
The barrier I mean is a StoreLoad barrier, because what I care about is whether data written before the CAS can be read after the CAS.
More Detail:
I have this question because I am writing a fork-join style construct in Java. The implementation is like this:
{
    // initialize result container
    Object[] result = new Object[numberOfWorkers];   // numberOfWorkers = worker count
    // count of workers that have not finished yet
    AtomicInteger state = new AtomicInteger(result.length);
}

// worker thread i
{
    result[i] = new Object();
    // this is a CAS operation
    state.getAndDecrement();
    if (state.get() == 0) {
        // do something using result array
    }
}
I want to know whether the "do something using result array" part can see all the result elements written by the other worker threads.
I find that CAS will flush all CPU write cache to main memory. Is this similar to a memory barrier?
It depends on what you mean by CAS. (A specific hardware instruction? An implementation strategy used in the implementation of some Java class?)
It depends on what kind of memory barrier you are talking about. There are a number of different kinds ...
It is not necessarily true that a CAS instruction flushes all dirty cache lines. It depends on how a particular instruction set / hardware implements the CAS instruction.
It is unclear what you mean by "make happens-before work". Certainly, under some circumstance a CAS instruction would provide the necessary memory coherency properties for a specific happens-before relationship. But not necessarily all relationships. It would depend on how the CAS instruction is implemented by the hardware.
To be honest, unless you are actually writing a Java compiler, you would do better not to try to understand the intricacies of what a JIT compiler needs to do to implement the Java Memory Model. Just apply the happens-before rules.
UPDATE
It turns out from your recent updates and comments that your actual question is about the behavior of AtomicInteger operations.
The memory semantics of the atomic types are specified in the package javadoc for java.util.concurrent.atomic as follows:
The memory effects for accesses and updates of atomics generally follow the rules for volatiles, as stated in The Java Language Specification (17.4 Memory Model):
get has the memory effects of reading a volatile variable.
set has the memory effects of writing (assigning) a volatile variable.
lazySet has the memory effects of writing (assigning) a volatile variable except that it permits reorderings with subsequent (but not previous) memory actions that do not themselves impose reordering constraints with ordinary non-volatile writes. Among other usage contexts, lazySet may apply when nulling out, for the sake of garbage collection, a reference that is never accessed again.
weakCompareAndSet atomically reads and conditionally writes a variable but does not create any happens-before orderings, so provides no guarantees with respect to previous or subsequent reads and writes of any variables other than the target of the weakCompareAndSet.
compareAndSet and all other read-and-update operations such as getAndIncrement have the memory effects of both reading and writing volatile variables.
As you can see, operations on the atomic types are specified to have memory semantics equivalent to volatile variables. This should be sufficient to reason about your use of Java atomic types ... without resorting to dubious analogies with CAS instructions and memory barriers.
Your example is incomplete and it is difficult to understand what it is trying to do. Therefore, I can't comment on its correctness. However, you should be able to analyze it yourself using happens-before logic, etc.
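For illustration, a minimal sketch that relies on those volatile-equivalent memory effects (class and field names invented):

import java.util.concurrent.atomic.AtomicInteger;

class AtomicPublish {
    int data;                                        // plain field
    final AtomicInteger flag = new AtomicInteger(0);

    void writer() {
        data = 42;
        flag.incrementAndGet();          // memory effects of a volatile read and write
    }

    void reader() {
        if (flag.get() == 1) {           // memory effects of a volatile read
            System.out.println(data);    // prints 42: visible via happens-before
        }
    }
}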
I find that CAS will flush all CPU write cache to main memory.
Is this similar to a memory barrier?
A CAS in Java on x86 is implemented using a lock prefix, and which instruction is actually used depends on the type of CAS; but that isn't that relevant for this discussion. A locked instruction is effectively a full barrier, so it includes all 4 fences: LoadLoad/LoadStore/StoreLoad/StoreStore. Since x86 provides all but StoreLoad due to TSO, only the StoreLoad needs to be added; just as with a volatile write.
A StoreLoad doesn't force changes to be written to main memory; it only forces the CPU to wait with executing loads until the store buffer has been drained to the L1d. However, with MESI (Intel) based cache coherence protocols, it can happen that a cache line that is in MODIFIED state on a different CPU needs to be flushed to main memory before it can be returned as EXCLUSIVE. With MOESI (AMD) based cache coherence protocols, this is not an issue. If the cache line is already in MODIFIED or EXCLUSIVE state on the core doing the StoreLoad, the StoreLoad doesn't cause the cache line to be flushed to main memory. The cache is the source of truth.
If this is true, does this mean CAS can make java Happens-Before work?
From a memory model perspective, a successful CAS in java is nothing else than a volatile read followed by a volatile write. So there is a happens before relation between a volatile write of some field on some object instance and a subsequent volatile read on the same field on the same object instance.
Since you are working with Java, I would focus on the Java Memory Model and not too much on how it is implemented in the hardware. The JMM is allowing for executions that can't be explained based purely by thinking in fences.
Regarding your example:
result[i] = new Object();
// this is a CAS operation
state.getAndDecrement();
if (state.get() == 0) {
    // do something using result array
}
I'm not sure what the intended logic is. In your example, multiple threads at the same time could see that the state is 0, so all could start to do something with the array. If this behavior is undesirable, then this is caused by a race condition. I would use something like this:
result[i] = new Object();
// this is a CAS operation; getAndDecrement returns the value *before* the decrement
int s = state.getAndDecrement();
if (s == 1) {
    // this was the last outstanding worker, so use the result array here
}
Now the other question is if there is a data race on the array content. There is a happens-before edge between the write to the array content and the write to 'state' (program order rule). There is a happens before edge between the write of the state and the read (volatile variable rule) and there is a happens before relation between the read of the state and the read of the array content (program order rule). So there is a happens before edge between writing to the array and reading its content in this particular example due to the transitive nature of the happens-before relation.
Personally I would not try to be too smart here and would use something less error-prone like an AtomicReferenceArray; then at least you don't need to worry about a missing happens-before edge between the write of an array element and its read.
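A hedged sketch of that suggestion (names invented); AtomicReferenceArray gives volatile-strength element writes and reads, and the counter decides which worker consumes the results:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

class ForkJoinResults {
    final AtomicReferenceArray<Object> result;
    final AtomicInteger remaining;

    ForkJoinResults(int workers) {
        result = new AtomicReferenceArray<>(workers);
        remaining = new AtomicInteger(workers);
    }

    void workerDone(int i) {
        result.set(i, new Object());               // volatile-strength element write
        if (remaining.decrementAndGet() == 0) {    // exactly one worker observes 0
            // safe to read every result.get(j) here
        }
    }
}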

How does the JVM guarantee the visibility of member variable modifications in the referenced object when using synchronized?

I want to know how does the JVM guarantee the visibility of member variable modifications in the referenced object when using synchronized.
I know synchronized and volatile will provide visibility for variable modifications.
class Test {
    public int a = 0;

    public void modify() {
        a += 1;
    }
}

// Example:
// Thread A:
volatile Test test = new Test();
synchronized (locker) {
    test.modify();
}

// then thread B:
synchronized (locker) {
    test.modify();
}

// Now, I think test.a == 2 is true. Is that correct? How does the JVM implement it?
// I know about the memory barrier; does it flush all caches to main memory?
Thread A calls modify in a synchronized block first, and then passes the object to thread B (by writing the reference to a volatile variable).
Then thread B calls modify again (also in a synchronized block).
Is there any guarantee that a == 2? And how does the JVM implement this?
Visibility between threads is enforced with memory barriers/fences. In the case of a synchronized block, the JVM will insert a memory barrier after the execution of the block completes.
The JVM implements memory barriers with CPU instructions; e.g. on x86 a store barrier is done with sfence and a load barrier with lfence. There is also mfence, and possibly other instructions that are specific to the CPU architecture.
For your (still incomplete!) example, if we can assume the following:
The code in thread A initializing test is guaranteed to run before thread B uses it.
The locker variable contains a reference to the same object for threads A & B.
then we can prove that a == 2 will be true at the point you indicate. If precondition 1 is not guaranteed, then thread B may get an NPE. If precondition 2 is not guaranteed (i.e. threads A and B may synchronize on different objects) then there is not a proper happens-before relationship to ensure that thread B sees the result of thread A's actions on a.
(@NathanHughes commented that the volatile is unnecessary. I wouldn't necessarily agree with that. It depends on details of your example that you still haven't shown us.)
How JVM implements it?
The actual implementation is Java platform and (in theory) version specific. The JVM spec Memory Model places constraints on how a program that obeys "the rules" will behave. It is entirely implementation specific how that actually happens.
I know the memory barrier, does it flush all cache to main storage?
That is implementation specific too. There are different kinds of memory barrier that work in different ways. The JIT compiler will emit native code that uses the appropriate instructions to meet the guarantees required by the JLS. If there is a way to do this without doing a full cache flush then the implementation may do that.
(There is a JVM command line option to tell the JIT compiler to output the native code. If you really want to know what is happening under the hood, that is a good place to start looking.)
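For example, on HotSpot the diagnostic flags below print the JIT-generated native code (they require the hsdis disassembler plugin to be installed, and MyApp is a placeholder for your main class); treat this as a rough pointer rather than a portable recipe:

java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly MyApp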
But if you are trying to understand / analyze your application's thread-safety, you should be doing it in terms of the Java Memory Model. Also, use higher level concurrency abstractions that allow you to avoid the lower level pitfalls.

Java volatile effect on other variables [duplicate]

So I am reading this book titled Java Concurrency in Practice and I am stuck on this one explanation which I cannot seem to comprehend without an example. This is the quote:
When thread A writes to a volatile variable and subsequently thread B reads that same variable, the values of all variables that were visible to A prior to writing to the volatile variable become visible to B after reading the volatile variable.
Can someone give me a counterexample of why "the values of ALL variables that were visible to A prior to writing to the volatile variable become visible to B AFTER reading the volatile variable"?
I am confused why all other non-volatile variables do not become visible to B before reading the volatile variable?
Declaring a volatile Java variable means:
The value of this variable will never be cached thread-locally: all reads and writes will go straight to "main memory".
Access to the variable acts as though it is enclosed in a synchronized block, synchronized on itself.
Just for your reference, when is volatile needed?
When multiple threads use the same variable, each thread will have its own copy of that variable in its local cache. So, when it updates the value, it is actually updated in the local cache, not in main memory. The other thread using the same variable doesn't know anything about the values changed by the first thread. To avoid this problem, if you declare a variable as volatile, then it will not be kept only in the local cache: whenever a thread updates the value, it is updated in main memory, so other threads can access the updated value.
From JLS §17.4.7 Well-Formed Executions:
We only consider well-formed executions. An execution E = < P, A, po, so, W, V, sw, hb > is well formed if the following conditions are true:
Each read sees a write to the same variable in the execution. All reads and writes of volatile variables are volatile actions. For all reads r in A, we have W(r) in A and W(r).v = r.v. The variable r.v is volatile if and only if r is a volatile read, and the variable w.v is volatile if and only if w is a volatile write.
Happens-before order is a partial order. Happens-before order is given by the transitive closure of synchronizes-with edges and program order. It must be a valid partial order: reflexive, transitive and antisymmetric.
The execution obeys intra-thread consistency. For each thread t, the actions performed by t in A are the same as would be generated by that thread in program-order in isolation, with each write w writing the value V(w), given that each read r sees the value V(W(r)). Values seen by each read are determined by the memory model. The program order given must reflect the program order in which the actions would be performed according to the intra-thread semantics of P.
The execution is happens-before consistent (§17.4.6).
The execution obeys synchronization-order consistency. For all volatile reads r in A, it is not the case that either so(r, W(r)) or that there exists a write w in A such that w.v = r.v and so(W(r), w) and so(w, r).
Useful Link : What do we really know about non-blocking concurrency in Java?
Thread B may have a CPU-local cache of those variables. A read of a volatile variable ensures that any intermediate cache flush from a previous write to the volatile is observed.
For an example, read the following link, which concludes with "Fixing Double-Checked Locking using Volatile":
http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
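The fixed idiom from that article looks roughly like this (a sketch; Helper is a placeholder class):

class Helper { }

class Foo {
    private volatile Helper helper;

    Helper getHelper() {
        Helper local = helper;               // volatile read
        if (local == null) {
            synchronized (this) {
                local = helper;              // re-check under the lock
                if (local == null) {
                    helper = local = new Helper();   // volatile write publishes the
                }                                    // fully constructed Helper
            }
        }
        return local;
    }
}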
If a variable is non-volatile, then the compiler and the CPU may re-order instructions freely as they see fit, in order to optimize for performance.
If the variable is now declared volatile, then the compiler no longer attempts to optimize accesses (reads and writes) to that variable. It may however continue to optimize access to other variables.
At runtime, when a volatile variable is accessed, the JVM generates appropriate memory barrier instructions for the CPU. The memory barrier serves the same purpose: the CPU is also prevented from re-ordering instructions.
When a volatile variable is written to (by thread A), all writes to any other variable are completed (or will at least appear to be) and made visible to A before the write to the volatile variable; this is often due to a memory-write barrier instruction. Likewise, any reads of other variables will be completed (or will appear to be) before the read (by thread B); this is often due to a memory-read barrier instruction. This ordering of instructions, enforced by the barrier(s), means that all writes visible to A will be visible to B. It does not, however, mean that no re-ordering of instructions has happened (the compiler may have re-ordered other instructions); it simply means that if any writes were visible to A, they will be visible to B. In simpler terms, it means that strict program order is not maintained.
I will point to this writeup on Memory Barriers and JVM Concurrency, if you want to understand how the JVM issues memory barrier instructions, in finer detail.
Related questions
What is a memory fence?
What are some tricks that a processor does to optimize code?
Threads are allowed to cache variable values that other threads may have updated since they were read. The volatile keyword forces all threads not to cache values.
This is simply an additional bonus the memory model gives you, if you work with volatile variables.
Normally (i.e. in the absence of volatile variables and synchronization), the VM can make variables from one thread visible to other threads in any order it wants, or not at all. E.g. the reading thread could read some mixture of earlier versions of another thread's variable assignments. This is caused by the threads maybe being run on different CPUs with their own caches, which are only sometimes copied to "main memory", and additionally by code reordering for optimization purposes.
If you used a volatile variable, as soon as thread B read some value X from it, the VM makes sure that anything which thread A has written before it wrote X is also visible to B. (And also everything which A got guaranteed as visible, transitively).
Similar guarantees are given for synchronized blocks and other types of locks.
