"A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field."
So I know that a volatile field can be used for synchronization in order to guarantee that all the information thread 1 has before writing to the volatile field is going to be visible to thread 2 after reading that volatile field.
But what about subsequent writes? Is the behavior the same?
Any help appreciated, can't find anything about it in the official docs.
Examples:
### Write -> Read
#Thread1 (Write)
xxx = "anyValue";          // any non-volatile variable assigned before the volatile write
volatile boolean b = true; // volatile write
#Thread2 (Read)
if (b) {                   // here we read the volatile value
    print(xxx);            // visibility of 'xxx' is guaranteed: this will always print "anyValue"
}
### Write -> Write
#Thread1 (Write)
xxx = "anyValue";          // any non-volatile variable assigned before the volatile write
volatile boolean b = true; // volatile write
#Thread2 (Write)
b = false;                 // here we write to the volatile variable
print(xxx);                // is visibility of 'xxx' guaranteed? What will be printed?
To give a somewhat more comprehensive answer, let's build up the happens-before relation out of its basic orders:
synchronization order: this is the total order over all synchronization actions. Since a volatile write is a synchronization action, the two volatile writes, whether to the same or different variables, are part of the synchronization order. The synchronization order will even order, e.g., the lock of A and the volatile read of B, because it is a total order.
synchronizes-with order: this is a partial order that only orders certain synchronization actions. For example, it orders the release of a lock with all subsequent acquires of that same lock, and the write of a volatile variable with all subsequent reads of that same variable. So two volatile writes, whether to different or the same variables, are not ordered by the synchronizes-with order.
program order: in simple terms, it is the order specified by the program code. In your case, the two volatile writes are not ordered by program order since they are issued by different threads.
Now we get to the last step: the happens-before relation, which is a partial order. It is the transitive closure of the union of the program order and the synchronizes-with order.
So even though the two volatile writes are part of the synchronization order, they are not part of the synchronizes-with order, and as a consequence they are not part of the happens-before order. They don't induce any happens-before edges.
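As a minimal sketch of the Write -> Write case (class and field names are mine, chosen to mirror the question's example):
class WriteWrite {
    static String xxx = null;          // plain (non-volatile) field
    static volatile boolean b = false; // volatile field

    static void thread1() {
        xxx = "anyValue"; // plain write
        b = true;         // volatile write: a synchronization action
    }

    static void thread2() {
        b = false;                // also a synchronization action, but two volatile writes
                                  // never synchronize-with each other
        System.out.println(xxx);  // no happens-before edge: may print "anyValue" or null
    }
}
Both writes appear somewhere in the synchronization order, but since neither synchronizes-with the other, thread2 gets no visibility guarantee for xxx.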
Related
The tutorial http://tutorials.jenkov.com/java-concurrency/volatile.html says
Reads from and writes to other variables cannot be reordered to occur
after a write to a volatile variable, if the reads / writes originally
occurred before the write to the volatile variable. The reads / writes
before a write to a volatile variable are guaranteed to "happen
before" the write to the volatile variable.
What is meant by "before the write to the volatile variable"? Does it mean previous read/writes in the same method where we are writing to the volatile variable? Or is it a larger scope (also in methods higher up the call stack)?
The JVM can reorder operations. For example, if we have variables i and j and the code
i = 1;
j = 2;
the JVM can run this in reordered form:
j = 2;
i = 1;
But if the variable j is marked as volatile, then the JVM runs the operations only as
i = 1;
j = 2;
The write to i "happens before the write to the volatile variable" j.
The JVM ensures that writes to a volatile variable happen-before any subsequent reads of it. Take two threads. It's guaranteed that for a single thread, execution follows as-if-serial semantics. Basically you can assume that there is an implicit happens-before relationship between two actions in the same thread (the compiler is still free to reorder instructions). So a single thread trivially has a total order between its instructions, governed by the happens-before relationship.
A multi-threaded program has many such partial orders (every thread has a total order over its local instruction set, but there is no order globally across threads) but no total order over the global instruction set. Synchronisation is all about giving your program as much total order as possible.
Coming back to volatile variables: when a thread reads one, the JVM ensures that any write to it that occurred before the read happens-before that read. Because of this order, everything the writing thread did before it wrote to the variable becomes visible to the thread reading from it. So yes, to answer your question, even variables written higher up the call stack should be visible to the reading thread.
I'll try to draw a visual picture. The two threads can be imagined as two parallel rails, and a write to a volatile variable can be one of the sleepers between them. You basically get a
A -----
|
|
------- B
shaped total order b/w the two threads of execution. Everything in A before the sleeper should be visible to B after the sleeper because of this total order.
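To make the call-stack point concrete, here is a hedged sketch (names invented for illustration): the write performed in the caller is covered by program order just like a write in the same method, so it becomes visible through the volatile write made deeper in the stack:
class CallStackVisibility {
    static int data = 0;
    static volatile boolean ready = false;

    static void caller() {
        data = 42;   // written in the caller, before the volatile write below
        publish();   // the volatile write happens further down the call stack
    }

    static void publish() {
        ready = true; // volatile write; program order covers everything the thread did before it
    }

    static void reader() {
        if (ready) {                  // volatile read that sees the write above
            System.out.println(data); // guaranteed to print 42
        }
    }
}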
The JMM is defined in terms of the happens-before relation, which we'll call ->. If a->b, then b should see everything a did. This means that there are constraints on reordering loads/stores.
If a is a volatile write and b is a subsequent volatile read of the same variable, then a->b. This is called the volatile variable rule.
If a occurs before b in the code, then a->b. This is called the program order rule.
If a->b and b->c, then a->c. This is called the transitivity rule.
So let's apply this to a simple example:
int a;
volatile int b;

void thread1() {
    a = 1;
    b = 1;
}

void thread2() {
    int rb = b;
    int ra = a;
    if (rb == 1 && ra == 0) print("violation");
}
So the question is: if thread2 sees rb=1, will it see ra=1?
a=1->b=1 due to program order rule.
b=1->rb=b (since we see the value 1) due to the volatile variable rule.
rb=b->ra=a due to program order rule.
Now we can apply the transitivity rule twice and conclude that a=1->ra=a. Therefore ra needs to be 1.
This means that:
a=1 and b=1 can't be reordered.
rb=b and ra=a can't be reordered
otherwise we could end up with rb=1 and ra=0.
I am reading Java Concurrency in Practice, in "16.1.3 The Java Memory Model in 500 words or less", it says:
The Java Memory Model is specified in terms of actions, which include reads and writes to variables, locks and unlocks of monitors, and starting and joining with threads. The JMM defines a partial ordering called happens-before on all actions within the program. To guarantee that the thread executing action B can see the results of action A (whether or not A and B occur in different threads), there must be a happens-before relationship between A and B. In the absence of a happens-before ordering between two operations, the JVM is free to reorder them as it pleases.
Even though actions are only partially ordered, synchronization actions—lock acquisition and release, and reads and writes of volatile variables—are totally ordered. This makes it sensible to describe happens-before in terms of “subsequent” lock acquisitions and reads of volatile variables.
About "partial ordering", I have found this and this, but I don't quite understand "Even though actions are only partially ordered, synchronization actions—lock acquisition and release, and reads and writes of volatile variables—are totally ordered.". What does "synchronization actions are totally ordered" mean?
Analyzing the statement "synchronization actions are totally ordered":
"synchronization actions" is a set S of program operations (actions)
we have a relation R over set S : it is the happens-before relation. That is, given program statements a and b, aRb if and only if a happens-before b.
Then what the statement says, is "relation R is total over S".
"relation R is total over S", means that for every two operations a,b from set S (with a!=b), either aRb, or bRa. That is, either a happens-before b, or b happens-before a.
If we define the set S as the set of all lock acquisitions and lock releases performed on the same lock object X; then the set S is totally ordered by the happens-before relation: let be a the acquisition of lock X performed by thread T1, and b the lock acquisition performed by thread T2. Then either a happens-before b (in case T1 acquires the lock first. T1 will need to release the lock first, then T2 will be able to acquire it); or b happens-before a (in case T2 acquires the lock first).
Note: not all relations are total.
For example, the relation <= is total over the real numbers. That is, for every pair a,b of real numbers, it is true that either a<=b or b<=a. The total order here means that given any two items, we can always decide which comes first with respect to the given relation.
But the relation P: "is an ancestor of", is not a total relation over the set of all humans. Of course, for some pairs of humans a,b it is true that either aPb (a is an ancestor of b), or bPa (b is an ancestor of a). But for most of them, neither aPb nor bPa is true; that is, we can't use the relation to decide which item comes "first" (in genealogical terms).
Back to program statements, the happens-before relation R is obviously partial, over the set of all program statements (like in the "ancestor-of" example): given un-synchronized actions a,b (any operations performed by different threads, in absence of proper synchronization), neither aRb nor bRa holds.
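For instance (a hedged sketch with invented names), all acquisitions and releases of the single lock object below are totally ordered by happens-before, even though the plain increments on their own would not be:
class LockTotalOrder {
    private static final Object lock = new Object();
    private static int counter = 0;

    static void increment() {
        synchronized (lock) { // every acquire and release of 'lock' is totally ordered
            counter++;        // so each thread sees the value left by the previous lock holder
        }
    }
}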
Can volatile writes be reordered with non-volatile writes?
For example:
I have two threads T1 and T2:
T1:
i = 10;
volatile boolean result = true;
T2:
while(!result){
}
System.out.println(i);
Does T2 always see the updated value of i(10) or old value?
Yes. There is a happens-before relationship for a volatile statement:
Please consider this stackoverflow question: Does Java volatile variables impose a happens-before relationship before it is read?
A write to a volatile field happens-before every subsequent read of
that same field. Writes and reads of volatile fields have similar
memory consistency effects as entering and exiting monitors, but do
not entail mutual exclusion locking.
Also you can read section 3.1.3 (Locking and visibility) in the great book called "Java Concurrency in Practice". There is a relevant explanation there about a similar visibility issue, and the outline is this:
Locking is not just about mutual exclusion; it is also about memory visibility. To ensure that all threads see the most up-to-date values of shared mutable variables, the reading and writing threads must synchronize on a common lock.
In your code the "lock" is the volatile variable.
As far as I understand it, this is correctly synchronized, so no races occur and 10 is always printed.
The important parts are that within a thread, things occur in program order, and that writes to a volatile variable happen before reads that see that value. Together with the transitive closure rule, this means that the assignment to i happens before the print statement.
i = 10 happens before result = true. result = true happens before result is read as true in thread 2. result is read as true happens before System.out.println(i);. Therefore, i = 10 happens before System.out.println(i);.
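Put together as one hedged sketch (class and method names are mine), with the chain above marked in comments:
class VolatileFlagExample {
    static int i = 0;
    static volatile boolean result = false;

    static void writer() {     // T1
        i = 10;                // (1) happens-before (2) by program order
        result = true;         // (2) volatile write
    }

    static void reader() {     // T2
        while (!result) { }    // (3) volatile read that eventually sees (2)
        System.out.println(i); // (4) by transitivity, (1) happens-before (4): prints 10
    }
}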
I try to understand why this example is a correctly synchronized program:
a - volatile
Thread1:
x=a
Thread2:
a=5
Because there are conflicting accesses (there is a write to and a read of a), in every sequentially consistent execution there must be a happens-before relation between those accesses.
Suppose one of the sequentially consistent executions:
1. x=a
2. a=5
Does 1 happen-before 2, and why?
Does 1 happen-before 2, and why?
I'm not 100% sure I understand your question.
If you have a volatile variable a and one thread is reading from it while another is writing to it, those accesses can happen in either order. It is a race condition. What is guaranteed by the JVM and the Java Memory Model (JMM) depends on which operation happens first.
The write could have just happened and the read sees the updated value. Or the write could happen after the read. So x could be either 5 or the previous value of a.
in every sequentially consistent execution there must be a happens-before relation between those accesses
I'm not sure what this means, so I'll try to be specific. The "happens-before relation" with volatile means that any write to a volatile variable made prior to a read of that same variable is guaranteed to be visible to that read. But this guarantee in no way constrains the timing between the two volatile operations, which is subject to the race condition. The reader is guaranteed to have seen the write, but only if the write happened before the read.
You might think this is a pretty weak guarantee, but thread performance is dramatically improved by using local CPU caches, so reading the value of a field might be served from a cached copy instead of main memory. The guarantee is critical to ensure that the locally cached value is invalidated and refreshed when a volatile read occurs, so that threads can share data appropriately.
Again, the JVM and the JMM guarantee that if you are reading from a volatile field a, then any writes to the same field that have happened before the read, will be seen by it -- the value written will be properly published and visible to the reading thread. However, this guarantee in no way determines the ordering. It doesn't say that the write has to happen before the read.
No, a volatile read that comes before (in synchronization order) a volatile write of the same variable does not necessarily happen-before the volatile write.
This means they can be in a "data race", because they are "conflicting accesses not ordered by a happens-before relationship". If that's true pretty much all programs contain data races:) But it's probably a spec bug. A volatile read and write should never be considered a data race. If all variables in a program are volatile, all executions are trivially sequentially consistent. see http://cs.oswego.edu/pipermail/concurrency-interest/2012-January/008927.html
Sorry, but you cannot say precisely how the JVM will optimize the code based on the JVM's memory model. You have to use Java's high-level tools to define what you want.
So volatile means only that there will be no "inter-thread cache" used for the variables.
If you want a stricter order, you have to use synchronized blocks.
http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html
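As a hedged sketch (class and field names are mine) of the "use synchronized blocks" suggestion: whichever thread enters the monitor second sees everything the first thread did inside it:
class SynchronizedOrder {
    private int a;
    private int x;

    synchronized void writer() { // both methods lock the same monitor (this)
        a = 5;
    }

    synchronized void reader() {
        x = a; // ordered with the writer by the monitor's lock/unlock happens-before edges
    }
}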
Volatile and happens-before are only useful when the read of the field drives some condition. For example:
volatile int a;
int b = 0;
Thread-1:
b = 5;
a = 10;
Thread-2:
c = b + a;
In this case there is no happens-before relationship: a can be either 10 or 0, and b can be either 5 or 0, so as a result c could be 0, 5, 10 or 15. If the read of a drives some other condition, then happens-before is established, for instance:
int b = 0;
volatile int a = 0;
Thread-1:
b = 5;
a = 10;
Thread 2:
if(a == 10){
c = b + a;
}
In this case you are ensured that c = 15, because the write of b = 5 happens-before the write of a = 10 (program order), and the write of a = 10 happens-before the read that observes a == 10 (the volatile rule), so by transitivity the write of b = 5 is visible once the read of a sees 10.
Edit: Updated the addition order, as Gray noted the inconsistency.
This is about volatile piggybacking.
Purpose: I want to achieve lightweight visibility of the variables. Consistency of a, b, c is not important. I have a bunch of variables and I don't want to make them all volatile.
Is this code threadsafe?
class A {
    public int a, b, c;
    volatile int sync;

    public void setup() {
        a = 2;
        b = 3;
        c = 4;
    }

    public void sync() {
        sync++;
    }
}
final static A aaa = new A();
Thread0:
aaa.setup();
end
Thread1:
for(;;) {aaa.sync(); logic with aaa.a, aaa.b, aaa.c}
Thread2:
for(;;) {aaa.sync(); logic with aaa.a, aaa.b, aaa.c}
The Java Memory Model defines the happens-before relationship, which has the following properties (amongst others):
"Each action in a thread happens-before every action in that thread that comes later in the program order" (program order rule)
"A write to a volatile field happens-before every subsequent read of that same volatile" (volatile variable rule)
These two properties together with transitivity of the happens-before relationship imply the visibility guarantees that OP seeks in the following manner:
A write to a in thread 1 happens-before a write to sync in a call to sync() in thread 1 (program order rule).
The write to sync in the call to sync() in thread 1 happens-before a read of sync in a call to sync() in thread 2 (volatile variable rule).
The read from sync in the call to sync() in thread 2 happens-before a read from a in thread 2 (program order rule).
This implies that the answer to the question is yes, i.e. the call to sync() in each iteration in threads 1 and 2 ensures visibility of changes to a, b and c to the other thread(s). Note that this ensures visibility only. No mutual exclusion guarantees exist and hence all invariants binding a, b and c may be violated.
See also Java theory and practice: Fixing the Java Memory Model, Part 2. In particular the section "New guarantees for volatile" which says
Under the new memory model, when thread A writes to a volatile
variable V, and thread B reads from V, any variable values that were
visible to A at the time that V was written are guaranteed now to be
visible to B.
Incrementing a value between threads is never thread-safe with just volatile. This only ensures that each thread gets an up to date value, not that the increment is atomic, because at the assembler level your ++ is actually several instructions that can be interleaved.
You should use AtomicInteger for a fast atomic increment.
Edit: Reading again, what you actually need is a memory fence. Java has no explicit memory-fence instruction, but you can use a lock for its memory-fence "side effect". In that case declare the sync method synchronized to introduce an implicit fence:
synchronized void sync() {
    sync++;
}
The pattern is usually like this.
public void setup() {
a = 2;
b = 3;
c = 4;
sync();
}
However, while this guarantees the other threads will see this change, the other threads can see an incomplete change, e.g. Thread2 might see a = 2, b = 3, c = 0, or even possibly a = 2, b = 0, c = 4.
Using the sync() on the reader doesn't help much.
From javadoc:
An unlock (synchronized block or method exit) of a monitor
happens-before every subsequent lock (synchronized block or method
entry) of that same monitor. And because the happens-before relation
is transitive, all actions of a thread prior to unlocking
happen-before all actions subsequent to any thread locking that
monitor.
A write to a volatile field happens-before every subsequent read of
that same field. Writes and reads of volatile fields have similar
memory consistency effects as entering and exiting monitors, but do
not entail mutual exclusion locking.
So I think that writing to a volatile variable is not equivalent to synchronization in this case, and it doesn't guarantee a happens-before ordering or visibility of Thread1's changes to Thread2.
You don't really have to manually synchronize at all, just use an automatically synchronized data structure, like java.util.concurrent.atomic.AtomicInteger.
You could alternatively make the sync() method synchronized.
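For example, a hedged sketch of the counter replaced by an AtomicInteger (my adaptation, not the original code):
import java.util.concurrent.atomic.AtomicInteger;

class A {
    public int a, b, c;
    private final AtomicInteger sync = new AtomicInteger();

    public void sync() {
        sync.incrementAndGet(); // atomic read-modify-write with volatile memory semantics
    }
}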