Memory Barrier Vs CAS - java

I read that CAS will flush all CPU write caches to main memory. Is this similar to a memory barrier?
If this is true, does this mean CAS can make Java's happens-before work?
To clarify:
The CAS is a CPU instruction.
The barrier I mean is a StoreLoad barrier, because what I care about is whether data written before the CAS can be read after the CAS.
More Detail:
I have this question because I am writing a fork-join framework in Java. The implementation looks like this:
{
    //initialize result container (WORKER_COUNT = number of worker threads)
    Object[] result = new Object[WORKER_COUNT];
    //worker finish state count
    AtomicInteger state = new AtomicInteger(result.length);
}
//worker thread i
{
    result[i] = new Object();
    //this is a CAS operation
    state.getAndDecrement();
    if (state.get() == 0) {
        //do something using result array
    }
}
I want to know whether the "do something using result array" part can see all the result elements written by the other worker threads.

I read that CAS will flush all CPU write caches to main memory. Is this similar to a memory barrier?
It depends on what you mean by CAS. (A specific hardware instruction? An implementation strategy used in the implementation of some Java class?)
It depends on what kind of memory barrier you are talking about. There are a number of different kinds ...
It is not necessarily true that a CAS instruction flushes all dirty cache lines. It depends on how a particular instruction set / hardware implements the CAS instruction.
It is unclear what you mean by "make happens-before work". Certainly, under some circumstance a CAS instruction would provide the necessary memory coherency properties for a specific happens-before relationship. But not necessarily all relationships. It would depend on how the CAS instruction is implemented by the hardware.
To be honest, unless you are actually writing a Java compiler, you would do better not to try to understand the intricacies of what a JIT compiler needs to do to implement the Java Memory Model. Just apply the happens-before rules.
UPDATE
It turns out from your recent updates and comments that your actual question is about the behavior of AtomicInteger operations.
The memory semantics of the atomic types are specified in the package javadoc for java.util.concurrent.atomic as follows:
The memory effects for accesses and updates of atomics generally follow the rules for volatiles, as stated in The Java Language Specification (17.4 Memory Model):
get has the memory effects of reading a volatile variable.
set has the memory effects of writing (assigning) a volatile variable.
lazySet has the memory effects of writing (assigning) a volatile variable except that it permits reorderings with subsequent (but not previous) memory actions that do not themselves impose reordering constraints with ordinary non-volatile writes. Among other usage contexts, lazySet may apply when nulling out, for the sake of garbage collection, a reference that is never accessed again.
weakCompareAndSet atomically reads and conditionally writes a variable but does not create any happens-before orderings, so provides no guarantees with respect to previous or subsequent reads and writes of any variables other than the target of the weakCompareAndSet.
compareAndSet and all other read-and-update operations such as getAndIncrement have the memory effects of both reading and writing volatile variables.
As you can see, operations on atomic types are specified to have memory semantics that are equivalent to those of volatile variables. This should be sufficient to reason about your use of Java atomic types ... without resorting to dubious analogies with CAS instructions and memory barriers.
Your example is incomplete and it is difficult to understand what it is trying to do. Therefore, I can't comment on its correctness. However, you should be able to analyze it yourself using happens-before logic, etc.

I read that CAS will flush all CPU write caches to main memory.
Is this similar to a memory barrier?
A CAS in Java on x86 is implemented using a lock prefix; which instruction is actually used depends on the type of CAS, but that isn't very relevant for this discussion. A locked instruction is effectively a full barrier, so it includes all 4 fences: LoadLoad/LoadStore/StoreLoad/StoreStore. Since x86 already provides all but StoreLoad due to TSO, only the StoreLoad needs to be added, just as with a volatile write.
A StoreLoad doesn't force changes to be written to main memory; it only forces the CPU to wait with executing loads until the store buffer has been drained to the L1d. However, with MESI-based (Intel) cache coherence protocols, it can happen that a cache line that is in MODIFIED state on a different CPU needs to be flushed to main memory before it can be returned as EXCLUSIVE. With MOESI-based (AMD) cache coherence protocols, this is not an issue. If the cache line is already in MODIFIED/EXCLUSIVE state on the core doing the StoreLoad, the StoreLoad doesn't cause the cache line to be flushed to main memory. The cache is the source of truth.
If this is true, does this mean CAS can make Java's happens-before work?
From a memory model perspective, a successful CAS in Java is nothing more than a volatile read followed by a volatile write. So there is a happens-before relation between a volatile write of some field on some object instance and a subsequent volatile read of the same field on the same object instance.
Since you are working with Java, I would focus on the Java Memory Model and not too much on how it is implemented in the hardware. The JMM allows for executions that can't be explained purely by thinking in terms of fences.
Regarding your example:
result[i] = new Object();
//this is a CAS operation
state.getAndDecrement();
if (state.get() == 0) {
    //do something using result array
}
I'm not sure what the intended logic is. In your example, multiple threads could see at the same time that the state is 0, so all of them could start to do something with the array. If this behavior is undesirable, it is caused by a race condition. I would use something like this:
result[i] = new Object();
//this is an atomic (CAS-like) operation; decrementAndGet returns the
//updated value, so only the last thread to finish observes 0
int s = state.decrementAndGet();
if (s == 0) {
    //do something using result array
}
Now the other question is whether there is a data race on the array content. There is a happens-before edge between the write to the array content and the write to 'state' (program order rule). There is a happens-before edge between the write of the state and the read (volatile variable rule), and there is a happens-before edge between the read of the state and the read of the array content (program order rule). So there is a happens-before edge between writing to the array and reading its content in this particular example, due to the transitive nature of the happens-before relation.
Personally, I would not try to be too smart and would use something less error prone like an AtomicReferenceArray; then at least you don't need to worry about a missing happens-before edge between the write of the array and the read.
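For illustration, here is a minimal sketch of that approach; the worker count, class name, and the printing at the end are made-up assumptions for the example, not part of the original question:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class JoinExample {
    static final int WORKER_COUNT = 4; // hypothetical number of workers

    final AtomicReferenceArray<Object> result = new AtomicReferenceArray<>(WORKER_COUNT);
    final AtomicInteger remaining = new AtomicInteger(WORKER_COUNT);

    // called by worker i when its task is done
    void finish(int i, Object r) {
        result.set(i, r); // volatile-store semantics for this slot
        if (remaining.decrementAndGet() == 0) {
            // exactly one thread (the last to finish) gets here,
            // and every result.get(j) is guaranteed to be visible
            for (int j = 0; j < WORKER_COUNT; j++) {
                System.out.println(result.get(j));
            }
        }
    }
}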

Related

Does 'volatile' guarantee that any thread reads the most recently written value?

From the book Effective Java:
While the volatile modifier performs no mutual exclusion, it guarantees that any thread that reads the field will see the most recently written value
SO and many other sources claim similar things.
Is this true?
I mean really true, not a close-enough model, or true only on x86, or only in Oracle JVMs, or some definition of "most recently written" that's not the standard English interpretation...
Other sources (SO example) have said that volatile in Java is like acquire/release semantics in C++, which I think does not offer the guarantee from the quote.
I found that in the JLS 17.4.4 it says "A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order)." But I don't quite understand.
There are quite a few sources for and against this, so I'm hoping the answer is able to show that many of those (on either side) are indeed wrong - for example with a reference or spec, or counter-example code.
Is this true?
I mean really true, not a close-enough model, or true only on x86, or only in Oracle JVMs, or some definition of "most recently written" that's not the standard English interpretation...
Yes, at least in the sense that a correct implementation of Java gives you this guarantee.
Unless you are using some exotic, experimental Java compiler/JVM (*), you can essentially take this as true.
From JLS 17.4.5:
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
(*) As Stephen C points out, such an exotic implementation that doesn't implement the memory model semantics described in the language spec can't usefully (or even legally) be described as "Java".
The quote per se is correct in terms of what it tries to convey, but it is incorrect on a broader view.
It tries to draw a distinction between sequential consistency and release/acquire semantics, at least in my understanding. The difference between these two terms is rather "thin", but very important. I have tried to simplify the difference at the beginning of this answer or here.
The author is trying to say that volatile offers sequential consistency, as implied by:
"... it guarantees that any thread.."
If you look at the JLS, it has this sentence:
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
The tricky part there is the word "subsequent" and its meaning; it has been discussed here. What it really means is "subsequent reads that observe that write". So happens-before is guaranteed when the reader observes the value that the writer has written.
This already implies that a write is not necessarily seen on the next read, and this can be the case where speculative execution is allowed. So in this regard, the quote is misleading.
The quote that you found:
A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order)
is complicated to understand without a much broader context. In simple words, it establishes a synchronizes-with order (and implicitly happens-before) between two threads, where the volatile variable v is shared between them. Here is an answer with a broader explanation, which should make more sense.
It is not true. The JMM is based on sequential consistency, and for sequential consistency real-time ordering isn't guaranteed; for that you need linearizability. In other words, reads and writes can be skewed as long as the program order isn't violated (or as long as it can't be proven that program order was violated).
A read of a volatile variable a needs to see the most recently written value before it in the memory order. But that doesn't imply real-time ordering.
Good read about the topic:
https://concurrency-interest.altair.cs.oswego.narkive.com/G8KjyUtg/relativity-of-guarantees-provided-by-volatile.
I'll make it concrete:
Imagine there are 2 CPUs and a (volatile) variable A with initial value 0. CPU1 does a store A=1 and CPU2 does a load of A. And both CPUs have the cache line containing A in SHARED state.
The store is first speculatively executed and written to the store buffer; eventually the store commits and retires, but since the stored value is still in the store buffer, it isn't visible yet to CPU2. Up to this point it wasn't required for the cache line to be in EXCLUSIVE/MODIFIED state, so the cache line on CPU2 still contains the old value and hence CPU2 can still read the old value.
So in the real-time order, the write of A is ordered before the read of A=0, but in the synchronization order, the write of A=1 is ordered after the read of A=0.
Only when the store leaves the store buffer and wants to enter the L1 cache is a request for ownership (RFO) sent to all other CPUs, which sets the cache line containing A to INVALID on CPU2 (RFO prefetching I'll leave out of the discussion). If CPU2 now reads A, it is guaranteed to see A=1 (the request will block until CPU1 has completed the store to the L1 cache).
On acknowledgement of the RFO the cacheline is set to MODIFIED on CPU1 and the store is written to the L1 cache.
So there is a period of time between when the store is executed/retired and when it is visible to another CPU. The only way to detect this would be to add special measuring equipment to the CPUs.
I believe a similar delaying effect can happen on the reading side with invalidation queues.
In practice this will not be an issue, because store buffers have a limited capacity and need to be drained eventually (so a write can't stay invisible indefinitely). So in day-to-day usage you could say that a volatile read reads the most recent write.
A Java volatile write/read provides release/acquire semantics, but keep in mind that volatile is stronger than release/acquire: volatile writes/reads are sequentially consistent, and release/acquire semantics aren't.
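To make that difference concrete, here is the classic store-buffering (Dekker) litmus test as a Java sketch; the class name and fields are made up for illustration:

// With volatile (sequentially consistent), the outcome r1 == 0 && r2 == 0
// is forbidden; under plain release/acquire semantics it would be allowed.
class Dekker {
    volatile int x = 0;
    volatile int y = 0;
    int r1, r2;

    void thread1() {
        x = 1;   // volatile store
        r1 = y;  // volatile load
    }

    void thread2() {
        y = 1;   // volatile store
        r2 = x;  // volatile load
    }
    // Because all volatile accesses take part in a single total order
    // (the synchronization order), at least one thread must observe
    // the other's store.
}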

The Volatile Keyword and CPU Cache Coherence Protocol

The CPU already guarantees cache coherence through protocols like MESI. Why do we also need volatile in some languages (like Java) to keep visibility between threads?
The likely reason, I think, is that those protocols aren't enabled at boot and must be triggered by some instructions like LOCK.
If that is really the case, why doesn't the CPU enable the protocol at boot?
Volatile prevents 3 different flavors of problems:
visibility
reordering
atomicity
I'm assuming x86.
First of all, caches on x86 are always coherent. So it won't happen that, after one CPU commits a store to some variable to the cache, another CPU still loads the old value of that variable. This is the domain of the MESI protocol.
Assuming that every put and get in the Java bytecode is translated (and not optimized away) to a store and a load on the CPU, then even without volatile, every get would see the most recent put to the same address.
The issue here is that the compiler (the JIT in this case) has a lot of freedom to optimize code. For example, if it detects that the same field is read in a loop, it can decide to hoist that variable out of the loop, as shown below.
for (...) {
    int tmp = a;
    println(tmp);
}
After hoisting:
int tmp = a;
for (...) {
    println(tmp);
}
This is fine if that field is only touched by 1 thread. But if the field is updated by another thread, the first thread will never see the change. Using volatile prevents such visibility problems and this is effectively the behavior of:
C style volatile
the Java volatile before the Java memory model was introduced with JSR-133.
A VarHandle with opaque access mode.
Then there is another very important aspect of volatile: it prevents loads and stores to different addresses in the instruction stream executed by some CPU from being reordered. The JIT compiler and the CPU have a lot of liberty to reorder loads and stores, although on x86 only older stores can be reordered with newer loads to a different address, due to store buffers.
So imagine the following code:
int a;
volatile int b;

thread1:
    a = 1;
    b = 1;

thread2:
    if (b == 1) print(a);
The fact that b is volatile prevents the store a=1 from jumping after the store b=1. It also prevents the load of a from jumping before the load of b. This way, thread 2 is guaranteed to see a=1 when it reads b=1.
So using volatile, you can ensure that non-volatile fields are visible to other threads.
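For completeness, here is the same litmus test as a compilable sketch; the class and method names are made up for illustration:

class VolatilePublish {
    int a = 0;          // plain field
    volatile int b = 0; // volatile field acting as the flag

    void writer() {     // thread 1
        a = 1;          // plain store
        b = 1;          // volatile store: orders the store to a before it
    }

    void reader() {     // thread 2
        if (b == 1) {   // volatile load
            System.out.println(a); // guaranteed to print 1
        }
    }
}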
If you want to understand how volatile works, I would suggest digging into the Java memory model, which is expressed in synchronizes-with and happens-before rules, as Margaret Bloom already indicated. I have given some low-level details, but in the case of Java, it is best to work with this high-level model instead of thinking in terms of hardware. Thinking exclusively in terms of hardware/fences is only for the experts, non-portable, and very fragile.

VarHandle get/setOpaque

I keep fighting to understand what VarHandle::setOpaque and VarHandle::getOpaque are really doing. It has not been easy so far - there are some things I think I get (but will not present them in the question itself, so as not to muddy the waters), but overall this is misleading at best for me.
The documentation:
Returns the value of a variable, accessed in program order...
Well in my understanding if I have:
int xx = x; // read x
int yy = y; // read y
These reads can be re-ordered. On the other hand, if I have:
// simplified code, does not compile, but reads happen on the same "this" for example
int xx = VarHandle_X.getOpaque(x);
int yy = VarHandle_Y.getOpaque(y);
This time re-orderings are not possible? And is this what "program order" means? Are we talking about insertion of barriers here for this re-ordering to be prohibited? If so, since these are two loads, would the same be achieved via:
int xx = x;
VarHandle.loadLoadFence();
int yy = y;
But it gets a lot trickier:
... but with no assurance of memory ordering effects with respect to other threads.
I could not come up with an example to even pretend I understand this part.
It seems to me that this documentation is targeted at people who know exactly what they are doing (and I am definitely not one)... So can someone shed some light here?
Well in my understanding if I have:
int xx = x; // read x
int yy = y; // read y
These reads can be re-ordered.
These reads may not only be reordered, they may not happen at all. The thread may use an old, previously read value for x and/or y, or values it previously wrote to these variables even though the write may not actually have been performed yet; so the "reading thread" may use values that no other thread may know of and that are not in the heap memory at that time (and probably never will be).
On the other hand, if I have:
// simplified code, does not compile, but reads happen on the same "this" for example
int xx = VarHandle_X.getOpaque(x);
int yy = VarHandle_Y.getOpaque(y);
This time re-orderings are not possible? And is this what "program order" means?
Simply said, the main feature of opaque reads and writes is that they will actually happen. This implies that they cannot be reordered with respect to other memory accesses of at least the same strength, but that has no impact on ordinary reads and writes.
The term program order is defined by the JLS:
… the program order of t is a total order that reflects the order in which these actions would be performed according to the intra-thread semantics of t.
That’s the evaluation order specified for expressions and statements. The order in which we perceive the effects, as long as only a single thread is involved.
Are we talking about insertions of barriers here for this re-ordering to be prohibited?
No, there is no barrier involved, which might be the intention behind the phrase “…but with no assurance of memory ordering effects with respect to other threads”.
Perhaps we could say that opaque access works a bit like volatile did before Java 5, enforcing read access to see the most recent heap memory value (which only makes sense if the writing end also uses opaque or an even stronger mode), but with no effect on other reads or writes.
So what can you do with it?
A typical use case would be a cancellation or interruption flag that is not supposed to establish a happens-before relationship. Often, the stopped background task has no interest in perceiving actions made by the stopping task prior to signalling, but will just end its own activity. So writing and reading the flag with opaque mode would be sufficient to ensure that the signal is eventually noticed (unlike the normal access mode), but without any additional negative impact on the performance.
Likewise, a background task could write progress updates, like a percentage number, which the reporting (UI) thread is supposed to notice timely, while no happens-before relationship is required before the publication of the final result.
It’s also useful if you just want atomic access for long and double, without any other impact.
Since truly immutable objects using final fields are immune to data races, you can use opaque modes for timely publishing immutable objects, without the broader effect of release/acquire mode publishing.
A special case would be periodically checking a status for an expected value update and, once available, querying the value with a stronger mode (or executing the matching fence instruction explicitly). In principle, a happens-before relationship can only be established between the write and its subsequent read anyway, but since optimizers usually don't have the horizon to identify such an inter-thread use case, performance-critical code can use opaque access to optimize such a scenario.
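As an illustration of the cancellation-flag use case above, here is a minimal sketch using opaque mode; the class and field names are made up:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class CancellableTask implements Runnable {
    boolean cancelled; // accessed only through the VarHandle below

    static final VarHandle CANCELLED;
    static {
        try {
            CANCELLED = MethodHandles.lookup()
                .findVarHandle(CancellableTask.class, "cancelled", boolean.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public void cancel() {
        CANCELLED.setOpaque(this, true); // eventually noticed; no happens-before needed
    }

    @Override
    public void run() {
        while (!(boolean) CANCELLED.getOpaque(this)) {
            // do a unit of work; we don't need to see the canceller's
            // earlier writes, only the flag itself
        }
    }
}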
Opaque means that the thread executing an opaque operation is guaranteed to observe its own actions in program order, but that's it.
Other threads are free to observe that thread's actions in any order. This is a common case on x86, since it has a "write ordered with store-buffer forwarding" memory model: even if the thread does a store before a load, the store can be cached in the store buffer, so a thread executing on another core may observe those actions in the reverse order, load-store instead of store-load. So opaque operations come for free on x86 (we actually also get acquire for free on x86; see this extremely exhaustive answer for details on some other architectures and their memory models: https://stackoverflow.com/a/55741922/8990329).
Why is it useful? Well, I could speculate that if some thread observed a value stored with opaque memory semantics, then a subsequent read will observe "at least this or a later" value (plain memory access does not provide such a guarantee, does it?).
Also, since Java 9 VarHandles are somewhat related to the acquire/release/consume semantics in C/C++, I think it is worth noting that opaque access is similar to memory_order_relaxed, which is defined in the standard as follows:
For memory_order_relaxed, no operation orders memory.
with some examples provided.
I have been struggling with opaque myself and the documentation is certainly not easy to understand.
From the above link:
Opaque operations are bitwise atomic and coherently ordered.
The bitwise atomic part is obvious. Coherently ordered means that loads/stores to a single address have some total order, each read sees the most recent write before it in that order, and the order is consistent with the program order. For some coherence examples, see the following JCStress test.
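(That test is not reproduced here; the following is my own minimal sketch of what such a coherence test can look like, assuming the org.openjdk.jcstress API and a made-up class name.)

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import org.openjdk.jcstress.annotations.*;
import org.openjdk.jcstress.infra.results.II_Result;

// Two back-to-back opaque reads of the same variable: once the first
// read observes 1, the second read must not fall back to 0 (coherence).
@JCStressTest
@Outcome(id = {"0, 0", "0, 1", "1, 1"}, expect = Expect.ACCEPTABLE, desc = "coherent")
@Outcome(id = "1, 0", expect = Expect.FORBIDDEN, desc = "second read saw an older value")
@State
public class OpaqueCoherenceSketch {
    int x;

    static final VarHandle X;
    static {
        try {
            X = MethodHandles.lookup().findVarHandle(OpaqueCoherenceSketch.class, "x", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    @Actor
    public void writer() {
        X.setOpaque(this, 1);
    }

    @Actor
    public void reader(II_Result r) {
        r.r1 = (int) X.getOpaque(this);
        r.r2 = (int) X.getOpaque(this);
    }
}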
Coherence doesn't provide any ordering guarantees between loads/stores to different addresses so it doesn't need to provide any fences so that loads/stores to different addresses are ordered.
With opaque, the compiler will emit the loads/stores as it sees them. But the underlying hardware is still allowed to reorder load/stores to different addresses.
I upgraded your example to the message-passing litmus test:
thread1:
    X.setOpaque(1);
    Y.setOpaque(1);

thread2:
    ry = Y.getOpaque();
    rx = X.getOpaque();
    if (ry == 1 && rx == 0) println("Oh shit");
The above could fail on a platform that allows the 2 stores or the 2 loads to be reordered (e.g., ARM or PowerPC). Opaque is not required to provide causality. JCStress has a good example for that as well.
Also, the following IRIW example can fail:
thread1:
    X.setOpaque(1);

thread2:
    Y.setOpaque(1);

thread3:
    rx_thread3 = X.getOpaque();
    [LoadLoad]
    ry_thread3 = Y.getOpaque();

thread4:
    ry_thread4 = Y.getOpaque();
    [LoadLoad]
    rx_thread4 = X.getOpaque();
Can it be that we end up with rx_thread3=1, ry_thread3=0, ry_thread4=1 and rx_thread4=0?
With opaque this can happen. Even though the loads are prevented from being reordered, opaque accesses do not require multi-copy-atomicity (stores to different addresses issued by different CPUs can be seen in different orders).
Release/acquire is stronger than opaque; since this is allowed to fail with release/acquire, it is therefore also allowed to fail with opaque. So opaque is not required to provide consensus.

How is a StoreStore barrier mapped to instructions under x86?

The JSR133 cookbook says that:
StoreStore Barriers: The sequence Store1; StoreStore; Store2 ensures that Store1's data are visible to other processors (i.e., flushed to memory) before the data associated with Store2 and all subsequent store instructions. In general, StoreStore barriers are needed on processors that do not otherwise guarantee strict ordering of flushes from write buffers and/or caches to other processors or main memory.
And, from the cookbook we know that, for a synchronized block, the Java compiler will insert some barriers to prevent possible reordering:
MonitorEnter
[LoadLoad] <==inserted barrier
[LoadStore]<==inserted barrier
...
[LoadStore]<==inserted barrier
[StoreStore]<==inserted barrier
MonitorExit
However, x86 does not allow reordering of Read-Read, Read-Write, or Write-Write, so all the above barriers will be mapped to no-ops. That is equivalent to saying that no barriers will be inserted between MonitorEnter and MonitorExit on x86 processors.
My confusion is: if we map StoreStore to a no-op under x86, then how is visibility guaranteed?
To be more detailed: x86 DOES employ a store buffer, so to make writes performed in the critical section visible to other processor(s), we need to flush the store buffer, hence a write barrier is needed. From the perspective of visibility, shouldn't a StoreStore be mapped to sfence/mfence/Lock#? But the cookbook says it should be mapped to a no-op from the perspective of reordering prevention.
Or is the key point that the visibility guarantee is provided by MonitorEnter and MonitorExit themselves? If that is the case, I think they might use so-called read barriers and write barriers to guarantee visibility, right?
Although we can say that a memory barrier is capable of:
guaranteeing visibility (by flushing the store buffer and/or applying the invalidate queue)
preventing reordering (by inhibiting reordering between loads and/or stores that precede or follow the memory barrier)
But this does NOT mean that a memory barrier must always do the above two things together; the Unsafe.putOrderedXX methods provided by the Sun JDK are such an example:
SomeClass temp = new SomeClass(); //S1
unsafe.putOrderedObject(this, valueOffset, null);
Object target = temp; //S2
unsafe.putOrderedObject here serves as a StoreStore barrier; hence it prevents reordering of S1 and S2, but it does NOT guarantee that the result of S1 will be visible to other processors/threads (since there is no such need).
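(Unsafe.putOrderedObject has since been superseded; a modern sketch of the same ordered-store idea, with made-up class and field names, would use VarHandle's release mode:)

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class OrderedPublish {
    Object value;

    static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup()
                .findVarHandle(OrderedPublish.class, "value", Object.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    void publish(Object v) {
        // Release store: prior stores cannot be reordered after it,
        // like putOrderedObject / AtomicReference.lazySet.
        VALUE.setRelease(this, v);
    }
}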
More info on Unsafe and volatile:
Where is sun.misc.Unsafe documented?
http://mishadoff.com/blog/java-magic-part-4-sun-dot-misc-dot-unsafe/
http://jpbempel.blogspot.com/2013/05/volatile-and-memory-barriers.html
That is to say: under x86, for the purpose of reordering prevention, a StoreStore can be mapped to a no-op; for the purpose of a visibility guarantee, a StoreStore would have to be mapped to some instruction like sfence/mfence/LOCK#.
Another example is the final keyword. To enforce the semantics of final (an observed variable's value cannot change), the compiler should insert a StoreStore barrier between the writes to final fields and the return from the constructor. The reason: the writes to final fields must be visible to other processor(s) before the reference to the constructed object is written to a reference variable. This is actually a requirement on ordering rather than visibility, so it is not necessary for the results of the writes to final fields to be flushed (from the store buffer or cache) before the constructor returns. Therefore, under x86, the JVM does NOT insert any barrier for final fields.
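A sketch of the final-field case; the class name and values are made up for illustration:

class Holder {
    final int x;

    Holder(int v) {
        x = v;
        // conceptually a [StoreStore] barrier sits here, before the
        // constructor returns and the reference can be published
    }
}

// publisher thread:
//   shared = new Holder(42);          // plain, non-volatile publication
// reader thread:
//   Holder h = shared;
//   if (h != null) assert h.x == 42;  // final-field semantics guarantee this,
//                                     // provided 'this' did not escape the constructor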

Is unsynchronized read of integer threadsafe in java?

I see this code quite frequently in some OSS unit tests, but is it thread safe? Is the while loop guaranteed to see the correct value of invoc?
If not: nerd points to whoever also knows which CPU architecture this may fail on.
private int invoc = 0;

private synchronized void increment() {
    invoc++;
}

public void isItThreadSafe() throws InterruptedException {
    for (int i = 0; i < TOTAL_THREADS; i++) {
        new Thread(new Runnable() {
            public void run() {
                // do some stuff
                increment();
            }
        }).start();
    }
    while (invoc != TOTAL_THREADS) {
        Thread.sleep(250);
    }
}
No, it's not threadsafe. invoc needs to be declared volatile, or accessed while synchronizing on the same lock, or changed to use AtomicInteger. Just using the synchronized method to increment invoc, but not synchronizing to read it, isn't good enough.
The JVM does a lot of optimizations, including CPU-specific caching and instruction reordering. It uses the volatile keyword and locking to decide when it can optimize freely and when it has to have an up-to-date value available for other threads to read. So when the reader doesn't use the lock the JVM can't know not to give it a stale value.
This quote from Java Concurrency in Practice (section 3.1.3) discusses how both writes and reads need to be synchronized:
Intrinsic locking can be used to guarantee that one thread sees the effects of another in a predictable manner, as illustrated by Figure 3.1. When thread A executes a synchronized block, and subsequently thread B enters a synchronized block guarded by the same lock, the values of variables that were visible to A prior to releasing the lock are guaranteed to be visible to B upon acquiring the lock. In other words, everything A did in or prior to a synchronized block is visible to B when it executes a synchronized block guarded by the same lock. Without synchronization, there is no such guarantee.
The next section (3.1.4) covers using volatile:
The Java language also provides an alternative, weaker form of synchronization, volatile variables, to ensure that updates to a variable are propagated predictably to other threads. When a field is declared volatile, the compiler and runtime are put on notice that this variable is shared and that operations on it should not be reordered with other memory operations. Volatile variables are not cached in registers or in caches where they are hidden from other processors, so a read of a volatile variable always returns the most recent write by any thread.
Back when we all had single-CPU machines on our desktops we'd write code and never have a problem until it ran on a multiprocessor box, usually in production. Some of the factors that give rise to the visibility problems, things like CPU-local caches and instruction reordering, are things you would expect from any multiprocessor machine. Elimination of apparently unneeded instructions could happen for any machine, though. There's nothing forcing the JVM to ever make the reader see the up-to-date value of the variable; you're at the mercy of the JVM implementors. So it seems to me this code would not be a good bet for any CPU architecture.
Well!
private volatile int invoc = 0;
Will do the trick.
And see Are java primitive ints atomic by design or by accident?, which cites some of the relevant Java definitions. Apparently int is fine, but double & long might not be.
edit, add-on. The question asks, "see the correct value of invoc?". What is "the correct value"? As in the timespace continuum, simultaneity doesn't really exist between threads. One of the above posts notes that the value will eventually get flushed, and the other thread will get it. Is the code "thread safe"? I would say "yes", because it won't "misbehave" based on the vagaries of sequencing, in this case.
Theoretically, it is possible that the read is cached. Nothing in the Java memory model prevents that.
Practically, that is extremely unlikely to happen (in your particular example). The question is whether the JVM can optimize across a method call.
read #1
method();
read #2
For the JVM to reason that read#2 can reuse the result of read#1 (which can be stored in a CPU register), it must know for sure that method() contains no synchronization actions. This is generally impossible - unless method() is inlined and the JVM can see from the flattened code that there are no sync/volatile or other synchronization actions between read#1 and read#2; then it can safely eliminate read#2.
Now, in your example, the method is Thread.sleep(). One way to implement it is to busy-loop for a certain number of iterations, depending on CPU frequency. Then the JVM may inline it, and then eliminate read#2.
But of course such an implementation of sleep() is unrealistic. It is usually implemented as a native method that calls into the OS kernel. The question is: can the JVM optimize across such a native method?
Even if the JVM has knowledge of the internal workings of some native methods, and can therefore optimize across them, it's improbable that sleep() is treated that way. sleep(1ms) takes millions of CPU cycles to return; there is really no point optimizing around it to save a few reads.
--
This discussion reveals the biggest problem of data races - it takes too much effort to reason about them. A program is not necessarily wrong if it is not "correctly synchronized"; however, proving that it's not wrong is not an easy task. Life is much simpler if a program is correctly synchronized and contains no data races.
As far as I understand the code, it should be safe. The bytecode can be reordered, yes. But eventually invoc should be in sync with the main thread again. synchronized guarantees that invoc is incremented correctly, so there is a consistent representation of invoc in some register. At some point this value will be flushed and the little test will succeed.
It is certainly not nice and I would go with the answer I voted for and would fix code like this because it smells. But thinking about it I would consider it safe.
If you're not required to use "int", I would suggest AtomicInteger as a thread-safe alternative.
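For illustration, here is a minimal sketch of the same test using AtomicInteger (a CountDownLatch would arguably fit even better); the names mirror the original snippet:

import java.util.concurrent.atomic.AtomicInteger;

private final AtomicInteger invoc = new AtomicInteger(0);

private void increment() {
    invoc.incrementAndGet(); // atomic, with volatile read/write semantics
}

public void isItThreadSafe() throws InterruptedException {
    for (int i = 0; i < TOTAL_THREADS; i++) {
        new Thread(new Runnable() {
            public void run() {
                increment();
            }
        }).start();
    }
    while (invoc.get() != TOTAL_THREADS) { // get() is a volatile read: no stale value
        Thread.sleep(250);
    }
}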
