In the following code:
class A {
    private int number;

    public void a() {
        number = 5;
    }

    public void b() {
        while (number == 0) {
            // ...
        }
    }
}
If method b is called and then a new thread is started which fires method a, then method b is not guaranteed to ever see the change to number, and thus b may never terminate.
Of course, we could make number volatile to resolve this. However, for academic reasons, let's assume that volatile is not an option:
The JSR-133 FAQ tells us:
After we exit a synchronized block, we release the monitor, which has the effect of flushing the cache to main memory, so that writes made by this thread can be visible to other threads. Before we can enter a synchronized block, we acquire the monitor, which has the effect of invalidating the local processor cache so that variables will be reloaded from main memory.
This sounds as if I just need both a and b to enter and exit any synchronized block at all, no matter which monitor they use. More precisely, it sounds like this...:
class A {
    private int number;

    public void a() {
        number = 5;
        synchronized (new Object()) {}
    }

    public void b() {
        while (number == 0) {
            // ...
            synchronized (new Object()) {}
        }
    }
}
...would eliminate the problem and guarantee that b sees the change made by a, and thus will also eventually terminate.
However, the FAQ also clearly states:
Another implication is that the following pattern, which some people use to force a memory barrier, doesn't work:
synchronized (new Object()) {}
This is actually a no-op, and your compiler can remove it entirely, because the compiler knows that no other thread will synchronize on the same monitor. You have to set up a happens-before relationship for one thread to see the results of another.
Now that is confusing. I thought that the synchronized statement would cause caches to be flushed. Surely it can't flush the cache to main memory in such a way that the changes in main memory can only be seen by threads that synchronize on the same monitor, especially since for volatile, which basically does the same thing, we don't even need a monitor. Or am I mistaken there? So why is this a no-op, and why is b not guaranteed to terminate?
The FAQ is not the authority on the matter; the JLS is. Section 17.4.4 specifies synchronizes-with relationships, which feed into happens-before relationships (17.4.5). The relevant bullet point is:
An unlock action on monitor m synchronizes-with all subsequent lock actions on m (where "subsequent" is defined according to the synchronization order).
Since m here is the reference to the new Object(), and it's never stored or published to any other thread, we can be sure that no other thread will acquire a lock on m after the lock in this block is released. Furthermore, since m is a new object, we can be sure that there is no action that previously unlocked on it. Therefore, we can be sure that no action formally synchronizes-with this action.
Technically, you don't even need to do a full cache flush to be up to the JLS spec; that's more than the JLS requires. A typical implementation does one, because it's the easiest thing the hardware lets you do, but it's going "above and beyond," so to speak. In cases where escape analysis tells an optimizing compiler that even less is needed, the compiler can do less. In your example, escape analysis could tell the compiler that the action has no effect (due to the reasoning above), so it can be optimized out entirely.
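For contrast, here is a minimal sketch of how to establish the required happens-before without volatile: both threads synchronize on one shared monitor (the lock field is my addition, not from the original question):

class A {
    private final Object lock = new Object(); // shared monitor, reachable by both threads
    private int number;

    public void a() {
        synchronized (lock) { // the unlock at the end of this block...
            number = 5;
        }
    }

    public void b() {
        for (;;) {
            synchronized (lock) { // ...synchronizes-with each later lock acquisition here
                if (number != 0) {
                    return; // guaranteed to eventually see the write in a()
                }
            }
            // ...
        }
    }
}

Because both blocks use the same monitor, the unlock in a() synchronizes-with a subsequent lock in b(), which is exactly the happens-before edge that synchronized (new Object()) {} fails to create.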
the following pattern, which some people use to force a memory barrier, doesn't work:
It's not guaranteed to be a no-op, but the spec permits it to be a no-op. The spec only requires synchronization to establish a happens-before relationship between two threads when the two threads synchronize on the same object, but it actually would be easier to implement a JVM where the identity of the object did not matter.
I thought that the synchronized-Statement will cause caches to flush
There is no "cache" in the Java Language Specification. That's a concept that only exists in the details of some (well, O.K., virtually all) hardware platforms and JVM implementations.
Related
I want to know how the JVM guarantees the visibility of member variable modifications in the referenced object when using synchronized.
I know synchronized and volatile will provide visibility for variable modifications.
class Test {
    public int a = 0;

    public void modify() {
        a += 1;
    }
}
// Example:
// Thread A:
volatile Test test = new Test();
synchronized (locker) {
    test.modify();
}

// then thread B:
synchronized (locker) {
    test.modify();
}

// Now, I think test.a == 2 is true. Is that right? How does the JVM implement it?
// I know about memory barriers; do they flush all caches to main storage?
Thread A calls modify in a synchronized block first, and then passes the object to thread B (by writing the reference to a volatile variable).
Then thread B calls modify again (also in a synchronized block).
Is a == 2 guaranteed? And how does the JVM implement it?
Visibility between threads is enforced with memory barriers/fences. In the case of a synchronized block, the JVM will insert a memory barrier after execution of the block completes.
The JVM implements memory barriers with CPU instructions; e.g., on x86 a store barrier is done with the sfence instruction and a load barrier with lfence. There is also mfence, and possibly other instructions that are specific to the CPU architecture.
For your (still incomplete!) example, if we can assume the following:
The code in thread A initializing test is guaranteed to run before thread B uses it.
The locker variable contains a reference to the same object for threads A & B.
then we can prove that a == 2 will be true at the point you indicate. If precondition 1 is not guaranteed, then thread B may get an NPE. If precondition 2 is not guaranteed (i.e. threads A and B may synchronize on different objects) then there is not a proper happens-before relationship to ensure that thread B sees the result of thread A's actions on a.
(@NathanHughes commented that the volatile is unnecessary. I wouldn't necessarily agree with that. It depends on details of your example that you still haven't shown us.)
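To make those preconditions concrete, here is one hedged way the incomplete example might be completed (the Demo class, the thread setup, and the join are my additions, not from the original post):

class Demo {
    static final Object locker = new Object(); // precondition 2: one lock shared by both threads
    static volatile Test test;

    public static void main(String[] args) throws InterruptedException {
        Thread threadA = new Thread(() -> {
            test = new Test(); // volatile write publishes the reference
            synchronized (locker) {
                test.modify();
            }
        });
        threadA.start();
        threadA.join(); // precondition 1: thread A completes before thread B starts

        Thread threadB = new Thread(() -> {
            synchronized (locker) { // same lock as thread A
                test.modify();
                System.out.println(test.a); // prints 2 under these preconditions
            }
        });
        threadB.start();
    }
}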
How does the JVM implement it?
The actual implementation is Java platform and (in theory) version specific. The JVM spec Memory Model places constraints on how a program that obeys "the rules" will behave. It is entirely implementation specific how that actually happens.
I know about memory barriers; do they flush all caches to main storage?
That is implementation specific too. There are different kinds of memory barrier that work in different ways. The JIT compiler will emit native code that uses the appropriate instructions to meet the guarantees required by the JLS. If there is a way to do this without doing a full cache flush then the implementation may do that.
(There is a JVM command line option to tell the JIT compiler to output the native code. If you really want to know what is happening under the hood, that is a good place to start looking.)
But if you are trying to understand / analyze your application's thread-safety, you should be doing it in terms of the Java Memory Model. Also, use higher level concurrency abstractions that allow you to avoid the lower level pitfalls.
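For instance, the Test class above could drop the explicit locking entirely by using a higher-level primitive; here is a sketch with AtomicInteger (my substitution, not from the original code):

import java.util.concurrent.atomic.AtomicInteger;

class Test {
    private final AtomicInteger a = new AtomicInteger(0);

    public void modify() {
        a.incrementAndGet(); // atomic read-modify-write, visible to all threads
    }

    public int value() {
        return a.get();
    }
}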
I'm trying to understand the need for volatile in double-checked locking (I'm aware there are better ways than DCL, though). I've read a few SO questions similar to mine, but none seems to explain what I'm looking for. I've even found some upvoted answers on SO that have said volatile is not needed (even when the object is mutable); however, everything else I've read says otherwise.
What I want to know is why volatile is necessary in DCL if synchronized creates a happens-before relationship and prevents reordering?
Here is my understanding of how DCL works and an example:
// Does not work
class Foo {
    private Helper helper = null;          // 1
    public Helper getHelper() {            // 2
        if (helper == null) {              // 3
            synchronized(this) {           // 4
                if (helper == null) {      // 5
                    helper = new Helper(); // 6
                }                          // 7
            }                              // 8
        }                                  // 9
        return helper;                     // 10
    }
}
This does not work because the Helper object is not immutable, the helper field is not volatile, and we know that
volatile causes every write to be flushed to memory and for every read to come from memory. This is important so that no thread sees a stale object.
So in the example I listed, it's possible for Thread A to begin initializing a new Helper object at line 6. Then Thread B comes along and sees a half-initialized object at line 3. Thread B then jumps to line 10 and returns a half-initialized Helper object.
Adding volatile fixes this by establishing a happens-before relationship, and no reordering can be done by the JIT compiler. So the reference to the Helper object cannot be written to helper until the object is fully constructed (at least, that is what I think it is telling me...).
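For reference, here is a sketch of the DCL variant that does work on Java 5 and later; the only essential change is declaring the field volatile (the local result variable is a common refinement, not part of the original code):

class Foo {
    private volatile Helper helper = null;

    public Helper getHelper() {
        Helper result = helper; // single volatile read on the fast path
        if (result == null) {
            synchronized (this) {
                result = helper;
                if (result == null) {
                    helper = result = new Helper(); // volatile write: safe publication
                }
            }
        }
        return result;
    }
}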
However, after reading JSR-133 documentation, I became a bit confused. It states
Synchronization ensures that memory writes by a thread before or during a synchronized block are made visible in a predictable manner to other threads which synchronize on the same monitor. After we exit a synchronized block, we release the monitor, which has the effect of flushing the cache to main memory, so that writes made by this thread can be visible to other threads. Before we can enter a synchronized block, we acquire the monitor, which has the effect of invalidating the local processor cache so that variables will be reloaded from main memory. We will then be able to see all of the writes made visible by the previous release.
So synchronized in Java creates a memory barrier and a happens-before relationship.
Since the actions are being flushed to memory, it makes me question why volatile is needed on the variable.
The documentation also states
This means that any memory operations which were visible to a thread before exiting a synchronized block are visible to any thread after it enters a synchronized block protected by the same monitor, since all the memory operations happen before the release, and the release happens before the acquire.
My guess as to why we need the volatile keyword and why synchronized alone is not enough is that the memory operations are not visible to other threads until Thread A exits the synchronized block and Thread B enters the same block on the same lock.
It's possible that Thread A is initializing the object at line 6 and Thread B comes along at Line 3 before there is a flush by Thread A at Line 8.
However, this SO answer seems to contradict that, as the synchronized block prevents reordering "from inside a synchronized block, to outside it".
If helper is not null, what ensures that the code will see all the effects of the construction of the helper? Without volatile, nothing would do so.
Consider:
synchronized(this) {           // 4
    if (helper == null) {      // 5
        helper = new Helper(); // 6
    }                          // 7
Suppose internally this is implemented as first setting helper to a non-null value and then calling the constructor to create a valid Helper object. No rule prevents this.
Another thread may see helper as non-null but the constructor hasn't even run yet, much less made its effects visible to another thread.
It is vital not to permit any other thread to see helper set to a non-null value until we can guarantee that all consequences of the constructor are visible to that thread.
By the way, getting code like this correct is extremely difficult. Worse, it can appear to work fine 100% of the time and then suddenly break on a different JVM, CPU, library, platform, or whatever. It is generally advised that writing this kind of code be avoided unless proven to be needed to meet performance requirements. This kind of code is hard to write, hard to understand, hard to maintain, and hard to get right.
@David Schwartz's answer is pretty good, but there is one thing that I'm not sure is stated well.
My guess as to why we need the volatile keyword and why synchronized alone is not enough is that the memory operations are not visible to other threads until Thread A exits the synchronized block and Thread B enters the same block on the same lock.
Actually, not the same lock but any lock, because locks come with memory barriers (strictly, the JMM only guarantees visibility when the same monitor is used; in practice, hardware barriers are not lock-specific). volatile is not about locking, but it is about crossing memory barriers, while synchronized blocks are both locks and memory barriers. You need the volatile because even though Thread A has properly initialized the Helper instance and published it to the helper field, Thread B also needs to cross a memory barrier to ensure that it sees all of the updates to Helper.
So in the example I listed, it's possible for Thread A to begin initializing a new Helper object at line 6. Then Thread B comes along and sees a half-initialized object at line 3. Thread B then jumps to line 10 and returns a half-initialized Helper object.
Right. It is possible that Thread A might initialize the Helper and publish it before it hits the end of the synchronized block. There is nothing stopping that from happening. And because the JVM is allowed to reorder the instructions from the Helper constructor to later, the object could be published to the helper field without being fully initialized. And even if Thread A does reach the end of the synchronized block and Helper then gets fully initialized, there is still nothing that ensures that Thread B sees all of the updated memory.
However, this SO answer seems to contradict that, as the synchronized block prevents reordering "from inside a synchronized block, to outside it".
No, that answer is not contradictory. You are confusing what happens with just Thread A and what happens to other threads. In terms of Thread A (and main memory), exiting the synchronized block makes sure that Helper's constructor has fully finished and its result has been published to the helper field. But this means nothing until Thread B (or other threads) also crosses a memory barrier. Then they too will invalidate their local memory cache and see all of the updates.
That's why the volatile is necessary.
Consider the following code sample:
private Object lock = new Object();
private volatile boolean doWait = true;

public void conditionalWait() throws Exception {
    synchronized (lock) {
        if (doWait) {
            lock.wait();
        }
    }
}

public void cancelWait() throws Exception {
    doWait = false;
    synchronized (lock) {
        lock.notifyAll();
    }
}
If I understand the Java Memory Model correctly, the above code is not thread-safe. It might very well block, because the compiler might decide to rearrange the code as follows:
public void cancelWait() throws Exception {
    synchronized (lock) {
        lock.notifyAll();
    }
    doWait = false;
}
In this case it might happen that thread T1 calls the cancelWait() method, acquires the lock, calls notifyAll(), and releases the lock. After this, a parallel thread T2 could call conditionalWait() and acquire the now-available lock. The variable doWait still has the value true, so thread T2 executes lock.wait() and blocks.
Is my understanding correct? If not, then please provide according references from the Java Specification which disprove above scenario.
Is there a solution that resolves this issue that does not require pulling doWait into the synchronized block?
The question you are asking is actually
Can a monitor enter be re-ordered above a volatile store?
No, your transformation cannot happen. Take a look at the grid linked at the top of http://gee.cs.oswego.edu/dl/jmm/cookbook.html.
First Operation: Volatile Store
Second Operation: Monitor Enter
Result: No
So the compiler cannot re-order as you suggest.
Your code is broken, but not because of reordering or visibility issues. Reordering problems occur in the absence of sufficient synchronization, which is not the case here. You have done everything possible, in terms of marking things volatile or synchronized, to let the JVM know to make the right things visible across threads.
Your problem here is that you're making several false assumptions:
You're thinking wait can never return until it gets a notification. It can (this may not happen frequently, but it can happen; this is called a "spurious wakeup").
You're assuming that another thread can't barge in between the time the notification happens and the time that the waiting thread can reacquire the monitor. (Object#wait releases the monitor, and upon reacquiring it the thread needs to re-check what the current state is, instead of proceeding based on possibly outdated assumptions.)
You're assuming you can predict that the notify will happen after the wait (I can't say whether that's true in this case since you didn't post a complete working example, but in general this is not something you want to assume).
There are lots of toy examples (thinking of the even-odd assignment) that get away with this because they are limited to only 2 threads, the race condition that causes spurious wakeups doesn't happen often on PC JVMs, and the program forces the two threads to act in lock-step so the order in which things happen is predictable. But those aren't realistic assumptions for the real world.
The fix for these bad assumptions is to wait in a loop using a condition variable to decide when you're done waiting (see this Oracle tutorial):
private final Object lock = new Object(); // final to emphasize this shouldn't change
private volatile boolean doWait = true;

public void conditionalWait() throws InterruptedException {
    synchronized (lock) {
        while (doWait) {
            lock.wait();
        }
    }
}

public void cancelWait() {
    doWait = false;
    synchronized (lock) {
        lock.notifyAll();
    }
}
(I narrowed the exceptions thrown. The only thing thrown by notifyAll is IllegalMonitorStateException, which is unchecked and won't happen as long as you're using the right locks; it's only thrown as a result of programmer error.
Object#wait throws InterruptedException as well as IllegalMonitorStateException; it's OK to let InterruptedException be thrown here.)
It would be just as well here to move the references to the doWait variable into the synchronized blocks; if all references to it are made while holding the lock, then you don't need to make it volatile. But this isn't required.
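A sketch of that variant, with doWait touched only while holding the lock so that volatile can be dropped (this is my rewrite of the code above, under that assumption):

private final Object lock = new Object();
private boolean doWait = true; // no longer volatile: only accessed under lock

public void conditionalWait() throws InterruptedException {
    synchronized (lock) {
        while (doWait) {
            lock.wait();
        }
    }
}

public void cancelWait() {
    synchronized (lock) {
        doWait = false; // write under the same lock the waiter holds
        lock.notifyAll();
    }
}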
The Java memory model guarantees sequential consistency when your program is correctly synchronized. Since your code above is correctly synchronized, the reordering doesn't happen.
Happens Before Order
A program is correctly synchronized if and only if all sequentially consistent executions are free of data races.
If a program is correctly synchronized, then all executions of the program will appear to be sequentially consistent (§17.4.3).
This is an extremely strong guarantee for programmers. Programmers do not need to reason about reorderings to determine that their code contains data races. Therefore they do not need to reason about reorderings when determining whether their code is correctly synchronized. Once the determination that the code is correctly synchronized is made, the programmer does not need to worry that reorderings will affect his or her code.
This can be confusing since sequential consistency is defined previously in the inter-thread section of the spec (dealing with only a single thread).
Programs and Program Order
A set of actions is sequentially consistent if all actions occur in a total order (the execution order) that is consistent with program order, and furthermore, each read r of a variable v sees the value written by the write w to v such that:
w comes before r in the execution order, and
there is no other write w' such that w comes before w' and w' comes before r in the execution order.
Sequential consistency is a very strong guarantee that is made about visibility and ordering in an execution of a program. Within a sequentially consistent execution, there is a total order over all individual actions (such as reads and writes) which is consistent with the order of the program, and each individual action is atomic and is immediately visible to every thread.
If a program has no data races, then all executions of the program will appear to be sequentially consistent.
So what sequential consistency boils down to is that your program, when correctly synchronized, must appear to work as if each read and write were completed exactly in the order specified in your program. No reorderings are allowed (or, at least, allowed to be visible).
Normally when you talk about a write being reordered, you're talking about a pthreads-style memory model, used by C++ (I think), which specifies when writes can and cannot be reordered past a memory barrier. It's a popular memory model and a lot of people know it.
Java doesn't have the concept of memory barriers. Java is similar to, but not the same as, the pthreads spec, so don't get the two confused. In Java, either you have a program that works exactly in the order you specify in your program, or you have no guarantees at all if you don't synchronize. It's one or the other, and in your case the write to the volatile has to appear in program order.
Re. your question in your comment below: I don't think it's that hard to find happens-before in the spec. Synchronization Order says:
Every execution has a synchronization order. A synchronization order is a total order over all of the synchronization actions of an execution. For each thread t, the synchronization order of the synchronization actions (§17.4.2) in t is consistent with the program order (§17.4.3) of t.
Synchronization actions induce the synchronized-with relation on actions, defined as follows:
An unlock action on monitor m synchronizes-with all subsequent lock actions on m (where "subsequent" is defined according to the synchronization order).
And back to some definitions in Happens Before Order:
Two actions can be ordered by a happens-before relationship. If one action happens-before another, then the first is visible to and ordered before the second.
If we have two actions x and y, we write hb(x, y) to indicate that x happens-before y.
If x and y are actions of the same thread and x comes before y in program order, then hb(x, y).
If an action x synchronizes-with a following action y, then we also have hb(x, y).
So the unlock of your monitor synchronized (lock) in cancelWait() synchronizes-with the lock acquire action in conditionalWait(). Synchronizes-with creates a happens-before relationship (see the very last line of the quote directly above). Therefore the assignment doWait = false; must be visible when it is read in conditionalWait().
(Happens Before Order also says:
If hb(x, y) and hb(y, z), then hb(x, z).
so we know that if the volatile is assigned before the lock release, and a new lock acquire then happens after the lock release, it must be that the volatile assignment happens-before the lock acquire and is therefore visible.)
According to the JLS, it's impossible:
http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.4.5
You could also look into:
Java memory model : compiler rearranging code lines
http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile
My teacher in an upper level Java class on threading said something that I wasn't sure of.
He stated that the following code would not necessarily update the ready variable. According to him, the two threads don't necessarily share the static variable, specifically in the case when each thread (main thread versus ReaderThread) is running on its own processor and therefore doesn't share the same registers/cache/etc and one CPU won't update the other.
Essentially, he said it is possible that ready is updated in the main thread, but NOT in the ReaderThread, so that ReaderThread will loop infinitely.
He also claimed it was possible for the program to print 0 or 42. I understand how 42 could be printed, but not 0. He mentioned this would be the case when the number variable is set to the default value.
I thought perhaps it is not guaranteed that the static variable is updated between the threads, but this strikes me as very odd for Java. Does making ready volatile correct this problem?
He showed this code:
public class NoVisibility {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready) Thread.yield();
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}
There isn't anything special about static variables when it comes to visibility. If they are accessible, any thread can get at them, so you're more likely to see concurrency problems because they're more exposed.
There is a visibility issue imposed by the JVM's memory model. Here's an article talking about the memory model and how writes become visible to threads. You can't count on changes one thread makes becoming visible to other threads in a timely manner (actually the JVM has no obligation to make those changes visible to you at all, in any time frame), unless you establish a happens-before relationship.
Here's a quote from that link (supplied in the comment by Jed Wesley-Smith):
Chapter 17 of the Java Language Specification defines the happens-before relation on memory operations such as reads and writes of shared variables. The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation. The synchronized and volatile constructs, as well as the Thread.start() and Thread.join() methods, can form happens-before relationships. In particular:
Each action in a thread happens-before every action in that thread that comes later in the program's order.
An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
A write to a volatile field happens-before every subsequent read of that same field. Writes and reads of volatile fields have similar memory consistency effects as entering and exiting monitors, but do not entail mutual exclusion locking.
A call to start on a thread happens-before any action in the started thread.
All actions in a thread happen-before any other thread successfully returns from a join on that thread.
He was talking about visibility, and he is not to be taken too literally.
Static variables are indeed shared between threads, but the changes made in one thread may not be visible to another thread immediately, making it seem like there are two copies of the variable.
This article presents a view that is consistent with how he presented the info:
http://jeremymanson.blogspot.com/2008/11/what-volatile-means-in-java.html
First, you have to understand a little something about the Java memory model. I've struggled a bit over the years to explain it briefly and well. As of today, the best way I can think of to describe it is if you imagine it this way:
Each thread in Java takes place in a separate memory space (this is clearly untrue, so bear with me on this one).
You need to use special mechanisms to guarantee that communication happens between these threads, as you would on a message passing system.
Memory writes that happen in one thread can "leak through" and be seen by another thread, but this is by no means guaranteed. Without explicit communication, you can't guarantee which writes get seen by other threads, or even the order in which they get seen.
...
But again, this is simply a mental model to think about threading and volatile, not literally how the JVM works.
Basically it's true, but actually the problem is more complex. Visibility of shared data can be affected not only by CPU caches, but also by out-of-order execution of instructions.
Therefore Java defines a memory model that states under which circumstances threads can see a consistent state of the shared data.
In your particular case, adding volatile guarantees visibility.
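Here is a sketch of that fix applied to the teacher's code; volatile on ready is the only change. The write to number is ordered before the volatile write to ready, which happens-before the volatile read in the loop, so the program can now only print 42:

public class NoVisibility {
    private static volatile boolean ready; // volatile: creates the happens-before edge
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready) Thread.yield(); // volatile read
            System.out.println(number);    // guaranteed to print 42
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;  // ordinary write, ordered before the volatile write below
        ready = true; // volatile write publishes everything written before it
    }
}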
They are "shared" of course in the sense that they both refer to the same variable, but they don't necessarily see each other's updates. This is true for any variable, not just static.
And in theory, writes made by another thread can appear to be in a different order, unless the variables are declared volatile or the writes are explicitly synchronized.
Within a single classloader, static fields are always shared. To explicitly scope data to threads, you'd want to use a facility like ThreadLocal.
When you declare a static variable of a primitive type, Java assigns it a default value:
public static int i;
When you define the variable like this, the default value of i is 0;
that's why there is a possibility of getting 0.
The main thread then updates the value of the boolean ready to true. Since ready is a static variable, the main thread and the other thread reference the same memory address, so the change to ready is seen, the secondary thread gets out of the while loop, and it prints the value.
The value printed can be the initial value of number, which is 0, if the reader thread passes the while loop before the main thread updates the number variable. Then there is a possibility of printing 0.
@dontocsata
You can go back to your teacher and school him a little :)
A few notes from the real world, regardless of what you see or are told.
Please note: the words below concern this particular case, in the exact order shown.
The following two variables will reside on the same cache line under virtually any known architecture.
private static boolean ready;
private static int number;
Thread.exit (for the main thread) is guaranteed to run, and exiting is guaranteed to cause a memory fence, due to the thread-group thread removal (and many other issues). (It's a synchronized call, and I see no way for it to be implemented without the sync part, since the ThreadGroup must terminate as well if no daemon threads are left, etc.)
The started thread ReaderThread is going to keep the process alive since it is not a daemon one!
Thus ready and number will be flushed together (or number first, if a context switch occurs), and there is no real reason for reordering in this case; at least I can't think of one.
You will need something truly weird to see anything but 42. Again, I do presume both static variables will be in the same cache line. I just can't imagine a cache line 4 bytes long, or a JVM that will not allocate them in a contiguous area (one cache line).
I see this code quite frequently in some OSS unit tests, but is it thread-safe? Is the while loop guaranteed to see the correct value of invoc?
If not: nerd points to whoever also knows which CPU architecture this may fail on.
private int invoc = 0;

private synchronized void increment() {
    invoc++;
}

public void isItThreadSafe() throws InterruptedException {
    for (int i = 0; i < TOTAL_THREADS; i++) {
        new Thread(new Runnable() {
            public void run() {
                // do some stuff
                increment();
            }
        }).start();
    }
    while (invoc != TOTAL_THREADS) {
        Thread.sleep(250);
    }
}
No, it's not threadsafe. invoc needs to be declared volatile, or accessed while synchronizing on the same lock, or changed to use AtomicInteger. Just using the synchronized method to increment invoc, but not synchronizing to read it, isn't good enough.
The JVM does a lot of optimizations, including CPU-specific caching and instruction reordering. It uses the volatile keyword and locking to decide when it can optimize freely and when it has to have an up-to-date value available for other threads to read. So when the reader doesn't use the lock the JVM can't know not to give it a stale value.
This quote from Java Concurrency in Practice (section 3.1.3) discusses how both writes and reads need to be synchronized:
Intrinsic locking can be used to guarantee that one thread sees the effects of another in a predictable manner, as illustrated by Figure 3.1. When thread A executes a synchronized block, and subsequently thread B enters a synchronized block guarded by the same lock, the values of variables that were visible to A prior to releasing the lock are guaranteed to be visible to B upon acquiring the lock. In other words, everything A did in or prior to a synchronized block is visible to B when it executes a synchronized block guarded by the same lock. Without synchronization, there is no such guarantee.
The next section (3.1.4) covers using volatile:
The Java language also provides an alternative, weaker form of synchronization, volatile variables, to ensure that updates to a variable are propagated predictably to other threads. When a field is declared volatile, the compiler and runtime are put on notice that this variable is shared and that operations on it should not be reordered with other memory operations. Volatile variables are not cached in registers or in caches where they are hidden from other processors, so a read of a volatile variable always returns the most recent write by any thread.
Back when we all had single-CPU machines on our desktops, we'd write code and never have a problem until it ran on a multiprocessor box, usually in production. Some of the factors that give rise to the visibility problems, things like CPU-local caches and instruction reordering, are things you would expect from any multiprocessor machine. Elimination of apparently unneeded instructions could happen for any machine, though. There's nothing forcing the JVM to ever make the reader see the up-to-date value of the variable; you're at the mercy of the JVM implementors. So it seems to me this code would not be a good bet for any CPU architecture.
Well!
private volatile int invoc = 0;
Will do the trick.
And see Are java primitive ints atomic by design or by accident?, which cites some of the relevant Java definitions. Apparently int is fine, but double and long might not be.
Edit, add-on: the question asks, "see the correct value of invoc?". What is "the correct value"? As in the space-time continuum, simultaneity doesn't really exist between threads. One of the above posts notes that the value will eventually get flushed, and the other thread will get it. Is the code "thread safe"? I would say "yes", because it won't "misbehave" based on the vagaries of sequencing, in this case.
Theoretically, it is possible that the read is cached. Nothing in the Java memory model prevents that.
Practically, that is extremely unlikely to happen (in your particular example). The question is whether the JVM can optimize across a method call.
read #1
method();
read #2
For the JVM to reason that read #2 can reuse the result of read #1 (which can be stored in a CPU register), it must know for sure that method() contains no synchronization actions. This is generally impossible, unless method() is inlined and the JVM can see from the flattened code that there are no sync/volatile or other synchronization actions between read #1 and read #2; then it can safely eliminate read #2.
Now in your example, the method is Thread.sleep(). One way to implement it would be to busy-loop for a certain number of iterations, depending on the CPU frequency. Then the JVM could inline it, and then eliminate read #2.
But of course such an implementation of sleep() is unrealistic. It is usually implemented as a native method that calls the OS kernel. The question is whether the JVM can optimize across such a native method.
Even if the JVM has knowledge of the internal workings of some native methods, and therefore can optimize across them, it's improbable that sleep() is treated that way. sleep(1 ms) takes millions of CPU cycles to return; there is really no point optimizing around it to save a few reads.
--
This discussion reveals the biggest problem with data races: it takes too much effort to reason about them. A program is not necessarily wrong if it is not "correctly synchronized"; however, proving that it's not wrong is not an easy task. Life is much simpler if a program is correctly synchronized and contains no data races.
As far as I understand the code, it should be safe. The bytecode can be reordered, yes, but eventually invoc should be in sync with the main thread again. synchronized guarantees that invoc is incremented correctly, so there is a consistent representation of invoc in some register. At some point this value will be flushed and the little test succeeds.
It is certainly not nice, and I would go with the answer I voted for and fix code like this because it smells. But thinking about it, I would consider it safe.
If you're not required to use int, I would suggest AtomicInteger as a thread-safe alternative.
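A sketch of that alternative applied to the code from the question (AtomicInteger replaces the int field; TOTAL_THREADS is assumed to be defined as in the original):

import java.util.concurrent.atomic.AtomicInteger;

private final AtomicInteger invoc = new AtomicInteger(0);

private void increment() {
    invoc.incrementAndGet(); // atomic and always visible; no synchronized needed
}

public void isItThreadSafe() throws InterruptedException {
    for (int i = 0; i < TOTAL_THREADS; i++) {
        new Thread(new Runnable() {
            public void run() {
                // do some stuff
                increment();
            }
        }).start();
    }
    while (invoc.get() != TOTAL_THREADS) { // get() sees the latest value
        Thread.sleep(250);
    }
}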