On my machine the following code runs indefinitely (Java 1.7.0_07):
private static boolean stopRequested;   // shared flag, not volatile

public static void main(String[] args) throws InterruptedException {
    Thread backgroundThread = new Thread(new Runnable() {
        public void run() {
            int i = 0;
            while (!stopRequested) {
                i++;
            }
        }
    });
    backgroundThread.start();
    TimeUnit.SECONDS.sleep(1);
    stopRequested = true;
}
However, add a single lock object and a single synchronized statement NOT around stopRequested (in fact, nothing occurs in the synchronized block), and it terminates:
public static void main(String[] args) throws InterruptedException {
    Thread backgroundThread = new Thread(new Runnable() {
        public void run() {
            Object lock = new Object();
            int i = 0;
            while (!stopRequested) {
                synchronized (lock) {}
                i++;
            }
        }
    });
    backgroundThread.start();
    TimeUnit.SECONDS.sleep(1);
    stopRequested = true;
}
In the original code, the read of stopRequested is "hoisted" out of the loop, so it effectively becomes:
if (!stopRequested)
    while (true)
        i++;
However, in the modified version this optimization does not seem to occur. Why? (In fact, why isn't the synchronized block optimized away entirely?)
The VM is unable to prove that the lock is never synchronized on by another thread, so the synchronized block cannot be optimized away.
Per the Java Memory Model, all synchronization actions are totally ordered, and this order (on the same lock) helps establish the happens-before relation. That's why the VM can't simply remove a synchronization block: only if the VM can prove that a single thread ever synchronizes on the object can these sync blocks be removed without affecting the happens-before relation.
If the lock is a local object, the VM could use escape analysis to elide the lock. We've been hearing about escape analysis for years, but as the example shows, and as I tested not long ago, it doesn't seem to be working here yet.
There might be a reason why lock elision isn't being done. The optimization is great for code that uses a local Vector or StringBuffer etc., but that pattern only appears in old code; nobody has written it that way for a long time.
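For context, here is a minimal sketch of the kind of legacy pattern the optimization targets (the method is hypothetical, just for illustration): the StringBuffer never escapes the method, so escape analysis could in principle prove its lock thread-confined and elide the synchronization.

// The StringBuffer is purely local, so its intrinsic lock is provably
// thread-confined; in principle the JIT could elide it entirely.
static String joinParts(String[] parts) {
    StringBuffer sb = new StringBuffer();  // never escapes this method
    for (String part : parts) {
        sb.append(part);                   // append() is synchronized
    }
    return sb.toString();                  // toString() is synchronized too
}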
Some code might even depend on the stronger pre-Java-5 memory model, in which no lock could ever be elided. There may be many programs, similar to the OP's crafted example, that are incorrect under the new model but have worked for years. Lock elision could break such programs.
While this might look like a memory visibility issue, in simple examples like this it is usually a JIT optimisation issue. The JIT can detect that you are not modifying the flag in that thread and can then inline its value, effectively turning the loop into an infinite loop.
One way you can tell is that genuine visibility issues are short-lived, usually too short for you to see. While they are random, they typically last from a microsecond to a millisecond, i.e. until the thread context-switches and, when it runs again, it no longer carries the old value with it. The fact that you can see examples which consistently turn into an infinite loop that never "detects" the change is a giveaway.
If you just slow down the loop, e.g. with a Thread.sleep(10), you can prevent it from running long enough to be compiled; it has to loop 10,000+ times to be optimised. This usually "fixes" the problem.
Adding thread-safety code, such as using a volatile variable or adding a synchronized block, also prevents the optimisation from being made.
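For reference, the usual fix for the example above is simply to declare the flag volatile; here is a minimal sketch (the class name StopThread is just for illustration):

import java.util.concurrent.TimeUnit;

public class StopThread {
    // volatile guarantees that the background thread sees the write
    private static volatile boolean stopRequested;

    public static void main(String[] args) throws InterruptedException {
        Thread backgroundThread = new Thread(new Runnable() {
            public void run() {
                int i = 0;
                while (!stopRequested) {
                    i++;
                }
            }
        });
        backgroundThread.start();
        TimeUnit.SECONDS.sleep(1);
        stopRequested = true; // now guaranteed to stop the loop
    }
}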
I'm exploring an example of a simple android game and I have a question about its synchronization logic.
Given two fields:
private boolean mRun = false;
private final Object mRunLock = new Object();
Method setRunning in a worker thread class:
public void setRunning(boolean b) {
    synchronized (mRunLock) {
        mRun = b;
    }
}
And method run in the same class:
public void run() {
    while (mRun) {
        Canvas c = null;
        try {
            c = mSurfaceHolder.lockCanvas(null);
            synchronized (mSurfaceHolder) {
                if (mMode == STATE_RUNNING) updatePhysics();
                synchronized (mRunLock) {
                    if (mRun) doDraw(c);
                }
            }
        } finally {
            if (c != null) {
                mSurfaceHolder.unlockCanvasAndPost(c);
            }
        }
    }
}
Is it correct not to synchronize on mRun in the while statement? I think setRunning might potentially be called while mRun is being checked for true.
I don't think the code is correct.
You should probably do something like:
while (true) {
    synchronized (mRunLock) {
        if (!mRun) break;   // exit once the flag has been cleared
    }
    // ...
}
Without this, you don't have a guarantee that writing to mRun happens-before the read in the condition.
It will sort-of work without it, because you are reading mRun inside a synchronized block inside the loop; provided that read is executed, the value will be updated. But the value you read in the loop expression on the next iteration could be the same value as was read on the previous iteration in the synchronized (mRunLock) { if (mRun) doDraw(c); }.
Critically, it isn't guaranteed to read an up-to-date value on the initial iteration. If false is cached, the loop won't execute.
Making mRun volatile would be easier than using synchronization, though.
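For illustration, here is a minimal sketch of the volatile alternative, assuming the same fields and methods as in the question (the surrounding worker class is omitted):

// With mRun volatile, the write in setRunning happens-before every
// subsequent read of mRun, so no lock is needed purely for visibility.
private volatile boolean mRun = false;

public void setRunning(boolean b) {
    mRun = b;
}

public void run() {
    while (mRun) {
        // lock the surface, updatePhysics(), doDraw(c) as in the original code
    }
}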
You need to keep the 'synchronized' statements. If you don't (though note that Android, which isn't really Java, may not adhere to the same memory model as actual Java), then any thread is free to make a temporary clone of any field of any instance it wants, and to synchronize writes to that clone with any other thread's clone only at some undefined later point in time.
To avoid the issues with these 'clones'*, you need to establish CBCA relationships ("comes before/comes after"): if the threading model ensures that line X in thread A definitely ran after line Y in thread B, then any field writes done by line Y are guaranteed to be visible to line X.
In other words, with the synchronized statements, if the mRunLock lock in your run() method has to 'wait' for the setRunning method to finish, you have just established a CBCA relationship between the two, and that is crucial because it means the mRun write done by setRunning is now visible. Without it, the write may be visible or it may not be; it depends on the chip in your phone and the phase of the moon.
Note that boolean writes are otherwise atomic. So it's not so much about problems that would occur if you read while the field is being written (that is not an issue in itself when the field's type is decreed to be atomic, which all primitives other than double and long are); it's about ensuring the visibility of any changes.
In plain jane java you'd probably use an AtomicBoolean for this and avoid using any synchronized anything. Note also that nesting synchronized() on different locks (you lock on mSurfaceHolder, and then lock on mRunLock) can lead to deadlocks if any code does it 'in reverse' (locks on mRunLock first, then locks on mSurfaceHolder).
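A minimal sketch of that AtomicBoolean approach; the class name GameWorker and the loop-body comment are placeholders, not the actual game code:

import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical worker using AtomicBoolean instead of a lock for the flag.
class GameWorker implements Runnable {
    private final AtomicBoolean mRun = new AtomicBoolean(false);

    public void setRunning(boolean b) {
        mRun.set(b);            // immediately visible to the game loop
    }

    @Override
    public void run() {
        while (mRun.get()) {    // get() has volatile read semantics
            // lock the surface, update physics, draw, as in the original code
        }
    }
}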
Are you running into any problems with this code, or just wondering 'is it correct'? If the latter: Yes, it is correct.
*) Whilst this clone thing sounds tedious and error-prone, the only alternative is that any field write by any thread is immediately visible to every other thread. That would slow everything waaaaay down; the VM has no idea which writes have the potential to be read soon by another thread, and if you know anything about modern CPU architecture, each core has its own cache that is orders of magnitude (100 to 1000 times!) faster than system memory. 'All writes must always be visible everywhere' would pretty much mean that fields could never be in any cache, ever. That would be disastrous for performance, so this memory model is basically a necessary evil. There are languages that don't have it; they tend to be orders of magnitude slower than Java.
In the following scenario, the boolean done gets set to true, which should end the program. Instead, the program just keeps running even though while (!done) should no longer hold, so it should have halted. Now, if I add a Thread.sleep, even with a zero sleep time, the program terminates as expected. Why is that?
public class Sample {
    private static boolean done;

    public static void main(String[] args) throws InterruptedException {
        done = false;
        new Thread(() -> {
            System.out.println("Running...");
            int count = 0;
            while (!done) {
                count++;
                try {
                    Thread.sleep(0); // program only ends if I add this line.
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();
        Thread.sleep(2000);
        done = true; // this is set to true after 2 seconds so program should end.
        System.out.println("Done!"); // this gets printed after 2 seconds
    }
}
EDIT: I am looking to understand why the above needs Thread.sleep(0) to terminate. I do not want to use the volatile keyword unless it is an absolute must; I understand that it would work by exposing my value to all threads, which is not my intention.
Each thread may have a different cached version of done, created for performance; your counter thread is so busy incrementing count that it never gets a chance to reload done.
volatile ensures that any read/write goes through main memory and always updates the CPU cache copy.
Thread.sleep always pauses the current thread, so even with a value of 0 your counter thread is suspended for some time under 1 ms, which is enough for the thread to be advised of the change to done.
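Given the EDIT above saying you'd rather avoid volatile: reading and writing the flag only through synchronized accessors gives the same visibility guarantee. A minimal sketch, assuming the same structure as your Sample class (the class name Sample2 and the accessor names are just for illustration):

public class Sample2 {
    private static boolean done;

    // Both threads use these accessors, so each read/write synchronizes on
    // the same lock, establishing a happens-before edge for the update.
    private static synchronized boolean isDone() {
        return done;
    }

    private static synchronized void setDone(boolean value) {
        done = value;
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> {
            while (!isDone()) {
                // busy loop
            }
        }).start();
        Thread.sleep(2000);
        setDone(true); // the worker is now guaranteed to observe the change
        System.out.println("Done!");
    }
}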
I am no Java expert man, I don't even program in java, but let me try.
A thread on stackoverflow explains the Java Memory model: Are static variables shared between threads?
Important part: https://docs.oracle.com/javase/6/docs/api/java/util/concurrent/package-summary.html#MemoryVisibility
Chapter 17 of the Java Language Specification defines the happens-before relation on memory operations such as reads and writes of shared variables. The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation. The synchronized and volatile constructs, as well as the Thread.start() and Thread.join() methods, can form happens-before relationships.
If you go through that thread, it mentions the happens-before logic for threads that share a variable. So my guess is that when you call Thread.sleep(0), the main thread is able to set the done variable properly, making sure the write "happens first". Though, in a multi-threaded environment even that is not guaranteed; but since the piece of code is so small, it works in this case.
To sum it up, I just ran your program with a minor change to the variable "done" and the program worked as expected:
private static volatile boolean done;
Thank you. Maybe someone else can give you a better explanation :P
I'm looking at a code sample from "Java Concurrency in Practice" by Brian Goetz. He says that it is possible that this code will stay in an infinite loop because "the value of 'ready' might never become visible to the reader thread". I don't understand how this can happen...
public class NoVisibility {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready)
                Thread.yield();
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}
Because ready isn't marked as volatile and isn't changed within the while loop, its value may be cached at the start of the loop. It's one of the ways the JIT compiler optimizes the code.
So it's possible that the thread starts before ready = true, reads ready = false, caches that value thread-locally, and never reads it again.
Check out the volatile keyword.
The reason is explained in the section following the one with the code sample.
3.1.1 Stale data
NoVisibility demonstrated one of the ways that insufficiently synchronized programs can cause surprising results: stale data. When the reader thread examines ready, it may see an out-of-date value. Unless synchronization is used every time a variable is accessed, it is possible to see a stale value for that variable.
The Java Memory Model allows the JVM to optimize field accesses as if the program were single-threaded, unless the field is marked volatile or is accessed with a lock held (the story gets a bit more complicated with locks, actually).
In the example you provided, the JVM could infer that the ready field is not modified within the current thread, so it would replace !ready with false, causing an infinite loop. Marking the field as volatile would cause the JVM to check the field value every time (or at least ensure that changes to ready propagate to the running thread).
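A minimal sketch of that volatile fix, using the same class as in the question with only the declaration of ready changed:

public class NoVisibility {
    private static volatile boolean ready;   // volatile: the read can't be hoisted out of the loop
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready)
                Thread.yield();
            // The volatile read of ready also makes the earlier write to
            // number visible, so this is guaranteed to print 42, not 0.
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}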
The problem is rooted in the hardware -- each CPU has different behavior with respect to cache coherence, memory visibility, and reordering of operations. Java is in better shape here than C++ because it defines a cross-platform memory model that all programmers can count on. When Java runs on a system whose memory model is weaker than that required by the Java Memory Model, the JVM has to make up the difference.
Languages like C "inherit" the memory model of the underlying hardware. There is work afoot to give C++ a formal memory model so that C++ programs can mean the same thing on different platforms.
private static boolean ready;
private static int number;
The way the memory model can work is that each thread could be reading and writing to its own copy of these variables (the problem affects non-static member variables too). This is a consequence of the way the underlying architecture can work.
Jeremy Manson and Brian Goetz:
In multiprocessor systems, processors generally have one or more layers of memory cache,which improves performance both by speeding access to data (because the data is closer to the processor) and reducing traffic on the shared memory bus (because many memory operations can be satisfied by local caches.) Memory caches can improve performance tremendously, but they present a host of new challenges. What, for example, happens when two processors examine the same memory location at the same time? Under what conditions will they see the same value?
So, in your example, the two threads might run on different processors, each with a copy of ready in their own, separate caches. The Java language provides the volatile and synchronized mechanisms for ensuring that the values seen by the threads are in sync.
public class NoVisibility {
    private static boolean ready = false;
    private static int number;

    private static class ReaderThread extends Thread {
        @Override
        public void run() {
            while (!ready) {
                Thread.yield();
            }
            System.out.println(number);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new ReaderThread().start();
        number = 42;
        Thread.sleep(20000);
        ready = true;
    }
}
Place the Thread.sleep() call for 20 seconds and what will happen is that the JIT will kick in during those 20 seconds, optimize the check, and cache the value or remove the condition altogether. And so the code will fail on visibility.
To stop that from happening you MUST use volatile.
I came across the following example in the book 'Java Concurrency in Practice'.
public class NoVisibility {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready)
                Thread.yield();
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}
It's stated further that:
NoVisibility could loop forever because the value of ready might never become visible to the reader thread. Even more strangely, NoVisibility could print zero because the write to ready might be made visible to the reader thread before the write to number, a phenomenon known as reordering.
I can understand the reordering issue, but I am not able to comprehend the visibility issue. Why might the value of ready never become visible to the reader thread? Once the main thread writes the value to ready, sooner or later the reader thread will get its chance to run and can read that value. Why might the change made by the main thread to ready not be visible to the reader thread?
ReaderThread's run() method may never see the latest value of ready because the runtime is free to assume, and optimize for, the value not changing outside of its own thread. This assumption can be taken away by using the relevant concurrency features of the language, such as adding the keyword volatile to ready's declaration.
I believe this is a new problem that started happening with multi-core CPUs and separate CPU caches.
There would be no need to worry if you were actually reading and modifying main memory, and even with multiple CPUs you'd be safe, except that each CPU now has its own cache. The memory location gets cached, and the other thread may never see the change because it operates exclusively out of its own cache.
When you make it volatile, it forces both threads to go directly to memory every time, so it slows things down quite a bit, but it's thread-safe.
I found a piece of code where the thread seems to starve. Below is a simplified example. Is this an example of starvation? What is the reason the thread does not terminate?
Note: Changing the sleep to 1 will sometimes result in termination. The commented out Thread.yield() would solve the problem (for me).
public class Foo {
    public static boolean finished = false;

    public static void main(String[] args) {
        Runnable worker = new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                finished = true;
            }
        };
        new Thread(worker).start();
        while (!finished) {
            // Thread.yield();
        }
    }
}
You probably need to get informed on the Java Memory Model. Multithreading isn't just about interleaving the actions of threads; it is about the visibility of actions by one thread to another.
At the bottom of this issue lies the need for aggressive optimization in the face of concurrency: any mechanism which ensures memory coherency between threads is expensive, and much (most) of the data is not shared between threads. Therefore the data not explicitly marked volatile, or protected by locks, is treated as thread-local by default (without strict guarantees, of course).
In your case, finished is such a variable which is allowed to be treated as thread-local if it pleases the runtime. It does please it because the
while (!finished);
loop can be rewritten to just
if (!finished) while (true);
If you did any important work inside the loop, it would perform a bit better because the read of finished wouldn't be needlessly repeated, thus possibly destroying one whole CPU cache line.
The above discussion should be enough to answer your immediate question, "is this starvation": the reason the loop doesn't finish is not starvation, but the inability to see the write by the other thread.
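As an aside, since the main thread in this example does nothing but wait for the worker, the simplest correct rewrite is to join the worker thread: a thread's termination happens-before another thread's return from join() on it, so the write to finished is guaranteed to be visible afterwards, with no volatile needed. A minimal sketch:

public class Foo {
    public static boolean finished = false;

    public static void main(String[] args) throws InterruptedException {
        Thread workerThread = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                finished = true;
            }
        });
        workerThread.start();
        workerThread.join();          // blocks instead of spinning
        System.out.println(finished); // guaranteed to print true
    }
}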
There's no starvation here, because you're not doing any work. Starvation means various threads are trying to access the same, limited set of resources. What are the resources each thread is trying to access here? They're not "eating" anything, so they can't starve.