When is a non-volatile field written to main memory? - Java

Because done is non-volatile, I expected thread 1 to keep executing and printing "Done".
But when I run the program, here is the output from console
Done
Undo
This means that thread 2's update is seen by thread 1, right? (But done isn't a volatile field.)
My explanation is that thread 1 and thread 2 are running on the same core, so they can see each other's update of the field; please correct me if I'm wrong.
Overall, my question is: why can thread 1 see thread 2's change? Is this related to the CPU cache being written back (or through) to main memory? If so, when does that happen?
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Done {
    boolean done = true;

    public void m1() throws InterruptedException {
        while (this.done) {
            System.out.println("Done");
        }
        System.out.println("Undo");
    }

    public void undo() {
        done = false;
    }

    public static void main(String[] args) {
        ExecutorService es = Executors.newCachedThreadPool();
        Done v = new Done();
        es.submit(() -> {
            try {
                v.m1();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }); // thread 1
        es.submit(() -> {
            v.undo();
        }); // thread 2
        es.shutdown();
    }
}

The Java memory model's guarantees work in only one way. If something is guaranteed, like the visibility of a volatile write, then it'll work 100% of the time.
If there's no guarantee, it doesn't mean it'll never happen. Sometimes non-volatile writes will be seen by other threads. If you run this code many times on different machines with different JVMs, you'll probably see different results.
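For contrast, here is a minimal sketch (my own variation on the question's code, with the class renamed to DoneVolatile) in which done is declared volatile. The volatile write in undo() is then guaranteed to become visible to the looping thread, so "Undo" is always eventually printed.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DoneVolatile {
    // volatile: a write by one thread is guaranteed to be visible
    // to subsequent reads of this field by other threads
    volatile boolean done = true;

    public void m1() {
        while (this.done) {
            System.out.println("Done");
        }
        System.out.println("Undo"); // guaranteed to be reached once undo() has run
    }

    public void undo() {
        done = false;
    }

    public static void main(String[] args) {
        ExecutorService es = Executors.newCachedThreadPool();
        DoneVolatile v = new DoneVolatile();
        es.submit(v::m1);   // thread 1: reader
        es.submit(v::undo); // thread 2: writer
        es.shutdown();
    }
}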

Related

Why is the write to a non-volatile field visible to the main thread?

Imagine the following program.
class Main {
    static class Whatever {
        int x = 0;
    }

    public static void main(String[] args) {
        Whatever whatever = new Whatever();
        Thread t = new Thread(() -> {
            whatever.x = 1;
        });
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
        }
        System.out.println(whatever.x);
    }
}
The main-thread has cached whatever and x is set to 0. The other thread starts, caches whatever and sets the cached x to 1.
The output is
1
so the main-thread has seen the write. Why is that?
Why was the write done to the shared cache and why has the main-thread invalidated its cache to read from the shared cache? Why don't I need volatile here?
Because of the main thread joining on it. See
17.4.5 in the JLS:
All actions in a thread happen-before any other thread successfully returns from a join() on that thread.
By the way, it is true that the absence of a happens-before relationship does not necessarily mean a write won't be visible.
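As a rough illustration of the difference (a sketch, not the questioner's code): if the main thread merely sleeps instead of joining, there is no happens-before edge, and the final read is no longer guaranteed to see 1.

// Sketch only: without join() there is no happens-before edge to the main thread.
class MainNoJoin {
    static class Whatever {
        int x = 0;
    }

    public static void main(String[] args) throws InterruptedException {
        Whatever whatever = new Whatever();
        Thread t = new Thread(() -> whatever.x = 1);
        t.start();
        Thread.sleep(100);              // sleeping is NOT a synchronization action
        System.out.println(whatever.x); // may print 0 or 1; no guarantee either way
    }
}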

Real world example of Memory Consistency Errors in multi-threading?

The Java multi-threading tutorial gives an example of memory consistency errors, but I cannot reproduce it. Is there any other way to simulate a memory consistency error?
The example provided in the tutorial:
Suppose a simple int field is defined and initialized:
int counter = 0;
The counter field is shared between two threads, A and B. Suppose thread A increments counter:
counter++;
Then, shortly afterwards, thread B prints out counter:
System.out.println(counter);
If the two statements had been executed in the same thread, it would be safe to assume that the value printed out would be "1". But if the two statements are executed in separate threads, the value printed out might well be "0", because there's no guarantee that thread A's change to counter will be visible to thread B — unless the programmer has established a happens-before relationship between these two statements.
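As a minimal sketch of how such a happens-before relationship can be established (my own illustration, not from the tutorial): thread A performs the increment and then writes a volatile flag; once thread B reads the flag as true, it is guaranteed to also see the incremented counter.

class Publisher {
    static int counter = 0;
    static volatile boolean published = false;

    public static void main(String[] args) {
        Thread a = new Thread(() -> {
            counter++;          // plain write
            published = true;   // volatile write: happens-before a later volatile read
        });
        Thread b = new Thread(() -> {
            if (published) {                  // volatile read
                System.out.println(counter);  // must print 1, never 0
            }
        });
        a.start();
        b.start();
    }
}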
I answered a question a while ago about a bug in Java 5. Why doesn't volatile in java 5+ ensure visibility from another thread?
Given this piece of code:
public class Test {
    volatile static private int a;
    static private int b;

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 100; i++) {
            new Thread() {
                @Override
                public void run() {
                    int tt = b; // makes the jvm cache the value of b
                    while (a == 0) {
                    }
                    if (b == 0) {
                        System.out.println("error");
                    }
                }
            }.start();
        }
        b = 1;
        a = 1;
    }
}
The volatile store to a happens after the normal store to b, so when a thread sees a != 0, the rules defined in the JMM guarantee that it must also see b == 1.
The bug in the JRE allowed the thread to reach the error line; it was subsequently fixed. This can definitely fail if a is not declared volatile.
This might reproduce the problem; at least on my computer, I can reproduce it after a few iterations of the outer loop.
Suppose you have a Holder class:
class Holder {
    boolean flag = false;
    long modifyTime = Long.MAX_VALUE;
}
Let thread_A set flag to true and save the current time into modifyTime.
Let another thread, say thread_B, read the Holder's flag. If thread_B still gets false even though its check happens later than modifyTime, then we can say we have reproduced the problem.
Example code
class Holder {
    boolean flag = false;
    long modifyTime = Long.MAX_VALUE;
}

public class App {
    public static void main(String[] args) {
        while (!test());
    }

    private static boolean test() {
        final Holder holder = new Holder();
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(10);
                    holder.flag = true;
                    holder.modifyTime = System.currentTimeMillis();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();

        long lastCheckStartTime = 0L;
        long lastCheckFailTime = 0L;
        while (true) {
            lastCheckStartTime = System.currentTimeMillis();
            if (holder.flag) {
                break;
            } else {
                lastCheckFailTime = System.currentTimeMillis();
                System.out.println(lastCheckFailTime);
            }
        }

        if (lastCheckFailTime > holder.modifyTime
                && lastCheckStartTime > holder.modifyTime) {
            System.out.println("last check fail time " + lastCheckFailTime);
            System.out.println("modify time " + holder.modifyTime);
            return true;
        } else {
            return false;
        }
    }
}
Result
last check fail time 1565285999497
modify time 1565285999494
This means thread_B got false from the Holder's flag field at time 1565285999497, even though thread_A had set it to true at time 1565285999494 (3 milliseconds earlier).
The example used is a poor one for demonstrating the memory consistency issue: making it misbehave requires brittle reasoning and convoluted coding, and even then you may not see the effect. Multi-threading issues occur because of unlucky timing, so if you want to increase the chances of observing the issue, you need to increase the chances of unlucky timing.
The following program achieves that.
public class ConsistencyIssue {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter);
    }

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++)
                counter++;
        }
    }
}
Execution 1 output: 10963
Execution 2 output: 14552
The final count should have been 20000, but it is less than that. The reason is that counter++ is a multi-step operation:
1. read counter
2. increment the value
3. store it back
Two threads may both read, say, 1, increment it to 2, and both write back 2, whereas in a serial execution the sequence would have been 1 -> 2 -> 3.
We need a way to make all three steps atomic, i.e. executed by only one thread at a time.
Solution 1: synchronized
Surround the increment with a synchronized block. Since counter is a static variable, you need class-level synchronization:
@Override
public void run() {
    for (int i = 1; i <= 10000; i++)
        synchronized (ConsistencyIssue.class) {
            counter++;
        }
}
Now it outputs: 20000
Solution 2: AtomicInteger
import java.util.concurrent.atomic.AtomicInteger;

public class ConsistencyIssue {
    static AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter.get());
    }

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++)
                counter.incrementAndGet();
        }
    }
}
We could also do this with semaphores or explicit locking, but for this simple code an AtomicInteger is enough.
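For completeness, here is a sketch of the explicit-locking variant mentioned above (the class name ConsistencyIssueWithLock and the lock field are my own additions, not part of the original code):

import java.util.concurrent.locks.ReentrantLock;

public class ConsistencyIssueWithLock {
    static int counter = 0;
    static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter); // 20000
    }

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++) {
                lock.lock();       // acquire before the compound update
                try {
                    counter++;
                } finally {
                    lock.unlock(); // always release, even if an exception occurs
                }
            }
        }
    }
}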
Sometimes when I try to reproduce real concurrency problems, I use the debugger.
Set a breakpoint on the print and a breakpoint on the increment, then run the whole thing.
Releasing the breakpoints in different sequences gives different results.
Maybe too simple, but it worked for me.
Please have another look at how the example is introduced in your source.
The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement. To see this, consider the following example.
This example illustrates the fact that multi-threading is not deterministic, in the sense that you get no guarantee about the order in which operations of different threads will be executed, which might result in different observations across several runs. But it does not illustrate a memory consistency error!
To understand what a memory consistency error is, you need to first get an insight about memory consistency. The simplest model of memory consistency has been introduced by Lamport in 1979. Here is the original definition.
The result of any execution is the same as if the operations of all the processes were executed in some sequential order and the operations of each individual process appear in this sequence in the order specified by its program
Now, for an example multi-threaded program, have a look at the illustration in a more recent research paper about sequential consistency (see the sources below); it shows what a real memory consistency error can look like.
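As a rough Java sketch of that kind of violation (my own code, based on the classic store-buffering pattern, not taken from the paper): under sequential consistency at least one of the two reads must observe the other thread's write, yet the Java memory model allows both reads to return 0 when x and y are not volatile.

public class StoreBuffering {
    static int x = 0, y = 0; // deliberately non-volatile
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> { x = 1; r1 = y; });
        Thread b = new Thread(() -> { y = 1; r2 = x; });
        a.start(); b.start();
        a.join();  b.join();
        // Under sequential consistency, r1 == 0 && r2 == 0 is impossible.
        // The JMM permits it for this racy code, although a single run of
        // this sketch will rarely show it (thread startup dominates the timing).
        System.out.println("r1=" + r1 + ", r2=" + r2);
    }
}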
To finally answer your question, please note the following points:
A memory consistency error always depends on the underlying memory model (a particular programming language may allow more behaviours for optimization purposes). What the best memory model is remains an open research question.
The example above shows a sequential consistency violation, but there is no guarantee that you can observe it with your favorite programming language, for two reasons: it depends on that language's exact memory model, and because of nondeterminism you have no way to force a particular incorrect execution.
Memory models are a wide topic. For more information, you can for example have a look at Torsten Hoefler and Markus Püschel's course at ETH Zürich, from which I learned most of these concepts.
Sources
Leslie Lamport. How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs, 1979
Wei-Yu Chen, Arvind Krishnamurthy, Katherine Yelick, Polynomial-Time Algorithms for Enforcing Sequential Consistency in SPMD Programs with Arrays, 2003
Design of Parallel and High-Performance Computing course, ETH Zürich

Java Thread seemingly skipping conditional statement [duplicate]

This question already has answers here:
Why doesnt this Java loop in a thread work?
For a library I have been writing recently, I wrote a thread that loops indefinitely. In the loop, I start with a conditional statement that checks a property of the threaded object. However, it seems that whatever initial value the property has is what the check keeps seeing, even after the property is updated, unless I add some kind of interruption such as Thread.sleep or a print statement.
I'm not really sure how to phrase the question, otherwise I would be looking in the Java documentation. I have boiled the code down to a minimal example that demonstrates the problem in simple terms.
public class App {
    public static void main(String[] args) {
        App app = new App();
    }

    class Test implements Runnable {
        public boolean flag = false;

        public void run() {
            while (true) {
                // try {
                //     Thread.sleep(1);
                // } catch (InterruptedException e) {}
                if (this.flag) {
                    System.out.println("True");
                }
            }
        }
    }

    public App() {
        Test t = new Test();
        Thread thread = new Thread(t);
        System.out.println("Starting thread");
        thread.start();
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {}
        t.flag = true;
        System.out.println("New flag value: " + t.flag);
    }
}
Now, I would presume that after we change the value of the flag property on the running thread, we would immediately see masses of 'True' being printed to the terminal. However, we don't.
If I uncomment the Thread.sleep lines inside the thread's loop, the program works as expected and we see the many lines of 'True' being printed after we change the value in the App object. In addition, any print call in place of the Thread.sleep also works, but some simple assignment code does not; I assume that is because it is removed as unused code at compile time.
So, my question is really: Why do I have to use some kind of interruption to get the thread to check conditions correctly?
So, my question is really: Why do I have to use some kind of interruption to get the thread to check conditions correctly?
Well you don't have to. There are at least two ways to implement this particular example without using "interruption".
If you declare flag to be volatile, then it will work.
It will also work if you declare flag to be private, write synchronized getter and setter methods, and use those for all accesses.
public class App {
    public static void main(String[] args) {
        App app = new App();
    }

    class Test implements Runnable {
        private boolean flag = false;

        public synchronized boolean getFlag() {
            return this.flag;
        }

        public synchronized void setFlag(boolean flag) {
            this.flag = flag;
        }

        public void run() {
            while (true) {
                if (this.getFlag()) { // Must use the getter here too!
                    System.out.println("True");
                }
            }
        }
    }

    public App() {
        Test t = new Test();
        Thread thread = new Thread(t);
        System.out.println("Starting thread");
        thread.start();
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {}
        t.setFlag(true);
        System.out.println("New flag value: " + t.getFlag());
    }
}
But why do you need to do this?
Because unless you use either a volatile or synchronized (and you use synchronized correctly) then one thread is not guaranteed to see memory changes made by another thread.
In your example, the child thread does not see the up-to-date value of flag. (It is not that the conditions themselves are incorrect or "don't work". They are actually getting stale inputs. This is "garbage in, garbage out".)
The Java Language Specification sets out precisely the conditions under which one thread is guaranteed to see (previous) writes made by another thread. This part of the spec is called the Java Memory Model, and it is in JLS 17.4. There is a more easy to understand explanation in Java Concurrency in Practice by Brian Goetz et al.
Note that the unexpected behavior could be due to the JIT compiler deciding to keep flag in a register. It could also be that the JIT compiler has decided it does not need to force memory cache write-through, etcetera. (The JIT compiler doesn't want to force write-through on every write to every field. That would be a major performance hit on multi-core systems ... which most modern machines are.)
The Java interruption mechanism is yet another way to deal with this. You don't need any additional synchronization, because the interruption-related methods take care of that internally. In addition, interruption works when the thread you are trying to interrupt is currently waiting or blocked on an interruptible operation; e.g. in an Object::wait call.
Because the variable is not modified in that thread, the JVM is free to effectively optimize the check away. To force an actual check, use the volatile keyword:
public volatile boolean flag = false;
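An alternative worth mentioning (my suggestion, not part of the original answer) is an AtomicBoolean, which provides the same visibility guarantee as volatile plus atomic compound updates:

import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: AtomicBoolean gives the same visibility as a volatile boolean,
// and additionally supports atomic read-modify-write operations.
class Test implements Runnable {
    private final AtomicBoolean flag = new AtomicBoolean(false);

    public void setFlag(boolean value) {
        flag.set(value);
    }

    @Override
    public void run() {
        while (true) {
            if (flag.get()) {
                System.out.println("True");
            }
        }
    }
}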

Java: data visibility when Thread.sleep() is invoked

Look at this code:
public class VolatileTest {
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread() {
            @Override
            public void run() {
                ready = true;
                System.out.println("t2 thread should stop!");
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };
        Thread t2 = new Thread() {
            @Override
            public void run() {
                while (!ready) {
                    System.out.println("invoking..");
                }
                System.out.println("I was finished");
            }
        };
        t1.start();
        t2.start();
    }
}
I think the result of this code might be:
t2 thread should stop!
invoking..
I was finished
because in multi-threading, when t1 modifies the 'ready' variable to true, I then make t1 sleep. At that moment, I would think, 'ready' still appears to be false to t2, because t1 has not stopped and the change made in t1 is not visible to t2.
But in fact, I have tested many times, and the result is always as shown above.
Is my idea wrong?
First of all, despite calling your class VolatileTest, you are not actually using volatile anywhere in your code.
Since the ready variable is not declared as volatile AND you are accessing it without any explicit synchronization, the behavior is not specified. Specifically, the JLS does not say whether the assignment made in thread 1 to the ready variable will be visible within thread 2.
Indeed, there is not even guaranteed that the run() method for thread 1 will be called before the run() method for thread 2.
Now it seems that your code (as written!) is behaving in a way that is consistent with the write of true always being visible immediately. However, there is no guarantee that that "always" is actually always, or that this will be the case on every Java platform.
I would not be surprised if the syscall associated with sleep is triggering memory cache flushing before the second thread is scheduled. That would be sufficient to cause consistent behavior. Moreover, there is likely to be serendipitous synchronization [1] due to the println calls. However, these are not effects you should ever rely on.
[1] Somewhere in the output stream stack for System.out, the println call is likely to synchronize on the stream's shared data structures. Depending on the ordering of the events, this can have the effect of inserting a happens-before relationship between the write and read events.
As I mentioned in my comment, there are no guarantees. ("There is no guarantee what value thread t2 will see for ready, because of improper synchronization in your code. It could be true, it could be false. In your case, t2 saw true. That is consistent with "there is no guarantee what value t2 will see")
You can easily get your test to fail by running it multiple times.
When I run the code below, which performs your test 100 times, I always get 14-22 "notReadies", so in 14-22% of the cases thread t2 does not see the change to ready.
public class NonVolatileTest {
    private static boolean ready = false;
    private static volatile int notReadies = 0;

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100; i++) {
            ready = false;
            // Copy original Thread 1 code from the OP here
            Thread t2 = new Thread() {
                @Override
                public void run() {
                    if (!ready) {
                        notReadies++;
                    }
                    while (!ready) {
                        System.out.println("invoking..");
                    }
                    System.out.println("I was finished");
                }
            };
            t1.start();
            t2.start();
            // To reduce total test run time, reduce the sleep in t1 to a
            // more suitable value like "100" instead of "5000".
            t1.join();
            t2.join();
        }
        System.out.println("Notreadies: " + notReadies);
    }
}

Getting unexpected values using synchronized

I have a simple snippet of code and tried to experiment a little with it, but in the following code the order of the output data is unclear to me:
import java.io.IOException;

public class Main {
    static int n = 100;

    public static synchronized int decreaseValue() {
        return --n;
    }

    public static void main(String[] args) throws InterruptedException, IOException {
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    try {
                        System.out.println("Thread1: " + decreaseValue());
                        Thread.sleep(2000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        }, "Thread1");
        t1.start();

        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    try {
                        System.out.println("Thread2: " + decreaseValue());
                        Thread.sleep(2000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        }, "Thread2");
        t2.start();

        while (true) {
            try {
                System.out.println("Main Thread: " + decreaseValue());
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
I cannot understand why I get the values in the following order:
Thread1: 89
Thread2: 90
Main Thread: 88
PLEASE PAY ATTENTION TO THE N VALUE, NOT TO THE ORDER IN WHICH THE THREADS ARE CALLED:
Thread1: 99
Thread2: 98
Main Thread: 97
Main Thread: 95
Thread2: 94
Thread1: 96
Main Thread: 92
You must have read somewhere that synchronized must be used to ensure proper ordering, or something similar. The word "ordering" pertains to a different concept from the one you have in mind: it means that there will always be some definite ordering to the execution of synchronized blocks. The ordering is not known in advance, but it will be there every time. Without synchronized, you won't even get that guarantee: one thread could perceive one order, another thread a different order, or not perceive any actions by other threads at all.
About your edit:
If you are concerned about printouts happening out of order, this is because your println statements are outside of synchronized and so can interleave independently of the calls to decreaseValue.
Threads run in parallel; you can't predict their execution order. You can set thread priorities, which influence scheduling, but they do not guarantee any particular order.
The threads run concurrently and nothing imposes the order in which they call your decreaseValue() function. You might expect them to alternate neatly since they sleep for the same amount of time, but as soon as a thread starts or resumes from a sleep, the CPU places it in a run queue (creating an execution order, which is the order that synchronized will respect); because of this, the order of your printing depends on how the CPU queues the threads.
The printing to the console is also synchronized (but not in the same block as your decreaseValue), in case you are questioning the printing order; the same logic applies to printing as to decrementing the value.
If you would like to see the prints in the same order in which the value was decremented, you can move the print into the decreaseValue() function, as shown below. This will not affect the order in which the values are decremented.
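A sketch of that suggestion (the message format is my own): printing while still holding the lock ties each printed value to the decrement that produced it, although the scheduler still decides which thread decrements next.

public static synchronized int decreaseValue() {
    int value = --n;
    // Printed inside the synchronized method, so the printed value always
    // matches the decrement performed by the calling thread.
    System.out.println(Thread.currentThread().getName() + ": " + value);
    return value;
}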
