This code should produce both even and odd output because none of the methods is synchronized. Yet the output on my JVM is always even. I am really confused, as this example comes straight out of Doug Lea.
public class TestMethod implements Runnable {

    private int index = 0;

    public void testThisMethod() {
        index++;
        index++;
        System.out.println(Thread.currentThread().toString() + " " + index);
    }

    public void run() {
        while (true) {
            this.testThisMethod();
        }
    }

    public static void main(String args[]) {
        int i = 0;
        TestMethod method = new TestMethod();
        while (i < 20) {
            new Thread(method).start();
            i++;
        }
    }
}
Output
Thread[Thread-8,5,main] 135134
Thread[Thread-8,5,main] 135136
Thread[Thread-8,5,main] 135138
Thread[Thread-8,5,main] 135140
Thread[Thread-8,5,main] 135142
Thread[Thread-8,5,main] 135144
I tried with volatile and got the following (with an if to print only if odd):
Thread[Thread-12,5,main] 122229779
Thread[Thread-12,5,main] 122229781
Thread[Thread-12,5,main] 122229783
Thread[Thread-12,5,main] 122229785
Thread[Thread-12,5,main] 122229787
Answer to comments:
The index is in fact shared: there is only one TestMethod instance, but many threads call testThisMethod() on that single instance.
Code (no changes besides the ones mentioned above):
public class TestMethod implements Runnable {

    private volatile int index = 0;

    public void testThisMethod() {
        index++;
        index++;
        if (index % 2 != 0) {
            System.out.println(Thread.currentThread().toString() + " " + index);
        }
    }

    public void run() {
        while (true) {
            this.testThisMethod();
        }
    }

    public static void main(String args[]) {
        int i = 0;
        TestMethod method = new TestMethod();
        while (i < 20) {
            new Thread(method).start();
            i++;
        }
    }
}
First of all: as others have noted, there is no guarantee at all that your threads get preempted between the two increment operations.
Note that printing to System.out very likely forces some synchronization between your threads, so a thread has probably just started a fresh time slice when it returns from the print; it will then usually complete both increments before waiting again on the shared System.out resource.
Try replacing the System.out.println() with something like this:
int snapshot = index;
if (snapshot % 2 != 0) {
    System.out.println("Oh noes! " + snapshot);
}
You don't know that. The point of automatic scheduling is that it makes no guarantees. It might treat two threads that run the same code completely different. Or completely the same. Or completely the same for an hour and then suddenly different...
The point is, even if you fix the problems mentioned in the other answers, you still cannot rely on things coming out a particular way; you must always be prepared for any possible interleaving that the Java memory and threading model allows, and that includes the possibility that the println always happens after an even number of increments, even if that seems unlikely to you on the face of it.
The result is exactly as I would expect. index is being incremented twice between outputs, and there is no interaction between threads.
To turn the question around - why would you expect odd outputs?
EDIT: Whoops. I wrongly assumed a new runnable was being created per Thread, and therefore there was a distinct index per thread, rather than shared. Disturbing how such a flawed answer got 3 upvotes though...
You have not marked index as volatile. This means the compiler is allowed to optimize accesses to it, and it probably merges your two increments into a single addition.
You get the output of the very first thread you start, because that thread loops and gives other threads no chance to run.
So you should call Thread.sleep() or (not recommended) Thread.yield() in the loop, as sketched below.
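A minimal sketch of that suggestion, based on the TestMethod class from the question (the 1 ms sleep is just a guess at a value that provokes a context switch):

public void run() {
    while (true) {
        this.testThisMethod();
        try {
            Thread.sleep(1); // give other threads a chance to run between calls
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag and stop
            return;
        }
    }
}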
Related
The Java multithreading tutorial gives an example of memory consistency errors, but I cannot reproduce it. Is there any other way to simulate memory consistency errors?
The example provided in the tutorial:
Suppose a simple int field is defined and initialized:
int counter = 0;
The counter field is shared between two threads, A and B. Suppose thread A increments counter:
counter++;
Then, shortly afterwards, thread B prints out counter:
System.out.println(counter);
If the two statements had been executed in the same thread, it would be safe to assume that the value printed out would be "1". But if the two statements are executed in separate threads, the value printed out might well be "0", because there's no guarantee that thread A's change to counter will be visible to thread B — unless the programmer has established a happens-before relationship between these two statements.
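For reference, one way to establish such a happens-before relationship is to join() on the writing thread before reading; the sketch below is my own, not part of the tutorial. Thread.join() guarantees that all of thread A's writes are visible to the joining thread afterwards:

public class HappensBefore {
    static int counter = 0; // shared field, deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> counter++); // thread A increments
        a.start();
        a.join(); // join() establishes happens-before: A's write is now visible
        System.out.println(counter); // guaranteed to print 1
    }
}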
I answered a question a while ago about a bug in Java 5. Why doesn't volatile in java 5+ ensure visibility from another thread?
Given this piece of code:
public class Test {
    volatile static private int a;
    static private int b;

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 100; i++) {
            new Thread() {
                @Override
                public void run() {
                    int tt = b; // makes the JVM cache the value of b
                    while (a == 0) {
                    }
                    if (b == 0) {
                        System.out.println("error");
                    }
                }
            }.start();
        }
        b = 1;
        a = 1;
    }
}
The volatile store of a happens after the normal store of b. So when a thread sees a != 0, the rules defined in the JMM require that it also sees b == 1.
The bug in the JRE allowed the thread to reach the error line; it was subsequently fixed. This check can definitely fail if a is not declared volatile.
This might reproduce the problem; at least on my computer, I can reproduce it after a few loops.
Suppose you have a Holder class:
class Holder {
    boolean flag = false;
    long modifyTime = Long.MAX_VALUE;
}
Let thread_A set flag to true and save the current time into modifyTime.
Let another thread, say thread_B, read the Holder's flag. If thread_B still gets false even at a point in time later than modifyTime, then we can say we have reproduced the problem.
Example code
class Holder {
    boolean flag = false;
    long modifyTime = Long.MAX_VALUE;
}

public class App {

    public static void main(String[] args) {
        while (!test());
    }

    private static boolean test() {
        final Holder holder = new Holder();
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(10);
                    holder.flag = true;
                    holder.modifyTime = System.currentTimeMillis();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();

        long lastCheckStartTime = 0L;
        long lastCheckFailTime = 0L;
        while (true) {
            lastCheckStartTime = System.currentTimeMillis();
            if (holder.flag) {
                break;
            } else {
                lastCheckFailTime = System.currentTimeMillis();
                System.out.println(lastCheckFailTime);
            }
        }

        if (lastCheckFailTime > holder.modifyTime
                && lastCheckStartTime > holder.modifyTime) {
            System.out.println("last check fail time " + lastCheckFailTime);
            System.out.println("modify time " + holder.modifyTime);
            return true;
        } else {
            return false;
        }
    }
}
Result
last check fail time 1565285999497
modify time 1565285999494
This means thread_B read false from the Holder's flag field at time 1565285999497, even though thread_A had set it to true at time 1565285999494 (3 milliseconds earlier).
The example in the tutorial is poorly suited to demonstrating the memory consistency issue: making it work requires brittle reasoning and complicated code, and you still may not see the effect. Multithreading issues occur due to unlucky timing, so if we want to increase the chances of observing a problem, we need to increase the chances of unlucky timing.
The following program achieves that.
public class ConsistencyIssue {

    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter);
    }

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++)
                counter++;
        }
    }
}
Execution 1 output: 10963,
Execution 2 output: 14552
The final count should have been 20000, but it is less than that. The reason is that counter++ is a multi-step operation:
1. read counter
2. increment the value
3. store it back
Two threads may both read, say, the value 1 at the same time, increment it to 2, and both write 2 back. In a serial execution the sequence would instead have been 1 -> 2 -> 3.
We need a way to make all three steps atomic, i.e. executed by only one thread at a time.
Solution 1: Synchronized
Surround the increment with a synchronized block. Since counter is a static variable, you need class-level synchronization:
@Override
public void run() {
    for (int i = 1; i <= 10000; i++)
        synchronized (ConsistencyIssue.class) {
            counter++;
        }
}
Now it outputs: 20000
Solution 2: AtomicInteger
import java.util.concurrent.atomic.AtomicInteger;

public class ConsistencyIssue {

    static AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter.get());
    }

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++)
                counter.incrementAndGet();
        }
    }
}
We could also do this with semaphores or explicit locking (see the sketch below), but for this simple code AtomicInteger is enough.
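For completeness, here is a sketch of the explicit-locking variant using java.util.concurrent.locks.ReentrantLock (the class name and structure are mine, adapted from the code above); it produces the same 20000:

import java.util.concurrent.locks.ReentrantLock;

public class ConsistencyIssueWithLock {

    static int counter = 0;
    static final ReentrantLock lock = new ReentrantLock();

    private static class Increment implements Runnable {
        @Override
        public void run() {
            for (int i = 1; i <= 10000; i++) {
                lock.lock();       // only one thread at a time past this point
                try {
                    counter++;
                } finally {
                    lock.unlock(); // always release, even if an exception is thrown
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new Increment(), "Thread-1");
        Thread thread2 = new Thread(new Increment(), "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter); // 20000
    }
}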
Sometimes when I try to reproduce some real concurrency problems, I use the debugger.
Make a breakpoint on the print and a breakpoint on the increment and run the whole thing.
Releasing the breakpoints in different sequences gives different results.
Maybe too simple, but it worked for me.
Please have another look at how the example is introduced in your source.
The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement. To see this, consider the following example.
This example illustrates the fact that multi-threading is not deterministic, in the sense that you get no guarantee about the order in which operations of different threads will be executed, which might result in different observations across several runs. But it does not illustrate a memory consistency error!
To understand what a memory consistency error is, you first need some insight into memory consistency itself. The simplest model of memory consistency was introduced by Lamport in 1979. Here is the original definition:
The result of any execution is the same as if the operations of all the processes were executed in some sequential order and the operations of each individual process appear in this sequence in the order specified by its program
Now, for an example of a multi-threaded program exhibiting such an error, have a look at the figure in the more recent research paper about sequential consistency cited below. It illustrates what a real memory consistency error might look like; a concrete sketch follows.
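As an illustration in Java (my own sketch, not the figure from that paper), the classic "store buffering" test shows what a sequential consistency violation looks like: under sequential consistency at least one of the two reads must observe 1, yet with plain (non-volatile) Java fields the outcome r1 == 0 and r2 == 0 is allowed by the memory model.

public class StoreBuffering {

    static int x = 0, y = 0;   // deliberately not volatile
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; r1 = y; });
        Thread t2 = new Thread(() -> { y = 1; r2 = x; });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Under sequential consistency, r1 == 0 && r2 == 0 is impossible.
        // Under the relaxed Java memory model it is permitted, although you may
        // need many runs (or a tool such as jcstress) to actually observe it.
        System.out.println("r1=" + r1 + " r2=" + r2);
    }
}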
To finally answer your question, please note the following points:
A memory consistency error always depends on the underlying memory model (a particular programming language may allow more behaviours for optimization purposes). What the best memory model is remains an open research question.
The example above shows a violation of sequential consistency, but there is no guarantee that you can observe it in your favorite programming language, for two reasons: it depends on the language's exact memory model, and, due to nondeterminism, you have no way to force a particular incorrect execution.
Memory models are a wide topic. For more information, you can for example have a look at Torsten Hoefler and Markus Püschel's course at ETH Zürich, from which I learned most of these concepts.
Sources
Leslie Lamport. How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs, 1979
Wei-Yu Chen, Arvind Krishnamurthy, Katherine Yelick, Polynomial-Time Algorithms for Enforcing Sequential Consistency in SPMD Programs with Arrays, 2003
Design of Parallel and High-Performance Computing course, ETH Zürich
I'm learning about multithreaded counters and I'm wondering why, no matter how many times I run the code, it produces the right result.
public class MainClass {
    public static void main(String[] args) {
        Counter counter = new Counter();
        for (int i = 0; i < 3; i++) {
            CounterThread thread = new CounterThread(counter);
            thread.start();
        }
    }
}

public class CounterThread extends Thread {
    private Counter counter;

    public CounterThread(Counter counter) {
        this.counter = counter;
    }

    public void run() {
        for (int i = 0; i < 10; i++) {
            this.counter.add();
        }
        this.counter.print();
    }
}

public class Counter {
    private int count = 0;

    public void add() {
        this.count = this.count + 1;
    }

    public void print() {
        System.out.println(this.count);
    }
}
And this is the result
10
20
30
Not sure if this is just a fluke or whether this is expected. I thought the result was going to be
10
10
10
Try increasing the loop count from 10 to 10000 and you'll likely see some differences in the output.
The most logical explanation is that with only 10 additions, each thread finishes before the next thread even gets started, so each one simply adds on top of the previous result. A sketch that makes the race much more visible follows.
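A sketch along those lines (the loop count of 10000 and the CountDownLatch are my additions; Counter is the class from the question). The latch makes all threads start counting at roughly the same time, which makes lost updates far more likely:

import java.util.concurrent.CountDownLatch;

public class RacyCounterThread extends Thread {
    private final Counter counter;           // the Counter class from the question
    private final CountDownLatch startSignal;

    public RacyCounterThread(Counter counter, CountDownLatch startSignal) {
        this.counter = counter;
        this.startSignal = startSignal;
    }

    public void run() {
        try {
            startSignal.await();             // wait until main releases all threads at once
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
        for (int i = 0; i < 10000; i++) {    // 10000 instead of 10
            counter.add();
        }
        counter.print();                     // the last value printed is now usually < 30000
    }
}

In main you would create a single new CountDownLatch(1), pass it to all three threads, start them, and then call countDown() once to release them together.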
I'm learning multithreaded counter and I'm wondering why no matter how many times I ran the code it produces the right result.
tl;dr: Check out @manouti's answer.
Even though you are sharing the same unsynchronized Counter object, there are a couple of things that cause your three threads to run (or appear to run) serially and with their data effectively synchronized. I had to work hard on my 8-processor Intel Linux box to get it to show any interleaving.
When threads start and when they finish, memory barriers are crossed. According to the Java Memory Model, the guarantee is that the thread doing the thread.join() will see the results published by the finished thread, but I suspect a central memory flush happens when a thread finishes. This means that if the threads run serially (and with such a small loop it is hard for them not to), they act as if there were no concurrency at all, because they see each other's changes to the Counter.
Putting a Thread.sleep(100); at the front of the thread's run() method keeps the threads from running serially. It also, hopefully, causes the threads to cache the Counter and not see the results published by other threads that have already finished. Even that still needed help, though.
Starting the threads in a loop after they all have been instantiated helps concurrency.
Another thing that causes synchronization is:
System.out.println(this.count);
System.out is a PrintStream, which is a synchronized class. Every time a thread calls println(...) it publishes its results to central memory. If you instead recorded the value and displayed it later, it might show better interleaving.
I really wonder if some Java compiler inlining of the Counter class at some point is causing part of the artificial synchronization. For example, I'm really surprised that a Thread.sleep(1000) at the front and end of the thread.run() method doesn't show 10,10,10.
It should be noted that on a non-intel architecture, with different memory and/or thread models, this might be easier to reproduce.
Oh, as commentary and apropos of nothing, typically it is recommended to implement Runnable instead of extending Thread.
So the following are my tweaks to your test program.
public class CounterThread extends Thread {
    private Counter counter;
    int result;
    ...

    public void run() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e1) {
            Thread.currentThread().interrupt(); // good pattern
            return;
        }
        for (int i = 0; i < 10; i++) {
            counter.add();
        }
        result = counter.count;
        // no print here
    }
}
Then your main could do something like:
Counter counter = new Counter();
List<CounterThread> counterThreads = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    counterThreads.add(new CounterThread(counter));
}
// start in a loop after constructing them all, which improves the overlap chances
for (CounterThread counterThread : counterThreads) {
    counterThread.start();
}
// wait for them to finish
for (CounterThread counterThread : counterThreads) {
    counterThread.join();
}
// print the results
for (CounterThread counterThread : counterThreads) {
    System.out.println(counterThread.result);
}
Even with this, I never see 10,10,10 output on my box and I often see 10,20,30. Closest I get is 12,12,12.
Shows you how hard it is to properly test a threaded program. Believe me, if this code were in production and you were relying on that "free" synchronization, that is exactly when it would fail you. ;-)
Whenever I run this program it gives me a different result. Can someone explain this to me, or point me to some topics where I could find an answer, so I can understand what happens in the code?
class IntCell {
    private int n = 0;
    public int getN() { return n; }
    public void setN(int n) { this.n = n; }
}

public class Count extends Thread {
    static IntCell n = new IntCell();

    public void run() {
        int temp;
        for (int i = 0; i < 200000; i++) {
            temp = n.getN();
            n.setN(temp + 1);
        }
    }

    public static void main(String[] args) {
        Count p = new Count();
        Count q = new Count();
        p.start();
        q.start();
        try { p.join(); q.join(); }
        catch (InterruptedException e) { }
        System.out.println("The value of n is " + n.getN());
    }
}
The reason is simple: you don't get and modify your counter atomically, so your code is prone to race conditions.
Here is an example that illustrates the problem:
Thread #1 calls n.getN() gets 0
Thread #2 calls n.getN() gets 0
Thread #1 calls n.setN(1) to set n to 1
Thread #2 is not aware that thread #1 has already set n to 1, so it still calls n.setN(1), setting n to 1 instead of 2 as you would expect; this is called a race condition.
Your final result then depends on how many race conditions occurred while executing your code, which is unpredictable, so it changes from one run to another.
One way to fix it is to get and set the counter inside a synchronized block so that the two operations happen atomically, as shown next; this forces each thread to acquire an exclusive lock on the IntCell instance assigned to n before it can execute that section of code.
synchronized (n) {
    temp = n.getN();
    n.setN(temp + 1);
}
Output:
The value of n is 400000
You could also consider using AtomicInteger instead of int for your counter, so you can rely on methods such as addAndGet(int delta) or incrementAndGet() to increment it atomically, as sketched below.
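A sketch of what that could look like here (the class and method names are mine):

import java.util.concurrent.atomic.AtomicInteger;

class AtomicIntCell {
    private final AtomicInteger n = new AtomicInteger(0);

    public int getN() { return n.get(); }

    public void increment() { n.incrementAndGet(); } // atomic read-modify-write
}

In run() you would then call n.increment() instead of the separate getN()/setN() pair, and the program reliably prints 400000.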
Access to the static variable IntCell n is concurrent between your two threads:
static IntCell n = new IntCell();

public void run() {
    int temp;
    for (int i = 0; i < 200000; i++) {
        temp = n.getN();
        n.setN(temp + 1);
    }
}
Because of this race condition, you cannot get predictable behavior when n.setN(temp + 1); is performed, as the result depends on which thread last executed temp = n.getN();. If it was the current thread, you get the value that thread stored; otherwise you get the last value stored by the other thread.
You could add a synchronization mechanism to avoid this unexpected behavior.
You are running two threads in parallel and updating a shared variable from both of them; that is why your result is different each time. It is not good practice to update a shared variable like this without synchronization.
To understand what is happening, you should first study the basics of multithreading and then wait/notify, starting with simple cases.
You modify the same number n with two concurrent threads. If Thread1 reads n = 2, and Thread2 also reads n = 2 before Thread1 has written its increment, then Thread1 will increment n to 3, but Thread2 will not add anything further; it will simply write another 3 to n. If Thread1 finishes its increment before Thread2 reads, both increments take effect.
Both threads run concurrently and you can never tell which one gets which CPU cycle; that depends on what else is running on your machine. So you always lose a different number of increments to the overwriting situation described above.
To solve it, the read-increment-write sequence must be made atomic, for example with a synchronized block (as shown in the other answer) or an AtomicInteger; note that even a plain n++ on a shared field is not atomic.
I'm trying to understand the difference in behaviour between an ArrayList and a Vector. Does the following snippet in any way illustrate the difference in synchronization? The output for the ArrayList (f1) is unpredictable, while the output for the Vector (f2) is predictable. I think it may just be luck that f2 has predictable output, because modifying f2 slightly to make the thread sleep for even a millisecond (f3) results in an empty vector! What's causing that?
import java.util.ArrayList;
import java.util.Vector;

public class D implements Runnable {
    ArrayList<Integer> al;
    Vector<Integer> vl;

    public D(ArrayList al_, Vector vl_) {
        al = al_;
        vl = vl_;
    }

    public void run() {
        if (al.size() < 20)
            f1();
        else
            f2();
    } // 1

    public void f1() {
        if (al.size() == 0)
            al.add(0);
        else
            al.add(al.get(al.size() - 1) + 1);
    }

    public void f2() {
        if (vl.size() == 0)
            vl.add(0);
        else
            vl.add(vl.get(vl.size() - 1) + 1);
    }

    public void f3() {
        if (vl.size() == 0) {
            try {
                Thread.sleep(1);
                vl.add(0);
            } catch (InterruptedException e) {
                System.out.println(e.getMessage());
            }
        } else {
            vl.add(vl.get(vl.size() - 1) + 1);
        }
    }

    public static void main(String... args) {
        Vector<Integer> vl = new Vector<Integer>(20);
        ArrayList<Integer> al = new ArrayList<Integer>(20);
        for (int i = 1; i < 40; i++) {
            new Thread(new D(al, vl), Integer.toString(i)).start();
        }
    }
}
To answer the question: yes, Vector is synchronized. This means that concurrent operations on the data structure itself won't lead to unexpected behavior (e.g. NullPointerExceptions or the like). Individual calls like size() are therefore perfectly safe with a Vector in concurrent situations, but not with an ArrayList (note that if there are only read accesses, ArrayLists are safe too; we get into trouble as soon as at least one thread writes to the data structure, e.g. via add/remove).
The problem is that this low-level synchronization is basically useless, and your code already demonstrates this.
if (al.size() == 0)
    al.add(0);
else
    al.add(al.get(al.size() - 1) + 1);
What you want here is to add a number to your data structure depending on its current size (i.e. if N threads execute this, in the end we'd want the list to contain the numbers [0..N)). Sadly, that does not work:
Assume that 2 threads execute this code sample concurrently on an empty list/vector. The following timeline is quite possible:
T1: size() # go to true branch of if
T2: size() # alas we again take the true branch.
T1: add(0)
T2: add(0) # ouch
Both execute size() and get back the value 0. They then both take the true branch of the if, and both add 0 to the data structure. That's not what you want.
Hence you'll have to synchronize in your business logic anyway, to make sure that size() and add() execute atomically; a sketch follows below. That makes Vector's own synchronization quite useless in almost any scenario (contrary to some claims, on modern JVMs the performance hit of an uncontended lock is negligible; the Collections API is also much nicer, so why not use it).
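For example, a sketch of that business-level synchronization applied to f1(); locking on the shared list itself works here because every D instance receives the same ArrayList:

public void f1() {
    synchronized (al) {  // size() and add() now execute as one atomic step
        if (al.size() == 0)
            al.add(0);
        else
            al.add(al.get(al.size() - 1) + 1);
    }
}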
In The Beginning (Java 1.0) there was the "synchronized vector".
Which entailed a potentially HUGE performance hit.
Hence the addition of "ArrayList" and friends in Java 1.2 onwards.
Your code illustrates the rationale for making vectors synchronized in the first place. But it's simply unnecessary most of the time, and better done in other ways most of the rest of the time.
IMHO...
PS:
An interesting link:
http://www.coderanch.com/t/523384/java/java/ArrayList-Vector-size-incrementation
Vector is thread-safe; ArrayList is not. That is why ArrayList is faster than Vector.
The link below has good information about this.
http://www.javaworld.com/javaworld/javaqa/2001-06/03-qa-0622-vector.html
I'm trying to understand the difference in behaviour of an ArrayList
and a Vector
Vector is synchronized while ArrayList is not. ArrayList is not thread-safe.
Does the following snippet in any way illustrate the difference in
synchronization ?
No real difference is shown, since only Vector is synchronized.
Currently I can't understand when we should declare a variable as volatile.
I have done some studying and searched for materials about it for a long time, and I know that when a field is declared volatile, the compiler and runtime are put on notice that this variable is shared and that operations on it should not be reordered with other memory operations.
However, I still can't understand in what scenarios we should use it. I mean, can someone provide example code showing that using volatile brings a benefit or solves a problem compared to not using it?
Here is an example of why volatile is necessary. If you remove the keyword volatile, thread 1 may never terminate. (When I tested on Java 1.6 HotSpot on Linux this was indeed the case; your results may vary, as the JVM is not obliged to cache variables that are not marked volatile.)
public class ThreadTest {
    volatile boolean running = true;

    public void test() {
        new Thread(new Runnable() {
            public void run() {
                int counter = 0;
                while (running) {
                    counter++;
                }
                System.out.println("Thread 1 finished. Counted up to " + counter);
            }
        }).start();

        new Thread(new Runnable() {
            public void run() {
                // Sleep for a bit so that thread 1 has a chance to start
                try {
                    Thread.sleep(100);
                } catch (InterruptedException ignored) {
                    // catch block
                }
                System.out.println("Thread 2 finishing");
                running = false;
            }
        }).start();
    }

    public static void main(String[] args) {
        new ThreadTest().test();
    }
}
The following is a canonical example of the necessity of volatile (in this case for the str variable). Without it, HotSpot lifts the field access out of the loop (while (str == null)) and run() never terminates. This will happen on most -server JVMs.
public class DelayWrite implements Runnable {
    private String str;

    void setStr(String str) { this.str = str; }

    public void run() {
        while (str == null);
        System.out.println(str);
    }

    public static void main(String[] args) throws InterruptedException {
        DelayWrite delay = new DelayWrite();
        new Thread(delay).start();
        Thread.sleep(1000);
        delay.setStr("Hello world!!");
    }
}
Eric, I have read your comments and one in particular strikes me
In fact, I can understand the usage of volatile at the concept level. But in practice, I can't come up with code that has concurrency problems without using volatile.
The obvious problems you can have are compiler reorderings, for example the better-known hoisting mentioned by Simon Nickerson. But let's assume there will be no reorderings; even then, that comment can be a valid one.
Another issue that volatile resolves involves 64-bit variables (long, double). Without volatile, a write to a long or a double may be performed as two separate 32-bit stores. With concurrent writes, one thread can write the high 32 bits while another thread writes the low 32 bits, leaving you with a long that is neither one value nor the other. A sketch of the cure follows.
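A small sketch of the cure for that particular problem (the class is hypothetical): declaring the 64-bit field volatile makes its reads and writes atomic, which the JLS guarantees for volatile long and double fields.

class LastUpdate {
    // Without volatile, a write to this long may be split into two 32-bit writes,
    // so a concurrent reader could observe a value mixing halves of two writes.
    // With volatile, reads and writes of long (and double) are always atomic.
    private volatile long lastUpdateMillis = 0L;

    void touch() { lastUpdateMillis = System.currentTimeMillis(); }

    long lastUpdateMillis() { return lastUpdateMillis; }
}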
Also, if you look at the memory section of the JLS you will observe that it is a relaxed memory model.
That means writes may not become visible for a while (they can be sitting in a store buffer), which can lead to stale reads. You may say that seems unlikely, and it is, but your program is still incorrect and has the potential to fail.
It is a bit like having an int that you increment for the lifetime of an application: you know (or at least think) it won't overflow, so you don't upgrade it to a long, but it still can overflow. With a memory visibility issue, even if you think it shouldn't affect you, you should know that it still can, and it can cause errors in your concurrent application that are extremely difficult to identify. Correctness is the reason to use volatile.
The volatile keyword is pretty complex and you need to understand what it does and does not do well before you use it. I recommend reading this language specification section which explains it very well.
They highlight this example:
class Test {
    static volatile int i = 0, j = 0;

    static void one() { i++; j++; }

    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}
What this means is that, as far as the writes in one() are concerned, j is never greater than i. However, another thread running two() might print a value of j that is much larger than i: say two() is running and fetches the value of i, then one() runs 1000 times, and only then does the thread running two() get scheduled again and pick up j, which is by now much larger than the value it read for i. I think this example demonstrates the difference between volatile and synchronized well: the updates to i and j are volatile, which means the order in which they happen is consistent with the source code, but the two updates happen separately and not atomically, so callers may see values that look (to that caller) inconsistent.
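For contrast, here is a sketch of the synchronized variant of that same example (adapted, not quoted verbatim from the specification). Because the increments and the read now happen under the same lock, a caller of two() can never observe i and j out of step:

class TestSync {
    static int i = 0, j = 0;

    static synchronized void one() { i++; j++; }

    static synchronized void two() {
        System.out.println("i=" + i + " j=" + j); // always prints equal values
    }
}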
In a nutshell: Be very careful with volatile!
A minimal example in Java 8; if you remove the volatile keyword, it will never end.
import java.util.concurrent.TimeUnit;

public class VolatileExample {
    private static volatile boolean BOOL = true;

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> { while (BOOL) { } }).start();
        TimeUnit.MILLISECONDS.sleep(500);
        BOOL = false;
    }
}
To expand on the answer from @jed-wesley-smith: if you drop this into a new project, remove the volatile keyword from iterationCount, and run it, it will never stop. Adding the volatile keyword to either str or iterationCount causes the code to finish successfully. I've also noticed that the sleep can't be smaller than 5 on Java 8, but your mileage may vary with other JVMs / Java versions.
public static class DelayWrite implements Runnable {
    private String str;
    public volatile int iterationCount = 0;

    void setStr(String str) {
        this.str = str;
    }

    public void run() {
        while (str == null) {
            iterationCount++;
        }
        System.out.println(str + " after " + iterationCount + " iterations.");
    }
}

public static void main(String[] args) throws InterruptedException {
    System.out.println("This should print 'Hello world!' and exit if str or iterationCount is volatile.");
    DelayWrite delay = new DelayWrite();
    new Thread(delay).start();
    Thread.sleep(5);
    System.out.println("Thread sleep gave the thread " + delay.iterationCount + " iterations.");
    delay.setStr("Hello world!!");
}