This test fails:
public void testWeak() throws Exception {
    waitGC();
    {
        Sequence a = Sequence.valueOf("123456789");
        assert Sequence.used() == 1;
        a.toString();
    }
    waitGC();
}
private void waitGC() throws InterruptedException {
    Runtime.getRuntime().gc();
    short count = 0;
    while (count < 100 && Sequence.used() > 0) {
        Thread.sleep(10);
        count++;
    }
    assert Sequence.used() == 0 : "Not removed!";
}
The test fails, reporting "Not removed!".
This works:
public void testAWeak() throws Exception {
    waitGC();
    extracted();
    waitGC();
}
private void extracted() throws ChecksumException {
    Sequence a = Sequence.valueOf("123456789");
    assert Sequence.used() == 1;
    a.toString();
}
private void waitGC() throws InterruptedException {
    Runtime.getRuntime().gc();
    short count = 0;
    while (count < 100 && Sequence.used() > 0) {
        Thread.sleep(10);
        count++;
    }
    assert Sequence.used() == 0 : "Not removed!";
}
It seems like the curly brackets do not affect the weakness.
Are there any official resources on this?
Scope is a compile-time thing. It does not determine the reachability of objects at runtime; it only has an indirect influence due to implementation details.
Consider the following variation of your test:
static boolean WARMUP;

public void testWeak1() throws Exception {
    variant1();
    WARMUP = true;
    for(int i = 0; i < 10000; i++) variant1();
    WARMUP = false;
    variant1();
}

private void variant1() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    {
        Trackable a = new Trackable(track);
        a.toString();
    }
    if(!WARMUP) System.out.println("variant1: "
        + (waitGC(track)? "collected": "not collected"));
}
public void testWeak2() throws Exception {
    variant2();
    WARMUP = true;
    for(int i = 0; i < 10000; i++) variant2();
    WARMUP = false;
    variant2();
}

private void variant2() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    {
        Trackable a = new Trackable(track);
        a.toString();
        if(!WARMUP) System.out.println("variant2: "
            + (waitGC(track)? "collected": "not collected"));
    }
}
static class Trackable {
    final AtomicBoolean backRef;

    public Trackable(AtomicBoolean backRef) {
        this.backRef = backRef;
    }

    @Override
    protected void finalize() throws Throwable {
        backRef.set(true);
    }
}
private boolean waitGC(AtomicBoolean b) throws InterruptedException {
    for(int count = 0; count < 10 && !b.get(); count++) {
        Runtime.getRuntime().gc();
        Thread.sleep(1);
    }
    return b.get();
}
On my machine, it prints:
variant1: not collected
variant1: collected
variant2: not collected
variant2: collected
If you can’t reproduce it, you may have to raise the number of warmup iterations.
What it demonstrates: whether a is in scope (variant 2) or not (variant 1) doesn't matter. In either case, the object was not collected during cold execution, but was collected after a number of warmup iterations, in other words, after the optimizer kicked in.
Formally, a is always eligible for garbage collection at the point we’re invoking waitGC(), as it is unused from this point. This is how reachability is defined:
A reachable object is any object that can be accessed in any potential continuing computation from any live thread.
In this example, the object cannot be accessed by any potential continuing computation, as no subsequent computation that would access the object exists. However, there is no guarantee that a particular JVM's garbage collector is always capable of identifying all of those objects at any given time. In fact, even a JVM without a garbage collector at all would still comply with the specification, though perhaps not with its intent.
The possibility of code optimizations having an effect on the reachability analysis is also explicitly mentioned in the specification:
Optimizing transformations of a program can be designed that reduce the number of objects that are reachable to be less than those which would naively be considered reachable. For example, a Java compiler or code generator may choose to set a variable or parameter that will no longer be used to null to cause the storage for such an object to be potentially reclaimable sooner.
So what happens technically?
As said, scope is a compile-time thing. At the bytecode level, leaving the scope defined by the curly braces has no effect. The variable a is out of scope, but its storage within the stack frame still exists, holding the reference, until it is overwritten by another variable or until the method completes. The compiler is free to reuse the storage for another variable, but in this example, no such variable exists. So the two variants of the example above actually generate identical bytecode.
In an unoptimized execution, the still-existing reference within the stack frame is treated like a reference preventing the object's collection. In an optimized execution, the reference is only held until its last actual use. Inlining of its fields can allow its collection even earlier, up to the point that it is collected right after construction (or never constructed at all, if it didn't have a finalize() method). The extreme end is finalize() being called on a strongly reachable object in Java 8…
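(As an aside, not part of the original answer: since Java 9 there is an explicit tool for the opposite direction, java.lang.ref.Reference.reachabilityFence, which keeps an object strongly reachable up to the point of the call regardless of how aggressively the optimizer narrows liveness. A minimal sketch, reusing the Trackable helper above:)

private void variantFenced() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    Trackable a = new Trackable(track);
    try {
        a.toString();
        // a is guaranteed to stay strongly reachable until the fence below runs,
        // so this prints "not collected", warmed up or not
        if(!WARMUP) System.out.println("fenced: "
            + (waitGC(track)? "collected": "not collected"));
    } finally {
        java.lang.ref.Reference.reachabilityFence(a);
    }
}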
Things change when you insert another variable, e.g.
private void variant1() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    {
        Trackable a = new Trackable(track);
        a.toString();
    }
    String message = "variant1: ";
    if(!WARMUP) System.out.println(message
        + (waitGC(track)? "collected": "not collected"));
}
Then, the storage of a is reused by message after a's scope has ended (that is, of course, compiler specific) and the object gets collected, even in the unoptimized execution.
Note that the crucial aspect is the actual overwriting of the storage. If you use
private void variant1() throws Exception {
    AtomicBoolean track = new AtomicBoolean();
    {
        Trackable a = new Trackable(track);
        a.toString();
    }
    if(!WARMUP)
    {
        String message = "variant1: "
            + (waitGC(track)? "collected": "not collected");
        System.out.println(message);
    }
}
The message variable uses the same storage as a, but its assignment only happens after the invocation of waitGC(track), so you get the same unoptimized execution behavior as in the original variant.
By the way, don't use short for local loop variables. Java always uses int for byte, short, char, and int calculations (as you notice when trying to write shortVariable = shortVariable + 1;), and requiring it to cut the result back to short (which still happens implicitly when you use shortVariable++) adds an additional operation. So if you thought using short improved efficiency, it is actually the opposite.
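To make the point concrete (my own illustration, not from the original answer):

class ShortCounter {
    public static void main(String[] args) {
        short s = 0;
        // s = s + 1;        // does not compile: s + 1 is an int and would need a cast
        s = (short) (s + 1); // the explicit narrowing cast is the extra operation
        s++;                 // compiles, but the narrowing back to short still happens implicitly

        int i = 0;
        i = i + 1;           // plain int arithmetic, no narrowing step
        System.out.println(s + " " + i);
    }
}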
Related
I know that, in theory, to implement a correct singleton, in addition to double-checked locking and synchronized, we should make the instance field volatile.
But in real life I cannot produce an example that would expose the problem. Maybe there is a JVM flag that would disable some optimisation, or allow the runtime to do such code permutation?
Here is the code that, as I understand it, should print to the console from time to time, but it doesn't:
class Data {
    int i;

    Data() {
        i = Math.abs(new Random().nextInt()) + 1; // Just not 0
    }
}

class Keeper {
    private Data data;

    Data getData() {
        if (data == null)
            synchronized (this) {
                if (data == null)
                    data = new Data();
            }
        return data;
    }
}
@Test
void foo() throws InterruptedException {
    Keeper[] sharedInstance = new Keeper[]{new Keeper()};
    Thread t1 = new Thread(() -> {
        while (true)
            sharedInstance[0] = new Keeper();
    });
    t1.start();
    final Thread t2 = new Thread(() -> {
        while (true)
            if (sharedInstance[0].getData().i == 0)
                System.out.println("GOT IT!!"); // This actually does not happen
    });
    t2.start();
    t1.join();
}
Could someone provide code that clearly demonstrates the described theoretical problem of the missing volatile?
A very good article about it:
https://shipilev.net/blog/2014/safe-public-construction/
You can find examples at the end.
And be aware of the following:
x86 is Total Store Order hardware, meaning the stores are visible for all processors in some total order. That is, if compiler actually presented the program stores in the same order to hardware, we may be reasonably sure the initializing stores of the instance fields would be visible before seeing the reference to the object itself. Even if your hardware is total-store-ordered, you can not be sure the compiler would not reorder within the allowed memory model spec. If you turn off -XX:+StressGCM -XX:+StressLCM in this experiment, all cases would appear correct since the compiler did not reorder much.
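For completeness (this sketch is mine, not taken from the linked article): the standard fix for the Keeper in the question is to make the field volatile, so that the write of the fully constructed Data happens-before any read that observes the non-null reference:

class Keeper {
    // volatile is the missing ingredient: without it, a thread may observe a
    // non-null reference to a Data object whose fields are not yet visible
    private volatile Data data;

    Data getData() {
        Data local = data;             // one volatile read
        if (local == null) {
            synchronized (this) {
                local = data;
                if (local == null) {
                    local = new Data();
                    data = local;      // volatile write publishes the constructed object
                }
            }
        }
        return local;
    }
}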
This program does not terminate!
public class Main extends Thread {
    private int i = 0;

    private int getI() { return i; }

    private void setI(int j) { i = j; }

    public static void main(String[] args) throws InterruptedException {
        Main main = new Main();
        main.start();
        Thread.sleep(1000);
        main.setI(10);
    }

    public void run() {
        System.out.println("Awaiting...");
        while (getI() == 0) ;
        System.out.println("Done!");
    }
}
I understand this happens because the CPU core running the Awaiting loop always sees the cached copy of i and misses the update.
I also understand that if I make it volatile private int i = 0; then the while (getI()... will behave[1] as if it were consulting main memory every time - so it will see the updated value and my program will terminate.
My question is: If I make
synchronized private int getI() {return i; }
It surprisingly works!! The program terminates.
I understand that synchronized is used to prevent two different threads from simultaneously entering a method - but here only one thread ever enters getI(). So what sorcery is this?
Edit 1
This (synchronization) guarantees that changes to the state of the object are visible to all threads
So rather than directly having the private state field i, I made the following changes:
In place of private int i = 0; I used private Data data = new Data();, changed i = j to data.i = j, and changed return i to return data.i.
Now the getI and setI methods are not doing anything to the state of the object in which they are defined (and on which they may be synchronized). Even now, using the synchronized keyword causes the program to terminate! The fun part is that the object whose state is actually changing (Data) has no synchronization or anything built into it. Then why?
[1] It will probably just behave as if that were the case; what actually happens is unclear to me.
It is just a coincidence, or platform dependent, or specific to a particular JVM; it is not guaranteed by the JLS. So do not depend on it.
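If you want the guarantee rather than the coincidence, the read and the write must use the same synchronization, for example by synchronizing both accessors (or by declaring the field volatile). A minimal sketch of the question's class with that change (mine, not from the original answer):

public class Main extends Thread {
    private int i = 0;

    // Both accessors lock the same monitor (this), so the write in setI
    // happens-before any later read in getI that acquires the lock.
    private synchronized int getI() { return i; }

    private synchronized void setI(int j) { i = j; }

    // Alternative: declare the field as `private volatile int i = 0;`
    // and leave the accessors unsynchronized.

    public static void main(String[] args) throws InterruptedException {
        Main main = new Main();
        main.start();
        Thread.sleep(1000);
        main.setI(10);
    }

    public void run() {
        System.out.println("Awaiting...");
        while (getI() == 0) ;
        System.out.println("Done!");
    }
}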
The question has been posted before, but no real working example was provided. Brian Goetz mentions that under certain conditions the AssertionError can occur in the following code:
public class Holder {
    private int n;

    public Holder(int n) { this.n = n; }

    public void assertSanity() {
        if (n != n)
            throw new AssertionError("This statement is false");
    }
}
When holder is improperly published like this:
class someClass {
    public Holder holder;

    public void initialize() {
        holder = new Holder(42);
    }
}
I understand that this would occur when the reference to holder is made visible before the instance variable of the holder object is made visible to another thread. So I made the following example to provoke this behavior, and thus the AssertionError, with the following class:
public class Publish {
    public Holder holder;

    public void initialize() {
        holder = new Holder(42);
    }

    public static void main(String[] args) {
        Publish publish = new Publish();
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                for(int i = 0; i < Integer.MAX_VALUE; i++) {
                    publish.initialize();
                }
                System.out.println("initialize thread finished");
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                int nullPointerHits = 0;
                int assertionErrors = 0;
                while(t1.isAlive()) {
                    try {
                        publish.holder.assertSanity();
                    } catch(NullPointerException exc) {
                        nullPointerHits++;
                    } catch(AssertionError err) {
                        assertionErrors++;
                    }
                }
                System.out.println("Nullpointerhits: " + nullPointerHits);
                System.out.println("Assertion errors: " + assertionErrors);
            }
        });
        t1.start();
        t2.start();
    }
}
No matter how many times I run the code, the AssertionError never occurs. So for me there are several options:
The JVM implementation (in my case Oracle's 1.8.0.20) enforces that the invariants set during construction of an object are visible to all threads.
The book is wrong, which I would doubt as the author is Brian Goetz ... nuf said
I'm doing something wrong in my code above
So the questions I have:
- Has anyone ever provoked this kind of AssertionError successfully? With what code?
- Why isn't my code provoking the AssertionError?
Your program is not properly synchronized, as that term is defined by the Java Memory Model.
That does not, however, mean that any particular run will exhibit the assertion failure you are looking for, nor that you necessarily can expect ever to see that failure. It may be that your particular VM just happens to handle that particular program in a way that turns out never to expose that synchronization failure. Or it may turn out that, although it is susceptible to failure, the likelihood is remote.
And no, your test does not provide any justification for writing code that fails to be properly synchronized in this particular way. You cannot generalize from these observations.
You are looking for a very rare condition. Even if the code reads an uninitialized n, it may read the same default value on the next read, so the race you are looking for requires an update right in between these two adjacent reads.
The problem is that every optimizer will merge the two reads in your code into one once it starts processing your code, so after that you will never get an AssertionError, even if that single read evaluates to the default value.
Further, since the access to Publish.holder is unsynchronized, the optimizer is allowed to read its value exactly once and assume it unchanged during all subsequent iterations. So an optimized second thread would always process the same object, which will never turn back to the uninitialized state. Even worse, an optimistic optimizer may go as far as to assume that n is always 42, as you never initialize it to anything else in this runtime, and it will not consider the case that you want a race condition. So both loops may get optimized to no-ops.
In other words: if your code doesn't fail on the first access, the likelihood of spotting the error in subsequent iterations drops dramatically, possibly to zero. This is contrary to your idea of letting the code run inside a long loop, hoping that you will eventually encounter the error.
The best chances of getting a data race are on the first, non-optimized, interpreted execution of your code. But keep in mind, the chances of that specific data race are still extremely low, even when running the entire test code in pure interpreted mode.
I am reading this book called "Java Concurrency in Practice" and the author gives an example of an unsafe object publication. Here is the example.
public Holder holder;

public void initialize() {
    holder = new Holder(42);
}
and
public class Holder {
    private int n;

    public Holder(int n) { this.n = n; }

    public void assertSanity() {
        if (n != n)
            throw new AssertionError("This statement is false.");
    }
}
So does this mean that another thread can have access to an object when it is not even fully constructed? I guess that when thread A calls holder.initialize(); and thread B calls holder.assertSanity();, the condition n != n will not be met if thread A has not yet executed this.n = n;
Does this also mean that if I have simpler code like
int n;
System.out.println(n == n); //false?
A problem can occur if the assertSanity method is pre-empted between the first and second load of n (the first load would see 0 and the second load would see the value set by the constructor). The problem is that the basic operations are:
1. Allocate space for the object
2. Call the constructor
3. Set holder to the new instance
The compiler/JVM/CPU is allowed to reorder steps #2 and #3 since there are no memory barriers (final, volatile, synchronized, etc.)
From your second example, it's not clear if "n" is a local variable or a member variable or how another thread might be simultaneously mutating it.
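As a side note (my sketch, not part of this answer): one way to close exactly this reordering gap without any locking is to make n final. The Java Memory Model guarantees that final fields are visible with their constructed values to any thread that sees the object reference, provided this does not escape the constructor:

public class Holder {
    private final int n; // final: its constructed value is visible to every thread that sees the Holder reference

    public Holder(int n) { this.n = n; }

    public void assertSanity() {
        if (n != n) // with a final n, both reads are guaranteed to see the constructor's value
            throw new AssertionError("This statement is false");
    }
}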
Your understanding is correct. That is exactly the problem the author seeks to illustrate. There are no guards in Java that ensure an object is fully constructed prior to accessing it when multiple threads are involved. Holder is not thread-safe, as it contains mutable state. The use of synchronization is required to fix this.
I'm not sure I understand your second example; it lacks context.
public static void main(String[] args) {
    A a = new A();
    System.out.println(a.n);
}

static class A {
    public int n;

    public A() {
        new Thread() {
            public void run() {
                System.out.println(A.this.n);
            }
        }.start();
        try {
            Thread.sleep(1000);
            n = 3;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
This example results in "0 3", which means the reference to an object can be used by another thread even before its constructor is done. You may find the rest of the answer here. Hope it helps.
Currently I can't understand when we should use volatile to declare a variable.
I have done some studying and searched some materials about it for a long time, and I know that when a field is declared volatile, the compiler and runtime are put on notice that this variable is shared and that operations on it should not be reordered with other memory operations.
However, I still can't understand in what scenarios we should use it. I mean, can someone provide any example code which can prove that using "volatile" brings a benefit or solves problems compared to not using it?
Here is an example of why volatile is necessary. If you remove the keyword volatile, thread 1 may never terminate. (When I tested on Java 1.6 Hotspot on Linux, this was indeed the case - your results may vary as the JVM is not obliged to do any caching of variables not marked volatile.)
public class ThreadTest {
    volatile boolean running = true;

    public void test() {
        new Thread(new Runnable() {
            public void run() {
                int counter = 0;
                while (running) {
                    counter++;
                }
                System.out.println("Thread 1 finished. Counted up to " + counter);
            }
        }).start();
        new Thread(new Runnable() {
            public void run() {
                // Sleep for a bit so that thread 1 has a chance to start
                try {
                    Thread.sleep(100);
                } catch (InterruptedException ignored) {
                    // catch block
                }
                System.out.println("Thread 2 finishing");
                running = false;
            }
        }).start();
    }

    public static void main(String[] args) {
        new ThreadTest().test();
    }
}
The following is a canonical example of the necessity of volatile (in this case for the str variable). Without it, HotSpot lifts the access outside the loop (while (str == null)) and run() never terminates. This will happen on most -server JVMs.
public class DelayWrite implements Runnable {
    private String str;

    void setStr(String str) { this.str = str; }

    public void run() {
        while (str == null);
        System.out.println(str);
    }

    public static void main(String[] args) throws InterruptedException {
        DelayWrite delay = new DelayWrite();
        new Thread(delay).start();
        Thread.sleep(1000);
        delay.setStr("Hello world!!");
    }
}
Eric, I have read your comments and one in particular strikes me:
In fact, I can understand the usage of volatile on the concept level. But for practice, I can't think up the code which has concurrency problems without using volatile
The obvious problems you can have are compiler reorderings, for example the more famous hoisting mentioned by Simon Nickerson. But let's assume that there will be no reorderings; in that case, the comment can be a valid one.
Another issue that volatile resolves is with 64-bit variables (long, double). Without volatile, a write to a long or a double may be treated as two separate 32-bit stores. What can happen with concurrent writes is that the high 32 bits of one thread's value get combined with the low 32 bits of another thread's value. You can then read a long that is neither one nor the other.
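A sketch of that effect (my example, not from the original answer); note that most 64-bit JVMs happen to write longs atomically anyway, so this may only ever print a torn value on a 32-bit JVM:

public class LongTearing {
    static long value; // intentionally not volatile: the two 32-bit halves may be written separately

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            while (true) {
                value = 0L;  // both halves zero
                value = -1L; // both halves all ones
            }
        });
        writer.setDaemon(true);
        writer.start();

        for (long i = 0; i < 1_000_000_000L; i++) {
            long observed = value;
            if (observed != 0L && observed != -1L) {
                // half of one write combined with half of the other
                System.out.println("Torn read: 0x" + Long.toHexString(observed));
                return;
            }
        }
        System.out.println("No torn read observed (likely atomic 64-bit stores on this JVM)");
    }
}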
Also, if you look at the memory section of the JLS you will observe it to be a relaxed memory model.
That means writes may not become visible (can be sitting in a store buffer) for a while. This can lead to stale reads. Now you may say that seems unlikely, and it is, but your program is incorrect and has potential to fail.
If you have an int that you are incrementing for the lifetime of an application and you know (or at least think) the int won't overflow, then you don't upgrade it to a long, but it is still possible that it can. In the case of a memory visibility issue, if you think it shouldn't affect you, you should know that it still can, and can cause errors in your concurrent application that are extremely difficult to identify. Correctness is the reason to use volatile.
The volatile keyword is pretty complex and you need to understand what it does and does not do well before you use it. I recommend reading this language specification section which explains it very well.
They highlight this example:
class Test {
    static volatile int i = 0, j = 0;

    static void one() { i++; j++; }

    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}
What this means is that, during one(), j is never greater than i. However, another thread running two() might print out a value of j that is much larger than i: say two() is running and fetches the value of i; then one() runs 1000 times; then the thread running two() finally gets scheduled again and picks up j, which is now much larger than the value of i. I think this example perfectly demonstrates the difference between volatile and synchronized - the updates to i and j are volatile, which means that the order in which they happen is consistent with the source code. However, the two updates happen separately, not atomically, so callers may see values that look (to that caller) inconsistent.
In a nutshell: Be very careful with volatile!
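A tiny driver (mine, not from the specification) that makes the described effect observable, assuming the Test class above is in the same package:

public class TestDriver {
    public static void main(String[] args) throws InterruptedException {
        // One thread hammers one(); the main thread prints a snapshot every now and then.
        // The printed j can be far larger than the printed i, even though the shared
        // value of j never overtakes the shared value of i.
        new Thread(() -> { while (true) Test.one(); }).start();
        while (true) {
            Test.two();
            Thread.sleep(100);
        }
    }
}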
A minimalist example in Java 8; if you remove the volatile keyword, it will never end.
import java.util.concurrent.TimeUnit;

public class VolatileExample {
    private static volatile boolean BOOL = true;

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> { while (BOOL) { } }).start();
        TimeUnit.MILLISECONDS.sleep(500);
        BOOL = false;
    }
}
To expand on the answer from @jed-wesley-smith: if you drop this into a new project, take out the volatile keyword from iterationCount, and run it, it will never stop. Adding the volatile keyword to either str or iterationCount would cause the code to end successfully. I've also noticed that the sleep can't be smaller than 5 using Java 8, but your mileage may vary with other JVMs / Java versions.
public static class DelayWrite implements Runnable
{
    private String str;
    public volatile int iterationCount = 0;

    void setStr(String str)
    {
        this.str = str;
    }

    public void run()
    {
        while (str == null)
        {
            iterationCount++;
        }
        System.out.println(str + " after " + iterationCount + " iterations.");
    }
}
public static void main(String[] args) throws InterruptedException
{
    System.out.println("This should print 'Hello world!' and exit if str or iterationCount is volatile.");
    DelayWrite delay = new DelayWrite();
    new Thread(delay).start();
    Thread.sleep(5);
    System.out.println("Thread sleep gave the thread " + delay.iterationCount + " iterations.");
    delay.setStr("Hello world!!");
}