Is Initialization On Demand Holder idiom thread safe without a final modifier - java

I have a hunch that using the holder idiom without declaring the holder field as final is not thread safe (due to the way immutability works in Java). Can somebody confirm this (hopefully with some sources)?
public class Something {
    private int answer = 1;

    private Something() {
        answer += 10;
        answer += 10;
    }

    public int getAnswer() {
        return answer;
    }

    private static class LazyHolder {
        // notice no final
        private static Something INSTANCE = new Something();
    }

    public static Something getInstance() {
        return LazyHolder.INSTANCE;
    }
}
EDIT: I definitely want sourced statements, not just assertions like "it works" -- please explain/prove it's safe
EDIT2: A little modification to make my point clearer - can I be sure that the getAnswer() method will return 21 regardless of the calling thread?

The class initialization procedure guarantees that if a static field's value is set using a static initializer (i.e. static variable = someValue;) that value is visible to all threads:
10 - If the execution of the initializers completes normally, then acquire LC, label the Class object for C as fully initialized, notify all waiting threads, release LC, and complete this procedure normally.
Regarding your edit, let's imagine a situation with two threads T1 and T2, executing in that order from a wall clock's perspective:
T1: Something s = Something.getInstance();
T2: Something s = Something.getInstance(); i = s.getAnswer();
Then you have:
T1 acquires LC, runs Something INSTANCE = new Something(); (which initialises answer), then releases LC.
T2 tries to acquire LC, but it is already held by T1, so it waits. When T1 releases LC, T2 acquires it, reads INSTANCE and then reads answer.
So you can see that you have a proper happens-before relationship between the write and the read to answer, thanks to the LC lock.
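For what it's worth, here is a minimal harness (hypothetical, not part of the question or of the JLS argument) that hammers getInstance() from several threads; under the class-initialization guarantee quoted above, the only value it should ever observe is 21. Such a test cannot prove safety, but it illustrates the claim:
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class HolderSmokeTest {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        Set<Integer> observed = ConcurrentHashMap.newKeySet();
        for (int i = 0; i < 1_000; i++) {
            pool.submit(() -> observed.add(Something.getInstance().getAnswer()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // The class-initialization lock (LC) guarantees each submitting thread
        // sees the fully constructed instance, so only 21 should ever appear.
        System.out.println(observed);   // expected: [21]
    }
}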

It is thread-safe for sure, but the field is mutable: because INSTANCE is not final, code inside the enclosing class could later reassign it. That is the first thing to worry about (even before considering thread-safety).
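To illustrate that point, a sketch of what the missing final permits (the replaceInstance() method below is hypothetical, not in the question): code inside the enclosing class can silently swap the instance, which a final field would turn into a compile-time error.
public class Something {
    private int answer = 21;

    private Something() {
    }

    public int getAnswer() {
        return answer;
    }

    private static class LazyHolder {
        // not final, so it remains assignable from within this top-level class
        private static Something INSTANCE = new Something();
    }

    public static Something getInstance() {
        return LazyHolder.INSTANCE;
    }

    // Hypothetical method, not in the question: it compiles only because INSTANCE
    // is not final; declaring "private static final Something INSTANCE" would make
    // this assignment a compile-time error.
    static void replaceInstance() {
        LazyHolder.INSTANCE = new Something();
    }
}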

Related

Volatile Java reordering

Firstly, let me say that I am aware this is a fairly common topic here, but searching for it I couldn't quite find another question that clarifies the following situation. I am very sorry if this is a possible duplicate, but here you go:
I am new to concurrency and have been given the following code in order to answer questions:
a) Why would any output other than "00" be possible?
b) How can the code be amended so that "00" will ALWAYS be printed?
boolean flag = false;

void changeVal(int val) {
    if (this.flag) {
        return;
    }
    this.initialInt = val;
    this.flag = true;
}

int initialInt = 1;

class MyThread extends Thread {
    public void run() {
        changeVal(0);
        System.out.print(initialInt);
    }
}

void execute() throws Exception {
    MyThread t1 = new MyThread();
    MyThread t2 = new MyThread();
    t1.start(); t2.start();
    t1.join(); t2.join();
    System.out.println();
}
For a) my answer would be the following: in the absence of any volatile / synchronization construct, the compiler could reorder some of the instructions. In particular, "this.initialInt = val;" and "this.flag = true;" could be switched, so that this situation could occur: both threads are started and t1 charges ahead. Given the reordered instructions, it first sets flag = true. Now, before it reaches the (now last) statement "this.initialInt = val;", the other thread jumps in, checks the if-condition and immediately returns, thus printing the unchanged initialInt value of 1. Besides this, I believe that without any volatile / synchronization it is not certain whether t2 would see the assignment performed to initialInt in t1, so it may also print "1", the initial value.
For b) I think that flag could be made volatile. I have learned that when t1 writes to a volatile variable setting flag = true then t2, upon reading out this volatile variable in the if-statement will see any write operations performed before the volatile write, hence initialInt = val, too. Therefore, t2 will already have seen its initialInt value changed to 0 and must always print 0.
This will only work, however, if the use of volatile successfully prevents any reordering as I described in a). I have read about volatile accomplishing such things but I am not sure whether this always works here in the absence of any further synchronized blocks or any such locks. From this answer I have gathered that nothing happening before a volatile store (so this.flag = true) can be reordered so as to appear beyond it. In that case initialInt = val could not be moved down and I should be correct, right? Or not? :)
Thank you so much for your help. I am looking forward to your replies.
This example will always print 00, because you call changeVal(0) before printing.
To mimic a case where 00 might not be printed, you need to move initialInt = 1; into the context of a thread, like so:
class MyThread extends Thread {
    public void run() {
        initialInt = 1;
        changeVal(0);
        System.out.print(initialInt);
    }
}
Now you might have a race condition that sets initialInt back to 1 in thread1 before it is printed in thread2.
Another alternative that might result in a race condition, but is harder to understand, is switching the order of setting the flag and setting the value:
void changeVal(int val) {
    if (this.flag) {
        return;
    }
    this.flag = true;
    this.initialInt = val;
}
There are no explicit synchronizations, so all kinds of interleavings are possible, and changes made by one thread are not necessarily visible to the other. So it is possible that the changes to flag become visible before the changes to initialInt, causing "10" or "01" output, as well as "00" output. "11" is not possible, because operations performed on variables are visible to the thread performing them, and the effects of changeVal(0) will always be visible to at least one of the threads.
Making changeVal synchronized, or making flag volatile, would fix the issue. flag is the last variable changed in the critical section, so declaring it volatile would create a happens-before relationship, making the changes to initialInt visible.
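A minimal sketch of the volatile fix described above (the enclosing Holder class is hypothetical, added only so the fragment compiles as a unit; field and method names are from the question):
class Holder {
    int initialInt = 1;
    volatile boolean flag = false;   // the volatile write/read pair creates the happens-before edge

    void changeVal(int val) {
        if (this.flag) {
            return;
        }
        this.initialInt = val;   // ordinary write...
        this.flag = true;        // ...published by the volatile write that follows it
    }

    class MyThread extends Thread {
        @Override
        public void run() {
            changeVal(0);
            System.out.print(initialInt);
        }
    }

    void execute() throws Exception {
        MyThread t1 = new MyThread();
        MyThread t2 = new MyThread();
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println();
    }
}
With flag volatile, a thread that reads flag as true must also see initialInt = 0 (it was written before the volatile write), and a thread that reads flag as false writes initialInt = 0 itself before printing, so new Holder().execute() prints "00" either way.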

Using java semaphores as locks between two Runnable classes

I have three objects which are instances of two different classes implementing Runnable interface. One of the objects changes counters of the other two objects, but I want to make sure the whole update operation is not interrupted by the other threads (i.e. I want to use a lock for my critical section).
In the code below (this is an illustration of the actual code, not itself), I want to make sure the code in the critical section is executed without any interruptions.
One thought I have is defining a binary Semaphore, m, in the Worker class and surrounding every operation that touches value and operations with m.acquire() followed by m.release(). But in the Runner class I have a call to incrementValue(), and if I surround the critical section with acquire()/release() calls while I have the same thing within incrementValue(), it does not make sense.
I am a bit confused about where I should be putting my semaphores to achieve mutual exclusion.
Thanks
class Worker implements Runnable {
    int value;
    int operations;
    // Semaphore m = new Semaphore(1);
    ...
    ...
    void incrementValue(int n) {
        // m.acquire() here??
        this.operations++;
        this.value += n;
        // m.release() here??
    }
    ...
    @Override
    public void run() {
        ...
        this.operations++;
        this.value = getRandomNum();
        ...
    }
}

class Runner implements Runnable {
    Worker a, b;
    ...
    ...
    @Override
    public void run() {
        ...
        // Start of the CS
        // a.m.acquire() here?
        // b.m.acquire() here?
        a.incrementValue(x);
        System.out.println("Value in WorkerA incremented by " + x);
        b.incrementValue(y);
        System.out.println("Value in WorkerB incremented by " + y);
        // a.m.release() here?
        // b.m.release() here?
        // end of the CS
        ...
    }
    ...
}
Sounds like the problem you are facing is the same problem that ReentrantLock is meant to solve. ReentrantLock lets you do this:
final ReentrantLock m = new ReentrantLock();

void foo() {
    m.lock();
    doFooStuff();
    m.unlock();
}

void bar() {
    m.lock();
    foo();
    doAdditionalBarStuff();
    m.unlock();
}
The lock() call checks to see whether or not the calling thread already owns the lock. If the caller does not, then it first acquires the lock, waiting if necessary, and finally, before it returns it sets a count variable to 1.
Subsequent lock() calls from the same thread will see that the thread already owns the lock, and they will simply increment the counter and return.
The unlock() calls decrement the counter, and only release the lock when the count reaches zero.
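Applied to the Worker/Runner example from the question, a sketch might look like the following (field names are from the question; x, y and the random value are placeholders, and lock()/unlock() are wrapped in try/finally, which is the usual idiom). This also addresses the nesting concern: unlike a binary Semaphore, a ReentrantLock acquired in Runner.run() can be acquired again inside incrementValue() by the same thread without blocking.
import java.util.concurrent.locks.ReentrantLock;

class Worker implements Runnable {
    final ReentrantLock m = new ReentrantLock();
    int value;
    int operations;

    void incrementValue(int n) {
        m.lock();                 // reentrant: fine even if the caller already holds m
        try {
            operations++;
            value += n;
        } finally {
            m.unlock();
        }
    }

    @Override
    public void run() {
        m.lock();
        try {
            operations++;
            value = 42;           // placeholder for getRandomNum()
        } finally {
            m.unlock();
        }
    }
}

class Runner implements Runnable {
    Worker a, b;
    int x = 1, y = 2;             // placeholder increments

    @Override
    public void run() {
        a.m.lock();               // take both locks for the critical section,
        b.m.lock();               // always in the same order to avoid deadlock
        try {
            a.incrementValue(x);  // locks a.m again; allowed because it is reentrant
            b.incrementValue(y);
        } finally {
            b.m.unlock();
            a.m.unlock();
        }
    }
}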

Combination of Singleton class and volatile variable

As far as I know, volatile variables will always be read from and written to main memory. Then I thought about the Singleton class. Here is my program:
1. Singleton class
public class Singleton {
    private static Singleton sin;
    private static volatile int count;

    static {
        sin = new Singleton();
        count = 0;
    }

    private Singleton() {
    }

    public static Singleton getInstance() {
        return sin;
    }

    public String test() {
        count++;
        return ("Counted increased!" + count);
    }
}
2. Main class
public class Java {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        Derived d1 = new Derived("d1");
        d1.start();
        Derived d2 = new Derived("d2");
        d2.start();
        Derived d3 = new Derived("d3");
        d3.start();
    }
}

class Derived extends Thread {
    String name;

    public Derived(String name) {
        this.name = name;
    }

    public void run() {
        Singleton a = Singleton.getInstance();
        for (int i = 0; i < 10; i++) {
            System.out.println("Current thread: " + name + a.test());
        }
    }
}
I know this may be a dumb question, but I'm not good at multithreading in Java, so this problem confuses me a lot. I thought the static volatile int count variable in the Singleton class would always have the latest value, but apparently it does not...
Can someone help me understand this?
Thank you very much.
The problem is that volatile does not give you mutual exclusion. Even though a read from the static volatile int count would indeed always return the latest value, multiple threads may compute and write back the same new value.
Consider this scenario with two threads:
count is initialized zero
Thread A reads count, sees zero
Thread B reads count, sees zero
Thread A advances count to 1, stores 1
Thread B advances count to 1, stores 1
Thread A writes "Counted increased! 1"
Thread B writes "Counted increased! 1"
Both threads read the latest value, but since ++ is not an atomic operation, once the read is complete, each thread is on its own. Both threads independently compute the next value, and then store it back into the count variable. The net effect is that the variable is incremented only once, even though both threads performed an increment.
If you would like to increment an int from multiple threads, use AtomicInteger.
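For instance, a sketch of the Singleton from the question with the counter swapped for an AtomicInteger (same structure otherwise); incrementAndGet() performs the read-modify-write as one atomic step, so two threads can never both observe the same new value:
import java.util.concurrent.atomic.AtomicInteger;

public class Singleton {
    private static final Singleton sin = new Singleton();
    private static final AtomicInteger count = new AtomicInteger(0);

    private Singleton() {
    }

    public static Singleton getInstance() {
        return sin;
    }

    public String test() {
        // atomic read-modify-write: each call returns a distinct counter value
        return "Counted increased!" + count.incrementAndGet();
    }
}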
As Jon Skeet indicated, it would be best to use AtomicInteger. Using volatile variables reduces the risk of memory consistency errors, but it doesn't eliminate the need to synchronize compound actions such as count++.
I think this modification would help with your problem.
public synchronized String test() {
    count++;
    return ("Counted increased!" + count);
}
Reader threads are not doing any locking, and until the writer thread comes out of the synchronized block, memory will not be synchronized and the value of count will not be updated in main memory. Both threads read the same value and thus each updates it by adding one; to resolve this, make the test method synchronized.
Read more: http://javarevisited.blogspot.com/2011/06/volatile-keyword-java-example-tutorial.html#ixzz3PGYRMtgE

Synchronized data read/write to/from main memory

When a synchronized method completes, will it push only the data it modified to main memory, or all the member variables? Similarly, when a synchronized method executes, will it read only the data it needs from main memory, or will it clear all the member variables in the cache and read their values from main memory? For example:
public class SharedData {
    int a; int b; int c; int d;

    public SharedData() {
        a = b = c = d = 10;
    }

    public synchronized void compute() {
        a = b * 20;
        b = a + 10;
    }

    public synchronized int getResult() {
        return b * c;
    }
}
In the above code, assume compute() is executed by threadA and getResult() is executed by threadB. After the execution of compute(), will threadA update main memory with a and b, or with a, b, c and d? And before executing getResult(), will threadB get only the values of b and c from main memory, or will it clear the cache and fetch the values of all member variables a, b, c and d?
synchronized ensures you have a consistent view of the data. This means you will read the latest value and other caches will get the latest value. Caches are smart enough to talk to each other via a special bus (not something required by the JLS, but allowed). This bus means they don't have to touch main memory to get a consistent view.
I think the following thread should answer your question:
Memory effects of synchronization in Java
In practice, the whole cache is not flushed.
1. The synchronized keyword on a method or block locks access to the resource it can modify, by allowing only one thread at a time to hold the lock.
2. Preventing values from being cached in thread-local copies is done by the volatile keyword. Using volatile asks the JVM to make a thread that accesses the instance variable reconcile its copy with the one saved in memory.
3. Moreover, in your example, if threadA executes compute(), then threadB cannot access getResult() at the same time, because both are synchronized methods and only one thread can be in any of an object's synchronized methods at once; it is not the method that is locked but the object. Every object has one lock, and a thread that wants to enter one of its synchronized blocks must acquire that lock (a sketch of this is shown below).
4. Every class also has a lock, which is used to protect the crucial state of the class's static variables.
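Here is the sketch referred to in point 3 (the MonitorDemo harness is hypothetical, not from the question): while threadA is still inside compute(), threadB has to wait at the entry of getResult(), because both synchronized methods lock the monitor of the same SharedData instance.
public class MonitorDemo {
    public static void main(String[] args) throws Exception {
        SharedData shared = new SharedData();
        Thread threadA = new Thread(() -> {
            shared.compute();            // holds the monitor of 'shared' while running
        }, "threadA");
        Thread threadB = new Thread(() -> {
            int r = shared.getResult();  // blocks if threadA still holds the monitor
            System.out.println("threadB read " + r);
        }, "threadB");
        threadA.start();
        threadB.start();
        threadA.join();
        threadB.join();
    }
}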
Before answering your question, let's clarify a few terms related to multi-threaded environments.
Race condition: when two or more threads try to perform read or write operations on the same variable at the same time (here, "same variable" means data shared between threads). E.g. in your question, Thread-A executes b = a + 10, which is a write to b, and at the same time Thread-B can execute b * c, which is a read of b. So a race condition is happening here.
We can handle a race condition in two ways: one is by using a synchronized method or block, and the second is by using the volatile keyword.
Volatile: the volatile keyword in Java guarantees that the value of a volatile variable will always be read from main memory and not from a thread's local cache. A normal variable without the volatile keyword may be temporarily stored in a local cache for quick access and easy read/write operations. volatile doesn't block your thread; it just makes sure that writes and reads are kept in sync. In the context of your example we can avoid the race condition by making all the variables volatile.
Synchronized: synchronization is achieved by blocking threads. It uses a lock-and-key mechanism such that only one thread can execute the block of code at a time. So Thread-B waits at the door of the synchronized block until Thread-A finishes completely and releases the key. If you put synchronized on a static method, the lock is taken on your class (.class); if the method is non-static, i.e. an instance method (as in your case), the lock is taken on the instance of the class, the current object.
Now, to come to the point, let's modify your example with a few print statements, in Kotlin:
class SharedData {
    var a: Int
    var b: Int
    var c: Int
    var d = 10

    init {
        c = d
        b = c
        a = b
    }

    @Synchronized
    fun compute(): Pair<Int, Int> {
        a = b * 20
        b = a + 10
        return a to b
    }

    @Synchronized
    fun getComputationResult(): Int {
        return b * c
    }
}
@Test
fun testInstanceNotShared() {
    println("Instance Not Shared Example")
    val threadA = Thread {
        val pair = SharedData().compute()
        println("Running inside ${Thread.currentThread().name} compute And get A = ${pair.first}, B = ${pair.second} ")
    }
    threadA.name = "threadA"
    threadA.start()
    val threadB = Thread {
        println("Running inside ${Thread.currentThread().name} getComputationResult = ${SharedData().getComputationResult()}")
    }
    threadB.name = "threadB"
    threadB.start()
    threadA.join()
    threadB.join()
}

// Output
// Instance Not Shared Example
// Running inside threadB getComputationResult = 100
// Running inside threadA compute And get A = 200, B = 210
@Test
fun testInstanceShared() {
    println("Instance Shared Example")
    val sharedInstance = SharedData()
    val threadA = Thread {
        val pair = sharedInstance.compute()
        println("Running inside ${Thread.currentThread().name} compute And get A = ${pair.first}, B = ${pair.second} ")
    }
    threadA.name = "threadA"
    threadA.start()
    val threadB = Thread {
        println("Running inside ${Thread.currentThread().name} getComputationResult = ${sharedInstance.getComputationResult()}")
    }
    threadB.name = "threadB"
    threadB.start()
    threadA.join()
    threadB.join()
}

// Output
// Instance Shared Example
// Running inside threadB getComputationResult = 2100
// Running inside threadA compute And get A = 200, B = 210
From the two test cases above you can see that the answer to your question is actually hidden in the way you call those methods (compute, getComputationResult) in a multi-threaded environment.
After the execution of compute, will threadA update main memory
There is no guarantee that threadA updates the values of the variables a, b, c, d in main memory, but if you put the volatile keyword in front of those variables, it guarantees that the updated state becomes visible to other threads immediately after the modification happens.
before executing getResult will threadB get only the value of b and c from main memory or will it clear the cache and fetch values for all member variables a,b,c and d
No
In addition to this: notice that in the second test, even when the two threads call the methods at the same time, you get the expected result. Calling compute and getComputationResult concurrently still makes getComputationResult return the value updated by compute, because synchronized and volatile provide a happens-before guarantee, which makes sure the relevant writes happen before the subsequent read.

synchronisation on a block of code

I understood that synchronization of a block of code means that that particular code will be accessed by only one thread at a time, even if many threads are waiting to access it.
When we write a thread class, in the run method we start a synchronized block by giving it an object.
For example:
class MyThread extends Thread {
    String sa;

    public MyThread(String s) {
        sa = s;
    }

    public void run() {
        synchronized (sa) {
            if (sa.equals("notdone")) {
                // do some thing on object
            }
        }
    }
}
Here we gave the sa object to the synchronized block - what is the need for that? Either way we are providing synchronization for that particular block of code.
I would suggest:
implement Runnable rather than extending Thread.
Don't lock in the Runnable on an external object. Instead you should be calling a method which may use an internal lock.
String is not a good choice as a lock. It means that "hi" and "hi" will share a lock but new String("hi") will not.
If you are locking out all other threads for the life of the thread, why are you using multiple threads?
The parameter object of the synchronized block is the object on which the block locks.
Thus all synchronized blocks on the same object (and all synchronized methods of that object) exclude each other's simultaneous execution.
So if you have this example
class ExampleA extends Thread {
    private Object x;

    public ExampleA(Object l) {
        this.x = l;
    }

    public void run() {
        synchronized (x) { // <-- synchronized-block A
            // do something
        }
    }
}

class ExampleB extends Thread {
    private Object x;

    public ExampleB(Object l) {
        this.x = l;
    }

    public void run() {
        synchronized (x) { // <-- synchronized-block B
            // do something else
        }
    }
}

Object o1 = new Object();
Object o2 = new Object();
Thread eA1 = new ExampleA(o1);
Thread eA2 = new ExampleA(o2);
Thread eB1 = new ExampleB(o1);
Thread eB2 = new ExampleB(o2);
eA1.start(); eA2.start(); eB1.start(); eB2.start();
Now we have two synchronized blocks (A and B, in classes ExampleA and ExampleB), and we have two lock objects (o1 and o2).
If we now look at the simultaneous execution, we can see that:
A1 can be executed in parallel to A2 and B2, but not to B1.
A2 can be executed in parallel to A1 and B1, but not to B2.
B1 can be executed in parallel to A2 and B2, but not to A1.
B2 can be executed in parallel to A1 and B1, but not to A2.
Thus, the synchronization depends only on the parameter object, not on the choice of synchronization block.
In your example, you are using this:
synchronized (sa) {
    if (sa.equals("notdone")) {
        // do some thing on object
    }
}
It looks like you are trying to prevent someone from changing your instance variable sa to another string while you are comparing it and working on it - but it does not prevent this.
Synchronization does not work on a variable, it works on an object - and the object in question should usually be either some object which contains the variable (the current MyThread object in your case, reachable by this), or a special object used just for synchronization, and which is not changed.
As Peter Lawrey said, String objects are usually bad choices for synchronization locks, since all equal String literals are the same object (i.e. they would exclude each other's synchronized blocks), while an equal non-literal string (e.g. one created at runtime) is not the same object, and thus would not exclude synchronized blocks guarded by other such objects or literals, which often leads to subtle bugs.
All threads synchronizing on this object will wait until the current thread finishes its work. This is useful, for example, if you have read/write operations on a collection that you wish to synchronize: you can put synchronized blocks in the set and get methods. In that case, if one thread is reading, all other threads that want to either read or write will have to wait.
So the question is what is the function of the object that a block synchronizes on?
All instances of Object have what is called a monitor. In normal execution this monitor is unowned.
A thread wishing to enter a synchronized block must take possession of the object's monitor. Only one thread can possess the monitor at a time, however. So, if the monitor is currently unowned, the thread takes possession and executes the synchronized block of code. The thread releases the monitor when it leaves the synchronized block.
If the monitor is currently owned, then the thread needing to enter the synchronized block must wait for the monitor to be freed so it can take ownership and enter the block. More than one thread can be waiting and if so, then only one will be given ownership of the monitor. The rest will go back to waiting.
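Tying the answers above together, a sketch of the recommended shape (Work and Task are hypothetical names): the Runnable calls a method on a shared object whose internal, never-reassigned lock object guards the state, instead of synchronizing on a String field.
class Work {
    private final Object lock = new Object();  // dedicated lock, never reassigned
    private String state = "notdone";

    void process() {
        synchronized (lock) {                  // all threads sharing this Work exclude each other here
            if (state.equals("notdone")) {
                // ... do something on the shared state ...
                state = "done";
            }
        }
    }
}

class Task implements Runnable {               // implements Runnable, as suggested above
    private final Work work;

    Task(Work w) {
        work = w;
    }

    @Override
    public void run() {
        work.process();                        // the monitor lives inside Work, not in the Runnable
    }
}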
