Non-volatile variable value during wait() and notifyAll() calls in two threads - Java

Let's say I have two threads, A and B, and inside both of these threads I have a synchronized block in which an int variable is modified continuously.
For example, thread A enters the synchronized block, modifies the int variable, then calls these two methods:
notifyAll(); // to wake thread B, which is in the waiting state, and
wait();
After that, thread B acquires the lock, performs the same steps as thread A, and the process keeps repeating. All changes to the int variable happen inside the synchronized blocks of both threads.
My question is: do I need to make the int variable volatile? Do threads flush to main memory before they go into the waiting state, and reload the data when a thread acquires the lock again as a result of the notifyAll() call?
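The scenario described might be sketched like this (class, field, and flag names are illustrative; the turn flag is one common way to make the two threads alternate):

```java
public class PingPong {
    private final Object lock = new Object();
    private int value = 0;           // the shared int, modified only inside synchronized blocks
    private boolean evenTurn = true; // whose turn it is to run

    // Each thread calls step() with its own parity: it waits until it is its
    // turn, modifies the shared int under the lock, then wakes the other thread.
    public void step(boolean even) throws InterruptedException {
        synchronized (lock) {
            while (evenTurn != even) {
                lock.wait();      // releases the lock while waiting
            }
            value++;              // modification happens while holding the lock
            evenTurn = !even;     // hand the turn to the other thread
            lock.notifyAll();     // wake the waiting thread
        }
    }

    public int value() {
        synchronized (lock) { return value; }
    }

    public static void main(String[] args) throws InterruptedException {
        PingPong p = new PingPong();
        Thread a = new Thread(() -> {
            try { for (int i = 0; i < 5; i++) p.step(true); } catch (InterruptedException ignored) {}
        });
        Thread b = new Thread(() -> {
            try { for (int i = 0; i < 5; i++) p.step(false); } catch (InterruptedException ignored) {}
        });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(p.value()); // 10
    }
}
```

Note that wait() is called in a loop, guarding against spurious wakeups, and that both wait() and notifyAll() are invoked while holding the very lock that guards the shared int.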

If A and B run alternatively rather than concurrently, and if they switch off via wait() and notifyAll() invocations on the same Object, and if no other threads access the variable in question, then thread safety does not require the variable to be volatile.
Note that o.wait() and o.notifyAll() must be invoked inside a method or block synchronized on o -- that synchronization is sufficient to ensure that the two threads see all of each other's writes to any variable before the switch-off.
Do be careful to ensure that the two threads are synchronizing on the same object, which is not clear from your question. You have no effective synchronization at all if, say, the two threads are waiting on and notifying different instances of the same class.

The answer is no, you do not need to make the variable volatile. The reasoning: writes to a variable that occur within a synchronized block are visible to threads that subsequently enter a synchronized block on the same object.
So it has the same memory semantics as a volatile read and write.

Not sure about Java. But in C: https://www.kernel.org/doc/Documentation/volatile-considered-harmful.txt
If shared_data were declared volatile, the locking would still be
necessary. But the compiler would also be prevented from optimizing access
to shared_data within the critical section, when we know that nobody else
can be working with it. While the lock is held, shared_data is not
volatile. When dealing with shared data, proper locking makes volatile
unnecessary - and potentially harmful.


In Java, do I need to mark an int field as volatile if threads increment it only within a synchronized method?

In the following simple scenario:
class A {
    int x;
    Object lock;
    ...
    public void method() {
        synchronized (lock) {
            // modify/read x and act upon its value
        }
    }
}
Does x need to be volatile? I know that synchronized guarantees atomicity, but I am not sure about visibility: does lock -> modify -> unlock -> lock guarantee that, after the second lock, the value of x will be "fresh"?
No, it does not need to be volatile, because synchronized already has a memory barrier inserted after it, so all threads will see the update that the current thread performs, provided the other threads synchronize on the same lock.
Volatile, just like synchronized, has memory barriers attached to it; depending on the CPU, it is a store/load/full barrier that ensures that an update from one thread is visible to the other(s). I assume this is performed via CPU cache invalidation.
EDIT
From what I've just read, the store buffers are flushed to the CPU cache, and this is how the visibility is achieved.
Simplified answer: If thread A updates a field and then releases a lock, then thread B will be guaranteed to see the update after thread B has acquired the same lock.
Note, "release a lock" means exit a synchronized block, and "acquire the same lock" means synchronize on the same object.
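That guarantee might be sketched like this (class and field names are illustrative): the field needs no volatile, because every access happens under the same lock.

```java
public class Counter {
    private final Object lock = new Object();
    private int x = 0; // no volatile needed: all access goes through the same lock

    public void increment() {
        synchronized (lock) { // acquire: sees all writes published by earlier releases
            x++;
        }                     // release: publishes the write to the next acquirer
    }

    public int get() {
        synchronized (lock) {
            return x;
        }
    }
}
```

The guarantee disappears the moment any code reads or writes x without holding the lock.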

Safe multithreading in java

I am new to multithreading in Java.
I have gone through some online references but can't get clarity on how to properly implement thread concurrency and address resource access conflicts
(like where to use synchronized and volatile, and how to design code that doesn't even need them).
Can somebody suggest some guidelines, or provide any valuable online references you have come across, for implementing a safer multithreaded project?
Thanks in advance.
I didn't go through your code, but here's something important for beginning to use the synchronized and volatile keywords.
Essentially, volatile is used to indicate that a variable's value will be modified by different threads.
Declaring a volatile Java variable means:
The value of this variable will never be cached thread-locally: all reads and writes go straight to "main memory". In other words, threads make their changes directly to a (volatile) variable that other threads also have a hold on; every thread's changes are immediately reflected globally.
If a variable is not declared volatile: threads may fail to see the latest value of the variable because it has not yet been written back to main memory by another thread. This is called a "visibility" problem: the updates of one thread are not visible to other threads.
Declaring a synchronized Java variable means:
Synchronized blocks in Java are marked with the synchronized keyword and are synchronized on some object. All blocks synchronized on the same object can have only one thread executing inside them at a time. All other threads attempting to enter the synchronized block are blocked until the thread inside the block exits it.
Usage :
If you want a count variable's latest value to be visible to the threads that use it, make it volatile:
public class SharedObject {
    public volatile int counter = 0;
}
However, if you need the counter increment to be atomic (one thread at a time), make the method synchronized too:
public synchronized void add(int value) {
    this.counter += value;
}

If synchronized creates a happen-before relationship and prevents reordering why is volatile needed for DCL

I'm trying to understand the need for volatile in double-checked locking (I'm aware there are better ways than DCL, though). I've read a few SO questions similar to mine, but none seem to explain what I'm looking for. I've even found some upvoted answers on SO saying that volatile is not needed (even when the object is mutable); however, everything I've read says otherwise.
What I want to know is why volatile is necessary in DCL if synchronized creates a happens-before relationship and prevents reordering?
Here is my understanding of how DCL works and an example:
// Does not work
class Foo {
    private Helper helper = null;          // 1
    public Helper getHelper() {            // 2
        if (helper == null) {              // 3
            synchronized(this) {           // 4
                if (helper == null) {      // 5
                    helper = new Helper(); // 6
                }                          // 7
            }                              // 8
        }                                  // 9
        return helper;                     // 10
    }
}
This does not work because the Helper object is not immutable or volatile, and we know that
volatile causes every write to be flushed to memory and every read to come from memory. This is important so that no thread sees a stale object.
So in the example I listed, it's possible for Thread A to begin initializing a new Helper object at line 6. Then Thread B comes along and sees a half-initialized object at line 3. Thread B then jumps to line 10 and returns a half-initialized Helper object.
Adding volatile fixes this with a happens-before relationship, and no reordering can be done by the JIT compiler. So the Helper object cannot be written to the helper reference until it is fully constructed (at least, this is what I think it is telling me...).
However, after reading JSR-133 documentation, I became a bit confused. It states
Synchronization ensures that memory writes by a thread before or
during a synchronized block are made visible in a predictable manner
to other threads which synchronize on the same monitor. After we exit
a synchronized block, we release the monitor, which has the effect of
flushing the cache to main memory, so that writes made by this thread
can be visible to other threads. Before we can enter a synchronized
block, we acquire the monitor, which has the effect of invalidating
the local processor cache so that variables will be reloaded from main
memory. We will then be able to see all of the writes made visible by
the previous release.
So synchronized in Java creates a memory barrier and a happens-before relationship.
Since the actions are flushed to memory on release, it makes me question why volatile is needed on the variable.
The documentation also states
This means that any memory operations which were visible to a thread
before exiting a synchronized block are visible to any thread after it
enters a synchronized block protected by the same monitor, since all
the memory operations happen before the release, and the release
happens before the acquire.
My guess as to why we need the volatile keyword, and why synchronized is not enough, is that the memory operations are not visible to other threads until Thread A exits the synchronized block and Thread B enters the same block on the same lock.
It's possible that Thread A is initializing the object at line 6 and Thread B comes along at Line 3 before there is a flush by Thread A at Line 8.
However, this SO answer seems to contradict that as the synchronized block prevents reordering "from inside a synchronized block, to outside it"
If helper is not null, what ensures that the code will see all the effects of the construction of the helper? Without volatile, nothing would do so.
Consider:
synchronized(this) { // 4
if (helper == null) { // 5
helper = new Helper(); // 6
} // 7
Suppose internally this is implemented as first setting helper to a non-null value and then calling the constructor to create a valid Helper object. No rule prevents this.
Another thread may see helper as non-null but the constructor hasn't even run yet, much less made its effects visible to another thread.
It is vital not to permit any other thread to see helper set to a non-null value until we can guarantee that all consequences of the constructor are visible to that thread.
By the way, getting code like this correct is extremely difficult. Worse, it can appear to work fine 100% of the time and then suddenly break on a different JVM, CPU, library, platform, or whatever. It is generally advised that writing this kind of code be avoided unless proven to be needed to meet performance requirements. This kind of code is hard to write, hard to understand, hard to maintain, and hard to get right.
@David Schwartz's answer is pretty good, but there is one thing that I'm not sure is stated well.
My guess as to why we need the volatile keyword and why synchronize is not enough, is because the memory operations are not visible to other threads until Thread A exits the synchronized block and Thread B enters the same block on the same lock.
Actually, not the same lock but any lock, because locks come with memory barriers. volatile is not about locking; it is about crossing memory barriers, while synchronized blocks are both locks and memory barriers. You need the volatile because even though Thread A has properly initialized the Helper instance and published it to the helper field, Thread B also needs to cross a memory barrier to ensure that it sees all of the updates to Helper.
So in the example I listed, it's possible for Thread A to begin initializing a new Helper object at Line 6. Then Thread B comes along and see a half initialized object at line 3. Thread B then jumps to line 10 and returns a half initialized Helper object.
Right. It is possible that Thread A might initialize the Helper and publish it before it hits the end of the synchronized block. There is nothing stopping that from happening. And because the JVM is allowed to reorder the instructions from the Helper constructor until later, it could be published to the helper field without being fully initialized. And even if Thread A does reach the end of the synchronized block and Helper then gets fully initialized, there is still nothing that ensures that Thread B sees all of the updated memory.
However, this SO answer seems to contradict that as the synchronized block prevents reordering "from inside a synchronized block, to outside it"
No, that answer is not contradictory. You are confusing what happens with just Thread A and what happens to other threads. In terms of Thread A (and central memory), exiting the synchronized block makes sure that Helper's constructor has fully finished and published to the helper field. But this means nothing until Thread B (or other threads) also cross a memory barrier. Then they too will invalidate the local memory cache and see all of the updates.
That's why the volatile is necessary.
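For completeness, the conventional fix is simply to declare the field volatile; the volatile write at line 6 then happens-before any volatile read that observes the non-null value (a sketch; the local variable is a common refinement that avoids a second volatile read, and Helper stands in for the real class):

```java
class Foo {
    private volatile Helper helper = null; // volatile is the key change

    public Helper getHelper() {
        Helper h = helper;            // single volatile read into a local
        if (h == null) {
            synchronized (this) {
                h = helper;           // re-check under the lock
                if (h == null) {
                    helper = h = new Helper(); // volatile write publishes a fully constructed object
                }
            }
        }
        return h;
    }
}

class Helper {} // stand-in for the real class
```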

Do variables accessed within a synchronized block need to be declared volatile?

In an example like this:
...
public void foo() {
    ...
    synchronized (lock) {
        varA += some_value;
    }
    ...
}
...
The question is: must varA be declared volatile in order to prevent per-thread caching, or is it enough to access it only within synchronized blocks?
Thanks!
No, you don't need to.
synchronized blocks imply a memory barrier.
From JSR-133:
But there is more to synchronization than mutual exclusion. Synchronization ensures that memory writes by a thread before or during a synchronized block are made visible in a predictable manner to other threads which synchronize on the same monitor. After we exit a synchronized block, we release the monitor, which has the effect of flushing the cache to main memory, so that writes made by this thread can be visible to other threads. Before we can enter a synchronized block, we acquire the monitor, which has the effect of invalidating the local processor cache so that variables will be reloaded from main memory. We will then be able to see all of the writes made visible by the previous release.
As long as every access to it happens within a synchronized block on the same object, you are fine.
There is a memory barrier associated with each synchronized block that will make sure the variables accessed within are exposed correctly.

Do static array variables need to be locked?

So let's say that I have a static variable, which is an array of size 5.
And let's say I have two threads, T1 and T2, that both try to change the element at index 0 of that array, and then use the element at index 0.
In this case, I should lock the array until T1 is finished using the element, right?
Another question: let's say T1 and T2 are already running, and T1 accesses the element at index 0 first, then locks it. Right after that, T2 tries to access the element at index 0, but T1 hasn't unlocked index 0 yet. In this case, what should T2 do in order to access the element at index 0? Should T2 use a callback function invoked after T1 unlocks index 0 of the array?
Synchronization in Java is (technically) not about refusing other threads access to an object; it is about ensuring unique usage of it (at one time) between threads, using synchronization locks. So T2 can access the object while T1 holds the synchronization lock, but it will be unable to obtain the synchronization lock until T1 releases it.
You synchronize (lock) when you're going to have multiple threads accessing something.
The second thread is going to block until the first thread releases the lock (exits the synchronized block)
More fine-grained control can be had by using java.util.concurrent.locks and using non-blocking checks if you don't want threads to block.
1) Basically, yes. You needn't necessarily lock the array, you could lock at a higher level of granularity (say, the enclosing class if it were a private variable). The important thing is that no part of the code tries to modify or read from the array without holding the same lock. If this condition is violated, undefined behaviour could result (including, but not limited to, seeing old values, seeing garbage values that never existed, throwing exceptions, and going into infinite loops).
2) This depends partly on the synchronization scheme you're using, and your desired semantics. With the standard synchronized keyword, T2 would block indefinitely until the monitor is released by T1, at which point T2 will acquire the monitor and continue with the logic inside the synchronized block.
If you want finer-grained control over the behaviour when a lock is contended, you could use explicit Lock objects. These offer tryLock methods (both with a timeout, and returning immediately) which return true or false according to whether the lock could be obtained. Thus you could then test the return value and take whatever action you like if the lock isn't immediately obtained (such as registering a callback function, incrementing a counter and giving feedback to a user before trying again, etc.).
However, this custom reaction is seldom necessary, and notably increases the complexity of your locking code, not to mention the large possibility of mistakes if you forget to always release the lock in a finally block if and only if it was acquired successfully, etc. As a general rule, just go with synchronized unless/until you can show that it's providing a significant bottleneck to your application's required throughput.
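The tryLock pattern mentioned above might look like this (class name, timeout, and fallback behavior are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class GuardedArray {
    private final ReentrantLock lock = new ReentrantLock();
    private final int[] data = new int[5];

    // Returns true if the update happened, false if the lock stayed contended
    // past the timeout and we gave up.
    public boolean trySet(int index, int value) throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                data[index] = value;
                return true;
            } finally {
                lock.unlock(); // always release in finally, and only if acquired
            }
        }
        return false; // lock not obtained: take an alternative action here
    }

    public int get(int index) {
        lock.lock();
        try {
            return data[index];
        } finally {
            lock.unlock();
        }
    }
}
```

A caller can test the boolean result and register a callback, retry, or report failure, which is exactly the custom-reaction behavior a plain synchronized block cannot express.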
I should lock the array until T1 is finished using the element right?
Yes, to avoid race conditions, that would be a good idea.
what should T2 do
Lock the array, then read the value. At that point you know no one else can modify it. When using locks such as monitors, a queue is automatically maintained by the system. Hence, if T2 tries to access an object locked by T1, it will block (wait) until T1 releases the lock.
Sample code:
private Object[] array;
private static final Object lockObject = new Object();

public void modifyObject() {
    synchronized (lockObject) {
        // read or modify the objects
    }
}
Technically you could also synchronize on the array itself.
You don't lock a variable; you lock a mutex, which protects a specific range of code. And the rule is simple: if any thread modifies an object, and more than one thread accesses it (for any reason), all accesses must be fully synchronized. The usual solution is to define a mutex to protect the variable, request a lock on it, and free the lock once the access has finished. When a thread requests a lock, it is suspended until that lock has been freed.
In C++, it is usual to use RAII to ensure that the lock is freed, regardless of how the block is exited. In Java, a synchronized block will acquire the lock at the start (waiting until it is available) and release the lock when the program leaves the block (for whatever reason).
Have you considered using AtomicReferenceArray? http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/atomic/AtomicReferenceArray.html It provides a getAndSet method, which offers a thread-safe, atomic way to update indexes.
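A brief sketch of AtomicReferenceArray usage (the element type and values are illustrative):

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

public class AtomicArrayDemo {
    public static void main(String[] args) {
        AtomicReferenceArray<String> array = new AtomicReferenceArray<>(5);

        array.set(0, "first");                     // volatile-style write to index 0
        String old = array.getAndSet(0, "second"); // atomically swap, returning the old value
        System.out.println(old);                   // first
        System.out.println(array.get(0));          // second

        // compareAndSet succeeds only if the current value matches the expected one
        boolean swapped = array.compareAndSet(0, "second", "third");
        System.out.println(swapped);               // true
    }
}
```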
T1 access element at index 0 first, then lock it.
Lock first on a static final mutex variable, then access your static variable.
static final Object lock = new Object();

synchronized (lock) {
    // access static reference
}
or, better, synchronize on the class reference:
synchronized (YourClassName.class) {
    // access static reference
}
