visibility guarantees of synchronized and volatile [duplicate] - java

I read this in an upvoted comment on StackOverflow:
But if you want to be safe, you can add simple synchronized(this) {}
at the end of your @PostConstruct [method]
[note that variables were NOT volatile]
I was thinking that happens-before is enforced only if both the write and the read are executed in synchronized blocks, or at least the read is of a volatile variable.
Is the quoted sentence correct? Does an empty synchronized(this) {} block flush all variables changed in the current method to "generally visible" memory?
Please consider some scenarios:
what if the second thread never locks on this? (suppose that the second thread reads in other methods). Remember that the question is about flushing changes to other threads, not about giving other threads a way (synchronized) to poll changes made by the original thread. Also, no synchronization in other methods is very likely in a Spring @PostConstruct context - as the original comment says.
is memory visibility of the changes forced only in the second and subsequent calls by another thread? (remember that this synchronized block is the last call in our method) - this would mark this way of synchronization as very bad practice (stale values in the first call)

Much of what's written about this on SO, including many of the answers/comments in this thread, is, sadly, wrong.
The key rule in the Java Memory Model that applies here is: an unlock operation on a given monitor happens-before a subsequent lock operation on that same monitor. If only one thread ever acquires the lock, the rule guarantees nothing. If the VM can prove that the lock object is thread-confined, it can elide any fences it might otherwise emit.
The quote you highlight assumes that releasing a lock acts as a full fence. And sometimes that might be true, but you can't count on it. So your skeptical questions are well-founded.
See Java Concurrency in Practice, Ch 16 for more on the Java Memory Model.

All writes that occur prior to a monitor exit are visible to all threads after a monitor enter.
A synchronized(this){} can be turned into bytecode like
monitorenter
monitorexit
So if you have a bunch of writes prior to the synchronized(this){} they would have occurred before the monitorexit.
This brings us to the next point of my first sentence.
visible to all threads after a monitor enter
So now, in order for a thread to ensure the writes occurred, it must perform the same synchronization, i.e. synchronized(this){}. This will issue at the very least a monitorenter and establish your happens-before ordering.
So to answer your question
Does an empty synchronized(this) {} block flush all variables changed
in the current method to "generally visible" memory?
Yes, as long as you maintain the same synchronization when you want to read those non-volatile variables.
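To make that concrete, here is a minimal sketch (the class and field names are invented for illustration): the initializing thread publishes its writes by exiting the empty synchronized(this) block, and a reading thread only gets the happens-before guarantee if it also synchronizes on the same object before reading.
public class Config {
    private int threshold;      // deliberately NOT volatile

    public void init() {        // imagine this is the @PostConstruct method
        threshold = 42;         // plain write
        synchronized (this) { } // monitorexit: the write above happens-before a later lock of 'this'
    }

    public int readUnsafe() {
        return threshold;       // no synchronization: may observe a stale value
    }

    public int readSafe() {
        synchronized (this) {   // monitorenter on the same object: guaranteed to see init()'s write
            return threshold;
        }
    }
}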
To address your other questions
what if the second thread never locks on this? (suppose that the second
thread reads in other methods). Remember that the question is about
flushing changes to other threads, not about giving other threads a way
(synchronized) to poll changes made by the original thread. Also, no
synchronization in other methods is very likely in a Spring @PostConstruct context
Well, in this case using synchronized(this) without any other context is relatively useless. There is no happens-before relationship, and in theory it's just as useful as not including it.
is memory visibility of the changes forced only in the second and
subsequent calls by another thread? (remember that this synchronized
block is the last call in our method) - this would mark this way of
synchronization as very bad practice (stale values in the first call)
Memory visibility is forced by the first thread calling synchronized(this), in that its writes must be committed before the monitor is released. However, this doesn't necessarily mean each reading thread reads directly from memory; they can still read from their own processor caches. Having a reading thread also call synchronized(this) ensures it pulls the value of the field(s) from memory and retrieves the most up-to-date value.


Is the volatile modifier required when working with locks to guarantee memory visibility?
Trying to fully understand concurrency, memory visibility and execution control, I came across several sources saying that variables updated in synchronized blocks do not require the field to be volatile (mostly with no sources given, and actually one page saying synchronized methods and volatile fields need to be used in conjunction).
When approaching the JLS, chapter 17.4.5, I found:
Two actions can be ordered by a happens-before relationship. If one action
happens-before another, then the first is visible to and ordered before the second.
Is this the section which says that subsequent synchronized method calls guarding the same variable will ensure it is visible to the second thread? If this is the case, does the same hold true for locks, since we can also guarantee an order?
On the other hand, what happens when suddenly we have write locks allowing 2 threads to access the field? Does the entire construct collapse, and are threads never guaranteed to update their cache, even if the variable is unlocked?
In short, in code:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

int field; // volatile not needed because we have a definite happens-before relationship
Lock lock = new ReentrantLock();

void update() {
    // No matter how many threads access this method, they will always
    // have the most up-to-date field value to work with.
    lock.lock();
    try {
        field *= 2;
    } finally {
        lock.unlock();
    }
}
From the API documentation for Lock:
https://docs.oracle.com/javase/10/docs/api/java/util/concurrent/locks/Lock.html
All Lock implementations must enforce the same memory synchronization
semantics as provided by the built-in monitor lock, as described in
Chapter 17 of The Java™ Language Specification:
A successful lock operation has the same memory synchronization effects as a successful Lock action.
A successful unlock operation has the same memory synchronization effects as a successful Unlock action.
Unsuccessful locking and unlocking operations, and reentrant
locking/unlocking operations, do not require any memory
synchronization effects.
That's a little unclear IMO, but the gist of it is that yes, Lock is required to work the same way as a monitor (which is what the synchronized keyword uses), and therefore your example always makes the most recent update of field visible without explicitly using the volatile keyword.
P.S. Get Brian Goetz's Java Concurrency in Practice, it explains all of this stuff in a lot more detail. It's basically the bible of all things concurrency in Java.
...and actually one page saying synchronized methods and volatile fields need to be used in conjunction.
You can distill everything you need to know about memory visibility and synchronized blocks down to one simple rule. That is, whatever thread A does to shared variables and objects before it exits from a synchronized (o) {...} block is guaranteed to become visible to thread B by the time thread B enters a synchronized (o) {...} block for the same object, o.
And, as @markspace already said, any implementation of java.util.concurrent.locks.Lock is required to work in the same way.
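Here is a minimal sketch of that rule, with hypothetical names: thread A runs producer(), thread B runs consumer(), and both synchronize on the same object o.
class Publisher {
    private final Object o = new Object();
    private int data;       // intentionally not volatile
    private boolean ready;

    void producer() {       // run by thread A
        synchronized (o) {
            data = 42;
            ready = true;
        }                   // exiting the block publishes the writes
    }

    void consumer() {       // run by thread B
        synchronized (o) {  // entering a block on the same 'o' makes the writes visible
            if (ready) {
                System.out.println(data); // guaranteed to print 42, not a stale 0
            }
        }
    }
}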
Is the volatile modifier required when working with locks to guarantee memory visibility?
A volatile variable only guarantees memory visibility, not atomicity. This is one of the main differences between volatile and a synchronized block in Java. So when you use synchronized blocks, the variables do not have to be volatile. But if your variable is volatile and you perform any compound action on it (such as a read-modify-write), then you need to guard the update to the volatile variable with a lock.
Is this the section which says that subsequent synchronized method calls guarding the same variable will ensure it to be visible to the second thread? If this is the case does the same hold true for locks since we can also guarantee an order?
Yes. Because locks will give you both visibility and atomicity.
On the other hand, what happens when suddenly we have write locks allowing 2 threads to access the field? Does the entire construct collapse, and are threads never guaranteed to update their cache, even if the variable is unlocked?
If you guard the update to the variable with the same lock, only one thread can work on that variable at any given time, so consistency is guaranteed. But if you use a different lock each time to guard that variable, then more than one thread can modify the variable at the same time; each update may still be atomic under its own lock, but there is no happens-before ordering between the threads, so the variable's state can still become inconsistent.
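To illustrate the earlier point about compound actions on a volatile variable, here is a hedged sketch (the class and field names are made up): a volatile counter is visible to all threads, but count++ is a read-modify-write compound action, so it still needs a lock (or an AtomicInteger) to avoid lost updates.
class Stats {
    private volatile int count;               // visible to all threads, but ++ is not atomic
    private final Object lock = new Object();

    void unsafeIncrement() {
        count++;                              // read-modify-write: two racing threads can lose an update
    }

    void safeIncrement() {
        synchronized (lock) {                 // the lock makes the compound action atomic
            count++;
        }
    }

    int current() {
        return count;                         // a plain read is fine thanks to volatile
    }
}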

Background Thread doesn't work, but with a simple 'System.out' works

A strange thing. The code below works: if the condition desiredHealth < player.getFakeHealth() is true, it DOES SOMETHING.
@Override
public void run() {
    while (game_running) {
        System.out.println("asd");
        if (desiredHealth < player.getFakeHealth()) {
            // DOES SOMETHING
        }
    }
}
BUT... without the 'System.out' line it does not work. It doesn't seem to check the condition.
It is somehow at a lower priority, or something.
@Override
public void run() {
    while (game_running) {
        if (desiredHealth < player.getFakeHealth()) {
            // DOES SOMETHING
        }
    }
}
I'm new to threads, so please don't shout at me :)
Just for info, this thread is a normal class which extends Thread, and yes, it is running. Also, game_running is true all the time.
The variable must be volatile, because the volatile keyword indicates that a value may change between different accesses, even if it does not appear to be modified.
So, be sure game_running is declared volatile.
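A minimal sketch of what those declarations could look like (field names taken from the question, everything else assumed):
private volatile boolean game_running = true; // written by another thread, read by this loop
private volatile int desiredHealth;           // also shared between threads, so also volatile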
Explanation:
Ahh, I have seen this on an older SO question. I'm gonna try to find it for further information.
Your problem is happening because the print call blocks the current thread for a moment, so the desiredHealth and player.getFakeHealth() values get a chance to be updated by the other thread before the comparison runs, and voilà! Magic happens. System.out.println is internally synchronized, so when you print, the rest of the operations wait for the println call to complete.
Resolution:
We don't have enough context (who initializes the player, who makes the changes, and so on), but it's obvious that you have a threading issue: something is not properly synchronized and your background thread works with stale values. One likely reason is that some variables are not volatile, and if your background thread reads a cached value, you already have a problem.
One of the topics you need to study regarding concurrency is the Java memory model (that's the official spec but I suggest you read a tutorial or a good book, as the spec is rather complicated for beginners).
One of the issues when different threads work with the same memory (use the same variable - e.g. when one is writing into a variable and the other makes decisions based on its value) is that, for optimization reasons, the values written by one thread are not always seen by the other.
For example, one thread could run on one CPU, and that variable is loaded into a register in that CPU. If it needed to write it back to main memory all the time, it would slow processing. So it manipulates it in that register, and only writes it back to memory when it's necessary. But what if another thread is expecting to see the values the first thread is writing?
In that case, it won't see them until they are written back, which may never happen.
There are several ways to ensure that write operations are "committed" to memory before another thread needs to use them. One is to use synchronization, another is to use the volatile keyword.
System.out.println() in fact includes a synchronized operation, so it may cause such variables to be committed to memory, and thus enable the thread to see the updated value.
Declaring the variable as volatile means that any changes in it are seen by all the other threads immediately. So using volatile variables is also a way of ensuring that they are seen.
The variable that is used to decide whether to keep the thread running should normally be declared volatile. But also, in your case, the variables desiredHealth (if it's written by a different thread) and whatever variables getFakeHealth() relies on (if they are written by a different thread) should be volatile or otherwise synchronized.
The bottom line is that whatever information is shared between two threads needs to be synchronized or at the very least use volatile. Information that is not shared can be left alone.

Volatile and synchronized keywords in Java [duplicate]

Possible Duplicate:
Difference between volatile and synchronized in JAVA (j2me)
I am a bit confused by the two Java keywords synchronized and volatile.
From what I understand, since Java is a multi-threaded language, using the keyword synchronized forces the code to be executed by only one thread at a time. Am I correct?
And does volatile do the same thing?
Both volatile and synchronized guarantee visibility, but synchronized also provides atomicity:
if one thread reads a volatile variable, it is guaranteed to see the previous writes to that same variable, including those done by other threads
synchronized blocks give the same guarantee (provided the write and the read are done holding the same monitor), but also provide atomicity: all instructions within a synchronized block will look atomic from another thread synchronized on the same lock.
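As a hedged sketch of the visibility half of that distinction (the names are invented): a worker loop stopped by another thread via a volatile flag, which is sufficient here because setting and reading a boolean is not a compound action.
class Worker implements Runnable {
    private volatile boolean running = true; // visibility is all we need for a simple flag

    public void stop() {
        running = false;                     // written by one thread...
    }

    @Override
    public void run() {
        while (running) {                    // ...and reliably seen by the worker thread
            // do a unit of work
        }
    }
}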
For long and double, read and write operations are not atomic by default.
volatile is a modifier you can put on a variable to make its read and write operations atomic (including for long and double).
The article Atomic Access has more information about it.
Basically, it makes sure that if one thread changes the value of the variable, the rest of the threads will see that change and will not be able to read it while it is being written.
Java multi-threading involves two problems: ensuring that multiple operations can be done consistently, without mixing actions by different threads, and making a change in a variable's value visible to threads other than the one doing the change.
In reality, a variable does not necessarily exist at a single location in the hardware. There may be copies in the internal state of different threads, or in different hardware caches. Simply assigning to a variable only changes its value from the point of view of the thread doing the assignment.
If the variable is marked "volatile" other threads will get the changed value.
"synchronized" also ensures changes become visible. Specifically, any change done in one thread before the end of a synchronized block will be visible to reads done by another thread in a subsequent block synchronized on the same object.
In addition, blocks that are synchronized on the same object are forced to run sequentially, not in parallel. That allows one to do things like adding one to a variable, knowing that its value will not change between reading the old value and writing the new one. It also allows consistent changes to multiple variables.
The best way I know to learn what is needed to write solid concurrent code in Java is to read Java Concurrency in Practice
Essentially, volatile is used to indicate that a variable's value will be modified by different threads.
Synchronized is a keyword whose overall purpose is to only allow one thread at a time into a particular section of code thus allowing us to protect, for example, variables or data from being corrupted by simultaneous modifications from different threads.
It is as complicated as the lengthy description in the Java Memory Model suggests.
Synchronized means that only one thread can access the method or code block at a given time.
Volatile handles the communication between threads. Here is a nice explanation: http://jeremymanson.blogspot.be/2008/11/what-volatile-means-in-java.html
The volatile keyword makes every read or write of the variable an atomic operation on its memory location. If you do not use volatile, that variable may be cached by threads, and threads may read or write the cached copy instead of the actual value.

What is synchronized statement used for?

What is the usage of synchronized statements?
These are used when you are building a program with many "threads". When main starts, it starts with one thread, which executes the steps in sequence. You can start many more threads, which can then execute code at the same time. If you're executing the same code at the same time, things might behave in ways you don't want:
y = x + 20;
// at this moment, before the next instruction starts, some other thread performs
// the above step, which sets 'y' (an object property) to something different.
int b = y + 10; // y is no longer x + 20 here, so b is not x + 30 as you might expect.
What you want to do is put a 'lock' over this block of code, to make sure that no other thread can start executing any code that is "synchronized on" the variable y.
synchronized (y) {
    y = x + 20;
    int b = y + 10;
} // lock gets released here
Now, all other threads have to wait for whichever thread got there first to exit the block and release the lock, at which point another thread grabs the lock, enters the block of code, executes it, and releases the lock. Note that y has to be an object (Integer), not a primitive type.
You can also add 'synchronized' to methods, which synchronizes on 'this' (the instance object), or the class in the case of a static method.
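For illustration, here is a small sketch (the class and method names are made up) of that equivalence:
class Account {
    private int balance;

    // A synchronized instance method...
    synchronized void deposit(int amount) {
        balance += amount;
    }

    // ...is equivalent to synchronizing on 'this' inside the method.
    void depositEquivalent(int amount) {
        synchronized (this) {
            balance += amount;
        }
    }

    // A synchronized static method locks the Class object instead,
    // i.e. the same as synchronized (Account.class) { ... }
    static synchronized void logAccess() {
    }
}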
Writing multi-threaded code is hard, because of problems like this. Synchronization is one tool, though it has one major problem - deadlocks. There is a lot of information online about deadlocks.
It is Java's built-in form of mutual exclusion, used for multithreaded applications.
Sun concurrency tutorial
This has a section about synchronized, but you should read the whole thing if you are trying to use multithreaded applications.
Wiki mutex
It creates a section of code which, with respect to two or more threads, can (a) only be executed by one thread at a time, and (b) forms a memory barrier.
While understanding the concept of mutual-exclusion preventing concurrent execution of the code is quite easy, equally important is the memory barrier.
A memory barrier forms a "happens before" relationship between two threads. Any changes to memory made by a thread before it releases a lock are guaranteed to be observed by another thread after that thread acquires the same lock. Due to the effects of CPU caches and their interaction with main memory, this is critical to preventing observation and update of stale cached values and preventing race conditions between threads.
Only 1 thread at a time can access a synchronized block.
This is a basic language construct. If you're not at all familiar with it you'll need to review.
Invoking a synchronized instance method of an object acquires a lock on the object, and invoking a synchronized static method of a class acquires a lock on the class. A synchronized statement can be used to acquire a lock on any object, not just this object, when executing a block of code in a method. This block is referred to as a synchronized block. The general form of a synchronized statement is as follows:
 
synchronized (expr) {
    statements;
}
 
The expression expr must evaluate to an object reference. If the object is already locked by another thread, the thread is blocked until the lock is released. When a lock is obtained on the object, the statements in the synchronized block are executed, and then the lock is released.

Disadvantage of synchronized methods in Java

What are the disadvantages of making a large Java non-static method synchronized? Large in the sense that it takes 1 to 2 minutes to complete execution.
If you synchronize the method and try to call it twice at the same time, one thread will have to wait two minutes.
This is not really a question of "disadvantages". Synchronization is either necessary or not, depending on what the method does.
If it is critical that the code runs only once at the same time, then you need synchronization.
If you want to run the code only once at the same time to preserve system resources, you may want to consider a counting Semaphore, which gives more flexibility (such as being able to configure the number of concurrent executions).
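A minimal sketch of that idea, assuming a hypothetical expensiveComputation() method, using java.util.concurrent.Semaphore to cap the number of concurrent executions at, say, two:
import java.util.concurrent.Semaphore;

class ThrottledTask {
    // Allow at most two threads to run the expensive section at once.
    private final Semaphore permits = new Semaphore(2);

    void runThrottled() throws InterruptedException {
        permits.acquire();
        try {
            expensiveComputation(); // hypothetical long-running work
        } finally {
            permits.release();
        }
    }

    private void expensiveComputation() {
        // placeholder for the 1-2 minute computation
    }
}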
Another interesting aspect is that synchronization can only really be used to control access to resources within the same JVM. If you have more than one JVM and need to synchronize access to a shared file system or database, the synchronized keyword is not at all sufficient. You will need to get an external (global) lock for that.
If the method takes on the order of minutes to execute, then it may not need to be synchronized at such a coarse level, and it may be possible to use a more fine-grained system, perhaps by locking only the portion of a data structure that the method is operating on at the moment. Certainly, you should try to make sure that your critical section isn't really 2 minutes long - any method that takes that long to execute (regardless of the presence of other threads or locks) should be carefully studied as a candidate for parallelization. For a computation this time-consuming, you could be acquiring and releasing hundreds of locks and still have it be negligible. (Or, to put it another way, even if you need to introduce a lot of locks to parallelize this code, the overhead probably won't be significant.)
Since your method takes a huge amount of time to run, the relatively tiny amount of time it takes to acquire the synchronized lock should not be important.
A bigger problem could appear if your program is multithreaded (which I'm assuming it is, since you're making the method synchronized) and more than one thread needs to access that method: it could become a bottleneck. To prevent this, you might be able to rewrite the method so that it does not require synchronization, or use a synchronized block to reduce the size of the protected code (in general, the smaller the amount of code protected by the synchronized keyword, the better).
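As a rough sketch of that last suggestion (the class, method and field names are invented), only the small shared-state update is guarded rather than the whole long-running method:
class Report {
    private final Object resultLock = new Object();
    private long lastResult;

    void process() {
        long result = crunchNumbers(); // long-running work, no lock held

        synchronized (resultLock) {    // only the shared-state update is protected
            lastResult = result;
        }
    }

    private long crunchNumbers() {
        // placeholder for the expensive, thread-confined computation
        return 42L;
    }
}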
You can also look at the java.util.concurrent classes, as you may find a better solution there as well.
If the object is shared by multiple threads, and one thread tries to call the synchronized method on the object while another's call is in progress, it will be blocked for 1 to 2 minutes. In the worst case, you could end up with a bottleneck where the throughput of your system is dominated by executing these computations one at a time.
Whether this is a problem or not depends on the details of your application, but you probably should look at more fine-grained synchronization ... if that is practical.
In short, two disadvantages of synchronized methods in Java:
They increase the waiting time of threads
They create performance problems
The first drawback is that threads blocked waiting to execute synchronized code can't be interrupted. Once they're blocked, they're stuck there until they get the lock for the object the code is synchronizing on.
The second drawback is that a synchronized block must be contained within a single method; in other words, we can't start a synchronized block in one method and end it in another, for obvious reasons.
The third drawback is that we can't test whether an object's intrinsic lock is available, or find out any other information about the lock; also, if the lock isn't available, we can't time out after waiting for it for a while. When we reach the beginning of a synchronized block, we can either get the lock and continue executing, or block at that line of code until we get the lock.
The fourth drawback is that if multiple threads are waiting to get the lock, it's not first come, first served. There is no set order in which the JVM will choose the next thread that gets the lock, so the first thread that blocked could be the last thread to get it, and vice versa.
So instead of using synchronization, we can prevent thread interference using classes that implement the java.util.concurrent.locks.Lock interface.
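A hedged sketch of how java.util.concurrent.locks.ReentrantLock addresses those points (the guarded work is a placeholder): it supports timed and interruptible lock attempts and can be constructed with a fair ordering policy.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class Resource {
    // 'true' requests fair ordering of waiting threads (addresses the fourth drawback).
    private final ReentrantLock lock = new ReentrantLock(true);

    void useWithTimeout() throws InterruptedException {
        // Wait at most one second instead of blocking forever (first and third drawbacks).
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // guarded work goes here
            } finally {
                lock.unlock();
            }
        } else {
            // lock not available: back off, log, or retry later
        }
    }
}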
