How to lock multiple resources in Java multithreading

I have a requirement to lock several objects in one method of my Java class. For example, look at the following class:
public class CounterMultiplexer {

    private int counter = 0;
    private int multiPlexer = 5;
    private Object mutex = new Object();

    public void calculate() {
        synchronized (mutex) {
            counter++;
            multiPlexer = multiPlexer * counter;
        }
    }

    public int getCounter() {
        return counter;
    }

    public int getMux() {
        return multiPlexer;
    }
}
In the above code, I have two resources that could be accessed by more than one thread: the counter and multiPlexer fields. As you can see, I have locked both resources using a mutex.
Is this way of locking correct? Do I need to use nested synchronized statements to lock both resources inside the calculate method?

So you've got the idea of a mutex (and atomicity) correct. However, there's an additional wrinkle in the Java memory model that you have to take into consideration: visibility.
Basically, both reads and writes must be synchronized, or the read is not guaranteed to see the write. For your getters, it would be very easy for the JIT to hoist those values into a register and never re-read them, meaning the value written would never be seen. This is called a data race because the order of the write and the read cannot be guaranteed.
To break the data race, you have to use memory ordering semantics. This boils down to synchronizing both the reads and the writes. And you have to do this every time you need to use synchronization anywhere, not just in the specific case you have above.
You could use almost any method (like AtomicInteger) but probably the easiest is either to re-use the mutex you already have, or to make the two primitive values volatile. Either works, but you must use at least one.
public class CounterMultiplexer {

    private int counter = 0;
    private int multiPlexer = 5;
    private Object mutex = new Object();

    public void calculate() {
        synchronized (mutex) {
            counter++;
            multiPlexer = multiPlexer * counter;
        }
    }

    public int getCounter() {
        synchronized (mutex) {
            return counter;
        }
    }

    public int getMux() {
        synchronized (mutex) {
            return multiPlexer;
        }
    }
}
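For comparison, here is a rough sketch of the other option mentioned above: leave calculate() synchronized (the compound update still has to be atomic) and make the two primitives volatile so plain, unsynchronized reads are still guaranteed to see the latest writes. Note that two separate getter calls can still observe values from two different updates, exactly as with synchronized getters.

public class CounterMultiplexer {

    private volatile int counter = 0;
    private volatile int multiPlexer = 5;
    private final Object mutex = new Object();

    public void calculate() {
        synchronized (mutex) {    // still needed: the read-modify-write of both fields must stay atomic
            counter++;
            multiPlexer = multiPlexer * counter;
        }
    }

    public int getCounter() {     // plain volatile read: guaranteed to see the latest write
        return counter;
    }

    public int getMux() {
        return multiPlexer;
    }
}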
So to get into this more, we have to read the spec. You can also get Brian Goetz's Java Concurrency in Practice, which I highly recommend because he covers this sort of thing in detail and with simple examples that make it very clear that you must synchronize both reads and writes, always.
The relevant section of the spec is Chapter 17, and in particular section 17.4 Memory Model.
Just to quote the relevant parts:
The Java programming language memory model works by examining each read in an execution trace and checking that the write observed by that read is valid according to certain rules.
That bit is important. Each read is checked. The model doesn't work by checking the writes alone and then assuming the reads can see the write.
Two actions can be ordered by a happens-before relationship. If one action happens-before another, then the first is visible to and ordered before the second.
The happens-before is what allows reads to see a write. Without it, the JVM is free to optimize your program in ways that might preclude seeing the write (like hoisting a value into a register).
The happens-before relation defines when data races take place.
A set of synchronization edges, S, is sufficient if it is the minimal set such that the transitive closure of S with the program order determines all of the happens-before edges in the execution. This set is unique.
It follows from the above definitions that:
An unlock on a monitor happens-before every subsequent lock on that monitor.
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
So happens-before defines when a data race does (or does not) take place. How volatile works is, I think, obvious from the description above. For a monitor (your mutex), it's important to note that happens-before is established by an unlock followed by a later lock, so to establish happens-before for the read, you do need to lock the monitor again just before the read.
We say that a read r of a variable v is allowed to observe a write w to v if, in the happens-before partial order of the execution trace:
r is not ordered before w (i.e., it is not the case that hb(r, w)), and
there is no intervening write w' to v (i.e. no write w' to v such that hb(w, w') and hb(w', r)).
Informally, a read r is allowed to see the result of a write w if there is no happens-before ordering to prevent that read.
"Allowed to observe" means the read actually will see the write. So happens-before is what we need to see the write, and either the lock (mutex in your program) or volatile will work.
There's lots more (other things cause happens-before), and there's the API too, with classes in java.util.concurrent that also provide memory ordering (and visibility) semantics. But those are the gory details on your program.

No, you don't need to use nested synchronized statements to lock both resources inside the calculate method. But you do need to add a synchronized block in the get methods as well; synchronization is needed both for reading from and writing to the resource.
public int getCounter() {
    synchronized (mutex) {
        return counter;
    }
}

public int getMux() {
    synchronized (mutex) {
        return multiPlexer;
    }
}

It is fine (better, even) to use just a single mutex to protect both fields. The monitor object really has nothing to do with the fields or the object that holds them. In fact, it is good practice to use dedicated lock objects (instead of, say, this). You just have to make sure that all access to these fields ends up using the same monitor.
However, it is not enough to wrap the setter in a synchronized block, all access to the (non-volatile) variables (including the getters) must be behind the same monitor.

Since the counter and the multiPlexer are locked simultaneously, they can be considered a single resource. Moreover, the whole instance of the class CounterMultiplexer can be thought of as a single resource. In Java, treating an instance as a single resource is the most widespread approach. For this case, synchronized methods were introduced:
public synchronized void calculate() {
    counter++;
    multiPlexer = multiPlexer * counter;
}

public synchronized int getCounter() {
    return counter;
}

public synchronized int getMux() {
    return multiPlexer;
}
The mutex variable is not needed anymore.

An alternative way to approach this kind of problem is to have all your member variables be final and for the calculate method to return a new instance of CounterMultiplexer. This guarantees that any instance of CounterMultiplexer is always in a consistent state. Depending on how you use this class, this approach would likely require synchronization outside of this class.
Synchronizing within the getters still allows for another thread to read one of the two member variables from before the change and one from after.
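As a sketch of what that could look like (names are kept from the original example; the exact API is just illustrative): every field is final and calculate() hands back a new, fully constructed instance rather than mutating shared state.

public final class CounterMultiplexer {

    private final int counter;
    private final int multiPlexer;

    public CounterMultiplexer(int counter, int multiPlexer) {
        this.counter = counter;
        this.multiPlexer = multiPlexer;
    }

    // Returns a new instance instead of mutating this one.
    public CounterMultiplexer calculate() {
        int newCounter = counter + 1;
        return new CounterMultiplexer(newCounter, multiPlexer * newCounter);
    }

    public int getCounter() { return counter; }
    public int getMux()     { return multiPlexer; }
}

Publishing the new instance to other threads (for example through a volatile or AtomicReference field) is the synchronization that then has to happen outside this class.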

How do atomic / volatile / synchronized work internally?
What is the difference between the following code blocks?
Code 1
private int counter;

public int getNextUniqueIndex() {
    return counter++;
}
Code 2
private AtomicInteger counter;

public int getNextUniqueIndex() {
    return counter.getAndIncrement();
}
Code 3
private volatile int counter;

public int getNextUniqueIndex() {
    return counter++;
}
Does volatile work in the following way? Is
volatile int i = 0;

void incIBy5() {
    i += 5;
}
equivalent to
Integer i = 5;

void incIBy5() {
    int temp;
    synchronized (i) { temp = i; }
    synchronized (i) { i = temp + 5; }
}
I think that two threads cannot enter a synchronized block at the same time... am I right? If this is true then how does atomic.incrementAndGet() work without synchronized? And is it thread-safe?
And what is the difference between internal reading and writing to volatile variables / atomic variables? I read in some article that the thread has a local copy of the variables - what is that?
You are specifically asking about how they internally work, so here you are:
No synchronization
private int counter;

public int getNextUniqueIndex() {
    return counter++;
}
It basically reads the value from memory, increments it, and writes it back to memory. This works in a single thread, but nowadays, in the era of multi-core, multi-CPU, multi-level-cache machines, it won't work correctly. First of all it introduces a race condition (several threads can read the same value at the same time), but also visibility problems. The value might only be stored in "local" CPU memory (some cache) and not be visible to other CPUs/cores (and thus threads). This is why many refer to a thread's local copy of a variable. It is very unsafe. Consider this popular but broken thread-stopping code:
private boolean stopped;

public void run() {
    while (!stopped) {
        // do some work
    }
}

public void pleaseStop() {
    stopped = true;
}
Add volatile to the stopped variable and it works fine: if any other thread modifies the stopped variable via the pleaseStop() method, you are guaranteed to see that change immediately in the working thread's while(!stopped) loop. BTW this is not a good way to interrupt a thread either; see: How to stop a thread that is running forever without any use and Stopping a specific java thread.
AtomicInteger
private AtomicInteger counter = new AtomicInteger();

public int getNextUniqueIndex() {
    return counter.getAndIncrement();
}
The AtomicInteger class uses CAS (compare-and-swap) low-level CPU operations (no synchronization needed!). They allow you to modify a particular variable only if its present value equals an expected value (and report whether the swap succeeded). So when you execute getAndIncrement() it actually runs in a loop (simplified real implementation):
int current;
do {
    current = get();
} while (!compareAndSet(current, current + 1));
So basically: read; try to store incremented value; if not successful (the value is no longer equal to current), read and try again. The compareAndSet() is implemented in native code (assembly).
volatile without synchronization
private volatile int counter;

public int getNextUniqueIndex() {
    return counter++;
}
This code is not correct. It fixes the visibility issue (volatile makes sure other threads can see change made to counter) but still has a race condition. This has been explained multiple times: pre/post-incrementation is not atomic.
The only side effect of volatile is "flushing" caches so that all other parties see the freshest version of the data. This is too strict in most situations; that is why volatile is not default.
volatile without synchronization (2)
volatile int i = 0;

void incIBy5() {
    i += 5;
}
The same problem as above, but even worse because i is not private. The race condition is still present. Why is it a problem? If, say, two threads run this code simultaneously, the output might be + 5 or + 10. However, you are guaranteed to see the change.
Multiple independent synchronized
void incIBy5() {
    int temp;
    synchronized (i) { temp = i; }
    synchronized (i) { i = temp + 5; }
}
Surprise, this code is incorrect as well. In fact, it is completely wrong. First of all you are synchronizing on i, which is about to be changed (moreover, i is a primitive, so I guess you are synchronizing on a temporary Integer created via autoboxing...) Completely flawed. You could also write:
synchronized (new Object()) {
    // thread-safe, SRSLy?
}
No two threads can enter the same synchronized block with the same lock at the same time. In this case (and similarly in your code) the lock object changes upon every execution, so synchronized effectively has no effect.
Even if you have used a final variable (or this) for synchronization, the code is still incorrect. Two threads can first read i to temp synchronously (having the same value locally in temp), then the first assigns a new value to i (say, from 1 to 6) and the other one does the same thing (from 1 to 6).
The synchronization must span from reading to assigning a value. Your first synchronization has no effect (reading an int is atomic) and the second as well. In my opinion, these are the correct forms:
synchronized void incIBy5() {
    i += 5;
}

void incIBy5() {
    synchronized (this) {
        i += 5;
    }
}

void incIBy5() {
    synchronized (this) {
        int temp = i;
        i = temp + 5;
    }
}
Declaring a variable as volatile means that modifying its value immediately affects the actual memory storage for the variable. The compiler cannot optimize away any references made to the variable. This guarantees that when one thread modifies the variable, all other threads see the new value immediately. (This is not guaranteed for non-volatile variables.)
Declaring an atomic variable guarantees that operations made on the variable occur in an atomic fashion, i.e., that all of the substeps of the operation are completed within the thread they are executed and are not interrupted by other threads. For example, an increment-and-test operation requires the variable to be incremented and then compared to another value; an atomic operation guarantees that both of these steps will be completed as if they were a single indivisible/uninterruptible operation.
Synchronizing all accesses to a variable allows only a single thread at a time to access the variable, and forces all other threads to wait for that accessing thread to release its access to the variable.
Synchronized access is similar to atomic access, but the atomic operations are generally implemented at a lower level of programming. Also, it is entirely possible to synchronize only some accesses to a variable and allow other accesses to be unsynchronized (e.g., synchronize all writes to a variable but none of the reads from it).
Atomicity, synchronization, and volatility are independent attributes, but are typically used in combination to enforce proper thread cooperation for accessing variables.
Addendum (April 2016)
Synchronized access to a variable is usually implemented using a monitor or semaphore. These are low-level mutex (mutual exclusion) mechanisms that allow a thread to acquire control of a variable or block of code exclusively, forcing all other threads to wait if they also attempt to acquire the same mutex. Once the owning thread releases the mutex, another thread can acquire the mutex in turn.
Addendum (July 2016)
Synchronization occurs on an object. This means that calling a synchronized method of a class will lock the this object of the call. Static synchronized methods will lock the Class object itself.
Likewise, entering a synchronized block requires acquiring the lock on the object given in the block's parentheses (often this).
This means that a synchronized method (or block) can be executing in multiple threads at the same time if they are locking on different objects, but only one thread can execute a synchronized method (or block) at a time for any given single object.
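To make that concrete, here is a minimal sketch (class and method names are made up) showing which object each kind of synchronized method locks:

public class LockGranularity {

    private static int shared;
    private int perInstance;

    // Locks LockGranularity.class: at most one thread at a time in any
    // static synchronized method of this class.
    public static synchronized void touchShared() {
        shared++;
    }

    // Locks this instance: two threads may run this concurrently on two
    // different instances, but never on the same one.
    public synchronized void touchInstance() {
        perInstance++;
    }
}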
volatile:
volatile is a keyword. volatile forces all threads to read the latest value of the variable from main memory instead of from a cache. No locking is required to access volatile variables. All threads can access a volatile variable's value at the same time.
Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable.
This means that changes to a volatile variable are always visible to other threads. What's more, it also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up to the change.
When to use: one thread modifies the data and other threads have to read the latest value of the data. The other threads will take some action based on it, but they won't update the data.
AtomicXXX:
AtomicXXX classes support lock-free, thread-safe programming on single variables. These AtomicXXX classes (like AtomicInteger) resolve the memory inconsistency errors / side effects of modifying volatile variables that are accessed from multiple threads.
When to use: Multiple threads can read and modify data.
synchronized:
synchronized is a keyword used to guard a method or code block. Making a method synchronized has two effects:
First, it is not possible for two invocations of synchronized methods on the same object to interleave. When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
When to use: multiple threads can read and modify data, and your business logic not only updates the data but also executes atomic operations.
AtomicXXX is the equivalent of volatile + synchronized even though the implementation is different. AtomicXXX combines volatile variables with compareAndSet methods but does not use synchronization.
Related SE questions:
Difference between volatile and synchronized in Java
Volatile boolean vs AtomicBoolean
Good articles to read (the above content is taken from these documentation pages):
https://docs.oracle.com/javase/tutorial/essential/concurrency/sync.html
https://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html
https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/package-summary.html
I know that two threads can not enter a synchronized block at the same time
Two threads cannot enter a synchronized block on the same object at the same time. This means that two threads can enter the same block on different objects. This confusion can lead to code like this:
private Integer i = 0;

synchronized (i) {
    i++;
}
This will not behave as expected as it could be locking on a different object each time.
if this is true then how does atomic.incrementAndGet() work without synchronized? And is it thread safe?
Yes. It doesn't use locking to achieve thread safety.
If you want to know how they work in more detail, you can read the code for them.
And what is the difference between internal reading and writing to a volatile variable / an atomic variable?
The atomic classes use volatile fields. There is no difference in the field; the difference is in the operations performed. The atomic classes use compare-and-swap (CAS) operations.
I read in some article that a thread has a local copy of variables; what is that?
I can only assume it is referring to the fact that each CPU has its own cached view of memory, which can differ from every other CPU's. To ensure that your CPU has a consistent view of the data, you need to use thread safety techniques.
This is only an issue when memory is shared and at least one thread updates it.
Synchronized vs Atomic vs Volatile:
Volatile and atomic apply only to variables, while synchronized applies to methods and blocks.
Volatile ensures visibility, not atomicity/consistency of an object, while the other two ensure both visibility and atomicity.
A volatile variable is stored in main RAM and is fast to access, but we can't achieve thread safety for compound operations without the synchronized keyword.
Synchronized is implemented as a synchronized block or a synchronized method, while the other two are not; with the synchronized keyword we can make multiple lines of code thread safe, which the other two cannot achieve.
Synchronized can lock on the same class object or a different class object, while the other two cannot.
Please correct me if I missed anything.
A volatile + synchronized combination is a foolproof solution for making an operation (statement) fully atomic when it involves multiple instructions to the CPU.
Say, for example: volatile int i = 2; i++, which is nothing but i = i + 1, and which makes i hold the value 3 in memory after the execution of this statement.
This involves reading the existing value of i from memory (which is 2), loading it into the CPU's accumulator register, doing the calculation by incrementing the existing value by one (2 + 1 = 3 in the accumulator), and then writing that incremented value back to memory. These operations are not atomic as a whole, even though i is volatile. i being volatile guarantees only that a SINGLE read/write from memory is atomic, not MULTIPLE ones. Hence we also need synchronized around i++ to keep it a foolproof atomic statement. Remember that one statement can consist of multiple operations.
Hope the explanation is clear enough.
The Java volatile modifier is an example of a special mechanism to guarantee that communication happens between threads. When one thread writes to a volatile variable, and another thread sees that write, the first thread is telling the second about all of the contents of memory up until it performed the write to that volatile variable.
Atomic operations are performed in a single unit of task without interference from other operations. Atomic operations are necessity in multi-threaded environment to avoid data inconsistency.

Initialization safety in java

Just to make sure I understand the concepts presented in Java Concurrency in Practice.
Let's say I have the following program:
public class Stuff {

    private int x;

    public Stuff(int x) {
        this.x = x;
    }

    public int getX() { return x; }
}

public class UseStuff {

    private Stuff s;

    public void makeStuff(int x) {
        s = new Stuff(x);
    }

    public int useStuff() {
        return s.getX();
    }
}
If I let multiple threads play with this code, then I'm in trouble not only because s might point to different instances if two or more threads enter the makeStuff method, but also because even if just one thread creates a new Stuff, another thread that has just entered useStuff can get back the value 0 (the default int value) instead of the value assigned to x by the constructor.
That all depends on whether the constructor has finished initializing x.
So at this point, to make it thread safe I must do one thing, and then I can choose from two different ways.
First I must make makeStuff() atomic, so s will point to one object at a time.
Then I either make useStuff synchronized as well, which ensures that I get back the Stuff object's x only after its constructor has finished building it, OR I can make Stuff's x final, and by this the JMM makes sure that x's value will only be visible after it has been initialized.
Do I understand the importance of final fields in the context of concurrency and JMM?
Do I understand the importance of final fields in the context of concurrency and JMM?
Not quite. The spec writes:
final fields also allow programmers to implement thread-safe immutable objects without synchronization. A thread-safe immutable object is seen as immutable by all threads, even if a data race is used to pass references to the immutable object between threads. This can provide safety guarantees against misuse of an immutable class by incorrect or malicious code
If you make x final, this guarantees that every thread that obtains a reference to a Stuff instance will observe x to have been assigned. It does not guarantee that any thread will obtain such a reference.
That is, in the absence of synchronization action in useStuff(), the runtime is permitted to satisfy a read of s from a register, which might return a stale value.
The cheapest correctly synchronized variant of this code is declaring s volatile, which ensures that writes to s happen-before (and are therefore visible to) subsequent reads of s. If you do that, you need not even make x final (because the write to x happens-before the write of s, the read of s happens-before the read of x, and happens-before is transitive).
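A sketch of that cheapest variant, reusing the classes from the question (only the changed class shown):

public class UseStuff {

    private volatile Stuff s;      // the volatile write in makeStuff happens-before later reads here

    public void makeStuff(int x) {
        s = new Stuff(x);          // this.x = x in the constructor happens-before the volatile write of s
    }

    public int useStuff() {
        return s.getX();           // still throws NullPointerException if makeStuff was never called
    }
}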
Some answers claim that s can only refer to one object at a time. This is wrong; because there is no memory barrier, different threads can have their own notion about the value of s. In order for all threads to see a consistent value assigned to s, you need to declare s as volatile, or use some other memory barrier.
If you do this, you won't need to declare x as final for the correct value to be visible to all threads (but you might still want to; fields shouldn't be mutable without a reason). That's because the initialization of x happens-before the assignment of s in "source code order," and the write of the volatile field s happens-before other thread reads that value from s. If you subsequently modified the value of a non-final field x, however, you could run into trouble because the modification isn't guaranteed to be visible to other threads. Making Stuff immutable would eliminate that possibility.
Of course, there's nothing to stop threads from clobbering the value assigned to s, so different threads could still see different values for x. This isn't really a threading issue though. Even a single thread could write and then read different values of x over time. But preventing this behavior in a multi-threaded environment requires atomicity, that is, checking to see whether s has a value and assigning one if not should appear as one indivisible action to other threads. An AtomicReference would be the best solution, but the synchronized keyword would work too.
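Here is a rough sketch of the AtomicReference variant; the set-if-absent method name is made up for illustration:

import java.util.concurrent.atomic.AtomicReference;

public class UseStuff {

    private final AtomicReference<Stuff> s = new AtomicReference<>();

    // The check and the assignment appear as one indivisible action to other
    // threads: only the first caller installs a Stuff, later calls do nothing.
    public void makeStuffOnce(int x) {
        s.compareAndSet(null, new Stuff(x));
    }

    public int useStuff() {
        Stuff current = s.get();
        return current == null ? 0 : current.getX();
    }
}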
What are you trying to protect by making things synchronized? Are you concerned that thread A will call makeStuff and then thread B will call useStuff afterwards and the value won't be there? I'm not sure how synchronizing any of this will help with that. Depending on what problem you are trying to avoid, it might be as simple as marking s as volatile.
I'm not sure what you're doing there. Why are you trying to create an object and then assign it to a field? Why save it if it can be overwritten by another call to makeStuff? It seems like you use UseStuff both as a proxy and as a factory for your actual Stuff model object. You had better separate the two:
public class StuffFactory {

    public static Stuff createStuff(int value) {
        return new StuffProxy(value);
    }
}

public class StuffProxy extends Stuff {

    public StuffProxy(int value) {
        super(value);
    }

    // Replacement for useStuff from your original UseStuff class
    @Override
    public int getX() {
        // Put custom logic here
        return super.getX();
    }
}
The logic here is that each thread is responsible for creating its own Stuff objects (using the factory), so concurrent access is no longer an issue.

Proper use of volatile variables and synchronized blocks

I am trying to wrap my head around thread safety in java (or in general). I have this class (which I hope complies with the definition of a POJO) which also needs to be compatible with JPA providers:
public class SomeClass {

    private Object timestampLock = new Object();

    // are "volatile"s necessary?
    private volatile java.sql.Timestamp timestamp;
    private volatile String timestampTimeZoneName;

    private volatile BigDecimal someValue;

    public ZonedDateTime getTimestamp() {
        // is synchronisation necessary here? is this the correct usage?
        synchronized (timestampLock) {
            return ZonedDateTime.ofInstant(timestamp.toInstant(), ZoneId.of(timestampTimeZoneName));
        }
    }

    public void setTimestamp(ZonedDateTime dateTime) {
        // is this the correct usage?
        synchronized (timestampLock) {
            this.timestamp = java.sql.Timestamp.from(dateTime.toInstant());
            this.timestampTimeZoneName = dateTime.getZone().getId();
        }
    }

    // is synchronisation required?
    public BigDecimal getSomeValue() {
        return someValue;
    }

    // is synchronisation required?
    public void setSomeValue(BigDecimal val) {
        someValue = val;
    }
}
As stated in the commented rows in the code, is it necessary to define timestamp and timestampTimeZoneName as volatile and are the synchronized blocks used as they should be? Or should I use only the synchronized blocks and not define timestamp and timestampTimeZoneName as volatile? A timestampTimeZoneName of a timestamp should not be erroneously matched with another timestamp's.
This link says:
Reads and writes are atomic for all variables declared volatile (including long and double variables)
Should I understand that accesses to someValue in this code through the setter/getter are thread safe thanks to volatile definitions? If so, is there a better (I do not know what "better" might mean here) way to accomplish this?
To determine if you need synchronized, try to imagine a place where you can have a context switch that would break your code.
In this case, if the context switch happens where I put the comment, then in getTimestamp() you're going to be reading different values from each timestamp type.
Also, although assignments are atomic, the expression java.sql.Timestamp.from(dateTime.toInstant()) certainly isn't, so you can get a context switch in between dateTime.toInstant() and the call to from. In short, you definitely need the synchronized blocks.
synchronized (timestampLock) {
    this.timestamp = java.sql.Timestamp.from(dateTime.toInstant());
    // CONTEXT SWITCH HERE
    this.timestampTimeZoneName = dateTime.getZone().getId();
}

synchronized (timestampLock) {
    return ZonedDateTime.ofInstant(timestamp.toInstant(), ZoneId.of(timestampTimeZoneName));
}
In terms of volatile, I'm pretty sure it's required. You have to guarantee that each thread definitely gets the most up-to-date version of a variable.
This is the contract of volatile. And although it may be covered by the synchronized block, with volatile not strictly necessary here, it's good to write it anyway. If the synchronized block already does the job of volatile, the VM won't do the guarantee twice. This means volatile won't cost you any more, and it's a very good flashing light that says to the programmer: "I'M USED IN MULTIPLE THREADS".
For someValue: if there's no synchronized block here, then volatile is definitely necessary. If you call the setter in one thread, the other thread has no cue telling it that the value may have been updated outside of that thread, so it may use an old, cached value. The JIT can do a lot of funny optimizations if it assumes a single thread; ones that can simply break your program.
Now I'm not entirely certain if synchronized is required here. My guess is no. I would add it anyway to be safe, though. Or you can let Java worry about the synchronization and use http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/atomic/AtomicInteger.html
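Since someValue is a BigDecimal rather than an int, the java.util.concurrent.atomic equivalent would be an AtomicReference; a rough sketch (only this field shown, initial value assumed to be zero):

import java.math.BigDecimal;
import java.util.concurrent.atomic.AtomicReference;

public class SomeClass {

    private final AtomicReference<BigDecimal> someValue =
            new AtomicReference<>(BigDecimal.ZERO);

    public BigDecimal getSomeValue() {
        return someValue.get();   // has volatile-read semantics
    }

    public void setSomeValue(BigDecimal val) {
        someValue.set(val);       // has volatile-write semantics
    }
}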
Nothing new here, this is just a more explicit version of something @Cruncher already said:
You need synchronized whenever it is important for two or more fields in your program to be consistent with one another. Suppose you have two parallel lists, and your code depends on them both being the same length. That's called an invariant, as in: the two lists are invariably the same length.
How can you write a method, append(x,y), that adds a new pair of values to the lists without temporarily breaking the invariant? You can't. The method must add one item to the first list, breaking the invariant, and then add the other item to the second list, fixing it again. There's no other way.
In a single-threaded program, that temporary broken state is no problem because no other method can possibly use the lists while append(x,y) is running. That's no longer true in a multithreaded program. In the worst case, append(x,y) could add x to the x list, and then the scheduler could suspend the thread at that exact moment to allow other threads to run. The CPUs could execute millions of instructions before append(x,y) gets to finish the job and make the lists right again. During all of that time, other threads would see the broken invariant, and possibly corrupt your data or crash the program as a result.
The fix is for append(x,y) to be synchronized on some object, and (this is the important part), for every other method that uses the lists to be synchronized on the same object. Since only one thread can be synchronized on a given object at a given time, it will not be possible for any other thread to see the lists in an inconsistent state.
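A minimal sketch of that idea (class and method names are made up): the writer and every reader synchronize on the same lock object, so no thread can ever observe the two lists with different lengths.

import java.util.ArrayList;
import java.util.List;

public class PairedLists {

    private final Object lock = new Object();
    private final List<Integer> xs = new ArrayList<>();
    private final List<Integer> ys = new ArrayList<>();

    // Invariant: xs.size() == ys.size()
    public void append(int x, int y) {
        synchronized (lock) {
            xs.add(x);        // invariant briefly broken here...
            ys.add(y);        // ...and restored before the lock is released
        }
    }

    public int size() {
        synchronized (lock) { // readers take the same lock, so they never see the broken state
            return xs.size();
        }
    }
}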
So, if thread A calls append(x,y), and thread B tries to look at the lists "at the same time", will thread B see what the lists looked like before or after thread A did its work? That's called a race condition. And with only the synchronization that I have described so far, there's no way to know which thread will win. All we've done so far is to guarantee one particular invariant.
If it matters which thread wins the race, then that means that there is some higher-level invariant that also needs protection. You will have to add more synchronization to protect that one too. "Thread safety" -- two little words to name a subject that is both broad and deep.
Good Luck, and Have Fun!
// is synchronisation required?
public BigDecimal getSomeValue() {
    return someValue;
}

// is synchronisation required?
public void setSomeValue(BigDecimal val) {
    someValue = val;
}
I think yes, you are required to use the synchronized block. Consider an example in which one thread is setting the value while, at the same time, another thread is trying to read it through the getter method; that is why the example above shows the synchronized block. So if you expose the variable through getter/setter methods, you need the synchronized block there as well.

How to synchronize getter and setter in threading

public class IntermediateMessage {

    private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private final Lock read = readWriteLock.readLock();
    private final Lock write = readWriteLock.writeLock();

    private volatile double ratio;

    public IntermediateMessage() {
        this.ratio = 1.0d;
    }

    public IntermediateMessage(double ratio) {
        this.ratio = ratio;
    }

    public double getRatio() {
        read.lock();
        try {
            return this.ratio;
        } finally {
            read.unlock();
        }
    }

    public void setRatio(double ratio) {
        write.lock();
        try {
            this.ratio = ratio;
        } finally {
            write.unlock();
        }
    }
}
I have this object. I have an instance of this object in my application; one thread writes to the ratio variable while the other threads read it. Is this the correct way to protect the ratio variable? Do I need to declare ratio as volatile?
Do you need locking at all? Most likely not, according to the limited requirements you've described. But read this to be sure...
You have just one thread writing.
This means that the variable value can never be "out of date" due to competing writers "clobbering" one another (no possible race condition). So no locking is required to give integrity when considering the individual variable in isolation.
You have not mentioned whether some form of atomic, consistent modification of multiple variables is required. I assume it isn't.
IF ratio must always be consistent with other variables (e.g. in other objects), i.e. if a set of variables must change in synchrony as a group with no one reading just part of the changes, then locking is required to give atomic consistency to the set of variables. The consistent variables must then be modified together within a single locked region, and readers must obtain the same lock before reading any of that set of variables (waiting in a blocked state, if necessary).
IF ratio can be changed at any time as a lone variable and need not be kept consistent with other variables, then no locking is required.
Do you need the volatile modifier? Well, yes!
You have multiple threads reading.
The variable can change at any moment, including an instant before it's about to be read.
The volatile modifier is used in multi-threaded apps to guarantee that the value read by "readers" always matches the value written by "writers".
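Under those assumptions (a single writer, many readers, no multi-variable invariant), a volatile field on its own would do; a minimal sketch:

public class IntermediateMessage {

    private volatile double ratio = 1.0d;   // volatile also makes double reads/writes atomic

    public double getRatio() {
        return ratio;                        // a single volatile read, no lock needed
    }

    public void setRatio(double ratio) {
        this.ratio = ratio;                  // a single volatile write by the lone writer thread
    }
}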
You are doing some overkill on the synchronization that is going to cause some inefficiency.
The Java keyword volatile means that the variable won't be cached, and that it will have synchronized access for multiple threads.
So you are locking a variable that already gives you that guarantee by default.
So you should either remove the volatile keyword or remove the reentrant locks. Probably the volatile, as you will be more efficient with multiple reads the way you are currently synchronizing.
For reading/writing a primitive value, volatile alone is sufficient.
Provided two threads are trying to read and write on the same object and you want data integrity to be maintained, just make your getter and setter synchronized. When a method is synchronized, only one thread will be able to call a synchronized method at a time: while one thread is executing one of the synchronized methods, no other thread will be able to call any of the synchronized methods. So in your case, if you have your get and set methods synchronized, you can be sure that while one thread is reading/writing, no other thread can read/write.
Hope it helps!
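A sketch of the synchronized getter/setter version this answer describes (both methods lock this):

public class IntermediateMessage {

    private double ratio = 1.0d;

    // A read can never interleave with a write, and releasing the monitor
    // makes the written value visible to the next thread that acquires it.
    public synchronized double getRatio() {
        return ratio;
    }

    public synchronized void setRatio(double ratio) {
        this.ratio = ratio;
    }
}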
Make ratio final and it will be thread safe.

When to use volatile and synchronized

I know there are many questions about this, but I still don't quite understand. I know what both of these keywords do, but I can't determine which to use in certain scenarios. Here are a couple of examples that I'm trying to determine which is the best to use.
Example 1:
import java.net.ServerSocket;

public class Something extends Thread {

    private ServerSocket serverSocket;

    public void run() {
        while (true) {
            if (serverSocket.isClosed()) {
                ...
            } else { // Should this block use synchronized (serverSocket)?
                // Do stuff with serverSocket
            }
        }
    }

    public ServerSocket getServerSocket() {
        return serverSocket;
    }
}
public class SomethingElse {

    Something something = new Something();

    public void doSomething() {
        something.getServerSocket().close();
    }
}
Example 2:
public class Server {

    private int port; // Should it be volatile, or should the threads accessing it use synchronized (server)?

    // getPort() and setPort(int) are accessed from multiple threads
    public int getPort() {
        return port;
    }

    public void setPort(int port) {
        this.port = port;
    }
}
Any help is greatly appreciated.
A simple answer is as follows:
synchronized can always be used to give you a thread-safe / correct solution,
volatile will probably be faster, but can only be used to give you a thread-safe / correct solution in limited situations.
If in doubt, use synchronized. Correctness is more important than performance.
Characterizing the situations under which volatile can be used safely involves determining whether each update operation can be performed as a single atomic update to a single volatile variable. If the operation involves accessing other (non-final) state or updating more than one shared variable, it cannot be done safely with just volatile. You also need to remember that:
updates to a non-volatile long or double may not be atomic, and
Java operators like ++ and += are not atomic.
Terminology: an operation is "atomic" if the operation either happens entirely, or it does not happen at all. The term "indivisible" is a synonym.
When we talk about atomicity, we usually mean atomicity from the perspective of an outside observer; e.g. a different thread to the one that is performing the operation. For instance, ++ is not atomic from the perspective of another thread, because that thread may be able to observe the state of the field being incremented in the middle of the operation. Indeed, if the field is a long or a double, it may even be possible to observe a state that is neither the initial state nor the final state!
The synchronized keyword
synchronized indicates that a variable will be shared among several threads. It's used to ensure consistency by "locking" access to the variable, so that one thread can't modify it while another is using it.
Classic Example: updating a global variable that indicates the current time
The incrementSeconds() function must be able to complete uninterrupted because, as it runs, it creates temporary inconsistencies in the value of the global variable time. Without synchronization, another function might see a time of "12:60:00" or, at the comment marked with >>>, it would see "11:00:00" when the time is really "12:00:00" because the hours haven't incremented yet.
void incrementSeconds() {
    if (++time.seconds > 59) {          // time might be 1:00:60
        time.seconds = 0;               // time is invalid here: minutes are wrong
        if (++time.minutes > 59) {      // time might be 1:60:00
            time.minutes = 0;           // >>> time is invalid here: hours are wrong
            if (++time.hours > 23) {    // time might be 24:00:00
                time.hours = 0;
            }
        }
    }
}
The volatile keyword
volatile simply tells the compiler not to make assumptions about the constant-ness of a variable, because it may change when the compiler wouldn't normally expect it. For example, the software in a digital thermostat might have a variable that indicates the temperature, and whose value is updated directly by the hardware. It may change in places that a normal variable wouldn't.
If degreesCelsius is not declared to be volatile, the compiler is free to optimize this:
void controlHeater() {
    while ((degreesCelsius * 9.0 / 5.0 + 32) < COMFY_TEMP_IN_FAHRENHEIT) {
        setHeater(ON);
        sleep(10);
    }
}
into this:
void controlHeater() {
    float tempInFahrenheit = degreesCelsius * 9.0 / 5.0 + 32;
    while (tempInFahrenheit < COMFY_TEMP_IN_FAHRENHEIT) {
        setHeater(ON);
        sleep(10);
    }
}
By declaring degreesCelsius to be volatile, you're telling the compiler that it has to check its value each time it runs through the loop.
Summary
In short, synchronized lets you control access to a variable, so you can guarantee that updates are atomic (that is, a set of changes will be applied as a unit; no other thread can access the variable when it's half-updated). You can use it to ensure consistency of your data. On the other hand, volatile is an admission that the contents of a variable are beyond your control, so the code must assume it can change at any time.
There is insufficient information in your post to determine what is going on, which is why all the advice you are getting is general information about volatile and synchronized.
So, here's my general advice:
During the cycle of writing-compiling-running a program, there are two optimization points:
at compile time, when the compiler might try to reorder instructions or optimize data caching.
at runtime, when the CPU has its own optimizations, like caching and out-of-order execution.
All this means that instructions will most likely not execute in the order that you wrote them, even when that order must be maintained to ensure program correctness in a multithreaded environment. A classic example you will often find in the literature is this:
class ThreadTask implements Runnable {

    private boolean stop = false;
    private boolean work;

    public void run() {
        while (!stop) {
            work = !work; // simulate some work
        }
    }

    public void stopWork() {
        stop = true; // signal thread to stop
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadTask task = new ThreadTask();
        Thread t = new Thread(task);
        t.start();
        Thread.sleep(1000);
        task.stopWork();
        t.join();
    }
}
Depending on compiler optimizations and CPU architecture, the above code may never terminate on a multi-processor system. This is because the value of stop may be cached in a register of the CPU running thread t, such that the thread never again reads the value from main memory, even though the main thread has updated it in the meantime.
To combat this kind of situation, memory fences were introduced. These are special instructions that do not allow regular instructions before the fence to be reordered with instructions after the fence. One such mechanism is the volatile keyword. Variables marked volatile are not optimized by the compiler/CPU and will always be written/read directly to/from main memory. In short, volatile ensures visibility of a variable's value across CPU cores.
Visibility is important, but should not be confused with atomicity. Two threads incrementing the same shared variable may produce inconsistent results even though the variable is declared volatile. This is because on some systems the increment is actually translated into a sequence of assembler instructions that can be interrupted at any point. For such cases, critical sections such as the synchronized keyword need to be used. This means that only a single thread can access the code enclosed in the synchronized block. Another common use of critical sections is atomic updates to a shared collection; usually, iterating over a collection while another thread is adding/removing items will cause an exception to be thrown.
Finally two interesting points:
synchronized and a few other constructs such as Thread.join will introduce memory fences implicitly. Hence, incrementing a variable inside a synchronized block does not require the variable to also be volatile, assuming that's the only place it's being read/written.
For simple updates such as value swap, increment, decrement, you can use non-blocking atomic methods like the ones found in AtomicInteger, AtomicLong, etc. These are much faster than synchronized because they do not trigger a context switch in case the lock is already taken by another thread. They also introduce memory fences when used.
Note: In your first example, the field serverSocket is actually never initialized in the code you show.
Regarding synchronization, it depends on whether or not the ServerSocket class is thread safe. (I assume it is, but I have never used it.) If it is, you don't need to synchronize around it.
In the second example, int variables can be atomically updated so volatile may suffice.
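For the second example, that could look like the following sketch (only the field and its accessors shown):

public class Server {

    private volatile int port;   // a single int, read and written whole: volatile adds the
                                 // visibility guarantee, and an int assignment is already atomic

    public int getPort() {
        return port;
    }

    public void setPort(int port) {
        this.port = port;
    }
}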
volatile solves the "visibility" problem across CPU cores: the value from local registers/caches is flushed and synced with main memory. However, if we need a consistent value and an atomic operation, we need a mechanism to defend the critical data. That can be achieved by either a synchronized block or an explicit lock.
