Java Concurrency in Practice - safe publication, immutable objects and volatile

I'm reading "Java concurrency in practice" and one thing is confusing me.
class OneValueCache {
    private final BigInteger lastNumber;
    private final BigInteger[] lastFactors;

    public OneValueCache(BigInteger lastNumber, BigInteger[] lastFactors) {
        this.lastNumber = lastNumber;
        this.lastFactors = Arrays.copyOf(lastFactors, lastFactors.length);
    }

    public BigInteger[] getFactors(BigInteger i) {
        if (lastNumber == null || !lastNumber.equals(i)) {
            return null;
        }
        return Arrays.copyOf(lastFactors, lastFactors.length);
    }
}

class VolatileCachedFactorizer implements Servlet {
    private volatile OneValueCache cache = new OneValueCache(null, null);

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = cache.getFactors(i);
        if (factors == null) {
            factors = factor(i);
            cache = new OneValueCache(i, factors);
        }
        encodeIntoResponse(resp, factors);
    }
}
In the above code the author uses volatile with a reference to the immutable OneValueCache, but a few pages later he writes:
Immutable objects can be used safely by any thread without additional synchronization, even when synchronization is not used to publish them.
So... volatile is not necessary in the above code?

There are two levels of "thread safety" being applied here. One is at the reference level, handled with volatile: think of one thread reading the reference as null while another thread has already swapped in a real value in between. Volatile guarantees that a publication by one thread is visible to the others. But another level of thread safety is required to safeguard the internal members themselves, which would otherwise have the potential to be changed or seen inconsistently. Merely making the reference volatile has no impact on the data within the cache (lastNumber, lastFactors); immutability is what helps there.
As a general rule of good thread-safe programming practice (referenced here):
Do not assume that declaring a reference volatile guarantees safe publication of the members of the referenced object.
This is the same reason why putting the volatile keyword in front of a HashMap variable does not make it thread-safe.
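To make that concrete, here is a small sketch (my own illustration with a hypothetical VolatileMapHolder class, not code from the book): the swap of the map reference is published safely, but compound operations on the map object itself remain unsynchronized.

import java.util.HashMap;
import java.util.Map;

class VolatileMapHolder {
    // The reference itself is published safely: readers see the latest map assigned here.
    private volatile Map<String, Integer> map = new HashMap<>();

    void swap(Map<String, Integer> fresh) {
        map = fresh; // safe: a single volatile write
    }

    void unsafeIncrement(String key) {
        // NOT thread-safe: get-then-put is a compound, unsynchronized action
        // on the plain HashMap that the volatile reference points to.
        Integer old = map.get(key);
        map.put(key, old == null ? 1 : old + 1);
    }
}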

cache is not a cache, it is a reference to a cache. The reference needs to be volatile so that a switch of the cache is visible to all threads.
Even after assignment to cache, other threads may be using the old cache, which they can safely do. But if you want the new cache to be seen as soon as it is switched, volatile is needed. There is still a window where threads might be using the old cache, but volatile guarantees that subsequent accessors will see the new cache. Do not confuse 'safety' with 'timeliness'.
Another way to look at this is to note that immutability is a property of the cache object, and cannot affect the use of any reference to that object. (And obviously the reference is not immutable, since we assign to it).
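A small sketch of the "old cache is still safe" point (my own illustration, not from the book): a reader that has already loaded the volatile reference keeps a consistent, immutable snapshot even if another thread swaps in a new cache in the meantime.

// Inside service(): read the volatile field once into a local variable.
OneValueCache snapshot = cache;                  // single volatile read
BigInteger[] factors = snapshot.getFactors(i);
// Even if another request replaces 'cache' right now, 'snapshot' still refers to a
// fully constructed, immutable OneValueCache, so using it is safe; it is merely
// not the newest one. That is the timeliness/safety distinction above.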

Related

Is synchronized needed in getValue()? And is volatile needed?

I have a class in a multithreaded application:
public class A {
    private volatile int value = 0; // is volatile needed here?

    public synchronized void increment() {
        value++; // Atomic is better, agree
    }

    public int getValue() { // synchronized needed?
        return value;
    }
}
The keyword volatile gives you the visibility aspects and without that you may read some stale value. A volatile read adds a memory barrier such that the compiler, hardware or the JVM can't reorder the memory operations in ways that would violate the visibility guarantees provided by the memory model. According to the memory model, a write to a volatile field happens-before every subsequent read of that same field, thus you are guaranteed to read the latest value.
The keyword synchronized is also needed, since you are performing a compound action, value++, which has to be done atomically. You read the value, increment it in the CPU and then write it back; all of these actions have to happen as one atomic step. However, you don't need to synchronize the read path, since the keyword volatile guarantees visibility. In fact, using both volatile and synchronized on the read path would be confusing and would offer no performance or safety benefit.
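To make the compound-action point concrete, here is a sketch (my own, not from the original answer) of what value++ expands to and how two unsynchronized threads can lose an update:

// value++ is really three separate steps; without a lock two threads can interleave them:
int tmp = value;   // 1. read        T1 reads 5        T2 reads 5
tmp = tmp + 1;     // 2. increment   T1 computes 6     T2 computes 6
value = tmp;       // 3. write       T1 writes 6       T2 writes 6  -> one increment lost
// synchronized on increment() makes the three steps atomic with respect to other
// synchronized callers; volatile alone cannot do that.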
The use of atomic variables is generally encouraged, since they use non-blocking synchronization based on CAS instructions built into the CPU, which avoids lock contention and yields higher throughput. Written using an atomic class, it would be something like this:
import java.util.concurrent.atomic.LongAdder;

public class A {
    private final LongAdder value = new LongAdder();

    public void increment() {
        value.add(1);
    }

    public int getValue() {
        return value.intValue();
    }
}
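For comparison, a sketch using AtomicInteger (my own variant, assuming the counter fits in an int) that also shows the compare-and-set retry loop the atomic classes use internally:

import java.util.concurrent.atomic.AtomicInteger;

public class A {
    private final AtomicInteger value = new AtomicInteger(0);

    public void increment() {
        value.incrementAndGet(); // non-blocking, CAS-based
    }

    // Equivalent hand-written CAS retry loop, shown only to illustrate
    // how the atomic classes avoid locking:
    public void incrementWithExplicitCas() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
    }

    public int getValue() {
        return value.get(); // volatile-read semantics, no lock
    }
}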

Using volatile to publish immutable objects?

public class VolatileCachedFactorizer extends GenericServlet implements Servlet {
    private volatile OneValueCache cache = new OneValueCache(null, null);

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = cache.getFactors(i);
        if (factors == null) {
            factors = factor(i);                   // ----------> thread A
            cache = new OneValueCache(i, factors); // ----------> thread B
        }
        encodeIntoResponse(resp, factors);
    }
}

public class OneValueCache {
    private final BigInteger lastNum;
    private final BigInteger[] lastFactors;

    public OneValueCache(BigInteger i, BigInteger[] lastFactors) {
        this.lastNum = i;
        this.lastFactors = lastFactors;
    }

    public BigInteger[] getFactors(BigInteger i) {
        if (lastNum == null || !lastNum.equals(i))
            return null;
        else
            return Arrays.copyOf(lastFactors, lastFactors.length);
    }
}
This is code from the book Java Concurrency in Practice. My question is about this code specifically: can we remove the final keywords from OneValueCache and still preserve thread safety? I am not sure why these final keywords are necessary.
Thanks.
It is not strictly necessary in this particular situation, but it becomes a bit harder to reason about when done without the "final" keywords.
Basically there are two concurrency problems we are trying to solve:
1) The visibility of the "cache" reference - solved by using "volatile" here.
2) State consistency (safe publication) of the OneValueCache object. As stated in the "Java Concurrency In Practice" book:
The publication requirements for an object depend on its mutability:
Immutable objects can be published through any mechanism;
Effectively immutable objects must be safely published;
...
So if you remove "final" usages from OneValueCache then you are making this class more of an effectively immutable class, at least from the visibility standpoint, because "final" has memory visibility semantics (somewhat similar to "volatile") under concurrency.
So now, instead of being able to forget about object state consistency wherever the class is used, you are forcing yourself to always think about safe publication when using it.
It also resembles what is described in chapter "16.1.4 Piggybacking on synchronization", because you would use the happens-before of writing/reading the volatile reference to guarantee that the OneValueCache object is in consistent state to all the threads after the construction. Basically it seems to be just a different explanation of the "safe publication" problem in this context.
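A sketch of that piggybacking idea (my own annotation of the code above, not from the book): even if OneValueCache dropped its final fields, a write to the volatile reference would still happen-before any subsequent read of that reference, so a reader that sees the new reference also sees the fields the constructor wrote before the volatile write.

// Writer thread, inside service():
OneValueCache c = new OneValueCache(i, factors); // 1. plain writes to lastNum / lastFactors
cache = c;                                       // 2. volatile write publishes them

// Reader thread, inside service():
OneValueCache r = cache;                         // 3. volatile read
// The happens-before chain 1 -> 2 -> 3 means the fields written in step 1 are visible
// after step 3: safe publication. Without final this guarantee holds only for
// publication through the volatile field; final would guarantee visibility of the
// constructed state no matter how the reference escaped.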

Does making all fields of an object final guarantee safe publication?

I was reading about safe publication in "Java Concurrency in Practice" and need help understanding this one example. I know it is simple, but it looks like I got too deep into it and got confused.
public class VolatileCachedFactorizer implements Servlet {
    private volatile OneValueCache cache = new OneValueCache(null, null);

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = cache.getFactors(i);
        if (factors == null) {
            factors = factor(i);
            cache = new OneValueCache(i, factors);
        }
        encodeIntoResponse(resp, factors);
    }
}

class OneValueCache {
    private final BigInteger lastNumber;
    private final BigInteger[] lastFactors;

    public OneValueCache(BigInteger i, BigInteger[] factors) {
        lastNumber = i;
        lastFactors = Arrays.copyOf(factors, factors.length);
    }

    public BigInteger[] getFactors(BigInteger i) {
        if (lastNumber == null || !lastNumber.equals(i))
            return null;
        else
            return Arrays.copyOf(lastFactors, lastFactors.length);
    }
}
Above are the two classes. VolatileCachedFactorizer is a servlet that will be initialized only once by the container; each request then calls the service method to get the factors of the number passed in.
OneValueCache is an immutable object that caches the latest number along with its factors.
Now, as per the book, it is safely published.
My question is that OneValueCache is not declared final in VolatileCachedFactorizer, although all of its fields are final. When the constructor of OneValueCache is executed from the service method, isn't the following scenario possible:
lastNumber is properly initialized (as it is final) but lastFactors is not yet, since the two assignments are not performed as a single atomic step. So is there a chance the object might be seen in an improper state?
If the cache field were declared final in VolatileCachedFactorizer, then the JVM would guarantee that it would be properly initialized.
Thanks
The short answer to your specific question is no. But the thing to note is that the OneValueCache instance assigned to "cache" is created in the service method and is immediately made visible to all other threads, thanks to the characteristics of the volatile keyword. If thread A writes to a volatile variable, then once it has finished, thread B will see all the changes thread A made before that write when it reads the volatile variable.
With volatile, the relevant memory operations are not reordered, and once the object is created it is immediately available to be read, since volatile variables are not held in a processor-local cache where they would be invisible to threads running on another processor.
If your confusion is about the cache object being properly instantiated: yes, only the properly constructed immutable cache object will be made visible to other threads.
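For contrast, a sketch of unsafe publication (adapted from the well-known Holder example in the same book; the names here are my own): with neither volatile nor final in play, a reader may observe a partially constructed object.

class UnsafeHolder {
    private int n; // not final

    UnsafeHolder(int n) {
        this.n = n;
    }

    void check() {
        if (n != n) // can appear true if the object was published unsafely
            throw new AssertionError("partially constructed object observed");
    }
}

class Publisher {
    static UnsafeHolder holder; // not volatile: unsafe publication

    static void publish() {
        holder = new UnsafeHolder(42);
    }
}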

Using volatile keyword with mutable object

In Java, I understand that the volatile keyword provides visibility for variables. The question is: if a variable is a reference to a mutable object, does volatile also provide visibility for the members inside that object?
In the example below, does it work correctly if multiple threads are accessing volatile Mutable m and changing the value?
Example:
class Mutable {
    private int value;

    public int get() {
        return value;
    }

    public void set(int value) {
        this.value = value;
    }
}

class Test {
    public volatile Mutable m;
}
This is a sort of side-note explanation of some of the details of volatile. I'm writing it here because it is too much for a comment. I want to give some examples which show how volatile affects visibility, and how that changed in JDK 1.5.
Given the following example code:
public class MyClass {
    private int _n;
    private volatile int _volN;

    public void setN(int i) {
        _n = i;
    }

    public void setVolN(int i) {
        _volN = i;
    }

    public int getN() {
        return _n;
    }

    public int getVolN() {
        return _volN;
    }

    public static void main(String[] args) {
        final MyClass mc = new MyClass();
        Thread t1 = new Thread() {
            public void run() {
                mc.setN(5);
                mc.setVolN(5);
            }
        };
        Thread t2 = new Thread() {
            public void run() {
                int volN = mc.getVolN();
                int n = mc.getN();
                System.out.println("Read: " + volN + ", " + n);
            }
        };
        t1.start();
        t2.start();
    }
}
The behavior of this test code is well defined in JDK 1.5+, but is not well defined pre-JDK 1.5.
In the pre-JDK 1.5 world, there was no defined relationship between volatile accesses and non-volatile accesses. Therefore, the output of this program could be any of:
Read: 0, 0
Read: 0, 5
Read: 5, 0
Read: 5, 5
In the JDK 1.5+ world, the semantics of volatile were changed so that volatile accesses affect non-volatile accesses in exactly the same way as synchronization. Therefore, only certain outputs are possible:
Read: 0, 0
Read: 0, 5
Read: 5, 0 <- not possible
Read: 5, 5
The third output, "Read: 5, 0", is not possible because reading "5" from the volatile _volN establishes a synchronization point between the two threads, which means all actions taken by t1 before the assignment to _volN must be visible to t2.
Further reading:
Fixing the Java Memory Model, Part 1
Fixing the Java Memory Model, Part 2
In your example the volatile keyword only guarantees that the last reference written, by any thread, to 'm' will be visible to any thread reading 'm' subsequently.
It doesn't guarantee anything about your get().
So using the following sequence:
Thread-1: get() returns 2
Thread-2: set(3)
Thread-1: get()
it is totally legitimate for you to get back 2 and not 3; volatile doesn't change anything about that.
But if you change your Mutable class to this:
class Mutable {
    private volatile int value;

    public int get() {
        return value;
    }

    public void set(int value) {
        this.value = value;
    }
}
Then it is guaranteed that the second get() from Thread-1 shall return 3.
Note however that volatile typically isn't the best synchronization mechanism.
In your simple get/set example (I know it's just an example), a class like AtomicInteger, which handles the synchronization properly and actually provides useful methods, would be better.
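For example, a sketch of the Mutable class rewritten around AtomicInteger (my own adaptation of the code in the question):

import java.util.concurrent.atomic.AtomicInteger;

class Mutable {
    private final AtomicInteger value = new AtomicInteger();

    public int get() {
        return value.get();
    }

    public void set(int value) {
        this.value.set(value);
    }

    // Extra operations you get compared to a bare volatile int:
    public int incrementAndGet() {
        return value.incrementAndGet();
    }

    public boolean compareAndSet(int expect, int update) {
        return value.compareAndSet(expect, update);
    }
}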
volatile only provides guarantees about the reference that is declared volatile. The members of the referenced instance don't get any synchronization.
According to Wikipedia:
(In all versions of Java) There is a global ordering on the reads and writes to a volatile variable. This implies that every thread accessing a volatile field will read its current value before continuing, instead of (potentially) using a cached value. (However, there is no guarantee about the relative ordering of volatile reads and writes with regular reads and writes, meaning that it's generally not a useful threading construct.)
(In Java 5 or later) Volatile reads and writes establish a happens-before relationship, much like acquiring and releasing a mutex.
So basically what you have is that by declaring the field volatile, interacting with it creates a "point of synchronization", after which any change will be visible to other threads. But beyond that, using get() or set() is unsynchronized. The Java spec has a more thorough explanation.
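A short sketch of that "point of synchronization" (my own example, with a hypothetical class name): data written before a volatile write becomes visible to a thread that subsequently reads the volatile field.

class FlagPublish {
    private int data;               // plain field
    private volatile boolean ready; // the synchronization point

    void writer() {
        data = 42;    // happens-before the volatile write below
        ready = true; // volatile write
    }

    void reader() {
        if (ready) {                  // volatile read
            System.out.println(data); // guaranteed to print 42 under Java 5+ semantics
        }
    }
}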
Use of volatile rather than a fully synchronized value is essentially an optimization. The optimization comes from the weaker guarantees provided for a volatile value compared with synchronized access. Premature optimization is the root of all evil; in this case, the evil could be hard to track down because it would take the form of race conditions and the like. So if you need to ask, you probably ought not to use it.
volatile does not "provide visibility" in that broad sense. Its only effect is to prevent processor-local caching of the variable itself, thus providing a happens-before relation on concurrent reads and writes of that variable. It does not affect the members of the referenced object, nor does it provide any synchronized locking.
As you haven't told us what the "correct" behaviour of your code is, the question cannot be answered.

How do you ensure multiple threads can safely access a class field?

When a class field is accessed via a getter method by multiple threads, how do you maintain thread safety? Is the synchronized keyword sufficient?
Is this safe:
public class SomeClass {
    private int val;

    public synchronized int getVal() {
        return val;
    }

    private void setVal(int val) {
        this.val = val;
    }
}
or does the setter introduce further complications?
If you use 'synchronized' on the setter here too, this code is thread-safe. However it may not be sufficiently granular; if you have 20 getters and setters and they're all synchronized, you may be creating a synchronization bottleneck.
In this specific instance, with a single int variable, then eliminating the 'synchronized' and marking the int field 'volatile' will also ensure visibility (each thread will see the latest value of 'val' when calling the getter) but it may not be synchronized enough for your needs. For example, expecting
int old = someThing.getVal();
if (old == 1) {
    someThing.setVal(2);
}
to set val to 2 if and only if it's already 1 is incorrect. For this you need an external lock, or some atomic compare-and-set method.
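A sketch of the atomic compare-and-set alternative (my own variant of SomeClass, assuming the field can live in an AtomicInteger):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicSomeClass {
    private final AtomicInteger val = new AtomicInteger();

    public int getVal() {
        return val.get();
    }

    public void setVal(int v) {
        val.set(v);
    }

    // Sets val to 2 only if it is currently 1, atomically, with no external lock.
    public boolean setTo2IfCurrently1() {
        return val.compareAndSet(1, 2);
    }
}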
I strongly suggest you read Java Concurrency In Practice by Brian Goetz et al, it has the best coverage of Java's concurrency constructs.
In addition to Cowan's comment, you could do the following for a compare and store:
synchronized (someThing) {
    int old = someThing.getVal();
    if (old == 1) {
        someThing.setVal(2);
    }
}
This works because the lock acquired by a synchronized method is implicitly the same as the object's own lock (see the Java Language Specification).
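In other words (a hedged equivalence sketch, not new behavior): a synchronized instance method locks this, which is why an external synchronized (someThing) block composes with it.

public synchronized int getVal() { // locks 'this'
    return val;
}

// ...is equivalent to:
public int getVal() {
    synchronized (this) { // same monitor
        return val;
    }
}
// so synchronized (someThing) { ... } in the caller uses the very same lock,
// provided getVal/setVal are synchronized instance methods of someThing's class.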
From my understanding you should use synchronized on both the getter and the setter methods, and that is sufficient.
Edit: Here is a link to some more information on synchronization and what not.
If your class contains just one variable, then another way of achieving thread-safety is to use the existing AtomicInteger object.
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadSafeSomeClass {
    private final AtomicInteger value = new AtomicInteger(0);

    public void setValue(int x) {
        value.set(x);
    }

    public int getValue() {
        return value.get();
    }
}
However, if you add additional variables such that they are dependent (state of one variable depends upon the state of another), then AtomicInteger won't work.
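For instance, a sketch of dependent state (loosely modeled on the book's NumberRange example; the names are mine): a lower and an upper bound with the invariant lower <= upper. Two separate atomics could not preserve the invariant across both writes, but a single lock can.

public class Range {
    private int lower, upper; // invariant: lower <= upper, guarded by 'this'

    public synchronized void setBounds(int newLower, int newUpper) {
        if (newLower > newUpper)
            throw new IllegalArgumentException("lower > upper");
        lower = newLower; // both updated under the same lock,
        upper = newUpper; // so no thread can observe a torn pair
    }

    public synchronized boolean contains(int x) {
        return lower <= x && x <= upper;
    }
}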
Echoing the suggestion to read "Java Concurrency in Practice".
For simple objects this may suffice. In most cases you should avoid the synchronized keyword because you may run into a synchronization deadlock.
Example:
public class SomeClass {
    private Object mutex = new Object();
    private int val = -1; // TODO: Adjust initialization to a reasonable start value

    public int getVal() {
        synchronized (mutex) {
            return val;
        }
    }

    private void setVal(int val) {
        synchronized (mutex) {
            this.val = val;
        }
    }
}
This assures that only one thread at a time reads or writes the instance member.
Read the book "Concurrent Programming in Java: Design Principles and Patterns" (Addison-Wesley); maybe http://java.sun.com/docs/books/tutorial/essential/concurrency/index.html is also helpful...
Synchronization exists to protect against thread interference and memory consistency errors. By synchronizing getVal(), the code guarantees that other synchronized methods on SomeClass do not execute at the same time. Since there are no other synchronized methods, it isn't providing much value. Also note that reads and writes of primitives (other than long and double) are atomic. That means that, with careful programming, one doesn't need to synchronize access to the field.
Read Synchronization.
Not really sure why this was dropped to -3. I'm simply summarizing what the Synchronization tutorial from Sun says (as well as my own experience).
Using simple atomic variable access is more efficient than accessing these variables through synchronized code, but requires more care by the programmer to avoid memory consistency errors. Whether the extra effort is worthwhile depends on the size and complexity of the application.
