How does the synchronized keyword work on an instance variable? - java

I'm looking at some legacy code that's of the form
public class Client {
    private final Cache cache;
    .....
    Client(final Cache cache) {
        this.cache = cache;
    }
    public Value get(Key key) {
        synchronized (cache) {
            return this.cache.get(key);
        }
    }
    public void put(Key k, Value v) {
        synchronized (this.cache) {
            cache.put(k, v);
        }
    }
}
I've never seen an instance variable used as a lock object like this; typically locks are dedicated final Object instances or explicit Locks from the java.util.concurrent API.
How does the synchronized keyword have any effect in this case? Isn't a new lock created for each instance of the Client object?
Would the use of the synchronized keyword force the cache to be updated before a get/put operation is applied?
Why would synchronization be necessary before a get? Is it so that the read sees the latest values, assuming another thread performed a put in the interim?

synchronized provides the same guarantees irrespective of whether the lock is a static variable or an instance variable: mutual exclusion and memory visibility. In your case, it provides thread safety at the instance level for the attribute cache.
So, coming to your questions:
You are right. Each instance of Client will have its own lock, namely the cache object it was constructed with; note this also means two Client instances sharing the same Cache share the same lock. This is useful when an instance of Client is shared between multiple threads.
Conceptually, at the end of a synchronized block, CPU-local caches are flushed to main memory, which ensures memory visibility for other threads; at the start of a synchronized block, local CPU caches are invalidated and reloaded from main memory. So yes, synchronized will cause the instance variable cache to have up-to-date values. Please see Synchronization for more details.
The reason is the same as 2. i.e., to provide memory visibility.
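To make the locking behavior concrete, here is a runnable sketch (Cache here is a stand-in type of my own; the real one is not shown in the question). Because the monitor is the cache object itself, two Client instances wrapping the same Cache contend on one lock, while Clients with different caches do not.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the question's Cache type (assumed, not from the question).
class Cache {
    private final Map<String, String> map = new HashMap<>();
    String get(String k) { return map.get(k); }
    void put(String k, String v) { map.put(k, v); }
}

class Client {
    private final Cache cache;
    Client(Cache cache) { this.cache = cache; }

    // Both methods lock the cache object, so all Clients sharing this
    // Cache instance serialize their access through the same monitor.
    String get(String key) {
        synchronized (cache) { return cache.get(key); }
    }
    void put(String k, String v) {
        synchronized (cache) { cache.put(k, v); }
    }
}
```

A put made through one Client is therefore visible to a get made through another Client wrapping the same Cache.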

Do I need getVolatile memory access semantics if the value is set with setVolatile?

Now, before some zealot quotes Knuth, forgetting what he spent his life on: the question has a mostly educational purpose, as I struggle to understand memory barriers.
Let's assume:
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class Var<T> {
    private T finalized = null;
    private boolean isSet = false;
    private T init;

    private static final VarHandle field;
    static {
        try {
            field = MethodHandles.lookup().findVarHandle(Var.class, "finalized", Object.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public Var(T init) { this.init = init; }
    public T getFinal() { return finalized; }
    public T get() { return init; }

    public void set(T val) {
        if (isSet)
            throw new IllegalStateException();
        if (val == null)
            throw new NullPointerException(); // null means finalized is uninitialized.
        init = val;
    }

    public void freeze() {
        isSet = true;
        if (!field.compareAndSet(this, null, init))
            throw new IllegalStateException();
    }
}
The idea is that a single thread accesses the object in its initial, mutable state, calling get and set. Afterwards it calls freeze and makes the object available to other threads, which use only getFinal. I want to guarantee those threads see the frozen value. I don't care what happens if multiple threads access it in the mutable state; it is considered not thread-safe at that stage.
The questions are:
do I need an additional memory barrier in getFinal?
do I need it if T is immutable (contains only final fields)?
if I share a reference to a Var before calling freeze, would a getVolatile in getFinal change things? Let's assume again that T is immutable, as otherwise, as I understand it, reader threads could see it in an uninitialized state.
Memory barriers are not a sane mental model for Java. For Java, you need to understand the Java Memory Model.
What you need to ensure is that you do not have a data race; there should be some happens-before edge between a write and a read (or another write).
In your case, you need to make sure there is a happens-before edge between the construction and configuration of the MyObject instance and the reading of that instance. The simplest approach would be:
class MyObject {
    int a, b;
    void setA(int a) { this.a = a; }
    void setB(int b) { this.b = b; }
}

class MyObjectContainer {
    volatile MyObject value;
    void set(MyObject value) { this.value = value; }
    MyObject get() { return value; }
}

MyObject object = new MyObject();
object.setA(10);
object.setB(20);
myObjectContainer.set(object);
And a different thread can then call myObjectContainer.get() safely.
The 'value' field needs to be volatile; otherwise there would be a data race. The volatile write and read generate the required happens-before edge (the volatile variable rule); in combination with the program order rule and the transitivity rule, we get a guaranteed happens-before edge between configuring the MyObject and reading it. This approach is also known under the name 'safe publication'.
The above will only work if the MyObject instance is not modified after it has been written to the myObjectContainer, i.e. it is 'effectively immutable'.
On x86, a volatile read is extremely cheap because every load already has acquire semantics: a volatile read only restricts compiler optimizations (a compiler fence) and emits no CPU memory fence. The burden on x86 falls on the volatile store.
In exceptional cases, the volatile store can be a performance bottleneck because it effectively stalls subsequent loads until the store buffer has drained. In those cases, you can play with relaxed memory ordering (officially these are then classified as data races) using either Unsafe or a VarHandle. In your case, an acquire load and release store would be the first steps down the rabbit hole. Depending on the situation, even a [StoreStore] fence before an opaque store and a [LoadLoad] fence after an opaque load would be as low as you could possibly go.
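As an illustration of that first step, here is a minimal, hedged sketch (the Box class and its field are mine, not from the question) of release/acquire publication with a VarHandle:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Hypothetical Box class: setRelease orders all prior writes before the store,
// and getAcquire orders the load before all subsequent reads. This is weaker
// than volatile (no total order over the stores) but enough for publish-then-read.
class Box {
    private Object value;

    private static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup().findVarHandle(Box.class, "value", Object.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    void publish(Object v) { VALUE.setRelease(this, v); }   // release store
    Object read() { return VALUE.getAcquire(this); }        // acquire load
}
```

A reader that observes the published reference via read() is guaranteed to also observe all writes made before publish() was called.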
Do I need getVolatile memory access semantics if the value is set with setVolatile?
The short answer is yes.
do I need an additional memory barrier in getFinal?
You need memory barrier protection, yes. Just because field.compareAndSet(...) has memory ordering around the update doesn't mean that another thread will see the update.
If an instance of Var is shared between threads such that one thread calls set() and another calls get(), then init needs to be volatile. If one thread calls freeze() and another calls getFinal(), then finalized needs to be volatile as well.
do I need it if T is immutable (contains only final fields)?
The T object will not be seen in an inconsistent state if all its fields are final, but that does not protect the Var object itself. One thread could set the immutable object T and another thread could call get and still see the init field as null.
if I share a reference to a Var before calling freeze, would a getVolatile in getFinal change things?
That doesn't change anything.
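A minimal sketch of what this answer implies (my rewrite, renamed VolatileVar to avoid confusion with the original, with the state checking omitted for brevity): with both fields volatile, the VarHandle CAS is no longer needed for visibility.

```java
// Sketch, assuming a single writer before freeze(): volatile on init and
// finalized provides the happens-before edges the answer calls for.
class VolatileVar<T> {
    private volatile T init;
    private volatile T finalized;

    VolatileVar(T init) { this.init = init; }

    T get() { return init; }
    void set(T val) { init = val; }      // pre-freeze, single-writer phase
    void freeze() { finalized = init; }  // volatile write publishes the value
    T getFinal() { return finalized; }   // volatile read sees the frozen value
}
```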

Will use of volatile impact app's performance in Android?

I'm trying to make my module instance a singleton. I came across this article, which discusses various methods of achieving singleton instance creation and also touches on broken double-checked locking and how to avoid it using volatile.
After reading more about volatile, it seems the thread will never cache this variable and will always read it from main memory. If I implement my singleton using the volatile keyword, will my app's performance suffer because it always reads from main memory?
Got the following doubts:
Where does the thread-local memory reside? In which cache layer would it be: L1, L2 or L3?
If I implement my singleton using volatile, which means I'm reading from main memory each time, does this mean I'm increasing the CPU cycles used?
Will my app take a performance hit if I use this on the UI thread?
Sample volatile implementation:
private static volatile ResourceService resourceInstance;

// lazy initialization
public static ResourceService getInstance() {
    if (resourceInstance == null) { // first check
        synchronized (ResourceService.class) {
            if (resourceInstance == null) { // double check
                // creating the instance of ResourceService only one time
                resourceInstance = new ResourceService();
            }
        }
    }
    return resourceInstance;
}

Do we need to synchronize writes if we are synchronizing reads?

I have a few doubts about synchronized blocks.
Before my questions I would like to share the answer from another related post: Link for Answer to related question. I quote Peter Lawrey from that answer:
synchronized ensures you have a consistent view of the data. This means you will read the latest value and other caches will get the latest value. Caches are smart enough to talk to each other via a special bus (not something required by the JLS, but allowed). This bus means that it doesn't have to touch main memory to get a consistent view.
If you only use synchronized, you wouldn't need volatile. Volatile is useful if you have a very simple operation for which synchronized would be overkill.
With reference to the above, I have three questions:
Q1. Suppose that in a multi-threaded application there is an object or a primitive instance field that is only read in a synchronized block (the write may happen in some other method without synchronization), and the synchronized block locks on some other object. Does declaring the field volatile (even though it is read inside the synchronized block only) make any sense?
Q2. I understand that the state of the object on which synchronization is done is consistent. I am not sure about the state of other objects and primitive fields read inside the synchronized block. Suppose changes are made without obtaining the lock but reads are done while holding it. Will the state of all objects and the values of all primitive fields read inside a synchronized block always be consistent?
Q3. [Update]: Will all fields read in a synchronized block be read from main memory regardless of what we lock on? [answered by CKing]
I have prepared reference code for my questions above.
public class Test {
    private SomeClass someObj;
    private boolean isSomeFlag;
    private Object lock = new Object();

    public SomeClass getObject() {
        return someObj;
    }

    public void setObject(SomeClass someObj) {
        this.someObj = someObj;
    }

    public void executeSomeProcess() {
        // some process...
    }

    // The synchronized block locks on the private lock object.
    // Inside the block, do the value of isSomeFlag and the state of someObj remain consistent?
    public void someMethod() {
        synchronized (lock) {
            while (isSomeFlag) {
                executeSomeProcess();
            }
            if (someObj.isLogicToBePerformed()) {
                someObj.performSomeLogic();
            }
        }
    }

    // This is a method without synchronization.
    public void setSomeFlag(boolean isSomeFlag) {
        this.isSomeFlag = isSomeFlag;
    }
}
The first thing you need to understand is that there is a subtle difference between the scenario discussed in the linked answer and the scenario you are describing: you speak about modifying a value without synchronization, whereas in the linked answer all values are modified within a synchronized context. With that in mind, let's address your questions:
Q1. Suppose that in a multi-threaded application there is an object or a primitive instance field that is only read in a synchronized block (the write may happen in some other method without synchronization), and the synchronized block locks on some other object. Does declaring it volatile make any sense?
Yes, it does make sense to declare the field volatile. Since the write does not happen in a synchronized context, there is no guarantee that the writing thread will flush the newly updated value to main memory. The reading thread may therefore still see stale values.
Suppose changes are made without obtaining the lock but reads are done while holding it. Will the state of all objects and the values of all primitive fields inside a synchronized block always be consistent?
The answer is still no; the reasoning is the same as above.
Bottom line: modifying values outside a synchronized context will not ensure that these values get flushed to main memory (the reader thread may enter the synchronized block before the writer thread does), so threads that read these values in a synchronized context may still end up reading older values, even if they get them from main memory.
Note that this question talks about primitives, so it is also important to understand that Java provides out-of-thin-air safety for 32-bit primitives (all primitives except long and double), which means you are at least assured of seeing a value some thread actually wrote (if not the latest one).
All synchronized does is acquire the lock of the object it synchronizes on; if the lock is already held, it waits for its release. It does not in any way assert that the object's internal fields won't change. For that, there is volatile.
When you synchronize on an object monitor A, it is guaranteed that another thread synchronizing on the same monitor A afterwards will see any changes made by the first thread to any object. That's the visibility guarantee provided by synchronized, nothing more.
A volatile variable guarantees visibility (for the variable only, a volatile HashMap doesn't mean the contents of the map would be visible) between threads regardless of any synchronized blocks.
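A minimal sketch of the simplest fix for the flag in the Test class above (my illustration, not from the answers): making the field volatile gives the unsynchronized write its visibility guarantee without any lock.

```java
// With volatile, a write by one thread is visible to subsequent reads by other
// threads. This covers visibility for the single flag only; compound
// read-modify-write operations would still need synchronized or an Atomic type.
class Flags {
    private volatile boolean isSomeFlag;

    void setSomeFlag(boolean v) { isSomeFlag = v; }
    boolean isSomeFlag() { return isSomeFlag; }
}
```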

ConcurrentHashMap of Future and double-check locking

Given:
A lazily initialized singleton class implemented with the double-checked locking pattern, with all the relevant volatile and synchronized stuff in getInstance. This singleton launches asynchronous operations via an ExecutorService.
There are seven types of tasks, each identified by a unique key.
When a task is launched, it is stored in a cache based on ConcurrentHashMap.
When a client asks for a task: if the task in the cache is done, a new one is launched and cached; if it is running, the task is retrieved from the cache and passed to the client.
Here is an excerpt of the code:
private static volatile TaskLauncher instance;
private ExecutorService threadPool;
private ConcurrentHashMap<String, Future<Object>> tasksCache;

private TaskLauncher() {
    threadPool = Executors.newFixedThreadPool(7);
    tasksCache = new ConcurrentHashMap<String, Future<Object>>();
}

public static TaskLauncher getInstance() {
    if (instance == null) {
        synchronized (TaskLauncher.class) {
            if (instance == null) {
                instance = new TaskLauncher();
            }
        }
    }
    return instance;
}

public Future<Object> getTask(String key) {
    Future<Object> expectedTask = tasksCache.get(key);
    if (expectedTask == null || expectedTask.isDone()) {
        synchronized (tasksCache) {
            if (expectedTask == null || expectedTask.isDone()) {
                // Make some stuff to create a new task
                expectedTask = [...];
                threadPool.execute(expectedTask);
                tasksCache.put(key, expectedTask);
            }
        }
    }
    return expectedTask;
}
I have one major question, and another minor one:
Do I need to perform double-checked locking in my getTask method? I know ConcurrentHashMap is thread-safe for read operations, so my get(key) is thread-safe and may not need double-checked locking (but I am still quite unsure of this…). But what about the isDone() method of Future?
How do you choose the right lock object for a synchronized block? I know it must not be null, so I first use the TaskLauncher.class object in getInstance() and then tasksCache, already initialized, in the getTask(String key) method. Does this choice actually matter?
Do I need to perform double-checked locking in my getTask method?
You don't need to do double-checked locking (DCL) here. (In fact, it is very rare that you need to use DCL. In 99.9% of cases, regular locking is just fine. Regular locking on a modern JVM is fast enough that the performance benefit of DCL is usually too small to make a noticeable difference.)
However, synchronization is necessary unless you declare tasksCache to be final. And if tasksCache is not final, then simple locking should be just fine.
I know ConcurrentHashMap is thread-safe for read operations ...
That's not the issue. The issue is whether reading the value of the tasksCache reference will give you the right value if the TaskLauncher is created and used on different threads. The thread-safety of fetching a reference from a variable is not affected one way or the other by the thread-safety of the referenced object.
But what about the isDone() method of Future?
Again ... that has no bearing on whether or not you need to use DCL or other synchronization.
For the record, the memory semantics "contract" for Future is specified in the javadoc:
"Memory consistency effects: Actions taken by the asynchronous computation happen-before actions following the corresponding Future.get() in another thread."
In other words, no extra synchronization is required when you call get() on a (properly implemented) Future.
How do you choose the right lock object for a synchronized block?
The locking serves to synchronize access to the variables read and written by the different threads while holding the lock.
In theory, you could write your entire application to use just one lock. But then one thread could end up waiting for another even though the first thread doesn't need the variables the other one is using. So normal practice is to use a lock associated with the variables being protected.
The other thing you need to ensure is that when two threads need to access the same set of variables, they use the same object (or objects) as locks. If they use different locks, they don't achieve proper synchronization...
(There are also issues about whether lock on this or on a private lock, and about the order in which locks should be acquired. But these are beyond the scope of the question you asked.)
Those are the general "rules". To decide in a specific case, you need to understand precisely what you are trying to protect, and choose the lock accordingly.
The AbstractQueuedSynchronizer used inside FutureTask keeps the task's state in a volatile (thread-safe) variable, so you need not worry about the isDone() method:
private volatile int state;
The choice of lock object is based on the instance type and the situation:
Let's say you have multiple objects whose synchronized blocks lock on TaskLauncher.class; then those blocks in all the instances are guarded by that single lock (use this approach if you want a single shared memory across all instances).
If each instance instead has its own state shared only between its threads and methods, synchronize on this; using this also saves you an extra lock object.
In your case, TaskLauncher.class, tasksCache and this are all the same in terms of synchronization, since the class is a singleton.
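As a sketch of an alternative (my suggestion, with illustrative names; the question's task-creation code is elided), ConcurrentHashMap.compute makes the check-and-replace atomic per key, so no separate synchronized block or double check is needed:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// compute() runs the remapping function atomically for the given key: a missing
// or finished task is replaced by a freshly submitted one; a running task is kept.
class TaskCache {
    private final ExecutorService pool = Executors.newFixedThreadPool(7);
    private final ConcurrentHashMap<String, Future<Object>> cache = new ConcurrentHashMap<>();

    Future<Object> getTask(String key, Callable<Object> work) {
        return cache.compute(key, (k, existing) ->
                (existing == null || existing.isDone()) ? pool.submit(work) : existing);
    }

    void shutdown() { pool.shutdown(); }
}
```

The remapping function should stay cheap, since compute holds the bin lock for the key while it runs; submitting to an executor (rather than running the task inline) keeps it short.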

Concurrent access to unmodifiableMap

@Singleton
@LocalBean
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class DeliverersHolderSingleton {
    private volatile Map<String, Deliverer> deliverers;

    @PostConstruct
    private void init() {
        Map<String, Deliverer> deliverersMod = new HashMap<>();
        for (String delivererName : delivererNames) {
            /* getting deliverer by name */
            deliverersMod.put(delivererName, deliverer);
        }
        deliverers = Collections.unmodifiableMap(deliverersMod);
    }

    public Deliverer getDeliverer(String delivererName) {
        return deliverers.get(delivererName);
    }

    @Schedule(minute = "*", hour = "*")
    public void maintenance() {
        init();
    }
}
The singleton is used for storing data; the data is updated once per minute.
Is it possible that reads from the unmodifiableMap will have a synchronization problem? Is it possible that reordering occurs in the init method, so that the reference to the collection is published before the collection is completely filled?
The Java Memory Model guarantees that there is a happens-before relationship between a write and a subsequent read to a volatile variable. In other words, if you write to a volatile variable and subsequently read that same variable, you have the guarantee that the write operation will be visible, even if multiple threads are involved:
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
It goes further and guarantees that any operation that happened before the write operation will also be visible at the reading point (thanks to the program order rule and the fact that the happens-before relationship is transitive).
Your getDeliverer method reads from the volatile variable, so it will see the latest write performed on the line deliverers = Collections.unmodifiableMap(deliverersMod); as well as the preceding operations where the map is populated.
So your code is thread-safe, and your getDeliverer method will return a result based on the latest version of your map.
Thread-safety issues here:
multiple reads from the HashMap: thread-safe, because concurrent reads are allowed as long as there are no modifications to the collection, and writes to the HashMap will not happen because the map is wrapped in unmodifiableMap();
read/write on deliverers: thread-safe, because all Java reference assignments are atomic and the field is volatile.
I can see no thread-unsafe operations here.
I would like to note that the name of the init() method is misleading; it suggests it is called once during initialization. I'd suggest calling it rebuild() or recreate().
According to the reordering grid found at http://g.oswego.edu/dl/jmm/cookbook.html, a first operation that is a normal store cannot be reordered with a second operation that is a volatile store, so in your case, as long as the unmodifiable map is not null, there won't be any reordering problems.
Also, all writes that occur prior to a volatile store will be visible, so you will not see any publishing issues.
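The same publication pattern, stripped of the EJB container (class and field names are mine), can be sketched as follows: build the map fully, wrap it, then publish it with a single volatile write.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// The volatile write to snapshot happens-after the map is fully populated, so
// any reader that sees the new reference also sees its complete contents.
class SnapshotHolder {
    private volatile Map<String, String> snapshot = Collections.emptyMap();

    void rebuild(Map<String, String> source) {
        Map<String, String> m = new HashMap<>(source);  // populate first
        snapshot = Collections.unmodifiableMap(m);      // then publish (volatile write)
    }

    String get(String key) {
        return snapshot.get(key);                       // volatile read
    }
}
```

Readers never lock; they may briefly see the previous snapshot during a rebuild, but never a partially filled map.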
