Is the following code thread-safe? [duplicate] - java

The following code uses a double-checked pattern to initialize a variable. I believe the code is thread safe, as the map won't be partially assigned even if two threads enter the getMap() method at the same time, so I don't have to make the map volatile either. Is the reasoning correct? NOTE: The map is immutable once it is initialized.
class A {
    private Map<String, Integer> map;
    private final Object lock = new Object();

    public static Map<String, Integer> prepareMap() {
        Map<String, Integer> map = new HashMap<>();
        map.put("test", 1);
        return map;
    }

    public Map<String, Integer> getMap() {
        if (map == null) {
            synchronized (lock) {
                if (map == null) {
                    map = prepareMap();
                }
            }
        }
        return map;
    }
}

According to the top names in the Java world, no, it is not thread safe. You can read why here: http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
You are better off using a ConcurrentHashMap or synchronizing your Map.
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ConcurrentHashMap.html
Edit: If you only want to make the initialization of the map thread safe (so that two or more maps are not accidentally created), then you can do one of two things: 1) initialize the map when it is declared, or 2) make the getMap() method synchronized. A sketch of both options follows.
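Roughly, assuming the same class A and prepareMap() from the question (class B is a made-up name for the second variant):

import java.util.HashMap;
import java.util.Map;

// Option 1: eager initialization - the final field is safely published.
class A {
    private final Map<String, Integer> map = prepareMap();

    public static Map<String, Integer> prepareMap() {
        Map<String, Integer> map = new HashMap<>();
        map.put("test", 1);
        return map;
    }

    public Map<String, Integer> getMap() {
        return map;
    }
}

// Option 2: lazy initialization with a fully synchronized getter.
class B {
    private Map<String, Integer> map;

    public synchronized Map<String, Integer> getMap() {
        if (map == null) {
            map = A.prepareMap();
        }
        return map;
    }
}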

No, your reasoning is wrong. Access to the map is not thread safe, because threads that call getMap() after the initialization may never enter synchronized(lock) and thus are not in a happens-before relationship with the initializing thread.
The map has to be volatile.

The code could be optimized by inlining to
public Map<String, Integer> getMap()
{
    if (map == null)
    {
        synchronized (lock)
        {
            if (map == null)
            {
                map = new HashMap<>(); // partial map exposed
                map.put("test", 1);
            }
        }
    }
    return map;
}
Having a HashMap under concurrent read and write is VERY dangerous, don't do it. Google HashMap infinite loop.
Solutions -
Expand synchronized to the entire method, so that reading map variable is also under lock. This is a little expensive.
Declare map as volatile, to prevent the reordering optimization. This is simple and pretty cheap (see the sketch after this list).
Use an immutable map. Its final fields will also prevent exposing partial object state. In your particular example, we can use Collections.singletonMap. But for maps with more entries, I'm not sure the JDK has a public implementation.
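Here is what the volatile variant (the second option above) might look like, reusing the names from the question; the local result variable just avoids a second volatile read on the fast path:

import java.util.HashMap;
import java.util.Map;

class A {
    // volatile guarantees that a thread seeing a non-null map also sees
    // every write made to the map before it was assigned (happens-before).
    private volatile Map<String, Integer> map;
    private final Object lock = new Object();

    public static Map<String, Integer> prepareMap() {
        Map<String, Integer> m = new HashMap<>();
        m.put("test", 1);
        return m;
    }

    public Map<String, Integer> getMap() {
        Map<String, Integer> result = map;       // one volatile read
        if (result == null) {
            synchronized (lock) {
                result = map;
                if (result == null) {
                    map = result = prepareMap(); // publish the fully built map
                }
            }
        }
        return result;
    }
}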

This is just one example of how things can go wrong. To fully understand the issues, there is no substitute for reading The "Double-Checked Locking is Broken" Declaration, referenced in a prior answer.
To get anything approaching the full flavor, think about two processors, A and B, each with its own caches, and a main memory that they share.
Suppose Thread A, running on Processor A, first calls getMap. It does several assignments inside the synchronized block. Suppose the assignment to map gets written to main memory first, before Thread A reaches the end of the synchronized block.
Meanwhile, on Processor B, Thread B also calls getMap, and does not happen to have the memory location representing map in its cache. It goes out to main memory to get it, and its read happens to hit just after Thread A's assignment to map, so it sees a non-null map. Thread B does not enter the synchronized block.
At this point, Thread B can go ahead and attempt to use the HashMap, despite the fact that Thread A's work on creating it has not yet been written to main memory. Thread B may even have the memory pointed to by map in its cache because of a prior use.
If you are tempted to try to work around this, consider the following quote from the referenced article:
There are lots of reasons it doesn't work. The first couple of reasons
we'll describe are more obvious. After understanding those, you may be
tempted to try to devise a way to "fix" the double-checked locking
idiom. Your fixes will not work: there are more subtle reasons why
your fix won't work. Understand those reasons, come up with a better
fix, and it still won't work, because there are even more subtle
reasons.
This answer only contains one of the most obvious reasons.

No, it is not thread safe.
The basic reason is that you can have reordering of operations you don't even see in the Java code. Let's imagine a similar pattern with an even simpler class:
class Simple {
int value = 42;
}
In the analogous getSimple() method, you assign /* non-volatile */ simple = new Simple (). What happens here?
the JVM allocates some space for the new object
the JVM sets some bit of this space to 42 (for value)
the JVM returns the address of this space, which is then assigned to simple
Without synchronization instructions to prohibit it, these instructions can be reordered. In particular, steps 2 and 3 can be reordered such that simple gets the new object's address before the constructor finishes! If another thread then reads simple.value, it'll see a value of 0 (the field's default value) instead of 42. This is called seeing a partially-constructed object. Yes, that's weird; yes, I've seen things like that happen. It's a real bug.
You can imagine how if the object is a non-trivial object, like HashMap, the problem is even worse; there are a lot more operations, and so more possibilities for weird ordering.
Marking the field as volatile is a way of telling the JVM, "any thread that reads a value from this field must also read all operations that happened before that value was written." That prohibits those weird reorderings, which guarantees you'll see the fully-constructed object.
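As a hedged sketch, this is what that looks like for the Simple class above (SimpleHolder is a made-up wrapper class):

class SimpleHolder {
    // volatile forbids the reordering described above: a reader that sees a
    // non-null reference is also guaranteed to see value == 42.
    private volatile Simple simple;

    Simple getSimple() {
        Simple result = simple;
        if (result == null) {
            synchronized (this) {
                result = simple;
                if (result == null) {
                    simple = result = new Simple();
                }
            }
        }
        return result;
    }
}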

Unless you declare the map field as volatile, this code may be translated to non-thread-safe bytecode.
The compiler may optimize the expression map == null, cache the value of the expression, and thus read the map field only once.
Declaring the field as volatile Map<String, Integer> map instructs the Java VM to always read the field map when it is accessed. This forbids such an optimization by the compiler.
Please refer to JLS Chapter 17. Threads and Locks


Is "double checked locking" broken here in java?

I find an example for double checked locking.
However, I think this example is invalid because it's possible that another thread may see a non-null reference to the DoorControlManager object for door 1 but see the default values for its fields rather than the values set in the constructor.
(Ref: https://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html)
Could you let me know whether I am right?
Thanks a lot!
public class DoorControlManager {
    private static HashMap<Integer, DoorControlManager> mInstances = new HashMap<>();

    public static DoorControlManager getInstance(int door) {
        if (!mInstances.containsKey(door)) {
            synchronized (mInstances) {
                if (!mInstances.containsKey(door)) {
                    mInstances.put(door, new DoorControlManager(door));
                }
            }
        }
        return mInstances.get(door);
    }
    ...
}
Yes this code is broken, though not for the normal reason.
In this case, you have different threads accessing a HashMap without proper synchronization. Since HashMap is not a thread-safe class, this is not thread safe. It is possible that the first containsKey call will see stale values in the internals of the map and behave in unspecified (implementation-dependent) ways.
Making "simple" changes to concurrency sensitive code can completely destroy the properties that make the original version thread-safe. If you are going to attempt to write "clever" code like this, you need to have a deep understanding of Java concurrency ... and how the Java Memory Model really works.
There are a couple of ways that this code could be written correctly:
Use a ConcurrentHashMap and implement the getInstance method as:
return mInstances.computeIfAbsent(
        door, k -> new DoorControlManager(door));
Keep using a HashMap and don't use the DCL pattern. Simply lock before testing (see the sketch after the note below).
Note that the DCL initialization pattern in Java 5+ is not broken, provided that you are initializing a single field and the field is declared as volatile. But there are other (better) ways to achieve the same effect, so its use is not recommended.
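A minimal sketch of the second option (lock before testing), reusing the names from the question; the constructor body is assumed:

import java.util.HashMap;
import java.util.Map;

public class DoorControlManager {
    private static final Map<Integer, DoorControlManager> mInstances = new HashMap<>();

    public static DoorControlManager getInstance(int door) {
        // Every access to the plain HashMap, including the initial lookup,
        // happens while holding the lock, so no DCL is needed.
        synchronized (mInstances) {
            DoorControlManager manager = mInstances.get(door);
            if (manager == null) {
                manager = new DoorControlManager(door);
                mInstances.put(door, manager);
            }
            return manager;
        }
    }

    private DoorControlManager(int door) {
        // ... (construction details as in the question)
    }
}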

Is iterating over a list retrieved in a synchronized block thread-safe?

I am a bit confused regarding one pattern I have seen in some legacy code of ours.
The controller uses a map as a cache, with an approach that should be thread safe; however, I am still not confident it indeed is. We have a map which is properly synchronized during addition and retrieval, but there is a bit of logic outside of the synchronized block that does some additional filtering.
(the map itself and the lists are never accessed outside of this method, so concurrent modification is not an issue; the map holds some stable parameters, which basically never change, but are used often).
The code looks like the following sample:
public class FooBarController {

    private final Map<String, List<FooBar>> fooBarMap =
            new HashMap<String, List<FooBar>>();

    public FooBar getFooBar(String key, String foo, String bar) {
        List<FooBar> foobarList;
        synchronized (fooBarMap) {
            if (fooBarMap.get(key) == null) {
                foobarList = queryDbByKey(key);
                fooBarMap.put(key, foobarList);
            } else {
                foobarList = fooBarMap.get(key);
            }
        }
        for (FooBar fooBar : foobarList) {
            if (foo.equals(fooBar.getFoo()) && bar.equals(fooBar.getBar()))
                return fooBar;
        }
        return null;
    }

    private List<FooBar> queryDbByKey(String key) {
        // ... (simple Hibernate-query)
    }

    // ...
}
Based on what I know about the JVM memory model, this should be fine, since if one thread populates a list, another one can only retrieve it from the map with proper synchronization in place, ensuring that the entries of the list are visible. (Putting the list happens-before getting it.)
However, we keep seeing cases, where an entry expected to be in the map is not found, combined with the typical notorious symptoms of concurrency issues (e.g. intermittent failures in production, which I cannot reproduce in my development environment; different threads can properly retrieve the value etc.)
I am wondering if iterating through the elements of the List like this is thread-safe?
The code you provided is correct in terms of concurrency. Here are the guarantees:
only one thread at a time adds values to the map, because of synchronization on the map object
values added by one thread become visible to all other threads that enter the synchronized block
Given that, you can be sure that all threads that iterate a list see the same elements. The issues you described are indeed strange but I doubt they're related to the code you provided.
It could be thread safe only if all access to fooBarMap is synchronized. A little out of scope, but it may be safer to use a ConcurrentHashMap.
There is a great article on how hashmaps can be synchronized here.
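If you do switch to ConcurrentHashMap, a sketch of how the check-then-act could be collapsed into one atomic call (assuming Java 8+ and the same FooBar and queryDbByKey from the question):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FooBarController {
    private final Map<String, List<FooBar>> fooBarMap = new ConcurrentHashMap<>();

    public FooBar getFooBar(String key, String foo, String bar) {
        // computeIfAbsent is atomic per key, so the DB query runs at most once per key.
        List<FooBar> foobarList = fooBarMap.computeIfAbsent(key, this::queryDbByKey);
        for (FooBar fooBar : foobarList) {
            if (foo.equals(fooBar.getFoo()) && bar.equals(fooBar.getBar())) {
                return fooBar;
            }
        }
        return null;
    }

    private List<FooBar> queryDbByKey(String key) {
        // ... (simple Hibernate query, as in the question)
        return Collections.emptyList();
    }
}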
In a situation like this, the best option is to use a ConcurrentHashMap.
Verify that all update-read sequences happen in order.
As I understood from your question, there is a fixed set of params which never changes. One of the ways I prefer in a situation like this is:
I. Create the map cache during start up and keep only one instance of it.
II. Read that map instance anytime, anywhere in the application.
In the for loop you are returning a reference to the fooBar objects in foobarList.
So the caller of getFooBar() has access to the map's contents through this fooBar reference.
Try cloning fooBar before returning it from getFooBar().

Java: is using synchronized(this) an advisable practice when creating a ConcurrentHashMap object?

I just finished developing a java web service server for a distributed programming course I am attending. One of the requirements was to guarantee multi-thread safety to our project hence I decided to use ConcurrentHashMap objects to store my data.
At the end of it all I am left with a question regarding this snippet of code:
public List<THost> getHList() throws ClusterUnavailable_Exception {
    logger.entering(logger.getName(), "getHList");
    if (hMap == null) {
        synchronized (this) {
            if (hMap == null) {
                hMap = createHMap();
            }
        }
    }
    if (hMap == null) {
        ClusterUnavailable cu = new ClusterUnavailable();
        cu.setMessage("Data unavailable.");
        ClusterUnavailable_Exception exc =
                new ClusterUnavailable_Exception("Data unavailable.", new ClusterUnavailable());
        throw exc;
    } else {
        List<THost> hList = new ArrayList<THost>(hMap.values());
        logger.info("Returning list of hosts. Number of hosts returned = " + hList.size());
        logger.exiting(logger.getName(), "getHList");
        return hList;
    }
}
do I have to use the synchronized statement when creating the ConcurrentHashMap object itself in order to guarantee that the service will not have any unpredictable behavior in a multi-threaded environment?
Don't bother. Eagerly initialize the Map, make the field final, and drop the synchronization until you have proven that it is actually necessary. The cost is minuscule and the "obviously safe and correct" solution will almost never be too slow.
You mentioned this is a class project -- focus on getting the code working. Concurrency is hard enough without inventing additional obstacles that you must then hurdle over.
The simple solution is to avoid the problem by eagerly initializing. And unless you have clear evidence (i.e. profiling) that eager initialization is a performance problem, that is also the best solution.
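A hedged sketch of what eager initialization could look like here; the enclosing class name and the map contents are assumptions, while ConcurrentHashMap<BigInteger, THost> matches the type used elsewhere in the question:

import java.math.BigInteger;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClusterService {
    // Eagerly initialized final field: assigned before the constructor finishes,
    // so it is safely published to every thread that later reads it.
    private final Map<BigInteger, THost> hMap = createHMap();

    private static Map<BigInteger, THost> createHMap() {
        Map<BigInteger, THost> map = new ConcurrentHashMap<>();
        // ... populate the map (application-specific)
        return map;
    }
}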
As to your question, the answer is that the synchronized block is necessary for correctness. Without it you can get the following sequence of events.
thread 1 calls getHList()
thread 1 sees that hMap is null and starts to create a map.
thread 2 calls getHList()
thread 2 sees that hMap is null and starts to create a map.
thread 1 finishes creating, and assigns the new map to hMap, and returns that map.
thread 2 finishes creating, and assigns the second new map to hMap, and returns that map.
In short, thread 1 and thread 2 could get different maps if they simultaneously call getHList() while hMap has its initial null value.
(In the above, I'm assuming that getHList() is a getter for hMap. However, the method as written won't compile, and its declared return type doesn't match the type of hMap ... so it is unclear what it is really intended to do.)
The line below has nothing to do with ConcurrentHashMap; it's just creating an instance of a ConcurrentHashMap object.
It's just like synchronizing any object creation in Java.
hMap=new ConcurrentHashMap<BigInteger, THost>();
The double-checked locking pattern is broken before Java 1.5 (and is inefficient in Java 1.6 and later). See: http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
Consider using an initialization-on-demand holder or a single-element enum type.
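For example, a minimal sketch of the initialization-on-demand holder idiom (MapProvider and the map contents are made up for illustration):

import java.util.HashMap;
import java.util.Map;

class MapProvider {
    // The JVM initializes Holder lazily, on first access through getMap(),
    // and class initialization is guaranteed to be thread safe.
    private static class Holder {
        static final Map<String, Integer> INSTANCE = createMap();

        private static Map<String, Integer> createMap() {
            Map<String, Integer> m = new HashMap<>();
            m.put("test", 1);
            return m;
        }
    }

    static Map<String, Integer> getMap() {
        return Holder.INSTANCE;
    }
}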

Java concurrency scenario -- do I need synchronization or not?

Here's the deal. I have a hash map containing data I call "program codes", it lives in an object, like so:
class Metadata
{
    private HashMap validProgramCodes;
    public HashMap getValidProgramCodes() { return validProgramCodes; }
    public void setValidProgramCodes(HashMap h) { validProgramCodes = h; }
}
I have lots and lots of reader threads each of which will call getValidProgramCodes() once and then use that hashmap as a read-only resource.
So far so good. Here's where we get interesting.
I want to put in a timer which every so often generates a new list of valid program codes (never mind how), and calls setValidProgramCodes.
My theory -- which I need help to validate -- is that I can continue using the code as is, without putting in explicit synchronization. It goes like this:
At the time that validProgramCodes are updated, the value of validProgramCodes is always good -- it is a pointer to either the new or the old hashmap. This is the assumption upon which everything hinges. A reader who has the old hashmap is okay; he can continue to use the old value, as it will not be garbage collected until he releases it. Each reader is transient; it will die soon and be replaced by a new one who will pick up the new value.
Does this hold water? My main goal is to avoid costly synchronization and blocking in the overwhelming majority of cases where no update is happening. We only update once per hour or so, and readers are constantly flickering in and out.
Use Volatile
Is this a case where one thread cares what another is doing? Then the JMM FAQ has the answer:
Most of the time, one thread doesn't care what the other is doing. But when it does, that's what synchronization is for.
In response to those who say that the OP's code is safe as-is, consider this: There is nothing in Java's memory model that guarantees that this field will be flushed to main memory when a new thread is started. Furthermore, a JVM is free to reorder operations as long as the changes aren't detectable within the thread.
Theoretically speaking, the reader threads are not guaranteed to see the "write" to validProgramCodes. In practice, they eventually will, but you can't be sure when.
I recommend declaring the validProgramCodes member as "volatile". The speed difference will be negligible, and it will guarantee the safety of your code now and in future, whatever JVM optimizations might be introduced.
Here's a concrete recommendation:
import java.util.Collections;

class Metadata {
    private volatile Map validProgramCodes = Collections.emptyMap();

    public Map getValidProgramCodes() {
        return validProgramCodes;
    }

    public void setValidProgramCodes(Map h) {
        if (h == null)
            throw new NullPointerException("validProgramCodes == null");
        validProgramCodes = Collections.unmodifiableMap(new HashMap(h));
    }
}
Immutability
In addition to wrapping it with unmodifiableMap, I'm copying the map (new HashMap(h)). This makes a snapshot that won't change even if the caller of setter continues to update the map "h". For example, they might clear the map and add fresh entries.
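For example (a fragment, assuming meta is an instance of the Metadata class above and the entries are made up):

Map codes = new HashMap();
codes.put("A1", "Program One");
meta.setValidProgramCodes(codes);   // readers see an unmodifiable snapshot

codes.clear();                      // does not affect the published snapshot
codes.put("B2", "Program Two");
meta.setValidProgramCodes(codes);   // publishes a second, independent snapshot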
Depend on Interfaces
On a stylistic note, it's often better to declare APIs with abstract types like List and Map, rather than a concrete types like ArrayList and HashMap. This gives flexibility in the future if concrete types need to change (as I did here).
Caching
The result of assigning "h" to "validProgramCodes" may simply be a write to the processor's cache. Even when a new thread starts, "h" will not be visible to a new thread unless it has been flushed to shared memory. A good runtime will avoid flushing unless it's necessary, and using volatile is one way to indicate that it's necessary.
Reordering
Assume the following code:
HashMap codes = new HashMap();
codes.putAll(source);
meta.setValidProgramCodes(codes);
If setValidProgramCodes is simply the OP's validProgramCodes = h;, the compiler is free to reorder the code to something like this:
1: meta.validProgramCodes = codes = new HashMap();
2: codes.putAll(source);
Suppose after execution of writer line 1, a reader thread starts running this code:
1: Map codes = meta.getValidProgramCodes();
2: Iterator i = codes.entrySet().iterator();
3: while (i.hasNext()) {
4: Map.Entry e = (Map.Entry) i.next();
5: // Do something with e.
6: }
Now suppose that the writer thread calls "putAll" on the map between the reader's line 2 and line 3. The map underlying the Iterator has experienced a concurrent modification, and throws a runtime exception—a devilishly intermittent, seemingly inexplicable runtime exception that was never produced during testing.
Concurrent Programming
Any time you have one thread that cares what another thread is doing, you must have some sort of memory barrier to ensure that actions of one thread are visible to the other. If an event in one thread must happen before an event in another thread, you must indicate that explicitly. There are no guarantees otherwise. In practice, this means volatile or synchronized.
Don't skimp. It doesn't matter how fast an incorrect program fails to do its job. The examples shown here are simple and contrived, but rest assured, they illustrate real-world concurrency bugs that are incredibly difficult to identify and resolve due to their unpredictability and platform-sensitivity.
Additional Resources
The Java Language Specification - 17 Threads and Locks sections: §17.3 and §17.4
The JMM FAQ
Doug Lea's concurrency books
No, the code example is not safe, because there is no safe publication of any new HashMap instances. Without any synchronization, there is a possibility that a reader thread will see a partially initialized HashMap.
Check out #erickson's explanation under "Reordering" in his answer. Also I can't recommend Brian Goetz's book Java Concurrency in Practice enough!
Whether or not it is okay with you that reader threads might see old (stale) HashMap references, or might even never see a new reference, is beside the point. The worst thing that can happen is that a reader thread might obtain reference to and attempt to access a HashMap instance that is not yet initialized and not ready to be accessed.
No, by the Java Memory Model (JMM), this is not thread-safe.
There is no happens-before relation between writing and reading the HashMap implementation objects. So, although the writer thread appears to write out the object first and then the reference, a reader thread may not see the same order.
As also mentioned, there is no guarantee that the reader thread will ever see the new value. In practice, with current compilers on existing hardware, the value should get updated, unless the loop body is sufficiently small that it can be fully inlined.
So, making the reference volatile is adequate under the new JMM. It is unlikely to make a substantial difference to system performance.
The moral of this story: threading is difficult. Don't try to be clever, because sometimes (maybe not on your test system) you won't be clever enough.
As others have already noted, this is not safe and you shouldn't do this. You need either volatile or synchronized here to force other threads to see the change.
What hasn't been mentioned is that synchronized and especially volatile are probably a lot faster than you think. If it's actually a performance bottleneck in your app, then I'll eat this web page.
Another option (probably slower than volatile, but YMMV) is to use a ReentrantReadWriteLock to protect access so that multiple concurrent readers can read it. And if that's still a performance bottleneck, I'll eat this whole web site.
public class Metadata {
    private HashMap validProgramCodes;
    private ReadWriteLock lock = new ReentrantReadWriteLock();

    public HashMap getValidProgramCodes() {
        lock.readLock().lock();
        try {
            return validProgramCodes;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void setValidProgramCodes(HashMap h) {
        lock.writeLock().lock();
        try {
            validProgramCodes = h;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
I think your assumptions are correct. The only thing I would do is make validProgramCodes volatile.
private volatile HashMap validProgramCodes;
This way, when you update the "pointer" of validProgramCodes, you guarantee that all threads access the same latest HashMap "pointer", because they don't rely on a local thread cache and go directly to memory.
The assignment will work as long as you're not concerned about reading stale values, and as long as you can guarantee that your HashMap is properly populated on initialization. You should at least wrap the HashMap with Collections.unmodifiableMap to guarantee that your readers won't be changing or deleting objects from the map, and to avoid multiple threads stepping on each other's toes and invalidating iterators when other threads modify it.
(The writer above is right about volatile; should've seen that.)
While this is not the best solution for this particular problem (erickson's idea of a new unmodifiableMap is), I'd like to take a moment to mention the java.util.concurrent.ConcurrentHashMap class introduced in Java 5, a version of HashMap specifically built with concurrency in mind. This construct does not block on reads.
Check this post about concurrency basics. It should be able to answer your question satisfactorily.
http://walivi.wordpress.com/2013/08/24/concurrency-in-java-a-beginners-introduction/
I think it's risky. Threading results in all kinds of subtle issues that are a giant pain to debug. You might want to look at FastHashMap, which is intended for read-only threading cases like this.
At the least, I'd also declare validProgramCodes to be volatile so that the reference won't get optimized into a register or something.
If I read the JLS correctly (no guarantees there!), accesses to references are always atomic, period. See Section 17.7 Non-atomic Treatment of double and long
So, if the access to a reference is always atomic and it doesn't matter what instance of the returned Hashmap the threads see, you should be OK. You won't see partial writes to the reference, ever.
Edit: After review of the discussion in the comments below and other answers, here are references/quotes from Doug Lea's book (Concurrent Programming in Java, 2nd Ed), p. 94, section 2.2.7.2 Visibility, item #3:
"The first time a thread accesses a field of an object, it sees either the initial value of the field or the value since written by some other thread."
On p. 94, Lea goes on to describe risks associated with this approach:
The memory model guarantees that, given the eventual occurrence of the above operations, a particular update to a particular field made by one thread will eventually be visible to another. But eventually can be an arbitrarily long time.
So when it absolutely, positively, must be visible to any calling thread, volatile or some other synchronization barrier is required, especially in long running threads or threads that access the value in a loop (as Lea says).
However, in the case implied by the question, where threads are short-lived, new threads serve new readers, and reading stale data does not impact the application, synchronization is not required.
#erickson's answer is the safest in this situation, guaranteeing that other threads will see the changes to the HashMap reference as they occur. I'd suggest following that advice simply to avoid the confusion over the requirements and implementation that resulted in the "down votes" on this answer and the discussion below.
I'm not deleting the answer in the hope that it will be useful. I'm not looking for the "Peer Pressure" badge... ;-)

Is it safe to get values from a java.util.HashMap from multiple threads (no modification)?

There is a case where a map will be constructed, and once it is initialized, it will never be modified again. It will however, be accessed (via get(key) only) from multiple threads. Is it safe to use a java.util.HashMap in this way?
(Currently, I'm happily using a java.util.concurrent.ConcurrentHashMap, and have no measured need to improve performance, but am simply curious if a simple HashMap would suffice. Hence, this question is not "Which one should I use?" nor is it a performance question. Rather, the question is "Would it be safe?")
Jeremy Manson, the god when it comes to the Java Memory Model, has a three part blog on this topic - because in essence you are asking the question "Is it safe to access an immutable HashMap" - the answer to that is yes. But you must answer the predicate to that question which is - "Is my HashMap immutable". The answer might surprise you - Java has a relatively complicated set of rules to determine immutability.
For more info on the topic, read Jeremy's blog posts:
Part 1 on Immutability in Java:
http://jeremymanson.blogspot.com/2008/04/immutability-in-java.html
Part 2 on Immutability in Java:
http://jeremymanson.blogspot.com/2008/07/immutability-in-java-part-2.html
Part 3 on Immutability in Java:
http://jeremymanson.blogspot.com/2008/07/immutability-in-java-part-3.html
Your idiom is safe if and only if the reference to the HashMap is safely published. Rather than anything relating to the internals of HashMap itself, safe publication deals with how the constructing thread makes the reference to the map visible to other threads.
Basically, the only possible race here is between the construction of the HashMap and any reading threads that may access it before it is fully constructed. Most of the discussion is about what happens to the state of the map object, but this is irrelevant since you never modify it - so the only interesting part is how the HashMap reference is published.
For example, imagine you publish the map like this:
class SomeClass {
    public static HashMap<Object, Object> MAP;

    public static synchronized void setMap(HashMap<Object, Object> m) {
        MAP = m;
    }
}
... and at some point setMap() is called with a map, and other threads are using SomeClass.MAP to access the map, and check for null like this:
HashMap<Object, Object> map = SomeClass.MAP;
if (map != null) {
    // ... use the map
} else {
    // ... some default behavior
}
This is not safe even though it probably appears as though it is. The problem is that there is no happens-before relationship between the set of SomeObject.MAP and the subsequent read on another thread, so the reading thread is free to see a partially constructed map. This can pretty much do anything and even in practice it does things like put the reading thread into an infinite loop.
To safely publish the map, you need to establish a happens-before relationship between the writing of the reference to the HashMap (i.e., the publication) and the subsequent readers of that reference (i.e., the consumption). Conveniently, there are only a few easy-to-remember ways to accomplish that[1]:
Exchange the reference through a properly locked field (JLS 17.4.5)
Use static initializer to do the initializing stores (JLS 12.4)
Exchange the reference via a volatile field (JLS 17.4.5), or as the consequence of this rule, via the AtomicX classes
Initialize the value into a final field (JLS 17.5).
The ones most interesting for your scenario are (2), (3) and (4). In particular, (3) applies directly to the code I have above: if you transform the declaration of MAP to:
public static volatile HashMap<Object, Object> MAP;
then everything is kosher: readers who see a non-null value necessarily have a happens-before relationship with the store to MAP and hence see all the stores associated with the map initialization.
The other methods change the semantics of your method, since both (2) (using the static initalizer) and (4) (using final) imply that you cannot set MAP dynamically at runtime. If you don't need to do that, then just declare MAP as a static final HashMap<> and you are guaranteed safe publication.
In practice, the rules are simple for safe access to "never-modified objects":
If you are publishing an object which is not inherently immutable (as in all fields declared final) and:
You already can create the object that will be assigned at the moment of declaration[a]: just use a final field (including static final for static members).
You want to assign the object later, after the reference is already visible: use a volatile field[b].
That's it!
In practice, it is very efficient. The use of a static final field, for example, allows the JVM to assume the value is unchanged for the life of the program and optimize it heavily. The use of a final member field allows most architectures to read the field in a way equivalent to a normal field read and doesn't inhibit further optimizations[c].
Finally, the use of volatile does have some impact: no hardware barrier is needed on many architectures (such as x86, specifically those that don't allow reads to pass reads), but some optimization and reordering may not occur at compile time - but this effect is generally small. In exchange, you actually get more than what you asked for - not only can you safely publish one HashMap, you can store as many more not-modified HashMaps as you want to the same reference and be assured that all readers will see a safely published map.
For more gory details, refer to Shipilev or this FAQ by Manson and Goetz.
[1] Directly quoting from Shipilev.
[a] That sounds complicated, but what I mean is that you can assign the reference at construction time - either at the declaration point or in the constructor (member fields) or static initializer (static fields).
[b] Optionally, you can use a synchronized method to get/set, or an AtomicReference or something, but we're talking about the minimum work you can do.
[c] Some architectures with very weak memory models (I'm looking at you, Alpha) may require some type of read barrier before a final read - but these are very rare today.
The reads are safe from a synchronization standpoint but not a memory standpoint. This is something that is widely misunderstood among Java developers including here on Stackoverflow. (Observe the rating of this answer for proof.)
If you have other threads running, they may not see an updated copy of the HashMap if there is no memory write out of the current thread. Memory writes occur through the use of the synchronized or volatile keywords, or through uses of some java concurrency constructs.
See Brian Goetz's article on the new Java Memory Model for details.
After a bit more looking, I found this in the java doc (emphasis mine):
Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key that an instance already contains is not a structural modification.)
This seems to imply that it will be safe, assuming the converse of the statement there is true.
One note is that under some circumstances, a get() from an unsynchronized HashMap can cause an infinite loop. This can occur if a concurrent put() causes a rehash of the Map.
http://lightbody.net/blog/2005/07/hashmapget_can_cause_an_infini.html
There is an important twist though. It's safe to access the map, but in general it's not guaranteed that all threads will see exactly the same state (and thus values) of the HashMap. This might happen on multiprocessor systems where the modifications to the HashMap done by one thread (e.g., the one that populated it) can sit in that CPU's cache and won't be seen by threads running on other CPUs, until a memory fence operation is performed ensuring cache coherence. The Java Language Specification is explicit on this one: the solution is to acquire a lock (synchronized (...)) which emits a memory fence operation. So, if you are sure that after populating the HashMap each of the threads acquires ANY lock, then it's OK from that point on to access the HashMap from any thread until the HashMap is modified again.
According to http://www.ibm.com/developerworks/java/library/j-jtp03304/ (see the section on initialization safety), you can make your HashMap a final field, and after the constructor finishes it will be safely published.
...
Under the new memory model, there is something similar to a happens-before relationship between the write of a final field in a constructor and the initial load of a shared reference to that object in another thread.
...
This question is addressed in Brian Goetz's "Java Concurrency in Practice" book (Listing 16.8, page 350):
@ThreadSafe
public class SafeStates {
    private final Map<String, String> states;

    public SafeStates() {
        states = new HashMap<String, String>();
        states.put("alaska", "AK");
        states.put("alabama", "AL");
        ...
        states.put("wyoming", "WY");
    }

    public String getAbbreviation(String s) {
        return states.get(s);
    }
}
Since states is declared final and its initialization is accomplished within the owning class's constructor, any thread that later reads this map is guaranteed to see it as of the time the constructor finishes, provided no other thread tries to modify the contents of the map.
So the scenario you described is that you need to put a bunch of data into a Map, then when you're done populating it you treat it as immutable. One approach that is "safe" (meaning you're enforcing that it really is treated as immutable) is to replace the reference with Collections.unmodifiableMap(originalMap) when you're ready to make it immutable.
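A hedged sketch of that approach (the Lookup class name and entries are made up; the volatile field provides the safe publication discussed in the other answers):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class Lookup {
    private volatile Map<String, String> codes;

    void load() {
        Map<String, String> working = new HashMap<>();
        working.put("alaska", "AK");
        // ... populate the rest
        // Publish an unmodifiable view; the volatile write makes the fully
        // populated map visible to readers.
        codes = Collections.unmodifiableMap(working);
    }

    String abbreviationFor(String key) {
        Map<String, String> snapshot = codes;   // single volatile read
        return snapshot == null ? null : snapshot.get(key);
    }
}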
For an example of how badly maps can fail if used concurrently, and the suggested workaround I mentioned, check out this bug parade entry: bug_id=6423457
Be warned that even in single-threaded code, replacing a ConcurrentHashMap with a HashMap may not be safe. ConcurrentHashMap forbids null as a key or value. HashMap does not forbid them (don't ask).
So in the unlikely situation that your existing code might add a null to the collection during setup (presumably in a failure case of some kind), replacing the collection as described will change the functional behaviour.
That said, provided you do nothing else, concurrent reads from a HashMap are safe.
[Edit: by "concurrent reads", I mean that there are not also concurrent modifications.
Other answers explain how to ensure this. One way is to make the map immutable, but it's not necessary. For example, the JSR133 memory model explicitly defines starting a thread to be a synchronised action, meaning that changes made in thread A before it starts thread B are visible in thread B.
My intent is not to contradict those more detailed answers about the Java Memory Model. This answer is intended to point out that even aside from concurrency issues, there is at least one API difference between ConcurrentHashMap and HashMap, which could scupper even a single-threaded program which replaced one with the other.]
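For instance, a small sketch of that thread-start guarantee (the class name is made up; assumes Java 8+ for the lambda):

import java.util.HashMap;
import java.util.Map;

public class StartPublication {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("key", "value");            // written before start()

        Thread reader = new Thread(() -> {
            // Thread.start() happens-before every action in the new thread,
            // so this read is guaranteed to see the entry added above.
            System.out.println(map.get("key"));
        });
        reader.start();
    }
}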
http://www.docjar.com/html/api/java/util/HashMap.java.html
here is the source for HashMap. As you can tell, there is absolutely no locking / mutex code there.
This means that while it's okay to read from a HashMap in a multithreaded situation, I'd definitely use a ConcurrentHashMap if there were multiple writes.
What's interesting is that both the .NET HashTable and Dictionary<K,V> have built-in synchronization code.
If the initialization and every put is synchronized, you are safe.
The following code is safe because the class loader will take care of the synchronization:
public static final HashMap<String, String> map = new HashMap<>();
static {
    map.put("A", "A");
}
The following code is safe because the write to the volatile field will take care of the synchronization.
class Foo {
    volatile HashMap<String, String> map;

    public void init() {
        final HashMap<String, String> tmp = new HashMap<>();
        tmp.put("A", "A");
        // writing to the volatile field has to happen after the modification of the map
        this.map = tmp;
    }
}
This will also work if the member variable is final and the assignment happens in the constructor, because final fields give a similar safe-publication guarantee.
