Can a HashMap have duplicate keys in a multithreaded environment? [duplicate] - java

Suppose we do not use Collections.synchronizedMap() and we have a multi-threaded environment.
I know about race conditions, the resizing issue, etc.
My question is: can there be a case where two threads Ta and Tb, holding the same key object, both try to put it into the map?
Can the map ever end up with two entries for that key, and if not, how is that prevented? Is there some fraction of a time difference between the two put calls of two different threads running at the same time?
As I understand it, both Ta and Tb will check for the key before putting, so could that produce duplicate keys here?
Assume that hashCode and equals are overridden properly.

The Javadoc for HashMap states:
Note that this implementation is not synchronized. If multiple threads
access a hash map concurrently, and at least one of the threads
modifies the map structurally, it must be synchronized externally. (A
structural modification is any operation that adds or deletes one or
more mappings; merely changing the value associated with a key that an
instance already contains is not a structural modification.) This is
typically accomplished by synchronizing on some object that naturally
encapsulates the map. If no such object exists, the map should be
"wrapped" using the Collections.synchronizedMap method.
So the docs say that you must synchronize access somehow, but do not say what will happen if you do not. That means that the behaviour when you do this is undefined -- all bets are off.
You can look at the source code for HashMap yourself. The heart of put is:
for (Entry<K,V> e = table[i]; e != null; e = e.next) {
    Object k;
    if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
        V oldValue = e.value;
        e.value = value;
        e.recordAccess(this);
        return oldValue;
    }
}
modCount++;
addEntry(hash, key, value, i);
return null;
(Edit - this is the implementation in Java 6. Java 8's is dramatically different -- which reinforces the point)
We can speculate about the outcome if two threads attempt this simultaneously -- but it is pretty difficult to reason about. Sometimes it will result in two entries with the same key, sometimes it won't. It depends on timing.
TreeMap's put() is completely different of course, and its quirks when abused in this way will be different.
Any such behaviour is a quirk of the implementation, and the implementation may change in future without warning, because we are talking about undefined behaviour. The implementation makes no promises to you that it won't:
silently drop entries
go into an infinite loop
throw a NullPointerException
claim huge amounts of memory
corrupt the store so that entries with other keys are lost
make previously removed entries reappear
create entries containing garbage from heap memory
etc.
The docs do state that a modification from elsewhere, while an Iterator is working on the object, will cause the Iterator to throw a ConcurrentModificationException. But this is a different concern from synchronization, and could still happen even if you used Collections.synchronizedMap().
In summary, don't do it.
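For completeness, a minimal sketch of the two sanctioned alternatives; the class in the sketch is illustrative, but Collections.synchronizedMap and ConcurrentHashMap are the real APIs:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeSharing {
    // Option 1: wrap a plain HashMap; every call then synchronizes on the wrapper.
    static final Map<String, Integer> wrapped =
            Collections.synchronizedMap(new HashMap<>());

    // Option 2: use ConcurrentHashMap, which is designed for concurrent mutation
    // and can never end up with two entries for the same key.
    static final Map<String, Integer> concurrent = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        wrapped.put("a", 1);
        concurrent.put("a", 1);
        // Iterating the wrapper still requires manual synchronization:
        synchronized (wrapped) {
            for (Map.Entry<String, Integer> e : wrapped.entrySet()) {
                System.out.println(e);
            }
        }
    }
}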

Related

ConcurrentHashMap thread-safety without using putIfAbsent

I'm trying to clarify HashMap vs. ConcurrentHashMap with regard to thread safety and also performance. I came across a lot of good articles, but I'm still having trouble figuring it all out.
Let's take the following example using a ConcurrentHashMap, where I will try to add a value for a key that is not already there and return it. The new way of doing it would be:
private final Map<K, V> map = new ConcurrentHashMap<>();

// putIfAbsent returns the previous value, or null if the key was absent
return map.putIfAbsent(k, new Object());
Let's assume we don't want to use the putIfAbsent method; the above code should then look something like this:
private final Map<K, V> map = new ConcurrentHashMap<>();

synchronized (map) {
    V value = map.get(key); // Edit: the value fetch now happens inside the synchronized block
    if (!nonNull(value)) {
        map.put(key, new Object());
    }
}
return map.get(key);
Is the problem with this approach the fact that the whole map is locked, whereas in the first approach the putIfAbsent method only synchronizes on the bucket that the key's hash maps to, thus leading to lower performance? And would the second approach work fine with just a HashMap?
Is the problem with this approach the fact that the whole map is locked
There are two problems with this approach.
It's not intrinsic
The fact that you've acquired the lock on the map reference has zero effect whatsoever, except in regard to any other code that tries to acquire this lock. Crucially, ConcurrentHashMap itself does not acquire this lock.
So, if, during that second snippet (with synchronized), some other thread does this:
map.putIfAbsent(key, new Object());
Then it may occur that your map.get(key) call returns null, and nevertheless your follow-up map.put call ends up overwriting. In other words, both your thread and that hypothetical thread running putIfAbsent decided to write.
Presumably that is not fine in your book; if it were, it would be weird to use putIfAbsent and to check whether map.get returns null in the first place.
Had the other thread done this:
synchronized (map) {
map.putIfAbsent(key, new Object());
}
then there'd be no problem: either your get-check-if-null-then-set code sets and the putIfAbsent call is a no-op, or vice versa, but they couldn't possibly both 'decide to write'.
Which leads us to;
This is pointless
There are two different ways to achieve concurrency with maps: Intrinsic and extrinsic. There is zero point in doing both, and they do not interact.
If you have a structure whereby all access (both read and write) to a plain old, entirely non-thread-safe java.util.HashMap goes through some shared lock (the HashMap instance itself, or any other object, as long as all threads that interact with that particular map instance use the same one), then that works fine, and there is therefore no reason or point in using ConcurrentHashMap instead.
The point of ConcurrentHashMap is to streamline concurrent processes without the use of extrinsic locking: To let the map do the locking.
One of the reasons you want this is that the ConcurrentHashMap impl is significantly faster at the jobs it is capable of doing; these jobs are spelled out explicitly: It's the methods that ConcurrentHashMap has.
Atomicity
The central problem with your code snippet is that it lacks atomicity. Check-then-act is fundamentally broken in concurrent models (in your case, check: is key 'k' associated with no value, or null?; then act: set the mapping of key 'k' to value 'v'). It is broken because the thing you checked can change between the check and the act. If two threads both 'check-then-act' simultaneously, they both check first and then both act, and broken things ensue: one of the two threads acts upon a state that is no longer the state it checked, which means its check was meaningless.
The right model is act-then-check: Act first, and then check the result of the operation. Of course, this requires redefining, and integrating, the code you wrote explicitly in your snippet, into the very definition of your 'act' phase.
In other words, putIfAbsent is not a convenience method; it is a fundamental operation! It's the only way (short of extrinsic locking) to convey the notion of: "Perform the action of associating 'v' with 'k', but only if there is no association yet. I'll check the result of this operation next." There is no way to break that down into if (!map.containsKey(key)) map.put(key, v); because check-then-act does not work in concurrent modelling.
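To make the contrast concrete, a small sketch (the class name and value type are placeholders):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ActThenCheck {
    private final Map<String, Object> map = new ConcurrentHashMap<>();

    // Broken: check-then-act. Another thread can insert between the
    // containsKey check and the put, and this put will clobber its value.
    Object brokenGetOrCreate(String key) {
        if (!map.containsKey(key)) {
            map.put(key, new Object());
        }
        return map.get(key);
    }

    // Correct: act-then-check. The insertion attempt is atomic; afterwards
    // we inspect the result to learn whether we won or another thread did.
    Object getOrCreate(String key) {
        Object fresh = new Object();
        Object previous = map.putIfAbsent(key, fresh);
        return previous != null ? previous : fresh;
    }
}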
Conclusions
Either get rid of ConcurrentHashMap, or get rid of synchronized. Having code that uses both is probably broken, and even if it isn't, it's error-prone and confusing, and I can guarantee you there's a much better way to write it (better in that it is more idiomatic, easier to read, more flexible in the face of future change requests, easier to test, and less likely to have hard-to-test-for bugs in it).
If you can state all the operations you need to perform 100% in terms of the methods that CHM has, then do that, because CHM is vastly superior. It even has mechanisms for arbitrary operations. For example, unlike basic hash maps, you can iterate through a CHM even while other threads are modifying it, whereas with a normal HashMap you need to hold the lock for the entire duration of the operation, which means any other thread trying to do anything with that map, even just asking for its size, has to wait. Hence, for most use cases, CHM results in orders-of-magnitude better performance.
in first approach the putIfAbsent method only synchronizes on the bucket
That is not quite right: ConcurrentHashMap does not synchronize on the map as a whole; internally it uses different mechanics (CAS operations plus fine-grained locking of individual bins) to ensure thread safety.
Would the second approach work fine with just a HashMap ?
Yes, except the second approach is flawed: if you use synchronization to make a Map thread-safe, then all access to the Map should use that synchronization. As such, it would be best to call Collections.synchronizedMap(map). Performance will be worse than with ConcurrentHashMap.
private final Map<Integer, Object> map = Collections.synchronizedMap(new HashMap<>());
Let's assume we don't want to use the putIfAbsent method.
Why? Oh, because it wastes an allocation if the key is already in the map. That is exactly why we should be using computeIfAbsent() instead:
map.computeIfAbsent(key, k -> new Object());

What is the default behavior of clear() in Java's ConcurrentHashMap?

Internally, does it lock all the rows and mark each key to be deleted, so that if another thread wants to access a key that is about to be deleted it still sees the right behavior?
Or do we have to synchronize the clear() call ourselves:
synchronized (this) {
    myMap.clear();
}
For example, consider the following
myMap = {1: 1, 2: 1, 3: 1}
//Thread1
myMap.clear()
//Thread2
myMap.compute(1, (k, v) -> v == null ? 0 : k + 1)
What happens when thread1 executes first and thread2 wants to access key 1, but key 1 is not yet deleted?
Is the result
{}
or
{1: 0}
The behavior of clear() in that context is unspecified1. The javadoc states:
"For aggregate operations such as putAll and clear, concurrent retrievals may reflect insertion or removal of only some entries."
Looking at the source code, clear() is implemented by clearing each of the segments one at a time. Each segment is locked while it is cleared, but there is no lock on the entire map. This means that another thread may add entries to a segment that has just been cleared ... before the overall clear() call returns.
So, in practice, either of the results / behaviors you propose is possible, depending on the size of maps, the distribution of keys between segments, the version of Java you are using, and ... timing.
Internally, does it lock all the rows and mark each key to be deleted?
No. Each segment is locked (one at a time) while entries in the segment are removed. (This is done to avoid memory anomalies which might corrupt the segments' hash chains, etcetera)
Regarding this:
synchronized (this) {
myMap.clear();
}
That will not block other threads from inserting elements while the clear() is in progress. It will just stop two threads (executing the same code) from clearing at the same time.
If you want to guarantee that clear() clears the map, you would need to wrap the map using Collections.synchronizedMap wrapper, and use that consistently. In practice, that defeats the purpose of using ConcurrentHashMap.
Follow-up question:
So potentially there could be an infinite loop of clearing, right? If another thread keeps adding elements, the size of the map is always > 0, and the thread that is trying to clear will keep on running.
Nope. There will be no infinite loop. The clear() method is not looking at the size of the map.
What will actually happen is that the clear() call will return and the map won't necessarily be empty.
1 - On careful rereading, I've realized that the quoted javadoc doesn't directly answer the question. In fact, the "contract" in the Map.clear() javadoc states that the map will be empty after the call returns. This is implicitly contradicted by the ConcurrentHashMap.clear() javadoc, and explicitly contradicted by what the code actually does.
Collectively considering two points from the official documentation kind of provides the idea:
Retrieval operations (including get) generally do not block, so may overlap with update operations (including put and remove)
For aggregate operations such as putAll and clear, concurrent retrievals may reflect insertion or removal of only some entries.
So, for your question: you will get the value from the map if the key has not been deleted yet. However, you should not rely on the internal synchronization details of ConcurrentHashMap; base your code only on the thread-safety guarantees provided by the class.
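A minimal, timing-dependent sketch of the scenario from the question; the printed result genuinely varies from run to run, which is the whole point:

import java.util.concurrent.ConcurrentHashMap;

public class ClearVsComputeDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> myMap = new ConcurrentHashMap<>();
        myMap.put(1, 1);
        myMap.put(2, 1);
        myMap.put(3, 1);

        Thread t1 = new Thread(myMap::clear);
        Thread t2 = new Thread(() ->
                myMap.compute(1, (k, v) -> v == null ? 0 : k + 1));

        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Depending on the interleaving this prints {} or {1=0}: compute()
        // locks only the bin holding key 1, while clear() works through the
        // bins one at a time, so either operation can "win" for that bin.
        System.out.println(myMap);
    }
}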

WeakHashMap and Concurrent Modification

I'm reading the Java docs about WeakHashMap and I get the basic concept.
Because of the GC thread(s) acting in the background, you can get 'unusual behavior', such as a ConcurrentModificationException while iterating, etc.
The thing I don't get is: if the default implementation is not synchronized and does not use locks in any way, how come there is no possibility of ending up in an inconsistent state?
Say you have two threads. A GC thread is deleting some key at a certain index, and at the same time, at the same index, a user thread is inserting a key-value pair into the array.
To me, if there is no synchronization, there is a high risk of ending up with a hash map that is inconsistent.
Even worse, doing something like this might actually be quite dangerous, because v might actually be null:
if (map.containsKey(k)) {
    V v = map.get(k);
}
Am I missing something?
The inconsistent state issues you mention do not arise because the GC does not actively restructure WeakHashMaps. When the garbage collector frees the referent of a weak reference, the corresponding entry is not physically removed from the map; the entry merely becomes stale, with no key. At some later point, the entry may be physically removed during some other operation on the map, but the GC won't take on that responsibility.
You can see one Java version's implementation of this design on grepcode.
What you're describing is what the documentation explicitly states:
Because the garbage collector may discard keys at any time, a WeakHashMap may behave as though an unknown thread is silently removing entries.
The only mistake you're making is the assumption that you can protect the state by synchronizing. That doesn't work because the synchronization would not be mutual on the part of the GC. To quote the documentation:
In particular, even if you synchronize on a WeakHashMap instance and invoke none of its mutator methods, it is possible for the size method to return smaller values over time, for the isEmpty method to return false and then true, for the containsKey method to return true and later false for a given key, for the get method to return a value for a given key but later return null, for the put method to return null and the remove method to return false for a key that previously appeared to be in the map, and for successive examinations of the key set, the value collection, and the entry set to yield successively smaller numbers of elements.
Referring to
even if you synchronize on a WeakHashMap [...] it is possible for the size method to return smaller values over time
the javadoc sufficiently explains to me that there is a possibility of an inconsistent state, and that this is completely independent of synchronization.
A few examples later, the given example is referred to as well:
for the containsKey method to return true and later false for a given key
So basically, one should never rely on the state of a WeakHashMap, but use it as atomically as possible. The given example should therefore be rephrased to:
V v = map.get(k);
if (null != v) {
    // use v
}
or
Optional.ofNullable(map.get(k)).ifPresent(v -> { /* use v */ });
This class is intended primarily for use with key objects whose equals methods test for object identity using the == operator. Once such a key is discarded it can never be recreated, so it is impossible to do a lookup of that key in a WeakHashMap at some later time and be surprised that its entry has been removed.
So if one uses WeakHashMap for objects whose equals() is based on an identity check, all is fine. The first case you mentioned ("A GC thread deleting some key at a certain index and at the same time, at the same index, a user thread is inserting in the array a key value pair.") is impossible, because as long as the user thread keeps a reference to the key object it cannot be discarded by the GC.
And the same stands for the second example:
if (map.containsKey(k)) {
    V v = map.get(k);
}
You keep the reference k, so the corresponding object is strongly reachable and cannot be discarded.
But
This class will work perfectly well with key objects whose equals methods are not based upon object identity, such as String instances. With such recreatable key objects, however, the automatic removal of WeakHashMap entries whose keys have been discarded may prove to be confusing.
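A minimal sketch of the reachability point; note that System.gc() is only a hint, so the final print is not guaranteed to show zero:

import java.util.Map;
import java.util.WeakHashMap;

public class WeakKeyDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> map = new WeakHashMap<>();
        Object key = new Object();
        map.put(key, "value");

        // While we hold a strong reference to `key`, the entry cannot vanish.
        System.out.println(map.containsKey(key)); // true

        key = null;   // drop the only strong reference to the key
        System.gc();  // request a collection; this is only a hint
        Thread.sleep(100);

        // Stale entries are expunged on the next map operation, so the
        // entry has likely (not certainly) disappeared by now.
        System.out.println(map.size()); // usually 0
    }
}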

Creating a ConcurrentHashMap that supports "snapshots"

I'm attempting to create a ConcurrentHashMap that supports "snapshots" in order to provide consistent iterators, and am wondering if there's a more efficient way to do this. The problem is that if two iterators are created at the same time then they need to read the same values, and the definition of the concurrent hash map's weakly consistent iterators does not guarantee this to be the case. I'd also like to avoid locks if possible: there are several thousand values in the map and processing each item takes several dozen milliseconds, and I don't want to have to block writers during this time as this could result in writers blocking for a minute or longer.
What I have so far:
The ConcurrentHashMap's keys are Strings, and its values are instances of ConcurrentSkipListMap<Long, T>
When an element is added to the hashmap with putIfAbsent, then a new skiplist is allocated, and the object is added via skipList.put(System.nanoTime(), t).
To query the map, I use map.get(key).lastEntry().getValue() to return the most recent value. To query a snapshot (e.g. with an iterator), I use map.get(key).lowerEntry(iteratorTimestamp).getValue(), where iteratorTimestamp is the result of System.nanoTime() called when the iterator was initialized.
If an object is deleted, I use map.get(key).put(timestamp, SnapShotMap.DELETED), where DELETED is a static final object.
Questions:
Is there a library that already implements this? Or barring that, is there a data structure that would be more appropriate than the ConcurrentHashMap and the ConcurrentSkipListMap? My keys are comparable, so maybe some sort of concurrent tree would better support snapshots than a concurrent hash table.
How do I prevent this thing from continually growing? I can delete all of the skip list entries with keys less than X (except for the last key in the map) after all iterators that were initialized on or before X have completed, but I don't know of a good way to determine when this has happened. I can flag that an iterator has completed when its hasNext method returns false, but not all iterators are necessarily going to run to completion. I can keep a WeakReference to an iterator so that I can detect when it's been garbage collected, but I can't think of a good way to detect this other than by using a thread that iterates through the collection of weak references and then sleeps for several minutes; ideally the thread would block on the WeakReference and be notified when the wrapped reference is GC'd, but I don't think this is an option.
ConcurrentSkipListMap<Long, WeakReference<Iterator>> iteratorMap;

while (true) {
    long latestGC = 0;
    for (Map.Entry<Long, WeakReference<Iterator>> entry : iteratorMap.entrySet()) {
        if (entry.getValue().get() == null) {
            iteratorMap.remove(entry.getKey());
            latestGC = entry.getKey();
        } else break;
    }
    // remove ConcurrentHashMap entries with timestamps less than `latestGC`
    Thread.sleep(300000); // five minutes
}
Edit: To clear up some confusion in the answers and comments, I'm currently passing weakly consistent iterators to code written by another division in the company, and they have asked me to increase the strength of the iterators' consistency. They are already aware of the fact that it is infeasible for me to make 100% consistent iterators, they just want a best effort on my part. They care more about throughput than iterator consistency, so coarse-grained locks are not an option.
What is your actual use case that requires a special implementation? From the Javadoc of ConcurrentHashMap (emphasis added):
Retrievals reflect the results of the most recently completed update operations holding upon their onset. ... Iterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration. They do not throw ConcurrentModificationException. However, iterators are designed to be used by only one thread at a time.
So the regular ConcurrentHashMap.values().iterator() will give you a "consistent" iterator, but only for one-time use by a single thread. If you need to use the same "snapshot" multiple times and/or by multiple threads, I suggest making a copy of the map.
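Making that copy is a one-liner; a sketch, assuming the values themselves are safe to share (e.g. immutable):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SnapshotByCopy {
    static <K, V> Map<K, V> snapshot(ConcurrentHashMap<K, V> source) {
        // The copy is built with a single weakly consistent traversal, but
        // once constructed it is a private, stable snapshot that can be
        // read repeatedly and from multiple threads.
        return new HashMap<>(source);
    }
}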
EDIT: With the new information and the insistence for a "strongly consistent" iterator, I offer this solution. Please note that the use of a ReadWriteLock has the following implications:
Writes will be serialized (only one writer at a time) so write performance may be impacted.
Concurrent reads are allowed as long as there is no write in progress, so read performance impact should be minimal.
Active readers block writers but only as long as it takes to retrieve the reference to the current "snapshot". Once a thread has the snapshot, it no longer blocks writers no matter how long it takes to process the information in the snapshot.
Readers are blocked while any write is active; once the write finishes then all readers will have access to the new snapshot until a new write replaces it.
Consistency is achieved by serializing the writes and making a copy of the current values on each and every write. Readers that hold a reference to a "stale" snapshot can continue to use the old snapshot without worrying about modification, and the garbage collector will reclaim old snapshots as soon as no one is using them any more. It is assumed that there is no requirement for a reader to request a snapshot from an earlier point in time.
Because snapshots are potentially shared among multiple concurrent threads, the snapshots are read-only and cannot be modified. This restriction also applies to the remove() method of any Iterator instances created from the snapshot.
import java.util.*;
import java.util.concurrent.locks.*;

public class StackOverflow16600019<K, V> {
    private final ReadWriteLock locks = new ReentrantReadWriteLock();
    private final HashMap<K, V> map = new HashMap<>();
    private Collection<V> valueSnapshot = Collections.emptyList();

    public V put(K key, V value) {
        locks.writeLock().lock();
        try {
            V oldValue = map.put(key, value);
            updateSnapshot();
            return oldValue;
        } finally {
            locks.writeLock().unlock();
        }
    }

    public V remove(K key) {
        locks.writeLock().lock();
        try {
            V removed = map.remove(key);
            updateSnapshot();
            return removed;
        } finally {
            locks.writeLock().unlock();
        }
    }

    public Collection<V> values() {
        locks.readLock().lock();
        try {
            return valueSnapshot; // read-only!
        } finally {
            locks.readLock().unlock();
        }
    }

    /** Callers MUST hold the WRITE LOCK. */
    private void updateSnapshot() {
        valueSnapshot = Collections.unmodifiableCollection(
                new ArrayList<V>(map.values())); // copy
    }
}
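Usage might look like this (illustrative):

StackOverflow16600019<String, Integer> store = new StackOverflow16600019<>();
store.put("a", 1);

Collection<Integer> snap = store.values(); // stable snapshot: [1]
store.put("b", 2);                         // does not affect snap

for (Integer v : snap) {
    System.out.println(v); // prints only 1
}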
I've found that the Ctrie is the ideal solution: it's a concurrent hash-array-mapped trie with constant-time snapshots.
Solution 1: What about just synchronizing on the puts and on the iteration? That should give you a consistent snapshot.
Solution 2: Start iterating and set a boolean to say so; then override put and putAll so that writes go into a queue, and when the iteration is finished, simply apply those queued puts with the changed values.

Does re-putting an object into a ConcurrentHashMap cause a "happens-before" memory relation?

I'm working with existing code that has an object store in the form of a ConcurrentHashMap. Stored within the map are mutable objects used by multiple threads. By design, no two threads try to modify an object at once. My concern is regarding the visibility of the modifications between the threads.
Currently the objects' code has synchronization on the "setters" (guarded by the object itself). There is no synchronization on the "getters", nor are the members volatile. This, to me, would mean that visibility is not guaranteed. However, when an object is modified, it is re-put back into the map (the put() method is called again with the same key). Does this mean that when another thread pulls the object out of the map, it will see the modifications?
I've researched this here on stackoverflow, in JCIP, and in the package description for java.util.concurrent. I've basically confused myself I think... but the final straw that caused me to ask this question was from the package description, it states:
Actions in a thread prior to placing an object into any concurrent collection happen-before actions subsequent to the access or removal of that element from the collection in another thread.
In relation to my question, do "actions" include the modifications to the objects stored in the map before the re-put()? If all this does result in visibility across threads, is this an efficient approach? I'm relatively new to threads and would appreciate your comments.
Edit:
Thank you all for your responses! This was my first question on StackOverflow and it has been very helpful to me.
I have to go with ptomli's answer because I think it most clearly addressed my confusion. To wit, establishing a "happens-before" relation doesn't necessarily provide modification visibility in this case. My "title question" is poorly constructed with respect to the actual question described in the text. ptomli's answer jibes with what I read in JCIP: "To ensure all threads see the most up-to-date values of shared mutable variables, the reading and writing threads must synchronize on a common lock" (page 37). Re-putting the object back into the map doesn't provide this common lock for the modification of the inserted object's members.
I appreciate all the tips for change (immutable objects, etc.), and I wholeheartedly concur. But in this case, as I mentioned, there is no concurrent modification because of careful thread handling. One thread modifies an object, and another thread later reads the object (with the CHM being the object conveyor). I think the CHM is insufficient to ensure that the later-executing thread will see the modifications made by the first, given the situation I described. However, I think many of you correctly answered the title question.
You call concurrentHashMap.put after each write to an object. However, you did not specify that you also call concurrentHashMap.get before each read. This is necessary.
This is true of all forms of synchronization: you need to have some "checkpoints" in both threads. Synchronizing only one thread is useless.
I haven't checked the source code of ConcurrentHashMap to make sure that put and get establish a happens-before relationship, but it is only logical that they should.
There is still an issue with your method, however, even if you use both put and get. The problem happens when you modify an object and it is used (in an inconsistent state) by the other thread before it is put. It's a subtle problem, because you might think the old value would be read (since it hasn't been put yet) and would not cause a problem. The real problem is that when you don't synchronize, you are not guaranteed to see a consistent older object; rather, the behavior is undefined. The JVM can update any part of the object in the other threads, at any time. Only by using explicit synchronization can you be sure the values are updated in a consistent way across threads.
What you could do:
(1) Synchronize all accesses (getters and setters) to your objects everywhere in the code. Be careful with the setters: make sure that you can't leave the object in an inconsistent state. For example, when setting first and last name, having two synchronized setters is not sufficient: you must hold the object lock for both operations together (see the sketch after this list).
or
(2) When you put an object in the map, put a deep copy instead of the object itself. That way the other threads will never read an object in an inconsistent state.
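A sketch of option (1) with a hypothetical Person class; the point is that one synchronized method covers both fields:

class Person {
    private String first;
    private String last;

    // A single lock acquisition covers both writes, so no reader can ever
    // observe a new first name paired with an old last name.
    synchronized void setName(String first, String last) {
        this.first = first;
        this.last = last;
    }

    synchronized String fullName() {
        return first + " " + last;
    }
}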
EDIT:
I just noticed
Currently the objects' code has synchronization on the "setters" (guarded by the object itself). There is no synchronization on the "getters" nor are the members volatile.
This is not good. As I said above, synchronization in only one thread is no synchronization at all. You might synchronize all your writer threads, but who cares, since the readers won't be guaranteed to see the right values.
I think this has already been said across a few answers, but to sum it up:
If your code goes
CHM#get
call various setters
CHM#put
then the "happens-before" provided by the put will guarantee that all the mutate calls are executed before the put. This means that any subsequent get will be guaranteed to see those changes.
Your problem is that the actual state of the object will not be deterministic because if the actual flow of events is
thread 1: CHM#get
thread 1: call setter
thread 2: CHM#get
thread 1: call setter
thread 1: call setter
thread 1: CHM#put
then there is no guarantee over what the state of the object will be in thread 2. It might see the object with the value provided by the first setter or it might not.
The immutable copy would be the best approach, as then only completely consistent objects are published. Making the various setters synchronized (or the underlying references volatile) still doesn't let you publish consistent state; it just means that the object will always see the latest value for each getter on each call.
I think your question relates more to the objects you're storing in the map, and how they react to concurrent access, than the concurrent map itself.
If the instances you're storing in the map have synchronized mutators, but not synchronized accessors, then I don't see how they can be thread safe as described.
Take the Map out of the equation and determine if the instances you're storing are thread safe by themselves.
However, when an object is modified it is re-put back into the map (the put() method is called again, same key). Does this mean that when another thread pulls the object out of the map, it will see the modifications?
This exemplifies the confusion. The instance that is re-put into the Map will be retrieved from the Map by another thread. This is the guarantee of the concurrent map. That has nothing to do with visibility of the state of the stored instance itself.
My understanding is that it should work for all gets after the re-put, but this would be a very unsafe method of synchronization.
What happens to gets that occur before the re-put, but while modifications are happening? They may see only some of the changes, and the object would be in an inconsistent state.
If you can, I'd recommend storing immutable objects in the map. Then any get will retrieve a version of the object that was current when it did the get.
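A sketch of that recommendation with a hypothetical immutable Point value; every "mutation" builds a new instance and re-puts it:

import java.util.concurrent.ConcurrentHashMap;

public class ImmutablePublish {
    // Immutable value: final fields, no setters.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        Point withX(int newX) { return new Point(newX, y); }
    }

    static final ConcurrentHashMap<String, Point> store = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        store.put("p", new Point(1, 2));

        // Writer thread: atomically replace the mapping with a new value.
        store.compute("p", (k, old) -> old.withX(42));

        // Reader thread: sees either the old or the new Point, never a
        // half-updated one, because only complete objects are published.
        Point p = store.get("p");
        System.out.println(p.x + "," + p.y);
    }
}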
Here's a code snippet from java.util.concurrent.ConcurrentHashMap (OpenJDK 7):
public V get(Object key) {
    Segment<K,V> s; // manually integrate access methods to reduce overhead
    HashEntry<K,V>[] tab;
    int h = hash(key.hashCode());
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
        (tab = s.table) != null) {
        for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                 (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) {
            K k;
            if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                return e.value;
        }
    }
    return null;
}
UNSAFE.getObjectVolatile() is documented as a getter with internal volatile semantics, so the memory barrier is crossed when getting the reference.
Yes, put incurs a volatile write, even if the key-value pair already exists in the map.
Using ConcurrentHashMap to publish objects across threads is pretty efficient. Objects should not be modified further once they are in the map. (They don't have to be strictly immutable, i.e. with final fields.)
