I have the following code,
private final Map<String, AtomicInteger> wordCounter = new ConcurrentHashMap<>();
AtomicInteger count = wordCounter.get(word);
if (count == null) {
    // putIfAbsent returns null if our new AtomicInteger(1) won the race,
    // meaning the word has just been counted once; otherwise it returns
    // the AtomicInteger some other thread put there first.
    if ((count = wordCounter.putIfAbsent(word, new AtomicInteger(1))) == null) {
        continue; // inside the surrounding loop over words
    }
}
count.incrementAndGet();
I'm checking count == null in the if condition. As far as I know, operations on AtomicInteger are thread-safe. Is it necessary to lock the count instance using one of the locking mechanisms?
The above code works without any additional locking, but it can be simplified to the following idiomatic form
// If word doesn't exist, create a new atomic integer, otherwise return the existing
wordCounter.computeIfAbsent(word, k -> new AtomicInteger(0))
.incrementAndGet(); // increment it
Your code looks a bit like double-checked locking, in that putIfAbsent() is used after the null check to avoid overwriting a value that was possibly put there by another thread. However, that path creates an extra AtomicInteger, which doesn't happen with DCL. The extra object probably wouldn't matter much, but it does make the solution a little less "pure".
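By the way, if you don't actually need the AtomicInteger values, a map of plain Integer counts plus merge (also Java 8) sidesteps the extra-object question entirely. A minimal sketch, not from the original code (countWord is an illustrative name):
private final ConcurrentMap<String, Integer> wordCounter = new ConcurrentHashMap<>();

void countWord(String word) {
    // merge is atomic on ConcurrentHashMap: it inserts 1 for a new word,
    // or applies Integer::sum to add 1 to the existing count.
    wordCounter.merge(word, 1, Integer::sum);
}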
Related
I have been asked to implement fine-grained locking on a hashlist. I have done this using synchronized, but the question tells me to use Lock instead.
I have created a hashlist of objects in the constructor
private LinkedList<E> data[];
private Lock lock[];
private Lock lockR = new ReentrantLock();
// The constructors ensure that both data and lock are the same size
@SuppressWarnings("unchecked")
public ConcurrentHashList(int n){
if(n > 1000) {
data = (LinkedList<E>[])(new LinkedList[n/10]);
lock = new Lock [n/10];
}
else {
data = (LinkedList<E>[])(new LinkedList[100]);
lock = new Lock[100];
}
for(int j = 0; j < data.length;j++) {
data[j] = new LinkedList<E>();
lock[j] = new ReentrantLock();// Adding a lock to each bucket index
}
}
The original method
public void add(E x){
if(x != null){
lockR.lock();
try{
    int index = hashC(x);
    if(!data[index].contains(x))
        data[index].add(x);
}finally{ lockR.unlock(); }
}
}
Using synchronized to lock on a per-bucket object, allowing multiple threads to work on different indexes concurrently.
public void add(E x){
if(x != null){
int index = hashC(x);
synchronized (lock[index]) { // Getting the bucket's monitor before adding
if(!data[index].contains(x))
data[index].add(x);
}
}
}
I do not know how to implement this using Lock, though. I can not lock a single element in an array, only the whole method, which would be coarse-grained rather than fine-grained.
Using an array of ReentrantLock
public void add(E x){
if(x != null){
int index = hashC(x);
lock[index].lock(); // Getting the bucket's lock before adding
try {
    if(!data[index].contains(x))
        data[index].add(x);
}finally {
    lock[index].unlock();
}
}
}
The hash function
private int hashC(E x){
int k = x.hashCode();
int h = Math.abs(k % data.length);
return h;
}
Presumably, hashC() is a function that is highly likely to produce unique numbers. As in, you have no guarantee that the hashes are unique, but the incidence of non-unique hashes is extremely low. For a data structure with a few million entries, you have a literal handful of collisions, and any given collision always consists of only a pair or maybe 3 conflicts (2 to 3 objects in your data structure have the same hash, but not 'thousands').
Also, assumption: the hash for a given object is constant. hashC(x) will produce the same value no matter how many times you call it, assuming you provide the same x.
Then, you get some fun conclusions:
The 'bucket' (The LinkedList instance found at array slot hashC(x) in data) that your object should go into, is always the same - you know which one it should be based solely on the result of hashC.
Calculating hashC does not require a lock of any sort. It has no side effects whatsoever.
Thus, knowing which bucket you need for a given operation on a single value (Be it add, remove, or check-if-in-collection) can be done without locking anything.
Now, once you know which bucket you need to look at / mutate, okay, now locking is involved.
So, just have 1 lock for each bucket. Not a List<Object> locks[]; that would be a whole list's worth of locks per bucket. Just Object[] locks is all you need, or ReentrantLock[] locks if you prefer to use lock/unlock instead of synchronized (locks[bucketIdx]) { ... }.
This is effectively fine-grained: After all, the odds that one operation needs to twiddle its thumbs because another thread is doing something, even though that other thread is operating on a different object, is very low; it would require the two different objects to have a colliding hash, which is possible, but extremely rare - as per assumption #1.
NB: Note that the single global lockR can therefore go away entirely; you don't need it, unless you want to build into your code the ability to completely re-design its bucket structure. For example, 1000 buckets feels a bit meh if you end up with a billion objects. I don't think 'rebucket everything' is part of the task here, though.
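To make the per-bucket pattern concrete, here is a minimal sketch of a second operation (contains) written the same way, assuming the data, lock and hashC members from the question:
public boolean contains(E x) {
    if (x == null) return false;
    int index = hashC(x);   // no lock needed: hashC has no side effects
    lock[index].lock();     // lock only the one bucket this value can live in
    try {
        return data[index].contains(x);
    } finally {
        lock[index].unlock();
    }
}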
Consider the following implementation of some kind of fixed size cache, that allows lookup by an integer handle:
static class HandleCache {
private final AtomicInteger counter = new AtomicInteger();
private final Map<Data, Integer> handles = new ConcurrentHashMap<>();
private final Data[] array = new Data[100_000];
int getHandle(Data data) {
return handles.computeIfAbsent(data, k -> {
int i = counter.getAndIncrement();
if (i >= array.length) {
throw new IllegalStateException("array overflow");
}
array[i] = data;
return i;
});
}
Data getData(int handle) {
return array[handle];
}
}
There is an array store inside the compute function, which is not synchronized in any way. Would it be allowed by the java memory model for other threads to read a null value from this array later on?
PS: Would the outcome change if the id returned from getHandle was stored in a final field and only accessed through this field from other threads?
The read access isn't thread-safe. You could make it thread-safe indirectly, but it's likely to be brittle. I would implement it in a much simpler way and only optimise it later should it prove to be a performance problem, e.g. because you see it in a profiler for a realistic test.
static class HandleCache {
private final Map<Data, Integer> handles = new HashMap<>();
private final List<Data> dataByIndex = new ArrayList<>();
synchronized int getHandle(Data data) {
Integer id = handles.get(data);
if (id == null) {
id = handles.size();
handles.put(data, id);
dataByIndex.add(data);
}
return id;
}
synchronized Data getData(int handle) {
return dataByIndex.get(handle);
}
}
Assuming that you determine the index for the array read from the value of counter, then yes - you may get a null read.
The simplest example (there are others) is as follows:
T1 calls getHandle(data) and is suspended just after int i = counter.getAndIncrement();
T2 calls getData(counter.get() - 1) and reads null, because T1 has not yet performed the array store.
You should be able to easily verify this with a strategically placed sleep and two threads.
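A sketch of that verification (the sleep placement and timings are illustrative, not from the original code; run it in a method that declares throws InterruptedException and assume Data can be instantiated here):
// Test-only change inside getHandle's compute function:
//     int i = counter.getAndIncrement();
//     try { Thread.sleep(1000); } catch (InterruptedException e) { }
//     array[i] = data;

HandleCache cache = new HandleCache();
new Thread(() -> cache.getHandle(new Data())).start(); // T1 parks in the sleep
Thread.sleep(100);                    // let T1 get past the increment
System.out.println(cache.getData(0)); // T2 reads the slot T1 claimed: prints null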
From the documentation of ConcurrentHashMap#computeIfAbsent:
The entire method invocation is performed atomically, so the function is applied at most once per key. Some attempted update operations on this map by other threads may be blocked while computation is in progress, so the computation should be short and simple, and must not attempt to update any other mappings of this map.
The documentation's reference to blocking refers only to update operations on the Map, so if any other thread attempts to access array directly (rather than through an update operation on the Map), there can be race conditions and null can be read.
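One way to close that particular gap (my suggestion, not from the answers above) is to replace the plain array with an AtomicReferenceArray, whose per-element set/get have volatile semantics; note this still cannot help a reader that derives an index from counter before the store has executed:
static class HandleCache {
    private final AtomicInteger counter = new AtomicInteger();
    private final Map<Data, Integer> handles = new ConcurrentHashMap<>();
    private final AtomicReferenceArray<Data> array = new AtomicReferenceArray<>(100_000);

    int getHandle(Data data) {
        return handles.computeIfAbsent(data, k -> {
            int i = counter.getAndIncrement();
            if (i >= array.length()) {
                throw new IllegalStateException("array overflow");
            }
            array.set(i, data); // volatile write
            return i;
        });
    }

    Data getData(int handle) {
        // volatile read: safe for a handle that was itself safely published
        return array.get(handle);
    }
}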
This may be a duplicate question, but I've found this piece of code in a book about concurrency. It is said to be thread-safe:
ConcurrentHashMap<String, Integer> counts = new ...;
private void countThing(String thing) {
while (true) {
Integer currentCount = counts.get(thing);
if (currentCount == null) {
if (counts.putIfAbsent(thing, 1) == null)
break;
} else if (counts.replace(thing, currentCount, currentCount + 1)) {
break;
}
}
}
From my (concurrency beginner's) point of view, thread t1 and thread t2 could both read currentCount = 1. Then both threads could change the map's value to 2. Can someone please explain to me whether the code is okay or not?
The trick is that replace(K key, V oldValue, V newValue) provides the atomicity for you. From the docs (emphasis mine):
Replaces the entry for a key only if currently mapped to a given value. ... the action is performed atomically.
The key word is "atomically." Within replace, the "check if the old value is what we expect, and only if it is, replace it" happens as a single chunk of work, with no other threads able to interleave with it. It's up to the implementation to do whatever synchronization it needs to make sure that it provides this atomicity.
So, it can't be that both threads see currentCount == 1 from within the replace function. One of them will see it as 1, and thus its invocation of replace will return true. The other will see it as 2 (because of the first call), and thus return false, looping back to try again, this time with the new value of currentCount == 2.
Of course, it could be that a third thread has updated currentCount to 3 in the meanwhile, in which case that second thread will just keep trying until it's lucky enough to not have anyone jump ahead of it.
Can someone please explain me if the code is okay or not?
In addition to yshavit's answer, you can avoid writing your own loop by using compute, which was added in Java 8.
ConcurrentMap<String, Integer> counts = new ...;
private void countThing(String thing) {
counts.compute(thing, (k, prev) -> prev == null ? 1 : 1 + prev);
}
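A quick usage sketch (illustrative; assumes a surrounding method that declares throws InterruptedException): hammering compute from several threads still yields an exact count.
ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
ExecutorService pool = Executors.newFixedThreadPool(4);
for (int i = 0; i < 10_000; i++) {
    pool.submit(() -> counts.compute("apple", (k, prev) -> prev == null ? 1 : 1 + prev));
}
pool.shutdown();
pool.awaitTermination(1, TimeUnit.MINUTES);
System.out.println(counts.get("apple")); // 10000 - no lost updates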
With put you can also replace the value once you have read it:
} else {
    counts.put(thing, currentCount + 1);
}
Note, though, that put is unconditional: unlike replace it does not check the old value, so a concurrent update between the get and the put would be silently overwritten.
I have a ConcurrentHashMap where I do the following:
sequences = new ConcurrentHashMap<Class<?>, AtomicLong>();
if(!sequences.containsKey(table)) {
synchronized (sequences) {
if(!sequences.containsKey(table))
initializeHashMapKeyValue(table);
}
}
My question is - is it unnecessary to make the extra
if(!sequences.containsKey(table))
check inside the synchronized block, so other threads won't initialize the same hashmap value?
Maybe the check is necessary and I am doing it wrong? It seems a bit silly what I'm doing, but I think it is necessary.
All operations on a ConcurrentHashMap are thread-safe, but thread-safe operations are not composable. You are trying to make a pair of operations atomic: checking for something in the map and, in case it's not there, putting something there (I assume). So the answer to your question is yes, you need to check again, and your code looks OK.
You should be using the putIfAbsent methods of ConcurrentMap.
ConcurrentMap<String, AtomicLong> map = new ConcurrentHashMap<String, AtomicLong> ();
public long addTo(String key, long value) {
// The final value it became.
long result = value;
// Make a new one to put in the map.
AtomicLong newValue = new AtomicLong(value);
// Insert my new one or get me the old one.
AtomicLong oldValue = map.putIfAbsent(key, newValue);
// Was it already there? putIfAbsent returns null if our new one went in.
if ( oldValue != null ) {
    // Update the existing one.
    result = oldValue.addAndGet(value);
}
}
return result;
}
For the functional purists amongst us, the above can be simplified (or perhaps complexified) to:
public long addTo(String key, long value) {
    // putIfAbsent returns null when our new AtomicLong went in, so keep a
    // reference to it rather than dereferencing the (possibly null) result.
    AtomicLong newValue = new AtomicLong();
    AtomicLong oldValue = map.putIfAbsent(key, newValue);
    return (oldValue != null ? oldValue : newValue).addAndGet(value);
}
And in Java 8 we can avoid the unnecessary creation of an AtomicLong:
public long addTo8(String key, long value) {
return map.computeIfAbsent(key, k -> new AtomicLong()).addAndGet(value);
}
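And if you never need the running total back from addTo, a LongAdder contends less than an AtomicLong under heavy writes. A sketch of that trade-off (note that LongAdder cannot return the post-add total the way addAndGet does):
private final ConcurrentMap<String, LongAdder> adders = new ConcurrentHashMap<>();

public void addTo(String key, long value) {
    // Cheap under contention: LongAdder stripes updates across internal cells.
    adders.computeIfAbsent(key, k -> new LongAdder()).add(value);
}

public long totalFor(String key) {
    LongAdder a = adders.get(key);
    return a == null ? 0 : a.sum(); // sum() is weakly consistent under concurrent adds
}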
You can't get an exclusive lock on a ConcurrentHashMap. In such a case you would be better off using a synchronized HashMap.
There is already an atomic method to put inside ConcurrentHashMap if the object is not already there: putIfAbsent.
I see what you did there ;-) The question is: do you see it yourself?
First of all, you used something called the "double-checked locking pattern", where you have a fast path (the first containsKey) that needs no synchronization when it is satisfied, and a slow path that must be synchronized because you perform a compound operation. Your operation consists of checking whether something is in the map and then putting it there / initializing it. It does not matter that ConcurrentHashMap is thread-safe for each single operation, because you perform two such operations that must be treated as a unit, so yes, this synchronized block is correct, and it could actually be synchronized on anything else, for example this.
In Java 8 you should be able to replace the double-checked lock with .computeIfAbsent (this assumes initializeHashMapKeyValue returns the value to be stored for the key):
sequences.computeIfAbsent(table, k -> initializeHashMapKeyValue(k));
Create a file named dictionary.txt with the following contents:
a
as
an
b
bat
ball
Here we have:
Count of words starting with "a": 3
Count of words starting with "b": 3
Total word count: 6
Now execute the following program as: java WordCount dictionary.txt 10
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.Instant;
import java.util.Arrays;
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class WordCount {
String fileName;
public WordCount(String fileName) {
this.fileName = fileName;
}
public void process() throws Exception {
long start = Instant.now().toEpochMilli();
LongAdder totalWords = new LongAdder();
//Map<Character, LongAdder> wordCounts = Collections.synchronizedMap(new HashMap<Character, LongAdder>());
ConcurrentHashMap<Character, LongAdder> wordCounts = new ConcurrentHashMap<Character, LongAdder>();
Files.readAllLines(Paths.get(fileName))
.parallelStream()
.map(line -> line.split("\\s+"))
.flatMap(Arrays::stream)
.parallel()
.map(String::toLowerCase)
.forEach(word -> {
totalWords.increment();
char c = word.charAt(0);
if (!wordCounts.containsKey(c)) {
wordCounts.put(c, new LongAdder());
}
wordCounts.get(c).increment();
});
System.out.println(wordCounts);
System.out.println("Total word count: " + totalWords);
long end = Instant.now().toEpochMilli();
System.out.println(String.format("Completed in %d milliseconds", (end - start)));
}
public static void main(String[] args) throws Exception {
for (int r = 0; r < Integer.parseInt(args[1]); r++) {
new WordCount(args[0]).process();
}
}
}
You would see counts vary as shown below:
{a=2, b=3}
Total word count: 6
Completed in 77 milliseconds
{a=3, b=3}
Total word count: 6
Now comment out the ConcurrentHashMap declaration, uncomment the Collections.synchronizedMap line above it, and run the program again.
You would see deterministic counts.
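For what it's worth, the varying counts come from the non-atomic containsKey/put pair, not from ConcurrentHashMap itself; making the insertion atomic, as in the computeIfAbsent idiom shown in earlier answers, also yields stable counts:
// Replaces the containsKey/put/get sequence in the forEach body:
wordCounts.computeIfAbsent(c, k -> new LongAdder()).increment();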
We are writing some locking code and have run into a peculiar question. We use a ConcurrentHashMap for fetching instances of Object that we lock on. So our synchronized blocks look like this
synchronized(locks.get(key)) { ... }
We have overridden the get method of ConcurrentHashMap to make it always return a new object if it did not contain one for the key.
@Override
public Object get(Object key) {
Object o = super.get(key);
if (null == o) {
Object no = new Object();
o = putIfAbsent((K) key, no);
if (null == o) {
o = no;
}
}
return o;
}
But is there a state in which the get-method has returned the object, but the thread has not yet entered the synchronized block, allowing other threads to get the same object and lock on it?
We have a potential race condition where:
thread 1: gets the object with key A, but does not enter the synchronized block
thread 2: gets the object with key A, enters a synchronized block
thread 2: removes the object from the map, exits synchronized block
thread 1: enters the synchronized block with the object that is no longer in the map
thread 3: gets a new object for key A (not the same object as thread 1 got)
thread 3: enters a synchronized block, while thread 1 also is in its synchronized block both using key A
This situation would not be possible if Java entered the synchronized block directly after the call to get has returned. If not, does anyone have any input on how we could remove keys without having to worry about this race condition?
As I see it, the problem originates from the fact that you lock on map values, while in fact you need to lock on the key (or some derivation of it). If I understand correctly, you want to prevent two threads from running the critical section using the same key.
Is it possible for you to lock on the keys? Can you guarantee that you always use the same instance of the key?
A nice alternative:
Don't delete the locks at all. Use a ReferenceMap with weak values. This way, a map entry is removed only if it is not currently in use by any thread.
Note:
1) Now you will have to synchronize this map (using Collections.synchronizedMap(..)).
2) You also need to synchronize the code that generates/returns a value for a given key.
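A sketch of that arrangement, assuming Apache Commons Collections 4 on the classpath (org.apache.commons.collections4.map.ReferenceMap; treat the exact constructor as an assumption to verify):
// Hard keys, weak values: an entry can be collected once no thread still
// holds a strong reference to its lock object.
Map<String, Object> locks = Collections.synchronizedMap(
        new ReferenceMap<>(AbstractReferenceMap.ReferenceStrength.HARD,
                           AbstractReferenceMap.ReferenceStrength.WEAK));

Object lockFor(String key) {
    // synchronizedMap makes computeIfAbsent atomic here (point 2 above)
    return locks.computeIfAbsent(key, k -> new Object());
}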
you have 2 options:
a. you could check the map once inside the synchronized block.
Object o = map.get(k);
synchronized(o) {
if(map.get(k) != o) {
// object removed, handle...
}
}
b. you could extend your values to contain a flag indicating their status. when a value is removed from the map, you set a flag indicating that it was removed (within the sync block).
CacheValue v = map.get(k);
synchronized(v) {
if(v.isRemoved()) {
// object removed, handle...
}
}
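A minimal CacheValue sketch for option b (names illustrative), where the flag is only ever touched while holding the value's monitor:
class CacheValue {
    private boolean removed; // guarded by this

    // Called inside the synchronized block that removes the value from the map.
    synchronized void markRemoved() { removed = true; }

    synchronized boolean isRemoved() { return removed; }
}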
The code as is, is thread safe. That being said, if you are removing from the CHM then any type of assumptions that are made when synchronizing on an object returned from the collection will be lost.
But is there a state in which the get-method has returned the object, but the thread has not yet entered the synchronized block, allowing other threads to get the same object and lock on it?
Yes, but that happens any time you synchronize on an Object. What is guaranteed is that the other thread will not enter the synchronized block until the first one exits.
If not, does anyone have any input on how we could remove keys without having to worry about this race condition?
The only real way of ensuring this atomicity is to either synchronize on the CHM or another object (shared by all threads). The best way is to not remove from the CHM.
Thanks for all the great suggestions and ideas, really appreciate it! Eventually this discussion made me come up with a solution that does not use objects for locking.
Just a brief description of what we're actually doing.
We have a cache that receives data continuously from our environment. The cache has several 'buckets' for each key and aggregates events into the buckets as they come in. The events coming in have a key that determines the cache entry to be used, and a timestamp determining the bucket in the cache entry that should be incremented.
The cache also has an internal flush task that runs periodically. It will iterate all cache entries and flush all buckets but the current one to the database.
Now the timestamps of the incoming data can be for any time in the past, but the majority of them are for very recent timestamps. So the current bucket will get more hits than buckets for previous time intervals.
Knowing this, I can demonstrate the race condition we had. All this code is for one single cache entry, since the issue was isolated to concurrent writing and flushing of single cache elements.
// buckets :: ConcurrentMap<Long, AtomicLong>
void incrementBucket(long timestamp, long value) {
long key = bucketKey(timestamp, LOG_BUCKET_INTERVAL);
AtomicLong bucket = buckets.get(key);
if (null == bucket) {
AtomicLong newBucket = new AtomicLong(0);
bucket = buckets.putIfAbsent(key, newBucket);
if (null == bucket) {
bucket = newBucket;
}
}
bucket.addAndGet(value);
}
Map<Long, Long> flush() {
long now = System.currentTimeMillis();
long nowKey = bucketKey(now, LOG_BUCKET_INTERVAL);
Map<Long, Long> flushedValues = new HashMap<Long, Long>();
for (Long key : new TreeSet<Long>(buckets.keySet())) {
if (key != nowKey) {
AtomicLong bucket = buckets.remove(key);
if (null != bucket) {
long databaseKey = databaseKey(key);
long n = bucket.get();
if (!flushedValues.containsKey(databaseKey)) {
flushedValues.put(databaseKey, n);
} else {
long sum = flushedValues.get(databaseKey) + n;
flushedValues.put(databaseKey, sum);
}
}
}
}
return flushedValues;
}
What could happen was: (fl = flush thread, it = increment thread)
it: enters incrementBucket, executes until just before the call to addAndGet(value)
fl: enters flush and iterates the buckets
fl: reaches the bucket that is being incremented
fl: removes it and calls bucket.get() and stores the value to the flushed values
it: increments the bucket (which will be lost now, because the bucket has been flushed and removed)
The solution:
void incrementBucket(long timestamp, long value) {
long key = bucketKey(timestamp, LOG_BUCKET_INTERVAL);
boolean done = false;
while (!done) {
AtomicLong bucket = buckets.get(key);
if (null == bucket) {
AtomicLong newBucket = new AtomicLong(0);
bucket = buckets.putIfAbsent(key, newBucket);
if (null == bucket) {
bucket = newBucket;
}
}
synchronized (bucket) {
// double check if the bucket still is the same
if (buckets.get(key) != bucket) {
continue;
}
done = true;
bucket.addAndGet(value);
}
}
}
Map<Long, Long> flush() {
long now = System.currentTimeMillis();
long nowKey = bucketKey(now, LOG_BUCKET_INTERVAL);
Map<Long, Long> flushedValues = new HashMap<Long, Long>();
for (Long key : new TreeSet<Long>(buckets.keySet())) {
if (key != nowKey) {
AtomicLong bucket = buckets.get(key);
if (null != bucket) {
synchronized(bucket) {
buckets.remove(key);
long databaseKey = databaseKey(key);
long n = bucket.get();
if (!flushedValues.containsKey(databaseKey)) {
flushedValues.put(databaseKey, n);
} else {
long sum = flushedValues.get(databaseKey) + n;
flushedValues.put(databaseKey, sum);
}
}
}
}
}
return flushedValues;
}
I hope this will be useful for others that might run in to the same problem.
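As an aside (not part of the original solution): on Java 8+ the same per-key atomicity can come from the map itself, because ConcurrentHashMap.compute runs atomically per key. That removes the manual synchronized double-check from incrementBucket, and flush can go back to a plain buckets.remove(key):
void incrementBucket(long timestamp, long value) {
    long key = bucketKey(timestamp, LOG_BUCKET_INTERVAL);
    // compute cannot interleave with a concurrent remove(key) in flush:
    // either the increment lands before the removal and gets flushed, or it
    // creates a fresh bucket that survives until the next flush.
    buckets.compute(key, (k, bucket) -> {
        if (bucket == null) bucket = new AtomicLong(0);
        bucket.addAndGet(value);
        return bucket;
    });
}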
The two code snippets you've provided are fine, as they are. What you've done is similar to how lazy instantiation with Guava's MapMaker.makeComputingMap() might work, but I see no problems with the way that the keys are lazily created.
You're right, by the way, that it's entirely possible for a thread to be preempted after the get() lookup of a lock object, but before entering synchronized.
My problem is with the third bullet point in your race condition description. You say:
thread 2: removes the object from the map, exits synchronized block
Which object, and which map? In general, I presumed that you were looking up a key to lock on, and then would be performing some other operations on other data structures, within the synchronized block. If you're talking about removing the lock object from the ConcurrentHashMap mentioned at the start, that's a massive difference.
And the real question is whether this is necessary at all. In a general purpose environment, I don't think there will be any memory issues with just remembering all of the lock objects for all the keys that have ever been looked up (even if those keys no longer represent live objects). It is much harder to come up with some way of safely disposing of an object that may be stored in a local variable of some other thread at any time, and if you do want to go down this route I have a feeling that performance will degrade to that of a single coarse lock around the key lookup.
If I've misunderstood what's going on there then feel free to correct me.
Edit: OK - in which case I stand by my above claim that the easiest way to do this is not remove the keys; this might not actually be as problematic as you think, since the rate at which the space grows will be very small. By my calculations (which may well be off, I'm not an expert in space calculations and your JVM may vary) the map grows by about 14Kb/hour. You'd have to have a year of continuous uptime before this map used up 100MB of heap space.
But let's assume that the keys really do need to be removed. This poses the problem that you can't remove a key until you know that no threads are using it. This leads to the chicken-and-egg problem that you'll require all threads to synchronize on something else in order to get atomicity (of checking) and visibility across threads, which then means that you can't do much else than slap a single synchronized block around the whole thing, completely subverting your lock striping strategy.
Let's revisit the constraints. The main thing here is that things get cleared up eventually. It's not a correctness constraint but just a memory issue. Hence what we really want to do is identify some point at which the key could definitely no longer be used, and then use this as the trigger to remove it from the map. There are two cases here:
You can identify such a condition, and logically test for it. In which case you can remove the keys from the map with (in the worst case) some kind of timer thread, or hopefully some logic that's more cleanly integrated with your application.
You cannot identify any condition by which you know that a key will no longer be used. In this case, by definition, there is no point at which it's safe to remove the keys from the map. So in fact, for correctness' sake, you must leave them in.
In any case, this effectively boils down to manual garbage collection. Remove the keys from the map when you can lazily determine that they're no longer going to be used. Your current solution is too eager here since (as you point out) it's doing the removal before this situation holds.