Is this valid code to write, if I wish to avoid an unnecessary contains call?
I wish to avoid a contains call on every invocation, as this is highly time-sensitive code.
cancelretryCountMap.putIfAbsent(tag, new AtomicInteger(0));
int count = cancelretryCountMap.get(tag).incrementAndGet();
if (count > 10) {
    // abort after x retries
    ....
}
I am using JDK 7
Usually, you would use putIfAbsent like this:
final AtomicInteger present = map.get(tag);
int count;
if (present != null) {
count = present.incrementAndGet();
} else {
final AtomicInteger instance = new AtomicInteger(0);
final AtomicInteger marker = map.putIfAbsent(tag, instance);
if (marker == null) {
count = instance.incrementAndGet();
} else {
count = marker.incrementAndGet();
}
}
The reason for the explicit get is that you want to avoid allocating the default value on the "happy" path (i.e., when there is already an entry for the given key).
If there is no matching entry, you have to use the return value of putIfAbsent in order to distinguish between
the entry was still missing (and the default value has been added due to the call), in which case the method returns null, and
some other thread has won the race and inserted the new entry after the call to get (in which case the method returns the current value associated with the given key)
You can abstract this sequence by introducing a helper method, e.g.,
interface Supplier<T> {
T get();
}
static <K, T> T computeIfAbsent(ConcurrentMap<K, T> map, K key, Supplier<? extends T> producer) {
final T present = map.get(key);
if (present != null) {
return present;
} else {
final T fallback = producer.get();
final T marker = map.putIfAbsent(key, fallback);
if (marker == null) {
return fallback;
} else {
return marker;
}
}
}
You could use this in your example:
static final Supplier<AtomicInteger> newAtomicInteger = new Supplier<AtomicInteger>() {
public AtomicInteger get() { return new AtomicInteger(0); }
};
void yourMethodWhatever(Object tag) {
final AtomicInteger counter = computeIfAbsent(cancelretryCountMap, tag, newAtomicInteger);
if (counter.incrementAndGet() > 10) {
... whatever ...
}
}
Note that this is already provided in JDK 8 as a default method on Map (computeIfAbsent), but since you are still on JDK 7, you have to roll your own, as is done here.
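For comparison only (it will not compile on your JDK 7): on JDK 8+ the whole sequence collapses to a single call:
int count = cancelretryCountMap
        .computeIfAbsent(tag, k -> new AtomicInteger(0))
        .incrementAndGet();
if (count > 10) {
    // abort after x retries
}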
Related
I have a problem with deserialization in Java 11 that results in a HashMap with a key that can't be found. I would appreciate it if anyone with more knowledge about the issue could say if my proposed workaround looks ok, or if there is something better I could do.
Consider the following contrived implementation (the relationships in the real problem are a bit more complex and hard to change):
public class Element implements Serializable {
private static final long serialVersionUID = 1L;
private final int id;
private final Map<Element, Integer> idFromElement = new HashMap<>();
public Element(int id) {
this.id = id;
}
public void addAll(Collection<Element> elements) {
elements.forEach(e -> idFromElement.put(e, e.id));
}
public Integer idFrom(Element element) {
return idFromElement.get(element);
}
@Override
public int hashCode() {
return id;
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (!(obj instanceof Element)) {
return false;
}
Element other = (Element) obj;
return this.id == other.id;
}
}
Then I create an instance that has a reference to itself and serialize and deserialize it:
public static void main(String[] args) {
List<Element> elements = Arrays.asList(new Element(111), new Element(222));
Element originalElement = elements.get(1);
originalElement.addAll(elements);
Storage<Element> storage = new Storage<>();
storage.serialize(originalElement);
Element retrievedElement = storage.deserialize();
if (retrievedElement.idFrom(retrievedElement) == 222) {
System.out.println("ok");
}
}
If I run this code in Java 8, the result is "ok"; if I run it in Java 11, the result is a NullPointerException, because retrievedElement.idFrom(retrievedElement) returns null.
I put a breakpoint at HashMap.hash() and noticed that:
In Java 8, when idFromElement is being deserialized and Element(222) is being added to it, its id is 222, so I am able to find it later.
In Java 11, the id is not initialized (0 for int or null if I make it an Integer), so hash() is 0 when it's stored in the HashMap. Later, when I try to retrieve it, the id is 222, so idFromElement.get(element) returns null.
I understand that the sequence here is deserialize(Element(222)) -> deserialize(idFromElement) -> put unfinished Element(222) into Map. But, for some reason, in Java 8 id is already initialized when we get to the last step, while in Java 11 it is not.
The solution I came up with was to make idFromElement transient and write custom writeObject and readObject methods to force idFromElement to be deserialized after id:
...
transient private Map<Element, Integer> idFromElement = new HashMap<>();
...
private void writeObject(ObjectOutputStream output) throws IOException {
output.defaultWriteObject();
output.writeObject(idFromElement);
}
@SuppressWarnings("unchecked")
private void readObject(ObjectInputStream input) throws IOException, ClassNotFoundException {
input.defaultReadObject();
idFromElement = (HashMap<Element, Integer>) input.readObject();
}
The only reference I was able to find about the order during serialization/deserialization was this:
For serializable classes, the SC_SERIALIZABLE flag is set, the number of fields counts the number of serializable fields and is followed by a descriptor for each serializable field. The descriptors are written in canonical order. The descriptors for primitive typed fields are written first sorted by field name followed by descriptors for the object typed fields sorted by field name. The names are sorted using String.compareTo.
Which is the same in both Java 8 and Java 11 docs, and seems to imply that primitive typed fields should be written first, so I expected there would be no difference.
Implementation of Storage<T> included for completeness:
public class Storage<T> {
private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
public void serialize(T object) {
buffer.reset();
try (ObjectOutputStream objectOutputStream = new ObjectOutputStream(buffer)) {
objectOutputStream.writeObject(object);
objectOutputStream.flush();
} catch (Exception ioe) {
ioe.printStackTrace();
}
}
@SuppressWarnings("unchecked")
public T deserialize() {
ByteArrayInputStream byteArrayIS = new ByteArrayInputStream(buffer.toByteArray());
try (ObjectInputStream objectInputStream = new ObjectInputStream(byteArrayIS)) {
return (T) objectInputStream.readObject();
} catch (IOException | ClassNotFoundException e) {
e.printStackTrace();
}
return null;
}
}
As mentioned in the comments and encouraged by the asker, here are the parts of the code that changed between version 8 and version 11 that I assume to be the reason for the different behavior (based on reading and debugging).
The difference is in the ObjectInputStream class, in one of its core methods. This is the relevant part of the implementation in Java 8:
private void readSerialData(Object obj, ObjectStreamClass desc)
throws IOException
{
ObjectStreamClass.ClassDataSlot[] slots = desc.getClassDataLayout();
for (int i = 0; i < slots.length; i++) {
ObjectStreamClass slotDesc = slots[i].desc;
if (slots[i].hasData) {
if (obj == null || handles.lookupException(passHandle) != null) {
...
} else {
defaultReadFields(obj, slotDesc);
}
...
}
}
}
/**
* Reads in values of serializable fields declared by given class
* descriptor. If obj is non-null, sets field values in obj. Expects that
* passHandle is set to obj's handle before this method is called.
*/
private void defaultReadFields(Object obj, ObjectStreamClass desc)
throws IOException
{
Class<?> cl = desc.forClass();
if (cl != null && obj != null && !cl.isInstance(obj)) {
throw new ClassCastException();
}
int primDataSize = desc.getPrimDataSize();
if (primVals == null || primVals.length < primDataSize) {
primVals = new byte[primDataSize];
}
bin.readFully(primVals, 0, primDataSize, false);
if (obj != null) {
desc.setPrimFieldValues(obj, primVals);
}
int objHandle = passHandle;
ObjectStreamField[] fields = desc.getFields(false);
Object[] objVals = new Object[desc.getNumObjFields()];
int numPrimFields = fields.length - objVals.length;
for (int i = 0; i < objVals.length; i++) {
ObjectStreamField f = fields[numPrimFields + i];
objVals[i] = readObject0(f.isUnshared());
if (f.getField() != null) {
handles.markDependency(objHandle, passHandle);
}
}
if (obj != null) {
desc.setObjFieldValues(obj, objVals);
}
passHandle = objHandle;
}
...
The method calls defaultReadFields, which reads the values of the fields. As mentioned in the quoted part of the specification, it first handles the field descriptors of primitive fields. The values that are read for these fields are set immediately after reading them, with
desc.setPrimFieldValues(obj, primVals);
and importantly: This happens before it calls readObject0 for each of the non-primitive fields.
In contrast to that, here is the relevant part of the implementation of Java 11:
private void readSerialData(Object obj, ObjectStreamClass desc)
throws IOException
{
ObjectStreamClass.ClassDataSlot[] slots = desc.getClassDataLayout();
...
for (int i = 0; i < slots.length; i++) {
ObjectStreamClass slotDesc = slots[i].desc;
if (slots[i].hasData) {
if (obj == null || handles.lookupException(passHandle) != null) {
...
} else {
FieldValues vals = defaultReadFields(obj, slotDesc);
if (slotValues != null) {
slotValues[i] = vals;
} else if (obj != null) {
defaultCheckFieldValues(obj, slotDesc, vals);
defaultSetFieldValues(obj, slotDesc, vals);
}
}
...
}
}
...
}
private class FieldValues {
final byte[] primValues;
final Object[] objValues;
FieldValues(byte[] primValues, Object[] objValues) {
this.primValues = primValues;
this.objValues = objValues;
}
}
/**
* Reads in values of serializable fields declared by given class
* descriptor. Expects that passHandle is set to obj's handle before this
* method is called.
*/
private FieldValues defaultReadFields(Object obj, ObjectStreamClass desc)
throws IOException
{
Class<?> cl = desc.forClass();
if (cl != null && obj != null && !cl.isInstance(obj)) {
throw new ClassCastException();
}
byte[] primVals = null;
int primDataSize = desc.getPrimDataSize();
if (primDataSize > 0) {
primVals = new byte[primDataSize];
bin.readFully(primVals, 0, primDataSize, false);
}
Object[] objVals = null;
int numObjFields = desc.getNumObjFields();
if (numObjFields > 0) {
int objHandle = passHandle;
ObjectStreamField[] fields = desc.getFields(false);
objVals = new Object[numObjFields];
int numPrimFields = fields.length - objVals.length;
for (int i = 0; i < objVals.length; i++) {
ObjectStreamField f = fields[numPrimFields + i];
objVals[i] = readObject0(f.isUnshared());
if (f.getField() != null) {
handles.markDependency(objHandle, passHandle);
}
}
passHandle = objHandle;
}
return new FieldValues(primVals, objVals);
}
...
An inner class, FieldValues, has been introduced. The defaultReadFields method now only reads the field values, and returns them as a FieldValues object. Afterwards, the returned values are assigned to the fields, by passing this FieldValues object to a newly introduced defaultSetFieldValues method, which internally does the desc.setPrimFieldValues(obj, primValues) call that originally was done immediately after the primitive values had been read.
To emphasize this again: The defaultReadFields method first reads the primitive field values. Then it reads the non-primitive field values. But it does so before the primitive field values have been set!
This new process interferes with the deserialization method of HashMap. Again, the relevant part is shown here:
private void readObject(java.io.ObjectInputStream s)
throws IOException, ClassNotFoundException {
...
int mappings = s.readInt(); // Read number of mappings (size)
if (mappings < 0)
throw new InvalidObjectException("Illegal mappings count: " +
mappings);
else if (mappings > 0) { // (if zero, use defaults)
...
Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
table = tab;
// Read the keys and values, and put the mappings in the HashMap
for (int i = 0; i < mappings; i++) {
@SuppressWarnings("unchecked")
K key = (K) s.readObject();
@SuppressWarnings("unchecked")
V value = (V) s.readObject();
putVal(hash(key), key, value, false, false);
}
}
}
It reads the key- and value objects, one by one, and puts them into the table, by computing the hash of the key and using the internal putVal method. This is the same method that is used when manually populating the map (i.e. when it is filled programmatically, and not deserialized).
Holger already gave a hint in the comments why this is necessary: There is no guarantee that the hash code of the deserialized keys will be the same as before the serialization. So blindly "restoring the original array" could basically lead to objects being stored in the table under a wrong hash code.
But here, the opposite happens: The keys (i.e. the objects of type Element) are deserialized. They contain the idFromElement map, which in turn contains the Element objects. These elements are put into the map, while the Element objects are still in the process of being deserialized, using the putVal method. But due to the changed order in ObjectInputStream, this is done before the primitive value of the id field (which determines the hash code) has been set. So the objects are stored using hash code 0, and later, the id value is assigned (e.g. the value 222), causing the objects to end up in the table under a hash code that they no longer have.
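The effect can be reproduced in isolation with any key whose hash code changes after insertion; here is a small, hypothetical MutableKey (not part of the code above) that shows the same symptom:
class MutableKey {
    int id;
    @Override public int hashCode() { return id; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).id == id;
    }
}

Map<MutableKey, Integer> map = new HashMap<>();
MutableKey key = new MutableKey(); // id is still 0, like the half-deserialized Element
map.put(key, 42);                  // stored in the bucket for hash 0
key.id = 222;                      // the "late" primitive field assignment
System.out.println(map.get(key)); // prints null: the entry now sits under a stale hash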
Now, on a more abstract level, this was already clear from the observed behavior. Therefore, the original question was not "What is going on here???", but
if my proposed workaround looks ok, or if there is something better I could do.
I think that the workaround could be OK, but would hesitate to say that nothing could go wrong there. It's complicated.
As for the second part: Something better could be to file a bug report in the Java Bug Database, because the new behavior is clearly broken. It may be hard to point out a specification that is violated, but the deserialized map is certainly inconsistent, and that is not acceptable.
(Yes, I could also file a bug report, but think that more research might be necessary in order to make sure it is written properly, not a duplicate, etc....)
I want to add one possible solution to the excellent answers above:
Instead of making idFromElement transient and forcing the HashMap to be deserialized after the id, you could also make id not final and deserialize it before calling defaultReadObject().
This makes the solution more scalable, since other classes or objects could use the hashCode and equals methods, or the id, leading to cycles similar to the one you described.
It might also lead to a more generic solution of the problem, although this is not yet completely thought out: All the information that is used in the deserialization of other objects needs to be deserialized before defaultReadObject() is called. That might be the id, but also other fields that your class exposes.
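A minimal sketch of this approach, assuming id is made non-final and Element gets its own stream methods (the double write of id is harmless, since it is simply read back twice):
private int id; // no longer final

private void writeObject(ObjectOutputStream output) throws IOException {
    output.writeInt(id);         // write id explicitly, before everything else
    output.defaultWriteObject();
}

private void readObject(ObjectInputStream input) throws IOException, ClassNotFoundException {
    id = input.readInt();        // id is set before idFromElement is deserialized,
    input.defaultReadObject();   // so hashCode() is already correct during putVal()
}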
I have a key-value map accessed by multiple threads:
private final ConcurrentMap<Key, VersionValue> key_vval_map = new ConcurrentHashMap<Key, VersionValue>();
My custom get() and put() methods follow the typical check-then-act pattern. Therefore, synchronization is necessary to ensure atomicity. To avoid locking the whole ConcurrentHashMap, I define:
private final Object[] locks = new Object[10];
{
for(int i = 0; i < locks.length; i++)
locks[i] = new Object();
}
And the get() method goes (it calls the get() method of ConcurrentHashMap):
public VersionValue get(Key key)
{
final int hash = key.hashCode() & 0x7FFFFFFF;
synchronized (locks[hash % locks.length]) // I am not sure whether this synchronization is necessary.
{
VersionValue vval = this.key_vval_map.get(key);
if (vval == null)
return VersionValue.RESERVED_VERSIONVALUE; // RESERVED_VERSIONVALUE is defined elsewhere
return vval;
}
}
The put() method goes (it calls the get() method above):
public void put(Key key, VersionValue vval)
{
final int hash = key.hashCode() & 0x7FFFFFFF;
synchronized (locks[hash % locks.length]) // allowing concurrent writers
{
VersionValue current_vval = this.get(key); // call the get() method above
if (current_vval.compareTo(vval) < 0) // it is a newer VersionValue
this.key_vval_map.put(key, vval);
}
}
The above code works. But, as you know, working is far from being correct in multi-threaded programming.
My questions are :
Is this synchronization mechanism (especially synchronized (locks[hash % locks.length])) necessary and correct in my code?
In Javadoc on Interface Lock, it says
Lock implementations provide more extensive locking operations than
can be obtained using synchronized methods and statements.
Then is it feasible to replace synchronization by Lock in my code?
Edit: If you are using Java-8, don't hesitate to refer to the answer by @nosid.
ConcurrentMap allows you to use optimistic locking instead of explicit synchronization:
VersionValue current_vval = null;
VersionValue new_vval = null;
do {
current_vval = key_vval_map.get(key);
VersionValue effectiveVval = current_vval == null ? VersionValue.RESERVED_VERSIONVALUE : current_vval;
if (effectiveVval.compareTo(vval) < 0) {
new_vval = vval;
} else {
break;
}
} while (!replace(key, current_vval, new_vval));
...
private boolean replace(Key key, VersionValue current, VersionValue newValue) {
if (current == null) {
return key_vval_map.putIfAbsent(key, newValue) == null;
} else {
return key_vval_map.replace(key, current, newValue);
}
}
It will probably have better performance under low contention.
Regarding your questions:
If you use Guava, take a look at Striped (a minimal sketch follows below)
No, you don't need additional functionality of Lock here
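A minimal sketch of point 1, assuming Guava is on the classpath (the stripe count of 10 mirrors your locks array):
import java.util.concurrent.locks.Lock;
import com.google.common.util.concurrent.Striped;

private final Striped<Lock> stripes = Striped.lock(10);

public void put(Key key, VersionValue vval) {
    Lock lock = stripes.get(key); // the same key always maps to the same stripe
    lock.lock();
    try {
        VersionValue current = key_vval_map.get(key);
        if (current == null || current.compareTo(vval) < 0)
            key_vval_map.put(key, vval);
    } finally {
        lock.unlock();
    }
}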
If you are using Java-8, you can use the method ConcurrentHashMap::merge instead of reading and updating the value in two steps.
public VersionValue get(Key key) {
return key_vval_map.getOrDefault(key, VersionValue.RESERVED_VERSIONVALUE);
}
public void put(Key key, VersionValue vval) {
key_vval_map.merge(key, vval,
(lhs, rhs) -> lhs.compareTo(rhs) >= 0 ? lhs : rhs);
}
I have the following code snippet (the code is in Java, but I have tried to reduce as much clutter as possible):
class State {
public synchronized void read() {
}
public synchronized void write(ResourceManager rm) {
rm.request();
}
public synchronized void returnResource() {
}
}
State st1 = new State();
State st2 = new State();
State st3 = new State();
class ResourceManager {
public synchronized void request() {
    st2 = findIdleState();
    st2.returnResource();
}
}
ResourceManager globalRM = new ResourceManager();
Thread1()
{
st1.write(globalRM);
}
Thread2()
{
st2.write(globalRM);
}
Thread3()
{
st1.read();
}
This code snippet has the possibility of entering a deadlock with the following sequence of calls:
Thread1: st1.write()
Thread1: st1.write() invokes globalRM.request()
Thread2: st2.write()
Thread1: globalRM.request() tries to invoke st2.returnResource(), but gets blocked because Thread2 is holding a lock on st2.
Thread2: st2.write() tries to invoke globalRM.request(), but gets blocked because globalRM's lock is with Thread1
Thread3: st1.read(), gets blocked.
How do I solve such a deadlock? I thought about it for a while to see if there is some sort of ordered-locking approach I can use to acquire the locks, but I cannot think of such a solution. The problem is that the resource manager is global, while states are specific to each job (each job has a sequential ID which could be used for ordering, if there is some way to use order for lock acquisition).
There are some options to avoid this scenario, each has its advantages and drawbacks:
1.) Use a single lock object for all instances. This approach is simple to implement, but limits you to a single thread holding the lock at a time. This can be reasonable if the synchronized blocks are short and scalability is not a big issue (e.g. desktop application aka non-server). The main selling point of this is the simplicity of implementation.
2.) Use ordered locking - this means whenever you have to acquire two or more locks, ensure that the order in which they are acquired is always the same (a minimal sketch follows this list). That's much easier said than done and can require heavy changes to the code base.
3.) Get rid of the locks completely. With the java.util.concurrent(.atomic) classes you can implement multithreaded data structures without blocking (usually using compareAndSet-flavor methods). This certainly requires changes to the code base and some rethinking of the structures; it usually requires a rewrite of critical portions of the code base.
4.) Many problems just disappear when you consistently use immutable types and objects. This combines well with the atomic (3.) approach to implement mutable super-structures (often implemented as copy-on-change).
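For option 2.), here is a minimal sketch of ordered locking, assuming each State exposes the sequential job ID you mentioned (getId() is hypothetical):
void withBothLocks(State a, State b, Runnable action) {
    // Always acquire the lock of the State with the lower id first,
    // regardless of the order in which the caller passes them.
    State first  = a.getId() <= b.getId() ? a : b;
    State second = (first == a) ? b : a;
    synchronized (first) {
        synchronized (second) {
            action.run();
        }
    }
}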
To give any recommendation one would need to know a lot more details about what is protected by your locks.
--- EDIT ---
I needed a lock-free Set implementation; this code sample illustrates its strengths and weaknesses. I did implement iterator() as a snapshot; implementing it to throw ConcurrentModificationException and support remove() would have been a little more complicated, and I had no need for it. Some of the referenced utility classes I did not post (I think it's completely obvious what the missing referenced pieces do).
I hope it's at least a little useful as a starting point for how to work with AtomicReference.
/**
* Helper class that implements a set-like data structure
* with atomic add/remove capability.
*
* Iteration always occurs on a current snapshot; thus
* the iterator will not support remove, but will also never
* throw ConcurrentModificationException.
*
* Iteration and reading the set is cheap, altering the set
* is expensive.
*/
public final class AtomicArraySet<T> extends AbstractSet<T> {
protected final AtomicReference<Object[]> reference =
new AtomicReference<Object[]>(Primitives.EMPTY_OBJECT_ARRAY);
public AtomicArraySet() {
}
/**
* Checks if the set contains the element.
*/
@Override
public boolean contains(final Object object) {
final Object[] array = reference.get();
for (final Object element : array) {
if (element.equals(object))
return true;
}
return false;
}
/**
* Adds an element to the set. Returns true if the element was added.
*
* If element is NULL or already in the set, no change is made to the
* set and false is returned.
*/
@Override
public boolean add(final T element) {
if (element == null)
return false;
while (true) {
final Object[] expect = reference.get();
final int length = expect.length;
// determine if element is already in set
for (int i=length-1; i>=0; --i) {
if (expect[i].equals(element))
return false;
}
final Object[] update = new Object[length + 1];
System.arraycopy(expect, 0, update, 0, length);
update[length] = element;
if (reference.compareAndSet(expect, update))
return true;
}
}
/**
* Adds all the given elements to the set.
* Semantically this is the same as calling add() repeatedly,
* but the whole operation is made atomic.
*/
@Override
public boolean addAll(final Collection<? extends T> collection) {
if (collection == null || collection.isEmpty())
return false;
while (true) {
boolean modified = false;
final Object[] expect = reference.get();
int length = expect.length;
Object[] temp = new Object[collection.size() + length];
System.arraycopy(expect, 0, temp, 0, length);
ELoop: for (final Object element : collection) {
if (element == null)
continue;
for (int i=0; i<length; ++i) {
if (element.equals(temp[i])) {
modified |= temp[i] != element;
temp[i] = element;
continue ELoop;
}
}
temp[length++] = element;
modified = true;
}
// check if content did not change
if (!modified)
return false;
final Object[] update;
if (temp.length == length) {
update = temp;
} else {
update = new Object[length];
System.arraycopy(temp, 0, update, 0, length);
}
if (reference.compareAndSet(expect, update))
return true;
}
}
/**
* Removes an element from the set. Returns true if the element was removed.
*
* If element is NULL or not in the set, no change is made to the set and
* false is returned.
*/
@Override
public boolean remove(final Object element) {
if (element == null)
return false;
while (true) {
final Object[] expect = reference.get();
final int length = expect.length;
int i = length;
while (--i >= 0) {
if (expect[i].equals(element))
break;
}
if (i < 0)
return false;
final Object[] update;
if (length == 1) {
update = Primitives.EMPTY_OBJECT_ARRAY;
} else {
update = new Object[length - 1];
System.arraycopy(expect, 0, update, 0, i);
System.arraycopy(expect, i+1, update, i, length - i - 1);
}
if (reference.compareAndSet(expect, update))
return true;
}
}
/**
* Removes all entries from the set.
*/
@Override
public void clear() {
reference.set(Primitives.EMPTY_OBJECT_ARRAY);
}
/**
* Gets an estimate of how many elements are in the set
* (it's an estimate, as it only returns the current size,
* and that may change at any time).
*/
@Override
public int size() {
return reference.get().length;
}
@Override
public boolean isEmpty() {
return reference.get().length <= 0;
}
@SuppressWarnings("unchecked")
@Override
public Iterator<T> iterator() {
final Object[] array = reference.get();
return (Iterator<T>) ArrayIterator.get(array);
}
@Override
public Object[] toArray() {
final Object[] array = reference.get();
return Primitives.cloneArray(array);
}
@SuppressWarnings("unchecked")
@Override
public <U extends Object> U[] toArray(final U[] array) {
final Object[] content = reference.get();
final int length = content.length;
if (array.length < length) {
// Make a new array of array's runtime type, but with this set's contents:
return (U[]) Arrays.copyOf(content, length, array.getClass());
}
System.arraycopy(content, 0, array, 0, length);
if (array.length > length)
array[length] = null;
return array;
}
}
The answer to any deadlock is to acquire the same locks in the same order. You'll just have to figure out a way to do that.
In Java, I want to do something like this:
Object r = map.get(t);
if (r == null) {
r = create(); // creating r is an expensive operation.
map.put(t, r);
}
Now that snippet of code can be executed in a multithreaded environment.
map can be a ConcurrentHashMap.
But how do I make that logic atomic?
Please don't give me a trivial solution like a 'synchronized' block.
I'd expect this problem can be solved neatly once and for all.
It's been solved neatly by Guava.
Use CacheBuilder and call build with a CacheLoader. This will return a LoadingCache object. If you really need a Map implementation, you can call asMap().
There's also the older MapMaker with its makeComputingMap, but that's deprecated in favor of the CacheBuilder approach.
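A minimal sketch with CacheBuilder, assuming your expensive create() can take the key as input:
LoadingCache<Key, Value> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<Key, Value>() {
            @Override
            public Value load(Key key) {
                return create(key); // invoked at most once per key, even under contention
            }
        });

Value v = cache.getUnchecked(key);               // loads on first access, then caches
ConcurrentMap<Key, Value> view = cache.asMap();  // live Map view, if you need one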
Of course you can also implement it manually, but doing that correctly is nontrivial. Several aspects to consider are:
you want to avoid calling create twice with the same input
you want to wait for a current thread to finish creating but don't want to do that with an idle loop
you want to avoid synchronizing in the good case (i.e. element is already in the map).
if two create calls happen at the same time you want each caller to only wait for the one relevant to him.
Try:
value = concurrentMap.get(key);
if (value == null) {
    concurrentMap.putIfAbsent(key, new Value());
    value = concurrentMap.get(key);
}
return value;
Since Java 8, method ConcurrentMap.computeIfAbsent is what you are looking for:
equivalent to the following steps for this map, but atomic:
V oldValue = map.get(key);
if (oldValue == null) {
V newValue = mappingFunction.apply(key);
if (newValue != null) {
return map.putIfAbsent(key, newValue);
} else {
return null;
}
} else {
return oldValue;
}
The most common usage is to construct a new object serving as an initial mapped value or memoized result, which is what you are looking for, I think, as in:
Value v = map.computeIfAbsent(key, k -> new Value(f(k)));
I know this maybe isn't what you're looking for, but I'll include it for the sake of argument.
public Object ensureExistsInMap(Map map, Object t) {
Object r = map.get(t);
if (r != null) return r; // we know for sure it exists
synchronized (creationLock) {
// multiple threads might have come this far if r was null
// outside the synchronized block
r = map.get(t);
if (r != null) return r;
r = create();
map.put(t, r);
return r;
}
}
What you describe is basically the Multiton pattern with lazy initialization.
Here is an example using double-checked locking with modern Java locks:
private static Map<Object, Object> instances = new ConcurrentHashMap<Object, Object>();
private static Lock createLock = new ReentrantLock();
private Multiton() {}
public static Object getInstance(Object key) {
Object instance = instances.get(key);
if (instance == null) {
createLock.lock();
try {
instance = instances.get(key); // re-check under the lock
if (instance == null) {
instance = createInstance();
instances.put(key, instance);
}
} finally {
createLock.unlock();
}
}
return instance;
}
I think the solution is documented in Java Concurrency in Practice.
The trick is to use a Future instead of R as the value in the map.
Although I dislike this answer because it looks far, far too complex.
Here is the code:
public class Memoizer<A, V> implements Computable<A, V> {
private final ConcurrentMap<A, Future<V>> cache = new ConcurrentHashMap<A, Future<V>>();
private final Computable<A, V> c;
public Memoizer(Computable<A, V> c) { this.c = c; }
public V compute(final A arg) throws InterruptedException {
while (true) {
Future<V> f = cache.get(arg);
if (f == null) {
Callable<V> eval = new Callable<V>() {
public V call() throws InterruptedException {
return c.compute(arg);
}
};
FutureTask<V> ft = new FutureTask<V>(eval);
f = cache.putIfAbsent(arg, ft);
if (f == null) { f = ft; ft.run(); }
}
try {
return f.get();
} catch (CancellationException e) {
cache.remove(arg, f);
} catch (ExecutionException e) {
throw launderThrowable(e.getCause());
}
}
}
}
I have the following code:
for (String helpId : helpTipFragCache.getKeys())
{
List<HelpTopicFrag> value = helpTipFragCache.getValue(helpId);
helpTipFrags.put(helpId, value);
}
The helpTipFragCache has a mechanism to load the cache if values are needed and it is empty. The getKeys() method triggers this, and the cache is loaded when this is called. However, in the above case, I see varying behavior.
I first debugged it quickly (within Eclipse) to see if the cache was indeed populating. I stepped through, and the for loop was never entered (due to an empty iterator).
I then debugged it again (with the same code) and stepped into the getKeys() and analyzed the whole process there. It then did everything it was supposed to, the iterator had values to iterate over and there was peace in the console.
I have fixed the issue by changing the code to do this:
Set<String> helpIds = helpTipFragCache.getKeys();
helpIds = helpTipFragCache.getKeys();
for (String helpId : helpIds)
{
List<HelpTopicFrag> value = helpTipFragCache.getValue(helpId);
helpTipFrags.put(helpId, value);
}
Obviously the debugging triggered something to initialize or act differently; does anyone know what causes this? Basically, what is happening to create the iterator from the returned collection?
Some other pertinent information:
This code is executed on server startup (tomcat)
This code doesn't behave as expected when executed from an included jar, but does when it is in the same code base
The collection is a Set
EDIT
Additional Code:
public Set<String> getKeys() throws Exception
{
if (CACHE_TYPE.LOAD_ALL == cacheType)
{
//Fake a getValue call to make sure the cache is loaded
getValue("");
}
return Collections.unmodifiableSet(cache.keySet());
}
public final T getValue(String key, Object... singleValueArgs) throws Exception
{
T retVal = null;
if (notCaching())
{
if (cacheType == CACHE_TYPE.MODIFY_EXISTING_CACHE_AS_YOU_GO)
{
retVal = getSingleValue(key, null, singleValueArgs);
}
else
{
retVal = getSingleValue(key, singleValueArgs);
}
}
else
{
synchronized (cache)
{
if (needToLoadCache())
{
logger.debug("Need to load cache: " + getCacheName());
if (cacheType != CACHE_TYPE.MODIFY_EXISTING_CACHE_AS_YOU_GO)
{
Map<String, T> newCache = null;
if (cacheType != CACHE_TYPE.MODIFY_EXISTING_CACHE)
{
newCache = getNewCache();
}
else
{
newCache = cache;
}
loadCache(newCache);
cache = newCache;
}
lastUpdatedInMillis = System.currentTimeMillis();
forceLoadCache = false;
}
}
...//code in here does not execute for this example, simply gets a value that is already in the cache
}
return retVal;
}
And back to the original class (where the previous code was posted from):
@Override
protected void loadCache(
Map<String, List<HelpTopicFrag>> newCache)
throws Exception
{
Map<String, List<HelpTopicFrag>> _helpTipFrags = helpDAO.getHelpTopicFrags(getAppName(), _searchIds);
addDisplayModeToFrags(_helpTipFrags);
newCache.putAll(_helpTipFrags);
}
Above, a database call is made to get the values to be put in the cache.
The answer to
Basically, what is happening to create the iterator from the returned collection?
The for loop in your case treats the Set as an Iterable and uses an Iterator obtained by calling Iterable.iterator().
Set<A> as = ...;
for (A a : as) {
    doSth();
}
is basically equivalent to
Set<A> as = ...;
Iterator<A> hidden = as.iterator();
while (hidden.hasNext()) {
    A a = hidden.next();
    doSth();
}