I have a problem with deserialization in Java 11 that results in a HashMap with a key that can't be found. I would appreciate it if anyone with more knowledge about the issue could say whether my proposed workaround looks OK, or if there is something better I could do.
Consider the following contrived implementation (the relationships in the real problem are a bit more complex and hard to change):
public class Element implements Serializable {
private static final long serialVersionUID = 1L;
private final int id;
private final Map<Element, Integer> idFromElement = new HashMap<>();
public Element(int id) {
this.id = id;
}
public void addAll(Collection<Element> elements) {
elements.forEach(e -> idFromElement.put(e, e.id));
}
public Integer idFrom(Element element) {
return idFromElement.get(element);
}
@Override
public int hashCode() {
return id;
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (!(obj instanceof Element)) {
return false;
}
Element other = (Element) obj;
return this.id == other.id;
}
}
Then I create an instance that has a reference to itself and serialize and deserialize it:
public static void main(String[] args) {
List<Element> elements = Arrays.asList(new Element(111), new Element(222));
Element originalElement = elements.get(1);
originalElement.addAll(elements);
Storage<Element> storage = new Storage<>();
storage.serialize(originalElement);
Element retrievedElement = storage.deserialize();
if (retrievedElement.idFrom(retrievedElement) == 222) {
System.out.println("ok");
}
}
If I run this code in Java 8 the result is "ok"; if I run it in Java 11 the result is a NullPointerException, because retrievedElement.idFrom(retrievedElement) returns null and the comparison with 222 then fails to unbox it.
I put a breakpoint at HashMap.hash() and noticed that:
In Java 8, when idFromElement is being deserialized and Element(222) is being added to it, its id is 222, so I am able to find it later.
In Java 11, the id is not initialized (0 for int or null if I make it an Integer), so hash() is 0 when it's stored in the HashMap. Later, when I try to retrieve it, the id is 222, so idFromElement.get(element) returns null.
I understand that the sequence here is deserialize(Element(222)) -> deserialize(idFromElement) -> put unfinished Element(222) into Map. But, for some reason, in Java 8 id is already initialized when we get to the last step, while in Java 11 it is not.
The solution I came up with was to make idFromElement transient and write custom writeObject and readObject methods to force idFromElement to be deserialized after id:
...
transient private Map<Element, Integer> idFromElement = new HashMap<>();
...
private void writeObject(ObjectOutputStream output) throws IOException {
output.defaultWriteObject();
output.writeObject(idFromElement);
}
@SuppressWarnings("unchecked")
private void readObject(ObjectInputStream input) throws IOException, ClassNotFoundException {
input.defaultReadObject();
idFromElement = (HashMap<Element, Integer>) input.readObject();
}
The only reference I was able to find about the order during serialization/deserialization was this:
For serializable classes, the SC_SERIALIZABLE flag is set, the number of fields counts the number of serializable fields and is followed by a descriptor for each serializable field. The descriptors are written in canonical order. The descriptors for primitive typed fields are written first sorted by field name followed by descriptors for the object typed fields sorted by field name. The names are sorted using String.compareTo.
Which is the same in both Java 8 and Java 11 docs, and seems to imply that primitive typed fields should be written first, so I expected there would be no difference.
Implementation of Storage<T> included for completeness:
public class Storage<T> {
private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
public void serialize(T object) {
buffer.reset();
try (ObjectOutputStream objectOutputStream = new ObjectOutputStream(buffer)) {
objectOutputStream.writeObject(object);
objectOutputStream.flush();
} catch (Exception ioe) {
ioe.printStackTrace();
}
}
@SuppressWarnings("unchecked")
public T deserialize() {
ByteArrayInputStream byteArrayIS = new ByteArrayInputStream(buffer.toByteArray());
try (ObjectInputStream objectInputStream = new ObjectInputStream(byteArrayIS)) {
return (T) objectInputStream.readObject();
} catch (IOException | ClassNotFoundException e) {
e.printStackTrace();
}
return null;
}
}
As mentioned in the comments and encouraged by the asker, here are the parts of the code that changed between version 8 and version 11 that I assume to be the reason for the different behavior (based on reading and debugging).
The difference is in the ObjectInputStream class, in one of its core methods. This is the relevant part of the implementation in Java 8:
private void readSerialData(Object obj, ObjectStreamClass desc)
throws IOException
{
ObjectStreamClass.ClassDataSlot[] slots = desc.getClassDataLayout();
for (int i = 0; i < slots.length; i++) {
ObjectStreamClass slotDesc = slots[i].desc;
if (slots[i].hasData) {
if (obj == null || handles.lookupException(passHandle) != null) {
...
} else {
defaultReadFields(obj, slotDesc);
}
...
}
}
}
/**
* Reads in values of serializable fields declared by given class
* descriptor. If obj is non-null, sets field values in obj. Expects that
* passHandle is set to obj's handle before this method is called.
*/
private void defaultReadFields(Object obj, ObjectStreamClass desc)
throws IOException
{
Class<?> cl = desc.forClass();
if (cl != null && obj != null && !cl.isInstance(obj)) {
throw new ClassCastException();
}
int primDataSize = desc.getPrimDataSize();
if (primVals == null || primVals.length < primDataSize) {
primVals = new byte[primDataSize];
}
bin.readFully(primVals, 0, primDataSize, false);
if (obj != null) {
desc.setPrimFieldValues(obj, primVals);
}
int objHandle = passHandle;
ObjectStreamField[] fields = desc.getFields(false);
Object[] objVals = new Object[desc.getNumObjFields()];
int numPrimFields = fields.length - objVals.length;
for (int i = 0; i < objVals.length; i++) {
ObjectStreamField f = fields[numPrimFields + i];
objVals[i] = readObject0(f.isUnshared());
if (f.getField() != null) {
handles.markDependency(objHandle, passHandle);
}
}
if (obj != null) {
desc.setObjFieldValues(obj, objVals);
}
passHandle = objHandle;
}
...
The method calls defaultReadFields, which reads the values of the fields. As mentioned in the quoted part of the specification, it first handles the field descriptors of primitive fields. The values that are read for these fields are set immediately after reading them, with
desc.setPrimFieldValues(obj, primVals);
and importantly: This happens before it calls readObject0 for each of the non-primitive fields.
In contrast to that, here is the relevant part of the implementation of Java 11:
private void readSerialData(Object obj, ObjectStreamClass desc)
throws IOException
{
ObjectStreamClass.ClassDataSlot[] slots = desc.getClassDataLayout();
...
for (int i = 0; i < slots.length; i++) {
ObjectStreamClass slotDesc = slots[i].desc;
if (slots[i].hasData) {
if (obj == null || handles.lookupException(passHandle) != null) {
...
} else {
FieldValues vals = defaultReadFields(obj, slotDesc);
if (slotValues != null) {
slotValues[i] = vals;
} else if (obj != null) {
defaultCheckFieldValues(obj, slotDesc, vals);
defaultSetFieldValues(obj, slotDesc, vals);
}
}
...
}
}
...
}
private class FieldValues {
final byte[] primValues;
final Object[] objValues;
FieldValues(byte[] primValues, Object[] objValues) {
this.primValues = primValues;
this.objValues = objValues;
}
}
/**
* Reads in values of serializable fields declared by given class
* descriptor. Expects that passHandle is set to obj's handle before this
* method is called.
*/
private FieldValues defaultReadFields(Object obj, ObjectStreamClass desc)
throws IOException
{
Class<?> cl = desc.forClass();
if (cl != null && obj != null && !cl.isInstance(obj)) {
throw new ClassCastException();
}
byte[] primVals = null;
int primDataSize = desc.getPrimDataSize();
if (primDataSize > 0) {
primVals = new byte[primDataSize];
bin.readFully(primVals, 0, primDataSize, false);
}
Object[] objVals = null;
int numObjFields = desc.getNumObjFields();
if (numObjFields > 0) {
int objHandle = passHandle;
ObjectStreamField[] fields = desc.getFields(false);
objVals = new Object[numObjFields];
int numPrimFields = fields.length - objVals.length;
for (int i = 0; i < objVals.length; i++) {
ObjectStreamField f = fields[numPrimFields + i];
objVals[i] = readObject0(f.isUnshared());
if (f.getField() != null) {
handles.markDependency(objHandle, passHandle);
}
}
passHandle = objHandle;
}
return new FieldValues(primVals, objVals);
}
...
An inner class, FieldValues, has been introduced. The defaultReadFields method now only reads the field values, and returns them as a FieldValues object. Afterwards, the returned values are assigned to the fields, by passing this FieldValues object to a newly introduced defaultSetFieldValues method, which internally does the desc.setPrimFieldValues(obj, primValues) call that originally was done immediately after the primitive values had been read.
To emphasize this again: The defaultReadFields method first reads the primitive field values. Then it reads the non-primitive field values. But it does so before the primitive field values have been set!
This new process interferes with the deserialization method of HashMap. Again, the relevant part is shown here:
private void readObject(java.io.ObjectInputStream s)
throws IOException, ClassNotFoundException {
...
int mappings = s.readInt(); // Read number of mappings (size)
if (mappings < 0)
throw new InvalidObjectException("Illegal mappings count: " +
mappings);
else if (mappings > 0) { // (if zero, use defaults)
...
Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
table = tab;
// Read the keys and values, and put the mappings in the HashMap
for (int i = 0; i < mappings; i++) {
@SuppressWarnings("unchecked")
K key = (K) s.readObject();
@SuppressWarnings("unchecked")
V value = (V) s.readObject();
putVal(hash(key), key, value, false, false);
}
}
}
It reads the key- and value objects, one by one, and puts them into the table, by computing the hash of the key and using the internal putVal method. This is the same method that is used when manually populating the map (i.e. when it is filled programmatically, and not deserialized).
Holger already gave a hint in the comments why this is necessary: There is no guarantee that the hash code of the deserialized keys will be the same as before the serialization. So blindly "restoring the original array" could basically lead to objects being stored in the table under a wrong hash code.
But here, the opposite happens: The keys (i.e. the objects of type Element) are deserialized. They contain the idFromElement map, which in turn contains the Element objects. These elements are put into the map using the putVal method, while the Element objects are still in the process of being deserialized. But due to the changed order in ObjectInputStream, this is done before the primitive value of the id field (which determines the hash code) has been set. So the objects are stored using hash code 0, and later the id value is assigned (e.g. 222), causing the objects to end up in the table under a hash code that they actually no longer have.
Now, on a more abstract level, this was already clear from the observed behavior. Therefore, the original question was not "What is going on here???", but
if my proposed workaround looks ok, or if there is something better I could do.
I think that the workaround could be OK, but would hesitate to say that nothing could go wrong there. It's complicated.
As for the second part: Something better could be to file a bug report in the Java Bug Database, because the new behavior is clearly broken. It may be hard to point out a specification that is violated, but the deserialized map is certainly inconsistent, and that is not acceptable.
(Yes, I could also file a bug report, but think that more research might be necessary in order to make sure it is written properly, not a duplicate, etc....)
I want to add one possible solution to the excellent answers above:
Instead of making idFromElement transient and forcing the HashMap to be deserialized after the id, you could also make id not final and deserialize it before calling defaultReadObject().
This makes the solution more scalable, since there could be other classes/objects that use the hashCode and equals methods or the id, leading to cycles similar to the one you described.
It might also lead to a more generic solution of the problem, although this is not yet completely thought out: All the information that is used in the deserialization of other objects needs to be deserialized before defaultReadObject() is called. That might be the id, but also other fields that your class exposes.
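For illustration, here is a sketch of that alternative (it mirrors the question's Element; the only changes are that id is no longer final and is written explicitly ahead of the default field data, then read back before defaultReadObject(), so the hash-relevant field is already set when the nested HashMap deserializes its keys):

```java
import java.io.*;
import java.util.*;

class Element implements Serializable {
    private static final long serialVersionUID = 1L;
    private int id; // not final, so readObject can assign it early
    private final Map<Element, Integer> idFromElement = new HashMap<>();

    Element(int id) { this.id = id; }

    void addAll(Collection<Element> elements) {
        elements.forEach(e -> idFromElement.put(e, e.id));
    }

    Integer idFrom(Element element) { return idFromElement.get(element); }

    @Override public int hashCode() { return id; }

    @Override public boolean equals(Object obj) {
        return this == obj || (obj instanceof Element && ((Element) obj).id == id);
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.writeInt(id);          // write id first, ahead of the default data
        out.defaultWriteObject();
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        id = in.readInt();         // restore id before the map field is read
        in.defaultReadObject();
    }
}
```

The write and read sides must stay symmetric: the extra readInt() consumes exactly the int that writeObject emitted before the default data.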
Related
Below is a code snippet out of an AVL-tree implementation. It worked fine back in the days, but now it doesn't work any longer. The cause seems to be casting Object to Integer.
So the Avl structure handles Object as data, and the user (in this case main() ) does the casting. What I wanted to achieve was a generic AVL-tree with comparable objects. Actually I insert Objects alongside with a key to be able to distinguish what to sort for. They are internally put in a local class called KeyData.
Here is the code:
// private stuff above - not interesting for problem
// Public AVL tree methods
public void insert(Object thedata, String thekey) {
internal_insert(new KeyData(thedata, thekey));
}
public Object find(String key) {
Object ret = null;
KeyData x = new KeyData(null, key);
if(data != null) {
if(x.compareTo(data) == 0)
ret = data;
else if(x.compareTo(data) < 0) {
if(left != null)
ret = left.find(key);
} else {
if(right != null)
ret = right.find(key);
}
}
return ret;
}
public Object[] inorder() {
Vector<Object> v = new Vector<Object>();
iinorder(v);
return v.toArray();
}
public static void main(String[] args) {
Avl test = new Avl();
test.insert(Integer.valueOf(1090), "1");
test.insert(Integer.valueOf(343452), "2");
test.insert(Integer.valueOf(3345), "3");
//Object t2 = test.find("2");
Object lookfor = test.find("2");
Integer t2 = (Integer) lookfor; // Line 164
System.out.println("Got: " + t2.toString());
}
The outcome is like follows:
$ java Avl
Exception in thread "main" java.lang.ClassCastException:
class Avl$KeyData cannot be cast to class java.lang.Integer (Avl$KeyData is in unnamed module of loader 'app'; java.lang.Integer is in module java.base of loader 'bootstrap')
at Avl.main(Avl.java:164)
...so what's the story?
...so what's the story?
The short version is that your find method doesn't return an Integer value. So you can't cast it to an Integer.
It worked fine back in the days, but now it doesn't work any longer.
Well, you must have changed something significant in your code between then and now. (Hint: the Java language or its implementations have not changed in ways that would cause this!)
So let's take a look at your find method.
public Object find(String key) {
Object ret = null;
KeyData x = new KeyData(null, key);
if (data != null) {
if (x.compareTo(data) == 0)
ret = data;
else if (x.compareTo(data) < 0) {
if (left != null)
ret = left.find(key);
} else {
if (right != null)
ret = right.find(key);
}
}
return ret;
}
First observation: the original indentation of your method was a mess. Bad indentation makes your code hard for everyone to read ... and understand. I have fixed it.
So the find method recursively searches a tree and, when it finds a match, returns whatever data is. I can't see the declaration of the data field, but the evidence is that it is an instance of Avl.KeyData. (Which makes sense ... because you compare data with x, which is a KeyData instance.)
Anyhow, that explains why the result isn't an Integer.
You haven't shown us the KeyData class, but my guess is that it has a value field that is / should be an Integer. That's what you should return. The contents of the found KeyData object's value field.
But the big problem here is your use of Object. As @NomadMaker commented, this should really be a generic type with a type parameter that will be the type of the values in the tree. Then you don't have to use a type cast in main ... and the compiler would have told you that it was incorrect for find to return a KeyData<V> instead of a V.
(There are a few other problems with your implementation ... but this is not a "clinic".)
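To make the generics suggestion concrete, here is a minimal sketch (no AVL rebalancing, and all names are hypothetical): a tree keyed by String that stores values of type V, so find() returns V and no cast is needed at the call site.

```java
// Minimal generic binary search tree keyed by String (balancing omitted).
class SimpleTree<V> {
    private String key;
    private V value;
    private SimpleTree<V> left, right;

    void insert(V theValue, String theKey) {
        if (key == null) { key = theKey; value = theValue; return; }
        int c = theKey.compareTo(key);
        if (c == 0) {
            value = theValue;                       // replace on duplicate key
        } else if (c < 0) {
            if (left == null) left = new SimpleTree<>();
            left.insert(theValue, theKey);
        } else {
            if (right == null) right = new SimpleTree<>();
            right.insert(theValue, theKey);
        }
    }

    V find(String theKey) {
        if (key == null) return null;
        int c = theKey.compareTo(key);
        if (c == 0) return value;                   // the stored value, not the node
        if (c < 0) return left == null ? null : left.find(theKey);
        return right == null ? null : right.find(theKey);
    }
}
```

With `SimpleTree<Integer>`, the caller writes `Integer t2 = tree.find("2");` and the ClassCastException cannot occur.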
CompareObj is a Java class. It consists of three attributes: String rowKey, Integer hitCount, Long recency.
public CompareObj(String string, Integer i) {
this.rowKey = string;
this.hitCount = i%10;
this.recency = i * 1000L;
}
Now I created a treeMap
Comparator<CompareObj> comp1 = (e1,e2) -> e1.getHitCount().compareTo(e2.getHitCount());
Comparator<CompareObj> comp2 = (e1,e2) -> e2.getRecency().compareTo(e1.getRecency());
Comparator<CompareObj> result = comp1.thenComparing(comp2);
TreeMap<CompareObj, CompareObj> tM = new TreeMap<CompareObj, CompareObj>(result);
for(int i=0;i<=1000;i++)
{
CompareObj cO = new CompareObj("A"+i, i);
tM.put(cO,cO);
}
for(int i=0;i<=1000;i++)
{
CompareObj cO = new CompareObj("A"+i, i);
CompareObj values = tM.get(cO);
System.out.println(values.getRowKey()); // Line 28: get Null Pointer Exception
}
Also, I override hashCode and equals. Still I get a NullPointerException.
@Override
public int hashCode() {
return Objects.hash(getRowKey());
}
@Override
public boolean equals(Object obj) {
if(this==obj) return true;
if(!(obj instanceof CompareObj)) return false;
CompareObj compareObj = (CompareObj) obj;
return Objects.equals(this.getRowKey(), compareObj.getRowKey());
}
Here, when I try to retrieve the value from the TreeMap, I get a NullPointerException at the line mentioned. How do I solve this?
If I want to implement compareTo() of the Comparable interface, how should I implement it when there are multiple sort conditions?
The first thing to understand, is the NullPointerException. If you get that exception on the exact line
System.out.println(values.getRowKey());
then either System.out or values is null. Since we can preclude System.out being null, it’s the values variable, which contains the result of get and can be null if the lookup failed.
Since you are initializing the TreeMap with a custom Comparator, that Comparator determines equality. Your Comparator is based on the properties getHitCount() and getRecency() which must match, which implies that when the lookup fails, the map doesn’t contain an object having the same values as reported by these two methods.
You show that you construct objects with the same values, but not the code of these getters. There must be an inconsistency. As Misha pointed out, your posted code can’t be the code you ran when getting the exception, therefore we can’t help you further (unless you post the real code you ran).
Is it possible to wrap following code in a reusable function?
EDIT: this is just an example, I want a working solution for ALL recursion depths
what I want is that following code is generated:
if (o == null ||
    o.getSubObject() == null ||
    o.getSubObject().getSubSubObject() == null /*||
    ... */)
    return defaultValue;
return o.getSubObject().getSubSubObject()/*...*/.getDesiredValue();
by calling something like
Object defaultValue = null;
Object result = NullSafeCall(o.getSubObject().getSubObject()/*...*/.getDesiredValue(), defaultValue);
The second code block is just an idea; I don't care what it looks like. All I want is that, if desired, I can avoid all the null checks before calling a deeper function...
Injection could probably do this, but is there no other/easier solution? I have never looked at injection before...
EDIT2: example in another language: http://groovy.codehaus.org/Operators#Operators-SafeNavigationOperator
Not really, any code you would write this way would look horrible and/or use very slow reflection. Unless you use an actual Java preprocessor that can understand and change the code you've written.
A better (but associated with quite a bit of refactoring) approach would be to make sure that the values in question cannot possibly be null. For example, you could modify the individual accessors (getSubObject(), getDesiredValue()) to never return null in the first place: make them return default values. The accessors on the default values return default values in turn.
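A minimal sketch of that refactoring (all names hypothetical): accessors return a shared "empty" instance instead of null, so a chained call can never throw a NullPointerException, and the default surfaces at the end of the chain.

```java
// Null-object pattern: getSubObject() never returns null.
class Inner {
    static final Inner EMPTY = new Inner(null);
    private final Integer desired;

    Inner(Integer desired) { this.desired = desired; }

    Integer getDesiredValue(Integer defaultValue) {
        return desired != null ? desired : defaultValue;
    }
}

class Outer {
    private final Inner inner;

    Outer(Inner inner) { this.inner = inner; }

    Inner getSubObject() {
        return inner != null ? inner : Inner.EMPTY; // never null
    }
}
```

A call like `new Outer(null).getSubObject().getDesiredValue(5)` then simply yields the default instead of throwing.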
Java 8 gets you closest to your syntax with decent performance, I suspect:
// Evaluate with default 5 if anything returns null.
int result = Optional.eval(5, o, x->x.getSubObject(), x->x.getDesiredValue());
This can be done with this utility class;
class Optional {
public static <T, Tdef, T1> Tdef eval(Tdef def, T input, Function<T,T1> fn1,
Function<T1, Tdef> fn2)
{
if(input == null) return def;
T1 res1 = fn1.apply(input);
if(res1 == null) return def;
return fn2.apply(res1);
}
}
Sadly, you'll need a separate eval() defined per number of method calls in the chain, so you may want to define a few, but compile time type safe and reusable with just about any calls/types.
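For comparison, the JDK's own java.util.Optional (not the helper class above, which happens to share the name) expresses the same chain with ofNullable()/map()/orElse(); once any step yields null, the remaining map() calls are skipped. MyObject and SubObject below are hypothetical stand-ins for the question's types.

```java
import java.util.Optional;

class MyObject {
    private final SubObject sub;
    MyObject(SubObject sub) { this.sub = sub; }
    SubObject getSubObject() { return sub; }
}

class SubObject {
    private final Integer desired;
    SubObject(Integer desired) { this.desired = desired; }
    Integer getDesiredValue() { return desired; }
}

class NullSafeDemo {
    // Null-safe traversal: each map() is a no-op once the chain hits null.
    static int desiredOr(MyObject o, int def) {
        return Optional.ofNullable(o)
                .map(MyObject::getSubObject)
                .map(SubObject::getDesiredValue)
                .orElse(def);
    }
}
```

Unlike the custom eval() overloads, this needs no extra utility method per chain length; each additional step is just another map() call.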
You can do something like this
public static Object NullSafeCall(MyObject o,Object defaultValue){
if ( o == null || o.getSubObject() == null)
{
return defaultValue;
}
else
{
return o.getSubObject().getDesiredValue();
}
}
Now you can call this method as follows
Object result = NullSafeCall(o, defaultValue);
I would suggest just replacing
Object result = NullSafeCall(o.getSubObject().getDesiredValue(), defaultValue);
with
Object result = (o == null || o.getSubObject() == null) ? defaultValue : o.getSubObject().getDesiredValue();
Create a method only if you can reuse it.
What you want is not possible. It is essential to understand that, with this syntax: Object result = NullSafeCall(o.getSubObject().getSubObject() ...); the part o.getSubObject().getSubObject() is evaluated before any control passes to the function/method, thus throwing the exception.
It is required to have some type of context before executing such code. The closest to this I could think of can be done using anonymous inner classes, like the example below:
// intended to be implemented by an anonymous inner class
interface NullSafeOperation<T> {
public T executeSafely();
};
// our executor that executes operations safely
public static class NullSafeExecutor<T> {
public NullSafeExecutor() {}
public T execute(T defaultValue, NullSafeOperation<T> nso) {
T result = defaultValue;
try {
result = nso.executeSafely();
} catch(NullPointerException e) {
// ignore
}
return result;
}
// utility method to create a new instance and execute in one step
public static <T> T executeOperation(T defaultValue, NullSafeOperation<T> nso) {
NullSafeExecutor<T> e = new NullSafeExecutor<T>();
T result = e.execute(defaultValue, nso);
return result;
}
}
public static void main(String[] args) {
final String aNullString = null;
String result = NullSafeExecutor.executeOperation("MyDefault", new NullSafeOperation<String>() {
@Override
public String executeSafely() {
// trying to call a method on a null string
// it will throw NullPointerException but it will be catched by the executor
return aNullString.trim();
}
});
System.out.println("Output = " + result); // prints: Output = MyDefault
}
I have two lists of objects with data; the first one is the principal entity and the second is the dependent entity.
In addition, I have a key table that relates the principal and dependent entity objects.
In the first for statement I take one instance of the principal entity, then I loop over every instance of the second entity trying to find a match between them (I think a quadratic problem…); if a match is found, I update the principal entity with the reference object.
The following code works, but when I checked it from a performance perspective it turned out to be inefficient.
Do you have ideas/tips on how to improve the performance of this code?
In the JVM monitor I found that EntityDataCreator.getInstanceValue is a hot spot.
This is the method start
// start with the principal entity
for (Object principalEntityInstance : principalEntityInstances) {
List<Object> genObject = null;
Object refObject = createRefObj(dependentMultiplicity);
// check entries in dependent entity
for (Object dependentEntityInstance : toEntityInstances) {
boolean matches = true;
for (String[] prop : propertiesMappings) {
// Get properties related keys
String fromProp = prop[0];
String toProp = prop[1];
Object fromValue = EntityDataCreator.getInstanceValue(fromProp, principalEntityInstance);
Object toValue = EntityDataCreator.getInstanceValue(toProp, dependentEntityInstance);
if (fromValue != null && toValue != null) {
if (!fromValue.equals(toValue)) {
matches = false;
break;
}
}
}
if (matches) {
// all properties match
if (refObject instanceof List) {
genObject = (List<Object>) refObject;
genObject.add(dependentEntityInstance);
refObject = genObject;
} else {
refObject = dependentEntityInstance;
break;
}
}
}
if (refObject != null) {
EntityDataCreator.createMemberValue(principalEntityInstance, navigationPropName, refObject);
}
}
public static Object getInstanceValue(String property, Object entityInstance)
        throws NoSuchFieldException, IllegalAccessException {
    Class<?> entityClass = entityInstance.getClass();
    Field field = entityClass.getDeclaredField(property);
    field.setAccessible(true);
    Object value = field.get(entityInstance);
    field.setAccessible(false);
    return value;
}
My guess would be that your best bet is to go through both lists once, prepare all the data you need in hash tables, then do one iteration. This way, your problem becomes N+M instead of N*M.
edit
Map<String,List<Object>> principalMap = new HashMap<String,List<Object>>();
for (Object principalEntityInstance : principalEntityInstances) {
List<String> keys = getKeysFor(principalEntityInstance);
for(String key : keys) {
List<Object> l = principalMap.get(key);
if(l==null) {
l = new ArrayList<Object>();
principalMap.put(key,l);
}
l.add(principalEntityInstance);
}
}
Then do the same for dependentEntityInstance; this way, your searches will be much faster.
I might be misunderstanding your question, but I would suggest defining equals and hashCode methods for your entities, so that you can leverage all the goodness that Java already has for searching and matching entities.
When at all possible, rely on Java's infrastructure; Sun/Oracle spent a long time making it really fast.
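A compact illustration of the N+M hash join the answer above describes: index the dependent side by its join key once, then probe that index for each principal instead of scanning the whole dependent list. Plain Strings stand in for the mapped property values.

```java
import java.util.*;

class HashJoinSketch {
    // Build the index once: join key -> all dependent payloads with that key.
    static Map<String, List<String>> indexByKey(List<String[]> rows) {
        Map<String, List<String>> index = new HashMap<>();
        for (String[] row : rows) {
            // row[0] is the join key, row[1] the payload
            index.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row[1]);
        }
        return index;
    }

    // Probe is O(1) per principal, instead of a scan over all dependents.
    static List<String> matchesFor(String key, Map<String, List<String>> index) {
        return index.getOrDefault(key, Collections.emptyList());
    }
}
```

In the question's terms: build the index from toEntityInstances keyed by the mapped property values, then replace the inner for loop with one index lookup per principalEntityInstance.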
I have the following code:
for (String helpId : helpTipFragCache.getKeys())
{
List<HelpTopicFrag> value = helpTipFragCache.getValue(helpId);
helpTipFrags.put(helpId, value);
}
The helpTipFragCache has a mechanism to load the cache if values are needed and it is empty. The getKeys() method triggers this, and the cache is loaded when it is called. However, in the above case I see varying behavior.
I first debugged it quickly (within Eclipse) to see if the cache was indeed populating. I stepped through, and the for loop was never entered (due to an empty iterator).
I then debugged it again (with the same code) and stepped into getKeys() and analyzed the whole process there. It then did everything it was supposed to: the iterator had values to iterate over and there was peace in the console.
I have fixed the issue by changing the code to do this:
Set<String> helpIds = helpTipFragCache.getKeys();
helpIds = helpTipFragCache.getKeys();
for (String helpId : helpIds)
{
List<HelpTopicFrag> value = helpTipFragCache.getValue(helpId);
helpTipFrags.put(helpId, value);
}
Obviously the debugging triggered something to initialize or act differently; does anyone know what causes this? Basically, what is happening to create the iterator from the returned collection?
Some other pertinent information:
This code is executed on server startup (tomcat)
This code doesn't behave as expected when executed from an included jar, but does when it is in the same code base
The collection is a Set
EDIT
Additional Code:
public Set<String> getKeys() throws Exception
{
if (CACHE_TYPE.LOAD_ALL == cacheType)
{
//Fake a getValue call to make sure the cache is loaded
getValue("");
}
return Collections.unmodifiableSet(cache.keySet());
}
public final T getValue(String key, Object... singleValueArgs) throws Exception
{
T retVal = null;
if (notCaching())
{
if (cacheType == CACHE_TYPE.MODIFY_EXISTING_CACHE_AS_YOU_GO)
{
retVal = getSingleValue(key, null, singleValueArgs);
}
else
{
retVal = getSingleValue(key, singleValueArgs);
}
}
else
{
synchronized (cache)
{
if (needToLoadCache())
{
logger.debug("Need to load cache: " + getCacheName());
if (cacheType != CACHE_TYPE.MODIFY_EXISTING_CACHE_AS_YOU_GO)
{
Map<String, T> newCache = null;
if (cacheType != CACHE_TYPE.MODIFY_EXISTING_CACHE)
{
newCache = getNewCache();
}
else
{
newCache = cache;
}
loadCache(newCache);
cache = newCache;
}
lastUpdatedInMillis = System.currentTimeMillis();
forceLoadCache = false;
}
}
...//code in here does not execute for this example, simply gets a value that is already in the cache
}
return retVal;
}
And back to the original class (where the previous code was posted from):
#Override
protected void loadCache(
Map<String, List<HelpTopicFrag>> newCache)
throws Exception
{
Map<String, List<HelpTopicFrag>> _helpTipFrags = helpDAO.getHelpTopicFrags(getAppName(), _searchIds);
addDisplayModeToFrags(_helpTipFrags);
newCache.putAll(_helpTipFrags);
}
Above, a database call is made to get the values to be put in the cache.
The answer to
Basically, what is happening to create the iterator from the returned collection?
The for loop in your case treats Set as an Iterable and uses an Iterator obtained by calling Iterable.iterator().
Set<A> as = ...;
for (A a : as) {
    doSth();
}
is basically equivalent to
Set<A> as = ...;
Iterator<A> hidden = as.iterator();
while (hidden.hasNext()) {
    A a = hidden.next();
    doSth();
}