I have the following code:
for (String helpId : helpTipFragCache.getKeys())
{
List<HelpTopicFrag> value = helpTipFragCache.getValue(helpId);
helpTipFrags.put(helpId, value);
}
The helpTipFragCache has a mechanism to load the cache if values are needed and it is empty. The getKeys() method triggers this, and the cache is loaded when it is called. However, in the above case I see varying behavior.
I first debugged it quickly to see if the cache was indeed populating (within Eclipse). I stepped through, and the for loop was never entered (due to an empty iterator).
I then debugged it again (with the same code), stepped into getKeys(), and analyzed the whole process there. This time it did everything it was supposed to: the iterator had values to iterate over and there was peace in the console.
I have fixed the issue by changing the code to do this:
Set<String> helpIds = helpTipFragCache.getKeys();
helpIds = helpTipFragCache.getKeys();
for (String helpId : helpIds)
{
List<HelpTopicFrag> value = helpTipFragCache.getValue(helpId);
helpTipFrags.put(helpId, value);
}
Obviously the debugging triggered something to initialize or act differently; does anyone know what causes this? Basically, what is happening to create the iterator from the returned collection?
Some other pertinent information:
This code is executed on server startup (tomcat)
This code doesn't behave as expected when executed from an included jar, but does when it is in the same code base
The collection is a Set
EDIT
Additional Code:
public Set<String> getKeys() throws Exception
{
if (CACHE_TYPE.LOAD_ALL == cacheType)
{
//Fake a getValue call to make sure the cache is loaded
getValue("");
}
return Collections.unmodifiableSet(cache.keySet());
}
public final T getValue(String key, Object... singleValueArgs) throws Exception
{
T retVal = null;
if (notCaching())
{
if (cacheType == CACHE_TYPE.MODIFY_EXISTING_CACHE_AS_YOU_GO)
{
retVal = getSingleValue(key, null, singleValueArgs);
}
else
{
retVal = getSingleValue(key, singleValueArgs);
}
}
else
{
synchronized (cache)
{
if (needToLoadCache())
{
logger.debug("Need to load cache: " + getCacheName());
if (cacheType != CACHE_TYPE.MODIFY_EXISTING_CACHE_AS_YOU_GO)
{
Map<String, T> newCache = null;
if (cacheType != CACHE_TYPE.MODIFY_EXISTING_CACHE)
{
newCache = getNewCache();
}
else
{
newCache = cache;
}
loadCache(newCache);
cache = newCache;
}
lastUpdatedInMillis = System.currentTimeMillis();
forceLoadCache = false;
}
}
...//code in here does not execute for this example, simply gets a value that is already in the cache
}
return retVal;
}
And back to the original class (where the previous code was posted from):
@Override
protected void loadCache(
Map<String, List<HelpTopicFrag>> newCache)
throws Exception
{
Map<String, List<HelpTopicFrag>> _helpTipFrags = helpDAO.getHelpTopicFrags(getAppName(), _searchIds);
addDisplayModeToFrags(_helpTipFrags);
newCache.putAll(_helpTipFrags);
}
Above, a database call is made to get the values to be put in the cache.
The answer to
Basically, what is happening to create the iterator from the returned collection?
The for loop in your case treats Set as Iterable and uses an Iterator obtained by calling Iterable.iterator().
Set<A> as = ...;
for (A a : as) {
    doSth(a);
}
is basically equivalent to
Set<A> as = ...;
Iterator<A> hidden = as.iterator();
while (hidden.hasNext()) {
    A a = hidden.next();
    doSth(a);
}
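Note that the expression after the colon is evaluated exactly once, before the first iteration. Applied to your code, the desugared form is effectively this (a sketch reusing the names from the question):
Iterator<String> hidden = helpTipFragCache.getKeys().iterator();
while (hidden.hasNext()) {
    String helpId = hidden.next();
    List<HelpTopicFrag> value = helpTipFragCache.getValue(helpId);
    helpTipFrags.put(helpId, value);
}
So getKeys() itself is called only once; whatever the returned set contains at that moment is all the loop will ever see.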
Related
I have a problem with deserialization in Java 11 that results in a HashMap with a key that can't be found. I would appreciate it if anyone with more knowledge about the issue could say whether my proposed workaround looks ok, or if there is something better I could do.
Consider the following contrived implementation (the relationships in the real problem are a bit more complex and hard to change):
public class Element implements Serializable {
private static final long serialVersionUID = 1L;
private final int id;
private final Map<Element, Integer> idFromElement = new HashMap<>();
public Element(int id) {
this.id = id;
}
public void addAll(Collection<Element> elements) {
elements.forEach(e -> idFromElement.put(e, e.id));
}
public Integer idFrom(Element element) {
return idFromElement.get(element);
}
@Override
public int hashCode() {
return id;
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (!(obj instanceof Element)) {
return false;
}
Element other = (Element) obj;
return this.id == other.id;
}
}
Then I create an instance that has a reference to itself and serialize and deserialize it:
public static void main(String[] args) {
List<Element> elements = Arrays.asList(new Element(111), new Element(222));
Element originalElement = elements.get(1);
originalElement.addAll(elements);
Storage<Element> storage = new Storage<>();
storage.serialize(originalElement);
Element retrievedElement = storage.deserialize();
if (retrievedElement.idFrom(retrievedElement) == 222) {
System.out.println("ok");
}
}
If I run this code in Java 8 the result is "ok", if I run it in Java 11 the result is a NullPointerException because retrievedElement.idFrom(retrievedElement) returns null.
I put a breakpoint at HashMap.hash() and noticed that:
In Java 8, when idFromElement is being deserialized and Element(222) is being added to it, its id is 222, so I am able to find it later.
In Java 11, the id is not initialized (0 for int or null if I make it an Integer), so hash() is 0 when it's stored in the HashMap. Later, when I try to retrieve it, the id is 222, so idFromElement.get(element) returns null.
I understand that the sequence here is deserialize(Element(222)) -> deserialize(idFromElement) -> put unfinished Element(222) into Map. But, for some reason, in Java 8 id is already initialized when we get to the last step, while in Java 11 it is not.
The solution I came up with was to make idFromElement transient and write custom writeObject and readObject methods to force idFromElement to be deserialized after id:
...
transient private Map<Element, Integer> idFromElement = new HashMap<>();
...
private void writeObject(ObjectOutputStream output) throws IOException {
output.defaultWriteObject();
output.writeObject(idFromElement);
}
@SuppressWarnings("unchecked")
private void readObject(ObjectInputStream input) throws IOException, ClassNotFoundException {
input.defaultReadObject();
idFromElement = (HashMap<Element, Integer>) input.readObject();
}
The only reference I was able to find about the order during serialization/deserialization was this:
For serializable classes, the SC_SERIALIZABLE flag is set, the number of fields counts the number of serializable fields and is followed by a descriptor for each serializable field. The descriptors are written in canonical order. The descriptors for primitive typed fields are written first sorted by field name followed by descriptors for the object typed fields sorted by field name. The names are sorted using String.compareTo.
Which is the same in both Java 8 and Java 11 docs, and seems to imply that primitive typed fields should be written first, so I expected there would be no difference.
Implementation of Storage<T> included for completeness:
public class Storage<T> {
private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
public void serialize(T object) {
buffer.reset();
try (ObjectOutputStream objectOutputStream = new ObjectOutputStream(buffer)) {
objectOutputStream.writeObject(object);
objectOutputStream.flush();
} catch (Exception ioe) {
ioe.printStackTrace();
}
}
@SuppressWarnings("unchecked")
public T deserialize() {
ByteArrayInputStream byteArrayIS = new ByteArrayInputStream(buffer.toByteArray());
try (ObjectInputStream objectInputStream = new ObjectInputStream(byteArrayIS)) {
return (T) objectInputStream.readObject();
} catch (IOException | ClassNotFoundException e) {
e.printStackTrace();
}
return null;
}
}
As mentioned in the comments and encouraged by the asker, here are the parts of the code that changed between version 8 and version 11 that I assume to be the reason for the different behavior (based on reading and debugging).
The difference is in the ObjectInputStream class, in one of its core methods. This is the relevant part of the implementation in Java 8:
private void readSerialData(Object obj, ObjectStreamClass desc)
throws IOException
{
ObjectStreamClass.ClassDataSlot[] slots = desc.getClassDataLayout();
for (int i = 0; i < slots.length; i++) {
ObjectStreamClass slotDesc = slots[i].desc;
if (slots[i].hasData) {
if (obj == null || handles.lookupException(passHandle) != null) {
...
} else {
defaultReadFields(obj, slotDesc);
}
...
}
}
}
/**
* Reads in values of serializable fields declared by given class
* descriptor. If obj is non-null, sets field values in obj. Expects that
* passHandle is set to obj's handle before this method is called.
*/
private void defaultReadFields(Object obj, ObjectStreamClass desc)
throws IOException
{
Class<?> cl = desc.forClass();
if (cl != null && obj != null && !cl.isInstance(obj)) {
throw new ClassCastException();
}
int primDataSize = desc.getPrimDataSize();
if (primVals == null || primVals.length < primDataSize) {
primVals = new byte[primDataSize];
}
bin.readFully(primVals, 0, primDataSize, false);
if (obj != null) {
desc.setPrimFieldValues(obj, primVals);
}
int objHandle = passHandle;
ObjectStreamField[] fields = desc.getFields(false);
Object[] objVals = new Object[desc.getNumObjFields()];
int numPrimFields = fields.length - objVals.length;
for (int i = 0; i < objVals.length; i++) {
ObjectStreamField f = fields[numPrimFields + i];
objVals[i] = readObject0(f.isUnshared());
if (f.getField() != null) {
handles.markDependency(objHandle, passHandle);
}
}
if (obj != null) {
desc.setObjFieldValues(obj, objVals);
}
passHandle = objHandle;
}
...
The method calls defaultReadFields, which reads the values of the fields. As mentioned in the quoted part of the specification, it first handles the field descriptors of primitive fields. The values that are read for these fields are set immediately after reading them, with
desc.setPrimFieldValues(obj, primVals);
and importantly: This happens before it calls readObject0 for each of the non-primitive fields.
In contrast to that, here is the relevant part of the implementation of Java 11:
private void readSerialData(Object obj, ObjectStreamClass desc)
throws IOException
{
ObjectStreamClass.ClassDataSlot[] slots = desc.getClassDataLayout();
...
for (int i = 0; i < slots.length; i++) {
ObjectStreamClass slotDesc = slots[i].desc;
if (slots[i].hasData) {
if (obj == null || handles.lookupException(passHandle) != null) {
...
} else {
FieldValues vals = defaultReadFields(obj, slotDesc);
if (slotValues != null) {
slotValues[i] = vals;
} else if (obj != null) {
defaultCheckFieldValues(obj, slotDesc, vals);
defaultSetFieldValues(obj, slotDesc, vals);
}
}
...
}
}
...
}
private class FieldValues {
final byte[] primValues;
final Object[] objValues;
FieldValues(byte[] primValues, Object[] objValues) {
this.primValues = primValues;
this.objValues = objValues;
}
}
/**
* Reads in values of serializable fields declared by given class
* descriptor. Expects that passHandle is set to obj's handle before this
* method is called.
*/
private FieldValues defaultReadFields(Object obj, ObjectStreamClass desc)
throws IOException
{
Class<?> cl = desc.forClass();
if (cl != null && obj != null && !cl.isInstance(obj)) {
throw new ClassCastException();
}
byte[] primVals = null;
int primDataSize = desc.getPrimDataSize();
if (primDataSize > 0) {
primVals = new byte[primDataSize];
bin.readFully(primVals, 0, primDataSize, false);
}
Object[] objVals = null;
int numObjFields = desc.getNumObjFields();
if (numObjFields > 0) {
int objHandle = passHandle;
ObjectStreamField[] fields = desc.getFields(false);
objVals = new Object[numObjFields];
int numPrimFields = fields.length - objVals.length;
for (int i = 0; i < objVals.length; i++) {
ObjectStreamField f = fields[numPrimFields + i];
objVals[i] = readObject0(f.isUnshared());
if (f.getField() != null) {
handles.markDependency(objHandle, passHandle);
}
}
passHandle = objHandle;
}
return new FieldValues(primVals, objVals);
}
...
An inner class, FieldValues, has been introduced. The defaultReadFields method now only reads the field values and returns them as a FieldValues object. Afterwards, the returned values are assigned to the fields by passing this FieldValues object to a newly introduced defaultSetFieldValues method, which internally does the desc.setPrimFieldValues(obj, primValues) call that originally was done immediately after the primitive values had been read.
To emphasize this again: The defaultReadFields method first reads the primitive field values. Then it reads the non-primitive field values. But it does so before the primitive field values have been set!
This new process interferes with the deserialization method of HashMap. Again, the relevant part is shown here:
private void readObject(java.io.ObjectInputStream s)
throws IOException, ClassNotFoundException {
...
int mappings = s.readInt(); // Read number of mappings (size)
if (mappings < 0)
throw new InvalidObjectException("Illegal mappings count: " +
mappings);
else if (mappings > 0) { // (if zero, use defaults)
...
Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
table = tab;
// Read the keys and values, and put the mappings in the HashMap
for (int i = 0; i < mappings; i++) {
@SuppressWarnings("unchecked")
K key = (K) s.readObject();
@SuppressWarnings("unchecked")
V value = (V) s.readObject();
putVal(hash(key), key, value, false, false);
}
}
}
It reads the key- and value objects, one by one, and puts them into the table, by computing the hash of the key and using the internal putVal method. This is the same method that is used when manually populating the map (i.e. when it is filled programmatically, and not deserialized).
Holger already gave a hint in the comments why this is necessary: There is no guarantee that the hash code of the deserialized keys will be the same as before the serialization. So blindly "restoring the original array" could basically lead to objects being stored in the table under a wrong hash code.
But here, the opposite happens: The keys (i.e. the objects of type Element) are deserialized. They contain the idFromElement map, which in turn contains the Element objects. These elements are put into the map, using the putVal method, while the Element objects are still in the process of being deserialized. But due to the changed order in ObjectInputStream, this is done before the primitive value of the id field (which determines the hash code) has been set. So the objects are stored using hash code 0, and later the id value is assigned (e.g. the value 222), causing the objects to end up in the table under a hash code that they no longer have.
Now, on a more abstract level, this was already clear from the observed behavior. Therefore, the original question was not "What is going on here???", but
if my proposed workaround looks ok, or if there is something better I could do.
I think that the workaround could be OK, but would hesitate to say that nothing could go wrong there. It's complicated.
As for the second part: Something better could be to file a bug report in the Java Bug Database, because the new behavior is clearly broken. It may be hard to point to a specification that is violated, but the deserialized map is certainly inconsistent, and that is not acceptable.
(Yes, I could also file a bug report, but think that more research might be necessary in order to make sure it is written properly, not a duplicate, etc....)
I want to add one possible solution to the excellent answers above:
Instead of making idFromElement transient and forcing the HashMap to be deserialized after the id, you could also make id not final and deserialize it before calling defaultReadObject().
This makes the solution more scalable, since there could be other classes/objects using the hashCode and equals methods or the id, leading to cycles similar to the one you described.
It might also lead to a more generic solution to the problem, although this is not yet completely thought out: all the information that is used in the deserialization of other objects needs to be deserialized before defaultReadObject() is called. That might be the id, but also other fields that your class exposes.
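A minimal sketch of how that could look, assuming the Element class from the question (my illustration, not the answerer's code; id is written twice here, once explicitly and once by the default mechanism, which is redundant but keeps the sketch simple):
private int id; // no longer final

private void writeObject(ObjectOutputStream output) throws IOException {
    output.writeInt(id);          // id goes into the stream first
    output.defaultWriteObject();  // then the default data: id again, plus the map
}

private void readObject(ObjectInputStream input) throws IOException, ClassNotFoundException {
    id = input.readInt();         // id is restored before the map is read,
                                  // so hashCode() is already correct
    input.defaultReadObject();    // re-reads id (same value) and fills the map
}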
How to check whether the list has only one non-null element and if so retrieve the same using java 8 or Streams?
One of my method return list of objects which needs to check whether the returned list contains only one non null object, If so it creates a map as defined below else, needs to log an error as below.
public void myMethod() {
    List<MyClass> tst = getAll();
    if (!tst.isEmpty()) {
        if (tst.size() == 1) {
            if (tst.get(0) != null) {
                MyClass class1 = tst.get(0);
                Map<Integer, MyClass> m =
                        Stream.of(class1).collect(Collectors.toMap(MyClass::getId,
                                Function.identity()));
            }
        } else {
            LOGGER.error("Multiple object found - {} object", tst.size());
        }
    }
}
I'm looking for a way to write this in a clean and standard format, as I have three nested if conditions.
Something like this should do the trick, but it's not using streams. If you really need to use streams, say so and I'll give it a try with them :)
int notNullCount = 0;
Object myNotNullElement = null;
for (Object element : myArray) {
    if (element != null) {
        myNotNullElement = element;
        notNullCount++;
        if (notNullCount > 1) {
            // Throw an exception or do whatever you need to do to signal this
            break;
        }
    }
}
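If a stream-based version is wanted after all, here is a sketch along the same lines (my addition; it reuses MyClass, getId and LOGGER from the question and assumes the usual java.util and java.util.stream imports):
List<MyClass> nonNull = tst.stream()
        .filter(Objects::nonNull)
        .collect(Collectors.toList());
if (nonNull.size() == 1) {
    // exactly one non-null element: build the single-entry map
    Map<Integer, MyClass> m = Stream.of(nonNull.get(0))
            .collect(Collectors.toMap(MyClass::getId, Function.identity()));
} else if (nonNull.size() > 1) {
    LOGGER.error("Multiple object found - {} object", nonNull.size());
}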
Is this valid code to write, if I wish to avoid an unnecessary contains call?
I wish to avoid a contains call on every invocation, as this is highly time-sensitive code.
cancelretryCountMap.putIfAbsent(tag,new AtomicInteger(0));
count = cancelretryCountMap.get(tag).incrementAndGet();
if(count > 10){
///abort after x retries
....
}
I am using JDK 7
Usually, you would use putIfAbsent like this:
final AtomicInteger present = map.get(tag);
int count;
if (present != null) {
count = present.incrementAndGet();
} else {
final AtomicInteger instance = new AtomicInteger(0);
final AtomicInteger marker = map.putIfAbsent(tag, instance);
if (marker == null) {
count = instance.incrementAndGet();
} else {
count = marker.incrementAndGet();
}
}
The reason for the explicit get is that you want to avoid allocating the default value on the "happy" path (i.e., when there is already an entry for the given key).
If there is no matching entry, you have to use the return value of putIfAbsent in order to distinguish between:
the entry was still missing (and the default value has been added by the call), in which case the method returns null, and
some other thread won the race and inserted a new entry after the call to get, in which case the method returns the current value associated with the given key.
You can abstract this sequence by introducing a helper method, e.g.,
interface Supplier<T> {
T get();
}
static <K, T> T computeIfAbsent(ConcurrentMap<K, T> map, K key, Supplier<? extends T> producer) {
final T present = map.get(key);
if (present != null) {
return present;
} else {
final T fallback = producer.get();
final T marker = map.putIfAbsent(key, fallback);
if (marker == null) {
return fallback;
} else {
return marker;
}
}
}
You could use this in your example:
static final Supplier<AtomicInteger> newAtomicInteger = new Supplier<AtomicInteger>() {
public AtomicInteger get() { return new AtomicInteger(0); }
};
void yourMethodWhatever(Object tag) {
final AtomicInteger counter = computeIfAbsent(cancelretryCountMap, tag, newAtomicInteger);
if (counter.incrementAndGet() > 10) {
... whatever ...
}
}
Note that this is already provided in JDK 8 as a default method on Map, but since you are still on JDK 7, you have to roll your own, as is done here.
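For reference, once you do move to JDK 8, the whole helper collapses into the built-in Map.computeIfAbsent:
int count = cancelretryCountMap
        .computeIfAbsent(tag, k -> new AtomicInteger(0))
        .incrementAndGet();
if (count > 10) {
    // abort after x retries
}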
Is it possible to wrap the following code in a reusable function?
EDIT: this is just an example, I want a working solution for ALL recursion depths
What I want is for the following code to be generated:
if (o == null ||
    o.getSubObject() == null ||
    o.getSubObject().getSubSubObject() == null /*||
    ... */)
    return defaultValue;
return o.getSubObject().getSubSubObject()/*...*/.getDesiredValue();
by calling something like
Object defaultValue = null;
Object result = NullSafeCall(o.getSubObject().getSubObject()/*...*/.getDesiredValue(), defaultValue);
The second code block is just an idea; I don't care what it looks like. All I want is to be able, if desired, to avoid all the null checks before calling a deeper function...
Injection could probably do this, but is there no other/easier solution? I've never looked at injection before...
EDIT2: example in another language: http://groovy.codehaus.org/Operators#Operators-SafeNavigationOperator
Not really; any code you would write this way would look horrible and/or use very slow reflection, unless you use an actual Java preprocessor that can understand and change the code you've written.
A better (but associated with quite a bit of refactoring) approach would be to make sure that the values in question cannot possibly be null. For example, you could modify the individual accessors (getSubObject(), getDesiredValue()) to never return null in the first place: make them return default values. The accessors on the default values return default values in turn.
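A minimal sketch of that idea (my own illustration; the class and accessor names are assumptions based on the question):
class SubObject {
    // Shared null-object whose accessors return defaults rather than null
    static final SubObject DEFAULT = new SubObject();

    private Object desiredValue;

    public Object getDesiredValue() {
        return desiredValue != null ? desiredValue : "default";
    }
}

class MyObject {
    private SubObject subObject;

    // Never returns null: falls back to the shared default instance
    public SubObject getSubObject() {
        return subObject != null ? subObject : SubObject.DEFAULT;
    }
}
With accessors like these, o.getSubObject().getDesiredValue() can be called without any null checks, as long as o itself is known to be non-null.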
Java 8 gets you the closest to your syntax with decent performance, I suspect:
// Evaluate with default 5 if anything returns null.
int result = Optional.eval(5, o, x->x.getSubObject(), x->x.getDesiredValue());
This can be done with this utility class:
class Optional {
public static <T, Tdef, T1> Tdef eval(Tdef def, T input, Function<T,T1> fn1,
Function<T1, Tdef> fn2)
{
if(input == null) return def;
T1 res1 = fn1.apply(input);
if(res1 == null) return def;
return fn2.apply(res1);
}
}
Sadly, you'll need a separate eval() defined per number of method calls in the chain, so you may want to define a few; but it is compile-time type safe and reusable with just about any calls/types.
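For example, a three-call overload would follow the same pattern (a sketch extending the class above):
public static <T, T1, T2, Tdef> Tdef eval(Tdef def, T input,
        Function<T, T1> fn1, Function<T1, T2> fn2, Function<T2, Tdef> fn3)
{
    if (input == null) return def;
    T1 res1 = fn1.apply(input);
    if (res1 == null) return def;
    T2 res2 = fn2.apply(res1);
    if (res2 == null) return def;
    return fn3.apply(res2);
}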
You can do something like this
public static Object NullSafeCall(MyObject o,Object defaultValue){
if ( o == null || o.getSubObject() == null)
{
return defaultValue;
}
else
{
return o.getSubObject().getDesiredValue();
}
}
Now you can call this method as follows
Object result = NullSafeCall(o, defaultValue);
I would suggest just replacing
Object result = NullSafeCall(o.getSubObject().getDesiredValue(), defaultValue);
with
Object result = (o == null || o.getSubObject() == null) ? defaultValue : o.getSubObject().getDesiredValue();
Create a method only if you can reuse it.
What you want is not possible. It is essential to understand that with this syntax: Object result = NullSafeCall(o.getSubObject().getSubObject() ...); the part o.getSubObject().getSubObject() is evaluated before any control passes to the function/method, thus throwing the exception.
Some kind of context is required before executing such code. The closest thing to this I could think of can be done using anonymous inner classes, as in the example below:
// intended to be implemented by an anonymous inner class
interface NullSafeOperation<T> {
public T executeSafely();
};
// our executor that executes operations safely
public static class NullSafeExecutor<T> {
public NullSafeExecutor() {}
public T execute(T defaultValue, NullSafeOperation<T> nso) {
T result = defaultValue;
try {
result = nso.executeSafely();
} catch(NullPointerException e) {
// ignore
}
return result;
}
// utility method to create a new instance and execute in one step
public static <T> T executeOperation(T defaultValue, NullSafeOperation<T> nso) {
NullSafeExecutor<T> e = new NullSafeExecutor<T>();
T result = e.execute(defaultValue, nso);
return result;
}
}
public static void main(String[] args) {
final String aNullString = null;
String result = NullSafeExecutor.executeOperation("MyDefault", new NullSafeOperation<String>() {
@Override
public String executeSafely() {
// trying to call a method on a null string
// it will throw NullPointerException but it will be catched by the executor
return aNullString.trim();
}
});
System.out.println("Output = " + result); // prints: Output = MyDefault
}
I have two lists of objects with data; the first is the principal entity and the second is the dependent entity.
In addition, I have a key table that relates the principal and dependent entity objects.
In the first for statement I take one instance of the principal entity, then loop over every instance of the dependent entity, trying to find a match between them (I think a quadratic problem…); if a match is found, the principal entity is updated with the reference object.
The following code works, but when I checked it from a performance perspective it is not efficient.
Do you have ideas/tips on how to improve this code from a performance aspect?
In the JVM monitor I found that EntityDataCreator.getInstanceValue is a hotspot.
This is how the method starts:
// start with the principal entity
for (Object principalEntityInstance : principalEntityInstances) {
List<Object> genObject = null;
Object refObject = createRefObj(dependentMultiplicity);
// check entries in dependent entity
for (Object dependentEntityInstance : toEntityInstances) {
boolean matches = true;
for (String[] prop : propertiesMappings) {
// Get properties related keys
String fromProp = prop[0];
String toProp = prop[1];
Object fromValue = EntityDataCreator.getInstanceValue(fromProp, principalEntityInstance);
Object toValue = EntityDataCreator.getInstanceValue(toProp, dependentEntityInstance);
if (fromValue != null && toValue != null) {
if (!fromValue.equals(toValue)) {
matches = false;
break;
}
}
}
if (matches) {
// all properties match
if (refObject instanceof List) {
genObject = (List<Object>) refObject;
genObject.add(dependentEntityInstance);
refObject = genObject;
} else {
refObject = dependentEntityInstance;
break;
}
}
}
if (refObject != null) {
EntityDataCreator.createMemberValue(principalEntityInstance, navigationPropName, refObject);
}
}
public static Object getInstanceValue(String property, Object entityInstance) throws NoSuchFieldException,
        IllegalAccessException {
    Class<? extends Object> entityClass = entityInstance.getClass();
    Field field = entityClass.getDeclaredField(property);
    field.setAccessible(true);
    Object value = field.get(entityInstance);
    field.setAccessible(false);
    return value;
}
My guess would be that your best bet is to go through both lists once, prepare all the data you need in hash tables, and then do one iteration. This way, your problem becomes O(N+M) instead of O(N*M).
EDIT
Map<String,List<Object>> principalMap = new HashMap<String,List<Object>>();
for (Object principalEntityInstance : principalEntityInstances) {
List<String> keys = getKeysFor(principalEntityInstance);
for(String key : keys) {
List<Object> l = principalMap.get(key);
if(l==null) {
l = new ArrayList<Object>();
principalMap.put(key,l);
}
l.add(principalEntityInstance);
}
}
Then do the same for dependentEntityInstance; this way, your searches will be much faster.
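To sketch the lookup side as well (my addition; it assumes a getKeysFor variant that derives the keys from the dependent-side properties, so that matching pairs produce identical key strings):
for (Object dependentEntityInstance : toEntityInstances) {
    for (String key : getKeysFor(dependentEntityInstance)) {
        List<Object> matches = principalMap.get(key);
        if (matches != null) {
            for (Object principalEntityInstance : matches) {
                // properties match: link the dependent instance to the
                // principal instance, as in the original inner loop
            }
        }
    }
}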
I might be misunderstanding your question, but I would suggest defining an equals method and a hashCode method for your entities, so that you can leverage all the goodness that Java already has for searching and matching entities.
Whenever possible, rely on Java's infrastructure; Sun/Oracle spent a long time making it really fast.
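As a complement to the hash-table approach: since EntityDataCreator.getInstanceValue showed up in the JVM monitor, caching the reflective Field lookups can also help, because getDeclaredField does a fresh lookup on every call. A minimal sketch under that assumption (same signature as the original method):
import java.lang.reflect.Field;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Cache resolved Field objects per class and field name; a racy duplicate
// lookup is harmless, so get/putIfAbsent is sufficient here.
private static final ConcurrentMap<Class<?>, ConcurrentMap<String, Field>> FIELD_CACHE =
        new ConcurrentHashMap<Class<?>, ConcurrentMap<String, Field>>();

public static Object getInstanceValue(String property, Object entityInstance)
        throws NoSuchFieldException, IllegalAccessException {
    Class<?> entityClass = entityInstance.getClass();
    ConcurrentMap<String, Field> byName = FIELD_CACHE.get(entityClass);
    if (byName == null) {
        ConcurrentMap<String, Field> fresh = new ConcurrentHashMap<String, Field>();
        byName = FIELD_CACHE.putIfAbsent(entityClass, fresh);
        if (byName == null) {
            byName = fresh;
        }
    }
    Field field = byName.get(property);
    if (field == null) {
        field = entityClass.getDeclaredField(property);
        field.setAccessible(true); // left accessible, since the Field is reused
        byName.put(property, field);
    }
    return field.get(entityInstance);
}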