I am using the Trove library to create hash maps:
http://trove.starlight-systems.com/
The class I am using is TObjectIntMap, and I need to call its get method.
The issue is that get returns 0 in two cases:
1- if the value of the specified key is zero
2- if the key does not exist
For example in the following code
TObjectIntMap<String> featuresMap = new TObjectIntHashMap<String>();
if (String.valueOf(featuresMap.get("B")) == null)
    System.out.println("NULL");
else
    System.out.println("NotNull");
System.out.println(featuresMap.get("B"));
The program prints the following:
1- NotNull: because get returns zero, even though the key "B" has never been set
2- 0: featuresMap.get("B") returns zero instead of null
I have checked their documentation at the link below; the Javadoc used to say null, which was a mistake that they have since fixed. So get actually returns zero instead of null, because an int cannot be null.
https://bitbucket.org/robeden/trove/issue/43/incorrect-javadoc-for-tobjectintmapget
Now my question is: how can I differentiate between a zero and a missing key in this case? Is there any way to work around this issue?
Try their containsKey method. If get returns 0, use containsKey to check whether the map contains the key: if it does, then the key's value really is 0; if it doesn't, then the key was never set.
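Here is a minimal sketch of that disambiguation pattern. It uses a plain java.util.HashMap with getOrDefault standing in for Trove's primitive get (an assumption made so the example is self-contained without the Trove jar; the logic with TObjectIntMap.get and TObjectIntMap.containsKey is identical):

```java
import java.util.HashMap;
import java.util.Map;

public class ZeroVsMissing {
    public static void main(String[] args) {
        Map<String, Integer> featuresMap = new HashMap<>();
        featuresMap.put("A", 0); // a key whose stored value really is 0

        // getOrDefault(key, 0) mimics Trove's get: both return 0 for "A" and "B"
        int a = featuresMap.getOrDefault("A", 0);
        int b = featuresMap.getOrDefault("B", 0);

        // containsKey tells the two cases apart
        System.out.println(a == 0 && featuresMap.containsKey("A"));  // stored zero
        System.out.println(b == 0 && !featuresMap.containsKey("B")); // missing key
    }
}
```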
Related
I'm using trove4j for its primitives collections. I notice it has constructors like
public TLongArrayList( int capacity, long no_entry_value )
where no_entry_value represents null. The default value is zero.
The null value in collections, especially in a Set, is very important in my view. But after glancing at the source code, I found that trove4j didn't use this value much.
So I'm confused about whether I should care about that value. Should I carefully pick a value that will never occur in my program, or just leave it at the default of zero?
This is kind of one of those things that you know you need when you need it. It comes down to whether or not you want to just call get(...) and know whether or not the value was in the map without calling containsKey(...). That's where the "no entry value" comes in because that is what is returned in the case where you didn't store that key in your map. The default is 0, so if you never store a value of 0 in your map, that will work for checks. But of course that can be a bit risky. Typical values would be 0, -1, Integer.MAX_VALUE... things like that.
If there is no value that you can guarantee will never be in your map, then you need to make sure you check with containsKey before you trust the returned value. You can minimize the overhead of doing two lookups with something like:
int value = my_map.get( my_key );
// NOTE: Assuming no entry value of 0
if ( value == 0 && !my_map.containsKey( my_key ) ) {
    // value wasn't present
}
else {
    // value was present
}
That's a performance improvement over calling containsKey every time before doing a get.
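A sketch of the sentinel idea itself, using a plain java.util.HashMap so it runs without the Trove jar (Trove's no_entry_value plays exactly this role for its primitive maps; the NO_ENTRY constant here is an assumed choice):

```java
import java.util.HashMap;
import java.util.Map;

public class SentinelLookup {
    // assumed sentinel: a value we guarantee never to store in the map
    static final int NO_ENTRY = Integer.MIN_VALUE;

    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("a", 0); // 0 is now a perfectly legal stored value

        int a = counts.getOrDefault("a", NO_ENTRY); // 0 -> key is present
        int b = counts.getOrDefault("b", NO_ENTRY); // NO_ENTRY -> key is absent

        // a single lookup answers both "what is the value?" and "was it there?"
        System.out.println(a != NO_ENTRY); // true
        System.out.println(b == NO_ENTRY); // true
    }
}
```

The choice of Integer.MIN_VALUE only works if that value can never legitimately appear in your data, which is exactly the trade-off the answer above describes.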
So I'm using Collections.min to find the object in an ArrayList that contains the smallest integer value in one of its fields, basically comparing every element in the list.
Now, some objects in the list don't contain a value, so I had to set those to -1.
How would I exclude all the elements in the list that have an int value of -1? I don't see how I can apply an if statement.
temp = Collections.min(PollutionDatasetList, Comparator.comparingInt(Measurement::getLevel));
PollutionDatasetList - my list
Measurement - my class that's contained within the list
getLevel - the integer value I'm comparing.
Just use the stream API:
list.stream()
.filter(m -> m.getLevel() >= 0)
.min(Comparator.comparingInt(Measurement::getLevel))
A better return type for getLevel() would be OptionalInt. That would force every code dealing with levels to think about the case where the measurement has no level, and thus avoid bugs.
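A sketch of that OptionalInt suggestion; the Measurement class below is a minimal stand-in for the asker's (its Integer field and constructor are assumptions), showing how filtering on presence replaces the -1 sentinel:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.OptionalInt;

public class MinLevel {
    // minimal stand-in for the asker's Measurement class
    static class Measurement {
        private final Integer level; // null means "no reading"
        Measurement(Integer level) { this.level = level; }
        OptionalInt getLevel() {
            return level == null ? OptionalInt.empty() : OptionalInt.of(level);
        }
    }

    public static void main(String[] args) {
        List<Measurement> list = Arrays.asList(
                new Measurement(5), new Measurement(null), new Measurement(2));

        // measurements without a level are skipped instead of polluting min()
        Optional<Measurement> min = list.stream()
                .filter(m -> m.getLevel().isPresent())
                .min(Comparator.comparingInt((Measurement m) -> m.getLevel().getAsInt()));

        System.out.println(min.get().getLevel().getAsInt()); // 2
    }
}
```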
Since you are finding the minimum value using Collections.min, set the value of the objects that do not have a value to some large number such as Integer.MAX_VALUE instead of -1, so it won't cause any more problems. I am not sure whether this change will cause problems in other modules of your program, but it will definitely solve this issue with Collections.min.
In Jaspersoft Studio I have tried the following expression, and I am getting null, but I don't understand why. This should be as simple as 3.00/2 displaying 1.50; however, it is not working and still shows null. I have confirmed that all of the fields contain values.
The expression I am using is as follows:
new Double($V{UnitPrice}.doubleValue() == 0 ? 0 : ($F{Price Qty}.doubleValue()/$F{Price}.doubleValue()))
Since you are using Double for your arithmetic, why not use compareTo(Double anotherDouble)?
Not sure if this is the source of your trouble, but it could be == behaving in a way you did not intend and returning false, hence the zero...
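To illustrate the pitfall being hinted at: == between two boxed Double objects compares references, not values, so two equal values can still compare unequal, while compareTo (or unboxing first) compares the actual numbers:

```java
public class BoxedCompare {
    public static void main(String[] args) {
        Double a = Double.valueOf(3.0);
        Double b = Double.valueOf(3.0);

        // reference comparison: typically two distinct Double objects
        System.out.println(a == b);                             // usually false
        // value comparisons: what was actually intended
        System.out.println(a.compareTo(b) == 0);                // true
        System.out.println(a.doubleValue() == b.doubleValue()); // true
    }
}
```

Note that an expression like `someDouble.doubleValue() == 0` already unboxes to a primitive comparison, so whether this applies depends on exactly where the == sits in the expression.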
I am pulling data values from a database that returns a List<Integer>. However, I would like to see whether the List contains my BigInteger. Is there a simple way to do this?
I currently have the following code in Java:
ArrayList<Integer> arr = new ArrayList<Integer>() {{add(new Integer(29415));}};
boolean contains = arr.contains(29415); // true
boolean contains2 = arr.contains(new BigInteger("29415")); // false
I'm not sure of an efficient way to do this.
The correct answer will be returned by evaluating the following:
val != null
    && BigInteger.valueOf(Integer.MIN_VALUE).compareTo(val) <= 0
    && BigInteger.valueOf(Integer.MAX_VALUE).compareTo(val) >= 0
    && list.contains(val.intValue())
This will correctly solve the question of whether the BigInteger you have is "contained" within the List<Integer>. Note that here we only downcast where necessary: if val is outside the range of Integer values, there is no need to downcast, as we know the value cannot be in the list. (The bounds checks must be non-strict, since Integer.MIN_VALUE and Integer.MAX_VALUE are themselves valid Integer values.)
A more relevant question is whether you should actually be using a List<BigInteger> in place of a List<Integer> but that is a different question and not part of the answer to your explicit question
While arshajii provides a solution which works, I would vote against it.
You should never downcast values. You run the risk of your program producing larger values which translate to invalid values when downcast. That kind of bug will be super nasty to troubleshoot months later.
If your code works with BigInteger, then you should convert all values from the database into BigInteger. This is an upcast where you cannot lose information.
Overall, I would value correctness over efficiency. If anything, I would reconsider your usage of BigInteger (maybe long is fine?), but since you have it, I assume you have a reason for it.
In Java, List.contains() uses the equals() method internally, and because BigInteger.equals(Integer) returns false, your List.contains() also returns false. Either use a List<BigInteger> or extract the int value from the BigInteger (as arshajii explained!). Of course, if you really want to search efficiently, you should think about a binary search (in a sorted list) or another data structure like a Map.
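A sketch of the upcast-everything idea from the answers above: convert the database Integers to BigInteger once, collect them into a Set, and every later membership check is O(1) with no downcasting (the sample values here are made up for illustration):

```java
import java.math.BigInteger;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class BigIntLookup {
    public static void main(String[] args) {
        // stand-in for the List<Integer> returned by the database
        List<Integer> fromDb = Arrays.asList(29415, 7, 42);

        // upcast every Integer once; repeated lookups then need no casting at all
        Set<BigInteger> set = fromDb.stream()
                .map(i -> BigInteger.valueOf(i))
                .collect(Collectors.toSet());

        System.out.println(set.contains(new BigInteger("29415"))); // true
        System.out.println(set.contains(new BigInteger("29416"))); // false
    }
}
```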
You can try using BigInteger#intValue():
arr.contains(myBigInteger.intValue())
Note, however, that if myBigInteger is too big to fit into an int, then only the lower 32 bits will be returned (as described in the linked docs). Therefore, you might want to check if myBigInteger is less than or equal to Integer.MAX_VALUE before checking for containment.
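For example, intValue() silently wraps once the value no longer fits in 32 bits, which is why the range check matters:

```java
import java.math.BigInteger;

public class Truncation {
    public static void main(String[] args) {
        // Integer.MAX_VALUE + 1 = 2^31, one past the int range
        BigInteger big = BigInteger.valueOf(Integer.MAX_VALUE).add(BigInteger.ONE);

        // only the low 32 bits survive: 2^31 wraps around to Integer.MIN_VALUE
        System.out.println(big.intValue()); // -2147483648

        // guard before trusting intValue()
        boolean fits = big.compareTo(BigInteger.valueOf(Integer.MAX_VALUE)) <= 0
                && big.compareTo(BigInteger.valueOf(Integer.MIN_VALUE)) >= 0;
        System.out.println(fits); // false
    }
}
```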
I have a set of values in an ArrayList, and I have to find duplicate keys. One approach is to use two nested loops and iterate through the list for each value, resulting in O(n²).
The other thing I can do is put the values as keys into a Hashtable. I believed that a Hashtable would throw an exception if the same key were already in it, but it is not throwing one.
Hashtable<String, String> ht = new Hashtable<String, String>();
for (int i = 0; i<20; i++){
ht.put(String.valueOf(i%10), String.valueOf(i%10));
}
Do I understand it wrong? Doesn't Hashtable/HashMap throw an exception if the same key is already in it?
My suggestion is you want a HashSet instead of a Hashtable:
Set<String> ht = new HashSet<String>();
for (int i = 0; i < 20; i++) {
    if (!ht.add(String.valueOf(i % 10))) {
        // it already existed, throw an exception or whatever
    }
}
If you don't care about the values that you add to a map, you almost certainly want a Set and not a Map/table.
No, it doesn't throw an exception, it simply replaces the old value. You can check if a value already exists by calling get:
if (ht.get(key) != null) {
// value already exists
}
Edit: As #Mark Peters suggested, containsKey is a simpler and sometimes better solution.
You can see in the API docs that put returns null if there was nothing in the table before for that key, and the key's previous value if there was one. (It doesn't throw an exception in either case.)
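A quick demonstration of that return value (the key and values here are arbitrary):

```java
import java.util.Hashtable;

public class PutReturn {
    public static void main(String[] args) {
        Hashtable<String, String> ht = new Hashtable<>();

        String first = ht.put("5", "a");  // key was absent -> returns null
        String second = ht.put("5", "b"); // key existed -> returns old value, now replaced

        System.out.println(first);       // null
        System.out.println(second);      // a
        System.out.println(ht.get("5")); // b
    }
}
```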
You may want to read up on the performance characteristics of hashes.
For example, hashes will make answering the question "does this key exist?" fast, which might help with your algorithm.
According to the Java docs, the only exception that put may throw is NullPointerException, if the key or value is null. You can change your loop to something like:
for (int i = 0; i < 20; i++) {
    String key = String.valueOf(i % 10);
    if (ht.containsKey(key))
        throw new Something(); // your duplicate-key signal of choice
    ht.put(key, key);
}
From the JavaDoc:
put
public Object put(Object key, Object value)
Maps the specified key to the specified value in this hashtable. Neither the key nor the value can be null.
The value can be retrieved by calling the get method with a key that is equal to the original key.
Specified by:
put in interface Map
Specified by:
put in class Dictionary
Parameters:
key - the hashtable key.
value - the value.
Returns:
the previous value of the specified key in this hashtable, or null if it did not have one.
Throws:
NullPointerException - if the key or value is null.
See Also:
Object.equals(Object), get(Object)
It looks like it will let you overwrite the value, but then it gives you the old value as a return Object.
Here's the easiest way to do it:
List<String> yourList = ...;
Set<String> seen = new HashSet<>();
Set<String> duplicates = new HashSet<>();
for (String s : yourList)
    if (!seen.add(s)) duplicates.add(s); // add() returns false for repeats
Depending on your memory vs. runtime constraints, if you are space constrained I would recommend the following:
You can sort the array (worst case O(n log n) if you use something like mergesort) and then traverse it, comparing adjacent elements to find duplicates.
Hope this helps
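A minimal sketch of that sort-and-scan approach (the sample data is made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class SortScanDuplicates {
    public static void main(String[] args) {
        List<String> values = new ArrayList<>(Arrays.asList("b", "a", "c", "a", "b"));

        Collections.sort(values); // O(n log n); equal values become adjacent

        // one linear pass: any element equal to its predecessor is a duplicate
        Set<String> duplicates = new LinkedHashSet<>();
        for (int i = 1; i < values.size(); i++) {
            if (values.get(i).equals(values.get(i - 1))) {
                duplicates.add(values.get(i));
            }
        }
        System.out.println(duplicates); // [a, b]
    }
}
```

This uses no extra memory beyond the output set, at the cost of reordering (or copying) the input.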