I am trying to find the key with the minimum value in the Map shown below.
Map<Node, Integer> freeMap = new TreeMap<>();
Node minNode = null;
for (Map.Entry<Node, Integer> entry : freeMap.entrySet()) {
    if (minNode == null) {
        minNode = entry.getKey();
    } else {
        if (entry.getValue() < freeMap.get(minNode)) {
            minNode = entry.getKey();
        }
    }
}
Firstly, is there a more straightforward way to find the key with the minimum value than using a foreach loop? Secondly, can you suggest an alternative data structure for storing a Node object and an associated Integer value, so that I can fetch the entry with the minimum value in constant time, O(1)?
If your goal is to improve time complexity, there's really only one possible change, from O(n log n) to O(n): the freeMap.get(minNode) lookup inside the loop is an O(log n) operation on a TreeMap, so keep the whole entry instead and compare against it directly:
Map<Node, Integer> freeMap = new TreeMap<>();
Map.Entry<Node, Integer> minEntry = null;
for (Map.Entry<Node, Integer> entry : freeMap.entrySet()) {
    if (minEntry == null || entry.getValue() < minEntry.getValue()) {
        minEntry = entry;
    }
}
Node minNode = minEntry.getKey();
The keys to a concise, efficient and elegant solution here are the Collections#min method and the Map.Entry#comparingByValue method.
The first method can be applied to the entrySet of the map, and the second one provides a Comparator that compares map Entry objects by their value. So the solution is a one-liner, and you can either obtain the entry or the key directly, as shown in this example:
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Map.Entry;

public class KeyWithMinValue
{
    public static void main(String[] args)
    {
        Map<String, Integer> map = new LinkedHashMap<String, Integer>();
        map.put("Zero", 0);
        map.put("One", 1);
        map.put("Two", 2);
        map.put("Three", 3);
        map.put("Four", 4);

        // Obtain the entry with the minimum value:
        Entry<String, Integer> entryWithMinValue = Collections.min(
            map.entrySet(), Entry.comparingByValue());
        System.out.println(entryWithMinValue);

        // Or directly obtain the key, if you only need that:
        String keyWithMinValue = Collections.min(
            map.entrySet(), Entry.comparingByValue()).getKey();
        System.out.println(keyWithMinValue);
    }
}
I suspect that the Integer values are not unique in your system.
If that is the case, I suggest you use TreeMultimap from the Guava library, with the Integer value as the key.
TreeMultimap<Integer, Node> freeMap = TreeMultimap.create();

Node minNode =
    freeMap.isEmpty()
        ? null
        : freeMap.entries().iterator().next().getValue();
A minor improvement, not a whole lot:
Map<Node, Integer> freeMap = new TreeMap<Node, Integer>();
Node minNode = freeMap.isEmpty() ? null : freeMap.entrySet().iterator().next().getKey();
for (Map.Entry<Node, Integer> entry : freeMap.entrySet()) {
    if (entry.getValue() < freeMap.get(minNode)) {
        minNode = entry.getKey();
    }
}
This moves the null check out of the loop; note that the iterator returns a Map.Entry, so you take its key rather than casting it to Node.
Data Structure for O(1) Min
For an alternative data structure, how about a PriorityQueue (sketched below)?
You can either use a custom Comparator or have your data type implement Comparable.
From the javadoc:
Implementation note: this implementation provides O(log(n)) time for the enqueuing and dequeuing methods (offer, poll, remove() and add); linear time for the remove(Object) and contains(Object) methods; and constant time for the retrieval methods (peek, element, and size).
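As a rough sketch of that idea (the Node type and the values here are placeholders, not from the question), you can keep (node, value) pairs in a PriorityQueue ordered by value, so the minimum is always available via peek() in O(1):

import java.util.AbstractMap;
import java.util.Map;
import java.util.PriorityQueue;

public class MinByValueQueue {
    // Placeholder for the question's Node type.
    static class Node { }

    public static void main(String[] args) {
        // Entries are ordered by their Integer value; the head is always the minimum.
        PriorityQueue<Map.Entry<Node, Integer>> queue =
                new PriorityQueue<>(Map.Entry.comparingByValue());

        queue.offer(new AbstractMap.SimpleEntry<>(new Node(), 5)); // O(log n)
        queue.offer(new AbstractMap.SimpleEntry<>(new Node(), 2)); // O(log n)

        System.out.println(queue.peek().getValue()); // O(1) retrieval of the minimum: prints 2
    }
}

Note that a plain PriorityQueue does not give you fast lookup by Node, which is what the next section addresses.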
Data Structure for O(1) Min and amortized O(1) find
If you want both efficient min and efficient find and you control access to the data structure (otherwise what is the point of the question?) you can just roll out your own by extending Java's HashMap to keep track of the minimum element.
You will have to override the put, putAll and remove methods. In each case, you can just call the super class method (e.g. super.put(key, value)) and then update the minimum element, which is kept as an instance member of your newly defined class.
Note that this increases the worst-case remove time to O(n), since you may need to rescan the entries to recompute the minimum when the current minimum is removed (see the sketch below).
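A minimal sketch of that idea follows; the class and method names are mine, not a standard API, and it only handles put, putAll and remove as described above:

import java.util.HashMap;
import java.util.Map;

// Sketch: a HashMap<K, Integer> that remembers which key currently has the smallest value.
public class MinTrackingMap<K> extends HashMap<K, Integer> {

    private K minKey; // key with the smallest value seen so far, or null if empty

    @Override
    public Integer put(K key, Integer value) {
        Integer previous = super.put(key, value);
        if (minKey == null || value < get(minKey)) {
            minKey = key;
        } else if (key.equals(minKey) && previous != null && value > previous) {
            rescanForMin(); // the old minimum was increased, so re-check
        }
        return previous;
    }

    @Override
    public void putAll(Map<? extends K, ? extends Integer> m) {
        for (Map.Entry<? extends K, ? extends Integer> e : m.entrySet()) {
            put(e.getKey(), e.getValue());
        }
    }

    @Override
    public Integer remove(Object key) {
        Integer removed = super.remove(key);
        if (removed != null && key.equals(minKey)) {
            rescanForMin(); // O(n), but only when the minimum itself is removed
        }
        return removed;
    }

    private void rescanForMin() {
        minKey = null;
        for (Map.Entry<K, Integer> e : entrySet()) {
            if (minKey == null || e.getValue() < get(minKey)) {
                minKey = e.getKey();
            }
        }
    }

    // O(1) access to the key with the minimum value.
    public K minKey() {
        return minKey;
    }
}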
You can define your own Comparator and use Collections.min.
Example:
Comparator<Entry<Node, Integer>> customComparator = new Comparator<Entry<Node, Integer>>() {
    @Override
    public int compare(Entry<Node, Integer> o1, Entry<Node, Integer> o2) {
        // Integer.compare avoids the overflow risk of subtracting the values.
        return Integer.compare(o1.getValue(), o2.getValue());
    }
};

Entry<Node, Integer> minVal = Collections.min(freeMap.entrySet(), customComparator);
Hope this helps :)
Related
Let's say I have the LinkedHashMap with some unknown data inside.
//==================
Map< Integer, String > map = new LinkedHashMap<>();
map.put(10, "C");
map.put(20, "C++");
map.put(50, "JAVA");
map.put(40, "PHP");
map.put(30, "Kotlin");
//=============
And I know just the key = 50;
I am wondering what is the best way to get the next element to the element that I have by this key (50)? This is not a multi-threaded application. I don't worry about thread-safety.
I don't like having to iterate over all the keys via entrySet from the beginning.
It would be great to somehow get access to the next() of LinkedHashMap's Entry.
This is a LinkedHashMap, so it remembers the insertion order of its elements.
public static Map.Entry<Integer, String> getNextEntry(LinkedHashMap<Integer, String> map, Integer key) {
    List<Integer> keys = new ArrayList<>(map.keySet());
    int index = keys.indexOf(key);
    if (index < 0 || index >= keys.size() - 1)
        return null;
    int k = keys.get(index + 1);
    return Map.entry(k, map.get(k));
}
Or you can use Iterator:
public static Map.Entry<Integer, String> getNextEntry(LinkedHashMap<Integer, String> map, Integer key) {
    boolean found = false;
    for (Map.Entry<Integer, String> entry : map.entrySet()) {
        if (found)
            return Map.entry(entry.getKey(), entry.getValue());
        if (entry.getKey().intValue() == key)
            found = true;
    }
    return null;
}
LinkedHashMap doesn't offer any functionality for finding the next key or entry.
If you simply don't want to manage the iteration yourself, you can certainly delegate it, but keep in mind that the iteration still has to happen somewhere.
Stream API
Alternatively you can make use of the Stream API if you don't want to bother with loops.
public static Optional<Map.Entry<Integer, String>> getNextEntry(Map<Integer, String> map,
                                                                int previous) {
    return map.entrySet().stream()
        .dropWhile(entry -> entry.getKey() != previous) // discard the entries until the target key has been encountered
        .skip(1)      // skip the entry with the target key
        .findFirst(); // grab the next entry and return it as an Optional (because the next entry might not exist)
}
TreeMap
However, you would be able to navigate through the keys of the map if you were using a TreeMap.
TreeMap maintains a red-black tree under the hood and keeps its entries in sorted order based on the keys. It offers various navigation methods such as higherEntry() and higherKey().
NavigableMap<Integer, String> map = new TreeMap<>();
// populating the map
int key = 50;
Map.Entry<Integer, String> next = map.higherEntry(key);
I am using a TreeMap, but I created my own comparator so that the TreeMap is ordered by values rather than by keys. This works fine, but whenever I come to overwrite a <key, value> mapping, a new mapping with the same key is added instead of the old one being overwritten (which shouldn't happen, because maps in Java are meant to have unique keys). I have even tried to remove the mapping first before adding another one, but nothing gets deleted from the TreeMap. When I remove the comparator, there are no duplicate keys and the TreeMap works as expected. Why does this happen?
Here is my code:
public Map<String, List<String>> mapQtToNonSampledCase(List<Entry> cases, Map<String, Integer> populationDistribution) {
    Map<String, Integer> distribution = new HashMap<>(populationDistribution);
    Map<String, List<String>> qtToCases = new HashMap<>();

    Comparator<String> valueComparator = new Comparator<String>() {
        public int compare(String k1, String k2) {
            int compare = distribution.get(k1).compareTo(distribution.get(k2));
            if (compare == 0)
                return 1;
            else
                return compare;
        }
    };

    TreeMap<String, Integer> sortedByValues = new TreeMap<>(valueComparator);
    sortedByValues.putAll(distribution);

    for (Entry entry : cases) {
        List<Map.Entry<String, Integer>> listEntries = sortedByValues.entrySet().stream().collect(Collectors.toList());
        Map.Entry<String, Integer> qt = sortedByValues.firstEntry().getKey().equals(entry.get(UtilsClass.ID).toString()) ? (listEntries.get(1) != null ? listEntries.get(1) : null) : sortedByValues.firstEntry();
        if (qt != null) {
            if (!qtToCases.containsKey(qt.getKey())) {
                qtToCases.put(qt.getKey(), new ArrayList<>());
            }
            qtToCases.get(qt.getKey()).add(entry.get(UtilsClass.ID).toString());
            sortedByValues.put(qt.getKey(), qt.getValue() - 1);
        }
    }

    // Printing keys
    for (Map.Entry<String, Integer> entry : sortedByValues.entrySet()) {
        System.out.println(entry.getKey());
    }

    return qtToCases;
}
Your custom comparator is not consistent with equals: When you try to update a key with a different value, your comparator will return a value != 0, but the keys are the same.
See this comment in TreeMap API doc:
Note that the ordering maintained by a tree map, like any sorted map,
and whether or not an explicit comparator is provided, must be
consistent with equals if this sorted map is to correctly implement
the Map interface.
The term 'consistent with equals' is defined in the Comparable API doc:
The natural ordering for a class C is said to be consistent with equals if and only if e1.compareTo(e2) == 0 has the same boolean value as e1.equals(e2) for every e1 and e2 of class C.
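One common fix, sketched here against the distribution map from the question, is to break ties by comparing the keys themselves, so that compare() returns 0 exactly when the keys are equal:

// Order by value, but fall back to the natural key order on ties,
// so that equal keys compare as 0 and put() overwrites instead of duplicating.
Comparator<String> valueComparator = new Comparator<String>() {
    public int compare(String k1, String k2) {
        int byValue = distribution.get(k1).compareTo(distribution.get(k2));
        return byValue != 0 ? byValue : k1.compareTo(k2);
    }
};
TreeMap<String, Integer> sortedByValues = new TreeMap<>(valueComparator);
sortedByValues.putAll(distribution);

Note that the comparator still reads values from the separate distribution map, so if you later change a key's value you must keep distribution in sync and remove/re-insert the entry, because a TreeMap entry's position is fixed at insertion time.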
I have a HashMap as follows-
HashMap<String, Integer> BC = new HashMap<String, Integer>();
which stores "token/tag" strings as keys and the frequency of each token/tag as values.
Example-
"the/at" 153
"that/cs" 45
"Ann/np" 3
I now go through each key and check whether the same token, say "the", is associated with more than one tag, and if so I take the one with the larger count.
Example-
"the/at" 153
"the/det" 80
Then I take the key- "the/at" with value - 153.
The code that I have written to do so is as follows-
private HashMap<String, Integer> Unigram_Tagger = new HashMap<String, Integer>();

for (String curr_key : BC.keySet())
{
    for (String next_key : BC.keySet())
    {
        if (curr_key.equals(next_key))
            continue;
        else
        {
            String[] split_key_curr_key = curr_key.split("/");
            String[] split_key_next_key = next_key.split("/");
            //out.println("CK- " + curr_key + ", NK- " + next_key);
            if (split_key_curr_key[0].equals(split_key_next_key[0]))
            {
                int ck_v = 0, nk_v = 0;
                ck_v = BC.get(curr_key);
                nk_v = BC.get(next_key);

                if (ck_v > nk_v)
                    Unigram_Tagger.put(curr_key, BC.get(curr_key));
                else
                    Unigram_Tagger.put(next_key, BC.get(next_key));
            }
        }
    }
}
But this code is taking too long to compute, since the original HashMap 'BC' has 68,442 entries and the nested loops therefore perform roughly 68,442² = 4,684,307,364 iterations (plus some more).
My question is this- can I accomplish the same output using a more efficient method?
Thanks!
Create a new
Map<String,Integer> highCount = new HashMap<>();
that will map tokens to their largest count.
Make a single pass through the keys.
Split each key into its component tokens.
For each token, look in highCount. If the key does not exist, add it with its count. If the entry already exists and the current count is greater than the previous maximum, replace the maximum in the map.
When you are done with the single pass, highCount will contain all the unique tokens along with the highest count seen for each token (a sketch follows below).
Note: This answer is intended to give you a starting point from which to develop a complete solution. The key concept is that you create and populate a new map from token to some "value" type (not necessarily just Integer) that provides you with the functionality you need. Most likely the value type will be a new custom class that stores the tag and the count.
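A minimal sketch along those lines (the class and method names are illustrative, not from the question), using a small value class that stores the tag together with its count:

import java.util.HashMap;
import java.util.Map;

public class BestTagPerToken {

    // Holds the best (highest-count) tag seen so far for a token.
    static class TagCount {
        final String tag;
        final int count;

        TagCount(String tag, int count) {
            this.tag = tag;
            this.count = count;
        }
    }

    static Map<String, TagCount> bestTags(Map<String, Integer> bc) {
        Map<String, TagCount> best = new HashMap<>();
        for (Map.Entry<String, Integer> e : bc.entrySet()) {
            String[] parts = e.getKey().split("/");   // "the/at" -> ["the", "at"]
            String token = parts[0];
            String tag = parts.length > 1 ? parts[1] : "";
            TagCount current = best.get(token);
            if (current == null || e.getValue() > current.count) {
                best.put(token, new TagCount(tag, e.getValue()));
            }
        }
        return best;                                  // one pass: O(n)
    }

    public static void main(String[] args) {
        Map<String, Integer> bc = new HashMap<>();
        bc.put("the/at", 153);
        bc.put("the/det", 80);
        bc.put("that/cs", 45);

        bestTags(bc).forEach((token, tc) ->
                System.out.println(token + "/" + tc.tag + " " + tc.count));
    }
}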
The slowest part of your current method is due to the pairwise comparison of keys. First, define a Tuple class:
public class Tuple<X, Y> {
    public final X x;
    public final Y y;

    public Tuple(X x, Y y) {
        this.x = x;
        this.y = y;
    }
}
Thus you can try an algorithm that does the following:
1. Initialize a new HashMap<String, Tuple<String, Integer>> result.
2. Given an input pair (key, value) from the old map, where key = "a/b", check whether result.keySet().contains(a) and result.keySet().contains(b).
3. If neither a nor b is present, result.put(a, new Tuple<String, Integer>(b, value)) and result.put(b, new Tuple<String, Integer>(a, value)).
4. If a is present, compare value and v = result.get(a). If value > v, remove a and b from result and do step 3. Do the same for b. Otherwise, move on to the next key-value pair.
After you have iterated through the old hash map and inserted everything, then you can easily reconstruct the output you want by transforming the key-values in result.
A basic thought on the algorithm:
You should get the entrySet() of the HashMap and convert it to a List:
ArrayList<Map.Entry<String, Integer>> list = new ArrayList<>(map.entrySet());
Now you should sort the list by the keys in alphabetical order. We do that because the HashMap has no order, so you can expect that the corresponding keys might be far apart. But by sorting them, all related keys are directly next to each other.
Collections.sort(list, Comparator.comparing(e -> e.getKey()));
The entries "the/at" and "the/det" will be next to each other, thanks to sorting alphabetically.
Now you can iterate over the entire list, remembering the best item so far, until you find a better one or reach the first item that does not have the same prefix (e.g. "the").
ArrayList<Map.Entry<String, Integer>> bestList = new ArrayList<>();

// The first entry of the list is considered the currently best item for its group
Map.Entry<String, Integer> currentBest = list.get(0);
String key = currentBest.getKey();
String currentPrefix = key.substring(0, key.indexOf('/'));

for (int i = 1; i < list.size(); i++) {
    // The item we compare the current best with
    Map.Entry<String, Integer> next = list.get(i);
    String nkey = next.getKey();
    String nextPrefix = nkey.substring(0, nkey.indexOf('/'));

    // If both items have the same prefix, then we want to keep the best one
    // as the current best item
    if (currentPrefix.equals(nextPrefix)) {
        if (currentBest.getValue() < next.getValue()) {
            currentBest = next;
        }
    // If the prefix is different we add the current best to the best list and
    // consider the current item the best one for the next group
    } else {
        bestList.add(currentBest);
        currentBest = next;
        currentPrefix = nextPrefix;
    }
}
// The last one must be added here, or we would forget it
bestList.add(currentBest);
Now you should have a list of Map.Entry objects representing the desired entries. The overall complexity should be O(n log n), dominated by the sorting step, while grouping/collecting the items is O(n).
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.TreeMap;
import java.util.stream.Collectors;
public class Point {

    public static void main(String[] args) {
        HashMap<String, Integer> BC = new HashMap<>();
        // some random values
        BC.put("the/at", 5);
        BC.put("Ann/npe", 6);
        BC.put("the/atx", 7);
        BC.put("that/cs", 8);
        BC.put("the/aty", 9);
        BC.put("Ann/np", 1);
        BC.put("Ann/npq", 2);
        BC.put("the/atz", 3);
        BC.put("Ann/npz", 4);
        BC.put("the/atq", 0);
        BC.put("the/atw", 12);
        BC.put("that/cs", 14);
        BC.put("that/cs1", 16);
        BC.put("the/at1", 18);
        BC.put("the/at2", 100);
        BC.put("the/at3", 123);
        BC.put("that/det", 153);
        BC.put("xyx", 123);
        BC.put("xyx/w", 2);

        System.out.println("\nUnsorted Map......");
        printMap(BC);

        System.out.println("\nSorted Map......By Key");
        // sort the original map using TreeMap; it will sort the Map by keys automatically
        Map<String, Integer> sortedBC = new TreeMap<>(BC);
        printMap(sortedBC);

        // find all distinct prefixes by splitting the keys at "/"
        List<String> uniquePrefixes = sortedBC.keySet().stream().map(i -> i.split("/")[0]).distinct().collect(Collectors.toList());
        System.out.println("\nuniquePrefixes: " + uniquePrefixes);

        TreeMap<String, Integer> mapOfMaxValues = new TreeMap<>();
        // for each prefix from the list above, filter the entries of the sorted map
        // whose keys start with this prefix, sort them by value in descending order
        // and take the first one, which has the highest value
        uniquePrefixes.stream().forEach(i -> {
            Entry<String, Integer> e =
                sortedBC.entrySet().stream().filter(j -> j.getKey().startsWith(i))
                    .sorted(Map.Entry.comparingByValue(Comparator.reverseOrder())).findFirst().get();
            mapOfMaxValues.put(e.getKey(), e.getValue());
        });

        System.out.println("\nmapOfMaxValues...\n");
        printMap(mapOfMaxValues);
    }

    // pretty print a map
    public static <K, V> void printMap(Map<K, V> map) {
        map.entrySet().stream().forEach((entry) -> {
            System.out.println("Key : " + entry.getKey()
                + " Value : " + entry.getValue());
        });
    }
}
// note: only tested with random values provided in the code
// behavior for large maps untested
I understand that the Set returned from a Map's keySet() method does not guarantee any particular order.
My question is, does it guarantee the same order over multiple iterations? For example:
Map<K,V> map = getMap();
for( K k : map.keySet() )
{
}
...
for( K k : map.keySet() )
{
}
In the above code, assuming that the map is not modified, will the iteration over the keySets be in the same order. Using Sun's jdk15 it does iterate in the same order, but before I depend on this behavior, I'd like to know if all JDKs will do the same.
EDIT
I see from the answers that I cannot depend on it. Too bad. I was hoping to get away with not having to build some new Collection to guarantee my ordering. My code needed to iterate through, do some logic, and then iterate through again with the same ordering. I'll just create a new ArrayList from the keySet which will guarantee order.
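In other words, something like this minimal snapshot (sketch):

// Copy the keys once; both passes then iterate this list in the same order.
List<K> orderedKeys = new ArrayList<>(map.keySet());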
You can use a LinkedHashMap if you want a HashMap whose iteration order does not change.
Moreover, you should prefer it if you iterate through the collection a lot: iterating over a HashMap's entrySet or keySet can be slower than over a LinkedHashMap's, because HashMap iteration time is proportional to its capacity (number of buckets) rather than just its size.
If it is not stated to be guaranteed in the API documentation, then you shouldn't depend on it. The behavior might even change from one release of the JDK to the next, even from the same vendor's JDK.
You could easily get the set and then just sort it yourself, right?
Map is only an interface (rather than a class), which means that the underlying class that implements it (and there are many) could behave differently, and the contract for keySet() in the API does not indicate that consistent iteration is required.
If you are looking at a specific class that implements Map (HashMap, LinkedHashMap, TreeMap, etc.), you could see how it implements the keySet() function to determine what the behaviour would be by checking out the source. You'd have to take a close look at the algorithm to see whether the property you are looking for is preserved (that is, a consistent iteration order when the map has not had any insertions or removals between iterations). The source for HashMap, for example, is here (OpenJDK 6): http://www.docjar.com/html/api/java/util/HashMap.java.html
It could vary widely from one JDK to the next, so I definitely wouldn't rely on it.
That being said, if consistent iteration order is something you really need, you might want to try a LinkedHashMap.
The API for Map does not guarantee any ordering whatsoever, even between multiple invocations of the method on the same object.
In practice I would be very surprised if the iteration order changed for multiple subsequent invocations (assuming the map itself did not change in between) - but you should not (and according to the API cannot) rely on this.
EDIT - if you want to rely on the iteration order being consistent, then you want a SortedMap which provides exactly these guarantees.
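As a small illustration (not part of the original answer), a TreeMap, the standard SortedMap implementation, always iterates in key order:

import java.util.SortedMap;
import java.util.TreeMap;

public class SortedMapOrder {
    public static void main(String[] args) {
        SortedMap<String, Integer> map = new TreeMap<>();
        map.put("b", 2);
        map.put("a", 1);
        map.put("c", 3);
        System.out.println(map.keySet()); // [a, b, c] on every iteration
    }
}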
Just for fun, I decided to write some code that you can use to guarantee a random order each time. This is useful so that you can catch cases where you are depending on the order but should not be. If you want to depend on the order, then as others have said, you should use a SortedMap. If you just use a Map and happen to rely on the order, the following RandomIterator will catch that. I'd only use it in testing code, since it uses more memory than not doing it would.
You could also wrap the Map (or the Set) to have them return the RandomIterator, which would then let you use the for-each loop.
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class Main
{
    private Main()
    {
    }

    public static void main(final String[] args)
    {
        final Map<String, String> items;

        items = new HashMap<String, String>();
        items.put("A", "1");
        items.put("B", "2");
        items.put("C", "3");
        items.put("D", "4");
        items.put("E", "5");
        items.put("F", "6");
        items.put("G", "7");

        display(items.keySet().iterator());
        System.out.println("---");
        display(items.keySet().iterator());
        System.out.println("---");
        display(new RandomIterator<String>(items.keySet().iterator()));
        System.out.println("---");
        display(new RandomIterator<String>(items.keySet().iterator()));
        System.out.println("---");
    }

    private static <T> void display(final Iterator<T> iterator)
    {
        while (iterator.hasNext())
        {
            final T item;

            item = iterator.next();
            System.out.println(item);
        }
    }
}

class RandomIterator<T>
    implements Iterator<T>
{
    private final Iterator<T> iterator;

    public RandomIterator(final Iterator<T> i)
    {
        final List<T> items;

        items = new ArrayList<T>();

        while (i.hasNext())
        {
            final T item;

            item = i.next();
            items.add(item);
        }

        Collections.shuffle(items);
        iterator = items.iterator();
    }

    public boolean hasNext()
    {
        return (iterator.hasNext());
    }

    public T next()
    {
        return (iterator.next());
    }

    public void remove()
    {
        iterator.remove();
    }
}
I agree with the LinkedHashMap suggestion. I'm just putting down my findings and experience from when I faced a problem while trying to sort a HashMap by its keys.
My code to create HashMap:
HashMap<Integer, String> map;

@Before
public void initData() {
    map = new HashMap<>();

    map.put(55, "John");
    map.put(22, "Apple");
    map.put(66, "Earl");
    map.put(77, "Pearl");
    map.put(12, "George");
    map.put(6, "Rocky");
}
I have a function showMap which prints entries of map:
public void showMap(Map<Integer, String> map1) {
    for (Map.Entry<Integer, String> entry : map1.entrySet()) {
        System.out.println("[Key: " + entry.getKey() + " , " + "Value: " + entry.getValue() + "] ");
    }
}
Now when I print the map before sorting, it prints following sequence:
Map before sorting :
[Key: 66 , Value: Earl]
[Key: 22 , Value: Apple]
[Key: 6 , Value: Rocky]
[Key: 55 , Value: John]
[Key: 12 , Value: George]
[Key: 77 , Value: Pearl]
Which is basically different than the order in which map keys were put.
Now when I sort it by its keys and copy the result into a plain HashMap:
List<Map.Entry<Integer, String>> entries = new ArrayList<>(map.entrySet());
Collections.sort(entries, new Comparator<Entry<Integer, String>>() {
    @Override
    public int compare(Entry<Integer, String> o1, Entry<Integer, String> o2) {
        return o1.getKey().compareTo(o2.getKey());
    }
});

HashMap<Integer, String> sortedMap = new HashMap<>();
for (Map.Entry<Integer, String> entry : entries) {
    System.out.println("Putting key:" + entry.getKey());
    sortedMap.put(entry.getKey(), entry.getValue());
}
System.out.println("Map after sorting:");
showMap(sortedMap);
The output is:
Sorting by keys :
Putting key:6
Putting key:12
Putting key:22
Putting key:55
Putting key:66
Putting key:77
Map after sorting:
[Key: 66 , Value: Earl]
[Key: 6 , Value: Rocky]
[Key: 22 , Value: Apple]
[Key: 55 , Value: John]
[Key: 12 , Value: George]
[Key: 77 , Value: Pearl]
You can see the difference in the order of the keys: the entries were sorted correctly before copying, but the keys of the copied map come out in the same order as in the earlier map. I don't know whether it is valid to say this, but for two HashMaps with the same keys, the order of the keys appears to be the same. This suggests that although the key order is not guaranteed, it can be identical for two maps with the same keys, because of the key placement algorithm of this JVM's HashMap implementation.
Now when I use a LinkedHashMap to copy the sorted entries into, I get the desired result (which is natural, but that is not the point; the point is about the key order of HashMap):
HashMap<Integer, String> sortedMap = new LinkedHashMap<>();
for (Map.Entry<Integer, String> entry : entries) {
    System.out.println("Putting key:" + entry.getKey());
    sortedMap.put(entry.getKey(), entry.getValue());
}
System.out.println("Map after sorting:");
showMap(sortedMap);
Output:
Sorting by keys :
Putting key:6
Putting key:12
Putting key:22
Putting key:55
Putting key:66
Putting key:77
Map after sorting:
[Key: 6 , Value: Rocky]
[Key: 12 , Value: George]
[Key: 22 , Value: Apple]
[Key: 55 , Value: John]
[Key: 66 , Value: Earl]
[Key: 77 , Value: Pearl]
HashMap does not guarantee that the order of the map will remain constant over time.
It doesn't have to be. A map's keySet function returns a Set and the set's iterator method says this in its documentation:
"Returns an iterator over the elements in this set. The elements are returned in no particular order (unless this set is an instance of some class that provides a guarantee)."
So, unless you are using one of those classes with a guarantee, there is none.
Map is an interface, and its documentation does not say that the order should stay the same. That means you can't rely on the order. But if you control the Map implementation returned by getMap(), then you can use LinkedHashMap or TreeMap and get the same order of keys/values every time you iterate over them.
tl;dr Yes.
I believe the iteration order for .keySet() and .values() is consistent (Java 8).
Proof 1: We load a HashMap with random keys and random values. We iterate over this HashMap using .keySet() and load the keys and their corresponding values into a LinkedHashMap (it will preserve the order of the keys and values inserted). Then we compare the .keySet() of both Maps and the .values() of both Maps. It always comes out the same; it never fails.
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class Sample3 {

    static final String AB = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    static SecureRandom rnd = new SecureRandom();

    // from here: https://stackoverflow.com/a/157202/8430155
    static String randomString(int len) {
        StringBuilder sb = new StringBuilder(len);
        for (int i = 0; i < len; i++) {
            sb.append(AB.charAt(rnd.nextInt(AB.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        for (int j = 0; j < 10; j++) {
            Map<String, String> map = new HashMap<>();
            Map<String, String> linkedMap = new LinkedHashMap<>();

            for (int i = 0; i < 1000; i++) {
                String key = randomString(8);
                String value = randomString(8);
                map.put(key, value);
            }

            for (String k : map.keySet()) {
                linkedMap.put(k, map.get(k));
            }

            if (!(map.keySet().toString().equals(linkedMap.keySet().toString()) &&
                  map.values().toString().equals(linkedMap.values().toString()))) {
                // never fails
                System.out.println("Failed");
                break;
            }
        }
    }
}
Proof 2: From here, the table is an array of Node<K,V> class. We know that iterating an array will give the same result every time.
/**
* The table, initialized on first use, and resized as
* necessary. When allocated, length is always a power of two.
* (We also tolerate length zero in some operations to allow
* bootstrapping mechanics that are currently not needed.)
*/
transient Node<K,V>[] table;
The class responsible for .values():
final class Values extends AbstractCollection<V> {
    // more code here
    public final void forEach(Consumer<? super V> action) {
        Node<K,V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next)
                    action.accept(e.value);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }
}
The class responsible for .keySet():
final class KeySet extends AbstractSet<K> {
    // more code here
    public final void forEach(Consumer<? super K> action) {
        Node<K,V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next)
                    action.accept(e.key);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }
}
Carefully look at both the inner classes. They are pretty much the same except:
if (size > 0 && (tab = table) != null) {
    int mc = modCount;
    for (int i = 0; i < tab.length; ++i) {
        for (Node<K,V> e = tab[i]; e != null; e = e.next)
            action.accept(e.key);       // <- from the KeySet class
            // action.accept(e.value);  // <- the only change in the Values class
    }
    if (modCount != mc)
        throw new ConcurrentModificationException();
}
They iterate on the same array table to support .keySet() in KeySet class and .values() in Values class.
Proof 3: this answer also explicitly states - So, yes, keySet(), values(), and entrySet() return values in the order the internal linked list uses.
Therefore, the .keySet() and .values() are consistent.
Logically, if the contract says "no particular order is guaranteed", and since "the order it came out one time" is a particular order, then the answer is no, you can't depend on it coming out the same way twice.
You also can store the Set instance returned by the keySet() method and can use this instance whenever you need the same order.
What would be the fastest way to get the common values from all the sets within a hash map?
I have a
Map<String, Set<String>>
I check for the key and get all the sets that have the given key. But instead of getting all the sets from the hashmap, is there any better way to get the common elements (values) from all the sets?
For example, the hashmap contains,
abc:[ax1,au2,au3]
def:[ax1,aj5]
ijk:[ax1,au2]
I want to extract just ax1 and au2, as they are the most common values across the sets.
note: not sure if this is the fastest, but this is one way to do this.
First, write a simple method to extract the frequencies for the Strings occurring across all value sets in the map. Here is a simple implementation:
Map<String, Integer> getFrequencies(Map<String, Set<String>> map) {
    Map<String, Integer> frequencies = new HashMap<String, Integer>();
    for (String key : map.keySet()) {
        for (String element : map.get(key)) {
            int count;
            if (frequencies.containsKey(element)) {
                count = frequencies.get(element);
            } else {
                count = 0;
            }
            frequencies.put(element, count + 1);
        }
    }
    return frequencies;
}
You can simply call this method like this: Map<String, Integer> frequencies = getFrequencies(map)
Second, in order to get the most "common" elements in the frequencies map, you simply sort the entries in the map by using the Comparator interface. It so happens that SO has an excellent community wiki that discusses just that: Sort a Map<Key, Value> by values (Java). The wiki contains multiple interesting solutions to the problem. It might help to go over them.
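For instance, a minimal sketch (assuming Java 8+ and the frequencies map computed above) that ranks the entries by descending frequency into an order-preserving LinkedHashMap:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sort the frequency map by descending value; the LinkedHashMap keeps that order.
Map<String, Integer> ranked = frequencies.entrySet().stream()
        .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                                  (a, b) -> a, LinkedHashMap::new));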
You can simply implement a class, call it FrequencyMap, as shown below.
Have the class implement the Comparator<String> interface, and thus the int compare(String a, String b) method, to have the elements of the map sorted in decreasing order of their Integer values.
Third, implement another method, call it getCommon(int threshold), and pass it a threshold value. Any entry in the map whose frequency value is greater than or equal to the threshold can be considered "common" and will be returned as a simple List.
class FrequencyMap implements Comparator<String> {

    Map<String, Integer> map;

    public FrequencyMap(Map<String, Integer> map) {
        this.map = map;
    }

    public int compare(String a, String b) {
        if (map.get(a) >= map.get(b)) {
            return -1;
        } else {
            return 1;
        } // returning 0 would merge keys
    }

    public ArrayList<String> getCommon(int threshold) {
        ArrayList<String> common = new ArrayList<String>();
        for (String key : this.map.keySet()) {
            if (this.map.get(key) >= threshold) {
                common.add(key);
            }
        }
        return common;
    }

    @Override
    public String toString() {
        return this.map.toString();
    }
}
So using FrequencyMap class and the getCommon method, it boils down to these few lines of code:
FrequencyMap frequencyMap = new FrequencyMap(frequencies);
System.out.println(frequencyMap.getCommon(2));
System.out.println(frequencyMap.getCommon(3));
System.out.println(frequencyMap.getCommon(4));
For the sample input in your question, this is the output that you get:
// common values
[ax1, au6, au3, au2]
[ax1, au2]
[ax1]
Also, here is a gist containing the code i whipped up for this question: https://gist.github.com/VijayKrishna/5973268