ConcurrentHashMap, find by value, compare fields and put - java

How can I check whether the map already contains a value whose field matches a given value, and put a new one if it doesn't?
This is a ConcurrentHashMap, because I have N threads.
Here is an example of what I want. However, it is not thread-safe.
Map<Integer, Record> map = new ConcurrentHashMap<>();

// it works, but I think it's unsafe
int get(Object key) {
    for (Map.Entry<Integer, Record> next : map.entrySet()) {
        if (next.getValue().a == key) {
            return next.getValue().b;
        }
    }
    int code = ...newCode();
    map.put(code, new Record(...));
    return code;
}

record Record(Object a, int b) {
}

What you're suggesting would defeat the purpose of using a HashMap, since you're iterating through the Map instead of retrieving from it.
What you should really do is create a second Map where the field Record.a is the key and the field Record.b (or just the whole Record) is the value. Then update your logic to insert into both Maps appropriately.
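As a rough, hedged sketch of that idea with a ConcurrentHashMap: keep a second map keyed by Record.a and let computeIfAbsent do the check-and-insert atomically per key (the AtomicInteger below just stands in for the questioner's newCode()):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class RecordIndex {
    record Record(Object a, int b) {}

    private final Map<Integer, Record> byCode = new ConcurrentHashMap<>();
    private final Map<Object, Record> byField = new ConcurrentHashMap<>();
    private final AtomicInteger codes = new AtomicInteger(); // placeholder for newCode()

    int get(Object a) {
        // computeIfAbsent is atomic per key: only one thread creates the Record for a given 'a'
        Record r = byField.computeIfAbsent(a, k -> {
            int code = codes.incrementAndGet();
            Record created = new Record(k, code);
            byCode.put(code, created);
            return created;
        });
        return r.b();
    }
}

Note that byCode is updated as a side effect of the mapping function, so a reader of byCode may briefly not see a record that byField already contains; whether that matters depends on how the two maps are consumed.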

Related

Should we always use a ConcurrentHashMap when using multiple threads?

If I have a hash map and this method:
private Map<String, String> m = new HashMap<>();

private void add(String key, String value) {
    String val = m.get(key);
    if (val == null) {
        m.put(key, value);
    }
}
If I have two threads A and B calling the method with the same key and value, A and B may both see that the key is not in the map, and so may both write to the map simultaneously. However, the write order (A before B or B before A) should not affect the result because they both write the same value. But I am just wondering whether concurrent writes would be dangerous and could lead to unexpected results. In that case I should maybe use a ConcurrentHashMap.
Yes, you should use a ConcurrentHashMap (which is internally thread-safe) and use its m.putIfAbsent(key, value) method.
m should also be final, to prevent it from being reassigned.
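A minimal sketch of that suggestion, rewriting the add method from the question with a ConcurrentHashMap:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SafeAdd {
    // final: the reference can no longer be reassigned
    private final Map<String, String> m = new ConcurrentHashMap<>();

    private void add(String key, String value) {
        // atomic check-then-put: keeps the first value ever stored for the key
        m.putIfAbsent(key, value);
    }
}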

N Level Map, Need to place map inside map

I need a dynamic data structure, similar to java.util.Map, which is capable of storing String keys and Object values, where that Object may itself be another map storing String keys and Object values.
I suspect that the requester is looking for something like the below:
class MultilevelMap<K, V> extends HashMap<List<K>, V> {
    @SafeVarargs
    public final void put(V value, K... keys) {
        put(makeKey(keys), value);
    }

    @SafeVarargs
    public final V get(K... keys) {
        return get(makeKey(keys));
    }

    // The remainder of this class is left as a tedious exercise for the reader
    private List<K> makeKey(K[] keys) {
        List<K> key = new ArrayList<>(keys.length);
        for (K k : keys) {
            key.add(k);
        }
        return key;
    }
}
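For what it's worth, a quick usage sketch of the class above (the var-args key parts are just flattened into a List internally):

MultilevelMap<String, Integer> mm = new MultilevelMap<>();
mm.put(42, "level1", "level2", "level3");            // stored under the key [level1, level2, level3]
Integer hit = mm.get("level1", "level2", "level3");  // 42
Integer miss = mm.get("level1", "level2");           // null: a shorter path is a different key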
A Trie, as far as I understand it, is similar but opposite. It presents an interface of Map<S,V>, but internally is implemented as a variable-depth Map<K,Map<K, ... V>> where the Ks are consecutive affixes of S, such that if you concatenate all of the Ks between the top of the tree and V, you get the S you used as the key. The class above presents an interface of (very approximately) Map<K, K, ..., V>, but internally is a Map<List<K>, V>.
You can nest maps (and other containers) to arbitrary depth. This puts a Map in another Map in another ... to a total depth of 10:
private Map<String, Object> nest(int levelsLeft, Map<String, Object> parent) {
    if (levelsLeft > 0) {
        parent.put("key" + levelsLeft,
                nest(levelsLeft - 1, new HashMap<String, Object>()));
    }
    return parent;
}

// from somewhere else
Map<String, Object> nested = nest(10, new HashMap<String, Object>());
((Map<String, Object>) nested.get("key10")).get("key9"); // goes all the way down to "key1"
Note that the price for declaring a Map<String, Object> is that, whenever you access something via get(), you need to cast it to whatever it actually is to be able to use it as something more specific than an Object.
Sounds like you either need a Multimap or a Trie.

a hashmap with multiple keys to value?

I have a question about hashmaps with multiple keys to value. Let's say I have (key / value )
1/a, 1/b, 1/3, 2/aa, 2/bb, 2/cc.
Would this work?
If it does, is there a way to loop through it and display all values for only key 1 or key 2?
You can use a map with lists as values, e.g.:
HashMap<Integer, List<String>> myMap = new HashMap<Integer, List<String>>();
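A small, self-contained sketch of that approach using the numbers from the question, including a loop that prints only the values stored under key 1:

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiValueDemo {
    public static void main(String[] args) {
        Map<Integer, List<String>> myMap = new HashMap<>();
        myMap.computeIfAbsent(1, k -> new ArrayList<>()).add("a");
        myMap.computeIfAbsent(1, k -> new ArrayList<>()).add("b");
        myMap.computeIfAbsent(2, k -> new ArrayList<>()).add("aa");
        myMap.computeIfAbsent(2, k -> new ArrayList<>()).add("bb");

        // display all values for key 1 only
        for (String value : myMap.getOrDefault(1, Collections.emptyList())) {
            System.out.println(value);
        }
    }
}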
java.util.HashMap does not allow you to map multiple values to a single key. You want to use one of Guava's Multimaps. Read through the interface to determine which implementation is suitable for you.
A simple MultiMap would look something like this skeleton:
public class MultiMap<K, V>
{
    private Map<K, List<V>> map = new HashMap<K, List<V>>();

    public MultiMap()
    {
        // Define constructors
    }

    public void put(K key, V value)
    {
        List<V> list = map.get(key);
        if (list == null)
        {
            list = new ArrayList<V>();
            map.put(key, list);
        }
        list.add(value);
    }

    public List<V> get(K key)
    {
        return map.get(key);
    }

    public int getCount(K key)
    {
        return map.containsKey(key) ? map.get(key).size() : 0;
    }
}
It cannot directly implement Map<K,V> because put can't return the replaced element (you never replace). A full elaboration would define an interface MultiMap<K,V> and an implementation class, I've omitted that for brevity, as well as other methods you might want, such as V remove(K key) and V get(K key, int index)... and anything else you can think of that might be useful :-)
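Usage of the skeleton above would look roughly like this:

MultiMap<Integer, String> mm = new MultiMap<>();
mm.put(1, "a");
mm.put(1, "b");
mm.put(2, "aa");
System.out.println(mm.get(1));      // [a, b]
System.out.println(mm.getCount(2)); // 1
System.out.println(mm.get(3));      // null: no list has been created for that key yet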
Maps will handle multiple keys mapping to one value, since only the keys need to be unique:
Map<Key, Value>
However, one key mapping to multiple values requires a multimap, or a map structured as:
Map<Key, List<Value>>
Also, whatever you use as a key really should implement a good hashCode() method if you decide to use a HashMap and/or HashSet.
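To illustrate the hashCode() point, here is a minimal sketch of a value-based key class (a made-up example, not from the question): two keys built from the same field values are equal and hash to the same bucket, so they address the same entry.

import java.util.Objects;

final class CompositeKey {
    private final String name;
    private final int id;

    CompositeKey(String name, int id) {
        this.name = name;
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CompositeKey)) return false;
        CompositeKey other = (CompositeKey) o;
        return id == other.id && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, id); // consistent with equals()
    }
}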

Deduplicating HashMap Values

I'm wondering if anyone knows a good way to remove duplicate values in a LinkedHashMap. I have a LinkedHashMap with pairs of String and List<String>, and I'd like to remove duplicates across the ArrayLists. This is to improve some downstream processing.
The only thing I can think of is keeping a log of the processed values as I iterate over the HashMap and then through each ArrayList, checking whether I've encountered a value previously. This approach seems like it would degrade in performance as the list grows. Is there a way to pre-process the HashMap to remove duplicates from the ArrayList values?
To illustrate...if I have
String1>List1 (a, b, c)
String2>List2 (c, d, e)
I would want to remove "c" so there are no duplicates across the Lists within the HashMap.
I believe you could create a second HashMap that can be sorted by values (alphabetically, numerically), then do a single sweep through the sorted list, checking whether the current node is equal to the next one; if it is, remove the next one and keep the index the same, so it stays at the same position in the sorted list.
Or, when you are adding values, you can check whether the map already contains that value.
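A minimal sketch of that "check while adding" idea, assuming a shared Set tracks every value already stored in any of the lists (the names here are made up):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

class DedupOnInsert {
    private final Map<String, List<String>> map = new LinkedHashMap<>();
    private final Set<String> seen = new HashSet<>(); // values already stored in some list

    void add(String key, String value) {
        if (seen.add(value)) { // Set.add returns false if the value was already present
            map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
        }
    }
}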
Given your clarification, you want something like this:
class KeyValue {
    public String key;
    public Object value;

    KeyValue(String key, Object value) {
        this.key = key;
        this.value = value;
    }

    public boolean equals(Object o) {
        // boilerplate omitted, only use the value field for comparison
    }

    public int hashCode() {
        return value.hashCode();
    }
}

public void deduplicate(Map<String, List<Object>> items) {
    Set<KeyValue> kvs = new HashSet<KeyValue>();
    for (Map.Entry<String, List<Object>> entry : items.entrySet()) {
        String key = entry.getKey();
        List<Object> values = entry.getValue();
        for (Object value : values) {
            kvs.add(new KeyValue(key, value));
        }
        values.clear();
    }
    for (KeyValue kv : kvs) {
        items.get(kv.key).add(kv.value);
    }
}
Using a set will remove the duplicate values, and the KeyValue lets us preserve the original hash key while doing so. Add getters and setters or generics as needed. This will also modify the original map and the lists in it in place. I also think the performance for this should be O(n).
I'm assuming you need unique elements (contained in your Lists) and not unique Lists.
If you need no association between the Map's key and elements in its associated List, just add all of the elements individually to a Set.
If you add all of the Lists to a Set, it will contain the unique List objects, not unique elements of the Lists, so you have to add the elements individually.
(you can, of course, use addAll to make this easier)
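For example, a short sketch of that, assuming map is the LinkedHashMap<String, List<String>> from the question:

Set<String> uniqueElements = new HashSet<>();
for (List<String> list : map.values()) {
    uniqueElements.addAll(list); // duplicates across the lists collapse in the Set
}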
So, to clarify... You essentially have K, [V1...Vn] and you want unique values for all V?
public void add(HashMap<String, List<Object>> map, HashMap<Object, String> listObjects, String key, List<Object> values)
{
    List<Object> uniqueValues = new ArrayList<Object>();
    for (int i = 0; i < values.size(); i++)
    {
        if (!listObjects.containsKey(values.get(i)))
        {
            listObjects.put(values.get(i), key);
            uniqueValues.add(values.get(i));
        }
    }
    map.put(key, uniqueValues);
}
Essentially, we have another HashMap that stores the list values and we remove the non-unique ones when adding a list to the map. This also gives you the added benefit of knowing which list a value occurs in.
Using Guava:
Map<V, K> uniques = new LinkedHashMap<V, K>();
for (Map.Entry<K, List<V>> entry : mapWithDups.entrySet()) {
    for (V v : entry.getValue()) {
        uniques.put(v, entry.getKey());
    }
}
ListMultimap<K, V> uniqueLists = Multimaps.invertFrom(Multimaps.forMap(uniques),
        ArrayListMultimap.create());
Map<K, List<V>> uniqueListsMap = (Map) uniqueLists.asMap(); // only if necessary
which should preserve the ordering of the values, and keep them unique. If you can use a ListMultimap<K, V> for your result -- which you probably can -- then go for it, otherwise you can probably just cast uniqueLists.asMap() to a Map<K, List<V>> (with some abuse of generics, but with guaranteed type safety).
As others have noted, you could check the value as you add it, but if you have to do it after the fact:
static public void removeDups(Map<String, List<String>> in) {
    ArrayList<String> allValues = new ArrayList<String>();
    for (List<String> inValue : in.values())
        allValues.addAll(inValue);

    HashSet<String> uniqueSet = new HashSet<String>(allValues);
    for (String unique : uniqueSet)
        allValues.remove(unique);

    // anything left over was a duplicate
    HashSet<String> nonUniqueSet = new HashSet<String>(allValues);
    for (List<String> inValue : in.values())
        inValue.removeAll(nonUniqueSet);
}

public static void main(String[] args) {
    HashMap<String, List<String>> map = new HashMap<String, List<String>>();
    map.put("1", new ArrayList<String>(Arrays.asList("a", "b", "c", "a")));
    map.put("2", new ArrayList<String>(Arrays.asList("d", "e", "f")));
    map.put("3", new ArrayList<String>(Arrays.asList("a", "e")));
    System.out.println("Before");
    System.out.println(map);
    removeDups(map);
    System.out.println("After");
    System.out.println(map);
}
generates an output of
Before
{3=[a, e], 2=[d, e, f], 1=[a, b, c, a]}
After
{3=[], 2=[d, f], 1=[b, c]}

Java code to Prevent duplicate <Key,Value> pairs in HashMap/HashTable

I have a HashMap as below (assuming it has 10,0000 elements)
HashMap<String,String> hm = new HashMap<String,String>();
hm.put("John","1");
hm.put("Alex","2");
hm.put("Mike","3");
hm.put("Justin","4");
hm.put("Code","5");
==========================
Expected Output
==========================
Key = "John", Value = "1"
Key = "Alex", Value = "2"
Key = "Mike", Value = "3"
Key = "Justin", Value = "4"
Key = "Code", Value = "5"
===========================
I need Java code to prevent the addition of duplicate <Key, Value> pairs in a HashMap, such that the conditions below are satisfied:
1> hm.put("John","1"); is not accepted/added again in the Map
2> hm.put("John","2"); is not accepted/added again in the Map
Hope it's clear.
Java code provided will be appreciated. (A generic solution is needed, since I can add any duplicate to the existing map.)
You can wrap HashMap in a class which delegates put, get, and the other methods you use from HashMap. This approach is wasteful but safe, since it doesn't depend on the internal implementation of HashMap or AbstractMap. The code below illustrates delegating put and get:
public class Table {
    protected java.util.HashMap<String, Integer> map =
            new java.util.HashMap<String, Integer>();

    public Integer get(String key) { return map.get(key); }

    public Integer put(String key, Integer value) {
        if (map.containsKey(key)) {
            // implement the logic you need here.
            // You might want to return `value` to indicate
            // that no changes applied
            return value;
        } else {
            return map.put(key, value);
        }
    }

    // other methods go here
}
Another option is to make a class which extends HashMap and depend on its internal implementation. The Java 1.6 sources show that put is called internally only from putAll, so you can simply override the put method:
public class Table extends java.util.HashMap<String, Integer> {
    public Integer put(String key, Integer value) {
        if (containsKey(key)) {
            // implement the logic you need here.
            // You might want to return `value` to indicate
            // that no changes applied
            return value;
        } else {
            return super.put(key, value);
        }
    }
}
Another option, similar to the first, is to add a utility method to the class that contains the HashMap instance and call that method wherever you need to put something into your map:
public final Integer putToMap(String key, Integer value) {
    if (this.map.containsKey(key)) {
        return value;
    } else {
        return this.map.put(key, value);
    }
}
This is an "inline" equivalent of checking manually.
I note that you clarify the question by suggesting you might have "100000000 elements". You still won't have duplicates in the HashMap, because, as two other posters have pointed out, you can't get duplicate keys in a Map. I'm still not sure we understand the question, though, as it's not at all clear how you expected to generate the block titled "Output", or what you intend to do with it.
This may be an old question, but I thought I'd share my experience with it. As others pointed out, you can't have duplicate keys in a HashMap. By default HashMap will not allow this, but there are cases where you can end up with two or more keys that are almost alike, which you would not accept but HashMap will. For example, the following code defines a HashMap that takes an array of integers as a key, then adds the same entry three times:
HashMap<int[], Integer> map1 = new HashMap<>();
int[] arr = new int[]{1,2,3};
map1.put(arr, 4);
map1.put(arr, 4);
map1.put(arr, 4);
At this point, the HashMap did not allow duplicating the key, and map1.size() will return 1. However, if you add the elements without creating the array first, things will be different:
HashMap<int[], Integer> map2 = new HashMap<>();
map2.put(new int[]{4,5,6}, 6);
map2.put(new int[]{4,5,6}, 6);
map2.put(new int[]{4,5,6}, 6);
This way, the HashMap will add all three new elements, so map2.size() will return 3 and not 1 as expected.
The explanation is that with the first map I created the object arr once and tried to add the same object three times, which HashMap does not allow by default, so only the last usage is considered. With the second map, however, I create a new array object every time. The three objects created are different and separate, even though they hold the same data, which is why HashMap accepted them as different keys (arrays do not override equals() or hashCode(), so they compare by identity).
Bottom line: you don't need to prevent HashMap from adding duplicate keys, because it won't by design. However, you do have to watch how you define those keys, because the fault may be on your side.
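As a sketch of one way around that pitfall: use a key type with value-based equals()/hashCode(), for example a List<Integer> instead of an int[], and the duplicate puts collapse as expected:

import java.util.HashMap;
import java.util.List;

public class ListKeyDemo {
    public static void main(String[] args) {
        HashMap<List<Integer>, Integer> map = new HashMap<>();
        map.put(List.of(4, 5, 6), 6);
        map.put(List.of(4, 5, 6), 6);
        map.put(List.of(4, 5, 6), 6);
        // Lists compare by content, so all three puts hit the same key
        System.out.println(map.size()); // prints 1
    }
}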
List<String> keys = new ArrayList<String>();   // 1,000,000 keys
List<String> values = new ArrayList<String>(); // 1,000,000 values
Map<String, String> map = new HashMap<String, String>();
int i = 0;
for (String key : keys) {
    String returnedValue = map.put(key, values.get(i));
    if (returnedValue != null) {
        map.put(key, returnedValue);
        System.out.println("Duplicate key entered with a new value, so reverting to the old value for key = " + key + ", new value = " + values.get(i));
    }
    i++;
}
Unfortunately, that is just the way Map works.
The easiest workaround is to remove any pre-existing key and its value by calling hm.remove() first, like this:
for (String name : names) {
    hm.remove(name);
    hm.put(name, uri.getQueryParameter(name));
}
And if you don't use a for loop just call it like this:
hm.remove("John");
hm.put("John","1");
hm.remove("Alex");
hm.put("Alex","2");
hm.remove("Mike");
hm.put("Mike","3");
And so on ...
Note that even if you write the same key with values multiple times, you will still end up with a unique set of pairs. You can check that by iterating, or by calling hm.size().
if (hm.put("John", "1") != null)
{
    // "John" was already a key in the map. The sole value for this key is now "1".
}
List<Object> yourElements = new ... // 10000000
for (Object o : yourElements) {
    // 'o.key' stands for however your element exposes the key it should be stored under
    if (myMap.get(o.key) == null) {
        myMap.put(o.key, o);
    }
}
