Time complexity of TreeMultimap operations - java

What is the time complexity of put(key, value), get(key) in TreeMultimap?
It isn't mentioned in the documentation:
http://google-collections.googlecode.com/svn/trunk/javadoc/com/google/common/collect/TreeMultimap.html

Check on grepcode:
@Override public boolean put(K key, V value) {
    return super.put(key, value);
}
super is com.google.common.collect.AbstractMultimap.
public boolean put(@Nullable K key, @Nullable V value) {
    Collection<V> collection = getOrCreateCollection(key);
    if (collection.add(value)) {
        totalSize++;
        return true;
    } else {
        return false;
    }
}
The data structure that drives this is:
private transient Map<K, Collection<V>> map;
The outer map is a TreeMap, which you can verify by tracing the constructors.
createCollection is abstract:
protected abstract Collection<V> createCollection();
And the implementation uses TreeSet:
@Override SortedSet<V> createCollection() {
    return (valueComparator == null)
            ? new TreeSet<V>() : new TreeSet<V>(valueComparator);
}
Therefore put is:
a get into a TreeMap, and
a put into either a TreeMap or a TreeSet.
Both are O(log n), so TreeMultimap's put is O(log n) as well (and get is likewise a single TreeMap lookup, also O(log n)).
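The same two steps can be sketched with plain JDK collections, a TreeMap of TreeSets, which makes the two logarithmic operations explicit (this is an illustrative stand-in, not Guava's actual implementation):

```java
import java.util.TreeMap;
import java.util.TreeSet;

public class TreeMultimapSketch {
    // Mirrors TreeMultimap's put: one TreeMap lookup plus one TreeSet insert,
    // both O(log n). Returns true if the (key, value) pair was new.
    static <K, V> boolean put(TreeMap<K, TreeSet<V>> map, K key, V value) {
        return map.computeIfAbsent(key, k -> new TreeSet<>()).add(value);
    }

    public static void main(String[] args) {
        TreeMap<String, TreeSet<Integer>> map = new TreeMap<>();
        put(map, "b", 2);
        put(map, "a", 3);
        put(map, "a", 1);
        System.out.println(map); // {a=[1, 3], b=[2]} -- keys and values in sorted order
    }
}
```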

How to add all integers for duplicates elements in HashMap?

I have the following HashMap:
Map<String, Integer> map = new HashMap<>();
How can I sum up the integers for duplicate Strings? Or is there a better way to do it, e.g. using a Set?
for example, if I add these elements:
car 100
TV 140
car 5
charger 10
TV 10
I want the list to have:
car 105
TV 150
charger 10
I believe your question is: how do I put key/value pairs into a map in a way that, for a duplicate key, combines the new value with the existing one rather than replacing it?
Java has a Map method specifically for this purpose:
map.merge(key, value, (v, n) -> v + n);
This will add the value if the key isn't in the map. Otherwise it'll replace the current value with the sum of the current and new values.
The merge method was introduced in Java 8.
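Running the question's sample data through merge, for instance (a LinkedHashMap is used here only so the output order is predictable; class and method names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SumDuplicates {
    // merge adds the value for a new key, or combines via Integer::sum for a duplicate
    static Map<String, Integer> sum(String[][] rows) {
        Map<String, Integer> map = new LinkedHashMap<>(); // keeps first-insertion order
        for (String[] row : rows) {
            map.merge(row[0], Integer.parseInt(row[1]), Integer::sum);
        }
        return map;
    }

    public static void main(String[] args) {
        String[][] input = {{"car", "100"}, {"TV", "140"}, {"car", "5"},
                            {"charger", "10"}, {"TV", "10"}};
        System.out.println(sum(input)); // {car=105, TV=150, charger=10}
    }
}
```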
First of all, you cannot have duplicate keys in a Map.
But if I understood correctly what you want, the code below may help you:
if (map.containsKey(key))
    map.put(key, map.get(key) + newValue);
else
    map.put(key, newValue);
For Java 8 and higher
You may just want to use the Map#merge method. It is the easiest way possible. If the key does not exist, it will add it; if it does exist, it will perform the merge operation.
map.merge("car", 100, Integer::sum);
map.merge("car", 20, Integer::sum);
System.out.println(map); // {car=120}
When you add "TV" for the second time, the first value (140) will be overwritten, because you cannot have duplicate keys in a Map implementation. If you want to increment the value, you need to check whether the key "TV" already exists and then increment/add the value.
For example:
if (map.containsKey(key)) {
    value += map.get(key);
}
map.put(key, value);
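The same check-then-put pattern can be written more compactly with getOrDefault (Java 8+; the class and helper names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class GetOrDefaultSum {
    // add value to whatever is already stored for key (0 if absent)
    static void add(Map<String, Integer> map, String key, int value) {
        map.put(key, map.getOrDefault(key, 0) + value);
    }

    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        add(map, "TV", 140);
        add(map, "TV", 10);
        System.out.println(map.get("TV")); // 150
    }
}
```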
HashMap doesn't allow duplicate keys!
You can extend the HashMap class (Java >= 8):
public class MyHashMap2 extends HashMap<String, Integer> {
    @Override
    public Integer put(String key, Integer value) {
        return merge(key, value, (v, n) -> v + n);
    }

    public static void main(String[] args) throws java.lang.Exception {
        MyHashMap2 list3 = new MyHashMap2();
        list3.put("TV", 10);
        list3.put("TV", 20);
        System.out.println(list3);
    }
}
Or you can wrap a HashMap and override the put method to add the new value to the previous one.
HashMap<String, Integer> list = new HashMap<>();
list.put("TV", 10);
list.put("TV", 20);
System.out.println(list);
MyHashMap list2 = new MyHashMap();
list2.put("TV", 10);
list2.put("TV", 20);
System.out.println(list2);
//OUTPUT:
//{TV=20}
//MyHashMap [List={TV=30}]
public class MyHashMap implements Map<String, Integer> {
    HashMap<String, Integer> list = new HashMap<>();

    public MyHashMap() {
        super();
    }

    @Override
    public int size() {
        return list.size();
    }

    @Override
    public boolean isEmpty() {
        return list.isEmpty();
    }

    @Override
    public boolean containsKey(Object key) {
        return list.containsKey(key);
    }

    @Override
    public boolean containsValue(Object value) {
        return list.containsValue(value);
    }

    @Override
    public Integer get(Object key) {
        return list.get(key);
    }

    @Override
    public Integer put(String key, Integer value) {
        if (list.containsKey(key))
            list.put(key, list.get(key) + value);
        else
            list.put(key, value);
        return value;
    }

    @Override
    public Integer remove(Object key) {
        return list.remove(key);
    }

    @Override
    public void putAll(Map<? extends String, ? extends Integer> m) {
        list.putAll(m);
    }

    @Override
    public void clear() {
        list.clear();
    }

    @Override
    public Set<String> keySet() {
        return list.keySet();
    }

    @Override
    public Collection<Integer> values() {
        return list.values();
    }

    @Override
    public Set<java.util.Map.Entry<String, Integer>> entrySet() {
        return list.entrySet();
    }

    @Override
    public String toString() {
        return "MyHashMap [list=" + list + "]";
    }
}
You can try the code here: https://ideone.com/Wl4Arb

JAVA - Ordered HashMap Implementation with change key name function

I am trying to create a user interface backed by a HashMap. Users can change values and rename keys without disturbing the order of the keys. I searched and found LinkedHashMap, which kept the order of the keys in most cases. But when I remove a key and add it back after renaming it, it is always appended at the end. So I subclassed LinkedHashMap and added a changeKeyName() function.
Now it works (in my case), but I was wondering if it could be improved and made foolproof. I only overrode the functions I was using. What other functions have to be overridden to make it complete?
Thanks in advance.
Here is the code:
private static class OrderedHashMap<K, V> extends LinkedHashMap<K, V> {
    ArrayList<K> keys = new ArrayList<K>();

    @Override
    public V put(K key, V value) {
        if (!keys.contains(key))
            keys.add(key);
        return super.put(key, value);
    }

    @Override
    public V remove(Object key) {
        keys.remove(key);
        return super.remove(key);
    }

    @Override
    public Set<K> keySet() {
        LinkedHashSet<K> keys = new LinkedHashSet<K>();
        for (K key : this.keys) {
            keys.add(key);
        }
        return keys;
    }

    public void changeKeyName(K oldKeyName, K newKeyName) {
        int index = keys.indexOf(oldKeyName);
        keys.add(index, newKeyName);
        keys.remove(keys.get(index + 1));
        V value = super.get(oldKeyName);
        super.remove(oldKeyName);
        super.put(newKeyName, value);
    }

    @Override
    public Set<Map.Entry<K, V>> entrySet() {
        final OrderedHashMap<K, V> copy = this;
        LinkedHashSet<Map.Entry<K, V>> keys = new LinkedHashSet<Map.Entry<K, V>>();
        for (final K key : this.keys) {
            final V value = super.get(key);
            keys.add(new Map.Entry<K, V>() {
                @Override
                public K getKey() {
                    return key;
                }

                @Override
                public V getValue() {
                    return value;
                }

                @Override
                public V setValue(V value) {
                    return copy.put(getKey(), value);
                }
            });
        }
        return keys;
    }
}
EDIT: I think the "why" wasn't clear enough. Let's say we added the keys below.
{"key1":"value1"},
{"key2":"value2"},
{"key3":"value3"},
{"key4":"value4"}
And for example I want to change the key name of "key2". But as this is also a user interface, the order of the keys should stay the same.
I did some research and found out that, apart from removing the key and re-putting the new key name with the same value, nothing can be done. So if we do that and change "key2" to "key2a":
{"key1":"value1"},
{"key3":"value3"},
{"key4":"value4"},
{"key2a":"value2"}
And what I want is this:
{"key1":"value1"},
{"key2a":"value2"},
{"key3":"value3"},
{"key4":"value4"}
So I just kept the keys in an ArrayList and returned them when the entrySet() and keySet() methods are called.
Have you considered simply using the TreeMap class instead of a custom subclass of LinkedHashMap? It will maintain order if you implement the Comparable interface on the keys.
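For instance, a small sketch (the key names follow the question; the helper method is mine) showing that a TreeMap automatically keeps a renamed key in sorted position:

```java
import java.util.TreeMap;

public class TreeMapRename {
    // rename = remove the old key and re-insert under the new key;
    // TreeMap re-sorts, so "key2a" lands between "key1" and "key3"
    static <V> void rename(TreeMap<String, V> map, String oldKey, String newKey) {
        V value = map.remove(oldKey);
        map.put(newKey, value);
    }

    public static void main(String[] args) {
        TreeMap<String, String> map = new TreeMap<>();
        map.put("key1", "value1");
        map.put("key2", "value2");
        map.put("key3", "value3");
        rename(map, "key2", "key2a");
        System.out.println(map); // {key1=value1, key2a=value2, key3=value3}
    }
}
```

Note this only preserves the visible order because the keys happen to sort that way; TreeMap orders by comparison, not by insertion.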
If you want to be able to change keys without affecting the hashing function in the collection where the value is stored, try a custom class such as:
private class VariableKeyMap<K, V> {
    // values live under a stable internal id; user-visible keys map to ids
    private LinkedHashMap<Integer, V> myCollection = new LinkedHashMap<Integer, V>();
    private HashMap<K, Integer> aliases = new HashMap<K, Integer>();
    int id = 0;

    public void addEntry(K key, V value) {
        id += 1;
        aliases.put(key, id);
        myCollection.put(id, value);
    }

    public V getValue(K key) {
        return myCollection.get(aliases.get(key));
    }
    ...
}
Then you can update your key alias without affecting where the value is actually stored:
public void changeKey(K oldKey, K newKey) {
    int currentId = aliases.get(oldKey);
    aliases.remove(oldKey);
    aliases.put(newKey, currentId);
}

Use LinkedHashMap to implement LRU cache

I was trying to implement a LRU cache using LinkedHashMap.
In the documentation of LinkedHashMap (http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html), it says:
Note that insertion order is not affected if a key is re-inserted into the map.
But when I do the following puts
public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private int size;

    public static void main(String[] args) {
        LRUCache<Integer, Integer> cache = LRUCache.newInstance(2);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.put(1, 1);
        cache.put(3, 3);
        System.out.println(cache);
    }

    private LRUCache(int size) {
        super(size, 0.75f, true);
        this.size = size;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > size;
    }

    public static <K, V> LRUCache<K, V> newInstance(int size) {
        return new LRUCache<K, V>(size);
    }
}
The output is
{1=1, 3=3}
Which indicates that the re-insertion did affect the order.
Does anybody have an explanation?
As pointed out by Jeffrey, you are using access order. When you created the LinkedHashMap, the third parameter specifies how the order is changed:
"true for access-order, false for insertion-order"
For more detailed implementation of LRU, you can look at this
http://www.programcreek.com/2013/03/leetcode-lru-cache-java/
But you aren't using insertion order, you're using access order.
order of iteration is the order in which its entries were last
accessed, from least-recently accessed to most-recently (access-order)
...
Invoking the put or get method results in an access to the
corresponding entry
So this is the state of your cache as you modify it:
LRUCache<Integer, Integer> cache = LRUCache.newInstance(2);
cache.put(1, 1); // { 1=1 }
cache.put(2, 2); // { 1=1, 2=2 }
cache.put(1, 1); // { 2=2, 1=1 }
cache.put(3, 3); // { 1=1, 3=3 }
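The trace above can be reproduced directly with an anonymous subclass (the factory method name is mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderTrace {
    // access-order LinkedHashMap capped at maxSize entries
    static LinkedHashMap<Integer, Integer> newCache(int maxSize) {
        return new LinkedHashMap<Integer, Integer>(maxSize, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        LinkedHashMap<Integer, Integer> cache = newCache(2);
        cache.put(1, 1); // {1=1}
        cache.put(2, 2); // {1=1, 2=2}
        cache.put(1, 1); // {2=2, 1=1}  (put counts as an access)
        cache.put(3, 3); // {1=1, 3=3}  (eldest entry, key 2, is evicted)
        System.out.println(cache); // {1=1, 3=3}
    }
}
```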
Here is my implementation using LinkedHashMap in access order. It moves the most recently accessed element to the end of the iteration order, which only incurs O(1) overhead, because the underlying entries are organized in a doubly-linked list while also being indexed by the hash function. So the get/put operations (and finding the eldest entry) all cost O(1).
class LRUCache extends LinkedHashMap<Integer, Integer> {
    private int maxSize;

    public LRUCache(int capacity) {
        super(capacity, 0.75f, true);
        this.maxSize = capacity;
    }

    // return -1 on a miss
    public int get(int key) {
        Integer v = super.get(key);
        return v == null ? -1 : v;
    }

    public void put(int key, int value) {
        super.put(key, value);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return this.size() > maxSize; // must override this when used as a fixed-size cache
    }
}
Technically, LinkedHashMap has the following constructor, which lets us set the access order to true/false. If it is false, it keeps the insertion order:
LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)
(Constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.)
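A quick demonstration of the two ordering modes side by side (class name is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderModes {
    public static void main(String[] args) {
        // third constructor argument: false = insertion order, true = access order
        Map<Integer, String> insertion = new LinkedHashMap<>(16, 0.75f, false);
        Map<Integer, String> access = new LinkedHashMap<>(16, 0.75f, true);

        insertion.put(1, "a"); insertion.put(2, "b");
        insertion.get(1); // no effect on iteration order

        access.put(1, "a"); access.put(2, "b");
        access.get(1); // moves key 1 to the end of the iteration order

        System.out.println(insertion); // {1=a, 2=b}
        System.out.println(access);    // {2=b, 1=a}
    }
}
```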
Following is a simple implementation of an LRU cache:
class LRUCache {
    private LinkedHashMap<Integer, Integer> linkHashMap;

    public LRUCache(int capacity) {
        linkHashMap = new LinkedHashMap<Integer, Integer>(capacity, 0.75F, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity;
            }
        };
    }

    public void put(int key, int value) {
        linkHashMap.put(key, value);
    }

    public int get(int key) {
        return linkHashMap.getOrDefault(key, -1);
    }
}
I used the following code and it works!
I have taken the window size to be 4, but any value can be taken.
For insertion order:
1: Check if the key is present.
2: If yes, then remove it (using lhm.remove(key)).
3: Add the new key/value pair.
For access order:
No need to remove keys; the put and get statements do everything automatically.
This code is for ACCESS ORDER:
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheDemo {
    public static void main(String args[]) {
        LinkedHashMap<String, String> lhm = new LinkedHashMap<String, String>(4, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > 4;
            }
        };
        lhm.put("test", "test");
        lhm.put("test1", "test1");
        lhm.put("1", "abc");
        lhm.put("test2", "test2");
        lhm.put("1", "abc");
        lhm.put("test3", "test3");
        lhm.put("test4", "test4");
        lhm.put("test3", "test3");
        lhm.put("1", "abc");
        lhm.put("test1", "test1");
        System.out.println(lhm);
    }
}
I also implemented an LRU cache, with a small change in the code. I have tested it and it works correctly as an LRU cache.
package com.first.misc;

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCachDemo {
    public static void main(String aa[]) {
        LRUCache<String, String> lruCache = new LRUCache<>(3);
        lruCache.cacheable("test", "test");
        lruCache.cacheable("test1", "test1");
        lruCache.cacheable("test2", "test2");
        lruCache.cacheable("test3", "test3");
        lruCache.cacheable("test4", "test4");
        lruCache.cacheable("test", "test");
        System.out.println(lruCache.toString());
    }
}

class LRUCache<K, T> {
    private Map<K, T> cache;
    private int windowSize;

    public LRUCache(final int windowSize) {
        this.windowSize = windowSize;
        this.cache = new LinkedHashMap<K, T>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, T> eldest) {
                return size() > windowSize;
            }
        };
    }

    // put data in the cache
    public void cacheable(K key, T data) {
        // if the key already exists, remove it and re-add it so it becomes
        // the most recently used; removeEldestEntry evicts the eldest entry
        // once the window size is exceeded
        if (cache.containsKey(key)) {
            cache.remove(key);
        }
        cache.put(key, data);
    }

    // eviction functionality is handled by removeEldestEntry above
    @Override
    public String toString() {
        return "LRUCache{" +
                "cache=" + cache.toString() +
                ", windowSize=" + windowSize +
                '}';
    }
}

A sorted ComputingMap?

How can I construct a SortedMap on top of Guava's computing map (or vice versa)? I want the sorted map keys as well as computing values on-the-fly.
The simplest is probably to use a ConcurrentSkipListMap and the memoizer idiom (see JCiP), rather than relying on the pre-built unsorted types from MapMaker. An example that you could use as a basis is a decorator implementation.
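A minimal sketch of that idea, using computeIfAbsent as a modern stand-in for the JCiP FutureTask-based memoizer (the class name is mine):

```java
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.function.Function;

public class SortedMemoizer<K extends Comparable<? super K>, V> {
    // ConcurrentSkipListMap keeps keys sorted and is safe for concurrent use
    private final ConcurrentSkipListMap<K, V> cache = new ConcurrentSkipListMap<>();
    private final Function<K, V> compute;

    public SortedMemoizer(Function<K, V> compute) {
        this.compute = compute;
    }

    public V get(K key) {
        // note: unlike ConcurrentHashMap, the skip-list map does not guarantee
        // the function runs only once under contention, so it should be pure
        return cache.computeIfAbsent(key, compute);
    }

    public ConcurrentSkipListMap<K, V> view() {
        return cache;
    }

    public static void main(String[] args) {
        SortedMemoizer<Integer, Integer> squares = new SortedMemoizer<>(n -> n * n);
        squares.get(3);
        squares.get(1);
        squares.get(2);
        System.out.println(squares.view()); // {1=1, 2=4, 3=9} -- computed on demand, sorted
    }
}
```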
Maybe you can do something like this. It's not a complete implementation, just a sample to convey the idea.
public class SortedComputingMap<K, V> extends TreeMap<K, V> {
    private Function<K, V> function;
    private int maxSize;

    public SortedComputingMap(int maxSize, Function<K, V> function) {
        this.function = function;
        this.maxSize = maxSize;
    }

    @Override
    public V put(K key, V value) {
        throw new UnsupportedOperationException();
    }

    @Override
    public void putAll(Map<? extends K, ? extends V> map) {
        throw new UnsupportedOperationException();
    }

    @Override
    public V get(Object key) {
        @SuppressWarnings("unchecked")
        K k = (K) key;
        V tmp = super.get(key);
        if (tmp == null) {
            tmp = function.apply(k); // compute the missing value...
            super.put(k, tmp);       // ...and remember it
        }
        if (size() > maxSize)
            pollFirstEntry();
        return tmp;
    }

    public static void main(String[] args) {
        Map<Integer, Long> sortedMap = new SortedComputingMap<Integer, Long>(3,
                new Function<Integer, Long>() {
                    @Override
                    public Long apply(Integer n) {
                        Long fact = 1L;
                        while (n != 0)
                            fact *= n--;
                        return fact;
                    }
                });
        sortedMap.get(12);
        sortedMap.get(1);
        sortedMap.get(2);
        sortedMap.get(5);
        System.out.println(sortedMap.entrySet());
    }
}
If you need thread safety, this could be tricky; but if you don't, I'd recommend something close to Emil's suggestion, but using a ForwardingSortedMap rather than extending TreeMap directly.

Searching in a TreeMap (Java)

I need to search in a map of maps and return the keys this element belongs to.
I think this implementation is very slow; can you help me optimize it?
I need to use TreeSet, and I can't use contains because it uses compareTo, and equals/compareTo are implemented in an incompatible way that I can't change.
(Sorry for my bad English.)
Map<Key, Map<SubKey, Set<Element>>> m = new TreeMap<>();

public String getKeys(Element element) {
    for (Entry<Key, Map<SubKey, Set<Element>>> e : m.entrySet()) {
        Map<SubKey, Set<Element>> mapSubKey = e.getValue();
        for (Entry<SubKey, Set<Element>> e2 : mapSubKey.entrySet()) {
            Set<Element> setElements = e2.getValue();
            for (Element elem : setElements)
                if (elem.equals(element))
                    return "Key: " + e.getKey() + " SubKey: " + e2.getKey();
        }
    }
    return null; // not found
}
The problem here is that the keys and values are backwards.
Maps allow one to efficiently find a value (which would be Key and SubKey) associated with a key (Element, in this example).
Going backwards is slow.
There are bi-directional map implementations, like Google Collections' BiMap, that support fast access in both directions, but that would mean replacing the TreeMap. Otherwise, maintain two maps, one for each direction.
If you can't use contains, and you're stuck using a Map of Maps, then your only real option is to iterate, as you are doing.
Alternatively, you could keep a reverse map of Element to Key/SubKey in a separate map, which would make reverse lookups faster.
Also, if you're not sure that a given Element can exist in only one place, you might want to store and retrieve a List<Element> instead of just an Element.
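A sketch of that reverse-map suggestion (all names are illustrative, and Strings stand in for the Key/SubKey/Element types):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class TwoLevelIndex {
    // forward: key -> subKey -> elements; reverse: element -> [key, subKey]
    private final Map<String, Map<String, Set<String>>> forward = new TreeMap<>();
    private final Map<String, String[]> reverse = new HashMap<>();

    public void add(String key, String subKey, String element) {
        forward.computeIfAbsent(key, k -> new TreeMap<>())
               .computeIfAbsent(subKey, s -> new HashSet<>())
               .add(element);
        reverse.put(element, new String[]{key, subKey}); // maintained alongside the forward map
    }

    // O(1) average instead of iterating the whole map of maps
    public String getKeys(String element) {
        String[] loc = reverse.get(element);
        return loc == null ? null : "Key: " + loc[0] + " SubKey: " + loc[1];
    }

    public static void main(String[] args) {
        TwoLevelIndex idx = new TwoLevelIndex();
        idx.add("A", "x", "elem1");
        System.out.println(idx.getKeys("elem1")); // Key: A SubKey: x
    }
}
```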
Using TreeMap and TreeSet works properly only when compareTo and equals are implemented compatibly with each other. Furthermore, when searching in a Map, only the search by key is efficient (O(log n) for a TreeMap); when searching for a value, the complexity becomes linear.
There is a way to optimize the search in the inner TreeSet, though: implement your own Comparator for the Element type. This way you get your own comparison logic without changing the Element class itself.
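For example, with a comparator that defines ordering (and thus set membership) differently from equals; here Strings stand in for the Element type:

```java
import java.util.Comparator;
import java.util.TreeSet;

public class ComparatorSearch {
    public static void main(String[] args) {
        // the TreeSet consults the comparator, not equals, for contains();
        // a case-insensitive ordering is used here as an illustration
        TreeSet<String> set = new TreeSet<>(Comparator.comparing(String::toLowerCase));
        set.add("Apple");
        set.add("banana");
        System.out.println(set.contains("APPLE")); // true: O(log n) via the comparator
        System.out.println(set.contains("pear"));  // false
    }
}
```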
Here is a bidirectional TreeMap (or a bijection over TreeMaps).
It ties two TreeMaps together.
Each one's inverse constant field points to the other TreeMap. Any change on one TreeMap is automatically reflected on its inverse.
As a result, each value is unique.
public class BiTreeMap<K, V> extends TreeMap<K, V> {

    public final BiTreeMap<V, K> inverse;

    private BiTreeMap(BiTreeMap<V, K> inverse) {
        this.inverse = inverse;
    }

    public BiTreeMap() {
        inverse = new BiTreeMap<V, K>(this);
    }

    public BiTreeMap(Map<? extends K, ? extends V> m) {
        inverse = new BiTreeMap<V, K>(this);
        putAll(m);
    }

    public BiTreeMap(Comparator<? super K> comparator) {
        super(comparator);
        inverse = new BiTreeMap<V, K>(this);
    }

    public BiTreeMap(Comparator<? super K> comparatorK, Comparator<? super V> comparatorV) {
        super(comparatorK);
        inverse = new BiTreeMap<V, K>(this, comparatorV);
    }

    private BiTreeMap(BiTreeMap<V, K> inverse, Comparator<? super K> comparatorK) {
        super(comparatorK);
        this.inverse = inverse;
    }

    @Override
    public V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException();
        }
        V oldValue = super.put(key, value);
        if (oldValue != null && inverse._compareKeys(value, oldValue) != 0) {
            inverse._remove(oldValue);
        }
        K inverseOldKey = inverse._put(value, key);
        if (inverseOldKey != null && _compareKeys(key, inverseOldKey) != 0) {
            super.remove(inverseOldKey);
        }
        return oldValue;
    }

    private int _compareKeys(K k1, K k2) {
        Comparator<? super K> c = comparator();
        if (c == null) {
            Comparable<? super K> ck1 = (Comparable<? super K>) k1;
            return ck1.compareTo(k2);
        } else {
            return c.compare(k1, k2);
        }
    }

    private V _put(K key, V value) {
        return super.put(key, value);
    }

    @Override
    public V remove(Object key) {
        V value = super.remove(key);
        inverse._remove(value);
        return value;
    }

    private V _remove(Object key) {
        return super.remove(key);
    }

    @Override
    public void putAll(Map<? extends K, ? extends V> map) {
        for (Map.Entry<? extends K, ? extends V> e : map.entrySet()) {
            K key = e.getKey();
            V value = e.getValue();
            put(key, value);
        }
    }

    @Override
    public void clear() {
        super.clear();
        inverse._clear();
    }

    private void _clear() {
        super.clear();
    }

    @Override
    public boolean containsValue(Object value) {
        return inverse.containsKey(value);
    }

    @Override
    public Map.Entry<K, V> pollFirstEntry() {
        Map.Entry<K, V> entry = super.pollFirstEntry();
        inverse._remove(entry.getValue());
        return entry;
    }

    @Override
    public Map.Entry<K, V> pollLastEntry() {
        Map.Entry<K, V> entry = super.pollLastEntry();
        inverse._remove(entry.getValue());
        return entry;
    }
}
