HashMap using lists as a buffer - Java

I need to create a hashmap that can store multiple values for one key. I know multimaps could do this, but I also need to keep those value lists to a specific length. Each key should store a list of the latest n values, i.e. once a list reaches length n and I add another value, the first value added drops off and the target length is maintained.
I started with the code below and want to change it so I can store/add to a list for a specific key.
public static void main(final String args[]) throws Exception {
final int maxSize = 4;
final LinkedHashMap<String, String> cache = new LinkedHashMap<String, String>() {
@Override
protected boolean removeEldestEntry(final Map.Entry eldest) {
return size() > maxSize;
}
};
cache.put("A", "A");
System.out.println(cache);
cache.put("B", "A");
System.out.println(cache);
cache.put("C", "A");
System.out.println(cache);
cache.put("D", "A");
System.out.println(cache);
cache.put("E", "A");
System.out.println(cache);
cache.put("F", "A");
System.out.println(cache);
cache.put("G", "A");
}
Output:
{A=A}
{A=A, B=A}
{A=A, B=A, C=A}
{A=A, B=A, C=A, D=A}
{B=A, C=A, D=A, E=A}
{C=A, D=A, E=A, F=A}
I tried changing it to something like this but can't get it working (Python guy here who is getting started with Java):
public LinkedHashMap filteroutliers(final String arg, final long arg2) throws Exception{
final int bufferSize = 5;
final LinkedHashMap<Integer, ArrayList<Double>> bufferList = new LinkedHashMap<Integer, ArrayList<Double>>(){
@Override
protected boolean removeEldestEntry(final Map.Entry eldest){
return size() < bufferSize;
}
};
return bufferList;
}

You can extend HashMap and write your custom map something like this. Here I maintain a queue to store the keys, so when the limit is reached you can remove the earliest key-value pair (FIFO):
class CacheMap<K, V> extends HashMap<K, V> {
private static final long serialVersionUID = 1L;
private int MAX_SIZE;
private Queue<K> queue = new LinkedList<>();
public CacheMap(int capacity) {
super();
MAX_SIZE = capacity;
}
@Override
public V put(K key, V value) {
if (super.size() < MAX_SIZE) {
queue.add(key);
} else {
super.remove(queue.poll());
}
super.put(key, value);
return value;
}
}
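The answer above bounds the number of keys rather than the number of values per key. For the per-key bound the question asks about, a minimal sketch could look like the class below, assuming one ArrayDeque per key (the class and method names are made up for illustration):
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class BoundedMultiValueMap<K, V> {
    private final int maxPerKey;
    private final Map<K, Deque<V>> map = new HashMap<>();

    BoundedMultiValueMap(int maxPerKey) {
        this.maxPerKey = maxPerKey;
    }

    // add a value for the key; once the list holds maxPerKey values, the oldest one drops off
    void add(K key, V value) {
        Deque<V> values = map.computeIfAbsent(key, k -> new ArrayDeque<>());
        if (values.size() == maxPerKey) {
            values.pollFirst();
        }
        values.addLast(value);
    }

    Deque<V> get(K key) {
        return map.get(key);
    }
}
For example, with maxPerKey = 4, adding five values for the same key leaves only the latest four in that key's list.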

Related

How to iterate over a list of Map of Strings and add to another list if map contains matching elements on a key value?

I have a List of Map<String, String> that I want to iterate over, find the maps that share a common value for a key, and add them to another list.
I am confused about what should go inside the if block to get my expected output. I am looking for a comparator-type call but I couldn't find one anywhere.
for (int i = 0; i < list.size() - 1; i++) {
if (list.get(i).get("Journal ID").equals(list.get(i+1).get("Journal ID")))
// ???
}
}
I was using this method to sort the list of Maps. I am expecting something like this:
public Comparator<Map<String, String>> mapComparator = new Comparator<>() {
public int compare(Map<String, String> m1, Map<String, String> m2) {
return m1.get("Journal ID").compareTo(m2.get("Journal ID"));
}
};
Collections.sort(list, mapComparator);
// input and the expected output
my List = [{Journal ID=123, featureID=312}, {Journal ID=123, featureID=313}, {Journal ID=134, featureID=314}, {Journal ID=123, featureID=1255}]
expected output is the one matching on "Journal ID": [{Journal ID=123, featureID=312}, {Journal ID=123, featureID=313}, {Journal ID=123, featureID=1255}]
One approach is to construct a second map which aggregates all the maps.
It will contain every key from all the maps; as the value it keeps a list of each key's values with counters. The implementation can be improved, but the main point is the approach: once you have the aggregate map, it is straightforward to transform it into whatever structure is needed.
public class TestEqMap {
public static void main(String[] args)
{
Map<String, String> m1 = Map.of("a","a1","b","b1");
Map<String, String> m2 = Map.of("a","a1","b","b2");
Map<String, String> m3 = Map.of("a","a2","b","b2");
Map<String, String> m4 = Map.of("a","a1","b","b2");
Map<String, String> m5 = Map.of("a","a3","b","b2");
AggMap amap = new AggMap();
amap.addMap(m1);
amap.addMap(m2);
amap.addMap(m3);
amap.addMap(m4);
amap.addMap(m5);
amap.map.forEach((k,v)->System.out.println("key="+k+"\n"+v));
}
static class AggMap
{
public Map<String, ListItem> map = new HashMap<String,ListItem>();
public void addMap(Map<String,String> m)
{
for(String key: m.keySet())
{
if(this.map.containsKey(key))
{
this.map.get(key).addItem(m.get(key));
}
else
{
ListItem li = new ListItem();
li.addItem(m.get(key));
this.map.put(key, li);
}
}
}
}
static class ListItem
{
public List<Item> li = new ArrayList<Item>();
public ListItem() {};
public void addItem(String str)
{
for(Item i: this.li)
{
if(i.val.equals(str))
{
i.count++;
return;
}
}
this.li.add(new Item(str));
}
public String toString()
{
StringBuffer sb= new StringBuffer();
this.li.forEach(i->sb.append(i+"\n"));
return sb.toString();
}
}
static class Item
{
public String val;
public int count=1;
public Item(String val)
{
this.val = val;
}
public String toString()
{
return "val="+this.val+" count="+this.count;
}
}
}
Output:
key=a
val=a1 count=3
val=a2 count=1
val=a3 count=1
key=b
val=b1 count=1
val=b2 count=4

How to write a custom Comparator for TreeMap in Java?

I want to store key-value pairs in a TreeMap and sort the entries based on the key, as per the following logic:
Sort by the length of the key. If the lengths of two keys are the same, then sort them alphabetically. For example, for the following key-value pairs:
IBARAKI MitoCity
TOCHIGI UtunomiyaCity
GUNMA MaehashiCity
SAITAMA SaitamaCity
CHIBA ChibaCity
TOKYO Sinjyuku
KANAGAWA YokohamaCity
The expected output is like this.
CHIBA : ChibaCity
GUNMA : MaehashiCity
TOKYO : Sinjyuku
IBARAKI : MitoCity
SAITAMA : SaitamaCity
TOCHIGI : UtunomiyaCity
KANAGAWA : YokohamaCity
You can pass the Comparator as a parameter to the TreeMap constructor.
According to the documentation it is used for keys only:
/**
 * Constructs a new, empty tree map, ordered according to the given
 * comparator. All keys inserted into the map must be <em>mutually
 * comparable</em> by the given comparator: {@code comparator.compare(k1,
 * k2)} must not throw a {@code ClassCastException} for any keys
 * {@code k1} and {@code k2} in the map. If the user attempts to put
 * a key into the map that violates this constraint, the {@code put(Object
 * key, Object value)} call will throw a
 * {@code ClassCastException}.
 *
 * @param comparator the comparator that will be used to order this map.
 *        If {@code null}, the {@linkplain Comparable natural
 *        ordering} of the keys will be used.
 */
public TreeMap(Comparator<? super K> comparator) {
this.comparator = comparator;
}
This way you can pass a comparator by the length of your key like this:
new TreeMap<>(Comparator.comparingInt(String::length).thenComparing(Comparator.naturalOrder()))
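For completeness, a short self-contained example using the data from the question (the class name KeyLengthSortDemo is just for illustration):
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

public class KeyLengthSortDemo {
    public static void main(String[] args) {
        // order keys by length first, then alphabetically for equal lengths
        Comparator<String> byLengthThenAlpha =
                Comparator.comparingInt(String::length).thenComparing(Comparator.naturalOrder());
        Map<String, String> map = new TreeMap<>(byLengthThenAlpha);
        map.put("IBARAKI", "MitoCity");
        map.put("TOCHIGI", "UtunomiyaCity");
        map.put("GUNMA", "MaehashiCity");
        map.put("SAITAMA", "SaitamaCity");
        map.put("CHIBA", "ChibaCity");
        map.put("TOKYO", "Sinjyuku");
        map.put("KANAGAWA", "YokohamaCity");
        map.forEach((k, v) -> System.out.println(k + " : " + v));
        // CHIBA, GUNMA, TOKYO, IBARAKI, SAITAMA, TOCHIGI, KANAGAWA, in that order
    }
}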
You need to write your own comparator for this and use it in TreeMap, e.g.:
public class StringComparator implements Comparator<String> {
@Override
public int compare(String s1, String s2) {
return s1.length() == s2.length() ? s1.compareTo(s2) : s1.length() - s2.length();
}
public static void main(String[] args) {
Map<String, String> map = new TreeMap<>(new StringComparator());
map.put("IBARAKI", "MitoCity");
map.put("TOCHIGI", "UtunomiyaCity");
map.put("GUNMA", "MaehashiCity");
map.put("SAITAMA", "SaitamaCity");
map.put("CHIBA", "ChibaCity");
map.put("TOKYO", "Sinjyuku");
map.put("KANAGAWA", "YokohamaCity");
System.out.println(map);
}
}
This does not handle null values but you can add the handling if you are expecting null values in your use case.
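If null keys were expected (the comparator above would throw a NullPointerException for them), one option is to wrap it with Comparator.nullsFirst; a sketch, assuming the StringComparator above is on the classpath:
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

public class NullSafeKeyDemo {
    public static void main(String[] args) {
        // nullsFirst orders a null key before all non-null keys and
        // delegates to StringComparator for everything else
        Map<String, String> map = new TreeMap<>(Comparator.nullsFirst(new StringComparator()));
        map.put(null, "none");
        map.put("CHIBA", "ChibaCity");
        map.put("TOKYO", "Sinjyuku");
        System.out.println(map); // {null=none, CHIBA=ChibaCity, TOKYO=Sinjyuku}
    }
}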
Normally you would create a comparator for the keys of the map. But because you want to print the values too, you can compare whole entries instead:
Comparator<Map.Entry<String, String>> c = new Comparator<Map.Entry<String, String>>() {
@Override
public int compare(Map.Entry<String, String> o1, Map.Entry<String, String> o2) {
int q = Integer.compare(o1.getKey().length(), o2.getKey().length());
return q != 0 ? q : o1.getKey().compareTo(o2.getKey());
}
};
Then you can use this comparator in sorting:
map.entrySet().stream().sorted(c).forEach(System.out::println);
You can do this as follows.
public static void main(String[] args) {
Map<String, String> map = new TreeMap<>(new CustomSortComparator());
map.put("IBARAKI", "MitoCity");
map.put("TOCHIGI", "UtunomiyaCity");
map.put("GUNMA", "MaehashiCity");
map.put("SAITAMA", "SaitamaCity");
map.put("CHIBA", "ChibaCity");
map.put("TOKYO", "Sinjyuku");
map.put("KANAGAWA", "YokohamaCity");
System.out.println(map);
}
The CustomSortComparator has been defined as follows.
public class CustomSortComparator implements Comparator<String> {
@Override
public int compare(String o1, String o2) {
if (o1.length() > o2.length()) {
return 1;
}
if (o1.length() < o2.length()) {
return -1;
}
return returnCompareBytes(o1, o2);
}
private int returnCompareBytes(String key1, String key2) {
for (int i = 0; i < key1.length(); i++) {
if (key1.charAt(i) > key2.charAt(i)) {
return 1;
}
if (key1.charAt(i) < key2.charAt(i)) {
return -1;
}
}
return 0;
}
}
Instead of constructing the TreeMap directly, you can use this method to convert an existing Map:
public static Map<String, String> toTreeMap(Map<String, String> hashMap)
{
    // Create a new TreeMap whose keys are ordered by length, then alphabetically
    Map<String, String> treeMap = new TreeMap<>(new Comparator<String>() {
        public int compare(String o1, String o2)
        {
            if (o1.length() > o2.length()) {
                return 1;
            }
            if (o1.length() < o2.length()) {
                return -1;
            }
            return o1.compareTo(o2);
        }
    });
    for (Map.Entry<String, String> e : hashMap.entrySet()) {
        treeMap.put(e.getKey(), e.getValue());
    }
    return treeMap;
}
You can define the Comparator<String> you need in the constructor call to the TreeMap:
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;
public class Main {
static final Map<String, String> map =
new TreeMap<String, String> (new Comparator<String>() {
@Override
public int compare(String o1, String o2) {
int diff_length = o1.length() - o2.length();
if (diff_length != 0) return diff_length;
return o1.compareTo(o2);
}
});
public static final void main(String[] args) {
map.put("IBARAKI", "MitoCity");
map.put("TOCHIGI", "UtunomiyaCity");
map.put("GUNMA", "MaehashiCity");
map.put("SAITAMA", "SaitamaCity");
map.put("CHIBA", "ChibaCity");
map.put("TOKYO", "Sinjyuku");
map.put("KANAGAWA", "YokohamaCity");
System.out.println(map);
}
}

How to add all integers for duplicates elements in HashMap?

I have the following HashMap:
Map<String, Integer> map = new HashMap<>();
How can I sum up all the integers for duplicate Strings? Or is there a better way to do it using a Set?
for example, if I add these elements:
car 100
TV 140
car 5
charger 10
TV 10
I want the list to have:
car 105
TV 150
charger 10
I believe your question is: how do I put key/value pairs into a map in a way that changes the value rather than replacing it, for the same key.
Java has a Map method specifically for this purpose:
map.merge(key, value, (v, n) -> v + n);
This will add the value if the key isn't in the map. Otherwise it'll replace the current value with the sum of the current and new values.
The merge method was introduced in Java 8.
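Applied to the data from the question, a minimal sketch (the class name MergeSumDemo is just for illustration; HashMap does not guarantee any particular iteration order):
import java.util.HashMap;
import java.util.Map;

public class MergeSumDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        // each merge either inserts the value or adds it to the existing one
        map.merge("car", 100, Integer::sum);
        map.merge("TV", 140, Integer::sum);
        map.merge("car", 5, Integer::sum);
        map.merge("charger", 10, Integer::sum);
        map.merge("TV", 10, Integer::sum);
        System.out.println(map); // e.g. {charger=10, car=105, TV=150}
    }
}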
First of all, you cannot add duplicate keys to a map.
But if I understood what you want, the code below may help you:
if (map.containsKey(key))
map.put(key, map.get(key) + newValue);
else
map.put(key, newValue);
For Java 8 and higher
You may just want to use the Map#merge method. It is the easiest way possible. If the key does not exist, it will add it, if it does exist, it will perform the merge operation.
map.merge("car", 100, Integer::sum);
map.merge("car", 20, Integer::sum);
System.out.println(map); // {car=120}
When you add "TV" for the second time, the first value (140) will be override because you cannot have duplicated keys on Map implementation. If you want to increment the value you will need to check if the key "TV" already exists and then increment/add the value.
For example:
if (map.containsKey(key)) {
value += map.get(key);
}
map.put(key, value);
HashMap doesn't save duplicate keys!
You can extend the HashMap class (Java >= 8):
public class MyHashMap2 extends HashMap<String, Integer>{
@Override
public Integer put(String key, Integer value) {
return merge(key, value, (v, n) -> v + n);
}
public static void main (String[] args) throws java.lang.Exception
{
MyHashMap2 list3=new MyHashMap2();
list3.put("TV", 10);
list3.put("TV", 20);
System.out.println(list3);
}
}
Or you can wrap (aggregate) a HashMap and replace the put method so it adds the new value to the previous one:
HashMap<String, Integer> list = new HashMap<>();
list.put("TV", 10);
list.put("TV", 20);
System.out.println(list);
MyHashMap list2 = new MyHashMap();
list2.put("TV", 10);
list2.put("TV", 20);
System.out.println(list2);
//OUTPUT:
//{TV=20}
//MyHashMap [list={TV=30}]
public class MyHashMap implements Map<String, Integer>{
HashMap<String, Integer> list = new HashMap<>();
public MyHashMap() {
super();
}
@Override
public int size() {
return list.size();
}
@Override
public boolean isEmpty() {
return list.isEmpty();
}
@Override
public boolean containsKey(Object key) {
return list.containsKey(key);
}
@Override
public boolean containsValue(Object value) {
return list.containsValue( value);
}
@Override
public Integer get(Object key) {
return list.get(key);
}
@Override
public Integer put(String key, Integer value) {
if(list.containsKey(key))
list.put(key, list.get(key)+value);
else
list.put(key, value);
return value;
}
@Override
public Integer remove(Object key) {
return list.remove(key);
}
@Override
public void putAll(Map<? extends String, ? extends Integer> m) {
list.putAll(m);
}
@Override
public void clear() {
list.clear();
}
@Override
public Set<String> keySet() {
return list.keySet();
}
@Override
public Collection<Integer> values() {
return list.values();
}
@Override
public Set<java.util.Map.Entry<String, Integer>> entrySet() {
return list.entrySet();
}
@Override
public String toString() {
return "MyHashMap [list=" + list + "]";
}
}
You can try the code here: https://ideone.com/Wl4Arb

what is the best way to get a sub HashMap based on a list of Keys?

I have a HashMap and I would like to get a new HashMap that contains only the elements from the first HashMap whose key K belongs to a specific List.
I could loop through all the keys and fill up a new HashMap, but I was wondering if there is a more efficient way to do it?
Thanks
With Java 8 streams there is a functional (elegant) solution, where keys is the list of keys to keep and map is the source Map:
keys.stream()
.filter(map::containsKey)
.collect(Collectors.toMap(Function.identity(), map::get));
Complete example:
List<Integer> keys = new ArrayList<>();
keys.add(2);
keys.add(3);
keys.add(42); // this key is not in the map
Map<Integer, String> map = new HashMap<>();
map.put(1, "foo");
map.put(2, "bar");
map.put(3, "fizz");
map.put(4, "buz");
Map<Integer, String> res = keys.stream()
.filter(map::containsKey)
.collect(Collectors.toMap(Function.identity(), map::get));
System.out.println(res.toString());
Prints: {2=bar, 3=fizz}
EDIT: added a filter for keys that are absent from the map.
Yes there is a solution:
Map<K,V> myMap = ...;
List<K> keysToRetain = ...;
myMap.keySet().retainAll(keysToRetain);
The retainAll operation on the key Set updates the underlying map. See the Javadoc.
Edit
Be aware that this solution modifies the original Map.
With the help of Guava:
Suppose you have a Map<String, String> and want the submap whose values come from List<String> list.
Map<String, String> map = new HashMap<>();
map.put("1", "1");
map.put("2", "2");
map.put("3", "4");
final List<String> list = Arrays.asList("2", "4");
Map<String, String> subMap = Maps.filterValues(
map, Predicates.in(list));
Update / Note: As @assylias mentioned in the comments, contains() on a List is O(n), so if you have a large list this could have a big impact on performance.
On the other hand, HashSet.contains() is constant time, O(1), so if there is a possibility of having a Set instead of a List, this is a nicer approach (note that converting the List to a Set will cost O(n) anyway, so better not to convert :))
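If the filter collection can start out as a Set, the same Guava call keeps the O(1) lookups; a sketch along the lines of the snippet above (the class name GuavaSetFilterDemo is just for illustration):
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import com.google.common.base.Predicates;
import com.google.common.collect.Maps;

public class GuavaSetFilterDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("1", "1");
        map.put("2", "2");
        map.put("3", "4");
        // a Set gives O(1) membership checks inside the predicate
        Set<String> wanted = new HashSet<>(Arrays.asList("2", "4"));
        Map<String, String> subMap = Maps.filterValues(map, Predicates.in(wanted));
        System.out.println(subMap); // {2=2, 3=4}
    }
}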
If you have Map m1 and List keys, then try the following:
Map m2 = new HashMap(m1);
m2.keySet().retainAll(keys);
Depending on your usage, this may be a more efficient implementation
public class MapView implements Map {
    List ak;
    Map map;
    public MapView(Map map, List allowableKeys) {
        this.ak = allowableKeys;
        this.map = map;
    }
    public Object get(Object key) {
        if (!ak.contains(key)) return null;
        return map.get(key);
    }
    // remaining Map methods omitted for brevity
}
If your keys have an ordering, you can use a TreeMap.
Look at TreeMap.subMap().
It does not let you do this using a list, though; it works on a contiguous key range.
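A quick sketch of what subMap gives you for a key range (the class name SubMapDemo is just for illustration):
import java.util.TreeMap;

public class SubMapDemo {
    public static void main(String[] args) {
        TreeMap<Integer, String> map = new TreeMap<>();
        map.put(1, "foo");
        map.put(2, "bar");
        map.put(3, "fizz");
        map.put(4, "buz");
        // view of all entries with keys in [2, 4); changes to the view write through to the map
        System.out.println(map.subMap(2, 4)); // {2=bar, 3=fizz}
    }
}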
You could even grow your own:
public class FilteredMap<K, V> extends AbstractMap<K, V> implements Map<K, V> {
// The map I wrap.
private final Map<K, V> map;
// The filter.
private final Set<K> filter;
public FilteredMap(Map<K, V> map, Set<K> filter) {
this.map = map;
this.filter = filter;
}
@Override
public Set<Entry<K, V>> entrySet() {
// Make a new one to break the bond with the underlying map.
Set<Entry<K, V>> entries = new HashSet<>(map.entrySet());
Set<Entry<K, V>> remove = new HashSet<>();
for (Entry<K, V> entry : entries) {
if (!filter.contains(entry.getKey())) {
remove.add(entry);
}
}
entries.removeAll(remove);
return entries;
}
}
public void test() {
Map<String, String> map = new HashMap<>();
map.put("1", "One");
map.put("2", "Two");
map.put("3", "Three");
Set<String> filter = new HashSet<>();
filter.add("1");
filter.add("2");
Map<String, String> filtered = new FilteredMap<>(map, filter);
System.out.println(filtered);
}
If you're concerned about all of the copying, you could also grow a filtered Set and a filtered Iterator instead.
public interface Filter<T> {
public boolean accept(T t);
}
public class FilteredIterator<T> implements Iterator<T> {
// The Iterator
private final Iterator<T> i;
// The filter.
private final Filter<T> filter;
// The next.
private T next = null;
public FilteredIterator(Iterator<T> i, Filter<T> filter) {
this.i = i;
this.filter = filter;
}
@Override
public boolean hasNext() {
while (next == null && i.hasNext()) {
T n = i.next();
if (filter.accept(n)) {
next = n;
}
}
return next != null;
}
@Override
public T next() {
T n = next;
next = null;
return n;
}
}
public class FilteredSet<K> extends AbstractSet<K> implements Set<K> {
// The Set
private final Set<K> set;
// The filter.
private final Filter<K> filter;
public FilteredSet(Set<K> set, Filter<K> filter) {
this.set = set;
this.filter = filter;
}
@Override
public Iterator<K> iterator() {
return new FilteredIterator(set.iterator(), filter);
}
@Override
public int size() {
int n = 0;
Iterator<K> i = iterator();
while (i.hasNext()) {
i.next();
n += 1;
}
return n;
}
}
public class FilteredMap<K, V> extends AbstractMap<K, V> implements Map<K, V> {
// The map I wrap.
private final Map<K, V> map;
// The filter.
private final Filter<K> filter;
public FilteredMap(Map<K, V> map, Filter<K> filter) {
this.map = map;
this.filter = filter;
}
@Override
public Set<Entry<K, V>> entrySet() {
return new FilteredSet<>(map.entrySet(), new Filter<Entry<K, V>>() {
@Override
public boolean accept(Entry<K, V> t) {
return filter.accept(t.getKey());
}
});
}
}
public void test() {
Map<String, String> map = new HashMap<>();
map.put("1", "One");
map.put("2", "Two");
map.put("3", "Three");
Set<String> filter = new HashSet<>();
filter.add("1");
filter.add("2");
Map<String, String> filtered = new FilteredMap<>(map, new Filter<String>() {
@Override
public boolean accept(String t) {
return filter.contains(t);
}
});
System.out.println(filtered);
}
Instead of looking through all keys you could loop over the list and check if the HashMap contains a mapping. Then create a new HashMap with the filtered entries:
List<String> keys = Arrays.asList("a", "c", "e");
Map<String, String> old = new HashMap<>();
old.put("a", "aa");
old.put("b", "bb");
old.put("c", "cc");
old.put("d", "dd");
old.put("e", "ee");
// only use an initial capacity of keys.size() if you won't add
// additional entries to the map; anyway it's more of a micro optimization
Map<String, String> newMap = new HashMap<>(keys.size(), 1f);
for (String key : keys) {
    String value = old.get(key);
    if (value != null) newMap.put(key, value);
}
Copy the map and remove all keys not in the list:
Map map2 = new HashMap(map);
map2.keySet().retainAll(keysToKeep);
You can use the clone() method on the inner HashMap returned for the key.
Something like this:
import java.util.HashMap;
public class MyClone {
public static void main(String a[]) {
Map<String, HashMap<String, String>> hashMap = new HashMap<String, HashMap<String, String>>();
Map hashMapCloned = new HashMap<String, String>();
Map<String, String> insert = new HashMap<String, String>();
insert.put("foo", "bar");
hashMap.put("first", insert);
hashMapCloned.put("first", (HashMap<String, String>) hashMap.get("first").clone());
}
}
It may have some syntax errors because I haven't tested, but try something like that.
No, because HashMap doesn't maintain an order of its entries. You can use a TreeMap if you need a submap over some key range. Also, please look at this question; it seems to be along similar lines to yours.
You asked for a new HashMap. Since HashMap does not support structure sharing, there is no better approach than the obvious one. (I have assumed here that null cannot be a value).
Map<K, V> newMap = new HashMap<>();
for (K k : keys) {
V v = map.get(k);
if (v != null)
newMap.put(k, v);
}
If you don't absolutely require that the new object is a HashMap, you could create a new class (ideally extending AbstractMap<K, V>) representing a restricted view of the original Map. The class would have two private final fields:
Map<? extends K, ? extends V> originalMap;
Set<?> restrictedSetOfKeys;
The get method for the new Map would be something like this
@Override
public V get(Object k) {
if (!restrictedSetOfKeys.contains(k))
return null;
return originalMap.get(k);
}
Notice that it is better if the restrictedSetOfKeys is a Set rather than a List because if it is a HashSet you would typically have O(1) time complexity for the get method.

Use LinkedHashMap to implement LRU cache

I was trying to implement a LRU cache using LinkedHashMap.
In the documentation of LinkedHashMap (http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html), it says:
Note that insertion order is not affected if a key is re-inserted into the map.
But when I do the following puts
public class LRUCache<K, V> extends LinkedHashMap<K, V> {
private int size;
public static void main(String[] args) {
LRUCache<Integer, Integer> cache = LRUCache.newInstance(2);
cache.put(1, 1);
cache.put(2, 2);
cache.put(1, 1);
cache.put(3, 3);
System.out.println(cache);
}
private LRUCache(int size) {
super(size, 0.75f, true);
this.size = size;
}
@Override
protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
return size() > size;
}
public static <K, V> LRUCache<K, V> newInstance(int size) {
return new LRUCache<K, V>(size);
}
}
The output is
{1=1, 3=3}
Which indicates that the re-insertion did affect the order.
Does anybody know any explanation?
As pointed out by Jeffrey, you are using access order. When you created the LinkedHashMap, the third parameter specifies how the order is maintained:
"true for access-order, false for insertion-order"
For a more detailed implementation of an LRU cache, you can look at this:
http://www.programcreek.com/2013/03/leetcode-lru-cache-java/
But you aren't using insertion order, you're using access order.
order of iteration is the order in which its entries were last
accessed, from least-recently accessed to most-recently (access-order)
...
Invoking the put or get method results in an access to the
corresponding entry
So this is the state of your cache as you modify it:
LRUCache<Integer, Integer> cache = LRUCache.newInstance(2);
cache.put(1, 1); // { 1=1 }
cache.put(2, 2); // { 1=1, 2=2 }
cache.put(1, 1); // { 2=2, 1=1 }
cache.put(3, 3); // { 1=1, 3=3 }
Here is my implementation using LinkedHashMap in access order. It moves the most recently accessed element to the end of the iteration order, which only incurs O(1) overhead because the underlying entries are organized in a doubly-linked list while also being indexed by the hash function. So the get/put/top_newest_one operations all cost O(1).
class LRUCache extends LinkedHashMap<Integer, Integer>{
private int maxSize;
public LRUCache(int capacity) {
super(capacity, 0.75f, true);
this.maxSize = capacity;
}
//return -1 if miss
public int get(int key) {
Integer v = super.get(key);
return v == null ? -1 : v;
}
public void put(int key, int value) {
super.put(key, value);
}
@Override
protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
return this.size() > maxSize; //must override it if used in a fixed cache
}
}
Technically, LinkedHashMap has the following constructor, which lets us set the access order to true/false. If it is false, it keeps the insertion order.
LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)
(Constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.)
Following is a simple implementation of an LRU cache:
class LRUCache {
private LinkedHashMap<Integer, Integer> linkHashMap;
public LRUCache(int capacity) {
linkHashMap = new LinkedHashMap<Integer, Integer>(capacity, 0.75F, true) {
@Override
protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
return size() > capacity;
}
};
}
public void put(int key, int value) {
linkHashMap.put(key, value);
}
public int get(int key) {
return linkHashMap.getOrDefault(key, -1);
}
}
I used the following code and it works!
I have taken the window size to be 4, but any value can be used.
For insertion order:
1: Check if the key is present.
2: If yes, then remove it (by using lhm.remove(key)).
3: Add the new key-value pair (see the sketch after this list).
For access order:
No need to remove keys; the put and get calls do everything automatically.
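A minimal sketch of the insertion-order variant described above (class and helper method names are just for illustration):
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheInsertionOrderDemo {
    public static void main(String[] args) {
        // insertion-order LinkedHashMap (accessOrder = false), bounded to 4 entries
        LinkedHashMap<String, String> lhm = new LinkedHashMap<String, String>(4, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > 4;
            }
        };
        put(lhm, "test", "test");
        put(lhm, "test1", "test1");
        put(lhm, "1", "abc");
        put(lhm, "test2", "test2");
        put(lhm, "1", "abc"); // re-inserted: removed first, so it moves to the end
        System.out.println(lhm); // {test=test, test1=test1, test2=test2, 1=abc}
    }

    // remove-then-put so that a re-inserted key counts as the most recently used
    static void put(LinkedHashMap<String, String> lhm, String key, String value) {
        lhm.remove(key);
        lhm.put(key, value);
    }
}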
This code is for ACCESS ORDER:
import java.util.LinkedHashMap;
import java.util.Map;
public class LRUCacheDemo {
public static void main(String args[]){
LinkedHashMap<String,String> lhm = new LinkedHashMap<String,String>(4,0.75f,true) {
@Override
protected boolean removeEldestEntry(Map.Entry<String,String> eldest) {
return size() > 4;
}
};
lhm.put("test", "test");
lhm.put("test1", "test1");
lhm.put("1", "abc");
lhm.put("test2", "test2");
lhm.put("1", "abc");
lhm.put("test3", "test3");
lhm.put("test4", "test4");
lhm.put("test3", "test3");
lhm.put("1", "abc");
lhm.put("test1", "test1");
System.out.println(lhm);
}
}
I also implemented an LRU cache with a little change in the code. I have tested it and it works perfectly as an LRU cache.
package com.first.misc;
import java.util.LinkedHashMap;
import java.util.Map;
public class LRUCachDemo {
public static void main(String aa[]){
LRUCache<String, String> lruCache = new LRUCache<>(3);
lruCache.cacheable("test", "test");
lruCache.cacheable("test1", "test1");
lruCache.cacheable("test2", "test2");
lruCache.cacheable("test3", "test3");
lruCache.cacheable("test4", "test4");
lruCache.cacheable("test", "test");
System.out.println(lruCache.toString());
}
}
class LRUCache<K, T>{
private Map<K,T> cache;
private int windowSize;
public LRUCache( final int windowSize) {
this.windowSize = windowSize;
this.cache = new LinkedHashMap<K, T>(){
@Override
protected boolean removeEldestEntry(Map.Entry<K, T> eldest) {
return size() > windowSize;
}
};
}
// put data in cache
public void cacheable(K key, T data){
// if the key already exists, remove it and add it again to make it the most recently used
// removeEldestEntry (above) evicts the eldest element once the window size is exhausted
if(cache.containsKey(key)){
cache.remove(key);
}
cache.put(key,data);
}
// eviction is handled by removeEldestEntry above
@Override
public String toString() {
return "LRUCache{" +
"cache=" + cache.toString() +
", windowSize=" + windowSize +
'}';
}
}
