I need to create a priority set/array based on:
public interface IListener
{
public Priority getPriority();
public enum Priority
{
HIGHEST,
HIGH,
NORMAL,
LOW,
LOWEST;
}
}
IListeners are stored in:
HashMap<Class<? extends IListener>, Set<IListener>> listeners = new HashMap<>();
I am looking to write a method that will always add an IListener in the first position after its Priority group.
Example:
Say the Set contains some IListeners in this order:
{ HIGHEST, HIGHEST, HIGH, HIGH, LOW, LOW, LOW, LOWEST }
Adding listener with Priority == HIGH would result in:
{ HIGHEST, HIGHEST, HIGH, HIGH, HIGH, LOW, LOW, LOW, LOWEST }
The third HIGH is the newly added one.
I know I could just iterate and add at the first "free slot", but the question is rather: does Java provide some good-looking (maybe better?) solution? Might be just for future reference.
As indicated in the comments, I don't think there is any collection in the JDK that exactly meets your requirements.
IListenerSet is an implementation of Set that meets your needs. The iterator always returns the elements in order of priority. If two elements have the same priority, they are returned in the order they were put into the set. The set supports addition and removal. The iterator supports the remove() method. The set cannot contain null, and throws a NullPointerException if you try to add null. The set cannot contain an IListener for which getPriority() returns null, and throws an IllegalArgumentException if you try to add such an element.
public final class IListenerSet<T extends IListener> extends AbstractSet<T> {
private final Map<IListener.Priority, Set<T>> map;
public IListenerSet() {
map = new EnumMap<>(IListener.Priority.class);
for (IListener.Priority p : IListener.Priority.values())
map.put(p, new LinkedHashSet<>());
}
public IListenerSet(Collection<? extends T> collection) {
this();
addAll(collection);
}
@Override
public int size() {
int size = 0;
for (Set<T> set : map.values())
size += set.size();
return size;
}
@Override
public boolean contains(Object o) {
if (!(o instanceof IListener))
return false;
IListener listener = (IListener) o;
IListener.Priority p = listener.getPriority();
return p != null && map.get(p).contains(listener);
}
@Override
public boolean add(T listener) {
IListener.Priority p = listener.getPriority();
if (p == null)
throw new IllegalArgumentException();
return map.get(p).add(listener);
}
@Override
public boolean remove(Object o) {
if (!(o instanceof IListener))
return false;
IListener listener = (IListener) o;
IListener.Priority p = listener.getPriority();
return p != null && map.get(p).remove(listener);
}
@Override
public void clear() {
for (Set<T> set : map.values())
set.clear();
}
@Override
public Iterator<T> iterator() {
return new Iterator<T>() {
private Iterator<T> iterator = map.get(IListener.Priority.values()[0]).iterator();
private int nextIndex = 1;
private Iterator<T> nextIterator = null;
@Override
public boolean hasNext() {
if (iterator.hasNext() || nextIterator != null)
return true;
IListener.Priority[] priorities = IListener.Priority.values();
while (nextIndex < priorities.length) {
Set<T> set = map.get(priorities[nextIndex++]);
if (!set.isEmpty()) {
nextIterator = set.iterator();
return true;
}
}
return false;
}
@Override
public T next() {
if (iterator.hasNext())
return iterator.next();
if (!hasNext())
throw new NoSuchElementException();
iterator = nextIterator;
nextIterator = null;
return iterator.next();
}
@Override
public void remove() {
iterator.remove();
}
};
}
}
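For illustration, usage could look like this (my own sketch, not part of the original answer; since IListener has a single abstract method, lambdas can stand in for listeners here):
IListenerSet<IListener> set = new IListenerSet<>();
IListener early = () -> IListener.Priority.HIGH; // added first
IListener late = () -> IListener.Priority.HIGH;  // same priority, added later
set.add(() -> IListener.Priority.LOWEST);
set.add(early);
set.add(late);
set.add(() -> IListener.Priority.HIGHEST);
// Iterates by priority, keeping insertion order within a priority:
// HIGHEST, HIGH (early), HIGH (late), LOWEST
for (IListener listener : set)
    System.out.println(listener.getPriority());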
An alternative approach is to use a TreeSet with a custom comparator and automatically assign generated ids to the elements added to the set, so that later elements always get a bigger id, which can be used in the comparison:
public class IListenerSet extends AbstractSet<IListener> {
private long maxId = 0;
private final Map<IListener, Long> ids = new HashMap<>();
private final Set<IListener> set = new TreeSet<>(new Comparator<IListener>() {
@Override
public int compare(IListener o1, IListener o2) {
int res = o1.getPriority().compareTo(o2.getPriority());
if(res == 0)
res = ids.get(o1).compareTo(ids.get(o2));
return res;
}
});
@Override
public Iterator<IListener> iterator() {
return new Iterator<IListener>() {
Iterator<IListener> it = set.iterator();
private IListener e;
@Override
public boolean hasNext() {
return it.hasNext();
}
@Override
public IListener next() {
this.e = it.next();
return e;
}
@Override
public void remove() {
it.remove();
ids.remove(e);
}
};
}
@Override
public int size() {
return set.size();
}
@Override
public boolean contains(Object o) {
return ids.containsKey(o);
}
@Override
public boolean add(IListener e) {
if(ids.get(e) != null)
return false;
// assign new id and store it in the internal map
ids.put(e, ++maxId);
return set.add(e);
}
@Override
public boolean remove(Object o) {
if(!ids.containsKey(o)) return false;
set.remove(o);
// also drop the id, after the set removal (the comparator needs the id while removing)
ids.remove(o);
return true;
}
@Override
public void clear() {
ids.clear();
set.clear();
}
}
Keep it easy:
You can combine several kinds of collections:
A LinkedHashSet allows you to store items ordered by insertion order (and with no repeated items).
A TreeMap allows you to map keys to values, ordering the entries according to the keys.
Thus, you can declare this combination:
TreeMap<IListener.Priority, LinkedHashSet<IListener>> listenersByPriority=new TreeMap<IListener.Priority, LinkedHashSet<IListener>>(new PriorityComparator());
... and encapsulate it in a proper abstraction to manage it:
public class ListenerManager
{
private final TreeMap<IListener.Priority, LinkedHashSet<IListener>> listenersByPriority=new TreeMap<IListener.Priority, LinkedHashSet<IListener>>();
private int size;
public void addListener(IListener listener)
{
synchronized (listenersByPriority)
{
LinkedHashSet<IListener> list=listenersByPriority.get(listener.getPriority());
if (list == null)
{
list=new LinkedHashSet<IListener>();
listenersByPriority.put(listener.getPriority(), list);
}
list.add(listener);
size++;
}
}
public Iterator<IListener> iterator()
{
synchronized (listenersByPriority)
{
List<IListener> output=new ArrayList<IListener>(size);
for (LinkedHashSet<IListener> list : listenersByPriority.values())
{
for (IListener listener : list)
{
output.add(listener);
}
}
return output.iterator();
}
}
}
When declaring a TreeMap, a specific Comparator implementation coupled to the key class is usually necessary, but it is not needed in this case, because enums are already comparable by their ordinal (thanks to Paul Boddington). And the ordinal of each enum constant is the position at which it is declared.
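For example, a short usage sketch (again not from the answer; listeners are lambdas for brevity) shows that iteration starts with the highest-priority group, because HIGHEST has the lowest ordinal:
ListenerManager manager = new ListenerManager();
manager.addListener(() -> IListener.Priority.NORMAL);
manager.addListener(() -> IListener.Priority.HIGHEST);
manager.addListener(() -> IListener.Priority.NORMAL);
Iterator<IListener> it = manager.iterator();
while (it.hasNext())
    System.out.println(it.next().getPriority()); // HIGHEST, NORMAL, NORMAL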
I've created a HashMap with .class objects for keys.
HashMap<Class<? extends MyObject>, Object> mapping = new HashMap<Class<? extends MyObject>, Object>();
This is all well and fine, but I'm getting strange behaviour that I can only attribute to strangeness with the hash function. Randomly during runtime, iterating through the HashMap will not hit every value; it will miss one or two. I think this may be due to the .class object not being final, and therefore changing, causing it to map to a different hash value. With a different hash value, the HashMap wouldn't be able to correctly correlate the key with the value, thus making it appear to have lost the value.
Am I correct that this is what is going on? How can I work around this? Is there a better way to accomplish this form of data structure?
Edit: I really thought I was onto something with the hash function thing, but I'll post my real code to try and figure this out. It may be a problem with my implementation of a multimap. I've been using it for quite some time and haven't noticed any issues until recently.
/**
* My own implementation of a map that maps to a List. If the key is not present, then
* the map adds a List with a single entry. Every subsequent addition to the key
* is appended to the List.
* @author
*
* @param <T> Key
* @param <K> Value
*/
public class MultiMap<T, K> implements Map<T, List<K>>, Serializable, Iterable<K> {
/**
*
*/
private static final long serialVersionUID = 5789101682525659411L;
protected HashMap<T, List<K>> set = new HashMap<T, List<K>>();
@Override
public void clear() {
set = new HashMap<T, List<K>>();
}
@Override
public boolean containsKey(Object arg0) {
return set.containsKey(arg0);
}
@Override
public boolean containsValue(Object arg0) {
boolean output = false;
for(Iterator<List<K>> iter = set.values().iterator();iter.hasNext();) {
List<K> searchColl = iter.next();
for(Iterator<K> iter2 = searchColl.iterator(); iter2.hasNext();) {
K value = iter2.next();
if(value == arg0) {
output = true;
break;
}
}
}
return output;
}
@Override
public Set<Entry<T, List<K>>> entrySet() {
Set<Entry<T, List<K>>> output = new HashSet<Entry<T,List<K>>>();
for(Iterator<T> iter1 = set.keySet().iterator(); iter1.hasNext();) {
T key = iter1.next();
for(Iterator<K> iter2 = set.get(key).iterator(); iter2.hasNext();) {
K value = iter2.next();
List<K> input = new ArrayList<K>();
input.add(value);
output.add(new AbstractMap.SimpleEntry<T,List<K>>(key, input));
}
}
return output;
}
@Override
public boolean isEmpty() {
return set.isEmpty();
}
@Override
public Set<T> keySet() {
return set.keySet();
}
@Override
public int size() {
return set.size();
}
@Override
public Collection<List<K>> values() {
Collection<List<K>> values = new ArrayList<List<K>>();
for(Iterator<T> iter1 = set.keySet().iterator(); iter1.hasNext();) {
T key = iter1.next();
values.add(set.get(key));
}
return values;
}
@Override
public List<K> get(Object key) {
return set.get(key);
}
@Override
public List<K> put(T key, List<K> value) {
return set.put(key, value);
}
public void putValue(T key, K value) {
if(set.containsKey(key)) {
set.get(key).add(value);
}
else {
List<K> setval = new ArrayList<K>();
setval.add(value);
set.put(key, setval);
}
}
@Override
public List<K> remove(Object key) {
return set.remove(key);
}
public K removeValue(Object value) {
K valueRemoved = null;
for(T key:this.keySet()) {
for(K val:this.get(key)) {
if(val.equals(value)) {
List<K> temp = this.get(key);
temp.remove(value);
valueRemoved = val;
this.put(key, temp);
}
}
}
return valueRemoved;
}
@Override
public void putAll(Map<? extends T, ? extends List<K>> m) {
for(Iterator<? extends T> iter = m.keySet().iterator(); iter.hasNext();) {
T key = iter.next();
set.put(key, m.get(key));
}
}
@Override
public Iterator<K> iterator() {
return new MultiMapIterator<K>(this);
}
}
Perhaps there is an issue with my iterator? I'll post that code as well.
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
public class MultiMapIterator<T> implements Iterator<T> {
private MultiMap <?, T> map;
private Iterator<List<T>> HashIter;
private Iterator<T> govIter;
private T value;
public MultiMapIterator(MultiMap<?, T> map) {
this.map = map;
HashIter = map.values().iterator();
if(HashIter.hasNext()) {
govIter = HashIter.next().iterator();
}
if(govIter.hasNext()) {
value = govIter.next();
}
}
@Override
public boolean hasNext() {
if (govIter.hasNext()) {
return true;
}
else if(HashIter.hasNext()) {
govIter = HashIter.next().iterator();
return this.hasNext();
}
else {
return false;
}
}
@Override
public T next() {
if(!this.hasNext()) {
throw new NoSuchElementException();
}
else {
value = govIter.next();
return value;
}
}
@Override
public void remove() {
map.remove(value);
}
}
Sorry for the long tracts of code. Thank you for spending time helping me with this.
You pull a value out of govIter in the constructor, but never return it.
Your iterator's remove method is completely wrong. You are iterating values, but calling map.remove, which removes by key. You simply want to call govIter.remove() (unless you need to avoid empty lists, in which case it's more complicated).
Your hasNext() method could also have problems depending on whether or not you allow empty List values in your multimap.
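Putting those three points together, a corrected iterator could look roughly like this (a sketch that assumes the multimap never stores empty lists; remove() deletes the current value from its list rather than removing a key from the map):
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class MultiMapIterator<T> implements Iterator<T> {
    private final Iterator<List<T>> listIter;
    private Iterator<T> valueIter = Collections.<T>emptyList().iterator();
    private Iterator<T> lastReturnedFrom; // the iterator that produced the last value

    public MultiMapIterator(MultiMap<?, T> map) {
        // Do not pre-read a value here; next() pulls values on demand.
        this.listIter = map.values().iterator();
    }

    @Override
    public boolean hasNext() {
        while (!valueIter.hasNext() && listIter.hasNext())
            valueIter = listIter.next().iterator();
        return valueIter.hasNext();
    }

    @Override
    public T next() {
        if (!hasNext())
            throw new NoSuchElementException();
        lastReturnedFrom = valueIter;
        return valueIter.next();
    }

    @Override
    public void remove() {
        if (lastReturnedFrom == null)
            throw new IllegalStateException();
        // Removes the current value from its list, not a key from the map.
        lastReturnedFrom.remove();
    }
}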
I would like to use a case-insensitive String as a HashMap key for the following reasons.
During initialization, my program creates the HashMap with user-defined Strings.
While processing an event (network traffic in my case), I might receive a String in a different case, but I should be able to locate the <key, value> pair in the HashMap, ignoring the case received from the traffic.
I've followed this approach
CaseInsensitiveString.java
public final class CaseInsensitiveString {
private String s;
public CaseInsensitiveString(String s) {
if (s == null)
throw new NullPointerException();
this.s = s;
}
public boolean equals(Object o) {
return o instanceof CaseInsensitiveString &&
((CaseInsensitiveString)o).s.equalsIgnoreCase(s);
}
private volatile int hashCode = 0;
public int hashCode() {
if (hashCode == 0)
hashCode = s.toUpperCase().hashCode();
return hashCode;
}
public String toString() {
return s;
}
}
LookupCode.java
node = nodeMap.get(new CaseInsensitiveString(stringFromEvent.toString()));
Because of this, I'm creating a new object of CaseInsensitiveString for every event. So, it might hit performance.
Is there any other way to solve this issue?
Map<String, String> nodeMap =
new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
That's really all you need.
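For example (a throwaway illustration; the key and value strings are made up):
Map<String, String> nodeMap = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
nodeMap.put("Node-A", "10.0.0.1");
System.out.println(nodeMap.get("NODE-a"));          // 10.0.0.1
System.out.println(nodeMap.containsKey("node-a"));  // true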
As suggested by Guido García in their answer here:
import java.util.HashMap;
public class CaseInsensitiveMap extends HashMap<String, String> {
@Override
public String put(String key, String value) {
return super.put(key.toLowerCase(), value);
}
// not @Override because that would require the key parameter to be of type Object
public String get(String key) {
return super.get(key.toLowerCase());
}
}
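Usage then looks like a plain map; a quick sketch (the keys here are made up):
CaseInsensitiveMap map = new CaseInsensitiveMap();
map.put("Node-A", "first");
System.out.println(map.get("NODE-a")); // first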
Or
https://commons.apache.org/proper/commons-collections/apidocs/org/apache/commons/collections4/map/CaseInsensitiveMap.html
One approach is to create a custom subclass of the Apache Commons AbstractHashedMap class, overriding the hash and isEqualKey methods to perform case-insensitive hashing and comparison of keys. (Note - I've never tried this myself ...)
This avoids the overhead of creating new objects each time you need to do a map lookup or update. And the common Map operations should be O(1) ... just like a regular HashMap.
And if you are prepared to accept the implementation choices they have made, the Apache Commons CaseInsensitiveMap does the work of customizing / specializing AbstractHashedMap for you.
But if O(log N) get and put operations are acceptable, a TreeMap with a case-insensitive string comparator is an option; e.g. using String.CASE_INSENSITIVE_ORDER.
And if you don't mind creating a new temporary String object each time you do a put or get, then Vishal's answer is just fine. (Though, I note that you wouldn't be preserving the original case of the keys if you did that ...)
Subclass HashMap and create a version that lower-cases the key on put and get (and probably the other key-oriented methods).
Or compose a HashMap into the new class and delegate everything to the map, but translate the keys.
If you need to keep the original key you could either maintain dual maps, or store the original key along with the value.
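Here is a minimal sketch of the second idea (a HashMap composed into a new class, keys translated for lookups, with the original key stored next to the value). The class and method names are illustrative, not from the answer:
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public final class CaseInsensitiveStore<V> {
    // The holder keeps the key exactly as it was first supplied.
    private static final class Holder<V> {
        final String originalKey;
        final V value;
        Holder(String originalKey, V value) {
            this.originalKey = originalKey;
            this.value = value;
        }
    }

    private final Map<String, Holder<V>> map = new HashMap<>();

    private static String norm(String key) {
        return key.toLowerCase(Locale.ROOT);
    }

    public V put(String key, V value) {
        Holder<V> old = map.put(norm(key), new Holder<>(key, value));
        return old == null ? null : old.value;
    }

    public V get(String key) {
        Holder<V> h = map.get(norm(key));
        return h == null ? null : h.value;
    }

    public String originalKey(String key) {
        Holder<V> h = map.get(norm(key));
        return h == null ? null : h.originalKey;
    }
}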
Two choices come to my mind:
You could directly use s.toUpperCase().hashCode() as the key of the Map.
You could use a TreeMap<String> with a custom Comparator that ignores case.
Otherwise, if you prefer your solution, instead of defining a new kind of String, I would rather implement a new Map with the required case-insensitivity functionality.
Wouldn't it be better to "wrap" the String in order to memoize the hashCode? In the normal String class, hashCode() is O(N) the first time and then O(1), since it is kept for future use.
public class HashWrap {
private final String value;
private final int hash;
public String get() {
return value;
}
public HashWrap(String value) {
this.value = value;
String lc = value.toLowerCase();
this.hash = lc.hashCode();
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o instanceof HashWrap) {
HashWrap that = (HashWrap) o;
return value.equalsIgnoreCase(that.value);
} else {
return false;
}
}
@Override
public int hashCode() {
return this.hash;
}
//might want to implement compare too if you want to use with SortedMaps/Sets.
}
This would allow you to use any hash table implementation in Java and to have an O(1) hashCode().
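For example, usage might look like this (lower-casing and hashing happen once per wrapper, and the cached hash is reused on every lookup):
Map<HashWrap, String> nodes = new HashMap<>();
nodes.put(new HashWrap("Router-1"), "10.0.0.1");
// A lookup with different casing still finds the entry:
String ip = nodes.get(new HashWrap("ROUTER-1")); // "10.0.0.1"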
You can use a HashingStrategy based Map from Eclipse Collections
HashingStrategy<String> hashingStrategy =
HashingStrategies.fromFunction(String::toUpperCase);
MutableMap<String, String> node = HashingStrategyMaps.mutable.of(hashingStrategy);
Note: I am a contributor to Eclipse Collections.
Based on other answers, there are basically two approaches: subclassing HashMap or wrapping String. The first one requires a little more work. In fact, if you want to do it correctly, you must override almost all methods (containsKey, entrySet, get, put, putAll and remove).
Anyway, it has a problem: if you want to avoid future issues, you must specify a Locale in String case operations, so you would have to create new methods (get(String, Locale), ...). Everything is easier and clearer when wrapping String:
public final class CaseInsensitiveString {
private final String s;
public CaseInsensitiveString(String s, Locale locale) {
this.s = s.toUpperCase(locale);
}
// equals, hashCode & toString, no need for memoizing hashCode
}
And well, about your worries on performance: premature optimization is the root of all evil :)
Instead of creating your own class to validate and store case-insensitive Strings as HashMap keys, you can use one of the following:
LinkedCaseInsensitiveMap wraps a LinkedHashMap, which is a Map based on a hash table and a linked list. Unlike LinkedHashMap, it doesn't allow inserting a null key. LinkedCaseInsensitiveMap preserves the original order as well as the original casing of keys, while allowing calls to functions like get and remove with any case.
Eg:
Map<String, Integer> linkedHashMap = new LinkedCaseInsensitiveMap<>();
linkedHashMap.put("abc", 1);
linkedHashMap.put("AbC", 2);
System.out.println(linkedHashMap);
Output: {AbC=2}
Maven dependency:
Spring Core is a Spring Framework module that also provides utility classes, including LinkedCaseInsensitiveMap.
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>5.2.5.RELEASE</version>
</dependency>
CaseInsensitiveMap is a hash-based Map, which converts keys to lower case before they are added or retrieved. Unlike TreeMap, CaseInsensitiveMap allows inserting a null key.
Eg:
Map<String, Integer> commonsHashMap = new CaseInsensitiveMap<>();
commonsHashMap.put("ABC", 1);
commonsHashMap.put("abc", 2);
commonsHashMap.put("aBc", 3);
System.out.println(commonsHashMap);
Output: {abc=3}
Dependency:
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-collections4</artifactId>
<version>4.4</version>
</dependency>
TreeMap is an implementation of NavigableMap, which means that it always sorts the entries after inserting, based on a given Comparator. Also, TreeMap uses a Comparator to find if an inserted key is a duplicate or a new one.
Therefore, if we provide a case-insensitive String Comparator, we'll get a case-insensitive TreeMap.
Eg:
Map<String, Integer> treeMap = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
treeMap.put("ABC", 1);
treeMap.put("ABc", 2);
treeMap.put("cde", 1);
System.out.println(treeMap);
Output: {ABC=2, cde=1}
You can use CollationKey objects instead of strings:
Locale locale = ...;
Collator collator = Collator.getInstance(locale);
collator.setStrength(Collator.SECONDARY); // Case-insensitive.
collator.setDecomposition(Collator.FULL_DECOMPOSITION);
CollationKey collationKey = collator.getCollationKey(stringKey);
hashMap.put(collationKey, value);
hashMap.get(collationKey);
Use Collator.PRIMARY to ignore accent differences.
The CollationKey API does not guarantee that hashCode() and equals() are implemented, but in practice you'll be using RuleBasedCollationKey, which does implement these. If you're paranoid, you can use a TreeMap instead, which is guaranteed to work at the cost of O(log n) time instead of O(1).
This is an adapter for HashMaps which I implemented for a recent project. It works in a way similar to what @SandyR does, but encapsulates the conversion logic so you don't manually convert strings to a wrapper object.
I used Java 8 features but with a few changes, you can adapt it to previous versions. I tested it for most common scenarios, except new Java 8 stream functions.
Basically it wraps a HashMap, directs all functions to it while converting strings to/from a wrapper object. But I had to also adapt KeySet and EntrySet because they forward some functions to the map itself. So I return two new Sets for keys and entries which actually wrap the original keySet() and entrySet().
One note: Java 8 has changed the implementation of the putAll method, which I could not find an easy way to override. So the current implementation may have degraded performance, especially if you use putAll() for a large data set.
Please let me know if you find a bug or have suggestions to improve the code.
package webbit.collections;
import java.util.*;
import java.util.function.*;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
public class CaseInsensitiveMapAdapter<T> implements Map<String,T>
{
private Map<CaseInsensitiveMapKey,T> map;
private KeySet keySet;
private EntrySet entrySet;
public CaseInsensitiveMapAdapter()
{
}
public CaseInsensitiveMapAdapter(Map<String, T> map)
{
this.map = getMapImplementation();
this.putAll(map);
}
@Override
public int size()
{
return getMap().size();
}
@Override
public boolean isEmpty()
{
return getMap().isEmpty();
}
@Override
public boolean containsKey(Object key)
{
return getMap().containsKey(lookupKey(key));
}
@Override
public boolean containsValue(Object value)
{
return getMap().containsValue(value);
}
@Override
public T get(Object key)
{
return getMap().get(lookupKey(key));
}
@Override
public T put(String key, T value)
{
return getMap().put(lookupKey(key), value);
}
@Override
public T remove(Object key)
{
return getMap().remove(lookupKey(key));
}
/**
* I completely ignore the Java 8 implementation and put entries one by one. This will be slower.
*/
@Override
public void putAll(Map<? extends String, ? extends T> m)
{
for (String key : m.keySet()) {
getMap().put(lookupKey(key),m.get(key));
}
}
@Override
public void clear()
{
getMap().clear();
}
@Override
public Set<String> keySet()
{
if (keySet == null)
keySet = new KeySet(getMap().keySet());
return keySet;
}
@Override
public Collection<T> values()
{
return getMap().values();
}
@Override
public Set<Entry<String, T>> entrySet()
{
if (entrySet == null)
entrySet = new EntrySet(getMap().entrySet());
return entrySet;
}
@Override
public boolean equals(Object o)
{
return getMap().equals(o);
}
@Override
public int hashCode()
{
return getMap().hashCode();
}
@Override
public T getOrDefault(Object key, T defaultValue)
{
return getMap().getOrDefault(lookupKey(key), defaultValue);
}
@Override
public void forEach(final BiConsumer<? super String, ? super T> action)
{
getMap().forEach(new BiConsumer<CaseInsensitiveMapKey, T>()
{
@Override
public void accept(CaseInsensitiveMapKey lookupKey, T t)
{
action.accept(lookupKey.key,t);
}
});
}
@Override
public void replaceAll(final BiFunction<? super String, ? super T, ? extends T> function)
{
getMap().replaceAll(new BiFunction<CaseInsensitiveMapKey, T, T>()
{
@Override
public T apply(CaseInsensitiveMapKey lookupKey, T t)
{
return function.apply(lookupKey.key,t);
}
});
}
@Override
public T putIfAbsent(String key, T value)
{
return getMap().putIfAbsent(lookupKey(key), value);
}
@Override
public boolean remove(Object key, Object value)
{
return getMap().remove(lookupKey(key), value);
}
@Override
public boolean replace(String key, T oldValue, T newValue)
{
return getMap().replace(lookupKey(key), oldValue, newValue);
}
@Override
public T replace(String key, T value)
{
return getMap().replace(lookupKey(key), value);
}
@Override
public T computeIfAbsent(String key, final Function<? super String, ? extends T> mappingFunction)
{
return getMap().computeIfAbsent(lookupKey(key), new Function<CaseInsensitiveMapKey, T>()
{
@Override
public T apply(CaseInsensitiveMapKey lookupKey)
{
return mappingFunction.apply(lookupKey.key);
}
});
}
@Override
public T computeIfPresent(String key, final BiFunction<? super String, ? super T, ? extends T> remappingFunction)
{
return getMap().computeIfPresent(lookupKey(key), new BiFunction<CaseInsensitiveMapKey, T, T>()
{
@Override
public T apply(CaseInsensitiveMapKey lookupKey, T t)
{
return remappingFunction.apply(lookupKey.key, t);
}
});
}
@Override
public T compute(String key, final BiFunction<? super String, ? super T, ? extends T> remappingFunction)
{
return getMap().compute(lookupKey(key), new BiFunction<CaseInsensitiveMapKey, T, T>()
{
@Override
public T apply(CaseInsensitiveMapKey lookupKey, T t)
{
return remappingFunction.apply(lookupKey.key,t);
}
});
}
@Override
public T merge(String key, T value, BiFunction<? super T, ? super T, ? extends T> remappingFunction)
{
return getMap().merge(lookupKey(key), value, remappingFunction);
}
protected Map<CaseInsensitiveMapKey,T> getMapImplementation() {
return new HashMap<>();
}
private Map<CaseInsensitiveMapKey,T> getMap() {
if (map == null)
map = getMapImplementation();
return map;
}
private CaseInsensitiveMapKey lookupKey(Object key)
{
return new CaseInsensitiveMapKey((String)key);
}
public class CaseInsensitiveMapKey {
private String key;
private String lookupKey;
public CaseInsensitiveMapKey(String key)
{
this.key = key;
this.lookupKey = key.toUpperCase();
}
@Override
public boolean equals(Object o)
{
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
CaseInsensitiveMapKey that = (CaseInsensitiveMapKey) o;
return lookupKey.equals(that.lookupKey);
}
@Override
public int hashCode()
{
return lookupKey.hashCode();
}
}
private class KeySet implements Set<String> {
private Set<CaseInsensitiveMapKey> wrapped;
public KeySet(Set<CaseInsensitiveMapKey> wrapped)
{
this.wrapped = wrapped;
}
private List<String> keyList() {
return stream().collect(Collectors.toList());
}
private Collection<CaseInsensitiveMapKey> mapCollection(Collection<?> c) {
return c.stream().map(it -> lookupKey(it)).collect(Collectors.toList());
}
@Override
public int size()
{
return wrapped.size();
}
@Override
public boolean isEmpty()
{
return wrapped.isEmpty();
}
@Override
public boolean contains(Object o)
{
return wrapped.contains(lookupKey(o));
}
@Override
public Iterator<String> iterator()
{
return keyList().iterator();
}
@Override
public Object[] toArray()
{
return keyList().toArray();
}
@Override
public <T> T[] toArray(T[] a)
{
return keyList().toArray(a);
}
@Override
public boolean add(String s)
{
return wrapped.add(lookupKey(s));
}
@Override
public boolean remove(Object o)
{
return wrapped.remove(lookupKey(o));
}
@Override
public boolean containsAll(Collection<?> c)
{
return keyList().containsAll(c);
}
@Override
public boolean addAll(Collection<? extends String> c)
{
return wrapped.addAll(mapCollection(c));
}
@Override
public boolean retainAll(Collection<?> c)
{
return wrapped.retainAll(mapCollection(c));
}
@Override
public boolean removeAll(Collection<?> c)
{
return wrapped.removeAll(mapCollection(c));
}
@Override
public void clear()
{
wrapped.clear();
}
@Override
public boolean equals(Object o)
{
return wrapped.equals(lookupKey(o));
}
@Override
public int hashCode()
{
return wrapped.hashCode();
}
@Override
public Spliterator<String> spliterator()
{
return keyList().spliterator();
}
@Override
public boolean removeIf(Predicate<? super String> filter)
{
return wrapped.removeIf(new Predicate<CaseInsensitiveMapKey>()
{
@Override
public boolean test(CaseInsensitiveMapKey lookupKey)
{
return filter.test(lookupKey.key);
}
});
}
@Override
public Stream<String> stream()
{
return wrapped.stream().map(it -> it.key);
}
@Override
public Stream<String> parallelStream()
{
return wrapped.stream().map(it -> it.key).parallel();
}
@Override
public void forEach(Consumer<? super String> action)
{
wrapped.forEach(new Consumer<CaseInsensitiveMapKey>()
{
@Override
public void accept(CaseInsensitiveMapKey lookupKey)
{
action.accept(lookupKey.key);
}
});
}
}
private class EntrySet implements Set<Map.Entry<String,T>> {
private Set<Entry<CaseInsensitiveMapKey,T>> wrapped;
public EntrySet(Set<Entry<CaseInsensitiveMapKey,T>> wrapped)
{
this.wrapped = wrapped;
}
private List<Map.Entry<String,T>> keyList() {
return stream().collect(Collectors.toList());
}
private Collection<Entry<CaseInsensitiveMapKey,T>> mapCollection(Collection<?> c) {
return c.stream().map(it -> new CaseInsensitiveEntryAdapter((Entry<String,T>)it)).collect(Collectors.toList());
}
@Override
public int size()
{
return wrapped.size();
}
@Override
public boolean isEmpty()
{
return wrapped.isEmpty();
}
@Override
public boolean contains(Object o)
{
return wrapped.contains(lookupKey(o));
}
@Override
public Iterator<Map.Entry<String,T>> iterator()
{
return keyList().iterator();
}
@Override
public Object[] toArray()
{
return keyList().toArray();
}
@Override
public <T> T[] toArray(T[] a)
{
return keyList().toArray(a);
}
@Override
public boolean add(Entry<String,T> s)
{
return wrapped.add(new CaseInsensitiveEntryAdapter(s)); // wrap the String-keyed entry before adding
}
@Override
public boolean remove(Object o)
{
return wrapped.remove(lookupKey(o));
}
@Override
public boolean containsAll(Collection<?> c)
{
return keyList().containsAll(c);
}
@Override
public boolean addAll(Collection<? extends Entry<String,T>> c)
{
return wrapped.addAll(mapCollection(c));
}
@Override
public boolean retainAll(Collection<?> c)
{
return wrapped.retainAll(mapCollection(c));
}
@Override
public boolean removeAll(Collection<?> c)
{
return wrapped.removeAll(mapCollection(c));
}
@Override
public void clear()
{
wrapped.clear();
}
@Override
public boolean equals(Object o)
{
return wrapped.equals(lookupKey(o));
}
@Override
public int hashCode()
{
return wrapped.hashCode();
}
@Override
public Spliterator<Entry<String,T>> spliterator()
{
return keyList().spliterator();
}
@Override
public boolean removeIf(Predicate<? super Entry<String, T>> filter)
{
return wrapped.removeIf(new Predicate<Entry<CaseInsensitiveMapKey, T>>()
{
@Override
public boolean test(Entry<CaseInsensitiveMapKey, T> entry)
{
return filter.test(new FromCaseInsensitiveEntryAdapter(entry));
}
});
}
@Override
public Stream<Entry<String,T>> stream()
{
return wrapped.stream().map(it -> new Entry<String, T>()
{
@Override
public String getKey()
{
return it.getKey().key;
}
@Override
public T getValue()
{
return it.getValue();
}
@Override
public T setValue(T value)
{
return it.setValue(value);
}
});
}
@Override
public Stream<Map.Entry<String,T>> parallelStream()
{
return StreamSupport.stream(spliterator(), true);
}
@Override
public void forEach(Consumer<? super Entry<String, T>> action)
{
wrapped.forEach(new Consumer<Entry<CaseInsensitiveMapKey, T>>()
{
@Override
public void accept(Entry<CaseInsensitiveMapKey, T> entry)
{
action.accept(new FromCaseInsensitiveEntryAdapter(entry));
}
});
}
}
private class EntryAdapter implements Map.Entry<String,T> {
private Entry<String,T> wrapped;
public EntryAdapter(Entry<String, T> wrapped)
{
this.wrapped = wrapped;
}
@Override
public String getKey()
{
return wrapped.getKey();
}
@Override
public T getValue()
{
return wrapped.getValue();
}
@Override
public T setValue(T value)
{
return wrapped.setValue(value);
}
@Override
public boolean equals(Object o)
{
return wrapped.equals(o);
}
@Override
public int hashCode()
{
return wrapped.hashCode();
}
}
private class CaseInsensitiveEntryAdapter implements Map.Entry<CaseInsensitiveMapKey,T> {
private Entry<String,T> wrapped;
public CaseInsensitiveEntryAdapter(Entry<String, T> wrapped)
{
this.wrapped = wrapped;
}
@Override
public CaseInsensitiveMapKey getKey()
{
return lookupKey(wrapped.getKey());
}
@Override
public T getValue()
{
return wrapped.getValue();
}
@Override
public T setValue(T value)
{
return wrapped.setValue(value);
}
}
private class FromCaseInsensitiveEntryAdapter implements Map.Entry<String,T> {
private Entry<CaseInsensitiveMapKey,T> wrapped;
public FromCaseInsensitiveEntryAdapter(Entry<CaseInsensitiveMapKey, T> wrapped)
{
this.wrapped = wrapped;
}
@Override
public String getKey()
{
return wrapped.getKey().key;
}
@Override
public T getValue()
{
return wrapped.getValue();
}
@Override
public T setValue(T value)
{
return wrapped.setValue(value);
}
}
}
Because of this, I'm creating a new object of CaseInsensitiveString for every event. So, it might hit performance.
Creating wrappers or converting the key to lower case before lookup both create new objects. Writing your own java.util.Map implementation is the only way to avoid this. It's not too hard, and IMO it is worth it. I found the following hash function to work pretty well, up to a few hundred keys.
static int ciHashCode(String string)
{
// length and the low 5 bits of hashCode() are case insensitive
return (string.hashCode() & 0x1f)*33 + string.length();
}
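As a quick sanity check (my own example, not from the answer): ASCII upper- and lower-case letters differ only in bit 5 (0x20), and 31 ≡ -1 (mod 32), so the low 5 bits of String.hashCode() do not depend on ASCII case:
System.out.println(ciHashCode("network") == ciHashCode("NETWORK")); // true
System.out.println(ciHashCode("Node-A") == ciHashCode("node-a"));   // true
The custom Map still needs a case-insensitive equality check on top of this bucket hash, e.g. String.equalsIgnoreCase.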
I like using ICU4J's CaseInsensitiveString wrapper around the Map key because it takes care of the hashCode/equals issue and it works for Unicode/i18n.
HashMap<CaseInsensitiveString, String> caseInsensitiveMap = new HashMap<>();
caseInsensitiveMap.put(new CaseInsensitiveString("tschüß"), "bye");
caseInsensitiveMap.containsKey(new CaseInsensitiveString("TSCHÜSS")); // true
I find solutions which require you to change the key (e.g., toLowerCase) very unwelcome and solutions which require TreeMap also unwelcome.
Since a TreeMap changes the time complexity (compared to a HashMap), I think it's more viable to simply go with a utility method that is O(n):
public static <T> T getIgnoreCase(Map<String, T> map, String key) {
for(Entry<String, T> entry : map.entrySet()) {
if(entry.getKey().equalsIgnoreCase(key))
return entry.getValue();
}
return null;
}
This is that method. Since the sacrifice in performance (time complexity) looks inevitable, at least it doesn't require you to change the underlying map to suit the lookup.
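For example (hypothetical map contents):
Map<String, Integer> ports = new HashMap<>();
ports.put("Http", 80);
ports.put("Https", 443);
Integer p = getIgnoreCase(ports, "HTTP"); // 80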
Is it even possible?
Say you have
private Set<String> names = new LinkedHashSet<String>();
and the Strings are "Mike", "John", "Karen".
Is it possible to get 1 in return to "what's the index of 'John'?" without iteration?
The following works fine; with this question I wonder if there is a better way:
for (String s : names) {
++i;
if (s.equals(someRandomInputString)) {
break;
}
}
The Set interface doesn't have anything like an indexOf() method. You'd really need to iterate over it or use the List interface instead, which offers an indexOf() method.
If you would like to, converting a Set to a List is pretty trivial; it should be a matter of passing the Set to the constructor of the List implementation. E.g.
List<String> nameList = new ArrayList<String>(nameSet);
// ...
I don't believe so, but you could create a LinkedHashSetWithIndex wrapper class that would do the iteration for you, or keep a separate table with the indexes of each entry, if the performance decrease is acceptable for your use case.
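A minimal sketch of such a wrapper (the class name comes from the answer; the body is illustrative and still O(n) per lookup):
import java.util.LinkedHashSet;

public class LinkedHashSetWithIndex<E> extends LinkedHashSet<E> {
    /** Returns the position of o in iteration order, or -1 if it is not present. */
    public int indexOf(Object o) {
        int i = 0;
        for (E e : this) {
            if (e.equals(o))
                return i;
            i++;
        }
        return -1;
    }
}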
It is generally not possible for a Set to return the index, because it's not necessarily well defined for the particular Set implementation. For example, it says in the HashSet documentation:
It makes no guarantees as to the iteration order of the set; in particular, it does not guarantee that the order will remain constant over time.
So you shouldn't say the type is Set when what you actually expect is a Set implementation with a defined order.
Here is an implementation that supports insertion, removal and retainAll, backed by an ArrayList to achieve O(1) on get(index).
/**
* @author Mo. Joseph
*
* Allows you to call get with O(1) instead of O(n) to get an instance by index
*/
public static final class $IndexLinkedHashSet<E> extends LinkedHashSet<E> {
private final ArrayList<E> list = new ArrayList<>();
public $IndexLinkedHashSet(int initialCapacity, float loadFactor) {
super(initialCapacity, loadFactor);
}
public $IndexLinkedHashSet() {
super();
}
public $IndexLinkedHashSet(int initialCapacity) {
super(initialCapacity);
}
public $IndexLinkedHashSet(Collection<? extends E> c) {
super(c);
}
@Override
public synchronized boolean add(E e) {
if ( super.add(e) ) {
return list.add(e);
}
return false;
}
@Override
public synchronized boolean remove(Object o) {
if ( super.remove(o) ) {
return list.remove(o);
}
return false;
}
@Override
public synchronized void clear() {
super.clear();
list.clear();
}
public synchronized E get(int index) {
return list.get(index);
}
@Override
public synchronized boolean removeAll(Collection<?> c) {
if ( super.removeAll(c) ) {
return list.removeAll(c);
}
return false; // nothing was removed, so the set did not change
}
@Override
public synchronized boolean retainAll(Collection<?> c) {
if ( super.retainAll(c) ) {
return list.retainAll(c);
}
return false;
}
/**
* Copied from super class
*/
@Override
public synchronized boolean addAll(Collection<? extends E> c) {
boolean modified = false;
for (E e : c)
if (add(e))
modified = true;
return modified;
}
}
To test it:
public static void main(String[] args) {
$IndexLinkedHashSet<String> abc = new $IndexLinkedHashSet<String>();
abc.add("8");
abc.add("8");
abc.add("8");
abc.add("2");
abc.add("3");
abc.add("4");
abc.add("1");
abc.add("5");
abc.add("8");
System.out.println("Size: " + abc.size());
int i = 0;
while ( i < abc.size()) {
System.out.println( abc.get(i) );
i++;
}
abc.remove("8");
abc.remove("5");
System.out.println("Size: " + abc.size());
i = 0;
while ( i < abc.size()) {
System.out.println( abc.get(i) );
i++;
}
abc.clear();
System.out.println("Size: " + abc.size());
i = 0;
while ( i < abc.size()) {
System.out.println( abc.get(i) );
i++;
}
}
Which outputs:
Size: 6
8
2
3
4
1
5
Size: 4
2
3
4
1
Size: 0
Of course remove, removeAll and retainAll now have the same or worse performance as ArrayList. But I do not use them, so I am OK with that.
Enjoy!
EDIT:
Here is another implementation, which does not extend LinkedHashSet because that's redundant. Instead it uses a HashSet and an ArrayList.
/**
* @author Mo. Joseph
*
* Allows you to call get with O(1) instead of O(n) to get an instance by index
*/
public static final class $IndexLinkedHashSet<E> implements Set<E> {
private final ArrayList<E> list = new ArrayList<>( );
private final HashSet<E> set = new HashSet<> ( );
public synchronized boolean add(E e) {
if ( set.add(e) ) {
return list.add(e);
}
return false;
}
public synchronized boolean remove(Object o) {
if ( set.remove(o) ) {
return list.remove(o);
}
return false;
}
@Override
public boolean containsAll(Collection<?> c) {
return set.containsAll(c);
}
public synchronized void clear() {
set.clear();
list.clear();
}
public synchronized E get(int index) {
return list.get(index);
}
public synchronized boolean removeAll(Collection<?> c) {
if ( set.removeAll(c) ) {
return list.removeAll(c);
}
return false; // nothing was removed, so the set did not change
}
public synchronized boolean retainAll(Collection<?> c) {
if ( set.retainAll(c) ) {
return list.retainAll(c);
}
return false;
}
public synchronized boolean addAll(Collection<? extends E> c) {
boolean modified = false;
for (E e : c)
if (add(e))
modified = true;
return modified;
}
@Override
public synchronized int size() {
return set.size();
}
@Override
public synchronized boolean isEmpty() {
return set.isEmpty();
}
@Override
public synchronized boolean contains(Object o) {
return set.contains(o);
}
@Override
public synchronized Iterator<E> iterator() {
return list.iterator();
}
@Override
public synchronized Object[] toArray() {
return list.toArray();
}
@Override
public synchronized <T> T[] toArray(T[] a) {
return list.toArray(a);
}
}
Now you have two implementations; I would prefer the second one.
Although not as efficient for the machine, this achieves it in one line:
int index = new ArrayList<String>(names).indexOf("John");
There is no better way, only a one-liner (which makes use of the iterator too, but implicitly):
new ArrayList(names).get(0)
You can convert your Set to a List, and then you can do any indexing operations.
Example: you need to crop the Set to 5 items.
Set<String> listAsLinkedHashSet = new LinkedHashSet<>();
listAsLinkedHashSet.add("1");
listAsLinkedHashSet.add("2");
listAsLinkedHashSet.add("3");
listAsLinkedHashSet.add("4");
listAsLinkedHashSet.add("1");
listAsLinkedHashSet.add("2");
listAsLinkedHashSet.add("5");
listAsLinkedHashSet.add("7");
listAsLinkedHashSet.add("9");
listAsLinkedHashSet.add("8");
listAsLinkedHashSet.add("1");
listAsLinkedHashSet.add("10");
listAsLinkedHashSet.add("11");
List<String> listAsArrayList = new ArrayList<>(listAsLinkedHashSet);
//crop list to 5 elements
if (listAsArrayList.size() > 5) {
for (int i = 5; i < listAsArrayList.size(); i++) {
listAsArrayList.remove(i);
i--;
}
}
listAsLinkedHashSet.clear();
listAsLinkedHashSet.addAll(listAsArrayList);