How to remove “Null” key from HashMap<String, String>? - java

According to Java, HashMap allows null as a key. My client said:
Use HashMap only, not alternatives like Hashtable or ConcurrentHashMap, and write the logic so that the HashMap never
contains null as a key anywhere in the overall product.
I have options like:
Create a wrapper class of HashMap and use it everywhere.
import java.util.HashMap;

public class WHashMap<T, K> extends HashMap<T, K> {
    @Override
    public K put(T key, K value) {
        if (key != null) {
            return super.put(key, value);
        }
        return null;
    }
}
I suggested another option: remove the null key manually, or disallow it at each call site. That was also rejected, since the same check would be repeated everywhere.
Let me know if I missed any better approach, or should I just
use HashMap with null keys allowed, as the Java standard permits?
What is a good approach to handle such a case?

Change your put method implementation as follows:
@Override
public K put(T key, K value) {
    if (key == null) {
        throw new NullPointerException("Key must not be null.");
    }
    return super.put(key, value);
}

Your code is a reasonable way to create a HashMap that can't contain a null key (though it's not perfect: what happens if someone calls putAll and passes in a map with a null key?); but I don't think that's what your client is asking for. Rather, I think your client is just saying that (s)he wants you to create a HashMap that doesn't contain a null key (even though it can). As in, (s)he just wants you to make sure that nothing in your program logic will ever put a null key in the map.
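Picking up the putAll caveat above: here is a sketch of a wrapper that routes putAll through put as well, so the null-key check cannot be bypassed. The class name and the choice of NullPointerException are my assumptions, not from the question.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical wrapper: rejects null keys and also guards putAll, which
// HashMap does NOT route through put(), so overriding put() alone is not enough.
class NullSafeHashMap<K, V> extends HashMap<K, V> {
    @Override
    public V put(K key, V value) {
        if (key == null) {
            throw new NullPointerException("Key must not be null.");
        }
        return super.put(key, value);
    }

    @Override
    public void putAll(Map<? extends K, ? extends V> m) {
        // Delegate entry by entry so the null check in put() applies here too.
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
            put(e.getKey(), e.getValue());
        }
    }
}
```

Other entry points (e.g. merge, compute) could still introduce a null key on Java 8+, so a wrapper like this is a guard rail, not a guarantee.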

Related

Is there a container in Java to store objects with many-to-many relations? [duplicate]

I am looking for a way to store key-value pairs. I need the lookup to be bidirectional, but at the same time I need to store multiple values for the same key. In other words, something like a BidiMap, but for every key there can be multiple values. For example, it needs to be able to hold pairs like: "s1"->1, "s2"->1, "s3"->2, and I need to be able to get the value mapped to each key, and for each value, get all the keys associated with it.
So you need support for many-to-many relationships? The closest you can get is Guava's Multimap, as @Mechkov wrote, but more specifically a Multimap combined with Multimaps.invertFrom. "BiMultimap" isn't implemented yet, but there is an issue requesting this feature in the Google Guava library.
At this point you have few options:
If your "BiMultimap" is going to be an immutable constant, use Multimaps.invertFrom and ImmutableMultimap / ImmutableListMultimap / ImmutableSetMultimap (each of these three uses a different collection to store values). Some code (example taken from an app I develop; it uses enums and Sets.immutableEnumSet):
public class RolesAndServicesMapping {
    private static final ImmutableMultimap<Service, Authority> SERVICES_TO_ROLES_MAPPING =
            ImmutableMultimap.<Service, Authority>builder()
                    .put(Service.SFP1, Authority.ROLE_PREMIUM)
                    .put(Service.SFP, Authority.ROLE_PREMIUM)
                    .put(Service.SFE, Authority.ROLE_EXTRA)
                    .put(Service.SF, Authority.ROLE_STANDARD)
                    .put(Service.SK, Authority.ROLE_STANDARD)
                    .put(Service.SFP1, Authority.ROLE_ADMIN)
                    .put(Service.ADMIN, Authority.ROLE_ADMIN)
                    .put(Service.NONE, Authority.ROLE_DENY)
                    .build();

    // Whole magic is here:
    private static final ImmutableMultimap<Authority, Service> ROLES_TO_SERVICES_MAPPING =
            SERVICES_TO_ROLES_MAPPING.inverse();
    // before guava-11.0 it was: ImmutableMultimap.copyOf(Multimaps.invertFrom(SERVICES_TO_ROLES_MAPPING, HashMultimap.<Authority, Service>create()));

    public static ImmutableSet<Authority> getRoles(final Service service) {
        return Sets.immutableEnumSet(SERVICES_TO_ROLES_MAPPING.get(service));
    }

    public static ImmutableSet<Service> getServices(final Authority role) {
        return Sets.immutableEnumSet(ROLES_TO_SERVICES_MAPPING.get(role));
    }
}
If you really want your Multimap to be modifiable, it will be hard to maintain both the K->V and V->K variants, unless you modify only kToVMultimap and call invertFrom each time you need an inverted copy (making that copy unmodifiable, so you don't accidentally modify vToKMultimap, which would not update kToVMultimap). This is not optimal, but it should do in this case.
(Probably not your case; mentioned as a bonus): the BiMap interface and its implementing classes have an .inverse() method that gives a BiMap<V, K> view of a BiMap<K, V> (and the map itself after biMap.inverse().inverse()). If the issue I mentioned above is resolved, it will probably offer something similar.
(EDIT October 2016) You can also use new graph API which will be present in Guava 20:
As a whole, common.graph supports graphs of the following varieties:
directed graphs
undirected graphs
nodes and/or edges with associated values (weights, labels, etc.)
graphs that do/don't allow self-loops
graphs that do/don't allow parallel edges (graphs with parallel edges are sometimes called multigraphs)
graphs whose nodes/edges are insertion-ordered, sorted, or unordered
What's wrong with having two maps, key->values, values->keys?
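The two-map suggestion above can be sketched with nothing but the JDK. The class name and API below are illustrative assumptions, not a standard library type:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative JDK-only bidirectional multimap: two HashMaps kept in sync,
// one mapping key -> values and one mapping value -> keys.
class TwoWayMultiMap<K, V> {
    private final Map<K, Set<V>> keyToValues = new HashMap<>();
    private final Map<V, Set<K>> valueToKeys = new HashMap<>();

    public void put(K key, V value) {
        keyToValues.computeIfAbsent(key, k -> new HashSet<>()).add(value);
        valueToKeys.computeIfAbsent(value, v -> new HashSet<>()).add(key);
    }

    public Set<V> getValues(K key) {
        return keyToValues.getOrDefault(key, Collections.emptySet());
    }

    public Set<K> getKeys(V value) {
        return valueToKeys.getOrDefault(value, Collections.emptySet());
    }
}
```

The cost of this approach is that every mutation (including removal, not shown here) must touch both maps, which is exactly the maintenance burden the Guava answers are trying to avoid.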
I hope using MultivaluedMap solves the problem.
Please find the documentation from Oracle at the link below.
http://docs.oracle.com/javaee/6/api/javax/ws/rs/core/MultivaluedMap.html
Using Google Guava we can write a primitive BiMultiMap as below.
import java.util.Collection;

import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;

public class BiMultiMap<K, V> {
    Multimap<K, V> keyToValue = ArrayListMultimap.create();
    Multimap<V, K> valueToKey = ArrayListMultimap.create();

    public void putForce(K key, V value) {
        keyToValue.put(key, value);
        valueToKey.put(value, key);
    }

    public void put(K key, V value) {
        Collection<V> oldValue = keyToValue.get(key);
        if (!oldValue.contains(value)) {
            keyToValue.put(key, value);
            valueToKey.put(value, key);
        }
    }

    public Collection<V> getValue(K key) {
        return keyToValue.get(key);
    }

    public Collection<K> getKey(V value) {
        return valueToKey.get(value);
    }

    @Override
    public String toString() {
        return "BiMultiMap [keyToValue=" + keyToValue + ", valueToKey=" + valueToKey + "]";
    }
}
Hope this will help some basic needs of a bi-directional multimap.
Note that K and V need to implement the hashCode and equals methods properly.
Hope I got you right
class A {
long id;
List<B> bs;
}
class B {
long id;
List<A> as;
}
Google's Guava Multimap implementation is what I am using for these purposes.
Map<Key, Collection<Value>>
where the Collection can be an ArrayList, for example. It allows multiple values, stored in a collection, to be mapped to a key.
Hope this helps!

Multi threading with a ConcurrentHashMap

I'm trying to create a method with a ConcurrentHashMap with the following behavior.
Read no lock
Write lock
prior to writing,
read to see if record exist,
if it still doesn't exist, save to database and add record to map.
if record exist from previous write, just return record.
My thoughts.
private Object lock1 = new Object();
private ConcurrentHashMap<String, Object> productMap;

private Object getProductMap(String name) {
    if (productMap.isEmpty()) {
        productMap = new ConcurrentHashMap<>();
    }
    if (productMap.containsKey(name)) {
        return productMap.get(name);
    }
    synchronized (lock1) {
        if (productMap.containsKey(name)) {
            return productMap.get(name);
        } else {
            Product product = new Product(name);
            session.save(product);
            productMap.putIfAbsent(name, product);
        }
    }
}
Could someone help me to understand if this is a correct approach?
There are several bugs here.
If productMap isn't guaranteed to be initialized, you will get an NPE in the first statement of this method.
The method isn't guaranteed to return anything if the map is empty.
The method doesn't return on all paths.
The method is both poorly named and unnecessary; you're trying to emulate putIfAbsent which half accomplishes your goal.
You also don't need to do any synchronization; ConcurrentHashMap is thread safe for your purposes.
If I were to rewrite this, I'd do a few things differently:
Eagerly instantiate the ConcurrentHashMap
Bind it to ConcurrentMap instead of the concrete class (so ConcurrentMap<String, Product> productMap = new ConcurrentHashMap<>();)
Rename the method to putIfMissing and delegate to putIfAbsent, with some logic to return the same record I want to add if the result is null. The above absolutely depends on Product having a well-defined equals and hashCode method, such that new Product(name) will produce objects with the same values for equals and hashCode if provided the same name.
Use an Optional to avoid any NPEs with the result of putIfAbsent, and to provide easier to digest code.
A snippet of the above:
public Product putIfMissing(String key) {
    Product product = new Product(key);
    Optional<Product> result =
            Optional.ofNullable(productMap.putIfAbsent(key, product));
    session.save(result.orElse(product));
    return result.orElse(product);
}

Closed addressing hash tables. How are they resized?

Reading about hopscotch hashing and trying to understand how it could be coded, I realized that in linear-probing hash table variants we need a recursive approach to resizing, as follows:
create a back up array of the existing buckets
allocate a new array of the requested capacity
go over the back up array and rehash each element to get
the new position of the element in the new array and insert it in the
new array
when done release the backup array
And the code structure would be like:
public V put(Object key, Object value) {
    // code
    // we need to resize
    if (condition) {
        resize(2 * keys.length);
        return put(key, value);
    }
    // other code
}

private void resize(int newCapacity) {
    // step 1
    // step 2
    // go over each element
    for (Object key : oldKeys) {
        put(key, value);
    }
}
I don't like this structure, as we recursively call put inside resize.
Is this the standard approach to resizing a hash table when using linear probing variants?
Good question! Usually, in schemes like hopscotch hashing, cuckoo hashing, or static perfect hashing, where there's a chance that a rehash can fail, a single "rehash" step might have to sit in a loop, trying to assign everything into a new table until it finds an arrangement that works.
You might want to consider having three methods - put, the externally visible function, rehash, an internal function, and tryPut, which tries to add an element, but might fail. You can then implement the functions like these, which are primarily for exposition and can definitely be optimized a bit:
public V put(Object key, Object value) {
    V oldValue = get(key);
    while (!tryPut(key, value)) {
        rehash();
    }
    return oldValue;
}

private void rehash() {
    increaseCapacity();
    boolean success;
    do {
        success = true;
        reallocateSpace();
        for (each old key/value pair) {
            if (!tryPut(key, value)) {
                success = false;
                break;
            }
        }
    } while (!success);
}

private boolean tryPut(Object key, Object value) {
    // Try adding the key/value pair using a
    // hashtable-specific implementation, returning
    // true if it works and false otherwise.
}
There's no longer any risk of a weird recursion here, because tryPut never calls anything else.
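To make the tryPut idea concrete, here is a minimal linear-probing version in plain Java. The field names and the "fail after probing every slot" condition are assumptions for exposition; hopscotch and cuckoo variants would fail earlier, under their own probe limits.

```java
// Illustrative linear-probing table: tryPut returns false when no slot is
// found after probing every bucket, signalling the caller to resize and rehash.
class ProbeTable {
    private final Object[] keys;
    private final Object[] values;
    private int size;

    ProbeTable(int capacity) {
        keys = new Object[capacity];
        values = new Object[capacity];
    }

    boolean tryPut(Object key, Object value) {
        int n = keys.length;
        int i = (key.hashCode() & 0x7fffffff) % n;
        for (int probes = 0; probes < n; probes++) {
            if (keys[i] == null) {          // empty slot: claim it
                keys[i] = key;
                values[i] = value;
                size++;
                return true;
            }
            if (keys[i].equals(key)) {      // existing key: update in place
                values[i] = value;
                return true;
            }
            i = (i + 1) % n;                // linear probe to the next slot
        }
        return false;                       // no slot found: caller must rehash
    }

    int size() { return size; }
}
```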
Hope this helps!

ConcurrentHashMap Conditional Replace

I'd like to be able to conditionally replace a value in a ConcurrentHashMap. That is, given:
public class PriceTick {
final String instrumentId;
...
final long timestamp;
...
And a class (let's call it TickHolder) which owns a ConcurrentHashMap (let's just call it map).
I wish to be able to implement a conditional put method, so that if there's no entry for the key, the new one is inserted, but if there is an existing entry, the new one is inserted only if the timestamp value in the new PriceTick is greater than the existing one.
For an old-school HashMap solution, TickHolder would have a put method:
public void add(PriceTick tick) {
synchronized(map) {
if ((map.get(tick.instrumentId) == null)
|| (tick.getTimestamp() > map.get(tick.instrumentId).getTimestamp()) )
map.put(tick.instrumentId, tick);
}
}
With a ConcurrentHashMap, one would want to drop the synchronization and use some atomic method like replace, but that's unconditional. So clearly the "conditional replace" method must be written.
However, since the test-and-replace operation is non-atomic, in order to be thread safe, it would have to be synchronized - but my initial reading of the ConcurrentHashMap source leads me to think that external synchronization and their internal locks will not work very well, so at a very minimum, every Map method which performs structural changes and the containing class performs would have to be synchronized by the containing class... and even then, I'm going to be fairly uneasy.
I thought about subclassing ConcurrentHashMap, but that seems to be impossible. It makes use of an inner final class HashEntry with default access, so although ConcurrentHashMap is not final, it's not extensible.
Which seems to mean that I have to fall back to implementing TickHolder as containing an old-school HashMap in order to write my conditional replace method.
So, the questions: am I right about the above? Have I (hopefully) missed something, whether obvious or subtle, which would lead to a different conclusion? I'd really like to be able to make use of that lovely striped locking mechanism here.
The non-deterministic solution is to loop replace():
PriceTick oldTick;
do {
    oldTick = map.get(newTick.getInstrumentId());
} while ((oldTick == null || oldTick.before(newTick)) && !map.replace(newTick.getInstrumentId(), oldTick, newTick));
Odd though it may seem, that is a commonly suggested pattern for this kind of thing.
@cletus's solution formed the base for my solution to an almost identical problem. I think a couple of changes are needed, though, as if oldTick is null then replace throws a NullPointerException, as stated by @hotzen.
PriceTick oldTick;
do {
    oldTick = map.putIfAbsent(newTick.getInstrumentId(), newTick);
} while (oldTick != null && oldTick.before(newTick) && !map.replace(newTick.getInstrumentId(), oldTick, newTick));
The correct answer should be
PriceTick oldTick;
do {
    oldTick = map.putIfAbsent(newTick.getInstrumentId(), newTick);
    if (oldTick == null) {
        break;
    }
} while (oldTick.before(newTick) && !map.replace(newTick.getInstrumentId(), oldTick, newTick));
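On Java 8 and later there is a simpler route that these answers predate: ConcurrentHashMap.merge performs the compare-and-keep-newest as a single atomic call. A minimal sketch, with a cut-down PriceTick and an illustrative holder class name:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Cut-down PriceTick for illustration; the real class has more fields.
class PriceTick {
    final String instrumentId;
    final long timestamp;

    PriceTick(String instrumentId, long timestamp) {
        this.instrumentId = instrumentId;
        this.timestamp = timestamp;
    }
}

// Illustrative holder: merge() atomically keeps whichever tick is newer.
class LatestTickMap {
    private final ConcurrentMap<String, PriceTick> map = new ConcurrentHashMap<>();

    public void add(PriceTick tick) {
        map.merge(tick.instrumentId, tick,
                (oldTick, newTick) -> newTick.timestamp > oldTick.timestamp ? newTick : oldTick);
    }

    public PriceTick get(String instrumentId) {
        return map.get(instrumentId);
    }
}
```

The remapping function may be retried under contention, so it should be a pure comparison with no side effects, as it is here.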
As an alternative, could you create a TickHolder class and use that as the value in your map? It makes the map slightly more cumbersome to use (getting a value is now map.get(key).getTick()), but it lets you keep ConcurrentHashMap's behavior.
public class TickHolder {
    public PriceTick getTick() { /* returns current value */ }
    public synchronized PriceTick replaceIfNewer(PriceTick pCandidate) { /* does your check */ }
}
And your put method becomes something like:
public void updateTick(PriceTick pTick) {
    TickHolder value = map.get(pTick.getInstrumentId());
    if (value == null) {
        TickHolder newTick = new TickHolder(pTick);
        value = map.putIfAbsent(pTick.getInstrumentId(), newTick);
        if (value == null) {
            value = newTick;
        }
    }
    value.replaceIfNewer(pTick);
}

On using Enum based Singleton to cache large objects (Java)

Is there any better way to cache up some very large objects, that can only be created once, and therefore need to be cached ? Currently, I have the following:
public enum LargeObjectCache {
    INSTANCE;

    private Map<String, LargeObject> map = new HashMap<...>();

    public LargeObject get(String s) {
        if (!map.containsKey(s)) {
            map.put(s, new LargeObject(s));
        }
        return map.get(s);
    }
}
There are several classes that can use the LargeObjects, which is why I decided to use a singleton for the cache, instead of passing LargeObjects to every class that uses it.
Also, the map doesn't contain many keys (one or two, though the keys can vary between runs of the program), so is there another, more efficient map to use in this case?
You may need thread safety to ensure you don't have two instances for the same name.
It doesn't matter much for small maps, but you can avoid one map call, which can make it faster.
public LargeObject get(String s) {
    synchronized (map) {
        LargeObject ret = map.get(s);
        if (ret == null) {
            map.put(s, ret = new LargeObject(s));
        }
        return ret;
    }
}
As it has been pointed out, you need to address thread-safety. Simply using Collections.synchronizedMap() doesn't make it completely correct, as the code entails compound operations. Synchronizing the entire block is one solution. However, using ConcurrentHashMap will result in a much more concurrent and scalable behavior if it is critical.
public enum LargeObjectCache {
    INSTANCE;

    private final ConcurrentMap<String, LargeObject> map = new ConcurrentHashMap<...>();

    public LargeObject get(String s) {
        LargeObject value = map.get(s);
        if (value == null) {
            value = new LargeObject(s);
            LargeObject old = map.putIfAbsent(s, value);
            if (old != null) {
                value = old;
            }
        }
        return value;
    }
}
You'll need to use it exactly in this form to have the correct and the most efficient behavior.
If you must ensure only one thread gets to even instantiate the value for a given key, then it becomes necessary to turn to something like the computing map in Google Collections or the memoizer example in Brian Goetz's book "Java Concurrency in Practice".
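The memoizer idea mentioned above can be sketched with a FutureTask per key, loosely following the pattern from "Java Concurrency in Practice"; the class and method names here are illustrative, not from either source.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
import java.util.function.Function;

// Memoizer sketch: a FutureTask per key guarantees the expensive value is
// computed at most once, and concurrent callers for the same key block
// until the single computation finishes.
class Memoizer<K, V> {
    private final ConcurrentMap<K, FutureTask<V>> cache = new ConcurrentHashMap<>();
    private final Function<K, V> compute;

    Memoizer(Function<K, V> compute) {
        this.compute = compute;
    }

    public V get(K key) {
        FutureTask<V> task = cache.get(key);
        if (task == null) {
            FutureTask<V> created = new FutureTask<>(() -> compute.apply(key));
            task = cache.putIfAbsent(key, created);
            if (task == null) {
                task = created;
                task.run(); // only the thread that won the race computes
            }
        }
        try {
            return task.get(); // other threads wait for the winner's result
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Unlike the putIfAbsent version above, losing threads here never construct a LargeObject at all; they block on the winner's FutureTask instead.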
