I have three hashCode methods as follows; I prioritised them based on their efficiency. I am wondering if there is any other way to make a more efficient hashCode method.
1) public int hashCode() { //terrible
return 5;
}
2) public int hashCode() { //a bit less terrible
return name.length(); // assuming name is a String, as in method 3
}
3) public int hashCode() { //better
final int prime = 31;
int result = 1;
result = prime * result + ((name == null) ? 0 : name.hashCode());
return result;
}
There is no surefire way to guarantee that your hashcode function is optimal because it is measured by two different metrics.
Efficiency - How quick it is to calculate.
Collisions - What is the chance of collision.
Your three methods:
Maximises efficiency at the expense of collisions.
Finds a spot somewhere in the middle - but still not good.
Least efficient but best for avoiding collisions - still not necessarily best.
You have to find the balance yourself.
Sometimes it is obvious when there is a very efficient method that never collides (e.g. the ordinal of an enum).
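For illustration, a minimal sketch of that enum case (the class and field names here are made up):
enum Weekday { MONDAY, TUESDAY, WEDNESDAY }

class Schedule {
    private final Weekday day;

    Schedule(Weekday day) { this.day = day; }

    @Override
    public int hashCode() {
        // ordinal() is unique per enum constant and is a plain field read,
        // so this is both maximally fast and collision-free across days.
        return day.ordinal();
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Schedule && ((Schedule) o).day == day;
    }
}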
Sometimes memoising the values is a good solution - this way even a very inefficient method can be mitigated, because it is only ever calculated once (a sketch follows). There is an obvious memory cost to this which also must be balanced.
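A minimal sketch of that memoisation, assuming the fields involved never change (java.lang.String does the same internally):
class ExpensiveKey {
    private final String payload;
    private int cachedHash; // 0 means "not computed yet", like String's cache

    ExpensiveKey(String payload) { this.payload = payload; }

    @Override
    public int hashCode() {
        int h = cachedHash;
        if (h == 0) { // recomputed only in the rare case the hash really is 0
            h = payload.hashCode(); // imagine something far more expensive here
            cachedHash = h;
        }
        return h;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof ExpensiveKey
                && payload.equals(((ExpensiveKey) o).payload);
    }
}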
Sometimes the overall functionality of your code contributes to your choice. Say you want to put File objects in a HashMap. A number of options are clear:
Use the hashcode of the file name.
Use the hashcode of the file path.
Use a crc of the contents of the file.
Use the hashcode of the SHA1 digest of the contents of the file (a sketch of this follows the list).
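As a hedged sketch of that last option, using only standard JDK classes (whether reading the whole file into memory is acceptable depends on your file sizes):
import java.io.File;
import java.nio.file.Files;
import java.security.MessageDigest;
import java.util.Arrays;

static int contentHashCode(File file) throws Exception {
    MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
    byte[] digest = sha1.digest(Files.readAllBytes(file.toPath()));
    // Fold the 20-byte digest down to an int suitable as a hash code.
    return Arrays.hashCode(digest);
}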
Why collisions are bad
One of the main uses of hashCode is when inserting objects into a HashMap. The algorithm requests a hash code from the object and uses that to decide which bucket to put the object in. If the hash collides with that of another object, both end up in the same bucket, and the bucket must be searched linearly, which costs time. If all hashes are unique then the map will have one item per bucket and thus be maximally efficient.
See the excellent Wikipedia article on Hash table for a deeper discussion of how HashMap works.
I prioritised them based on their efficiency
Your list is sorted by ascending efficiency—if by "efficiency" you mean the performance of your application as opposed to the latency of the hashCode method isolated from everything else. A hashcode with bad dispersion will result in a linear or near-linear search through a linked list inside HashMap, completely annulling the advantages of a hashtable.
Especially note that, on today's architectures, computation is much cheaper than pointer dereference, and it comes at a fixed low cost. A single cache miss is worth a thousand simple arithmetic operations and each pointer dereference is a potential cache miss.
In addition to the valuable answers so far, I'd like to add some other methods to consider:
3a):
@Override
public int hashCode() {
    return Objects.hashCode(name); // java.util.Objects; null-safe
}
Not many pros/cons in terms of performance, but a bit more concise.
4.) You should either provide more information about the class that you are talking about, or reconsider your design. But if you are using a class as the key of a hash map and the only property of this class is a String, then you might also be able to just use the String directly. So option 4 is:
// Changing this...
Map<Key, Value> map;
map.put(key, value);
Value value = map.get(key);
// ... to this:
Map<String, Value> map;
map.put(key.getName(), value);
Value value = map.get(key.getName());
(And if this is not possible, because the "name" of a Key might change after it has been created, you're in bigger trouble anyhow - see the next point)
5.) Maybe you can precompute the hash code. In fact, this is also done in the java.lang.String class:
public final class String
implements java.io.Serializable, Comparable<String>, CharSequence {
...
/** Cache the hash code for the string */
private int hash; // Default to 0
But of course, this only makes sense for immutable classes. You should be aware of the fact that using mutable classes as keys of a Map is "dangerous" and may lead to consistency errors, and should only be done when you're absolutely sure that the instances that are used as keys won't change.
So if you want to use your class as the keys, and maybe your class even has more fields than just a single one, then you could store the hash code as a field:
class Key
{
private final String name;
... // Other fields...
private final int hashCode;
Key(String name, ...)
{
this.name = name;
... // Other fields
// Pre-compute and store the hash code:
this.hashCode = computeHashCode();
}
private int computeHashCode()
{
int result = 31;
result = 31 * result + Objects.hashCode(name);
result = 31 * result + ... // Other fields
return result;
}
}
My answer is going a different path - basically it is not an answer, but a question: why do you worry about the performance of hashCode()?
Did you do exhaustive profiling of your application and find that there is a performance problem originating from that one method on some of your objects?
If the answer to that question is "no" ... then - why do you think you need to worry about this one method? Why do you think that the default, generated by Eclipse, probably used billions of times each day ... isn't good enough for you?
See here for explanations of why it is in general a very bad idea to waste one's time with such questions.
Yes, there are better alternatives.
xxHash or MurmurHash3 are general-purpose hashing algorithms that are both faster and better in quality.
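For example, with Guava on the classpath (an assumption; any MurmurHash3 implementation would do), a String-based key could delegate to Hashing.murmur3_32() like this:
import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;

class Key {
    private static final HashFunction MURMUR = Hashing.murmur3_32();
    private final String name;

    Key(String name) { this.name = name; }

    @Override
    public int hashCode() {
        // Hashes the chars of name directly, no intermediate byte[] copy.
        return MURMUR.hashUnencodedChars(name).asInt();
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Key && name.equals(((Key) o).name);
    }
}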
Related
I have the following scenario (modified from the actual business case).
I have a program which predicts how many calories a person will lose over the next 13 weeks based on certain attributes.
I want to cache this result in the database so that I don't run the prediction again for the same combination.
I have class person
class Person { int personId; String weekStartDate; }
I have HashMap<List<Person>, Integer> - The key is 13 weeks data of a person and the value is the prediction
I will keep the hash value in the database for caching purposes
Is there a better way to handle the above scenario? Is there any design pattern to support such scenarios?
Depends: the implementation of hashCode() uses the elements of your list. So adding elements later on changes the result of that operation:
public int hashCode() {
int hashCode = 1;
for (E e : this)
hashCode = 31*hashCode + (e==null ? 0 : e.hashCode());
return hashCode;
}
Maps aren't built for keys that can change their hash values! And of course, it doesn't really make sense to implement that method differently.
So: it can work when your lists are all immutable, meaning that neither the list nor any of its members is modified after the list was used as key. But there is a certain risk: if you forget about that contract later on, and these lists see modifications, then you will run into interesting issues.
This works because the hashcode of the standard List implementations is computed with the hashcodes of the contents. You need to make sure, however, to also implement hashCode and equals in the Person class, otherwise you will get the same problem this guy had. See also my answer on that question.
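A minimal sketch of what that could look like for the Person from the question (treating both fields as the identity is an assumption; adjust to whatever really identifies a person-week):
import java.util.Objects;

class Person {
    int personId;
    String weekStartDate;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person p = (Person) o;
        return personId == p.personId
                && Objects.equals(weekStartDate, p.weekStartDate);
    }

    @Override
    public int hashCode() {
        return Objects.hash(personId, weekStartDate);
    }
}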
I would suggest you define a class (say Data) and use it as a key in your hashmap. Override equals/hashcode accordingly with knowledge of data over weeks.
I have a method that checks if two objects are equal (by reference).
public boolean isUnique( T uniqueIdOfFirstObject, T uniqueIdOfSecondObject ) {
return (uniqueIdOfFirstObject == uniqueIdOfSecondObject);
}
(Use case) Assuming that I don't have any control over creation of the object.
I have a method
void currentNodeExistOrAddToHashSet(Object newObject, HashSet<T> objectHash) {
// will it be 100% precise? Assuming both objects have the same field values.
if (!objectHash.contains(newObject)) {
objectHash.add(newObject);
}
}
or I could do something like this
void currentNodeExistOrAddToHashSet(Object newObject, HashSet<T> objectHash){
// as per my knowledge, there might be collisions between different objects.
int uniqueId = System.identityHashCode(newObject);
if (!objectHash.contains(uniqueId)) {
objectHash.add(uniqueId);
}
}
Is it possible to get a 100% collision proof Id in java i.e different object having different IDs, the same object having same ids irrespective of the content of the object?
Since you put them into a HashSet that uses hashCode/equals, and hashCode is only 32 bits long, there is a hard limit; collisions will happen. Especially since a HashSet actually only looks at the last n bits of the hash before it grows itself and starts using one more bit, and so on. You can read a lot more about this here, for example.
The question is different here: why do you want a collision-free structure in the first place? If you define a fairly well-distributed hashCode and a fairly decent equals, these things should not matter to you at all. If you worry about the performance of a search, it is O(1) on average for a HashSet.
You could define hashCode and equality based on UUID, like let's say UUID#randomUUID - but this still bounds your hashCode to the same 32-bits, thus collision could still happen.
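If reference identity really is what you want, note that the JDK already has a structure that is collision-safe in exactly that sense: IdentityHashMap resolves collisions with ==, so two distinct objects are never confused even if their identity hash codes collide. A sketch reusing the method name from the question:
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// A set that treats two objects as equal only if they are the same reference.
Set<Object> seen = Collections.newSetFromMap(new IdentityHashMap<>());

void currentNodeExistOrAddToHashSet(Object newObject) {
    // add() is a no-op and returns false if the same reference is present
    seen.add(newObject);
}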
The best look-up structure is a hash table. It provides constant-time access on average (linear in the worst case).
This depends on the hash function. Ok.
My question is the following. Assuming a good implementation of a hash table, e.g. HashMap, is there a best practice concerning the keys passed in the map? I mean, it is recommended that the key be an immutable object, but I was wondering if there are other recommendations.
For example, the size of the key? In a good hashmap (in the way described above), if we used Strings as keys, wouldn't the "bottleneck" be in the string comparison for equals (trying to find the key)? So should the keys be kept small? Or are there objects that should not be used as keys, e.g. a URL? In such cases how can you choose what to use as a key?
The best performing key for a HashMap is probably an Integer, where hashCode() and equals() are implemented as:
public int hashCode() {
return value;
}
public boolean equals(Object obj) {
if (obj instanceof Integer) {
return value == ((Integer)obj).intValue();
}
return false;
}
That said, the purpose of a HashMap is to map some objects (keys) to others (values). The hash function is applied to the key only to provide fast, constant-time access to the stored value.
it is recommended that the key must be an immutable object but I was wondering if there are other recommendations.
The recommendation is to Map objects to what you need: don't think what is faster; but think what is the best for your business logic to address the objects to retrieve.
The important requirement is that the key object must be immutable, because if you change the key object after storing it in the Map it may be not possible to retrieve the associated value later.
The key word in HashMap is Map. Your object should just map. If you sacrifice the mapping task optimizing the key, you are defeating the purpose of the Map - without probably achieving any performance boost.
I 100% agree with the first two comments in your question:
the major constraint is that it has to be the thing that you want to base the lookup on ;)
– Oli Charlesworth
The general rule is to use as the key whatever you need to look up with.
– Louis Wasserman
Remember the two rules for optimization:
Don't.
(for experts only) don't yet.
The third rule is: profile before you optimize.
You should use whatever key you want to use to lookup things in the data structure, it's typically a domain-specific constraint. With that said, keep in mind that both hashCode() and equals() will be used in finding a key in the table.
hashCode() is used to find the position of the key, while equals() is used to determine if the key you are searching for is actually the key that we just found using hashCode().
For example, consider two keys a and b that have the same hash code in a table using separate chaining. Then a search for a would require testing if a.equals(key) for potentially both a and b in the table once we find the index of the list containing a and b from hashCode().
it is recommended that the key must be an immutable object but I was wondering if there are other recommendations.
The key for a value should be final (effectively immutable).
Most times a field of the object is used as the key. If that field changes, then the map can no longer find the value:
void foo(Employee e) {
map.put(e.getId(), e);
String newId = e.getId() + "new";
e.setId(newId);
Employee e2 = map.get(newId);
// e != e2 !
}
So Employee should not have a setId() method at all, but that is difficult because when you are writing Employee you don't know what it will be keyed by.
I dug into the implementation. I had assumed that the efficiency of the hashCode() method would be the key factor.
When I looked into the HashMap and the Hashtable implementations, I found that they are quite similar (with one exception). Both use and store an internal hash code for all entries, which is a good sign that hashCode() does not influence performance as heavily as one might think.
Both have a number of buckets, where the values are stored. There is an important balance between the number of buckets (say n) and the average number of keys within a bucket (say k). The bucket is found in O(1) time, the content of the bucket is iterated in O(k) time, but the more buckets we have, the more memory is allocated. Also, if many buckets are empty, it means that the hashCode() method of the key class does not spread the hash codes widely enough.
The algorithm works like this:
1. Take the `hashCode()` of the key (and apply a slight bijective transformation to it)
2. Find the appropriate bucket
3. Loop through the content of the bucket (which is some kind of LinkedList)
4. Compare the keys as follows:
   a. Compare the hash codes (calculated in the first step, and stored for the entry)
   b. Check whether key `==` the stored key (still no method call; this step is missing from Hashtable)
   c. Compare the keys by `key.equals(storedKey)`
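In code, step 4 boils down to something like this simplified paraphrase of OpenJDK's HashMap.getNode() bucket walk (Node here is a stripped-down stand-in for the real entry class):
static class Node {
    final int hash; final Object key; Object value; Node next;
    Node(int hash, Object key, Object value, Node next) {
        this.hash = hash; this.key = key; this.value = value; this.next = next;
    }
}

static Object find(Node e, int hash, Object key) {
    for (; e != null; e = e.next) {
        // cheap stored-hash and == checks first; equals() only as a last resort
        if (e.hash == hash && (e.key == key || (key != null && key.equals(e.key)))) {
            return e.value;
        }
    }
    return null;
}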
To summarize:
hashCode() is called once per call (this is a must, you cannot do without it)
equals() is called only if the hash codes are not so well spread and two keys happen to have the same hash code
The same algorithm is used for get() and put() (because in the put() case you can set the value for an existing key). So, the most important thing is how the hashCode() method is implemented; that is the most frequently called method.
Two strategies are: make it fast and make it effective (well-spread). The JDK developers made efforts to make it both, but it's not always possible to have them both.
Numeric types are good
Object (and non-overridden classes) are good (hashCode() is native), except that you cannot specify your own equals()
String is not good: it iterates through the characters, but it caches the result after that (see my comment below)
Any class with synchronized hashCode() is not good
Any class that has an iteration is not good
Classes that have hashcode cache are a bit better (depends on the usage)
Comment on the String: to make it fast, in the first versions of the JDK the String hash code calculation used only the first 32 characters. But the hash code it produced was not well spread, so they decided to take all the characters into the hash code.
We have to look up some data based on three input data fields. The lookup has to be fast. There are only about 20 possible lookup combinations. We've implemented this using a static HashMap instance where we create a key by concatenating the three data fields. Is there a better way to do this or is this the way to go? Code is below.
Update: I'm not implying that this code is slow. Just curious if there is a better way to do this. I thought there might be a more elegant solution but I'm happy to keep this in place if there are no compelling alternatives!
Create class level static HashMap instance:
private static HashMap<String, String> map = new HashMap<String, String>();
How we load data into memory:
private void load(Iterator iterator) {
    while (iterator.hasNext()) {
        MyObject o = (MyObject) iterator.next(); // the element type is assumed
        String key = o.getField1() + "-" + o.getField2() + "-" + o.getField3();
        map.put(key, o.getData());
    }
}
And how we look up the data based on the three fields:
private String getData(String f1, String f2, String f3) {
    String key = f1 + "-" + f2 + "-" + f3;
    return map.get(key);
}
Well, the question to ask yourself is of course "is it fast enough?" Because unless your application needs to be speedier and this is the bottleneck, it really doesn't matter. What you've got is already reasonably efficient.
That being said, if you want to squeeze every bit of speed possible out of this routine (without rewriting it in assembly language ;-) you might consider using an array instead of a HashMap, since there are only a small, limited number of keys. You'd have to develop some sort of hash function that hashes each object to a unique number between 0 and 19 (or however many elements you actually have). You may also be able to optimize the implementation of that hash function, although I couldn't tell you how exactly to do that without knowing the details of the objects you're working with.
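As a sketch of that array idea (perfectIndex() is hypothetical; it stands in for whatever domain-specific rule maps each of your ~20 combinations to a unique slot):
private static final String[] TABLE = new String[20];

// Hypothetical: must return a distinct value in [0, 20) per combination.
private static int perfectIndex(String f1, String f2, String f3) {
    throw new UnsupportedOperationException("domain-specific");
}

private String getData(String f1, String f2, String f3) {
    return TABLE[perfectIndex(f1, f2, f3)]; // no hashing, no equals() calls
}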
You could create a special key object having three String fields to avoid building up the key string:
class MapKey {
    public final String k1;
    public final String k2;
    public final String k3;

    public MapKey(String k1, String k2, String k3) {
        this.k1 = k1; this.k2 = k2; this.k3 = k3;
    }

    public MapKey(MyObject o) { // the stored object type is assumed here
        this.k1 = o.getField1(); this.k2 = o.getField2(); this.k3 = o.getField3();
    }

    @Override
    public int hashCode() {
        return k1.hashCode(); // if k1 is often the same, also mix in k2 and k3
    }

    @Override // a key class must also override equals, or lookups will fail
    public boolean equals(Object o) {
        if (!(o instanceof MapKey)) return false;
        MapKey m = (MapKey) o;
        return k1.equals(m.k1) && k2.equals(m.k2) && k3.equals(m.k3);
    }
}
In your case I would keep using the implementation you outlined. For a large list of constant keys mapping to constant data, you could use Minimal Perfect Hashing. As it is not trivial to code this, and I am not sure about existing libraries, you have to consider the implementation cost before using this.
I think your approach is pretty fast. Any gains by implementing your own hashing algorithm would be very small, especially compared to the effort required.
One remark about your key format: you'd better make sure that your separator cannot occur in the field values, otherwise you might get key collisions:
field1="a-", field2="b-", field3="c" -> key="a--b--c"
field1="a", field2="-b", field3="-c" -> key="a--b--c"
Concatenating strings is a bad idea for creating a key. My main objection is that it is unclear. But in practice a significant proportion of implementations have bugs, notably that the separator can actually occur in the strings. In terms of performance, I have seen a program speed up ten percent simply by changing the key from a string hack to a meaningful key object. (If you really must be lazy about code, you can use Arrays.asList to make the key, as sketched below - see the List.equals API doc.)
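The Arrays.asList trick, as a sketch: List.hashCode() and List.equals() are defined element-wise, so separator bugs are impossible and no throwaway strings are built:
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

Map<List<String>, String> map = new HashMap<>();

void put(String f1, String f2, String f3, String data) {
    map.put(Arrays.asList(f1, f2, f3), data);
}

String getData(String f1, String f2, String f3) {
    return map.get(Arrays.asList(f1, f2, f3));
}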
Since you only have 20 combinations it might be feasible to handcraft a "give me the index 1..20 of this combination" based on knowing the characteristics of each combination.
Are you in a position to list the exact list of combinations?
Another way to get this done is to create an Object to handle your key, with which you can override equals() (and hashCode()) to do a test against an incoming key, testing field1, field2 and field3 in turn.
EDIT (in response to comment):
As the value returned from hashCode() is used by your Map to put your keys into buckets (within which it will then test equals), the value could theoretically be the same for all keys. I wouldn't suggest doing that, however, as you would not reap the benefits of HashMap's performance. You would essentially be iterating over all of your items in a bucket and testing equals().
One approach you could take would be to delegate the call to hashCode() to one of the values in your key container. You could always return the hashCode from field3, for example. In this case, you will distribute your keys to potentially as many buckets as there are distinct values for field3. Once your HashMap finds the bucket, it will still need to iterate over the items in the bucket to test the result of equals() until it finds a match.
Another key hash you could create would be the sum of the values returned by hashCode() on all of your fields. As just discussed, this value does not need to be unique. Further, the potential for collisions, and therefore larger buckets, is much smaller. With that in mind, your lookups on the HashMap should be quicker. A sketch follows.
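A sketch of such a combined hash code for the three-field key above; a plain sum would satisfy the contract, but Objects.hash applies the standard 31-based combination, which spreads values better:
@Override
public int hashCode() {
    // Combines all three fields into one reasonably well-spread value.
    return java.util.Objects.hash(field1, field2, field3);
}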
EDIT2:
the question of a good hash code for this key has been answered in a separate question here
With a TreeMap it's trivial to provide a custom Comparator, thus overriding the semantics provided by Comparable objects added to the map. HashMaps however cannot be controlled in this manner; the functions providing hash values and equality checks cannot be 'side-loaded'.
I suspect it would be both easy and useful to design an interface and to retrofit this into HashMap (or a new class)? Something like this, except with better names:
interface Hasharator<T> {
int alternativeHashCode(T t);
boolean alternativeEquals(T t1, T t2);
}
class HasharatorMap<K, V> {
HasharatorMap(Hasharator<? super K> hasharator) { ... }
}
class HasharatorSet<T> {
HasharatorSet(Hasharator<? super T> hasharator) { ... }
}
The case insensitive Map problem gets a trivial solution:
new HasharatorMap(String.CASE_INSENSITIVE_EQUALITY);
Would this be doable, or can you see any fundamental problems with this approach?
Is the approach used in any existing (non-JRE) libs? (Tried google, no luck.)
EDIT: Nice workaround presented by hazzen, but I'm afraid this is the workaround I'm trying to avoid... ;)
EDIT: Changed title to no longer mention "Comparator"; I suspect this was a bit confusing.
EDIT: Accepted answer with relation to performance; would love a more specific answer!
EDIT: There is an implementation; see the accepted answer below.
EDIT: Rephrased the first sentence to indicate more clearly that it's the side-loading I'm after (and not ordering; ordering does not belong in HashMap).
.NET has this via IEqualityComparer (for a type which can compare two objects) and IEquatable (for a type which can compare itself to another instance).
In fact, I believe it was a mistake to define equality and hashcodes in java.lang.Object or System.Object at all. Equality in particular is hard to define in a way which makes sense with inheritance. I keep meaning to blog about this...
But yes, basically the idea is sound.
A bit late for you, but for future visitors, it might be worth knowing that commons-collections has an AbstractHashedMap (in 3.2.2 and with generics in 4.0). You can override these protected methods to achieve your desired behaviour:
protected int hash(Object key) { ... }
protected boolean isEqualKey(Object key1, Object key2) { ... }
protected boolean isEqualValue(Object value1, Object value2) { ... }
protected HashEntry createEntry(
HashEntry next, int hashCode, Object key, Object value) { ... }
An example implementation of such an alternative HashedMap is commons-collections' own IdentityMap (only up to 3.2.2 as Java has its own since 1.4).
This is not as powerful as providing an external "Hasharator" to a Map instance. You have to implement a new map class for every hashing strategy (composition vs. inheritance striking back...). But it's still good to know.
HashingStrategy is the concept you're looking for. It's a strategy interface that allows you to define custom implementations of equals and hashcode.
public interface HashingStrategy<E>
{
int computeHashCode(E object);
boolean equals(E object1, E object2);
}
You can't use a HashingStrategy with the built in HashSet or HashMap. GS Collections includes a java.util.Set called UnifiedSetWithHashingStrategy and a java.util.Map called UnifiedMapWithHashingStrategy.
Let's look at an example.
public class Data
{
private final int id;
public Data(int id)
{
this.id = id;
}
public int getId()
{
return id;
}
// No equals or hashcode
}
Here's how you might set up a UnifiedSetWithHashingStrategy and use it.
java.util.Set<Data> set =
new UnifiedSetWithHashingStrategy<>(HashingStrategies.fromFunction(Data::getId));
Assert.assertTrue(set.add(new Data(1)));
// contains returns true even without hashcode and equals
Assert.assertTrue(set.contains(new Data(1)));
// Second call to add() doesn't do anything and returns false
Assert.assertFalse(set.add(new Data(1)));
Why not just use a Map? UnifiedSetWithHashingStrategy uses half the memory of a UnifiedMap, and one quarter the memory of a HashMap. And sometimes you don't have a convenient key and have to create a synthetic one, like a tuple. That can waste more memory.
How do we perform lookups? Remember that Sets have contains(), but not get(). UnifiedSetWithHashingStrategy implements Pool in addition to Set, so it also implements a form of get().
Here's a simple approach to handle case-insensitive Strings.
UnifiedSetWithHashingStrategy<String> set =
new UnifiedSetWithHashingStrategy<>(HashingStrategies.fromFunction(String::toLowerCase));
set.add("ABC");
Assert.assertTrue(set.contains("ABC"));
Assert.assertTrue(set.contains("abc"));
Assert.assertFalse(set.contains("def"));
Assert.assertEquals("ABC", set.get("aBc"));
This shows off the API, but it's not appropriate for production. The problem is that the HashingStrategy constantly delegates to String.toLowerCase() which creates a bunch of garbage Strings. Here's how you can create an efficient hashing strategy for case-insensitive Strings.
public static final HashingStrategy<String> CASE_INSENSITIVE =
new HashingStrategy<String>()
{
@Override
public int computeHashCode(String string)
{
int hashCode = 0;
for (int i = 0; i < string.length(); i++)
{
hashCode = 31 * hashCode + Character.toLowerCase(string.charAt(i));
}
return hashCode;
}
@Override
public boolean equals(String string1, String string2)
{
return string1.equalsIgnoreCase(string2);
}
};
Note: I am a developer on GS collections.
Trove4j has the feature I'm after and they call it hashing strategies.
Their map has an implementation with different limitations and thus different prerequisites, so this does not implicitly mean that an implementation for Java's "native" HashMap would be feasible.
Note: As noted in all other answers, HashMaps don't have an explicit ordering. They only recognize "equality". Getting an order out of a hash-based data structure is meaningless, as each object is turned into a hash - essentially a random number.
You can always write a hash function for a class (and often times must), as long as you do it carefully. This is a hard thing to do properly because hash-based data structures rely on a random, uniform distribution of hash values. In Effective Java, there is a large amount of text devoted to properly implementing a hash method with good behaviour.
With all that being said, if you just want your hashing to ignore the case of a String, you can write a wrapper class around String for this purpose and insert those in your data structure instead.
A simple implementation:
public class LowerStringWrapper {
public LowerStringWrapper(String s) {
this.s = s;
this.lowerString = s.toLowerCase();
}
// getter methods omitted
// Rely on the hashing of String, as we know it to be good.
public int hashCode() { return lowerString.hashCode(); }
// We overrode hashCode, so we MUST also override equals. It is required
// that if a.equals(b), then a.hashCode() == b.hashCode(), so we must
// restore that invariant.
public boolean equals(Object obj) {
if (obj instanceof LowerStringWrapper) {
return lowerString.equals(((LowerStringWrapper)obj).lowerString);
} else {
return lowerString.equals(obj);
}
}
private String s;
private String lowerString;
}
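Usage is then just a matter of wrapping at the boundaries, for example:
java.util.Set<LowerStringWrapper> names = new java.util.HashSet<>();
names.add(new LowerStringWrapper("ABC"));
names.contains(new LowerStringWrapper("abc")); // true: hashing ignores case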
Good question; ask Josh Bloch. I submitted that concept as an RFE in Java 7, but it was dropped; I believe the reason was something performance related. I agree, though, it should have been done.
I suspect this has not been done because it would prevent hashCode caching?
I attempted creating a generic Map solution where all keys are silently wrapped. It turned out that the wrapper would have to hold the wrapped object, the cached hashCode and a reference to the callback interface responsible for equality checks. This is obviously not as efficient as using a wrapper class, where you'd only have to cache the original key plus one more object (see hazzen's answer).
(I also bumped into a problem related to generics; the get-method accepts Object as input, so the callback interface responsible for hashing would have to perform an additional instanceof-check. Either that, or the map class would have to know the Class of its keys.)
This is an interesting idea, but it's absolutely horrendous for performance. The reason for this is quite fundamental to the idea of a hashtable: the ordering cannot be relied upon. Hashtables are very fast (constant time) because of the way in which they index elements in the table: by computing a pseudo-unique integer hash for that element and accessing that location in an array. It's literally computing a location in memory and directly storing the element.
This contrasts with a balanced binary search tree (TreeMap) which must start at the root and work its way down to the desired node every time a lookup is required. Wikipedia has some more in-depth analysis. To summarize, the efficiency of a tree map is dependent upon a consistent ordering, thus the order of the elements is predictable and sane. However, because of the performance hit imposed by the "traverse to your destination" approach, BSTs are only able to provide O(log(n)) performance. For large maps, this can be a significant performance hit.
It is possible to impose a consistent ordering on a hashtable, but to do so involves using techniques similar to LinkedHashMap and manually maintaining the ordering. Alternatively, two separate data structures can be maintained internally: a hashtable and a tree. The table can be used for lookups, while the tree can be used for iteration. The problem of course is this uses more than double the required memory. Also, insertions are only as fast as the tree: O(log(n)). Concurrent tricks can bring this down a bit, but that isn't a reliable performance optimization.
In short, your idea sounds really good, but if you actually tried to implement it, you would see that to do so would impose massive performance limitations. The final verdict is (and has been for decades): if you need performance, use a hashtable; if you need ordering and can live with degraded performance, use a balanced binary search tree. I'm afraid there's really no way of efficiently combining the two structures without losing some of the guarantees of one or the other.
There's such a feature in com.google.common.collect.CustomConcurrentHashMap; unfortunately, there's currently no public way to set the Equivalence (their Hasharator). Maybe they're not yet done with it, maybe they don't consider the feature to be useful enough. Ask on the Guava mailing list.
I wonder why it hasn't happened yet, as it was mentioned in this talk over two years ago.