Creating a HashSet for Doubles - java

I wish to create a HashSet for real numbers (at present Doubles) using a defined tolerance (epsilon), cf. Assert.assertEquals(double, double, double).
Double.equals() only tests exact equality, and since Double is a final class I can't subclass it to change that. My initial idea is to extend HashSet (e.g. to DoubleHashSet) with a setEpsilon(double) method, and to create a new class ComparableDouble whose equals() uses this value from DoubleHashSet. However, I'd first like to check whether there are existing solutions, ideally in existing F/OSS libraries.
(In the future I shall want to extend this to tuples of real numbers - e.g. rectangles and cubes - so a generic approach is preferable.)
NOTE: @NPE has suggested it's impossible. Unfortunately I suspect this is formally correct :-) So I'm wondering if there are approximate methods... Others must have had this problem and solved it approximately. (I already regularly use a tool Real.isEqual(a, b, epsilon) and it's very useful.) I am prepared to accept some infrequent errors of transitivity.
NOTE: I shall use a TreeSet, as that solves the problem of "nearly equals()". Later I shall be comparing complexNumbers, rectangles (and more complex objects), and it's really useful to be able to set a limit within which two things are equal. There is no simple natural ordering of complexNumbers (perhaps a Cantor approach would work), but we can tell whether they are nearly equal.

There are some fundamental flaws in this approach.
HashSet uses equals() to check two elements for equality. The contract on equals() has the following among its requirements:
It is transitive: for any non-null reference values x, y, and z, if x.equals(y) returns true and y.equals(z) returns true, then x.equals(z) should return true.
Now consider the following example:
x = 0.0
y = 0.9 * epsilon
z = 1.8 * epsilon
It is clear that your proposed comparison scheme would break the transitivity requirement (x equals y and y equals z, yet x doesn't equal z). In these circumstances, HashSet cannot function correctly.
Furthermore, hashCode() will produce additional challenges, due to the following requirement:
If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
The hashCode() requirement can be sidestepped by using a TreeSet instead of HashSet.

What I would do is round the doubles before using them (assuming this is appropriate), e.g.:
public static double roundByFactor(double d, long factor) {
    return (double) Math.round(d * factor) / factor;
}

TDoubleHashSet set = new TDoubleHashSet(); // GNU Trove; more efficient than HashSet<Double>
set.add(roundByFactor(1.001, 100));
set.add(roundByFactor(1.005, 100));
set.add(roundByFactor(1.01, 100));
// set has two elements.
You can wrap this behaviour in your own DoubleHashSet. If you want to preserve the original value you can use a HashMap or a TDoubleDoubleHashMap where the key is the rounded value and the value is the original.
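A minimal sketch of that rounded-key map idea using a plain java.util.HashMap (the class name and sample values are mine; a Trove TDoubleDoubleHashMap would be used the same way, just with primitive keys):
import java.util.HashMap;
import java.util.Map;

public class RoundedKeyMapSketch {

    // same rounding helper as above
    public static double roundByFactor(double d, long factor) {
        return (double) Math.round(d * factor) / factor;
    }

    public static void main(String[] args) {
        Map<Double, Double> byRoundedValue = new HashMap<>();
        for (double original : new double[] {1.001, 1.0049, 1.01}) {
            // key = rounded value, value = the first original value seen for that key
            byRoundedValue.putIfAbsent(roundByFactor(original, 100), original);
        }
        System.out.println(byRoundedValue); // two entries, e.g. {1.0=1.001, 1.01=1.01}
    }
}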

I have implemented @NPE's approach (I have accepted his/her answer so s/he gets the points :-) and give the code here.
// Create a comparator:
import java.util.Comparator;

public class RealComparator implements Comparator<Double> {

    private double epsilon = 0.0d;

    public RealComparator(double eps) {
        this.setEpsilon(eps);
    }

    /**
     * Returns 0 if Math.abs(d0 - d1) <= epsilon;
     * returns -1 if either argument is null.
     */
    @Override
    public int compare(Double d0, Double d1) {
        if (d0 == null || d1 == null) {
            return -1;
        }
        double delta = Math.abs(d0 - d1);
        if (delta <= epsilon) {
            return 0;
        }
        return (d0 < d1) ? -1 : 1;
    }

    /**
     * Set the tolerance; negative values are converted to positive.
     * @param epsilon the tolerance
     */
    public void setEpsilon(double epsilon) {
        this.epsilon = Math.abs(epsilon);
    }
}
and test it:
public final static Double ONE = 1.0;
public final static Double THREE = 3.0;

@Test
public void testTreeSet() {
    RealComparator comparator = new RealComparator(0.0);
    Set<Double> set = new TreeSet<Double>(comparator);
    set.add(ONE);
    set.add(ONE);
    set.add(THREE);
    Assert.assertEquals(2, set.size());
}

@Test
public void testTreeSet1() {
    RealComparator comparator = new RealComparator(0.0);
    Set<Double> set = new TreeSet<Double>(comparator);
    set.add(ONE);
    set.add(ONE - 0.001);
    set.add(THREE);
    Assert.assertEquals(3, set.size());
}

@Test
public void testTreeSet2() {
    RealComparator comparator = new RealComparator(0.01);
    Set<Double> set = new TreeSet<Double>(comparator);
    set.add(ONE);
    set.add(ONE - 0.001);
    set.add(THREE);
    Assert.assertEquals(2, set.size());
}

@Test
public void testTreeSet3() {
    RealComparator comparator = new RealComparator(0.01);
    Set<Double> set = new TreeSet<Double>(comparator);
    set.add(ONE - 0.001);
    set.add(ONE);
    set.add(THREE);
    Assert.assertEquals(2, set.size());
}

Create a unique hashCode based on many values

I am trying to implement a unique hashCode based on six different values. My class has the following attributes:
private int id_place;
private String algorithm;
private Date mission_date;
private int mission_hour;
private int x;
private int y;
I am calculating the hashCode as following:
id_place * (7 * algorithm.hashCode()) + (31 * mission_date.hashCode()) + (23 * mission_hour + 89089) + (x * 19 + 67067) + (y * 11 + 97097);
How can I turn it into a unique hashCode? I'm not confident it is unique...
It doesn't have to be unique and it cannot be unique. hashCode() returns an int (32 bits), which means it could be unique if you only had one int property and nothing else.
The Integer class can (and does) have a unique hashCode(), but few other classes do.
Since you have multiple properties, some of which are int, a hashCode() that is a function of these properties can't be unique.
You should strive for a hashCode() function that gives a wide range of different values for different combinations of your properties, but it cannot be unique.
The hash code for two different objects need not be unique. According to https://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#hashCode() -
Whenever it is invoked on the same object more than once during an execution of a Java application, hashCode() must consistently return the same value, provided no information used in equals comparisons on the object is modified. This value need not remain consistent from one execution of an application to another execution of the same application.
If two objects are equal according to the equals(Object) method, then calling the hashCode() method on each of the two objects must produce the same value.
It is not required that if two objects are unequal according to the equals(java.lang.Object) method, then calling the hashCode method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hash tables.
So you don't have to create a hashCode() function which returns a distinct hash code every time.
Unique is not a hard requirement, but the more unique the hash code is, the better.
Note first that the hash code is generally used by a HashMap as an index into a 'bucket'. Hence, optimally, it should be unique modulo the number of buckets - which, however, may vary as the map grows.
But okay, towards an optimal hash code:
Ranges are important: if x and y were in 0..255, they could be packed uniquely into two bytes; if in 0..999, then y * 1000 + x works. For LocalDateTime, one could take the long in seconds (instead of ms or ns) since, say, 2012-01-01, and assume a range from 0 up to a couple of years in the future.
You can explore existing data or generate plausible test data. One can then optimise the hash code function's coefficients (the 7, 31, 23, ... above) mathematically. This is an optimisation problem, but one can also do it by simple trial and error: counting the clashes for varying candidate coefficients (A, B, C, ...):
// int[] coefficients = ...;
int[][] coefficientsCandidates = new int[NUM_OF_CANDIDATES][NUM_OF_COEFFS];
...
int[] collisionCounts = new int[NUM_OF_CANDIDATES];
for (Data data : allTestData) {
    ... // update collisionCounts for every candidate
}
... // take the candidate with the smallest collision count,
    // or sort by collisionCounts and pick other candidates to try out
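A rough, runnable sketch of that trial-and-error idea (the Data fields, method names and coefficient count below are made up, not taken from the question):
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HashCollisionCounter {

    // Toy stand-in for the real record type.
    static class Data {
        final int idPlace, missionHour, x, y;
        Data(int idPlace, int missionHour, int x, int y) {
            this.idPlace = idPlace;
            this.missionHour = missionHour;
            this.x = x;
            this.y = y;
        }
    }

    // Hash one record with a candidate coefficient set c[0..3].
    static int hashWith(int[] c, Data d) {
        int h = 1;
        h = c[0] * h + d.idPlace;
        h = c[1] * h + d.missionHour;
        h = c[2] * h + d.x;
        h = c[3] * h + d.y;
        return h;
    }

    // Count how many records collide with an earlier record's hash value.
    static int countCollisions(int[] coefficients, List<Data> allTestData) {
        Set<Integer> seen = new HashSet<>();
        int collisions = 0;
        for (Data data : allTestData) {
            if (!seen.add(hashWith(coefficients, data))) {
                collisions++;
            }
        }
        return collisions;
    }
}
One would call countCollisions once per row of coefficientsCandidates and keep (or further tweak) the candidate with the smallest count.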
In general such evaluation code is not needed for a working hash code, but it can detect bad hash codes where some pseudo-randomness goes wrong - for instance, a factor that is far too large for a field's range (weekday * 1000), so that value holes appear.
But in all honesty, one also has to say that all this effort is probably not really needed.
In Eclipse, there is a function that generates the method public int hashCode() for you. I used the class attributes you provided and the result is as follows:
@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + ((algorithm == null) ? 0 : algorithm.hashCode());
    result = prime * result + id_place;
    result = prime * result + ((mission_date == null) ? 0 : mission_date.hashCode());
    result = prime * result + mission_hour;
    result = prime * result + x;
    result = prime * result + y;
    return result;
}
It looks a lot like your calculation. However, as Andy Turner pointed out in a comment to your question, and Eran in an answer, you simply cannot make a unique hash code for every single instance of an object if the number of possible instances exceeds the number of possible hash codes (2^32 for an int).
Because you have multiple fields, use:
@Override
public int hashCode() {
    // java.util.Objects.hash (Java 7+)
    return Objects.hash(id_place, algorithm, mission_date, mission_hour, x, y);
}
If objA.equals(objB) is true, then objA and objB must return the same hash code.
If objA.equals(objB) is false, objA and objB may still return the same hash code; if your hashing algorithm happens to return different hash codes in this case, that is merely good for performance.
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    ClassA classA = (ClassA) o;
    return id_place == classA.id_place &&
            mission_hour == classA.mission_hour &&
            x == classA.x &&
            y == classA.y &&
            Objects.equals(algorithm, classA.algorithm) &&
            Objects.equals(mission_date, classA.mission_date);
}

Sum of BigDecimal(s) created from possible null Double

In order to avoid a possible loss of precision in Java operations on Double objects, e.g.:
Double totalDouble = new Double(1590.0);
Double taxesDouble = new Double(141.11);
Double totalWithTaxes = Double.sum(totalDouble, taxesDouble);
System.out.println(totalWithTaxes); // KO: 1731.1100000000001
I wrote this code, where totalDouble and taxesDouble could also be null:
Double totalDouble = myObject.getTotalDouble();
Double taxesDouble = myObject.getTaxesDouble();
BigDecimal totalBigDecimalNotNull = (totalDouble == null) ? BigDecimal.valueOf(0d) : BigDecimal.valueOf(totalDouble);
BigDecimal taxesBigDecimalNotNull = (taxesDouble == null) ? BigDecimal.valueOf(0d) : BigDecimal.valueOf(taxesDouble);
BigDecimal totalWithTaxesBigDecimal = totalBigDecimalNotNull.add(taxesBigDecimalNotNull);
System.out.println(totalWithTaxesBigDecimal);
Is there a better way (possibly with third-party libraries, e.g. Guava) to initialize the BigDecimal in these cases (zero if the Double is null, the Double's value otherwise)?
Not really. That is to say, you're still going to need to make a decision based on whether or not the value is null, but you can do it more cleanly if you use the Optional pattern.
You can change getTotalDouble and getTaxesDouble to return Optional<Double> instead, to mitigate having to do the ternary...
public Optional<Double> getTotalDouble() {
    return Optional.ofNullable(totalDouble);
}

public Optional<Double> getTaxesDouble() {
    return Optional.ofNullable(taxesDouble);
}
...then, you can use the conditional evaluation provided by Optional itself to evaluate and return a default value.
BigDecimal totalBigDecimalNotNull =
        BigDecimal.valueOf(myObject.getTotalDouble().orElse(0d));
A simplification would be to return Optional<BigDecimal> instead, as opposed to transforming the value that you want in this fashion.
As an addendum, be careful when talking about precision. For money there is standing advice to use an int or long (e.g. whole cents) instead, so that you don't lose any precision on coin amounts.
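For instance, keeping the question's amounts as whole cents in a long (my own illustration, not code from the question):
long totalCents = 159_000;                          // 1590.00
long taxesCents = 14_111;                           // 141.11
long totalWithTaxesCents = totalCents + taxesCents; // 173_111, i.e. exactly 1731.11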
Whether you use Optional or not, I recommend creating a static helper method so that you don't have to repeat yourself, e.g.:
public static BigDecimal bigDecimalValueOfOrZero(Double val) {
    return val == null ? BigDecimal.ZERO : BigDecimal.valueOf(val);
}
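For example, with the question's original getters:
BigDecimal totalWithTaxes = bigDecimalValueOfOrZero(myObject.getTotalDouble())
        .add(bigDecimalValueOfOrZero(myObject.getTaxesDouble()));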

Robust Map<Double, sth> in Java

I am looking for a robust Map in Java, where the key lookup would take into account that Double has a limited precision (something around 1e-15 or 1e-16). Where could I find such a thing?
EDIT: Following Jon's advice, I think it would make sense to define the equivalence explicitly. One idea would be to centre the equivalence classes at numbers rounded to the 15 most significant decimal digits; other numbers would be rounded to these (in any consistent way - whatever is fastest to implement). Would this make sense? What would be the best implementation?
I'd suggest you use a TreeMap and implement your own custom Comparator that compares two double values taking the required precision into account.
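A minimal sketch of that suggestion (the tolerance and values are illustrative; the comparator is essentially the RealComparator from the first question above):
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

public class ToleranceMapSketch {
    public static void main(String[] args) {
        final double epsilon = 1e-15;                 // illustrative tolerance
        Comparator<Double> withinEpsilon = (a, b) -> {
            if (Math.abs(a - b) <= epsilon) {
                return 0;                             // treated as the same key
            }
            return a < b ? -1 : 1;
        };

        Map<Double, String> map = new TreeMap<>(withinEpsilon);
        map.put(0.1 + 0.2, "a");
        System.out.println(map.get(0.3));             // "a", although 0.1 + 0.2 != 0.3 exactly
    }
}
The same non-transitivity caveat from the HashSet question above applies to this comparator as well.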
IMHO the best approach is to normalise the values before adding them or looking them up, e.g. by rounding.
BTW: you can use TDoubleObjectHashMap (from GNU Trove), which supports custom hashing strategies and uses primitive double keys.
I'm not completely sure what you need it for, but you can implement a wrapper around Double and override its hashCode() and equals() methods to implement your "limited precision" lookup. Then any Map implementation will be robust, because it relies on hashCode() and equals() for key lookup.
Of course, your map will then have the form Map<DoubleWrapper, smth>.
Summing up the answers and comments above, I ended up with the following wrapper (which probably doesn't handle NaN atm):
public static class DoubleWrapper {

    private static final int PRECISION = 15;
    private final Double roundedValue;

    public DoubleWrapper(double value) {
        // round to PRECISION significant decimal digits
        final double d = Math.ceil(Math.log10(value < 0 ? -value : value));
        final int power = PRECISION - (int) d;
        final double magnitude = Math.pow(10, power);
        final long shifted = Math.round(value * magnitude);
        roundedValue = shifted / magnitude;
    }

    public double getDouble() {
        return roundedValue;
    }

    @Override
    public boolean equals(Object obj) {
        // equal iff the other object is a DoubleWrapper with the same rounded value
        return obj instanceof DoubleWrapper
                && roundedValue.equals(((DoubleWrapper) obj).roundedValue);
    }

    @Override
    public int hashCode() {
        return roundedValue.hashCode();
    }
}
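A quick usage sketch (my own example values, relying on the equals()/hashCode() above): two doubles that differ only beyond the 15th significant digit end up as the same key.
Map<DoubleWrapper, String> map = new HashMap<>();    // java.util.HashMap
map.put(new DoubleWrapper(0.1 + 0.2), "a");          // 0.30000000000000004 rounds to 0.3
System.out.println(map.get(new DoubleWrapper(0.3))); // prints "a"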

Best implementation for hashCode method for a collection

How do we decide on the best implementation of hashCode() method for a collection (assuming that equals method has been overridden correctly) ?
The best implementation? That is a hard question because it depends on the usage pattern.
An implementation that is reasonably good for nearly all cases was proposed by Josh Bloch in Effective Java, Item 9 (second edition). The best thing is to look it up there, because the author explains why the approach is good.
A short version:
Create an int result and assign it a non-zero value.
For every field f tested in the equals() method, calculate a hash code c as follows:
If the field f is a boolean: calculate (f ? 1 : 0);
If the field f is a byte, char, short or int: calculate (int) f;
If the field f is a long: calculate (int) (f ^ (f >>> 32));
If the field f is a float: calculate Float.floatToIntBits(f);
If the field f is a double: calculate Double.doubleToLongBits(f) and handle the resulting long as described above;
If the field f is an object: use the result of its hashCode() method, or 0 if f == null;
If the field f is an array: treat every element as a separate field, calculate its hash value recursively, and combine the values as described next.
Combine the hash value c with result:
result = 37 * result + c;
Return result.
This should result in a proper distribution of hash values for most usage situations.
If you're happy with the Effective Java implementation recommended by dmeister, you can use a library call instead of rolling your own:
@Override
public int hashCode() {
    return Objects.hash(this.firstName, this.lastName);
}
This requires either Guava (com.google.common.base.Objects.hashCode) or the standard library in Java 7 (java.util.Objects.hash) but works the same way.
Although this links to the Android documentation (Wayback Machine) and my own code on GitHub, it will work for Java in general. My answer is an extension of dmeister's answer, with code that is much easier to read and understand.
@Override
public int hashCode() {
    // Start with a non-zero constant. A prime is preferred.
    int result = 17;

    // Include a hash for each field.

    // Primitives
    result = 31 * result + (booleanField ? 1 : 0);                  // 1 bit   » 32-bit
    result = 31 * result + byteField;                               // 8 bits  » 32-bit
    result = 31 * result + charField;                               // 16 bits » 32-bit
    result = 31 * result + shortField;                              // 16 bits » 32-bit
    result = 31 * result + intField;                                // 32 bits » 32-bit
    result = 31 * result + (int) (longField ^ (longField >>> 32));  // 64 bits » 32-bit
    result = 31 * result + Float.floatToIntBits(floatField);        // 32 bits » 32-bit
    long doubleFieldBits = Double.doubleToLongBits(doubleField);    // 64 bits (double) » 64-bit (long) » 32-bit (int)
    result = 31 * result + (int) (doubleFieldBits ^ (doubleFieldBits >>> 32));

    // Objects
    result = 31 * result + Arrays.hashCode(arrayField);             // var bits » 32-bit
    result = 31 * result + referenceField.hashCode();               // var bits » 32-bit (non-nullable)
    result = 31 * result +                                          // var bits » 32-bit (nullable)
            (nullableReferenceField == null
                    ? 0
                    : nullableReferenceField.hashCode());

    return result;
}
EDIT
Typically, when you override hashCode(...), you also want to override equals(...). So for those who will implement or have already implemented equals, here is a good reference from my GitHub...
@Override
public boolean equals(Object o) {
    // Optimization (not required).
    if (this == o) {
        return true;
    }
    // Return false if the other object has the wrong type, interface, or is null.
    if (!(o instanceof MyType)) {
        return false;
    }
    MyType lhs = (MyType) o; // lhs means "left hand side"
    // Primitive fields
    return booleanField == lhs.booleanField
            && byteField == lhs.byteField
            && charField == lhs.charField
            && shortField == lhs.shortField
            && intField == lhs.intField
            && longField == lhs.longField
            && floatField == lhs.floatField
            && doubleField == lhs.doubleField
            // Arrays
            && Arrays.equals(arrayField, lhs.arrayField)
            // Objects
            && referenceField.equals(lhs.referenceField)
            && (nullableReferenceField == null
                    ? lhs.nullableReferenceField == null
                    : nullableReferenceField.equals(lhs.nullableReferenceField));
}
It is better to use the functionality provided by Eclipse, which does a pretty good job, so you can put your effort and energy into developing the business logic.
First make sure that equals is implemented correctly. From an IBM DeveloperWorks article:
Symmetry: For two references, a and b, a.equals(b) if and only if b.equals(a)
Reflexivity: For all non-null references, a.equals(a)
Transitivity: If a.equals(b) and b.equals(c), then a.equals(c)
Then make sure that their relation with hashCode respects the contract (from the same article):
Consistency with hashCode(): Two equal objects must have the same hashCode() value
Finally a good hash function should strive to approach the ideal hash function.
about8.blogspot.com, you said
if equals() returns true for two objects, then hashCode() should return the same value. If equals() returns false, then hashCode() should return different values
I cannot agree with you. If two objects have the same hash code, it doesn't mean that they are equal.
If A equals B, then A.hashCode must be equal to B.hashCode,
but
if A.hashCode equals B.hashCode, it does not mean that A must equal B.
If you use eclipse, you can generate equals() and hashCode() using:
Source -> Generate hashCode() and equals().
Using this function you can decide which fields you want to use for equality and hash code calculation, and Eclipse generates the corresponding methods.
There's a good implementation of Effective Java's hashCode() and equals() logic in Apache Commons Lang. Check out HashCodeBuilder and EqualsBuilder.
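A minimal sketch of how those builders are typically used (the Person class and its fields are made up for illustration; assumes Commons Lang 3):
import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;

public class Person {

    private String firstName;
    private String lastName;

    @Override
    public int hashCode() {
        // Arbitrary non-zero odd seed and multiplier; append every field used in equals().
        return new HashCodeBuilder(17, 37)
                .append(firstName)
                .append(lastName)
                .toHashCode();
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof Person)) {
            return false;
        }
        Person other = (Person) obj;
        return new EqualsBuilder()
                .append(firstName, other.firstName)
                .append(lastName, other.lastName)
                .isEquals();
    }
}
There are also reflection-based variants, EqualsBuilder.reflectionEquals(...) and HashCodeBuilder.reflectionHashCode(...), which a later answer refers to.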
Just a quick note to complement the other, more detailed answers (in terms of code):
If I consider the question how-do-i-create-a-hash-table-in-java and especially the jGuru FAQ entry, I believe some other criteria upon which a hash code could be judged are:
synchronization (does the algorithm support concurrent access or not)?
fail-safe iteration (does the algorithm detect a collection which changes during iteration)?
null values (does the hash code support null values in the collection)?
If I understand your question correctly, you have a custom collection class (i.e. a new class that implements the Collection interface) and you want to implement the hashCode() method.
If your collection class extends AbstractList, then you don't have to worry about it; there is already an implementation of equals() and hashCode() that works by iterating through all the objects and combining their hashCodes together.
public int hashCode() {
    int hashCode = 1;
    Iterator i = iterator();
    while (i.hasNext()) {
        Object obj = i.next();
        hashCode = 31 * hashCode + (obj == null ? 0 : obj.hashCode());
    }
    return hashCode;
}
Now if what you want is the best way to calculate the hash code for a specific class, I normally use the ^ (bitwise exclusive or) operator to process all fields that I use in the equals method:
public int hashCode() {
    return intMember ^ (stringField != null ? stringField.hashCode() : 0);
}
@about8: there is a pretty serious bug there.
Zam obj1 = new Zam("foo", "bar", "baz");
Zam obj2 = new Zam("fo", "obar", "baz");
same hashCode
you probably want something like
public int hashCode() {
    return String.valueOf(getFoo().hashCode() + getBar().hashCode()).hashCode();
}
(Can you get a hashCode directly from an int in Java these days? I think it does some autoboxing... if that's the case, skip the toString step, it's ugly.)
As you specifically asked about collections, I'd like to add an aspect that the other answers haven't mentioned yet: a HashMap doesn't expect its keys to change their hash code once they are added to the collection. That would defeat the whole purpose...
Use the reflection methods on Apache Commons EqualsBuilder and HashCodeBuilder.
I use a tiny wrapper around Arrays.deepHashCode(...) because it handles arrays supplied as parameters correctly:
public static int hash(final Object... objects) {
    return Arrays.deepHashCode(objects);
}
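For example (my own illustration): the wrapper hashes array arguments by content, whereas passing the same arrays to java.util.Objects.hash would fall back to their identity hash codes:
int h1 = hash("abc", new int[] {1, 2, 3});
int h2 = hash("abc", new int[] {1, 2, 3});
// h1 == h2, because Arrays.deepHashCode hashes the int[] contents,
// so two distinct arrays with equal elements produce the same value.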
Any hashing method that evenly distributes the hash values over the possible range is a good implementation. See Effective Java (http://books.google.com.au/books?id=ZZOiqZQIbRMC&dq=effective+java&pg=PP1&ots=UZMZ2siN25&sig=kR0n73DHJOn-D77qGj0wOxAxiZw&hl=en&sa=X&oi=book_result&resnum=1&ct=result); there is a good tip in there for hashCode implementation (Item 9, I think...).
I prefer using the utility methods from the Google Collections library's Objects class, which helps me keep my code clean. Very often, equals and hashCode methods are made from an IDE's template, so they are not clean to read.
Here is another JDK 1.7+ approach, with the superclass logic taken into account. I see it as pretty convenient: the Object class's hashCode() is taken into account, it is a pure JDK dependency, and there is no extra manual work. Please note that Objects.hash() is null tolerant.
I have not included any equals() implementation, but in reality you will of course need it.
import java.util.Objects;

public class Demo {

    public static class A {

        private final String param1;

        public A(final String param1) {
            this.param1 = param1;
        }

        @Override
        public int hashCode() {
            return Objects.hash(
                    super.hashCode(),
                    this.param1);
        }
    }

    public static class B extends A {

        private final String param2;
        private final String param3;

        public B(
                final String param1,
                final String param2,
                final String param3) {
            super(param1);
            this.param2 = param2;
            this.param3 = param3;
        }

        @Override
        public final int hashCode() {
            return Objects.hash(
                    super.hashCode(),
                    this.param2,
                    this.param3);
        }
    }

    public static void main(String[] args) {
        A a = new A("A");
        B b = new B("A", "B", "C");
        System.out.println("A: " + a.hashCode());
        System.out.println("B: " + b.hashCode());
    }
}
The standard implementation is weak and using it leads to unnecessary collisions. Imagine a
class ListPair {

    List<Integer> first;
    List<Integer> second;

    ListPair(List<Integer> first, List<Integer> second) {
        this.first = first;
        this.second = second;
    }

    public int hashCode() {
        return Objects.hash(first, second); // java.util.Objects
    }

    ...
}
Now,
new ListPair(List.of(a), List.of(b, c))
and
new ListPair(List.of(b), List.of(a, c))
have the same hashCode, namely 31*(a+b) + c (plus a constant), as the multiplier used by List.hashCode gets reused here. Obviously, collisions are unavoidable, but producing needless collisions is just... needless.
There's nothing substantially smart about using 31. The multiplier must be odd in order to avoid losing information (any even multiplier loses at least the most significant bit; multiples of four lose two, etc.). Any odd multiplier is usable. Small multipliers may lead to faster computation (the JIT can use shifts and additions), but given that multiplication has a latency of only three cycles on modern Intel/AMD CPUs, this hardly matters. Small multipliers also lead to more collisions for small inputs, which may be a problem sometimes.
Using a prime is pointless, as primes have no meaning in the ring Z/(2**32).
So I'd recommend using a randomly chosen big odd number (feel free to take a prime). As x86/amd64 CPUs can use a shorter instruction for operands fitting in a single signed byte, there is a tiny speed advantage for multipliers like 109. For minimizing collisions, take something like 0x58a54cf5.
Using different multipliers in different places is helpful, but probably not enough to justify the additional work.
When combining hash values, I usually use the combining method that's used in the Boost C++ libraries, namely:
seed ^= hasher(v) + 0x9e3779b9 + (seed<<6) + (seed>>2);
This does a fairly good job of ensuring an even distribution. For some discussion of how this formula works, see the StackOverflow post: Magic number in boost::hash_combine
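A Java transliteration of that formula (my own sketch, not from the original answer; note the unsigned shift, since Java ints are signed):
// Mixes the hash of one value into an accumulated seed.
public static int combine(int seed, Object value) {
    int h = (value == null) ? 0 : value.hashCode();
    // 0x9e3779b9 is the 32-bit golden-ratio constant used by Boost.
    seed ^= h + 0x9e3779b9 + (seed << 6) + (seed >>> 2);
    return seed;
}
Fields are then folded in one by one, e.g. seed = combine(combine(0, firstName), lastName).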
There's a good discussion of different hash functions at: http://burtleburtle.net/bob/hash/doobs.html
For a simple class it is often easiest to implement hashCode() based on the class fields which are checked by the equals() implementation.
public class Zam {

    private String foo;
    private String bar;
    private String somethingElse;

    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        Zam otherObj = (Zam) obj;
        if ((getFoo() == null && otherObj.getFoo() == null) || (getFoo() != null && getFoo().equals(otherObj.getFoo()))) {
            if ((getBar() == null && otherObj.getBar() == null) || (getBar() != null && getBar().equals(otherObj.getBar()))) {
                return true;
            }
        }
        return false;
    }

    public int hashCode() {
        return (getFoo() + getBar()).hashCode();
    }

    public String getFoo() {
        return foo;
    }

    public String getBar() {
        return bar;
    }
}
The most important thing is to keep hashCode() and equals() consistent: if equals() returns true for two objects, then hashCode() should return the same value. If equals() returns false, then hashCode() should return different values.
