Does Java Float.compare() Always Produce The Correct Result?

This seems like a question I could easily find an answer for, but I couldn't see any entry on it. I know how floating-point arithmetic works and that, in order to compare floating-point numbers, I need to use an epsilon check. When I shared this with my team, one of my colleagues asked me the following and I couldn't answer.
Does the compare method in Java always produce the correct result, i.e., the result an epsilon check on f1 and f2 would yield?
Float.compare(float f1, float f2);
Note: Especially consider this question for the equality case.

No, Float.compare does not use any kind of epsilon checking.
Here's, for example, the OpenJDK 13 implementation of the method:
/**
 * Compares the two specified {@code float} values. The sign
 * of the integer value returned is the same as that of the
 * integer that would be returned by the call:
 * <pre>
 *    new Float(f1).compareTo(new Float(f2))
 * </pre>
 *
 * @param   f1        the first {@code float} to compare.
 * @param   f2        the second {@code float} to compare.
 * @return  the value {@code 0} if {@code f1} is
 *          numerically equal to {@code f2}; a value less than
 *          {@code 0} if {@code f1} is numerically less than
 *          {@code f2}; and a value greater than {@code 0}
 *          if {@code f1} is numerically greater than
 *          {@code f2}.
 * @since 1.4
 */
public static int compare(float f1, float f2) {
    if (f1 < f2)
        return -1;           // Neither val is NaN, thisVal is smaller
    if (f1 > f2)
        return 1;            // Neither val is NaN, thisVal is larger

    // Cannot use floatToRawIntBits because of possibility of NaNs.
    int thisBits    = Float.floatToIntBits(f1);
    int anotherBits = Float.floatToIntBits(f2);

    return (thisBits == anotherBits ?  0 : // Values are equal
            (thisBits < anotherBits ? -1 : // (-0.0, 0.0) or (!NaN, NaN)
             1));                          // (0.0, -0.0) or (NaN, !NaN)
}
Source

If you read the Javadoc of Float.compare(), it talks about "numerically equal". That means that values which represent the same theoretical number but are encoded differently are considered equal. Examples are 0.0 == -0.0 and subnormal numbers.
Epsilon checks (where you check whether two numbers are within some small range of one another) are not a de facto standard, and they have a lot of practical issues (like which epsilon to choose when you don't know the magnitude of the numbers, or what to do when the two numbers have vastly different magnitudes). For that reason, Java only implements exact operations.
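For example, a small self-contained sketch (the variable names are mine, and 1.2f * 3.0f is just a convenient value that is one rounding step away from 3.6f):
public class FloatCompareDemo {
    public static void main(String[] args) {
        float product = 1.2f * 3.0f;   // evaluates to 3.6000001f because of rounding
        float literal = 3.6f;          // the nearest float to 3.6 (about 3.5999999)

        // Float.compare does an exact comparison: prints 1 (positive), i.e. "not equal"
        System.out.println(Float.compare(product, literal));

        // An explicit epsilon check treats them as equal: prints true
        float epsilon = 1e-5f;
        System.out.println(Math.abs(product - literal) < epsilon);
    }
}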

No, java.lang.Float.compare(float, float) does not return 0 (equal) for values within a hidden epsilon value of each other.
However, you can easily do such a comparison by hand, or with one of a few common libraries.
By hand, you can write a function that returns zero if a fuzzy check for equality passes, else returns Float.compare(). The fuzzy check for equality can be something like Math.abs( f1 - f2 ) < epsilon.
Alternatively, in Apache Commons Math, the Precision class provides compareTo() and equals() methods for floats and doubles that accept an epsilon value. The class also provides a default epsilon value in the constant EPSILON.
Alternatively, in Google Guava, the DoubleMath class provides fuzzyCompare() and fuzzyEquals() methods that accept doubles and an epsilon value.
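As a rough sketch of the hand-rolled approach above (the method name and the choice of epsilon are mine, not from any library):
public final class FuzzyFloats {
    private FuzzyFloats() {}

    /** Returns 0 if f1 and f2 are within epsilon of each other, otherwise Float.compare(f1, f2). */
    public static int fuzzyCompare(float f1, float f2, float epsilon) {
        if (Math.abs(f1 - f2) < epsilon) {
            return 0;                     // fuzzy equality passes
        }
        return Float.compare(f1, f2);     // fall back to the exact comparison
    }

    public static void main(String[] args) {
        System.out.println(fuzzyCompare(1.2f * 3.0f, 3.6f, 1e-5f)); // 0: within epsilon
        System.out.println(fuzzyCompare(1.0f, 2.0f, 1e-5f));        // -1: clearly smaller
    }
}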

Sounds like you need to check that two float values are equal within the given non-negative delta, correct?
You can use JUnit's Assertions.assertEquals() method to achieve that:
assertEquals(float f1, float f2, float delta)
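For example, a minimal JUnit 5 sketch (the test class and the tested expression are just illustrative):
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class FloatDeltaTest {

    @Test
    void productIsCloseEnoughTo3point6() {
        // Passes as long as |expected - actual| <= delta
        assertEquals(3.6f, 1.2f * 3.0f, 1e-5f);
    }
}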

Commonly done with a test like: if (Math.abs(a - b) < DELTA) ... where DELTA is globally declared as some very small numeric constant for consistency throughout the program. Since the test is so simple, I like to state it explicitly.

Related

My method always returns 0.0 regardless of datatype, equation format, or input variables

I've been making a method to evaluate a randomly generated planet's temperature based on statistics given.
The method uses three doubles for input, and returns a double. Because of the scale of operations, long primitives had to be used for some equations. I am not very familiar with them.
The value is expected to be anywhere from 0 and higher, but it always evaluates to 0.
public double getSurfaceTemperature(double starLuminosity, double greenhouse, double albedo)
{
    double tGreenhouse = greenhouse * 0.5841;
    long luminosity = Math.round((3.846 * Math.pow(10.0, 33)) * starLuminosity);
    double sbc = 0.000056703;
    long x = Math.round(Math.sqrt((1 - albedo) * (luminosity / (16.0 * 3.14 * sbc))));
    long dts = Math.round(14960000000000L * lbr.gaussianValue(1, 0.2, 5)); // lbr: helper object defined elsewhere in my code (not shown)
    double tEff = Math.sqrt(x) * (1.0 / Math.sqrt(dts));
    long tEq = Math.round(Math.pow(tEff, 4) * (1 + (3.0 * tGreenhouse / 4.0)));
    long tSur = Math.round(tEq / 0.9);
    double tKel = Math.round(Math.sqrt(Math.sqrt(tSur)));
    return tKel - 273;
}
I've tried different rounding in hopes that maybe it was rounding to zero, casting to other primitives to make the equation work, changing the format of the equations in case the order of operations failed, and making sure the input variables are never zero. The rounding didn't turn out to be the problem, because the numbers are so large that they can't possibly round that low. Why does the method always return 0.0 despite all the values having high magnitude? I adapted the equation from Artifexian's Worldsmith, and there may have been some errors in translating it from Google Sheets to Java. I don't entirely know how long values work differently from ints, but I've had a pretty good introduction through numerous error messages. Right now I'm just assuming that maybe I don't understand data types too well.
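As a side note on the long range mentioned above (this is only an illustration of how Math.round() behaves near the limits of long, not a diagnosis of this particular method): a value on the order of 3.846e33 cannot fit in a long at all, and Math.round() silently saturates at Long.MAX_VALUE:
public class LongRangeDemo {
    public static void main(String[] args) {
        double luminosity = 3.846 * Math.pow(10.0, 33);   // ~3.846e33, far beyond the long range

        System.out.println(Long.MAX_VALUE);                            // 9223372036854775807 (~9.22e18)
        System.out.println(Math.round(luminosity));                    // also 9223372036854775807: Math.round saturates
        System.out.println(Math.round(luminosity) == Long.MAX_VALUE);  // true
    }
}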

Guarantees concerning Math.atan2

The documentation for Math.atan2 says
The computed result must be within 2 ulps of the exact result.
The fact that it says 2 ulps presumably means there are cases where the returned value is not the closest double to the true result. Does anyone know if it is guaranteed to return the same value for equivalent pairs of int parameters? In other words, if a, b and k are positive int values and neither a * k nor b * k overflows, is it guaranteed that
Math.atan2(a, b) == Math.atan2(a * k, b * k)
Edit
Note that this is definitely not the case for non-overflowing long multiplications. For example
long a = 959786689;
long b = 363236985;
long k = 9675271;
System.out.println(Math.atan2(a, b));
System.out.println(Math.atan2(a * k, b * k));
prints
1.2089992287797169
1.208999228779717
but I could not find an example in int values.
Does anyone know if it is guaranteed to return the same value for equivalent pairs of int parameters?
Simply put, no. The Math documentation is the source of truth, and it provides no guarantee beyond the 2 ulp limit you reference. This is by design (as we'll see below), therefore any other source is either exposing an implementation detail or simply wrong.
Attempting to find lower bounds heuristically is impractical, since the behavior of Math is documented to be platform-specific:
Unlike some of the numeric methods of class StrictMath, all implementations of the equivalent functions of class Math are not defined to return the bit-for-bit same results. This relaxation permits better-performing implementations where strict reproducibility is not required.
Therefore even if you see tighter bounds in your tests there is no reason to believe these bounds are portable across platforms, processors, or Java versions.
However, as Math's documentation notes, StrictMath has more explicit behavior. StrictMath is documented to perform consistently across platforms, and is expected to have the same behavior as the reference implementation fdlibm. That project's readme notes:
FDLIBM is intended to provide a reasonably portable ... reference quality (below one ulp for major functions like sin,cos,exp,log) math library.
You can reference the source code for atan2 and determine precise bounds by examining its implementation; any other implementations of StrictMath.atan2() are required to give the same results as the reference implementation.
It's interesting to note that StrictMath.atan2() doesn't include the same 2 ulp comment as Math.atan2(). While it would be nice if it repeated fdlibm's "below one ulp" comment explicitly, I interpret the absence of this comment to mean StrictMath's implementation does not need to include that caveat - it will always be below one ulp.
tl;dr if you need precise results or stable results cross-platform use StrictMath. Math trades off precision for speed.
Edit: at first I thought this can be answered using "results must be semi-monotonic" requirement from the javadoc, but it actually can't be applied, so I re-wrote the answer.
Almost everything I can say is already covered by dimo414's answer. I just want to add: when using Math.atan2 on the same platform, or even when using StrictMath.atan2, there is no formal guarantee (from the documentation) that atan2(y, x) == atan2(y * k, x * k). Sure, StrictMath's implementation actually uses y / x, so when y / x is precisely the same double value the results will be equal (here I reasonably assume the function is deterministic), but keep it in mind.
Answering that part about int parameters: int holds 32 bits (actually, more like 31 bits plus one bit for sign) and can be represented by double type without any loss of precision, so no new problems there.
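A tiny illustration of that point (entirely my own example): every int survives a round trip through double unchanged, so converting int arguments to double cannot introduce the kind of error shown below for long:
public class IntToDoubleDemo {
    public static void main(String[] args) {
        // double has a 53-bit significand, so every 32-bit int converts exactly
        System.out.println((int) (double) Integer.MAX_VALUE == Integer.MAX_VALUE); // true
        System.out.println((int) (double) Integer.MIN_VALUE == Integer.MIN_VALUE); // true

        int a = 959786689;   // the values from the question fit comfortably
        System.out.println((int) (double) a == a);                                 // true
    }
}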
And the difference you described in the question (for non-overflowing long values) is caused by a loss of precision when converting long values to double. It has nothing to do with Math.atan2 itself; it happens before the function is even called. The double type can hold only 53 bits of mantissa, but in your case a * k requires 54 bits, so it is rounded to the nearest number a double can represent (b * k is okay though, it requires only 52 bits):
long a = 959786689;
long b = 363236985;
long k = 9675271;
System.out.println(a * k);
System.out.println((double) (a * k));
System.out.println((long) (double) (a * k));
System.out.println((long) (double) (a * k) == a * k);
System.out.println(b * k);
System.out.println((double) (b * k));
System.out.println((long) (double) (b * k));
System.out.println((long) (double) (b * k) == b * k);
Output:
9286196318267719
9.28619631826772E15
9286196318267720
false
3514416267097935
3.514416267097935E15
3514416267097935
true
And to address the example from the comment:
We have double a = 1.02551177480084, b = 1.12312341356234, k = 5;. In this case none of a, b, a * k, b * k can be represented as double without loss of precision. I'll use BigDecimal to demonstrate it, because it can show the true (not rounded) value of double:
double a = 1.02551177480084;
System.out.println("a is " + new BigDecimal(a));
System.out.println("a * 5 is " + new BigDecimal(a * 5));
System.out.println("a * 5 should be " + new BigDecimal(a).multiply(new BigDecimal("5")));
outputs
a is 1.0255117748008399924941613789997063577175140380859375
a * 5 is 5.12755887400420018451541182002983987331390380859375 // precision loss here
a * 5 should be 5.1275588740041999624708068949985317885875701904296875
and the difference can be clearly seen (same can be done with b instead of a).
There is a simpler test (since atan2() essentially uses a / b):
double a = 1.02551177480084, b = 1.12312341356234, k = 5;
System.out.println(a / b == (a * k) / (b * k));
outputs
false

Why is BigDecimal natural ordering inconsistent with equals?

From the Javadoc for BigDecimal:
Note: care should be exercised if BigDecimal objects are used as keys in a SortedMap or elements in a SortedSet since BigDecimal's natural ordering is inconsistent with equals.
For example, if you create a HashSet and add new BigDecimal("1.0") and new BigDecimal("1.00") to it, the set will contain two elements (because the values have different scales, so are non-equal according to equals and hashCode), but if you do the same thing with a TreeSet, the set will contain only one element, because the values compare as equal when you use compareTo.
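A quick self-contained illustration of that difference:
import java.math.BigDecimal;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class BigDecimalSetDemo {
    public static void main(String[] args) {
        // HashSet uses equals()/hashCode(): 1.0 and 1.00 have different scales, so both are kept
        Set<BigDecimal> hashSet = new HashSet<>();
        hashSet.add(new BigDecimal("1.0"));
        hashSet.add(new BigDecimal("1.00"));
        System.out.println(hashSet.size()); // 2

        // TreeSet uses compareTo(): the two values compare as equal, so only one is kept
        Set<BigDecimal> treeSet = new TreeSet<>();
        treeSet.add(new BigDecimal("1.0"));
        treeSet.add(new BigDecimal("1.00"));
        System.out.println(treeSet.size()); // 1
    }
}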
Is there any specific reason behind this inconsistency?
From the OpenJDK implementation of BigDecimal:
/**
 * Compares this {@code BigDecimal} with the specified
 * {@code Object} for equality.  Unlike {@link
 * #compareTo(BigDecimal) compareTo}, this method considers two
 * {@code BigDecimal} objects equal only if they are equal in
 * value and scale (thus 2.0 is not equal to 2.00 when compared by
 * this method).
 *
 * @param  x {@code Object} to which this {@code BigDecimal} is
 *         to be compared.
 * @return {@code true} if and only if the specified {@code Object} is a
 *         {@code BigDecimal} whose value and scale are equal to this
 *         {@code BigDecimal}'s.
 * @see    #compareTo(java.math.BigDecimal)
 * @see    #hashCode
 */
@Override
public boolean equals(Object x) {
    if (!(x instanceof BigDecimal))
        return false;
    BigDecimal xDec = (BigDecimal) x;
    if (x == this)
        return true;
    if (scale != xDec.scale)
        return false;
    long s = this.intCompact;
    long xs = xDec.intCompact;
    if (s != INFLATED) {
        if (xs == INFLATED)
            xs = compactValFor(xDec.intVal);
        return xs == s;
    } else if (xs != INFLATED)
        return xs == compactValFor(this.intVal);

    return this.inflate().equals(xDec.inflate());
}
More from the implementation:
* <p>Since the same numerical value can have different
* representations (with different scales), the rules of arithmetic
* and rounding must specify both the numerical result and the scale
* used in the result's representation.
Which is why the implementation of equals takes scale into consideration. The constructor that takes a string as a parameter is implemented like this:
public BigDecimal(String val) {
this(val.toCharArray(), 0, val.length());
}
where the third parameter will be used for the scale (in another constructor) which is why the strings 1.0 and 1.00 will create different BigDecimals (with different scales).
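This is easy to observe directly with the public scale() and unscaledValue() accessors (a small sketch of my own):
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal oneDotZero = new BigDecimal("1.0");
        BigDecimal oneDotZeroZero = new BigDecimal("1.00");

        System.out.println(oneDotZero.unscaledValue() + " / scale " + oneDotZero.scale());         // 10 / scale 1
        System.out.println(oneDotZeroZero.unscaledValue() + " / scale " + oneDotZeroZero.scale()); // 100 / scale 2

        System.out.println(oneDotZero.equals(oneDotZeroZero));         // false: different scales
        System.out.println(oneDotZero.compareTo(oneDotZeroZero) == 0); // true: numerically equal
    }
}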
From Effective Java By Joshua Bloch:
The final paragraph of the compareTo contract, which is a strong
suggestion rather than a true provision, simply states that the
equality test imposed by the compareTo method should generally return
the same results as the equals method. If this provision is obeyed,
the ordering imposed by the compareTo method is said to be consistent
with equals. If it’s violated, the ordering is said to be inconsistent
with equals. A class whose compareTo method imposes an order that is
inconsistent with equals will still work, but sorted collections
containing elements of the class may not obey the general contract of
the appropriate collection interfaces (Collection, Set, or Map). This
is because the general contracts for these interfaces are defined in
terms of the equals method, but sorted collections use the equality
test imposed by compareTo in place of equals. It is not a catastrophe
if this happens, but it’s something to be aware of.
The behaviour seems reasonable in the context of arithmetic precision where trailing zeros are significant figures and 1.0 does not carry the same meaning as 1.00. Making them unequal seems to be a reasonable choice.
However from a comparison perspective neither of the two is greater or less than the other and the Comparable interface requires a total order (i.e. each BigDecimal must be comparable with any other BigDecimal). The only reasonable option here was to define a total order such that the compareTo method would consider the two numbers equal.
Note that an inconsistency between equals and compareTo is not a problem as long as it's documented. It is even sometimes exactly what one needs.
BigDecimal works by having two numbers, an integer and a scale. The integer is the "number" and the scale is the number of digits to the right of the decimal place. Basically a base 10 floating point number.
When you say "1.0" and "1.00" these are technically different values in BigDecimal notation:
1.0
integer: 10
scale: 1
precision: 2
= 10 x 10 ^ -1
1.00
integer: 100
scale: 2
precision: 3
= 100 x 10 ^ -2
In scientific notation you wouldn't do either of those, it should be 1 x 10 ^ 0 or just 1, but BigDecimal allows it.
In compareTo the scale is ignored and they are evaluated as ordinary numbers, 1 == 1. In equals the integer and scale values are compared, 10 != 100 and 1 != 2. Apart from a quick x == this shortcut, the equals method compares this numeric representation rather than object identity, I assume because the intention is that each BigDecimal is treated as a type of number, not like an object.
I would liken it to this:
// same number, different types
float floatOne = 1.0f;
double doubleOne = 1.0;

// true: 1 == 1
System.out.println( (double) floatOne == doubleOne );

// also compare a boxed float to a boxed double
Float boxFloat = floatOne;
Double boxDouble = doubleOne;

// false: one is a 32-bit Float and the other is a 64-bit Double
System.out.println( boxFloat.equals(boxDouble) );

// BigDecimal should behave essentially the same way
BigDecimal bdOne1 = new BigDecimal("1.0");
BigDecimal bdOne2 = new BigDecimal("1.00");

// true: compareTo ignores scale, so numerically 1 == 1
System.out.println( bdOne1.compareTo(bdOne2) == 0 );

// false: 10 != 100 and 1 != 2, ensuring 2 digits != 3 digits
System.out.println( bdOne1.equals(bdOne2) );
Because BigDecimal allows for a specific "precision", comparing both the integer and the scale is more or less the same as comparing both the number and the precision.
Although there is a semi-caveat to that when talking about BigDecimal's precision() method which always returns 1 if the BigDecimal is 0. In this case compareTo && precision evaluates true and equals evaluates false. But 0 * 10 ^ -1 should not equal 0 * 10 ^ -2 because the former is a 2 digit number 0.0 and the latter is a 3 digit number 0.00. The equals method is comparing both the value and the number of digits.
I suppose it is weird that BigDecimal allows trailing zeroes but this is basically necessary. Doing a mathematical operation like "1.1" + "1.01" requires a conversion but "1.10" + "1.01" doesn't.
So compareTo compares BigDecimals as numbers and equals compares BigDecimals as BigDecimals.
If the comparison is unwanted, use a List or array where this doesn't matter. HashSet and TreeSet are of course designed specifically for holding unique elements.
The answer is pretty short. The equals() method compares objects, while compareTo() compares values. In the case of BigDecimal, different objects can represent the same value. That's why equals() might return false while compareTo() returns 0.
equal objects => equal values
equal values =/> equal objects
An object is just a computer representation of some real-world value. For example, the same picture might be represented in GIF and JPEG formats. That's very much like BigDecimal, where the same value might have distinct representations.

Finding absolute value of a number without using Math.abs()

Is there any way to find the absolute value of a number without using the Math.abs() method in Java?
If you look inside Math.abs you can probably find the best answer:
E.g., for floats:
/**
 * Returns the absolute value of a {@code float} value.
 * If the argument is not negative, the argument is returned.
 * If the argument is negative, the negation of the argument is returned.
 * Special cases:
 * <ul><li>If the argument is positive zero or negative zero, the
 * result is positive zero.
 * <li>If the argument is infinite, the result is positive infinity.
 * <li>If the argument is NaN, the result is NaN.</ul>
 * In other words, the result is the same as the value of the expression:
 * <p>{@code Float.intBitsToFloat(0x7fffffff & Float.floatToIntBits(a))}
 *
 * @param   a   the argument whose absolute value is to be determined
 * @return  the absolute value of the argument.
 */
public static float abs(float a) {
    return (a <= 0.0F) ? 0.0F - a : a;
}
Yes:
abs_number = (number < 0) ? -number : number;
For integers, this works fine (except for Integer.MIN_VALUE, whose absolute value cannot be represented as an int).
For floating-point numbers, things are more subtle. For example, this method -- and all other methods posted thus far -- won't handle the negative zero correctly.
To avoid having to deal with such subtleties yourself, my advice would be to stick to Math.abs().
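For example, a small sketch of my own showing the -0.0 corner case mentioned above:
public class NegativeZeroAbsDemo {
    public static void main(String[] args) {
        float negativeZero = -0.0f;

        // The naive ternary leaves -0.0f untouched, because -0.0f < 0 is false
        float naive = (negativeZero < 0) ? -negativeZero : negativeZero;

        System.out.println(Float.floatToIntBits(naive) == Float.floatToIntBits(0.0f));                  // false: still -0.0
        System.out.println(Float.floatToIntBits(Math.abs(negativeZero)) == Float.floatToIntBits(0.0f)); // true: +0.0
    }
}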
Like this:
if (number < 0) {
    number *= -1;
}
Since Java is a statically typed language, I would expect an abs method that takes an int to return an int, one that takes a float to return a float, and one that takes a Double to return a Double. Maybe it could always return the boxed or the unboxed type for doubles and Doubles, and so on.
So you need one method per type, but now you have a new problem: for byte, short, int and long, the range of negative values is one larger than the range of positive values.
So what should be returned for the method
byte abs (byte in) {
    // @todo
}
If the user calls abs on -128? You could always return the next bigger type so that the range is guaranteed to fit to all possible input values. This will lead to problems for long, where no normal bigger type exists, and make the user always cast the value down after testing - maybe a hassle.
The second option is to throw an arithmetic exception. This will prevent casting and checking the return type for situations where the input is known to be limited, such that X.MIN_VALUE can't happen. Think of MONTH, represented as int.
byte abs (byte in) throws ArithmeticException {
    if (in == Byte.MIN_VALUE) throw new ArithmeticException ("abs called on Byte.MIN_VALUE");
    return (in < 0) ? (byte) -in : in;
}
The "let's ignore the rare cases of MIN_VALUE" habit is not an option. First make the code work - then make it fast. If the user needs a faster, but buggy solution, he should write it himself.
The simplest solution that might work means: simple, but not too simple.
Since the code doesn't rely on state, the method can and should be made static. This allows for a quick test:
public static void main (String args []) {
    System.out.println (abs(new Byte ( "7")));
    System.out.println (abs(new Byte ("-7")));
    System.out.println (abs((byte) 7));
    System.out.println (abs((byte) -7));
    System.out.println (abs(new Byte ( "127")));
    try
    {
        System.out.println (abs(new Byte ("-128")));
    }
    catch (ArithmeticException ae)
    {
        System.out.println ("Integer: " + Math.abs (new Integer ("-128")));
    }
    System.out.println (abs((byte) 127));
    System.out.println (abs((byte) -128));
}
I catch the first exception and let it run into the second, just for demonstration.
There is a bad habit in programming, which is that programmers care much more for fast than for correct code. What a pity!
If you're curious why there is one more negative than positive value, I have a diagram for you.
Although this shouldn't be a bottleneck, as branching on modern processors isn't normally a problem, in the case of integers you could go for a branchless solution as outlined here: http://graphics.stanford.edu/~seander/bithacks.html#IntegerAbs.
(x + (x >> 31)) ^ (x >> 31);
This does fail in the obvious case of Integer.MIN_VALUE however, so this is a use at your own risk solution.
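Wrapped up as a runnable sketch (the method name is mine), with the same caveat about Integer.MIN_VALUE:
public class BranchlessAbs {

    /** Branchless absolute value for int; overflows (stays negative) for Integer.MIN_VALUE. */
    static int abs(int x) {
        int mask = x >> 31;            // 0 for non-negative x, -1 (all ones) for negative x
        return (x + mask) ^ mask;      // two's-complement negation applied only when x is negative
    }

    public static void main(String[] args) {
        System.out.println(abs(7));                    // 7
        System.out.println(abs(-7));                   // 7
        System.out.println(abs(0));                    // 0
        System.out.println(abs(Integer.MIN_VALUE));    // -2147483648: the documented failure case
    }
}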
In case of the absolute value of an integer x without using Math.abs(), conditions or bit-wise operations, below could be a possible solution in Java.
(int)(((long)x*x - 1)%(double)x + 1);
Because Java's remainder a % b behaves like a - (a / b) * b with the quotient truncated toward zero, the sign of the result is the same as the sign of a no matter what the sign of b is; (x*x - 1) % x will equal abs(x) - 1. The cast to long is there to prevent overflow, and using double allows dividing by zero (the resulting NaN turns into 0 after the cast to int, which is the right answer for x = 0).
Again, x = Integer.MIN_VALUE is still a problem case, since its absolute value cannot be represented as an int.
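A quick check of that identity (my own test harness around the expression above):
public class ModuloAbsDemo {
    public static void main(String[] args) {
        int[] samples = { 5, -5, 7, -7, 1, -1, 0, 123456, -123456 };
        for (int x : samples) {
            int abs = (int) (((long) x * x - 1) % (double) x + 1);
            System.out.println(x + " -> " + abs + " (Math.abs says " + Math.abs(x) + ")");
        }
    }
}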
You can use :
abs_num = (num < 0) ? -num : num;
Here is a one-line solution that will return the absolute value of a number:
abs_number = (num < 0) ? -num : num;
Note that -num will equal num for Integer.MIN_VALUE, because Integer.MIN_VALUE * -1 overflows back to Integer.MIN_VALUE.
Let's say N is the number whose absolute value (the positive number, without the sign) you want to calculate:
if (N < 0)
{
    N = (-1) * N;
}
N now holds the absolute value.

Manipulating and comparing floating points in java

In Java, floating-point arithmetic is not represented precisely. For example, this Java code:
float a = 1.2f;
float b = 3.0f;
float c = a * b;
if (c == 3.6f) {
    System.out.println("c is 3.6");
}
else {
    System.out.println("c is not 3.6");
}
Prints "c is not 3.6".
I'm not interested in precision beyond 3 decimals (#.###). How can I deal with this problem to multiply floats and compare them reliably?
It's a general rule that floating-point numbers should never be compared like (a == b), but rather like (Math.abs(a - b) < delta), where delta is a small number.
A floating-point value having a fixed number of digits in decimal form does not necessarily have a fixed number of digits in binary form.
Addition for clarity:
Though strict == comparison of floating-point numbers has very little practical sense, strict < and > comparison, on the contrary, is a valid use case (for example, logic that triggers when a certain value exceeds a threshold: if (val > threshold) panic();).
If you are interested in fixed precision numbers, you should be using a fixed precision type like BigDecimal, not an inherently approximate (though high precision) type like float. There are numerous similar questions on Stack Overflow that go into this in more detail, across many languages.
I think it has nothing to do with Java, it happens on any IEEE 754 floating point number. It is because of the nature of floating point representation. Any languages that use the IEEE 754 format will encounter the same problem.
As suggested by David above, you should use the abs method of the java.lang.Math class to get the absolute value (i.e. drop the positive/negative sign).
You can read this: http://en.wikipedia.org/wiki/IEEE_754_revision and also a good numerical methods text book will address the problem sufficiently.
public static void main(String[] args) {
    float a = 1.2f;
    float b = 3.0f;
    float c = a * b;

    final float PRECISION_LEVEL = 0.001f;

    if (Math.abs(c - 3.6f) < PRECISION_LEVEL) {
        System.out.println("c is 3.6");
    } else {
        System.out.println("c is not 3.6");
    }
}
I’m using this bit of code in unit tests to compare if the outcome of 2 different calculations are the same, barring floating point math errors.
It works by looking at the binary representation of the floating point number. Most of the complication is due to the fact that the sign of floating point numbers is not two’s complement. After compensating for that it basically comes down to just a simple subtraction to get the difference in ULPs (explained in the comment below).
/**
 * Compare two floating points for equality within a margin of error.
 *
 * This can be used to compensate for inequality caused by accumulated
 * floating point math errors.
 *
 * The error margin is specified in ULPs (units of least precision).
 * A one-ULP difference means there are no representable floats in between.
 * E.g. 0f and 1.4e-45f are one ULP apart. So are -6.1340704f and -6.13407f.
 * Depending on the number of calculations involved, typically a margin of
 * 1-5 ULPs should be enough.
 *
 * @param expected The expected value.
 * @param actual The actual value.
 * @param maxUlps The maximum difference in ULPs.
 * @return Whether they are equal or not.
 */
public static boolean compareFloatEquals(float expected, float actual, int maxUlps) {
    int expectedBits = Float.floatToIntBits(expected) < 0 ? 0x80000000 - Float.floatToIntBits(expected) : Float.floatToIntBits(expected);
    int actualBits = Float.floatToIntBits(actual) < 0 ? 0x80000000 - Float.floatToIntBits(actual) : Float.floatToIntBits(actual);
    int difference = expectedBits > actualBits ? expectedBits - actualBits : actualBits - expectedBits;

    return !Float.isNaN(expected) && !Float.isNaN(actual) && difference <= maxUlps;
}
Here is a version for double precision floats:
/**
 * Compare two double precision floats for equality within a margin of error.
 *
 * @param expected The expected value.
 * @param actual The actual value.
 * @param maxUlps The maximum difference in ULPs.
 * @return Whether they are equal or not.
 * @see Utils#compareFloatEquals(float, float, int)
 */
public static boolean compareDoubleEquals(double expected, double actual, long maxUlps) {
    long expectedBits = Double.doubleToLongBits(expected) < 0 ? 0x8000000000000000L - Double.doubleToLongBits(expected) : Double.doubleToLongBits(expected);
    long actualBits = Double.doubleToLongBits(actual) < 0 ? 0x8000000000000000L - Double.doubleToLongBits(actual) : Double.doubleToLongBits(actual);
    long difference = expectedBits > actualBits ? expectedBits - actualBits : actualBits - expectedBits;

    return !Double.isNaN(expected) && !Double.isNaN(actual) && difference <= maxUlps;
}
This is a weakness of all floating point representations, and it happens because some numbers that appear to have a fixed number of decimals in the decimal system actually have an infinite number of decimals in the binary system. So what you think is 1.2 is actually something like 1.2000000476837158 (as a float), because when representing it in binary the value has to be cut off after a certain number of bits, and you lose some precision. Then multiplying it by 3 actually gives 3.6000001..., which is not the float you get from the literal 3.6.
http://docs.python.org/py3k/tutorial/floatingpoint.html <- this might explain it better (even if it's for python, it's a common problem of the floating point representation)
Like the others wrote:
Compare floats with: if (Math.abs(a - b) < delta)
You can write a nice method for doing this:
public static int compareFloats(float f1, float f2, float delta)
{
    if (Math.abs(f1 - f2) < delta)
    {
        return 0;
    }
    else
    {
        if (f1 < f2)
        {
            return -1;
        }
        else
        {
            return 1;
        }
    }
}

/**
 * Uses <code>0.001f</code> for delta.
 */
public static int compareFloats(float f1, float f2)
{
    return compareFloats(f1, f2, 0.001f);
}
So, you can use it like this:
if (compareFloats(a * b, 3.6f) == 0)
{
    System.out.println("They are equal");
}
else
{
    System.out.println("They aren't equal");
}
There is an Apache Commons Math class for comparing doubles: org.apache.commons.math3.util.Precision
It contains some interesting constants: SAFE_MIN and EPSILON, which are the maximum possible deviations when performing arithmetic operations.
It also provides the necessary methods to compare, test equality of, or round doubles.
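A rough usage sketch, assuming commons-math3 is on the classpath (the epsilon value here is arbitrary):
import org.apache.commons.math3.util.Precision;

public class PrecisionDemo {
    public static void main(String[] args) {
        float c = 1.2f * 3.0f;   // 3.6000001f

        System.out.println(Precision.equals(c, 3.6f, 0.001));      // true: within the given epsilon
        System.out.println(Precision.compareTo(c, 3.6f, 0.001));   // 0: considered equal
        System.out.println(Precision.round((double) c, 3));        // 3.6 (rounded to 3 decimal places)
    }
}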
Rounding is a bad idea. Use BigDecimal and set its precision as needed.
Like:
public static void main(String... args) {
    float a = 1.2f;
    float b = 3.0f;
    float c = a * b;

    BigDecimal a2 = BigDecimal.valueOf(a);
    BigDecimal b2 = BigDecimal.valueOf(b);
    BigDecimal c2 = a2.multiply(b2);

    BigDecimal a3 = a2.setScale(2, RoundingMode.HALF_UP);
    BigDecimal b3 = b2.setScale(2, RoundingMode.HALF_UP);
    BigDecimal c3 = a3.multiply(b3);
    BigDecimal c4 = a3.multiply(b3).setScale(2, RoundingMode.HALF_UP);

    System.out.println(c); // 3.6000001
    System.out.println(c2); // 3.60000014305114740
    System.out.println(c3); // 3.6000
    System.out.println(c == 3.6f); // false
    System.out.println(Float.compare(c, 3.6f) == 0); // false
    System.out.println(c2.compareTo(BigDecimal.valueOf(3.6f)) == 0); // false
    System.out.println(c3.compareTo(BigDecimal.valueOf(3.6f)) == 0); // false
    System.out.println(c3.compareTo(BigDecimal.valueOf(3.6f).setScale(2, RoundingMode.HALF_UP)) == 0); // true
    System.out.println(c3.compareTo(BigDecimal.valueOf(3.6f).setScale(9, RoundingMode.HALF_UP)) == 0); // false
    System.out.println(c4.compareTo(BigDecimal.valueOf(3.6f).setScale(2, RoundingMode.HALF_UP)) == 0); // true
}
To compare two floats, f1 and f2, within a precision of #.###, I believe you would need to do something like this:
((int) (f1 * 1000 + 0.5)) == ((int) (f2 * 1000 + 0.5))
f1 * 1000 lifts 3.14159265... to 3141.59265, + 0.5 results in 3142.09265 and the (int) chops off the decimals, 3142. That is, it includes 3 decimals and rounds the last digit properly.
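For instance, a quick check of the idea using the product from the question above:
public class ScaledIntCompareDemo {
    public static void main(String[] args) {
        float c = 1.2f * 3.0f;   // 3.6000001f

        int scaledC = (int) (c * 1000 + 0.5);      // 3600
        int scaled36 = (int) (3.6f * 1000 + 0.5);  // 3600

        System.out.println(scaledC == scaled36);   // true: equal to 3 decimal places
        System.out.println(c == 3.6f);             // false: the exact comparison still fails
    }
}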
