Guarantees concerning Math.atan2 - java

The documentation for Math.atan2 says
The computed result must be within 2 ulps of the exact result.
The fact that it says 2 ulps presumably means there are cases where the returned value is not the closest double to the true result. Does anyone know if it is guaranteed to return the same value for equivalent pairs of int parameters? In other words, if a, b and k are positive int values and neither a * k nor b * k overflows, is it guaranteed that
Math.atan2(a, b) == Math.atan2(a * k, b * k)
Edit
Note that this is definitely not the case for non-overflowing long multiplications. For example
long a = 959786689;
long b = 363236985;
long k = 9675271;
System.out.println(Math.atan2(a, b));
System.out.println(Math.atan2(a * k, b * k));
prints
1.2089992287797169
1.208999228779717
but I could not find an example using int values.

Does anyone know if it is guaranteed to return the same value for equivalent pairs of int parameters?
Simply put, no. The Math documentation is the source of truth, and it provides no guarantee beyond the 2 ulp limit you reference. This is by design (as we'll see below), so any other source is either exposing an implementation detail or is simply wrong.
Attempting to find lower bounds heuristically is impractical, since the behavior of Math is documented to be platform-specific:
Unlike some of the numeric methods of class StrictMath, all implementations of the equivalent functions of class Math are not defined to return the bit-for-bit same results. This relaxation permits better-performing implementations where strict reproducibility is not required.
Therefore even if you see tighter bounds in your tests there is no reason to believe these bounds are portable across platforms, processors, or Java versions.
However, as Math's documentation notes, StrictMath has more explicit behavior. StrictMath is documented to perform consistently across platforms, and is expected to have the same behavior as the reference implementation fdlibm. That project's readme notes:
FDLIBM is intended to provide a reasonably portable ... reference quality (below one ulp for major functions like sin,cos,exp,log) math library.
You can reference the source code for atan2 and determine precise bounds by examining its implementation; any other implementations of StrictMath.atan2() are required to give the same results as the reference implementation.
It's interesting to note that StrictMath.atan2() doesn't include the same 2 ulp comment as Math.atan2(). While it would be nice if it repeated fdlibm's "below one ulp" comment explicitly, I interpret the absence of this comment to mean StrictMath's implementation does not need to include that caveat - it will always be below one ulp.
tl;dr if you need precise results or stable results cross-platform use StrictMath. Math trades off precision for speed.
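For illustration, here is a minimal sketch (class name is mine; the digits printed by the Math line may legally differ across platforms and JDK versions, while the StrictMath line may not):
public class Atan2Demo {
    public static void main(String[] args) {
        double y = 959786689.0, x = 363236985.0;
        // Allowed to vary by platform/JDK, within 2 ulps of the true result:
        System.out.println(Math.atan2(y, x));
        // Must be bit-for-bit identical on every conforming JVM (fdlibm semantics):
        System.out.println(StrictMath.atan2(y, x));
    }
}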

Edit: at first I thought this could be answered using the "results must be semi-monotonic" requirement from the javadoc, but that requirement actually can't be applied here, so I rewrote the answer.
Almost everything I can say is already covered by dimo414's answer. I just want to add: whether using Math.atan2 on the same platform, or even using StrictMath.atan2, there is no formal guarantee (from the documentation) that atan2(y, x) == atan2(y * k, x * k). Sure, StrictMath's implementation actually uses y / x, so when y / x is precisely the same double value the results will be equal (reasonably assuming the function is deterministic), but keep in mind that this is an implementation detail, not a documented guarantee.
Answering the part about int parameters: an int holds 32 bits (actually, more like 31 bits plus one bit for sign) and can be represented by the double type without any loss of precision, so no new problems arise there.
And the difference you described in the question (for non-overflowing long values) is caused by a loss of precision when converting long values to double; it has nothing to do with Math.atan2 itself and happens before the function is even called. The double type can hold only 53 bits of mantissa, but in your case a * k requires 54 bits, so it is rounded to the nearest number a double can represent (b * k is fine though, as it requires only 52 bits):
long a = 959786689;
long b = 363236985;
long k = 9675271;
System.out.println(a * k);
System.out.println((double) (a * k));
System.out.println((long) (double) (a * k));
System.out.println((long) (double) (a * k) == a * k);
System.out.println(b * k);
System.out.println((double) (b * k));
System.out.println((long) (double) (b * k));
System.out.println((long) (double) (b * k) == b * k);
Output:
9286196318267719
9.28619631826772E15
9286196318267720
false
3514416267097935
3.514416267097935E15
3514416267097935
true
And to address the example from the comment:
We have double a = 1.02551177480084, b = 1.12312341356234, k = 5;. In this case none of the values a, b, a * k, b * k can be represented as a double without loss of precision. I'll use BigDecimal to demonstrate it, because it can show the true (not rounded) value of a double:
double a = 1.02551177480084;
System.out.println("a is " + new BigDecimal(a));
System.out.println("a * 5 is " + new BigDecimal(a * 5));
System.out.println("a * 5 should be " + new BigDecimal(a).multiply(new BigDecimal("5")));
outputs
a is 1.0255117748008399924941613789997063577175140380859375
a * 5 is 5.12755887400420018451541182002983987331390380859375 // precision loss here
a * 5 should be 5.1275588740041999624708068949985317885875701904296875
and the difference can be clearly seen (same can be done with b instead of a).
There is a simpler test (since atan2() essentially uses a / b):
double a = 1.02551177480084, b = 1.12312341356234, k = 5;
System.out.println(a / b == (a * k) / (b * k));
outputs
false
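To connect this back to the question's int parameters, here is a small sketch (one test case, not a proof; class name is mine, and it relies on the observation above that StrictMath's implementation divides y by x, plus the fact that int values convert to double exactly):
public class IntScaling {
    public static void main(String[] args) {
        int a = 959786689, b = 363236985, k = 2; // a * k and b * k still fit in an int
        // Both quotients are computed from exactly-represented integers, and
        // correctly rounded division of equal real quotients gives the same double:
        System.out.println((double) a / b == (double) (a * k) / (b * k)); // true
        System.out.println(StrictMath.atan2(a, b) == StrictMath.atan2(a * k, b * k)); // true
    }
}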

Related

Can we assume that x == (int)sqrt(x * x) for all positive integers?

In C++ the sqrt function operates only with double values.
If we use integers (unsigned long long) can we be sure that
x == sqrt(x * x)
for any positive x where x * x <= MAXIMUM_VALUE?
Does it depend on the machine architecture and compiler?
In Java, Math.sqrt(x) takes a double value. You stated that x is such that x * x is below Integer.MAX_VALUE. Every integer in that range is perfectly representable in double - double in Java is explicitly defined as an IEEE-754 style double with a 52-bit mantissa (53 significant bits counting the implicit leading bit); therefore in Java a double can perfectly represent all integral values between -2^53 and +2^53, which easily covers all int values (int is defined as signed 32-bit in Java), but it does not cover all long values (defined as signed 64-bit; 64 is more than 53, so no go).
Thus, x * x loses no precision when it ends up getting converted from int to double. Then, Math.sqrt() on this number will give a result that is also perfectly representable as a double (because it is x, and given that x*x fits in an int, x must also fit), and thus, yes, this will always work out for all x.
But, hey, why not give it a shot, right?
public static void main(String[] args) {
    int i = 1;
    while (true) {
        if (i * i < 0) break;
        int j = (int) Math.sqrt(i * i);
        if (i != j) System.out.println("Oh dear! " + i + " -> " + j);
        i++;
    }
    System.out.println("Done at " + i);
}
> Done at 46341
Thus proving it by exhaustively trying it all.
Turns out, none exist - any long value x such that x * x still fits (thus, is < 2^63) has the property that x == (long) Math.sqrt(x * x). This is presumably because even when x * x loses precision in the conversion to double, the rounding error is small enough that the correctly rounded square root still lands closer to x than to any neighbouring integer. Proof:
long p = 2000000000L;
for (; true; p++) {
    long pp = p * p;
    if (pp < 0) break;
    long q = (long) Math.sqrt(pp);
    if (q != p) System.out.println("PROBLEM: " + p + " -> " + q);
}
System.out.println("Abort: " + p);
> Abort: 3037000500
Surely if any number exists that doesn't hold, there is at least one in this high end range. Starting from 0 takes very long.
But do we know that sqrt will always return an exact value for a perfect square, or might it be slightly inaccurate?
We should - it's Java. Unlike C, almost everything is 'well defined', and a JVM cannot legally call itself one if it fails to produce the exact answer as specified. The leeway that the Math.sqrt docs provide is not sufficient for any answer other than precisely x to be a legal implementation; therefore, yes, this is a guarantee.
In theory the JVM has some very minor leeway with floating point numbers, which strictfp disables, but [A] that's more about using 80-bit registers to represent numbers instead of 64, which cannot possibly ruin this hypothesis, and [B] a while back a question in the java tag asked for any demonstration of strictfp having an effect on any hardware and VM version, and the only viable result was a non-reproducible report from 15 years ago. I feel quite confident stating that this will always hold, regardless of hardware or VM version.
I think we can believe it.
Casting a floating point number to an integer keeps only its integer part. The concern might be, for example, that sqrt(4) yields a floating point number like 1.999...9, which is then cast to 1. (Yielding 2.000...1 is fine, because it is cast to 2.)
But the floating point number 4 is
(1 * 2^0 + 0 * 2^-1 + ... + 0 * 2^-23) * 2^2
according to Floating-point arithmetic.
Which means it cannot be smaller than 4, like 3.999...9. Consequently, its sqrt cannot be smaller than
(1 * 2^0) * 2^1
So the sqrt of the square of an integer will yield a floating point number that is no smaller than, and close enough to, the integer.
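A minimal sketch that checks this exhaustively for int (class name is mine; every x * x below 2^31 converts to double exactly, and Math.sqrt is documented to return the correctly rounded square root):
public class ExactSquares {
    public static void main(String[] args) {
        for (int x = 0; x <= 46340; x++) { // 46340 is the largest int whose square fits in an int
            if (Math.sqrt(x * x) != (double) x) {
                System.out.println("Inexact for " + x);
            }
        }
        System.out.println("Done"); // prints with no "Inexact" lines
    }
}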
Just try it. Yes, it works in Java for non-negative numbers. It even works for long, contrary to common opinion.
class Code {
    public static void main(String[] args) throws Throwable {
        for (long x = (long) Math.sqrt(Long.MAX_VALUE);; --x) {
            if (!(x == (long) Math.sqrt(x * x))) {
                System.err.println("Not true for: " + x);
                break;
            }
        }
        System.err.println("Done");
    }
}
(The first number that doesn't work is 3037000500L which goes negative when squared.)
Even for longs, the range of testable values is around 2^31 or 2*10^9 so for something this trivial it is reasonable to check every single value. You can even brute force reasonable cryptographic functions for 32-bit values - something more people should realise. Won't work so well for the full 64 bits.
BigInteger.sqrt() (since Java 9)
Use cases that must rule out any possibility of overflow can use BigInteger, which should work for any practical input.
Still, for ordinary use cases this might not be efficient.
Constraints
BigInteger Limits
BigInteger must support values in the range -2^Integer.MAX_VALUE (exclusive) to +2^Integer.MAX_VALUE (exclusive) and may support values outside of that range. An ArithmeticException is thrown when a BigInteger constructor or method would generate a value outside of the supported range. The range of probable prime values is limited and may be less than the full supported positive range of BigInteger. The range must be at least 1 to 2500000000.
Implementation Note:
In the reference implementation, BigInteger constructors and operations throw ArithmeticException when the result is out of the supported range of -2^Integer.MAX_VALUE (exclusive) to +2^Integer.MAX_VALUE (exclusive).
Array size limit when initialized as byte array
String length limit when initialized as String
Division by zero is, of course, not supported:
jshell> new BigInteger("1").divide(new BigInteger("0"))
| Exception java.lang.ArithmeticException: BigInteger divide by zero
| at MutableBigInteger.divideKnuth (MutableBigInteger.java:1178)
| at BigInteger.divideKnuth (BigInteger.java:2300)
| at BigInteger.divide (BigInteger.java:2281)
| at (#1:1)
Example code:
import java.math.BigInteger;
import java.util.Arrays;
import java.util.List;

public class SquareAndSqrt {
    static void valid() {
        List<String> values = Arrays.asList("1", "9223372036854775807",
                "92233720368547758079223372036854775807",
                new BigInteger("2").pow(Short.MAX_VALUE - 1).toString());
        for (String input : values) {
            final BigInteger value = new BigInteger(input);
            final BigInteger square = value.multiply(value);
            final BigInteger sqrt = square.sqrt();
            System.out.println("value: " + input + System.lineSeparator()
                    + ", square: " + square + System.lineSeparator()
                    + ", sqrt: " + sqrt + System.lineSeparator()
                    + ", " + value.equals(sqrt));
            System.out.println(System.lineSeparator().repeat(2)); // pre Java 11: new String(new char[2]).replace("\0", System.lineSeparator())
        }
    }

    static void mayBeInvalid() {
        try {
            new BigInteger("2").pow(Integer.MAX_VALUE);
        } catch (ArithmeticException e) {
            System.out.print("value: 2^Integer.MAX_VALUE, Exception: " + e);
            System.out.println(System.lineSeparator().repeat(2));
        }
    }

    public static void main(String[] args) {
        valid();
        mayBeInvalid();
    }
}
In the cmath library, the sqrt function always converts its argument to double or float. The range of double is much larger than that of unsigned long long, so it always gives a positive result.
For reference you can use:
https://learn.microsoft.com/en-us/cpp/standard-library/cmath?view=msvc-170
https://learn.microsoft.com/en-us/cpp/c-language/type-float?view=msvc-170

Conflict while implementing int and long in Java for a simple problem

I was attempting a simple exercise of determining whether a number is a perfect square or not, and wrote the code below:
public boolean isPerfectSquare(int num) {
    int l = 1;
    int r = num;
    while (l <= r) {
        int mid = l - (l - r) / 2;
        if (mid * mid == num)
            return true;
        else if (mid * mid < num)
            l = mid + 1;
        else
            r = mid - 1;
    }
    return false;
}
While this mostly works, it doesn't seem to work for all test cases. For example, for 808201 == 899 * 899 it returns false. However, when the variables are changed from int to long, it works. Why?
The tricky bit is mid * mid.
an int is a 32-bit number; it can therefore represent each integer starting from -2147483648 (which is -2^31) to 2147483647 (which is 2^31-1 - one less, because 0 also needs representation, of course).
That does mean that if you calculate x * x, you run into an issue. What if x * x is going to end up being more than 2147483647? Then it 'overflows', which messes up your math. This happens when x exceeds the square root of 2147483647, which is just under 46341. So 46340 still works; 46341 won't. Let's try it:
System.out.println(46340 * 46340);
System.out.println(46341 * 46341);
this prints:
2147395600
-2147479015
minus? Wha?
Well, that's that overflow thing kicking in. The real answer to 46341 * 46341 = 2147488281, but 2147488281 is not a number that an int can hold. You can try that too:
int z = 2147488281; // this is a compile-time error. try it!
When you're using longs, the exact same rules apply, except longs are 64-bit. That means they represent from -9223372036854775808 to 9223372036854775807 (-2^63 to +2^63-1). Thus, the largest value of x, such that x*x still fits, is 3037000499. Let's try it:
NB: in Java, x * y is an expression where x and y have types. If the type of both is int, then the expression does int multiplication, and 46340 is as high as you can go before it overflows. If either x or y is long, then the other one is first upgraded to a long, and then long multiplication is done. By putting an L after a number, it has the long type; thus, in the next snippet, it's long multiplication.
System.out.println(3037000499L * 3037000499L);
System.out.println(3037000500L * 3037000500L);
prints:
9223372030926249001
-9223372036709301616
in other words, when you use longs, you can go much further, but it, too, has limits.
If you want to avoid this, you either have to avoid using any math where an intermediate result such as mid * mid can ever be larger than your inputs (so, think of a way to determine this without doing that, or think of a way to detect that overflow has occurred; if it has, you already know it couldn't possibly work, and you can make some good guesses as to what your new value for l has to be).
That, or, use BigInteger which is limitless, at the cost of memory and speed.
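For example, a sketch of the long-based fix (the same binary search as in the question, with the intermediates widened to long so the square cannot overflow for any int input):
public boolean isPerfectSquare(int num) {
    long lo = 1, hi = num;
    while (lo <= hi) {
        long mid = lo + (hi - lo) / 2;
        long sq = mid * mid; // long multiplication: safe, since mid <= 2^31 - 1
        if (sq == num)
            return true;
        else if (sq < num)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return false;
}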
NB: Don't use l as a variable name. It looks way too much like the digit 1 :)

Does Java Float.compare() Always Produce The Correct Result?

This seems like a question that I could easily find an answer for, but I couldn't see any entry on this. I know how floating-point arithmetic works, and that in order to compare floating point numbers I need to use an epsilon check. When I shared this with my team, one of my colleagues asked me this and I couldn't answer:
Does the compare method in Java always produce the correct result, i.e., the result an epsilon check for f1 and f2 would yield?
Float.compare(float f1, float f2);
Note: Especially consider this question for the equality case.
No, Float.compare does not use any kind of epsilon checking.
Here's for example the OpenJDK 13 implementation of the method:
/**
 * Compares the two specified {@code float} values. The sign
 * of the integer value returned is the same as that of the
 * integer that would be returned by the call:
 * <pre>
 *    new Float(f1).compareTo(new Float(f2))
 * </pre>
 *
 * @param f1 the first {@code float} to compare.
 * @param f2 the second {@code float} to compare.
 * @return the value {@code 0} if {@code f1} is
 *         numerically equal to {@code f2}; a value less than
 *         {@code 0} if {@code f1} is numerically less than
 *         {@code f2}; and a value greater than {@code 0}
 *         if {@code f1} is numerically greater than
 *         {@code f2}.
 * @since 1.4
 */
public static int compare(float f1, float f2) {
    if (f1 < f2)
        return -1;           // Neither val is NaN, thisVal is smaller
    if (f1 > f2)
        return 1;            // Neither val is NaN, thisVal is larger

    // Cannot use floatToRawIntBits because of possibility of NaNs.
    int thisBits    = Float.floatToIntBits(f1);
    int anotherBits = Float.floatToIntBits(f2);

    return (thisBits == anotherBits ?  0 : // Values are equal
            (thisBits < anotherBits ? -1 : // (-0.0, 0.0) or (!NaN, NaN)
             1));                          // (0.0, -0.0) or (NaN, !NaN)
}
Source
If you read the Javadoc of Float.compare(), it talks about being "numerically equal". But note from the code above that, unlike the == operator, compare() actually distinguishes the two encodings of zero: it considers 0.0f greater than -0.0f, and it treats NaN as equal to itself and greater than every other value.
Epsilon checks (where you check if two numbers are within a certain range of one another, typically a very small number) are not a de-facto standard and have a lot of practical issues (like which epsilon to choose when you don't know the magnitude of the numbers, or what to do when the two numbers have vastly different magnitudes). For that reason, Java only implements exact operations.
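A few edge cases make these exact (non-epsilon) semantics visible; a small sketch, with results per the Float.compare() Javadoc:
System.out.println(Float.compare(-0.0f, 0.0f));          // -1: compare() distinguishes signed zeros
System.out.println(-0.0f == 0.0f);                       // true: the == operator does not
System.out.println(Float.compare(Float.NaN, Float.NaN)); // 0: compare() treats NaN as equal to itself
System.out.println(Float.NaN == Float.NaN);              // false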
No, java.lang.Float.compare(float, float) does not return 0 (equal) for values within some hidden epsilon of each other.
However, you can easily do such a comparison by hand, or with one of a few common libraries.
By hand, you can write a function that returns zero if a fuzzy check for equality passes, else returns Float.compare(). The fuzzy check for equality can be something like Math.abs( f1 - f2 ) < epsilon.
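For instance (a hypothetical helper sketching that approach; the epsilon choice is up to you):
static int fuzzyCompare(float f1, float f2, float epsilon) {
    if (Math.abs(f1 - f2) < epsilon) {
        return 0; // close enough: treat as equal (NaN fails this test and falls through)
    }
    return Float.compare(f1, f2); // otherwise fall back to the exact ordering
}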
Alternatively, in Apache Commons Math, the Precision class provides compareTo() and equals() methods for floats and doubles that accept an epsilon value. And the class provides a default value for epsilon in the constant EPSILON.
Alternatively, in Google Guava, the DoubleMath class provides fuzzyCompare() and fuzzyEquals() methods that accept doubles and an epsilon value.
Sounds like you need to check that two float values are equal within the given non-negative delta, correct?
You can use JUnit's Assertions.assertEquals() method to achieve that:
assertEquals(float f1, float f2, float delta)
Commonly done with a test like: if (abs(a-b) < DELTA) ... where DELTA is globally declared as some very-small numeric constant for consistency throughout the program. Since the test is so very simple, I like to explicitly state it.
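In Java that test might look like this (a sketch; DELTA is an assumed application-wide constant, not a standard library value):
private static final float DELTA = 1e-6f; // assumed app-wide tolerance

static boolean nearlyEqual(float a, float b) {
    return Math.abs(a - b) < DELTA;
}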

Is there an efficient way to check for an approximate floating point equality in C++? [duplicate]

What would be the most efficient way to compare two double or two float values?
Simply doing this is not correct:
bool CompareDoubles1 (double A, double B)
{
    return A == B;
}
But something like:
bool CompareDoubles2 (double A, double B)
{
    double diff = A - B;
    return (diff < EPSILON) && (-diff < EPSILON);
}
Seems to waste processing.
Does anyone know a smarter float comparer?
Be extremely careful using any of the other suggestions. It all depends on context.
I have spent a long time tracing bugs in a system that presumed a==b if |a-b|<epsilon. The underlying problems were:
The implicit presumption in an algorithm that if a==b and b==c then a==c.
Using the same epsilon for lines measured in inches and lines measured in mils (.001 inch). That is a==b but 1000a!=1000b. (This is why AlmostEqual2sComplement asks for the epsilon or max ULPS).
The use of the same epsilon for both the cosine of angles and the length of lines!
Using such a compare function to sort items in a collection. (In this case using the builtin C++ operator == for doubles produced correct results.)
Like I said: it all depends on context and the expected size of a and b.
By the way, std::numeric_limits<double>::epsilon() is the "machine epsilon". It is the difference between 1.0 and the next value representable by a double. I guess that it could be used in the compare function, but only if the expected values are less than 1. (This is in response to @cdv's answer...)
Also, if you basically have int arithmetic in doubles (here we use doubles to hold int values in certain cases) your arithmetic will be correct. For example 4.0/2.0 will be the same as 1.0+1.0. This holds as long as you do not do things that result in fractions (4.0/3.0) or go outside of the range of an int.
The comparison with an epsilon value is what most people do (even in game programming).
You should change your implementation a little though:
bool AreSame(double a, double b)
{
    return fabs(a - b) < EPSILON;
}
Edit: Christer has added a stack of great info on this topic on a recent blog post. Enjoy.
Comparing floating point numbers depends on the context. Since even changing the order of operations can produce different results, it is important to know how "equal" you want the numbers to be.
Comparing floating point numbers by Bruce Dawson is a good place to start when looking at floating point comparison.
The following definitions are from The art of computer programming by Knuth:
bool approximatelyEqual(float a, float b, float epsilon)
{
    return fabs(a - b) <= ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}

bool essentiallyEqual(float a, float b, float epsilon)
{
    return fabs(a - b) <= ( (fabs(a) > fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}

bool definitelyGreaterThan(float a, float b, float epsilon)
{
    return (a - b) > ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}

bool definitelyLessThan(float a, float b, float epsilon)
{
    return (b - a) > ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}
Of course, choosing epsilon depends on the context, and determines how equal you want the numbers to be.
Another method of comparing floating point numbers is to look at the ULP (units in last place) of the numbers. While not dealing specifically with comparisons, the paper What every computer scientist should know about floating point numbers is a good resource for understanding how floating point works and what the pitfalls are, including what ULP is.
I found that the Google C++ Testing Framework contains a nice cross-platform template-based implementation of AlmostEqual2sComplement which works on both doubles and floats. Given that it is released under the BSD license, using it in your own code should be no problem, as long as you retain the license. I extracted the below code from http://code.google.com/p/googletest/source/browse/trunk/include/gtest/internal/gtest-internal.h (now at https://github.com/google/googletest/blob/master/googletest/include/gtest/internal/gtest-internal.h) and added the license on top.
Be sure to #define GTEST_OS_WINDOWS to some value (or to change the code where it's used to something that fits your codebase - it's BSD licensed after all).
Usage example:
double left = // something
double right = // something
const FloatingPoint<double> lhs(left), rhs(right);
if (lhs.AlmostEquals(rhs)) {
//they're equal!
}
Here's the code:
// Copyright 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee)
//
// The Google C++ Testing Framework (Google Test)
// This template class serves as a compile-time function from size to
// type. It maps a size in bytes to a primitive type with that
// size. e.g.
//
// TypeWithSize<4>::UInt
//
// is typedef-ed to be unsigned int (unsigned integer made up of 4
// bytes).
//
// Such functionality should belong to STL, but I cannot find it
// there.
//
// Google Test uses this class in the implementation of floating-point
// comparison.
//
// For now it only handles UInt (unsigned int) as that's all Google Test
// needs. Other types can be easily added in the future if need
// arises.
template <size_t size>
class TypeWithSize {
 public:
  // This prevents the user from using TypeWithSize<N> with incorrect
  // values of N.
  typedef void UInt;
};

// The specialization for size 4.
template <>
class TypeWithSize<4> {
 public:
  // unsigned int has size 4 in both gcc and MSVC.
  //
  // As base/basictypes.h doesn't compile on Windows, we cannot use
  // uint32, uint64, and etc here.
  typedef int Int;
  typedef unsigned int UInt;
};

// The specialization for size 8.
template <>
class TypeWithSize<8> {
 public:
#if GTEST_OS_WINDOWS
  typedef __int64 Int;
  typedef unsigned __int64 UInt;
#else
  typedef long long Int;            // NOLINT
  typedef unsigned long long UInt;  // NOLINT
#endif  // GTEST_OS_WINDOWS
};
// This template class represents an IEEE floating-point number
// (either single-precision or double-precision, depending on the
// template parameters).
//
// The purpose of this class is to do more sophisticated number
// comparison. (Due to round-off error, etc, it's very unlikely that
// two floating-points will be equal exactly. Hence a naive
// comparison by the == operation often doesn't work.)
//
// Format of IEEE floating-point:
//
// The most-significant bit being the leftmost, an IEEE
// floating-point looks like
//
// sign_bit exponent_bits fraction_bits
//
// Here, sign_bit is a single bit that designates the sign of the
// number.
//
// For float, there are 8 exponent bits and 23 fraction bits.
//
// For double, there are 11 exponent bits and 52 fraction bits.
//
// More details can be found at
// http://en.wikipedia.org/wiki/IEEE_floating-point_standard.
//
// Template parameter:
//
// RawType: the raw floating-point type (either float or double)
template <typename RawType>
class FloatingPoint {
 public:
  // Defines the unsigned integer type that has the same size as the
  // floating point number.
  typedef typename TypeWithSize<sizeof(RawType)>::UInt Bits;

  // Constants.

  // # of bits in a number.
  static const size_t kBitCount = 8*sizeof(RawType);

  // # of fraction bits in a number.
  static const size_t kFractionBitCount =
    std::numeric_limits<RawType>::digits - 1;

  // # of exponent bits in a number.
  static const size_t kExponentBitCount = kBitCount - 1 - kFractionBitCount;

  // The mask for the sign bit.
  static const Bits kSignBitMask = static_cast<Bits>(1) << (kBitCount - 1);

  // The mask for the fraction bits.
  static const Bits kFractionBitMask =
    ~static_cast<Bits>(0) >> (kExponentBitCount + 1);

  // The mask for the exponent bits.
  static const Bits kExponentBitMask = ~(kSignBitMask | kFractionBitMask);

  // How many ULP's (Units in the Last Place) we want to tolerate when
  // comparing two numbers. The larger the value, the more error we
  // allow. A 0 value means that two numbers must be exactly the same
  // to be considered equal.
  //
  // The maximum error of a single floating-point operation is 0.5
  // units in the last place. On Intel CPU's, all floating-point
  // calculations are done with 80-bit precision, while double has 64
  // bits. Therefore, 4 should be enough for ordinary use.
  //
  // See the following article for more details on ULP:
  // http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm.
  static const size_t kMaxUlps = 4;

  // Constructs a FloatingPoint from a raw floating-point number.
  //
  // On an Intel CPU, passing a non-normalized NAN (Not a Number)
  // around may change its bits, although the new value is guaranteed
  // to be also a NAN. Therefore, don't expect this constructor to
  // preserve the bits in x when x is a NAN.
  explicit FloatingPoint(const RawType& x) { u_.value_ = x; }

  // Static methods

  // Reinterprets a bit pattern as a floating-point number.
  //
  // This function is needed to test the AlmostEquals() method.
  static RawType ReinterpretBits(const Bits bits) {
    FloatingPoint fp(0);
    fp.u_.bits_ = bits;
    return fp.u_.value_;
  }

  // Returns the floating-point number that represent positive infinity.
  static RawType Infinity() {
    return ReinterpretBits(kExponentBitMask);
  }

  // Non-static methods

  // Returns the bits that represents this number.
  const Bits &bits() const { return u_.bits_; }

  // Returns the exponent bits of this number.
  Bits exponent_bits() const { return kExponentBitMask & u_.bits_; }

  // Returns the fraction bits of this number.
  Bits fraction_bits() const { return kFractionBitMask & u_.bits_; }

  // Returns the sign bit of this number.
  Bits sign_bit() const { return kSignBitMask & u_.bits_; }

  // Returns true iff this is NAN (not a number).
  bool is_nan() const {
    // It's a NAN if the exponent bits are all ones and the fraction
    // bits are not entirely zeros.
    return (exponent_bits() == kExponentBitMask) && (fraction_bits() != 0);
  }

  // Returns true iff this number is at most kMaxUlps ULP's away from
  // rhs. In particular, this function:
  //
  //   - returns false if either number is (or both are) NAN.
  //   - treats really large numbers as almost equal to infinity.
  //   - thinks +0.0 and -0.0 are 0 DLP's apart.
  bool AlmostEquals(const FloatingPoint& rhs) const {
    // The IEEE standard says that any comparison operation involving
    // a NAN must return false.
    if (is_nan() || rhs.is_nan()) return false;

    return DistanceBetweenSignAndMagnitudeNumbers(u_.bits_, rhs.u_.bits_)
        <= kMaxUlps;
  }

 private:
  // The data type used to store the actual floating-point number.
  union FloatingPointUnion {
    RawType value_;  // The raw floating-point number.
    Bits bits_;      // The bits that represent the number.
  };

  // Converts an integer from the sign-and-magnitude representation to
  // the biased representation. More precisely, let N be 2 to the
  // power of (kBitCount - 1), an integer x is represented by the
  // unsigned number x + N.
  //
  // For instance,
  //
  //   -N + 1 (the most negative number representable using
  //          sign-and-magnitude) is represented by 1;
  //   0      is represented by N; and
  //   N - 1  (the biggest number representable using
  //          sign-and-magnitude) is represented by 2N - 1.
  //
  // Read http://en.wikipedia.org/wiki/Signed_number_representations
  // for more details on signed number representations.
  static Bits SignAndMagnitudeToBiased(const Bits &sam) {
    if (kSignBitMask & sam) {
      // sam represents a negative number.
      return ~sam + 1;
    } else {
      // sam represents a positive number.
      return kSignBitMask | sam;
    }
  }

  // Given two numbers in the sign-and-magnitude representation,
  // returns the distance between them as an unsigned number.
  static Bits DistanceBetweenSignAndMagnitudeNumbers(const Bits &sam1,
                                                     const Bits &sam2) {
    const Bits biased1 = SignAndMagnitudeToBiased(sam1);
    const Bits biased2 = SignAndMagnitudeToBiased(sam2);
    return (biased1 >= biased2) ? (biased1 - biased2) : (biased2 - biased1);
  }

  FloatingPointUnion u_;
};
EDIT: This post is 4 years old. It's probably still valid, and the code is nice, but some people found improvements. Best go get the latest version of AlmostEquals right from the Google Test source code, and not the one I pasted up here.
For a more in depth approach read Comparing floating point numbers. Here is the code snippet from that link:
// Usable AlmostEqual function
bool AlmostEqual2sComplement(float A, float B, int maxUlps)
{
    // Make sure maxUlps is non-negative and small enough that the
    // default NAN won't compare as equal to anything.
    assert(maxUlps > 0 && maxUlps < 4 * 1024 * 1024);
    int aInt = *(int*)&A;
    // Make aInt lexicographically ordered as a twos-complement int
    if (aInt < 0)
        aInt = 0x80000000 - aInt;
    // Make bInt lexicographically ordered as a twos-complement int
    int bInt = *(int*)&B;
    if (bInt < 0)
        bInt = 0x80000000 - bInt;
    int intDiff = abs(aInt - bInt);
    if (intDiff <= maxUlps)
        return true;
    return false;
}
Realizing this is an old thread, but this article is one of the most straightforward ones I have found on comparing floating point numbers. If you want to explore more, it has more detailed references as well, and the main site covers a complete range of issues dealing with floating point numbers: The Floating-Point Guide: Comparison.
We can find a somewhat more practical article in Floating-point tolerances revisited, which notes there is an absolute tolerance test, boiling down to this in C++:
bool absoluteToleranceCompare(double x, double y)
{
    return std::fabs(x - y) <= std::numeric_limits<double>::epsilon();
}
and relative tolerance test:
bool relativeToleranceCompare(double x, double y)
{
    double maxXY = std::max( std::fabs(x), std::fabs(y) );
    return std::fabs(x - y) <= std::numeric_limits<double>::epsilon() * maxXY;
}
The article notes that the absolute test fails when x and y are large, and the relative test fails when they are small. Assuming the absolute and relative tolerances are the same, a combined test would look like this:
bool combinedToleranceCompare(double x, double y)
{
    double maxXYOne = std::max( { 1.0, std::fabs(x), std::fabs(y) } );
    return std::fabs(x - y) <= std::numeric_limits<double>::epsilon() * maxXYOne;
}
I ended up spending quite some time going through the material in this great thread. I doubt everyone wants to spend so much time, so I will highlight a summary of what I learned and the solution I implemented.
Quick Summary
Is 1e-8 approximately the same as 1e-16? If you are looking at noisy sensor data then probably yes, but if you are doing molecular simulation then maybe not! Bottom line: you always need to think of the tolerance value in the context of the specific function call, and not just make it a generic app-wide hard-coded constant.
For general library functions, it's still nice to have a parameter with a default tolerance. A typical choice is numeric_limits::epsilon(), which is the same as FLT_EPSILON in float.h. This is however problematic, because the epsilon for comparing values like 1.0 is not the same as the epsilon for values like 1E9. FLT_EPSILON is defined for 1.0.
The obvious implementation to check if a number is within tolerance is fabs(a - b) <= epsilon; however, this doesn't work, because the default epsilon is defined for 1.0. We need to scale epsilon up or down in terms of a and b.
There are two solutions to this problem: either you set epsilon proportional to max(a, b), or you can get the next representable numbers around a and then see if b falls into that range. The former is called the "relative" method and the latter the ULP method.
Both methods fail anyway when comparing with 0. In that case, the application must supply a correct tolerance.
Utility Functions Implementation (C++11)
//implements relative method - do not use for comparing with zero
//use this most of the time, tolerance needs to be meaningful in your context
template<typename TReal>
static bool isApproximatelyEqual(TReal a, TReal b, TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
    TReal diff = std::fabs(a - b);
    if (diff <= tolerance)
        return true;
    if (diff < std::fmax(std::fabs(a), std::fabs(b)) * tolerance)
        return true;
    return false;
}

//supply tolerance that is meaningful in your context
//for example, default tolerance may not work if you are comparing double with float
template<typename TReal>
static bool isApproximatelyZero(TReal a, TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
    if (std::fabs(a) <= tolerance)
        return true;
    return false;
}

//use this when you want to be on safe side
//for example, don't start rover unless signal is above 1
template<typename TReal>
static bool isDefinitelyLessThan(TReal a, TReal b, TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
    TReal diff = a - b;
    if (diff < tolerance)
        return true;
    if (diff < std::fmax(std::fabs(a), std::fabs(b)) * tolerance)
        return true;
    return false;
}

template<typename TReal>
static bool isDefinitelyGreaterThan(TReal a, TReal b, TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
    TReal diff = a - b;
    if (diff > tolerance)
        return true;
    if (diff > std::fmax(std::fabs(a), std::fabs(b)) * tolerance)
        return true;
    return false;
}

//implements ULP method
//use this when you are only concerned about floating point precision issue
//for example, if you want to see if a is 1.0 by checking if its within
//10 closest representable floating point numbers around 1.0.
template<typename TReal>
static bool isWithinPrecisionInterval(TReal a, TReal b, unsigned int interval_size = 1)
{
    TReal min_a = a - (a - std::nextafter(a, std::numeric_limits<TReal>::lowest())) * interval_size;
    TReal max_a = a + (std::nextafter(a, std::numeric_limits<TReal>::max()) - a) * interval_size;
    return min_a <= b && max_a >= b;
}
The portable way to get epsilon in C++ is
#include <limits>
std::numeric_limits<double>::epsilon()
Then the comparison function becomes
#include <cmath>
#include <limits>
bool AreSame(double a, double b) {
    return std::fabs(a - b) < std::numeric_limits<double>::epsilon();
}
The code you wrote is bugged:
return (diff < EPSILON) && (-diff > EPSILON);
The correct code would be :
return (diff < EPSILON) && (diff > -EPSILON);
(...and yes this is different)
I wonder if fabs wouldn't make you lose lazy evaluation in some cases. I would say it depends on the compiler. You might want to try both. If they are equivalent on average, take the implementation with fabs.
If you have some info on which of the two floats is more likely to be bigger than the other, you can play on the order of the comparison to take better advantage of the lazy evaluation.
Finally you might get better result by inlining this function. Not likely to improve much though...
Edit: OJ, thanks for correcting your code. I erased my comment accordingly
return fabs(a - b) < EPSILON;
This is fine if:
the order of magnitude of your inputs doesn't change much
very small numbers of opposite signs can be treated as equal
But otherwise it'll lead you into trouble. Double precision numbers have a resolution of about 16 decimal places. If the two numbers you are comparing are larger in magnitude than EPSILON*1.0E16, then you might as well be saying:
return a==b;
I'll examine a different approach that assumes you need to worry about the first issue, and assume the second is fine for your application. A solution would be something like:
#define VERYSMALL  (1.0E-150)
#define EPSILON    (1.0E-8)
bool AreSame(double a, double b)
{
    double absDiff = fabs(a - b);
    if (absDiff < VERYSMALL)
    {
        return true;
    }

    double maxAbs = max(fabs(a), fabs(b));
    return (absDiff / maxAbs) < EPSILON;
}
This is expensive computationally, but it is sometimes what is called for. This is what we have to do at my company because we deal with an engineering library and inputs can vary by a few dozen orders of magnitude.
Anyway, the point is this (and applies to practically every programming problem): Evaluate what your needs are, then come up with a solution to address your needs -- don't assume the easy answer will address your needs. If after your evaluation you find that fabs(a-b) < EPSILON will suffice, perfect -- use it! But be aware of its shortcomings and other possible solutions too.
As others have pointed out, using a fixed-exponent epsilon (such as 0.0000001) will be useless for values away from the epsilon value. For example, if your two values are 10000.000977 and 10000, then there are NO 32-bit floating-point values between these two numbers -- 10000 and 10000.000977 are as close as you can possibly get without being bit-for-bit identical. Here, an epsilon of less than 0.0009 is meaningless; you might as well use the straight equality operator.
Likewise, as the two values approach epsilon in size, the relative error grows to 100%.
Thus, trying to mix a fixed point number such as 0.00001 with floating-point values (where the exponent is arbitrary) is a pointless exercise. This will only ever work if you can be assured that the operand values lie within a narrow domain (that is, close to some specific exponent), and if you properly select an epsilon value for that specific test. If you pull a number out of the air ("Hey! 0.00001 is small, so that must be good!"), you're doomed to numerical errors. I've spent plenty of time debugging bad numerical code where some poor schmuck tosses in random epsilon values to make yet another test case work.
If you do numerical programming of any kind and believe you need to reach for fixed-point epsilons, READ BRUCE'S ARTICLE ON COMPARING FLOATING-POINT NUMBERS.
Comparing Floating Point Numbers
Here's proof that using std::numeric_limits<double>::epsilon() is not the answer — it fails for values greater than one:
#include <stdio.h>
#include <cmath>
#include <limits>

double ItoD (__int64 x) {
    // Return double from 64-bit hexadecimal representation.
    return *(reinterpret_cast<double*>(&x));
}

void test (__int64 ai, __int64 bi) {
    double a = ItoD(ai), b = ItoD(bi);
    bool close = std::fabs(a - b) < std::numeric_limits<double>::epsilon();
    printf("%.16f and %.16f %s close.\n", a, b, close ? "are " : "are not");
}

int main()
{
    test(0x3fe0000000000000L, 0x3fe0000000000001L);
    test(0x3ff0000000000000L, 0x3ff0000000000001L);
}
Running yields this output:
0.5000000000000000 and 0.5000000000000001 are close.
1.0000000000000000 and 1.0000000000000002 are not close.
Note that in the second case (one and just larger than one), the two input values are as close as they can possibly be, and still compare as not close. Thus, for values greater than 1.0, you might as well just use an equality test. Fixed epsilons will not save you when comparing floating-point values.
Qt implements two functions, maybe you can learn from them:
static inline bool qFuzzyCompare(double p1, double p2)
{
    return (qAbs(p1 - p2) <= 0.000000000001 * qMin(qAbs(p1), qAbs(p2)));
}

static inline bool qFuzzyCompare(float p1, float p2)
{
    return (qAbs(p1 - p2) <= 0.00001f * qMin(qAbs(p1), qAbs(p2)));
}
And you may need the following functions, since
Note that comparing values where either p1 or p2 is 0.0 will not work,
nor does comparing values where one of the values is NaN or infinity.
If one of the values is always 0.0, use qFuzzyIsNull instead. If one
of the values is likely to be 0.0, one solution is to add 1.0 to both
values.
static inline bool qFuzzyIsNull(double d)
{
    return qAbs(d) <= 0.000000000001;
}

static inline bool qFuzzyIsNull(float f)
{
    return qAbs(f) <= 0.00001f;
}
Unfortunately, even your "wasteful" code is incorrect. EPSILON is the smallest value that could be added to 1.0 and change its value. The value 1.0 is very important — larger numbers do not change when added to EPSILON. Now, you can scale this value to the numbers you are comparing to tell whether they are different or not. The correct expression for comparing two doubles is:
if (fabs(a - b) <= DBL_EPSILON * fmax(fabs(a), fabs(b)))
{
    // ...
}
This is at a minimum. In general, though, you would want to account for noise in your calculations and ignore a few of the least significant bits, so a more realistic comparison would look like:
if (fabs(a - b) <= 16 * DBL_EPSILON * fmax(fabs(a), fabs(b)))
{
    // ...
}
If comparison performance is very important to you and you know the range of your values, then you should use fixed-point numbers instead.
General-purpose comparison of floating-point numbers is generally meaningless. How to compare really depends on the problem at hand. In many problems, numbers are sufficiently discretized to allow comparing them within a given tolerance. Unfortunately, there are just as many problems where such a trick doesn't really work. For one example, consider working with a Heaviside (step) function of a number in question (digital stock options come to mind) when your observations are very close to the barrier. Performing tolerance-based comparison wouldn't do much good, as it would effectively shift the issue from the original barrier to two new ones. Again, there is no general-purpose solution for such problems, and the particular solution might require going as far as changing the numerical method in order to achieve stability.
You have to do this processing for floating point comparison, since floats can't be compared exactly the way integer types can. Here are functions for the various comparison operators.
Floating Point Equal to (==)
I also prefer the subtraction technique rather than relying on fabs() or abs(), but I'd have to speed profile it on various architectures from 64-bit PC to ATMega328 microcontroller (Arduino) to really see if it makes much of a performance difference.
So, let's forget about all this absolute value stuff and just do some subtraction and comparison!
Modified from Microsoft's example here:
/// @brief See if two floating point numbers are approximately equal.
/// @param[in] a        number 1
/// @param[in] b        number 2
/// @param[in] epsilon  A small value such that if the difference between the two numbers is
///                     smaller than this they can safely be considered to be equal.
/// @return true if the two numbers are approximately equal, and false otherwise
bool is_float_eq(float a, float b, float epsilon) {
    return ((a - b) < epsilon) && ((b - a) < epsilon);
}

bool is_double_eq(double a, double b, double epsilon) {
    return ((a - b) < epsilon) && ((b - a) < epsilon);
}
Example usage:
constexpr float EPSILON = 0.0001; // 1e-4
is_float_eq(1.0001, 0.99998, EPSILON);
I'm not entirely sure, but it seems to me some of the criticisms of the epsilon-based approach, as described in the comments below this highly-upvoted answer, can be resolved by using a variable epsilon, scaled according to the floating point values being compared, like this:
float a = 1.0001;
float b = 0.99998;
float epsilon = std::max(std::fabs(a), std::fabs(b)) * 1e-4;
is_float_eq(a, b, epsilon);
This way, the epsilon value scales with the floating point values and is therefore never so small of a value that it becomes insignificant.
For completeness, let's add the rest:
Greater than (>), and less than (<):
/// @brief See if floating point number `a` is > `b`
/// @param[in] a        number 1
/// @param[in] b        number 2
/// @param[in] epsilon  a small value such that if `a` is > `b` by this amount, `a` is considered
///                     to be definitively > `b`
/// @return true if `a` is definitively > `b`, and false otherwise
bool is_float_gt(float a, float b, float epsilon) {
    return a > b + epsilon;
}

bool is_double_gt(double a, double b, double epsilon) {
    return a > b + epsilon;
}

/// @brief See if floating point number `a` is < `b`
/// @param[in] a        number 1
/// @param[in] b        number 2
/// @param[in] epsilon  a small value such that if `a` is < `b` by this amount, `a` is considered
///                     to be definitively < `b`
/// @return true if `a` is definitively < `b`, and false otherwise
bool is_float_lt(float a, float b, float epsilon) {
    return a < b - epsilon;
}

bool is_double_lt(double a, double b, double epsilon) {
    return a < b - epsilon;
}
Greater than or equal to (>=), and less than or equal to (<=)
/// @brief Returns true if `a` is definitively >= `b`, and false otherwise
bool is_float_ge(float a, float b, float epsilon) {
    return a > b - epsilon;
}

bool is_double_ge(double a, double b, double epsilon) {
    return a > b - epsilon;
}

/// @brief Returns true if `a` is definitively <= `b`, and false otherwise
bool is_float_le(float a, float b, float epsilon) {
    return a < b + epsilon;
}

bool is_double_le(double a, double b, double epsilon) {
    return a < b + epsilon;
}
Additional improvements:
A good default value for epsilon in C++ is std::numeric_limits<T>::epsilon(), which evaluates to either 0 or FLT_EPSILON, DBL_EPSILON, or LDBL_EPSILON. See here: https://en.cppreference.com/w/cpp/types/numeric_limits/epsilon. You can also see the float.h header for FLT_EPSILON, DBL_EPSILON, and LDBL_EPSILON.
See https://en.cppreference.com/w/cpp/header/cfloat and
https://www.cplusplus.com/reference/cfloat/
You could template the functions instead, to handle all floating point types: float, double, and long double, with type checks for these types via a static_assert() inside the template.
Scaling the epsilon value is a good idea to ensure it works for really large and really small a and b values. This article recommends and explains it: http://realtimecollisiondetection.net/blog/?p=89. So, you should scale epsilon by a scaling value equal to max(1.0, abs(a), abs(b)), as that article explains. Otherwise, as a and/or b increase in magnitude, the epsilon would eventually become so small relative to those values that it becomes lost in the floating point error. So, we scale it to become larger in magnitude like they are. However, using 1.0 as the smallest allowed scaling factor for epsilon also ensures that for really small-magnitude a and b values, epsilon itself doesn't get scaled so small that it also becomes lost in the floating point error. So, we limit the minimum scaling factor to 1.0.
If you want to "encapsulate" the above functions into a class, don't. Instead, wrap them up in a namespace if you like in order to namespace them. Ex: if you put all of the stand-alone functions into a namespace called float_comparison, then you could access the is_eq() function like this, for instance: float_comparison::is_eq(1.0, 1.5);.
It might also be nice to add comparisons against zero, not just comparisons between two values.
So, here is a better type of solution with the above improvements in place:
namespace float_comparison {

/// Scale the epsilon value to become large for large-magnitude a or b,
/// but no smaller than 1.0, per the explanation above, to ensure that
/// epsilon doesn't ever fall out in floating point error as a and/or b
/// increase in magnitude.
template<typename T>
static constexpr T scale_epsilon(T a, T b, T epsilon =
        std::numeric_limits<T>::epsilon()) noexcept
{
    static_assert(std::is_floating_point_v<T>, "Floating point comparisons "
                  "require type float, double, or long double.");

    T scaling_factor;

    // Special case for when a or b is infinity
    if (std::isinf(a) || std::isinf(b))
    {
        scaling_factor = 0;
    }
    else
    {
        scaling_factor = std::max({(T)1.0, std::abs(a), std::abs(b)});
    }

    T epsilon_scaled = scaling_factor * std::abs(epsilon);
    return epsilon_scaled;
}

// Compare two values

/// Equal: returns true if a is approximately == b, and false otherwise
template<typename T>
static constexpr bool is_eq(T a, T b, T epsilon =
        std::numeric_limits<T>::epsilon()) noexcept
{
    static_assert(std::is_floating_point_v<T>, "Floating point comparisons "
                  "require type float, double, or long double.");

    // test `a == b` first to see if both a and b are either infinity
    // or -infinity
    return a == b || std::abs(a - b) <= scale_epsilon(a, b, epsilon);
}

/*
etc. etc.:
is_eq()
is_ne()
is_lt()
is_le()
is_gt()
is_ge()
*/

// Compare against zero

/// Equal: returns true if a is approximately == 0, and false otherwise
template<typename T>
static constexpr bool is_eq_zero(T a, T epsilon =
        std::numeric_limits<T>::epsilon()) noexcept
{
    static_assert(std::is_floating_point_v<T>, "Floating point comparisons "
                  "require type float, double, or long double.");

    return is_eq(a, (T)0.0, epsilon);
}

/*
etc. etc.:
is_eq_zero()
is_ne_zero()
is_lt_zero()
is_le_zero()
is_gt_zero()
is_ge_zero()
*/

} // namespace float_comparison
See also:
The macro forms of some of the functions above in my repo here: utilities.h.
UPDATE 29 NOV 2020: it's a work-in-progress, and I'm going to make it a separate answer when ready, but I've produced a better, scaled-epsilon version of all of the functions in C in this file here: utilities.c. Take a look.
ADDITIONAL READING (which I have now done): Floating-point tolerances revisited, by Christer Ericson. VERY USEFUL ARTICLE! It talks about scaling epsilon in order to ensure it never falls out in floating point error, even for really large-magnitude a and/or b values!
My class based on previously posted answers. Very similar to Google's code but I use a bias which pushes all NaN values above 0xFF000000. That allows a faster check for NaN.
This code is meant to demonstrate the concept, not be a general solution. Google's code already shows how to compute all the platform specific values and I didn't want to duplicate all that. I've done limited testing on this code.
typedef unsigned int U32;
//  Float          Memory       Bias (unsigned)
//  -----          ------       ---------------
//  NaN            0xFFFFFFFF   0xFF800001
//  NaN            0xFF800001   0xFFFFFFFF
//  -Infinity      0xFF800000   0x00000000 ---
//  -3.40282e+038  0xFF7FFFFF   0x00000001  |
//  -1.40130e-045  0x80000001   0x7F7FFFFF  |
//  -0.0           0x80000000   0x7F800000  |--- Valid <= 0xFF000000.
//   0.0           0x00000000   0x7F800000  |    NaN > 0xFF000000
//   1.40130e-045  0x00000001   0x7F800001  |
//   3.40282e+038  0x7F7FFFFF   0xFEFFFFFF  |
//   Infinity      0x7F800000   0xFF000000 ---
//   NaN           0x7F800001   0xFF000001
//   NaN           0x7FFFFFFF   0xFF7FFFFF
//
//  Either value of NaN returns false.
//  -Infinity and +Infinity are not "close".
//  -0 and +0 are equal.
//
class CompareFloat {
public:
    union {
        float m_f32;
        U32   m_u32;
    };
    static bool IsClose( float A, float B, U32 unitsDelta = 4 )
    {
        U32 a = CompareFloat::GetBiased( A );
        U32 b = CompareFloat::GetBiased( B );

        if ( (a > 0xFF000000) || (b > 0xFF000000) )
        {
            return( false );
        }
        return( (static_cast<U32>(abs( (int)(a - b) ))) < unitsDelta );
    }
protected:
    static U32 GetBiased( float f )
    {
        U32 r = ((CompareFloat*)&f)->m_u32;

        if ( r & 0x80000000 )
        {
            return( ~r - 0x007FFFFF );
        }
        return( r + 0x7F800000 );
    }
};
I'd be very wary of any of these answers that involves floating point subtraction (e.g., fabs(a-b) < epsilon). First, the floating point numbers become more sparse at greater magnitudes and at high enough magnitudes where the spacing is greater than epsilon, you might as well just be doing a == b. Second, subtracting two very close floating point numbers (as these will tend to be, given that you're looking for near equality) is exactly how you get catastrophic cancellation.
While not portable, I think grom's answer does the best job of avoiding these issues.
There are actually cases in numerical software where you want to check whether two floating point numbers are exactly equal. I posted this on a similar question
https://stackoverflow.com/a/10973098/1447411
So you can not say that "CompareDoubles1" is wrong in general.
In terms of the scale of quantities:
If epsilon is a small fraction of the magnitude of the quantity (i.e. a relative value) in some certain physical sense, and the A and B types are comparable in the same sense, then I think the following is quite correct:
#include <limits>
#include <iomanip>
#include <iostream>
#include <cmath>
#include <cstdlib>
#include <cassert>

template< typename A, typename B >
inline
bool close_enough(A const & a, B const & b,
                  typename std::common_type< A, B >::type const & epsilon)
{
    using std::isless;
    assert(isless(0, epsilon)); // epsilon is a part of the whole quantity
    assert(isless(epsilon, 1));
    using std::abs;
    auto const delta = abs(a - b);
    auto const x = abs(a);
    auto const y = abs(b);
    // comparable generally and |a - b| < eps * (|a| + |b|) / 2
    return isless(epsilon * y, x) && isless(epsilon * x, y) && isless((delta + delta) / (x + y), epsilon);
}

int main()
{
    std::cout << std::boolalpha << close_enough(0.9, 1.0, 0.1) << std::endl;
    std::cout << std::boolalpha << close_enough(1.0, 1.1, 0.1) << std::endl;
    std::cout << std::boolalpha << close_enough(1.1, 1.2, 0.01) << std::endl;
    std::cout << std::boolalpha << close_enough(1.0001, 1.0002, 0.01) << std::endl;
    std::cout << std::boolalpha << close_enough(1.0, 0.01, 0.1) << std::endl;
    return EXIT_SUCCESS;
}
I use this code:
bool AlmostEqual(double v1, double v2)
{
    // scale epsilon by the smaller magnitude, and use <= so that exactly
    // equal values (including 0.0 == 0.0) still compare as equal
    return std::fabs(v1 - v2)
        <= std::min(std::fabs(v1), std::fabs(v2)) * std::numeric_limits<double>::epsilon();
}
Found another interesting implementation on: https://en.cppreference.com/w/cpp/types/numeric_limits/epsilon
#include <cmath>
#include <limits>
#include <iomanip>
#include <iostream>
#include <type_traits>
#include <algorithm>
template<class T>
typename std::enable_if<!std::numeric_limits<T>::is_integer, bool>::type
almost_equal(T x, T y, int ulp)
{
    // the machine epsilon has to be scaled to the magnitude of the values used
    // and multiplied by the desired precision in ULPs (units in the last place)
    return std::fabs(x-y) <= std::numeric_limits<T>::epsilon() * std::fabs(x+y) * ulp
        // unless the result is subnormal
        || std::fabs(x-y) < std::numeric_limits<T>::min();
}

int main()
{
    double d1 = 0.2;
    double d2 = 1 / std::sqrt(5) / std::sqrt(5);
    std::cout << std::fixed << std::setprecision(20)
              << "d1=" << d1 << "\nd2=" << d2 << '\n';
    if (d1 == d2)
        std::cout << "d1 == d2\n";
    else
        std::cout << "d1 != d2\n";
    if (almost_equal(d1, d2, 2))
        std::cout << "d1 almost equals d2\n";
    else
        std::cout << "d1 does not almost equal d2\n";
}
In a more generic way:
template <typename T>
bool compareNumber(const T& a, const T& b) {
    return std::abs(a - b) < std::numeric_limits<T>::epsilon();
}
Note:
As pointed out by #SirGuy, this approach is flawed.
I am leaving this answer here as an example not to follow.
I use this code. Unlike the answers above, it lets you pass an abs_relative_error, which is explained in the comments of the code. The first version compares complex numbers, so that the error can be explained in terms of the angle between two "vectors" of the same length in the complex plane (which gives a little insight). From there, the correct formula for two real numbers follows.
https://github.com/CarloWood/ai-utils/blob/master/almost_equal.h
The latter then is
template<class T>
typename std::enable_if<std::is_floating_point<T>::value, bool>::type
almost_equal(T x, T y, T const abs_relative_error)
{
    return 2 * std::abs(x - y) <= abs_relative_error * std::abs(x + y);
}
where abs_relative_error is basically (twice) the absolute value of what comes closest to being defined in the literature as a relative error. But that is just the choice of name.
What it really means is seen most clearly in the complex plane, I think: if |x| = 1 and y lies in a circle around x with diameter abs_relative_error, then the two are considered equal.
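A small numeric check of that picture, as a self-contained sketch of my own (values chosen arbitrarily): with abs_relative_error = 0.01, values about 0.5% apart near magnitude 1 pass, while values 2% apart do not.

#include <cmath>
#include <cstdio>
#include <type_traits>

template<class T>
typename std::enable_if<std::is_floating_point<T>::value, bool>::type
almost_equal(T x, T y, T const abs_relative_error)
{
    return 2 * std::abs(x - y) <= abs_relative_error * std::abs(x + y);
}

int main()
{
    // 2*|1 - 1.005| = 0.01 <= 0.01 * |1 + 1.005| = 0.02005 -> equal
    std::printf("%d\n", almost_equal(1.0, 1.005, 0.01) ? 1 : 0);  // 1
    // 2*|1 - 1.02|  = 0.04 >  0.01 * |1 + 1.02|  = 0.0202  -> not equal
    std::printf("%d\n", almost_equal(1.0, 1.02, 0.01) ? 1 : 0);   // 0
    return 0;
}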
I use the following function for floating-point number comparison:

bool approximatelyEqual(double a, double b)
{
    return fabs(a - b) <= ((fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * std::numeric_limits<double>::epsilon());
}
It depends on how precise you want the comparison to be. If you want to compare for exactly the same number, then just go with ==. (You almost never want to do this unless you actually want exactly the same number.) On any decent platform you can also do the following:
diff = a - b;
return fabs(diff) < EPSILON;
as fabs tends to be pretty fast. By pretty fast I mean it is basically a bitwise AND, so it better be fast.
And integer tricks for comparing doubles and floats are nice but tend to make it more difficult for the various CPU pipelines to handle effectively. And it's definitely not faster on certain in-order architectures these days due to using the stack as a temporary storage area for values that are being used frequently. (Load-hit-store for those who care.)
/// testing whether two doubles are almost equal. We consider two doubles
/// equal if the difference is within the range [0, epsilon).
///
/// epsilon: a positive number (supposed to be small)
///
/// if either x or y is 0, then we are comparing the absolute difference to
/// epsilon.
/// if both x and y are non-zero, then we are comparing the relative difference
/// to epsilon.
bool almost_equal(double x, double y, double epsilon)
{
    double diff = x - y;
    if (x != 0 && y != 0) {
        diff = diff / y;
    }
    return diff < epsilon && -diff < epsilon;
}
I used this function for my small project and it works, but note the following:
Double-precision error can create a surprise for you. Say epsilon = 1.0e-6: then 1.0 and 1.000001 should NOT be considered equal according to the code above, but on my machine the function considers them equal. That is because 1.000001 cannot be represented exactly in binary; it is probably stored as 1.0000009xxx. I tested with 1.0 and 1.0000011, and this time I got the expected result.
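You can observe that representation effect directly; a tiny sketch of my own:

#include <cstdio>

int main()
{
    // 1.000001 has no exact binary representation; printing extra digits
    // shows the stored value is actually slightly below 1.000001.
    std::printf("%.20f\n", 1.000001);
    // prints something like 1.00000099999999999918 on an IEEE-754 machine
    return 0;
}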
You cannot compare two doubles with a fixed EPSILON: the spacing between adjacent doubles grows with their magnitude, so a threshold that is tight around 1.0 is meaningless around 1.0e18.
A better double comparison would be:

bool same(double a, double b)
{
    return std::nextafter(a, std::numeric_limits<double>::lowest()) <= b
        && std::nextafter(a, std::numeric_limits<double>::max()) >= b;
}
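This accepts b only if it is within one ULP of a. A quick check of my own (repeating the function so the example is self-contained):

#include <cmath>
#include <iostream>
#include <limits>

bool same(double a, double b)
{
    return std::nextafter(a, std::numeric_limits<double>::lowest()) <= b
        && std::nextafter(a, std::numeric_limits<double>::max()) >= b;
}

int main()
{
    std::cout << std::boolalpha
              << same(0.3, 0.1 + 0.2) << '\n'    // true: exactly 1 ULP apart
              << same(1.0, 1.0) << '\n'          // true: identical
              << same(1.0, 1.0 + 1e-9) << '\n';  // false: many ULPs apart
    return 0;
}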
My way may not be correct, but it is useful: convert both floats to strings, then do a string compare.

bool IsFloatEqual(float a, float b, int decimal)
{
    TCHAR form[50] = _T("");
    _stprintf(form, _T("%%.%df"), decimal);   // build e.g. "%.3f" for 3 decimals
    TCHAR a1[30] = _T(""), a2[30] = _T("");
    _stprintf(a1, form, a);
    _stprintf(a2, form, b);
    return _tcscmp(a1, a2) == 0;
}

Operator overloading could also be done.
I wrote this for Java, but maybe you will find it useful. It uses longs instead of doubles, but takes care of NaNs, subnormals, etc.

public static boolean equal(double a, double b) {
    final long fm = 0xFFFFFFFFFFFFFL;    // fraction mask
    final long sm = 0x8000000000000000L; // sign mask
    long c = Double.doubleToLongBits(a), d = Double.doubleToLongBits(b);
    int ea = (int) (c >> 52 & 2047), eb = (int) (d >> 52 & 2047);
    if (ea == 2047 && (c & fm) != 0 || eb == 2047 && (d & fm) != 0) return false; // NaN
    if (c == d) return true;                 // identical - fast check
    if (ea == 0 && eb == 0) return true;     // ±0 or subnormals
    if ((c & sm) != (d & sm)) return false;  // different signs
    if (Math.abs(ea - eb) > 1) return false; // b > 2*a or a > 2*b
    d <<= 12; c <<= 12;
    if (ea < eb) c = c >> 1 | sm;
    else if (ea > eb) d = d >> 1 | sm;
    c -= d;
    return c < 65536 && c > -65536; // don't use Math.abs() here, because
    // c could be 0x8000000000000000, which cannot be made positive
}

public static boolean zero(double a) { return (Double.doubleToLongBits(a) >> 52 & 2047) < 3; }

Keep in mind that after a number of floating-point operations, the result can be very different from what you expect. No comparison code can fix that.

How to add two java.lang.Numbers?

I have two Numbers. Eg:
Number a = 2;
Number b = 3;
//Following is an error:
Number c = a + b;
Why are arithmetic operations not supported on Number? Anyway, how would I add these two numbers in Java? (Of course I'm getting them from somewhere, and I don't know whether they are Integer or Float, etc.)
You say you don't know if your numbers are integer or float... when you use the Number class, the compiler also doesn't know whether your numbers are integers, floats, or something else. As a result, the basic math operators like + and - don't work; the compiler wouldn't know how to handle the values.
START EDIT
Based on the discussion, I thought an example might help. Computers store floating point numbers as two parts, a coefficient and an exponent. So, in a theoretical system, 001110 might be broken up as 0011 10, or 3^2 = 9. But positive integers store numbers as plain binary, so 001110 could also mean 2 + 4 + 8 = 14. When you use the class Number, you're telling the computer you don't know whether the number is a float or an int, so it knows it has 001110 but it doesn't know whether that means 9 or 14 or some other value.
END EDIT
What you can do is make a little assumption and convert to one of the types to do the math. So you could have
Number c = a.intValue() + b.intValue();
which you might as well turn into
Integer c = a.intValue() + b.intValue();
if you're willing to suffer some rounding error, or
Float c = a.floatValue() + b.floatValue();
if you suspect that you're not dealing with integers and are okay with possible minor precision issues. Or, if you'd rather take a small performance blow instead of that error,
BigDecimal c = new BigDecimal(a.floatValue()).add(new BigDecimal(b.floatValue()));
It would also work to write a method that handles the adding for you. I don't know the performance impact this will have, but I assume it will be less than using BigDecimal.
public static Number addNumbers(Number a, Number b) {
    if (a instanceof Double || b instanceof Double) {
        return a.doubleValue() + b.doubleValue();
    } else if (a instanceof Float || b instanceof Float) {
        return a.floatValue() + b.floatValue();
    } else if (a instanceof Long || b instanceof Long) {
        return a.longValue() + b.longValue();
    } else {
        return a.intValue() + b.intValue();
    }
}
The only way to correctly add any two types of java.lang.Number is:

Number a = 2f; // Float
Number b = 3d; // Double
Number c = new BigDecimal( a.toString() ).add( new BigDecimal( b.toString() ) );

This works even for two arguments with different number types. It will (should?) not produce any side effects like overflow or loss of precision, as long as the toString() of the number type does not reduce precision.
java.lang.Number is just the superclass of all wrapper classes of primitive types (see java doc). Use the appropriate primitive type (double, int, etc.) for your purpose, or the respective wrapper class (Double, Integer, etc.).
Consider this:

Number a = 1.5; // Java creates a double and boxes it into a Double object
Number b = 1;   // Same here for int -> Integer boxed

// What should the result be? If Number did implicit casts, it would behave
// differently from what Java usually does, so this line does not compile:
// Number c = a + b;

// This works, and you can see at first glance what the code does: nice
// explicit casts, as you usually write in Java. The result is of course
// again a double that is boxed into a Double object.
Number d = a.doubleValue() + (double) b.intValue();
Use the following:
Number c = a.intValue() + b.intValue(); // Number is an object and not a primitive data type.
Or:
int a = 2;
int b = 3;
int c = a + b;
I think there are 2 sides to your question.
Why is operator+ not supported on Number?
Because the Java language specification does not define it, and there is no operator overloading. There is also no natural compile-time way to cast a Number to some fundamental type, and no natural add that could be defined for every kind of Number.
Why are basic arithmetic operations not supported on Number?
(Copied from my comment:)
Not all subclasses can implement this in a way you would expect. Especially with the atomic types (AtomicInteger, AtomicLong) it is hard to define a useful contract for e.g. add.
Also, a method add would be troublesome if you tried to add a Long to a Short: which type should the result have?
If you know the type of one number but not the other, it is possible to do something like:

public Double add(Double value, Number increment) {
    return value + Double.parseDouble(increment.toString());
}

But it can be messy, so be aware of the potential loss of accuracy and of NumberFormatExceptions.
Number is an abstract class, so you cannot create an instance of it directly. Provided you have a proper instance, you can get number.longValue() or number.intValue() and add those.
First of all, you should be aware that Number is an abstract class. What happens here is that your 2 and 3 are interpreted as int primitives and autoboxed (into Integers in this case). Because Integer is a subtype of Number, you can assign the newly created Integer to a Number reference.
However, a Number is just an abstraction. It could be an integer, it could be floating point, etc., so the semantics of math operations on it would be ambiguous.
Number does not provide the classic math operations for two reasons:
First, member methods in Java cannot be operators; it's not C++. At best, the class could provide an add().
Second, figuring out which operation to perform when you have two different input types (e.g., a division of a float by an int) is quite tricky.
So instead, it is your responsibility to convert back to the specific primitive type you are interested in and apply the mathematical operators yourself.
The best answer would be to build a small utility that uses double dispatch, drilling down to the most common concrete types (take a look at how Smalltalk implements addition).
