Estimating Double's Representation Error in Java

Every now and then I see some rounding errors which are caused by floor'ing some values as shown in the two examples below.
// floor(number, precision)
double balance = floor(0.7/0.1, 3) // = 6.999
double balance = floor(0.7*0.1, 3) // = 0.069
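(floor(number, precision) here is not a JDK method; a plausible stand-in, assumed purely so the snippets above are runnable, is:)
// Hypothetical helper assumed by the snippets above: truncate to a given
// number of decimal digits by scaling, flooring and rescaling.
static double floor(double value, int precision) {
    double scale = Math.pow(10, precision);   // e.g. 1000.0 for precision = 3
    return Math.floor(value * scale) / scale;
}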
The problem, of course, is that 0.7/0.1 and 0.7*0.1 are not exactly the numbers they should be, due to representation errors [see the NOTE below].
One solution could be to add an epsilon so any representation error is mitigated just before applying the floor.
double balance = floor(0.7/0.1 + 1e-10, 3) // = 7.0
double balance = floor(0.7*0.1 + 1e-10, 3) // = 0.07
What epsilon should I use so it's guaranteed to work in all cases? I feel this solution is rather hacky unless I have a good strategy for choosing the correct epsilon which probably depends on the numbers I'm dealing with.
For instance, if there was a way of getting an estimation of the error (as in representation - number) or at least the sign of it (whether representation > number or not), that would be helpful to determine in what direction I should correct the result before applying the floor.
Any other workaround you can think of is very welcome.
NOTE: I know the real problem is I'm using doubles and it has representation errors. Please refrain from saying anything like I should store the balance in a long ((long) Math.floor(3931809L/0.080241D) is equally erratic). I also tried using BigDecimal but the performance degraded quite a lot (it's a realtime application). Also, note I'm not very concerned about propagating small errors over time, I do a lot of calculations like those above but I start from a fresh balance number every time (I do maybe 3 of those operations before returning and starting over).
EDIT: To make that clear, that's the only operation I do, and I repeat it 3 times on the same balance. For example, I take a balance in USD and I convert it to RUB, then to JPY then to EUR, and I return the balance and start over from the beginning (with a fresh balance number, ie no rounding error is propagated other than on these 3 operations). The values are not constrained apart from them being positive numbers (ie, in the range [0, +inf)) and the precision is always below 8 (8 decimal digits, ie 0.00000001 is the smallest balance I will ever have to deal with).

What epsilon should I use so it's guaranteed to work in all cases?
There is no epsilon that is guaranteed to work in all cases¹. Period.
If you analyze (mathematically²) the computations that your application is performing, then it may be possible to figure out a value of epsilon that will work for you.
But note that there are dangers in repeatedly "rounding off" the errors in a multi-step computation. You can end up with the wrong answer in the end. That's what the maths says.
Finally, ask yourself this: if it were legitimate / safe to just make the epsilon-based adjustments, why (after 50-something years) do typical hand-held calculators still insist that 1.0 / 3 * 3 is 0.9999999999...?
¹ Let's be clear: you have not specified what your "cases" are, so I am assuming that you mean all possible computations.
² The analysis is complicated by the fact that the epsilon between a real number and the corresponding binary floating-point representation (e.g. a double) depends on the binary magnitude of the number.

A binary floating point number (double precision) has 53 bits of precision or approximately 15.95 decimal digits.
Let r be a real number and d be the double that is closest to r. Then
epsilon(r) = |r - d| is in the range 0 to 2^(floor(log2(r)) - 53), i.e. roughly r * 2^-53,
and, if you have to deal with numbers in the range 0 to N, then the maximum epsilon value across the range will be approximately
2^(floor(log2(N)) - 53)
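In Java you do not have to work this bound out by hand: Math.ulp(double) returns the spacing between adjacent doubles at a given magnitude, and the worst-case representation error is half of that. A quick illustration (the value 1e6 is just an example):
double n = 1_000_000.0;                 // upper end of the working range
double maxRepError = Math.ulp(n) / 2;   // worst-case |r - d| for values of this magnitude
System.out.println(maxRepError);        // prints about 5.82E-11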
When you perform a computation you need to estimate the cumulative error for all of the steps in the computation. Addition, multiplication and division are relatively "safe". For example with multiplication:
Let r1 = d1 + e1 and r2 = d2 + e2
r1 * r2 = (d1 + e1) * (d2 + e2) = d1 * d2 + d2 * e1 + d1 * e2 + e1 * e2
Unless the epsilon values are already large, the e1 * e2 term vanishes relative to the others, and the epsilon is going to be ≤ 2 * max(|d1|, |d2|) * max(|e1|, |e2|).
(I think. It is a long, long time since I last did this stuff in anger.)
However, subtraction has nasty properties; see Theorem 9 of the Goldberg paper!
The floor function you are using is also a bit tricky.
Let r = d + e
epsilon(floor(r)) = |floor(r, 3) - floor(d, 3)|
which is ≤ max(|ceiling(e, 3)|, 10^-3)
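If you still want the epsilon hack, one refinement suggested by the analysis above is to scale the slack to the magnitude of the value with Math.ulp instead of hard-coding 1e-10. This is only a sketch of that idea; the factor of 4 ulps is a judgement call for a chain of roughly three operations, not a guarantee:
// Sketch only: nudge the value up by a few ulps before flooring, so a result
// sitting just below a decimal boundary because of representation error gets
// pushed back over it. Works for the 0.7/0.1 and 0.7*0.1 examples above;
// it is not proven to cover every computation.
static double floorWithSlack(double value, int precision) {
    double corrected = value + 4 * Math.ulp(value);
    double scale = Math.pow(10, precision);
    return Math.floor(corrected * scale) / scale;
}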

Related

How does Math.pow() handle fractional exponents (specifically nth roots) [duplicate]

I'm trying to determine the asymptotic run-time of one of my algorithms, which uses exponents, but I'm not sure of how exponents are calculated programmatically.
I'm specifically looking for the pow() algorithm used for double-precision, floating point numbers.
I've had a chance to look at fdlibm's implementation. The comments describe the algorithm used:
* Method: Let x = 2^n * (1+f)
*   1. Compute and return log2(x) in two pieces:
*        log2(x) = w1 + w2,
*      where w1 has 53-24 = 29 bit trailing zeros.
*   2. Perform y*log2(x) = n+y' by simulating multi-precision
*      arithmetic, where |y'| <= 0.5.
*   3. Return x**y = 2**n*exp(y'*log2)
followed by a listing of all the special cases handled (0, 1, inf, nan).
The most intense sections of the code, after all the special-case handling, involve the log2 and 2** calculations. And there are no loops in either of those. So, the complexity of floating-point primitives notwithstanding, it looks like an asymptotically constant-time algorithm.
Floating-point experts (of which I'm not one) are welcome to comment. :-)
Unless they've discovered a better way to do it, I believe that approximate values for trig, logarithmic and exponential functions (for exponential growth and decay, for example) are generally calculated using arithmetic rules and Taylor series expansions to produce an approximate result accurate to within the requested precision. (See any calculus book for details on power series, Taylor series, and Maclaurin series expansions of functions.) Please note that it's been a while since I did any of this, so I couldn't tell you, for example, exactly how many terms of the series you need to include to guarantee an error small enough to be negligible in a double-precision calculation.
For example, the Taylor/Maclaurin series expansion for e^x is this:
e^x = SUM_{k=0..+inf} x^k / k! = 1 + x + x^2/(2*1) + x^3/(3*2*1) + x^4/(4*3*2*1) + x^5/(5*4*3*2*1) + ....
If you take all of the terms (k from 0 to infinity), this expansion is exact and complete (no error).
However, if you don't take all the terms going to infinity, but you stop after say 5 terms or 50 terms or whatever, you produce an approximate result that differs from the actual e^x function value by a remainder which is fairly easy to calculate.
The good news for exponentials is that it converges nicely and the terms of its polynomial expansion are fairly easy to code iteratively, so you might (repeat, MIGHT - remember, it's been a while) not even need to pre-calculate how many terms you need to guarantee your error is less than precision because you can test the size of the contribution at each iteration and stop when it becomes close enough to zero. In practice, I do not know if this strategy is viable or not - I'd have to try it. There are important details I have long since forgotten about. Stuff like: machine precision, machine error and rounding error, etc.
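As a rough illustration of that "stop when the next term stops mattering" idea (a naive sketch; real libraries do range reduction and use tuned polynomials, and this converges poorly for large |x|):
// Naive Taylor-series exp(x): keep adding x^k / k! until a term no longer
// changes the running sum at double precision. Reasonable only for modest |x|.
static double expTaylor(double x) {
    double sum = 1.0;    // the k = 0 term
    double term = 1.0;
    for (int k = 1; ; k++) {
        term *= x / k;             // term is now x^k / k!
        double next = sum + term;
        if (next == sum) {
            return sum;            // the series has converged at this precision
        }
        sum = next;
    }
}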
Also, please note that if you are not using e^x, but you are doing growth/decay with another base like 2^x or 10^x, the approximating polynomial function changes.
The usual approach, to raise a to the b, for an integer exponent, goes something like this:
result = 1
while b > 0
    if b is odd
        result *= a
        b -= 1
    b /= 2
    a = a * a
It is generally logarithmic in the size of the exponent. The algorithm is based on the invariant "a^b*result = a0^b0", where a0 and b0 are the initial values of a and b.
For negative or non-integer exponents, logarithms and approximations and numerical analysis are needed. The running time will depend on the algorithm used and what precision the library is tuned for.
Edit: Since there seems to be some interest, here's a version without the extra multiplication.
result = 1
while b > 0
    while b is even
        a = a * a
        b = b / 2
    result = result * a
    b = b - 1
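For concreteness, here is the first version above in Java (non-negative integer exponents only; a sketch, not how Math.pow itself is specified to work):
// Square-and-multiply: O(log b) multiplications, preserving the invariant
// a^b * result == a0^b0 at the top of every iteration.
static double powInt(double a, int b) {
    double result = 1.0;
    while (b > 0) {
        if ((b & 1) == 1) {   // b is odd
            result *= a;
        }
        b >>= 1;              // b = (b - 1) / 2 for odd b, b / 2 for even b
        a *= a;
    }
    return result;
}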
You can use exp(n*ln(x)) for calculating x^n. Both x and n can be double-precision, floating point numbers. Natural logarithm and exponential function can be calculated using Taylor series. Here you can find formulas: http://en.wikipedia.org/wiki/Taylor_series
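In Java that identity is a one-liner, valid only for x > 0 and carrying the combined rounding error of both the log and the exp (typically a few ulps, which is why real pow implementations work harder):
double x = 2.5, n = 3.2;                        // example operands; x must be > 0
double approxPow = Math.exp(n * Math.log(x));   // x^n via e^(n ln x)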
If I were writing a pow function targeting Intel, I would return exp2(log2(x) * y). Intel's microcode for log2 is surely faster than anything I'd be able to code, even if I could remember my first year calculus and grad school numerical analysis.
e^x = (1 + fraction) * (2^exponent), 1 <= 1 + fraction < 2
x * log2(e) = log2(1 + fraction) + exponent, 0 <= log2(1 + fraction) < 1
exponent = floor(x * log2(e))
1 + fraction = 2^(x * log2(e) - exponent) = e^((x * log2(e) - exponent) * ln2) = e^(x - exponent * ln2), 0 <= x - exponent * ln2 < ln2

Next round on floating-point rounding

This question is related to this one and many others, but not a duplicate. I work with doubles, which are surely correct to, say, 6 decimal places, and using
String.format("%.6f", x)
always returns the correct value. However,
String.valueOf(x)
does not. I'd need to "round" x, so that it'd produce the same (or shorter in case of trailing zeros) result as formatting to 6 decimal places. I don't want the exact representation of the decimal number and I know it does not exist. Using
x = Double.parseDouble(String.format("%.6f", x))
gives me what I want, but I'm looking for some more straightforward method (faster and producing no garbage). The obvious way for rounding
x = 1e-6 * Math.round(1e6 * x)
does not work.
As an example, consider the following snippet
double wanted = 0.07;
double given = 0.07000000455421122;
double wrong = 1e-6 * Math.round(1e6 * given);
double slow = Double.parseDouble(String.format("%.6f", given));
double diff = wrong - wanted;
double ulp = Math.ulp(wrong);
computing these values
wanted 0.07
given 0.07000000455421122
wrong 0.06999999999999999
slow 0.07
diff -1.3877787807814457E-17
ulp 1.3877787807814457E-17
The problem seems to be that 1e-6 * 70000 produces the best possible result, but it's one ulp away from what I want.
Once again: I'm not asking how to round, I'm asking how to do it faster. So I'm afraid, BigDecimal is not the way to go.
The problem is that 1e-6 is not exactly 10^-6 (it is in fact 0.000000999999999999999954748111825886258685613938723690807819366455078125)
Instead of multiplying by this, you should divide by 1e6, which is exactly 10^6; then the result will be the closest floating point number to 0.07 (which is 0.07 or 0.070000000000000006661338147750939242541790008544921875).
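In code the suggested change is a one-liner, reusing given from the snippet above:
double fast = Math.round(1e6 * given) / 1e6;  // 1e6 is exact, so this rounds straight to the double nearest 0.07
System.out.println(fast);                     // 0.07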

When using doubles, why isn't (x / (y * z)) the same as (x / y / z)? [duplicate]

This is partly academic, as for my purposes I only need it rounded to two decimal places; but I am keen to know what is going on to produce two slightly different results.
This is the test that I wrote to narrow it to the simplest implementation:
@Test
public void shouldEqual() {
    double expected = 450.00d / (7d * 60); // 1.0714285714285714
    double actual = 450.00d / 7d / 60;     // 1.0714285714285716
    assertThat(actual).isEqualTo(expected);
}
But it fails with this output:
org.junit.ComparisonFailure:
Expected :1.0714285714285714
Actual :1.0714285714285716
Can anyone explain in detail what is going on under the hood to result in the value at 1.000000000000000X being different?
Some of the points I'm looking for in an answer are:
Where is the precision lost?
Which method is preferred, and why?
Which is actually correct? (In pure maths, both can't be right. Perhaps both are wrong?)
Is there a better solution or method for these arithmetic operations?
I see a bunch of questions that tell you how to work around this problem, but not one that really explains what's going on, other than "floating-point roundoff error is bad, m'kay?" So let me take a shot at it. Let me first point out that nothing in this answer is specific to Java. Roundoff error is a problem inherent to any fixed-precision representation of numbers, so you get the same issues in, say, C.
Roundoff error in a decimal data type
As a simplified example, imagine we have some sort of computer that natively uses an unsigned decimal data type, let's call it float6d. The length of the data type is 6 digits: 4 dedicated to the mantissa, and 2 dedicated to the exponent. For example, the number 3.142 can be expressed as
3.142 x 10^0
which would be stored in 6 digits as
503142
The first two digits are the exponent plus 50, and the last four are the mantissa. This data type can represent any number from 0.001 x 10^-50 to 9.999 x 10^+49.
Actually, that's not true. It can't store just any number. What if you want to represent 3.141592? Or 3.1412034? Or 3.141488906? Tough luck, the data type can't store more than four digits of precision, so the compiler has to round anything with more digits to fit into the constraints of the data type. If you write
float6d x = 3.141592;
float6d y = 3.1412034;
float6d z = 3.141488906;
then the compiler converts each of these three values to the same internal representation, 3.142 x 10^0 (which, remember, is stored as 503142), so that x == y == z will hold true.
The point is that there is a whole range of real numbers which all map to the same underlying sequence of digits (or bits, in a real computer). Specifically, any x satisfying 3.1415 <= x <= 3.1425 (assuming half-even rounding) gets converted to the representation 503142 for storage in memory.
This rounding happens every time your program stores a floating-point value in memory. The first time it happens is when you write a constant in your source code, as I did above with x, y, and z. It happens again whenever you do an arithmetic operation that increases the number of digits of precision beyond what the data type can represent. Either of these effects is called roundoff error. There are a few different ways this can happen:
Addition and subtraction: if one of the values you're adding has a different exponent from the other, you will wind up with extra digits of precision, and if there are enough of them, the least significant ones will need to be dropped. For example, 2.718 and 121.0 are both values that can be exactly represented in the float6d data type. But if you try to add them together:
1.210 x 10^2
+ 0.02718 x 10^2
-------------------
1.23718 x 10^2
which gets rounded off to 1.237 x 10^2, or 123.7, dropping two digits of precision.
Multiplication: the number of digits in the result is approximately the sum of the number of digits in the two operands. This will produce some amount of roundoff error, if your operands already have many significant digits. For example, 121 x 2.718 gives you
1.210 x 10^2
x 0.02718 x 10^2
-------------------
3.28878 x 10^2
which gets rounded off to 3.289 x 10^2, or 328.9, again dropping two digits of precision.
However, it's useful to keep in mind that, if your operands are "nice" numbers, without many significant digits, the floating-point format can probably represent the result exactly, so you don't have to deal with roundoff error. For example, 2.3 x 140 gives
1.40 x 10^2
x 0.23 x 10^1
-------------------
3.22 x 10^2
which has no roundoff problems.
Division: this is where things get messy. Division will pretty much always result in some amount of roundoff error unless the number you're dividing by happens to be a power of the base (in which case the division is just a digit shift, or bit shift in binary). As an example, take two very simple numbers, 3 and 7, divide them, and you get
3. x 10^0
/ 7. x 10^0
----------------------------
0.428571428571... x 10^0
The closest value to this number which can be represented as a float6d is 4.286 x 10^-1, or 0.4286, which distinctly differs from the exact result.
As we'll see in the next section, the error introduced by rounding grows with each operation you do. So if you're working with "nice" numbers, as in your example, it's generally best to do the division operations as late as possible because those are the operations most likely to introduce roundoff error into your program where none existed before.
Analysis of roundoff error
In general, if you can't assume your numbers are "nice", roundoff error can be either positive or negative, and it's very difficult to predict which direction it will go just based on the operation. It depends on the specific values involved. Look at this plot of the roundoff error for 2.718 * z as a function of z (still using the float6d data type):
In practice, when you're working with values that use the full precision of your data type, it's often easier to treat roundoff error as a random error. Looking at the plot, you might be able to guess that the magnitude of the error depends on the order of magnitude of the result of the operation. In this particular case, when z is of the order 10^-1, 2.718 * z is also on the order of 10^-1, so it will be a number of the form 0.XXXX. The maximum roundoff error is then half of the last digit of precision; in this case, by "the last digit of precision" I mean 0.0001, so the roundoff error varies between -0.00005 and +0.00005. At the point where 2.718 * z jumps up to the next order of magnitude, which is 1/2.718 = 0.3679, you can see that the roundoff error also jumps up by an order of magnitude.
You can use well-known techniques of error analysis to analyze how a random (or unpredictable) error of a certain magnitude affects your result. Specifically, for multiplication or division, the "average" relative error in your result can be approximated by adding the relative error in each of the operands in quadrature - that is, square them, add them, and take the square root. With our float6d data type, the relative error varies between 0.0005 (for a value like 0.101) and 0.00005 (for a value like 0.995).
Let's take 0.0001 as a rough average for the relative error in values x and y. The relative error in x * y or x / y is then given by
sqrt(0.0001^2 + 0.0001^2) = 0.0001414
which is a factor of sqrt(2) larger than the relative error in each of the individual values.
When it comes to combining operations, you can apply this formula multiple times, once for each floating-point operation. So for instance, for z / (x * y), the relative error in x * y is, on average, 0.0001414 (in this decimal example) and then the relative error in z / (x * y) is
sqrt(0.0001^2 + 0.0001414^2) = 0.0001732
Notice that the average relative error grows with each operation, specifically as the square root of the number of multiplications and divisions you do.
Similarly, for z / x * y, the average relative error in z / x is 0.0001414, and the relative error in z / x * y is
sqrt(0.0001414^2 + 0.0001^2) = 0.0001732
So, the same, in this case. This means that for arbitrary values, on average, the two expressions introduce approximately the same error. (In theory, that is. I've seen these operations behave very differently in practice, but that's another story.)
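Expressed as a throwaway helper (the 10^-4 figures are for the toy float6d type; for real doubles the per-operation relative error is on the order of 2^-53, but the combination rule is the same):
// Rough propagated relative error when two values are combined by * or /,
// treating each operand's relative error as independent and random.
static double combinedRelError(double relErr1, double relErr2) {
    return Math.sqrt(relErr1 * relErr1 + relErr2 * relErr2);
}
// combinedRelError(1e-4, combinedRelError(1e-4, 1e-4)) ≈ 1.73e-4, as in the text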
Gory details
You might be curious about the specific calculation you presented in the question, not just an average. For that analysis, let's switch to the real world of binary arithmetic. Floating-point numbers in most systems and languages are represented using IEEE standard 754. For 64-bit numbers, the format specifies 52 bits dedicated to the mantissa, 11 to the exponent, and one to the sign. In other words, when written in base 2, a floating point number is a value of the form
1.1100000000000000000000000000000000000000000000000000 x 2^00000000010
(the mantissa field is 52 bits, the exponent field is 11 bits)
The leading 1 is not explicitly stored, and constitutes a 53rd bit. Also, you should note that the 11 bits stored to represent the exponent are actually the real exponent plus 1023. For example, this particular value is 7, which is 1.75 x 2^2. The mantissa is 1.75 in binary, or 1.11, and the exponent is 1023 + 2 = 1025 in binary, or 10000000001, so the content stored in memory is
0 10000000001 1100000000000000000000000000000000000000000000000000
(sign) (exponent) (mantissa)
but that doesn't really matter.
Your example also involves 450,
1.1100001000000000000000000000000000000000000000000000 x 2^00000001000
and 60,
1.1110000000000000000000000000000000000000000000000000 x 2^00000000101
You can play around with these values using this converter or any of many others on the internet.
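If you would rather stay in Java than use a web converter, Double.doubleToLongBits exposes the same three fields:
// Print a double's raw IEEE 754 fields: 1 sign bit, 11 exponent bits, 52 mantissa bits.
static void dumpBits(double d) {
    String bits = String.format("%64s", Long.toBinaryString(Double.doubleToLongBits(d)))
                        .replace(' ', '0');
    System.out.println(bits.substring(0, 1) + " " + bits.substring(1, 12) + " " + bits.substring(12));
}
// dumpBits(450.0) prints: 0 10000000111 1100001000000000000000000000000000000000000000000000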
When you compute the first expression, 450/(7*60), the processor first does the multiplication, obtaining 420, or
1.1010010000000000000000000000000000000000000000000000 x 2^00000001000
Then it divides 450 by 420. This produces 15/14, which is
1.0001001001001001001001001001001001001001001001001001001001001001001001...
in binary. Now, the Java language specification says that
Inexact results must be rounded to the representable value nearest to the infinitely precise result; if the two nearest representable values are equally near, the one with its least significant bit zero is chosen. This is the IEEE 754 standard's default rounding mode known as round to nearest.
and the nearest representable value to 15/14 in 64-bit IEEE 754 format is
1.0001001001001001001001001001001001001001001001001001 x 2^00000000000
which is approximately 1.0714285714285714 in decimal. (More precisely, this is the least precise decimal value that uniquely specifies this particular binary representation.)
On the other hand, if you compute 450 / 7 first, the result is 64.2857142857..., or in binary,
1000000.01001001001001001001001001001001001001001001001001001001001001001...
for which the nearest representable value is
1.0000000100100100100100100100100100100100100100100101 x 2^00000000110
which is 64.28571428571429180465... Note the change in the last digit of the binary mantissa (compared to the exact value) due to roundoff error. Dividing this by 60 gives you
1.000100100100100100100100100100100100100100100100100110011001100110011...
Look at the end: the pattern is different! It's 0011 that repeats, instead of 001 as in the other case. The closest representable value is
1.0001001001001001001001001001001001001001001001001010 x 2^00000000000
which differs from the other order of operations in the last two bits: they're 10 instead of 01. The decimal equivalent is 1.0714285714285716.
The specific rounding that causes this difference should be clear if you look at the exact binary values:
1.0001001001001001001001001001001001001001001001001001001001001001001001...
1.0001001001001001001001001001001001001001001001001001100110011001100110...
^ last bit of mantissa
It works out in this case that the former result, numerically 15/14, happens to be the most accurate representation of the exact value. This is an example of how leaving division until the end benefits you. But again, this rule only holds as long as the values you're working with don't use the full precision of the data type. Once you start working with inexact (rounded) values, you no longer protect yourself from further roundoff errors by doing the multiplications first.
It has to do with how the double type is implemented and the fact that the floating-point types don't make the same precision guarantees as other simpler numerical types. Although the following answer is more specifically about sums, it also answers your question by explaining how there is no guarantee of infinite precision in floating-point mathematical operations: Why does changing the sum order returns a different result?. Essentially you should never attempt to determine the equality of floating-point values without specifying an acceptable margin of error. Google's Guava library includes DoubleMath.fuzzyEquals(double, double, double) to determine the equality of two double values within a certain precision. If you wish to read up on the specifics of floating-point equality this site is quite useful; the same site also explains floating-point rounding errors. In summation: the expected and actual values of your calculation differ because of the rounding differing between the calculations due to the order of operations.
Let's simplify things a bit. What you want to know is why 450d / 420 and 450d / 7 / 60 (specifically) give different results.
Let's see how division is performed in IEEE double-precision floating point format. Without going deep into implementation details, it's basically XOR-ing the sign bits, subtracting the exponent of the divisor from the exponent of the dividend, dividing the mantissas, and normalizing the result.
First, we should represent our numbers in the proper format for double:
450 is 0 10000000111 1100001000000000000000000000000000000000000000000000
420 is 0 10000000111 1010010000000000000000000000000000000000000000000000
7 is 0 10000000001 1100000000000000000000000000000000000000000000000000
60 is 0 10000000100 1110000000000000000000000000000000000000000000000000
Let's first divide 450 by 420
First comes the sign bit, it's 0 (0 xor 0 == 0).
Then comes the exponent. 10000000111b - 10000000111b + 1023 == 10000000111b - 10000000111b + 01111111111b == 01111111111b
Looking good, now the mantissa:
1.1100001000000000000000000000000000000000000000000000 / 1.1010010000000000000000000000000000000000000000000000 == 1.1100001 / 1.101001. There are a couple of different ways to do this, I'll talk a bit about them later. The result is 1.0(001) (you can verify it here).
Now we should normalize the result. Let's see the guard, round and sticky bit values:
0001001001001001001001001001001001001001001001001001 0 0 1
Guard bit's 0, we don't do any rounding. The result is, in binary:
0 01111111111 0001001001001001001001001001001001001001001001001001
Which gets represented as 1.0714285714285714 in decimal.
Now let's divide 450 by 7 by analogy.
Sign bit = 0
Exponent = 10000000111b - 10000000001b + 01111111111b == -01111111001b + 01111111111b + 01111111111b == 10000000101b
Mantissa = 1.1100001 / 1.11 == 1.00000(001)
Rounding:
0000000100100100100100100100100100100100100100100100 1 0 1
The guard bit is set, and so is the sticky bit (the quotient's bits keep repeating beyond those shown), so the exact result is more than halfway to the next representable value. We are rounding to-nearest (default mode for IEEE), so we add 1. This gives us the rounded mantissa:
0000000100100100100100100100100100100100100100100101
The result is
0 10000000101 0000000100100100100100100100100100100100100100100101
Which gets represented as 64.28571428571429 in decimal.
Now we will have to divide it by 60... But you already know that we have lost some precision. Dividing 450 by 420 didn't require rounding at all, but here, we already had to round the result at least once. But, for completeness's sake, let's finish the job:
Dividing 64.28571428571429 by 60
Sign bit = 0
Exponent = 10000000101b - 10000000100b + 01111111111b == 10000000000b
Mantissa = 1.0000000100100100100100100100100100100100100100100101 / 1.111 == 0.10001001001001001001001001001001001001001001001001001100110011
Round and shift:
0.1000100100100100100100100100100100100100100100100100 1 1 0 0
1.0001001001001001001001001001001001001001001001001001 1 0 1
Rounding just as in the previous case, we get the mantissa: 0001001001001001001001001001001001001001001001001010.
As we shifted the mantissa left by 1 to normalize it, we subtract 1 from the exponent, getting
Exponent = 01111111111b
So, the result is:
0 01111111111 0001001001001001001001001001001001001001001001001010
Which gets represented as 1.0714285714285716 in decimal.
Tl;dr:
The first division gave us:
0 01111111111 0001001001001001001001001001001001001001001001001001
And the last division gave us:
0 01111111111 0001001001001001001001001001001001001001001001001010
The difference is in the last 2 bits only, but we could have lost more - after all, to get the second result, we had to round two times instead of none!
Now, about mantissa division. Floating-point division is implemented in two major ways.
The traditional way is long division (here are some good examples; it's basically the regular long division, but with binary instead of decimal), and it's pretty slow. That is what your computer did.
There is also a faster but less accurate option, multiplication by the inverse. First, a reciprocal of the divisor is found, and then a multiplication is performed.
That's because double division often leads to a loss of precision. Said loss can vary depending on the order of the divisions.
When you divide by 7d, you have already lost some precision in the intermediate result. Then you divide that erroneous result by 60.
When you divide by 7d * 60, you only have to use division once, thus losing precision only once.
Note that double multiplication can sometimes fail too, but that's much less common.
Certainly it's the order of the operations, combined with the fact that doubles aren't precise:
450.00d / (7d * 60) --> a = 7d * 60 --> result = 450.00d / a
vs
450.00d / 7d / 60 --> a = 450.00d /7d --> result = a / 60

Whats wrong with this simple 'double' calculation? [duplicate]

What's wrong with this simple 'double' calculation in Java?
I know some decimal numbers cannot be represented exactly in float / double binary formats, but with the variable d3, Java is able to store and display 2.64 with no problems.
double d1 = 4.64;
double d2 = 2.0;
double d3 = 2.64;
double d4 = d1 - d2;
System.out.println("d1 : " + d1);
System.out.println("d2 : " + d2);
System.out.println("d3 : " + d3);
System.out.println("d4 : " + d4);
System.out.println("d1 - d2 : " + (d1 - d2));
Output:
d1 : 4.64
d2 : 2.0
d3 : 2.64
d4 : 2.6399999999999997
d1 - d2 : 2.6399999999999997
The problem
In binary 2.64 is 10.10100011110101110000101000111101 recurring, in other words not exactly representable in binary, hence the small error. Java is being kind to you with d3 but as soon as actual calculations are involved it has to fall back on the real representation.
Binary Calculator
Furthermore:
2.64 = 10.10100011110101110000101000111101
4.64 = 100.1010001111010111000010100011110
Now, even though the .64 is the same in both cases, it is held to different precisions, because 4 = 100 uses up more of the double's significant figures than 2 = 10. So in 4.64 - 2.0 versus the literal 2.64, the .64 part is represented with a different rounding error in each case, and this lost information cannot be recovered for the final answer.
N.B. I'm not using the double's actual number of significant figures here, just whatever the binary calculator would produce; however, the effect is the same whatever the number of significant figures.
Never assume double values are exact (although their inaccuracies are microscopic and caused only because certain numbers can't be exactly expressed in binary).
Floating point numbers aren't exact, but only from a decimal point of view
While you should always expect that doubles will have small errors in the last few decimal places, it would be wrong to think of binary representations as "bad" or worse than decimal.
We are all used to certain numbers (like 1/3, for example) not being exactly representable in decimal, and we accept that such a number will end up as 0.333333333333 rather than the true value (which I cannot write down without infinite space); it is within that same context that some numbers cannot be exactly expressed in binary. 1/10 is such a number that cannot be exactly expressed in binary; this surprises us only because we are used to decimal.
d1 - d2 returns the exact result of the binary floating-point subtraction, and it is 2.6399999999999997, so that is what gets printed. If you want to round it, you can do so during printing
System.out.printf("d1 - d2 : %.2f", d4);
or with Commons-Math
d4 = Precision.round(d4, 2);
Mainly because of the fact that double is a double-precision 64-bit IEEE 754 floating point. It's not meant for keeping exact decimal values. That's why doubles are not recommended for exact calculations. Use the String constructor of BigDecimal instead, like:
new BigDecimal("2.64")
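Applied to the snippet in the question, that would be (exact, at the cost of speed and extra objects):
BigDecimal d1 = new BigDecimal("4.64");   // java.math.BigDecimal
BigDecimal d2 = new BigDecimal("2.0");
System.out.println(d1.subtract(d2));      // prints 2.64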
It's because the errors in the internal representations of 4.64 and 2.0 combine constructively (meaning they make a larger error).
Technically speaking, 2.64 isn't stored exactly, either. However, there is a particular representation that corresponds to 2.64. Think about the fact that 4.64 and 2.0 aren't stored exactly, either, though. The errors in 4.64 and 2.0 are combining to produce an even larger error, one large enough that their subtraction does not give the representation of 2.64.
The answer is off by 3*10^-16. To give something of an example of how that can happen, let's pretend the representation for 4.64 is 2*10^-16 too small and the representation for 2.0 is 1*10^-16 too large. Then you would get
(4.64 - 2*10^-16) - (2.0 + 1*10^-16) = 2.64 - 3*10^-16
So when the calculation is done, the two errors have combined to create an even bigger error. But if the representation for 2.64 is only off by 1*10^-16, then this would not be considered equal to 2.64 by the computer.
It's also possible that 4.64 just has a larger error than 2.64 even if 2.0 has no error. If 4.64's representation is 3*10^-16 too small, you get the same thing:
(4.64 - 3*10^-16) - 2.0 = 2.64 - 3*10^-16
Again, if the representation of 2.64 is only off by 1*10^-16, then this result would not be considered equal to 2.64.
I don't know the exact errors in the real representations, but something similar to that is happening, just with different values. Hope that makes sense. Feel free to ask for clarification.
Nothing wrong with it. But try using BigDecimal
http://docs.oracle.com/javase/6/docs/api/java/math/BigDecimal.html
Note: double and float are internally represented as binary fractions according to the IEEE standard 754 and can therefore not represent decimal fractions exactly

How to actually avoid floating point errors when you need to use float?

I am trying to affect the translation of a 3D model using some UI buttons to shift the position by 0.1 or -0.1.
My model position is a three dimensional float so simply adding 0.1f to one of the values causes obvious rounding errors. While I can use something like BigDecimal to retain precision, I still have to convert it from a float and back to a float at the end and it always results in silly numbers that are making my UI look like a mess.
I could just pretty the displayed values but the rounding errors will only get worse with more editing and they make my save files rather hard to read.
So how do I actually avoid these errors when I need to use a float?
The Kahan summation and pairwise summation algorithms help to reduce floating point errors. Here's some Java code for the Kahan algorithm.
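For reference, a standard compensated-summation sketch (not necessarily the exact code behind the link):
// Kahan (compensated) summation: carry a correction term holding the
// low-order bits that each individual addition would otherwise lose.
static double kahanSum(double[] values) {
    double sum = 0.0;
    double c = 0.0;                 // running compensation
    for (double v : values) {
        double y = v - c;
        double t = sum + y;
        c = (t - sum) - y;          // recovers what was lost when y was added to sum
        sum = t;
    }
    return sum;
}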
I would use a Rational class. There are many out there - this one looks like it should work.
One significant cost will be when the Rational is rendered into a float and one when the denominator is reduced to the gcd. The one I posted keeps the numerator and denominator in fully reduced state at all times which should be quite efficient if you are always adding or subtracting 1/10.
This implementation holds the values normalised (i.e. with consistent sign) but unreduced.
You should choose your implementation to best fit your usage.
A simple solution is to use fixed precision, i.e. an integer 10x or 100x what you want.
float f = 10;
f += 0.1f;
becomes
int i = 100;
i += 1; // use as many times as you like
// use i / 10.0 as required.
I wouldn't use float in any case, as you get more rounding error than with double for next to no benefit (unless you have millions of float values). double gives you about 8 more decimal digits of precision, and with sensible rounding you won't see those errors.
If you stick with floats:
The easiest way to avoid the error is to use floats which are exactly representable but near the desired value, namely
round(2^n * value) * 1/2^n
where n is the number of bits and value is the number to use (in your case 0.1).
In your case, with increasing precision:
n = 4 => 0.125
n = 8 (byte) => 0.1015625
n = 16 (short)=> 0.100006103516....
The long number chains are artefacts of the binary conversion; the real number has far fewer bits.
As these floats are exact, addition and subtraction will not introduce offset errors, and the results stay predictable as long as you don't exceed the number of bits the float can hold.
If you fear that your display will be compromised by this solution (because the values are odd-looking floats), use and store only integers (step increments of -1/+1). The value which is actually set internally is then
x = value * step.
As the step count only ever increases or decreases by 1, precision will be retained.
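A quick sketch of that snapping step (shown with double, but the idea is identical for float; the names are mine):
// Snap a step size to the nearest k / 2^n, which is exactly representable in
// binary floating point, so repeated additions of it behave predictably.
static double snapToDyadic(double value, int n) {
    double scale = Math.pow(2, n);              // exact for reasonable n
    return Math.round(value * scale) / scale;   // e.g. n = 4, value = 0.1 -> 0.125
}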
