Java's documentation for the Math.IEEEremainder function states:
The remainder value is mathematically equal to f1 - f2 × n, where n is
the mathematical integer closest to the exact mathematical value of
the quotient f1/f2, and if two mathematical integers are equally close
to f1/f2, then n is the integer that is even
For the following:
double f1 = 0.1;
double f2 = 0.04;
System.out.println(Math.IEEEremainder(f1, f2));
The output is -0.019999999999999997
However, 0.1/0.04 = 2.5 which is equidistant from both the integers 2 and 3. Shouldn't we pick n = 2 here, resulting in 0.1 - 0.04*2 = 0.02, instead of -0.02 ?
See: Is floating point math broken?
You would think that 0.1 / 0.04 would return exactly 2.5, but that's not true. According to this article, 0.1 cannot be accurately represented using IEEE 754, and is actually represented as 0.100000000000000005551....
In this case, the quotient is slightly higher due to that minuscule offset, which results in a value of 3 for n, as it's no longer equidistant between 2 and 3.
Computing it results in the following:
0.1 - 0.04 * 3 = 0.1 - 0.12 = -0.02 ~= -0.019999999999999997
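The offset is easy to see by printing the exact decimal value each double actually holds (the class name here is just for illustration):

```java
import java.math.BigDecimal;

public class RemainderDemo {
    public static void main(String[] args) {
        // Exact values of the doubles nearest to 0.1 and 0.04
        System.out.println(new BigDecimal(0.1));  // 0.1000000000000000055511151231257827...
        System.out.println(new BigDecimal(0.04)); // 0.0400000000000000008326672684688674...

        // The exact quotient of these two values is slightly above 2.5, so n = 3
        System.out.println(Math.IEEEremainder(0.1, 0.04)); // -0.019999999999999997
    }
}
```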
I know about roundoff error in programming languages!
System.out.println(0.1 + 0.1 + 0.1);
this code outputs 0.30000000000000004 because 0.3 needs an infinite number of digits to be represented in binary — its binary expansion is non-terminating.
but what about this one?
System.out.println(0.1 + 0.1);
why is the output 0.2? 0.2 also has a non-terminating binary representation! so the output should be something like 0.200000002 or 0.1999999999!
what's the difference between them?
When you convert a floating point number to a String in Java, it's done in a way that uses the least number of digits necessary to distinguish the number from the adjacent numbers.
This means that 0.2 is displayed as "0.2", since no more digits are needed. The exact value of the double nearest 0.2 is of course a bit greater than 0.2:
jshell> new BigDecimal(0.2)
$1 ==> 0.200000000000000011102230246251565404236316680908203125
Another interpretation of your question is "why is 0.1 + 0.1 equal to 0.2?"
It's because the error in computing 0.1+0.1 is not large enough to make it become distinct from 0.2. It's of course not exactly the same value as 0.2, but out of all floating point numbers 0.2 is the closest.
jshell> new BigDecimal(0.1).add(new BigDecimal(0.1))
$2 ==> 0.2000000000000000111022302462515654042363166809082031250
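Directly in Java, the effect is that one sum compares equal to its literal while the other does not:

```java
public class SumCompare {
    public static void main(String[] args) {
        // 0.1 + 0.1 rounds to exactly the double nearest 0.2 ...
        System.out.println(0.1 + 0.1 == 0.2);       // true
        // ... but 0.1 + 0.1 + 0.1 does not round to the double nearest 0.3
        System.out.println(0.1 + 0.1 + 0.1 == 0.3); // false
    }
}
```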
When a double value is printed, Java uses the return value of Double.toString(double), which says:
There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double.
So, let's print the values 0.1 to 1.0, and their adjacent values. We use Math.nextDown(double) and Math.nextUp(double) to find the adjacent values, and new BigDecimal(double) with toPlainString() to see more digits of each value.
We also calculate the value from summing multiple 0.1 values and mark that value with a <, =, or >, as appropriate.
System.out.printf("%9s%-13s%-12s%s%n", "", "double", "sum", "BigDecimal");
double sum = 0.1;
for (int i = 1; i <= 10; i++, sum += 0.1) {
    double val = i / 10d;
    double down = Math.nextDown(val);
    double up = Math.nextUp(val);
    System.out.printf("%d:%n %-21s%-3s%s%n %-21s%-3s%s%n %-21s%-3s%s%n",
            i,
            down, (sum == down ? "<" : " "), new BigDecimal(down).toPlainString(),
            val, (sum == val ? "=" : " "), new BigDecimal(val).toPlainString(),
            up, (sum == up ? ">" : " "), new BigDecimal(up).toPlainString());
}
Output
double sum BigDecimal
1:
0.09999999999999999 0.09999999999999999167332731531132594682276248931884765625
0.1 = 0.1000000000000000055511151231257827021181583404541015625
0.10000000000000002 0.10000000000000001942890293094023945741355419158935546875
2:
0.19999999999999998 0.1999999999999999833466546306226518936455249786376953125
0.2 = 0.200000000000000011102230246251565404236316680908203125
0.20000000000000004 0.2000000000000000388578058618804789148271083831787109375
3:
0.29999999999999993 0.29999999999999993338661852249060757458209991455078125
0.3 0.299999999999999988897769753748434595763683319091796875
0.30000000000000004 > 0.3000000000000000444089209850062616169452667236328125
4:
0.39999999999999997 0.399999999999999966693309261245303787291049957275390625
0.4 = 0.40000000000000002220446049250313080847263336181640625
0.4000000000000001 0.400000000000000077715611723760957829654216766357421875
5:
0.49999999999999994 0.499999999999999944488848768742172978818416595458984375
0.5 = 0.5
0.5000000000000001 0.50000000000000011102230246251565404236316680908203125
6:
0.5999999999999999 0.5999999999999998667732370449812151491641998291015625
0.6 = 0.59999999999999997779553950749686919152736663818359375
0.6000000000000001 0.600000000000000088817841970012523233890533447265625
7:
0.6999999999999998 0.69999999999999984456877655247808434069156646728515625
0.7 = 0.6999999999999999555910790149937383830547332763671875
0.7000000000000001 0.70000000000000006661338147750939242541790008544921875
8:
0.7999999999999999 < 0.79999999999999993338661852249060757458209991455078125
0.8 0.8000000000000000444089209850062616169452667236328125
0.8000000000000002 0.80000000000000015543122344752191565930843353271484375
9:
0.8999999999999999 < 0.899999999999999911182158029987476766109466552734375
0.9 0.90000000000000002220446049250313080847263336181640625
0.9000000000000001 0.9000000000000001332267629550187848508358001708984375
10:
0.9999999999999999 < 0.99999999999999988897769753748434595763683319091796875
1.0 1
1.0000000000000002 1.0000000000000002220446049250313080847263336181640625
As you can see, because of cumulative rounding issues, the summed value is not always exactly the value closest to what you'd expect, so it has to print extra digits for that "unique" value.
The summed value is wrong for 0.3, 0.8, 0.9, and 1.0.
I want to check if a double has Double.MAX_VALUE.
Is this the right way (version 1):
boolean hasMaxVal(double val) {
    return val == Double.MAX_VALUE;
}
or do I need to do something like this (version 2):
boolean hasMaxVal(double val) {
    return Math.abs(val - Double.MAX_VALUE) < 0.00001;
}
Java's double type is a double-precision IEEE 754 floating-point number. This means there are 53 bits of precision in the mantissa, and hence the precision of the number is limited to about 16 significant figures in a decimal format.
Double.MAX_VALUE is approximately 1.798×10^308, so the 16th significant figure has a magnitude on the order of 10^(308-16) = 10^292. We can confirm this using the Math.ulp method, which returns a double value's "unit of least precision":
> Double.MAX_VALUE
1.7976931348623157E308
> Math.ulp(Double.MAX_VALUE)
1.9958403095347198E292
This means if you do want to test for a value "close to" Double.MAX_VALUE, it only makes sense to do so within an epsilon of at least 2E292. Your epsilon of 0.00001 is far too small for there to be any values within that range other than Double.MAX_VALUE itself, so your test is equivalent to val == Double.MAX_VALUE.
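A quick sketch of what this means in practice (the offsets 1e291 and 1e293 are just illustrative magnitudes chosen to fall on either side of the ulp):

```java
public class MaxValueUlp {
    public static void main(String[] args) {
        // An offset well below ulp(Double.MAX_VALUE) ~ 2e292 rounds away entirely
        System.out.println(Double.MAX_VALUE - 1e291 == Double.MAX_VALUE); // true
        // An offset above the ulp actually changes the value
        System.out.println(Double.MAX_VALUE - 1e293 == Double.MAX_VALUE); // false
    }
}
```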
I am working on a code where I am comparing Double and float values:
class Demo {
    public static void main(String[] args) {
        System.out.println(2.0 - 1.1);           // 0.8999999999999999
        System.out.println(2.0 - 1.1 == 0.9);    // false
        System.out.println(2.0F - 1.1F);         // 0.9
        System.out.println(2.0F - 1.1F == 0.9F); // true
        System.out.println(2.0F - 1.1F == 0.9);  // false
    }
}
Output is given below:
0.8999999999999999
false
0.9
true
false
I believe a double can hold more precision than a float.
Please explain this: it looks like the float value does not lose precision here, but the double value does?
Edit:
@goodvibration I'm aware that 0.9 cannot be stored exactly in any computer language; I'm just confused about how Java handles this in detail: why does 2.0F - 1.1F == 0.9F, but 2.0 - 1.1 != 0.9? Another interesting find may help:
class Demo {
    public static void main(String[] args) {
        System.out.println(2.0 - 0.9);           // 1.1
        System.out.println(2.0 - 0.9 == 1.1);    // true
        System.out.println(2.0F - 0.9F);         // 1.1
        System.out.println(2.0F - 0.9F == 1.1F); // true
        System.out.println(2.0F - 0.9F == 1.1);  // false
    }
}
I know I can't count on float or double precision, but I just can't figure this out, and it's driving me crazy. What's the real deal behind this? Why does 2.0 - 0.9 == 1.1 hold, but 2.0 - 1.1 != 0.9?
The difference between float and double:
IEEE 754 single-precision binary floating-point format
IEEE 754 double-precision binary floating-point format
Let's run your numbers in a simple C program, in order to get their binary representations:
#include <stdio.h>

typedef union {
    float val;
    struct {
        unsigned int fraction : 23;
        unsigned int exponent : 8;
        unsigned int sign : 1;
    } bits;
} F;

typedef union {
    double val;
    struct {
        unsigned long long fraction : 52;
        unsigned long long exponent : 11;
        unsigned long long sign : 1;
    } bits;
} D;

int main() {
    F f = {(float)(2.0 - 1.1)};
    D d = {(double)(2.0 - 1.1)};
    printf("%d %d %d\n", f.bits.sign, f.bits.exponent, f.bits.fraction);
    printf("%lld %lld %lld\n", d.bits.sign, d.bits.exponent, d.bits.fraction);
    return 0;
}
The printout of this code is:
0 126 6710886
0 1022 3602879701896396
Based on the two format specifications above, let's convert these numbers to rational values.
In order to achieve high accuracy, let's do this in a simple Python program:
from decimal import Decimal
from decimal import getcontext

getcontext().prec = 100

TWO = Decimal(2)

def convert(sign, exponent, fraction, e_len, f_len):
    return (-1) ** sign * TWO ** (exponent - 2 ** (e_len - 1) + 1) * (1 + fraction / TWO ** f_len)

def toFloat(sign, exponent, fraction):
    return convert(sign, exponent, fraction, 8, 23)

def toDouble(sign, exponent, fraction):
    return convert(sign, exponent, fraction, 11, 52)

f = toFloat(0, 126, 6710886)
d = toDouble(0, 1022, 3602879701896396)

print('{:.40f}'.format(f))
print('{:.40f}'.format(d))
The printout of this code is:
0.8999999761581420898437500000000000000000
0.8999999999999999111821580299874767661094
If we print these two values while specifying between 8 and 15 decimal digits, then we shall experience the same thing that you have observed (the double value printed as 0.9, while the float value printed as close to 0.9):
In other words, this code:
for n in range(8, 15 + 1):
    string = '{:.' + str(n) + 'f}'
    print(string.format(f))
    print(string.format(d))
Gives this printout:
0.89999998
0.90000000
0.899999976
0.900000000
0.8999999762
0.9000000000
0.89999997616
0.90000000000
0.899999976158
0.900000000000
0.8999999761581
0.9000000000000
0.89999997615814
0.90000000000000
0.899999976158142
0.900000000000000
Our conclusion is therefore that Java prints the shortest decimal string that still uniquely identifies the value: among floats, the result of 2.0F - 1.1F is the float closest to 0.9, so "0.9" suffices, while among doubles it is not the double closest to 0.9, so more digits are required.
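The observed behaviour can be reproduced from Java itself via Float.toString and Double.toString:

```java
public class ShortestRepr {
    public static void main(String[] args) {
        // Among floats, 0.9 is the closest value to 2.0f - 1.1f,
        // so the shortest uniquely-identifying representation is "0.9"
        System.out.println(Float.toString(2.0f - 1.1f)); // 0.9
        // Among doubles, 0.9 is NOT the closest value to 2.0 - 1.1,
        // so more digits are needed to distinguish it
        System.out.println(Double.toString(2.0 - 1.1));  // 0.8999999999999999
    }
}
```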
Nice question BTW...
Pop quiz: Represent 1/3rd, in decimal.
Answer: You can't; not precisely.
Computers count in binary, and many more numbers 'cannot be represented completely' there. Just as in the decimal question, where with only a small piece of paper you might simply write 0.3333333 and call it a day — a number quite close to, but not entirely the same as, 1/3 — so computers represent fractions.
Or, think about it this way: a float occupies 32 bits; a double occupies 64. There are only 2^32 (about 4 billion) different values a 32-bit pattern can represent, and yet even between 0 and 1 there are infinitely many numbers. So, given that there are at most 2^32 specific, concrete numbers that are representable precisely as a float, any number that isn't in that blessed set of about 4 billion values is not representable. Instead of erroring out, you simply get the member of that pool of 4 billion values that IS representable and is closest to the number you wanted.
In addition, because computers count in binary and not decimal, your sense of what is 'representable' and what isn't, is off. You may think that 1/3 is a big problem, but surely 1/10 is easy, right? That's simply 0.1 and that is a precise representation. Ah, but, a tenth works well in decimal. After all, decimal is based around the number 10, no surprise there. But in binary? a half, a fourth, an eighth, a sixteenth: Easy in binary. A tenth? That is as difficult as a third: NOT REPRESENTABLE.
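This is easy to check: a quarter (a power of two) converts exactly, while a tenth does not. A small sketch:

```java
import java.math.BigDecimal;

public class BinaryFractions {
    public static void main(String[] args) {
        // 1/4 is a power of two: exactly representable as a double
        System.out.println(new BigDecimal(0.25)); // 0.25
        // 1/10 is not: the nearest double is slightly off
        System.out.println(new BigDecimal(0.1));  // 0.1000000000000000055511151231257827...
    }
}
```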
0.9 is, itself, not a representable number. And yet, when you printed your float, that's what you got.
The reason is, printing floats/doubles is an art, more than a science. Given that only a few numbers are representable, and given that these numbers don't feel 'natural' to humans due to the binary v. decimal thing, you really need to add a 'rounding' strategy to the number or it'll look crazy (nobody wants to read 0.899999999999999999765). And that is precisely what System.out.println and co do.
But you really should take control of the rounding yourself: never use System.out.println to print doubles and floats; use something like System.out.printf("%.6f", yourDouble) instead, and in this case BOTH would print 0.900000. Neither type can represent 0.9 precisely, but the float result (take the float closest to 2.0, which is 2.0; take the float closest to 1.1, which is not exactly 1.1; subtract them and round to the nearest float) happens to print as 0.9 by default, while the double result does not.
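For instance (using Locale.ROOT here so the decimal separator is a dot regardless of the default locale):

```java
import java.util.Locale;

public class PrintfRounding {
    public static void main(String[] args) {
        // Both the float and the double result round to 0.900000 at six decimals
        System.out.printf(Locale.ROOT, "%.6f%n", 2.0f - 1.1f); // 0.900000
        System.out.printf(Locale.ROOT, "%.6f%n", 2.0 - 1.1);   // 0.900000
    }
}
```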
I'm using BigDecimal for my numbers in my application, for example with JPA. I did a bit of research about the terms 'precision' and 'scale', but I don't understand what exactly they are.
Can anyone explain the meaning of 'precision' and 'scale' for a BigDecimal value?
@Column(precision = 11, scale = 2)
Thanks!
A BigDecimal is defined by two values: an arbitrary-precision integer (the unscaled value) and a 32-bit integer scale. The value of the BigDecimal is defined to be unscaledValue × 10^(-scale).
Precision:
The precision is the number of digits in the unscaled value.
For instance, for the number 123.45, the precision returned is 5.
So, precision indicates the length of the arbitrary precision integer. Here are a few examples of numbers with the same scale, but different precision:
12345 / 100000 = 0.12345 // scale = 5, precision = 5
123 / 100000 = 0.00123 // scale = 5, precision = 3
1 / 100000 = 0.00001 // scale = 5, precision = 1
In the special case that the number is equal to zero (i.e. 0.000), the precision is always 1.
Scale:
If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. For example, a scale of -3 means the unscaled value is multiplied by 1000.
This means that the integer value of the BigDecimal is multiplied by 10^(-scale).
Here are a few examples of the same precision, with different scales:
12345 with scale 5 = 0.12345
12345 with scale 4 = 1.2345
…
12345 with scale 0 = 12345
12345 with scale -1 = 123450 (toString prints this as 1.2345E+5, see below)
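These examples can be checked against BigDecimal's accessors (the class name is illustrative):

```java
import java.math.BigDecimal;

public class PrecisionScale {
    public static void main(String[] args) {
        BigDecimal bd = new BigDecimal("123.45");
        System.out.println(bd.unscaledValue()); // 12345
        System.out.println(bd.scale());         // 2
        System.out.println(bd.precision());     // 5

        // Negative scale: 12345 * 10^1 = 123450
        System.out.println(BigDecimal.valueOf(12345, -1)); // 1.2345E+5
    }
}
```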
BigDecimal.toString:
The toString method for a BigDecimal behaves differently based on the scale and precision. (Thanks to @RudyVelthuis for pointing this out.)
If scale == 0, the integer is just printed out, as-is.
If scale < 0, E-Notation is always used (e.g. 5 scale -1 produces "5E+1")
If scale >= 0 and the adjusted exponent (precision - scale - 1) is >= -6, a plain decimal number is produced (e.g. 10000000 scale 1 produces "1000000.0")
Otherwise, E-notation is used: e.g. 10 scale 8 produces "1.0E-7", since precision - scale - 1 equals -7, which is less than -6.
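The three cases can be reproduced with BigDecimal.valueOf(unscaledValue, scale):

```java
import java.math.BigDecimal;

public class ToStringCases {
    public static void main(String[] args) {
        System.out.println(BigDecimal.valueOf(5, -1));       // 5E+1   (scale < 0)
        System.out.println(BigDecimal.valueOf(10000000, 1)); // 1000000.0
        System.out.println(BigDecimal.valueOf(10, 8));       // 1.0E-7 (adjusted exponent < -6)
    }
}
```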
More examples:
19/100 = 0.19 // integer=19, scale=2, precision=2
1/10000 = 0.0001 // integer=1, scale = 4, precision = 1
Precision: Total number of significant digits
Scale: Number of digits to the right of the decimal point
See BigDecimal class documentation for details.
Quoting Javadoc:
The precision is the number of digits in the unscaled value.
and
If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. For example, a scale of -3 means the unscaled value is multiplied by 1000.
Precision is the total number of significant digits in a number.
Scale is the number of digits to the right of the decimal point.
Examples:
123.456 Precision=6 Scale=3
10 Precision=2 Scale=0
-96.9923 Precision=6 Scale=4
0.0 Precision=1 Scale=1
Negative Scale
For a negative scale value, we apply the following formula:
result = (given number) * 10 ^ (-(scale value))
Example
Given number = 1234.56
scale = -5
-> (1234.56) * 10^(-(-5))
-> (1234.56) * 10^(+5)
-> 123456000
Reference: https://www.logicbig.com/quick-info/programming/precision-and-scale.html
From your example annotation, the maximum number of digits is 2 after the decimal point and 9 before it (11 in total):
123456789,01
I have to calculate some floating-point values, and my colleague suggested I use BigDecimal instead of double since it will be more precise. But I want to know what it is and how to make the most of BigDecimal.
A BigDecimal is an exact way of representing decimal numbers. A double has fixed precision. Working with doubles of very different magnitudes (say d1 = 1.0e17 and d2 = 0.001) can result in the 0.001 being dropped altogether when summing, because the difference in magnitude is so large. With BigDecimal this would not happen.
The disadvantage of BigDecimal is that it's slower, and it's a bit more difficult to program algorithms that way (due to + - * and / not being overloaded).
If you are dealing with money, or precision is a must, use BigDecimal. Otherwise Doubles tend to be good enough.
I do recommend reading the javadoc of BigDecimal as they do explain things better than I do here :)
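A sketch of the magnitude problem (the 1.0e17 / 0.001 pair is just an illustrative choice where the small addend falls below half an ulp of the large one):

```java
import java.math.BigDecimal;

public class MagnitudeLoss {
    public static void main(String[] args) {
        // With doubles, the small addend is lost entirely:
        // ulp(1.0e17) is 16, so adding 0.001 rounds straight back
        System.out.println(1.0e17 + 0.001 == 1.0e17); // true

        // With BigDecimal, nothing is lost
        BigDecimal sum = new BigDecimal("1.0E17").add(new BigDecimal("0.001"));
        System.out.println(sum.toPlainString()); // 100000000000000000.001
    }
}
```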
My English is not good so I'll just write a simple example here.
double a = 0.02;
double b = 0.03;
double c = b - a;
System.out.println(c);
BigDecimal _a = new BigDecimal("0.02");
BigDecimal _b = new BigDecimal("0.03");
BigDecimal _c = _b.subtract(_a);
System.out.println(_c);
Program output:
0.009999999999999998
0.01
Does anyone still want to use double? ;)
There are two main differences from double:
Arbitrary precision: similarly to BigInteger, it can hold numbers of arbitrary precision and size (whereas a double has a fixed number of bits)
Base 10 instead of Base 2, a BigDecimal is n*10^-scale where n is an arbitrary large signed integer and scale can be thought of as the number of digits to move the decimal point left or right
It is still not true to say that BigDecimal can represent any number. But two reasons you should use BigDecimal for monetary calculations are:
It can represent all numbers that can be represented in decimal notation, which includes virtually all numbers in the monetary world (you never transfer $1/3 to someone).
The precision can be controlled to avoid accumulated errors. With a double, as the magnitude of the value increases, its precision decreases and this can introduce significant error into the result.
If you write down a fractional value like 1 / 7 as decimal value you get
1/7 = 0.142857142857142857142857142857142857142857...
with an infinite repetition of the digits 142857. Since you can only write a finite number of digits you will inevitably introduce a rounding (or truncation) error.
Numbers like 1/10 or 1/100 expressed as binary numbers with a fractional part also have an infinite number of digits after the decimal point:
1/10 = binary 0.0001100110011001100110011001100110...
Doubles store values as binary and therefore might introduce an error solely by converting a decimal number to a binary number, without even doing any arithmetic.
Decimal numbers (like BigDecimal), on the other hand, store each decimal digit as is (binary coded, but each decimal on its own). This means that a decimal type is not more precise than a binary floating point or fixed point type in a general sense (i.e. it cannot store 1/7 without loss of precision), but it is more accurate for numbers that have a finite number of decimal digits as is often the case for money calculations.
Java's BigDecimal has the additional advantage that it can have an arbitrary (but finite) number of digits on both sides of the decimal point, limited only by the available memory.
If you are dealing with financial calculations, there are often legal rules on how you must calculate and what precision you must use; failing to follow them can mean doing something illegal.
The only real reason is that the binary representation of many decimal values is not exact. As Basil put it simply, an example is the best explanation. Just to complement his example, here's what happens:
static void theDoubleProblem1() {
    double d1 = 0.3;
    double d2 = 0.2;
    System.out.println("Double:\t 0,3 - 0,2 = " + (d1 - d2));

    float f1 = 0.3f;
    float f2 = 0.2f;
    System.out.println("Float:\t 0,3 - 0,2 = " + (f1 - f2));

    BigDecimal bd1 = new BigDecimal("0.3");
    BigDecimal bd2 = new BigDecimal("0.2");
    System.out.println("BigDec:\t 0,3 - 0,2 = " + (bd1.subtract(bd2)));
}
Output:
Double: 0,3 - 0,2 = 0.09999999999999998
Float: 0,3 - 0,2 = 0.10000001
BigDec: 0,3 - 0,2 = 0.1
Also we have that:
static void theDoubleProblem2() {
    double d1 = 10;
    double d2 = 3;
    System.out.println("Double:\t 10 / 3 = " + (d1 / d2));

    float f1 = 10f;
    float f2 = 3f;
    System.out.println("Float:\t 10 / 3 = " + (f1 / f2));

    // Exception!
    BigDecimal bd3 = new BigDecimal("10");
    BigDecimal bd4 = new BigDecimal("3");
    System.out.println("BigDec:\t 10 / 3 = " + (bd3.divide(bd4)));
}
Gives us the output:
Double: 10 / 3 = 3.3333333333333335
Float: 10 / 3 = 3.3333333
Exception in thread "main" java.lang.ArithmeticException: Non-terminating decimal expansion; no exact representable decimal result.
But:
static void theDoubleProblem3() {
    BigDecimal bd3 = new BigDecimal("10");
    BigDecimal bd4 = new BigDecimal("3");
    System.out.println("BigDec:\t 10 / 3 = " + bd3.divide(bd4, 4, RoundingMode.HALF_UP));
}
Has the output:
BigDec: 10 / 3 = 3.3333
BigDecimal is the Java platform's arbitrary-precision decimal class. It is part of the standard java.math package and is useful for a variety of applications ranging from the financial to the scientific (which is where I sort of am).
There's nothing wrong with using doubles for certain calculations. Suppose, however, you wanted to calculate Math.PI * Math.PI / 6, that is, the value of the Riemann zeta function for a real argument of two (a project I'm currently working on). Floating-point division presents you with a painful problem of rounding error.
BigDecimal, on the other hand, includes many options for calculating expressions to arbitrary precision. The add, multiply, and divide methods as described in the Oracle documentation below "take the place" of +, *, and / in BigDecimal Java World:
http://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html
The compareTo method is especially useful in while and for loops.
Be careful, however, in your use of constructors for BigDecimal. The String constructor is very useful in many cases. For instance, the code
BigDecimal onethird = new BigDecimal("0.33333333333");
uses a string representation of 1/3 to represent that infinitely repeating number to a specified degree of accuracy. The remaining round-off error is usually buried so deep that it won't disturb most practical calculations — though I have, from personal experience, seen round-off creep up. The setScale method is important in this regard, as can be seen from the Oracle documentation.
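A minimal sketch of setScale (the scale of 4 here is an arbitrary choice):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SetScaleSketch {
    public static void main(String[] args) {
        BigDecimal third = new BigDecimal("0.33333333333");
        // Round to 4 decimal places, half-up
        System.out.println(third.setScale(4, RoundingMode.HALF_UP)); // 0.3333
    }
}
```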
If you need to use division in your arithmetic, you may be better off with double than with BigDecimal. Plain division (the divide(BigDecimal) method) in BigDecimal is of limited use, as BigDecimal can't represent the repeating decimal expansions of rational numbers (divisions whose result does not terminate, such as 1/3) and will throw java.lang.ArithmeticException: Non-terminating decimal expansion; no exact representable decimal result.
Just try BigDecimal.ONE.divide(new BigDecimal("3"));
Double, on the other hand, will handle division fine (with the understood precision, which is roughly 15-17 significant decimal digits).
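For completeness, a sketch of the divide overloads that avoid the exception (the precision/scale of 15 is an arbitrary choice):

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class DivideSketch {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal("3");

        // one.divide(three) would throw ArithmeticException here

        // Bounding the result's precision or scale makes the division succeed
        System.out.println(one.divide(three, new MathContext(15)));      // 0.333333333333333
        System.out.println(one.divide(three, 15, RoundingMode.HALF_UP)); // 0.333333333333333
    }
}
```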