I'm an experienced developer but not a math expert. I know enough about the IEEE floating point specification to be afraid of making assumptions about parsing, printing, and comparing them.
I know I can parse a double from a String using Double.parseDouble(String s). I know I can also parse that same string into a BigDecimal using new BigDecimal(String s), and then ask the BigDecimal for a double using BigDecimal.doubleValue().
I glanced at the API and code for both techniques, and it seems that BigDecimal has a lot of different parsing and conversion options.
Are both techniques (Double.parseDouble(s) and new BigDecimal(s).doubleValue()) guaranteed, for all string inputs, to produce exactly the same double primitive value, provided the value is not outside the range of plus or minus Double.MAX_VALUE?
For most input values, both techniques yield the same double. A divergence is possible in principle, but it seems unlikely for ordinary decimal strings.
The Javadoc of the BigDecimal(String) constructor states:
API Note:
For values other than float and double NaN and ±Infinity, this constructor is compatible with the values returned by Float.toString(float) and Double.toString(double).
However, the Double.parseDouble(String) method states:
Returns a new double initialized to the value represented by the specified String, as performed by the valueOf method of class Double.
And that goes on to describe the format accepted by the method.
Let's Test It!
Let's test some values. Testing this exhaustively is infeasible, but let's include some string values that represent values known to produce floating-point rounding errors or inexact representations.
public static void main(String[] args) {
    String[] values = {
        "0", "0.1", "0.33333333333333333333", "-0", "-3.14159265", "10.1e100",
        "0.00000000000000000000000000000000000000000000000000142857142857",
        "10000000000.000000000000000001", "2.718281828459",
        "-1.23456789e-123", "9.87654321e+71", "66666666.66666667",
        "1.7976931348623157E308", "2.2250738585072014E-308", "4.9E-324",
        "3.4028234663852886E38", "1.1754943508222875E-38", "1.401298464324817E-45",
        String.valueOf(Math.E), String.valueOf(Math.PI), String.valueOf(Math.sqrt(2))
    };
    for (String value : values) {
        System.out.println(isDoubleEqual(value));
    }
}

// Test whether the two parsing routes yield the same exact double value.
public static boolean isDoubleEqual(String s) {
    double d1 = Double.parseDouble(s);
    double d2 = new BigDecimal(s).doubleValue();
    return d1 == d2;
}
For these values, I get all trues. This is by no means exhaustive, and it would be very difficult to prove the claim for all possible double strings; a single false would be a counterexample. Still, this is some evidence that it holds for all legal decimal string representations.
I also tried leading spaces, e.g. " 4". The BigDecimal(String) constructor threw a NumberFormatException but Double.parseDouble trimmed the input correctly.
The BigDecimal(String) constructor won't accept Infinity or NaN, but you only asked about the normal finite range. The Double.parseDouble method accepts hexadecimal floating point representations but BigDecimal(String) does not.
If you include these edge cases, one method may throw an exception where the other would not. If you're looking for normal base-10 strings of finite values within range, the answer is "it seems likely".
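The hexadecimal difference is easy to demonstrate. A small sketch (the input "0x1.8p1" is just an illustrative hex floating-point literal, meaning 1.5 × 2^1):

```java
import java.math.BigDecimal;

public class HexParseDemo {
    public static void main(String[] args) {
        String hex = "0x1.8p1"; // hex floating-point notation: 1.5 * 2^1 = 3.0

        // Double.parseDouble accepts hexadecimal floating-point strings.
        System.out.println(Double.parseDouble(hex)); // 3.0

        // The BigDecimal(String) constructor rejects them.
        try {
            new BigDecimal(hex);
        } catch (NumberFormatException e) {
            System.out.println("BigDecimal: NumberFormatException");
        }
    }
}
```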
Are both techniques (Double.parseDouble(s) and new BigDecimal(s).doubleValue()) guaranteed, for all string inputs, to produce exactly the same double primitive value, provided the value is not outside the range of plus or minus Double.MAX_VALUE?
No, certainly not.
For one thing, there are some string inputs that Double.parseDouble(s) supports but new BigDecimal(s) does not (for example, hex literals).
For another, Double.parseDouble("-0") yields negative zero, whereas new BigDecimal("-0").doubleValue() yields positive zero (because BigDecimal doesn't have a concept of signed zero).
And while not directly relevant to your question, I've been asked to point out for the benefit of other readers that Double.parseDouble(s) supports NaNs and infinities, whereas BigDecimal does not.
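The signed-zero difference is invisible to == (which treats -0.0 and 0.0 as equal, which is also why the "-0" entry in the test program above prints true), but it shows up in the bit patterns. A small sketch:

```java
import java.math.BigDecimal;

public class SignedZeroDemo {
    public static void main(String[] args) {
        double viaParse = Double.parseDouble("-0");            // negative zero
        double viaBigDecimal = new BigDecimal("-0").doubleValue(); // positive zero

        // == cannot tell the two zeros apart...
        System.out.println(viaParse == viaBigDecimal); // true

        // ...but their bit patterns differ (sign bit set vs. clear).
        System.out.println(Double.doubleToLongBits(viaParse) ==
                           Double.doubleToLongBits(viaBigDecimal)); // false
    }
}
```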
Related
The second of the method calls below, setYCoordinate(), receives the incorrect value -89.99999435599995 instead of -89.99999435599994.
The first call, setXCoordinate(), receives the correct value 29.99993874900002.
setXCoordinate(BigDecimal.valueOf(29.99993874900002))
setYCoordinate(BigDecimal.valueOf(-89.99999435599994))
I put a breakpoint inside BigDecimal.valueOf(); the method's code looks like this:
public static BigDecimal valueOf(double val) {
    // Reminder: a zero double returns '0.0', so we cannot fastpath
    // to use the constant ZERO. This might be important enough to
    // justify a factory approach, a cache, or a few private
    // constants, later.
    return new BigDecimal(Double.toString(val));
}
The argument received by valueOf, i.e. double val, is itself -89.99999435599995 when inspected. Why? I have the Java version set as below in my Maven pom.xml:
<java.version>1.8</java.version>
Because a double can't represent that decimal value exactly; initialize your BigDecimal from a String rather than a double:
new BigDecimal("29.99993874900002");
new BigDecimal("-89.99999435599994");
See: Is floating point math broken?
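A small sketch contrasting the two initializations; the exact digits survive the String route, whereas the double literal has already been rounded before BigDecimal is ever involved:

```java
import java.math.BigDecimal;

public class StringInitDemo {
    public static void main(String[] args) {
        // From a String: the decimal digits are preserved exactly.
        System.out.println(new BigDecimal("-89.99999435599994")); // -89.99999435599994

        // From a double literal: the value was rounded to the nearest
        // double before BigDecimal.valueOf ever saw it.
        System.out.println(BigDecimal.valueOf(-89.99999435599994));
    }
}
```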
Your confusion has nothing to do with BigDecimal.
double d = -89.99999435599994;
System.out.println(d); //or inspecting it in a debugger
yields:
-89.99999435599995
This is just the way doubles work in Java, in combination with the way Double.toString defines the String representation. The conversion happens before any method is invoked, when the literal is interpreted as a double. The details are specified in JLS §3.10.2 (Floating-Point Literals) and the Javadoc of Double.valueOf(String).
If you need to express the value -89.99999435599994 as BigDecimal, the easiest way is to use the constructor taking a String, as other answers have already pointed out:
BigDecimal bd = new BigDecimal("-89.99999435599994");
BigDecimal bd = new BigDecimal("-89.99999435599994");
System.out.println(bd);
yields:
-89.99999435599994
You're right on the edge of precision for a double-precision floating point value, with 16 digits specified, and there's just shy of a full 16 digits of decimal accuracy available. If you skip BigDecimal entirely, just set a double to -89.99999435599994 and print it back out, you'll get -89.99999435599995.
I have two pieces of code, new BigDecimal("1.240472701") and new BigDecimal(1.240472701). If I compare them using BigDecimal's compareTo method, I find that they are not equal.
When I print the values using System.out.println(), I get different results for the two. For example:
new BigDecimal("1.240472701") -> 1.240472701
new BigDecimal(1.240472701) -> 1.2404727010000000664291519569815136492252349853515625
What is the reason for this?
You can refer to the Javadoc of public BigDecimal(double val) for this:
public BigDecimal(double val)

Translates a double into a BigDecimal which is the exact decimal representation of the double's binary floating-point value. The scale of the returned BigDecimal is the smallest value such that (10^scale × val) is an integer.

The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.

The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal which is exactly equal to 0.1, as one would expect. Therefore, it is generally recommended that the String constructor be used in preference to this one.

When a double must be used as a source for a BigDecimal, note that this constructor provides an exact conversion; it does not give the same result as converting the double to a String using the Double.toString(double) method and then using the BigDecimal(String) constructor. To get that result, use the static valueOf(double) method.
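The difference the Javadoc describes can be checked directly; a quick sketch:

```java
import java.math.BigDecimal;

public class ConstructorDemo {
    public static void main(String[] args) {
        BigDecimal fromString = new BigDecimal("1.240472701");
        BigDecimal fromDouble = new BigDecimal(1.240472701);
        BigDecimal viaValueOf = BigDecimal.valueOf(1.240472701);

        // Negative: the double route carries the extra binary digits.
        System.out.println(fromString.compareTo(fromDouble) < 0); // true

        // Zero: valueOf goes through Double.toString, matching the String route.
        System.out.println(fromString.compareTo(viaValueOf));     // 0
    }
}
```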
The string "1.240472701" is a textual representation of a decimal value. The BigDecimal code parses this and creates a BigDecimal with the exact value represented in the string.
But the double 1.240472701 is merely a (close) approximation of that exact decimal value. Double cannot represent all decimal values exactly, so the exact value stored in the double differs slightly. If you pass that to a BigDecimal, it takes that differing value and turns it into an exact BigDecimal. But the BigDecimal only has the inexact double to go by; it does not know the exact text representation. So it can only represent the value in the double, not the value of the source text.
In the first case:
String --> BigDecimal
Because BigDecimal is made to exactly represent decimal values, that conversion is exact.
In the second case:
Source code text --(1)--> double --(2)--> BigDecimal
In the second case, precision is lost in the first conversion (1). The second conversion (2) is exact, but the input -- the double -- is an inexact representation of the source code text 1.240472701 (in reality, it is 1.2404727010000000664291519569815136492252349853515625).
So: never initialize a BigDecimal with a double, if you can avoid it. Use a string instead.
That is why the first BigDecimal is exact and the second is not.
As the documentation quoted by user thegauravmahawar shows, this comes down to how BigDecimal stores values. The two values might look equal to you, but internally Java stores a BigDecimal as an unscaled value plus a scale, and the double constructor captures the exact binary value of the double.

Improvement: if you want to compare at a fixed number of decimal places, call setScale with a rounding mode on both numbers before comparing, like this:

new BigDecimal("7.773").setScale(2, RoundingMode.HALF_UP).equals(new BigDecimal("7.774").setScale(2, RoundingMode.HALF_UP))

(Note that setScale(2) alone would throw an ArithmeticException here, because dropping digits requires a rounding mode.) This will save you from making a mistake.
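A sketch of how scale affects equals but not compareTo:

```java
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("7.70"); // unscaled value 770, scale 2
        BigDecimal b = new BigDecimal("7.7");  // unscaled value 77,  scale 1

        System.out.println(a.equals(b));         // false: equals considers scale
        System.out.println(a.compareTo(b) == 0); // true: compareTo ignores scale
    }
}
```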
It is well documented that using a double can lead to inaccuracies and that BigDecimal guarantees accuracy so long as there are no doubles in the mix.
However, is accuracy guaranteed if the double in question is a small whole number?
For example, although the following will be inaccurate/unsafe:
BigDecimal bdDouble = new BigDecimal(0.1d); // 0.1000000000000000055511151231257827021181583404541015625
will the following always be accurate/safe?
BigDecimal bdDouble = new BigDecimal(1.0d); // 1
Is it safe to assume that small whole number doubles are safe to use with BigDecimals - if so, what is the smallest whole number that would introduce an inaccuracy?
>> Additional info in response to initial answers:
Thanks for the answers. Very helpful.
Just to add a little more detail: I have a legacy interface which supplies doubles, but I can be certain that these doubles represent whole numbers, having themselves been converted from Strings to doubles via Double.parseDouble(String), where the String is a guaranteed whole-number representation.
I do not want to create a new interface which passes me Strings or BigDecimals if I can avoid it.
I can immediately convert the double to a BigDecimal on my side of the interface and make all internal calculations using BigDecimal calls, but I want to be sure that is as safe as creating a new BigDecimal/String interface.
Given that in my original example using 0.1d does not accurately result in 0.1, as shown by the fact that the actual BigDecimal is 0.1000000000000000055511151231257827021181583404541015625, it appears that some fractions will introduce an inaccuracy.
On the other hand, given that in my original example using 1.0d does accurately result in 1, it appears that whole numbers retain accuracy. If I understand your answers correctly, this is guaranteed up to a value of 2^53.
Is that a correct assumption?
The BigDecimal aspect isn't as relevant to this question as "what is the range of integers that can be exactly represented in double?" - in that every finite double value can be represented exactly by BigDecimal, and that's the value you'll get if you call the BigDecimal(double) constructor. So you can be confident that if the value you wish to represent is an integer which is exactly representable by a double, if you pass that double to the BigDecimal constructor, you'll get a BigDecimal which exactly represents the same integer.
The significand of a double is 52 bits. Due to normalization (there is an implicit leading 1 bit), that means you should expect to be able to store integer values in the range [-2^53, 2^53] exactly. Those are pretty large numbers.
Of course, if you're only in the business of representing integers, it's questionable why you're using double at all... and you need to make sure that any conversions you're using from original source data to double aren't losing any information - but purely on the matter of "what range of integers are exactly representable as double values" I believe the above is correct...
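The 2^53 boundary can be checked with a round trip through double; a small sketch:

```java
public class IntegerPrecisionDemo {
    public static void main(String[] args) {
        long limit = 1L << 53; // 9007199254740992

        // Integers up to 2^53 survive a round trip through double unchanged...
        System.out.println((long) (double) limit == limit);           // true
        System.out.println((long) (double) (limit - 1) == limit - 1); // true

        // ...but 2^53 + 1 does not: it rounds to the nearest even double, 2^53.
        System.out.println((long) (double) (limit + 1) == limit + 1); // false
    }
}
```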
A short answer is no. Because of the way a floating-point variable is stored in memory, there is no "small" value: 0.000001 uses the same number of bits as 100000; every value is represented in the same form, 0.xxx...eyy.
A better way is to initialize the BigDecimal from a String:
BigDecimal bdDouble = new BigDecimal("0.1");
According to the JavaDoc for BigDecimal, the compareTo function does not account for the scale during comparison.
Now I have a test case that looks something like this:
BigDecimal result = callSomeService(foo);
assertTrue(result.compareTo(new BigDecimal(0.7)) == 0); //this does not work
assertTrue(result.equals(new BigDecimal(0.7).setScale(10, BigDecimal.ROUND_HALF_UP))); //this works
The value I'm expecting the function to return is 0.7 and has a scale of 10. Printing the value shows me the expected result. But the compareTo() function doesn't seem to be working the way I think it should.
What's going on here?
new BigDecimal(0.7) does not represent 0.7.
It represents 0.6999999999999999555910790149937383830547332763671875 (exactly).
The reason for this is that the double literal 0.7 doesn't represent 0.7 exactly.
If you need precise BigDecimal values, you must use the String constructor (actually all constructors that don't take double values will work).
Try new BigDecimal("0.7") instead.
The JavaDoc of the BigDecimal(double) constructor has some related notes:
The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.
The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal which is exactly equal to 0.1, as one would expect. Therefore, it is generally recommended that the String constructor be used in preference to this one.
When a double must be used as a source for a BigDecimal, note that this constructor provides an exact conversion; it does not give the same result as converting the double to a String using the Double.toString(double) method and then using the BigDecimal(String) constructor. To get that result, use the static valueOf(double) method.
So to summarize: If you want to create a BigDecimal with a fixed decimal value, use the String constructor. If you already have a double value, then BigDecimal.valueOf(double) will provide a more intuitive behaviour than using new BigDecimal(double).
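A sketch of that summary:

```java
import java.math.BigDecimal;

public class ValueOfDemo {
    public static void main(String[] args) {
        // valueOf(double) goes through Double.toString, so here it matches
        // the String constructor exactly (same unscaled value and scale):
        System.out.println(BigDecimal.valueOf(0.7).equals(new BigDecimal("0.7"))); // true

        // new BigDecimal(double) captures the exact binary value instead:
        System.out.println(new BigDecimal(0.7));
        // 0.6999999999999999555910790149937383830547332763671875
    }
}
```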
I execute the Java print statements below for this double variable:
double test=58.15;
When I do System.out.println(test); and System.out.println(new Double(test).toString());, both print 58.15.
When I do System.out.println(new BigDecimal(test)), I get the value below:
58.14999999999999857891452847979962825775146484375
I can understand that the value of the double variable test is internally stored as 58.1499999.... But when I run the two System.out.println calls below, the output is 58.15, not 58.1499999...:
System.out.println(test);
System.out.println(new Double(test).toString());
Both print 58.15.
Are the above System.out.println statements rounding the value 58.1499999... and printing it as 58.15?
System.out.println(new BigDecimal("58.15"));
To construct a BigDecimal from a hard-coded constant, you must always use one of the constants in the class (ZERO, ONE, or TEN) or one of the String constructors. The reason is that once you put the value in a double, you've already lost precision that can never be regained.
EDIT: polygenelubricants is right. Specifically, you're using Double.toString or equivalent. To quote from there:
How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double. That is, suppose that x is the exact mathematical value represented by the decimal representation produced by this method for a finite nonzero argument d. Then d must be the double value nearest to x; or if two double values are equally close to x, then d must be one of them and the least significant bit of the significand of d must be 0.
Yes, println (or more precisely, Double.toString) rounds. For proof, System.out.println(.1D); prints 0.1, a value that cannot be represented exactly in binary.
Also, when using BigDecimal, don't use the double constructor, because that would attempt to precisely represent an imprecise value. Use the String constructor instead.
out.println and Double.toString() use the format specified in Double.toString(double).
BigDecimal(double) captures the exact value of the primitive double, as described in the Javadoc, and when you call toString() it outputs the full decimal expansion of that stored value, since 58.15 does not have an exact binary representation.
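A small sketch putting the two representations side by side:

```java
import java.math.BigDecimal;

public class ToStringDemo {
    public static void main(String[] args) {
        double test = 58.15;

        // Double.toString prints the shortest decimal that round-trips
        // back to this exact double:
        System.out.println(test); // 58.15

        // BigDecimal(double) exposes the exact stored binary value:
        System.out.println(new BigDecimal(test));
        // 58.14999999999999857891452847979962825775146484375
    }
}
```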