Of the two method calls below, the second, to setYCoordinate(), gets the incorrect value -89.99999435599995 instead of -89.99999435599994.
The first call, to setXCoordinate(), gets the correct value 29.99993874900002.
setXCoordinate(BigDecimal.valueOf(29.99993874900002))
setYCoordinate(BigDecimal.valueOf(-89.99999435599994))
I put a breakpoint inside BigDecimal.valueOf(); the method's code looks like this:
public static BigDecimal valueOf(double val) {
    // Reminder: a zero double returns '0.0', so we cannot fastpath
    // to use the constant ZERO. This might be important enough to
    // justify a factory approach, a cache, or a few private
    // constants, later.
    return new BigDecimal(Double.toString(val));
}
The argument received by valueOf, i.e. double val, is itself -89.99999435599995 when inspected. Why? I have the Java version set as below in my Maven pom.xml:
<java.version>1.8</java.version>
Because a double can't retain that much precision; you shouldn't use a double, but rather a String when initializing your BigDecimal:
new BigDecimal("29.99993874900002");
new BigDecimal("-89.99999435599994");
See: Is floating point math broken?
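A quick way to see the difference between the two routes (a minimal snippet; the printed values follow from the question above, since BigDecimal.valueOf goes through Double.toString):

import java.math.BigDecimal;

// Route 1: double literal first, then Double.toString inside valueOf()
System.out.println(BigDecimal.valueOf(-89.99999435599994)); // -89.99999435599995
// Route 2: the exact decimal, parsed straight from the string
System.out.println(new BigDecimal("-89.99999435599994"));   // -89.99999435599994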
Your confusion has nothing to do with BigDecimal.
double d = -89.99999435599994;
System.out.println(d); //or inspecting it in a debugger
yields:
-89.99999435599995
This is just the way doubles work in Java, in combination with the way Double.toString defines the String representation. The conversion happens before any method is invoked, when the literal is interpreted as a double. The details are specified in JLS §3.10.2 (Floating-Point Literals) and the Javadoc of Double.valueOf(String).
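A one-line check that follows from that round-trip rule: since Double.toString of the parsed literal prints ...95, both spellings must denote the very same double, so the loss happens before any method call.

// Both decimal literals round to the same nearest double.
System.out.println(-89.99999435599994 == -89.99999435599995); // true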
If you need to express the value -89.99999435599994 as BigDecimal, the easiest way is to use the constructor taking a String, as other answers have already pointed out:
BigDecimal bd = new BigDecimal("-89.99999435599994");
System.out.println(bd);
yields:
-89.99999435599994
You're right on the edge of precision for a double-precision floating point value, with 16 digits specified, and there's just shy of a full 16 digits of decimal accuracy available. If you skip BigDecimal entirely, just set a double to -89.99999435599994 and print it back out, you'll get -89.99999435599995.
I am working with an application that is based entirely on doubles, and am having trouble in one utility method that parses a string into a double. I've found a fix where using BigDecimal for the conversion solves the issue, but it raises another problem when I convert the BigDecimal back to a double: I lose several places of precision. For example:
import java.math.BigDecimal;
import java.text.DecimalFormat;

public class test {
    public static void main(String[] args) {
        String num = "299792.457999999984";
        BigDecimal val = new BigDecimal(num);
        System.out.println("big decimal: " + val.toString());
        DecimalFormat nf = new DecimalFormat("#.0000000000");
        System.out.println("double: " + val.doubleValue());
        System.out.println("double formatted: " + nf.format(val.doubleValue()));
    }
}
This produces the following output:
$ java test
big decimal: 299792.457999999984
double: 299792.458
double formatted: 299792.4580000000
The formatted double demonstrates that it's lost the precision after the third place (the application requires those lower places of precision).
How can I get BigDecimal to preserve those additional places of precision?
Thanks!
Update after catching up on this post. Several people mention this is exceeding the precision of the double data type. Unless I'm reading this reference incorrectly:
http://java.sun.com/docs/books/jls/third_edition/html/typesValues.html#4.2.3
then the double primitive has a maximum exponent value of Emax = 2^(K-1) - 1, and the standard implementation has K = 11. So, the max exponent should be 511, no?
You've reached the maximum precision for a double with that number. It can't be done. The value gets rounded up in this case. The conversion from BigDecimal is unrelated and the precision problem is the same either way. See this for example:
System.out.println(Double.parseDouble("299792.4579999984"));
System.out.println(Double.parseDouble("299792.45799999984"));
System.out.println(Double.parseDouble("299792.457999999984"));
Output is:
299792.4579999984
299792.45799999987
299792.458
In these cases the double does have more than 3 digits of precision after the decimal point; they just happen to be zeros for your number, and that's the closest representation that fits in a double. Rounding up happens to be closer in this case, so your 9's seem to disappear. If you try this:
System.out.println(Double.parseDouble("299792.457999999924"));
You'll notice that it keeps your 9's because it was closer to round down:
299792.4579999999
If you require that all of the digits in your number be preserved, then you'll have to change the code that operates on double. You could use BigDecimal in its place, as sketched below. If you need performance, you might want to explore BCD as an option, although I'm not aware of any libraries offhand.
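As a minimal sketch of that BigDecimal route (variable names are ours; the arithmetic stays exact until you deliberately convert):

import java.math.BigDecimal;

// Keep the value as BigDecimal end to end; only a final
// conversion to double (if any) would round.
BigDecimal val = new BigDecimal("299792.457999999984");
BigDecimal doubled = val.multiply(BigDecimal.valueOf(2));
System.out.println(doubled); // 599584.915999999968, all digits preserved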
In response to your update: the maximum exponent for a double-precision floating-point number is actually 1023. That's not your limiting factor here, though. Your number exceeds the precision of the 52 fractional bits that represent the significand; see IEEE 754-1985.
Use a floating-point converter to see your number in binary. The exponent is 18, since 262144 (2^18) is nearest. If you take the fractional bits and go up or down by one in binary, you can see there's not enough precision to represent your number:
299792.457999999900 // 0010010011000100000111010100111111011111001110110101
299792.457999999984 // here's your number that doesn't fit into a double
299792.458000000000 // 0010010011000100000111010100111111011111001110110110
299792.458000000040 // 0010010011000100000111010100111111011111001110110111
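If you want to reproduce those bit patterns without an external converter, Double.doubleToLongBits exposes them directly (a small sketch):

// Print the raw IEEE 754 bit pattern of the nearest double:
// 1 sign bit, 11 exponent bits, 52 significand bits.
long bits = Double.doubleToLongBits(299792.457999999984);
System.out.println(String.format("%64s", Long.toBinaryString(bits)).replace(' ', '0'));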
The problem is that a double can hold roughly 15 decimal digits, while a BigDecimal can hold an arbitrary number of digits. When you call doubleValue(), a rounding mode is applied to remove the excess digits. Since your number has a long run of 9's, they keep getting rounded up to 0, with a carry to the next-highest digit.
To keep as much precision as you can, you need to change the BigDecimal's rounding mode so that it truncates:
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

BigDecimal bd1 = new BigDecimal("12345.1234599999998");
System.out.println(bd1.doubleValue());
BigDecimal bd2 = new BigDecimal("12345.1234599999998", new MathContext(15, RoundingMode.FLOOR));
System.out.println(bd2.doubleValue());
Only as many digits are printed as are needed so that parsing the string back to a double yields the exact same value. Some detail can be found in the javadoc for Double#toString:
How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double. That is, suppose that x is the exact mathematical value represented by the decimal representation produced by this method for a finite nonzero argument d. Then d must be the double value nearest to x; or if two double values are equally close to x, then d must be one of them and the least significant bit of the significand of d must be 0.
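A short demonstration of that rule, using only the standard library (the second line prints the exact stored value, whatever its full expansion is):

double d = 299792.457999999984;                  // the literal rounds to the nearest double
System.out.println(d);                           // 299792.458, the shortest round-tripping string
System.out.println(new java.math.BigDecimal(d)); // the exact decimal expansion of that double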
If it's entirely based on doubles... why are you using BigDecimal? Wouldn't Double make more sense? If the value is too large (or carries too much precision) for that, then... you can't convert it; that would be the reason to use BigDecimal in the first place.
As to why it's losing precision, from the javadoc
Converts this BigDecimal to a double. This conversion is similar to the narrowing primitive conversion from double to float as defined in the Java Language Specification: if this BigDecimal has too great a magnitude to represent as a double, it will be converted to Double.NEGATIVE_INFINITY or Double.POSITIVE_INFINITY as appropriate. Note that even when the return value is finite, this conversion can lose information about the precision of the BigDecimal value.
You've hit the maximum possible precision for the double. If you would still like to store the value in primitives... one possible way is to store the part before the decimal point in a long
long l = 299792;
double d = 0.457999999984;
Since you are no longer spending precision on the integer section, you can hold more digits of precision in the fractional component. This should be easy enough to do with some rounding etc., as sketched below.
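A hedged sketch of that split (the string-slicing here is our illustration, not a standard API, and it ignores signs and missing decimal points):

// Split the textual value into an integer part and a fraction.
String num = "299792.457999999984";
int dot = num.indexOf('.');
long integerPart = Long.parseLong(num.substring(0, dot));       // 299792
double fraction = Double.parseDouble("0" + num.substring(dot)); // 0.457999999984
System.out.println(integerPart + " + " + fraction);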
I'm an experienced developer but not a math expert. I know enough about the IEEE floating point specification to be afraid of making assumptions about parsing, printing, and comparing them.
I know I can parse a double from a String using Double.parseDouble(String s). I know I can also parse that same string into a BigDecimal using new BigDecimal(String s), and then ask the BigDecimal for a double using BigDecimal.doubleValue().
I glanced at the API and code for both techniques, and it seems that BigDecimal has a lot of different parsing and conversion options.
Are both techniques (Double.parseDouble(s) and new BigDecimal(s).doubleValue()) guaranteed, for all string inputs, to produce exactly the same double primitive value, provided the value is not outside the range of plus or minus Double.MAX_VALUE?
For most input values, both techniques should yield the same values. While it's still possible they might not, it doesn't seem likely.
The Javadoc of the BigDecimal(String) constructor states:
API Note:
For values other than float and double NaN and ±Infinity, this constructor is compatible with the values returned by Float.toString(float) and Double.toString(double).
However, the Double.parseDouble(String) method states:
Returns a new double initialized to the value represented by the specified String, as performed by the valueOf method of class Double.
And that goes on to describe the format accepted by the method.
Let's Test It!
Let's test some values. Testing this exhaustively would be an incredibly huge effort, but let's include some string values that represent values known to produce floating-point errors or inexact representations.
public static void main(String[] args) {
    String[] values = {"0", "0.1", "0.33333333333333333333", "-0", "-3.14159265", "10.1e100",
            "0.00000000000000000000000000000000000000000000000000142857142857",
            "10000000000.000000000000000001", "2.718281828459",
            "-1.23456789e-123", "9.87654321e+71", "66666666.66666667",
            "1.7976931348623157E308", "2.2250738585072014E-308", "4.9E-324",
            "3.4028234663852886E38", "1.1754943508222875E-38", "1.401298464324817E-45",
            String.valueOf(Math.E), String.valueOf(Math.PI), String.valueOf(Math.sqrt(2))
    };
    for (String value : values) {
        System.out.println(isDoubleEqual(value));
    }
}

// Test whether the two parsing routes yield exactly the same double value.
public static boolean isDoubleEqual(String s) {
    double d1 = Double.parseDouble(s);
    double d2 = new BigDecimal(s).doubleValue();
    return d1 == d2;
}
For these values, I get all trues. This is not by any means exhaustive, so it would be very difficult to prove it true for all possible double values. All it would take is one false to show a counterexample. However, this seems to be some evidence that it is true for all legal double string representations.
I also tried leading spaces, e.g. " 4". The BigDecimal(String) constructor threw a NumberFormatException but Double.parseDouble trimmed the input correctly.
The BigDecimal(String) constructor won't accept Infinity or NaN, but you only asked about the normal finite range. The Double.parseDouble method accepts hexadecimal floating point representations but BigDecimal(String) does not.
If you include these edge cases, one method may throw an exception where the other would not. If you're looking for normal base-10 strings of finite values within range, the answer is "it seems likely".
Are both techniques (Double.parseDouble(s) and new BigDecimal(s).doubleValue()) guaranteed, for all string inputs, to produce exactly the same double primitive value, provided the value is not outside the range of plus or minus Double.MAX_VALUE?
No, certainly not.
For one thing, there are some string inputs that Double.parseDouble(s) supports but new BigDecimal(s) does not (for example, hex literals).
For another, Double.parseDouble("-0") yields negative zero, whereas new BigDecimal("-0").doubleValue() yields positive zero (because BigDecimal doesn't have a concept of signed zero).
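Both counterexamples are easy to check (snippet assumes java.math.BigDecimal is imported):

double viaParse = Double.parseDouble("-0");
double viaBigDecimal = new BigDecimal("-0").doubleValue();
System.out.println(viaParse);                                // -0.0
System.out.println(viaBigDecimal);                           // 0.0
System.out.println(viaParse == viaBigDecimal);               // true: == cannot tell the zeros apart
System.out.println(Double.compare(viaParse, viaBigDecimal)); // -1: compare can
System.out.println(Double.parseDouble("0x1.8p1"));           // 3.0: hex literal accepted
// new BigDecimal("0x1.8p1") throws NumberFormatException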
And while not directly relevant to your question, I've been asked to point out for the benefit of other readers that Double.parseDouble(s) supports NaNs and infinities, whereas BigDecimal does not.
I have two pieces of code: new BigDecimal("1.240472701") and new BigDecimal(1.240472701). If I use the compareTo method on the two, I find that they are not equal.
When I print the values using System.out.println(), I get different results for the two. For example:
new BigDecimal("1.240472701") -> 1.240472701
new BigDecimal(1.240472701) -> 1.2404727010000000664291519569815136492252349853515625
So I want to understand: what could be the reason for this?
You can refer to the Javadoc of public BigDecimal(double val) for this:
public BigDecimal(double val)

Translates a double into a BigDecimal which is the exact decimal representation of the double's binary floating-point value. The scale of the returned BigDecimal is the smallest value such that (10^scale × val) is an integer.

The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.

The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal which is exactly equal to 0.1, as one would expect. Therefore, it is generally recommended that the String constructor be used in preference to this one.

When a double must be used as a source for a BigDecimal, note that this constructor provides an exact conversion; it does not give the same result as converting the double to a String using the Double.toString(double) method and then using the BigDecimal(String) constructor. To get that result, use the static valueOf(double) method.
The string "1.240472701" is a textual representation of a decimal value. The BigDecimal code parses this and creates a BigDecimal with the exact value represented in the string.
But the double 1.240472701 is merely a (close) approximation of that exact decimal value. A double cannot represent all decimal values exactly, so the value actually stored in the double differs slightly. If you pass that to a BigDecimal, it takes that differing value and turns it into an exact BigDecimal. But the BigDecimal only has the inexact double to go by; it does not know the exact text representation. So it can only represent the value in the double, not the value of the source text.
In the first case:
String --> BigDecimal
Because BigDecimal is made to exactly represent decimal values, that conversion is exact.
In the second case:
Source code text --(1)--> double --(2)--> BigDecimal
Precision is lost in the first conversion (1). The second conversion (2) is exact, but its input, the double, is an inexact representation of the source code text 1.240472701 (in reality, it is 1.2404727010000000664291519569815136492252349853515625).
So: never initialize a BigDecimal with a double, if you can avoid it. Use a string instead.
That is why the first BigDecimal is exact and the second is not.
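A compact way to see all three construction routes side by side (per the javadoc quoted above, BigDecimal.valueOf(double) goes through Double.toString):

import java.math.BigDecimal;

System.out.println(new BigDecimal("1.240472701"));   // 1.240472701 (exact decimal)
System.out.println(new BigDecimal(1.240472701));     // 1.2404727010000000664291519569815136492252349853515625
System.out.println(BigDecimal.valueOf(1.240472701)); // 1.240472701 (via Double.toString)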
As user thegauravmahawar's quote from the docs shows, yes, it comes down to scaling in the BigDecimal case. The values might look equal to you, but internally Java stores a BigDecimal as an unscaled value plus a scale.
Reason: scaling.
Improvement: you can call setScale to bring the numbers you're comparing to the same scale (a rounding mode is required whenever digits must be dropped), like this:
new BigDecimal("7.773").setScale(2, RoundingMode.HALF_UP).equals(new BigDecimal("7.774").setScale(2, RoundingMode.HALF_UP))
This evaluates to true and saves you from scale-related comparison mistakes.
I've noticed some issues with Java float precision:
Float.parseFloat("0.0065") - 0.001 // 0.0055000000134110451
new Float("0.027") - 0.001 // 0.02600000000700354575
Float.valueOf("0.074") - 0.001 // 0.07399999999999999999
I have this problem not only with Float but also with Double.
Can someone explain what is happening behind the scenes, and how we can get an accurate number? What would be the right way to handle this?
The problem is simply that float has finite precision; it cannot represent 0.0065 exactly. (The same is true of double, of course: it has greater precision, but still finite.)
A further problem, which makes the above problem more obvious, is that 0.001 is a double rather than a float, so your float is getting promoted to a double to perform the subtraction, and of course at that point the system has no way to recover the missing precision that a double could have represented to begin with. To address that, you would write:
float f = Float.parseFloat("0.0065") - 0.001f;
using 0.001f instead of 0.001.
See What Every Computer Scientist Should Know About Floating-Point Arithmetic. Your results look correct to me.
If you don't like how floating-point numbers work, try something like BigDecimal instead.
You're getting the right results. There is no such float as 0.027 exactly, nor is there such a double. You will always get these errors if you use float or double.
float and double are stored as binary fractions: something like 1/2 + 1/4 + 1/16... You can't get all decimal values to be stored exactly as finite-precision binary fractions. It's just not mathematically possible.
The only alternative is to use BigDecimal, which you can use to get exact decimal values.
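For example, the first subtraction from the question done exactly (a minimal sketch; values arrive as strings, as recommended above):

import java.math.BigDecimal;

BigDecimal result = new BigDecimal("0.0065").subtract(new BigDecimal("0.001"));
System.out.println(result); // 0.0055, exactly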
From the Java Tutorials page on Primitive Data Types:
A floating-point literal is of type float if it ends with the letter F or f; otherwise its type is double and it can optionally end with the letter D or d.
So I think your literals (0.001) are doubles and you're subtracting doubles from floats.
Try this instead:
System.out.println((0.0065F - 0.001D)); // 0.005500000134110451
System.out.println((0.0065F - 0.001F)); // 0.0055
... and you'll get:
0.005500000134110451
0.0055
So add F suffixes to your literals and you should get better results:
Float.parseFloat("0.0065") - 0.001F
new Float("0.027") - 0.001F
Float.valueOf("0.074") - 0.001F
I would convert your float to a String and then use BigDecimal. This link explains it well:
new BigDecimal(String.valueOf(yourDoubleValue));
Don't use the BigDecimal double constructor, though, as you will still get errors.
Long story short: if you require arbitrary precision, use BigDecimal, not float or double. You will see all sorts of rounding issues of this nature using float.
As an aside, be very careful not to use the float/double constructor of BigDecimal, because it will have the same issue. Use the String constructor instead.
Floating point cannot represent most decimal numbers exactly. If you need an exact representation of a decimal number in Java, you should use the java.math.BigDecimal class:
BigDecimal d = new BigDecimal("0.0065");
I run the print statements below for this double variable:
double test = 58.15;
When I do System.out.println(test); and System.out.println(new Double(test).toString());, each prints 58.15.
When I do System.out.println(new BigDecimal(test)); I get the value below:
58.14999999999999857891452847979962825775146484375
I understand that the double variable test is internally stored as 58.1499999.... But when I run the two System.out.println calls below, I get 58.15, not 58.1499999....
System.out.println(test);
System.out.println(new Double(test).toString());
Both print 58.15.
Are the above System.out.println statements doing some rounding of the value 58.1499999... and printing it as 58.15?
System.out.println(new BigDecimal("58.15"));
To construct a BigDecimal from a hard-coded constant, you must always use one of the constants in the class (ZERO, ONE, or TEN) or one of the String constructors. The reason is that once you put the value in a double, you've already lost precision that can never be regained.
EDIT: polygenelubricants is right. Specifically, you're using Double.toString or equivalent. To quote from there:
How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double. That is, suppose that x is the exact mathematical value represented by the decimal representation produced by this method for a finite nonzero argument d. Then d must be the double value nearest to x; or if two double values are equally close to x, then d must be one of them and the least significant bit of the significand of d must be 0.
Yes, println (or more precisely, Double.toString) rounds. For proof, System.out.println(.1D); prints 0.1, which cannot be represented exactly in binary.
Also, when using BigDecimal, don't use the double constructor, because that would attempt to precisely represent an imprecise value. Use the String constructor instead.
out.println and Double.toString() use the format specified in Double.toString(double).
BigDecimal uses more precision by default, as described in the javadoc, and when you call toString() on a BigDecimal built from the double, it outputs every digit of the exact value stored in the primitive, since .15 does not have an exact binary representation.
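Putting the three views of the same value side by side (the printed lines follow from the rules described above):

import java.math.BigDecimal;

double test = 58.15;
System.out.println(test);                    // 58.15: shortest string that round-trips
System.out.println(new BigDecimal(test));    // 58.14999999999999857891452847979962825775146484375
System.out.println(new BigDecimal("58.15")); // 58.15: the exact decimal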