I have found this great solution for rounding:
static Double round(Double d, int precise) {
    BigDecimal bigDecimal = new BigDecimal(d);
    bigDecimal = bigDecimal.setScale(precise, RoundingMode.HALF_UP);
    return bigDecimal.doubleValue();
}
However, the results are confusing:
System.out.println(round(2.655d,2)); // -> 2.65
System.out.println(round(1.655d,2)); // -> 1.66
Why is it giving this output? I'm using jre 1.7.0_45.
You have to replace
BigDecimal bigDecimal = new BigDecimal(d);
with
BigDecimal bigDecimal = BigDecimal.valueOf(d);
and you will get the expected results:
2.66
1.66
Explanation from the Java API:
BigDecimal.valueOf(double val) - uses the double's canonical string representation, as provided by the Double.toString() method. This is the preferred way to convert a double (or float) into a BigDecimal.
new BigDecimal(double val) - uses the exact decimal representation of the double's binary floating-point value, and thus the results of this constructor can be somewhat unpredictable.
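For instance, a minimal sketch of the difference using the value from the question:
double d = 2.655d;
System.out.println(new BigDecimal(d));     // 2.654999999999999804600747665972448885440826416015625
System.out.println(BigDecimal.valueOf(d)); // 2.655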
You may try to change your program like this:
static Double round(Double d, int precise) {
    BigDecimal bigDecimal = BigDecimal.valueOf(d);
    bigDecimal = bigDecimal.setScale(precise, RoundingMode.HALF_UP);
    return bigDecimal.doubleValue();
}
Sample Ideone
Output:
Rounded: 2.66
Rounded: 1.66
The reason you get the expected result with BigDecimal.valueOf but not with new BigDecimal, in the words of Joachim Sauer:
BigDecimal.valueOf(double) will use the canonical String representation of the double value passed in to instantiate the BigDecimal object. In other words: The value of the BigDecimal object will be what you see when you do System.out.println(d).
If you use new BigDecimal(d) however, then the BigDecimal will try to represent the double value as accurately as possible. This will usually result in a lot more digits being stored than you want.
Hence the confusing results you are seeing in your program.
From the Java Doc:
BigDecimal.valueOf(double val) - Translates a double into a BigDecimal, using the double's canonical string representation
provided by the Double.toString(double) method.
new BigDecimal(double val) -
Translates a double into a BigDecimal which is the exact decimal
representation of the double's binary floating-point value. The scale
of the returned BigDecimal is the smallest value such that (10^scale ×
val) is an integer. Notes:
The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a
BigDecimal which is exactly equal to 0.1 (an unscaled value of 1,
with a scale of 1), but it is actually equal to
0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that
matter, as a binary fraction of any finite length). Thus, the value
that is being passed in to the constructor is not exactly equal to
0.1, appearances notwithstanding.
The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal
which is exactly equal to 0.1, as one would expect. Therefore, it
is generally recommended that the String constructor be used in
preference to this one.
When a double must be used as a source for a BigDecimal, note that this constructor provides an exact conversion; it does not give the
same result as converting the double to a String using the
Double.toString(double) method and then using the BigDecimal(String)
constructor. To get that result, use the static valueOf(double)
method.
This test case ends up pretty self-explanatory:
public static void main(String[] args) throws java.lang.Exception {
    System.out.println("Rounded: " + round(2.655d, 2)); // -> 2.65
    System.out.println("Rounded: " + round(1.655d, 2)); // -> 1.66
}

public static Double round(Double d, int precise) {
    BigDecimal bigDecimal = new BigDecimal(d);
    System.out.println("Before round: " + bigDecimal.toPlainString());
    bigDecimal = bigDecimal.setScale(precise, RoundingMode.HALF_UP);
    System.out.println("After round: " + bigDecimal.toPlainString());
    return bigDecimal.doubleValue();
}
Output:
Before round: 2.654999999999999804600747665972448885440826416015625
After round: 2.65
Rounded: 2.65
Before round: 1.6550000000000000266453525910037569701671600341796875
After round: 1.66
Rounded: 1.66
A dirty hack to fix it would be to round in two steps:
static Double round(Double d, int precise) {
    BigDecimal bigDecimal = new BigDecimal(d);
    System.out.println("Before round: " + bigDecimal.toPlainString());
    bigDecimal = bigDecimal.setScale(15, RoundingMode.HALF_UP);
    System.out.println("Hack round: " + bigDecimal.toPlainString());
    bigDecimal = bigDecimal.setScale(precise, RoundingMode.HALF_UP);
    System.out.println("After round: " + bigDecimal.toPlainString());
    return bigDecimal.doubleValue();
}
Here, 15 is a bit under the maximum number of digits a double can represent in base 10. Output:
Before round: 2.654999999999999804600747665972448885440826416015625
Hack round: 2.655000000000000
After round: 2.66
Rounded: 2.66
Before round: 1.6550000000000000266453525910037569701671600341796875
Hack round: 1.655000000000000
After round: 1.66
Rounded: 1.66
As stated in the API docs:
The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a
BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with
a scale of 1), but it is actually equal to
0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that
matter, as a binary fraction of any finite length). Thus, the value
that is being passed in to the constructor is not exactly equal to
0.1, appearances notwithstanding.
The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal which
is exactly equal to 0.1, as one would expect. Therefore, it is
generally recommended that the String constructor be used in
preference to this one.
When a double must be used as a source for a BigDecimal, note that this constructor provides an exact conversion; it does not give the
same result as converting the double to a String using the
Double.toString(double) method and then using the BigDecimal(String)
constructor. To get that result, use the static valueOf(double)
method.
It's because the double value cannot be represented exactly. So you have to use BigDecimal bigDecimal = BigDecimal.valueOf(d); instead of BigDecimal bigDecimal = new BigDecimal(d);
Rounding a double (or Double) in itself does not make much sense, as the double datatype cannot hold most rounded decimal values exactly.
What you are doing is:
Take a Double d as input and an int precise, the number of digits behind the separator.
Create a BigDecimal from that d.
Round the BigDecimal correctly.
Return the double value of that BigDecimal, which may no longer hold the rounded value exactly.
You can go two ways:
You can return a BigDecimal that represents the rounded double, and later decide what you do with it.
You can return a String representing the rounded BigDecimal.
Either of those ways will make sense.
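For instance, a minimal sketch of both variants (the method names are just illustrative; assumes java.math.BigDecimal and java.math.RoundingMode are imported):
static BigDecimal roundToBigDecimal(double d, int precise) {
    return BigDecimal.valueOf(d).setScale(precise, RoundingMode.HALF_UP);
}

static String roundToString(double d, int precise) {
    return BigDecimal.valueOf(d).setScale(precise, RoundingMode.HALF_UP).toPlainString();
}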
Most decimal numbers can't be represented exactly in double.
So 2.655 ends up being this:
2.65499999999999980460074766597
whereas 1.655 ends up being this:
1.655000000000000026645352591
Related
In Java, I would like 0.101d, 0.109999999d, and 0.11000d to all be functionally equivalent. I have attempted to use BigDecimal and a MathContext with 2 digits of precision and RoundingMode.CEILING to do this, but my unit test shows that 0.11000 rounds to 0.12. I want 0.110000d to round to 0.11.
private static MathContext targetMathContext = new MathContext(2, RoundingMode.CEILING);

public static double roundedTarget(double d) {
    BigDecimal bd = new BigDecimal(d, targetMathContext);
    return bd.doubleValue();
}
JUnit:
double c = 0.445d;
double s = 0.5d;
double p = (s-c)/s; // 0.1099999..... in dfp
double rgpp = roundedTarget(p); // 0.11
double rgppp = roundedTarget(rgpp); // 0.12
// operation is not idempotent as f(x) != f(f(x)) :(
Assert.assertEquals("These values should be equal",rgpp,rgppp);
Solution:
public static double roundedTarget(double d) {
    return BigDecimal.valueOf(d)
            .setScale(2, BigDecimal.ROUND_CEILING)
            .doubleValue();
}
I'm reluctant to call this operation idempotent, since the input to your first application of the function differs from the input to the next (the result of the first call changes the value of x).
In either event, the main issue is that you're using doubles in one spot (introducing floating-point inaccuracies) and BigDecimal in another (which, if used correctly, is far less affected by those inaccuracies).
The easiest thing to do would be to set a scale of 2 decimal places on your doubles, and then round them however you like. As an example, all of these values satisfy the conditions you mention in your comments.
BigDecimal firstDecimal = BigDecimal.valueOf(0.101).setScale(2, RoundingMode.CEILING);
BigDecimal secondDecimal = BigDecimal.valueOf(0.10999).setScale(2, RoundingMode.CEILING);
BigDecimal thirdDecimal = BigDecimal.valueOf(0.110000).setScale(2, RoundingMode.CEILING);
BigDecimal fourthDecimal = BigDecimal.valueOf(0.1101).setScale(2, RoundingMode.CEILING);
System.out.println(firstDecimal); // 0.11
System.out.println(secondDecimal); // 0.11
System.out.println(thirdDecimal); // 0.11
System.out.println(fourthDecimal); // 0.12
The main takeaway here is: if you're going to use BigDecimal, be consistent with it throughout. There's no real reason to interlace or interweave working with raw doubles and BigDecimal, as it will only lead to headaches like this.
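As a quick sanity check of the idempotence point (a sketch only; roundedTarget here refers to the valueOf-based version shown in the solution above):
double p = (0.5 - 0.445) / 0.5;      // ≈ 0.1099999... as a double, per the question
double once = roundedTarget(p);      // 0.11
double twice = roundedTarget(once);  // still 0.11, so f(f(x)) == f(x)
System.out.println(once == twice);   // true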
import java.math.BigDecimal;

public class TestNumber {
    public static void main(String[] args) {
        BigDecimal bd = new BigDecimal(0);
        bd = bd.add(new BigDecimal(19.89));
        System.out.println(bd.doubleValue() + " - \t " + bd);
    }
}
I have multiple BigDecimal fields and arithmetic operations/comparisons; the problem is with the arithmetic results and decimal values.
For the above example the output is as follows:
19.89 - 19.8900000000000005684341886080801486968994140625
I expect:
19.89
The unexpected result leads to other undesirable results when performing operations on the BigDecimal fields.
The precision is already lost by the time you call the BigDecimal constructor that accepts a double: the value you're seeing is the exact IEEE 754 value stored in the double 19.89. You can use
bd = bd.add(new BigDecimal("19.89"));
The double value displayed by println is not the same as the actual value stored in that double variable.
In any range there are an infinite number of real numbers but only a finite number of representable floating point values. When you define a floating point value, that value may not map to a representable floating point value, in which case you get the representable value that is closest to what you want. (Also keep in mind the representation is in binary, and a lot of numbers that are familiar to us become repeating decimals in binary that have to get truncated.) Here of course it's off by only 0.0000000000000005684341886080801486968994140625.
The lines
double d = 19.89d;
System.out.println(d);
will show you a cleaned-up approximation of what's in d. Java is hiding the messy trailing decimals from you.
On the other hand, these lines
double d = 19.89d;
BigDecimal b = new BigDecimal(d);
System.out.println(b);
result in the BigDecimal getting initialized with the entire contents of d, which the BigDecimal reproduces faithfully out to the last trailing digit.
When println is passed the BigDecimal, the BigDecimal's toString method returns a string showing the digits it stored, and println writes that string to the console.
Using
BigDecimal b = new BigDecimal("19.89");
will result in the actual decimal value 19.89 getting stored in the BigDecimal, because no floating point evaluation is involved.
If you have a double and you need to make a BigDecimal out of it, without adding all the extra precision, try something like
double d = 19.89; // or something else
bd = new BigDecimal(d, new MathContext(15));
This tells it to keep only 15 digits of precision (which is about how many digits of precision a double supports). This creates a BigDecimal whose toString() returns
"19.8900000000000"
which isn't quite perfect, since all the trailing zeroes will show up, but it doesn't give you the extra non-zero digits you're getting.
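If the trailing zeroes bother you, one option is to follow up with stripTrailingZeros() (a minimal sketch):
double d = 19.89;
BigDecimal bd = new BigDecimal(d, new MathContext(15)).stripTrailingZeros();
System.out.println(bd.toPlainString()); // 19.89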
I tried the following code, but I get a different result when subtracting using BigDecimal.
double d1 = 0.1;
double d2 = 0.1;
System.out.println("double result: "+ (d2-d1));
float f1 = 0.1F;
float f2 = 0.1F;
System.out.println("float result: "+ (f2-f1));
BigDecimal b1 = new BigDecimal(0.01);
BigDecimal b2 = new BigDecimal(0.01);
b1 = b1.subtract(b2);
System.out.println("BigDecimal result: "+ b1);
Result:
double result: 0.0
float result: 0.0
BigDecimal result: 0E-59
I am still working on this. Can anyone please clarify?
[There are a lot of answers here telling you that binary floating-point can't exactly represent 0.01, and implying that the result you're seeing is somehow inexact. Whilst the first part of that is true, it's not really the core issue here.]
The answer is that "0E-59" is equal to 0. Recall that a BigDecimal is the combination of an unscaled value and a decimal scale factor:
System.out.println(b1.unscaledValue());
System.out.println(b1.scale());
displays:
0
59
The unscaled value is 0, as expected. The "strange" scale value is simply an artifact of the decimal expansion of the non-exact floating-point representation of 0.01:
System.out.println(b2.unscaledValue());
System.out.println(b2.scale());
displays:
1000000000000000020816681711721685132943093776702880859375
59
The next obvious question is, why doesn't BigDecimal.toString just display b1 as "0", for convenience? The answer is that the string representation needs to be unambiguous. From the Javadoc for toString:
There is a one-to-one mapping between the distinguishable BigDecimal values and the result of this conversion. That is, every distinguishable BigDecimal value (unscaled value and scale) has a unique string representation as a result of using toString. If that string representation is converted back to a BigDecimal using the BigDecimal(String) constructor, then the original value will be recovered.
If it just displayed "0", then you wouldn't be able to get back to this exact BigDecimal object.
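You can see the round-trip guarantee for yourself (a minimal sketch):
BigDecimal b1 = new BigDecimal(0.01).subtract(new BigDecimal(0.01));
BigDecimal recovered = new BigDecimal(b1.toString()); // parse "0E-59" back
System.out.println(recovered.scale());                // 59 -- the scale survives
System.out.println(b1.equals(recovered));             // true -- same unscaled value and scale
System.out.println(b1.compareTo(BigDecimal.ZERO));    // 0  -- still numerically zero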
Use the String constructor: b1 = new BigDecimal("0.01");
See "Java loss of precision" (slide 23):
http://strangeloop2010.com/system/talks/presentations/000/014/450/BlochLee-JavaPuzzlers.pdf
Interesting: the values appear to be equal and the subtraction does give you zero; it appears to just be an issue with how the result is printed. The following code:
import java.math.BigDecimal;

public class Test {
    public static void main(String[] args) {
        BigDecimal b1 = new BigDecimal(0.01);
        BigDecimal b2 = new BigDecimal(0.01);
        BigDecimal b3 = new BigDecimal(0);
        if (b1.compareTo(b2) == 0) System.out.println("equal 1");
        b1 = b1.subtract(b2);
        if (b1.compareTo(b3) == 0) System.out.println("equal 2");
        System.out.println("BigDecimal result: " + b1);
    }
}
outputs both equal messages, indicating that the values are the same and that you get zero when you subtract.
You could try to raise this as a bug and see what Oracle comes back with. It's likely they'll just state that 0e-59 is still zero, so not a bug, or that the rather complex behaviour being described on the BigDecimal documentation page is working as intended. Specifically, the point that states:
There is a one-to-one mapping between the distinguishable BigDecimal values and the result of this conversion. That is, every distinguishable BigDecimal value (unscaled value and scale) has a unique string representation as a result of using toString. If that string representation is converted back to a BigDecimal using the BigDecimal(String) constructor, then the original value will be recovered.
The fact that the original value needs to be recoverable means that toString() needs to generate a unique string for each scale, which is why you're getting 0E-59. Otherwise, converting the string back to a BigDecimal might give you a different value (unscaled-value/scale tuple).
If you really want zero to show up as "0" regardless of the scale, you can use something like:
if (b1.compareTo(BigDecimal.ZERO) == 0) b1 = new BigDecimal(0);
You have to get the return value:
BigDecimal b3 = b1.subtract(b2);
System.out.println("BigDecimal result: "+ b3);
BigDecimal(double val)
1.The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a
BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with
a scale of 1), but it is actually equal to
0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that
matter, as a binary fraction of any finite length). Thus, the value
that is being passed in to the constructor is not exactly equal to
0.1, appearances notwithstanding.
2.The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal which is exactly
equal to 0.1, as one would expect. Therefore, it is generally
recommended that the String constructor be used in preference to this
one.
3.When a double must be used as a source for a BigDecimal, note that this constructor provides an exact conversion; it does not give the
same result as converting the double to a String using the
Double.toString(double) method and then using the BigDecimal(String)
constructor. To get that result, use the static valueOf(double)
method.
So the real question is: with the following code,
BigDecimal b1 = new BigDecimal(0.01);
BigDecimal b2 = new BigDecimal(0.01);
b1 = b1.subtract(b2);
why does b1.toString() evaluate to "0E-59" and not to something like "0.0", "0E0" or just "0"?
The reason is that toString() prints the canonical format of the BigDecimal. See BigDecimal.toString() for more information.
In the end, 0E-59 is 0.0: it is 0 × 10^-59, which mathematically evaluates to 0. So the unexpected result is a matter of the internal representation of the BigDecimal.
To get the float or double value, use
b1.floatValue()
or
b1.doubleValue()
Both evaluate to 0.0.
It's a known issue; the BigDecimal(double val) API warns that "The results of this constructor can be somewhat unpredictable", though it looks really weird in this case. The actual reason is that new BigDecimal(0.01) produces a BigDecimal with the approximate value
0.01000000000000000020816681711721685132943093776702880859375
which carries many digits of scale, and so the result of the subtraction carries them too.
Anyway, we can solve the "problem" this way:
BigDecimal b1 = new BigDecimal("0.01");
BigDecimal b2 = new BigDecimal("0.01");
or we can use the constructor that takes a MathContext to limit the precision:
BigDecimal b1 = new BigDecimal(0.01, new MathContext(1));
BigDecimal b2 = new BigDecimal(0.01, new MathContext(1));
Use it like this:
BigDecimal b1 = BigDecimal.valueOf(0.01);
BigDecimal b2 = BigDecimal.valueOf(0.01);
b1 = b1.subtract(b2);
System.out.println("BigDecimal result: "+ b1);
My coworker did this experiment:
public class DoubleDemo {
    public static void main(String[] args) {
        double a = 1.435;
        double b = 1.43;
        double c = a - b;
        System.out.println(c);
    }
}
For this first-grade operation I expected this output:
0.005
But unexpectedly the output was:
0.0050000000000001155
Why does double fail in such a simple operation? And if double is not the right datatype for this work, what should I use?
double is internally stored as a fraction in binary -- like 1/4 + 1/8 + 1/16 + ...
The value 0.005 -- or the value 1.435 -- cannot be stored as an exact fraction in binary, so double cannot store the exact value 0.005, and the subtracted value isn't quite exact.
If you care about precise decimal arithmetic, use BigDecimal.
You may also find this article useful reading.
double and float are not exact representations of real numbers.
There are an infinite number of real numbers in any range, but only a finite number of bits to represent them! For this reason, rounding errors are to be expected with doubles and floats.
The number you get is the closest number to the true value that can be represented in double-precision floating point.
For more details, you might want to read this article [warning: might be high-level].
You might want to use BigDecimal to represent decimal numbers exactly [but you will again encounter rounding errors when you try to represent 1/3].
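To see the 1/3 caveat concretely (a minimal sketch): dividing BigDecimals whose quotient has a non-terminating decimal expansion throws unless you supply a scale and rounding mode (or a MathContext):
BigDecimal one = BigDecimal.ONE;
BigDecimal three = BigDecimal.valueOf(3);
// one.divide(three); // would throw ArithmeticException: non-terminating decimal expansion
System.out.println(one.divide(three, 10, RoundingMode.HALF_UP)); // 0.3333333333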
Yes, it works when done this way, using BigDecimal operations:
private static void subtractUsingBigDecimalOperation(double a, double b) {
    BigDecimal c = BigDecimal.valueOf(a).subtract(BigDecimal.valueOf(b));
    System.out.println(c);
}
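For the values from the question, subtractUsingBigDecimalOperation(1.435, 1.43) should print 0.005, since BigDecimal.valueOf captures the shortest decimal string of each double ("1.435" and "1.43") before the subtraction.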
double and float arithmetic is never going to be exactly correct in decimal terms because of the rounding that occurs "under the hood".
Essentially, decimal values can require an unbounded number of digits, but in memory doubles and floats must be represented by a fixed number of bits. So when you do decimal arithmetic, a rounding step occurs, and the result is often off by a very small amount once you take all of the digits into account.
As suggested earlier, if you need exact decimal values, use BigDecimal, which stores its values differently. Here's the API:
import java.math.BigDecimal;

public class BigDecimalExample {
    public static void main(String[] args) {
        // floating-point calculation
        double amount1 = 2.15;
        double amount2 = 1.10;
        System.out.println("difference between 2.15 and 1.10 using double is: " + (amount1 - amount2));
        // use BigDecimal for financial calculations
        BigDecimal amount3 = new BigDecimal("2.15");
        BigDecimal amount4 = new BigDecimal("1.10");
        System.out.println("difference between 2.15 and 1.10 using BigDecimal is: " + amount3.subtract(amount4));
    }
}
Output:
difference between 2.15 and 1.10 using double is: 1.0499999999999998
difference between 2.15 and 1.10 using BigDecimal is: 1.05
// A quick example that gives b the same number of decimal places as a, using BigDecimal
private double getDesiredPrecision(Double a, Double b) {
    String[] splitter = a.toString().split("\\.");
    // splitter[0] holds the digits before the decimal point (not needed here)
    int numDecimals = splitter[1].length(); // digit count after the decimal point
    BigDecimal bBigDecimal = new BigDecimal(b);
    bBigDecimal = bBigDecimal.setScale(numDecimals, BigDecimal.ROUND_HALF_EVEN);
    return bBigDecimal.doubleValue();
}
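As a hypothetical usage example: getDesiredPrecision(2.65, 1.23456) counts two digits after the decimal point in 2.65 and rounds the second argument to that scale, returning 1.23.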
How is it that Java's BigDecimal can be this painful?
Double d = 13.3D;
BigDecimal bd1 = new BigDecimal(d);
BigDecimal bd2 = new BigDecimal(String.valueOf(d));
System.out.println("RESULT 1: "+bd1.toString());
System.out.println("RESULT 2: "+bd2.toString());
RESULT 1: 13.300000000000000710542735760100185871124267578125
RESULT 2: 13.3
Is there any situation where Result 1 would be desired? I know that Java 1.5 changed the toString() method but was this the intended consequence?
Also I realise that BigDecimal has doubleValue() etc, but the library that I am working with helpfully uses a toString() and I can't change that :-(
Cheers.
Well, the API does address this apparent inconsistency in the constructor BigDecimal(double val):
The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.
The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal which is exactly equal to 0.1, as one would expect. Therefore, it is generally recommended that the String constructor be used in preference to this one.
When a double must be used as a source for a BigDecimal, note that this constructor provides an exact conversion; it does not give the same result as converting the double to a String using the Double.toString(double) method and then using the BigDecimal(String) constructor. To get that result, use the static valueOf(double) method.
Moral of the story: The pain seems self-inflicted, just use new BigDecimal(String val) or BigDecimal.valueOf(double val) instead =)
Your problem has nothing to do with BigDecimal, and everything to do with Double, which cannot represent 13.3 exactly, since it uses binary fractions internally.
So your error is introduced in the very first line. The first BigDecimal simply preserves it, while String.valueOf() does some fishy rounding that causes the second one to have the desired content, pretty much through luck.
You might want to inform yourself about how floating-point values are implemented (IEEE 754-1985). And suddenly, everything will become crystal-clear.
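If you want to see the representation for yourself, here is a minimal sketch (Double.doubleToLongBits exposes the raw IEEE 754 bit pattern):
double d = 13.3;
System.out.println(Long.toBinaryString(Double.doubleToLongBits(d))); // sign/exponent/mantissa bits
System.out.println(new BigDecimal(d)); // 13.300000000000000710542735760100185871124267578125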
This isn't the fault of BigDecimal - it's the fault of double. BigDecimal is accurately representing the exact value of d. String.valueOf is only showing the result to a few decimal places.
Most fractions cannot be stored accurately in binary floating-point types (e.g. double, float).
Double d = 13.3;
BigDecimal bdNotOk = new BigDecimal(d);
System.out.println("not ok: " + bdNotOk.toString());
BigDecimal bdNotOk2 = new BigDecimal(13.3);
System.out.println("not ok2: " + bdNotOk2.toString());
double x = 13.3;
BigDecimal ok = BigDecimal.valueOf(x);
System.out.println("ok: " + ok.toString());
double y = 13.3;
// pretty lame, constructor's behavior is different from valueOf static method
BigDecimal bdNotOk3 = new BigDecimal(y);
System.out.println("not ok3: " + bdNotOk3.toString());
BigDecimal ok2 = new BigDecimal("13.3");
System.out.println("ok2: " + ok2.toString());
Double e = 0.0;
for(int i = 0; i < 10; ++i) e = e + 0.1; // some fractions cannot be accurately represented with binary
System.out.println("not ok4: " + e.toString()); // should be 1
BigDecimal notOk5 = BigDecimal.valueOf(e);
System.out.println("not ok5: " + notOk5.toString()); // should be 1
/*
* here are some fractions that can be represented exactly in binary:
* 0.5 = 0.1 = 1 / 2
* 0.25 = 0.01 = 1 / 4
* 0.75 = 0.11 = 3 / 4
* 0.125 = 0.001 = 1 / 8
*/
output:
not ok: 13.300000000000000710542735760100185871124267578125
not ok2: 13.300000000000000710542735760100185871124267578125
ok: 13.3
not ok3: 13.300000000000000710542735760100185871124267578125
ok2: 13.3
not ok4: 0.9999999999999999
not ok5: 0.9999999999999999
Just use BigDecimal.valueOf(d) or new BigDecimal(s).