The accuracy of a double in general programming and Java - java

I understand that, due to the nature of float/double, one should not use them for calculations where precision matters. However, I'm a little confused about their limitations because of mixed answers on similar questions: will floats and doubles always be inaccurate regardless of the number of significant digits, or are they only inaccurate beyond the 16th digit?
I've run a few examples in Java:
System.out.println(Double.parseDouble("999999.9999999999");
// this outputs correctly w/ 16 digits
System.out.println(Double.parseDouble("9.99999999999999");
// This also outputs correctly w/ 15 digits
System.out.println(Double.parseDouble("9.999999999999999");
// But this doesn't output correctly w/ 16 digits. Outputs 9.999999999999998
I can't find the link to another answer that stated that values like 1.98 and 2.02 would round down to 2.0 and therefore create inaccuracies, but testing shows that those values are printed correctly. So my first question is whether float/double values are always inaccurate, or whether there is a limit below which you can be assured of precision.
My second question is in regards to using BigDecimal. I know that I should be using BigDecimal for precision-critical calculations, and therefore I should be using BigDecimal's methods for arithmetic and comparison. However, BigDecimal also includes a doubleValue() method which converts the BigDecimal to a double. Would it be safe for me to do a comparison between double values that I know for sure have fewer than 16 digits? There will be no arithmetic done on them at all, so the underlying values should not have changed.
For example, is it safe for me to do the following?
BigDecimal myDecimal = new BigDecimal("123.456");
BigDecimal myDecimal2 = new BigDecimal("234.567");
if (myDecimal.doubleValue() < myDecimal2.doubleValue()) System.out.println("myDecimal is smaller than myDecimal2");
Edit: After reading some of the responses to my own answer I've realized my understanding was incorrect and have deleted it. Here are some snippets from it that might help in the future.
"A double cannot hold 0.1 precisely. The closest representable value to 0.1 is 0.1000000000000000055511151231257827021181583404541015625. Java Double.toString only prints enough digits to uniquely identify the double, not the exact value." - Patricia Shanahan
Sources:
https://stackoverflow.com/a/5749978 - States that a double can hold up to 15 digits

I suggest you read this page:
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
Once you've read and understood it, and perhaps converted several examples to their binary representations in the 64-bit floating point format, you'll have a much better idea of how many significant digits a double can hold.
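If you'd rather experiment than convert by hand, Double.doubleToLongBits exposes the 64-bit pattern that page describes; a minimal sketch (not from the question, just an illustration of the layout):
long bits = Double.doubleToLongBits(0.1);
long sign     = bits >>> 63;             // 1 sign bit
long exponent = (bits >>> 52) & 0x7FFL;  // 11 exponent bits, biased by 1023
long fraction = bits & 0xFFFFFFFFFFFFFL; // 52 fraction bits of the significand
System.out.println(sign + " " + (exponent - 1023) + " " + Long.toBinaryString(fraction));
// for 0.1 this prints sign 0, unbiased exponent -4, and a repeating 1001... fraction ending in ...1010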

As a (perhaps trivial) side note, a nice and reliable way to store a value to a known precision is simply to multiply it by the relevant factor and store it as some integral type, which is completely precise.
For example:
double costInPounds = <something>; //e.g. 3.587
int costInPence = (int)(costInPounds * 100 + 0.5); //359
Plainly some precision can be lost, but if a required/desired precision is known, this can save a lot of bother with floating point values, and once this has been done, no precision can be lost by further manipulations.
The + 0.5 is there so that rounding works as expected: the (int) cast truncates toward zero (the 'floor' for positive values), so adding 0.5 first makes positive values round to the nearest whole unit; see the sketch below for negative amounts.
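If negative amounts can occur, a hedged alternative is Math.round, since the cast truncates toward zero rather than flooring; a minimal sketch (the refund value is just an illustration):
double costInPounds = -3.587;                      // e.g. a refund
int viaCast   = (int) (costInPounds * 100 + 0.5);  // -358: the +0.5 trick misrounds negative values
long viaRound = Math.round(costInPounds * 100);    // -359: nearest penny (ties round toward positive infinity)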

Related

Comma double numbers multiplication

Why does this Java code return 61.004999999999995 instead of 61.005? I don't get it.
System.out.println(105*0.581);
It occurs due to the nature of floating-point numbers. Computers cannot store most decimal fractions exactly, so we have to work with approximations.
Instead of comparing 61.005 == 61.004999999999995, you should check that the difference is small enough: |61.005 - 61.004999999999995| <= 0.001.
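In Java that check is usually written with Math.abs so the sign of the difference doesn't matter; a minimal sketch (the 0.001 tolerance is just an illustrative choice):
double product = 105 * 0.581;                    // 61.004999999999995
if (Math.abs(product - 61.005) <= 0.001) {
    System.out.println("close enough to 61.005");
}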
You have run into a floating-point precision problem. In computing there is a simple (but annoying) fact: you cannot represent all real numbers. That's also true for Java.
If you want to go deeper, you can study how floating-point numbers are stored in memory. The key words are: sign bit, exponent, and mantissa. Be aware that the precision also depends on the width of the type (32-bit float or 64-bit double).
http://en.wikipedia.org/wiki/Single-precision_floating-point_format
In Java, for more precision you can use BigDecimal:
System.out.println(new BigDecimal(105).multiply(new BigDecimal(0.581)));
You can also round it with round(MathContext mc) which in this case will give you 61.005 if you set the precision to 5.
System.out.println(new BigDecimal(105).multiply(new BigDecimal(0.581)).round(new MathContext(5)));
https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html
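One caveat worth adding: new BigDecimal(0.581) starts from the double, which is already slightly off, so the product above still carries that error until it is rounded. Passing the values as Strings keeps the arithmetic exact; a minimal sketch:
import java.math.BigDecimal;
System.out.println(new BigDecimal("105").multiply(new BigDecimal("0.581"))); // 61.005 exactly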
If it's just a question of how to display it and the precision doesn't matter, you can use DecimalFormat.
System.out.println(new DecimalFormat("###.###").format(105*0.581));
https://docs.oracle.com/javase/8/docs/api/java/text/DecimalFormat.html

Tan function in java [duplicate]

I came to know about the accuracy issues when I executed the following program:
public static void main(String args[]) {
    double table[][] = new double[5][4];
    int i, j;
    for (i = 0, j = 0; i <= 90; i += 15) {
        if (i == 15 || i == 75)
            continue;
        table[j][0] = i;
        double theta = StrictMath.toRadians((double) i);
        table[j][1] = StrictMath.sin(theta);
        table[j][2] = StrictMath.cos(theta);
        table[j++][3] = StrictMath.tan(theta);
    }
    System.out.println("angle#sin#cos#tan");
    for (i = 0; i < table.length; i++) {
        for (j = 0; j < table[i].length; j++)
            System.out.print(table[i][j] + "\t");
        System.out.println();
    }
}
And the output is:
angle#sin#cos#tan
0.0 0.0 1.0 0.0
30.0 0.49999999999999994 0.8660254037844387 0.5773502691896257
45.0 0.7071067811865475 0.7071067811865476 0.9999999999999999
60.0 0.8660254037844386 0.5000000000000001 1.7320508075688767
90.0 1.0 6.123233995736766E-17 1.633123935319537E16
(Please forgive the unorganised output).
I've noted several things:
sin 30 i.e. 0.5 is stored as 0.49999999999999994.
tan 45 i.e. 1.0 is stored as 0.9999999999999999.
tan 90 i.e. infinity or undefined is stored as 1.633123935319537E16 (which is a very big number).
Naturally, I was quite confused to see the output (even after deciphering the output).
So I've read this post, and the best answer tells me:
These accuracy problems are due to the internal representation of floating point numbers and there's not much you can do to avoid it.
By the way, printing these values at run-time often still leads to the correct results, at least using modern C++ compilers. For most operations, this isn't much of an issue. - Konrad Rudolph
So, my question is:
Is there any way to prevent such inaccurate results (in Java)?
Should I round off the results? In that case, how would I store infinity, i.e. Double.POSITIVE_INFINITY?
You have to take a bit of a zen* approach to floating-point numbers: rather than eliminating the error, learn to live with it.
In practice this usually means doing things like:
when displaying the number, use String.format to specify the amount of precision to display (it'll do the appropriate rounding for you)
when comparing against an expected value, don't look for equality (==). Instead, look for a small-enough delta: Math.abs(myValue - expectedValue) <= someSmallError
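A minimal sketch of both points, using the tan(45°) value from the question (the 1e-9 tolerance is just an illustrative choice):
double value = StrictMath.tan(StrictMath.toRadians(45.0));  // 0.9999999999999999
// display: format to a chosen number of digits instead of printing the raw double
System.out.println(String.format("%.6f", value));           // 1.000000
// comparison: accept a small delta instead of testing ==
if (Math.abs(value - 1.0) <= 1e-9) {
    System.out.println("treat as equal to 1.0");
}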
EDIT: For infinity, the same principle applies, but with a tweak: you have to pick some number to be "large enough" to treat as infinity. This is again because you have to learn to live with, rather than solve, imprecise values. In the case of something like tan(90 degrees), a double can't store π/2 with infinite precision, so your input is something very close to, but not exactly, 90 degrees -- and thus, the result is something very big, but not quite infinity. You may ask "why don't they just return Double.POSITIVE_INFINITY when you pass in the closest double to π/2," but that could lead to ambiguity: what if you really wanted the tan of that number, and not 90 degrees? Or, what if (due to previous floating-point error) you had something that was slightly farther from π/2 than the closest possible value, but for your needs it's still π/2? Rather than make arbitrary decisions for you, the JDK treats your close-to-but-not-exactly π/2 number at face value, and thus gives you a big-but-not-infinity result.
For some operations, especially those relating to money, you can use BigDecimal to eliminate floating-point errors: you can really represent values like 0.1 (instead of a value really really close to 0.1, which is the best a float or double can do). But this is much slower, and doesn't help you for things like sin/cos (at least with the built-in libraries).
* this probably isn't actually zen, but in the colloquial sense
You have to use BigDecimal instead of double. Unfortunately, StrictMath doesn't support BigDecimal, so you will have to use another library, or your own implementation of sin/cos/tan.
This is inherent in using floating-point numbers, in any language. Actually, it's inherent in using any representation with a fixed maximum precision.
There are several solutions. One is to use an extended-precision math package -- BigDecimal is often suggested for Java. BigDecimal can handle many more digits of precision, and also -- because it's a decimal representation rather than a binary one -- tends to round off in ways that are less surprising to humans who are used to working in base 10. (That doesn't necessarily make it more correct, please note. Binary can't represent 1/3 exactly, but neither can decimal.)
There are also extended-precision binary floating-point representations. Java directly supports float and double (which are usually also supported by the hardware), but it's possible to write versions which support more digits of accuracy.
Of course any of the extended-precision packages will slow down your computations. So you shouldn't resort to them unless you actually need them.
Another is to use fixed-point arithmetic rather than floating point. For example, the standard solution for most financial calculations is simply to compute in terms of the smallest unit of currency -- pennies, in the US -- as integers, converting to and from the display format (e.g. dollars and cents) only for I/O. That's also the approach used for time in Java -- the internal clock reports an integer number of milliseconds (or nanoseconds, if you use System.nanoTime()), which gives both more than sufficient precision and a more than sufficient range of values for most practical purposes. Again, this means that roundoff tends to happen in a way that matches human expectations... and again, that's less about accuracy than about not surprising the users. And these representations, because they are processed as integers or longs, allow fast computation -- faster than floating point, in fact.
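A minimal sketch of the money case (the item prices are just illustrative):
// prices held as integer cents in a long, so the arithmetic is exact
long[] itemCents = {1999, 499, 1250};          // $19.99, $4.99, $12.50
long totalCents = 0;
for (long cents : itemCents) totalCents += cents;
System.out.printf("total: $%d.%02d%n", totalCents / 100, totalCents % 100);  // total: $37.48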
There are yet other solutions which involve computing in rational numbers, or other variations, in an attempt to compromise between computational cost and precision.
But I also have to ask... Do you really NEED more precision than float is giving you? I know the roundoff is surprising, but in many cases it's perfectly acceptable to just let it happen, possibly rounding off to a less surprising number of fractional digits when you display the results to the user. In many cases, float or double are Just Fine for real-world use. That's why the hardware supports them, and that's why they're in the language.

Wrong Decimal Converting from Double to String in java

I have a String that is formatted correctly to be parsed to a double, and it works fine for most decimals. The issue is that for .33, .67, and possibly others I haven't tested, the decimal becomes something like .6700000000002 or .329999999998. I understand why this happens, but does anyone have a suggestion to fix it?
It's a result of IEEE 754 rounding rules: some numbers cannot be represented precisely in binary floating point. For example, 1/10 is not precisely representable.
You can add more precision (but not infinite) by using BigDecimal.
BigDecimal oneTenth = new BigDecimal("1").divide(new BigDecimal("10"));
System.out.println(oneTenth);
Which outputs 0.1
Some decimal numbers can not be represented accurately with the internal base 2 machine representation.
That's double precision for you. Binary fractions and decimal fractions don't always convert exactly. Unless you are doing something that really needs precision it should be fine; if you are printing the value you should use either DecimalFormat or printf.
Floating-point values are not stored as the decimal digits you write, but as a significand and a power-of-two exponent. You may write 3.1233453456356 as a number, but what is stored is the closest value of the form significand × 2^exponent, so small differences like this can happen.
It shouldn't be a problem unless you're testing for equality. With floating-point tests for equality you'll need to allow a "delta" so that:
if (a == b)
becomes
if (Math.abs(a - b) < 0.000001)
or a similar small delta value. For printing, limit it to two decimal places and the formatter will round it for you.
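For example, with the .67 case from the question, the formatter does the rounding for display (a minimal sketch):
double d = Double.parseDouble("0.67");  // holds the double nearest to 0.67, not exactly 0.67
System.out.printf("%.2f%n", d);         // prints 0.67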

Loss of precision after subtracting double from double [duplicate]

Possible Duplicate:
Retain precision with Doubles in java
Alright so I've got the following chunk of code:
int rotation = e.getWheelRotation();
if (rotation < 0)
    zoom(zoom + rotation * -.05);
else if (zoom - .05 > 0)
    zoom(zoom - rotation * .05);
System.out.println(zoom);
Now, the zoom variable is of type double, initially set to 1. So I would expect the results to be like 1 - .05 = .95; .95 - .05 = .9; .9 - .05 = .85; and so on. This appears not to be the case, though, when I print the result, as you can see below:
0.95
0.8999999999999999
0.8499999999999999
0.7999999999999998
0.7499999999999998
0.6999999999999997
Hopefully someone is able to explain this clearly. I searched the internet and read that it has something to do with limitations of storing floating-point values in binary, but I still don't quite understand. A solution to my problem is not terribly important, but I would like to understand this kind of behavior.
Java uses IEEE-754 floating point numbers. They're not perfectly precise. The famous example is:
System.out.println(0.1d + 0.2d);
...which outputs 0.30000000000000004.
What you're seeing is just a symptom of that imprecision. You can improve the precision by using double rather than float.
If you're dealing with financial calculations, you might prefer BigDecimal to float or double.
float and double have limited precision because their fractional part is represented as a sum of powers of 2, e.g. 1/2 + 1/4 + 1/8 ... If you have a number like 1/10, it has to be approximated.
For this reason, whenever you deal with floating point you must use reasonable rounding or you can see small errors.
e.g.
System.out.printf("%.2f%n", zoom);
To minimise rounding errors, you could count the number of rotations instead and divide that int value by 20.0. You won't see a rounding error in the printed values this way, and it will be faster, with fewer magic numbers.
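A minimal sketch of that idea (the loop just simulates six zoom-out clicks):
int zoomSteps = 20;                         // zoom == zoomSteps / 20.0, so this starts at 1.0
for (int click = 0; click < 6; click++) {
    zoomSteps--;                            // one 0.05 step per wheel click
    System.out.println(zoomSteps / 20.0);   // 0.95, 0.9, 0.85, 0.8, 0.75, 0.7 -- no drift
}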
float and double have precision issues. I would recommend you take a look at the BigDecimal class. That should take care of them.
Since decimal numbers (and integer numbers as well) have an infinite number of possible values, they are impossible to map precisely to a fixed number of bits. Computers circumvent this problem by limiting the range the numbers can assume.
For example, an int in Java can represent nothing larger than Integer.MAX_VALUE, or 2^31 - 1.
For decimal numbers, there is also a problem with the digits after the decimal point, of which there may be infinitely many. This is solved by not allowing all decimal values, but limiting them to a (smartly chosen) set of possibilities based on powers of 2. This happens automatically and is usually nothing to worry about; you can interpret your result of 0.8999... as 0.9. If you do need explicit precision, you will have to resort to other data types, which may have other limitations.

Is there a rounding algorithm that undoes base 2 conversion and infers precision? *hopefully in java*

If I have a number like 3.01, the computer seems to think the best double is the 64-bit value:
3.0099999999999997868371792719699442386627197265625
Is there some way, better than looking for, say, more than four 9's or 0's, that I can generically "round" to the precise base-10 representation?
Is there an algorithm that would take that 3.00999999... mess and return 3.01 WITHOUT me specifying that I want that precision?
I think most of the numbers I'm dealing with should be small enough that 64-bits will not have ambiguities.
No - because presumably you might have actually specified 3.0099999999999997868 as the input number, and wouldn't want that same value to be rounded to 3.01. Basically, you've lost information when converting from a decimal value to binary floating point - you can't get that information back.
If you're interested in decimal values rather than just the magnitude, you should consider using BigDecimal instead of double. (What do these values represent?)
EDIT: As noted by other answers, Java will give you 3.01 anyway when you just use toString, no matter how you arrived at the original value. This is specified in Double.toString:
How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double. That is, suppose that x is the exact mathematical value represented by the decimal representation produced by this method for a finite nonzero argument d. Then d must be the double value nearest to x; or if two double values are equally close to x, then d must be one of them and the least significant bit of the significand of d must be 0.
If that's good enough for you, it'll make life easier... but it sounds like you should be thinking about it more fundamentally.
If you want 10 digits of precision, you need to round to that precision. Even though BigDecimal avoids representation error, sooner or later you will still have to decide how to deal with precision.
double d = 3.01;
System.out.println(d); // rounds the answer slightly
prints
3.01
There are many workarounds for representation and rounding error, however often the built in tools will deal with it for you.
It's clear that you cannot expect to always get the original number back, since there are many numbers that map to the same double. For example, you cannot distinguish between these numbers:
3.0099999999999997868371792719699442386627197265625
3.009999999999999786837179271969944238662
3.009999999999999786837179271
3.0099999999999997
3.01
However, Python has an interesting take on this: if you give it the number 3.0099999999999997868371792719699442386627197265625, it will reply with 3.01:
Python 2.7.2+ (default, Nov 30 2011, 19:22:03)
[GCC 4.6.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 3.0099999999999997868371792719699442386627197265625
3.01
This is because 3.01 is the shortest string that gives back the same floating point number. In other words, repr(x) is chosen as the shortest string such that
float(repr(x)) == x
where repr is the Python function that turns an object into a string (here it turns 3.0099... into 3.01) and float converts a string to a float.
There are obviously many strings that will result in the same internal float, but this is the shortest and therefore "probably" what you meant.
This feature was added in Python 2.7, as a backport of a Python 3.1 feature. It was discussed in Issue1580 and you should be able to find the code there and translate it into Java if you want.
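For what it's worth, Java's Double.toString already follows the same 'just enough digits' rule quoted above, so for this value no port is needed; a minimal check:
double d = 3.0099999999999997868371792719699442386627197265625;
System.out.println(d);  // 3.01 -- the shortest decimal that maps back to this double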
