What's the right way to parseFloat in Java - java

I notice some issues with the Java float precision
Float.parseFloat("0.0065") - 0.001 // 0.005500000134110451
new Float("0.027") - 0.001 // 0.02600000000700354575
Float.valueOf("0.074") - 0.001 // 0.07399999999999999999
I have this problem not only with Float but also with Double.
Can someone explain what is happening behind the scenes, and how can we get an accurate number? What would be the right way to handle this when dealing with these issues?

The problem is simply that float has finite precision; it cannot represent 0.0065 exactly. (The same is true of double, of course: it has greater precision, but still finite.)
A further problem, which makes the above problem more obvious, is that 0.001 is a double rather than a float, so your float is getting promoted to a double to perform the subtraction, and of course at that point the system has no way to recover the missing precision that a double could have represented to begin with. To address that, you would write:
float f = Float.parseFloat("0.0065") - 0.001f;
using 0.001f instead of 0.001.

See What Every Computer Scientist Should Know About Floating-Point Arithmetic. Your results look correct to me.
If you don't like how floating-point numbers work, try something like BigDecimal instead.

You're getting the right results. There is no such float as 0.027 exactly, nor is there such a double. You will always get these errors if you use float or double.
float and double are stored as binary fractions: something like 1/2 + 1/4 + 1/16... You can't get all decimal values to be stored exactly as finite-precision binary fractions. It's just not mathematically possible.
The only alternative is to use BigDecimal, which you can use to get exact decimal values.
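For illustration, here is a minimal sketch (not part of the answer above; the class name is just for illustration) of the same subtractions done exactly with BigDecimal. Each result is exact because BigDecimal works in decimal rather than binary:
import java.math.BigDecimal;

public class ExactSubtraction {
    public static void main(String[] args) {
        System.out.println(new BigDecimal("0.0065").subtract(new BigDecimal("0.001"))); // 0.0055
        System.out.println(new BigDecimal("0.027").subtract(new BigDecimal("0.001")));  // 0.026
        System.out.println(new BigDecimal("0.074").subtract(new BigDecimal("0.001")));  // 0.073
    }
}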

From the Java Tutorials page on Primitive Data Types:
A floating-point literal is of type float if it ends with the letter F or f; otherwise its type is double and it can optionally end with the letter D or d.
So I think your literals (0.001) are doubles and you're subtracting doubles from floats.
Try this instead:
System.out.println((0.0065F - 0.001D)); // 0.005500000134110451
System.out.println((0.0065F - 0.001F)); // 0.0055
... and you'll get:
0.005500000134110451
0.0055
So add F suffixes to your literals and you should get better results:
Float.parseFloat("0.0065") - 0.001F
new Float("0.027") - 0.001F
Float.valueOf("0.074") - 0.001F

I would convert your float to a string and then use BigDecimal.
new BigDecimal(String.valueOf(yourDoubleValue));
Don't use the BigDecimal double constructor, though, as you will still get the same errors.

Long story short: if you require arbitrary precision, use BigDecimal, not float or double. You will see all sorts of rounding issues of this nature using float.
As an aside, be very careful not to use the float/double constructor of BigDecimal, because it will have the same issue. Use the String constructor instead.
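A quick illustration of the difference (a sketch; the long expansion in the comment is the exact value of the double nearest to 0.1):
System.out.println(new java.math.BigDecimal(0.1));   // 0.1000000000000000055511151231257827021181583404541015625
System.out.println(new java.math.BigDecimal("0.1")); // 0.1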

Binary floating point cannot represent most decimal numbers exactly. If you need an accurate representation of a number in Java, you should use the java.math.BigDecimal class:
BigDecimal d = new BigDecimal("0.0065");

Related

Is there a way to get right results from BigDecimal.floatValue() function? [duplicate]

I am working with an application that is based entirely on doubles, and am having trouble in one utility method that parses a string into a double. I've found a fix where using BigDecimal for the conversion solves the issue, but raises another problem when I go to convert the BigDecimal back to a double: I'm losing several places of precision. For example:
import java.math.BigDecimal;
import java.text.DecimalFormat;

public class test {
    public static void main(String[] args) {
        String num = "299792.457999999984";
        BigDecimal val = new BigDecimal(num);
        System.out.println("big decimal: " + val.toString());
        DecimalFormat nf = new DecimalFormat("#.0000000000");
        System.out.println("double: " + val.doubleValue());
        System.out.println("double formatted: " + nf.format(val.doubleValue()));
    }
}
This produces the following output:
$ java test
big decimal: 299792.457999999984
double: 299792.458
double formatted: 299792.4580000000
The formatted double demonstrates that it's lost the precision after the third place (the application requires those lower places of precision).
How can I get BigDecimal to preserve those additional places of precision?
Thanks!
Update after catching up on this post. Several people mention this is exceeding the precision of the double data type. Unless I'm reading this reference incorrectly:
http://java.sun.com/docs/books/jls/third_edition/html/typesValues.html#4.2.3
then the double primitive has a maximum exponent value of Emax = 2^(K-1) - 1, and the standard implementation has K = 11. So, the max exponent should be 511, no?
You've reached the maximum precision for a double with that number. It can't be done. The value gets rounded up in this case. The conversion from BigDecimal is unrelated and the precision problem is the same either way. See this for example:
System.out.println(Double.parseDouble("299792.4579999984"));
System.out.println(Double.parseDouble("299792.45799999984"));
System.out.println(Double.parseDouble("299792.457999999984"));
Output is:
299792.4579999984
299792.45799999987
299792.458
For these cases double has more than 3 digits of precision after the decimal point. They just happen to be zeros for your number and that's the closest representation you can fit into a double. It's closer for it to round up in this case, so your 9's seem to disappear. If you try this:
System.out.println(Double.parseDouble("299792.457999999924"));
You'll notice that it keeps your 9's because it was closer to round down:
299792.4579999999
If you require that all of the digits in your number be preserved then you'll have to change your code that operates on double. You could use BigDecimal in place of them. If you need performance then you might want to explore BCD as an option, although I'm not aware of any libraries offhand.
In response to your update: the maximum exponent for a double-precision floating-point number is actually 1023. That's not your limiting factor here though. Your number exceeds the precision of the 52 fractional bits that represent the significand, see IEEE 754-1985.
Use a floating-point converter to see your number in binary. The exponent is 18, since 262144 (2^18) is nearest. If you take the fractional bits and go up or down one in binary, you can see there's not enough precision to represent your number:
299792.457999999900 // 0010010011000100000111010100111111011111001110110101
299792.457999999984 // here's your number that doesn't fit into a double
299792.458000000000 // 0010010011000100000111010100111111011111001110110110
299792.458000000040 // 0010010011000100000111010100111111011111001110110111
The problem is that a double can hold roughly 15 decimal digits, while a BigDecimal can hold an arbitrary number of digits. When you call doubleValue(), it attempts to apply a rounding mode to remove the excess digits. However, since you have a lot of 9's in the output, they keep getting rounded up to 0, with a carry to the next-highest digit.
To keep as much precision as you can, you need to change the BigDecimal's rounding mode so that it truncates:
BigDecimal bd1 = new BigDecimal("12345.1234599999998");
System.out.println(bd1.doubleValue());
BigDecimal bd2 = new BigDecimal("12345.1234599999998", new MathContext(15, RoundingMode.FLOOR));
System.out.println(bd2.doubleValue());
Only as many digits are printed as are needed so that parsing the string back to a double yields exactly the same value.
Some detail can be found in the javadoc for Double#toString
How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double. That is, suppose that x is the exact mathematical value represented by the decimal representation produced by this method for a finite nonzero argument d. Then d must be the double value nearest to x; or if two double values are equally close to x, then d must be one of them and the least significant bit of the significand of d must be 0.
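A small demonstration of that rule, using the number from this question (an illustrative sketch; the first printed value matches the parse result shown earlier):
double d = Double.parseDouble("299792.457999999984");
System.out.println(d);                                      // 299792.458, the shortest string that round-trips
System.out.println(d == Double.parseDouble("299792.458"));  // true: the printed string parses back to the same double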
If it's entirely based on doubles ... why are you using BigDecimal? Wouldn't Double make more sense? If the value is too large (or has too much precision) for that, then ... you can't convert it; that would be the reason to use BigDecimal in the first place.
As to why it's losing precision, from the javadoc
Converts this BigDecimal to a double. This conversion is similar to the narrowing primitive conversion from double to float as defined in the Java Language Specification: if this BigDecimal has too great a magnitude to represent as a double, it will be converted to Double.NEGATIVE_INFINITY or Double.POSITIVE_INFINITY as appropriate. Note that even when the return value is finite, this conversion can lose information about the precision of the BigDecimal value.
You've hit the maximum possible precision for the double. If you would still like to store the value in primitives... one possible way is to store the part before the decimal point in a long
long l = 299792;
double d = 0.457999999984;
Since the double no longer has to spend its precision on the integer part, it can hold more digits of precision for the fractional component. This should be easy enough to do with some rounding etc.
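A rough sketch of that split (variable names are only illustrative, and negative values would need extra care):
java.math.BigDecimal val = new java.math.BigDecimal("299792.457999999984");
long intPart = val.longValue();                                        // 299792, fractional part discarded
double fracPart = val.subtract(java.math.BigDecimal.valueOf(intPart))
                     .doubleValue();                                   // ~0.457999999984, gets the double's full precision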

Double Precision when a float value is passed in double

I have one question regarding double precision. When a float value is assigned to a double, I get a different result. For example:
float f= 54.23f;
double d1 = f;
System.out.println(d1);
The output is 54.22999954223633. Can someone explain the reason behind this behaviour? Is it that double defaults to 14 decimal places of precision?
The same value is printed differently for float and double because the Java specification requires printing as many digits as needed to distinguish the value from adjacent representable values in the same type (per my answer here, and see the linked documentation for more precision in the definition).
Since float has fewer bits to represent values, and hence fewer values, they are spaced more widely apart, and you do not need as many digits to distinguish them. When you put the value into a double and print it, the Java rules require that more digits be printed so that the value is distinguished from nearby double values. The println function does not know that the value originally came from a float and does not contain as much information as can fit into a double.
54.23f is exactly 54.229999542236328125 (in hexadecimal, 0x1.b1d70ap+5). The float values just below and just above this are 54.2299957275390625 (0x1.b1d708p+5) and 54.23000335693359375 (0x1.b1d70cp+5). As you can see, printing “54.229999” would distinguish the value from 54.229995… and from 54.23…. However, the double values just below and just above 54.23f are 54.22999954223632101957264239899814128875732421875 and 54.22999954223633523042735760100185871124267578125. To distinguish the value, you need “54.22999954223633”.
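One way to see those exact values yourself is this small sketch; new BigDecimal(double) shows the exact binary value, as other answers in this thread also do:
float f = 54.23f;
System.out.println(new java.math.BigDecimal(f));           // 54.229999542236328125, the exact value of the float
System.out.println(new java.math.BigDecimal((double) f));  // same digits: widening a float to double is exact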
This is because the float hides the extra decimals and double shows them. The double will represent the actual number quite precisely and shows more digits.
Try this:
System.out.println(f.doubleValue()); (you need to make it a Float first, of course)
So as you can see, the information is there, it is just rounded.
Hope this helps
This is due to the Internal Representation.
Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and the significand (mantissa), from left to right.
This is known as an accuracy problem.
The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.
It is not a problem; it is how double works. You do not have to handle it or care about it. The precision of double is enough. Note that the difference between your number and the expected result is at the 14th position after the decimal point.
If you need arbitrarily good precision, use the java.math.BigDecimal class.
Or if you still want to use double. Do like this:
double d = 5.5451521841;
NumberFormat nf = new DecimalFormat("##.###");
System.out.println(nf.format(d));
Please let me know in case of any doubt.
Actually this is only about different visual representation or converting float / double to String. Let's take a look at internal binary representation
float f = 0.23f;
double d = f;
System.out.println(Integer.toBinaryString(Float.floatToIntBits(f)));
System.out.println(Long.toBinaryString(Double.doubleToLongBits(d)));
output
111110011010111000010100011111
11111111001101011100001010001111100000000000000000000000000000
It means that f was converted to d without any distortion; the significant digits are the same.
double and float represent numbers in different formats.
Because of this you are bound to find certain numbers that store perfectly in one format but not in the other. You happen to have found one that correctly fits in a float but does not fit exactly in a double.
This problem can also show itself when two different formatters are used.

Java float and double diff

I am using jdk 1.6. This is my code.
float f = 10.0f;
double d = 10.0;
System.out.println("Equal Status : " + (f == d));
then the system shows the answer as true. But if I modify the values to
float f = 10.1f;
double d = 10.1;
System.out.println("Equal Status : " + (f == d));
then the system shows the answer as false. I know the system uses bit matching for == checking, but what is the reason behind this? Can you explain it? Thanks in advance.
While this is not "my" answer, this is about as close to "must read" literature for programmers who want to move from "meh" to "good." Great is something truly special, so don't think that "good" is anything to sneeze at. :)
What Every Programmer Needs to Know About Floating Point
The link #Sam suggested is great but still too technical for me :P
I will just give the OP some advice on handling floating point (probably a bit off-topic, because you are asking for the reason behind it; for that, read the link #Sam suggested).
Never assume a floating point number is going to give you an accurate representation. Sometimes it can, but not always. Floating point is constrained by its significant figures: it is only "accurate" up to the first n digits.
Your situation is even worse because you are mixing float and double, but the idea for solving it is similar.
You need to decide what precision your application needs the calculation results to have, and choose an epsilon value based on it. For example, if your application only needs accuracy to 3 decimal places, an epsilon of 0.0005 is probably reasonable.
Comparing two floating point numbers shouldn't be done with ==; you should use
(a + EPSILON > b && a - EPSILON < b). Similarly, a > b should be expressed as a - EPSILON > b.
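A minimal sketch of that epsilon comparison (EPSILON here is an assumed, application-specific tolerance, not a universal constant):
class FloatCompare {
    static final double EPSILON = 0.0005;            // tolerance chosen for your application

    static boolean approxEquals(double a, double b) {
        return a + EPSILON > b && a - EPSILON < b;   // "equal" within the tolerance
    }

    static boolean greaterThan(double a, double b) {
        return a - EPSILON > b;                      // a > b, allowing for rounding error
    }
}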
Points to remember are:
10.1 is a repeating sequence in binary: 1010101010...
When comparing a float and a double, the float is converted to a double by adding zeros to fill the number out, so you will be comparing
1010101...00000000... to 1010101.....101010..., which are different.
float f = 10.1f;
double d = 10.1;
System.out.println("Equal Status : " + (f == (float)d));
will give the answer of true
IMHO, generally speaking, for 99% of use cases double is a better choice because it is more accurate, i.e. don't use float unless you have to.
BigDecimal can be used to display the actual representation of a float or double. You don't see this normally, as toString will perform a small amount of rounding (it is coded to accommodate the type's representation limitations).
System.out.println("10.1f is actually " + new BigDecimal(10.1f));
System.out.println("10.1 is actually " + new BigDecimal(10.1));
prints
10.1f is actually 10.1000003814697265625
10.1 is actually 10.0999999999999996447286321199499070644378662109375
You can see that the double value is closer to the desired 10.1 but is not exactly that value. The reason the values are different is that, in each case, each is the closest representable value for that type.
float is a 32 bit type whereas double is a 64 bit type.
You ran into the classical floating point precision problem.
Floats are imprecise. The actual values of 10.1f and 10.1 will be slightly different due to rounding. Floats are binary, not decimal, so numbers that look "simple" to us, like 10.1, can't be represented exactly as floats.
You would want to refresh yourself on the IEEE floating point standards for both 32 and 64-bit floating point representations. If you peel into the internals of this, you'll see clearly as to why these floating points behave finicky.
If you're curious about how it's represented internally (which is why it's failing), you can use this code, which shows the hexadecimal representations of these numbers. From there, you can match them up with the exponents and mantissas of single and double precision.
System.out.printf("Float 10.0: 0x%X\n", Float.floatToRawIntBits((float)10.0));
System.out.printf("Double 10.0: 0x%X\n", Double.doubleToRawLongBits(10.0));
System.out.printf("Float 10.1: 0x%X\n", Float.floatToRawIntBits((float)10.1));
System.out.printf("Double 10.1: 0x%X\n", Double.doubleToRawLongBits(10.1));
prints
Float 10.0: 0x41200000
Double 10.0: 0x4024000000000000
Float 10.1: 0x4121999A
Double 10.1: 0x4024333333333333
You'll notice that there is some repetition in the way the values are represented. This is because 1/10 can't be represented exactly in a finite number of binary digits.

Can we use double to store monetary fields and use BigDecimal for arithmetic

I know the problem with double/float, and it's recommended to use BigDecimal instead of double/float to represent monetary fields. But double/float is more efficient and space-saving. So my question is:
Is it acceptable to use double/float to represent monetary fields in a Java class, but use BigDecimal to take care of the arithmetic (i.e. convert double/float to BigDecimal before any arithmetic) and equality checking?
The reason is to save some space, and I really do see lots of projects using double/float to represent monetary fields.
Is there any pitfall for this?
Thanks in advance.
No, you can't.
Suppose double is enough to store two values x and y. Then you convert them to safe BigDecimal and multiply them. The result is accurate; however, if you store the multiplication result back in a double, chances are you will lose precision. Proof:
double x = 1234567891234.0;
double y = 1234567891234.0;
System.out.println(x);
System.out.println(y);
BigDecimal bigZ = new BigDecimal(x).multiply(new BigDecimal(y));
double z = bigZ.doubleValue();
System.out.println(bigZ);
System.out.println(z);
Results:
1.234567891234E12 //precise 'x'
1.234567891234E12 //precise 'y'
1524157878065965654042756 //precise 'x * y'
1.5241578780659657E24 //losing precision
x and y are accurate, as is the multiplication using BigDecimal. However, after converting back to double we lose the least significant digits.
I would also recommend that you use nothing but BigDecimal for ALL arithmetic that may involve currency.
Make sure that you always use the String constructor of BigDecimal. Why? Try the following code in a JUnit test:
assertEquals(new BigDecimal("0.01").toString(), new BigDecimal(0.01).toString());
You get the following output:
expected:<0.01[]> but was <0.01[000000000000000020816681711721685132943093776702880859375]>
The truth is, you cannot store EXACTLY 0.01 as a 'double' amount. Only BigDecimal stores the number you require EXACTLY as you want it.
And remember that BigDecimal is immutable. The following will compile:
BigDecimal amount = new BigDecimal("123.45");
BigDecimal more = new BigDecimal("12.34");
amount.add(more);
System.out.println("Amount is now: " + amount);
but the resulting output will be:
Amount is now: 123.45
That's because you need to assign the result to a new (or the same) BigDecimal variable.
In other words:
amount = amount.add(more);
What is acceptable depends on your project. In some projects you can use double and long, and may be expected to do so; in other projects, this is considered unacceptable. As a double you can represent values up to 70,000,000,000,000.00 to the cent (larger than the US national debt); with a fixed-place long you can represent 90,000,000,000,000,000.00 accurately.
If you have to deal with hyper-inflationary currencies (a bad idea in any case) but for some reason still need to account for every cent, use BigDecimal.
Whether you use double, long, or BigDecimal, you must round the result. How you do this varies with each data type; BigDecimal is the least error-prone, as you are required to specify the rounding mode and the precision for different operations. With double or long, you are left to your own devices.
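For example, here is a sketch of making the rounding explicit with BigDecimal (the scale and rounding mode are assumptions; use whatever your domain requires):
java.math.BigDecimal price = new java.math.BigDecimal("19.99");
java.math.BigDecimal rate = new java.math.BigDecimal("0.0725");
java.math.BigDecimal tax = price.multiply(rate)                // 1.449275 exactly
        .setScale(2, java.math.RoundingMode.HALF_UP);          // 1.45, rounding stated explicitly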
long will be a much better choice than double/float.
Are you sure that using BigDecimal type will be a real bottleneck?
The pitfall is that floats/doubles cannot store all values without losing precision. Even if you do use BigDecimal and preserve precision during calculations, you are still storing the end product as a float/double.
The "proper" solution to this, in my experience, is to store monetary values as integers (e.g. Long) representing thousandths of a dollar. This gives sufficient resolution for most tasks, e.g. interest accrual, while sidestepping the problem of using floats/doubles. As an added "bonus", this requires about the same amount of storage as floats/doubles.
If the only use of double is to store decimal values, then yes, you can under some conditions: if you can guarantee that your values have no more than 15 decimal digits, then converting a value to double (53 bits of precision) and converting the double back to decimal with 15-digit precision (or less) will give you the original value, i.e. without any loss, from an application of David Matula's theorem proved in his article In-and-out conversions. Note that for this result to be applicable, the conversions must be done with correct rounding.
Note however that a double may not be the best choice: monetary values are generally expressed not in floating point, but in fixed point with a few digits (p) after the decimal point, and in this case, converting the value to an integer with a scaling by 10^p and storing this integer (as others suggested) is better.
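A brief sketch of that scaled-integer approach, assuming p = 2 (amounts kept as an integer number of cents):
java.math.BigDecimal amount = new java.math.BigDecimal("123.45");
long cents = amount.movePointRight(2).longValueExact();              // 12345, stored exactly as a long
java.math.BigDecimal back = java.math.BigDecimal.valueOf(cents, 2);  // 123.45, reconstructed exactly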
