Dividing two large longs while retaining precision and accuracy - java

I have two large long values, e.g. long a = 106951484895 and long b = 47666297253. I want to divide one by the other, while still retaining precision and accuracy. a / b gives just 2, which is neither precise nor accurate. (double)a / b returns 2.243754834308401 which is precise, but I don't know whether it is accurate. Is it accurate, or is there a better way?

If you check the calculation in Wolfram Alpha, you'll see that the exact result is
2.243754834308400900535121747859167616725725368773485418854923... Your figure of
2.243754834308401 is dead on. Unless you need more precision, the calculation with doubles will suffice.

In Java there is the BigInteger class for when you need arbitrary precision with whole numbers. For decimal numbers, use BigDecimal.
Wolfram Alpha will give you 2.243754834308400900535121747859167616725725368773485418854923..., and our figure is 2.243754834308401, which is the same as
bigDecimal1.divide(bigDecimal2, MathContext.DECIMAL64)
If you go for
bigDecimal1.divide(bigDecimal2, new MathContext(1000, RoundingMode.HALF_EVEN))
You will get
2.24375483430840090053512174785916761672572536877348541885492361677904883097801043287594504
1050407263946829396490609763285697017511527160785819555506643456210143732768535294645618694
7909645722613183738163350810420038396599808994188248448801742297144651257940243013656347535
1346061895880989881427322957327423437070470786123178209350642720039431463073874604572487035
0881416301899886949878414127297558394219666911873273296141755170873372894249298571586701215
5058865276866526572281643300564007415077913939597358974242706109866167162174559269200972437
4468185209762552814414640557312348785977139301334520631262090283427956618755742143233766150
5750523038219597199472866720344664485953248792408356275728443143815091921547456137582778817
3173376404446432448382818379181981559569409501831858179303080342832602945081961262782040747
0742627855109348071601512026092932232568603874560325500767085983329631127368742840999544420
8119473080650114494849915293461361824567061678496514955637978679644265088391509259402889160
For more digits, keep increasing the precision given in the MathContext constructor.
RoundingMode.HALF_EVEN, a.k.a. banker's rounding, is analogous to the rounding policy used for float and double arithmetic in Java.
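For reference, here is a minimal end-to-end sketch using the values from the question; bigDecimal1/bigDecimal2 above are assumed to be built from a and b like this (the class name is just for illustration):
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class Divide {
    public static void main(String[] args) {
        long a = 106951484895L;
        long b = 47666297253L;
        BigDecimal bigDecimal1 = BigDecimal.valueOf(a);   // longs convert to BigDecimal without loss
        BigDecimal bigDecimal2 = BigDecimal.valueOf(b);
        // Roughly the precision of double arithmetic: 16 significant digits, HALF_EVEN rounding
        System.out.println(bigDecimal1.divide(bigDecimal2, MathContext.DECIMAL64));   // 2.243754834308401
        // As many digits as you ask for
        System.out.println(bigDecimal1.divide(bigDecimal2, new MathContext(1000, RoundingMode.HALF_EVEN)));
    }
}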

Related

Java big decimal or double

I have a number in the format (13,2): 13 digits and 2 decimal places. I need to do some calculations on it (like multiplication and division). I am planning to use BigDecimal for the calculations. Should I use double or float instead, since BigDecimal is a bit on the slower side?
The most important consideration is not speed but correctness.
If your value is a sample of a continuous value, like a measurement of a real-world property like size, distance, weight, angle, etc., then an IEEE-754 double or float is probably a better choice. This is because in this case powers of ten are not necessarily "rounder" than other values (e.g. angular measurements in radians can be transcendental numbers but still "round").
If your value is a discrete value like a measurement of money, then double is incorrect and a floating-point decimal type like BigDecimal is correct. This is because, in this case, discrete increments are meaningful, and a value of "0.01" is "rounder" and more correct than a number like "0.009999999999999" or "0.010000000000000001".
The simplest, most natural representation for data with two decimal places is BigDecimal with scale factor 2. Start with that. In most cases it will be fast enough.
If, when measured, it really is a serious performance problem, there are two more options:
Use long to represent the number of hundredths. For example, US currency can be represented exactly as a long number of cents. Be very careful to ensure variable names and comments make it clear where dollars are being used, and where cents are being used.
Use double to represent the amount as a fraction. This avoids the dollars-vs-cents bookkeeping, at the expense of rounding issues. You may need to periodically correct the rounding by multiplying by 100, rounding to the nearest integer, and dividing by 100 again.
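A rough sketch of both options (the prices and names here are illustrative, not from the question):
// Option 1: a long count of hundredths (cents); the names make the unit explicit.
long priceInCents = 1999;                        // $19.99
long totalInCents = priceInCents * 3;            // stays exact: 5997 cents
System.out.printf("%d.%02d%n", totalInCents / 100, totalInCents % 100);  // 59.97
// Option 2: a double holding the amount, periodically snapped back to the nearest cent.
double amount = 0.1 + 0.2;                       // 0.30000000000000004
amount = Math.round(amount * 100) / 100.0;       // corrected to 0.3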

The accuracy of a double in general programming and Java

I understand that, due to the nature of float/double, one should not use them for calculations where precision is important. However, I'm a little confused about their limitations, given the mixed answers to similar questions: will floats and doubles always be inaccurate regardless of significant digits, or are they only inaccurate beyond the 16th digit?
I've run a few examples in Java:
System.out.println(Double.parseDouble("999999.9999999999"));
// this outputs correctly w/ 16 digits
System.out.println(Double.parseDouble("9.99999999999999"));
// This also outputs correctly w/ 15 digits
System.out.println(Double.parseDouble("9.999999999999999"));
// But this doesn't output correctly w/ 16 digits. Outputs 9.999999999999998
I can't find the link to the answer that claimed values like 1.98 and 2.02 would round down to 2.0 and therefore create inaccuracies, but testing shows those values are printed correctly. So my first question is: will float/double values always be inaccurate, or is there a limit below which you can be assured of precision?
My second question is in regards to using BigDecimal. I know that I should be using BigDecimal for precision important calculations. Therefore I should be using BigDecimal's methods for arithmetic and comparing. However, BigDecimal also includes a doubleValue() method which will convert the BigDecimal to a double. Would it be safe for me to do a comparison between double values that I know for sure have less than 16 digits? There will be no arithmetic done on them at all so the inherent values should not have changed.
For example, is it safe for me to do the following?
BigDecimal myDecimal = new BigDecimal("123.456");
BigDecimal myDecimal2 = new BigDecimal("234.567");
if (myDecimal.doubleValue() < myDecimal2.doubleValue()) System.out.println("myDecimal is smaller than myDecimal2");
Edit: After reading some of the responses to my own answer I've realized my understanding was incorrect and have deleted it. Here are some snippets from it that might help in the future.
"A double cannot hold 0.1 precisely. The closest representable value to 0.1 is 0.1000000000000000055511151231257827021181583404541015625. Java Double.toString only prints enough digits to uniquely identify the double, not the exact value." - Patricia Shanahan
Sources:
https://stackoverflow.com/a/5749978 - States that a double can hold up to 15 digits
I suggest you read this page:
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
Once you've read and understood it, and perhaps converted several examples to their binary representations in the 64-bit floating-point format, you'll have a much better idea of how many significant digits a double can hold.
As a side note (perhaps trivial), a nice and reliable way to store a value with a known precision is simply to multiply it by the relevant factor and store it as some integral type, which is completely precise.
For example:
double costInPounds = <something>; //e.g. 3.587
int costInPence = (int)(costInPounds * 100 + 0.5); //359
Plainly some precision can be lost, but if a required/desired precision is known, this can save a lot of bother with floating point values, and once this has been done, no precision can be lost by further manipulations.
The + 0.5 is to ensure that rounding works as expected: the (int) cast truncates the double (effectively the floor for positive values), so adding 0.5 first makes it round to the nearest whole number.
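If the amount can ever be negative, the truncating cast plus 0.5 rounds toward zero instead of to the nearest penny; Math.round is a safe alternative, assuming the same variables as above:
int costInPence = (int) Math.round(costInPounds * 100);   // rounds to the nearest penny for either sign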

How to actually avoid floating point errors when you need to use float?

I am trying to affect the translation of a 3D model using some UI buttons to shift the position by 0.1 or -0.1.
My model position is a three-dimensional float, so simply adding 0.1f to one of the values causes obvious rounding errors. While I can use something like BigDecimal to retain precision, I still have to convert from a float and back to a float at the end, and it always results in silly numbers that make my UI look like a mess.
I could just pretty the displayed values but the rounding errors will only get worse with more editing and they make my save files rather hard to read.
So how do I actually avoid these errors when I need to use a float?
The Kahan summation and pairwise summation algorithms help to reduce floating point errors. Here's some Java code for the Kahan algorithm.
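As a minimal illustrative sketch (not the code the answer refers to), Kahan (compensated) summation looks roughly like this in Java:
// Sums an array of floats while carrying a correction term for the low-order
// bits that a plain running sum would discard.
static float kahanSum(float[] values) {
    float sum = 0.0f;
    float compensation = 0.0f;                    // error carried over from earlier additions
    for (float value : values) {
        float corrected = value - compensation;
        float next = sum + corrected;             // low-order bits of 'corrected' are lost here
        compensation = (next - sum) - corrected;  // recover the lost bits for the next iteration
        sum = next;
    }
    return sum;
}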
I would use a Rational class. There are many out there - this one looks like it should work.
One significant cost will be when the Rational is rendered into a float, and another when the fraction is reduced by the gcd. The one I posted keeps the numerator and denominator fully reduced at all times, which should be quite efficient if you are always adding or subtracting 1/10.
This implementation holds the values normalised (i.e. with consistent sign) but unreduced.
You should choose your implementation to best fit your usage.
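For illustration only, a bare-bones Rational along the lines described, kept fully reduced (a sketch, not the implementation the answer links to):
// Keeps the value as an exact fraction; convert to float only for display.
final class Rational {
    final long num, den;
    Rational(long num, long den) {
        long g = gcd(Math.abs(num), Math.abs(den));  // keep fully reduced
        long sign = den < 0 ? -1 : 1;                // normalise the sign into the numerator
        this.num = sign * num / g;
        this.den = sign * den / g;
    }
    Rational add(Rational o) {
        return new Rational(num * o.den + o.num * den, den * o.den);
    }
    float toFloat() {
        return (float) num / den;
    }
    private static long gcd(long a, long b) {
        return b == 0 ? a : gcd(b, a % b);
    }
}
// position = position.add(new Rational(1, 10));    // shift by exactly 0.1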
A simple solution is to use fixed-point precision, i.e. an integer holding 10x or 100x the value you want.
float f = 10;
f += 0.1f;
becomes
int i = 100;
i += 1; // use as many times as you like
// use i / 10.0 as required.
I wouldn't use float in any case, as you get more rounding errors than with double for next to no benefit (unless you have millions of float values). double gives you 8 more digits of precision, and with sensible rounding you won't see those errors.
If you stick with floats:
The easiest way to avoid the error is to use floats which are exact but near the desired value, namely round(2^n * value) / 2^n, where n is the number of fractional bits and value is the number to use (in your case 0.1).
In your case with increasing precision:
n = 4 => 0.125
n = 8 (byte) => 0.09765625
n = 16 (short)=> 0.100006103516....
The long digit chains are artefacts of printing the binary value in decimal; the underlying binary number has far fewer bits. Because these floats are exact, addition and subtraction will not introduce drift and remain predictable, as long as the number of bits required does not exceed what the float can hold.
If you fear that your display will be compromised by this solution (because the values look odd), use and store only integers (a step increase of -1/1). The value that is finally set internally is x = value * step. Since the step increases or decreases by whole units of 1, precision is retained.
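A minimal sketch of both suggestions, assuming the 0.1 step from the question (the names are illustrative):
// Variant 1: replace 0.1 with the nearest exactly representable value for n fractional bits.
int n = 16;
float exactStep = Math.round((1 << n) * 0.1) / (float) (1 << n);   // 0.100006103515625
// Variant 2: let the buttons change only an integer step count and derive the float from it,
// so rounding error never accumulates across edits.
int steps = 0;
steps += 1;                       // "+0.1" button
steps -= 1;                       // "-0.1" button
float position = steps * 0.1f;    // recomputed from the exact integer each time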

Loss of precision after subtracting double from double [duplicate]

Possible Duplicate:
Retain precision with Doubles in java
Alright so I've got the following chunk of code:
int rotation = e.getWheelRotation();
if(rotation < 0)
zoom(zoom + rotation * -.05);
else if(zoom - .05 > 0)
zoom(zoom - rotation * .05);
System.out.println(zoom);
Now, the zoom variable is of type double, initially set to 1. So I would expect the results to be like 1 - .05 = .95; .95 - .05 = .9; .9 - .05 = .85; etc. This appears not to be the case, though, when I print the result, as you can see below:
0.95
0.8999999999999999
0.8499999999999999
0.7999999999999998
0.7499999999999998
0.6999999999999997
Hopefully someone is able to explain this clearly. I searched the internet and read that it has something to do with limitations of storing floats in binary, but I still don't quite understand. A solution to my problem is not shockingly important, but I would like to understand this kind of behavior.
Java uses IEEE-754 floating point numbers. They're not perfectly precise. The famous example is:
System.out.println(0.1d + 0.2d);
...which outputs 0.30000000000000004.
What you're seeing is just a symptom of that imprecision. You can improve the precision by using double rather than float.
If you're dealing with financial calculations, you might prefer BigDecimal to float or double.
float and double have limited precision because their fractional part is represented as a sum of powers of 2, e.g. 1/2 + 1/4 + 1/8 ... If you have a number like 1/10, it has to be approximated.
For this reason, whenever you deal with floating point you must use reasonable rounding or you can see small errors.
e.g.
System.out.printf("%.2f%n", zoom);
To minimise rounding errors, you could count the number of rotations instead and divide this int value by 20.0. You won't see a rounding error this way, and it will be faster, with fewer magic numbers.
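A rough sketch of that counting approach applied to the question's code (zoomClicks is an assumed field, initialised so the starting zoom is 1.0):
int zoomClicks = 20;                             // zoom of 1.0 == 20 clicks of 0.05
// inside the mouse-wheel handler:
int rotation = e.getWheelRotation();
if (rotation < 0 || zoomClicks - rotation > 0)
    zoomClicks -= rotation;                      // accumulate in exact integer arithmetic
zoom(zoomClicks / 20.0);                         // divide once; errors cannot build up over many events
System.out.println(zoomClicks / 20.0);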
float and double have precision issues. I would recommend you take a look at the BigDecimal Class. That should take care of precision issues.
Since decimal numbers (and integers as well) have an infinite number of possible values, they are impossible to map precisely onto a fixed number of bits. Computers circumvent this problem by limiting the range the numbers can assume.
For example, an int in Java can represent nothing larger than Integer.MAX_VALUE, or 2^31 - 1.
For decimal numbers there is also a problem with the digits after the decimal point, which might likewise be infinite. This is solved by not allowing all decimal values, but limiting them to a (smartly chosen) set of possibilities based on powers of 2. This happens automatically and is often nothing to worry about; you can interpret your result of 0.899999 as 0.9. In case you do need explicit precision, you will have to resort to other data types, which have their own limitations.

Can we use double to store monetary fields and use BigDecimal for arithmetic

I know the problem with double/float, and it's recommended to use BigDecimal instead of double/float to represent monetary fields. But double/float is more efficient and space-saving. So my question is:
Is it acceptable to use double/float to represent monetary fields in a Java class, but use BigDecimal to take care of the arithmetic (i.e. convert double/float to BigDecimal before any arithmetic) and equality checks?
The reason is to save some space. And I really see lots of projects are using double/float to represent the monetary fields.
Is there any pitfall for this?
Thanks in advance.
No, you can't.
Suppose double is enough to store two values x and y. Then you convert them to safe BigDecimal and multiply them. The result is accurate; however, if you store the multiplication result back in a double, chances are you will lose precision. Proof:
double x = 1234567891234.0;
double y = 1234567891234.0;
System.out.println(x);
System.out.println(y);
BigDecimal bigZ = new BigDecimal(x).multiply(new BigDecimal(y));
double z = bigZ.doubleValue();
System.out.println(bigZ);
System.out.println(z);
Results:
1.234567891234E12 //precise 'x'
1.234567891234E12 //precise 'y'
1524157878065965654042756 //precise 'x * y'
1.5241578780659657E24 //losing precision
x and y are accurate, as is the multiplication using BigDecimal. However, after casting back to double we lose the least significant digits.
I would also recommend that you use nothing but BigDecimal for ALL arithmetic that may involve currency.
Make sure that you always use the String constructor of BigDecimal. Why? Try the following code in a JUnit test:
assertEquals(new BigDecimal("0.01").toString(), new BigDecimal(0.01).toString());
You get the following output:
expected:<0.01[]> but was <0.01[000000000000000020816681711721685132943093776702880859375]>
The truth is, you cannot store EXACTLY 0.01 as a 'double' amount. Only BigDecimal stores the number you require EXACTLY as you want it.
And remember that BigDecimal is immutable. The following will compile:
BigDecimal amount = new BigDecimal("123.45");
BigDecimal more = new BigDecimal("12.34");
amount.add(more);
System.out.println("Amount is now: " + amount);
but the resulting output will be:
Amount is now: 123.45
That's because you need to assign the result to a new (or the same) BigDecimal variable.
In other words:
amount = amount.add(more);
What is acceptable depends on your project. You can use double and long in some projects and may be expected to do so. However, in other projects this is considered unacceptable. As a double you can represent values up to 70,000,000,000,000.00 to the cent (larger than the US national debt); with fixed-place long you can represent 90,000,000,000,000,000.00 accurately.
If you have to deal with hyper-inflationary currencies (a bad idea in any case) but for some reason still need to account for every cent, use BigDecimal.
Whether you use double, long or BigDecimal, you must round the result. How you do this varies with each data type; BigDecimal is the least error prone, as it requires you to specify the rounding mode and the precision for different operations. With double or long, you are left to your own devices.
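For instance, the rounding step might look like this with each type (the price and tax rate are illustrative; assumes the java.math imports):
// BigDecimal: the scale and rounding mode are spelled out explicitly.
BigDecimal tax = new BigDecimal("19.99")
        .multiply(new BigDecimal("0.0825"))
        .setScale(2, RoundingMode.HALF_EVEN);            // 1.65
// Fixed-place long (cents): you choose and apply the rounding yourself.
long priceCents = 1999;
long taxCents = Math.round(priceCents * 0.0825);         // 164.9175 -> 165
// double: likewise, rounding is entirely up to you.
double taxAsDouble = Math.round(19.99 * 0.0825 * 100) / 100.0;   // 1.65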
long will be a much better choice than double/float.
Are you sure that using BigDecimal will be a real bottleneck?
The pitfall is that floats/doubles cannot store all values without losing precision. Even if you do use BigDecimal and preserve precision during calculations, you are still storing the end product as a float/double.
The "proper" solution to this, in my experience, is to store monetary values as integers (e.g. Long) representing thousandths of a dollar. This gives sufficient resolution for most tasks, e.g. interest accrual, while sidestepping the problem of using floats/doubles. As an added "bonus", this requires about the same amount of storage as floats/doubles.
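A minimal sketch of that representation, with a long holding thousandths ("mils") and an illustrative interest rate:
long balanceMils = 1234560;                          // $1,234.560 held as 1,234,560 mils
double dailyRate = 0.05 / 365;                       // illustrative 5% annual rate
balanceMils += Math.round(balanceMils * dailyRate);  // accrue one day's interest, rounded to whole mils
System.out.printf("$%,.3f%n", balanceMils / 1000.0); // $1,234.729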
If the only use of double is to store decimal values, then yes, you can, under some conditions: if you can guarantee that your values have no more than 15 decimal digits, then converting a value to double (53 bits of precision) and converting the double back to decimal with 15-digit precision (or less) will give you the original value, i.e. without any loss. This follows from David Matula's theorem, proved in his article In-and-out conversions. Note that for this result to apply, the conversions must be done with correct rounding.
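A small sketch of that round trip (the value is illustrative; assumes the java.math imports):
BigDecimal original = new BigDecimal("123456789012.34");     // 14 significant decimal digits
double stored = original.doubleValue();                      // persist the value as a double
BigDecimal recovered = new BigDecimal(stored).round(new MathContext(15));
System.out.println(original.compareTo(recovered) == 0);      // true: nothing was lost in the round trip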
Note however that a double may not be the best choice: monetary values are generally expressed not in floating point, but in fixed point with a few digits (p) after the decimal point, and in this case, converting the value to an integer with a scaling by 10^p and storing this integer (as others suggested) is better.
