I'm sure this is a simple question to answer, but I can't for the life of me decide what has to be done. So here it is: assuming we follow the "best practice" of using BigDecimal for financial calculations, how does one handle computations that throw an exception?
As an example: suppose I have to split a "user amount" for investing in bonds between "n" different entities. Now consider the case of a user submitting $100 to be split between 3 bonds. The equivalent code would look like:
public static void main(String[] args) throws Exception {
    BigDecimal bd1 = new BigDecimal("100.0");
    BigDecimal bd2 = new BigDecimal("3");
    System.out.println(bd1.divide(bd2));
}
But as we all know, this particular code snippet throws an ArithmeticException since the division is non-terminating. How does one handle such scenarios when using arbitrary-precision data types for computations?
TIA,
sasuke
UPDATE: Given that a RoundingMode would help remedy this issue, the next question is: why is 100.0/3 shown as 33.3 and not 33.33? Wouldn't 33.33 be a "more" accurate answer, as in you expect 33 cents instead of 30? Is there any way I can tweak this?
The answer is to use one of the BigDecimal.divide() methods which specify a RoundingMode.
For example, the following uses the HALF_EVEN rounding mode (banker's rounding) and will round to 2 decimal places; HALF_UP or one of the other rounding modes may be more appropriate depending on requirements:
bd1.divide(bd2, 2, RoundingMode.HALF_EVEN);
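A minimal, self-contained version of that call, using the values from the question (the class name is just for illustration; the output is shown in the comment):
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SplitExample {
    public static void main(String[] args) {
        BigDecimal bd1 = new BigDecimal("100.0");
        BigDecimal bd2 = new BigDecimal("3");
        // The scale argument (2) controls the number of decimal places,
        // which is also what decides 33.33 vs 33.3 in the update above.
        System.out.println(bd1.divide(bd2, 2, RoundingMode.HALF_EVEN)); // prints 33.33
    }
}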
divide has an overload that takes a rounding mode. You need to choose one. I believe "half even" is the most commonly used one for monetary calculations.
bd1.divide(bd2, 5, RoundingMode.FLOOR)
It's just an example; pick the scale and rounding mode depending on the rounding you want.
I have written the following simple function that calculates the arctan of the inverse of an integer. I was wondering how to use BigDecimal instead of double to increase the accuracy of the results. I was also thinking of using a BigInteger to store the growing multiples of xSquare that the "term" value is divided by.
I have limited experience with the syntax for how to perform calculations on BigDecimals. How would I revise this function to use them?
/* Thanks to https://www.cygnus-software.com/misc/pidigits.htm for explaining the general calculation method
   credited to John Machin. */
public static double atanInvInt(int x) {
    // Returns the arc tangent of an inverse integer.
    /* Terminates once the remaining amount reaches zero or the denominator reaches 2101.
       If the former happens, the accuracy should be determined by the number format used, such as double.
       If the latter happens, the result should be off by at most one from the correct nearest value
       in the seventh decimal place, if allowed by the accuracy of the number format used.
       This likely only happens if the integer is 1. */
    int xSquare = x * x;
    double result = ((double) 1) / x;
    double term = ((double) 1) / x;
    int divisor = 1;
    double midResult;
    while (term > 0) {
        term = term / xSquare;
        divisor += 2;
        midResult = result - term / divisor;
        term = term / xSquare;
        divisor += 2;
        result = midResult + term / divisor;
        if (divisor >= 2101) {
            return (result + midResult) / 2;
        }
    }
    return result;
}
BigDecimal provides intuitive wrapper methods for all the different operations. You can have something like this to get an arbitrary precision of, for example, 99 decimal places:
import java.math.BigDecimal;
import java.math.RoundingMode;

public static void main(String[] args) {
    System.out.println(atanInvInt(5, 99));
    // 0.197395559849880758370049765194790293447585103787852101517688940241033969978243785732697828037288045
}

public static BigDecimal atanInvInt(int x, int scale) {
    BigDecimal one = new BigDecimal("1");
    BigDecimal two = new BigDecimal("2");
    BigDecimal xVal = new BigDecimal(x);
    BigDecimal xSquare = xVal.multiply(xVal);
    BigDecimal divisor = new BigDecimal(1);
    BigDecimal result = one.divide(xVal, scale, RoundingMode.FLOOR);
    BigDecimal term = one.divide(xVal, scale, RoundingMode.FLOOR);
    BigDecimal midResult;
    while (term.compareTo(new BigDecimal(0)) > 0) {
        term = term.divide(xSquare, scale, RoundingMode.FLOOR);
        divisor = divisor.add(two);
        midResult = result.subtract(term.divide(divisor, scale, RoundingMode.FLOOR));
        term = term.divide(xSquare, scale, RoundingMode.FLOOR);
        divisor = divisor.add(two);
        result = midResult.add(term.divide(divisor, scale, RoundingMode.FLOOR));
        if (divisor.compareTo(new BigDecimal(2101)) >= 0) {
            return result.add(midResult).divide(two, scale, RoundingMode.FLOOR);
        }
    }
    return result;
}
For anyone who wanted to know why it was beneficial to pose this question to begin with: That is a fair question. I have written a rather long answer to it. I believe that writing this answer helped me to articulate to myself things about the BigDecimal class that are more intuitive now that I have Armando Carballo’s answer than they were before, so writing it was hopefully educational. I can only hope that reading it will be as well, though likely in a different way if at all.
The official documentation lists methods, but it doesn’t explain how they are used in the same way that Armando Carballo’s code demonstrates. For example, while the way the BigDecimal.divide method works is pretty intuitive, there is nothing in the official documentation that says “to take the mean of two numbers, not only should you have BigDecimals for those two numbers, but you should also create a BigDecimal equal to 2 and apply the BigDecimal.divide method to the result of a BigDecimal.add operation with the 2 BigDecimal as the input for the divisor.” This is something that is simple enough to be perfectly intuitive once you see it, but if you’ve never used object-oriented methods for the specific purpose of performing arithmetic before, it may be less intuitive the first time you are trying to figure out how to take the mean.
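For what it's worth, here is a tiny sketch of the "mean of two numbers" point described above; the values and the scale are just illustrative:
import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal a = new BigDecimal("0.1");
BigDecimal b = new BigDecimal("0.25");
BigDecimal two = new BigDecimal("2");            // even the constant 2 has to be a BigDecimal
BigDecimal mean = a.add(b).divide(two, 10, RoundingMode.HALF_UP);
System.out.println(mean); // 0.1750000000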
As another example, consider the idea that to figure out whether a number is greater than or equal to another number, instead of using a Boolean operator on the two numbers, you use a compareTo method that can give three possible outputs on one number with the other number as an input, then apply a Boolean operator to the output of that method. This makes perfect sense once you see it in action and have a quick sense of how the compareTo method works, but may be less obvious when you’re staring at a quick description of the compareTo method in the official documentation, even if the description is clear and you are able to figure out what the compareTo method will output with a given BigDecimal value calling the method and a given BigDecimal input as the comparison value. For anyone who has used compareTo methods with other classes besides BigDecimal extensively, this is probably obvious even if they’re new to the specific class, but if you haven’t used Booleans on the result of ANY compareTo method recently, it’s faster to see it.
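A small illustration of the compareTo point, loosely based on the loop condition in the code above (the variable names are just illustrative):
import java.math.BigDecimal;

BigDecimal divisor = new BigDecimal(2101);
BigDecimal limit = new BigDecimal(2101);
// compareTo returns a negative int, zero, or a positive int; the boolean
// test is applied to that int, not to the BigDecimals themselves.
if (divisor.compareTo(limit) >= 0) {
    System.out.println("divisor has reached the limit");
}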
When working with ints, you might very well write code a bit like this:
int x = 5;
x = x + 2;
System.out.println(x); // should be 7
Here, the “2” value was never declared to be an int. The result of the addition was the same as if we had declared y=2 and said that x = x+y instead of x = x+2, but with the above lines of code no named variable, or Integer object if we used those instead of primitive ints, was created for the “2”. With BigDecimal, on the other hand, since the BigDecimal.add method requires BigDecimals as inputs, it would be mandatory to create a BigDecimal equal to 2 in order to add 2. I don’t see anything in the official documentation that says “use this as a more accurate substitute for doubles, or for longs if you want something more versatile than BigInteger, but in addition to using it as a substitute for declared variables, also create BigDecimal objects equal to small integers that by themselves wouldn’t call for the use of the BigDecimal class so that you can use them in operations. Both your variables and the small values you are adding to them need to be BigDecimals if you want to use BigDecimals.”
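To make that concrete, here is roughly what the int-style "x = x + 2" turns into; BigDecimal.valueOf is shown as one convenient way to wrap the small constant:
import java.math.BigDecimal;

BigDecimal x = new BigDecimal("5");
x = x.add(BigDecimal.valueOf(2)); // the 2 itself must be a BigDecimal before it can be added
System.out.println(x);            // prints 7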
Finally, let me explain something that has the potential to make the BigDecimal class more intimidating than it needs to be. Anyone who has ever worked with primitive arrays and tried to predict in advance, at the time the array is created, exactly how large it needs to be, or who is familiar with how lower-level languages involve situations in which a programmer needs to know exactly how many bytes something takes up, may feel the need for caution when dealing with something that seems to demand a specified level of precision upfront. The documentation says this: “If no rounding mode is specified and the exact result cannot be represented, an exception is thrown; otherwise, calculations can be carried out to a chosen precision and rounding mode by supplying an appropriate MathContext object to the operation.”
A newbie reading that sentence for the first time may think they are going to have to reason extensively about rounding when writing their code for the first time or else face exceptions as soon as a value cannot be represented exactly, or that they are going to have to read the documentation on the MathContext object as well before using BigDecimal, which in turn might lead to reading IEEE standards that help grant an understanding of floating-point numbers but are far removed from what the person actually wanted to code. Seeing that some of the constructors for BigDecimal take arrays as inputs and that others take a MathContext as an input, along with noticing that one of the constructors for the related BigInteger class takes a byte array as the input, may strengthen the feeling that using this class requires a very fine understanding of the exact number of digits that will be used for the specific calculations, and that understanding MathContext is more or less essential to even the most basic use of the class.
While I'm sure understanding MathContext is helpful, baby's first BigDecimal project can actually work perfectly well without learning that added functionality at the same time as the first use of BigDecimal. Reading up on the scale parameter might also lead a coder looking up the class for the first time to believe that it is necessary to predict the order of magnitude of the answer in advance in order to use the class at all.
Armando Carballo’s commendable answer shows that these concerns of a hypothetical newbie are overblown. While a rounding mode does need to be specified fairly often, and a consistent scale is often passed as a parameter when using the divide method, the scale parameter is actually a fairly arbitrary specification of the desired accuracy in terms of number of decimal places, not something that requires pinpoint predictions about exactly what numbers the class will handle (unless the ultimate purpose for which the BigDecimal is being used requires a finely controlled level of accuracy, in which case it is fairly easy to specify). An “infinite” series of added and subtracted terms to compute an arc tangent was processed without ever declaring a MathContext object.
This question already has answers here:
Use of java.math.MathContext
(5 answers)
Closed 7 years ago.
First off, my search skills may not be as good as I hoped, so maybe this kind of question exists already. If so, please tell me.
See this code below:
new BigDecimal("5").add(new BigDecimal("7"));
vs
new BigDecimal("5").add(new BigDecimal("7"), mathContext);
In which situations would I really need a MathContext (apart from division)?
I never use a MathContext unless I divide something. As far as I know this has always worked, so what might the drawbacks be here? Do I need a MathContext on add, subtract and multiply? I'm not that familiar with BigDecimal; I simply want to use it so I don't lose any information like when using doubles.
As I sometimes see code with a MathContext on an add, I'm too afraid to just remove it only because I think it's useless...
I read that question but didn't really find a proper answer to my specific question...
I begin with BigDecimals without a MathContext and then calculate with them. So my question is, will I ever have drawbacks with this regarding information loss / precision etc.? Or will this simply preserve the maximum information and that's it?
Edit: I never want to round. In the case of a division like 1/3 I would have to, of course, but for add, multiply and subtract I don't want any rounding. Do I then need a MathContext under any circumstance?
You need a MathContext if you are doing mathematical operations that need rounding.
If you add, subtract or multiply two numbers with decimal parts and you would like to round the result, you can also use a MathContext.
If you don't need to round anything, then you don't need it.
So it is not only there to avoid problems with the endless remainder from a division like 1/3.
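A short sketch of that last point: without a MathContext, add, subtract and multiply are exact, and the scale of the result simply grows as needed (the values here are arbitrary):
import java.math.BigDecimal;

BigDecimal a = new BigDecimal("1.05");
BigDecimal b = new BigDecimal("2.0005");
System.out.println(a.add(b));      // 3.0505   (scale is the larger of the two scales)
System.out.println(a.multiply(b)); // 2.100525 (scale is the sum of the two scales)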
I can imagine a case where you want the result to be rounded while the operands are not. An example for addition:
1.23 + 3.01 = 4.24
So maybe you want your result to have just one decimal place, so you would use a MathContext to make it
1.23 + 3.01 = 4.2
I have no idea of a real-world example, but I think it's imaginable that they exist.
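A rough sketch of that addition example; note that MathContext precision counts significant digits rather than decimal places, which in this case amounts to the same thing:
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

BigDecimal a = new BigDecimal("1.23");
BigDecimal b = new BigDecimal("3.01");
System.out.println(a.add(b));                                           // 4.24
System.out.println(a.add(b, new MathContext(2, RoundingMode.HALF_UP))); // 4.2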
I am currently writing a calculator application. I know that a double is not the best choice for good math. Most of the functions in the application have great precision, but the ones that don't produce super ugly results. My solution is to show users only 12 decimal places of precision. I chose 12 because my lowest precision comes from my numerical derivative function.
The issue I am having is that if I multiply by a scalar, then round, then divide by the scalar, the precision will most likely be thrown out of whack. If I use DecimalFormat there is no way to show only 12 digits and have the E for scientific notation show up correctly, but not be there if it doesn't need to be.
for example I want
1.23456789111213 to be 1.234567891112
but never
1.234567891112E0
but I also want
1.23456789111213E23 to be 1.234567891112E23
So basically I want to format the string of a number to 12 decimal places, preserving scientific notation, but not being scientific when it shouldn't be.
Use String.format("%.12G", doubleVariable);
That is how you use format() to display values in scientific notation, but without the scientific notation if not needed. The one caveat is that you end up with a '+' after the 'E', so yours would end up like 1.234567891112E+23
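A quick sketch of what %.12G does with the two example numbers from the question; keep in mind that the precision here counts significant digits, not decimal places:
double small = 1.23456789111213;
double big   = 1.23456789111213e23;
System.out.println(String.format("%.12G", small)); // 1.23456789111
System.out.println(String.format("%.12G", big));   // 1.23456789111E+23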
String.format("%.12d", doubleVariable);
Should give you what you are looking for in your first matter. I'm sorry but I don't know how to define when your E-notification is showed.
You'll be interested in BigDecimal, for example:
BigDecimal number = new BigDecimal("1.23456789111213");
number = number.setScale(12, RoundingMode.HALF_UP);
System.out.println(number);
Choose the RoundingMode appropriate to your needs.
I've inherited a project in which monetary amounts use the double type.
Worse, the framework it uses, and the framework's own classes, use double for money.
The framework ORM also handles retrieval of values from (and storage to) the database. In the database money values are type number(19, 7), but the framework ORM maps them to doubles.
Short of entirely bypassing the framework classes and ORM, is there anything I can do to calculate monetary values precisely?
Edit: yeah, I know BigDecimal should be used. The problem is that I am tightly tied to a framework where, e.g., the class framework.commerce.pricing.ItemPriceInfo has members double mRawTotalPrice; and double mListPrice. My company's application's own code extends, e.g., this ItemPriceInfo class.
Realistically, I can't say to my company, "scrap two years of work, and hundreds of thousands of dollars spent, basing code on this framework, because of rounding errors"
If tolerable, treat the monetary type as integral. In other words, if you're working in the US, track cents instead of dollars, if cents provides the granularity you need. Doubles can accurately represent integers up to a very large value (2^53), with no rounding errors up to that value.
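A minimal sketch of the cents-as-integers idea (the prices and tax rate are made up purely for illustration):
long priceCents = 1999;                          // $19.99 tracked in cents
long taxCents = Math.round(priceCents * 0.0825); // 8.25% tax, rounded to the nearest cent
long totalCents = priceCents + taxCents;         // exact integer arithmetic from here on
System.out.printf("Total: $%d.%02d%n", totalCents / 100, totalCents % 100); // Total: $21.64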
But really, the right thing to do is bypass the framework entirely and use something more reasonable. That's such an amateur mistake for the framework to make - who knows what else is lurking?
I didn't see you mention refactoring. I think that's your best option here. Instead of throwing together some hacks to get things working better for now, why not fix it the right way?
Here's some information on double vs BigDecimal. This post suggests using BigDecimal even though it is slower.
Plenty of people will suggest using BigDecimal and if you don't know how to use rounding in your project, that is what you should do.
If you know how to use decimal rounding correctly, use double. It's many orders of magnitude faster, and much clearer and simpler, and thus less error prone IMHO. If you use dollars and cents (or need two decimal places), you can get an accurate result for values up to 70 trillion dollars.
Basically, you won't get rounding errors if you correct for them using appropriate rounding.
BTW: The thought of rounding errors strikes terror into the heart of many developers, but they are not random errors and you can manage them fairly easily.
EDIT: consider this simple example of a rounding error.
double a = 100000000.01;
double b = 100000000.09;
System.out.println(a+b); // prints 2.0000000010000002E8
There are a number of possible rounding strategies. You can either round the result when printing/displaying. e.g.
System.out.printf("%.2f%n", a+b); // prints 200000000.10
or round the result mathematically
double c = a + b;
double r= (double)((long)(c * 100 + 0.5))/100;
System.out.println(r); // prints 2.000000001E8
In my case, I round the result when sending from the server (writing to a socket and a file), but use my own routine to avoid any object creation.
A more general round function is as follows, but if you can use printf or DecimalFormat, that can be simpler.
private static final long[] TENS = new long[19];
static {
    TENS[0] = 1;
    for (int i = 1; i < TENS.length; i++) TENS[i] = 10 * TENS[i - 1];
}

public static double round(double v, int precision) {
    assert precision >= 0 && precision < TENS.length;
    double unscaled = v * TENS[precision];
    assert unscaled > Long.MIN_VALUE && unscaled < Long.MAX_VALUE;
    long unscaledLong = (long) (unscaled + (v < 0 ? -0.5 : 0.5));
    return (double) unscaledLong / TENS[precision];
}
Note: you could use BigDecimal to perform the final rounding, especially if you need a specific rounding method.
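As a rough illustration of that note, here is the same final rounding done via BigDecimal, reusing the values from the earlier snippet:
import java.math.BigDecimal;
import java.math.RoundingMode;

double c = 100000000.01 + 100000000.09;
double r = BigDecimal.valueOf(c).setScale(2, RoundingMode.HALF_UP).doubleValue();
System.out.println(r); // prints 2.000000001E8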
Well, you don't have that many options in reality:
You can refactor the project to use e.g. BigDecimal (or something that better suits its needs) to represent money.
You can keep the doubles, but be extremely careful about overflow/underflow and loss of precision, which means adding tons of checks and refactoring an even larger proportion of the system in an unnecessary way. Not to mention how much research would be necessary if you were to do that.
You can keep things the way they are and hope nobody notices (this is a joke).
IMHO, the best solution would be to simply refactor this out. It might be some heavy refactoring, but the evil is already done and I believe that it should be your best option.
Best,
Vassil
P.S. Oh and you can treat money as integers (counting cents), but that doesn't sound like a good idea if you are going to have currency conversions, calculating interest, etc.
I think this situation is at least minimally salvageable for your code. You get the value as a double via the ORM framework. You can then convert it to BigDecimal using the static valueOf method (see here for why) before doing any math/calculations on it, and then convert it back to double only for storing it.
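The difference the static valueOf method makes is easy to see in isolation (a small sketch, not the framework code itself):
import java.math.BigDecimal;

double d = 0.1;
System.out.println(new BigDecimal(d));     // 0.1000000000000000055511151231257827021181583404541015625
System.out.println(BigDecimal.valueOf(d)); // 0.1  (valueOf goes through Double.toString, so you get the value you "see")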
Since you are extending these classes anyway, you can add getters for your double values that return them as BigDecimal when you need them.
This may not cover 100% of the cases (I would be especially worried about what the ORM or JDBC driver is doing to convert the double back to a Number type), but it is so much better than just doing the math on the raw doubles.
However, I am far from convinced that this approach is actually cheaper for the company in the long run.
Should we use double or BigDecimal for calculations in Java?
How much is the overhead in terms of performance for BigDecimal as compared to double?
For a serious financial application BigDecimal is a must.
Depending on how many digits you need, you can go with a long and a decimal factor for display.
For general floating point calculations, you should use double. If you are absolutely sure that you really do need arbitrary precision arithmetic (most applications don't), then you can consider BigDecimal.
You will find that double will significantly outperform BigDecimal (not to mention being easier to work with) for any application where double is sufficient precision.
Update: You commented on another answer that you want to use this for a finance related application. This is one of the areas where you actually should consider using BigDecimal, otherwise you may get unexpected rounding effects from double calculations. Also, double values have limited precision, and you won't be able to accurately keep track of pennies at the same time as millions of dollars.
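The classic tiny example of those rounding effects, for anyone who hasn't run into them yet:
import java.math.BigDecimal;

System.out.println(0.1 + 0.2);                                        // 0.30000000000000004
System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // 0.3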
How much is the overhead in terms of performance for BigDecimal as compared to double?
A lot. For example, a multiplication of two doubles is a single machine instruction. Multiplying two BigDecimals is probably a minimum of 50 machine instructions, and has complexity of O(N * M) where M and N are the number of bytes used to represent the two numbers.
However, if your application requires the calculation to be "decimally correct", then you need to accept the overhead.
However (#2) ... even BigDecimal can't do this calculation with real number accuracy:
1/3 + 1/3 + 1/3 -> ?
To do that computation precisely you would need to implement a Rational type; i.e. a pair of BigInteger values ... and something to reduce the common factors.
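Purely as an assumed sketch of what such a Rational type could look like (not a complete or production-ready implementation; negative denominators, zero checks, etc. are ignored):
import java.math.BigInteger;

final class Rational {
    final BigInteger num, den;

    Rational(BigInteger num, BigInteger den) {
        BigInteger g = num.gcd(den);  // reduce by the common factor
        this.num = num.divide(g);
        this.den = den.divide(g);
    }

    Rational add(Rational o) {
        return new Rational(num.multiply(o.den).add(o.num.multiply(den)),
                            den.multiply(o.den));
    }

    @Override
    public String toString() {
        return num + "/" + den;
    }

    public static void main(String[] args) {
        Rational third = new Rational(BigInteger.ONE, BigInteger.valueOf(3));
        System.out.println(third.add(third).add(third)); // prints 1/1, i.e. exactly one
    }
}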
However (#3) ... even a hypothetical Rational type won't give you a precise numeric representation for (say) Pi.
As always: it depends.
If you need the precision (even for "small" numbers, when representing amounts for example) go with BigDecimal.
In some scientific applications, double may be a better choice.
Even in finance we can't answer without knowing which area. For instance, if you were doing currency conversions of billions of dollars, where the conversion rate could be to 5 decimal places, you might have problems with double; whereas for simply adding and subtracting balances you'd be fine.
If you don't need to work in fractions of a cent/penny, maybe an integral type might be more appropriate, again it depends on the size of numbers involved.