How do I use BigDecimal to increase the accuracy of this method? - java

I have written the following simple function that calculates the arctan of the inverse of an integer. I was wondering how to use BigDecimal instead of double to increase the accuracy of the results. I was also thinking of using a BigInteger to store the growing multiples of xSquare that the "term" value is divided by.
I have limited experience with the syntax for how to perform calculations on BigDecimals. How would I revise this function to use them?
/* Thanks to https://www.cygnus-software.com/misc/pidigits.htm for explaining the general calculation method
credited to John Machin.
*/
public static double atanInvInt(int x) {
    // Returns the arc tangent of an inverse integer.
    /* Terminates once the remaining amount reaches zero or the denominator reaches 2101.
       If the former happens, the accuracy should be determined by the number format used, such as double.
       If the latter happens, the result should be off by at most one from the correct nearest value
       in the seventh decimal place, if allowed by the accuracy of the number format used.
       This likely only happens if the integer is 1.
    */
    int xSquare = x * x;
    double result = ((double) 1) / x;
    double term = ((double) 1) / x;
    int divisor = 1;
    double midResult;
    while (term > 0) {
        term = term / xSquare;
        divisor += 2;
        midResult = result - term / divisor;
        term = term / xSquare;
        divisor += 2;
        result = midResult + term / divisor;
        if (divisor >= 2101) {
            return (result + midResult) / 2;
        }
    }
    return result;
}

BigDecimal provides very intuitive wrapper methods for all the different operations. You can have something like this to get an arbitrary precision of, for example, 99 decimal places:
import java.math.BigDecimal;
import java.math.RoundingMode;

public static void main(String[] args) {
    System.out.println(atanInvInt(5, 99));
    // 0.197395559849880758370049765194790293447585103787852101517688940241033969978243785732697828037288045
}

public static BigDecimal atanInvInt(int x, int scale) {
    BigDecimal one = new BigDecimal("1");
    BigDecimal two = new BigDecimal("2");
    BigDecimal xVal = new BigDecimal(x);
    BigDecimal xSquare = xVal.multiply(xVal);
    BigDecimal divisor = new BigDecimal(1);
    BigDecimal result = one.divide(xVal, scale, RoundingMode.FLOOR);
    BigDecimal term = one.divide(xVal, scale, RoundingMode.FLOOR);
    BigDecimal midResult;
    while (term.compareTo(new BigDecimal(0)) > 0) {
        term = term.divide(xSquare, scale, RoundingMode.FLOOR);
        divisor = divisor.add(two);
        midResult = result.subtract(term.divide(divisor, scale, RoundingMode.FLOOR));
        term = term.divide(xSquare, scale, RoundingMode.FLOOR);
        divisor = divisor.add(two);
        result = midResult.add(term.divide(divisor, scale, RoundingMode.FLOOR));
        if (divisor.compareTo(new BigDecimal(2101)) >= 0) {
            return result.add(midResult).divide(two, scale, RoundingMode.FLOOR);
        }
    }
    return result;
}

For anyone wondering why it was beneficial to pose this question to begin with: that is a fair question, and I have written a rather long answer to it. Writing that answer helped me articulate things about the BigDecimal class that are more intuitive to me now that I have Armando Carballo's answer than they were before, so writing it was hopefully educational. I can only hope that reading it will be as well, though likely in a different way, if at all.
The official documentation lists methods, but it doesn’t explain how they are used in the same way that Armando Carballo’s code demonstrates. For example, while the way the BigDecimal.divide method works is pretty intuitive, there is nothing in the official documentation that says “to take the mean of two numbers, not only should you have BigDecimals for those two numbers, but you should also create a BigDecimal equal to 2 and apply the BigDecimal.divide method to the result of a BigDecimal.add operation with the 2 BigDecimal as the input for the divisor.” This is something that is simple enough to be perfectly intuitive once you see it, but if you’ve never used object-oriented methods for the specific purpose of performing arithmetic before, it may be less intuitive the first time you are trying to figure out how to take the mean.
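To make that concrete, here is a minimal sketch of taking the mean of two BigDecimals (the values, scale, and rounding mode are illustrative, and java.math.BigDecimal and java.math.RoundingMode are assumed to be imported):
BigDecimal a = new BigDecimal("3.5");
BigDecimal b = new BigDecimal("4.5");
BigDecimal two = new BigDecimal(2);
// mean = (a + b) / 2: add first, then divide by a BigDecimal holding 2
BigDecimal mean = a.add(b).divide(two, 10, RoundingMode.FLOOR); // 4.0000000000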
As another example, consider the idea that to figure out whether a number is greater than or equal to another number, instead of using a Boolean operator on the two numbers, you use a compareTo method that can give three possible outputs on one number with the other number as an input, then apply a Boolean operator to the output of that method. This makes perfect sense once you see it in action and have a quick sense of how the compareTo method works, but may be less obvious when you’re staring at a quick description of the compareTo method in the official documentation, even if the description is clear and you are able to figure out what the compareTo method will output with a given BigDecimal value calling the method and a given BigDecimal input as the comparison value. For anyone who has used compareTo methods with other classes besides BigDecimal extensively, this is probably obvious even if they’re new to the specific class, but if you haven’t used Booleans on the result of ANY compareTo method recently, it’s faster to see it.
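For example, a greater-than-or-equal test looks something like this (values illustrative):
BigDecimal a = new BigDecimal("1.5");
BigDecimal b = new BigDecimal("1.25");
// compareTo returns a negative int, zero, or a positive int;
// the Boolean operator is then applied to that int, not to the BigDecimals
boolean aAtLeastB = a.compareTo(b) >= 0; // true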
When working with ints, you might very well write code a bit like this:
int x = 5;
x = x + 2;
System.out.println(x); // should be 7
Here, the “2” value was never declared to be an int. The result of the addition was the same as if we had declared y=2 and said that x = x+y instead of x = x+2, but with the above lines of code no named variable, or Integer object if we used those instead of primitive ints, was created for the “2”. With BigDecimal, on the other hand, since the BigDecimal.add method requires BigDecimals as inputs, it would be mandatory to create a BigDecimal equal to 2 in order to add 2. I don’t see anything in the official documentation that says “use this as a more accurate substitute for doubles, or for longs if you want something more versatile than BigInteger, but in addition to using it as a substitute for declared variables, also create BigDecimal objects equal to small integers that by themselves wouldn’t call for the use of the BigDecimal class so that you can use them in operations. Both your variables and the small values you are adding to them need to be BigDecimals if you want to use BigDecimals.”
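By way of illustration, the BigDecimal counterpart of the int snippet above would look something like this (a sketch):
BigDecimal x = new BigDecimal(5);
// the literal 2 must itself be wrapped in a BigDecimal before it can be added
x = x.add(new BigDecimal(2));
System.out.println(x); // should be 7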
Finally, let me explain something that has the potential to make the BigDecimal class more intimidating than it needs to be. Anyone who has ever worked with primitive arrays and tried to predict in advance, at the time the array is created, exactly how large it needs to be, or who is familiar with how lower-level languages involve situations where a programmer needs to know exactly how many bytes something takes up, may feel the need for caution when dealing with something that seems to demand a specified level of precision upfront. The documentation says this: “If no rounding mode is specified and the exact result cannot be represented, an exception is thrown; otherwise, calculations can be carried out to a chosen precision and rounding mode by supplying an appropriate MathContext object to the operation.”
A newbie reading that sentence for the first time may think that they will have to reason extensively about rounding when writing their code for the first time or else face exceptions as soon as a value cannot be represented exactly, or that they will have to read the documentation on the MathContext object as well before using BigDecimal, which in turn might lead to reading IEEE standards that help grant an understanding of floating-point numbers but are far removed from what the person actually wanted to code.
Seeing that some of the constructors for BigDecimal take arrays as inputs and others take a MathContext, and noticing that one of the constructors for the related BigInteger class takes a byte array, may strengthen the feeling that using this class requires a very fine understanding of the exact number of digits the specific calculations will use, and that understanding MathContext is more or less essential to even the most basic use of the class. While I'm sure understanding MathContext is helpful, baby's first BigDecimal project can actually work perfectly well without learning that added functionality at the same time as the first use of BigDecimal. Reading up on the scale parameter might likewise lead a coder looking up the class for the first time to believe that it is necessary to predict the order of magnitude of the answer in advance in order to use the class at all.
Armando Carballo's commendable answer shows that these concerns of a hypothetical newbie are overblown. While a rounding mode does need to be specified fairly often, and a consistent scale is often passed as a parameter when using the divide method, the scale parameter is a fairly arbitrary statement of the desired accuracy in decimal places, not something that requires pinpoint predictions about exactly what numbers the class will handle (unless the ultimate purpose of the calculation requires a finely controlled level of accuracy, in which case it is fairly easy to specify). An “infinite” series of added and subtracted terms to compute an arc tangent was processed without ever declaring a MathContext object.
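For the curious, a minimal sketch of the MathContext style, purely as an alternative the answer above never needed (assuming java.math.MathContext is imported alongside BigDecimal and RoundingMode):
// precision counts significant digits rather than decimal places
MathContext mc = new MathContext(100, RoundingMode.FLOOR);
BigDecimal oneFifth = BigDecimal.ONE.divide(new BigDecimal(5), mc); // 0.2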


Can you have a half of any array? [duplicate]

I am an experienced PHP developer just starting to learn Java. I am following some Lynda courses at the moment and I'm still at a really early stage. I'm writing sample programs that ask for user input and do simple calculations and such.
Yesterday I came across this situation:
double result = 1 / 2;
With my caveman brain I would think result == 0.5, but no, not in Java. Apparently 1 / 2 == 0.0. Yes, I know that if I change one of the operands to a double, the result will also be a double.
This actually scares me. I can't help but think that this is very broken. It seems naive to assume that dividing one integer by another should yield an integer; the exact quotient rarely is one.
But, as Java is very widely used and searching for 'why is java's division broken?' doesn't yield any results, I am probably wrong.
My questions are:
Why does division behave like this?
Where else can I expect to find such magic/voodoo/unexpected behaviour?
Java is a strongly typed language, so you should be aware of the types of the values in expressions. If not...
1 is an int (as is 2), so 1/2 is the integer division of 1 by 2, and the result is 0 as an int. That result is then converted to the corresponding double value, 0.0.
Integer division is different from floating-point division, just as in math (division of natural numbers is different from division of real numbers).
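A quick sketch of the difference:
System.out.println(7 / 2);   // 3   (integer division truncates)
System.out.println(7 / 2.0); // 3.5 (floating-point division)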
You are thinking like a PHP developer; PHP is a dynamically typed language. This means that types are deduced at run-time, so a fraction cannot logically produce a whole number, and a double (or float) is implied by the division operation.
Java, C, C++, C# and many other languages are statically typed languages, so when an integer is divided by an integer you get an integer back: 100/50 gives me back 2, just as 100/45 gives me 2, because 100/45 is actually 2.2222... and integer division truncates the decimal part, leaving the whole number 2.
In a statically typed language, if you want a result to be what you expect, you need to be explicit (or rely on implicit promotion), which is why making one of the operands in your division a double or float will result in floating-point division (which gives back fractions).
So in Java, you could do one of the following to get a fractional number:
double result = 1.0 / 2;
double result = 1f / 2;
double result = (float)1 / 2;
Going from a loosely typed, dynamic language to a strongly typed, static language can be jarring, but there's no need to be scared. Just understand that you have to take extra care with validation: beyond validating input, you also have to mind your types.
Going from PHP to Java, you should know you cannot do something like this:
$result = "2.0";
$result = "1.0" / $result;
echo $result * 3;
In PHP, this would produce the output 1.5 (since (1/2)*3 == 1.5), but in Java,
String result = "2.0";
result = "1.0" / result;
System.out.println(result * 1.5);
This will result in a compile error, because you cannot divide a string (it's not a number).
Hope that can help.
I'm by no means an authority on this, but I think it's because of how the operators are defined to do integer arithmetic. Java sees that both operands are ints, so it uses the overload of the division operator that takes two ints and performs integer division. If it didn't, Java would have to cast each operand to a double inside that overload every time, which is essentially useless when you can perform the cast yourself beforehand.
If you try it in C++, you will see the same result.
The reason is that the expression is evaluated before the value is assigned to the variable. The numbers you typed (1 and 2) are integers, so they are treated as integers, and the division is carried out as integer division. Only after that is the result converted to a double, which gives 0.0.
Why does division behave like this?
Because the language specification defines it that way.
Where else can I expect to find such magic/voodoo/unexpected behaviour?
Since you're basically calling "magic/voodoo" something which is perfectly defined in the language specification, the answer is "everywhere".
So the question is actually why this design decision was made in Java. From my point of view, int division resulting in an int is a perfectly sound design decision for a strongly typed language. Pure int arithmetic is used very often, so if int division produced a float or double, you would need a lot of rounding, which would not be good.
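As a sketch of why that matters, the classic midpoint computation relies on int division staying in int:
int lo = 0, hi = 9;
int mid = (lo + hi) / 2; // 4: usable directly as an array index, no rounding needed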
package demo;

public class ChocolatesPurchased {
    public static void main(String args[]) {
        float p = 3;
        float cost = 2.5f;
        p *= cost;
        System.out.println(p); // prints 7.5
    }
}

Java - Practical difference between 2.0 and 2.00000000000D

I'm rewriting some scientific code that someone else wrote a while back, and throughout the code constants are always declared as:
final double value = 2.0000000000D;
with an apparently arbitrary length of supposedly significant digits attached to it. I'm 95% sure that declaring variables in this way actually does nothing, and that setting the value to:
double value = 2.0;
would be exactly the same. But just to be sure, I'm asking SO, does declaring a constant in this fashion make any (meaningful) difference, or is this likely just a relic from another language where it might have made a difference?
EDIT: In response to an answer below: Yes, I verified that the answer is the same in this particular instance before asking this question. Maybe I should have been more specific. Is there ever an instance, where we would expect that adding additional "significant" digits would in fact return a different number? I suppose it's possible if we get to really large or really small values where floating point numbers start to have resolution issues?
In your example it makes no difference. You can verify this like so:
final double value1 = 2.0000000000D;
double value2 = 2.0;
System.out.println(value1 == value2);
Which outputs
true
It's also legal to use double value2 = 2.;
Uh, there is definitely some misunderstanding in that legacy code.
double value = 2;
works just as well.
It is pointless to pad a literal with zero decimals, and it is likely erroneous to spell out a double with many decimals. For example,
double preciseNumber = 3.1415926535897932384626433832795028;
is bogus:
the developer assumes that a double has arbitrary decimal precision, but that is not the case;
the developer assumes that the double will store the identical decimal number as intended, but that's not the case either, since it is stored as binary fractions instead of decimal fractions.
For precise calculations, consider using BigDecimal (and pass the initial value as a String to ensure that no decimals get lost).
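A quick sketch of why the String constructor matters here:
// the double literal 0.1 already carries binary rounding error,
// and new BigDecimal(double) preserves that error exactly
System.out.println(new BigDecimal(0.1));
// 0.1000000000000000055511151231257827021181583404541015625
System.out.println(new BigDecimal("0.1"));
// 0.1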

Double my money: my framework uses doubles for monetary amounts

I've inherited a project in which monetary amounts use the double type.
Worse, the framework it uses, and the framework's own classes, use double for money.
The framework ORM also handles retrieval of values from (and storage to) the database. In the database money values are type number(19, 7), but the framework ORM maps them to doubles.
Short of entirely bypassing the framework classes and ORM, is there anything I can do to calculate monetary values precisely?
Edit: yeah, I know BigDecimal should be used. The problem is that I am tightly tied to a framework where, e.g., the class framework.commerce.pricing.ItemPriceInfo has members double mRawTotalPrice; and double mListPrice. My company's application code extends, e.g., this ItemPriceInfo class.
Realistically, I can't say to my company, "scrap two years of work, and hundreds of thousands of dollars spent, basing code on this framework, because of rounding errors"
If tolerable, treat the monetary type as integral. In other words, if you're working in the US, track cents instead of dollars, if cents provides the granularity you need. Doubles can accurately represent integers up to a very large value (2^53) (no rounding errors up to that value).
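A minimal sketch of the cents-as-integers idea (variable names and the tax rate here are purely illustrative):
long priceInCents = 5054; // $50.54 tracked as 5054 cents
long taxInCents = Math.round(priceInCents * 0.0825); // rounding is explicit, in one place
long totalInCents = priceInCents + taxInCents;
System.out.printf("$%d.%02d%n", totalInCents / 100, totalInCents % 100); // $54.71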
But really, the right thing to do is bypass the framework entirely and use something more reasonable. That's such an amateur mistake for the framework to make - who knows what else is lurking?
I didn't see you mention refactoring. I think that's your best option here. Instead of throwing together some hacks to get things working better for now, why not fix it the right way?
Here's some information on double vs BigDecimal. This post suggests using BigDecimal even though it is slower.
Plenty of people will suggest using BigDecimal and if you don't know how to use rounding in your project, that is what you should do.
If you know how to use decimal rounding correctly, use double. It's many orders of magnitude faster, and much clearer and simpler, and thus less error prone IMHO. If you use dollars and cents (or need two decimal places), you can get an accurate result for values up to 70 trillion dollars.
Basically, you won't get rounding errors if you correct for them with appropriate rounding.
BTW: The thought of rounding errors strikes terror into the heart of many developers, but they are not random errors and you can manage them fairly easily.
EDIT: consider this simple example of a rounding error.
double a = 100000000.01;
double b = 100000000.09;
System.out.println(a+b); // prints 2.0000000010000002E8
There are a number of possible rounding strategies. You can either round the result when printing/displaying. e.g.
System.out.printf("%.2f%n", a+b); // prints 200000000.10
or round the result mathematically
double c = a + b;
double r = (double) ((long) (c * 100 + 0.5)) / 100;
System.out.println(r); // prints 2.000000001E8
In my case, I round the result when sending from the server (writing to a socket and a file), but use my own routine to avoid any object creation.
A more general round function follows; but if you can use printf or DecimalFormat, that may be simpler.
private static final long[] TENS = new long[19];
static {
    TENS[0] = 1;
    for (int i = 1; i < TENS.length; i++)
        TENS[i] = 10 * TENS[i - 1];
}

public static double round(double v, int precision) {
    assert precision >= 0 && precision < TENS.length;
    double unscaled = v * TENS[precision];
    assert unscaled > Long.MIN_VALUE && unscaled < Long.MAX_VALUE;
    long unscaledLong = (long) (unscaled + (v < 0 ? -0.5 : 0.5));
    return (double) unscaledLong / TENS[precision];
}
Note: you could use BigDecimal to perform the final rounding, especially if you need a specific rounding method.
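A sketch of that BigDecimal variant (assuming java.math.BigDecimal and java.math.RoundingMode are imported):
public static double roundViaBigDecimal(double v, int precision) {
    // BigDecimal.valueOf goes through Double.toString, avoiding the
    // binary-expansion surprises of new BigDecimal(double)
    return BigDecimal.valueOf(v)
            .setScale(precision, RoundingMode.HALF_UP)
            .doubleValue();
}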
Well, you don't have that many options in reality:
You can refactor the project to use e.g. BigDecimal (or something that better suits its needs) to represent money.
You can keep the doubles but be extremely careful about overflow/underflow and loss of precision, which means adding tons of checks and refactoring an even larger proportion of the system in an otherwise unnecessary way, not to mention how much research that would take.
You can keep things the way they are and hope nobody notices (this is a joke).
IMHO, the best solution would be to simply refactor this out. It might be some heavy refactoring, but the evil is already done and I believe that it should be your best option.
Best,
Vassil
P.S. Oh and you can treat money as integers (counting cents), but that doesn't sound like a good idea if you are going to have currency conversions, calculating interest, etc.
I think this situation is at least minimally salvageable for your code. You get the value as a double via the ORM framework. You can then convert it to BigDecimal using the static valueOf method (see here for why) before doing any math/calculations on it, and then convert it back to double only for storing it.
Since you are extending these classes anyway, you can add getters for your double values that return them as BigDecimal when you need that.
This may not cover 100% of the cases (I would be especially worried about what the ORM or JDBC driver is doing to convert the double back to a Number type), but it is so much better than just doing the math on the raw doubles.
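A minimal sketch of that boundary conversion (the subclass, accessor names, and scale here are illustrative, not actual framework API):
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SafeItemPriceInfo extends ItemPriceInfo { // hypothetical subclass
    // all arithmetic happens in BigDecimal; doubles exist only at the ORM boundary
    public BigDecimal getRawTotalPriceAsBigDecimal() {
        return BigDecimal.valueOf(getRawTotalPrice()); // assumes a double getter exists
    }

    public void setRawTotalPrice(BigDecimal value) {
        // scale 7 mirrors the number(19, 7) database column; assumes a double setter exists
        setRawTotalPrice(value.setScale(7, RoundingMode.HALF_UP).doubleValue());
    }
}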
However, I am far from convinced that this approach is actually cheaper for the company in the long run.

How do I round up currency values in Java?

Okay, here's my problem.
I have a number like .53999999.
How do I round it up to 54 without using any of the Math functions?
I'm guessing I have to multiply by 100 to scale it, then divide?
Something like that?
The issue is with money. let's say I have $50.5399999 I know how to get the $50, but I don't have how to get the cents. I can get the .539999 part, but I don't know how to get it to 54 cents.
I would use something like:
BigDecimal result = new BigDecimal("50.5399999").setScale(2, BigDecimal.ROUND_HALF_UP);
There is a great article called Make cents with BigDecimal on JavaWorld that you should take a look at.
You should use a decimal or currency type to represent money, not floating point.
Math with money is more complex than most engineers think (over generalization)
If you are doing currency calculations, I think you may be delving into problems that seem simple at their surface but are actually quite complex. For instance, rounding methods that are a result of business logic decisions that are repeated often can drastically affect the totals of calculations.
I would recommend looking at the Java Currency class for currency formatting.
Also having a look at this page on representing money in java may be helpful.
If this is homework, showing the teacher that you have thought through the real-world problem rather than just slung a bunch of code together that "works" - will surely be more impressive.
On a side note, I initially was going to suggest looking at the implementation of the Java math methods in the source code, so I took a look. I noticed that Java was using native methods for its rounding methods - just like it should.
However, a look at BigDecimal shows that there is Java source available for rounding in Java. So rather than just give you the code for your homework, I suggest that you look at the BigDecimal private method doRound(MathContext mc) in the Java source.
If 50.54 isn't representable in double precision, then rounding won't help.
If you're trying to convert 50.53999999 to a whole number of dollars and a whole number of cents, do the following:
double d = 50.539999; // or however many 9's, it doesn't matter
int dollars = (int)d;
double frac = d - dollars;
int cents = (int)((frac * 100) + 0.5);
Note that the addition of 0.5 in that last step is to round to the nearest whole number of cents. If you always want it to round up, change that to add 0.9999999 instead of 0.5.
Why would you not want to use any Math functions?
static long round(double a)
Returns the closest long to the argument.
http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Math.html
To represent money I would take the following advice instead of re-inventing the wheel:
http://www.javapractices.com/topic/TopicAction.do?Id=13
Try storing your currency as a number of cents (you could generalize this to a number of base currency units) with a long.
Edit: Since this is homework, you may not have control over the types. Consider this a lesson for future projects
long money = 5054;
long cents = money % 100;
long dollars = money / 100; // this works due to integer/long truncation
System.out.printf("$%d.%02.d", dollars, cents);
You need to make the number .535 and compare that with your original number to see if you'll round up or down. Here's how you get .535 from .53999999 (should work for any number):
double num = .53999999;
int int_num = (int) (num * 100);            // truncates to 53
double compare_num = (int_num + 0.5) / 100; // .535
compare_num would be .535 in this case. If num is greater than or equal to compare_num, round up to int_num + 1. Otherwise round down simply to int_num.
Sean seems to have it, except, if you want to impose proper rules then you may want to throw in an if statement like so:
double value = .539999;
int result = (int) (value * 100);
if (((value * 100) % result) > .5)
    result++;
I suggest you use long when rounding a double value. It won't matter for small numbers, but could make a difference for larger ones.
double d = 50.539999;
long cents = (long) (d * 100 + 0.5);
double rounded = cents / 100.0; // divide by 100.0, not 100, to avoid integer division
What exactly are you trying to do? Are you always trying to go to two digits? Or are you always trying to fix things like 99999 at the end no matter what?
According to the comments, it is in fact the former: rounding to dollars and cents. So indeed just round(100*a)/100 is what you want. (The latter would be much more complicated...)
Finally, if you just want to extract the cents part, the following would work:
double dollars_and_cents = Math.round(100 * a) / 100.0;
int cents = (int) Math.round((dollars_and_cents - (int) dollars_and_cents) * 100);
(Java has no built-in frac function, so the subtraction above stands in for it.)

Rounding negative numbers in Java

According to Wikipedia, when rounding a negative number you round its absolute value. So by that reasoning, -3.5 would be rounded to -4. But java.lang.Math.round(-3.5) returns -3. Can someone please explain this?
According to the javadoc:
Returns the closest long to the argument. The result is rounded to an integer by adding 1/2, taking the floor of the result, and casting the result to type long. In other words, the result is equal to the value of the expression:
(long)Math.floor(a + 0.5d)
Conceptually, you round up, meaning toward the next greater integer: -3 is greater than -3.5, while -4 is less.
There are a variety of methods of rounding; the one you are looking at is called Symmetrical Arithmetic Rounding (as the article states). The section you are referring to says: "This method is commonly used in mathematical applications, for example in accounting. It is the one generally taught in elementary mathematics classes." This seems to acknowledge that it is not a rule that is globally agreed upon, just the one that is most common.
Personally, I don't recall ever being taught that rule in school. My understanding of rounding has always been that .5 is rounded up, regardless of the sign of the number. Apparently the authors of Java have the same understanding. This is Asymmetrical Arithmetic Rounding.
Different tools and languages potentially use different rounding schemes. Excel apparently uses the symmetric method.
(Overall, I would advise that if you find a conflict between Wikipedia and experience, you look for information elsewhere. Wikipedia is not perfect.)
For what it's worth, java.math.BigDecimal has selectable rounding modes if you need more control over that sort of thing.
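A quick sketch of those selectable modes in action (assuming java.math.BigDecimal and java.math.RoundingMode are imported):
BigDecimal v = new BigDecimal("-3.5");
System.out.println(v.setScale(0, RoundingMode.HALF_UP));   // -4 (ties away from zero)
System.out.println(v.setScale(0, RoundingMode.HALF_DOWN)); // -3 (ties toward zero)
System.out.println(v.setScale(0, RoundingMode.HALF_EVEN)); // -4 (ties to the even neighbor)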
The Wikipedia article you cite does not say that's the only way to round, just the common way to round. Also mentioned in that article are several alternatives (unfortunately none of which describe Java's rounding method - even though they call it out as "Asymmetric Arithmetic Rounding" when indicating what JavaScript does).
You need to decide how you want your numbers rounded, then use that method. If Java's implementation matches that, then great. otherwise you'll need to implement it on your own.
According to Javadocs:
Returns the closest long to the argument. The result is rounded to an integer by adding 1/2, taking the floor of the result, and casting the result to type long. In other words, the result is equal to the value of the expression:
(long)Math.floor(a + 0.5d)
Turns out the convention is to round up. I guess Wikipedia is fallible. Turns out Microsoft got it wrong, though, as they round it to -4 as well, which is not convention (I checked with someone who has a PhD in math).
// For example, dividing an int A by 200:
public class Solution {
    public int solve(int A) {
        double B = (double) A / 200;
        if (B < 0) {
            // round the absolute value, then restore the sign (symmetric rounding)
            return (int) Math.round(Math.abs(B)) * -1;
        } else {
            return (int) Math.round(B);
        }
    }
}
