I am writing a "triangle solver" app for Android, and I was wondering if it would be possible to implement exact values for trig ratios and radian measures. For example, 90 degrees would be output as "pi / 2" instead of 1.57079632679...
I know that in order to get the exact value for a radian measure, I would divide it by pi and convert it to a fraction. I don't know how I would convert the decimal to a fraction.
like this:
double decimal = angleMeasure / Math.PI;
someMethodToTurnItIntoAFraction(decimal);
I don't even know where to begin with the trig ratios.
You need to take the number and divide it by each of the "special" numbers: pi, e, sqrt(2), sqrt(3), sqrt(5). After each division, determine if the resulting number is close to an exact fraction. To do the last part, use the continued fraction algorithm to find good approximations to the number. There are criteria you can use in the continued fraction expansion to determine if the approximation is nearly exact. If you get a nice fraction with small numbers that is nearly exact then that's your answer - the fraction times the special number that was divided by at the beginning. Oh and consider "1" as a divisor so simple fractions come out too.
Been there, done that, works well. I don't recall the algorithm for getting approximate fractions without storing and collapsing the entire continued fraction, but it's been linked here on SO recently.
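A minimal sketch of that step, in case it helps (the method name, iteration cap and tolerance are my own choices, not part of the answer above): expand x as a continued fraction and stop at the first convergent p/q that is nearly exact with small numbers.
// Approximate x by a small fraction p/q via its continued fraction expansion.
// Returns null if no fraction with a small denominator matches within eps.
static long[] asFraction(double x, double eps) {
    long pPrev = 1, qPrev = 0;                 // convergent "minus one"
    long p = (long) Math.floor(x), q = 1;      // first convergent: floor(x)/1
    double frac = x - Math.floor(x);
    for (int i = 0; i < 20 && q <= 1000; i++) {
        if (Math.abs(x - (double) p / q) < eps) {
            return new long[] { p, q };        // p/q is (nearly) exact
        }
        if (frac < eps) break;                 // expansion ended without a good match
        double inv = 1.0 / frac;
        long a = (long) Math.floor(inv);       // next continued fraction term
        frac = inv - a;
        long pNext = a * p + pPrev, qNext = a * q + qPrev;
        pPrev = p; qPrev = q; p = pNext; q = qNext;
    }
    return null;
}
Called on angleMeasure / Math.PI, an angle of Math.PI / 2 divides to 0.5 and comes back as {1, 2}, i.e. pi/2; running the same routine after dividing by sqrt(2), sqrt(3), etc. covers the other "special" numbers.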
What you're talking about is using Pi as a concept instead of a number. I'd do something like this:
class Fraction {
    public int num;
    public int den;

    public Fraction(int n, int d) {
        num = n;
        den = d;
    }

    public Fraction() {
        num = 1;
        den = 1;
    }

    public double decValue() {
        return ((double) num) / ((double) den);
    }
}
yadda, yadda....
public static Fraction someMethod(double decVal) {
    Fraction f = new Fraction(1, 1);
    double howclose = 0.0000001; // tiny amount of error allowed
    while (Math.abs((f.decValue() * Math.PI) - decVal) > howclose) {
        if (f.decValue() * Math.PI > decVal) {
            f.den++;
        } else {
            f.num++;
        }
    }
    return f;
}
Basically, work on getting the fraction closer and closer to the expected answer (decVal). The fraction will be in the form of:
num*PI
------
den
In other words, multiply the fraction in the result by Pi and it should be very close to decVal.
Nothing stops you from working with fractions directly. Integer, Double, etc. are just objects that can be used with the four operations +, -, *, /. You can write a Fraction object that supports the same operations (as plain methods rather than operators; BigInteger is an example of that style of use), but performs them in its own way. For some aspects of creating new number types see SICP, and for an implementation in Java see these notes.
EDIT
What I mean is not creating your someMethodToTurnItIntoAFraction, but using natural fractions themselves. I.e. your code will look like this:
Fraction f = new Fraction(angleMeasure, Fraction.PI);
System.out.println(f.getNum() + "/" + f.getDen());
It will take more time, but will keep your numbers precise.
IIRC, chips compute trigonometric functions using Taylor polynomials, which are just sums of fractions. So you could implement that computation yourself and keep it all in fractions. It will be slow, of course.
http://en.wikipedia.org/wiki/Taylor_series
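A rough sketch of that idea using doubles (an exact version would accumulate the terms with a fraction/rational type instead; the term count here is an arbitrary choice):
// sin(x) from its Taylor series x - x^3/3! + x^5/5! - ...
// Each term is built from the previous one, so only multiplication and division are needed;
// swap double for a Fraction type to keep everything exact.
static double sinTaylor(double x, int terms) {
    double term = x;                              // k = 0 term
    double sum = x;
    for (int k = 1; k < terms; k++) {
        term *= -x * x / ((2 * k) * (2 * k + 1)); // x^(2k-1)/(2k-1)! -> x^(2k+1)/(2k+1)!
        sum += term;
    }
    return sum;
}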
In Java, a double takes 64 bits but stores (and computes with) numbers imprecisely.
E.g. the following code:
double a = 10.125d;
double b = 7.065d;
System.out.println(a-b);
prints out 3.0599999999999996 rather than 3.06.
So, the question - what about utilizing those 64 bits to store two 32-bit integers (first to represent the whole part, second the decimal part)?
Then calculations would be precise, right?
The naive pseudo-code implementation with unhandled decimal transfer:
primitive double {
    int wholePart;
    int decimalPart;

    public double + (double other) {
        return double(this.wholePart + other.wholePart, this.decimalPart + other.decimalPart);
    }

    // other methods in the same fashion

    public String toString() {
        return wholePart + "." + decimalPart;
    }
}
Is there a reason for Java to store doubles imprecisely rather than using an implementation like the one above?
There is one big problem with your solution: ints are signed, so the decimal part could be negative, which makes no sense. Beyond that, you could not store the same range of values as a real double, and you would be missing the values Double.NEGATIVE_INFINITY, Double.NaN and Double.POSITIVE_INFINITY. See how floating point numbers are stored in binary, e.g. in this SO question, or read IEEE 754, the standard that defines how floating point numbers are stored in binary, to understand why.
But yes, generally speaking, if you need the precision it's a good idea to work with integer arithmetic instead of floating point arithmetic (again, see the question linked above for the reasons why). The easiest way is to pick a different unit: the smallest unit you'll need.
Assume, for example, you want to calculate prices in euros €. If you store them as floats you risk inaccuracy, which you really don't want when working with prices. So instead of storing € amounts, store how many cents (the smallest unit here) something costs, and the problem is eliminated.
For large integers there is also BigInteger, so this approach also works for very large (or, respectively, very small) values.
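A tiny sketch of the cents idea from above (the numbers are made up for illustration):
// Keep money as whole cents in a long instead of euros in a double.
long priceCents = 1013;                    // 10.13 EUR
long vatCents = priceCents * 19 / 100;     // 19% VAT; integer division, so the rounding rule is explicit
long totalCents = priceCents + vatCents;
System.out.printf("%d.%02d%n", totalCents / 100, totalCents % 100); // 12.05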
class Rextester
{
    public static void main(String args[])
    {
        double b = 1.13f * 100;
        System.out.println(b);
    }
}
In the above code, when f is not appended to 1.13 the output is 112.99999999999999, but when f is appended to 1.13 the value is 113. Why does this happen?
The f suffix tells Java that the number is a single precision floating point number, instead of a double precision floating point number.
The problem with floating point numbers in general, is that certain numbers cannot be precisely represented. Each bit of the mantissa of the internal representation represents a fraction with a power of 2 in the denominator, so 1/2, 1/4, 1/8, 1/16 etc. Then the computer will select the closest number that represents the number you want.
What is happening in your case is that when you leave out the f, the full double precision is used, and you get the closest double to the true result (112.99999999999999). When you add the f you are telling the program to round to the closest single precision floating point value, so the first 9 that doesn't fit gets rounded and the rounding propagates up, giving the value 113.
It is a bit of a matter of coincidence for this specific number. Don't assume that using single precision floating point will always give you the expected result. Floating point arithmetic is always a bit messy in computing.
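A quick way to see both effects from the question (BigDecimal's double constructor shows the exact stored value; needs java.math.BigDecimal):
System.out.println(1.13 * 100);                     // 112.99999999999999 (double arithmetic)
System.out.println(1.13f * 100);                    // 113.0 (float arithmetic rounds to exactly 113)
System.out.println(new BigDecimal(1.13));           // the exact value stored for the double literal, slightly below 1.13
System.out.println(new BigDecimal((double) 1.13f)); // the exact value stored for the float literal, further below 1.13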
When you add f to the decimal it makes it a float constant which has only about 6 digits of precision. This makes representation error much more likely and much bigger.
When you drop the f, the decimal is a double, which has roughly half a billion times (2^29 times) the precision. This makes the representation error much smaller, and when printed as a double you are less likely to see it.
When you print a double, the library expects there to be some representation error and shows you the simplest/shortest number which has the same representation as the double. (There are in fact infinitely many real numbers that map to the same representation.)
However, this implicit rounding will only correct a very small amount and is unlikely to correct for the representation error of a float. Note: if you print using a float instead of a double it will perform greater rounding, hiding the error.
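As a quick illustration of that last point, printing the same float value two ways:
float f = 1.13f;
System.out.println(f);          // 1.13  (Float.toString rounds to the shortest string that maps back to this float)
System.out.println((double) f); // shows the error, something like 1.1299999952316284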
I've written a method for polynomial long division, and it works perfectly with "good" polynomials. By "good" I mean coefficients that divide evenly. Today I ran into an issue when I tried to divide 2*x^3-18*x^2+.... by 7.00000(much zeros)0000028*x^2 + 5*x + .... After dividing 2*x^3 by 7.000...000028*x^2 I got 0.285714....53*x. In the next step we need to multiply 0.2857....53*x by 7.00000...0000028*x^2 + 5*x + .. and subtract it from the dividend polynomial 2*x^3-18*x^2+..., which should give a new polynomial of degree 2. But because of the limitations of the double type I actually got the polynomial 2.220....E-16*x^3 - 6*x^2 + .... I know the coefficient of x^3 is in fact zero. I don't want to invent something new and strange, which is why I am asking how to resolve this cleanly and correctly. Thanks.
Many division results such as 1/7 cannot be represented exactly in either double or BigDecimal. If you go with BigDecimal you would have to pick a number of digits to preserve, and deal with rounding error. For double, you get more convenient arithmetic, but a fixed number of significant bits.
You have two options.
One is to handle rounding error. When a result is very close to zero, so close that it is probably due to rounding error, treat it as zero. I don't know whether that will work for your algorithm or not. If you go this way, you can use either double or BigDecimal.
The second option is to use a rational number package. In rational number arithmetic all division results can be represented exactly. 1/7 remains 1/7, without being rounded to a terminating decimal or binary fraction. If you go this way, search for "java rational number" (no quotes) and decide which one you like best.
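A bare-bones sketch of what such a rational type looks like (real packages add edge-case handling, overflow protection, BigInteger components and so on):
// Minimal rational number: num/den in lowest terms, denominator kept positive.
class Rational {
    final long num, den;

    Rational(long n, long d) {
        if (d == 0) throw new ArithmeticException("zero denominator");
        if (d < 0) { n = -n; d = -d; }
        long g = gcd(Math.abs(n), d);
        num = n / g;
        den = d / g;
    }

    Rational add(Rational o)      { return new Rational(num * o.den + o.num * den, den * o.den); }
    Rational subtract(Rational o) { return new Rational(num * o.den - o.num * den, den * o.den); }
    Rational multiply(Rational o) { return new Rational(num * o.num, den * o.den); }
    Rational divide(Rational o)   { return new Rational(num * o.den, den * o.num); }

    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    public String toString() { return num + "/" + den; }
}
With this, 1/7 stays exactly 1/7: new Rational(1, 7).add(new Rational(2, 7)) prints 3/7, and a coefficient that should be zero really is 0/1 rather than 2.2E-16.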
If I have an array of doubles that each have EXACTLY two decimal places, add them up altogether via a loop, and print out the total, what comes out is a number with MORE THAN two decimal places. Which is weird, because theoretically, adding two numbers that each have 2 and only 2 decimal places will NEVER produce a number that has a non-zero digit beyond the hundredths place.
Try executing this code:
double[] d = new double[2000];
for (int i = 0; i < d.length; i++) {
    d[i] = 9.99;
}
double total = 0.00;
for (int i = 0; i < d.length; i++) {
    total += d[i];
    if (("" + total).matches("[0-9]+\\.[0-9]{3,}")) { // if there are 3 or more decimal places in the total
        System.out.println("total: " + total + ", " + i); // print the total and the iteration when it occurred
    }
}
On my computer, this prints out:
total: 59.940000000000005, 5
If I round off the total to two decimal places then I'd get the same number as I would if I manually added 9.99 six times on a calculator. But how come this is happening and where are the extra decimal places coming from? Am I doing something wrong or (I doubt this is likely) is this a Java bug?
Are you familiar with base 10 to base 2 conversion (decimal to binary) for fractions? If not, look it up.
Then you'll see that although 9.99 looks pretty normal in base 10, it doesn't look that nice in binary; it's like a repeating decimal, but in binary. I'm sure you've seen a repeating decimal before, right? It doesn't end. But Java (or any language, for that matter) has to save that infinite sequence of digits in a limited number of bytes. And that's where the extra digits come from. When you convert that truncated binary back to decimal, you're really dealing with a different number. The number stored in the variable isn't exactly 9.99, it's something like 9.9999999991 (just an example, I didn't work out the math).
But you're probably interested on how to solve this, right? Look up the BigDecimal class. That's what you want to use for your calculations, especially when dealing with currency. Also, look up DecimalFormat, which is a class for writing a number as a properly formatted string. I think it does rounding for you when you want to show only 2 decimal digits and your number has a lot more, for example.
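A quick illustration of both classes (needs java.math.BigDecimal and java.text.DecimalFormat; the loop mirrors the repeated additions from the question):
// Exact decimal arithmetic: construct BigDecimal from a String, not from a double.
BigDecimal price = new BigDecimal("9.99");
BigDecimal total = BigDecimal.ZERO;
for (int i = 0; i < 6; i++) {
    total = total.add(price);
}
System.out.println(total); // 59.94, exactly

// Or keep the doubles and round only when displaying.
System.out.println(new DecimalFormat("0.00").format(59.940000000000005)); // 59.94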
If I have an array of doubles that each have EXACTLY two decimal places
Let's stop right there, because I suspect you don't. For example, you give 9.99 in your sample code. That isn't really 9.99. That's "the closest double to 9.99" as 9.99 itself can't be exactly represented in binary floating point.
At that point, the rest of your reasoning goes out of the window.
If you want values with an exact number of decimal digits, you should use a type which stores values in a decimal-centric manner, such as BigDecimal. Alternatively, store everything as integers and "know" that you're actually remembering "the value * 100" instead.
Doubles are represented in a binary format on the computer (IEEE 754). This means that certain numbers cannot be represented accurately, so the computer will use the closest number that can be represented.
E.g. 10.5 = 2^3+2+2^(-1) = 1.0101 * 2^3 (here the mantissa is in binary)
but 10.1 = 2^3+2+2^(-4)+2^(-5)+(infinite series here) = 1.0100001... * 2^3
9.99 is such a number with an infinite binary representation. Thus, when you add them together, the finite representation used by the computer enters the calculation, and the result ends up even further away from the mathematical sum than the originals were from their true values. This is why you see more digits displayed than in the original numbers.
This is because of floating point arithmetic.
Doubles and floats are not exactly real numbers; there is only a finite number of bits to represent them, while there are infinitely many real numbers in any range, so not every real number can be represented. You get the closest number that the floating point representation can hold.
Whenever you deal with floating point, remember that it is only an approximation of the number you are seeking. You might want to use BigDecimal if you need the exact number (or at least control over the error).
More info can be found in this article.
Use BigDecimal to perform floating point calculations with precision. It's a must when it comes to money.
This is a known issue that stems from the fact that binary floating point cannot represent most decimal fractions exactly. Look up "floating point arithmetic" for more details.
This is due to inaccuracies in representing decimal numbers as binary floating point values. In other words, the double literal 9.99 does not actually represent the mathematical value 9.99.
To reveal exactly what number a value such as 9.99 represents, you can let BigDecimal print the value.
Code to reveal the exact value:
System.out.println(new BigDecimal(9.99));
Output:
9.9900000000000002131628207280300557613372802734375
Btw, your reasoning would be completely accurate if you were talking about binary places instead of decimal places, since a number with two binary places can be exactly represented by a binary floating point value.
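For instance, values with only a couple of binary places add up exactly:
// 0.5, 0.25 and 0.75 are 0.1, 0.01 and 0.11 in binary, so they are stored exactly.
System.out.println(0.5 + 0.25 + 0.75); // 1.5, no stray digits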
I am trying to take two numbers and get the square of them. Several numbers work, but this one is giving me problems: pow(.0305, 2). Using a calculator I get an answer of 0.093025, but when I use Java I get an answer of 9.609999999999999E-4. I need .0305 because I am taking 3.05/100, which is .0305.
I have found through trial that if I do .pow(.305, 2), that does give me the answer I need, but then I would have to get that value from 3.05/100.
EDIT:(adding code)
double weight = 3.05;
double TapeLength = 100.00;
double ftwt = weight / TapeLength; // this gives me: 0.093025
ftwt = Math.pow(ftwt, 2);          // this gives me: 9.3025E-4
everything is cast as a double.
If you're really getting 9.609999999999999E-4 as the result, you're doing something wrong other than what you have in your question. The following code (in Eclipse 3.7.1):
class Test {
    public static void main(String args[]) {
        double dd = .0305;
        System.out.println(Math.pow(dd, 2));
        System.out.printf("%.8f\n", Math.pow(dd, 2));
    }
}
produces:
9.3025E-4
0.00093025
which are both correct (a), just expressed in different output formats. The default for double is exponential format in this case but the last line shows how you can get different formats.
(a) Just on the off chance that you're confused by the exponential form (based on your comments): 0.00093025 is the same as 9.3025E-4, since the latter means 9.3025 x 10^-4, i.e. 9.3025 with the decimal point shifted four positions to the left.
Besides the problems with the order of magnitude, which I think are just typing mistakes, if your concern is that you get 0.00096xxxxx, you should declare the values as double from the start. If you make them float you lose precision, and only then are they cast to double; the error is then compounded by squaring them.
Making them doubles will probably help, but you have to remember that when you are dealing with binary representations of decimal rationals there may not be an exact representation with a finite number of digits.
Using Math.pow(x, 2) is much, much slower than using x * x. That is because it is essentially the same as Math.exp(Math.log(x) * n). Since it does many more calculations, it tends to have a larger rounding error. (So it's a bad idea all round, IMHO.)
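A small sketch of the suggested replacement (the printed values assume a typical JVM; the two results may differ by at most 1 ulp):
double dd = .0305;
double viaMultiply = dd * dd;        // one multiplication
double viaPow = Math.pow(dd, 2);     // general-purpose routine
System.out.println(viaMultiply);     // 9.3025E-4 on a typical JVM
System.out.println(viaPow);          // 9.3025E-4 (as shown in the answer above)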