Understanding the StrictMath Java library

I got bored and decided to dive into remaking the square root function without referencing any of the Math.java functions. I have gotten to this point:
package sqrt;

public class SquareRoot {

    public static void main(String[] args) {
        System.out.println(sqrtOf(8));
    }

    public static double sqrtOf(double n) {
        double x = log(n, 2);
        return powerOf(2, x / 2);
    }

    public static double log(double n, double base) {
        return (Math.log(n) / Math.log(base));
    }

    public static double powerOf(double x, double y) {
        return powerOf(e(), y * log(x, e()));
    }

    public static int factorial(int n) {
        if (n <= 1) {
            return 1;
        } else {
            return n * factorial((n - 1));
        }
    }

    public static double e() {
        return 1 / factorial(1);
    }

    public static double e(int precision) {
        return 1 / factorial(precision);
    }
}
As you may very well see, I got to the point where my powerOf() function infinitely calls itself. I could replace that and use Math.exp(y * log(x, e())), so I dove into the Math source code to see how it handles my problem, which resulted in a goose chase.
public static double exp(double a) {
    return StrictMath.exp(a); // default impl. delegates to StrictMath
}
which leads to:
public static double exp(double x)
{
    if (x != x)
        return x;
    if (x > EXP_LIMIT_H)
        return Double.POSITIVE_INFINITY;
    if (x < EXP_LIMIT_L)
        return 0;

    // Argument reduction.
    double hi;
    double lo;
    int k;
    double t = abs(x);
    if (t > 0.5 * LN2)
    {
        if (t < 1.5 * LN2)
        {
            hi = t - LN2_H;
            lo = LN2_L;
            k = 1;
        }
        else
        {
            k = (int) (INV_LN2 * t + 0.5);
            hi = t - k * LN2_H;
            lo = k * LN2_L;
        }
        if (x < 0)
        {
            hi = -hi;
            lo = -lo;
            k = -k;
        }
        x = hi - lo;
    }
    else if (t < 1 / TWO_28)
        return 1;
    else
        lo = hi = k = 0;

    // Now x is in primary range.
    t = x * x;
    double c = x - t * (P1 + t * (P2 + t * (P3 + t * (P4 + t * P5))));
    if (k == 0)
        return 1 - (x * c / (c - 2) - x);
    double y = 1 - (lo - x * c / (2 - c) - hi);
    return scale(y, k);
}
Values that are referenced:
LN2 = 0.6931471805599453, // Long bits 0x3fe62e42fefa39efL.
LN2_H = 0.6931471803691238, // Long bits 0x3fe62e42fee00000L.
LN2_L = 1.9082149292705877e-10, // Long bits 0x3dea39ef35793c76L.
INV_LN2 = 1.4426950408889634, // Long bits 0x3ff71547652b82feL.
INV_LN2_H = 1.4426950216293335, // Long bits 0x3ff7154760000000L.
INV_LN2_L = 1.9259629911266175e-8; // Long bits 0x3e54ae0bf85ddf44L.
P1 = 0.16666666666666602, // Long bits 0x3fc555555555553eL.
P2 = -2.7777777777015593e-3, // Long bits 0xbf66c16c16bebd93L.
P3 = 6.613756321437934e-5, // Long bits 0x3f11566aaf25de2cL.
P4 = -1.6533902205465252e-6, // Long bits 0xbebbbd41c5d26bf1L.
P5 = 4.1381367970572385e-8, // Long bits 0x3e66376972bea4d0L.
TWO_28 = 0x10000000, // Long bits 0x41b0000000000000L
Here is where I start to get lost, though I can gather that at this point the answer is being approximated rather than computed exactly. I then find myself here:
private static double scale(double x, int n)
{
    if (Configuration.DEBUG && abs(n) >= 2048)
        throw new InternalError("Assertion failure");
    if (x == 0 || x == Double.NEGATIVE_INFINITY
        || ! (x < Double.POSITIVE_INFINITY) || n == 0)
        return x;
    long bits = Double.doubleToLongBits(x);
    int exp = (int) (bits >> 52) & 0x7ff;
    if (exp == 0) // Subnormal x.
    {
        x *= TWO_54;
        exp = ((int) (Double.doubleToLongBits(x) >> 52) & 0x7ff) - 54;
    }
    exp += n;
    if (exp > 0x7fe) // Overflow.
        return Double.POSITIVE_INFINITY * x;
    if (exp > 0) // Normal.
        return Double.longBitsToDouble((bits & 0x800fffffffffffffL)
                                       | ((long) exp << 52));
    if (exp <= -54)
        return 0 * x; // Underflow.
    exp += 54; // Subnormal result.
    x = Double.longBitsToDouble((bits & 0x800fffffffffffffL)
                                | ((long) exp << 52));
    return x * (1 / TWO_54);
}
TWO_54 = 0x40000000000000L
While I would say I have a good grasp of both math and programming, here I hit the point where the two meet in a Frankenstein's-monster mix. I noticed the switch to bit-level operations (which I have little to no experience with), and I was hoping someone could explain the processes that are occurring "under the hood", so to speak. Specifically, I get lost from "Now x is in primary range" in the exp() method onwards, and I don't understand what the referenced constants really represent. I'm asking for help understanding not only the methods themselves, but also how they arrive at the answer. Feel free to go as in depth as needed.
edit:
If someone could create a "strictmath" tag, that would be great. I believe its size, and the fact that the Math library derives from it, justify its existence.

On the exponential function:
What happens is that
exp(x) = 2^k * exp(x-k*log(2))
is exploited for positive x. Some magic is used to get more consistent results for large x where the reduction x-k*log(2) will introduce cancellation errors.
On the reduced x a rational approximation with minimized maximal error over the interval 0.5..1.5 is used, see Pade approximations and similar. This is based on the symmetric formula
exp(x) = exp(x/2)/exp(-x/2) = (c(x²)+x)/(c(x²)-x)
(note that the c in the code is x+c(x)-2). When using Taylor series, approximations for c(x*x)=x*coth(x/2) are based on
c(u)=2 + 1/6*u - 1/360*u^2 + 1/15120*u^3 - 1/604800*u^4 + 1/23950080*u^5 - 691/653837184000*u^6
The scale(x,n) function implements the multiplication x*2^n by directly manipulating the exponent in the bit assembly of the double floating point format.
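To make the reduction concrete, here is a minimal Java sketch (my own illustration, not the StrictMath code, and far less accurate): it reduces x to r = x - k*ln 2, evaluates a short Taylor polynomial for exp(r) on the reduced range, and rebuilds 2^k by writing the exponent field of a double directly, which is the same idea scale(x, n) uses.

// Illustrative sketch of exp(x) = 2^k * exp(r), r = x - k*ln 2.
public static double myExp(double x) {
    final double LN2 = 0.6931471805599453;
    int k = (int) Math.rint(x / LN2);     // nearest integer multiple of ln 2
    double r = x - k * LN2;               // reduced argument, |r| <= ln(2)/2

    // Short Taylor polynomial for exp(r); enough terms for a rough demo only.
    double er = 1 + r * (1 + r / 2 * (1 + r / 3 * (1 + r / 4 * (1 + r / 5))));

    return er * twoToThe(k);              // exp(x) = exp(r) * 2^k
}

// 2^k for moderate k, built by writing k into the exponent bits of a double,
// the same trick scale(x, n) uses. (Math.scalb(1.0, k) would also do.)
private static double twoToThe(int k) {
    return Double.longBitsToDouble((long) (k + 1023) << 52);
}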
Computing square roots
To compute square roots it is more advantageous to compute them directly. First reduce the range of the argument via
sqrt(x)=2^k*sqrt(x/4^k)
which can again be done efficiently by directly manipulating the bit format of double.
After x is reduced to the interval 0.5..2.0 one can then employ formulas of the form
u = (x-1)/(x+1)
y = (c(u*u)+u) / (c(u*u)-u)
based on
sqrt(x)=sqrt(1+u)/sqrt(1-u)
and
c(v) = 1+sqrt(1-v) = 2 - 1/2*v - 1/8*v^2 - 1/16*v^3 - 5/128*v^4 - 7/256*v^5 - 21/1024*v^6 - 33/2048*v^7 - ...
In a program without bit manipulations this could look like
double my_sqrt(double x) {
    double c, u, v, y, scale = 1;
    int k = 0;                            // exponent count (scale = 2^k); only scale is used below
    if (x < 0) return Double.NaN;
    while (x > 2)   { x /= 4; scale *= 2; k++; }
    while (x < 0.5) { x *= 4; scale /= 2; k--; }
    // rational approximation of sqrt on 0.5..2.0
    u = (x - 1) / (x + 1);
    v = u * u;
    c = 2 - v / 2 * (1 + v / 4 * (1 + v / 2));
    y = 1 + 2 * u / (c - u);                      // = (c+u)/(c-u)
    // one Halley iteration
    y = y * (1 + 8 * x / (3 * (3 * y * y + x)));  // = y*(y*y+3*x)/(3*y*y+x)
    // reconstruct original scale
    return y * scale;
}
One could replace the Halley step with two Newton steps, or
with a better uniform approximation in c one could replace the Halley step with one Newton step, or ...
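For the first alternative, a minimal sketch of the refinement: the Halley line in my_sqrt would be replaced by two Newton iterations for f(y) = y*y - x.

// Sketch: two Newton steps for y*y = x, as an alternative to the Halley step.
static double newtonRefine(double y, double x) {
    y = 0.5 * (y + x / y); // first Newton step
    y = 0.5 * (y + x / y); // second Newton step
    return y;
}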

Related

Binary search for square root [homework]

For an assignment I must create a method using a binary search to find the square root of an integer, and if it is not a square number, it should return an integer s such that s*s <= the number (so for 15 it would return 3). The code I have for it so far is
public class BinarySearch {
    /**
     * Integer square root. Calculates the integer part of the square root of n,
     * i.e. the integer s such that s*s <= n and (s+1)*(s+1) > n.
     * Requires n >= 0.
     *
     * @param n number to find the square root of
     * @return integer part of its square root
     */
    private static int iSqrt(int n) {
        int l = 0;
        int r = n;
        int m = ((l + r + 1) / 2);
        // loop invariant
        while (Math.abs(m * m - n) > 0) {
            if ((m) * (m) > n) {
                r = m;
                m = ((l + r + 1) / 2);
            } else {
                l = m;
                m = ((l + r + 1) / 2);
            }
        }
        return m;
    }

    public static void main(String[] args) {
        // gets stuck
        System.out.println(iSqrt(15));
        // calculates correctly
        System.out.println(iSqrt(16));
    }
}
And this returns the right number for square numbers, but gets stuck in an endless loop for other integers. I know that the problem lies in the while condition, but I can't work out what to put, because the gap between square numbers gets much bigger as the numbers get bigger (so I can't just require that the gap be below a threshold). The exercise is about invariants, if that helps at all (hence why it is set up in this way). Thank you.
Think about it: Math.abs(m*m-n) > 0 is always true for non-square numbers, because m*m-n is never zero and Math.abs() can never be negative. It is your loop condition; that's why the loop never ends.
Does this give you enough info to get you going?
You need to change the while (Math.abs(m * m - n) > 0) to allow for a margin of error, instead of requiring it be exactly equal to zero as you do right now.
Try while((m+1)*(m+1) <= n || n < m * m)
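For reference, a sketch of the original iSqrt with only the loop condition changed as suggested (everything else untouched); it still assumes m * m does not overflow an int.

private static int iSqrt(int n) {
    int l = 0;
    int r = n;
    int m = ((l + r + 1) / 2);
    // keep looping while m is not yet the integer square root:
    // either (m+1)^2 still fits under n, or m^2 overshoots n
    while ((m + 1) * (m + 1) <= n || n < m * m) {
        if (m * m > n) {
            r = m;
        } else {
            l = m;
        }
        m = ((l + r + 1) / 2);
    }
    return m;
}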
#include <cassert>
#include <cmath>

#define EPSILON 0.0000001

double msqrt(double n) {
    assert(n >= 0);
    if (n == 0 || n == 1) {
        return n;
    }
    double low = 0, high = (n < 1) ? 1 : n;   // bracket sqrt(n) even when n < 1
    double mid = (low + high) / 2.0;
    // note: for very large n this absolute tolerance on mid*mid may be hard to
    // reach; a relative tolerance would be more robust
    while (fabs(mid * mid - n) > EPSILON) {
        mid = (low + high) / 2.0;
        if (mid * mid < n) {
            low = mid;     // keep mid itself; stepping by 1 would overshoot
        } else {
            high = mid;
        }
    }
    return mid;
}
As you can see above, you simply apply binary search (the bisection method). You can decrease EPSILON to get more accurate results, but it will take more time to run.
Edit: I have written the code in C++ (sorry).
As Ken Bloom said, you have to allow an error margin. I've tested this code and it runs as expected for 15. Also, you'll need to use floats; I think this algorithm is not possible with ints (although I have no mathematical proof).
private static int iSqrt(int n) {
    float l = 0;
    float r = n;
    float m = ((l + r) / 2);
    while (Math.abs(m * m - n) > 0.1) {
        if ((m) * (m) > n) {
            r = m;
            System.out.println("r becomes: " + r);
        } else {
            l = m;
            System.out.println("l becomes: " + l);
        }
        m = ((l + r) / 2);
        System.out.println("m becomes: " + m);
    }
    return (int) m;
}

Integer Factorization

Could anyone explain to me why the algorithm below is an error-free integer factorization method that always returns a non-trivial factor of N?
I know how weird this sounds, but I designed this method 2 years ago and still don't understand the mathematical logic behind it, which is making it difficult for me to improve it. It's so simple that it involves only addition and subtraction.
public static long factorX( long N )
{
    long x = 0, y = 0;
    long b = (long) (Math.sqrt(N));
    long a = b * (b + 1) - N;
    if (a == b) return a;
    while (a != 0)
    {
        a -= (2 + 2 * x++ - y);
        if (a < 0) { a += (x + b + 1); y++; }
    }
    return (x + b + 1);
}
It seems that the above method actually finds a solution by iteration to the diophantine equation:
f(x,y) = a - x(x+1) + (x+b+1)y
where b = floor( sqrt(N) ) and a = b(b+1) - N
that is, when a = 0, f(x,y) = 0 and (x+b+1) is a factor of N.
Example: N = 8509
b = 92, a = 47
f(34,9) = 47 - 34(34+1) + 9(34+92+1) = 0
and so x+b+1 = 127 is a factor of N.
Rewriting the method:
public static long factorX(long N)
{
    long x = 1, y = 0, f = 1;
    long b = (long) (Math.sqrt(N));
    long a = b * (b + 1) - N;
    if (a == b) return a;
    while (f != 0)
    {
        f = a - x * (x + 1) + (x + b + 1) * y;
        if (f < 0) y++;
        x++;
    }
    return x + b + 1;
}
I'd really appreciate any suggestions on how to improve this method.
Here's a list of 18-digit random semiprimes and the time taken to factor them:
349752871155505651 = 666524689 x 524741059 in 322 ms
259160452058194903 = 598230151 x 433211953 in 404 ms
339850094323758691 = 764567807 x 444499613 in 1037 ms
244246972999490723 = 606170657 x 402934339 in 560 ms
285622950750261931 = 576888113 x 495109787 in 174 ms
191975635567268981 = 463688299 x 414018719 in 101 ms
207216185150553571 = 628978741 x 329448631 in 1029 ms
224869951114090657 = 675730721 x 332780417 in 1165 ms
315886983148626037 = 590221057 x 535201141 in 110 ms
810807767237895131 = 957028363 x 847213937 in 226 ms
469066333624309021 = 863917189 x 542952889 in 914 ms
OK, I used Matlab to see what was going on here. Here is the result for N=100000:
You are increasing x on each iteration, and the funny pattern of the a variable is strongly related to the remainder N % (x+b+1) (as you can see in the gray line of the plot, a + (N % (x+b+1)) - x = floor(sqrt(N))).
Thus, I think you are just finding the first factor larger than sqrt(N) by simple iteration, but with a rather obscure criterion to decide it is really a factor :D
(Sorry for the half-answer... I have to leave, I will maybe continue later).
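In Java, the behaviour this plot suggests (a hypothesis about what the method amounts to, not the original algorithm) would look simply like:

// Scan upward from floor(sqrt(N)) + 1 and return the first divisor found.
// For a semiprime this is the larger prime factor; for a prime N it only
// stops at N itself. Assumes N >= 2.
static long firstFactorAboveSqrt(long N) {
    long b = (long) Math.sqrt(N);
    for (long d = b + 1; d <= N; d++) {
        if (N % d == 0) return d;
    }
    return N;
}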
Here is the Matlab code, in case you want to test it yourself:
clear all
close all

N = int64(100000);

histx = [];
histDiffA = [];
histy = [];
hista = [];
histMod = [];
histb = [];

x = int64(0);
y = int64(0);
b = int64(floor(sqrt(double(N))));
a = int64(b*(b+1) - N);

if( a==b )
    factor = a;
else
    while ( a ~= 0 )
        a = a - ( 2 + 2*x - y );
        histDiffA(end+1) = ( 2 + 2*x - y );
        x = x + 1;
        if( a < 0 )
            a = a + (x + b + 1);
            y = y + 1;
        end
        hista(end+1) = a;
        histb(end+1) = b;
        histx(end+1) = x;
        histy(end+1) = y;
        histMod(end+1) = mod(N, (x+b+1));
    end
    factor = x + b + 1;
end

figure('Name', 'Values');
hold on
plot(hista, '-or')
plot(hista + histMod - histx, '--*', 'Color', [0.7 0.7 0.7])
plot(histb, '-ob')
plot(histx, '-*g')
plot(histy, '-*y')
legend({'a', 'a+mod(N,x+b+1)-x', 'b', 'x', 'y'}); % 'Input',
hold off

fprintf( 'factor is %d \n', factor );
Your method is a variant of trial multiplication of (n-a)*(n+b), where n = floor(sqrt(N)) and b == 1.
The algorithm then iterates a-- / b++ until the difference (n-a)*(n+b) - N == 0.
The partial differences (with respect to a and b) are proportional to 2b and 2a respectively, so no true multiplications are necessary.
The complexity is a linear function of |a| or |b| -- the more "square" N is, the faster the method converges. In summary, there are much faster methods, one of the easiest to understand being the quadratic residue sieve.
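A minimal sketch of that reading of the method (my reconstruction, not the original code): keep a running product (n-a)*(n+b) and update it with additions and subtractions only, since incrementing a subtracts (n+b) and incrementing b adds (n-a).

// Factor N by trial multiplication around n = floor(sqrt(N)). Assumes N >= 2.
static long factorByTrialMultiplication(long N) {
    long n = (long) Math.sqrt(N);
    long a = 0, b = 1;
    long p = (n - a) * (n + b);            // current trial product
    while (p != N) {
        if (p > N) { p -= (n + b); a++; }  // shrink the small factor (n - a)
        else       { p += (n - a); b++; }  // grow the large factor (n + b)
    }
    return n + b; // a factor of N (may be trivial when N is prime or a prime square)
}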
Pardon my C#, I don't know Java.
Stepping x and y by 2 increases the algorithm's speed.
using System.Numerics; // needed for BigInteger

/* Methods ************************************************************/

private static BigInteger sfactor(BigInteger k) // factor odd integers
{
    BigInteger x, y;
    int flag;
    x = y = iSqrt(k);                      // integer square root
    if (x % 2 == 0) { x -= 1; y += 1; }    // if even, make x & y odd
    do
    {
        flag = BigInteger.Compare((x * y), k);
        if (flag > 0) x -= 2;
        y += 2;
    } while (flag != 0);
    return x;
} // end of sfactor()

// Finds the integer square root of a positive number
private static BigInteger iSqrt(BigInteger num)
{
    if (0 == num) { return 0; }            // avoid zero divide
    BigInteger n = (num / 2) + 1;          // initial estimate, never low
    BigInteger n1 = (n + (num / n)) >> 1;  // right shift to divide by 2
    while (n1 < n)
    {
        n = n1;
        n1 = (n + (num / n)) >> 1;         // right shift to divide by 2
    }
    return n;
} // end iSqrt()

Calculating powers of integers

Is there any other way in Java to calculate a power of an integer?
I use Math.pow(a, b) now, but it returns a double, and that is usually a lot of work, and looks less clean when you just want to use ints (a power will then also always result in an int).
Is there something as simple as a**b like in Python?
When the base is 2, keep in mind that you can use the simple and fast shift expression 1 << exponent.
Example:
2^2 = 1 << 2 = (int) Math.pow(2, 2)
2^10 = 1 << 10 = (int) Math.pow(2, 10)
For larger exponents (over 31) use long instead:
2^32 = 1L << 32 = (long) Math.pow(2, 32)
btw. in Kotlin you have shl instead of <<, so:
(Java) 1L << 32 = 1L shl 32 (Kotlin)
Integers are only 32 bits. This means that the max value is 2^31 - 1. Even for fairly small bases and exponents, you quickly get a result which can't be represented by an integer anymore. That's why Math.pow uses double.
If you want arbitrary integer precision, use BigInteger.pow. But it's of course less efficient.
The best algorithm is based on the recursive definition of a^b (exponentiation by squaring).
long pow(long a, int b)
{
    if (b == 0) return 1;
    if (b == 1) return a;
    if (b % 2 == 0) return pow(a * a, b / 2);      // even: a^b = (a^2)^(b/2)
    else            return a * pow(a * a, b / 2);  // odd:  a^b = a * (a^2)^(b/2)
}
The running time of the operation is O(log b).
No, there is not something as short as a**b.
Here is a simple loop, if you want to avoid doubles:
long result = 1;
for (int i = 1; i <= b; i++) {
result *= a;
}
If you want to use pow and convert the result in to integer, cast the result as follows:
int result = (int)Math.pow(a, b);
Google Guava has math utilities for integers.
IntMath
import java.util.*;

public class Power {
    public static void main(String args[])
    {
        Scanner sc = new Scanner(System.in);
        int num = 0;
        int pow = 0;
        int power = 0;
        System.out.print("Enter number: ");
        num = sc.nextInt();
        System.out.print("Enter power: ");
        pow = sc.nextInt();
        System.out.print(power(num, pow));
    }

    public static int power(int a, int b)
    {
        int power = 1;
        for (int c = 0; c < b; c++)
            power *= a;
        return power;
    }
}
Guava's math libraries offer two methods that are useful when calculating exact integer powers:
pow(int b, int k) calculates b to the kth power, and wraps on overflow
checkedPow(int b, int k) is identical except that it throws ArithmeticException on overflow
Personally, checkedPow() meets most of my needs for integer exponentiation and is cleaner and safer than using the double versions and rounding, etc. In almost all the places I want a power function, overflow is an error (or impossible, but I want to be told if the impossible ever becomes possible).
If you want a long result, you can just use the corresponding LongMath methods and pass int arguments.
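For illustration, a quick usage sketch of the two methods mentioned (assuming Guava's com.google.common.math.IntMath is on the classpath):

import com.google.common.math.IntMath;

public class IntMathDemo {
    public static void main(String[] args) {
        System.out.println(IntMath.pow(3, 10));        // 59049
        System.out.println(IntMath.checkedPow(3, 10)); // 59049, or ArithmeticException on overflow
    }
}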
Well, you can simply use Math.pow(a, b) as you did earlier and just convert its value with an (int) cast. Below is an example:
int x = (int) Math.pow(a, b);
where a and b could be double or int values, as you want.
This will simply convert the output to an integer value, as you required.
A simple (no checks for overflow or for validity of arguments) implementation for the repeated-squaring algorithm for computing the power:
/** Compute a**p, assume the result fits in a 32-bit signed integer */
int pow(int a, int p)
{
    int res = 1;
    int i1 = 31 - Integer.numberOfLeadingZeros(p); // highest bit index
    for (int i = i1; i >= 0; --i) {
        res *= res;
        if ((p & (1 << i)) > 0)
            res *= a;
    }
    return res;
}
The time complexity is logarithmic to exponent p (i.e. linear to the number of bits required to represent p).
I modified Qx__'s answer (bounds check, even check, negative exponent check). Use at your own risk: 0^-1, 0^-2, etc. return 0.
private static int pow(int x, int n) {
    if (n == 0)
        return 1;
    if (n == 1)
        return x;
    if (n < 0) { // always 1^n = 1, and 2^-1 (= 0.5) rounds to ~1
        if (x == 1 || (x == 2 && n == -1))
            return 1;
        else
            return 0;
    }
    if ((n & 1) == 0) { // n is even
        long num = pow(x * x, n / 2);
        if (num > Integer.MAX_VALUE) // check bounds
            return Integer.MAX_VALUE;
        return (int) num;
    } else {
        long num = (long) x * pow(x * x, n / 2); // widen before multiplying so the bounds check can work
        if (num > Integer.MAX_VALUE) // check bounds
            return Integer.MAX_VALUE;
        return (int) num;
    }
}
base is the number that you want to raise to a power, and n is the power. We return 1 if n is 0, and we return the base if n is 1; otherwise we use the formula base * (power(base, n-1)). E.g.: 2 raised to the power 2 using this formula is 2 (base) * 2 (power(base, n-1)).
public int power(int base, int n) {
    return n == 0 ? 1 : (n == 1 ? base : base * (power(base, n - 1)));
}
There are some issues with the pow method:
We can replace y % 2 == 0 with (y & 1) == 0; bitwise operations are faster.
Your code always decrements y and performs an extra multiplication, including the cases when y is even. It's better to put that part into the else clause.
public static long pow(long x, int y) {
    long result = 1;
    while (y > 0) {
        if ((y & 1) == 0) {
            x *= x;
            y >>>= 1;
        } else {
            result *= x;
            y--;
        }
    }
    return result;
}
Use the logic below to calculate the n-th power of a.
Normally, if we want to calculate a^n, we multiply 'a' n times, so the time complexity of this approach is O(n).
Instead, split the power n in two: multiply 'a' only n/2 times and square the result. The time complexity is then reduced to O(n/2).
public int calculatePower1(int a, int b) {
    if (b == 0) {
        return 1;
    }
    int val = (b % 2 == 0) ? (b / 2) : (b - 1) / 2;
    int temp = 1;
    for (int i = 1; i <= val; i++) {
        temp *= a;
    }
    if (b % 2 == 0) {
        return temp * temp;
    } else {
        return a * temp * temp;
    }
}
Apache Commons Math has ArithmeticUtils.pow(int k, int e).
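For example, a minimal usage sketch (assuming Apache Commons Math 3 is on the classpath):

import org.apache.commons.math3.util.ArithmeticUtils;

public class ArithmeticUtilsDemo {
    public static void main(String[] args) {
        System.out.println(ArithmeticUtils.pow(2, 10)); // 1024
    }
}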
import java.util.Scanner;

class Solution {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int t = sc.nextInt();
        for (int i = 0; i < t; i++) {
            try {
                long x = sc.nextLong();
                System.out.println(x + " can be fitted in:");
                if (x >= -128 && x <= 127) {
                    System.out.println("* byte");
                }
                if (x >= -32768 && x <= 32767) {
                    System.out.println("* short");
                    System.out.println("* int");
                    System.out.println("* long");
                } else if (x >= -Math.pow(2, 31) && x <= Math.pow(2, 31) - 1) {
                    System.out.println("* int");
                    System.out.println("* long");
                } else {
                    System.out.println("* long");
                }
            } catch (Exception e) {
                System.out.println(sc.next() + " can't be fitted anywhere.");
            }
        }
    }
}
int arguments are acceptable when there is a double parameter. So Math.pow(a, b) works for int arguments. It returns a double; you just need to cast it to int.
int i = (int) Math.pow(3,10);
Without using the pow function, and handling both positive and negative exponents:
public class PowFunction {
    public static void main(String[] args) {
        int x = 5;
        int y = -3;
        System.out.println(x + " raised to the power of " + y + " is " + Math.pow(x, y));
        float temp = 1;
        if (y > 0) {
            for (; y > 0; y--) {
                temp = temp * x;
            }
        } else {
            for (; y < 0; y++) {
                temp = temp * x;
            }
            temp = 1 / temp;
        }
        System.out.println("power value without using pow method. :: " + temp);
    }
}
Unlike Python (where powers can be calculated as a**b), Java has no such shortcut for raising one number to the power of another.
Java has a function named pow in the Math class, which returns a double value:
double pow(double base, double exponent)
But you can also calculate integer powers using the same function. In the following program I do exactly that and finally convert the result into an integer (typecasting). Follow the example:
import java.util.*;
import java.lang.*; // contains the Math library

public class Main {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();           // accept integer n
        int m = sc.nextInt();           // accept integer m
        int ans = (int) Math.pow(n, m); // calculates n ^ m
        System.out.println(ans);        // prints the answer
    }
}
Alternatively, java.math.BigInteger.pow(int exponent) returns a BigInteger whose value is (this^exponent). The exponent is an integer rather than a BigInteger. Example:
import java.math.*;

public class BigIntegerDemo {
    public static void main(String[] args) {
        BigInteger bi1, bi2;  // create 2 BigInteger objects
        int exponent = 2;     // create and assign value to exponent
        // assign value to bi1
        bi1 = new BigInteger("6");
        // perform pow operation on bi1 using exponent
        bi2 = bi1.pow(exponent);
        String str = "Result is " + bi1 + "^" + exponent + " = " + bi2;
        // print bi2 value
        System.out.println(str);
    }
}

How to re-implement the sin() method in Java? (to get results close to Math.sin())

I know Math.sin() works, but I need to implement it myself using factorial(int). I already have a factorial method; below is my sin method, but I can't get the same result as Math.sin():
public static double factorial(double n) {
    if (n <= 1) // base case
        return 1;
    else
        return n * factorial(n - 1);
}

public static double sin(int n) {
    double sum = 0.0;
    for (int i = 1; i <= n; i++) {
        if (i % 2 == 0) {
            sum += Math.pow(1, i) / factorial(2 * i + 1);
        } else {
            sum += Math.pow(-1, i) / factorial(2 * i + 1);
        }
    }
    return sum;
}
You should use the Taylor series (there is a great tutorial here). I can see that you've tried, but your sin method is incorrect:
public static double sin(int n) {
    // angle to radians
    double rad = n * 1. / 180. * Math.PI;
    // the first element of the Taylor series
    double sum = rad;
    // add terms up to a certain precision (e.g. 10)
    final int PRECISION = 10;
    for (int i = 1; i <= PRECISION; i++) {
        if (i % 2 == 0)
            sum += Math.pow(rad, 2 * i + 1) / factorial(2 * i + 1);
        else
            sum -= Math.pow(rad, 2 * i + 1) / factorial(2 * i + 1);
    }
    return sum;
}
A working example of calculating the sin function. Sorry I've jotted it down in C++, but hope you get the picture. It's not that different :)
Your formula is wrong and you are getting a rough result of sin(1) and all you're doing by changing n is changing the accuracy of this calculation. You should look the formula up in Wikipedia and there you'll see that your n is in the wrong place and shouldn't be used as the limit of the for loop but rather in the numerator of the fraction, in the Math.pow(...) method. Check out Taylor Series
It looks like you are trying to use the Taylor series expansion for sin, but have not included the powers of x. Therefore, your method will always attempt to approximate sin(1) regardless of the argument.
The method parameter only controls accuracy. In a good implementation, a reasonable value for that parameter is auto-detected, preventing the caller from passing too low a value, which can result in highly inaccurate results for large x. Moreover, to assist fast convergence of the series (and prevent unnecessary loss of significance), implementations usually use the identity sin(x + 2*k*PI) = sin(x) to first move x into the range [-PI, PI].
Also, your method is not very efficient, due to the repeated evaluations of factorials. (To evaluate factorial(5) you compute factorial(3), which you have already computed in the previous iteration of the for-loop).
Finally, note that your factorial implementation accepts an argument of type double, but is only correct for integers, and your sin method should probably receive the angle as double.
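Putting those points together, a minimal sketch (my own, not the poster's code) of a Taylor-series sin with range reduction, incremental term updates (no repeated factorials), and a convergence-based stopping rule:

public static double sin(double x) {
    // Range reduction: sin(x + 2*k*PI) = sin(x), move x into [-PI, PI]
    x = x % (2 * Math.PI);
    if (x > Math.PI)  x -= 2 * Math.PI;
    if (x < -Math.PI) x += 2 * Math.PI;

    double term = x;   // current term x^(2i+1) / (2i+1)!
    double sum = x;
    for (int i = 1; Math.abs(term) > 1e-17; i++) {
        // next term = previous term * (-x^2) / ((2i) * (2i+1))
        term *= -x * x / ((2 * i) * (2 * i + 1));
        sum += term;
    }
    return sum;
}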
sin(x) can be represented as a Taylor series:
sin(x) = x/1! - x^3/3! + x^5/5! - x^7/7! + ...
So you can write your code like this:
public static double getSine(double x) {
    double result = 0;
    for (int i = 0, j = 1, k = 1; i < 100; i++, j = j + 2, k = k * -1) {
        result = result + ((Math.pow(x, j) / factorial(j)) * k);
    }
    return result;
}
Here we have run our loop only 100 times. If you want to run more than that, you need to change your base equation (otherwise infinite values will occur in the factorial).
I learned a very good trick from the book "How to Solve It by Computer" by R.G. Dromey. He explains it this way:
x^3/3! = (x*x*x)/(3*2*1) = (x^2/(3*2)) * (x^1/1!)                 i = 3
x^5/5! = (x*x*x*x*x)/(5*4*3*2*1) = (x^2/(5*4)) * (x^3/3!)         i = 5
x^7/7! = (x*x*x*x*x*x*x)/(7*6*5*4*3*2*1) = (x^2/(7*6)) * (x^5/5!) i = 7
So the terms x^2/(3*2), x^2/(5*4), x^2/(7*6) can be expressed as x^2/(i*(i-1)) for i = 3, 5, 7, ...
Therefore, to generate consecutive terms of the sine series we can write:
current i-th term = (x^2 / (i*(i-1))) * (previous term)
The code is as follows:
public static double getSine(double x) {
    double result = 0;
    double term = x;
    result = x;
    for (int i = 3, j = -1; i < 100000000; i = i + 2, j = j * -1) {
        term = x * x * term / (i * (i - 1));
        result = result + term * j;
    }
    return result;
}
Note that the j variable is used to alternate the sign of the term.

Newton's method with specified digits of precision

I'm trying to write a function in Java that calculates the n-th root of a number. I'm using Newton's method for this. However, the user should be able to specify how many digits of precision they want. This is the part with which I'm having trouble, as my answer is often not entirely correct. The relevant code is here: http://pastebin.com/d3rdpLW8. How could I fix this code so that it always gives the answer to at least p digits of precision? (without doing more work than is necessary)
import java.util.Random;
public final class Compute {
private Compute() {
}
public static void main(String[] args) {
Random rand = new Random(1230);
for (int i = 0; i < 500000; i++) {
double k = rand.nextDouble()/100;
int n = (int)(rand.nextDouble() * 20) + 1;
int p = (int)(rand.nextDouble() * 10) + 1;
double math = n == 0 ? 1d : Math.pow(k, 1d / n);
double compute = Compute.root(n, k, p);
if(!String.format("%."+p+"f", math).equals(String.format("%."+p+"f", compute))) {
System.out.println(String.format("%."+p+"f", math));
System.out.println(String.format("%."+p+"f", compute));
System.out.println(math + " " + compute + " " + p);
}
}
}
/**
 * Returns the n-th root of a positive double k, accurate to p decimal
 * digits.
 *
 * @param n
 *            the degree of the root.
 * @param k
 *            the number to be rooted.
 * @param p
 *            the decimal digit precision.
 * @return the n-th root of k
 */
public static double root(int n, double k, int p) {
double epsilon = pow(0.1, p+2);
double approx = estimate_root(n, k);
double approx_prev;
do {
approx_prev = approx;
// f(x) / f'(x) = (x^n - k) / (n * x^(n-1)) = (x - k/x^(n-1)) / n
approx -= (approx - k / pow(approx, n-1)) / n;
} while (abs(approx - approx_prev) > epsilon);
return approx;
}
private static double pow(double x, int y) {
if (y == 0)
return 1d;
if (y == 1)
return x;
double k = pow(x * x, y >> 1);
return (y & 1) == 0 ? k : k * x;
}
private static double abs(double x) {
return Double.longBitsToDouble((Double.doubleToLongBits(x) << 1) >>> 1);
}
private static double estimate_root(int n, double k) {
// Extract the exponent from k.
long exp = (Double.doubleToLongBits(k) & 0x7ff0000000000000L);
// Format the exponent properly.
int D = (int) ((exp >> 52) - 1023);
// Calculate and return 2^(D/n).
return Double.longBitsToDouble((D / n + 1023L) << 52);
}
}
Just iterate until the update is less than, say, 0.0001 if you want a precision of 4 decimals.
That is, set your epsilon to Math.pow(10, -n) if you want n digits of precision.
Let's recall what the error analysis of Newton's method says. Basically, it gives us the error for the nth iteration as a function of the error of the (n-1)th iteration.
So, how can we tell if the error is less than k? We can't, unless we know the error at e(0). And if we knew the error at e(0), we would just use that to find the correct answer.
What you can do is say "e(0) <= m". You can then find n such that e(n) <= k for your desired k. However, this requires knowing the maximal value of f'' in your radius, which is (in general) just as hard a problem as finding the x intercept.
What you're checking is if the error changes by less than k, which is a perfectly acceptable way to do it. But it's not checking if the error is less than k. As Axel and others have noted, there are many other root-approximation algorithms, some of which will yield easier error analysis, and if you really want this, you should use one of those.
You have a bug in your code. Your pow() method's last line should read
return (y & 1) == 1 ? k : k * x;
rather than
return (y & 1) == 0 ? k : k * x;
