http://introcs.cs.princeton.edu/java/13flow/Sqrt.java.html:
public class Sqrt {
    public static void main(String[] args) {
        // read in the command-line argument
        double c = Double.parseDouble(args[0]);
        double epsilon = 1e-15; // relative error tolerance
        double t = c;           // estimate of the square root of c

        // repeatedly apply Newton update step until desired precision is achieved
        while (Math.abs(t - c/t) > epsilon*t) {
            t = (c/t + t) / 2.0;
        }

        // print out the estimate of the square root of c
        System.out.println(t);
    }
}
The thing is... I understand perfectly well how the program itself works. The problem I have is with the equation f(x) = x^2 - c and how it relates to the code above. For instance, why divide by x, so that x^2 - c becomes x(x - c/x)? There seems to be a missing mathematical explanation in some of these examples. In other words, I'm looking for an explanation from a simple mathematical standpoint, NOT so much a coding one.
You are given c and you want to solve
t = sqrt(c)
or equivalently,
c = t^2
or then again,
c - t^2 = 0.
I'll call the above equation f(t) = 0 (no mention of c since it is a given constant).
Newton's method iterates over trial values of t, which I'll label t_i, t_{i+1}, ....
The Taylor expansion to 1st order is:
f(t_i + dt_i) = f(t_i) + dt_i * f'(t_i) + ...
So if you don't quite have f(t_i) = 0, you add a dt_i such that
f(t_i + dt_i) nearly = 0 = f(t_i) + dt_i * f'(t_i) + ...
So dt_i = -f(t_i) / f'(t_i), i.e. f(t_i + -f(t_i) / f'(t_i)) is closer to zero than f(t_i).
If you do the derivatives for f(t) = c - t^2, you'll see that the equation in the code t_{i+1} = (c / t_i + t_i) / 2 is just the iterative formula t_{i+1} = t_i + dt_i with the dt_i estimated above.
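Spelling that out: f(t) = c - t^2 gives f'(t) = -2t, so
dt = -f(t) / f'(t) = -(c - t^2) / (-2t) = (c - t^2) / (2t)
and therefore
t + dt = t + (c - t^2) / (2t) = (2t^2 + c - t^2) / (2t) = (t^2 + c) / (2t) = (c/t + t) / 2,
which is exactly the update in the code.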
This is an iterative method, so it does not give an exact solution. You need to decide when you want to stop (sufficient precision), otherwise the algorithm would go on forever. That's why you check |f(t_i)| < threshold instead of the true f(t_i) = 0. In their case they chose threshold = epsilon * t^2; the multiplication by t^2 makes the tolerance relative rather than absolute, because with a fixed constant as a threshold you would run into numerical accuracy problems (i.e. if you are playing with trillions, adjacent doubles near t ≈ 10^6 are themselves about 10^{-10} apart, so you could never reach a fixed accuracy of 10^{-10} due to the finite precision of the floating-point representation).
Based on the code, the following has already been explained in the Javadoc comment:
* Computes the square root of a nonnegative number c using
* Newton's method:
* - initialize t = c
* - replace t with the average of c/t and t
* - repeat until desired accuracy reached
Ok, I'll give it a bash (see inline comments):
public class Sqrt {
    public static void main(String[] args) {
        // read in the command-line argument (i.e. this is the value that we
        // want the square root of.)
        double c = Double.parseDouble(args[0]);

        // Since the square roots of non-squares are irrational, we need some
        // error tolerance. In other words, if the answer is less than epsilon
        // wrong (in relative terms), we'll take it.
        double epsilon = 1e-15; // relative error tolerance

        // t is our first guess (c / 2.0 works well too - in fact it tends to
        // be better.)
        double t = c; // estimate of the square root of c

        // Repeatedly apply the Newton update step until the desired precision
        // is achieved. The condition here is rather elegant and optimized...
        // to see why it works, simply break it up: for c >= 0, multiplying
        // both sides by t gives
        //     | t - c/t | > epsilon * t
        //     | t*t - c | > epsilon * t*t
        // i.e. keep iterating while t*t differs from c by more than a
        // relative error of epsilon. (The absolute value caters for guesses
        // that overshoot the root.)
        while (Math.abs(t - c/t) > epsilon*t) {
            // Improve the guess by applying Newton's genius :-)
            // Take the original number, divide it by the guess, add the
            // guess, and take the average.
            t = (c/t + t) / 2.0;
        }

        // print out the estimate of the square root of c
        System.out.println(t);
    }
}
I believe the above-mentioned code is from R. Sedgewick's book 'Introduction to Programming in Java', page 62. What he tries to say in the book is that you can use f(x) = x^2 - c as a special case to find the square root of any positive number. So how it works:
Newton's method states X(n+1) = X(n) - F(X(n))/F'(X(n)). Take F(X) = X^2 - C, with C = 2 since we are looking for the square root of 2 (if you want to find the square root of 36 then C = 36, etc.). The first derivative of F(X) is F'(X) = 2X. Applying Newton's method we get
X(n+1) = X(n) - (X(n)^2 - C)/(2*X(n))
for X(0)=2 we get
n=1: X(1) = 2 - (2^2 - 2)/(2*2) => X(1) = 1.5
n=2: X(2) = 1.5 - (1.5^2 - 2)/(2*1.5) => X(2) = 1.41666667
and so on...
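A minimal Java sketch of exactly this iteration (class and variable names are my own), which reproduces the numbers above:

public class NewtonSqrtDemo {
    public static void main(String[] args) {
        double c = 2.0; // we want sqrt(2), i.e. F(X) = X^2 - C with C = 2
        double x = 2.0; // X(0) = 2
        for (int n = 1; n <= 5; n++) {
            // X(n+1) = X(n) - F(X(n)) / F'(X(n)) = X(n) - (X(n)^2 - C) / (2*X(n))
            x = x - (x * x - c) / (2 * x);
            System.out.println("n=" + n + ", X=" + x);
        }
    }
}

The first two lines it prints are 1.5 and 1.4166666666666667, matching the hand computation.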
Related
I have to write a program in which I enter a, b, c, d (the coefficients of a cubic equation) and as a result I should get X1, X2, X3 (the solutions of the equation). I have to use Viete's formulas and BigDecimal for this, because my lecturer requires it.
I came to the conclusion that I have to solve the following system of equations:
x1+x2+x3=-b/a
x1*x2+x1*x3+x2*x3=c/a
x1*x2*x3=-d/a
I have no idea how I can do it in Java.
I tried to use the JAMA package, but I don't think I can use it to solve such a system of equations.
How can I do that?
If you want to find the roots of a cubic polynomial in Java you can do it easily using the Newton-Raphson method.
The algorithm -
1. Input: initial x, func(x), derivFunc(x)
   Output: root of func()
2. Compute the values of func(x) and derivFunc(x) for the given initial x
3. Compute h: h = func(x) / derivFunc(x)
4. While |h| is greater than the allowed error ε
   - h = func(x) / derivFunc(x)
   - x = x - h
Here is a demonstration of solving the cubic equation x^3 - x^2 + 2 = 0:
class XYZ {
    static final double EPSILON = 0.001;

    // An example function whose root is
    // determined using the Newton-Raphson method.
    // The function is x^3 - x^2 + 2
    static double func(double x)
    {
        return x * x * x - x * x + 2;
    }

    // Derivative of the above function,
    // which is 3*x^2 - 2*x
    static double derivFunc(double x)
    {
        return 3 * x * x - 2 * x;
    }

    // Function to find the root
    static void newtonRaphson(double x)
    {
        double h = func(x) / derivFunc(x);
        while (Math.abs(h) >= EPSILON)
        {
            h = func(x) / derivFunc(x);

            // x(i+1) = x(i) - f(x) / f'(x)
            x = x - h;
        }

        System.out.print("The value of the"
                + " root is : "
                + Math.round(x * 100.0) / 100.0);
    }

    // Driver code
    public static void main(String[] args)
    {
        // Initial value assumed
        double x0 = -20;
        newtonRaphson(x0);
    }
}
Output - The value of the root is : -1.0
To do it your way you have to solve a system of non-linear equations, which is harder but can be done using the multivariate Newton-Raphson method. You might want to look it up. Also note that this is an approximate method that refines the roots starting from an initial 'guess' of your own (in this case it's -20).
The Newton (Raphson, Kantorovich) method applied to the Viete equations gives you the (Weierstrass-)Durand-Kerner method of simultaneous root approximation. However, in the completed method you will no longer see the Viete identities; they kind of cancel out. You will need complex numbers on top of the demanded real-number data type.
If you go with the simple Newton method as in the other answer, then after computing the one real root you can split off the linear factor belonging to it via the Horner-Ruffini scheme (synthetic division) and then solve the remaining quadratic equation directly. Then you only need to consider the possibly complex nature of the roots when constructing the output strings, as the real and imaginary parts have easy direct formulas.
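For illustration, a sketch of that deflation step in Java (the method name and coefficient layout are my own, not from the question):

// Divide a*x^3 + b*x^2 + c*x + d by (x - r), where r is a real root that
// has already been computed; returns {a2, b2, c2} of the remaining
// quadratic a2*x^2 + b2*x + c2. The remainder d + r*c2 should be ~0 if r
// really is a root.
static double[] deflateCubic(double a, double b, double c, double d, double r) {
    double a2 = a;
    double b2 = b + r * a2;
    double c2 = c + r * b2;
    return new double[] { a2, b2, c2 };
}

The roots of the quadratic then come from the usual formula, going complex when the discriminant b2*b2 - 4*a2*c2 is negative.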
I'm having a problem with a method which takes a polynomial like f(x) = x² + 1 and calculates possible roots with the Newton algorithm.
I have given requirements for specific variables, so even if the naming is not good or a variable is not needed, I have to use them :/
The polynomial I give my method as a parameter is a double array: for f(x) = x² + 1 it would be {1.0, 0.0, 1.0},
so it is constructed like 1.0*x^0 + 0.0*x^1 + 1.0*x^2
For my code:
x0 is the start value for the Newton algorithm and eps is the accuracy of the calculation.
I followed my given instructions and got the following code working:
public static double newton(double[] a, double x0, double eps) {
    double z;
    double xn;
    double xa = x0;
    double zaehler; // numerator:   f(x)
    double nenner;  // denominator: f'(x)
    do {
        zaehler = horner(a, xa);
        nenner = horner(ableit(a), xa); // ableit(a) yields the coefficients of the derivative
        if (nenner == 0) {
            return Double.POSITIVE_INFINITY;
        }
        xn = xa - (zaehler / nenner);
        xa = xn;
    } while (Math.abs(horner(a, xn)) >= eps);
    z = xn;
    return z; // was "return 0", which threw the computed root away
}
The method horner() calculates the y-value of a given function for a given x-value.
My problem is that if the function has no root, like x² + 1, and I start with x0 = 1 and eps = 0.1, I get Infinity returned.
But if I start with x0 = 10 and eps = 0.1, for example, I create an endless loop.
How can I deal with this, or is this a general problem with the Newton algorithm?!
Is the only way to set a fixed maximum of iterations?
The code is working for polynomials that have at least one root!
The Newton–Raphson method requires the existence of a real root x such that f(x) = 0. The function you use, x^2 + 1, has no real roots, so your algorithm will not work in this case (nor in any other where there is no root).
Since x^2+1 >= 1 for all real x this implies horner(a, xn) >= 1, so the loop
while((Math.abs(horner(a, xn))) >= eps)
will not terminate for eps < 1.
Maybe before starting to iterate, you should check the existence of a zero.
E.g. if the degree of the polynomial (the power of x of the highest nonzero coefficient) is odd, then there will be a real zero.
Or extend your algorithm such that it first tries to find some real a and b such that f(a)f(b) <= 0 (then between a and b there is a root).
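Regarding the follow-up question: yes, capping the iteration count is the usual safeguard. Here is a sketch of a bounded variant, reusing the horner() and ableit() helpers from the question (their exact behaviour is assumed from the description; returning NaN to signal failure is my own choice):

public static double newtonBounded(double[] a, double x0, double eps, int maxIter) {
    double xa = x0;
    for (int i = 0; i < maxIter; i++) {
        double nenner = horner(ableit(a), xa); // f'(xa)
        if (nenner == 0) {
            return Double.NaN; // stationary point, Newton step undefined
        }
        xa = xa - horner(a, xa) / nenner; // Newton update
        if (Math.abs(horner(a, xa)) < eps) {
            return xa; // converged
        }
    }
    return Double.NaN; // no convergence, possibly no real root (as with x^2 + 1)
}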
I am trying to create a logistic regression algorithm in Java, but when I calculate the logarithm of the likelihood it always returns NaN. My method which calculates the logarithm looks like this:
// Calculate log likelihood on given data
private double getLogLikelihood(double cat, double[] x) {
    return cat * Math.log(findProbability(x))
            + (1 - cat) * Math.log(1 - findProbability(x));
}
And the findProbability method just takes an instance from the dataset and returns the sigmoid function result, which is between 0 and 1.
// Calculate the sum of w * x for each weight and attribute,
// then call the sigmoid function with that sum
public double findProbability(double[] x) {
    double s = 0;
    for (int i = 0; i < this.weights.length; i++) {
        if (i >= x.length) break;
        s += this.weights[i] * x[i];
    }
    return sigmoid(s);
}

private double sigmoid(double s) {
    return 1 / (1 + Math.exp(-s));
}
Moreover, my starting weights are :
[-0.2982955509135178, -0.4984900460081106, -1.816880187922516, -2.7325608512266073, 0.12542715714800834, 0.1516078084483485, 0.27631147403449774, 0.1371611094778011, 0.16029832096058613, 0.3117065974657231, 0.04262385176091778, 0.1948263133838624, 0.10788353525185314, 0.770608588466501, 0.2697281907888033, 0.09920694325563077, 0.003224073601703939, 0.021573742410541247, 0.21528348692817675, 0.3275511757298476, -0.1500597314893408, -0.7221692528386277, -2.062544912370121, 1.4315146889363015, 0.2522133355419722, 0.23919315019065995, 0.3200037377021523, 0.059466770771758076, 0.04012493980772944, 0.2553236501265919]
Finally, an instance from my dataset is :[M,17.99,10.38,122.8,1001,0.1184,0.2776,0.3001,0.1471,0.2419,0.07871,1.095,0.9053,8.589,153.4,0.006399,0.04904,0.05373,0.01587,0.03003,0.006193,25.38,17.33,184.6,2019,0.1622,0.6656,0.7119,0.2654,0.4601,0.1189]
I tried to initialize the starting weights with different random numbers but that didn't solve the problem.
The arithmetic is causing a rounding error, leaving you with 1.
double b = 1 + Math.exp(-3522);
b will be equal to 1, because a double does not carry enough significant digits to represent the sum. You'll have to approximate the value to keep the precision: 1/(1+s) ~= 1 - s for small s, which means you need to calculate log(1) and log(s).
edit: sorry, I made a mistake; it appears Math.exp(-3522) is evaluated as 0 after rounding. I'll leave this answer because Math.exp(-x) might be too small to add to 1, or it might just be zero.
NaN is a result of dividing by zero or calling Math.log on a non-positive number, so you should try to find where exactly this happens. I suggest debugging, or adding code to print the values of whatever you take the logarithm of or divide by.
EDIT: it seems it is a rounding error: exp(-s) will return a result so small that, added to 1, it will still be 1, making the logarithm of the complement return -Infinity. I'd suggest you try to find a mathematical way around this, perhaps by approximating the log of the exponential.
I found a solution to my problem, so I'm posting it here:
I added an overflow check:
private double sigmoid(double s) {
    // Clamp s so that Math.exp() can neither overflow nor underflow
    if (s > 20) {
        s = 20;
    } else if (s < -20) {
        s = -20;
    }
    double exp = Math.exp(s);
    return exp / (1 + exp);
}
Also, changing 1/(1 + Math.exp(-s)) to exp/(1 + exp) proved to be more stable against small disturbances of the inputs.
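An alternative (a sketch, with helper names of my own) is to compute the log-likelihood directly from the raw score s = w·x, so that Math.log is never called on 0. Since log(sigmoid(s)) = s - softplus(s) and log(1 - sigmoid(s)) = -softplus(s), where softplus(s) = log(1 + exp(s)), the whole expression collapses to cat*s - softplus(s):

// log(1 + exp(s)) without overflow: for large s the result is ~s, and
// Math.log1p keeps precision when exp(-|s|) is tiny.
private static double softplus(double s) {
    return Math.max(s, 0) + Math.log1p(Math.exp(-Math.abs(s)));
}

// cat * log(p) + (1 - cat) * log(1 - p) with p = sigmoid(s), rewritten so
// that neither p nor 1 - p is ever formed explicitly.
private static double logLikelihood(double cat, double s) {
    return cat * s - softplus(s);
}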
How can I multiply and divide without using arithmetic operators? I read a similar question here but I still have problems with multiplication and division.
Also, how can a square root be calculated without using math functions?
If you have addition and negation, as in the highest-voted answer to the post you linked, you can use looped additions and subtractions to implement multiplication and division.
As for the square root, just implement Newton's iteration on the basis of the operations from step 1.
Using bitwise operators one example I found is here:
http://geeki.wordpress.com/2007/12/12/adding-two-numbers-with-bitwise-and-shift-operators/
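For reference, a minimal Java sketch of the idea behind that link (addition via XOR and carry propagation, multiplication via shift-and-add; an illustration, not tuned code):

// Sum two ints using only bitwise operators: XOR adds without carries,
// AND finds the carry bits, which are shifted left and folded back in.
static int add(int a, int b) {
    while (b != 0) {
        int carry = a & b;
        a = a ^ b;
        b = carry << 1;
    }
    return a;
}

// Multiply via shift-and-add, using the add() above instead of '+'.
static int multiply(int a, int b) {
    int result = 0;
    while (b != 0) {
        if ((b & 1) != 0) {
            result = add(result, a);
        }
        a <<= 1;  // next power-of-two multiple of a
        b >>>= 1; // consume the lowest bit of b
    }
    return result;
}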
Addition can be extended to multiplication and division. For sqrt you could use a Taylor series.
http://en.wikipedia.org/wiki/Taylor_series
Fast square root function (even faster than the library function!):
EDIT: not true, it is actually slower because of recent hardware improvements. This is however the code used in Quake II.
double fsqrt (double y)
{
    double x, z, tempf;
    unsigned long *tfptr = ((unsigned long *)&tempf) + 1;

    tempf = y;
    *tfptr = (0xbfcdd90a - *tfptr) >> 1; /* estimate of 1/sqrt(y) */
    x = tempf;
    z = y * 0.5;                         /* hoist out the "/2" */
    x = (1.5 * x) - (x * x) * (x * z);   /* iteration formula */
    x = (1.5 * x) - (x * x) * (x * z);
    /* x = (1.5*x) - (x*x)*(x*z);           not necessary in games */
    return x * y;
}
Since the trigonometric functions in java.lang.Math are quite slow, is there a library that does quick and good approximations? It seems possible to do a calculation several times faster without losing much precision. (On my machine a multiplication takes 1.5 ns, and java.lang.Math.sin takes 46 ns to 116 ns.) Unfortunately there is not yet a way to use the hardware functions.
UPDATE: The functions should be accurate enough, say, for GPS calculations. That means you would need at least 7 decimal digits accuracy, which rules out simple lookup tables. And it should be much faster than java.lang.Math.sin on your basic x86 system. Otherwise there would be no point in it.
For values over pi/4 Java does some expensive computations in addition to the hardware functions. It does so for a good reason, but sometimes you care more about speed than about last-bit accuracy.
Computer Approximations by Hart tabulates Chebyshev-economized approximate formulas for a bunch of functions at different precisions.
Edit: Getting my copy off the shelf, it turned out to be a different book that just sounds very similar. Here's a sin function using its tables. (Tested in C since that's handier for me.) I don't know if this will be faster than the Java built-in, but it's guaranteed to be less accurate, at least. :) You may need to range-reduce the argument first; see John Cook's suggestions. The book also has arcsin and arctan.
#include <math.h>
#include <stdio.h>
// Return an approx to sin(pi/2 * x) where -1 <= x <= 1.
// In that range it has a max absolute error of 5e-9
// according to Hastings, Approximations For Digital Computers.
static double xsin (double x) {
    double x2 = x * x;
    return ((((.00015148419 * x2
               - .00467376557) * x2
              + .07968967928) * x2
             - .64596371106) * x2
            + 1.57079631847) * x;
}

int main () {
    double pi = 4 * atan (1);
    printf ("%.10f\n", xsin (0.77));
    printf ("%.10f\n", sin (0.77 * (pi/2)));
    return 0;
}
Here is a collection of low-level tricks for quickly approximating trig functions. There is example code in C which I find hard to follow, but the techniques are just as easily implemented in Java.
Here's my equivalent implementation of invsqrt and atan2 in Java.
I could have done something similar for the other trig functions, but I have not found it necessary as profiling showed that only sqrt and atan/atan2 were major bottlenecks.
public class FastTrig
{
    /** Fast approximation of 1.0 / sqrt(x).
     * See http://www.beyond3d.com/content/articles/8/
     * @param x Positive value to estimate inverse of square root of
     * @return Approximately 1.0 / sqrt(x)
     **/
    public static double invSqrt(double x)
    {
        double xhalf = 0.5 * x;
        long i = Double.doubleToRawLongBits(x);
        i = 0x5FE6EB50C7B537AAL - (i >> 1);
        x = Double.longBitsToDouble(i);
        x = x * (1.5 - xhalf * x * x);
        return x;
    }

    /** Approximation of arctangent.
     * Slightly faster and substantially less accurate than
     * {@link Math#atan2(double, double)}.
     **/
    public static double fast_atan2(double y, double x)
    {
        double d2 = x * x + y * y;

        // Bail out if d2 is NaN, zero or subnormal
        if (Double.isNaN(d2) ||
            (Double.doubleToRawLongBits(d2) < 0x10000000000000L))
        {
            return Double.NaN;
        }

        // Normalise such that 0.0 <= y <= x
        boolean negY = y < 0.0;
        if (negY) { y = -y; }
        boolean negX = x < 0.0;
        if (negX) { x = -x; }
        boolean steep = y > x;
        if (steep)
        {
            double t = x;
            x = y;
            y = t;
        }

        // Scale to unit circle (0.0 <= y <= x <= 1.0)
        double rinv = invSqrt(d2); // rinv ≅ 1.0 / hypot(x, y)
        x *= rinv; // x ≅ cos θ
        y *= rinv; // y ≅ sin θ, hence θ ≅ asin y

        // Hack: we want: ind = floor(y * 256)
        // We deliberately force truncation by adding floating-point numbers whose
        // exponents differ greatly. The FPU will right-shift y to match exponents,
        // dropping all but the first 9 significant bits, which become the 9 LSBs
        // of the resulting mantissa.
        // Inspired by a similar piece of C code at
        // http://www.shellandslate.com/computermath101.html
        double yp = FRAC_BIAS + y;
        int ind = (int) Double.doubleToRawLongBits(yp);

        // Find φ (a first approximation of θ) from the LUT
        double φ = ASIN_TAB[ind];
        double cφ = COS_TAB[ind]; // cos(φ)

        // sin(φ) == ind / 256.0
        // Note that sφ is truncated, hence not identical to y.
        double sφ = yp - FRAC_BIAS;
        double sd = y * cφ - x * sφ; // sin(θ-φ) ≡ sinθ cosφ - cosθ sinφ

        // asin(sd) ≅ sd + ⅙sd³ (from first 2 terms of Maclaurin series)
        double d = (6.0 + sd * sd) * sd * ONE_SIXTH;
        double θ = φ + d;

        // Translate back to correct octant
        if (steep) { θ = Math.PI * 0.5 - θ; }
        if (negX) { θ = Math.PI - θ; }
        if (negY) { θ = -θ; }

        return θ;
    }

    private static final double ONE_SIXTH = 1.0 / 6.0;
    private static final int FRAC_EXP = 8; // LUT precision == 2 ** -8 == 1/256
    private static final int LUT_SIZE = (1 << FRAC_EXP) + 1;
    private static final double FRAC_BIAS =
            Double.longBitsToDouble((0x433L - FRAC_EXP) << 52);
    private static final double[] ASIN_TAB = new double[LUT_SIZE];
    private static final double[] COS_TAB = new double[LUT_SIZE];

    static
    {
        /* Populate trig tables */
        for (int ind = 0; ind < LUT_SIZE; ++ind)
        {
            double v = ind / (double) (1 << FRAC_EXP);
            double asinv = Math.asin(v);
            COS_TAB[ind] = Math.cos(asinv);
            ASIN_TAB[ind] = asinv;
        }
    }
}
This might do it: http://sourceforge.net/projects/jafama/
I'm surprised that the built-in Java functions would be so slow. Surely the JVM is calling the native trig functions on your CPU, not implementing the algorithms in Java. Are you certain your bottleneck is calls to trig functions and not some surrounding code? Maybe some memory allocations?
Could you rewrite in C++ the part of your code that does the math? Just calling C++ code to compute trig functions probably wouldn't speed things up, but moving some context too, like an outer loop, to C++ might speed things up.
If you must roll your own trig functions, don't use Taylor series alone. The CORDIC algorithms are much faster unless your argument is very small. You could use CORDIC to get started, then polish the result with a short Taylor series. See this StackOverflow question on how to implement trig functions.
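For a flavour of how CORDIC works, here is a sketch (entirely my own illustration: real CORDIC implementations use fixed-point shift-and-add, whereas this version uses doubles just to show the structure, and it is only valid for |theta| up to about pi/2):

public class CordicDemo {
    static final int N = 24;
    static final double[] ANGLES = new double[N]; // atan(2^-i)
    static final double K;                        // inverse of the combined rotation gain
    static {
        double k = 1.0;
        for (int i = 0; i < N; i++) {
            double p = Math.pow(2, -i);
            ANGLES[i] = Math.atan(p);
            k *= 1.0 / Math.sqrt(1.0 + p * p);
        }
        K = k;
    }

    // Returns {cos(theta), sin(theta)} by rotating the vector (K, 0)
    // through a fixed sequence of shrinking angles, steering each step
    // by the sign of the remaining angle z.
    static double[] sinCos(double theta) {
        double x = K, y = 0.0, z = theta, pow2 = 1.0;
        for (int i = 0; i < N; i++) {
            double d = (z >= 0) ? 1.0 : -1.0;
            double nx = x - d * y * pow2;
            double ny = y + d * x * pow2;
            z -= d * ANGLES[i];
            x = nx;
            y = ny;
            pow2 *= 0.5;
        }
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        double[] cs = sinCos(0.5);
        System.out.println(cs[1] + " vs " + Math.sin(0.5));
    }
}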
On the x86 the java.lang.Math sin and cos functions do not directly call the hardware functions because Intel didn't always do such a good job implementing them. There is a nice explanation in bug #4857011.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4857011
You might want to think hard about whether an inexact result is acceptable. It's amusing how often I spend time finding this in others' code.
"But the comment says Sin..."
You could pre-store your sin and cos in an array if you only need some approximate values.
For example, if you want to store the values from 0° to 359°:
double sin[] = new double[360];
for (int i = 0; i < sin.length; ++i) sin[i] = Math.sin(i / 180.0 * Math.PI);
you then use this array with degrees/integers instead of radians/doubles.
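If you need non-integer angles, linear interpolation between neighbouring table entries is a cheap improvement (a sketch with a name of my own; the accuracy is still far below the 7 digits asked for in the update above):

// Look up sin(degrees) with linear interpolation between table entries.
static double sinDeg(double[] table, double degrees) {
    double d = ((degrees % 360.0) + 360.0) % 360.0; // wrap into [0, 360)
    int i = (int) d;
    double frac = d - i;
    return table[i] + frac * (table[(i + 1) % 360] - table[i]);
}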
I haven't heard of any libs, probably because it's rare enough to see trig heavy Java apps. It's also easy enough to roll your own with JNI (same precision, better performance), numerical methods (variable precision / performance ) or a simple approximation table.
As with any optimization, best to test that these functions are actually a bottleneck before bothering to reinvent the wheel.
Trigonometric functions are the classical example for a lookup table. See the excellent
Lookup table article at wikipedia
If you're searching a library for J2ME you can try:
the Fixed Point Integer Math Library MathFP
The java.lang.Math functions call the hardware functions. There should be simple approximations you can make, but they won't be as accurate.
On my laptop, sin and cos take about 144 ns.
In the sin/cos test I was evaluating the functions for the integers from zero to one million. I assume that 144 ns is not fast enough for you.
Do you have a specific requirement for the speed you need?
Can you qualify your requirement in terms of a time per operation which would be satisfactory?
Check out the Apache Commons Math package if you want to use existing stuff.
If performance is really of the essence, then you can go about implementing these functions yourself using standard math methods, specifically Taylor/Maclaurin series.
For example, here are the Taylor series expansions for sine and cosine (as given in the Wikipedia article):
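sin x = x - x^3/3! + x^5/5! - x^7/7! + ...
cos x = 1 - x^2/2! + x^4/4! - x^6/6! + ...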
Could you elaborate on what you need to do if these routines are too slow? You might be able to do some coordinate transformations ahead of time, one way or another.