Does anyone know how I could make a spiral motion following the Fibonacci pattern around a point in Robocode? I have methods like setTurnRight (double), setAhead (double), getX () and getY ().
I tried to make a simple spiral that way (without following the required pattern), but it did not work; the result looked more like a circle.
this.setAhead(this.direction * Double.POSITIVE_INFINITY);
if (this.direction == 1) {
    this.setTurnRight(Utils.normalRelativeAngleDegrees(this.enemy.getBearing() + 60));
} else {
    this.setTurnRight(Utils.normalRelativeAngleDegrees(this.enemy.getBearing() + 120));
}
physics of the game:
http://robowiki.net/wiki/Robocode/Game_Physics
Robocode Logarithmic Spiral
Here is a working run method that makes a bot follow a logarithmic spiral, and I believe it is a close approximation of the golden spiral (the spiral that can be approximated with the Fibonacci numbers).
public void run() {
    double v = 5;              // constant forward speed (fed to setMaxVelocity)
    double c = Math.PI * 2;
    double a = .1;             // scaling constant for the spiral
    double b = .0053468;       // growth constant (degrees variant)
    setMaxVelocity(v);
    setAhead(100 * 999);       // keep driving "forever"
    setTurnRight(360 * 999);   // keep turning "forever"
    while (true) {
        double t = getTime();
        double f = a * Math.pow(Math.E, b * t);  // radius function f(t) = a*e^(b*t)
        double w = v / (c * f);                  // revolution speed w = v / (2*pi*f)
        setMaxTurnRate(w);
        execute();
        System.out.println(t + "\t" + w);
    }
}
Explanation
To move in a circle (a trivial spiral), you keep a constant speed (how fast the bot is moving) and a constant revolution speed (how fast the bot is turning). There are several ways to go from this trivial spiral movement to something more interesting. The simplest way to move in a spiral is to keep the speed constant and vary the revolution speed. This answer from the Game Development Stack Exchange gives a good walk-through of how to get an approximate equation for the revolution speed.
w = v / (2*pi*t) or w = v / (2*pi*f(t)) where:
w = revolution speed
v = speed
pi = 3.14...
t = time or f(t) = function of the radius over time
This equation gives a way to move along a spiral, and we can choose any spiral we want by specifying an f(t). To get the correct radius function for the golden spiral, check out this wiki page about the golden spiral. It gives this equation:
r = a*e^(b*theta) or in other words f(t) = a*e^(b*t) where:
f(t) = our radius function
a = arbitrary constant for scaling the spiral
e = Euler's number (the base of the natural logarithm, approximately 2.718)
b = .0053468 (or .3063489 if using radians)
t = time
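A quick sanity check with the constants used in the code above: at t = 0 we have f = a = 0.1, so w = 5 / (2*pi*0.1) ≈ 7.96 degrees per tick; by t = 1000, f ≈ 0.1*e^(0.0053468*1000) ≈ 21, so w ≈ 0.04 degrees per tick. In other words, the bot starts out turning about as hard as the game allows and gradually straightens out, which is the outward-opening spiral we want (the earliest values get clamped by the game's turn-rate limit, but that only affects the first stretch of the path).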
Conclusion
All that is left is to incorporate this code into your bot and choose your own values for a and v. v determines the speed of the bot, so a larger v is a good idea (the game caps velocity at 8), and since the maximum turn rate is 10 - 0.75*velocity degrees per tick, you should scale a so that w stays within that limit for as long as possible (which is why I've included the println).
[NOTE: I couldn't think of an easy way to superimpose the golden spiral on the bot's path to check its accuracy. So while it is clearly a logarithmic spiral, I am unsure to what degree it approximates the desired golden spiral.]
Related
I'm currently working on a terrain engine and I'm experimenting a little bit with noise. It's so fascinating to see what different structures, functions and pure imagination can create with just a few lines of code. Recently I saw this post: http://squall-digital.com/ProceduralGeneration.html. I was definitely intrigued by all of these techniques, but the first one especially caught my attention. The programmer made the gain (or persistence) of the noise proportional to the slope of the noise at that point. I'm currently trying to achieve this, but I don't think I'm on the right track.
I'm currently using simplex noise. I know the author of the article uses Perlin Noise and yes, I have seen how to calculate the derivative of Perlin Noise, but obviously this implementation wouldn't work because of the fundamental differences in how Perlin and Simplex noise are generated. I thus set out on my own way to try and approximate the slope of noise on a given position.
I came up with the following "algorithm":
Calculate neighboring points of noise [(x + 1, z), (x - 1, z), (x, z + 1), (x, z - 1)].
Calculate their respective noise value
Calculate differenceX and differenceZ in noise values on the x-axis and the z-axis respectively
Create vectors from origin: (2, differenceX, 0) and (0, differenceZ, 2)
Scale to vectors of length 1
Add y-components of the resulting unit vectors
Use this y-component as the "slope" approximated at the given point.
Now I have implemented this in code (I added "3D" vectors for ease of understanding):
private static float slope(OpenSimplex2F simplex, float x, float z, float noise) {
    float[] neighbours = getStraightNeighbours(simplex, x, z);
    float xSlope = (neighbours[1] - neighbours[0]) / (2.0f * x);
    float zSlope = (neighbours[3] - neighbours[2]) / (2.0f * z);
    float[] vecX = new float[] { 1, xSlope, 0 };
    float[] vecZ = new float[] { 0, zSlope, 1 };
    float scaleX = Maths.sqrt(1.0f + xSlope * xSlope);
    float scaleZ = Maths.sqrt(1.0f + zSlope * zSlope);
    for (int i = 0; i < 3; i++) {
        vecX[i] /= scaleX;
        vecZ[i] /= scaleZ;
    }
    float[] grad = new float[] {
        vecX[0] + vecZ[0],
        vecX[1] + vecZ[1],
        vecX[2] + vecZ[2]
    };
    return grad[1];
}
Now this gives me extremely underwhelming and, rest assured, wrong results.
Can anyone explain to me whether this is a good technique to approximate the slope, or whether it is completely wrong? I'm not the biggest math genius, so I was already happy I could figure this out and that it produced a result in the first place. If anyone has a resource on the derivative of simplex noise (which would be a life saver, obviously), it would be really appreciated!
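For reference, a standard way to approximate the slope of any 2D heightfield-style noise is a central-difference gradient. A minimal sketch, where noise(simplex, x, z) stands for whatever sampling helper you already have and the step h is a free parameter (note the division by the step size 2*h rather than by the coordinate):

private static float slope(OpenSimplex2F simplex, float x, float z) {
    float h = 1.0f;  // sampling step; smaller h follows the terrain more closely
    float dX = (noise(simplex, x + h, z) - noise(simplex, x - h, z)) / (2.0f * h);
    float dZ = (noise(simplex, x, z + h) - noise(simplex, x, z - h)) / (2.0f * h);
    // steepness = length of the gradient: 0 on flat terrain, larger on slopes
    return (float) Math.sqrt(dX * dX + dZ * dZ);
}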
I am just messing around a bit in Processing, since I know it better than any other language, and stumbled upon this website: Custom 2d physics engine. So far so good. I am at the point where I have two rectangles colliding and I need to resolve the collision. According to the paper I should use this code:
void ResolveCollision( Object A, Object B )
{
  // Calculate relative velocity
  Vec2 rv = B.velocity - A.velocity

  // Calculate relative velocity in terms of the normal direction
  float velAlongNormal = DotProduct( rv, normal )

  // Do not resolve if velocities are separating
  if(velAlongNormal > 0)
    return;

  // Calculate restitution
  float e = min( A.restitution, B.restitution)

  // Calculate impulse scalar
  float j = -(1 + e) * velAlongNormal
  j /= 1 / A.mass + 1 / B.mass

  // Apply impulse
  Vec2 impulse = j * normal
  A.velocity -= 1 / A.mass * impulse
  B.velocity += 1 / B.mass * impulse
}
This is written in C++, so I would need to port it to Java. And here I get stuck on two things. First: what does the author mean by "normal"? How do I get the "normal"? The second thing is these three lines of code:
Vec2 impulse = j * normal
A.velocity -= 1 / A.mass * impulse
B.velocity += 1 / B.mass * impulse
He creates a vector which has only one number? j * normal?
I don't really have a clear picture of what exactly happens, which does not really help me.
He is probably referring to this as the "normal". So normal is a vector with two elements, since you are referring to a tutorial for 2D physics. And j*normal multiplies each element of normal by the scalar j.
normal, velocity and impulse are vectors with 2 elements for coordinates x, y. From the series of tutorials you are referring to, you can see normal defined here towards the end.
The "normal" vector at a point on the boundary of a 2D or 3D shape is the vector that is:
perpendicular to the boundary at that point;
has length 1; and
points outward instead of inside the shape
The normal vector is the same all along a straight line (2D) or flat surface (3D), so you will also hear people talk about the "normal" of the line or surface in these cases.
The normal vector is used for all kinds of important calculations in graphics and physics code.
How exactly to calculate the normal vector for a point, line, or surface depends on what data structures you have representing the geometry of your objects.
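To tie this back to the code in the question, here is a rough Java sketch of the same impulse resolution. It assumes a minimal Vec2 helper class and a hypothetical Body class holding velocity, mass and restitution; the collision normal is whatever your rectangle-overlap test produced (for axis-aligned boxes it is usually the axis of least overlap):

class Vec2 {
    float x, y;
    Vec2(float x, float y) { this.x = x; this.y = y; }
    Vec2 add(Vec2 o)    { return new Vec2(x + o.x, y + o.y); }
    Vec2 sub(Vec2 o)    { return new Vec2(x - o.x, y - o.y); }
    Vec2 scale(float s) { return new Vec2(x * s, y * s); }   // this is what "j * normal" means
    float dot(Vec2 o)   { return x * o.x + y * o.y; }
}

// Body is a hypothetical class with fields: Vec2 velocity; float mass, restitution;
void resolveCollision(Body a, Body b, Vec2 normal) {
    Vec2 rv = b.velocity.sub(a.velocity);              // relative velocity
    float velAlongNormal = rv.dot(normal);             // its component along the normal
    if (velAlongNormal > 0) return;                    // already separating

    float e = Math.min(a.restitution, b.restitution);  // restitution (bounciness)
    float j = -(1 + e) * velAlongNormal;               // impulse scalar
    j /= 1 / a.mass + 1 / b.mass;

    Vec2 impulse = normal.scale(j);                    // scalar times unit normal -> a 2D vector
    a.velocity = a.velocity.sub(impulse.scale(1 / a.mass));
    b.velocity = b.velocity.add(impulse.scale(1 / b.mass));
}

So j * normal is just the scalar j multiplying both components of the unit-length normal, which turns the impulse magnitude back into a direction-aware 2D vector.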
So, I saw this on Hacker News the other day: http://web.mit.edu/tee/www/bertrand/problem.html
It basically asks: what is the probability that a random chord of a circle with radius 1 has a length greater than the square root of 3?
Looking at it, it seems obvious that the answer is 1/3, but the comments on HN have people who are smarter than me debating it. https://news.ycombinator.com/item?id=10000926
I didn't want to debate, but I did want to make sure I wasn't crazy. So I coded what I thought would prove it to be P = 1/3, but I end up getting P ~ .36. So, something's got to be wrong with my code.
Can I get a sanity check?
package com.jonas.betrand;
import java.awt.geom.Point2D;
import java.util.Random;
public class Paradox {
    final static double ROOT_THREE = Math.sqrt(3);

    public static void main(String[] args) {
        int greater = 0;
        int less = 0;
        for (int i = 0; i < 1000000; i++) {
            Point2D.Double a = getRandomPoint();
            Point2D.Double b = getRandomPoint();
            // pythagorean
            if (Math.sqrt(Math.pow((a.x - b.x), 2) + Math.pow((a.y - b.y), 2)) > ROOT_THREE) {
                greater++;
            } else {
                less++;
            }
        }
        System.out.println("Probability Observed: " + (double) greater / (greater + less));
    }

    public static Point2D.Double getRandomPoint() {
        // get an x such that -1 < x < 1
        double x = Math.random();
        boolean xsign = new Random().nextBoolean();
        if (!xsign) {
            x *= -1;
        }
        // formula for a circle centered on origin with radius 1: x^2 + y^2 = 1
        double y = Math.sqrt(1 - (Math.pow(x, 2)));
        boolean ysign = new Random().nextBoolean();
        if (!ysign) {
            y *= -1;
        }
        Point2D.Double point = new Point2D.Double(x, y);
        return point;
    }
}
EDIT: Thanks to a bunch of people setting me straight, I found that my method of picking a random point wasn't as random as I thought. Here is a fix for that function, which returns about 1/3.
public static Point2D.Double getRandomPoint() {
    // get an x such that -1 < x < 1
    double x = Math.random();
    Random r = new Random();
    if (!r.nextBoolean()) {
        x *= -1;
    }
    // circle centered on origin: x^2 + y^2 = r^2. r is 1.
    double y = Math.sqrt(1 - (Math.pow(x, 2)));
    if (!r.nextBoolean()) {
        y *= -1;
    }
    if (r.nextBoolean()) {
        return new Point2D.Double(x, y);
    } else {
        return new Point2D.Double(y, x);
    }
}
I believe you need to fix one point, say at (0, 1), and then choose a random amount of rotation in [0, 2*pi] around the circle for the location of the second endpoint of the chord.
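In Java, that could look something like this (a minimal sketch reusing Point2D.Double from the question; only the second endpoint needs to be random, and picking the angle uniformly is what makes the point uniform on the circle):

public static Point2D.Double getRandomPoint() {
    double angle = Math.random() * 2 * Math.PI;   // uniform angle in [0, 2*pi)
    return new Point2D.Double(Math.cos(angle), Math.sin(angle));
}

// in main():
// Point2D.Double a = new Point2D.Double(0, 1);   // fixed endpoint
// Point2D.Double b = getRandomPoint();           // random endpoint
// ...same distance test as before; this converges to about 1/3.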
Just for the hell of it I wrote your incorrect version in Swift (learn Swift!):
struct P {
    let x, y: Double
    init() {
        x = (Double(arc4random()) / 0xFFFFFFFF) * 2 - 1
        y = sqrt(1 - x * x) * (arc4random() % 2 == 0 ? 1 : -1)
    }
    func dist(other: P) -> Double {
        return sqrt((x - other.x) * (x - other.x) + (y - other.y) * (y - other.y))
    }
}

let root3 = sqrt(3.0)
let total = 100_000_000
var samples = 0
for var i = 0; i < total; i++ {
    if P().dist(P()) > root3 {
        samples++
    }
}
println(Double(samples) / Double(total))
And the answer is indeed 0.36. As the comments have been explaining, a random X value is more likely to choose the "flattened area" around pi/2 and highly unlikely to choose the "vertically squeezed" area around 0 and pi.
It is easily fixed however in the constructor for P:
(Double(arc4random()) / 0xFFFFFFFF is fancy-speak for a random floating-point number in [0, 1).)
let angle = Double(arc4random()) / 0xFFFFFFFF * M_PI * 2
x = cos(angle)
y = sin(angle)
// outputs 0.33334509
Bertrand's paradox is exactly that: a paradox. The answer can be argued to be 1/3 or 1/2 depending on how the problem is interpreted. It seems you took the random chord approach, where one end of the chord is fixed and the other end is drawn to a random point on the circle. With this method, the chance of drawing a chord longer than sqrt(3) is indeed 1/3.
But if you use a different approach, which I'll call the random radius approach, you'll see that the answer can be 1/2! The random radius approach works like this: you draw a radius of the circle, pick a point uniformly along it, and take the chord through that point perpendicular to the radius. With this method, a random chord will be longer than sqrt(3) half of the time.
Lastly, the random midpoint method: choose a random point inside the circle and draw the chord that has this point as its midpoint. If the point falls within a concentric circle of radius 1/2, the chord is longer than sqrt(3); if it falls outside that circle, the chord is shorter than sqrt(3). A circle of radius 1/2 has 1/4 the area of a circle with radius 1, so the chance of a chord longer than sqrt(3) is 1/4.
As for your code, I haven't had time to look at it yet, but I hope this clarifies the paradox (which is really an under-specified question, not an actual paradox) :D
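If you want to see all three interpretations side by side in Java, here is a rough sketch (my own code, not part of the question; each method works on the unit circle and uses the chord-length facts described above):

import java.util.Random;

public class BertrandMethods {
    static final double ROOT_THREE = Math.sqrt(3);

    public static void main(String[] args) {
        Random r = new Random();
        int n = 1_000_000, byEndpoints = 0, byRadius = 0, byMidpoint = 0;
        for (int i = 0; i < n; i++) {
            // 1) random endpoints: two uniform angles on the circle
            double a1 = r.nextDouble() * 2 * Math.PI;
            double a2 = r.nextDouble() * 2 * Math.PI;
            if (2 * Math.abs(Math.sin((a1 - a2) / 2)) > ROOT_THREE) byEndpoints++;

            // 2) random radius: chord perpendicular to a radius, at uniform distance d from the center
            double d = r.nextDouble();
            if (2 * Math.sqrt(1 - d * d) > ROOT_THREE) byRadius++;

            // 3) random midpoint: uniform point in the disk is the chord's midpoint
            double mx, my;
            do { mx = r.nextDouble() * 2 - 1; my = r.nextDouble() * 2 - 1; } while (mx * mx + my * my > 1);
            if (2 * Math.sqrt(1 - (mx * mx + my * my)) > ROOT_THREE) byMidpoint++;
        }
        System.out.println("endpoints: " + (double) byEndpoints / n);  // ~1/3
        System.out.println("radius:    " + (double) byRadius / n);     // ~1/2
        System.out.println("midpoint:  " + (double) byMidpoint / n);   // ~1/4
    }
}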
I would argue that the Bertrand paradox is less a paradox and more a cautionary lesson in probability. It's really asking the question: What do you mean by random?
Bertrand argued that there are three natural but different methods for randomly choosing a chord, giving three distinct answers. There are, of course, other random methods, but they are arguably not the most natural ones (that is, not the first that come to mind). For example, we could position the two chord endpoints non-uniformly, or position the chord midpoint according to some non-uniform density, such as a truncated bivariate normal.
To simulate the three methods in a programming language, you need to be able to generate uniform random variables on the unit interval, which is what all standard (pseudo-)random number generators should do. For one of the methods/solutions (the random midpoint one), you then have to take the square root of one of the uniform random variables. You then multiply the random variables by a suitable factor (or rescale). Then, for each simulation method (or solution), some geometry gives the expressions for the two endpoints.
For more details, I have written a post about this problem. I recommend the links and books I have cited at the end of that post, under the section Further reading. For example, see Section 1.3 in this new set of published lecture notes. The Bertrand paradox is also in The Pleasures of Probability by Isaac. It’s covered in a non-mathematical way in the book Paradoxes from A to Z by Clark.
I have also uploaded some simulation code in MATLAB, R and Python, which can be found here.
For example, in Python (with NumPy):
import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt #for plotting
from matplotlib import collections as mc #for plotting line chords
###START Parameters START###
#Simulation disk dimensions
xx0=0; yy0=0; #center of disk
r=1; #disk radius
numbLines=10**2;#number of lines
###END Parameters END###
###START Simulate three solutions on a disk START###
#Solution A
thetaA1=2*np.pi*np.random.uniform(0,1,numbLines); #choose angular component uniformly
thetaA2=2*np.pi*np.random.uniform(0,1,numbLines); #choose angular component uniformly
#calculate chord endpoints
xxA1=xx0+r*np.cos(thetaA1);
yyA1=yy0+r*np.sin(thetaA1);
xxA2=xx0+r*np.cos(thetaA2);
yyA2=yy0+r*np.sin(thetaA2);
#calculate midpoints of chords
xxA0=(xxA1+xxA2)/2; yyA0=(yyA1+yyA2)/2;
#Solution B
thetaB=2*np.pi*np.random.uniform(0,1,numbLines); #choose angular component uniformly
pB=r*np.random.uniform(0,1,numbLines); #choose radial component uniformly
qB=np.sqrt(r**2-pB**2); #distance to circle edge (along the line)
#calculate trig values
sin_thetaB=np.sin(thetaB);
cos_thetaB=np.cos(thetaB);
#calculate chord endpoints
xxB1=xx0+pB*cos_thetaB+qB*sin_thetaB;
yyB1=yy0+pB*sin_thetaB-qB*cos_thetaB;
xxB2=xx0+pB*cos_thetaB-qB*sin_thetaB;
yyB2=yy0+pB*sin_thetaB+qB*cos_thetaB;
#calculate midpoints of chords
xxB0=(xxB1+xxB2)/2; yyB0=(yyB1+yyB2)/2;
#Solution C
#choose a point uniformly in the disk
thetaC=2*np.pi*np.random.uniform(0,1,numbLines); #choose angular component uniformly
pC=r*np.sqrt(np.random.uniform(0,1,numbLines)); #choose radial component
qC=np.sqrt(r**2-pC**2); #distance to circle edge (along the line)
#calculate trig values
sin_thetaC=np.sin(thetaC);
cos_thetaC=np.cos(thetaC);
#calculate chord endpoints
xxC1=xx0+pC*cos_thetaC+qC*sin_thetaC;
yyC1=yy0+pC*sin_thetaC-qC*cos_thetaC;
xxC2=xx0+pC*cos_thetaC-qC*sin_thetaC;
yyC2=yy0+pC*sin_thetaC+qC*cos_thetaC;
#calculate midpoints of chords
xxC0=(xxC1+xxC2)/2; yyC0=(yyC1+yyC2)/2;
###END Simulate three solutions on a disk END###
I understand that the dot (or inner) product of two quaternions is the angle between the rotations (including the axis-rotation). This makes the dot product equal to the angle between two points on the quaternion hypersphere.
I can not, however, find how to actually compute the dot product.
Any help would be appreciated!
current code:
public static float dot(Quaternion left, Quaternion right){
float angle;
//compute
return angle;
}
Defined are Quaternion.w, Quaternion.x, Quaternion.y, and Quaternion.z.
Note: It can be assumed that the quaternions are normalised.
The dot product for quaternions is simply the standard Euclidean dot product in 4D:
dot = left.x * right.x + left.y * right.y + left.z * right.z + left.w * right.w
Then the angle you are looking for is the arccos of the dot product (note that the dot product itself is not the angle): acos(dot).
However, if you are looking for the relative rotation between two quaternions, say from q1 to q2, you should compute the relative quaternion q = q1^-1 * q2 and then find the rotation associated with q.
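Putting that into the method from the question (a sketch, assuming the w, x, y, z fields mentioned there and unit quaternions; the relativeAngle helper is mine, not part of any particular library):

public static float dot(Quaternion left, Quaternion right) {
    return left.w * right.w + left.x * right.x
         + left.y * right.y + left.z * right.z;
}

// Angle of the relative rotation from q1 to q2. For unit quaternions the scalar
// part of q1^-1 * q2 equals dot(q1, q2), and a unit quaternion with scalar part w
// rotates by 2 * acos(|w|).
public static float relativeAngle(Quaternion q1, Quaternion q2) {
    float w = Math.abs(dot(q1, q2));
    w = Math.min(1.0f, w);                 // clamp against rounding before acos
    return (float) (2.0 * Math.acos(w));
}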
Just a note: acos(dot) is not very stable numerically.
As was said previously, q = q1^-1 * q2, and then angle = 2*atan2(q.vec.length(), q.w).
Shouldn't it be 2 * acos(dot) to get the angle between the quaternions?
The "right way" to compute the angle between two quaternions
There is really no such thing as the angle between two quaternions, there is only the quaternion that takes one quaternion to another via multiplication. However, you can measure the total angle of rotation of that mapping transformation, by computing the difference between the two quaternions (e.g. qDiff = q1.mul(q2.inverse()), or your library might be able to compute this directly using a call like qDiff = q1.difference(q2)), and then measuring the angle about the axis of the quaternion (your quaternion library probably has a routine for this, e.g. ang = qDiff.angle()).
Note that you will probably need to fix the value, since measuring the angle about an axis doesn't necessarily give the rotation "the short way around", e.g.:
if (ang > Math.PI) {
ang -= 2.0 * Math.PI;
} else if (ang < -Math.PI) {
ang += 2.0 * Math.PI;
}
Measuring the similarity of two quaternions using the dot product
Update: See this answer instead.
I assume that in the original question, the intent of treating the quaternions as 4d vectors is to enable a simple method for measuring the similarity of two quaternions, while still keeping in mind that the quaternions represent rotations. (The actual rotation mapping from one quaternion to another is itself a quaternion, not a scalar.)
Several answers suggest using the acos of the dot product. (First thing to note: the quaternions must be unit quaternions for this to work.) However, the other answers don't take into account the "double cover issue": both q and -q represent the exact same rotation.
Both acos(q1 . q2) and acos(q1 . (-q2)) should return the same value, since q2 and -q2 represent the same rotation. However (with the exception of x == 0), acos(x) and acos(-x) do not return the same value. Therefore, on average (given random quaternions), acos(q1 . q2) will not give you what you expect half of the time, meaning that it will not give you a measure of the angle between q1 and q2, assuming that you care at all that q1 and q2 represent rotations. So even if you only plan to use the dot product or acos of the dot product as a similarity metric, to test how similar q1 and q2 are in terms of the effect they have as a rotation, the answer you get will be wrong half the time.
More specifically, if you are trying to simply treat quaternions as 4d vectors, and you compute ang = acos(q1 . q2), you will sometimes get the value of ang that you expect, and the rest of the time the value you actually wanted (taking into account the double cover issue) will be PI - acos(-q1 . q2). Which of these two values you get will fluctuate randomly depending on exactly how q1 and q2 were computed.
To solve this problem, you have to normalize the quaternions so that they are in the same "hemisphere" of the double cover space. There are several ways to do this, and to be honest I'm not even sure which of them is the "right" or optimal way. They all produce results that differ from the other methods in some cases. Any feedback on which of the three normalization forms in the code below is the correct or optimal one would be greatly appreciated.
import java.util.Random;
import org.joml.Quaterniond;
import org.joml.Vector3d;
public class TestQuatNorm {
private static Random random = new Random(1);
private static Quaterniond randomQuaternion() {
return new Quaterniond(
random.nextDouble() * 2 - 1, random.nextDouble() * 2 - 1,
random.nextDouble() * 2 - 1, random.nextDouble() * 2 - 1)
.normalize();
}
public static double normalizedDot0(Quaterniond q1, Quaterniond q2) {
return Math.abs(q1.dot(q2));
}
public static double normalizedDot1(Quaterniond q1, Quaterniond q2) {
return
(q1.w >= 0.0 ? q1 : new Quaterniond(-q1.x, -q1.y, -q1.z, -q1.w))
.dot(
q2.w >= 0.0 ? q2 : new Quaterniond(-q2.x, -q2.y, -q2.z, -q2.w));
}
public static double normalizedDot2(Quaterniond q1, Quaterniond q2) {
Vector3d v1 = new Vector3d(q1.x, q1.y, q1.z);
Vector3d v2 = new Vector3d(q2.x, q2.y, q2.z);
double dot = v1.dot(v2);
Quaterniond q2n = dot >= 0.0 ? q2
: new Quaterniond(-q2.x, -q2.y, -q2.z, -q2.w);
return q1.dot(q2n);
}
public static double acos(double val) {
return Math.toDegrees(Math.acos(Math.max(-1.0, Math.min(1.0, val))));
}
public static void main(String[] args) {
for (int i = 0; i < 1000; i++) {
var q1 = randomQuaternion();
var q2 = randomQuaternion();
double dot = q1.dot(q2);
double dot0 = normalizedDot0(q1, q2);
double dot1 = normalizedDot1(q1, q2);
double dot2 = normalizedDot2(q1, q2);
System.out.println(acos(dot) + "\t" + acos(dot0) + "\t" + acos(dot1)
+ "\t" + acos(dot2));
}
}
}
Also note that:
acos is known to not be very numerically accurate (given some worst-case inputs, up to half of the least significant digits can be wrong);
the implementation of acos is exceptionally slow in the JDK standard libraries;
acos returns NaN if its parameter is even slightly outside [-1, 1], which is a common occurrence for dot products of even unit quaternions due to rounding -- so you need to clamp the value of the dot product to that range before calling acos. See this line in the code above:
return Math.toDegrees(Math.acos(Math.max(-1.0, Math.min(1.0, val))));
According to this cheatsheet Eq. (42), there is a more robust and accurate way of computing the angle between two vectors that replaces acos with atan2 (although note that this does not solve the double cover problem either, so you will need to use one of the above normalization forms before applying the following):
ang(q1, q2) = 2 * atan2(|q1 - q2|, |q1 + q2|)
I admit though that I don't understand this formulation, since quaternion subtraction and addition have no geometrical meaning.
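For what it's worth, a small sketch of that formula using plain components (assuming the public x, y, z, w fields of Quaterniond, unit quaternions, and that you have already flipped one of them into the same hemisphere if their dot product is negative):

public static double angleBetween(Quaterniond q1, Quaterniond q2) {
    double dx = q1.x - q2.x, dy = q1.y - q2.y, dz = q1.z - q2.z, dw = q1.w - q2.w;
    double sx = q1.x + q2.x, sy = q1.y + q2.y, sz = q1.z + q2.z, sw = q1.w + q2.w;
    double normDiff = Math.sqrt(dx * dx + dy * dy + dz * dz + dw * dw);  // |q1 - q2|
    double normSum  = Math.sqrt(sx * sx + sy * sy + sz * sz + sw * sw);  // |q1 + q2|
    return 2.0 * Math.atan2(normDiff, normSum);                          // in radians
}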
Since the trigonometric functions in java.lang.Math are quite slow, is there a library that provides quick, good approximations? It seems possible to do the calculation several times faster without losing much precision. (On my machine a multiplication takes 1.5 ns, and java.lang.Math.sin takes 46 ns to 116 ns.) Unfortunately there is not yet a way to use the hardware functions.
UPDATE: The functions should be accurate enough, say, for GPS calculations. That means you would need at least 7 decimal digits of accuracy, which rules out simple lookup tables. And it should be much faster than java.lang.Math.sin on a basic x86 system; otherwise there would be no point in it.
For values over pi/4, Java does some expensive computations in addition to the hardware functions. It does so for a good reason, but sometimes you care more about speed than about last-bit accuracy.
Computer Approximations by Hart. Tabulates Chebyshev-economized approximate formulas for a bunch of functions at different precisions.
Edit: Getting my copy off the shelf, it turned out to be a different book that just sounds very similar. Here's a sin function using its tables. (Tested in C since that's handier for me.) I don't know if this will be faster than the Java built-in, but it's guaranteed to be less accurate, at least. :) You may need to range-reduce the argument first; see John Cook's suggestions. The book also has arcsin and arctan.
#include <math.h>
#include <stdio.h>
// Return an approx to sin(pi/2 * x) where -1 <= x <= 1.
// In that range it has a max absolute error of 5e-9
// according to Hastings, Approximations For Digital Computers.
static double xsin (double x) {
  double x2 = x * x;
  return ((((.00015148419 * x2
             - .00467376557) * x2
            + .07968967928) * x2
           - .64596371106) * x2
          + 1.57079631847) * x;
}

int main () {
  double pi = 4 * atan (1);
  printf ("%.10f\n", xsin (0.77));
  printf ("%.10f\n", sin (0.77 * (pi/2)));
  return 0;
}
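Since the question is about Java, here is a direct port of the polynomial above (a sketch; same coefficients, and the argument must already be range-reduced to [-1, 1], where it represents x in sin(pi/2 * x)):

static double xsin(double x) {
    double x2 = x * x;
    return ((((.00015148419 * x2
              - .00467376557) * x2
              + .07968967928) * x2
              - .64596371106) * x2
              + 1.57079631847) * x;
}

// e.g. compare xsin(0.77) with Math.sin(0.77 * Math.PI / 2)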
Here is a collection of low-level tricks for quickly approximating trig functions. There is example code in C which I find hard to follow, but the techniques are just as easily implemented in Java.
Here's my equivalent implementation of invsqrt and atan2 in Java.
I could have done something similar for the other trig functions, but I have not found it necessary as profiling showed that only sqrt and atan/atan2 were major bottlenecks.
public class FastTrig
{
/** Fast approximation of 1.0 / sqrt(x).
 * See http://www.beyond3d.com/content/articles/8/
 * @param x Positive value to estimate inverse of square root of
 * @return Approximately 1.0 / sqrt(x)
 **/
public static double
invSqrt(double x)
{
double xhalf = 0.5 * x;
long i = Double.doubleToRawLongBits(x);
i = 0x5FE6EB50C7B537AAL - (i>>1);
x = Double.longBitsToDouble(i);
x = x * (1.5 - xhalf*x*x);
return x;
}
/** Approximation of arctangent.
 * Slightly faster and substantially less accurate than
 * {@link Math#atan2(double, double)}.
 **/
public static double fast_atan2(double y, double x)
{
double d2 = x*x + y*y;
// Bail out if d2 is NaN, zero or subnormal
if (Double.isNaN(d2) ||
(Double.doubleToRawLongBits(d2) < 0x10000000000000L))
{
return Double.NaN;
}
// Normalise such that 0.0 <= y <= x
boolean negY = y < 0.0;
if (negY) {y = -y;}
boolean negX = x < 0.0;
if (negX) {x = -x;}
boolean steep = y > x;
if (steep)
{
double t = x;
x = y;
y = t;
}
// Scale to unit circle (0.0 <= y <= x <= 1.0)
double rinv = invSqrt(d2); // rinv ≅ 1.0 / hypot(x, y)
x *= rinv; // x ≅ cos θ
y *= rinv; // y ≅ sin θ, hence θ ≅ asin y
// Hack: we want: ind = floor(y * 256)
// We deliberately force truncation by adding floating-point numbers whose
// exponents differ greatly. The FPU will right-shift y to match exponents,
// dropping all but the first 9 significant bits, which become the 9 LSBs
// of the resulting mantissa.
// Inspired by a similar piece of C code at
// http://www.shellandslate.com/computermath101.html
double yp = FRAC_BIAS + y;
int ind = (int) Double.doubleToRawLongBits(yp);
// Find φ (a first approximation of θ) from the LUT
double φ = ASIN_TAB[ind];
double cφ = COS_TAB[ind]; // cos(φ)
// sin(φ) == ind / 256.0
// Note that sφ is truncated, hence not identical to y.
double sφ = yp - FRAC_BIAS;
double sd = y * cφ - x * sφ; // sin(θ-φ) ≡ sinθ cosφ - cosθ sinφ
// asin(sd) ≅ sd + ⅙sd³ (from first 2 terms of Maclaurin series)
double d = (6.0 + sd * sd) * sd * ONE_SIXTH;
double θ = φ + d;
// Translate back to correct octant
if (steep) { θ = Math.PI * 0.5 - θ; }
if (negX) { θ = Math.PI - θ; }
if (negY) { θ = -θ; }
return θ;
}
private static final double ONE_SIXTH = 1.0 / 6.0;
private static final int FRAC_EXP = 8; // LUT precision == 2 ** -8 == 1/256
private static final int LUT_SIZE = (1 << FRAC_EXP) + 1;
private static final double FRAC_BIAS =
Double.longBitsToDouble((0x433L - FRAC_EXP) << 52);
private static final double[] ASIN_TAB = new double[LUT_SIZE];
private static final double[] COS_TAB = new double[LUT_SIZE];
static
{
/* Populate trig tables */
for (int ind = 0; ind < LUT_SIZE; ++ ind)
{
double v = ind / (double) (1 << FRAC_EXP);
double asinv = Math.asin(v);
COS_TAB[ind] = Math.cos(asinv);
ASIN_TAB[ind] = asinv;
}
}
}
This might do it: http://sourceforge.net/projects/jafama/
I'm surprised that the built-in Java functions would be so slow. Surely the JVM is calling the native trig functions on your CPU, not implementing the algorithms in Java. Are you certain your bottleneck is calls to trig functions and not some surrounding code? Maybe some memory allocations?
Could you rewrite in C++ the part of your code that does the math? Just calling C++ code to compute trig functions probably wouldn't speed things up, but moving some context too, like an outer loop, to C++ might speed things up.
If you must roll your own trig functions, don't use Taylor series alone. The CORDIC algorithms are much faster unless your argument is very small. You could use CORDIC to get started, then polish the result with a short Taylor series. See this StackOverflow question on how to implement trig functions.
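To illustrate the technique, here is a didactic CORDIC sketch in Java for |theta| <= pi/2 (not tuned for speed: a real implementation would use fixed-point integers and bit shifts instead of doubles, but the structure is the same; 24 iterations gives roughly 7 decimal digits of angular resolution):

final class Cordic {
    private static final int ITERATIONS = 24;           // ~2^-24 angular resolution
    private static final double[] ATAN = new double[ITERATIONS];
    private static final double K;                      // 1 / CORDIC gain

    static {
        double k = 1.0, p = 1.0;                        // p = 2^-i
        for (int i = 0; i < ITERATIONS; i++) {
            ATAN[i] = Math.atan(p);
            k /= Math.sqrt(1.0 + p * p);
            p *= 0.5;
        }
        K = k;
    }

    /** Returns {cos(theta), sin(theta)} for theta in [-pi/2, pi/2]. */
    static double[] sinCos(double theta) {
        double x = K, y = 0.0, z = theta, p = 1.0;      // p = 2^-i
        for (int i = 0; i < ITERATIONS; i++) {
            double d = (z >= 0.0) ? 1.0 : -1.0;         // rotation direction
            double nx = x - d * y * p;
            double ny = y + d * x * p;
            x = nx;
            y = ny;
            z -= d * ATAN[i];
            p *= 0.5;
        }
        return new double[] { x, y };                   // {cos, sin}
    }
}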
On x86, the java.lang.Math sin and cos functions do not directly call the hardware functions, because Intel didn't always do such a good job implementing them. There is a nice explanation in bug #4857011.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4857011
You might want to think hard about accepting an inexact result. It's amusing how often I spend time finding this in other people's code.
"But the comment says Sin..."
You could pre-store your sin and cos in an array if you only need some approximate values.
For example, if you want to store the values from 0° to 360°:
double sin[] = new double[360];
for (int i = 0; i < sin.length; ++i) sin[i] = Math.sin(i / 180.0 * Math.PI);
You then index this array using degrees (integers) instead of radians (doubles).
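A small usage sketch (my own helper, assuming the table above); for angles between whole degrees you can interpolate linearly, with deg assumed to be in [0, 360):

double sinDeg(double[] sin, double deg) {
    int i = (int) deg;                       // lower whole degree
    double frac = deg - i;                   // fractional part
    return sin[i] * (1 - frac) + sin[(i + 1) % 360] * frac;
}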
I haven't heard of any libs, probably because it's rare enough to see trig heavy Java apps. It's also easy enough to roll your own with JNI (same precision, better performance), numerical methods (variable precision / performance ) or a simple approximation table.
As with any optimization, best to test that these functions are actually a bottleneck before bothering to reinvent the wheel.
Trigonometric functions are the classic example for a lookup table. See the excellent Lookup table article on Wikipedia.
If you're searching a library for J2ME you can try:
the Fixed Point Integer Math Library MathFP
The java.lang.Math functions call the hardware functions. There should be simple approximations you can make, but they won't be as accurate.
On my laptop, sin and cos take about 144 ns.
In my sin/cos test I was computing them for the integers from zero to one million. I assume that 144 ns is not fast enough for you.
Do you have a specific requirement for the speed you need?
Can you quantify your requirement in terms of an acceptable time per operation?
Check out Apache Commons Math package if you want to use existing stuff.
If performance is really of the essence, then you can implement these functions yourself using standard math methods, specifically Taylor/Maclaurin series.
For example, here are a few Maclaurin series expansions that might be useful (taken from Wikipedia):
sin x = x - x^3/3! + x^5/5! - x^7/7! + ...
cos x = 1 - x^2/2! + x^4/4! - x^6/6! + ...
arctan x = x - x^3/3 + x^5/5 - x^7/7 + ... (for |x| <= 1)
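For instance, a sketch of the sine series truncated after five terms (my own helper; reasonably accurate on [-pi/2, pi/2] once the argument has been range-reduced, poor beyond that):

static double sinTaylor(double x) {
    double x2 = x * x;
    // x - x^3/3! + x^5/5! - x^7/7! + x^9/9!, evaluated in Horner form
    return x * (1 + x2 * (-1.0 / 6 + x2 * (1.0 / 120 + x2 * (-1.0 / 5040 + x2 / 362880))));
}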
Could you elaborate on what you need to do if these routines are too slow? You might be able to do some coordinate transformations ahead of time, one way or another.