A quaternion can describe not only a rotation but also an orientation, i.e. a rotation from an initial (zero) position.
I wanted to model a smooth rotation from one orientation to another. I calculated the start orientation startOrientation and the end orientation endOrientation, and wanted to describe the intermediate orientations as startOrientation*(1-argument) + endOrientation*argument while argument changes from 0 to 1.
The code for the jMonkeyEngine update function follows:
@Override
public void simpleUpdate(float tpf) {
    if( endOrientation != null ) {
        if( !started ) {
            started = true;
        }
        else {
            fraction += tpf * speed;
            argument = (float) ((1 - Math.cos(fraction * Math.PI)) / 2);
            orientation = startOrientation.mult(1-argument).add(endOrientation.mult(argument));
            //orientation = startOrientation.mult(1-fraction).add(endOrientation.mult(fraction));
            log.debug("tpf = {}, fraction = {}, argument = {}", tpf, fraction, argument);
            //log.debug("orientation = {}", orientation);
            rootNode.setLocalRotation(orientation);
            if( fraction >= 1 ) {
                rootNode.setLocalRotation(endOrientation);
                log.debug("Stopped rotating");
                startOrientation = endOrientation = null;
                fraction = 0;
                started = false;
            }
        }
    }
}
The cosine formula was expected to model smooth acceleration at the beginning and deceleration at the end.
The code works, but not as expected: the smooth rotation starts and finishes long before the fraction and argument values reach 1, and I don't understand why.
Why does the orientation value reach endOrientation so fast?
You have stated that in your case startOrientation was being modified. However, the following remains true:
Interpolating between quaternions
The slerp method is included in the Quaternion class for exactly this purpose: interpolating between two rotations.
Assuming we have two quaternions startOrientation and endOrientation, and we want the rotation that lies a fraction interpolation of the way between them, we interpolate using the following code:
float interpolation=0.2f;
Quaternion result=new Quaternion();
result.slerp(startOrientation, endOrientation, interpolation);
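Applied to the question's update loop, a minimal sketch might look like this (same field names as the question, keeping the cosine easing; the started bookkeeping is omitted for brevity):
@Override
public void simpleUpdate(float tpf) {
    if (endOrientation == null) {
        return;
    }
    fraction = Math.min(fraction + tpf * speed, 1f);
    // same cosine easing as in the question
    float argument = (float) ((1 - Math.cos(fraction * Math.PI)) / 2);
    Quaternion orientation = new Quaternion();
    orientation.slerp(startOrientation, endOrientation, argument);
    rootNode.setLocalRotation(orientation);
    if (fraction >= 1f) {
        startOrientation = endOrientation = null;
        fraction = 0;
    }
}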
Why your approach may be dangerous
Quaternions are somewhat complex internally and follow somewhat different mathematical rules from, say, vectors. You have called the mult(float scalar) method on the quaternion. Internally this looks like this:
public QuaternionD mult(float scalar) {
    return new QuaternionD(scalar * x, scalar * y, scalar * z, scalar * w);
}
So it just does a simple multiplication of all the elements. This explicitly does not return a rotation that is scalar times the size. In fact, such a quaternion no longer represents a valid rotation at all, since it is no longer a unit quaternion. If you called normalise on this quaternion it would immediately undo the scaling. I'm sure Quaternion#mult(float scalar) has its uses, but I have yet to find them.
It is also the case that "adding" quaternions does not combine them. In fact you multiply them. So combining q1 then q2 then q3 would be achieved as follows:
Quaternion q0 = q1.mult(q2).mult(q3);
The cheat sheet is incredibly useful for this
Formula vs slerp comparison
In your case your formula for interpolation is nearly but not quite correct. This shows a graph of yaw for interpolation between 2 quaternions using both methods
Related
I'm currently working on a terrain engine and I'm experimenting a little bit with noise. It's fascinating to see what different structures, functions and pure imagination can create with just a few lines of code. Recently I saw this post: http://squall-digital.com/ProceduralGeneration.html. I was definitely intrigued by all of these techniques, but the first one especially caught my attention. The programmer made the gain (or persistence) of the noise proportional to the slope of the noise at that point. I'm currently trying to achieve this, but I don't think I'm on the right track.
I'm currently using simplex noise. I know the author of the article uses Perlin noise, and yes, I have seen how to calculate the derivative of Perlin noise, but that implementation wouldn't carry over directly because of the fundamental differences in how Perlin and simplex noise are generated. I thus set out to approximate the slope of the noise at a given position on my own.
I came up with the following "algorithm":
1. Calculate the neighbouring points of the noise [(x + 1, z), (x - 1, z), (x, z + 1), (x, z - 1)].
2. Calculate their respective noise values.
3. Calculate differenceX and differenceZ in the noise values along the x-axis and the z-axis respectively.
4. Create vectors from the origin: (2, differenceX, 0) and (0, differenceZ, 2).
5. Scale them to vectors of length 1.
6. Add the y-components of the resulting unit vectors.
7. Use this summed y-component as the "slope" approximated at the given point.
Now I have implemented this in code (I added "3D" vectors for ease of understanding):
private static float slope(OpenSimplex2F simplex, float x, float z, float noise) {
    float[] neighbours = getStraightNeighbours(simplex, x, z);
    float xSlope = (neighbours[1] - neighbours[0]) / (2.0f * x);
    float zSlope = (neighbours[3] - neighbours[2]) / (2.0f * z);
    float[] vecX = new float[] { 1, xSlope, 0 };
    float[] vecZ = new float[] { 0, zSlope, 1 };
    float scaleX = Maths.sqrt(1.0f + xSlope * xSlope);
    float scaleZ = Maths.sqrt(1.0f + zSlope * zSlope);
    for (int i = 0; i < 3; i++) {
        vecX[i] /= scaleX;
        vecZ[i] /= scaleZ;
    }
    float[] grad = new float[] {
        vecX[0] + vecZ[0],
        vecX[1] + vecZ[1],
        vecX[2] + vecZ[2]
    };
    return grad[1];
}
Now this gives me extremely underwhelming and, rest assured, wrong results.
Can anyone explain whether this is a sound technique for approximating the slope, or whether it is completely wrong? I'm not the biggest math genius, so I was already happy I could figure this out and that it produced a result in the first place. If anyone has a resource on the derivative of simplex noise (which would be a life saver, obviously), it'd be really appreciated!
I've been trying to improve the behavior of one of the bosses in a top-down perspective shooter game that I'm working on, and one thing I haven't been able to quite implement correctly is plotting an intercept trajectory between the boss' "hook" projectile and the player according to the player's movement.
I've tried implementing it using the quadratic equation described here: https://stackoverflow.com/a/2249237/1205340
But I had pretty much the same results as this algorithm I came up with, which often will aim close to the player's expected position, but almost always misses unless the player is backpedaling away from the boss.
private float findPlayerIntercept(Pair<Float> playerPos, Pair<Float> playerVel, int delta) {
    float hookSpeed = HOOK_THROW_SPEED * delta;
    Pair<Float> hPos = new Pair<Float>(position);
    Pair<Float> pPos = new Pair<Float>(playerPos);
    // While the hook hasn't intercepted the player yet.
    while(Calculate.Distance(position, hPos) < Calculate.Distance(position, pPos)) {
        float toPlayer = Calculate.Hypotenuse(position, pPos);
        // Move the player according to player velocity.
        pPos.x += playerVel.x;
        pPos.y += playerVel.y;
        // Aim the hook at the new player position and move it in that direction.
        hPos.x += ((float)Math.cos(toPlayer) * hookSpeed);
        hPos.y += ((float)Math.sin(toPlayer) * hookSpeed);
    }
    // Calculate the theta value between Stitches and the hook's calculated intercept point.
    return Calculate.Hypotenuse(position, hPos);
}
This method is supposed to return the theta (angle) for the boss to throw his hook in order to intercept the player according to the player's movement vector at the time the hook is thrown.
For reference, the Calculate.Hypotenuse method just uses atan2 to calculate the angle between two points. Calculate.Distance gets the distance in pixels between two positions.
Does anyone have any suggestions on how to improve this algorithm? Or a better way to approach it?
Your question is confusing (as you also talk about a quadratic equation). If your game is a 2D platform game in which the boss throws a hook with a given velocity at a certain angle to the floor, then I found your solution:
By playing with the kinematic equations, you find that
θ = arcsin(d * g / v²) / 2
With d being the distance between the player and the boss, g being the gravitational constant and v being the initial velocity of the hook.
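If it helps, a small sketch of that formula in Java (my own helper, not code from the question; it assumes a flat floor, no air resistance, and returns NaN when the target is out of range):
// Hedged sketch: low-arc launch angle (radians) for a ballistic hook.
// d, g and v are assumed to be in consistent units (e.g. pixels and pixels/s).
static float throwAngle(float d, float g, float v) {
    double s = (d * g) / (double) (v * v);
    if (s > 1.0) {
        // target is out of range for this launch speed
        return Float.NaN;
    }
    return (float) (0.5 * Math.asin(s));
}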
The reason that the hook keeps missing is that you always use a fixed timestep of 1 unit when integrating the player's and hook's motion. This means that both objects' trajectories are simulated as a series of straight-line "jumps". 1 unit is far too large a timestep for accurate results; if the speeds are high enough, there is no guarantee that the loop condition while(Calculate.Distance(position, hPos) < Calculate.Distance(position, pPos)) will flip anywhere near the true intercept point.
The quadratic equation approach you mentioned was along the correct lines, but since you haven't understood the link, I will try to derive a similar method here.
Let's say the player's and hook's initial positions and velocities are p0, u and q0, v respectively (2D vectors). v's direction is the unknown desired quantity. Below is a diagram of the setup:
Applying the cosine rule (equivalently, requiring |p0 + u t - q0| = v t, where v and u here denote the hook's and player's speeds) gives a quadratic in the intercept time t: (v² - u²) t² + 2 [u · (q0 - p0)] t - |q0 - p0|² = 0, i.e. a t² + 2 b t - c = 0 with a, b and c as computed in the code below.
Which root should be used, and does it always exist?
If the term inside the square root is negative, there is no real root for t - no solutions (the hook will never reach the player).
If both roots (or the single root) are negative, there is also no valid solution - the hook needs to be fired "backwards in time".
If only one root is positive, use it.
If both roots are positive, use the smaller one.
If the speeds are equal, i.e. v = u, the quadratic degenerates to a linear equation and the solution is simply t = |q0 - p0|² / (2 u · (q0 - p0)).
Again, reject it if it is negative (or if the denominator is zero).
Once a value for t is known, the collision point and thus the velocity direction can be calculated: the hook's direction is (p0 + u t - q0) / t = u + (p0 - q0) / t, and its angle follows from atan2.
Update: sample Java code:
private float findPlayerIntercept(Pair<Float> playerPos, Pair<Float> playerVel, int delta)
{
    // calculate the speeds
    float v = HOOK_THROW_SPEED * delta;
    float u = (float) Math.sqrt(playerVel.x * playerVel.x +
                                playerVel.y * playerVel.y);
    // calculate square distance
    float c = (position.x - playerPos.x) * (position.x - playerPos.x) +
              (position.y - playerPos.y) * (position.y - playerPos.y);
    // calculate first two quadratic coefficients
    float a = v * v - u * u;
    float b = playerVel.x * (position.x - playerPos.x) +
              playerVel.y * (position.y - playerPos.y);
    // collision time
    float t = -1.0f; // invalid value
    // if speeds are equal
    if (Math.abs(a) < EPSILON) // some small number, e.g. 1e-5f
        t = c / (2.0f * b);
    else {
        // reduce to t^2 + 2*(b/a)*t - c/a = 0
        b /= a;
        // quarter discriminant
        float d = b * b + c / a;
        // real roots exist
        if (d > 0.0f) {
            // if single root
            if (Math.abs(d) < EPSILON)
                t = -b;
            else {
                float e = (float) Math.sqrt(d);
                // how many positive roots?
                if (Math.abs(b) < e)
                    t = e - b;   // roots have opposite signs: take the positive one
                else if (b < 0.0f)
                    t = -b - e;  // both roots positive: take the smaller one
            }
        }
    }
    // check if a valid root has been found
    if (t < 0.0f) {
        // nope.
        // throw an exception here?
        // or otherwise change return value format
    }
    // compute components and return direction angle
    float x = playerVel.x + (playerPos.x - position.x) / t;
    float y = playerVel.y + (playerPos.y - position.y) / t;
    return (float) Math.atan2(y, x);
}
I have a problem finding a method to compare two trajectories (curves).
The first one (the original) contains points (x, y).
The second one can be offset, at a smaller or larger scale, and rotated - also an array of points (x, y).
My first method was to find the smallest distance between pairs of points, repeat this in every iteration, sum the distances and divide by the number of points - the result then tells me the average error per point:
http://www.mathopenref.com/coorddist.html
I also found this method:
https://help.scilab.org/docs/6.0.0/en_US/fminsearch.html
But I can't figure out how to use it.
I would like to compare both trajectories, but my results have to account for rotation, or at least for the offset of the beginning.
My current result is calculated as an error (distance) per point:
1. Get a coordinate (x, y) from the second trajectory.
2. In a loop, find the minimum distance between that (x, y) and the points of the original trajectory.
3. Add the smallest distance found in step 2 to a running sum.
4. Divide the sum of smallest distances by the number of points in the second trajectory.
The result describes the average error (distance) per point compared with the original trajectory (a sketch of this metric is shown below).
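For reference, a minimal sketch of that metric (Point here is a hypothetical class with public double x, y fields):
// Hedged sketch of the averaged nearest-point error described above.
static double averageError(Point[] original, Point[] candidate) {
    double sum = 0;
    for (Point c : candidate) {
        double min = Double.MAX_VALUE;
        for (Point o : original) {
            double d = Math.hypot(c.x - o.x, c.y - o.y);
            if (d < min) min = d;
        }
        sum += min;  // smallest distance from this candidate point to the original curve
    }
    return sum / candidate.length;  // average distance per point
}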
But I cannot figure out how to handle the case where the trajectory is rotated, scaled or shifted.
Please look at my example trajectories:
http://pokazywarka.pl/trajectory/
http://pokazywarka.pl/trajectory2/
So you need to compare the shape of 2 curves invariant to rotation, translation and scale.
Solution
Let's assume 2 sine waves for testing, both rotated and scaled but with the same aspect ratio, and one with added noise. I generated them in C++ like this:
struct _pnt2D
{
    double x,y;
    // inline
    _pnt2D() {}
    _pnt2D(_pnt2D& a) { *this=a; }
    ~_pnt2D() {}
    _pnt2D* operator = (const _pnt2D *a) { *this=*a; return this; }
    //_pnt2D* operator = (const _pnt2D &a) { ...copy... return this; }
};
List<_pnt2D> curve0,curve1; // curves points
_pnt2D p0,u0,v0,p1,u1,v1; // curves OBBs
const double deg=M_PI/180.0;
const double rad=180.0/M_PI;
void rotate2D(double alfa,double x0,double y0,double &x,double &y)
{
    double a=x-x0,b=y-y0,c,s;
    c=cos(alfa);
    s=sin(alfa);
    x=x0+a*c-b*s;
    y=y0+a*s+b*c;
}
// this code is the init stuff:
int i;
double x,y,a;
_pnt2D p,*pp;
Randomize();
for (x=0;x<2.0*M_PI;x+=0.01)
{
    y=sin(x);
    p.x= 50.0+(100.0*x);
    p.y=180.0-( 50.0*y);
    rotate2D(+15.0*deg,200,180,p.x,p.y);
    curve0.add(p);
    p.x=150.0+( 50.0*x);
    p.y=200.0-( 25.0*y)+5.0*Random();
    rotate2D(-25.0*deg,250,100,p.x,p.y);
    curve1.add(p);
}
OBB oriented bounding box
Compute the OBB, which gives the rotation angle and position of each curve, and then rotate one of them so that they start at the same position and have the same orientation.
If the OBB sizes are too different then the curves are different.
For the above example it yields this result:
Each OBB is defined by start point P and basis vectors U,V where |U|>=|V| and z coordinate of U x V is positive. That will ensure the same winding for all OBBs. It can be done in OBBox_compute by adding this to the end:
// |U|>=|V|
if ((u.x*u.x)+(u.y*u.y)<(v.x*v.x)+(v.y*v.y)) { _pnt2D p; p=u; u=v; v=p; }
// (U x V).z > 0
if ((u.x*v.y)-(u.y*v.x)<0.0)
{
    p0.x+=v.x;
    p0.y+=v.y;
    v.x=-v.x;
    v.y=-v.y;
}
So curve0 has p0,u0,v0 and curve1 has p1,u1,v1.
Now we want to rescale, translate and rotate curve1 to match curve0. It can be done like this:
// compute OBB
OBBox_compute(p0,u0,v0,curve0.dat,curve0.num);
OBBox_compute(p1,u1,v1,curve1.dat,curve1.num);
// difference angle = - acos((U0.U1)/(|U0|.|U1|))
a=-acos(((u0.x*u1.x)+(u0.y*u1.y))/(sqrt((u0.x*u0.x)+(u0.y*u0.y))*sqrt((u1.x*u1.x)+(u1.y*u1.y))));
// rotate curve1
for (pp=curve1.dat,i=0;i<curve1.num;i++,pp++)
    rotate2D(a,p1.x,p1.y,pp->x,pp->y);
// rotate OBB1
rotate2D(a,0.0,0.0,u1.x,u1.y);
rotate2D(a,0.0,0.0,v1.x,v1.y);
// translation difference = P0-P1
x=p0.x-p1.x;
y=p0.y-p1.y;
// translate curve1
for (pp=curve1.dat,i=0;i<curve1.num;i++,pp++)
{
    pp->x+=x;
    pp->y+=y;
}
// translate OBB1
p1.x+=x;
p1.y+=y;
// scale difference = |U0|/|U1|
x=sqrt((u0.x*u0.x)+(u0.y*u0.y))/sqrt((u1.x*u1.x)+(u1.y*u1.y));
// scale curve1
for (pp=curve1.dat,i=0;i<curve1.num;i++,pp++)
{
    pp->x=((pp->x-p0.x)*x)+p0.x;
    pp->y=((pp->y-p0.y)*x)+p0.y;
}
// scale OBB1
u1.x*=x;
u1.y*=x;
v1.x*=x;
v1.y*=x;
You can use Understanding 4x4 homogenous transform matrices to do all this in one step. Here is the result:
sampling
In case of non-uniform or very different point density between the curves, or between any parts of them, you should re-sample your curves to a common point density. You can use linear or polynomial interpolation for this. You also do not need to store the new sampling in memory; instead you can build a function that returns the point of each curve parametrized by arc length from its start.
point curve0(double distance);
point curve1(double distance);
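A rough sketch of such an arc-length lookup for an ordered polyline (shown in Java for consistency with the rest of this page; Point is again a hypothetical class with public double x, y fields and a matching constructor):
// Returns the point at the given arc length along an ordered polyline,
// using linear interpolation between the two surrounding samples.
static Point pointAtArcLength(Point[] curve, double distance) {
    double walked = 0;
    for (int i = 1; i < curve.length; i++) {
        double seg = Math.hypot(curve[i].x - curve[i - 1].x,
                                curve[i].y - curve[i - 1].y);
        if (seg > 0 && walked + seg >= distance) {
            double t = (distance - walked) / seg;  // position within this segment
            return new Point(curve[i - 1].x + t * (curve[i].x - curve[i - 1].x),
                             curve[i - 1].y + t * (curve[i].y - curve[i - 1].y));
        }
        walked += seg;
    }
    return curve[curve.length - 1];  // past the end: clamp to the last point
}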
comparison
Now you can subtract the 2 curves and sum up the distances between corresponding points. Then divide the sum by the curve length and threshold the result.
for (double sum=0.0,l=0.0;l<=bigger_curve_length;l+=step)
    sum+=distance(curve0(l),curve1(l)); // Euclidean distance between corresponding points
sum/=bigger_curve_length;
if (sum>threshold) curves are different
else curves match
You should try this even with +180deg rotation as the orientation difference from OBB has only half of the true range.
Here are a few related QAs:
compare shapes
How can i produce multi point linear interpolation?
I am just messing around a bit in Processing, since I know it better than any other language, and stumbled upon this website: Custom 2d physics engine. So far so good. I am at the point where I have 2 rectangles colliding and I need to resolve the collision. According to the paper I should use this code:
void ResolveCollision( Object A, Object B )
{
    // Calculate relative velocity
    Vec2 rv = B.velocity - A.velocity
    // Calculate relative velocity in terms of the normal direction
    float velAlongNormal = DotProduct( rv, normal )
    // Do not resolve if velocities are separating
    if(velAlongNormal > 0)
        return;
    // Calculate restitution
    float e = min( A.restitution, B.restitution)
    // Calculate impulse scalar
    float j = -(1 + e) * velAlongNormal
    j /= 1 / A.mass + 1 / B.mass
    // Apply impulse
    Vec2 impulse = j * normal
    A.velocity -= 1 / A.mass * impulse
    B.velocity += 1 / B.mass * impulse
}
This is written in C++, so I would need to port it to Java. And here I get stuck on two things. First: what does the author mean by "normal"? How do I get the "normal"? The second thing is these 3 lines of code:
Vec2 impulse = j * normal
A.velocity -= 1 / A.mass * impulse
B.velocity += 1 / B.mass * impulse
He creates a vector which has only 1 number? j * normal?
I don't really have a clear picture of what exactly happens here, which does not really benefit me.
He is probably referring to this as "normal". So normal is a vector with 2 elements since you are referring to a tutorial for 2D physics. And j*normal will multiply each element of normal with the scalar j.
normal, velocity and impulse are vectors with 2 elements for coordinates x, y. From the series of tutorials you are referring to, you can see normal defined here towards the end.
The "normal" vector at a point on the boundary of a 2D or 3D shape is the vector that is:
perpendicular to the boundary at that point;
has length 1; and
points outward instead of inside the shape
The normal vector is the same all along a straight line (2d) or flat surface (3d), so you will also hear people talk about the "normal" of the line or surface in these cases.
The normal vector is used for all kinds of important calculations in graphics and physics code.
How exactly to calculate the normal vector for a point, line, or surface depends on what data structures you have representing the geometry of your objects.
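As an illustration, a minimal sketch of how such a normal could be computed for a straight 2D edge (Vec2 here is a hypothetical class with public float x, y fields; for two axis-aligned rectangles, the collision normal is usually just the x- or y-axis of least overlap):
// Hedged sketch: outward unit normal of the 2D edge from a to b,
// assuming the shape's interior lies to the left of the a->b direction.
static Vec2 edgeNormal(Vec2 a, Vec2 b) {
    float dx = b.x - a.x;
    float dy = b.y - a.y;
    float len = (float) Math.sqrt(dx * dx + dy * dy);
    // rotate the edge direction by -90 degrees and normalize
    return new Vec2(dy / len, -dx / len);
}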
I understand that the dot (or inner) product of two quaternions is the angle between the rotations (including the axis-rotation). This makes the dot product equal to the angle between two points on the quaternion hypersphere.
I can not, however, find how to actually compute the dot product.
Any help would be appreciated!
current code:
public static float dot(Quaternion left, Quaternion right){
float angle;
//compute
return angle;
}
Defined are Quaternion.w, Quaternion.x, Quaternion.y, and Quaternion.z.
Note: It can be assumed that the quaternions are normalised.
The dot product for quaternions is simply the standard Euclidean dot product in 4D:
dot = left.x * right.x + left.y * right.y + left.z * right.z + left.w * right.w
Then the angle you are looking for is the arccos of the dot product (note that the dot product itself is not the angle): acos(dot).
However, if you are looking for the relative rotation between two quaternions, say from q1 to q2, you should compute the relative quaternion q = q1^-1 * q2 and then find the rotation associated with q.
Just note: acos(dot) is not very stable from a numerical point of view.
As was said previously, q = q1^-1 * q2, and then angle = 2*atan2(q.vec.length(), q.w).
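A minimal sketch of that suggestion using only the w/x/y/z fields from the question (it assumes both inputs are already normalised, so the inverse of q1 is just its conjugate):
// Hedged sketch: rotation angle (radians) of the quaternion taking q1 to q2.
public static float angleBetween(Quaternion q1, Quaternion q2) {
    // q = conjugate(q1) * q2  (Hamilton product)
    float w = q1.w * q2.w + q1.x * q2.x + q1.y * q2.y + q1.z * q2.z;
    float x = q1.w * q2.x - q1.x * q2.w - q1.y * q2.z + q1.z * q2.y;
    float y = q1.w * q2.y + q1.x * q2.z - q1.y * q2.w - q1.z * q2.x;
    float z = q1.w * q2.z - q1.x * q2.y + q1.y * q2.x - q1.z * q2.w;
    float vecLen = (float) Math.sqrt(x * x + y * y + z * z);
    // use Math.abs(w) instead of w if you always want the angle the short way round (<= PI)
    return (float) (2.0 * Math.atan2(vecLen, w));
}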
Shouldn't it be 2 * acos(dot) to get the angle between the quaternions?
The "right way" to compute the angle between two quaternions
There is really no such thing as the angle between two quaternions, there is only the quaternion that takes one quaternion to another via multiplication. However, you can measure the total angle of rotation of that mapping transformation, by computing the difference between the two quaternions (e.g. qDiff = q1.mul(q2.inverse()), or your library might be able to compute this directly using a call like qDiff = q1.difference(q2)), and then measuring the angle about the axis of the quaternion (your quaternion library probably has a routine for this, e.g. ang = qDiff.angle()).
Note that you will probably need to fix the value, since measuring the angle about an axis doesn't necessarily give the rotation "the short way around", e.g.:
if (ang > Math.PI) {
ang -= 2.0 * Math.PI;
} else if (ang < -Math.PI) {
ang += 2.0 * Math.PI;
}
Measuring the similarity of two quaternions using the dot product
Update: See this answer instead.
I assume that in the original question, the intent of treating the quaternions as 4d vectors is to enable a simple method for measuring the similarity of two quaternions, while still keeping in mind that the quaternions represent rotations. (The actual rotation mapping from one quaternion to another is itself a quaternion, not a scalar.)
Several answers suggest using the acos of the dot product. (First thing to note: the quaternions must be unit quaternions for this to work.) However, the other answers don't take into account the "double cover issue": both q and -q represent the exact same rotation.
Both acos(q1 . q2) and acos(q1 . (-q2)) should return the same value, since q2 and -q2 represent the same rotation. However (with the exception of x == 0), acos(x) and acos(-x) do not return the same value. Therefore, on average (given random quaternions), acos(q1 . q2) will not give you what you expect half of the time, meaning that it will not give you a measure of the angle between q1 and q2, assuming that you care at all that q1 and q2 represent rotations. So even if you only plan to use the dot product or acos of the dot product as a similarity metric, to test how similar q1 and q2 are in terms of the effect they have as a rotation, the answer you get will be wrong half the time.
More specifically, if you are trying to simply treat quaternions as 4d vectors, and you compute ang = acos(q1 . q2), you will sometimes get the value of ang that you expect, and the rest of the time the value you actually wanted (taking into account the double cover issue) will be PI - acos(-q1 . q2). Which of these two values you get fluctuates randomly, depending on exactly how q1 and q2 were computed!
To solve this problem, you have to normalize the quaternions so that they are in the same "hemisphere" of the double cover space. There are several ways to do this (three are shown in the code below), and to be honest I'm not even sure which of them is the "right" or optimal way; in some cases each produces results that differ from the other methods. Any feedback on which of the three normalization forms below is the correct or optimal one would be greatly appreciated.
import java.util.Random;
import org.joml.Quaterniond;
import org.joml.Vector3d;

public class TestQuatNorm {
    private static Random random = new Random(1);

    private static Quaterniond randomQuaternion() {
        return new Quaterniond(
                random.nextDouble() * 2 - 1, random.nextDouble() * 2 - 1,
                random.nextDouble() * 2 - 1, random.nextDouble() * 2 - 1)
            .normalize();
    }

    public static double normalizedDot0(Quaterniond q1, Quaterniond q2) {
        return Math.abs(q1.dot(q2));
    }

    public static double normalizedDot1(Quaterniond q1, Quaterniond q2) {
        return
            (q1.w >= 0.0 ? q1 : new Quaterniond(-q1.x, -q1.y, -q1.z, -q1.w))
            .dot(
             q2.w >= 0.0 ? q2 : new Quaterniond(-q2.x, -q2.y, -q2.z, -q2.w));
    }

    public static double normalizedDot2(Quaterniond q1, Quaterniond q2) {
        Vector3d v1 = new Vector3d(q1.x, q1.y, q1.z);
        Vector3d v2 = new Vector3d(q2.x, q2.y, q2.z);
        double dot = v1.dot(v2);
        Quaterniond q2n = dot >= 0.0 ? q2
                : new Quaterniond(-q2.x, -q2.y, -q2.z, -q2.w);
        return q1.dot(q2n);
    }

    public static double acos(double val) {
        return Math.toDegrees(Math.acos(Math.max(-1.0, Math.min(1.0, val))));
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            var q1 = randomQuaternion();
            var q2 = randomQuaternion();
            double dot = q1.dot(q2);
            double dot0 = normalizedDot0(q1, q2);
            double dot1 = normalizedDot1(q1, q2);
            double dot2 = normalizedDot2(q1, q2);
            System.out.println(acos(dot) + "\t" + acos(dot0) + "\t" + acos(dot1)
                    + "\t" + acos(dot2));
        }
    }
}
Also note that:
acos is known to not be very numerically accurate (given some worst-case inputs, up to half of the least significant digits can be wrong);
the implementation of acos is exceptionally slow in the JDK standard libraries;
acos returns NaN if its parameter is even slightly outside [-1,1], which is a common occurrence even for dot products of unit quaternions -- so you need to clamp the value of the dot product to that range before calling acos. See this line in the code above:
return Math.toDegrees(Math.acos(Math.max(-1.0, Math.min(1.0, val))));
According to this cheatsheet Eq. (42), there is a more robust and accurate way of computing the angle between two vectors that replaces acos with atan2 (although note that this does not solve the double cover problem either, so you will need to use one of the above normalization forms before applying the following):
ang(q1, q2) = 2 * atan2(|q1 - q2|, |q1 + q2|)
I admit though that I don't understand this formulation, since quaternion subtraction and addition have no geometrical meaning.
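For reference, a direct sketch of Eq. (42) on plain component values; it computes the same quantity as acos(dot), just more robustly, and still needs one of the sign normalizations above applied first:
// Hedged sketch of the atan2 formulation: ang = 2 * atan2(|q1 - q2|, |q1 + q2|),
// treating the unit quaternions purely as 4-vectors.
public static double angleAtan2(Quaterniond q1, Quaterniond q2) {
    double dMinus = Math.sqrt(
            (q1.x - q2.x) * (q1.x - q2.x) + (q1.y - q2.y) * (q1.y - q2.y)
          + (q1.z - q2.z) * (q1.z - q2.z) + (q1.w - q2.w) * (q1.w - q2.w));
    double dPlus = Math.sqrt(
            (q1.x + q2.x) * (q1.x + q2.x) + (q1.y + q2.y) * (q1.y + q2.y)
          + (q1.z + q2.z) * (q1.z + q2.z) + (q1.w + q2.w) * (q1.w + q2.w));
    return 2.0 * Math.atan2(dMinus, dPlus);  // radians; convert with Math.toDegrees if needed
}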