What is the difference between rotating a geometry in JMonkeyEngine using the rotate method:
float r = FastMath.DEG_TO_RAD * 45f; // convert degrees to radians
geom.rotate(r, 0.0f, 0.0f); // rotate the geometry around the x-axis by 45 degrees
and rotating a geometry using a quaternion:
Quaternion roll045 = new Quaternion(); // create the quaternion
roll045.fromAngleAxis(45*FastMath.DEG_TO_RAD, Vector3f.UNIT_X); // supply angle and axis as arguments
geom.setLocalRotation(roll045); // rotate the geometry around the x-axis by 45 degrees
This is confusing for me because the result is the same for both. So I'd like to find out the difference and when to use one over the other.
The book that I'm reading says that the first method is relative and the second (using a quaternion) is absolute, but I'm still fuzzy on what that means.
Difference between using a quaternion and using Euler angles
To answer the question in your title: there is no functional difference between using quaternions and the angle representation. In fact, internally the .rotate() method is:
public Spatial rotate(float xAngle, float yAngle, float zAngle) {
    TempVars vars = TempVars.get();
    Quaternion q = vars.quat1;
    q.fromAngles(xAngle, yAngle, zAngle);
    rotate(q);
    vars.release();
    return this;
}
In other words, whether you use a quaternion directly or not you are using a quaternion.
Difference between .rotate() and .setLocalRotation()
However, the two methods you are using are not equivalent. In fact there are both .rotate(angles) and .rotate(quaternion) (although .setLocalRotation() is only available for quaternions). So the second part of your question is: what's the difference between .rotate(anything) and .setLocalRotation(anything)? Again, looking at the source code gives us our answer:
public Spatial rotate(Quaternion rot) {
    this.localTransform.getRotation().multLocal(rot);
    setTransformRefresh();
    return this;
}

public void setLocalRotation(Quaternion quaternion) {
    localTransform.setRotation(quaternion);
    setTransformRefresh();
}
So .rotate() rotates the object (in its local frame) by the given amount from where it is now, whereas .setLocalRotation() sets the rotation regardless of where it is now.
Conclusion
If your object currently has no rotation, the two methods are identical; however, if the object has already been rotated, they amount to "as well as the current rotation" versus "instead of the current rotation".
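For example, assuming geom, FastMath, Quaternion and Vector3f are set up as in the question, a minimal sketch of that difference might look like this:

float r = FastMath.DEG_TO_RAD * 45f;

geom.rotate(r, 0f, 0f); // rotated 45 degrees around the x-axis
geom.rotate(r, 0f, 0f); // now 90 degrees: each call adds to the current rotation

Quaternion roll045 = new Quaternion();
roll045.fromAngleAxis(45 * FastMath.DEG_TO_RAD, Vector3f.UNIT_X);
geom.setLocalRotation(roll045); // exactly 45 degrees around the x-axis
geom.setLocalRotation(roll045); // still exactly 45 degrees: each call replaces the rotation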
Quaternions have many advantages over the standard angle approach, the most obvious of which is avoiding gimbal lock. Where you can use quaternions, do use them. The angle methods are all convenience methods to help you if you need them.
Related
I am currently trying to make some polygons rotate for my Asteroids game! :)
In order to do that, I'm using the AffineTransform setToRotation() method, but I am very confused about the meaning of the parameters. For setToRotation(a, b, c) I understand that b and c are the x and y coordinates of the point the shape revolves around. a somehow makes the shape rotate, but it doesn't appear to be in degrees. So what else is it? And what do the other two setToRotation methods (setToRotation(a, b) and setToRotation(a)) do? I don't understand them AT ALL.
Thanks for every answer!
As in the documentation:
theta - the angle of rotation measured in radians
All of the trigonometric functions in java.lang.Math either accept or return radians.
You can convert from degrees to radians using Math.toRadians.
The other method overloads are also described in the documentation. Unless you can describe what about them you don't understand, there is no point in trying to explain them again, as that explanation could be in the same terms that you don't understand.
So what else is it?
It is in radians. You can see its documentation here. To convert from degrees to radians, multiply by π and divide by 180, so π radians is 180 degrees, for example. Or you can use Math.toRadians.
And what do the other two setToRotation methods (setToRotation(a, b) and setToRotation(a)) do?
Those are also very well documented. See this and this.
Basically, the one that takes one parameter is equivalent to calling setToRotation(a, b, c) with b and c both equal to 0, and the one that takes two parameters is equivalent to calling setToRotation(a) with the angle given by the arctangent of the two parameters (setToRotation(Math.atan2(b, a))).
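For example, a quick sketch using java.awt.geom.AffineTransform (the angle, anchor point and rotation vector here are arbitrary values picked for illustration):

AffineTransform t = new AffineTransform();

t.setToRotation(Math.toRadians(45));          // rotate 45 degrees about the origin
t.setToRotation(Math.toRadians(45), 100, 80); // rotate 45 degrees about the point (100, 80)
t.setToRotation(3, 4);                        // rotate so the x-axis points along the vector (3, 4),
                                              // i.e. the same as setToRotation(Math.atan2(4, 3))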
I'm creating my own game application using Box2D and I'm facing some problems. I managed to render every body I wanted and move them, but I have to use very high values to move them correctly. For example, here is my player body definition:
bodyDefPlayer = new BodyDef();
bodyDefPlayer.type = BodyType.DynamicBody;
bodyDefPlayer.position.set(positionX, (positionY * tileHeight) + 50);
playerBody = world.createBody(bodyDefPlayer);
polygonPlayer = new PolygonShape();
polygonPlayer.setAsBox(50, 50);
fixturePlayer = new FixtureDef();
fixturePlayer.shape = polygonPlayer;
fixturePlayer.density = 0.8f;
fixturePlayer.friction = 0.5f;
fixturePlayer.restitution = 0.0f;
playerBody.createFixture(fixturePlayer);
playerBody.setFixedRotation(true);
And here is how I have to apply my impulse to move him:
Vector2 vel = this.player.playerBody.getLinearVelocity();
Vector2 pos = this.player.playerBody.getWorldCenter();
player.playerBody.applyLinearImpulse(new Vector2(vel.x + 20000000, vel.y * 1000000), pos, true);
As you can see, my values are pretty high. Also, the player doesn't follow a curve when falling; he just drops straight down as soon as he can.
I'd like to have some help please :)
Thanks !
It seems like you are using the Linear Impulse when you should be just applying forces. The Linear Impulse "smacks" a body with a large force to give it a hefty instantaneous velocity. This is good if you are hitting a golf ball (large force, small time), or simulating a fired bullet, but it does not look good for the movement of real bodies.
This is a function that I use on my Entities, a class that holds the Box2D body and applies control forces to it. In this case, the function ApplyThrust makes the body move towards a target (seek behavior):
void ApplyThrust()
{
    // Unit direction from the body to the target.
    b2Vec2 toTarget = GetTargetPos() - GetBody()->GetWorldCenter();
    toTarget.Normalize();
    // Desired velocity: maximum speed towards the target.
    b2Vec2 desiredVel = GetMaxSpeed()*toTarget;
    b2Vec2 currentVel = GetBody()->GetLinearVelocity();
    // Steering force: the difference between desired and current velocity.
    b2Vec2 thrust = desiredVel - currentVel;
    GetBody()->ApplyForceToCenter(GetMaxLinearAcceleration()*thrust);
}
In this case, the Entity has been given the command to move to a Target position, which it caches internally and can be recovered using GetTargetPos(). The function applies force to the body by generating a difference vector between the desired maximum velocity (towards the target) and the current velocity. If the body is already headed towards the target at the maximum velocity, the contribution from this function is effectively 0 (same vectors).
Note that this does NOT change the orientation of the body. There is actually a separate function for that.
Note the original question appears to be in Java (libgdx?). This is in C++, but the general idea is applicable and should work in any Box2d implementation by using references instead of pointers, etc.
There is a code base with samples of doing this located here. You can read more about this code base in this post. Watch the video...I suspect it will tell you immediately if this is the information you are looking for.
The Normalize function, which should be a member of the b2Vec2 class, is:
/// Convert this vector into a unit vector. Returns the length.
float32 b2Vec2::Normalize()
{
    float32 length = Length();
    if (length < b2_epsilon)
    {
        return 0.0f;
    }
    float32 invLength = 1.0f / length;
    x *= invLength;
    y *= invLength;
    return length;
}
This converts a vector to a unit vector (length = 1) pointing in the same direction.
NOTE This is an in-place operation. It changes the actual b2Vec2 object; it does not return a new one. Adapt to Java as you see fit.
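Since the question looks like libgdx, here is a rough, untested Java sketch of the same idea; the method and parameter names are mine, and I'm assuming body is a com.badlogic.gdx.physics.box2d.Body and targetPos a com.badlogic.gdx.math.Vector2 that you track yourself:

void applyThrust(Body body, Vector2 targetPos, float maxSpeed, float maxLinearAcceleration) {
    // Unit direction from the body to the target.
    Vector2 toTarget = new Vector2(targetPos).sub(body.getWorldCenter()).nor();
    // Desired velocity: full speed towards the target.
    Vector2 desiredVel = toTarget.scl(maxSpeed);
    // Steering force: difference between desired and current velocity.
    Vector2 thrust = desiredVel.sub(body.getLinearVelocity());
    body.applyForceToCenter(thrust.scl(maxLinearAcceleration), true);
}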
Was this helpful?
You are using the same units for graphics and physics (pixels). There are two reasons why such an approach is bad:
It will scale poorly on different resolutions
Box2D is tuned to work with a certain range of values (due to floating-point precision). You can find that range in the Box2D manual. The values are given in meters there, but how you name the units is actually not important, since Box2D does not keep track of units; it just operates on values. For example, you can say that the speed of a body is 10 meters per second or 10 feet per second; for the calculations it does not matter. It only matters when you interpret the result of the calculations, e.g. the distance traveled over time: in the first case it will be in meters and in the second in feet. But that is not Box2D's business. You just pass values to it (for example, body.pos.set(2, 3) carries no information about units).
The common technique to overcome these problems is to use different units for Box2D and for graphics, and simply rescale between them (look for PTM_RATIO in the cocos2d examples).
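In libgdx terms that could look roughly like the sketch below, applied to the code from the question; PIXELS_PER_METER is a name and value I'm choosing here, not something Box2D or libgdx defines:

// Hypothetical scale factor between screen pixels and Box2D meters.
static final float PIXELS_PER_METER = 32f;

// Physics side: feed Box2D values in meters, keeping sizes in its comfortable range.
bodyDefPlayer.position.set(positionX / PIXELS_PER_METER,
        ((positionY * tileHeight) + 50) / PIXELS_PER_METER);
polygonPlayer.setAsBox(50 / PIXELS_PER_METER, 50 / PIXELS_PER_METER);

// Rendering side: convert back to pixels when drawing.
float drawX = playerBody.getPosition().x * PIXELS_PER_METER;
float drawY = playerBody.getPosition().y * PIXELS_PER_METER;

With body sizes around a meter or two, the impulses and forces needed to move the player drop back into ordinary ranges.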
I'm creating a very very simple game for fun. Realizing I needed the trajectory of an object given an angle and a velocity, it seemed logical to use this parametric equation:
x = (v*cos(ø))*t and y = (v*sin(ø))*t - 16t^2
I know that this equation works for a trajectory, but it isn't working with most ø values I use.
Do Java angles work any differently from normal angle calculation?
My goal is for the object to start from the bottom left of the window and follow an arc determined by the given speed and angle. However, it tends to go in strange directions.
The value of ø should be 0 degrees for horizontal and 90 for vertical; in the equation it refers to the angle at which the arc begins.
This is my first ever question post on this site, so if I'm missing anything in that regard please let me know.
Here is the calculating part of my code
Not shown is the void time() method that advances time every 5 ms.
Also, I should mention that parX and parY hold the x and y coordinates in unrounded form, as graphics coordinates require integer values.
Any help is much appreciated, and thank you in advance!
public void parametric()
{
    parX = (float) ((speed*cos(-ø))*time);
    gravity = (time*time)*(16);
    parY = (float) ((float) ((speed*sin(-ø))*time)+gravity)+500;
    xCoord = round(parX);
    yCoord = round(parY);
}
Do Java angles work any differently from normal angle calculation?
You only need to read the docs:
public static double cos(double a)
Parameters:
a - an angle, in radians.
I guess you are using degrees instead of radians?
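If so, converting the angle once with Math.toRadians should fix it. Here is a sketch of the posted parametric() method with just that change, assuming ø is stored in degrees and cos/sin come from java.lang.Math:

public void parametric()
{
    double theta = Math.toRadians(ø); // convert degrees to radians once
    parX = (float) ((speed * Math.cos(-theta)) * time);
    gravity = (time * time) * 16;
    parY = (float) ((float) ((speed * Math.sin(-theta)) * time) + gravity) + 500;
    xCoord = round(parX);
    yCoord = round(parY);
}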
I am new to Mahout, and have lately been transforming a lot of my previous machine learning code to this framework. In many places, I am using cosine similarity between vectors for clustering, classification, etc. Investigating Mahout's distance method, however, gave me quite a surprise. In the following code snippet, the dimension and the float values are taken from an actual output of one of my programs (not that it matters here):
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.common.distance.CosineDistanceMeasure;

public static void main(String[] args) {
    RandomAccessSparseVector u = new RandomAccessSparseVector(373);
    RandomAccessSparseVector v = new RandomAccessSparseVector(373);

    u.set(24, 0.4526985183337534);
    u.set(55, 0.5333219834564495);
    u.set(54, 0.5333219834564495);
    u.set(53, 0.4756042214095471);

    v.set(57, 0.6653016370845252);
    v.set(56, 0.6653016370845252);
    v.set(11, 0.3387439495921685);

    CosineDistanceMeasure cosineDistanceMeasure = new CosineDistanceMeasure();
    System.out.println(cosineDistanceMeasure.distance(u, v));
}
The output is 1.0. Shouldn't it be 0.0?
Comparing this with the output of cosineDistanceMeasure.distance(u, u), I realize that what I am looking for is 1 - cosineDistanceMeasure.distance(u, v). But this reversal just doesn't make sense to me. Any idea why it was implemented this way? Or am I missing something very obvious?
When two points are "close", the angle they form when viewed as vectors from the origin is small, near zero. The cosine of angles near zero is near 1, and the cosine decreases as the angle goes towards 90 and then 180 degrees.
So the cosine decreases as the distance increases, which is why the cosine of the angle between two vectors can't itself serve as a distance measure. The 'canonical' way to turn it into one is 1 - cosine, which is exactly what CosineDistanceMeasure.distance() returns.
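If you want to convince yourself, you can compute the similarity by hand from the vectors in the question (dot() and norm(2) are standard operations on Mahout's Vector interface); this is just a sanity-check sketch:

// Cosine similarity computed by hand: dot(u, v) / (|u| * |v|).
double similarity = u.dot(v) / (u.norm(2) * v.norm(2));
// u and v share no non-zero index, so the dot product is 0, the similarity is 0,
// and CosineDistanceMeasure reports the distance 1 - 0 = 1.
System.out.println(similarity); // 0.0
System.out.println(1 - cosineDistanceMeasure.distance(u, v)); // 0.0 as well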
I'm making a simple Vector class for simple use, so I don't want to import a whole library (like JScience...) for something I can do myself.
Currently I have made this code so far:
public void add(Vector2D v){
    double ang = this.angle*Math.PI/180;
    double mag = this.magnitude;
    double ang0 = v.angle*Math.PI/180;
    double mag0 = v.magnitude;
    // vector to coordinates
    double x1 = mag*Math.cos(ang);
    double y1 = -mag*Math.sin(ang);
    // adding the other vector's coordinates
    double x2 = x1 + mag0*Math.cos(ang0);
    double y2 = y1 - mag0*Math.sin(ang0);
    // back to vector form
    double newMagnitude = Math.sqrt(x2*x2 + y2*y2);
    double newAngle = Math.atan2(y2, x2)*180/Math.PI; // back to degrees to match the stored unit
    this.magnitude = newMagnitude;
    this.angle = newAngle;
}
It converts both vectors to cartesian coordinates and then back using the trigonometric functions, but those are extremely slow, and the method will be used very frequently.
Is there any better way?
First off, some terminology 101:
Point: a dimensionless entity that a space is made of.
Space: a set of points.
Euclidean space: a set of points, together with a set of lines and a notion of closeness (a topology). The set of lines is bound by Euclid's axioms. It is uniquely defined by its dimension.
Vector: a translation-invariant relationship between two points in a Euclidean space.
Coordinate system: a mapping from tuples of real numbers to points or vectors in some space.
Cartesian coordinate system: A specific mapping, with the properties (in the case of the Euclidean 2D space) that the set of points ax+by+c=0 is a line unless a and b are both zero, that the vectors [0,1] and [1,0] are perpendicular and of unit length, and that points in space are close together iff they are close together in all coordinates. This is what you refer to as "coordinates".
Polar coordinate system: Another specific mapping, that can be defined from the cartesian coordinates: [arg,mag] in polar coordinates map to [cos(arg)*mag, sin(arg)*mag] in cartesian coordinates. This is what you refer to as "vector form".
The cartesian coordinate system has multiple benefits over the polar coordinate system. One of them is easier addition: [x1,y1]+[x2,y2]=[x1+x2,y1+y2], and an easy scalar (dot) product: [x1,y1]·[x2,y2]=x1*x2+y1*y2. Additive inversion is slightly easier as well: -[x,y]=[-x,-y].
Another benefit is that while polar coordinates are strictly 2D (there is no unique extension - the spherical coordinate system is a candidate, though), cartesian coordinates extend naturally to any number of dimensions.
For this reason, it is beneficial - and usual - to always store vectors in their cartesian coordinate form.
If you ever need vectors in their polar form, then (and only then) convert, once and for all.
Polar coordinates are not as useful. They can be used for input and output, but they are rarely useful for computation.
You keep storing your vectors in the polar form: you convert them to their cartesian form for computation, then convert back to polar, only to convert them to cartesian again.
You should store your vectors in the cartesian form. The performance improvement should be clearly visible if you drop the redundant conversion.
Even if you want to rotate a vector, it's not beneficial to convert to polar and back. Rotation by a signed angle a is as easy as [x*cos(a)+y*sin(a), y*cos(a)-x*sin(a)]. That's two trigonometric functions (at most - you can cache these values) to rotate an entire array of vectors.
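For example, a minimal cartesian sketch of such a class (the names and details are mine, not a drop-in replacement for the code in the question):

public class Vector2D {
    public double x, y;

    public Vector2D(double x, double y) { this.x = x; this.y = y; }

    // Component-wise addition: no trigonometry at all.
    public void add(Vector2D v) {
        this.x += v.x;
        this.y += v.y;
    }

    // Magnitude and angle only when actually needed (input/output, display, ...).
    public double magnitude() {
        return Math.sqrt(x * x + y * y);
    }

    public double angle() { // in radians
        return Math.atan2(y, x);
    }

    // Rotation by a signed angle a (radians), using the formula above.
    public void rotate(double a) {
        double nx = x * Math.cos(a) + y * Math.sin(a);
        double ny = y * Math.cos(a) - x * Math.sin(a);
        x = nx;
        y = ny;
    }
}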