How to fix this 90 or -90 rotation problem - java

I wrote this code to create a plane in front of our character, which we then use
to check the positions of enemies and determine whether they are inside the plane or not.
The code is in Java and runs on our server; we use it in our game, which is built with Unity.
In the following code, AreaWidth is the width of the plane we want to check and AreaLength is its length.
message holds data about our player, such as position and quaternion.
bwNode holds data about the current target that we are checking.
float AreaWidth = message.getWidth();
float AreaLength = message.getLength();
double Fi, cs, sn;
Fi = message.quaternion.toEulerAngles().getY();
cs = Math.cos(Fi);
sn = Math.sin(Fi);
int ptx, pty;
ptx = (int)(bwNode.getLoc().getX() - message.loc.getX());
pty = (int)(bwNode.getLoc().getZ() - message.loc.getZ());
double perplen, alonglen;
perplen = Math.abs(ptx * sn - pty * cs);
alonglen = ptx * cs + pty * sn;
if (perplen <= AreaWidth / 2)
{
    if (alonglen >= 0 && alonglen <= AreaLength)
    {
        // the target is inside the area
    }
}
When debugging or actually using this code in game, it works with one issue only: the plane always faces either the right or left side of our player instead of the player's facing direction.
Here is an image representing it for 0, 90, 180 and 270 degrees of our player's Y rotation. (In this image the white cube is the player, the triangle shows where the player is facing, the sphere is the enemy, the red transparent box shows where our plane should be, and the colored lines form the plane that is actually created.)
I need help figuring out what is causing it in my code.

Well, it is not really a question about Unity as far as I understand, since it relates to your backend only.
Anyway, you are mixing up target position and player position; note that in your screenshots the plane always faces the target, no matter where the player looks.
The way I would do it is to express the target in the player's coordinate system, something like this:
float AreaWidth = message.getWidth();
float AreaLength = message.getLength();
double Fi = message.quaternion.toEulerAngles().getY();
double cs = Math.cos(Fi);
double sn = Math.sin(Fi);
// First, move the target from the world axis origin to the player axis origin
float tx = (bwNode.getLoc().getX() - message.loc.getX());
float tz = (bwNode.getLoc().getZ() - message.loc.getZ());
// Now rotate the target around its new origin to apply the player rotation
float ptx = (float)(cs * tx - sn * tz);
float ptz = (float)(sn * tx + cs * tz);
// Now ptx and ptz are in player-local coordinates, simply test them against the area
if (-AreaWidth / 2 <= ptx && ptx <= AreaWidth / 2 && 0 <= ptz && ptz <= AreaLength) {
    // It is inside
} else {
    // It is outside
}
P.S.: It is probably much easier (and definitely more efficient) to pass the player's full transform matrix to the server instead of handling the translation and rotation components separately.
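As a side note on the original symptom: Unity's Y (yaw) Euler angle is measured from the +Z (forward) axis, so the forward direction in the XZ plane is (sin Fi, cos Fi) rather than (cos Fi, sin Fi), which would explain a constant 90-degree offset. Under that assumption (and assuming Fi has already been converted to radians), a sketch that projects the target offset onto explicit forward/right vectors, reusing the question's variable names, would be:
// Sketch only: project the target offset onto the player's forward and right
// vectors in the XZ plane (assumes Unity's convention: yaw measured from +Z,
// left-handed, Y up, Fi in radians).
double fwdX = Math.sin(Fi), fwdZ = Math.cos(Fi);
double rightX = Math.cos(Fi), rightZ = -Math.sin(Fi);
double dx = bwNode.getLoc().getX() - message.loc.getX();
double dz = bwNode.getLoc().getZ() - message.loc.getZ();
double alonglen = dx * fwdX + dz * fwdZ;               // distance in front of the player
double perplen = Math.abs(dx * rightX + dz * rightZ);  // sideways offset from the centre line
boolean inside = perplen <= AreaWidth / 2.0 && alonglen >= 0 && alonglen <= AreaLength;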

Related

Java Arc2D Collision detection (With Rotation)

I have tried to create an NPC character that can "see" the player by using cones of vision.
The NPC will rotate back and forth at all times.
My problem is that the arc used for collision keeps a generic, unchanging position, even though when it's drawn to the screen it looks correct.
[Screenshots of the collisions in action][1]
[GitHub link for java files][2]
I'm using Arc2D to draw the shape like this in my NPC class
// Update the shapes used in the npc
rect.setRect(x, y, w, h);
ellipse.setFrame(rect);
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle, visionAngle * 2, Arc2D.PIE);
// cx, cy: CenterX, CenterY (of the npc)
// visionDistance: the distance from the arc to the npc
// visionAngle: a constant value around 45 degrees, with visionAngle * 2 a constant value around 90 degrees (to make a pie shape)
I've tried multiplying the position and the angles by the sin and cosine of the NPC's current angle
something like these
visionArc.setArcByCenter(cx * (Math.cos(Math.toRadians(angle))), cy * (Math.sin(Math.toRadians(angle))), visionDistance, visionAngle, visionAngle * 2, Arc2D.PIE);
or
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle - angle, (visionAngle + angle) * 2, Arc2D.PIE);
or
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle * (Math.cos(Math.toRadians(angle))), visionAngle * 2, Arc2D.PIE);
I've tried a lot but can't seem to find what works. Making the vision angles non-constant produces an arc that expands and contracts, and multiplying the position by the sin or cosine of the angle makes the arc fly around the screen, which doesn't really work either.
This is the function that draws the given NPC
public void drawNPC(NPC npc, Graphics2D g2, AffineTransform old) {
    // translate to the position of the npc and rotate
    AffineTransform npcTransform = AffineTransform.getRotateInstance(Math.toRadians(npc.angle), npc.x, npc.y);
    // Translate back a few units to keep the npc rotating about its own center point
    npcTransform.translate(-npc.halfWidth, -npc.halfHeight);
    g2.setTransform(npcTransform);
    // g2.draw(npc.rect); // <-- show bounding box if you want
    g2.setColor(npc.outlineColor);
    g2.draw(npc.visionArc);
    g2.setColor(Color.BLACK);
    g2.draw(npc.ellipse);
    g2.setTransform(old);
}
This is my collision detection algorithm - NPC is a superclass of Ninja (shorter range, higher peripheral vision)
public void checkNinjas(Level level) {
    for (int i = 0; i < level.ninjas.size(); i++) {
        Ninja ninja = level.ninjas.get(i);
        playerRect = level.player.rect;
        // Check collision
        if (playerRect.getBounds2D().intersects(ninja.visionArc.getBounds2D())) {
            // Create an area of the object for greater precision
            Area area = new Area(playerRect);
            area.intersect(new Area(ninja.visionArc));
            // After checking if the area intersects a second time, make the NPC "see" the player
            if (!area.isEmpty()) {
                ninja.seesPlayer = true;
            } else {
                ninja.seesPlayer = false;
            }
        }
    }
}
Can you help me correct the actual positions of the arcs for my collision detection? I have tried creating new shapes so I can have one to do math on and one to draw to the screen but I scrapped that and am starting again from here.
[1]: https://i.stack.imgur.com/rUvTM.png
[2]: https://github.com/ShadowDraco/ArcCollisionDetection
After a few days of coding, learning, and testing new ideas, I came back to this program and implemented the collision detection using my original idea (ray casting). I have created the equivalent with rays!
Screenshot of the new product
Github link to the project that taught me the solution
Here's the new math
public void setRays() {
    for (int i = 0; i < rays.length; i++) {
        double rayStartAngleX = Math.sin(Math.toRadians((startAngle - angle) + i));
        double rayStartAngleY = Math.cos(Math.toRadians((startAngle - angle) + i));
        rays[i].setLine(cx, cy, cx + visionDistance * rayStartAngleX, cy + visionDistance * rayStartAngleY);
    }
}
Here is a link to the program I started after I asked this question and moved on to learn more, and an image of what the new product looks like.
(The original GitHub page has been updated with a new branch :) I'm learning GitHub right now too.)
I do not believe that using Arc2D in the way I intended is possible; however, there is a .setArcByTangent method, and it may be possible to use that, but I wasn't going to get into it. Rays are cooler.
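For completeness, here is a minimal sketch of how those rays could be tested against the player's bounding box, assuming rays is a Line2D[] field on the NPC and playerRect is a java.awt.geom.Rectangle2D (the method name seesPlayer here is only illustrative):
// Hypothetical helper inside the NPC class: the NPC "sees" the player
// if any vision ray crosses the player's bounding rectangle.
// Line2D implements Shape, so intersects(Rectangle2D) is available.
public boolean seesPlayer(java.awt.geom.Rectangle2D playerRect) {
    for (java.awt.geom.Line2D ray : rays) {
        if (ray.intersects(playerRect)) {
            return true;
        }
    }
    return false;
}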

How to compute the Rotation-Matrix based on a Direction-Vector?

In my 3D world implementation I use Direction-Vectors (unit vector) to decide the orientation of my 3D-objects.
Each 3D-object has its own Direction-Vector which by default has the orientation V3(1, 0, 0) with Origin at V3(0,0,0).
This is how I apply the directional Rotation-Matrix "D" (the matrix "R" is used to rotate 3D-objects around their Direction-Vector as an axis; this seems to work fine):
Model3D model = actor.model;
// Loops through all the edges in the model
for (int i = 0; i < model.edges.length; i++) {
    M3 D = directionMatrix(actor);
    M3 R = rotationMatrix(actor);
    // Draws a line based on each edge in the model.
    // Each line consists of two points a and b.
    // The matrix R rotates the points around a given axis.
    // The D matrix rotates the points towards a given axis - not around it.
    S.drawLine(g,
        D.mul(R.mul(model.points[model.edges[i].a])).scale(actor.scale),
        D.mul(R.mul(model.points[model.edges[i].b])).scale(actor.scale)
    );
}
This is how I calculate my current directional Rotation-Matrix "D":
public M3 directionalRotationMatrix(c_Actor3D actor) {
    double x = Math.atan2(actor.direction.z, actor.direction.y);
    double y = Math.atan2(actor.direction.x, actor.direction.z);
    double z = Math.atan2(actor.direction.y, actor.direction.x);
    double sin_x = Math.sin(x), sin_y = Math.sin(y), sin_z = Math.sin(z);
    double cos_x = Math.cos(x), cos_y = Math.cos(y), cos_z = Math.cos(z);
    return new M3(
        cos_x * cos_y, (cos_x * sin_y * sin_z) - (sin_x * cos_z), (cos_x * sin_y * cos_z) + (sin_x * sin_z),
        sin_x * cos_y, (sin_x * sin_y * sin_z) + (cos_x * cos_z), (sin_x * sin_y * cos_z) - (cos_x * sin_z),
        -sin_y, cos_y * sin_z, cos_y * cos_z);
}
My problem is to create the correct directional Rotation-Matrix that rotates the 3D-objects in the direction of their respective Direction-Vectors.
I'm not sure at all what I'm doing wrong... My idea is to first rotate the cube towards a direction, then rotate the cube around the axis of that direction. After all that come position transformations etc.
Thank you for your help guys!
Sounds like you are trying to move a 3D object in the direction of its forward-facing vector. To do this you will need the position of the object (x, y, z) and 3 vectors (forward, up, and right). You can rotate the 3 vectors using pitch-, yaw- and roll-based vector math (see the link below). For the forward movement you then add the object's position plus the speed multiplied by the forward vector, i.e.: position += speed * forward
Use the following complete example code posted here to figure out how to implemented your own version. http://cs.lmu.edu/~ray/notes/flightsimulator/
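If you only need the matrix that orients an object along its Direction-Vector (rather than the Euler-angle construction above), one common alternative is to build an orthonormal basis directly from the direction. A minimal sketch, assuming plain double[3] vectors, a Y-up right-handed world, and a column-vector convention (all names here are illustrative, not from the original post):
// Build a 3x3 rotation matrix whose columns are the right, up and forward
// basis vectors derived from a direction (forward) vector.
static double[][] rotationFromDirection(double[] dir) {
    double[] forward = normalize(dir);
    double[] worldUp = {0, 1, 0};
    // Guard for dir being (anti)parallel to worldUp is omitted for brevity.
    double[] right = normalize(cross(worldUp, forward));
    double[] up = cross(forward, right);
    return new double[][] {
        { right[0], up[0], forward[0] },
        { right[1], up[1], forward[1] },
        { right[2], up[2], forward[2] }
    };
}
static double[] cross(double[] a, double[] b) {
    return new double[] {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0]
    };
}
static double[] normalize(double[] v) {
    double len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return new double[] { v[0] / len, v[1] / len, v[2] / len };
}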

Tracking a specific point's x & y position on an image during rotation in Java

I'm trying to create rope physics for a 2D game, so as a starting point I have a small rotating image, and I need to add another piece of rope to the end of it. Unfortunately I'm having trouble tracking the bottom part of the image, as the rotation occurs at the top of it. I've managed to track the (0,0) coordinate of the image using the following code, but I need to be able to track point (32,57). This is what I have so far:
xr = xm + (xPos - xm) * Math.cos(a) - (yPos - ym) * Math.sin(a);
yr = ym + (xPos - xm) * Math.sin(a) + (yPos - ym) * Math.cos(a);
Any help is appreciated!
EDIT:
So hey, I got it working =D Using polar coordinates turned out to be a lot easier than whatever I had going on before.
The top 2 variables are constant and stay the same:
theta0 = Math.atan2(y, x);
r = 25;
theta = theta0 + a;
xr = (r * Math.cos(theta)) + xm;
yr = (r * Math.sin(theta)) + ym;
xm and ym are the positions of my image.
Use polar coordinates. Set your origin at the point of rotation of your image, and pick your favorite angular reference (say 0 degrees is directly to the right, and positive rotations go counterclockwise from there).
Compute the polar coordinates of your desired point (32, 57) relative to this coordinate system. Say the answer is (r, theta).
Now, the only thing that's changing as you spin the image around is the value of theta. Now you can go back to x-y coordinates with your new value of theta.
Hope this helps.
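Putting that recipe into code, a hypothetical helper (names are illustrative, not from the original post) might look like this:
// Rotate an anchor point (px, py), given relative to the pivot, by angle a
// (radians) and return its absolute position; (xm, ym) is the pivot.
static double[] rotatedPoint(double px, double py, double a, double xm, double ym) {
    double r = Math.hypot(px, py);           // distance from the pivot to the point
    double theta = Math.atan2(py, px) + a;   // original angle plus the rotation
    return new double[] { xm + r * Math.cos(theta), ym + r * Math.sin(theta) };
}
For the point (32, 57) in the question, px and py would be its offset from the image's rotation point.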

Perspective Projection: determine the 2D screen coordinates (x,y) of points in 3D space (x,y,z)

I wish to determine the 2D screen coordinates (x,y) of points in 3D space (x,y,z).
The points I wish to project are real-world points represented by GPS coordinates and elevation above sea level.
For example:
Point (Lat:49.291882, Long:-123.131676, Height: 14m)
The camera position and height can also be determined as a x,y,z point. I also have the heading of the camera (compass degrees), its degree of tilt (above/below horizon) and the roll (around the z axis).
I have no experience of 3D programming, therefore, I have read around the subject of perspective projection and learnt that it requires knowledge of matrices, transformations etc - all of which completely confuse me at present.
I have been told that OpenGL may be of use to construct a 3D model of the real-world points, set up the camera orientation and retrieve the 2D coordinates of the 3D points.
However, I am not sure if using OpenGL is the best solution to this problem, and even if it is, I have no idea how to create models, set up cameras, etc.
Could someone suggest the best method to solve my problem? If OpenGL is a feasible solution I'd have to use OpenGL ES, if that makes any difference. Oh, and whatever solution I choose must execute quickly.
Here's a very general answer. Say the camera's at (Xc, Yc, Zc) and the point you want to project is P = (X, Y, Z). The distance from the camera to the 2D plane onto which you are projecting is F (so the equation of the plane is Z-Zc=F). The 2D coordinates of P projected onto the plane are (X', Y').
Then, very simply:
X' = ((X - Xc) * (F / (Z - Zc))) + Xc
Y' = ((Y - Yc) * (F / (Z - Zc))) + Yc
If your camera is the origin, then this simplifies to:
X' = X * (F/Z)
Y' = Y * (F/Z)
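As a minimal sketch of that pinhole model in Java (camera at the origin, looking down +Z, focal distance f; a real implementation would first rotate world points into the camera frame using the heading/tilt/roll mentioned in the question):
// Project a camera-space point (x, y, z) onto a plane at distance f in front
// of the camera; returns the 2D coordinates on that plane.
static double[] project(double x, double y, double z, double f) {
    double px = x * (f / z);
    double py = y * (f / z);
    return new double[] { px, py };
}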
You do indeed need a perspective projection and matrix operations greatly simplify doing so. I assume you are already aware that your spherical coordinates must be transformed to Cartesian coordinates for these calculations.
Using OpenGL would likely save you a lot of work over rolling your own software rasterizer. So, I would advise trying it first. You can prototype your system on a PC since OpenGL ES is not too different as long as you keep it simple.
If you just need to compute the coordinates of some points, you should only need some algebra skills, not 3D programming with OpenGL.
Moreover, OpenGL does not deal with geographic coordinates.
First get some documentation about WGS84 and geodetic coordinates; you first have to convert your GPS data into a Cartesian frame (for instance the Earth-centric Cartesian frame in which the WGS84 ellipsoid is defined).
Then the computations with matrices can take place.
The chain of transformations is roughly :
WGS84
earth centric coordinates
some local frame
camera frame
2D projection
For the first conversion see this.
The last step involves a projection matrix.
The others are only coordinate rotations and translations.
The "some local frame" is the local Cartesian frame, tangent to the ellipsoid, with its origin at your camera location.
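To illustrate the first step of that chain, a geodetic-to-ECEF conversion could look like the sketch below (standard WGS84 constants; treat the method name and signature as illustrative rather than a drop-in for the linked reference):
// Convert geodetic latitude/longitude (radians) and height above the
// ellipsoid (metres) to Earth-centred, Earth-fixed (ECEF) coordinates.
static double[] geodeticToEcef(double lat, double lon, double h) {
    final double a = 6378137.0;            // WGS84 semi-major axis (m)
    final double f = 1.0 / 298.257223563;  // WGS84 flattening
    final double e2 = f * (2 - f);         // first eccentricity squared
    double sinLat = Math.sin(lat), cosLat = Math.cos(lat);
    double n = a / Math.sqrt(1 - e2 * sinLat * sinLat); // prime vertical radius
    double x = (n + h) * cosLat * Math.cos(lon);
    double y = (n + h) * cosLat * Math.sin(lon);
    double z = (n * (1 - e2) + h) * sinLat;
    return new double[] { x, y, z };
}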
I'd recommend "Mathematics for 3D Game Programming and Computer Graphics" by Eric Lengyel. It covers matrices, transformations, the view frustum, perspective projection and more.
There is also a good chapter in The OpenGL Programming Guide (red book) on viewing transformations and setting up a camera (including how to use gluLookAt).
If you aren't interested in displaying the 3D scene and are limited to using OpenGL ES then it may be better to just write your own code to do the mapping from 3D to 2D window coords. As a starting point you could download Mesa 3D, an open source implementation of OpenGL, to see how they implement gluPerspective (to set a projection matrix), gluLookAt (to set a camera transformation) and gluProject (to project a 3D point to 2D window coords).
return [((fol/v[2])*v[0]+x),((fol/v[2])*v[1]+y)];
A point at [0,0,1] will be x=0 and y=0, unless you add the screen center xy - it's not the camera xy. fol is the focal length, derived from the fov angle and the screen width - how high the triangle is (tangent). This method will not match the three.js perspective matrix, which is why I am looking for that.
I should not be looking for it. I matched xy on OpenGL, perfectly, like super glue! But I cannot get it to work right in Java. That perfect match follows.
var pmat = [ 0, 0, 0, 0,
             0, 0, 0, 0,
             0, 0, (farclip + nearclip) / (nearclip - farclip), -1,
             0, 0, 2 * farclip * nearclip / (nearclip - farclip), 0 ];
void setpmat() {
    double fl; // = tan(dtor(90-fovx/aspect/2)); /// UNIT focal length
    fl = 1 / tan(dtor(fov / Aspect / 2));        /// same number
    pmat[0] = fl / Aspect;
    pmat[5] = fl;
}
void fovmat(double v[], double p[]) {
    int cx = (int)(_Width / 2), cy = (int)(_Height / 2);
    double pnt2[4], pnt[4] = { 0, 0, 0, 1 };
    COPYVECTOR(pnt, p); NORMALIZE(pnt);
    popmatrix4(pnt2, pmat, pnt);
    COPYVECTOR(v, pnt2);
    v[0] *= -cx; v[1] *= -cy;
    v[0] += cx;  v[1] += cy;
} // world to screen matrix
void w2sm(int xy[], double p[]) {
    double v[3]; fovmat(v, p);
    xy[0] = (int)v[0];
    xy[1] = (int)v[1];
}
I have one more way to match three.js xy, until I get the matrix working, with just one condition: it must run at an Aspect of 2.
function w2s(fol, v, x, y) {
    var a = width / height;
    var b = height / width;
    /// b = .5 // a = 2
    var f = 1 / Math.tan(dtor(_fov / a)) * x * b;
    return [intr((f / v[2]) * v[0] + x), intr((f / v[2]) * v[1] + y)];
}
Use it with the inverted camera matrix; you will need invert_matrix().
v = orbital(i);
v = subv(v,campos);
v3 = popmatrix(wmatrix,v); //inverted mat
if (v3[2] > 0) {
xy = w2s(flen,v3,cx,cy);
Finally, here it is (everyone ought to know by now): the no-matrix match, any aspect.
function angle2fol(deg, centerx) {
    var b = width / height;
    var a = dtor(90 - (clamp(deg, 0.0001, 174.0) / 2));
    return asa_sin(PI_5, centerx, a) / b;
}
function asa_sin(a, s, b) {
    return Math.sin(b) * (s / Math.sin(PI - (a + b)));
} // ASA: solve the side opposite angle 2 (b)
function w2s(fol, v, x, y) {
    return [intr((fol / v[2]) * v[0] + x), intr((fol / v[2]) * v[1] + y)];
}
Updated the image for the proof. Input _fov gets you 1.5 that, "approximately." To see the FOV readout correctly, redo the triangle with the new focal length.
function afov(deg, centerx) {
    var f = angle2fol(deg, centerx);
    return rtod(2 * sss_cos(f, centerx, sas_cos(f, PI_5, centerx)));
}
function sas_cos(s, a, ss) {
    return Math.sqrt((Math.pow(s, 2) + Math.pow(ss, 2)) - (2 * s * ss * Math.cos(a)));
} // Side-Angle-Side: solve the length of the missing side
function sss_cos(a, b, c) {
    with (Math) {
        return acos((pow(a, 2) + pow(c, 2) - pow(b, 2)) / (2 * a * c));
    }
} // SSS: solve the angle opposite side 2 (b)
The star library confirmed the perspective, so it is possible to measure the VIEW! http://innerbeing.epizy.com/cwebgl/perspective.jpg
I can explain the 90-degree correction to the moon's north pole in one word: precession. So what is the current up vector? pnt? radec?
function ininorths() {
    if (0) {
        var c = ctime;
        var v = LunarPos(jdm(c));
        c += secday();
        var vv = LunarPos(jdm(c));
        vv = crossprod(v, vv);
        v = eyeradec(vv);
        echo(v, vv);
        v = [266.86 - 90, 65.64]; // old
    }
    var v = [282.6425, 65.8873]; /// new
    // ...
}
I have yet to explain the TWO sets of vectors: Three.milkyway.matrix and the 3D to 2D drawing. They ARE:
function drawmilkyway() {
    var v2 = radec2pos(dtor(192.8595), dtor(27.1283), 75000000);
    // gcenter 266.4168 -29.0078
    var v3 = radec2pos(dtor(266.4168), dtor(-29.0078), 75000000);
    // ...
}
function initmwmat() {
    var r, u, e;
    e = radec2pos(dtor(156.35), dtor(12.7), 1);
    u = radec2pos(dtor(60.1533), dtor(25.5935), 1);
    r = normaliz(crossprod(u, e));
    u = normaliz(crossprod(e, r));
    e = normaliz(crossprod(r, u));
    var m = MilkyWayMatrix;
    m[0]=r[0];  m[1]=r[1];  m[2]=r[2];   m[3]=0.0;
    m[4]=u[0];  m[5]=u[1];  m[6]=u[2];   m[7]=0.0;
    m[8]=e[0];  m[9]=e[1];  m[10]=e[2];  m[11]=0.0;
    m[12]=0.0;  m[13]=0.0;  m[14]=0.0;   m[15]=1.0;
}
/// the draw vectors and the matrix were the same in C!
void initmwmat(double m[16]) {
    double r[3], u[3], e[3];
    radec2pos(e, dtor(192.8595), dtor(27.1283), 1);    // up
    radec2pos(u, dtor(266.4051), dtor(-28.9362), -1);  // eye
}

How to position a Node along a circular orbit around a fixed center based on mouse coordinates (JavaFX)?

I'm trying to get into some basic JavaFX game development and I'm getting confused with some circle maths.
I have a circle at (x:250, y:250) with a radius of 50.
My objective is to place a smaller circle on the circumference of the above circle based on the position of the mouse.
Where I'm getting confused is the coordinate space and the trig behind it all.
My issue comes from the fact that the X/Y space on the screen is not centered at (0,0); instead the top left of the screen is (0,0) and the bottom right is (500,500).
My calculations are:
var xpos:Number = mouseEvent.getX();
var ypos:Number = mouseEvent.getY();
var center_pos_x:Number = 250;
var center_pos_y:Number = 250;
var length = ypos - center_pos_y;
var height = xpos - center_pos_x;
var angle_deg = Math.toDegrees(Math.atan(height / length));
var angle_rad = Math.toRadians(angle_deg);
var radius = 50;
moving_circ_xpos = (radius * Math.cos(angle_rad)) + center_pos_x;
moving_circ_ypos = (radius * Math.sin(angle_rad)) + center_pos_y;
I made the app print out the angle (angle_deg) that I have calculated when I move the mouse and my output is below:
When the mouse is (in degrees moving anti-clockwise):
directly above the circle and horizontally inline with the center, the angle is -0
to the left and vertically centered, the angle is -90
directly below the circle and horizontally inline with the center, the angle is 0
to the right and vertically centered, the angle is 90
So, what can I do to make it 0, 90, 180, 270??
I know it must be something small, but I just cant think of what it is...
Thanks for any help
(and no, this is not an assignment)
atan(height/length) is not enough to get the angle. You need to compensate for each quadrant, as well as the possibility of division by zero. Most programming language libraries supply a method called atan2, which takes two arguments, y and x, and does this calculation for you.
More information on Wikipedia: atan2
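As a minimal sketch using the question's variable names (assuming plain doubles here), the atan2 version of the placement would be:
// Angle from the big circle's center to the mouse; atan2 handles all
// quadrants and the vertical case where the x difference is zero.
double angle = Math.atan2(ypos - center_pos_y, xpos - center_pos_x);
// Place the small circle on the circumference, in the direction of the mouse.
double moving_circ_xpos = center_pos_x + radius * Math.cos(angle);
double moving_circ_ypos = center_pos_y + radius * Math.sin(angle);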
You can get away without calculating the angle. Instead, use the center of your circle (250,250) and the position of the mouse (xpos,ypos) to define a line. The line intersects your circle when its length is equal to the radius of your circle:
// Calculate distance from center to mouse.
xlen = xpos - x_center_pos;
ylen = ypos - y_center_pos;
line_len = sqrt(xlen*xlen + ylen*ylen); // Pythagoras: x^2 + y^2 = distance^2
// Find the intersection with the circle.
moving_circ_xpos = x_center_pos + (xlen * radius / line_len);
moving_circ_ypos = y_center_pos + (ylen * radius / line_len);
Just verify that the mouse isn't at the center of your circle, or the line_len will be zero and the mouse will be sucked into a black hole.
There's a great book called "Graphics Gems" that can help with this kind of problem. It is a cookbook of algorithms and source code (in C, I think), and allows you to quickly solve a problem using tested functionality. I would totally recommend getting your hands on it - it saved me big time when I quickly needed to add code to do fairly complex operations with surface normals and collision detection.
