I am experimenting with LWJGL2 and I want to be able to tell whether the camera can see a certain point in 3D space. I tried to work it out on my own and ended up with something that kind of works, but only for rotation on the Y axis.
This code works, but not for both axes, and I am not sure it is the correct approach either.
public boolean isInFrame(float x, float y, float z){ //the z isn't used
    float camera = rotation.y; //the camera's y rotation
    double object = Math.atan2(y, x) * (180 / Math.PI);
    object += (180 - camera);
    if (object < 0) object += 360;
    if (object > 360) object -= 360;
    return object > 90 && object < 270; //set to a 180° field of view for the test
}
For the code, I am assuming the camera is centered around 0,0,0.
I just want to know how I could change this so that it works for both the x and y rotation of the camera, i.e. so it can tell me whether a point is visible regardless of the camera's rotation.
NOTE:
I am not worrying about anything obstructing the view of the point.
Thanks for the help in advance.
If you have the view and projection matrices of the camera (let's call them V, P), you can just apply the transformations to your point and check whether the result lies within the clip volume of the camera.
Say your point is at (x, y, z). Construct a vector p = (x, y, z, 1) and apply the camera transform to it:
q = P * V * p
The view transform V applies the transformation of the world relative to the camera, based on the camera position and orientation. Then, the projection P deforms the camera's view frustum (i.e., the visible space of the camera) into a unit cube, like this:
(Image source: Song Ho Ahn)
In order to read off the coordinate values of the resulting point, we must first de-homogenize it by dividing by its w component:
r = q / q.w
Now, the components r.x, r.y, r.z tell you whether the point lies within the visible range of the camera:
If r.x < -1, the point lies beyond the left border of the screen.
If r.x > 1, the point lies beyond the right border of the screen.
If r.y < -1, the point lies beyond the bottom border of the screen.
If r.y > 1, the point lies beyond the top border of the screen.
If r.z < -1, the point lies beyond the near plane of the camera, i.e., the point is behind the camera or too close for the camera to see.
If r.z > 1, the point lies beyond the far plane of the camera, i.e., the point is too far away for the camera to see.
Otherwise, the point is in the visible range of the camera.
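If it helps, here is a minimal sketch of that test in plain Java. It assumes you already have the combined P * V matrix as a float[16] in OpenGL's column-major order; the parameter name pv and the method itself are placeholders for illustration, not part of your existing code.

//Returns true if the point (x, y, z) lies inside the camera's clip volume.
//pv is the combined projection * view matrix in column-major (OpenGL) order.
public static boolean isInClipVolume(float[] pv, float x, float y, float z) {
    //q = P * V * p, with p = (x, y, z, 1)
    float qx = pv[0] * x + pv[4] * y + pv[8]  * z + pv[12];
    float qy = pv[1] * x + pv[5] * y + pv[9]  * z + pv[13];
    float qz = pv[2] * x + pv[6] * y + pv[10] * z + pv[14];
    float qw = pv[3] * x + pv[7] * y + pv[11] * z + pv[15];
    if (qw <= 0) return false; //at or behind the projection plane, not visible
    //de-homogenize: r = q / q.w
    float rx = qx / qw, ry = qy / qw, rz = qz / qw;
    //visible if all components lie within the unit cube
    return rx >= -1 && rx <= 1 && ry >= -1 && ry <= 1 && rz >= -1 && rz <= 1;
}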
I am trying to shoot an object (a spell) depending on the rotation of the player's arm. The spell is supposed to come out of the hand and shoot toward where the mouse clicked (the arm rotates and points to where the mouse is). This is how the arm rotates in game:
public boolean mouseMoved(int screenX, int screenY) {
tmp.x = screenX;
tmp.y = screenY;
tmp.z = 0;
cam.unproject(tmp);
rot = MathUtils.radiansToDegrees * MathUtils.atan2((float)tmp.y - (float)player.getArmSprite().getY() - player.getArmSprite().getHeight(),
tmp.x -player.getArmSprite().getX() - player.getArmSprite().getWidth());
if (rot < 0) rot += 360;
    //lastRight/lastLeft mean whether he is looking right or left
    if (player.lastRight)
        player.setObjectRotation(rot + 80);
    if (player.lastLeft)
        player.setObjectRotation(-rot - 80);
    return true; //mouseMoved must return a boolean; true marks the event as handled
}
And this is how the spell is supposed to shoot based on that rotation:
//destination is a vector of where on screen the mouse was clicked
if(position.y < destination.y){
position.y += vel.y * Gdx.graphics.getDeltaTime();
}
if(position.x < destination.x){
position.x += vel.x * Gdx.graphics.getDeltaTime();
}
However, this is very wonky and rarely reacts the way it is supposed to. The spell fires from the hand, and once its y coordinate matches the destination it levels out completely and travels horizontally until it reaches the x position. I want it to fly from the hand to the clicked position in a straight line, from point A to point B. This is clearly a rotation problem that I can't seem to figure out how to tackle.
Here is an image of what is happening:
Example image
The red mark indicates where I clicked. As you can see, the spell reached the x position first and is now traveling toward the y position, when it should have reached the x and y of the clicked point at the same time.
Any help with this problem is extremely appreciated!
I'm pretty bad at radians and tangents, but luckily we have vectors.
Since you already have the rotation of the arm in degrees, I advise using vectors for any vector-related math from now on.
//A vector pointing up
Vector2 direction = new Vector2(0, 1);
//Let's rotate that by the rotation of the arm
direction.rotate(rot);
Now direction is the direction the arm is pointing, assuming your rotation is calculated with up = 0 degrees. If your zero angle points somewhere else, you might need to rotate it by 180, 90, or -90 degrees (or whatever offset your setup uses).
Your spell should have a Vector2 for its position too. Set it to the hand, or wherever you want the spell to start from. All that's left is to scale the direction, since it currently has a length of 1. If you want to move 5 units each frame you can call direction.scl(5), and it will have a length of 5. Technically it's no longer just a direction at that point; everybody calls it a velocity, so let's do:
//when you need to fire
float speed = 5;
Vector2 velocity = direction.cpy().scl(speed);
//update
position.add(velocity);
draw(fireballImage, position.x, position.y);
I copied direction first, otherwise the original would also be scaled. Then I just added the velocity to the position and drew using that vector.
And to show how handy vectors are, have a look at this badlogic-vs-mouse program I created: https://github.com/madmenyo/FollowMouse . There are just a few lines of my own code; it takes only a little bit of vector knowledge and it's very readable.
In my 3D application I want to have an object (a tree, for example) and have my camera look at it. Then I want the camera to rotate around the object in a circle while looking at the tree the whole time. Imagine walking around a tree while constantly turning so that you are still facing it. I know this requires both rotation and translation of my camera, but the math is far beyond the level I have been taught in school thus far. Can anyone point me in the right direction?
Here is one way with very simple math. First, you need a constant for the distance of the camera from the center of the tree (the radius of the circular path it travels on). You also need a variable to track its angle around the circle.
static final float CAM_PATH_RADIUS = 5f;
static final float CAM_HEIGHT = 2f;
float camPathAngle = 0;
Now you can change the camPathAngle to anything you want from 0 to 360 degrees. 0 degrees corresponds with the location on the circle that is in the same direction as the world's X-axis from the tree's center.
On each frame, after you've updated camPathAngle, you can do this to update the camera position:
void updateTreeCamera(){
Vector3 camPosition = camera.getPosition();
camPosition.set(CAM_PATH_RADIUS, CAM_HEIGHT, 0); //Move camera to default location on circle centered at origin
camPosition.rotate(Vector3.Y, camPathAngle); //Rotate the position to the angle you want. Rotating this vector about the Y axis is like walking along the circle in a counter-clockwise direction.
camPosition.add(treeCenterPosition); //translate the circle from origin to tree center
camera.up.set(Vector3.Y); //Make sure camera is still upright, in case a previous calculation caused it to roll or pitch
camera.lookAt(treeCenterPosition);
camera.update(); //Register the changes to the camera position and direction
}
I did it like that for the sake of commenting it. It's actually shorter than the above if you chain commands:
void updateTreeCamera(){
camera.getPosition().set(CAM_PATH_RADIUS, CAM_HEIGHT, 0)
.rotate(Vector3.Y, camPathAngle).add(treeCenterPosition);
camera.up.set(Vector3.Y);
camera.lookAt(treeCenterPosition);
camera.update();
}
Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet, as unused tiles are magenta and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet have full opacity. Scaling is achieved by setting the scale to a normalized value obtained through a gamepad trigger between 1f and 2f, so I don't know what actual scale was applied when the shot was taken, with the exception of the max/min.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. It doesn't come across well in screenshots, but the lines don't flicker while the scale is held at that value, so it logically shouldn't be an issue of drawing happening between scale assignments (and the thread locks prevent that anyway).
Scaled to 1x:
Scaled to A, where 1x < Ax < Bx:
Scaled to B, where Ax < Bx < Cx:
Scaled to C, where Bx < Cx < 2x:
Scaled to 2x:
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
.......
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); //this is a Vector2, floats for X and Y, direct assignment.
orthocenter.setY(nh/2);
.......
For the purposes of the screenshot, nw is 512 and nh is 384 (implicitly cast from int). These never change throughout the example above.
General GL drawing code
After stripping out attributes that proved irrelevant (cutting them made no difference to the problem):
@Override
public void draw(float xOffset, float yOffset, float width, float height,
int glTex, float texX, float texY, float texWidth, float texHeight) {
GL11.glLoadIdentity();
GL11.glTranslatef(0.375f, 0.375f, 0f); //This is supposed to fix subpixel issues, but makes no difference here
GL11.glTranslatef(xOffset, yOffset, 0f);
if(glTex != lastTexture){
GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
lastTexture = glTex;
}
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(texX,texY + texHeight);
GL11.glVertex2f(-height/2, -width/2);
GL11.glTexCoord2f(texX + texWidth,texY + texHeight);
GL11.glVertex2f(-height/2, width/2);
GL11.glTexCoord2f(texX + texWidth,texY);
GL11.glVertex2f(height/2, width/2);
GL11.glTexCoord2f(texX,texY);
GL11.glVertex2f(height/2, -width/2);
GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
//Externally there is tilesize, which contains tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale){
int xp, yp; //x and y position of individual tiles
for(int c = 0; c<width; c++){ //c as in column
xp = (int) (c*tilesize.a*scale.getX()); //set distance from chunk x to column x
for(int r = 0; r<height; r++){ //r as in row
if(tiles[r*width+c] <0) continue; //skip empty tiles ('air')
yp = (int) (r*tilesize.b*scale.getY()); //set distance from chunk y to column y
tileset.getFrame(tiles[r*width+c]).draw( //pull 'tile' frame from set, render.
engine, //drawing context
new Vector2(offset.getX() + xp, offset.getY() + yp), //location of tile
scale //scale of tiles
);
}
}
}
Between the tiles and the platform-specific code, the vectors' components are retrieved and passed along to the general drawing code shown earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, the origin, it has an offset of 0. OpenGL is then instructed to draw a quad with the left edge at -halfwidth, the right edge at +halfwidth, the top edge at -halfheight, and the bottom edge at +halfheight. It is then told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width and draws its left edge at -halfwidth, which should coordinate-wise be exactly the same as tile 1's right edge. By itself, this should work, and it does. Once a scale factor is involved, it somehow breaks.
When a scale is applied, it is a constant multiple across all width/height values, and mathematically shouldn't make anything change. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, i.e. filling to the left of a vertex doesn't cover the vertex's containing pixel, and filling to the right of that same vertex doesn't cover it either.
I'm running into float accuracy problems, where somehow X+width/2 does not equal X+width - width/2 where width = tilewidth*scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than to simply avoid non-integer scale values, which I'd like to be able to support. The only clue I think might point to a solution is that the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, or has only vertical or only horizontal gaps, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
The only good way to avoid this is to actually use the same values for coordinates that must be the same, instead of relying on calculations that are only mathematically equivalent.
In the case of coordinates on a grid, you can calculate the coordinate of each grid line (tile boundary) once, and then use those values for all draw operations. Say you have n tiles in the x-direction; you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
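As a rough sketch of that idea applied to your column loop, reusing tilesize.a and scale.getX() from your code (drawColumn is just a placeholder for your own draw call):

//Precompute the scaled x-coordinate of every vertical tile boundary once per frame.
float tileWidth = tilesize.a * scale.getX();
float[] xs = new float[width + 1];
for (int i = 0; i <= width; i++) {
    xs[i] = offset.getX() + i * tileWidth;
}
//When drawing column i, reuse the shared boundary values so that neighboring
//tiles get bit-identical edge coordinates instead of re-deriving them.
for (int i = 0; i < width; i++) {
    float left  = xs[i];
    float right = xs[i + 1];
    drawColumn(i, left, right); //placeholder: draw the tiles of column i between these edges
}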
I'm making pretty simple game. You have a sprite onscreen with a gun, and he shoots a bullet in the direction the mouse is pointing. The method I'm using to do this is to find the X to Y ratio based on 2 points (the center of the sprite, and the mouse position). The X to Y ratio is essentially "for every time the X changes by 1, the Y changes by __".
This is my method so far:
public static Vector2f getSimplifiedSlope(Vector2f v1, Vector2f v2) {
float x = v2.x - v1.x;
float y = v2.y - v1.y;
// find the reciprocal of X
float invert = 1.0f / x;
x *= invert;
y *= invert;
return new Vector2f(x, y);
}
This Vector2f is then passed to the bullet, which moves that amount each frame.
But it isn't working. When my mouse is directly above or below the sprite, the bullets move very fast. When the mouse is to the right of the sprite, they move very slowly. And if the mouse is on the left side, the bullets still shoot out to the right.
When I remove the invert variable from the mix, it seems to work fine. So here are my 2 questions:
Am I way off-track, and there's a simpler, cleaner, more widely used, etc. way to do this?
If I'm on the right track, how do I "normalize" the vector so that it stays the same regardless of how far away the mouse is from the sprite?
Thanks in advance.
Use vectors to your advantage. I don't know if Java's Vector2f class has this method, but here's how I'd do it:
return (v2 - v1).normalize(); // `v1` is the sprite's position and `v2` is the mouse position
To normalize a vector, just divide it (i.e. each component) by the magnitude of the entire vector:
Vector2f result = new Vector2f(v2.x - v1.x, v2.y - v1.y);
float length = (float) Math.sqrt(result.x * result.x + result.y * result.y);
return new Vector2f(result.x / length, result.y / length);
The result is a unit vector (its magnitude is 1). So to adjust the speed, just scale the vector.
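For example, a tiny sketch of using that unit vector as a per-frame velocity (dir is the normalized result from above; speed and bulletPos are assumed names, not from the original code):

float speed = 4f;              //units per frame, an assumed value
bulletPos.x += dir.x * speed;  //scaling the unit vector by speed sets how far it moves each frame
bulletPos.y += dir.y * speed;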
Yes for both questions:
to find what you call the ratio, you can use the arctangent function, which gives you the angle of the vector going from the first object to the second object
to normalize it: since you are now starting from an angle, you don't need to do anything extra; you can use polar coordinates directly
The code is rather simple:
float magnitude = 3.0f; //your max speed
float angle = (float) Math.atan2(y, x);
//with angle from atan2(y, x), the x component uses cos and the y component uses sin
Vector2f vector = new Vector2f(magnitude * (float) Math.cos(angle), magnitude * (float) Math.sin(angle));
I'm working on an Android project. I want to move an object along a projectile path, but I have no idea how to do that.
I have the initial X and Y, i.e. the bottom-left corner of the phone in landscape mode. I also fetch the X and Y where the user touches the screen, so I can calculate the angle with atan(y/x), but how do I calculate the curved path, i.e. the X and Y of the object over time?
Any help will be appreciated.
Thanks
You have an initial point p1 = (X, Y) from which you throw your projectile, and a point where the user touched the screen, say p2. Find the direction vector, dir = p2 - p1, and normalize it. Then do the following:
You have an initial velocity, v = speed * dir, where speed is a scalar factor.
Then, on every game tick, apply gravity to the velocity and add the velocity to your current position: v = v + (0, -10) * dt and position = position + v * dt, where (0, -10) is the gravity factor and dt is the time between game frames (see the sketch below).
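Here is a minimal sketch of that update loop in plain Java; the class and field names are made up for illustration:

class Projectile {
    static final float GRAVITY = -10f; //downward acceleration, assumed units
    float posX, posY;                  //current position
    float velX, velY;                  //current velocity

    //Fire from (startX, startY) toward the normalized direction (dirX, dirY).
    void launch(float startX, float startY, float speed, float dirX, float dirY) {
        posX = startX;
        posY = startY;
        velX = speed * dirX;
        velY = speed * dirY;
    }

    //Call on every game tick; dt is the time between frames in seconds.
    void update(float dt) {
        velY += GRAVITY * dt; //apply gravity to the velocity
        posX += velX * dt;    //integrate the velocity into the position
        posY += velY * dt;
    }
}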
You can eliminate having to increment by time intervals by using the parametric form of the projectile equations.
All you'd need to do is determine how far across (left to right) the screen you want to travel. I'll call that the X direction. Then, for each position in the X direction (could be a pixel, could be some number of pixels), you calculate the corresponding position in the Y (down to up) direction.
You'll need to set a value for the downward acceleration due to gravity; whatever value you choose, I'll call it g. You'll also need to set a value for how fast the projectile begins its motion; whatever value you choose, I'll call it V.
The parametric equation is then:
Y = X * tan(theta) - (g * X^2) / (2 * V^2 * cos(theta)^2)
So, once you have the user touch point, you can calculate the angle, theta, determine V, g, and the maximum value for X, then just iterate from 0 to max X and you'll get a point (X,Y) for each iteration.
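For instance, a small sketch of that iteration in Java (theta is in radians; v, g, maxX, and stepX are the values you chose; printing stands in for actually drawing the object):

//Trace the path Y = X*tan(theta) - g*X^2 / (2*V^2*cos(theta)^2)
static void tracePath(double theta, double v, double g, double maxX, double stepX) {
    double tan = Math.tan(theta);
    double cos = Math.cos(theta);
    for (double x = 0; x <= maxX; x += stepX) {
        double y = x * tan - (g * x * x) / (2 * v * v * cos * cos);
        System.out.printf("(%.1f, %.1f)%n", x, y); //move/draw the object at (x, y) here
    }
}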