I'm creating a 3D renderer in Java which can currently render the wireframe of a cube using points and lines, and rotate the cube. My question is: what should Z be? And what should be set to Z? I'm guessing that the size of the cube should be set to Z?
Thanks for your time! Any answers would be much appreciated.
Z usually means the out-of-plane direction if the current viewport lies in the x-y plane.
Your 3D world has its own coordinate system. You'll transform from 3D world coordinates to viewport coordinates when you render.
I think you might be missing some basic school math/geometry here. However, it's actually not that hard to understand.
Imagine a flat plane, e.g. a sheet of paper.
The first coordinate axis will go straight from left to right and we'll call it X. So X = 0 means your point is on the left border. X = 10 might mean your point is on the right border (really depends on how big you define a unit of 1; this could be in centimeters, inches, etc.). This is already enough to describe some point in one dimension (from left to right).
Now, we need a second axis. Let's call it Y. It's running from the top border (Y = 0) to the bottom (Y = 10). Now you're able to describe any point on the plane as you've got two positions. For example, (0, 0) would be the top left corner. (10, 10) would be the bottom right corner. (5, 0) would be the center point of the top border, etc.
What happens if we add yet another dimension? Call it Z. This will essentially be the height of your point above the sheet. For example, Z = 0 could mean your point is sitting on the sheet of paper, while Z = 10 means your point is sitting 10 cm above the paper. Now you use three coordinates to describe a point: (5, 5, 0) is the center of the paper, and (5, 5, 5) is the center of a 10 cm cube that sits on the paper and covers it.
When programming in 3D, you can use the same terminology. The only real difference is that you use a so-called projection/view matrix to determine how these 3D positions are displayed on screen. The simplest transform could be the following matrix:
1 0 0
0 1 0
Multiplying this with your 3D coordinates, you'll get the following two terms:
2d-x = 3d-x
2d-y = 3d-y
This results in you viewing the cube (or whatever you're trying to display) from straight above, essentially ignoring the Z axis again (you can't render something sticking out of your display, unless you use some kind of 3D glasses or similar technology).
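As a rough Java sketch of that projection (the method name, scale and screen-centre values are illustrative, not from any particular library):

// Minimal sketch of the "ignore Z" projection described above: a top-down
// orthographic view. scale, centerX and centerY are illustrative values that
// map world units to pixels and centre the drawing on screen.
static double[] projectTopDown(double x, double y, double z,
                               double scale, double centerX, double centerY) {
    // 2d-x = 3d-x and 2d-y = 3d-y: z is simply dropped,
    // then the point is scaled and shifted into screen space.
    return new double[] { centerX + x * scale, centerY + y * scale };
}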
Overall, it's up to you how you use the coordinates and interpret them. Usually X and Y refer to the plane (position on the ground, or position inside a 2D world), while Z might be the height or the depth (front or back). It really depends on the specific case, but in general it's just another dimension like X and Y.
3D means 3 "Dimensions". One dimension is "X", another "Y", the third "Z". None has a specific direction, though it's convenient to conventionally assign a direction, for example "Forward", "Left", and "Up".
Something whose X, Y, and Z values are all equal to 0 resides at the origin, or center of the space. You can write this as (0,0,0), where the order of the parameters is (x,y,z).
A point or vertex at the location (1,0,0) is one unit in the X direction from the origin. So if you moved from (0,0,0) to (1,0,0), you would be moving purely in the X direction.
(0,1,0) is one unit in the Y direction away from the origin.
(0,0,1) is one unit in the Z direction away from the origin.
(1,1,0) is one unit in the X direction and one unit in the Y direction. So if X means "Forward", and Y means "Left", then (1,1,0) is forward-and-left of the origin.
So a basic cube can be defined by the following vertices (a small Java sketch of storing them follows the list):
(1,1,-1)
(1,-1,-1)
(-1,1,-1)
(-1,-1,-1)
(1,1,1)
(1,-1,1)
(-1,1,1)
(-1,-1,1)
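To connect this back to the wireframe-cube question: in Java, those eight corners plus the twelve edges you draw lines between could be stored like this (CUBE_VERTICES and CUBE_EDGES are just illustrative names):

// The eight corners listed above (a cube of side 2 centred on the origin),
// stored as {x, y, z} triples.
static final float[][] CUBE_VERTICES = {
    { 1,  1, -1}, { 1, -1, -1}, {-1,  1, -1}, {-1, -1, -1},
    { 1,  1,  1}, { 1, -1,  1}, {-1,  1,  1}, {-1, -1,  1}
};

// Each entry is a pair of indices into CUBE_VERTICES; drawing a line for every
// pair gives the twelve edges of the wireframe cube.
static final int[][] CUBE_EDGES = {
    {0, 1}, {0, 2}, {1, 3}, {2, 3},   // face at z = -1
    {4, 5}, {4, 6}, {5, 7}, {6, 7},   // face at z = +1
    {0, 4}, {1, 5}, {2, 6}, {3, 7}    // edges connecting the two faces
};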
How do I invert the Y axis? When I touch near the bottom or top of the screen, the Y value is the opposite of what I want.
You can't invert an axis per se. In computer graphics the 2D coordinate system is a bit different from the canonical one taught in school maths: the Y axis runs in the opposite direction, so values are positive going down from the origin and negative going up, and the origin sits at the top-left corner of the screen.
If you can't get used to that, you can always take the opposite of the value you get: assuming ycoord holds the value obtained, ycoord = -ycoord gives you the value the way you're used to. And if you want the origin to be at the bottom-left corner, check your Y coordinate: if it's positive, subtract the vertical resolution from it, and if it's negative, add the vertical resolution to it.
But keep in mind that you're going against the standard definition of coordinate systems in computer graphics.
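As a concrete sketch of the usual conversion (assuming screenHeight is your viewport height in pixels; the method name is made up):

// Convert a touch/screen Y (origin at the top-left, Y growing downward) into a
// Y measured from the bottom-left, growing upward. screenHeight is assumed to
// be the viewport height in pixels.
static float flipY(float screenY, float screenHeight) {
    return screenHeight - screenY;
}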
I would say this is a duplicate question of this one:
Move a shape to place where my finger is touched
Check my answer there, so I won't repeat myself.
Or in short: use the camera.unproject() method to get world coordinates from screen coordinates.
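For reference, a minimal sketch of what that looks like in libGDX (assuming camera is the OrthographicCamera you already render with):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.math.Vector3;

// Sketch: inside your input handler, turn the touch position (origin top-left,
// Y growing downward) into world coordinates (Y growing upward).
Vector3 touch = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
camera.unproject(touch);   // modifies 'touch' in place; it now holds world coordinates
float worldX = touch.x;
float worldY = touch.y;    // grows upward in world space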
I'm writing an OpenGL application in which a rectangle is placed on a 3D object. The 3D object can move around and rotate, and the rectangle follows these movements.
What I would like to do is to point with the mouse towards the rectangle so that a dot would appear in that point on the rectangle, and I want the point to follow it as the 3D object that "holds" the rectangle moves around.
I know how to find the intersection on a plane, and I know how to find the world coordinates of the contact point. What I need is a way to convert the world coordinates to the 2D local coordinate system of the rectangle.
For example, suppose I have the plane positioned like this in the 3D world, with a given orientation (sorry, I can't really draw properly):
The black point in the center is the origin of the plane, while the blue point is the one I would like to find. The numbers near the points are their world coordinates; in this example the Z axis "comes out" of the screen.
I would like to map the coordinates of the blue point in the local coordinate system of the plane, like this:
I know that somehow this shouldn't be hard, but I can't find a way at all. Any hints would be appreciated!
You probably already use a transformation matrix to render the rectangle. If you have the 3D position of the point, just transform it with the inverse transformation matrix and you get a 3D position in the rectangle's space. If the local system is aligned with the rectangle, one of the coordinates should always be zero and you can drop it.
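For instance, with libGDX-style math classes (used here purely as an illustration; any matrix library with an inverse works the same way):

import com.badlogic.gdx.math.Matrix4;
import com.badlogic.gdx.math.Vector3;

// rectTransform is assumed to be the model matrix you already use to render the
// rectangle; contactPoint is the intersection point in world coordinates.
Matrix4 worldToLocal = new Matrix4(rectTransform).inv();        // inverse transform
Vector3 local = new Vector3(contactPoint).mul(worldToLocal);    // world -> local
// If the rectangle lies in its local XY plane, local.z should be ~0 and
// (local.x, local.y) are the 2D coordinates on the rectangle.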
I found what I needed in this post here.
Basically, the solution for me is to project the point onto the x and y axes of the local coordinate system of the rectangle.
In pseudo code it's like this:
// I compute the direction that goes from the origin of the rectangle along its x axis, in world coordinates, and normalize it; same thing for the y axis
var xDir = Norm(mRectangle.LocalToWorld([1, 0, 0]) - mRectangle.LocalToWorld([0, 0, 0]));
var yDir = Norm(mRectangle.LocalToWorld([0, 1, 0]) - mRectangle.LocalToWorld([0, 0, 0]));
// I compute the dot product
mLocalPosition.x = Dot(xDir, (contactPoint - mRectangle.LocalToWorld([0, 0, 0])));
mLocalPosition.y = Dot(yDir, (contactPoint - mRectangle.LocalToWorld([0, 0, 0])));
// I can now set the position of the point by converting the local coordinates found back to world coordinates
mPoint.SetPosition(mRectangle.LocalToWorld([mLocalPosition.x, mLocalPosition.y, 0]));
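Here is roughly the same idea as self-contained Java, using a tiny vector helper instead of a real math library (all names are made up for the sketch):

// Sketch of the "project onto the rectangle's local axes" idea above.
final class Vec3 {
    final double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    Vec3 sub(Vec3 o)   { return new Vec3(x - o.x, y - o.y, z - o.z); }
    double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
    Vec3 normalized() {
        double len = Math.sqrt(dot(this));
        return new Vec3(x / len, y / len, z / len);
    }
}

// origin: the rectangle's origin in world space.
// xAxisPoint / yAxisPoint: world positions of local (1, 0, 0) and (0, 1, 0).
// contact: the intersection point in world space.
static double[] worldToLocal2D(Vec3 origin, Vec3 xAxisPoint, Vec3 yAxisPoint, Vec3 contact) {
    Vec3 xDir = xAxisPoint.sub(origin).normalized();
    Vec3 yDir = yAxisPoint.sub(origin).normalized();
    Vec3 rel  = contact.sub(origin);
    // The dot products are the signed distances along each local axis.
    return new double[] { rel.dot(xDir), rel.dot(yDir) };
}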
I know how to make a horizontal or vertical scroller game (like Mario). In this kind of game the character always stays at the same distance from the user; the character only moves left and right in a horizontal scroller, and up and down in a vertical scroller.
But there are 2D games in which the character can move freely around the scene, for example graphic adventures.
So how can I build that kind of ground so the character can move freely on it with a sense of depth?
An example can see in this video: http://youtu.be/DbZZVbF8fZY?t=4m17s
Thank you.
This is how I would do that:
First imagine that you are looking at your scene from the top, straight down at the ground. Set your coordinate system up like that, so every object in your scene has X and Y coordinates. Do all your object movement, checks (when the character bumps into a wall or something) and calculations in that 2D world.
Now, to draw your world: if you want the simpler option, without any isometric or perspective 3D look, you just draw your background image first, then sort all your objects from far to near and draw them in that order. Divide your Y coordinates to squeeze the movement area a bit, and add some constant to Y to move that area down the screen. If your characters can jump or fly (move along the vertical axis), just offset the drawn Y coordinate by some amount.
But if you want it to look more 3D you'll have to apply some kind of perspective transformation: multiply your X coordinate by Y and some constant (start with a value of 1 for the constant and tune it until it looks right). You could do a similar thing with the Y coordinate too, but I don't think it's needed for adventure games like this.
This is probably hard to understand without an image, but it's actually a very simple transformation.
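A rough Java sketch of that mapping (every constant here is a tuning value you would pick for your own scene, not anything standard):

// Sketch: map a top-down 2D world position (x, y) to a screen position with a
// simple fake-depth effect. All constants are arbitrary tuning values.
static float[] worldToScreen(float worldX, float worldY) {
    float ySquash = 0.5f;       // squeeze the walkable area vertically
    float yOffset = 300f;       // push the walkable area down the screen
    float perspective = 0.001f; // how strongly X spreads out toward the viewer

    float screenY = worldY * ySquash + yOffset;
    // Points "closer" to the viewer (larger worldY) spread further from the centre,
    // which is one way to read "multiply X with Y and some constant".
    float screenX = worldX * (1f + worldY * perspective);
    return new float[] { screenX, screenY };
}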
I'm making a breakout/brick-breaker/arkanoid clone (OpenGL ES/Android) and I've been stuck on my collision detection code for quite some time. As the title suggests: how do I figure out which side of a brick has been hit by the ball?
Since I only need to invert the speed in a certain direction, x or y, when a brick is hit I could think of:
if(speedY < 0) : left, upper or right
else : left, bottom or right
if(speedX < 0) : bottom, right or upper
else : bottom, left or upper
however, this doesn't get me very far in deciding whether the collision was vertical or horizontal, and with that, in which direction I should send the ball next.
I've looked at some code examples on the internet, but those are often very vague, complicated, or off-topic for me.
Well, if you know the position of the Brick and the position of the ball you can do tests on each object to determine the side of the brick.
Assuming the standard Java origin in the top-left:
+----+
( )| |
+----+
If the ball's max X is less than the brick's min X, you know that it has to be on the left side, and vice versa for the right side. You would also test the Y values for top and bottom collisions.
Of course this assumes you've gotten the collision detection working first.
EDIT
This is an excerpt from my collision engine. It's just a small piece as an example, but this is how I test whether the object is to the left of the thing it's colliding with.
else if ((oCenter.getX() < sCenter.getX())
&& ((oCenter.getY() < (sCenter.getY() + sourceHalfHeight))
&& (oCenter.getY() > (sCenter.getY() - sourceHalfHeight))))
return LEFT;
In my example here oCenter is a Point2D and it's the center of the ball. sCenter is a Point2D and it's the center of a rectangular shape. sourceHalfHeight is half the height of the rectangular shape (the object with the center point sCenter).
The Pseudo-code algorithm:
if (the center X of the ball < the center X of the rectangle
AND the center Y of the ball is BETWEEN the max Y and min Y of the rectangle)
then the ball is to the LEFT of the rectangle
end if
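Put together, a self-contained sketch of that test might look like this (the Side enum and all parameter names are invented for the example):

// Sketch: classify which side of a rectangular brick the ball's centre is on.
// (bx, by) is the brick's centre, halfW/halfH its half-extents,
// (ballX, ballY) the ball's centre, with screen Y growing downward.
enum Side { LEFT, RIGHT, TOP, BOTTOM, CORNER }

static Side sideOfBrick(float ballX, float ballY,
                        float bx, float by, float halfW, float halfH) {
    boolean withinY = ballY > by - halfH && ballY < by + halfH;
    boolean withinX = ballX > bx - halfW && ballX < bx + halfW;

    if (ballX < bx && withinY) return Side.LEFT;
    if (ballX > bx && withinY) return Side.RIGHT;
    if (ballY < by && withinX) return Side.TOP;     // above the brick on screen
    if (ballY > by && withinX) return Side.BOTTOM;
    return Side.CORNER; // diagonal cases: resolve however your game prefers
}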
The thing is, you cannot know it from the speed alone, as the ball could hit two different sides at two different times, both times with the same speed in the same direction.
Ex:
First line hits the top, second one hits the side, but both have the same speed and direction
   /
  /       /
 _____   /
|_____| /
You would have to use the ball's position and compare it with each side of the brick.
Alright, so I've got two angles: one is the joystick's angle, and the other is the camera-to-player angle. Now I want it so that when I press up on the joystick the player moves away from the camera. How would I do this? And is there an easy way to do it in Java or Ardor3D?
edit: Here is the code of how I get my angles.
float camDegree = (float) Math.toDegrees(Math.atan2(
        _canvas.getCanvasRenderer().getCamera().getLocation().getXf() - colladaNode.getTranslation().getXf(),
        _canvas.getCanvasRenderer().getCamera().getLocation().getYf() - colladaNode.getTranslation().getYf()));
player.angle = (float) Math.toDegrees(Math.atan2(padX, padY));
Quaternion camQ = new Quaternion().fromAngleAxis(camDegree, Vector3.UNIT_Y);
I have to say that I don't really understand your question, but it seems to be about how to implement camera-relative control using a joystick.
The most important piece of advice I can give you is that it's better not to compute angles, but to work directly with vectors.
Suppose that the camera is looking in the direction v (in some types of game this vector will be pointing directly at the player, but not all types of game, and not always):
Typically you don't care about the vertical component of this vector, so remove it to get the horizontal component, which I'll call y for reasons that will become apparent later:
y = v − (v · up) up
where up is a unit vector pointing vertically upwards.
We can find the horizontal vector that's perpendicular to y using the cross product (and remembering the right hand rule):
x = v × up
Now you can see that y is a vector in the plane pointing forwards (away from the camera), and x is a vector in the plane pointing right (sideways with respect to the camera). If you normalise these vectors:
x̂ = x / |x|
ŷ = y / |y|
then you can use x̂ and ŷ as the coordinate basis for camera-relative motion of the player. If your joystick readings are Jx and Jy, then move the player by
s (Jx x̂ + Jy ŷ)
where s is an appropriate scalar value proportional to the player's speed.
(Notice that no angles were computed at any point in this answer!)
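As a minimal Java sketch of this (plain arrays instead of a math library; nothing here is Ardor3D-specific and all names are illustrative):

// Camera-relative joystick movement using only vectors (no angles).
// v: camera view direction; up: world up, e.g. {0, 1, 0};
// jx, jy: joystick axes; speed: the scalar s from the answer.
static float[] cameraRelativeMove(float[] v, float[] up,
                                  float jx, float jy, float speed) {
    // y = v - (v . up) up  -> the horizontal "forward" part of the view direction
    float d = v[0]*up[0] + v[1]*up[1] + v[2]*up[2];
    float[] y = { v[0] - d*up[0], v[1] - d*up[1], v[2] - d*up[2] };

    // x = v × up  -> the horizontal "right" vector
    float[] x = { v[1]*up[2] - v[2]*up[1],
                  v[2]*up[0] - v[0]*up[2],
                  v[0]*up[1] - v[1]*up[0] };

    // Normalise both to get the camera-relative basis x̂, ŷ
    float xl = (float) Math.sqrt(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
    float yl = (float) Math.sqrt(y[0]*y[0] + y[1]*y[1] + y[2]*y[2]);
    for (int i = 0; i < 3; i++) { x[i] /= xl; y[i] /= yl; }

    // s (Jx x̂ + Jy ŷ): the displacement to move the player by this frame
    return new float[] {
        speed * (jx * x[0] + jy * y[0]),
        speed * (jx * x[1] + jy * y[1]),
        speed * (jx * x[2] + jy * y[2])
    };
}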