I'm trying to write a game in Java (Processing, actually) with some cool 3D effects.
I have a choice of two 3D renderers, but neither has the quality or flexibility of the default renderer. I was thinking that if I could get a function to project a 3D point onto the 2D screen, I could draw the scene with the default renderer.
So say I have a set of coordinates (x, y, z) floating in 3D space. How would I get where on the 2D screen that point should be drawn (perspective)?
Just to clarify, I need only the bare minimum (not taking the position of the camera into account; I can get that effect just by offsetting the points) - I'm not re-writing OpenGL.
And yes, I see that there are many other questions on this - but none of them seem to really have a definitive answer.
Here is your bare minimum.
column = X*focal/Z + width/2
row = -Y*focal/Z + height/2
The notation is:
X, Y, Z are 3D coordinates in distance units such as mm or meters;
focal is the focal length of the camera in pixels (focal = 500 is a reasonable choice for VGA resolution, since it generates a field of view of about 60 degrees; if you have a larger image, scale your focal length proportionally); note that physically focal ≈ 1 cm ≪ Z, which simplifies the formulas presented in the previous answer;
height and width are the dimensions of the image or sensor in pixels;
row, column are the image pixel coordinates (note: row starts at the top and goes down, while Y goes up). This is a standard pair of coordinate systems, centered on the camera center (for X, Y, Z) and on the upper-left image corner (for row, column).
You don't need to use OpenGL, since these formulas are easy to implement. But there will be side effects: if your object has several surfaces, they won't display correctly, because you have no way to simulate occlusion. To fix this you can add a depth buffer, which is a simple 2D array of float Z values that keeps track of which pixels are closer and which are farther; if there is an attempt to write more than once to the same projected location, the closer pixel always wins.
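For concreteness, here is a minimal sketch of these formulas plus a depth buffer in Processing-style Java. The names focal, zbuf, and plot are illustrative, not from any library:

float focal = 500;                 // focal length in pixels
int w = 640, h = 480;              // image size in pixels
float[][] zbuf = new float[h][w];  // depth buffer: closest Z seen per pixel

// Project a point given in camera coordinates (assumes Z > 0) to pixel
// coordinates and draw it only if it is closer than what is already there.
void plot(float X, float Y, float Z) {
  int column = (int) ( X * focal / Z + w / 2.0f);
  int row    = (int) (-Y * focal / Z + h / 2.0f);
  if (column < 0 || column >= w || row < 0 || row >= h) return;  // off screen
  if (zbuf[row][column] == 0 || Z < zbuf[row][column]) {         // closer wins
    zbuf[row][column] = Z;
    point(column, row);  // Processing's point() draws a single point
  }
}

Good luck.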
Look into the pinhole camera model:
http://en.wikipedia.org/wiki/Pinhole_camera_model
ProjectedX = WorldX * D / ( D + worldZ )
ProjectedY = WorldY * D / ( D + worldZ )
where D is the distance between the projection plane and the eye.
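As a quick sketch, the same projection in Java (the names are made up for illustration):

// Pinhole projection: D is the eye-to-plane distance, in the same
// units as the world coordinates.
float project(float worldCoord, float worldZ, float D) {
    return worldCoord * D / (D + worldZ);
}

// projectedX = project(worldX, worldZ, D);
// projectedY = project(worldY, worldZ, D);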
I am making a 3D game in which the player can rotate their viewpoint with the mouse to look around the environment. At first I mapped vertical and horizontal mouse movement to x and y rotation, with z rotation on a separate control. But after playing the game I realised it did not rotate correctly. NOTE: I have a global 3x1 matrix which represents the player's angles. At (0, 0, 0) it seems to work correctly, since up or down is a direct x-axis rotation and left or right is a direct y-axis rotation; but if I move my camera diagonally, for example, then left no longer corresponds directly to a y-axis rotation.
Visually, on a unit circle, the player's viewpoint would no longer travel the full circumference; it would travel a circle smaller than that. This is the current code (xRateOfRot etc. give the ratio of how far the cursor is from the centre in each direction, between -1 and 1):
private static void changeRotation() {
    angle.set(Matrix.add(angle.matrix, new double[][] {
        {ROTATION_SPEED * camera.xRateOfRot()},
        {ROTATION_SPEED * camera.yRateOfRot()},
        {ROTATION_SPEED * camera.zRateOfRot()}
    }));
}
I have looked at this source http://paulbourke.net/geometry/rotate/ and understand how to rotate about an arbitrary axis, which I could implement. But I am not sure how to turn that into ratios for the x, y and z change needed to look in a specific direction. For example, at (0, 0, 0) the ratio for looking up would be x:1, y:0, z:0, but at another angle the ratios would be different, since looking up no longer means a pure x rotation. Any information would be appreciated, thanks!
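For reference, here is a minimal Java sketch of rotation about an arbitrary unit axis (Rodrigues' formula, which is what the Paul Bourke page describes); the array-based vectors and names are illustrative, not from the question's codebase:

// Rotate v about the unit axis k by angle theta (radians):
// v' = v cos(theta) + (k x v) sin(theta) + k (k . v) (1 - cos(theta))
static double[] rotateAboutAxis(double[] v, double[] k, double theta) {
    double c = Math.cos(theta), s = Math.sin(theta);
    double dot = k[0]*v[0] + k[1]*v[1] + k[2]*v[2];
    double[] cross = {k[1]*v[2] - k[2]*v[1],
                      k[2]*v[0] - k[0]*v[2],
                      k[0]*v[1] - k[1]*v[0]};
    return new double[] {v[0]*c + cross[0]*s + k[0]*dot*(1 - c),
                         v[1]*c + cross[1]*s + k[1]*dot*(1 - c),
                         v[2]*c + cross[2]*s + k[2]*dot*(1 - c)};
}

Rotating the view direction about the camera's current right vector (for up/down) and the world up vector (for left/right) sidesteps the changing x/y/z ratios entirely.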
I need some help with a physics simulation I'm writing in Java. The simulation is of a body in free fall. I'm using plain Java without any third-party library.
I have an applet (1400 px wide, 700 px high) and an oval sprite falling down. Gravity is set to 10 m/s². I apply Newton's second law to the oval sprite, and I use the RK4 algorithm to compute its x and y coordinates over time.
This all works fine... except that I don't know how to scale the dimensions used in the simulation.
For example, I would like 1 px to represent 1 cm (both width and height), so that my 1400 px × 700 px applet represents 14 m × 7 m in the real world. I used the
Graphics2D.scale()
method, but it doesn't seem to work. I also thought about changing the gravity, but that doesn't seem appropriate to me...
Could someone tell me a proper way to scale my dimensions?
You have a 1400 x 700 pixel applet drawing area.
You have a 14 x 7 meter physics area.
In order to scale from meters to pixels, you have to use a scaling factor.
1400 pixels / 14 meters = 100 pixels per meter.
700 pixels / 7 meters = 100 pixels per meter.
So far so good. If you had two different scaling factors, your drawing area would be distorted.
Let's assume that the oval starts at (0, 0), and that the first calculated position is
x = 2.45
y = 3.83
So, using the scaling factor we came up with:
pixel x = 2.45 meters x 100 pixels per meter = 245 pixels.
pixel y = 3.83 meters x 100 pixels per meter = 383 pixels.
Our physics area has increasing x to the right and increasing y down. Fortunately, our drawing area has increasing x to the right and increasing y down.
So, we don't have to worry about changing signs.
Draw the oval at (245, 383).
Calculate next x, y position and repeat.
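In Java, that conversion is one multiplication. A minimal sketch (PIXELS_PER_METER and toPixels are illustrative names):

static final double PIXELS_PER_METER = 100.0;  // 1400 px / 14 m

static int toPixels(double meters) {
    return (int) Math.round(meters * PIXELS_PER_METER);
}

// In your paint code, convert just before drawing, e.g.:
// g.fillOval(toPixels(x), toPixels(y), ovalWidthPx, ovalHeightPx);

Keeping the physics in meters and converting only at draw time is usually cleaner than scaling gravity or the Graphics2D transform.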
I'm creating a 3D renderer in Java which can currently render the wireframe of a cube using points and lines, and rotate the cube. My question is: what should Z be, and what should be set to Z? I'm guessing that the size of the cube should be set to Z?
Thanks for your time! Any answers would be much appreciated.
Z usually means the out-of-plane direction if the current viewport lies in the x-y plane.
Your 3D world has its own coordinate system. You'll transform from 3D world coordinates to viewport coordinates when you render.
I think you might be missing some basic school math/geometry here. However, it's actually not that hard to understand.
Imagine a flat plane, e.g. a sheet of paper.
The first coordinate axis will go straight from left to right and we'll call it X. So X = 0 means your point is on the left border. X = 10 might mean your point is on the right border (it really depends on how big you define a unit of 1; it could be centimeters, inches, etc.). This is already enough to describe a point in one dimension (from left to right).
Now, we need a second axis. Let's call it Y. It's running from the top border (Y = 0) to the bottom (Y = 10). Now you're able to describe any point on the plane as you've got two positions. For example, (0, 0) would be the top left corner. (10, 10) would be the bottom right corner. (5, 0) would be the center point of the top border, etc.
What happens if we add yet another dimension? Call it Z. This will essentially be the height of your point above the sheet. For example, Z = 0 could mean your point is sitting on the sheet of paper, while Z = 10 means your point is sitting 10 cm above it. Now you use three coordinates to describe a point: (5, 5, 0) is the center of the paper, and (5, 5, 5) is the center of a cube that sits on your paper, covers it, and is 10 cm high.
When programming in 3D, you can use the same terminology. The only real difference is that you use a so-called projection/view matrix to determine how to display these 3D positions on screen. The simplest transform could be the following matrix:
1 0 0
0 1 0
Multiplying this by your 3D coordinates, you get the following two terms:
2d-x = 3d-x
2d-y = 3d-y
This results in you viewing the cube (or whatever you're trying to display) from straight above, essentially ignoring the Z axis (you can't render something sticking out of your display, unless you're using some kind of 3D glasses or similar technology).
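As a small sketch, that matrix multiplication in Java (names are illustrative):

// Multiply a 2x3 projection matrix by a 3D point to get a 2D point.
static float[] project(float[][] m, float x, float y, float z) {
    return new float[] {
        m[0][0]*x + m[0][1]*y + m[0][2]*z,  // 2d-x
        m[1][0]*x + m[1][1]*y + m[1][2]*z   // 2d-y
    };
}

// With m = {{1, 0, 0}, {0, 1, 0}}, this yields 2d-x = 3d-x and
// 2d-y = 3d-y, i.e. the Z coordinate is simply dropped.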
Overall, it's up to you how you use the coordinates and interpret them. Usually x and y refer to the plane (position on the ground or position inside a 2D world), while z might be the height or the depth (front or back). It really depends on the specific case, but in general Z is just another dimension like X and Y.
3D means three "dimensions". One dimension is "X", another "Y", the third "Z". None has a specific direction, though it's convenient to assign directions by convention, for example "Forward", "Left", and "Up".
Something whose X, Y, and Z values are all equal to 0 resides at the origin, or center, of the space. You can write this as (0,0,0), where the order of the parameters is (x,y,z).
A point or vertex at the location (1,0,0) is one unit in the X direction from the origin. So if you moved from (0,0,0) to (1,0,0), you would be moving purely in the X direction.
(0,1,0) is one unit in the Y direction away from the origin.
(0,0,1) is one unit in the Z direction away from the origin.
(1,1,0) is one unit in the X direction and one unit in the Y direction. So if X means "Forward", and Y means "Left", then (1,1,0) is forward-and-left of the origin.
So a basic cube can be defined by the following vertices:
(1,1,-1)
(1,-1,-1)
(-1,1,-1)
(-1,-1,-1)
(1,1,1)
(1,-1,1)
(-1,1,1)
(-1,-1,1)
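As a sketch, those vertices as a Java array (illustrative, not from any particular library):

// The eight corners of a cube with side length 2, centered on the origin,
// listed as (x, y, z) triples in the same order as above.
static final float[][] CUBE_VERTICES = {
    { 1,  1, -1}, { 1, -1, -1}, {-1,  1, -1}, {-1, -1, -1},  // back face
    { 1,  1,  1}, { 1, -1,  1}, {-1,  1,  1}, {-1, -1,  1}   // front face
};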
Alright, so I have two angles: the joystick's angle, and the camera-to-player angle. Now I want it so that when I press up on the joystick, the player moves away from the camera. How would I do this? And is there an easy way to do it in Java or Ardor3D?
edit: Here is the code of how I get my angles.
float camDegree = (float) Math.toDegrees(Math.atan2(
        _canvas.getCanvasRenderer().getCamera().getLocation().getXf()
                - colladaNode.getTranslation().getXf(),
        _canvas.getCanvasRenderer().getCamera().getLocation().getYf()
                - colladaNode.getTranslation().getYf()));
player.angle = (float) Math.toDegrees(Math.atan2(padX, padY));
Quaternion camQ = new Quaternion().fromAngleAxis(camDegree, Vector3.UNIT_Y);
I have to say that I don't really understand your question, but it seems to be about how to implement camera-relative control using a joystick.
The most important piece of advice I can give you is that it's better not to compute angles, but to work directly with vectors.
Suppose that the camera is looking in the direction v. (In some types of game this vector points directly at the player, but not in all types, and not always.)
Typically you don't care about the vertical component of this vector, so remove it to get the horizontal component, which I'll call y for reasons that will become apparent later:
y = v − (v · up) up
where up is a unit vector pointing vertically upwards.
We can find the horizontal vector that's perpendicular to y using the cross product (and remembering the right hand rule):
x = v × up
Now you can see that y is a vector in the plane pointing forwards (away from the camera), and x is a vector in the plane pointing right (sideways with respect to the camera). If you normalise these vectors:
x̂ = x / |x|
ŷ = y / |y|
then you can use x̂ and ŷ as the coordinate basis for camera-relative motion of the player. If your joystick readings are Jx and Jy, then move the player by
s (Jx x̂ + Jy ŷ)
where s is an appropriate scalar value proportional to the player's speed.
(Notice that no angles were computed at any point in this answer!)
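A minimal Java sketch of the whole thing, with the vector operations spelled out inline (array-based vectors; all names are illustrative, and Ardor3D's own math types would work just as well):

// Camera-relative movement: v is the camera's look direction, up is the
// world up unit vector, jx/jy are the joystick readings, s is the speed.
// (Assumes v is not parallel to up.)
static double[] moveDelta(double[] v, double[] up, double jx, double jy, double s) {
    // y = v - (v . up) up   -- horizontal forward component
    double d = v[0]*up[0] + v[1]*up[1] + v[2]*up[2];
    double[] y = {v[0] - d*up[0], v[1] - d*up[1], v[2] - d*up[2]};
    // x = v × up            -- horizontal right component
    double[] x = {v[1]*up[2] - v[2]*up[1],
                  v[2]*up[0] - v[0]*up[2],
                  v[0]*up[1] - v[1]*up[0]};
    // normalise both, then return s (Jx x̂ + Jy ŷ)
    double lx = Math.sqrt(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
    double ly = Math.sqrt(y[0]*y[0] + y[1]*y[1] + y[2]*y[2]);
    return new double[] {s * (jx*x[0]/lx + jy*y[0]/ly),
                         s * (jx*x[1]/lx + jy*y[1]/ly),
                         s * (jx*x[2]/lx + jy*y[2]/ly)};
}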
I'm writing a ray tracer (using left-handed coordinates, if that makes a difference). It's for the sake of teaching myself the principles, so I'm not using OpenGL or complex features like depth of field (yet). My camera can have an arbitrary position and orientation; I indicate them by way of three vectors, location, look_at, and sky, which behave like the equivalent POV-Ray vectors. Its "film" also has a width and height. (The focal length is implied by the distance from position to look_at.)
My problem is that I don't know how to cast the rays. I have two quantities, vx and vy, that indicate where the ray should end up. They both vary from -1 to 1. If they're both -1, I'm casting the ray from the camera's position to the top-left corner of the "film"; if they're both 1, the bottom-right; if they're both 0, the center; and the rest is apparent.
I'm not familiar enough with vector arithmetic to derive an equation for the ray. I would appreciate an explanation of how to do so.
You've described what needs to be done quite well already. Your field of view is determined by the distance between your camera and your "film" that you're going to cast your rays through. The further away the camera is from the film, the narrower your field of view is.
Imagine the film as a bitmap image that the camera is pointing at. Say we position the camera one unit away from the bitmap. We then have to cast a ray through each of the bitmap's pixels.
The vector is extremely simple. If we put the camera location at (0,0,0), and the bitmap film right in front of it with its center at (0,0,1), then the ray to the bottom right is - tada - (1,1,1), and the one to the bottom left is (-1,1,1).
That means that the difference between the bottom right and the bottom left is (2,0,0).
Assume that your horizontal bitmap resolution should be 1000, then you can iterate through the bottom line pixels as follows:
width = 1000;
cameraToBottomLeft = (-1, 1, 1);
bottomLeftToBottomRight = (2, 0, 0);

for (x = 0; x < width; x++) {
    // note: divide in floating point, or x/width will truncate to zero
    ray = cameraToBottomLeft + (x / (float) width) * bottomLeftToBottomRight;
    ...
}
If that's clear, then you just add an equivalent outer loop for your lines, and you have all the rays that you will need.
You can then add appropriate variables for the distance of the camera to the film and horizontal and vertical resolution. When that's done, you could start changing your look vector and your up vector with matrix transformations.
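For concreteness, a small Java sketch of the per-pixel ray directions under the assumptions above (camera at the origin looking along +Z, film one unit away; all names are illustrative):

// Direction of the primary ray through pixel (px, py) of a width x height
// film whose corners run from (-1,-1,1) at the top left to (1,1,1) at the
// bottom right. Normalize the result before intersecting.
static float[] rayDirection(int px, int py, int width, int height) {
    float u = -1 + 2 * (px / (float) width);   // -1 (left) .. 1 (right)
    float v = -1 + 2 * (py / (float) height);  // -1 (top)  .. 1 (bottom)
    return new float[] {u, v, 1};
}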
If you want to wrap your head around computer graphics, an introductory textbook could be of great help. I used this one in college, and I think I liked it.