I'm writing a ray tracer (using left-handed coordinates, if that makes a difference). It's for the sake of teaching myself the principles, so I'm not using OpenGL or complex features like depth of field (yet). My camera can have an arbitrary position and orientation; I indicate them by way of three vectors, location, look_at, and sky, which behave like the equivalent POV-Ray vectors. Its "film" also has a width and height. (The focal length is implied by the distance from location to look_at.)
My problem is that I don't know how to cast the rays. I have two quantities, vx and vy, that indicate where the ray should end up. They both vary from -1 to 1. If they're both -1, I'm casting the ray from the camera's position to the top-left corner of the "film"; if they're both 1, the bottom-right; if they're both 0, the center; and the rest is apparent.
I'm not familiar enough with vector arithmetic to derive an equation for the ray. I would appreciate an explanation of how to do so.
You've described what needs to be done quite well already. Your field of view is determined by the distance between your camera and the "film" you're going to cast your rays through: the further the camera is from the film, the narrower your field of view.
Imagine the film as a bitmap image that the camera is pointing at. Say we position the camera one unit away from the bitmap. We then have to cast a ray through each of the bitmap's pixels.
The vector is extremely simple. If we put the camera location at (0, 0, 0), and the bitmap film right in front of it with its center at (0, 0, 1), then the ray to the bottom right is - tada - (1, 1, 1), and the one to the bottom left is (-1, 1, 1).
That means that the difference between the bottom right and the bottom left is (2,0,0).
Assume that your horizontal bitmap resolution is 1000; then you can iterate through the bottom row of pixels as follows:
width = 1000;
cameraToBottomLeft = (-1, 1, 1);
bottomLeftToBottomRight = (2, 0, 0);
for (x = 0; x < width; x++) {
    // divide as floating point; integer division would truncate x/width to 0
    ray = cameraToBottomLeft + ((float) x / width) * bottomLeftToBottomRight;
    ...
}
If that's clear, then you just add an equivalent outer loop for your lines, and you have all the rays that you will need.
You can then add appropriate variables for the camera-to-film distance and for the horizontal and vertical resolution. When that's done, you could start changing your look vector and your up vector with matrix transformations.
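For concreteness, here's a sketch of the full double loop in the same pseudocode, with the film one unit away; the name filmDistance and the vx/vy mapping are just illustrations:

width = 1000;
height = 800;
filmDistance = 1.0; // larger distance = narrower field of view
for (y = 0; y < height; y++) {
    for (x = 0; x < width; x++) {
        vx = 2.0 * x / width - 1.0;   // -1 at the left edge, +1 at the right
        vy = 2.0 * y / height - 1.0;  // -1 at the top edge, +1 at the bottom
        // ray from the camera at (0,0,0) through the film point (vx, vy, filmDistance)
        ray = (vx, vy, filmDistance);
        ...
    }
}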
If you want to wrap your head around computer graphics, an introductory textbook could be of great help. I used this one in college, and I think I liked it.
How to invert the Y axis? When I touch the bottom or top of the screen, the Y value is the opposite of what I want
You can't invert an axis per se. In computer graphics, the 2D coordinate system is a bit different from the canonical one taught at school in maths. The difference is that in computer graphics the y-axis points the opposite way: values are positive from the origin toward the bottom and negative from the origin toward the top. Also, the origin is at the top-left corner of the screen. If you can't get used to it, you can always negate the value you get: assuming ycoord holds the value obtained, ycoord = -ycoord will give you the value as you're used to. And if you want the origin to be at the bottom-left corner instead, subtract the y-coordinate from the vertical resolution: ycoord = verticalResolution - ycoord.
But keep in mind that you're going against the standard convention for coordinate systems in computer graphics.
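In code, both options are one line each; a minimal sketch, assuming verticalResolution holds the screen height in pixels:

ycoord = -ycoord;                      // mirror the axis, math-style
ycoord = verticalResolution - ycoord;  // or: move the origin to the bottom-left corner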
I would say this is a duplicate of this question:
Move a shape to place where my finger is touched
Check my answer there; I won't repeat myself.
Or in short: use the camera.unproject() method to get world coordinates from screen coordinates.
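A short sketch of that call in LibGDX, assuming a camera field is in scope; unproject() takes the touch coordinates (y pointing down) and overwrites the vector with world coordinates:

Vector3 touch = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
camera.unproject(touch);
// touch.x and touch.y now hold world coordinates, with y pointing up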
While working on Projectiles I thought that it would be a good idea to rotate the sprite as well, to make it look nicer.
I am currently storing pixels in a one-dimensional array, and the sprite's width and height can and will vary, which makes it harder for me to figure out how to do this correctly.
I will be honest and say it straight out: I have absolutely no idea how to do this. I've done a few searches to try to find something, and there were some things out there, but the best I found was this:
DreamInCode ~ Rotating a 1-dimensional Array of Pixels
This method works fine, but only for square sprites. I would also like to apply it to non-square (rectangular) sprites. How could I set it up so that rectangular sprites can be rotated?
Currently, I'm attempting to make a laser, and it would look much better if it didn't only go along a vertical or horizontal axis.
You need to recalculate the coordinates of every point of your image (take a look here). Multiply each sprite point (x, y) by the rotation matrix to get the new point (x', y') in space: x' = x·cos θ - y·sin θ and y' = x·sin θ + y·cos θ.
You can assume that the bottom-left corner (or the top-left, depending on your coordinate system's orientation) of your sprite is at (x, y) = (0, 0).
And you should recalculate the colors too: a pure red pixel surrounded by blue pixels at (x, y) = (10, 5) can land, after rotation, at (x, y) = (8.33, 7.1), which is not a real pixel position, since pixels don't have fractional coordinates. So the pixel at the real position (x, y) = (8, 7) will no longer be pure red, but red with a small percentage of blue. But one thing at a time.
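To make that concrete for rectangular sprites, here is a minimal sketch using the inverse rotation with nearest-neighbor sampling (so no color filtering yet); the method name rotatePixels and the row-major ARGB layout are assumptions:

static int[] rotatePixels(int[] pixels, int w, int h, double angle) {
    double cos = Math.cos(angle), sin = Math.sin(angle);
    // bounding box of the rotated rectangle, so nothing gets clipped
    int newW = (int) Math.ceil(Math.abs(w * cos) + Math.abs(h * sin));
    int newH = (int) Math.ceil(Math.abs(w * sin) + Math.abs(h * cos));
    int[] out = new int[newW * newH]; // stays transparent where no source pixel lands
    double cx = w / 2.0, cy = h / 2.0;         // source center
    double ncx = newW / 2.0, ncy = newH / 2.0; // destination center
    for (int y = 0; y < newH; y++) {
        for (int x = 0; x < newW; x++) {
            // inverse rotation: which source pixel ends up at (x, y)?
            double dx = x - ncx, dy = y - ncy;
            int sx = (int) Math.round( dx * cos + dy * sin + cx);
            int sy = (int) Math.round(-dx * sin + dy * cos + cy);
            if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                out[y * newW + x] = pixels[sy * w + sx];
            }
        }
    }
    return out;
}

Iterating over the destination and mapping each pixel back through the inverse rotation avoids the holes you would get by pushing source pixels forward, and it works for any width and height, not just squares.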
It's easier than you think: just copy the original rectangular sprites, centered, into bigger square ones with a transparent background. .png files support transparency, and I think you may be using them already.
I have a little tech game I am messing around with, and I can't figure out the formula to position one object given another object's origin.
So I have a Spaceship and a Cannon. I have the game setup to use units, so 1 unit = 16 pixels (pixel art).
Basically my cannon should be placed 0.5625 units on the X and 0 on the Y relative to the origin of the Spaceship, which is located at 0, 0 (bottom left corner).
The cannon is independent of the spaceship's angle; it can aim in different directions rather than being fixed to point the way the spaceship does.
I have it constantly following the cursor, which works fine. Now when I rotate the Spaceship, obviously the origin of the Spaceship is changing in world coordinates, so my formula to place the cannon is all messed up, like so:
protected Vector2 weaponMount = new Vector2();
weaponMount.set(getBody().getPosition().x + 0.5625f,
                getBody().getPosition().y);
Obviously if I position the ship at a 90° angle, X is going to be different and the cannon would be waaaayyy off the ship. Here is a screenshot example of what I mean:
What would be the formula for this? I have tried using cos/sin but that does not work.
Any ideas?
weaponMount.set(0.5625f, 0).setAngle(SpaceshipAngle).add(getBody().getPosition());
Where SpaceshipAngle is the angle of your Spaceship.
The origin of the spaceship is the point around which the spaceship (the Texture of it) will rotate and scale. The position, instead, is always the lower-left corner of the Texture and does not depend on the rotation.
Your problem is that your offset does not depend on the rotation of your spaceship.
To take care of this rotation, you should store a Vector2 offset describing your weapon's offset (in your case, Vector2(0.5625f, 0)).
Next, store a float angle describing your spaceship's rotation.
Then you can rotate the offset using offset.setAngle(angle).
The last thing is to set the weapons position. The code for this did not change so much:
weaponMount.set(getBody().getPosition().x + offset.x,
                getBody().getPosition().y + offset.y);
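Putting both answers together, a minimal sketch (assuming LibGDX's Vector2, angles in degrees, and a hypothetical getRotation() accessor on the ship):

Vector2 offset = new Vector2(0.5625f, 0f);  // weapon offset in ship-local units
weaponMount.set(offset).setAngle(spaceship.getRotation()).add(getBody().getPosition());
// copying into weaponMount first keeps the stored offset unchanged between frames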
I'm experimenting with LibGDX and 3D in a projection view. Right now I'm looking at how to determine the outermost bounds of my viewport in world space at z = 0.0, in order to draw a coordinate grid no larger than necessary. However, I seem to have outpaced my education, in that I haven't taken a formal linear algebra class and am still a little fuzzy on matrix math.
Is there a way to determine where I should start and stop drawing lines without resorting to using picking and drawing a transparent plane to intersect with?
LibGDX's unproject function takes screen coordinates in a Vector3 and, depending on the provided z, returns a Vector3 in world space anywhere between the near and far clipping planes. However, given that I have a translated and rotated Camera (an encapsulation of the view-projection matrix and a slew of convenience methods), it occurs to me that I can't pick an arbitrary z for the window coordinate vector and just set z to 0.0 after unprojection, as that point probably won't be the furthest viewable point in the viewport. So how do I know what z value to use in the window coordinate that will give me the x and y I need in world space at z = 0.0?
EDIT:
So apparently the problem I'm looking at is ray-plane intersection, which would require ray casting. So now I suppose my question is this: is casting four rays per render loop (or, I suppose, whenever the camera has moved) worth the payoff of being able to dynamically draw a worldspace coordinate grid no larger than the viewport? If not, is there a cheaper algorithm I can use to estimate where I should start and stop drawing lines?
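For scale, a ray-plane intersection is only a handful of multiplications and additions, so four per frame (or per camera move) is cheap. A minimal LibGDX sketch, assuming a camera field is in scope (Plane and Intersector live in com.badlogic.gdx.math, Ray in com.badlogic.gdx.math.collision):

Plane ground = new Plane(new Vector3(0, 0, 1), 0);  // the plane z = 0
int w = Gdx.graphics.getWidth(), h = Gdx.graphics.getHeight();
int[][] px = { {0, 0}, {w, 0}, {0, h}, {w, h} };    // the four viewport corners
Vector3[] corners = new Vector3[4];
for (int i = 0; i < 4; i++) {
    Ray ray = camera.getPickRay(px[i][0], px[i][1]);
    corners[i] = new Vector3();
    // returns false when the ray is parallel to the plane (never hits it)
    Intersector.intersectRayPlane(ray, ground, corners[i]);
}
// when all four rays hit, corners[] bounds the visible region of the z = 0 plane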
I have an assignment to implement ray tracing in Java.
I'm not asking for much, just some information on how to construct the rays from the camera through a pixel, given its x and y. I've found a lot of sources on the Internet that explain this in 2D, but I need to know how to do it in 3D.
Thanks in advance
The question is how to find the coordinates in space of a point on the screen whose position is given by (x,y) in screen coordinates.
I don't know what coordinates system you're using for the screen, so I'll make some educated guesses and you can adjust accordingly.
The center of the screen has a known location C = [X, Y, Z]_center in space. I'll presume the origin of the screen coordinate system is there. We have a "direction" vector d which is normal (perpendicular) to the screen, and an "up" vector u. I'll presume that the +y direction on the screen is u. We can take the cross product of these vectors, r = d × u, which I will take to be the +x direction on the screen. So the location of a point on the screen whose screen coordinates are (x, y) will be C + x·r + y·u, and we're done.
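A short sketch of that construction, using LibGDX's Vector3 for concreteness (any vector class with a cross product works; cameraPos, center, d, and u are assumed to be set up and normalized already):

Vector3 r = d.cpy().crs(u).nor();                    // r = d × u, the screen's +x direction
Vector3 p = center.cpy().mulAdd(r, x).mulAdd(u, y);  // point on the screen at (x, y)
Vector3 rayDir = p.cpy().sub(cameraPos).nor();       // ray direction from the camera through (x, y)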
The basic idea is this: you have a camera at a given position (x, y, z) with a given resolution, a set of light sources, and an orientation of the camera (an angle; think of how you'd tilt/rotate your head to look up/down, etc.). Now what you want to do is, essentially, for each camera pixel, "extend" a ray until it touches geometry. Then you know what to render (namely, the bit of geometry you've just found). Next up is to determine whether or not that point is shadowed, which you do by "extending" rays from it toward the light sources until they either touch geometry (your spot is shadowed by that other bit of geometry, for the given light source) or reach the light source (your spot is lit by that light source).
That's the basics, it gets a lot more difficult when you consider things like reflection, diffusion of light, and so on.