I've written a ray tracing program that (for the moment) has two options for surface lighting: ambient and reflective. Ambient lighting replicates how natural surfaces scatter light. Reflections obviously replicate how mirrors reflect light. Everything works properly, but I can't figure out how to mix colors with glossy reflections.
The reflection algorithm returns a color and works recursively. Rays are "cast" in the form of parameterized lines. When they hit a reflective surface, they bounce off perfectly (and my methods for this work). Then these reflected rays are used as parameters to call the reflection algorithm again. This goes on until either the current ray hits an ambient (non reflective) surface or the current ray doesn't hit a surface at all.
The way I'm calculating colors now is by averaging, from back to front, the color of the reflective surface and the color of the newly hit surface, so that the colors of surfaces the ray hits early on are represented more than later surface colors.
Say color A is the color of the first (reflective) surface the ray hits, color B is the color of the second surface it hits, C is the third, and so on. The final color returned will then be 50% A, 25% B, 12.5% C...
The method I use for this actually supports a weighted average so that mirrored surfaces have less effect on the final color. Here it is:
public void addColor(Color b, double dimFac) {
    // Blend the stored color 'c' with the new color 'b':
    // 'b' contributes dimFac of the result, 'c' the remaining (1 - dimFac).
    double red   = c.getRed()   * (1 - dimFac) + b.getRed()   * dimFac;
    double green = c.getGreen() * (1 - dimFac) + b.getGreen() * dimFac;
    double blue  = c.getBlue()  * (1 - dimFac) + b.getBlue()  * dimFac;
    c = new Color((int) red, (int) green, (int) blue);
}
Here's a screenshot of the program with this: three ambient spheres hovering over a glossy reflective plane, with a 'dimFac' of 0.5:
Here's the same simulation with a dimFac of 1 so that the mirror has no effect on the final color:
Here dimFac is 0.8
And here it's 0.1
Maybe it's just me, but none of these reflections look amazingly realistic. What I'm using as a guide is a PowerPoint by Cornell that, among other things, doesn't mention anything about adding the colors. Mirrors do have color to a degree, and I don't know the correct way of mixing the colors. What am I doing wrong?
So the way I get a color from a ray is as follows. Each iteration of the ray tracer begins with the initialization of shapes. This program supports three shapes: planes, spheres, and rectangular prisms (which is ultimately just 6 planes). I have a class for each shape, and a class (called Shapes) that can store any one of those shape types (but only one per object).
After the shapes have been made, a class (called Projector) casts the rays via another class called MasterLight (which actually holds the methods for basic ray tracing, shadows, reflections, and (now) refractions).
In order to get the color of an intersection, I call the method getColor(), which takes the vector (how I store 3D points) of the intersection. I use that to determine the unshaded color of the surface. If the surface is untextured and is just a flat color (like the shapes above), then the unshaded color stored in the shape class is returned (simply "Color c = Color.RED", for example).
I take that color and recursively plug it back into MasterLight as the base color to get shading as if the surface were normal and ambient. This process returns the shade that the shape would normally have. Now the RGB value might be (128, 0, 0).
public Color getColor(Vector v) {
    if (texturing) {
        // Textured surface: the texturing algorithm supplies the color at v.
        return texturingAlgorithm;
    } else {
        // Untextured surface: return the shape's flat color.
        return c;
    }
}
DimFac, the way it's being used, can be anything from 0 to 1. In my program it's currently 0.8, which is the universal shading constant. (In ambient shading, I take the value that I'm dimming the color by, multiply it by 0.8, and add 0.2 (1 - 0.8), so that the dimmest a color can be is 0.2 of its original brightness.)
The addColor method is in another class (I have 17 at the moment, one of which is an enum) called Intersection. This stores all important information about intersections of rays with shapes (color, position, normal vector of the hit surface, and some other constants that pertain to the object's material). Color c is the current color at that point in the calculations.
Each iteration of reflections calls addColor with the most recent surface color. To elaborate, if (in the picture above) a ray had just bounced off of the plane and hit a sphere and bounced off into empty space, I first find the color of the sphere's surface at the point of bounce, which is what 'c' is set to. Then I call addColor with the color of the plane at the point of intersection (the first time).
As soon as I've backtracked all of the reflections, I'm left with a color, which is what I use to color the pixel for that particular ray.
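To make the flow concrete, here is a minimal sketch of the recursion as described above. The helper names castRay(), reflect(), isReflective(), getPoint(), and the Ray class are hypothetical placeholders; only the blending at the end mirrors addColor() exactly (java.awt.Color assumed):

Color traceReflection(Ray ray, int depth) {
    Intersection hit = castRay(ray);                 // nearest surface hit, or null
    if (hit == null) {
        return Color.BLACK;                          // ray leaves the scene
    }
    Color surface = hit.getShape().getColor(hit.getPoint());
    if (!hit.getShape().isReflective() || depth == 0) {
        return surface;                              // ambient surface ends the recursion
    }
    Color deeper = traceReflection(reflect(ray, hit), depth - 1);
    // Blend like addColor(): the deeper color weighted (1 - dimFac),
    // the just-hit mirror's color weighted dimFac.
    double dimFac = 0.5;
    return new Color(
            (int) (deeper.getRed()   * (1 - dimFac) + surface.getRed()   * dimFac),
            (int) (deeper.getGreen() * (1 - dimFac) + surface.getGreen() * dimFac),
            (int) (deeper.getBlue()  * (1 - dimFac) + surface.getBlue()  * dimFac));
}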
Tell me if I missed anything or if something was unclear.
You should use the Phong shading method, published by Bui Tuong Phong in 1975. The Phong equation simplifies lighting into three components: ambient, diffuse, and specular.
Ambient light is the lighting of your object when in complete darkness. Ambient lighting is not affected by light sources.
Diffuse light is the brightness of light based on the angle between the surface normal at the intersection and the light vector from the intersection to the light source.
Specular light is what I believe you're looking for. It is based on the angle between the vector from the point of intersection to the camera position and the reflection of the light vector about the surface normal.
Here's how I typically use Phong Shading:
For any object in your scene, define three constants: Ka (ambient lighting), Kd (diffuse lighting), and Ks (specular lighting). We will also define a constant "n" for the shininess of your object. I would keep this value above 3.
Find the dot product of the normal vector and the light vector, we'll call this quantity "dF" for now.
Now let's calculate the reflection vector: it is the normal vector, multiplied by the dot product of the normal vector and the light vector, multiplied by two. Subtract the light vector, and this should have a magnitude of 1 if the normal and light vectors did.
Find the dot product of the reflection vector and the vector to the viewer from the intersection, we'll call this "sF".
Finally, we'll call the color of your object "clr" and the final color will be called "fClr".
To get the final color, use the formula:
fClr = Ka(clr) + Kd(dF)(clr) + Ks(sF^n)(clr)
Finally, check if any of your R, G, or B values are out of bounds. If so, set that R, G, or B value to the closest bound.
**Perform the equation for each RGB value, if you are using RGB.
**I would like to note that all RGB values should be scalars 0.0 - 1.0. If you are using 8-bit RGB (0-255), divide the values by 255 before putting them into the equation, and multiply the output values by 255.
**Any time I refer to a vector, it should be a unit vector, that is, it should have a magnitude of 1.
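Putting the steps together, here is a minimal sketch in Java of the per-channel calculation described above, assuming a hypothetical Vec3 class with dot(), scale(), and subtract(), colors as doubles in 0.0 - 1.0, and unit vectors throughout:

double phongChannel(double clr, double ka, double kd, double ks, double n,
                    Vec3 normal, Vec3 toLight, Vec3 toViewer) {
    double dF = Math.max(0, normal.dot(toLight));                   // diffuse factor
    Vec3 reflection = normal.scale(2 * normal.dot(toLight)).subtract(toLight);
    double sF = Math.max(0, reflection.dot(toViewer));              // specular factor
    double fClr = ka * clr + kd * dF * clr + ks * Math.pow(sF, n) * clr;
    return Math.min(1.0, Math.max(0.0, fClr));                      // clamp to the bounds
}

Call it once per channel (R, G, B) with that channel's clr value.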
I hope this helps! Good luck!
I couldn't find any satisfying answer on that topic. I want to make a program that will get snapshots from camera above the pool table and detect balls. I am using OpenCV and Java. My algorithm now is basically:
blurring image -> converting RGB to HSV -> splitting into 3 planes -> using Canny() on H plane -> using HoughCircles() method to detect balls
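For reference, a rough sketch of that pipeline with OpenCV's Java bindings might look like the following; the numeric parameters are placeholders, not my tuned values:

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class BallDetector {
    public static Mat detectBalls(Mat frame) {
        Mat blurred = new Mat(), hsv = new Mat(), edges = new Mat(), circles = new Mat();
        Imgproc.GaussianBlur(frame, blurred, new Size(5, 5), 0);       // blur
        Imgproc.cvtColor(blurred, hsv, Imgproc.COLOR_BGR2HSV);         // RGB -> HSV
        List<Mat> planes = new ArrayList<>();
        Core.split(hsv, planes);                                       // split into H, S, V planes
        Imgproc.Canny(planes.get(0), edges, 50, 150);                  // Canny on the H plane
        Imgproc.HoughCircles(edges, circles, Imgproc.HOUGH_GRADIENT,   // circle detection
                1, 20, 100, 30, 10, 40);
        return circles;                                                // holds (x, y, radius) triples
    }
}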
This algorithm detects balls quite well; it only has problems with two balls (green and blue, because the background of the table is green). But I want to go one step further and:
Detect if the ball belongs to stripes or solids
Set an ID for every ball; stripes would have, for example, 1-7 and solids 8-14, and every ball would have a unique ID that doesn't change during the game
Do you have any idea how to implement task #1? My idea is to use the inRange() function, but then I'd have to prepare a mask for every ball that detects that one ball in a specified range of colors, and do this detection for every ball. Am I right? Thanks for sharing your opinions.
Edit: Here I give you some samples of how my algorithm works. I changed some parameters because I wanted to detect everything, and now it works worse, but it still works with quite nice accuracy. I'll give you three samples: the original image from the camera, the image where I detect balls (undistorted, with some filters), and the image with the detected balls.
Recommendation:
If you can mask out the pixels corresponding to a ball, the following method should work to differentiate striped/solid balls based on their associated pixels:
Desaturate the ball pixels and threshold them at some brightness p.
Count the number of white pixels and total pixels within the ball area.
Threshold on counts: if the proportion of white pixels is greater than some threshold q, classify it as a striped ball. Otherwise, it's a solid ball.
(The idea being that the stripes are white, and always at least partially visible, so striped balls will have a higher proportion of white pixels).
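As a concrete illustration, here is a minimal sketch of the method in Java with OpenCV, assuming you already have a binary mask for the ball's pixels; the helper name isStriped and its parameters are just a suggestion:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class BallClassifier {
    // frame: the BGR camera image; ballMask: 255 inside the ball's circle, 0 elsewhere.
    // p: brightness threshold (0..1); q: minimum proportion of white pixels for a stripe.
    public static boolean isStriped(Mat frame, Mat ballMask, double p, double q) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);             // desaturate

        Mat bright = new Mat();
        Imgproc.threshold(gray, bright, p * 255.0, 255, Imgproc.THRESH_BINARY);

        Mat brightInBall = new Mat();
        Core.bitwise_and(bright, ballMask, brightInBall);                  // keep ball pixels only

        int whitePixels = Core.countNonZero(brightInBall);
        int totalPixels = Core.countNonZero(ballMask);
        return totalPixels > 0 && (double) whitePixels / totalPixels > q;  // striped if above q
    }
}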
Sample Testing:
Here's an example of this applied (by hand, with p = 0.7) to some of the balls in the unrectified image, with final % white pixels on the right.
It looks like a classification threshold of q = 0.1 (minimum 10% white pixels to be a striped ball) will distinguish the two groups, although it would be ideal to tune the thresholds based on more data.
If you run into issues with shadowed balls using this method, you can also try rescaling each ball's brightnesses before thresholding (so that the brightnesses span the full range [0, 1]), which should make the method less dependent on the absolute brightness.
I am 11 years old, and I program with Java, HTML, and CSS. Well, what I have is a game, and it's a 2D Minecraft-style platformer.
Well, I have some water to the side, and what I want to do is: when the player intersects that water, I want it to slow down. Here is an example of what it could look like if there were a method to do this, in case you still don't understand my goal.
if (player.intersectsColor("0026FF"))
    playerSpeed = 2;
else
    playerSpeed = 3;
I suggest you represent the water not by its color but by its location. That way you can check whether the player is in a "tile" representing water, and adjust the speed accordingly.
This you can do with a simple comparison on the x/y coordinates (adjusted for the size of the "tile"/"player").
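For instance, a minimal sketch in Java, assuming the level is stored as a 2D boolean array water (true where a tile is water) and square tiles of TILE_SIZE pixels:

static final int TILE_SIZE = 32;

boolean isInWater(int playerX, int playerY, boolean[][] water) {
    int col = playerX / TILE_SIZE;   // which tile column the player is in
    int row = playerY / TILE_SIZE;   // which tile row the player is in
    return water[row][col];
}

// Then, instead of checking a color:
// playerSpeed = isInWater(player.getX(), player.getY(), water) ? 2 : 3;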
If you don't have nice meshy tiles, but curves/polygons, you will need to read up on geometry and how to calculate (possibly curved) line intersection. The exact algorithm will depend on the curve used.
The reasons I discourage you from using the color itself for the intersection are these:
"Intersecting" on a single color limits your ability to dynamically color the terrain/objects later
You cannot have two different terrain/object types with the same color
Having the color (e.g. brown) of the terrain/object does not tell you which brown terrain/object the player ran into (e.g. is it the first or the second chest?)
If you really want to represent the terrain with colors, you can translate the player's in-game coordinates to screen coordinates and see what color pixel you have at that coordinate on the screen (before the player was rendered onto the scene), but this is messy.
I'm working on making a 2D isometric engine in Java because I like suffering, I guess. Anyways, I'm getting into collision detection and I've hit a bit of a problem.
Characters in-game are not restricted to movement from tile to tile - they move freely. My problem is that I'm not sure how to stop a player from colliding with, say, a crate, without denying them access to the tile.
For instance, say the crate was on .5 of a tile, and then the rest of the crate was off the tile, I'd like the player to be able to move on to the free .5 of the tile instead of the entire tile becoming blocked.
The problem I've hit is that I'm not sure how to approximate the size of the footprint of the object. Using the image's dimensions doesn't work very well, since the object's "height" in gamespace translates to additional floorspace being taken up by the image.
How should I estimate an object's size? Mind, I don't need pixel-perfect detection. A rhombus would work fine.
I'm happy to provide any code you might need, but this seems like a math issue.
From the bounding rectangle of the sprite, you can infer the height of a rhombus that fits inside, but you cannot precisely determine the two dimensions on the floor, as each dimension contributes equally to the width and height of the sprite. However, if you assume that the base of the rhombus is square, then you can determine the length of its side as well.
If the sprite is W pixels wide and H pixels high, the square base of the rhombus has a side of W / sqrt(3) and the height of the rhombus will be H - (W / sqrt(3)). This image of some shapes in isometric projection can be helpful to understand why these formulas work.
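In code, a minimal sketch of those formulas, assuming a true isometric projection and a sprite whose bounding box tightly encloses the object:

double baseSide(double spriteWidth) {
    return spriteWidth / Math.sqrt(3);              // projected side of the square base
}

double rhombusHeight(double spriteWidth, double spriteHeight) {
    return spriteHeight - baseSide(spriteWidth);    // projected height above the floor
}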
I have been working on an isometric minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some lighting. It is difficult to distinguish between separate elevations because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block.
Well, it depends on how you do your lighting.
Basically, sprites are just textured quads made of two triangles.
Traditional vertex based lighting (which is supported by the built-in but now deprecated functions) will just calculate the lighting for the 4 corners of that quad and everything else will be interpolated. This is quite fast but might result in the wrong lighting - especially with spot lights and big quads.
If you use directional lighting only, you might apply a normal map to your quads and thus influence lighting in a per-texel way, but that might still not be what you want.
The modern way would be to use shaders, i.e. the lighting is evaluated per-pixel. You'd then have to provide per-texel lighting information for your quad which is then used in the fragment/pixel shader.
Just to clarify, the meanings of some terms in this context:
per-texel: per pixel in the texture; those values might be interpolated
per-pixel: per output pixel, i.e. per screen pixel
Edit:
I just looked at your screenshot and it seems you'll have to change the shade of a sprite's edges if the adjacent sprite is not on the same level. Assuming you already know which sprite edge should be visible (i.e. there's a level change at that edge), you might just change the shading of the vertices that form that edge.
If you don't use any lighting, you might just start by setting the vertex color to white, and to some darker color for the vertices that need shading. Then multiply your texture color with the vertex color, which should result in darker edges.
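With legacy immediate-mode OpenGL (e.g. via LWJGL's GL11 bindings) that could look roughly like the sketch below; it assumes the default GL_MODULATE texture environment, which multiplies texture color by vertex color, and that the quad's right edge borders a lower level:

import static org.lwjgl.opengl.GL11.*;

void drawShadedQuad(float x, float y, float w, float h) {
    glBegin(GL_QUADS);
    glColor3f(1f, 1f, 1f);                          // left edge: full brightness
    glTexCoord2f(0f, 0f); glVertex2f(x, y);
    glTexCoord2f(0f, 1f); glVertex2f(x, y + h);
    glColor3f(0.6f, 0.6f, 0.6f);                    // right edge: darkened (level change)
    glTexCoord2f(1f, 1f); glVertex2f(x + w, y + h);
    glTexCoord2f(1f, 0f); glVertex2f(x + w, y);
    glEnd();
}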
Alternatively, if those levels have different depths (i.e. different z values), you could use some shader for edge detection (e.g. some SSAO implementation).
Edit 2:
If you use plain old vertex lighting, applying weighted normals might help. Basically you calculate the weighted vertex normals from the normals of those triangles that share a vertex.
There are several methods for doing this, one being to weight the faces based on the angle at that vertex. You could multiply the normals by those angles, add them together, and finally normalize the resulting normal.
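A minimal sketch of that angle-weighted averaging, assuming a hypothetical Vec3 class with add(), scale(), and normalize(), and that you have already computed each face's normal and its angle at the shared vertex:

Vec3 weightedVertexNormal(Vec3[] faceNormals, double[] anglesAtVertex) {
    Vec3 sum = new Vec3(0, 0, 0);
    for (int i = 0; i < faceNormals.length; i++) {
        sum = sum.add(faceNormals[i].scale(anglesAtVertex[i]));  // weight by angle
    }
    return sum.normalize();
}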
The result of that calculation might be something like this (ASCII art):
| | /
|_______|________/
| / | |
|/______|_______|
Lines pointing up are the normals, the bottom lines would be your sprites in a side view.
I'm writing a ray tracer (using left-handed coordinates, if that makes a difference). It's for the sake of teaching myself the principles, so I'm not using OpenGL or complex features like depth of field (yet). My camera can have an arbitrary position and orientation; I indicate them by way of three vectors, location, look_at, and sky, which behave like the equivalent POV-Ray vectors. Its "film" also has a width and height. (The focal length is implied by the distance from position to look_at.)
My problem is that I don't know how to cast the rays. I have two quantities, vx and vy, that indicate where the ray should end up. They both vary from -1 to 1. If they're both -1, I'm casting the ray from the camera's position to the top-left corner of the "film"; if they're both 1, the bottom-right; if they're both 0, the center; and the rest is apparent.
I'm not familiar enough with vector arithmetic to derive an equation for the ray. I would appreciate an explanation of how to do so.
You've described what needs to be done quite well already. Your field of view is determined by the distance between your camera and your "film" that you're going to cast your rays through. The further away the camera is from the film, the narrower your field of view is.
Imagine the film as a bitmap image that the camera is pointing at. Say we position the camera one unit away from the bitmap. We then have to cast a ray through each of the bitmap's pixels.
The vector is extremely simple. If we put the camera location at (0,0,0), and the bitmap film right in front of it with its center at (0,0,1), then the ray to the bottom right is - tada - (1,1,1), and the one to the bottom left is (-1,1,1).
That means that the difference between the bottom right and the bottom left is (2,0,0).
Assume that your horizontal bitmap resolution should be 1000, then you can iterate through the bottom line pixels as follows:
width = 1000;
cameraToBottomLeft = (-1,1,1);
bottomLeftToBottomRight = (2,0,0);
for (x = 0; x < width; x++) {
ray = cameraToBottomLeft + (x/width) * bottomLeftToBottomRight;
...
}
If that's clear, then you just add an equivalent outer loop for your lines, and you have all the rays that you will need.
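In Java, that full double loop might look roughly like this, assuming a hypothetical Vec3 class with add() and scale() and the same setup as above (camera at the origin, film centered one unit in front of it), here starting from the top-left corner:

int width = 1000, height = 1000;
Vec3 cameraToTopLeft = new Vec3(-1, -1, 1);
Vec3 leftToRight = new Vec3(2, 0, 0);
Vec3 topToBottom = new Vec3(0, 2, 0);

for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        // Direction from the camera through pixel (x, y) of the film.
        Vec3 ray = cameraToTopLeft
                .add(leftToRight.scale((double) x / width))
                .add(topToBottom.scale((double) y / height));
        // trace(ray) ...
    }
}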
You can then add appropriate variables for the distance of the camera to the film and horizontal and vertical resolution. When that's done, you could start changing your look vector and your up vector with matrix transformations.
If you want to wrap your head around computer graphics, an introductory textbook could be of great help. I used this one in college, and I think I liked it.