Java LWJGL plane border

I am trying to give a white square a blue border. I drew two coplanar quads, but the result is a strange mess of white and blue.
Is there a good way to draw a border for a square in 3D with Java LWJGL?
// Blue background quad
float d = 1;
float f = 1;
glColor3f(0, 0, 1);
glBegin(GL_QUADS);
glVertex3f(-d, f, -f);
glVertex3f(-d, -f, -f);
glVertex3f(-d, -f, f);
glVertex3f(-d, f, f);
glEnd();

// Slightly smaller white quad in the same plane
f = 0.9f;
glColor3f(1, 1, 1);
glBegin(GL_QUADS);
glVertex3f(-d, f, -f);
glVertex3f(-d, -f, -f);
glVertex3f(-d, -f, f);
glVertex3f(-d, f, f);
glEnd();

"a strange mess of white and blue"
Sounds like z-fighting:
Z-fighting, also called stitching, is a phenomenon in 3D rendering
that occurs when two or more primitives have similar values in the
z-buffer. It is particularly prevalent with coplanar polygons, where
two faces occupy essentially the same space, with neither in front.
Affected pixels are rendered with fragments from one polygon or the
other arbitrarily, in a manner determined by the precision of the
z-buffer. It can also vary as the scene or camera is changed, causing
one polygon to "win" the z test, then another, and so on. The overall
effect is a flickering, noisy rasterization of two polygons which
"fight" to color the screen pixels. This problem is usually caused by
limited sub-pixel precision and floating point and fixed point
round-off errors.
Disable depth testing via glDisable(GL_DEPTH_TEST) sometime before you render the second quad.
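For the code in the question, a minimal sketch of that fix (assuming the same static GL11 imports the snippet already uses):

glColor3f(0, 0, 1);           // blue quad, drawn with depth testing as usual
glBegin(GL_QUADS);
// ... blue quad vertices ...
glEnd();

glDisable(GL_DEPTH_TEST);     // the white quad can no longer z-fight
glColor3f(1, 1, 1);
glBegin(GL_QUADS);
// ... white quad vertices ...
glEnd();
glEnable(GL_DEPTH_TEST);      // restore depth testing for the rest of the scene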

The artifact you're describing sounds like what's commonly called "depth fighting" or "z-fighting". It occurs if you draw coplanar polygons with the depth test enabled.
The reason is that all calculations in the rendering pipeline happen with limited precision. So while the two coplanar polygons theoretically have the same depth at any given pixel, one of them will often have slightly larger depth due to precision/rounding effects. Which one ends up being slightly in front can change between pixels, causing the typical artifacts.
To fix this, you have a number of options:
1. If you don't really need depth testing, you can obviously disable it. But often you really do need depth testing, so this is rarely an option.
2. You can apply an offset to one polygon so that it sits slightly in front of the other. How big the offset needs to be is somewhat tricky, and depends on the depth precision and the transformations you apply. You can either experiment with various values, or mathematically analyze what difference in input coordinates you need to produce distinct depth values.
3. You can use glPolygonOffset(). The effect and the related challenges are fairly similar to option 2, but it makes things a little easier since you don't have to offset the coordinates yourself (see the sketch after this list).
4. In a simple case like this, you can draw just a blue frame around the white square, instead of a whole blue square that overlaps it. You'll need a few more polygons to draw the "frame" shape (a large square with a hole the size of the white square), but this is still fairly easy to draw, more efficient, and avoids the problem entirely.
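For option 3, a minimal sketch (the offset values -1, -1 are a common starting point, not a universal answer; tune them for your depth precision):

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);    // negative values pull the polygon toward the viewer
// ... draw the white quad here ...
glDisable(GL_POLYGON_OFFSET_FILL);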

Related

Java: Ray Tracing: Glossy reflection coloring

I've written a ray tracing program that (for the moment) has two options for surface lighting: ambient and reflective. Ambient lighting replicates how natural surfaces scatter light. Reflections obviously replicate how mirrors reflect light. Everything works properly, but I can't figure out how to mix colors with glossy reflections.
The reflection algorithm returns a color and works recursively. Rays are "cast" in the form of parameterized lines. When they hit a reflective surface, they bounce off perfectly (and my methods for this work). Then these reflected rays are used as parameters to call the reflection algorithm again. This goes on until either the current ray hits an ambient (non reflective) surface or the current ray doesn't hit a surface at all.
The way I'm calculating colors now is by averaging the colors of the reflected surface and the newly hit surface from back to front, so that colors of surfaces the ray hits early on are represented more strongly than later surface colors.
If color A is the color of the first (reflective) surface the ray hits, color B is the color of the second surface it hits, C is the third, and so on, then the final color returned will be 50% A, 25% B, 12.5% C...
The method I use for this actually supports a weighted average so that mirrored surfaces have less effect on the final color. Here it is:
public void addColor(Color b, double dimFac) {
    // Weighted average of the current color c and the new color b:
    // dimFac = 1 means b replaces c entirely, 0 leaves c untouched.
    double red   = c.getRed()   * (1 - dimFac) + b.getRed()   * dimFac;
    double green = c.getGreen() * (1 - dimFac) + b.getGreen() * dimFac;
    double blue  = c.getBlue()  * (1 - dimFac) + b.getBlue()  * dimFac;
    c = new Color((int) red, (int) green, (int) blue);
}
Here's a screenshot of the program with this. There are three ambient spheres hovering over a glossy reflective plane, with a 'dimFac' of 0.5:
Here's the same simulation with a dimFac of 1 so that the mirror has no effect on the final color:
Here dimFac is 0.8
And here it's 0.1
Maybe it's just me, but none of these reflections look amazingly realistic. What I'm using as a guide is a PowerPoint from Cornell that, among other things, doesn't mention anything about adding the colors. Mirrors do have color to a degree, and I don't know the correct way of mixing the colors. What am I doing wrong?
So the way I get a color from a ray is as follows. Each iteration of the ray tracer begins with the initialization of shapes. This program supports three shapes: planes, spheres, and rectangular prisms (which are ultimately just 6 planes). I have a class for each shape, and a class (called Shapes) that can store each type of shape (but only one per object).
After the shapes have been made, a class (called Projector) casts the rays via another class called MasterLight (which actually holds the methods for basic ray tracing, shadows, reflections, and (now) refractions).
In order to get the color of an intersection, I call the method getColor(), which takes the vector (how I store 3D points) of the intersection. I use that to determine the unshaded color of the surface. If the surface is untextured and is just a flat color (like the shapes above), then the unshaded color is returned; it's simply stored in each shape class as, e.g., "Color c = Color.RED".
I take that color and recursively plug it back into MasterLight as the base color to get shading as if the surface were a normal, ambient one. This process returns the shade the shape would normally have; now the RGB value might be (128, 0, 0).
public Color getColor(Vector v) {
    if (texturing) {
        // Color comes from the texturing algorithm at this point
        return texturingAlgorithm;
    } else {
        // Flat, untextured surface color (e.g. Color.RED)
        return c;
    }
}
dimFac, the way it's being used, can be anything from 0 to 1. In my program it's currently 0.8, which is the universal shading constant. (In ambient shading, I take the value I'm dimming the color by, multiply it by 0.8, and add 0.2 (1 - 0.8), so that the dimmest a color can be is 0.2 of its original brightness.)
The addColor method is in another class (I have 17 at the moment, one of which is an enum) called Intersection. It stores all the important information about intersections of rays with shapes (color, position, normal vector of the hit surface, and some other constants that pertain to the object's material). Color c is the current color at that point in the calculations.
Each iteration of reflections calls addColor with the most recent surface color. To elaborate, if (in the picture above) a ray had just bounced off of the plane and hit a sphere and bounced off into empty space, I first find the color of the sphere's surface at the point of bounce, which is what 'c' is set to. Then I call addColor with the color of the plane at the point of intersection (the first time).
As soon as I've back tracked all of the reflections, I'm left with a color, which is what I use to color the pixel of that particular ray.
Tell me if I missed anything or if anything is unclear.
You should use the Phong shading method, created by Bui Tuong Phong in 1975. The Phong equation simplifies lighting into three components: ambient, diffuse, and specular.
Ambient light is the lighting of your object when in complete darkness. Ambient lighting is not affected by light sources.
Diffuse light is the brightness of light based on the angle between the surface normal at the intersection and the light vector from the intersection.
Specular light is what I believe you're looking for. It is based on the angle between the vector from the point of intersection to the camera position and the reflection of the light vector about the surface.
Here's how I typically use Phong Shading:
For any object in your scene, define three constants: Ka (ambient lighting), Kd (diffuse lighting), and Ks (specular lighting). We will also define a constant "n" for the shininess of your object; I would keep this value above 3.
Find the dot product of the normal vector and the light vector; we'll call this quantity "dF" for now.
Now let's calculate the reflection vector: it is the normal vector multiplied by twice the dot product of the normal and light vectors, minus the light vector. It will have a magnitude of 1 if the normal and light vectors did.
Find the dot product of the reflection vector and the vector from the intersection to the viewer; we'll call this "sF".
Finally, we'll call the color of your object "clr" and the final color will be called "fClr".
To get the final color, use the formula:
fClr = Ka(clr) + Kd(dF)(clr) + Ks(sF^n)(clr)
Finally, check whether any of your R, G, or B values are out of bounds; if so, clamp that value to the nearest bound.
** Perform the equation once per RGB channel, if you are using RGB.
** All RGB values should be scalars in 0.0 - 1.0. If you are using 8-bit RGB (0-255), divide the values by 255 before putting them into the equation, and multiply the output values by 255.
** Any time I refer to a vector, it should be a unit vector, i.e. it should have a magnitude of 1.
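Putting the steps together, here is a minimal sketch in Java (the array-based vector representation and the helper names are my own, purely illustrative; all channels are scalars in [0,1] as the notes require):

static double dot(double[] a, double[] b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

static double[] phong(double[] clr, double[] normal, double[] light,
                      double[] toViewer, double ka, double kd, double ks, double n) {
    double dF = Math.max(0, dot(normal, light));       // diffuse factor
    double nl = 2 * dot(normal, light);
    double[] r = { nl * normal[0] - light[0],           // reflection vector: 2(N.L)N - L
                   nl * normal[1] - light[1],
                   nl * normal[2] - light[2] };
    double sF = Math.max(0, dot(r, toViewer));          // specular factor
    double[] fClr = new double[3];
    for (int i = 0; i < 3; i++) {                       // per RGB channel
        fClr[i] = ka * clr[i] + kd * dF * clr[i] + ks * Math.pow(sF, n) * clr[i];
        fClr[i] = Math.min(1.0, Math.max(0.0, fClr[i])); // clamp out-of-bounds values
    }
    return fClr;
}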
I hope this helps! Good luck!

Slick2d using anti-aliasing with Animations/Images

I'm using Slick2D to render an Animation, and I want anti-aliasing.
Without AA, the movement is choppy, as one would expect for something without AA.
I turn on AA in my game's preRenderState with:
g.setAntiAlias(true);
This leads to:
Note the diagonal line in the center, presumably caused by the two triangles that render the rectangle not meeting precisely. How can I remove that while still using AA to smooth my movement? I found http://www.java-gaming.org/index.php?topic=27313.0 but the solution there was "remove AA", which I am reluctant to do.
This looks suspiciously like artifacts due to GL_POLYGON_SMOOTH.
When you use that (deprecated) functionality, you are expected to draw all opaque geometry with blending enabled and the blend function: GL_SRC_ALPHA_SATURATE, GL_ONE. Failure to do so produces white outlines on most contours (aliased edges, basically).
Chapter 6 of the OpenGL Redbook states:
Now you need to blend overlapping edges appropriately. First, turn off the depth buffer so that you have control over how overlapping pixels are drawn. Then set the blending factors to GL_SRC_ALPHA_SATURATE (source) and GL_ONE (destination). With this specialized blending function, the final color is the sum of the destination color and the scaled source color; the scale factor is the smaller of either the incoming source alpha value or one minus the destination alpha value.
This means that for a pixel with a large alpha value, successive incoming pixels have little effect on the final color because one minus the destination alpha is almost zero. With this method, a pixel on the edge of a polygon might be blended eventually with the colors from another polygon that's drawn later. Finally, you need to sort all the polygons in your scene so that they're ordered from front to back before drawing them.
That is a lot of work that almost nobody does correctly when they try to enable GL_POLYGON_SMOOTH.
Every time you enable anti-aliasing in Slick2D, it should look like this:
g.setAntiAlias(true);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA_SATURATE, GL11.GL_ONE);
Basically same as Andon's answer, but with code that should solve the problem.
Note that with this whatever you draw first will be on top, rather than what you draw last. So you may need to reverse rendering order.

Rotating a Sprite in Java

While working on Projectiles I thought that it would be a good idea to rotate the sprite as well, to make it look nicer.
I am currently using a 1-dimensional array for the pixels, and the sprite's width and height can and will vary, which makes it a bit more difficult for me to figure out how to do this correctly.
I'll be honest and say it straight out: I have absolutely no idea how to do this. I've done a few searches to try to find something, and the best I found was this:
DreamInCode ~ Rotating a 1-dimensional Array of Pixels
This method works fine, but only for square Sprites. I would also like to apply this for non-square (rectangular) Sprites. How could I set it up so that rectangular sprites can be rotated?
Currently, I'm attempting to make a laser, and it would look much better if it didn't only go along a vertical or horizontal axis.
You need to recalculate the coordinates of your image's points (take a look here). You have to multiply every point (x, y) of your sprite by the rotation matrix to get the new point (x', y') in space; see the sketch below.
You can assume that the bottom left (or the top left, depending on your coordinate system's orientation) of your sprite is at (x, y) = (0, 0).
And you should recalculate the colors too: if a pure red pixel surrounded by blue pixels at (x, y) = (10, 5) is rotated, it can end up at, say, (x, y) = (8.33, 7.1), which is not a real pixel position, since pixels don't have fractional coordinates. So the pixel at the real position (x, y) = (8, 7) will no longer be pure red, but red with a small percentage of blue. But one thing at a time.
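Here is a minimal sketch of that idea for a rectangular sprite stored as a 1-D ARGB array. The method name and the choices of inverse mapping, nearest-neighbor sampling, and rotating about the center are my own, not from the linked thread:

// Rotates a w-by-h sprite (1-D ARGB array) by `angle` radians about its
// center. Destination pixels are mapped back into the source with the
// inverse rotation; anything outside the source becomes fully transparent.
public static int[] rotateSprite(int[] src, int w, int h, double angle) {
    int[] dst = new int[w * h];
    double cos = Math.cos(-angle);  // inverse rotation: dst -> src
    double sin = Math.sin(-angle);
    double cx = (w - 1) / 2.0;
    double cy = (h - 1) / 2.0;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double dx = x - cx;
            double dy = y - cy;
            int sx = (int) Math.round(cx + dx * cos - dy * sin);
            int sy = (int) Math.round(cy + dx * sin + dy * cos);
            dst[y * w + x] = (sx >= 0 && sx < w && sy >= 0 && sy < h)
                    ? src[sy * w + sx]
                    : 0; // transparent ARGB
        }
    }
    return dst;
}

Because the destination keeps the same w-by-h canvas, corners that rotate out of it get clipped; for something long and thin like a laser you would typically allocate a larger destination, or let java.awt.geom.AffineTransform do the resampling for you.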
It's easier than you think: you only have to copy the original rectangular sprite, centered, into a bigger square one with a transparent background. PNG files support transparency, so you could use those.

Texture repeating on quads OpenGL

I am writing a voxel engine and at the moment I am working on the chunk rendering system, but I have a problem: it seems that the textures are repeated on the quads.
There is this green line at the bottom of the grass blocks and I don't know why.
This is the OpenGL-Render-Code:
Texture texture = TextureManager.getTexture(block.getTextureNameForSide(Direction.UP));
texture.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2d(0, 0); GL11.glVertex3f(0, 1, 0);
GL11.glTexCoord2d(1, 0); GL11.glVertex3f(0, 1, 1);
GL11.glTexCoord2d(1, 1); GL11.glVertex3f(1, 1, 1);
GL11.glTexCoord2d(0, 1); GL11.glVertex3f(1, 1, 0);
GL11.glEnd();
And here is the OpenGL-Setup:
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glShadeModel(GL11.GL_SMOOTH);
GL11.glClearColor(0.1F, 0.4F, 0.6F, 0F);
GL11.glClearDepth(1F);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
GL11.glCullFace(GL11.GL_BACK);
GL11.glEnable(GL11.GL_CULL_FACE);
Make sure GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T are set to GL_CLAMP_TO_EDGE.
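In LWJGL immediate-mode code like the above, that would look something like this (GL_CLAMP_TO_EDGE lives in the GL12 class; textureId is a placeholder for whatever handle your TextureManager bound):

// Set once per texture, after binding it.
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);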
genpfault's answer should do the trick for you, I just wanted to give you some insight into why you need this particular wrap state.
To be clear, the green line in your screenshot corresponds to the edges of one of your voxels?
It looks like you are using GL_LINEAR filtering (default) together with an inappropriate texture wrap state (e.g. GL_REPEAT or GL_CLAMP). I will explain why GL_CLAMP is a bad idea later.
You may think that the texture coordinate 0.0 and 1.0 are perfectly within the normalized texture coordinate range and therefore not subject to wrapping, but you would be wrong.
This particular combination of states will pick up texels from the other side of your texture at either extreme of the [0,1] texture coordinate range. The texture coordinate 1.0 is actually slightly beyond the center of the last texel in your texture, so when GL fetches the 4 nearest texels for linear filtering, it wraps around to the other side of the texture for at least 2 of them.
GL_CLAMP_TO_EDGE modifies this behavior: it clamps the texture coordinates to a range that is actually more restrictive than [0,1], so that no coordinate goes beyond the center of any edge texel in your texture. Linear filtering will then never pick up texels from the other side of the texture. You could also (mostly) fix this by using GL_NEAREST filtering, but that results in a lot of texture aliasing.
It is also possible that you are using GL_CLAMP, which, by the way, was removed in OpenGL 3.1. In older versions of GL it was designed to clamp the coordinates into the range [0,1], and if linear filtering tried to fetch a texel beyond the edge it would use a special set of border texels rather than wrapping around. Border texels are no longer supported, and thus that wrap mode is gone.
The bottom line is: do not use GL_CLAMP, it does not do what most people think. GL_CLAMP_TO_EDGE is almost always what you actually want when you think of clamping textures.
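To make the texel-center arithmetic concrete, here is a small worked fragment (the 16-texel width is purely illustrative):

int n = 16;                               // texture width in texels (illustrative)
double texel = 1.0 / n;                   // each texel spans 1/16 of [0, 1]
double lastCenter = 1.0 - texel / 2.0;    // center of the last texel: 0.96875
// GL_LINEAR at s = 1.0 samples beyond lastCenter, so with GL_REPEAT its
// blend partner comes from the opposite edge of the texture.
// GL_CLAMP_TO_EDGE instead clamps s into [texel / 2, 1 - texel / 2]:
double s = 1.0;
double sClamped = Math.max(texel / 2.0, Math.min(s, 1.0 - texel / 2.0)); // 0.96875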
EDIT:
genpfault brings up a good point; this would be a lot easier to understand with a diagram...
The following diagram illustrates the problem in 1 dimension:
   http://i.msdn.microsoft.com/dynimg/IC83860.gif
I have a more thorough explanation of this diagram in an answer I wrote to a similar issue.

How to shade a specific section of a sprite

I have been working on an isometric minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some lighting. It is difficult to distinguish between separate elevations because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block.
Well, it depends on how you do your lighting.
Basically, sprites are just textured quads made of two triangles.
Traditional vertex based lighting (which is supported by the built-in but now deprecated functions) will just calculate the lighting for the 4 corners of that quad and everything else will be interpolated. This is quite fast but might result in the wrong lighting - especially with spot lights and big quads.
If you use directional lighting only, you might apply a normal map to your quads and thus influence lighting in a per-texel way, but that might still not be what you want.
The modern way would be to use shaders, i.e. the lighting is evaluated per-pixel. You'd then have to provide per-texel lighting information for your quad which is then used in the fragment/pixel shader.
Just to clarify, the meanings of some terms in this context:
per-texel: per pixel in the texture; those values might be interpolated
per-pixel: per output pixel, i.e. per screen pixel
Edit:
I just looked at your screenshot, and it seems you'll have to change the shade of a sprite's edges whenever the adjacent sprite is not on the same level. Assuming you already know which sprite edge should be shaded (i.e. there's a level change at that edge), you might just change the shading of the vertices that form that edge.
If you don't use any lighting, you might simply set the vertex color to white by default and to some darker color for the vertices that need shading, then multiply your texture color by the vertex color, which should result in darker edges; a sketch follows below.
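A sketch in the same GL11 immediate-mode style as the other snippets here. The coordinates and the 0.6 darkening factor are illustrative, and it relies on the default GL_MODULATE texture environment multiplying the texel color by the vertex color:

float x = 0, y = 0, w = 64, h = 64;                       // illustrative quad
GL11.glBegin(GL11.GL_QUADS);
GL11.glColor3f(1f, 1f, 1f);                               // left edge: full brightness
GL11.glTexCoord2f(0, 0); GL11.glVertex2f(x, y);
GL11.glTexCoord2f(0, 1); GL11.glVertex2f(x, y + h);
GL11.glColor3f(0.6f, 0.6f, 0.6f);                         // right edge: shaded (level change)
GL11.glTexCoord2f(1, 1); GL11.glVertex2f(x + w, y + h);
GL11.glTexCoord2f(1, 0); GL11.glVertex2f(x + w, y);
GL11.glEnd();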
Alternatively, if those levels have different depths (i.e. different z values), you could use a shader for edge detection (e.g. some SSAO implementation).
Edit 2:
If you use plain old vertex lighting, applying weighted normals might help. Basically, you calculate a weighted vertex normal from the normals of the triangles that share that vertex.
There are several methods of doing this, one being to weight the faces by the angle they subtend at that vertex: multiply each face normal by that angle, add them together, and finally normalize the resulting normal (a sketch follows after the diagram below).
The result of that calculation might be something like this (ASCII art):
|       |        /
|_______|_______/
                |  /    |       |
                |/______|_______|
Lines pointing up are the normals, the bottom lines would be your sprites in a side view.
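A minimal sketch of that angle-weighted accumulation (array-based vectors; the method name and parameters are illustrative):

// Sums each face normal scaled by the angle the face subtends at the
// vertex, then normalizes the result into a unit vertex normal.
static float[] weightedVertexNormal(float[][] faceNormals, float[] angles) {
    float nx = 0, ny = 0, nz = 0;
    for (int i = 0; i < faceNormals.length; i++) {
        nx += faceNormals[i][0] * angles[i];
        ny += faceNormals[i][1] * angles[i];
        nz += faceNormals[i][2] * angles[i];
    }
    float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
    return new float[] { nx / len, ny / len, nz / len };
}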
