Texture repeating on quads OpenGL - Java

I am writing a voxel engine, and at the moment I am working on the chunk rendering system, but I have a problem.
It seems that the textures are repeated on the quads:
there is a green line at the bottom of the grass blocks and I don't know why.
This is the OpenGL-Render-Code:
This is the OpenGL-Render-Code:
Texture texture = TextureManager.getTexture(block.getTextureNameForSide(Direction.UP));
texture.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2d(0, 0); GL11.glVertex3f(0, 1, 0);
GL11.glTexCoord2d(1, 0); GL11.glVertex3f(0, 1, 1);
GL11.glTexCoord2d(1, 1); GL11.glVertex3f(1, 1, 1);
GL11.glTexCoord2d(0, 1); GL11.glVertex3f(1, 1, 0);
GL11.glEnd();
And here is the OpenGL-Setup:
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glShadeModel(GL11.GL_SMOOTH);
GL11.glClearColor(0.1F, 0.4F, 0.6F, 0F);
GL11.glClearDepth(1F);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
GL11.glCullFace(GL11.GL_BACK);
GL11.glEnable(GL11.GL_CULL_FACE);

Make sure GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T are set to GL_CLAMP_TO_EDGE.
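For example, right after binding the texture (in LWJGL the GL_CLAMP_TO_EDGE constant lives in GL12, since it was introduced in OpenGL 1.2):
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);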

genpfault's answer should do the trick for you; I just wanted to give you some insight into why you need this particular wrap state.
To be clear, the green line in your screenshot corresponds to the edges of one of your voxels?
It looks like you are using GL_LINEAR filtering (the default) together with an inappropriate texture wrap state (e.g. GL_REPEAT or GL_CLAMP). I will explain why GL_CLAMP is a bad idea later.
You may think that the texture coordinates 0.0 and 1.0 are perfectly within the normalized texture coordinate range and therefore not subject to wrapping, but you would be wrong.
This particular combination of states will pick up texels from the other side of your texture at either extreme of the [0,1] texture coordinate range. The texture coordinate 1.0 is actually slightly beyond the center of the last texel in your texture, so when GL fetches the 4 nearest texels for linear filtering, it wraps around to the other side of the texture for at least 2 of them.
GL_CLAMP_TO_EDGE modifies this behavior: it clamps the texture coordinates to a range that is actually more restrictive than [0,1], so that no coordinate ever goes beyond the center of an edge texel in your texture. Linear filtering will not pick up texels from the other side of your texture with this wrap mode set. You could also (mostly) fix this by using GL_NEAREST filtering, but that will result in a lot of texture aliasing.
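To make that concrete with a small example (assuming a 16-texel-wide texture): texel centers sit at (i + 0.5) / 16, so the center of the last texel is at 15.5/16 ≈ 0.969. The coordinate 1.0 lies half a texel beyond that center, so GL_LINEAR sampling at s = 1.0 blends texel 15 with whatever lies past it; under GL_REPEAT, that is texel 0 from the opposite edge of the texture.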
It is also possible that you are using GL_CLAMP, which, by the way, was removed in OpenGL 3.1. In older versions of GL it was designed to clamp the coordinates into the range [0,1], and then, if linear filtering tried to fetch a texel beyond the edge, it would use a special set of border texels rather than wrapping around. Border texels are no longer supported, and thus that wrap mode is gone.
The bottom line is do not use GL_CLAMP, it does not do what most people think. GL_CLAMP_TO_EDGE is almost always what you really want when you think of clamping textures.
EDIT:
genpfault brings up a good point; this would be a lot easier to understand with a diagram...
The following diagram illustrates the problem in 1 dimension:
   http://i.msdn.microsoft.com/dynimg/IC83860.gif
I have a more thorough explanation of this diagram in an answer I wrote to a similar issue.

Related

Why do I get black outlines/edges on a texture in libGDX?

Whenever I draw a texture that has alpha around the edges (it is anti-aliased by Photoshop), these edges become dark. I have endlessly messed around with texture filters and blend modes, but have had no success.
Here is what I mean:
minFilter: Linear magFilter: Linear
minFilter: MipMapLinearNearest magFilter: Linear
minFilter: MipMapLinearLinear magFilter: Linear
As you can see, changing the filter on the libGDX Texture Packer makes a big difference with how things look, but alphas are still dark.
I have tried manually setting the texture filter in libGDX with:
texture.setFilter(minFilter, magFilter);
But that does not work.
I have read that downscaling with a linear filter causes the alpha pixels to default to black? If this is the case, how can I avoid it?
I have also tried changing the blend mode: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) makes no difference, and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) removes alpha altogether, so that doesn't work.
I do NOT want to set my minFilter to Nearest because it makes things look terribly pixellated. I have tried every other combination of texture filters but everything results in the same black/dark edges/outline effect.
I have read that downscaling with a linear filter causes the alpha pixels to default to black?
This is not necessarily true; it depends on what colour Photoshop decided to put in the fully transparent pixels. Apparently this is black in your case.
The problem occurs because the GPU is interpolating between two neighbouring pixels, one of which is fully transparent (with all colour channels set to zero as well). Let's say that the other pixel is bright red:
(255, 0, 0, 255) // Bright red, fully opaque
( 0, 0, 0, 0) // Black, fully transparent
Interpolating with a 50/50 ratio gives:
(128, 0, 0, 128)
This is a half-opaque dark red pixel, which explains the dark fringes you're seeing.
There are two possible solutions.
1. Add bleeds to transparent regions
Make sure that the fully transparent pixels have the right colour assigned to them; essentially "bleed" the colour from the nearest non-transparent pixel into adjacent fully transparent pixels. I'm not sure Photoshop can do this, but the libGDX TexturePacker can; see the bleed and bleedIterations settings. You need to be careful to set bleedIterations high enough, and add enough padding for the bleed to expand into, for your particular level of downscaling.
Now the example comes out like this:
(255, 0, 0, 255)
(255, 0, 0, 0) // Red bled into the transparent region
Interpolation now gives a bright red transparent pixel, as desired:
(255, 0, 0, 128)
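For reference, a minimal sketch of configuring this programmatically (assuming libGDX's TexturePacker; the bleed and bleedIterations fields exist in recent versions, so check yours; the paths and pack name are placeholders):
import com.badlogic.gdx.tools.texturepacker.TexturePacker;
import com.badlogic.gdx.tools.texturepacker.TexturePacker.Settings;

Settings settings = new Settings();
settings.bleed = true;           // copy edge colours into transparent pixels
settings.bleedIterations = 4;    // how far the bleed expands
settings.paddingX = 4;           // leave room for the bleed between regions
settings.paddingY = 4;
settings.duplicatePadding = true;
TexturePacker.process(settings, "images-in", "atlas-out", "game");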
2. Use premultiplied alpha
This is less finicky to get right, but it helps to know exactly what you're doing. Again TexturePacker has you covered, with the premultipliedAlpha setting. This corresponds to the OpenGL blend mode glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
The numbers in the example don't change; this is still the pixel that comes out:
(128, 0, 0, 128)
However, this is no longer interpreted as "half-transparent dark red", but rather as "add some red, and remove some background". More generally, with premultiplied alpha, the colour channels are not "which colour to blend in" but "how much colour to add".
Note that a pixel like (255, 0, 0, 0) can no longer exist in the source texture: because the alpha is premultiplied, an alpha of zero automatically means that all colour channels must be zero as well. (If you want to get really fancy, you can even use such pixels to apply additive blending in the same render pass and the same texture as regular blending!)
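At draw time in libGDX this might look like the following sketch (assuming a SpriteBatch named batch and a TextureRegion named region packed with premultipliedAlpha = true; region, x and y are placeholders):
import com.badlogic.gdx.graphics.GL20;

batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA); // premultiplied blend
batch.begin();
batch.draw(region, x, y);
batch.end();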
Further reading on premultiplied alpha:
Shawn Hargreaves' blog
Alpha Blending: To Pre or Not To Pre by NVIDIA
I was able to resolve this issue by changing the filtering in TexturePacker Pro from Linear to MipMap.

Java LWJGL plane border

I am trying to give a white square a blue border. I drew two overlapping quads, but the result is a strange mess of white and blue.
Is there a good way to draw a border around a square in 3D with Java LWJGL?
// blue quad (the intended border), drawn first
float d = 1;
float f = 1;
glColor3f(0, 0, 1);
glBegin(GL_QUADS);
glVertex3f(-d, f, -f);
glVertex3f(-d, -f, -f);
glVertex3f(-d, -f, f);
glVertex3f(-d, f, f);
glEnd();
// white quad, slightly smaller but in the same plane (x = -d)
f = 0.9f;
glColor3f(1, 1, 1);
glBegin(GL_QUADS);
glVertex3f(-d, f, -f);
glVertex3f(-d, -f, -f);
glVertex3f(-d, -f, f);
glVertex3f(-d, f, f);
glEnd();
a strange mess of white and blue
Sounds like z-fighting:
Z-fighting, also called stitching, is a phenomenon in 3D rendering that occurs when two or more primitives have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer. It can also vary as the scene or camera is changed, causing one polygon to "win" the z test, then another, and so on. The overall effect is a flickering, noisy rasterization of two polygons which "fight" to color the screen pixels. This problem is usually caused by limited sub-pixel precision and floating point and fixed point round-off errors.
Disable depth testing via glDisable(GL_DEPTH_TEST) sometime before you render the second quad.
The artifact you're describing sounds like what's commonly called "depth fighting" or "z-fighting". It occurs if you draw coplanar polygons with the depth test enabled.
The reason is that all calculations in the rendering pipeline happen with limited precision. So while the two coplanar polygons theoretically have the same depth at any given pixel, one of them will often have slightly larger depth due to precision/rounding effects. Which one ends up being slightly in front can change between pixels, causing the typical artifacts.
To fix this, you have a number of options:
If you don't really need depth testing, you can obviously disable it. But often you really do need depth testing, so this is rarely an option.
You can apply an offset to one polygon so that it's slightly in front of the other. How big the offset needs to be is somewhat tricky, and depends on the depth precision and the transformations you apply. You can either play with various values, or try to mathematically analyze what difference in input coordinates you need to create different depth values.
You can use glPolygonOffset(). The effect and related challenges are fairly similar to option 2, but it makes things a little easier for you since you don't have to apply an offset to the coordinates yourself (see the sketch after this list).
In a simple case like this, you can draw just a blue frame around the white square, instead of drawing a whole blue square that overlaps the white square. You'll need a few more polygons to draw the "frame" shape (which will look like a large square with a hole the size of the white square), but this is still fairly easy to draw, more efficient, and it avoids the problem entirely.
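For option 3, a minimal sketch (the offset values are illustrative; they depend on your depth precision and may need tuning):
// draw the blue quad with a depth offset so it reliably loses the
// depth test where the white quad overlaps it
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f); // pushes this quad's depth values slightly further away
glColor3f(0, 0, 1);
glBegin(GL_QUADS);
glVertex3f(-1,  1, -1);
glVertex3f(-1, -1, -1);
glVertex3f(-1, -1,  1);
glVertex3f(-1,  1,  1);
glEnd();
glDisable(GL_POLYGON_OFFSET_FILL);
// then draw the white quad as before, without any offset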

2D LWJGL with OpenGL - Enabling depth testing for layering 2D textures?

What would I need to add to my OpenGL init method to enable depth testing, and how would I actually use it for texture layering?
I would have to extend the last parameter of glOrtho to something more extreme than -1, and of course glEnable depth testing. Then, to use it, I can only assume that I change the third parameter of glVertex to something that isn't 0 to send a quad in front of or behind other textures.
I tried this, and the textures don't even show. I must be missing something.
EDIT (re: Tim's response): whenever I made the image's z more extreme than -1, it didn't show; the screen was just black.
void initGL(){
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glEnable(GL11.GL_DEPTH_TEST); //depth test enabled
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glOrtho(-width/2, width/2, -height/2, height/2, 1, -10);//far changed to -10
GL11.glMatrixMode(GL11.GL_MODELVIEW);
}
and
void loadBG(int theLoadedOne){
GL11.glBindTexture(GL11.GL_TEXTURE_2D, theLoadedOne);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0,0);
GL11.glVertex3f(-width/2,height/2, -2);//new z value
GL11.glTexCoord2f(1,0);
GL11.glVertex3f(width/2,height/2,-2);//new z value
GL11.glTexCoord2f(1,1);
GL11.glVertex3f(width/2,-height/2,-2);//new z value
GL11.glTexCoord2f(0,1);
GL11.glVertex3f(-width/2,-height/2,-2);//new z value
GL11.glEnd();
GL11.glFlush();
}
and
while(!Display.isCloseRequested()){
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
...
for(int i=0;i<1;i++){ // don't mind this for loop
bg.loadThisBG(0); // it's here for reasons
}
updateFPS();
Display.update();
}
Display.destroy();
}
Seems like you switched the near and far planes. Have a look at gluOrtho2D: it just calls glOrtho with near=-1 and far=+1, so the z coordinate switches sign (m33 = -2/(far-near) = -1). However, with the values given above, m33 = -2/(-10-1) is positive, and the z axis is reversed relative to the standard convention.
As a consequence, the quad is viewed from the back.
OpenGL's matrix manipulation methods do not care what you feed them, except when the values would lead to a division by zero.
Assuming there is no modelview transform, and only this one matrix contributes to the projection, here is what I think is happening:
Take the third row of the orthographic matrix with near=1 and far=-10: the z transform from eye space to NDC space is z_ndc = (2/11) * z_w - 9/11. Now, with z_w = -2, z_ndc = -13/11. This is outside the NDC range [-1, 1], so the quad is clipped away.
Well, I assume that this test is implicitly enabled/disabled with the Z test itself. Next suspect would be backface culling...
Provided your context includes a depth buffer (I'm not sure about LWJGL's buffer creation...)
All you need should be:
Call glEnable(GL_DEPTH_TEST) during initialization
Add the depth buffer bit to glClear: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Give your geometry z coordinates inside the near/far range of the orthographic projection (with glOrtho, the visible eye-space range is -far ≤ z ≤ -near, so with near=1 and far=10 you would draw at z between -10 and -1).
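Putting it together, a minimal corrected setup might look like this sketch (assuming LWJGL's GL11 as in your code):
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
// near and far in the conventional order; visible eye-space z is then [-10, -1]
GL11.glOrtho(-width/2, width/2, -height/2, height/2, 1, 10);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
// per frame: clear depth along with color, then draw each layer with a z
// between -1 and -10, e.g. the background at z = -9 and sprites at z = -2
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);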

How to shade a specific section of a sprite

I have been working on an isometric Minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some lighting. It is difficult to distinguish between separate elevations because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block.
Well, it depends on how you do your lighting.
Basically, sprites are just textured quads made of two triangles.
Traditional vertex-based lighting (which is supported by the built-in but now deprecated functions) will just calculate the lighting for the 4 corners of that quad; everything else will be interpolated. This is quite fast but might result in the wrong lighting - especially with spot lights and big quads.
If you use directional lighting only, you might apply a normal map to your quads and thus influence lighting in a per-texel way, but that might still not be what you want.
The modern way would be to use shaders, i.e. the lighting is evaluated per-pixel. You'd then have to provide per-texel lighting information for your quad which is then used in the fragment/pixel shader.
Just to clarify, the meanings of some terms in this context:
per-texel: per pixel in the texture; those values might be interpolated
per-pixel: per output pixel, i.e. per screen pixel
Edit:
I just looked at your screenshot, and it seems you'll have to change the shade of the sprite's edges if the adjacent sprite is not on the same level. Assuming you already know which sprite edge should be visible (i.e. there's a level change at that edge), you might just change the shading of the vertices that form that edge.
If you don't use any lighting, you might just set the vertex color to white, and to some darker color for the vertices that need shading. Then multiply your texture color with the vertex color, which should result in darker edges; see the sketch below.
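A minimal sketch of that idea in immediate mode (LWJGL-style, as elsewhere on this page; x, y, w, h are hypothetical sprite placement values, and GL_MODULATE, the default texture environment, performs the multiply):
GL11.glBegin(GL11.GL_QUADS);
GL11.glColor3f(1f, 1f, 1f); // unshaded corner
GL11.glTexCoord2f(0, 0); GL11.glVertex3f(x, y, 0);
GL11.glColor3f(1f, 1f, 1f);
GL11.glTexCoord2f(1, 0); GL11.glVertex3f(x + w, y, 0);
GL11.glColor3f(0.5f, 0.5f, 0.5f); // darkened edge where the level changes
GL11.glTexCoord2f(1, 1); GL11.glVertex3f(x + w, y + h, 0);
GL11.glColor3f(0.5f, 0.5f, 0.5f);
GL11.glTexCoord2f(0, 1); GL11.glVertex3f(x, y + h, 0);
GL11.glEnd();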
Alternatively, if those levels have different depths (i.e. different z values), you could use some shader for edge detection (e.g. some SSAO implementation).
Edit 2:
If you use plain old vertex lighting, applying weighted normals might help. Basically, you calculate the weighted vertex normal from the normals of the triangles that share that vertex.
There are several methods of doing this, one being to weight the faces based on the angle at that vertex: multiply each face normal by that angle, add them together, and finally normalize the resulting normal.
The result of that calculation might be something like this (ASCII art):
|       |        /
|_______|________/
|  /    |        |
|/______|________|
Lines pointing up are the normals, the bottom lines would be your sprites in a side view.
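In code, the angle-weighted accumulation might look like this minimal sketch (the faceNormals and anglesAtVertex inputs are hypothetical; how you gather them depends on your mesh representation):
// Accumulate each adjacent face normal weighted by the corner angle it
// subtends at this vertex, then normalize the sum.
static float[] weightedVertexNormal(float[][] faceNormals, float[] anglesAtVertex) {
    float[] n = new float[3];
    for (int i = 0; i < faceNormals.length; i++) {
        n[0] += faceNormals[i][0] * anglesAtVertex[i];
        n[1] += faceNormals[i][1] * anglesAtVertex[i];
        n[2] += faceNormals[i][2] * anglesAtVertex[i];
    }
    float len = (float) Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    n[0] /= len;
    n[1] /= len;
    n[2] /= len;
    return n;
}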

GLU.gluLookAt in Java OpenGL bindings seems to do nothing

I've already checked the other questions on this topic and their solutions haven't worked for me. I'm at a bit of a loss. I have the following functions in my GLEventListener implementation.
public void init(GLAutoDrawable gl) {
GL2 gl2 = gl.getGL().getGL2();
gl2.glMatrixMode(GL2.GL_PROJECTION);
gl2.glLoadIdentity();
GLU glu = GLU.createGLU(gl2);
glu.gluPerspective(45.0f, 1, 0.1f,100.0f);
gl2.glMatrixMode(GL2.GL_MODELVIEW);
gl2.glLoadIdentity();
gl2.glViewport(0, 0, width, height);
gl2.glEnable(GL.GL_DEPTH_TEST);
}
private void render(GLAutoDrawable drawable) {
GL2 gl2 = drawable.getGL().getGL2();
GLU glu = GLU.createGLU(gl2);
gl2.glClear(GL.GL_COLOR_BUFFER_BIT);
gl2.glMatrixMode(GL2.GL_MODELVIEW);
gl2.glLoadIdentity();
glu.gluLookAt(5, 0, 20,
0, 30, 0,
0, 1, 0);
gl2.glPushMatrix();
gl2.glClear( GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT );
gl2.glLoadIdentity();
gl2.glTranslatef(x, y, z);
gl2.glBegin( GL2.GL_QUADS );
gl2.glColor3f( 1, 0, 0 );
//24 glVertex3f calls & some colour changes go here.
gl2.glVertex3f(...)
gl2.glEnd();
gl2.glPopMatrix();
gl2.glFlush();
}
It doesn't matter what values I pass to gluLookAt(); the view doesn't change. I still end up looking at the same face of a cube.
Any ideas?
Thanks
EDIT: Responding to the edit in the original question. Leaving the original text below because people seem to find it to be useful.
I think your problem is in your cube drawing code. Check the commentary below: the glLoadIdentity call is doing exactly what you would expect - forcing the cube to be there in front of you:
gl2.glPushMatrix();
gl2.glClear( GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT );
/** Try removing the following glLoadIdentity call below.
* That call was blowing out the MODELVIEW matrix - it's removing your
* gluLookAt call and returning to the identity.
* As a result, the cube will always be right there in front of you.
*/
// gl2.glLoadIdentity();
gl2.glTranslatef(x, y, z);
gl2.glBegin( GL2.GL_QUADS );
gl2.glColor3f( 1, 0, 0 ); //24 glVertex3f calls & some colour changes go here.
gl2.glVertex3f(...)
gl2.glEnd();
gl2.glPopMatrix();
Here's a very quick explanation about what the related calls will do. See the documentation for more information:
gl2.glPushMatrix(); // This preserves current MODEL_VIEW matrix so you can get back here.
// Think of it as a checkpoint save in a game.
// Most of your objects will be wrapped in push and pop.
gl2.glLoadIdentity(); // This erases the MODEL_VIEW and replaces it with an identity.
// This un-does your previous gluLookAt call. You will rarely use
// this inside an object (but it's not impossible).
// Does not apply here so don't use.
gl2.glTranslatef(x, y, z); // This is what puts your object out in space for you to find
// as opposed to putting it at the origin. Most objects will
// have a translate (and likely a rotate as well).
// Note that the order of operations matters:
// translate and then rotate != rotate and then translate.
// QUAD strip code with vertices and colors - you're okay with these.
gl2.glPopMatrix(); // This brings back the MODEL_VIEW that you originally saved by pushing
// it.
The great thing about the matrix code in OpenGL is that once you get a portfolio of example code that you understand, you'll always have it as a reference. When I switched from IrisGL to OpenGL back in the day, it took me a little while to port my utilities over and then I never looked back.
ORIGINAL: You need to add your cube drawing code - if you are putting the cube in the vicinity of (0, 30, 0), it's highly likely that the code is doing what you asked it to.
Checking the OpenGL FAQ, there's a specific question and answer that is likely relevant to what you're doing: 8.080 Why doesn't gluLookAt work? I'm going to quote the whole answer as there really isn't a good break but please visit the OpenGL FAQ, the answer is likely there:
This is usually caused by incorrect transformations.
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(50.0, 1.0, 3.0, 7.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,
          0.0, 0.0, 0.0,
          0.0, 1.0, 0.0);
It's important to note how the Projection and ModelView transforms work together.
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look-at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look-at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.
