What would I need to add to my OpenGL init method to enable depth testing, and how would I actually use it for texture layering?
My guess is that I would have to extend the last parameter of glOrtho to something more extreme than -1, and of course glEnable depth testing. Then, to actually use it, I can only assume that I change the third parameter of glVertex to something that isn't 0 to send a texture in front of or behind the others.
I tried this, and the textures don't even show any more. I must be missing something.
EDIT: Re: Tim's response
Whenever I made the image's z more extreme than -1, nothing showed; the screen was just black.
void initGL(){
    GL11.glEnable(GL11.GL_TEXTURE_2D);
    GL11.glEnable(GL11.GL_DEPTH_TEST); // depth test enabled
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glOrtho(-width/2, width/2, -height/2, height/2, 1, -10); // far changed to -10
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
}
and
void loadBG(int theLoadedOne){
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, theLoadedOne);
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(0, 0);
    GL11.glVertex3f(-width/2, height/2, -2);  // new z value
    GL11.glTexCoord2f(1, 0);
    GL11.glVertex3f(width/2, height/2, -2);   // new z value
    GL11.glTexCoord2f(1, 1);
    GL11.glVertex3f(width/2, -height/2, -2);  // new z value
    GL11.glTexCoord2f(0, 1);
    GL11.glVertex3f(-width/2, -height/2, -2); // new z value
    GL11.glEnd();
    GL11.glFlush();
}
and
while(!Display.isCloseRequested()){
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    ...
    for(int i = 0; i < 1; i++){ // don't mind this for loop
        bg.loadThisBG(0);       // it's here for reasons
    }
    updateFPS();
    Display.update();
}
Display.destroy();
}
Seems like you switched the near and far planes. Have a look at gluOrtho2D: it just calls glOrtho with near=-1 and far=+1, which makes the z coordinate switch sign (m33 = -2/(far-near)). With the values given above, however, m33 = -2/(-10-1) is positive, and the z axis is reversed compared to the standard setup.
As a consequence, the quad is viewed from the back.
OpenGL's matrix manipulation functions do not care what you feed them, except when the values would lead to a division by zero.
Assuming there is no modelview transform and only this one matrix contributes to the projection, here is what I think is happening:
Taking the third row of the orthographic matrix with near=1 and far=-10, the transform from eye space (= world space here) to NDC is z_ndc = 2/11 * z_w - 9/11. With z_w = -2 that gives z_ndc = -13/11, which lies outside the [-1, 1] NDC range, so the quad is clipped away.
Well, I assume that this test is implicitly enabled/disabled with the Z test itself. Next suspect would be backface culling...
Provided your context includes a depth buffer (not sure about LWJGL's buffer creation...), all you should need is the following (a short sketch follows the list):
Call glEnable(GL_DEPTH_TEST) during initialization.
Add the depth buffer bit to glClear: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Define the z coordinate to be between the near and far values of the orthographic projection.
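For what it's worth, here is a minimal sketch of an init consistent with the list above, using the same width/height fields as the code in the question (near = -1, far = 10 keeps eye-space z in [-10, +1], so quads at z = -2 stay inside the volume):
void initGL(){
    GL11.glEnable(GL11.GL_TEXTURE_2D);
    GL11.glEnable(GL11.GL_DEPTH_TEST);
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    // near < far, same convention as gluOrtho2D; z = -2 now lies between the planes
    GL11.glOrtho(-width/2, width/2, -height/2, height/2, -1, 10);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glLoadIdentity();
}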
I am attempting to render the reflection of some water.
To create the illusion of reflection, I need the camera to be below the water:
(Pictures not drawn by me)
Therefore, I need to move the camera under the water before rendering the scene: move it downward by twice the distance from the camera to the water, and then invert the camera's pitch.
The problem is that I am currently limited to the fixed-function pipeline, which means I must do this with glTranslatef() and glRotatef() calls.
Here's my current implementation:
public void createReflectionTexture(){
    EulerCamera c = Main.TerrainDemo.camera; // This is the camera
    float amtDown = -(2f * (c.y() - 300));   // Amount to move the camera down by; 300 is the height of the water
    float amtPit = -(c.pitch() * 2);         // Pitch to invert the camera by
    glRotatef(-amtPit, 0, 0, 1); // I have tried both negative and positive values here; neither seems to work
    glTranslatef(0, amtDown, 0);
    bindArrays();
    fbos.bindReflectionFrameBuffer();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    Main.TerrainDemo.renderVBOTerrain();
    fbos.unbindCurrentFrameBuffer();
    unBindArrays();
    glTranslatef(0, -amtDown, 0); // This puts things back to their original translation so I can keep rendering other objects
    glRotatef(amtPit, 0, 0, 1);
}
Unfortunately, in my game, this code produces a completely messy image that is obviously not the correct reflection:
If I crash my plane into the water and then set the camera's pitch to 0, it does appear to render the correct objects.
The refraction texture renders perfectly fine.
How can I correctly move the camera below the water to create a scene that looks like reflection when viewed from above?
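For reference, the mirroring described above can also be computed directly rather than by stacking glRotatef/glTranslatef calls. A minimal sketch of just the arithmetic (the setters are hypothetical, so use whatever EulerCamera actually exposes; 300 is the water height from the code above):
EulerCamera c = Main.TerrainDemo.camera;
float waterHeight = 300f;
float reflectedY = 2f * waterHeight - c.y(); // mirror the eye across the water plane
float reflectedPitch = -c.pitch();           // look up by as much as you were looking down
// hypothetical usage: apply before rendering the reflection FBO, restore afterwards
// c.setPosition(c.x(), reflectedY, c.z());
// c.setPitch(reflectedPitch);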
I am working on an OpenGL game in Java with LWJGL (ThinMatrix's tutorials at the moment) and I just added my skybox. As you can see from the picture, however, it is clipping through the trees and covering everything behind a certain point.
Here is my rendering code for the skybox:
public void render(Camera camera, float r, float g, float b) {
    shader.start();
    shader.loadViewMatrix(camera);
    shader.loadFogColor(r, g, b);
    GL30.glBindVertexArray(cube.getVaoID());
    GL20.glEnableVertexAttribArray(0);
    bindTextures();
    GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, cube.getVertexCount());
    GL30.glBindVertexArray(0);
    shader.stop();
}
private void bindTextures() {
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL13.GL_TEXTURE_CUBE_MAP, texture);
    GL13.glActiveTexture(GL13.GL_TEXTURE1);
    GL11.glBindTexture(GL13.GL_TEXTURE_CUBE_MAP, nightTexture);
    shader.loadBlendFactor(getBlendFactor());
}
Also, if it is needed, here is the code for my master renderer:
public void render(List<Light> lights, Camera camera){
    prepare();
    shader.start();
    shader.loadSkyColor(RED, GREEN, BLUE);
    shader.loadLights(lights);
    shader.loadViewMatrix(camera);
    renderer.render(entities);
    shader.stop();
    terrainShader.start();
    terrainShader.loadSkyColor(RED, GREEN, BLUE);
    terrainShader.loadLight(lights);
    terrainShader.loadViewMatrix(camera);
    terrainRenderer.render(terrains);
    terrainShader.stop();
    skyboxRenderer.render(camera, RED, GREEN, BLUE);
    terrains.clear();
    entities.clear();
}
There are two things you can do:
If you draw your skybox first, you can disable the depth test with glDisable(GL_DEPTH_TEST) or disable depth writes with glDepthMask(false). This prevents the skybox from writing depth values, so it will never end up in front of anything drawn later (a short LWJGL sketch of this follows below).
If you draw your skybox last, you can make it literally infinitely big by giving its vertices a w coordinate of 0. A vertex (x y z 0) means a vertex infinitely far away in the direction of the vector (x y z). To prevent clipping you have to enable depth clamping with glEnable(GL_DEPTH_CLAMP); this stops OpenGL from clipping away your skybox faces, and you can be sure the skybox is always at the maximum distance and will never hide anything you have drawn earlier.
The advantage of the second method is the depth test. Because depth values have already been written for your scene, the OpenGL pipeline can skip the calculation of skybox pixels that are already covered by the scene. But the fragment shader of a skybox is usually very trivial, so it shouldn't make much of a difference.
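In LWJGL terms the first option is roughly this (a sketch reusing the names from your master renderer, not a drop-in patch):
GL11.glDepthMask(false);                         // skybox writes no depth values
skyboxRenderer.render(camera, RED, GREEN, BLUE); // draw the skybox first
GL11.glDepthMask(true);                          // re-enable depth writes for the real scene
// ... then draw terrain and entities exactly as in your render() method ...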
I am not familiar with LWJGL: are you allowed to write shaders? In plain OpenGL you don't have to worry about the size of the skybox cube; it can be {1.0, 1.0, 1.0} if you like. What you need is to first place your camera at {0.0, 0.0, 0.0} and then make the skybox fail the depth test against everything else in your scene, which you can achieve by forcing the skybox's z value in normalized device coordinates to 1.0.
Do this in your vertex shader
gl_Position = (mvp_mat * vec4(xyz, 1.0)).xyww;
After the perspective divide by w, z will be w/w, i.e. 1.0.
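Worth noting: with z forced to 1.0 the skybox lands exactly on the cleared depth value, so the default GL_LESS comparison would reject it. The usual companion to this trick (a sketch, LWJGL-style calls assumed) is to relax the test while the skybox is drawn:
GL11.glDepthFunc(GL11.GL_LEQUAL); // let fragments at exactly the far depth (1.0) pass
// ... draw the skybox ...
GL11.glDepthFunc(GL11.GL_LESS);   // restore the default afterwards if the rest of the scene expects it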
You might want to check out How can I increase distance (zfar/gluPerspective) where openGL stops drawing objects?
The problem in that instance is that the skybox itself was too small and intersecting with the geometry.
I also see that you're rendering your terrain first, and then your skybox. I would try flipping the order there; draw the skybox first then the terrain.
First, you should remove the skybox and render the scene again to check whether it is the skybox that clips the trees.
If it is the skybox, simply scale it up so that it contains every object in the terrain.
If not, it is likely a problem with the camera (as Hanston said). You need to set the far clipping plane at least behind the skybox; that is, it should be larger than the diameter of your skybox.
If you want to scale the skybox, or any other object, use the transformation matrix. The game engine uses a 4x4 matrix to control the size, location and rotation of the model. You can see an example in TerrainRenderer.java, in the loadModelMatrix function: it creates a transformation matrix and uploads it to the shader. You should do the same thing, but change the scale parameter to what you want.
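A sketch of what that could look like with LWJGL 2's org.lwjgl.util.vector classes (which, as far as I remember, are what ThinMatrix's tutorials use); size is whatever scale you need, and loadTransformationMatrix stands in for whatever upload method your own shader class has:
import org.lwjgl.util.vector.Matrix4f;
import org.lwjgl.util.vector.Vector3f;

public static Matrix4f createScaleMatrix(float size) {
    Matrix4f matrix = new Matrix4f(); // new Matrix4f() starts out as the identity
    Matrix4f.scale(new Vector3f(size, size, size), matrix, matrix);
    return matrix;
}

// usage (hypothetical upload call):
// shader.loadTransformationMatrix(createScaleMatrix(500f));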
I have an app that renders a cube. I'm kind of new to using OpenGL for 3D stuff, but essentially what I want is for a corner of my cube to point north at all times, while also orienting itself according to the geomagnetic sensor.
That way, when the user has the phone parallel to the ground and faces north, the corner will point "up" on the screen, whereas if the user has the phone upright, the corner will point "away" from the user, toward the back of the phone.
I had no problem writing this in 2d on only one rotational axis so that it would point the corner north if the phone was parallel to the ground.
However, when I made this 3D, two of the axes seem to be working fine, but the axis I worked with the first time doesn't seem to behave the same way.
I use the following code to get the rotation for each:
gl.glPushMatrix();
gl.glTranslatef(0, 0, -4);

// get the target angles
targetAngle1 = rotationHandler.getRotation1();
targetAngle2 = rotationHandler.getRotation2();
targetAngle3 = rotationHandler.getRotation3(); // (this line was truncated in the original; getRotation3() assumed)

if (Math.abs(getShortestAngle(targetAngle1, currentAngle)) > 5) // 5 degree "dead zone" so the compass isn't shaky
    currentAngle = (getShortestAngle(currentAngle, targetAngle1) > 0) ?
            currentAngle + 1f : currentAngle - 1f; // step the current angle towards the target angle

if (Math.abs(getShortestAngle(targetAngle2, currentAngle2)) > 5)
    currentAngle2 = (getShortestAngle(currentAngle2, targetAngle2) > 0) ?
            currentAngle2 + 1f : currentAngle2 - 1f;

if (Math.abs(getShortestAngle(targetAngle3, currentAngle3)) > 5)
    currentAngle3 = (getShortestAngle(currentAngle3, targetAngle3) > 0) ?
            currentAngle3 + 1f : currentAngle3 - 1f;

gl.glRotatef(currentAngle, 0, 0, -4);
gl.glRotatef(currentAngle2, 0, -4, 0);
gl.glRotatef(currentAngle3, -4, 0, 0);

cube.draw(gl);
gl.glPopMatrix();
The calls to glRotatef that use currentAngle2 and currentAngle3 seem to rotate the cube on axes relative to the cube, while the first call seems to rotate it on an axis relative to the screen. When I comment out any two of the rotation calls, the third works as intended, but I can't figure out how to get all three to work together.
---EDIT---
I found that I could get the cube to rotate to almost any position possible even after taking away the first call. So it seems like I'm going to have to come up with an algorithm that will calculate the rotation as appropriate. I honestly don't know if I can do it but I'll post it here if I figure it out.
This might not be the best solution, but it seems to work as long as the phone isn't completely flat.
After changing the model to a rectangular prism, I could approach the problem a little more clearly. I used only the first two axes and didn't even touch the last one. I treated the prism like a fighter jet: in order to turn it left or right, I rotated it onto its side and then altered the pitch up or down. Then I "un-rotated" it and altered the pitch again to point it up or down as appropriate.
public void findTargetRotation(float rotationAngle, float pitchAngle, GL10 gl){
    // first, enable rotation around a vertical axis by rotating on the frontBack axis by -90 degrees
    gl.glRotatef(-90, 0, -4, 0);
    // rotate around the leftRight axis by the rotationAngle amount
    gl.glRotatef(rotationAngle, 0, -4, 0);
    // then rotate on the frontBack axis by 90 degrees again, to straighten out
    gl.glRotatef(90, 0, -4, 0);
    // lastly, rotate around the leftRight axis to point it up or down
    gl.glRotatef(pitchAngle, -4, 0, 0);
}
This seems to work as long as pitchAngle isn't extremely close to or equal to 0. So I just added a little more code to treat it like a 2D object if the pitchAngle is really small.
There's probably a better solution but at least this works.
I am writing a voxel engine, and at the moment I am working on the chunk rendering system, but I have a problem. It seems that the textures are being repeated on the quads: there is a green line at the bottom of the grass blocks, and I don't know why.
This is the OpenGL render code:
Texture texture = TextureManager.getTexture(block.getTextureNameForSide(Direction.UP));
texture.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2d(0, 0); GL11.glVertex3f(0, 1, 0);
GL11.glTexCoord2d(1, 0); GL11.glVertex3f(0, 1, 1);
GL11.glTexCoord2d(1, 1); GL11.glVertex3f(1, 1, 1);
GL11.glTexCoord2d(0, 1); GL11.glVertex3f(1, 1, 0);
GL11.glEnd();
And here is the OpenGL setup:
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glShadeModel(GL11.GL_SMOOTH);
GL11.glClearColor(0.1F, 0.4F, 0.6F, 0F);
GL11.glClearDepth(1F);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
GL11.glCullFace(GL11.GL_BACK);
GL11.glEnable(GL11.GL_CULL_FACE);
Make sure GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T are set to GL_CLAMP_TO_EDGE.
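In LWJGL that would be something like this, set wherever the texture is created or bound (textureId being whatever handle your TextureManager hands back; GL_CLAMP_TO_EDGE lives in GL12 because it was added in OpenGL 1.2):
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);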
genpfault's answer should do the trick for you; I just wanted to give you some insight into why you need this particular wrap state.
To be clear, the green line in your screenshot corresponds to the edges of one of your voxels?
It looks like you are using GL_LINEAR filtering (the default) together with an inappropriate texture wrap state (e.g. GL_REPEAT or GL_CLAMP). I will explain why GL_CLAMP is a bad idea later.
You may think that the texture coordinates 0.0 and 1.0 are perfectly within the normalized texture coordinate range and therefore not subject to wrapping, but you would be wrong.
This particular combination of states will pick up texels from the other side of your texture at either extreme of the [0,1] texture coordinate range. The texture coordinate 1.0 is actually slightly beyond the center of the last texel in your texture, so when GL fetches the 4 nearest texels for linear filtering, it wraps around to the other side of the texture for at least 2 of them.
GL_CLAMP_TO_EDGE modifies this behavior: it clamps the texture coordinates to a range that is actually more restrictive than [0,1], so that no coordinate goes beyond the center of any edge texel in your texture. Linear filtering will not pick up texels from the other side of your texture with this set. You could also (mostly) fix this by using GL_NEAREST filtering, but that results in a lot of texture aliasing.
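To put a number on "more restrictive than [0,1]": for a texture N texels wide, GL_CLAMP_TO_EDGE clamps the coordinate so the sample point never passes the centers of the edge texels, i.e.
s_clamped = min(max(s, 1/(2N)), 1 - 1/(2N))
so with a 16-texel-wide texture the usable range is roughly [0.03125, 0.96875].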
It is also possible that you are using GL_CLAMP, which, by the way, was removed in OpenGL 3.1. In older versions of GL it was designed to clamp the coordinates into the range [0,1], and then if linear filtering tried to fetch a texel beyond the edge it would use a special set of border texels rather than wrapping around. Border texels are no longer supported, and thus that wrap mode is gone.
The bottom line is: do not use GL_CLAMP; it does not do what most people think. GL_CLAMP_TO_EDGE is almost always what you really want when you think of clamping textures.
EDIT:
genpfault brings up a good point; this would be a lot easier to understand with a diagram...
The following diagram illustrates the problem in 1 dimension:
http://i.msdn.microsoft.com/dynimg/IC83860.gif
I have a more thorough explanation of this diagram in an answer I wrote to a similar issue.
I've already checked the other questions on this topic and their solutions haven't worked for me. I'm at a bit of a loss. I have the following functions in my GLEventListener implementation.
public void init(GLAutoDrawable gl) {
    GL2 gl2 = gl.getGL().getGL2();
    gl2.glMatrixMode(GL2.GL_PROJECTION);
    gl2.glLoadIdentity();
    GLU glu = GLU.createGLU(gl2);
    glu.gluPerspective(45.0f, 1, 0.1f, 100.0f);
    gl2.glMatrixMode(GL2.GL_MODELVIEW);
    gl2.glLoadIdentity();
    gl2.glViewport(0, 0, width, height);
    gl2.glEnable(GL.GL_DEPTH_TEST);
}
private void render(GLAutoDrawable drawable) {
    GL2 gl2 = drawable.getGL().getGL2();
    GLU glu = GLU.createGLU(gl2);
    gl2.glClear(GL.GL_COLOR_BUFFER_BIT);
    gl2.glMatrixMode(GL2.GL_MODELVIEW);
    gl2.glLoadIdentity();
    glu.gluLookAt(5, 0, 20,
                  0, 30, 0,
                  0, 1, 0);
    gl2.glPushMatrix();
    gl2.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
    gl2.glLoadIdentity();
    gl2.glTranslatef(x, y, z);
    gl2.glBegin(GL2.GL_QUADS);
    gl2.glColor3f(1, 0, 0);
    // 24 glVertex3f calls & some colour changes go here.
    gl2.glVertex3f(...)
    gl2.glEnd();
    gl2.glPopMatrix();
    gl2.glFlush();
}
It doesn't matter what values I pass to gluLookAt(); the view doesn't change. I still end up looking at the same face of a cube.
Any ideas?
Thanks
EDIT: Responding to the edit in the original question. Leaving the original text below because people seem to find it to be useful.
I think your problem is in your cube drawing code. Check the commentary below: the glLoadIdentity call is doing exactly what you would expect - forcing the cube to be there in front of you:
gl2.glPushMatrix();
gl2.glClear( GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT );
/** Try removing the following glLoadIdentity call below.
* That call was blowing out the MODELVIEW matrix - it's removing your
* gluLookAt call and returning to the identity.
* As a result, the cube will always be right there in front of you.
*/
// gl2.glLoadIdentity();
gl2.glTranslatef(x, y, z);
gl2.glBegin( GL2.GL_QUADS );
gl2.glColor3f( 1, 0, 0 ); //24 glVertex3f calls & some colour changes go here.
gl2.glVertex3f(...)
gl2.glEnd();
gl2.glPopMatrix();
Here's a very quick explanation about what the related calls will do. See the documentation for more information:
gl2.glPushMatrix(); // This preserves current MODEL_VIEW matrix so you can get back here.
// Think of it as a checkpoint save in a game.
// Most of your objects will be wrapped in push and pop.
gl2.glLoadIdentity(); // This erases the MODEL_VIEW and replaces it with an identity.
// This un-does your previous gluLookAt call. You will rarely use
// this inside an object (but it's not impossible).
// Does not apply here so don't use.
gl2.glTranslatef(x, y, z); // This is what puts your object out in space for you to find
// as opposed to putting it at the origin. Most objects will
// have a translate (and likely a rotate as well).
// Note that the order of operations matters:
// translate and then rotate != rotate and then translate.
// QUAD strip code with vertices and colors - you're okay with these.
gl2.glPopMatrix(); // This brings back the MODEL_VIEW that you originally saved by pushing
// it.
The great thing about the matrix code in OpenGL is that once you get a portfolio of example code that you understand, you'll always have it as a reference. When I switched from IrisGL to OpenGL back in the day, it took me a little while to port my utilities over and then I never looked back.
ORIGINAL: You need to add your cube drawing code - if you are putting the cube in the vicinity of (0, 30, 0), it's highly likely that the code is doing what you asked it to.
Checking the OpenGL FAQ, there's a specific question and answer that is likely relevant to what you're doing: 8.080 Why doesn't gluLookAt work? I'm going to quote the whole answer, as there isn't a good place to break it up, but please also visit the OpenGL FAQ for the original:
This is usually caused by incorrect transformations.
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(50.0, 1.0, 3.0, 7.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,
0.0, 0.0, 0.0,
0.0, 1.0, 0.0);
It's important to note how the Projection and ModelView transforms work together.
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look-at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look-at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.