I've written several Android apps, but this is my first experience with 3D programming.
I've created a room (4 walls, ceiling and floor) with a couple of objects inside, and I am able to move the camera around it as if walking. I've textured all surfaces with various images, and everything was working as expected.
For context, the room is 14 units wide and 16 units deep (centered at origin), 3 units high (1 above origin and 2 below). There are 2 objects in the middle of the room, a cube and an inverted pyramid on top of it.
Then I went to add a light source to shade the cube and pyramid. I had read through and followed a couple of NeHe's ports, so I took what I had working in the lesson on lighting and applied it to my new code.
gl.glEnable(GL10.GL_LIGHTING);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_AMBIENT, new float[] { 0.1f, 0.1f, 0.1f, 1f }, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_DIFFUSE, new float[] { 1f, 1f, 1f, 1f }, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, new float[] { -4f, 0.9f, 6f, 1f }, 0);
gl.glEnable(GL10.GL_LIGHT0);
The result is that the cube and pyramid are not shaded. They look the same on sides opposing the light as they do on the sides facing it. When the camera is pointed directly away from the light source the room looks as it did before I added the lighting code. As I rotate the camera to face the light source the entire room (including objects) becomes darker until completely black when the camera is directly facing the source.
What is going on here? I read many articles on lighting and how it works, but I have seen nothing to indicate why this wouldn't light up all sides of the room, with the cube and pyramid shaded based on the light position. Is there some expected behavior of the light because it is "inside" the room? Am I just missing something easy because I'm new?
Every surface in your 3D world has a normal, a vector that tells OpenGL how strongly the surface should reflect incoming light. You've probably forgotten to specify the normals for your surfaces. Without them, OpenGL lights every object in your world the same way.
To compute a surface's normal in 3D you need at least three vertices, which means the surface is at least a triangle.
A worked example:
To calculate a surface's normal you need two vectors that lie in the surface, and you can build those from three vertices. These sample points describe a triangle:
// Top triangle, three points in 3D space.
// (The points must not be collinear, or the cross product below
// degenerates to the zero vector.)
vertices = new float[] {
    -1.0f, 1.0f, -1.0f,
     1.0f, 1.0f, -1.0f,
     0.0f, 1.0f,  1.0f,
};
Given these three points, you can now define two vectors as follows:
// Simple vector class, created by you.
Vector3f vector1 = new Vector3f();
Vector3f vector2 = new Vector3f();
// vector1 = vertex0 - vertex1
vector1.x = vertices[0] - vertices[3];
vector1.y = vertices[1] - vertices[4];
vector1.z = vertices[2] - vertices[5];
// vector2 = vertex1 - vertex2
vector2.x = vertices[3] - vertices[6];
vector2.y = vertices[4] - vertices[7];
vector2.z = vertices[5] - vertices[8];
Now that you have two vectors, you can finally get the surface's normal using the cross product. In short, the cross product is an operation that produces a new vector perpendicular to both of its inputs; that perpendicular vector is the normal we need. Note that the operand order matters: B X A points in the opposite direction to A X B, so your vertex winding determines which side of the surface the normal faces.
To use the cross product in your code, write your own method that calculates it. The formula is:
A X B = (A.y * B.z - A.z * B.y, A.z * B.x - A.x * B.z, A.x * B.y - A.y * B.x)
In code (using the vectors above):
public Vector3f crossProduct(Vector3f vector1, Vector3f vector2) {
Vector3f normalVector = new Vector3f();
// Cross product. The normalVector contains the normal for the
// surface, which is perpendicular both to vector1 and vector2.
normalVector.x = vector1.y * vector2.z - vector1.z * vector2.y;
normalVector.y = vector1.z * vector2.x - vector1.x * vector2.z;
normalVector.z = vector1.x * vector2.y - vector1.y * vector2.x;
return normalVector;
}
One more comment before going further: you could simply hard-code your normals in an array and hand them to OpenGL when needed, but your understanding of this topic will be much better if you dig into it, and your code will be much more flexible.
So now we have a normal. Loop over your surfaces, assign each result to your normals array (like NeHe's ports, but built dynamically), and set up OpenGL to use GL_NORMAL_ARRAY so it can reflect the light off the objects correctly:
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
// I'm assuming you know how to put it into a FloatBuffer.
gl.glNormalPointer(GL10.GL_FLOAT, 0, mNormalsBuffer);
// Draw your surface...
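Since the comment above glosses over the buffer creation, here is one common way to build it (a sketch; normals is the hypothetical float array you filled while looping over your surfaces, using java.nio.ByteBuffer and FloatBuffer):

// Allocate a direct, native-order buffer (4 bytes per float) and copy the normals in.
ByteBuffer bb = ByteBuffer.allocateDirect(normals.length * 4);
bb.order(ByteOrder.nativeOrder());
FloatBuffer mNormalsBuffer = bb.asFloatBuffer();
mNormalsBuffer.put(normals);
mNormalsBuffer.position(0); // rewind so OpenGL reads from the first element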
One last comment: if you're using larger vertex values (like 5.0f, 10.0f or bigger), you may want to normalize the vector returned by crossProduct() to gain some performance. Otherwise OpenGL has to compute the unit vector itself, and that can become a performance issue.
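A minimal normalization sketch, reusing the homemade Vector3f from above (assumes a non-zero input vector):

public Vector3f normalize(Vector3f v) {
    // Divide each component by the vector's length to get a unit vector.
    float length = (float) Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    Vector3f unit = new Vector3f();
    unit.x = v.x / length;
    unit.y = v.y / length;
    unit.z = v.z / length;
    return unit;
}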
Also, a note on your new float[] {-4f, 0.9f, 6f, 1f} for GL_POSITION: the fourth (w) component selects between a positional and a directional light. With w = 1.0f the light is positional, placed at (-4, 0.9, 6) in the current modelview space, which is what you want for a lamp inside a room, so your array is fine. With w = 0.0f the first three values are instead treated as a direction, giving a directional light with parallel rays.
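To make the distinction concrete:

// Positional light (w = 1): a lamp sitting at (-4, 0.9, 6) inside the room.
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, new float[] { -4f, 0.9f, 6f, 1f }, 0);
// Directional light (w = 0): parallel rays arriving from the direction (-4, 0.9, 6).
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, new float[] { -4f, 0.9f, 6f, 0f }, 0);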
You need to reload the light position each frame, after applying the camera transform; otherwise the light source will move with the camera, which is probably not what you want. Also, the shading you are describing is entirely consistent with per-vertex (Gouraud) lighting. If you want something better you will have to do it per pixel (which means implementing your own shader), or else subdivide your geometry.
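A sketch of that ordering in an Android GLES 1.0 renderer (applyCameraTransform and drawRoom are hypothetical stand-ins for your own camera and drawing code):

public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    applyCameraTransform(gl); // hypothetical: your walk-around camera
    // Re-specify the position AFTER the camera transform so the light
    // stays fixed in world space instead of following the camera.
    gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, new float[] { -4f, 0.9f, 6f, 1f }, 0);
    drawRoom(gl); // hypothetical: draw the walls, cube and pyramid
}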
I am working on an OpenGL game in Java with LWJGL (ThinMatrix's tutorials at the moment) and I just added my skybox. As you can see from the picture, however, it is clipping through the trees and covering everything behind a certain point.
Here is my rendering code for the skybox:
public void render(Camera camera, float r, float g, float b) {
    shader.start();
    shader.loadViewMatrix(camera);
    shader.loadFogColor(r, g, b);
    GL30.glBindVertexArray(cube.getVaoID());
    GL20.glEnableVertexAttribArray(0);
    bindTextures();
    GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, cube.getVertexCount());
    GL30.glBindVertexArray(0);
    shader.stop();
}
private void bindTextures() {
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL13.GL_TEXTURE_CUBE_MAP, texture);
    GL13.glActiveTexture(GL13.GL_TEXTURE1);
    GL11.glBindTexture(GL13.GL_TEXTURE_CUBE_MAP, nightTexture);
    shader.loadBlendFactor(getBlendFactor());
}
Also, in case it's needed, here is the code for my master renderer:
public void render(List<Light> lights, Camera camera){
    prepare();
    shader.start();
    shader.loadSkyColor(RED, GREEN, BLUE);
    shader.loadLights(lights);
    shader.loadViewMatrix(camera);
    renderer.render(entities);
    shader.stop();
    terrainShader.start();
    terrainShader.loadSkyColor(RED, GREEN, BLUE);
    terrainShader.loadLight(lights);
    terrainShader.loadViewMatrix(camera);
    terrainRenderer.render(terrains);
    terrainShader.stop();
    skyboxRenderer.render(camera, RED, GREEN, BLUE);
    terrains.clear();
    entities.clear();
}
There are two things you can do:
If you draw your skybox first, you can disable the depth test with glDisable(GL_DEPTH_TEST) or depth writes with glDepthMask(false). This prevents the skybox from writing depth values, so it can never end up in front of anything drawn later.
If you draw your skybox last, you can make it literally infinitely big by giving its vertices a w-coordinate of 0. A vertex (x, y, z, 0) lies infinitely far away in the direction of the vector (x, y, z). To prevent clipping, enable depth clamping with glEnable(GL_DEPTH_CLAMP); this stops OpenGL from clipping away the skybox faces, and you can be sure the skybox is always at the maximum distance and never hides anything drawn earlier.
The advantage of the second method is the depth test. Because the scene's depth values have already been written, the OpenGL pipeline can skip shading the skybox pixels that are already covered by the scene. But the fragment shader for a skybox is usually trivial, so it shouldn't make much of a difference.
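A minimal sketch of the first option, using the LWJGL bindings and the renderer calls from the question (only the depth-mask calls are new; the rest reuses the asker's master renderer names):

GL11.glDepthMask(false);                         // stop writing depth values
skyboxRenderer.render(camera, RED, GREEN, BLUE); // skybox fills the background
GL11.glDepthMask(true);                          // re-enable depth writes for the scene
renderer.render(entities);                       // then draw everything else as before
terrainRenderer.render(terrains);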
I am not familiar with LWJGL; are you allowed to write shaders? In plain OpenGL you don't have to worry about the size of the skybox cube; it can be {1.0, 1.0, 1.0} if you like. What you need to do is first place your camera at {0.0, 0.0, 0.0}, then make the skybox fail the depth test against everything else in your scene. You can achieve that by forcing the skybox's z value in normalized device coordinates to 1.0.
Do this in your vertex shader:
gl_Position = (mvp_mat * vec4(xyz, 1.0)).xyww;
After the perspective divide by w, z will be w / w, i.e. 1.0.
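One caveat with this trick (shown with LWJGL's GL11 bindings): a freshly cleared depth buffer is filled with 1.0, and the default GL_LESS comparison rejects fragments at exactly 1.0, so relax the depth test while the skybox draws:

GL11.glDepthFunc(GL11.GL_LEQUAL); // let z == 1.0 pass against the cleared buffer
// ... draw the skybox ...
GL11.glDepthFunc(GL11.GL_LESS);   // restore the default afterwards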
You might want to check out How can I increase distance (zfar/gluPerspective) where openGL stops drawing objects?
The problem in that instance was that the skybox itself was too small and was intersecting with the geometry.
I also see that you're rendering your terrain first and then your skybox. I would try flipping that order: draw the skybox first, then the terrain.
First, remove the skybox and render the scene again to check whether it really is the skybox that clips the trees.
If it is the skybox, simply scale the skybox up so that it contains every object in the terrain.
If not, the problem is likely the camera (as Hanston said): you need to set the far clipping plane at least behind the skybox, that is, farther away than the diameter of your skybox.
If you want to scale the skybox, or any other object, use the transformation matrix. The game engine uses a 4x4 matrix to control the size, location and rotation of a model. You can see an example in TerrainRenderer.java, in the loadModelMatrix function: it creates a transformation matrix and uploads it to the shader. Do the same thing, but change the scale parameter to what you want.
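If your code follows ThinMatrix's helpers, a sketch might look like this (Maths.createTransformationMatrix and loadTransformationMatrix are the tutorial-style names; substitute whatever your shader class actually exposes):

// Scale-only transform: no translation, no rotation, a large uniform scale.
Matrix4f transform = Maths.createTransformationMatrix(
        new Vector3f(0, 0, 0), // translation
        0, 0, 0,               // rotation around x, y, z
        500f);                 // scale: large enough to enclose the terrain
shader.loadTransformationMatrix(transform);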
I'm rendering a simple rectangle mesh using libgdx, along with other similarly simple geometric elements. These are going to interact with the sprites I have set up in my game. The sprites' positions and other properties are set up in world units, and before each sprite draw session I set up the camera like this:
camera.update();
batch.setProjectionMatrix(camera.combined);
It all works well, but I need to draw the meshes using world units too. How can I feed the shader program world coordinates (12.5f, 30f, etc., based on my game world data) instead of values in the (0f, 1f) range? I want to draw several textured meshes, so I need coordinates that are in relation to the other elements in the game.
Here is how I draw a simple rectangle mesh :
mesh = new Mesh(true, 4, 6,
        new VertexAttribute(Usage.Position, 3, "a_position"),
        new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoords"));
mesh.setVertices(new float[] {
        -1.0f, -1.0f, 0, 0, 1,
         0.0f, -1.0f, 0, 1, 1,
         0.0f,  0.0f, 0, 1, 0,
        -1.0f,  0.0f, 0, 0, 0 });
mesh.setIndices(new short[] { 0, 1, 2, 2, 3, 0 });
Gdx.graphics.getGL20().glEnable(GL20.GL_TEXTURE_2D);
Gdx.gl20.glActiveTexture(GL20.GL_TEXTURE);
createShader();
shader.begin();
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Is there any way I can feed world units to the mesh vertices array?
You can transform the vertices in the vertex shader. This allows you to project world coordinates onto the -1 to 1 range required for rendering, and it is typically done by multiplying the position vertex attribute with a (projection) matrix. Have a look at the default SpriteBatch shader for an example of how to implement this.
You can use the camera.combined matrix to multiply these vertices in the vertex shader, just as you did when specifying the projection matrix for the SpriteBatch. You'll have to assign this matrix to the uniform you've used in your vertex shader; an example of how to do this can also be found in the default SpriteBatch implementation.
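For example (a sketch assuming your vertex shader declares a u_projTrans uniform, the name the default SpriteBatch shader uses):

shader.begin();
// Upload the camera's combined projection * view matrix, so the vertex
// shader can do: gl_Position = u_projTrans * a_position;
shader.setUniformMatrix("u_projTrans", camera.combined);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();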
However, you might want to reconsider your approach. Since you're using a SpriteBatch, you can gain performance by using the SpriteBatch instead of rendering manually. This will also simplify the rendering for you, because you don't have to mess with the shader and matrices yourself. SpriteBatch contains a method (javadoc) which allows you to specify a manually created mesh (or vertices, actually). Each vertex is expected to be 5 floats (x, y, u, v, color) in size, and a multiple of four vertices (it doesn't have to be a rectangle shape, though) must be provided (you can use Color.WHITE.toFloatBits() for the color).
But since you're trying to render a simple rectangle, you might as well use one of the more convenient methods that let you render a rectangle without creating a mesh altogether (javadocs). Or, even easier, use it as it is designed by creating a Sprite for your rectangle (wiki page).
Now, if you're still certain that you want to create a mesh and shader manually, then I'd suggest learning how with a tutorial instead of just diving into it. E.g. these tutorials might help you get started. The wiki also contains an article describing how to do this, including transforming vertices in the vertex shader.
For a few days now I've been trying to implement quaternion rotation for Android OpenGL ES. I would like a function that takes a quaternion (x, y, z, w) as input and sets the rotation of a GL10 scene. GL10 only provides calls like gl.glRotatef(y, 1.0f, 0.0f, 0.0f), which set the orientation through Euler angles, one axis at a time. I tried the Quaternion class at https://github.com/TraxNet/ShadingZen/blob/master/library/src/main/java/org/traxnet/shadingzen/math/Quaternion.java to create a matrix, but it still doesn't work. I would be grateful if someone could show me how to set the rotation of a GL10 scene from a quaternion, e.g. a setRotation(Quaternion q) helper.
glRotatef is just a multiplication of the current matrix with a rotation matrix (plus associated boundary checks).
One way to do this in OpenGL 1 (using the linked Quaternion class) is:
Matrix rotation = new Matrix();
quaternion.toMatrix(rotation);
gl.glMultMatrixf(rotation.getAsArray(), 0); // multiply onto the current modelview matrix
Do note that glRotate and glTranslate are slower than doing the Matrix math yourself and using glLoadMatrix. In general, if performance is important, I'd advise against using OpenGL 1 entirely.
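If you'd rather not depend on the linked class at all, the conversion itself is short. A self-contained sketch (assumes a unit quaternion; the array is column-major, as glMultMatrixf expects):

// Build a 4x4 column-major rotation matrix from a unit quaternion.
public static float[] quaternionToMatrix(float x, float y, float z, float w) {
    return new float[] {
            1 - 2*(y*y + z*z), 2*(x*y + z*w),     2*(x*z - y*w),     0, // column 0
            2*(x*y - z*w),     1 - 2*(x*x + z*z), 2*(y*z + x*w),     0, // column 1
            2*(x*z + y*w),     2*(y*z - x*w),     1 - 2*(x*x + y*y), 0, // column 2
            0,                 0,                 0,                 1  // column 3
    };
}
// Usage: gl.glMultMatrixf(quaternionToMatrix(q.x, q.y, q.z, q.w), 0);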
I've already checked the other questions on this topic and their solutions haven't worked for me. I'm at a bit of a loss. I have the following functions in my GLEventListener implementation.
public void init(GLAutoDrawable gl) {
GL2 gl2 = gl.getGL().getGL2();
gl2.glMatrixMode(GL2.GL_PROJECTION);
gl2.glLoadIdentity();
GLU glu = GLU.createGLU(gl2);
glu.gluPerspective(45.0f, 1, 0.1f,100.0f);
gl2.glMatrixMode(GL2.GL_MODELVIEW);
gl2.glLoadIdentity();
gl2.glViewport(0, 0, width, height);
gl2.glEnable(GL.GL_DEPTH_TEST);
}
private void render(GLAutoDrawable drawable) {
    GL2 gl2 = drawable.getGL().getGL2();
    GLU glu = GLU.createGLU(gl2);
    gl2.glClear(GL.GL_COLOR_BUFFER_BIT);
    gl2.glMatrixMode(GL2.GL_MODELVIEW);
    gl2.glLoadIdentity();
    glu.gluLookAt(5, 0, 20,
            0, 30, 0,
            0, 1, 0);
    gl2.glPushMatrix();
    gl2.glClear( GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT );
    gl2.glLoadIdentity();
    gl2.glTranslatef(x, y, z);
    gl2.glBegin( GL2.GL_QUADS );
    gl2.glColor3f( 1, 0, 0 );
    //24 glVertex3f calls & some colour changes go here.
    gl2.glVertex3f(...)
    gl2.glEnd();
    gl2.glPopMatrix();
    gl2.glFlush();
}
It doesn't matter what values I put into the gluLookAt() call; the view doesn't change. I still end up looking at the same face of a cube.
Any ideas?
Thanks
EDIT: Responding to the edit in the original question. Leaving the original text below because people seem to find it to be useful.
I think your problem is in your cube drawing code. Check the commentary below: the glLoadIdentity call is doing exactly what you would expect - forcing the cube to be there in front of you:
gl2.glPushMatrix();
gl2.glClear( GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT );
/** Try removing the following glLoadIdentity call below.
* That call was blowing out the MODELVIEW matrix - it's removing your
* gluLookAt call and returning to the identity.
* As a result, the cube will always be right there in front of you.
*/
// gl2.glLoadIdentity();
gl2.glTranslatef(x, y, z);
gl2.glBegin( GL2.GL_QUADS );
gl2.glColor3f( 1, 0, 0 ); //24 glVertex3f calls & some colour changes go here.
gl2.glVertex3f(...)
gl2.glEnd();
gl2.glPopMatrix();
Here's a very quick explanation about what the related calls will do. See the documentation for more information:
gl2.glPushMatrix(); // This preserves current MODEL_VIEW matrix so you can get back here.
// Think of it as a checkpoint save in a game.
// Most of your objects will be wrapped in push and pop.
gl2.glLoadIdentity(); // This erases the MODEL_VIEW and replaces it with an identity.
// This un-does your previous gluLookAt call. You will rarely use
// this inside an object (but it's not impossible).
// Does not apply here so don't use.
gl2.glTranslatef(x, y, z); // This is what puts your object out in space for you to find
// as opposed to putting it at the origin. Most objects will
// have a translate (and likely a rotate as well).
// Note that the order of operations matters:
// translate and then rotate != rotate and then translate.
// QUAD strip code with vertices and colors - you're okay with these.
gl2.glPopMatrix(); // This brings back the MODEL_VIEW that you originally saved by pushing
// it.
The great thing about the matrix code in OpenGL is that once you get a portfolio of example code that you understand, you'll always have it as a reference. When I switched from IrisGL to OpenGL back in the day, it took me a little while to port my utilities over and then I never looked back.
ORIGINAL: You need to add your cube drawing code - if you are putting the cube in the vicinity of (0, 30, 0), it's highly likely that the code is doing what you asked it to.
Checking the OpenGL FAQ, there's a specific question and answer that is likely relevant to what you're doing: 8.080 Why doesn't gluLookAt work? I'm going to quote the whole answer as there really isn't a good break but please visit the OpenGL FAQ, the answer is likely there:
This is usually caused by incorrect transformations.

Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.

It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(50.0, 1.0, 3.0, 7.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,
          0.0, 0.0, 0.0,
          0.0, 1.0, 0.0);

It's important to note how the Projection and ModelView transforms work together.

In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.

The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look-at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.

Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look-at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.

If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.
I have a simple OpenGL app that displays arbitrary 3D models, and I'd like to implement zoom. What I have now uses glScale, and it works to some degree. However, I'm having two issues.
Any sort of zoom in (+) quickly reaches the point where the edges of the object fall inside the near clipping plane. Right now my zNear is around 0.1, so it makes sense that increasing the scale of the object causes clipping. I'm wondering whether there are other approaches that would achieve a better effect.
As I zoom in, the object gets dimmer; zoom out and it gets brighter. My lighting is very simple: a single light positioned at (0, 0, 100), using only diffuse.
gl.glEnable(GL10.GL_LIGHTING);
gl.glEnable(GL10.GL_LIGHT0);
gl.glEnable(GL10.GL_COLOR_MATERIAL);
float[] lights;
lights = new float[] { 0f, 0f, 0f, 1f };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_AMBIENT, lights, 0);
lights = new float[] { 1f, 1f, 1f, 1f };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_DIFFUSE, lights, 0);
lights = new float[] { 0f, 0f, 0f, 1f };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_SPECULAR, lights, 0);
float matAmbient[] = { 0f, 0f, 0f, 1f };
float matDiffuse[] = { 1f, 1f, 1f, 1f };
float matSpecular[] = { 0f, 0f, 0f, 1f };
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, matAmbient, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, matDiffuse, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, matSpecular, 0);
float lightPosition[] = { mesh.mid.vertex[X], mesh.mid.vertex[Y], 100f, 1f };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPosition, 0);
I do not have attenuation settings, which I believe would need to be enabled for the light to be affected by distance. Regardless, I'm not changing the distance of the object, just scaling it. Sure, the positions of the faces change, but not significantly. Anyway, I'd think zooming in would make the object brighter, not dimmer.
This happens to be OpenGL ES 1.0 on the Android platform.
Scaling will change the way your object is lit, as the normals are also scaled (which, as you pointed out in your own answer, can be corrected with a call to glEnable(GL_NORMALIZE)). Note that depending on when they are specified, the lights themselves may not undergo the equivalent transformation:
http://www.opengl.org/resources/faq/technical/lights.htm
A light's position is transformed by the current ModelView matrix at the time the position is specified with a call to glLight*().
Depending on the kind of zoom effect you want, you could achieve your zoom in different ways.
If you literally want to zoom in the way that a zoom lens on a camera does, you can change the field-of-view parameter passed to gluPerspective. This gives the effect of flattened or exaggerated perspective, just as with a real camera.
What typical applications more commonly want is to change the position of the camera in relation to the object. The simplest way to do this is with gluLookAt.
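A minimal sketch of both styles with the Android GL10/GLU bindings (zoom, aspect and baseDistance are hypothetical values from your own app; zoom = 1.0 means no zoom):

// Lens zoom: narrow the field of view on the projection matrix.
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluPerspective(gl, 45f / zoom, aspect, 0.1f, 100f);
// Dolly zoom: move the eye closer to the model on the modelview matrix.
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl,
        0f, 0f, baseDistance / zoom, // eye: closer as zoom grows
        0f, 0f, 0f,                  // center: the model's midpoint
        0f, 1f, 0f);                 // up vector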
Beware of the difference between the projection and modelview matrices: changing perspective should be done on the projection matrix, while positioning the camera should affect the modelview matrix. See http://www.sjbaker.org/steve/omniv/projection_abuse.html
NB: I've just realised that the OpenGL ES version you're using might not support those exact functions, but you should be able to find out how to achieve the same results quite easily.
The partial answer to #2 is that I was not scaling my normals. In other words, the previously normalized normal values have a greater (relative) magnitude when the object is scaled smaller, causing the faces to reflect more light ... and vice versa.
You can enable a setting,
glEnable(GL_NORMALIZE)
and it solves the problem, at the expense of some extra calculation. The full story is here:
http://www.opengl.org/resources/features/KilgardTechniques/oglpitfall/
(see #16).
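In the Android GL10 renderer from the question, the equivalent call is:

gl.glEnable(GL10.GL_NORMALIZE); // renormalize normals after scaling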