Feed Shader Program with World Units instead of (0f,1f) values - java

I'm rendering a simple rectangle mesh using libGDX, along with other similarly simple geometric elements. These are going to interact with the sprites I have set up in my game. The sprites' positions and other properties are set up in world units, and before each sprite draw session I set up the camera like this:
camera.update();
batch.setProjectionMatrix(camera.combined);
It all works well, but I need to draw the meshes using world units too. How can I feed the shader program world coordinates (12.5f, 30f, etc., based on my game world data) instead of values in the (0f, 1f) range? I want to draw several textured meshes, so I need coordinates that are in relation with the other elements in the game.
Here is how I draw a simple rectangle mesh:
mesh = new Mesh(true, 4, 6,
        new VertexAttribute(Usage.Position, 3, "a_position"),
        new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoords"));
mesh.setVertices(new float[] {
        -1.0f, -1.0f, 0, 0, 1,
         0.0f, -1.0f, 0, 1, 1,
         0.0f,  0.0f, 0, 1, 0,
        -1.0f,  0.0f, 0, 0, 0 });
mesh.setIndices(new short[] { 0, 1, 2, 2, 3, 0 });
Gdx.graphics.getGL20().glEnable(GL20.GL_TEXTURE_2D);
Gdx.gl20.glActiveTexture(GL20.GL_TEXTURE);
createShader();
shader.begin();
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Is there any way I can feed world units to the mesh vertices array?

You can transform the vertices in the vertex shader. This allows you to project world coordinates onto the -1 to 1 range required for rendering. It is typically done by multiplying the position vertex attribute with a (projection) matrix. Have a look at the default SpriteBatch shader for an example of how to implement this.
You can use the camera.combined matrix to multiply these vertices in the vertex shader, just like you did when specifying the projection matrix for the SpriteBatch. You'll have to assign this matrix to the uniform you've used in your vertex shader. An example of how to do this can also be found in the default SpriteBatch implementation.
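For example, here is a minimal sketch of what that could look like (the uniform name u_projTrans, the sampler name u_texture and the texture variable are just names I'm assuming here, not anything mandated by libGDX):
String vertexShader =
      "attribute vec3 a_position;\n"
    + "attribute vec2 a_texCoords;\n"
    + "uniform mat4 u_projTrans;\n"
    + "varying vec2 v_texCoords;\n"
    + "void main() {\n"
    + "    v_texCoords = a_texCoords;\n"
    + "    gl_Position = u_projTrans * vec4(a_position, 1.0);\n"
    + "}";
String fragmentShader =
      "#ifdef GL_ES\n"
    + "precision mediump float;\n"
    + "#endif\n"
    + "varying vec2 v_texCoords;\n"
    + "uniform sampler2D u_texture;\n"
    + "void main() {\n"
    + "    gl_FragColor = texture2D(u_texture, v_texCoords);\n"
    + "}";
ShaderProgram shader = new ShaderProgram(vertexShader, fragmentShader);

// Each frame: feed the camera's combined matrix to the uniform and render
// the mesh, whose vertices can now be specified directly in world units.
camera.update();
texture.bind();
shader.begin();
shader.setUniformMatrix("u_projTrans", camera.combined);
shader.setUniformi("u_texture", 0);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
With this in place, the mesh's vertex positions (the first three floats per vertex) can be given in the same world units as your sprites, e.g. 12.5f, 30f.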
However, you might want to reconsider your approach. Since you're already using a SpriteBatch, you can benefit from a performance gain by rendering through the SpriteBatch instead of rendering manually. This will also simplify the rendering for you, because you don't have to mess with the shader and matrices yourself. SpriteBatch contains a method (javadoc) which allows you to specify a manually created mesh (or its vertices, actually). Each vertex is expected to be 5 floats (x, y, color, u, v) in size, and a multiple of four vertices (it doesn't have to be a rectangle shape, though) must be provided (you can use Color.WHITE.toFloatBits() for the color).
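As a rough sketch (the texture variable and the world-unit coordinates below are placeholder values), drawing a textured quad in world units through the existing batch could look like this:
float color = Color.WHITE.toFloatBits();
float[] vertices = new float[] {
    // x,     y,    color, u,  v
    12.5f, 30.0f, color, 0f, 1f,
    14.5f, 30.0f, color, 1f, 1f,
    14.5f, 32.0f, color, 1f, 0f,
    12.5f, 32.0f, color, 0f, 0f
};
batch.setProjectionMatrix(camera.combined);
batch.begin();
batch.draw(texture, vertices, 0, vertices.length);
batch.end();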
But, since you're trying to render a simple rectangle, you might as well use one of the more convenient methods that lets you render a rectangle without having to create a mesh altogether (javadocs). Or, even easier, use it the way it is designed and create a Sprite for your rectangle (wiki page).
Now, if you're still certain that you do want to create a mesh and shader manually, then I'd suggest learning how from a tutorial instead of just diving into it. E.g. these tutorials might help you get started. The wiki also contains an article describing how to do this, including transforming vertices in the vertex shader.

Related

JOGL - translate GL_QUADS

In JOGL I'm trying to create a few 3D shapes using GL_QUADS (i.e. different components of a whole object), and so far it's been fine to do this, but I can't figure out how to translate a shape. There must be a way to do this, but I'm not very familiar with GL_QUADS, so I'm not entirely sure how to go about it. Editing gl.glVertex3f just results in the shape being a different size, which seems to be the only thing I can change. Is it possible to give a GL_QUAD a variable name?
You can use the glTranslatef function:
// render the shape
gl.glTranslatef(5.0f, 0.0f, 0.0f); // translate along x, y, z
// render the shape - you will now have two shapes next to each other
Calling gl.glTranslatef(1.0f, 0.0f, 0.0f); applies to the current matrix on the stack, effectively meaning that whatever you draw from then on will appear 1 unit along the x axis from wherever the matrix was before (probably the origin in your case).
I can see why it might seem confusing: rather than creating the shape and then moving it (which can't be done, since it has already been drawn), you'll want to translate first and then draw your shape.
For example:
gl.glPushMatrix();
gl.glTranslatef(1.0f, 0.0f, 0.0f);
gl.glBegin(GL2.GL_QUADS);
// draw some vertices here
gl.glEnd();
gl.glPopMatrix();

How would I auto texture map a sphere in OpenGL?

Which parameters, especially for s and t, would I use to texture map a sphere? I've tried various different options, but there's always a little portion that's distorted no matter what values I choose for float[] s and t. I can do planes and cylinders, but I'm not sure about spheres. Any help would be appreciated.
gl.glTexGeni(GL2.GL_T, GL2.GL_TEXTURE_GEN_MODE, GL2.GL_OBJECT_LINEAR);
float[] s = {1f, 0f, 0f, 0};
gl.glTexGenfv(GL2.GL_S, GL2.GL_OBJECT_PLANE, s, 0);
float[] t = {0f, 1f, 0f, 0};
gl.glTexGenfv(GL2.GL_T, GL2.GL_OBJECT_PLANE, s, 0);
You cannot texture map a sphere using a single rectangular texture without creating some serious distortion somewhere. It's mathematically impossible. That being said, you should not use the glTexGen functionality for this either, because a) it has been deprecated, and b) it only creates linear planar mappings, whereas texturing a sphere requires curvilinear coordinates. Use a vertex shader to generate the texture coordinates from the vertex position.
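For example, here is a rough sketch (untested, and assuming a unit sphere centred at the origin) of a fixed-function-style GLSL vertex shader, written as a Java string the way you would pass it to glShaderSource, that derives spherical s/t coordinates from the vertex position; the fragment shader would then sample the texture with v_texCoord:
String sphereVertexShader =
      "varying vec2 v_texCoord;\n"
    + "void main() {\n"
    + "    vec3 p = normalize(gl_Vertex.xyz);\n"
    + "    // longitude maps to s, latitude maps to t\n"
    + "    v_texCoord.s = 0.5 + atan(p.z, p.x) / (2.0 * 3.14159265);\n"
    + "    v_texCoord.t = 0.5 - asin(p.y) / 3.14159265;\n"
    + "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    + "}";
Even with this mapping you still get a seam where s wraps around and pinching at the poles, which is the unavoidable distortion mentioned above.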

Java OpenGL Camera

I've started with JOGL lately. I know how to create and draw objects on the canvas, but I couldn't find a tutorial or explanation of how to set and rotate the camera.
I only found source code, but since I'm quite new to this, it doesn't help much.
Does anyone know of a good tutorial or place to start? I googled but couldn't find anything (only for JOGL 1.5, and I'm using 2.0).
UPDATE
As datenwolf points out, my explanation is tied to the OpenGL 2 pipeline, which has been superseded. This means you have to do your own manipulation from world space into screen space if you want to eschew the deprecated methods. Sadly, this little footnote hasn't gotten around to being attached to every last bit of OpenGL sample code or commentary in the universe yet.
Of course, I don't know why it's necessarily a bad thing to use the existing GL2 pipeline before picking a library to do the same, or building one yourself.
ORIGINAL
I'm playing around with JOGL myself, though I have some limited prior experience with OpenGL. OpenGL uses two matrices to transform all the 3D points you pass through it from 3D model space into 2D screen space: the Projection matrix and the ModelView matrix.
The projection matrix is designed to compensate for the translation between the 3D world and the 2D screen, projecting a higher-dimensional space onto a lower-dimensional one. You can get lots more details by Googling gluPerspective, which is a function in the GLU toolkit for setting that matrix.
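For instance, a minimal sketch (the field of view, aspect ratio and clip planes are just example values) of setting that matrix up with JOGL's GLU helper:
GLU glu = new GLU();
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluPerspective(45.0, 800.0 / 600.0, 0.1, 100.0); // fovY, aspect, near, far
gl.glMatrixMode(GL2.GL_MODELVIEW);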
The ModelView[1] matrix, on the other hand, is responsible for transforming items' 3D coordinates from scene space into view (or camera) space. How exactly this is done depends on how you're representing the camera. Three common ways of representing the camera are:
A vector for the position, a vector for the target of the camera, and a vector for the 'up' direction
A vector for the position plus a quaternion for the orientation (plus perhaps a single floating point value for scale, or leave scale set to 1)
A single 4x4 matrix containing position, orientation and scale
Whichever one you use will require you to write code that translates the representation into something you can give to the OpenGL methods to set up the ModelView matrix, as well as code that translates user actions into modifications of the camera data.
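For the first representation (position, target, up), that translation can be as simple as handing the three vectors to GLU; a small sketch with example values:
GLU glu = new GLU();
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
glu.gluLookAt(0f, 2f, 5f,   // camera position
              0f, 0f, 0f,   // point the camera looks at
              0f, 1f, 0f);  // "up" direction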
There are a number of demos in JOGL-Demos and JOCL-Demos that involve this kind of manipulation. For instance, this class is designed to act as a kind of primitive camera which can zoom in and out and rotate around the origin of the scene, but cannot turn otherwise. It's therefore represented as only 3 floats: an X and Y rotation and a Z distance. It applies its transform to the ModelView something like this[2]:
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, z);
gl.glRotatef(rotx, 1f, 0f, 0f);
gl.glRotatef(roty, 0f, 1.0f, 0f);
I'm currently experimenting with a Quaternion+Vector+Float based camera using the Java Vecmath library, and I apply my camera transform like this:
Quat4d orientation;
Vector3d position;
double scale;
...
public void applyMatrix(GL2 gl) {
    Matrix4d matrix = new Matrix4d(orientation, position, scale);
    // glLoadMatrixd expects column-major order, hence the transposed indices.
    double[] glmatrix = new double[] {
        matrix.m00, matrix.m10, matrix.m20, matrix.m30,
        matrix.m01, matrix.m11, matrix.m21, matrix.m31,
        matrix.m02, matrix.m12, matrix.m22, matrix.m32,
        matrix.m03, matrix.m13, matrix.m23, matrix.m33,
    };
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadMatrixd(glmatrix, 0);
}
[1]: The reason it's called the ModelView and not just the View matrix is that you can actually push and pop matrices on the ModelView stack (this is true of all OpenGL transformation matrices, I believe). Typically you either have a full stack of matrices representing the various transformations of items relative to one another in the scene graph, with the bottom one representing the camera transform, or you have a single camera transform and keep everything in the scene graph in world-space coordinates (which kind of defeats the point of having a scene graph, but whatever).
[2]: In practice you wouldn't see the calls to gl.glMatrixMode(GL2.GL_MODELVIEW); in the code, because the GL state machine is simply left in MODELVIEW mode all the time unless you're actively setting the projection matrix.
"but I couldn't find tutorial or explanations on how to set and rotate the camera"
Because there is none. OpenGL is not a scene graph. It's mostly a sophisticated canvas plus simple point, line and triangle drawing tools. Placing "objects" actually means applying linear transformations to place 3-dimensional vectors onto a 2D framebuffer.
So instead of placing the "camera", you move the whole world (the transformation) in the opposite way you would move the camera, which yields the very same outcome.
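A tiny sketch of that idea (camX/camY/camZ and camYaw are hypothetical camera state variables): apply the inverse of the camera transform to the ModelView matrix before drawing anything.
float camX = 2f, camY = 1f, camZ = 5f, camYaw = 30f; // hypothetical camera state
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glRotatef(-camYaw, 0f, 1f, 0f);     // inverse of the camera's rotation
gl.glTranslatef(-camX, -camY, -camZ);  // inverse of the camera's translation
// ... draw the scene in world coordinates ...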

OpenGL-ES finding the right coordinates (Translations etc...)

How can I correctly figure out what values I must use for gl.glTranslatef(x, y, z) and similar methods? Example: I've got a square and want to display it in the upper-left corner, at about 1/4th of the screen. I figured it would be glTranslate() with values -0.5 and 0.5, but this doesn't display it where I expected.
So basically I want to know how to find the right coordinates for objects in OpenGL ES.
Unfortunately I haven't developed OpenGL ES content for Android yet, but AFAIK you need to convert screen coordinates (e.g. the upper-left corner of your screen) to world coordinates (coordinates in your 3D world in OpenGL).
For 3D, you could do this through ray projection. You will find plenty of examples through a Google search, and maybe an OpenGL implementation too.
For 2D, you can get away with using an orthographic projection matrix (basically no perspective distortion) and rotating it as needed (e.g. for landscape mode):
// Initialize your projection matrix - the current numbers are half the screen dimensions of the G1 I borrowed (320x480)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-240.0f, 240.0f, -160.0f, 160.0f, -1.0f, 1.0f);
// Rotate everything by 90 degrees
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(-90.0f, 0.0f, 1.0f, 0.0f);
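Applied to the original question, a rough sketch (Android GL10 in Java, ignoring the landscape rotation above): with glOrthof(-240, 240, -160, 160, ...) the visible area is 480x320 world units, so the centre of the upper-left quarter sits at roughly (-120, 80).
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(-120f, 80f, 0f); // centre of the upper-left quarter of the screen
// ... draw the square here ...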
HTH

Light source inside a room acting unexpectedly

I've written several Android apps, but this is my first experience with 3D programming.
I've created a room (4 walls, ceiling and floor) with a couple objects inside and am able to move the camera around it as if walking. I've textured all surfaces with various images and everything was working as expected.
For context, the room is 14 units wide and 16 units deep (centered at origin), 3 units high (1 above origin and 2 below). There are 2 objects in the middle of the room, a cube and an inverted pyramid on top of it.
Then I went to add a light source to shade the cube and pyramid. I had read through and followed a couple of NeHe's ports, so I took what I had working in the lesson on lighting and applied it to my new code.
gl.glEnable(GL10.GL_LIGHTING);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_AMBIENT, new float[] { 0.1f, 0.1f, 0.1f, 1f }, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_DIFFUSE, new float[] { 1f, 1f, 1f, 1f }, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, new float[] { -4f, 0.9f, 6f, 1f }, 0);
gl.glEnable(GL10.GL_LIGHT0);
The result is that the cube and pyramid are not shaded. They look the same on sides opposing the light as they do on the sides facing it. When the camera is pointed directly away from the light source the room looks as it did before I added the lighting code. As I rotate the camera to face the light source the entire room (including objects) becomes darker until completely black when the camera is directly facing the source.
What is going on here? I read many articles on lighting and how it works, but I have seen nothing to indicate why this wouldn't light up all sides of the room, with the cube and pyramid shaded based on the light position. Is there some expected behavior of the light because it is "inside" the room? Am I just missing something easy because I'm new?
Every surface in your 3D world has a normal, which helps OpenGL determine how much light the surface should reflect. You've probably forgotten to specify the normals for your surfaces. Without specifying them, OpenGL lights all objects in your world the same way.
In order to get a surface's normal in 3D you need at least three vertices, which means it is at least a triangle.
Sample stuff:
In order to calculate a surface's normal you need two vectors. Since you have three vertices in 3D space, these sample points can form a triangle:
// Top triangle: three non-collinear points in 3D space.
vertices = new float[] {
    -1.0f, 1.0f, -1.0f,
     1.0f, 1.0f, -1.0f,
     0.0f, 1.0f,  1.0f,
};
Given these three points, you can now define two vectors by the following:
// Simple vector class, created by you.
Vector3f vector1 = new Vector3f();
Vector3f vector2 = new Vector3f();
// vector1 = first vertex - second vertex
vector1.x = vertices[0] - vertices[3];
vector1.y = vertices[1] - vertices[4];
vector1.z = vertices[2] - vertices[5];
// vector2 = second vertex - third vertex
vector2.x = vertices[3] - vertices[6];
vector2.y = vertices[4] - vertices[7];
vector2.z = vertices[5] - vertices[8];
Now that you have two vectors, you can finally get the surface's normal by using the cross product. In short, the cross product is an operation which results in a new vector that is perpendicular to both input vectors. This is the normal we need.
To get the cross product in your code you have to write your own method that calculates it. In theory, the cross product is given by this formula:
A X B = (A.y * B.z - A.z * B.y, A.z * B.x - A.x * B.z, A.x * B.y - A.y * B.x)
In code (by using the vectors above):
public Vector3f crossProduct(Vector3f vector1, Vector3f vector2) {
    Vector3f normalVector = new Vector3f();
    // Cross product. The normalVector contains the normal for the
    // surface, which is perpendicular to both vector1 and vector2.
    normalVector.x = vector1.y * vector2.z - vector1.z * vector2.y;
    normalVector.y = vector1.z * vector2.x - vector1.x * vector2.z;
    normalVector.z = vector1.x * vector2.y - vector1.y * vector2.x;
    return normalVector;
}
Before any further comments: you could specify your normals in an array and just hand them to OpenGL when needed, but your understanding of this topic will be much better if you dig into it, and your code will be much more flexible.
So now you have a normal for each surface, which you can loop over, assigning the vector values to your normals array (like NeHe's ports, but computed dynamically), and set up OpenGL to use GL_NORMAL_ARRAY so it reflects the light off the object correctly:
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
// I'm assuming you know how to put it into a FloatBuffer.
gl.glNormalPointer(GL10.GL_FLOAT, 0, mNormalsBuffer);
// Draw your surface...
One last comment: if you're using other vertex values (like 5.0f, 10.0f or bigger) you might want to normalize the vector returned from the crossProduct() method in order to gain some performance. Otherwise OpenGL has to rescale the normal into a unit vector itself, and that might be a performance issue.
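A small sketch of such a normalize step, using the same hand-rolled Vector3f class as above:
public Vector3f normalize(Vector3f v) {
    float length = (float) Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    Vector3f unit = new Vector3f();
    unit.x = v.x / length;
    unit.y = v.y / length;
    unit.z = v.z / length;
    return unit;
}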
Also, about the fourth value in your new float[] {-4f, 0.9f, 6f, 1f} for GL_POSITION: with it set to 1.0f the light is a positional light located at (-4, 0.9, 6), whereas setting it to 0.0f would make it a directional light shining along that vector from infinitely far away. For a light source inside the room, a positional light (1.0f) is what you want.
You need to reload the light position each frame; otherwise the light source will move with the camera, which is probably not what you want. Also, the shading you are describing is entirely consistent with per-vertex interpolated lighting. If you want something better you will have to do it per pixel (which means implementing your own shader), or else subdivide your geometry.
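As a sketch of what "reload the light position each frame" means on Android's GL10 (lightPos holds the same values as in the question): set GL_POSITION after the camera/ModelView transform has been applied, so the light stays fixed in world space.
private final float[] lightPos = { -4f, 0.9f, 6f, 1f };

public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    // ... apply your camera transform here ...
    gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPos, 0);
    // ... draw the room, cube and pyramid ...
}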
