OpenGL ES sphere issues - java

I've been following the tutorials at http://www.learnopengles.com and learning how to work with OpenGL ES, but I'm having a hard time getting a sphere to show up.
I went ahead and made a geodesic sphere in Blender and imported the vertices and the draw order, but whenever I draw the sphere the app crashes.
I'll include a link to my full renderer file, but I'll also point out the key places where I think the problems might be.
Here is where the buffers are created. I'm not sure whether the problem is the way I'm buffering the sphere's positions or its draw-order indices.
// Initialize the buffers.
mCubePositions = ByteBuffer.allocateDirect(cubePositionData.length * mBytesPerFloat)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
mCubePositions.put(cubePositionData).position(0);

mSpherePositions = ByteBuffer.allocateDirect(spherePositionData.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
mSpherePositions.put(spherePositionData).position(0);

ByteBuffer dlb = ByteBuffer.allocateDirect(sphereDrawOrder.length * 2)
        .order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(sphereDrawOrder);
drawListBuffer.position(0);

mCubeColors = ByteBuffer.allocateDirect(cubeColorData.length * mBytesPerFloat)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
mCubeColors.put(cubeColorData).position(0);

mCubeNormals = ByteBuffer.allocateDirect(cubeNormalData.length * mBytesPerFloat)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
mCubeNormals.put(cubeNormalData).position(0);
And here is the method that draws the sphere. I have no colors or normals for the sphere, so I just removed those parts. Is that what's causing it to freak out?
private void drawSphere()
{
    // Pass in the position information
    mSpherePositions.position(0);
    GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
            0, mSpherePositions);
    GLES20.glEnableVertexAttribArray(mPositionHandle);

    // Pass in the color information
    // Pass in the normal information

    // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
    // (which currently contains model * view).
    Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

    // Pass in the modelview matrix.
    GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mMVPMatrix, 0);

    // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
    // (which now contains model * view * projection).
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

    // Pass in the combined matrix.
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

    // Pass in the light position in eye space.
    GLES20.glUniform3f(mLightPosHandle, mLightPosInEyeSpace[0], mLightPosInEyeSpace[1], mLightPosInEyeSpace[2]);

    // Draw the sphere.
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, sphereDrawOrder.length,
            GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
}
Whenever it crashes, the log points at the code after // Draw the sphere. It doesn't like the glDrawElements call for some reason.
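Since the crash happens at glDrawElements, one sanity check worth running (a minimal sketch, assuming sphereDrawOrder and spherePositionData are the raw arrays I exported from Blender; it uses android.util.Log) is whether every index in the draw order actually refers to an existing vertex:

// spherePositionData holds 3 floats (x, y, z) per vertex.
int vertexCount = spherePositionData.length / 3;
for (int i = 0; i < sphereDrawOrder.length; i++) {
    // Treat the short as unsigned, the same way GL_UNSIGNED_SHORT does.
    int index = sphereDrawOrder[i] & 0xFFFF;
    if (index >= vertexCount) {
        Log.e("Renderer", "sphereDrawOrder[" + i + "] = " + index
                + " but there are only " + vertexCount + " vertices");
    }
}

If any index is out of range, glDrawElements reads past the end of the client-side position buffer, which usually shows up as a native crash rather than a Java exception.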
Here is the full render script for reference: http://pastebin.com/Y1WU27hz
If you have any insight into this, thank you.

Related

Transforming the camera using a Matrix4f

I have a 3D scene with one triangle and a camera I want to move around. I stored the camera transformation as a matrix (specifically, a Matrix4f object loaded with the identity).
Here are two examples of methods that would modify that matrix:
This one rotates it a negative amount on the y axis:
public void turnRight() {
getCameraMatrix().rotate(-ROTATION_INTERVAL, new Vector3f(0, 1, 0));
}
This one translates right:
public void right() {
getCameraMatrix().translate(new Vector3f(TRANSLATION_INTERVAL, 0, 0));
}
I set the projection matrix to orthographic (having previously called glViewport()):
public void performProjection() {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, Display.getWidth(), 0, Display.getHeight(), 0, 1000);
}
And change the camera position before drawing the scene every time the camera moves:
public void performLook() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glLoadMatrix(getAsFloatBuffer()); // gets the FloatBuffer from the matrix
}
However, the effect of this transformation is that the triangle is translated and rotated, not the camera. As an example of how this should behave: if right() is called, the camera should move right relative to its current orientation, not along a fixed world axis.
Any tips on how to do this? (Our assignment forbids modern OpenGL for some reason.)
EDIT
In case it's useful, the coordinates for the triangle are: (1, 1, 1), (1, 50, 1), (50, 50, 1)
The viewport call is glViewport(0, 0, Display.getWidth(), Display.getHeight());
And finally, the Display dimensions are 640x480
If there's anything about my question that needs clarification, please let me know. Thanks.
EDIT
The camera matrix is the modelview matrix in my code. I built it by creating a new Matrix4f object and loading the identity. All subsequent transforms are applied to that matrix, whose data is then loaded into the modelview matrix when performLook() is called. I'm only drawing a single primitive right now, so I haven't yet dealt with object-to-world transforms.
The render steps are:
1. performProjection()
2. performLook()
3. Clear the screen
4. Draw the primitive
5. Swap the buffers.
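For reference, a sketch of the standard approach (assuming getCameraMatrix() returns the camera's transform in world space, that Matrix4f is LWJGL's org.lwjgl.util.vector.Matrix4f, and using org.lwjgl.BufferUtils): the matrix loaded as GL_MODELVIEW should be the inverse of the camera's transform, so that moving the camera right moves the world left:

public void performLook() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // The view matrix is the inverse of the camera's world transform.
    Matrix4f view = Matrix4f.invert(getCameraMatrix(), null);

    // Matrix4f.store() writes the matrix in column-major (OpenGL) order.
    FloatBuffer buffer = BufferUtils.createFloatBuffer(16);
    view.store(buffer);
    buffer.flip();

    glLoadMatrix(buffer);
}

With the inverse loaded, the triangle stays put while the camera appears to move; whether right() then moves perpendicular to the current orientation depends on whether the translation is applied before or after the accumulated rotation in the camera matrix.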

OpenGL does not render my Mesh

Using LWJGL I tried to render a simple Mesh on screen, but OpenGL decided to do nothing instead. :(
So I have a mesh class which creates a VBO. I can add some vertices which then are supposed to be drawn on screen.
public class Mesh {

    private int vbo;
    private int size = 0;

    public Mesh() {
        vbo = glGenBuffers();
    }

    public void addVertices(Vertex[] vertices) {
        size = vertices.length;

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, Util.createFlippedBuffer(vertices), GL_STATIC_DRAW);
    }

    public void draw() {
        glEnableVertexAttribArray(0);

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, false, Vertex.SIZE * 4, 0);
        glDrawArrays(GL_TRIANGLES, 0, size);

        glDisableVertexAttribArray(0);
    }
}
Here is how I add vertices to my mesh:
mesh = new Mesh();
Vertex[] vertices = new Vertex[] { new Vertex(new Vector3f(-1, -1, 0)),
new Vertex(new Vector3f(-1, 1, 0)),
new Vertex(new Vector3f(0, 1, 0)) };
mesh.addVertices(vertices);
I am pretty sure I added them in the correct (clockwise) order.
And my OpenGL setup:
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
Calling glGetError() returns no error (0).
EDIT:
Well, I found out that Macs are a little weird when it comes to OpenGL. I needed to use a VAO along with the VBO. Now it works fine. Thanks anyway!
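Roughly what that looks like, sketched from memory (assuming LWJGL's static imports from GL11/GL15/GL20/GL30 and the same Util.createFlippedBuffer helper as above; a core profile, which OS X gives you for OpenGL 3.2+, requires attribute state to live in a VAO):

// In the constructor: create a VAO alongside the VBO.
int vao = glGenVertexArrays();
glBindVertexArray(vao);

int vbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, Util.createFlippedBuffer(vertices), GL_STATIC_DRAW);

// Record the attribute layout into the bound VAO.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, Vertex.SIZE * 4, 0);

// At draw time it is enough to bind the VAO again and issue the draw call.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, size);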
I don't see anywhere that you're specifying a shader, a color, or a vertex array object. Depending on which profile you're using, you need one or more of these.
I would suggest checking / setting the following:
Disable face culling to ensure that you see something regardless of the winding
If you're requesting a core profile, you'll need a shader and quite possibly a vertex array object
If you're instead using a compatibility profile, you should call glColor3f(1, 1, 1) in your draw call to ensure you're not drawing a black triangle
Are you clearing the color and depth framebuffers before your render?
You might not be drawing that object within the viewing frustum; also call glGetError to make sure you aren't making any mistakes.
It is also important to understand the difference between fixed-pipeline and programmable-pipeline OpenGL. If you are using a version with a programmable pipeline you will need to write shaders; otherwise you will need to set the modelview and projection matrices.
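For the programmable-pipeline case, here is a minimal sketch of what "you'll need a shader" means in LWJGL (using static imports from GL20; the GLSL #version 120 / attribute / gl_FragColor style assumes a compatibility-style context, and binding attribute index 0 is an assumption made to match the glVertexAttribPointer(0, ...) call in the question; error checks via glGetShaderInfoLog are omitted):

// Minimal shader pair: pass positions through, output solid white.
String vertexSrc =
        "#version 120\n" +
        "attribute vec3 position;\n" +
        "void main() { gl_Position = vec4(position, 1.0); }\n";
String fragmentSrc =
        "#version 120\n" +
        "void main() { gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0); }\n";

int vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, vertexSrc);
glCompileShader(vs);

int fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, fragmentSrc);
glCompileShader(fs);

int program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glBindAttribLocation(program, 0, "position"); // match glVertexAttribPointer(0, ...)
glLinkProgram(program);

// Before drawing:
glUseProgram(program);

On a true core profile (3.2+) the shaders would instead use #version 150 with in/out variables, but the Java side stays the same.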

OpenGLES - Create objects at runtime

I'm new to OpenGL and started with the small tutorial from dev.android.com. The sample code includes this Square class for a square geometry. The object is created in the onSurfaceCreated() method and drawn every frame in onDrawFrame(). Here is the example code of the Square (constructor and draw method):
public Square() {
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(squareCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(squareCoords);
vertexBuffer.position(0);
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
// prepare shaders and OpenGL program
int vertexShader = MyGLRenderer.loadShader(
GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = MyGLRenderer.loadShader(
GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
mProgram = GLES20.glCreateProgram(); // create empty OpenGL Program
GLES20.glAttachShader(mProgram, vertexShader); // add the vertex shader to program
GLES20.glAttachShader(mProgram, fragmentShader); // add the fragment shader to program
GLES20.glLinkProgram(mProgram); // create OpenGL program executables
}
public void draw(float[] mvpMatrix) {
// Add program to OpenGL environment
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the triangle coordinate data
GLES20.glVertexAttribPointer(
mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Set color for drawing the triangle
GLES20.glUniform4fv(mColorHandle, 1, color, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
MyGLRenderer.checkGlError("glGetUniformLocation");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
MyGLRenderer.checkGlError("glUniformMatrix4fv");
// Draw the square
GLES20.glDrawElements(
GLES20.GL_TRIANGLES, drawOrder.length,
GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
}
My question now is: how can I create the object not in onSurfaceCreated() but after a touch event?
I tried to define a Square variable but not initialize it in onSurfaceCreated(), then check whether the object is null before drawing it. After the touch I called:
mSquare = new Square();
I know it's not a good way of implementing this, but I just wanted to see if it works. The plan was to keep a list of drawable elements, run through it in onDrawFrame(), and call draw() on each of them. But since this approach crashes the program, I don't know how to go on.
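For clarity, the kind of list I mean would be something like this (just a sketch; mDrawables and mMVPMatrix are illustrative names, and a java.util.concurrent.CopyOnWriteArrayList is used so that adding from the touch thread while the GL thread iterates doesn't throw):

// Renderer field: everything in this list gets drawn each frame.
private final List<Square> mDrawables = new CopyOnWriteArrayList<Square>();

@Override
public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    for (Square square : mDrawables) {
        square.draw(mMVPMatrix);
    }
}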
You can only make calls to OpenGL within an OpenGL context.
There are 3 methods in which this context exists:
1) onSurfaceCreated - when context is created/recreated - you should load resources here
2) onSurfaceChanged - after creation and on a surface resize - you should assign size-dependent variables here
3) onDrawFrame - here rendering is executed - here you should execute all drawing commands
If you try to execute OpenGL calls outside of the context, most likely the code will fail to execute correctly.
The Android UI and the GL context run on different threads, each managed in its own way, so if you want to create a square on touch, you have to create the object inside the GL context, like this:
http://pastebin.com/zAav7jpu
Set a flag in your touch handler (or wherever you want to trigger the creation):
public static boolean isAddedBody = false; // declare globally in touch_class

public void touch(int... whatever) {
    mSquare = new Square();
    isAddedBody = true;
}
Then, in your render()/draw() loop:
if (touch_class.isAddedBody) {
    mSquare.createBody();
    touch_class.isAddedBody = false;
}
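Another common way to get onto the GL thread, for what it's worth, is GLSurfaceView.queueEvent(), which runs a Runnable on the renderer's thread. A sketch, assuming you hold references to your GLSurfaceView and renderer (addSquare here is a hypothetical helper that stores the new Square for drawing):

// In the touch handler: defer the GL work to the GL thread.
glSurfaceView.queueEvent(new Runnable() {
    @Override
    public void run() {
        // Runs on the GL thread, so it is safe to create GL resources here.
        renderer.addSquare(new Square());
    }
});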

Android - OpenGL - Emulator vs Actual Device

I am writing a game which uses OpenGL ES. I have created my renderer class and have a sample of my game working on the emulator; however, none of the textures display on an actual device. I have read that the most common cause for this is textures needing to be a power of 2, but I have tried drawing a square (128x128) with a texture of the same size mapped to it, and it still only shows on the emulator. Further to that, my actual game will be using rectangles, so I'm unsure how I can map square textures to rectangles.
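One quick diagnostic that can be run on the actual device, inside onSurfaceCreated where the GL10 instance is available (a small sketch; it only logs whether the driver advertises non-power-of-two texture support, using android.util.Log):

// Log whether this device's GL driver reports non-power-of-two texture support.
String extensions = gl.glGetString(GL10.GL_EXTENSIONS);
boolean npotSupported = extensions != null
        && extensions.contains("GL_OES_texture_npot");
Log.d("Renderer", "NPOT textures supported: " + npotSupported);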
This is my code so far (The game is 2d so I'm using ortho mode):
EDIT: I have updated my code; it is now correctly binding textures and using textures of size 128x128, but I'm still only seeing textures on the emulator.
public void onSurfaceCreated(GL10 gl, EGLConfig config)
{
byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
vertexBuffer = byteBuffer.asFloatBuffer();
vertexBuffer.put(cardshape);
vertexBuffer.position(0);
byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
textureBuffer.put(textureshape);
textureBuffer.position(0);
// Set the background color to black ( rgba ).
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
// Enable Smooth Shading, default not really needed.
gl.glShadeModel(GL10.GL_SMOOTH);
// Depth buffer setup.
gl.glClearDepthf(1.0f);
// Enables depth testing.
gl.glEnable(GL10.GL_DEPTH_TEST);
// The type of depth testing to do.
gl.glDepthFunc(GL10.GL_LEQUAL);
// Really nice perspective calculations.
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
gl.glEnable(GL10.GL_TEXTURE_2D);
loadGLTexture(gl);
}
public void onDrawFrame(GL10 gl) {
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glDisable(GL10.GL_DEPTH_TEST);
gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection
gl.glPushMatrix(); // Push The Matrix
gl.glLoadIdentity(); // Reset The Matrix
gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix
gl.glPushMatrix(); // Push The Matrix
gl.glLoadIdentity(); // Reset The Matrix
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glLoadIdentity();
gl.glTranslatef(card.x, card.y, 0.0f);
gl.glBindTexture(GL10.GL_TEXTURE_2D, card.texture[0]); //activates texture to be used now
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
// Sets the current view port to the new size.
gl.glViewport(0, 0, width, height);
// Select the projection matrix
gl.glMatrixMode(GL10.GL_PROJECTION);
// Reset the projection matrix
gl.glLoadIdentity();
// Calculate the aspect ratio of the window
GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f,
100.0f);
// Select the modelview matrix
gl.glMatrixMode(GL10.GL_MODELVIEW);
// Reset the modelview matrix
gl.glLoadIdentity();
}
public int[] texture = new int[1];
public void loadGLTexture(GL10 gl) {
// loading texture
Bitmap bitmap;
bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);
// generate one texture pointer
gl.glGenTextures(0, texture, 0); //adds texture id to texture array
// ...and bind it to our array
gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now
// create nearest filtered texture
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
// Use Android GLUtils to specify a two-dimensional texture image from our bitmap
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
// Clean up
bitmap.recycle();
}
Is there anything I have done wrong, or something I haven't done? It works perfectly fine in the emulator, so I could only assume it was the power-of-2 issue, but as I said, I tried a 128x128 texture on a square and it didn't show. Any help would be appreciated.
EDIT: I have also tried setting minSdkVersion to 3, loading the bitmap via an input stream (bitmap = BitmapFactory.decodeStream(is)), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder, and then trying them in the raw folder. Any other ideas?
I'm actually looking for the solution to a similar problem right now. I think I might have a temporary fix for you, however.
The problem appears to be that on the emulator the orthographic view is flipped. To solve this, in my app we added an option in preferences to manually flip the view if nothing draws. Here's the snippet that handles this:
if (!flipped)
{
    glOrthof(0, screenWidth, screenHeight, 0, -1, 1); //--Device
}
else
{
    glOrthof(0, screenWidth, 0, -screenHeight, -1, 1); //--Emulator
}
Hope this helps! If anybody has a more general solution, I'd be happy to hear it!
I didn't look at your code but I have been on that road before. Developing in OpenGL is a real pain in the ass. If you are not obligated to use OpenGL, then use a graphics engine. Unity is a great one and it's free. Also your game would work on Android, iOS or other platforms. Study your choices carefully. Good luck..

Render image for 2D game use in OpenGL ES for android

EDIT: Solved it! I made a stupid mistake: I had a textureId field I'd forgotten about, when it was textureID I should have been using.
Okay, I am fully aware that this is a recurring question, and that there are a lot of tutorials and open source code. But I've been trying as best as I can for quite a while here, and my screen is still blank (with whatever color I set using glClearColor()).
So, I would be grateful for some pointers to what I'm doing wrong, or even better, some working code that will render a resource image.
I'll show what I've got so far (by doing some crafty copy-pasting) in my onDrawFrame of the class that implements the Renderer. I've removed some of the jumping between methods, and will simply paste it in the order it is executed.
Feel free to disregard my current code, I'm more than happy to start over, if anyone can give me a working piece of code.
Setup:
bitmap = BitmapFactory.decodeResource(panel.getResources(),
R.drawable.test);
addGameComponent(new MeleeAttackComponent());
// Mapping coordinates for the vertices
float textureCoordinates[] = { 0.0f, 2.0f, //
2.0f, 2.0f, //
0.0f, 0.0f, //
2.0f, 0.0f, //
};
short[] indices = new short[] { 0, 1, 2, 1, 3, 2 };
float[] vertices = new float[] { -0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f };
setIndices(indices);
setVertices(vertices);
setTextureCoordinates(textureCoordinates);
protected void setVertices(float[] vertices) {
    // A float is 4 bytes, therefore we multiply the number of vertices by 4.
    ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
    vbb.order(ByteOrder.nativeOrder());
    mVerticesBuffer = vbb.asFloatBuffer();
    mVerticesBuffer.put(vertices);
    mVerticesBuffer.position(0);
}

protected void setIndices(short[] indices) {
    // A short is 2 bytes, therefore we multiply the number of indices by 2.
    ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
    ibb.order(ByteOrder.nativeOrder());
    mIndicesBuffer = ibb.asShortBuffer();
    mIndicesBuffer.put(indices);
    mIndicesBuffer.position(0);
    mNumOfIndices = indices.length;
}

protected void setTextureCoordinates(float[] textureCoords) {
    // A float is 4 bytes, therefore we multiply the number of coordinate values by 4.
    ByteBuffer byteBuf = ByteBuffer.allocateDirect(textureCoords.length * 4);
    byteBuf.order(ByteOrder.nativeOrder());
    mTextureBuffer = byteBuf.asFloatBuffer();
    mTextureBuffer.put(textureCoords);
    mTextureBuffer.position(0);
}
//The onDrawFrame(GL10 gl)
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, -4);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);
if(shoudlLoadTexture){
loadGLTextures(gl);
shoudlLoadTexture = false;
}
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glEnable(GL10.GL_TEXTURE_2D);
// Enable the texture state
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Point to our buffers
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
}
gl.glTranslatef(posX, posY, 0);
// Point out the where the color buffer is.
gl.glDrawElements(GL10.GL_TRIANGLES, mNumOfIndices,
GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
private void loadGLTextures(GL10 gl) {
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureID = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
It doesn't crash and throws no exceptions; I simply get a blank screen with the clear color. I've printed stuff in there, so I'm pretty sure it is all executed.
I know it's not optimal to just paste code, but at the moment, I just want to be able to do what I was able to do with canvas :)
Thanks a lot
If you're getting the background colour, that means your window is properly set up. OpenGL is connected to that area of the screen.
However, OpenGL clips to the near and far clip planes, ensuring that objects don't cross or intersect the camera (which, both mathematically and logically, doesn't make sense) and that objects too far away don't appear. So if you've not set up modelview and projection correctly, it's probable that all your geometry is being clipped.
Modelview is used to map from world to eye space. Projection maps from eye space to screen space. So a typical application uses the former to position objects within the scene and to position the scene relative to the camera, while the latter deals with whether the camera sees with perspective or not, how many world units make how many screen units, etc.
If you look at examples like this one, particularly onSurfaceChanged, you'll see an example of a perspective projection with a camera fixed at the origin.
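That setup typically lives in onSurfaceChanged; roughly like this (a sketch using android.opengl.GLU and GL10; the 45° field of view and the 0.1/100 clip planes are assumptions matching that example):

public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);

    // Projection: perspective, with the camera fixed at the origin looking down -z.
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    GLU.gluPerspective(gl, 45.0f, (float) width / height, 0.1f, 100.0f);

    // Modelview: reset to identity and position objects from there each frame.
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}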
Because the camera is at (0, 0, 0), leaving your geometry on z = 0 as your code does will cause it to be clipped. In that example code they've set the near clip plane to be at z = 0.1, so in your existing code you could change:
gl.glTranslatef(posX, posY, 0);
To:
gl.glTranslatef(posX, posY, -1.0f);
To push your geometry back sufficiently far to appear on screen.
