Looking through Google's Cardboard example, I am wondering where the head-movement transformation takes place, so that the scene or the view is adapted to reflect the head movement.
The methods of interest should be public void onNewFrame(HeadTransform headTransform) and public void onDrawEye(Eye eye) in the MainActivity class.
Here is a snippet:
@Override
public void onNewFrame(HeadTransform headTransform) {
// Build the Model part of the ModelView matrix.
Matrix.rotateM(modelCube, 0, TIME_DELTA, 0.5f, 0.5f, 1.0f);
// Build the camera matrix and apply it to the ModelView.
Matrix.setLookAtM(camera, 0, 0.0f, 0.0f, CAMERA_Z, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
headTransform.getHeadView(headView, 0);
// Update the 3d audio engine with the most recent head rotation.
headTransform.getQuaternion(headRotation, 0);
cardboardAudioEngine.setHeadRotation(
headRotation[0], headRotation[1], headRotation[2], headRotation[3]);
checkGLError("onReadyToDraw");
}
@Override
public void onDrawEye(Eye eye) {
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
checkGLError("colorParam");
// Apply the eye transformation to the camera.
Matrix.multiplyMM(view, 0, eye.getEyeView(), 0, camera, 0);
// Set the position of the light
Matrix.multiplyMV(lightPosInEyeSpace, 0, view, 0, LIGHT_POS_IN_WORLD_SPACE, 0);
// Build the ModelView and ModelViewProjection matrices
// for calculating cube position and light.
float[] perspective = eye.getPerspective(Z_NEAR, Z_FAR);
Matrix.multiplyMM(modelView, 0, view, 0, modelCube, 0);
Matrix.multiplyMM(modelViewProjection, 0, perspective, 0, modelView, 0);
drawCube();
// Set modelView for the floor, so we draw floor in the correct location
Matrix.multiplyMM(modelView, 0, view, 0, modelFloor, 0);
Matrix.multiplyMM(modelViewProjection, 0, perspective, 0, modelView, 0);
drawFloor();
}
My first assumption was that the model (or camera) is modified in onNewFrame() based on data from headTransform. But this does not seem to be the case, as there are only two accesses to it: one to determine which cube we are looking at (headTransform.getHeadView(headView, 0);) and another for the audio engine.
So my next assumption, and the only other possibility I see, is that it is handled by the eye passed to onDrawEye(). On the other hand, after a short look at the disassembly I could not find the relation between headTransform and eye (which doesn't mean there is no relation, since I haven't invested much time in that).
So my question:
Is my assumption right? Does the rendering take the head movement into account by multiplying the camera with the eyeView?
Well, I spent some more time browsing the disassembly, and it seems my assumption was correct.
The private class RendererHelper within CardboardView implements the following method (it is quite large, so I removed what does not seem important to me):
public void onDrawFrame(GL10 gl)
{
// ...
if (mVRMode)
{
Matrix.setIdentityM(mLeftEyeTranslate, 0);
Matrix.setIdentityM(mRightEyeTranslate, 0);
Matrix.translateM(mLeftEyeTranslate, 0, halfInterpupillaryDistance, 0.0F, 0.0F);
Matrix.translateM(mRightEyeTranslate, 0, -halfInterpupillaryDistance, 0.0F, 0.0F);
Matrix.multiplyMM(mLeftEye.getTransform().getEyeView(), 0, mLeftEyeTranslate, 0, mHeadTransform.getHeadView(), 0);
Matrix.multiplyMM(mRightEye.getTransform().getEyeView(), 0, mRightEyeTranslate, 0, mHeadTransform.getHeadView(), 0);
}
// ...
}
The last two matrix multiplications seem to be the place where the relation between headTransform and the eye is established.
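To make the chain explicit, here is a small sketch in terms of the sample's names (the expansion of eye.getEyeView() is taken from the decompiled RendererHelper above, so treat it as an assumption about the SDK internals rather than documented behaviour):
// Inside onDrawEye(Eye eye):
//   eye.getEyeView() = eyeTranslate * headView   (set up per frame by CardboardView)
// so the multiplication below expands to
//   view = eyeTranslate * headView * camera
// i.e. the head rotation ends up in `view` without the app ever touching headView itself.
Matrix.multiplyMM(view, 0, eye.getEyeView(), 0, camera, 0);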
I have been working for the last few days to "unproject" our app's touch events into our Renderer's coordinate space. In pursuit of this goal, I have tried various custom unproject methods and alternative techniques (including trying to convert the coordinates using the scaling factor and transform values) to no avail. I have gotten close (my touches are only slightly off), but my attempts using GLU.gluUnProject have been way off, usually placing the coordinates around the center of the view. The "closest" results were produced by Xandy's method, but even these are usually off. My primary questions are: how do I set up my viewport matrix, and am I passing GLU.gluUnProject the correct parameters? My math is based on the answer to this question. Here are the relevant excerpts of my code (showing how I set up my matrices and my current attempt):
public void onSurfaceChanged(GL10 gl, int width, int height) {
// Set the OpenGL viewport to fill the entire surface.
glViewport(0, 0, width, height);
...
float ratio = (float) width / height;
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
public void onDrawFrame(GL10 gl) {
glClear(GL_COLOR_BUFFER_BIT);
Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, -4.5f, 0f, 0f, 0f, 0f, 1f, 0f);
Matrix.scaleM(mViewMatrix, 0, mScaleFactor, mScaleFactor, 1.0f);
Matrix.translateM(mViewMatrix, 0, -mOffset.x, mOffset.y, 0.0f);
Matrix.multiplyMM(mModelMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
...
}
public PointF convertScreenCoords(float x, float y) {
int[] viewPortMatrix = new int[]{0, 0, (int)mViewportWidth, (int)mViewportHeight};
float[] outputNear = new float[4];
float[] outputFar = new float[4];
y = mViewportHeight - y;
int successNear = GLU.gluUnProject(x, y, 0, mModelMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
int successFar = GLU.gluUnProject(x, y, 1, mModelMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputFar, 0);
if (successNear == GL_FALSE || successFar == GL_FALSE) {
throw new RuntimeException("Cannot invert matrices!");
}
convert4DCoords(outputNear);
convert4DCoords(outputFar);
float distance = outputNear[2] / (outputFar[2] - outputNear[2]);
float normalizedX = (outputNear[0] + (outputFar[0] - outputNear[0]) * distance);
float normalizedY = (outputNear[1] + (outputFar[1] - outputNear[1]) * distance);
return new PointF(normalizedX, normalizedY);
}
convert4DCoords is simply a helper function that divides each coordinate (x, y, z) of an array by w. mOffset and mScaleFactor are the translation and scaling parameters (provided by a ScaleGestureDetector within our GLSurfaceView).
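In essence it is equivalent to the following sketch (the actual code may differ in details):
// Sketch: perspective divide in place, i.e. divide x, y, z by w.
private static void convert4DCoords(float[] v) {
    v[0] /= v[3];
    v[1] /= v[3];
    v[2] /= v[3];
    v[3] = 1.0f;
}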
Based on everything I have read, this should be working; however, it is consistently wrong, and I am not sure what else to try. Any help/feedback would be greatly appreciated!
I didn't look through all of your math, but some of your matrix operations and specifications look wrong to me. That is most likely at least part of your problem.
Looking at what you call mViewMatrix first:
Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, -4.5f, 0f, 0f, 0f, 0f, 1f, 0f);
Matrix.scaleM(mViewMatrix, 0, mScaleFactor, mScaleFactor, 1.0f);
Matrix.translateM(mViewMatrix, 0, -mOffset.x, mOffset.y, 0.0f);
There's nothing necessarily wrong with this. But most people would probably only call the first part, which is the result of setLookAtM(), a View matrix. The rest then looks more like a Model matrix. But since rendering normally only uses the product of the View and Model matrices, this is just terminology. It would seem fair to say that what you store in mViewMatrix is your ModelView matrix.
Then the naming gets stranger here:
Matrix.multiplyMM(mModelMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
This calculates the product of Projection and View matrix. So this would be a ViewProjection matrix. Storing this in a variable called mModelMatrix is confusing, and I think you might have set yourself a trap with this naming.
This leads us to the glUnproject() call:
GLU.gluUnProject(x, y, 0, mModelMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
Based on the above, what you're passing for the two matrix arguments (the viewport is not really a matrix) are the ViewProjection and the Projection matrix, whereas, based on the method definition, you should be passing the ModelView and the Projection matrix.
Since we established that what you call mViewMatrix corresponds to a ModelView matrix, this should start working much better if you change just that argument:
GLU.gluUnProject(x, y, 0, mViewMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
Then you can get rid of mModelMatrix, which is not a Model matrix at all.
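Putting it together, a sketch of how the pieces line up after that renaming (same fields as in your question; mMVPMatrix is just a placeholder name for the product, if you still need it for drawing):
// onDrawFrame: mViewMatrix plays the role of the ModelView matrix.
Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, -4.5f, 0f, 0f, 0f, 0f, 1f, 0f);
Matrix.scaleM(mViewMatrix, 0, mScaleFactor, mScaleFactor, 1.0f);
Matrix.translateM(mViewMatrix, 0, -mOffset.x, mOffset.y, 0.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
// convertScreenCoords: pass ModelView and Projection separately, as gluUnProject expects.
GLU.gluUnProject(x, y, 0, mViewMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
GLU.gluUnProject(x, y, 1, mViewMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputFar, 0);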
I am working with LWJGL to make a game. It's very basic. Before implementing any sort of GPU rendering or fancy model loaders, I wanted to make sure I could at least render 2D and 3D at the same time; my game has a GUI while you walk around. Or at least, it is supposed to. Here is my initialization and rendering code; the flickering does not happen when I only render 3D.
public void clearGL() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glLoadIdentity();
}
public void init3D() {
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective((float) 68, Engine.size[0] / Engine.size[1], 0.3f, 1000);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH);
glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
glClearDepth(1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
}
public void render3D(Camera c) {
init3D();
clearGL();
//Do translations here
glTranslatef(0f, -5f, 0f);
glColor3f(0, 1, 0);
glBegin(GL_QUADS);
glVertex3f(-50f, 0f, -50f);
glColor3f(0, 0, 1);
glVertex3f(50f, 0f, -50f);
glColor3f(1, 0, 0);
glVertex3f(50f, 0f, 50f);
glColor3f(0, 1, 1);
glVertex3f(-50f, 0f, 50f);
glEnd();
}
public void init2D() {
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, Engine.size[0], 0, Engine.size[1], -1, 1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glLoadIdentity();
}
public void render2D() {
init2D();
glBegin(GL_QUADS);
glVertex2f(0f, 50f);
glVertex2f(50f, 50f);
glVertex2f(50f, 0f);
glVertex2f(0f, 0f);
glPopMatrix();
}
I can tell it's rendering at all because I am drawing a quad to represent the floor in JBullet. For some reason it is above the camera's head, but when I translate the camera up towards it, it gets farther away, which is why I translated the camera to -5. That's another problem, for another day.
You should really consider disabling the depth test when you "switch" from 3D to 2D if you are going to draw at Z=0 (middle of your depth range). Half of the visible space in your 3D scene will potentially obstruct your 2D drawing if you do not do this. Alternatively, you could replace your glVertex2f (...) calls with glVertex3f (x,y, -1.0) to bring everything in 2D to the very front of the depth range.
But the really weird thing about all of this is the end of your render2D (...) function: you never call glEnd (...), and you pop a matrix that you appear never to have pushed. Those are two sources of mismatched state, and either one of them could be causing your problem.
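Concretely, a sketch of render2D with both points addressed, assuming the rest of the code stays as posted:
public void render2D() {
    init2D();
    glDisable(GL_DEPTH_TEST);   // keep the 3D scene from obstructing the 2D overlay
    glBegin(GL_QUADS);
    glVertex2f(0f, 50f);
    glVertex2f(50f, 50f);
    glVertex2f(50f, 0f);
    glVertex2f(0f, 0f);
    glEnd();                    // this call was missing
    // (no glPopMatrix here, since nothing was pushed)
}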
I'm creating an app which should draw objects at a fixed place in the scene, but when I move my phone around, the camera should change its view and I should see all the objects around me (when I do a full rotation with the phone).
When I use gl.glMultMatrixf(rotationMatrix, 0) in my onDrawFrame method and then draw the object, it works perfectly (the rotation matrix is obtained from SensorManager):
@Override
public void onDrawFrame(GL10 gl) {
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
gl.glMultMatrixf(rotationMatrix, 0);
gl.glTranslatef(0f, 0f, -10f);
//draw object
pyramid.draw(gl);
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
But when I try to use:
GLU.gluLookAt( gl, 0.0f, 0.0f, 0.0f, x, y, z, 0.0f, 1.0f, 0.0f );
gl.glPushMatrix();
//draw object
pyramid.draw(gl);
my object stays in the same place on the screen the whole time and follows the camera movement. What am I doing wrong in the second example?
I think you are unclear on what gluLookAt() actually does. You are effectively asking for a matrix representing gluLookAt(currentRotationMatrix, myLocation.x, myLocation.y, myLocation.z, whereImLooking.x, whereImLooking.y, whereImLooking.z, upVector.x, upVector.y, upVector.z).
So with that code you are telling OpenGL to rotate everything (assuming that code is still present), and then you are saying "I am at the origin (0.0, 0.0, 0.0) and looking at xyz" (I'm not sure what xyz is in your case, or what effect you are trying to achieve).
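If the goal is the same behaviour as the first (working) snippet, gluLookAt should describe only the fixed camera, and the sensor rotation still has to be multiplied in separately. A sketch, using the same fields as the question:
gl.glLoadIdentity();
// Camera at the origin looking down -Z: this gluLookAt is effectively the identity
// and knows nothing about the sensor.
GLU.gluLookAt(gl, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f);
// The device rotation still has to be applied explicitly...
gl.glMultMatrixf(rotationMatrix, 0);
// ...and the object placed at a fixed spot in the world.
gl.glTranslatef(0f, 0f, -10f);
pyramid.draw(gl);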
I am writing a game which uses OpenGL ES. I have created my renderer class and have a sample of my game working on the emulator; however, none of the textures display on an actual device. I have read that the most common cause for this is the need for texture dimensions to be a power of 2, but I have tried drawing a square (128x128) with a texture of the same size mapped to it, and this only shows on the emulator. Further to that, my actual game will be using rectangles, so I'm unsure how I can map square textures to rectangles.
This is my code so far (The game is 2d so I'm using ortho mode):
EDIT: I have updated my code; it is now correctly binding textures and using textures of size 128x128, but I am still only seeing textures on the emulator.
public void onSurfaceCreated(GL10 gl, EGLConfig config)
{
byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
vertexBuffer = byteBuffer.asFloatBuffer();
vertexBuffer.put(cardshape);
vertexBuffer.position(0);
byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
textureBuffer.put(textureshape);
textureBuffer.position(0);
// Set the background color to black ( rgba ).
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
// Enable Smooth Shading, default not really needed.
gl.glShadeModel(GL10.GL_SMOOTH);
// Depth buffer setup.
gl.glClearDepthf(1.0f);
// Enables depth testing.
gl.glEnable(GL10.GL_DEPTH_TEST);
// The type of depth testing to do.
gl.glDepthFunc(GL10.GL_LEQUAL);
// Really nice perspective calculations.
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
gl.glEnable(GL10.GL_TEXTURE_2D);
loadGLTexture(gl);
}
public void onDrawFrame(GL10 gl) {
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glDisable(GL10.GL_DEPTH_TEST);
gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection
gl.glPushMatrix(); // Push The Matrix
gl.glLoadIdentity(); // Reset The Matrix
gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix
gl.glPushMatrix(); // Push The Matrix
gl.glLoadIdentity(); // Reset The Matrix
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glLoadIdentity();
gl.glTranslatef(card.x, card.y, 0.0f);
gl.glBindTexture(GL10.GL_TEXTURE_2D, card.texture[0]); //activates texture to be used now
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
// Sets the current view port to the new size.
gl.glViewport(0, 0, width, height);
// Select the projection matrix
gl.glMatrixMode(GL10.GL_PROJECTION);
// Reset the projection matrix
gl.glLoadIdentity();
// Calculate the aspect ratio of the window
GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f,
100.0f);
// Select the modelview matrix
gl.glMatrixMode(GL10.GL_MODELVIEW);
// Reset the modelview matrix
gl.glLoadIdentity();
}
public int[] texture = new int[1];
public void loadGLTexture(GL10 gl) {
// loading texture
Bitmap bitmap;
bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);
// generate one texture pointer
gl.glGenTextures(0, texture, 0); //adds texture id to texture array
// ...and bind it to our array
gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now
// create nearest filtered texture
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
// Use Android GLUtils to specify a two-dimensional texture image from our bitmap
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
// Clean up
bitmap.recycle();
}
Is there anything I have done wrong, or something I haven't done? It works perfectly fine in the emulator, so I could only assume it was the power-of-2 issue, but as I said, I tried that using a 128x128 texture on a square and it didn't show. Any help would be appreciated.
EDIT: I have also tried setting minSdkVersion to 3, loading the bitmap via an input stream (bitmap = BitmapFactory.decodeStream(is)), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder, and then trying them in the raw folder. Any other ideas?
I'm actually looking for the solution to a similar problem right now. I think I might have a temporary fix for you, however.
The problem appears to be that on the emulator the orthographic view is flipped. To solve this, in my app we added an option in preferences to manually flip the view if nothing draws. Here's the snippet that handles this:
if (!flipped)
{
glOrthof(0, screenWidth, screenHeight, 0, -1, 1); //--Device
}
else
{
glOrthof(0, screenWidth, 0, -screenHeight, -1, 1); //--Emulator
}
Hope this helps! If anybody has a more general solution, I'd be happy to hear it!
I didn't look at your code, but I have been down that road before. Developing in OpenGL is a real pain in the ass. If you are not obligated to use OpenGL, then use a graphics engine. Unity is a great one and it's free. Also, your game would work on Android, iOS, and other platforms. Study your choices carefully. Good luck.
EDIT: Solved it! I made a stupid mistake: I had a textureId field I'd forgotten about, when it was textureID I should have been using.
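Concretely: loadGLTextures below stores the generated id in mTextureID, while onDrawFrame checks and binds mTextureId, so the draw path never sees the loaded texture. The fix is simply to use the same field in both places, for example:
// In onDrawFrame, bind the field that loadGLTextures actually wrote to:
if (mTextureID != -1 && mTextureBuffer != null) {
    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
}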
Okay, I am fully aware that this is a recurring question, and that there are a lot of tutorials and open-source code. But I've been trying as best as I can for quite a while here, and my screen is still blank (with whatever color I set using glClearColor()).
So, I would be grateful for some pointers to what I'm doing wrong, or even better, some working code that will render a resource image.
I'll show what I've got so far (by doing some crafty copy-pasting) in my onDrawFrame of the class that implements the Renderer. I've removed some of the jumping between methods, and will simply paste it in the order it is executed.
Feel free to disregard my current code, I'm more than happy to start over, if anyone can give me a working piece of code.
Setup:
bitmap = BitmapFactory.decodeResource(panel.getResources(),
R.drawable.test);
addGameComponent(new MeleeAttackComponent());
// Mapping coordinates for the vertices
float textureCoordinates[] = { 0.0f, 2.0f, //
2.0f, 2.0f, //
0.0f, 0.0f, //
2.0f, 0.0f, //
};
short[] indices = new short[] { 0, 1, 2, 1, 3, 2 };
float[] vertices = new float[] { -0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f };
setIndices(indices);
setVertices(vertices);
setTextureCoordinates(textureCoordinates);
protected void setVertices(float[] vertices) {
// a float is 4 bytes, therefore we multiply the number of
// vertices by 4.
ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
vbb.order(ByteOrder.nativeOrder());
mVerticesBuffer = vbb.asFloatBuffer();
mVerticesBuffer.put(vertices);
mVerticesBuffer.position(0);
}
protected void setIndices(short[] indices) {
// a short is 2 bytes, therefore we multiply the number of
// indices by 2.
ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
ibb.order(ByteOrder.nativeOrder());
mIndicesBuffer = ibb.asShortBuffer();
mIndicesBuffer.put(indices);
mIndicesBuffer.position(0);
mNumOfIndices = indices.length;
}
protected void setTextureCoordinates(float[] textureCoords) {
// a float is 4 bytes, therefore we multiply the number of
// texture coordinates by 4.
ByteBuffer byteBuf = ByteBuffer
.allocateDirect(textureCoords.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
mTextureBuffer = byteBuf.asFloatBuffer();
mTextureBuffer.put(textureCoords);
mTextureBuffer.position(0);
}
//The onDrawFrame(GL10 gl)
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, -4);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);
if (shouldLoadTexture) {
loadGLTextures(gl);
shouldLoadTexture = false;
}
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glEnable(GL10.GL_TEXTURE_2D);
// Enable the texture state
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Point to our buffers
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
}
gl.glTranslatef(posX, posY, 0);
// Draw the vertices as triangles, based on the index buffer.
gl.glDrawElements(GL10.GL_TRIANGLES, mNumOfIndices,
GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
private void loadGLTextures(GL10 gl) {
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureID = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
It doesn't crash, no exceptions, simply a blank screen with color. I've printed stuff in there, so I'm pretty sure it is all executed.
I know it's not optimal to just paste code, but at the moment, I just want to be able to do what I was able to do with canvas :)
Thanks a lot
If you're getting the background colour, that means your window is properly set up. OpenGL is connected to that area of the screen.
However, OpenGL clips to the near and far clip planes, ensuring that objects don't cross or intersect the camera (which, both mathematically and logically, doesn't make sense) and that objects too far away don't appear. So if you've not set up modelview and projection correctly, it's probable that all your geometry is being clipped.
Modelview is used to map from world to eye space. Projection maps from eye space to screen space. So a typical applications uses the former to position objects within the scene, and position the scene relative to the camera, then the latter deals with whether the camera sees with perspective or not, how many world units make how many screen units, etc.
If you look at examples like this one, particularly onSurfaceChanged, you'll see an example of a perspective projection with a camera fixed at the origin.
Because the camera is at (0, 0, 0), leaving your geometry on z = 0 as your code does will cause it to be clipped. In that example code they've set the near clip plane to be at z = 0.1, so in your existing code you could change:
gl.glTranslatef(posX, posY, 0);
To:
gl.glTranslatef(posX, posY, -1.0f);
To push your geometry back sufficiently far to appear on screen.
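For completeness, a sketch of the kind of projection setup that example uses (near plane at 0.1, camera at the origin); the field of view and far plane here are just placeholder values. With this, geometry translated to z = -1 lands between the near and far planes and is no longer clipped:
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    float aspect = (float) width / height;
    GLU.gluPerspective(gl, 45.0f, aspect, 0.1f, 100.0f);  // near = 0.1, far = 100
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}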