When rendering to a texture (a secondary framebuffer) in OpenGL, anything transparent overwrites the alpha of whatever is below it rather than blending with it the way the color channels do. This means that if I render a transparent object on top of an existing opaque object, the result is a texture that is transparent at that spot, even though it should be opaque.
In this picture, the space background is rendered normally; the framebuffer is then switched, opaque blue is rendered, and then the green and red are rendered on top. The texture that the framebuffer was rendering to is then used to draw onto the original framebuffer (the window), and the result is as seen.
Some code:
Framebuffer/Texture Creation:
int texture = glGenTextures();
framebufferID = glGenFramebuffers();
glBindFramebuffer(GL_FRAMEBUFFER, framebufferID);
glBindTexture(GL_TEXTURE_2D, texture);
int width, height;
width = (int) (getMaster().getWindow().getWidth() * (xscale / (16f * getMaster().getGuiCamera().getWidth())) * 1.2f);
height = (int) (getMaster().getWindow().getHeight() * (yscale / 9f) * 1.15f);
// Allocate the color texture and attach it to the framebuffer
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
// Allocate a depth texture and attach it as well
int depth = glGenTextures();
glBindTexture(GL_TEXTURE_2D, depth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth, 0);
this.texture = new Texture(texture, width, height, true);
glDrawBuffer(GL_FRONT_AND_BACK);
The depth attachment was something I added while trying to get this to work; it had no effect on the output texture.
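One quick sanity check worth adding after the attachment calls, to rule out a missing or incompatible attachment (a hedged sketch, assuming LWJGL-style static imports as in the code above):
int status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    // The framebuffer is unusable; the status code says why
    throw new IllegalStateException("Framebuffer incomplete: 0x" + Integer.toHexString(status));
}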
Render Code:
getMaster().returnToViewportAfterFunction(new Vector2i(texture.getWidth(), texture.getHeight()),
    () -> getMaster().returnToFramebufferAfterFunction(framebufferID, () -> {
        shapeShader.bind();
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        model.bind();
        shapeShader.setUniform("color", 0f, 0f, 1f, 1f); // Blue
        shapeShader.setUniform("projection", this.camera.getProjectionForGuiWindow(vec2zero, xscale, yscale));
        glDrawElements(GL_TRIANGLES, model.getVerticeCount(), GL_UNSIGNED_INT, 0);
        shapeShader.setUniform("color", 0f, 1f, 0f, 0.5f); // Green
        shapeShader.setUniform("projection", this.camera.getProjectionForGuiWindow(vec2zero, xscale / 2f, yscale / 2f));
        glDrawElements(GL_TRIANGLES, model.getVerticeCount(), GL_UNSIGNED_INT, 0);
        shapeShader.setUniform("color", 1f, 0f, 0f, 0.2f); // Red
        shapeShader.setUniform("projection", this.camera.getProjectionForGuiWindow(vec2zero, xscale / 4f, yscale / 2f));
        glDrawElements(GL_TRIANGLES, model.getVerticeCount(), GL_UNSIGNED_INT, 0);
        glDisable(GL_BLEND);
        model.unbind();
    })
);
//Renders the texture we just made
shader.bind();
model.bind();
textureCoords.bind(texture);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
shader.setUniform("color", 1f, 1f, 1f, 1f);
shader.setUniform("projection", camera.getProjectionForGuiWindow(getPosition(), xscale, yscale));
glDrawElements(GL_TRIANGLES, model.getVerticeCount(), GL_UNSIGNED_INT, 0);
glDisable(GL_BLEND);
model.unbind();
Rendering a transparent object over opaque objects in framebuffer 0 does not make the resulting pixels transparent; if it did, they would appear as a blend of the glClearColor and the opaque object. So why does it happen in the framebuffer I am using to render to a texture? Shouldn't the behavior be consistent?
I feel I may be missing some attachment on my framebuffer, but I can't tell what. I saw a solution that said to use glColorMask(true, true, true, false), the last false meaning no alpha writes, and at first glance it appears to work. But why should I disable alpha writes when transparency is what I'm going for? It also appears to cause many issues if left on. Can anyone lend some insight? Thanks in advance!
EDIT: After further analysis, the glColorMask was not the solution.
You have to use a separable blend function which treats the RGB and alpha values separately: glBlendFuncSeparate. For the RGB values you keep using GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA. For the alpha values use something like GL_ONE, GL_ONE so that the alphas simply add up (the result is clamped to 1, so an opaque destination stays opaque).
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
For anyone wondering, the best solution to my issue was to use glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE) instead of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
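To spell out the destination-alpha arithmetic behind that choice (a worked sketch, not part of the original answers):
// Alpha channel with glBlendFuncSeparate(..., GL_ONE_MINUS_DST_ALPHA, GL_ONE):
//   a_out = a_src * (1 - a_dst) + a_dst
//   over an opaque pixel  (a_dst = 1): a_out = 1      -> stays opaque
//   over a cleared pixel  (a_dst = 0): a_out = a_src  -> coverage is recorded
// With plain glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) applied to alpha:
//   a_out = a_src * a_src + a_dst * (1 - a_src)
//   e.g. a_src = 0.5 over a_dst = 1 gives 0.75, so the opaque pixel turns
//   translucent -- exactly the artifact described in the question.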
I'm using OpenGL ES 2 on Android 7.0. Apparently, though, I have some problems when trying to load a texture from the assets folder.
This is the code I'm using:
public static int loadTexture(final AssetManager assetManager, final String img)
{
    final int[] textureHandle = new int[1];
    glGenTextures(1, textureHandle, 0);
    if (textureHandle[0] == 0)
        throw new RuntimeException("Error loading texture");
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inScaled = false;
    try
    {
        final Bitmap bitmap = BitmapFactory.decodeStream(assetManager.open(img));
        glBindTexture(GL_TEXTURE_2D, textureHandle[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        GLUtils.texImage2D(GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    textures.put(img, textureHandle[0]);
    return textureHandle[0];
}
This is called from onSurfaceCreated, as I know I need a current OpenGL context.
Most of the time this works and gives me the expected result, but a considerable number of other times I get a black texture instead. No exception is thrown.
I know the problem does not depend on the 3D model or on the texture, because I've tried other ones. The only thing I can do when this happens is restart my app a couple of times.
I've tried googling around, but with no results. I know I could try to implement another loading function, but first I'd like to understand why this one is not OK and where it's failing.
If you think this could depend on the code I'm using to render the scene, here it is (no VAOs because I need to support older devices and OpenGL ES 2):
glBindBuffer(GL_ARRAY_BUFFER, model.getCoordsVBO());
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, model.getTextureVBO());
glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, model.getNormalsVBO());
glVertexAttribPointer(2, 3, GL_FLOAT, false, 0, 0);
glEnableVertexAttribArray(2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, entity.getTexture());
loadFloat(loc_textureSampler, 0);
if (model.getIndicesBuffer() != null)
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, model.getIndicesVBO());
    glDrawElements(GL_TRIANGLES, model.getVertexCount(), GL_UNSIGNED_INT, 0);
}
else
    glDrawArrays(GL_TRIANGLES, 0, model.getVertexCount());
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
What I didn't say (because I thought it was irrelevant) is that I had a little more code at the top of the function...
if(textures.containsKey(img))
return textures.get(img);
and at the bottom
textures.put(img, textureHandle[0]);
I thought it was smart to keep track of the textures: I could call this load function any time I needed a texture, and if it had already been loaded it would be returned without loading it again.
That might work in a desktop application, but I hadn't considered that on Android the OpenGL context can be destroyed when the app goes to the background, which deletes the textures, while my map of texture IDs survives. When the activity restarts, the cached IDs point to textures that no longer exist.
Removing that HashMap solved the problem, demonstrating that the actual reading and loading code had nothing to do with the black texture.
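If caching is still desirable, one option (a sketch, assuming the cache is a static HashMap&lt;String, Integer&gt; named textures as in the code above) is to clear it whenever a fresh context is created, so stale IDs from a destroyed context are never returned:
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config)
{
    // A new GL context means every previously created texture is gone;
    // drop the cached IDs so loadTexture() recreates them on demand.
    textures.clear();
    // ... other one-time GL setup ...
}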
I am writing a game which uses OpenGL ES. I have created my renderer class and have a sample of my game working on the emulator; however, none of the textures display on an actual device. I have read that the most common cause for this is textures needing power-of-two dimensions, but I have tried drawing a square (128x128) with a texture of the same size mapped to it, and it still only shows on the emulator. Further to that, my actual game will be using rectangles, so I'm unsure how I can map square textures onto them.
This is my code so far (The game is 2d so I'm using ortho mode):
EDIT: I have updated my code; it is now correctly binding textures and using textures of size 128x128, but I'm still only seeing textures on the emulator.
public void onSurfaceCreated(GL10 gl, EGLConfig config)
{
    byteBuffer = ByteBuffer.allocateDirect(cardshape.length * 4);
    byteBuffer.order(ByteOrder.nativeOrder());
    vertexBuffer = byteBuffer.asFloatBuffer();
    vertexBuffer.put(cardshape);
    vertexBuffer.position(0);
    byteBuffer = ByteBuffer.allocateDirect(textureshape.length * 4);
    byteBuffer.order(ByteOrder.nativeOrder());
    textureBuffer = byteBuffer.asFloatBuffer();
    textureBuffer.put(textureshape);
    textureBuffer.position(0);
    // Set the background color to black (rgba).
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
    // Enable smooth shading (the default, not really needed).
    gl.glShadeModel(GL10.GL_SMOOTH);
    // Depth buffer setup.
    gl.glClearDepthf(1.0f);
    // Enable depth testing.
    gl.glEnable(GL10.GL_DEPTH_TEST);
    // The type of depth testing to do.
    gl.glDepthFunc(GL10.GL_LEQUAL);
    // Really nice perspective calculations.
    gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
    gl.glEnable(GL10.GL_TEXTURE_2D);
    loadGLTexture(gl);
}
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glDisable(GL10.GL_DEPTH_TEST);
    gl.glMatrixMode(GL10.GL_PROJECTION); // Select projection matrix
    gl.glPushMatrix();                   // Push the matrix
    gl.glLoadIdentity();                 // Reset the matrix
    gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);  // Select modelview matrix
    gl.glPushMatrix();                   // Push the matrix
    gl.glLoadIdentity();                 // Reset the matrix
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glLoadIdentity();
    gl.glTranslatef(card.x, card.y, 0.0f);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, card.texture[0]); // activates the texture to be used now
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    // Pop the matrices pushed above so the stacks don't overflow over time
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glPopMatrix();
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glPopMatrix();
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
    // Set the current view port to the new size.
    gl.glViewport(0, 0, width, height);
    // Select the projection matrix
    gl.glMatrixMode(GL10.GL_PROJECTION);
    // Reset the projection matrix
    gl.glLoadIdentity();
    // Calculate the aspect ratio of the window
    GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
    // Select the modelview matrix
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    // Reset the modelview matrix
    gl.glLoadIdentity();
}
public int[] texture = new int[1];

public void loadGLTexture(GL10 gl) {
    // Load the texture bitmap
    Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);
    // Generate one texture pointer (the id ends up in texture[0])...
    gl.glGenTextures(1, texture, 0);
    // ...and bind it
    gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]);
    // Create nearest/linear filtered texture
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    // Use Android GLUtils to specify a two-dimensional texture image from our bitmap
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    // Clean up
    bitmap.recycle();
}
Is there anything I have done wrong? Or something I haven't done? It works perfectly fine in the emulator, so I could only assume it was the power-of-two issue, but as I said, I tried that using a 128x128 texture on a square and it didn't show. Any help would be appreciated.
EDIT: I have also tried setting minSdkVersion to 3, loading the bitmap via an input stream (bitmap = BitmapFactory.decodeStream(is)), setting BitmapFactory.Options.inScaled to false, and putting the images in the nodpi folder and then trying them in the raw folder. Any other ideas?
I'm actually looking for the solution to a similar problem right now. I think I might have a temporary fix for you, however.
The problem appears to be that on the emulator the orthographic view is flipped. To solve this, we added a preference option in my app to manually flip the view if nothing draws. Here's the snippet that handles it:
if (!flipped)
{
glOrthof(0, screenWidth, screenHeight, 0, -1, 1); //--Device
}
else
{
glOrthof(0, screenWidth, 0, -screenHeight, -1, 1); //--Emulator
}
Hope this helps! If anybody has a more general solution, I'd be happy to hear it!
I didn't look at your code, but I have been down that road before. Developing in OpenGL is a real pain in the ass. If you are not obligated to use OpenGL, use a graphics engine instead. Unity is a great one and it's free, and your game would work on Android, iOS, and other platforms. Study your choices carefully. Good luck.
I just started using OpenGL. I need to load a lot of bitmaps for animation. I can get it to work great for a few frames, but I run out of memory when I try to load all the frames. How can I load just a few frames at a time? This is the code I'm using to load the textures:
public void loadGLTexture(GL10 gl, Context context) {
    Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.r1);
    Bitmap bitmap01 = Bitmap.createScaledBitmap(bitmap, 512, 512, false);
    Bitmap bitmap2 = BitmapFactory.decodeResource(context.getResources(), R.drawable.r2);
    Bitmap bitmap02 = Bitmap.createScaledBitmap(bitmap2, 512, 512, false);
    Bitmap bitmap3 = BitmapFactory.decodeResource(context.getResources(), R.drawable.r3);
    Bitmap bitmap03 = Bitmap.createScaledBitmap(bitmap3, 512, 512, false);
    bitmap = bitmap01;
    bitmap2 = bitmap02;
    bitmap3 = bitmap03;
    // Generate three texture pointers
    gl.glGenTextures(3, textures, 0);
    // Create texture and bind it to texture 0
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    // Create texture and bind it to texture 1
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap2, 0);
    // Create texture and bind it to texture 2
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[2]);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap3, 0);
    // Clean up
    bitmap.recycle();
    bitmap2.recycle();
    bitmap3.recycle();
}
Well, it's the classic issue of loading too many bitmaps into the (very limited) memory available to your app's process. I'm not gonna get deep into this; suffice to say that pre-Honeycomb these bitmaps were stored off-heap and from Honeycomb onward they're on the heap, but the issue remains: a bitmap of 128x256 pixels takes 4*128*256 bytes (128 KB) in memory, and you usually have between 16 and 48 MB of memory available to your app (depending on device and Android version).
The good news is that once you load a bitmap and create an OpenGL texture from it, you no longer need the Bitmap object; you can recycle it and null it out. So try loading (and, as I see from your code, scaling) the first bitmap, then creating the first texture object, then recycling and de-referencing that first bitmap. Then go on and do the same with the second bitmap for the second texture object, and so on.
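A minimal sketch of that pattern (the helper name loadSingleTexture and the 512x512 target size are my assumptions, not the asker's code):
private int loadSingleTexture(GL10 gl, Context context, int resId) {
    Bitmap raw = BitmapFactory.decodeResource(context.getResources(), resId);
    Bitmap scaled = Bitmap.createScaledBitmap(raw, 512, 512, false);
    if (scaled != raw) raw.recycle();  // free the full-size decode right away
    int[] tex = new int[1];
    gl.glGenTextures(1, tex, 0);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, tex[0]);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, scaled, 0);
    scaled.recycle();                  // GL owns a copy of the pixels now
    return tex[0];
}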
First, you could recycle each bitmap as soon as you've scaled it. Second, you should prescale the images, that is, store them at 512x512 from the get-go.
But most importantly, you should create a texture atlas for this. Essentially, instead of using multiple textures, you could use a single, large texture and store all the fragments there. The idea would be to create, say, a 2048x2048 texture, which can store 16 "cels" of 512x512 pixels (4 rows of 4 cels each), and then when drawing your quads on screen, altering the texture coordinates to display a particular cel.
For offline generation, there are a number of utilities that can help; TexturePacker is one, and there are many others. To do it in code, you can generate your texture once:
gl.glGenTextures(1, textures, 0);
//Create Texture and bind it to texture 0
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, 2048, 2048, 0, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, null);
That last line allocates an empty texture of size 2048x2048.
Then for each of your bitmaps, you can use:
int xoffset = (imageIndex % 4) * 512;
int yoffset = (imageIndex / 4) * 512;
GLUtils.texSubImage2D(GL10.GL_TEXTURE_2D, 0, xoffset, yoffset, bitmap);
This will load your bitmap into a portion of the texture. As @Joseph hinted, the RAM constraints on GL textures are quite relaxed.
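To then display cel number imageIndex, the quad's texture coordinates can be derived from the same layout (a sketch under the 4x4-grid, 512-pixel-cel assumption above):
int imageIndex = 5;                 // which cel to show, 0..15
float cell = 512f / 2048f;          // size of one cel in texture space
float u0 = (imageIndex % 4) * cell;
float v0 = (imageIndex / 4) * cell;
// The pair order must match the vertex order of your quad (triangle strip here)
float[] texCoords = {
    u0,        v0 + cell,
    u0 + cell, v0 + cell,
    u0,        v0,
    u0 + cell, v0
};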
EDIT: Solved it! I made a stupid mistake: I had a textureId field I'd forgotten about, when it was textureID I should have been using.
Okay, I am fully aware that this is a recurring question, and that there are a lot of tutorials and open-source code. But I've been trying as best as I can for quite a while here, and my screen is still blank (with whatever color I set using glClearColor()).
So, I would be grateful for some pointers to what I'm doing wrong, or even better, some working code that will render a resource image.
I'll show what I've got so far (by doing some crafty copy-pasting) from the onDrawFrame of the class that implements Renderer. I've removed some of the jumping between methods and will simply paste it in the order it is executed.
Feel free to disregard my current code, I'm more than happy to start over, if anyone can give me a working piece of code.
Setup:
bitmap = BitmapFactory.decodeResource(panel.getResources(), R.drawable.test);
addGameComponent(new MeleeAttackComponent());
// Mapping coordinates for the vertices
float textureCoordinates[] = { 0.0f, 2.0f,
                               2.0f, 2.0f,
                               0.0f, 0.0f,
                               2.0f, 0.0f };
short[] indices = new short[] { 0, 1, 2, 1, 3, 2 };
float[] vertices = new float[] { -0.5f, -0.5f, 0.0f,
                                  0.5f, -0.5f, 0.0f,
                                 -0.5f,  0.5f, 0.0f,
                                  0.5f,  0.5f, 0.0f };
setIndices(indices);
setVertices(vertices);
setTextureCoordinates(textureCoordinates);
protected void setVertices(float[] vertices) {
    // a float is 4 bytes, therefore we multiply the number of vertices by 4.
    ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
    vbb.order(ByteOrder.nativeOrder());
    mVerticesBuffer = vbb.asFloatBuffer();
    mVerticesBuffer.put(vertices);
    mVerticesBuffer.position(0);
}

protected void setIndices(short[] indices) {
    // a short is 2 bytes, therefore we multiply the number of indices by 2.
    ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
    ibb.order(ByteOrder.nativeOrder());
    mIndicesBuffer = ibb.asShortBuffer();
    mIndicesBuffer.put(indices);
    mIndicesBuffer.position(0);
    mNumOfIndices = indices.length;
}

protected void setTextureCoordinates(float[] textureCoords) {
    // a float is 4 bytes, therefore we multiply the number of coordinates by 4.
    ByteBuffer byteBuf = ByteBuffer.allocateDirect(textureCoords.length * 4);
    byteBuf.order(ByteOrder.nativeOrder());
    mTextureBuffer = byteBuf.asFloatBuffer();
    mTextureBuffer.put(textureCoords);
    mTextureBuffer.position(0);
}
// The body of onDrawFrame(GL10 gl)
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, -4);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);
if (shoudlLoadTexture) {
    loadGLTextures(gl);
    shoudlLoadTexture = false;
}
if (mTextureId != -1 && mTextureBuffer != null) {
    gl.glEnable(GL10.GL_TEXTURE_2D);
    // Enable the texture state
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    // Point to our buffers
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
}
gl.glTranslatef(posX, posY, 0);
// Draw the indexed triangles
gl.glDrawElements(GL10.GL_TRIANGLES, mNumOfIndices, GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
if (mTextureId != -1 && mTextureBuffer != null) {
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
private void loadGLTextures(GL10 gl) {
    int[] textures = new int[1];
    gl.glGenTextures(1, textures, 0);
    mTextureID = textures[0]; // note: mTextureID, not the mTextureId checked in onDrawFrame (the bug from the EDIT)
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
    gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
It doesn't crash, no exceptions, simply a blank screen with color. I've printed stuff in there, so I'm pretty sure it is all executed.
I know it's not optimal to just paste code, but at the moment, I just want to be able to do what I was able to do with canvas :)
Thanks a lot
If you're getting the background colour, that means your window is properly set up. OpenGL is connected to that area of the screen.
However, OpenGL clips to the near and far clip planes, ensuring that objects don't cross or intersect the camera (which, both mathematically and logically, doesn't make sense) and that objects too far away don't appear. So if you haven't set up the modelview and projection matrices correctly, it's probable that all your geometry is being clipped.
Modelview maps from world space to eye space; projection maps from eye space to screen space. A typical application uses the former to position objects within the scene and to position the scene relative to the camera, while the latter determines whether the camera sees with perspective or not, how many world units make how many screen units, and so on.
If you look at examples like this one, particularly onSurfaceChanged, you'll see an example of a perspective projection with a camera fixed at the origin.
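Since the linked example isn't reproduced here, its relevant part looks roughly like this (a sketch, not the exact code being referenced):
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    // Perspective projection with the near plane at z = 0.1 and the far at 100;
    // the camera sits at the origin looking down -z.
    GLU.gluPerspective(gl, 45.0f, (float) width / height, 0.1f, 100.0f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}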
Because the camera is at (0, 0, 0), leaving your geometry at z = 0 as your code does will cause it to be clipped. In that example the near clip plane is at z = 0.1, so in your existing code you could change:
gl.glTranslatef(posX, posY, 0);
to:
gl.glTranslatef(posX, posY, -1.0f);
to push your geometry back far enough to appear on screen.