I am writing a game which uses OpenGL ES. I have created my renderer class and have a sample of my game working on the emulator, but none of the textures display on an actual device. I have read that the most common cause of this is textures not having power-of-two dimensions, but I have tried drawing a 128x128 square with a 128x128 texture mapped to it and it still only shows on the emulator. Further to that, my actual game will be using rectangles, so I'm unsure how I can map square textures onto them.
This is my code so far (the game is 2D, so I'm using ortho mode):
EDIT: I have updated my code; it is now correctly binding textures and using textures of size 128x128, but I'm still only seeing textures on the emulator.
public void onSurfaceCreated(GL10 gl, EGLConfig config)
{
    byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
    byteBuffer.order(ByteOrder.nativeOrder());
    vertexBuffer = byteBuffer.asFloatBuffer();
    vertexBuffer.put(cardshape);
    vertexBuffer.position(0);

    byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
    byteBuffer.order(ByteOrder.nativeOrder());
    textureBuffer = byteBuffer.asFloatBuffer();
    textureBuffer.put(textureshape);
    textureBuffer.position(0);

    // Set the background color to black (rgba).
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
    // Enable Smooth Shading, default not really needed.
    gl.glShadeModel(GL10.GL_SMOOTH);
    // Depth buffer setup.
    gl.glClearDepthf(1.0f);
    // Enables depth testing.
    gl.glEnable(GL10.GL_DEPTH_TEST);
    // The type of depth testing to do.
    gl.glDepthFunc(GL10.GL_LEQUAL);
    // Really nice perspective calculations.
    gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);

    gl.glEnable(GL10.GL_TEXTURE_2D);
    loadGLTexture(gl);
}
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glDisable(GL10.GL_DEPTH_TEST);

    gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection
    gl.glPushMatrix();                   // Push The Matrix
    gl.glLoadIdentity();                 // Reset The Matrix
    gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);

    gl.glMatrixMode(GL10.GL_MODELVIEW);  // Select Modelview Matrix
    gl.glPushMatrix();                   // Push The Matrix
    gl.glLoadIdentity();                 // Reset The Matrix

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

    gl.glLoadIdentity();
    gl.glTranslatef(card.x, card.y, 0.0f);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, card.texture[0]); // activates texture to be used now
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);

    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
    // Sets the current view port to the new size.
    gl.glViewport(0, 0, width, height);
    // Select the projection matrix
    gl.glMatrixMode(GL10.GL_PROJECTION);
    // Reset the projection matrix
    gl.glLoadIdentity();
    // Calculate the aspect ratio of the window
    GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
    // Select the modelview matrix
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    // Reset the modelview matrix
    gl.glLoadIdentity();
}
public int[] texture = new int[1];

public void loadGLTexture(GL10 gl) {
    // loading texture
    Bitmap bitmap;
    bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);
    // generate one texture pointer
    gl.glGenTextures(0, texture, 0); // adds texture id to texture array
    // ...and bind it to our array
    gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); // activates texture to be used now
    // create nearest filtered texture
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    // Use Android GLUtils to specify a two-dimensional texture image from our bitmap
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    // Clean up
    bitmap.recycle();
}
Is there anything I have done wrong? Or something I haven't done? It works perfectly fine in the emulator, so I could only assume it was the power-of-two issue, but as I said I tried a 128x128 texture on a square and it didn't show. Any help would be appreciated.
EDIT: I have also tried setting minSdkVersion to 3, loading the bitmap via an input stream (bitmap = BitmapFactory.decodeStream(is)), setting BitmapFactory.Options.inScaled to false, putting the images in the drawable-nodpi folder, and then trying them in the raw folder. Any other ideas?
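(For reference, the inScaled route looks roughly like this; the idea is that on a device the density scaling can otherwise turn a 128x128 drawable into a non-power-of-two bitmap. The resource name is just the one from loadGLTexture below.)

    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inScaled = false; // keep the original 128x128 pixels, no density scaling
    Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image, opts);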
I'm actually looking for the solution to a similar problem right now. I think I might have a temporary fix for you, however.
The problem appears to be that on the emulator the orthographic view is flipped. To solve this, in my app we added an option in preferences to manually flip the view if nothing draws. Here's the snippet that handles this:
if (!flipped)
{
    glOrthof(0, screenWidth, screenHeight, 0, -1, 1); // --Device
}
else
{
    glOrthof(0, screenWidth, 0, -screenHeight, -1, 1); // --Emulator
}
Hope this helps! If anybody has a more general solution, I'd be happy to hear it!
I didn't look at your code, but I have been down that road before. Developing in OpenGL is a real pain in the ass. If you are not obligated to use OpenGL, then use a graphics engine. Unity is a great one and it's free. Also, your game would work on Android, iOS, and other platforms. Study your choices carefully. Good luck.
Related
I just started trying out some OpenGL with Java. I downloaded LWJGL and the Slick-Util library. Now I'm trying to paint some images on the screen, which is working quite fine, but I have two big problems:
When I rotate my image by about 45°, you can see bits of the same image at the corners, as if it were a sprite sheet of the same image being rotated.
How do I scale my image? It's pretty small, and glScalef() scales the image itself but not the space where it's drawn. So if the image is 16x16 pixels and I scale it up, I just see part of the scaled image within those 16x16 pixels.
Here's my code for the OpenGL:
public class Widget {
    String name;
    int angle;
    public Texture image_texture;
    public String image_path;
    public int image_ID;
    public int cord_x = 0;
    public int cord_y = 0;
    static LinkedList<Widget> llwidget = new LinkedList<Widget>();

    public Widget(String path) {
        llwidget.add(this);
        image_path = path;
        try {
            image_texture = TextureLoader.getTexture("PNG",
                    ResourceLoader.getResourceAsStream(image_path), GL_NEAREST);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        image_ID = image_texture.getTextureID();

        glEnable(GL_TEXTURE_2D);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glViewport(0, 0, Display.getWidth(), Display.getHeight());
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, Display.getWidth(), Display.getHeight(), 0, 1, -1);
        glMatrixMode(GL_MODELVIEW);
    }

    void render() {
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glTranslatef(0.5f, 0.5f, 0.0f);
        //! Graphics bug
        glRotatef(angle, 0.0f, 0.0f, 1.0f);
        //
        glTranslatef(-0.5f, -0.5f, 0.0f);
        //! Doesn't work
        glScalef(2f, 2f, 2f);
        //
        glMatrixMode(GL_MODELVIEW);

        Color.white.bind();
        image_texture.bind();

        glBegin(GL_QUADS);
        glTexCoord2f(0, 0);
        glVertex2f(cord_x, Display.getHeight() - cord_y);
        glTexCoord2f(1, 0);
        glVertex2f(cord_x + image_texture.getTextureWidth(), Display.getHeight() - cord_y);
        glTexCoord2f(1, 1);
        glVertex2f(cord_x + image_texture.getTextureWidth(), Display.getHeight() - cord_y + image_texture.getTextureHeight());
        glTexCoord2f(0, 1);
        glVertex2f(cord_x, Display.getHeight() - cord_y + image_texture.getTextureHeight());
        glEnd();
    }
}
Question 1:
Calling glScalef(2f, 2f, 2f) while the matrix mode is GL_TEXTURE zooms the texture inside a quad whose size is determined by your glVertex2f calls. This can lead to unwanted artifacts at the edges of the quad.
Question 2:
The main problem here is that the program is not in a state where glScalef(2f, 2f, 2f) affects your quad coordinates at all: after calling glMatrixMode(GL_TEXTURE), all following matrix operations affect the texture matrix. To get a zoom effect on the quad you draw with glVertex2f, you need to switch to GL_MODELVIEW mode before calling glScalef.
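For example, a minimal, untested sketch of render() with the scale moved onto the modelview matrix; it keeps the texture-matrix rotation from the original and assumes the quad is still drawn with the same glTexCoord2f/glVertex2f calls:

void render() {
    // rotate the texture around its centre, exactly as before
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.5f, 0.5f, 0.0f);
    glRotatef(angle, 0.0f, 0.0f, 1.0f);
    glTranslatef(-0.5f, -0.5f, 0.0f);

    // scale the geometry instead of the texture: switch to the modelview matrix first
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glScalef(2f, 2f, 1f); // note: applied around the origin, so it scales the quad's position too

    Color.white.bind();
    image_texture.bind();
    glBegin(GL_QUADS);
    // ... the same glTexCoord2f/glVertex2f calls as in the original render() ...
    glEnd();

    glPopMatrix();
}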
Alternative:
With glVertex2f you determine the coordinates at which the quad's vertices are drawn, so scaling those parameters directly would also do the job.
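As a rough sketch of that vertex-coordinate route, reusing the fields from the Widget class above (the factor 2f is just an illustrative scale):

float w = image_texture.getTextureWidth() * 2f;  // scaled on-screen width
float h = image_texture.getTextureHeight() * 2f; // scaled on-screen height
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(cord_x, Display.getHeight() - cord_y);
glTexCoord2f(1, 0); glVertex2f(cord_x + w, Display.getHeight() - cord_y);
glTexCoord2f(1, 1); glVertex2f(cord_x + w, Display.getHeight() - cord_y + h);
glTexCoord2f(0, 1); glVertex2f(cord_x, Display.getHeight() - cord_y + h);
glEnd();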
I'm quite new to OpenGL ES and I'm trying to draw some textured quads. I want to keep it 2D, so I decided to use an orthographic projection. What I really want is a plane that takes up the same relative amount of screen space on every device, regardless of the screen resolution.
The problem I encounter is the setup of the orthographic projection. The aspect ratio just isn't correct: a square is drawn as a rectangle stretched in the height. This is my code so far:
The Renderer:
// automatically looped by android
public void onDrawFrame(GL10 gl) {
    // clear screen and buffer
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

    // Draw elements
    for (GameObject object : level.getGameObjects()) {
        gl.glScalef(0.2f, 0.2f, 0.0f);
        object.draw(gl);
        gl.glLoadIdentity();
    }
}

public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
}

public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // Load all textures
    for (GameObject object : level.getGameObjects()) {
        object.bindTexture(gl);
    }

    // Initialize game canvas
    gl.glEnable(GL10.GL_TEXTURE_2D);         // Enable Texture Mapping
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Black background

    // enable texture transparency
    gl.glEnable(GL10.GL_BLEND);
    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
}
The draw method is exactly the same as in this tutorial: http://obviam.net/index.php/texture-mapping-opengl-android-displaying-images-using-opengl-and-squares/
Kind regards,
Daan
Why are you hardcoding the width and height to glOrthof? Shouldn't you use the passed in width and height?
gl.glOrthof(0f, width, 0f, height, -1f, 1f);
I have found the answer to my problem. First of all, I was hardcoding the width and height, which wasn't a good option. To have a fixed width on all screen resolutions, I now calculate the aspect ratio and use the width multiplied by that ratio for the height.
Another problem was that I hadn't reset the projection matrix prior to calling glOrthof. I have changed all this and it solved the problem:
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0f, 320f, 0f, 320f * aspect, -1f, 1f);
}
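(The aspect variable isn't defined in the snippet above; presumably it is the height-to-width ratio of the surface passed to onSurfaceChanged, for example:)

    float aspect = (float) height / (float) width; // assumed: so 320 * aspect follows the screen's proportions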
I hope this will be helpful for somebody.
Kind regards,
Daan
Looks like you've solved it. Can you provide details on how you calculated the aspect variable (a division of height and width, perhaps)? Just as an FYI, the width and height values passed in to the onSurfaceChanged() event are indeed dynamic; for instance, they are reversed when you flip the screen orientation.
I'm relatively new to this as well, but from my desktop GL experience, it's important to factor these in when the window size changes.
I'm trying to get my rendering-to-texture working. So far, it does all the necessary GL gubbins to draw on the texture and everything - the only problem is that it's getting the scaling all off.
I figured I'd want to set the viewport to the size of the texture, and the gluOrtho2D (the way I'm going to be drawing onto the texture) as -halfwidth, halfwidth, -halfheight, halfheight. This means that when drawing, position 0,0 should be in the center; a position of halfwidth, halfheight should be in the top right corner, etc.
I'm getting really weird effects though; it seems that it's not drawing onto the texture at the right scale, so everything gets skewed. Can anyone suggest what I might be doing wrong?
Thanks
public void renderToTexture(GLRenderer glRenderer, GL10 gl)
{
    boolean checkIfContextSupportsExtension = checkIfContextSupportsExtension(gl, "GL_OES_framebuffer_object");
    if (checkIfContextSupportsExtension)
    {
        GL11ExtensionPack gl11ep = (GL11ExtensionPack) gl;
        int mFrameBuffer = createFrameBuffer(gl, texture.getWidth(), texture.getHeight(), texture.getGLID());
        if (mFrameBuffer == -1)
        {
            return;
        }
        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, mFrameBuffer);

        int halfWidth = texture.getWidth() / 2;   // width/2;
        int halfHeight = texture.getHeight() / 2; // height/2;

        gl.glViewport(0, 0, texture.getWidth(), texture.getHeight());
        gl.glLoadIdentity();
        GLU.gluOrtho2D(gl, -halfWidth, halfWidth, -halfHeight, halfHeight);
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();

        gl.glClearColor(0f, 1f, 0f, 1f);
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

        // draw the old version of the texture to framebuffer:
        Quad quad = new Quad(texture.getWidth(), texture.getHeight());
        quad.setTexture(texture);
        SpriteRenderable sr = new SpriteRenderable(quad);
        sr.renderTo(glRenderer, gl, 1);

        // draw the new gl objects etc. to framebuffer
        for (Renderable renderable : renderThese)
        {
            if (renderable.isVisible()) {
                renderable.renderTo(glRenderer, gl, 1);
            }
        }

        // default to the old framebuffer
        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
    }
}
images:
Image 1: the game prior to any texture rendering.
Image 2: after the "blood splats" (currently pigs!) are rendered onto the "arena" background texture shown in picture 1. Notice that the original texture has shrunk too small to see in the middle (it's a few pixels), and the pig "blood splat" jumps in a zig-zag, flipping over the center of the texture and becoming smaller...
(Sorry, I don't have enough rep to post images in the post!)
Just a speculative guess, but did you remember to set the matrix mode to GL_PROJECTION prior to entering the renderToTexture function? It's not set inside the function, where it seems like it should be. Also, don't forget to restore the projection matrix and the viewport at the end of the function.
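For illustration, a rough, untested sketch of what that could look like around the existing gluOrtho2D call; screenWidth/screenHeight are placeholders for however the real on-screen surface size is tracked:

gl.glViewport(0, 0, texture.getWidth(), texture.getHeight());
gl.glMatrixMode(GL10.GL_PROJECTION); // select the projection matrix before gluOrtho2D
gl.glPushMatrix();
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, -halfWidth, halfWidth, -halfHeight, halfHeight);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();

// ... render the quad and the other renderables into the framebuffer ...

gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glPopMatrix();
gl.glViewport(0, 0, screenWidth, screenHeight); // restore the on-screen viewport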
I'm trying to make a copy of Minecraft in Java using OpenGL (LWJGL). The problem I'm facing is that everything in my 2D overlay (aiming cross in the middle, menus, etc.) is white. The 3D part of the game works great: every cube has a texture on each side.
But when I try to draw the overlay, as I said, every texture is white, though I can see its shape (because it has transparent areas). I'll add a picture of it.
(This is supposed to be the inventory)
As you can see, the overlay is completely white. And it should look like this:
I've already been searching the web for hours and can't seem to find a solution.
This is driving me crazy... I've already looked for instructions on how to create a 2D overlay on a 3D scene, but they don't help either. So I thought I'd give Stack Overflow a try.
Hopefully someone can help me?
Thanks for reading my question and for the (hopefully coming) answers!
Martijn
Here is the code:
Initialising OpenGL
public void initOpenGL() throws IOException
{
    // init OpenGL
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 800, 600, 0, 1, 300);
    glMatrixMode(GL_MODELVIEW);

    float color = 0.9f;
    glClearColor(color, color, color, color);

    glEnable(GL_TEXTURE_2D);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glShadeModel(GL_FLAT);

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);

    glEnable(GL_LINE_SMOOTH);
    glEnable(GL_CULL_FACE);

    glEnable(GL_FOG);
    glFog(GL_FOG_COLOR, MineCraft.wrapDirect(color, color, color, 1.0f));
    glFogi(GL_FOG_MODE, GL_LINEAR);
    glFogf(GL_FOG_START, _configuration.getViewingDistance() * 0.8f);
    glFogf(GL_FOG_END, _configuration.getViewingDistance());
    glFogi(NVFogDistance.GL_FOG_DISTANCE_MODE_NV, NVFogDistance.GL_EYE_RADIAL_NV);
    glHint(GL_FOG_HINT, GL_NICEST);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
Configuring the matrices for drawing the overlay (out of inspiration, I literally copied all the OpenGL calls for this method from BlockMania, another open-source Minecraft clone, which works great):
public void renderOverlay()
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    GLU.gluOrtho2D(0, conf.getWidth(), conf.getHeight(), 0);

    glMatrixMode(GL_MODELVIEW);
    glEnable(GL_COLOR_MATERIAL);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_CULL_FACE);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /** RENDER **/
    if (_activatedInventory != null)
    {
        _activatedInventory.renderInventory();
    }

    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}
Drawing the texture itself:
public void renderInventory()
{
    Configuration conf = Game.getInstance().getConfiguration();
    glTranslatef(conf.getWidth() / 2.0f, conf.getHeight() / 2.0f, 0.0f);

    glEnable(GL_TEXTURE_2D);
    Texture tex = TextureStorage.getTexture("gui.inventory");
    tex.bind(); // newdawn.slick (same library for my whole program, so this works)

    float hw = 170; // half width
    float hh = 163; // half height

    Vector2f _texPosUpLeft = new Vector2f(3, 0);
    Vector2f _texPosDownRight = new Vector2f(_texPosUpLeft.x + hw, _texPosUpLeft.y + hh);
    _texPosUpLeft.x /= tex.getTextureWidth();
    _texPosUpLeft.y /= tex.getTextureHeight();
    _texPosDownRight.x /= tex.getTextureWidth();
    _texPosDownRight.y /= tex.getTextureHeight();

    glColor3f(1, 1, 1); // Changing this doesn't have any effect

    glBegin(GL_QUADS);
    glTexCoord2f(_texPosUpLeft.x, _texPosUpLeft.y);
    glVertex2f(-hw, -hh);
    glTexCoord2f(_texPosDownRight.x, _texPosUpLeft.y);
    glVertex2f(hw, -hh);
    glTexCoord2f(_texPosDownRight.x, _texPosDownRight.y);
    glVertex2f(hw, hh);
    glTexCoord2f(_texPosUpLeft.x, _texPosDownRight.y);
    glVertex2f(-hw, hh);
    glEnd();
}
(The texture pack I'm using is CUBISM1.00)
I found it!!
It was the fog. For one reason or another, it seems to decide the overlay is out of sight and gives it the color of the fog. So disabling the fog before rendering the overlay solved it:
glDisable(GL_FOG);
/* Render overlay here */
glEnable(GL_FOG);
If anyone still reads this: is this caused by matrix abuse, or is this behaviour normal?
EDIT: Solved it! I made a stupid mistake: I had a textureId I'd forgotten about, when it was textureID I should have used.
Okay, I am fully aware that this is a recurring question and that there are a lot of tutorials and open-source examples. But I've been trying as best as I can for quite a while now, and my screen is still blank (with whatever color I set using glClearColor()).
So I would be grateful for some pointers to what I'm doing wrong, or even better, some working code that will render a resource image.
I'll show what I've got so far (by doing some crafty copy-pasting) in the onDrawFrame of the class that implements Renderer. I've removed some of the jumping between methods and will simply paste it in the order it is executed.
Feel free to disregard my current code, I'm more than happy to start over, if anyone can give me a working piece of code.
Setup:
bitmap = BitmapFactory.decodeResource(panel.getResources(), R.drawable.test);
addGameComponent(new MeleeAttackComponent());

// Mapping coordinates for the vertices
float textureCoordinates[] = { 0.0f, 2.0f,
                               2.0f, 2.0f,
                               0.0f, 0.0f,
                               2.0f, 0.0f };

short[] indices = new short[] { 0, 1, 2, 1, 3, 2 };

float[] vertices = new float[] { -0.5f, -0.5f, 0.0f,
                                  0.5f, -0.5f, 0.0f,
                                 -0.5f,  0.5f, 0.0f,
                                  0.5f,  0.5f, 0.0f };

setIndices(indices);
setVertices(vertices);
setTextureCoordinates(textureCoordinates);

protected void setVertices(float[] vertices) {
    // a float is 4 bytes, therefore we multiply the number of vertices by 4.
    ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
    vbb.order(ByteOrder.nativeOrder());
    mVerticesBuffer = vbb.asFloatBuffer();
    mVerticesBuffer.put(vertices);
    mVerticesBuffer.position(0);
}

protected void setIndices(short[] indices) {
    // a short is 2 bytes, therefore we multiply the number of indices by 2.
    ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
    ibb.order(ByteOrder.nativeOrder());
    mIndicesBuffer = ibb.asShortBuffer();
    mIndicesBuffer.put(indices);
    mIndicesBuffer.position(0);
    mNumOfIndices = indices.length;
}

protected void setTextureCoordinates(float[] textureCoords) {
    // a float is 4 bytes, therefore we multiply the number of coordinates by 4.
    ByteBuffer byteBuf = ByteBuffer.allocateDirect(textureCoords.length * 4);
    byteBuf.order(ByteOrder.nativeOrder());
    mTextureBuffer = byteBuf.asFloatBuffer();
    mTextureBuffer.put(textureCoords);
    mTextureBuffer.position(0);
}
// The onDrawFrame(GL10 gl)
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, -4);

gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);

if (shoudlLoadTexture) {
    loadGLTextures(gl);
    shoudlLoadTexture = false;
}

if (mTextureId != -1 && mTextureBuffer != null) {
    gl.glEnable(GL10.GL_TEXTURE_2D);
    // Enable the texture state
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    // Point to our buffers
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
}

gl.glTranslatef(posX, posY, 0);

// Draw the triangles using the index buffer.
gl.glDrawElements(GL10.GL_TRIANGLES, mNumOfIndices,
        GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);

// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);

if (mTextureId != -1 && mTextureBuffer != null) {
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
private void loadGLTextures(GL10 gl) {
    int[] textures = new int[1];
    gl.glGenTextures(1, textures, 0);
    mTextureID = textures[0];
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
    gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
It doesn't crash and there are no exceptions; I simply get a blank screen with the clear color. I've printed stuff in there, so I'm pretty sure it is all executed.
I know it's not optimal to just paste code, but at the moment I just want to be able to do what I was able to do with canvas :)
Thanks a lot
If you're getting the background colour, that means your window is properly set up and OpenGL is connected to that area of the screen.
However, OpenGL clips to the near and far clip planes, ensuring that objects don't cross or intersect the camera (which, both mathematically and logically, doesn't make sense) and that objects too far away don't appear. So if you haven't set up the modelview and projection matrices correctly, it's probable that all your geometry is being clipped.
The modelview matrix maps from world space to eye space; the projection matrix maps from eye space to screen space. So a typical application uses the former to position objects within the scene and to position the scene relative to the camera, and the latter to decide whether the camera sees with perspective, how many world units make up how many screen units, and so on.
If you look at examples like this one, particularly onSurfaceChanged, you'll see an example of a perspective projection with a camera fixed at the origin.
Because the camera is at (0, 0, 0), leaving your geometry on z = 0 as your code does will cause it to be clipped. In that example code they've set the near clip plane to be at z = 0.1, so in your existing code you could change:
gl.glTranslatef(posX, posY, 0);
To:
gl.glTranslatef(posX, posY, -1.0f);
to push your geometry back far enough to land between the clip planes and appear on screen.
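For reference, the kind of onSurfaceChanged setup the linked example uses looks roughly like this (a sketch; the 45-degree field of view and the 0.1/100 clip planes are the values mentioned above):

public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    // near plane at 0.1, far plane at 100: geometry left at z = 0 sits in front of the near plane and is clipped
    GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}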