Offsets for interleaved per-vertex data in OpenGL ES on Android - Java

Is it possible to use interleaved per-vertex data in OpenGL ES on Android?
I'm unable to get the correct offset pointers for the normal and color members.
In C++ I would do something like this:
struct ColoredVertexData3D {
    Vertex3D vertex;
    Vector3D normal;
    ColorRGBA color;
};
const ColoredVertexData3D vertexData[] =
{
    {
        {0.0f, 0.5f, 0.8f},      // Vertex |
        {0.0f, 0.4f, 0.6f},      // Normal | Vertex 0
        {1.0f, 0.0f, 0.0f, 1.0f} // Color  |
    },
    {
        {0.8f, 0.0f, 0.5f},      // Vertex |
        {0.6f, 0.0f, 0.4f},      // Normal | Vertex 1
        {1.0f, 0.5f, 0.0f, 1.0f} // Color  |
    },
    // ... more vertices.
};
const int stride = sizeof(ColoredVertexData3D);
glVertexPointer(3, GL_FLOAT, stride, &vertexData[0].vertex);
glColorPointer(4, GL_FLOAT, stride, &vertexData[0].color);
glNormalPointer(GL_FLOAT, stride, &vertexData[0].normal);
Is the same thing possible in Java on Android? This is what I currently have:
ByteBuffer vertexData = ...;
int stride = 40;
gl.glVertexPointer(3, GL10.GL_FLOAT, stride, vertexData);
// This obviously doesn't work. ------------v
gl.glColorPointer(4, GL10.GL_FLOAT, stride, &vertexData[0].color);
gl.glNormalPointer(GL10.GL_FLOAT, stride, &vertexData[0].normal);

The basic idea is to call duplicate() on the buffer, which creates a new buffer that shares the underlying storage but lets you set a different starting position.
Here's what worked for me:
FloatBuffer verticesNormals;
// ... code to initialize verticesNormals ...
// verticesNormals contains 6 floats for each vertex. The first three
// define the position and the next three define the normal.
gl.glVertexPointer(3, gl.GL_FLOAT, 24, verticesNormals);
// Create a buffer that points 3 floats past the beginning.
FloatBuffer normalData = verticesNormals.duplicate();
normalData.position(3);
gl.glNormalPointer(gl.GL_FLOAT, 24, normalData);
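The same duplicate()/position() trick extends to the full position/normal/color layout from the question. Here is a minimal sketch, assuming the GL10 fixed-function client arrays from above (the variable names are illustrative, not from the original answer):
float[] data = {
    // x,   y,    z,    nx,   ny,   nz,   r,    g,    b,    a
    0.0f, 0.5f, 0.8f, 0.0f, 0.4f, 0.6f, 1.0f, 0.0f, 0.0f, 1.0f,
    0.8f, 0.0f, 0.5f, 0.6f, 0.0f, 0.4f, 1.0f, 0.5f, 0.0f, 1.0f,
    // ... more vertices
};
ByteBuffer bb = ByteBuffer.allocateDirect(data.length * 4);
bb.order(ByteOrder.nativeOrder());
FloatBuffer interleaved = bb.asFloatBuffer();
interleaved.put(data);
interleaved.position(0);

int stride = 10 * 4; // 10 floats per vertex, 4 bytes per float

gl.glVertexPointer(3, GL10.GL_FLOAT, stride, interleaved);

FloatBuffer normals = interleaved.duplicate();
normals.position(3); // skip the 3 position floats
gl.glNormalPointer(GL10.GL_FLOAT, stride, normals);

FloatBuffer colors = interleaved.duplicate();
colors.position(6); // skip position + normal (6 floats)
gl.glColorPointer(4, GL10.GL_FLOAT, stride, colors);
Remember to enable the matching client states (GL_VERTEX_ARRAY, GL_NORMAL_ARRAY, GL_COLOR_ARRAY) before drawing.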

Related

How to create and use VBOs in OpenGL ES 2

I am looking for help with understanding VBOs. I have done a ton of research and have found tutorials on the subject, but they are still vague to me. I have a few questions:
Where should a VBO be created, and how should I create one?
I am currently using the code right below to initialize my vertex and index buffers:
vertices = new float[]
{
p[0].x, p[0].y, 0.0f,
p[1].x, p[1].y, 0.0f,
p[2].x, p[2].y, 0.0f,
p[3].x, p[3].y, 0.0f,
};
// The order of vertex rendering for a quad
indices = new short[] {0, 1, 2, 0, 2, 3};
ByteBuffer bb = ByteBuffer.allocateDirect(vertices.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(vertices);
vertexBuffer.position(0);
ByteBuffer dlb = ByteBuffer.allocateDirect(indices.length * 2);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(indices);
drawListBuffer.position(0);
If I am correct, this is not creating a VBO. So, if I wanted to make a VBO, would the code to create a VBO go right after the code listed above? If so, how would it be created?
Also, how is a VBO rendered and drawn to screen?
Is it rendered and drawn the same way as just using vertex and index arrays? If not, what is the process? Currently, I render and draw my objects as shown in the code below:
GLES20.glUseProgram(GraphicTools.sp_SolidColor);
mPositionHandle =
GLES20.glGetAttribLocation(GraphicTools.sp_SolidColor, "vPosition");
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, 3,
GLES20.GL_FLOAT, false,
0, vertexBuffer);
mtrxHandle = GLES20.glGetUniformLocation(GraphicTools.sp_SolidColor,
"uMVPMatrix");
GLES20.glUniformMatrix4fv(mtrxHandle, 1, false, m, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length,
GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
GLES20.glDisableVertexAttribArray(mPositionHandle);
If you have any questions, let me know. Thanks in advance.
A Vertex Buffer Object is a buffer where vertex array data can be stored. The data is uploaded to graphics memory (the GPU) once and can then be reused every time the mesh is drawn.
First you have to create 2 buffer objects, one for the vertices and one for the indices:
int buffers[] = new int[2];
GLES20.glGenBuffers(2, buffers, 0);
int vbo = buffers[0];
int ibo = buffers[1];
Then you have to bind each buffer and transfer the data:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
GLES20.glBufferData(
GLES20.GL_ARRAY_BUFFER,
vertexBuffer.capacity() * 4, // 4 = bytes per float
vertexBuffer,
GLES20.GL_STATIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo);
GLES20.glBufferData(
GLES20.GL_ELEMENT_ARRAY_BUFFER,
drawListBuffer.capacity() * 2, // 2 = bytes per short
drawListBuffer,
GLES20.GL_STATIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
When you want to draw the mesh, you have to define the array of generic vertex attribute data and bind the index buffer, but you don't have to transfer any data to the GPU again:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
GLES20.glVertexAttribPointer(
mPositionHandle, 3,
GLES20.GL_FLOAT, false,
0, 0); // <----- 0, because "vbo" is bound
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo);
GLES20.glDrawElements(
GLES20.GL_TRIANGLES, indices.length,
GLES20.GL_UNSIGNED_SHORT, 0); // <----- 0, because "ibo" is bound
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
See also An Introduction to Vertex Buffer Objects (VBOs)

Why is the texture coordinate in my fragment shader always (0, 0)?

I'm using LWJGL to draw "tiles", or textured 2D squares on the screen. However, the texture coordinate is always (0, 0) and therefore the textured square only uses the first pixel colour to fill it.
This is my vertex shader:
#version 330 core
in vec4 in_Position;
in vec4 in_Color;
in vec2 in_TextureCoord;
out vec4 pass_Color;
out vec2 pass_TextureCoord;
void main(void) {
gl_Position = in_Position;
pass_Color = in_Color;
pass_TextureCoord = in_TextureCoord;
}
And this is my fragment shader:
#version 330 core
uniform sampler2D texture_diffuse;
in vec4 pass_Color;
in vec2 pass_TextureCoord;
out vec4 out_Color;
void main(void) {
out_Color = pass_Color;
// Override out_Color with our texture pixel
out_Color = texture(texture_diffuse, pass_TextureCoord);
}
And this is essentially the code I'm using to draw the square:
ARBShaderObjects.glUseProgramObjectARB(shaderProgram);
glBindTexture(GL_TEXTURE_2D, sprite.getId());
glBegin(GL11.GL_QUADS);
glVertex2d(screenMinX, screenMinY);
glTexCoord2d(0.0, 0.0);
glVertex2d(screenMaxX, screenMinY);
glTexCoord2d(1.0, 0.0);
glVertex2d(screenMaxX, screenMaxY);
glTexCoord2d(1.0, 1.0);
glVertex2d(screenMinX, screenMaxY);
glTexCoord2d(0.0, 1.0);
glEnd();
// release the shader
ARBShaderObjects.glUseProgramObjectARB(0);
I cannot fix it because I don't know how the above code works in the first place. I have not told the shaders what in_Position, in_Color or in_TextureCoord are, yet the first two seem to work just fine. It is in_TextureCoord, which is eventually passed to the fragment shader, that seems to have a constant value of (0, 0). I determined that by setting one channel of the fragment shader's output colour to the X-coordinate of the texture coordinate: the square remained a solid colour throughout, indicating that the texture coordinate never changes.
The square produced with the code above should be textured, but instead it is painted a solid colour - the first pixel of the texture. How can I change the code to make the texture coordinate vary across the square? If I appear to have some misunderstanding of how this all fits together, please correct me.
This is the tutorial I used to try to accomplish the above.
p.s. I am aware that the Java snippet is using deprecated immediate-mode, but I don't know how to use glDrawArrays or any other commonly suggested method to accomplish the same. Could you help me to change this?
Since you no longer need the in_Color attribute, you have to delete it from the vertex shader (and of course also pass_Color from both the vertex shader and the fragment shader).
Otherwise, you have to extend the solution below with the color attribute in the same way.
Set up one array for the vertex positions and one for the texture coordinates:
float[] posData = {
    screenMinX, screenMinY, 0.0f, 1.0f,
    screenMaxX, screenMinY, 0.0f, 1.0f,
    screenMaxX, screenMaxY, 0.0f, 1.0f,
    screenMinX, screenMaxY, 0.0f, 1.0f };
float[] texData = { 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f };
Generate a vertex array object:
int vaoObj = glGenVertexArrays();
glBindVertexArray(vaoObj);
Generate array buffers for the vertex positions and the texture coordinates, enable the attribute indices and associate the buffers with those indices:
FloatBuffer posBuffer = MemoryUtil.memAllocFloat(posData.length);
posBuffer.put(posData).flip();
FloatBuffer texBuffer = MemoryUtil.memAllocFloat(texData.length);
texBuffer.put(texData).flip();
int vboPosObj = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vboPosObj);
glBufferData(GL_ARRAY_BUFFER, posBuffer, GL_STATIC_DRAW);
// index 0 to associate with "in_Position"
glVertexAttribPointer(0, 4, GL_FLOAT, false, 0, 0);
glEnableVertexAttribArray(0); // 0 = attribute index of "in_Position"
int vboTexObj = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vboTexObj);
glBufferData(GL_ARRAY_BUFFER, texBuffer, GL_STATIC_DRAW);
// index 1 to associate with "in_TextureCoord"
glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);
glEnableVertexAttribArray(1); // 1 = attribute index of "in_TextureCoord"
Unbind the array buffer and the vertex array object:
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
You have to specify the attribute indices of the attributes in_Position and in_TextureCoord.
Either you use explicit layout specifications in the vertex shader:
layout (location = 0) in vec4 in_Position;
layout (location = 1) in vec2 in_TextureCoord;
Or you specify the attribute indices in the shader program, right before you link the shader program (glLinkProgram).
glBindAttribLocation(shaderProgramID, 0, "in_Position");
glBindAttribLocation(shaderProgramID, 1, "in_TextureCoord");
If the object is to be drawn, it is sufficient to bind the Vertex Array Object:
glBindVertexArray(vaoObj);
glDrawArrays(GL_QUADS, 0, 4); // 4 = number of vertices
glBindVertexArray(0);
Note that if a buffer object or a vertex array object is no longer used, it has to be deleted to prevent memory leaks. Buffer objects are deleted with glDeleteBuffers and vertex array objects with glDeleteVertexArrays.
Buffer objects are not "created under" vertex array objects, so it is not sufficient to delete the vertex array object alone (see OpenGL Vertex Array/Buffer Objects).
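Applied to the objects created in this answer, the cleanup would look roughly like this (a sketch using the same names as above):
glDeleteBuffers(vboPosObj);
glDeleteBuffers(vboTexObj);
glDeleteVertexArrays(vaoObj);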
If you need a quad that shows a texture image filling its full area, you can use two triangles covering the region from (-1, -1) to (1, 1), together with the data below (and appropriate shaders).
Vertex positions (two triangles):
-1.0f, 1.0f, 0.0f,
-1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f
Then you can use the following texture coordinates to map the full texture:
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f
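Uploaded the same way as the position and texture-coordinate buffers in the previous answer, this data is then drawn as two triangles instead of a quad; a minimal sketch, assuming the same VAO setup:
glBindVertexArray(vaoObj);
glDrawArrays(GL_TRIANGLES, 0, 6); // 6 vertices = 2 triangles
glBindVertexArray(0);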

How Do I UnProject Screen Coordinates in OpenGL ES 2.0 (Android)?

I have been working for the last few days to "unproject" our app's touch events into our Renderer's coordinate space. In pursuit of this goal, I have tried various custom unproject methods and alternative techniques (including trying to convert the coordinates using the scaling factor and transform values), to no avail. I have gotten close (where my touches are slightly off), but my attempts using GLU.gluUnProject have been way off, usually placing the coordinates around the center of the view. The "closest" results were produced by Xandy's method, but even these are usually off. My primary questions are: how do I set up my viewport matrix, and am I passing GLU.gluUnProject the correct parameters? My math is based on the answer to this question. Here are the relevant excerpts of my code (showing how I set up my matrices and my current attempt):
public void onSurfaceChanged(GL10 gl, int width, int height) {
// Set the OpenGL viewport to fill the entire surface.
glViewport(0, 0, width, height);
...
float ratio = (float) width / height;
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
public void onDrawFrame(GL10 gl) {
glClear(GL_COLOR_BUFFER_BIT);
Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, -4.5f, 0f, 0f, 0f, 0f, 1f, 0f);
Matrix.scaleM(mViewMatrix, 0, mScaleFactor, mScaleFactor, 1.0f);
Matrix.translateM(mViewMatrix, 0, -mOffset.x, mOffset.y, 0.0f);
Matrix.multiplyMM(mModelMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
...
}
public PointF convertScreenCoords(float x, float y) {
int[] viewPortMatrix = new int[]{0, 0, (int)mViewportWidth, (int)mViewportHeight};
float[] outputNear = new float[4];
float[] outputFar = new float[4];
y = mViewportHeight - y;
int successNear = GLU.gluUnProject(x, y, 0, mModelMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
int successFar = GLU.gluUnProject(x, y, 1, mModelMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputFar, 0);
if (successNear == GL_FALSE || successFar == GL_FALSE) {
throw new RuntimeException("Cannot invert matrices!");
}
convert4DCoords(outputNear);
convert4DCoords(outputFar);
float distance = outputNear[2] / (outputFar[2] - outputNear[2]);
float normalizedX = (outputNear[0] + (outputFar[0] - outputNear[0]) * distance);
float normalizedY = (outputNear[1] + (outputFar[1] - outputNear[1]) * distance);
return new PointF(normalizedX, normalizedY);
}
convert4DCoords is simply a helper function that divides each coordinate (x, y, z) of an array by w. mOffset and mScaleFactor are the translation and scaling parameters (provided by a ScaleGestureDetector within our GLSurfaceView).
Based on everything I have read, this should be working; however, it is consistently wrong and I am not sure what else to try. Any help/feedback would be greatly appreciated!
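For reference, a perspective-divide helper matching that description might look like this (a sketch, not the asker's actual code):
private static void convert4DCoords(float[] v) {
    // divide x, y and z by the homogeneous w component
    v[0] /= v[3];
    v[1] /= v[3];
    v[2] /= v[3];
}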
I didn't look through all of your math, but some of your matrix operations and specifications look wrong to me. That is most likely at least part of your problem.
Looking at what you call mViewMatrix first:
Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, -4.5f, 0f, 0f, 0f, 0f, 1f, 0f);
Matrix.scaleM(mViewMatrix, 0, mScaleFactor, mScaleFactor, 1.0f);
Matrix.translateM(mViewMatrix, 0, -mOffset.x, mOffset.y, 0.0f);
There's nothing necessarily wrong with this. But most people would probably only call the first part, which is the result of setLookAtM(), a View matrix. The rest looks then more like a Model matrix. But since rendering normally only uses the product of View and Model matrix, this is just terminology. It would seem fair to say that what you store in mViewMatrix is your ModelView matrix.
Then the naming gets stranger here:
Matrix.multiplyMM(mModelMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
This calculates the product of Projection and View matrix. So this would be a ViewProjection matrix. Storing this in a variable called mModelMatrix is confusing, and I think you might have set yourself a trap with this naming.
This leads us to the gluUnProject() call:
GLU.gluUnProject(x, y, 0, mModelMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
Based on the above, what you're passing for the two matrix arguments (the viewport is not really a matrix) are the ViewProjection and the Projection matrix, whereas, based on the method definition, you should be passing the ModelView and the Projection matrix.
Since we established that what you call mViewMatrix corresponds to a ModelView matrix, this should start working much better if you change just that argument:
GLU.gluUnProject(x, y, 0, mViewMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
Then you can get rid of the mModelMatrix that is not a Model matrix at all.
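A sketch of that renaming (mModelViewMatrix and mMVPMatrix are illustrative names, not from the original code):
// onDrawFrame()
Matrix.setLookAtM(mModelViewMatrix, 0, 0f, 0f, -4.5f, 0f, 0f, 0f, 0f, 1f, 0f);
Matrix.scaleM(mModelViewMatrix, 0, mScaleFactor, mScaleFactor, 1.0f);
Matrix.translateM(mModelViewMatrix, 0, -mOffset.x, mOffset.y, 0.0f);
// used only for rendering, not for unprojecting
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mModelViewMatrix, 0);

// convertScreenCoords()
GLU.gluUnProject(x, y, 0, mModelViewMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
GLU.gluUnProject(x, y, 1, mModelViewMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputFar, 0);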

Example code on LWJGL wiki fails with GL_INVALID_OPERATION

I just copy pasted the code from this tutorial on the LWJGL wiki, which I will now paste here for your convenience.
import org.lwjgl.BufferUtils;
import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.*;
import org.lwjgl.util.glu.GLU;
import java.nio.FloatBuffer;
public class TheQuadExampleDrawArrays {
// Entry point for the application
public static void main(String[] args) {
new TheQuadExampleDrawArrays();
}
// Setup variables
private final String WINDOW_TITLE = "The Quad: glDrawArrays";
private final int WIDTH = 320;
private final int HEIGHT = 240;
// Quad variables
private int vaoId = 0;
private int vboId = 0;
private int vertexCount = 0;
public TheQuadExampleDrawArrays() {
// Initialize OpenGL (Display)
this.setupOpenGL();
this.setupQuad();
while (!Display.isCloseRequested()) {
// Do a single loop (logic/render)
this.loopCycle();
// Force a maximum FPS of about 60
Display.sync(60);
// Let the CPU synchronize with the GPU if the GPU is lagging behind
Display.update();
}
// Destroy OpenGL (Display)
this.destroyOpenGL();
}
public void setupOpenGL() {
// Setup an OpenGL context with API version 3.2
try {
PixelFormat pixelFormat = new PixelFormat();
ContextAttribs contextAtrributes = new ContextAttribs(3, 2)
.withForwardCompatible(true)
.withProfileCore(true);
Display.setDisplayMode(new DisplayMode(WIDTH, HEIGHT));
Display.setTitle(WINDOW_TITLE);
Display.create(pixelFormat, contextAtrributes);
GL11.glViewport(0, 0, WIDTH, HEIGHT);
} catch (LWJGLException e) {
e.printStackTrace();
System.exit(-1);
}
// Setup an XNA like background color
GL11.glClearColor(0.4f, 0.6f, 0.9f, 0f);
// Map the internal OpenGL coordinate system to the entire screen
GL11.glViewport(0, 0, WIDTH, HEIGHT);
this.exitOnGLError("Error in setupOpenGL");
}
public void setupQuad() {
// OpenGL expects vertices to be defined counter clockwise by default
float[] vertices = {
// Left bottom triangle
-0.5f, 0.5f, 0f,
-0.5f, -0.5f, 0f,
0.5f, -0.5f, 0f,
// Right top triangle
0.5f, -0.5f, 0f,
0.5f, 0.5f, 0f,
-0.5f, 0.5f, 0f
};
// Sending data to OpenGL requires the usage of (flipped) byte buffers
FloatBuffer verticesBuffer = BufferUtils.createFloatBuffer(vertices.length);
verticesBuffer.put(vertices);
verticesBuffer.flip();
vertexCount = 6;
// Create a new Vertex Array Object in memory and select it (bind)
// A VAO can have up to 16 attributes (VBO's) assigned to it by default
vaoId = GL30.glGenVertexArrays();
GL30.glBindVertexArray(vaoId);
// Create a new Vertex Buffer Object in memory and select it (bind)
// A VBO is a collection of Vectors which in this case resemble the location of each vertex.
vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, verticesBuffer, GL15.GL_STATIC_DRAW);
// Put the VBO in the attributes list at index 0
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0);
// Deselect (bind to 0) the VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
// Deselect (bind to 0) the VAO
GL30.glBindVertexArray(0);
this.exitOnGLError("Error in setupQuad");
}
public void loopCycle() {
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
// Bind to the VAO that has all the information about the quad vertices
GL30.glBindVertexArray(vaoId);
GL20.glEnableVertexAttribArray(0);
// Draw the vertices
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);
/**
* !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
* I found that the GL_INVALID_OPERATION flag was being raised here,
* at the call to glDrawArrays().
* !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
*/
// Put everything back to default (deselect)
GL20.glDisableVertexAttribArray(0);
GL30.glBindVertexArray(0);
this.exitOnGLError("Error in loopCycle");
}
public void destroyOpenGL() {
// Disable the VBO index from the VAO attributes list
GL20.glDisableVertexAttribArray(0);
// Delete the VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
GL15.glDeleteBuffers(vboId);
// Delete the VAO
GL30.glBindVertexArray(0);
GL30.glDeleteVertexArrays(vaoId);
Display.destroy();
}
public void exitOnGLError(String errorMessage) {
int errorValue = GL11.glGetError();
if (errorValue != GL11.GL_NO_ERROR) {
String errorString = GLU.gluErrorString(errorValue);
System.err.println("ERROR - " + errorMessage + ": " + errorString);
if (Display.isCreated()) Display.destroy();
System.exit(-1);
}
}
}
When I ran it, it threw an error that read
ERROR - Error in loopCycle: Invalid operation
I narrowed it down to the call to glDrawArrays() in the loopCycle() method, then hit up Google to find out what that might mean, and uncovered this SO question, which lists a whole ton of possible reasons (listed here for convenience).
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to an enabled array or to the GL_DRAW_INDIRECT_BUFFER binding and the buffer object's data store is currently mapped.
GL_INVALID_OPERATION is generated if glDrawArrays is executed between the execution of glBegin and the corresponding glEnd.
GL_INVALID_OPERATION will be generated by glDrawArrays or glDrawElements if any two active samplers in the current program object are of different types, but refer to the same texture image unit.
GL_INVALID_OPERATION is generated if a geometry shader is active and mode is incompatible with the input primitive type of the geometry shader in the currently installed program object.
GL_INVALID_OPERATION is generated if mode is GL_PATCHES and no tessellation control shader is active.
GL_INVALID_OPERATION is generated if recording the vertices of a primitive to the buffer objects being used for transform feedback purposes would result in either exceeding the limits of any buffer object’s size, or in exceeding the end position offset + size - 1, as set by glBindBufferRange.
GL_INVALID_OPERATION is generated by glDrawArrays() if no geometry shader is present, transform feedback is active and mode is not one of the allowed modes.
GL_INVALID_OPERATION is generated by glDrawArrays() if a geometry shader is present, transform feedback is active and the output primitive type of the geometry shader does not match the transform feedback primitiveMode.
GL_INVALID_OPERATION is generated if the bound shader program is invalid.
GL_INVALID_OPERATION is generated if transform feedback is in use, and the buffer bound to the transform feedback binding point is also bound to the array buffer binding point.
Most of these make no sense to me, and after a fair amount of time reading through them I'm no closer to finding out what's wrong with this code. Could someone who knows more about this than me please point out the reason that the GL_INVALID_OPERATION flag is being raised?
Item 9. Looks like you have no shader program bound.
You're creating a context using the Core Profile:
ContextAttribs contextAtrributes = new ContextAttribs(3, 2)
.withForwardCompatible(true)
.withProfileCore(true);
With the Core Profile, it's required that you provide a shader program. You will typically write at least a vertex and a fragment shader in GLSL, and then use calls like the following to build and bind a shader program:
glCreateShader
glShaderSource
glCompileShader
glCreateProgram
glAttachShader
glLinkProgram
glUseProgram
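A minimal sketch of that sequence in LWJGL (vertexSource and fragmentSource are placeholder GLSL strings you would supply yourself; with a 3.2 core context they need at least #version 150 core):
int vertexShaderId = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
GL20.glShaderSource(vertexShaderId, vertexSource);
GL20.glCompileShader(vertexShaderId);

int fragmentShaderId = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
GL20.glShaderSource(fragmentShaderId, fragmentSource);
GL20.glCompileShader(fragmentShaderId);

int programId = GL20.glCreateProgram();
GL20.glAttachShader(programId, vertexShaderId);
GL20.glAttachShader(programId, fragmentShaderId);
GL20.glLinkProgram(programId);

// bind the program before the glDrawArrays() call in loopCycle()
GL20.glUseProgram(programId);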

Render image for 2D game use in OpenGL ES for android

EDIT: Solved it! I made a stupid mistake: I had a textureId I'd forgotten about when it was textureID I should have used.
Okay, I am fully aware that this is a recurring question, and that there are a lot of tutorials and open-source code out there. But I've been trying as best as I can for quite a while here, and my screen is still blank (with whatever color I set using glClearColor()).
So, I would be grateful for some pointers to what I'm doing wrong, or even better, some working code that will render a resource image.
I'll show what I've got so far (by doing some crafty copy-pasting) in my onDrawFrame of the class that implements the Renderer. I've removed some of the jumping between methods, and will simply paste it in the order it is executed.
Feel free to disregard my current code, I'm more than happy to start over, if anyone can give me a working piece of code.
Setup:
bitmap = BitmapFactory.decodeResource(panel.getResources(),
R.drawable.test);
addGameComponent(new MeleeAttackComponent());
// Mapping coordinates for the vertices
float textureCoordinates[] = { 0.0f, 2.0f, //
2.0f, 2.0f, //
0.0f, 0.0f, //
2.0f, 0.0f, //
};
short[] indices = new short[] { 0, 1, 2, 1, 3, 2 };
float[] vertices = new float[] { -0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f };
setIndices(indices);
setVertices(vertices);
setTextureCoordinates(textureCoordinates);
protected void setVertices(float[] vertices) {
// a float is 4 bytes, therefore we multiply the number of
// floats by 4.
ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
vbb.order(ByteOrder.nativeOrder());
mVerticesBuffer = vbb.asFloatBuffer();
mVerticesBuffer.put(vertices);
mVerticesBuffer.position(0);
}
protected void setIndices(short[] indices) {
// a short is 2 bytes, therefore we multiply the number of
// shorts by 2.
ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
ibb.order(ByteOrder.nativeOrder());
mIndicesBuffer = ibb.asShortBuffer();
mIndicesBuffer.put(indices);
mIndicesBuffer.position(0);
mNumOfIndices = indices.length;
}
protected void setTextureCoordinates(float[] textureCoords) {
// a float is 4 bytes, therefore we multiply the number of
// floats by 4.
ByteBuffer byteBuf = ByteBuffer
.allocateDirect(textureCoords.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
mTextureBuffer = byteBuf.asFloatBuffer();
mTextureBuffer.put(textureCoords);
mTextureBuffer.position(0);
}
//The onDrawFrame(GL10 gl)
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, -4);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);
if (shouldLoadTexture) {
    loadGLTextures(gl);
    shouldLoadTexture = false;
}
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glEnable(GL10.GL_TEXTURE_2D);
// Enable the texture state
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Point to our buffers
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
}
gl.glTranslatef(posX, posY, 0);
// Draw the elements using the index buffer.
gl.glDrawElements(GL10.GL_TRIANGLES, mNumOfIndices,
GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
private void loadGLTextures(GL10 gl) {
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureID = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
It doesn't crash, no exceptions, simply a blank screen with color. I've printed stuff in there, so I'm pretty sure it is all executed.
I know it's not optimal to just paste code, but at the moment, I just want to be able to do what I was able to do with canvas :)
Thanks a lot
If you're getting the background colour, that means your window is properly set up. OpenGL is connected to that area of the screen.
However, OpenGL clips to the near and far clip planes, ensuring that objects don't cross or intersect the camera (which, both mathematically and logically, doesn't make sense) and that objects too far away don't appear. So if you've not set up modelview and projection correctly, it's probable that all your geometry is being clipped.
Modelview is used to map from world to eye space. Projection maps from eye space to screen space. So a typical application uses the former to position objects within the scene, and position the scene relative to the camera, while the latter deals with whether the camera sees with perspective or not, how many world units make how many screen units, etc.
If you look at examples like this one, particularly onSurfaceChanged, you'll see an example of a perspective projection with a camera fixed at the origin.
Because the camera is at (0, 0, 0), leaving your geometry on z = 0 as your code does will cause it to be clipped. In that example code they've set the near clip plane to be at z = 0.1, so in your existing code you could change:
gl.glTranslatef(posX, posY, 0);
To:
gl.glTranslatef(posX, posY, -1.0f);
To push your geometry back sufficiently far to appear on screen.
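For completeness, a typical onSurfaceChanged() for that kind of setup looks roughly like the following sketch (the field-of-view and clip-plane values are illustrative, not taken from the linked example):
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    // perspective projection with near/far clip planes at 0.1 and 100
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    GLU.gluPerspective(gl, 45.0f, (float) width / height, 0.1f, 100.0f);
    // back to modelview for the per-frame transforms in onDrawFrame()
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}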
