I just copy-pasted the code from this tutorial on the LWJGL wiki, which I will paste here for your convenience.
import org.lwjgl.BufferUtils;
import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.*;
import org.lwjgl.util.glu.GLU;
import java.nio.FloatBuffer;
public class TheQuadExampleDrawArrays {
// Entry point for the application
public static void main(String[] args) {
new TheQuadExampleDrawArrays();
}
// Setup variables
private final String WINDOW_TITLE = "The Quad: glDrawArrays";
private final int WIDTH = 320;
private final int HEIGHT = 240;
// Quad variables
private int vaoId = 0;
private int vboId = 0;
private int vertexCount = 0;
public TheQuadExampleDrawArrays() {
// Initialize OpenGL (Display)
this.setupOpenGL();
this.setupQuad();
while (!Display.isCloseRequested()) {
// Do a single loop (logic/render)
this.loopCycle();
// Force a maximum FPS of about 60
Display.sync(60);
// Let the CPU synchronize with the GPU if the GPU is lagging behind
Display.update();
}
// Destroy OpenGL (Display)
this.destroyOpenGL();
}
public void setupOpenGL() {
// Setup an OpenGL context with API version 3.2
try {
PixelFormat pixelFormat = new PixelFormat();
ContextAttribs contextAttributes = new ContextAttribs(3, 2)
.withForwardCompatible(true)
.withProfileCore(true);
Display.setDisplayMode(new DisplayMode(WIDTH, HEIGHT));
Display.setTitle(WINDOW_TITLE);
Display.create(pixelFormat, contextAttributes);
GL11.glViewport(0, 0, WIDTH, HEIGHT);
} catch (LWJGLException e) {
e.printStackTrace();
System.exit(-1);
}
// Set up an XNA-like background color
GL11.glClearColor(0.4f, 0.6f, 0.9f, 0f);
// Map the internal OpenGL coordinate system to the entire screen
GL11.glViewport(0, 0, WIDTH, HEIGHT);
this.exitOnGLError("Error in setupOpenGL");
}
public void setupQuad() {
// OpenGL expects vertices to be defined counter clockwise by default
float[] vertices = {
// Left bottom triangle
-0.5f, 0.5f, 0f,
-0.5f, -0.5f, 0f,
0.5f, -0.5f, 0f,
// Right top triangle
0.5f, -0.5f, 0f,
0.5f, 0.5f, 0f,
-0.5f, 0.5f, 0f
};
// Sending data to OpenGL requires the usage of (flipped) byte buffers
FloatBuffer verticesBuffer = BufferUtils.createFloatBuffer(vertices.length);
verticesBuffer.put(vertices);
verticesBuffer.flip();
vertexCount = 6;
// Create a new Vertex Array Object in memory and select it (bind)
// A VAO can have up to 16 attributes (VBO's) assigned to it by default
vaoId = GL30.glGenVertexArrays();
GL30.glBindVertexArray(vaoId);
// Create a new Vertex Buffer Object in memory and select it (bind)
// A VBO is a collection of vectors, which in this case represent the location of each vertex.
vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, verticesBuffer, GL15.GL_STATIC_DRAW);
// Put the VBO in the attributes list at index 0
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0);
// Deselect (bind to 0) the VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
// Deselect (bind to 0) the VAO
GL30.glBindVertexArray(0);
this.exitOnGLError("Error in setupQuad");
}
public void loopCycle() {
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
// Bind to the VAO that has all the information about the quad vertices
GL30.glBindVertexArray(vaoId);
GL20.glEnableVertexAttribArray(0);
// Draw the vertices
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);
/**
* !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
* I found that the GL_INVALID_OPERATION flag was being raised here,
* at the call to glDrawArrays().
* !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
*/
// Put everything back to default (deselect)
GL20.glDisableVertexAttribArray(0);
GL30.glBindVertexArray(0);
this.exitOnGLError("Error in loopCycle");
}
public void destroyOpenGL() {
// Disable the VBO index from the VAO attributes list
GL20.glDisableVertexAttribArray(0);
// Delete the VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
GL15.glDeleteBuffers(vboId);
// Delete the VAO
GL30.glBindVertexArray(0);
GL30.glDeleteVertexArrays(vaoId);
Display.destroy();
}
public void exitOnGLError(String errorMessage) {
int errorValue = GL11.glGetError();
if (errorValue != GL11.GL_NO_ERROR) {
String errorString = GLU.gluErrorString(errorValue);
System.err.println("ERROR - " + errorMessage + ": " + errorString);
if (Display.isCreated()) Display.destroy();
System.exit(-1);
}
}
}
When I ran it, it threw an error that read
ERROR - Error in loopCycle: Invalid operation
I narrowed it down to the call to glDrawArrays() in the loopCycle() method, then hit up Google to find out what that might mean, and uncovered this SO question, which lists a whole ton of possible reasons (listed here for convenience).
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to an enabled array or to the GL_DRAW_INDIRECT_BUFFER binding and the buffer object's data store is currently mapped.
GL_INVALID_OPERATION is generated if glDrawArrays is executed between the execution of glBegin and the corresponding glEnd.
GL_INVALID_OPERATION will be generated by glDrawArrays or glDrawElements if any two active samplers in the current program object are of different types, but refer to the same texture image unit.
GL_INVALID_OPERATION is generated if a geometry shader is active and mode is incompatible with the input primitive type of the geometry shader in the currently installed program object.
GL_INVALID_OPERATION is generated if mode is GL_PATCHES and no tessellation control shader is active.
GL_INVALID_OPERATION is generated if recording the vertices of a primitive to the buffer objects being used for transform feedback purposes would result in either exceeding the limits of any buffer object’s size, or in exceeding the end position offset + size - 1, as set by glBindBufferRange.
GL_INVALID_OPERATION is generated by glDrawArrays() if no geometry shader is present, transform feedback is active and mode is not one of the allowed modes.
GL_INVALID_OPERATION is generated by glDrawArrays() if a geometry shader is present, transform feedback is active and the output primitive type of the geometry shader does not match the transform feedback primitiveMode.
GL_INVALID_OPERATION is generated if the bound shader program is invalid.
GL_INVALID_OPERATION is generated if transform feedback is in use, and the buffer bound to the transform feedback binding point is also bound to the array buffer binding point.
Most of these make no sense to me, and after a fair amount of time reading through them I'm no closer to finding out what's wrong with this code. Could someone who knows more about this than me please point out the reason that the GL_INVALID_OPERATION flag is being raised?
Item 9: it looks like you have no shader program bound.
You're creating a context using the Core Profile:
ContextAttribs contextAttributes = new ContextAttribs(3, 2)
.withForwardCompatible(true)
.withProfileCore(true);
With the Core Profile, it's required that you provide a shader program. You will typically write at least a vertex and a fragment shader in GLSL, and then use calls like the following to build and bind a shader program (a minimal sketch follows the list):
glCreateShader
glShaderSource
glCompileShader
glCreateProgram
glAttachShader
glLinkProgram
glUseProgram
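As a hedged illustration of those calls (this is not the tutorial's own code; the shader sources and the in_Position attribute name are assumptions), a minimal program for this quad could be built roughly like this:
private int setupShaderProgram() {
    // Minimal pass-through shaders for GLSL 1.50 (OpenGL 3.2 core); the sources are placeholders
    String vertexSource =
        "#version 150 core\n" +
        "in vec3 in_Position;\n" +
        "void main() { gl_Position = vec4(in_Position, 1.0); }\n";
    String fragmentSource =
        "#version 150 core\n" +
        "out vec4 out_Color;\n" +
        "void main() { out_Color = vec4(1.0, 1.0, 1.0, 1.0); }\n";

    int vertexShader = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
    GL20.glShaderSource(vertexShader, vertexSource);
    GL20.glCompileShader(vertexShader);

    int fragmentShader = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
    GL20.glShaderSource(fragmentShader, fragmentSource);
    GL20.glCompileShader(fragmentShader);

    int programId = GL20.glCreateProgram();
    GL20.glAttachShader(programId, vertexShader);
    GL20.glAttachShader(programId, fragmentShader);
    // Attribute index 0 must match the index used in glVertexAttribPointer(0, ...) in setupQuad()
    GL20.glBindAttribLocation(programId, 0, "in_Position");
    GL20.glLinkProgram(programId);
    return programId;
}
You would typically store the returned programId in a field, call this at the end of setupOpenGL() or setupQuad(), and add GL20.glUseProgram(programId) just before GL11.glDrawArrays(...) in loopCycle() (and GL20.glUseProgram(0) after it).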
Related
I recently decided to start learning OpenGL and got myself a book about OpenGL Core 3.3. The book is mostly about C++.
So, after looking around for a bit, I found a library in a language I was better at which provided almost the same functionality: LWJGL.
I followed the book's steps and translated the C++ syntax into Java, which worked until it got to actually drawing something.
There, the JVM just kept crashing, no matter what I changed about the code. After doing some debugging, I found out that the JVM crashed when I called either glVertexAttribPointer or glDrawArrays.
I am very new to OpenGL, and I am assuming this question must sound very stupid to someone more experienced, but: What do I need to change about this code?
float[] vertices = {
-0.5f, -0.5f, -0.0f,
0.5f, 0.5f, 0.0f,
0.0f,0.5f,0.0f
};
FloatBuffer b = BufferUtils.createFloatBuffer(9);
b.put(vertices);
int VBO = glGenBuffers();
int VAO = glGenVertexArrays();
log.info("VBO:" + VBO + "VAO: " + VAO);
// bind the Vertex Array Object first, then bind and set vertex buffer(s), and then configure vertex attribute(s).
glBindVertexArray(VAO);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 12, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, vertices, GL_STATIC_DRAW);
glBindVertexArray(0);
// Run the rendering loop until the user has attempted to close
// the window or has pressed the ESCAPE key.
while (!glfwWindowShouldClose(window))
{
// input
// -----
// render
// ------
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// draw our first triangle
glUseProgram(shaderProgram);
glBindVertexArray(VAO); // seeing as we only have a single VAO there's no need to bind it every time, but we'll do so to keep things a bit more organized
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0); // no need to unbind it every time
// glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
// -------------------------------------------------------------------------------
glfwSwapBuffers(window);
glfwPollEvents();
}
I would be very thankful for any help I can get. If you need more info or need to see more of my code, please let me know. Thanks in advance.
You have to bind the vertex buffer object to the target GL_ARRAY_BUFFER, before specifying the vertex attribute:
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 12, 0);
When glVertexAttribPointer is called, the buffer object currently bound to the GL_ARRAY_BUFFER target is associated with the attribute at that index, and a reference to the buffer object is stored in the state vector of the Vertex Array Object.
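For example, a hedged reordering of the setup from the question could look like this (LWJGL static imports assumed; the stride of 12 bytes is written as 3 * Float.BYTES):
int VAO = glGenVertexArrays();
int VBO = glGenBuffers();

glBindVertexArray(VAO);

// Bind the buffer first, so the following attribute pointer refers to it
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, vertices, GL_STATIC_DRAW);

// Attribute 0 now records a reference to VBO in the VAO's state
glVertexAttribPointer(0, 3, GL_FLOAT, false, 3 * Float.BYTES, 0);
glEnableVertexAttribArray(0);

glBindVertexArray(0);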
Using LWJGL I tried to render a simple Mesh on screen, but OpenGL decided to do nothing instead. :(
So I have a mesh class which creates a VBO. I can add some vertices which then are supposed to be drawn on screen.
public class Mesh {
private int vbo;
private int size = 0;
public Mesh() {
vbo = glGenBuffers();
}
public void addVertices(Vertex[] vertices) {
size = vertices.length;
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, Util.createFlippedBuffer(vertices), GL_STATIC_DRAW);
}
public void draw() {
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, false, Vertex.SIZE * 4, 0);
glDrawArrays(GL_TRIANGLES, 0, size);
glDisableVertexAttribArray(0);
}
}
Here is how I add vertices to my mesh:
mesh = new Mesh();
Vertex[] vertices = new Vertex[] { new Vertex(new Vector3f(-1, -1, 0)),
new Vertex(new Vector3f(-1, 1, 0)),
new Vertex(new Vector3f(0, 1, 0)) };
mesh.addVertices(vertices);
I am pretty sure I added them in the correct (clockwise) order.
And my OpenGL setup:
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
Calling glGetError() returns no error (0).
EDIT:
Well, I found out that Macs are a little weird when it comes to OpenGL. I needed to use a VAO along with the VBO (a sketch of that fix follows). Now it works fine. Thanks anyway!
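For anyone hitting the same thing, here is a hedged sketch of what that fix could look like, with the Mesh owning a VAO as well as the VBO (static imports and the question's Vertex/Util helpers assumed; this is not the asker's actual final code):
public class Mesh {
    private int vao;
    private int vbo;
    private int size = 0;

    public Mesh() {
        vao = glGenVertexArrays();
        vbo = glGenBuffers();
    }

    public void addVertices(Vertex[] vertices) {
        size = vertices.length;
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, Util.createFlippedBuffer(vertices), GL_STATIC_DRAW);
        // The attribute layout is recorded in the VAO
        glVertexAttribPointer(0, 3, GL_FLOAT, false, Vertex.SIZE * 4, 0);
        glEnableVertexAttribArray(0);
        glBindVertexArray(0);
    }

    public void draw() {
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, size);
        glBindVertexArray(0);
    }
}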
I don't see anywhere that you're specifying a shader, an output color, or a vertex array object. Depending on which profile you're using, you need to be doing one or more of these.
I would suggest checking / setting the following (a quick sketch follows this list):
Disable face culling to ensure that regardless of the winding you should see something
If you're requesting a core profile, you'll need a shader and quite possibly a vertex array object
If you're instead using a compatibility profile, you should call glColor3f(1, 1, 1) in your draw call to ensure you're not drawing a black triangle
Are you clearing the color and depth buffers before you render?
You might not be drawing that object within the viewing frustum; also call glGetError to make sure you aren't making any mistakes.
It is also important to understand the difference between fixed-pipeline and programmable-pipeline OpenGL. If you are using a programmable pipeline you will need to write shaders; otherwise you will need to set up the modelview and projection matrices.
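Pulling those suggestions together, a quick hedged sketch of a debug pass over the draw code might look like this (GL11 static imports assumed, compatibility-profile style to match the question's fixed-function setup; mesh is the object from the question):
glDisable(GL_CULL_FACE);                             // rule out winding/culling problems
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clear color and depth every frame
glColor3f(1.0f, 1.0f, 1.0f);                         // avoid drawing a black triangle on black
mesh.draw();
int error = glGetError();                            // catch mistakes as early as possible
if (error != GL_NO_ERROR) {
    System.err.println("GL error: " + error);
}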
I'm new to OpenGL and started with the small tutorial from dev.android.com. The sample code includes this Square class for a square geometry. The object will be created in the onSurfaceCreated() method and drawn every frame using onDrawFrame(). Here is the example code of the Square (constructor and draw-method):
public Square() {
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(squareCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(squareCoords);
vertexBuffer.position(0);
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
// prepare shaders and OpenGL program
int vertexShader = MyGLRenderer.loadShader(
GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = MyGLRenderer.loadShader(
GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
mProgram = GLES20.glCreateProgram(); // create empty OpenGL Program
GLES20.glAttachShader(mProgram, vertexShader); // add the vertex shader to program
GLES20.glAttachShader(mProgram, fragmentShader); // add the fragment shader to program
GLES20.glLinkProgram(mProgram); // create OpenGL program executables
}
public void draw(float[] mvpMatrix) {
// Add program to OpenGL environment
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the triangle coordinate data
GLES20.glVertexAttribPointer(
mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Set color for drawing the triangle
GLES20.glUniform4fv(mColorHandle, 1, color, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
MyGLRenderer.checkGlError("glGetUniformLocation");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
MyGLRenderer.checkGlError("glUniformMatrix4fv");
// Draw the square
GLES20.glDrawElements(
GLES20.GL_TRIANGLES, drawOrder.length,
GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
}
My question is now: how can I create the object not at onSurfaceCreated() but after a touch event?
I tried to define a Square variable but not initialize it at onSurfaceCreated(), then check if the object is null before drawing it. After the touch I called:
mSquare = new Square();
I know it's not a good way of implementing this, but I just wanted to see if it works. I would have created a list of drawable elements and run through it in the onDrawFrame() method, calling draw() on every object in the list. But since this approach causes the program to crash, I don't know how to go on.
You can only make calls to OpenGL within an OpenGL context.
There are 3 methods in which this context exists:
1) onSurfaceCreated - when context is created/recreated - you should load resources here
2) onSurfaceChanged - after creation and on a surface resize - you should assign size-dependent variables here
3) onDrawFrame - here rendering is executed - here you should execute all drawing commands
If you try to execute OpenGL calls outside of the context, most likely the code will fail to execute correctly.
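If you hold a reference to your GLSurfaceView, one way to get code onto the rendering thread is queueEvent, which runs its Runnable where the GL context is current (a hedged sketch; glSurfaceView is an illustrative field name, mSquare is from the question):
glSurfaceView.queueEvent(new Runnable() {
    @Override
    public void run() {
        // Runs on the rendering thread, so the Square's GL setup is valid here
        mSquare = new Square();
    }
});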
The Android UI and the GL context are different and run on different threads, each managed in its own way. So if you want to create a square on touch, you have to create the object inside the GL context, like this:
http://pastebin.com/zAav7jpu
Initialize your object in your touch handler (or wherever you want to do it):
public static boolean isAddedBody = false; // declare globally in touch_class
public void touch(/* touch params */) {
mSquare = new Square();
isAddedBody = true;
}
In your render()/draw() loop:
if (touch_class.isAddedBody){
mSquare.createBody();
touch_class.isAddedBody = false;
}
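For completeness, a hedged sketch of how that flag check might sit inside the Renderer's onDrawFrame; note that in the question the Square constructor itself compiles shaders, so constructing it here (on the GL thread) rather than in the touch handler keeps the GL calls inside the context (mSquare and mMVPMatrix are illustrative field names):
@Override
public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    if (touch_class.isAddedBody) {
        mSquare = new Square();          // created on the GL thread, so its GL calls succeed
        touch_class.isAddedBody = false;
    }
    if (mSquare != null) {
        mSquare.draw(mMVPMatrix);
    }
}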
EDIT: Solved it! I made a stupid mistake: I had a textureId field I'd forgotten about when it was textureID I should have used.
Okay, I am fully aware that this is a recurring question, and that there are a lot of tutorials and open source code. But I've been trying as best as I can for quite a while here, and my screen is still blank (with whatever color I set using glClearColor()).
So, I would be grateful for some pointers to what I'm doing wrong, or even better, some working code that will render a resource image.
I'll show what I've got so far (by doing some crafty copy-pasting) in my onDrawFrame of the class that implements the Renderer. I've removed some of the jumping between methods, and will simply paste it in the order it is executed.
Feel free to disregard my current code, I'm more than happy to start over, if anyone can give me a working piece of code.
Setup:
bitmap = BitmapFactory.decodeResource(panel.getResources(),
R.drawable.test);
addGameComponent(new MeleeAttackComponent());
// Mapping coordinates for the vertices
float textureCoordinates[] = { 0.0f, 2.0f, //
2.0f, 2.0f, //
0.0f, 0.0f, //
2.0f, 0.0f, //
};
short[] indices = new short[] { 0, 1, 2, 1, 3, 2 };
float[] vertices = new float[] { -0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f };
setIndices(indices);
setVertices(vertices);
setTextureCoordinates(textureCoordinates);
protected void setVertices(float[] vertices) {
// a float is 4 bytes, therefore we multiply the number of
// vertices by 4.
ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
vbb.order(ByteOrder.nativeOrder());
mVerticesBuffer = vbb.asFloatBuffer();
mVerticesBuffer.put(vertices);
mVerticesBuffer.position(0);
}
protected void setIndices(short[] indices) {
// a short is 2 bytes, therefore we multiply the number of
// indices by 2.
ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
ibb.order(ByteOrder.nativeOrder());
mIndicesBuffer = ibb.asShortBuffer();
mIndicesBuffer.put(indices);
mIndicesBuffer.position(0);
mNumOfIndices = indices.length;
}
protected void setTextureCoordinates(float[] textureCoords) {
// a float is 4 bytes, therefore we multiply the number of
// texture coordinates by 4.
ByteBuffer byteBuf = ByteBuffer
.allocateDirect(textureCoords.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
mTextureBuffer = byteBuf.asFloatBuffer();
mTextureBuffer.put(textureCoords);
mTextureBuffer.position(0);
}
//The onDrawFrame(GL10 gl)
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, -4);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);
if (shouldLoadTexture) {
loadGLTextures(gl);
shouldLoadTexture = false;
}
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glEnable(GL10.GL_TEXTURE_2D);
// Enable the texture state
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Point to our buffers
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
}
gl.glTranslatef(posX, posY, 0);
// Draw the triangles using the index buffer.
gl.glDrawElements(GL10.GL_TRIANGLES, mNumOfIndices,
GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
private void loadGLTextures(GL10 gl) {
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureID = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
It doesn't crash, no exceptions, simply a blank screen with color. I've printed stuff in there, so I'm pretty sure it is all executed.
I know it's not optimal to just paste code, but at the moment, I just want to be able to do what I was able to do with canvas :)
Thanks a lot
If you're getting the background colour, that means your window is properly set up. OpenGL is connected to that area of the screen.
However, OpenGL clips to the near and far clip planes, ensuring that objects don't cross or intersect the camera (which, both mathematically and logically, doesn't make sense) and that objects too far away don't appear. So if you've not set up modelview and projection correctly, it's probable that all your geometry is being clipped.
Modelview is used to map from world to eye space. Projection maps from eye space to screen space. So a typical application uses the former to position objects within the scene and to position the scene relative to the camera, while the latter deals with whether the camera sees with perspective or not, how many world units map to how many screen units, etc.
If you look at examples like this one, particularly onSurfaceChanged, you'll see an example of a perspective projection with a camera fixed at the origin.
Because the camera is at (0, 0, 0), leaving your geometry on z = 0 as your code does will cause it to be clipped. In that example code they've set the near clip plane to be at z = 0.1, so in your existing code you could change:
gl.glTranslatef(posX, posY, 0);
To:
gl.glTranslatef(posX, posY, -1.0f);
To push your geometry back sufficiently far to appear on screen.
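For reference, a hedged sketch of the kind of onSurfaceChanged setup the linked example uses (the 45-degree field of view and the 100-unit far plane are assumptions):
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    // Near plane at z = 0.1: geometry left at z = 0 (the camera's position) is clipped away
    GLU.gluPerspective(gl, 45.0f, (float) width / height, 0.1f, 100.0f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}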
Once again I'm trying to get into OpenGL, but as usual I choke when passing around vertices/vertixes/whatever, and every little detail can lead to disaster (wrong format, initialization not set up properly, where memory is saved, etc.).
My main goal is to use OpenGL for 2D graphics to improve performance compared to regular CPU drawing.
Anyways, my openGL Renderer looks like this:
package com.derp.testopengl;
import java.nio.Buffer;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLU;
import android.opengl.GLSurfaceView.Renderer;
public class OpenGLRenderer implements Renderer {
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
// Set the background clear color (RGBA).
gl.glClearColor(1.0f, 0.0f, 0.0f, 0.5f); // OpenGL docs.
// Enable Smooth Shading, default not really needed.
gl.glShadeModel(GL10.GL_SMOOTH);// OpenGL docs.
// Depth buffer setup.
gl.glClearDepthf(1.0f);// OpenGL docs.
// Enables depth testing.
gl.glEnable(GL10.GL_DEPTH_TEST);// OpenGL docs.
// The type of depth testing to do.
gl.glDepthFunc(GL10.GL_LEQUAL);// OpenGL docs.
// Really nice perspective calculations.
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, // OpenGL docs.
GL10.GL_NICEST);
}
public void onDrawFrame(GL10 gl) {
// Clears the screen and depth buffer.
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | // OpenGL docs.
GL10.GL_DEPTH_BUFFER_BIT);
// Define the points of my triangle
float floatbuff[] = {
1.0f,0.0f,0.0f,
0.0f,1.0f,0.0f,
-1.0f,0.0f,0.0f
};
// Create memory on the heap
ByteBuffer vbb = ByteBuffer.allocateDirect(3 * 3 * 4);
vbb.order(ByteOrder.nativeOrder());
FloatBuffer vertices = vbb.asFloatBuffer();
// Insert points into floatbuffer
vertices.put(floatbuff);
// Reset position
vertices.position(0);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Change color to green
gl.glColor4f(0.0f, 1.0f, 0.0f, 1.0f);
// Pass vertices to openGL
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertices);
// Draw 'em
gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
// Sets the current view port to the new size.
gl.glViewport(0, 0, width, height);// OpenGL docs.
// Select the projection matrix
gl.glMatrixMode(GL10.GL_PROJECTION);// OpenGL docs.
// Reset the projection matrix
gl.glLoadIdentity();// OpenGL docs.
// Should give a 2D coordinate system that corresponds to the screen
gl.glOrthof(0.0f, width, 0.0f, height, 0, 200.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();// OpenGL docs.
}
}
It currently crashes on the line:
gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
With an IndexOutOfBoundsException, but I'm sure there are more problems with the code.
Thankful for any help!
You probably already solved it, but I thought I'd put in the answer for somebody else. When you call this line:
gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
You are telling glDrawArrays to draw 3 vertices, which happens to match the single triangle in the buffer, but hard-coding the count is fragile. You should replace 3 with:
floatbuff.length / 3
That way, if you put in any more points for more polygons, it will render them correctly. Hope this helps.
-Brian
PS: I am currently learning how to use opengl with android. So I am figuring out all these little problems too. Good luck :)
Try changing this:
// Create memory on the heap
ByteBuffer vbb = ByteBuffer.allocateDirect(3 * 3 * 4);
vbb.order(ByteOrder.nativeOrder());
FloatBuffer vertices = vbb.asFloatBuffer();
// Insert points into floatbuffer
vertices.put(floatbuff);
// Reset position
vertices.position(0);
to this:
FloatBuffer vertices = FloatBuffer.wrap(floatbuff);
This may not be an answer, but some advice...
Don't put the code below in onDrawFrame, since your triangle coordinates don't change... do this setup in onSurfaceCreated instead.
Make the FloatBuffer a member variable of the class (a sketch of this follows at the end of this answer).
// Define the points of my triangle
float floatbuff[] = {
1.0f,0.0f,0.0f,
0.0f,1.0f,0.0f,
-1.0f,0.0f,0.0f
};
// Create memory on the heap
ByteBuffer vbb = ByteBuffer.allocateDirect(3 * 3 * 4);
vbb.order(ByteOrder.nativeOrder());
FloatBuffer vertices = vbb.asFloatBuffer();
// Insert points into floatbuffer
vertices.put(floatbuff);
Take a look at the TriangleRenderer.java class;
it will give you the basics.
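Putting that advice into a hedged sketch (only the relevant parts shown; names follow the question's code):
private FloatBuffer vertices;   // member variable, built once
private int vertexCount;

public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    float[] floatbuff = {
        1.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f,
        -1.0f, 0.0f, 0.0f
    };
    vertexCount = floatbuff.length / 3;
    ByteBuffer vbb = ByteBuffer.allocateDirect(floatbuff.length * 4);
    vbb.order(ByteOrder.nativeOrder());
    vertices = vbb.asFloatBuffer();
    vertices.put(floatbuff);
    vertices.position(0);
    // ... the rest of the original onSurfaceCreated setup ...
}

public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glColor4f(0.0f, 1.0f, 0.0f, 1.0f);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertices);   // reuse the member buffer
    gl.glDrawArrays(GL10.GL_TRIANGLES, 0, vertexCount);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}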