I've been working for two nights with this code that I got from my teacher. I've been looking for good Javadoc on JOGL without much success, so I've been using trial and error, changing the variables here and there. I've learned how to control rotation, distance and size, and made myself a little "Solar System" - but here comes my problem: how can I implement multiple textures for the different planets I've made? Here's my code:
public class RelativeTransformation implements GLEventListener, KeyListener {
// OpenGL window reference
private static GLWindow window;
// The animator is responsible for continuous operation
private static Animator animator;
// The program entry point
public static void main(String[] args) {
new RelativeTransformation().setup();
}
// Vertex data
private float[] vertexData;
// Triangle data
private short[] elementData;
// Light properties (4-valued vectors due to std140; see the OpenGL 4.5 reference)
private float[] lightProperties = {
// Position
2f, 0f, 3f, 0f,
// Ambient Color
0.2f, 0.2f, 0.2f, 0f,
// Diffuse Color
0.5f, 0.5f, 0.5f, 0f,
// Specular Color
1f, 1f, 1f, 0f
};
private float[] materialProperties = {
// Shininess
8f
};
// Camera properties
private float[] cameraProperties = {
0f, 0f, 2f
};
// The OpenGL profile
GLProfile glProfile;
// The texture filenames
private final String textureFilename = "src/relative_transformation/sun.jpg";
private final String textureFilename2 = "src/relative_transformation/earth.jpg";
// Create buffers for the names
private IntBuffer bufferNames = GLBuffers.newDirectIntBuffer(Buffer.MAX);
private IntBuffer vertexArrayName = GLBuffers.newDirectIntBuffer(1);
private IntBuffer textureNames = GLBuffers.newDirectIntBuffer(1);
// Create buffers for clear values
private FloatBuffer clearColor = GLBuffers.newDirectFloatBuffer(new float[] {0, 0, 0, 0});
private FloatBuffer clearDepth = GLBuffers.newDirectFloatBuffer(new float[] {1});
// Create references to buffers for holding the matrices
private ByteBuffer globalMatricesPointer, modelMatrixPointer1, modelMatrixPointer2, modelMatrixPointer3;
// Program instance reference
private Program program;
// Variable for storing the start time of the application
private long start;
// Application setup function
private void setup() {
// Get an OpenGL 4.x profile (x >= 0)
glProfile = GLProfile.get(GLProfile.GL4);
// Get a structure for defining the OpenGL capabilities with default values
GLCapabilities glCapabilities = new GLCapabilities(glProfile);
// Create the window with default capabilities
window = GLWindow.create(glCapabilities);
// Set the title of the window
window.setTitle("Relative Transformation");
// Set the size of the window
window.setSize(1024, 768);
// Set debug context (must be set before the window is set to visible)
window.setContextCreationFlags(GLContext.CTX_OPTION_DEBUG);
// Make the window visible
window.setVisible(true);
// Add OpenGL and keyboard event listeners
window.addGLEventListener(this);
window.addKeyListener(this);
// Create and start the animator
animator = new Animator(window);
animator.start();
// Add window event listener
window.addWindowListener(new WindowAdapter() {
// Window has been destroyed
@Override
public void windowDestroyed(WindowEvent e) {
// Stop animator and exit
animator.stop();
System.exit(1);
}
});
}
// GLEventListener.init implementation
@Override
public void init(GLAutoDrawable drawable) {
// Get OpenGL 4 reference
GL4 gl = drawable.getGL().getGL4();
// Initialize debugging
initDebug(gl);
// Initialize buffers
initBuffers(gl);
// Initialize vertex array
initVertexArray(gl);
// Initialize texture
initTexture(gl);
// Set up the program
program = new Program(gl, "relative_transformation", "shader", "shader");
// Enable OpenGL depth buffer testing
gl.glEnable(GL_DEPTH_TEST);
// Store the starting time of the application
start = System.currentTimeMillis();
}
// GLEventListener.display implementation
@Override
public void display(GLAutoDrawable drawable) {
// Get OpenGL 4 reference
GL4 gl = drawable.getGL().getGL4();
// Copy the view matrix to the server
{
// Create the view matrix (a translation by the negated camera position)
float[] view = FloatUtil.makeTranslation(new float[16], 0, false, -cameraProperties[0], -cameraProperties[1], -cameraProperties[2]);
// Copy each of the values to the second of the two global matrices
for (int i = 0; i < 16; i++)
globalMatricesPointer.putFloat(16 * 4 + i * 4, view[i]);
}
// Clear the color and depth buffers
gl.glClearBufferfv(GL_COLOR, 0, clearColor);
gl.glClearBufferfv(GL_DEPTH, 0, clearDepth);
// Copy the model matrices to the server
{
// Find a time delta for the time passed since the start of execution
long now = System.currentTimeMillis();
float diff = (float) (now - start) / 2000;
// Create rotation matrices around the y axis based on the time delta
// Nest the two rotations inside each other, relate the second to the first and increase the speed! See Universe.java (model and modelPos?)
float[] rotate1 = FloatUtil.makeRotationAxis(new float[16], 0, 0.5f*diff, 0f, 1f, 0f, new float[3]);
float[] rotate2 = FloatUtil.makeRotationAxis(new float[16], 0, 1.0f*diff, 0f, 1f, 0f, new float[3]);
float[] rotate3 = FloatUtil.makeRotationAxis(new float[16], 0, 15.0f*diff, 0f, 1f, 0f, new float[3]);
float[] translate2 = FloatUtil.makeTranslation(new float[16], false, 1.4f, 0f, 0f);
float[] translate3 = FloatUtil.makeTranslation(new float[16], false, 0.0f, 0f, 0f);
float[] modelPos2 = FloatUtil.multMatrix(rotate1, FloatUtil.multMatrix(rotate2, translate2, new float[16]), new float[16]);
float[] model2 = FloatUtil.multMatrix(modelPos2, FloatUtil.makeScale(new float[16], false, 0.1f, 0.1f, 0.1f), new float[16]);
float[] modelPos3 = FloatUtil.multMatrix(modelPos2, FloatUtil.multMatrix(rotate3, translate3, new float[16]), new float[16]);
float[] model3 = FloatUtil.multMatrix(modelPos3, FloatUtil.makeScale(new float[16], false, 0.5f, 0.5f, 0.5f), new float[16]);
// Copy the entire matrix to the server
modelMatrixPointer1.asFloatBuffer().put(rotate1);
modelMatrixPointer2.asFloatBuffer().put(model2);
modelMatrixPointer3.asFloatBuffer().put(model3);
}
// Activate the vertex program and vertex array
gl.glUseProgram(program.name);
gl.glBindVertexArray(vertexArrayName.get(0));
gl.glBindTexture(gl.GL_TEXTURE_2D, textureNames.get(0));
// Bind the global matrices buffer to a specified index within the uniform buffers
gl.glBindBufferBase(
GL_UNIFORM_BUFFER,
Semantic.Uniform.TRANSFORM0,
bufferNames.get(Buffer.GLOBAL_MATRICES));
// Bind the light properties buffer to a specified uniform index
gl.glBindBufferBase(
GL_UNIFORM_BUFFER,
Semantic.Uniform.LIGHT0,
bufferNames.get(Buffer.LIGHT_PROPERTIES));
// Bind the material properties buffer to a specified uniform index
gl.glBindBufferBase(
GL_UNIFORM_BUFFER,
Semantic.Uniform.MATERIAL,
bufferNames.get(Buffer.MATERIAL_PROPERTIES));
// Bind the camera properties buffer to a specified uniform index
gl.glBindBufferBase(
GL_UNIFORM_BUFFER,
Semantic.Uniform.CAMERA,
bufferNames.get(Buffer.CAMERA_PROPERTIES));
// Bind the model matrix buffer to a specified index within the uniform buffers
gl.glBindBufferBase(
GL_UNIFORM_BUFFER,
Semantic.Uniform.TRANSFORM1,
bufferNames.get(Buffer.MODEL_MATRIX1));
// Draw the first sphere
gl.glDrawElements(
GL_TRIANGLES,
elementData.length,
GL_UNSIGNED_SHORT,
0);
// Bind the model matrix buffer to a specified index within the uniform buffers
gl.glBindBufferBase(
GL_UNIFORM_BUFFER,
Semantic.Uniform.TRANSFORM1,
bufferNames.get(Buffer.MODEL_MATRIX2));
// Draw the second sphere
gl.glDrawElements(
GL_TRIANGLES,
elementData.length,
GL_UNSIGNED_SHORT,
0);
// Bind the model matrix buffer to a specified index within the uniform buffers
gl.glBindBufferBase(
GL_UNIFORM_BUFFER,
Semantic.Uniform.TRANSFORM1,
bufferNames.get(Buffer.MODEL_MATRIX3));
// Draw the third sphere
gl.glDrawElements(
GL_TRIANGLES,
elementData.length,
GL_UNSIGNED_SHORT,
0);
// Deactivate the program and vertex array
gl.glUseProgram(0);
gl.glBindVertexArray(0);
gl.glBindTexture(gl.GL_TEXTURE_2D, 0);
}
// GLEventListener.reshape implementation
@Override
public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) {
// Get OpenGL 4 reference
GL4 gl = drawable.getGL().getGL4();
// Create a perspective projection matrix
float[] perspective = FloatUtil.makePerspective(new float[16], 0, false, (float)Math.PI/2f, (float)width/height, 0.1f, 100f);
// Copy the projection matrix to the server
globalMatricesPointer.asFloatBuffer().put(perspective);
// Set the OpenGL viewport
gl.glViewport(x, y, width, height);
}
// GLEventListener.dispose implementation
@Override
public void dispose(GLAutoDrawable drawable) {
// Get OpenGL 4 reference
GL4 gl = drawable.getGL().getGL4();
// Delete the program
gl.glDeleteProgram(program.name);
// Delete the vertex array
gl.glDeleteVertexArrays(1, vertexArrayName);
// Delete the buffers
gl.glDeleteBuffers(Buffer.MAX, bufferNames);
gl.glDeleteTextures(1, textureNames);
}
// KeyListener.keyPressed implementation
@Override
public void keyPressed(KeyEvent e) {
// Destroy the window if the escape key is pressed
if (e.getKeyCode() == KeyEvent.VK_ESCAPE) {
new Thread(() -> {
window.destroy();
}).start();
}
}
// KeyListener.keyReleased implementation
@Override
public void keyReleased(KeyEvent e) {
}
// Function for initializing OpenGL debugging
private void initDebug(GL4 gl) {
// Register a new debug listener
window.getContext().addGLDebugListener(new GLDebugListener() {
// Output any messages to standard out
@Override
public void messageSent(GLDebugMessage event) {
System.out.println(event);
}
});
// Ignore all messages
gl.glDebugMessageControl(
GL_DONT_CARE,
GL_DONT_CARE,
GL_DONT_CARE,
0,
null,
false);
// Enable messages of high severity
gl.glDebugMessageControl(
GL_DONT_CARE,
GL_DONT_CARE,
GL_DEBUG_SEVERITY_HIGH,
0,
null,
true);
// Enable messages of medium severity
gl.glDebugMessageControl(
GL_DONT_CARE,
GL_DONT_CARE,
GL_DEBUG_SEVERITY_MEDIUM,
0,
null,
true);
}
// Function for initializing OpenGL buffers
private void initBuffers(GL4 gl) {
// Create a new float direct buffer for the vertex data
vertexData = createSphereVertices(0.5f, 16, 16);
FloatBuffer vertexBuffer = GLBuffers.newDirectFloatBuffer(vertexData);
// Create a new short direct buffer for the triangle indices
elementData = createSphereElements(16, 16);
ShortBuffer elementBuffer = GLBuffers.newDirectShortBuffer(elementData);
// Create a direct buffer for the light properties
FloatBuffer lightBuffer = GLBuffers.newDirectFloatBuffer(lightProperties);
// Create a direct buffer for the material properties
FloatBuffer materialBuffer = GLBuffers.newDirectFloatBuffer(materialProperties);
// Create a direct buffer for the camera properties
FloatBuffer cameraBuffer = GLBuffers.newDirectFloatBuffer(cameraProperties);
// Create the OpenGL buffers names
gl.glCreateBuffers(Buffer.MAX, bufferNames);
// Create and initialize a buffer storage for the vertex data
gl.glBindBuffer(GL_ARRAY_BUFFER, bufferNames.get(Buffer.VERTEX));
gl.glBufferStorage(GL_ARRAY_BUFFER, vertexBuffer.capacity() * Float.BYTES, vertexBuffer, 0);
gl.glBindBuffer(GL_ARRAY_BUFFER, 0);
// Create and initialize a buffer storage for the triangle indices
gl.glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferNames.get(Buffer.ELEMENT));
gl.glBufferStorage(GL_ELEMENT_ARRAY_BUFFER, elementBuffer.capacity() * Short.BYTES, elementBuffer, 0);
gl.glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
// Retrieve the uniform buffer offset alignment minimum
IntBuffer uniformBufferOffset = GLBuffers.newDirectIntBuffer(1);
gl.glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, uniformBufferOffset);
// Set the required bytes for the matrices in accordance with the uniform buffer offset alignment minimum
int globalBlockSize = Math.max(16 * 4 * 2, uniformBufferOffset.get(0));
int modelBlockSize = Math.max(16 * 4, uniformBufferOffset.get(0));
int lightBlockSize = Math.max(12 * Float.BYTES, uniformBufferOffset.get(0));
int materialBlockSize = Math.max(3 * Float.BYTES, uniformBufferOffset.get(0));
int cameraBlockSize = Math.max(3 * Float.BYTES, uniformBufferOffset.get(0));
// Create and initialize a named storage for the global matrices
gl.glBindBuffer(GL_UNIFORM_BUFFER, bufferNames.get(Buffer.GLOBAL_MATRICES));
gl.glBufferStorage(GL_UNIFORM_BUFFER, globalBlockSize, null, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
gl.glBindBuffer(GL_UNIFORM_BUFFER, 0);
// Create and initialize a named storage for the model matrix
// NUMERO 1
gl.glBindBuffer(GL_UNIFORM_BUFFER, bufferNames.get(Buffer.MODEL_MATRIX1));
gl.glBufferStorage(GL_UNIFORM_BUFFER, modelBlockSize, null, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
gl.glBindBuffer(GL_UNIFORM_BUFFER, 0);
// NUMERO 2
gl.glBindBuffer(GL_UNIFORM_BUFFER, bufferNames.get(Buffer.MODEL_MATRIX2));
gl.glBufferStorage(GL_UNIFORM_BUFFER, modelBlockSize, null, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
gl.glBindBuffer(GL_UNIFORM_BUFFER, 0);
// NUMERO 3
gl.glBindBuffer(GL_UNIFORM_BUFFER, bufferNames.get(Buffer.MODEL_MATRIX3));
gl.glBufferStorage(GL_UNIFORM_BUFFER, modelBlockSize, null, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
gl.glBindBuffer(GL_UNIFORM_BUFFER, 0);
// Create and initialize a named buffer storage for the light properties
gl.glBindBuffer(GL_UNIFORM_BUFFER, bufferNames.get(Buffer.LIGHT_PROPERTIES));
gl.glBufferStorage(GL_UNIFORM_BUFFER, lightBlockSize, lightBuffer, 0);
gl.glBindBuffer(GL_UNIFORM_BUFFER, 0);
// Create and initialize a named buffer storage for the material properties
gl.glBindBuffer(GL_UNIFORM_BUFFER, bufferNames.get(Buffer.MATERIAL_PROPERTIES));
gl.glBufferStorage(GL_UNIFORM_BUFFER, materialBlockSize, materialBuffer, 0);
gl.glBindBuffer(GL_UNIFORM_BUFFER, 0);
// Create and initialize a named buffer storage for the camera properties
gl.glBindBuffer(GL_UNIFORM_BUFFER, bufferNames.get(Buffer.CAMERA_PROPERTIES));
gl.glBufferStorage(GL_UNIFORM_BUFFER, cameraBlockSize, cameraBuffer, 0);
gl.glBindBuffer(GL_UNIFORM_BUFFER, 0);
// Map the global matrices buffer into the client space
globalMatricesPointer = gl.glMapNamedBufferRange(
bufferNames.get(Buffer.GLOBAL_MATRICES),
0,
16 * 4 * 2,
GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
// Map model matrix buffer 1 into the client space
modelMatrixPointer1 = gl.glMapNamedBufferRange(
bufferNames.get(Buffer.MODEL_MATRIX1),
0,
16 * 4,
GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
// Map model matrix buffer 2 into the client space
modelMatrixPointer2 = gl.glMapNamedBufferRange(
bufferNames.get(Buffer.MODEL_MATRIX2),
0,
16 * 4,
GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
// Map model matrix buffer 3 into the client space
modelMatrixPointer3 = gl.glMapNamedBufferRange(
bufferNames.get(Buffer.MODEL_MATRIX3),
0,
16 * 4,
GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
}
// Function for initializing the vertex array
private void initVertexArray(GL4 gl) {
// Create a single vertex array object
gl.glCreateVertexArrays(1, vertexArrayName);
// Associate the vertex attributes in the vertex array object with the vertex buffer
gl.glVertexArrayAttribBinding(vertexArrayName.get(0), Semantic.Attr.POSITION, Semantic.Stream.A);
gl.glVertexArrayAttribBinding(vertexArrayName.get(0), Semantic.Attr.NORMAL, Semantic.Stream.A);
gl.glVertexArrayAttribBinding(vertexArrayName.get(0), Semantic.Attr.TEXCOORD, Semantic.Stream.A);
// Set the format of the vertex attributes in the vertex array object
gl.glVertexArrayAttribFormat(vertexArrayName.get(0), Semantic.Attr.POSITION, 3, GL_FLOAT, false, 0);
gl.glVertexArrayAttribFormat(vertexArrayName.get(0), Semantic.Attr.NORMAL, 3, GL_FLOAT, false, 3 * 4);
gl.glVertexArrayAttribFormat(vertexArrayName.get(0), Semantic.Attr.TEXCOORD, 2, GL_FLOAT, false, 6 * 4);
// Enable the vertex attributes in the vertex object
gl.glEnableVertexArrayAttrib(vertexArrayName.get(0), Semantic.Attr.POSITION);
gl.glEnableVertexArrayAttrib(vertexArrayName.get(0), Semantic.Attr.NORMAL);
gl.glEnableVertexArrayAttrib(vertexArrayName.get(0), Semantic.Attr.TEXCOORD);
// Bind the triangle indices in the vertex array object to the triangle indices buffer
gl.glVertexArrayElementBuffer(vertexArrayName.get(0), bufferNames.get(Buffer.ELEMENT));
// Bind the vertex array object to the vertex buffer
gl.glVertexArrayVertexBuffer(vertexArrayName.get(0), Semantic.Stream.A, bufferNames.get(Buffer.VERTEX), 0, (3+3+2) * 4);
}
private void initTexture(GL4 gl) {
try {
// Load texture
TextureData textureData = TextureIO.newTextureData(glProfile, new File(textureFilename), false, TextureIO.JPG);
// Generate texture name
gl.glGenTextures(1, textureNames);
// Bind the texture
gl.glBindTexture(gl.GL_TEXTURE_2D, textureNames.get(0));
// Specify the format of the texture
gl.glTexImage2D(gl.GL_TEXTURE_2D,
0,
textureData.getInternalFormat(),
textureData.getWidth(),
textureData.getHeight(),
textureData.getBorder(),
textureData.getPixelFormat(),
textureData.getPixelType(),
textureData.getBuffer());
// Set the sampler parameters
gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// Generate mip maps
gl.glGenerateMipmap(GL_TEXTURE_2D);
// Deactivate texture
gl.glBindTexture(GL_TEXTURE_2D, 0);
}
catch (IOException io) {
io.printStackTrace();
}
}
private float[] createSphereVertices(float radius, int numH, int numV) {
// Variables needed for the calculations
float t1, t2;
float pi = (float)Math.PI;
float pi2 = (float)Math.PI*2f;
float d1 = pi2/numH;
float d2 = pi/numV;
// Allocate the data needed to store the necessary positions, normals and texture coordinates
int numVertices = (numH*(numV-1)+2);
int numFloats = (3+3+2);
float[] data = new float[numVertices*numFloats];
data[0] = 0f; data[1] = radius; data[2] = 0f;
data[3] = 0f; data[4] = 1f; data[5] = 0f;
data[6] = 0.5f; data[7] = 1f;
for (int j=0; j<numV-1; j++) {
for (int i=0; i<numH; i++) {
// Position
data[(j*numH+i+1)*numFloats] = radius*(float)(Math.sin(i*d1)*Math.sin((j+1)*d2));
data[(j*numH+i+1)*numFloats+1] = radius*(float)Math.cos((j+1)*d2);
data[(j*numH+i+1)*numFloats+2] = radius*(float)(Math.cos(i*d1)*Math.sin((j+1)*d2));
// Normal
data[(j*numH+i+1)*numFloats+3] = (float)(Math.sin(i*d1)*Math.sin((j+1)*d2));
data[(j*numH+i+1)*numFloats+4] = (float)Math.cos((j+1)*d2);
data[(j*numH+i+1)*numFloats+5] = (float)(Math.cos(i*d1)*Math.sin((j+1)*d2));
// UV
data[(j*numH+i+1)*numFloats+6] = (float)(Math.asin(data[(j*numH+i+1)*numFloats+3])/Math.PI) + 0.5f;
data[(j*numH+i+1)*numFloats+7] = (float)(Math.asin(data[(j*numH+i+1)*numFloats+4])/Math.PI) + 0.5f;
}
}
data[(numVertices-1)*numFloats] = 0f; data[(numVertices-1)*numFloats+1] = -radius; data[(numVertices-1)*numFloats+2] = 0f;
data[(numVertices-1)*numFloats+3] = 0f; data[(numVertices-1)*numFloats+4] = -1f; data[(numVertices-1)*numFloats+5] = 0f;
data[(numVertices-1)*numFloats+6] = 0.5f; data[(numVertices-1)*numFloats+7] = 0f;
return data;
}
private short[] createSphereElements(int numH, int numV) {
// Allocate the data needed to store the necessary elements
int numTriangles = (numH*(numV-1)*2);
short[] data = new short[numTriangles*3];
for (int i=0; i<numH; i++) {
data[i*3] = 0; data[i*3+1] = (short)(i+1); data[i*3+2] = (short)((i+1)%numH+1);
}
for (int j=0; j<numV-2; j++) {
for (int i=0; i<numH; i++) {
data[((j*numH+i)*2+numH)*3] = (short)(j*numH+i+1);
data[((j*numH+i)*2+numH)*3+1] = (short)((j+1)*numH+i+1);
data[((j*numH+i)*2+numH)*3+2] = (short)((j+1)*numH+(i+1)%numH+1);
data[((j*numH+i)*2+numH)*3+3] = (short)((j+1)*numH+(i+1)%numH+1);
data[((j*numH+i)*2+numH)*3+4] = (short)(j*numH+(i+1)%numH+1);
data[((j*numH+i)*2+numH)*3+5] = (short)(j*numH+i+1);
}
}
int triangleIndex = (numTriangles-numH);
int vertIndex = (numV-2)*numH+1;
for (short i=0; i<numH; i++) {
data[(triangleIndex+i)*3] = (short)(vertIndex+i);
data[(triangleIndex+i)*3+1] = (short)((numH*(numV-1)+1));
data[(triangleIndex+i)*3+2] = (short)(vertIndex+(i+1)%numH);
}
return data;
}
// Private class representing a shader program
private class Program {
// The name of the program
public int name = 0;
// Constructor
public Program(GL4 gl, String root, String vertex, String fragment) {
// Instantiate a complete vertex shader
ShaderCode vertShader = ShaderCode.create(gl, GL_VERTEX_SHADER, this.getClass(), root, null, vertex,
"vert", null, true);
// Instantiate a complete fragment shader
ShaderCode fragShader = ShaderCode.create(gl, GL_FRAGMENT_SHADER, this.getClass(), root, null, fragment,
"frag", null, true);
// Create the shader program
ShaderProgram shaderProgram = new ShaderProgram();
// Add the vertex and fragment shader
shaderProgram.add(vertShader);
shaderProgram.add(fragShader);
// Initialize the program
shaderProgram.init(gl);
// Store the program name (nonzero if valid)
name = shaderProgram.program();
// Compile and link the program
shaderProgram.link(gl, System.out);
}
}
// Interface defining integer constants for the buffer names
private interface Buffer {
int VERTEX = 0;
int ELEMENT = 1;
int GLOBAL_MATRICES = 2;
int MODEL_MATRIX1 = 3;
int MODEL_MATRIX2 = 4;
int MODEL_MATRIX3 = 5;
int LIGHT_PROPERTIES = 6;
int MATERIAL_PROPERTIES = 7;
int CAMERA_PROPERTIES = 8;
int MAX = 9;
}
// Private class to provide a semantic interface between Java and GLSL
private static class Semantic {
public interface Attr {
int POSITION = 0;
int NORMAL = 1;
int TEXCOORD = 2;
}
public interface Uniform {
int TRANSFORM0 = 1;
int TRANSFORM1 = 2;
int LIGHT0 = 3;
int MATERIAL = 4;
int CAMERA = 5;
}
public interface Stream {
int A = 0;
}
}
}
You need a texture object for each texture. For this you have to create a name container (IntBuffer) with the proper size:
private IntBuffer textureNames = GLBuffers.newDirectIntBuffer( noOfTextures );
Then you have to create the texture objects and load the textures:
gl.glGenTextures( noOfTextures , textureNames);
for (int i=0; i<noOfTextures; i++) {
TextureData textureData = TextureIO.newTextureData(glProfile,
new File( textureFilename[i] ), false, TextureIO.JPG);
gl.glBindTexture(gl.GL_TEXTURE_2D, textureNames.get(i));
gl.glTexImage2D( ..... );
.....
}
Finally you have to bind the proper texture right before you draw the mesh:
gl.glBindTexture(gl.GL_TEXTURE_2D, textureNames.get( texture_index1 ));
gl.glDrawElements( ..... );
.....
gl.glBindTexture(gl.GL_TEXTURE_2D, textureNames.get( texture_index2 ));
gl.glDrawElements( ..... );
Take the number of generated textures into account when you delete them:
gl.glDeleteTextures( noOfTextures , textureNames);
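Applied to the code in the question, a minimal sketch of how initTexture could be generalized might look like this (the textureFilenames array and the initTextures name are just placeholders I chose; this is untested and only meant to illustrate the idea):
private final String[] textureFilenames = {
    "src/relative_transformation/sun.jpg",
    "src/relative_transformation/earth.jpg"
};
private IntBuffer textureNames = GLBuffers.newDirectIntBuffer(textureFilenames.length);

private void initTextures(GL4 gl) {
    // Generate one texture name per image file
    gl.glGenTextures(textureFilenames.length, textureNames);
    for (int i = 0; i < textureFilenames.length; i++) {
        try {
            // Load the image data for texture i
            TextureData textureData = TextureIO.newTextureData(glProfile, new File(textureFilenames[i]), false, TextureIO.JPG);
            // Bind texture i and upload its data
            gl.glBindTexture(GL_TEXTURE_2D, textureNames.get(i));
            gl.glTexImage2D(GL_TEXTURE_2D, 0, textureData.getInternalFormat(), textureData.getWidth(), textureData.getHeight(), textureData.getBorder(), textureData.getPixelFormat(), textureData.getPixelType(), textureData.getBuffer());
            // Sampler parameters and mipmaps, as in the original initTexture
            gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
            gl.glGenerateMipmap(GL_TEXTURE_2D);
        } catch (IOException io) {
            io.printStackTrace();
        }
    }
    // Deactivate texture
    gl.glBindTexture(GL_TEXTURE_2D, 0);
}
In display() you would then bind textureNames.get(0) before the draw call for the sun and textureNames.get(1) before the draw call for the earth, instead of binding a single texture once.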
Related
I'm studying OpenGL and the book I've been using is the OpenGL(R) ES 3.0 Programming Guide, 2nd Edition. In chapter 6 they talk about vertex arrays and give an example that uses the vertex array methods, which is the code below. Later in that chapter they talk about Vertex Array Objects, and what I wanted to try is to refactor this example code into something that uses the Vertex Array Object methods.
The problem is that I have no idea how a Vertex Array Object works, and I would be grateful if anyone could push me in the right direction.
The example code is here:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.opengl.GLES30;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;
import android.util.Log;
import se.hig.dvg306.modul3app.R;
import se.hig.dvg306.modul3app.tools.ResourceHandler;
public class Modul3Renderer implements GLSurfaceView.Renderer
{
//
// Constructor - loads model data from a res file and creates byte buffers for
// vertex data and for normal data
//
public Modul3Renderer (Context context)
{
appContext = context;
Log.e(TAG, "--->>> Creating ModelLoader...");
ModelLoader modelLoader = new ModelLoaderImpl ();
Log.e(TAG, "--->>> ...finished.");
Log.e(TAG, "--->>> Loading model...");
Log.e(TAG, "--->>> Starting with vertices...");
float[] mVerticesData; //= new float[0];
try {
mVerticesData = modelLoader.loadModel (context, R.raw.torus2, 0, 4, 6);
} catch (IOException e) {
throw new RuntimeException (e);
}
Log.e(TAG, "--->>> ...finished.");
// Process vertex data
// 4: because of 4 elements per vertex position
nbrOfVertices = mVerticesData.length / 4;
mVertices = ByteBuffer.allocateDirect(mVerticesData.length * 4)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mVertices.put(mVerticesData).position(0);
Log.e(TAG, "--->>> Starting with normals...");
float[] mNormalData; //= new float[0];
try {
mNormalData = modelLoader.loadModel (context, R.raw.torus2, 4, 4, 6);
} catch (IOException e) {
throw new RuntimeException (e);
}
Log.e(TAG, "--->>> ...finished.");
// Process normal data
// 4: because of 4 elements per normal
nbrOfNormals = mNormalData.length / 4;
mNormals = ByteBuffer.allocateDirect(mNormalData.length * 4)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mNormals.put(mNormalData).position(0);
}
///
// Create a shader object, load the shader source, and
// compile the shader.
//
private int createShader(int type, String shaderSrc )
{
int shader;
int[] compiled = new int[1];
// Create the shader object
shader = GLES30.glCreateShader ( type );
if ( shader == 0 )
{
return 0;
}
// Load the shader source
GLES30.glShaderSource ( shader, shaderSrc );
// Compile the shader
GLES30.glCompileShader ( shader );
// Check the compile status
GLES30.glGetShaderiv ( shader, GLES30.GL_COMPILE_STATUS, compiled, 0 );
if ( compiled[0] == 0 )
{
Log.e ( TAG, GLES30.glGetShaderInfoLog ( shader ) );
GLES30.glDeleteShader ( shader );
return 0;
}
return shader;
}
///
// Initialize the shader and program object
//
public void onSurfaceCreated ( GL10 glUnused, EGLConfig config )
{
int vertexShader;
int fragmentShader;
int programObject;
int[] linked = new int[1];
// Load the source code for the vertex shader program from a res file:
try {
vShaderStr = ResourceHandler.readTextData(appContext, R.raw.vertex_shader);
} catch (IOException e) {
Log.e ( TAG, "--->>> Could not load source code for vertex shader.");
throw new RuntimeException (e);
}
Log.e ( TAG, "--->>> Loaded vertex shader: " + vShaderStr);
// Load the source code for the fragment shader program from a res file:
try {
fShaderStr = ResourceHandler.readTextData(appContext, R.raw.fragment_shader);
} catch (IOException e) {
Log.e ( TAG, "--->>> Could not load source code for fragment shader.");
throw new RuntimeException (e);
}
Log.e ( TAG, "--->>> Loaded fragment shader: " + fShaderStr);
// Create the vertex/fragment shaders
vertexShader = createShader( GLES30.GL_VERTEX_SHADER, vShaderStr );
fragmentShader = createShader( GLES30.GL_FRAGMENT_SHADER, fShaderStr );
// Create the program object
programObject = GLES30.glCreateProgram();
if ( programObject == 0 )
{
return;
}
GLES30.glAttachShader ( programObject, vertexShader );
GLES30.glAttachShader ( programObject, fragmentShader );
// Bind vPosition to attribute 0
GLES30.glBindAttribLocation ( programObject, 0, "vPosition" );
// Bind vNormal to attribute 1
GLES30.glBindAttribLocation ( programObject, 1, "vNormal" );
// Link the program
GLES30.glLinkProgram ( programObject );
// Check the link status
GLES30.glGetProgramiv ( programObject, GLES30.GL_LINK_STATUS, linked, 0 );
if ( linked[0] == 0 )
{
Log.e ( TAG, "Error linking program:" );
Log.e ( TAG, GLES30.glGetProgramInfoLog ( programObject ) );
GLES30.glDeleteProgram ( programObject );
return;
}
// Store the program object
mProgramObject = programObject;
GLES30.glClearColor ( 0.15f, 0.15f, 0.15f, 1.0f );
GLES30.glEnable(GLES30.GL_DEPTH_TEST);
}
//
// Draw a torus using the shader pair created in onSurfaceCreated()
//
public void onDrawFrame ( GL10 glUnused )
{
// Initiate the model-view matrix as identity matrix
Matrix.setIdentityM(mViewMatrix, 0);
// Define a translation transformation
Matrix.translateM(mViewMatrix, 0, 0.0f, 0.0f, -60.0f);
// Define a rotation transformation
Matrix.rotateM(mViewMatrix, 0, 90.0f, 1.0f, 0.0f, 0.0f);
// Calculate the model-view and projection transformation as composite transformation
Matrix.multiplyMM (mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
// Clear the color buffer
GLES30.glClear ( GLES30.GL_COLOR_BUFFER_BIT | GLES30.GL_DEPTH_BUFFER_BIT );
// Use the program object
GLES30.glUseProgram ( mProgramObject );
// Make MVP matrix accessible in the vertex shader
mMVPMatrixHandle = GLES30.glGetUniformLocation(mProgramObject, "uMVPMatrix");
GLES30.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
// Light position:
vLightPositionHandle = GLES30.glGetUniformLocation(mProgramObject, "vLightPosition");
GLES30.glUniform4fv(vLightPositionHandle, 1, lightPosition, 0);
// Light color:
vLightColorDfHandle = GLES30.glGetUniformLocation(mProgramObject, "vLightColorDf");
GLES30.glUniform4fv(vLightColorDfHandle, 1, lightColorDf, 0);
// Material color:
vMaterialColorDfHandle = GLES30.glGetUniformLocation(mProgramObject, "vMaterialColorDf");
GLES30.glUniform4fv(vMaterialColorDfHandle, 1, materialColorDf, 0);
// Load the vertex data from mVertices
GLES30.glVertexAttribPointer ( 0, 4, GLES30.GL_FLOAT, false, 0, mVertices );
// Assign vertex data to 'in' variable bound to attribute with index 0:
GLES30.glEnableVertexAttribArray ( 0 );
// Load the normal data from mNormals
GLES30.glVertexAttribPointer ( 1, 4, GLES30.GL_FLOAT, false, 0, mNormals );
// Assign normal data to 'in' variable bound to attribute with index 1:
GLES30.glEnableVertexAttribArray ( 1 );
GLES30.glDrawArrays (GLES30.GL_TRIANGLES, 0, nbrOfVertices);
GLES30.glDisableVertexAttribArray ( 1 );
GLES30.glDisableVertexAttribArray ( 0 );
}
//
// Handle surface changes
//
public void onSurfaceChanged ( GL10 glUnused, int width, int height )
{
mWidth = width;
mHeight = height;
GLES30.glViewport(0, 0, width, height);
float ratio = (float) width / height;
// this projection matrix is applied to object coordinates
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1.0f, 1.0f, 0.5f, 1000.0f);
}
// Member variables
private Context appContext;
private int mWidth;
private int mHeight;
private int nbrOfVertices;
private FloatBuffer mVertices;
private int nbrOfNormals;
private FloatBuffer mNormals;
private int mProgramObject;
private int mMVPMatrixHandle;
// Transformation data:
private final float[] mMVPMatrix = new float[16];
private final float[] mProjectionMatrix = new float[16];
private final float[] mViewMatrix = new float[16];
// Light position and color (only diffuse term now):
private int vLightPositionHandle;
private final float lightPosition [] = {175.0f, 75.0f, 125.0f, 0.0f};
// Light color (only diffuse term now):
private int vLightColorDfHandle;
private final float lightColorDf [] = {0.98f, 0.98f, 0.98f, 1.0f};
// Material color (only diffuse term now):
private int vMaterialColorDfHandle;
private final float materialColorDf [] = {0.62f, 0.773f, 0.843f, 1.0f};
// To be read when creating the instance:
private String vShaderStr;
private String fShaderStr;
private static String TAG = "Modul3Renderer";
}
I've tried for the past few days to understand how to write code that uses the VAO methods, but I can't wrap my head around it, so I decided to ask. I hope that by asking I can get some understanding of how to begin.
When you want to use a Vertex Array Object, you must create and bind the VAO. The vertex specification is stored in the state vector of the currently bound VAO, so you must do the vertex specification while the VAO is bound. I also suggest putting the attributes in Vertex Buffer Objects:
int vao;
int vboVertices;
int vboNormals;
int[] tmp = new int[1];
// Create and bind the VAO (android.opengl.GLES30 uses the (n, array, offset) form)
GLES30.glGenVertexArrays(1, tmp, 0);
vao = tmp[0];
GLES30.glBindVertexArray(vao);
// Vertex positions in a VBO, bound to attribute 0
GLES30.glGenBuffers(1, tmp, 0);
vboVertices = tmp[0];
GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, vboVertices);
GLES30.glBufferData(GLES30.GL_ARRAY_BUFFER, mVertices.remaining() * 4, mVertices, GLES30.GL_STATIC_DRAW);
GLES30.glVertexAttribPointer(0, 4, GLES30.GL_FLOAT, false, 0, 0);
GLES30.glEnableVertexAttribArray(0);
// Normals in a second VBO, bound to attribute 1
GLES30.glGenBuffers(1, tmp, 0);
vboNormals = tmp[0];
GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, vboNormals);
GLES30.glBufferData(GLES30.GL_ARRAY_BUFFER, mNormals.remaining() * 4, mNormals, GLES30.GL_STATIC_DRAW);
GLES30.glVertexAttribPointer(1, 4, GLES30.GL_FLOAT, false, 0, 0);
GLES30.glEnableVertexAttribArray(1);
Later you can use the VAO to draw the geometry. For this it is enough to bind the VAO:
GLES30.glBindVertexArray(vao);
GLES30.glDrawArrays (GLES30.GL_TRIANGLES, 0, nbrOfVertices);
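Applied to the Modul3Renderer above, this setup would go at the end of onSurfaceCreated (with vao kept as a member field - that field name is just my own choice here), and onDrawFrame no longer needs the per-frame glVertexAttribPointer/glEnableVertexAttribArray calls; a minimal sketch:
// In onDrawFrame, after setting the uniforms:
GLES30.glBindVertexArray(vao);
GLES30.glDrawArrays(GLES30.GL_TRIANGLES, 0, nbrOfVertices);
// Optionally unbind so later state changes cannot affect the VAO
GLES30.glBindVertexArray(0);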
I'm learning to use LibGDX and my goal is to create a cube whose resolution (number of vertices along each face) you can control. I already did that, and managed to use a MeshBuilder to make it out of 6 different meshes and then render the resulting Mesh successfully using basic shaders:
Cube Mesh
//creates a square face with a normal vector and resolution number of vertices along any edge of the face
public Mesh createFace(Vector3 normal, int resolution) {
//creates 2 vectors perpendicular to each other and to the vector normal
Vector3 axisA = new Vector3(normal.y,normal.z,normal.x);
Vector3 axis = u.crs(normal, axisA);
Vector3 axisB = new Vector3(u.sqrt(axis.x),u.sqrt(axis.y),u.sqrt(axis.z));
//creates the arrays to hold the vertices and triangles
Vector3[] vertices = new Vector3[resolution * resolution];
//code for triangles
short[] triangles = new short[(resolution - 1) * (resolution - 1) * 6];
int triIndex = 0;
//looping over each vertex in the face
for (int y = 0; y < resolution; y++) {
for (int x = 0; x < resolution; x++) {
int vertexIndex = x + y * resolution;
//vector representing how close to the end of the x or y axis the loop is
Vector2 t = new Vector2(x / (resolution - 1f),y / (resolution - 1f));
//calculates the position of the vertex to place on the face
Vector3 mulA = u.mul(axisA, (2*t.x - 1));
Vector3 mulB = u.mul(axisB, (2*t.y-1));
Vector3 point = u.add3(normal, mulA, mulB);
//point = u.normalize(point);
vertices[vertexIndex] = point;
//puts the vertices into triangles
if (x != resolution - 1 && y != resolution - 1) {
triangles[triIndex + 0] = (short) vertexIndex;
triangles[triIndex + 1] = (short) (vertexIndex + resolution + 1);
triangles[triIndex + 2] = (short) (vertexIndex + resolution);
triangles[triIndex + 3] = (short) vertexIndex;
triangles[triIndex + 4] = (short) (vertexIndex + 1);
triangles[triIndex + 5] = (short) (vertexIndex + resolution + 1);
triIndex += 6;
}
}
}
float[] verticeList = u.vectorToList(vertices);
Mesh m = new Mesh(true, resolution * resolution, triangles.length, new VertexAttribute(Usage.Position,3,"a_Position"));
m.setIndices(triangles);
m.setVertices(verticeList);
return m;
}
//generates a cube Mesh with resolution vertices along each face
public Mesh generateFaces(int resolution, float scale) {
MeshBuilder meshBuilder = new MeshBuilder();
meshBuilder.begin(new VertexAttributes(new VertexAttribute (Usage.Position, 3 ,"a_Position")));
Vector3[] faceNormals = {
new Vector3(0,1*scale,0), //up
new Vector3(0,-1*scale,0), //down
new Vector3(-1*scale,0,0), //left
new Vector3(1*scale,0,0), //right
new Vector3(0,0,1*scale), //forward
new Vector3(0,0,-1*scale) //back
};
for (int i = 0; i < faceNormals.length; i++) {
meshBuilder.part("part"+ Integer.toString(i), GL20.GL_TRIANGLES);
meshBuilder.addMesh(createFace(faceNormals[i], resolution));
}
Mesh mesh = meshBuilder.end();
return mesh;
}
u is just a utilities class I created to store some math functions.
I then render it like so:
@Override
public void render () {
camController.update();
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
shader.bind();
shader.setUniformMatrix("matViewProj", cam.combined);
//rendering mesh
mesh1.render(shader, GL20.GL_LINE_STRIP);
[...]
}
I now want to make a model out of that mesh where each of the 6 faces will have a different color.
I thus tried to do it using a ModelBuilder following the LibGDX wiki, like so:
public Model generateModel(int resolution, float scale, Color[] colors) {
Vector3[] faceNormals = {
new Vector3(0,1*scale,0), //up
new Vector3(0,-1*scale,0), //down
new Vector3(-1*scale,0,0), //left
new Vector3(1*scale,0,0), //right
new Vector3(0,0,1*scale), //forward
new Vector3(0,0,-1*scale) //back
};
ModelBuilder modelBuilder = new ModelBuilder();
modelBuilder.begin();
for (int i = 0; i < faceNormals.length; i++) {
Mesh mesh = createFace(faceNormals[i], resolution);
MeshPart part = new MeshPart("part"+Integer.toString(i),mesh, 0, mesh.getNumVertices() ,GL20.GL_TRIANGLES);
modelBuilder.node().parts.add(new NodePart(part, new Material(ColorAttribute.createDiffuse(colors[i]))));
}
Model m = modelBuilder.end();
return m;
}
I then rendered it using a ModelBatch and a ModelInstance:
@Override
public void create () {
//creates an environment to handle lighting and such
environment = new Environment();
environment.set(new ColorAttribute(ColorAttribute.AmbientLight,0.4f,0.4f,0.4f,1f));
environment.add(new DirectionalLight().set(0.8f,0.8f,0.8f,-1f,-0.8f,-0.2f));
modelBatch = new ModelBatch();
//handling the inputProcessors of the camera and stage(UI)
multiplexer = new InputMultiplexer();
stage = new Stage();
multiplexer.addProcessor(stage);
scroll = new ScrolledInputProcessor();
multiplexer.addProcessor(scroll);
//camera (3D inputProcessor)
cam = new PerspectiveCamera(67,Gdx.graphics.getWidth(),Gdx.graphics.getHeight());
cam.position.set(10f,10f,10f);
cam.lookAt(0,0,0);
cam.near = 1f;
cam.far = 300f;
cam.update();
camController = new CameraInputController(cam);
multiplexer.addProcessor(camController);
//shaders for every vertex and every pixel(fragment)
shader = new ShaderProgram(Gdx.files.internal("shader/vertexshader.glsl").readString() ,Gdx.files.internal("shader/fragmentshader.glsl").readString());
shader2 = new ShaderProgram(Gdx.files.internal("shader/vertexshader.glsl").readString() ,Gdx.files.internal("shader/fragmentshader2.glsl").readString());
//The 2D box encompassing the screen (UI)
table = new Table();
table.setFillParent(true);
stage.addActor(table);
//skins for UI
skin = new Skin(Gdx.files.internal("uiskin.json"));
//making a slider and dressing it in the skin
Drawable knobDown = skin.newDrawable("default-slider-knob", Color.GRAY);
SliderStyle sliderStyle = skin.get("default-horizontal", SliderStyle.class);
sliderStyle.knobDown = knobDown;
slider = new Slider(3.0f, 70.0f, 1.0f, false, sliderStyle);
table.right().top();
table.add(slider).row();
//creates the unit cube and unit sphere
model = generateModel(res, 1, colors);
instance = new ModelInstance(model);
font = new BitmapFont(Gdx.files.internal("uiskin.fnt"));
batch = new SpriteBatch();
Gdx.input.setInputProcessor(multiplexer);
}
@Override
public void render () {
camController.update();
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
shader.bind();
shader.setUniformMatrix("matViewProj", cam.combined);
modelBatch.begin(cam);
modelBatch.render(instance, environment);
modelBatch.end();
batch.begin();
font.draw(batch, "Zoom Level : " + zoomLevel, 1000f, 100f);
batch.end();
stage.act(Gdx.graphics.getDeltaTime());
stage.draw();
}
However, when I run the program, nothing is rendered, just the gray void.
Gray void of nothingness
My question is: How do I get my model to render?
I attempted to follow the LWJGL 3.2+ tutorial on drawElements and get my LWJGL application to draw a quad. My code runs without errors but doesn't draw anything (apart from the basic window), no matter where I call my loopCycle method that should draw the quad. I assume it has to do with the change from Display (tutorial) to GLFW (my code)? I saw some posts talking about projection, view and model matrices, which I don't use (afaik) - is that why it doesn't display anything?
package org.tempest.game;
import org.lwjgl.*;
import org.lwjgl.glfw.*;
import org.lwjgl.opengl.*;
import org.lwjgl.system.*;
import java.nio.*;
import static org.lwjgl.glfw.Callbacks.*;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.system.MemoryStack.*;
import static org.lwjgl.system.MemoryUtil.*;
public class Graphics {
// The window handle
private long window;
// Window setup
private final String WINDOW_TITLE = "Test";
// 1920x1080, 1600x900 and 1200x675 are all 16:9 ratios
private final int WIDTH = 320;
private final int HEIGHT = 240;
// Quad variables
private int vaoId = 0;
private int vboId = 0;
private int vboiId = 0;
private int indicesCount = 0;
public static void main(String[] args) {
new Graphics().run();
}
public void run() {
System.out.println("Hello LWJGL " + Version.getVersion() + "!");
init();
setupQuad();
loop();
destroyOpenGL();
// Free the window callbacks and destroy the window
glfwFreeCallbacks(window);
glfwDestroyWindow(window);
// Terminate GLFW and free the error callback
glfwTerminate();
glfwSetErrorCallback(null).free();
}
private void init() {
// Setup an error callback. The default implementation
// will print the error message in System.err.
GLFWErrorCallback.createPrint(System.err).set();
// Initialize GLFW. Most GLFW functions will not work before doing this.
if ( !glfwInit() )
throw new IllegalStateException("Unable to initialize GLFW");
// Configure GLFW
glfwDefaultWindowHints(); // optional, the current window hints are already the default
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE);
// Create the window
window = glfwCreateWindow(WIDTH, HEIGHT, WINDOW_TITLE, NULL, NULL);
if ( window == NULL )
throw new RuntimeException("Failed to create the GLFW window");
// Setup a key callback. It will be called every time a key is pressed, repeated or released.
glfwSetKeyCallback(window, (window, key, scancode, action, mods) -> {
if ( key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE )
glfwSetWindowShouldClose(window, true); // We will detect this in the rendering loop
});
// Get the thread stack and push a new frame
try ( MemoryStack stack = stackPush() ) {
IntBuffer pWidth = stack.mallocInt(1); // int*
IntBuffer pHeight = stack.mallocInt(1); // int*
// Get the window size passed to glfwCreateWindow
glfwGetWindowSize(window, pWidth, pHeight);
// Get the resolution of the primary monitor
GLFWVidMode vidmode = glfwGetVideoMode(glfwGetPrimaryMonitor());
// Center the window
glfwSetWindowPos(
window,
(vidmode.width() - pWidth.get(0)) / 2,
(vidmode.height() - pHeight.get(0)) / 2
);
} // the stack frame is popped automatically
// Make the OpenGL context current
glfwMakeContextCurrent(window);
// V-sync: 0 disables it, 1 enables it
glfwSwapInterval(0);
// Make the window visible
glfwShowWindow(window);
}
private void loop() {
// Initialize variables for fps calculation
long time_start = System.nanoTime();
int frames = 0;
final double check_fps_time = 1d;
// Set the clear color
glClearColor(0.2f, 0.2f, 0.2f, 0.0f);
// TODO Where to initialize this?
//GL11.glViewport(0, 0, WIDTH, HEIGHT);
// Run the rendering loop until the user has attempted to close
// the window or has pressed the ESCAPE key.
while ( !glfwWindowShouldClose(window) ) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the framebuffer
glfwSwapBuffers(window); // swap the color buffers
// Count, calculate and display fps
frames++;
long time_now = System.nanoTime();
if ((double)(time_now - time_start)/1000000000 > check_fps_time) {
int fps_prediction = (int)(frames/check_fps_time);
System.out.println("FPS: " + fps_prediction);
frames = 0;
time_start = time_now;
}
// Poll for window events. The key callback above will only be
// invoked during this call.
glfwPollEvents();
loopCycle();
}
}
public void setupQuad() {
GL.createCapabilities();
// Vertices, the order is not important.
float[] vertices = {
-0.5f, 0.5f, 0f, // Left top ID: 0
-0.5f, -0.5f, 0f, // Left bottom ID: 1
0.5f, -0.5f, 0f, // Right bottom ID: 2
0.5f, 0.5f, 0f // Right top ID: 3
};
// Sending data to OpenGL requires the usage of (flipped) byte buffers
FloatBuffer verticesBuffer = BufferUtils.createFloatBuffer(vertices.length);
verticesBuffer.put(vertices);
verticesBuffer.flip();
// OpenGL expects to draw vertices in counter clockwise order by default
byte[] indices = {
// Left bottom triangle
0, 1, 2,
// Right top triangle
2, 3, 0
};
indicesCount = indices.length;
ByteBuffer indicesBuffer = BufferUtils.createByteBuffer(indicesCount);
indicesBuffer.put(indices);
indicesBuffer.flip();
// Create a new Vertex Array Object in memory and select it (bind)
// A VAO can have up to 16 attributes (VBOs) assigned to it by default
vaoId = GL30.glGenVertexArrays();
GL30.glBindVertexArray(vaoId);
// Create a new Vertex Buffer Object in memory and select it (bind)
// A VBO is a collection of Vectors which in this case resemble the location of each vertex.
vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, verticesBuffer, GL15.GL_STATIC_DRAW);
// Put the VBO in the attributes list at index 0
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0);
// Deselect (bind to 0) the VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
// Deselect (bind to 0) the VAO
GL30.glBindVertexArray(0);
// Create a new VBO for the indices and select it (bind)
vboiId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, vboiId);
GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL15.GL_STATIC_DRAW);
// Deselect (bind to 0) the VBO
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
}
public void loopCycle() {
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
// Bind to the VAO that has all the information about the vertices
GL30.glBindVertexArray(vaoId);
GL20.glEnableVertexAttribArray(0);
// Bind to the index VBO that has all the information about the order of the vertices
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, vboiId);
// Draw the vertices
GL11.glDrawElements(GL11.GL_TRIANGLES, indicesCount, GL11.GL_UNSIGNED_BYTE, 0);
// Put everything back to default (deselect)
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
GL20.glDisableVertexAttribArray(0);
GL30.glBindVertexArray(0);
}
public void destroyOpenGL() {
// Disable the VBO index from the VAO attributes list
GL20.glDisableVertexAttribArray(0);
// Delete the vertex VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
GL15.glDeleteBuffers(vboId);
// Delete the index VBO
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
GL15.glDeleteBuffers(vboiId);
// Delete the VAO
GL30.glBindVertexArray(0);
GL30.glDeleteVertexArrays(vaoId);
}
public int getWIDTH() {
return WIDTH;
}
public int getHEIGHT() {
return HEIGHT;
}
}
I am a beginner and there are probably a number of things I need to look into to make this work. I would love some guidance on how to get my application to display something, so I can take things from there. Thank you so much! :)
There is at least one issue with this code: it calls clear/draw/swap in the wrong order. Basically, with OpenGL the main loop should call clear() first, draw some things, and then call swapBuffers() to display the buffer contents.
The example instead calls clear (ok, clear the buffer), swaps the buffers (here a blank window is shown, since the buffer was just cleared), and then draws a bunch of stuff to the buffer. But the buffer contents are never displayed (since in the next cycle, the first operation is clear() again).
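In other words, the loop should roughly follow this pattern (a minimal sketch of the ordering only; the full code follows below):
while (!glfwWindowShouldClose(window)) {
    glfwPollEvents();        // handle window events
    loopCycle();             // clear the framebuffer, then draw the quad
    glfwSwapBuffers(window); // present the frame that was just drawn
}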
Below is the slightly modified code; it draws a white rectangle - I am not totally sure about the usage of glBindBuffer (I used drawLine and drawTriangle in the past), but it's a start.
package sample;
import org.lwjgl.*;
import org.lwjgl.glfw.*;
import org.lwjgl.opengl.*;
import org.lwjgl.system.*;
import java.nio.*;
import static org.lwjgl.glfw.Callbacks.*;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.system.MemoryStack.*;
import static org.lwjgl.system.MemoryUtil.*;
public class DrawExample {
// The window handle
private long window;
// Window setup
private final String WINDOW_TITLE = "Test";
// 1920x1080, 1600x900 and 1200x675 are all 16:9 ratios
private final int WIDTH = 320;
private final int HEIGHT = 240;
// Quad variables
private int vaoId = 0;
private int vboId = 0;
private int vboiId = 0;
private int indicesCount = 0;
public static void main(String[] args) {
new DrawExample().run();
}
public void run() {
System.out.println("Hello LWJGL " + Version.getVersion() + "!");
init();
setupQuad();
loop();
destroyOpenGL();
// Free the window callbacks and destroy the window
glfwFreeCallbacks(window);
glfwDestroyWindow(window);
// Terminate GLFW and free the error callback
glfwTerminate();
glfwSetErrorCallback(null).free();
}
private void init() {
// Setup an error callback. The default implementation
// will print the error message in System.err.
GLFWErrorCallback.createPrint(System.err).set();
// Initialize GLFW. Most GLFW functions will not work before doing this.
if (!glfwInit())
throw new IllegalStateException("Unable to initialize GLFW");
// Configure GLFW
glfwDefaultWindowHints(); // optional, the current window hints are already the default
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE);
// Create the window
window = glfwCreateWindow(WIDTH, HEIGHT, WINDOW_TITLE, NULL, NULL);
if (window == NULL)
throw new RuntimeException("Failed to create the GLFW window");
// Setup a key callback. It will be called every time a key is pressed, repeated or released.
glfwSetKeyCallback(window, (window, key, scancode, action, mods) -> {
if (key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE)
glfwSetWindowShouldClose(window, true); // We will detect this in the rendering loop
});
// Get the resolution of the primary monitor
GLFWVidMode vidmode = glfwGetVideoMode(glfwGetPrimaryMonitor());
// Get the thread stack and push a new frame
try (MemoryStack stack = stackPush()) {
IntBuffer pWidth = stack.mallocInt(1); // int*
IntBuffer pHeight = stack.mallocInt(1); // int*
// Get the window size passed to glfwCreateWindow
glfwGetWindowSize(window, pWidth, pHeight);
// Center the window
glfwSetWindowPos(window, (vidmode.width() - pWidth.get(0)) / 2,
(vidmode.height() - pHeight.get(0)) / 2);
} // the stack frame is popped automatically
// Make the OpenGL context current
glfwMakeContextCurrent(window);
// Enable v-sync with 1
glfwSwapInterval(1);
// Make the window visible
glfwShowWindow(window);
}
private void loop() {
// Initialize variables for fps calculation
long time_start = System.nanoTime();
int frames = 0;
final double check_fps_time = 1d;
// Set the clear color
glClearColor(0.2f, 0.2f, 0.2f, 0.0f);
// TODO Where to initialize this?
// GL11.glViewport(0, 0, WIDTH, HEIGHT);
// Run the rendering loop until the user has attempted to close
// the window or has pressed the ESCAPE key.
while (!glfwWindowShouldClose(window)) {
// Count, calculate and display fps
frames++;
long time_now = System.nanoTime();
if ((double) (time_now - time_start) / 1000000000 > check_fps_time) {
int fps_prediction = (int) (frames / check_fps_time);
System.out.println("FPS: " + fps_prediction);
frames = 0;
time_start = time_now;
}
// Poll for window events. The key callback above will only be
// invoked during this call.
glfwPollEvents();
loopCycle();
glfwSwapBuffers(window); // swap the color buffers
}
}
public void setupQuad() {
GL.createCapabilities();
// Vertices, the order is not important.
float[] vertices = {-0.5f, 0.5f, 0f, // Left top ID: 0
-0.5f, -0.5f, 0f, // Left bottom ID: 1
0.5f, -0.5f, 0f, // Right bottom ID: 2
0.5f, 0.5f, 0f // Right top ID: 3
};
// Sending data to OpenGL requires the usage of (flipped) byte buffers
FloatBuffer verticesBuffer = BufferUtils.createFloatBuffer(vertices.length);
verticesBuffer.put(vertices);
verticesBuffer.flip();
// OpenGL expects to draw vertices in counter clockwise order by default
byte[] indices = {
// Left bottom triangle
0, 1, 2,
// Right top triangle
2, 3, 0};
indicesCount = indices.length;
ByteBuffer indicesBuffer = BufferUtils.createByteBuffer(indicesCount);
indicesBuffer.put(indices);
indicesBuffer.flip();
// Create a new Vertex Array Object in memory and select it (bind)
// A VAO can have up to 16 attributes (VBOs) assigned to it by default
vaoId = GL30.glGenVertexArrays();
GL30.glBindVertexArray(vaoId);
// Create a new Vertex Buffer Object in memory and select it (bind)
// A VBO is a collection of Vectors which in this case resemble the location of each vertex.
vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, verticesBuffer, GL15.GL_STATIC_DRAW);
// Put the VBO in the attributes list at index 0
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0);
// Deselect (bind to 0) the VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
// Deselect (bind to 0) the VAO
GL30.glBindVertexArray(0);
// Create a new VBO for the indices and select it (bind)
vboiId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, vboiId);
GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL15.GL_STATIC_DRAW);
// Deselect (bind to 0) the VBO
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
}
public void loopCycle() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Bind to the VAO that has all the information about the vertices
GL30.glBindVertexArray(vaoId);
GL20.glEnableVertexAttribArray(0);
// Bind to the index VBO that has all the information about the order of the vertices
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, vboiId);
// Draw the vertices
GL11.glDrawElements(GL11.GL_TRIANGLES, indicesCount, GL11.GL_UNSIGNED_BYTE, 0);
// Put everything back to default (deselect)
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
GL20.glDisableVertexAttribArray(0);
GL30.glBindVertexArray(0);
}
public void destroyOpenGL() {
// Disable the VBO index from the VAO attributes list
GL20.glDisableVertexAttribArray(0);
// Delete the vertex VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
GL15.glDeleteBuffers(vboId);
// Delete the index VBO
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
GL15.glDeleteBuffers(vboiId);
// Delete the VAO
GL30.glBindVertexArray(0);
GL30.glDeleteVertexArrays(vaoId);
}
public int getWIDTH() {
return WIDTH;
}
public int getHEIGHT() {
return HEIGHT;
}
}
I am a little bit desperate here.
I am trying to update/refactor existing code written in legacy OpenGL to make use of the "modern way" of OpenGL 3.2+.
It is written in Java with LWJGL. I have already stripped away most of the functionality to test the basic setup. For me, at the moment, it is really just about setting up the VBO with vertices loaded from an OBJ file and rendering it. My problem is that the display window stays empty. If it would just display something, I would be really happy.
Maybe you can help me find what I am missing here.
public class Mobile {
private final String texturePath = "../CGSS15Ex3MobileDS/dataEx3/Textures";
private int
width = 1200,
height = 800,
fps = 0,
cameraDist = 2000,
fillMode = GL_LINE,
ticksPerSecond = 60,
frameCounter = 0,
vaoId,
vboId,
vboiID,
pId,
vsId,
fsId;
private long
time,
lastTime,
lastFPS,
lastKeySpace,
frameCounterTime,
avgTime = 0;
private float
dx = 0f, // mouse x distance
dy = 0f, // mouse y distance
diffTime = 0f, // frame length
mouseSensitivity = 0.5f,
movementSpeed = 800.0f; // movement speed in units per second
private Fork fork;
private CameraController camera;
FloatBuffer kugelBuff, indexBuff;
int kugelVertCount;
static LinkedList<Integer> textureIDs = new LinkedList<>();
public Mobile() {
run();
}
private void run() {
init();
while (!exit()) {
update();
draw();
updateFPS();
}
fini();
}
private void init() {
// OpenGL Setup
// create display
try {
PixelFormat pixelFormat = new PixelFormat();
ContextAttribs contextAtrributes = new ContextAttribs(3, 2)
.withProfileCore(true)
.withForwardCompatible(true);
Display.setDisplayMode(new DisplayMode(width, height));
Display.setTitle("Mobile by Aaron Scheu");
Display.create(pixelFormat, contextAtrributes);
GL11.glClearColor(0.3f, 0.3f, 0.3f, 0f);
GL11.glViewport(0, 0, width, height);
} catch (LWJGLException e) {
e.printStackTrace();
System.exit(-1);
}
// setup scene //
setupSphere();
setupShaders();
setupTex();
// set Timer
frameCounterTime = lastFPS = getTime();
System.out.println("Start timer ...");
}
private void setupTex() {
for (String file : getTextureFiles(texturePath)) {
try {
TextureReader.Texture texture = TextureReader.readTexture(file);
textureIDs.add(glGenTextures());
GL13.glActiveTexture(GL13.GL_TEXTURE0);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureIDs.getLast());
// Upload tex and generate mipmap for scaling
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGB, texture.getWidth(), texture.getHeight(), 0,
GL_RGB, GL_UNSIGNED_BYTE, texture.getPixels()
);
GL30.glGenerateMipmap(GL11.GL_TEXTURE_2D);
// Setup the ST coordinate system
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_REPEAT);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_REPEAT);
// Setup what to do when the texture has to be scaled
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER,
GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER,
GL11.GL_LINEAR_MIPMAP_LINEAR);
} catch(IOException e) {
System.out.println(e);
}
}
}
private void setupShaders() {
// Load the vertex shader
// vsId = GLDrawHelper.compileShader("../CGSS15Ex3MobileDS/dataEx3/Shader/phong_vertex.glsl", GL20.GL_VERTEX_SHADER);
vsId = GLDrawHelper.compileShader("shader/vert_shader.glsl", GL20.GL_VERTEX_SHADER);
// Load the fragment shader
// fsId = GLDrawHelper.compileShader("../CGSS15Ex3MobileDS/dataEx3/Shader/phong_fragment.glsl", GL20.GL_FRAGMENT_SHADER);
fsId = GLDrawHelper.compileShader("shader/frac_shader.glsl", GL20.GL_FRAGMENT_SHADER);
// Create a new shader program that links both shaders
pId = GL20.glCreateProgram();
GL20.glAttachShader(pId, vsId);
GL20.glAttachShader(pId, fsId);
// Bind shader data to vbo attribute list
// GL20.glBindAttribLocation(pId, 0, "vert_in");
// GL20.glBindAttribLocation(pId, 1, "col_in");
// GL20.glBindAttribLocation(pId, 2, "tex0_in");
// GL20.glBindAttribLocation(pId, 3, "norm_in");
// Test Shader
GL20.glBindAttribLocation(pId, 0, "in_Position");
GL20.glBindAttribLocation(pId, 1, "in_Color");
GL20.glBindAttribLocation(pId, 2, "in_TextureCoord");
GL20.glLinkProgram(pId);
GL20.glValidateProgram(pId);
}
private void setupSphere() {
Model sphere = null;
try {
sphere = OBJLoader.loadModel(new File("sphere.obj"));
} catch (IOException e) {
e.printStackTrace();
Display.destroy();
System.exit(1);
}
kugelBuff = GLDrawHelper.directFloatBuffer(sphere.getVVVNNNTT());
indexBuff = GLDrawHelper.directFloatBuffer(sphere.getVertIndices());
kugelVertCount = sphere.getVertCount();
// Create a new Vertex Array Object in memory and select it (bind)
vaoId = GL30.glGenVertexArrays();
GL30.glBindVertexArray(vaoId);
// Create a new Vertex Buffer Object in memory and select it (bind)
vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, kugelBuff, GL15.GL_STATIC_DRAW);
// Attribute Pointer - list id, size, type, normalize, sprite, offset
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 8*4, 0); // Vertex
// GL20.glVertexAttribPointer(1, 3, GL11.GL_FLOAT, false, 3, 0); // Color
GL20.glVertexAttribPointer(2, 2, GL11.GL_FLOAT, false, 8*4, 6*4); // UV Tex
// GL20.glVertexAttribPointer(3, 3, GL11.GL_FLOAT, false, 8*4, 3*4); // Normals
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
// Deselect (bind to 0) the VAO
GL30.glBindVertexArray(0);
// Create a new VBO for the indices and select it (bind) - INDICES
vboiID = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, vboiID);
GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, indexBuff, GL15.GL_STATIC_DRAW);
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
}
private void update() {
// limit framerate
// Display.sync(ticksPerSecond);
// get time
time = getTime();
diffTime = (time - lastTime)/1000.0f;
lastTime = time;
// Distance mouse has been moved
dx = Mouse.getDX();
dy = Mouse.getDY();
// toggle wireframe
if(Keyboard.isKeyDown(Keyboard.KEY_SPACE)) {
if (time - lastKeySpace > 100) {
fillMode = fillMode == GL_FILL ? GL_LINE : GL_FILL;
glPolygonMode(GL_FRONT_AND_BACK, fillMode);
}
lastKeySpace = time;
}
// mouse control
camera.yaw(dx * mouseSensitivity);
camera.pitch(dy * mouseSensitivity);
// WASD control
if (Keyboard.isKeyDown(Keyboard.KEY_W)) {
camera.walkForward(movementSpeed * diffTime);
}
if (Keyboard.isKeyDown(Keyboard.KEY_S)) {
camera.walkBackwards(movementSpeed * diffTime);
}
if (Keyboard.isKeyDown(Keyboard.KEY_A)) {
camera.strafeLeft(movementSpeed * diffTime);
}
if (Keyboard.isKeyDown(Keyboard.KEY_D)) {
camera.strafeRight(movementSpeed * diffTime);
}
}
private boolean exit() {
return Display.isCloseRequested() || Keyboard.isKeyDown(Keyboard.KEY_ESCAPE);
}
// runner is finished, clean up
private void fini() {
// glDisable(GL_DEPTH_BITS);
// Delete all textures
textureIDs.stream().forEach(GL11::glDeleteTextures);
// Delete the shaders
GL20.glUseProgram(0);
GL20.glDetachShader(pId, vsId);
GL20.glDetachShader(pId, fsId);
GL20.glDeleteShader(vsId);
GL20.glDeleteShader(fsId);
GL20.glDeleteProgram(pId);
// Select the VAO
GL30.glBindVertexArray(vaoId);
// Disable the VBO index from the VAO attributes list
GL20.glDisableVertexAttribArray(0);
GL20.glDisableVertexAttribArray(1);
// Delete the vertex VBO
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
GL15.glDeleteBuffers(vboId);
// Delete the index VBO
// GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
// GL15.glDeleteBuffers(vboiId);
// Delete the VAO
GL30.glBindVertexArray(0);
GL30.glDeleteVertexArrays(vaoId);
Display.destroy();
}
private void updateFPS() {
long time = getTime();
String title;
if (time - lastFPS > 1000) {
// Display.setTitle("FPS: " + fps);
title = "FPS: " + fps + " || avg time per frame: " + (avgTime != 0 ? avgTime/1000f : "-/-") + " ms";
Display.setTitle(title);
fps = 0;
lastFPS += 1000;
}
fps++;
// Frame Count over 1000
if (frameCounter == 1000) {
avgTime = time - frameCounterTime;
// System.out.println("Time for 1000 frames: " + avgTime + " ms.");
frameCounter = 0;
frameCounterTime = time;
}
frameCounter++;
}
private long getTime() {
return (Sys.getTime() * 1000 / Sys.getTimerResolution());
}
private void draw() {
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
GL20.glUseProgram(pId);
// Bind the texture
GL13.glActiveTexture(GL13.GL_TEXTURE0);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureIDs.get(0));
// Bind to the VAO that has all the information about the vertices
GL30.glBindVertexArray(vaoId);
GL20.glEnableVertexAttribArray(0);
// GL20.glEnableVertexAttribArray(1);
GL20.glEnableVertexAttribArray(2);
GL20.glEnableVertexAttribArray(3);
// Bind to the index VBO that has all the information about the order of the vertices
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, vboiID);
// Draw the vertices
GL11.glDrawElements(GL11.GL_TRIANGLES, kugelVertCount, GL11.GL_UNSIGNED_BYTE, 0);
// Put everything back to default (deselect)
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
GL20.glDisableVertexAttribArray(0);
// GL20.glDisableVertexAttribArray(1);
GL20.glDisableVertexAttribArray(2);
GL20.glDisableVertexAttribArray(3);
GL30.glBindVertexArray(0);
GL20.glUseProgram(0);
Display.update();
}
private static String[] getTextureFiles(String directory) {
File pathfile = new File(directory);
File[] files = pathfile.listFiles( (File dir, String name) ->
name.endsWith(".jpg") || name.endsWith(".png")
);
return Arrays.stream(files).map(File::toString).toArray(String[]::new);
}
public static void main(String[] args) {
new Mobile();
}
}
Sorry for the code mess. Maybe this is more readable:
https://codeshare.io/1SEQK
Don't be desperate, amaridev.
When you can't get anything rendered, you have in general two options:
start from something basic and working (like this hello triangle of mine; it's JOGL, but you can port it to LWJGL very easily) and build on top of that
debug your application step by step
In case you decide for the second one, you may want to first disable lighting, any matrix multiplication and any texturing:
check your render target setup by testing whether you see the clear color you set
check if glViewport and the fragment shader work by running a hardcoded vertex shader with:
gl_Position = vec4(4.0 * float(gl_VertexID % 2) - 1.0, 4.0 * float(gl_VertexID / 2) - 1.0, 0.0, 1.0);
like here, no matrices and a simple
glDrawArrays(GL_TRIANGLES, 0, 3);
you may also want to hardcode the color output (a full sketch of this test appears after this list)
check if you are reading valid vertex attributes, by outputting each of them in turn as the fragment color
out Block
{
vec4 color;
} outBlock;
...
outBlock.color = position;
in Block
{
vec4 color;
} inBlock;
outputColor = inBlock.color;
enable matrix multiplication and pass a simple hardcoded triangle to check that each matrix (first projection, then view and finally also model) works as expected
start fetching from your real sphere geometry
start fetching color
re-enable texturing and start fetching texture coordinates again
output light and material values to the output color and then enable them back as well
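Put together, the hardcoded-vertex-shader test from the list above might look roughly like this in LWJGL 2 (just a sketch: compile/link error checking via glGetShaderInfoLog is omitted, and the testProgram/emptyVao names are made up, not taken from your code):
// Attribute-less "big triangle" test: no VBO, no vertex attributes at all.
private static final String TEST_VS =
    "#version 150\n" +
    "void main() {\n" +
    "    gl_Position = vec4(4.0 * float(gl_VertexID % 2) - 1.0,\n" +
    "                       4.0 * float(gl_VertexID / 2) - 1.0, 0.0, 1.0);\n" +
    "}\n";
private static final String TEST_FS =
    "#version 150\n" +
    "out vec4 outColor;\n" +
    "void main() { outColor = vec4(1.0, 0.0, 0.0, 1.0); }\n"; // hardcoded red
private int testProgram, emptyVao;
private void setupTriangleTest() {
    int vs = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
    GL20.glShaderSource(vs, TEST_VS);
    GL20.glCompileShader(vs);
    int fs = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
    GL20.glShaderSource(fs, TEST_FS);
    GL20.glCompileShader(fs);
    testProgram = GL20.glCreateProgram();
    GL20.glAttachShader(testProgram, vs);
    GL20.glAttachShader(testProgram, fs);
    GL20.glLinkProgram(testProgram);
    // A core profile context still needs some VAO bound, even with no attributes.
    emptyVao = GL30.glGenVertexArrays();
}
private void drawTriangleTest() {
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    GL20.glUseProgram(testProgram);
    GL30.glBindVertexArray(emptyVao);
    GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 3); // first = 0, count = 3
    GL30.glBindVertexArray(0);
    GL20.glUseProgram(0);
    Display.update();
}
If that red triangle shows up, your window, viewport and fragment stage are fine and you can move on to the vertex fetching.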
I have created a pinch zoom with a scale detector which in turn calls the following renderer.
This uses the projection matrix to do the zoom and then scales the eye per the zoom when panning.
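The gesture side isn't included below; just as a rough idea (the glView and renderer field names are hypothetical), the detector forwards its focal point and scale factor to setScaleFactor() something like this, with the renderer itself following right after:
// Hypothetical activity-side hookup: forwards pinch gestures to the renderer.
private ScaleGestureDetector.SimpleOnScaleGestureListener scaleListener =
        new ScaleGestureDetector.SimpleOnScaleGestureListener() {
    @Override
    public boolean onScale(final ScaleGestureDetector detector) {
        final float factor = detector.getScaleFactor();
        final float fx = detector.getFocusX();
        final float fy = detector.getFocusY();
        // Run on the GL thread, since the renderer's matrices are used there.
        glView.queueEvent(new Runnable() {
            @Override
            public void run() {
                renderer.setScaleFactor(factor, fx, fy);
            }
        });
        glView.requestRender();
        return true;
    }
};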
public class vboCustomGLRenderer implements GLSurfaceView.Renderer {
// Store the model matrix. This matrix is used to move models from object space (where each model can be thought
// of being located at the center of the universe) to world space.
private float[] mModelMatrix = new float[16];
// Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
// it positions things relative to our eye.
private float[] mViewMatrix = new float[16];
// Store the projection matrix. This is used to project the scene onto a 2D viewport.
private float[] mProjectionMatrix = new float[16];
// Allocate storage for the final combined matrix. This will be passed into the shader program.
private float[] mMVPMatrix = new float[16];
// This will be used to pass in the transformation matrix.
private int mMVPMatrixHandle;
// This will be used to pass in model position information.
private int mPositionHandle;
// This will be used to pass in model color information.
private int mColorUniformLocation;
// How many bytes per float.
private final int mBytesPerFloat = 4;
// Offset of the position data.
private final int mPositionOffset = 0;
// Size of the position data in elements.
private final int mPositionDataSize = 3;
// How many elements per vertex for double values.
private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;
// Position the eye behind the origin.
public double eyeX = default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX)/2);
public double eyeY = default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY)/2);
// Position the eye behind the origin.
//final float eyeZ = 1.5f;
public float eyeZ = 1.5f;
// We are looking toward the distance
public double lookX = eyeX;
public double lookY = eyeY;
public float lookZ = 0.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
public float upX = 0.0f;
public float upY = 1.0f;
public float upZ = 0.0f;
public double mScaleFactor = 1;
public double mScrnVsMapScaleFactor = 0;
public vboCustomGLRenderer() {}
public void setEye(double x, double y){
eyeX -= (x / screen_vs_map_horz_ratio);
lookX = eyeX;
eyeY += (y / screen_vs_map_vert_ratio);
lookY = eyeY;
// Set the camera position (View matrix)
Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);
}
public void setScaleFactor(float scaleFactor, float gdx, float gdy){
mScaleFactor *= scaleFactor;
mRight = mRight / scaleFactor;
mLeft = -mRight;
mTop = mTop / scaleFactor;
mBottom = -mTop;
//Need to calculate the shift in the eye when zooming on a particular spot.
//So get the distance between the zoom point and eye point, figure out the
//new eye point by getting the factor of this distance.
double eyeXShift = (((mWidth / 2) - gdx) - (((mWidth / 2) - gdx) / scaleFactor));
double eyeYShift = (((mHeight / 2) - gdy) - (((mHeight / 2) - gdy) / scaleFactor));
screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));
eyeX -= (eyeXShift / screen_vs_map_horz_ratio);
lookX = eyeX;
eyeY += (eyeYShift / screen_vs_map_vert_ratio);
lookY = eyeY;
// Set the scale (Projection matrix)
Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);
}
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
// Set the background frame color
//White
GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
// Set the view matrix. This matrix can be said to represent the camera position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);
final String vertexShader =
"uniform mat4 u_MVPMatrix; \n" // A constant representing the combined model/view/projection matrix.
+ "attribute vec4 a_Position; \n" // Per-vertex position information we will pass in.
+ "attribute vec4 a_Color; \n" // Per-vertex color information we will pass in.
+ "varying vec4 v_Color; \n" // This will be passed into the fragment shader.
+ "void main() \n" // The entry point for our vertex shader.
+ "{ \n"
+ " v_Color = a_Color; \n" // Pass the color through to the fragment shader.
// It will be interpolated across the triangle.
+ " gl_Position = u_MVPMatrix \n" // gl_Position is a special variable used to store the final position.
+ " * a_Position; \n" // Multiply the vertex by the matrix to get the final point in
+ "} \n"; // normalized screen coordinates.
final String fragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
+ "uniform vec4 u_Color; \n" // This is the color from the vertex shader interpolated across the
// triangle per fragment.
+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = u_Color; \n" // Pass the color directly through the pipeline.
+ "} \n";
// Load in the vertex shader.
int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
if (vertexShaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(vertexShaderHandle, vertexShader);
// Compile the shader.
GLES20.glCompileShader(vertexShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
GLES20.glDeleteShader(vertexShaderHandle);
vertexShaderHandle = 0;
}
}
if (vertexShaderHandle == 0)
{
throw new RuntimeException("Error creating vertex shader.");
}
// Load in the fragment shader shader.
int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
if (fragmentShaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);
// Compile the shader.
GLES20.glCompileShader(fragmentShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
GLES20.glDeleteShader(fragmentShaderHandle);
fragmentShaderHandle = 0;
}
}
if (fragmentShaderHandle == 0)
{
throw new RuntimeException("Error creating fragment shader.");
}
// Create a program object and store the handle to it.
int programHandle = GLES20.glCreateProgram();
if (programHandle != 0)
{
// Bind the vertex shader to the program.
GLES20.glAttachShader(programHandle, vertexShaderHandle);
// Bind the fragment shader to the program.
GLES20.glAttachShader(programHandle, fragmentShaderHandle);
// Bind attributes
GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
GLES20.glBindAttribLocation(programHandle, 1, "a_Color");
// Link the two shaders together into a program.
GLES20.glLinkProgram(programHandle);
// Get the link status.
final int[] linkStatus = new int[1];
GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);
// If the link failed, delete the program.
if (linkStatus[0] == 0)
{
GLES20.glDeleteProgram(programHandle);
programHandle = 0;
}
}
if (programHandle == 0)
{
throw new RuntimeException("Error creating program.");
}
// Set program handles. These will later be used to pass in values to the program.
mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
mColorUniformLocation = GLES20.glGetUniformLocation(programHandle, "u_Color");
// Tell OpenGL to use this program when rendering.
GLES20.glUseProgram(programHandle);
}
static double mWidth = 0;
static double mHeight = 0;
static double mLeft = 0;
static double mRight = 0;
static double mTop = 0;
static double mBottom = 0;
static double mRatio = 0;
double screen_width_height_ratio;
double screen_height_width_ratio;
final float near = 1.5f;
final float far = 10.0f;
double screen_vs_map_horz_ratio = 0;
double screen_vs_map_vert_ratio = 0;
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
// Adjust the viewport based on geometry changes,
// such as screen rotation
// Set the OpenGL viewport to the same size as the surface.
GLES20.glViewport(0, 0, width, height);
screen_width_height_ratio = (double) width / height;
screen_height_width_ratio = (double) height / width;
//Initialize
if (mRatio == 0){
mWidth = (double) width;
mHeight = (double) height;
//map height to width ratio
double map_extents_width = default_settings.mbrMaxX - default_settings.mbrMinX;
double map_extents_height = default_settings.mbrMaxY - default_settings.mbrMinY;
double map_width_height_ratio = map_extents_width/map_extents_height;
if (screen_width_height_ratio > map_width_height_ratio){
mRight = (screen_width_height_ratio * map_extents_height)/2;
mLeft = -mRight;
mTop = map_extents_height/2;
mBottom = -mTop;
}
else{
mRight = map_extents_width/2;
mLeft = -mRight;
mTop = (screen_height_width_ratio * map_extents_width)/2;
mBottom = -mTop;
}
mRatio = screen_width_height_ratio;
}
if (screen_width_height_ratio != mRatio){
final double wRatio = width/mWidth;
final double oldWidth = mRight - mLeft;
final double newWidth = wRatio * oldWidth;
final double widthDiff = (newWidth - oldWidth)/2;
mLeft = mLeft - widthDiff;
mRight = mRight + widthDiff;
final double hRatio = height/mHeight;
final double oldHeight = mTop - mBottom;
final double newHeight = hRatio * oldHeight;
final double heightDiff = (newHeight - oldHeight)/2;
mBottom = mBottom - heightDiff;
mTop = mTop + heightDiff;
mWidth = (double) width;
mHeight = (double) height;
mRatio = screen_width_height_ratio;
}
screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));
Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);
}
ListIterator<mapLayer> orgNonAssetCatLayersList_it;
ListIterator<FloatBuffer> mapLayerObjectList_it;
ListIterator<Byte> mapLayerObjectTypeList_it;
mapLayer MapLayer;
@Override
public void onDrawFrame(GL10 unused) {
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
drawPreset();
orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();
while (orgNonAssetCatLayersList_it.hasNext()) {
MapLayer = orgNonAssetCatLayersList_it.next();
if (MapLayer.BatchedPointVBO != null){
}
if (MapLayer.BatchedLineVBO != null){
drawLineString(MapLayer.BatchedLineVBO, MapLayer.lineStringObjColor);
}
if (MapLayer.BatchedPolygonVBO != null){
drawPolygon(MapLayer.BatchedPolygonVBO, MapLayer.polygonObjColor);
}
}
}
private void drawPreset()
{
Matrix.setIdentityM(mModelMatrix, 0);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
}
private void drawLineString(final FloatBuffer geometryBuffer, final float[] colorArray)
{
// Pass in the position information
geometryBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);
GLES20.glLineWidth(2.0f);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
}
private void drawPolygon(final FloatBuffer geometryBuffer, final float[] colorArray)
{
// Pass in the position information
geometryBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);
GLES20.glLineWidth(1.0f);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
}
}
This works very well up until it reaches a certain zoom level, and then the panning starts jumping. After testing I found that this was because the floating point value of the eye could not cope with such a small shift in position. I keep my x and y eye values in doubles so they keep accumulating the shifting positions, and I only convert them to floats when calling setLookAtM().
So I need to change the way the zoom works. I was thinking that instead of zooming with the projection, I could scale the model larger or smaller.
The setScaleFactor() function in my code will change, by removing the projection and eye shifting.
There is a Matrix.scaleM(m, offset, x, y, z) function, but I am unsure how or where to implement this.
Could use some suggestions on how to accomplish this.
[Edit] 24/7/2013
I tried altering setScaleFactor() like so:
public void setScaleFactor(float scaleFactor, float gdx, float gdy){
mScaleFactor *= scaleFactor;
}
and in drawPreset()
private void drawPreset()
{
Matrix.setIdentityM(mModelMatrix, 0);
//*****Added scaleM
Matrix.scaleM(mModelMatrix, 0, (float)mScaleFactor, (float)mScaleFactor, 1.0f);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
}
Now as soon as you zoom, the image disappears from the screen.
Actually I found it just off to the right-hand side; I could still pan over to it.
I am still not sure what I should be scaling to zoom: the model, the view, or the model-view?
I have found out that if you take the center of your model back to the origin (0,0), you can extend your zoom capabilities considerably. My x coordinate data was between 152.6 and 152.7, so I take it back to the origin with an offset of 152.65, which needs to be applied to the data before loading it into the FloatBuffer.
The width of the data then becomes 0.1, or 0.05 on each side, allowing for much more precision in the trailing digits of the values.
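A minimal sketch of that re-centering step (the buildCenteredBuffer helper is made up for illustration; it assumes the vertices arrive as x/y/z triples of doubles, uses java.nio's ByteBuffer/ByteOrder/FloatBuffer, and reuses the default_settings extents from above):
// Re-center coordinates around the origin before uploading them, so float
// precision is spent on local detail instead of the ~152.65 base value.
private static FloatBuffer buildCenteredBuffer(double[] xyzTriples) {
    final double centerX = default_settings.mbrMinX
            + (default_settings.mbrMaxX - default_settings.mbrMinX) / 2.0;
    final double centerY = default_settings.mbrMinY
            + (default_settings.mbrMaxY - default_settings.mbrMinY) / 2.0;
    FloatBuffer buffer = ByteBuffer
            .allocateDirect(xyzTriples.length * 4)       // 4 bytes per float
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    for (int i = 0; i < xyzTriples.length; i += 3) {
        // Subtract in double precision, cast to float only at the very end.
        buffer.put((float) (xyzTriples[i]     - centerX));
        buffer.put((float) (xyzTriples[i + 1] - centerY));
        buffer.put((float) xyzTriples[i + 2]);           // z left untouched
    }
    buffer.position(0);
    return buffer;
}
The eye/look values derived from those same extents would then presumably need the same offset applied, so the camera still starts centered over the re-centered data.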