How can I generate multiple VBOs with a ByteBuffer - Java

According to the LWJGL javadoc, there is a function:
public static void glGenBuffers(int n, ByteBuffer buffer)
But I don't exactly understand how this works.
Do I create a ByteBuffer:
ByteBuffer buffer = ByteBuffer.allocate("how much???")
glGenBuffers(4, buffer);
And especially, do I have to fill the buffer with glBufferSubData, or is it better to create 4 buffers and bind and fill them?
My thought was that it is more efficient to create only one buffer, which stores vertices, texture coordinates, normals and indices.

glGenBuffers(int n, ByteBuffer buffer) generates n vertex buffer objects (VBOs), which you can use to store your data, and puts them in the specified buffer. This buffer does not actually hold your data; it holds the ids of the VBOs just generated. You have to define each VBO's data yourself with glBufferData.
The function is useful if you want to create multiple VBOs with a single call. Each VBO id is an integer, so the buffer has to be large enough to hold n (in this case 4) ints.
ByteBuffer buffer = BufferUtils.createByteBuffer(4 * Integer.BYTES);
glGenBuffers(4, buffer);
However, to make things easier, you can also use an IntBuffer.
IntBuffer buffer = BufferUtils.createIntBuffer(4);
glGenBuffers(4, buffer);
You can then attach the data to each of the VBOs.
glBindBuffer(GL_ARRAY_BUFFER, buffer.get(2)); // any index from 0 to 3
glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW);
(If you are using a ByteBuffer, you have to call buffer.getInt(2 * Integer.BYTES) instead, because ByteBuffer indices are byte offsets.)
It is generally more efficient to have all data in a single buffer, but note that in this case you need at least two, because indices have to go into a GL_ELEMENT_ARRAY_BUFFER. However, the performance difference is very small, so it may be easier to use several different VBOs.
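For example, a minimal sketch of the separate index buffer (assuming an IntBuffer named indices that has already been filled and flipped):
// Indices must live in their own VBO bound to GL_ELEMENT_ARRAY_BUFFER.
int ebo = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW);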
It is preferable to tightly pack all the data in one buffer and upload it with a single glBufferData call instead of calling glBufferSubData for each type of data. Your data layout will then look something like this (P = position, T = texture coordinate, N = normal vector):
PPPTTNNNPPPTTNNNPPPTTN...
Now you can set up the vertex attribute pointers to correctly read the values from this buffer.
int vbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, tightlyPackedData, GL_STATIC_DRAW);
int stride = (3 + 2 + 3) * Float.BYTES;
glVertexAttribPointer(positionIndex, 3, GL_FLOAT, false, stride, 0);
glVertexAttribPointer(texCoordIndex, 2, GL_FLOAT, false, stride, 3 * Float.BYTES);
glVertexAttribPointer(normalIndex, 3, GL_FLOAT, false, stride, (3 + 2) * Float.BYTES);
positionIndex, texCoordIndex and normalIndex are the attribute locations of your vertex shader attributes. You can get them with glGetAttribLocation.
stride is the number of bytes between the start of one vertex and the start of the next. We have 3 position components, 2 texture coordinates and 3 normal components, which are all floats, so we multiply by Float.BYTES.
The last parameter to glVertexAttribPointer is the offset in bytes, i.e. the number of bytes from the start of the buffer to where the first value is located.
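If your data starts out in separate arrays, here is a minimal sketch of interleaving it into the tightly packed layout above (vertexCount, positions, texCoords and normals are placeholder names, not from the question):
FloatBuffer tightlyPackedData = BufferUtils.createFloatBuffer(vertexCount * (3 + 2 + 3));
for (int i = 0; i < vertexCount; i++) {
    tightlyPackedData.put(positions, i * 3, 3); // P P P
    tightlyPackedData.put(texCoords, i * 2, 2); // T T
    tightlyPackedData.put(normals, i * 3, 3);   // N N N
}
tightlyPackedData.flip(); // rewind so glBufferData reads from the start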

Related

Several separate vertex buffer objects

For rendering a 3D object, four separate vertex buffers are created: for vertices, indices, texture coordinates and normals:
private final int[] VBO = new int[4]; // array for vertex buffer objects
private void createVertexBuffers() {
    VBO[0] = 0; VBO[1] = 0; VBO[2] = 0; VBO[3] = 0;
    GLES20.glGenBuffers(4, VBO, 0);

    bufferVertices.position(0);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, VBO[0]);
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, VERTEX_STRIDE * NUMBER_VERTICES,
            bufferVertices, GLES20.GL_STATIC_DRAW); // VBO for vertices

    bufferTextureCoordinates.position(0);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, VBO[1]);
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, TEXTURE_STRIDE * NUMBERS_TEXTURES,
            bufferTextureCoordinates, GLES20.GL_STATIC_DRAW); // VBO for texture coordinates

    bufferNormals.position(0);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, VBO[2]);
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, VERTEX_STRIDE * NUMBER_NORMALS,
            bufferNormals, GLES20.GL_STATIC_DRAW); // VBO for normals

    bufferIndices.position(0);
    GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, VBO[3]);
    GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, INT_SIZE * NUMBER_INDICES,
            bufferIndices, GLES20.GL_STATIC_DRAW); // VBO for indices
}
There are also many 3D objects, so the number of buffers grows accordingly. Question: is it a normal approach to use many separate buffers, particularly in mobile apps? I would be grateful for answers.
Note: I looked at similar questions, but I still have some uncertainty.
Interleaved attributes (array of structs) are generally more efficient than completely de-interleaved ones (struct of arrays). The reason is that you're less likely to load a whole cache line and then use only one value from it.
However, recent mobile implementations still like some level of de-interleaving. On tile-based GPUs it's common to run the position computation first, and then process the rest of the vertex shader only if the vertex contributes to a visible triangle. For this you want two packed buffer regions: one for all attributes that contribute to the position computation, and one for everything else.
As always, this comes with caveats. If packing as an array of structs forces a lot of padding elements for correct alignment, that rapidly eats into the benefits.
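As a rough illustration of that split, here is a sketch in the GLES20 style of the question (positionVBO, varyingVBO, vertexCount and the two Buffers are placeholder names, not from the original code):
// Region 1: positions only, enough for the position-computation pass (3 floats per vertex).
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, positionVBO);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, 3 * 4 * vertexCount,
        positionBuffer, GLES20.GL_STATIC_DRAW);

// Region 2: texture coordinates and normals interleaved (2 + 3 floats per vertex).
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, varyingVBO);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, (2 + 3) * 4 * vertexCount,
        varyingBuffer, GLES20.GL_STATIC_DRAW);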

LWJGL Vulkan: nothing drawn when adding a second vertex buffer

I've made a little program with LWJGL that displays a colored triangle using Vulkan. At first I was sending all the required data to my shaders through a single vertex buffer (color + position).
While I know that it is best practice to use only one vertex buffer, I still wanted to try sending the positions and the colors through two different vertex buffers.
To add my second vertex buffer I created it the same way as the first, then added a binding and assigned the second attribute to it like this:
VkVertexInputBindingDescription.Buffer bindingDescriptor = VkVertexInputBindingDescription.calloc(2)
        .binding(0)
        .stride(2 * 4)
        .inputRate(VK_VERTEX_INPUT_RATE_VERTEX);
//I added this
bindingDescriptor.binding(1)
        .stride(3 * 4)
        .inputRate(VK_VERTEX_INPUT_RATE_VERTEX);
VkVertexInputAttributeDescription.Buffer attributeDescription = VkVertexInputAttributeDescription.calloc(2);
attributeDescription.get(0)
        .binding(0)
        .location(0)
        .format(VK_FORMAT_R32G32_SFLOAT)
        .offset(0);
attributeDescription.get(1)
        //changed the binding from 0 to 1
        .binding(1)
        .location(1)
        .format(VK_FORMAT_R32G32B32_SFLOAT)
        .offset(0);
Then, in the definition of the render pass, I added my second vertex buffer to the buffers submitted to vkCmdBindVertexBuffers like this:
LongBuffer offsets = memAllocLong(2);
offsets.put(0, 0L);
//added this
offsets.put(1, 0L);
LongBuffer pBuffers = memAllocLong(2);
pBuffers.put(0, verticesBuf);
//added this
pBuffers.put(1, colorBuf);
vkCmdBindVertexBuffers(renderCommandBuffers[i], 0, pBuffers, offsets);
The problem is that when I add my second vertex buffer, nothing is displayed anymore; the only thing left is my background color.
Am I missing something? Is there something special to do to use multiple vertex buffers?

Texture renders incorrectly (OpenGL)

I've been trying to render an 8x8 texture. I've used code from 2 tutorials, but the texture doesn't render correctly. For now I have this initialization code:
int shaderProgram, fragmentShader, vertexShader, texture, elementBuffer, vertexBuffer, vertexArray;

public Texture2D(String texturePath_, String vertexShader_, String fragmentShader_)
{
    vertexArray = GL30.glGenVertexArrays();
    GL30.glBindVertexArray(vertexArray);
    String[] vertexshader = Utilities.loadShaderFile(vertexShader_, getClass());
    String[] fragmentshader = Utilities.loadShaderFile(fragmentShader_, getClass());
    if (vertexshader == null)
        throw new NullPointerException("The vertex shader is null");
    if (fragmentshader == null)
        throw new NullPointerException("The fragment shader is null");
    vertexShader = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
    GL20.glShaderSource(vertexShader, vertexshader);
    GL20.glCompileShader(vertexShader);
    Utilities.showShaderCompileLog(vertexShader);
    fragmentShader = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
    GL20.glShaderSource(fragmentShader, fragmentshader);
    GL20.glCompileShader(fragmentShader);
    Utilities.showShaderCompileLog(fragmentShader);
    shaderProgram = GL20.glCreateProgram();
    GL20.glAttachShader(shaderProgram, fragmentShader);
    GL20.glAttachShader(shaderProgram, vertexShader);
    GL30.glBindFragDataLocation(shaderProgram, 0, "fragcolor");
    GL20.glLinkProgram(shaderProgram);
    GL20.glUseProgram(shaderProgram);
    texture = GL11.glGenTextures();
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL13.GL_CLAMP_TO_BORDER);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL13.GL_CLAMP_TO_BORDER);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
    ByteBuffer image;
    FloatBuffer verteces;
    IntBuffer imagewidth, imageheight, positions, imagechannels;
    try (MemoryStack memoryStack = MemoryStack.stackPush())
    {
        imageheight = memoryStack.mallocInt(1);
        imagewidth = memoryStack.mallocInt(1);
        positions = memoryStack.mallocInt(6);
        imagechannels = memoryStack.mallocInt(1);
        image = STBImage.stbi_load(texturePath_, imagewidth, imageheight, imagechannels, 0);
        if (image == null) throw new NullPointerException("Failed to load image");
        verteces = memoryStack.mallocFloat(28);
    }
    positions.put(0).put(1).put(2).put(2).put(3).put(0).flip();
    int width = imagewidth.get();
    int height = imageheight.get();
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, width, height, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, image);
    elementBuffer = GL15.glGenBuffers();
    GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, elementBuffer);
    GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, positions, GL15.GL_STATIC_DRAW);
    float x1 = 0f, x2 = 1f;
    float y1 = 1f, y2 = -1f;
    verteces.put(x1).put(y1).put(1).put(1).put(1).put(0).put(0);
    verteces.put(x1).put(y2).put(1).put(1).put(1).put(1).put(0);
    verteces.put(x2).put(y2).put(1).put(1).put(1).put(1).put(1);
    verteces.put(x2).put(y1).put(1).put(1).put(1).put(0).put(1).flip();
    vertexBuffer = GL15.glGenBuffers();
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vertexBuffer);
    GL15.glBufferData(GL15.GL_ARRAY_BUFFER, verteces, GL15.GL_STATIC_DRAW);
    int uniform = GL20.glGetUniformLocation(shaderProgram, "texture_image");
    GL20.glUniform1i(uniform, 0);
    int position = GL20.glGetAttribLocation(shaderProgram, "position");
    GL20.glEnableVertexAttribArray(position);
    GL20.glVertexAttribPointer(position, 2, GL11.GL_FLOAT, false, 0, 0);
    int color = GL20.glGetAttribLocation(shaderProgram, "color");
    GL20.glEnableVertexAttribArray(color);
    GL20.glVertexAttribPointer(color, 3, GL11.GL_FLOAT, false, 7 * Float.BYTES, 2 * Float.BYTES);
    int textureST = GL20.glGetAttribLocation(shaderProgram, "textureCoord");
    GL20.glEnableVertexAttribArray(textureST);
    GL20.glVertexAttribPointer(textureST, 3, GL11.GL_FLOAT, false, 7 * Float.BYTES, 5 * Float.BYTES);
    Utilities.showErrors(1);
}
The result is:
But I'd like the texture to occupy the whole area. The shaders compile fine, and there are no GL errors.
If I change the values to the ones from the tutorial:
verteces.put(-1f).put(1f).put(1).put(1).put(1).put(0).put(0);
verteces.put(1f).put(1f).put(1).put(1).put(1).put(1).put(0);
verteces.put(1f).put(-1f).put(1).put(1).put(1).put(1).put(1);
verteces.put(-1f).put(-1f).put(1).put(1).put(1).put(0).put(1).flip();
I get:
The tutorials: https://open.gl/textures and https://github.com/SilverTiger/lwjgl3-tutorial/wiki/Textures
I'm using profile 3.0 with shaders version 300 ES. The texture's format is PNG.
The vertex attribute layout:
GL20.glVertexAttribPointer(position, 2, GL11.GL_FLOAT, false, 0, 0);
GL20.glVertexAttribPointer(color, 3, GL11.GL_FLOAT, false, 7 * Float.BYTES, 2 * Float.BYTES);
GL20.glVertexAttribPointer(textureST, 3, GL11.GL_FLOAT, false, 7 * Float.BYTES, 5 * Float.BYTES);
doesn't look correct. There are multiple problems with it:
The texture coordinates try to read 3 floats from the array. In combination with the stride, the last vertex will read outside the VBO. Most probably the texture coordinates should only read 2 floats.
The total number of floats used (2+3+3 = 8) does not fit the data, where only 7 floats per vertex are given. This is solved when the texture coordinates read only two floats.
The stride of the positions looks wrong. 0 means that all positions are tightly packed. Basically, the positions use the first 8 floats in the VBO. If you look at them: {-1, 1, 1, 1, 1, 0, 0, 1}, then this is exactly the geometry you see. It was just luck that it worked in the first place. Solution: change the position layout to:
GL20.glVertexAttribPointer(position, 2, GL11.GL_FLOAT, false, 7 * Float.BYTES, 0);
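Putting the three fixes together, the full attribute setup would look like this (a sketch using the variable names from the question):
int stride = 7 * Float.BYTES; // 2 position + 3 color + 2 texture floats per vertex
GL20.glVertexAttribPointer(position, 2, GL11.GL_FLOAT, false, stride, 0);
GL20.glVertexAttribPointer(color, 3, GL11.GL_FLOAT, false, stride, 2 * Float.BYTES);
GL20.glVertexAttribPointer(textureST, 2, GL11.GL_FLOAT, false, stride, 5 * Float.BYTES);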

Copying float arrays and float values to a ByteBuffer - Java

I am trying to copy the data from two arrays and two variables to a ByteBuffer. The ByteBuffer will hold this data for a uniform block structure in a fragment shader. I can copy the first array in fine, but the second always generates an index out of range error.
I've tried using .asFloatBuffer, tried initializing the buffer to twice the size I needed, and tried using a FloatBuffer (I need a ByteBuffer, but I thought I'd just fix this error first and then work my way back).
The structure from the fragment shader:
layout (binding = 0) uniform BlobSettings {
    vec4 InnerColor;
    vec4 OuterColor;
    float RadiusInner;
    float RadiusOuter;
};
This is the code I have now (a bit of a mess, but you get the idea...):
//create a buffer for the data
//blockB.get(0) contains the 'size' of the data structure I need to copy (value is 48)
//FloatBuffer blockBuffer = BufferUtil.newFloatBuffer(blockB.get(0));
ByteBuffer blockBuffer = ByteBuffer.allocateDirect(blockB.get(0) * 4);//.asFloatBuffer();
//the following data will be copied to the buffer
float outerColor[] = {0.1f,0.1f,0.1f,0.1f};
float innerColor[] = {1.0f,1.0f,0.75f,1.0f};
float innerRadius = 0.25f;
float outerRadius = 0.45f;
//copy data to buffer at appropriate offsets
//params contains the offsets (0, 16, 32, 36)
//following 4 lines using a FloatBuffer (maybe convert to ByteBuffer after loading?)
blockBuffer.put(outerColor, params.get(0), outerColor.length);
blockBuffer.put(innerColor, params.get(1), innerColor.length); //idx out of range here...
blockBuffer.put(params.get(2), innerRadius);
blockBuffer.put(params.get(3), outerRadius);
//when using a ByteBuffer directly - maybe something like the following?
for (int idx = 0; idx < 4; idx++) {
    blockBuffer.putFloat(params.get(0) + idx, outerColor[idx]); //????
}
Can anyone tell me how I can properly get that data into a ByteBuffer?
Based on the way I read the spec for uniform blocks, you will need to be careful how the definition in the shader matches up with the values in the buffer. There are several layout options for uniform blocks. The declaration you have does not specify a layout:
layout (binding = 0) uniform BlobSettings {
vec4 InnerColor;
vec4 OuterColor;
float RadiusInner;
float RadiusOuter;
};
Without a specified layout, the default is shared. This does not guarantee a well-defined memory layout for the block, and you have to query the size/offsets of the values with glGetActiveUniformBlockiv() and glGetActiveUniformsiv(). In the data you provided, the total size came back as 48, which most likely means that padding was added at the end to make the size a multiple of 16.
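A minimal sketch of those queries, assuming LWJGL 3's GL31 bindings (program stands for your linked shader program):
int blockIndex = glGetUniformBlockIndex(program, "BlobSettings");
IntBuffer blockSize = BufferUtils.createIntBuffer(1);
glGetActiveUniformBlockiv(program, blockIndex, GL_UNIFORM_BLOCK_DATA_SIZE, blockSize);

IntBuffer indices = BufferUtils.createIntBuffer(4);
glGetUniformIndices(program,
        new CharSequence[] { "InnerColor", "OuterColor", "RadiusInner", "RadiusOuter" },
        indices);
IntBuffer offsets = BufferUtils.createIntBuffer(4);
glGetActiveUniformsiv(program, indices, GL_UNIFORM_OFFSET, offsets); // e.g. 0, 16, 32, 36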
The simpler option is that you specify the std140 layout option, which guarantees a specific layout:
layout (std140, binding = 0) uniform BlobSettings {
    ...
};
This should give you a packed layout with a total size of 40 bytes, and the values in the specified order. The std140 layout is not always fully packed, e.g. a vec3 will use the same space as a vec4. You can read the specs for the details. But for vec4 and scalar values, it is packed in this case.
For filling a buffer with float values, there are a few different options. Using your data as an example (with the order changed to match the uniform definition):
float innerColor[] = {1.0f, 1.0f, 0.75f, 1.0f};
float outerColor[] = {0.1f, 0.1f, 0.1f, 0.1f};
float innerRadius = 0.25f;
float outerRadius = 0.45f;
Assuming that you want to pass this to OpenGL with a call like glBufferData().
Put into array, then wrap
If you put all the values into a single float array, you can directly turn it into a buffer with FloatBuffer.wrap():
float bufferData[] = {
    1.0f, 1.0f, 0.75f, 1.0f,
    0.1f, 0.1f, 0.1f, 0.1f,
    0.25f,
    0.45f
};
glBufferData(..., FloatBuffer.wrap(bufferData));
Allocate and use float buffer
For this, you allocate a FloatBuffer, fill it with the data, then use it. Note that the put() methods advance the buffer position, so you can simply add the values one by one. You do have to rewind the buffer to the start position before using it.
FloatBuffer buf = FloatBuffer.allocate(10);
buf.put(innerColor);
buf.put(outerColor);
buf.put(innerRadius);
buf.put(outerRadius);
buf.rewind();
glBufferData(..., buf);
Allocate byte buffer, and use as float buffer
All of the OpenGL Java bindings I have seen accept either FloatBuffer or Buffer arguments, so using a FloatBuffer as in the approaches above is easiest. But if you use OpenGL bindings that really require a ByteBuffer, or need a direct allocation (which at least on Android is only the case for client-side vertex arrays), you can allocate a ByteBuffer first and then use it as a FloatBuffer:
// Direct ByteBuffers default to big-endian; set the platform's native byte order
// so the floats are stored in the order OpenGL expects.
ByteBuffer byteBuf = ByteBuffer.allocateDirect(10 * 4).order(ByteOrder.nativeOrder());
FloatBuffer floatBuf = byteBuf.asFloatBuffer();
floatBuf.put(innerColor);
floatBuf.put(outerColor);
floatBuf.put(innerRadius);
floatBuf.put(outerRadius);
byteBuf.rewind();
glBufferData(..., byteBuf);
Use byte buffer
This seems somewhat cumbersome, but you could use a ByteBuffer all the way:
// Again, set the native byte order before writing floats through a ByteBuffer.
ByteBuffer buf = ByteBuffer.allocateDirect(10 * 4).order(ByteOrder.nativeOrder());
buf.putFloat(innerColor[0]);
buf.putFloat(innerColor[1]);
buf.putFloat(innerColor[2]);
buf.putFloat(innerColor[3]);
buf.putFloat(outerColor[0]);
buf.putFloat(outerColor[1]);
buf.putFloat(outerColor[2]);
buf.putFloat(outerColor[3]);
buf.putFloat(innerRadius);
buf.putFloat(outerRadius);
buf.rewind();
glBufferData(..., buf);
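Whichever variant you use, the buffer still has to be uploaded and attached to binding point 0 of the uniform block. A sketch in desktop-GL/LWJGL style, reusing buf from the last option:
int ubo = glGenBuffers();
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, buf, GL_STATIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // 0 matches "binding = 0" in the shader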

Difference between glNormal3f and glNormalPointer

I'm getting these two strange results when drawing with immediate mode and with vertex arrays. In immediate mode I'm passing the normals via glNormal3f. My shader takes the normal and computes something like a shadow, nothing serious.
Immediate Mode:
glBegin(GL_TRIANGLES);
for (Triangle tri : this.triangles) {
    Vec3d normal = Vec3d.vectorProduct(
            Vec3d.sub(tri.getB(), tri.getA()),
            Vec3d.sub(tri.getC(), tri.getA())); //calculating normalvector
    glNormal3d(normal.x, normal.y, normal.z);
    glColor3d(1.0, 1.0, 1.0);
    glVertex3d(tri.getA().x, tri.getA().y, tri.getA().z);
    glVertex3d(tri.getB().x, tri.getB().y, tri.getB().z);
    glVertex3d(tri.getC().x, tri.getC().y, tri.getC().z);
}
glEnd();
Result:
In the vertex array variant, I'm storing the normals in a separate buffer, but they are calculated the same way:
for (Triangle tri : triangles) {
    Vec3d normal = Vec3d.vectorProduct(
            Vec3d.sub(tri.getB(), tri.getA()),
            Vec3d.sub(tri.getC(), tri.getA()));
    normals.put((float) normal.x);
    normals.put((float) normal.y);
    normals.put((float) normal.z);
}
normals.flip();
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(4, normals);
glVertexPointer(3, 3 * 4, vertices);
glDrawArrays(GL_TRIANGLES, 0, (vertices.capacity() / 3));
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
Result 2:
Obviously the normals get mismatched somehow, but I can't find my mistake.
And the final question: What is the difference between glNormal3f and glNormalPointer considering the values passed to the shader?
The stride of your normal array is suspect. Why are you passing 4 for the stride to glNormalPointer (...)?
You are telling GL that there are 4 bytes (1 float) of space between each of your normals. However, normals is built in your code with 12 bytes between successive normals. Incidentally, this is what you would call tightly packed, and therefore passing 0 for the stride implies the same thing (4 bytes per component x 3 components).
Your vertex array and normal array should actually have the same stride: 4*3, or simply 0.
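With the LWJGL overloads used in the question (where GL_FLOAT is implied by the FloatBuffer argument), the corrected calls would look like this:
glNormalPointer(0, normals);     // 0 = tightly packed, equivalent to a stride of 3 * 4
glVertexPointer(3, 0, vertices); // 3 components per vertex, also tightly packed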
