How to draw many small Bitmaps in opengl - java

I am new to OpenGL and I want to draw many bitmaps fast. I wrote a few classes to draw bitmaps. Drawing a few big bitmaps with them is fast, but drawing many small bitmaps is slow. Here is my code:
painter class:
public void draw(int id, FloatBuffer vertexBuffer) {
    // bind the previously generated texture
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[id]);
    // point to our buffers
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glEnable(GL10.GL_BLEND);
    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
    gl.glColor4f(1, 1, 1, 1);
    // set the face rotation
    gl.glFrontFace(GL10.GL_CW);
    // point to our vertex buffer
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    // draw the vertices as a triangle strip
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
    // disable the client state before leaving
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
Then I call it like this:
painter.draw(bmp, vert);
"bmp" is the texture id (an int) and "vert" is a FloatBuffer.
The bitmaps change position every frame, so I recalculate the FloatBuffer for each bitmap each frame.
Do you think there is a faster way to draw it?

My suggestion would be to put the vertices of all bitmaps that you need to render in a frame in one big vertex array and render that in one go. However, to avoid switching textures manually, you would have to either create a texture atlas with all your bitmaps or use array textures, if those are available to you (you need OpenGL 3 or EXT_texture_array).
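A minimal sketch of the batching idea, assuming a texture atlas has already been built; the class, the sprite layout, and the atlas regions here are hypothetical. Each sprite becomes two triangles in one interleaved (x, y, u, v) array, so per-quad GL_TRIANGLE_STRIP draws become a single GL_TRIANGLES draw:

```java
// Sketch: pack every sprite's quad into one interleaved array (x, y, u, v)
// so the whole frame renders with a single glDrawArrays call.
public class SpriteBatcher {
    // each sprite is {x, y, w, h, u0, v0, u1, v1}; output is
    // 6 vertices per sprite (two triangles), 4 floats per vertex
    public static float[] pack(float[][] sprites) {
        float[] out = new float[sprites.length * 6 * 4];
        int i = 0;
        for (float[] s : sprites) {
            float x = s[0], y = s[1], w = s[2], h = s[3];
            float u0 = s[4], v0 = s[5], u1 = s[6], v1 = s[7]; // atlas region
            float[][] quad = {
                {x,     y,     u0, v0}, {x + w, y,     u1, v0}, {x + w, y + h, u1, v1},
                {x + w, y + h, u1, v1}, {x,     y + h, u0, v1}, {x,     y,     u0, v0}
            };
            for (float[] vtx : quad)
                for (float f : vtx) out[i++] = f;
        }
        return out;
    }

    public static void main(String[] args) {
        float[] batch = pack(new float[][] {
            {0, 0, 16, 16, 0f, 0f, 0.5f, 0.5f},
            {32, 8, 16, 16, 0.5f, 0f, 1f, 0.5f}
        });
        System.out.println(batch.length); // 2 sprites * 6 vertices * 4 floats = 48
    }
}
```

You would upload the packed array into one FloatBuffer, bind the atlas texture once, and replace the per-bitmap strip draws with a single glDrawArrays(GL_TRIANGLES, 0, spriteCount * 6).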

Related

Feed Shader Program with World Units instead of (0f,1f) values

I'm rendering a simple rectangle mesh using libgdx, and other geometric elements that are similarly simple. These are going to interact with the sprites I have set up in my game. The sprites' positions and other properties are set up in world units, and before each sprite draw session I set up the camera like this:
camera.update();
batch.setProjectionMatrix(camera.combined);
It all works well, but I need to draw meshes using world units. How can I feed the shader program world coordinates (12.5f, 30f, etc., based on my game world data) instead of values in the (0f, 1f) range? I want to draw several textured meshes, so I need coordinates that relate to the other elements in the game.
Here is how I draw a simple rectangle mesh :
mesh = new Mesh(true, 4, 6,
        new VertexAttribute(Usage.Position, 3, "a_position"),
        new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoords"));
mesh.setVertices(new float[] {
        -1.0f, -1.0f, 0, 0, 1,
         0.0f, -1.0f, 0, 1, 1,
         0.0f,  0.0f, 0, 1, 0,
        -1.0f,  0.0f, 0, 0, 0 });
mesh.setIndices(new short[] { 0, 1, 2, 2, 3, 0 });
Gdx.graphics.getGL20().glEnable(GL20.GL_TEXTURE_2D);
Gdx.gl20.glActiveTexture(GL20.GL_TEXTURE);
createShader();
shader.begin();
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Is there any way I can feed world units to the mesh vertices array ?
You can transform the vertices in the vertex shader. This allows you to project world coordinates onto the -1 to 1 range required for rendering. This is typically done by multiplying the position vertex attribute by a (projection) matrix. Have a look at the default spritebatch shader for an example of how to implement this.
You can use the camera.combined matrix to multiply these vertices in the vertex shader, just like you did when specifying the projection matrix for the spritebatch. You'll have to assign this matrix to the uniform you've used in your vertex shader. An example of how to do this can also be found in the default spritebatch implementation.
However, you might want to reconsider your approach. Since you're already using a spritebatch, you can profit from its performance gains by rendering through it instead of manually. This also simplifies the rendering, because you don't have to mess with the shader and matrices yourself. Spritebatch contains a method (javadoc) which allows you to specify a manually created mesh (or vertices, actually). Each vertex is expected to be 5 floats (x, y, u, v, color) in size, and a multiple of four vertices (it doesn't have to be a rectangle shape, though) must be provided (you can use Color.WHITE.toFloatBits() for the color).
But since you're trying to render a simple rectangle, you might as well use one of the more convenient methods that lets you render a rectangle without creating a mesh altogether (javadocs). Or, even easier, use it how it is designed by creating a Sprite for your rectangle (wiki page).
Now, if you're still certain that you want to create a mesh and shader manually, then I'd suggest learning that using e.g. a tutorial instead of just diving into it. E.g. these tutorials might help you get started. The wiki also contains an article describing how to do this, including transforming vertices in the vertex shader.
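To illustrate what that multiplication does, here is a plain-Java sketch of an orthographic projection matrix (the kind camera.combined contains for an OrthographicCamera, ignoring the view part) applied to a world-space vertex. The world size of 100 x 50 units is made up for the example:

```java
// Illustration: how an orthographic projection matrix maps world
// coordinates into the -1..1 NDC range. Column-major 4x4, as OpenGL uses.
public class OrthoDemo {
    public static float[] ortho(float l, float r, float b, float t) {
        return new float[] {
            2f / (r - l), 0,            0,  0,
            0,            2f / (t - b), 0,  0,
            0,            0,           -1,  0,
            -(r + l) / (r - l), -(t + b) / (t - b), 0, 1
        };
    }

    // apply the matrix to a 2D point (z = 0, w = 1)
    public static float[] transform(float[] m, float x, float y) {
        return new float[] {
            m[0] * x + m[4] * y + m[12],
            m[1] * x + m[5] * y + m[13]
        };
    }

    public static void main(String[] args) {
        float[] proj = ortho(0, 100, 0, 50);         // a 100x50-unit world
        float[] ndc = transform(proj, 12.5f, 30f);
        // 12.5 of 100 -> about -0.75, 30 of 50 -> about 0.2 in NDC
        System.out.println(ndc[0] + " " + ndc[1]);
    }
}
```

In libgdx the vertex shader does this same multiply as u_projTrans * a_position; the uniform just has to be fed camera.combined.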

JOGL - Add Texture to Object - Only black object

How can I add a texture to an object in Java OpenGL (especially for AndAR)... What's wrong with my code? I read a few examples, but the result is always the same: only a "black rectangle", or the texture is bound on the background... How can I bind it to my rectangle?
Here is my Code:
int[] textureIDs = new int[1];
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glGenTextures(1, textureIDs, 0);
//load the textures into the graphics memory
Bitmap bm = BitmapFactory.decodeResource(CustomActivity.context.getResources(), R.drawable.icon);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureIDs[0]);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bm,0);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
GLUT.glutSolidBox(gl,200.0f,100.0f,10.0f);
For texturing to have a useful effect, you will need texture coordinates, which tell the GL which part of the texture is to be mapped to which parts of the primitives. Since you are using the fixed function pipeline, there are two options:
Supply geometric primitives with texture coordinates per vertex, or
use the automatic texture coordinate generation.
The GLUT objects never provide any texture coordinates, which means that OpenGL will use the currently set texture coordinate for every vertex. The result is that just one specific texture location is sampled over and over again - it doesn't have to be black, but your object will be evenly colored.
You might be tempted to go for option 2 then, automatic texture coordinate generation, which is controlled with the glTexGen() family of functions. However, the available texture coordinate generation modes are all not suitable for texturing a cube.
So the only real solution is to specify the cube vertices manually and give them useful texture coordinates. You never specified the mapping you want: the texture is a rectangular image, and you could map it to each of the faces, or you could want a different sub-rectangle of the image mapped to each side - you have to tell the GL how to map it; it cannot guess that just because you draw 6 faces and have texturing enabled.
As for "the texture is bound on the background": you need to disable texturing again when you want to draw untextured.
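A minimal sketch of option 1 for a single face of the box, in the same Android GL10 style as the question. The buffer construction is plain Java, the GL calls are indicated in comments, and the dimensions are just the question's glutSolidBox values:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// One cube face with explicit per-vertex texture coordinates,
// replacing GLUT.glutSolidBox (which supplies none).
public class TexturedFace {
    public static FloatBuffer toBuffer(float[] data) {
        FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        fb.put(data).position(0);
        return fb;
    }

    // front face of a w x h x d box, ordered for a triangle strip
    public static float[] faceVertices(float w, float h, float d) {
        return new float[] {
            -w / 2, -h / 2, d / 2,
             w / 2, -h / 2, d / 2,
            -w / 2,  h / 2, d / 2,
             w / 2,  h / 2, d / 2
        };
    }

    // map the whole texture onto that face, one (u, v) per vertex
    public static float[] faceTexCoords() {
        return new float[] { 0, 1,  1, 1,  0, 0,  1, 0 };
    }

    public static void main(String[] args) {
        FloatBuffer v = toBuffer(faceVertices(200f, 100f, 10f));
        FloatBuffer t = toBuffer(faceTexCoords());
        // Then, per face:
        // gl.glVertexPointer(3, GL10.GL_FLOAT, 0, v);
        // gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, t);
        // gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
        System.out.println(v.capacity() + " " + t.capacity());
    }
}
```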

How would I read a sprite sheet in LWJGL?

I currently use LWJGL Textures to draw images on the screen. I would like to read Textures from a sprite sheet. I am using Slick's TextureLoader class to load the textures.
I draw an LWJGL Shape and bind a Texture onto it.
For example, drawing an image:
Texture texture = ResourceManager.loadTexture("Images/Tests/test.png");
GL11.glBegin(GL11.GL_QUADS);
{
    GL11.glTexCoord2f(0, 0);
    GL11.glVertex2f(0, 0);
    GL11.glTexCoord2f(0, texture.getHeight());
    GL11.glVertex2f(0, height);
    GL11.glTexCoord2f(texture.getWidth(), texture.getHeight());
    GL11.glVertex2f(width, height);
    GL11.glTexCoord2f(texture.getWidth(), 0);
    GL11.glVertex2f(width, 0);
}
GL11.glEnd();
I think there is a way to do this by giving glTexCoord2f a sprite offset and loading the sprite sheet as the texture instead;
for example one call would be like this:
GL11.glTexCoord2f(0+spriteXOffset, texture.getHeight()-spriteYOffset);
But I would really like to know if there is a simpler way, maybe extracting Textures from a single Texture, for example like they do here:
Reading images from a sprite sheet Java
Just instead of BufferedImage, Texture object.
Thank you for the help!
GL_TEXTURE_2D, which the Slick texture loader uses internally, requires normalized texture coordinates. That is, the coordinates range from 0.0 to 1.0: (0,0) is the top-left corner of the texture and (1,1) is the bottom-right corner. Assuming you have your sprite coordinates at hand in pixels, you then divide the x coordinate by the texture width and the y coordinate by the texture height, resulting in normalized texture coordinates. You would then supply these coordinates to OpenGL using glTexCoord.
glTexCoord2f(spriteX / textureWidth, spriteY / textureHeight);
glVertex2f(coordinateX, coordinateY);
glTexCoord2f((spriteX + spriteWidth) / textureWidth, spriteY / textureHeight);
glVertex2f(coordinateX2, coordinateY);
// Et cetera
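As a concrete sketch, here is a small helper that computes the normalized UV rectangle of one cell in a uniformly gridded sprite sheet; the class and parameter names are hypothetical:

```java
// Helper sketch: normalized UV rectangle {u0, v0, u1, v1} for cell
// (col, row) of a sprite sheet with uniform cell sizes.
public class SpriteSheet {
    public static float[] cellUV(int col, int row, int cellW, int cellH,
                                 int sheetW, int sheetH) {
        float u0 = (float) (col * cellW) / sheetW;   // left edge, normalized
        float v0 = (float) (row * cellH) / sheetH;   // top edge, normalized
        return new float[] { u0, v0,
                             u0 + (float) cellW / sheetW,
                             v0 + (float) cellH / sheetH };
    }

    public static void main(String[] args) {
        // cell (1, 0) of a 64x64 sheet with 32x32 cells -> u runs 0.5..1.0
        float[] uv = cellUV(1, 0, 32, 32, 64, 64);
        System.out.println(uv[0] + " " + uv[2]);
        // then, per corner: glTexCoord2f(uv[0], uv[1]); glVertex2f(x, y); ...
    }
}
```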
There is, however, also an easier way of doing this. Take a look at this video (I created it), to see how you can use the pixel coordinates for textures instead of normalized ones.

Starting with OpenGL ES. Drawing using pixels

I'm starting to learn OpenGL ES on Android (GL10) using Java, and I followed some tutorials to draw squares, triangles, etc.
Now I'm starting to draw some ideas I have, but I'm really confused by the drawing coordinates of the screen. When I draw something using OpenGL ES, I have to specify the part of the screen I want to draw to, and the same for the texture...
So I started to make some tests, and I drew a fullscreen texture with these vertices:
(-1, -1, //top left
-1, 1, //bottom left
1, -1, //top right
1, 1); //bottom right
Why is this fullscreen? Isn't the origin of the OpenGL coordinates at the top left (0, 0)? Why is the drawing correct with those vertices? It seems that the origin is really the center of the screen and the width and height go from -1 to 1, but I don't really understand it, because I thought the origin was at the top left...
Another question... I read a lot of C++ code where they draw using pixels. Using pixels seems really necessary in video games because you need the exact position of things, and with -1...1 I can't be really precise. How can I use pixels instead of -1...1?
Thanks, and sorry about my poor English.
Why is this fullscreen? Isn't the origin of the OpenGL coordinates at the top left (0, 0)? Why is the drawing correct with those vertices? It seems that the origin is really the center of the screen and the width and height go from -1 to 1, but I don't really understand it, because I thought the origin was at the top left...
There are 3 things coming together. The so called viewport, the so called normalized device coordinates (NDC), and the projection from model space to eye space to clip space to NDC space.
The viewport selects the portion of your window onto which the NDC range [-1…1]×[-1…1] is mapped. The function signature is glViewport(x, y, width, height). OpenGL assumes a coordinate system with NDC x-coordinates rising to the right and NDC y-coordinates rising upward.
So if you call glViewport(0, 0, window_width, window_height), which is also the default after an OpenGL context is bound to a window for the first time, the NDC coordinate (-1, -1) will be in the lower left and the NDC coordinate (1, 1) in the upper right corner of the window.
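That mapping can be written out explicitly; a plain-Java sketch, with example viewport values:

```java
// Sketch of the NDC -> window mapping glViewport establishes:
// x_win = x + (ndc_x + 1) / 2 * width, and likewise for y (y up).
public class ViewportDemo {
    public static float[] ndcToWindow(float nx, float ny,
                                      int x, int y, int w, int h) {
        return new float[] { x + (nx + 1f) / 2f * w,
                             y + (ny + 1f) / 2f * h };
    }

    public static void main(String[] args) {
        // with glViewport(0, 0, 800, 600):
        float[] ll = ndcToWindow(-1, -1, 0, 0, 800, 600); // lower left pixel
        float[] ur = ndcToWindow( 1,  1, 0, 0, 800, 600); // upper right pixel
        System.out.println(ll[0] + "," + ll[1] + " " + ur[0] + "," + ur[1]);
    }
}
```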
OpenGL starts with all transformations set to identity, which means that the vertex coordinates you pass in go straight through to NDC space and are interpreted as such. However, most of the time in OpenGL you're applying two successive transformations:
modelview and projection.
The modelview transformation is used to move the world around in front of the stationary eye/camera (which is always located at (0,0,0)). Placing a camera just means adding an additional transformation of the whole world (the view transformation) that is the exact opposite of how you'd place the camera in the world. Fixed function OpenGL calls this the MODELVIEW matrix, accessed when the matrix mode has been set to GL_MODELVIEW.
The projection transformation is kind of the lens of OpenGL. You use it to set whether it's a wide or narrow angle (in the case of perspective), the edges of a cuboid (ortho projection), or something different entirely. Fixed function OpenGL calls this the PROJECTION matrix, accessed when the matrix mode has been set to GL_PROJECTION.
After the projection, primitives are clipped, and then the so-called homogeneous divide is applied, which creates the actual perspective effect if a perspective projection has been used.
At this point the vertices have been transformed into NDC space, which then gets mapped to the viewport as explained in the beginning.
Regarding your problem: What you want is a projection that maps vertex coordinates 1:1 to viewport pixels. Easy enough:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if (origin_lower_left) {
    glOrtho(0, width, 0, height, -1, 1);
} else {
    glOrtho(0, width, height, 0, -1, 1);
}
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Now vertex coordinates map to viewport pixels.
Update: Drawing a full viewport textured quad by triangles:
OpenGL-2.1 and OpenGL ES-1
void fullscreenquad(int width, int height, GLuint texture)
{
    GLfloat vtcs[] = {
        0, 0,
        1, 0,
        1, 1,
        0, 1
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vtcs);
    glTexCoordPointer(2, GL_FLOAT, 0, vtcs);

    glViewport(0, 0, width, height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
By default the OpenGL camera is placed at the origin pointing down the negative z axis.
Whatever is projected onto the camera's near plane is what is seen on your screen.
Thus the center of the screen corresponds to (0, 0).
Depending on what you want to draw, you have the option of using GL_POINTS, GL_LINES or GL_TRIANGLES for drawing.
You can use GL_POINTS to color some pixels on the screen, but if you need to draw objects/meshes such as a teapot, cube etc., you should go for triangles.
You need to read a bit more to get things clearer. In OpenGL, you specify a viewport. This viewport is your view of the OpenGL space: if you set it so that the center is in the middle of your screen and the view extends from -1 to 1, then this is your full view of the OpenGL space. Do not mix that up with the screen coordinates in Android (those have their origin at the top left corner, as you mentioned). You need to translate between these coordinate systems (for touch events, for example) to match the two.
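As a sketch of that translation, assuming a made-up screen size; Android touch coordinates have their origin at the top left with y pointing down, while default NDC has its origin in the center with y pointing up:

```java
// Sketch: converting Android touch coordinates to default NDC space.
public class TouchToNdc {
    public static float[] convert(float touchX, float touchY,
                                  float screenW, float screenH) {
        float nx = touchX / screenW * 2f - 1f; // 0..w -> -1..1
        float ny = 1f - touchY / screenH * 2f; // 0..h -> 1..-1 (flip y)
        return new float[] { nx, ny };
    }

    public static void main(String[] args) {
        // the center of a 1080x1920 screen maps to NDC (0, 0)
        float[] ndc = convert(540f, 960f, 1080f, 1920f);
        System.out.println(ndc[0] + " " + ndc[1]);
    }
}
```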

Texturing Vertex Buffer Objects

What I want to do is draw a (large) terrain with OpenGL. So I have a set of vertices, let's say 256 x 256, which I store in a vertex buffer object in VRAM. I properly triangulated them, so I've got an index buffer for the faces.
// vertexes
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertexBufferId);
glVertexPointer(3, GL_FLOAT, 0, 0);
// textures
glBindBufferARB(GL_ARRAY_BUFFER_ARB, texCoordBufferId);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
// indexes
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indexBufferId);
// draw it
glDrawRangeElements(GL_TRIANGLES, 0, size - 1, size, GL_UNSIGNED_INT, 0);
I also loaded a square texture which has to be applied to each triangle. So I've got a problem with the texture coords:
Each vertex is included in 4 triangles, which means it needs 4 texture coords. But glDrawRangeElements() requires as many texture coords as vertices.
So I don't see how to do this with VBOs. Maybe there is a better concept for solving my problem, or I'm just lacking a good idea.
Thanks in advance.
If your texture should repeat (or mirror) itself in each quad, the best way would be to use texture coordinates that match the (x, y) position of each vertex in your grid. E.g. for the first line of vertices use these texture coordinates: (0.0, 0.0), (1.0, 0.0), (2.0, 0.0)...(255.0, 0.0).
As you probably want your texture to tile seamlessly, all you need to do is compute the proper texture coordinate for each vertex and pass them just like the vertices.
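A sketch of generating such per-vertex coordinates for the 256 x 256 grid (class and method names are hypothetical). With plain (x, y) values the texture repeats once per quad, which needs GL_REPEAT wrap mode; divide by (n - 1) instead if you want it stretched once across the whole terrain:

```java
// Sketch: one (u, v) pair per vertex of an n x n height-field grid,
// so the texcoord buffer has exactly as many entries as the vertex buffer.
public class GridTexCoords {
    public static float[] tiling(int n) {
        float[] tc = new float[n * n * 2];
        int i = 0;
        for (int y = 0; y < n; y++) {
            for (int x = 0; x < n; x++) {
                tc[i++] = x;  // one texture repeat per quad in x
                tc[i++] = y;  // one texture repeat per quad in y
            }
        }
        return tc;
    }

    public static void main(String[] args) {
        float[] tc = tiling(256);      // matches a 256 x 256 vertex grid
        System.out.println(tc.length); // 256 * 256 * 2 = 131072
    }
}
```

The resulting array is uploaded to the texcoord VBO and pointed at with glTexCoordPointer(2, GL_FLOAT, 0, 0), exactly parallel to the vertex VBO.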
