libgdx spritebatch not rendering textures - java

I'm working on a top-down RPG game using LibGDX, and am creating an orthographic camera for my game. However, in doing so, only my tile textures render now. This is how the render code looks:
Camera initialized as new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
(note that the camera calls are actually in the world.update method; I just figured I would minimize the amount of code needed here)
Updating/Rendering:
camera.position.set(getPlayer().getPosition().x, getPlayer().getPosition().y, 0);
camera.update();
batch.setProjectionMatrix(world.getCamera().combined);
batch.begin();
world.render(batch);
batch.end();
the world's render method ends up calling this:
public void render(SpriteBatch batch, float x, float y, float w, float h) {
    batch.draw(region, x, y, w, h);
}
Where region is a TextureRegion
Without the camera, it all works just fine, so I am very confused as to why textures only render in order now (my tile textures are rendered below entities). Does anyone have any idea why this might be? In case you want to see more code, I also have this on my GitHub: CLICK HERE

I hadn't realized this until later, but I was commenting out a lot of rendering lines, line by line, to see if I could find what was wrong. It turns out that the debugging tool I made (which renders collision bounds using a ShapeRenderer) was messing it up, because apparently a ShapeRenderer cannot be used between a batch.begin() and a batch.end().
I figured this out with the help of this badlogicgames forum post.
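For anyone hitting the same thing, here is a minimal sketch of the reordering: finish the SpriteBatch first, then run the ShapeRenderer on its own. The debugRenderer name is just an assumption for the debug tool's ShapeRenderer; the other names come from the code above.
batch.setProjectionMatrix(world.getCamera().combined);
batch.begin();
world.render(batch);
batch.end(); // finish the SpriteBatch before any ShapeRenderer work

debugRenderer.setProjectionMatrix(world.getCamera().combined);
debugRenderer.begin(ShapeRenderer.ShapeType.Line);
// ... draw collision bounds here ...
debugRenderer.end();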

Can't get much out of the code, but are you using this at the root of the render structure?
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

Related

How to create a proper orthographic camera in libgdx

So I am having a hard time understanding how orthographic cameras work in libgdx.
What I want is to have a camera that will only render things within a square, while having another camera set the bounds for my whole screen.
So here, I was able to do what I wanted on the whole screen for the game pad. But the thing you see on the top right is the background map of the game, and I want to render only the parts that fall within the red square you see here. How do I achieve that?
Are cameras supposed to do that, or do I need to figure out a way to do it manually? I am really confused as to how cameras and projection matrices work.
Here on this screen, the red square and the green pad on the left are being drawn using the projection matrix of my screen camera. The map (top right) is drawn using my map cam.
The map cam has a viewport of 400x400, but as you can see, the tiles are rectangular and that isn't the aspect ratio I want. If someone can briefly explain how cameras work, I'd greatly appreciate it.
The reason I am not posting my code here is that I feel like I need to understand how camera mechanics work to even code it properly, so I want to address that issue first.
Following @Tenfour04's advice worked perfectly. In case anyone wonders what I wanted to achieve, here's a picture.
A camera alone cannot crop off part of the screen. For that you need to use glViewport. There is already a Viewport class in Libgdx that can do that for you. You will need two orthographic cameras (one for the map and one for the GUI), but the viewport can create its own.
private Viewport viewport;
//in create:
viewport = new FitViewport(400, 400);
//in resize:
viewport.update(width, height);
//in render:
viewport.getCamera().position.set(/*...move your map camera as needed*/);
viewport.apply(); //viewport cropped to your red square
batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
//draw map
batch.end();
//return to full screen viewport
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setProjectionMatrix(yourGUICamera.combined);
batch.begin();
//draw gui
batch.end();
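For the yourGUICamera referenced above, one possible setup is a plain OrthographicCamera sized to the screen. This is only a sketch of an assumed setup, not part of the original answer; the sizing call belongs in resize():
OrthographicCamera yourGUICamera = new OrthographicCamera();
//in resize(width, height):
yourGUICamera.setToOrtho(false, width, height); // pixel-sized GUI, y-axis up
yourGUICamera.update();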
What happens is that the camera will fit itself to the size of the screen. In order to change this, you would want to use a FrameBuffer. The frame buffer will constrain the camera to the desired size, and can then be drawn as a texture.
Create the frame buffer with the dimensions being in pixels.
//Initialize the buffer
FrameBuffer fbo = new FrameBuffer(Format.RGB565, width, height, false);
Render the world within the buffer.
fbo.begin();
//Draw the world here
fbo.end();
Draw the buffer to the screen with a batch.
batch.begin();
batch.draw(fbo.getColorBufferTexture(), x, y);
batch.end();
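One note as a hedged addition to the answer above: the frame buffer's color texture usually comes out upside down when drawn directly with a SpriteBatch, so a common work-around is to wrap it in a vertically flipped TextureRegion. A sketch, reusing the fbo and batch names from above:
TextureRegion fboRegion = new TextureRegion(fbo.getColorBufferTexture());
fboRegion.flip(false, true); // flip on the y axis only

batch.begin();
batch.draw(fboRegion, x, y);
batch.end();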

Libgdx SpriteBatch.SetProjectionMatrix makes spritebatch not draw

spriteBatch.setProjectionMatrix(cam.combined) makes the SpriteBatch not draw the blocks and my character, but camera movement works; if I don't use this line of code, everything is drawn but the camera does not work. Does anyone know the solution, because I simply can't see it.
EDIT: sorry for the messy first post
Here is the piece of code that is troublesome:
public void render()
{
    cam.update();
    spriteBatch.setProjectionMatrix(cam.combined);
    spriteBatch.begin();
    drawBlocks();
    drawBob();
    spriteBatch.end();
    cam.position.x = world.bob.GetPosition().x;
    cam.update();
    drawCollisionBlocks();
    if(debug)
        drawDebug();
}
I found the solution, but for anyone who may have this kind of problem in the future: the problem was in the drawing methods, where I was drawing textures like this:
CODE:
spriteBatch.draw(bobFrame, bob.GetPosition().x * PPuX, bob.GetPosition().y * PPuY, Bob.SIZE * PPuX, Bob.SIZE * PPuY);
PPuX and PPuY were of type int and were used to scale for different screen sizes, and that was messing the SpriteBatch up when I was setting the projection matrix.
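One common way to avoid this kind of mismatch (not necessarily the exact fix used here) is to draw everything in the camera's own world units and let the projection matrix do the scaling, so no pixels-per-unit factors are needed at all. A rough sketch, reusing the names from the code above and assuming cam and Bob.SIZE are both expressed in world units:
spriteBatch.setProjectionMatrix(cam.combined);
spriteBatch.begin();
// positions and sizes in world units; cam.combined maps them to the screen
spriteBatch.draw(bobFrame, bob.GetPosition().x, bob.GetPosition().y, Bob.SIZE, Bob.SIZE);
spriteBatch.end();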
spriteBatch.setProjectionMatrix(cam.combined) makes the SpriteBatch use the coordinate system specified by cam instead of the default one. The two coordinate systems are different, and cam.combined does the maths for you.
I believe you actually ARE drawing the sprites; however, you are not seeing them because your camera viewport is not set (i.e. you are looking at the wrong coordinate area).
Adding
cam.setToOrtho(false); //true to invert y axis
"Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down."
Link to the JavaDoc here.
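A minimal setup along those lines (the 800x480 viewport size is just an illustrative assumption, not from the original post):
OrthographicCamera cam = new OrthographicCamera();
cam.setToOrtho(false, 800, 480); // y-axis up; viewport size in your own units
cam.update();
spriteBatch.setProjectionMatrix(cam.combined);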

Blending problems using opengl (via lwjgl) when using a png texture

I have a (hopefully) small problem when using blending in OpenGL.
Currently I use LWJGL and Slick-Util to load the Texture.
The texture itself is a 2048x2048 png graphic, in which I store tiles of a size of 128x128 so that I have 16 sprites per row/column.
Since glTexCoord2f() uses normalized coordinates, I wrote a little function to scale the whole image so that it shows only the sprite I want.
It looks like this:
private Rectangle4d getTexCoord(int x, int y) {
    double row = 0, col = 0;
    if (x > 0)
        row = x / 16d;
    if (y > 0)
        col = y / 16d;
    return new Rectangle4d(row, col, 1d/16d, 1d/16d);
}
(Rectangle4d is just a type to store x, y, width and height as double coords)
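As a quick sanity check of the mapping (worked through from the function above, assuming the 16x16 grid of 128x128 tiles described earlier):
// tile at grid position (1, 2):
Rectangle4d tc = getTexCoord(1, 2);
// -> x = 1/16 = 0.0625, y = 2/16 = 0.125, width = height = 1/16 = 0.0625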
Now the problem is, once I use these coords, the sprite displays correctly and the transparency works correctly too, but everything else becomes significantly darker (or more correctly, it becomes transparent, I guess, and only looks dark because the clear color is black). The sprite itself, however, is drawn correctly. I already tried changing all the glColor3d(..) calls to glColor4d(..) and setting alpha to 1d, but that didn't change anything. The sprite is currently the only image; everything else is just colored quads.
Here is how I initialised OpenGL:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And here is how I load the sprite (using slick-util):
texture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/sprites.png"));
And finally I render it like this (using the helper function getTexCoord() mentioned at the top):
texture.bind();
glColor4d(1, 1, 1, 1);
glBegin(GL_QUADS);
{
    Rectangle4d texCoord = getTexCoord(0, 0);
    glTexCoord2f((float)texCoord.getX(), (float)texCoord.getY());
    glVertex2i((Game.WIDTH-PLAYERSIZE)/2, (Game.HEIGHT-PLAYERSIZE)/2);
    glTexCoord2f((float)texCoord.getX()+(float)texCoord.getWidth(), (float)texCoord.getY());
    glVertex2i((Game.WIDTH+PLAYERSIZE)/2, (Game.HEIGHT-PLAYERSIZE)/2);
    glTexCoord2f((float)texCoord.getX()+(float)texCoord.getWidth(), (float)texCoord.getY()+(float)texCoord.getHeight());
    glVertex2i((Game.WIDTH+PLAYERSIZE)/2, (Game.HEIGHT+PLAYERSIZE)/2);
    glTexCoord2f((float)texCoord.getX(), (float)texCoord.getY()+(float)texCoord.getHeight());
    glVertex2i((Game.WIDTH-PLAYERSIZE)/2, (Game.HEIGHT+PLAYERSIZE)/2);
}
glEnd();
The result is this (sprite is drawn correctly, just everything else is darker/transparent):
Without the texture (just a gray quad), it looks like this (now everything is correctly drawn except I don't have a sprite):
Thanks for everyone who bothers to read this at all!
Edit:
Some additional info, from my attempts to find the problem:
This is how it looks when I set the clear color to white (using glClearColor(1, 1, 1, 1)):
Another thing I tried is enabling blending just before I draw the player and disabling it again right after I finish drawing:
It's a bit better now, but it's still noticeably darker. In this case it really seems to be "darker", not "more transparent", because it looks the same when I use white as the clear color (while still only enabling blending when needed and disabling it right after), as seen here:
I read some related questions and eventually found the solution (Link). Apparently I can't/shouldn't have GL_TEXTURE_2D enabled all the time when I want to render textureless objects (colored quads in this case)!
So now, I enable it only before I render the sprite and then disable it again once the sprite is drawn. It works perfectly now! :)
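A sketch of that fix, using the same LWJGL-style calls as the question: texturing is only enabled while the textured quad is drawn, so plain colored quads are unaffected.
glEnable(GL_TEXTURE_2D);
texture.bind();
// ... glBegin(GL_QUADS) / textured vertices / glEnd() as above ...
glDisable(GL_TEXTURE_2D);

// plain colored quads are drawn here, with texturing disabled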

Android OpenGL ES: How do you select a 2D object?

I have been searching for an introduction to 2D selection in OpenGL ES on Stack Overflow. I mostly see questions about 3D.
I'm designing a 2D tile-based level editor on Android 4.0.3, using OpenGL ES. In the level editor, there is a 2D, yellow, square object placed in the center of the screen. All I want is to detect whether the object has been touched by the user.
In the level editor, there aren't any tiles overlapping. Instead, they are placed side-by-side, just like two nearby pixels in a bitmap image in MS Paint. My purpose is to individually detect a touch event for each square object in the level editor.
The object is created with a simple vertex array, and using GL_TRIANGLES to draw 2 flat right triangles. There are no manipulations and no loading from a file or anything. The only thing I know is that if a user touches any one of the yellow triangles, then both yellow triangles are to be selected.
Could anyone provide a hint as to how I need to do this? Thanks in advance.
EDIT:
This is the draw() function:
public void draw(GL10 gl) {
    gl.glPushMatrix();
    gl.glTranslatef(-(deltaX - translateX), (deltaY - translateY), 1f);
    gl.glColor4f(1f, 1f, 0f, 1f);
    //TODO: Move ClientState and MatrixStack outside of draw().
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices);
    gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 6);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glPopMatrix();
}
EDIT 2:
I'm still missing some info. Are you using a camera, or pushing other matrices before the model rendering? For example, if you are using an orthographic camera, you can easily unproject your screen coordinates [x_screen, y_screen] like this (y is analogous):
I'm not using a camera, but I'm probably using an orthographic projection. Again, I do not know, as I'm just using a common OpenGL function. I do push and pop matrices, because I plan on integrating many tiles (square 2D objects) with different translation matrices. No two tiles will have the same translation matrix M.
Is a perspective projection the same as orthographic projection when it comes to 2D? I do not see any differences between the two.
Here's the initial setup when the surface is created (a class extending GLSurfaceView, and implementing GLSurfaceView.Renderer):
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
}

public void onSurfaceCreated(GL10 gl, EGLConfig arg1) {
    reset();
}

public void onDrawFrame(GL10 gl) {
    clearScreen(gl);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0f, super.getWidth(), 0f, super.getHeight(), 1, -1);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    canvas.draw(gl);
}

private void clearScreen(GL10 gl) {
    gl.glClearColor(0.5f, 1f, 1f, 1f);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
}
A basic approach would be the following:
1. Define a bounding box for each "touchable" object. This could be just a rectangle (x, y, width, height).
2. When you update a tile in the world, you update its bounding box (completely in world coordinates).
3. When the user touches the screen, you have to unproject the screen coordinates to world coordinates.
4. Check if the unprojected point overlaps with any bounding box.
Some hints on the previous items. [Edited]
1 and 2: You have to keep track of where you are rendering your tiles. Store their position and size. A rectangle is a convenient structure. In your example it could be computed like this, and you have to recompute it when the model changes. Let's call it Rectangle r:
r.x = yourTile.position.x -(deltaX - translateX)
r.y = yourTile.position.y -(deltaY - translateY)
r.width = yourTile.width // as there is no model scaling
r.height = yourTile.height // likewise
3 - If you are using an orthographic camera, you can easily unproject your screen coordinates [x_screen, y_screen] like this (y is analogous):
x_model = ((x_screen / GL_viewport_width) - 0.5) * camera.WIDTH + Camera.position.x
4 - For each of your rectangles, check if [x_model, y_model] is inside it.
[2nd Edit] By the way you are updating your matrices, you can consider that you are using a camera positioned at surfaceView.width()/2, surfaceView.height()/2. You are matching 1 pixel on screen to 1 unit in the world, so you don't need to unproject anything. You can substitute those values into my formula and get x_screen = x_model (you'll need to flip the Y component of the touch event, because Y grows downwards in Java and upwards in GL).
Final words: if the user touches point [x, y], check if [x, screenHeight - y]* hits one of your rectangles and you are done.
Do some debugging: log the touch points and see if they are as expected. Generate your rectangles and see if they match what you see on screen; then it is a matter of checking whether a point is inside a rectangle.
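A minimal hit-test sketch following the steps above; Rectangle is the bounding box structure suggested earlier, while touchX, touchY and screenHeight are names assumed here for the touch event and the GL surface height:
boolean hits(Rectangle r, float touchX, float touchY, float screenHeight) {
    float x = touchX;                 // 1 screen pixel == 1 world unit here
    float y = screenHeight - touchY;  // flip Y: screen grows down, GL grows up
    return x >= r.x && x <= r.x + r.width
        && y >= r.y && y <= r.y + r.height;
}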
I must tell you that you should not set the camera to screen dimensions, because your app will look dramatically different on different devices. This is a topic on its own so I won't go any further, but consider defining your model in terms of world units - independent of screen size. This is getting off-topic, but I hope you have gotten a good glimpse of what you need to know!
*The flipping I told you about.
PS: stick with the orthographic projection (perspective would be more complex to use).
Please, allow me to post a second answer to your question. This one is completely more high-level/philosophical. It may be a silly, useless answer, but I hope it will help someone new to computer graphics shift their mind into "graphics mode".
You can't really select a triangle on the screen. That square is not 2 triangles. That square is just a bunch of yellow pixels. OpenGL takes some vertices, connects them, processes them and colors some pixels on the screen. At one stage of the graphics pipeline even the geometrical information is lost, and you only have isolated pixels. That's analogous to a letter printed by a printer on paper. You usually don't process information from the paper (ok, maybe a barcode reader does :D)
If you need to further process your drawings, you have to model them and process them yourself with auxiliary data structures. That's why I suggested you created a rectangle to model your tiles. You create your imaginary "world" of objects, and then render them to screen. The user touch-event does not belong to the same world, so you have to "translate" screen coordinates into your world coordinates. Then you change something in your world (may be the user drags her finger and you have to move an object), and back again tell OpenGL to render your world to screen.
You should operate on your model, not the view. Meshes are more of a view thing, so you shouldn't mix them with the model information; it's good practice to separate both things. (Please, any expert correct me; I'm quite a graphics hobbyist.)
Have you checked out LibGDX?
Makes life so much easier when working with OpenGL ES.

Render ellipse using libgdx

I am attempting to render an ellipse using ShapeRenderer, and have come up with the following partial solution:
void drawEllipse(float x, float y, float width, float height) {
    float r = (width / 2);
    ShapeRenderer renderer = new ShapeRenderer();
    renderer.setProjectionMatrix(/* camera matrix */);
    renderer.begin(ShapeType.Circle);
    renderer.scale(1f, (height / width), 1f);
    renderer.circle(x + r, y, r);
    renderer.identity();
    renderer.end();
}
This draws an ellipse at the specified coordinates with the correct width and height; however, it appears that the scale transformation causes the circle to be translated in the viewport, and I have not been successful in determining the mathematics behind the translation. I am using an orthogonal projection with y-up where the coordinates map to a pixel on the screen. I am not very familiar with OpenGL.
How can I draw an ellipse using libgdx, and have it draw the ellipse at the exact coordinates I specify? Ideally, that would mean that the origin of the ellipse is located in the top-left corner, if the ellipse was contained in a rectangle.
The new Libgdx ShapeRenderer API (current nightlies, in whatever release comes after v0.9.8) contains an ellipse drawing method, so you can ignore the rest of this answer. The ShapeRenderer API has changed in other ways, too, though (e.g., the ShapeType is just Filled, Line, or Point now).
For folks stuck with the older API, you should be able to work around the distortion by making sure the scaling happens around the origin. This is standard OpenGL practice (so it's a bit obtuse, but they're following OpenGL's lead). See "Opengl order of matrix transformations" and "OpenGL: scale then translate? and how?". Even better (again, standard OpenGL practice), you end up listing the operations in the reverse order you want them to happen in, so to make a circle, distort it into an ellipse, and then move it to a specific destination, you actually write code like this:
renderer.begin(ShapeType.Circle);
renderer.translate(x, y, 0);
renderer.scale(1f, (height/width), 1f);
renderer.circle(0, 0, r);
renderer.end();
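With the newer API mentioned at the top of this answer, something along these lines should draw the ellipse directly (a sketch; x and y are the bottom-left corner of the ellipse's bounding rectangle):
renderer.setProjectionMatrix(/* camera matrix */);
renderer.begin(ShapeType.Line);
renderer.ellipse(x, y, width, height);
renderer.end();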
