Maze Drawing (better performance) libgdx - java

In my game, I'm using the ShapeRenderer class to draw a maze. Basically, I use the rectangle function (in the ShapeRenderer class) to draw small black lines. In the past, I had no problem debugging the game performance-wise (fps = 60), but lately I've been having performance issues. To keep it short, I removed every sprite and actor drawn in the game and drew the maze ONLY. Every time I debug my game through the Desktop Launcher, the fps drops by about half (to around 33 or 34 fps), yet when I run it normally, it goes up to 60 fps.
I take this as a clear indication that the ShapeRenderer class isn't the best choice for drawing the maze performance-wise. I tried a SpriteBatch with a rectangle PNG texture, and that didn't change a thing. I was wondering whether there is a better way to draw the maze that still maintains 60 fps (in debug mode), or is it just normal that debugging the game lowers the fps by half?
P.S.: This is the code inside my render method that draws the maze:
for (int x = 0; x < rows; x++) {
    for (int y = 0; y < columns; y++) {
        if (this.grid[x][y].north.hasWall) { // NORTH BORDER LINE
            shapeRenderer.rect(22 + ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * x), 450 - ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * y), GENERIC_WIDTH_HEIGHT_MAZE + 10, 0, color1, color2, color3, color4);
        }
        if (this.grid[x][y].west.hasWall) { // WEST BORDER LINE
            shapeRenderer.rect(22 + ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * x), 450 - ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * y), 0, -GENERIC_WIDTH_HEIGHT_MAZE - 10, color1, color2, color3, color4);
        }
        if (this.grid[x][y].east.hasWall) { // EAST BORDER LINE
            shapeRenderer.rect(22 + ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * (x + 1)), 450 - ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * y), 0, -GENERIC_WIDTH_HEIGHT_MAZE - 10, color1, color2, color3, color4);
        }
        if (this.grid[x][y].south.hasWall) { // SOUTH BORDER LINE
            shapeRenderer.rect(22 + ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * x), 450 - ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * (y + 1)), GENERIC_WIDTH_HEIGHT_MAZE + 10, 0, color1, color2, color3, color4);
        }
    }
}
Any insights would be appreciated. These are the values used above:
GENERIC_WIDTH_HEIGHT_MAZE = 26 (Integer)
rows = 9
columns = 12
color1 = color2 = color3 = color4 = Color.BLACK

If the rendering speed is good enough when you run the game normally, then I would not worry about the performance when debugging.
But in general this looks like something you can optimize greatly:
Since it is a maze, you can significantly reduce the number of draw calls by generating "blobs". You can join adjacent walls and even use triangle strips to draw whole chunks.
Are you using face culling to reduce the number of fragments? (You should.)
You most likely don't need to draw all of the walls anyway. Create a system that finds only the walls that are not behind other walls (this should be easy, since it looks like a regular 2D grid).
Reduce redundant calls: I assume you keep setting state like the color for every rect you draw. Do that only when it actually needs to change.
The maze is most likely static, or changes rarely. Generate a GPU buffer at load time with all the vertices, then keep reusing that buffer to reduce traffic to the GPU.
Again, these are just a few pointers for where you could optimize, but I would optimize as late as possible and only if needed. Being too slow in debug mode is usually not a good reason to start optimizing.
Since there can be many reasons why debugging is slow, you might want a way to measure what your actual drawing FPS is at the moment. You can test this by rendering your scene to an FBO of the same size as your screen, drawing the scene repeatedly in a for (or while) loop, and measuring the FPS. This gives you a rough estimate of how close to your limit you are.
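One way to read the "don't draw all the walls" advice: in a grid maze, each interior wall is usually stored twice (a cell's south wall is its neighbour's north wall), so a large fraction of those rect calls are duplicates. A minimal sketch, using a hypothetical WallDeduper helper and a string key scheme of my own invention (none of this is the asker's code), could collapse them into a set before drawing:

```java
import java.util.HashSet;
import java.util.Set;

public class WallDeduper {
    // The south wall of cell (x, y) is the same physical segment as the
    // north wall of cell (x, y + 1); likewise for east/west. Keying both
    // to the same string lets a HashSet collapse the duplicates.
    static String northKey(int x, int y) { return "H:" + x + ":" + y; } // horizontal wall above (x, y)
    static String southKey(int x, int y) { return northKey(x, y + 1); }
    static String westKey(int x, int y)  { return "V:" + x + ":" + y; } // vertical wall left of (x, y)
    static String eastKey(int x, int y)  { return westKey(x + 1, y); }

    public static Set<String> collect(boolean[][] north, boolean[][] south,
                                      boolean[][] west, boolean[][] east) {
        Set<String> walls = new HashSet<>();
        for (int x = 0; x < north.length; x++) {
            for (int y = 0; y < north[x].length; y++) {
                if (north[x][y]) walls.add(northKey(x, y));
                if (south[x][y]) walls.add(southKey(x, y));
                if (west[x][y])  walls.add(westKey(x, y));
                if (east[x][y])  walls.add(eastKey(x, y));
            }
        }
        return walls; // one entry per unique segment
    }
}
```

Each entry in the returned set would then map to exactly one rect call, roughly halving the draw calls for a dense maze.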

Well, in the end, I made a TextureAtlas containing the walls drawn as small PNG images and used two of them (one horizontal, one vertical) to draw the maze, getting rid of ShapeRenderer. As Deniz Yılmaz mentioned, ShapeRenderer is really meant for debugging, which is probably why the performance drops inside the for loop. I also made a couple of other performance optimizations in other parts of my code. Now the performance is at 60 fps all the time. Thanks, everyone!

Related

Expanding textures in libGDX

I'm using libGDX to make a simple tile-based game, and everything seemed to be fine until I added a rectangle that follows the mouse position. I figured out that whenever I jump, the rectangle (and the other blocks too) expands, probably by 1 px, until I release the spacebar. When I hit the spacebar again, it returns to its normal size. I tried printing out the rectangle's width and height, but they didn't change, so the problem is with the rendering.
Everything all right
In this picture you can see the game before the jump.
Wider textures
Here is the game after the jump. You can also clearly see it on the player's head.
A little more detail: I don't use Box2D. Tile sizes are 8x8, scaled to 20x20. I'm using TexturePacker without padding (the problem occurs with padding anyway). I don't know which code to post, because I have no idea where the problem could be, so here is just a simple block class. Any help would be much appreciated, thanks.
public class Block extends Sprite {
    private int[] id = { 0, 0 };
    public Rectangle rect;
    private int textureSize = 8;

    public Block(PlayScreen play, String texture, int x, int y, int[] id) {
        super(play.getAtlas().findRegion("terrain"));
        this.id = id;
        rect = new Rectangle(x, y, ID.tileSize, ID.tileSize);
        setRegion(id[0] * textureSize, id[1] * textureSize + 32, textureSize, textureSize);
        setBounds(rect.x, rect.y, rect.width, rect.height);
    }

    public void render(SpriteBatch batch) {
        draw(batch);
    }
}
Welcome to libGDX!
TL;DR- there isn't enough of your code there to tell what the exact problem is, but my guess is that somewhere in your code you are confusing pixel-space with game-space.
A Matter of Perspective
When you first create a libGDX game that is 2D, it's really tempting to think that you are just painting pixels onto the screen. After all, your screen is measured in pixels, your window is measured in pixels, and your texture is measured in pixels.
However, if you start looking closer at the API, you'll find weird little things such as your camera and sprite positions and sizes being measured as floating point values instead of integers (Why floats? You can't have a fraction of a pixel!).
The reason is that the dimensions of your game objects are different from how big they are drawn. It's really easy to understand this in a 3D world: when I am close to something, it is drawn really big on the screen; when I am far away, it is drawn really small. The actual size of the object doesn't change based on my distance from it, but the perceived size does. This tells us that we can't safely measure things in our game based on how they're drawn; we have to measure them based on their true size.
As a side note, while you may be using an Orthographic camera (i.e. one without perspective) and drawing 2D sprites, libGDX is really drawing a flat 3D object (a plane) behind the scenes.
Game Units
So how do we measure the "true size" of something? The answer is that we can measure it using whatever type of unit we want! We can say something is 3.5 meters long, or 42 bananas- whatever you want! For the sake of this conversation, I'm going to call these units "Game Units" (GU).
For your game, you might consider making each block one GU high and one GU wide (essentially measuring your game world in blocks). Your character can move in fractions of a block, but you measure speed in terms of "blocks per second." I can almost guarantee it will make your game logic a lot simpler.
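A minimal sketch of that idea, using a hypothetical Player class (none of these names come from the question): everything is measured in blocks, and pixels never appear in the game logic.

```java
// Hypothetical sketch: a player measured in game units (blocks), not pixels.
public class Player {
    public float x, y;          // position in blocks
    public float speed = 4f;    // speed in blocks per second

    // delta is the frame time in seconds (what Gdx.graphics.getDeltaTime() returns)
    public void update(float delta) {
        x += speed * delta;     // blocks/second * seconds = blocks moved this frame
    }
}
```

With this setup, resizing the window or swapping in higher-resolution textures never changes how fast the player moves, because nothing here is measured in pixels.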
But our textures are in pixels!
As you probably already know, your game uses three things to render: A viewport (the patch of the screen where your game can be painted), A Camera (think of it like a real camera- you change the position and size of the lens to change how much of your world is 'in view'), and your game objects (the things you may or may not want to draw, depending on whether they're visible to the camera).
Now let's look at how they're measured:
Viewport: This is a chunk of your screen (set to be the size of your game window), and as such is measured in pixels.
Camera: The Camera is interesting, because its size and position are measured in Game Units, not pixels. Since the viewport uses the Camera to know what to paint on the screen, it does contain the mapping of GU to pixel.
Game Object: This is measured in Game Units. It may have a texture measured in pixels, but that is different from the "true size" of the game object.
Now libGDX defaults all of these sizes such that 1 GU == 1 Pixel, which misleads a lot of folks into thinking that everything is measured by pixels. Once you realize that this isn't really the case, there are some really cool implications.
Really Cool Implications
The first implication is that even if my screen size changes, my camera size can stay the same. For example, if I have a small 800x600 pixel screen, I can set my camera size to 40x30. This maintains a nice aspect ratio, and allows me to draw 40x30 blocks on the screen.
If the screen size changes (say to 1440x900), my game will still show 40x30 blocks on the screen. They may look a little stretched if the aspect ratio changes, but libGDX has special viewports that will counteract this for you. This makes it much easier to support your game on other monitors, other devices, or even just handling screen resizes.
The second cool implication is that you stop caring about texture sizes to a large degree. If you start telling libGDX "Hey, go draw this 32x32px sprite on this 1x1 GU object" instead of "Hey, go draw this 32x32px sprite" (notice the difference?) it means that changing texture sizes doesn't change how big the things on your screen are drawn, it changes how detailed they are. If you want to change how big they are drawn, you can change your camera size to 'zoom in.'
The third cool implication is that this makes your game logic a lot cleaner. For example you start thinking of speeds in "Game Units per second", not "Pixels per second". This means that changes in drawing size won't affect how fast things are in the game, and will save you a ton of bug-hunting further down the road. You also avoid a lot of the weird "My jump behaves differently when I resize the screen" bugs.
Summary
I hope this is helpful and makes sense. It's difficult to get your mind around it at first, but it will make your life a lot easier and your game a lot better in the long run. If you'd like a better example with pictures, I recommend that you read this article by one of the libGDX developers.

Setting a spherical area in a BufferedImage to be a certain opacity efficently

First of all, I have scoured Google and SO for this answer, finding only how to change the actual pixels to a certain alpha value (which would be incredibly slow), or how to make part of the BufferedImage completely transparent via lwg2.setComposite(AlphaComposite.getInstance(AlphaComposite.CLEAR)). That is the exact functionality I need; however, I need the value to be less than 1f, which you cannot do with this specific instance, AlphaComposite.CLEAR.
What I want this implementation for is to make a wall inside my 2.5d game become transparent when the player goes behind it, like so:
The logic behind my game is that the terrain is one BufferedImage which is only updated when called, and then having the rest of the walls, etc, being drawn onto another BufferedImage where entities are also drawn, so the opacity transformation would only affect the trees (or walls).
This is the code I am using at the moment, but as I said, I don't want the circle I am drawing to make part of the image completely transparent, only partially (about 50%):
g2.setComposite(AlphaComposite.getInstance(AlphaComposite.CLEAR, 0.5f));
g2.fillOval(x - (int) (TILE_WIDTH * 1), y - (int) (TILE_HEIGHT * 1.5), TILE_WIDTH * 2, TILE_HEIGHT * 3);
(The 0.5f in the AlphaComposite constructor does nothing).
The reason I need this to be efficient is because I am updating this image 30 times a second, so efficiency > quality.
So, I ended up solving the issue not by manipulating the image directly (making part of it translucent), but by manipulating the opacity of the images I am drawing with. As #user343760 and #NESPowerGlove mentioned, I could just make the assets translucent when the player is behind them. Since I am using an underlying grid array to back my game, I could do this by checking whether tile.x - 1 == (int) player.x and tile.y - 1 == (int) player.y. In isometry, this means the player is on the tile directly above it from our perspective. Then I had to handle the case where wall.z is greater than 0 or 1, giving the problem that a tile 5 blocks "below" the player could obstruct him if the walls extended z = 5 above the tile. For this problem, I implemented the following solution:
for (int i = 0; i < wall.getAsset(1f).getHeight() / TILE_HEIGHT; i++) {
    if (tile.x - i - wall.z == (int) world.player.getX()
            && tile.y - i - wall.z == (int) world.player.getY()) {
        lwg2.drawImage(wall.getAsset(0.5f), x, y, this);
    }
}
This also ensures that the image is transparent even if the player is "above" the tile "above" the tile where the wall is situated, in terms of the image extending above that limit. I fixed this with the for loop, which looks up to i tiles above, where i depends on image.height / tile_height, a universal constant.
If you need to make only part of an image transparent, I have not found a fault-free solution other than manipulating the pixels at the low level of BufferedImage. If you want to erase part of an image entirely, use g2.setComposite(AlphaComposite.getInstance(AlphaComposite.CLEAR)); and draw as you normally would. Remember to switch back to the normal composite via g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER));.
You can also draw with a certain opacity in the first place using the composite g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, opacity));, where opacity is a float from 0f to 1f, with 0f completely transparent and 1f completely opaque.
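A small self-contained demonstration of that composite (the OpacityDemo class is my own illustration, not code from the question): drawing blue at 50% opacity over an opaque red image blends the two colors instead of erasing anything.

```java
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class OpacityDemo {
    public static BufferedImage draw() {
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2 = img.createGraphics();
        // Opaque red background
        g2.setColor(Color.RED);
        g2.fillRect(0, 0, 4, 4);
        // Draw blue at 50% opacity on top: SRC_OVER blends it with the red
        g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g2.setColor(Color.BLUE);
        g2.fillRect(0, 0, 4, 4);
        // Restore the default composite before any further drawing
        g2.setComposite(AlphaComposite.SrcOver);
        g2.dispose();
        return img;
    }
}
```

The resulting pixels stay fully opaque; only the color is a red/blue mix, which is why this approach works for dimming assets without punching holes in the image.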
I hope this helped anyone out there. If you find a better way of doing this, please leave a comment for future readers.
This is what my solution looks like :):

Apply Bitmask to BufferedImage

I'm developing a Worms-like game (with destructible terrain and everything) in Java.
All went fine until I tried to update the terrain image using a bitmask.
Let me explain the process in detail:
Whenever a projectile collision occurs, I draw a black circle into my terrain mask (which uses black for transparent and white for opaque pixels).
public void drawExplosion(Vector2 position, BufferedImage explosionImage) {
    Graphics2D gMask = (Graphics2D) terrainMask.getGraphics();
    gMask.drawImage(explosionImage, (int) position.x, (int) position.y, null);
    gMask.dispose();
}
After the black circle has been drawn into my terrainMask BufferedImage (whose type is BufferedImage.TYPE_BYTE_INDEXED), I update my visible terrain BufferedImage by setting every pixel to 0 where the terrainMask's pixel at the same position is black.
public void mapUpdate() {
    for (int x = 0; x < terrainMask.getWidth(); x++) {
        for (int y = 0; y < terrainMask.getHeight(); y++) {
            if (terrainMask.getRGB(x, y) == -16777216) { // opaque black (0xFF000000)
                terrain.setRGB(x, y, 0);
            }
        }
    }
}
After these steps the terrain BufferedImage is updated and everything looks fine, showing the explosion hole in the terrain.
Here comes my problem: whenever I call mapUpdate(), the game stops for 300-500 ms, checking 2400*600 pixels and setting transparent pixels in the terrain wherever a check returns true. Without setRGB() the lag does not occur. So my question is: how can I apply a bitmask to a BufferedImage more efficiently?
Important: all BufferedImages are converted to compatible ones using the GraphicsConfiguration.createCompatibleImage() method.
When I call getData() on the BufferedImage to get the pixel array, the fps drops to ~23, making the game unplayable, so that is not an option here.
I also set System.setProperty("sun.java2d.opengl", "True"); to enable the OpenGL pipeline. Another strange thing: whenever I don't set the OpenGL property, my game reaches more than 700 fps (with OpenGL enabled, 140-250 fps) and my laptop freezes completely. My game loop is the same as the one described here: http://www.koonsolo.com/news/dewitters-gameloop/ ("Constant Game Speed independent of Variable FPS", the last one).
The fastest way you can do this in Java (i.e. no OpenGL) that I know of, would be to:
a) Change your mask (terrainMask) image's colors to white and transparent (instead of white and black). Just changing the color table (IndexColorModel) should do, I guess.
b) Replace the double getRGB/setRGB loop with painting the mask over the terrain, using the proper alpha composite rule. Both setRGB and getRGB are potentially slow operations, due to lookups, color conversion, and possible data type conversion (all depending on your images), so they should generally be avoided in performance-critical code. The updated code could look something like the following:
public void mapUpdate() {
    Graphics2D g = terrain.createGraphics();
    try {
        g.setComposite(AlphaComposite.DstIn);  // Porter-Duff "destination-in" rule
        g.drawImage(terrainMask, 0, 0, null);  // clears the parts of terrain that are transparent in terrainMask
    } finally {
        g.dispose();
    }
}
Doing it this way should also keep your images managed (i.e. no fps drop).
For more information on AlphaComposite, see Compositing Graphics from the Java2D Advanced Topics tutorial.
PS: Another optimization you could do is to only update the part of the terrain that is covered by the explosion (i.e. the rectangle given by position.x, position.y, explosionImage.getWidth(), explosionImage.getHeight()). No need to update pixels you know aren't covered...
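That partial update might be sketched like this, combining the DstIn composite above with a clip rectangle (the MaskDemo helper and its signature are hypothetical, not from the answer):

```java
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class MaskDemo {
    // Punch the transparent parts of 'mask' out of 'terrain', but only
    // within the given rectangle (e.g. the area covered by an explosion).
    public static void applyMask(BufferedImage terrain, BufferedImage mask,
                                 int x, int y, int w, int h) {
        Graphics2D g = terrain.createGraphics();
        try {
            g.setClip(x, y, w, h);                 // restrict the update region
            g.setComposite(AlphaComposite.DstIn);  // keep dst only where src is opaque
            g.drawImage(mask, 0, 0, null);
        } finally {
            g.dispose();
        }
    }
}
```

Pixels outside the clip rectangle are never touched, so the cost scales with the explosion size rather than the full 2400*600 terrain.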

Most efficient way to fade-in and fade-out circles which are colored using glColorPointer

I am drawing a lot of circles for my Android game, which uses OpenGL ES 1.1. I use glColorPointer(4, GL10.GL_FLOAT, 0, vertexColors); to give these circles nice radial gradients.
These circles fade in and out smoothly, so I somehow need to set the alpha value for the whole circle.
Now, I tried glColor4f(1.0f, 1.0f, 1.0f, opacity);, but it didn't work. As I discovered in my research, glColor4f calls are ignored while glColorPointer is in use, and unlike with textures, glColor4f does not modulate the gradient.
The second option is to change the FloatBuffer vertexColors, which is easy enough:
for (int i = 3; i < vertexColors.capacity(); i += 4) {
    vertexColors.put(i, opacity);
}
Now, my circles have 129 vertices each, and there will be 100-150 circles all fading in and out, so vertexColors.put will be called approximately 15,000 times per game loop (375,000 times per second at 25 fps), and I have heard FloatBuffer.put(int index, float f) is expensive. So this adds significant overhead.
So my question is,
Can I in any way set a global alpha value in OpenGL ES 1.1? (I know glBlendColor in ES 2.0 can do a lot more but it's not available in 1.1)
or,
Can I somehow change the transparency of my circles without changing the vertex colors stored in the FloatBuffer?
or,
What would be the most efficient way to fade-in and fade-out circles which are colored using glColorPointer?

Strange Vertical Bars appear when using LWJGL on Windows

I'm writing a game in java that uses LWJGL's native OpenGL bindings to render the game graphics in OpenGL. When I run the game on Linux (tested on 2 different computers with different graphics cards), everything works fine. However, when I run the game on Windows, strange vertical 1-pixel-wide bars appear on the display.
Here's a picture of the problem:
I tried this on three different Windows computers with different graphics cards, all of which had up-to-date drivers.
What's interesting is that the bars appear in exactly the same location every time I run the program, even though the generated terrain is different every time.
I'm confused about why these vertical bars are appearing. The code I use to render the terrain is:
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// draw the terrain
glDisable(GL_BLEND);
glDisable(GL_LINE_SMOOTH);
glColor3f(0F, 0F, 0F);
glBegin(GL_LINES);
for (int x = 0; x < Constants.NUM_WIDE; x++) {
    glVertex2i(x, 0);
    glVertex2i(x, originalHeightMap[x]);
}
glEnd();

// draw a smooth line strip on top of the terrain to anti-alias it
glEnable(GL_BLEND);
glEnable(GL_LINE_SMOOTH);
glBegin(GL_LINE_STRIP);
for (int x = 0; x < Constants.NUM_WIDE; x++) {
    glVertex2i(x, originalHeightMap[x]);
}
glEnd();
This code should fill the entire width.
Another interesting fact: if you look closely, you can see the smooth line strip on top of the terrain in the places where the white bars are. Whatever causes the white bars somehow removed the black lines that are part of the terrain, but not the lines that anti-alias the top.
I tried turning blending on and off when rendering the terrain, the same problem occurred in both cases.
Does anyone know how I can fix this problem, or why it only occurs on Windows?
