Strange Vertical Bars appear when using LWJGL on Windows - java

I'm writing a game in Java that uses LWJGL's OpenGL bindings to render the game graphics. When I run the game on Linux (tested on 2 different computers with different graphics cards), everything works fine. However, when I run the game on Windows, strange vertical 1-pixel-wide bars appear on the display.
Here's a picture of the problem:
I tried this on three different Windows computers with different graphics cards, all of which had up-to-date drivers.
What's interesting is the bars appear in exactly the same location every time I run the program, even though the terrain generated is different every time.
I'm confused about why these vertical bars are appearing. The code I use to render the terrain is:
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// draw the terrain
glDisable(GL_BLEND);
glDisable(GL_LINE_SMOOTH);
glColor3f(0F, 0F, 0F);
glBegin(GL_LINES);
for (int x = 0; x < Constants.NUM_WIDE; x++) {
    glVertex2i(x, 0);
    glVertex2i(x, originalHeightMap[x]);
}
glEnd();

// draw a smooth line strip on top of the terrain to anti-alias it
glEnable(GL_BLEND);
glEnable(GL_LINE_SMOOTH);
glBegin(GL_LINE_STRIP);
for (int x = 0; x < Constants.NUM_WIDE; x++) {
    glVertex2i(x, originalHeightMap[x]);
}
glEnd();
This code should fill the entire width.
Another interesting detail: if you look closely, you can see the smooth line strip on top of the terrain even in places where the white bars are. Whatever causes the white bars removes the black lines that form the body of the terrain, but leaves the lines that anti-alias the top.
I tried turning blending on and off when rendering the terrain; the same problem occurred in both cases.
Does anyone know how I can fix this problem, or why it only occurs on Windows?
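No answer is preserved in this thread, but a frequent cause of driver-dependent one-pixel gaps (an assumption here, not a confirmed diagnosis) is that integer vertex coordinates land exactly on pixel boundaries, so whether a thin vertical line covers a given column depends on the implementation's rounding rules. A minimal sketch of the usual half-pixel workaround, reusing the question's own originalHeightMap and Constants.NUM_WIDE:

// Sketch: shift endpoints to pixel centers so every 1-px column
// rasterizes the same way across drivers.
glBegin(GL_LINES);
for (int x = 0; x < Constants.NUM_WIDE; x++) {
    glVertex2f(x + 0.5f, 0f);
    glVertex2f(x + 0.5f, originalHeightMap[x] + 0.5f);
}
glEnd();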

Related

Maze Drawing (better performance) libgdx

In my game, I'm using the ShapeRenderer class to draw a maze. Basically, I'm using the rectangle function (in the ShapeRenderer class) to draw small black lines. In the past, I had no problem debugging the game performance-wise (fps = 60). But lately, I've been having some performance issues. To make it short, I took out every sprite and actor I'd drawn in the game and decided to draw the maze ONLY. Every time I debug my game through the Desktop Launcher, the fps drops by half (to around 33-34 fps). Yet, when I run it, it goes up to 60 fps.
I believe that's a clear indication that the ShapeRenderer class isn't the best choice for drawing the maze performance-wise. I've tried a SpriteBatch with a PNG rectangle texture, and that didn't change a thing. I was wondering if there is a better way to draw the maze while maintaining 60 fps (in debug mode), or is it just normal that debugging the game halves my fps?
P.S.: This is my code, inside the render method, that draws the maze:
for (int x = 0; x < rows; x++) {
    for (int y = 0; y < columns; y++) {
        if (this.grid[x][y].north.hasWall) { // NORTH BORDER LINE
            shapeRenderer.rect(22 + ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * x), 450 - ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * y), GENERIC_WIDTH_HEIGHT_MAZE + 10, 0, color1, color2, color3, color4);
        }
        if (this.grid[x][y].west.hasWall) { // WEST BORDER LINE
            shapeRenderer.rect(22 + ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * x), 450 - ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * y), 0, -GENERIC_WIDTH_HEIGHT_MAZE - 10, color1, color2, color3, color4);
        }
        if (this.grid[x][y].east.hasWall) { // EAST BORDER LINE
            shapeRenderer.rect(22 + ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * (x + 1)), 450 - ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * y), 0, -GENERIC_WIDTH_HEIGHT_MAZE - 10, color1, color2, color3, color4);
        }
        if (this.grid[x][y].south.hasWall) { // SOUTH BORDER LINE
            shapeRenderer.rect(22 + ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * x), 450 - ((GENERIC_WIDTH_HEIGHT_MAZE + 10) * (y + 1)), GENERIC_WIDTH_HEIGHT_MAZE + 10, 0, color1, color2, color3, color4);
        }
    }
}
Any insights would be appreciated. These are the values used:
GENERIC_WIDTH_HEIGHT_MAZE = 26 (Integer)
rows = 9
columns = 12
color1 = color2 = color3 = color4 = Color.BLACK
If the rendering speed is good enough when you run it normally, then I would not worry about the performance when debugging.
But in general this looks like something you can optimize greatly:
Since it is a maze, you can significantly reduce the number of draw calls by generating "blobs": join adjacent walls and draw whole chunks, even using triangle strips.
Are you using face culling to reduce the number of fragments? (You should.)
You most likely don't need to draw all of the walls anyway. Create a system that finds only the walls that are not hidden behind other walls (should be easy, since it looks like a regular 2D grid).
Reduce redundant calls: I assume you keep setting things like the color for every rect you draw. Do that only when it actually needs changing.
The maze is most likely static or changes rarely. Generate a GPU buffer at load time with all the vertices, then keep reusing that buffer to reduce traffic to the GPU, for example:
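A minimal libGDX sketch of that last point, assuming the walls can be triangulated once at load time; buildWallVertices(), camera, and shader are placeholder names, not code from the thread:

import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;

// Load time: triangulate every visible wall into one static mesh.
// buildWallVertices() is a hypothetical helper that appends two
// triangles (a thin quad) per wall as x,y pairs.
float[] wallVertices = buildWallVertices();
Mesh mazeMesh = new Mesh(true /* static */, wallVertices.length / 2, 0,
        new VertexAttribute(Usage.Position, 2, "a_position"));
mazeMesh.setVertices(wallVertices);

// Render time: the whole maze in a single draw call.
shader.begin();
shader.setUniformMatrix("u_projTrans", camera.combined);
mazeMesh.render(shader, GL20.GL_TRIANGLES);
shader.end();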
Again, these are just a few places where you could optimize, but I would optimize as late as possible and only if needed. Being slow in debug mode is usually not a good reason to start optimizing.
Since there can be many reasons why debugging is slow, you might want a way to check your actual drawing FPS. You can test this by drawing your scene to an FBO of the same size as your screen, repeatedly redrawing the scene in a loop, and measuring the FPS. This gives you a rough estimate of how close to your limit you are.
Well, in the end, I made a TextureAtlas containing the walls drawn as small PNG images and used two of them (one horizontal and one vertical) to draw the maze, getting rid of ShapeRenderer. As Deniz Yılmaz mentioned, ShapeRenderer is really meant for debugging, which is probably why the performance dropped inside the for loop. I also made a couple of other performance optimizations in other parts of my code. Now the performance is at 60 fps all the time. Thanks everyone!
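A rough sketch of that fix; the atlas file and region names (maze.atlas, wall_horizontal, wall_vertical) and the batch object are assumptions, not details from the thread:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

// Load time: fetch the two wall images from the atlas once.
TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("maze.atlas"));
TextureRegion hWall = atlas.findRegion("wall_horizontal");
TextureRegion vWall = atlas.findRegion("wall_vertical");

// Render time: batch all walls in one SpriteBatch pass.
int cell = GENERIC_WIDTH_HEIGHT_MAZE + 10; // cell size from the question
batch.begin();
for (int x = 0; x < rows; x++) {
    for (int y = 0; y < columns; y++) {
        if (grid[x][y].north.hasWall) {
            batch.draw(hWall, 22 + cell * x, 450 - cell * y);
        }
        if (grid[x][y].west.hasWall) {
            batch.draw(vWall, 22 + cell * x, 450 - cell * (y + 1));
        }
        // east and south walls are analogous
    }
}
batch.end();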

Apply Bitmask to BufferedImage

I'm developing a Worms-like game (with destructible terrain and everything) in Java.
All went fine until I tried to update the terrain image using a bitmask.
Let me explain the process in detail: whenever a projectile collision occurs, I draw a black circle into my terrain mask (which uses black for transparent and white for opaque pixels).
public void drawExplosion(Vector2 position, BufferedImage explosionImage) {
    Graphics2D gMask = (Graphics2D) terrainMask.getGraphics();
    gMask.drawImage(explosionImage, (int) position.x, (int) position.y, null);
    gMask.dispose();
}
After the black circle has been drawn into my terrainMask BufferedImage (whose type is BufferedImage.TYPE_BYTE_INDEXED), I update my visible terrain BufferedImage by setting every pixel to 0 where the terrainMask's pixel at the same position is black.
public void mapUpdate() {
    for (int x = 0; x < terrainMask.getWidth(); x++) {
        for (int y = 0; y < terrainMask.getHeight(); y++) {
            if (terrainMask.getRGB(x, y) == -16777216) { // 0xFF000000, i.e. opaque black
                terrain.setRGB(x, y, 0);
            }
        }
    }
}
After these steps the terrain BufferedImage is updated and everything looks fine, showing the explosion hole in the terrain.
Here comes my problem: whenever I call mapUpdate(), the game stalls for 300-500 ms while checking 2400*600 pixels and setting transparent pixels in the terrain wherever the check returns true.
Without setRGB() the lag does not occur. So my question is: how can I apply a bitmask to a BufferedImage more efficiently?
Important: all BufferedImages are converted to compatible ones using the GraphicsConfiguration.createCompatibleImage() method.
When I call getData() on the BufferedImage to get the pixel array, the fps drops to ~23, making the game unplayable, so that is not an option here.
I also set System.setProperty("sun.java2d.opengl", "True"); to enable the OpenGL pipeline. Another weird thing: when I don't set the OpenGL property, my game reaches more than 700 fps (with OpenGL enabled, 140-250 fps) and my laptop freezes completely. My game loop is the same as the last one described here ("Constant Game Speed independent of Variable FPS"): http://www.koonsolo.com/news/dewitters-gameloop/
The fastest way I know of to do this in Java (i.e. without OpenGL) would be to:
a) Change your mask (terrainMask) image's colors to white and transparent (instead of white and black). Just changing the color table (IndexColorModel) should do, I guess (see the sketch below the updated code).
b) Replace the double getRGB/setRGB loop with painting the mask over the terrain, using the proper alpha composite rule. Both setRGB and getRGB are potentially slow operations, due to lookups, color conversion and possible data type conversion (all depending on your images), so they should generally be avoided in performance-critical code. The updated code could look something like the following:
public void mapUpdate() {
    Graphics2D g = terrain.createGraphics();
    try {
        g.setComposite(AlphaComposite.DstIn); // Porter-Duff "destination-in" rule
        g.drawImage(terrainMask, 0, 0, null); // clear out parts that are transparent in terrainMask
    }
    finally {
        g.dispose();
    }
}
Doing it this way should also keep your images managed (i.e. no fps drop).
For more information on AlphaComposite, see Compositing Graphics from the Java2D Advanced Topics tutorial.
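A minimal sketch of step a); the exact two-entry palette and the width/height variables are my assumptions, not code from the answer:

import java.awt.image.BufferedImage;
import java.awt.image.IndexColorModel;

// Index 0 = fully transparent (was black), index 1 = opaque white.
byte[] r = { 0, (byte) 0xFF };
byte[] g = { 0, (byte) 0xFF };
byte[] b = { 0, (byte) 0xFF };
byte[] a = { 0, (byte) 0xFF };
IndexColorModel icm = new IndexColorModel(8, 2, r, g, b, a);
BufferedImage terrainMask = new BufferedImage(width, height,
        BufferedImage.TYPE_BYTE_INDEXED, icm); // width/height: your terrain size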
PS: Another optimization you could do is to only update the part of the terrain that is covered by the explosion (i.e. the rectangle covered by position.x, position.y, explosionImage.getWidth(), explosionImage.getHeight()). No need to update pixels you know aren't covered. A sketch follows.
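A minimal sketch of that PS, combining it with the composite approach above; the method signature is my assumption, the answer did not spell it out:

import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public void mapUpdate(Vector2 position, BufferedImage explosionImage) {
    Graphics2D g = terrain.createGraphics();
    try {
        g.setComposite(AlphaComposite.DstIn);
        // Touch only the rectangle the explosion actually covers.
        g.setClip((int) position.x, (int) position.y,
                explosionImage.getWidth(), explosionImage.getHeight());
        g.drawImage(terrainMask, 0, 0, null);
    } finally {
        g.dispose();
    }
}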

Difficulty drawing a background sprite using LWJGL

I'm trying to render a background image for a new game I'm creating. To do this, I thought I'd just create a simple quad and draw it first so that it stretched over the background of my game. The problem is that the quad doesn't draw at its correct size and appears in completely the wrong place on the screen. I am using LWJGL and the slick-util library for loading textures.
background = TextureHandler.getTexture("background", "png");
This line of code gets my background texture, using a class I wrote with slick-util. I then bind the texture to a quad and draw it using glBegin() and glEnd(), like this:
// Draw the background.
background.bind();
glBegin(GL_QUADS);
{
    glTexCoord2d(0.0, 0.0);
    glVertex2d(0, 0);
    glTexCoord2d(1.0, 0.0);
    glVertex2d(Game.WIDTH, 0);
    glTexCoord2d(1.0, 1.0);
    glVertex2d(Game.WIDTH, Game.HEIGHT);
    glTexCoord2d(0.0, 1.0);
    glVertex2d(0, Game.HEIGHT);
}
glEnd();
You'd expect this block to draw the quad so that it covered the entire screen, but it actually doesn't do this. It draws it in the middle of the screen, like so:
http://imgur.com/Xw9Xs9Z
The large, multicolored sprite that takes up the larger portion of the screen is my background, but it isn't taking up the full space like I want it to.
A few things I've tried:
Checking, double-checking, and triple-checking to make sure that the sprite's size and the window's size are identical
Resizing the sprite so that it is both larger and smaller than my target size. Nothing seems to change when I do this.
Positioning the sprite at different offsets or messing with the parameters of glTexCoord2d() and glVertex2d(). This just gets messy and looks unnatural.
Why won't this background sprite draw at its correct size?
If you have not created your own orthographic projection (i.e. using glOrtho()), then your vertex coordinates need to range from -1 to +1. Right now your quad covers only part of that range, which is why you get this result.
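A minimal sketch of such a projection (my addition, assuming it goes in your initialization code; Game.WIDTH and Game.HEIGHT are the question's own constants), mapping vertex coordinates 1:1 to window pixels so the quad above fills the screen:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Map x to [0, Game.WIDTH] and y to [0, Game.HEIGHT], origin at the
// top-left (swap the two y arguments for a bottom-left origin).
glOrtho(0, Game.WIDTH, Game.HEIGHT, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();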

Most efficient way to fade-in and fade-out circles which are colored using glColorPointer

I am drawing a lot of circles for my android game which uses OpenGL ES 1.1. I use glColorPointer(4, GL10.GL_FLOAT, 0, vertexColors); to give these circles nice radial gradients.
These circles fade in and out smoothly, so I somehow need to set the alpha value for the whole circle.
Now, I tried glColor4f(1.0f, 1.0f, 1.0f, opacity);, but it didn't work. As I discovered in my research, glColor4f is ignored while a color array is enabled, and unlike with textures, it does not modulate the gradient.
The second option is to change the FloatBuffer vertexColors, which is easy enough:
for (int i = 3; i < vertexColors.capacity(); i = i + 4) {
    vertexColors.put(i, opacity);
}
Now, my circles have 129 vertices each, and there will be 100-150 circles all fading in and out, so vertexColors.put would be called approximately 15,000 times per game loop, and 375,000 times per second at 25 fps. I have heard FloatBuffer.put(int index, float f) is expensive, so this adds significant overhead.
So my question is,
Can I in any way set a global alpha value in OpenGL ES 1.1? (I know glBlendColor in ES 2.0 can do a lot more but it's not available in 1.1)
or,
Can I somehow change the transparency of my circles without changing the vertex colors stored in the FloatBuffer?
or,
What would be the most efficient way to fade-in and fade-out circles which are colored using glColorPointer?
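No answer is recorded in this thread, but one common ES 1.1 approach (my suggestion, not from the source) is to bake the radial gradient into a texture once and let glColor4f modulate it, making per-circle alpha a single call:

// gradientTextureId is a hypothetical, preloaded texture containing the
// radial gradient; GL_MODULATE multiplies it by the current color.
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
gl.glBindTexture(GL10.GL_TEXTURE_2D, gradientTextureId);
gl.glColor4f(1f, 1f, 1f, opacity); // fades the whole circle at once
// ...then draw with glVertexPointer + glTexCoordPointer + glDrawArrays
// as usual, with the color array disabled.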

Android OpenGL ES: How do you select a 2D object?

I have been searching Stack Overflow for an introduction to 2D selection in OpenGL ES, but I mostly see questions about 3D.
I'm designing a 2D tile-based level editor on Android 4.0.3, using OpenGL ES. In the level editor, there is a 2D, yellow, square object placed in the center of the screen. All I want is to detect whether the object has been touched by the user.
In the level editor, there aren't any tiles overlapping. Instead, they are placed side-by-side, just like two nearby pixels in a bitmap image in MS Paint. My purpose is to individually detect a touch event for each square object in the level editor.
The object is created with a simple vertex array, and using GL_TRIANGLES to draw 2 flat right triangles. There are no manipulations and no loading from a file or anything. The only thing I know is that if a user touches any one of the yellow triangles, then both yellow triangles are to be selected.
Could anyone provide a hint as to how I need to do this? Thanks in advance.
EDIT:
This is the draw() function:
public void draw(GL10 gl) {
    gl.glPushMatrix();
    gl.glTranslatef(-(deltaX - translateX), (deltaY - translateY), 1f);
    gl.glColor4f(1f, 1f, 0f, 1f);
    // TODO: Move ClientState and MatrixStack outside of draw().
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices);
    gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 6);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glPopMatrix();
}
EDIT 2:
"I'm still missing some info. Are you using a camera? Or pushing other matrices before the model rendering? For example, if you are using an orthographic camera, you can easily unproject your screen coordinates [x_screen, y_screen] like this (y is analogous):"
I'm not using a camera, but I'm probably using an orthographic projection. Again, I do not know, as I'm just using common OpenGL functions. I do push and pop matrices, because I plan on integrating many tiles (square 2D objects) with different translation matrices. No two tiles will have the same translation matrix M.
Is a perspective projection the same as orthographic projection when it comes to 2D? I do not see any differences between the two.
Here's the initial setup when the surface is created (a class extending GLSurfaceView, and implementing GLSurfaceView.Renderer):
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
}

public void onSurfaceCreated(GL10 gl, EGLConfig arg1) {
    reset();
}

public void onDrawFrame(GL10 gl) {
    clearScreen(gl);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0f, super.getWidth(), 0f, super.getHeight(), 1, -1);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    canvas.draw(gl);
}

private void clearScreen(GL10 gl) {
    gl.glClearColor(0.5f, 1f, 1f, 1f);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
}
A basic approach would be the following:
1. Define a bounding box for each "touchable" object. This could be just a rectangle (x, y, width, height).
2. When you update a tile in the world, you update its bounding box (completely in world coordinates).
3. When the user touches the screen, you have to unproject the screen coordinates to world coordinates.
4. Check whether the unprojected point overlaps any bounding box.
Some hints on the previous items. [Edited]
1 and 2. You have to keep track of where you are rendering your tiles. Store their position and size; a rectangle is a convenient structure. In your example it could be computed like this, and you have to recompute it when the model changes. Let's call it Rectangle r:

r.x = yourTile.position.x - (deltaX - translateX)
r.y = yourTile.position.y - (deltaY - translateY)
r.width = yourTile.width   // as there is no model scaling
r.height = yourTile.height //
3 - If you are using an orthographic camera, you can easily unproject your screen coordinates [x_screen, y_screen] like this (y is analogous):

x_model = ((x_screen / GL_viewport_width) - 0.5) * camera.WIDTH + camera.position.x
4 - For each of your rectangles, check whether [x_model, y_model] is inside it.
[2nd Edit] By the way you are updating your matrices, you can consider that you are using a camera positioned at surfaceView.width()/2, surfaceView.height()/2. You are matching 1 pixel on screen to 1 unit in the world, so you don't need to unproject anything. You can plug those values into my formula and get x_screen = x_model. (You'll need to flip the Y component of the touch event, because Y grows downwards in Java and upwards in GL.)
Final words: if the user touches point [x, y], check whether [x, screenHeight - y]* hits any of your rectangles, and you are done.
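A minimal sketch of that final test under the 1:1 pixel-to-unit setup above; the Tile class, its bounds field, and the tiles list are hypothetical names, not from the thread:

// Returns the first tile whose bounding box contains the touch,
// or null if nothing was hit.
public Tile pickTile(float touchX, float touchY, float screenHeight) {
    float xModel = touchX;
    float yModel = screenHeight - touchY; // flip Y: screen Y grows down, GL Y grows up
    for (Tile t : tiles) {
        Rectangle r = t.bounds; // world-space x, y, width, height
        if (xModel >= r.x && xModel <= r.x + r.width
                && yModel >= r.y && yModel <= r.y + r.height) {
            return t;
        }
    }
    return null;
}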
Do some debugging: log the touch points and see whether they are what you expect; generate your rectangles and see whether they match what you see on screen. Then it is just a matter of checking whether a point is inside a rectangle.
I must tell you that you should not set the camera to the screen dimensions, because your app will look dramatically different on different devices. This is a topic of its own, so I won't go any further, but consider defining your model in terms of world units, independent of screen size. This is getting rather off-topic, but I hope you have gotten a good glimpse of what you need to know!
*The flip I mentioned above.
PS: stick with the orthographic projection (perspective would be more complex to use).
Please allow me to post a second answer to your question. This one is more high-level/philosophical. It may be a silly, useless answer, but I hope it helps someone new to computer graphics shift into the "graphics mindset".
You can't really select a triangle on the screen. That square is not 2 triangles; it is just a bunch of yellow pixels. OpenGL takes some vertices, connects them, processes them, and colors some pixels on the screen. At one stage of the graphics pipeline even the geometric information is lost, and you only have isolated pixels. That's analogous to a letter printed on paper: you usually don't process information from the paper itself (OK, maybe a barcode reader does :D).
If you need to further process your drawings, you have to model them and process them yourself with auxiliary data structures. That's why I suggested you created a rectangle to model your tiles. You create your imaginary "world" of objects, and then render them to screen. The user touch-event does not belong to the same world, so you have to "translate" screen coordinates into your world coordinates. Then you change something in your world (may be the user drags her finger and you have to move an object), and back again tell OpenGL to render your world to screen.
You should operate on your model, not the view. Meshes are more of a view thing, so you shouldn't mix them with the model information; it's good practice to separate the two. (Please, experts, correct me; I'm quite a graphics hobbyist.)
Have you checked out LibGDX?
Makes life so much easier when working with OpenGL ES.
