OpenGL ES 2.0 blending only on different objects - java

I was wondering if there is a way to restrict blending so it only happens between objects from different draw calls.
I have a particle system that draws many points close to each other, and I don't want their color values to add up within the system. However, I do want those particles to add with particles from a different particle system's draw call. I know I could achieve this using a framebuffer object, but that doesn't seem efficient.

It's not possible directly via blending; the only state GL has at any point in time is the current fragment, and the contents of the framebuffer.
You could imagine using a stencil mask (clear the stencil at the start of the draw, set the stencil to 1 with each triangle in the particle system, and fail the stencil test if the value is already 1). However, most particles need some level of alpha transparency to fade out each particle at the edges, so this is probably not what you actually want ...
Actually, given the need for the "fade" region of one particle to overlap the "bright" part of a particle behind it, I'm not entirely sure that you can make this work without blending all of the particles in the particle system together.
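For fully opaque particles, though, a minimal sketch of that stencil approach in LibGDX-style Java might look like this (it assumes the GL context was created with a stencil buffer, and the render() calls are hypothetical stand-ins for drawing one system's points):
Gdx.gl.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl.glClear(GL20.GL_STENCIL_BUFFER_BIT); // reset the mask before each particle system
Gdx.gl.glStencilFunc(GL20.GL_EQUAL, 0, 0xFF); // pass only where no particle has drawn yet
Gdx.gl.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_INCR); // mark the pixel as covered
particleSystemA.render(); // hypothetical: points within this system no longer add up
Gdx.gl.glClear(GL20.GL_STENCIL_BUFFER_BIT); // clearing lets system B blend over system A
particleSystemB.render();
Gdx.gl.glDisable(GL20.GL_STENCIL_TEST);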

Related

Understanding different coordinate systems: Tiled, Stage, Screen

I am very confused about all the different coordinate systems.
I am using LibGDX with Tiled.
These all have their own coordinate system (sort of).
LibGDX screen
Tiled map
UIcamera
Orthogonal TiledMapCamera
UIStage
TiledMapStage
It's too many concepts, and I can no longer mentally track how they affect each other in complex scenarios, like:
having different screen dimensions than the tiled map dimensions
resizing the screen.
Can someone shed some light on this?
Many thanks!
In a 2D game, you really only have to think about the coordinate system of the orthographic camera. Whatever is drawn with a certain camera's combined matrix is fit to the rectangle of the screen (and if you set up the camera correctly, it will not be distorted).
LibGDX provides the Viewport classes to help set up your camera. You can think of them as camera managers that size the camera to the arrangement you want. You instantiate them with the size of the "window" of the game world you want to see. The only place you have to consider the actual screen dimensions is in the resize method, where you pass the dimensions to the Viewport and let it handle sizing your camera so the scene won't be distorted.
You might have more than one camera. Typically your UI will have its own, and the gameplay world will have another (because you want it to move around in the world).
When it comes to input, the raw X and Y are given in screen pixel coordinates, but you just pass these coordinates to the camera.unproject method to have them converted to the same coordinates as your game world.
I don't use tiles, so I can't get specific there, but the same principles should apply.
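To make that concrete, here is a minimal LibGDX sketch (the 16x9 world size is just an assumed example):
OrthographicCamera camera = new OrthographicCamera();
Viewport viewport = new FitViewport(16, 9, camera); // the "window" of the world you want to see

@Override
public void resize(int width, int height) {
    viewport.update(width, height); // the viewport sizes the camera so nothing is distorted
}

// Input handling: convert raw screen pixels into world coordinates.
Vector3 world = camera.unproject(new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0));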

Drawing a isometric map in chunks?

I want to reduce draw calls in my isometric map implementation, so I thought about combining multiple meshes and drawing them in one go. I currently create a mesh from all tiles in an 8x8 "chunk". This means a mesh containing floors, walls and objects. These floors, walls and objects are just quads placed along the regular axes, where Y is up; a rotated orthographic camera provides the isometric projection matrix. Most floors and walls fill the full quad with an opaque texture, but some of them, and most objects, have alpha transparency, like a window, a broken floor or a chair object. I build the "chunk" mesh by adding the individual meshes to it in back-to-front, bottom-to-top draw order; otherwise the combined chunk mesh is not rendered properly at all. I render all of the needed chunks in that same order too, and this works great.
However, the drawing problems start when I need to add additional objects "inside" these mesh chunks, for example a moving player, which is also just a quad with a transparent texture. I need to be able to put him inside a chunk, perhaps on a tile where he is in front of a wall and behind a window. If I render the player before the chunk, his transparent pixels hide the chunk mesh behind him. If I render the player after the chunk, the player is not visible through a transparent quad in the chunk. If this could be solved easily and not too expensively on the CPU/GPU, that would be the solution to my question. However, I consider myself new to OpenGL, so I do not know its magic very well.
I can think of a few solutions to tackle this problem that do not involve OpenGL:
Dump the chunk mesh method and just draw back to front. Then I'd either need a more efficient way of drawing, or I'd have to limit how far you can zoom out to keep draw calls down, since this is a bottleneck.
Get rid of the quads with transparency and make them fully 3D. I feel this should be a design choice and not a mandatory thing; besides, it would add a lot of additional work to creating all these assets. Right now I just have textures projected on quads instead of fully UV'd models with individual textures.
Draw all transparent objects after the chunks in the proper order. This feels kind of hacky and error prone, since some objects need to go into the chunk mesh and some don't.
Only combine the floors in a batch mesh. The floors are the biggest bottleneck: the bottom of the map has all floor tiles filled, which is about 4000 draw calls when drawn individually, and a big building uses a lot of floors for each Z level too. Walls and objects are drawn significantly less, probably a couple of hundred at most when zoomed all the way out. So for each chunk I'd draw all floors in one call and then each individual object; I'd reduce draw calls a lot just by combining the floors. When drawing walls and objects I'd have to check if there is a potential dynamic object to be rendered, or just check if there are dynamic objects within the chunk and sort them with all the walls and objects before drawing them.
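A rough sketch of what that last option might look like per chunk (all the method and type names here are hypothetical, not from my actual code):
// Floors batched into one mesh; everything else sorted back to front.
chunk.getFloorMesh().render(shader, GL20.GL_TRIANGLES); // one call for all floors

List<MeshEntry> sortables = new ArrayList<>(chunk.getWallAndObjectEntries());
sortables.addAll(chunk.getDynamicObjects()); // e.g. the player quad inside this chunk
sortables.sort((a, b) -> Float.compare(b.depth(), a.depth())); // back to front
for (MeshEntry e : sortables) {
    e.getMesh().render(shader, GL20.GL_TRIANGLES);
}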
This is how I currently render.
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
shader.begin();
shader.setUniformMatrix("u_worldView", cam.combined);
// If I draw the player before the chunk, his transparent pixels hide the chunk mesh behind him.
player.getMesh().render(shader, GL20.GL_TRIANGLES);
// Drawing the combined chunk mesh: the floors, walls and objects of each tile within the chunk.
chunk.getCombinedMesh().render(shader, GL20.GL_TRIANGLES);
// If I draw the player after the chunk, he is not drawn when behind a transparent mesh like a window or a tree.
player.getMesh().render(shader, GL20.GL_TRIANGLES);
shader.end();
So what are my options here? Can I fix this with some magic tricks out of the OpenGL hat? Or do you have another suggestion to add to the list above? I am using LibGDX for my project.

Slick2d using anti-aliasing with Animations/Images

I'm using Slick2D to render an Animation with anti-aliasing.
Without AA, the movement is choppy, as one would expect.
I turn on AA in my game's preRenderState with:
g.setAntiAlias(true);
This leads to an artifact in the rendered quad: note the diagonal line in the center, presumably caused by the two triangles that render the rectangle not meeting precisely. How can I remove that while still using AA to smooth my movement? I found http://www.java-gaming.org/index.php?topic=27313.0 but the solution there was "remove AA", which I am reluctant to do.
This looks suspiciously like artifacts due to GL_POLYGON_SMOOTH.
When you use that (deprecated) functionality, you are expected to draw all opaque geometry with blending enabled and the blend function: GL_SRC_ALPHA_SATURATE, GL_ONE. Failure to do so produces white outlines on most contours (aliased edges, basically).
Chapter 6 of the OpenGL Redbook states:
Now you need to blend overlapping edges appropriately. First, turn off the depth buffer so that you have control over how overlapping pixels are drawn. Then set the blending factors to GL_SRC_ALPHA_SATURATE (source) and GL_ONE (destination). With this specialized blending function, the final color is the sum of the destination color and the scaled source color; the scale factor is the smaller of either the incoming source alpha value or one minus the destination alpha value.
This means that for a pixel with a large alpha value, successive incoming pixels have little effect on the final color because one minus the destination alpha is almost zero. With this method, a pixel on the edge of a polygon might be blended eventually with the colors from another polygon that's drawn later. Finally, you need to sort all the polygons in your scene so that they're ordered from front to back before drawing them.
That is a lot of work that almost nobody does correctly when they try to enable GL_POLYGON_SMOOTH.
Every time you enable anti-aliasing in Slick2D, it should look like this.
g.setAntiAlias(true);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA_SATURATE, GL11.GL_ONE);
Basically the same as Andon's answer, but with code that should solve the problem.
Note that with this, whatever you draw first will be on top, rather than what you draw last, so you may need to reverse your rendering order.
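Putting the two answers together, the whole setup might look like this (GL11 is the LWJGL binding Slick2D sits on; g is the Slick2D Graphics):
g.setAntiAlias(true); // enables polygon smoothing
GL11.glDisable(GL11.GL_DEPTH_TEST); // per the Redbook, take control of overlapping pixels
GL11.glBlendFunc(GL11.GL_SRC_ALPHA_SATURATE, GL11.GL_ONE);
// Draw geometry front to back from here on; once a pixel's alpha saturates,
// later (more distant) fragments stop affecting it.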

Ray Casting a transparent PNG in Java with LWJGL

I'm making a game and I'd like to implement ray casting for the hero's laser (and other stuff in the future). I have my sprites in a sprite sheet, which I bind at the beginning and access when I draw, since each element knows how to draw itself. But the sprite sheet is a PNG, and thus some elements possess transparency, which works fine in OpenGL. I know each element's position, size, etc., but if some sprites have transparency, the position and size aren't enough for the ray cast to be perfect, since it would only hit the "bounding box". So is there a way to cast a ray using the Bresenham algorithm (I believe it is the lightest way, correct me if I'm wrong) and make it pixel perfect in OpenGL, so that I can acquire the collision point of the ray with the actual non-transparent zone of the first sprite in its way?
There is no easy way to do this. You would have to create a custom collision checker for your raycast to see if it would pass through or if it would collide with part of the sprite.
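For illustration, such a checker could march the ray with Bresenham and test the sprite's alpha at each step. A sketch (the sprite accessors like isOpaqueAt are hypothetical, e.g. reading an alpha mask built from the PNG at load time):
// March from (x0, y0) toward (x1, y1), returning the first opaque pixel hit.
int dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
int dy = Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
int err = dx - dy;
while (true) {
    if (sprite.getBounds().contains(x0, y0) && sprite.isOpaqueAt(x0, y0)) {
        return new Point(x0, y0); // collision point with the non-transparent zone
    }
    if (x0 == x1 && y0 == y1) return null; // ray ended without hitting anything
    int e2 = 2 * err;
    if (e2 > -dy) { err -= dy; x0 += sx; }
    if (e2 < dx)  { err += dx; y0 += sy; }
}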
However, it might be a better idea to represent the sprite with a smaller bounding box, a circle, or both. These are much easier and faster to check than testing every pixel within the texture.

Generating Very Large Images at Runtime with OpenGL and libgdx

I am generating very large hex grids (up to 120k total hexes; at 32px-wide hexes that results in images over 12k pixels wide), and I'm trying to find an efficient way to bind these to OpenGL textures in libgdx. I was thinking of using multiple FBOs and breaking the grid up into tiles as necessary, but I'm not sure how to ensure continuity between the FBOs. I can't start with one massive FBO, because an FBO is backed by a texture, so it would fail when loading into video memory. I can't use a standard bitmap on the heap, because I need the drawing functionality of an OpenGL surface.
So what I was thinking was that I would need to overdraw on the FBOs and somehow pick up on the next FBO exactly where the previous one left off. However, I'm not sure how to go about this. I'm drawing the hex grid with a series of hexagonal meshes, FYI.
Of course, there's probably some other much simpler and more efficient way to do this that I'm not even thinking of, which is why I pose this question to you fine people!
You have to draw it in pieces. You need to be able to draw your hex grid from an arbitrary position. This means being able to compute which hexes to draw based on a rectangle overlaid over the map. This isn't a hard problem, and I wouldn't worry too much about drawing extra stuff off-screen. You should master this ability to view the hexmap from any position before moving on.
Once you've mastered that, it's really simple.
Draw the top-left corner and store the pixel data. Then move the area you're drawing over exactly one image width. Draw and store that. Move the area over one image width. Draw and store it. Keep doing that until you've covered the entire width.
Move down one image height and repeat the process. Once you've run out of width and height, you're done. Save your mega-huge image.
You don't need FBOs for this; you could draw to the screen if you wanted. Though if you want maximum performance, I would suggest using FBOs, double-buffering them, and using glReadPixels through a pixel buffer object. That should cut down a lot on latency.
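For the tile loop itself, a sketch with a libgdx FrameBuffer might look like this (the tile size and the drawHexes/savePiece helpers are assumptions):
int tileW = 1024, tileH = 1024;
OrthographicCamera camera = new OrthographicCamera(tileW, tileH); // views exactly one tile
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, tileW, tileH, false);
for (int y = 0; y < mapHeightPx; y += tileH) {
    for (int x = 0; x < mapWidthPx; x += tileW) {
        camera.position.set(x + tileW / 2f, y + tileH / 2f, 0); // step one tile at a time
        camera.update();
        fbo.begin();
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        drawHexes(camera); // hypothetical: draw the hex meshes visible from this position
        Pixmap piece = ScreenUtils.getFrameBufferPixmap(0, 0, tileW, tileH);
        fbo.end();
        savePiece(piece, x, y); // hypothetical: write or stitch this tile of the mega-image
        piece.dispose();
    }
}
fbo.dispose();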
