I am attempting to draw two textures that contain transparency into 3D space. When they do not overlap, they work fine.
However, when one texture overlaps the other, the transparency means that you can see through the one behind.
I use GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA when initialising blending.
You need to either depth sort or use alpha testing:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);
which will only draw pixels whose alpha value is greater than 0.0f. However, this doesn't help when you need to blend partially transparent pixels. Andon's solution is the one that I use, although I work in 2D and I need transparency for smoke effects.
One possibility is to use the discard keyword in the fragment shader, since the fixed-function alpha test is no longer available in modern OpenGL. This has the disadvantage of leaving aliased object edges.
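As a minimal sketch, such a shader might look like this (written as a Java source string, as you might do with LWJGL; the 0.1 cutoff is an arbitrary choice):

String fragmentSrc =
        "#version 330 core\n" +
        "uniform sampler2D tex;\n" +
        "in vec2 uv;\n" +
        "out vec4 fragColor;\n" +
        "void main() {\n" +
        "    vec4 c = texture(tex, uv);\n" +
        "    if (c.a < 0.1) discard; // skip color and depth writes for see-through pixels\n" +
        "    fragColor = c;\n" +
        "}\n";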
Another possibility is to depth-sort the objects and draw back to front. The obvious disadvantage is having to perform the transformations and the sorting in the first place. This can sometimes be avoided if the order of the objects can be determined statically (when the camera doesn't change much). Another disadvantage is overdraw: pixels you already shaded get overwritten by closer geometry, throwing away performance.
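A minimal sketch of the sorting step, assuming some Renderable type with a distanceToCamera method (both names are illustrative, not a real API):

import java.util.Comparator;
import java.util.List;

// Sort so the farthest object is drawn first (back to front).
void sortBackToFront(List<Renderable> transparents) {
    transparents.sort(
            Comparator.comparingDouble(Renderable::distanceToCamera).reversed());
}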
Finally, you can use alpha-to-coverage, where the anti-aliasing hardware is employed to take care of the transparency. This doesn't require sorting and makes the edges of the objects smooth. The disadvantage is that it is enabled per rendering context and may not be available everywhere.
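Enabling it is a couple of calls (shown here with LWJGL's binding classes); note it only has a visible effect when the framebuffer is multisampled:

GL11.glEnable(GL13.GL_MULTISAMPLE);
GL11.glEnable(GL13.GL_SAMPLE_ALPHA_TO_COVERAGE);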
I was searching for an anti-aliasing algorithm for my OpenGL program (so I searched for a good shader). The thing is, all the shaders want to do something with textures, but I don't use textures, only colors. I looked at FXAA most of the time, so is there an anti-aliasing algorithm that works with just colors? The game this is for looks blocky like Minecraft, but it only uses colors and cubes of different sizes.
I hope someone can help me.
Greetings
Anti-aliasing has nothing specifically to do with either textures or colors.
Proper anti-aliasing is about sample rate, which while highly technical can be thought of as doing extra work to make a better educated guess at some value that cannot be directly looked up (e.g. a pixel that is only partially covered by a triangle).
Multisample Anti-Aliasing (MSAA) will work nicely for you; it only anti-aliases polygon edges and does nothing for texture aliasing on the interior of a polygon. Since you are not using textures, you do not need to worry about aliasing inside a polygon.
Incidentally, FXAA is not proper anti-aliasing. FXAA is basically a shader-based edge detection and blur image processing filter. FXAA will blur any part of the scene with sharp edges, whether it is a polygon edge or an edge due to a mapped texture. It indiscriminately blurs anything it thinks is an aliased edge and gets this wrong often, resulting in blurry textures.
To use MSAA, you need:
1. A framebuffer with at least 2 samples
2. Multisample rasterization enabled
Satisfying (1) is going to depend on what you used to create your window (in this case LWJGL). Most frameworks let you select the sample count as one of the parameters at the time of creation.
Framebuffer Objects can also be used to do this without messing with your window's parameters, but they are more complicated than need be for this discussion.
(2) is as simple as calling glEnable(GL_MULTISAMPLE).
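Putting both together, a hedged LWJGL 2 sketch for your init code (the sample count of 4 is an arbitrary choice; other frameworks have an equivalent window-creation parameter):

Display.setDisplayMode(new DisplayMode(800, 600));
Display.create(new PixelFormat().withSamples(4)); // (1) multisampled framebuffer
GL11.glEnable(GL13.GL_MULTISAMPLE);               // (2) multisample rasterization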
I am currently using pixels as units for placing objects within my world, however this can get tedious because I only ever place objects every 16 pixels. For example, I would like to be able to place an object at position 2 and have the object rendered at the pixel position 32.
I was wondering if the best way to do this is simply having a pixel-to-unit variable and multiplying/dividing based on what I need to be doing with pixels or if there is a better way.
You shouldn't use a constant pixel-to-unit conversion, as this would lead to different behavior on different screen sizes/resolutions.
Don't forget about different aspect ratios either; you need to take care of them as well.
The way you should solve this problem is using Viewports.
Some of them support virtual screen sizes, which are what you are looking for. You can calculate everything in your virtual space, and Libgdx converts those things to pixels for you.
They also take care of different aspect ratios in different ways.
The FitViewport, for example, shows black borders if the aspect ratio is not the same as the virtual one.
The StretchViewport, instead of showing black borders, stretches the image to fit the screen.
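A minimal Libgdx sketch (the 320x180 virtual size is an arbitrary choice):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;

// All game logic and placement happens in a 320x180 virtual space;
// the viewport maps it to whatever the real screen resolution is.
public class MyGame extends ApplicationAdapter {
    private OrthographicCamera camera;
    private FitViewport viewport; // swap in StretchViewport to avoid borders

    @Override
    public void create() {
        camera = new OrthographicCamera();
        viewport = new FitViewport(320, 180, camera);
    }

    @Override
    public void resize(int width, int height) {
        viewport.update(width, height, true); // re-fit and center the camera
    }
}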
If I draw something with coordinates like -80 and -90, will it affect performance the same way as if it was actually drawn inside the screen?
Is it actually worth checking if the final image will appear on screen?
(and not drawing it if it won't)
If I draw something with coordinates like -80 and -90, will it affect performance the same way as if it was actually drawn inside the screen?
Somewhat, but not nearly as much as if it is inside the screen.
Is it actually worth checking if the final image will appear on screen? (and not drawing it if it won't)
It's practically never worth implementing your own culling/clipping in a library where drawing out of bounds isn't an error or access violation. The library already has to make that check to avoid writing to memory out of bounds, and it is generally wise to bet that the library's way of checking this is smart and fast.
So if you were to add your own basic check on top, the regular on-screen drawing would now perform two such checks (yours plus whatever goes on under the hood), and for off-screen cases your check would likely be slower, or at least no better, than the library's.
Note the emphasis on basic culling/clipping here. By basic, I mean checking each shape you draw on a per-shape basis; doing that is more likely to damage performance than improve it.
Acceleration Structures and Clipping/Culling in Bulk
Yet there are cases where you might have a data structure that can cull thousands of triangles at once with a single bounding box check against the frustum; in 3D, for example, structures like bounding volume hierarchies. Games use these kinds of data structures to massively reduce the number of draw requests per frame with very few checks, and there you can gain a potentially massive performance benefit. A more basic version of this is to simply check whether the object/mesh containing the triangles has a bounding box inside the screen, replacing potentially thousands of per-triangle checks with a single bounding box test.
In 2D with clipping, you might be able to use something like a quadtree or a fixed grid to selectively draw only what's on the screen (and also accelerate collision detection or click detection, for example). There you might actually get a performance boost if you can eliminate many superfluous drawing calls with a single check. But again, that's using a data structure that eliminates a boatload of unnecessary drawing calls with a single check. These are spatial partitioning structures whose sole point is to avoid checking things on a per-shape basis.
For a more basic 2D example: if you have, say, a 2D "widget" whose rendering involves drawing dozens of different shapes to the screen, you might be able to squeeze out a performance gain by skipping those dozens of draw requests with a single check to see whether the rectangle encompassing the entire widget is on screen. Again, there you're doing one check to eliminate many drawing calls. You won't get a performance gain on a level playing field where you're doing that check on a per-shape basis, but if you can turn many checks into a single check, then you have a chance.
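A hedged Java2D sketch of that widget-level check (Widget, getBounds, and draw are assumed names, not a real API):

import java.awt.Graphics2D;
import java.awt.Rectangle;
import java.util.List;

// One rectangle intersection test per widget replaces dozens of per-shape checks.
void drawVisibleWidgets(Graphics2D g, List<Widget> widgets, Rectangle screen) {
    for (Widget w : widgets) {
        if (w.getBounds().intersects(screen)) {
            w.draw(g); // internally issues the dozens of shape draws
        }
    }
}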
According to the Graphics implementation of the most common draw/fill methods (e.g. drawRect; see the source of Graphics on grepcode.com), they start by checking whether the width and height are greater than zero before doing any further work, so adding your own x, y < 0 check means doing the same number of operations in the worst case.
Keep in mind that a rectangle starting at (-80, -90), as you said, but with a width and height of, say, 200 will still be partially displayed on screen.
Yes, it will still affect performance, as the object still exists within the program; it's just not visible on the screen.
I am developing an isometric game in Java2D. Note that this means I do not have direct access to hardware pixel shaders (real-time software pixel shaders aren't practical, though I can do a single pass on every entity texture without a noticeable performance hit).
I know the typical method would be to somehow encode the depth of the individual pixels into a depth buffer and look that up. However, I don't know how I can do that efficiently in Java2D. How would I store the depth buffer? How would I filter out the alpha in an image? Etc.
Up until now I have just been reversing the projection matrix I use to calculate the tile coordinates. However, that doesn't work well when you have entities that render outside of their tiles' bounds.
Another method I considered was using a color map; however, I have the same problems with this as I do with the depth buffer (and if I can get the depth buffer working, I'd much rather use that).
I've resolved this quite nicely. The solution is actually very simple, just unconventional.
The graphics are depth-sorted via a TreeMap and then rendered to the screen. One can simply traverse this TreeMap in reverse (keeping it around until the next render cycle) to translate the cursor location to the proper image it falls over, by testing the pixels in reverse render order and checking whether they are transparent.
The solution is in the open-source project, under the io.github.jevaengine.world.World class, pick method. https://github.com/JeremyWildsmith/JevaEngine/blob/master/jevaengine/src/main/java/io/github/jevaengine/world/World.java
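For reference, a hedged sketch of the idea (names like renderQueue, Sprite, and getScreenBounds are illustrative, not JevaEngine's actual API):

import java.util.TreeMap;

// 'renderQueue' is the depth-keyed TreeMap kept from the last render pass.
Sprite pick(TreeMap<Float, Sprite> renderQueue, int cursorX, int cursorY) {
    for (Sprite s : renderQueue.descendingMap().values()) { // reverse render order
        if (s.getScreenBounds().contains(cursorX, cursorY)) {
            int argb = s.getImage().getRGB(cursorX - s.getScreenX(),
                                           cursorY - s.getScreenY());
            if ((argb >>> 24) != 0) { // pixel is not fully transparent
                return s;             // topmost visible sprite under the cursor
            }
        }
    }
    return null; // cursor is over empty space
}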
I am writing a game on Android, and it is coming along well enough. I am trying to keep everything as efficient as possible, so I am storing as much as I can in Vertex Buffer Objects to avoid unnecessary CPU overhead. However the simple act of drawing lots of unrelated primitives, or even a varying length string of sprites efficiently (such as drawing text to the screen) is escaping me.
The purpose of these primitives is menus and buttons, as well as text.
For drawing the menus, I could just make a vertex array for each element (menu background, buttons, etc.), but since they are all just quads, this feels very inefficient. I could also create a sort of drawQuad() function that transparently loads a single saved vertex array with data for position/size/color/texture/whatever. However, reloading each element of the array with new coordinates and other data each time, then copying it into the FloatBuffer (for C++ folks: a special step you have to do in Java to pass the data to GL) so I can resend it to the GPU, also feels inefficient, though I don't know how else I could do it. (One efficiency boost I can see is setting the quad coordinates to a unit square and then using uniforms to scale it, but that seems unscalable.)
For text it is even worse, since I don't know how long the text will be and don't want to have to create larger buffers for larger text (causing the GC to fire at random later). The alternative is to draw each letter with an independent draw command, but this also seems very inefficient for even a hundred letters on the screen (since I read that you should have as few draw commands as possible).
It is also possible that I am looking way too deep into the necessary optimization of OpenGL, but I don't want to back myself into a corner with a terrible design early on.
You should try looking into the idea of interleaving data for your glDrawArrays calls.
Granted, this link is for iPhone, but there is a nice graphic at the bottom of the page that details this concept: http://iphonedevelopment.blogspot.com/2009/06/opengl-es-from-ground-up-part-8.html
I'm going to assume for drawing your characters that you are specifying some vertex coords and some texture coords into some sort of font bitmap to pick the correct character.
So you could envision your FloatBuffer as looking like
[vertex 1][texcoord 1][vertex 2][texcoord 2][vertex 3][texcoord 3]
[vertex 2][texcoord 2][vertex 3][texcoord 3][vertex 4][texcoord 4]
The above would represent a single character in your sentence if you're using GL_TRIANGLES, and you could expand on this idea so that vertices 5-8 represent the second character, and so on. You could then draw all of your text on screen with a single glDrawArrays call. You might be worried about the redundant data in your FloatBuffer, but the savings are huge: rendering a teapot with 1200 vertices and this redundant data in my buffer, I got a very visible speed increase over calling glDrawArrays for each individual triangle, maybe something like 10 times better.
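A hedged Android sketch of building one such interleaved buffer for a whole string (the fixed-size font cell and the glyphU/glyphV atlas lookups are assumptions about your font bitmap):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// 6 vertices per character (two triangles), 4 floats per vertex: x, y, u, v.
FloatBuffer buildTextBuffer(String text, float x, float y,
                            float charW, float charH, float cellU, float cellV) {
    FloatBuffer buf = ByteBuffer.allocateDirect(text.length() * 6 * 4 * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    for (int i = 0; i < text.length(); i++) {
        float x0 = x + i * charW, x1 = x0 + charW, y1 = y + charH;
        float u0 = glyphU(text.charAt(i)), v0 = glyphV(text.charAt(i));
        float u1 = u0 + cellU, v1 = v0 + cellV;
        buf.put(x0).put(y).put(u0).put(v0);   // triangle 1
        buf.put(x1).put(y).put(u1).put(v0);
        buf.put(x0).put(y1).put(u0).put(v1);
        buf.put(x1).put(y).put(u1).put(v0);   // triangle 2
        buf.put(x1).put(y1).put(u1).put(v1);
        buf.put(x0).put(y1).put(u0).put(v1);
    }
    buf.position(0);
    return buf; // draw with glDrawArrays(GL_TRIANGLES, 0, text.length() * 6)
}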
I have a small demo on sourceforge where I use data interleaving to render the teapot I mentioned earlier.
It's the ShaderProgramTutorial.rar. https://sourceforge.net/projects/androidopengles/files/ShaderProgram/
Look in teapot.java in the onDrawFrame function to see it.
On a side note you might find some of the other things on that sourceforge page helpful in your future Android OpenGL ES 2.0 fun!