Java OpenGL GL_TEXTURE_2D darkens other colours - java

I'm trying to put textures into my Java OpenGL scene, but when I do, the colours of other things get skewed, as if the colours are being blended incorrectly. I am using LWJGL for OpenGL and Slick for loading the textures. When I leave the GL11.glEnable(GL11.GL_TEXTURE_2D); call uncommented, the colours are darkened, but when I comment out that one line the colours are correct; however, I then obviously have no textures.
I have put my code here http://codepaste.net/26bguu
The line in question is line 63
One workaround I have found is enabling texturing just before I draw the textures, then disabling it again immediately after. However, I feel this should be unnecessary. Below are some screenshots showing what I mean, the only difference being that one line commented vs. uncommented.

You do actually need to enable and disable GL_TEXTURE_2D as required.
If GL_TEXTURE_2D is enabled, the GL will (with the default GL_MODULATE texture environment) multiply the vertex colours you supply by the colour sampled from the bound texture at the given texture coordinates. If you don't pass it texture coordinates, every vertex uses the same current texture coordinate, so everything gets tinted by whatever single texel that happens to hit - which would explain the darkening in the second screenshot you have posted.
It's not uncommon to have to make 20+ OpenGL calls to prepare for drawing each "object". This is why OpenGL programmers spend large amounts of time combining large numbers of triangles into single buffers that can be drawn at once with a single draw call - it greatly improves performance.
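As a rough illustration (not from the original post), here is how a textured draw and a flat-coloured draw might be bracketed with LWJGL's immediate-mode API; the texture object is assumed to be one loaded through Slick, as in the question, and the coordinates are arbitrary:

import org.lwjgl.opengl.GL11;

// Textured quad: texturing on, texture coordinates supplied per vertex.
GL11.glEnable(GL11.GL_TEXTURE_2D);
texture.bind(); // a Slick-loaded Texture, as in the question (assumed)
GL11.glColor3f(1f, 1f, 1f); // white, so GL_MODULATE leaves the texel colours untouched
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0f, 0f); GL11.glVertex2f(0f, 0f);
GL11.glTexCoord2f(1f, 0f); GL11.glVertex2f(64f, 0f);
GL11.glTexCoord2f(1f, 1f); GL11.glVertex2f(64f, 64f);
GL11.glTexCoord2f(0f, 1f); GL11.glVertex2f(0f, 64f);
GL11.glEnd();
GL11.glDisable(GL11.GL_TEXTURE_2D);

// Flat-coloured quad: texturing off, so the vertex colour is used as-is.
GL11.glColor3f(1f, 0f, 0f);
GL11.glBegin(GL11.GL_QUADS);
GL11.glVertex2f(100f, 0f);
GL11.glVertex2f(164f, 0f);
GL11.glVertex2f(164f, 64f);
GL11.glVertex2f(100f, 64f);
GL11.glEnd();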

Related

Anti Aliasing based on colors (not textures)

I was searching for an anti-aliasing algorithm for my OpenGL program (so I searched for a good shader). The thing is, all the shaders want to do something with textures, but I don't use textures, only colors. I looked at FXAA most of the time, so is there an anti-aliasing algorithm that works with just colors? The game this is for looks blocky like Minecraft, but it only uses colors and cubes of different sizes.
I hope someone can help me.
Greetings
Anti-aliasing has nothing specifically to do with either textures or colors.
Proper anti-aliasing is about sample rate, which while highly technical can be thought of as doing extra work to make a better educated guess at some value that cannot be directly looked up (e.g. a pixel that is only partially covered by a triangle).
Multisample Anti-Aliasing (MSAA) will work nicely for you. It only anti-aliases polygon edges and does nothing for texture aliasing in the interior of a polygon, but since you are not using textures you do not need to worry about aliasing inside a polygon.
Incidentally, FXAA is not proper anti-aliasing. FXAA is basically a shader-based edge detection and blur image processing filter. FXAA will blur any part of the scene with sharp edges, whether it is a polygon edge or an edge due to a mapped texture. It indiscriminately blurs anything it thinks is an aliased edge and gets this wrong often, resulting in blurry textures.
To use MSAA, you need:
(1) A framebuffer with at least 2 samples
(2) Multisample rasterization enabled
Satisfying (1) is going to depend on what you used to create your window (in this case LWJGL). Most frameworks let you select the sample count as one of the parameters at the time of creation.
Framebuffer Objects can also be used to do this without messing with your window's parameters, but they are more complicated than need be for this discussion.
(2) is as simple as calling glEnable(GL_MULTISAMPLE).
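Putting both steps together in LWJGL (the framework mentioned above) might look roughly like this; the 4-sample count and window size are arbitrary, and the exact PixelFormat setup can differ between LWJGL versions, so treat it as a sketch rather than a drop-in:

import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL13;
import org.lwjgl.opengl.PixelFormat;

public class MsaaSketch {
    public static void main(String[] args) throws Exception {
        Display.setDisplayMode(new DisplayMode(800, 600));
        // (1) ask for a multisampled default framebuffer (4 samples here)
        Display.create(new PixelFormat().withSamples(4));

        // (2) turn on multisample rasterization
        GL11.glEnable(GL13.GL_MULTISAMPLE);

        while (!Display.isCloseRequested()) {
            GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
            // ... draw the coloured cubes here ...
            Display.update();
        }
        Display.destroy();
    }
}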

Transparency issue with opengl/lwjgl

I am attempting to draw two textures containing transparency into 3D space. When they do not overlap they work fine:
However, when one texture overlaps the other, the transparency means that you can see through the one behind:
I use GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA when initialising blending.
You need to either depth sort or use alpha testing:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);
which will only draw pixels that have an alpha value greater than 0. However, this doesn't help when you need to blend partially transparent pixels. Andon's solution is the one that I use, although I work in 2D and I have to have transparency for smoke effects.
One possibility is to use the discard keyword in the fragment shader, since the fixed-function alpha test is no longer available in modern (core-profile) OpenGL. This has the disadvantage of leaving the edges of objects aliased.
Another possibility is to depth-sort the objects and draw them back to front. The obvious disadvantage is having to perform the transformations and the sorting in the first place. This can sometimes be avoided if the order of the objects can be determined statically (when the camera doesn't change much). Another disadvantage is that shaded pixels may later be overdrawn by something else, which throws away performance.
Finally, you can use alpha-to-coverage, where the anti-aliasing (multisampling) hardware is employed to take care of the transparency. This doesn't require sorting and makes the edges of the objects smooth. The disadvantage is that it needs a multisampled rendering context and may not be available everywhere.
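To make the two fixed-function options concrete, here is a minimal LWJGL sketch (not from the original answers); the alpha-to-coverage path assumes the framebuffer was created with multisampling:

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL13;

// Option A: fixed-function alpha test - fragments with alpha <= 0 are discarded,
// so fully transparent texels no longer write to the depth buffer.
GL11.glEnable(GL11.GL_ALPHA_TEST);
GL11.glAlphaFunc(GL11.GL_GREATER, 0.0f);

// Option B: alpha-to-coverage - each fragment's alpha is converted into a
// multisample coverage mask; requires a multisampled framebuffer.
GL11.glEnable(GL13.GL_SAMPLE_ALPHA_TO_COVERAGE);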

LibGDX FrameBuffer scaling

I'm working on a painting application using the LibGDX framework, and I am using their FrameBuffer class to merge what the user draws onto a solid texture, which is what they see as their drawing. That aspect is working just fine; however, the area the user can draw on isn't always going to be the same size, and I am having trouble getting it to display properly at resolutions other than that of the entire window.
I have tested this very extensively, and what seems to be happening is the FrameBuffer is creating the texture at the same resolution as the window itself, and then simply stretching or shrinking it to fit the actual area it is meant to be in, which is a very unpleasant effect for any drawing larger or smaller than the window.
I have verified, at every single step of my process, that I am never doing any of this stretching myself, and that everything is being drawn how and where it should be, with the right dimensions and locations. I've also looked into the FrameBuffer class itself to try to find the answer, but strangely found nothing there either; yet, given all of the testing I've done, it seems to be the only possible place for this issue to arise.
I am simply completely out of ideas, having spent a considerable amount of time trying to troubleshoot this problem.
Thank you so much Synthetik for finding the core issue. Here is the proper way to fix the situation you allude to. (I think!)
The way to make the frame buffer produce a correctly scaled texture with the right aspect ratio, regardless of the actual device window size, is to set the projection matrix to the size required, like so:
SpriteBatch batch = new SpriteBatch();
Matrix4 matrix = new Matrix4();
matrix.setToOrtho2D(0, 0, 480, 800); // here is the actual size you want
batch.setProjectionMatrix(matrix);
I believe I've solved my problem, and I will give a very brief overview of what the problem is.
Basically, the cause of this issue lies within the SpriteBatch class. Specifically (assuming I am not using an outdated version of the class), the problem lies on line 181, where the projection matrix is set. The line:
projectionMatrix.setToOrtho2D(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
This causes everything that is drawn to, essentially, be drawn at the scale of the window/screen and then stretched to fit where it needs to go afterwards. I am not sure if there is a more "proper" way to handle this, but I simply created another method within the SpriteBatch class that allows me to call this method again with my own dimensions, and I call that when necessary. Note that it isn't required on every draw or anything like that - only once, or any time the dimensions may change.
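For completeness, a minimal LibGDX sketch of the whole pattern, with hypothetical dimensions (480x800) standing in for the real drawing-area size; the key point is that the projection matrix passed to the SpriteBatch matches the FrameBuffer's size, not the window's:

import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;
import com.badlogic.gdx.math.Matrix4;

// Hypothetical drawing-area size; substitute the real canvas dimensions.
int canvasWidth = 480, canvasHeight = 800;

FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, canvasWidth, canvasHeight, false);
SpriteBatch batch = new SpriteBatch();

// Project onto the FBO's own dimensions rather than the window's,
// so nothing is stretched when the resulting texture is drawn later.
Matrix4 fboProjection = new Matrix4();
fboProjection.setToOrtho2D(0, 0, canvasWidth, canvasHeight);

fbo.begin();
batch.setProjectionMatrix(fboProjection);
batch.begin();
// ... draw the user's strokes here ...
batch.end();
fbo.end();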

Can I use OpenGL-ES glBlendFunc to influence the blending to take the target into account?

the title must seem somewhat cryptic but I could not really explain there what I want to do, so I drew a picture to visualize my problem:
The black parts are transparent (aka alpha = 0). I have the blue object (left) in the framebuffer and want to render the white bitmap (middle) onto it, so that it looks like the merged bitmap (right).
The problem is that if I use the standard glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); the whole white object is displayed. I don't want it to completely overlap the stuff in the framebuffer (blue); it should only be visible on the parts where the framebuffer content has an alpha value > 0 (is visible). And it should also still take its own alpha values into account (notice the hole in the white object).
Is something like this possible with glBlendFunc, or do I have to write a shader for this?
PS: I looked at the documentation of glBlendFunc at http://www.khronos.org/opengles/documentation/opengles1_0/html/glBlendFunc.html but I don't really get anywhere with it.
PPS: I am using OpenGL-ES 2.0 on Android with C++, but I don't think the language/platform matters all that much.
I don't think it will be possible to do this purely with the blend function. You want the source pixel to be multiplied by both the source and destination alpha, while the blend function only allows one or the other.
However, the result you want may be possible with some use of the stencil buffer. I'm not an expert in it, but I think you can set the stencil op to increment while drawing the background image, and then, when you draw the bitmap, set the stencil test to reject fragments where stencil == 0 (with blending still enabled to get the transparent area of the bitmap correct). You'll have to review the API for glStencilOp and glStencilFunc to figure out the exact right arguments to use.
It might also be possible with some combination of glBlendFunc and glAlphaFunc, but it would depend on the order in which they are evaluated, so I'm not positive.
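A rough sketch of the stencil-based idea above, written against Android's GLES20 bindings (the question uses C++, but the calls map one-to-one). The draw calls are hypothetical placeholders, and it assumes the background's fully transparent fragments are discarded in its fragment shader, otherwise they would mark the stencil too:

import android.opengl.GLES20;

// Pass 1: draw the background (blue) object and write 1 into the stencil
// wherever its visible fragments land.
GLES20.glEnable(GLES20.GL_STENCIL_TEST);
GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
// drawBackground();   // hypothetical draw call

// Pass 2: draw the white bitmap only where the stencil is 1, with normal
// alpha blending still handling the bitmap's own transparent hole.
GLES20.glStencilFunc(GLES20.GL_EQUAL, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_KEEP);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
// drawOverlay();      // hypothetical draw call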

Render string to texture in Android and OpenGL ES

I've googled around everywhere, but cannot find much for rendering strings to textures and then displaying that texture on a quad on the screen. Can someone provide a run-down on the process or provide good resources that describe how? Is rendering strings to textures even the best method for displaying text in an Android OpenGL ES app?
EDIT:
Okay, so LabelMaker interferes with alpha blending: the texture (created from a PNG with a transparent background) now has a solid black background rather than a transparent one. If I comment out all the LabelMaker-related code, it works fine.
UPDATE:
Never mind. I took a look at the code and found that LabelMaker was disabling blending after drawing the labels.
I think this is what you are looking for.
If you don't want to use GL extensions you need to create the font as a bitmap and then create a class to convert that string into quads that you can draw.
I use this method with the two fonts in my game. I have a class that takes a wide texture with all the letters evenly spaced, plus a string that matches the image, and it looks up each letter to work out how far into the bitmap its glyph sits.
Your other option is to render your text to an offscreen bitmap using Android, and then bind that bitmap as a texture. This lets you use Android's built-in font processing and rendering to create texture-based fonts.
I have not used the second method yet, but I have rendered Google Maps to an offscreen canvas and then bound the bitmap as a GL texture, so doing it for text should be much simpler.
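A minimal sketch of that second approach (render text into an offscreen Bitmap with a Canvas, then upload it with GLUtils.texImage2D); the bitmap size, text, and filtering choices are illustrative, not taken from the answer:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Rasterize the string into an offscreen Bitmap using Android's text engine.
Bitmap bitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
bitmap.eraseColor(Color.TRANSPARENT);              // keep the background transparent
Canvas canvas = new Canvas(bitmap);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setTextSize(32f);
paint.setColor(Color.WHITE);
canvas.drawText("Hello, OpenGL", 10f, 40f, paint);

// Upload the Bitmap as a 2D texture; draw it on a quad with blending enabled.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();                                   // the GL copy is all we need now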
If you plan to have string data that changes inside a GL loop, you also need to worry about StringBuilder, because it causes GC pauses and performance issues. I hardcode all my strings so nothing allocates, and all my rapidly changing numbers are drawn through a second draw function dedicated to drawing changing numbers without using StringBuilder.
