Whenever I draw a texture that has alpha around the edges (it is anti-aliased by photoshop), these edges become dark. I have endlessly messed around with texture filters and blend modes but have had no success.
Here is what I mean:
minFilter: Linear magFilter: Linear
minFilter: MipMapLinearNearest magFilter: Linear
minFilter: MipMapLinearLinear magFilter: Linear
As you can see, changing the filter on the libGDX Texture Packer makes a big difference with how things look, but alphas are still dark.
I have tried manually setting the texture filter in libgdx with:
texture.setFilter(minFilter, magFilter);
But that does not work.
I have read that downscaling with a linear filter causes the alpha pixels to default to black? If this is the case, how can I avoid it?
I have also tried changing the blend mode: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) makes no difference, and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) removes the alpha altogether, so that doesn't work.
I do NOT want to set my minFilter to Nearest because it makes things look terribly pixellated. I have tried every other combination of texture filters but everything results in the same black/dark edges/outline effect.
I have read that downscaling with a linear filter causes the alpha pixels to default to black?
This is not necessarily true; it depends on what colour Photoshop decided to put in the fully transparent pixels. Apparently this is black in your case.
The problem occurs because the GPU is interpolating between two neighbouring pixels, one of which is fully transparent (with all colour channels set to zero as well). Let's say that the other pixel is bright red:
(255, 0, 0, 255) // Bright red, fully opaque
( 0, 0, 0, 0) // Black, fully transparent
Interpolating with a 50/50 ratio gives:
(128, 0, 0, 128)
This is a half-opaque dark red pixel, which explains the dark fringes you're seeing.
There are two possible solutions.
1. Add bleeds to transparent regions
Make sure that the fully transparent pixels have the right colour assigned to them; essentially "bleed" the colour from the nearest non-transparent pixel into adjacent fully transparent pixels. I'm not sure Photoshop can do this, but the libGDX TexturePacker can; see the bleed and bleedIterations settings. You need to be careful to set bleedIterations high enough, and add enough padding for the bleed to expand into, for your particular level of downscaling.
Now the example comes out like this:
(255, 0, 0, 255)
(255, 0, 0, 0) // Red bled into the transparent region
Interpolation now gives a bright red transparent pixel, as desired:
(255, 0, 0, 128)
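If you pack from code rather than the GUI, a minimal sketch of the relevant settings might look like this (the field names follow libGDX's TexturePacker.Settings; the concrete values and directory names are just assumptions to tune for your level of downscaling):
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.bleed = true;            // bleed colour from opaque pixels into fully transparent neighbours
settings.bleedIterations = 8;     // assumed value; raise it for heavier downscaling
settings.paddingX = 4;            // padding gives the bleed room to expand into
settings.paddingY = 4;
TexturePacker.process(settings, "raw-assets", "atlas", "game"); // hypothetical input/output paths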
2. Use premultiplied alpha
This is less finicky to get right, but it helps to know exactly what you're doing. Again TexturePacker has you covered, with the premultipliedAlpha setting. This corresponds to the OpenGL blend function glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
The numbers in the example don't change; this is still the pixel that comes out:
(128, 0, 0, 128)
However, this is no longer interpreted as "half-transparent dark red", but rather as "add some red, and remove some background". More generally, with premultiplied alpha, the colour channels are not "which colour to blend in" but "how much colour to add".
Note that a pixel like (255, 0, 0, 0) can no longer exist in the source texture: because the alpha is premultiplied, an alpha of zero automatically means that all colour channels must be zero as well. (If you want to get really fancy, you can even use such pixels to apply additive blending in the same render pass and the same texture as regular blending!)
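In libGDX this could look roughly like the following, assuming you draw the packed regions with a SpriteBatch (batch, region, x and y are placeholders for your own objects):
// with premultipliedAlpha enabled in TexturePacker, use the matching blend function
batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);
batch.begin();
batch.draw(region, x, y); // draw your atlas regions as usual
batch.end();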
Further reading on premultiplied alpha:
Shawn Hargreaves' blog
Alpha Blending: To Pre or Not To Pre by NVIDIA
I was able to resolve this issue by changing the filtering in Texture Packer Pro from Linear to MipMap.
Related
I'm using Slick2D to render an Animation, and I'm using anti-aliasing.
Without AA, the movement is choppy, as one would expect.
I turn on AA in my game's preRenderState with:
g.setAntiAlias(true);
This leads to:
Note the diagonal line in the center, presumably caused by the two triangles that render the rectangle not meeting precisely. How can I remove that while still using AA to smooth my movement? I found http://www.java-gaming.org/index.php?topic=27313.0 but the solution there was "remove AA", which I am reluctant to do.
This looks suspiciously like artifacts due to GL_POLYGON_SMOOTH.
When you use that (deprecated) functionality, you are expected to draw all opaque geometry with blending enabled and the blend function: GL_SRC_ALPHA_SATURATE, GL_ONE. Failure to do so produces white outlines on most contours (aliased edges, basically).
Chapter 6 of the OpenGL Redbook states:
Now you need to blend overlapping edges appropriately. First, turn off the depth buffer so that you have control over how overlapping pixels are drawn. Then set the blending factors to GL_SRC_ALPHA_SATURATE (source) and GL_ONE (destination). With this specialized blending function, the final color is the sum of the destination color and the scaled source color; the scale factor is the smaller of either the incoming source alpha value or one minus the destination alpha value.
This means that for a pixel with a large alpha value, successive incoming pixels have little effect on the final color because one minus the destination alpha is almost zero. With this method, a pixel on the edge of a polygon might be blended eventually with the colors from another polygon that's drawn later. Finally, you need to sort all the polygons in your scene so that they're ordered from front to back before drawing them.
That is a lot of work that almost nobody does correctly when they try to enable GL_POLYGON_SMOOTH.
Every time you enable anti-aliasing in Slick2D, it should look like this:
g.setAntiAlias(true); // Slick2D's AA uses polygon smoothing (see the previous answer)
GL11.glBlendFunc(GL11.GL_SRC_ALPHA_SATURATE, GL11.GL_ONE); // the blend function GL_POLYGON_SMOOTH expects
Basically the same as Andon's answer, but with code that should solve the problem.
Note that with this, whatever you draw first will be on top, rather than what you draw last, so you may need to reverse your rendering order.
I am writing a voxel engine, and at the moment I am working on the chunk rendering system, but I have a problem.
It seems that the textures are repeated on the quads.
There is this green line at the bottom of the grass blocks and I don't know why.
This is the OpenGL-Render-Code:
Texture texture = TextureManager.getTexture(block.getTextureNameForSide(Direction.UP));
texture.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2d(0, 0); GL11.glVertex3f(0, 1, 0);
GL11.glTexCoord2d(1, 0); GL11.glVertex3f(0, 1, 1);
GL11.glTexCoord2d(1, 1); GL11.glVertex3f(1, 1, 1);
GL11.glTexCoord2d(0, 1); GL11.glVertex3f(1, 1, 0);
GL11.glEnd();
And here is the OpenGL-Setup:
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glShadeModel(GL11.GL_SMOOTH);
GL11.glClearColor(0.1F, 0.4F, 0.6F, 0F);
GL11.glClearDepth(1F);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
GL11.glCullFace(GL11.GL_BACK);
GL11.glEnable(GL11.GL_CULL_FACE);
Make sure GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T are set to GL_CLAMP_TO_EDGE.
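With the LWJGL bindings used elsewhere in your code, setting the wrap state right after binding the texture should look something like this (GL_CLAMP_TO_EDGE lives in GL12 because it was introduced in OpenGL 1.2):
texture.bind();
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);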
genpfault's answer should do the trick for you; I just wanted to give you some insight into why you need this particular wrap state.
To be clear, the green line in your screenshot corresponds to the edges of one of your voxels?
It looks like you are using GL_LINEAR filtering (default) together with an inappropriate texture wrap state (e.g. GL_REPEAT or GL_CLAMP). I will explain why GL_CLAMP is a bad idea later.
You may think that the texture coordinates 0.0 and 1.0 are perfectly within the normalized texture coordinate range and therefore not subject to wrapping, but you would be wrong.
This particular combination of states will pick up texels from the other side of your texture at either extreme of the [0,1] texture coordinate range. The texture coordinate 1.0 is actually slightly beyond the center of the last texel in your texture, so when GL fetches the 4 nearest texels for linear filtering, it wraps around to the other side of the texture for at least 2 of them.
GL_CLAMP_TO_EDGE modifies this behavior: it clamps the texture coordinates to a range that is actually more restrictive than [0,1], so that no coordinate goes beyond the center of any edge texel in your texture. Linear filtering will not pick up texels from the other side of your texture with this wrap mode set. You could also (mostly) fix this by using GL_NEAREST filtering, but that would result in a lot of texture aliasing.
It is also possible that you are using GL_CLAMP, which, by the way, was removed in OpenGL 3.1. In older versions of GL it was designed to clamp the coordinates into the range [0,1], and then, if linear filtering tried to fetch a texel beyond the edge, it would use a special set of border texels rather than wrapping around. Border texels are no longer supported, and thus that wrap mode is gone.
The bottom line is do not use GL_CLAMP, it does not do what most people think. GL_CLAMP_TO_EDGE is almost always what you really want when you think of clamping textures.
EDIT:
genpfault brings up a good point; this would be a lot easier to understand with a diagram...
The following diagram illustrates the problem in 1 dimension:
http://i.msdn.microsoft.com/dynimg/IC83860.gif
I have a more thorough explanation of this diagram in an answer I wrote to a similar issue.
I was wondering if there is actually a difference between Graphics2D.setComposite(..., alpha) and Graphics2D.setColor(new Color(..., alpha)) when using transparency in Java. How do they affect each other when using a combination of both, e.g.
Graphics2D.setComposite(..., 0.5f)
Graphics2D.setColor(new Color(..., 0.5f))
It seems that the result is not a transparency of 0.5, but more like 0.25. Is there any recommendation to use one of the previously mentioned approaches?
Graphics2D.setComposite(..., 0.5f) will affect EVERYTHING that is painted to the Graphics context after you apply it. This includes primitives as well as images.
Graphics2D.setColor(new Color(..., 0.5f)) will only affect the painting of primitives; everything else will be painted fully opaque.
You are right that if you paint a color that is 50% transparent onto a Graphics context whose composite is at 50% alpha, the result is a color that appears to be 25% opaque. The two won't cancel each other out, but will compound.
Think of it this way:
At 100% composite opacity, the 50%-alpha color stays 50% opaque.
At 75% composite opacity, the color is reduced by 25%, making it 37.5% opaque.
At 50% composite opacity, the color is reduced by 50%, making it 25% opaque.
At 25% composite opacity, the color is reduced by 75%, making it 12.5% opaque.
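Here is a small sketch of how the two compound, assuming a SRC_OVER AlphaComposite and a throwaway rectangle as the shape (g is whatever Graphics you are painting to):
Graphics2D g2 = (Graphics2D) g;
// 50% alpha from the composite...
g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
// ...and 50% alpha from the color itself
g2.setColor(new Color(1f, 0f, 0f, 0.5f));
// the rectangle ends up at roughly 25% opacity (0.5 * 0.5)
g2.fillRect(10, 10, 100, 100);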
My question involves the drawImage method in Java Graphics2D (this is for a desktop app, not Android).
My BufferedImage that I'd like to draw contains high-resolution binary data; most pixels are black, but I have some sparse green pixels (the green pixels represent data points from an incoming raw data stream). The bitmap is quite large, larger than my typical panel size; I made it large so I could zoom in and out. The problem is that when I zoom out I lose some of my green pixels. As an example, if my image is 1000 pixels and my panel is 250 pixels, I'd keep only 1 out of every 4 pixels in each direction (X and Y). If I use nearest-neighbor interpolation when I scale, the pixels can just disappear to black. If I use something like bilinear interpolation, my green pixel will get recolored to somewhere between black and green.
I understand all this behavior, but my question is: is there any way to get the behavior I want, which is to make sure that if any pixel is non-black it is drawn at its full intensity? Perhaps something like a "max-hold" interpolation.
I realize I could probably do what I want by drawing shape primitive over a black background, and maybe this is what I'll have to do. But there is a reason I'm using bitmaps (has to do with the fact that I'm showing the data in a falling spectrogram-type display - and it does have a mode where all the pixels could be colored and not just black and green).
You could look at the implementation of drawImage and override it to get your desired behaviour; however, the core of the scaling probably uses hardware acceleration, so reimplementing it in Java would be really slow.
You could look into JOGL, but my impression is that, if your pixels are really sparse, just drawing the green pixels on a black background (or over an image) would be both easy to code and very fast.
You could even have a heuristic that switches from painting the dots to scaling the image if the number of dots starts getting too high.
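A rough sketch of that idea, where panelWidth, panelHeight, imageWidth, imageHeight and dataPoints are hypothetical names for your own panel size, data size and list of non-black data points:
Graphics2D g2 = (Graphics2D) g;
g2.setColor(Color.BLACK);
g2.fillRect(0, 0, panelWidth, panelHeight);     // black background
double sx = (double) panelWidth / imageWidth;   // zoom-out scale factors
double sy = (double) panelHeight / imageHeight;
g2.setColor(Color.GREEN);
for (Point p : dataPoints) {
    // each data point keeps at least one panel pixel, so nothing fades or disappears when zoomed out
    g2.fillRect((int) (p.x * sx), (int) (p.y * sy), 1, 1);
}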
I want to ignore all other colours. I just want to count the colours between bright yellow and white (bright yellow, light yellow, all the way to white) and then give a rating of how yellow a certain image is. Is that possible?
I have been playing with Bitmap.getPixel() but I can't figure out how to ignore other colours.
In this example, image 1 would be the one selected because it has more colour between bright yellow and white.
How can I detect yellowish colours only?
Well, I would focus on the HUE of the pixel, using this code:
// get HSV values for the pixel's colour (bitmap is your Bitmap instance)
float[] pixelColorHsv = new float[3];
Color.colorToHSV(bitmap.getPixel(x, y), pixelColorHsv);
What counts as yellow might be up to you, but a hue range could be between 49 and 72 (you can play with this tool to get an idea); then you can quantify where the pixel falls within this range, or how high or low the brightness and saturation are.
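A possible sketch of that idea (the hue band comes from the range above; the saturation and value cutoffs are assumptions you would need to tune):
int matches = 0;
float[] hsv = new float[3];
for (int y = 0; y < bitmap.getHeight(); y++) {
    for (int x = 0; x < bitmap.getWidth(); x++) {
        Color.colorToHSV(bitmap.getPixel(x, y), hsv);
        boolean yellowHue = hsv[0] >= 49f && hsv[0] <= 72f;  // yellowish hue band
        boolean brightEnough = hsv[2] > 0.7f;                // assumed: bright enough to read as yellow
        boolean nearWhite = hsv[1] < 0.15f && hsv[2] > 0.9f; // assumed: white has almost no saturation
        if ((yellowHue && brightEnough) || nearWhite) {
            matches++;
        }
    }
}
float yellowRating = matches / (float) (bitmap.getWidth() * bitmap.getHeight());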
The Bitmap.getPixel(int x, int y) method returns the pixel's colour as a packed ARGB int, from which you can pull out the RGB values. Yellow is a combination of red and green, so a straight yellow RGB triple would be (255, 255, 0), right? If you go darker, you lower both the red and green values. If you go brighter (towards white), you bring up the blue value. So basically, you need to find a way to detect how "close" any given pixel's RGB value comes to (255, 255, 0) and then accumulate those closeness values for the entire image. Do the same for the second image, then compare the 2 results.
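A hedged sketch of that closeness idea, using Android's packed-int pixels (the squared-distance scoring is just one of several reasonable choices):
long totalDistance = 0;
for (int y = 0; y < bitmap.getHeight(); y++) {
    for (int x = 0; x < bitmap.getWidth(); x++) {
        int pixel = bitmap.getPixel(x, y);
        int dr = 255 - Color.red(pixel);
        int dg = 255 - Color.green(pixel);
        int db = Color.blue(pixel);   // pure yellow has zero blue
        // squared distance to pure yellow (255, 255, 0); smaller means "more yellow"
        totalDistance += dr * dr + dg * dg + db * db;
    }
}
// compare totalDistance between the two images: the lower total is the "more yellow" image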