How to generate a transparent PNG image with LibGDX?

There's an entity of my LibGDX game I would like to render to a PNG. So I made a small tool: a LibGDX app that displays that entity and takes a screenshot on F5. The app's only goal is to generate the PNG.
camera.update();
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(camera.combined);
batch.begin();
animation.update(Gdx.graphics.getDeltaTime() * 1000);
animation.draw(batch);
batch.end();
if(exporting)
// export...
From that wiki page I found out how to take a screenshot, and by removing the for loop I was able to get a screenshot that doesn't replace transparent pixels with black pixels.
byte[] pixels = ScreenUtils.getFrameBufferPixels(0, 0, Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), true);
Pixmap pixmap = new Pixmap(Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), Pixmap.Format.RGBA8888);
BufferUtils.copy(pixels, 0, pixmap.getPixels(), pixels.length);
PixmapIO.writePNG(Gdx.files.external("mypixmap.png"), pixmap);
pixmap.dispose();
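As a sanity check, the buffer sizes involved are easy to verify in plain Java: getFrameBufferPixels returns RGBA8888 data, i.e. 4 bytes per pixel, so the byte array length must match the Pixmap's backing buffer size. (Stand-alone sketch, not libGDX code.)

```java
// Sanity check: RGBA8888 means 4 bytes per pixel, so the byte[] returned by
// ScreenUtils.getFrameBufferPixels must have width * height * 4 elements.
public class BufferSizeCheck {
    public static int rgba8888Bytes(int width, int height) {
        return width * height * 4;
    }
    public static void main(String[] args) {
        System.out.println(rgba8888Bytes(1920, 1080)); // 8294400
    }
}
```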
It works well for the edges of the entity but not for the multiple parts inside.
Edges: (perfect)
Inside: (should not be transparent)
So I started playing with blending to fix that.
With
batch.enableBlending();
batch.setBlendFunction(
exporting ? GL20.GL_ONE : GL20.GL_SRC_ALPHA, // exporting is set to true on the frame where the screenshot is taken
GL20.GL_ONE_MINUS_SRC_ALPHA);
This improved it a bit:
But with images like glasses that are supposed to be transparent, the result is opaque:
Instead of:
Any idea what I should do to fix this? What I want is pretty standard: a transparent background with semi-transparent images on top of it. I want it to behave just like regular image software handles layers (GIMP, for example).

Your issue is that the written colors and alpha are both modulated by the same blend function: SRC_ALPHA and ONE_MINUS_SRC_ALPHA.
You need glBlendFuncSeparate to achieve this. In your case:
batch.begin();
// first disable batch blending changes (see javadoc)
batch.setBlendFunction(-1, -1);
// then use special blending.
Gdx.gl.glBlendFuncSeparate(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, GL20.GL_ONE, GL20.GL_ONE);
... your drawings ...
batch.end();
This way, color channels are still blended as usual, but alpha channels are added together (source plus destination).
Note that with libGDX 1.9.7+, the batch blending hack is no longer required and the code can simply be:
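The per-pixel effect of that separate blend function can be sketched in plain Java (an illustrative stand-alone model, not libGDX or GL code):

```java
// Sketch of glBlendFuncSeparate(SRC_ALPHA, ONE_MINUS_SRC_ALPHA, ONE, ONE)
// for a single pixel. Illustrative only; GL does this in hardware.
public class BlendSeparateDemo {
    // Color channels: classic "src over dst" alpha blending.
    public static float blendColor(float src, float srcA, float dst) {
        return src * srcA + dst * (1f - srcA);
    }
    // Alpha channel: ONE / ONE simply adds both alphas (GL clamps to [0,1]).
    public static float blendAlpha(float srcA, float dstA) {
        return Math.min(1f, srcA + dstA);
    }
    public static void main(String[] args) {
        // Drawing a 50%-opaque white pixel onto a fully transparent background:
        System.out.println(blendColor(1f, 0.5f, 0f)); // 0.5
        // The written alpha stays 0.5 instead of being squared away to 0.25
        // as it would be with SRC_ALPHA / ONE_MINUS_SRC_ALPHA on alpha too:
        System.out.println(blendAlpha(0.5f, 0f)); // 0.5
    }
}
```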
batch.begin();
batch.setBlendFunctionSeparate(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, GL20.GL_ONE, GL20.GL_ONE);
... your drawings ...
batch.end();
There are some limitations in some cases though, please take a look at my GIST for more information.

Related

How to create a proper orthographic camera in libgdx

So I am having a little bit of a hard time understanding how orthographic cameras work in libgdx.
What I want is a camera that will only render things within a square, while having another camera set the bounds for my whole screen.
So here, I was able to do what I wanted on the whole screen for the game pad. But the thing you see on the top right is the background map of the game, and I want to render only the parts that fall within the red square you see here. How do I achieve that?
Are cameras supposed to do that, or do I need to figure out a way to do it manually? I am really confused as to how cameras and projection matrices work.
Here on this screen, the red square and the green pad on the left are being drawn using the projection matrix of my screen camera. The map (top right) is drawn using my map cam.
Map cam is a viewport of 400x400, but as you can see the tiles are rectangular, and that isn't the aspect ratio I want. If someone can briefly explain how cameras work, I'd greatly appreciate it.
The reason I am not posting my code here is that I feel I need to understand how camera mechanics work to even code this properly, so I want to address that issue first.
Following @Tenfour04's advice worked perfectly. In case anyone wonders what I wanted to achieve, here's a picture.
A camera alone cannot crop off part of the screen. For that you need to use glViewport. There is already a Viewport class in Libgdx that can do that for you. You will need two orthographic cameras (one for the map and one for the GUI), but the viewport can create its own.
private Viewport viewport;
//in create:
viewport = new FitViewport(400, 400);
//in resize:
viewport.update(width, height);
//in render:
viewport.getCamera().position.set(/*...move your map camera as needed*/);
viewport.apply(); //viewport cropped to your red square
batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
//draw map
batch.end();
//return to full screen viewport
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setProjectionMatrix(yourGUICamera.combined);
batch.begin();
//draw gui
batch.end();
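The cropping a FitViewport performs boils down to simple letterboxing math: scale the fixed world size by the smaller of the two screen/world ratios, then center the result. A stand-alone sketch of that calculation (illustrative only; libGDX does this internally via Scaling.fit):

```java
// Sketch of how a FitViewport maps a fixed world size onto the screen.
// Not the actual libGDX implementation, just the underlying math.
public class FitViewportMath {
    // Returns {screenX, screenY, screenWidth, screenHeight} of the cropped viewport.
    public static int[] fit(float worldW, float worldH, int screenW, int screenH) {
        float scale = Math.min(screenW / worldW, screenH / worldH);
        int vw = Math.round(worldW * scale);
        int vh = Math.round(worldH * scale);
        // Center the letterboxed square on the screen.
        return new int[] { (screenW - vw) / 2, (screenH - vh) / 2, vw, vh };
    }
    public static void main(String[] args) {
        // A 400x400 world on an 800x600 screen -> a centered 600x600 square.
        int[] v = fit(400, 400, 800, 600);
        System.out.println(java.util.Arrays.toString(v)); // [100, 0, 600, 600]
    }
}
```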
What happens is that the camera will fit itself to the size of the screen. In order to change this, you would want to use a FrameBuffer. The frame buffer constrains the camera to the desired size, and its contents can then be drawn as a texture.
Create the frame buffer with the dimensions being in pixels.
//Initialize the buffer
FrameBuffer fbo = new FrameBuffer(Format.RGB565, width, height, false);
Render the world within the buffer.
fbo.begin();
//Draw the world here
fbo.end();
Draw the buffer to the screen with a batch.
batch.begin();
batch.draw(fbo.getColorBufferTexture(), x, y);
batch.end();

LWJGL: Discolored Textures

So I am developing a small Pong replica simply for some practice with LWJGL. Since there is no easy way to write customizable text in LWJGL, I am using textures for the start button, other buttons, and so on. However, when I drew the texture, it turned out to be discolored on my purple background. If I change the background to white, there is no discoloration. Help?
Also, my start button texture is something like 50x22, but I put it on a 64x64 image because Slick can only load resolutions that are a power of two. I adjusted the rectangle being drawn so that it is not warped, and the rest of the image is transparent, so it shouldn't be visible once I sort out the above problem. Are there any alternatives to my method?
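For reference, only each dimension needs to be a power of two (so a 64x32 canvas would also hold a 50x22 button), and the padding plus the fraction of the padded texture the real content occupies can be computed like this (a stand-alone sketch, not Slick or LWJGL code):

```java
// Sketch: pad an image to power-of-two dimensions and compute the fraction of
// the padded texture that the real content (e.g. a 50x22 button) occupies.
public class PotPadding {
    public static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }
    public static void main(String[] args) {
        int w = 50, h = 22;
        int texW = nextPowerOfTwo(w), texH = nextPowerOfTwo(h);
        System.out.println(texW + "x" + texH);    // 64x32
        // Texture-coordinate extent of the real content inside the padding:
        System.out.println((float) w / texW);     // 0.78125
        System.out.println((float) h / texH);     // 0.6875
    }
}
```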
This is where I initialize my OpenGL stuff:
public static void setCamera()
{
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,width,0,height,-1,1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
And this is where I draw the texture:
public void draw()
{
logic();
glPushMatrix();
glTranslated(x,y,0);
texture.bind();
glBegin(GL_QUADS);
glTexCoord2d(0,1); glVertex2d(0,-196.875);
glTexCoord2d(1,1); glVertex2d(width+18.75,-196.875);
glTexCoord2d(1,0); glVertex2d(width+18.75,height);
glTexCoord2d(0,0); glVertex2d(0,height);
glEnd();
glPopMatrix();
}
Thanks :)
As discussed in comments, your initial problem was that you had neglected to reset the "current color" before drawing your texture. GL is a glorified state machine, it will continue to use the color you set for every other draw operation... so setting glColor3d (...) when you drew your background also affects your foreground image.
Adding the following before drawing your textured quad will fix this problem:
glColor3f (1.0f, 1.0f, 1.0f);
However, you have brought up a new issue in your comments related to blending. This question boils down to a lack of a blending function. By default when you draw something in OpenGL it will merely overwrite anything else in the framebuffer.
What you need for transparency to work is enable blending and use this function:
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This effectively does the following:
NewColor.RGB = (NewColor.RGB * NewColor.A) + (OldColor.RGB * (1.0 - NewColor.A))
With this, the parts of your texture that have an alpha value != 1.0 will be a mix of whatever was already in the framebuffer and what you just drew.
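That blend equation can be checked numerically with a tiny stand-alone sketch (plain Java, not LWJGL code):

```java
// Numeric check of GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA for one RGB channel.
public class AlphaBlendCheck {
    public static double blend(double srcRgb, double srcA, double dstRgb) {
        return srcRgb * srcA + dstRgb * (1.0 - srcA);
    }
    public static void main(String[] args) {
        // A 25%-opaque black pixel over a background channel of 0.5:
        System.out.println(blend(0.0, 0.25, 0.5)); // 0.375
        // A fully opaque pixel simply overwrites the framebuffer:
        System.out.println(blend(1.0, 1.0, 0.5)); // 1.0
    }
}
```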
Remember to enable blending before you draw your transparent objects:
glEnable (GL_BLEND);
and disable it when you do not need it:
glDisable (GL_BLEND);
Lastly, you should be aware that the order you draw translucent objects in OpenGL is pretty important. Opaque objects (such as your background) need to be drawn first. In general, you need to draw things from back-to-front in order for alpha blending to function correctly. This is the opposite order you would ideally want for opaque geometry (since hardware can skip shading for obstructed objects if you draw objects in front of them first).
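That back-to-front ordering can be sketched as a simple depth sort before issuing draw calls (Sprite here is a hypothetical value type for illustration, not an LWJGL class):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: sort translucent objects back-to-front before drawing them,
// so alpha blending composites correctly. Sprite is a hypothetical type.
public class DepthSortDemo {
    static class Sprite {
        final String name;
        final float depth; // larger = farther from the camera
        Sprite(String name, float depth) { this.name = name; this.depth = depth; }
    }
    public static List<String> drawOrder(List<Sprite> sprites) {
        List<Sprite> sorted = new ArrayList<>(sprites);
        // Farthest first: the background gets drawn before anything translucent.
        sorted.sort(Comparator.comparingDouble((Sprite s) -> s.depth).reversed());
        List<String> order = new ArrayList<>();
        for (Sprite s : sorted) order.add(s.name);
        return order;
    }
    public static void main(String[] args) {
        List<Sprite> scene = List.of(
            new Sprite("button", 0.1f),
            new Sprite("background", 1.0f),
            new Sprite("glass", 0.5f));
        System.out.println(drawOrder(scene)); // [background, glass, button]
    }
}
```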

How to apply gradient background in andengine

I've been looking around for a while now and can't seem to find out how to set a scene's background as a gradient... it's hard to find solid AndEngine-related answers.
I guess my options are:
using a sprite from a gradient image I've created myself (which can't be the best way)
using a gradient XML resource (but I don't know how to create a sprite from a resId, and I'm confused about how to make the gradient fit the camera)
or some other andengine built-in method
Any help is appreciated.
The following code inside your activity class (onCreateScene or onPopulateScene) should set a red/blue gradient as your background.
Gradient g = new Gradient(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT, this.getVertexBufferObjectManager());
g.setGradient(Color.RED, Color.BLUE, 1, 0);
this.setBackground(new EntityBackground(g));
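Under the hood, such a gradient is just per-pixel linear interpolation between the two colors along one axis. A stand-alone sketch of that math (illustrative only; AndEngine's Gradient entity does this on the GPU):

```java
// Sketch of the linear interpolation a red->blue gradient performs.
// Illustrative math only, not AndEngine API.
public class GradientMath {
    // t = 0 at the "from" edge of the screen, t = 1 at the "to" edge.
    public static float[] lerpColor(float[] from, float[] to, float t) {
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) out[i] = from[i] + (to[i] - from[i]) * t;
        return out;
    }
    public static void main(String[] args) {
        float[] red = {1f, 0f, 0f}, blue = {0f, 0f, 1f};
        // Halfway across, the color is an even red/blue mix:
        float[] mid = lerpColor(red, blue, 0.5f);
        System.out.println(mid[0] + " " + mid[1] + " " + mid[2]); // 0.5 0.0 0.5
    }
}
```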

How can I properly use bind() on a subImage?

I'm having a lot of trouble using the Slick2D bind() functionality and then trying to draw an image in OpenGL.
I'm using an Image I obtained from getSubImage. If I use the graphics.drawImage() method, it draws this image perfectly. If, however, I use bind(), it binds the entire image that I obtained this sub-image from. Can I not bind sub-images, or am I doing it wrong?
Some extracts from my code:
In the constructor of my class:
ui = new Image("resources/img/ui/ui.png");
// I've tried with SpriteSheet too but Image is more appropriate for my purposes.
border_t = ui.getSubImage(12, 24, 12, 12);
In the render method:
border_t.bind();
graphics.setColor(Color.white);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(12, 0);
GL11.glTexCoord2f(9, 0);
GL11.glVertex2f(108, 0);
GL11.glTexCoord2f(9, 1);
GL11.glVertex2f(108, 12);
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(12, 12);
GL11.glEnd();
This renders the entire spritesheet 9 times extremely scaled down instead of the top border as I had hoped.
Is this functionality lacking from Slick2d? Is it a bug? Or am I just simply doing it wrong?
"Subimages" are a construct of Slick2D, and only Slick2D. Once you start talking directly to OpenGL, you're now using OpenGL concepts, not Slick2D concepts.
There, there are no "subimages"; there are only textures. You can't bind a part of a texture. You must bind the whole thing. If you want to render a subset of a texture, you need to adjust your texture coordinates accordingly to select just that piece.
So using bind on a subimage isn't very useful.
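Computing those adjusted texture coordinates is just pixel-to-normalized conversion. A stand-alone sketch (the 256x256 sheet size in main is an assumption for illustration; the actual size of ui.png isn't given):

```java
// Sketch: normalize a pixel-space sub-rectangle into GL texture coordinates.
// The sheet size used in main is an assumed example value.
public class SubImageUV {
    // Returns {u0, v0, u1, v1} for a sub-rect (x, y, w, h) of a texW x texH texture.
    public static float[] uv(int x, int y, int w, int h, int texW, int texH) {
        return new float[] {
            (float) x / texW, (float) y / texH,
            (float) (x + w) / texW, (float) (y + h) / texH
        };
    }
    public static void main(String[] args) {
        // The border_t sub-image (12, 24, 12, 12) on an assumed 256x256 sheet:
        float[] c = uv(12, 24, 12, 12, 256, 256);
        System.out.println(java.util.Arrays.toString(c));
        // These u/v values replace the 0..9 texture coordinates in the
        // glTexCoord2f calls above when binding the whole sheet.
    }
}
```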

Blending problems using opengl (via lwjgl) when using a png texture

I have a (hopefully) small problem when using blending in OpenGL.
Currently I use LWJGL and Slick-Util to load the Texture.
The texture itself is a 2048x2048 png graphic, in which I store tiles of a size of 128x128 so that I have 16 sprites per row/column.
Since glTexCoord2f() uses normalized coordinates, I wrote a little function to scale the whole image so that only the sprite I want is shown.
It looks like this:
private Rectangle4d getTexCoord(int x, int y) {
double row = 0, col = 0;
if(x > 0)
row = x/16d;
if(y > 0)
col = y/16d;
return new Rectangle4d(row, col, 1d/16d, 1d/16d);
}
(Rectangle4d is just a type to store x, y, width and height as double coords)
Now the problem is that once I use these coords, the sprite displays correctly and the transparency works correctly too, but everything else becomes significantly darker (more precisely, it probably becomes transparent, and since the clear color is black it looks darker). The sprite itself is drawn correctly. I already tried changing all the glColor3d(..) calls to glColor4d(..) with alpha set to 1d, but that didn't change anything. The sprite is currently the only image; everything else is just colored quads.
Here is how I initialised OpenGL:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And here is how I load the sprite (using slick-util):
texture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/sprites.png"));
And finally I render it like this (using the helper function getTexCoord() mentioned at the top):
texture.bind();
glColor4d(1, 1, 1, 1);
glBegin(GL_QUADS);
{
Rectangle4d texCoord = getTexCoord(0, 0);
glTexCoord2f((float)texCoord.getX(), (float)texCoord.getY());
glVertex2i((Game.WIDTH-PLAYERSIZE)/2, (Game.HEIGHT-PLAYERSIZE)/2);
glTexCoord2f((float)texCoord.getX()+(float)texCoord.getWidth(), (float)texCoord.getY());
glVertex2i((Game.WIDTH+PLAYERSIZE)/2, (Game.HEIGHT-PLAYERSIZE)/2);
glTexCoord2f((float)texCoord.getX()+(float)texCoord.getWidth(), (float)texCoord.getY()+(float)texCoord.getHeight());
glVertex2i((Game.WIDTH+PLAYERSIZE)/2, (Game.HEIGHT+PLAYERSIZE)/2);
glTexCoord2f((float)texCoord.getX(), (float)texCoord.getY()+(float)texCoord.getHeight());
glVertex2i((Game.WIDTH-PLAYERSIZE)/2, (Game.HEIGHT+PLAYERSIZE)/2);
}
glEnd();
The result is this (sprite is drawn correctly, just everything else is darker/transparent):
Without the texture (just a gray quad), it looks like this (now everything is correctly drawn except I don't have a sprite):
Thanks for everyone who bothers to read this at all!
Edit:
Some additional info, from my attempts to find the problem:
This is how it looks when I set the clear color to white (using glClearColor(1, 1, 1, 1)):
Another thing I tried is enabling blending just before I draw the player and disable it again right after I finished drawing:
It's a bit better now, but it's still noticeably darker. In this case it really seems to be "darker", not "more transparent", because it looks the same when I use white as the clear color (while still only enabling blending when needed and disabling it right after), as seen here:
I read some related questions and eventually found the solution (Link). Apparently I can't/shouldn't have GL_TEXTURE_2D enabled all the time when I want to render textureless objects (colored quads in this case)!
So now, I enable it only before I render the sprite and then disable it again once the sprite is drawn. It works perfectly now! :)
