I'm having a lot of trouble using the Slick2D bind() functionality and then trying to draw an image in OpenGL.
I'm using an Image I obtained from getSubImage. If I use the graphics.drawImage() method it draws this Image perfectly. If, however, I use bind(), it binds the entire Image that I obtained this sub-image from. Can I not bind sub-images, or am I doing it wrong?
Some extracts from my code:
In the constructor of my class:
ui = new Image("resources/img/ui/ui.png");
// I've tried with SpriteSheet too but Image is more appropriate for my purposes.
border_t = ui.getSubImage(12, 24, 12, 12);
In the render method:
border_t.bind();
graphics.setColor(Color.white);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(12, 0);
GL11.glTexCoord2f(9, 0);
GL11.glVertex2f(108, 0);
GL11.glTexCoord2f(9, 1);
GL11.glVertex2f(108, 12);
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(12, 12);
GL11.glEnd();
This renders the entire spritesheet 9 times extremely scaled down instead of the top border as I had hoped.
Is this functionality lacking from Slick2d? Is it a bug? Or am I just simply doing it wrong?
"Subimages" are a construct of Slick2D, and only Slick2D. Once you start talking directly to OpenGL, you're now using OpenGL concepts, not Slick2D concepts.
In OpenGL there are no "subimages"; there are only textures. You can't bind part of a texture; you must bind the whole thing. If you want to render a subset of a texture, you need to adjust your texture coordinates accordingly to select just that piece.
So calling bind() on a sub-image isn't very useful on its own: it simply binds the parent texture.
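If you want to keep the raw OpenGL quad, you can compute the texture coordinates of the sub-region yourself. Here is a rough sketch, assuming Slick2D's Image exposes the sub-region through getTextureOffsetX()/getTextureOffsetY() and getTextureWidth()/getTextureHeight() (check the javadoc of your Slick2D version):
border_t.bind(); // still binds the whole sheet
float tx = border_t.getTextureOffsetX();   // left edge of the sub-image in 0..1 texture space
float ty = border_t.getTextureOffsetY();   // top edge of the sub-image
float tw = border_t.getTextureWidth();     // sub-image width in 0..1 texture space
float th = border_t.getTextureHeight();    // sub-image height
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(tx, ty);           GL11.glVertex2f(12, 0);
GL11.glTexCoord2f(tx + tw, ty);      GL11.glVertex2f(108, 0);
GL11.glTexCoord2f(tx + tw, ty + th); GL11.glVertex2f(108, 12);
GL11.glTexCoord2f(tx, ty + th);      GL11.glVertex2f(12, 12);
GL11.glEnd();
Note that tiling the border nine times via texture coordinates greater than 1 won't work this way, because GL_REPEAT wraps over the whole sheet rather than the sub-region; you would have to draw the quad repeatedly, or put the border into its own texture.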
Related
There's an entity of my LibGDX game I would like to render to a PNG. So I made a small tool that is a LibGDX app to display that entity and it takes a screenshot on F5. The goal of that app is only to generate the PNG.
camera.update();
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(camera.combined);
batch.begin();
animation.update(Gdx.graphics.getDeltaTime() * 1000);
animation.draw(batch);
batch.end();
if(exporting)
// export...
From the libGDX wiki page on taking a screenshot I found out how to do it, and by removing the for loop (the one that forces every pixel's alpha to fully opaque) I was able to get a screenshot that doesn't replace transparent pixels with black pixels.
byte[] pixels = ScreenUtils.getFrameBufferPixels(0, 0, Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), true);
Pixmap pixmap = new Pixmap(Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), Pixmap.Format.RGBA8888);
BufferUtils.copy(pixels, 0, pixmap.getPixels(), pixels.length);
PixmapIO.writePNG(Gdx.files.external("mypixmap.png"), pixmap);
pixmap.dispose();
It works well for the edges of the entity but not for the multiple parts inside.
Edges: (perfect)
Inside: (should not be transparent)
So I started playing with blending to fix that.
With
batch.enableBlending();
batch.setBlendFunction(
exporting ? GL20.GL_ONE : GL20.GL_SRC_ALPHA, // exporting is set to true on the frame where the screenshot is taken
GL20.GL_ONE_MINUS_SRC_ALPHA);
This improved it a bit:
But with images like the glasses, which are supposed to be semi-transparent, the result is opaque:
Instead of:
Any idea what I should do to fix this? What I want is pretty standard: a transparent background with semi-transparent images on top of it. I want it to behave just like regular image software does with layers (like GIMP).
Your issue is that the written colors and alpha are both modulated by the same blend function: SRC_ALPHA and ONE_MINUS_SRC_ALPHA.
You need to use glBlendFuncSeparate to achieve this. In your case:
batch.begin();
// first disable batch blending changes (see javadoc)
batch.setBlendFunction(-1, -1);
// then use special blending.
Gdx.gl.glBlendFuncSeparate(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, GL20.GL_ONE, GL20.GL_ONE);
... your drawings ...
batch.end();
This way, color channels are still blended as usual, but alpha channels are added (both source and destination).
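For intuition, here is the alpha that ends up in the framebuffer in each case (just the blend arithmetic, not libGDX code; srcA is the incoming fragment's alpha, dstA the alpha already in the framebuffer):
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
//   dstA' = srcA * srcA + dstA * (1 - srcA)   // semi-transparent pixels never accumulate full alpha
// glBlendFuncSeparate(..., GL_ONE, GL_ONE) for the alpha factors:
//   dstA' = srcA * 1 + dstA * 1               // alpha accumulates, while color is still blended normally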
Note that with libGDX 1.9.7+, the batch blending hack is not required anymore; it can simply be:
batch.begin();
batch.setBlendFunctionSeparate(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, GL20.GL_ONE, GL20.GL_ONE);
... your drawings ...
batch.end();
There are some limitations in some cases though; please take a look at my gist for more information.
Is there a way to combine a number of textures into one texture with the libGDX API?
For example, consider having these 3 textures:
Texture texture1 = new Texture("texture1.png");
Texture texture2 = new Texture("texture2.png");
Texture texture3 = new Texture("texture3.png");
The goal is to combine them into one usable texture. Is there a way to do it?
I wanted to get the same result, but I did not find the previous instructions to be of any help, nor was I able to find any other clear solution for this issue. So here's what I managed to get working after reading a bit of the API docs and doing some testing of my own.
Texture splashTexture = new Texture("texture1.png"); // Remember to dispose
splashTexture.getTextureData().prepare(); // The api-doc says this is needed
Pixmap splashpixmap = splashTexture.getTextureData().consumePixmap(); // Strange name, but gives the pixmap of the texture. Remember to dispose this also
Pixmap pixmap = new Pixmap(splashTexture.getWidth(), splashTexture.getHeight(), Format.RGBA8888); // Remember to dispose
// We want the center point coordinates of the image region, since the circle's origin is at the center and it is drawn out to the radius
int x = (int) (splashTexture.getWidth() / 2f);
int y = (int) (splashTexture.getHeight() / 2f);
int radius = (int) (splashTexture.getWidth() / 2f - 5); // -5 just to leave a small margin in my picture
pixmap.setColor(Color.ORANGE);
pixmap.fillCircle(x, y, radius);
// Draws the texture on the background shape (orange circle)
pixmap.drawPixmap(splashpixmap, 0, 0);
// TADA! New combined texture
this.splashImage = new Texture(pixmap); // This should probably be disposed as well once it's no longer needed
// These are not needed anymore
pixmap.dispose();
splashpixmap.dispose();
splashTexture.dispose();
Then use the splashImage (which is a class variable of type Texture in my case) to render the combined image wherever you want it. The resulting image has the given background shape, and the foreground is a PNG image whose transparent parts are filled by the background color.
@Ville Myrskyneva's answer works, but it doesn't entirely show how to combine 2 or more textures. I made a utility method that takes 2 textures as input and returns a combined texture. Note that the second texture will always be drawn on top of the first one.
public static Texture combineTextures(Texture texture1, Texture texture2) {
    texture1.getTextureData().prepare();
    Pixmap pixmap1 = texture1.getTextureData().consumePixmap();
    texture2.getTextureData().prepare();
    Pixmap pixmap2 = texture2.getTextureData().consumePixmap();
    pixmap1.drawPixmap(pixmap2, 0, 0);
    Texture textureResult = new Texture(pixmap1);
    pixmap1.dispose();
    pixmap2.dispose();
    return textureResult;
}
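Usage would then look something like this (the file names are placeholders; the source textures still have to be disposed by the caller when they are no longer needed):
Texture texture1 = new Texture("texture1.png");
Texture texture2 = new Texture("texture2.png");
Texture combined = combineTextures(texture1, texture2); // texture2 drawn on top of texture1
// ... use combined for rendering ...
texture1.dispose();
texture2.dispose();
combined.dispose(); // when it's no longer needed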
You should use the TexturePacker for creating one big Texture from lots of small ones: https://github.com/libgdx/libgdx/wiki/Texture-packer.
And here is a GUI version (jar) which will help you pack all your textures into one before putting them into your project: https://code.google.com/p/libgdx-texturepacker-gui/.
You should also remember that the maximum size of one texture for most Android and iOS devices is 2048x2048.
In case you want to use a combined texture as an atlas for performance benefits, you had better use @Alon Zilberman's answer.
But I think your question is about another problem. If you want to get some custom texture out of these three textures (e.g. merge them by drawing one onto another), then your best solution is to draw to a FrameBuffer. Here you can find a nice example of how to do it: https://stackoverflow.com/a/7632680/3802890
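For reference, a minimal render-to-texture sketch with libGDX's FrameBuffer (the 256x256 size is a placeholder, texture1/texture2/texture3 and batch are the objects from the question, and details may vary between libGDX versions):
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, 256, 256, false);
fbo.begin();
batch.setProjectionMatrix(new Matrix4().setToOrtho2D(0, 0, 256, 256)); // match the FBO size, not the screen
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.draw(texture1, 0, 0);
batch.draw(texture2, 0, 0);
batch.draw(texture3, 0, 0);
batch.end();
fbo.end();
Texture combined = fbo.getColorBufferTexture(); // owned by the fbo; note it comes out flipped vertically compared to a regular texture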
I'm trying to render a background image for a new game I'm creating. To do this, I thought I'd just create a simple quad and draw it first so that it stretched over the background of my game. The problem is that the quad doesn't draw at its correct size and draws at completely the wrong place on the screen. I am using LWJGL and an added slick-util library for loading textures.
background = TextureHandler.getTexture("background", "png");
This is the line of code which basically gets my background texture using a class that I wrote using slick-util. I then bind the texture to a quad and draw it using glBegin() and glEnd() like this:
// Draw the background.
background.bind();
glBegin(GL_QUADS);
{
glTexCoord2d(0.0, 0.0);
glVertex2d(0, 0);
glTexCoord2d(1.0, 0.0);
glVertex2d(Game.WIDTH, 0);
glTexCoord2d(1.0, 1.0);
glVertex2d(Game.WIDTH, Game.HEIGHT);
glTexCoord2d(0.0, 1.0);
glVertex2d(0, Game.HEIGHT);
}
glEnd();
You'd expect this block to draw the quad so that it covered the entire screen, but it actually doesn't do this. It draws it in the middle of the screen, like so:
http://imgur.com/Xw9Xs9Z
The large, multicolored sprite that takes up the larger portion of the screen is my background, but it isn't taking up the full space like I want it to.
A few things I've tried:
Checking, double-checking, and triple-checking to make sure that the sprite's size and the window's size are identical
Resizing the sprite so that it is both larger and smaller than my target size. Nothing seems to change when I do this.
Positioning the sprite at different intervals or messing with the parameters of the glTexCoord2d() and glVertex2d(). This is just messy, and looks unnatural.
Why won't this background sprite draw at its correct size?
If you have not created your own orthographic projection (i.e. using glOrtho()), then your vertex coordinates need to range from -1 to +1. Right now your quad only covers one quadrant of that projection (from the center of the screen towards one corner), which gives you this result.
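A minimal fix, assuming the fixed-function pipeline and that Game.WIDTH/Game.HEIGHT match the window size, is to set up a pixel-based orthographic projection once during initialization, for example:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// map x in 0..Game.WIDTH and y in 0..Game.HEIGHT to the window, with (0, 0) at the top-left corner
glOrtho(0, Game.WIDTH, Game.HEIGHT, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
After that, the vertex coordinates in the quad above are interpreted as pixels and the background covers the whole window.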
I've been looking around for a while now and can't seem to find out how to set a scene's background as a gradient... it's hard to find solid AndEngine-related answers.
I guess my options are:
using a sprite from a gradient image I've created myself (which can't be the best way)
using a gradient xml resource (but I don't know how to create a sprite from a resId, and I'm confused on how to make the gradient fit the camera)
or some other AndEngine built-in method
Any help is appreciated.
The following code inside your activity class (onCreateScene or onPopulateScene) should set a red/blue gradient as your background.
Gradient g = new Gradient(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT, this.getVertexBufferObjectManager());
g.setGradient(Color.RED, Color.BLUE, 1, 0);
this.setBackground(new EntityBackground(g));
I'm currently exploring OpenGL through the JOGL library (the Java wrappers for OpenGL), which I'm using to create 2D/3D graphs. At the moment I'm having a little issue with text I've rendered through the glutBitmapString method: it isn't resizing with respect to the window, as shown in the screenshot below. Unfortunately, the job spec I've been given says this must be done in Java, so I can't jump to any other language with a better-supported version of OpenGL.
Everything else in the window resizes correctly, so I'm assuming the issue is in the code I've posted below; if not, I'll be happy to post any other code you feel is relevant to the issue.
Here is a snippet of my code I'm using to render the text
GL gl = drawable.getGL();
GLUT glut = new GLUT();
float textPosx = -0.4f;
float textPosy = -2.1f;
gl.glColor3f(1.0f, 0.0f, 0.0f);
// Move to rastering position
gl.glRasterPos2f(textPosx, textPosy);
// convert text to bitmap and tell what string to put
glut.glutBitmapString(GLUT.BITMAP_HELVETICA_12, "0");
textPosx = 1.75f;
textPosy = -2.15f;
// Move to rastering position
gl.glRasterPos2f(textPosx, textPosy);
// convert text to bitmap and tell what string to put
glut.glutBitmapString(GLUT.BITMAP_HELVETICA_18, "TIME");
textPosx = -1.0f;
textPosy = 1.0f;
gl.glColor3f(0.0f, 1.0f, 0.0f);
// Move to rastering position
gl.glRasterPos2f(textPosx, textPosy);
// convert text to bitmap and tell what string to put
glut.glutBitmapString(GLUT.BITMAP_HELVETICA_18, "ERRORS");
glutBitmapString draws text in 2D. 2D text size is based on font size. So, if you set the font size to 18, as you have in this example, then it will be a standard 18 pt font size on the screen, no matter how large you make the window or how close you zoom in. This is not a Java issue. Java is not actually drawing anything. Everything is being drawn by the native OpenGL libraries, so it will behave exactly the same in C++ as it does in Java.
There are two ways you could work around this. One would be to change the font size of the text as you zoom in or out. This would be kind of awkward and may be difficult to get right. A better option, imo, would be to simply use 3D text. In JOGL you use the TextRenderer object to draw 3D text.
In your init method create a global variable like so:
textr = new TextRenderer(new Font("SansSerif", Font.PLAIN, 18));
Obviously, change the font settings to whatever you prefer. Then in your display loop:
textr.setColor(Color.GREEN);
textr.begin3DRendering();
textr.draw3D("ERRORS", xLocation, yLocation, zLocation, scale);
textr.end3DRendering();
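For completeness, the snippets above assume declarations along these lines (package names are for JOGL 2.x; older JOGL releases keep TextRenderer in a different package):
import java.awt.Color;
import java.awt.Font;
import com.jogamp.opengl.util.awt.TextRenderer;

private TextRenderer textr; // created once in init(), reused every frame in display()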
Personally, I prefer to use a large font size and then scale it down some; that way, when you zoom in, it doesn't get pixelated.
Also, unlike 2D text, 3D text will not always face the screen. You have to do that manually. It depends on how your camera is set up, but if you're using basic rotations to move the camera around, usually you can just negate those rotations on the 3D text object to make it face the camera.
For the x, y, and z locations, those are locations within the current object (local coordinates). Think of everything from begin3DRendering() to end3DRendering() as one object with its own local coordinate system. Usually, I prefer to draw my text at 0, 0, 0 in local coordinates, then move the entire object to the proper location. That way rotations are easier to understand.
You could try using TextRenderer. Works fine.