I've been looking around for a while now and can't seem to find out how to set a scene's background to a gradient; solid AndEngine-related answers are hard to find.
I guess my options are:
- using a sprite from a gradient image I've created myself (which can't be the best way)
- using a gradient XML resource (but I don't know how to create a sprite from a resource ID, and I'm not sure how to make the gradient fit the camera)
- or some other AndEngine built-in method
Any help is appreciated.
The following code, placed in onCreateScene() or onPopulateScene() in your activity, should set a red-to-blue gradient as your scene's background:
Gradient g = new Gradient(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT, this.getVertexBufferObjectManager());
g.setGradient(Color.RED, Color.BLUE, 1, 0);
scene.setBackground(new EntityBackground(g)); // 'scene' is the Scene you are creating/populating
There's an entity of my LibGDX game that I would like to render to a PNG, so I made a small tool: a LibGDX app that displays that entity and takes a screenshot when F5 is pressed. The app's only goal is to generate the PNG.
camera.update();
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(camera.combined);
batch.begin();
animation.update(Gdx.graphics.getDeltaTime() * 1000);
animation.draw(batch);
batch.end();
if(exporting)
// export...
From that wiki page I found out how to take a screenshot, and by removing the for loop I was able to get a screenshot that doesn't replace transparent pixels with black pixels.
byte[] pixels = ScreenUtils.getFrameBufferPixels(0, 0, Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), true);
Pixmap pixmap = new Pixmap(Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), Pixmap.Format.RGBA8888);
BufferUtils.copy(pixels, 0, pixmap.getPixels(), pixels.length);
PixmapIO.writePNG(Gdx.files.external("mypixmap.png"), pixmap);
pixmap.dispose();
It works well for the edges of the entity but not for the multiple parts inside.
Edges: (perfect)
Inside: (should not be transparent)
So I started playing with blending to fix that.
With
batch.enableBlending();
batch.setBlendFunction(
exporting ? GL20.GL_ONE : GL20.GL_SRC_ALPHA, // exporting is set to true on the frame where the screenshot is taken
GL20.GL_ONE_MINUS_SRC_ALPHA);
This improved it a bit:
But images like glasses, which are supposed to be semi-transparent, come out opaque:
Instead of:
Any idea what I should do to fix this? What I want is pretty standard: a transparent background with semi-transparent images on top of it. I want it to behave just like a regular image editor with layers (like GIMP) would.
Your issue arises because the color and alpha channels are both blended with the same function: SRC_ALPHA and ONE_MINUS_SRC_ALPHA.
You need to use glBlendFuncSeparate to treat them differently. In your case:
batch.begin();
// first disable batch blending changes (see javadoc)
batch.setBlendFunction(-1, -1);
// then use special blending.
Gdx.gl.glBlendFuncSeparate(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA,GL20.GL_ONE, GL20.GL_ONE);
... your drawings ...
batch.end();
This way the color channels are still blended as usual, while the alpha channels are simply added (source plus destination).
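To see why this fixes the alpha channel, the per-pixel arithmetic of the separate blend can be sketched in plain Java (a hypothetical helper for illustration, not a LibGDX API):

```java
// Hypothetical helper illustrating glBlendFuncSeparate(SRC_ALPHA, ONE_MINUS_SRC_ALPHA, ONE, ONE)
// for a single pixel; not part of LibGDX.
public class SeparateBlend {
    // Color channels: classic "over" blending, modulated by source alpha.
    static float blendColor(float src, float srcAlpha, float dst) {
        return src * srcAlpha + dst * (1f - srcAlpha);
    }
    // Alpha channel: plain addition (GL_ONE, GL_ONE), clamped to [0, 1] as GL does.
    static float blendAlpha(float srcAlpha, float dstAlpha) {
        return Math.min(1f, srcAlpha + dstAlpha);
    }
    public static void main(String[] args) {
        // Semi-transparent red over a fully transparent background:
        System.out.println(blendColor(1f, 0.5f, 0f)); // 0.5
        System.out.println(blendAlpha(0.5f, 0f));     // alpha survives: 0.5
    }
}
```

With the non-separate function, the destination alpha would instead be scaled down by ONE_MINUS_SRC_ALPHA, which is exactly what was erasing the inner semi-transparent parts.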
Note that with libGDX 1.9.7+ the batch blending hack is no longer required, and the code can simply be:
batch.begin();
batch.setBlendFunctionSeparate(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA,GL20.GL_ONE, GL20.GL_ONE);
... your drawings ...
batch.end();
There are some limitations in some cases, though; please take a look at my gist for more information.
I've been developing an Android game with LibGDX, and I've just run into a problem that it seems no one has had before. I searched for answers on how to smooth text, and they say two things:
Use FreeType fonts.
Use a MipMap texture filter.
So, I used a FreeType font and applied mipmapping, and I'm pretty sure the filter is scaling the font down (hence using mipmaps), since the font size is 200 and the actual screen is nowhere near that big.
I don't know if I'm making some kind of stupid mistake, but I just can't figure out why this is happening, since what I did seems to be exactly what solved this issue for other people.
So, that's what I did:
I have an assets class with the things I want to load, that (forgetting about the sprites) looks like this:
public static BitmapFont font;
public static void load(){
FreeTypeFontGenerator fontGen = new FreeTypeFontGenerator(Gdx.files.internal("vaques/OpenSans-Bold.ttf")); //free font from fontsquirrel.com
FreeTypeFontParameter fontPar = new FreeTypeFontParameter();
fontPar.size = 200;
fontPar.color = Color.valueOf("ffffff");
fontPar.genMipMaps = true;
font = fontGen.generateFont(fontPar);
font.getRegion().getTexture().setFilter(TextureFilter.MipMapLinearNearest, TextureFilter.Linear);
fontGen.dispose();
}
And then I load the fonts in the create() method of the main class, and I draw them in the Screen.java file. Keeping only the text-related parts, it looks like this:
//Outside the constructor (variable declaration part):
OrthographicCamera textCam;
Viewport textPort;
SpriteBatch textBatch;
//...
//Inside the constructor:
textBatch = new SpriteBatch();
textCam = new OrthographicCamera();
textPort = new ExtendViewport(1600,0,textCam);
textPort.apply(false);
//...
//On the Render() method:
Gdx.gl.glClearColor(0.2f, 0.2f, 0.2f, 1f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
textCam.update();
textBatch.setProjectionMatrix(textCam.combined);
textBatch.begin();
Assets.font.setColor(Color.valueOf("821201")); //whatever the color
Assets.font.draw(textBatch,Integer.toString((int) (1/delta)), 80, 2400-40);
textBatch.end();
//I draw more text, but it looks the same as this one, just in other colors.
And here is a screenshot of that text:
It seems I can't post images directly, so here's the link.
The image also shows part of a circle coming from a 1024x1024 PNG scaled down through mipmapping. The circles looked exactly like the text when I was drawing them with the ShapeRenderer, but now they look fine.
Any idea why this is happening?
Does your Mac support FreeType fonts with the MipMap texture filter? I have some doubts about this; please confirm.
I finally found the answer myself. It turns out the ridiculously large font size was causing the sharpness rather than helping smooth the text.
I reduced both the size of the viewport used to draw text and the font size (divided both by 2), and now the text is smooth.
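For reference, the generator settings after the fix might look something like this (a sketch; the exact size of 100 and the minFilter/magFilter fields are assumptions based on newer gdx-freetype versions):

```java
// Sketch: generate the font closer to its on-screen size and let
// FreeType plus mipmaps handle the rest.
FreeTypeFontParameter fontPar = new FreeTypeFontParameter();
fontPar.size = 100;                                   // half of the original 200
fontPar.genMipMaps = true;
fontPar.minFilter = TextureFilter.MipMapLinearLinear; // trilinear when minified
fontPar.magFilter = TextureFilter.Linear;
BitmapFont font = fontGen.generateFont(fontPar);
```

Generating at a size close to the size actually drawn means the rasterizer, not the texture filter, does most of the work.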
Recently I was given a project where I have to extract the face (face + hair) from a given image.
I am approaching this problem in the following steps:
I extract the face location from the given image. [I get a rectangle]
I extract that rectangle and place it in another image with the same dimensions as the input image. [face_image]
I apply the grabCut algorithm to the face_image of step 2.
When the face_image has a smooth background, grabCut works well, but when the background is complex, grabCut pulls some of the background into the processed image as well.
Here is a snapshot of the results that I am getting.
Here is my code of grabCut:
public void extractFace(Mat image, String fileNameWithCompletePath,
int xOne, int xTwo, int yOne, int yTwo) throws CvException {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Rect rectangle = new Rect(xOne, yOne, xTwo, yTwo);
Mat result = new Mat();
Mat bgdModel = new Mat();
Mat fgdModel = new Mat();
Mat source = new Mat(1, 1, CvType.CV_8U, new Scalar(3));
Imgproc.grabCut(image, result, rectangle, bgdModel, fgdModel, 8, Imgproc.GC_INIT_WITH_RECT);
Core.compare(result, source, result, Core.CMP_EQ);
Mat foreground = new Mat(image.size(), CvType.CV_8UC3, new Scalar(255, 255, 255));
image.copyTo(foreground, result);
Imgcodecs.imwrite(fileNameWithCompletePath, foreground);
}
How can I improve the results of the grabCut algorithm so that it extracts only the face and hair from a given image?
You should be able to do this by "helping" grabCut know a little about the foreground and background. There is a python tutorial that shows how this is done manually by selecting the foreground and background.
To do this automatically, you will need to find programmatic ways to detect the foreground and background. The foreground consists mostly of hair and skin so you will need to detect them.
Skin - There are several papers and blogs on how to do this. Some of them are pretty simple and this OpenCV tutorial may also help. I've found plain hue/saturation to get me pretty far.
Hair - This is trickier but definitely still doable. You may be able to skip hair detection and just use skin and background if it turns out to be too much work.
Background - You should be able to use inRange() to find things in the image that are purple, green, and blue. You know for sure that these are not skin or hair, so they are part of the background.
Use thresholding to create masks of the areas that are most likely skin, hair, and background. You can then feed that information into grabCut by labeling those regions (e.g. GC_FGD / GC_BGD) in the mask you pass with GC_INIT_WITH_MASK, instead of starting from an empty Mat().
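A rough sketch of that thresholding step with OpenCV's Java bindings (the HSV range below is hypothetical and would need tuning for your images):

```java
// Sketch: mark obviously-background colors as GC_BGD in a grabCut mask.
// The range below (saturated green) is a made-up example; tune per image set.
Mat hsv = new Mat();
Imgproc.cvtColor(image, hsv, Imgproc.COLOR_BGR2HSV);

Mat bgColors = new Mat();
Core.inRange(hsv, new Scalar(40, 80, 80), new Scalar(80, 255, 255), bgColors);

// 'mask' is the Mat you later pass to grabCut with GC_INIT_WITH_MASK.
mask.setTo(new Scalar(Imgproc.GC_BGD), bgColors);
```

The same pattern works for skin: inRange() over a skin-tone range, then setTo() with GC_PR_FGD.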
Sorry this is so high-level. I hope it helps!
Another approach, since you have already detected the face, is to simply choose a better initial mask for initialising GrabCut - e.g. by using an oval instead of a rectangle.
Detect face rectangle (as you are already doing)
Create a mask:
a) Create a new black image of the same size as your input image
b) Draw a white-filled ellipse with the same height, width, top and left positions as the face rectangle
Call GrabCut with GC_INIT_WITH_MASK instead of GC_INIT_WITH_RECT:
Imgproc.grabCut(image, mask, rectangle, bgdModel, fgdModel, 8, Imgproc.GC_INIT_WITH_MASK);
This initializes the foreground with a better model because faces are more oval-shaped than rectangle-shaped, so it should include less of the background to begin with.
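A sketch of steps 2-3 with OpenCV's Java bindings (variable names follow the question's code):

```java
// Sketch: initialize grabCut with an elliptical "probably foreground" mask
// instead of a plain rectangle.
Mat mask = new Mat(image.size(), CvType.CV_8UC1, new Scalar(Imgproc.GC_BGD));
Point center = new Point(rectangle.x + rectangle.width / 2.0,
                         rectangle.y + rectangle.height / 2.0);
Size axes = new Size(rectangle.width / 2.0, rectangle.height / 2.0);
// thickness -1 fills the ellipse
Imgproc.ellipse(mask, center, axes, 0, 0, 360,
                new Scalar(Imgproc.GC_PR_FGD), -1);
Imgproc.grabCut(image, mask, rectangle, bgdModel, fgdModel, 8,
                Imgproc.GC_INIT_WITH_MASK);
```

Everything outside the ellipse starts as definite background, so the corners of the face rectangle no longer pollute the foreground model.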
I would suggest "playing" with the rectangle coordinates (int xOne, int xTwo, int yOne, int yTwo). Using your code and the coordinates 1, 400, 30, 400, I was able to avoid the background. (I tried to post the images I successfully cropped, but I need at least 10 reputation to do so.)
The best optimization that can be done to any Java routine is conversion to a native language.
Good evening,
It is about an Android app.
I would like to use the method Canvas.drawText(...), but I don't know how to rotate the text. I need the text at a certain position at a certain angle. Do you know how I can achieve this?
Next question: usually the point the position coordinates refer to is the lower-left corner of the text. I want to change this anchor point to the lower center. Is that possible? The pivot point of the rotation should be the same.
I guess these are simple questions, but I don't know how to do this. Thanks for the help.
Here is a basic example of drawing text on a path. To get it looking how you want, you should experiment with the Path and the Paint.
Paint paint = new Paint();
Path path = new Path();
paint.setColor(Color.BLACK);
path.moveTo(canvas.getWidth()/2, 0);
path.lineTo(canvas.getWidth()/4, 400);
canvas.drawTextOnPath("text manipulated", path, 0, 0, paint);
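To cover the rotation and anchor-point parts of the question: the usual approach is save()/rotate()/restore() on the canvas, plus Paint.measureText() to shift the anchor to the lower center. A sketch (an illustrative helper, not a framework method):

```java
// Sketch: draw text rotated by `degrees` around (x, y), where (x, y) is
// treated as the lower-center anchor instead of the default lower-left.
void drawRotatedText(Canvas canvas, String text, float x, float y,
                     float degrees, Paint paint) {
    canvas.save();
    canvas.rotate(degrees, x, y);                   // pivot at the anchor point
    float halfWidth = paint.measureText(text) / 2f;
    canvas.drawText(text, x - halfWidth, y, paint); // shift left by half the width
    canvas.restore();
}
```

Alternatively, paint.setTextAlign(Paint.Align.CENTER) makes drawText() treat x as the horizontal center, so you can skip the measureText() shift entirely.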
I'm having a lot of trouble using Slick2D's bind() functionality and then trying to draw an image in OpenGL.
I'm using an Image obtained from getSubImage(). If I use the graphics.drawImage() method, it draws this Image perfectly. If, however, I use bind(), it binds the entire Image that this sub-image came from. So can I not bind sub-images, or am I doing it wrong?
Some extracts from my code:
In the constructor of my class:
ui = new Image("resources/img/ui/ui.png");
// I've tried with SpriteSheet too but Image is more appropriate for my purposes.
border_t = ui.getSubImage(12, 24, 12, 12);
In the render method:
border_t.bind();
graphics.setColor(Color.white);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(12, 0);
GL11.glTexCoord2f(9, 0);
GL11.glVertex2f(108, 0);
GL11.glTexCoord2f(9, 1);
GL11.glVertex2f(108, 12);
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(12, 12);
GL11.glEnd();
This renders the entire spritesheet 9 times extremely scaled down instead of the top border as I had hoped.
Is this functionality lacking from Slick2d? Is it a bug? Or am I just simply doing it wrong?
"Subimages" are a construct of Slick2D, and only Slick2D. Once you start talking directly to OpenGL, you're now using OpenGL concepts, not Slick2D concepts.
There, there are no "subimages"; there are only textures. You can't bind a part of a texture. You must bind the whole thing. If you want to render a subset of a texture, you need to adjust your texture coordinates accordingly to select just that piece.
So using bind on a subimage isn't very useful.
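If you want to keep drawing raw quads, you can compute the normalized texture coordinates of the sub-region yourself and pass those to glTexCoord2f instead of values spanning the whole sheet. A small self-contained helper (hypothetical, not part of Slick2D):

```java
// Hypothetical helper: converts a pixel sub-rectangle into normalized
// OpenGL texture coordinates (u, v in [0, 1]) over the full bound texture.
public class SubImageUV {
    static float[] uv(int x, int y, int w, int h, int texW, int texH) {
        float u0 = (float) x / texW;          // left edge
        float v0 = (float) y / texH;          // top edge
        float u1 = (float) (x + w) / texW;    // right edge
        float v1 = (float) (y + h) / texH;    // bottom edge
        return new float[] { u0, v0, u1, v1 };
    }
    public static void main(String[] args) {
        // Sub-image at (12, 24), 12x12 px, inside a 128x128 sheet (assumed size):
        float[] c = uv(12, 24, 12, 12, 128, 128);
        System.out.printf("u0=%f v0=%f u1=%f v1=%f%n", c[0], c[1], c[2], c[3]);
    }
}
```

For the quad in the question, you would call uv(12, 24, 12, 12, sheetWidth, sheetHeight) with your sheet's real dimensions and use the four returned values in the glTexCoord2f calls.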