Android LibGDX Make Texture/Text "Touch to Start" blink - java

I pretty much finished my LibGDX project and now I'm just adding user-friendliness.
I have a Texture (also placed in a sprite) that I would like to fade in and fade out repeatedly (NOT fast blinking). It's just rectangular funky-text that says "Touch to Start".
I considered making an animation of 6 or so frames with varying opacity and just cycling through them. Is this the best way to go?
I'm also looking for a libGDX effect that controls the transparency directly, to avoid all that overhead and keep the animation from looking choppy.
I can't think of any relevant code to add. Thanks for your help.
EDIT
Gdx.gl.glClearColor(0, 0, 0.2f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.draw(touchToStartImage, screenWidth / 2 - touchToStartImage.getWidth() / 2, screenHeight / 2 - touchToStartImage.getHeight() / 2);
elapsed += Gdx.graphics.getDeltaTime();
blinkFontCache.setAlphas(Interpolation.fade.apply((elapsed / 0.01f) % 1f));
blinkFontCache.draw(batch);
blinkFontCache.translate(2f, 2f);
batch.end();
I also defined blinkFontCache = new BitmapFontCache(numberPrinter), where numberPrinter is the BitmapFont that is supposed to draw the text. I've read the API docs for Interpolation and BitmapFontCache, but unfortunately with the above I don't notice any change on the screen. Thanks
SOLUTION
EDIT with INTERPOLATION
elapsed += Gdx.graphics.getDeltaTime();
touchToStartSprite.setAlpha(Interpolation.fade.apply((elapsed / FADE_TIME) % 1f));
blinker.begin();
touchToStartSprite.draw(blinker);
blinker.end();
EDIT with ACTIONS
definitions
text = new Image(highScoreImage);
text.addAction(Actions.alpha(0));
text.act(0);
text.addAction(Actions.forever(Actions.sequence(Actions.fadeIn(FADE_TIME), Actions.fadeOut(FADE_TIME))));
render()
blinker.begin();
text.act(Gdx.graphics.getDeltaTime());
text.draw(blinker, 1);
blinker.end();

You could use the Image class from scene2d, which is an actor that can take a texture region and gives you several methods that can be useful. Here's an implementation.
Image text = new Image(clickToStartRegion);
float fadeTime = 1f;
//...
text.addAction(Actions.alpha(0)); //make the text transparent.
text.act(0); //apply the action once so the text starts out fully transparent
text.addAction(Actions.sequence(Actions.fadeIn(fadeTime), Actions.fadeOut(fadeTime)));
//...
text.act(deltaTime);
//...
text.draw(batch, 1);
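If the blink should repeat for as long as the start screen is shown, the sequence can be wrapped in Actions.forever (as the asker's solution above does); a minimal sketch using the same names:
text.addAction(Actions.forever(Actions.sequence(Actions.fadeIn(fadeTime), Actions.fadeOut(fadeTime))));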

You can use the Interpolation class for the alpha. Assuming you're using a Sprite to draw this:
private float elapsed;
private static final float FADE_TIME = 1f; //time between blinks
//...
elapsed += deltaTime;
sprite.setAlpha(Interpolation.fade.apply((elapsed / FADE_TIME) % 1f));
//...
spriteBatch.begin();
sprite.draw(spriteBatch);
spriteBatch.end();
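Note that (elapsed / FADE_TIME) % 1f ramps from 0 to 1 and then jumps straight back to 0, so the sprite repeatedly fades in and then pops off. If a symmetric fade in and out is wanted, one option (just a sketch, not part of the original answer) is to mirror the fraction:
float t = (elapsed / FADE_TIME) % 2f; //0..2 over one full blink cycle
float fraction = t <= 1f ? t : 2f - t; //mirror the second half: 0..1..0
sprite.setAlpha(Interpolation.fade.apply(fraction));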

Related

How to prevent texture bleeding in a tilemap in LibGDX

I know there are quite a few questions (and answers) on this topic, but they all have different solutions, and none of them seems to work in my case.
I'm developing a small test project with libGDX, in which I tried to add a simple tilemap. I created the tilemap using Tiled, which seems to work quite well, except for the texture bleeding, which sometimes causes black lines (the background color) to appear between the tiles.
What I've tried so far:
I read several SO questions, tutorials and forum posts, and tried almost all of the solutions, but I just can't seem to get this working. Most of the answers said that I would need padding between the tiles, but this doesn't seem to fix it. I also tried loading the tilemap with different parameters (e.g. using the Nearest filter when loading) and rounding the camera's position to prevent rounding problems, but this even made it worse.
My current setup:
You can find the whole project on GitHub. The branch is called 'tile_map_scaling'
At the moment I'm using a tileset that is made of this tile-picture:
It has two pixels of space between every tile, to use as padding and margin.
My Tiled tileset settings look like this:
I use two pixels of margin and spacing, to (try to) prevent the bleeding here.
Most of the time it is rendered just fine, but still sometimes there are these lines between the tiles like in this picture (sometimes they seem to appear only on a part of the map):
I'm currently loading the tile map into the asset manager without any parameters:
public void load() {
AssetManager manager = new AssetManager();
manager.setLoader(TiledMap.class, new TmxMapLoader(new InternalFileHandleResolver()));
manager.setErrorListener(this);
manager.load("map/map.tmx", TiledMap.class, new AssetLoaderParameters());
}
... and use it like this:
public class GameScreen {
public static final float WORLD_TO_SCREEN = 4.0f;
public static final float SCENE_WIDTH = 1280f;
public static final float SCENE_HEIGHT = 720f;
//...
private Viewport viewport;
private OrthographicCamera camera;
private TiledMap map;
private OrthogonalTiledMapRenderer renderer;
public GameScreen() {
camera = new OrthographicCamera();
viewport = new FitViewport(SCENE_WIDTH, SCENE_HEIGHT, camera);
map = assetManager.get("map/map.tmx");
renderer = new OrthogonalTiledMapRenderer(map);
}
@Override
public void render(float delta) {
//clear the screen (with a black screen)
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
moveCamera(delta);
renderer.setView(camera);
renderer.render();
//... draw the player, some debug graphics, a hud, ...
moveCameraToPlayer();
}
private void moveCamera(float delta) {
if (Gdx.input.isKeyPressed(Keys.LEFT)) {
camera.position.x -= CAMERA_SPEED * delta;
}
else if (Gdx.input.isKeyPressed(Keys.RIGHT)) {
camera.position.x += CAMERA_SPEED * delta;
}
// ...
//update the camera to re-calculate the matrices
camera.update();
}
private void moveCameraToPlayer() {
Vector2 dwarfPosition = dwarf.getPosition();
//movement in positive X and Y direction
float deltaX = camera.position.x - dwarfPosition.x;
float deltaY = camera.position.y - dwarfPosition.y;
float movementXPos = deltaX - MOVEMENT_RANGE_X;
float movementYPos = deltaY - MOVEMENT_RANGE_Y;
//movement in negative X and Y direction
deltaX = dwarfPosition.x - camera.position.x;
deltaY = dwarfPosition.y - camera.position.y;
float movementXNeg = deltaX - MOVEMENT_RANGE_X;
float movementYNeg = deltaY - MOVEMENT_RANGE_Y;
camera.position.x -= Math.max(movementXPos, 0);
camera.position.y -= Math.max(movementYPos, 0);
camera.position.x += Math.max(movementXNeg, 0);
camera.position.y += Math.max(movementYNeg, 0);
camera.update();
}
// ... some other methods ...
}
The question:
I am using padding on the tilemap and also tried different loading parameters and rounding the camera position, but still I have this texture bleeding problem in my tilemap.
What am I missing? Or what am I doing wrong?
Any help on this would be great.
You need to pad the edges of your tiles in your tilesheet.
It looks like you've tried to do this, but the padding is transparent; it needs to be the color of the pixel it is padding.
So if you have an image like this (where each letter is a pixel and the tile size is one):
AB
CB
then padding it should look something like this
A B
AAABBB
A B
C C
CCCCCC
C C
The pixel being padded must be padded with a pixel of the same color.
(I'll try to create a pull request with a fix for your git repo as well.)
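Purely to illustrate the idea in code, here is a rough libGDX sketch of the same padding step (the file names, the 16px tile size and the 2px pad are assumptions, and the corners of each border are left unfilled for brevity):
Pixmap source = new Pixmap(Gdx.files.internal("tileset.png"));
int tileSize = 16, pad = 2, step = tileSize + 2 * pad;
int cols = source.getWidth() / tileSize, rows = source.getHeight() / tileSize;
Pixmap padded = new Pixmap(cols * step, rows * step, source.getFormat());
for (int ty = 0; ty < rows; ty++) {
    for (int tx = 0; tx < cols; tx++) {
        int srcX = tx * tileSize, srcY = ty * tileSize;
        int dstX = tx * step + pad, dstY = ty * step + pad;
        //copy the tile itself
        padded.drawPixmap(source, srcX, srcY, tileSize, tileSize, dstX, dstY, tileSize, tileSize);
        //stretch the 1px edge rows/columns into the border, so the padding repeats the edge color
        padded.drawPixmap(source, srcX, srcY, tileSize, 1, dstX, dstY - pad, tileSize, pad); //top
        padded.drawPixmap(source, srcX, srcY + tileSize - 1, tileSize, 1, dstX, dstY + tileSize, tileSize, pad); //bottom
        padded.drawPixmap(source, srcX, srcY, 1, tileSize, dstX - pad, dstY, pad, tileSize); //left
        padded.drawPixmap(source, srcX + tileSize - 1, srcY, 1, tileSize, dstX + tileSize, dstY, pad, tileSize); //right
    }
}
PixmapIO.writePNG(Gdx.files.local("tileset_padded.png"), padded);
source.dispose();
padded.dispose();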
As a little addition to bornander's answer, I created some Python scripts that do all the work of generating a tileset texture with the correct edge padding (which bornander explained in his answer) from a texture that has no padding yet.
Just in case anyone can make use of it, it can be found on GitHub:
https://github.com/tfassbender/libGdxImageTools
There is also an npm package that can extrude the tiles. It was built for the Phaser JS game library, but you can still use it: https://github.com/sporadic-labs/tile-extruder

Huge Image when Using Pixel Per Meter on Libgdx Box2d World

Hi guys, I am trying to implement a Box2D world. I have read that Box2D uses meters and that you need to convert from pixels to meters.
I tried to draw an image, but do I also have to scale down the image? I think the way I'm drawing the image is a bad idea; the image is very huge and I can't figure out how to make it work with the Box2D pixels-per-meter conversion.
public class TestScreen extends ScreenAdapter {
private final Body body;
private int V_WIDTH = 320;
private int V_HEIGHT = 480;
private int PPM = 100;
private SpriteBatch batch;
private OrthographicCamera camera;
private World world;
private Sprite sprite;
Box2DDebugRenderer box2DDebugRenderer;
public TestScreen(){
batch = new SpriteBatch();
camera = new OrthographicCamera();
camera.setToOrtho(false, V_WIDTH / PPM, V_HEIGHT / PPM);
camera.position.set(0,0,0);
world = new World(new Vector2(0,0) , true);
sprite = new Sprite(new Texture("test/player.png"));
box2DDebugRenderer = new Box2DDebugRenderer();
BodyDef bodyDef = new BodyDef();
bodyDef.type = BodyDef.BodyType.KinematicBody;
body = world.createBody(bodyDef);
FixtureDef fixtureDef = new FixtureDef();
PolygonShape shape = new PolygonShape();
shape.setAsBox(sprite.getWidth()/2 / PPM, sprite.getHeight()/2 / PPM);
fixtureDef.shape = shape;
body.createFixture(fixtureDef);
sprite.setPosition(body.getPosition().x - sprite.getWidth() /2 ,body.getPosition().y - sprite.getHeight() / 2 );
}
@Override
public void render(float delta) {
super.render(delta);
camera.position.set( body.getPosition().x, body.getPosition().y , 0);
camera.update();
world.step(1/60.0f, 6, 2);
batch.setProjectionMatrix(camera.combined);
batch.begin();
sprite.draw(batch);
batch.end();
box2DDebugRenderer.render(world, camera.combined);
}
}
Without PPM:
With PPM:
Should I scale down the image? What is the best way to draw the image?
You don't need to convert from pixels to meters. As a matter of fact, you should forget about pixels. They exist only on your screen, and your game logic should not know anything about your screen. That is what a camera or viewport is for: you specify how much of the world to show and whether the display should be stretched, letterboxed or whatever. So no pixels, period. They are evil and give you wrong ideas.
Now, if you create your own game you can say that a single unit represents 1mm, 34cm or a couple of light years. You tell the object responsible for displaying your game how many of these units to show. However, you are using Box2D, and Box2D has already chosen the unit for you: 1 unit == 1m. It is probably possible to change this, or at least to create a wrapper class that converts your units to the Box2D unit.
The reason it is important to keep true to the Box2D unit is the following. If you drop a marble on the ground it seems to move faster than the sun in the sky. But believe me, the sun moves a lot faster; it only seems slow because it is a lot further away. Since Box2D is all about movement, you should keep true to the unit or things will start to act strange.
Let's just use 1 unit == 1m for now, and suddenly everything becomes a lot simpler by asking a few questions.
How much of your game world do you want to show, in meters?
float width = 20; // 20 meters
//You can calculate the height from your chosen width (or vice versa) to maintain the aspect ratio
float height = ((float) Gdx.graphics.getHeight() / Gdx.graphics.getWidth()) * width;
camera = new OrthographicCamera(width, height);
//Now the center of the camera is at 0,0 in the game world. It's often more desirable and practical to have its bottom-left corner start out at 0,0.
//All we need to do is translate it by half its width and height, since that is the offset from its center point (which is currently at 0,0).
camera.translate(camera.viewportWidth / 2, camera.viewportHeight / 2, 0);
camera.update();
How large is our object? Keep in mind that mass, weight and size are completely different things.
Sprite mySprite = new Sprite(myTexture);
//position it somewhere within the bounds of the camera, in the below case the center
//This sprite also gets a size of 1m by 1m
mySprite.setBounds(width / 2, height / 2, 1, 1);
How do we want the SpriteBatch to draw to the screen?
//We tell the SpriteBatch to use our camera settings to draw
spriteBatch.setProjectionMatrix(camera.combined);
//And draw the sprite using this SpriteBatch
mySprite.draw(spriteBatch);
The same goes for the Box2DDebugRenderer implementation. If you want shapes to show up, you need to use that combined matrix from your camera again to draw them.
box2DDebugRenderer.render(world, camera.combined);
Of course, when things move around you need to update your sprite position accordingly. You can get this information from the box2d.Body object. But this is beyond the scope of your question.
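For completeness, syncing the sprite to the body each frame usually looks something like this (a sketch; it assumes the 1m x 1m sprite from above and a Box2D body called body):
Vector2 bodyPosition = body.getPosition(); //already in meters
mySprite.setPosition(bodyPosition.x - mySprite.getWidth() / 2f, bodyPosition.y - mySprite.getHeight() / 2f);
mySprite.setRotation(MathUtils.radiansToDegrees * body.getAngle());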
To finally show you what is going wrong:
camera.setToOrtho(false, V_WIDTH / PPM, V_HEIGHT / PPM);
Your camera shows 320/100 == 3.2f by 480/100 == 4.8f units of your game world. Your sprite might be 64x64 pixels. You are not telling it anywhere at what size to draw your sprite, so it will assume 1 pixel == 1 unit, while you set your camera to show only 3.2 units in width. We can and should leave pixels out of the equation and just ask what size you want your object to be, then set the Sprite to that size. Here you see that thinking in pixels just gives you problems.
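In other words, keep the camera in world units and give the sprite an explicit size in those same units, for example (a sketch; the 1m x 1m size is just an assumption about how big the player should be):
//the texture may be 64x64 pixels, but the object is drawn 1m wide and 1m tall in the world
sprite.setSize(1f, 1f);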
For a space game where you fly a ship of 100x20 meters in third person, you probably want your camera viewport to be very large. But for an ant game where your ants are real size, you want a very small camera viewport. Do think about physics in real life: Galileo Galilei discovered that objects fall at the same speed, disregarding air resistance. So if that ant dropped a grain of sand it would look like it falls very fast, because your screen represents far fewer meters.
For an implementation of a dropping soccer ball, look at my answer here. It creates a Box2D body and attaches an image to it. I keep the functionality of the ball encapsulated within the Ball() class. (Disclaimer: I have just played around a bit with Box2D and I don't know the exact physical behavior of a soccer ball, so I am not stating this is a correct implementation, but it does show how to set up your scene and have an image represent your Box2D body.)

LibGDX tile map flickering when camera moving

I'm attempting to make a short game for my family for Christmas using libGDX. When moving forward through the level, the edge of the screen flickers, but when going backwards there is no flickering; it's quite annoying.
Here is a demo of what I mean.
Also, here is my code:
if (direction.equals("right")) {
body.setTransform(body.getPosition().x + 1 / PPM, body.getPosition().y, body.getAngle());
b2dCam.position.x += (1 / PPM);
camera.position.x += (1*(PPM/(8/2)));
} else if (direction.equals("left")) {
b2dCam.translate(-1 / PPM, 0);
camera.translate(-1*(PPM/(8/2)), 0);
}
tmr.setView(camera);
tmr.render();
camera.update();
b2dCam.update();
b2dr.render(world, b2dCam.combined);
cntrlOverlay.act();
cntrlOverlay.draw();
world.step(1 / 60f, 6, 2);
Any help would be greatly appreciated, thanks.
I just solved this issue by calling camera.update() before everything else. So instead of:
tmr.setView(camera);
tmr.render();
camera.update();
b2dCam.update();
b2dr.render(world, b2dCam.combined);
cntrlOverlay.act();
cntrlOverlay.draw();
world.step(1 / 60f, 6, 2);
I now use:
camera.update();
tmr.setView(camera);
tmr.render();
b2dCam.update();
b2dr.render(world, b2dCam.combined);
cntrlOverlay.act();
cntrlOverlay.draw();
world.step(1 / 60f, 6, 2);
Two things come to mind.
Do your tiles have at least 2px of padding around them?
When OpenGL pulls textures from an image, it blends the pixels surrounding the texture region you are using with the edge of the texture region. Annoying huh? But there are reasons for it. I couldn't tell for sure, but your video looks like you are getting horizontal gutters (the flickering at the bottom and between the house and the ground).
To fix this, each tile on your image asset needs to have at least 2 pixels of padding all around it. To create the padding, create a 2px wide border around each tile in your image and then copy the edge pixels of the tile into this 2px wide border.
VSync
If you still have issues after trying suggestion 1, I have had some flickering issues with libgdx scrolling when vsync was disabled. You can make sure it is enabled in your "Launcher" classes with:
LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
cfg.vSyncEnabled = true;
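If the project happens to use the newer LWJGL3 desktop backend instead (an assumption about your launcher), the equivalent setting would be:
Lwjgl3ApplicationConfiguration cfg = new Lwjgl3ApplicationConfiguration();
cfg.useVsync(true);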

How to draw in code this path

First, happy new year to all here!
I have a question about drawing a background in code. I have the code for a simple Android game and all assets are in PNG format, except the background. I'm not a programmer (just a newbie learning from live examples).
I think this code draws the background clouds on the screen:
//draw cloud layer 1
background_shader.setColor(getResources().getColor(R.color.blue_dark));
int radius = DrawBackgroundCloud(canvas, (ScreenHeight() / 2), 7);
canvas.drawRect(0, (float) ((ScreenHeight() / 2.2) + radius * 1.5), ScreenWidth(), ScreenHeight(), background_shader);
//draw cloud layer 2
background_shader.setColor(getResources().getColor(R.color.blue_darkest));
radius = DrawBackgroundCloud(canvas, (int) (ScreenHeight() / 1.5), 4);
canvas.drawRect(0, (float) ((ScreenHeight() / 1.7) + radius * 1.5), ScreenWidth(), ScreenHeight(), background_shader);
This draws some random circles as clouds, but I want to change it to draw something like hills or mountains. Here is a picture of the current background and what I'm looking for.
http://prntscr.com/5nqa25
Can anyone help me with this? I would be really thankful.
Responding to the further question in the comment:
You can't really do that with canvas.drawColor, but you can use a proper Paint object and canvas.drawPaint (or another canvas method that takes a Paint object, if you want, for example, to draw a shape with a gradient).
The key part of creating your gradient Paint object is calling its setShader(...) method. For example like so:
mGradientPaint = new Paint();
mGradientPaint.setStyle(Paint.Style.FILL);
mGradientPaint.setShader(new LinearGradient(0, 0, 0, getHeight(), Color.TRANSPARENT, Color.GREEN, Shader.TileMode.MIRROR));
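To get hills or mountains rather than circles, one approach (a sketch with made-up coordinates, not the answerer's code) is to build an android.graphics.Path from a few curves and fill it with a Paint like the one above:
Path hills = new Path();
hills.moveTo(0, ScreenHeight()); //start at the bottom-left corner
hills.lineTo(0, ScreenHeight() * 0.6f); //up the left edge to the first hilltop height
//two rounded hills across the screen, drawn with quadratic curves
hills.quadTo(ScreenWidth() * 0.25f, ScreenHeight() * 0.35f, ScreenWidth() * 0.5f, ScreenHeight() * 0.6f);
hills.quadTo(ScreenWidth() * 0.75f, ScreenHeight() * 0.4f, ScreenWidth(), ScreenHeight() * 0.65f);
hills.lineTo(ScreenWidth(), ScreenHeight()); //down the right edge
hills.close(); //back along the bottom to the start
canvas.drawPath(hills, mGradientPaint);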

Camera following target in Box2D

I am trying to follow the player with the camera in a Box2D world, but there is an offset, and I think it has something to do with the pixel-per-meter conversion. Before you check my code, you should know that Values.WTB means World_To_Box and has a value of 0.032f, and Values.BTW means Box_To_World and has a value of 32f.
Here is the render part:
@Override
public void render(float delta) {
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl.glClearColor(0.105f,0.105f,0.105f,1f);
camera.position.set(player.getPosition().x*Values.BTW, player.getPosition().y*Values.BTW, 0);
camera.update();
Matrix4 cameraCopy = camera.combined.cpy();
cameraCopy.scl(Values.BTW);
batch.setProjectionMatrix(cameraCopy);
shapeRenderer.setProjectionMatrix(cameraCopy);
batch.begin();
player.draw(batch);
batch.end();
debugRenderer.render(world, cameraCopy);
world.step(1/60f, 6, 2);
shapeRenderer.begin(ShapeType.Filled);
shapeRenderer.setColor(Color.GREEN);
shapeRenderer.circle(player.getPosition().x, player.getPosition().y, 5*Values.WTB,10);
shapeRenderer.setColor(Color.ORANGE);
shapeRenderer.circle(camera.position.x*Values.WTB, camera.position.y*Values.WTB, 5*Values.WTB,10);
shapeRenderer.end();
}
And here is a picture to demonstrate:
The green point is where the center of the player is and the orange point is where the camera center is. The further you go from the 0,0 coordinates, the bigger the offset gets.
What am I doing wrong?
Values.WTB = World_To_Box and has a values of 0.032f and Values.BTW = Box_To_World and has a values of 32f
There is no reason to change your WTB / BTW values to 0.01f and 100f as others have suggested, since yours are nearly correct. Conversions by powers of two are also a lot faster than conversions by 100.
If you want 32 screen pixels per Box2D meter then keep using Values.BTW = 32f. But then Values.WTB would be 1f / 32f = 0.03125f, not 0.032f. It is just a small difference, but it adds up in the end.
Change your values to:
static final float WORLD_TO_BOX = 0.01f;
static final float BOX_TO_WORLD = 100f;
Why 0.032 and 32 are not working:
For example if you want to convert 100px to Box2d units:
100 * 0.032 = 3.2
And then from Box2d units to pixels:
3.2 * 32 = 102.4
And of course the difference will be bigger if you are converting bigger values.
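Either way, the two constants should be exact inverses of each other; a tiny sketch (illustrative values only) shows the difference:
static final float BOX_TO_WORLD = 32f;
static final float WORLD_TO_BOX = 1f / BOX_TO_WORLD; //0.03125f, the exact inverse
//round trip with the exact inverse: 100 px -> 3.125 m -> 100 px, no drift
//round trip with 0.032f instead: 100 px -> 3.2 m -> 102.4 px, an error that grows with distance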
