I'm working on a LibGDX game which uses a smaller viewport.
public static float BOX_SCALE = 10;
public static final float VIRTUAL_WIDTH = (int) (320 / BOX_SCALE);
public static final float VIRTUAL_HEIGHT = (int) (480 / BOX_SCALE);
float viewportHeight = MyConstants.Screen.VIRTUAL_HEIGHT;
float viewportWidth = MyConstants.Screen.VIRTUAL_HEIGHT * Gdx.graphics.getWidth() / Gdx.graphics.getHeight();
For example, my viewport can have the size (32, 48). I use Scene2D for rendering. For some reason, whenever I create a TextButton the text is never centered. This is the BitmapFont used for the button:
FreeTypeFontParameter fontParam = new FreeTypeFontParameter();
fontParam.size = 14;
FreeTypeFontGenerator generator2 = new FreeTypeFontGenerator(Gdx.files.internal("data/font.ttf"));
labelFont = generator2.generateFont(fontParam);
labelFont.setScale(1f / BOX_SCALE);
labelFont.setColor(Color.BLACK);
If I set BOX_SCALE to 1, the TextButton behaves normally, but I need it for simulating the Box2D world. I guess I could create separate labels for each button and position them manually, but I can't figure out why this is happening. I'm also interested in whether there is a cleaner solution.
By default, font positions are rounded to the nearest world unit. This is based on the assumption that your font will render pixel-perfect. In your case, you don't want a pixel-perfect font, so call:
labelFont.setUseIntegerPositions(false);
Also, in your fontParam you should enable mipmaps, set the minFilter to MipMapLinearNearest, and set the magFilter to Linear. That will make it look better, since the default filtering is Nearest/Nearest, which looks bad if you aren't rendering pixel-perfect.
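Putting both fixes together, a minimal sketch of the font setup (genMipMaps, minFilter and magFilter are the standard FreeTypeFontParameter fields; everything else reuses the question's values):

FreeTypeFontParameter fontParam = new FreeTypeFontParameter();
fontParam.size = 14;
fontParam.genMipMaps = true;                                     // build mipmaps for smooth downscaling
fontParam.minFilter = Texture.TextureFilter.MipMapLinearNearest; // used when glyphs render smaller than the texture
fontParam.magFilter = Texture.TextureFilter.Linear;              // used when glyphs render larger

FreeTypeFontGenerator generator2 = new FreeTypeFontGenerator(Gdx.files.internal("data/font.ttf"));
labelFont = generator2.generateFont(fontParam);
labelFont.setUseIntegerPositions(false); // don't snap glyph positions to whole world units
labelFont.setScale(1f / BOX_SCALE);
labelFont.setColor(Color.BLACK);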
I know there are quite a few questions (and answers) on this topic, but they all have different solutions, and none of them seems to work in my case.
I'm developing a small test project with libGDX, in which I tried to add a simple tilemap. I created the tilemap using Tiled, which seems to work quite well, except for the texture bleeding that sometimes causes black lines (the background color) to appear between the tiles.
What I've tried so far:
I read several SO questions, tutorials and forum posts, and tried almost all of the solutions, but I just don't seem to get this working. Most of the answers said that I would need padding between the tiles, but this doesn't seem to fix it. I also tried loading the tilemap with different parameters (e.g. using the Nearest filter when loading) and rounding the camera's position to prevent rounding problems, but that even made it worse.
My current setup:
You can find the whole project on GitHub. The branch is called 'tile_map_scaling'
At the moment I'm using a tileset that is made of this tile-picture:
It has two pixels of space between every tile, to use as padding and margin.
My Tiled tileset settings look like this:
I use two pixels of margin and spacing, to (try to) prevent the bleeding here.
Most of the time it is rendered just fine, but still sometimes there are these lines between the tiles like in this picture (sometimes they seem to appear only on a part of the map):
I'm currently loading the tile map into the asset manager without any parameters:
public void load() {
    AssetManager manager = new AssetManager();
    manager.setLoader(TiledMap.class, new TmxMapLoader(new InternalFileHandleResolver()));
    manager.setErrorListener(this);
    manager.load("map/map.tmx", TiledMap.class, new AssetLoaderParameters());
}
... and use it like this:
public class GameScreen {

    public static final float WORLD_TO_SCREEN = 4.0f;
    public static final float SCENE_WIDTH = 1280f;
    public static final float SCENE_HEIGHT = 720f;
    //...

    private Viewport viewport;
    private OrthographicCamera camera;
    private TiledMap map;
    private OrthogonalTiledMapRenderer renderer;

    public GameScreen() {
        camera = new OrthographicCamera();
        viewport = new FitViewport(SCENE_WIDTH, SCENE_HEIGHT, camera);
        map = assetManager.get("map/map.tmx");
        renderer = new OrthogonalTiledMapRenderer(map);
    }

    @Override
    public void render(float delta) {
        //clear the screen (with black)
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        moveCamera(delta);
        renderer.setView(camera);
        renderer.render();
        //... draw the player, some debug graphics, a hud, ...
        moveCameraToPlayer();
    }

    private void moveCamera(float delta) {
        if (Gdx.input.isKeyPressed(Keys.LEFT)) {
            camera.position.x -= CAMERA_SPEED * delta;
        }
        else if (Gdx.input.isKeyPressed(Keys.RIGHT)) {
            camera.position.x += CAMERA_SPEED * delta;
        }
        // ...
        //update the camera to re-calculate the matrices
        camera.update();
    }

    private void moveCameraToPlayer() {
        Vector2 dwarfPosition = dwarf.getPosition();

        //movement in positive X and Y direction
        float deltaX = camera.position.x - dwarfPosition.x;
        float deltaY = camera.position.y - dwarfPosition.y;
        float movementXPos = deltaX - MOVEMENT_RANGE_X;
        float movementYPos = deltaY - MOVEMENT_RANGE_Y;

        //movement in negative X and Y direction
        deltaX = dwarfPosition.x - camera.position.x;
        deltaY = dwarfPosition.y - camera.position.y;
        float movementXNeg = deltaX - MOVEMENT_RANGE_X;
        float movementYNeg = deltaY - MOVEMENT_RANGE_Y;

        camera.position.x -= Math.max(movementXPos, 0);
        camera.position.y -= Math.max(movementYPos, 0);
        camera.position.x += Math.max(movementXNeg, 0);
        camera.position.y += Math.max(movementYNeg, 0);

        camera.update();
    }

    // ... some other methods ...
}
The question:
I am using padding on the tilemap and have also tried different loading parameters and rounding the camera position, but I still have this texture-bleeding problem in my tilemap.
What am I missing? Or what am I doing wrong?
Any help on this would be great.
You need to pad the edges of your tiles in your tilesheet.
It looks like you've tried to do this, but the padding is transparent; it needs to be the color of the pixel it is padding.
So if you have an image like this (where each letter is a pixel and the tile size is one):
AB
CB
then padding it should look something like this:
 A  B
AAABBB
 A  B
 C  B
CCCBBB
 C  B
Each pixel being padded must be padded with pixels of the same color.
(I'll try to create a pull request with a fix for your git repo as well.)
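If you'd rather generate that padding in code than in an image editor, here is a rough sketch of the idea using libGDX's Pixmap (the file name, tile size and source coordinates are made-up placeholders, and it pads a single tile; a real tool would loop over the whole tileset and write the result out with PixmapIO):

// assumed inputs: the unpadded tileset and one tile's position/size within it
Pixmap src = new Pixmap(Gdx.files.internal("tileset.png")); // hypothetical path
int tileSize = 16, srcX = 0, srcY = 0;

// copy the tile into a pixmap with a 1px border, then replicate its edge pixels into that border
Pixmap padded = new Pixmap(tileSize + 2, tileSize + 2, Pixmap.Format.RGBA8888);
padded.drawPixmap(src, 1, 1, srcX, srcY, tileSize, tileSize);                    // the tile itself
padded.drawPixmap(src, 1, 0, srcX, srcY, tileSize, 1);                           // top edge row
padded.drawPixmap(src, 1, tileSize + 1, srcX, srcY + tileSize - 1, tileSize, 1); // bottom edge row
padded.drawPixmap(src, 0, 1, srcX, srcY, 1, tileSize);                           // left edge column
padded.drawPixmap(src, tileSize + 1, 1, srcX + tileSize - 1, srcY, 1, tileSize); // right edge column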
As a little addition to bornander's answer, I created some Python scripts that do all the work of generating a tileset texture with the correct edge padding (as explained in bornander's answer) from a texture that has no padding yet.
Just in case anyone can make use of it, it can be found on GitHub:
https://github.com/tfassbender/libGdxImageTools
There is also an npm package that can extrude the tiles. It was built for the Phaser JS game library, but you could still use it: https://github.com/sporadic-labs/tile-extruder
I've spent several frustrating hours trying to implement (what I thought would be) a simple FontActor class.
The idea is to just draw text at a specific position using a provided BitmapFont. That much, I've managed to accomplish. However, I'm struggling to compute my actor's width/height based on the rendered text.
(Using a FitViewport for testing)
open class FontActor<T : BitmapFont>(val font: T, var text: CharSequence = "") : GameActor() {

    val layout = Pools.obtain(GlyphLayout::class.java)!!

    companion object {
        val identity4 = Matrix4().idt()
        val distanceFieldShader: ShaderProgram = DistanceFieldFont.createDistanceFieldShader()
    }

    override fun draw(batch: Batch?, parentAlpha: Float) {
        if (batch == null) return
        batch.end()

        // grab ui camera and backup current projection
        val uiCamera = Game.context.inject<OrthographicCamera>()
        val prevTransform = batch.transformMatrix
        val prevProjection = batch.projectionMatrix
        batch.transformMatrix = identity4
        batch.projectionMatrix = uiCamera.combined
        if (font is DistanceFieldFont) batch.shader = distanceFieldShader

        // the actor has pos = x,y in local coords, but we need UI coords
        // start by getting group -> stage coords (world)
        val coords = Vector3(localToStageCoordinates(Vector2(0f, 0f)), 0f)

        // world coordinate destination -> screen coords
        stage.viewport.project(coords)

        // screen coords -> font camera world coords
        uiCamera.unproject(coords,
                stage.viewport.screenX.toFloat(),
                stage.viewport.screenY.toFloat(),
                stage.viewport.screenWidth.toFloat(),
                stage.viewport.screenHeight.toFloat())

        // adjust position by cap height so that bottom left of text aligns with x, y
        coords.y = uiCamera.viewportHeight - coords.y + font.capHeight

        /// TODO: use BitmapFontCache to prevent this call on every frame and allow for offline bounds calculation
        batch.begin()
        layout.setText(font, text)
        font.draw(batch, layout, coords.x, coords.y)
        batch.end()

        // viewport screen coordinates -> world coordinates
        setSize((layout.width / stage.viewport.screenWidth) * stage.width,
                (layout.height / stage.viewport.screenHeight) * stage.height)

        // restore camera
        if (font is DistanceFieldFont) batch.shader = null
        batch.projectionMatrix = prevProjection
        batch.transformMatrix = prevTransform
        batch.begin()
    }
}
And in my parent Screen class implementation, I rescale my fonts on every window resize so that they don't become "smooshed" or stretched:
override fun resize(width: Int, height: Int) {
    stage.viewport.update(width, height)
    context.inject<OrthographicCamera>().setToOrtho(false, width.toFloat(), height.toFloat())

    // rescale fonts
    scaleX = width.toFloat() / Config.screenWidth
    scaleY = height.toFloat() / Config.screenHeight
    val scale = minOf(scaleX, scaleY)
    gdxArrayOf<BitmapFont>().apply {
        Game.assets.getAll(BitmapFont::class.java, this)
        forEach { it.data.setScale(scale) }
    }
    gdxArrayOf<DistanceFieldFont>().apply {
        Game.assets.getAll(DistanceFieldFont::class.java, this)
        forEach { it.data.setScale(scale) }
    }
}
This works and looks great until you resize your window.
After a resize, the fonts look fine and automatically adjust with the relative size of the window, but the FontActor has the wrong size, because my call to setSize is wrong.
Initial window:
After making window horizontally larger:
For example, if I scale my window horizontally (which has no effect on the world size, because I'm using a FitViewport), the font looks correct, just as intended. However, the layout.width value coming back from draw() changes, even though the text size hasn't changed on-screen. After investigating, I realized this is due to my use of setScale, but simply dividing the width by the x-scaling factor doesn't correct the error. And if I remove my setScale calls, the numbers make sense, but the font is squished!
Another strategy I tried was converting the width/height into screen coordinates, then using the relevant project/unproject methods to get the width and height in world coordinates. This suffers from the same issue shown in the images.
How can I fix my math?
Or, is there a smarter/easier way to implement all of this? (No, I don't want Label, I just want a text actor.)
One problem was my scaling code.
The fix was to change the camera update as follows:
context.inject<OrthographicCamera>().setToOrtho(false, stage.viewport.screenWidth.toFloat(), stage.viewport.screenHeight.toFloat())
Which causes my text camera to match the world viewport camera. I was using the entire screen for my calculations, hence the stretching.
My scaleX/Y calculations were wrong for the same reason. After correcting both of those miscalculations, I have a nicely scaling FontActor with correct bounds in world coordinates.
Hi guys, I am trying to implement a Box2D world. I have read that Box2D uses meters, and that you need to convert from pixels to meters.
I tried to draw an image, but do I also have to scale down the image? I think it is a bad idea to draw the image like that; the image is very large, and I can't figure out what to do to make it work with the Box2D pixels-per-meter conversion.
public class TestScreen extends ScreenAdapter {

    private final Body body;

    private int V_WIDTH = 320;
    private int V_HEIGHT = 480;
    private int PPM = 100;

    private SpriteBatch batch;
    private OrthographicCamera camera;
    private World world;
    private Sprite sprite;
    Box2DDebugRenderer box2DDebugRenderer;

    public TestScreen() {
        batch = new SpriteBatch();
        camera = new OrthographicCamera();
        camera.setToOrtho(false, V_WIDTH / PPM, V_HEIGHT / PPM);
        camera.position.set(0, 0, 0);

        world = new World(new Vector2(0, 0), true);
        sprite = new Sprite(new Texture("test/player.png"));
        box2DDebugRenderer = new Box2DDebugRenderer();

        BodyDef bodyDef = new BodyDef();
        bodyDef.type = BodyDef.BodyType.KinematicBody;
        body = world.createBody(bodyDef);

        FixtureDef fixtureDef = new FixtureDef();
        PolygonShape shape = new PolygonShape();
        shape.setAsBox(sprite.getWidth() / 2 / PPM, sprite.getHeight() / 2 / PPM);
        fixtureDef.shape = shape;
        body.createFixture(fixtureDef);

        sprite.setPosition(body.getPosition().x - sprite.getWidth() / 2, body.getPosition().y - sprite.getHeight() / 2);
    }

    @Override
    public void render(float delta) {
        super.render(delta);

        camera.position.set(body.getPosition().x, body.getPosition().y, 0);
        camera.update();

        world.step(1 / 60.0f, 6, 2);

        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        sprite.draw(batch);
        batch.end();

        box2DDebugRenderer.render(world, camera.combined);
    }
}
Without PPM:
With PPM:
Should I scale down the image? What is the best way to draw the image?
You don't need to convert from pixels to meters. As a matter of fact, you should forget about pixels. They exist only on your screen, and your game logic should not know anything about your screen. That is what a camera or viewport is for: you specify how much of the world to show and whether the display should be stretched, letterboxed or whatever. So no pixels, period. They are evil and give you wrong ideas.
Now, if you create your own game, you can say that a single unit represents 1 mm, 34 cm or a couple of light-years. You tell the object responsible for displaying your game how many of these units to display. However, you are using Box2D, and Box2D has already chosen the unit for you: 1 unit == 1 m. It is probably possible to change this, or at least to create a wrapper class that converts your units to the Box2D unit.
The reason why it is important to stay true to the Box2D unit is the following: if you drop a marble on the ground, it seems to be moving faster than the sun in the sky. Believe me, the sun is moving a lot faster, but since it is a lot further away it appears to move slowly. Since Box2D is all about movement, you should stay true to the unit or things will start to act strangely.
Let's just use 1 unit == 1 m for now, and suddenly everything should become a lot simpler once we ask a few questions.
How much of your game world do you want to show, in meters?
float width = 20; // 20 meters
//Calculate the height from your chosen width (or vice versa) to maintain the aspect ratio.
//Cast to float first: getHeight() and getWidth() return ints, and integer division would truncate the ratio.
float height = ((float) Gdx.graphics.getHeight() / Gdx.graphics.getWidth()) * width;
camera = new OrthographicCamera(width, height);
//Now the center of the camera is at 0,0 in the game world. It is often more desirable
//and practical to have its bottom-left corner start out at 0,0.
//All we need to do is translate it by half its width and height, since that is the
//offset from its center point (which is currently at 0,0).
camera.translate(camera.viewportWidth / 2, camera.viewportHeight / 2, 0);
camera.update();
How large is our object? Keep in mind that mass, weight and size are completely different things.
Sprite mySprite = new Sprite(myTexture);
//position it somewhere within the bounds of the camera, in the below case the center
//This sprite also gets a size of 1m by 1m
mySprite.setBounds(width / 2, height / 2, 1, 1);
How do we want the SpriteBatch to draw to the screen?
//We tell the SpriteBatch to use our camera settings to draw
spriteBatch.setProjectionMatrix(camera.combined);
//And draw the sprite using this SpriteBatch
mySprite.draw(spriteBatch);
The same goes for the Box2DDebugRenderer implementation. If you want the shapes to show, you need to use that combined matrix from your camera to draw them.
box2DDebugRenderer.render(world, camera.combined);
Of course, when things move around you need to update your sprite position accordingly. You can get this information from the box2d.Body object, but this is beyond the scope of your question.
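For completeness, a minimal sketch of that sync, run each frame after world.step() (names taken from the question; it assumes the sprite's size has already been set in world units):

// center the sprite on the body and match its rotation
Vector2 pos = body.getPosition();
sprite.setPosition(pos.x - sprite.getWidth() / 2f, pos.y - sprite.getHeight() / 2f);
sprite.setRotation(MathUtils.radiansToDegrees * body.getAngle());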
To finally show you what is going wrong:
camera.setToOrtho(false, V_WIDTH / PPM, V_HEIGHT / PPM);
Your camera shows 320/100 x 480/100 units of your game world; beware, though, that since V_WIDTH, V_HEIGHT and PPM are all ints, the integer division actually yields 3 x 4 here rather than 3.2f x 4.8f (cast to float if that is what you want). Your sprite might be 64x64 pixels. You are not telling it anywhere at what size to draw your sprite, so it will assume 1 pixel == 1 unit, while your camera shows only a few units in width. We can and should leave pixels out of the equation and just ask what size you want your object to be, then set the Sprite to that size. Here you see that thinking in pixels just gives you problems.
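For example, a minimal sketch (the 0.64 m size is only an assumption, matching a 64x64 px texture at the asker's intended 100 PPM):

// give the sprite an explicit size and origin in world units instead of pixels
sprite.setSize(0.64f, 0.64f); // 0.64 m x 0.64 m in Box2D units
sprite.setOriginCenter();     // rotate around the sprite's middle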
For a space game where you fly a 100x20 m ship in third person, you probably want your camera viewport to be very large. But for an ant game where your ants are life-size, you want a very small camera viewport. Do think about physics in real life: Galileo discovered that objects fall at the same rate, disregarding air resistance. So if that ant dropped a grain of sand, it would look like it falls very fast, because your screen represents far fewer meters.
For an implementation of a dropping soccer ball, look at my answer here. It creates a Box2D body and attaches an image to it. I keep the functionality of the ball encapsulated within the Ball() class. (Disclaimer: I have just played around a bit with Box2D and I don't know the exact physical behavior of a soccer ball, so I am not claiming this is a correct implementation, but it does show how to set up your scene and have an image represent your Box2D body.)
I want everyone to see the same things on their screen regardless of their screen size and aspect ratio, so this is the code I am currently using. (I am also sending net data across with the coordinates of where the other players are on the screen.)
int width = 1920, height = 1080;
public OrthographicCamera camera;
Viewport viewport;
//constructor
camera = new OrthographicCamera();
viewport = new ScalingViewport(Scaling.stretch, width, height, camera);
viewport.apply();
camera.position.set(camera.viewportWidth / 2, camera.viewportHeight / 2, 0);
camera.update();
public void resize(int width, int height) {
    viewport.update(width, height);
    camera.position.set(camera.viewportWidth / 2, camera.viewportHeight / 2, 0);
}
Now, for example, I wanted 10 perfect squares going across the middle of the screen, so I made them 192 by 192 pixels. My system right now works perfectly, except for the fact that it is rendered internally at 1920x1080 on all devices, big and small. How would I convert my camera to units and get the size needed for 10 perfect squares to go across the screen? Is that even possible?
Here is my code to draw 10 squares across the screen:
float size = 192;
for (int i = 0; i < 10; i++) {
    walls.add(new Stuff(i * size, height / 2 - size / 2, size, size, "middle", 1, 1, 0, 1));
}
How would I convert all this code to, say, units? Or is this an acceptable approach?
You are already using units; they just aren't very meaningful (and they certainly aren't pixels). If you want to use meaningful units (e.g. SI units), then the only thing you have to change in this code is the values. E.g. if the size of your stuff (a wall?) is, say, 2 meters, then use the value 2 instead of 192. And if you want your user's screen to be, say, 20 meters wide (e.g. 10 walls) with a 16:9 aspect ratio, then use that for the viewport's worldWidth and worldHeight.
float worldWidth = 20;
float worldHeight = worldWidth * 9f / 16f;
...
viewport = new StretchViewport(worldWidth, worldHeight, camera);
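The asker's wall loop then keeps exactly its shape, only the values change (a sketch reusing the question's own Stuff class, the 2 m wall size from above, and the worldHeight computed above):

float size = 2f; // each wall is 2 meters
for (int i = 0; i < 10; i++) {
    walls.add(new Stuff(i * size, worldHeight / 2 - size / 2, size, size, "middle", 1, 1, 0, 1));
}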
Make sure to understand that these "pixels" you are talking about only exist in your imagination. See also: http://blog.xoppa.com/pixels/.
You created your ScalingViewport with a width of 1920, so the width in world units will be 1920 on all screens, no matter what. Also, your scene will be distorted on any screen that is not 16:9, since you are stretching to fit whatever the screen is. (Because of the distortion, I personally would never use ScalingViewport with Scaling.stretch, aka StretchViewport.)
If you want your squares to look square on all screens with this type of viewport, you'll have to do some math to change their height (but their width should always be 192 if you want exactly ten to fit across the screen).
public void resize(int width, int height) {
    float viewportAspect = 1920f / 1080f;
    float screenAspect = (float) width / (float) height; //Make sure you cast to floats
    boxHeight = 192 * screenAspect / viewportAspect;
    viewport.update(width, height, true);
}
The camera always shows the scene in world units, so there's no conversion to do.
I have a BitmapFont that is displaying a player's score as he moves across the screen at a constant rate. Because the player is always moving, I have to recalculate the position at which I draw the font every frame. I use this code:
scoreFont.setScale(4f, 4f);
scoreFont.draw(batch, "" + scoreToShow, playerGhost.pos.x + 100f, 600f);
playerGhost.render(batch);
The problem? The font won't stop shaking. It's only a couple of pixels worth of vibration, but it's slightly noticeable. It's more noticeable when I run it on my tablet.
Is this a known bug?
How can I get it to stop shaking?
Call scoreFont.setUseIntegerPositions(false); so it won't round the font's position to the nearest integer. You will also probably want to set the font's min filtering to Linear or MipMapLinearNearest, and its mag filtering to Linear.
The reason for the default behavior is that the default configuration is meant for pixel-perfect text, with a viewport whose units are exactly the size of a pixel. If your viewport had dimensions exactly matching the screen's pixel dimensions, this configuration would help keep text from looking slightly blurry.
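A minimal sketch of both changes (this assumes the font's texture was created with mipmaps; if it wasn't, use Linear instead of MipMapLinearNearest for the min filter):

scoreFont.setUseIntegerPositions(false); // stop snapping glyph positions to whole pixels
scoreFont.getRegion().getTexture().setFilter(
        Texture.TextureFilter.MipMapLinearNearest, // min filter, used when text is drawn smaller
        Texture.TextureFilter.Linear);             // mag filter, used when text is drawn larger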
It could actually be the fact that you're scaling your font.
I had this problem and it's quite complex to understand (and also to fix).
Basically, when you scale fonts, BitmapFont changes the values inside the BitmapFontData by dividing/multiplying. If you do a lot of scaling, with a lot of different values (or an unlucky combination of values), it can introduce rounding errors which can cause flickering around the edges of the font.
The solution I implemented in the end was to write a Fontholder which stores all of the original BitmapFontData values. I then reset the font data to those original values at the beginning of every frame (i.e. start of render() method).
Here's the code...
package com.bigcustard.blurp.core;

import com.badlogic.gdx.graphics.g2d.*;

public class FontHolder {

    private BitmapFont font;
    private final float lineHeight;
    private final float spaceWidth;
    private final float xHeight;
    private final float capHeight;
    private final float ascent;
    private final float descent;
    private final float down;
    private final float scaleX;
    private final float scaleY;

    public FontHolder(BitmapFont font) {
        this.font = font;
        BitmapFont.BitmapFontData data = font.getData();
        this.lineHeight = data.lineHeight;
        this.spaceWidth = data.spaceWidth;
        this.xHeight = data.xHeight;
        this.capHeight = data.capHeight;
        this.ascent = data.ascent;
        this.descent = data.descent;
        this.down = data.down;
        this.scaleX = data.scaleX;
        this.scaleY = data.scaleY;
    }

    // Call this at start of each frame.
    public void reset() {
        BitmapFont.BitmapFontData data = font.getData();
        data.lineHeight = this.lineHeight;
        data.spaceWidth = this.spaceWidth;
        data.xHeight = this.xHeight;
        data.capHeight = this.capHeight;
        data.ascent = this.ascent;
        data.descent = this.descent;
        data.down = this.down;
        data.scaleX = this.scaleX;
        data.scaleY = this.scaleY;
    }

    public BitmapFont getFont() {
        return font;
    }
}
I'm not wild about this, as it's slightly hacky, but it's a necessary evil, and will completely and properly solve the issue.
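Usage is then straightforward (a sketch; zoom stands for whatever scale factor you apply per frame):

FontHolder holder = new FontHolder(font);

// at the start of each render():
holder.reset();                            // restore the pristine font metrics
holder.getFont().getData().setScale(zoom); // then apply this frame's scale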
The correct way to handle this would be to use two different cameras and two different SpriteBatches: one for the game itself and one for the UI.
You call the update() method on both cameras, and use spriteBatch.setProjectionMatrix(camera.combined); on each batch to render them at the same time each frame.
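A rough sketch of that frame structure (the camera, batch and hudX/hudY names are made up):

gameCamera.update();
uiCamera.update();

gameBatch.setProjectionMatrix(gameCamera.combined);
gameBatch.begin();
// ... draw the world in world units ...
gameBatch.end();

uiBatch.setProjectionMatrix(uiCamera.combined);
uiBatch.begin();
scoreFont.draw(uiBatch, "" + scoreToShow, hudX, hudY); // HUD text in UI units
uiBatch.end();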