LibGdx - Scaling Sprites, Camera and Viewports - java

Hello all & thanks for reading,
I recently started working on a 2D Android/Desktop project and have become stuck trying to display my sprites the way I want.
I have a background sprite that is 144 (w) by 160 (h), and I want to be able to position other sprites on the screen relative to points on that background sprite.
I think I understand that if I create a camera/viewport that is 144 x 160, I can position my sprites on the background using coordinates based on the background sprite's 144 x 160 size. This works across the different screen resolutions found on mobile devices, but it stretches the background sprite despite my experimenting with the different viewport types (FillViewport, FitViewport, etc.).
What I want to achieve is for my background sprite to maintain its ratio across different screen resolutions, and to be able to place other sprites over the background sprite. The placing of sprites needs to work across different resolutions.
Apologies if my explanation is confusing or makes no sense. I would add some images to help explain, but I lack the reputation to add any to the post. However, I think the TL;DR question is: "What is the correct way to display sprites on multiple screen resolutions while keeping correct ratios, scaling to the screen size, and positioning sprites in a way that works across multiple resolutions?"
Thanks, all questions welcome.

A FitViewport would do what you described (maintain aspect ratio), but you will have black bars on some devices. Based on the code you posted on the libgdx forum, I see that you forgot to update the viewport in the resize method, so it is not behaving as designed.
However, for a static camera game like what you described, I think the best solution would be to plan your game around a certain area that is always visible on any device, for example, the box from (0,0) to (144,160). Then use an ExtendViewport with width and height of 144 and 160. After you update the viewport in resize, you can move the camera to be centered on the rectangle like this:
private static final float GAME_WIDTH = 144;
private static final float GAME_HEIGHT = 160;

public void create(){
    //...
    viewport = new ExtendViewport(GAME_WIDTH, GAME_HEIGHT);
    //...
}

public void resize(int width, int height){
    viewport.update(width, height, false); // passing true here would put (0,0) at the bottom left of the screen, but then the game rectangle would be off center
    // manually center the camera on your game box
    Camera camera = viewport.getCamera();
    camera.position.x = GAME_WIDTH / 2;
    camera.position.y = GAME_HEIGHT / 2;
    camera.update();
}
Now your 144x160 box is centered on the screen as it would be with FitViewport, but you are not locked into having black bars, because you can draw extra background outside the 144x160 area using whatever method you like.
In your case 144:160 is a wider portrait aspect ratio than any screen out there, so you wouldn't need to worry about ever filling in area to the sides of your game rectangle. The narrowest aspect ratio of any phone or tablet seems to be 9:16, so you can do the math to see how much extra background above and below the game rectangle should be drawn to avoid black showing through on any device.
In this case it works out to 48 units above and below the rectangle that you would want to fill in: a viewport 144 units wide at a 9:16 ratio would be 256 units tall, and (256 - 160) / 2 = 48.
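To make that concrete, here is a rough sketch (not from the original answer) of how those filler strips could be drawn; backgroundFiller and backgroundSprite are assumed assets, batch is a SpriteBatch, and any other drawing approach works just as well:

batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
batch.draw(backgroundFiller, 0, -48, 144, 48); // strip below the game box (assumed filler texture)
batch.draw(backgroundFiller, 0, 160, 144, 48); // strip above the game box
backgroundSprite.draw(batch);                  // the 144x160 game box itself
batch.end();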
EDIT: I see from your post on the libgdx forum that you want the game area stuck at the top of the screen and the remainder of the area to be used for game controls. In that case, I would change the resize method like this, since you want to have the game area's top edge aligned with the top edge of the screen. You can also calculate where the bottom of the controls area will be on the Y axis. (The top will be at Y=0.)
public void resize(int width, int height){
    viewport.update(width, height, false);
    // align the game box's top edge with the top of the screen
    Camera camera = viewport.getCamera();
    camera.position.x = GAME_WIDTH / 2;
    camera.position.y = GAME_HEIGHT - viewport.getWorldHeight() / 2;
    camera.update();
    controlsBottomY = GAME_HEIGHT - viewport.getWorldHeight();
}
I'm not sure how you plan to do your controls, but they would need to fit in the box from (0, controlsBottomY) to (GAME_WIDTH, 0). Keep in mind that there are some phones with aspect ratios as small as 3:4 (although rare now). So with your 0.9 aspect ratio, on a 3:4 phone only the bottom 17% of the screen would be available for controls, which might be fine if it's just a couple of buttons, but would probably be problematic if you have a virtual joystick.
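If it helps, here is a minimal sketch (not from the original answer) of testing a touch against that controls box, assuming the viewport and controlsBottomY fields from the resize method above:

Vector2 touch = new Vector2(Gdx.input.getX(), Gdx.input.getY());
viewport.unproject(touch); // convert screen pixels to world units
boolean inControlsArea = touch.x >= 0 && touch.x <= GAME_WIDTH
        && touch.y >= controlsBottomY && touch.y <= 0;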

Related

Pan within large image borders with LibGdx

I am building a small test game, basically Where's Waldo. I have a large image that I can pan around to look for Waldo, but I can't figure out how to keep the camera within the sprite's borders (x, y). Right now you can pan past the image borders and on and on forever.
Relevant code:
sprite.setPosition(-sprite.getWidth()/2, -sprite.getHeight()/2);
public boolean pan(float x, float y, float deltaX, float deltaY) {
    camera.translate(-deltaX * PAN_SPEED, deltaY * PAN_SPEED);
    camera.update();
    return true;
}
There isn't much; I've tried quite a few things, but the problem is I can't figure out how to get the distance I have panned, and I need that if I am going to put up a "border". Right now sprite.getX() == -2200, and the camera viewport is only 480x800, so I am having a hard time relating the image size, the viewport, and the distance that has been panned.
I've had to solve this before, but I did it very inelegantly. I basically just had a log printing out my current camera's position as I panned around. When I could see off the edge of the screen (to the blank GL wipe underneath), I wrote down the camera's position.
I ended up knowing that if camera.getX() > sprite.getWidth() - someVal, the edge would be on screen. So I just added a method that clipped down any X/Y value of the camera if it overshot these predefined bounds.
It's not a great answer, but it does give you control.
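A cleaner variant of the same idea is a sketch like the following, which clamps the camera's position against the sprite's bounds after each pan (assuming the 480x800 viewport and the origin-centered sprite from the question, and no camera zoom):

public boolean pan(float x, float y, float deltaX, float deltaY) {
    camera.translate(-deltaX * PAN_SPEED, deltaY * PAN_SPEED);
    // keep the camera's view rectangle inside the image
    float halfW = camera.viewportWidth / 2f;
    float halfH = camera.viewportHeight / 2f;
    camera.position.x = MathUtils.clamp(camera.position.x,
            sprite.getX() + halfW, sprite.getX() + sprite.getWidth() - halfW);
    camera.position.y = MathUtils.clamp(camera.position.y,
            sprite.getY() + halfH, sprite.getY() + sprite.getHeight() - halfH);
    camera.update();
    return true;
}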

Why does the sample author hardcode width and height for the orthographic camera? (LibGdx Zombie Bird tutorial)

I am trying to follow along with the guide on here and learn LibGdx.
http://www.kilobolt.com/day-4-gameworld-and-gamerenderer-and-the-orthographic-camera.html
Here's the author's code for setting the width and height of the orthographic camera (the camera used to project the 3D scene evenly into 2D):
private OrthographicCamera cam;
and later in a constructor
cam = new OrthographicCamera();
cam.setToOrtho(true, 136, 204);
Is there a reason why he chose to hardcode the width and height rather than retrieve the width and height of the screen the game is running on via Gdx.graphics.getWidth/getHeight?
(-from Changing the Coordinate System in LibGDX (Java))
You didn't quite understand how the camera behaves. It doesn't matter to the camera whether the screen is 320x480 or 1080x1920. The camera uses its own coordinate system. For example, suppose we have a 1920x1080 screen. We DON'T want to use pixels, because that's bad practice; what we really want is our own coordinate system for our world. If your world is 16x9 m, you can calculate that 1 m = 120 pixels. But your friend might have an 800x450 screen, and for him 1 m = 50 pixels. That's why we hardcode the camera's width and height. There is another problem here, though: the aspect ratio. We assumed our ratio is 16:9, but some devices have a 4:3 ratio. Supporting many ratios is a complex topic, so I won't go into it here.
Screenshots on different ratios of my game
If you want, I can share my code with you, but note that it isn't perfect and it's not a complete game. As you can see from the screenshots, I didn't hardcode the height, only the width, so I have empty space at the top and bottom.
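To illustrate the point, here is a minimal sketch (not the answerer's actual code): the camera is given world dimensions once, and the pixels-per-unit ratio falls out of the screen size at render time:

// The camera always sees a 16x9 world, whatever the screen resolution is.
OrthographicCamera cam = new OrthographicCamera(16, 9);
cam.position.set(8, 4.5f, 0); // center of the 16x9 world
cam.update();
// On a 1920x1080 screen one world unit covers 120 px, on 800x450 it covers 50 px,
// but game logic only ever deals with the 16x9 coordinates.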
If anyone is still struggling with this, I suggest reading into part 5, where the author explains how
"we are going to assume that the width of the game is always 136. The height will be dynamically determined! We will calculate our device's screen resolution and determine how tall the game should be."

Drawing subset of a gradient in libgdx

I know this question has been asked before (like here), but I was wondering how to do the following. In my game, I have a scrolling background: for example, a blue sky that is light blue at the bottom and gets darker the higher you go. This is not really possible with the suggested solution:
shapeRenderer.filledRect(x, y, width, height,
lightBlue, lightBlue, darkBlue, darkBlue);
since you can only give the colors that will actually be shown. I would like to have a gradient, dark blue at the top and light blue at the bottom, that stretches over, for example, 500 pixels, while I only draw 200 pixels of it. That way, the color would still get darker as the background scrolls. Does anybody know how to do this with libgdx?
What you want is to see a smaller (say 200 pixel) window onto a larger (say 500 pixel) gradient. To do that you just need to compute the colors of the four corners based on the location of your window in the overall gradient, and then draw just that. (So don't think about drawing the entire background, but about figuring out how to draw just the part that you need.)
Since you're just moving smoothly between the two colors (between 0 and 500), you're doing a "linear interpolation" (that is, a straight-line estimation) between the colors based on where the window is. Libgdx supports this via the lerp() methods on Color.
Assuming the window is travelling along the Y axis, something like this should give what you want:
Color baseColor = lightBlue;
Color topColor = darkBlue;
float skyHeight = 500;
float windowHeight = 200;
float windowLocation = ...; // something between 0 and skyHeight - windowHeight (floats, so the divisions below don't truncate)
Color windowBottomColor = baseColor.cpy().lerp(topColor, windowLocation / skyHeight);
Color windowTopColor = baseColor.cpy().lerp(topColor, (windowLocation + windowHeight) / skyHeight);
Now windowBottomColor and windowTopColor should be suitable for calling filledRect:
shapeRenderer.filledRect(x, y, width, height,
windowBottomColor, windowBottomColor, windowTopColor, windowTopColor);
Note that the "copy()" calls create a new Color object for each invocation, so you might want to optimize that to avoid the allocation.
Disclaimer: I haven't tried this code, so it probably has some stupid bugs in it, but hopefully it gives you the right idea.

libgdx sprite dimension meters or pixels?

I have been reading "Learning Libgdx Game development". I tried the below snippet:
// First the camera object is created with viewport of 5 X 5.
OrthographicCamera camera = new OrthographicCamera(5, 5);
I have a texture with dimensions of 32 by 32 pixels. I form a sprite out of it:
Sprite spr = new Sprite(texture);
// I set the size of Spr as
spr.setSize(1,1);
According to the book the dimensions above are meters and not pixels.
What I don't understand is how the mapping from meters to pixels happens on the screen. When I draw the sprite on the screen, its size is not even half a meter, let alone 1.
Also, the size of the underlying texture is 32 x 32 pixels. When I resize, the size of my sprite also changes.
Finally, what would be the units of spr.setPosition(x, y)? Would they be meters or pixels?
The library uses pixels for dimensions like texture size, and meters for in-game units.
setPosition will move an object in game units. When you move an object X game units, the number of pixels changes based on the camera's projection matrix amongst other settings.
If you think about it, it wouldn't make sense to move in pixels: if camera A is zoomed in more than camera B, moving X pixels in the view of each camera would correspond to two different in-game distances.
Edit: Sorry, I made some assumptions about your understanding above, partially misunderstood the question, and frankly used misleading wording. The key is that the convention of meters for units is not built in; it's one that you enforce yourself, because a ratio of one pixel to one meter wouldn't make sense in Box2D. My wording implied that setPosition internally cares about meters, but you should be doing the scaling yourself. A ratio I often see in libgdx projects is 30 pixels = 1 meter.
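As a sketch of what that manual scaling can look like (the constant name and the Box2D body here are assumptions for illustration, not libgdx built-ins):

public static final float PIXELS_PER_METER = 30f; // a convention you pick yourself, not a library value

// Assuming 'body' is a Box2D Body: its position is in meters,
// so convert to pixels before positioning the pixel-sized sprite.
Vector2 bodyPos = body.getPosition();
spr.setPosition(bodyPos.x * PIXELS_PER_METER, bodyPos.y * PIXELS_PER_METER);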

LibGDX Scaling and Rendering Sprites from TextureAtlas

I am currently experiencing issues when drawing and scaling sprites.
I am loading my assets from a texture atlas, which I packed with no problem using the LibGDX texture packer GUI tool. My texture atlas image currently looks like this.
These images are supposed to be buttons, but as you can see, the image is very small, so when the sprites are loaded, they come out at something like 34x16 pixels. When I render these buttons on a canvas of 1920x1080, they are much too small. I use sprite.scale(int scale) to scale the sprites, but when I scale them, they appear blurry. What I would like to happen is that when they are scaled, each pixel is scaled proportionally, keeping the pixelated look of the button, rather than producing a blurry resized image from a really small texture. I currently render the sprites using sprite.draw(SpriteBatch batch). Is this the proper way of rendering a sprite after it is loaded using atlas.createSprite(String name)? I am new to using sprites and loading textures from a texture atlas, so I am wondering if this is the correct way of doing things.
Also, when I initialize my game, I load numerous different Sprite objects from a TextureAtlas. Each sprite holds a texture that will represent a game object. However, it is my understanding that you render a sprite using sprite.draw(SpriteBatch batch), so I could only use a sprite loaded from the TextureAtlas for one game object, because I would also have to set the scale and position of the sprite as it represents that game object. I am used to loading a Texture and then rendering it with batch.draw() at a given position, but I don't see how this is possible if I am using a sprite. Even if I use batch.draw(sprite, x, y), I am unable to scale the sprite properly, because, as I mentioned before, I would like to scale the sprite while maintaining a pixelated look, and in any case the Sprite.scale() method scales the Sprite object as a whole, making it impossible to use the Sprite's texture multiple times for numerous game objects.
Any suggestions would be greatly appreciated.
The code I am currently using to render/load the sprites is as follows:
Loading from TextureAtlas:
public static TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("data/texture/pack/output/pack.pack"));
public static Sprite sprite = atlas.createSprite("buttonUp");
sprite.setScale(10);
Rendering the sprite (GdxGame.WIDTH/HEIGHT are 1920x1080; the desktop window is scaled down from that size, so everything is rendered as if the screen were 1920x1080):
batch = new SpriteBatch();
camera = new OrthographicCamera(GdxGame.WIDTH, GdxGame.HEIGHT);
camera.position.set(GdxGame.WIDTH/2, GdxGame.HEIGHT/2, 0);
camera.setToOrtho(false, GdxGame.WIDTH, GdxGame.HEIGHT);
public void render(float delta){
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    camera.update();
    sprite.draw(batch);
    batch.end();
}
After further investigation, I have discovered that I can use Sprite.set(Sprite sprite) to make Sprite x a copy of Sprite y, and therefore render each sprite multiple times. However, this does not solve the issue of scaling the sprite. I must emphasize that when rendering a scaled sprite, the scaling is not done per pixel, meaning that it is blurry. But when rendering a TextureRegion with batch.draw(textureRegion, x, y, width, height), if the width and height are greater than those of the original texture, it scales each pixel rather than blurring the whole thing to try to make it look better. The following is an image of the blurriness I am talking about:
Notice how this sprite is scaled to be blurry, even though the original image is small, and pixelated.
What TextureFilter settings are you using in your code or in the texture packer? Try the "Nearest" filter. If you have set it to "Linear" or similar, it will always take 4 texture pixels (texels) and interpolate them to get the color of the pixel to be drawn.
That might help against the blur, but I am not sure if it will produce exactly that 8-bit look you are aiming for...
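For example, a minimal sketch (using the pack path from the question) that switches the atlas textures to Nearest filtering after loading; the same filters can also be configured in the texture packer's settings:

TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("data/texture/pack/output/pack.pack"));
for (Texture texture : atlas.getTextures()) {
    // disable interpolation so scaled-up pixels stay sharp
    texture.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);
}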
