LibGDX: Infinite world. How to? - java

I'm writing a simple game with libGDX. My hero is always at the center of the screen, and I move my background sprite (or region?) to create the illusion of movement. But my background sprite isn't infinite.
How can I create the illusion of a seamless, infinite world?
Of course, I could add several background sprites to cover all the empty space on the screen. But then I would also have to draw a lot of other objects outside the screen: houses, monsters, other heroes, etc. So I have a second question:
When I draw other objects (a lot of objects!) outside the screen, how badly does that affect memory? How do I draw them correctly?
I know that an OrthographicCamera in libGDX only draws a viewportWidth × viewportHeight area. If that's right, then I would have to move my camera and all my sprites too, which doesn't seem correct.
How can I render an infinite world in libGDX with an OrthographicCamera?

How can I create the illusion of a seamless, infinite world?
Create a tileable background. A tileable background is one whose copies can sit beside, above, or below each other without the seams being visible to the viewer.
To make one, open your background image in Photoshop and go to Filters > Other > Offset.
Set the offset filter to shift the background to the center, then use Photoshop's tools to hide the edges (the + shape in the image). Then open Offset again, set it back to 0, 0, and save your background.
When I draw other objects (a lot of objects!) outside the screen, how badly does that affect memory? How do I draw them correctly?
I have tested this, and there was not much FPS loss in my test, so don't worry about it.
How can I render an infinite world in libGDX with an OrthographicCamera?
Move the camera wherever you want, to any x, y. Each frame, check where the camera is and work out which tile backgrounds need to be drawn (for example, always draw a 3×3 = 9 grid of backgrounds stitched together around the camera), as in the sketch below.
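A minimal sketch of that idea, assuming a tileable Texture called background and a tile size in world units (both names are mine, not from the question):

import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.MathUtils;

// Draws a 3x3 grid of background tiles around the camera, so wherever the
// camera moves, the visible area is always covered.
void drawTiledBackground(SpriteBatch batch, OrthographicCamera camera,
                         Texture background, float tileSize) {
    // Snap the camera position down to the tile grid, then start one tile
    // to the left and below so the 3x3 grid is centered on the camera.
    float startX = MathUtils.floor(camera.position.x / tileSize) * tileSize - tileSize;
    float startY = MathUtils.floor(camera.position.y / tileSize) * tileSize - tileSize;
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    for (int ix = 0; ix < 3; ix++) {
        for (int iy = 0; iy < 3; iy++) {
            batch.draw(background, startX + ix * tileSize, startY + iy * tileSize,
                       tileSize, tileSize);
        }
    }
    batch.end();
}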

Related

Understanding different coordinate systems: Tiled, Stage, Screen

I am very confused about all the different coordinate systems.
I am using LibGDX with Tiled.
These all have their own coordinate system (sort of):
LibGDX screen
Tiled map
UIcamera
Orthogonal TiledMapCamera
UIStage
TiledMapStage
There are too many concepts, and I can no longer keep track of how they affect each other in complex scenarios, like:
having different screen dimensions than the tiled map dimensions
resizing the screen
Can someone shed some light on this?
Many thanks!
In a 2D game, you really only have to think about the coordinate system of the orthographic camera. Whatever is drawn with a certain camera's combined matrix is fit to the rectangle of the screen (and if you set up the camera correctly, it will not be distorted).
LibGDX provides the Viewport classes for helping to set up your camera. You can think of them as camera managers that will size the camera to meet the arrangement you want. You instantiate them with a desired size "window" you want to see of the game world. And the only place you have to consider the actual screen dimensions is in the resize method, where you pass the dimensions to the Viewport class and let it handle sizing your camera for you so the scene won't be distorted.
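For example, a minimal sketch (the 40×30 world size and the ExtendViewport choice are mine, not from the question):

import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.ExtendViewport;

OrthographicCamera camera = new OrthographicCamera();
// We want to see at least 40x30 units of the game world; the viewport
// acts as a camera manager and sizes the camera accordingly.
ExtendViewport viewport = new ExtendViewport(40, 30, camera);

@Override
public void resize(int width, int height) {
    // The only place the actual screen dimensions matter: hand them to
    // the viewport and let it resize the camera without distortion.
    viewport.update(width, height);
}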
You might have more than one camera. Typically your UI will have its own, and the gameplay world will have another (because you want it to move around in the world).
When it comes to input, the raw X and Y are given in screen pixel coordinates, but you just pass these coordinates to the camera.unproject method to have them converted to the same coordinates as your game world.
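A short sketch of that conversion (assuming the camera from above):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.math.Vector3;

Vector3 touch = new Vector3();

// Raw input arrives in screen pixel coordinates...
touch.set(Gdx.input.getX(), Gdx.input.getY(), 0);
// ...and unproject converts them in place to game-world coordinates.
camera.unproject(touch);
float worldX = touch.x;
float worldY = touch.y;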
I don't use tiles, so I can't get specific there, but the same principles should apply.

Expanding textures in libGDX

I'm using libGDX to make a simple tile-based game, and everything seemed fine until I added a rectangle that follows the mouse position. I figured out that whenever I jump, the rectangle (and the other blocks too) expands, probably by 1 px, until I release the spacebar. When I hit the spacebar again, it returns to its normal size. I tried printing out the rectangle's width and height, but they didn't change, so the problem is with the rendering.
[Image "Everything all right": the game before the jump.]
[Image "Wider textures": the game after the jump; the stretching is clearly visible on the player's head.]
A little more detail: I don't use Box2D. Tile sizes are 8×8, scaled to 20×20. I'm using TexturePacker without padding (the problem occurs with padding anyway). I don't know which code to post, because I have no idea where the problem could be, so here is just the simple block class. Any help would be much appreciated, thanks.
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.Rectangle;

public class Block extends Sprite {
    private int[] id = { 0, 0 };
    public Rectangle rect;
    private int textureSize = 8; // size of one tile in the atlas, in pixels

    public Block(PlayScreen play, String texture, int x, int y, int[] id) {
        super(play.getAtlas().findRegion("terrain"));
        this.id = id;
        rect = new Rectangle(x, y, ID.tileSize, ID.tileSize);
        // Select the 8x8 tile inside the "terrain" region by its grid id.
        setRegion(id[0] * textureSize, id[1] * textureSize + 32, textureSize, textureSize);
        // Draw the sprite at the rectangle's position and size.
        setBounds(rect.x, rect.y, rect.width, rect.height);
    }

    public void render(SpriteBatch batch) {
        draw(batch);
    }
}
Welcome to libGDX!
TL;DR- there isn't enough of your code there to tell what the exact problem is, but my guess is that somewhere in your code you are confusing pixel-space with game-space.
A Matter of Perspective
When you first create a libGDX game that is 2D, it's really tempting to think that you are just painting pixels onto the screen. After all, your screen is measured in pixels, your window is measured in pixels, and your texture is measured in pixels.
However, if you start looking closer at the API, you'll find weird little things such as your camera and sprite positions and sizes being measured as floating point values instead of integers (Why floats? You can't have a fraction of a pixel!).
The reason is that the dimensions of your game objects are different from how big they are drawn. It's really easy to understand this in a 3D world: when I am close to something, it is drawn really big on the screen; when I am far away, it is drawn really small. The actual size of the object doesn't change based on my distance from it, but the perceived size does. This tells us that we can't safely measure things in our game just based on how they're drawn; we have to measure based on their true size.
As a side note, while you may be using an Orthographic camera (i.e. one without perspective) and drawing 2D sprites, libGDX is really drawing a flat 3D object (a plane) behind the scenes.
Game Units
So how do we measure the "true size" of something? The answer is that we can measure it using whatever type of unit we want! We can say something is 3.5 meters long, or 42 bananas- whatever you want! For the sake of this conversation, I'm going to call these units "Game Units" (GU).
For your game, you might consider making each block one GU high and one GU wide (essentially measuring your game world in blocks). Your character can move in fractions of a block, but you measure speed in terms of "blocks per second." I can almost guarantee it will make your game logic a lot simpler.
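For instance, a tiny sketch of that logic (player and its fields are illustrative, not from the question):

// Speed measured in game units (blocks) per second, not pixels per second.
float speedInBlocksPerSecond = 4f;
// Frame-rate independent movement: scale by the frame time in seconds.
player.x += speedInBlocksPerSecond * Gdx.graphics.getDeltaTime();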
But our textures are in pixels!
As you probably already know, your game uses three things to render: A viewport (the patch of the screen where your game can be painted), A Camera (think of it like a real camera- you change the position and size of the lens to change how much of your world is 'in view'), and your game objects (the things you may or may not want to draw, depending on whether they're visible to the camera).
Now let's look at how they're measured:
Viewport: This is a chunk of your screen (set to be the size of your game window), and as such is measured in pixels.
Camera: The Camera is interesting, because its size and position are measured in Game Units, not pixels. Since the viewport uses the Camera to know what to paint on the screen, it does contain the mapping of GU to pixel.
Game Object: This is measured in Game Units. It may have a texture measured in pixels, but that is different from the "true size" of the game object.
Now libGDX defaults all of these sizes such that 1 GU == 1 Pixel, which misleads a lot of folks into thinking that everything is measured by pixels. Once you realize that this isn't really the case, there are some really cool implications.
Really Cool Implications
The first implication is that even if my screen size changes, my camera size can stay the same. For example, if I have a small 800x600 pixel screen, I can set my camera size to 40x30. This maintains a nice aspect ratio, and allows me to draw 40x30 blocks on the screen.
If the screen size changes (say to 1440x900), my game will still show 40x30 blocks on the screen. They may look a little stretched if the aspect ratio changes, but libGDX has special viewports that will counteract this for you. This makes it much easier to support your game on other monitors, other devices, or even just handling screen resizes.
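Here is a sketch of that setup (assuming libGDX's FitViewport, one of the special viewports mentioned above):

import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;

OrthographicCamera camera = new OrthographicCamera();
// The camera always shows 40x30 game units, regardless of screen size.
FitViewport viewport = new FitViewport(40, 30, camera);

@Override
public void resize(int width, int height) {
    // FitViewport letterboxes rather than stretching when the screen's
    // aspect ratio changes.
    viewport.update(width, height);
}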
The second cool implication is that you stop caring about texture sizes to a large degree. If you start telling libGDX "Hey, go draw this 32x32px sprite on this 1x1 GU object" instead of "Hey, go draw this 32x32px sprite" (notice the difference?), then changing texture sizes doesn't change how big things are drawn on your screen; it changes how detailed they are. If you want to change how big they are drawn, you can change your camera size to 'zoom in.'
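In code, the difference looks something like this (blockTexture and the coordinates are illustrative):

// "Go draw this 32x32px sprite": the drawn size depends on the texture.
batch.draw(blockTexture, x, y);
// "Go draw this 32x32px sprite on a 1x1 GU object": the drawn size is
// fixed in game units; a higher-resolution texture only adds detail.
batch.draw(blockTexture, x, y, 1f, 1f);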
The third cool implication is that this makes your game logic a lot cleaner. For example you start thinking of speeds in "Game Units per second", not "Pixels per second". This means that changes in drawing size won't affect how fast things are in the game, and will save you a ton of bug-hunting further down the road. You also avoid a lot of the weird "My jump behaves differently when I resize the screen" bugs.
Summary
I hope this is helpful and makes sense. It's difficult to get your mind around it at first, but it will make your life a lot easier and your game a lot better in the long run. If you'd like a better example with pictures, I recommend that you read this article by one of the libGDX developers.

Understanding the libGDX Projection Matrix

Over the past few weeks I've been attempting to learn the libGDX library. I'm finding it hard, especially for my first endeavor toward game development, to comprehend the system of Camera/viewport relationships. One line of code that I've been told to use, and the API mentions, is:
batch.setProjectionMatrix(camera.combined);
Despite a good 4 hours of research, I'm still lacking a complete understanding of the functionality of this code. It is to my basic understanding that it "tells" the batch where the camera is looking. My lack of comprehension is depressing and angering, and I'd appreciate if anyone could assist me. Another issue with the code snippet is that I'm unsure of when it's necessary to implement (in the render method, create method, etc).
Consider taking a picture with a camera, e.g. using your smartphone camera to take a picture of a bench in the park. When you do that, you'll see the bench in the park on the screen of your smartphone. This might seem very obvious, but let's look at what this involves.
The location of the bench in the picture is relative to where you were standing when taking the photo. In other words, it is relative to the camera. In a typical game, however, you don't place objects relative to the camera. Instead you place them in your game world. Translating between your game world and your camera is done using a matrix (which is simply a mathematical way to transform coordinates). E.g. when you move the camera to the right, the bench moves to the left in the photo. This is called the View matrix.
The exact location of the bench on the picture also depends on the distance between bench and the camera. At least, it does in 3D (2D is very similar, so keep reading). When it is further away it is smaller, when it is close to the camera it is bigger. This is called a perspective projection. You could also have an orthographic projection, in which case the size of the object does not change according to the distance to the camera. Either way, the location and size of the bench in the park is translated to the location and size in pixels on the screen. E.g. the bench is two meters wide in the park, while it is 380 pixels on the photo. This is called the projection matrix.
camera.combined represents the combined view and projection matrix. In other words: it describes where things in your game world should be rendered onto the screen.
Calling batch.setProjectionMatrix(cam.combined); instructs the batch to use that combined matrix. You should call it whenever the value changes. This is typically when resize is called, and also whenever you move or otherwise alter the camera.
If you are uncertain, then you can call it at the start of your render method.
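A minimal sketch of that (assuming a typical Screen with a camera and a SpriteBatch as fields):

@Override
public void render(float delta) {
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    camera.update();                            // apply any camera changes
    batch.setProjectionMatrix(camera.combined); // world-to-screen mapping
    batch.begin();
    // ... draw your sprites using world coordinates ...
    batch.end();
}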
The other answer is excellent, but I figure a different way of describing it might help it to click.
You generally deal with your game in "world space", a coordinate system that is analogous to the real world. In linear algebra, you can convert points in space from one coordinate system to another by multiplying the point's coordinates by a matrix that represents the relation between two coordinate systems.
The view matrix is multiplied by a point to convert it from world space to camera space (the camera's point of view). The projection matrix is used to convert a point from camera space to screen space (the flat 2D rectangle of your device's screen). When you call update() on a camera in Libgdx, it applies your latest changes to position, orientation, viewport size, field of view, etc. to its view and projection matrices so they can be used in shaders.
You rarely need to deal with stuff in camera space in 2D, so SpriteBatch doesn't need separate view and projection matrices. They can be combined into a single matrix that converts straight from world space to screen space, which is already done automatically in the Camera, hence the camera.combined matrix.
SpriteBatch has a default built-in shader that multiplies this projection matrix by all the vertices of your sprites so they will be properly mapped to the flat screen.
You should call setProjectionMatrix whenever you have moved the camera or resized the screen.
There is a third type of matrix called a model matrix that is used for 3D stuff. A model matrix describes the model's orientation, scale, and position in world space. So it is multiplied by coordinates in the model to move them from local space to world space.
Take for example a basic sidescrolling game. As the player moves to the side, the camera pans to follow them. This means that where objects are in the world doesn't necessarily correspond to where they are on the screen, since the screen and the world move relative to each other.
Here's an example: say your screen is 100px*100px square (for some reason). You place an object at position (50, 0), so it's now in the middle and at the bottom of the screen. Now say you move your player over to the right, and the whole screen pans to follow the player. This means that the object you placed earlier should have moved left on the screen. So it's still at (50, 0) in the world, since it didn't actually move relative to the rest of the scenery, but it should now be drawn at, say, (10, 0) on the screen, since which part of the world the screen is looking at has changed. This is the difference between "worldspace" (where an object is in the world) and "screenspace" (where the object is drawn on the actual display).
When you try to draw with a SpriteBatch, it is by default going to assume worldspace coordinates are the same as screenspace coordinates: when you say "draw at (50, 0)", it's going to draw the object at (50, 0) on the screen. Even if the camera moves, it's always going to draw at (50, 0) on the screen, so as the camera pans, the object will follow and stay stuck to the same place on the screen.
Since you usually don't want that, you give the SpriteBatch a projection matrix, which is a transformation matrix that tells how to convert screenspace coordinates to worldspace coordinates, and vice versa. This way, when you tell the batch "draw at (50, 0)", it can look at the matrix it got from the camera and see that, since the camera has moved, (50, 0) in the world actually means (10, 0) on the screen, and it will draw your sprite in the right place.
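A sketch of that sidescroller case (player and objectTexture are illustrative names):

// Each frame: follow the player in world space...
camera.position.set(player.x, camera.position.y, 0);
camera.update();
// ...and hand the camera's matrix to the batch, so world coordinates
// are converted to the right screen position automatically.
batch.setProjectionMatrix(camera.combined);
batch.begin();
batch.draw(objectTexture, 50, 0); // fixed in the world, moves on screen as the camera pans
batch.end();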

Can I zoom just a part of my OrthographicCamera? (libGDX)

Imagine an imaginary Minecraft, where you build cubes.
You have one panel with the different cubes (a tool box), and another big panel, the world where you build; but the world is bigger.
All of these things are on the same screen. How can I set up an orthographic camera that only zooms the panel showing the world, so you can zoom in and build cubes with detail and precision, while the tool box panel stays in the same position, always visible?
Use two cameras. One for the "game" and one for the "HUD".
During a render call, use one camera to render the game, then "switch" cameras and then render the HUD. How you switch cameras depends on what API you're using to render objects (for example use setProjectionMatrix on a SpriteBatch).
You could also create more than one SpriteBatch or more than one Stage (as each tracks camera state internally), but they're a bit heavy-weight.
Beware that when switching contexts, you will probably have to explicitly end/flush/complete the first context before starting the second.
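A hedged sketch of the two-camera approach with a single SpriteBatch (drawWorld and drawToolbox are illustrative helpers, not libGDX APIs):

// Game camera: free to pan and zoom.
worldCamera.update();
batch.setProjectionMatrix(worldCamera.combined);
batch.begin();
drawWorld(batch);
batch.end(); // flush the world before switching cameras

// HUD camera: fixed size, never zooms, so the tool box stays put.
hudCamera.update();
batch.setProjectionMatrix(hudCamera.combined);
batch.begin();
drawToolbox(batch);
batch.end();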
You don't use just one camera to show everything. For example, you use one camera for the background of your game and a second camera for the game HUD that always has a fixed size. The game in the background can be zoomed in and out because it uses a different camera.
For simple boxes on the screen, though, you don't need a second camera. You can just use one more Stage, shown or hidden as needed, where the new "ui" stuff is drawn.
Hope this helps a bit.

Using a canvas much larger than the screen

I'm trying to draw a 2D contour plot of some data on Android, and I'm wondering what the best approach would be. The whole dataset can be relatively large (2k × 2k points), and zooming and panning inside the plot should be very fast. Most of the time only a small part of the data will be drawn, because the user will have zoomed in on it.
My idea is to draw the whole plot onto a large canvas, but clip it to the portion visible on the screen, so that only that part is actually drawn in the end. I find the 2D drawing API of Android somewhat confusing, and I'm not sure whether this is really a feasible approach or how I would go about executing it.
So my questions are:
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
How would I create a larger canvas and how would I select which parts should be drawn?
You should approach it the other way around. Instead of creating a huge canvas, you should detect which part of your plot needs to be drawn and draw only that.
So basically you need some navigation/scrolling, and you need to keep the offset from the starting point in memory to calculate where you are. Using the offset you can easily zoom in and out, because you just need to scale the plot to the screen. A sketch of the idea follows.
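For example (a sketch; the offset, zoom, and data-size fields are illustrative): from the current offset and zoom level, compute which data indices fall inside the visible window, and draw only those.

// offsetX/offsetY: current pan in data units; zoom: data units per screen pixel.
int firstCol = Math.max(0, (int) Math.floor(offsetX));
int lastCol  = Math.min(dataWidth - 1, (int) Math.ceil(offsetX + viewWidth * zoom));
int firstRow = Math.max(0, (int) Math.floor(offsetY));
int lastRow  = Math.min(dataHeight - 1, (int) Math.ceil(offsetY + viewHeight * zoom));
// Only the points in [firstCol..lastCol] x [firstRow..lastRow] are drawn.
for (int row = firstRow; row <= lastRow; row++) {
    for (int col = firstCol; col <= lastCol; col++) {
        drawPoint(canvas, col, row); // illustrative helper
    }
}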
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
A better question might be: do you have any other options? Some might argue that this is a bad idea, since you're going to keep memory in use that isn't relevant to what's happening on the UI. However, from my experience with the Canvas, I think you'll find this should work out just fine. Now, if you are trying to keep "5 square miles" of canvas in memory, you're definitely going to have to find a better way to manage it.
How would I create a larger canvas and how would I select which parts should be drawn?
I would expect that you will be creating your own "scrolling" behaviour by overriding the onTouchEvent method. Basically, you're going to need to keep track of a starting point X and Y, and update those values as the user moves the Canvas around the screen. To move the Canvas there are a number of built-ins like translate and scale that you can use to move it in X and Y as well as scale it when the user zooms in or out. A sketch follows.
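A hedged sketch of that scrolling, inside a custom android.view.View subclass (field names are illustrative):

private float offsetX, offsetY; // current pan, in canvas units
private float lastX, lastY;     // previous touch position
private float scale = 1f;       // current zoom factor

@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
            lastX = event.getX();
            lastY = event.getY();
            return true;
        case MotionEvent.ACTION_MOVE:
            // Accumulate the drag into the offset and redraw.
            offsetX += event.getX() - lastX;
            offsetY += event.getY() - lastY;
            lastX = event.getX();
            lastY = event.getY();
            invalidate();
            return true;
    }
    return super.onTouchEvent(event);
}

@Override
protected void onDraw(Canvas canvas) {
    canvas.translate(offsetX, offsetY); // pan
    canvas.scale(scale, scale);         // zoom
    // ... draw only the visible part of the plot here ...
}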
I don't think it is a good idea to draw your 2D contour plot onto a big bitmap, because you need vector-style graphics to keep it sharp while zooming in and out. Only photos scale down well; graphs will lose thin lines or come out deformed when scaled down as bitmaps.
The proper way is to do it all mathematically and calculate which part of the graph should be drawn for the required position and zoom. Using an anti-alias Paint for lines and text, the graph will always come out sharp and good.
When the user zooms out, some items should not be drawn at all, as they could not fit on the screen or would clutter it. That way the graph is always optimised for the zoom level.
