I have been trying for the past few hours to find a solution to this problem, but I can't seem to find anything.
I am developing a game for Android using LibGDX. In the emulator the game looks fine, but when I play it on my phone everything is different and misplaced. The solution I found for this is to use density-independent pixels instead of regular pixels, so that everything is placed correctly no matter which device I use. However, I can't seem to find a proper way to do that. The only relevant solution I have found was to use this:
// Note: Gdx.graphics.getDensity() returns a scale factor relative to 160 dpi,
// so despite its name this method converts dp to pixels, not the other way around.
public static float pixelToDP(float dp) {
    return dp * Gdx.graphics.getDensity();
}
I tried resizing some of the objects using the formula above, but they are still different from the emulator.
Please, if anyone has a solution that doesn't involve changing the OrthographicCamera (I have already tried that), help me!
This answer is just to add to what TomGrill said in the comments.
The reason your game looks fine in the emulator is that you have used values that fit the resolution of the emulator.
If you position a sprite at 100,100 on a 1920x1080 resolution, the sprite will be in the upper (or lower, depending on how you orient your y axis) left corner.
On a 200,200 resolution, the sprite will be placed in the middle of the screen.
The size of the sprite also depends on the resolution / pixel density. If you have 1 pixel per inch, a 32x32 pixel sprite will be 32 inches wide and high. But on a screen with a high pixel density, let's say 100 px per inch, the 32x32 sprite will look pretty small.
This is where viewports come in. You choose a virtual resolution, let's say 900x540, and you just code for this resolution. The viewport will make sure your game scales up or down to fit any resolution and pixel density. If you place a sprite in the middle of your 900x540 screen, the viewport will make sure that it ends up in the middle of a 1920x1080 screen as well.
Even if you wanted to do these calculations yourself, Gdx.graphics.getDensity() is not of much use on its own. You would need the width and height of the physical screen to find the resolution, and at that point you would just be reinventing the wheel.
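For illustration, here is a minimal sketch of that idea using LibGDX's FitViewport (the class, field and asset names are just examples, not something from the question):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.utils.viewport.FitViewport;

public class MyGame extends ApplicationAdapter {
    private OrthographicCamera camera;
    private FitViewport viewport;
    private SpriteBatch batch;
    private Texture sprite;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        // All game code positions things in 900x540 "world" units,
        // regardless of the device's actual resolution or density.
        viewport = new FitViewport(900, 540, camera);
        batch = new SpriteBatch();
        sprite = new Texture("sprite.png"); // placeholder asset name
    }

    @Override
    public void resize(int width, int height) {
        // Rescales the 900x540 world to whatever physical resolution the device has.
        viewport.update(width, height, true);
    }

    @Override
    public void render() {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        camera.update();
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        // (450, 270) is the middle of the 900x540 world on every device.
        batch.draw(sprite, 450 - 16, 270 - 16, 32, 32);
        batch.end();
    }

    @Override
    public void dispose() {
        batch.dispose();
        sprite.dispose();
    }
}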
Related
I am using libGDX to make a small game, I made a little sprite (32x32) that is shown in the center of the screen. For some reason when I render the texture to the screen it loses its quality. Since the textures are so small I made the screen width and height 200 and 100 respectively. Any tips or answers would be much appreciated.
Your sprite (32x32) needs to be displayed on an area larger than 32x32, meaning the image has to be upscaled and interpolated (i.e. pixels between the 32 known ones need to be calculated). A common approach is smooth (often linear) interpolation to fill in the additional pixels, which works well for photorealistic textures; that appears to be what has happened here.
For pixel-art, you likely want "nearest-neighbor" interpolation instead. While the exact way to set it depends on the structure of your code, you may be able to do something like:
textureObject.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
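In context that might look like the following, though it depends on how your texture is created (the asset name is a placeholder):

// e.g. in create(); "sprite.png" stands in for your pixel-art asset
Texture sprite = new Texture(Gdx.files.internal("sprite.png"));
// Nearest-neighbour keeps hard pixel edges when the sprite is scaled up.
sprite.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);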
I'm trying to create a game for an Android device and I have a small question about the rendering of the scene. I want to draw a square of a precise size, but I'm not quite sure how to get the coordinates of the borders of the screen in OpenGL units. My application is set to landscape mode, so the computation should be easier.
I have drawn a square with a border size of 2 and I have the impression that the square takes up the full height of the screen. Since I know the resolution of my screen, which is 1920x1080, I can compute the width of my scene. Then, by drawing several squares, I found the coordinates of one corner.
This way of computing the coordinates is a bit clumsy, and I'm not sure that it will always give the right answer. Is there a nicer, more reliable way to compute those coordinates?
Thank you in advance!
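For what it's worth, the width the question derives by drawing several squares can also be computed directly from the aspect ratio (a sketch of the arithmetic, assuming the square's height of 2 spans the full screen height and the origin is centred):

float aspect = 1920f / 1080f;            // screen width / height in landscape
float worldHeight = 2f;                  // the square's height from the question
float worldWidth = worldHeight * aspect; // about 3.56 world units
// With a centred origin, the screen borders are at x = +/- worldWidth / 2
// and y = +/- worldHeight / 2, so one corner is (worldWidth / 2, worldHeight / 2).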
I am trying to follow along with the guide on here and learn LibGdx.
http://www.kilobolt.com/day-4-gameworld-and-gamerenderer-and-the-orthographic-camera.html
Here's the author's code for setting the width and height of the orthographic camera (the camera used to project the 3D stuff evenly into 2D?):
private OrthographicCamera cam;
and later in a constructor
cam = new OrthographicCamera();
cam.setToOrtho(true, 136, 204);
Is there a reason why he chose to hardcode the width and height instead of retrieving the width and height of the screen the game is being run on via Gdx.graphics.getWidth/getHeight?
(from "Changing the Coordinate System in LibGDX (Java)")
You have misunderstood how the camera behaves. To the camera, it doesn't matter whether the screen is 320x480 or 1920x1080: the camera uses its own coordinate system. For example, say we have a 1920x1080 screen. We don't want to use pixels, because that is bad practice. What we really want is our own coordinate system for our world. If your world is 16x9 m, you can calculate that 1 m = 120 pixels. But your friend might have an 800x450 screen, and for him 1 m = 50 pixels. That's why we hardcode the camera's width and height. There is another problem here, though: the aspect ratio. We assumed a 16/9 ratio, but some devices have a 4/3 ratio. Supporting many ratios is a complex topic, so I won't go into it here.
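As a rough illustration of the idea (the 16x9 m world from the paragraph above; "batch" and "crateTexture" are assumed to be fields created elsewhere):

private OrthographicCamera cam;

public void create() {
    cam = new OrthographicCamera();
    cam.setToOrtho(false, 16, 9); // a 16x9 m world on every device
}

public void render() {
    cam.update();
    batch.setProjectionMatrix(cam.combined);
    batch.begin();
    // A 1x1 m crate centred at (8, 4.5) sits in the middle of the world,
    // whether the screen is 1920x1080 or 800x450.
    batch.draw(crateTexture, 7.5f, 4f, 1f, 1f);
    batch.end();
}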
Screenshots of my game at different aspect ratios:
If you want, I can share my code with you, but note that it isn't perfect and it's not a complete game. As you can see from the screenshots, I didn't hardcode the height, only the width, so I have empty space at the top and bottom.
If anyone is still struggling with this, I suggest reading on to part 5, where the author explains:
"we are going to assume that the width of the game is always 136. The height will be dynamically determined! We will calculate our device's screen resolution and determine how tall the game should be."
I implemented a top-down camera which moves with the player, only a bit slower, using camera.position.lerp. The problem is that the textures flash (flicker) a little because I have scaled my textures. If I use the normal texture size the flickering stops. Does anyone have any ideas on how to move the camera with zoom (or with scaled textures, which is the same thing) without the textures flickering (or flashing)? I use linear filtering and load every asset from an atlas. I have seen this problem mentioned on multiple forums, but never an answer. I want to load higher resolution textures and resize them in code; that's why I am asking this question.
You need to extrude the borders of your sprites in the atlas. The Extrude option tends to sit right next to the Padding options.
Most texture packers support that feature, and LibGDX will pick that information up from the atlas file straight away.
This way you get to use the filter you want.
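If you pack your atlas with LibGDX's own TexturePacker (gdx-tools), the setting that does this is, as far as I know, duplicatePadding, which copies each region's edge pixels into the surrounding padding (the folder and atlas names below are placeholders):

import com.badlogic.gdx.tools.texturepacker.TexturePacker;

public class PackTextures {
    public static void main(String[] args) {
        TexturePacker.Settings settings = new TexturePacker.Settings();
        settings.paddingX = 2;            // leave room between sprites
        settings.paddingY = 2;
        settings.duplicatePadding = true; // extrude edge pixels into the padding
        TexturePacker.process(settings, "raw-assets", "android/assets", "game.atlas");
    }
}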
I'm trying to draw a 2D contour plot of some data on Android and I'm wondering what the best approach would be. The whole dataset can be relatively large (2k x 2k points), and zooming and panning inside the plot should be very fast. Most of the time only a small part of the data will be drawn, as the user will have zoomed in on the data.
My idea is to draw the whole plot onto a large canvas but clip it to the portion visible on the screen, so that only that part is actually drawn in the end. I find the 2D drawing API of Android somewhat confusing, and I'm not sure whether this is really a feasible approach or how I would go about implementing it.
So my questions are:
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
How would I create a larger canvas and how would I select which parts should be drawn?
You should start the other way around. Instead of creating a huge canvas you should detect what part of your plot you need to draw and draw only that.
So basically you need some navigation/scrolling and you need to keep the offset from the starting point in memory to calculate where you are. Using the offset you can easily zoom in and out because you just need to scale the plot to the screen.
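A rough sketch of that bookkeeping inside a custom View (all names are illustrative, and drawCell is a hypothetical helper):

// offsetX/offsetY: top-left of the visible window in data coordinates,
// zoom: data points per screen pixel (smaller = zoomed in).
void drawVisible(Canvas canvas, float offsetX, float offsetY, float zoom,
                 int viewWidthPx, int viewHeightPx, int dataWidth, int dataHeight) {
    int firstCol = Math.max(0, (int) Math.floor(offsetX));
    int firstRow = Math.max(0, (int) Math.floor(offsetY));
    int lastCol  = Math.min(dataWidth  - 1, (int) Math.ceil(offsetX + viewWidthPx  * zoom));
    int lastRow  = Math.min(dataHeight - 1, (int) Math.ceil(offsetY + viewHeightPx * zoom));

    // Only the cells inside this window are actually drawn.
    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            drawCell(canvas, col, row); // hypothetical helper that draws one data point
        }
    }
}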
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
A better question might be: do you have any other options? Some might argue that this is a bad idea, since you're going to keep memory in use when it isn't relevant to what's happening on the UI. However, from my experience with the Canvas, I think you'll find this should work out just fine. Now, if you are trying to keep "5 square miles" of canvas in memory, you're definitely going to have to find a better way to manage it.
How would I create a larger canvas and how would I select which parts should be drawn?
I would expect that you will be creating your own "scrolling" behaviour by overriding the onTouchEvent method for when the user touches the screen. Basically, you're going to need to keep track of a starting X and Y and update those values as you move the Canvas on screen. To move the Canvas there are a number of built-ins like translate and scale that you can use, both to move the Canvas in X and Y and to scale it when the user zooms in or out.
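A minimal sketch of that approach in a custom View (the class name and the drawPlot helper are made up, and pinch-zoom updating of scale is left out):

import android.content.Context;
import android.graphics.Canvas;
import android.view.MotionEvent;
import android.view.View;

public class PlotView extends View {
    private float offsetX, offsetY; // how far the plot has been dragged
    private float lastX, lastY;     // previous touch position
    private float scale = 1f;       // zoom factor (updated by a pinch gesture, not shown)

    public PlotView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                lastX = event.getX();
                lastY = event.getY();
                return true;
            case MotionEvent.ACTION_MOVE:
                // Accumulate the drag into the offset and redraw.
                offsetX += event.getX() - lastX;
                offsetY += event.getY() - lastY;
                lastX = event.getX();
                lastY = event.getY();
                invalidate();
                return true;
        }
        return super.onTouchEvent(event);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.translate(offsetX, offsetY); // move the plot under the finger
        canvas.scale(scale, scale);         // zoom in or out
        drawPlot(canvas);                   // hypothetical: draws only the visible part
        canvas.restore();
    }

    private void drawPlot(Canvas canvas) {
        // ... draw the contour lines that fall inside the visible region ...
    }
}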
I don't think it is a good idea to draw your 2D contour plot onto a big bitmap, because you need vector-style graphics to zoom in and out while keeping it sharp. Only photos scale down well; graphs will lose thin lines or come out deformed when scaled down as bitmaps.
The proper way is to do it all mathematically and calculate which part of the graph should be drawn for the required position and zoom. Using an anti-aliased Paint for lines and text, the graph will always come out sharp and clean.
When the user zooms out, some items should not be drawn at all, as they would not fit on the screen or would clutter it. That way the graph is always optimised for the current zoom level.
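For the anti-aliased drawing mentioned above, the Paint setup might look something like this (a sketch; the zoom threshold and variable names are assumptions, not Android APIs):

// Anti-aliased paints keep thin contour lines and labels sharp at any zoom.
Paint linePaint = new Paint(Paint.ANTI_ALIAS_FLAG);
linePaint.setStyle(Paint.Style.STROKE);
linePaint.setStrokeWidth(2f);

Paint textPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
textPaint.setTextSize(24f);

// In onDraw: skip labels that would only clutter the screen when zoomed out.
if (zoom > LABEL_ZOOM_THRESHOLD) { // LABEL_ZOOM_THRESHOLD is an assumed constant
    canvas.drawText(label, x, y, textPaint);
}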