This is my first time using Box2D with LibGDX, and I'm confused because now I'm dealing with meters and not pixels. For example, if I want to set the size of my shape to 80x80 pixels, how do I do that in meters? Obviously I would have to zoom the camera, which I have no idea how to do. I want to know how you can set the number of pixels that make up one meter.
I have been trying for the past few hours to find a solution to this problem, but I can't seem to find anything.
I am developing a game for Android using LibGDX. In the emulator the game looks fine, but when I play it on my phone, everything is different and misplaced. The solution I found for this is using density-independent pixels (dp) instead of regular pixels, so everything is placed correctly no matter what device I use. However, I can't seem to find a proper way to do that. The only relevant solution I have found was to use this:
public static float pixelToDP(float dp) {
    // Note: Gdx.graphics.getDensity() returns the display density as a
    // multiple of the 160 dpi baseline, so despite its name this method
    // actually converts dp to pixels, not pixels to dp.
    return dp * Gdx.graphics.getDensity();
}
I tried resizing some of the objects using the formula above, but they are still different from the emulator.
Please, if anyone has a solution that doesn't involve changing the Orthographic Camera (I already tried those), help me!
This answer is just to add to what TomGrill said in the comments.
The reason your game looks fine in the emulator is that you have used values that fit the resolution of the emulator.
If you position a sprite at 100,100 on a 1920x1080 resolution, the sprite will be in the upper (or lower, depending on how you orient your y axis) left corner.
On a 200,200 resolution, the sprite will be placed in the middle of the screen.
The size of the sprite also depends on the resolution / pixel density. If you have 1 pixel per inch, a 32x32 pixel sprite will be 32 inches wide and tall. But on a screen with a high pixel density, say 100 px per inch, the 32x32 sprite will look pretty small.
This is where viewports come in. You choose a resolution, say 900x540, and you just code for that resolution. The viewport will make sure your game scales up or down to fit any resolution and pixel density. If you place a sprite in the middle of your 900x540 screen, the viewport will make sure it ends up in the middle of a 1920x1080 screen too.
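A minimal sketch of that setup, assuming LibGDX's FitViewport and the 900x540 virtual resolution from above (class and field names are mine):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;
import com.badlogic.gdx.utils.viewport.Viewport;

public class MyGame extends ApplicationAdapter {
    private OrthographicCamera camera;
    private Viewport viewport;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        // Code against a fixed 900x540 virtual resolution; the viewport
        // scales it to whatever the physical screen turns out to be.
        viewport = new FitViewport(900, 540, camera);
    }

    @Override
    public void resize(int width, int height) {
        viewport.update(width, height, true); // true re-centers the camera
    }
}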
Even if you wanted to do these calculations yourself, Gdx.graphics.getDensity() is of no use on its own: you need the width and the height of the physical screen to find the resolution. And what you would be doing next is reinventing the wheel.
I am trying to follow along with the guide here and learn LibGDX.
http://www.kilobolt.com/day-4-gameworld-and-gamerenderer-and-the-orthographic-camera.html
Here's the author's code for setting the width and height of the orthographic camera (the camera used to project everything into 2D with no perspective):
private OrthographicCamera cam;
and later in a constructor
cam = new OrthographicCamera();
cam.setToOrtho(true, 136, 204);
Is there a reason why he chose to hardcode the width and height rather than retrieve the width and height of the screen the game is running on via Gdx.graphics.getWidth()/getHeight()?
(from Changing the Coordinate System in LibGDX (Java))
You've misunderstood how the camera behaves. To the camera, it doesn't matter whether the screen is 320x480 or 1080x1920; the camera uses its own coordinate system. Say we have a 1920x1080 screen. We do NOT want to work in pixels; that's bad practice. What we really want is our own coordinate system for our world. If your world is 16x9 meters, you can calculate that 1 m = 120 pixels on that screen. But your friend may have an 800x450 screen, and for him 1 m = 50 pixels. That's why we hardcode the camera's width and height. There is another problem here, though: the aspect ratio. We assumed a ratio of 16:9, but some devices have 4:3. Supporting many ratios is a complex topic, so I won't go into it here.
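For instance, a minimal sketch of a fixed 16x9 world (the numbers match the answer above; the code itself is mine, not the answerer's):

OrthographicCamera cam = new OrthographicCamera(16, 9); // world is always 16x9 units (meters)
cam.position.set(8f, 4.5f, 0f); // center the camera on the world
cam.update();

// On a 1920x1080 screen, 1 world unit covers 1920 / 16 = 120 pixels;
// on an 800x450 screen, the same unit covers 800 / 16 = 50 pixels.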
(Screenshots of my game at different aspect ratios were attached here.)
If you want, I can share my code with you. But note that it isn't perfect and it's not a complete game. And as you can see from the screenshots, I didn't hardcode the height, only the width, so I have empty space at the top and bottom.
If anyone is still struggling with this, I suggest reading part 5, where the author explains how
"we are going to assume that the width of the game is always 136. The height will be dynamically determined! We will calculate our device's screen resolution and determine how tall the game should be."
I have been reading "Learning LibGDX Game Development". I tried the snippet below:
// First the camera object is created with viewport of 5 X 5.
OrthographicCamera camera = new OrthographicCamera(5, 5);
I have a texture with dimensions of 32 by 32 pixels. I form a sprite out of it:
Sprite spr = new Sprite(texture);
// I set the size of Spr as
spr.setSize(1,1);
According to the book, the dimensions above are in meters, not pixels.
What I don't understand is how the mapping from meters to pixels happens on the screen. When I draw the sprite, its size is not even half a meter, let alone 1.
Also, the underlying texture is 32x32 pixels. When I resize, the size of my sprite changes as well.
Then, what are the units of spr.setPosition(x, y)? Will they be meters or pixels?
The library uses pixels for dimensions like texture size, and meters for in-game units.
setPosition will move an object in game units. When you move an object X game units, the number of pixels it moves on screen depends on the camera's projection matrix, among other settings.
If you think about it, it wouldn't make sense to move in pixels: if camera A is zoomed in more than camera B, moving X pixels in the view of each camera would correspond to two different distances in the world.
Edit: Sorry, I made some assumptions about your understanding above, partially misunderstood the question, and frankly used misleading wording. The key is that the convention of meters for units is not built in; it's one that you enforce yourself, because a one-pixel-to-one-meter ratio in Box2D wouldn't make sense. The wording I used implied that setPosition internally cares about meters, but you should be doing the scaling yourself. The ratio I most often see in LibGDX code is 30 pixels = 1 meter.
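A minimal sketch of such a conversion helper (the 30 px/m ratio and all names here are my choices, not library constants):

public class Units {
    // Arbitrary convention: 30 screen pixels represent 1 Box2D meter.
    public static final float PIXELS_PER_METER = 30f;

    public static float toPixels(float meters) {
        return meters * PIXELS_PER_METER;
    }

    public static float toMeters(float pixels) {
        return pixels / PIXELS_PER_METER;
    }
}

You would then render with something like sprite.setPosition(Units.toPixels(body.getPosition().x), Units.toPixels(body.getPosition().y)), keeping all physics in meters and all drawing in pixels.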
I read the related question on this topic here: How to detect subjective image quality
I would like to detect whether the image uploaded by the user could be zoomed in at a high percentage (say, 500%) and still be of good image quality. I understand that "good image quality" is subjective, but for my purpose I would say it means "is the image pixelated?". I'm trying to find ways to do that.
Should I calculate the file size of the uploaded image? The larger it is, the more it can be zoomed in?
Should I calculate the total pixel count of the image?
Should I read the metadata tags of the uploaded image?
Combination of multiple things?
Are there solutions/libraries out there that can determine whether an image is badly pixelated?
Use the horizontal and vertical pixel count.
As long as the image has more pixels than you're displaying, you can zoom. Once you have more display pixels than image pixels, you have to interpolate the pixels. The higher the interpolation, the blurrier the picture.
Say you have a 4000 x 3000 pixel picture, and you're displaying 640 x 480 pixels.
You can zoom up to 625% horizontally (4000 / 640) and 625% vertically (3000 / 480). Take the smaller of the two numbers; here both are 625%. Rounding down to an even hundred, this picture could be zoomed to 600% without pixelation.
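A sketch of that calculation in Java (method and parameter names are mine):

static int maxZoomPercent(int imageWidth, int imageHeight, int displayWidth, int displayHeight) {
    int horizontal = 100 * imageWidth / displayWidth;  // 100 * 4000 / 640 = 625
    int vertical = 100 * imageHeight / displayHeight;  // 100 * 3000 / 480 = 625
    int zoom = Math.min(horizontal, vertical);         // take the smaller value
    return zoom / 100 * 100;                           // round down: 625 -> 600
}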
Now, you could zoom even higher than 600%, if you're willing to interpolate the pixels. How high can you zoom? That would depend on what you consider acceptable interpolation.
My guess would be roughly 25% or so higher; for this example picture, you could go to about 800% before the picture got too blurry.
If you want to be on the safe side, keep it below the calculated zoom size. If not, go as high as you wish and let the user determine how high is too high.
My question involves the drawImage method in Java Graphics2D (this is for a desktop app, not Android).
The BufferedImage I'd like to draw contains high-resolution binary data: most pixels are black, but I have some sparse green pixels (the green pixels represent data points from an incoming raw data stream). The bitmap is quite large, larger than my typical panel size; I made it large so I could zoom in and out. The problem is that when I zoom out, I lose some of my green pixels. As an example, if my image is 1000 pixels and my panel is 250 pixels, only 1 out of every 4 pixels in each direction (X and Y) survives. If I use nearest-neighbor interpolation when I scale, green pixels can simply disappear to black. If I use something like bilinear interpolation, my green pixels get recolored to somewhere between black and green.
I understand all this behavior, but my question is whether there is any way to get the behavior I want: to make sure that any non-black pixel is drawn at its full intensity. Perhaps something like a "max-hold" interpolation.
I realize I could probably do what I want by drawing shape primitives over a black background, and maybe that's what I'll have to do. But there is a reason I'm using bitmaps (it has to do with the fact that I'm showing the data in a falling, spectrogram-type display, and it does have a mode where all the pixels can be colored, not just black and green).
You could look at the implementation of drawImage and override it to get your desired behaviour; however, the core of the scaling probably uses hardware acceleration, so re-implementing it in Java would be really slow.
You could look into JOGL, but my impression is that, if your pixels are really sparse, just drawing the green pixels on a black background (or over an image) would be both easy to code and very fast.
You could even have a heuristic that switches from painting the dots to scaling the image if the number of dots gets too high.
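If you do want the "max-hold" scaling from the question done in software, here is a rough sketch (my code, not a built-in Graphics2D mode). It keeps the brightest source pixel in each block, so sparse green dots survive a downscale, but it will be far slower than hardware-accelerated drawImage:

import java.awt.image.BufferedImage;

static BufferedImage maxHoldScale(BufferedImage src, int dstW, int dstH) {
    BufferedImage dst = new BufferedImage(dstW, dstH, BufferedImage.TYPE_INT_RGB);
    for (int dy = 0; dy < dstH; dy++) {
        for (int dx = 0; dx < dstW; dx++) {
            // Source block that maps onto this destination pixel:
            int x0 = dx * src.getWidth() / dstW;
            int x1 = Math.max(x0 + 1, (dx + 1) * src.getWidth() / dstW);
            int y0 = dy * src.getHeight() / dstH;
            int y1 = Math.max(y0 + 1, (dy + 1) * src.getHeight() / dstH);
            int best = 0, bestLum = -1;
            for (int y = y0; y < y1; y++) {
                for (int x = x0; x < x1; x++) {
                    int rgb = src.getRGB(x, y);
                    // Keep the brightest pixel in the block ("max-hold"):
                    int lum = ((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF);
                    if (lum > bestLum) { bestLum = lum; best = rgb; }
                }
            }
            dst.setRGB(dx, dy, best);
        }
    }
    return dst;
}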