I am using libGDX to make a small game, I made a little sprite (32x32) that is shown in the center of the screen. For some reason when I render the texture to the screen it loses its quality. Since the textures are so small I made the screen width and height 200 and 100 respectively. Any tips or answers would be much appreciated.
Your sprite (32x32) needs to be displayed on an area larger than 32x32, which means the image has to be upscaled and interpolated (i.e. pixels between the 32 known ones have to be calculated). The common default is smooth (often linear) interpolation, which fills in the additional pixels nicely for photorealistic textures but blurs pixel art; that is what appears to have happened here.
For pixel-art, you likely want "nearest-neighbor" interpolation instead. While the exact way to set it depends on the structure of your code, you may be able to do something like:
textureObject.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
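For instance, if you create the Texture yourself, a minimal sketch might look like this (the file name is just an example):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.Texture.TextureFilter;

// Load the 32x32 sprite; "sprite.png" is a hypothetical asset name.
Texture sprite = new Texture(Gdx.files.internal("sprite.png"));
// Nearest-neighbor for both minification and magnification keeps hard pixel edges.
sprite.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);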
I have been trying for the past few hours to find a solution to this problem, but I can't seem to find anything.
I am developing a game for Android using LibGDX. In the emulator, the game looks fine, but when I play it on my phone, everything is different and misplaced. The solution I found for this is using density-independent pixels instead of regular pixels, so everything is placed correctly no matter what device I use. However, I can't seem to find a proper way to do that. The only relevant solution I have found was to use this:
public static float pixelToDP(float dp) {
    // Note: Gdx.graphics.getDensity() returns a scale factor, so despite
    // the name this converts dp into pixels (dp * density = px).
    return dp * Gdx.graphics.getDensity();
}
I tried resizing some of the objects using the formula above, but they are still different from the emulator.
Please, if anyone has a solution that doesn't involve changing the Orthographic Camera (I've already tried those), help me!
This answer is just to add to what TomGrill said in the comments.
The reason your game looks fine in the emulator is that you have used values that fit the resolution of the emulator.
If you position a sprite at 100,100 on a 1920x1080 resolution, the sprite will be in the upper (or lower, depending on how you orient your y axis) left corner.
On a 200,200 resolution, the sprite will be placed in the middle of the screen.
The size of the sprite also depends on the resolution / pixel density. If you have 1 pixel per inch, a 32x32 pixel sprite will be 32 inches wide and high. But on a screen with a high pixel density, let's say 100 px per inch, the same 32x32 sprite will look pretty small.
This is where viewports come in. You choose a resolution, let's say 900x540, and you just code for this resolution. The viewport will make sure your game scales up or down to fit any resolution and pixel density. If you place a sprite in the middle of your 900x540 screen, the viewport will make sure that it is placed in the middle of a 1920x1080 screen as well.
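A minimal sketch of that setup, assuming a FitViewport and a 900x540 virtual resolution (the class name is just an example):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;
import com.badlogic.gdx.utils.viewport.Viewport;

public class MyGame extends ApplicationAdapter {
    OrthographicCamera camera;
    Viewport viewport;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        // Position everything in 900x540 "world" units from here on.
        viewport = new FitViewport(900, 540, camera);
    }

    @Override
    public void resize(int width, int height) {
        // The viewport rescales the 900x540 world to the real screen size.
        viewport.update(width, height, true);
    }
}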
Even if you wanted to do these calculations yourself, Gdx.graphics.getDensity() is not of any use on its own. You need the width and the height of the physical screen to find the resolution. And what you would be doing next is reinventing the wheel.
I'm trying to create a game for an Android device and I have a small question about the rendering of the scene. I want to draw a square of a precise size, but I'm not sure how to get the coordinates of the borders of the screen in OpenGL coordinates. My application is set to landscape mode, so the computation should be easier.
I have drawn a square of size 2 and I have the impression that the square takes up the whole height of the screen. Since I know the resolution of my screen, which is 1920x1080, I can compute the width of my scene. Then, by drawing several squares, I found the coordinates of one corner.
This way of computing the coordinates is a bit clumsy, and I'm not sure it will always lead to a good answer. Is there a nicer, and ideally more reliable, way to compute those coordinates?
Thank you in advance!
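For reference, the computation described in the question can be written out directly; a sketch assuming the visible height runs from -1 to 1 (so the size-2 square fills it exactly):

// Landscape: the visible height spans [-1, 1], so the horizontal extent
// is simply that range scaled by the aspect ratio of the screen.
float aspect = 1920f / 1080f;          // screen width / height
float top = 1f, bottom = -1f;          // the size-2 square fills the height
float left = -aspect, right = aspect;  // about ±1.78, the x borders of the screen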
My question involves the drawImage method in Java Graphics2D (this is for a desktop app, not Android).
My BufferedImage that I'd like to draw contains high-resolution binary data; most pixels are black, but I have some sparse green pixels (the green pixels represent data points from an incoming raw data stream). The bitmap is quite large, larger than my typical panel size. I made it large so I could zoom in and out. The problem is that when I zoom out I lose some of my green pixels. As an example, if my image is 1000 pixels and my panel is 250 pixels, only 1 out of every 4 pixels in each direction (X and Y) survives. If I use nearest-neighbor interpolation when I scale, the pixels can just disappear to black. If I use something like bilinear interpolation, my green pixel will get recolored to somewhere between black and green.
I understand all this behavior, but my question is: is there any way to get the behavior I want, which is to make sure that any non-black pixel is drawn at its full intensity? Perhaps something like a "max-hold" interpolation.
I realize I could probably do what I want by drawing shape primitive over a black background, and maybe this is what I'll have to do. But there is a reason I'm using bitmaps (has to do with the fact that I'm showing the data in a falling spectrogram-type display - and it does have a mode where all the pixels could be colored and not just black and green).
Thanks,
You could look at the implementation of drawImage and override it to get your desired behaviour; however, the core of the scaling probably uses hardware acceleration, so re-implementing it in Java would be really slow.
You could look into JOGL, but my impression is that, if your pixels are really sparse, just drawing the green pixels on a black background (or over an image) would be both easy to code and very fast.
You could even have a heuristic that switches from painting the dots to scaling the image if the number of dots gets too high.
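If you do stay with bitmaps, a "max-hold" downscale can also be written by hand; a rough sketch, assuming an integer shrink factor and ignoring edge remainders:

import java.awt.image.BufferedImage;

// For each destination pixel, keep the per-channel maximum over the
// corresponding source block, so isolated green dots survive the downscale.
static BufferedImage maxHoldScale(BufferedImage src, int factor) {
    int w = src.getWidth() / factor, h = src.getHeight() / factor;
    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int r = 0, g = 0, b = 0;
            for (int sy = y * factor; sy < (y + 1) * factor; sy++) {
                for (int sx = x * factor; sx < (x + 1) * factor; sx++) {
                    int rgb = src.getRGB(sx, sy);
                    r = Math.max(r, (rgb >> 16) & 0xFF);
                    g = Math.max(g, (rgb >> 8) & 0xFF);
                    b = Math.max(b, rgb & 0xFF);
                }
            }
            dst.setRGB(x, y, (r << 16) | (g << 8) | b);
        }
    }
    return dst;
}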
I am generating very large hex grids (up to 120k total hexes at 32px-wide hexes, which results in images over 12k pixels wide) and I'm trying to find an efficient way to bind these to OpenGL textures in libgdx. I was thinking of using multiple FBOs and breaking the grid up as necessary into tiles, but I'm not sure how to ensure continuity between the FBOs. I can't start with one massive FBO, because an FBO is backed by a texture, so it would fail when trying to load it into video memory. I can't use a standard bitmap on the heap because I need the drawing functionality of an OpenGL surface.
So what I was thinking was I would need to overdraw on the FBOs and somehow pick up on the next FBO exactly where the previous left off. However I'm not sure how to go about this. I'm drawing the hex grid with a series of hexagonal meshes, FYI.
Of course, there's probably some other much simpler and more efficient way to do this that I'm not even thinking of, which is why I pose this question to you fine people!
You have to draw it in pieces. You need to be able to draw your hex grid from an arbitrary position. This means being able to compute which hexes to draw based on a rectangle overlaid over the map. This isn't a hard problem, and I wouldn't worry too much about drawing extra stuff off-screen. You should master this ability to view the hexmap from any position before moving on.
Once you've mastered that, it's really simple.
Draw the top-left corner and store the pixel data. Then move the area you're drawing over exactly one image width. Draw and store that. Move the area over one image width. Draw and store it. Keep doing that until you've covered the entire width.
Move down one image height and repeat the process. Once you've run out of width and height, you're done. Save your mega-huge image.
You don't need FBOs for this. You could draw it to the screen if you wanted. Though if you want maximum performance, I would suggest using FBOs, double-buffering them, and using glReadPixels through a pixel buffer object. That should cut down a lot on latency.
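A rough sketch of that loop in libGDX, assuming a drawHexes(camera) helper (hypothetical) that renders whatever falls inside the camera's view, and example tile and map sizes:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.PixmapIO;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;
import com.badlogic.gdx.utils.ScreenUtils;

int tileW = 1024, tileH = 1024;   // one manageable texture at a time
int mapW = 12288, mapH = 8192;    // example full-grid size in pixels
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, tileW, tileH, false);
OrthographicCamera cam = new OrthographicCamera(tileW, tileH);

for (int ty = 0; ty * tileH < mapH; ty++) {
    for (int tx = 0; tx * tileW < mapW; tx++) {
        // Center the camera on this tile of the big map.
        cam.position.set(tx * tileW + tileW / 2f, ty * tileH + tileH / 2f, 0);
        cam.update();
        fbo.begin();
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        drawHexes(cam); // hypothetical: draws the hexes visible to this camera
        Pixmap piece = ScreenUtils.getFrameBufferPixmap(0, 0, tileW, tileH);
        fbo.end();
        // Store the piece; writing each tile to disk is one simple option.
        PixmapIO.writePNG(Gdx.files.local("tile_" + tx + "_" + ty + ".png"), piece);
        piece.dispose();
    }
}
fbo.dispose();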
I'm trying to draw a 2D contour plot of some data on Android and I'm wondering what the best approach would be to draw it. The whole dataset can be relatively large (2k x 2k points), and zooming and moving inside the plot should be very fast. Most of the time only a small part of the data will be drawn, as the user will have zoomed in on the data.
My idea now would be to draw the whole plot onto a large canvas, but clip it to the portion visible on the screen, so that only that part would be really drawn in the end. I find the 2D drawing API of Android somewhat confusing and I'm not sure if this is really a feasible approach and how I would then go about executing it.
So my questions are:
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
How would I create a larger canvas and how would I select which parts should be drawn?
You should start the other way around. Instead of creating a huge canvas you should detect what part of your plot you need to draw and draw only that.
So basically you need some navigation/scrolling and you need to keep the offset from the starting point in memory to calculate where you are. Using the offset you can easily zoom in and out because you just need to scale the plot to the screen.
"Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?"
A better question might be: do you have any other options? Some might argue that this is a bad idea, since you're going to keep memory in use when it isn't relevant to what's happening on the UI. However, from my experience with the Canvas, I think you'll find this should work out just fine. Now, if you are trying to keep "5 square miles" of canvas in memory, you're definitely going to have to find a better way to manage it.
"How would I create a larger canvas and how would I select which parts should be drawn?"
I would expect that you will be creating your own "scrolling" behavior for when the user touches the screen, by overriding the onTouchEvent method. Basically, you're going to need to keep track of a starting point X and Y and update those values as you move the Canvas on screen. To move the Canvas, there are a number of built-ins like translate() and scale() that you can use to both move the Canvas in X and Y and scale it when the user zooms in or out.
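A bare-bones sketch of that idea in a custom View (class and field names are just examples; pinch-zoom is left out):

import android.content.Context;
import android.graphics.Canvas;
import android.view.MotionEvent;
import android.view.View;

public class PlotView extends View {
    private float offsetX, offsetY, lastX, lastY;
    private float zoom = 1f;

    public PlotView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent e) {
        if (e.getAction() == MotionEvent.ACTION_DOWN) {
            lastX = e.getX();
            lastY = e.getY();
        } else if (e.getAction() == MotionEvent.ACTION_MOVE) {
            // Accumulate the drag into the canvas offset and redraw.
            offsetX += e.getX() - lastX;
            offsetY += e.getY() - lastY;
            lastX = e.getX();
            lastY = e.getY();
            invalidate();
        }
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.translate(offsetX, offsetY);
        canvas.scale(zoom, zoom);
        // ... draw only the part of the plot that is now visible ...
    }
}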
I don't think it is a good idea to draw your 2D contour plot on a big bitmap, because you need vector-type graphics to keep it sharp while zooming in and out. Pictures scale down well, but graphs will lose thin lines or come out deformed when scaled down as bitmaps.
The proper way is to do it all mathematically and to calculate which part of the graph should be drawn for the required position and zoom level. Using an anti-aliased Paint for lines and text, the graph will always come out sharp and clean.
When the user zooms out, some items should not be drawn at all, since they could not fit on the screen or would clutter it. That way the graph is always optimized for the zoom level.
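For the anti-aliased drawing mentioned above, a short sketch:

import android.graphics.Paint;

// An anti-aliased Paint keeps lines and text sharp at any zoom level.
Paint plotPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
plotPaint.setStrokeWidth(2f); // example stroke width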