I have a game developed natively for Android, and now my users also want an iOS version. I thought LibGDX would be the best choice because it will let me reuse the Java code from the game, and I already have some experience with it.
In my game I have different image sizes for different device densities (in drawable-hdpi, drawable-xhdpi and so on).
So, my question is: how can I achieve the same, but using LibGDX (also taking care of the new densities required by iOS device resolutions, if any change is required)?
Thank you.
Yes, you can achieve the same thing, but it won't be automatic like on Android unless you write some native code as well. I have found that the best way to manage it is simply to do it yourself:
1) When your app starts, get the screen size and density using Gdx.graphics.getWidth(), Gdx.graphics.getHeight() and Gdx.graphics.getDensity().
2) Depending on the size and density, set the path to the folder your assets should be loaded from.
3) Now, whenever any asset-loading code runs, make sure it uses the pre-set path from the step above, so that you get the correct assets for that display size/density (see the sketch below).
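A minimal sketch of those three steps; the folder names ("mdpi/", "hdpi/", "xhdpi/") and the density thresholds are just assumptions for illustration:

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;

public final class AssetPaths {
    // Steps 1 + 2: pick a folder based on the reported density
    public static String chooseAssetDir() {
        float density = Gdx.graphics.getDensity(); // roughly 1.0 = mdpi, 1.5 = hdpi, 2.0 = xhdpi
        if (density < 1.5f) return "mdpi/";
        if (density < 2.0f) return "hdpi/";
        return "xhdpi/";
    }

    // Step 3: always prefix asset loads with the pre-set path
    public static Texture loadTexture(String assetDir, String name) {
        return new Texture(Gdx.files.internal(assetDir + name));
    }
}
```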
Most of the time you can just use the largest image and let Viewports handle resolution and aspect ratio for you. The larger images will be scaled down, which of course results in some loss of detail.
A Viewport automatically scales the portion of your game world you want to show to the screen it is displayed on. For example, FitViewport(100, 100) creates a viewport that shows 100 x 100 "game units". If you play this on a 1920 x 1080 device it will scale that 100 x 100 game world to a 1080 x 1080 area and leave an empty bar of 840 x 1080.
The size of the game world has nothing to do with pixels. You could create an enemy with a size of 0.5f x 0.5f world units and give it a texture of 256 x 256 pixels. Your viewport scales this to the correct size for you.
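A minimal sketch of that setup; the 100 x 100 world size, the "enemy.png" texture and the positions are assumptions for illustration:

```java
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.utils.viewport.FitViewport;

public class MyGame extends ApplicationAdapter {
    private OrthographicCamera camera;
    private FitViewport viewport;
    private SpriteBatch batch;
    private Texture enemyTexture;

    @Override public void create() {
        camera = new OrthographicCamera();
        viewport = new FitViewport(100, 100, camera); // show 100 x 100 game units
        batch = new SpriteBatch();
        enemyTexture = new Texture("enemy.png");      // e.g. a 256 x 256 px image
    }

    @Override public void resize(int width, int height) {
        viewport.update(width, height, true);         // recompute scaling and letterboxing
    }

    @Override public void render() {
        camera.update();
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        batch.draw(enemyTexture, 10f, 10f, 0.5f, 0.5f); // an enemy drawn at 0.5 x 0.5 world units
        batch.end();
    }

    @Override public void dispose() {
        batch.dispose();
        enemyTexture.dispose();
    }
}
```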
Unless you want a pixel-perfect game this should be good enough. On devices with big screens but low resolutions you might get some minor artefacts due to filtering; setting the filtering on your textures with texture.setFilter(TextureFilter.Nearest, TextureFilter.Linear) might fix some of them.
When designing graphics, all I really think about is that one pixel in my art should represent roughly (or at least) one screen pixel. Usually I just draw pixel-perfect for HD and it looks fine on an 800 x 480 screen. If you want to squeeze out a bit more performance you could use mipmaps; I think TexturePacker can generate them automatically with the right filter settings, but I have no experience with them.
This can be done using
com.badlogic.gdx.assets.loaders.resolvers.ResolutionFileResolver.
Here is the javadoc for it.
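A rough sketch following the usual libGDX pattern; the resolutions and the "images/480x800/..." folder layout are assumptions for illustration:

```java
import com.badlogic.gdx.assets.AssetManager;
import com.badlogic.gdx.assets.loaders.TextureLoader;
import com.badlogic.gdx.assets.loaders.resolvers.InternalFileHandleResolver;
import com.badlogic.gdx.assets.loaders.resolvers.ResolutionFileResolver;
import com.badlogic.gdx.assets.loaders.resolvers.ResolutionFileResolver.Resolution;
import com.badlogic.gdx.graphics.Texture;

public final class Assets {
    public static AssetManager createManager() {
        // Each Resolution maps a screen size to a sub-folder name
        ResolutionFileResolver resolver = new ResolutionFileResolver(
                new InternalFileHandleResolver(),
                new Resolution(480, 800, "480x800"),
                new Resolution(1080, 1920, "1080x1920"));

        AssetManager manager = new AssetManager();
        manager.setLoader(Texture.class, new TextureLoader(resolver));

        // "images/logo.png" then resolves to e.g. "images/1080x1920/logo.png",
        // depending on which Resolution best matches the current screen.
        manager.load("images/logo.png", Texture.class);
        return manager;
    }
}
```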
I made a little game for my Android phone (1440x2960) and I used pixel coordinates to draw (without any layout):
canvas.drawBitmap(image, x, y, null);
The code works fine on 1440x2960 screens, but I didn't pay attention to different screen sizes (at 720p you can't even see half of the game). How can I solve this problem nicely? I know I can make different sizes of the images, but the real problem is the x and y coordinates. I thought maybe I could get the actual pixel size of the screen, make a ratio, and multiply the coordinates by it, but that is harder than it looks (it needs many changes) and there should be a better solution.
To get a phone's resolution, use
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
This object will contain the data you need to adjust the images accordingly, when used with the functions in this documentation.
Also, have a look at converting dp to pixels and vice versa for appropriate scaling when referencing sizes in Java and XML simultaneously.
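For example, a small helper along those lines (the class and method names here are mine):

```java
import android.content.Context;
import android.util.DisplayMetrics;
import android.util.TypedValue;

public final class DpUtils {
    // dp -> px, using the standard framework conversion
    public static float dpToPx(Context context, float dp) {
        DisplayMetrics metrics = context.getResources().getDisplayMetrics();
        return TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, dp, metrics);
    }

    // px -> dp, dividing by the screen density
    public static float pxToDp(Context context, float px) {
        return px / context.getResources().getDisplayMetrics().density;
    }
}
```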
I am trying to write a card game that will have an image in the middle, a coloured border and potentially a symbol in the top left. I am using libGDX and a Stage/scene2d.
The image will change depending on the suit (these are not your normal 52-card deck type of cards), and the border colour will change to match the suit. This colour can be predetermined by the user, so I can't pre-save coloured images (though I guess I could pre-save about 15 different colours and give the user a choice of 15). The symbol will only be on some cards and not others.
As you can see from the two images I've added, I have two different images, border colours and symbols.
My question is relating to groups and overdraw.
1) I presume I should set up a Group with 3 images in it. I'm hoping this won't slow my game down, as I will potentially have 30 cards on screen at once, and 30 groups with up to 3 images each could be a lot to draw. Is this right, and will libGDX be able to handle it fine?
2) How should I do the coloured border? Should I make the entire card a coloured rectangle with the image drawn on top? Would the draw method then be drawing the coloured section underneath the image and thus wasting GPU/CPU time? A friend of mine said I could just have a white image and then set the colour using RGB values in the code. Is this possible? That would mean I would only need a single JPG, which would be much better for APK size.
3) Or should I skip using a JPG image and try to draw the coloured square using ShapeRenderer?
Thanks, I hope these questions aren't too many in one post.
Of course it will not slow your game down - 90 sprites (30 groups x 3 textures) on screen are nothing for (I would guess) any framework, so libGDX will handle it too.
It does matter how these graphics are stored, though! Use one TextureAtlas containing all the textures instead of many textures in many files - you can prepare the atlas with TexturePacker (for basic usage like this the free version will be fine).
The reason is that if you use many separate Textures, libGDX needs to bind a different texture on the GPU before rendering each of them - if you have one big texture (containing the others) nothing needs to be switched, because the bound texture never changes.
For more information about using TextureAtlas, take a look at the manual.
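For reference, a minimal sketch of drawing several card parts from one packed atlas; the "cards.atlas" file and the region names are assumptions:

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.graphics.g2d.TextureAtlas.AtlasRegion;

public class CardRenderer {
    private final TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("cards.atlas"));
    private final AtlasRegion face = atlas.findRegion("card_face");
    private final AtlasRegion symbol = atlas.findRegion("suit_symbol");

    public void draw(SpriteBatch batch, float x, float y) {
        // Both regions live in the same texture, so the GPU binds it only once
        batch.draw(face, x, y);
        batch.draw(symbol, x + 4f, y + 4f);
    }

    public void dispose() {
        atlas.dispose();
    }
}
```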
Yes, you can "color" an Actor (like an Image) by using, for example:
actor.addAction(Actions.color(new Color(1f,1f,1f, 0.2f), 2f));
Another approach is to have a 1x1 px texture of every colour (packed in your TextureAtlas) and just change its size. There is no reason to keep full-size single-colour textures, since that is a great waste of space and memory.
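A minimal sketch of that idea; the atlas, the "white" region name and the size parameters are assumptions:

```java
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.scenes.scene2d.ui.Image;

public final class CardBorders {
    // Stretch a 1x1 white region from the atlas and tint it to the user's colour
    public static Image createBorder(TextureAtlas atlas, Color suitColor,
                                     float cardWidth, float cardHeight) {
        Image border = new Image(atlas.findRegion("white"));
        border.setColor(suitColor);
        border.setSize(cardWidth, cardHeight);
        return border; // add it to the card's Group underneath the artwork
    }
}
```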
It is generally not a good idea to use ShapeRenderer - of course it is helpful sometimes, but drawing sprites performs better.
You are generally a little too worried about performance, but if you want to save CPU, avoiding ShapeRenderer seems like a good idea.
I’m very new to Android programming and the one thing that really has me confused relates to screen density and screen dimensions. I’ve read plenty of replies to other questions on here and I’ve read the Google docs on how to program for multiple screen sizes. None have really helped address either the problem or my own general ignorance. I hope it is okay to ask this here so somebody might finally explain it simply enough so that I’ll be able to wrap my brain around this problem.
First of all, I’ve been working with SurfaceViews onto which I’m throwing bitmaps. I’ve been primarily programming for the Samsung Note 10.1 (2014) edition. The screen is 2048x1536 and returns a screen density of 2.0 when I query the display. My approach has been to make graphics that work at those dimensions but within the code, I’ve used the oft-quoted formula to convert floating point dp coordinates into pixels, ready for the moment I move to other devices.
px = (dp * density) + 0.5f
I’ve now been trying to get the app working on a Samsung S2. The screen is 480 by 800. On the phone, the app is (I assume correctly) loading graphics from the HDPI folder because the pixel density is 1.5.
My first problem was that the graphics in the HDPI folder were originally far too big. I’d used the Resize program to quickly resize my original XHDPI folder. Perhaps I simply didn’t select the correct source setting, but the resulting graphics were far bigger than the actual 480x800 graphic that, I finally found, filled the screen.
However, that was only a symptom of my larger confusion.
When developing an app using bitmaps, is there some magic formula I’ve missed which allows dp values to be translated to pixels, or should I be doing calculations based on the actual screen dimensions? By the formula, 100dp is approximately 150px on the (1.5 density) 800px-wide screen but 200px on the bigger (2.0 density) 2560 display. That’s 18% horizontally across the S2’s screen but only 8% across the wider screen on the Note 10.1.
I naively assumed that a dp value would translate across all devices and simply put things in the right place or do I have that wrong? Just writing this up makes me even more convinced that I misunderstood what dp values are. I was confused by the suggestion of working to a theoretical Google device with a pixel density of 1 and then adapting everything based on other pixel densities or screen sizes.
Simply being told, as I keep hearing, to work in dp units so everything is uniform hasn’t quite worked for me, so I’m now seeking the advice of wiser counsel. In other words: please help!
Thanks.
I have a problem in Android development that is bothering me. My problem is screen size and how to deal with it, especially with images. For example, I want a background image for my activity that I created in Photoshop, and the image contains the word "HELLO". But when I put it in the drawable-xhdpi folder, it looks blurry and not sharp! My phone is a Nexus 4, and according to the Google documentation I created the background image at 640 x 480.
When I create the background image at 960 x 720 it looks better, but still not perfect, and in that case the image file size is very high!
What is the standard way to do this? Please help me solve this problem once and for all. I have read the Google documentation, but it doesn't solve my problem!
http://developer.android.com/guide/practices/screens_support.html
You should usually avoid creating background images for specific screen sizes, because there are thousands of different devices and you would have to create dozens of such images.
The first thing you need to be aware of is screen density.
Generally you create 3 to 5 images before even considering screen size: low (120 dpi), medium (160 dpi), high (240 dpi), extra high (320 dpi) and 2x extra high (480 dpi). These go into drawable-Xdpi folders, where X is one of l, m, h, xh, xxh.
Next, when you want to have bigger images on bigger screens (bigger phones, small and big tablets), you may want to put images into folders like drawable-sw600dp-Xdpi. This is not the case for your phone.
The Nexus 4 is an xhdpi 640x384 dp device, but you should not treat it differently than a Samsung Galaxy S2 (hdpi 533x320 dp).
Create an image of smaller size for both phones and center it horizontally. E.g. 320x100 px for mdpi, 480x150 px for hdpi and 640x200 px for xhdpi (your phone).
The screen resolution of the Nexus 4 is 1280x768 (http://www.google.com/nexus/4/specs/), so resize the image to this resolution. Keep in mind that some images don't handle the resizing well and can end up looking disproportionate.
Also of interest, a resolution calculator:
http://members.ping.de/~sven/dpi.html
This is a problem of Android fragmentation, and you just cannot deal with it perfectly, as there are several hundred different devices. As the colleague above wrote, the Nexus 4 has a resolution of 1280 x 768, so an image resolution of 960 x 720 is certainly a good choice. I'm even surprised that Google suggests 640 x 480 for xhdpi; it's definitely too small.
So, as I said, you are not able to make perfect-looking graphics for all existing devices. You should choose the most popular devices from every screen category (xhdpi, mdpi, ldpi, etc.) to cover the most important market share.
With 1600+ Android models, even after they are categorized into a few screen sizes and a few DPIs, it is very difficult to manage layouts. I suggest that you concentrate on designing layouts with respect to screen size, and then create your views as resizable views to neutralize density effects.
Once you have created your layouts, resize the views: you can create a custom View and resize it in its onMeasure(), as sketched below.
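A rough sketch of that idea, assuming the view should take up a fixed fraction of the screen width regardless of density (the fraction and class name are illustrative):

```java
import android.content.Context;
import android.widget.ImageView;

public class ResizableImageView extends ImageView {
    private static final float WIDTH_FRACTION = 0.25f; // assumed: a quarter of the screen

    public ResizableImageView(Context context) {
        super(context);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        // Size the view relative to the screen, so density differences don't matter
        int screenWidth = getResources().getDisplayMetrics().widthPixels;
        int size = (int) (screenWidth * WIDTH_FRACTION);
        setMeasuredDimension(size, size);
    }
}
```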
I have a very large hi-res map which I want to use in an application (the image size is around 80 MB).
I would like to know the following:
How can I load this image in the best way possible? I know it will take some seconds to load the image (which is ok), but I would like to notify the user of the progress. I would like to use determinate mode and show this in some sort of JProgressBar. It should reflect the number of bytes that have been loaded, or something like that. Is there any image-loading method that can provide this functionality (like ImageIO.read())?
Because the map is very high resolution, I would like to let the user scroll to zoom in and out. What is the best way to do this? I know for a fact that rescaling a BufferedImage the standard way would take a VERY long time for such a big file. Is there an efficient way of doing this?
Thank you for your input!
kind regards,
Héctor van den Boorn
p.s. The image will be drawn on the canvas of a JPanel.
Hi Andrew, thank you so much for your help; everything worked out perfectly and is loading quickly.
Without your expertise and explanation I would have still been working on this so you've earned the bounty fair and square.
What I did was the following: using ImageMagick I created multiple images at different resolutions, and at the start of execution I load only the smallest-resolution image. The rest are loaded in separate threads so execution is not stalled. Using the information you provided, I then use the appropriate image when zooming in or out. I'm a bit sceptical of using the tiles, because I need to draw my own images on top of the map and I couldn't find the paint function in the external jar you pointed me to, so I ended up using something simple: when zooming or panning, the rescale mode is set to fast, and when not zooming or panning, the rescale is set to smooth for pixel-perfect images (just like you suggested). This turns out to be fast enough and I don't need tiles (although I do see that with even larger images this would be necessary, and I understand the information you've given me).
So thanks again and everything is working perfectly :)
There are two approaches you should (simultaneously) take:
Downscaling your image into various sizes. You should downscale your image at a series of lower resolutions (1/2, 1/4, 1/8, etc until the image is about the largest likely screen resolution). When the user first opens the image, you show the lower resolution image. This will load fast and allow the user to pan. When the user zooms in, you use a higher resolution image. You can use ImageMagick for this: http://www.imagemagick.org/Usage/resize/
Tile your larger images. This breaks down the single, large image into a large number of small images in a grid pattern. When a user zooms in on an area, you compute which tiles the user is looking at, and you render only those, not the other areas of the image. You can use ImageMagick to split an image into tiles; see, e.g., "ImageMagick. What is the correct way to dice an image into sub-tiles". The documentation is http://www.imagemagick.org/Usage/crop/#crop_tile
(Providing a cache of appropriately sized and tiled images is what allows Google Earth and countless other mapping applications to render so fast, yet zoom into the map at incredibly high resolution.)
Once you have your tiles, you can use one of several engines in Java:
https://wiki.openstreetmap.org/wiki/Tirex
http://www.slick2d.org/wiki/index.php/Tiled
There may be others as well.
You can implement arbitrary zooming (suitable for pinch-to-zoom or similar) within this framework. Within the zoom limits you allow, your algorithm would be something like this (a minimal sketch follows the list):
For the zoom level chosen by the user, choose the closest higher resolution cache. For example, if you have 100%, 50%, 25% and 12.5% tiles, and the user chooses 33% zoom, select the 50% tiles
Set the layout for the tiles so the tile squares have the correct size for the chosen zoom (this might be a single tile at lowest zoom levels). For example, at 33% zoom using 50% tiles, with the tiles being 100 pixels square, the grid will be 67 pixel squares
Individually load and scale the tile images to fit the screen (this can be multi-threaded which works well on modern CPU architectures)
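A small sketch of the cache-selection and tile-size arithmetic; the available scales and the 100 px tile size are taken from the example above and are otherwise assumptions:

```java
public final class TileMath {
    public static void main(String[] args) {
        double[] cacheScales = {1.0, 0.5, 0.25, 0.125}; // pre-scaled tile sets on disk
        int baseTileSize = 100;                          // tile edge in pixels at its own scale
        double userZoom = 1.0 / 3.0;                     // the "33%" zoom from the example

        // Pick the closest cache that still has at least the requested resolution
        double chosen = 1.0;
        for (double s : cacheScales) {
            if (s >= userZoom && s < chosen) {
                chosen = s;
            }
        }

        // At 33% zoom using 50% tiles, each 100 px tile is drawn as ~67 px on screen
        int onScreenTileSize = (int) Math.round(baseTileSize * userZoom / chosen);
        System.out.println("Use " + (int) (chosen * 100) + "% tiles drawn at "
                + onScreenTileSize + " px");
    }
}
```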
There are a couple of points to note:
The scaling algorithm changes when you reach the greatest resolution you have tiles for.
Up to 100% zoom, use bilinear or bicubic scaling. This gives photographs an excellent appearance with little jaggedness
Above 100%, you probably want to show the pixels, so nearest-neighbour might be a good choice (see the scaling sketch after this list)
For higher fidelity, use a higher scale tile and downscale > 50%. For example, suppose you have tiles prepared at 100%, 50%, 25% and 12.5%. To show 40% zoom, don't scale down the 50% tiles; instead use the 100% tiles and scale them down to 40%. This is useful:
If your images are textual or diagrams (i.e. raster images containing many straight lines). Scaling these types of images will often produce nasty artefacts if you don't oversample
If you need very high fidelity on photographic-style images
If you need to render a preview of the zoom (eg while the user is still pinching-and-zooming), grab a screenshot at the start of the gesture and zoom that. It matters much more that the animation is smooth than the zoom preview is pixel-perfect.
Selection of the right tile size is important. Very large tiles (fewer than one per screen) are slow to render. Tiles that are too small create other overheads and often produce nasty rendering artefacts, where you see the screen filling up randomly. A good compromise between performance and complexity is to make the tiles about a quarter of the full-screen size.
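For the scaling step itself, a hedged Java2D sketch that switches the interpolation hint depending on whether you are above or below 100% zoom (the class and method names are illustrative):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public final class TileScaler {
    public static BufferedImage scaleTile(BufferedImage tile, int targetW, int targetH,
                                          boolean above100PercentZoom) {
        BufferedImage out = new BufferedImage(targetW, targetH, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = out.createGraphics();
        // Bicubic for scaling photographs down, nearest-neighbour to show the pixels above 100%
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                above100PercentZoom ? RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR
                                    : RenderingHints.VALUE_INTERPOLATION_BICUBIC);
        g.drawImage(tile, 0, 0, targetW, targetH, null);
        g.dispose();
        return out;
    }
}
```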
When using these techniques, the images should load very much faster, so the progress bar is not so important. If it is, then you need to register an IIOReadProgressListener on the ImageReader:
ImageReader.addIIOReadProgressListener()
From the JavaDoc:
An interface used by ImageReader implementations to notify callers of their image and thumbnail reading methods of progress.
This interface receives general indications of decoding progress (via the imageProgress and thumbnailProgress methods), and events indicating when an entire image has been updated (via the imageStarted, imageComplete, thumbnailStarted and thumbnailComplete methods). Applications that wish to be informed of pixel updates as they happen (for example, during progressive decoding), should provide an IIOReadUpdateListener.
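A minimal sketch of wiring that listener to a determinate JProgressBar; the class and method names here are illustrative:

```java
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.event.IIOReadProgressListener;
import javax.imageio.stream.ImageInputStream;
import javax.swing.JProgressBar;
import javax.swing.SwingUtilities;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;

public final class MapLoader {
    public static BufferedImage loadWithProgress(File file, JProgressBar bar) throws IOException {
        try (ImageInputStream in = ImageIO.createImageInputStream(file)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) throw new IOException("No reader for " + file);
            ImageReader reader = readers.next();
            reader.setInput(in);
            reader.addIIOReadProgressListener(new IIOReadProgressListener() {
                @Override public void imageProgress(ImageReader r, float pct) {
                    // pct runs from 0 to 100, so it maps directly to a determinate bar
                    SwingUtilities.invokeLater(() -> bar.setValue((int) pct));
                }
                @Override public void imageComplete(ImageReader r) {
                    SwingUtilities.invokeLater(() -> bar.setValue(100));
                }
                // The remaining callbacks are not needed for a simple progress bar
                @Override public void imageStarted(ImageReader r, int imageIndex) {}
                @Override public void sequenceStarted(ImageReader r, int minIndex) {}
                @Override public void sequenceComplete(ImageReader r) {}
                @Override public void thumbnailStarted(ImageReader r, int imageIdx, int thumbIdx) {}
                @Override public void thumbnailProgress(ImageReader r, float pct) {}
                @Override public void thumbnailComplete(ImageReader r) {}
                @Override public void readAborted(ImageReader r) {}
            });
            return reader.read(0);
        }
    }
}
```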