A Beginner’s Confusion about Screen Dimensions and Screen Density - java

I’m very new to Android programming and the one thing that really has me confused relates to screen density and screen dimensions. I’ve read plenty of replies to other questions on here and I’ve read the Google docs on how to program for multiple screen sizes. None have really helped address either the problem or my own general ignorance. I hope it is okay to ask this here so somebody might finally explain it simply enough so that I’ll be able to wrap my brain around this problem.
First of all, I’ve been working with SurfaceViews onto which I’m throwing bitmaps. I’ve been primarily programming for the Samsung Note 10.1 (2014 edition). The screen is 2560x1600 and reports a screen density of 2.0 when I query the display. My approach has been to make graphics that work at those dimensions, but within the code I’ve used the oft-quoted formula to convert floating-point dp coordinates into pixels, ready for the moment I move to other devices.
px = (dp * density) + 0.5f
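In code, that works out to something like the following (a minimal sketch; dpToPx is just an illustrative helper name, not anything from the framework):

    import android.content.Context;

    // Minimal sketch: convert dp to physical pixels using the display's density.
    public static int dpToPx(Context context, float dp) {
        float density = context.getResources().getDisplayMetrics().density; // e.g. 1.5 on the S2, 2.0 on the Note 10.1
        return (int) (dp * density + 0.5f); // the 0.5f rounds to the nearest whole pixel
    }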
I’ve now been trying to get the app working on a Samsung S2. The screen is 480 by 800. On the phone, the app is (I assume correctly) loading graphics from the HDPI folder because the pixel density is 1.5.
My first problem was that the graphics in the HDPI folder were initially far too big. I’d used the Resize program to quickly resize my original XHDPI folder. Perhaps I simply didn’t select the correct source setting, but the resulting graphics were far bigger than the 480x800 image that, I eventually found, actually fills the screen.
However, that was only a symptom of my larger confusion.
When developing an app using bitmaps, is there some magic formula I’ve missed which allows dp values to be translated to pixels, or should I be doing calculations based on the actual screen dimensions? By the formula, 100dp is approximately 150px on the (1.5 density) 800px-wide screen but 200px on the bigger (2.0 density) 2560px display. That’s 18% horizontally across the S2’s screen but only 8% across the wider screen on the Note 10.1.
I naively assumed that a dp value would translate across all devices and simply put things in the right place, or do I have that wrong? Just writing this up makes me even more convinced that I’ve misunderstood what dp values are. I was confused by the suggestion of working to a theoretical Google device with a pixel density of 1 and then adapting everything based on other pixel densities or screen sizes.
Simply saying, as I keep hearing, “work in dp units so everything is uniform” hasn’t quite worked for me, so I’m now seeking the advice of wiser counsel. In other words: please help!
Thanks.

Related

Android studio different screen sizes pixels

I made a little game for my android phone (1440x2960) and I used pixels to draw (without any layout):
canvas.drawBitmap(image, x, y, null);
The code is working fine on 1440x2960 screens, but I didn't pay attention to different screen sizes (at 720p you can't even see half of the game). How can I solve this problem nicely? I know that I can make different sizes for the image, but the real problem is the x and y coordinates. I thought that maybe I could get the actual pixel size of the screen, work out a ratio, and multiply the coordinates by it, but that's harder than it looks (it needs many changes) and there should be a better solution.
To get a phone's resolution, use
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics); // fills in widthPixels, heightPixels and density
This object will contain the data you need to adjust the images accordingly, when used with the functions in this documentation.
Also, have a look at converting dp to pixels and vice versa for appropriate scaling when referencing sizes in Java and XML simultaneously.
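If it helps, a minimal sketch of that conversion using the framework's own TypedValue.applyDimension (the helper name is only illustrative):

    import android.content.Context;
    import android.util.DisplayMetrics;
    import android.util.TypedValue;

    // Illustrative helper: let the framework convert dp to pixels for the current display.
    public static float dpToPx(Context context, float dp) {
        DisplayMetrics metrics = context.getResources().getDisplayMetrics();
        return TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, dp, metrics);
    }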

Real time logo detection in live camera preview with OpenCV on Android

I am new to computer vision but I am trying to code an android app which does the following:
Get the live camera preview and try to detect one logo in it (I have the logo in my resources), in real time. Draw a rect around the logo if it's found. If there is no match, don't draw the rectangle.
I already tried a couple of things including template-matching and feature detection using ORB.
Why those didn't work:
Template-matching:
Issues with scaling and rotation. I tried a multi-scale variant of it, but a) the performance was really bad and b) a rectangle was of course always drawn, since the search always returns a best match. There was no way to actually confirm in the code whether the logo was found or not.
ORB feature detection:
Also pretty slow (5-6 fps), but it worked OK-ish. The other problem was that, again, I could never be sure whether the logo was in the picture or not; ORB found random matches even when the logo was not in the picture.
Like I said, I am very new to this. I would appreciate help on what would be the best way to achieve the following:
Confirm whether a picture A (around 200x200 pixels) is in the ROI of the camera picture (around 600x600 pixels).
This shouldn't take longer than 50ms per frame. I don't know if that's even possible, though. So if a correct way to do this takes a bit longer than that, I would just do the work in a separate thread and only analyze, say, every fifth camera frame.
Would appreciate any hints or code examples on how to achieve that. Thank you!
For logo detection, I would highly recommend using an OpenCV Haar classifier. It is easy to generate training samples from a collection of images of the logo, or from one logo image with many distortions.
If you can apply a few rules, like the minimum and maximum size of the logo to be detected and the possible regions of the image where it can appear, you can run the detector faster than the ORB speed you mention.
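A rough sketch of how that might look with OpenCV's Java bindings, assuming you have already trained a cascade for the logo (the file path, the size limits and the method name are placeholders, not part of the recommendation above):

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.core.Size;
    import org.opencv.objdetect.CascadeClassifier;

    // Load once, e.g. when the activity starts; the cascade file path is a placeholder.
    CascadeClassifier detector = new CascadeClassifier("/path/to/logo_cascade.xml");

    // Run on each grayscale camera frame; an empty result means "no logo in this frame".
    Rect[] detectLogo(Mat grayFrame) {
        MatOfRect hits = new MatOfRect();
        detector.detectMultiScale(
                grayFrame,           // input frame
                hits,                // detected rectangles (output)
                1.1,                 // scale factor between pyramid levels
                3,                   // minNeighbors: higher = fewer false positives
                0,                   // flags (ignored by modern OpenCV)
                new Size(60, 60),    // minimum logo size, per the "rules" above
                new Size(400, 400)); // maximum logo size
        return hits.toArray();
    }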

LibGDX different assets for different resolutions like Android

I have a game developed natively for Android, and now my users also want an iOS version. I thought LibGDX would be the better choice because it'll let me reuse Java code from the game, and also I already have some experience with it.
In my game I have different image sizes for different device densities (in drawable-hdpi, drawable-xhdpi and so on).
So, my question is: how can I achieve the same, but using LibGDX (also taking care of the new densities required by iOS device resolutions, if any change is required)?
Thank you.
Yes, you can achieve the same, but it won't be automatic like on Android unless you write some native code as well. I have found that the best way to manage it is simply to do it yourself:
1) When your app starts, you can get the screen size and density using Gdx.graphics.getWidth(), Gdx.graphics.getHeight() and Gdx.graphics.getDensity().
2) Depending on the size and density you can change the location path to the correct folder where your assets should be loaded from.
3) Now, when any asset-loading code runs, make sure it uses your pre-set path from the step above, so that you get the correct assets for that display size/density (see the sketch below).
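A minimal sketch of steps 1-3, assuming your assets are duplicated under folders named mdpi/, hdpi/ and xhdpi/ (the folder names and density thresholds are only an example):

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.Texture;

    // Pick an asset folder once at startup based on the reported density (1.0 = mdpi baseline).
    String assetDir;
    float density = Gdx.graphics.getDensity();
    if (density >= 2.0f) {
        assetDir = "xhdpi/";
    } else if (density >= 1.5f) {
        assetDir = "hdpi/";
    } else {
        assetDir = "mdpi/";
    }

    // Later, every load goes through the pre-set path.
    Texture logo = new Texture(Gdx.files.internal(assetDir + "logo.png"));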
Most of the time you can use the largest image and use Viewports to handle resolution and aspect ratio for you. The larger images will be scaled down, which results in some loss of detail, of course.
A viewport automatically scales the portion of your game world you want to show to the screen it is displayed on. For example, FitViewport(100, 100) creates a viewport that shows 100 x 100 "game units". If you play this on a 1920 x 1080 device, it scales that 100 x 100 game world to a 1080 x 1080 area and leaves an empty bar of 840 x 1080.
The size of the game world has nothing to do with pixels. You could create an enemy with a size of 0.5f x 0.5f world units and give it a texture of 256 x 256 pixels. Your viewport scales this to the correct size for you.
Unless you want a pixel-perfect game, this should be good enough. On devices with bigger screens but lower resolutions you might get some minor artefacts due to filtering; setting the filtering for your textures, e.g. texture.setFilter(TextureFilter.Nearest, TextureFilter.Linear), might fix some of them.
All I ever think about when designing graphics is that a pixel in my art should represent roughly, or at least, one screen pixel. Usually I just draw pixel-perfect for HD and it looks fine on an 800 x 480 screen. If you want to squeeze out a bit more performance you could use mip maps; I think TexturePacker generates them automatically with the right filter settings, but I have no experience with them.
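For reference, a minimal sketch of the viewport approach described above (the 100 x 100 world size is just the example from this answer):

    import com.badlogic.gdx.graphics.OrthographicCamera;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;
    import com.badlogic.gdx.utils.viewport.FitViewport;

    // In create(): a world of 100 x 100 game units, letterboxed onto any screen.
    OrthographicCamera camera = new OrthographicCamera();
    FitViewport viewport = new FitViewport(100, 100, camera);
    SpriteBatch batch = new SpriteBatch();

    // In resize(int width, int height):
    viewport.update(width, height, true); // 'true' centers the camera on the world

    // In render(): draw in world units; the viewport maps them to screen pixels.
    batch.setProjectionMatrix(camera.combined);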
This can be done using
com.badlogic.gdx.assets.loaders.resolvers.ResolutionFileResolver.
Here is the javadoc for it.
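For completeness, a rough sketch of how that resolver is typically wired into an AssetManager; the folder names are assumptions, and the exact path layout the resolver produces depends on your LibGDX version, so treat this only as an outline:

    import com.badlogic.gdx.assets.AssetManager;
    import com.badlogic.gdx.assets.loaders.FileHandleResolver;
    import com.badlogic.gdx.assets.loaders.resolvers.InternalFileHandleResolver;
    import com.badlogic.gdx.assets.loaders.resolvers.ResolutionFileResolver;
    import com.badlogic.gdx.assets.loaders.resolvers.ResolutionFileResolver.Resolution;
    import com.badlogic.gdx.graphics.Texture;

    // The resolver picks the Resolution closest to the actual screen and uses its folder.
    FileHandleResolver internal = new InternalFileHandleResolver();
    ResolutionFileResolver resolver = new ResolutionFileResolver(internal,
            new Resolution(800, 480, "480"),
            new Resolution(1280, 720, "720"),
            new Resolution(1920, 1080, "1080"));

    AssetManager manager = new AssetManager(resolver);
    manager.load("images/logo.png", Texture.class); // resolved against the chosen folder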

Libgdx - TexturePacker combination of texture filters

Apologies, as this is a common topic and I haven't found a widely agreed-upon solution.
We have a game world "grid" size of 1220 x 1080 (based on our designer's Photoshop designs). Currently we test on a Nexus 4 (1280x768 @ 320 dpi) and a TF201 Transformer Prime tablet (1280x800 @ 149 dpi).
When packing textures with the TexturePacker, we're a bit confused about which combination of filters to use. We've read the following page:
http://www.badlogicgames.com/wordpress/?p=1403
... and when using "Nearest, Nearest", our FPS was fine at 60, but assets became pixelated. We then packed using "MipMap, MipMap", and our FPS went down to 30, but the textures have smooth edges again.
Is there an agreed-upon combination of these filters, or does it simply depend on requirements? There are quite a lot of combinations of "min filter" and "mag filter" to set in the packer, so we don't want to keep setting them randomly until everything is smoothly resized and FPS is high again, without fully understanding what each one does.
Many thanks.
J
If you are supporting multiple screen sizes (which you are if targeting Android), the Mag filter should always be Linear. There is no such thing as a mip-mapped mag filter, and on some devices that won't even work (you'll get pure black). It's kind of a "gotcha", because some devices will just assume you meant Linear and fix it for you, so if you fail to test on a device that doesn't do this for you, you'll be unaware of the problem. Nearest will look pixelated when stretched bigger, and you would only want to use it if you are doing retro low-resolution graphics or drawing something pixel-perfect.
You can choose one of the following for the Min filter, from fastest (and worst looking) to slowest (and best looking):
Nearest - this will look pixelated and I can't think of any situation where this would be the right choice for a min filter.
MipMapNearestNearest - Won't look or perform better than nearest, and uses more memory. No reason to ever use this.
MipMapNearestLinear - Gets the nearest pixel from the two nearest mips and then linearly interpolates between them. This will still look pixelated. I don't think this is ever used either.
MipMapLinearNearest - Gets the nearest mip level and linearly determines the pixel color. This is most commonly used on mobile for smooth graphics, I think. It performs significantly faster than the below option, but there are cases where it will look slightly blurry (when the nearest mip is kind of on the small side for what's on screen).
MipMapLinearLinear - Gets the two nearest mip levels, linearly determines the pixel color on each of them, and then linearly blends between the two. If you have a sprite that shrinks from nothing to full size, you probably won't be able to detect any difference in quality from smallest to largest. But this is also slow. In the past, I have limited its use to my fonts. I have also done one project that could run at 60fps on new devices three years ago, where I used this on everything. I was very careful about overdraw in that app, so I could get away with it.
Finally, there's linear filtering, which looks and performs worse than the mip-mapping options (for a Min filter):
Linear - this will look smooth if the image is slightly smaller on screen than its original texture. This doesn't use up the 33% extra texture memory that mip mapping does, but the performance will be worse than it would with mip mapping if the texture gets any smaller than 50% of the original, because for each screen pixel it will have to sample and blend more than four pixels from the original texture.
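To make the trade-off concrete, a small sketch of how these choices look in code when loading a texture (the file name is a placeholder, and whether mip maps exist must match the useMipMaps flag here):

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.Texture.TextureFilter;

    // useMipMaps = true: the texture gets a full mip chain (about 33% extra memory).
    Texture sprites = new Texture(Gdx.files.internal("sprites.png"), true);

    // Common mobile compromise: mip-mapped min filter, plain Linear mag filter.
    sprites.setFilter(TextureFilter.MipMapLinearNearest, TextureFilter.Linear);

    // Highest quality (trilinear), but slower - the answer above reserves this for fonts:
    // sprites.setFilter(TextureFilter.MipMapLinearLinear, TextureFilter.Linear);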

drawable sizes for different screen sizes in android

I have a problem in Android development that has been bothering me: screen sizes and how to deal with them. In particular, I have problems with images. For example, I want to create a background image for my activity, which I made in Photoshop, and the background contains the word "HELLO". But when I put it in the drawable-xhdpi folder, it looks blurry and not sharp! My phone is a Nexus 4, and according to the Google documentation I created the background image at 640 x 480.
When I create the background image at 960 x 720 it looks better, but still not perfect, and in that case the image file size is very large!
So what is the standard way to handle this? Please help me solve this problem once and for all. I've read the Google documentation but it doesn't solve my problem!
http://developer.android.com/guide/practices/screens_support.html
You should usually avoid creating background images for specific screen sizes, because there are thousands of different devices and you would have to create dozens of such images.
The first thing you need to be aware of is screen density.
Generally you create 3 to 5 versions of each image before even considering screen size: low (120 dpi), medium (160 dpi), high (240 dpi), extra high (320 dpi) and 2x extra high (480 dpi). These go into drawable-Xdpi folders, where X is one of l, m, h, xh, xxh.
Next, when you want bigger images on bigger screens (bigger phones, small and big tablets), you may want to put images into folders like drawable-sw600dp-Xdpi. This is not the case for your phone.
The Nexus 4 is an xhdpi 640x384 dp device, but you should not treat it differently from a Samsung Galaxy S2 (hdpi, 533x320 dp).
Create an image of smaller size for both phones and center it horizontally. E.g. 320x100 px for mdpi, 480x150 px for hdpi and 640x200 px for xhdpi (your phone).
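To make that concrete, the resource tree might look roughly like this (the file name is hypothetical; the sizes are the example figures above):

    res/drawable-mdpi/hello_bg.png     320 x 100 px  (baseline, 160 dpi)
    res/drawable-hdpi/hello_bg.png     480 x 150 px  (1.5x)
    res/drawable-xhdpi/hello_bg.png    640 x 200 px  (2x, e.g. the Nexus 4)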
The screen resolution for the Nexus 4 is 1280x768 (http://www.google.com/nexus/4/specs/), so resize the image to this resolution. Bear in mind that some images can't handle being resized to that resolution and end up looking disproportionate.
Also of interest, a resolution calculator:
http://members.ping.de/~sven/dpi.html
This is a problem of Android fragmentation, and you just cannot deal with it perfectly, as there are several hundred different devices. As a colleague wrote above, the Nexus 4 has a resolution of 1280 x 768, so an image of 960 x 720 is certainly a good choice. I'm even surprised that Google suggests 640 x 480 for xhdpi; it's definitely too small.
So, as I said, you cannot make perfect-looking graphics for all existing devices. You should choose the most popular devices from every screen category (xhdpi, mdpi, ldpi, etc.) to cover the most important market share.
With 1600+ Android models, even after they are categorized into a few screen sizes and a few DPIs, it's very difficult to manage layouts. I suggest that you just concentrate on designing layouts with respect to screen size, and then create your views as resizeable views to neutralize density effects.
Once you have created your layouts, resize the views. You can create a custom View and resize it in its onMeasure(), as in the sketch below.
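A rough sketch of what that last suggestion could look like, assuming you want a view that keeps itself square regardless of density (the class name and behaviour are illustrative only):

    import android.content.Context;
    import android.util.AttributeSet;
    import android.view.View;

    // Illustrative custom view that forces itself to be square,
    // sized from the width the layout offers rather than from dp/density values.
    public class SquareView extends View {

        public SquareView(Context context, AttributeSet attrs) {
            super(context, attrs);
        }

        @Override
        protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
            int width = MeasureSpec.getSize(widthMeasureSpec);
            // Reuse the measured width for the height so the view scales with the screen.
            setMeasuredDimension(width, width);
        }
    }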
