I created an Android mobile application which uses the osmdroid MapView. It loads perfectly fine at zoom levels up to 18. However, from level 19 onward it only rescales the existing tiles and never fetches new ones.
I have my own tile server running Mapnik, renderd, mod_tile and so on, and I've pointed my application at it. Using .../osm_tiles/{zoom}/{x}/{y} I know the server renders up to level 20, as I've configured. It simply doesn't serve those levels to my mobile device.
I notice that after a while, the tiles cached on my mobile phone tend to "mix" with tiles loaded from the default MAPNIK source, which makes my map look weird in certain places.
EDIT: Thanks to everyone who gave me tips and advice. After deleting and re-downloading the tiles, they're no longer mixed! I didn't see any duplicate names when I looked through my code, but I'm pretty sure at some point I must have made the mistake of naming two sources the same.
However, it still doesn't go past level 18. Here's what I set in my code:
// "custom" is the source name; the last argument holds the base URL(s) of the tile server.
ITileSource tileSource = new XYTileSource("custom", ResourceProxy.string.mapnik, 16, 20, 256, ".png", custom);
TileSourceFactory.addTileSource(tileSource);
Upon checking, the tiles do go up to level 20; mod_tile serves them when I access the server from a web browser. Looking through the Android Monitor (using Android Studio Preview 4) I can see it downloading and fetching tiles at levels up to 18, but as soon as it reaches 19, all fetching stops.
A few different things going on here.
1) At zoom levels greater than the available imagery, osmdroid will scale the last viewed tile (stretch it to make it bigger). This gives the illusion of zooming in and is generally replaced immediately as the new tiles load; it's a feature used primarily during the animation between zoom levels. In this case, you're simply not getting level 19+ tiles. When configuring osmdroid for your app, did you tell it that the map source (ITileSource) supports the higher zoom levels? Also, you may want to turn on debugging and watch the logs to see whether attempts are even made to download the level 19+ tiles (see the sketch after this list).
2) Have you confirmed that the tile server really does produce tiles at zoom > 18? osmdroid should go up to around level 22. It's difficult to test, as few map sources provide imagery at that level.
3) Mixed-up tiles. When using custom tile sources, always make sure the tile source name that you give osmdroid is unique. If you use "MAPNIK" for Bing and "MAPNIK" for MapQuest, you'll end up with a mix of tiles from both sources when viewing either one. The only way to fix it is to clear the tile cache, which is usually /sdcard/osmdroid/tiles/
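For the logging in point 1: in recent osmdroid versions the debug switches live on the Configuration singleton (a minimal sketch; older 4.x builds, like the one in the question, exposed similar static debug flags instead, so check your version):

import org.osmdroid.config.Configuration;

// Log tile requests and provider decisions to logcat.
Configuration.getInstance().setDebugMode(true);
Configuration.getInstance().setDebugTileProviders(true);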
Firstly, I solved the issue of the mixed tiles by deleting and re-downloading the tiles; they're no longer mixed! I didn't see any duplicate names when I looked through my code, but I'm pretty sure at some point I must have made the mistake of naming two sources the same. Thanks to Spy for pointing it out.
So just plug in your Android phone, look for the folder named "osmdroid", and delete the tiles accordingly.
Next, it seems I had forgotten to add the line
mv.setTileSource(TileSourceFactory.getTileSource("custom")); // the name must match the one registered with addTileSource
After creating the source and adding it to the factory, I also needed to set it on my MapView specifically. It previously wasn't going past level 18 because it was still downloading tiles from the default OSM tile source! :)
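For anyone hitting the same wall, the full working sequence is roughly this (a sketch based on the 4.x-era API used above; custom holds the tile server's base URL and mv is the MapView):

// Register a source that explicitly allows zoom levels 16 through 20.
ITileSource tileSource = new XYTileSource("custom", ResourceProxy.string.mapnik, 16, 20, 256, ".png", custom);
TileSourceFactory.addTileSource(tileSource);
// The step I had missed: without this, the MapView keeps the default source and its zoom cap.
mv.setTileSource(tileSource);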
Related
I am new to augmented reality and I am trying to build an app that tracks a marker. The problem I have is that I can't find good enough documentation online. I am using OpenCV 3.4 and Android. When I say markers, I mean square fiducial markers of the ArUco kind (the original question included an example image).
Now, in my case I could also track a ball (a red ball, for example) and use that for tracking (is that proper AR, though?). My main problem is how to achieve good tracking of a marker. What should I use: ARCore, OpenCV, or Vuforia?
Thanks
Vuforia is probably not the solution here, and ARCore is limited to newer Android OS versions - you have to decide if you're OK with that.
For detecting a red ball there are tons of articles out there; you should simply try it. Most methods rely on OpenCV's findContours or HoughCircles. The tracking quality depends on your use case and performance requirements: the more visually complex the environment in which the detection happens, the more filters and algorithms you need in order to isolate your ball, and the more filters and algorithms you apply, the better the result, but that can hurt the frame rate. It is a matter of trial and error for your specific requirements.
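As a starting point, the usual pipeline in OpenCV's Java binding looks roughly like this (a hedged sketch: frame stands for your camera image as a Mat, and the HSV thresholds are rough values you will have to tune; red wraps around the hue axis, hence the two ranges):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

Mat hsv = new Mat(), maskLo = new Mat(), maskHi = new Mat(), mask = new Mat();
Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_RGB2HSV); // frame: your camera Mat
// Red occupies both ends of the hue range, so threshold twice and combine.
Core.inRange(hsv, new Scalar(0, 120, 70), new Scalar(10, 255, 255), maskLo);
Core.inRange(hsv, new Scalar(170, 120, 70), new Scalar(180, 255, 255), maskHi);
Core.bitwise_or(maskLo, maskHi, mask);
// Take the largest contour as the ball candidate.
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
MatOfPoint biggest = null;
for (MatOfPoint c : contours)
    if (biggest == null || Imgproc.contourArea(c) > Imgproc.contourArea(biggest)) biggest = c;
if (biggest != null) {
    Point center = new Point();
    float[] radius = new float[1];
    Imgproc.minEnclosingCircle(new MatOfPoint2f(biggest.toArray()), center, radius);
    // center/radius now locate the ball in this frame.
}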
For using the marker above, you can check out the ArUco library that comes with OpenCV:
Detection of ArUco Markers (I haven't tried it, though).
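If you go that route, the Java binding looks roughly like this (a hedged sketch; it requires an OpenCV build that includes the contrib aruco module):

import java.util.ArrayList;
import java.util.List;
import org.opencv.aruco.Aruco;
import org.opencv.aruco.Dictionary;
import org.opencv.core.Mat;

Dictionary dict = Aruco.getPredefinedDictionary(Aruco.DICT_6X6_250);
List<Mat> corners = new ArrayList<>(); // one entry per detected marker
Mat ids = new Mat();                   // one row per detected marker ID
Aruco.detectMarkers(grayFrame, dict, corners, ids); // grayFrame: your camera image as a Mat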
What is the best practice for storing resources in a libGDX project? I know that I can use AssetManager, and also that I can, for example, link the resources from the android folder into iOS, but I don't know how it will behave across different devices. Are the resources scaled according to screen size/operating system, or do I need to manually set different sizes or resolutions in each platform's resource folder? I want to avoid any overlapping or stretching behaviour.
There are many ways to go about this and there is no "best" solution. However, if you already build for Android, just use the Android assets folder. This is the default and will be used for the other builds (due to the default libGDX project configuration).
The resources only scale if you tell them to. You can choose to use a viewport (a fit/fill viewport will not stretch, but a fit viewport can add black/background bars on screens that do not have the default aspect ratio). You can also choose to handle screen dependence yourself, using the aspect ratio and a scale factor.
For instance:
A 1080x1920 mobile phone vs a 1440x1920 tablet
If you use a fit viewport you will have unused space on the tablet; if you use a fill viewport you might lose content on the phone. But if you take the phone as the default aspect ratio and calculate the width offset for the tablet ((1440-1080)/2), you can use this value to place actors/sprites either at the same location as on the phone (by applying the offset) or relative to the screen edge (by using the screen size). I personally use this to place the UI relative to the screen and the game itself the same as on the phone. You can even choose to use a different layout depending on the aspect ratio.
Do note that this way you will also have to calculate a global scale and use it everywhere in your application. This can be tedious to implement, but it gives you much more control!
So if you have a simple game and you don't care about tablets or different screen sizes, I suggest you start with a fit viewport.
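A minimal sketch of that starting point (libGDX; the 1080x1920 virtual size matches the phone example above):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;

public class MyGame extends ApplicationAdapter {
    private OrthographicCamera camera;
    private FitViewport viewport;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        // Everything is laid out in 1080x1920 "world" units; FitViewport
        // letterboxes the remaining screen space instead of stretching.
        viewport = new FitViewport(1080, 1920, camera);
    }

    @Override
    public void resize(int width, int height) {
        viewport.update(width, height, true); // true: center the camera
    }
}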
p.s. I'm not sure what you mean by "multiplatform devices", but as I said, the default libGDX setup does the heavy lifting here, so I suggest you use it!
Hi guys, I have a medical application for Android that will be used by the elderly. The problem is that when they use the application they struggle to read the text. So I am looking to apply the pinch-to-zoom technique to the whole app, so that every page can be zoomed in to make the font clearer. Can anybody point me in the right direction? I have looked at pinch-to-zoom examples and even downloaded some, but they focus specifically on images, whereas I want the content on every page to keep functioning the way it does (buttons etc.) while still letting users zoom in.
This whole "Pinch and Zoom" thing is not really that usable as it requires a very high API to be used I think you should scrap it and try using the SeekBar that is connected to the size of the text it's a lot easier and it's something I have managed to implement quite easily in my apps from a very low API level. You can check this image of a snapshot of my App.
https://lh4.ggpht.com/9LDEkOV-QotFPHEa9SDpIHZ1OtgMgSDFdcrTsR1DZuBjpwonlAmREhhJQc3znVQ_LEo
If you want some code for this, I can post it.
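The gist is a listener that maps the SeekBar progress to a text size in sp (a minimal sketch; R.id.sizeBar and R.id.bodyText are placeholder IDs, and the 14-30 sp range is arbitrary):

import android.util.TypedValue;
import android.widget.SeekBar;
import android.widget.TextView;

final TextView body = (TextView) findViewById(R.id.bodyText);
SeekBar sizeBar = (SeekBar) findViewById(R.id.sizeBar);
sizeBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        // Map the 0..100 progress onto a 14..30 sp font size.
        body.setTextSize(TypedValue.COMPLEX_UNIT_SP, 14 + progress * 16f / 100f);
    }
    @Override public void onStartTrackingTouch(SeekBar seekBar) {}
    @Override public void onStopTrackingTouch(SeekBar seekBar) {}
});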
I am developing an application on the Android platform. The app basically needs to capture an image using the camera (already done) and then analyse the captured image to see where one color ends and the next one starts (assuming the image will always have 2 or 3 dominant colors and will be really simple). Any ideas?
P.S. I have already tried OpenCV, but there are two problems: 1. The library needs to be installed on the phone beforehand for the app to work, and I can't have that since it will be a commercial app (I am not sure about this dependency, though). 2. The min SDK for my app is Android 2.2, while OpenCV requires 2.3.
I have just started using OpenCV, and the problem you mentioned (installing the library) was one of the major issues I faced too. However, I found a solution by making the OpenCV initialization static. When you make the initialization static, there is no need for the pre-installation of those libraries.
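Concretely, that means bundling the native libs with your APK and calling the static loader in a static block (a hedged sketch; note the OpenCV docs describe initDebug() as intended for development builds):

import org.opencv.android.OpenCVLoader;

static {
    // Loads the bundled OpenCV native library instead of asking OpenCV Manager.
    if (!OpenCVLoader.initDebug()) {
        android.util.Log.e("OpenCV", "Static initialization failed");
    }
}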
Good luck!
OpenCV, while a good general-purpose library, is just a collection of utilities that deal with pixels. If the license and min SDK are issues, write it yourself. Segmentation is a matter of choosing a starting x,y location within the image and traversing in each direction until a pixel is encountered that meets or exceeds your threshold for "different". Use a stack to keep track of where you stepped in x and y, then backtrack by popping indices off the stack and follow another direction when you get back to where you were. Push indices onto the stack whenever you step in either x or y.
It's not difficult, just rather tedious, but that's why people wrote libraries to do this stuff.
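A compact sketch of that traversal in plain Java (no OpenCV; pixels holds ARGB ints such as those returned by Bitmap.getPixel, and the threshold is something you would tune):

import java.util.ArrayDeque;
import java.util.Deque;

static boolean[][] growRegion(int[][] pixels, int startX, int startY, int threshold) {
    int h = pixels.length, w = pixels[0].length;
    boolean[][] inRegion = new boolean[h][w];
    int seed = pixels[startY][startX];
    Deque<int[]> stack = new ArrayDeque<>();
    stack.push(new int[] { startX, startY });
    while (!stack.isEmpty()) {
        int[] p = stack.pop();
        int x = p[0], y = p[1];
        if (x < 0 || y < 0 || x >= w || y >= h || inRegion[y][x]) continue;
        if (distance(pixels[y][x], seed) >= threshold) continue; // hit a "different" pixel: stop
        inRegion[y][x] = true;
        // Step in each direction; the stack remembers where to backtrack to.
        stack.push(new int[] { x + 1, y });
        stack.push(new int[] { x - 1, y });
        stack.push(new int[] { x, y + 1 });
        stack.push(new int[] { x, y - 1 });
    }
    return inRegion; // true marks every pixel of the color region around the seed
}

// Sum of per-channel differences; crude, but enough for 2-3 strongly separated colors.
static int distance(int a, int b) {
    return Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF))
         + Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF))
         + Math.abs((a & 0xFF) - (b & 0xFF));
}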
To do something like that you'll need to do image processing. A very popular library for C++/Java which can certainly handle this is OpenCV.
I have been developing Android applications for 3 to 4 months. I am a novice, but pretty well exposed to all the fundamentals of application development on Android. However, I've found it really painful to develop applications with lots of images - one of my applications has around 10 to 13 images (small enough to fit the screen size). The problem is that I have to make different copies of each:
HDPI - High resolution support
MDPI - Medium resolution support
LDPI - Low resolution support
I have come up with an idea.
IDEA: Keep only MDPI images in the drawable folder. When my application is installed for the first time, it detects which resolution the device supports. A built-in method then either uses the MDPI images directly, if the handset supports MDPI, or scales them up or down and stores the results in internal storage for future reference. When the user uninstalls the application, these images are removed from internal storage.
Now this idea raises a few questions.
Questions:
1. Is this idea feasible and programmatically possible? (See the sketch after this list.)
2. If it is, should I really be concerned about the one-time computational overhead?
3. Is there any third-party mechanism which can ease my problem? (I hate Photoshop and scaling all those images up and down.)
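For what it's worth, the runtime scaling step in question 1 is certainly possible (a hedged sketch inside an Activity; R.drawable.icon is a placeholder, and density is 1.0 for MDPI, 1.5 for HDPI, 0.75 for LDPI):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

float density = getResources().getDisplayMetrics().density; // device density factor
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inScaled = false; // keep the raw MDPI pixels; we scale manually below
Bitmap mdpi = BitmapFactory.decodeResource(getResources(), R.drawable.icon, opts);
Bitmap scaled = Bitmap.createScaledBitmap(mdpi,
        Math.round(mdpi.getWidth() * density),
        Math.round(mdpi.getHeight() * density), true);
// scaled could now be written to internal storage for reuse.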
Any expert help or guidance would be a big favour!
Thanks in advance!
Krio
I don't really understand why you would do this. The system already basically does it for you. You don't have to specify different images for different display densities; the system just gives you the opportunity to, so you can make your app look its best. If you only supply a single image, the system will scale it appropriately based on the density of the handset.
As for help with scaling the images yourself for packaging, you could look at ImageMagick. This is a powerful, scriptable image manipulation tool. You might need to spend a bit of time getting up to speed with it, but I am sure you could write a script that you could then reuse for all of your images to convert high-DPI images to lower-DPI ones by scaling down.
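If you'd rather stay in Java than script ImageMagick, the same batch downscale only takes a few lines (a sketch; the folder names are placeholders, and the factor is the HDPI-to-MDPI density ratio):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class BatchScale {
    public static void main(String[] args) throws Exception {
        File srcDir = new File("drawable-hdpi"); // placeholder input folder
        File dstDir = new File("drawable-mdpi"); // placeholder output folder
        dstDir.mkdirs();
        double factor = 160.0 / 240.0;           // hdpi (240 dpi) -> mdpi (160 dpi)
        for (File f : srcDir.listFiles((d, n) -> n.endsWith(".png"))) {
            BufferedImage in = ImageIO.read(f);
            int w = (int) Math.round(in.getWidth() * factor);
            int h = (int) Math.round(in.getHeight() * factor);
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = out.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            g.drawImage(in, 0, 0, w, h, null);   // resample to the new size
            g.dispose();
            ImageIO.write(out, "png", new File(dstDir, f.getName()));
        }
    }
}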
Take a look at this article. It describes how Android handles directory names for resources. Also take a look at this - how Android chooses the best match for a directory name. If you want to use the same resource for all DPIs, just place it in the default drawable folder.