I am developing an application on the Android platform. The app basically needs to capture an image using the camera (already done) and then analyse the captured image to see where one color ends and the next one starts (assuming the image will always have 2 or 3 dominant colors and will be really simple). Any ideas?
P.S. I have already tried OpenCV, but there are two problems: 1. The library needs to be installed on your phone beforehand for your app to work, and I can't have that since it will be a commercial app (I am not sure about this dependency, though). 2. The min SDK for my app is Android 2.2, while for OpenCV it's 2.3.
I have just started using OpenCV, and the problem you mentioned (installing the library) was one of the major issues I faced too. However, I found a solution to this problem by declaring the OpenCV initialization as static. When you make the initialization static, there is no need to pre-install those libraries.
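A minimal sketch of what that static initialization can look like, assuming OpenCV for Android 2.4.x with the native library bundled in the APK (OpenCVLoader.initDebug() loads the packaged library instead of relying on the separately installed OpenCV Manager):

import android.app.Activity;
import android.util.Log;
import org.opencv.android.OpenCVLoader;

public class MainActivity extends Activity {
    // Runs once when the class is loaded, before onCreate(). initDebug()
    // loads the OpenCV native library packaged with the APK, so the
    // OpenCV Manager app does not need to be installed on the phone.
    static {
        if (!OpenCVLoader.initDebug()) {
            Log.e("OpenCV", "Static initialization failed");
        }
    }
}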
Good luck!
OpenCV, while a good general-purpose library, is just a collection of utilities that deal with pixels. If the license and min SDK are issues, write it yourself. Segmentation is a matter of choosing a starting x,y location within the image and traversing in each direction until a pixel is encountered that meets or exceeds your threshold for "different". Use a stack to keep track of where you stepped in x and y, then backtrack by popping indices off the stack and follow another direction when you get back to where you were. Push indices onto the stack whenever you step in either x or y.
It's not difficult, just rather tedious, but that's why people wrote libraries to do this stuff.
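A minimal sketch of that stack-based traversal in plain Java, assuming the image is already available as an int[] of ARGB pixels (e.g. from Bitmap.getPixels()); the threshold value and the colorDistance helper are illustrative choices, not from the post above:

import java.util.ArrayDeque;
import java.util.Deque;

public class Segmenter {
    // Marks every pixel reachable from (startX, startY) whose color stays
    // within `threshold` of the seed color, and returns the mask.
    public static boolean[] segment(int[] pixels, int width, int height,
                                    int startX, int startY, int threshold) {
        boolean[] visited = new boolean[pixels.length];
        int seed = pixels[startY * width + startX];
        Deque<int[]> stack = new ArrayDeque<int[]>(); // indices we stepped to
        stack.push(new int[] { startX, startY });
        while (!stack.isEmpty()) {
            int[] p = stack.pop(); // backtrack when a direction is exhausted
            int x = p[0], y = p[1];
            if (x < 0 || y < 0 || x >= width || y >= height) continue;
            int i = y * width + x;
            if (visited[i] || colorDistance(pixels[i], seed) > threshold) continue;
            visited[i] = true;
            // push a step in each direction
            stack.push(new int[] { x + 1, y });
            stack.push(new int[] { x - 1, y });
            stack.push(new int[] { x, y + 1 });
            stack.push(new int[] { x, y - 1 });
        }
        return visited;
    }

    // Simple per-channel distance; any metric that matches your notion of
    // "different" will do.
    private static int colorDistance(int a, int b) {
        int dr = ((a >> 16) & 0xFF) - ((b >> 16) & 0xFF);
        int dg = ((a >> 8) & 0xFF) - ((b >> 8) & 0xFF);
        int db = (a & 0xFF) - (b & 0xFF);
        return Math.abs(dr) + Math.abs(dg) + Math.abs(db);
    }
}

Run it once per dominant region: pick a seed inside the region, and the returned mask tells you exactly where that color ends and the next one begins.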
To do something like that you'll need to do image processing. A very popular library for C++/Java which can certainly handle this is OpenCV.
I am new to Augmented Reality and I am trying to build an app that tracks a marker. The problem I have is that I can't find good enough documentation online. I am using OpenCV 3.4 and Android. When I say markers I mean:
Now in my case I could also track a ball (a red ball, for example) and use that for tracking (is that proper AR, though?). My main problem is how to achieve good tracking of a marker. What should I use: ARCore, OpenCV, or Vuforia?
Thanks
Vuforia is probably not the solution here, and ARCore is limited to newer Android OS versions - you have to decide whether you're OK with that.
For detecting a red ball there are tons of articles out there; you should simply try it out. Most methods rely on OpenCV's findContours or HoughCircles. The tracking quality depends on your use case and performance requirements: the more visually complex the environment in which the detection happens, the more filters and algorithms you need in order to isolate your ball, and the more filters and algorithms you apply, the better the result is, but it might affect the frame rate. It is a matter of trial and error for your specific requirements.
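A minimal sketch of the inRange + HoughCircles approach with OpenCV's Java bindings; the HSV thresholds and Hough parameters below are illustrative starting points, not tuned values:

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class RedBallDetector {
    // Returns detected circles as a Mat of (x, y, radius) triples.
    public static Mat findRedBall(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // Red wraps around the hue axis, so threshold two ranges and combine.
        Mat lowRed = new Mat(), highRed = new Mat(), mask = new Mat();
        Core.inRange(hsv, new Scalar(0, 100, 100), new Scalar(10, 255, 255), lowRed);
        Core.inRange(hsv, new Scalar(160, 100, 100), new Scalar(179, 255, 255), highRed);
        Core.bitwise_or(lowRed, highRed, mask);

        // Smooth the mask so HoughCircles sees fewer spurious edges.
        Imgproc.GaussianBlur(mask, mask, new Size(9, 9), 2);

        Mat circles = new Mat();
        Imgproc.HoughCircles(mask, circles, Imgproc.HOUGH_GRADIENT,
                1,                  // accumulator resolution
                mask.rows() / 4.0); // minimum distance between centers
        return circles;
    }
}

Every filter added here (the blur, any extra morphology, etc.) improves isolation of the ball but costs frame time, which is exactly the trade-off described above.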
For using the marker above, you can check out the ArUco library with OpenCV:
Detection of ArUco Markers (I haven't tried it, though).
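For what it's worth, a minimal sketch of what marker detection looks like through the ArUco Java bindings (this assumes an OpenCV build with the contrib modules; the dictionary choice is illustrative):

import java.util.ArrayList;
import java.util.List;
import org.opencv.aruco.Aruco;
import org.opencv.aruco.Dictionary;
import org.opencv.core.Mat;

public class MarkerDetector {
    public static Mat detectIds(Mat grayFrame) {
        Dictionary dict = Aruco.getPredefinedDictionary(Aruco.DICT_4X4_50);
        List<Mat> corners = new ArrayList<Mat>(); // one 4-corner Mat per marker
        Mat ids = new Mat();                      // detected marker ids
        Aruco.detectMarkers(grayFrame, dict, corners, ids);
        return ids;
    }
}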
I created an Android mobile application which uses the OSMDroid MapView. It loads perfectly fine up to zoom level 18; however, from level 19 onwards it only rescales the existing tiles and never serves me new ones.
I have my own tile server using Mapnik, renderd, mod_tile and so on, and I've set my application to use my own tile server too. Using .../osm_tiles/{zoom}/{x}/{y} I know it goes down to level 20, as I've set it to. It simply doesn't serve those tiles to my mobile device.
I notice that after a while, the tiles cached on my mobile phone tend to "mix" with some loaded from the default MAPNIK source, which makes my map look weird in certain places.
EDIT: Thanks to all who gave me tips and advice. After deleting and re-downloading the tiles, they're no longer mixed! I didn't see any mixing of names when I looked through my code, but I'm pretty sure at some point I must have made the mistake of naming them the same.
However, it still doesn't go past level 18. Here's what I set in my code:
ITileSource tileSource;
tileSource = new XYTileSource("custom", ResourceProxy.string.mapnik, 16, 20, 256, ".png", custom);
TileSourceFactory.addTileSource(tileSource);
Upon checking, the tiles do go up to level 20; mod_tile serves them when I access it from a web browser. Looking through the Android Monitor (using Android Studio Preview 4) I see it downloading and fetching tiles at higher levels, but as soon as it reaches 19, all fetching stops.
A few different things going on here.
1) At zoom levels greater than the available imagery, osmdroid will scale the last viewed tile (stretch it to make it bigger). This gives the illusion of zooming in and is generally replaced immediately as the new tiles are loaded; it's a feature used primarily during the animation between zoom levels. In this case, you're simply not getting level 19+ tiles. When configuring osmdroid for your app, did you tell it that the map source (ITileSource) supports the higher zoom levels? See the sketch after this list. Also, you may want to turn on debugging and watch the logs to see whether attempts are made to download the level 19+ tiles.
2) Have you confirmed that the tile server really does produce tiles at zoom > 18? osmdroid should go up to around 22. It's difficult to test, as few map sources provide imagery at that level.
3) Mixed-up tiles. When using custom tile sources, always make sure the tile source "name" you give osmdroid is reasonably unique. If you use "MAPNIK" for Bing and "MAPNIK" for MapQuest, you'll end up with a mismatch of tiles from both sources when viewing either one. The only way to fix it is to clear the tile cache, which is usually /sdcard/osmdroid/tiles/
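A minimal sketch of that wiring, using the same (older) osmdroid API as the code in the question; the base URL and source name are placeholders, and the important parts are the max zoom of 20 and the final setTileSource call:

// Declare a source that explicitly supports zoom levels 0..20.
String[] baseUrl = { "http://example.com/osm_tiles/" }; // placeholder URL
ITileSource tileSource = new XYTileSource("MyUniqueSourceName",
        ResourceProxy.string.mapnik, 0, 20, 256, ".png", baseUrl);
TileSourceFactory.addTileSource(tileSource);

// Registering the source is not enough - the MapView must be told to use
// it, otherwise it silently stays on the default MAPNIK source.
mapView.setTileSource(tileSource);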
Firstly, I solved the issue of the mixed tiles by deleting and re-downloading the tiles; they're no longer mixed! I didn't see any mixing of names when I looked through my code, but I'm pretty sure at some point I must have made the mistake of naming them the same. Thanks to Spy for pointing it out.
So just plug in your Android phone, look for the folder named "osmdroid" and delete the tiles accordingly.
Next, it seems I had forgotten to add the line
mv.setTileSource(TileSourceFactory.getTileSource("private"));
After creating the source and adding it, I also need to set it on my MapView specifically. It previously wasn't going past level 18 or downloading tiles because it was still using the default OSM tile source! :)
I am working on a big project with Codename One (I can't attach my code because the project is really big). I built the Android app and it works on Android devices, but recently I got an iOS build for this project and it's not working on an iOS device (it just shows a white page instead of the map).
My project is a map framework that renders tiles and ... on graphics (I used the Graphics class for drawing, transforming, writing text, and more).
I used an input stream for working with files, because File is not supported.
I need a solution for how to debug and find the problem with the iOS build (why the tiles aren't shown).
In fact, I don't know anything about iOS and Objective-C.
Thanks in advance.
Most of the logging functionality that allows inspecting issues is for pro developers (you can try the trial). It's discussed in this video (mostly focused on crashes): http://www.codenameone.com/how-do-i---use-crash-protection-get-device-logs.html
From your description I would guess you created a really large mutable image (larger than the screen bounds) and are drawing onto that. This would be slow both on iOS and on newer Android devices, and might actually produce that result if the image exceeds the maximum texture size of the device.
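If that is the cause, one common fix is to keep the mutable drawing buffer no larger than the screen and redraw only the visible tiles into it as the map pans. A minimal sketch with Codename One's Image/Graphics API (the tile-drawing step is a placeholder for the project's own rendering):

import com.codename1.ui.Display;
import com.codename1.ui.Graphics;
import com.codename1.ui.Image;

public class MapBuffer {
    // A buffer bounded by the screen size stays under the device's maximum
    // texture size; a mutable image the size of the whole map easily would not.
    public static Image createScreenSizedBuffer() {
        Image buffer = Image.createImage(
                Display.getInstance().getDisplayWidth(),
                Display.getInstance().getDisplayHeight(), 0xffffffff);
        Graphics g = buffer.getGraphics();
        // ... draw only the currently visible tiles into g here ...
        return buffer;
    }
}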
If that is not the case, you would need to explain more precisely what you are doing.
I have been developing Android applications for 3 to 4 months. I am naive, but pretty well exposed to all of the fundamentals of application development on Android. However, I have found it really painful to develop an application with lots of images; by images I mean that one of my applications has around 10 to 13 images (small enough to fit the screen size). The problem is that I have to make different copies of each of them:
HDPI - High resolution support
MDPI - Medium resolution support
LDPI - Low resolution support
I have come up with an idea.

IDEA: My idea is to ship only MDPI images in the drawable folder. When my application is installed for the first time, it will detect which resolution the device supports. Knowing that, one of my built-in methods will either use the MDPI images directly, if the handset supports MDPI, or scale them up or down and store the results in internal storage for future reference. When the user uninstalls my application, I will remove these images from internal storage.
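Something like the following is what I have in mind, as a sketch only, where R.drawable.sample stands for a placeholder resource bundled at MDPI and the scale factor follows Android's convention that MDPI is the 160 dpi baseline:

import java.io.FileOutputStream;
import java.io.IOException;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.util.DisplayMetrics;

public class DensityScaler {
    // Scales an MDPI bitmap to the device's density and caches it in
    // internal storage (which is removed automatically on uninstall).
    public static void cacheScaled(Context ctx, int resId, String name)
            throws IOException {
        DisplayMetrics dm = ctx.getResources().getDisplayMetrics();
        // MDPI is the 160 dpi baseline, so the factor is densityDpi / 160.
        float scale = dm.densityDpi / (float) DisplayMetrics.DENSITY_MEDIUM;

        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inScaled = false; // decode the raw MDPI pixels, not pre-scaled ones
        Bitmap mdpi = BitmapFactory.decodeResource(ctx.getResources(), resId, opts);

        Bitmap scaled = Bitmap.createScaledBitmap(mdpi,
                Math.round(mdpi.getWidth() * scale),
                Math.round(mdpi.getHeight() * scale), true);
        FileOutputStream out = ctx.openFileOutput(name, Context.MODE_PRIVATE);
        scaled.compress(Bitmap.CompressFormat.PNG, 100, out);
        out.close();
    }
}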
Now this idea has raised a few questions.
Questions:
Is this idea feasible, and is it programmatically possible?
If it is, should I be really concerned about the one-time computational overhead?
Is there any mechanism (third party) which can ease my problem? (I hate Photoshop and scaling all those images up and down.)
Any expert help or guidance would be a big favour!
Thanks in advance!
Krio
I don't really understand why you would do this. The system already does essentially this for you. You don't have to specify different images for different display densities; the system just gives you the opportunity to, so you can make your app look its best. If you only supply a single image, the system will scale it appropriately based on the density of the handset.
As for help with scaling the images yourself for packaging, you could look at ImageMagick. This is a powerful, scriptable image-manipulation tool. You might need to spend a bit of time getting up to speed with it, but I am sure you could then write a script that you could reuse for all of your images to convert high-DPI images to lower-DPI ones by scaling down.
Take a look at this article. It describes how Android handles directory names for resources. Also take a look at this - how Android chooses the best match for a directory name. If you want to use the same resource for all DPIs, just place it in the default drawable folder.
I'm writing a time management application and I have an idea for presenting timelines and todo items in 3D. Visually, I imagine this as looking down a corridor or highway in 3D, with upcoming deadlines and tasks represented as signposts - more important items are larger and upcoming deadlines are nearer.
I want to do this in Java; however, I have no idea how to begin. For example, I would like to be able to render text and 2D graphics (dates, calendars, etc.) on the floor/walls of the corridor, as well as on the task items. The tasks themselves could be simple blocks. However, the examples of 3D code I have seen all seem to operate at a very low level, and what I can't figure out is the appropriate coordinates to use, or how the user would be able to interact with the view by selecting items with the mouse (for example, clicking an expand or info button to get/edit task properties with the usual Swing components).
Is there any higher level API I could be using, or example code that does this sort of thing? Any other ideas for how best to approach this problem?
edit: removed the Java3D requirement - I just need to do this in Java.
To be perfectly honest, most "clever" user interfaces are ineffective for a number of reasons:
They favor complexity (for the sake of coolness) over simplicity and usability.
They are likely to be totally unfamiliar to the user.
You have to implement them yourself without some library having done the hard work for you.
I think the interface you describe runs the risk of falling into this trap. What's wrong with the current interface? Won't it be really hard to get an overview due to foreground stuff getting in the way? Now of course you could take it a step further and zoom out/rotate to get an overview but that complicates things.
Having said all that, if you can make it easy to use and really slick, then it can make an application. Users will have a lower tolerance for failure in "fancy" UIs, so perhaps this isn't the best first 3D project.
I do think there is a need for more examples geared towards visualisation rather than games.
To be honest, having tried both: if you use JOGL instead, you'll find there are tonnes of OpenGL examples to copy that sort of thing from, and you won't have to code around the limits of the scene graph Java3D gives you. I tried Java3D a couple of years ago in a simple wireframe viewer, and it was such a pain to get the camera control right and the rendering anywhere near OK that I gave up on it.
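A minimal JOGL skeleton for the corridor idea, as a sketch only (this uses the JOGL 2 API; the perspective setup and the single "signpost" quad are illustrative, and picking/Swing integration would still sit on top of this):

import com.jogamp.opengl.*;
import com.jogamp.opengl.awt.GLCanvas;
import com.jogamp.opengl.glu.GLU;
import javax.swing.JFrame;

public class CorridorView implements GLEventListener {
    private final GLU glu = new GLU();

    public void init(GLAutoDrawable d) {
        d.getGL().getGL2().glEnable(GL.GL_DEPTH_TEST);
    }

    public void reshape(GLAutoDrawable d, int x, int y, int w, int h) {
        GL2 gl = d.getGL().getGL2();
        gl.glMatrixMode(GL2.GL_PROJECTION);
        gl.glLoadIdentity();
        // A perspective projection gives the "looking down a corridor" effect.
        glu.gluPerspective(60.0, (double) w / h, 0.1, 100.0);
        gl.glMatrixMode(GL2.GL_MODELVIEW);
    }

    public void display(GLAutoDrawable d) {
        GL2 gl = d.getGL().getGL2();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
        gl.glLoadIdentity();
        // One "signpost": a quad 5 units down the corridor. A nearer deadline
        // would use a smaller -z; a more important task, a bigger quad.
        gl.glTranslatef(0f, 0f, -5f);
        gl.glBegin(GL2.GL_QUADS);
        gl.glVertex3f(-1f, -0.5f, 0f);
        gl.glVertex3f( 1f, -0.5f, 0f);
        gl.glVertex3f( 1f,  0.5f, 0f);
        gl.glVertex3f(-1f,  0.5f, 0f);
        gl.glEnd();
    }

    public void dispose(GLAutoDrawable d) { }

    public static void main(String[] args) {
        GLCanvas canvas = new GLCanvas(new GLCapabilities(GLProfile.getDefault()));
        canvas.addGLEventListener(new CorridorView());
        JFrame frame = new JFrame("Corridor");
        frame.setSize(640, 480);
        frame.add(canvas);
        frame.setVisible(true);
    }
}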
I've found Pro Java 6 3D Game Development to contain very good code examples.
Here's a code example of 3D text from NeHe Productions; check the "DOWNLOAD Java Code" and "DOWNLOAD JoGL Code" links at the end of the example.
On a side note, I was very impressed with LWJGL, which makes you write in a very similar way to straightforward OpenGL.