I'm making an app in which I would like to implement an offline map service for the city of Rome. I thought the simplest way to do this was to include a high-resolution map image of the city, but now I am running into two problems:
If I add a picture to my app in Eclipse, it gets scaled down to the size of an icon, and when I scale it up again the resolution is very low. How can this be solved?
Do you have any experience with multi-touch or zooming possibilities? Once the map is on screen, people will want to zoom in to read street names in detail. I know you have to work with the two fingers' x and y coordinates, but is this the easiest way, and how should I continue?
I’m making an app of the Ticket to Ride board game. I need to display a map of the US. I want to be able to zoom in and out on the map, “draw” trains over the top of the image, and listen for clicks at a specific location on the image.
Can anyone give me some direction on how to implement this? Is there a widget that has the functionality I’m looking for?
You can do this with SubsamplingScaleImageView.
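A minimal sketch of how that could look, assuming the library's standard setup (the asset name usa_map.png and the activity structure are placeholders, not anything from the question):

```java
import android.app.Activity;
import android.os.Bundle;

import com.davemorrissey.labs.subscaleview.ImageSource;
import com.davemorrissey.labs.subscaleview.SubsamplingScaleImageView;

public class BoardMapActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // The view subsamples/tiles the large bitmap internally, so a
        // full-resolution map stays sharp while pinch-to-zoom and panning
        // work out of the box.
        SubsamplingScaleImageView mapView = new SubsamplingScaleImageView(this);
        mapView.setImage(ImageSource.asset("usa_map.png")); // placeholder asset name
        mapView.setMaxScale(8f); // allow zooming in far enough to read labels

        setContentView(mapView);
    }
}
```

For drawing trains on top of the map and reacting to taps at image coordinates, the usual route is to subclass the view and use its coordinate-translation helpers (sourceToViewCoord / viewToSourceCoord) from onDraw and a touch listener; that part is not shown here.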
I am new to computer vision, but I am trying to code an Android app which does the following:
Get the live camera preview and try to detect one logo in it (I have the logo in my resources), in real time. Draw a rect around the logo if found; if there is no match, don't draw the rectangle.
I have already tried a couple of things, including template matching and feature detection using ORB.
Why they didn't work:
Template-matching:
Issues with scaling and rotation. I tried a multi-scale variant of it, but a) the performance was really bad, and b) the rectangle was of course always shown, because template matching always returns a best-match location. There was no way to actually confirm in the code whether the logo was found or not.
ORB feature detection:
Also pretty slow (5-6 fps), but it worked OK-ish. The other problem was that, again, I could never be sure whether the logo was in the picture or not: ORB found random matches even when the logo was not in the picture.
Like I said, I am very new to this. I would appreciate help on what would be the best way to achieve the following:
Confirm whether a picture A (around 200x200 pixels) is in the ROI of the camera picture (around 600x600 pixels).
This shouldn't take longer than 50 ms per frame. I don't know if that's even possible, though. So if a correct way to do this takes a bit longer than that, I would just do the work in a separate thread and only analyze every fifth camera frame or so.
Would appreciate any hints or code examples on how to achieve that. Thank you!
For logo detection, I would highly recommend using an OpenCV Haar cascade classifier (HaarClassifier). It is easy to generate training samples from a collection of images of the logo, or from one logo image with many distortions.
If you can apply a few rules, such as the minimum and maximum size of the logo to be detected and the regions of the image where it can appear, you can run the detector at a better speed than the one you mention for ORB.
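A minimal sketch of how the detection step could look with OpenCV's Java bindings on Android (this is not the answerer's code; logo_cascade.xml is a hypothetical cascade trained with opencv_traincascade, and the size limits are example values for the "rules" mentioned above):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class LogoDetector {

    private final CascadeClassifier cascade;

    public LogoDetector(String cascadePath) {
        // cascadePath points at logo_cascade.xml copied to internal storage.
        cascade = new CascadeClassifier(cascadePath);
    }

    /** Returns detected logo rectangles; an empty array means "no logo found". */
    public Rect[] detect(Mat rgbaFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);

        MatOfRect hits = new MatOfRect();
        // Min/max size constraints (the "rules" mentioned above) keep detection fast.
        cascade.detectMultiScale(gray, hits, 1.1, 3, 0,
                new Size(100, 100), new Size(400, 400));
        return hits.toArray();
    }
}
```

An empty result array then gives a reliable "logo not found", which addresses the confirmation problem seen with template matching and ORB; restricting minSize/maxSize (and, if possible, the search ROI) is what keeps detectMultiScale fast enough for per-frame use.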
I am implementing a map as a computer game accessory; it should show the geography of the computer game with a few informative overlays. I wonder what's the best way to achieve this in JavaFX, or more precisely ScalaFX.
Right now, I have a very naive implementation (sketched in code after the list below):
I have tiles for the map in different zoom levels. For each zoom level, I arrange ImageViews in a Group and transform that group, so that all zoom levels use the same coordinates.
A separate group with the same coordinates is used for my overlays.
I zoom using the mouse wheel and make only one zoom level visible, depending on the current zoom.
I pack all this into a ScrollPane.
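In code, that setup looks roughly like the following plain-JavaFX sketch (tile file names, tile size, and the number of tiles per level are assumptions for illustration; the real project uses ScalaFX, but the structure is the same):

```java
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.control.ScrollPane;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.transform.Scale;
import javafx.stage.Stage;

public class NaiveTileMap extends Application {

    private static final int TILE_SIZE = 256;

    @Override
    public void start(Stage stage) {
        Group level0 = buildZoomLevel(0, 4);  // 4x4 tiles (hypothetical layout)
        Group level1 = buildZoomLevel(1, 8);  // 8x8 tiles, twice the detail
        // Scale the finer level down so both levels share one coordinate system.
        level1.getTransforms().add(new Scale(0.5, 0.5));
        level1.setVisible(false);             // only one zoom level visible at a time

        Group overlay = new Group();          // informative overlays, same coordinates
        Group map = new Group(level0, level1, overlay);

        // Crude zoom-level switch on the mouse wheel; actual scaling of the map
        // group (and anchor preservation) is exactly what this question is about
        // and is not handled here.
        map.setOnScroll(e -> {
            boolean zoomIn = e.getDeltaY() > 0;
            level0.setVisible(!zoomIn);
            level1.setVisible(zoomIn);
        });

        stage.setScene(new Scene(new ScrollPane(map), 800, 600));
        stage.show();
    }

    // Loads classpath resources named "tiles/<level>/<x>_<y>.png" (an assumed scheme).
    private Group buildZoomLevel(int level, int tilesPerSide) {
        Group group = new Group();
        for (int x = 0; x < tilesPerSide; x++) {
            for (int y = 0; y < tilesPerSide; y++) {
                ImageView tile = new ImageView(
                        new Image("tiles/" + level + "/" + x + "_" + y + ".png"));
                tile.setX(x * TILE_SIZE);
                tile.setY(y * TILE_SIZE);
                group.getChildren().add(tile);
            }
        }
        return group;
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```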
That approach has a few limitations:
all tiles, for all zoom levels, are loaded at startup. That's ugly, but works in my particular use case.
using a ScrollPane only works if the map has limited bounds. Again, fine here, but not for maps in general.
the UX is weird: the ScrollPane scrolls with the mouse wheel, whereas most maps pan by dragging and zoom with the mouse wheel or pinch-to-zoom. It's critical that zooming preserves the "anchor" (the mouse/touch position) during the gesture. (It would also be nice to be mobile-ready out of the box, but that's just dreaming right now...)
different levels of detail in the overlay, depending on the current zoom, are possible, but probably not very efficient or convenient.
Obviously, this is one of the approaches I tried. This question mentions a library by some eppleton, but it doesn't seem to be maintained, and the blog that used to describe the library doesn't exist anymore. Also, it seems to focus on providing a game engine, with a tile having meaning to the game; a tile in a map is just an image, and the overlay doesn't care where one tile begins or ends.
To finish this with a concrete question: Are there any libraries or techniques that I can use to fulfill my needs? I'm especially interested in my third bullet point (UX), but I guess that if there's a suitable approach, it would cover points 1 to 3.
UPDATED:
A few more options are on the table now:
Gluon Map
GMapsFX
One option is not to build it yourself but to use an existing open source project, such as openmapfx.
Without a screenshot you won't know what I'm talking about, so here we go:
For this particular part of my application, I need to draw an overlay over the camera interface, as seen in the screenshot: two rectangles. And when the picture is taken, I want to end up with two images, the crops taken from the two rectangles.
If this doesn't make sense, please comment and ask for clarification. I honestly don't even know where to begin with this one. What would be a good starting point? Is this even possible?
I guess you will have to create your own Custom Camera:
http://developer.android.com/guide/topics/media/camera.html#custom-camera
where you can implement your own preview layout, so you can put those frames over the camera preview as you wish.
It won't be easy, though: many devices exist with many different camera resolutions; some work only in landscape even though their native interface is portrait (it's a fake landscape); and aligning those frames with the picture you'll actually get will be a challenge. But this is definitely a job for a custom camera - good luck.
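To illustrate the "aligning frames with the actual picture" part, here is a rough sketch (not a drop-in solution): it assumes the two overlay rectangles are known in preview coordinates (previewRect1/previewRect2 are hypothetical) and simply scales them to the captured JPEG's size, ignoring rotation and aspect-ratio differences, which are exactly the device-specific headaches mentioned above:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Rect;
import android.hardware.Camera;

public class TwoRectCapture implements Camera.PictureCallback {

    // Hypothetical overlay frames, given in preview-pixel coordinates.
    private final Rect previewRect1;
    private final Rect previewRect2;

    public TwoRectCapture(Rect previewRect1, Rect previewRect2) {
        this.previewRect1 = previewRect1;
        this.previewRect2 = previewRect2;
    }

    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        Bitmap picture = BitmapFactory.decodeByteArray(data, 0, data.length);
        Camera.Size preview = camera.getParameters().getPreviewSize();

        // Scale factors between the preview the user saw and the captured picture.
        float sx = (float) picture.getWidth() / preview.width;
        float sy = (float) picture.getHeight() / preview.height;

        Bitmap crop1 = cropScaled(picture, previewRect1, sx, sy);
        Bitmap crop2 = cropScaled(picture, previewRect2, sx, sy);
        // ... save or display crop1 and crop2 ...
    }

    // Cuts the region described by a preview-space Rect out of the full picture.
    private static Bitmap cropScaled(Bitmap src, Rect r, float sx, float sy) {
        return Bitmap.createBitmap(src,
                (int) (r.left * sx), (int) (r.top * sy),
                (int) (r.width() * sx), (int) (r.height() * sy));
    }
}
```

It would be registered with camera.takePicture(null, null, new TwoRectCapture(rect1, rect2)).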
I have multiple geo-locations, each with a weight of +1 or -1, in an Android app. I'd like to be able to plot these points as an overlay on a Google Maps activity. I thought the best way to view the data is not on a point-by-point basis but by shading regions depending on the average density of their values. I have done a few searches; I want a heatmap-like rendering and was wondering whether anyone had any direction on how to accomplish this.
If you are okay with using a library, then you can use Mapex:
https://code.google.com/p/mapex/source/browse/MapExLib/
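If pulling in a library is not an option, a rough do-it-yourself sketch is possible with the classic Maps v1 Overlay API (assumed from the "Google Maps activity" wording). It only approximates a heatmap: each point is drawn as a translucent circle, red for +1 and blue for -1, so overlapping circles shade denser regions; the radius and colors are arbitrary example choices:

```java
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Point;

import com.google.android.maps.GeoPoint;
import com.google.android.maps.MapView;
import com.google.android.maps.Overlay;

import java.util.List;

public class DensityOverlay extends Overlay {

    /** A location plus its +1/-1 weight. */
    public static class WeightedPoint {
        public final GeoPoint location;
        public final int weight;

        public WeightedPoint(GeoPoint location, int weight) {
            this.location = location;
            this.weight = weight;
        }
    }

    private final List<WeightedPoint> points;
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public DensityOverlay(List<WeightedPoint> points) {
        this.points = points;
    }

    @Override
    public void draw(Canvas canvas, MapView mapView, boolean shadow) {
        if (shadow) {
            return; // no shadow pass needed
        }
        Point screen = new Point();
        for (WeightedPoint p : points) {
            // Convert the geo position to screen pixels for the current viewport.
            mapView.getProjection().toPixels(p.location, screen);
            // Translucent fill: overlapping circles accumulate into darker regions.
            paint.setColor(p.weight > 0 ? 0x40FF0000 : 0x400000FF);
            canvas.drawCircle(screen.x, screen.y, 40f, paint);
        }
    }
}
```

Attach it with mapView.getOverlays().add(new DensityOverlay(points)) and call mapView.invalidate() to redraw.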