I'd like to "black out" certain locations on a google map in an android app. I'd like to be able to pull location information that would be loaded onto the screen, and just determine whether or not I want to load that segment of the map, or replace it with a black block.
Is this supported by the maps api? Is there any way to make this work?
Take a look at Ground Overlays in the Android Maps API. They let you overlay (or black out) any section of the map based on coordinates.
So if you have a list of locations and their coordinates, you should be able to overlay a custom image on top of those areas. There are options to automatically scale or anchor the image to the bounds you supply.
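A minimal sketch, assuming the Google Maps Android API and a plain black drawable in your project (the resource name, map reference, and coordinates below are placeholders):

// Cover a rectangular area of the map with a black image so that segment is hidden.
LatLngBounds blockedArea = new LatLngBounds(
        new LatLng(40.712216, -74.22655),   // south-west corner (example values)
        new LatLng(40.773941, -74.12544));  // north-east corner (example values)

GroundOverlayOptions blackout = new GroundOverlayOptions()
        .image(BitmapDescriptorFactory.fromResource(R.drawable.black_block))
        .positionFromBounds(blockedArea);

googleMap.addGroundOverlay(blackout);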
I am making an Android app where the camera preview is a circle, so that it only captures the user's face, and the area outside the circle is black.
That would imply making a custom camera yourself instead of using the default one.
Refer to this link for this: https://stackoverflow.com/a/15392209/7528995
You could also use the normal camera instead, since making the user fit their head inside the circled preview would be a bit inconvenient. After capturing the photo you could simply put it into a CircleImageView.
To do this you can refer to this answer: https://stackoverflow.com/a/36613446/7528995
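A rough sketch of that second approach, assuming the stock camera intent and the de.hdodenhof CircleImageView library (the view id and request code are placeholders):

// Launch the default camera app and show its thumbnail result cropped to a circle.
private static final int REQUEST_IMAGE_CAPTURE = 1;

private void takePhoto() {
    startActivityForResult(new Intent(MediaStore.ACTION_IMAGE_CAPTURE), REQUEST_IMAGE_CAPTURE);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        Bitmap thumbnail = (Bitmap) data.getExtras().get("data"); // small preview bitmap
        CircleImageView preview = findViewById(R.id.circle_preview);
        preview.setImageBitmap(thumbnail); // displayed as a circle by the view
    }
}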
I want to play a video in a photo frame when an image is detected. Has anybody done this using ARCore? It would be a great help.
Thanks
I think you mean you want to add a video as a renderable in ARCore, in your case when an image is detected.
There is actually (at the time of writing) an example included with Sceneform showing how to add a video as a renderable - it is available here: https://github.com/google-ar/sceneform-android-sdk/tree/master/samples/chromakeyvideo
This particular example also applies a Chroma filter but you can simply ignore that part.
The approach is roughly:
create an ExternalTexture to play the video on
create a MediaPlayer and set its surface to the ExternalTexture's surface
build a new renderable with the ExternalTexture
create a node and add it to your scene
set the renderable for the node to the new ModelRenderable you built
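A condensed sketch of those steps, loosely based on the chroma-key sample above (the material parameter name "videoTexture" and the resource names are assumptions taken from that sample; adjust them to your own model and material):

ExternalTexture texture = new ExternalTexture();

MediaPlayer player = MediaPlayer.create(context, R.raw.my_video);
player.setSurface(texture.getSurface());
player.setLooping(true);

ModelRenderable.builder()
        .setSource(context, R.raw.video_screen)   // a flat quad model, as in the sample
        .build()
        .thenAccept(renderable -> {
            // Point the material's external texture parameter at the video surface.
            renderable.getMaterial().setExternalTexture("videoTexture", texture);

            // Anchor a node on the detected augmented image and attach the renderable.
            AnchorNode anchorNode = new AnchorNode(
                    augmentedImage.createAnchor(augmentedImage.getCenterPose()));
            anchorNode.setParent(arSceneView.getScene());

            Node videoNode = new Node();
            videoNode.setParent(anchorNode);
            videoNode.setRenderable(renderable);

            player.start();
        });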
For augmented images, ARCore will automatically calculate the size of the image that it detects, as long as the state of the image is TRACKING. From the documentation:
ARCore will attempt to estimate the physical image's width based on its understanding of the world. If the optional physical size is specified in the database, this estimation process will happen more quickly. However, the estimated size may be different from the specified size.
Your renderable will be sized to fit inside this by default but you can scale the renderable up or down as you want also.
There is a series of articles available which may cover your exact case, depending on exactly what you need, along with some example code here: https://proandroiddev.com/arcore-sceneform-simple-video-playback-3fe2f909bfbc
ExternalTexture is not working nowadays; create a VideoNode to show videos instead.
A sample to refer to is https://github.com/SceneView/sceneform-android/blob/master/samples/video-texture/src/main/java/com/google/ar/sceneform/samples/videotexture/MainActivity.java
I am creating a small indoor navigation Android application using Wi-Fi fingerprinting. Since it is a small-scale application, I am using a custom-made map (which is basically a PNG image). I want to show the location of the user at a particular spot on the image and update it as the user moves. What is the best way to do this? I thought of dividing the image like an x-y axis and placing the dot according to the coordinate values (please advise on this as well).
It involves a fair amount of Bitmap manipulation. Treat the marker as an ImageView placed inside a FrameLayout whose first child is the root map ImageView/map view; the marker view then sits on top of the map. If it is a static marker image, you should be able to position it with LayoutParams on top of the root map view.
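A minimal sketch of that idea, assuming the map image and the marker share a FrameLayout (the ids, drawable, and pixel values are placeholders):

// Add a marker ImageView on top of the map ImageView inside the same FrameLayout,
// positioned by pixel offsets measured on the map image.
FrameLayout container = findViewById(R.id.map_container);

ImageView marker = new ImageView(this);
marker.setImageResource(R.drawable.ic_user_dot);

int userXpx = 250; // example: x position of the user on the map image, in pixels
int userYpx = 430; // example: y position of the user on the map image, in pixels

FrameLayout.LayoutParams params = new FrameLayout.LayoutParams(
        FrameLayout.LayoutParams.WRAP_CONTENT,
        FrameLayout.LayoutParams.WRAP_CONTENT);
params.leftMargin = userXpx;
params.topMargin = userYpx;
container.addView(marker, params);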
There are hundreds of ways.
One easy way would be to use:
https://github.com/chrisbanes/PhotoView
That lib handles scaling, panning, etc., and even provides access to the matrix used.
It is also important to note that the user's coordinates need to be translated to the current scale.
In one of my apps, I don't handle user locations, but I allow a user to put pins on the map. My Pin object contains XY coordinates relative to the original map size.
To convert to the device/image scale size, I do this:
float[] convertedPin = {pin.getX(), pin.getY()};
getImageMatrix().mapPoints(convertedPin);
getImageMatrix() is provided with the library posted above.
Then, I modified the libs onDraw():
for (Pin pin : pins) {
if (pin.isVisible()) {
float[] goal = {pin.getX(), pin.getY()};
getImageMatrix().mapPoints(goal);
if (pin.getPinBitmap() != null) {
float pinSize = TypedValue.applyDimension(COMPLEX_UNIT_DIP, pin.getPinSize(), getContext().getResources().getDisplayMetrics());
canvas.drawBitmap(pin.getPinBitmap(), goal[0] - (pinSize / 2f), goal[1] - pinSize, null);
}
canvas.save();
}
}
What I want to do:
I want to create a little program in Java (beginner in Java here) in which I specify a GPS location and zoom level and download the map image that Google Maps is showing. Associated with this, I need a scale for the image (the size in km of the x and y dimensions of the rectangular image).
What I know:
I know how to get the right image displayed in the browser by specifying the GPS location and zoom level directly in the URL (example: https://maps.google.com/maps?f=q&hl=de&geocode=&q=48.167432,+10.533072&z=7). There should be some Java library, similar to urllib in Python, with which I can call this URL.
How to then download this image, and how to know the area pictured in it?
I actually found a formula to relate the amount of meters (or whatever unit) "contained" in a pixel (this function depends on latitude and zoom level of the map). In this case, how to know about the pixels used by the map?
Thank you for any suggestion and/or pointer!
You want to use the Static Maps API.
https://developers.google.com/maps/documentation/staticmaps/index
Your particular map would be at URL
http://maps.googleapis.com/maps/api/staticmap?center=48.167432,10.533072&size=400x400&sensor=true&zoom=7
as just an image. Set sensor=true if you're using a GPS sensor to find the location. Otherwise set it to false.
To read the image from this url:
Getting Image from URL (Java)
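A minimal sketch of the download step in plain Java, using the URL above (the output file name is arbitrary):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class StaticMapDownloader {
    public static void main(String[] args) throws Exception {
        String url = "http://maps.googleapis.com/maps/api/staticmap"
                + "?center=48.167432,10.533072&size=400x400&sensor=true&zoom=7";
        // Stream the PNG returned by the Static Maps API straight to disk.
        try (InputStream in = new URL(url).openStream()) {
            Files.copy(in, Paths.get("map.png"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}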
How to then download this image, and how to know the area pictured in it? I actually found a formula to relate the amount of meters (or whatever unit) "contained" in a pixel (this function depends on latitude and zoom level of the map). In this case, how to know about the pixels used by the map?
You pass the pixel dimensions and zoom level in the URL, so with this solution that information is already known and your scale should be easy to calculate.
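For the scale, the commonly cited Web Mercator ground-resolution formula can be used; a sketch (the constant is the resolution at zoom 0 on the equator, roughly 40,075,016.686 m / 256 px):

// Approximate metres per pixel for a Web Mercator map (such as Google Maps)
// at a given latitude and zoom level.
static double metersPerPixel(double latitudeDeg, int zoom) {
    return 156543.03392 * Math.cos(Math.toRadians(latitudeDeg)) / Math.pow(2, zoom);
}

// Width of the 400x400 example map above, in kilometres:
// double widthKm = metersPerPixel(48.167432, 7) * 400 / 1000.0;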
Is it possible to create a touch draw area to take signatures in a form?
I am working on an application to collect information through a form and would like to collect signatures by letting users of the app draw a signature on the screen. How would I do this for Android? Preferably, I would like the "signature" stored as an image I can save somewhere.
Does anyone have any idea?
Many thanks,
Check out Google's API example of FingerPaint; the full app is included in the samples, but the code for the main Java file is here.
Basically, make some small section of the view a canvas control (like in FingerPaint) and then, when the user presses done, save the drawing as an image (using the Bitmap object backing the Canvas).
You may want to turn rotation off for this particular view, otherwise the signature will get cleared when the screen rotates, unless you implement some way of handling the rotation.
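A minimal sketch of such a signature view, assuming a plain custom View (all names are illustrative): the user draws with a finger, and getSignatureBitmap() renders the strokes into a Bitmap that can be saved as an image.

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;

public class SignatureView extends View {
    private final Path path = new Path();
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public SignatureView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(6f);
        paint.setColor(Color.BLACK);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                path.moveTo(event.getX(), event.getY());
                break;
            case MotionEvent.ACTION_MOVE:
                path.lineTo(event.getX(), event.getY());
                break;
        }
        invalidate(); // redraw with the updated path
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawPath(path, paint);
    }

    // Render the current strokes onto a white Bitmap for saving.
    public Bitmap getSignatureBitmap() {
        Bitmap bitmap = Bitmap.createBitmap(getWidth(), getHeight(), Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        canvas.drawColor(Color.WHITE);
        draw(canvas);
        return bitmap;
    }
}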