Downloading a map image from Google Maps using Java - java

What I want to do:
I want to create a little program in Java (beginner in Java here) in which I specify a GPS location and zoom level and download the map image that Google Maps is showing. I also need the scale of the image (the size in km of the x and y dimensions of the rectangular image).
What I know:
I know how to get the right image displayed in the browser by specifying the GPS location and zoom level directly in the URL (example: https://maps.google.com/maps?f=q&hl=de&geocode=&q=48.167432,+10.533072&z=7). I assume Java has some library like Python's urllib with which I can call such a URL.
How do I then download this image, and how do I know the area pictured in it?
I actually found a formula relating the number of meters (or whatever unit) "contained" in a pixel to the latitude and zoom level of the map. In that case, how do I know the pixel dimensions of the map?
Thank you for any suggestion and/or pointer!

You want to use the Static Maps API.
https://developers.google.com/maps/documentation/staticmaps/index
Your particular map would be at URL
http://maps.googleapis.com/maps/api/staticmap?center=48.167432,10.533072&size=400x400&sensor=true&zoom=7
as just an image. Set sensor=true if you're using a GPS sensor to find the location. Otherwise set it to false.
To read the image from this URL, see:
Getting Image from URL (Java)
How do I then download this image, and how do I know the area pictured in it? I actually found a formula relating the number of meters (or whatever unit) "contained" in a pixel to the latitude and zoom level of the map. In that case, how do I know the pixel dimensions of the map?
You pass the pixel dimensions and zoom level in the URL, so all of that information is already given with this solution, and your scale should be very easy to calculate.
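A minimal sketch of the whole flow, assuming the URL format shown above (note that current versions of the Static Maps API also require an API key parameter; check the documentation linked above):

import java.awt.image.BufferedImage;
import java.io.File;
import java.net.URL;
import javax.imageio.ImageIO;

public class StaticMapDownload {
    public static void main(String[] args) throws Exception {
        double lat = 48.167432, lng = 10.533072;
        int zoom = 7, width = 400, height = 400;

        String url = "http://maps.googleapis.com/maps/api/staticmap"
                + "?center=" + lat + "," + lng
                + "&size=" + width + "x" + height
                + "&sensor=true&zoom=" + zoom;

        // Download the map as just an image and save it.
        BufferedImage map = ImageIO.read(new URL(url));
        ImageIO.write(map, "png", new File("map.png"));

        // Web Mercator ground resolution: meters covered by one pixel
        // at this latitude and zoom level (156543.03392 = 2*pi*6378137/256).
        double metersPerPixel =
                156543.03392 * Math.cos(Math.toRadians(lat)) / Math.pow(2, zoom);
        System.out.printf("Image spans %.1f km x %.1f km%n",
                width * metersPerPixel / 1000.0, height * metersPerPixel / 1000.0);
    }
}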

Related

Camera2: Make SurfaceView aspect fit to fill (scaleType CENTER_CROP)

I've been following the Camera2 example (android/camera-samples/Camera2Video) to create an abstraction over the Camera2 library. The idea of my abstraction is to give the user the ability to use a Camera view from React Native, so it's just a <Camera> view which can be whatever size/aspect ratio.
While this works out of the box on iOS, I can't seem to get the preview on Android to display "what the camera sees" in the correct aspect ratio.
The official example from Android works like this:
They create a custom SurfaceView extension that should automatically fit to the correct aspect ratio (see: AutoFitSurfaceView)
They use that AutoFitSurfaceView in their layout
They add a listener to the AutoFitSurfaceView's Surface to find out when it has been created (source)
Once the Surface has been created, they call getPreviewOutputSize(...) to get the best matching camera preview size (e.g. so you don't stream 4k for a 1080p screen, that's wasted pixels)
Then they pass the best matching camera preview size to the AutoFitSurfaceView::setAspectRatio(...) function
By knowing the desired aspect ratio, the AutoFitSurfaceView should then automatically perform a center-crop transform in its onMeasure override (a rough sketch of such an override follows below)
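For illustration, here is a rough, untested sketch of what such a center-crop onMeasure could look like; the class name and details are my own assumptions, not the official AutoFitSurfaceView:

import android.content.Context;
import android.util.AttributeSet;
import android.view.SurfaceView;

public class CenterCropSurfaceView extends SurfaceView {
    private float aspectRatio = 0f;

    public CenterCropSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    // Call with the chosen camera preview size, rotated to view orientation.
    public void setAspectRatio(int width, int height) {
        aspectRatio = (float) width / height;
        requestLayout();
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        int width = MeasureSpec.getSize(widthMeasureSpec);
        int height = MeasureSpec.getSize(heightMeasureSpec);
        if (aspectRatio == 0f) {
            setMeasuredDimension(width, height);
            return;
        }
        // Grow one dimension (never shrink) so the buffer covers the view;
        // the parent clips the overflow, which yields the CENTER_CROP effect.
        if (width > height * aspectRatio) {
            height = (int) (width / aspectRatio);
        } else {
            width = (int) (height * aspectRatio);
        }
        setMeasuredDimension(width, height);
    }
}

For the crop to actually be centered, the parent (e.g. a FrameLayout with layout_gravity=center on the child) needs to center the oversized view and clip the overflow.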
If you read the source code of their getPreviewOutputSize(...) function, you might notice that it uses a Display to find the best matching preview size. If I understood the code correctly, this only works if the camera preview (AutoFitSurfaceView) is exactly the same size as the device's screen. That is a poor assumption, as there are lots of cases where it simply isn't true. In my case, the camera view has some bottom spacing/margin, so it doesn't fill the screen and therefore ends up with odd resolutions (1080x1585 on a 1080x1920 screen)
With that long introduction, here comes my question: How do I actually perform a correct center crop transform (aka scaleType = CENTER_CROP) on my SurfaceView? I've tried the following:
Set the size of my SurfaceView using SurfaceHolder::setFixedSize(...), but that didn't change anything at all
Remove their getPreviewOutputSize(...) stuff and simply use the highest resolution available
Use the Android View properties scaleX and scaleY to scale the view by the aspect-ratio difference of the view <-> camera input scaler size (this somewhat worked, but is giving me errors if I try to use high-speed capture: Surface size 1080x1585 is not part of the high speed supported size list [1280x720, 1920x1080])
Any help appreciated!

ARCore: How to play video in a photo frame when an image is detected

I want to play a video in a photo frame when an image is detected. Has anybody done this using ARCore? It would be a great help.
Thanks
I think you mean you want to add a video as a renderable in ARCore, in your case when an image is detected.
There is actually (at the time of writing) an example included with Sceneform showing how to add a video as a renderable - it is available here: https://github.com/google-ar/sceneform-android-sdk/tree/master/samples/chromakeyvideo
This particular example also applies a Chroma filter but you can simply ignore that part.
The approach is roughly:
create an ExternalTexture to play the video on
create a MediaPlayer and set its surface to the ExternalTexture's surface
build a new renderable with the ExternalTexture
create a node and add it to your scene
set the renderable for the node to the new ModelRenderable you built (see the sketch just below)
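A minimal sketch of those steps with the (archived) Sceneform SDK, assuming this runs in a fragment callback where image is a tracked AugmentedImage; R.raw.my_video and R.raw.video_plane are hypothetical resources (a video file and a flat plane model whose material exposes an external texture named "videoTexture"):

// 1. Create an ExternalTexture to play the video on.
ExternalTexture texture = new ExternalTexture();

// 2. Create a MediaPlayer and point it at the texture's surface.
MediaPlayer player = MediaPlayer.create(context, R.raw.my_video);
player.setSurface(texture.getSurface());
player.setLooping(true);

// 3. Build a renderable that uses the ExternalTexture.
ModelRenderable.builder()
        .setSource(context, R.raw.video_plane)
        .build()
        .thenAccept(renderable -> {
            renderable.getMaterial().setExternalTexture("videoTexture", texture);

            // 4. Create a node anchored to the detected image and add it to the scene.
            AnchorNode anchorNode = new AnchorNode(image.createAnchor(image.getCenterPose()));
            anchorNode.setParent(arFragment.getArSceneView().getScene());

            // 5. Set the renderable on a child node and start playback.
            Node videoNode = new Node();
            videoNode.setParent(anchorNode);
            videoNode.setRenderable(renderable);
            player.start();
        });

The chromakey sample linked above additionally waits for the first video frame before showing the renderable, to avoid a brief black flash.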
For augmented images, ARCore will automatically calculate the size of the image it detects, so long as the state of the image is TRACKING. From the documentation:
ARCore will attempt to estimate the physical image's width based on its understanding of the world. If the optional physical size is specified in the database, this estimation process will happen more quickly. However, the estimated size may be different from the specified size.
Your renderable will be sized to fit inside this by default, but you can also scale the renderable up or down as you want.
There is a series of articles available which may cover your exact case, depending on exactly what you need, along with some example code here: https://proandroiddev.com/arcore-sceneform-simple-video-playback-3fe2f909bfbc
ExternalTexture is not working nowadays; create a video node to show videos instead.
For reference, see https://github.com/SceneView/sceneform-android/blob/master/samples/video-texture/src/main/java/com/google/ar/sceneform/samples/videotexture/MainActivity.java

How to display user position on custom made map in indoor navigation?

I am creating a small indoor-navigation Android application using Wi-Fi fingerprinting. Since it is a small-scale application, I am using a custom-made map (which is basically a PNG image). I want to show the location of the user at a particular spot on the image and update it as the user moves. What is the best way to do this? I thought of treating the image like an x-y axis and placing the dot at the right coordinates (please advise on this as well).
It involves a lot of Bitmap manipulation. Take the marker as an ImageView and put it inside a FrameLayout that also contains the root map ImageView/map view; you can then position the marker on top of the map. If it's a static marker image, you can use LayoutParams to place it over the root map view.
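A minimal sketch of that idea, assuming the map ImageView fills its FrameLayout exactly and the user's position comes in as fractions (0..1) of the map image's width and height; the method and parameter names are hypothetical:

// Place the marker so its bottom tip sits on the user's position.
void placeMarker(ImageView mapView, ImageView markerView,
                 float fractionX, float fractionY) {
    float x = mapView.getWidth() * fractionX;
    float y = mapView.getHeight() * fractionY;
    markerView.setX(x - markerView.getWidth() / 2f);  // center horizontally
    markerView.setY(y - markerView.getHeight());      // anchor bottom tip
}

When the fingerprinting code produces a new position, call this again and the marker moves. If the map can pan or zoom, the point first needs to be mapped through the image matrix, as the next answer shows.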
There are hundreds of ways.
One easy way would be to use:
https://github.com/chrisbanes/PhotoView
That lib handles scaling, panning, etc and even provides access to the matrix used.
It is important to note, too, that the user's coordinates need to be translated to the current scale.
In one of my apps I don't handle user locations, but I allow the user to put pins on the map. My Pin object contains x-y coordinates relative to the original map size.
To convert to the device/image scale size, I do this:
float[] convertedPin = {pin.getX(), pin.getY()};
getImageMatrix().mapPoints(convertedPin);
getImageMatrix() is provided by the library posted above.
Then, I modified the lib's onDraw():
for (Pin pin : pins) {
    if (pin.isVisible()) {
        // Map the pin's original-image coordinates into view coordinates.
        float[] goal = {pin.getX(), pin.getY()};
        getImageMatrix().mapPoints(goal);
        if (pin.getPinBitmap() != null) {
            // Pin size is stored in dp; convert to pixels for drawing.
            float pinSize = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP,
                    pin.getPinSize(), getContext().getResources().getDisplayMetrics());
            // Anchor the pin's bottom-center to the mapped point.
            canvas.drawBitmap(pin.getPinBitmap(),
                    goal[0] - (pinSize / 2f), goal[1] - pinSize, null);
        }
        canvas.save();
    }
}

Black out area of Google Maps activity

I'd like to "black out" certain locations on a google map in an android app. I'd like to be able to pull location information that would be loaded onto the screen, and just determine whether or not I want to load that segment of the map, or replace it with a black block.
Is this supported by the maps api? Is there any way to make this work?
Take a look at Ground Overlays in the Android Maps API. They let you overlay (or black out) any section of the map based on coordinates.
So if you have a list of locations and their coordinates, you should be able to overlay a custom image on top of those areas. The API has options to automatically scale or anchor the image bounds.
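A minimal sketch, assuming map is an initialized GoogleMap; R.drawable.black_square is a hypothetical opaque black image, and the corner coordinates are example values:

// Black out a rectangular region of the map with a GroundOverlay.
LatLngBounds area = new LatLngBounds(
        new LatLng(48.10, 10.45),   // south-west corner
        new LatLng(48.20, 10.60));  // north-east corner

GroundOverlay blackout = map.addGroundOverlay(new GroundOverlayOptions()
        .image(BitmapDescriptorFactory.fromResource(R.drawable.black_square))
        .positionFromBounds(area));

// Later, to show that segment of the map again:
// blackout.remove();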

Java Image Generation Of Heat Map for Google Earth

I'm working on a Java program that processes some data and generates a heat map to display the results. The program takes a target area and divides it into a grid where, for the sake of testing, each cell is 1 NM by 1 NM. I generate a KML file in which each cell of the grid is represented by a polygon, and each polygon is colored based on the value of its cell. However, with the volume of data that may be used, I am worried that Google Earth may not be able to handle the number of polygons being drawn (hundreds of polygons).
I have heard that pictures are less resource-heavy for Google Earth, so is there a way of generating an image (like .jpg or .png) of the heat map in Java and overlaying it in Google Earth? The center of each cell is known and the 4 corners are calculated, with each cell's RGB and hex value known. I'm using GeoTools and JAK as libraries for this project. Any help would be greatly appreciated.
I have some GeoTools based code that generates heat map images at http://code.google.com/p/spatial-cluster-detection/ - there is code in there that will show you how to convert your grid into georeferenced imagery.
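If you want to hand-roll the image instead, a minimal sketch is to rasterize the grid to a PNG (one pixel per cell is enough, since Google Earth stretches a ground overlay to its bounds) and reference it from the KML as a GroundOverlay; cellColors and the grid dimensions are placeholders here:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class HeatMapImage {
    public static void main(String[] args) throws Exception {
        int rows = 100, cols = 100;
        int[][] cellColors = new int[rows][cols]; // fill with each cell's ARGB value

        // One pixel per grid cell; Google Earth scales the overlay
        // to its <LatLonBox>, so the image can stay tiny.
        BufferedImage img = new BufferedImage(cols, rows, BufferedImage.TYPE_INT_ARGB);
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                img.setRGB(c, r, cellColors[r][c]);
            }
        }
        ImageIO.write(img, "png", new File("heatmap.png"));

        // In the KML, replace the per-cell polygons with a single
        // <GroundOverlay> whose <LatLonBox> north/south/east/west
        // edges match the outer bounds of the grid.
    }
}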
Yes, it's possible, and that's as far as I can go with an answer given the information provided in the question.
However, if you already have your .kml created, it's worth testing it to see whether Google Earth really struggles or whether you are just speculating.
