I'm currently developing an application for Android devices. Its main feature is drawing polylines on a map to show the level of traffic on each street in the city. Unfortunately, when I draw around 3K polylines (the number is reduced according to the screen size and zoom level) my map gets incredibly slow, not to mention the time it takes to draw all of the lines.
Do you know of a more efficient way to mark streets or draw lines on a map?
I was also thinking about switching to OSM, but I have never used it and I don't know how efficient it is.
I'm debugging the app on a Samsung Galaxy Note 10.1, and the app uses Maps API v2.
My code to draw polylines:
Polyline line;
List<Float> coordinatesStart;
List<Float> coordinatesEnd;
LatLng start;
LatLng end;
List<List<Float>> coordinates;
int polylinesNumber = 0;

for (Features ftr : features) {
    coordinates = ftr.geometry.coordinates;
    for (int i = 0; i < coordinates.size() - 1; i++) {
        coordinatesStart = coordinates.get(i);
        coordinatesEnd = coordinates.get(i + 1);
        start = new LatLng(coordinatesStart.get(1), coordinatesStart.get(0));
        end = new LatLng(coordinatesEnd.get(1), coordinatesEnd.get(0));
        line = map.addPolyline(new PolylineOptions()
                .add(start, end)
                .width(3)
                .color(0x7F0000FF)); // semi-transparent blue
        polylinesNumber++;
    }
}
I would appreciate any help!
A great optimization is possible here:
Your main error is that you create a new PolylineOptions instance for each and every line you draw on the map. This makes the drawing terribly slow.
The solution would be:
Use only one PolylineOptions instance and call only its .add(LatLng) method inside the loops.
// MAGIC #1 here
// You create only ONE instance of PolylineOptions.
// Width and color are set once; the points for the segments are added later inside the loops.
PolylineOptions myPolylineOptionsInstance = new PolylineOptions()
        .width(3)
        .color(0x7F0000FF);

for (Features ftr : features) {
    coordinates = ftr.geometry.coordinates;
    for (int i = 0; i < coordinates.size(); i++) {
        coordinatesStart = coordinates.get(i);
        start = new LatLng(coordinatesStart.get(1), coordinatesStart.get(0));

        // MAGIC #2 here
        // Adding the actual point to the polyline instance.
        myPolylineOptionsInstance.add(start);
        polylinesNumber++;
    }
}

// MAGIC #3 here
// Drawing happens only once.
line = map.addPolyline(myPolylineOptionsInstance);
Attention:
If you would like different colors for different line segments/sections, you will have to use multiple PolylineOptions instances, because a single PolylineOptions can have only one color. But the approach is the same: use as few PolylineOptions instances as you can, for example one per street instead of one per segment (see the sketch below).
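As a rough illustration of that idea, here is a minimal sketch that keeps one PolylineOptions per street and colors it by traffic level; ftr.trafficColor is a hypothetical field standing in for however you map traffic data to a color:

for (Features ftr : features) {
    PolylineOptions streetOptions = new PolylineOptions()
            .width(3)
            .color(ftr.trafficColor); // hypothetical per-street traffic color
    for (List<Float> point : ftr.geometry.coordinates) {
        // GeoJSON-style order: longitude first, latitude second
        streetOptions.add(new LatLng(point.get(1), point.get(0)));
    }
    map.addPolyline(streetOptions);
}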
Do you check if the polyline that you draw is even visible to the user on the screen? If not, that would be my first idea. This question could be of help for that.
This might be of help as well:
http://discgolfsoftware.wordpress.com/2012/12/06/hiding-and-showing-on-screen-markers-with-google-maps-android-api-v2/
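For what it's worth, the viewport check itself is short. A minimal sketch, assuming start and end are the segment endpoints from the question's code:

LatLngBounds visible = map.getProjection().getVisibleRegion().latLngBounds;
if (visible.contains(start) || visible.contains(end)) {
    // only add (or show) the polyline for segments that are actually on screen
}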
I want to chime in on this because I didn't find this answer complete. If you zoom out, you're still going to have a ton of individual polylines on screen and the UI thread will grind to a halt. I solved this problem using a custom TileProvider and a spherical mercator projection of my LatLng points to screen pixels. The idea came from the maps-utils library, which has most of the tools needed to write a canvas to a tile (and a lot of other niceties, too).
I've written an example ComplexTileOverlays from a project I was working on. This includes ways to change alpha and line thickness in the CustomTileProvider.
I first load my custom database of polylines into memory during a splash screen (for this example, it's an open database of bike facilities on the island of Montréal). From there, I draw each line projection onto a 256x256 pixel canvas representing one tile. Overall this technique is faster by leaps and bounds if you have a lot of graphical overlays to tie to the map.
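To give an idea of the shape of this approach, here is a minimal sketch of such a tile provider. It assumes the android-maps-utils SphericalMercatorProjection class (Point below is the maps-utils geometry Point, not android.graphics.Point), and lines is a placeholder for whatever polyline data you keep in memory:

public class PolylineTileProvider implements TileProvider {
    private static final int TILE_SIZE = 256;
    private final SphericalMercatorProjection projection =
            new SphericalMercatorProjection(TILE_SIZE);
    private final List<List<LatLng>> lines;

    public PolylineTileProvider(List<List<LatLng>> lines) {
        this.lines = lines;
    }

    @Override
    public Tile getTile(int x, int y, int zoom) {
        Bitmap bitmap = Bitmap.createBitmap(TILE_SIZE, TILE_SIZE, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        Paint paint = new Paint();
        paint.setColor(0x7F0000FF);
        paint.setStrokeWidth(3);
        paint.setStyle(Paint.Style.STROKE);

        float scale = (float) Math.pow(2, zoom); // number of tiles across the world at this zoom
        for (List<LatLng> line : lines) {
            Path path = new Path();
            boolean first = true;
            for (LatLng latLng : line) {
                Point world = projection.toPoint(latLng); // world coordinates in [0, 256)
                float px = (float) (world.x * scale - x * TILE_SIZE); // tile-local pixels
                float py = (float) (world.y * scale - y * TILE_SIZE);
                if (first) {
                    path.moveTo(px, py);
                    first = false;
                } else {
                    path.lineTo(px, py);
                }
            }
            canvas.drawPath(path, paint);
        }

        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
        return new Tile(TILE_SIZE, TILE_SIZE, stream.toByteArray());
    }
}

The overlay is then added once with map.addTileOverlay(new TileOverlayOptions().tileProvider(new PolylineTileProvider(lines))), and the map only asks for the tiles it actually needs to display.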
Related
I am very new to this ARCore and I have been looking at the HelloAR Java Android Studio project provided in the SDK.
Everything works OK and is pretty cool; however, I want to place/drop an object when I touch the screen, even when no planes have been detected. Let me explain a little better...
As I understand ARCore, it will detect horizontal planes and ONLY on those horizontal planes I can place 3D objects to be motion tracked.
Is there any way (perhaps using PointCloud information) to be able to place an object in the scene even if there are no horizontal planes detected? Sort of like these examples?
https://experiments.withgoogle.com/ar/flight-paths
https://experiments.withgoogle.com/ar/arcore-drawing
I know they are using Unity and openFrameworks, but could that be done in Java?
Also, I have looked at
How to put an object in the air?
and
how to check ray intersection with object in ARCore
but I don't think I understand the concept of an Anchor (I managed to drop the object into the scene, but it either disappears immediately or it is just a regular OpenGL object with no knowledge of the real world).
What I want to understand is:
- Is it possible, and if so how, to create a custom/user-defined plane, that is, a plane that is NOT automatically detected by ARCore?
- How can I create an Anchor (the sample does it in the PlaneAttachment class, I think) that is NOT linked to any plane, OR that is linked to some PointCloud point?
- How do I draw the object and place it at the Anchor previously created?
I think this is too much to ask, but looking at the API documentation has not helped me at all.
Thank you!
Edit:
Here is the code that I added to HelloArActivity.java (everything is the same as the original file except for the lines after the // ***** markers; omitted parts are shown as ...):
@Override
public void onDrawFrame(GL10 gl) {
    ...
    MotionEvent tap = mQueuedSingleTaps.poll();

    // I added this to use the screenPointToWorldRay function from the second link I posted...
    // I am probably using this wrong.
    float[] worldXY = new float[6];
    ...
    if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
        // ***** I added this to use the screenPointToWorldRay function
        worldXY = screenPointToWorldRay(tap.getX(), tap.getY(), frame);
        ...
    }
    ...
    // Visualize anchors created by touch.
    float scaleFactor = 1.0f;
    for (PlaneAttachment planeAttachment : mTouches) {
        ...
    }

    // ***** This places the object momentarily in the scene (it disappears immediately)
    frame.getPose().compose(Pose.makeTranslation(worldXY[3], worldXY[4], worldXY[5])).toMatrix(mAnchorMatrix, 0);

    // ***** This places the object in the middle of the scene, but since it is not attached to anything
    // there is no tracking; it is always in the middle of the screen (pretty much expected behaviour)
    // frame.getPose().compose(Pose.makeTranslation(0, 0, -1.0f)).toMatrix(mAnchorMatrix, 0);

    // ***** I duplicated this code, which normally gets executed ONLY when touching a detected plane/surface.
    mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
    mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
    ...
}
You would first have to perform a hit test via Frame.hitTest and iterate over the HitResult objects until you hit a Point type Trackable. You could then retrieve a pose for that hit result via HitResult.getHitPose, or attach an anchor to that point and get the pose from that via ArAnchor.getPose (best approach).
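A rough sketch of that first approach against the ARCore 1.x Java API (the developer-preview SDK used in the question names some of these classes differently), placed inside the tap handling of onDrawFrame:

for (HitResult hit : frame.hitTest(tap)) {
    if (hit.getTrackable() instanceof Point) {
        // Anchor the object to the feature point so ARCore keeps tracking it.
        Anchor anchor = hit.createAnchor();
        anchor.getPose().toMatrix(mAnchorMatrix, 0);
        mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
        mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
        break;
    }
}

In practice you would store the anchor in a list and redraw it every frame, the same way the sample does for its plane attachments, rather than recreating it on each tap.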
However, if you want to do this yourself from an arbitrary point retrieved with ArPointCloud.getPoints, it will take a little more work. In this approach, the question effectively reduces to "How can I derive a pose / coordinate basis from a point?".
When working from a plane it is relatively easy to derive a pose, as you can use the plane normal as the up (y) vector for your model and pick the x and z vectors to configure where you want the model to "face" about that plane (where each vector is perpendicular to the other vectors).
When trying to derive a basis from a point, you have to pick all three vectors (x, y and z) relative to the origin point you have. You can derive the up vector by transforming the vector (0,1,0) through the camera view matrix (assuming you want the top of the model to face the top of your screen) using ArCamera.getViewMatrix. Then you can pick the x and z vectors as any two mutually perpendicular vectors that orient the model in your desired direction.
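A minimal sketch of that basis construction, in plain Java with float[3] vectors; the choice of (0, 0, -1) as the reference direction is just an assumption that fixes which way the model faces:

static float[] modelMatrixFromPoint(float[] position, float[] up) {
    float[] reference = {0f, 0f, -1f};               // any direction not parallel to up
    float[] right = normalize(cross(reference, up)); // x axis
    float[] back = normalize(cross(right, up));      // z axis
    // Column-major 4x4 model matrix: columns are the x, y, z axes and the translation.
    return new float[] {
            right[0], right[1], right[2], 0f,
            up[0], up[1], up[2], 0f,
            back[0], back[1], back[2], 0f,
            position[0], position[1], position[2], 1f};
}

static float[] cross(float[] a, float[] b) {
    return new float[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}

static float[] normalize(float[] v) {
    float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return new float[] {v[0] / len, v[1] / len, v[2] / len};
}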
Today I saw a picture of an old game. It was a Google Maps-based MMORPG where you could build up your empire. To claim land, you simply built a flagpole. But I'm not the best at describing things.
So let's get back to my question. They used Google Maps circles to mark the area of a building, like a flagpole. When a few of those flagpoles were built very close to each other, their borders merged, which looked like this:
http://imgur.com/a/0hBdK [Wanted to post picture, but stackoverflow image uploader was broken]
So as you can see, those circles were "combined". When the border isn't drawn, it looks like they are one big polygon instead of two circles. But how can I achieve something like that? Here's how I create a circle:
GoogleMap map;
// ... get a map.
// Add a circle in Sydney
Circle circle = map.addCircle(new CircleOptions()
        .center(new LatLng(-33.87365, 151.20689))
        .radius(10000)
        .strokeColor(Color.RED)
        .fillColor(Color.BLUE));
So far I haven't found any way to merge or combine multiple circles... I haven't even found out how to make one circle's border collide with another circle's border. Is there a way to do this?
Thanks for your time and help !^^
The easiest way to do this is to calculate the points where the two (or more) circles intersect each other.
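A minimal sketch of the underlying geometry (planar two-circle intersection); it assumes you have already projected the circle centres into a local metre-based x/y frame, which is left out here:

// Returns the two intersection points, or null if the circles do not intersect.
static double[][] circleIntersections(double x1, double y1, double r1,
                                      double x2, double y2, double r2) {
    double d = Math.hypot(x2 - x1, y2 - y1);
    if (d == 0 || d > r1 + r2 || d < Math.abs(r1 - r2)) {
        return null; // concentric, too far apart, or one circle inside the other
    }
    double a = (d * d + r1 * r1 - r2 * r2) / (2 * d); // distance from centre 1 to the chord
    double h = Math.sqrt(r1 * r1 - a * a);            // half the chord length
    double mx = x1 + a * (x2 - x1) / d;               // foot of the chord on the centre line
    double my = y1 + a * (y2 - y1) / d;
    return new double[][] {
            {mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d},
            {mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d}};
}

With those intersection points you can build a single Polygon outline that follows each circle's arc up to the intersections, instead of drawing two overlapping circles with visible borders.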
I have about 9000 areas (i.e. 9000 lines) that I have sourced from a CSV file.
There are 6 location-related values in each line.
1) I therefore have 6 ArrayLists holding about 9000 values each (built in a background AsyncTask). The size of each of these ArrayLists reports "6227" or something like that, so I need to troubleshoot whether some values are not being added, or whether there is an ArrayList size limitation.
2) Now I am trying to create 9000 markers with the associated values in the title and snippet sections. Please point me to a good tutorial on creating a custom marker with TextViews. I looked at some and couldn't understand anything.
3) My third question is simple: how do I handle this efficiently? I am a newcomer, and I hate to say that most of the tutorials I have seen on clustering or hiding are impossible to understand. Please provide an understandable description of how to handle this problem. I am begging you.
This is how I collect the data from my CSV file. It runs in the background task of an AsyncTask, and in onPostExecute I pass these values to the method that actually plots the markers on the Google Map.
String mLine = reader.readLine();
while (mLine != null) {
    String[] coord = mLine.split(",");
    Names.add(coord[0]);
    city.add(coord[1]);
    country.add(coord[2]);
    Code.add(coord[3]);
    arrLat = Double.parseDouble(coord[4]);
    arrLong = Double.parseDouble(coord[5]);
    /*
    arrLong = Double.parseDouble(coord[1]);
    arrRadius = Double.parseDouble(coord[2]);
    */
    LatLng thisLoc = new LatLng(arrLat, arrLong);
    coordinates.add(thisLoc);
    mLine = reader.readLine();
}
Regarding the ArrayList size limitation: an ArrayList can hold up to Integer.MAX_VALUE elements, so 9000 entries are nowhere near the limit; you may refer to this link.
I would recommend marker clustering for this particular problem. You may refer to this GitHub sample of MarkerClusterer, in which every method has a description and which should be easier to understand. Then use a viewport marker manager to optimize your app's performance: it hides the markers that are not within the bounds of the screen, especially while the user is zooming.
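For reference, a minimal clustering sketch with the android-maps-utils library; AreaItem is a hypothetical ClusterItem built from the parsed CSV lists:

public class AreaItem implements ClusterItem {
    private final LatLng position;
    private final String title;
    private final String snippet;

    public AreaItem(LatLng position, String title, String snippet) {
        this.position = position;
        this.title = title;
        this.snippet = snippet;
    }

    @Override public LatLng getPosition() { return position; }
    @Override public String getTitle() { return title; }
    @Override public String getSnippet() { return snippet; }
}

// In onMapReady(), after the CSV has been parsed:
ClusterManager<AreaItem> clusterManager = new ClusterManager<>(this, map);
map.setOnCameraIdleListener(clusterManager);
map.setOnMarkerClickListener(clusterManager);
for (int i = 0; i < coordinates.size(); i++) {
    clusterManager.addItem(new AreaItem(coordinates.get(i), Names.get(i), city.get(i)));
}
clusterManager.cluster();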
Lastly, for customizing a marker with a TextView, this link might be helpful. What it does is generate a bitmap and attach it to a marker.
This is the sample code for the custom marker as taken from the link:
Bitmap.Config conf = Bitmap.Config.ARGB_8888;
Bitmap bmp = Bitmap.createBitmap(200, 50, conf);
Canvas canvas = new Canvas(bmp);

Paint paint = new Paint(); // paint defines the text color, stroke width and size
paint.setColor(Color.BLACK);
paint.setTextSize(30);

canvas.drawText("TEXT", 0, 50, paint);

mMap.addMarker(new MarkerOptions()
        .position(clickedPosition)
        //.icon(BitmapDescriptorFactory.fromResource(R.drawable.marker2))
        .icon(BitmapDescriptorFactory.fromBitmap(bmp))
        .anchor(0.5f, 1)
);
Good luck!
I'm new to OpenCV. I'm working with it in Java, which is a pain, since most of the examples and resources around the internet are in C++.
Currently my project involves recognizing a chessboard and then being able to draw on specific parts of the board.
I've gotten as far as getting the corners through the Calib3d part of the library, but this is where I get stuck. My question is: how do I convert the corner information I got (which is the corners' placement in the 2D image) into something I can use in 3D space to draw with LibGDX?
Following is my code (in snippets):
public class chessboardDrawer implements ApplicationListener {
    ... // Fields are here

    MatOfPoint2f corners = new MatOfPoint2f();
    MatOfPoint3f objectPoints = new MatOfPoint3f();

    public void create() {
        webcam = new VideoCapture(0);
        ... // Program sleeps to make sure the camera is ready
    }

    public void render() {
        // Fetch the webcam image
        webcam.read(webcamImage);

        // Grayscale the image
        Imgproc.cvtColor(webcamImage, greyImage, Imgproc.COLOR_BGR2GRAY);

        // Check if the image contains a chessboard with 9x6 inner corners
        boolean foundCorners = Calib3d.findChessboardCorners(greyImage,
                new Size(9, 6),
                corners, Calib3d.CALIB_CB_FAST_CHECK | Calib3d.CALIB_CB_ADAPTIVE_THRESH);

        if (foundCorners) {
            for (int i = 0; i < corners.rows(); i++) {
                // This is where I have to convert the corners
                // to something I can use in libGDX to draw boxes
                // and insert them into the objectPoints variable
            }
        }

        // Show the corners on the webcamImage
        Calib3d.drawChessboardCorners(webcamImage, new Size(9, 6), corners, true);

        // Helper library to show the webcamImage
        UtilAR.imDrawBackground(webcamImage);
    }
}
Any help?
You actually need to localize the (physical) camera using those coordinates.
Fortunately, it is really easy in case of a chessboard.
Camera pose estimation
Note:
The current implementation in OpenCV may not satisfy you in terms of accuracy (at least for a monocular camera). A good AR experience demands good accuracy.
(Optional) Use some noise-filtering method/estimation algorithm to stabilize the pose estimate across time/frames (preferably a Kalman filter).
This would reduce jerks and wobbling.
Control the pose (position + orientation) of a PerspectiveCamera using the aforementioned pose estimate.
Draw your 3D content with scale and initial orientation consistent with the objPoints that you provided to the camera calibration method.
You can follow this nice blog post to do it.
All 3D models that you render now would be in the chessboard's frame of reference.
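For the pose estimation step itself, a minimal sketch with OpenCV's Java bindings; cameraMatrix and distCoeffs below are placeholders and should come from a real Calib3d.calibrateCamera run, and squareSize is the physical size of one board square:

// Board model: one 3D point per inner corner, in the same row-by-row order
// that findChessboardCorners reports for a 9x6 pattern, all with z = 0.
float squareSize = 0.025f; // e.g. 2.5 cm squares
List<Point3> boardModel = new ArrayList<>();
for (int row = 0; row < 6; row++) {
    for (int col = 0; col < 9; col++) {
        boardModel.add(new Point3(col * squareSize, row * squareSize, 0));
    }
}
MatOfPoint3f boardPoints = new MatOfPoint3f();
boardPoints.fromList(boardModel);

// Placeholder intrinsics; replace with your real calibration results.
Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0);

Mat rvec = new Mat();
Mat tvec = new Mat();
Calib3d.solvePnP(boardPoints, corners, cameraMatrix, distCoeffs, rvec, tvec);

// rvec/tvec give the board's pose in camera coordinates; convert the rotation
// vector to a matrix and invert the transform to position the LibGDX
// PerspectiveCamera relative to the board.
Mat rotation = new Mat();
Calib3d.Rodrigues(rvec, rotation);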
Hope this helps.
Good luck.
I'm trying to make it so that I can move a camera around a world; however, I'm having a tough time finding resources that explain this clearly. Most resources I have found explain (at least I think they do) how to move the world around the camera, without the camera moving, to create the illusion of movement.
I have implemented this, but rotating the world results in the world spinning around the world's origin rather than around the camera. Now I am of the mindset that I would get far better results if I could move the camera through the world and rotate it independently. So I am asking which way is better for creating camera movement in JOGL: moving the world around the camera, or moving the camera through the world?
Use modern OpenGL with shaders so you can do the latter. Store the transform on the camera object and use it to compute the view matrix, which gets passed to your objects. Try having multiple cameras and multiple viewports.
// C++
// Choose which camera to use in the viewport
for (int i = 0; i < myWin.allObj.size(); ++i)
{
    if (myWin.allObj[i]->name->val_s == "persp1") // set persp1 by default
        selCam = myWin.allObj[i];
}

// Rendering loop
ViewM_t = glm::translate(glm::mat4(), -selCam->t->val_3);
ViewM_rx = glm::rotate(glm::mat4(), selCam->r->val_3.x, myWin.axisX);
ViewM_ry = glm::rotate(glm::mat4(), selCam->r->val_3.y, myWin.axisY);
ViewM = ViewM_t * ViewM_rx * ViewM_ry;

// In each object: compute the MVP matrix and upload it to the GPU
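Since the question is about JOGL (Java), here is a rough Java equivalent using the JOML math library (an assumption; any matrix utility would do), mirroring the T * Rx * Ry ordering of the C++ snippet above:

import org.joml.Matrix4f;

public class CameraMath {
    // Build the view matrix from the camera's position and Euler angles (degrees).
    public static Matrix4f viewMatrix(float camX, float camY, float camZ,
                                      float rotXDeg, float rotYDeg) {
        return new Matrix4f()
                .translate(-camX, -camY, -camZ)
                .rotateX((float) Math.toRadians(rotXDeg))
                .rotateY((float) Math.toRadians(rotYDeg));
    }

    // Per object: MVP = projection * view * model, then upload it as a uniform.
    public static Matrix4f mvp(Matrix4f projection, Matrix4f view, Matrix4f model) {
        return new Matrix4f(projection).mul(view).mul(model);
    }
}

Keeping the camera's position and rotation as plain fields on a camera object and rebuilding the view matrix each frame is what lets you have several cameras and pick one per viewport.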