I'm working on a Java program that processes some data and generates a heat map to display the results. The program takes a target area and divides it into a grid; for the sake of testing, each cell is 1 NM by 1 NM. I generate a KML file in which each cell in the grid is represented by a polygon, and each polygon is coloured based on the value of its cell. However, with the volume of data that may be used, I am worried that Google Earth may not be able to handle the number of polygons being drawn (hundreds of polygons).
I have heard that images are less resource-heavy for Google Earth, so is there a way of generating an image (e.g. .jpg or .png) of the heat map in Java and overlaying it in Google Earth? The center of each cell is known and the four corners are calculated, and each cell's RGB and hex value is known. I'm using GeoTools and JAK as libraries for this project. Any help would be greatly appreciated.
I have some GeoTools based code that generates heat map images at http://code.google.com/p/spatial-cluster-detection/ - there is code in there that will show you how to convert your grid into georeferenced imagery.
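If you do go the image route, the core idea is to paint each cell into a BufferedImage and reference that single image from one GroundOverlay in the KML, with a LatLonBox giving the geographic bounds of the whole grid, instead of emitting hundreds of polygons. Below is a minimal sketch of that approach; the Cell class, the grid dimensions, the file names and the bounding coordinates are all placeholders for whatever your own model provides (the KML is written as a plain string here, though JAK can build the same structure for you):

    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.PrintWriter;
    import javax.imageio.ImageIO;

    public class HeatMapOverlay {

        // Hypothetical cell model: grid position plus the colour you already computed.
        static class Cell {
            int col, row;
            Color color;
            Cell(int col, int row, Color color) { this.col = col; this.row = row; this.color = color; }
        }

        public static void main(String[] args) throws Exception {
            int cols = 50, rows = 50;          // grid size (assumption)
            int cellPx = 8;                    // pixels per cell in the output image
            double north = 51.0, south = 50.0, east = 1.0, west = 0.0; // grid bounds (assumption)

            Cell[] cells = loadCells(cols, rows); // however you build your grid

            // Paint every cell as a filled rectangle; row 0 is drawn at the top (north).
            BufferedImage img = new BufferedImage(cols * cellPx, rows * cellPx, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = img.createGraphics();
            for (Cell c : cells) {
                g.setColor(c.color);
                g.fillRect(c.col * cellPx, c.row * cellPx, cellPx, cellPx);
            }
            g.dispose();
            ImageIO.write(img, "png", new File("heatmap.png"));

            // One GroundOverlay referencing the image instead of hundreds of polygons.
            try (PrintWriter out = new PrintWriter("heatmap.kml", "UTF-8")) {
                out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
                out.println("<kml xmlns=\"http://www.opengis.net/kml/2.2\">");
                out.println("  <GroundOverlay>");
                out.println("    <name>Heat map</name>");
                out.println("    <Icon><href>heatmap.png</href></Icon>");
                out.println("    <LatLonBox>");
                out.printf ("      <north>%f</north><south>%f</south><east>%f</east><west>%f</west>%n",
                            north, south, east, west);
                out.println("    </LatLonBox>");
                out.println("  </GroundOverlay>");
                out.println("</kml>");
            }
        }

        // Stand-in for your real data source.
        static Cell[] loadCells(int cols, int rows) {
            Cell[] cells = new Cell[cols * rows];
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    cells[r * cols + c] = new Cell(c, r, new Color(255, (c * 5) % 256, (r * 5) % 256, 180));
            return cells;
        }
    }

With 1 NM cells, a simple north/south/east/west box is only approximately right; for tighter georeferencing, the GeoTools grid coverage code in the project linked above is the way to go.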
Yes, it's possible, and that's as far as I can go with an answer given the information provided in the question.
However, if you already have your .kml created, it's worth testing it to see whether Google Earth really does struggle or whether you are just speculating.
I am currently using OpenCV in Java, but if need be I can switch to something else.
I have a set of known points in world coordinates as well as known in image coordinates. These points are not obtained with a calibration target. They are probably quite inaccurate. I can assume that they are close to coplanar.
I am trying to obtain a homography and lens correction from this. The camera is cheap(ish) and so there are lens issues.
I have been struggling to get the OpenCV functions to help me here without a calibration target. There are a lot of similar questions asked (the most similar I found was this one: Correct lens distortion using single calibration image in Matlab), but none quite fit the bill.
I was thinking I could iteratively find the homography, correct, find the lens distortion, correct and repeat until I got an OK result. Even so, I can't seem to do these separately in OpenCV.
I am quite new to OpenCV, so please be gentle.
OpenCV has this functionality:
Camera calibration
The function calibrateCamera takes world coordinates and the corresponding image coordinates as input, and returns the camera intrinsics (including the distortion coefficients).
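For reference, here is a minimal sketch of that call using the Java bindings. The point lists, image size and flags are placeholders; with a single, roughly coplanar, not-very-accurate set of points the optimisation is weakly constrained, so treat the output as a starting guess rather than a real calibration:

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.*;

    public class CalibrationSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            // World coordinates of the known points (Z = 0, i.e. assumed coplanar) -- placeholder values.
            MatOfPoint3f objectPts = new MatOfPoint3f(
                    new Point3(0, 0, 0), new Point3(1, 0, 0), new Point3(2, 0, 0),
                    new Point3(0, 1, 0), new Point3(1, 1, 0), new Point3(2, 1, 0),
                    new Point3(0, 2, 0), new Point3(1, 2, 0), new Point3(2, 2, 0));

            // The corresponding pixel coordinates in the image -- placeholder values.
            MatOfPoint2f imagePts = new MatOfPoint2f(
                    new Point(100, 90),  new Point(250, 95),  new Point(400, 100),
                    new Point(95, 230),  new Point(255, 235), new Point(410, 245),
                    new Point(90, 370),  new Point(260, 380), new Point(420, 395));

            List<Mat> objectPoints = new ArrayList<>();
            List<Mat> imagePoints  = new ArrayList<>();
            objectPoints.add(objectPts);
            imagePoints.add(imagePts);

            Mat cameraMatrix = new Mat();      // 3x3 intrinsic matrix (output)
            Mat distCoeffs   = new Mat();      // distortion coefficients (output)
            List<Mat> rvecs  = new ArrayList<>();
            List<Mat> tvecs  = new ArrayList<>();

            // Reduce the number of distortion parameters, since one rough view constrains very little.
            int flags = Calib3d.CALIB_ZERO_TANGENT_DIST | Calib3d.CALIB_FIX_K3;

            double rms = Calib3d.calibrateCamera(
                    objectPoints, imagePoints, new Size(640, 480),
                    cameraMatrix, distCoeffs, rvecs, tvecs, flags);

            System.out.println("RMS reprojection error: " + rms);
            System.out.println("Camera matrix:\n" + cameraMatrix.dump());
            System.out.println("Distortion coefficients:\n" + distCoeffs.dump());
        }
    }

Once you have cameraMatrix and distCoeffs, you can undistort the image or the points with the corresponding OpenCV functions and then estimate the homography from the corrected coordinates.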
At a minimum you need to "see" features that are known to be collinear, regardless of whether they are the images of coplanar points. That is because the error signal that you need to minimize in order to estimate (and remove) the lens's radial distortion is, by definition, the deviation of collinear points from straight lines.
See also this answer.
I am using Eclipse with the ADK Android plugin and I am completely lost.
That depends on what you mean by displaying their location:
1. Relative to each other on a blank screen: read up on calculating the distance between coordinates and draw the locations relative to their distances from each other (see the sketch after this list).
2. On a map, each one separately or all together: use a web/binary API for map display like Google Maps (don't want to spam and post links with my reputation), or write your app to display open map data from sources like OpenStreetMap.
3. On satellite/aerial images: pretty much the same as option 2, you just have fewer options to investigate and try.
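For option 1, here is a minimal sketch of the distance calculation (the haversine formula in plain Java, no Android APIs assumed; on Android you could equally use Location.distanceBetween):

    public class GeoDistance {

        private static final double EARTH_RADIUS_M = 6_371_000; // mean Earth radius in metres

        /** Great-circle distance between two lat/long points using the haversine formula. */
        public static double distanceMetres(double lat1, double lon1, double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
        }

        public static void main(String[] args) {
            // Example: rough distance between two arbitrary points.
            System.out.printf("%.0f m%n", distanceMetres(52.5200, 13.4050, 52.5100, 13.4000));
        }
    }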
If you edit your question and clarify what you mean, I can be more specific with the answer.
I am looking for an algorithm that would let me find an enclosing bounding box for a lat/long without using map data. Essentially I want to be able to define grids for the planar world map given a set size and then plot which grid a lat/long falls in.
Does anyone know of previous work that might have been done on this? Are there standard ways of doing this, as opposed to home-grown solutions where I create a hash map (or the like) of my own bounding boxes for the world and do lookups, etc.?
I don't want to utilize actual cartography for this. I'm just looking for some math that would return a fixed bounding box for all the lat/longs that fall under it.
Thanks for your help!
I guess you mean that you want to do something like cover the surface of the Earth with squares (or rectangles) of a fixed size, perhaps 100 km square, and figure out a way of mapping from any (lat, long) coordinate pair to the square in which it sits? Well, forget about that: it can't be done. There is simply no way to cover the surface of a sphere (ignore the slight non-sphericity of the Earth for this discussion) with squares of the same size.
You might be interested in Universal Transverse Mercator which is close to what you want to do but it will require you to engage with some of the mathematics of map projections. I see no way around this.
I exclude from consideration the possibility that you would be satisfied with 'squares' of equal angular measure, by which I mean (for example) 'squares' of 2 degrees of lat or long on each side. The maths for that would be trivial, and you wouldn't have asked here on SO for guidance.
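For completeness, this is what that trivial equal-angle version looks like; the 2-degree cell size is just the example figure from the previous paragraph, and the Cell class is only illustrative:

    public class LatLonGrid {

        /** An equal-angle grid cell identified by its column/row indices. */
        static class Cell {
            final int lonIndex, latIndex;
            final double west, south, east, north;

            Cell(int lonIndex, int latIndex, double cellSizeDeg) {
                this.lonIndex = lonIndex;
                this.latIndex = latIndex;
                this.west  = -180 + lonIndex * cellSizeDeg;
                this.south =  -90 + latIndex * cellSizeDeg;
                this.east  = west  + cellSizeDeg;
                this.north = south + cellSizeDeg;
            }

            @Override public String toString() {
                return String.format("cell[%d,%d] lon %.1f..%.1f, lat %.1f..%.1f",
                        lonIndex, latIndex, west, east, south, north);
            }
        }

        /** Map a lat/long to the equal-angle cell that contains it
         *  (points exactly on lat 90 or lon 180 spill into an extra edge cell). */
        static Cell cellFor(double lat, double lon, double cellSizeDeg) {
            int lonIndex = (int) Math.floor((lon + 180.0) / cellSizeDeg);
            int latIndex = (int) Math.floor((lat +  90.0) / cellSizeDeg);
            return new Cell(lonIndex, latIndex, cellSizeDeg);
        }

        public static void main(String[] args) {
            System.out.println(cellFor(48.8566, 2.3522, 2.0)); // Paris in a 2-degree grid
        }
    }

The catch, as noted above, is that these cells shrink in east-west extent towards the poles, which is exactly why a projection such as UTM is needed if the cells must have comparable physical size.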
I'm making an app on the NetBeans platform in Java, using Swing, for a dentist. I want to measure the length of a line drawn by the user on an image of teeth, so that the doctor can find the length of the root canal of a tooth. The line may not be straight; it can also be a zigzag. If anyone has an idea about how to do that, please share it with me.
You can use one of the many line detection algorithms to detect the existence of lines and then measure the line in pixels.
You can use an image processing library that already has these algorithms implemented, or you can implement them yourself (better to use a library, though); this question is about image processing libraries and approaches in Java.
That is not very easy, because the images are presumably taken from different angles and distances. You will need some kind of scale in the image whose length you know. Think of a tag with a size of 5 mm x 5 mm which is pasted on the tooth. In your application you can then measure this tag. Let's say its edge size is 200 x 200 pixels. Then you know that 200 pixels correspond to 5 mm, and you have the formula to calculate the real length from the line length in pixels.
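Since the user draws the line inside your own Swing component, you don't necessarily need any line detection: record the points of the (possibly zigzag) stroke as it is drawn, sum the segment lengths in pixels, and convert with the mm-per-pixel factor from such a reference tag. A minimal sketch, using the 5 mm / 200 px scale from above as a placeholder:

    import java.awt.Point;
    import java.util.Arrays;
    import java.util.List;

    public class LineMeasurement {

        /** Total length of a polyline (straight or zigzag) in pixels. */
        static double lengthInPixels(List<Point> stroke) {
            double total = 0;
            for (int i = 1; i < stroke.size(); i++) {
                total += stroke.get(i - 1).distance(stroke.get(i));
            }
            return total;
        }

        public static void main(String[] args) {
            // Points you would collect in a MouseMotionListener while the user draws.
            List<Point> stroke = Arrays.asList(
                    new Point(10, 10), new Point(40, 25), new Point(70, 15), new Point(95, 60));

            double mmPerPixel = 5.0 / 200.0; // from the reference tag: 200 px == 5 mm (assumption)
            double pixels = lengthInPixels(stroke);
            System.out.printf("%.1f px = %.2f mm%n", pixels, pixels * mmPerPixel);
        }
    }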
I have 5 images by default in the program, and I will allow the user to choose an image from the desktop. The program will determine which of the 5 images is the closest one to the user's image.
Can anyone help me and point me towards a starting point for this idea?
You can try using a feature extraction algorithm like SIFT, SURF, etc., then compare the extracted features with your database. You can select the best matching image based on the number of correct matches.
Generally SIFT works fine for 2D objects, like a picture of a label or an advertisement board. Rotation in the 2D plane or scale won't matter if you are using SIFT. SURF is supposed to be an improvement on SIFT, but I do not have much experience with it.
These algorithms are said to be a bit heavy. Anyway, if you are matching just 5 images it won't be much of a problem. (Or you can simply calculate the descriptors (features) of your images beforehand and store them; then at run time all you have to do is get the descriptor of the user image and compare.) Still, if you are only trying to match images of basic shapes like squares and circles, using square detection or circle detection might be more efficient performance-wise.
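As a concrete illustration of the describe-then-match idea, here is a minimal sketch using OpenCV's Java bindings with ORB (a free detector shipped with stock OpenCV, used here in place of SIFT/SURF, which may require extra modules depending on your build). The file names and the ratio-test threshold are placeholders:

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.*;
    import org.opencv.features2d.DescriptorMatcher;
    import org.opencv.features2d.ORB;
    import org.opencv.imgcodecs.Imgcodecs;

    public class ClosestImageFinder {

        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            ORB orb = ORB.create();
            DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

            // Descriptor of the image the user picked (path is a placeholder).
            Mat userDesc = describe(orb, "user.png");

            String best = null;
            int bestScore = -1;
            for (String name : new String[] {"img1.png", "img2.png", "img3.png", "img4.png", "img5.png"}) {
                Mat refDesc = describe(orb, name);   // in practice, precompute and cache these
                int score = countGoodMatches(matcher, userDesc, refDesc);
                System.out.println(name + ": " + score + " good matches");
                if (score > bestScore) { bestScore = score; best = name; }
            }
            System.out.println("Closest image: " + best);
        }

        /** Detect keypoints and compute ORB descriptors for one image. */
        static Mat describe(ORB orb, String path) {
            Mat img = Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE);
            MatOfKeyPoint keypoints = new MatOfKeyPoint();
            Mat descriptors = new Mat();
            orb.detectAndCompute(img, new Mat(), keypoints, descriptors);
            return descriptors;
        }

        /** Count matches that pass Lowe's ratio test (0.75 is a commonly used placeholder threshold). */
        static int countGoodMatches(DescriptorMatcher matcher, Mat queryDesc, Mat trainDesc) {
            List<MatOfDMatch> knn = new ArrayList<>();
            matcher.knnMatch(queryDesc, trainDesc, knn, 2);
            int good = 0;
            for (MatOfDMatch pair : knn) {
                DMatch[] m = pair.toArray();
                if (m.length == 2 && m[0].distance < 0.75f * m[1].distance) good++;
            }
            return good;
        }
    }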