Iterate Through Voxels in Spherical Volume from Center Out - java

I'm not quite sure of the best way to articulate this question, but I am trying to find a relatively simple way, programmatically (in Java ideally, though theory is welcome too), to iterate through voxels one at a time starting from a center point and radiating out spherically. The idea is that I can specify a final radius (r) and starting coordinate <x, y, z>, and at any given point in the process, the code will have iterated through each point within a radius that grows from 0 to r over the course of the function.
To be clear, I know how to search every coordinate in a spherical volume using spherical coordinates, but I don't know how to do it in the right order (starting from the center and moving outward). Also, because it's voxels, I don't want to waste a bunch of time on iterations near the center that just round to already-visited voxels so that the angular resolution can be complete on the outer surface. Ideally, each iteration should cover a new voxel and each voxel should be visited exactly once (although I am open to compromise if that isn't possible).
Thanks for your help, let me know if I need to specify any further.
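One straightforward way to get this ordering is to enumerate every integer offset in the bounding cube of radius r, keep the offsets that fall inside the sphere, and sort them by squared distance from the center; each voxel is then visited exactly once, from the center outward, and the sorted offset list can be reused for any starting coordinate. A minimal sketch, assuming integer voxel coordinates and a visit(x, y, z) placeholder standing in for whatever work you do per voxel:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SphereIterator {

    // Integer voxel offset from the center.
    static final class Offset {
        final int dx, dy, dz;
        Offset(int dx, int dy, int dz) { this.dx = dx; this.dy = dy; this.dz = dz; }
        int distSq() { return dx * dx + dy * dy + dz * dz; }
    }

    // Visits every voxel within radius r of (cx, cy, cz) exactly once,
    // ordered from the center outward.
    static void iterateSphere(int cx, int cy, int cz, int r) {
        List<Offset> offsets = new ArrayList<>();
        for (int dx = -r; dx <= r; dx++)
            for (int dy = -r; dy <= r; dy++)
                for (int dz = -r; dz <= r; dz++)
                    if (dx * dx + dy * dy + dz * dz <= r * r)
                        offsets.add(new Offset(dx, dy, dz));
        // Sort by squared distance so the iteration radiates outward.
        offsets.sort(Comparator.comparingInt(Offset::distSq));
        for (Offset o : offsets) {
            visit(cx + o.dx, cy + o.dy, cz + o.dz);
        }
    }

    // Placeholder: replace with whatever needs to happen per voxel.
    static void visit(int x, int y, int z) {
        System.out.println(x + ", " + y + ", " + z);
    }

    public static void main(String[] args) {
        iterateSphere(0, 0, 0, 3);
    }
}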

Related

Convert Latitude and Longitude values to a custom sized grid

I am making a Java program that classifies a set of lat/lng coordinates to a specific rectangle of a custom size, so, in effect, it maps the surface of the earth into a custom grid and can identify what rectangle/polygon a point lies in.
The way to do this I am looking into is by using a map projection (possibly Mercator).
For example, assuming I want to classify a long/lat into 'squares' of 100m x 100m,
44.727549, 10.419704 and 44.727572, 10.420460 would classify to area X
and
44.732496, 10.528092 and 44.732999, 10.529465 would classify to area Y as they are within 100m of each other.
(this assumes they lie within the same boundary of course)
I'm not too worried about distortion as I will not need to display the map, but I do need to be able to tell what polygon a set of coordinates belongs to.
Is this possible? Any suggestions welcome. Thanks.
Edit
Omitting projection of the poles is also an acceptable loss
Here is my final solution (in PHP); it creates a bin for every 100m square:
function get_static_pointer_table_id($lat, $lng)
{
    $earth_circumference = 40000; // km
    // Latitude bins: 0.0009 degrees is roughly 100m of latitude.
    $lat_bin = round($lat / 0.0009);
    // Circumference of the parallel at this latitude, in km.
    $lng_length = $earth_circumference * cos(deg2rad($lat));
    // 10 bins per km gives ~100m-wide longitude bins at this latitude.
    $number_of_bins_on_lng = $lng_length * 10;
    $lng_bin = round($number_of_bins_on_lng * $lng / 360);
    // The 'bin' unique identifier.
    return $lat_bin . "_" . $lng_bin;
}
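Since the question mentions a Java program, a rough Java equivalent of the PHP function above might look like the sketch below (same binning logic, and the same assumption that 0.0009° is roughly 100m of latitude; the method name is just illustrative):

// Rough Java port of the PHP binning function above.
static String getStaticPointerTableId(double lat, double lng) {
    double earthCircumference = 40000.0;                                    // km
    long latBin = Math.round(lat / 0.0009);                                 // ~100m of latitude per bin
    double lngLength = earthCircumference * Math.cos(Math.toRadians(lat));  // km around this parallel
    double numberOfBinsOnLng = lngLength * 10;                              // 10 bins per km -> ~100m-wide bins
    long lngBin = Math.round(numberOfBinsOnLng * lng / 360.0);
    return latBin + "_" + lngBin;
}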
If I understand correctly, you are looking for
a way to divide the surface of the earth into approximately 100m x 100m squares
a way to find the square in which a point lies
Question 1 is mission impossible with squares but much less so with polygons. A very simple way to create the polygons would be to use the coordinates themselves. If each polygon is 0.0009° in latitude and longitude, you will have an approximately square 100m x 100m grid at the equator, but the slices will become very thin close to the poles.
Question 2 depends on the approximation used to solve the challenge outlined above. If you use the very simple method above, then placing each coordinate into a bin is just a division by 0.0009 (and rounding down to the closest integer).
So, first you will have to decide what you can compromise. Is it important to have equal area in the polygons, equal longitudinal distance, equal latitude distance, etc.? Is it important to have four corners in the polygon? Is it important to have similar or almost similar polygons close to the poles and close to the equator? Once you know the limitations set by your application, choosing the projection becomes easier.
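As a small illustration of the very simple equal-angle method above (no cosine-of-latitude correction, just dividing by 0.0009° and rounding down), assuming integer bin indices:

// Equal-angle binning: ~100m x 100m near the equator, thinner near the poles.
static long[] equalAngleBin(double lat, double lng) {
    long latBin = (long) Math.floor(lat / 0.0009);
    long lngBin = (long) Math.floor(lng / 0.0009);
    return new long[] { latBin, lngBin };
}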
What you are trying to do here is project an ellipsoid onto a flat surface. So as long as your points are close together, and you don't mind getting the answer slightly wrong, you can assume that your projection plane passes through the centre of your collection of points and that each degree of lat and lon corresponds to a constant number of metres. Then the problem is a simple planar calculation.
This is wrong, of course. I would actually recommend that you look into map projections, pick one that makes sense, and go for that. Remember that you can move the centre of the projection to the centre to your set of points which will reduce distortion.
I suspect that PROJ.4 might help you in terms of libraries. There also must be a good Java one but that is not my speciality.
Finally, you could assume that the earth is a sphere and do your calculations on the sphere. Or, if you really want to get it right, you can pick a standard earth ellipsoid and do the calculations on that.
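A minimal sketch of the local flat-plane approximation described above, assuming all points sit near a chosen centre and treating each degree of latitude as roughly 111,320 metres:

// Convert lat/lng to approximate local metres east/north of a centre point.
static double[] toLocalMetres(double lat, double lng, double centreLat, double centreLng) {
    final double METRES_PER_DEG_LAT = 111320.0;               // roughly constant
    double metresPerDegLng = METRES_PER_DEG_LAT * Math.cos(Math.toRadians(centreLat));
    double x = (lng - centreLng) * metresPerDegLng;           // east-west offset
    double y = (lat - centreLat) * METRES_PER_DEG_LAT;        // north-south offset
    return new double[] { x, y };
}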

Determining if an object is moving away from a point versus towards it

I am trying to practice my skills with using latitude and longitude and I'm attempting to determine the following: given a center point X on a map and a point around it called Y, how do I tell whether or not the points around the center are moving away from the center object or towards it using latitude and longitude?
Right now I have the center latitude and longitude and am focusing on one of the points around it. I have used the Haversine method to calculate the distance in miles between two lat/long pairs. Using this, I measured the initial distance from X to Y and assigned it to a variable. Upon Y's first move I recalculated the overall distance from X to Y and compared it with the initial. If the new measurement is greater than the old, the distance from point X is increasing; if not, it's decreasing. Also, I check to make sure that the point Y is ACTUALLY moving some distance with each move, not just going around the radius of point X in some weird fashion.
Does the way I'm doing things sound alright? I keep feeling like I need to fine-tune something but I just can't put my finger on it.
Hopefully everything I'm saying makes sense and is not falling on deaf ears and this doesn't get flagged as a non-constructive question. It definitely is.
Yes, this is the correct way, I have done this some years ago:
In practice you get the coordinates from a GPS device, so you may want to consider additional filtering, e.g. ignoring situations where the device stands still, because standing still may introduce positional jumps.
In your question I saw that you already use a filtering by distance moved: this is suitable!
You can use the haversine formula, like you propose. For high-load situations there are faster distance formulas for your task (small distances) which do not need as many trigonometric calls, but this is a minor topic.
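For reference, a minimal sketch of the approach described above: a haversine distance plus a comparison of successive distances, with a small threshold to filter out jitter (the 3959-mile Earth radius and the method names are just illustrative):

// Haversine distance in miles between two lat/lng points.
static double haversineMiles(double lat1, double lng1, double lat2, double lng2) {
    final double EARTH_RADIUS_MILES = 3959.0;
    double dLat = Math.toRadians(lat2 - lat1);
    double dLng = Math.toRadians(lng2 - lng1);
    double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
             + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
             * Math.sin(dLng / 2) * Math.sin(dLng / 2);
    return 2 * EARTH_RADIUS_MILES * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Compare the previous and current distance from the centre X to decide whether
// Y is moving away from or towards X; ignore moves below a noise threshold.
static String classifyMove(double previousDistance, double currentDistance, double minMoveMiles) {
    if (Math.abs(currentDistance - previousDistance) < minMoveMiles) return "no significant move";
    return currentDistance > previousDistance ? "moving away" : "moving towards";
}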

Calculating a point in 3D space

I am trying to locate a point in 3D space relative to the origin (0,0,0). I have 3 values to calculate this point with: a rotation in degrees about both the x and y axis as well as a "view distance". Using these values, how can I locate a point in 3D space relative to the origin? I have tried using basic trigonometric functions, but the results seem to be random. The image below gives a visual as to what needs to be done.
'vd' being the "view distance"
'c' being a value holder
'(x,y,z)' being the coordinate I am trying to find
What I am trying to do is find the point a player is looking at a certain distance away (find a point in the direct line of sight of the player out a certain distance). Keep in mind, the rotations about the x and y axis are constantly changing, but the view distance remains the same. If anyone has any suggestions, methods of how to do this, or needs clarification, please comment/answer below.
I am doing this in LWJGL, and the code I am using is as follows:
float c = (float)(Math.cos(Math.toRadians(A00.rot.y)) * view_distance);
locate.y = (float)(Math.sin(Math.toRadians(rot.y)) * view_distance);
locate.x = (float)(Math.cos(Math.toRadians(rot.x)) * c);
locate.z = (float)(Math.sin(Math.toRadians(rot.x)) * c);
EDIT:
My issue is that this current setup does NOT work for some reason. The math seems legitimate to me, but I must have something wrong somewhere in the actual setup of the graph.
I suggest looking up quaternions. No need to fully understand how they work. You can find ready made classes for java available on the internet as well. Quaternions allow you to represent a rotation in 3D space.
What I would then do is start with a vector representing the direction pointing forwards from the origin, and apply the player's current rotation to it. Now it is pointing in the same direction as the player. If you then take the player's current position together with that direction vector, you have a ray describing where the player is looking.
I suggest this link for further information on quaternions. They may look complex but, as I said, you don't need to fully understand how and why they work to be able to use them. Just copy the formulae and learn how they are used. Once you figure out how to use them, they make 3d rotations really easy.
http://content.gpwiki.org/index.php/OpenGL:Tutorials:Using_Quaternions_to_represent_rotation
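A minimal, self-contained sketch of that idea: build a unit quaternion from the player's yaw and pitch, rotate a forward vector with it, and scale by the view distance. The rotation order, the (0, 0, -1) forward convention and the method names are assumptions here, not LWJGL API:

final class Quat {
    final double w, x, y, z;
    Quat(double w, double x, double y, double z) { this.w = w; this.x = x; this.y = y; this.z = z; }

    // Quaternion for a rotation of 'angle' radians about a unit axis (ax, ay, az).
    static Quat fromAxisAngle(double ax, double ay, double az, double angle) {
        double s = Math.sin(angle / 2);
        return new Quat(Math.cos(angle / 2), ax * s, ay * s, az * s);
    }

    // Hamilton product of two quaternions.
    Quat mul(Quat o) {
        return new Quat(
            w * o.w - x * o.x - y * o.y - z * o.z,
            w * o.x + x * o.w + y * o.z - z * o.y,
            w * o.y - x * o.z + y * o.w + z * o.x,
            w * o.z + x * o.y - y * o.x + z * o.w);
    }

    Quat conjugate() { return new Quat(w, -x, -y, -z); }

    // Rotate a vector by this (unit) quaternion: v' = q * v * q^-1.
    double[] rotate(double vx, double vy, double vz) {
        Quat r = this.mul(new Quat(0, vx, vy, vz)).mul(this.conjugate());
        return new double[] { r.x, r.y, r.z };
    }

    // Point 'viewDistance' in front of a player at (px, py, pz) with the given
    // yaw (about the y axis) and pitch (about the x axis), both in degrees.
    static double[] lookPoint(double px, double py, double pz,
                              double yawDeg, double pitchDeg, double viewDistance) {
        Quat orientation = fromAxisAngle(0, 1, 0, Math.toRadians(yawDeg))
                .mul(fromAxisAngle(1, 0, 0, Math.toRadians(pitchDeg)));
        double[] dir = orientation.rotate(0, 0, -1);
        return new double[] { px + dir[0] * viewDistance,
                              py + dir[1] * viewDistance,
                              pz + dir[2] * viewDistance };
    }
}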

Calculating geospatial bounding box without map data

I am looking for an algorithm that would let me find an enclosing bounding box for a lat/long without using map data. Essentially I want to be able to define grids for the planar world map given a set size and then plot which grid a lat/long falls in.
Does anyone know of previous work that might have been done in this area? Are there standard ways of doing this, as opposed to home-grown solutions where I create a hash map (or the like) of my own bounding boxes for the world and do lookups, etc.?
I don't want to utilize actual cartography for this. I'm just looking for some math that would return a fixed bounding box for all the lat/longs that fall under it.
Thanks for your help!
I guess you mean that you want to do something like cover the surface of the Earth with squares (or rectangles) of a fixed size, perhaps 100km square, and figure out a way of mapping from any (lat, long) coordinate pair to the square in which it sits? Well, forget about that: it can't be done. There is simply no way to cover the surface of a sphere (ignore the slight non-sphericity of the Earth for this discussion) with squares of the same size.
You might be interested in Universal Transverse Mercator which is close to what you want to do but it will require you to engage with some of the mathematics of map projections. I see no way around this.
I exclude from consideration that you would be satisfied with 'squares' of equal angular measure, I mean (for example) something like 'squares' of 2 degrees of lat or long on each side. The maths for that would be trivial and you wouldn't have asked here on SO for guidance.
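As a small illustration of the UTM route: the longitude zone a point falls into can be computed directly (this ignores the exceptional zones around Norway and Svalbard); the easting/northing within the zone then needs a proper projection library such as the PROJ.4 mentioned above:

// UTM longitude zone (1..60) for a longitude in degrees, ignoring the
// exceptional zones around Norway and Svalbard.
static int utmZone(double longitude) {
    return Math.min(60, (int) Math.floor((longitude + 180.0) / 6.0) + 1);
}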

Getting boundary information from a 3d array

Hey, I'm currently trying to extract information from a 3d array, where each entry represents a coordinate in order to draw something out of it. The problem is that the array is ridiculously large (and there are several of them) meaning I can't actually draw all of it.
What I'm trying to accomplish then, is just to draw a representation of the outside coordinates, a shell of the array if you'd like. This array is not full, can have large empty spaces with only a few pixels set, or have large clusters of pixel data grouped together. I do not know what kind of shape to expect (could be a simple cube, or a complex concave mesh), and am struggling to come up with an algorithm to effectively extract the border. This array effectively stores a set of points in a 3d space.
I thought of creating 6 2d meshes (one for each side of the 3d array), getting the shallowest point they can find for each position, and then drawing them separately. As I said however, this 3d shape could be concave, which creates problems with this approach. Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape.
Then I thought of analysing the array slice by slice and creating 2 meshes from the slice data. I believe this should work for any type of shape, however I'm struggling to find an algorithm which accurately gives me the border info for each slice. Once again, if you just try to create height maps from the slices, you will run into problems if they have any concavities. I also thought of some sort of edge-tracking algorithm, but the array does not provide continuous data, and there is almost certainly not a continuous edge along each slice.
I tried looking into volume rendering, as used in medical imaging and such, as it deals with similar problems to the one I have, but couldn't really find anything that I could use.
If anyone has any experience with this sort of problem, or any valuable input, could you please point me in the right direction.
P.S. I would prefer to get a closed representation of the shell, hence my earlier 2d mesh approach. However, an approach that simply gives me the shell points, without any connection between them, would still be extremely helpful.
Thank you,
Ze
I would start by reviewing your data structure. As you observed, the array does not maintain any obvious spatial relationships between points. An octree is a pretty good representation for data like you described. Depending upon the complexity of your point set, you may be able to find the crust using just the octree - assuming you have some connectivity between nearby points.
Alternatively, you may then turn to more rigorous algorithms like raycasting or marching cubes.
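For reference, a minimal sketch of an octree over a cubic region, showing only point insertion; the split threshold, the minimum cell size and the class layout are arbitrary choices here:

import java.util.ArrayList;
import java.util.List;

final class Octree {
    private static final int MAX_POINTS = 8;   // arbitrary split threshold
    private final double cx, cy, cz, halfSize; // cubic bounds: centre and half-extent
    private final List<double[]> points = new ArrayList<>();
    private Octree[] children;                 // null until this node splits

    Octree(double cx, double cy, double cz, double halfSize) {
        this.cx = cx; this.cy = cy; this.cz = cz; this.halfSize = halfSize;
    }

    void insert(double x, double y, double z) {
        if (children != null) {                // already split: push down to a child
            child(x, y, z).insert(x, y, z);
            return;
        }
        points.add(new double[] { x, y, z });
        if (points.size() > MAX_POINTS && halfSize > 1e-6) {
            split();
        }
    }

    private void split() {
        children = new Octree[8];
        double h = halfSize / 2;
        for (int i = 0; i < 8; i++) {
            double ox = ((i & 1) == 0 ? -h : h);
            double oy = ((i & 2) == 0 ? -h : h);
            double oz = ((i & 4) == 0 ? -h : h);
            children[i] = new Octree(cx + ox, cy + oy, cz + oz, h);
        }
        for (double[] p : points) {            // redistribute stored points
            child(p[0], p[1], p[2]).insert(p[0], p[1], p[2]);
        }
        points.clear();
    }

    private Octree child(double x, double y, double z) {
        int i = (x >= cx ? 1 : 0) | (y >= cy ? 2 : 0) | (z >= cz ? 4 : 0);
        return children[i];
    }
}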
I guess it's a bit late by now to be truly useful to you, but for reference I'd say this is a perfect scenario for volumetric modeling (as you guessed yourself). As long as you know the bounding box of your point cloud, you can map these coordinates to a voxel space and increase the density (value) of each voxel for each data point. Once you have your volume fully defined, you can then use the Marching cubes algorithm to produce a 3D surface mesh for a given threshold value (iso value). That resulting surface doesn't need to be continuous, but will wrap all voxels with values > isovalue inside. The 2D equivalent would be heatmaps... You can refine the surface quality by adjusting the iso threshold (higher means tighter) and the voxel resolution.
Since you're using Java, you might like to take a look at my toxiclibs volumeutils library, which also comes with several examples (for Processing) showing the general approach...
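A minimal sketch of the voxelization step described above: map points from a known bounding box into a density grid, which can then be fed to a marching-cubes implementation (such as the toxiclibs one mentioned) with a chosen iso value. The grid resolution and method names are just illustrative:

import java.util.List;

class Voxelizer {
    // Map a point cloud with known bounds into a res x res x res density grid.
    static float[][][] voxelize(List<double[]> points,
                                double minX, double minY, double minZ,
                                double maxX, double maxY, double maxZ,
                                int res) {
        float[][][] density = new float[res][res][res];
        for (double[] p : points) {
            int ix = clamp((int) ((p[0] - minX) / (maxX - minX) * res), res);
            int iy = clamp((int) ((p[1] - minY) / (maxY - minY) * res), res);
            int iz = clamp((int) ((p[2] - minZ) / (maxZ - minZ) * res), res);
            density[ix][iy][iz] += 1.0f;   // increase the density of the voxel this point falls in
        }
        return density;
    }

    private static int clamp(int i, int res) {
        return Math.max(0, Math.min(res - 1, i));
    }
}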
Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape.
Even an example as simple as this would be impossible to reconstruct manually, let alone algorithmically. The possibility of your data representing a cylinder with a cone shaped hole is as likely as the vertices representing a cone with a disk attached to the top.
I do not know what kind of shape to expect (could be a simple cube...
Again, without further information on how the data was generated, 8 vertices arranged in the form of a cube might as well represent 2 crossed squares. If you knew that the data was generated by, for example, a rotating 3d scanner of some sort then that would at least be a start.
