Finding All 3D Objects within a Certain Distance from Point - java

I have a set of objects (let's call it points) that contain the x-, y-, and z-components of their positions within some definite space. I would like to model the interactions between the objects in points; however, I cannot do so unless I can quickly find the objects in the set that are less than a certain distance away from a given object in the set.
This undoubtedly sounds a bit unclear, so let me put it another way: if the first point in points has coordinates <x, y, z>, I would like to figure out which of the objects in points has a distance that is less than [some arbitrary value] from the first point.
I was considering an implementation of an R-Tree to do this in Java, yet I feel as though this is a common-enough problem that a simpler solution exists. If there is not one, I would appreciate a simple explanation of the method by which one queries an R-Tree in order to find objects that are within some distance x from an object in the tree, where x is already known.
Edit: note that the position values of these objects will be changing

The R*-tree is a pretty good data structure for this, in particular when points are changing. It is designed for changes, actually.
The k-d-tree is simpler, but it doesn't support changes very well. It is designed for a one-time bulk construction.
However, as your data is only three-dimensional: if your data is small enough to fit into memory, and the maximum and minimum values of x, y, z are known, an octree or a simple grid may offer the tradeoff of simplicity and performance you need.
In particular, if you fix your query radius beforehand, a grid file is hard to beat. R*-trees become attractive when you need to support multiple radii, window queries, nearest-neighbor queries and so on.
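For a fixed or bounded query radius, a minimal uniform-grid sketch might look like this in Java (Point3D and SpatialGrid are made-up names for illustration, not from any library; the only requirement is that the cell size is at least the query radius):

import java.util.*;

// Hash every point into a cube-shaped cell of side 'cellSize'; a radius query
// then only has to scan the 3x3x3 block of cells around the query point.
class Point3D {
    final double x, y, z;
    Point3D(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

class SpatialGrid {
    private final double cellSize; // choose cellSize >= query radius
    private final Map<List<Long>, List<Point3D>> cells = new HashMap<>();

    SpatialGrid(double cellSize) { this.cellSize = cellSize; }

    private List<Long> cellOf(double x, double y, double z) {
        return List.of((long) Math.floor(x / cellSize),
                       (long) Math.floor(y / cellSize),
                       (long) Math.floor(z / cellSize));
    }

    void insert(Point3D p) {
        cells.computeIfAbsent(cellOf(p.x, p.y, p.z), k -> new ArrayList<>()).add(p);
    }

    // Call before a point moves, then insert it again at its new position.
    void remove(Point3D p) {
        List<Point3D> bucket = cells.get(cellOf(p.x, p.y, p.z));
        if (bucket != null) bucket.remove(p);
    }

    List<Point3D> withinRadius(Point3D center, double radius) {
        List<Point3D> result = new ArrayList<>();
        long cx = (long) Math.floor(center.x / cellSize);
        long cy = (long) Math.floor(center.y / cellSize);
        long cz = (long) Math.floor(center.z / cellSize);
        for (long ix = cx - 1; ix <= cx + 1; ix++)
            for (long iy = cy - 1; iy <= cy + 1; iy++)
                for (long iz = cz - 1; iz <= cz + 1; iz++) {
                    List<Point3D> bucket = cells.get(List.of(ix, iy, iz));
                    if (bucket == null) continue;
                    for (Point3D p : bucket) {
                        double dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
                        if (dx * dx + dy * dy + dz * dz <= radius * radius) result.add(p);
                    }
                }
        return result;
    }
}

If the radius can exceed the cell size, either enlarge the cells or scan more than one ring of neighbouring cells.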

EDIT: Square = Cube (though it may be easier to picture it in 2D space first; you can then extend it to 3D easily)
I gave this some thought and I think I have a solution. However, this is just "my" solution; I have no reference for it.
You create a class "Square", which has a position, a width, and a list of the points inside it.
All squares are stored in an array or hash map keyed by their position, so they can be accessed quickly if you know the position you seek.
All squares are distributed regularly, so, from the point of view of a point instance, you don't have to know all the existing squares to figure out in constant time which ones you belong to. (Example: I know there are squares of width 40 spaced 20 apart. I am at position 10001, so I know I belong to the squares at positions 9980 and 10000.)
Squares overlap each other, so one point can be in more than one square.
When you process a point, you only check the points stored in the squares that point belongs to. Of course, the squares have to be large enough and overlap enough to cover your interaction distance.
When points move, they are responsible for registering and unregistering themselves with the squares.
1D EXAMPLE :
Classes : Line segment and Point
Attributes:
Line segment : int position, List<Points> points
Point : int position, List<LineSegment> lineSegments
I want to interact only with points within a distance of 20.
So I create LineSegment instances of width 40 and place them 20 apart.
So they will be at positions 0, 20, 40, 60 ...
The first one will cover the range 0-40, the second 20-60, etc.
I put them into an array, and with a known position I can access them quickly: arrayOfLineSegments[position / 20]
When I create a point, I add it to the line segments it belongs to.
When I update, each point only interacts with the points in its lineSegments.
When I move a point, it registers with and unregisters from the lineSegments it belongs to.
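A rough Java sketch of this 1D example, under the same assumptions (segments of width 40 placed every 20 units); the class and field names are my own, not prescribed above:

import java.util.*;

class LineSegment {
    final int position;                              // left end of the segment
    final List<Point> points = new ArrayList<>();
    LineSegment(int position) { this.position = position; }
}

class Point {
    int position;
    final List<LineSegment> lineSegments = new ArrayList<>();
}

class World {
    // Segments of width 40 placed every 20 units: 0-40, 20-60, 40-80, ...
    final LineSegment[] arrayOfLineSegments;

    World(int worldLength) {
        arrayOfLineSegments = new LineSegment[worldLength / 20 + 1];
        for (int i = 0; i < arrayOfLineSegments.length; i++) {
            arrayOfLineSegments[i] = new LineSegment(i * 20);
        }
    }

    // A point at 'position' lies inside arrayOfLineSegments[position / 20] and,
    // unless it is in the first slot, also inside the segment one slot to the left.
    void register(Point p) {
        int i = Math.min(p.position / 20, arrayOfLineSegments.length - 1);
        addTo(arrayOfLineSegments[i], p);
        if (i > 0) addTo(arrayOfLineSegments[i - 1], p);
    }

    // Call when a point moves, then register it again at its new position.
    void unregister(Point p) {
        for (LineSegment s : p.lineSegments) s.points.remove(p);
        p.lineSegments.clear();
    }

    private void addTo(LineSegment s, Point p) {
        s.points.add(p);
        p.lineSegments.add(s);
    }
}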

You can use a for loop to check through the array of objects.
Use the following formula: d = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)
where x1, y1, z1 is the first point in points and x2, y2, z2 is the position of the object you are checking. This compares your known point against all other points. If the distance d is less than your desired distance x, then do whatever you want your program to do.
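For example, assuming a Point class with public x, y, z fields and a list called points (neither of which is given in the question), the loop could be:

Point first = points.get(0);
double limit = 10.0;                              // your desired distance x
for (Point p : points) {
    if (p == first) continue;                     // don't compare the point with itself
    double dx = first.x - p.x;
    double dy = first.y - p.y;
    double dz = first.z - p.z;
    double d = Math.sqrt(dx * dx + dy * dy + dz * dz);
    if (d < limit) {
        // p is within the desired distance of the first point
    }
}

If this runs often, compare squared distances (dx*dx + dy*dy + dz*dz < limit*limit) and skip the sqrt.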

Related

Simple and fast collision algorithm in java for non-axis aligned boxes

I'm making a program that needs to detect the collision between 2 non-axis-aligned boxes. My program only needs an indication of whether 2 non-axis-aligned boxes are colliding. I would like to have the simplest and most efficient algorithm possible.
Here I visualized the problem.
So as you can see, squares 1, 2 and 3 would return true because they collide with the green squares.
4 would return false because it isn't colliding.
I do have all the boxes of both colors in separate array lists.
Does anybody know a library or algorithm for this problem? Thanks in advance.
Check out the Area class in the java.awt.geom package.
http://docs.oracle.com/javase/6/docs/api/java/awt/geom/Area.html
I don't know how "easy" your game really is, how many shapes you'd have to check for (I'm thinking efficiency here), but if you have your different color shapes in different lists, a kind of brute force iteration may work for you. Not a clue if it would be efficient enough for you. I use box2D to tell me the collisions, but sounds like that may be overkill for you.
The brute force method I'm thinking of would be to utilize libgdx's Intersector class (check out the API, it has lots of methods). Iterate through your rectangles comparing to the others. Something like IntersectRectangles() gives you a boolean if two rectangles overlap (ie: collide).
This may be too inefficient/hacky, and a physics library may be too much. So one of the other answers provided may be the sweet spot.
A commonly-used approach involves quadtrees. There's a good write-up and tutorial here, which explains how to use quadtrees to perform collision detection in 2D space.
The general idea is that your game area will keep being partitioned by four as you add objects. Each partition is called a node, and each node maintains references to the objects that exist in the corresponding partition. Objects are placed into nodes based on where they are in the 2D space. If an object does not fit cleanly into a single partition, it stays in the parent node. Using this method, you don't have to perform an expensive check against every other object in your 2D space, because you can be sure that objects in different nodes (at the same level; i.e., sibling nodes) will not collide. So you only have to perform your collision detection on a small subset of objects.
Note that this just tells you which objects are occupying a certain area; it's a more efficient way to home in on objects that are likely to be colliding. After that you still have to check whether the objects are actually colliding. There is another write-up here that goes over various techniques to accomplish this.
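A hypothetical usage sketch of that broad phase (the Quadtree class with clear/insert/retrieve methods comes from the linked tutorial, not the JDK, and the shape type here is just a placeholder):

Quadtree tree = new Quadtree(0, new Rectangle(0, 0, worldWidth, worldHeight));
List<Rectangle> candidates = new ArrayList<>();

tree.clear();
for (Rectangle green : greenBoxes) {
    tree.insert(green);                           // index one colour
}

for (Rectangle red : redBoxes) {
    candidates.clear();
    tree.retrieve(candidates, red);               // only objects sharing red's region
    for (Rectangle green : candidates) {
        if (collides(red, green)) {               // narrow phase, e.g. the SAT answer below
            // handle the collision
        }
    }
}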
There are two algorithms/data-structures you need to consider for this problem:
A spatial data-structure to store your rotated quads, to efficiently determine which pairs of quads need to be tested against each other. Other answers have already addressed this. If the number of quads is small enough then you can just test all the red quads against all the green quads, which is O(m * n).
An algorithm to perform the actual test of one rotated quad against another. One of the simplest is the Separating Axis Theorem.
The basic idea behind the SAT is that if you can find at least one line where all the points of one convex object are on one side of it, and all the points of the other are on the other side, then the two are not colliding. The potential lines that you need to test are just the edges of both of the objects.
To implement it you need a point-line test that tells you which side of an edge a point is on. This is done by calculating the normal to the edge, then calculating the dot product of the edge normal and a vector from a point on the edge to the point you are testing. The sign of the dot product tells you which side the point is on (positive means outside the edge, for an outward-pointing normal). Whether you count zero (exactly on the line) as outside or inside depends on whether you want objects that are just touching but not penetrating to count as a collision; if you do, then the dot product must be strictly greater than zero to count as outside.
For example if the points on objects are in clockwise order, and edgeA and edgeB are the two points of an edge on one object, and pointC is a point on the other object, the test is done like this (not using function calls, to show the math):
boolean isOutsideEdge(PointF edgeA, PointF edgeB, PointF pointC)
{
    // Outward normal of the edge A->B (for corners stored in clockwise order)
    float normalX = edgeA.y - edgeB.y;
    float normalY = edgeB.x - edgeA.x;
    // Vector from the edge to the point being tested
    float vectorX = pointC.x - edgeA.x;
    float vectorY = pointC.y - edgeA.y;
    // Positive dot product => pointC lies on the outward side of the edge
    return (normalX * vectorX) + (normalY * vectorY) > 0.0f;
}
Then the algorithm is:
For each edge on quad A, if all the corner points on quad B are on the outward facing side of the edge, then A and B are not colliding, stop processing and return false.
For each edge on quad B, if all the corner points on quad A are on the outward facing side of the edge, then A and B are not colliding, stop processing and return false.
If all those tests have been performed and none have returned false, then A and B are colliding, so return true.
The SAT can be generalized to arbitrary convex polygons.
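A possible driver for those steps, assuming each quad's corners are stored clockwise in a PointF[] of length 4 and reusing the isOutsideEdge() method above (the method names here are mine):

static boolean quadsCollide(PointF[] quadA, PointF[] quadB) {
    return !hasSeparatingEdge(quadA, quadB) && !hasSeparatingEdge(quadB, quadA);
}

// True if some edge of 'quad' has every corner of 'other' on its outward side.
static boolean hasSeparatingEdge(PointF[] quad, PointF[] other) {
    for (int i = 0; i < quad.length; i++) {
        PointF a = quad[i];
        PointF b = quad[(i + 1) % quad.length];
        boolean allOutside = true;
        for (PointF c : other) {
            if (!isOutsideEdge(a, b, c)) { allOutside = false; break; }
        }
        if (allOutside) return true;              // separating edge found: no collision
    }
    return false;
}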
So I decided to go with Box2D in the end. This was the best solution because, thanks to the different mask qualifiers (collision filters), the objects didn't physically collide, but it could easily be checked whether they should be colliding.
I had to make my own ContactListener that overrides the default ContactListener. There I could do whatever I wanted whenever any 2 objects collided.
Thanks everyone for the help.

Convert Latitude and Longitude values to a custom sized grid

I am making a Java program that classifies a set of lat/lng coordinates to a specific rectangle of a custom size; in effect, it maps the surface of the earth onto a custom grid and can identify which rectangle/polygon a point lies in.
The way to do this I am looking into is by using a map projection (possibly Mercator).
For example, assuming I want to classify a long/lat into 'squares' of 100m x 100m,
44.727549, 10.419704 and 44.727572, 10.420460 would classify to area X
and
44.732496, 10.528092 and 44.732999, 10.529465 would classify to area Y as they are within 100m apart.
(this assumes they lie within the same boundary of course)
I'm not too worried about distortion as I will not need to display the map, but I do need to be able to tell which polygon a set of coordinates belongs to.
Is this possible? Any suggestions welcome. Thanks.
Edit
Omitting projection of the poles is also an acceptable loss
Here is my final solution (in PHP); it creates a bin for every 100m x 100m square:
function get_static_pointer_table_id($lat, $lng)
{
    $earth_circumference = 40000; // km
    $lat_bin = round($lat / 0.0009); // 0.0009 degrees of latitude is roughly 100 m
    $lng_length = $earth_circumference * cos(deg2rad($lat)); // length of this parallel, in km
    $number_of_bins_on_lng = $lng_length * 10; // 100 m bins along that parallel
    $lng_bin = round($number_of_bins_on_lng * $lng / 360);
    // the 'bin' unique identifier
    return $lat_bin . "_" . $lng_bin;
}
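Since the question is about Java, here is a rough Java port of the PHP above (same constants, same bin identifiers):

static String getStaticPointerTableId(double lat, double lng) {
    double earthCircumference = 40000.0;                                    // km
    long latBin = Math.round(lat / 0.0009);                                 // 0.0009 deg of latitude ~ 100 m
    double lngLength = earthCircumference * Math.cos(Math.toRadians(lat));  // length of this parallel, km
    double numberOfBinsOnLng = lngLength * 10;                              // 100 m bins along that parallel
    long lngBin = Math.round(numberOfBinsOnLng * lng / 360);
    return latBin + "_" + lngBin;                                           // the bin's unique identifier
}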
If I understand correctly, you are looking for
a way to divide the surface of the earth into approximately 100m x 100m squares
a way to find the square in which a point lies
Question 1 is mission impossible with squares but much less so with polygons. A very simple way to create the polygons would be to use the coordinates themselves. If each polygon is 0.0009° in latitude and longitude, you will have an approximately square 100m x 100m grid at the equator, but the slices will become very thin close to the poles.
Question 2 depends on the approximation used to solve the challenge outlined above. If you use the very simple method above, then placing each coordinate into a bin is just a division by 0.0009 (and rounding down to the nearest integer).
So, first you will have to decide what you can compromise. Is it important to have equal area in the polygons, equal longitudinal distance, equal latitude distance, etc.? Is it important to have four corners in the polygon? Is it important to have similar or almost similar polygons close to the poles and close to the equator? Once you know the limitations set by your application, choosing the projection becomes easier.
What you are trying to do here is a projection of an ellipsoid onto a flat surface. As long as your points are close together and you don't mind getting the answer slightly wrong, you can assume that your projection plane intersects the ellipsoid at the centre of your collection of points and that each degree of lat and lon is a constant number of metres. Then the problem is a simple planar calculation.
This is wrong, of course. I would actually recommend that you look into map projections, pick one that makes sense, and go with that. Remember that you can move the centre of the projection to the centre of your set of points, which will reduce distortion.
I suspect that PROJ.4 might help you in terms of libraries. There also must be a good Java one but that is not my speciality.
Finally, you could assume that the earth is a sphere and do your calculations on the sphere. Or, if you really want to get it right, you can pick a standard earth ellipsoid and do the calculations on that.
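For what it's worth, the "constant metres per degree" shortcut from the first paragraph could look like this in Java (the 111,320 m per degree figure is a rough approximation I'm adding, not something from the answer):

// Local flat-earth approximation; only reasonable for points near refLat/refLng.
static double[] toLocalMetres(double lat, double lng, double refLat, double refLng) {
    double metresPerDegLat = 111_320.0;                                     // roughly constant
    double metresPerDegLng = 111_320.0 * Math.cos(Math.toRadians(refLat));  // shrinks towards the poles
    double x = (lng - refLng) * metresPerDegLng;                            // east
    double y = (lat - refLat) * metresPerDegLat;                            // north
    return new double[] { x, y };
}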

Selecting points randomly in regions corresponding with neighbors, avoiding infinite recursion

Pardon the wall of text. I'll add images later. I need to generate a somewhat realistic map of cubic-meter voxels, with water, sand, grasses, trees, minerals, deserts, beaches, islands, etc., without any sort of Voronoi cop-out (i.e. be smart about relating these factors to each other). Yes, this is a game.
I figured I'd generate critical points randomly and interpolate them for elevation and humidity readings, but I'm at a loss with random generation. Basically I need a somewhat even distribution of points without having to make the full list at once. I need to generate roughly 20x20x20 at a time, and probably work with approximately 1000x1000x1000 cells of critical points, but I'd expect strange things to happen at the edges of the large cells. Does anyone know of any way to select points in this way? The real trouble is that points should prefer to be in proximity to others in "mountain-range" style chains.
The problem here is that this is happening across these 1-km cells.
I can simply pick points this way within a cell, but since a cell and its neighbor need to be dependent on each other, my trivial algorithm would either need to recurse to infinity for one of these cells, or produce a grid-like pattern of broken chains. The chains should not break more frequently on cell boundaries. If they do, a somewhat problematic wafer-like pattern shows up in the generation and makes for a poor result that is unusable for design/gameplay reasons.
n.b. For the purposes of these, the system-level random generator can be seeded and is practically uniform. As far as within a cell, I can select chains just fine.
I also considered having a cell spill over into any ungenerated cells so its chains start connected to existing ones, but that would break the determinism of generation simply based on location and seed, adding order of generation as a factor.
Again, for the purposes of realism and design I'm trying to stay away from using Perlin. Or should I post on gamedev.SE?
The key concept is to create the interfaces between a cell and all six of its neighbors before filling in the cell. Picture creating your entire world as a grid of hollow boxes, but before you do that, create it as a wire-frame outline, but before you do that, create the grid of wire-frame intersections.
Here's a simplistic approach -- you'll have to improve this. First, consider this method of generating the entire world at once:
(1) select all the vertex voxels -- perhaps the upper north-west voxel in each cell -- and set their world attributes to reasonable values based entirely on location and seed.
(2) select the lines of voxels connecting the vertices, and fill in all their world attributes, based on location and seed, but constrained to match up with the existing vertex voxel values at each end.
(3) select the planes of voxels describing the faces bounded by the existing lines, and fill in all their world attributes, based on location and seed, but constrained to match up with the existing line voxel values along the edges.
(4) fill in the cell, based on location and seed, but constrained to match up with the existing bounding six faces.
Now, consider that this method doesn't need to be done all at once. All that is necessary is that you create all six faces of a cell before filling it in, and that you create all four bounding lines of a face before you fill that in, and that you create the two end points of a line before filling that in. After the first cell, some of these will already exist.
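One way to get the "based entirely on location and seed" behaviour, so that vertices, lines, faces and cells can be generated in any order and still agree: derive each voxel's random stream from a hash of the world seed and the voxel coordinates. A small sketch (the hashing scheme is my own choice, not part of this answer):

import java.util.Random;

final class DeterministicNoise {
    // Same seed + same voxel coordinates always yield the same Random stream.
    static Random rngFor(long worldSeed, long x, long y, long z) {
        long h = worldSeed;
        h = h * 0x100000001b3L + x;              // simple FNV-style mixing
        h = h * 0x100000001b3L + y;
        h = h * 0x100000001b3L + z;
        h ^= (h >>> 31);
        return new Random(h);
    }

    // e.g. pick a vertex voxel's elevation without looking at neighbouring cells
    static double vertexElevation(long worldSeed, long vx, long vy, long vz) {
        return 64 + 32 * rngFor(worldSeed, vx, vy, vz).nextDouble();
    }
}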
The reason I said that you'll have to improve this idea is that it produces noticeable gradient boundaries at the cell boundaries. I'm afraid that each interface voxel will not only need to contain world attributes, but the rate of change of each attribute across the interface at that point. This implies that each line voxel will have to contain two rates of change for each attribute, and each vertex, three.
I'm not going to describe how you would constrain the gradient of a world attribute as it approaches a voxel with a predefined gradient because I'm sure you can handle it, my answer is already too long, and I don't know how.

Finding the intersection of 2 arbitrary cubes in 3d

So, I'd like to figure out a function that allows you to determine if two cubes of arbitrary rotation and size intersect.
If the cubes are not arbitrary in their rotation (but locked to a particular axis) the intersection is simple; you check whether they intersect in all three dimensions by checking their bounds to see if they cross or are within one another in all three dimensions. If they cross or are within one another in only two, they do not intersect. This method can be used to determine whether the arbitrary cubes are even candidates for intersection, using their highest/lowest x, y, and z to create outer bounds.
That's the first step. In theory, from that information we can tell which 'side' they are on from each other, which means we can eliminate some of the quads (sides) from our intersection. However, I can't assume that we have that information, since the rotation of the cubes may make it difficult to determine simply.
My thought is to take each pair of quads, find the intersection of their planes, then determine if that line intersects with at least one edge of each of the pairs of sides. If any pair of sides has a line of intersection that intersects with any of their edges, the quads intersect. If none intersect, the two cubes do not intersect.
We can then determine the depth of the intersection on the second cube by where the plane-intersection line intersects with its edge(s).
This is simply speculative, however. Is there a better, more efficient way to determine the intersection of these two cubes? I can think of a number of different ways to do this, and I can also tell that they could be very different in terms of amount of computation required.
I'm working in Java at the moment, but C/C++ solutions are cool too (I can port them); even pseudocode, since it is perhaps a big question.
To find the intersection (contact) points of two arbitrary cubes in three dimensions, you have to do it in two phases:
Detect collisions. This is usually two phases itself, but for simplicity, let's just call it "collision detection".
The algorithm will be either SAT (Separating axis theorem), or some variant of polytope expansion/reduction. Again, for simplicity, let's assume you will be using SAT.
I won't explain in detail, as others have already done so many times, and better than I could. The "take-home" from this, is that collision detection is not designed to tell you where a collision has taken place; only that it has taken place.
Upon detection of an intersection, you need to compute the contact points. This is done via a polygon clipping algorithm. In this case, let's use https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm
There are easier, and better ways to do this, but SAT is easy to grasp in 3d, and SH clipping is also easy to get your head around, so is a good starting point for you.
You should take a look at the field of computer graphics; it has many techniques for this, e.g. the Weiler–Atherton clipping algorithm. There are also many data structures that could ease the process for you, for example AABBs (axis-aligned bounding boxes).
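As a quick broad-phase example of the AABB idea: take each rotated cube's eight transformed corners, compute the axis-aligned bounds, and only run the expensive polygon-level test when those boxes overlap. A sketch, with corners as plain double[3] arrays:

// Returns false if the axis-aligned bounding boxes are separated on any axis,
// in which case the cubes certainly do not intersect.
static boolean aabbsOverlap(double[][] cornersA, double[][] cornersB) {
    for (int axis = 0; axis < 3; axis++) {
        double minA = Double.POSITIVE_INFINITY, maxA = Double.NEGATIVE_INFINITY;
        double minB = Double.POSITIVE_INFINITY, maxB = Double.NEGATIVE_INFINITY;
        for (double[] c : cornersA) { minA = Math.min(minA, c[axis]); maxA = Math.max(maxA, c[axis]); }
        for (double[] c : cornersB) { minB = Math.min(minB, c[axis]); maxB = Math.max(maxB, c[axis]); }
        if (maxA < minB || maxB < minA) return false;   // separated on this axis
    }
    return true;   // the boxes overlap, so the cubes *might* intersect: run the full test
}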
Try using the separating axis theorem. It should apply in 3D as it does in 2D.
If you create polygons from the sides of the cubes then another approach is to use Constructive Solid Geometry (CSG) operations on them. By building a Binary Space Partitioning (BSP) tree of each cube you can perform an intersection on them. The result of the intersection is a set of polygons representing the intersection. In your case, if the number of polygons is zero then the cubes don't intersect.
I would add that this approach is probably not a good real time solution, but you didn't indicate if this needed to happen in frame refresh time or not.
Since porting is an option you can look at the Javascript library that does CSG located at
http://evanw.github.io/csg.js/docs/
I've ported this library to C# at
https://github.com/johnmott59/CGSinCSharp

Getting boundary information from a 3d array

Hey, I'm currently trying to extract information from a 3D array, where each entry represents a coordinate, in order to draw something from it. The problem is that the array is ridiculously large (and there are several of them), meaning I can't actually draw all of it.
What I'm trying to accomplish then, is just to draw a representation of the outside coordinates, a shell of the array if you'd like. This array is not full, can have large empty spaces with only a few pixels set, or have large clusters of pixel data grouped together. I do not know what kind of shape to expect (could be a simple cube, or a complex concave mesh), and am struggling to come up with an algorithm to effectively extract the border. This array effectively stores a set of points in a 3d space.
I thought of creating six 2D meshes (one for each side of the 3D array), getting the shallowest point they can find for each position, and then drawing them separately. As I said, however, this 3D shape could be concave, which creates problems with this approach. Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape.
Then I thought of analysing the array slice by slice, and creating 2 meshes from the slice data. I believe this should work for any type of shape; however, I'm struggling to find an algorithm which accurately gives me the border info for each slice. Once again, if you just try to create height maps from the slices, you will run into problems if they have any concavities. I also thought of some sort of edge-tracking algorithm, but the array does not provide continuous data, and there is almost certainly not a continuous edge along each slice.
I tried looking into volume rendering, as used in medical imaging and such, as it deals with similar problems to the one I have, but couldn't really find anything that I could use.
If anyone has any experience with this sort of problem, or any valuable input, could you please point me in the right direction.
P.S. I would prefer to get a closed representation of the shell, hence my earlier 2D mesh approach. However, an approach that simply gives me the shell points, without any connection between them, would still be extremely helpful.
Thank you,
Ze
I would start by reviewing your data structure. As you observed, the array does not maintain any obvious spatial relationships between points. An octree is a pretty good representation for data like you described. Depending upon the complexity of your point set, you may be able to find the crust using just the octree - assuming you have some connectivity between near points.
Alternatively, you may then turn to more rigorous algorithms like raycasting or marching cubes.
I guess it's a bit late by now to be truly useful to you, but for reference I'd say this is a perfect scenario for volumetric modeling (as you guessed yourself). As long as you know the bounding box of your point cloud, you can map these coordinates to a voxel space and increase the density (value) of each voxel for each data point. Once you have your volume fully defined, you can then use the marching cubes algorithm to produce a 3D surface mesh for a given threshold value (iso value). The resulting surface doesn't need to be continuous, but it will wrap all voxels with values > isovalue inside. The 2D equivalent is a heatmap... You can refine the surface quality by adjusting the iso threshold (higher means tighter) and the voxel resolution.
Since you're using Java, you might like to take a look at my toxiclibs volumeutils library, which also comes with several examples (for Processing) showing the general approach...
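The density-volume step described above might look roughly like this (a sketch only; the names and the simple nearest-voxel binning are my own, and toxiclibs' volumeutils provides a ready-made version):

import java.util.List;

// Accumulate point densities into a res x res x res volume spanning the
// point cloud's bounding box; feed the result to marching cubes with a
// chosen iso threshold to get the shell mesh.
static float[][][] buildVolume(List<float[]> points,
                               float minX, float minY, float minZ,
                               float maxX, float maxY, float maxZ,
                               int res) {
    float[][][] volume = new float[res][res][res];
    for (float[] p : points) {
        int ix = (int) ((p[0] - minX) / (maxX - minX) * (res - 1));
        int iy = (int) ((p[1] - minY) / (maxY - minY) * (res - 1));
        int iz = (int) ((p[2] - minZ) / (maxZ - minZ) * (res - 1));
        if (ix >= 0 && ix < res && iy >= 0 && iy < res && iz >= 0 && iz < res) {
            volume[ix][iy][iz] += 1.0f;          // density grows with each point in the voxel
        }
    }
    return volume;
}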
Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape.
Even an example as simple as this would be impossible to reconstruct manually, let alone algorithmically. The possibility of your data representing a cylinder with a cone shaped hole is as likely as the vertices representing a cone with a disk attached to the top.
I do not know what kind of shape to expect (could be a simple cube...
Again, without further information on how the data was generated, 8 vertices arranged in the form of a cube might as well represent 2 crossed squares. If you knew that the data was generated by, for example, a rotating 3d scanner of some sort then that would at least be a start.
