Detect collision between two 3d planes - java

I'm writing a game in Java with LWJGL. Basically I have two planes that I want to check for intersection, like in the image below.
I have the four points for each plane; can someone help me?

The 2 planes do not intersect if they are parallel (and not the same plane).
Let p1, p2, p3 and p4 be your 4 points defining the plane and n=(a,b,c) the normal vector computed as n=cross(p2-p1, p3-p1). The plane equation is
ax + by + cz + d = 0
where d=-dot(n,p1)
You have 2 planes
ax + by + cz + d = 0
a’x + b’y + c’z + d’ = 0
they are parallel (and not the same plane) iff
a/a’ == b/b’ == c/c’ != d/d’
When you implement this predicate, you have to guard against division by zero.
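A minimal sketch of this test (the vector helpers are my own, not LWJGL classes): instead of comparing the ratios a/a’, b/b’, c/c’, which requires the divide-by-zero checks mentioned above, it tests whether the cross product of the two normals is near zero, which is equivalent.

```java
public class PlaneTest {
    public static double[] cross(double[] u, double[] v) {
        return new double[] { u[1]*v[2] - u[2]*v[1],
                              u[2]*v[0] - u[0]*v[2],
                              u[0]*v[1] - u[1]*v[0] };
    }
    public static double dot(double[] u, double[] v) {
        return u[0]*v[0] + u[1]*v[1] + u[2]*v[2];
    }
    public static double[] sub(double[] u, double[] v) {
        return new double[] { u[0]-v[0], u[1]-v[1], u[2]-v[2] };
    }
    // (a, b, c, d) with n = cross(p2-p1, p3-p1) and d = -dot(n, p1),
    // exactly as in the text above.
    public static double[] planeFromPoints(double[] p1, double[] p2, double[] p3) {
        double[] n = cross(sub(p2, p1), sub(p3, p1));
        return new double[] { n[0], n[1], n[2], -dot(n, p1) };
    }
    // The planes are parallel iff cross(n1, n2) is (near) zero.
    public static boolean parallel(double[] plane1, double[] plane2, double eps) {
        double[] c = cross(plane1, plane2); // only reads the a, b, c components
        return dot(c, c) <= eps * eps;
    }
    public static void main(String[] args) {
        double[] a = planeFromPoints(new double[]{0,0,0}, new double[]{1,0,0}, new double[]{0,1,0}); // z = 0
        double[] b = planeFromPoints(new double[]{0,0,1}, new double[]{1,0,1}, new double[]{0,1,1}); // z = 1
        System.out.println(parallel(a, b, 1e-9)); // true: parallel, so no intersection
    }
}
```

Note this only detects parallelism; to also rule out the "same plane" case, check whether a point of one plane satisfies the other plane's equation.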

I can't show this is enough, but I believe these three tests should be sufficient:
for two planes...
project each plane onto the x axis, check if there is any overlap
project each plane onto the y axis, check if there is any overlap
project each plane onto the z axis, check if there is any overlap
If there is no overlap on at least one of those axes, the plane segments do not intersect; otherwise, the test reports that they intersect.
let me know if you are not sure how to project onto an axis or calculate an overlap. also, let me know if these three tests are insufficient.
Edit 1:
Algorithm: You don't actually have to project, rather you can just find the maximum range. Let's do the x axis as an example. You find the minimum x value on plane 1 and the maximum x value on the plane 1. Next, you find the minimum x value on plane 2 and the maximum x value on plane 2. If their ranges overlap ( for example, [1 , 5] overlaps with [2 , 9] ), then there's overlap with the projections onto the x axis. Note that finding the range of x values might not be easy if edges of your plane segment aren't parallel with the x axis. If you're dealing with more complicated plane segments that don't have edges parallel to the axes, I can't really help then. You might have to use something else like matrices.
The test, by the way, is called a separating-axis test. I think the x, y, and z axis tests should be enough to test for plane segments intersecting.
Source: (Book) Game Physics Engine Development: How To Build A Robust Commercial-Grade Physics Engine For Your Game (Second Edition) by Ian Millington
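The range-overlap step from Edit 1 can be sketched as follows (a minimal sketch; the corner array and helper names are my own):

```java
// Project a quad's four corners onto one coordinate axis by taking the
// min and max of that coordinate (axis index 0, 1 or 2), then check
// whether two such ranges overlap.
public class OverlapTest {
    public static double[] minMaxAlongAxis(double[][] corners, int axis) {
        double min = corners[0][axis], max = corners[0][axis];
        for (double[] c : corners) {
            min = Math.min(min, c[axis]);
            max = Math.max(max, c[axis]);
        }
        return new double[] { min, max };
    }
    // [1, 5] overlaps [2, 9]; [1, 5] does not overlap [6, 9].
    public static boolean overlaps(double[] r1, double[] r2) {
        return r1[0] <= r2[1] && r2[0] <= r1[1];
    }
    public static void main(String[] args) {
        System.out.println(overlaps(new double[]{1, 5}, new double[]{2, 9})); // true
        System.out.println(overlaps(new double[]{1, 5}, new double[]{6, 9})); // false
    }
}
```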
Edit 2:
Actually, you'll need to check more axes.

Related

Wrapping of equispaced point set in a given direction

Let us consider a 2-D (latitude, longitude) point set. The points in the set are approximately equispaced, with a mutual distance of 0.1 degree × 0.1 degree. Each point in the set is the centre of a square grid cell of side length 0.1 degree (i.e., the intersection point of the square's two diagonals). Each square is adjacent to the neighbouring squares.
Our goal is to get the coordinates of the outline polygon formed by the bounding sides of the square grids with given direction (will be illustrated with a figure). This polygon has no hole inside.
Let us consider a sample data set of size 10 (point set).
lat_x <- c(21.00749, 21.02675, 21.00396, 21.04602, 21.02317,
21.06524, 21.00008, 21.04247, 21.08454, 21.0192)
and
lon_y <- c(88.21993, 88.25369, 88.31292, 88.28740, 88.34669,
88.32118, 88.40608, 88.38045, 88.35494, 88.43984)
Here is a rough plot of the above points, followed by some illustration:
The black points are the (lat, lon) points in the above sample.
The blue square boxes are the square grids.
The given direction (θ) of the squares is θ = 50 degrees.
Our goal is to get the ordered (clockwise or counterclockwise) coordinates of the outline polygon (in yellow).
Note: This question is very similar to this one, with a nice answer given by @laune. There the goal is to get the outline polygon without direction (or with 0 degree direction). But in the present setup I need to include the (non-zero) direction while drawing the square grids and the resulting polygon.
I would greatly appreciate any suggestion, Java or R code, or helpful references for solving the above problem.
I would do it like this:
may be some 2D array point grouping to match the grid
that should speed up all the following operations.
compute average grid sizes (img1 from left)
as two vectors
create blue points (img2)
as: gray_point (+/-) 0.5*blue_vector
create red points (img3)
as: blue_point (+/-) 0.5*red_vector
create list of gray lines (img4)
take every 2 original (gray) points that have distance close to the average grid distance and add a line for them
create list of red lines (img4)
take every 2 red (shifted) points that have distance close to the average grid distance and add a line for them if it does not intersect any line from the gray lines
reorder line points to match polygon winding ...
angle
compute angle of red vector (via atan2)
compute angle of blue vector (via atan2)
return the one with smaller absolute value
[edit1] response to comments
grid size
find a few points that are closest to each other, so pick any point and find all the closest points to it. The possible distances should be near:
sqrt(1)*d, sqrt(2)*d, sqrt(4)*d, sqrt(5)*d, ...
where d is the grid size, so compute d for a few picked points. Remember the smallest d found, throw away all that are not similar to the smallest one, and make an average of the rest; call it d
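A minimal Java sketch of this grid-size estimate (the names and the ±10% tolerance are illustrative choices, not from the answer):

```java
import java.util.*;

// For each point, find its nearest neighbour; keep the smallest such
// distance and average all nearest-neighbour distances close to it.
public class GridSize {
    public static double estimate(double[][] pts) {
        double[] nearest = new double[pts.length];
        for (int i = 0; i < pts.length; i++) {
            double best = Double.MAX_VALUE;
            for (int j = 0; j < pts.length; j++) {
                if (i == j) continue;
                double dx = pts[i][0] - pts[j][0], dy = pts[i][1] - pts[j][1];
                best = Math.min(best, Math.hypot(dx, dy));
            }
            nearest[i] = best;
        }
        double d = Arrays.stream(nearest).min().getAsDouble();
        // average the nearest-neighbour distances within ~10% of the smallest
        return Arrays.stream(nearest).filter(v -> v <= 1.1 * d).average().getAsDouble();
    }
}
```

For the sample data in the question, the input would be the (lon, lat) pairs treated as plain 2D coordinates.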
grid vectors
Take any point A and find the closest point B to it with distance near d, for example within ±10%: |(|A-B|-d)| <= 0.1*d. Now the grid vector is (B-A). Find a few of them (picking different A, B) and group them by the signs of their x, y coordinates into 4 groups.
Then join the opposite-direction groups together by negating one group's vectors, so you have 2 lists of vectors (the red and blue directions), and make average vectors from them (the red and blue vectors)
shifting points
You take any point A and add or subtract half of the red or blue vector to it (the vector itself, not its size!!!), for example:
A.x+=0.5*red_vector.x;
A.y+=0.5*red_vector.y;
line lists
Make two nested for loops over every 2-point combination A, B (original points for the gray lines, shifted red points for the red outline lines) and add a condition on the distance
|(|A-B|-d)|<=0.1*d
if it is true, add line (A,B) to the list. Here is a pseudo C++ example:
int i,j,N=?;              // N is the number of input points in pnt[]
double x,y,d=?,de=0.1*d;  // d is the avg grid size, de is the distance tolerance
double pnt[N][2]=?;       // your 2D points
for (i=0;i<N;i++)         // i - all points
 for (j=i+1;j<N;j++)      // j - just the rest, no need to test already tested combinations
  {
  x=pnt[i][0]-pnt[j][0];
  y=pnt[i][1]-pnt[j][1];
  if (fabs(sqrt((x*x)+(y*y))-d)<=de)
    {
    // add line pnt[i],pnt[j] to the list...
    }
  }

Camera Calibration OpenCV

I am fairly new to OpenCV and I have been struggling to calibrate my camera. After a few days of research I have a basic understanding of it, but I still fail to understand some basic points.
1) The initialization of the object point matrix: why do we initialize this matrix at (0,0)?
Mat a = new MatOfPoint3f();
for(int y = 0; y < SIZE_Y; ++y)
{
    for(int x = 0; x < SIZE_X; ++x)
    {
        points = new MatOfPoint3f(new Point3(x * distance_Board, y * distance_Board, 0));
        a.push_back(points);
    }
}
Wouldn't it make more sense to initialize it where the board is in the 3D World for example
Mat a = new MatOfPoint3f();
for(int y = 1; y <= SIZE_Y; ++y)
{
    for(int x = 1; x <= SIZE_X; ++x)
    {
        points = new MatOfPoint3f(new Point3(x * distance_Board + FirstPoint.x, y * distance_Board + FirstPoint.y, 0));
        a.push_back(points);
    }
}
2)
I tried to calibrate my camera using
Calib3d.calibrateCamera(object_points, corners, gray.size(), cameraMatrix, distCoeffs, rvecs, tvecs);
I have tried with more than 15 images but the results are still very poor, because I don't understand the significance of having an rvec and tvec for every image (I understand the logic, since for every image the rotation and translation are different), but how does that help us with other points/other images? I thought the calibration provided a fairly good method to translate 3D points into 2D points in the whole scene.
That's why when I run
Calib3d.projectPoints(objectPoints, rvecs.get(i), tvecs.get(i), cameraMatrix, distCoeffs, imagePoints);
I really don't know which rvecs and tvecs to choose
3)
Is there a method to translate from 2D (imagePoints) into 3D (real world)? I have tried
this but the results are incorrect due to the incorrect calibration parameters.
4)
I have also tried to do the translation from 2D to 3D as follow
x̃ = x * ( 1 + k1 * r^2 + k2 * r^4 ) + ( 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) )
ỹ = y * ( 1 + k1 * r^2 + k2 * r^4 ) + ( p1 * ( r^2 + 2 * y^2 ) + 2 * p2 * x * y )
a) But what is r? Is it r = sqrt( x^2 + y^2 )? And is x = (the x pixel coordinate) - (the camera center in pixels)?
b) Is the camera center in pixels = cx = a parameter of the camera matrix?
c) Is the x pixel coordinate = u = the image point?
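For reference, here is a small sketch of the distortion model in the question (the two radial and two tangential coefficients of OpenCV's standard model, with k3 omitted). The helper names are my own, and x, y are assumed to be normalized coordinates, i.e. x = (u - cx) / fx and y = (v - cy) / fy, which is the usual answer to questions (a)-(c): r is computed from the normalized coordinates, and cx, cy come from the camera matrix.

```java
public class Distort {
    public static double[] distort(double x, double y,
                                   double k1, double k2, double p1, double p2) {
        double r2 = x * x + y * y;                  // r^2, with r = sqrt(x^2 + y^2)
        double radial = 1 + k1 * r2 + k2 * r2 * r2; // radial term
        double xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x);
        double yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y;
        return new double[] { xd, yd };
    }
    public static void main(String[] args) {
        // With all coefficients zero, the point is unchanged.
        double[] p = distort(0.1, 0.2, 0, 0, 0, 0);
        System.out.println(p[0] + " " + p[1]); // 0.1 0.2
    }
}
```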
There is a lot of information online but i have not found a 100% reliable source
I have run out of options, I would really apreciate if someone could give me a good explanation of the camera calibration or point me into the right direction(Papers etc).
Thank you in advance
Have you ever wondered why you have two eyes? In the most primitive sense, it is because only with both eyes can we judge how far or near objects are. Applications that need to recover 3D information often do so using two cameras; this is called stereoscopy (http://en.wikipedia.org/wiki/Stereoscopy). If you are trying to recover 3D information using a single camera, you can only get a rough approximation; in that case you need a transformation called a homography (http://en.wikipedia.org/wiki/Homography) to model the perspective (that is, how far or near objects are).
In most cases, when we want to calibrate a single camera, we try to remove the radial distortion produced by the camera's lens (http://en.wikipedia.org/wiki/Distortion_%28optics%29). OpenCV offers tools for this process; in most cases a chessboard is used to help. You can check http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html, and to be more specific, the function cvFindChessboardCorners. I hope this is useful to you; sorry for the English, I'm not a native speaker.
I don't know if you have already fixed your issue with the OpenCV calibration, but I will give you some hints anyway. First of all, I suggest you read Zhang's paper on calibration (http://research.microsoft.com/en-us/um/people/zhang/Papers/TR98-71.pdf). OpenCV's methods are based on Zhang's work, so understanding it is a real priority.
Calibrating a camera means determining the relation between the camera's 2D coordinate system (in pixel units, with the origin at the top left corner of the camera image) and the 3D coordinate system of the external world (in metres, for example). When you place a known planar calibration object in front of the camera, the system computes the homogeneous transformation between the known 3D object and its 2D image (that is the "rvecs.get(i), tvecs.get(i)" you are talking about).
Image coordinates are always in pixels, and the intrinsic calibration matrix is also expressed in pixels.
You cannot "translate" from 2D image coordinates to 3D world coordinates, but you can compute the proper transformation: it is composed of an intrinsic calibration matrix and a roto-translation matrix. Please also have a look at this article: http://research.microsoft.com/en-us/um/people/zhang/Papers/Camera%20Calibration%20-%20book%20chapter.pdf
Hope this helps!

How do i find if a set of coordinates are inside a 2D triangle?

Does anybody know how to find out whether a set of coordinates lies within a triangle whose coordinates you have? I know how to work out the lengths of the sides, the area, and the perimeter, but I have no idea where to begin working out where other points lie relative to the triangle.
Any advice would be appreciated
You can create a Polygon object.
Polygon triangle = new Polygon();
Add the vertices of your triangle with the addPoint(int x, int y) method.
Then you just need to check whether the coordinates are inside your triangle using the contains(double x, double y) method.
Use the contains method of the Polygon class as documented here.
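Putting those steps together, a minimal example (the triangle coordinates are made up):

```java
import java.awt.Polygon;

// Build the triangle with addPoint and test a coordinate with contains.
public class TriangleContains {
    public static void main(String[] args) {
        Polygon triangle = new Polygon();
        triangle.addPoint(0, 0);
        triangle.addPoint(10, 0);
        triangle.addPoint(0, 10);
        System.out.println(triangle.contains(2.0, 2.0));   // true: inside
        System.out.println(triangle.contains(20.0, 20.0)); // false: outside
    }
}
```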
For a solution without using the Polygon class:
Assume you are given three points A, B, C, the vertices of your triangle, and let P be the point you want to check. First calculate the vectors representing the edges of your triangle; call them AB, BC, CA. Also calculate the three vectors AP, BP, CP.
Now calculate the cross product of each edge vector with the vector from that edge's starting vertex to P, beginning with AB × AP.
The cross product of AB and AP gives you sin(alpha), where alpha is the angle between AB and AP, multiplied by a vector perpendicular to both. Ignore the direction of this vector, because we are only interested in the angle; in the 2D case you can imagine it as a vector standing perpendicular to your screen, and what matters is its sign.
The angle can take values between 0 and 2*Pi. Its sine is 0 exactly at 0 and Pi, positive for every value in between, and negative for every value between Pi and 2*Pi.
So if your point P is on the left-hand side of AB, the sine is positive.
By taking the cross product of each pair from above (AB × AP, BC × BP, CA × CP), you can check that the point P is on the left-hand side of every edge of the triangle, which means it has to be inside the triangle.
Of course this method can even be used to check whether a point P is in any convex polygon. Be aware that it only works if the sides of the polygon are consistently ordered (all given in the same winding direction).
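A sketch of this test in Java (the names are my own; the 2D "cross product" below is the z component of the 3D one). It accepts either winding order by requiring that the three signs agree rather than all be positive:

```java
public class PointInTriangle {
    // z component of (B - A) x (P - A)
    public static double crossZ(double ax, double ay, double bx, double by,
                                double px, double py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }
    public static boolean contains(double[] a, double[] b, double[] c,
                                   double px, double py) {
        double d1 = crossZ(a[0], a[1], b[0], b[1], px, py);
        double d2 = crossZ(b[0], b[1], c[0], c[1], px, py);
        double d3 = crossZ(c[0], c[1], a[0], a[1], px, py);
        boolean hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
        boolean hasPos = d1 > 0 || d2 > 0 || d3 > 0;
        return !(hasNeg && hasPos); // P is on the same side of every edge
    }
    public static void main(String[] args) {
        double[] a = {0, 0}, b = {10, 0}, c = {0, 10};
        System.out.println(contains(a, b, c, 2, 2));   // true
        System.out.println(contains(a, b, c, 20, 20)); // false
    }
}
```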

Positioning Devices (Intersecting Circles)

I have a series of points, which represent mobile devices within a room. Previously I have systematically emitted a ping from each and recorded the time at which it arrives at the others to calculate the distances.
Here's a simple diagram of an example network.
The bottom A node should have been a D instead
After recording the distances I have the distance information in hashes.
A = {B: 2, C: 1, D: 3}
B = {A: 2, C: 2, D: 2}
C = {A: 1, B: 2, D: 2}
D = {A: 3, B: 2, C: 2}
My maths is rusty, but I feel like I should be able to draw circles using these values as the respective radii and then intersect the circles to calculate a relative graph of the nodes.
Every time I try to do it I start out with a series of circles drawn around the root node (in this case A) that looks something like this:
I know that the other nodes must lie on the lines that I have drawn around A, but without being able to position them, how do you draw their distances so that you may intersect the circles and create the graph?
Start with any one point say A. Now take the second point say B, and plot it somewhere on the circle with the center at A and radius as distance between A and B. Now take another point C.
Let the distance (A,C)=x and (B,C)=y. Find the point of intersection of the circles (A,x) and (B,y). Mark it as C.
where circle (P,q) specifies center at P and radius q.
If no such point exists, then the given data is incorrect.
Now take the 4th point and similarly find the point of intersection of circles with centers at first three points and radii as distance between 4th and other three points respectively. Apply this method until all the points are plotted.
Note that there can be infinitely many solutions as RobH pointed out. Since you need only a virtual representation, I guess anyone of the valid solutions suffices.
The above algorithm has an order of O(N^2). It may be inefficient if the number of points is greater than about 10,000.
Also note that, to find the point of intersection of k circles, you first find the intersection points of any two of them and then validate those points against the remaining circles. This is because any two circles with distinct centers intersect in at most two points.
EDIT: At any stage if there are two valid plots for a point, we can choose anyone of them and yet we arrive at one of the valid solutions.
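A sketch of the circle-intersection step in Java, using the (P, q) notation from above (center P, radius q); the geometry is the standard two-circle intersection construction, and the names are my own:

```java
public class CircleIntersect {
    // Returns the intersection points of circles (c1, r1) and (c2, r2)
    // as an array of {x, y} pairs; empty if they do not intersect.
    public static double[][] intersect(double[] c1, double r1,
                                       double[] c2, double r2) {
        double dx = c2[0] - c1[0], dy = c2[1] - c1[1];
        double d = Math.hypot(dx, dy);
        if (d == 0 || d > r1 + r2 || d < Math.abs(r1 - r2)) {
            return new double[0][]; // separate, contained, or concentric
        }
        // distance from c1 to the chord's midpoint along the center line
        double a = (r1 * r1 - r2 * r2 + d * d) / (2 * d);
        double h = Math.sqrt(Math.max(0, r1 * r1 - a * a)); // half chord length
        double mx = c1[0] + a * dx / d, my = c1[1] + a * dy / d;
        if (h == 0) return new double[][] { { mx, my } };   // tangent circles
        return new double[][] {
            { mx + h * dy / d, my - h * dx / d },
            { mx - h * dy / d, my + h * dx / d }
        };
    }
    public static void main(String[] args) {
        // Circles at (0,0) and (2,0), both radius 1, touch at (1,0).
        double[][] pts = intersect(new double[]{0, 0}, 1, new double[]{2, 0}, 1);
        System.out.println(pts[0][0] + " " + pts[0][1]); // 1.0 0.0
    }
}
```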

Guarantee outward direction of polygon normals

I'm trying to write a 2D game in Java that uses the Separating Axis Theorem for collision detection. In order to resolve collisions between two polygons, I need to know the Minimum Translation Vector of the collision, and I need to know which direction it points relative to the polygons (so that I can give one polygon a penalty force along that direction and the other a penalty force in the opposite direction). For reference, I'm trying to implement the algorithm here.
I'd like to guarantee that if I call my collision detection function collide(Polygon polygon1, Polygon polygon2) and it detects a collision, the returned MTV will always point away from polygon1, toward polygon2. In order to do this, I need to guarantee that the separating axes that I generate, which are the normals of the polygon edges, always point away from the polygon that generated them. (That way, I know to negate any axis from polygon2 before using it as the MTV).
Unfortunately, it seems that whether or not the normal I generate for a polygon edge points towards the interior of the polygon or the exterior depends on whether the polygon's points are declared in clockwise or counterclockwise order. I'm using the algorithm described here to generate normals, and assuming that I pick (x, y) => (y, -x) for the "perpendicular" method, the resulting normals will only point away from the polygon if I iterate over the vertices in clockwise order.
Given that I can't force the client to declare the points of the polygon in clockwise order (I'm using java.awt.Polygon, which just exposes two arrays for x and y coordinates), is there a mathematical way to guarantee that the direction of the normal vectors I generate is toward the exterior of the polygon? I'm not very good at vector math, so there may be an obvious solution to this that I'm missing. Most Internet resources about the SAT just assume that you can always iterate over the vertices of a polygon in clockwise order.
You can just calculate which direction each polygon is ordered, using, for example, the answer to this question, and then multiply your normal by -1 if the two polygons have different orders.
You could also check each polygon passed to your algorithm to see if it is ordered incorrectly, again using the algorithm above, and reverse the vertex order if necessary.
Note that when calculating the vertex order, some algorithms will work for all polygons and some just for convex polygons.
I finally figured it out, but the one answer posted was not the complete solution so I'm not going to accept it. I was able to determine the ordering of the polygon using the basic algorithm described in this SO answer (also described less clearly in David Norman's link), which is:
for each edge in polygon:
sum += (x2 - x1) * (y2 + y1)
However, there's an important caveat which none of these answers mention. Normally, you can decide that the polygon's vertices are clockwise if this sum is positive, and counterclockwise if the sum is negative. But the comparison is inverted in Java's 2D graphics system, and in fact in many graphics systems, because the positive y axis points downward. So in a normal, mathematical coordinate system, you can say
if sum > 0 then polygon is clockwise
but in a graphics coordinate system with an inverted y-axis, it's actually
if sum < 0 then polygon is clockwise
My actual code, using Java's Polygon, looked something like this:
//First, find the normals as if the polygon was clockwise
int sum = 0;
for(int i = 0; i < polygon.npoints; i++) {
    int nextI = (i + 1 == polygon.npoints ? 0 : i + 1);
    sum += (polygon.xpoints[nextI] - polygon.xpoints[i]) *
           (polygon.ypoints[nextI] + polygon.ypoints[i]);
}
if(sum > 0) {
    //reverse all the normals (multiply them by -1)
}
