I am fairly new to OpenCV and have been struggling to calibrate my camera. After a few days of research I have a basic understanding of it, but I still fail to understand some basic points.
1) The initialization of the object-point matrix: why do we initialize this matrix at (0, 0)?
Mat a = new MatOfPoint3f();
for (int y = 0; y < SIZE_Y; ++y)
{
    for (int x = 0; x < SIZE_X; ++x)
    {
        // One board corner at (x * distance_Board, y * distance_Board, 0)
        points = new MatOfPoint3f(new Point3(x * distance_Board, y * distance_Board, 0));
        a.push_back(points);
    }
}
Wouldn't it make more sense to initialize it where the board is in the 3D world, for example:
Mat a = new MatOfPoint3f();
for (int y = 1; y <= SIZE_Y; ++y)
{
    for (int x = 1; x <= SIZE_X; ++x)
    {
        // Offset every corner by the board's first point in world coordinates
        points = new MatOfPoint3f(new Point3(x * distance_Board + FirstPoint.x, y * distance_Board + FirstPoint.y, 0));
        a.push_back(points);
    }
}
2)
I tried to calibrate my camera using
Calib3d.calibrateCamera(object_points, corners, gray.size(), cameraMatrix, distCoeffs, rvecs, tvecs);
I have tried with more than 15 images, but the results are still very poor, because I don't understand the significance of having an rvec and tvec for every image. (I understand the logic: for every image the rotation and translation are different.) But how does that help us with other points/other images? I thought that calibration provided us with a fairly good method to translate 3D points into 2D points in the whole scene.
That's why when I run
Calib3d.projectPoints(objectPoints, rvecs.get(i), tvecs.get(i), cameraMatrix, distCoeffs, imagePoints);
I really don't know which rvecs and tvecs to choose.
3)
Is there a method to translate from 2D (imagePoints) into 3D (real world)? I have tried this, but the results are incorrect due to the incorrect calibration parameters.
4)
I have also tried to do the translation from 2D to 3D as follows:
x̃ = x * (1 + k1 * r^2 + k2 * r^4) + [2 * p1 * x * y + p2 * (r^2 + 2 * x^2)]
ỹ = y * (1 + k1 * r^2 + k2 * r^4) + [p1 * (r^2 + 2 * y^2) + 2 * p2 * x * y],
a) But what is r? Is r = sqrt(x^2 + y^2)? And is x = (the x pixel coordinate) - (the camera center in pixels)?
b) Is the camera center in pixels = cx, the parameter of the camera matrix?
c) Is the x pixel coordinate = u = the image point?
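In code form, my current understanding of those formulas is the following sketch (it assumes the answers to a)-c) above: x and y as normalized coordinates, cx/cy and fx/fy taken from the camera matrix; distortPoint is just an illustrative name):
// Hypothetical helper: maps an undistorted pixel (u, v) to its distorted
// position, given intrinsics (fx, fy, cx, cy) and distortion (k1, k2, p1, p2).
static double[] distortPoint(double u, double v,
                             double fx, double fy, double cx, double cy,
                             double k1, double k2, double p1, double p2) {
    // Normalized camera coordinates: subtract the center, divide by focal length.
    double x = (u - cx) / fx;
    double y = (v - cy) / fy;
    double r2 = x * x + y * y;                  // r^2 = x^2 + y^2

    double radial = 1 + k1 * r2 + k2 * r2 * r2; // radial term
    double xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x);
    double yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y;

    // Back to pixel coordinates through the camera matrix.
    return new double[] { fx * xd + cx, fy * yd + cy };
}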
There is a lot of information online, but I have not found a 100% reliable source.
I have run out of options. I would really appreciate it if someone could give me a good explanation of camera calibration or point me in the right direction (papers etc.).
Thank you in advance.
Have you ever wondered why you have two eyes? In the most primitive sense, it is because only with both eyes can we get an idea of how far or near objects are. Applications that need to recover 3D information often do so using two cameras; this is called stereoscopy (http://en.wikipedia.org/wiki/Stereoscopy). If you are trying to recover 3D information using a single camera, you can only get a rough approximation; in that case you need a transformation called a homography (http://en.wikipedia.org/wiki/Homography), which tries to model the perspective (that is, how far or near objects are).
In most cases, when we want to calibrate a single camera, we try to remove the radial distortion produced by the camera's lens (http://en.wikipedia.org/wiki/Distortion_%28optics%29). OpenCV offers tools for this process; in most cases a chessboard is used to help it along. You can check this: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html, and more specifically the function cvFindChessboardCorners. I hope this is useful to you. Sorry for the English; I am not a native speaker.
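For what it's worth, here is a rough sketch of the Java equivalent of that function (this assumes OpenCV 3.x-style Java bindings; the file name and the 9x6 inner-corner pattern size are placeholders for your own setup):
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;

public class ChessboardDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Placeholder image; patternSize counts INNER corners, not squares.
        Mat gray = Imgcodecs.imread("board.jpg", Imgcodecs.IMREAD_GRAYSCALE);
        Size patternSize = new Size(9, 6);
        MatOfPoint2f corners = new MatOfPoint2f();

        boolean found = Calib3d.findChessboardCorners(gray, patternSize, corners);
        // On success, corners holds one 2D point per inner corner, row by row.
        System.out.println(found ? "Found " + corners.total() + " corners"
                                 : "Pattern not found");
    }
}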
I don't know if you already fixed your issue with the OpenCV calibration, but I will give you some hints anyway. First of all, I suggest you read Zhang's paper on calibration (http://research.microsoft.com/en-us/um/people/zhang/Papers/TR98-71.pdf). OpenCV's methods are based on Zhang's work, so understanding it is a real priority.
Calibrating a camera means determining the relation between the camera's 2D coordinate system (in pixel units, with the origin at the top-left corner of the camera image) and the 3D coordinate system of the external world (in metres, for example). When you place a known planar calibration object in front of the camera, the system computes the homogeneous transformation between the known 3D object and its 2D image (that is the "rvecs.get(i), tvecs.get(i)" you are talking about: one rotation/translation pair per view of the board).
Image coordinates are always in pixels, and the intrinsic calibration matrix is also expressed in pixels.
You cannot directly "translate" from 2D image coordinates to 3D world coordinates, but you can compute the proper transformation: it is composed of an intrinsic calibration matrix and a roto-translation matrix. Please also have a look at this article: http://research.microsoft.com/en-us/um/people/zhang/Papers/Camera%20Calibration%20-%20book%20chapter.pdf
Hope this helps!
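To make the per-image rvec/tvec point concrete: the pair at index i describes where the board was in view i, so you reproject the board's object points with exactly that pair and compare against the corners detected in the same image. A minimal sketch, reusing the variable names from the question (the MatOfPoint3f/MatOfDouble wrapping assumes the list elements hold CV_32FC3 corner data and CV_64F distortion coefficients, as produced by the code above):
// Needs org.opencv.calib3d.Calib3d and org.opencv.core.* imports.
double totalSquaredError = 0;
long totalPoints = 0;
for (int i = 0; i < object_points.size(); i++) {
    MatOfPoint2f projected = new MatOfPoint2f();
    // Use image i's OWN rvec/tvec: they place the board in that image only.
    Calib3d.projectPoints(new MatOfPoint3f(object_points.get(i)),
            rvecs.get(i), tvecs.get(i),
            cameraMatrix, new MatOfDouble(distCoeffs), projected);

    // L2 distance between detected and reprojected corners for image i.
    double err = Core.norm(corners.get(i), projected, Core.NORM_L2);
    totalSquaredError += err * err;
    totalPoints += projected.total();
}
System.out.println("RMS reprojection error: " + Math.sqrt(totalSquaredError / totalPoints));
A small RMS error (well under a pixel) indicates a good calibration; a large one usually means bad corner detection or too few, too similar views.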
Related
I want to get a Vector containing a coordinate. I know my beginning coordinates, angle and distance. So far I've tried doing:
Vector2 pos = new Vector2(beginningX, beginningY).add(distance, distance).rotate(angle);
But it doesn't work as I expect it to. When the rotation isn't 0 the coordinates become big, and the ending point isn't where I expect it to be. I know this must be a simple problem, but I just can't solve it.
EDIT:
Tried doing:
Vector2 pos = new Vector2(beginningX, beginningY).add(distance, 0).rotate(angle);
(Adding distance to x only) Still no success.
I'd say you're doing it wrong: you need to rotate the distance vector and add it to the position vector:
Vector2 pos = new Vector2(beginningX, beginningY).add(new Vector2(distance, 0).rotate(angle));
You might want to read up on vector math but basically it amounts to this (if I correctly understood what you're trying to do):
If you rotate a vector you're always rotating around point 0/0. Thus you'll want to create a vector that covers the distance from 0/0 to your distance on the x-axis:
0---------------->d
Now you rotate that vector by some angle:
d
/
/
/
/
/
0
Then you offset that vector by your starting point, i.e. you add the two vectors (for simplicity I assume your starting point lies on the y-axis):
d
/
/
/
/
/
s
|
|
|
0
You need to rotate only the distance vector, rather than the sum of the beginning and distance vectors. Addition is the same in either order (commutative), so you can try it this way:
Vector2 pos = new Vector2(distance, 0).rotate(angle).add(beginningX, beginningY);
Advantage: This chained call does not create a temporary Vector2 for the beginning position that would immediately become garbage for the garbage collector. Conserving space and garbage collection time will be important when your code handles millions of vectors.
This is simple vector addition. I'm assuming 2D coordinates, with angles measured counterclockwise from the x-axis:
x(new) = x(old) + distance*cos(angle)
y(new) = y(old) + distance*sin(angle)
Be sure that your angles are in radians when you plug them into trig functions.
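As a quick sketch in Java (assuming the angle arrives in degrees, since Math.sin/Math.cos take radians; movePoint is just an illustrative name):
static float[] movePoint(float x, float y, float distance, float angleDegrees) {
    double a = Math.toRadians(angleDegrees);            // trig wants radians
    float newX = x + (float) (distance * Math.cos(a));  // x(new) = x(old) + d*cos
    float newY = y + (float) (distance * Math.sin(a));  // y(new) = y(old) + d*sin
    return new float[] { newX, newY };
}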
Ok, so on the internet I have seen equations for solving this, but they require the normal of the plane and use a lot higher math than I know.
Basically, if I have an x,y,z position (as well as x,y,z rotations) for my ray, and x,y,z coordinates for the three points that represent my plane, how would I solve for the point of collision?
I have done 2D collisions before, but I am clueless on how this would work in 3D. Also, I work in Java, though I understand C# well enough.
Thanks to the answer below, I was able to find the normal of my face. This then allowed me, through trial and error and http://geomalgorithms.com/a05-_intersect-1.html, to come up with the following code (hand-made vector math excluded):
Vertice Vertice1 = faces.get(f).getV1();
Vertice Vertice2 = faces.get(f).getV2();
Vertice Vertice3 = faces.get(f).getV3();
Vector v1 = vt.subtractVertices(Vertice2, Vertice1);
Vector v2 = vt.subtractVertices(Vertice3, Vertice1);
// The normal is the CROSS product of the two edge vectors (a dot product
// yields a scalar, not a vector); crossProduct is assumed to exist in the
// vt helper alongside dotProduct.
Vector normal = vt.crossProduct(v1, v2);
// d completes the plane equation ax + by + cz + d = 0; this assumes
// Vertice exposes x, y, z fields like Vector does.
double d = -(normal.x*Vertice1.x + normal.y*Vertice1.y + normal.z*Vertice1.z);
// formula: t = -(a*camX + b*camY + c*camZ + d) / (normal . u), where u is the
// direction of the ray from camX,camY,camZ with a rotation of localRotX,localRotY,localRotZ
double Collision =
    -(normal.x*camX + normal.y*camY + normal.z*camZ + d) / vt.dotProduct(normal, vt.subtractVertices(camX, camY, camZ,
    camX + Math.sin(localRotY)*Math.cos(localRotX), camY + Math.cos(localRotY)*Math.cos(localRotX), camZ + Math.sin(localRotX)));
Mathematically this code should work, but I have yet to properly test it. Though I will continue working on this, I consider this topic finished. Thank you.
It would be very helpful to post one of the equations that you think would work for your situation. Without more information, I can only suggest using basic linear algebra to get the normal vector for the plane from the data you have.
In R3 (a.k.a. 3d math), the cross product of two vectors will yield a vector that is perpendicular to the two vectors. A plane normal vector is a vector that is perpendicular to the plane.
You can get two vectors that lie in your plane from the three points you mentioned. Let's call them A, B, and C.
v1 = B - A
v2 = C - A
normal = v1 x v2
Stack Overflow doesn't have MathJax formatting, so that's a little ugly, but you should get the idea: construct two vectors from your three points in the plane, take the cross product of the two vectors, and then you have a normal vector. You should then be closer to adapting the equation to your needs.
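In plain Java it could look like this sketch (double[3] arrays stand in for whatever vector type you use; planeNormal is an illustrative name):
// Normal of the plane through points a, b, c.
static double[] planeNormal(double[] a, double[] b, double[] c) {
    double[] v1 = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };  // v1 = B - A
    double[] v2 = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };  // v2 = C - A
    // Cross product v1 x v2: perpendicular to both, hence to the plane.
    return new double[] {
        v1[1] * v2[2] - v1[2] * v2[1],
        v1[2] * v2[0] - v1[0] * v2[2],
        v1[0] * v2[1] - v1[1] * v2[0]
    };
}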
I'm writing a game in Java with the LWJGL. Basically I have two planes and I want to check if they intersect each other, like in the image below.
I have the four points for each plane; can someone help me?
The 2 planes do not intersect if they are parallel (and not the same plane).
Let p1, p2, p3 and p4 be your 4 points defining the plane, and n = (a, b, c) the normal vector computed as n = cross(p2 - p1, p3 - p1). The plane equation is
ax + by + cz + d = 0
where d = -dot(n, p1).
You have 2 planes
ax + by + cz + d = 0
a'x + b'y + c'z + d' = 0
They are parallel (and not the same) iff
a/a' == b/b' == c/c' != d/d'
When you implement this predicate you have to guard against division by zero; a division-free variant is sketched below.
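One way to avoid the divisions altogether is an equivalent formulation: the planes are parallel iff the cross product of their normals is (near) zero, and coincident if, additionally, a point of one satisfies the other's equation. A sketch, with an arbitrary tolerance EPS:
static final double EPS = 1e-9;

// True iff the two planes are parallel but NOT the same plane.
// n1, n2 are the normals; d2 is the second plane's constant term;
// pointOnPlane1 is any point satisfying the first plane's equation.
static boolean parallelAndDistinct(double[] n1, double[] n2, double d2,
                                   double[] pointOnPlane1) {
    // Cross product of the normals; the zero vector means parallel normals.
    double cx = n1[1] * n2[2] - n1[2] * n2[1];
    double cy = n1[2] * n2[0] - n1[0] * n2[2];
    double cz = n1[0] * n2[1] - n1[1] * n2[0];
    if (Math.abs(cx) > EPS || Math.abs(cy) > EPS || Math.abs(cz) > EPS) {
        return false;                        // not parallel, so they intersect
    }
    // Parallel: distinct unless a point of plane 1 also lies on plane 2.
    double eval = n2[0] * pointOnPlane1[0] + n2[1] * pointOnPlane1[1]
                + n2[2] * pointOnPlane1[2] + d2;
    return Math.abs(eval) > EPS;
}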
I can't show that this is enough, but I believe these three tests should be sufficient:
for two planes...
project each plane onto the x axis, check if there is any overlap
project each plane onto the y axis, check if there is any overlap
project each plane onto the z axis, check if there is any overlap
If there is no overlap in any of those three cases, the planes do not intersect; otherwise, they do.
Let me know if you are not sure how to project onto an axis or calculate an overlap. Also, let me know if these three tests are insufficient.
Edit 1:
Algorithm: you don't actually have to project; rather, you can just find each segment's extent along the axis. Take the x axis as an example. Find the minimum and maximum x values on plane 1, then the minimum and maximum x values on plane 2. If the two ranges overlap (for example, [1, 5] overlaps with [2, 9]), then the projections onto the x axis overlap. Note that finding the range of x values might not be easy if the edges of your plane segment aren't parallel with the x axis; if you're dealing with more complicated plane segments whose edges aren't axis-parallel, I can't really help, and you might have to use something else, like matrices. A sketch of the overlap test is given after Edit 2 below.
The test, by the way, is called a separating-axis test. I think the x, y, and z axis tests should be enough to test for plane segments intersecting.
Source: (Book) Game Physics Engine Development: How To Build A Robust Commercial-Grade Physics Engine For Your Game (Second Edition) by Ian Millington
Edit 2:
Actually, you'll need to check more axes.
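For what it's worth, here is a sketch of the single-axis overlap test from Edit 1, with plane segments given as vertex arrays (the double[3] representation is my own choice for illustration; pass {1,0,0}, {0,1,0} and {0,0,1} for the x, y and z axes):
// True iff the projections of the two vertex sets onto 'axis' overlap.
static boolean overlapOnAxis(double[][] vertsA, double[][] vertsB, double[] axis) {
    double minA = Double.POSITIVE_INFINITY, maxA = Double.NEGATIVE_INFINITY;
    double minB = Double.POSITIVE_INFINITY, maxB = Double.NEGATIVE_INFINITY;
    for (double[] v : vertsA) {
        double p = v[0] * axis[0] + v[1] * axis[1] + v[2] * axis[2]; // dot product
        minA = Math.min(minA, p);
        maxA = Math.max(maxA, p);
    }
    for (double[] v : vertsB) {
        double p = v[0] * axis[0] + v[1] * axis[1] + v[2] * axis[2];
        minB = Math.min(minB, p);
        maxB = Math.max(maxB, p);
    }
    // [minA, maxA] and [minB, maxB] overlap unless one ends before the other begins.
    return maxA >= minB && maxB >= minA;
}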
I'm creating a very very simple game for fun. Realizing I needed the trajectory of an object given an angle and a velocity, it seemed logical to use this parametric equation:
x = (v*cos(ø))*t and y = (v*sin(ø))*t - 16t^2
I know that this equation works for a trajectory, but it isn't working with most ø values I use.
Do java angles work any differently from normal angle calculation?
My goal is for the object to start from bottom left of the window and follow an arc that is determined by the speed and angle given. However it tends to go strange directions.
The value of ø should be horizontal at 0 degrees and vertical at 90, and in the equation it refers to the angle at which the arc begins.
This is my first ever question post on this site, so if I'm missing anything in that regard please let me know.
Here is the calculating part of my code
Not shown is the void time() method that counts up every 5 ms.
Also, I should mention that parX and parY refer to the x and y coordinates in unrounded form, as graphics coordinates require integer values.
Any help is much appreciated, and thank you in advance!
public void parametric()
{
    parX = (float) ((speed * cos(-ø)) * time);
    gravity = (time * time) * (16);
    parY = (float) ((float) ((speed * sin(-ø)) * time) + gravity) + 500;
    xCoord = round(parX);
    yCoord = round(parY);
}
Do java angles work any differently from normal angle calculation?
You only need to read the docs:
public static double cos(double a)
Parameters:
a - an angle, in radians.
I guess you are using degrees instead of radians?
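If so, a minimal sketch of the fix is to convert once and use the radian value in the trig calls (the field names mirror the question's parametric() method; angleDegrees stands in for ø):
public void parametric()
{
    // Math.cos/Math.sin expect radians, so convert the degree value once.
    double angleRad = Math.toRadians(angleDegrees);
    parX = (float) (speed * Math.cos(angleRad) * time);
    gravity = (time * time) * 16;
    // sin(-ø) == -sin(ø), so the minus sign preserves the original's
    // downward-growing screen y-axis behaviour.
    parY = (float) (-speed * Math.sin(angleRad) * time + gravity) + 500;
    xCoord = Math.round(parX);
    yCoord = Math.round(parY);
}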
I'm trying to write a 2D game in Java that uses the Separating Axis Theorem for collision detection. In order to resolve collisions between two polygons, I need to know the Minimum Translation Vector of the collision, and I need to know which direction it points relative to the polygons (so that I can give one polygon a penalty force along that direction and the other a penalty force in the opposite direction). For reference, I'm trying to implement the algorithm here.
I'd like to guarantee that if I call my collision detection function collide(Polygon polygon1, Polygon polygon2) and it detects a collision, the returned MTV will always point away from polygon1, toward polygon2. In order to do this, I need to guarantee that the separating axes that I generate, which are the normals of the polygon edges, always point away from the polygon that generated them. (That way, I know to negate any axis from polygon2 before using it as the MTV).
Unfortunately, it seems that whether or not the normal I generate for a polygon edge points towards the interior of the polygon or the exterior depends on whether the polygon's points are declared in clockwise or counterclockwise order. I'm using the algorithm described here to generate normals, and assuming that I pick (x, y) => (y, -x) for the "perpendicular" method, the resulting normals will only point away from the polygon if I iterate over the vertices in clockwise order.
Given that I can't force the client to declare the points of the polygon in clockwise order (I'm using java.awt.Polygon, which just exposes two arrays for x and y coordinates), is there a mathematical way to guarantee that the direction of the normal vectors I generate is toward the exterior of the polygon? I'm not very good at vector math, so there may be an obvious solution to this that I'm missing. Most Internet resources about the SAT just assume that you can always iterate over the vertices of a polygon in clockwise order.
You can just calculate which direction each polygon is ordered, using, for example, the answer to this question, and then multiply your normal by -1 if the two polygons have different orders.
You could also check each polygon passed to your algorithm to see if it is ordered incorrectly, again using the algorithm above, and reverse the vertex order if necessary.
Note that when calculating the vertex order, some algorithms will work for all polygons and some just for convex polygons.
I finally figured it out, but the one answer posted was not the complete solution, so I'm not going to accept it. I was able to determine the ordering of the polygon using the basic algorithm described in this SO answer (also described, less clearly, in David Norman's link), which is:
for each edge in polygon:
    sum += (x2 - x1) * (y2 + y1)
However, there's an important caveat which none of these answers mention. Normally, you can decide that the polygon's vertices are clockwise if this sum is positive, and counterclockwise if the sum is negative. But the comparison is inverted in Java's 2D graphics system, and in fact in many graphics systems, because the positive y axis points downward. So in a normal, mathematical coordinate system, you can say
if sum > 0 then polygon is clockwise
but in a graphics coordinate system with an inverted y-axis, it's actually
if sum < 0 then polygon is clockwise
My actual code, using Java's Polygon, looked something like this:
// First, find the normals as if the polygon was clockwise
int sum = 0;
for (int i = 0; i < polygon.npoints; i++) {
    int nextI = (i + 1 == polygon.npoints ? 0 : i + 1);
    sum += (polygon.xpoints[nextI] - polygon.xpoints[i]) *
           (polygon.ypoints[nextI] + polygon.ypoints[i]);
}
if (sum > 0) {
    // reverse all the normals (multiply them by -1)
}