I have been having trouble understanding the rotation vectors returned by the Calib3d.calibrateCamera() function of OpenCV.
In order to understand the return values from the function, I tried the following steps.
I first generated a chessboard pattern in Java. Then I applied 15 rotation vectors and 15 translation vectors to it, along with a known matrix representing the camera's intrinsic matrix. I was basically trying to recreate the manual process of taking many photographs of the chessboard. Some of the images I created look like this:
One of the images I generated with identified corners
After generating the images, I ran them through the corner identification and camera calibration functions. The calibration function returned almost exactly the intrinsic matrix I had used.
When I compare the rotation vectors I used to generate the images and the vectors I got back from the calibration function, they are negatives of each other. Comparison plots are shown below:
Comparing input and output Rotation value about x axis
Is this normal behavior or am I doing something wrong?
The linked answer says that OpenCV rotations are from the image coordinate system to the object coordinate system. Could this be the reason?
If the expected and actual rotation vectors are the negatives of each other, then the expected and actual rotation matrices are inverses of each other: negating a rotation vector keeps the axis but flips the sign of the angle, which is exactly the inverse rotation. Hence you probably got the source and destination coordinate systems mixed up.
Normally, this is quite easy to check:
1. Apply the [R|t] matrix to the 3D points corresponding to the pattern corners.
2. Project them into the image using the intrinsic matrix.
3. Check that the corner projections are consistent with what you observe in the image. If they are not, try the same thing with the inverse rotation matrix (see the sketch after this list).
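A minimal sketch of this check with the OpenCV Java bindings; the pattern points, rvec/tvec, and intrinsic values below are placeholders, not values from the question:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class RvecCheck {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // A few pattern corners in the board's own coordinate system (placeholders).
        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(0, 0, 0), new Point3(1, 0, 0), new Point3(0, 1, 0));

        // rvec/tvec as returned by calibrateCamera for one view (placeholders).
        Mat rvec = new Mat(3, 1, CvType.CV_64F);
        rvec.put(0, 0, 0.1, -0.2, 0.05);
        Mat tvec = new Mat(3, 1, CvType.CV_64F);
        tvec.put(0, 0, 0.0, 0.0, 5.0);

        // The intrinsic matrix used to generate the images (placeholder values).
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800);
        cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 320);
        cameraMatrix.put(1, 2, 240);
        MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0);

        // Steps 1 and 2: apply [R|t] and project with the intrinsics.
        MatOfPoint2f projected = new MatOfPoint2f();
        Calib3d.projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, projected);
        System.out.println("rvec:  " + projected.dump());

        // Step 3 fallback: the inverse of a rotation vector is its negation.
        Mat rvecInv = new Mat();
        Core.multiply(rvec, new Scalar(-1), rvecInv);
        Calib3d.projectPoints(objectPoints, rvecInv, tvec, cameraMatrix, distCoeffs, projected);
        System.out.println("-rvec: " + projected.dump());
    }
}
```

Whichever of the two projections lands on the observed corners tells you which rotation convention your generator and calibrateCamera disagree on.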
My FRC (robotics) team is having issues with image processing, and tomorrow is our last testing day before competition.
The camera is facing downward and tilted in the x direction. We are trying to calculate the distance from an object to a fixed point on the same surface. We only need to calculate the x distance (in inches).
Here's a diagram.
The object could be anywhere on the line with the fixed point.
Here is the view from the camera
The tape measure represents the line in the diagram.
I know it's low-res and not the best picture; I took it just before I left today. The tape measure is where the object could be, and we only care about its x position.
Other info if needed:
Camera: Pixy
Focal length: 28mm (1.1024")
Sensor size: 0.25"
Height of camera from surface (the ground in our case): 8"
We always know the x position (in pixels) of the object, we just need to calculate the distance (in inches) that the object is from the fixed point.
If you have any other questions please ask. Thanks.
You are on the right track with your image of the tape measure. All you need to do is manually determine (from that image) the inches (from zero) for each x-position (pixel). Create a lookup table that you can use in the code.
When you determine the x-position of the object and the x-position of the fixed point, look up the inches for each of these x-positions and subtract to get the distance between the object and the fixed point.
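A minimal sketch of such a lookup table in Java, using linear interpolation between measured samples; the pixel-to-inch pairs below are made-up placeholders that would have to be read off your own tape-measure image:

```java
import java.util.Map;
import java.util.TreeMap;

public class PixelToInches {
    // x-position (pixels) -> inches from zero, measured from the calibration image.
    private final TreeMap<Integer, Double> table = new TreeMap<>();

    public PixelToInches() {
        // Placeholder sample values; replace with your own measurements.
        table.put(40, 0.0);
        table.put(120, 6.0);
        table.put(190, 12.0);
        table.put(250, 18.0);
    }

    /** Linearly interpolate inches for an arbitrary x-position. */
    public double inchesAt(int xPixel) {
        Map.Entry<Integer, Double> lo = table.floorEntry(xPixel);
        Map.Entry<Integer, Double> hi = table.ceilingEntry(xPixel);
        if (lo == null) return hi.getValue();  // below the first sample
        if (hi == null || lo.getKey().equals(hi.getKey())) return lo.getValue();
        double t = (double) (xPixel - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + t * (hi.getValue() - lo.getValue());
    }

    /** Distance in inches between the object and the fixed point. */
    public double distance(int objectX, int fixedPointX) {
        return Math.abs(inchesAt(objectX) - inchesAt(fixedPointX));
    }
}
```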
This approach is super simple, but also depends on proper calibration of the system. In particular, the operational setup (height, angle, camera optics, etc.) has to exactly match the setup when the test image was taken that was used to create the lookup table.
A standard technique is to calibrate the system by taking and processing a calibration image whenever the operational setup might have changed. For example, you could place a grid pattern (e.g., with one-inch squares) in the field of view. The idea is that you code a calibration analysis that determines the proper lookup table values from that standard image.
I have a View that can be moved, rotated, and scaled. I have all four corner positions, and I want to get the positions of some imaginary points inside it, so that they stay consistent under scaling and moving.
Please take a look at the picture below to understand it well.
Edit:
After re-reading, I see that the question is not entirely clear.
If you need to transform points when you know the initial and final corner coordinates, then calculate the matrix of the affine transform using three pairs of coordinates, as described here, and apply that matrix to all the points you need.
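A minimal sketch with Android's own Matrix, which computes exactly that affine transform from three point pairs via setPolyToPoly (all coordinates below are placeholders):

```java
import android.graphics.Matrix;

static float[] mapThroughCorners(float px, float py) {
    // Three corners before and after the move/rotate/scale (placeholder values).
    float[] before = {0f, 0f,   300f, 0f,   0f, 500f};
    float[] after  = {20f, 40f, 310f, 30f,  10f, 560f};

    Matrix m = new Matrix();
    m.setPolyToPoly(before, 0, after, 0, 3);  // 3 point pairs -> affine transform

    // Map the point of interest through the same transform.
    float[] pt = {px, py};
    m.mapPoints(pt);
    return pt;  // transformed coordinates
}
```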
But you also mentioned points "that can be constant" - as far as I understand, points that preserve their coordinates after the transform.
You should know the matrix of the affine transform that is applied to your coordinates.
For this matrix, calculate the eigenvalues and the corresponding eigenvectors (there are up to three of them).
If some eigenvalue is 1, then a static point does exist, and the corresponding eigenvector, normalized to the form (x, y, 1), gives the coordinates of your static point.
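A minimal sketch of that check in plain Java: a static point satisfies M * (x, y, 1)^T = (x, y, 1)^T, which is the eigenvalue-1 case above and reduces to the 2x2 linear system (A - I) p = -t:

```java
public class FixedPoint {
    /**
     * For the affine matrix
     *   | a b tx |
     *   | c d ty |
     *   | 0 0  1 |
     * returns the static point {x, y}, or null when 1 is not an
     * eigenvalue (i.e., no unique static point exists).
     */
    static double[] fixedPoint(double a, double b, double tx,
                               double c, double d, double ty) {
        // Solve (A - I) p = -t by Cramer's rule.
        double m00 = a - 1, m01 = b;
        double m10 = c,     m11 = d - 1;
        double det = m00 * m11 - m01 * m10;
        if (Math.abs(det) < 1e-12) return null;
        double x = (-tx * m11 + ty * m01) / det;
        double y = (-ty * m00 + tx * m10) / det;
        return new double[]{x, y};
    }

    public static void main(String[] args) {
        // Example: a 90-degree rotation about the point (2, 3), whose
        // affine matrix rows are (0, -1, 5) and (1, 0, 1).
        double[] p = fixedPoint(0, -1, 5, 1, 0, 1);
        System.out.println(p[0] + ", " + p[1]);  // prints 2.0, 3.0
    }
}
```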
As the title says, I'm trying to find a way to generate a transformation matrix to best align two images (the solution with the smallest error value computed with an arbitrary metric - for example, the SAD of all distances between corresponding points). An example is provided below:
This is just an example, in the sense that the outer contour can be any shape, and the "holes" can be of any shape, size, and number.
The "from" image was drawn by hand in order to show that the shape is not perfect, but rather a contour extracted from a camera acquired image.
The API function that seems to be what I need is Video.estimateRigidTransform, but I ran into a couple of issues and I'm stuck:
The transformation must be rigid in the strongest sense, meaning it must not do any kind of scaling, only translation and rotation.
Since the shapes in the "from" image are not perfect, the number of points in the contour is not the same as in the "to" image, and the function above needs two sets of corresponding points. To get around this, I tried another approach: I calculated the centroids of the holes and of the outer contour and tried aligning those. There are two issues here:
I need alignment even if one of the holes is missing in the "from" image.
The points must be in the same order in both lists passed to Video.estimateRigidTransform, and there is no guarantee that findContours will provide them in the same order for both shapes.
I have yet to try running a feature extractor and matcher to obtain some corresponding points, but I'm not very confident in this method, especially since the "from" image is a natural image with irregularities.
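For reference, assuming corresponding point pairs were available (from features or matched centroids), a strictly rigid least-squares fit is straightforward; this is a sketch of the standard 2D Kabsch/Procrustes solution with the scale fixed to 1, so it only rotates and translates:

```java
import java.util.List;
import org.opencv.core.Point;

public class RigidFit2D {
    /**
     * Least-squares rigid fit between two equally sized lists of
     * corresponding 2D points. Returns {theta, tx, ty} such that
     * dst ~= R(theta) * src + t, with no scaling.
     */
    static double[] fit(List<Point> src, List<Point> dst) {
        int n = src.size();
        double sx = 0, sy = 0, dx = 0, dy = 0;
        for (int i = 0; i < n; i++) {
            sx += src.get(i).x; sy += src.get(i).y;
            dx += dst.get(i).x; dy += dst.get(i).y;
        }
        sx /= n; sy /= n; dx /= n; dy /= n;  // centroids of both sets

        // Accumulate dot and cross products of the centered point pairs.
        double dot = 0, cross = 0;
        for (int i = 0; i < n; i++) {
            double ax = src.get(i).x - sx, ay = src.get(i).y - sy;
            double bx = dst.get(i).x - dx, by = dst.get(i).y - dy;
            dot   += ax * bx + ay * by;
            cross += ax * by - ay * bx;
        }
        double theta = Math.atan2(cross, dot);  // optimal rotation angle

        // Translation maps the rotated source centroid onto the destination centroid.
        double cos = Math.cos(theta), sin = Math.sin(theta);
        double tx = dx - (cos * sx - sin * sy);
        double ty = dy - (sin * sx + cos * sy);
        return new double[]{theta, tx, ty};
    }
}
```

The ordering and missing-hole problems remain, of course, since this still assumes the two lists are matched index by index.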
Any ideas would be greatly appreciated.
As the title implies, I need an algorithm, code, or a library that would help me stretch a Bitmap (or a Path in Android) to an arbitrary polygon. The polygon is given as a list of x, y coordinates. Actually, I need to transform/stretch a Path object in Android, which is also given by x, y coordinates. I mentioned Bitmap because it is more likely that someone has had a similar problem, and I assume that both would be transformed by a Matrix.
I tried to use Matrix.setPolyToPoly(...), but it doesn't seem to help, since it transforms to a quadrilateral area (only 4 points), not to an arbitrary polygon.
For a better illustration of what I need, please check out the image below. It is not the exact transformation, but something close. Note that the whole image is stretched to the star-shaped polygon; it is not a mask and not a trim, just a pixel transformation.
I saw your question a few days ago, then yesterday I ran across this:
Canvas#drawBitmapMesh | Android Developers
It's kind of hard to grasp, but the way I understand it, you start with an imaginary elastic grid over your bitmap. The way you want to warp the bitmap can be expressed by moving the x, y points of the grid to alternate locations.
Here's an article with a diagram and here's an article with some sample code.
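To make the idea concrete, here's a minimal sketch of how I understand the API; the grid size and the displacement are arbitrary placeholders:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;

// Divide the bitmap into a meshWidth x meshHeight grid and supply one (x, y)
// vertex per grid intersection; moving a vertex warps the pixels around it.
void drawWarped(Canvas canvas, Bitmap bitmap) {
    int meshWidth = 4, meshHeight = 4;               // grid cells
    int count = (meshWidth + 1) * (meshHeight + 1);  // grid intersections
    float[] verts = new float[count * 2];

    int i = 0;
    for (int y = 0; y <= meshHeight; y++) {
        for (int x = 0; x <= meshWidth; x++) {
            // Start with the identity grid (no warp yet).
            verts[i++] = bitmap.getWidth() * x / (float) meshWidth;
            verts[i++] = bitmap.getHeight() * y / (float) meshHeight;
        }
    }

    // Displace vertices to warp; e.g., pull the center vertex down by 40px.
    int center = ((meshHeight / 2) * (meshWidth + 1) + meshWidth / 2) * 2;
    verts[center + 1] += 40;

    canvas.drawBitmapMesh(bitmap, meshWidth, meshHeight, verts, 0, null, 0, null);
}
```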
Obviously, the hard part now is to take your frame polygon and use it to generate the warped vertices in the mesh. That may take some fancy mathematics. But I thought this would be a step in the right direction.
This is what I was envisioning: I'm looking at the star polygon and I'm picturing a circle as the starting point (not the square). The star could be seen as taking the circle and stretching points on it toward and away from the center. Whichever way it was stretched would create some vectors, from zero at the center to strongest at the stretch point.
For a Path, you could then just apply the vectors to the points in the path, but the lines would also need to be bent, so this would be some pretty convoluted math with Bezier curves (convoluted at least for me; I'm not any sort of mathematician).
But if you drew the Path onto a Bitmap, you might be in a better position. You could just alter the mesh vertices using the different vectors and then use Canvas.drawBitmapMesh() to render the final result.
I have a problem in my application.
I would like to take a picture and then draw a mask on it, knowing the tilt and inclination angles. In the testing phase, I chose to work on a simple lens checker, where I draw a mask with known real coordinates and I'm supposed to find the same coordinates with code.
When tilt = 0 and inclinationAngle = 0 (the default case) everything works fine, and the same holds when tilt != 0.
The images are taken with the same distance between the camera lens and the lens checker's center.
This is the reference example:
Now with just applying a rotation and drawing the same mask, we get this:
Applying a simple rotation by the -tilt angle gives us the same eight point coordinates as in the default case, and it works fine.
The problem is when I change the inclination angle, like this:
I understand that this is a perspective transformation (or its inverse), but I don't want to transform the image itself; I want to recover the default case's coordinates from the eight points' coordinates. As in a real case, I don't know the default case's coordinates, only the inclination angle. So a user takes a picture with the mask, and I must retrieve the mask coordinates as if the picture had been taken without inclination or tilt. Here the points are in 2D (X, Y).
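For what it's worth, if the 3x3 homography between the inclined view and the default view were known, the eight points themselves (rather than the image) could be mapped with OpenCV's Core.perspectiveTransform; the matrix and point values in this sketch are placeholders:

```java
import org.opencv.core.*;

public class MaskUnproject {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder homography between the inclined view and the default view.
        Mat H = new Mat(3, 3, CvType.CV_64F);
        H.put(0, 0,
                1.0, 0.02,   -5.0,
                0.0, 0.95,   12.0,
                0.0, 0.0008,  1.0);

        // The eight mask points measured in the inclined image (placeholders).
        MatOfPoint2f inclined = new MatOfPoint2f(
                new Point(100, 80),  new Point(200, 78),
                new Point(300, 80),  new Point(305, 170),
                new Point(310, 260), new Point(205, 262),
                new Point(105, 255), new Point(102, 168));

        // Map the points (not the image) through the homography.
        MatOfPoint2f corrected = new MatOfPoint2f();
        Core.perspectiveTransform(inclined, corrected, H);
        System.out.println(corrected.dump());
    }
}
```

What I haven't figured out is how to build that homography from the inclination angle alone, which is exactly where I'm stuck.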
I'm sorry if this is duplicated somewhere, but I've looked everywhere and used OpenCV, and I couldn't get what I need.
Any needed info will be provided, and thanks for any help.