I have a problem in my application.
I would like to take a picture and then draw a mask on it, knowing the tilt and inclination angles. For the testing phase, I chose to work on a simple lens checker on which I draw a mask with known real coordinates, and the code is supposed to find the same coordinates.
When tilt = 0 and inclinationAngle = 0 (the default case) everything works fine, and the same is true when tilt != 0.
The images are taken with the same distance between the camera lens and the lens checker's center.
This is the reference example:
Now, applying just a rotation and drawing the same mask, we get this:
Applying a simple rotation by -tilt gets us the same coordinates for the eight points as in the default case, and it works fine.
The problem is when I change inclination angle like this:
I understand that this is a perspective transformation (or its inverse), but I don't want to transform the image itself; I want to recover the default case's coordinates from the eight points' coordinates. As in a real use case, I don't know the default case's coordinates, only the inclination angle. So a user takes a picture with the mask, and I must retrieve the mask coordinates as if the picture had been taken without inclination or tilt. The points here are 2D (X, Y).
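To make the "transform the points, not the image" part concrete, here is a minimal Java/OpenCV sketch of what I mean, assuming the 3x3 matrix H mapping the inclined view back to the frontal view were known (it is just a placeholder identity here; building it from the inclination angle is exactly the part I'm missing). The point values are made up:

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;

public class MaskPointsSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // The eight 2D mask points measured in the inclined picture (placeholder values).
        MatOfPoint2f inclinedPts = new MatOfPoint2f(
                new Point(120, 80), new Point(230, 78), new Point(340, 75), new Point(342, 190),
                new Point(338, 305), new Point(228, 303), new Point(118, 300), new Point(119, 190));

        // 3x3 homography mapping the inclined view back to the frontal (default) view.
        // Placeholder identity; in the real case it would have to be derived from the
        // known inclination angle and the camera geometry.
        Mat H = Mat.eye(3, 3, CvType.CV_64F);

        // Transform only the point coordinates, without warping the image itself.
        MatOfPoint2f frontalPts = new MatOfPoint2f();
        Core.perspectiveTransform(inclinedPts, frontalPts, H);

        System.out.println(frontalPts.dump());
    }
}
```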
I'm sorry if this is a duplicate, but I've looked everywhere and tried OpenCV, and I couldn't get what I need.
I will provide any additional information needed, and thanks for any help.
My FRC (robotics) team is having issues with image processing, and tomorrow is our last testing day before competition.
The camera is facing downward and tilted in the x direction. We are trying to calculate the distance from an object to a fixed point on the same surface. We only need the x distance (in inches).
Here's a diagram.
The object could be anywhere on the line with the fixed point.
Here is the view from the camera
The tape measure represents the line in the diagram.
I know it's low resolution and not the best picture; I took it just before I left today. The tape measure is where the object could be, and we only care about its x position.
Other info if needed:
Camera: Pixy
Focal length: 28mm (1.1024")
Sensor size: 0.25"
Height of camera from surface (the ground in our case): 8"
We always know the x position (in pixels) of the object; we just need to calculate the distance (in inches) from the object to the fixed point.
If you have any other questions please ask. Thanks.
You are on the right track with your image of the tape measure. All you need to do is manually determine (from that image) the inches from zero for each x-position in pixels. Create a lookup table that you can use in the code.
When you determine the x-position of the object and the x-position of the fixed point, look up the inches for each of these x-positions and subtract to get the distance between the object and the fixed point.
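A minimal sketch of that lookup in Java; the pixel/inch pairs below are made-up placeholder values, not measurements from your setup, and the helper interpolates linearly between calibrated columns so you are not limited to exactly the pixels you measured:

```java
/** Maps an image x-position (pixels) to a distance along the line (inches). */
public class PixelToInchLookup {
    // Calibration table read off the tape-measure image: PIXEL_X[i] -> INCHES[i].
    // These numbers are placeholders; fill them in from your own calibration picture.
    private static final double[] PIXEL_X = { 40, 120, 200, 280, 360 };
    private static final double[] INCHES  = { 0.0, 6.0, 13.5, 22.0, 32.0 };

    /** Linear interpolation between the two nearest calibrated columns. */
    public static double toInches(double px) {
        if (px <= PIXEL_X[0]) return INCHES[0];
        for (int i = 1; i < PIXEL_X.length; i++) {
            if (px <= PIXEL_X[i]) {
                double t = (px - PIXEL_X[i - 1]) / (PIXEL_X[i] - PIXEL_X[i - 1]);
                return INCHES[i - 1] + t * (INCHES[i] - INCHES[i - 1]);
            }
        }
        return INCHES[INCHES.length - 1];
    }

    public static void main(String[] args) {
        double objectX = 250;  // x-position of the object, in pixels
        double fixedX  = 100;  // x-position of the fixed point, in pixels
        System.out.println("Distance: " + Math.abs(toInches(objectX) - toInches(fixedX)) + " inches");
    }
}
```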
This approach is super simple, but it also depends on proper calibration of the system. In particular, the operational setup (height, angle, camera optics, etc.) has to exactly match the setup used when the test image for the lookup table was taken.
A standard technique is to calibrate the system by taking and processing a calibration image whenever the operational setup might have changed. For example, you could place a grid pattern (e.g., with one-inch squares) in the field of view. The idea is that you code a calibration analysis that determines the proper lookup table values from that calibration image.
As the title implies, I need an algorithm, code, or a library that would help me stretch a Bitmap (or a Path in Android) to an arbitrary polygon. The polygon is given as a list of x, y coordinates. Actually, I need to transform/stretch a Path object in Android, which is also given by x, y coordinates. I mentioned Bitmap because it is more likely that someone has had a similar problem, and I assume that both would be transformed by a Matrix.
I tried to use Matrix.setPolyToPoly(...), but it doesn't seem to help, since it transforms to a square-like area (only 4 points), not to an arbitrary polygon.
For a better illustration of what I need, please check out the image below. It is not the exact transformation, but something close. Note that the whole image is stretched to the star-shaped polygon; it is not a mask and not a trim, just a pixel transition.
I saw your question a few days ago, then yesterday I ran across this:
Canvas#drawBitmapMesh | Android Developers
It's kind of hard to grasp, but the way I understand it, you start with an imaginary elastic grid over your bitmap. The way you want to warp the bitmap can be expressed by moving the x, y points of the grid to alternate locations.
Here's an article with a diagram and here's an article with some sample code.
Obviously, the hard part now is to take your frame polygon and use it to generate the warped vertices in the mesh. That may take some fancy mathematics. But I thought this would be a step in the right direction.
This is what I was envisioning: I'm looking at the star polygon and I'm picturing a circle as the starting point (not the square). The star could be seen as taking the circle and stretching points on it toward and away from the center. Whichever way it was stretched would create some vectors, from zero at the center to strongest at the stretch point.
For a Path, you could then just apply the vectors to the points in the path, but the lines would also need to be bent, so this would involve some pretty convoluted math with Bézier curves (convoluted at least for me; I'm not any sort of mathematician).
But if you drew the Path onto a Bitmap, you might be in a better position. You could just alter the mesh vertices using the different vectors and then use Canvas.drawBitmapMesh() to render the final result.
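Here is a rough sketch of that last idea, assuming a custom View; the warpX/warpY hooks are hypothetical stand-ins for whatever displacement you derive from the target polygon (they just return the identity here):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

/** Draws `source` warped through a (MESH x MESH) grid of displaced vertices. */
public class WarpedBitmapView extends View {
    private static final int MESH = 10;            // grid cells per side
    private final Bitmap source;
    private final float[] verts = new float[(MESH + 1) * (MESH + 1) * 2];

    public WarpedBitmapView(Context context, Bitmap source) {
        super(context);
        this.source = source;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        int i = 0;
        for (int row = 0; row <= MESH; row++) {
            float y = source.getHeight() * row / (float) MESH;
            for (int col = 0; col <= MESH; col++) {
                float x = source.getWidth() * col / (float) MESH;
                // warpX/warpY are hypothetical hooks: they should return where this
                // grid point ends up, e.g. pushed toward or away from the centre along
                // the vectors derived from the target polygon.
                verts[i++] = warpX(x, y);
                verts[i++] = warpY(x, y);
            }
        }
        canvas.drawBitmapMesh(source, MESH, MESH, verts, 0, null, 0, null);
    }

    // Identity warp as a placeholder; replace with the polygon-driven displacement.
    private float warpX(float x, float y) { return x; }
    private float warpY(float x, float y) { return y; }
}
```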
I have been having trouble understanding the rotation vectors given back by the Calib3d.calibrateCamera() function of OpenCV.
In order to understand the return values from the function, I tried the following steps.
I first generated a chessboard pattern in Java. Then I applied 15 rotation vectors and 15 translation vectors to it. I also applied a known matrix representing the camera's intrinsic parameters. I was basically trying to recreate the manual process of taking many photographs of the chessboard. Some of the images I created look like this:
One of the images I generated with identified corners
After generating the images, I ran them through the corner identification and camera calibration functions. The calibration function gave me back almost exactly the intrinsic matrix I had used.
When I compare the rotation vectors I used to generate the images with the vectors I got back from the calibration function, they are negatives of each other. Comparison plots are shown below:
Comparing input and output rotation values about the x axis
Is this normal behavior, or am I doing something wrong?
An answer I found says that OpenCV rotations are from the image coordinate system to the object coordinate system. Could this be the reason?
If the expected and actual rotation vectors are the opposite of each other, it means the expected and actual rotation matrices are the inverse of each other. Hence you probably got the source and destination coordinate systems mixed up.
Normally, this is quite easy to check:
Apply the [R|t] matrix to the 3D points corresponding to the pattern corners
Project them in the image using the intrinsic matrix
Check that the corner projections are consistent with what you observe in the image. If they are not, try the same thing with the inverse rotation matrix.
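A minimal OpenCV (Java) sketch of that check; the board layout, rvec/tvec, intrinsics, and distortion values below are placeholders to be replaced with your own numbers:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

public class RotationCheck {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // 3D chessboard corners in the object coordinate system
        // (placeholder: 9x6 inner corners, 25 mm squares, Z = 0 on the board plane).
        Point3[] corners = new Point3[9 * 6];
        for (int r = 0; r < 6; r++)
            for (int c = 0; c < 9; c++)
                corners[r * 9 + c] = new Point3(c * 25.0, r * 25.0, 0.0);
        MatOfPoint3f objectPoints = new MatOfPoint3f(corners);

        // rvec/tvec for one view (from calibrateCamera or the values you generated with).
        Mat rvec = Mat.zeros(3, 1, CvType.CV_64F);               // placeholder: no rotation
        Mat tvec = Mat.zeros(3, 1, CvType.CV_64F);
        tvec.put(0, 0, 0.0, 0.0, 500.0);                         // placeholder: board 500 mm away

        // Intrinsic matrix you used, plus distortion coefficients (zero here).
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800.0); cameraMatrix.put(1, 1, 800.0);   // fx, fy (placeholders)
        cameraMatrix.put(0, 2, 320.0); cameraMatrix.put(1, 2, 240.0);   // cx, cy (placeholders)
        MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0, 0);

        // Apply [R|t] and project with the intrinsics; compare against the detected corners.
        MatOfPoint2f projected = new MatOfPoint2f();
        Calib3d.projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, projected);
        System.out.println(projected.dump());
    }
}
```

If the projected corners do not line up with the corners you detect in the image, repeat the projection with the inverse rotation (and correspondingly adjusted translation) to see which convention your pipeline is actually using.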
I need to create an image (a polygon) from GPS coordinates. I have coordinates like this:
(49.274633220,17.160206083),(49.276968797,17.162732143),(49.278188519,17.162391767),(49.279761626,17.161087954), ......
And I need to transform them to XY pixel points. Each pair of coordinates is a vertex of the created polygon.
File with all coordinates:
GPS.txt
and this is how the created polygon should look:
Any idea how I can transform the coordinates? Thanks for all replies.
In all cases you need a transformation from lat, lon (spherical) to Cartesian (x, y) coordinates.
If the polygon is not bigger than 100 km, you can use a simple cylindrical equidistant projection.
Otherwise, you may use a Mercator projection (Google Maps uses that too).
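For the small-polygon case, here is a minimal Java sketch of the cylindrical equidistant (equirectangular) projection, assuming a spherical Earth of radius about 6,371 km and any reference point inside the polygon:

```java
/** Simple cylindrical equidistant projection: lat/lon (degrees) -> local x/y in metres. */
public final class EquirectangularProjection {
    private static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Returns {x, y} in metres relative to the reference point (lat0, lon0). */
    public static double[] project(double lat, double lon, double lat0, double lon0) {
        double x = EARTH_RADIUS_M * Math.toRadians(lon - lon0) * Math.cos(Math.toRadians(lat0));
        double y = EARTH_RADIUS_M * Math.toRadians(lat - lat0);
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // Second point from the question, projected relative to the first point.
        double[] p = project(49.276968797, 17.162732143, 49.274633220, 17.160206083);
        System.out.printf("x = %.1f m, y = %.1f m%n", p[0], p[1]);
    }
}
```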
Are you sure the assignment says to create a graphic? Or is it just to read the text file and extract the pairs of coordinates? Because you can't create a graphic without defining a transformation. I would start by finding the max and min latitude and longitude values (making sure which is which!). Then just use a linear scale for the longitude so that your minimum longitude goes to px=0 and the maximum longitude goes to however wide you want your image to be. Then do the same for latitude - it'll look distorted but at least you'll see something to start with.
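Something like this (Java) is what that linear scaling looks like; the 800-pixel image size is arbitrary, and the y axis is flipped because image rows grow downward while latitude grows upward:

```java
import java.awt.Point;
import java.util.List;

public final class GpsToPixels {
    /** Linearly maps (lat, lon) pairs into an imageWidth x imageHeight pixel grid. */
    public static Point[] toPixels(List<double[]> latLon, int imageWidth, int imageHeight) {
        double minLat = Double.MAX_VALUE, maxLat = -Double.MAX_VALUE;
        double minLon = Double.MAX_VALUE, maxLon = -Double.MAX_VALUE;
        for (double[] p : latLon) {
            minLat = Math.min(minLat, p[0]); maxLat = Math.max(maxLat, p[0]);
            minLon = Math.min(minLon, p[1]); maxLon = Math.max(maxLon, p[1]);
        }
        Point[] pixels = new Point[latLon.size()];
        for (int i = 0; i < latLon.size(); i++) {
            double lat = latLon.get(i)[0], lon = latLon.get(i)[1];
            int px = (int) Math.round((lon - minLon) / (maxLon - minLon) * (imageWidth - 1));
            // Flip y: the largest latitude should end up at the top of the image (row 0).
            int py = (int) Math.round((maxLat - lat) / (maxLat - minLat) * (imageHeight - 1));
            pixels[i] = new Point(px, py);
        }
        return pixels;
    }

    public static void main(String[] args) {
        // First four vertices from the question's coordinate list.
        List<double[]> poly = List.of(
                new double[] { 49.274633220, 17.160206083 },
                new double[] { 49.276968797, 17.162732143 },
                new double[] { 49.278188519, 17.162391767 },
                new double[] { 49.279761626, 17.161087954 });
        for (Point p : toPixels(poly, 800, 800)) System.out.println(p);
    }
}
```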
By the way, the graphic you pasted doesn't seem to correspond to the coordinates you gave. If it helps, yours look more like this red area.
I'm working on an Android game and would like to implement a 2D grid to visualize the effects of gravity on the playing field. I'd like to distort the grid based on various objects on my playing field. The effect I'm looking for is similar to the following from the Processing library:
Except that my grid will be simpler: 2D, and viewed strictly from the top, as if looking down at the playfield.
Can someone point me to an algorithm for drawing such a grid?
The one idea I came up with was to draw the lines as if they were "particles": start at one end of the screen and draw the line in multiple segments, treating each segment as a particle and calculating the effect of gravity at each segment's location.
The application is intended to run on Android.
Thanks
I would draw each line in separate segments, as you mentioned. If the grid is sparse, this might be the fastest approach.
If you are viewing the grid from above, you need to calculate x and y coordinate displacements. The easiest way is to actually do the displacement along the z axis and then fake perspective with x_result = x/z and y_result = y/z. Set z = 1 and make sure to vary it only slightly (±0.1, for instance).
Your z should be proportional to the sum of 1/(distance to the sphere)^2. This simulates how gravity works: it tapers off with the square of the distance. Great news: the square of the distance means calculating delta_x^2 + delta_y^2, so you save yourself the square root calculation, which is faster.
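A rough sketch of that per-vertex displacement in Java; the Mass strength values and the z clamp are made-up tuning numbers:

```java
/** Displaces one grid vertex using fake perspective driven by nearby "masses". */
public final class GravityGrid {
    /** An attracting object and a weight controlling how strongly it bends the grid. */
    public static final class Mass {
        final float x, y, strength;
        public Mass(float x, float y, float strength) { this.x = x; this.y = y; this.strength = strength; }
    }

    /** Returns the displaced {x, y} of a grid vertex at (x, y). */
    public static float[] displace(float x, float y, Mass[] masses) {
        float z = 1f;
        for (Mass m : masses) {
            float dx = x - m.x, dy = y - m.y;
            float distSq = dx * dx + dy * dy + 1f; // +1 avoids division by zero right on a mass
            z += m.strength / distSq;              // gravity tapers off with the square of the distance
        }
        z = Math.min(1.1f, Math.max(0.9f, z));     // keep z within roughly +-0.1 of 1
        // Fake perspective: divide by z. In practice you may want x and y measured from
        // the screen centre so the warp stays local instead of sliding everything toward the origin.
        return new float[] { x / z, y / z };
    }

    public static void main(String[] args) {
        Mass[] masses = { new Mass(240f, 400f, 500f) };  // one object on the playfield (placeholder)
        float[] p = displace(200f, 380f, masses);
        System.out.println("(" + p[0] + ", " + p[1] + ")");
    }
}
```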