I want to draw an arc using a center point, a starting point and an ending point on an OpenGL SurfaceView. I have tried the code given below so far. This function draws the expected arc if we set start_line_angle and end_line_angle manually in degrees (like start_line_angle = 0 and end_line_angle = 90).
But I need to draw an arc from the given coordinates (center point, starting point, ending point), calculating start_line_angle and end_line_angle programmatically.
The given function draws an arc with the given parameters but does not give the desired result. I've spent two days on this. Thanks in advance.
private void drawArc(GL10 gl, float radius, float cx, float cy,
                     float start_point_x, float start_point_y,
                     float end_point_x, float end_point_y) {
    gl.glLineWidth(1);
    int start_line_angle;
    // plain trigonometry: slope = tan^-1((y2 - y1) / (x2 - x1)) for the first line
    double sLine = Math.toDegrees(Math.atan((cy - start_point_y) / (cx - start_point_x)));
    // the same for the second line
    double eLine = Math.toDegrees(Math.atan((cy - end_point_y) / (cx - end_point_x)));
    // cast from double to int after rounding
    int start_line_Slope = (int) (sLine + 0.5);
    /**
     * Mapping the trigonometric angle system to the GLSurfaceView angle system:
     * the trigonometric system runs counter-clockwise, but the GLSurfaceView
     * system runs clockwise and its starting angle lies at 90 degrees of the
     * general trigonometric angle system.
     **/
    if (start_line_Slope <= 90) {
        start_line_angle = 90 - start_line_Slope;
    } else {
        start_line_angle = 360 - start_line_Slope + 90;
    }
    // int start_line_angle = 270;
    // int end_line_angle = 36;
    // cast from double to int
    int end_line_angle = (int) (eLine + 0.5);
    if (start_line_angle > end_line_angle) {
        start_line_angle = start_line_angle - 360;
    }
    int nCount = 0;
    float[] stVertexArray = new float[2 * (end_line_angle - start_line_angle)];
    float[] newStVertextArray;
    FloatBuffer sampleBuffer;
    // stVertexArray[0] = cx;
    // stVertexArray[1] = cy;
    for (int nR = start_line_angle; nR < end_line_angle; nR++) {
        float fX = (float) (cx + radius * Math.sin(nR * (Math.PI / 180)));
        float fY = (float) (cy + radius * Math.cos(nR * (Math.PI / 180)));
        stVertexArray[nCount * 2] = fX;
        stVertexArray[nCount * 2 + 1] = fY;
        nCount++;
    }
    // copy stVertexArray's points in reverse order so the line loop retraces itself
    float[] reverseArray = new float[stVertexArray.length];
    int count = 0;
    for (int i = (stVertexArray.length) / 2; i > 0; i--) {
        reverseArray[count] = stVertexArray[(i - 1) * 2];
        count++;
        reverseArray[count] = stVertexArray[(i - 1) * 2 + 1];
        count++;
    }
    // reset the counter to its initial value
    count = 0;
    int finalArraySize = stVertexArray.length + reverseArray.length;
    newStVertextArray = new float[finalArraySize];
    /** Now add all the values to the single newStVertextArray to draw the arc. **/
    // add stVertexArray to newStVertextArray
    for (float d : stVertexArray) {
        newStVertextArray[count++] = d;
    }
    // add reverseArray to newStVertextArray
    for (float d : reverseArray) {
        newStVertextArray[count++] = d;
    }
    Log.d("stArray", stVertexArray.length + "");
    Log.d("reverseArray", reverseArray.length + "");
    Log.d("newStArray", newStVertextArray.length + "");
    ByteBuffer bBuff = ByteBuffer.allocateDirect(newStVertextArray.length * 4);
    bBuff.order(ByteOrder.nativeOrder());
    sampleBuffer = bBuff.asFloatBuffer();
    sampleBuffer.put(newStVertextArray);
    sampleBuffer.position(0);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, sampleBuffer);
    gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, nCount * 2);
    gl.glLineWidth(1);
}
To begin with the trigonometry: you cannot simply use atan to find the angle in degrees. You need to check which quadrant the vector is in and increase or decrease the result you get from atan accordingly. Better yet, use atan2, which takes both dx and dy and does the job for you.
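For instance, a sketch of the angle computation with atan2, using the variable names from your drawArc (these are standard counter-clockwise trigonometric angles, before any mapping to your clockwise system):

double startAngleDeg = Math.toDegrees(Math.atan2(start_point_y - cy, start_point_x - cx));
double endAngleDeg = Math.toDegrees(Math.atan2(end_point_y - cy, end_point_x - cx));
// atan2 returns values in (-180, 180]; normalize to [0, 360) so the angles are comparable
if (startAngleDeg < 0) startAngleDeg += 360;
if (endAngleDeg < 0) endAngleDeg += 360;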
You seem to create the buffer so that one point is generated per degree. This is not the best solution: for a large radius that may be too few points, and for a small radius it is far too many. The tessellation should include the radius as well, so that the number of points N is N = abs((int)(deltaAngle*radius*tessellationFactor)); then use angleFragment = deltaAngle/N, but make sure that N is greater than 0 (N = N?N:1). The buffer size is then 2*(N+1) floats and the iteration is for(int i=0; i<=N; i++) angle = startAngle + angleFragment*i;.
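A minimal sketch of that tessellation, assuming standard trigonometric angles (x = cos, y = sin) rather than the rotated clockwise system of your original code; tessellationFactor is a tuning constant you pick:

float[] arcVertices(float cx, float cy, float radius,
                    double startAngle, double deltaAngle, double tessellationFactor) {
    int n = Math.abs((int) (deltaAngle * radius * tessellationFactor));
    if (n == 0) n = 1; // never tessellate down to zero segments
    double angleFragment = deltaAngle / n;
    float[] vertices = new float[2 * (n + 1)];
    for (int i = 0; i <= n; i++) {
        double angle = Math.toRadians(startAngle + angleFragment * i);
        vertices[2 * i] = (float) (cx + radius * Math.cos(angle));
        vertices[2 * i + 1] = (float) (cy + radius * Math.sin(angle));
    }
    return vertices;
}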
As already pointed out, you need to define the radius of the arc. It is quite normal to take it from an outside source the way you do and simply force it to that value, using the 3 points only for the center and the two borders. Some other options that usually make sense are:
getting the radius from the start line
getting the radius from the shorter of the two lines
getting the average of the two
interpolate the two to get an elliptic curve (explained below)
To interpolate the radius you need the two radii, startRadius and endRadius. You also need the overall angle difference, which was already used as deltaAngle above (watch out when computing this one, it is more complicated than it seems; for instance, drawing from 320 degrees to 10 degrees results in deltaAngle = 50). Anyway, the radius for a specific point is then simply radius = startRadius + (endRadius-startRadius)*abs((angleFragment*i)/deltaAngle). This represents a simple linear interpolation in the polar coordinate system, which is usually used to interpolate vectors in matrices and is the core functionality for getting nice animations.
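In code, the per-point radius could look like this (a sketch; startRadius, endRadius, angleFragment and deltaAngle are the names from the text, and the loop is the one from the tessellation sketch above):

for (int i = 0; i <= n; i++) {
    double r = startRadius + (endRadius - startRadius)
            * Math.abs((angleFragment * i) / deltaAngle);
    // use this interpolated radius r when computing the vertex for step i
}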
There are some other ways of getting the arc points which may be better performance-wise, but I would not suggest them unless and until you need to optimize your code, which should come very late in production. You may simply keep stepping toward the next point and correcting the radius (this is only a concept):
vec2 start, end, center; // input values
float radius;            // input value

// make the start and end relative to the center
start -= center;
end -= center;
vec2 current = start/length(start) * radius; // current position starts at the first vector
vec2 target = end/length(end) * radius;      // should be the last point
outputBuffer[0] = current + center;          // insert the first point
for (int i = 1;; i++) { // "break" will exit the loop; we need the index only for the buffer
    vec2 step = vec2(current.y, -current.x);      // a vector tangential to the current point around the center
    step = step/length(step) / tessellationScale; // normalize and apply tessellation
    vec2 next = current + step;                   // move tangentially
    next = next/length(next) * radius;            // normalize and snap back onto the radius
    if (dot(current - target, next - target) > .0) { // while we have not yet passed the target vector
        current = next;                           // advance to the next point
        outputBuffer[i] = current + center;       // insert into buffer
    }
    else {
        current = target;                         // simply use the target now
        outputBuffer[i] = current + center;       // insert into buffer
        break;                                    // exit
    }
}
Rotating Asteroids (Polygons)
I am trying to rotate asteroids (polygons) so that they look nice. I am doing this through multiple mathematical equations. To start, I give the individual asteroid a rotation velocity:
rotVel = ((Math.random()-0.5)*Math.PI/16);
Then I create the polygon shape,
this.shape = new Polygon();
Followed by generating the points,
for (j = 0; j < s; j++) {
    theta = 2 * Math.PI / s * j;
    r = MIN_ROCK_SIZE + (int) (Math.random() * (MAX_ROCK_SIZE - MIN_ROCK_SIZE));
    x = (int) -Math.round(r * Math.sin(theta)) + asteroidData[0];
    y = (int) Math.round(r * Math.cos(theta)) + asteroidData[1];
    shape.addPoint(x, y);
}
Finally, in a loop, a method is called which attempts to move the polygon and its points down as well as rotate them. (I'm just pasting the rotating part, as the other part is working.)
for (int i = 0; i < shape.npoints; i++) {
    // Subtract asteroid's x and y position
    double x = shape.xpoints[i] - asteroidData[0];
    double y = shape.ypoints[i] - asteroidData[1];
    double temp_x = ((x * Math.cos(rotVel)) - (y * Math.sin(rotVel)));
    double temp_y = ((x * Math.sin(rotVel)) + (y * Math.cos(rotVel)));
    shape.xpoints[i] = (int) Math.round(temp_x + asteroidData[0]);
    shape.ypoints[i] = (int) Math.round(temp_y + asteroidData[1]);
}
Now, the problem is that when it prints to the screen, the asteroids appear to 'warp'; that is, the x and y positions of some of the polygon points 'float' off course.
I've noticed that when I make rotVel a whole number the problem is solved, but then the asteroid rotates at mach speed. So I've concluded that the problem has to be in the rounding, but no matter what I do I can't seem to find a way to make it work, as the Polygon object requires an array of ints.
Does anyone know how to fix this?
Currently your asteroids rotate around (0, 0) as far as I can see. Correct would be to rotate them around the center of the shape, which is (n, m), where n is the average of all x-coordinates of the shape and m is the average of all y-coordinates of the shape.
Your problem is definitely caused by rounding to int! The first improvement is to make all shape coordinates of type double. This will remove most of your unwanted 'effects'.
But even with double you might experience nasty rounding errors if you apply many very small updates to the coordinates. The solution is simple: just avoid iterative updates of the asteroid points. Every time you update the coordinates based on the previous coordinates, the rounding error gets worse.
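A tiny demonstration of the drift (my own example, not from the original code): rotating a point by 1 degree 360 times with doubles lands very close to the start, but rounding to int after every step, as the original code does, compounds the error quickly.

double x = 100, y = 0, a = Math.toRadians(1);
for (int i = 0; i < 360; i++) {
    double nx = x * Math.cos(a) - y * Math.sin(a);
    double ny = x * Math.sin(a) + y * Math.cos(a);
    x = nx;
    y = ny;
}
System.out.println(x + ", " + y); // very close to (100, 0); with int rounding each step it is not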
Instead, add a field for the rotation angle to the shape and increment it instead of the points themselves. Only when drawing the shape do you compute the final positions by applying the rotation to the points. Note that this never changes the points themselves.
You can extend this concept to other transformations (e.g. translation) too. What you get is some kind of local coordinate system for every shape/object. The points of the shape are defined in the local coordinate system. By moving and rotating this system, you can reposition the entire object anywhere in space.
public class Shape {
    // rotation and position of the local coordinate system
    private double rot, x, y;
    // points of the shape in the local coordinate system
    private double[] xp, yp;
    private int npoints;
    // points of the shape in world coordinates
    private int[] wxp, wyp;
    private boolean valid;

    public void setRotation(double r) { this.rot = r; valid = false; }
    public void setPosition(double x, double y) { this.x = x; this.y = y; valid = false; }

    public void addPoint(double x, double y) {
        // TODO: add point to xp, yp
        valid = false;
    }

    public void draw(...) {
        if (!valid) {
            computeWorldCoordinates(wxp, wyp);
            valid = true;
        }
        // TODO: draw shape at world coordinates wxp and wyp
    }

    protected void computeWorldCoordinates(int[] xcoord, int[] ycoord) {
        for (int i = 0; i < npoints; i++) {
            double temp_x = xp[i] * Math.cos(rot) - yp[i] * Math.sin(rot);
            double temp_y = xp[i] * Math.sin(rot) + yp[i] * Math.cos(rot);
            xcoord[i] = (int) Math.round(x + temp_x);
            ycoord[i] = (int) Math.round(y + temp_y);
        }
    }
}
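For illustration, a sketch of how the asteroid loop might drive this class (rotVel and asteroidData are from the question; the point is that the angle accumulates in a single double and the shape points are never touched):

double rot = 0;
// each frame:
rot += rotVel;                                   // accumulate the angle, not the points
shape.setRotation(rot);
shape.setPosition(asteroidData[0], asteroidData[1]);
shape.draw(...);                                 // world coordinates are recomputed here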
I'm practicing for an exam, and I'm doing one of the practice problems. I have a method that takes two arguments: one for the radius of a circle, and one for the number of dots to place within that circle. The method is below:
private void drawDots(int radius, int numDots) {
    double ycord;
    double xcord;
    for (int q = 0; q < numDots; q++) {
        ycord = -radius + random() * (radius + radius + 1);
        xcord = pow((pow(radius, 2) - pow(ycord, 2)), 0.5);
        turt.moveTo(xcord, ycord);
        turt.penDown();
        turt.forward(0);
        turt.penUp();
    }
}
turt is an object I'm using to draw with; penDown()/penUp() place the pen on and lift it from the canvas, respectively.
I'm trying to define the x- and y-coordinates of the turt object so that it stays within a given radius. Say the radius is 100 and the number of dots is 200: how do I keep the object within that radius?
The question states that:
"To constain the dots to a circle of radius r, a random y-coord in the interval -r, r is chosen. To x-coord is then randomly chosen in the interval -b, b, where b = sqrt(r^2 - y^2)."
I'm just not sure how to make sense of this math. The code above was my best attempt, but the output is strange.
The distance from the center (0,0) to a dot must be less than the radius of the circle, r. The distance can be expressed as sqrt(x² + y²). Therefore, if you choose your y coordinate randomly in [-r, r], you just have to make sure that your x coordinate respects that inequality, hence the math in your question.
Demonstration
sqrt(x² + y²) < r
x² + y² < r²
x² < r² - y²
|x| < sqrt(r² - y²)
Your algorithm should be as follows: once you have chosen the y coordinate, you can choose x randomly as long as it respects the distance constraint.
private void drawDots(int radius, int numDots) {
    double y;
    double x;
    double xMax;
    for (int q = 0; q < numDots; q++) {
        // y is chosen randomly in [-radius, radius)
        y = -radius + random() * (2 * radius);
        // x must respect x² + y² < r²
        xMax = pow((pow(radius, 2) - pow(y, 2)), 0.5);
        x = random() * 2 * xMax - xMax;
        turt.moveTo(x, y);
        turt.penDown();
        turt.forward(0);
        turt.penUp();
    }
}
Take a look at the documentation for random; you will see that by default it produces a number between 0 and 1.
Basically this means that the expression you are looking for is:
ycord = -radius + random() * (radius * 2);
This gives you a point on the y axis between -radius and radius (consider: if random() returns 0 you get -radius, and if it returns 1 you get -radius + (2 * radius) = radius).
Your calculation for the x co-ordinate is correct, but it gives you the x coordinate of the point on the circle (let's call it b). I suspect you want to use a new random number to select an x co-ordinate between -b and b.
At present you are drawing points on the circle, not inside it. That is because you are not following the guideline correctly.
b = pow((pow(radius, 2) - pow(ycord, 2)), 0.5); // this is b, the x coordinate on the circle
xcord = -b + random() * (b + b);
Is there a reason that they decided not to add the contains method (for Path) in Android?
I want to know what points I have in a Path, and I hoped it would be easier than what is seen here:
How can I tell if a closed path contains a given point?
Would it be better for me to create an ArrayList and add the integers to the array? (I only check the points once, in a control statement, i.e. if (myPath.contains(x, y)).)
So far my options are:
Using a Region
Using an ArrayList
Extending the Class
Your suggestion
I'm just looking for the most efficient way I should go about this
I came up against this same problem a little while ago, and after some searching, I found this to be the best solution.
Java has a Polygon class with a contains() method that would make things really simple. Unfortunately, the java.awt.Polygon class is not supported in Android. However, I was able to find someone who wrote an equivalent class.
I don't think you can get the individual points that make up the path from the Android Path class, so you will have to store the data in a different way.
The class uses a Crossing Number algorithm to determine whether or not the point is inside of the given list of points.
/**
 * Minimum Polygon class for Android.
 */
public class Polygon
{
    // Polygon coordinates.
    private int[] polyY, polyX;
    // Number of sides in the polygon.
    private int polySides;

    /**
     * Default constructor.
     * @param px Polygon x coords.
     * @param py Polygon y coords.
     * @param ps Polygon sides count.
     */
    public Polygon( int[] px, int[] py, int ps )
    {
        polyX = px;
        polyY = py;
        polySides = ps;
    }

    /**
     * Checks if the Polygon contains a point.
     * @see "http://alienryderflex.com/polygon/"
     * @param x Point horizontal pos.
     * @param y Point vertical pos.
     * @return Point is in Poly flag.
     */
    public boolean contains( int x, int y )
    {
        boolean oddTransitions = false;
        for( int i = 0, j = polySides - 1; i < polySides; j = i++ )
        {
            if( ( polyY[ i ] < y && polyY[ j ] >= y ) || ( polyY[ j ] < y && polyY[ i ] >= y ) )
            {
                if( polyX[ i ] + ( y - polyY[ i ] ) / ( polyY[ j ] - polyY[ i ] ) * ( polyX[ j ] - polyX[ i ] ) < x )
                {
                    oddTransitions = !oddTransitions;
                }
            }
        }
        return oddTransitions;
    }
}
I would just like to comment on @theisenp's answer: the code has integer arrays, and if you look at the algorithm description webpage, it warns against using integers instead of floating point.
I copied your code above and it seemed to work fine, except for some corner cases when I made lines that didn't connect to themselves very well.
By changing everything to floating point, I got rid of this bug.
Tried the other answer, but it gave an erroneous outcome for my case. I didn't bother to find the exact cause, but made my own direct translation of the algorithm from:
http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html
Now the code reads:
/**
 * Minimum Polygon class for Android.
 */
public class Polygon
{
    // Polygon coordinates.
    private int[] polyY, polyX;
    // Number of sides in the polygon.
    private int polySides;

    /**
     * Default constructor.
     * @param px Polygon x coords.
     * @param py Polygon y coords.
     * @param ps Polygon sides count.
     */
    public Polygon( int[] px, int[] py, int ps )
    {
        polyX = px;
        polyY = py;
        polySides = ps;
    }

    /**
     * Checks if the Polygon contains a point.
     * @see "http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html"
     * @param x Point horizontal pos.
     * @param y Point vertical pos.
     * @return Point is in Poly flag.
     */
    public boolean contains( int x, int y )
    {
        boolean c = false;
        int i, j;
        for (i = 0, j = polySides - 1; i < polySides; j = i++) {
            if (((polyY[i] > y) != (polyY[j] > y))
                    && (x < (polyX[j] - polyX[i]) * (y - polyY[i]) / (polyY[j] - polyY[i]) + polyX[i]))
                c = !c;
        }
        return c;
    }
}
For completeness, I want to make a couple notes here:
As of API 19, there is an intersection operation for Paths. You could create a very small square path around your test point, intersect it with the Path, and see whether the result is empty or not (see the sketch after this list).
You can convert Paths to Regions and do a contains() operation. However Regions work in integer coordinates, and I think they use transformed (pixel) coordinates, so you'll have to work with that. I also suspect that the conversion process is computationally intensive.
The edge-crossing algorithm that Hans posted is good and quick, but you have to be very careful about certain corner cases, such as when the ray passes directly through a vertex or intersects a horizontal edge, or when round-off error is a problem, which it always is.
The winding number method is pretty much foolproof, but involves a lot of trig and is computationally expensive.
This paper by Dan Sunday gives a hybrid algorithm that's as accurate as the winding number but as computationally simple as the ray-casting algorithm. It blew me away how elegant it was.
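For illustration, a rough sketch of the first two options (Path.op() requires API 19; the helper names are mine, and you may want a larger test square than one pixel):

boolean containsViaOp(Path path, float x, float y) {
    Path touch = new Path();
    touch.addRect(x - 0.5f, y - 0.5f, x + 0.5f, y + 0.5f, Path.Direction.CW);
    touch.op(path, Path.Op.INTERSECT); // keep only the overlap with the tested Path
    return !touch.isEmpty();
}

boolean containsViaRegion(Path path, int x, int y) {
    RectF bounds = new RectF();
    path.computeBounds(bounds, true);
    Region clip = new Region((int) bounds.left, (int) bounds.top,
            (int) bounds.right, (int) bounds.bottom);
    Region region = new Region();
    region.setPath(path, clip); // Regions are integer/pixel based
    return region.contains(x, y);
}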
My code
This is some code I wrote recently in Java which handles a path made out of both line segments and arcs. (Also circles, but those are complete paths on their own, so it's sort of a degenerate case.)
package org.efalk.util;

/**
 * Utility: determine if a point is inside a path.
 */
public class PathUtil {
    static final double RAD = (Math.PI/180.);
    static final double DEG = (180./Math.PI);

    protected static final int LINE = 0;
    protected static final int ARC = 1;
    protected static final int CIRCLE = 2;

    /**
     * Used to cache the contents of a path for pick testing. For a
     * line segment, x0,y0,x1,y1 are the endpoints of the line. For
     * a circle (ellipse, actually), x0,y0,x1,y1 are the bounding box
     * of the circle (this is how Android and X11 like to represent
     * circles). For an arc, x0,y0,x1,y1 are the bounding box, a0 is
     * the start angle (degrees CCW from the +X direction) and a1 is
     * the sweep angle (degrees CCW).
     */
    public static class PathElement {
        public int type;
        public float x0,y0,x1,y1;   // Endpoints or bounding box
        public float a0,a1;         // Arcs and circles
    }

    /**
     * Determine if the given point is inside the given path.
     */
    public static boolean inside(float x, float y, PathElement[] path) {
        // Based on algorithm by Dan Sunday, but allows for arc segments too.
        // http://geomalgorithms.com/a03-_inclusion.html
        int wn = 0;
        // loop through all edges of the polygon
        // An upward crossing requires y0 <= y and y1 > y
        // A downward crossing requires y0 > y and y1 <= y
        for (PathElement pe : path) {
            switch (pe.type) {
              case LINE:
                if (pe.x0 < x && pe.x1 < x)         // left
                    break;
                if (pe.y0 <= y) {                   // start y <= P.y
                    if (pe.y1 > y) {                // an upward crossing
                        if (isLeft(pe, x, y) > 0)   // P left of edge
                            ++wn;                   // have a valid up intersect
                    }
                }
                else {                              // start y > P.y
                    if (pe.y1 <= y) {               // a downward crossing
                        if (isLeft(pe, x, y) < 0)   // P right of edge
                            --wn;                   // have a valid down intersect
                    }
                }
                break;
              case ARC:
                wn += arcCrossing(pe, x, y);
                break;
              case CIRCLE:
                // This should be the only element in the path, so test it
                // and get out.
                float rx = (pe.x1-pe.x0)/2;
                float ry = (pe.y1-pe.y0)/2;
                float xc = (pe.x1+pe.x0)/2;
                float yc = (pe.y1+pe.y0)/2;
                return (x-xc)*(x-xc)/(rx*rx) + (y-yc)*(y-yc)/(ry*ry) <= 1;
            }
        }
        return wn != 0;
    }

    /**
     * Return >0 if p is left of line p0-p1; <0 if to the right; 0 if
     * on the line.
     */
    private static float
    isLeft(float x0, float y0, float x1, float y1, float x, float y)
    {
        return (x1 - x0) * (y - y0) - (x - x0) * (y1 - y0);
    }

    private static float isLeft(PathElement pe, float x, float y) {
        return isLeft(pe.x0,pe.y0, pe.x1,pe.y1, x,y);
    }

    /**
     * Determine if an arc segment crosses the test ray up or down, or not
     * at all.
     * @return winding number increment:
     *   +1  upward crossing
     *    0  no crossing
     *   -1  downward crossing
     */
    private static int arcCrossing(PathElement pe, float x, float y) {
        // Look for trivial reject cases first.
        if (pe.x1 < x || pe.y1 < y || pe.y0 > y) return 0;
        // Find the intersection of the test ray with the arc. This consists
        // of finding the intersection(s) of the line with the ellipse that
        // contains the arc, then determining if the intersection(s)
        // are within the limits of the arc.
        // Since we're mostly concerned with whether or not there *is* an
        // intersection, we have several opportunities to punt.
        // An upward crossing requires y0 <= y and y1 > y
        // A downward crossing requires y0 > y and y1 <= y
        float rx = (pe.x1-pe.x0)/2;
        float ry = (pe.y1-pe.y0)/2;
        float xc = (pe.x1+pe.x0)/2;
        float yc = (pe.y1+pe.y0)/2;
        if (rx == 0 || ry == 0) return 0;
        if (rx < 0) rx = -rx;
        if (ry < 0) ry = -ry;
        // We start by transforming everything so the ellipse is the unit
        // circle; this simplifies the math.
        x -= xc;
        y -= yc;
        if (x > rx || y > ry || y < -ry) return 0;
        x /= rx;
        y /= ry;
        // Now find the points of intersection. This is simplified by the
        // fact that our line is horizontal. Also, by the time we get here,
        // we know there *is* an intersection.
        // The equation for the circle is x²+y² = 1. We have y, so solve
        // for x = ±sqrt(1 - y²)
        double x0 = 1 - y*y;
        if (x0 <= 0) return 0;
        x0 = Math.sqrt(x0);
        // We only care about intersections to the right of x, so
        // that's another opportunity to punt. For a CCW arc, the right
        // intersection is an upward crossing and the left intersection
        // is a downward crossing. The reverse is true for a CW arc.
        if (x > x0) return 0;
        int wn = arcXing1(x0,y, pe.a0, pe.a1);
        if (x < -x0) wn -= arcXing1(-x0,y, pe.a0, pe.a1);
        return wn;
    }

    /**
     * Return the winding number contribution of the point x,y on the unit
     * circle which passes through the arc segment defined by a0,a1.
     */
    private static int arcXing1(double x, float y, float a0, float a1) {
        double a = Math.atan2(y,x) * DEG;
        if (a < 0) a += 360;
        if (a1 > 0) {           // CCW
            if (a < a0) a += 360;
            return a0 + a1 > a ? 1 : 0;
        } else {                // CW
            if (a0 < a) a0 += 360;
            return a0 + a1 <= a ? -1 : 0;
        }
    }
}
Edit: by request, adding some sample code that makes use of this.
import org.efalk.util.PathUtil;
import org.efalk.util.PathUtil.PathElement;

/**
 * This class represents a single geographic area defined by a
 * circle or a list of line segments and arcs.
 */
public class Area {
    public float lat0, lon0, lat1, lon1;  // bounds
    Path path = null;
    PathElement[] pathList;

    /**
     * Return true if this point is inside the area bounds. This is
     * used to confirm touch events and may be computationally expensive.
     */
    public boolean pointInBounds(float lat, float lon) {
        if (lat < lat0 || lat > lat1 || lon < lon0 || lon > lon1)
            return false;
        return PathUtil.inside(lon, lat, pathList);
    }

    void loadBounds() {
        int n = number_of_elements_in_input;
        path = new Path();
        pathList = new PathElement[n];
        int i = 0;
        for (Element element : elements_in_input) {
            PathElement pe = new PathElement();
            pathList[i++] = pe;
            pe.type = element.type;
            switch (element.type) {
              case LINE:     // Line segment
                pe.x0 = element.x0;
                pe.y0 = element.y0;
                pe.x1 = element.x1;
                pe.y1 = element.y1;
                // Add to path, not shown here
                break;
              case ARC:      // Arc segment
                pe.x0 = element.xmin;  // Bounds of arc ellipse
                pe.y0 = element.ymin;
                pe.x1 = element.xmax;
                pe.y1 = element.ymax;
                pe.a0 = a0; pe.a1 = a1;
                // Add to path, not shown here
                break;
              case CIRCLE:   // Circle; hopefully the only entry here
                pe.x0 = element.xmin;  // Bounds of ellipse
                pe.y0 = element.ymin;
                pe.x1 = element.xmax;
                pe.y1 = element.ymax;
                // Add to path, not shown here
                break;
            }
        }
        path.close();
    }
}
I am currently working on using Bézier curves and surfaces to draw the famous Utah teapot. Using Bézier patches of 16 control points, I have been able to draw the teapot and display it using a 'world to camera' function which gives the ability to rotate the resulting teapot; I am currently using an orthographic projection.
The result is that I have a 'flat' teapot, which is expected, as the purpose of an orthographic projection is to preserve parallel lines.
However, I would like to use a perspective projection to give the teapot depth. My question is: how does one take the 3D xyz vertex returned from the 'world to camera' function and convert it into a 2D coordinate? I want to use the projection plane at z = 0 and allow the user to determine the focal length and image size using the arrow keys on the keyboard.
I am programming this in Java and have all of the input event handlers set up, and I have also written a matrix class which handles basic matrix multiplication. I've been reading through Wikipedia and other resources for a while, but I can't quite get a handle on how one performs this transformation.
The standard way to represent 2D/3D transformations nowadays is by using homogeneous coordinates. [x,y,w] for 2D, and [x,y,z,w] for 3D. Since you have three axes in 3D as well as translation, that information fits perfectly in a 4x4 transformation matrix. I will use column-major matrix notation in this explanation. All matrices are 4x4 unless noted otherwise.
The stages from 3D points to a rasterized point, line or polygon look like this:
Transform your 3D points with the inverse camera matrix, followed by whatever transformations they need. If you have surface normals, transform them as well, but with w set to zero, as you don't want to translate normals. The matrix you transform normals with must be isotropic; scaling and shearing make the normals malformed.
Transform the point with a clip space matrix. This matrix scales x and y with the field-of-view and aspect ratio, scales z by the near and far clipping planes, and plugs the 'old' z into w. After the transformation, you should divide x, y and z by w. This is called the perspective divide.
Now your vertices are in clip space, and you want to perform clipping so you don't render any pixels outside the viewport bounds. Sutherland-Hodgman clipping is the most widespread clipping algorithm in use.
Transform x and y with respect to w and the half-width and half-height. Your x and y coordinates are now in viewport coordinates. w is discarded, but 1/w and z are usually saved, because 1/w is required to do perspective-correct interpolation across the polygon surface, and z is stored in the z-buffer and used for depth testing.
This stage is the actual projection, because z isn't used as a component in the position any more.
The algorithms:
Calculation of field-of-view
This calculates the field of view. Whether tan takes radians or degrees is irrelevant, but angle must match. Notice that the result approaches infinity as angle nears 180 degrees. This is a singularity, as it is impossible to have a focal point that wide. If you want numerical stability, keep angle less than or equal to 179 degrees.
fov = 1.0 / tan(angle/2.0)
Also notice that 1.0 / tan(45°) = 1. Someone else here suggested just dividing by z; the result of that is clear here: you would get a 90 degree FOV and an aspect ratio of 1:1. Using homogeneous coordinates like this has several other advantages as well; we can, for example, perform clipping against the near and far planes without treating it as a special case.
Calculation of the clip matrix
This is the layout of the clip matrix. aspectRatio is Width/Height, so the FOV for the x component is scaled based on the FOV for y. far and near are the distances of the far and near clipping planes.
[fov * aspectRatio][ 0 ][ 0 ][ 0 ]
[ 0 ][ fov ][ 0 ][ 0 ]
[ 0 ][ 0 ][(far+near)/(far-near) ][ 1 ]
[ 0 ][ 0 ][(2*near*far)/(near-far)][ 0 ]
Screen Projection
After clipping, this is the final transformation to get our screen coordinates.
new_x = (x * Width ) / (2.0 * w) + halfWidth;
new_y = (y * Height) / (2.0 * w) + halfHeight;
Trivial example implementation in C++
#include <vector>
#include <cmath>
#include <stdexcept>
#include <algorithm>

struct Vector
{
    Vector() : x(0),y(0),z(0),w(1){}
    Vector(float a, float b, float c) : x(a),y(b),z(c),w(1){}

    /* Assume proper operator overloads here, with vectors and scalars */

    float Length() const
    {
        return std::sqrt(x*x + y*y + z*z);
    }

    Vector Unit() const
    {
        const float epsilon = 1e-6;
        float mag = Length();
        if(mag < epsilon){
            std::out_of_range e("");
            throw e;
        }
        return *this / mag;
    }
};

inline float Dot(const Vector& v1, const Vector& v2)
{
    return v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;
}

class Matrix
{
public:
    Matrix() : data(16)
    {
        Identity();
    }

    void Identity()
    {
        std::fill(data.begin(), data.end(), float(0));
        data[0] = data[5] = data[10] = data[15] = 1.0f;
    }

    float& operator[](size_t index)
    {
        if(index >= 16){
            std::out_of_range e("");
            throw e;
        }
        return data[index];
    }

    Matrix operator*(const Matrix& m) const
    {
        Matrix dst;
        int col;
        for(int y=0; y<4; ++y){
            col = y*4;
            for(int x=0; x<4; ++x){
                for(int i=0; i<4; ++i){
                    dst[x+col] += m[i+col]*data[x+i*4];
                }
            }
        }
        return dst;
    }

    Matrix& operator*=(const Matrix& m)
    {
        *this = (*this) * m;
        return *this;
    }

    /* The interesting stuff */
    void SetupClipMatrix(float fov, float aspectRatio, float near, float far)
    {
        Identity();
        float f = 1.0f / std::tan(fov * 0.5f);
        data[0]  = f*aspectRatio;
        data[5]  = f;
        data[10] = (far+near) / (far-near);
        data[11] = 1.0f; /* this 'plugs' the old z into w */
        data[14] = (2.0f*near*far) / (near-far);
        data[15] = 0.0f;
    }

    std::vector<float> data;
};

inline Vector operator*(const Vector& v, const Matrix& m)
{
    Vector dst;
    dst.x = v.x*m[0] + v.y*m[4] + v.z*m[8 ] + v.w*m[12];
    dst.y = v.x*m[1] + v.y*m[5] + v.z*m[9 ] + v.w*m[13];
    dst.z = v.x*m[2] + v.y*m[6] + v.z*m[10] + v.w*m[14];
    dst.w = v.x*m[3] + v.y*m[7] + v.z*m[11] + v.w*m[15];
    return dst;
}

typedef std::vector<Vector> VecArr;

VecArr ProjectAndClip(int width, int height, float near, float far, const VecArr& vertex)
{
    float halfWidth  = (float)width * 0.5f;
    float halfHeight = (float)height * 0.5f;
    float aspect     = (float)width / (float)height;
    Vector v;
    Matrix clipMatrix;
    VecArr dst;
    clipMatrix.SetupClipMatrix(60.0f * (M_PI / 180.0f), aspect, near, far);
    /*  Here, after the perspective divide, you perform Sutherland-Hodgman clipping
        by checking if the x, y and z components are inside the range of [-w, w].
        One checks each vector component separately against each plane. Per-vertex
        data like colours, normals and texture coordinates need to be linearly
        interpolated for clipped edges to reflect the change. If the edge (v0,v1)
        is tested against the positive x plane, and v1 is outside, the interpolant
        becomes: (v1.x - w) / (v1.x - v0.x)
        I skip this stage altogether to be brief.
    */
    for(VecArr::iterator i=vertex.begin(); i!=vertex.end(); ++i){
        v = (*i) * clipMatrix;
        v /= v.w; /* Don't get confused here. I assume the divide leaves v.w alone. */
        dst.push_back(v);
    }
    /* TODO: Clipping here */
    for(VecArr::iterator i=dst.begin(); i!=dst.end(); ++i){
        i->x = (i->x * (float)width) / (2.0f * i->w) + halfWidth;
        i->y = (i->y * (float)height) / (2.0f * i->w) + halfHeight;
    }
    return dst;
}
If you still ponder about this, the OpenGL specification is a really nice reference for the maths involved.
The DevMaster forums at http://www.devmaster.net/ have a lot of nice articles related to software rasterizers as well.
I think this will probably answer your question. Here's what I wrote there:
Here's a very general answer. Say the camera's at (Xc, Yc, Zc) and the point you want to project is P = (X, Y, Z). The distance from the camera to the 2D plane onto which you are projecting is F (so the equation of the plane is Z-Zc=F). The 2D coordinates of P projected onto the plane are (X', Y').
Then, very simply:
X' = ((X - Xc) * (F / (Z - Zc))) + Xc
Y' = ((Y - Yc) * (F / (Z - Zc))) + Yc
If your camera is the origin, then this simplifies to:
X' = X * (F/Z)
Y' = Y * (F/Z)
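In Java, that simplified case might look like this (illustrative names; it assumes the camera is at the origin and the point is in front of it, z > 0):

static float[] project(float x, float y, float z, float f) {
    // similar triangles: scale x and y by the ratio of focal length to depth
    return new float[] { x * (f / z), y * (f / z) };
}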
To obtain the perspective-corrected co-ordinates, just divide by the z co-ordinate:
xc = x / z
yc = y / z
The above works assuming that the camera is at (0, 0, 0) and you are projecting onto the plane at z = 1 -- you need to translate the co-ords relative to the camera otherwise.
There are some complications for curves, insofar as projecting the points of a 3D Bezier curve will not in general give you the same points as drawing a 2D Bezier curve through the projected points.
You can project a 3D point into 2D using Commons Math: The Apache Commons Mathematics Library, with just two classes.
An example for Java Swing:
import org.apache.commons.math3.geometry.euclidean.threed.Plane;
import org.apache.commons.math3.geometry.euclidean.threed.Vector3D;

Plane planeX = new Plane(new Vector3D(1, 0, 0));
Plane planeY = new Plane(new Vector3D(0, 1, 0)); // must be a plane orthogonal to planeX

void drawPoint(Graphics2D g2, Vector3D v) {
    g2.drawLine(0, 0,
            (int) (world.unit * planeX.getOffset(v)),
            (int) (world.unit * planeY.getOffset(v)));
}

protected void paintComponent(Graphics g) {
    super.paintComponent(g);
    Graphics2D g2 = (Graphics2D) g;
    drawPoint(g2, new Vector3D(2, 1, 0));
    drawPoint(g2, new Vector3D(0, 2, 0));
    drawPoint(g2, new Vector3D(0, 0, 2));
    drawPoint(g2, new Vector3D(1, 1, 1));
}
Now you only need to update planeX and planeY to change the perspective projection.
Looking at the screen from the top, you get the x and z axes.
Looking at the screen from the side, you get the y and z axes.
Calculate the focal lengths of the top and side views using trigonometry; the focal length is the distance between the eye and the middle of the screen, and it is determined by the field of view of the screen.
This makes the shape of two right triangles back to back.
hw = screen_width / 2
hh = screen_height / 2
fl_top = hw / tan(θ/2)
fl_side = hh / tan(θ/2)
Then take the average focal length.
fl_average = (fl_top + fl_side) / 2
Now calculate the new x and new y with basic arithmetic, since the larger right triangle made from the 3D point and the eye point is similar to the smaller triangle made by the 2D point and the eye point.
x' = (x * fl_top) / (z + fl_top)
y' = (y * fl_top) / (z + fl_top)
Or you can simply set
x' = x / (z + 1)
and
y' = y / (z + 1)
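Putting those steps together, a sketch in Java (theta is the field-of-view angle in radians; all names here are mine):

static float[] projectWithFocalLength(float x, float y, float z,
                                      float screenW, float screenH, float theta) {
    float flTop = (screenW / 2f) / (float) Math.tan(theta / 2);
    float flSide = (screenH / 2f) / (float) Math.tan(theta / 2);
    float fl = (flTop + flSide) / 2f; // average focal length
    return new float[] { (x * fl) / (z + fl), (y * fl) / (z + fl) };
}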
I'm not sure at what level you're asking this question. It sounds as if you've found the formulas online, and are just trying to understand what it does. On that reading of your question I offer:
Imagine a ray from the viewer (at point V) directly towards the center of the projection plane (call it C).
Imagine a second ray from the viewer to a point in the image (P) which also intersects the projection plane at some point (Q)
The viewer and the two points of intersection on the view plane form a triangle (VCQ); the sides are the two rays and the line between the points in the plane.
The formulas use this triangle to find the coordinates of Q, which is where the projected pixel will go.
All of the answers address the question posed in the title, but I would like to add a caveat that is implicit in the text. Bézier patches are used to represent the surface, but you cannot just transform the control points of a patch and tessellate the transformed patch into polygons, because this will result in distorted geometry. You can, however, tessellate the patch first into polygons using a transformed screen tolerance and then transform the polygons, or you can convert the Bézier patches to rational Bézier patches and then tessellate those using a screen-space tolerance. The former is easier, but the latter is better for a production system.
I suspect that you want the easier way. For this, you would scale the screen tolerance by the norm of the Jacobian of the inverse perspective transformation and use that to determine the amount of tessellation you need in model space (it might be easier to compute the forward Jacobian, invert it, and then take the norm). Note that this norm is position-dependent, and you may want to evaluate it at several locations, depending on the perspective. Also remember that since the projective transformation is rational, you need to apply the quotient rule to compute the derivatives.
Thanks to @Mads Elvenheim for a proper example code. I have fixed the minor syntax errors in the code (just a few const problems and obvious missing operators). Also, near and far have special meanings in VS (they are legacy keywords), hence the NEAR_Z and FAR_Z names.
For your pleasure, here is the compilable (MSVC 2013) version. Have fun.
Mind that I have made NEAR_Z and FAR_Z constant. You probably don't want it like that.
#include <vector>
#include <cmath>
#include <stdexcept>
#include <algorithm>

#define M_PI 3.14159
#define NEAR_Z 0.5
#define FAR_Z 2.5

struct Vector
{
    float x;
    float y;
    float z;
    float w;

    Vector() : x( 0 ), y( 0 ), z( 0 ), w( 1 ) {}
    Vector( float a, float b, float c ) : x( a ), y( b ), z( c ), w( 1 ) {}

    /* Assume proper operator overloads here, with vectors and scalars */
    float Length() const
    {
        return std::sqrt( x*x + y*y + z*z );
    }

    Vector& operator*=(float fac) noexcept
    {
        x *= fac;
        y *= fac;
        z *= fac;
        return *this;
    }

    Vector operator*(float fac) const noexcept
    {
        return Vector(*this) *= fac;
    }

    Vector& operator/=(float div) noexcept
    {
        return operator*=(1 / div); // avoid divisions: they are much
                                    // more costly than multiplications
    }

    Vector Unit() const
    {
        const float epsilon = 1e-6;
        float mag = Length();
        if (mag < epsilon) {
            std::out_of_range e( "" );
            throw e;
        }
        return Vector(*this) /= mag;
    }
};

inline float Dot( const Vector& v1, const Vector& v2 )
{
    return v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;
}

class Matrix
{
public:
    Matrix() : data( 16 )
    {
        Identity();
    }

    void Identity()
    {
        std::fill( data.begin(), data.end(), float( 0 ) );
        data[0] = data[5] = data[10] = data[15] = 1.0f;
    }

    float& operator[]( size_t index )
    {
        if (index >= 16) {
            std::out_of_range e( "" );
            throw e;
        }
        return data[index];
    }

    const float& operator[]( size_t index ) const
    {
        if (index >= 16) {
            std::out_of_range e( "" );
            throw e;
        }
        return data[index];
    }

    Matrix operator*( const Matrix& m ) const
    {
        Matrix dst;
        int col;
        for (int y = 0; y < 4; ++y) {
            col = y * 4;
            for (int x = 0; x < 4; ++x) {
                for (int i = 0; i < 4; ++i) {
                    dst[x + col] += m[i + col] * data[x + i * 4];
                }
            }
        }
        return dst;
    }

    Matrix& operator*=( const Matrix& m )
    {
        *this = (*this) * m;
        return *this;
    }

    /* The interesting stuff */
    void SetupClipMatrix( float fov, float aspectRatio )
    {
        Identity();
        float f = 1.0f / std::tan( fov * 0.5f );
        data[0]  = f * aspectRatio;
        data[5]  = f;
        data[10] = (FAR_Z + NEAR_Z) / (FAR_Z - NEAR_Z);
        data[11] = 1.0f; /* this 'plugs' the old z into w */
        data[14] = (2.0f * NEAR_Z * FAR_Z) / (NEAR_Z - FAR_Z);
        data[15] = 0.0f;
    }

    std::vector<float> data;
};

inline Vector operator*( const Vector& v, Matrix& m )
{
    Vector dst;
    dst.x = v.x*m[0] + v.y*m[4] + v.z*m[8]  + v.w*m[12];
    dst.y = v.x*m[1] + v.y*m[5] + v.z*m[9]  + v.w*m[13];
    dst.z = v.x*m[2] + v.y*m[6] + v.z*m[10] + v.w*m[14];
    dst.w = v.x*m[3] + v.y*m[7] + v.z*m[11] + v.w*m[15];
    return dst;
}

typedef std::vector<Vector> VecArr;

VecArr ProjectAndClip( int width, int height, const VecArr& vertex )
{
    float halfWidth  = (float)width * 0.5f;
    float halfHeight = (float)height * 0.5f;
    float aspect     = (float)width / (float)height;
    Vector v;
    Matrix clipMatrix;
    VecArr dst;
    clipMatrix.SetupClipMatrix( 60.0f * (M_PI / 180.0f), aspect );
    /*  Here, after the perspective divide, you perform Sutherland-Hodgman clipping
        by checking if the x, y and z components are inside the range of [-w, w].
        One checks each vector component separately against each plane. Per-vertex
        data like colours, normals and texture coordinates need to be linearly
        interpolated for clipped edges to reflect the change. If the edge (v0,v1)
        is tested against the positive x plane, and v1 is outside, the interpolant
        becomes: (v1.x - w) / (v1.x - v0.x)
        I skip this stage altogether to be brief.
    */
    for (VecArr::const_iterator i = vertex.begin(); i != vertex.end(); ++i) {
        v = (*i) * clipMatrix;
        v /= v.w; /* Don't get confused here. I assume the divide leaves v.w alone. */
        dst.push_back( v );
    }
    /* TODO: Clipping here */
    for (VecArr::iterator i = dst.begin(); i != dst.end(); ++i) {
        i->x = (i->x * (float)width) / (2.0f * i->w) + halfWidth;
        i->y = (i->y * (float)height) / (2.0f * i->w) + halfHeight;
    }
    return dst;
}
I know it's an old topic, but your illustration is not correct; the source code sets up the clip matrix correctly. The layout should be:
[fov * aspectRatio][ 0 ][ 0 ][ 0 ]
[ 0 ][ fov ][ 0 ][ 0 ]
[ 0 ][ 0 ][(far+near)/(far-near) ][(2*near*far)/(near-far)]
[ 0 ][ 0 ][ 1 ][ 0 ]
Some additions to your answer:
This clip matrix works only if you are projecting onto a static 2D plane. If you want to add camera movement and rotation:
viewMatrix = clipMatrix * cameraTranslationMatrix4x4 * cameraRotationMatrix4x4;
This lets you rotate the 2D plane and move it around.
You might want to debug your system with spheres to determine whether or not you have a good field of view. If you have it too wide, the spheres will deform at the edges of the screen into more oval forms pointed toward the center of the frame. The solution to this problem is to zoom in on the frame by multiplying the x and y coordinates of the 3-dimensional point by a scalar, and then shrinking your object or world down by a similar factor. Then you get a nice, even, round sphere across the entire frame.
I'm almost embarrassed that it took me all day to figure this one out, and I was almost convinced that some spooky, mysterious geometric phenomenon was going on here that demanded a different approach.
Yet the importance of calibrating the zoom-frame-of-view coefficient by rendering spheres cannot be overstated. If you do not know where the "habitable zone" of your universe is, you will end up walking on the sun and scrapping the project. You want to be able to render a sphere anywhere in your frame of view and have it appear round. In my project, the unit sphere is massive compared to the region I'm describing.
Also, the obligatory Wikipedia entry:
Spherical Coordinate System