OpenGL Vertex Arrays and GL_TRIANGLE_STRIP drawing error - java

I'm implementing a height map with GL_TRIANGLE_STRIP and the LWJGL OpenGL Java bindings. When I draw in immediate mode using glBegin/glEnd it works perfectly, but when the height map is too big it runs slowly.
As I want to move on to VBOs, I'm now learning how to use vertex arrays. This is where I have a problem: the drawing is just wrong. It seems that the last triangle of the strip returns to the first one. Perhaps a picture is better:
Good drawing:
Bad Vertex array drawing:
My code is the following for normal drawing:
public void renderDirect() {
    // adapt the camera to the map
    float scale = 5.0f / Math.max(w - 1, l - 1);
    GL11.glScalef(scale, scale, scale);
    GL11.glTranslatef(-(float) (w - 1) / 2, 0.0f, -(float) (l - 1) / 2);
    // choose map color
    GL11.glColor3f(0.3f, 0.9f, 0.0f);
    for (int z = 0; z < l - 1; z++) {
        // one strip per row; OpenGL draws a triangle for every vertex after the first two
        GL11.glBegin(GL11.GL_TRIANGLE_STRIP);
        for (int x = 0; x < w; x++) {
            Vector3f normal = getNormal(x, z);
            GL11.glNormal3f(normal.getX(), normal.getY(), normal.getZ());
            GL11.glVertex3f(x, getHeight(x, z), z);
            normal = getNormal(x, z + 1);
            GL11.glNormal3f(normal.getX(), normal.getY(), normal.getZ());
            GL11.glVertex3f(x, getHeight(x, z + 1), z + 1);
        }
        GL11.glEnd();
    }
}
My code is the following for vertex array drawing:
private void loadArrays() {
    // calculate the length of the buffers
    bLength = (l - 1) * w * 6;
    // create the interleaved vertex/normal buffer and the color buffer
    dataBuffer = BufferUtils.createFloatBuffer(bLength * 2);
    cBuffer = BufferUtils.createFloatBuffer(bLength);
    for (int z = 0; z < l - 1; z++) {
        // fill up the buffers
        for (int x = 0; x < w; x++) {
            Vector3f normal = getNormal(x, z);
            dataBuffer.put(x).put(getHeight(x, z)).put(z);
            dataBuffer.put(normal.getX()).put(normal.getY()).put(normal.getZ());
            normal = getNormal(x, z + 1);
            dataBuffer.put(x).put(getHeight(x, z + 1)).put(z + 1);
            dataBuffer.put(normal.getX()).put(normal.getY()).put(normal.getZ());
        }
    }
}
int stride = 6 * 4; // 6 floats per vertex (3 position + 3 normal) * 4 bytes per float
public void renderDirect() {
    // adapt the camera to the map
    float scale = 5.0f / Math.max(w - 1, l - 1);
    GL11.glScalef(scale, scale, scale);
    GL11.glTranslatef(-(float) (w - 1) / 2, 0.0f, -(float) (l - 1) / 2);
    // choose map color
    GL11.glColor3f(0.3f, 0.9f, 0.0f);
    // draw the vertex arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    dataBuffer.position(0);
    glVertexPointer(3, stride, dataBuffer);
    dataBuffer.position(3);
    glNormalPointer(stride, dataBuffer);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, bLength / 3);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
}
What am I doing wrong?

Concatenating triangle strips back-to-back like that will not work as you expect.
Either call glDrawArrays() in a loop adjusting the first and count parameters to draw your original N triangle strips, or add degenerate triangles to the end of each row to reset the strip starting position.
Or just use GL_TRIANGLES :)
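A minimal sketch of the first option, using the question's own variables (w, l, bLength): each row of the height map is one strip of 2*w vertices, so the single draw call becomes a loop over rows with adjusted offsets:

int vertsPerRow = 2 * w; // each row emits two vertices per x step
for (int z = 0; z < l - 1; z++) {
    // first = index of the row's first vertex, count = vertices in one strip
    glDrawArrays(GL_TRIANGLE_STRIP, z * vertsPerRow, vertsPerRow);
}

The total vertex count is still bLength / 3; only the start offsets change.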

Another solution that can save you a lot of time, if the number of consecutively drawn triangles in your strip is high enough, is the following:
When rendering triangle strips you provide a stream of vertex information to OpenGL and it renders a triangle for each new vertex (e.g. the sequence ABCDEF gives you the triangles ABC, BCD, CDE, DEF). Note that this is always a single strip (that's why it's called GL_TRIANGLE_STRIP and not the plural).
If you need to introduce a gap, you may use several calls to your draw function (although that is what you want to avoid, right?), but you may also make use of invalid polygons.
An invalid polygon is one that has no surface. You can create an invalid polygon by providing identical points, like ABB or ABA. As far as I know, such a polygon is rejected quite early in the pipeline, so it doesn't cost you much overhead. Now to your particular problem. Assume you have a gap (indicated by the vertical bar) like this: "ABC|DEF". Modify your stream to look like this: "ABCCDDEF". This abomination gives you the triangles ABC, BCC, CCD, CDD, DDE, DEF. If you account for the early rejection of the invalid ones, you are left with ABC and DEF - exactly what you wanted.
Ergo, each gap adds two vertices (four degenerate triangles) to your stream, which makes it pretty easy to see when you break even complexity-wise.
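Applied to the loadArrays() method above, the stitching could look like this sketch (hedged: putVertex is a hypothetical helper that writes one position plus normal, exactly like the four put calls in the inner loop, and the buffers must be enlarged by two vertices, i.e. 12 floats, per gap):

for (int z = 0; z < l - 1; z++) {
    for (int x = 0; x < w; x++) {
        putVertex(x, z);     // hypothetical helper: position + normal, as in loadArrays()
        putVertex(x, z + 1);
    }
    if (z < l - 2) {
        putVertex(w - 1, z + 1); // repeat the last vertex of this strip
        putVertex(0, z + 1);     // repeat the first vertex of the next strip
    }
}

Because each strip emits an even number of vertices (2*w), the winding order is preserved across the stitch.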

Related

Processing - rendering shapes is too slow

I have been working on a small project using Processing, and the effect I wanted to achieve was a kind of "mountains" forming and moving, using Perlin noise via the noise() function with two parameters.
I was originally using an image for the background, but for illustration purposes I made the background black; it's basically the same effect.
My issue is that I want to have a "history" of the mountains, because they should fade away after some time, so I keep a history of PShapes and draw and update it each frame.
Updating it is no issue, but drawing the PShapes seems to take a lot of time, reducing the frame rate from 60 to 10 when the history holds 100 elements.
Below is the code I used :
float noise_y = 0;
float noise_increment = 0.01;
// increment x in the loop by this amount instead of 1;
// makes the drawing faster, since the PShapes have fewer vertices,
// but the mountains look sharper, not as smooth
// bigger inc = better fps
final int xInc = 1;
// maximum length of the array
// bigger = fewer frames :(
final int arrLen = 100;
int lastIndex = 0;
PShape[] history = new PShape[arrLen];
boolean full = false;
// use this to add shapes to the history
PShape aux;

void setup() {
    size(1280, 720);
}

void draw() {
    background(0);
    // create PShape object
    aux = createShape();
    aux.beginShape();
    aux.noFill();
    aux.stroke(255);
    aux.strokeWeight(0.5);
    for (float x = 0; x < width + xInc; x = x + xInc) {
        float noise = noise(x / 150, noise_y);
        // get the actual y coordinate
        float y = map(noise, 0, 1, height / 2, 0);
        // create vertex of shape at x, y
        aux.vertex(x, y);
    }
    aux.endShape();
    // push the current one into the history
    history[lastIndex++] = aux;
    // if it reached the maximum length, start over (works like a ring buffer)
    if (lastIndex == arrLen) {
        lastIndex = 0;
        full = true;
    }
    // draw the history
    // this part takes the MOST TIME to draw, need to fix it:
    // without it the sketch runs at 60 FPS, with it as low as 10 FPS
    if (full) {
        for (int i = 0; i < arrLen; i++) {
            shape(history[i]);
        }
    } else {
        for (int i = 0; i < lastIndex; i++) {
            shape(history[i]);
        }
    }
    noise_y = noise_y - noise_increment;
    println(frameRate);
}
I have tried different ways of rendering the "mountains": I tried writing my own curve class and drawing lines that link the points, but I get the same performance. I tried grouping the PShapes into a PShape GROUP object like
PShape p = new PShape(GROUP);
p.addChild(someShape);
and I got the same performance.
I was thinking of using multiple threads to render each shape individually, but after doing some research I found there's only one thread responsible for rendering - the animation thread - so that won't do me any good either.
I really want to finish this; it seems really simple, but I can't figure it out.
One possible solution would be not to draw all the generated shapes, but to draw only the new shape.
To "see" the shapes of the previous frames, the scene can't be cleared at the beginning of the frame, of course.
Since the scene is never cleared, the entire view would eventually be covered by shapes. But if the scene is slightly faded out at the beginning of each new frame instead of being cleared, then the "older" shapes get darker and darker over time. This makes it feel as if the "older" frames drift away into the depth.
Clear the background at initialization:
void setup() {
    size(1280, 720);
    background(0);
}
Create the scene with the fade effect:
void draw() {
// "fade" the entire view
blendMode(DIFFERENCE);
fill(1, 1, 1, 255);
rect(0, 0, width, height);
blendMode(ADD);
// create PShape object
aux = createShape();
aux.beginShape();
aux.stroke(255);
aux.strokeWeight(0.5);
aux.noFill();
for (float x = 0; x < width + xInc; x = x + xInc) {
float noise = noise(x / 150, noise_y) ;
// get the actual y coordinate
float y = map(noise, 0, 1, height / 2, 0);
// create vertex of shape at x, y
aux.vertex(x, y);
}
aux.endShape();
// push the current one in the history
int currentIndex = lastIndex;
history[lastIndex++] = aux;
if (lastIndex == arrLen)
lastIndex = 0;
// draw the newes shape
shape(history[currentIndex]);
noise_y = noise_y - noise_increment;
println(frameRate, full ? arrLen : lastIndex);
}
See the preview:

smooth color interpolation along a "bresenham" line

I am trying to interpolate color along a line so that, given two points and their respective RGB values, I can draw a line with a smooth color gradient. Using Bresenham's line algorithm, I can now draw lines, but am not sure how to begin interpolating colors between the two end points. The following is part of the drawLine() function, which works for all lines whose slope is less than 1.
int x_start = p1.x, x_end = p2.x, y_start = p1.y, y_end = p2.y;
int dx = Math.abs(x_end - x_start), dy = Math.abs(y_end - y_start);
int x = x_start, y = y_start;
int step_x = x_start < x_end ? 1 : -1;
int step_y = y_start < y_end ? 1 : -1;
int rStart = (int) (255.0f * p1.c.r), rEnd = (int) (255.0f * p2.c.r);
int gStart = (int) (255.0f * p1.c.g), gEnd = (int) (255.0f * p2.c.g);
int bStart = (int) (255.0f * p1.c.b), bEnd = (int) (255.0f * p2.c.b);
// for slope < 1
int p = 2 * dy - dx;
int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
int xCount = 0;
// draw the first point
Point2D start = new Point2D(x, y, new ColorType(p1.c.r, p1.c.g, p1.c.b));
drawPoint(buff, start);
float pColor = xCount / Math.abs((x_end - x_start));
System.out.println(x_end + " " + x_start);
while (x != x_end) {
    x += step_x;
    xCount++;
    if (p < 0) {
        p += twoDy;
    } else {
        y += step_y;
        p += twoDyMinusDx;
    }
    Point2D draw_line = new Point2D(x, y,
            new ColorType(p1.c.r * (1 - pColor) + p2.c.r * pColor,
                          p1.c.g * (1 - pColor) + p2.c.g * pColor,
                          p1.c.b * (1 - pColor) + p2.c.b * pColor));
    System.out.println(pColor);
    drawPoint(buff, draw_line);
}
So what I'm thinking is that, just like drawing lines, I also need some sort of decision parameter p to determine when to change the RGB values. I am thinking of something along the lines of: as x increments, look at each RGB value and decide whether to manipulate it or not.
I initialized rStart and rEnd (and so on for g and b) but have no idea where to start. Any kind of help or suggestions would be greatly appreciated!
Edit: thanks @Compass for the great suggestion! Now I've run into another problem while trying to implement that strategy, and I am almost certain it's an easy bug - I just can't see it right now. For some reason my pColor always returns 0, and I am not sure why. I ran some print statements to make sure xCount is indeed increasing, so I am not sure what else might've made this variable always 0.
I remember figuring this out way back when I was learning GUI! I'll explain the basic concepts for you.
Let's say we have two colors,
RGB(A,B,C)
and
RGB(X,Y,Z)
for simplicity.
If we know the position percentage-wise (we'll call this P, a float 0 for beginning, 1.0 at end) along the line, we can calculate what color should be there using the following:
Resultant Color = RGB(A*(1-P)+X*P,B*(1-P)+Y*P,C*(1-P)+Z*P)
In other words, you take a weighted average of the individual RGB values along the line.
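A hedged Java sketch of that formula dropped into the asker's loop (names follow the question's code). Note two things: P must be recomputed on every iteration, and the division needs a float cast, since int/int division truncates to 0 - which is exactly the symptom described in the edit:

while (x != x_end) {
    x += step_x;
    xCount++;
    // ... Bresenham decision logic as before ...
    float t = (float) xCount / Math.abs(x_end - x_start); // P, now really in [0, 1]
    ColorType c = new ColorType(
            p1.c.r * (1 - t) + p2.c.r * t,
            p1.c.g * (1 - t) + p2.c.g * t,
            p1.c.b * (1 - t) + p2.c.b * t);
    drawPoint(buff, new Point2D(x, y, c));
}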
Actually, you will be drawing the line in RGB space as well!
Bresenham lets you compute point coordinates from (X0, Y0) to (X1, Y1).
This is done by a loop on X or Y, with a linear interpolation on the other coordinate.
Just extend the algorithm to draw a line from (X0, Y0, R0, G0, B0) to (X1, Y1, R1, G1, B1), in the same loop on X or Y, with a linear interpolation on the other coordinates.

Fill a shape with points

I am trying to draw a leaf-looking thing on the screen and fill it with a color. It's like drawing a circle, except that it spans only 270 degrees and the radius grows from 0 to 100. I first draw the left side, and at each degree I fill the inside. At the end I draw the right side.
Here is to code, maybe it's easier to understand:
canvas = new BufferedImage(SIZE, SIZE, BufferedImage.TYPE_INT_ARGB);
Color black = new Color(0, 0, 0);
Color green = new Color(0, 130, 0);
double j = 0.0;      // radius
double max = 100.0;  // max radius
for (int i = 0; i < 135; i++) { // left side (270 degrees / 2)
    j += max / 135.0;
    // x, y coordinate
    int x = (int) (Math.cos(Math.toRadians(i)) * j);
    int y = (int) (Math.sin(Math.toRadians(i)) * j);
    // draw a circle-like arc with radius j
    for (int l = i; l < 135 + (135 - i); l++) {
        int ix = (int) (Math.cos(Math.toRadians(l)) * j);
        int iy = (int) (Math.sin(Math.toRadians(l)) * j);
        canvas.setRGB(ix + 256, iy + 256, green.getRGB());
    }
    canvas.setRGB(x + 256, y + 256, black.getRGB());
}
// draw the right side
for (int i = 135; i < 270; i++) {
    j -= max / 135.0;
    int x = (int) (Math.cos(Math.toRadians(i)) * j);
    int y = (int) (Math.sin(Math.toRadians(i)) * j);
    canvas.setRGB(x + 256, y + 256, black.getRGB());
}
This is the result:
As you can see, where the radius is bigger, the leaf is not filled completely.
If I change i to run to 1350 and divide it by 10 where I calculate x and y, then it's filled, but it's much slower. Is there a better way to properly fill my shape?
Later I would also like to fill my shape with a gradient, from green to a darker green and back to green. With my method this is easy, but super slow.
Thanks in advance!
I think the best solution for you is a flood fill algorithm. It's easy to implement in Java and efficient in your case, since you have a simple shape.
Here is a Wikipedia article that is really complete: http://en.wikipedia.org/wiki/Flood_fill
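For instance, a minimal stack-based flood fill over the question's BufferedImage might look like this (a sketch, assuming the outline is already drawn in a color different from the fill color and the seed point lies inside the shape):

import java.awt.image.BufferedImage;
import java.util.ArrayDeque;
import java.util.Deque;

void floodFill(BufferedImage img, int seedX, int seedY, int fillRGB) {
    int target = img.getRGB(seedX, seedY); // the color being replaced
    if (target == fillRGB) return;
    Deque<int[]> stack = new ArrayDeque<>();
    stack.push(new int[] { seedX, seedY });
    while (!stack.isEmpty()) {
        int[] p = stack.pop();
        int x = p[0], y = p[1];
        if (x < 0 || y < 0 || x >= img.getWidth() || y >= img.getHeight()) continue;
        if (img.getRGB(x, y) != target) continue; // hit the outline or an already filled pixel
        img.setRGB(x, y, fillRGB);
        stack.push(new int[] { x + 1, y });
        stack.push(new int[] { x - 1, y });
        stack.push(new int[] { x, y + 1 });
        stack.push(new int[] { x, y - 1 });
    }
}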
Here is a simple suggestion: instead of drawing the leaf, put the points that make up the outline into an array. The array should run from xMin (the smallest X coordinate of the leaf outline) to xMax, and each element holds two ints: yMin and yMax.
After recording all the points, you can just draw vertical lines to fill the space between yMin and yMax for each X coordinate, as in the sketch below.
If you have gaps in the array, fill them by interpolating between the neighboring points.
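A sketch of that idea against the question's canvas (hedged: yMin/yMax are hypothetical arrays you would fill while generating the outline, and this assumes at most one filled run per column, which holds for the leaf shape):

int w = canvas.getWidth();
int[] yMin = new int[w];
int[] yMax = new int[w];
java.util.Arrays.fill(yMin, Integer.MAX_VALUE);
java.util.Arrays.fill(yMax, Integer.MIN_VALUE);
// while generating each outline point (x, y):
//     yMin[x] = Math.min(yMin[x], y);
//     yMax[x] = Math.max(yMax[x], y);
for (int x = 0; x < w; x++) {
    if (yMin[x] > yMax[x]) continue; // no outline point in this column
    for (int y = yMin[x]; y <= yMax[x]; y++)
        canvas.setRGB(x, y, green.getRGB());
}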
An alternative would be to sort the points clockwise or counter-clockwise and use them as the outline for a polygon.

how to save values of variables and prevent them from changing when doing recursion? also recursion assistance?

import gpdraw.*;

public class Y2K {
    // Attributes
    SketchPad pad;
    DrawingTool pen;

    // Constructor
    public Y2K() {
        pad = new SketchPad(600, 600, 50);
        pen = new DrawingTool(pad);
        // Back the pen up so the Y is drawn in the middle of the screen
        pen.up();
        pen.setDirection(270);
        pen.forward(150);
        pen.down();
        pen.setDirection(90);
    }

    public void drawY(int level, double length) {
        // Base case: draw a Y
        if (level == 0) {
            //pen.setDirection(90);
            pen.forward(length);
            pen.turnRight(60);
            pen.forward(length);
            pen.backward(length);
            pen.turnLeft(120);
            pen.forward(length);
            pen.backward(length);
        }
        // Recursive case: draw a Y at each midpoint
        // of the current Y's segments
        else {
            // Drawing the bottom "leg" of our Y shape
            pen.forward(length / 2);
            double xpos1 = pen.getXPos();
            double ypos1 = pen.getYPos();
            double direction1 = pen.getDirection();
            pen.turnRight(90);
            drawY(level - 1, length / 2.0);
            pen.up();
            pen.move(xpos1, ypos1);
            pen.setDirection(direction1);
            pen.down();
            pen.forward(length / 2);
            double xpos2 = pen.getXPos();
            double ypos2 = pen.getYPos();
            double direction2 = pen.getDirection();
            // Drawing the upper right leg
            pen.turnRight(60);
            pen.forward(length / 2); // going to the midpoint
            double xpos3 = pen.getXPos();
            double ypos3 = pen.getYPos();
            double direction3 = pen.getDirection();
            pen.turnLeft(90);
            drawY(level - 1, length / 2.0);
            pen.up();
            pen.move(xpos3, ypos3);
            pen.setDirection(direction3);
            pen.down();
            pen.forward(length / 2);
            // Drawing the upper left leg
            pen.up();
            pen.move(xpos1, ypos1);
            pen.setDirection(direction1);
            pen.down();
            pen.forward(length / 2);
            pen.turnLeft(60);
            pen.forward(length / 2);
            double xpos4 = pen.getXPos();
            double ypos4 = pen.getYPos();
            double direction4 = pen.getDirection();
            pen.turnLeft(90);
            drawY(level - 1, length / 2.0);
            pen.up();
            pen.move(xpos4, ypos4);
            pen.setDirection(direction4);
            pen.down();
            pen.forward(length / 2);
            pen.forward(length / 2);
        }
    }

    public static void main(String[] args) {
        Y2K fractal = new Y2K();
        // Draw Y with given level and side length
        fractal.drawY(8, 200);
    }
}
Output:
One leg of the figure is too long, and that makes the output slightly off. Maybe it's because the code went (length/2) too far? Let's debug this.
Otherwise it is completely fine; the recursion is great, and it's exactly what I wanted to do.
As you're constantly drawing Y's, I'd recommend you create a method that draws a Y given certain parameters (e.g. length, angle of separation between the two branches of the Y, rotation, etc.). This will make your code much more readable and easier to understand.
As for moving to the center, just think of the Y on a coordinate plane. Based upon the rotation of the Y, and its starting point you can calculate the center point.
Just break it up into its x and y components.
Given this information, we can solve for a and for b.
a = length * sin(θ)
b = length * cos(θ)
Then add this to your x and y to calculate the center point of the Y.
As for keeping the length consistent, you know the level: each recursive call should use half its parent's length, so a segment at depth n has length length / 2^n. In this case, the next call gets length/2.
In pseudo code terms:
public void drawY(int level, double length)
{
    // Drawing the bottom "leg" of our Y shape
    Move forward length/2
    Save our position
    Save our direction
    Turn to the right 90 degrees
    Recursion (call drawY())
    Revert to original location
    Revert to original direction
    Move forward length/2 (to go to center point of Y)
    Save our new position
    Save our new direction
    // Drawing the upper right leg
    Turn 60 to the right
    Move forward length/2 (going to the midpoint)
    Save our new position (don't forget the center point)
    Save our new direction (don't forget the center point direction)
    Turn 90 to the left
    Recursion (call drawY())
    Return to our saved position (not the center one)
    Return to our saved direction (not the center one)
    Move forward length/2
    // Drawing the upper left leg
    Return to center point
    Return to center direction
    Turn left 60
    Move forward length/2
    Save position (you can overwrite the center one now)
    Save direction (you can overwrite)
    Turn left 90
    Recursion (call drawY())
    Return to position
    Return to direction
    Move forward length/2
}
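One way to factor out the repeated save/recurse/restore dance above is a small helper like this sketch (hedged: it assumes the gpdraw pen API from the question, and that turnRight accepts a negative angle for left turns - otherwise branch on the sign and call turnLeft):

// Turn, recurse into a smaller Y, then restore the pen exactly
// to where it was before the detour.
private void recurseAndReturn(int level, double length, double turnRightDegrees) {
    double x = pen.getXPos();
    double y = pen.getYPos();
    double dir = pen.getDirection();
    pen.turnRight(turnRightDegrees);
    drawY(level - 1, length / 2.0);
    pen.up(); // jump back without drawing
    pen.move(x, y);
    pen.setDirection(dir);
    pen.down();
}

With that, each leg of the recursive case shrinks to "forward, recurseAndReturn, forward".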

How to convert a 3D point into 2D perspective projection?

I am currently working with Bezier curves and surfaces to draw the famous Utah teapot. Using Bezier patches of 16 control points, I have been able to draw the teapot and display it using a 'world to camera' function which gives the ability to rotate the resulting teapot; I am currently using an orthographic projection.
The result is that I have a 'flat' teapot, which is expected, as the purpose of an orthographic projection is to preserve parallel lines.
However, I would like to use a perspective projection to give the teapot depth. My question is: how does one take the 3D xyz vertex returned from the 'world to camera' function and convert it into a 2D coordinate? I want to use the projection plane at z=0 and to let the user determine the focal length and image size using the arrow keys on the keyboard.
I am programming this in Java and have all of the input event handlers set up, and have also written a matrix class which handles basic matrix multiplication. I've been reading through Wikipedia and other resources for a while, but I can't quite get a handle on how one performs this transformation.
The standard way to represent 2D/3D transformations nowadays is by using homogeneous coordinates. [x,y,w] for 2D, and [x,y,z,w] for 3D. Since you have three axes in 3D as well as translation, that information fits perfectly in a 4x4 transformation matrix. I will use column-major matrix notation in this explanation. All matrices are 4x4 unless noted otherwise.
The stages from 3D points to a rasterized point, line or polygon look like this:
Transform your 3D points with the inverse camera matrix, followed by whatever transformations they need. If you have surface normals, transform them as well, but with w set to zero, as you don't want to translate normals. The matrix you transform normals with must be isotropic; scaling and shearing make the normals malformed.
Transform the point with a clip space matrix. This matrix scales x and y with the field of view and aspect ratio, scales z by the near and far clipping planes, and plugs the 'old' z into w. After the transformation, you should divide x, y and z by w. This is called the perspective divide.
Now your vertices are in clip space, and you want to perform clipping so you don't render any pixels outside the viewport bounds. Sutherland-Hodgman clipping is the most widespread clipping algorithm in use.
Transform x and y with respect to w and the half-width and half-height. Your x and y coordinates are now in viewport coordinates. w is discarded, but 1/w and z are usually saved, because 1/w is required for perspective-correct interpolation across the polygon surface, and z is stored in the z-buffer and used for depth testing.
This stage is the actual projection, because z isn't used as a component in the position any more.
The algorithms:
Calculation of field-of-view
This calculates the field of view. Whether tan takes radians or degrees is irrelevant, but angle must match. Notice that the result approaches infinity as angle nears 180 degrees. This is a singularity, as it is impossible to have a focal point that wide. If you want numerical stability, keep angle less than or equal to 179 degrees.
fov = 1.0 / tan(angle/2.0)
Also notice that 1.0 / tan(45°) = 1. Someone else here suggested just dividing by z. The result here is clear: you would get a 90 degree FOV and an aspect ratio of 1:1. Using homogeneous coordinates like this has several other advantages as well; we can, for example, perform clipping against the near and far planes without treating it as a special case.
Calculation of the clip matrix
This is the layout of the clip matrix. aspectRatio is Width/Height. So the FOV for the x component is scaled based on FOV for y. Far and near are coefficients which are the distances for the near and far clipping planes.
[fov * aspectRatio][ 0 ][ 0 ][ 0 ]
[ 0 ][ fov ][ 0 ][ 0 ]
[ 0 ][ 0 ][(far+near)/(far-near) ][ 1 ]
[ 0 ][ 0 ][(2*near*far)/(near-far)][ 0 ]
Screen Projection
After clipping, this is the final transformation to get our screen coordinates.
new_x = (x * Width ) / (2.0 * w) + halfWidth;
new_y = (y * Height) / (2.0 * w) + halfHeight;
Trivial example implementation in C++
#include <vector>
#include <cmath>
#include <stdexcept>
#include <algorithm>

struct Vector
{
    Vector() : x(0), y(0), z(0), w(1) {}
    Vector(float a, float b, float c) : x(a), y(b), z(c), w(1) {}

    /* Assume proper operator overloads here, with vectors and scalars */

    float Length() const
    {
        return std::sqrt(x*x + y*y + z*z);
    }

    Vector Unit() const
    {
        const float epsilon = 1e-6;
        float mag = Length();
        if (mag < epsilon) {
            std::out_of_range e("");
            throw e;
        }
        return *this / mag;
    }
};

inline float Dot(const Vector& v1, const Vector& v2)
{
    return v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;
}

class Matrix
{
public:
    Matrix() : data(16)
    {
        Identity();
    }

    void Identity()
    {
        std::fill(data.begin(), data.end(), float(0));
        data[0] = data[5] = data[10] = data[15] = 1.0f;
    }

    float& operator[](size_t index)
    {
        if (index >= 16) {
            std::out_of_range e("");
            throw e;
        }
        return data[index];
    }

    Matrix operator*(const Matrix& m) const
    {
        Matrix dst;
        int col;
        for (int y = 0; y < 4; ++y) {
            col = y * 4;
            for (int x = 0; x < 4; ++x) {
                for (int i = 0; i < 4; ++i) {
                    dst[x + col] += m[i + col] * data[x + i * 4];
                }
            }
        }
        return dst;
    }

    Matrix& operator*=(const Matrix& m)
    {
        *this = (*this) * m;
        return *this;
    }

    /* The interesting stuff */
    void SetupClipMatrix(float fov, float aspectRatio, float near, float far)
    {
        Identity();
        float f = 1.0f / std::tan(fov * 0.5f);
        data[0]  = f * aspectRatio;
        data[5]  = f;
        data[10] = (far + near) / (far - near);
        data[11] = 1.0f; /* this 'plugs' the old z into w */
        data[14] = (2.0f * near * far) / (near - far);
        data[15] = 0.0f;
    }

    std::vector<float> data;
};

inline Vector operator*(const Vector& v, const Matrix& m)
{
    Vector dst;
    dst.x = v.x*m[0] + v.y*m[4] + v.z*m[8]  + v.w*m[12];
    dst.y = v.x*m[1] + v.y*m[5] + v.z*m[9]  + v.w*m[13];
    dst.z = v.x*m[2] + v.y*m[6] + v.z*m[10] + v.w*m[14];
    dst.w = v.x*m[3] + v.y*m[7] + v.z*m[11] + v.w*m[15];
    return dst;
}

typedef std::vector<Vector> VecArr;

VecArr ProjectAndClip(int width, int height, float near, float far, const VecArr& vertex)
{
    float halfWidth  = (float)width * 0.5f;
    float halfHeight = (float)height * 0.5f;
    float aspect     = (float)width / (float)height;
    Vector v;
    Matrix clipMatrix;
    VecArr dst;
    clipMatrix.SetupClipMatrix(60.0f * (M_PI / 180.0f), aspect, near, far);
    /* Here, after the perspective divide, you perform Sutherland-Hodgman clipping
       by checking if the x, y and z components are inside the range of [-w, w].
       One checks each vector component separately against each plane. Per-vertex
       data like colours, normals and texture coordinates need to be linearly
       interpolated for clipped edges to reflect the change. If the edge (v0,v1)
       is tested against the positive x plane, and v1 is outside, the interpolant
       becomes: (v1.x - w) / (v1.x - v0.x)
       I skip this stage altogether to be brief. */
    for (VecArr::iterator i = vertex.begin(); i != vertex.end(); ++i) {
        v = (*i) * clipMatrix;
        v /= v.w; /* Don't get confused here. I assume the divide leaves v.w alone. */
        dst.push_back(v);
    }
    /* TODO: Clipping here */
    for (VecArr::iterator i = dst.begin(); i != dst.end(); ++i) {
        i->x = (i->x * (float)width)  / (2.0f * i->w) + halfWidth;
        i->y = (i->y * (float)height) / (2.0f * i->w) + halfHeight;
    }
    return dst;
}
If you're still pondering this, the OpenGL specification is a really nice reference for the maths involved.
The DevMaster forums at http://www.devmaster.net/ have a lot of nice articles related to software rasterizers as well.
I think this will probably answer your question. Here's what I wrote there:
Here's a very general answer. Say the camera's at (Xc, Yc, Zc) and the point you want to project is P = (X, Y, Z). The distance from the camera to the 2D plane onto which you are projecting is F (so the equation of the plane is Z-Zc=F). The 2D coordinates of P projected onto the plane are (X', Y').
Then, very simply:
X' = ((X - Xc) * (F/Z)) + Xc
Y' = ((Y - Yc) * (F/Z)) + Yc
If your camera is the origin, then this simplifies to:
X' = X * (F/Z)
Y' = Y * (F/Z)
To obtain the perspective-corrected co-ordinates, just divide by the z co-ordinate:
xc = x / z
yc = y / z
The above works assuming that the camera is at (0, 0, 0) and you are projecting onto the plane at z = 1 -- you need to translate the co-ords relative to the camera otherwise.
There are some complications for curves, insofar as projecting the points of a 3D Bezier curve will not in general give you the same points as drawing a 2D Bezier curve through the projected points.
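In Java, that whole recipe fits in a few lines (a sketch with assumed names; the camera sits at the origin looking down the z axis, f is the focal distance, and points at or behind the camera, z <= 0, must be culled before the divide):

// Perspective projection by similar triangles: x' = x * f / z, y' = y * f / z
static float[] projectPoint(float x, float y, float z, float f) {
    return new float[] { x * f / z, y * f / z };
}

If the camera is elsewhere, subtract the camera position from the point (and rotate into camera space) first.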
You can project a 3D point onto 2D using Commons Math: The Apache Commons Mathematics Library, with just two classes.
Example for Java Swing:
import org.apache.commons.math3.geometry.euclidean.threed.Plane;
import org.apache.commons.math3.geometry.euclidean.threed.Vector3D;

Plane planeX = new Plane(new Vector3D(1, 0, 0));
Plane planeY = new Plane(new Vector3D(0, 1, 0)); // must be an orthogonal plane of planeX

void drawPoint(Graphics2D g2, Vector3D v) {
    g2.drawLine(0, 0,
            (int) (world.unit * planeX.getOffset(v)),
            (int) (world.unit * planeY.getOffset(v)));
}

protected void paintComponent(Graphics g) {
    super.paintComponent(g);
    Graphics2D g2 = (Graphics2D) g; // cast so drawPoint gets a Graphics2D
    drawPoint(g2, new Vector3D(2, 1, 0));
    drawPoint(g2, new Vector3D(0, 2, 0));
    drawPoint(g2, new Vector3D(0, 0, 2));
    drawPoint(g2, new Vector3D(1, 1, 1));
}
Now you only need to update planeX and planeY to change the perspective projection, to get things like this:
Looking at the screen from the top, you get the x and z axes.
Looking at the screen from the side, you get the y and z axes.
Calculate the focal lengths of the top and side views using trigonometry: each focal length is the distance between the eye and the middle of the screen, and it is determined by the screen's field of view.
This makes the shape of two right triangles back to back.
hw = screen_width / 2
hh = screen_height / 2
fl_top = hw / tan(θ/2)
fl_side = hh / tan(θ/2)
Then take the average focal length.
fl_average = (fl_top + fl_side) / 2
Now calculate the new x and new y with basic arithmetic, since the larger right triangle made from the 3D point and the eye point is similar to the smaller triangle made by the 2D point and the eye point.
x' = (x * fl_top) / (z + fl_top)
y' = (y * fl_top) / (z + fl_top)
Or you can simply set
x' = x / (z + 1)
and
y' = y / (z + 1)
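Putting those formulas together in Java (a sketch with assumed names; theta is the field of view in radians, the screen is centered on the origin, and it uses the averaged focal length from above - the final x'/y' formulas show fl_top, so substitute whichever you calibrate with):

// Project a 3D point to 2D using the averaged focal length described above.
static double[] project(double x, double y, double z,
                        double screenWidth, double screenHeight, double theta) {
    double flTop  = (screenWidth / 2.0) / Math.tan(theta / 2.0);
    double flSide = (screenHeight / 2.0) / Math.tan(theta / 2.0);
    double fl = (flTop + flSide) / 2.0; // average focal length
    return new double[] { (x * fl) / (z + fl), (y * fl) / (z + fl) };
}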
I'm not sure at what level you're asking this question. It sounds as if you've found the formulas online and are just trying to understand what they do. On that reading of your question, I offer:
Imagine a ray from the viewer (at point V) directly towards the center of the projection plane (call it C).
Imagine a second ray from the viewer to a point in the image (P) which also intersects the projection plane at some point (Q)
The viewer and the two points of intersection on the view plane form a triangle (VCQ); the sides are the two rays and the line between the points in the plane.
The formulas are using this triangle to find the coordinates of Q, which is where the projected pixel will go
All of the answers address the question posed in the title. However, I would like to add a caveat that is implicit in the text. Bézier patches are used to represent the surface, but you cannot just transform the points of the patch and tessellate the patch into polygons, because this will result in distorted geometry. You can, however, tessellate the patch first into polygons using a transformed screen tolerance and then transform the polygons, or you can convert the Bézier patches to rational Bézier patches, then tessellate those using a screen-space tolerance. The former is easier, but the latter is better for a production system.
I suspect that you want the easier way. For this, you would scale the screen tolerance by the norm of the Jacobian of the inverse perspective transformation and use that to determine the amount of tessellation that you need in model space (it might be easier to compute the forward Jacobian, invert that, then take the norm). Note that this norm is position-dependent, and you may want to evaluate this at several locations, depending on the perspective. Also remember that since the projective transformation is rational, you need to apply the quotient rule to compute the derivatives.
Thanks to @Mads Elvenheim for a proper example code. I have fixed the minor syntax errors in the code (just a few const problems and obvious missing operators). Also, near and far have vastly different meanings in Visual Studio (they are legacy keywords there), hence the NEAR_Z and FAR_Z names.
For your pleasure, here is the compilable (MSVC 2013) version. Have fun.
Mind that I have made NEAR_Z and FAR_Z constant. You probably don't want it like that.
#include <vector>
#include <cmath>
#include <stdexcept>
#include <algorithm>

#define M_PI 3.14159
#define NEAR_Z 0.5
#define FAR_Z 2.5

struct Vector
{
    float x;
    float y;
    float z;
    float w;

    Vector() : x(0), y(0), z(0), w(1) {}
    Vector(float a, float b, float c) : x(a), y(b), z(c), w(1) {}

    /* Assume proper operator overloads here, with vectors and scalars */

    float Length() const
    {
        return std::sqrt(x*x + y*y + z*z);
    }

    Vector& operator*=(float fac) noexcept
    {
        x *= fac;
        y *= fac;
        z *= fac;
        return *this;
    }

    Vector operator*(float fac) const noexcept
    {
        return Vector(*this) *= fac;
    }

    Vector& operator/=(float div) noexcept
    {
        return operator*=(1 / div); // avoid divisions: they are much
                                    // more costly than multiplications
    }

    Vector Unit() const
    {
        const float epsilon = 1e-6;
        float mag = Length();
        if (mag < epsilon) {
            std::out_of_range e("");
            throw e;
        }
        return Vector(*this) /= mag;
    }
};

inline float Dot(const Vector& v1, const Vector& v2)
{
    return v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;
}

class Matrix
{
public:
    Matrix() : data(16)
    {
        Identity();
    }

    void Identity()
    {
        std::fill(data.begin(), data.end(), float(0));
        data[0] = data[5] = data[10] = data[15] = 1.0f;
    }

    float& operator[](size_t index)
    {
        if (index >= 16) {
            std::out_of_range e("");
            throw e;
        }
        return data[index];
    }

    const float& operator[](size_t index) const
    {
        if (index >= 16) {
            std::out_of_range e("");
            throw e;
        }
        return data[index];
    }

    Matrix operator*(const Matrix& m) const
    {
        Matrix dst;
        int col;
        for (int y = 0; y < 4; ++y) {
            col = y * 4;
            for (int x = 0; x < 4; ++x) {
                for (int i = 0; i < 4; ++i) {
                    dst[x + col] += m[i + col] * data[x + i * 4];
                }
            }
        }
        return dst;
    }

    Matrix& operator*=(const Matrix& m)
    {
        *this = (*this) * m;
        return *this;
    }

    /* The interesting stuff */
    void SetupClipMatrix(float fov, float aspectRatio)
    {
        Identity();
        float f = 1.0f / std::tan(fov * 0.5f);
        data[0]  = f * aspectRatio;
        data[5]  = f;
        data[10] = (FAR_Z + NEAR_Z) / (FAR_Z - NEAR_Z);
        data[11] = 1.0f; /* this 'plugs' the old z into w */
        data[14] = (2.0f * NEAR_Z * FAR_Z) / (NEAR_Z - FAR_Z);
        data[15] = 0.0f;
    }

    std::vector<float> data;
};

inline Vector operator*(const Vector& v, Matrix& m)
{
    Vector dst;
    dst.x = v.x*m[0] + v.y*m[4] + v.z*m[8]  + v.w*m[12];
    dst.y = v.x*m[1] + v.y*m[5] + v.z*m[9]  + v.w*m[13];
    dst.z = v.x*m[2] + v.y*m[6] + v.z*m[10] + v.w*m[14];
    dst.w = v.x*m[3] + v.y*m[7] + v.z*m[11] + v.w*m[15];
    return dst;
}

typedef std::vector<Vector> VecArr;

VecArr ProjectAndClip(int width, int height, const VecArr& vertex)
{
    float halfWidth  = (float)width * 0.5f;
    float halfHeight = (float)height * 0.5f;
    float aspect     = (float)width / (float)height;
    Vector v;
    Matrix clipMatrix;
    VecArr dst;
    clipMatrix.SetupClipMatrix(60.0f * (M_PI / 180.0f), aspect);
    /* Here, after the perspective divide, you perform Sutherland-Hodgman clipping
       by checking if the x, y and z components are inside the range of [-w, w].
       One checks each vector component separately against each plane. Per-vertex
       data like colours, normals and texture coordinates need to be linearly
       interpolated for clipped edges to reflect the change. If the edge (v0,v1)
       is tested against the positive x plane, and v1 is outside, the interpolant
       becomes: (v1.x - w) / (v1.x - v0.x)
       I skip this stage altogether to be brief. */
    for (VecArr::const_iterator i = vertex.begin(); i != vertex.end(); ++i) {
        v = (*i) * clipMatrix;
        v /= v.w; /* Don't get confused here. I assume the divide leaves v.w alone. */
        dst.push_back(v);
    }
    /* TODO: Clipping here */
    for (VecArr::iterator i = dst.begin(); i != dst.end(); ++i) {
        i->x = (i->x * (float)width)  / (2.0f * i->w) + halfWidth;
        i->y = (i->y * (float)height) / (2.0f * i->w) + halfHeight;
    }
    return dst;
}
I know it's an old topic, but your illustration is not correct; the source code sets up the clip matrix correctly:
[fov * aspectRatio][ 0 ][ 0 ][ 0 ]
[ 0 ][ fov ][ 0 ][ 0 ]
[ 0 ][ 0 ][(far+near)/(far-near) ][(2*near*far)/(near-far)]
[ 0 ][ 0 ][ 1 ][ 0 ]
Some additions to the answer above:
This clip matrix works only if you are projecting onto a static 2D plane. If you want to add camera movement and rotation:
viewMatrix = clipMatrix * cameraTranslationMatrix4x4 * cameraRotationMatrix4x4;
This lets you rotate the 2D plane and move it around.
You might want to debug your system with spheres to determine whether or not you have a good field of view. If it is too wide, the spheres will deform at the edges of the screen into more oval forms pointed toward the center of the frame. The solution to this problem is to zoom in on the frame by multiplying the x and y coordinates of the 3-dimensional point by a scalar, and then shrinking your object or world down by a similar factor. Then you get a nice, even, round sphere across the entire frame.
I'm almost embarrassed that it took me all day to figure this one out, and I was almost convinced that some spooky, mysterious geometric phenomenon was going on here that demanded a different approach.
Yet the importance of calibrating the zoom-frame-of-view coefficient by rendering spheres cannot be overstated. If you do not know where the "habitable zone" of your universe is, you will end up walking on the sun and scrapping the project. You want to be able to render a sphere anywhere in your frame of view and have it appear round. In my project, the unit sphere is massive compared to the region I'm describing.
Also, the obligatory Wikipedia entry:
Spherical Coordinate System
