How to convert a 3D point into 2D perspective projection? - java

I am currently working with Bezier curves and surfaces to draw the famous Utah teapot. Using Bezier patches of 16 control points, I have been able to draw the teapot and display it using a 'world to camera' function which allows rotating the resulting teapot; I am currently using an orthographic projection.
The result is that I have a 'flat' teapot, which is expected, as the purpose of an orthographic projection is to preserve parallel lines.
However, I would like to use a perspective projection to give the teapot depth. My question is: how does one take the 3D x, y, z vertex returned from the 'world to camera' function and convert it into a 2D coordinate? I want to use the projection plane at z = 0, and to let the user determine the focal length and image size using the arrow keys on the keyboard.
I am programming this in Java and have all of the input event handlers set up, and I have also written a matrix class which handles basic matrix multiplication. I've been reading through Wikipedia and other resources for a while, but I can't quite get a handle on how to perform this transformation.

The standard way to represent 2D/3D transformations nowadays is by using homogeneous coordinates. [x,y,w] for 2D, and [x,y,z,w] for 3D. Since you have three axes in 3D as well as translation, that information fits perfectly in a 4x4 transformation matrix. I will use column-major matrix notation in this explanation. All matrices are 4x4 unless noted otherwise.
The stages from 3D points to a rasterized point, line or polygon look like this:
1. Transform your 3D points with the inverse camera matrix, followed by whatever transformations they need. If you have surface normals, transform them as well, but with w set to zero, as you don't want to translate normals. The matrix you transform normals with must be isotropic; scaling and shearing make the normals malformed.
2. Transform the point with a clip space matrix. This matrix scales x and y with the field-of-view and aspect ratio, scales z by the near and far clipping planes, and plugs the 'old' z into w. After the transformation, you should divide x, y and z by w. This is called the perspective divide.
3. Now your vertices are in clip space, and you want to perform clipping so you don't render any pixels outside the viewport bounds. Sutherland-Hodgman clipping is the most widespread clipping algorithm in use.
4. Transform x and y with respect to w and the half-width and half-height. Your x and y coordinates are now in viewport coordinates. w is discarded, but 1/w and z are usually saved, because 1/w is required to do perspective-correct interpolation across the polygon surface, and z is stored in the z-buffer and used for depth testing.
This last stage is the actual projection, because z isn't used as a component in the position any more.
The algorithms:
Calculation of field-of-view
This calculates the field of view. Whether tan takes radians or degrees is irrelevant, but angle must match. Notice that the result reaches infinity as angle nears 180 degrees; that is a singularity, as it is impossible to have a focal point that wide. If you want numerical stability, keep angle less than or equal to 179 degrees.
fov = 1.0 / tan(angle/2.0)
Also notice that 1.0 / tan(45) = 1. Someone else here suggested just dividing by z. The result is clear: you would get a 90 degree FOV and an aspect ratio of 1:1. Using homogeneous coordinates like this has several other advantages as well; for example, we can perform clipping against the near and far planes without treating it as a special case.
Calculation of the clip matrix
This is the layout of the clip matrix. aspectRatio is Width/Height, so the FOV for the x component is scaled based on the FOV for y. far and near are the distances to the far and near clipping planes.
[fov * aspectRatio][ 0 ][ 0 ][ 0 ]
[ 0 ][ fov ][ 0 ][ 0 ]
[ 0 ][ 0 ][(far+near)/(far-near) ][ 1 ]
[ 0 ][ 0 ][(2*near*far)/(near-far)][ 0 ]
Screen Projection
After clipping, this is the final transformation to get our screen coordinates.
new_x = (x * Width ) / (2.0 * w) + halfWidth;
new_y = (y * Height) / (2.0 * w) + halfHeight;
Trivial example implementation in C++
#include <vector>
#include <cmath>
#include <stdexcept>
#include <algorithm>
struct Vector
{
float x, y, z, w;
Vector() : x(0),y(0),z(0),w(1){}
Vector(float a, float b, float c) : x(a),y(b),z(c),w(1){}
/* Assume proper operator overloads here, with vectors and scalars */
float Length() const
{
return std::sqrt(x*x + y*y + z*z);
}
Vector Unit() const
{
const float epsilon = 1e-6;
float mag = Length();
if(mag < epsilon){
std::out_of_range e("");
throw e;
}
return *this / mag;
}
};
inline float Dot(const Vector& v1, const Vector& v2)
{
return v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;
}
class Matrix
{
public:
Matrix() : data(16)
{
Identity();
}
void Identity()
{
std::fill(data.begin(), data.end(), float(0));
data[0] = data[5] = data[10] = data[15] = 1.0f;
}
float& operator[](size_t index)
{
if(index >= 16){
std::out_of_range e("");
throw e;
}
return data[index];
}
Matrix operator*(const Matrix& m) const
{
Matrix dst;
int col;
for(int y=0; y<4; ++y){
col = y*4;
for(int x=0; x<4; ++x){
for(int i=0; i<4; ++i){
dst[x+col] += m[i+col]*data[x+i*4];
}
}
}
return dst;
}
Matrix& operator*=(const Matrix& m)
{
*this = (*this) * m;
return *this;
}
/* The interesting stuff */
void SetupClipMatrix(float fov, float aspectRatio, float near, float far)
{
Identity();
float f = 1.0f / std::tan(fov * 0.5f);
data[0] = f*aspectRatio;
data[5] = f;
data[10] = (far+near) / (far-near);
data[11] = 1.0f; /* this 'plugs' the old z into w */
data[14] = (2.0f*near*far) / (near-far);
data[15] = 0.0f;
}
std::vector<float> data;
};
inline Vector operator*(const Vector& v, const Matrix& m)
{
Vector dst;
dst.x = v.x*m[0] + v.y*m[4] + v.z*m[8 ] + v.w*m[12];
dst.y = v.x*m[1] + v.y*m[5] + v.z*m[9 ] + v.w*m[13];
dst.z = v.x*m[2] + v.y*m[6] + v.z*m[10] + v.w*m[14];
dst.w = v.x*m[3] + v.y*m[7] + v.z*m[11] + v.w*m[15];
return dst;
}
typedef std::vector<Vector> VecArr;
VecArr ProjectAndClip(int width, int height, float near, float far, const VecArr& vertex)
{
float halfWidth = (float)width * 0.5f;
float halfHeight = (float)height * 0.5f;
float aspect = (float)width / (float)height;
Vector v;
Matrix clipMatrix;
VecArr dst;
clipMatrix.SetupClipMatrix(60.0f * (M_PI / 180.0f), aspect, near, far);
/* Here, after the perspective divide, you perform Sutherland-Hodgeman clipping
by checking if the x, y and z components are inside the range of [-w, w].
One checks each vector component separately against each plane. Per-vertex
data like colours, normals and texture coordinates need to be linearly
interpolated for clipped edges to reflect the change. If the edge (v0,v1)
is tested against the positive x plane, and v1 is outside, the interpolant
becomes: (v1.x - w) / (v1.x - v0.x)
I skip this stage altogether to be brief.
*/
for(VecArr::iterator i=vertex.begin(); i!=vertex.end(); ++i){
v = (*i) * clipMatrix;
v /= v.w; /* Don't get confused here. I assume the divide leaves v.w alone.*/
dst.push_back(v);
}
/* TODO: Clipping here */
for(VecArr::iterator i=dst.begin(); i!=dst.end(); ++i){
i->x = (i->x * (float)width) / (2.0f * i->w) + halfWidth;
i->y = (i->y * (float)height) / (2.0f * i->w) + halfHeight;
}
return dst;
}
If you are still pondering this, the OpenGL specification is a really nice reference for the maths involved.
The DevMaster forums at http://www.devmaster.net/ have a lot of nice articles related to software rasterizers as well.

I think this will probably answer your question. Here's what I wrote there:
Here's a very general answer. Say the camera's at (Xc, Yc, Zc) and the point you want to project is P = (X, Y, Z). The distance from the camera to the 2D plane onto which you are projecting is F (so the equation of the plane is Z-Zc=F). The 2D coordinates of P projected onto the plane are (X', Y').
Then, very simply:
X' = ((X - Xc) * (F/(Z - Zc))) + Xc
Y' = ((Y - Yc) * (F/(Z - Zc))) + Yc
If your camera is at the origin, then this simplifies to:
X' = X * (F/Z)
Y' = Y * (F/Z)
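A minimal Java sketch of this pinhole projection (the method and parameter names here are mine, not from the question's code):
// Camera at the origin looking down +Z, projection plane at distance f
// (the focal length). Callers must ensure z > 0.
static float[] project(float x, float y, float z, float f) {
float s = f / z; // similar-triangles scale factor
return new float[] { x * s, y * s };
}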

To obtain the perspective-corrected coordinates, just divide by the z coordinate:
xc = x / z
yc = y / z
The above works assuming that the camera is at (0, 0, 0) and you are projecting onto the plane at z = 1; otherwise you need to translate the coordinates relative to the camera.
There are some complications for curves, insofar as projecting the points of a 3D Bezier curve will not in general give you the same points as drawing a 2D Bezier curve through the projected points.

You can project a 3D point onto 2D using Commons Math: The Apache Commons Mathematics Library, with just two classes.
Example for Java Swing.
import org.apache.commons.math3.geometry.euclidean.threed.Plane;
import org.apache.commons.math3.geometry.euclidean.threed.Vector3D;
Plane planeX = new Plane(new Vector3D(1, 0, 0));
Plane planeY = new Plane(new Vector3D(0, 1, 0)); // Must be orthogonal plane of planeX
void drawPoint(Graphics2D g2, Vector3D v) {
// world.unit is an assumed scale factor (pixels per world unit)
g2.drawLine(0, 0,
(int) (world.unit * planeX.getOffset(v)),
(int) (world.unit * planeY.getOffset(v)));
}
protected void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2 = (Graphics2D) g;
drawPoint(g2, new Vector3D(2, 1, 0));
drawPoint(g2, new Vector3D(0, 2, 0));
drawPoint(g2, new Vector3D(0, 0, 2));
drawPoint(g2, new Vector3D(1, 1, 1));
}
Now you only need to update planeX and planeY to change the perspective projection.

Looking at the screen from the top, you get the x and z axes.
Looking at the screen from the side, you get the y and z axes.
Calculate the focal lengths of the top and side views using trigonometry. The focal length is the distance between the eye and the middle of the screen, and it is determined by the screen's field of view.
This makes the shape of two right triangles back to back.
hw = screen_width / 2
hh = screen_height / 2
fl_top = hw / tan(θ/2)
fl_side = hh / tan(θ/2)
Then take the average focal length.
fl_average = (fl_top + fl_side) / 2
Now calculate the new x and new y with basic arithmetic, since the larger right triangle made from the 3D point and the eye point is similar to the smaller triangle made by the 2D point and the eye point.
x' = (x * fl_top) / (z + fl_top)
y' = (y * fl_top) / (z + fl_top)
Or you can simply set
x' = x / (z + 1)
and
y' = y / (z + 1)
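A rough Java sketch of the above (the screen size and field of view here are assumed example values):
// Assumed example values: 800x600 screen, 90 degree field of view.
float hw = 800 / 2f, hh = 600 / 2f;
double theta = Math.toRadians(90);
float fl_top = (float) (hw / Math.tan(theta / 2)); // top-view focal length
float fl_side = (float) (hh / Math.tan(theta / 2)); // side-view focal length
float fl_average = (fl_top + fl_side) / 2; // average focal length
// then, for a 3D point (x, y, z):
// xNew = (x * fl_top) / (z + fl_top);
// yNew = (y * fl_top) / (z + fl_top);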

I'm not sure at what level you're asking this question. It sounds as if you've found the formulas online, and are just trying to understand what it does. On that reading of your question I offer:
Imagine a ray from the viewer (at point V) directly towards the center of the projection plane (call it C).
Imagine a second ray from the viewer to a point in the image (P) which also intersects the projection plane at some point (Q).
The viewer and the two points of intersection on the view plane form a triangle (VCQ); the sides are the two rays and the line between the points in the plane.
The formulas use this triangle to find the coordinates of Q, which is where the projected pixel will go.

All of the answers address the question posed in the title. However, I would like to add a caveat that is implicit in the text. Bézier patches are used to represent the surface, but you cannot just transform the points of the patch and tessellate the patch into polygons, because this will result in distorted geometry. You can, however, tessellate the patch first into polygons using a transformed screen tolerance and then transform the polygons, or you can convert the Bézier patches to rational Bézier patches, then tessellate those using a screen-space tolerance. The former is easier, but the latter is better for a production system.
I suspect that you want the easier way. For this, you would scale the screen tolerance by the norm of the Jacobian of the inverse perspective transformation and use that to determine the amount of tessellation that you need in model space (it might be easier to compute the forward Jacobian, invert that, then take the norm). Note that this norm is position-dependent, and you may want to evaluate this at several locations, depending on the perspective. Also remember that since the projective transformation is rational, you need to apply the quotient rule to compute the derivatives.

Thanks to @Mads Elvenheim for a proper example code. I have fixed the minor syntax errors in the code (just a few const problems and obvious missing operators). Also, near and far have vastly different meanings in VS: they are defined as macros in the Windows headers, so they cannot be used as parameter names.
For your pleasure, here is the compilable (MSVC 2013) version. Have fun.
Mind that I have made NEAR_Z and FAR_Z constant. You probably don't want that.
#include <vector>
#include <cmath>
#include <stdexcept>
#include <algorithm>
#define M_PI 3.14159
#define NEAR_Z 0.5
#define FAR_Z 2.5
struct Vector
{
float x;
float y;
float z;
float w;
Vector() : x( 0 ), y( 0 ), z( 0 ), w( 1 ) {}
Vector( float a, float b, float c ) : x( a ), y( b ), z( c ), w( 1 ) {}
/* Assume proper operator overloads here, with vectors and scalars */
float Length() const
{
return std::sqrt( x*x + y*y + z*z );
}
Vector& operator*=(float fac) noexcept
{
x *= fac;
y *= fac;
z *= fac;
return *this;
}
Vector operator*(float fac) const noexcept
{
return Vector(*this)*=fac;
}
Vector& operator/=(float div) noexcept
{
return operator*=(1/div); // avoid divisions: they are much
// more costly than multiplications
}
Vector Unit() const
{
const float epsilon = 1e-6;
float mag = Length();
if (mag < epsilon) {
std::out_of_range e( "" );
throw e;
}
return Vector(*this)/=mag;
}
};
inline float Dot( const Vector& v1, const Vector& v2 )
{
return v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;
}
class Matrix
{
public:
Matrix() : data( 16 )
{
Identity();
}
void Identity()
{
std::fill( data.begin(), data.end(), float( 0 ) );
data[0] = data[5] = data[10] = data[15] = 1.0f;
}
float& operator[]( size_t index )
{
if (index >= 16) {
std::out_of_range e( "" );
throw e;
}
return data[index];
}
const float& operator[]( size_t index ) const
{
if (index >= 16) {
std::out_of_range e( "" );
throw e;
}
return data[index];
}
Matrix operator*( const Matrix& m ) const
{
Matrix dst;
int col;
for (int y = 0; y<4; ++y) {
col = y * 4;
for (int x = 0; x<4; ++x) {
for (int i = 0; i<4; ++i) {
dst[x + col] += m[i + col] * data[x + i * 4];
}
}
}
return dst;
}
Matrix& operator*=( const Matrix& m )
{
*this = (*this) * m;
return *this;
}
/* The interesting stuff */
void SetupClipMatrix( float fov, float aspectRatio )
{
Identity();
float f = 1.0f / std::tan( fov * 0.5f );
data[0] = f*aspectRatio;
data[5] = f;
data[10] = (FAR_Z + NEAR_Z) / (FAR_Z- NEAR_Z);
data[11] = 1.0f; /* this 'plugs' the old z into w */
data[14] = (2.0f*NEAR_Z*FAR_Z) / (NEAR_Z - FAR_Z);
data[15] = 0.0f;
}
std::vector<float> data;
};
inline Vector operator*( const Vector& v, Matrix& m )
{
Vector dst;
dst.x = v.x*m[0] + v.y*m[4] + v.z*m[8] + v.w*m[12];
dst.y = v.x*m[1] + v.y*m[5] + v.z*m[9] + v.w*m[13];
dst.z = v.x*m[2] + v.y*m[6] + v.z*m[10] + v.w*m[14];
dst.w = v.x*m[3] + v.y*m[7] + v.z*m[11] + v.w*m[15];
return dst;
}
typedef std::vector<Vector> VecArr;
VecArr ProjectAndClip( int width, int height, const VecArr& vertex )
{
float halfWidth = (float)width * 0.5f;
float halfHeight = (float)height * 0.5f;
float aspect = (float)width / (float)height;
Vector v;
Matrix clipMatrix;
VecArr dst;
clipMatrix.SetupClipMatrix( 60.0f * (M_PI / 180.0f), aspect);
/* Here, after the perspective divide, you perform Sutherland-Hodgeman clipping
by checking if the x, y and z components are inside the range of [-w, w].
One checks each vector component separately against each plane. Per-vertex
data like colours, normals and texture coordinates need to be linearly
interpolated for clipped edges to reflect the change. If the edge (v0,v1)
is tested against the positive x plane, and v1 is outside, the interpolant
becomes: (v1.x - w) / (v1.x - v0.x)
I skip this stage altogether to be brief.
*/
for (VecArr::const_iterator i = vertex.begin(); i != vertex.end(); ++i) {
v = (*i) * clipMatrix;
v /= v.w; /* Don't get confused here. I assume the divide leaves v.w alone.*/
dst.push_back( v );
}
/* TODO: Clipping here */
for (VecArr::iterator i = dst.begin(); i != dst.end(); ++i) {
i->x = (i->x * (float)width) / (2.0f * i->w) + halfWidth;
i->y = (i->y * (float)height) / (2.0f * i->w) + halfHeight;
}
return dst;
}

I know it's an old topic, but your illustration is not correct; the source code sets up the clip matrix correctly.
[fov * aspectRatio][ 0 ][ 0 ][ 0 ]
[ 0 ][ fov ][ 0 ][ 0 ]
[ 0 ][ 0 ][(far+near)/(far-near) ][(2*near*far)/(near-far)]
[ 0 ][ 0 ][ 1 ][ 0 ]
Some additions to your answer:
This clip matrix works only if you are projecting onto a static 2D plane. If you want to add camera movement and rotation:
viewMatrix = clipMatrix * cameraTranslationMatrix4x4 * cameraRotationMatrix4x4;
This lets you rotate the 2D plane and move it around.
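In Java, a rough sketch of that composition (same flat, row-vector 4x4 layout as the C++ code above; all helper names here are mine):
// Compose: viewMatrix = clipMatrix * cameraTranslation * cameraRotation
static float[] multiply(float[] a, float[] b) {
float[] r = new float[16];
for (int row = 0; row < 4; row++)
for (int col = 0; col < 4; col++)
for (int k = 0; k < 4; k++)
r[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
return r;
}
static float[] translation(float tx, float ty, float tz) {
return new float[] { 1,0,0,0, 0,1,0,0, 0,0,1,0, tx,ty,tz,1 };
}
static float[] rotationY(float angle) {
float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
return new float[] { c,0,-s,0, 0,1,0,0, s,0,c,0, 0,0,0,1 };
}
// float[] view = multiply(multiply(clip, translation(tx, ty, tz)), rotationY(a));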

You might want to debug your system with spheres to determine whether or not you have a good field of view. If you have it too wide, the spheres will deform at the edges of the screen into more oval forms pointed toward the center of the frame. The solution to this problem is to zoom in on the frame, by multiplying the x and y coordinates of the 3-dimensional point by a scalar and then shrinking your object or world down by a similar factor. Then you get a nice, even, round sphere across the entire frame.
I'm almost embarrassed that it took me all day to figure this one out, and I was almost convinced that there was some spooky mysterious geometric phenomenon going on here that demanded a different approach.
Yet, the importance of calibrating the zoom-frame-of-view coefficient by rendering spheres cannot be overstated. If you do not know where the "habitable zone" of your universe is, you will end up walking on the sun and scrapping the project. You want to be able to render a sphere anywhere in your frame of view and have it appear round. In my project, the unit sphere is massive compared to the region that I'm describing.
Also, the obligatory wikipedia entry:
Spherical Coordinate System

Related

What is the range of improved Perlin noise?

I'm trying to find the theoretical output range of improved Perlin noise for 1, 2 and 3 dimensions. I'm aware of existing answers to this question, but they don't seem to accord with my practical findings.
If n is the number of dimensions then according to [1] it should be [-sqrt(n/4), sqrt(n/4)]. According to [2] (which refers to [3]) it should be [-0.5·sqrt(n), 0.5·sqrt(n)] (which amounts to the same thing).
This means that the ranges should be approximately:
Dimensions   Range
1            [-0.5, 0.5]
2            [-0.707, 0.707]
3            [-0.866, 0.866]
However when I run the following code (which uses Ken Perlin's own reference implementation of improved noise from his website), I get higher values for 2 and 3 dimensions, namely approximately:
Dimensions   Range
1            [-0.5, 0.5]
2            [-0.891, 0.999]
3            [-0.997, 0.999]
With different permutations I even sometimes get values slightly over 1.0 for 3 dimensions, and for some strange reason one of the bounds for two dimensions always seems to be about 0.89 while the other is about 1.00.
I can't figure out whether this is due to a bug in my code (I don't see how since this is Ken Perlin's own code) or due to those discussions not being correct or not being applicable somehow, in which case I would like to know what the theoretical ranges are for improved Perlin noise.
Can you replicate this? Are the results wrong, or can you point me to a discussion of the theoretical values that accords with this outcome?
The code:
import java.security.SecureRandom;
import java.util.Random;
public class PerlinTest {
public static void main(String[] args) {
double lowest1DValue = Double.MAX_VALUE, highest1DValue = -Double.MAX_VALUE;
double lowest2DValue = Double.MAX_VALUE, highest2DValue = -Double.MAX_VALUE;
double lowest3DValue = Double.MAX_VALUE, highest3DValue = -Double.MAX_VALUE;
final Random random = new SecureRandom();
for (int i = 0; i < 10000000; i++) {
double value = noise(random.nextDouble() * 256.0, 0.0, 0.0);
if (value < lowest1DValue) {
lowest1DValue = value;
}
if (value > highest1DValue) {
highest1DValue = value;
}
value = noise(random.nextDouble() * 256.0, random.nextDouble() * 256.0, 0.0);
if (value < lowest2DValue) {
lowest2DValue = value;
}
if (value > highest2DValue) {
highest2DValue = value;
}
value = noise(random.nextDouble() * 256.0, random.nextDouble() * 256.0, random.nextDouble() * 256.0);
if (value < lowest3DValue) {
lowest3DValue = value;
}
if (value > highest3DValue) {
highest3DValue = value;
}
}
System.out.println("Lowest 1D value: " + lowest1DValue);
System.out.println("Highest 1D value: " + highest1DValue);
System.out.println("Lowest 2D value: " + lowest2DValue);
System.out.println("Highest 2D value: " + highest2DValue);
System.out.println("Lowest 3D value: " + lowest3DValue);
System.out.println("Highest 3D value: " + highest3DValue);
}
static public double noise(double x, double y, double z) {
int X = (int)Math.floor(x) & 255, // FIND UNIT CUBE THAT
Y = (int)Math.floor(y) & 255, // CONTAINS POINT.
Z = (int)Math.floor(z) & 255;
x -= Math.floor(x); // FIND RELATIVE X,Y,Z
y -= Math.floor(y); // OF POINT IN CUBE.
z -= Math.floor(z);
double u = fade(x), // COMPUTE FADE CURVES
v = fade(y), // FOR EACH OF X,Y,Z.
w = fade(z);
int A = p[X ]+Y, AA = p[A]+Z, AB = p[A+1]+Z, // HASH COORDINATES OF
B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z; // THE 8 CUBE CORNERS,
return lerp(w, lerp(v, lerp(u, grad(p[AA ], x , y , z ), // AND ADD
grad(p[BA ], x-1, y , z )), // BLENDED
lerp(u, grad(p[AB ], x , y-1, z ), // RESULTS
grad(p[BB ], x-1, y-1, z ))),// FROM 8
lerp(v, lerp(u, grad(p[AA+1], x , y , z-1 ), // CORNERS
grad(p[BA+1], x-1, y , z-1 )), // OF CUBE
lerp(u, grad(p[AB+1], x , y-1, z-1 ),
grad(p[BB+1], x-1, y-1, z-1 ))));
}
static double fade(double t) { return t * t * t * (t * (t * 6 - 15) + 10); }
static double lerp(double t, double a, double b) { return a + t * (b - a); }
static double grad(int hash, double x, double y, double z) {
int h = hash & 15; // CONVERT LO 4 BITS OF HASH CODE
double u = h<8 ? x : y, // INTO 12 GRADIENT DIRECTIONS.
v = h<4 ? y : h==12||h==14 ? x : z;
return ((h&1) == 0 ? u : -u) + ((h&2) == 0 ? v : -v);
}
static final int p[] = new int[512], permutation[] = { 151,160,137,91,90,15,
131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
};
static { for (int i=0; i < 256 ; i++) p[256+i] = p[i] = permutation[i]; }
}
Ken’s not using unit vectors. As [1] says, with my emphasis:
Third, there are many different ways to select the random vectors at the grid cell corners. In Improved Perlin noise, instead of selecting any random vector, one of 12 vectors pointing to the edges of a cube are used instead. Here, I will talk strictly about a continuous range of angles since it is easier – however, the range of value of an implementation of Perlin noise using a restricted set of vectors will never be larger. Finally, the script in this repository assumes the vectors are of unit length. If they are not, the range of value should be scaled according to the maximum vector length. Note that the vectors in Improved Perlin noise are not unit length.
For Ken’s improved noise, the maximum vector length is 1 in 1D and √2 in 2D, so the theoretical bounds are [−0.5, 0.5] in 1D and [−1, 1] in 2D. I don’t know why you’re not seeing the full range in 2D; if you shuffled the permutation I bet you would sometimes.
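For example, here is a small hypothetical harness for that experiment, reusing the static fields of the PerlinTest class above:
// Fisher-Yates shuffle of the permutation table, then rebuild p.
// Re-running the sampling loop afterwards shows how the observed
// 2D extremes move around between permutations.
static void shufflePermutation(Random rnd) {
for (int i = permutation.length - 1; i > 0; i--) {
int j = rnd.nextInt(i + 1);
int tmp = permutation[i];
permutation[i] = permutation[j];
permutation[j] = tmp;
}
for (int i = 0; i < 256; i++) p[256 + i] = p[i] = permutation[i];
}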
For 3D, the maximum vector length is still √2, but the extreme case identified by [1] isn’t a possible output, so the theoretical range of [−√(3/2), √(3/2)] is an overestimate. These folks tried to work it out exactly, and yes, the maximum absolute value does seem to be strictly greater than 1.

How can I transform an ellipse to a rectangle in Java?

I am currently working on making a screensaver, and I want my ellipse to slowly transform into a rectangle in Java. What is the easiest way of doing that?
There are some shapes that are easy to transform into one another. For instance, a square is a rectangle with equal side lengths, and a circle is an ellipse with equal axes. So it is easy to transform a square into a rectangle, since you can just use some drawrectangle function and adjust the parameters the whole way. Ditto for circle to ellipse.
void squaretorect(double width, double height)
{
//Transform a square width * width into a rectangle width * height
int n = 100; //number of intermediate frames
double currentheight;
for (int i = 0; i < n; i++)
{
currentheight = width + (height - width) * i / (double) (n - 1);
drawrectangle(width, currentheight); //assumed drawing primitive
}
}
Transforming from a rectangle to an ellipse is harder, since in between the shape is neither a rectangle nor an ellipse. It may be that there is some more general object which can be either a rectangle, an ellipse, or something in between, but I cannot think of one.
So, the easy way is out, but there is a harder way to do it. Suppose I divide the unit circle into N pieces and write points on an ellipse Ei and on a rectangle Ri. Now, as the transformation happens, the points Ei move into the points Ri. A simple way to do this is to use a linear combination.
Ti = (1-v) * Ei + v * Ri
So to do the transformation we slowly increment v from 0 to 1. And we draw lines(or better yet interpolate) between the points Ti.
void ellipsetorectangle(double a, double b, double w, double h)
{
//Uses java.util.List, java.util.ArrayList and java.awt.geom.Point2D.
//Ellipse: (x/a)^2 + (y/b)^2 = 1
//Polar: r = 1/sqrt(cos(phi)^2/a^2 + sin(phi)^2/b^2)
int N = 1000;
double phirect = Math.atan(h / w); //angle of a corner; tells which of the 4 edges we are on
List<Point2D.Double> Ei = new ArrayList<Point2D.Double>();
List<Point2D.Double> Ri = new ArrayList<Point2D.Double>();
for (int i = 0; i < N; i++)
{
//Construct the ellipse point
double phi = 2 * Math.PI * i / N;
double r = 1 / Math.sqrt(Math.pow(Math.cos(phi) / a, 2) + Math.pow(Math.sin(phi) / b, 2));
Ei.add(new Point2D.Double(r * Math.cos(phi), r * Math.sin(phi)));
//Construct the rectangle point on the edge hit by the ray at angle phi
if (phi < phirect || phi > 2 * Math.PI - phirect)
Ri.add(new Point2D.Double(w / 2, w / 2 * Math.tan(phi))); //right edge
else if (phi < Math.PI - phirect)
Ri.add(new Point2D.Double(h / 2 / Math.tan(phi), h / 2)); //top edge
else if (phi < Math.PI + phirect)
Ri.add(new Point2D.Double(-w / 2, -w / 2 * Math.tan(phi))); //left edge
else
Ri.add(new Point2D.Double(-h / 2 / Math.tan(phi), -h / 2)); //bottom edge
}
int transitionpoints = 100;
for (int j = 0; j < transitionpoints; j++)
{
//Each pass of this outer loop draws one intermediate shape;
//clear the picture between passes.
double v = (double) j / (transitionpoints - 1);
Point2D.Double Tiold = null;
for (int i = 0; i < N; i++)
{
Point2D.Double Ti = new Point2D.Double(
v * Ri.get(i).x + (1 - v) * Ei.get(i).x,
v * Ri.get(i).y + (1 - v) * Ei.get(i).y);
if (Tiold != null)
drawline(Ti, Tiold); //assumed drawing primitive
Tiold = Ti;
}
}
}

Draw an arc in opengl GL10

I want to draw an arc using a center point, starting point and ending point on an OpenGL GLSurfaceView. I have tried the code given below so far. This function draws the expected arc if we give the start_line_angle and end_line_angle manually (like start_line_angle = 0 and end_line_angle = 90) in degrees.
But I need to draw an arc with the given coordinates (center point, starting point, ending point), calculating the start_line_angle and end_line_angle programmatically.
This given function draws an arc with the given parameters but does not give the desired result. I've spent two days on this. Thanks in advance.
private void drawArc(GL10 gl, float radius, float cx, float cy, float start_point_x, float start_point_y, float end_point_x, float end_point_y) {
gl.glLineWidth(1);
int start_line_angle;
double sLine = Math.toDegrees(Math.atan((cy - start_point_y) / (cx - start_point_x))); //normal trigonometry slope = tan^-1(y2-y1)/(x2-x1) for line first
double eLine = Math.toDegrees(Math.atan((cy - end_point_y) / (cx - end_point_x))); //normal trigonometry slope = tan^-1(y2-y1)/(x2-x1) for line second
//cast from double to int after round
int start_line_Slope = (int) (sLine + 0.5);
/**
* mapping the trigonometric angle system to the GLSurfaceView angle system,
* since the angle in the trigonometric system starts anti-clockwise,
* but in the OpenGL GLSurfaceView angle system it starts clockwise and the starting angle is the 90 degree angle of the general trigonometric angle system
**/
if (start_line_Slope <= 90) {
start_line_angle = 90 - start_line_Slope;
} else {
start_line_angle = 360 - start_line_Slope + 90;
}
// int start_line_angle = 270;
// int end_line_angle = 36;
//casting from double to int
int end_line_angle = (int) (eLine + 0.5);
if (start_line_angle > end_line_angle) {
start_line_angle = start_line_angle - 360;
}
int nCount = 0;
float[] stVertexArray = new float[2 * (end_line_angle - start_line_angle)];
float[] newStVertextArray;
FloatBuffer sampleBuffer;
// stVertexArray[0] = cx;
// stVertexArray[1] = cy;
for (int nR = start_line_angle; nR < end_line_angle; nR++) {
float fX = (float) (cx + radius * Math.sin((float) nR * (1 * (Math.PI / 180))));
float fY = (float) (cy + radius * Math.cos((float) nR * (1 * (Math.PI / 180))));
stVertexArray[nCount * 2] = fX;
stVertexArray[nCount * 2 + 1] = fY;
nCount++;
}
//copying the stVertexArray's data in reverse order
reverseArray = new float[stVertexArray.length];
int count = 0;
for (int i = (stVertexArray.length) / 2; i > 0; i--) {
reverseArray[count] = stVertexArray[(i - 1) * 2 + 0];
count++;
reverseArray[count] = stVertexArray[(i - 1) * 2 + 1];
count++;
}
//resetting the counter to its initial value
count = 0;
int finalArraySize = stVertexArray.length + reverseArray.length;
newStVertextArray = new float[finalArraySize];
/**Now adding all the values to the single newStVertextArray to draw an arc**/
//adding stVertextArray to newStVertextArray
for (float d : stVertexArray) {
newStVertextArray[count++] = d;
}
//adding reverseArray to newStVertextArray
for (float d : reverseArray) {
newStVertextArray[count++] = d;
}
Log.d("stArray", stVertexArray.length + "");
Log.d("reverseArray", reverseArray.length + "");
Log.d("newStArray", newStVertextArray.length + "");
ByteBuffer bBuff = ByteBuffer.allocateDirect(newStVertextArray.length * 4);
bBuff.order(ByteOrder.nativeOrder());
sampleBuffer = bBuff.asFloatBuffer();
sampleBuffer.put(newStVertextArray);
sampleBuffer.position(0);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, sampleBuffer);
gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, nCount * 2);
gl.glLineWidth(1);
}
To begin with the trigonometry: you may not simply use atan to find the degrees of the angle. You need to check which quadrant the vector is in and increase or decrease the result you get from atan. Better yet, use atan2, which takes both dx and dy and does the job for you.
You seem to create the buffer so that one point is created per degree. This is not the best solution: for a large radius that might be too few points, and for a small radius it is way too many. Tessellation should include the radius as well, such that the number of points N is N = abs((int)(deltaAngle*radius*tessellationFactor)); then use angleFragment = deltaAngle/N, but make sure that N is greater than 0 (N = N?N:1). The buffer size is then 2*(N+1) floats and the iteration is for(int i=0; i<=N; i++) angle = startAngle + angleFragment*i;. A sketch follows below.
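A Java sketch of those two points (the helper and tessellationFactor are mine, not the poster's code):
// Angles via atan2 (quadrant-safe) and radius-dependent tessellation;
// returns interleaved (x, y) pairs, e.g. for GL_LINE_STRIP.
static float[] arcVertices(float cx, float cy, float startX, float startY,
float endX, float endY, float tessellationFactor) {
float radius = (float) Math.hypot(startX - cx, startY - cy);
float startAngle = (float) Math.atan2(startY - cy, startX - cx);
float endAngle = (float) Math.atan2(endY - cy, endX - cx);
float deltaAngle = endAngle - startAngle;
if (deltaAngle < 0) deltaAngle += 2 * (float) Math.PI; // counter-clockwise sweep
int n = (int) Math.abs(deltaAngle * radius * tessellationFactor);
if (n == 0) n = 1; // make sure N is greater than 0
float angleFragment = deltaAngle / n;
float[] vertices = new float[2 * (n + 1)];
for (int i = 0; i <= n; i++) {
float angle = startAngle + angleFragment * i;
vertices[2 * i] = cx + radius * (float) Math.cos(angle);
vertices[2 * i + 1] = cy + radius * (float) Math.sin(angle);
}
return vertices;
}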
As already pointed out, you need to define the radius of the arc. It is quite normal to use an outside source the way you do and simply force the radius to that value, while using the 3 points only for the center and the two borders. Some other options that usually make sense are:
getting the radius from the start line
getting the radius from the shorter of the two lines
getting the average of the two
interpolate the two to get an elliptic curve (explained below)
To interpolate the radius you need the two radiuses, startRadius and endRadius. Then you need the overall angle deltaAngle, which was already used above (watch out when computing this one; it is more complicated than it seems: for instance, drawing from 320 degrees to 10 degrees results in deltaAngle = 50). Anyway, the radius for a specific point is then simply radius = startRadius + (endRadius-startRadius)*abs((angleFragment*i)/deltaAngle). This represents a simple linear interpolation in the polar coordinate system, which is the usual way to interpolate vectors in matrices and is the core functionality for getting nice animations.
There are some other ways of getting the arc points which may be better performance wise but I would not suggest them unless and until you need to optimize your code which should be very late in production. You may simply keep stepping toward the next point and correcting the radius (this is only a concept):
vec2 start, end, center; // input values
float radius; // input value
// making the start and end relative to center
start -= center;
end -= center;
vec2 current = start/length(start) * radius; // current position starts in first vector
vec2 target = end/length(end) * radius; // should be the last point
outputBuffer[0] = current+center; // insert the first point
for(int i=1;; i++) { // "break" will need to exit the loop, we need index only for the buffer
vec2 step = vec2(current.y, -(current.x)); // a tangential vector from current start point according to center
step = step/length(step) / tessellationScale; // normalize and apply tessellation
vec2 next = current + step; // move tangentially
next = next/length(next) * radius; // normalize and re-apply the radius
if(dot(current-target, next-target) > .0) { // we have not yet passed the target vector
current = next; // set the current point
outputBuffer[i] = current+center; // insert into buffer
}
else {
current = target; // simply use the target now
outputBuffer[i] = current+center; // insert into buffer
break; // exit
}
}

Smooth Terrain Collision - 3D

I would like to have smooth terrain collision in my game engine; when I say smooth, I mean the player's height isn't determined by one vertex. I believe barycentric coordinates are the way to go. I've spent a good 7 hours researching this, but none of the code I've seen actually works, and it doesn't explain it in plain English either.
This is all I have so far. :(
public float getHeightAt(float xPos, float zPos) {
Vector3f one = new Vector3f((float)xPos, ((float)new Color(heightMap.getRGB((int)xPos, (int)zPos)).getRed())/255f*exaggeration*scale, (float)zPos);
Vector3f two = new Vector3f((float)xPos+1, ((float)new Color(heightMap.getRGB((int)xPos+1, (int)zPos)).getRed())/255f*exaggeration*scale, (float)zPos);
Vector3f three = new Vector3f((float)xPos, ((float)new Color(heightMap.getRGB((int)xPos, (int)zPos+1)).getRed())/255f*exaggeration*scale, (float)zPos+1);
float height = mid(one, two, three, new Vector3f(xPos, 0f, zPos));
System.out.println(height);
return height + 0.25f;
}
private float mid(Vector3f a, Vector3f b, Vector3f c, Vector3f p) {
Vector3f AB = a.mul(b);
Vector3f BC = b.mul(c);
Vector3f norm = AB.cross(BC);
float n0 = norm.getX();
float n1 = norm.getY();
float n2 = norm.getZ();
return (n0*a.getX() + n1*a.getY() + n2*a.getZ() - n0*p.getX() - n2*p.getZ()) / n1;
}
It works, but it isn't smooth, and I don't even know if it is barycentric.
Here is an example of what I want: https://www.youtube.com/watch?v=ngJ6ISfXG3I
To get the smoothed height, there are two main steps:
I - Create a function to get the height from position
Create the function public float getHeightAt(float xPos, float zPos) following these instructions (a consolidated sketch follows the list):
Check if the camera/player is inside the ground square
if(xPos > 0 && xPos < nbVerticesX && zPos > 0 && zPos < nbVerticesZ)
Get the point P nearest xPos and zPos
Get the normal N or compute it
Compute constant d of the plane equation
double d = -(P.x * N.x + P.y * N.y + P.z * N.z);
Return the computed height
return -(d + N.z * zPos + N.x * xPos)/N.y;
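A consolidated Java sketch of part I (nearestVertex and normalAt are assumed helpers over your heightmap data; nbVerticesX and nbVerticesZ are as above):
// Height of the plane through the nearest vertex P with normal N,
// evaluated at (xPos, zPos).
public float getHeightAt(float xPos, float zPos) {
if (!(xPos > 0 && xPos < nbVerticesX && zPos > 0 && zPos < nbVerticesZ))
return 0f; // outside the ground square
Vector3f P = nearestVertex(xPos, zPos); // assumed helper
Vector3f N = normalAt(P); // assumed helper
double d = -(P.x * N.x + P.y * N.y + P.z * N.z); // plane constant
return (float) (-(d + N.z * zPos + N.x * xPos) / N.y);
}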
II - Compute approximate height
Use this function to get the smoothed height:
public float getHeightApprox(float x, float z)
{
return ( (getHeightAt(x,z)
+ getHeightAt(x + 1, z)
+ getHeightAt(x - 1, z)
+ getHeightAt(x, z + 1)
+ getHeightAt(x, z - 1)) / 5);
}
Maybe you will have to adapt your code, but these pieces of code work fine for me. Hope this helps.
Position and slope
Player position can be determined by one point. The task here is to create a relatively smooth function from the distinct values on the height map.
Interpolation should do the trick. In the simplest case it will provide a slope over the whole heightmap.
Bi-linear interpolation (quad)
At any point in time the player position is in some rectangle (quad) of the heightmap. We can evaluate the height at any point of this rectangle by doing bi-linear interpolation.
We do this for one axis on both edges and then on the second axis for the remaining edge.
^
| A--------B
| | |
| | P |
| | |
| C--------D
Y
*X------------>
// This could be different depending on how you get points
// (basically generates a [0, 1] value for the position inside the quad)
px = P.x - (int)P.x
py = P.y - (int)P.y
AB = A.h * (1.0 - px) + B.h * px;
CD = C.h * (1.0 - px) + D.h * px;
ABCD = AB * (1.0 - py) + CD * py;
ABCD is the resulting height
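The same idea in Java (heightAt is an assumed accessor returning the vertex height at integer grid coordinates):
// Bilinear interpolation of the four quad corners around (x, z).
float getHeightBilinear(float x, float z) {
int ix = (int) Math.floor(x), iz = (int) Math.floor(z);
float px = x - ix, pz = z - iz; // position inside the quad, in [0, 1]
float ab = heightAt(ix, iz) * (1 - px) + heightAt(ix + 1, iz) * px;
float cd = heightAt(ix, iz + 1) * (1 - px) + heightAt(ix + 1, iz + 1) * px;
return ab * (1 - pz) + cd * pz; // blend the two edge heights
}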
Considerations
This method is not perfect and might produce visual glitches depending on how you actually draw the quad in your rendering pipeline.
Also keep in mind that this works best if the quads are bigger than your actual moving actor. In the case when the actor is simultaneously standing on several tiles, some kind of averaged method should be used.

How to draw a smooth line through a set of points using Bezier curves?

I need to draw a smooth line through a set of vertices. The set of vertices is compiled by a user dragging their finger across a touch screen, the set tends to be fairly large and the distance between the vertices is fairly small. However, if I simply connect each vertex with a straight line, the result is very rough (not-smooth).
I found solutions to this which use spline interpolation (and/or other things I don't understand) to smooth the line by adding a bunch of additional vertices. These work nicely, but because the list of vertices is already fairly large, increasing it by 10x or so has significant performance implications.
It seems like the smoothing should be achievable using Bezier curves without adding additional vertices.
Below is some code based on the solution here:
http://www.antigrain.com/research/bezier_interpolation/
It works well when the distance between the vertices is large, but doesn't work very well when the vertices are close together.
Any suggestions for a better way to draw a smooth curve through a large set of vertices, without adding additional vertices?
Vector<PointF> gesture;
protected void onDraw(Canvas canvas)
{
if(gesture.size() > 4 )
{
Path gesturePath = new Path();
gesturePath.moveTo(gesture.get(0).x, gesture.get(0).y);
gesturePath.lineTo(gesture.get(1).x, gesture.get(1).y);
for (int i = 2; i < gesture.size() - 1; i++)
{
float[] ctrl = getControlPoint(gesture.get(i), gesture.get(i - 1), gesture.get(i), gesture.get(i + 1));
gesturePath.cubicTo(ctrl[0], ctrl[1], ctrl[2], ctrl[3], gesture.get(i).x, gesture.get(i).y);
}
gesturePath.lineTo(gesture.get(gesture.size() - 1).x, gesture.get(gesture.size() - 1).y);
canvas.drawPath(gesturePath, mPaint);
}
}
private float[] getControlPoint(PointF p0, PointF p1, PointF p2, PointF p3)
{
float x0 = p0.x;
float x1 = p1.x;
float x2 = p2.x;
float x3 = p3.x;
float y0 = p0.y;
float y1 = p1.y;
float y2 = p2.y;
float y3 = p3.y;
double xc1 = (x0 + x1) / 2.0;
double yc1 = (y0 + y1) / 2.0;
double xc2 = (x1 + x2) / 2.0;
double yc2 = (y1 + y2) / 2.0;
double xc3 = (x2 + x3) / 2.0;
double yc3 = (y2 + y3) / 2.0;
double len1 = Math.sqrt((x1-x0) * (x1-x0) + (y1-y0) * (y1-y0));
double len2 = Math.sqrt((x2-x1) * (x2-x1) + (y2-y1) * (y2-y1));
double len3 = Math.sqrt((x3-x2) * (x3-x2) + (y3-y2) * (y3-y2));
double k1 = len1 / (len1 + len2);
double k2 = len2 / (len2 + len3);
double xm1 = xc1 + (xc2 - xc1) * k1;
double ym1 = yc1 + (yc2 - yc1) * k1;
double xm2 = xc2 + (xc3 - xc2) * k2;
double ym2 = yc2 + (yc3 - yc2) * k2;
// Resulting control points. Here smooth_value is mentioned
// above coefficient K whose value should be in range [0...1].
double k = .1;
float ctrl1_x = (float) (xm1 + (xc2 - xm1) * k + x1 - xm1);
float ctrl1_y = (float) (ym1 + (yc2 - ym1) * k + y1 - ym1);
float ctrl2_x = (float) (xm2 + (xc2 - xm2) * k + x2 - xm2);
float ctrl2_y = (float) (ym2 + (yc2 - ym2) * k + y2 - ym2);
return new float[]{ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y};
}
Bezier curves are not designed to go through the provided points! They are designed to shape a smooth curve influenced by the control points.
Furthermore, you don't want your smooth curve to go through all the data points!
Instead of interpolating you should consider filtering your data set:
Filtering
For that you need your data as a sequence of points, in the order the finger drew the gesture.
Look up "moving average" (also called a sliding average) on Wikipedia.
You should use a small averaging window (try 5 - 10 points). This works as follows (see Wikipedia for a more detailed description); I use here an averaging window of 10 points:
Start by calculating the average of points 0 - 9, and output the result as result point 0.
Then calculate the average of points 1 - 10, and output it as result point 1.
And so on.
To calculate the average of N points:
avgX = (x0 + x1 + ... + x(N-1)) / N
avgY = (y0 + y1 + ... + y(N-1)) / N
Finally you connect the resulting points with lines.
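A minimal sketch of that sliding average (window size is a parameter; PointF as in the question; uses java.util.List and java.util.ArrayList):
// Averages each run of 'window' consecutive gesture points into one
// output point; the output has (n - window + 1) points.
static List<PointF> slidingAverage(List<PointF> pts, int window) {
List<PointF> out = new ArrayList<PointF>();
for (int i = 0; i + window <= pts.size(); i++) {
float sx = 0, sy = 0;
for (int j = i; j < i + window; j++) {
sx += pts.get(j).x;
sy += pts.get(j).y;
}
out.add(new PointF(sx / window, sy / window));
}
return out;
}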
If you still need to interpolate between missing points, you should then use piecewise cubic splines.
One cubic spline goes through all 3 provided points.
You would need to calculate a series of them.
But first try the sliding average. This is very easy.
Nice question. Your (wrong) result is obvious, but you can try to apply it to a much smaller dataset, maybe by replacing groups of close points with an average point. The appropriate distance in this case to tell if two or more points belong to the same group may be expressed in time, not space, so you'll need to store the whole touch event (x, y and timestamp). I was thinking of this because I need a way to let users draw geometric primitives (rectangles, lines and simple curves) by touch
What is this for? Why do you need it to be so accurate? I would assume you only need around 4 vertices stored for every inch the user drags their finger. With that in mind:
Try drawing between only one vertex out of every X, with the middle vertex used to specify the curve's weight (control) point.
int interval = 10; //how many points to skip
gesturePath.moveTo(gesture.get(0).x, gesture.get(0).y);
for (int i = 0; i + interval/2 < gesture.size(); i += interval)
{
PointF ngp = gesture.get(i + interval/2); //middle vertex: the curve's control point
PointF gp = gesture.get(Math.min(i + interval, gesture.size() - 1)); //end vertex
gesturePath.quadTo(ngp.x, ngp.y, gp.x, gp.y);
}
You'll need to adjust this to actually work but the idea is there.
