I need help with calculating the lookAt method
Here is my method
public void lookAt(Vector3f position, Vector3f direction, Vector3f up) {
    Vector3f f = new Vector3f();
    Vector3f u = new Vector3f();
    Vector3f s = new Vector3f();
    Vector3f.sub(direction, position, f);
    f.normalise(f);
    up.normalise(u);
    Vector3f.cross(f, u, s);
    s.normalise(s);
    Vector3f.cross(s, f, u);
    this.setIdentity();
    this.m00 = s.x;
    this.m10 = s.y;
    this.m20 = s.z;
    this.m01 = u.x;
    this.m11 = u.y;
    this.m21 = u.z;
    this.m02 = -f.x;
    this.m12 = -f.y;
    this.m22 = -f.z;
    this.m30 = -Vector3f.dot(s, position);
    this.m31 = -Vector3f.dot(u, position);
    this.m32 = Vector3f.dot(f, position);
}
but when I test it like this camera.lookAt(position, new Vector3f(1, 0, 0), new Vector3f(0, -1, 0)); my camera is looking down, and only if I do this camera.lookAt(position, new Vector3f(10000, 0, 0), new Vector3f(0, -1, 0)); is the camera looking forward. Can you help please?
P.S. sorry for my english
The second parameter of a lookAt function is usually not the direction in which you want to look, but the point you want to look at. As far as I can see, the calculation in your method also expects a second point and not a direction (the direction is calculated from the two points and stored in f).
In conclusion, the results you get look correct to me, except that you passed the wrong parameters to the function.
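If you want to keep passing a direction, a minimal fix (a sketch, assuming camera.lookAt simply wraps the method above) is to turn the direction into a target point before calling it:

Vector3f direction = new Vector3f(1, 0, 0);
// target = position + direction, i.e. a point one unit in front of the camera
Vector3f target = Vector3f.add(position, direction, null);
camera.lookAt(position, target, new Vector3f(0, -1, 0));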
How to create a function that creates a matrix from a point and a direction
The rotation required is one that maps the negative z-direction onto the view vector. Additionally, we want the x-axis to be mapped onto a vector that is perpendicular to the plane spanned by the view and up vectors. This can be achieved relatively easily by writing -view into the third row of the 3x3 matrix. The right vector, onto which the x-axis should be mapped, is computed as the cross product of the view and up vectors. The last vector (the target mapping for the y-axis) is then computed as the cross product of the right and view vectors. Cross products are used here because we know that the base vectors of a rotation matrix must be perpendicular to each other:
viewv  = normalize(-view)
rightv = normalize(cross(view, up))
upv    = normalize(cross(rightv, view))
                  [ -- rightv -- ]
rotation_matrix = [ --  upv   -- ]
                  [ -- viewv  -- ]
When the camera is located at the origin, we are done. But since this is in general not the case, we have to add a translation part that transforms the scene such that the camera ends up at the origin. Thus t = -camera.
The final matrix is now composed by first translating the space and then rotating it according to our calculated matrix:
lookat_matrix = rotation_matrix * translate(-camera)
It is fairly late here, so depending on the notation you use it might be that the rotation matrix has to be transposed and that some signs have to be adjusted.
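For completeness, here is a sketch of what a direction-based variant could look like with the LWJGL classes from the question (same field layout as the original method; the name lookAlong and the exact signs are illustrative, so verify them against your conventions):

public void lookAlong(Vector3f position, Vector3f direction, Vector3f up) {
    // f is the normalized view direction; no subtraction, since 'direction' is already a direction
    Vector3f f = new Vector3f(direction);
    f.normalise(f);
    // s = right vector, u = recomputed up vector
    Vector3f s = new Vector3f();
    Vector3f u = new Vector3f();
    Vector3f.cross(f, up, s);
    s.normalise(s);
    Vector3f.cross(s, f, u);
    this.setIdentity();
    // rotation part: right, up and -view, laid out as in the original lookAt
    this.m00 = s.x;  this.m10 = s.y;  this.m20 = s.z;
    this.m01 = u.x;  this.m11 = u.y;  this.m21 = u.z;
    this.m02 = -f.x; this.m12 = -f.y; this.m22 = -f.z;
    // translation part: move the camera to the origin
    this.m30 = -Vector3f.dot(s, position);
    this.m31 = -Vector3f.dot(u, position);
    this.m32 = Vector3f.dot(f, position);
}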
Related
I have a problem with getting the vector alignment right. I want to get a vector pointing in the same direction as the player, but with a constant Y value of 0. The point is that, whatever the player's vertical and horizontal rotation, the vector's Y value should be 0. The vector is always supposed to point horizontally (Y value 0), while keeping the direction of the player's rotation.
This picture shows the situation from the side. The red line represents an example of the player's viewing direction (up - down), and the green one the effect I want to achieve. Regardless of the direction in which the player is looking, up or down, the green line remains unchanged:
Here, in turn, I have presented this situation from the top. The red line is the player's viewing direction (left - right) and the green is the effect I want to achieve. As you can see, the player's rotation on this axis sets my vector exactly the same.
I was able to write a piece of code, but it doesn't behave correctly: the vector ends up higher and higher on the Y axis as the player looks up or down, and I don't know why:
Vector playerDirection = player.getLocation().getDirection();
Vector vector = new Vector(playerDirection.getX(), 0, playerDirection.getZ()).normalize().multiply(3);
How to do it correctly?
tl;dr:
Vector vector = new Vector(-1 * Math.sin(Math.toRadians(player.getLocation().getYaw())), 0, Math.cos(Math.toRadians(player.getLocation().getYaw())));
You are missing a fundamental principle of creating a new Vector based on where a player is looking. I don't know the math of it very well, but I can mess around with the math of people who are better at geometry than I am.
As such, let's try to reduce the number of Vector variables you have defined. Taking a quick peek at the source for Location, we can actually create your Vector directly to avoid having multiple defined.
public Vector getDirection() {
    Vector vector = new Vector();
    double rotX = this.getYaw();
    double rotY = this.getPitch();
    vector.setY(-Math.sin(Math.toRadians(rotY)));
    double xz = Math.cos(Math.toRadians(rotY));
    vector.setX(-xz * Math.sin(Math.toRadians(rotX)));
    vector.setZ(xz * Math.cos(Math.toRadians(rotX)));
    return vector;
}
As you can see, the direction vector is not a 1:1 copy of the player's pitch and yaw; it is built from them with a bit of trigonometry. No idea why it looks exactly like this, but let's repurpose that logic.
Here's how we'll do that:
public Vector getVectorForAdixe(Location playerLoc) {
    Vector vector = new Vector();
    double rotX = playerLoc.getYaw();
    double rotY = 0; // this is the important change from above
    // Original Code:
    // vector.setY(-Math.sin(Math.toRadians(rotY)));
    // Always resolves to 0, so just do that
    vector.setY(0);
    // Original Code:
    // double xz = Math.cos(Math.toRadians(rotY));
    // Always resolves to 1, so just do that
    double xz = 1;
    vector.setX(-xz * Math.sin(Math.toRadians(rotX)));
    vector.setZ(xz * Math.cos(Math.toRadians(rotX)));
    return vector;
}
Nice! Now, cleaning it up a bit to remove those comments and unnecessary variables:
public Vector getVectorForAdixe(Location playerLoc) {
    Vector vector = new Vector();
    double rotX = playerLoc.getYaw();
    vector.setY(0);
    vector.setX(-1 * Math.sin(Math.toRadians(rotX)));
    vector.setZ(Math.cos(Math.toRadians(rotX)));
    return vector;
}
Why does this math work like that? No idea! But this should almost certainly work for you. Could even inline it if you really wanted to keep it how you had it originally:
Vector vector = new Vector(-1 * Math.sin(Math.toRadians(player.getLocation().getYaw())), 0, Math.cos(Math.toRadians(player.getLocation().getYaw())));
Closing note, if you want to be able to get the pitch/yaw FROM the vector, that code is here: https://hub.spigotmc.org/stash/projects/SPIGOT/repos/bukkit/browse/src/main/java/org/bukkit/Location.java#310
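For reference, the inverse mapping can be derived from the getDirection() code shown above (this is just that method's math turned around, not Bukkit's exact implementation; the helper names are made up):

// x = -cos(pitch) * sin(yaw), z = cos(pitch) * cos(yaw)  =>  yaw = atan2(-x, z)
public static float yawFromVector(Vector v) {
    return (float) Math.toDegrees(Math.atan2(-v.getX(), v.getZ()));
}

// y = -sin(pitch)  =>  pitch = atan2(-y, horizontal length)
public static float pitchFromVector(Vector v) {
    double xz = Math.sqrt(v.getX() * v.getX() + v.getZ() * v.getZ());
    return (float) Math.toDegrees(Math.atan2(-v.getY(), xz));
}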
The question changed a bit; I figured out how to rotate around a single axis.
I want to rotate a box around the Y axis using an angle.
The box has a size, and a Vector3f to signal the rotation.
To rotate the box correctly, what I do is rotate the origin position, then rotate the origin position plus the size, and use those two references to render the box.
However this rotation does not work correctly and causes rendering artifacts.
This is my code to rotate the positions:
Matrix4f matrix = new Matrix4f();
// Rotate the origin position
Vector3f pos = new Vector3f(new Vector3f(blockX, blockY, blockZ));
matrix.m03 = pos.x;
matrix.m13 = pos.y;
matrix.m23 = pos.z;
Vector3f rot = new Vector3f(new Vector3f(0, 1f, 0f));
Matrix4f.rotate((float) Math.toRadians(45f), rot, matrix, matrix);
Vector3f locationMin = new Vector3f(matrix.m03, matrix.m13, matrix.m23);
// Rotate the position with the size
// Top left back is the position of the block
Vector3f sizeRot = new Vector3f(new Vector3f(blockX + size, blockY + size, blockZ + size));
matrix = new Matrix4f();
matrix.m03 = sizeRot.x;
matrix.m13 = sizeRot.y;
matrix.m23 = sizeRot.z;
rot = new Vector3f(new Vector3f(0, 1f, 0f));
Matrix4f.rotate((float) Math.toRadians(45f), rot, matrix, matrix);
Vector3f locationMax = new Vector3f(matrix.m03, matrix.m13, matrix.m23);
// Then here I use the locationMax and the locationMin to render the cube
What could be wrong with this code? Is the logic I am using to rotate the box correct, i.e. rotate the origin position and then rotate the origin position plus the size?
EDIT: I realized that rotating after translating makes no sense, so instead I just rotated the locationMax, which is not translated (it is only the size), and then translated. I still get the same result (graphical artifacts).
New Code:
float rx = blockX, ry = blockY, rz = blockZ;
Matrix4f matrix = new Matrix4f();
Vector3f rot = new Vector3f(0, 1f, 0f);
matrix = new Matrix4f();
matrix.m03 = size;
matrix.m13 = size;
matrix.m23 = size;
Matrix4f.rotate((float) Math.toRadians(45f), rot, matrix, matrix);
matrix.translate(new Vector3f(rx, ry, rz), matrix);
float mx = matrix.m03;
float my = matrix.m13;
float mz = matrix.m23;
// Here is use rx, ry, rz and mx, my, mz to render the box
============ I figured it out (see below) ============
EDIT:
This is what I ended up doing:
// Origin point
Vector4f a = new Vector4f(blockX, blockY, blockZ, 1);
// Rotate a matrix 45 degrees
Matrix4f mat = new Matrix4f();
mat.rotate((float) Math.toRadians(45f), new Vector3f(0, 1f, 0), mat);
/* Transform the matrix to each point */
Vector4f c = new Vector4f(size.x, 0, size.z, 1);
Matrix4f.transform(mat, c, c);
Vector4f.add(c, a, c);
Vector4f b = new Vector4f(size.x, 0, 0, 1);
Matrix4f.transform(mat, b, b);
Vector4f.add(b, a, b);
Vector4f d = new Vector4f(0, 0, size.z, 1);
Matrix4f.transform(mat, d, d);
Vector4f.add(d, a, d);
// Here is use a, b, c, and d to render the box.
The problem with this is that I want to rotate around all axes and not only around the Y axis. This makes the code very long and unreadable, and there are a lot of bugs when I try to rotate around all axes.
Update Question:
How do I take the above code and make it so I can rotate around all 3 axes? I want to do this so I can have a billboard that will always face the camera.
This is how I calculate the angle between the camera and the object:
Vector3f angle = new Vector3f();
// Calculate the distance between camera and object
Vector3f.sub(game.getCamera().getLocation(),
new Vector3f(blockX, blockY, blockZ), angle);
// Calculate the angle around the Y axis.
float vectorAngle = (float) ((float) Math.atan2(angle.z, angle.x) * -1 + (Math.PI / 2.0f));
Billboards are a very common application of computer graphics (as I'm sure you've noticed, since you're asking the question!)
Ultimately I think you are overcomplicating the problem, based on:
as in rotate the origin position then rotate the origin position plus the size..
For computer graphics, the most common transformations are Scaling, Translating, and Rotating, and you do these in an order to achieve a desired effect (traditionally you scale, then rotate about the origin, then translate the vertex's position).
Additionally, you will have three main matrices to render a model in 3D: the World Matrix, the View Matrix, and the Projection Matrix. I believe you are misunderstanding the transformation from Model Space to World Space.
Graphics TRS and Matrix info. If you are having conceptual problems, or this answer is insufficient, I highly recommend looking at this link. I have yet to find a better resource explaining the fundamentals of computer graphics.
So right at the moment, you have your three angles (in degrees, in a Vector3) corresponding to the angle difference in the X, Y, and Z coordinate spaces between your billboard and your camera. With this information, we generate the rotation part of the billboard's world matrix by first gathering all of our matrix transformations in one place.
I'm going to assume that you already have your translation and scaling matrices, and that they both work. This means that we only need to generate our rotation matrix, transform it by the scaling matrix, and then transform the result by our translation matrix.
X Rotation Matrix
Y Rotation Matrix
Z Rotation Matrix
(Images taken from CodingLabs link above)
So you will generate these three matrices, using the X, Y, and Z angles you calculated earlier, then multiply them together to consolidate them into a single matrix, transform that matrix by the scaling matrix, and then transform that matrix by the translation matrix. Now you have your awesome matrix that, when you multiply a vertex by it, will transform that vertex into the desired size, rotation, and position.
So you transform every single vertex point by this generated matrix.
And then after that, you should be done! Using these techniques will hopefully simplify your code greatly, and set you on the right path :)
So now how about some code?
// I did not write this in an IDE, so treat it as a sketch (javax.vecmath Matrix4f/Vector4f, see the documentation link below)
float angleToRotX = 180f;
float angleToRotY = 90f;
float angleToRotZ = 0f;
// example vertex
Vector4f vertex = new Vector4f(0, 1, 0, 1);
// Rotation matrices for the desired angles (rotX/rotY/rotZ expect radians, so convert)
Matrix4f rotationXMatrix = new Matrix4f();
rotationXMatrix.rotX((float) Math.toRadians(angleToRotX));
Matrix4f rotationYMatrix = new Matrix4f();
rotationYMatrix.rotY((float) Math.toRadians(angleToRotY));
Matrix4f rotationZMatrix = new Matrix4f();
rotationZMatrix.rotZ((float) Math.toRadians(angleToRotZ));
// now let's translate it by 1.5, 1, 1.5 in the X,Y,Z directions
Matrix4f translationMatrix = new Matrix4f();
translationMatrix.setIdentity();
translationMatrix.setTranslation(new Vector3f(1.5f, 1f, 1.5f));
/*
Now we have our three rotation matrices. We multiply (transform) them to get a single matrix that transforms all of the points in this model to the desired world coordinates.
*/
Matrix4f rotationMatrix = new Matrix4f();
rotationMatrix.setIdentity(); // new Matrix4f() is all zeros, so start from the identity
rotationMatrix.mul(rotationXMatrix);
rotationMatrix.mul(rotationYMatrix);
rotationMatrix.mul(rotationZMatrix);
Matrix4f worldMatrix = translationMatrix;
worldMatrix.mul(rotationMatrix);
// now worldMatrix, when applied to a vertex, will rotate it by X,Y,Z degrees about the origin of its model space, and then translate it by the amount given in translationMatrix
worldMatrix.transform(vertex);
// now vertex should be (1.5, 0, 1.5, 1) with (x,y,z,1)
Now this code could really be simplified, and it is excessively verbose. Try it out! I don't have java downloaded on my machine, but I grabbed the methods from the java documentation Here
Here is an image of what is happening (again, taken from CodingLabs):
(Advanced Info: Quaternions. These are really cool way of orienting a model in 3d space, however I don't quite understand them to the degree I need to in order to explain it to someone else, and I also believe that your problem is more fundamental)
You could generate the matrix without much hassle. The OpenGL matrix looks like the following:
|lx,ux,vx,px| - lx,ly,lz = the left vector
|ly,uy,vy,py| - ux,uy,uz = the up vector
|lz,uz,vz,pz| - vx,vy,vz = the view vector
|0 ,0 ,0 ,1 | - px,py,pz = the translation
All you need to do is set px,py,pz to the position of your box in the world, the view vector to normalize(camera position - box position), take the up vector straight from your camera, and compute the left vector via a normalized cross product. It's also good practice to reconstruct the up vector after the left one is derived (by another cross product). That's all there is to it.
My solution aims to save you some time coding, rather than explain everything in detail. Hope that is useful to someone.
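As a rough sketch of that recipe with the LWJGL classes used earlier in the thread (camPos, camUp and boxPos are assumed to be your camera position, camera up vector and box position; the answer's "left" vector fills the first column):

Matrix4f billboard = new Matrix4f(); // starts as identity

// view = normalize(camera position - box position)
Vector3f view = Vector3f.sub(camPos, boxPos, null);
view.normalise(view);

// left = normalized cross product, then rebuild up so the three vectors stay orthogonal
Vector3f left = Vector3f.cross(camUp, view, null);
left.normalise(left);
Vector3f up = Vector3f.cross(view, left, null);

// columns: left, up, view, translation (mCR = column C, row R)
billboard.m00 = left.x;   billboard.m01 = left.y;   billboard.m02 = left.z;
billboard.m10 = up.x;     billboard.m11 = up.y;     billboard.m12 = up.z;
billboard.m20 = view.x;   billboard.m21 = view.y;   billboard.m22 = view.z;
billboard.m30 = boxPos.x; billboard.m31 = boxPos.y; billboard.m32 = boxPos.z;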
What is the formula for calculating the position of 3D point after it has been rotated around another 3D point a certain radians/degrees? I am using Java / LWLJGL.
Could someone just fill in the blanks in the following?
public Vector3f rotate(Vector3f origin, Vector3f rotation)
{
    Vector3f ret = new Vector3f();
    ret.x = __________;
    ret.y = __________;
    ret.z = __________;
    return ret;
}
Consider that your fixed point has coordinates (a,b,c) and the moving object is at (x1,y1,z1) at time t1 and at (x2,y2,z2) at time t2.
option 1
you can consider the projection onto the x-y plane and the projection onto the y-z plane, and calculate the angle in that 2D space.
option 2
you can consider two vectors, say vector A and vector B:
A=(x1-a)i+(y1-b)j+(z1-c)k
B=(x2-a)i+(y2-b)j+(z2-c)k
Now use the dot product of A and B:
A . B = |A||B|cos(angle)
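A small sketch of option 2 with the LWJGL vector class used elsewhere in this thread (a, b, c, x1...z2 are assumed to be floats you already have):

Vector3f A = new Vector3f(x1 - a, y1 - b, z1 - c);
Vector3f B = new Vector3f(x2 - a, y2 - b, z2 - c);

// A . B = |A||B| cos(angle)  =>  angle = acos((A . B) / (|A| |B|))
float cosAngle = Vector3f.dot(A, B) / (A.length() * B.length());
cosAngle = Math.max(-1f, Math.min(1f, cosAngle)); // clamp against rounding error
double angleRadians = Math.acos(cosAngle);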
I wish to determine the 2D screen coordinates (x,y) of points in 3D space (x,y,z).
The points I wish to project are real-world points represented by GPS coordinates and elevation above sea level.
For example:
Point (Lat:49.291882, Long:-123.131676, Height: 14m)
The camera position and height can also be determined as a x,y,z point. I also have the heading of the camera (compass degrees), its degree of tilt (above/below horizon) and the roll (around the z axis).
I have no experience of 3D programming, therefore, I have read around the subject of perspective projection and learnt that it requires knowledge of matrices, transformations etc - all of which completely confuse me at present.
I have been told that OpenGL may be of use to construct a 3D model of the real-world points, set up the camera orientation and retrieve the 2D coordinates of the 3D points.
However, I am not sure if using OpenGL is the best solution to this problem, and even if it is, I have no idea how to create models, set up cameras, etc.
Could someone suggest the best method to solve my problem? If OpenGL is a feasible solution I'd have to use OpenGL ES, if that makes any difference. Oh, and whatever solution I choose, it must execute quickly.
Here's a very general answer. Say the camera's at (Xc, Yc, Zc) and the point you want to project is P = (X, Y, Z). The distance from the camera to the 2D plane onto which you are projecting is F (so the equation of the plane is Z-Zc=F). The 2D coordinates of P projected onto the plane are (X', Y').
Then, very simply:
X' = ((X - Xc) * (F / (Z - Zc))) + Xc
Y' = ((Y - Yc) * (F / (Z - Zc))) + Yc
If your camera is the origin, then this simplifies to:
X' = X * (F/Z)
Y' = Y * (F/Z)
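In code, the general case could look like this (a sketch of the formula above; F is the distance from the camera to the projection plane):

// Projects the world point (X, Y, Z) onto a plane at distance F in front of
// the camera at (Xc, Yc, Zc) and returns {X', Y'}.
static float[] project(float X, float Y, float Z,
                       float Xc, float Yc, float Zc, float F) {
    float scale = F / (Z - Zc); // perspective divide by the depth relative to the camera
    float xProj = (X - Xc) * scale + Xc;
    float yProj = (Y - Yc) * scale + Yc;
    return new float[] { xProj, yProj };
}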
You do indeed need a perspective projection and matrix operations greatly simplify doing so. I assume you are already aware that your spherical coordinates must be transformed to Cartesian coordinates for these calculations.
Using OpenGL would likely save you a lot of work over rolling your own software rasterizer. So, I would advise trying it first. You can prototype your system on a PC since OpenGL ES is not too different as long as you keep it simple.
If you just need to compute the coordinates of some points, you only need some algebra, not 3D programming with OpenGL.
Moreover, OpenGL does not deal with geographic coordinates.
First get some documentation about WGS84 and geodetic coordinates: you first have to convert your GPS data into a Cartesian frame (for instance the earth-centric Cartesian frame in which the WGS84 ellipsoid is defined).
Then the matrix computations can take place.
The chain of transformations is roughly :
WGS84
earth centric coordinates
some local frame
camera frame
2D projection
For the first conversion see this.
The last step involves a projection matrix; the others are only coordinate rotations and translations.
The "some local frame" is the local Cartesian frame tangent to the ellipsoid, with its origin at your camera location.
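For example, the first step (geodetic latitude/longitude/height to earth-centric Cartesian coordinates on the WGS84 ellipsoid) is a standard formula; a sketch in Java:

// WGS84 geodetic (lat, lon in degrees, h in metres) -> ECEF (x, y, z in metres)
static double[] geodeticToEcef(double latDeg, double lonDeg, double h) {
    final double a = 6378137.0;            // WGS84 semi-major axis
    final double f = 1.0 / 298.257223563;  // WGS84 flattening
    final double e2 = f * (2 - f);         // first eccentricity squared
    double lat = Math.toRadians(latDeg);
    double lon = Math.toRadians(lonDeg);
    double sinLat = Math.sin(lat), cosLat = Math.cos(lat);
    // N = prime vertical radius of curvature at this latitude
    double N = a / Math.sqrt(1 - e2 * sinLat * sinLat);
    double x = (N + h) * cosLat * Math.cos(lon);
    double y = (N + h) * cosLat * Math.sin(lon);
    double z = (N * (1 - e2) + h) * sinLat;
    return new double[] { x, y, z };
}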
I'd recommend "Mathematics for 3D Game Programming and Computer Graphics" by Eric Lengyel. It covers matrices, transformations, the view frustum, perspective projection and more.
There is also a good chapter in The OpenGL Programming Guide (red book) on viewing transformations and setting up a camera (including how to use gluLookAt).
If you aren't interested in displaying the 3D scene and are limited to using OpenGL ES then it may be better to just write your own code to do the mapping from 3D to 2D window coords. As a starting point you could download Mesa 3D, an open source implementation of OpenGL, to see how they implement gluPerspective (to set a projection matrix), gluLookAt (to set a camera transformation) and gluProject (to project a 3D point to 2D window coords).
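The core of what gluProject does is small enough to write yourself: multiply the point by the modelview and projection matrices, do the perspective divide, then map to window coordinates. A sketch using the javax.vecmath classes that appear elsewhere in this thread (it assumes you already have both matrices, and skips clipping and the w <= 0 case):

import javax.vecmath.Matrix4f;
import javax.vecmath.Vector4f;

static float[] toWindow(Matrix4f modelview, Matrix4f projection,
                        float x, float y, float z,
                        int viewportWidth, int viewportHeight) {
    Vector4f p = new Vector4f(x, y, z, 1f);
    modelview.transform(p);  // world space -> eye space
    projection.transform(p); // eye space -> clip space
    // perspective divide: clip space -> normalized device coordinates (-1..1)
    float ndcX = p.x / p.w;
    float ndcY = p.y / p.w;
    // viewport transform: NDC -> window pixels (origin bottom-left, as in OpenGL)
    float winX = (ndcX * 0.5f + 0.5f) * viewportWidth;
    float winY = (ndcY * 0.5f + 0.5f) * viewportHeight;
    return new float[] { winX, winY };
}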
return [((fol/v[2])*v[0]+x),((fol/v[2])*v[1]+y)];
A point at [0,0,1] will be x=0 and y=0 unless you add the centre-screen xy (it is not the camera xy). fol is the focal length, derived from the fov angle and the screen width (how high the triangle is, i.e. a tangent). This method will not match the three.js perspective matrix, which is why I am looking for that.
It turns out I should not have been looking for it. I matched xy on OpenGL, perfectly, like super glue! But I cannot get it to work right in Java. THAT perfect match follows.
var pmat = [0,0,0,0,0,0,0,0,0,0,
(farclip + nearclip) / (nearclip - farclip),-1,0,0,
2*farclip*nearclip / (nearclip - farclip),0 ];
void setpmat() {
    double fl; // = tan(dtor(90-fovx/aspect/2)); /// UNIT focal length
    fl = 1/tan(dtor(fov/Aspect/2)); /// same number
    pmat[0] = fl/Aspect;
    pmat[5] = fl;
}
void fovmat(double v[],double p[]) {
    int cx = (int)(_Width/2), cy = (int)(_Height/2);
    double pnt2[4], pnt[4] = { 0,0,0,1 };
    COPYVECTOR(pnt,p); NORMALIZE(pnt);
    popmatrix4(pnt2,pmat,pnt);
    COPYVECTOR(v,pnt2);
    v[0] *= -cx; v[1] *= -cy;
    v[0] += cx; v[1] += cy;
}
// world to screen matrix
void w2sm(int xy[],double p[]) {
    double v[3]; fovmat(v,p);
    xy[0] = (int)v[0];
    xy[1] = (int)v[1];
}
I have one more way to match the three.js xy until I get the matrix working, with just one condition: it must run at an Aspect of 2.
function w2s(fol,v,x,y) {
    var a = width / height;
    var b = height / width;
    /// b = .5 // a = 2
    var f = 1/Math.tan(dtor(_fov/a)) * x * b;
    return [intr((f/v[2])*v[0]+x),intr((f/v[2])*v[1]+y)];
}
Use it with the inverted camera matrix; you will need invert_matrix().
v = orbital(i);
v = subv(v,campos);
v3 = popmatrix(wmatrix,v); //inverted mat
if (v3[2] > 0) {
    xy = w2s(flen,v3,cx,cy);
}
Finally, here it is (everyone ought to know by now): the no-matrix match, for any aspect.
function angle2fol(deg,centerx) {
    var b = width / height;
    var a = dtor(90 - (clamp(deg,0.0001,174.0) / 2));
    return asa_sin(PI_5,centerx,a) / b;
}
function asa_sin(a,s,b) {
    return Math.sin(b) * (s / Math.sin(PI-(a+b)));
} // ASA solve opposing side of angle2 (b)
function w2s(fol,v,x,y) {
    return [intr((fol/v[2])*v[0]+x),intr((fol/v[2])*v[1]+y)];
}
Updated the image for the proof. Input _fov gets you 1.5 that, "approximately." To see the FOV readout correctly, redo the triangle with the new focal length.
function afov(deg,centerx) {
    var f = angle2fol(deg,centerx);
    return rtod(2 * sss_cos(f,centerx,sas_cos(f,PI_5,centerx)));
}
function sas_cos(s,a,ss) {
    return Math.sqrt((Math.pow(s,2)+Math.pow(ss,2))-(2*s*ss*Math.cos(a)));
} // Side Angle Side - solve length of missing side
function sss_cos(a,b,c) {
    with (Math) {
        return acos((pow(a,2)+pow(c,2)-pow(b,2))/(2*a*c));
    }
} // SSS solve angle opposite side2 (b)
Star library confirmed the perspective, then possible to measure the VIEW! http://innerbeing.epizy.com/cwebgl/perspective.jpg
I can explain the 90 degree correction to the moon's north pole in one word: precession. So what is the current up vector, pnt or radec?
function ininorths() {
    if (0) {
        var c = ctime;
        var v = LunarPos(jdm(c));
        c += secday();
        var vv = LunarPos(jdm(c));
        vv = crossprod(v,vv);
        v = eyeradec(vv);
        echo(v,vv);
        v = [266.86-90,65.64]; //old
    }
    var v = [282.6425,65.8873]; /// new.
    // ...
}
I have yet to explain the TWO sets of vectors: Three.milkyway.matrix and the 3D to 2D drawing. They ARE:
function drawmilkyway() {
    var v2 = radec2pos(dtor(192.8595), dtor(27.1283),75000000);
    // gcenter 266.4168 -29.0078
    var v3 = radec2pos(dtor(266.4168), dtor(-29.0078),75000000);
    // ...
}
function initmwmat() {
    var r,u,e;
    e = radec2pos(dtor(156.35), dtor(12.7),1);
    u = radec2pos(dtor(60.1533), dtor(25.5935),1);
    r = normaliz(crossprod(u,e));
    u = normaliz(crossprod(e,r));
    e = normaliz(crossprod(r,u));
    var m = MilkyWayMatrix;
    m[0]=r[0];m[1]=r[1];m[2]=r[2];m[3]=0.0;
    m[4]=u[0];m[5]=u[1];m[6]=u[2];m[7]=0.0;
    m[8]=e[0];m[9]=e[1];m[10]=e[2];m[11]=0.0;
    m[12]=0.0;m[13]=0.0;m[14]=0.0;m[15]=1.0;
}
/// draw vectors and matrix were the same in C !
void initmwmat(double m[16]) {
    double r[3], u[3], e[3];
    radec2pos(e,dtor(192.8595), dtor(27.1283),1); //up
    radec2pos(u,dtor(266.4051), dtor(-28.9362),-1); //eye
}
I think it can be done by applying the transformation matrix of the scenegraph to z-normal (0, 0, 1), but it doesn't work. My code goes like this:
Vector3f toScreenVector = new Vector3f(0, 0, 1);
Transform3D t3d = new Transform3D();
tg.getTransform(t3d); //tg is Transform Group of all objects in a scene
t3d.transform(toScreenVector);
Then I tried something like this too:
Point3d eyePos = new Point3d();
Point3d mousePos = new Point3d();
canvas.getCenterEyeInImagePlate(eyePos);
canvas.getPixelLocationInImagePlate(new Point2d(Main.WIDTH/2, Main.HEIGHT/2), mousePos); //Main is the class for main window.
Transform3D motion = new Transform3D();
canvas.getImagePlateToVworld(motion);
motion.transform(eyePos);
motion.transform(mousePos);
Vector3d toScreenVector = new Vector3d(eyePos);
toScreenVector.sub(mousePos);
toScreenVector.normalize();
But still this doesn't work correctly. I think there must be an easy way to create such vector. Do you know what's wrong with my code or better way to do so?
If I get this right, you want a vector that is normal to the screen plane, but in world coordinates?
In that case you want to INVERT the transformation from World -> Screen and do Screen -> World of (0,0,-1) or (0,0,1) depending on which axis the screen points down.
Since the rotational part of the ModelView matrix is just a rotation matrix (ignoring the homogeneous translation part), you can simply pull this vector out by taking the transpose of the rotational part, or by reading its bottom row, since that row becomes the Z column under transposition.
Yes, you got my question right. Sorry that I was a little bit confused yesterday. Now I have corrected the code by following your suggestion and mixing two pieces of code in the question together:
Vector3f toScreenVector = new Vector3f(0, 0, 1);
Transform3D t3d = new Transform3D();
canvas.getImagePlateToVworld(t3d);
t3d.transform(toScreenVector);
tg.getTransform(t3d); //tg is Transform Group of all objects in a scene
t3d.transform(toScreenVector);
Thank you.