Quaternion rotations based on vector and point - java

I have a program I'm building that uses 3D.
It is basically made of an object (some kind of a cube) that users can rotate and move around, while I can watch their view of the object by inspecting the same object on the server with an arrow directed at their point of view.
The client sends its location, the direction it is facing, and its up vector as 3 vectors to the server.
I am trying to get my arrow to be rendered based on their location and rotation.
I'm doing this with the following code (using LibGDX for 3D):
Vector3[] vs;
Vector3 tmp = new Vector3();
batch.begin(wm.cam);
for (SocketWrapper s : clientAtt.keySet()) { // for every client
    vs = clientAtt.get(s); // read the 3 vectors: { position, direction, up }
    tmp.set(vs[2].nor());
    vs[1].nor();
    // arr.transform is the transform of the arrow
    arr.transform.set(vs[0],
            new Quaternion().setFromCross(tmp.crs(Vector3.Z).nor(), vs[1]));
    batch.render(arr);
}
And this is the definition of arr:
arrow = new ModelBuilder().createArrow(0, 0, 0, 1, 0, 0, 0.1f, 0.1f, 5, GL20.GL_TRIANGLES,
        new Material(ColorAttribute.createDiffuse(Color.RED)),
        VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
arr = new ModelInstance(arrow);
If I only rotate around the Y axis, everything works, but if I use the X/Z axis it goes crazy.
I'm trying to figure out where my math is wrong and how to fix it.
Thanks.

A Quaternion is used to define only a rotation, not an orientation. For example: it defines how to transform any given (unit) vector to another vector so that it is rotated by the amount you specified. But it does not define which vector that is. Even more: there is an infinite number of possible transformations that can achieve that.
The setFromCross method lets you specify that rotation by providing two arbitrary vectors. The Quaternion will then be set so that it would transform the first vector to the second vector, by rotating it around an axis perpendicular to the vectors you provided.
So, in your case:
setFromCross(tmp.set(up).crs(Vector3.Z).nor(), direction)
This sets the Quaternion so that it would rotate the cross product of your up vector and the Z+ vector to your direction vector, along the axis that is perpendicular to those two vectors. That might work for you in some cases, but I doubt that is what you actually want to do. So, to answer your question: that is probably where your math goes wrong.
Although this goes beyond the scope of your question, let's look at how you could achieve what you want to achieve.
First define the orientation of your model when it is not rotated, e.g. which side is up, which side is forward (direction) and which side is right (or left). Let's assume for this case that, in rest, your model's up side is Y+ (upRest = new Vector3(0, 1, 0);), it is facing X+ (directionRest = new Vector3(1, 0, 0);) and its right is Z+ (rightRest = new Vector3(0, 0, 1);).
Now define the rotated orientation you want to have. You already have that, except for the right, for which we can use the cross product (perpendicular) vector: upTarget = new Vector3(vs[2]).nor(); directionTarget = new Vector3(vs[1]).nor(); rightTarget = new Vector3().set(upTarget).crs(directionTarget).nor(); Note that you might need to swap the up and direction target vectors in the cross product (.set(directionTarget).crs(upTarget).nor();).
Because the orientation in rest is axis aligned, we can take a nice little shortcut via one of the properties of a matrix. A Matrix4 can be seen as four vectors: the first specifies the rotated X axis, the second specifies the rotated Y axis, the third specifies the rotated Z axis and the fourth vector specifies the location. So, use this one-liner to set the model to the orientation and position we want:
arr.transform.set(directionTarget, upTarget, rightTarget, vs[0]);
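Putting it all together, here is a hedged sketch of the corrected render loop (assuming, as above, that the arrow created by createArrow points along X+ with Y+ up in rest):
Vector3 dir = new Vector3();
Vector3 up = new Vector3();
Vector3 right = new Vector3();

batch.begin(wm.cam);
for (SocketWrapper s : clientAtt.keySet()) {
    Vector3[] vs = clientAtt.get(s); // { position, direction, up }
    dir.set(vs[1]).nor();
    up.set(vs[2]).nor();
    // Perpendicular right vector; swap the operands if the arrow appears mirrored (see note above).
    right.set(up).crs(dir).nor();
    // The columns of the rotation part are the rotated X, Y and Z axes.
    arr.transform.set(dir, up, right, vs[0]);
    batch.render(arr);
}
batch.end();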

Related

ARCore object tracking without a plane

I am very new to this ARCore and I have been looking at the HelloAR Java Android Studio project provided in the SDK.
Everything works OK and is pretty cool; however, I want to place/drop an object when I touch the screen even when no planes have been detected. Let me explain a little better...
As I understand ARCore, it will detect horizontal planes and ONLY on those horizontal planes I can place 3D objects to be motion tracked.
Is there any way (perhaps using PointCloud information) to be able to place an object in the scene even if there are no horizontal planes detected? Sort of like these examples?
https://experiments.withgoogle.com/ar/flight-paths
https://experiments.withgoogle.com/ar/arcore-drawing
I know they are using Unity and openFrameworks, but could that be done in Java?
Also, I have looked at
How to put an object in the air?
and
how to check ray intersection with object in ARCore
but I don't think I'm understanding the concept of Anchor (I managed to drop the object in the scene, but it either disappears immediately or it is just a regular OpenGL object with no knowledge of the real world).
What I want to understand is:
- Is it possible (and how) to create a custom/user-defined plane, that is, a plane that is NOT automatically detected by ARCore?
- How can I create an Anchor (the sample does it in the PlaneAttachment class, I think) that is NOT linked to any plane OR that is linked to some PointCloud point?
- How do I draw the object and place it at the Anchor previously created?
I think this is too much to ask, but looking at the API documentation has not helped me at all.
Thank you!
Edit:
Here is the code that I added to HelloArActivity.java (everything is the same as the original file except for the lines after // ***** and before the ...):
@Override
public void onDrawFrame(GL10 gl) {
    ...
    MotionEvent tap = mQueuedSingleTaps.poll();
    // I added this to use the screenPointToWorldRay function in the second link I posted... I am probably using this wrong
    float[] worldXY = new float[6];
    ...
    if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
        // ***** I added this to use the screenPointToWorldRay function
        worldXY = screenPointToWorldRay(tap.getX(), tap.getY(), frame);
        ...
    }
    ...
    // Visualize anchors created by touch.
    float scaleFactor = 1.0f;
    for (PlaneAttachment planeAttachment : mTouches) {
        ...
    }
    // ***** This places the object momentarily in the scene (it disappears immediately)
    frame.getPose().compose(Pose.makeTranslation(worldXY[3], worldXY[4], worldXY[5])).toMatrix(mAnchorMatrix, 0);
    // ***** This places the object in the middle of the scene, but since it is not attached to anything there is no tracking; it is always in the middle of the screen (pretty much expected behaviour)
    // frame.getPose().compose(Pose.makeTranslation(0, 0, -1.0f)).toMatrix(mAnchorMatrix, 0);
    // ***** I duplicated this code, which normally gets executed ONLY when touching a detected plane/surface.
    mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
    mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
    ...
}
You would first have to perform a hit test via Frame.hitTest and iterate over the HitResult objects until you hit a Point type Trackable. You could then retrieve a pose for that hit result via HitResult.getHitPose, or attach an anchor to that point and get the pose from that via ArAnchor.getPose (best approach).
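A minimal hedged sketch of that first approach, assuming the ARCore 1.x Java API (the preview API used by the question names things slightly differently):
for (HitResult hit : frame.hitTest(tap)) {
    Trackable trackable = hit.getTrackable();
    if (trackable instanceof Point) {
        // Attaching an anchor lets ARCore keep refining the pose as its
        // understanding of the world improves.
        Anchor anchor = hit.createAnchor();
        anchor.getPose().toMatrix(mAnchorMatrix, 0);
        break;
    }
}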
However, if you want to do this yourself from an arbitrary point retrieved with ArPointCloud.getPoints, it will take a little more work. In this approach, the question effectively reduces to "How can I derive a pose / coordinate basis from a point?".
When working from a plane it is relatively easy to derive a pose, as you can use the plane normal as the up (y) vector for your model and pick x and z vectors to configure where you want the model to "face" about that plane. (Each vector is perpendicular to the other vectors.)
When trying to derive a basis from a point, you have to pick all three vectors (x, y and z) relative to the origin point you have. You can derive the up vector by transforming the vector (0,1,0) through the camera view matrix (assuming you want the top of the model to face the top of your screen) using ArCamera.getViewMatrix. Then you can pick the x and z vectors as any two mutually perpendicular vectors that orient the model in your desired direction.
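Here is a hedged sketch of that derivation with plain float-array math. The basisFromPoint and cross helpers are my own illustration, not ARCore API; the only ARCore assumption is that getViewMatrix fills a column-major 4x4 matrix:
static float[] cross(float[] a, float[] b) {
    return new float[] {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0]
    };
}

// Build a model matrix at `point` whose Y axis matches the camera's up direction.
static float[] basisFromPoint(float[] viewMatrix, float[] point) {
    // Camera up in world space: the second row of the view matrix's rotation
    // block (equivalent to transforming (0,1,0) through the inverse rotation).
    float[] up  = { viewMatrix[1], viewMatrix[5], viewMatrix[9] };
    // Camera forward in world space: the negated third row.
    float[] fwd = { -viewMatrix[2], -viewMatrix[6], -viewMatrix[10] };
    float[] right = cross(fwd, up);   // perpendicular to both
    float[] back  = cross(right, up); // completes a right-handed basis
    float[] m = new float[16];        // column-major model matrix
    m[0]  = right[0]; m[1]  = right[1]; m[2]  = right[2];
    m[4]  = up[0];    m[5]  = up[1];    m[6]  = up[2];
    m[8]  = back[0];  m[9]  = back[1];  m[10] = back[2];
    m[12] = point[0]; m[13] = point[1]; m[14] = point[2]; m[15] = 1f;
    return m;
}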

Checking depth/z when rendering triangular faces in 3d space

My question can be simplified to the following: If a 3d triangle is being projected and rendered to a 2d viewing plane, how can the z value of each pixel being rendered be calculated in order to be stored to a buffer?
I currently have a working Java program that can render 3d triangles to the 2d view as a solid color, and the camera can be moved, rotated, etc. with no problem, working exactly how one would expect. But if I render two triangles that overlap, the one closer to the camera doesn't always obscure the farther one as expected. A Z buffer seems like the best remedy: store the z value of each pixel I render to the screen, and when another pixel is about to be rendered to the same coordinate, compare its z value against the stored one to decide which one to render. The issue I'm now facing is as follows:
How do I determine the z value of each pixel I render? I've thought about it, and there seem to be a few possibilities. One option is to find the equation of the plane (ax + by + cz + d = 0) on which the face lies, then interpolate each pixel of the triangle being rendered (e.g. halfway x-wise on the 2d rendered triangle maps to halfway x-wise through the 3d triangle, same for the y, then solve the plane's equation for z), though I'm not certain this would work. The other option I thought of is to iterate through the points of the 3d triangle at some fixed quantum, then render each point individually, using the z of that point (which I'd probably also have to find through the plane's equation).
Again, I'm currently mainly considering using interpolation, so the pseudo-code would look like this (if I have the plane's equation as "ax + by + cz + d = 0"):
xrange = (pixel.x - 2dtriangle.minX) / (2dtriangle.maxX - 2dtriangle.minX)
yrange = (pixel.y - 2dtriangle.minY) / (2dtriangle.maxY - 2dtriangle.minY)
x3d = (3dtriangle.maxX - 3dtriangle.minX) * xrange + 3dtriangle.minX
y3d = (3dtriangle.maxY - 3dtriangle.minY) * yrange + 3dtriangle.minY
z = (-d - a*x3d - b*y3d) / c
Where pixel.x is the x value of the pixel being rendered; 2dtriangle.minX and 2dtriangle.maxX are the minimum and maximum x values of the triangle being rendered (i.e. of its bounding box) after having been projected onto the 2d view, and its min/max Y variables are the same, but for its Y; 3dtriangle.minX and 3dtriangle.maxX are the minimum and maximum x values of the 3d triangle before having been projected onto the 2d view; a, b, c, and d are the coefficients of the equation of the plane on which the 3d triangle lies; and z is the corresponding z value of the pixel being rendered.
Will that method work? If there's any ambiguity please let me know in the comments before closing the question! Thank you.
The best solution would be calculating the depth for each vertex of the triangle. Then we are able to get the depth of each pixel the same way we do for the colors when rendering a triangle with Gouraud shading. Doing that simultaneously with rendering allows you to check the depth easily.
If we have a situation like the one in the (missing) illustration, and we start to draw lines from the top to the bottom: we calculate the slopes from the first vertex to the others, and add the correct amount of depth every time we move to the next line... and so on.
You didn't provide your rendering method, so I can't say anything specific to it, but you should take a look at some tutorials related to Gouraud shading. With some simple modifications you should be able to use the same technique with depth values.
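As a hedged sketch of that idea in Java: the rasterizeDepth helper below is my own illustration, using barycentric weights instead of scanline slopes (mathematically equivalent per pixel). It assumes zBuffer is initialized to Double.POSITIVE_INFINITY and omits screen-bounds clamping; for full perspective correctness you would interpolate 1/z instead of z and invert at the end.
static void rasterizeDepth(double[] xs, double[] ys, double[] zs, double[][] zBuffer) {
    // Signed area of the projected triangle; skip degenerate triangles.
    double area = (xs[1] - xs[0]) * (ys[2] - ys[0]) - (ys[1] - ys[0]) * (xs[2] - xs[0]);
    if (area == 0) return;
    int minX = (int) Math.floor(Math.min(xs[0], Math.min(xs[1], xs[2])));
    int maxX = (int) Math.ceil(Math.max(xs[0], Math.max(xs[1], xs[2])));
    int minY = (int) Math.floor(Math.min(ys[0], Math.min(ys[1], ys[2])));
    int maxY = (int) Math.ceil(Math.max(ys[0], Math.max(ys[1], ys[2])));
    for (int y = minY; y <= maxY; y++) {
        for (int x = minX; x <= maxX; x++) {
            // Barycentric weights of this pixel relative to the three vertices.
            double w0 = ((xs[2] - xs[1]) * (y - ys[1]) - (ys[2] - ys[1]) * (x - xs[1])) / area;
            double w1 = ((xs[0] - xs[2]) * (y - ys[2]) - (ys[0] - ys[2]) * (x - xs[2])) / area;
            double w2 = 1.0 - w0 - w1;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue; // pixel is outside the triangle
            // Interpolate depth exactly like a Gouraud-shaded color.
            double z = w0 * zs[0] + w1 * zs[1] + w2 * zs[2];
            if (z < zBuffer[y][x]) { // closer than what's already stored?
                zBuffer[y][x] = z;
                // plot(x, y) with this triangle's color here
            }
        }
    }
}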
Well, hopefully this helps!

In Javafx 8, how can I get the correct screen position of a 3d Node?

I am currently working on a 3d visualisation project using javafx 8.
As having too many points is slow when rotating the camera around, I decided to hide those points (3d boxes in my case) not displayed in the scene.
The problem is that when I call box.localToScreen(0, 0, 0), the coordinates sometimes seem incorrect. E.g. sometimes the point is still displayed on the screen, but its coordinates returned by localToScreen(0, 0, 0) can be negative. Have I missed something, or have I misused this method?
Here are some codes I have:
// where I build these boxes from points
for (point p : mergedList) {
    Box pointBox = new Box(length, width, height);
    boxList.add(pointBox);
    pointBox.setTranslateX(p.getX());
    pointBox.setTranslateY(p.getY());
    pointBox.setTranslateZ(p.getZ());
    ...

// where I call localToScreen to get its coordinates
for (Box b : boxList) {
    Point2D p = b.localToScreen(0, 0, 0); // I have also tried b.localToScreen(b.getTranslateX(), b.getTranslateY(), b.getTranslateZ())
    double x = p.getX(), y = p.getY();
    System.out.println(x);
    System.out.println(y);
}
Thanks in advance.
I am also searching for a solution to some of the localToScreen and screenToLocal issues.
For your case: if you are using multiple monitors, only the primary monitor provides you positive coordinates. The secondary monitor will give you negative coordinates.
Have you tried localToScene instead of localToScreen?
Firstly, the localToScreen method transforms the "provided" point by the "calling" object's transform.
Use getLocalToSceneTransform() instead...
This is your "world matrix", and it holds all your transformation info: rotations, scale, etc.
Your position values are {Tx, Ty, Tz}, so plug those into a Point3D and you have your position in scene space (mostly accurate).
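For example, a minimal hedged sketch using the question's boxList (fully qualified names in place of imports):
for (Box b : boxList) {
    javafx.scene.transform.Transform t = b.getLocalToSceneTransform();
    // Tx/Ty/Tz are the node's origin in scene coordinates, with all
    // ancestor transforms (rotations, scales, translations) applied.
    javafx.geometry.Point3D scenePos =
            new javafx.geometry.Point3D(t.getTx(), t.getTy(), t.getTz());
    System.out.println(scenePos);
}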
Another dirty option to "hide" the Boxes is to set their CullFace to FRONT. This will reduce some of the performance issues, since the geometry does not need to be rendered, but it leads to other potential problems with mouse picking and such.
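That is a one-liner per box (Box extends Shape3D, which provides setCullFace):
pointBox.setCullFace(javafx.scene.shape.CullFace.FRONT);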
I recently posted a video of 32k+ cubes being rendered, and I noticed 0 performance issues.
(The video encoding was not that great, so it's blurry in the beginning.)
Video
Hope it helps!

good 3D explosion & particles effect using OpenGL (JOGL)?

I've been wanting to write this for some time now... As a project for the university, I've written (with a friend) a game that needed good explosion & particle effects. We encountered some problems, which we solved quite elegantly (I think), and I'd like to share the knowledge.
OK then, so we found this tutorial: Make a Particle Explosion Effect, which seemed easy enough to implement using Java with JOGL. Before I answer how exactly we implemented this tutorial, I'll explain how rendering is done:
Camera: just an orthonormal basis, which basically means it contains 3 normalized orthogonal vectors and a 4th vector representing the camera position. Rendering is done using gluLookAt:
glu.gluLookAt(cam.getPosition().getX(), cam.getPosition().getY(), cam.getPosition().getZ(),
              cam.getZ_Vector().getX(), cam.getZ_Vector().getY(), cam.getZ_Vector().getZ(),
              cam.getY_Vector().getX(), cam.getY_Vector().getY(), cam.getY_Vector().getZ());
such that the camera's z vector is actually the target, the y vector is the "up" vector, and position is, well... the position.
So (to put it in question style): how do you implement a good particle effect?
P.S.: All the code samples and in-game screenshots (both in answer & question) are taken from the game, which is hosted here: Astroid Shooter
OK then, let's look at how we first approached the implementation of the particles: we had an abstract class Sprite which represented a single particle:
protected void draw(GLAutoDrawable gLDrawable) {
    // each sprite has a different blending function.
    changeBlendingFunc(gLDrawable);
    // getting the quad as an array of length 4, containing vectors
    Vector[] bb = getQuadBillboard();
    GL gl = gLDrawable.getGL();
    // getting the texture
    getTexture().bind();
    // getting the colors
    float[] rgba = getRGBA();
    gl.glColor4f(rgba[0], rgba[1], rgba[2], rgba[3]);
    // draw the sprite on the computed quad
    gl.glBegin(GL.GL_QUADS);
    gl.glTexCoord2f(0.0f, 0.0f); gl.glVertex3d(bb[0].x, bb[0].y, bb[0].z);
    gl.glTexCoord2f(1.0f, 0.0f); gl.glVertex3d(bb[1].x, bb[1].y, bb[1].z);
    gl.glTexCoord2f(1.0f, 1.0f); gl.glVertex3d(bb[2].x, bb[2].y, bb[2].z);
    gl.glTexCoord2f(0.0f, 1.0f); gl.glVertex3d(bb[3].x, bb[3].y, bb[3].z);
    gl.glEnd();
}
Most of the method calls here are pretty much self-explanatory; no surprises, and the rendering is quite simple. In the display method, we first draw all the opaque objects, then we take all the Sprites, sort them (by squared distance from the camera), and draw the particles such that those farther from the camera are drawn first. But the real thing we have to look deeper into here is the method getQuadBillboard. Each particle has to "sit" on a plane that is perpendicular to the camera position, like here:
The way to compute a perpendicular plane like that is not hard (a sketch follows these steps):
Subtract the particle position from the camera position to get a vector that is perpendicular to the plane, and normalize it so it can be used as the plane's normal. A plane is defined tightly by a normal and a position, which we now have (the particle position is a point the plane goes through).
Compute the "height" of the quad by normalizing the projection of the camera's Y vector onto the plane. You can get the projected vector by computing: H = cam.Y - normal * (cam.Y dot normal)
Create the "width" of the quad by computing W = H cross normal
Return the 4 points/vectors: {position+H+W, position+H-W, position-H-W, position-H+W}
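Here is that computation as a hedged sketch, using javax.vecmath since the game's own Vector class isn't shown (halfSize is an assumed per-particle scale):
import javax.vecmath.Vector3d;

static Vector3d[] quadBillboard(Vector3d camPos, Vector3d camY,
                                Vector3d particlePos, double halfSize) {
    // 1. Plane normal: from the particle toward the camera, normalized.
    Vector3d normal = new Vector3d();
    normal.sub(camPos, particlePos);
    normal.normalize();
    // 2. "Height": the camera's Y vector projected onto the plane, normalized.
    Vector3d h = new Vector3d(camY);
    h.scaleAdd(-camY.dot(normal), normal, h); // H = cam.Y - normal * (cam.Y dot normal)
    h.normalize();
    // 3. "Width": perpendicular to both the height and the normal.
    Vector3d w = new Vector3d();
    w.cross(h, normal);
    h.scale(halfSize);
    w.scale(halfSize);
    // 4. The four corners: position+H+W, position+H-W, position-H-W, position-H+W.
    Vector3d[] quad = new Vector3d[4];
    for (int i = 0; i < 4; i++) quad[i] = new Vector3d(particlePos);
    quad[0].add(h); quad[0].add(w);
    quad[1].add(h); quad[1].sub(w);
    quad[2].sub(h); quad[2].sub(w);
    quad[3].sub(h); quad[3].add(w);
    return quad;
}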
But not all sprites act like that; some are not perpendicular. For instance, the shockwave ring sprite, or the flying sparks/smoke trails:
So each sprite had to provide its own unique "billboard". BTW, the computation of the smoke trails & flying sparks was a bit of a challenge as well. We created another abstract class, which we called LineSprite. I'll skip the explanations here; you can see the code here: LineSprite.
Well, this first try was nice, but there was an unexpected problem. Here's a screenshot that illustrates the problem:
As you can see, the sprites intersect each other. If we look at 2 sprites that intersect, part of the 1st sprite is behind the 2nd sprite, and another part of it is in front of the 2nd sprite, which resulted in some weird rendering where the lines of the intersection are visible. Note that even if we disabled glDepthMask when rendering the particles, the result would still show the lines of intersection, because of the different blending that takes place in each sprite. So we had to somehow make the sprites not intersect. The idea we had was really cool.
You know all those really cool 3D street art pieces?
Here's an image that emphasizes the idea:
We thought the idea could be implemented in our game, so the sprites wouldn't intersect each other. Here's an image to illustrate the idea:
Basically, we made all the sprites lie on parallel planes, so no intersection could take place. And it did not affect the visible result, since it stayed the same from the camera's point of view. From every other angle it would look stretched, but from the camera it still looked great. So, for the implementation:
When given 4 vectors representing a quad billboard and the position of the particle, we need to output a new set of 4 vectors that represents the original quad billboard projected onto a shared plane. The idea of how to do this is explained well here: Intersection of a plane and a line. We have the "line", defined by the camera position and each of the 4 vectors. We have the plane, since we can use our camera's Z vector as the normal and the position of the particle as a point on it. A small change is also needed in the comparison function for sorting the sprites: it should now use the homogeneous matrix defined by our camera's orthonormal basis, and the computation is as easy as: cam.getZ_Vector().getX()*pos.getX() + cam.getZ_Vector().getY()*pos.getY() + cam.getZ_Vector().getZ()*pos.getZ();. One more thing to notice: if a particle is outside the camera's viewing angle, i.e. behind the camera, we don't want to see it, and especially, we don't want to compute its projection (that could result in some very weird and psychedelic effects...).
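Here's that projection as a hedged sketch in the same javax.vecmath style (camPos and camZ are assumed to be the camera's position and normalized Z/target vector):
static Vector3d projectOntoPlane(Vector3d camPos, Vector3d camZ,
                                 Vector3d particlePos, Vector3d vertex) {
    // Ray from the camera through the billboard vertex.
    Vector3d rayDir = new Vector3d();
    rayDir.sub(vertex, camPos);
    // Plane through the particle position with normal camZ.
    Vector3d toPlane = new Vector3d();
    toPlane.sub(particlePos, camPos);
    double denom = camZ.dot(rayDir);
    // A vertex behind the camera (or a ray parallel to the plane) has no
    // meaningful projection: exactly the psychedelic case mentioned above.
    if (denom <= 0) return null;
    double t = camZ.dot(toPlane) / denom;
    Vector3d result = new Vector3d();
    result.scaleAdd(t, rayDir, camPos); // camPos + t * rayDir
    return result;
}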
And all that is left is to show the final Sprite class.
The result is quite nice:
Hope it helps; I'd love to get your comments on this "article" (or on the game :}, which you can explore, fork, and use however you want...)

Java3D: Rotating the universe by increments

I'm trying to develop a Java3D method for rotating the universe in increments from the current viewing direction to the direction at the center of an object.
In other words, I want the 3D universe to rotate in, say, 100 short steps, so that an object that I click on appears to move gradually to the center of the screen.
I've reviewed the various answers to 3D rotation questions here on StackOverflow (as well as on the Web), but pretty much all of them are specific to rotating objects, not the world itself.
I've also tried to review my linear algebra, but that's not helping me to identify Java-specific functions that accomplish my requirements.
So far I've tried defining a set of incremental XYZ coordinates and dynamically using lookAt() in each pass through the loop. That almost works, but I don't see any way to preserve or obtain viewpoint values from one complete rotation pass to the next; each rotation pass starts out looking at the origin.
I've also tried defining a rotation matrix by obtaining the difference between the target and start transforms and dividing by the number of increments (and removing the scaling value), then adding that incremental rotation matrix to the current view direction at each pass through the loop. That works just fine for an increment value of 1. But splitting the rotation into two or more increments always generates the "BadTransformException: Non-congruent transform above ViewPlatform" error. (I've read the meager documentation of this exception in the Java3D API reference; it might as well have been written in Urdu for all I could make out from it. There seems to be no plain-English definition of 3D-context terms like "affine" or "shear" or "congruent" or "uniform" anywhere that Google can see.)
I then tried to cudgel my code into providing an AxisAngle4d, obtaining the angle (in radians), dividing that angle into my desired increments, and rotating by the incremental angle value. That rotated the world, all right, but nowhere near the object I picked, and not to any pattern I could see.
In desperation I tried using rotX and rotY (setting Z to the endpoint) on the extracted angle, and even blindly threw a couple of Math.cos() and Math.sin() wrappers in there. Still no joy.
My instincts are telling me that I've got the basics in place and that there's a relatively simple solution in Java3D. But clearly there's a comprehension wall I'm hitting. Rather than continue that, I thought I'd go ahead and see if anyone here can suggest a solution in Java3D. Code is preferred, but I'm willing to try to follow an explanation in linear algebra if that will get me to a code solution.
Below is the core of the method I'm using to schedule rotation increments using Java's Timer method. The part I need help with is just before the ActionListener. Presumably that's where the magic code would go that creates some kind of incremental rotation value I can apply (in the loop) to the current view direction in order to rotate the universe without getting "non-congruent" errors.
private void flyRotate(double endX, double endY, double endZ)
{
    // Rotate universe by increments until target object is centered in view
    //
    // REQUIREMENTS
    // 1. Rotate the universe by NUMROTS increments from an arbitrary (non-origin)
    //    3D position and starting viewpoint to an ending viewpoint using the
    //    shortest path and preserving the currently defined "up" vector.
    // 2. Use the Java Timer() method to schedule the visual update for each
    //    incremental rotation.
    //
    // GLOBALS
    // rotLoop contains the integer loop counter for rotations (init'd to 0)
    // viewTransform3D contains rotation/translation for current viewpoint
    // t3d is a reusable Transform3D variable
    // vtg contains the view platform transform group
    // NUMROTS contains the number of incremental rotations to perform
    //
    // INPUTS
    // endX, endY, endZ contain the 3D position of the target object
    //
    // NOTE: Java3D v1.5.1 or later is required for the Vector3d getX(),
    // getY(), and getZ() methods to work.
    final int delay = 20; // milliseconds between firings
    final int pause = 10; // milliseconds before starting

    // Get translation components of starting viewpoint vector
    Vector3d viewVector = new Vector3d();
    viewTransform3D.get(viewVector);
    final double startX = viewVector.getX();
    final double startY = viewVector.getY();
    final double startZ = viewVector.getZ();

    // Don't try to rotate to the location of the current viewpoint
    if (startX != endX || startY != endY || startZ != endZ)
    {
        // Get a copy of the starting view transform
        t3d = new Transform3D(viewTransform3D);

        // Define the initial eye/camera position and the "up" vector
        // Note: "up = +Y" is just the initial naive implementation
        Point3d eyePoint = new Point3d(startX, startY, startZ);
        Vector3d upVector = new Vector3d(0.0, 1.0, 0.0);

        // Get target view transform
        // (Presumably something like this is necessary to get a transform
        // containing the ending rotation values.)
        Transform3D tNew = new Transform3D();
        Point3d viewPointTarg = new Point3d(endX, endY, endZ);
        tNew.lookAt(eyePoint, viewPointTarg, upVector);
        tNew.invert();

        // Get a copy of the target view transform usable by the Listener
        final Transform3D tRot = new Transform3D(tNew);

        //
        // (obtain either incremental rotation angle
        // or congruent rotation transform here)
        //
        ActionListener taskPerformer = new ActionListener()
        {
            public void actionPerformed(ActionEvent evt)
            {
                if (++rotLoop <= NUMROTS)
                {
                    // Apply incremental angle or rotation transform to the
                    // current view
                    t3d = magic(tRot);

                    // Communicate the rotation to the view platform transform group
                    vtg.setTransform(t3d);
                }
                else
                {
                    timerRot.stop();
                    rotLoop = 0;
                    viewTransform3D = t3d;
                }
            }
        };

        // Set timer for rotation steps
        timerRot = new javax.swing.Timer(delay, taskPerformer);
        timerRot.setInitialDelay(pause);
        timerRot.start();
    }
}
As is often the case with these things, there may be a better way to do what I'm trying to accomplish here by stepping back and rethinking the problem. I'm open to constructive suggestions there as well.
Thanks very much for any assistance with this!
UPDATE
Let me try to define the goal a little more concretely.
I have a Java3D universe containing many Sphere objects. I can click on each object and dynamically obtain its predefined XYZ coordinates.
At any moment, I am looking at all currently visible objects with a "camera" at a particular XYZ position and a view direction, which are contained in a transform holding the rotation matrix and translation vector.
(Note: I can both rotate the universe and translate through it using the mouse independently of clicking on objects. So there will be times when the view transform containing the camera's current rotation matrix and translation vector is not pointing at any target object with known XYZ coordinates.)
Given the camera transform and the object's XYZ coordinates, I want to rotate the universe around my current camera position until the selected object is centered in the screen. And I want to do this as a sequence of discrete incremental rotations, each of which is rendered so that the visible universe appears to "spin" in the viewing window until the selected object is centered. (I'm following this up with a translation to the object; that part at least is working!)
Example: Suppose the camera is at the origin, "up" is 1.0 along the Y-axis, and the selected object is centered ten units directly to my left. Assuming I had a 180-degree field of view, I could click on the half of the sphere that is visible all the way to the left of the screen and halfway between the top and bottom of the screen.
When I give the word, every visible object in the universe should appear to move in a sequence of evenly-spaced steps (let's say 50) from my left to my right until the selected object is exactly centered in the screen.
In coding terms, I need to work out the Java3D code by which I can rotate the universe around an imaginary line that runs through my camera position (currently at 0,0,0) and that is perfectly aligned with the Y-axis of the universe's coordinate system. (I.e., the axis of rotation sweeps through a plane where Z is always equal to the Z component of the camera's position.)
The complicating requirements are:
The camera can be translated somewhere in 3D space other than the origin.
Objects can be anywhere in 3D space with respect to the camera's current position and view, including being visible but off the screen (outside the view frustum) entirely.
Rotations should take the shortest path -- no spinning the universe more than 180 degrees at a time.
There should not be any "jump" or "twisting" of the visible universe as the first step in the rotation process; i.e., the current "up" vector (not the universe's absolute "up" vector) should be preserved.
So there's the question: given a transform holding the (virtual) camera's current translation and rotation information, and the XYZ coordinates in universe space of a target object, what Java3D code will rotate the universe around the camera in N equal steps until the object is centered in the screen?
Presumably this solution is in two parts: first, some 3D math (expressed in Java3D) to calculate the incremental rotation information given only the camera transform and object's XYZ coordinates; second, a loop that [applies the incremental rotation to the current viewing transform and updates the screen] until the loop counter equals the number of increments.
It's that 3D math part that's beating me. I'm not seeing and can't bash out a way to obtain some form of incremental rotation information from the current camera transform and target object position that I can then apply to the camera transform. At least, I haven't found any way that doesn't cause jumping or twisting or unequal incremental movement steps (or a "non-congruent transform above ViewPlatform" exception).
There must be a simple solution....
So if I understand correctly, your goal is to rotate the camera so it centers on the selected object, but that rotation should not be about an arbitrary vector, but instead should preserve the camera's "up" direction.
A solution that might work then:
First, calculate the rotation angle (let's call it A) about the "up" vector necessary so that the camera is facing the object you want.
Second, calculate the translation distance/direction (let's call it D) necessary along the "up" vector so that the object lines up as necessary with the camera. This will likely just be the difference in the Z/Y coordinate between the camera/object.
Find dA and dD by dividing A and D by N, the number of increments you want to take to smooth the motion.
In a timer/time loop, increment A and D by dA and dD respectively N times, taking them to their final values. Remember that you are rotating the camera about its "up" vector and current location, not about the origin.
If you want an even smoother, more realistic looking rotation, consider using SLERP.
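For what it's worth, here is a hedged sketch of the "magic" step using standard javax.vecmath/Java3D calls. The axis/angle derivation is my reading of the requirements above, not tested against the question's scene graph:
// Current camera basis, taken from the view transform (the columns of the
// rotation block are the camera axes expressed in world coordinates).
Matrix3d rot = new Matrix3d();
viewTransform3D.get(rot);
Vector3d camPos = new Vector3d();
viewTransform3D.get(camPos);
Vector3d up      = new Vector3d(rot.m01, rot.m11, rot.m21);    // camera up
Vector3d forward = new Vector3d(-rot.m02, -rot.m12, -rot.m22); // camera looks down -Z

// Direction to the target, with both vectors flattened onto the plane
// perpendicular to "up" so the current up vector is preserved (no twisting).
Vector3d toTarget = new Vector3d(endX - camPos.x, endY - camPos.y, endZ - camPos.z);
toTarget.scaleAdd(-toTarget.dot(up), up, toTarget);
forward.scaleAdd(-forward.dot(up), up, forward);
toTarget.normalize();
forward.normalize();

// Signed shortest-path angle about "up" from forward to toTarget.
Vector3d cross = new Vector3d();
cross.cross(forward, toTarget);
double angle = Math.atan2(cross.dot(up), forward.dot(toTarget));

// One increment: a rotation about the "up" axis through the camera position.
Transform3D stepRot = new Transform3D();
stepRot.set(new AxisAngle4d(up.x, up.y, up.z, angle / NUMROTS));
Transform3D toOrigin = new Transform3D();
toOrigin.set(new Vector3d(-camPos.x, -camPos.y, -camPos.z));
Transform3D fromOrigin = new Transform3D();
fromOrigin.set(camPos);
final Transform3D incremental = new Transform3D(fromOrigin);
incremental.mul(stepRot);
incremental.mul(toOrigin);

// Inside the timer, in place of magic(tRot):
// t3d.mul(incremental, t3d); // pre-multiply: rotate the view about the camera
// t3d.normalize();           // re-orthogonalize the rotation block
// vtg.setTransform(t3d);
The normalize() call matters: accumulated floating-point drift in the rotation block is a common cause of the "non-congruent transform above ViewPlatform" exception mentioned above.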
