Short Background
I'm working on a SceneManager which allows the creation of SceneNode objects. Each SceneNode contains information about position, scaling, and rotation. To make an object appear in the scene, it simply gets attached to a SceneNode and the transforms contained in the SceneNode are applied to it. For example, to get a camera to look at a sphere in the scene, we could write:
Camera cam = sceneManager.createCamera("MainCamera", Camera.ProjectionType.PERSPECTIVE);
Entity sphere = sceneManager.createEntity("Sphere", "sphere_mesh.obj");
SceneNode camNode = sceneManager.createSceneNode(cam.getName() + "Node");
SceneNode sphereNode = sceneManager.createSceneNode(sphere.getName() + "Node");
// turns camera node towards the sphere node as expected
camNode.lookAt(sphereNode);
The simplified example above should give the general idea, and would produce the expected result, i.e. I have no problems getting things to appear on screen and making the camera look at them.
The Problem
I want a SceneNode containing an Entity to point towards another SceneNode containing a different Entity (not the Camera), but it ends up pointing in the opposite direction from the one I intended.
For example, if the target node is at position [0, 0, -5] and the observer node is at position [0, 0, 5], we want the observer node's forward vector to be [0, 0, -1], but instead it ends up as [0, 0, 1], looking away from the target.
I discovered this bug recently after attaching a Light to a SceneNode and telling it to lookAt the cube's SceneNode to illuminate it, but it did not get lit.
This pic shows the problem, where the center cube should be lit, but isn't.
The white sphere marks the position of the spot light, which should be looking at the center cube, but the cube isn't getting lit because the light is actually looking in the opposite direction. In other words, I tried: lightNode.lookAt(cubeNode);
While debugging, I can see that the resulting rotation matrix looks like this:
s u f = side, up, forward; column-major matrix
[1 0 0]
[0 1 0]
[0 0 1] <-- wrong: looking down the +Z axis, towards viewer
Instead, it should look like this:
s u f
[1 0 0]
[0 1 0]
[0 0 -1] <-- good: looking down the -Z axis, into screen
(Yes, the underlying rendering system is OpenGL-based, which means that "front" is actually down the -Z axis.)
If I modify the code above to look at the negative of the actual location, i.e. lightNode.lookAt(cubeNode.getRelativePosition().mult(-1));, then it looks at the cube and lights it up, as shown below.
So, to make it look towards the "front", I had to tell it to look "back". I've been hunting this bug down for the last several days, even double-checking that the look-at matrix is being built correctly. Based on my references, it seems to be correct, but I've included it below anyway because it was a possible suspect at one point.
public static Matrix4 createLookAtMatrix(Vector4 eyePosition, Vector4 targetPosition, Vector4 upDirection) {
    Vector4 f = targetPosition.sub(eyePosition).normalize();
    Vector4 s = f.cross(upDirection).normalize();
    Vector4 u = s.cross(f).normalize();
    float[][] values = new float[][] {
        { s.x(), s.y(), s.z(), -s.dot(eyePosition) },
        { u.x(), u.y(), u.z(), -u.dot(eyePosition) },
        {-f.x(), -f.y(), -f.z(),  f.dot(eyePosition) },
        { 0f   , 0f   , 0f   ,  1f }
    };
    return createFrom(values);
}
I'm currently out of ideas and would appreciate any help leading to the capture and execution of this bug.
I don't know whether it could be related to an older question I posted several months ago, which is still unresolved, but it's included here just in case it turns out to be relevant.
I assume you created the look_at function to work with the camera; that's why it doesn't work with other objects. The camera's z-axis always points away from what it is looking at, which I assume is why you added the minus signs to the third row of the matrix.
One way to solve it would be to have both a createLookAtMatrix and a createCameraLookAtMatrix, the latter like the one you have now and the other without the minus signs on the third row.
Another way would be to always keep the camera under a "camera control" node in the scene graph. You would keep the camera's local transformation permanently rotated 180 degrees around the y-axis and then apply the look-at matrix to the parent node instead, as in the sketch below.
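This sketch uses the SceneNode API from the question; setParent and rotate are hypothetical stand-ins for whatever parenting and rotation methods your scene graph actually exposes:
SceneNode camControlNode = sceneManager.createSceneNode("CameraControlNode");
SceneNode camNode = sceneManager.createSceneNode(cam.getName() + "Node");
camNode.setParent(camControlNode);   // hypothetical: parent the camera node under the control node
camNode.rotate(0f, 180f, 0f);        // hypothetical: permanent local 180-degree yaw so the camera's -Z matches the node's +Z
camControlNode.lookAt(sphereNode);   // aim the control node; the camera inherits the flipped orientation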
In any case, this should work for objects other than the camera:
public static Matrix4 createLookAtMatrixForObjects(Vector4 eyePosition, Vector4 targetPosition, Vector4 upDirection) {
    Vector4 f = targetPosition.sub(eyePosition).normalize();
    Vector4 s = f.cross(upDirection).normalize();
    Vector4 u = s.cross(f).normalize();
    float[][] values = new float[][] {
        { s.x(), s.y(), s.z(), -s.dot(eyePosition) },
        { u.x(), u.y(), u.z(), -u.dot(eyePosition) },
        { f.x(), f.y(), f.z(), -f.dot(eyePosition) },
        { 0f   , 0f   , 0f   ,  1f }
    };
    return createFrom(values);
}
Related
I need help with calculating the lookAt method
Here is my method
public void lookAt(Vector3f position, Vector3f direction, Vector3f up) {
    Vector3f f = new Vector3f();
    Vector3f u = new Vector3f();
    Vector3f s = new Vector3f();

    Vector3f.sub(direction, position, f);
    f.normalise(f);
    up.normalise(u);
    Vector3f.cross(f, u, s);
    s.normalise(s);
    Vector3f.cross(s, f, u);

    this.setIdentity();
    this.m00 = s.x;
    this.m10 = s.y;
    this.m20 = s.z;
    this.m01 = u.x;
    this.m11 = u.y;
    this.m21 = u.z;
    this.m02 = -f.x;
    this.m12 = -f.y;
    this.m22 = -f.z;
    this.m30 = -Vector3f.dot(s, position);
    this.m31 = -Vector3f.dot(u, position);
    this.m32 = Vector3f.dot(f, position);
}
But when I test it like this: camera.lookAt(position, new Vector3f(1, 0, 0), new Vector3f(0, -1, 0)); my camera is looking down, and only if I do this: camera.lookAt(position, new Vector3f(10000, 0, 0), new Vector3f(0, -1, 0)); is the camera looking forward. Can you help please?
P.S. sorry for my english
The second parameter of a lookAt function is usually not the direction in which you want to look but the point you want to look at. As far as I can see, the calculation in your method also expects a second point and not a direction (the direction is computed from the two points and stored in f).
In conclusion, the results you get look correct to me, except that you passed the wrong parameters to the function.
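For example, if you really do have a direction, you can convert it into a target point before calling lookAt. This is only a sketch; it assumes the static Vector3f.add from the same LWJGL-style vector class your method already uses for sub and cross:
// Hypothetical adapter: turn (position, direction) into (position, point to look at).
Vector3f target = new Vector3f();
Vector3f.add(position, direction, target);   // target = position + direction
camera.lookAt(position, target, new Vector3f(0, -1, 0));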
How to create a function that creates a matrix from a point and a direction
The required rotation is one that maps the minus z-direction onto the view vector. Additionally, we want the x-axis to be mapped so that it is perpendicular to the plane spanned by the view and up vectors. The first part can be achieved relatively easily by writing -view into the third row of the 3x3 matrix. The right vector, onto which the x-axis should be mapped, is computed as the cross product of the view and up vectors. The last vector (the target mapping for the y-axis) is then computed as the cross product of the right and view vectors. Cross products are used for these vectors because we know that the basis vectors of a rotation matrix are perpendicular to each other:
viewv  = normalize(-view)
rightv = normalize(cross(view, up))
upv    = normalize(cross(rightv, view))

                   [ --- rightv --- ]
rotation_matrix =  [ ---  upv   --- ]
                   [ --- viewv  --- ]
When the camera is located at the origin, we are done. But since this is generally not the case, we have to add a translation part that transforms the scene such that the camera ends up at the origin, thus t = -camera.
The final matrix is now composed by first translating the space and then rotating it according to our calculated matrix:
lookat_matrix = rotation_matrix * translate(-camera)
Since it is fairly late here: depending on the notation you use, the rotation matrix might have to be transposed and some signs might have to be adjusted.
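If it helps, here is a minimal sketch of the recipe above in plain Java, using float[3] arrays so it doesn't assume any particular math library. It returns a row-major 4x4 matrix whose rotation rows are (right, up, -view) and whose translation part is rotation * (-camera); depending on your conventions you may still need to transpose it or flip signs as noted above:
static float[][] lookAtFromDirection(float[] camera, float[] view, float[] up) {
    float[] v = normalize(view);
    float[] r = normalize(cross(v, up));   // right = view x up
    float[] u = cross(r, v);               // up' = right x view (already unit length)
    return new float[][] {
        {  r[0],  r[1],  r[2], -dot(r, camera) },
        {  u[0],  u[1],  u[2], -dot(u, camera) },
        { -v[0], -v[1], -v[2],  dot(v, camera) },
        {    0f,    0f,    0f,              1f }
    };
}

static float[] cross(float[] a, float[] b) {
    return new float[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
}

static float dot(float[] a, float[] b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

static float[] normalize(float[] a) {
    float len = (float) Math.sqrt(dot(a, a));
    return new float[] { a[0] / len, a[1] / len, a[2] / len };
}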
I'm trying to make a vortex effect on a circle Body that is a sensor.
I've been looking for this, and all the examples I find are in C++ or Objective-C, and I don't seem to translate them well.
When my objects collide, beginContact(..) is called and sets a flag so that I can call bodyToUpdate.applyForce(...);
public void beginContact(Contact contact) {
    setColliding(true);
}

// updating collision every frame
public void act() {
    if (colliding) {
        ball.getBody().applyForce(....);
    }
}
How do I calculate the amount of force to apply every frame to make it a vortex?
Edit:
So I now have the object going straight to the center of the vortex, but no "spin":
public void act() {
    if (colliding) {
        ball.getBody().setLinearVelocity(0, 0);
        ball.getBody().applyForce((portal.getBody().getPosition().x - ball.getBody().getPosition().x) * i,
                                  (portal.getBody().getPosition().y - ball.getBody().getPosition().y) * i,
                                  ball.getBody().getPosition().x, ball.getBody().getPosition().y, true);
        i++;
    } else {
        i = 10;
    }
}
If by "spin" you mean that the falling object would move along a curve or a spiral, rather then changing the direction of movement immediately towards the black hole, there is an easy fix for that.
ball.getBody().setLinearVelocity(0, 0);
This completely stops the current movement of the body. I would start by removing that line. Also, for more realistic behaviour, you can follow the proper formula for computing the attractive force, which goes something like this:
force = mass1 * mass2 * [some constant] / (distance ^ 2)
When you have the vector from your body towards the black hole (computed as black hole position - body position), the length of that vector is the distance, and after normalizing it and multiplying by the force you have the desired forceX and forceY force vector, which needs to be applied to the body each update as long as it stays in range of the hole.
However, this formula makes the force grow to infinity as the body moves closer to the hole, so you could switch to a linear falloff (closest = 1, farthest = 0) if that causes any trouble:
force = mass1 * mass2 * [some constant] * ( (maxDistance - distance) / maxDistance )
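For instance, here is a libGDX-flavoured sketch of the attraction step, assuming ball and portal expose getBody() as in your snippet and that colliding is the flag set in beginContact. GRAVITY_CONSTANT is just a tuning value; since a static sensor body reports zero mass, the portal's "mass" is folded into it:
import com.badlogic.gdx.math.Vector2;

private static final float GRAVITY_CONSTANT = 5f;   // tune by experiment

public void act() {
    if (colliding) {
        Vector2 toPortal = portal.getBody().getPosition().cpy()
                                 .sub(ball.getBody().getPosition());
        float distance = Math.max(toPortal.len(), 0.1f);            // avoid dividing by ~0
        float strength = GRAVITY_CONSTANT * ball.getBody().getMass()
                       / (distance * distance);                      // inverse-square falloff
        Vector2 force = toPortal.nor().scl(strength);                // points towards the portal
        ball.getBody().applyForceToCenter(force, true);
    }
}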
You want to implement a tangential force with a magnitude that increases towards the center of the vortex.
Here's some pseudocode.
radialVector = objectPosition - vortexPosition;
tangentialVector = radialVector.perpendicularVector();
if (radialVector.length() < vortexRadius) {
// Swirl faster when near the center of the vortex.
// Max tangential force when distance from center is 0.
// Min tangential force when distance from center is vortexRadius.
forceMagnitude = map(radialVector.length(), vortexRadius, 0, minTangentialForce, maxTangentialForce);
force = forceMagnitude * tangentialVector.normalize();
object.applyForce(force);
}
Here's an image that shows the vector components:
To create a whirlpool effect there should be increasing radial (Fr) and tangential (Ft) forces as the object moves closer to the center.
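Putting the two together, a libGDX-style sketch (using com.badlogic.gdx.math.Vector2 as in the sketch above) might look like this; objectPosition, vortexPosition, body, vortexRadius and the min/max force constants are assumed to come from your own game objects:
Vector2 radial = vortexPosition.cpy().sub(objectPosition);       // points towards the center
float distance = radial.len();
if (distance < vortexRadius && distance > 0.001f) {
    float t = (vortexRadius - distance) / vortexRadius;          // 0 at the rim, 1 at the center
    Vector2 pull  = radial.cpy().nor().scl(minRadialForce + t * (maxRadialForce - minRadialForce));
    Vector2 swirl = new Vector2(-radial.y, radial.x)             // perpendicular to radial
                        .nor().scl(minTangentialForce + t * (maxTangentialForce - minTangentialForce));
    body.applyForceToCenter(pull.add(swirl), true);
}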
I'm trying to detect the positions of billiards balls on a table from an image taken at a perspective angle. I'm using the getPerspectiveTransform() method to find the transformation matrix, and I want to apply it to only the circles I detect using HoughCircles. I'm trying to go from a rather large trapezoidal shape to a smaller rectangular shape. I don't want to do the transformation on the image first and then find the HoughCircles, because the image gets too warped for HoughCircles to provide useful results.
Here's my code:
CvMat mmat = cvCreateMat(3,3,CV_32FC1);
double srcX1 = 462;
double srcX2 = 978;
double srcX3 = 1440;
double srcX4 = 0;
double srcY = 241;
double srcHeight = 772;
double dstX = 56.8;
double dstY = 33.5;
double dstWidth = 262.4;
double dstHeight = 447.3;
CvSeq seq = cvHoughCircles(newGray, circles, CV_HOUGH_GRADIENT, 2.1d, (double)newGray.height()/40, 85d, 65d, 5, 50);
JavaCV.getPerspectiveTransform(new double[]{srcX1, srcY, srcX2,srcY, srcX3, srcHeight, srcX4, srcHeight},
new double[]{dstX, dstY, dstWidth, dstY, dstWidth, dstHeight, dstX, dstHeight}, mmat);
cvWarpPerspective(seq, seq, mmat);
for(int j=0; j<seq.total(); j++){
CvPoint3D32f point = new CvPoint3D32f(cvGetSeqElem(seq, j));
float xyr[] = {point.x(),point.y(),point.z()};
CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
int radius = Math.round(xyr[2]);
cvCircle(gray, center, 3, CvScalar.GREEN, -1, 8, 0);
cvCircle(gray, center, radius, CvScalar.BLUE, 3, 8, 0);
}
The problem is I get this error on the warpPerspective() method:
error: (-215) seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size in function cv::Mat cv::cvarrToMat(const CvArr*, bool, bool, int)
Also I guess it's worth mentioning that I'm using JavaCV, in case the method calls look a bit different than what you're used to. Thanks for any help.
Answer:
The problem with what you want to do (besides the obvious: OpenCV won't let you) is that the radius can't really be warped correctly. AFAIK the x, y coordinates are pretty easy to calculate: x' = (m00*x + m01*y + m02) / (m20*x + m21*y + m22) and y' = (m10*x + m11*y + m12) / (m20*x + m21*y + m22), where m is the transformation matrix. The radius you can hack by transforming all the points of the original circle and then finding the max distance between (x', y') and those points (at least if the radius in the warped image is expected to cover all those points).
BTW, mIJ*x means m(i,j) * x (just to clarify the notation).
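If you do end up warping the circle centers yourself, a small helper like this applies those formulas directly (a sketch, assuming the 3x3 perspective matrix is available as a row-major double[9], m00..m22):
// Apply a 3x3 perspective transform to a single point.
static double[] warpPoint(double[] m, double x, double y) {
    double w = m[6] * x + m[7] * y + m[8];   // m20*x + m21*y + m22
    return new double[] {
        (m[0] * x + m[1] * y + m[2]) / w,    // x'
        (m[3] * x + m[4] * y + m[5]) / w     // y'
    };
}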
End Answer.
Everything I write is according to the C++ version; I've never used JavaCV, but from what I could see it's just a wrapper that calls the native C++ library.
CvSeq is a sequence data structure that behaves like a linked list.
The assertion your application crashes on is
CV_Assert(seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size);
which means that either your seq instance is empty (total is the number of elements in the sequence) or the inner seq flags are somehow corrupted.
I'd recommend that you check the total member of your CvSeq, and the cvHoughCircles call.
All of this occurs before the actual implementation of cvWarpPerspective (it's the first line in the implementation, which only converts your CvSeq to cv::Mat), so it's not the warping but what you're doing before it.
Anyway, to understand what's wrong with cvHoughCircles we'll need more info about the creation of newGray and circles.
Here is an example I've found on the JavaCV page (Link):
IplImage gray = cvCreateImage( cvSize( img.width, img.height ), IPL_DEPTH_8U, 1 );
cvCvtColor( img, gray, CV_RGB2GRAY );
// smooth it, otherwise a lot of false circles may be detected
cvSmooth(gray, gray, CV_GAUSSIAN, 9, 9, 2, 2);
CvMemStorage circles = CvMemStorage.create();
CvSeq seq = cvHoughCircles(gray, circles.getPointer(), CV_HOUGH_GRADIENT,
        2, img.height/4, 100, 100, 0, 0);
for (int i = 0; i < seq.total; i++) {
    float xyr[] = cvGetSeqElem(seq, i).getFloatArray(0, 3);
    CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
    int radius = Math.round(xyr[2]);
    cvCircle(img, center.byValue(), 3, CvScalar.GREEN, -1, 8, 0);
    cvCircle(img, center.byValue(), radius, CvScalar.BLUE, 3, 8, 0);
}
From what I've seen in the implementation of cvHoughCircles, the results are stored in the circles buffer and at the end a CvSeq is created from it to return, so if you've allocated the circles buffer incorrectly, it won't work.
EDIT:
As you can see, the CvSeq instance returned from cvHoughCircles is a list of point values, which is probably why the assertion failed. You cannot convert this CvSeq into a cv::Mat, because it's just not a cv::Mat. To get only the circles returned from cvHoughCircles into a cv::Mat instance, you'll need to create a new cv::Mat instance and then draw all the circles from the CvSeq onto it - as seen in the provided example above.
Then the warping will work (you'll have a cv::Mat instance, and that is what the function expects - a cv::Mat, not a CvSeq of point values).
END EDIT
Here is the C++ reference for CvSeq.
And if you want to fiddle with the source code, then:
cvarrToMat is in matrix.cpp
CV_ELEM_SIZE is in types_c.h
cvWarpPerspective is in imgwarp.cpp
cvHoughCircles is in hough.cpp
I hope that will help.
BTW, your next error will probably be:
cv::warpPerspective in the C++ OpenCV asserts that dst.data != src.data,
thus
cvWarpPerspective(seq, seq, mmat);
won't work, because your source mat and destination mat reference the same data.
Not all the functions in OpenCV (and in image processing in general) work in-situ, either because there is no in-situ algorithm or because it's slower than the other version (e.g. the transpose of an n*n mat will work in-situ, but n*m where n != m is harder to do in-situ and might be slower).
You can't assume that using the src matrix as the dst will work.
I'm having a little problem with figuring something out (Obviously).
I'm creating a 2D top-down MMORPG, and in this game I want the player to move around a tiled map similar to the way the game Pokemon worked, if anyone has ever played it.
If you have not, picture this: I need to load various areas, constructing them from tiles which contain an image, a location (x, y), and objects (players, items), but the player can only see a portion of the map at a time, namely a 20 by 15 tile area, while the whole map can be hundreds of tiles tall/wide. I want the "camera" to follow the player, keeping him in the center, unless the player reaches the edge of the loaded area.
I don't need code necessarily, just a design plan. I have no idea how to go about this kind of thing.
I was thinking of possibly splitting up the entire loaded area into 10x10 tile pieces, called "Blocks" and loading them, but I'm still not sure how to load pieces off screen and only show them when the player is in range.
The picture should describe it:
Any ideas?
My solution:
The way I solved this problem was through the wonderful world of JScrollPanes and JPanels.
I added a 3x3 block of JPanels inside of a JScrollPane, added a couple scrolling and "goto" methods for centering/moving the JScrollPane around, and voila, I had my camera.
While the answer I chose was a little more generic for people wanting to do 2D camera work, the way I did it actually helped me visualize what I was doing a little better, since I had a physical "Camera" (the JScrollPane) to move around my "World" (a 3x3 grid of JPanels).
Just thought I would post this here in case anyone was googling for an answer and this came up. :)
For a 2D game, it's quite easy to figure out which tiles fall within a view rectangle, if the tiles are rectangular. Basically, picture a "viewport" rectangle inside the larger world rectangle. By dividing the view offsets by the tile sizes you can easily determine the starting tile, and then just render the tiles that fit inside the view.
First off, you're working in three coordinate systems: view, world, and map. The view coordinates are essentially mouse offsets from the upper left corner of the view. World coordinates are pixel distances from the upper left corner of tile (0, 0). I'm assuming your world starts in the upper left corner. And map coordinates are x, y indices into the map array.
You'll need to convert between these in order to do "fancy" things like scrolling, figuring out which tile is under the mouse, and drawing world objects at the correct coordinates in the view. So, you'll need some functions to convert between these systems:
// I haven't touched Java in years, but JavaScript should be easy enough to convey the point
var TileWidth = 40,
    TileHeight = 40;

function View() {
    this.viewOrigin = [0, 0]; // scroll offset
    this.viewSize = [600, 400];
    this.map = null;
    this.worldSize = [0, 0];
}

View.prototype.viewToWorld = function(v, w) {
    w[0] = v[0] + this.viewOrigin[0];
    w[1] = v[1] + this.viewOrigin[1];
};

View.prototype.worldToMap = function(w, m) {
    m[0] = Math.floor(w[0] / TileWidth);
    m[1] = Math.floor(w[1] / TileHeight);
};

View.prototype.mapToWorld = function(m, w) {
    w[0] = m[0] * TileWidth;
    w[1] = m[1] * TileHeight;
};

View.prototype.worldToView = function(w, v) {
    v[0] = w[0] - this.viewOrigin[0];
    v[1] = w[1] - this.viewOrigin[1];
};
Armed with these functions we can now render the visible portion of the map...
View.prototype.draw = function() {
    var mapStartPos = [0, 0],
        worldStartPos = [0, 0],
        viewStartPos = [0, 0],
        mx, my, // map coordinates of current tile
        vx, vy; // view coordinates of current tile

    this.worldToMap(this.viewOrigin, mapStartPos); // which tile is closest to the view origin?
    this.mapToWorld(mapStartPos, worldStartPos);   // round world position to tile corner...
    this.worldToView(worldStartPos, viewStartPos); // ... and then convert to view coordinates. this allows per-pixel scrolling

    mx = mapStartPos[0];
    my = mapStartPos[1];

    for (vy = viewStartPos[1]; vy < this.viewSize[1]; vy += TileHeight) {
        for (vx = viewStartPos[0]; vx < this.viewSize[0]; vx += TileWidth) {
            var tile = this.map.get(mx++, my);
            this.drawTile(tile, vx, vy);
        }
        mx = mapStartPos[0];
        my++;
    }
};
That should work. I didn't have time to put together a working demo webpage, but I hope you get the idea.
By changing viewOrigin you can scroll around. To get the world, and map coordinates under the mouse, use the viewToWorld and worldToMap functions.
If you're planning on an isometric view (e.g. Diablo), then things get considerably trickier.
Good luck!
The way I would do such a thing is to keep a variable called cameraPosition or something. Then, in the draw method of all objects, use cameraPosition to offset the locations of everything.
For example: A rock is at [100,50], while the camera is at [75,75]. This means the rock should be drawn at [25,-25] (the result of [100,50] - [75,75]).
You might have to tweak this a bit to make it work (for example maybe you have to compensate for window size). Note that you should also do a bit of culling - if something wants to be drawn at [2460,-830], you probably don't want to bother drawing it.
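As a sketch (in plain Java 2D; the sprite fields and window size parameters here are hypothetical names, not from any particular engine):
import java.awt.Graphics;
import java.awt.Image;

// Draw a sprite at its world position minus the camera position, culling
// anything that falls completely outside the window.
static void drawSprite(Graphics g, Image image, int worldX, int worldY,
                       int width, int height, int cameraX, int cameraY,
                       int windowWidth, int windowHeight) {
    int screenX = worldX - cameraX;   // e.g. 100 - 75 = 25
    int screenY = worldY - cameraY;   // e.g. 50 - 75 = -25
    boolean onScreen = screenX + width >= 0 && screenX < windowWidth
                    && screenY + height >= 0 && screenY < windowHeight;
    if (onScreen) {
        g.drawImage(image, screenX, screenY, null);
    }
}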
One approach is along the lines of double buffering ( Java Double Buffering ) and blitting ( http://download.oracle.com/javase/tutorial/extra/fullscreen/doublebuf.html ). There is even a design pattern associated with it ( http://www.javalobby.org/forums/thread.jspa?threadID=16867&tstart=0 ).
Is there an easy way to rotate a picture around its center? I used an AffineTransformOp first. It seemed simple and neat, and finding the right parameters for the matrix should have been a matter of a nice and neat Google session. So I thought...
My Result is this:
public class RotateOp implements BufferedImageOp {

    private double angle;
    AffineTransformOp transform;

    public RotateOp(double angle) {
        this.angle = angle;
        double rads = Math.toRadians(angle);
        double sin = Math.sin(rads);
        double cos = Math.cos(rads);
        // how to use the last 2 parameters?
        transform = new AffineTransformOp(new AffineTransform(cos, sin, -sin,
                cos, 0, 0), AffineTransformOp.TYPE_BILINEAR);
    }

    public BufferedImage filter(BufferedImage src, BufferedImage dst) {
        return transform.filter(src, dst);
    }
}
Really simple, if you ignore the cases of rotating by multiples of 90 degrees (which sin() and cos() don't produce exactly). The problem with that solution is that it rotates around the (0, 0) coordinate point in the upper left corner of the picture, not around the center of the picture, as would normally be expected. So I added some stuff to my filter:
public BufferedImage filter(BufferedImage src, BufferedImage dst) {
    // don't let all that confuse you
    // with the documentation it is all (as) sound and clear (as this library gets)
    AffineTransformOp moveCenterToPointZero = new AffineTransformOp(
            new AffineTransform(1, 0, 0, 1, (int) (-(src.getWidth() + 1) / 2), (int) (-(src.getHeight() + 1) / 2)),
            AffineTransformOp.TYPE_BILINEAR);
    AffineTransformOp moveCenterBack = new AffineTransformOp(
            new AffineTransform(1, 0, 0, 1, (int) ((src.getWidth() + 1) / 2), (int) ((src.getHeight() + 1) / 2)),
            AffineTransformOp.TYPE_BILINEAR);
    return moveCenterBack.filter(transform.filter(moveCenterToPointZero.filter(src, dst), dst), dst);
}
My thinking here was that the shape-changing part of the matrix should be the identity matrix, and that the vector which moves the whole picture around goes in the last two entries. My solution first makes the picture bigger and then smaller again (which doesn't really matter that much - reason unknown!!!) and also cuts away around 3/4 of the picture (which matters a lot - the reason is probably that the picture gets moved outside the reasonable bounds of the "from (0,0) to (width, height)" picture area).
Between the mathematics I am not so well trained in, the errors the computer makes while calculating, and everything else that doesn't get into my head so easily, I don't know how to go further. Please give advice. I want to rotate the picture around its center and I want to understand AffineTransformOp.
If I understand your question correctly, you can translate to the origin, rotate, and translate back, as shown in this example.
As you are using AffineTransformOp, this example may be more apropos. In particular, note the last-specified-first-applied order in which operations are concatenated; they are not commutative.
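Concretely, the translate / rotate / translate-back chain with AffineTransformOp can look like the sketch below; note again that the concatenated steps are applied in reverse of the order they are specified:
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

static BufferedImage rotateAroundCenter(BufferedImage src, double degrees) {
    double cx = src.getWidth() / 2.0;
    double cy = src.getHeight() / 2.0;
    AffineTransform at = new AffineTransform();
    at.translate(cx, cy);                    // 3) move the center back to where it was
    at.rotate(Math.toRadians(degrees));      // 2) rotate around the origin
    at.translate(-cx, -cy);                  // 1) move the image center to the origin
    AffineTransformOp op = new AffineTransformOp(at, AffineTransformOp.TYPE_BILINEAR);
    return op.filter(src, null);             // null lets the op allocate a destination image
}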