I have a TriangleMesh in JavaFX and a 2D point in screen space, and I want to check whether any triangle in the mesh intersects that point (basically the same as clicking a point in the mesh, except that I already have a predefined screen coordinate).
Note that I don't care where the intersection happened, only whether it intersected or not.
I did try googling this and found some useful answers, especially one by José Pereda here: https://stackoverflow.com/a/27612786/14999427
However, the links in it are down.
Edit: I did find a working link to it, but it just hardcodes the origin/target, and I'm not sure how to compute these from the 2D screen point.
Another idea I had was to copy the implementation from OpenJFX, but after some research I figured I would have to copy a lot of internals, and even then I wasn't sure I would get it working, so I scrapped that idea.
Goal: I want to use my mouse to create a rectangle at an arbitrary position over the mesh and then find all triangles that are inside that rectangle (I believe this is usually called rectangle selection in 3D modeling software).
Update: converting each triangle to 2D screen coordinates using Node#localToScreen and then doing a 2D point-inside-triangle test works perfectly, however it also selects faces that are culled.
Update 2: after also doing the culling myself it works quite well (it's not perfect, but it's mostly accurate).
Current code:
// Screen-space positions of the triangle's three vertices
Point2D v1Screen = view.localToScreen(mesh.getPoints().get(0), mesh.getPoints().get(1), mesh.getPoints().get(2));
Point2D v2Screen = view.localToScreen(mesh.getPoints().get(3), mesh.getPoints().get(4), mesh.getPoints().get(5));
Point2D v3Screen = view.localToScreen(mesh.getPoints().get(6), mesh.getPoints().get(7), mesh.getPoints().get(8));

Point2D point = new Point2D(mouseX, mouseY);
boolean inTriangle = pointInTriangle(point, v1Screen, v2Screen, v3Screen);

// Back-face culling: the sign of the 2D cross product of two edges tells
// whether the projected triangle is wound clockwise or counter-clockwise
double dxAB = v1Screen.getX() - v2Screen.getX();
double dyAB = v1Screen.getY() - v2Screen.getY();
double dxCB = v3Screen.getX() - v2Screen.getX();
double dyCB = v3Screen.getY() - v2Screen.getY();
boolean culled = ((dxAB * dyCB) - (dyAB * dxCB)) <= 0;

if (inTriangle && !culled) {
    view.setMaterial(new PhongMaterial(Color.BLUE));
    System.out.println("Intersected");
}
I think the answer to this question may help you: How to correctly obtain screen coordinates of 3D shape after rotation. If you can compute the screen coordinates of a 3D shape (triangle), the hit test should then be simple.
I am very new to ARCore and I have been looking at the HelloAR Java Android Studio project provided in the SDK.
Everything works OK and is pretty cool; however, I want to place/drop an object when I touch the screen, even when no planes have been detected. Let me explain a little better...
As I understand ARCore, it will detect horizontal planes, and ONLY on those horizontal planes can I place 3D objects to be motion tracked.
Is there any way (perhaps using PointCloud information) to place an object in the scene even if no horizontal planes have been detected? Sort of like these examples?
https://experiments.withgoogle.com/ar/flight-paths
https://experiments.withgoogle.com/ar/arcore-drawing
I know they are using Unity and openFrameworks, but could that be done in Java?
Also, I have looked at
How to put an object in the air?
and
how to check ray intersection with object in ARCore
but I don't think I'm understanding the concept of an Anchor (I managed to drop the object in the scene, but it either disappears immediately or it is just a regular OpenGL object with no knowledge of the real world).
What I want to understand is:
- How is it possible (if at all) to create a custom/user-defined plane, that is, a plane that is NOT automatically detected by ARCore?
- How can I create an Anchor (the sample does it in the PlaneAttachment class, I think) that is NOT linked to any plane, OR that is linked to some PointCloud point?
- How do I draw the object and place it at the Anchor previously created?
I think this is too much to ask, but looking at the API documentation has not helped me at all.
Thank you!
Edit:
Here is the code that I added to HelloArActivity.java (everything is the same as the original file, except for the lines after // ***** and before the ...):
@Override
public void onDrawFrame(GL10 gl) {
    ...
    MotionEvent tap = mQueuedSingleTaps.poll();
    // I added this to use the screenPointToWorldRay function from the second link I posted... I am probably using this wrong
    float[] worldXY = new float[6];
    ...
    if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
        // ***** I added this to use the screenPointToWorldRay function
        worldXY = screenPointToWorldRay(tap.getX(), tap.getY(), frame);
        ...
    }
    ...
    // Visualize anchors created by touch.
    float scaleFactor = 1.0f;
    for (PlaneAttachment planeAttachment : mTouches) {
        ...
    }
    // ***** This places the object momentarily in the scene (it disappears immediately)
    frame.getPose().compose(Pose.makeTranslation(worldXY[3], worldXY[4], worldXY[5])).toMatrix(mAnchorMatrix, 0);
    // ***** This places the object in the middle of the scene, but since it is not attached to anything there is no tracking; it is always in the middle of the screen (pretty much expected behaviour)
    // frame.getPose().compose(Pose.makeTranslation(0, 0, -1.0f)).toMatrix(mAnchorMatrix, 0);
    // ***** I duplicated this code, which gets executed ONLY when touching a detected plane/surface.
    mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
    mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
    ...
}
You would first have to perform a hit test via Frame.hitTest and iterate over the HitResult objects until you hit a Point type Trackable. You could then retrieve a pose for that hit result via HitResult.getHitPose, or attach an anchor to that point and get the pose from that via ArAnchor.getPose (the best approach).
However, if you want to do this yourself from an arbitrary point retrieved with ArPointCloud.getPoints, it will take a little more work. In this approach, the question effectively reduces to: "How can I derive a pose / coordinate basis from a point?"
When working from a plane it is relatively easy to derive a pose, as you can use the plane normal as the up (y) vector for your model and pick the x and z vectors to configure where you want the model to "face" on that plane (where each vector is perpendicular to the other two).
When trying to derive a basis from a point, you have to pick all three vectors (x, y and z) relative to the origin point you have. You can derive the up vector by transforming the vector (0, 1, 0) through the camera view matrix (assuming you want the top of the model to face the top of your screen) using ArCamera.getViewMatrix. Then you can pick the x and z vectors as any two mutually perpendicular vectors that orient the model in your desired direction.
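As a rough sketch of that last step (this is not ARCore API code; the Vector3 type and the cross/normalize helpers are hypothetical placeholders), deriving the basis could look like:

// Sketch: build a right-handed orthonormal basis at a point, given an up vector
// (e.g. (0,1,0) transformed through the camera view matrix) and a rough facing
// direction. Vector3, cross() and normalize() are hypothetical helpers.
Vector3 y = normalize(up);                // up axis of the model
Vector3 x = normalize(cross(y, facing));  // perpendicular to up (facing must not be parallel to up)
Vector3 z = cross(x, y);                  // completes the basis
// The point from the point cloud is the origin; x, y and z give the rotation of the pose.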
I have been working with LWJGL, and also j3d for the geometry part. I am still working on the collision. What I have so far for the collision works decently, but there are two problems. To sum up my current way of colliding: it tests whether the segment between the previous coordinate and the current coordinate goes through a triangle (which is what things are rendered as), then it finds the point on the triangle it just intersected that is closest to your current coordinate, and makes you go there. It also makes your y coordinate go up by .001.
This works decently, but going up by .001 is bad, because if you walk into a triangle that is at a 90° angle going upwards, you can go left to right but you can't back out of it, almost as if you are stuck in it.
Here is a drawing of how it works on imgur:
http://i.imgur.com/1gMhRut.png
From here I want to add, say, .001 to the length between the current coordinate and the closest point (I already know these points) and get the new current point.
By the way, prev is where the person was before they moved to the cur point; the code tests whether those two points intersect a triangle, and if they do, I get the point on the triangle closest to prev, which is labeled closest in the picture. I can already calculate all of those points.
If I understand you correctly, you want to add .001 to move away from the triangle. If that is the case, then you need a vector of length 0.001 perpendicular to the triangle. For a triangle this is usually called the "normal". If you already have a normal for the triangle, multiply it by .001 and add the result. If you don't have a normal yet, you can calculate it from the vertices of the triangle using the cross product (you can Google the details of what a cross product is), something like this:
// cross two edge vectors to get a vector perpendicular to the triangle
Vector3 perpendicular = crossProduct(vertex3.pos - vertex1.pos, vertex2.pos - vertex1.pos);
// normalize it to unit length (its direction depends on the vertex winding order)
Vector3 normal = perpendicular / length(perpendicular);
// scale to the desired offset distance
Vector3 offset = normal * 0.001f;
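If your math library does not already provide them, a minimal sketch of the two helpers used above (assuming a simple Vector3 class with public x, y, z fields; the class is a placeholder, adapt it to your own types) would be:

// cross product: the result is perpendicular to both a and b
static Vector3 crossProduct(Vector3 a, Vector3 b) {
    return new Vector3(a.y * b.z - a.z * b.y,
                       a.z * b.x - a.x * b.z,
                       a.x * b.y - a.y * b.x);
}

// Euclidean length of a vector
static float length(Vector3 v) {
    return (float) Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}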
I am currently working on a 3D visualisation project using JavaFX 8.
Since having too many points is slow when rotating the camera around, I decided to hide those points (3D boxes in my case) not displayed on the scene.
The problem is that when I call box.localToScreen(0, 0, 0), the coordinates sometimes seem incorrect. E.g., sometimes the point is still displayed on the screen, but the coordinates returned by localToScreen(0, 0, 0) are negative. Have I missed something, or have I misused this method?
Here is some of the code I have:
// where I build these boxes from points
for (point p : mergedList) {
    Box pointBox = new Box(length, width, height);
    boxList.add(pointBox);
    pointBox.setTranslateX(p.getX());
    pointBox.setTranslateY(p.getY());
    pointBox.setTranslateZ(p.getZ());
    ...

// where I call localToScreen to get its coordinates
for (Box b : boxList) {
    Point2D p = b.localToScreen(0, 0, 0); // I have also tried b.localToScreen(b.getTranslateX(), b.getTranslateY(), b.getTranslateZ())
    double x = p.getX(), y = p.getY();
    System.out.println(x);
    System.out.println(y);
}
Thanks in advance.
I am also searching for a solution to some of the localToScreen and screenToLocal issues.
For your case: if you are using multiple monitors, only the primary monitor gives you positive coordinates; a secondary monitor will give you negative coordinates.
Have you tried localToScene instead of localToScreen?
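For instance (a sketch; b being one of your Box nodes):

// scene coordinates are independent of where the window sits on your monitors
Point3D scenePt = b.localToScene(Point3D.ZERO);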
Firstly, the localToScreen method transforms the provided point by the calling node's transforms.
Use getLocalToSceneTransform() instead...
This is your "world matrix", and it holds all your transformation info: rotations, scale, etc.
Your position values are {Tx, Ty, Tz}, so plug those into a Point3D and you have your position in scene space (mostly accurate).
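For example, a minimal sketch (b again being one of the Box nodes from the question):

import javafx.geometry.Point3D;
import javafx.scene.transform.Transform;

Transform t = b.getLocalToSceneTransform();
// the T components are the translation part of the local-to-scene matrix
Point3D scenePos = new Point3D(t.getTx(), t.getTy(), t.getTz());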
Another dirty option to "hide" the boxes is to set their CullFace to FRONT. This reduces some of the performance cost, since the geometry does not need to be rendered, but it leads to other potential problems with mouse picking and such.
I recently posted a video of 32k+ cubes being rendered, and I noticed zero performance issues
(the video encoding was not that great, so it's blurry in the beginning):
Video
Hope it helps!
I've been wanting to write this up for some time now... As a project for university, I wrote (with a friend) a game that needed good explosion & particle effects. We encountered some problems, which we solved quite elegantly (I think), and I'd like to share the knowledge.
OK then, so we found this tutorial: Make a Particle Explosion Effect, which seemed easy enough to implement using Java with JOGL. Before I answer how exactly we implemented this tutorial, I'll explain how rendering is done:
Camera: just an orthonormal basis, which basically means it contains 3 normalized orthogonal vectors, plus a 4th vector representing the camera position. Rendering is done using gluLookAt:
glu.gluLookAt(cam.getPosition().getX(), cam.getPosition().getY(), cam.getPosition().getZ(),
cam.getZ_Vector().getX(), cam.getZ_Vector().getY(), cam.getZ_Vector().getZ(),
cam.getY_Vector().getX(), cam.getY_Vector().getY(), cam.getY_Vector().getZ());
such that the camera's z vector is actually the target, the y vector is the "up" vector, and position is, well... the position.
So (to put it in question style): how do you implement a good particle effect?
P.S.: all the code samples and in-game screenshots (in both the answer and the question) are taken from the game, which is hosted here: Astroid Shooter
OK then, let's look at how we first approached the implementation of the particles: we had an abstract class Sprite, which represented a single particle:
protected void draw(GLAutoDrawable gLDrawable) {
    // each sprite has a different blending function.
    changeBlendingFunc(gLDrawable);
    // getting the quad as an array of length 4, containing vectors
    Vector[] bb = getQuadBillboard();
    GL gl = gLDrawable.getGL();
    // getting the texture
    getTexture().bind();
    // getting the colors
    float[] rgba = getRGBA();
    gl.glColor4f(rgba[0], rgba[1], rgba[2], rgba[3]);
    // draw the sprite on the computed quad
    gl.glBegin(GL.GL_QUADS);
    gl.glTexCoord2f(0.0f, 0.0f); gl.glVertex3d(bb[0].x, bb[0].y, bb[0].z);
    gl.glTexCoord2f(1.0f, 0.0f); gl.glVertex3d(bb[1].x, bb[1].y, bb[1].z);
    gl.glTexCoord2f(1.0f, 1.0f); gl.glVertex3d(bb[2].x, bb[2].y, bb[2].z);
    gl.glTexCoord2f(0.0f, 1.0f); gl.glVertex3d(bb[3].x, bb[3].y, bb[3].z);
    gl.glEnd();
}
Most of the method calls here are pretty much self-explanatory; no surprises. The rendering is quite simple: in the display method, we first draw all the opaque objects, then we take all the Sprites, sort them by squared distance from the camera, and draw the particles such that the ones further away from the camera are drawn first. But the real thing we have to look deeper into here is the method getQuadBillboard. Each particle has to "sit" on a plane that is perpendicular to the direction from the camera, like here:
The way to compute a perpendicular plane like that is not hard:
- Subtract the particle position from the camera position to get a vector perpendicular to the plane, and normalize it so it can be used as the plane's normal. A plane is defined tightly by a normal and a position, which we now have (the particle position is a point the plane goes through).
- Compute the "height" of the quad by normalizing the projection of the camera's Y vector onto the plane. You can get the projected vector by computing: H = cam.Y - normal * (cam.Y dot normal)
- Create the "width" of the quad by computing W = H cross normal
- Return the 4 points/vectors: {position+H+W, position+H-W, position-H-W, position-H+W}
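A sketch of those four steps (the Vector3 type and its methods stand in for our actual math classes; halfWidth and halfHeight are illustrative sizing parameters):

// Sketch of getQuadBillboard following the four steps above.
protected Vector3[] getQuadBillboard(Vector3 camPos, Vector3 camY,
                                     Vector3 particlePos,
                                     double halfWidth, double halfHeight) {
    Vector3 normal = camPos.subtract(particlePos).normalize();  // step 1: plane normal
    Vector3 h = camY.subtract(normal.scale(camY.dot(normal)))   // step 2: project cam.Y onto the plane
                    .normalize().scale(halfHeight);
    Vector3 w = h.cross(normal).normalize().scale(halfWidth);   // step 3: width axis
    return new Vector3[] {                                      // step 4: the 4 corners
        particlePos.add(h).add(w),
        particlePos.add(h).subtract(w),
        particlePos.subtract(h).subtract(w),
        particlePos.subtract(h).add(w)
    };
}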
But not all sprites act like that; some are not perpendicular. For instance, the shockwave ring sprite, or the flying sparks/smoke trails:
So each sprite had to provide its own unique "billboard". BTW, the computation of the smoke trails & flying sparks was a bit of a challenge as well. We created another abstract class, which we called LineSprite. I'll skip the explanations here; you can see the code here: LineSprite.
Well, this first try was nice, but there was an unexpected problem. Here's a screenshot that illustrates the problem:
As you can see, the sprites intersect with each other, so if we look at 2 sprites that intersect, part of the 1st sprite is behind the 2nd sprite and another part of it is in front of the 2nd sprite. This resulted in some weird rendering, where the lines of the intersection are visible. Note that even if we disabled glDepthMask when rendering the particles, the result would still show the lines of intersection, because of the different blending that takes place in each sprite. So we had to somehow make the sprites not intersect each other. The idea we had was really cool.
You know all those really cool 3D street art pieces?
Here's an image that emphasizes the idea:
We thought the idea could be implemented in our game, so the sprites wouldn't intersect each other. Here's an image to illustrate the idea:
Basically, we made all the sprites lie on parallel planes, so no intersection could take place. And it did not affect the visible result, since from the camera's point of view it stayed the same. From every other angle it would look stretched, but from the camera it still looked great. So, for the implementation:
Given the 4 vectors representing a quad billboard, and the position of the particle, we need to output a new set of 4 vectors that represents the original quad billboard. The idea of how to do this is explained well here: Intersection of a plane and a line. We have the "line", which is defined by the camera position and each of the 4 vectors. We have the plane, since we can use our camera Z vector as the normal, together with the position of the particle.
Also, a small change is needed in the comparison function for sorting the sprites: it should now compare depth along the camera Z vector (taken from our camera's orthonormal basis), and the computation is as easy as: cam.getZ_Vector().getX()*pos.getX() + cam.getZ_Vector().getY()*pos.getY() + cam.getZ_Vector().getZ()*pos.getZ();
One more thing to notice: if a particle is outside the viewing angle of the camera, i.e. behind the camera, we don't want to see it, and especially we don't want to compute its projection (that could result in some very weird and psychedelic effects...).
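A sketch of that projection (again with a placeholder Vector3 type; camZ is assumed to be the camera's normalized Z vector and quad the array of 4 corners from before):

// Project each billboard corner onto the plane through the particle position
// whose normal is the camera Z vector (line-plane intersection).
Vector3 n = camZ;
double planeD = n.dot(particlePos.subtract(camPos));
for (int i = 0; i < quad.length; i++) {
    Vector3 dir = quad[i].subtract(camPos); // line from the camera through the corner
    double denom = n.dot(dir);
    if (denom <= 0) continue;               // corner is behind the camera: skip it
    double t = planeD / denom;
    quad[i] = camPos.add(dir.scale(t));     // intersection with the plane
}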
All that is left is to show the final Sprite class.
The result is quite nice:
Hope it helps; I would love to get your comments on this "article" (or on the game :}, which you can explore, fork, and use however you want...)
I'm working on creating a simple 3D rendering engine in Java. I've messed about and found a few different ways of doing perspective projection, but the only one I got partly working had weird stretching effects the further the object moved from the centre of the screen, making it look very unrealistic. Basically, I want a method (however simple or complicated it needs to be) that takes the 3D point to be projected plus the 3D position and rotation (possibly?) of the 'camera' as arguments, and returns the position on the screen that the point should be drawn at. I don't care how long/short/simple/complicated this method is; I just want it to generate the same kind of perspective you see in modern 3D first-person shooters or any other game, and I know I may have to use matrix multiplication for this. I don't really want to use OpenGL or any other libraries, because I'd quite like to do this as a learning exercise and roll my own, if possible.
Any help would be appreciated quite a lot :)
Thanks, again
- James
Update: To show what I mean by the 'stretching effects', here are some screenshots of a demo I've put together. A cube (40x40x10) centered at the coords (-20,-20,-5) is drawn with the only projection method I've got working at all (code below). The three screens show the camera at (0, 0, 50) in the first screenshot, then moved along the X axis to show the effect in the other two.
Projection code I'm using:
public static Point projectPointC(Vector3 point, Vector3 camera) {
    Vector3 a = point;
    Vector3 c = camera;
    Point b = new Point();
    // intersect the line from the camera through the point with the z = 0 plane
    b.x = (int) Math.round((a.x * c.z - c.x * a.z) / (c.z - a.z));
    b.y = (int) Math.round((a.y * c.z - c.y * a.z) / (c.z - a.z));
    return b;
}
You really can't do this without getting stuck into the maths. There are loads of resources on the web that explain matrix multiplication, homogeneous coordinates, perspective projections, etc. It's a big topic, and there's no point repeating all of the required calculations here.
Here is a possible starting point for your learning:
http://en.wikipedia.org/wiki/Transformation_matrix
It's almost impossible to say what's wrong with your current approach, but one possibility, based on your explanation that it looks odd as the object moves away from the centre, is that your field of view is too wide. This results in a kind of fish-eye lens distortion, where too much of the world view is squashed into the edges of the screen.
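As a starting point only, here is a minimal pinhole-projection sketch (assumptions: the camera sits at cam looking straight down the negative z axis with no rotation; Vector3 has public x, y, z fields as in your code; fovY, screenW and screenH are illustrative parameters):

// Projects a world point to pixel coordinates for an axis-aligned camera.
// A narrower fovY reduces the stretching near the screen edges.
public static Point project(Vector3 p, Vector3 cam,
                            double fovY, int screenW, int screenH) {
    double x = p.x - cam.x;
    double y = p.y - cam.y;
    double z = cam.z - p.z;                  // depth in front of the camera
    if (z <= 0) return null;                 // behind the camera: not visible
    double f = (screenH / 2.0) / Math.tan(fovY / 2.0); // focal length in pixels
    int sx = (int) Math.round(screenW / 2.0 + f * x / z);
    int sy = (int) Math.round(screenH / 2.0 - f * y / z); // screen y grows downward
    return new Point(sx, sy);
}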