Determine vertex winding (for backface culling) in JavaFX and Point3D - java

I'm in the process of displaying j3d GeometryArrays as combined TriangleMeshes in JavaFX. I know the GeometryArrays I am receiving are TriangleStripArrays, and the order allows me to build the correct faces and display the meshes in the scene.
However, I am at a loss on how to determine the vertex winding order based on the TriangleStripArray alone. The faces currently have no correct notion of backface culling, leaving me with a complete TriangleMesh that appears distorted from any given angle. By changing the CullFace mode from BACK to NONE, and by plotting the vertices in the scene, I can tell that all faces are being mapped correctly, just culled inconsistently.
I have two methods which build a TriangleMesh from any given triangular face containing 3 vertices, one for CW winding and one for CCW.
Essentially:
float[] points = {v1.x, v1.y, v1.z, v2.x, v2.y, v2.z, v3.x, v3.y, v3.z};
float[] texCoords = {1, 1, 1, 0, 0, 1};
int[] faces = new int[]{0, 0, 1, 1, 2, 2};
vs
float[] points = {v2.x, v2.y, v2.z, v1.x, v1.y, v1.z, v3.x, v3.y, v3.z};
float[] texCoords = {1, 0, 1, 1, 0, 1};
int[] faces = new int[]{0, 0, 1, 1, 2, 2};
My question is, how can I determine the vertex winding order, and which method should be used per face, based on the TriangleStripArray alone? (TexCoordinates are being supplied, but not used)
Thanks in advance!
- R. Melville

This can be solved by computing the cross product of the vectors (b-a) x (c-a), if a, b and c are the vertices of your triangle.
Our friends at Math StackExchange have a more detailed explanation :-)
why-does-cross-product-tell-us-about-clockwise-or-anti-clockwise-rotation
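A minimal sketch of that test with JavaFX's Point3D (the helper name is mine, not from the question). Two caveats: JavaFX's y axis points down, so you may need to invert the sign test for your conventions, and in a triangle strip the winding alternates with each successive triangle, so every second face is typically flipped:
import javafx.geometry.Point3D;

public class WindingUtil {

    /**
     * Returns true if the triangle (a, b, c), projected onto the XY plane,
     * is wound counter-clockwise in conventional (y-up) coordinates.
     * JavaFX's y axis points down, so you may need to invert the result.
     */
    public static boolean isCounterClockwise(Point3D a, Point3D b, Point3D c) {
        Point3D ab = b.subtract(a);
        Point3D ac = c.subtract(a);
        // (b - a) x (c - a): a positive z component means CCW in y-up coords.
        return ab.crossProduct(ac).getZ() > 0;
    }
}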

Related

Creating a terrain from a heightmap

I want to create realistic terrain using Java and LWJGL.
Right now I have a double[100][100] with my heightmap and a function rendering every triangle from a list.
For example, drawing a small "floor" chunk with two triangles:
triangles.add(new Triangle(new Vector3f(x, (float) map[x][z] * 100, z),
        new Vector3f(+0.5f, 1, -0.5f),
        new Vector3f(-0.5f, 1, -0.5f),
        new Vector3f(+0.5f, 1, +0.5f), rgb, rgb, rgb));
triangles.add(new Triangle(new Vector3f(x, (float) map[x][z] * 100, z),
        new Vector3f(-0.5f, 1, +0.5f),
        new Vector3f(-0.5f, 1, -0.5f),
        new Vector3f(+0.5f, 1, +0.5f), rgb, rgb, rgb));
The Triangle parameters are:
start point XYZ coordinates,
first triangle point "sub" coords,
second triangle point "sub" coords,
third triangle point "sub" coords,
rgb color for the first point,
rgb color for the second point,
rgb color for the third point.
I can't do it... I think it is very easy and I'm just stupid or too tired. Can you help me?
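Not a full answer, but a minimal sketch of what the grid loop could look like, reusing the Triangle/Vector3f constructor from the question and assuming the "sub" coords are offsets from the start point (so the y offsets are the neighbouring heights minus the start height):
for (int x = 0; x < map.length - 1; x++) {
    for (int z = 0; z < map[x].length - 1; z++) {
        float h00 = (float) map[x][z] * 100;         // height at (x, z)
        float h10 = (float) map[x + 1][z] * 100;     // height at (x+1, z)
        float h01 = (float) map[x][z + 1] * 100;     // height at (x, z+1)
        float h11 = (float) map[x + 1][z + 1] * 100; // height at (x+1, z+1)
        // first triangle of the cell: (x,z) -> (x+1,z) -> (x,z+1)
        triangles.add(new Triangle(new Vector3f(x, h00, z),
                new Vector3f(0, 0, 0),
                new Vector3f(1, h10 - h00, 0),
                new Vector3f(0, h01 - h00, 1), rgb, rgb, rgb));
        // second triangle: (x+1,z) -> (x+1,z+1) -> (x,z+1)
        triangles.add(new Triangle(new Vector3f(x, h00, z),
                new Vector3f(1, h10 - h00, 0),
                new Vector3f(1, h11 - h00, 1),
                new Vector3f(0, h01 - h00, 1), rgb, rgb, rgb));
    }
}
Whether the y values should be relative offsets or absolute heights depends on how your Triangle class interprets the "sub" coords, so adjust accordingly.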

JavaFX Polygon Translation, Rotation, Scaling and its Points

I ran into a strange issue with the Polygon class from JavaFX (Java 8).
When I apply a translate, rotate or scale on the polygon instance, it correctly moves the polygon around on my shape. The problem is, the points returned by the getPoints() method stay the same.
I have now started to create my own methods that move the points around and reset them; the methods do what they should, but is this the right way?
Here an example:
private void translatePoints(double translateX, double translateY) {
    List<Double> newPoints = new ArrayList<>();
    for (int i = 0; i < getPoints().size(); i += 2) {
        newPoints.add(getPoints().get(i) + translateX);
        newPoints.add(getPoints().get(i + 1) + translateY);
    }
    getPoints().clear();
    getPoints().addAll(newPoints);
}
Is there a way to get the translated, rotated and scaled points after a couple of operations?
Or do I have to implement them all separately?
Take a look at the subclasses of Transform (Affine, Rotate, Scale, Shear and Translate). They allow you to transform points stored in a double[] array using the transform2DPoints method.
double[] points = new double[] {
0, 0,
0, 1,
1, 1,
1, 0
};
Rotate rot = new Rotate(45, 0.5, 0.5);
Translate t = new Translate(5, 7);
Scale sc = new Scale(3, 3);
for (Transform transform : Arrays.asList(rot, t, sc)) {
    transform.transform2DPoints(points, 0, points, 0, 4);
}
System.out.println(Arrays.toString(points));
This way, you need to take care of determining the pivot points of the transforms yourself, where relevant.
You could also get the resulting transform for a node using Node.getLocalToParentTransform:
double[] points = polygon.getPoints().stream().mapToDouble(Number::doubleValue).toArray();
polygon.getLocalToParentTransform().transform2DPoints(points, 0, points, 0, points.length/2);
System.out.println(Arrays.toString(points));

LibGDX detecting the mouse click on mesh triangles

I have been trying to add mouse click detection on the triangles of the mesh, but it seems that I am doing something wrong and I cannot figure out how to solve the problem.
So before explaining the problem I will define the environment (the full code is available at http://pastebin.com/TxfNuYXZ):
Camera position
cam = new OrthographicCamera(10, 9);
cam.position.set(0, 5.35f, 2f);
cam.lookAt(0, 0, 0);
cam.near = 0.5f;
cam.far = 12f;
The mesh renders 4 vertices:
mesh = new Mesh(true, NUM_COLUMNS * NUM_LINES, (NUM_COLUMNS * 6 - 6) * (NUM_LINES - 1), VertexAttribute.Position(), VertexAttribute.ColorUnpacked());
mesh.setVertices(new float[] {
   0, 0, 0, 0, 1, 0, 1,
   1, 0, 0, 0, 1, 0, 1,
   0, 0, 1, 0, 1, 0, 1,
   1, 0, 1, 0, 1, 0, 1 });
mesh.setIndices(new short[] { 2, 0, 1, 2, 3, 1 });
So when I run the application I try to check whether the click was done inside some of the triangles of the mesh. The result depends on the position of the camera. When the camera has an almost top-down view (like in the following picture), corresponding to around 6 on the Y axis, the click point is correctly translated to the coordinates and corresponds to what is actually being seen.
When I move the camera to a lower position on the Y axis (around 2 or 3), so the image looks like the following one,
the click is detected in completely wrong positions (the red line shows the place where the click is detected), which seems right according to the coordinates, but not according to what is being seen.
I would like to understand what I am missing to be able to detect clicks on what is actually being seen. The code I use to detect the click is the following:
@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    Ray ray = cam.getPickRay(screenX, screenY);
    Vector3 intersection = new Vector3();
    float[] v = new float[NUM_COLUMNS * NUM_LINES * VERTEX_SIZE];
    short[] i = new short[(NUM_COLUMNS * 6 - 6) * (NUM_LINES - 1)];
    mesh.getIndices(i);
    if (Intersector.intersectRayTriangles(ray, mesh.getVertices(v), i, VERTEX_SIZE, intersection)) {
        System.out.println(intersection);
    }
    return false;
}
Thanks a lot for your help!
Basically, after several days of drawing and some math I found the source of the problem. The vertex shader, in order to determine the position of the vertices, was performing the multiplication a_position * u_projectionViewMatrix, which looked fine on the screen but was actually wrong when compared with the actual coordinates of the mesh. If you check the standard shader examples, you can see that gl_Position is calculated by multiplying u_projectionViewMatrix * a_position. Making the calculation in the correct order did the trick.
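For reference, a sketch of what the corrected vertex shader might look like when embedded as a LibGDX shader string (the uniform and attribute names follow the question; the rest is an assumption, not the asker's actual shader):
String vertexShader =
      "attribute vec4 a_position;\n"
    + "attribute vec4 a_color;\n"
    + "varying vec4 v_color;\n"
    + "uniform mat4 u_projectionViewMatrix;\n"
    + "void main() {\n"
    + "    v_color = a_color;\n"
    + "    // matrix * vector, not vector * matrix\n"
    + "    gl_Position = u_projectionViewMatrix * a_position;\n"
    + "}\n";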
I also had to change the camera to perspective, since the mesh was not rendered how I wanted it to.

old JOGL, order of transformation

I have to solve a problem, and I realize it is a bit old-school code.
I need to write down the order of transformations from 1 to 4 and the result for the purple vertex. Would someone help me check whether it is correct and, if not, why?
It is a bit tough for me to find answers to this and be 100% sure it is correct.
What I think is correct:
1. Start from bottom, take MODELVIEW first, then PROJECTION
- Yet I am not sure I did it right...
EDIT, code rewritten to text:
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glTranslatef(-1, -1, -0);
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glScalef(2, 1, 3);
gl.glRotatef(-90, 0, 0, 1);
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glScalef(2, 3, 1);
gl.glBegin(GL.GL_QUADS);
gl.glColor3f(0, 0, 1);
gl.glVertex3f(-2, -2, -2);
gl.glColor3f(1, 1, 0);
gl.glVertex3f(2, 1, 3);
gl.glColor3f(1, 0, 1);
gl.glVertex3f(1, 1, -2);
gl.glColor3f(0, 1, 0);
gl.glVertex3f(-1, 1, 2);
gl.glEnd();
Write the transformations as they go in order and write the coordinate changes of purple vertex for each transformation.
Transform 1:________________
Coordinates x:_______ y:_______ z: _______
Transform 2:________________
Coordinates x:_______ y:_______ z: _______
Transform 3:________________
Coordinates x:_______ y:_______ z: _______
Transform 4:________________
Coordinates x:_______ y:_______ z: _______
Problem solved:
you are supposed to start with the model transforms and then the projection, always reading from the bottom;
apply the transforms from the bottom up.
Also, I was accidentally using the wrong coordinates.
~thanks for the help though!
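For what it's worth, here is the worked sequence under the assumption that the purple vertex is the one set with glColor3f(1, 0, 1), i.e. (1, 1, -2). Reading each matrix stack bottom-up, the vertex passes through the MODELVIEW transforms first, then the PROJECTION ones:
Transform 1: glScalef(2, 3, 1) (MODELVIEW) — Coordinates x: 2, y: 3, z: -2
Transform 2: glTranslatef(-1, -1, -0) (MODELVIEW) — Coordinates x: 1, y: 2, z: -2
Transform 3: glRotatef(-90, 0, 0, 1) (PROJECTION) — Coordinates x: 2, y: -1, z: -2
Transform 4: glScalef(2, 1, 3) (PROJECTION) — Coordinates x: 4, y: -1, z: -6
(The -90° rotation about the z axis maps (x, y) to (y, -x).)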

How does this java3d code work? [closed]

I'm unsure how the arrays are being used in this program. Can anyone explain how the two arrays are being used?
import javax.vecmath.*;
import javax.media.j3d.*;
public class Tetrahedron extends IndexedTriangleArray {
    public Tetrahedron() {
        super(4, TriangleArray.COORDINATES | TriangleArray.NORMALS, 12);
        setCoordinate(0, new Point3f(1f, 1f, 1f));
        setCoordinate(1, new Point3f(1f, -1f, -1f));
        setCoordinate(2, new Point3f(-1f, 1f, -1f));
        setCoordinate(3, new Point3f(-1f, -1f, 1f));
        int[] coords = { 0, 1, 2, 0, 3, 1, 1, 3, 2, 2, 3, 0 };
        float n = (float) (1.0 / Math.sqrt(3));
        setNormal(0, new Vector3f(n, n, -n));
        setNormal(1, new Vector3f(n, -n, n));
        setNormal(2, new Vector3f(-n, -n, -n));
        setNormal(3, new Vector3f(-n, n, n));
        int[] norms = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3 };
        setCoordinateIndices(0, coords);
        setNormalIndices(0, norms);
    }
}
The code works by first creating an array of points along with an array of normals and then referencing them later to create the figure. The four calls to setCoordinate() set the position of each vertex.
The int[] coords array stores the vertex indices for the 4 triangles that make up the 4 faces (each triangle has three vertices, for a total of 12 indices). The first triangle is made up of the 0th, 1st and 2nd vertices, the next triangle of the 0th, 3rd and 1st, and so on.
The normal indices work in a similar fashion to the coordinate indices.
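To make the mapping concrete, here is a tiny standalone loop (mine, not part of the class) that prints which vertices and which normal each face uses:
int[] coords = { 0, 1, 2, 0, 3, 1, 1, 3, 2, 2, 3, 0 };
int[] norms = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3 };
for (int t = 0; t < 4; t++) {
    // face t uses entries 3t..3t+2 of each index array
    System.out.printf("face %d: vertices %d, %d, %d with normal %d%n",
            t, coords[3 * t], coords[3 * t + 1], coords[3 * t + 2], norms[3 * t]);
}
All three indices of a face point at the same normal, which is why each face is flat-shaded.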
