I'm unsure how the arrays are being used in this program. Can anyone explain how the two arrays in this program are being used?
import javax.vecmath.*;
import javax.media.j3d.*;

public class Tetrahedron extends IndexedTriangleArray {
    public Tetrahedron() {
        // 4 vertices, with coordinates and normals, referenced by 12 indices (4 faces x 3 corners)
        super(4, TriangleArray.COORDINATES | TriangleArray.NORMALS, 12);

        // the four corner points of the tetrahedron
        setCoordinate(0, new Point3f(1f, 1f, 1f));
        setCoordinate(1, new Point3f(1f, -1f, -1f));
        setCoordinate(2, new Point3f(-1f, 1f, -1f));
        setCoordinate(3, new Point3f(-1f, -1f, 1f));

        // each group of three indices selects the corners of one triangular face
        int[] coords = { 0, 1, 2, 0, 3, 1, 1, 3, 2, 2, 3, 0 };

        // one unit-length normal per face
        float n = (float) (1.0 / Math.sqrt(3));
        setNormal(0, new Vector3f(n, n, -n));
        setNormal(1, new Vector3f(n, -n, n));
        setNormal(2, new Vector3f(-n, -n, -n));
        setNormal(3, new Vector3f(-n, n, n));

        // all three corners of a face share that face's normal
        int[] norms = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3 };

        setCoordinateIndices(0, coords);
        setNormalIndices(0, norms);
    }
}
The code works by first creating an array of points along with an array of normals, and then referencing them by index to build the figure. The four calls to setCoordinate() set the positions of the four vertices.
The int[] coords array holds the vertex indices for the 4 triangles that make up the 4 faces (each triangle has three corners, for a total of 12 indices). The first triangle is made up of the 0th, 1st and 2nd vertices, the next triangle of the 0th, 3rd and 1st vertices, and so on.
The indexing for the normals works in the same fashion as that of the vertices: each entry of int[] norms says which of the four face normals belongs to the corresponding corner, so all three corners of a face share the same normal.
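A small stand-alone sketch (not part of the original class) of how those index arrays are read: each group of three entries in coords names the corners of one face, and the matching entries in norms give the normal those corners share.
int[] coords = { 0, 1, 2,  0, 3, 1,  1, 3, 2,  2, 3, 0 };
int[] norms  = { 0, 0, 0,  1, 1, 1,  2, 2, 2,  3, 3, 3 };

for (int face = 0; face < 4; face++) {
    // the three corners of this face, as indices into the coordinate array
    int a = coords[face * 3], b = coords[face * 3 + 1], c = coords[face * 3 + 2];
    // all three corners of a face reference the same normal
    int normal = norms[face * 3];
    System.out.println("face " + face + " uses vertices " + a + ", " + b + ", " + c
            + " and normal " + normal);
}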
Related
I want to create realistic terrain using Java and LWJGL.
Right now I have a double[100][100] holding my heightmap and a function that renders every triangle from a list.
For example, drawing a small "floor" chunk with two triangles:
triangles.add(new Triangle(new Vector3f(x, (float) map[x][z] * 100, z),
new Vector3f(+0.5f, 1, -0.5f),
new Vector3f(-0.5f, 1, -0.5f),
new Vector3f(+0.5f, 1, +0.5f), rgb, rgb, rgb));
triangles.add(new Triangle(new Vector3f(x, (float) map[x][z] * 100, z),
new Vector3f(-0.5f, 1, +0.5f),
new Vector3f(-0.5f, 1, -0.5f),
new Vector3f(+0.5f, 1, +0.5f), rgb, rgb, rgb));
The Triangle constructor parameters are:
start point XYZ coordinates,
first triangle point "sub" (offset) coordinates,
second triangle point "sub" (offset) coordinates,
third triangle point "sub" (offset) coordinates,
RGB color for the first point,
RGB color for the second point,
RGB color for the third point.
I can't get it to work... I think it should be easy and I'm just tired, but I'm stuck. Can you help me?
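As a rough sketch only, this is the kind of loop the question seems to be after, assuming a Triangle class with exactly the constructor described above (origin, three corner offsets, three colors) and a triangles list and rgb value as in the snippet; none of this comes from the original poster's full code.
// sketch: one quad (two triangles) per heightmap cell, lifted to the cell's height
for (int x = 0; x < map.length - 1; x++) {
    for (int z = 0; z < map[x].length - 1; z++) {
        Vector3f origin = new Vector3f(x, (float) map[x][z] * 100, z);
        // first half of the cell
        triangles.add(new Triangle(origin,
                new Vector3f(+0.5f, 1, -0.5f),
                new Vector3f(-0.5f, 1, -0.5f),
                new Vector3f(+0.5f, 1, +0.5f), rgb, rgb, rgb));
        // second half of the cell
        triangles.add(new Triangle(origin,
                new Vector3f(-0.5f, 1, +0.5f),
                new Vector3f(-0.5f, 1, -0.5f),
                new Vector3f(+0.5f, 1, +0.5f), rgb, rgb, rgb));
    }
}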
I'm in the process of displaying j3d GeometryArrays as combined TriangleMeshes in JavaFX. I know the GeometryArrays I am receiving are TriangleStripArrays, and the order allows me to build the correct faces and display the meshes in the scene.
However, I am at a loss on how to determine the vertex winding order based on the TriangleStripArray alone. The faces currently have no correct notion of backface culling, leaving me with a complete TriangleMesh that appears distorted from any given angle. By changing the CullFaceMode from BACK to NONE, and by plotting the vertices in the scene, I can tell that all faces are being correctly mapped; they are just being culled inconsistently.
I have two methods which build a TriangleMesh from any given triangular face containing 3 vertices, one for CW winding and one for CCW.
Essentially:
float[] points = {v1.x, v1.y, v1.z, v2.x, v2.y, v2.z, v3.x, v3.y, v3.z};
float[] texCoords = {1, 1, 1, 0, 0, 1};
int[] faces = new int[]{0, 0, 1, 1, 2, 2};
vs
float[] points = {v2.x, v2.y, v2.z, v1.x, v1.y, v1.z, v3.x, v3.y, v3.z};
float[] texCoords = {1, 0, 1, 1, 0, 1};
int[] faces = new int[]{0, 0, 1, 1, 2, 2};
My question is, how can I determine the vertex winding order, and which method should be used per face, based on the TriangleStripArray alone? (TexCoordinates are being supplied, but not used)
Thanks in advance!
- R. Melville
This can be solved by computing the cross product of the vectors (b - a) x (c - a), where a, b and c are the vertices of your triangle.
Our friends over at Math StackExchange have a more detailed explanation :-)
why-does-cross-product-tell-us-about-clockwise-or-anti-clockwise-rotation
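A minimal sketch of that test in Java, using the javax.vecmath types already in play; which reference direction to compare against (for example the vector from the triangle towards the camera) is an assumption of this sketch, not something specified by the question.
// face normal of triangle (a, b, c); its direction encodes the winding
static Vector3f faceNormal(Point3f a, Point3f b, Point3f c) {
    Vector3f ab = new Vector3f(b.x - a.x, b.y - a.y, b.z - a.z);
    Vector3f ac = new Vector3f(c.x - a.x, c.y - a.y, c.z - a.z);
    Vector3f n = new Vector3f();
    n.cross(ab, ac); // (b - a) x (c - a)
    return n;
}

// returns true when the triangle appears counter-clockwise to a viewer positioned
// in the direction the reference vector points (e.g. towards the camera)
static boolean isCounterClockwise(Point3f a, Point3f b, Point3f c, Vector3f reference) {
    return faceNormal(a, b, c).dot(reference) > 0;
}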
This question already has answers here: Processing tuples in java (3 answers). Closed 6 years ago.
In Python I have this code:
segments = [(0, cX, 0, cY), (cX, w, 0, cY), (cX, w, cY, h), (0, cX, cY, h)]
How can I make this using Java?
Java does not have tuples by default, but you can use http://www.javatuples.org; I believe you can find other implementations too.
If you want to make it a two-dimensional array, assuming all values are int, this is the correct syntax:
int[][] segments = new int[][] { { 0, 1, 0, 1 }, { 1, 1, 0, 1 } };
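Applied to the segments from the question, and assuming cX, cY, w and h are already int variables, that would look roughly like:
int[][] segments = {
    { 0, cX, 0, cY },
    { cX, w, 0, cY },
    { cX, w, cY, h },
    { 0, cX, cY, h }
};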
Another solution (which is much more verbose than in Python) is to write a new class called Segment with x1, y1, x2, y2 as fields, with corresponding getters and setters.
You could also define 2 classes: Segment and Point. Segment would have 2 fields, point1 and point2, and Point would have x and y as fields.
segments would then be a Collection (e.g. a Set or List) of Segments.
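A minimal sketch of that second approach; the class and field names are only suggestions, and the mapping from the Python tuple (taken here as (x1, x2, y1, y2)) to two points is an assumption.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

class Segment {
    final Point point1, point2;
    Segment(Point point1, Point point2) { this.point1 = point1; this.point2 = point2; }
}

// e.g. the first tuple (0, cX, 0, cY) would become:
// segments.add(new Segment(new Point(0, 0), new Point(cX, cY)));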
I have been trying to add mouse click detection on the triangles of the mesh, but it seems that I am doing something wrong and I cannot figure out how to solve the problem.
So before explaining the problem I will define the environment (the full code is available at http://pastebin.com/TxfNuYXZ):
Camera position
cam = new OrthographicCamera(10, 9);
cam.position.set(0, 5.35f, 2f);
cam.lookAt(0, 0, 0);
cam.near = 0.5f;
cam.far = 12f;
Mesh renders 4 vertices.
mesh = new Mesh(true, NUM_COLUMNS * NUM_LINES, (NUM_COLUMNS * 6 - 6) * (NUM_LINES - 1), VertexAttribute.Position(), VertexAttribute.ColorUnpacked());
mesh.setVertices(new float[] {
0, 0, 0, 0, 1, 0, 1,
1, 0, 0, 0, 1, 0, 1,
0, 0, 1, 0, 1, 0, 1,
1, 0, 1, 0, 1, 0, 1 });
mesh.setIndices(new short[] { 2, 0, 1, 2, 3, 1 });
So when I run the application I try to check whether the click was done inside one of the triangles of the mesh. The result depends on the position of the camera. When the camera has an almost top-down view (like in the following picture), corresponding to around 6 on the Y axis, the click point is correctly translated to world coordinates and matches what is actually being seen.
When I move the camera to a lower position on the Y axis (around 2 or 3), so the image looks like the following one,
the click is detected in completely wrong positions (the red line shows the place where the click is detected), which seems right according to the coordinates, but not according to what is being seen.
I would like to understand what I am missing to be able to detect clicks on what is actually being seen. The code I use to detect the click is the following:
@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    // build a picking ray from the click position and test it against the mesh triangles
    Ray ray = cam.getPickRay(screenX, screenY);
    Vector3 intersection = new Vector3();
    float[] v = new float[NUM_COLUMNS * NUM_LINES * VERTEX_SIZE];
    short[] i = new short[(NUM_COLUMNS * 6 - 6) * (NUM_LINES - 1)];
    mesh.getIndices(i);
    if (Intersector.intersectRayTriangles(ray, mesh.getVertices(v), i, VERTEX_SIZE, intersection)) {
        System.out.println(intersection);
    }
    return false;
}
Thanks a lot for your help!
Basically, after several days of drawing and some math I found the source of the problem. The vertex shader, in order to determine the position of each vertex, was performing the multiplication a_position * u_projectionViewMatrix, which looked fine on the screen, but was wrong when compared with the actual coordinates of the mesh. If you look at typical shader examples, gl_Position is calculated by multiplying u_projectionViewMatrix * a_position. Doing the calculation in the correct order did the trick.
I also had to change the camera to a perspective camera, since the mesh was not rendered the way I wanted.
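For reference, a minimal sketch of what the corrected shader looks like (GLSL source held in a Java string); only the attribute and uniform names come from the question, everything else is an assumption.
// the important part is the multiplication order: matrix on the left, vertex on the right
String vertexShader =
      "attribute vec4 a_position;\n"
    + "uniform mat4 u_projectionViewMatrix;\n"
    + "void main() {\n"
    + "    gl_Position = u_projectionViewMatrix * a_position; // not a_position * matrix\n"
    + "}\n";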
I have to solve a problem and I realize it is a bit old-school code.
I need to write down the order of transformations from 1 to 4 and the result for the purple vertex. Would someone help me check whether it is correct, and if not, why?
It is a bit tough for me to find answers to this and be 100% sure it is correct.
What I think is correct:
1. Start from bottom, take MODELVIEW first, then PROJECTION
- Yet I am not sure I did it right...
EDIT, code rewritten to text:
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();

gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glTranslatef(-1, -1, -0);   // modelview translate
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glScalef(2, 1, 3);          // projection scale
gl.glRotatef(-90, 0, 0, 1);    // projection rotation about z
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glScalef(2, 3, 1);          // modelview scale

gl.glBegin(GL.GL_QUADS);
gl.glColor3f(0, 0, 1);         // blue
gl.glVertex3f(-2, -2, -2);
gl.glColor3f(1, 1, 0);         // yellow
gl.glVertex3f(2, 1, 3);
gl.glColor3f(1, 0, 1);         // purple (magenta)
gl.glVertex3f(1, 1, -2);
gl.glColor3f(0, 1, 0);         // green
gl.glVertex3f(-1, 1, 2);
gl.glEnd();
Write the transformations in the order they are applied, and write the coordinates of the purple vertex after each transformation.
Transform 1:________________
Coordinates x:_______ y:_______ z: _______
Transform 2:________________
Coordinates x:_______ y:_______ z: _______
Transform 3:________________
Coordinates x:_______ y:_______ z: _______
Transform 4:________________
Coordinates x:_______ y:_______ z: _______
Problem solved.
The vertex goes through the MODELVIEW transforms first and then the PROJECTION transforms, and within the code the transforms are applied starting from the bottom (the call closest to the vertex is applied first).
Also, I was accidentally using the wrong coordinates..
Thanks for the help though!
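For anyone checking their own answer, here is a small worked sketch in plain Java (not JOGL), assuming the purple vertex is the one drawn with glColor3f(1, 0, 1), i.e. (1, 1, -2); the ordering follows the standard OpenGL rule that the transform written closest to the vertex is applied first, modelview before projection.
public class PurpleVertexCheck {
    public static void main(String[] args) {
        float x = 1, y = 1, z = -2;              // assumed purple vertex

        // 1. MODELVIEW  glScalef(2, 3, 1)
        x *= 2; y *= 3;                          // -> ( 2,  3, -2)
        // 2. MODELVIEW  glTranslatef(-1, -1, 0)
        x -= 1; y -= 1;                          // -> ( 1,  2, -2)
        // 3. PROJECTION glRotatef(-90, 0, 0, 1): (x, y) -> (y, -x)
        float t = x; x = y; y = -t;              // -> ( 2, -1, -2)
        // 4. PROJECTION glScalef(2, 1, 3)
        x *= 2; z *= 3;                          // -> ( 4, -1, -6)

        System.out.printf("(%.0f, %.0f, %.0f)%n", x, y, z);
    }
}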