This question already has answers here:
Processing tuples in java
(3 answers)
Closed 6 years ago.
In Python I have this code:
segments = [(0, cX, 0, cY), (cX, w, 0, cY), (cX, w, cY, h), (0, cX, cY, h)]
How can I make this using Java?
Java does not have tuples by default, but you can use http://www.javatuples.org; I believe you can find other implementations too.
If you want to make it a two-dimensional array, assuming all values are int, this is the correct syntax:
int[][] segments = new int[][] { { 0, 1, 0, 1 }, { 1, 1, 0, 1 } };
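Applied to the Python example, assuming cX, cY, w and h are already defined as ints, that would be:
int[][] segments = new int[][] { { 0, cX, 0, cY }, { cX, w, 0, cY }, { cX, w, cY, h }, { 0, cX, cY, h } };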
Another solution (which is much more verbose than in Python) is to write a new class called Segment with x1, y1, x2, y2 as fields, with corresponding getters and setters.
You could also define 2 classes: Segment and Point. Segment would have 2 fields, point1 and point2; Point would have x and y as fields.
segments would then be a Collection (e.g. Set or List) of Segments.
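For illustration, here is a minimal sketch of the Segment approach (the values for cX, cY, w and h are hypothetical, and getters and setters are omitted for brevity):

import java.util.ArrayList;
import java.util.List;

// Sketch of the Segment class described above, mirroring the Python tuple (x1, x2, y1, y2).
public class Segment {
    private final int x1, x2, y1, y2;

    public Segment(int x1, int x2, int y1, int y2) {
        this.x1 = x1;
        this.x2 = x2;
        this.y1 = y1;
        this.y2 = y2;
    }

    public static void main(String[] args) {
        int cX = 2, cY = 2, w = 4, h = 4; // hypothetical values
        List<Segment> segments = new ArrayList<>();
        segments.add(new Segment(0, cX, 0, cY));
        segments.add(new Segment(cX, w, 0, cY));
        segments.add(new Segment(cX, w, cY, h));
        segments.add(new Segment(0, cX, cY, h));
    }
}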
I'm in the process of displaying j3d GeometryArrays as combined TriangularMeshes in JavaFX. I know the GeometryArrays I am receiving are TriangleStripArrays, and the order allows me to build the correct faces and display the meshes in the scene.
However, I am at a loss as to how to determine the vertex winding order from the TriangleStripArray alone. The faces currently have no correct notion of backface culling, leaving me a complete TriangleMesh that appears distorted from any given angle. By changing the CullFace mode from BACK to NONE, and by plotting the vertices in the scene, I can tell that all faces are being mapped correctly, just culled inconsistently.
I have two methods which build a TriangleMesh from any given triangular face containing 3 vertices, one for CW winding and one for CCW.
Essentially:
float[] points = {v1.x, v1.y, v1.z, v2.x, v2.y, v2.z, v3.x, v3.y, v3.z};
float[] texCoords = {1, 1, 1, 0, 0, 1};
int[] faces = new int[]{0, 0, 1, 1, 2, 2};
vs
float[] points = {v2.x, v2.y, v2.z, v1.x, v1.y, v1.z, v3.x, v3.y, v3.z};
float[] texCoords = {1, 0, 1, 1, 0, 1};
int[] faces = new int[]{0, 0, 1, 1, 2, 2};
My question is: how can I determine the vertex winding order, and which method should be used per face, based on the TriangleStripArray alone? (Texture coordinates are supplied but not used.)
Thanks in advance!
- R. Melville
This can be solved by computing the cross product of the vectors (b - a) x (c - a), where a, b and c are the vertices of your triangle.
Our friends at Math StackExchange have a more detailed explanation :-)
why-does-cross-product-tell-us-about-clockwise-or-anti-clockwise-rotation
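For reference, a minimal sketch of that test in Java (the method name isCounterClockwise and the reference vector are illustrative; for screen-facing winding the reference can be the direction from the triangle toward the camera):

// Sketch: classify the winding of triangle (a, b, c) against a reference direction.
// Each vertex is a float[]{x, y, z}; ref is the direction the front face should point toward.
static boolean isCounterClockwise(float[] a, float[] b, float[] c, float[] ref) {
    // Face normal n = (b - a) x (c - a)
    float ux = b[0] - a[0], uy = b[1] - a[1], uz = b[2] - a[2];
    float vx = c[0] - a[0], vy = c[1] - a[1], vz = c[2] - a[2];
    float nx = uy * vz - uz * vy;
    float ny = uz * vx - ux * vz;
    float nz = ux * vy - uy * vx;
    // n pointing with ref means the vertices appear counter-clockwise from ref's side.
    return nx * ref[0] + ny * ref[1] + nz * ref[2] > 0;
}

Note also that triangle strips conventionally alternate winding with every successive triangle, so when unpacking a TriangleStripArray the even- and odd-numbered triangles typically need opposite treatment.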
I ran into a strange issue with the Polygon class from JavaFX (Java 8).
When I apply a translate, rotate or scale on the polygon instance, it correctly moves the polygon around in my scene. The problem is that the points returned by getPoints() stay the same.
I have now started creating my own methods that move the points around and reset them. The methods do what they should, but is this the right way?
Here is an example:
private void translatePoints(double translateX, double translateY) {
    List<Double> newPoints = new ArrayList<>();
    for (int i = 0; i < getPoints().size(); i += 2) {
        newPoints.add(getPoints().get(i) + translateX);
        newPoints.add(getPoints().get(i + 1) + translateY);
    }
    getPoints().clear();
    getPoints().addAll(newPoints);
}
Is there a way to get the translated, rotated and scaled points after a couple of operations?
Or do I have to implement them all separately?
Take a look at the subclasses of Transform (Affine, Rotate, Scale, Shear and Translate). They allow you to transform points stored in a double[] array using the transform2DPoints method.
double[] points = new double[] {
    0, 0,
    0, 1,
    1, 1,
    1, 0
};

Rotate rot = new Rotate(45, 0.5, 0.5);
Translate t = new Translate(5, 7);
Scale sc = new Scale(3, 3);

for (Transform transform : Arrays.asList(rot, t, sc)) {
    transform.transform2DPoints(points, 0, points, 0, 4);
}

System.out.println(Arrays.toString(points));
This way you need to take care of determining the pivot point yourself for transforms where this is relevant.
You could also get the resulting transform for a node using Node.getLocalToParentTransform():
double[] points = polygon.getPoints().stream().mapToDouble(Number::doubleValue).toArray();
polygon.getLocalToParentTransform().transform2DPoints(points, 0, points, 0, points.length/2);
System.out.println(Arrays.toString(points));
I have been trying to add mouse click detection on the triangles of the mesh, but it seems that I am doing something wrong and I cannot figure out how to solve the problem.
So before explaining the problem, I will define the environment (the full code is available at http://pastebin.com/TxfNuYXZ):
Camera position
cam = new OrthographicCamera(10, 9);
cam.position.set(0, 5.35f, 2f);
cam.lookAt(0, 0, 0);
cam.near = 0.5f;
cam.far = 12f;
The mesh renders 4 vertices.
mesh = new Mesh(true, NUM_COLUMNS * NUM_LINES, (NUM_COLUMNS * 6 - 6) * (NUM_LINES - 1), VertexAttribute.Position(), VertexAttribute.ColorUnpacked());
mesh.setVertices(new float[] {
        // x, y, z, r, g, b, a
        0, 0, 0, 0, 1, 0, 1,
        1, 0, 0, 0, 1, 0, 1,
        0, 0, 1, 0, 1, 0, 1,
        1, 0, 1, 0, 1, 0, 1 });
mesh.setIndices(new short[] { 2, 0, 1, 2, 3, 1 });
So when I run the application, I try to check whether the click was done inside one of the triangles of the mesh. The result depends on the position of the camera. When the camera has an almost top-down view (like in the following picture), corresponding to around 6 on the Y axis, the click point is correctly translated to the coordinates and corresponds to what is actually seen.
When I move the camera to a lower position on the Y axis (around 2 or 3), so that the image looks like the following one,
the click is detected in completely wrong positions (the red line shows the place where the click is detected), which seems right according to the coordinates, but not according to what is being seen.
I would like to understand what I am missing to be able to detect clicks on what is actually seen. The code I use to detect the click is the following:
@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    Ray ray = cam.getPickRay(screenX, screenY);
    Vector3 intersection = new Vector3();
    float[] v = new float[NUM_COLUMNS * NUM_LINES * VERTEX_SIZE];
    short[] i = new short[(NUM_COLUMNS * 6 - 6) * (NUM_LINES - 1)];
    mesh.getIndices(i);
    if (Intersector.intersectRayTriangles(ray, mesh.getVertices(v), i, VERTEX_SIZE, intersection)) {
        System.out.println(intersection);
    }
    return false;
}
Thanks a lot for your help!
Basically, after several days of drawing and some math, I found the source of the problem. The vertex shader was determining the position of the vertices with the multiplication a_position * u_projectionViewMatrix, which looked fine on the screen, but was actually wrong when compared with the real coordinates of the mesh. If you check the examples at enter link description here, you can see that gl_Position is calculated by multiplying u_projectionViewMatrix * a_position. Making the correct calculation did the trick.
I also had to change the camera to a perspective camera, since the mesh was not rendered the way I wanted it to be.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I'm unsure how the arrays are being used in this program. Can anyone explain how the two arrays are used?
import javax.vecmath.*;
import javax.media.j3d.*;

public class Tetrahedron extends IndexedTriangleArray {
    public Tetrahedron() {
        super(4, TriangleArray.COORDINATES | TriangleArray.NORMALS, 12);
        setCoordinate(0, new Point3f(1f, 1f, 1f));
        setCoordinate(1, new Point3f(1f, -1f, -1f));
        setCoordinate(2, new Point3f(-1f, 1f, -1f));
        setCoordinate(3, new Point3f(-1f, -1f, 1f));
        int[] coords = { 0, 1, 2, 0, 3, 1, 1, 3, 2, 2, 3, 0 };
        float n = (float) (1.0 / Math.sqrt(3));
        setNormal(0, new Vector3f(n, n, -n));
        setNormal(1, new Vector3f(n, -n, n));
        setNormal(2, new Vector3f(-n, -n, -n));
        setNormal(3, new Vector3f(-n, n, n));
        int[] norms = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3 };
        setCoordinateIndices(0, coords);
        setNormalIndices(0, norms);
    }
}
The code works by first creating an array of points along with an array of normals, and then referencing them later to create the figure. The four calls to setCoordinate() set the position of each vertex.
The int[] coords stores the vertex indices for the 4 triangles that make up the 4 faces (each triangle has three vertices, for a total of 12 indices). The first triangle is made up of the 0th, 1st and 2nd vertices; the next triangle of the 0th, 3rd and 1st, and so on.
The code for the normals works in a similar fashion to that of the vertices.
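To make the indexing concrete, here is a standalone sketch (the class name IndexDemo is hypothetical) that expands the same coords array into its four faces:

import javax.vecmath.Point3f;

public class IndexDemo {
    public static void main(String[] args) {
        // The same four tetrahedron corners set by setCoordinate() above.
        Point3f[] verts = {
            new Point3f(1f, 1f, 1f),
            new Point3f(1f, -1f, -1f),
            new Point3f(-1f, 1f, -1f),
            new Point3f(-1f, -1f, 1f)
        };
        int[] coords = { 0, 1, 2, 0, 3, 1, 1, 3, 2, 2, 3, 0 };
        // Every group of three indices picks the corners of one triangular face.
        for (int t = 0; t < coords.length; t += 3) {
            System.out.printf("face %d: %s %s %s%n",
                    t / 3, verts[coords[t]], verts[coords[t + 1]], verts[coords[t + 2]]);
        }
    }
}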
Hey all, I'm trying to implement 3D picking in my program, and it works perfectly as long as I don't move from the origin; it is perfectly accurate. But if I move the model matrix away from the origin (the view matrix eye is still at 0,0,0), the picking vectors are still drawn from the original location. It should still be drawing from the view matrix eye (0,0,0), but it isn't. Here's some of my code to see if you can find out why:
Vector3d near = unProject(x, y, 0, mMVPMatrix, this.width, this.height);
Vector3d far = unProject(x, y, 1, mMVPMatrix, this.width, this.height);
Vector3d pickingRay = far.subtract(near);
//pickingRay.z *= -1;

Vector3d normal = new Vector3d(0, 0, 1);
if (normal.dot(pickingRay) != 0 && pickingRay.z < 0) {
    float t = (-5f - normal.dot(mCamera.eye)) / (normal.dot(pickingRay));
    pickingRay = mCamera.eye.add(pickingRay.scale(t));
    addObject(pickingRay.x, pickingRay.y, pickingRay.z + .5f, Shape.BOX);

    // a line for the picking vector for debugging
    PrimProperties a = new PrimProperties(); // new prim properties for size and center
    Prim result = new Line(a, mCamera.eye, far); // new line object for seeing the look-at vector
    result.createVertices();
    objects.add(result);
}
public static Vector3d unProject(
        float winx, float winy, float winz,
        float[] resultantMatrix,
        float width, float height) {
    winy = height - winy;
    float[] m = new float[16],
            in = new float[4],
            out = new float[4];
    Matrix.invertM(m, 0, resultantMatrix, 0);
    in[0] = (winx / width) * 2 - 1;
    in[1] = (winy / height) * 2 - 1;
    in[2] = 2 * winz - 1;
    in[3] = 1;
    Matrix.multiplyMV(out, 0, m, 0, in, 0);
    if (out[3] == 0)
        return null;
    out[3] = 1 / out[3];
    return new Vector3d(out[0] * out[3], out[1] * out[3], out[2] * out[3]);
}
Matrix.translateM(mModelMatrix, 0, this.diffX, this.diffY, 0); // I use this to move the model matrix based on pinch-zoom gestures
Any help would be greatly appreciated! Thanks.
I wonder which algorithm you have implemented. Is it a ray casting approach to the problem?
I didn't focus much on the code itself, but this looks like way too simple an implementation to be a fully operational ray casting solution.
In my humble experience, I would suggest, depending on the complexity of your final project (which I don't know), adopting a color picking solution.
This solution is usually the most flexible and the easiest to implement.
It consists of rendering the objects in your scene with unique flat colors (usually you disable lighting in your shaders as well) to a backbuffer, i.e. a texture; then you acquire the coordinates of the click (touch) and read the color of the pixel at those coordinates.
Having the color of the pixel, and a table of the colors of the different objects you rendered, makes it possible for you to understand what the user clicked from a logical perspective.
There are other approaches to the object picking problem, but this is probably universally recognized as the fastest one.
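As a rough illustration of the read-back step on Android (the class name ColorPicker and the id-in-RGB encoding are assumptions, not from the original post):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

import android.opengl.GLES20;

// Sketch: after rendering each object with a unique flat color, read the pixel
// under the touch point and decode it back into an object id.
public final class ColorPicker {
    public static int pick(int touchX, int touchY, int viewportHeight) {
        ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
        // glReadPixels uses a bottom-left origin, touch events a top-left one.
        GLES20.glReadPixels(touchX, viewportHeight - touchY, 1, 1,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
        int r = pixel.get(0) & 0xFF, g = pixel.get(1) & 0xFF, b = pixel.get(2) & 0xFF;
        return (r << 16) | (g << 8) | b; // the id that was encoded into the flat color
    }
}

Typically the flat-colored pass is rendered to an offscreen framebuffer, so the picking frame never appears on screen.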
Cheers
Maurizio