Java3D: read each polygon of a 3D object - java

I'm using Java3d (VERSION 1.6) and am trying to read all polygons from any object.
I loaded one object using following code:
private BranchGroup loadObj(String p) {
    BranchGroup objRoot = new BranchGroup();
    TransformGroup tg = new TransformGroup();
    Transform3D t3d = new Transform3D();
    t3d.setScale(0.3);
    Matrix4d matrix = new Matrix4d();
    t3d.get(matrix);
    try
    {
        Scene s = null;
        ObjectFile f = new ObjectFile();
        String basepath = new File(p).getAbsolutePath();
        System.out.println(basepath);
        f.setBasePath(basepath);
        f.setFlags(0);
        s = f.load(p);
        s.getSceneGroup().setBoundsAutoCompute(true);
        tg.addChild(s.getSceneGroup());
        objRoot.addChild(tg);
        bounds.add(objRoot.getBounds());
        objRoot.compile();
    }
    catch (FileNotFoundException | IncorrectFormatException | ParsingErrorException e)
    {
        e.printStackTrace();
    }
    return objRoot;
}
Now I'd like to read the computed polygons from that BranchGroup or Scene object and put each into a class that is mainly an array of Point3d's. With that class I build some algorithms to search for specific points and such. So how would I get these polygons?
The reason I need this is that I'm trying to "walk" over an uneven surface. I can't use bounding boxes or spheres, as they are not precise enough. I would appreciate a different solution as well!
EDIT:
With the help of gouessej I got so far:
try
{
    Scene s = null;
    ObjectFile f = new ObjectFile();
    String basepath = new File(p).getAbsolutePath();
    System.out.println(basepath);
    f.setBasePath(basepath);
    f.setFlags(ObjectFile.TRIANGULATE);
    String s1 = p;
    s = f.load(s1);
    BranchGroup branch = s.getSceneGroup();
    branch.setBoundsAutoCompute(true);
    Shape3D shape = (Shape3D) branch.getChild(0);
    Geometry g = shape.getGeometry();
    TriangleArray ta = (TriangleArray) shape.getGeometry();
    System.out.println(ta.getVertexCount());  // prints around 95,000, sounds about right
    System.out.println(ta.getVertexFormat()); // prints 387
    double[] coords = ta.getCoordRefDouble(); // line 526; here it throws the exception
    System.out.println(Arrays.toString(coords));
    tg.addChild(branch);
    objRoot.addChild(tg);
    bounds.add(objRoot.getBounds());
    System.out.println();
    objRoot.compile();
}
catch (FileNotFoundException | IncorrectFormatException | ParsingErrorException e)
{
    e.printStackTrace();
}
But the line ta.getCoordRefDouble() throws an exception:
Exception in thread "main" java.lang.IllegalStateException: GeometryArray: cannot access individual array references in INTERLEAVED mode
at javax.media.j3d.GeometryArray.getCoordRefDouble(GeometryArray.java:5755)
at com.object.simpleTest.Test1.loadObj(Test1.java:526)
at com.object.simpleTest.Test1.<init>(Test1.java:428)
at com.object.simpleTest.Test1.main(Test1.java:686)
What does it mean and how can I fix it?

First of all, Java 3D is NOT dead, as you can see here (please edit your question).
Secondly, you can look at the Java documentation of the class ObjectFile. I advise you to use the flag "TRIANGULATE" to be sure to get a polygon array containing only convex polygons, which eases your computations.
The branch group of your Scene object contains one Shape3D object. This Shape3D object contains a Geometry object that stores your polygons. The source code of ObjectFile is here; look at this line.
Edit: You can get the BranchGroup of your scene by calling Scene.getSceneGroup(). You can see that the group is added into the scene here. Call Group.getAllChildren(), loop over all children, and use instanceof to check whether a child is an instance of Shape3D. For each Shape3D, call getGeometry() or getAllGeometries(). The geometry should be a GeometryArray, maybe a TriangleArray. getCoordRefBuffer() might not work in exactly the same way in Java 3D 1.6 because we removed J3DBuffer; use getCoordRefDouble(), getCoordRefFloat(), or any variant of getCoordinate() or getCoordinates(). Please ensure that you use Java 3D 1.6 so that we are talking about the same code and the same version; older versions are obsolete and unmaintained.
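A minimal sketch of that child-walking loop (assuming the model was loaded with the TRIANGULATE flag, so each geometry comes back as a TriangleArray):
for (Enumeration<?> children = s.getSceneGroup().getAllChildren(); children.hasMoreElements();) {
    Node child = (Node) children.nextElement();
    if (child instanceof Shape3D) {
        Shape3D shape = (Shape3D) child;
        for (Enumeration<?> geoms = shape.getAllGeometries(); geoms.hasMoreElements();) {
            Geometry geom = (Geometry) geoms.nextElement();
            if (geom instanceof TriangleArray) {
                TriangleArray ta = (TriangleArray) geom;
                // read the vertex data here; see Edit 2 below for the interleaved case
            }
        }
    }
}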
Edit 2: Rather, call getInterleavedVertices(), as its name implies, when the vertices are interleaved. Keep in mind that it may contain the normals too (in first position), not only the vertex coordinates (in second position):
nx ny nz vx vy vz
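For the interleaved case above, here is a minimal sketch of pulling the triangles into Point3d arrays. The vertex format 387 printed in the question decodes to COORDINATES | NORMALS | BY_REFERENCE | INTERLEAVED, so each vertex occupies 6 floats laid out as above, and every 3 consecutive vertices form one triangle:
float[] data = ta.getInterleavedVertices();
int stride = 6; // 3 normal floats + 3 coordinate floats per vertex
List<Point3d[]> triangles = new ArrayList<>();
for (int v = 0; v + 2 < ta.getVertexCount(); v += 3) {
    Point3d[] tri = new Point3d[3];
    for (int i = 0; i < 3; i++) {
        int base = (v + i) * stride + 3; // +3 skips this vertex's normal
        tri[i] = new Point3d(data[base], data[base + 1], data[base + 2]);
    }
    triangles.add(tri);
}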

Related

Render arrow with direction between 2 anchors ARCore

I have 2 anchors. I'd like to render an arrow object at one anchor, with the arrow head pointing toward the other. I wrote some code, but it doesn't work properly:
private void drawLine(AnchorNode node1, AnchorNode node2) {
    Vector3 point1, point2;
    point1 = node1.getWorldPosition();
    point2 = node2.getWorldPosition();
    node1.setParent(mArFragment.getArSceneView().getScene());
    // find the vector extending between the two points and define a look rotation
    // in terms of this Vector.
    final Vector3 difference = Vector3.subtract(point1, point2);
    final Vector3 directionFromTopToBottom = difference.normalized();
    final Quaternion rotationFromAToB =
            Quaternion.lookRotation(directionFromTopToBottom, Vector3.up());
    MaterialFactory.makeTransparentWithColor(getApplicationContext(), new Color(247, 181, 0, 0.7f))
            .thenAccept(
                    material -> {
                        // create a rectangular prism, using ShapeFactory.makeCube()
                        // use the difference vector to extend to the necessary length
                        ModelRenderable model = ShapeFactory.makeCube(
                                new Vector3(.15f, .001f, difference.length()),
                                Vector3.zero(), material);
                        // set the world rotation of the node to the rotation calculated earlier
                        // and set the world position to the midpoint between the given points
                        Node nodeForLine = new Node();
                        nodeForLine.setParent(node1);
                        nodeForLine.setRenderable(model);
                        nodeForLine.setWorldPosition(Vector3.add(point1, point2).scaled(.5f));
                        nodeForLine.setWorldRotation(rotationFromAToB);
                    }
            ); // end rendering
    ModelRenderable.builder()
            .setSource(this, Uri.parse("model.sfb"))
            .build()
            .thenAccept(modelRenderable -> {
                AnchorNode anchorNode = new AnchorNode(node1.getAnchor());
                TransformableNode transformableNode = new TransformableNode(mArFragment.getTransformationSystem());
                transformableNode.setParent(anchorNode);
                transformableNode.setRenderable(modelRenderable);
                transformableNode.setWorldRotation(rotationFromAToB);
                mArFragment.getArSceneView().getScene().addChild(anchorNode);
                transformableNode.select();
            })
            .exceptionally(throwable -> {
                AlertDialog.Builder builder = new AlertDialog.Builder(this);
                builder.setMessage(throwable.getMessage()).show();
                return null;
            });
}
As you can see, the arrow is rendered, but its direction is not correct.
Current situation (see image).
I think you want the arrow on one anchor to point towards the other anchor - you can do this using 'setLookDirection' and a TransformableNode.
See below for an example:
var newAnchorNode:AnchorNode = AnchorNode(newAnchor)
var transNode = TransformableNode(arFragment.transformationSystem)
transNode.setLookDirection(Vector3(0f, 1f, 1f), Vector3.left())
transNode.renderable = selectedRenderable
transNode.setParent(newAnchorNode)
newAnchorNode.setParent(arFragment.arSceneView.scene)
See API documentation here:
https://developers.google.com/sceneform/reference/com/google/ar/sceneform/ux/TransformableNode
This is actually for Sceneform 1.15; the newer version 1.16 is now open source on GitHub, but I don't think a detailed API description exists for it, so the above is still a good place to look at this time (May 2020).
You can use your own values for the two vector3's.
The first Vector3 is the direction you want your renderable to 'look' along - in your case, most likely toward the other anchor - and the second Vector3 is the up orientation of the renderable in the scene, i.e. whether you want it upright, facing left, etc.
One thing to be aware of: AFAIK, Sceneform is still designed for landscape mode, so you may need to experiment to get the orientation the way you want if you are not using landscape - for example, the Vector3.left() in the example above is there to make a renderable appear upright on a portrait display.
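In Java, a minimal sketch of this for the question's two anchors might look like the following (arrowRenderable is a hypothetical renderable built elsewhere; the look direction is computed from the anchors' world positions):
Vector3 from = node1.getWorldPosition();
Vector3 to = node2.getWorldPosition();
Vector3 lookDirection = Vector3.subtract(to, from).normalized();
Node arrowNode = new Node();
arrowNode.setParent(node1);
arrowNode.setRenderable(arrowRenderable); // hypothetical: the arrow model
arrowNode.setLookDirection(lookDirection, Vector3.up());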

How to create Polygon from Point (Spatial4j)

I want to do some geometric calculations in Java and found that Spatial4j should suit my needs.
I want to be able to compute stuff like whether two polygons overlap or what their bounding box is.
My thinking is that I need to create a polygon from a series of points.
To that end I have tested with this code:
// shapeFactory is assumed to come from a SpatialContext, e.g.:
SpatialContext ctx = SpatialContext.GEO;
ShapeFactory shapeFactory = ctx.getShapeFactory();
Point point1 = shapeFactory.pointXY(0, 0);
Point point2 = shapeFactory.pointXY(5, 1);
Point point3 = shapeFactory.pointXY(3, 3);
Point point4 = shapeFactory.pointXY(0, 1);
List<Point> points = new ArrayList<>();
points.addAll(Arrays.asList(point1, point2, point3, point4));
So, I have my points now. How do I go about making a polygon (or, for that matter, any shape) from these points?
I would think that shapeFactory.polygon() would create a polygon, but that throws an UnsupportedOperationException. Any help?
Alright, it seems that plain Spatial4j does not connect the points, so you don't get a filled shape; its polygon() is simply not implemented. Instead I relied on Spatial4j's JTS integration, and that did the trick.
JtsSpatialContextFactory jtsSpatialContextFactory = new JtsSpatialContextFactory();
JtsSpatialContext jtsSpatialContext = jtsSpatialContextFactory.newSpatialContext();
JtsShapeFactory jtsShapeFactory = jtsSpatialContext.getShapeFactory();
ShapeFactory.PolygonBuilder polygonBuilder = jtsShapeFactory.polygon();
// note due to it being a builder one needs to chain the points.
Shape shape1 = polygonBuilder.pointXY(4, 0).pointXY(3, 3).pointXY(1, 4).pointXY(0, 0).pointXY(4, 0).build();
Now calling, for example, shape1.getArea() returns the surface area.
One can also create a Geometry from a Shape by calling jtsShapeFactory.getGeometryFrom(shape1), which returns a Geometry.
Note: watch out with calling polygonBuilder.pointXY() even after build() has been called; it will still append these points to whatever was chained onto the builder before the build.
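With the JTS-backed shapes in place, the overlap and bounding-box checks from the original question are straightforward. A small sketch (the second polygon's coordinates are arbitrary):
Shape shape2 = jtsShapeFactory.polygon()
        .pointXY(2, 1).pointXY(6, 1).pointXY(6, 5).pointXY(2, 5).pointXY(2, 1)
        .build();
SpatialRelation relation = shape1.relate(shape2); // INTERSECTS, CONTAINS, WITHIN or DISJOINT
boolean overlap = relation.intersects();
Rectangle bbox = shape1.getBoundingBox(); // axis-aligned bounding box of the polygon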

Java3D transformation from 3D to current view 2D coordinate

I'm trying to add a 2D overlay to a 3D scene in Java3D. Part of this overlay is drawing a line from a 2D object to a corresponding point in the 3D scene.
I searched for 3D-to-2D transformations and read these threads:
Translate Java 3D coordinates to 2D screen coordinates
3D to 2D projection
From the code inside Walrus:
https://github.com/CAIDA/walrus/blob/master/H3ViewParameters.java
I copied a method into a class extending Canvas3D:
public Transform3D getObjectToEyeTransform() {
    Point3d m_eye = new Point3d();
    getCenterEyeInImagePlate(m_eye);
    Transform3D m_imageToEye = new Transform3D();
    m_imageToEye.set(new Vector3d(-m_eye.x, -m_eye.y, 0.0));
    Transform3D m_vworldToImage = new Transform3D();
    getVworldToImagePlate(m_vworldToImage);
    Transform3D transform = new Transform3D(m_imageToEye);
    transform.mul(m_vworldToImage);
    //transform.mul(m_objectTransform);
    return transform;
}
Then, in my overlay's postRender method, I try the following:
Transform3D viewTrans3d = getObjectToEyeTransform();
Vector3d point = new Vector3d(1,1,1);
viewTrans3d.invert();
viewTrans3d.transform(point);
this.getGraphics2D().drawLine(0, 0, (int)point.x, (int)point.y);
I get a very weird line. It changes in a fairly logical pattern when I rotate and tilt the view, but it is far from what I expect.
Questions:
I commented out the m_objectTransform matrix multiplication because I don't understand its purpose. Any idea what it is for?
Why do I need to invert the transform matrix? Without the invert, the results are even weirder.
Is there a simpler way to do this? It sounds like something solved eons ago...
This can be done by using getVworldToImagePlate and then getPixelLocationFromImagePlate in the Canvas3D class. For example:
public Point2d getPosition2d(Point3d point) {
    Transform3D transform = new Transform3D();
    getVworldToImagePlate(transform);
    transform.mul(objectTransform);
    Point3d newPoint = new Point3d(point);
    transform.transform(newPoint);
    Point2d point2d = new Point2d();
    getPixelLocationFromImagePlate(newPoint, point2d);
    return point2d;
}
The objectTransform variable should be the transform of any TransformGroup in the scene that is applied to the displayed 3D objects. If you don't have any TransformGroup, you can leave this out. Also, the transform shouldn't be inverted; just use it as it is.
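For completeness, a sketch of how getPosition2d() might be used from postRender to draw the overlay line (assuming it lives in the same Canvas3D subclass; note that the 2D graphics must be flushed for the overlay to show up):
public void postRender() {
    Point2d screen = getPosition2d(new Point3d(1, 1, 1));
    J3DGraphics2D g2d = getGraphics2D();
    g2d.drawLine(0, 0, (int) screen.x, (int) screen.y);
    g2d.flush(true); // required in postRender, otherwise the overlay may not appear
}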

Using WarpOptions in GDAL for Android (java via swig binding)

I want to reproject a geospatial image using GDAL compiled for Android. I am currently using the SWIG bindings but am considering going JNI/NDK.
I have successfully warped an image with the AutoCreateWarpedVRT function, but I would like to use more options (e.g. cropping the output). The code below is my attempt at warping using gdal.Warp. It produces an output raster that is not warped at all, and it also does not apply the -te option.
The documentation for the GDAL SWIG bindings is very sparse (link), and I suspect that I did not get the WarpOptions right.
Any suggestions on how to make the WarpOptions(Vector) arguments work (or any of the ???Options(Vector) variants, for that matter) are appreciated.
Dataset src = gdal.Open("input_raster.jpg");
src.SetProjection(src_wkt);
// Set reference points
double west = 451000;
double east = 501005;
double south = 6214995;
double north = 6257000;
int width = src.getRasterXSize();
int height = src.getRasterYSize();
GCP[] gcps = { new GCP(0, 0, west, north),
               new GCP(0, height, west, south),
               new GCP(width, 0, east, north),
               new GCP(width, height, east, south) };
src.SetGCPs(gcps, dst_wkt);
// Try to warp
Dataset[] src_array = { src };
WarpOptions warpOptions = new WarpOptions(
        new Vector(Arrays.asList("s_srs EPSG:32632", "t_srs EPSG:3857", "te 1000 820"))
);
Dataset warp = gdal.Warp("warp.vrt", src_array, warpOptions);
Dataset warp_png = driverPNG.CreateCopy("warp_raster.png", warp);
// Produces a raster output without errors, but does not apply the warp.
src.delete();
warp.delete();
warp_png.delete();
According to this discussion the syntax should be:
Vector<String> options = new Vector<>();
options.add("-s_srs");
options.add("EPSG:32632");
options.add("-t_srs");
options.add("EPSG:3857");
This seems right, but it throws a runtime error at WarpOptions.java:17:
if (cPtr == 0)
    throw new RuntimeException();
So, apart from the error, I think the solution is correct. The error may be platform specific.
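For reference, a sketch of what the fully corrected call could look like. Note that -te takes four values (xmin ymin xmax ymax), so the original "te 1000 820" was incomplete in any case, and the values are expected in the target SRS (EPSG:3857 here) unless -te_srs is also given; the bounds below reuse the question's reference points purely for illustration:
Vector<String> options = new Vector<>();
options.add("-s_srs"); options.add("EPSG:32632");
options.add("-t_srs"); options.add("EPSG:3857");
options.add("-te");
options.add("451000"); options.add("6214995"); // xmin ymin
options.add("501005"); options.add("6257000"); // xmax ymax
WarpOptions warpOptions = new WarpOptions(options);
Dataset warp = gdal.Warp("warp.vrt", new Dataset[] { src }, warpOptions);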

Route trace using Java

I want to trace the trajectory between different points. For the test, I create some points and try to link them together. This is my code:
OpenStreetMapLayer osm = new OpenStreetMapLayer();
// vectorLayer and polyline assumed from the Vaadin OpenLayers add-on (org.vaadin.vol)
VectorLayer vectorLayer = new VectorLayer();
PolyLine polyline = new PolyLine();
map.addLayer(osm); // add the base layer as well
map.addLayer(vectorLayer);
List<Point>points= new ArrayList<Point>();
Point point = new Point(44.272872,4.27826);
Point point2 = new Point(-55.272873,5.3873837);
Point point3 = new Point(5.272873,54.3873837);
points.add(point);
points.add(point2);
points.add(point3);
Point[] coord=new Point[points.size()];
points.toArray(coord);
polyline.setPoints(coord);
vectorLayer.addComponent(polyline);
Style defaultstyle = new Style();
/* Set stroke color to blue, otherwise like default style */
defaultstyle.extendCoreStyle("default");
defaultstyle.setStrokeColor("#0000ff");
defaultstyle.setStrokeWidth(3);
defaultstyle.setFillColor("#adfffc");
defaultstyle.setFillOpacity(0.4);
// Make borders of selected graphs bigger
Style selectStyle = new Style();
selectStyle.setStrokeWidth(5);
StyleMap stylemap = new StyleMap(defaultstyle, defaultstyle, null);
// make selectStyle inherit attributes not explicitly set
stylemap.setExtendDefault(true);
vectorLayer.setStyleMap(stylemap);
But when I execute my code, I get just a single point. I asked about it and was told that this point is at coordinate (0,0).
This is a screen capture of the set of points without zoom (the blue point):
http://img4.hostingpics.net/pics/810776sss.png
and this is at MAX ZOOM:
http://img4.hostingpics.net/pics/122823ert.png
I want to know whether this is a problem of scale or something else.
Thanks in advance.
You are using https://en.wikipedia.org/wiki/EPSG:4326 coords, but OSM is using https://wiki.openstreetmap.org/wiki/EPSG:3857. The first one is in degrees, within abs(180, 90), whereas the second one is in meters, on the order of abs(6356752, 6378137). So your points end up basically at the center in spherical Mercator, and zooming in very close will reveal them. You have to convert your data, e.g. using GeoTools.
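A minimal GeoTools sketch of that conversion (assuming the question's (44.272872, 4.27826) is lat/lon; depending on the GeoTools version, Coordinate lives in org.locationtech.jts.geom or com.vividsolutions.jts.geom):
CoordinateReferenceSystem wgs84 = CRS.decode("EPSG:4326", true); // true = lon/lat axis order
CoordinateReferenceSystem mercator = CRS.decode("EPSG:3857");
MathTransform transform = CRS.findMathTransform(wgs84, mercator, true);
Coordinate in = new Coordinate(4.27826, 44.272872); // lon, lat
Coordinate out = JTS.transform(in, null, transform);
// out.x / out.y are now meters in spherical Mercator, suitable for the OSM layer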
