I need to combine several cubes in ARCore into one ModelRenderable. I have code that takes the vertices and submeshes from each cube and builds a single ModelRenderable; however, only the last cube added is rendered.
One strange thing I can see is that some vertices have the same positions, so I'm not sure whether that's correct or not.
Here's my code that takes each cube and collects its submesh and vertices:
List<RenderableDefinition.Submesh> submeshes = new ArrayList<>();
List<Vertex> vertices = new ArrayList<>();
for (SubCube cube : cubes) {
submeshes.add(cube.getSubmesh());
vertices.addAll(cube.getVertices());
}
RenderableDefinition renderableDefinition = RenderableDefinition.builder().setVertices(vertices).setSubmeshes(submeshes).build();
CompletableFuture<ModelRenderable> future = ModelRenderable.builder().setSource(renderableDefinition).build();
ModelRenderable result = future.get();
Here's my code for creating a cube. It's basically identical to ShapeFactory.makeCube.
public SubCube makeCube(Vector3 size, Vector3 center, Material material) {
AndroidPreconditions.checkMinAndroidApiLevel();
Vector3 extents = size.scaled(0.5F);
Vector3 p0 = Vector3.add(center, new Vector3(-extents.x, -extents.y, extents.z));
Vector3 p1 = Vector3.add(center, new Vector3(extents.x, -extents.y, extents.z));
Vector3 p2 = Vector3.add(center, new Vector3(extents.x, -extents.y, -extents.z));
Vector3 p3 = Vector3.add(center, new Vector3(-extents.x, -extents.y, -extents.z));
Vector3 p4 = Vector3.add(center, new Vector3(-extents.x, extents.y, extents.z));
Vector3 p5 = Vector3.add(center, new Vector3(extents.x, extents.y, extents.z));
Vector3 p6 = Vector3.add(center, new Vector3(extents.x, extents.y, -extents.z));
Vector3 p7 = Vector3.add(center, new Vector3(-extents.x, extents.y, -extents.z));
Vector3 up = Vector3.up();
Vector3 down = Vector3.down();
Vector3 front = Vector3.forward();
Vector3 back = Vector3.back();
Vector3 left = Vector3.left();
Vector3 right = Vector3.right();
Vertex.UvCoordinate uv00 = new Vertex.UvCoordinate(0.0F, 0.0F);
Vertex.UvCoordinate uv10 = new Vertex.UvCoordinate(1.0F, 0.0F);
Vertex.UvCoordinate uv01 = new Vertex.UvCoordinate(0.0F, 1.0F);
Vertex.UvCoordinate uv11 = new Vertex.UvCoordinate(1.0F, 1.0F);
List<Vertex> vertices = Arrays.asList(
Vertex.builder().setPosition(p0).setNormal(down).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p1).setNormal(down).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p2).setNormal(down).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p3).setNormal(down).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p7).setNormal(left).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p4).setNormal(left).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p0).setNormal(left).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p3).setNormal(left).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p4).setNormal(front).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p5).setNormal(front).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p1).setNormal(front).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p0).setNormal(front).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p6).setNormal(back).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p7).setNormal(back).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p3).setNormal(back).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p2).setNormal(back).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p5).setNormal(right).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p6).setNormal(right).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p2).setNormal(right).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p1).setNormal(right).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p7).setNormal(up).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p6).setNormal(up).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p5).setNormal(up).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p4).setNormal(up).setUvCoordinate(uv00).build());
ArrayList<Integer> triangleIndices = new ArrayList<>(36);
for(int i = 0; i < 6; ++i) {
triangleIndices.add(3 + 4 * i);
triangleIndices.add(1 + 4 * i);
triangleIndices.add(0 + 4 * i);
triangleIndices.add(3 + 4 * i);
triangleIndices.add(2 + 4 * i);
triangleIndices.add(1 + 4 * i);
}
RenderableDefinition.Submesh submesh = RenderableDefinition.Submesh.builder().setTriangleIndices(triangleIndices).setMaterial(material).build();
return new SubCube(submesh, vertices);
}
I'm not getting error messages or anything. I know the positions passed into makeCube are different, so that's not the issue. The expected behavior is that I can render more than one cube in a single ModelRenderable.
Since you merge all the vertices into one final list (named vertices), the triangle indices of each sub-cube must be offset by the number of vertices that come before it:
For the first cube, index values go from 0 to 23 (you have 24 vertices per cube).
For the second cube, they go from 24 to 47.
For the i-th cube (counting from 1), they go from (i-1)*24 to i*24-1.
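A minimal sketch of that offsetting as a standalone helper (the class and method names are mine, not Sceneform's):

```java
import java.util.ArrayList;
import java.util.List;

class MergeCubes {

    // Shifts one sub-cube's triangle indices by the number of vertices
    // already merged ahead of it. With makeCube's 24 vertices per cube,
    // the i-th cube (0-based) needs an offset of i * 24.
    static List<Integer> offsetTriangleIndices(List<Integer> indices, int vertexOffset) {
        List<Integer> shifted = new ArrayList<>(indices.size());
        for (int index : indices) {
            shifted.add(index + vertexOffset);
        }
        return shifted;
    }
}
```

In the question's merge loop you would rebuild each Submesh from the shifted indices before adding it, incrementing the offset by 24 per cube; this assumes the submesh exposes (or you keep around) the index list it was built from.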
Alternatively, you can achieve the same result by creating a single cube geometry and a node hierarchy in which each child node uses that renderable at a different position.
This project is written entirely from scratch in Java. I've been bored ever since Covid started, so I wanted something that would take up my time and teach me something cool. I've been stuck on this problem for about a week now, though. When I try to use my near-plane clipping method it skews the new vertices to the opposite side of the screen, but sometimes it works just fine.
Failure Screenshot
Success Screenshot
So my thought is maybe that since it works sometimes, I'm just not doing the clipping at the correct time in the pipeline?
I start by face culling and lighting,
Then I apply a camera view transformation to the vertices,
Then I clip against the near plane,
Finally I apply the projection matrix and clip any remaining off-screen triangles.
Code:
This calculates the intersection points. Sorry if it's messy or too long; I'm not very experienced in coding, my major is physics, not CS.
public Vertex vectorIntersectPlane(Vector3d planePos, Vector3d planeNorm, Vector3d lineStart, Vector3d lineEnd){
float planeDot = planeNorm.dotProduct(planePos);
float startDot = lineStart.dotProduct(planeNorm);
float endDot = lineEnd.dotProduct(planeNorm);
float midPoint = (planeDot - startDot) / (endDot - startDot); //interpolation parameter t along the line, not actually a midpoint
Vector3d lineStartEnd = lineEnd.sub(lineStart);
Vector3d lineToIntersect = lineStartEnd.scale(midPoint);
return new Vertex(lineStart.add(lineToIntersect));
}
public float distanceFromPlane(Vector3d planePos, Vector3d planeNorm, Vector3d vert){
float x = planeNorm.getX() * vert.getX();
float y = planeNorm.getY() * vert.getY();
float z = planeNorm.getZ() * vert.getZ();
return (x + y + z - (planeNorm.dotProduct(planePos)));
}
//When a triangle gets clipped it has 4 possible outcomes
// 1 it doesn't actually need clipping and gets returned
// 2 it gets clipped into 1 new triangle, for testing these are red
// 3 it gets clipped into 2 new triangles, for testing 1 is green, and 1 is blue
// 4 it is outside the view planes and shouldn't be rendered
public void clipTriangles(){
Vector3d planePos = new Vector3d(0, 0, ProjectionMatrix.fNear, 1f);
Vector3d planeNorm = Z_AXIS.clone();
final int length = triangles.size();
for(int i = 0; i < length; i++) {
Triangle t = triangles.get(i);
if(!t.isDraw())
continue;
Vector3d[] insidePoint = new Vector3d[3];
int insidePointCount = 0;
Vector3d[] outsidePoint = new Vector3d[3];
int outsidePointCount = 0;
float d0 = distanceFromPlane(planePos, planeNorm, t.getVerticesVectors()[0]);
float d1 = distanceFromPlane(planePos, planeNorm, t.getVerticesVectors()[1]);
float d2 = distanceFromPlane(planePos, planeNorm, t.getVerticesVectors()[2]);
//Storing distances from plane and counting inside outside points
{
if (d0 >= 0){
insidePoint[insidePointCount] = t.getVerticesVectors()[0];
insidePointCount++;
}else{
outsidePoint[outsidePointCount] = t.getVerticesVectors()[0];
outsidePointCount++;
}
if (d1 >= 0){
insidePoint[insidePointCount] = t.getVerticesVectors()[1];
insidePointCount++;
}else{
outsidePoint[outsidePointCount] = t.getVerticesVectors()[1];
outsidePointCount++;
}
if (d2 >= 0){
insidePoint[insidePointCount] = t.getVerticesVectors()[2];
insidePointCount++;
}else{
outsidePoint[outsidePointCount] = t.getVerticesVectors()[2];
outsidePointCount++;
}
}
//Triangle has 1 point still inside view, remove original triangle add new clipped triangle
if (insidePointCount == 1) {
t.dontDraw();
Vertex newVert1 = vectorIntersectPlane(planePos, planeNorm, insidePoint[0], outsidePoint[0]);
Vertex newVert2 = vectorIntersectPlane(planePos, planeNorm, insidePoint[0], outsidePoint[1]);
vertices.add(newVert1);
vertices.add(newVert2);
//Triangles are stored with vertex references instead of the actual vertex object.
Triangle temp = new Triangle(t.getVertKeys()[0], vertices.size() - 2, vertices.size() - 1, vertices);
temp.setColor(1,0,0, t.getBrightness(), t.getAlpha());
triangles.add(temp);
continue;
}
//Triangle has two points inside remove original add two new clipped triangles
if (insidePointCount == 2) {
t.dontDraw();
Vertex newVert1 = vectorIntersectPlane(planePos, planeNorm, insidePoint[0], outsidePoint[0]);
Vertex newVert2 = vectorIntersectPlane(planePos, planeNorm, insidePoint[1], outsidePoint[0]);
vertices.add(newVert1);
vertices.add(newVert2);
Triangle temp = new Triangle(t.getVertKeys()[0], t.getVertKeys()[1], vertices.size() - 1, vertices);
temp.setColor(0, 1, 0, t.getBrightness(), t.getAlpha());
triangles.add(temp);
temp = new Triangle(t.getVertKeys()[0], t.getVertKeys()[1], vertices.size() - 2, vertices);
temp.setColor(0, 0, 1, t.getBrightness(), t.getAlpha());
triangles.add(temp);
continue;
}
}
}
I figured out the problem. The new clipped triangles were not being given the correct vertex references; they were just being given the first vertex of the triangle regardless of whether that vertex was inside the view or not.
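For reference, the surviving geometry can be computed without tracking inside/outside slots by hand. This is a self-contained sketch (my own helper, independent of the question's Triangle and vertex-key classes) of clipping one triangle against the plane z = zNear with the Sutherland-Hodgman rule; every output vertex is either an original inside vertex or a freshly interpolated intersection point, so no outside vertex can leak into the result:

```java
import java.util.ArrayList;
import java.util.List;

class NearPlaneClip {

    // Clips a convex polygon (here: a triangle, as {x, y, z} arrays) against
    // the plane z = zNear, keeping the half-space z >= zNear. Outputs 0, 3,
    // or 4 points: empty (fully outside), one triangle, or a quad that the
    // caller splits into two triangles.
    static List<double[]> clip(List<double[]> poly, double zNear) {
        List<double[]> out = new ArrayList<>();
        for (int i = 0; i < poly.size(); i++) {
            double[] a = poly.get(i);
            double[] b = poly.get((i + 1) % poly.size());
            boolean aInside = a[2] >= zNear;
            boolean bInside = b[2] >= zNear;
            if (aInside) {
                out.add(a);
            }
            if (aInside != bInside) {
                // Edge crosses the plane: add the interpolated crossing point.
                double t = (zNear - a[2]) / (b[2] - a[2]);
                out.add(new double[] {
                        a[0] + t * (b[0] - a[0]),
                        a[1] + t * (b[1] - a[1]),
                        zNear });
            }
        }
        return out;
    }
}
```

With one vertex inside, the result is a single triangle; with two inside, it is a quad whose two triangles share the two intersection points, which is exactly the bookkeeping that went wrong with the reused first-vertex reference.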
I want to place some objects (ModelInstance) on the floor (also a ModelInstance) of my game world. To get the position for these objects, I let a Ray intersect the floor. The point of intersection should then be the required position.
My plan is to set the origin of the ray below the floor, so that the direction of the ray goes straight up and hits the floor from below. Both ModelInstances are .g3db Models made in Blender.
Vector3 dir = new Vector3(0, 10, 0); //Vector points upwards
Ray ray = new Ray(new Vector3(), dir.cpy());
Mesh mesh = landscape.model.meshes.first(); //The floor ModelInstance, has only a single mesh
int fac = mesh.getVertexSize();
float[] verts = new float[mesh.getNumVertices() * fac];
short[] inds = new short[mesh.getNumIndices()];
mesh.getVertices(verts);
mesh.getIndices(inds);
for (int j = 0; j < 10; j++) { //add 10 objects to the floor
Vector3 out = new Vector3(- 15, -50f, - j * 5);
ray.origin.set(out.cpy()); //set the origin of the ray below the floor
if (Intersector.intersectRayTriangles(ray, verts, inds, fac, out)) {
System.out.println(j + " out = " + out); //out should be the position for my objects
}
}
The output of the intersectRayTriangles Method is exactly the initial position below the floor. But this point is not anywhere close to the floor. How do I get the proper point of intersection?
I finally found a (semi-optimal) solution which works.
landscape is a ModelInstance, created with Blender.
ArrayList<Vector3> vertices = new ArrayList<>();
landscape.calculateTransforms();
Renderable rend = new Renderable();
Mesh mesh = landscape.getRenderable(rend).meshPart.mesh;
int vertexSize = mesh.getVertexSize() / 4; //getVertexSize() returns bytes, so divide by 4 for the number of floats per vertex
float[] verts = new float[mesh.getNumVertices() * vertexSize];
short[] inds = new short[mesh.getNumIndices()];
mesh.getVertices(verts);
mesh.getIndices(inds);
for (int i = 0; i < inds.length; i++) {
int i1 = inds[i] * vertexSize;
Vector3 v = new Vector3(verts[i1], verts[i1 + 1], verts[i1 + 2]);
v.set(v.prj(rend.worldTransform));
vertices.add(v);
}
Vector3 dir = new Vector3(0, 10, 0);
Vector3 pos = new Vector3(random.nextFloat(),random.nextFloat(),random.nextFloat());
Ray ray = new Ray(pos, dir.cpy());
for (int i = 0; i <= vertices.size() - 3; i += 3){ // <= so the last triangle is tested too
if (Intersector.intersectRayTriangle(ray, vertices.get(i), vertices.get(i + 1), vertices.get(i + 2), pos)) {
//pos now contains the correct coordinates
break;
}
}
Note that the y-axis faces upwards.
I am trying to find all intersection points (their x and y values) based on 4 corner points that I always have and the number of cells (in my case 9, so a 9x9 grid, i.e. a sudoku puzzle).
My 4 corners are marked with a green cross and tagged P1 to P4.
I tried to calculate it, and only managed to do it precisely for the first row.
double xDis = p2.x - p1.x;
double yDis = p2.y - p1.y;
double xW = xDis / 9;
double yH = yDis / 9;
for (int i = 0; i < 10; i++) {
Point point = new Point(p1.x + (i * xW), p1.y + (i * yH));
}
This code works exactly as I expected, but only for the first row.
What am I missing here? Is there some kind of algorithm that already does this? Any hints are welcome.
Note that I am using Android with the OpenCV library.
As written above in the comments, I ended up warping the image and then cutting it. It looks something like this:
if (points != null) {
Point p1 = points[0];
Point p2 = points[1];
Point p3 = points[2];
Point p4 = points[3];
MatOfPoint2f src = new MatOfPoint2f(
p1,
p2,
p3,
p4);
drawMarker(frame, p1, new Scalar(255,0,0), 0, 20, 1);
drawMarker(frame, p2, new Scalar(255,0,0), 0, 20, 1);
drawMarker(frame, p3, new Scalar(255,0,0), 0, 20, 1);
drawMarker(frame, p4, new Scalar(255,0,0), 0, 20, 1);
double x = p2.x - p1.x;
double y = p3.y - p2.y;
MatOfPoint2f dst = new MatOfPoint2f(
new Point(0, 0),
new Point(x,0),
new Point(0,y),
new Point(x,y)
);
Mat warpMat = Imgproc.getPerspectiveTransform(src, dst);
//This is your new image as a Mat
Mat destImage = new Mat();
Imgproc.warpPerspective(bw2, destImage, warpMat, new Size(x, y));
List<Mat> cells = getCells(destImage, destImage.width() / 9, destImage.height() / 9);
}
private List<Mat> getCells(Mat m, int width, int height) {
Size cellSize = new Size(width, height);
List<Mat> cells = new ArrayList<>();
for (int row = 0; row < 9; row++) {
for (int col = 0; col < 9; col++) {
Rect rect = new Rect(new Point(col * width, row * height), cellSize);
Mat digit = new Mat(m, rect).clone();
cells.add(digit);
}
}
return cells;
}
You only do your calculation once, on the first row.
Put your for loop inside another for loop and run it 10 times, and you should be good (adding in whatever x, y translation happens as you traverse downwards in y).
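The nested loops amount to bilinear interpolation of the four corners. A sketch (my own helper, not OpenCV; it assumes p1 = top-left, p2 = top-right, p3 = bottom-left, p4 = bottom-right, each as an {x, y} array, matching the corner order in the accepted warp code). Note that this linear version ignores perspective distortion, which is why warping the image first is the more robust route:

```java
class GridPoints {

    // Returns a (cells + 1) x (cells + 1) grid of {x, y} intersection
    // points by interpolating down the left and right edges, then across
    // each row. For a 9x9 sudoku, cells = 9 gives all 100 intersections.
    static double[][][] gridPoints(double[] p1, double[] p2,
                                   double[] p3, double[] p4, int cells) {
        double[][][] grid = new double[cells + 1][cells + 1][2];
        for (int row = 0; row <= cells; row++) {
            double v = (double) row / cells;
            double leftX = p1[0] + v * (p3[0] - p1[0]);
            double leftY = p1[1] + v * (p3[1] - p1[1]);
            double rightX = p2[0] + v * (p4[0] - p2[0]);
            double rightY = p2[1] + v * (p4[1] - p2[1]);
            for (int col = 0; col <= cells; col++) {
                double u = (double) col / cells;
                grid[row][col][0] = leftX + u * (rightX - leftX);
                grid[row][col][1] = leftY + u * (rightY - leftY);
            }
        }
        return grid;
    }
}
```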
As for whether there is an automated way to do this: yes. I would suggest Harris corner detection; I suspect the right thresholds could get you only the thicker line corners. You could also try line detection and look for intersections.
Also, this article may be helpful if you find you aren't getting good lines/corners. You can correct the shading from the lighting and get a clean image to analyze.
I try to merge multiple meshes with a transformation matrix into a single mesh.
Each mesh has 4 data sets.
Vertices
Indices
Texture Coordinates
Normals
The way I'm trying to do it is meant to be lazy and not cost much CPU.
It is a 3-step process:
Multiply each vertex and normal with the transformation matrix.
Merge the Vertices, Texture Coordinates and Normals of each mesh into 3 big arrays.
Merge the Indices of each mesh into a single array but use the sum of the previous meshes as an offset. For example: If mesh 1 has 800 indices then 800 has to be added to all of the indices from mesh 2.
This method has two big problems.
Duplicate vertices are not shared
Parts that are invisible due to clipping are not removed
But that is OK as this is supposed to be a lazy method with not much CPU usage. It is already optimal for creating meshes for grass and bushes.
I have attempted an implementation of this method which looks like this:
public static final MeshData mergeLazy(List<MeshData> meshes, List<Matrix4f> transformations) {
int lengthVertices = 0;
int lengthNormals = 0;
int lengthTexCoords = 0;
int lengthIndices = 0;
ArrayList<Integer> indexLengths = new ArrayList<>();
for(MeshData mesh : meshes) {
lengthVertices += mesh.getVertices().length;
lengthNormals += mesh.getNormals().length;
lengthTexCoords += mesh.getTextureCoordinates().length;
int length = mesh.getIndices().length;
lengthIndices += length;
indexLengths.add(length);
}
float[] vertices = new float[lengthVertices];
float[] texCoords = new float[lengthTexCoords];
float[] normals = new float[lengthNormals];
int[] indices = new int[lengthIndices];
int iv = 0;
int ivt = 0;
int ivn = 0;
int i = 0;
int indexLength = 0;
for(int im = 0; im < meshes.size(); im++) {
MeshData mesh = meshes.get(im);
float[] mVertices = mesh.getVertices();
float[] mTexCoords = mesh.getTextureCoordinates();
float[] mNormals = mesh.getNormals();
int[] mIndices = mesh.getIndices();
Matrix4f transformation = transformations.get(im);
for(int index = 0; index < mVertices.length; index += 3) {
Vector3f vertex = MatrixUtil.multiply(transformation, mVertices[index], mVertices[index + 1], mVertices[index + 2]);
vertices[iv++] = vertex.x;
vertices[iv++] = vertex.y;
vertices[iv++] = vertex.z;
Vector3f normal = MatrixUtil.multiply(transformation, mNormals[index], mNormals[index + 1], mNormals[index + 2]);
normals[ivn++] = normal.x;
normals[ivn++] = normal.y;
normals[ivn++] = normal.z;
}
for(int index = 0; index < mTexCoords.length; index++) {
texCoords[ivt++] = mTexCoords[index];
}
for(int index = 0; index < mIndices.length; index++) {
indices[i++] = indexLength + mIndices[index];
}
indexLength += indexLengths.get(im);
}
MeshData data = new MeshData();
data.setIndices(indices);
data.setNormals(normals);
data.setTextureCoordinates(texCoords);
data.setVertices(vertices);
return data;
}
In the end I do get a single mesh, and the multiplication by the transformation works for rotation and scaling, but here come the problems.
The multiplication with the transformation does NOT work for the translation.
My method for multiplying a matrix with a vector looks like this:
public static final Vector3f multiply(Matrix4f matrix, float x, float y, float z) {
Vector3f result = new Vector3f();
result.x = x * matrix.m00 + y * matrix.m01 + z * matrix.m02;
result.y = x * matrix.m10 + y * matrix.m11 + z * matrix.m12;
result.z = x * matrix.m20 + y * matrix.m21 + z * matrix.m22;
return result;
}
And the second problem is that the textures of the second mesh are somewhat off.
Here is a picture:
As you can see the second mesh only has about 1/4 of the actual texture.
The code I used to generate this mesh looks like this:
Material grassMaterial = new Material();
grassMaterial.setMinBrightness(0.1F);
grassMaterial.setColorMap(new Texture(new XImgTextureReader().read(new FileInputStream("res/textures/grass2.ximg"))));
grassMaterial.setAffectedByLight(true);
grassMaterial.setTransparent(true);
grassMaterial.setUpwardsNormals(true);
grassMaterial.setFog(fog);
MeshData quad = Quad.generateMeshData(
new Vector3f(0.0F, 1F, 0.0F),
new Vector3f(0.0F, 0.0F, 0.0F),
new Vector3f(1F, 0.0F, 0.0F),
new Vector3f(1F, 1F, 0.0F)
);
StaticMesh grassMesh = new StaticMesh(MeshUtil.mergeLazy(Arrays.asList(quad, quad), Arrays.asList(
MatrixUtil.createTransformationMatrx(
new Vector3f(0.0F, 0.0F, 0.0F),
new Vector3f(0.0F, 0.0F, 0.0F),
new Vector3f(1.0F, 1.0F, 1.0F)
),
MatrixUtil.createTransformationMatrx(
new Vector3f(0F, 0.0F, -0F),
new Vector3f(0.0F, 90.0F, 0.0F),
new Vector3f(1.0F, 1.0F, 1.0F)
)
)));
grassMesh.setCullMode(StaticMesh.CULLING_DISABLED);
Entity grass = new Entity();
grass.setShaderPipeline(shaderPipeline);
grass.setMaterial(grassMaterial);
grass.setMesh(grassMesh);
grass.setTranslation(0, 0, 1);
My question now is: What did I do wrong? Why is the texture so weird and why does the multiplication with the transformation not work for the translation?
If you need more of the code, I have a GitHub Repo with the Eclipse Project here: https://github.com/RalleYTN/Heroica-Fabulis
Thanks to @Rabbid76 I came closer to my answer, and now I have finally found the problem.
The first problem, the translation not working, was fixed by multiplying the transformation vertically instead of horizontally. Thanks again, @Rabbid76.
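For reference, a point transform that keeps the translation can be sketched like this. I use a plain row-major float[4][4] instead of the question's Matrix4f, because whether m01 means row 0 / column 1 or its transpose depends on the math library, and that layout ambiguity is exactly what "vertically instead of horizontally" resolves. The fourth column is the translation the original method dropped; normals should still be transformed with that column omitted (w = 0):

```java
class MatrixMath {

    // Transforms a point (implicit w = 1) by a row-major 4x4 matrix.
    // The + m[r][3] terms are the translation column that the original
    // multiply() left out.
    static float[] multiplyPoint(float[][] m, float x, float y, float z) {
        return new float[] {
                x * m[0][0] + y * m[0][1] + z * m[0][2] + m[0][3],
                x * m[1][0] + y * m[1][1] + z * m[1][2] + m[1][3],
                x * m[2][0] + y * m[2][1] + z * m[2][2] + m[2][3]
        };
    }
}
```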
And the reason the textures were so weird is that I merged the indices incorrectly. The offset should not be the running total of the previous meshes' indices but the running total of their vertices.
Here is now the working method:
public static final MeshData mergeLazy(List<MeshData> meshes, List<Matrix4f> transformations) {
ArrayList<Float> vertices = new ArrayList<>();
ArrayList<Float> texCoords = new ArrayList<>();
ArrayList<Float> normals = new ArrayList<>();
ArrayList<Integer> indices = new ArrayList<>();
int offset = 0;
int m = 0;
for(MeshData mesh : meshes) {
Matrix4f transformation = transformations.get(m);
float[] mVertices = mesh.getVertices();
float[] mNormals = mesh.getNormals();
for(int index = 0; index < mesh.getVertices().length; index += 3) {
Vector3f vertex = MatrixUtil.multiply(transformation, mVertices[index], mVertices[index + 1], mVertices[index + 2]);
vertices.add(vertex.x);
vertices.add(vertex.y);
vertices.add(vertex.z);
Vector3f normal = MatrixUtil.multiply(transformation, mNormals[index], mNormals[index + 1], mNormals[index + 2]);
normals.add(normal.x);
normals.add(normal.y);
normals.add(normal.z);
}
ListUtil.addFloatArray(texCoords, mesh.getTextureCoordinates());
int[] mIndices = mesh.getIndices();
for(int index : mIndices) {
indices.add(index + offset);
}
offset += mVertices.length / 3;
m++;
}
MeshData mesh = new MeshData();
mesh.setIndices(ListUtil.toPrimitiveIntArray(indices));
mesh.setNormals(ListUtil.toPrimitiveFloatArray(normals));
mesh.setTextureCoordinates(ListUtil.toPrimitiveFloatArray(texCoords));
mesh.setVertices(ListUtil.toPrimitiveFloatArray(vertices));
return mesh;
}
I know how to apply a sprite to a Box2D body, but is there a way to apply a texture to it? Basically what I am trying to do is have one texture, let's say 32x32, and repeat it all over the body, like the ground in this image:
Is this possible in LibGDX?
EDIT:
My latest try:
Fixture fixture = body.createFixture(fixtureDef);
Vector2 mTmp = new Vector2();
PolygonShape shape = (PolygonShape) fixture.getShape();
int vertexCount = shape.getVertexCount();
float[] vertices = new float[vertexCount * 2];
for (int k = 0; k < vertexCount; k++) {
shape.getVertex(k, mTmp);
mTmp.rotate(body.getAngle()* MathUtils.radiansToDegrees);
mTmp.add(body.getPosition());
vertices[k * 2] = mTmp.x * PIXELS_PER_METER;
vertices[k * 2 + 1] = mTmp.y * PIXELS_PER_METER;
}
short[] triangles = new EarClippingTriangulator().computeTriangles(vertices).toArray();
Texture texture = new Texture(Gdx.files.internal("data/block.png"));
texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
TextureRegion textureRegion = new TextureRegion(texture, 0, 0, texture.getWidth(), texture.getHeight());
PolygonRegion region = new PolygonRegion(textureRegion, vertices, triangles);
poly = new PolygonSprite(region);
and in rendering:
polyBatch.begin();
poly.draw(polyBatch);
polyBatch.end();
but it doesn't draw anything.
After importing a different shape of level, I get this result:
Only one polygon (shown inside the red circle) gets the texture. The whole level is imported from a JSON file.
Yes, this is very much possible in libGDX.
You just need to create a PolygonRegion for that:
PolygonRegion region = new PolygonRegion(textureRegion, vertices, triangles);
Here textureRegion is the region that you want to repeat, and vertices and triangles define the shape of the region.
The polygon region is the texture repeated across the shape formed from those vertices and triangles. You can render this region using a polygon batch just as you would a sprite with a SpriteBatch.
UPDATE
PolygonShape shape = (PolygonShape) fixture.getShape();
int vertexCount = shape.getVertexCount();
float[] vertices = new float[vertexCount * 2];
for (int k = 0; k < vertexCount; k++) {
shape.getVertex(k, mTmp);
mTmp.rotate(body.getAngle()* MathUtils.radiansToDegrees);
mTmp.add(bodyPos);
vertices[k * 2] = mTmp.x * PIXELS_PER_METER;
vertices[k * 2 + 1] = mTmp.y * PIXELS_PER_METER;
}
short triangles[] = new EarClippingTriangulator()
.computeTriangles(vertices)
.toArray();
PolygonRegion region = new PolygonRegion(
textureRegion, vertices, triangles);