I want to place some objects (ModelInstance) on the floor (also a ModelInstance) of my game world. To get the position for these objects, I let a Ray intersect the floor. The point of intersection should then be the required position.
My plan is to set the origin of the ray below the floor, with its direction pointing straight up, so that it hits the floor from below. Both ModelInstances are .g3db models made in Blender.
Vector3 dir = new Vector3(0, 10, 0); //Vector points upwards
Ray ray = new Ray(new Vector3(), dir.cpy());
Mesh mesh = landscape.model.meshes.first(); //The floor ModelInstance, has only a single mesh
int fac = mesh.getVertexSize();
float[] verts = new float[mesh.getNumVertices() * fac];
short[] inds = new short[mesh.getNumIndices()];
mesh.getVertices(verts);
mesh.getIndices(inds);
for (int j = 0; j < 10; j++) { // add 10 objects to the floor
    Vector3 out = new Vector3(-15, -50f, -j * 5);
    ray.origin.set(out.cpy()); // set the origin of the ray below the floor
    if (Intersector.intersectRayTriangles(ray, verts, inds, fac, out)) {
        System.out.println(j + " out = " + out); // out should be the position for my objects
    }
}
The output of the intersectRayTriangles method is exactly the initial position below the floor, but that point is nowhere near the floor. How do I get the proper point of intersection?
I finally found a (semi-optimal) solution that works.
landscape is a ModelInstance, created with Blender.
ArrayList<Vector3> vertices = new ArrayList<>();
landscape.calculateTransforms();
Renderable rend = new Renderable();
Mesh mesh = landscape.getRenderable(rend).meshPart.mesh;
int vertexSize = mesh.getVertexSize() / 4; // getVertexSize() returns bytes, 4 bytes per float
float[] verts = new float[mesh.getNumVertices() * vertexSize];
short[] inds = new short[mesh.getNumIndices()];
mesh.getVertices(verts);
mesh.getIndices(inds);
for (int i = 0; i < inds.length; i++) {
    int i1 = inds[i] * vertexSize;
    Vector3 v = new Vector3(verts[i1], verts[i1 + 1], verts[i1 + 2]);
    v.prj(rend.worldTransform); // transform the vertex into world space
    vertices.add(v);
}
Vector3 dir = new Vector3(0, 10, 0);
Vector3 pos = new Vector3(random.nextFloat(), random.nextFloat(), random.nextFloat());
Ray ray = new Ray(pos, dir.cpy());
for (int i = 0; i + 2 < vertices.size(); i += 3) {
    if (Intersector.intersectRayTriangle(ray, vertices.get(i), vertices.get(i + 1), vertices.get(i + 2), pos)) {
        // pos now contains the correct coordinates
        break;
    }
}
Note that the y-axis faces upwards.
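For illustration, once pos holds the intersection point it can be used directly as an object's world position. A minimal sketch (objectModel is a hypothetical Model for one of the placed objects):
// Place a (hypothetical) object at the intersection point found above.
ModelInstance objectInstance = new ModelInstance(objectModel);
objectInstance.transform.setTranslation(pos); // pos was filled by intersectRayTriangle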
I need to combine several cubes in ARCore into one ModelRenderable. I have code that takes the vertices and submeshes from each cube and creates one ModelRenderable; however, only the last cube added is rendered.
The one strange thing I can see is that some vertices share the same positions, so I'm not sure whether that is correct.
Here's my code that takes each cube and adds its submeshes and vertices.
List<RenderableDefinition.Submesh> submeshes = new ArrayList<>();
List<Vertex> vertices = new ArrayList<>();
for (SubCube cube : cubes) {
    submeshes.add(cube.getSubmesh());
    vertices.addAll(cube.getVertices());
}
RenderableDefinition renderableDefinition = RenderableDefinition.builder().setVertices(vertices).setSubmeshes(submeshes).build();
CompletableFuture future = ModelRenderable.builder().setSource(renderableDefinition).build();
ModelRenderable result = (ModelRenderable) future.get();
Here's my code for creating a cube. It's basically identical to ShapeFactory.makeCube.
public SubCube makeCube(Vector3 size, Vector3 center, Material material) {
AndroidPreconditions.checkMinAndroidApiLevel();
Vector3 extents = size.scaled(0.5F);
Vector3 p0 = Vector3.add(center, new Vector3(-extents.x, -extents.y, extents.z));
Vector3 p1 = Vector3.add(center, new Vector3(extents.x, -extents.y, extents.z));
Vector3 p2 = Vector3.add(center, new Vector3(extents.x, -extents.y, -extents.z));
Vector3 p3 = Vector3.add(center, new Vector3(-extents.x, -extents.y, -extents.z));
Vector3 p4 = Vector3.add(center, new Vector3(-extents.x, extents.y, extents.z));
Vector3 p5 = Vector3.add(center, new Vector3(extents.x, extents.y, extents.z));
Vector3 p6 = Vector3.add(center, new Vector3(extents.x, extents.y, -extents.z));
Vector3 p7 = Vector3.add(center, new Vector3(-extents.x, extents.y, -extents.z));
Vector3 up = Vector3.up();
Vector3 down = Vector3.down();
Vector3 front = Vector3.forward();
Vector3 back = Vector3.back();
Vector3 left = Vector3.left();
Vector3 right = Vector3.right();
Vertex.UvCoordinate uv00 = new Vertex.UvCoordinate(0.0F, 0.0F);
Vertex.UvCoordinate uv10 = new Vertex.UvCoordinate(1.0F, 0.0F);
Vertex.UvCoordinate uv01 = new Vertex.UvCoordinate(0.0F, 1.0F);
Vertex.UvCoordinate uv11 = new Vertex.UvCoordinate(1.0F, 1.0F);
List<Vertex> vertices = Arrays.asList(
Vertex.builder().setPosition(p0).setNormal(down).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p1).setNormal(down).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p2).setNormal(down).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p3).setNormal(down).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p7).setNormal(left).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p4).setNormal(left).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p0).setNormal(left).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p3).setNormal(left).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p4).setNormal(front).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p5).setNormal(front).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p1).setNormal(front).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p0).setNormal(front).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p6).setNormal(back).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p7).setNormal(back).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p3).setNormal(back).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p2).setNormal(back).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p5).setNormal(right).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p6).setNormal(right).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p2).setNormal(right).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p1).setNormal(right).setUvCoordinate(uv00).build(),
Vertex.builder().setPosition(p7).setNormal(up).setUvCoordinate(uv01).build(),
Vertex.builder().setPosition(p6).setNormal(up).setUvCoordinate(uv11).build(),
Vertex.builder().setPosition(p5).setNormal(up).setUvCoordinate(uv10).build(),
Vertex.builder().setPosition(p4).setNormal(up).setUvCoordinate(uv00).build());
ArrayList<Integer> triangleIndices = new ArrayList<>(36);
for (int i = 0; i < 6; ++i) {
    triangleIndices.add(3 + 4 * i);
    triangleIndices.add(1 + 4 * i);
    triangleIndices.add(0 + 4 * i);
    triangleIndices.add(3 + 4 * i);
    triangleIndices.add(2 + 4 * i);
    triangleIndices.add(1 + 4 * i);
}
RenderableDefinition.Submesh submesh = RenderableDefinition.Submesh.builder().setTriangleIndices(triangleIndices).setMaterial(material).build();
return new SubCube(submesh, vertices);
}
I'm not getting error messages or anything. I know the positions passed into makeCube are different, so that's not the issue. The expected behavior is that I can render more than one cube in one ModelRenderable.
Since you merge all the vertices into a single list (named vertices), the triangle indices of each sub-cube must be offset accordingly (see the sketch after the examples below).
For the first cube, index values go from 0 to 23 (as you have 24 vertices per cube)
For the second cube, index values go from 24 to 47
For the ith cube, index values go from (i-1)*24 to i*24-1
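One lightweight way to apply that offset is to pass it into makeCube and add it while the triangle indices are generated. A sketch under that assumption (the extra vertexOffset parameter is not part of the original code):
// Sketch: makeCube receives the number of vertices already merged and shifts its indices by it.
public SubCube makeCube(Vector3 size, Vector3 center, Material material, int vertexOffset) {
    // ... build the 24 vertices exactly as before ...
    ArrayList<Integer> triangleIndices = new ArrayList<>(36);
    for (int i = 0; i < 6; ++i) {
        triangleIndices.add(vertexOffset + 3 + 4 * i);
        triangleIndices.add(vertexOffset + 1 + 4 * i);
        triangleIndices.add(vertexOffset + 0 + 4 * i);
        triangleIndices.add(vertexOffset + 3 + 4 * i);
        triangleIndices.add(vertexOffset + 2 + 4 * i);
        triangleIndices.add(vertexOffset + 1 + 4 * i);
    }
    RenderableDefinition.Submesh submesh = RenderableDefinition.Submesh.builder()
            .setTriangleIndices(triangleIndices).setMaterial(material).build();
    return new SubCube(submesh, vertices);
}
// Caller: the offset grows by 24 per cube, e.g. makeCube(size, center, material, cubeIndex * 24).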
Alternatively, you can achieve the same result by creating a single cube renderable and a node hierarchy, with each sub-node using that renderable at a different position.
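For that alternative, a sketch with the Sceneform node API could look like this (cubeCenters and parentNode are hypothetical names; the single renderable is shared by all nodes):
// Sketch: one shared cube renderable, one Node per position.
ModelRenderable cubeRenderable = ShapeFactory.makeCube(size, Vector3.zero(), material);
for (Vector3 center : cubeCenters) {
    Node node = new Node();
    node.setParent(parentNode);
    node.setLocalPosition(center);
    node.setRenderable(cubeRenderable);
}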
I am trying to find all grid intersection points (their x and y values) based on the 4 corner points that I always have and the number of cells (in my case 9, so a 9x9 matrix, a sudoku puzzle).
My 4 corners are marked with a green cross and tagged P1 to P4.
I tried to calculate it, and only managed to do it precisely for the first row.
double xDis = p2.x - p1.x;
double yDis = p2.y - p1.y;
double xW = xDis / 9;
double yH = yDis / 9;
for (int i = 0; i < 10; i++) {
    Point point = new Point(p1.x + (i * xW), p1.y + (i * yH));
}
This code works exactly as I expect, but only for the first row.
What am I missing here? Is there some kind of algorithm that already does this? Any hints are welcome.
Note that I am using android with OpenCV library.
As written above in the comments, I ended up warping the image and then cutting it up. It looks something like this:
if (points != null) {
Point p1 = points[0];
Point p2 = points[1];
Point p3 = points[2];
Point p4 = points[3];
MatOfPoint2f src = new MatOfPoint2f(p1, p2, p3, p4);
drawMarker(frame, p1, new Scalar(255,0,0), 0, 20, 1);
drawMarker(frame, p2, new Scalar(255,0,0), 0, 20, 1);
drawMarker(frame, p3, new Scalar(255,0,0), 0, 20, 1);
drawMarker(frame, p4, new Scalar(255,0,0), 0, 20, 1);
double x = p2.x - p1.x;
double y = p3.y - p2.y;
MatOfPoint2f dst = new MatOfPoint2f(
new Point(0, 0),
new Point(x,0),
new Point(0,y),
new Point(x,y)
);
Mat warpMat = Imgproc.getPerspectiveTransform(src, dst);
// This is your new image as a Mat
Mat destImage = new Mat();
Imgproc.warpPerspective(bw2, destImage, warpMat, new Size(x, y));
List<Mat> cells = getCells(destImage, destImage.width() / 9, destImage.height() / 9);
}
private List<Mat> getCells(Mat m, int width, int height) {
    Size cellSize = new Size(width, height);
    List<Mat> cells = new ArrayList<>();
    for (int row = 0; row < 9; row++) {
        for (int col = 0; col < 9; col++) {
            Rect rect = new Rect(new Point(col * width, row * height), cellSize);
            Mat digit = new Mat(m, rect).clone();
            cells.add(digit);
        }
    }
    return cells;
}
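Note that the returned list is in row-major order, so the digit at grid position (row, col) is cells.get(row * 9 + col).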
You only do your calculation once, on the first row.
Put your for loop inside another for loop that runs 10 times and you should be good (adding in whatever x, y translation happens as you traverse downwards in y).
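One way to realize this is to interpolate between the left and right edge of the grid for each row; a sketch, assuming p1 = top-left, p2 = top-right, p3 = bottom-left and p4 = bottom-right (adjust to your actual corner order):
// Sketch: nested loops over rows and columns, interpolating the four corners.
Point[][] grid = new Point[10][10];
for (int row = 0; row < 10; row++) {
    double t = row / 9.0;
    Point left = new Point(p1.x + t * (p3.x - p1.x), p1.y + t * (p3.y - p1.y));
    Point right = new Point(p2.x + t * (p4.x - p2.x), p2.y + t * (p4.y - p2.y));
    for (int col = 0; col < 10; col++) {
        double s = col / 9.0;
        grid[row][col] = new Point(left.x + s * (right.x - left.x), left.y + s * (right.y - left.y));
    }
}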
As for whether there is an automated way to do this: yes. I would suggest Harris corner detection; I suspect that with the right thresholds you could get only the thicker line corners. You could also try line detection and look for intersections.
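For example, a Harris-based search with OpenCV's goodFeaturesToTrack might look like the sketch below; the parameter values are illustrative and would need tuning for this image:
// Sketch: Harris corners via goodFeaturesToTrack (the boolean argument enables the Harris detector).
Mat gray = new Mat();
Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
MatOfPoint corners = new MatOfPoint();
Imgproc.goodFeaturesToTrack(gray, corners, 100, 0.2, 10, new Mat(), 3, true, 0.04);
for (Point p : corners.toArray()) {
    System.out.println("corner at " + p);
}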
Also, this article may be helpful if you find you aren't getting good lines/corners; you can correct the shading from the lighting and get a clean image to analyze.
I am trying to merge multiple meshes, each with its own transformation matrix, into a single mesh.
Each mesh has 4 data sets.
Vertices
Indices
Texture Coordinates
Normals
The way I'm trying to do it is meant to be lazy and not cost much CPU. It is a 3-step process:
Multiply each vertex and normal with the transformation matrix.
Merge the Vertices, Texture Coordinates and Normals of each mesh into 3 big arrays.
Merge the Indices of each mesh into a single array but use the sum of the previous meshes as an offset. For example: If mesh 1 has 800 indices then 800 has to be added to all of the indices from mesh 2.
This method has two big problems.
Duplicate vertices are not shared
Parts that are invisible due to clipping are not removed
But that is OK, as this is supposed to be a lazy method with little CPU usage. It is already well suited for creating meshes for grass and bushes.
I have attempted an implementation of this method which looks like this:
public static final MeshData mergeLazy(List<MeshData> meshes, List<Matrix4f> transformations) {
int lengthVertices = 0;
int lengthNormals = 0;
int lengthTexCoords = 0;
int lengthIndices = 0;
ArrayList<Integer> indexLengths = new ArrayList<>();
for (MeshData mesh : meshes) {
    lengthVertices += mesh.getVertices().length;
    lengthNormals += mesh.getNormals().length;
    lengthTexCoords += mesh.getTextureCoordinates().length;
    int length = mesh.getIndices().length;
    lengthIndices += length;
    indexLengths.add(length);
}
float[] vertices = new float[lengthVertices];
float[] texCoords = new float[lengthTexCoords];
float[] normals = new float[lengthNormals];
int[] indices = new int[lengthIndices];
int iv = 0;
int ivt = 0;
int ivn = 0;
int i = 0;
int indexLength = 0;
for (int im = 0; im < meshes.size(); im++) {
    MeshData mesh = meshes.get(im);
    float[] mVertices = mesh.getVertices();
    float[] mTexCoords = mesh.getTextureCoordinates();
    float[] mNormals = mesh.getNormals();
    int[] mIndices = mesh.getIndices();
    Matrix4f transformation = transformations.get(im);
    for (int index = 0; index < mVertices.length; index += 3) {
        Vector3f vertex = MatrixUtil.multiply(transformation, mVertices[index], mVertices[index + 1], mVertices[index + 2]);
        vertices[iv++] = vertex.x;
        vertices[iv++] = vertex.y;
        vertices[iv++] = vertex.z;
        Vector3f normal = MatrixUtil.multiply(transformation, mNormals[index], mNormals[index + 1], mNormals[index + 2]);
        normals[ivn++] = normal.x;
        normals[ivn++] = normal.y;
        normals[ivn++] = normal.z;
    }
    for (int index = 0; index < mTexCoords.length; index++) {
        texCoords[ivt++] = mTexCoords[index];
    }
    for (int index = 0; index < mIndices.length; index++) {
        indices[i++] = indexLength + mIndices[index];
    }
    indexLength += indexLengths.get(im);
}
MeshData data = new MeshData();
data.setIndices(indices);
data.setNormals(normals);
data.setTextureCoordinates(texCoords);
data.setVertices(vertices);
return data;
}
In the end I actually get a single mesh, and multiplying by the transformation also works... for rotation and scaling. But here come the problems.
Multiplying by the transformation does NOT work for the translation.
My method for multiplying a matrix with a vector looks like this:
public static final Vector3f multiply(Matrix4f matrix, float x, float y, float z) {
Vector3f result = new Vector3f();
result.x = x * matrix.m00 + y * matrix.m01 + z * matrix.m02;
result.y = x * matrix.m10 + y * matrix.m11 + z * matrix.m12;
result.z = x * matrix.m20 + y * matrix.m21 + z * matrix.m22;
return result;
}
And the second problem is that the textures of the second mesh are somewhat off.
Here is a picture:
As you can see, the second mesh only shows about 1/4 of the actual texture.
The code I used to generate this mesh looks like this:
Material grassMaterial = new Material();
grassMaterial.setMinBrightness(0.1F);
grassMaterial.setColorMap(new Texture(new XImgTextureReader().read(new FileInputStream("res/textures/grass2.ximg"))));
grassMaterial.setAffectedByLight(true);
grassMaterial.setTransparent(true);
grassMaterial.setUpwardsNormals(true);
grassMaterial.setFog(fog);
MeshData quad = Quad.generateMeshData(
new Vector3f(0.0F, 1F, 0.0F),
new Vector3f(0.0F, 0.0F, 0.0F),
new Vector3f(1F, 0.0F, 0.0F),
new Vector3f(1F, 1F, 0.0F)
);
StaticMesh grassMesh = new StaticMesh(MeshUtil.mergeLazy(Arrays.asList(quad, quad), Arrays.asList(
MatrixUtil.createTransformationMatrx(
new Vector3f(0.0F, 0.0F, 0.0F),
new Vector3f(0.0F, 0.0F, 0.0F),
new Vector3f(1.0F, 1.0F, 1.0F)
),
MatrixUtil.createTransformationMatrx(
new Vector3f(0F, 0.0F, -0F),
new Vector3f(0.0F, 90.0F, 0.0F),
new Vector3f(1.0F, 1.0F, 1.0F)
)
)));
grassMesh.setCullMode(StaticMesh.CULLING_DISABLED);
Entity grass = new Entity();
grass.setShaderPipeline(shaderPipeline);
grass.setMaterial(grassMaterial);
grass.setMesh(grassMesh);
grass.setTranslation(0, 0, 1);
My question now is: What did I do wrong? Why is the texture so weird and why does the multiplication with the transformation not work for the translation?
If you need more of the code, I have a GitHub Repo with the Eclipse Project here: https://github.com/RalleYTN/Heroica-Fabulis
Thanks to @Rabbid76 I came closer to my answer and have now finally found the problem.
The first problem, the translation not working, was fixed by multiplying the vector with the matrix column-wise (vertically) instead of row-wise (horizontally). Thanks again @Rabbid76.
And the reason the textures were so weird is that I merged the indices incorrectly. The offset should not be the sum of the previous meshes' index counts but the sum of their vertex counts.
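For reference, a column-wise multiply that also applies the translation part might look like the sketch below; this assumes LWJGL-style Matrix4f fields where mXY means column X, row Y, and is not necessarily the exact code now in the repository.
// Sketch: multiply a position by the matrix column-wise, including the translation column (m30..m32).
public static final Vector3f multiplyPoint(Matrix4f m, float x, float y, float z) {
    Vector3f result = new Vector3f();
    result.x = x * m.m00 + y * m.m10 + z * m.m20 + m.m30;
    result.y = x * m.m01 + y * m.m11 + z * m.m21 + m.m31;
    result.z = x * m.m02 + y * m.m12 + z * m.m22 + m.m32;
    return result;
}
// For normals (directions rather than positions) the translation column should be omitted,
// i.e. drop the m.m30 / m.m31 / m.m32 terms.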
Here is now the working method:
public static final MeshData mergeLazy(List<MeshData> meshes, List<Matrix4f> transformations) {
ArrayList<Float> vertices = new ArrayList<>();
ArrayList<Float> texCoords = new ArrayList<>();
ArrayList<Float> normals = new ArrayList<>();
ArrayList<Integer> indices = new ArrayList<>();
int offset = 0;
int m = 0;
for (MeshData mesh : meshes) {
    Matrix4f transformation = transformations.get(m);
    float[] mVertices = mesh.getVertices();
    float[] mNormals = mesh.getNormals();
    for (int index = 0; index < mVertices.length; index += 3) {
        Vector3f vertex = MatrixUtil.multiply(transformation, mVertices[index], mVertices[index + 1], mVertices[index + 2]);
        vertices.add(vertex.x);
        vertices.add(vertex.y);
        vertices.add(vertex.z);
        Vector3f normal = MatrixUtil.multiply(transformation, mNormals[index], mNormals[index + 1], mNormals[index + 2]);
        normals.add(normal.x);
        normals.add(normal.y);
        normals.add(normal.z);
    }
    ListUtil.addFloatArray(texCoords, mesh.getTextureCoordinates());
    int[] mIndices = mesh.getIndices();
    for (int index : mIndices) {
        indices.add(index + offset);
    }
    offset += mVertices.length / 3;
    m++;
}
MeshData mesh = new MeshData();
mesh.setIndices(ListUtil.toPrimitiveIntArray(indices));
mesh.setNormals(ListUtil.toPrimitiveFloatArray(normals));
mesh.setTextureCoordinates(ListUtil.toPrimitiveFloatArray(texCoords));
mesh.setVertices(ListUtil.toPrimitiveFloatArray(vertices));
return mesh;
}
I'm trying to detect corners, but the coordinates I get are always off-center and saddle points are detected multiple times.
I tried cornerHarris, cornerMinEigenVal, preCornerDetect, goodFeaturesToTrack, and cornerEigenValsAndVecs, but they all seem to lead to the same result. I haven't tried findChessboardCorners because my corners are not laid out in a nice grid of n×m, are not all saddle-type, and many more reasons.
What I have now:
Given the (pre-processed) camera image below with some positive, negative, and saddle corners:
After cornerHarris(img, energy, 20, 9, 0.1) (I increased blockSize to 20 for illustrative purposes but small values don't work either) I get this image:
It seems to detect 10 corners but the way they are positioned is odd. I superimposed this image on the original to show my problem:
The point of highest matching energy is offset towards the inside of the corner and there is a plume pointing away from the corner. The saddle corners seem to generate four separate plumes all superimposed.
Indeed, when I perform a corner-search using this energy image, I get something like:
What am I doing wrong and how can I detect corners accurately like in this mock image?
Edit: here is an MCVE:
public class CornerTest {
static {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
}
private static Mat energy = new Mat();
private static Mat idx = new Mat();
public static void main(String... args) {
Mat byteImage = Highgui.imread("KXw7O.png");
if (byteImage.channels() > 1)
Imgproc.cvtColor(byteImage, byteImage, Imgproc.COLOR_BGR2GRAY);
// Preprocess
Mat floatImage = new Mat();
byteImage.convertTo(floatImage, CvType.CV_32F);
// Corner detect
Mat imageToShow = findCorners(floatImage);
// Show in GUI
imageToShow.convertTo(byteImage, CvType.CV_8U);
BufferedImage bufImage = new BufferedImage(byteImage.width(), byteImage.height(), BufferedImage.TYPE_BYTE_GRAY);
byte[] imgArray = ((DataBufferByte)bufImage.getRaster().getDataBuffer()).getData();
byteImage.get(0, 0, imgArray);
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
frame.getContentPane().add(new JLabel(new ImageIcon(bufImage)));
frame.pack();
frame.setVisible(true);
}
private static Mat findCorners(Mat image) {
Imgproc.cornerHarris(image, energy, 20, 9, 0.1);
// Corner-search:
int minDistance = 16;
Core.MinMaxLocResult minMaxLoc = Core.minMaxLoc(
energy.submat(20, energy.rows() - 20, 20, energy.rows() - 20));
float thr = (float)minMaxLoc.maxVal / 4;
Mat tmp = energy.reshape(1, 1);
Core.sortIdx(tmp, idx, 16); // 16 = CV_SORT_EVERY_ROW | CV_SORT_DESCENDING
int[] idxArray = new int[idx.cols()];
idx.get(0, 0, idxArray);
float[] energyArray = new float[idx.cols()];
energy.get(0, 0, energyArray);
int n = 0;
for (int p : idxArray) {
    if (energyArray[p] == -1) continue;
    if (energyArray[p] < thr) break;
    n++;
    int x = p % image.cols();
    int y = p / image.cols();
    // Exclude a disk around this corner from potential future candidates
    int u0 = Math.max(x - minDistance, 0) - x;
    int u1 = Math.min(x + minDistance, image.cols() - 1) - x;
    int v0 = Math.max(y - minDistance, 0) - y;
    int v1 = Math.min(y + minDistance, image.rows() - 1) - y;
    for (int v = v0; v <= v1; v++)
        for (int u = u0; u <= u1; u++)
            if (u * u + v * v <= minDistance * minDistance)
                energyArray[p + u + v * image.cols()] = -1;
    // A corner is found!
    Core.circle(image, new Point(x, y), minDistance / 2, new Scalar(255, 255, 255), 1);
    Core.circle(energy, new Point(x, y), minDistance / 2, new Scalar(minMaxLoc.maxVal, minMaxLoc.maxVal, minMaxLoc.maxVal), 1);
}
System.out.println("nCorners: " + n);
// Rescale energy image for display purpose only
Core.multiply(energy, new Scalar(255.0 / minMaxLoc.maxVal), energy);
// return image;
return energy;
}
}
I'm currently trying to develop an ArUco cube detector for a project. The goal is a more stable and accurate pose estimation without using a large ArUco board. For this to work, however, I need to know the orientation of each of the markers. Using the draw3dAxis method, I discovered that the X and Y axes do not consistently appear in the same location. Here is a video demonstrating the issue: https://youtu.be/gS7BWKm2nmg
It seems to be a problem with the Rvec detection. There is a clear shift in the first two values of the Rvec, which stay fairly consistent until the axes swap. When this axis swap happens the values can change in magnitude by anywhere from 2 to 6. The ArUco library does try to deal with rotations, as shown in the Marker.calculateMarkerId() method:
/**
 * Return the id read in the code inside a marker. Each marker is divided into 7x7 regions
 * of which the inner 5x5 contain info, the border should always be black. This function
 * assumes that the code has been extracted previously.
 * @return the id of the marker
 */
protected int calculateMarkerId(){
    // check all the rotations of code
    Code[] rotations = new Code[4];
    rotations[0] = code;
    int[] dists = new int[4];
    dists[0] = hammDist(rotations[0]);
    int[] minDist = {dists[0], 0};
    for(int i = 1; i < 4; i++){
        // rotate
        rotations[i] = Code.rotate(rotations[i-1]);
        dists[i] = hammDist(rotations[i]);
        if(dists[i] < minDist[0]){
            minDist[0] = dists[i];
            minDist[1] = i;
        }
    }
    this.rotations = minDist[1];
    if(minDist[0] != 0){
        return -1; // matching id not found
    } else {
        this.id = mat2id(rotations[minDist[1]]);
    }
    return id;
}
and MarkerDetector.detect() does call that method and uses the getRotations() method:
// identify the markers
for(int i = 0; i < nCandidates; i++){
    if(toRemove.get(i) == 0){
        Marker marker = candidateMarkers.get(i);
        Mat canonicalMarker = new Mat();
        warp(in, canonicalMarker, new Size(50, 50), marker.toList());
        marker.setMat(canonicalMarker);
        marker.extractCode();
        if(marker.checkBorder()){
            int id = marker.calculateMarkerId();
            if(id != -1){
                // rotate the points of the marker so they are always in the same order no matter the camera orientation
                Collections.rotate(marker.toList(), 4 - marker.getRotations());
                newMarkers.add(marker);
            }
        }
    }
}
The full source code for the ArUco library is here: https://github.com/sidberg/aruco-android/blob/master/Aruco/src/es/ava/aruco/MarkerDetector.java
If anyone has any advice or solutions I'd be very grateful. Please contact me if you have any questions.
I did find the problem. It turns out that the Marker class has a rotations variable that can be used to rotate the axes so they align with the orientation of the marker. I wrote the following method in the Utils class:
protected static void alignToId(Mat rotation, int codeRotation) {
    // get the matrix corresponding to the rotation vector
    Mat R = new Mat(3, 3, CvType.CV_64FC1);
    Calib3d.Rodrigues(rotation, R);
    codeRotation += 1;
    // create the matrix for a rotation around the Z axis
    double[] rot = {
            Math.cos(Math.toRadians(90) * codeRotation), -Math.sin(Math.toRadians(90) * codeRotation), 0,
            Math.sin(Math.toRadians(90) * codeRotation), Math.cos(Math.toRadians(90) * codeRotation), 0,
            0, 0, 1
    };
    // multiply both matrices
    Mat res = new Mat(3, 3, CvType.CV_64FC1);
    double[] prod = new double[9];
    double[] a = new double[9];
    R.get(0, 0, a);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            prod[3 * i + j] = 0;
            for (int k = 0; k < 3; k++) {
                prod[3 * i + j] += a[3 * i + k] * rot[3 * k + j];
            }
        }
    // convert the product matrix back to a rotation vector with Rodrigues
    res.put(0, 0, prod);
    Calib3d.Rodrigues(res, rotation);
}
and I called it from the Marker.calculateExtrinsics method:
Utils.alignToId(Rvec, this.getRotations());