Unable to multiply matrices in OpenCV (Java)

I am new to OpenCV in Java. The problem is that whenever I try to multiply two Mat objects of dimensions (m x n) and (n x l), I get the following error:
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in cv::arithm_op, file ........\opencv\modules\core\src\arithm.cpp, line 1287
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: ........\opencv\modules\core\src\arithm.cpp:1287: error: (-209) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function cv::arithm_op
]
Here are my two matrices.
Mat r = new Mat(2, 2, CvType.CV_32F);
r.put(0, 0, 0.707);
r.put(0, 1, -0.707);
r.put(1, 0, 0.707);
r.put(1, 1, 0.707);
Mat mult = new Mat(1, 2, CvType.CV_32F);
double d1 = 1.00;
double d2 = 2.00;
mult.put(0, 0, d1);
mult.put(0, 1, d2);
Mat final_mat = mult.mul(r);

Mat.mul() does a per-element multiplication (the same as Core.multiply()), and both Mats need to have the same dimensions for that.
What you obviously wanted is matrix multiplication.
While this would be a simple mat * vec in C++, in Java you have to use Core.gemm() for it:
Mat r = new Mat(2, 2, CvType.CV_32F);
r.put(0, 0, 0.707);
r.put(0, 1, -0.707);
r.put(1, 0, 0.707);
r.put(1, 1, 0.707);
Mat v = new Mat(1, 2, CvType.CV_32F);
double d1 = 1.00;
double d2 = 2.00;
v.put(0, 0, d1);
v.put(0, 1, d2);
Mat final_mat = new Mat();
Core.gemm(v, r, 1, new Mat(), 0, final_mat);
System.err.println(final_mat.dump());
[2.1210001, 0.70700002]
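For reference, Core.gemm(src1, src2, alpha, src3, beta, dst) computes dst = alpha*src1*src2 + beta*src3, so passing an empty Mat with beta = 0, as above, leaves just the plain matrix product v*r.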


Choco Solver setObjective maximize polynominal equation

I'm currently trying out Choco Solver (4.0.8) and I'm trying to solve this problem:
Maximize 8x1 + 11x2 + 6x3 + 4x4
subject to 5x1 + 7x2 + 4x3 + 3x4 ≤ 14, with x1..x4 binary.
I'm stuck on maximising the objective. I guess I just need a hint as to which subtype of Variable EQUATION should be.
Model model = new Model("my first problem");
BoolVar x1 = model.boolVar("x1");
BoolVar x2 = model.boolVar("x2");
BoolVar x3 = model.boolVar("x3");
BoolVar x4 = model.boolVar("x4");
BoolVar[] bools = {x1, x2, x3, x4};
int[] c = {5, 7, 4, 3};
int[] c2 = {8, 11, 6, 4};
Variable EQUATION = new Variable(); // <-- this is the part I can't figure out
model.scalar(bools, c, "<=", 14).post(); // 5x1 + 7x2 + 4x3 + 3x4 ≤ 14
model.setObjective(Model.MAXIMIZE, EQUATION); // 8x1 + 11x2 + 6x3 + 4x4
model.getSolver().solve();
System.out.println(x1);
System.out.println(x2);
System.out.println(x3);
System.out.println(x4);
It seems I have found a solution:
Variable EQUATION = new ScaleView(x1, 8)
        .add(new ScaleView(x2, 11),
             new ScaleView(x3, 6),
             new ScaleView(x4, 4)).intVar();
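Alternatively (a sketch, assuming the same model as above), you can bind the objective expression to an explicit IntVar with a second scalar constraint and maximise that variable:
// Hedged alternative: tie an IntVar to the objective via a scalar constraint
IntVar obj = model.intVar("obj", 0, 8 + 11 + 6 + 4); // upper bound = sum of coefficients
model.scalar(bools, c2, "=", obj).post(); // obj = 8x1 + 11x2 + 6x3 + 4x4
model.setObjective(Model.MAXIMIZE, obj);
while (model.getSolver().solve()) {
    System.out.println(obj); // prints each improving solution
}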

opengl viewModelMatrix from opencv pose matrix

I have calculated the pose matrix using the cameraPoseFromHomography() function below. How can I build the OpenGL viewModelMatrix from the 3x4 pose matrix obtained with OpenCV? I am working on an Android application in Java.
Mat homography = Calib3d.findHomography(ReferencePoints2, ReferencePoints1, Calib3d.RANSAC, 3); // method flag comes before the reprojection threshold (3 = common default)
Mat pose = cameraPoseFromHomography(homography);
private static Mat cameraPoseFromHomography(Mat h) {
    //Log.d("DEBUG", "cameraPoseFromHomography: homography " + matToString(h));
    Mat pose = Mat.eye(3, 4, CvType.CV_32FC1); // 3x4 matrix, the camera pose [R|t]
    float norm1 = (float) Core.norm(h.col(0));
    float norm2 = (float) Core.norm(h.col(1));
    float tnorm = (norm1 + norm2) / 2.0f; // normalization value
    Mat normalizedTemp = new Mat();
    Core.normalize(h.col(0), normalizedTemp);
    normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
    normalizedTemp.copyTo(pose.col(0)); // normalize the rotation and copy the column into pose
    Core.normalize(h.col(1), normalizedTemp);
    normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
    normalizedTemp.copyTo(pose.col(1)); // normalize the rotation and copy the column into pose
    Mat p3 = pose.col(0).cross(pose.col(1)); // third column is the cross product of the first two
    p3.copyTo(pose.col(2));
    double[] buffer = new double[3];
    h.col(2).get(0, 0, buffer);
    pose.put(0, 3, buffer[0] / tnorm); // vector t of [R|t] is the last column of pose
    pose.put(1, 3, buffer[1] / tnorm);
    pose.put(2, 3, buffer[2] / tnorm);
    return pose;
}
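A minimal sketch of one common approach (my assumption, not from the original post): pack the 3x4 [R|t] into a 4x4 column-major float array and negate the y and z rows, because OpenCV's camera looks down +z with y pointing down while OpenGL looks down -z with y pointing up.
// Sketch: 3x4 OpenCV pose [R|t] -> 4x4 column-major OpenGL modelview matrix
float[] modelView = new float[16];
for (int col = 0; col < 4; col++) {
    modelView[col * 4 + 0] = (float) pose.get(0, col)[0];
    modelView[col * 4 + 1] = (float) -pose.get(1, col)[0]; // flip y axis
    modelView[col * 4 + 2] = (float) -pose.get(2, col)[0]; // flip z axis
    modelView[col * 4 + 3] = (col == 3) ? 1f : 0f; // bottom row (0, 0, 0, 1)
}
// 'modelView' can then be uploaded with GLES20.glUniformMatrix4fv(...)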

Error when adding two Mats in OpenCV, Java

I am writing an OpenCV-based image processing algorithm for thresholding. The algorithm is written here in C++ and I am rewriting it in Java for Android Studio. In one line I have to add two Mat (OpenCV matrix) objects. In C++ it is res = Img + res;, in Java Core.add(imgMat, res, res);. At this line I get an error which I cannot solve:
CvException: /Volumes/./././././././arithm.cpp:639: error: (-209) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function void cv::arithm_op(...)
In the code below you can see that both Mat objects have the same size and the same format (CvType). Again, you can see how the code looks in C++ in this question.
My code (Java):
public Bitmap Thresholding(Bitmap bitmap)
{
    Mat imgMat = new Mat();
    Utils.bitmapToMat(bitmap, imgMat);
    imgMat.convertTo(imgMat, CvType.CV_32FC1, 1.0 / 255.0);
    Mat res = CalcBlockMeanVariance(imgMat, 21);
    Core.subtract(new MatOfDouble(1.0), res, res);
    Core.add(imgMat, res, res);
    Imgproc.threshold(res, res, 0.85, 1, Imgproc.THRESH_BINARY);
    //Imgproc.resize(res, res, new org.opencv.core.Size(res.cols() / 2, res.rows() / 2));
    res.convertTo(res, CvType.CV_8UC1, 255.0);
    Utils.matToBitmap(res, bitmap);
    return bitmap;
}

public Mat CalcBlockMeanVariance(Mat Img, int blockSide)
{
    Mat I = new Mat();
    Mat ResMat;
    Mat inpaintmask = new Mat();
    Mat patch;
    Mat smallImg = new Mat();
    MatOfDouble mean = new MatOfDouble();
    MatOfDouble stddev = new MatOfDouble();
    Img.convertTo(I, CvType.CV_32FC1);
    ResMat = Mat.zeros(Img.rows() / blockSide, Img.cols() / blockSide, CvType.CV_32FC1);
    for (int i = 0; i < Img.rows() - blockSide; i += blockSide)
    {
        for (int j = 0; j < Img.cols() - blockSide; j += blockSide)
        {
            patch = new Mat(I, new Rect(j, i, blockSide, blockSide));
            Core.meanStdDev(patch, mean, stddev);
            if (stddev.get(0, 0)[0] > 0.01)
                ResMat.put(i / blockSide, j / blockSide, mean.get(0, 0)[0]);
            else
                ResMat.put(i / blockSide, j / blockSide, 0);
        }
    }
    Imgproc.resize(I, smallImg, ResMat.size());
    Imgproc.threshold(ResMat, inpaintmask, 0.02, 1.0, Imgproc.THRESH_BINARY);
    Mat inpainted = new Mat();
    Imgproc.cvtColor(smallImg, smallImg, Imgproc.COLOR_RGBA2BGR);
    smallImg.convertTo(smallImg, CvType.CV_8UC1, 255.0);
    inpaintmask.convertTo(inpaintmask, CvType.CV_8UC1);
    Photo.inpaint(smallImg, inpaintmask, inpainted, 5, Photo.INPAINT_TELEA);
    Imgproc.resize(inpainted, ResMat, Img.size());
    ResMat.convertTo(ResMat, CvType.CV_32FC1, 1.0 / 255.0);
    return ResMat;
}
Thank you in advance.
While imgMat and res have the same size, they have different numbers of channels: imgMat has 4 channels and res has 3.
Since you can only add two matrices if they have the same size and the same number of channels, convert imgMat to a 3-channel image before calling add:
Imgproc.cvtColor( imgMat, imgMat, Imgproc.COLOR_BGRA2BGR);
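In context, the conversion goes right before the failing Core.add call in Thresholding() (a sketch; everything else stays the same):
Mat res = CalcBlockMeanVariance(imgMat, 21);
Core.subtract(new MatOfDouble(1.0), res, res);
Imgproc.cvtColor(imgMat, imgMat, Imgproc.COLOR_BGRA2BGR); // 4 -> 3 channels, to match res
Core.add(imgMat, res, res); // now 'array op array' with equal size and channels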

Estimation of Euler angles (camera pose) using images from camera and OpenCV library

I'm working on an Android application and I need to estimate the camera rotation in 3D online, using images from the camera and the OpenCV library. I would like to calculate the Euler angles.
I have read this and this page, and I can estimate the homography matrix as shown here.
My first question is: do I really need to know the camera intrinsic matrix from camera calibration, or is the homography matrix (camera extrinsics) enough to estimate the Euler angles (pitch, roll, yaw)?
If the homography matrix is enough, how exactly can I do it?
Sorry, I am a real beginner with OpenCV and cannot decompose the homography Mat into a rotation matrix and a translation matrix as described here. How can I implement the Euler angles on Android?
Below you can see my code using solvePnPRansac() and decomposeProjectionMatrix() to calculate the Euler angles.
But it returns just a null vector, double[] eulerArray = {0,0,0}! Can somebody help me? What is wrong there?
Thank you very much for any response!
public double[] findEulerAngles(MatOfKeyPoint keypoints1, MatOfKeyPoint keypoints2, MatOfDMatch matches) {
    KeyPoint[] k1 = keypoints1.toArray();
    KeyPoint[] k2 = keypoints2.toArray();
    List<DMatch> matchesList = matches.toList();
    List<KeyPoint> referenceKeypointsList = keypoints2.toList();
    List<KeyPoint> sceneKeypointsList = keypoints1.toList();
    // Calculate the max and min distances between keypoints.
    double maxDist = 0.0;
    double minDist = Double.MAX_VALUE;
    for (DMatch match : matchesList) {
        double dist = match.distance;
        if (dist < minDist) {
            minDist = dist;
        }
        if (dist > maxDist) {
            maxDist = dist;
        }
    }
    // Identify "good" keypoints based on match distance.
    List<Point3> goodReferencePointsList = new ArrayList<Point3>();
    ArrayList<Point> goodScenePointsList = new ArrayList<Point>();
    double maxGoodMatchDist = 1.75 * minDist;
    for (DMatch match : matchesList) {
        if (match.distance < maxGoodMatchDist) {
            Point kk2 = k2[match.queryIdx].pt;
            Point kk1 = k1[match.trainIdx].pt;
            Point3 point3 = new Point3(kk1.x, kk1.y, 0.0);
            goodReferencePointsList.add(point3);
            goodScenePointsList.add(kk2);
        }
    }
    if (goodReferencePointsList.size() < 4 || goodScenePointsList.size() < 4) {
        // There are too few good points to find the pose.
        return new double[]{0, 0, 0};
    }
    MatOfPoint3f goodReferencePoints = new MatOfPoint3f();
    goodReferencePoints.fromList(goodReferencePointsList);
    MatOfPoint2f goodScenePoints = new MatOfPoint2f();
    goodScenePoints.fromList(goodScenePointsList);
    // Caution: MatOfDouble has no (rows, cols, type) constructor; these varargs
    // calls actually create Nx1 matrices holding the given numbers as data.
    MatOfDouble mRMat = new MatOfDouble(3, 3, CvType.CV_32F);
    MatOfDouble mTVec = new MatOfDouble(3, 1, CvType.CV_32F);
    //TODO: solve camera intrinsic matrix
    Mat intrinsics = Mat.eye(3, 3, CvType.CV_32F); // dummy camera matrix
    intrinsics.put(0, 0, 400);
    intrinsics.put(1, 1, 400);
    intrinsics.put(0, 2, 640 / 2);
    intrinsics.put(1, 2, 480 / 2);
    Calib3d.solvePnPRansac(goodReferencePoints, goodScenePoints, intrinsics, new MatOfDouble(), mRMat, mTVec);
    MatOfDouble rotCameraMatrix1 = new MatOfDouble(3, 1, CvType.CV_32F);
    // Note: solvePnPRansac writes a 3x1 Rodrigues rotation *vector* into mRMat;
    // with the Rodrigues() call commented out, rVecArray never holds the 9
    // elements of a rotation matrix that the code below expects.
    double[] rVecArray = mRMat.toArray();
    // Calib3d.Rodrigues(mRMat, rotCameraMatrix1);
    double[] tVecArray = mTVec.toArray();
    MatOfDouble projMatrix = new MatOfDouble(3, 4, CvType.CV_32F); // 3x4 input projection matrix P
    projMatrix.put(0, 0, rVecArray[0]);
    projMatrix.put(0, 1, rVecArray[1]);
    projMatrix.put(0, 2, rVecArray[2]);
    projMatrix.put(0, 3, 0);
    projMatrix.put(1, 0, rVecArray[3]);
    projMatrix.put(1, 1, rVecArray[4]);
    projMatrix.put(1, 2, rVecArray[5]);
    projMatrix.put(1, 3, 0);
    projMatrix.put(2, 0, rVecArray[6]);
    projMatrix.put(2, 1, rVecArray[7]);
    projMatrix.put(2, 2, rVecArray[8]);
    projMatrix.put(2, 3, 0);
    MatOfDouble cameraMatrix = new MatOfDouble(3, 3, CvType.CV_32F); // output 3x3 camera matrix K
    MatOfDouble rotMatrix = new MatOfDouble(3, 3, CvType.CV_32F); // output 3x3 external rotation matrix R
    MatOfDouble transVect = new MatOfDouble(4, 1, CvType.CV_32F); // output 4x1 translation vector T
    MatOfDouble rotMatrixX = new MatOfDouble(3, 3, CvType.CV_32F);
    MatOfDouble rotMatrixY = new MatOfDouble(3, 3, CvType.CV_32F);
    MatOfDouble rotMatrixZ = new MatOfDouble(3, 3, CvType.CV_32F);
    MatOfDouble eulerAngles = new MatOfDouble(3, 1, CvType.CV_32F); // three Euler angles of rotation, in degrees
    Calib3d.decomposeProjectionMatrix(projMatrix,
            cameraMatrix,
            rotMatrix,
            transVect,
            rotMatrixX,
            rotMatrixY,
            rotMatrixZ,
            eulerAngles);
    double[] eulerArray = eulerAngles.toArray();
    return eulerArray;
}
Homography relates images of the same planar surface, so it works only if there is a dominant plane in the image and you can find enough feature points lying on the plane in both images and successfully match them. The minimum number of matches is four, and the math will work under the assumption that the matches are 100% correct. With the help of robust estimation like RANSAC, you can get the result even if some elements in your set of feature point matches are obvious mismatches or do not lie on a plane.
For the more general case of a set of matched features without the planarity assumption, you will need to find an essential matrix. The exact definition of the matrix can be found here. In short, it works more or less like homography: it relates corresponding points in two images. The minimum number of matches required to compute the essential matrix is five. To get the result from such a minimal sample, you need to make sure that the established matches are 100% correct. Again, robust estimation can help if there are outliers in your correspondence set, and with automatic feature detection and matching there usually are.
OpenCV 3.0 has a function for essential matrix computation, conveniently integrated with RANSAC robust estimation. The essential matrix can be decomposed to the rotation matrix and translation vector as shown in the Wikipedia article I linked before. OpenCV 3.0 has a readily available function for this, too.
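For reference, a minimal sketch of that OpenCV 3.0 path (assuming matched MatOfPoint2f sets points1/points2 plus the focal length and principal point from your calibration):
// Sketch: essential matrix + pose recovery, OpenCV 3.x Java bindings
Mat inlierMask = new Mat();
Mat E = Calib3d.findEssentialMat(points1, points2, focal, pp, Calib3d.RANSAC, 0.999, 1.0, inlierMask);
Mat R = new Mat(), t = new Mat();
Calib3d.recoverPose(E, points1, points2, R, t, focal, pp, inlierMask); // decomposes E into R and t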
The following code now works for me, and I have decomposed the Euler angles from the homography matrix! I get values for pitch, roll and yaw, but I don't know whether they are correct. Does anybody have an idea how I can test them?
private static MatOfDMatch filterMatchesByHomography(MatOfKeyPoint keypoints1, MatOfKeyPoint keypoints2, MatOfDMatch matches) {
    List<Point> lp1 = new ArrayList<Point>(500);
    List<Point> lp2 = new ArrayList<Point>(500);
    KeyPoint[] k1 = keypoints1.toArray();
    KeyPoint[] k2 = keypoints2.toArray();
    List<DMatch> matchesList = matches.toList();
    if (matchesList.size() < 4) {
        MatOfDMatch mat = new MatOfDMatch();
        return mat;
    }
    // Add matched keypoints to new lists to apply the homography
    for (DMatch match : matchesList) {
        Point kk1 = k1[match.queryIdx].pt;
        Point kk2 = k2[match.trainIdx].pt;
        lp1.add(kk1);
        lp2.add(kk2);
    }
    MatOfPoint2f srcPoints = new MatOfPoint2f(lp1.toArray(new Point[0]));
    MatOfPoint2f dstPoints = new MatOfPoint2f(lp2.toArray(new Point[0]));
    Mat mask = new Mat();
    // Finds a perspective transformation between two planes (alternative: Calib3d.LMEDS)
    Mat homography = Calib3d.findHomography(srcPoints, dstPoints, Calib3d.RANSAC, 0.2, mask);
    Mat pose = cameraPoseFromHomography(homography);
    // Decompose the rotation matrix into Euler angles
    pitch = Math.atan2(pose.get(2, 1)[0], pose.get(2, 2)[0]); // atan2(r32, r33)
    roll = Math.atan2(-1 * pose.get(2, 0)[0],
            Math.sqrt(Math.pow(pose.get(2, 1)[0], 2) + Math.pow(pose.get(2, 2)[0], 2))); // atan2(-r31, sqrt(r32^2 + r33^2))
    yaw = Math.atan2(pose.get(1, 0)[0], pose.get(0, 0)[0]); // atan2(r21, r11)
    List<DMatch> matches_homo = new ArrayList<DMatch>();
    int size = (int) mask.size().height;
    for (int i = 0; i < size; i++) {
        if (mask.get(i, 0)[0] == 1) {
            DMatch d = matchesList.get(i);
            matches_homo.add(d);
        }
    }
    MatOfDMatch mat = new MatOfDMatch();
    mat.fromList(matches_homo);
    return mat;
}
This is my camera pose from the homography matrix (see this page too):
private static Mat cameraPoseFromHomography(Mat h) {
    //Log.d("DEBUG", "cameraPoseFromHomography: homography " + matToString(h));
    Mat pose = Mat.eye(3, 4, CvType.CV_32FC1); // 3x4 matrix, the camera pose [R|t]
    float norm1 = (float) Core.norm(h.col(0));
    float norm2 = (float) Core.norm(h.col(1));
    float tnorm = (norm1 + norm2) / 2.0f; // normalization value
    Mat normalizedTemp = new Mat();
    Core.normalize(h.col(0), normalizedTemp);
    normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
    normalizedTemp.copyTo(pose.col(0)); // normalize the rotation and copy the column into pose
    Core.normalize(h.col(1), normalizedTemp);
    normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
    normalizedTemp.copyTo(pose.col(1)); // normalize the rotation and copy the column into pose
    Mat p3 = pose.col(0).cross(pose.col(1)); // third column is the cross product of the first two
    p3.copyTo(pose.col(2));
    double[] buffer = new double[3];
    h.col(2).get(0, 0, buffer);
    pose.put(0, 3, buffer[0] / tnorm); // vector t of [R|t] is the last column of pose
    pose.put(1, 3, buffer[1] / tnorm);
    pose.put(2, 3, buffer[2] / tnorm);
    return pose;
}

Decomposition of essential matrix leads to wrong rotation and translation

I am doing some SfM and having trouble getting R and T from the essential matrix.
Here is what I am doing in source code:
Mat fundamental = Calib3d.findFundamentalMat(object_left, object_right);
Mat E = new Mat();
Core.multiply(cameraMatrix.t(), fundamental, E); // cameraMatrix.t()*fundamental*cameraMatrix;
Core.multiply(E, cameraMatrix, E);
Mat R = new Mat();
Mat.zeros(3, 3, CvType.CV_64FC1).copyTo(R);
Mat T = new Mat();
calculateRT(E, R, T);
where `calculateRT` is defined as follows:
private void calculateRT(Mat E, Mat R, Mat T) {
    /*
     * //-- Step 6: calculate Rotation Matrix and Translation Vector
     * Matx34d P;
     * //decompose E
     * SVD svd(E,SVD::MODIFY_A);
     * Mat svd_u = svd.u;
     * Mat svd_vt = svd.vt;
     * Mat svd_w = svd.w;
     * Matx33d W(0,-1,0,1,0,0,0,0,1);//HZ 9.13
     * Mat_<double> R = svd_u * Mat(W) * svd_vt;
     * Mat_<double> T = svd_u.col(2); //u3
     * if (!CheckCoherentRotation (R)) {
     *     std::cout<<"resulting rotation is not coherent\n";
     *     return 0;
     * }
     */
    Mat w = new Mat();
    Mat u = new Mat();
    Mat vt = new Mat();
    Core.SVDecomp(E, w, u, vt, Core.DECOMP_SVD); // maybe use flags
    Mat W = new Mat(new Size(3, 3), CvType.CV_64FC1);
    W.put(0, 0, W_Values); // W_Values = {0, -1, 0, 1, 0, 0, 0, 0, 1} (HZ 9.13)
    Core.multiply(u, W, R); // element-wise, not a matrix product -- see the fix below
    Core.multiply(R, vt, R);
    T = u.col(2); // note: reassigning the parameter has no effect on the caller in Java
}
And here are the values of all the matrices during and after the calculation:
Number matches: 10299
Number of good matches: 590
Number of obj_points left: 590.0
CameraMatrix:
[1133.601684570312, 0, 639.5;
 0, 1133.601684570312, 383.5;
 0, 0, 1]
DistortionCoeff: [0.06604336202144623; 0.21129509806633; 0; 0; -1.206771731376648]
Fundamental:
[4.209958176688844e-08, -8.477216249742946e-08, 9.132798068178793e-05;
3.165719895008366e-07, 6.437858397735847e-07, -0.0006976204595236443;
0.0004532506630569588, -0.0009224427024602799, 1]
Essential:
[0.05410018455525099, 0, 0;
0, 0.8272987826496967, 0;
0, 0, 1]
U: (SVD)
[0, 0, 1;
0, 0.9999999999999999, 0;
1, 0, 0]
W: (SVD)
[1; 0.8272987826496967; 0.05410018455525099]
vt: (SVD)
[0, 0, 1;
0, 1, 0;
1, 0, 0]
R:
[0, 0, 0;
0, 0, 0;
0, 0, 0]
T:
[1; 0; 0]
And for completeness, here are the images I am using: left and right.
Before calculating the feature points and so on, I undistort the images.
Can someone point out where something is going wrong or what I am doing wrong?
Edit:
Is it possible that my fundamental matrix is equal to the essential matrix, since I am in the calibrated situation and Hartley and Zisserman say:
„11.7.3 The calibrated case:
In the case of calibrated cameras normalized image coordinates may be used, and the essential matrix E computed instead of the fundamental matrix”
I've found the mistake. This code is not doing a proper matrix multiplication:
Mat E = new Mat();
Core.multiply(cameraMatrix.t(), fundamental, E);
Core.multiply(E, cameraMatrix, E);
I changed it to two gemm calls, since a single Core.gemm(A, B, alpha, C, beta, dst) computes alpha*A*B + beta*C, not A*B*C:
Mat tmp = new Mat();
Core.gemm(cameraMatrix.t(), fundamental, 1, new Mat(), 0, tmp); // tmp = K^T * F
Core.gemm(tmp, cameraMatrix, 1, new Mat(), 0, E); // E = K^T * F * K
This now performs the correct matrix multiplication. As far as I can tell from the documentation, Core.multiply multiplies element by element, not the row-by-column products of a matrix multiplication.
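The same element-wise pitfall affects calculateRT() above: a sketch of the matrix-product version of R = U * W * Vt (HZ 9.13), replacing the two Core.multiply calls:
Mat UW = new Mat();
Core.gemm(u, W, 1, new Mat(), 0, UW); // UW = U * W
Core.gemm(UW, vt, 1, new Mat(), 0, R); // R = U * W * Vt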
First, unless you computed the fundamental matrix by explicitly taking into account the inverse of the camera matrix, you are not in the calibrated case, hence the fundamental matrix you estimate is not an essential matrix. This is also quite easy to test: you just have to take the SVD of the fundamental matrix and see whether the two non-zero singular values are equal (see § 9.6.1 in Hartley & Zisserman's book).
Second, both the fundamental matrix and the essential matrix are defined for two cameras and do not make sense if you consider only one camera. If you do have two cameras, with respective matrices K1 and K2, then you can obtain the essential matrix E12, given the fundamental matrix F12 (which maps points in I1 to lines in I2), using the following formula (see equation 9.12 in Hartley&Zisserman's book):
E12 = K2^T . F12 . K1
In your case, you used K2 on both sides.
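In Java, this formula can be computed with two gemm calls (a sketch; K1, K2 and F12 are assumed to be Mats holding the two camera matrices and the fundamental matrix):
Mat tmp = new Mat(), E12 = new Mat();
Core.gemm(K2.t(), F12, 1, new Mat(), 0, tmp); // tmp = K2^T * F12
Core.gemm(tmp, K1, 1, new Mat(), 0, E12); // E12 = K2^T * F12 * K1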
