I am doing a project where I need to detect musical elements from stave lines. I am at the point where I know what duration a note element has (quarter, eighth, etc.), and I want to detect the center of the note-head so that I can find out which note it is (C, D, etc.) based on its location on the stave lines.
The problem is that I don't know exactly where to start.
I was thinking about some template matching, using full and empty ovals as templates and the element Mat as the source.
Does anyone have a better or more efficient solution?
Examples of element Mats from which I want to find the note-head: (example images)
The project is on GitHub, if anyone is interested:
https://github.com/AmbroziePaval/OMR
Implementation using template matching for one element (note) at a time.
The example searches for all quarter notes and draws their center points in green.
Code:
public Point getAproximateCenterNoteHeadPoint(Mat noteMat) {
    noteMat.convertTo(noteMat, CvType.CV_32FC1);

    // Load the note-head template and make sure it is single-channel float, like the source.
    Mat fullNoteHeadMat = Imgcodecs.imread(DatasetPaths.FULL_HEAD_TEMPLATE.getPath());
    if (fullNoteHeadMat.channels() == 3) {
        Imgproc.cvtColor(fullNoteHeadMat, fullNoteHeadMat, Imgproc.COLOR_BGR2GRAY);
    }
    fullNoteHeadMat.convertTo(fullNoteHeadMat, CvType.CV_32FC1);

    // The matchTemplate result is (W - w + 1) x (H - h + 1).
    Mat result = new Mat();
    result.create(noteMat.rows() - fullNoteHeadMat.rows() + 1,
            noteMat.cols() - fullNoteHeadMat.cols() + 1, CvType.CV_32FC1);

    double threshold = 0.7;
    Imgproc.matchTemplate(noteMat, fullNoteHeadMat, result, Imgproc.TM_CCOEFF_NORMED);
    Imgproc.threshold(result, result, threshold, 255, Imgproc.THRESH_TOZERO);

    // The best match location is the top-left corner of the template; shift it to the template center.
    Core.MinMaxLocResult minMaxLocResult = Core.minMaxLoc(result);
    if (minMaxLocResult.maxVal > threshold) {
        Point maxLoc = minMaxLocResult.maxLoc;
        return new Point(maxLoc.x + fullNoteHeadMat.width() / 2, maxLoc.y + fullNoteHeadMat.height() / 2);
    }
    return null;
}
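For context, a hypothetical call site that marks the returned center in green on a colour copy of the element (elementMat is assumed to be an 8-bit grayscale element crop; this is not code from the repository):
// Hypothetical usage sketch: draw the detected note-head centre in green.
Mat display = new Mat();
Imgproc.cvtColor(elementMat, display, Imgproc.COLOR_GRAY2BGR); // colour copy for drawing
Point center = getAproximateCenterNoteHeadPoint(elementMat);
if (center != null) {
    Imgproc.circle(display, center, 2, new Scalar(0, 255, 0), -1);
}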
Implementation using template matching for all elements at once, as @Alexander Reynolds suggested in the comments on the question:
public List<Point> findAllNoteHeadCenters(Mat imageMat, List<Rect> elementRectangles) {
    imageMat.convertTo(imageMat, CvType.CV_32FC1);

    // Load the note-head template and make sure it is single-channel float, like the source.
    Mat fullNoteHeadMat = Imgcodecs.imread(DatasetPaths.FULL_HEAD_TEMPLATE.getPath());
    if (fullNoteHeadMat.channels() == 3) {
        Imgproc.cvtColor(fullNoteHeadMat, fullNoteHeadMat, Imgproc.COLOR_BGR2GRAY);
    }
    fullNoteHeadMat.convertTo(fullNoteHeadMat, CvType.CV_32FC1);

    // The matchTemplate result is (W - w + 1) x (H - h + 1).
    Mat result = new Mat();
    result.create(imageMat.rows() - fullNoteHeadMat.rows() + 1,
            imageMat.cols() - fullNoteHeadMat.cols() + 1, CvType.CV_32FC1);

    double threshold = 0.75;
    Imgproc.matchTemplate(imageMat, fullNoteHeadMat, result, Imgproc.TM_CCOEFF_NORMED);
    Imgproc.threshold(result, result, threshold, 255, Imgproc.THRESH_TOZERO);

    List<Point> centers = new ArrayList<>();
    Set<Rect> foundCenterFor = new HashSet<>();

    // Repeatedly take the best remaining match, keep at most one center per element rectangle,
    // then erase that match so minMaxLoc can find the next one.
    while (true) {
        Core.MinMaxLocResult minMaxLocResult = Core.minMaxLoc(result);
        if (minMaxLocResult.maxVal > threshold) {
            Point maxLoc = minMaxLocResult.maxLoc;
            Optional<Rect> containingRect = getPointContainingRect(maxLoc, elementRectangles);
            if (containingRect.isPresent() && !foundCenterFor.contains(containingRect.get())) {
                centers.add(new Point(maxLoc.x + fullNoteHeadMat.width() / 2, maxLoc.y + fullNoteHeadMat.height() / 2));
                foundCenterFor.add(containingRect.get());
            }
            Imgproc.floodFill(result, new Mat(), minMaxLocResult.maxLoc, new Scalar(0));
        } else {
            break;
        }
    }
    return centers;
}
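The getPointContainingRect helper is not shown above; a minimal sketch of what it might look like (an assumption, not code from the repository):
private Optional<Rect> getPointContainingRect(Point point, List<Rect> elementRectangles) {
    // Return the first element rectangle (if any) that contains the given point.
    return elementRectangles.stream()
            .filter(rect -> rect.contains(point))
            .findFirst();
}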
Try using a Chamfer-based distance transform to find the center of your object. The algorithm makes two passes over the image to compute, for each object pixel, the distance to the nearest edge. The center of the object is the pixel with the greatest assigned distance.
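A rough sketch of that idea using OpenCV's built-in distance transform (not a literal two-pass Chamfer implementation, but the result is comparable), assuming noteHeadMat is a binarised element image with the note-head as white foreground on black:
// Each foreground pixel gets its distance to the nearest background pixel;
// for a filled note-head, the maximum lies near its centre.
Mat dist = new Mat();
Imgproc.distanceTransform(noteHeadMat, dist, Imgproc.DIST_L2, 3);
Core.MinMaxLocResult mm = Core.minMaxLoc(dist);
Point approximateCenter = mm.maxLoc;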
Related
I am new to Android. I am using OpenCV to detect the face and mouth of a person, but it is not detecting the mouth correctly. Can you help me with this?
Here is my code:
mJavaDetectorLip = loadClassifier(R.raw.haarcascade_mcs_mouth, "haarcascade_mcs_mouth.xml", cascadeDir);
......
Rect liparea = new Rect(new Point(20, 20), new Point(mGray.width() - 20, mGray.height() - 20));
lipArea(mJavaLip, liparea, 100);
......
Here is the code for lipArea:
private Mat lipArea(CascadeClassifier clasificator, Rect area, int size) {
    Mat template = new Mat();
    Mat mROI = mGray.submat(area);
    MatOfRect mouths = new MatOfRect();
    Point lips = new Point();
    // detect the mouth inside the given area
    clasificator.detectMultiScale(mROI, mouths, 1.1, 2, Objdetect.CASCADE_FIND_BIGGEST_OBJECT
            | Objdetect.CASCADE_SCALE_IMAGE, new Size(30, 30), new Size());
    Rect[] mouthArray = mouths.toArray();
    for (int i = 0; i < mouthArray.length; i++) {
        Rect e = mouthArray[i];
        // shift the detection back into full-image coordinates
        e.x = area.x + e.x;
        e.y = area.y + e.y;
        Point center1 = new Point(e.x + mouthArray[i].width * 0.5,
                e.y + mouthArray[i].height * 0.5);
        int radius = (int) Math.round(mouthArray[i].width / 2);
        Imgproc.circle(mRgba, center1, radius, new Scalar(255, 0, 0), 4, 8, 0);
        return template;
    }
    return template;
}
It is not staying in one place, it is moving around the whole face.
This is expected behavior: the Haar features of a mouth are very limited, so there is a high chance of false positives. For example, your eyes have features similar to your lips. To mitigate this issue, the OpenCV docs suggest first detecting the faces in a given frame; if there are several, choose a single one based on the area of the face rect or some other parameter. After successfully detecting the face, divide the face rect into halves and search for the lips in the lower half only.
This will significantly increase your accuracy, because the Haar features for the face are quite complex and well trained. Narrowing the search domain from the whole frame to the lower half of the face saves time as well.
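A hedged sketch of that idea in Java (mJavaDetectorFace and the drawing code are assumptions; only the region-of-interest logic matters here):
// Sketch: detect the face first, then search for the mouth only in the lower half of the face.
MatOfRect faces = new MatOfRect();
mJavaDetectorFace.detectMultiScale(mGray, faces, 1.1, 3,
        Objdetect.CASCADE_SCALE_IMAGE, new Size(60, 60), new Size());
for (Rect face : faces.toArray()) {
    // Lower half of the face rectangle.
    Rect lowerHalf = new Rect(face.x, face.y + face.height / 2, face.width, face.height / 2);
    Mat mouthROI = mGray.submat(lowerHalf);
    MatOfRect mouths = new MatOfRect();
    mJavaDetectorLip.detectMultiScale(mouthROI, mouths, 1.1, 2,
            Objdetect.CASCADE_FIND_BIGGEST_OBJECT, new Size(30, 30), new Size());
    for (Rect m : mouths.toArray()) {
        // Shift the detection back into full-image coordinates before drawing.
        Point tl = new Point(lowerHalf.x + m.x, lowerHalf.y + m.y);
        Point br = new Point(tl.x + m.width, tl.y + m.height);
        Imgproc.rectangle(mRgba, tl, br, new Scalar(0, 255, 0), 2);
    }
}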
I have a captured image that contains a table, and I want to crop the table out of that image.
This is a sample image.
Can someone suggest what can be done?
I have to use it in Android.
Use a Hough transform to find the lines in the image.
OpenCV can do this easily and has Java bindings. See the tutorial on this page for how to do something very similar:
https://docs.opencv.org/3.4.1/d9/db0/tutorial_hough_lines.html
Here is the java code provided in the tutorial:
import org.opencv.core.*;
import org.opencv.core.Point;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
class HoughLinesRun {
public void run(String[] args) {
// Declare the output variables
Mat dst = new Mat(), cdst = new Mat(), cdstP;
String default_file = "../../../../data/sudoku.png";
String filename = ((args.length > 0) ? args[0] : default_file);
// Load an image
Mat src = Imgcodecs.imread(filename, Imgcodecs.IMREAD_GRAYSCALE);
// Check if image is loaded fine
if( src.empty() ) {
System.out.println("Error opening image!");
System.out.println("Program Arguments: [image_name -- default "
+ default_file +"] \n");
System.exit(-1);
}
// Edge detection
Imgproc.Canny(src, dst, 50, 200, 3, false);
// Copy edges to the images that will display the results in BGR
Imgproc.cvtColor(dst, cdst, Imgproc.COLOR_GRAY2BGR);
cdstP = cdst.clone();
// Standard Hough Line Transform
Mat lines = new Mat(); // will hold the results of the detection
Imgproc.HoughLines(dst, lines, 1, Math.PI/180, 150); // runs the actual detection
// Draw the lines
for (int x = 0; x < lines.rows(); x++) {
double rho = lines.get(x, 0)[0],
theta = lines.get(x, 0)[1];
double a = Math.cos(theta), b = Math.sin(theta);
double x0 = a*rho, y0 = b*rho;
Point pt1 = new Point(Math.round(x0 + 1000*(-b)), Math.round(y0 + 1000*(a)));
Point pt2 = new Point(Math.round(x0 - 1000*(-b)), Math.round(y0 - 1000*(a)));
Imgproc.line(cdst, pt1, pt2, new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
}
// Probabilistic Line Transform
Mat linesP = new Mat(); // will hold the results of the detection
Imgproc.HoughLinesP(dst, linesP, 1, Math.PI/180, 50, 50, 10); // runs the actual detection
// Draw the lines
for (int x = 0; x < linesP.rows(); x++) {
double[] l = linesP.get(x, 0);
Imgproc.line(cdstP, new Point(l[0], l[1]), new Point(l[2], l[3]), new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
}
// Show results
HighGui.imshow("Source", src);
HighGui.imshow("Detected Lines (in red) - Standard Hough Line Transform", cdst);
HighGui.imshow("Detected Lines (in red) - Probabilistic Line Transform", cdstP);
// Wait and Exit
HighGui.waitKey();
System.exit(0);
}
}
public class HoughLines {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new HoughLinesRun().run(args);
}
}
lines (or linesP) will contain the detected lines. Instead of drawing them (as in the example), you will want to process them a little further:
Sort the found lines by slope.
The two largest clusters will be horizontal lines and then vertical lines.
For the horizontal lines calculate and sort by the y intercept.
The largest y intercept describes the top of the table.
The smallest y intercept is the bottom of the table.
For the vertical lines calculate and sort by the x intercept.
The largest x intercept is the right side of the table.
The smallest x intercept is the left side of the table.
You'll now have the coordinates of the four table corners and can do standard image manipulation to crop/rotate, etc. OpenCV can help you with this step too.
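For the final crop, assuming the table is roughly axis-aligned, a minimal sketch (left, right, top, bottom are the extreme intercepts found above; src is the input image):
// Crop the table region delimited by the four extreme lines.
Rect tableRect = new Rect(new Point(left, top), new Point(right, bottom));
Mat table = new Mat(src, tableRect); // view into src; clone() if you need a copy
Imgcodecs.imwrite("table_cropped.jpg", table.clone());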
Convert your image to grayscale.
Threshold your image to drop noise.
Find the minimum area rect of the non-blank pixels.
In Python the code would look like:
import cv2
import numpy as np
img = cv2.imread('table.jpg')
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 222, 255, cv2.THRESH_BINARY )
# write out the thresholded image to debug the 222 value
cv2.imwrite("thresh.png", thresh)
indices = np.where(thresh != 255)
coords = np.array([(b,a) for a, b in zip(*(indices[0], indices[1]))])
# coords = cv2.convexHull(coords)
rect = cv2.minAreaRect(coords)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(img, [box], 0, (0, 0, 255), 2)
cv2.imwrite("box.png", img)
For me this produces the following image.
If your image didn't have the red squares it would be a tighter fit.
--------------read edit below---------------
I am trying to detect the edges of the pupils and irises within various images. I am altering parameters and such, but I can only ever manage to get one iris/pupil outline correct, or I get unnecessary outlines in the background, or none at all. Are there specific parameters I should try in order to get the correct outlines? Or is there a way to crop the image to just the eyes, so the system can focus on that part?
This is my UPDATED method:
private void findPupilIris() throws IOException {
//converts and saves image in grayscale
Mat newimg = Imgcodecs.imread("/Users/.../pic.jpg");
Mat des = new Mat(newimg.rows(), newimg.cols(), newimg.type());
Mat norm = new Mat();
Imgproc.cvtColor(newimg, des, Imgproc.COLOR_BGR2HSV);
List<Mat> hsv = new ArrayList<Mat>();
Core.split(des, hsv);
Mat v = hsv.get(2); //gets the grey scale version
Imgcodecs.imwrite("/Users/Lisa-Maria/Documents/CapturedImages/B&Wpic.jpg", v); //only writes mats
CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8,8) ); //2.0, new Size(8,8)
clahe.apply(v,v);
// Imgproc.GaussianBlur(v, v, new Size(9,9), 3); //adds left pupil boundary and random circle on 'a'
// Imgproc.GaussianBlur(v, v, new Size(9,9), 13); //adds right outer iris boundary and random circle on 'a'
Imgproc.GaussianBlur(v, v, new Size(9,9), 7); //adds left outer iris boundary and random circle on left by hair
// Imgproc.GaussianBlur(v, v, new Size(7,7), 15);
Core.addWeighted(v, 1.5, v, -0.5, 0, v);
Imgcodecs.imwrite("/Users/.../after.jpg", v); //only writes mats
if (v != null) {
Mat circles = new Mat();
Imgproc.HoughCircles( v, circles, Imgproc.CV_HOUGH_GRADIENT, 2, v.rows(), 100, 20, 20, 200 );
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
System.out.println("circles.cols() " + circles.cols());
if(circles.cols() > 0) {
System.out.println("1");
for (int x = 0; x < circles.cols(); x++) {
System.out.println("2");
double vCircle[] = circles.get(0, x);
if(vCircle == null) {
break;
}
Point pt = new Point(Math.round(vCircle[0]), Math.round(vCircle[1]));
int radius = (int) Math.round(vCircle[2]);
//draw the found circle
Imgproc.circle(v, pt, radius, new Scalar(255,0,0),2); //newimg
//Imgproc.circle(des, pt, radius/3, new Scalar(225,0,0),2); //pupil
Imgcodecs.imwrite("/Users/.../Houghpic.jpg", v); //newimg
//draw the mask: white circle on black background
// Mat mask = new Mat( new Size( des.cols(), des.rows() ), CvType.CV_8UC1 );
// Imgproc.circle(mask, pt, radius, new Scalar(255,0,0),2);
// des.copyTo(des,mask);
// Imgcodecs.imwrite("/Users/..../mask.jpg", des); //newimg
Imgproc.logPolar(des, norm, pt, radius, Imgproc.WARP_FILL_OUTLIERS);
Imgcodecs.imwrite("/Users/..../Normalised.jpg",norm);
}
}
}
}
Result: hough pic
Following the discussion in the comments, I am posting a general answer with some results I got on the worst-case image uploaded by the OP.
Note: the code I am posting is in Python, since it is the fastest for me to write.
Step 1. Since you ask for a way to crop the image so as to focus on the eyes only, you might want to look at face detection. Since the image essentially only requires finding the eyes, I did the following:
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
eyes = eye_cascade.detectMultiScale(v)  # v is the value channel of the HSV image
# The result "eyes" gives you the dimensions of the rectangles where the eyes were detected, as [x, y, w, h]
# Just for drawing
cv2.rectangle(v, (x1, y1), (x1+w1, y1+h1), (0, 255, 0), 2)
cv2.rectangle(v, (x2, y2), (x2+w2, y2+h2), (0, 255, 0), 2)
Now, once you have the bounding rectangles, you can crop the rectangles from the image like:
crop_eye1 = v[y1:y1+h1, x1:x1+w1]
crop_eye2 = v[y2:y2+h2, x2:x2+w2]
After you obtain the rectangles, I would suggest looking into color spaces other than RGB/BGR, in particular HSV/Lab/Luv.
Because the R, G, and B components of an object’s color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant
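For example, extracting the value channel in Java could look roughly like this (bgrEye being the cropped BGR eye region; an assumption, not part of the code above):
// Convert BGR to HSV and keep the value (brightness) channel as the working grayscale image.
Mat hsvEye = new Mat();
Imgproc.cvtColor(bgrEye, hsvEye, Imgproc.COLOR_BGR2HSV);
List<Mat> channels = new ArrayList<>();
Core.split(hsvEye, channels);
Mat v = channels.get(2);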
Then, once you have the eyes, it's time to equalize the contrast of the image. However, I suggest using CLAHE and playing with the clipLimit and tileGridSize parameters. Here is some code I implemented a while back in Java:
private static Mat clahe(Mat image, int ClipLimit, Size size){
CLAHE clahe = Imgproc.createCLAHE();
clahe.setClipLimit(ClipLimit);
clahe.setTilesGridSize(size);
Mat dest_image = new Mat();
clahe.apply(image, dest_image);
return dest_image;
}
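A hypothetical call on one of the cropped eyes from above (a clip limit of 2 and an 8x8 tile grid are just starting values to tune):
Mat equalizedEye = clahe(cropEye1, 2, new Size(8, 8));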
Once you are satisfied, you should sharpen the image so that HoughCircles is more robust. You should look at unsharp masking. Here is the Java code for an unsharp mask I implemented:
private static Mat unsharpMask(Mat input_image, Size size, double sigma){
    // Make sure the {input_image} is gray.
    Mat sharpened_image = new Mat(input_image.rows(), input_image.cols(), input_image.type());
    Mat blurred_image = new Mat(input_image.rows(), input_image.cols(), input_image.type());
    Imgproc.GaussianBlur(input_image, blurred_image, size, sigma);
    // sharpened = 2 * original - blurred
    Core.addWeighted(input_image, 2.0D, blurred_image, -1.0D, 0.0D, sharpened_image);
    return sharpened_image;
}
Alternatively, you could use a bilateral filter, which does edge-preserving smoothing, or read through this for defining a custom sharpening kernel.
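A small Java sketch of the bilateral-filter alternative (the diameter and sigma values are only starting points; grayEye is assumed to be the single-channel eye image):
// Edge-preserving smoothing: flattens noise while keeping iris/pupil boundaries sharp.
Mat smoothed = new Mat();
Imgproc.bilateralFilter(grayEye, smoothed, 9, 75, 75);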
Hope it helps and best of luck!
I started using OpenCV a few weeks ago. I would like to know if there is a function for finding the brightest contour from a list of contours and drawing on the brightest one. So far I have managed to convert to grayscale, threshold the image, and use the findContours function to find all the contours in the image.
I tried using the minMaxLoc function, but I can't find out how it is used in Java.
public void process(Mat rgbaImage) {
    Imgproc.threshold(rgbaImage, rgbaImage, 230, 255, Imgproc.THRESH_BINARY);
    Imgproc.findContours(rgbaImage, contours, mHierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    /* for (int id = 0; id < contours.size(); id++) {
        double area = Imgproc.contourArea(contours.get(id));
        if (area > 8000) {
            Log.i(TAG1, "contents founds at id" + id);
        }
    } */
}
If your "brightest" means the average color that is brightest, you can use cv::mean(Mat src, Mat mask).
Sadly I only know C++ OpenCV implementation, but I think Java version is almost the same as C++ one.
C++ Example:
Mat src; // This is your src image (8-bit, single channel)
vector<vector<Point>> contours; // This is your array of contours
vector<Vec4i> hierarchy;
findContours(src.clone(), contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE); // Find the contours in the image

int brightestIdx = -1;
double brightestColor = -1;
for(int i = 0; i < contours.size(); i++)
{
    // First, make a mask image of each contour
    Mat mask(src.rows, src.cols, CV_8U, Scalar(0));
    drawContours(mask, contours, i, Scalar(255), CV_FILLED);

    // Second, calculate the average brightness inside the mask
    Scalar m = mean(src, mask);

    // Finally, compare the current average with the best one so far
    if(m[0] > brightestColor)
    {
        brightestColor = m[0];
        brightestIdx = i;
    }
}

// Now you've found the brightest index.
// Do whatever you want with it.
Mat brightest_only(src.rows, src.cols, CV_8U, Scalar(0));
drawContours(brightest_only, contours, brightestIdx, Scalar(255), 1);
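Since the question is about Java, a rough translation of the same idea (a sketch under the assumption that src is the thresholded 8-bit single-channel image; not tested against your pipeline):
// Find the contour with the highest mean gray value inside its filled mask.
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(src.clone(), contours, hierarchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);
int brightestIdx = -1;
double brightestColor = -1;
for (int i = 0; i < contours.size(); i++) {
    // Mask of the current contour, filled.
    Mat mask = Mat.zeros(src.size(), CvType.CV_8U);
    Imgproc.drawContours(mask, contours, i, new Scalar(255), -1);
    // Average brightness inside that contour.
    Scalar m = Core.mean(src, mask);
    if (m.val[0] > brightestColor) {
        brightestColor = m.val[0];
        brightestIdx = i;
    }
}
// Draw only the brightest contour onto a blank canvas.
Mat brightestOnly = Mat.zeros(src.size(), CvType.CV_8U);
if (brightestIdx >= 0) {
    Imgproc.drawContours(brightestOnly, contours, brightestIdx, new Scalar(255), 1);
}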
I'm trying to stitch two images together, using the OpenCV Java API. However, I get the wrong output and I cannot work out the problem. I use the following steps:
1. detect features
2. extract features
3. match features.
4. find homography
5. find perspective transform
6. warp perspective
7. 'stitch' the 2 images, into a combined image.
but somewhere I'm going wrong. I think it's the way I'm combining the 2 images, but I'm not sure. I get 214 good feature matches between the 2 images, but cannot stitch them.
public class ImageStitching {
static Mat image1;
static Mat image2;
static FeatureDetector fd;
static DescriptorExtractor fe;
static DescriptorMatcher fm;
public static void initialise(){
fd = FeatureDetector.create(FeatureDetector.BRISK);
fe = DescriptorExtractor.create(DescriptorExtractor.SURF);
fm = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
//images
image1 = Highgui.imread("room2.jpg");
image2 = Highgui.imread("room3.jpg");
//structures for the keypoints from the 2 images
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
//structures for the computed descriptors
Mat descriptors1 = new Mat();
Mat descriptors2 = new Mat();
//structure for the matches
MatOfDMatch matches = new MatOfDMatch();
//getting the keypoints
fd.detect(image1, keypoints1);
fd.detect(image2, keypoints2);
//getting the descriptors from the keypoints
fe.compute(image1, keypoints1, descriptors1);
fe.compute(image2,keypoints2,descriptors2);
//getting the matches the 2 sets of descriptors
fm.match(descriptors2,descriptors1, matches);
//turn the matches to a list
List<DMatch> matchesList = matches.toList();
Double maxDist = 0.0; //keep track of max distance from the matches
Double minDist = 100.0; //keep track of min distance from the matches
//calculate max & min distances between keypoints
for(int i=0; i<matchesList.size(); i++){
Double dist = (double) matchesList.get(i).distance;
if (dist<minDist) minDist = dist;
if(dist>maxDist) maxDist=dist;
}
System.out.println("max dist: " + maxDist );
System.out.println("min dist: " + minDist);
//structure for the good matches
LinkedList<DMatch> goodMatches = new LinkedList<DMatch>();
//use only the good matches (i.e. whose distance is less than 3*min_dist)
for(int i=0; i<matchesList.size(); i++){
if(matchesList.get(i).distance<3*minDist){
goodMatches.addLast(matchesList.get(i));
}
}
//structures to hold points of the good matches (coordinates)
LinkedList<Point> objList = new LinkedList<Point>(); // image1
LinkedList<Point> sceneList = new LinkedList<Point>(); //image 2
List<KeyPoint> keypoints_objectList = keypoints1.toList();
List<KeyPoint> keypoints_sceneList = keypoints2.toList();
//putting the points of the good matches into above structures
for(int i = 0; i<goodMatches.size(); i++){
objList.addLast(keypoints_objectList.get(goodMatches.get(i).queryIdx).pt);
sceneList.addLast(keypoints_sceneList.get(goodMatches.get(i).trainIdx).pt);
}
System.out.println("\nNum. of good matches" +goodMatches.size());
MatOfDMatch gm = new MatOfDMatch();
gm.fromList(goodMatches);
//converting the points into the appropriate data structure
MatOfPoint2f obj = new MatOfPoint2f();
obj.fromList(objList);
MatOfPoint2f scene = new MatOfPoint2f();
scene.fromList(sceneList);
//finding the homography matrix
Mat H = Calib3d.findHomography(obj, scene);
//LinkedList<Point> cornerList = new LinkedList<Point>();
Mat obj_corners = new Mat(4,1,CvType.CV_32FC2);
Mat scene_corners = new Mat(4,1,CvType.CV_32FC2);
obj_corners.put(0,0, new double[]{0,0});
obj_corners.put(1,0, new double[]{image1.cols(),0});
obj_corners.put(2,0, new double[]{image1.cols(),image1.rows()});
obj_corners.put(3,0, new double[]{0,image1.rows()});
Core.perspectiveTransform(obj_corners, scene_corners, H);
//structure to hold the result of the homography matrix
Mat result = new Mat();
//size of the new image - i.e. image 1 + image 2
Size s = new Size(image1.cols()+image2.cols(),image1.rows());
//using the homography matrix to warp the two images
Imgproc.warpPerspective(image1, result, H, s);
int i = image1.cols();
Mat m = new Mat(result,new Rect(i,0,image2.cols(), image2.rows()));
image2.copyTo(m);
Mat img_mat = new Mat();
Features2d.drawMatches(image1, keypoints1, image2, keypoints2, gm, img_mat, new Scalar(254,0,0),new Scalar(254,0,0) , new MatOfByte(), 2);
//creating the output file
boolean imageStitched = Highgui.imwrite("imageStitched.jpg",result);
boolean imageMatched = Highgui.imwrite("imageMatched.jpg",img_mat);
}
public static void main(String args[]){
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
initialise();
}
}
I cannot embed images or post more than 2 links because of reputation points, so I've linked the incorrectly stitched image and an image showing the matched features between the 2 images (to give an understanding of the issue):
incorrect stitched image: http://oi61.tinypic.com/11ac01c.jpg
detected features: http://oi57.tinypic.com/29m3wif.jpg
It seems that you have a lot of outliers, which makes the homography estimation incorrect. You can use the RANSAC method, which iteratively rejects those outliers.
It doesn't take much effort; just pass a third parameter to the findHomography function:
Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC);
Edit
Then make sure that the images given to the detector are 8-bit grayscale images, as mentioned here.
The "incorrectly stitched image" you post looks like having a bad conditioned H matrix. Apart from +dervish suggestions, run:
cv::determinant(H) > 0.01
to check whether your H matrix is "usable". If the matrix is badly conditioned, you get the effect you are showing.
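In the Java bindings the same check would be roughly:
// H is the 3x3 homography returned by Calib3d.findHomography.
if (Math.abs(Core.determinant(H)) < 0.01) {
    // badly conditioned homography; the warped result will look degenerate
}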
You are drawing onto a 2x2 canvas; if that's the case, you won't see plenty of stitching configurations, i.e. it's fine for image A to the left of image B but not otherwise. Try drawing the output onto a 3x3 canvas instead, using the following snippet:
// Use the Homography Matrix to warp the images, but offset it to the
// center of the output canvas. Careful to pre-multiply, not post-multiply.
cv::Mat Offset = (cv::Mat_<double>(3,3) << 1, 0, width,
                                           0, 1, height,
                                           0, 0, 1);
H = Offset * H;
cv::Mat result;
cv::warpPerspective(mat_l, result, H, cv::Size(3*width, 3*height));
// Copy the reference image to the center of the 3x3 output canvas.
cv::Mat roi = result.colRange(width, 2*width).rowRange(height, 2*height);
mat_r.copyTo(roi);
Here width and height are those of the input images, which are assumed to be the same size. Note that this warp keeps mat_r unchanged (it is the flat reference copied into the centre) and warps mat_l to be stitched onto it.
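Since your pipeline is in Java, here is a hedged translation of that snippet (width and height are the input image dimensions, matL and matR the left and right images; the variable names are assumptions):
// Offset the homography so the warp lands in the middle cell of a 3x3 canvas.
Mat offset = Mat.eye(3, 3, CvType.CV_64F);
offset.put(0, 2, width);
offset.put(1, 2, height);
Mat shiftedH = new Mat();
Core.gemm(offset, H, 1, new Mat(), 0, shiftedH); // shiftedH = offset * H
Mat result = new Mat();
Imgproc.warpPerspective(matL, result, shiftedH, new Size(3 * width, 3 * height));
// Copy the reference image into the centre cell of the canvas.
Mat roi = result.submat(new Rect(width, height, width, height));
matR.copyTo(roi);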