How to filter contour by brightness - java

I started using OpenCV a few weeks ago. I would like to know if there is a function for finding the brightest contour in a List of contours and drawing only that one. So far I have managed to convert to greyscale, threshold the image, and use the findContours function to find all the contours in the image.
I tried using the minMax function but can't find how it's used in Java.
public void process(Mat rgbaImage) {
    Imgproc.threshold(rgbaImage, rgbaImage, 230, 255, Imgproc.THRESH_BINARY);
    Imgproc.findContours(rgbaImage, contours, mHierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    /* for (int id = 0; id < contours.size(); id++) {
        double area = Imgproc.contourArea(contours.get(id));
        if (area > 8000) {
            Log.i(TAG1, "contents found at id " + id);
        }
    } */
}

If your "brightest" means the average color that is brightest, you can use cv::mean(Mat src, Mat mask).
Sadly I only know C++ OpenCV implementation, but I think Java version is almost the same as C++ one.
C++ Example:
Mat src; // This is your src image (8-bit grayscale)
vector<vector<Point>> contours; // This is your array of contours
vector<Vec4i> hierarchy;
findContours(src.clone(), contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE); // Find the contours in the image

int brightestIdx = -1;
double brightestColor = -1;
for (int i = 0; i < contours.size(); i++)
{
    // First, make a mask image of each contour
    // (note the Mat constructor takes rows first, then cols)
    Mat mask(src.rows, src.cols, CV_8U, Scalar(0));
    drawContours(mask, contours, i, Scalar(255), CV_FILLED);

    // Second, calculate the average brightness with the mask
    Scalar m = mean(src, mask);

    // Finally, compare the current average with the best so far
    if (m[0] > brightestColor)
    {
        brightestColor = m[0];
        brightestIdx = i;
    }
}

// Now you've found the brightest index.
// Do whatever you want with it, e.g. draw only that contour.
Mat brightest_only(src.rows, src.cols, CV_8U, Scalar(0));
drawContours(brightest_only, contours, brightestIdx, Scalar(255), 1);
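Since the question asks about Java, here is a minimal, untested Java sketch of the same idea (assuming src is your 8-bit grayscale Mat, with the usual OpenCV Java imports):
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(src.clone(), contours, hierarchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);
int brightestIdx = -1;
double brightestColor = -1;
for (int i = 0; i < contours.size(); i++) {
    // mask image for the current contour (thickness -1 = filled)
    Mat mask = Mat.zeros(src.rows(), src.cols(), CvType.CV_8U);
    Imgproc.drawContours(mask, contours, i, new Scalar(255), -1);
    // average brightness inside the contour
    Scalar m = Core.mean(src, mask);
    if (m.val[0] > brightestColor) {
        brightestColor = m.val[0];
        brightestIdx = i;
    }
}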

Related

OpenCV Notehead detection from note Mat/image

I am doing a project where I need to detect musical elements on stave lines, and I am at the point where I know what duration a note element has (quarter, eighth, etc.) and I want to detect the center of the note-head so that I can find out which note it is (C, D, etc.) based on its location on the stave lines.
The problem I have is that I don't know exactly where to start.
I was thinking about some template matching, using full and empty ovals as templates and the element Mat as a source.
Does anyone have any better, more optimal solutions?
Examples of element Mats from which I want to find the note-head: (example images omitted)
Project on GitHub if anyone is interested
https://github.com/AmbroziePaval/OMR
Implementation using template matching for one element (note) at a time.
The example searches for all quarters and draws their center points in green.
Code:
public Point getAproximateCenterNoteHeadPoint(Mat noteMat) {
    noteMat.convertTo(noteMat, CvType.CV_32FC1);

    Mat fullNoteHeadMat = Imgcodecs.imread(DatasetPaths.FULL_HEAD_TEMPLATE.getPath());
    if (fullNoteHeadMat.channels() == 3) {
        Imgproc.cvtColor(fullNoteHeadMat, fullNoteHeadMat, Imgproc.COLOR_BGR2GRAY);
    }
    fullNoteHeadMat.convertTo(fullNoteHeadMat, CvType.CV_32FC1);

    // matchTemplate reallocates result itself; note create() takes rows (height) first
    Mat result = new Mat();
    result.create(noteMat.height(), noteMat.width(), CvType.CV_32FC1);

    double threshold = 0.7;
    Imgproc.matchTemplate(noteMat, fullNoteHeadMat, result, Imgproc.TM_CCOEFF_NORMED);
    Imgproc.threshold(result, result, threshold, 255, Imgproc.THRESH_TOZERO);

    Core.MinMaxLocResult minMaxLocResult = Core.minMaxLoc(result);
    if (minMaxLocResult.maxVal > threshold) {
        // the match location is the template's top-left corner; shift to its center
        Point maxLoc = minMaxLocResult.maxLoc;
        return new Point(maxLoc.x + fullNoteHeadMat.width() / 2, maxLoc.y + fullNoteHeadMat.height() / 2);
    }
    return null;
}
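For reference, a hypothetical call site (my example; colorMat stands for a BGR copy of the sheet used for drawing):
Point center = getAproximateCenterNoteHeadPoint(noteMat);
if (center != null) {
    Imgproc.circle(colorMat, center, 3, new Scalar(0, 255, 0), -1); // green center dot
}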
Implementation using template matching for all elements at once, as @Alexander Reynolds suggested in the comments of the question:
public List<Point> findAllNoteHeadCenters(Mat imageMat, List<Rect> elementRectangles) {
    imageMat.convertTo(imageMat, CvType.CV_32FC1);

    Mat fullNoteHeadMat = Imgcodecs.imread(DatasetPaths.FULL_HEAD_TEMPLATE.getPath());
    if (fullNoteHeadMat.channels() == 3) {
        Imgproc.cvtColor(fullNoteHeadMat, fullNoteHeadMat, Imgproc.COLOR_BGR2GRAY);
    }
    fullNoteHeadMat.convertTo(fullNoteHeadMat, CvType.CV_32FC1);

    Mat result = new Mat();
    result.create(imageMat.height(), imageMat.width(), CvType.CV_32FC1);

    double threshold = 0.75;
    Imgproc.matchTemplate(imageMat, fullNoteHeadMat, result, Imgproc.TM_CCOEFF_NORMED);
    Imgproc.threshold(result, result, threshold, 255, Imgproc.THRESH_TOZERO);

    List<Point> centers = new ArrayList<>();
    Set<Rect> foundCenterFor = new HashSet<>();
    while (true) {
        Core.MinMaxLocResult minMaxLocResult = Core.minMaxLoc(result);
        if (minMaxLocResult.maxVal > threshold) {
            Point maxLoc = minMaxLocResult.maxLoc;
            Optional<Rect> containingRect = getPointContainingRect(maxLoc, elementRectangles);
            if (containingRect.isPresent() && !foundCenterFor.contains(containingRect.get())) {
                centers.add(new Point(maxLoc.x + fullNoteHeadMat.width() / 2, maxLoc.y + fullNoteHeadMat.height() / 2));
                foundCenterFor.add(containingRect.get());
            }
            // suppress this maximum so the next iteration finds the next-best match
            Imgproc.floodFill(result, new Mat(), minMaxLocResult.maxLoc, new Scalar(0));
        } else {
            break;
        }
    }
    return centers;
}
Try using a Chamfer-based distance transform to find the center of your note-head. The algorithm passes over the image twice to calculate the distance of each object pixel to the nearest margin. The center point of your object will be the one with the greatest distance assigned.
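In the Java bindings, a minimal sketch of that idea might look like this (untested; binaryMask is assumed to be an 8-bit binary image of a single note-head, constant names per OpenCV 3.x):
Mat dist = new Mat();
// each object pixel gets its L2 distance to the nearest zero (margin) pixel
Imgproc.distanceTransform(binaryMask, dist, Imgproc.DIST_L2, 3);
// the maximum of the distance map is the pixel deepest inside the object
Core.MinMaxLocResult mm = Core.minMaxLoc(dist);
Point center = mm.maxLoc;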

Best parameters for pupil detection using hough? java opencv

--------------read edit below---------------
I am trying to detect the edges of the pupils and irises within various images. I am altering parameters and such, but I can only ever manage to get one iris/pupil outline correct, or I get unnecessary outlines in the background, or none at all. Are there specific parameters I should try in order to get the correct outlines? Or is there a way to crop the image to just the eyes, so the system can focus on that part?
This is my UPDATED method:
private void findPupilIris() throws IOException {
    // converts and saves image in grayscale
    Mat newimg = Imgcodecs.imread("/Users/.../pic.jpg");
    Mat des = new Mat(newimg.rows(), newimg.cols(), newimg.type());
    Mat norm = new Mat();
    Imgproc.cvtColor(newimg, des, Imgproc.COLOR_BGR2HSV);
    List<Mat> hsv = new ArrayList<Mat>();
    Core.split(des, hsv);
    Mat v = hsv.get(2); // gets the greyscale version (value channel)
    Imgcodecs.imwrite("/Users/Lisa-Maria/Documents/CapturedImages/B&Wpic.jpg", v); // only writes Mats

    CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8)); // 2.0, new Size(8,8)
    clahe.apply(v, v);
//  Imgproc.GaussianBlur(v, v, new Size(9, 9), 3);  // adds left pupil boundary and random circle on 'a'
//  Imgproc.GaussianBlur(v, v, new Size(9, 9), 13); // adds right outer iris boundary and random circle on 'a'
    Imgproc.GaussianBlur(v, v, new Size(9, 9), 7);  // adds left outer iris boundary and random circle on left by hair
//  Imgproc.GaussianBlur(v, v, new Size(7, 7), 15);
    Core.addWeighted(v, 1.5, v, -0.5, 0, v);
    Imgcodecs.imwrite("/Users/.../after.jpg", v); // only writes Mats

    if (v != null) {
        Mat circles = new Mat();
        Imgproc.HoughCircles(v, circles, Imgproc.CV_HOUGH_GRADIENT, 2, v.rows(), 100, 20, 20, 200);
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        System.out.println("circles.cols() " + circles.cols());
        if (circles.cols() > 0) {
            System.out.println("1");
            for (int x = 0; x < circles.cols(); x++) {
                System.out.println("2");
                double vCircle[] = circles.get(0, x);
                if (vCircle == null) {
                    break;
                }
                Point pt = new Point(Math.round(vCircle[0]), Math.round(vCircle[1]));
                int radius = (int) Math.round(vCircle[2]);
                // draw the found circle
                Imgproc.circle(v, pt, radius, new Scalar(255, 0, 0), 2); // newimg
                // Imgproc.circle(des, pt, radius / 3, new Scalar(225, 0, 0), 2); // pupil
                Imgcodecs.imwrite("/Users/.../Houghpic.jpg", v); // newimg
                // draw the mask: white circle on black background
                // Mat mask = new Mat(new Size(des.cols(), des.rows()), CvType.CV_8UC1);
                // Imgproc.circle(mask, pt, radius, new Scalar(255, 0, 0), 2);
                // des.copyTo(des, mask);
                // Imgcodecs.imwrite("/Users/..../mask.jpg", des); // newimg
                Imgproc.logPolar(des, norm, pt, radius, Imgproc.WARP_FILL_OUTLIERS);
                Imgcodecs.imwrite("/Users/..../Normalised.jpg", norm);
            }
        }
    }
}
Result: hough pic
Following the discussion in the comments, I am posting a general answer with some results I got on the worst-case image uploaded by the OP.
Note: The code I am posting is in Python, since it is the fastest for me to write.
Step 1. As you ask for a way to crop the image so as to focus on the eyes only, you might want to look at Face Detection. Since the image essentially only requires finding the eyes, I did the following:
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
eyes = eye_cascade.detectMultiScale(v)  # v is the value channel of the HSV image
# "eyes" holds the rectangles in which eyes were detected, as rows of [x, y, w, h];
# with two detections, unpack them as (x1, y1, w1, h1) and (x2, y2, w2, h2).
# Just for drawing
cv2.rectangle(v, (x1, y1), (x1+w1, y1+h1), (0, 255, 0), 2)
cv2.rectangle(v, (x2, y2), (x2+w2, y2+h2), (0, 255, 0), 2)
Now, once you have the bounding rectangles, you can crop the rectangles from the image like:
crop_eye1 = v[y1:y1+h1, x1:x1+w1]
crop_eye2 = v[y2:y2+h2, x2:x2+w2]
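Since the question itself is in Java, a rough, untested Java equivalent of the Python snippet above might look like this (assuming haarcascade_eye.xml is in the working directory and the OpenCV 3.x bindings):
CascadeClassifier eyeCascade = new CascadeClassifier("haarcascade_eye.xml");
MatOfRect eyes = new MatOfRect();
eyeCascade.detectMultiScale(v, eyes); // v is the value channel of the HSV image
for (Rect r : eyes.toArray()) {
    Imgproc.rectangle(v, r.tl(), r.br(), new Scalar(0, 255, 0), 2); // just for drawing
    Mat cropEye = v.submat(r); // cropped eye region
}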
After you obtain the rectangles, I would suggest looking into different color spaces instead of RGB/BGR, HSV/Lab/Luv in particular.
Because the R, G, and B components of an object’s color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant
Then, once you have the eyes, it's time to equalize the contrast of the image; however, I suggest using CLAHE and playing with the parameters clipLimit and tileGridSize. Here is some code I implemented a while back in Java:
private static Mat clahe(Mat image, int clipLimit, Size size) {
    CLAHE clahe = Imgproc.createCLAHE();
    clahe.setClipLimit(clipLimit);
    clahe.setTilesGridSize(size);
    Mat dest_image = new Mat();
    clahe.apply(image, dest_image);
    return dest_image;
}
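A hypothetical call site, mirroring the 2.0 / 8x8 values used in the question's code (my example, not from the original answer):
Mat enhanced = clahe(v, 2, new Size(8, 8));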
Once you are satisfied, you should sharpen the image so that HoughCircles is robust. You should look at unsharp masking. Here is the Java code for an unsharp mask I implemented:
private static Mat unsharpMask(Mat input_image, Size size, double sigma) {
    // Make sure the input_image is gray.
    Mat sharpened_image = new Mat(input_image.rows(), input_image.cols(), input_image.type());
    Mat blurred_image = new Mat(input_image.rows(), input_image.cols(), input_image.type());
    Imgproc.GaussianBlur(input_image, blurred_image, size, sigma);
    Core.addWeighted(input_image, 2.0D, blurred_image, -1.0D, 0.0D, sharpened_image);
    return sharpened_image;
}
Alternatively, you could use a bilateral filter, which is edge-preserving smoothing, or read through this for defining a custom kernel for sharpening an image.
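For completeness, a minimal sketch of the bilateral-filter route (the 9/75/75 values are common starting points, not tuned for this problem; note src and dst must be different Mats):
Mat smoothed = new Mat();
Imgproc.bilateralFilter(v, smoothed, 9, 75, 75);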
Hope it helps and best of luck!

Using OpenCV to find the bounding box of numbers on an image

I'm trying to find the bounding box of the numbers in the middle of the 3 images below.
Here are 3 example cards I'm trying to work with.
The code I'm using is based on (almost a complete copy of) the code provided in the first answer here, converted to Java (answers in C++ are fine), with added parameters for the size of the contours to merge (defined as sizeHorizontal and sizeVertical in my code), which are the two parameters I'm playing with in the images below.
MatOfPoint2f approxCurve = new MatOfPoint2f();
Mat imgMAT = new Mat();
Utils.bitmapToMat(bmp32, imgMAT);
Mat grad = new Mat();
Imgproc.cvtColor(imgMAT, grad, Imgproc.COLOR_BGR2GRAY);

Mat img_sobel = new Mat();
Mat img_threshold = new Mat();
Imgproc.Sobel(grad, img_sobel, CvType.CV_8U, 1, 0, 3, 1, 0, Core.BORDER_DEFAULT);
Imgproc.threshold(img_sobel, img_threshold, 0, 255, Imgproc.THRESH_OTSU + Imgproc.THRESH_BINARY);

Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(sizeHorizontal, sizeVertical));
Imgproc.morphologyEx(img_threshold, img_threshold, Imgproc.MORPH_CLOSE, element);
Imgproc.cvtColor(imgMAT, imgMAT, Imgproc.COLOR_BGR2GRAY);

List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(img_threshold, contours, new Mat(), Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE, new org.opencv.core.Point(0, 0));

for (int i = 0; i < contours.size(); i++) {
    // Convert contours.get(i) from MatOfPoint to MatOfPoint2f
    MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
    // Approximate the contour polygon
    double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
    Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
    // Convert back to MatOfPoint
    MatOfPoint points = new MatOfPoint(approxCurve.toArray());
    // Get bounding rect of contour and draw it
    org.opencv.core.Rect rect = Imgproc.boundingRect(points);
    Imgproc.rectangle(imgMAT, rect.tl(), rect.br(), new Scalar(0, 255, 0), 2);
}
I've got the separate number sections contoured, but I can't find a way to isolate the contours I want. Here are the areas contoured with the size parameters used as input. As you can see for the second image, this is working exactly as I want: it has contoured the whole number, rather than each section.
1: Size param input: 17, 5
2: Size param input: 23, 7
3: Size param input: 23, 13
So, things I need help with:
1. Isolating the four contours in the middle and finding a way to merge those contours together. I've thought about taking the contours that match a given aspect ratio and cropping to a bounding box encompassing all of them, but there are other surrounding contours with similar ratios.
2. Finding a way to choose the correct size parameters automatically (as each card type requires different parameters to isolate the numbers). Short of trying all 3 size inputs and seeing which gives the expected contours, I could use the prevailing colour as an indicator of the card type and then use the parameters for that card type. But any other suggestions would be helpful, as I feel there's a better way to do this.
Many thanks!
Out of my busy schedule I could help you to some extent. Please find the code below, which will help you with the first two images. Fine-tune it for the third image. Just play with the morphological operations to get the required output.
//#include "stdafx.h"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include "tchar.h"
using namespace cv;
using namespace std;
#define INPUT_FILE "p.jpg"
#define OUTPUT_FOLDER_PATH string("")
int _tmain(int argc, _TCHAR* argv[])
{
Mat large = imread(INPUT_FILE);
Mat rgb;
// downsample and use it for processing
pyrDown(large, rgb);
Mat small;
cvtColor(rgb, small, CV_BGR2GRAY);
// morphological gradient
Mat grad;
Mat morphKernel = getStructuringElement(MORPH_ELLIPSE, Size(2, 2));
Mat morphKernel1 = getStructuringElement(MORPH_ELLIPSE, Size(1, 1));
morphologyEx(small, grad, MORPH_GRADIENT, morphKernel);
// binarize
Mat bw;
threshold(grad, bw, 5.0, 50.0, THRESH_BINARY | THRESH_OTSU);
// connect horizontally oriented regions
Mat connected;
morphKernel = getStructuringElement(MORPH_RECT, Size(9, 1));
morphologyEx(bw, connected, MORPH_CLOSE, morphKernel);
morphologyEx(bw, connected, MORPH_OPEN, morphKernel1);
morphologyEx(connected, connected, MORPH_CLOSE, morphKernel);
morphologyEx(connected, connected, MORPH_CLOSE, morphKernel);
morphologyEx(connected, connected, MORPH_CLOSE, morphKernel);
// find contours
Mat mask = Mat::zeros(bw.size(), CV_8UC1);
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(connected, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
// filter contours
int y=0;
for(int idx = 0; idx >= 0; idx = hierarchy[idx][0])
{
Rect rect = boundingRect(contours[idx]);
Mat maskROI(mask, rect);
maskROI = Scalar(0, 0, 0);
// fill the contour
drawContours(mask, contours, idx, Scalar(255, 255, 255), CV_FILLED);
double a=contourArea( contours[idx],false);
if(a> 575)
{
rectangle(rgb, rect, Scalar(0, 255, 0), 2);
y++;
}
imshow("Result1",rgb);
}
cout<<" The number of elements"<<y<< endl;
imshow("Result",mask);
imwrite(OUTPUT_FOLDER_PATH + string("rgb.jpg"), rgb);
waitKey(0);
return 0;
}
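On the asker's aspect-ratio idea: a hedged Java sketch that keeps only digit-shaped contours and merges their bounding boxes into one rectangle, reusing contours and imgMAT from the question's code. The 1.5-4.0 ratio band is a made-up placeholder to tune per card type:
Rect merged = null;
for (MatOfPoint c : contours) {
    Rect r = Imgproc.boundingRect(c);
    double ratio = (double) r.width / r.height;
    if (ratio < 1.5 || ratio > 4.0) continue; // not digit-shaped
    if (merged == null) {
        merged = r.clone();
    } else {
        // union of the two rectangles
        int x1 = Math.min(merged.x, r.x);
        int y1 = Math.min(merged.y, r.y);
        int x2 = Math.max(merged.x + merged.width, r.x + r.width);
        int y2 = Math.max(merged.y + merged.height, r.y + r.height);
        merged = new Rect(x1, y1, x2 - x1, y2 - y1);
    }
}
if (merged != null) {
    Imgproc.rectangle(imgMAT, merged.tl(), merged.br(), new Scalar(0, 0, 255), 2);
}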

Crop out part from images (findContours?) {opencv, java}

I have this image with boxes containing letters, like this:
I have been able to crop out each box, like this:
Now to my question: how can I crop out only the letters from each box? The desired result looks like this:
I would like to use findContours, but I am not really sure how to achieve this, since it will detect the noise and everything around as well.
Approach
I suggest the following approach, given the fact that you can extract the box. If you are given the box, follow these steps; I think that will work:
1. Find the center of the image
2. Find the contours in the image - those are the candidates
3. Find the bounding rectangle of each contour
4. Find the center of each bounding rectangle
5. Find the distance of each bounding rectangle from the center of the image
6. Find the minimum distance - that is your answer
Note: There is a var named pad which controls the padding of the result figure!
Do this for all your boxes. I hope this helps!
Good luck :)
Python Code
import cv2
import numpy as np
from matplotlib import pyplot as plt

# reading image in grayscale
image = cv2.imread('testing2.jpg', cv2.CV_LOAD_IMAGE_GRAYSCALE)
if image is None:
    print 'can not read the image data'
# thresholding to get a binary one
ret, image = cv2.threshold(image, 100, 255, cv2.THRESH_BINARY_INV)
# finding the center of image as (x, y)
image_center = (image.shape[1]/2, image.shape[0]/2)
# finding image contours
contours, hier = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# finding distance of each contour from the center of image
d_min = 1000
for contour in contours:
    # finding bounding rect
    rect = cv2.boundingRect(contour)
    # skipping the outliers (boxes nearly as large as the image)
    if rect[3] > image.shape[0]/2 and rect[2] > image.shape[1]/2:
        continue
    pt1 = (rect[0], rect[1])
    # finding the center of the bounding rect (the digit)
    c = (rect[0] + rect[2]/2, rect[1] + rect[3]/2)
    d = np.sqrt((c[0] - image_center[0])**2 + (c[1] - image_center[1])**2)
    # keeping the minimum distance from the center
    if d < d_min:
        d_min = d
        rect_min = [pt1, (rect[2], rect[3])]
# fetching the image with desired padding
pad = 5
result = image[rect_min[0][1]-pad : rect_min[0][1]+rect_min[1][1]+pad,
               rect_min[0][0]-pad : rect_min[0][0]+rect_min[1][0]+pad]
plt.imshow(result*255, 'gray')
plt.show()
Java Code
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
// reading image
Mat image = Highgui.imread(".\\testing2.jpg", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
// clone the image
Mat original = image.clone();
// thresholding the image to make a binary image
Imgproc.threshold(image, image, 100, 128, Imgproc.THRESH_BINARY_INV);
// find the center of the image
double[] centers = {(double) image.width() / 2, (double) image.height() / 2};
Point image_center = new Point(centers);

// finding the contours
ArrayList<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(image, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

// finding the bounding rectangle whose center is closest to the image center
double d_min = Double.MAX_VALUE;
Rect rect_min = new Rect();
for (MatOfPoint contour : contours) {
    Rect rec = Imgproc.boundingRect(contour);
    // skip the outliers (boxes nearly as large as the image)
    if (rec.height > image.height() / 2 && rec.width > image.width() / 2)
        continue;
    Point center = new Point(rec.x + (double) rec.width / 2, rec.y + (double) rec.height / 2);
    // measure the distance from the rectangle's center, as in the Python version
    double d = Math.sqrt(Math.pow(center.x - image_center.x, 2) + Math.pow(center.y - image_center.y, 2));
    if (d < d_min) {
        d_min = d;
        rect_min = rec;
    }
}

// slicing the image for the result region, with padding
int pad = 5;
rect_min.x = rect_min.x - pad;
rect_min.y = rect_min.y - pad;
rect_min.width = rect_min.width + 2 * pad;
rect_min.height = rect_min.height + 2 * pad;
Mat result = original.submat(rect_min);
Highgui.imwrite("result.png", result);
EDIT:
Java code added!
Result: (output image omitted)

Stitching 2 images (OpenCV)

I'm trying to stitch two images together using the OpenCV Java API. However, I get the wrong output and I cannot work out the problem. I use the following steps:
1. detect features
2. extract features
3. match features
4. find homography
5. find perspective transform
6. warp perspective
7. 'stitch' the 2 images into a combined image.
But somewhere I'm going wrong. I think it's the way I'm combining the 2 images, but I'm not sure. I get 214 good feature matches between the 2 images, but cannot stitch them.
public class ImageStitching {

    static Mat image1;
    static Mat image2;
    static FeatureDetector fd;
    static DescriptorExtractor fe;
    static DescriptorMatcher fm;

    public static void initialise() {
        fd = FeatureDetector.create(FeatureDetector.BRISK);
        fe = DescriptorExtractor.create(DescriptorExtractor.SURF);
        fm = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);

        //images
        image1 = Highgui.imread("room2.jpg");
        image2 = Highgui.imread("room3.jpg");

        //structures for the keypoints from the 2 images
        MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
        MatOfKeyPoint keypoints2 = new MatOfKeyPoint();

        //structures for the computed descriptors
        Mat descriptors1 = new Mat();
        Mat descriptors2 = new Mat();

        //structure for the matches
        MatOfDMatch matches = new MatOfDMatch();

        //getting the keypoints (each image must be detected on separately)
        fd.detect(image1, keypoints1);
        fd.detect(image2, keypoints2);

        //getting the descriptors from the keypoints
        fe.compute(image1, keypoints1, descriptors1);
        fe.compute(image2, keypoints2, descriptors2);

        //matching the 2 sets of descriptors; descriptors2 is the query set and
        //descriptors1 the train set, so queryIdx indexes keypoints2 and
        //trainIdx indexes keypoints1
        fm.match(descriptors2, descriptors1, matches);

        //turn the matches to a list
        List<DMatch> matchesList = matches.toList();

        Double maxDist = 0.0; //keep track of max distance from the matches
        Double minDist = 100.0; //keep track of min distance from the matches

        //calculate max & min distances between matches
        for (int i = 0; i < matchesList.size(); i++) {
            Double dist = (double) matchesList.get(i).distance;
            if (dist < minDist) minDist = dist;
            if (dist > maxDist) maxDist = dist;
        }

        System.out.println("max dist: " + maxDist);
        System.out.println("min dist: " + minDist);

        //structure for the good matches
        LinkedList<DMatch> goodMatches = new LinkedList<DMatch>();

        //use only the good matches (i.e. whose distance is less than 3*min_dist)
        for (int i = 0; i < matchesList.size(); i++) {
            if (matchesList.get(i).distance < 3 * minDist) {
                goodMatches.addLast(matchesList.get(i));
            }
        }

        //structures to hold points of the good matches (coordinates)
        LinkedList<Point> objList = new LinkedList<Point>(); //image1
        LinkedList<Point> sceneList = new LinkedList<Point>(); //image2

        List<KeyPoint> keypoints_objectList = keypoints1.toList();
        List<KeyPoint> keypoints_sceneList = keypoints2.toList();

        //putting the points of the good matches into the above structures;
        //trainIdx belongs to keypoints1 and queryIdx to keypoints2 (see match() above)
        for (int i = 0; i < goodMatches.size(); i++) {
            objList.addLast(keypoints_objectList.get(goodMatches.get(i).trainIdx).pt);
            sceneList.addLast(keypoints_sceneList.get(goodMatches.get(i).queryIdx).pt);
        }

        System.out.println("\nNum. of good matches: " + goodMatches.size());

        MatOfDMatch gm = new MatOfDMatch();
        gm.fromList(goodMatches);

        //converting the points into the appropriate data structure
        MatOfPoint2f obj = new MatOfPoint2f();
        obj.fromList(objList);
        MatOfPoint2f scene = new MatOfPoint2f();
        scene.fromList(sceneList);

        //finding the homography matrix (maps image1 points onto image2 points)
        Mat H = Calib3d.findHomography(obj, scene);

        //the four corners of image1, one per row
        Mat obj_corners = new Mat(4, 1, CvType.CV_32FC2);
        Mat scene_corners = new Mat(4, 1, CvType.CV_32FC2);
        obj_corners.put(0, 0, new double[]{0, 0});
        obj_corners.put(1, 0, new double[]{image1.cols(), 0});
        obj_corners.put(2, 0, new double[]{image1.cols(), image1.rows()});
        obj_corners.put(3, 0, new double[]{0, image1.rows()});

        Core.perspectiveTransform(obj_corners, scene_corners, H);

        //structure to hold the result of the warp
        Mat result = new Mat();
        //size of the new image - i.e. image 1 + image 2
        Size s = new Size(image1.cols() + image2.cols(), image1.rows());

        //using the homography matrix to warp image1, then pasting image2 beside it
        Imgproc.warpPerspective(image1, result, H, s);
        int i = image1.cols();
        Mat m = new Mat(result, new Rect(i, 0, image2.cols(), image2.rows()));
        image2.copyTo(m);

        //queryIdx refers to keypoints2, so image2 is passed as the first image
        Mat img_mat = new Mat();
        Features2d.drawMatches(image2, keypoints2, image1, keypoints1, gm, img_mat,
                new Scalar(254, 0, 0), new Scalar(254, 0, 0), new MatOfByte(), 2);

        //creating the output files
        boolean imageStitched = Highgui.imwrite("imageStitched.jpg", result);
        boolean imageMatched = Highgui.imwrite("imageMatched.jpg", img_mat);
    }

    public static void main(String args[]) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        initialise();
    }
}
I cannot embed images nor post more than 2 links because of reputation points, so I've linked the incorrectly stitched image and an image showing the matched features between the 2 images (to give an understanding of the issue):
incorrect stitched image: http://oi61.tinypic.com/11ac01c.jpg
detected features: http://oi57.tinypic.com/29m3wif.jpg
It seems that you have a lot of outliers that make the estimation of the homography incorrect. So you can use the RANSAC method, which iteratively rejects those outliers.
Not much effort is needed for that; just pass a third parameter to the findHomography function (in the Java bindings the constant is Calib3d.RANSAC rather than C++'s CV_RANSAC):
Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 3);
Edit
Then make sure that the images given to the detector are 8-bit grayscale images, as mentioned here.
The "incorrectly stitched image" you post looks like having a bad conditioned H matrix. Apart from +dervish suggestions, run:
cv::determinant(H) > 0.01
To check if your H matrix is "usable". If the matrix is badly conditioned, you get the effect you are showing.
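In the Java bindings the same sanity check can be done with Core.determinant; a minimal sketch:
// a near-zero determinant signals a degenerate homography
if (Math.abs(Core.determinant(H)) < 0.01) {
    System.out.println("H is badly conditioned - the stitch will look broken");
}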
You are drawing onto a 2x2 canvas; if that's the case, you won't see plenty of stitching configurations, i.e. it's OK for image A on the left of image B but not otherwise. Try drawing the output onto a 3x3 canvas instead, using the following snippet:
// Use the Homography Matrix to warp the images, but offset it to the
// center of the output canvas. Careful to pre-multiply, not post-multiply.
cv::Mat Offset = (cv::Mat_<double>(3,3) << 1, 0, width,
                                           0, 1, height,
                                           0, 0, 1);
H = Offset * H;
cv::Mat result;
cv::warpPerspective(mat_l, result, H, cv::Size(3*width, 3*height));
// Copy the reference image to the center of the 3x3 output canvas.
cv::Mat roi = result.colRange(width,2*width).rowRange(height,2*height);
mat_r.copyTo(roi);
Where width and height are those of the input images, assumed here to be the same size for both. Note that this keeps mat_r unchanged (flat) as the reference in the center of the canvas and warps mat_l to stitch onto it.
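Since the question uses the Java API, an untested Java translation of the snippet above might look like this (reusing image1/image2 and H from the question's code, and assuming both images share width and height):
int width = image1.cols(), height = image1.rows();
Mat offset = Mat.eye(3, 3, CvType.CV_64F);
offset.put(0, 2, width);  // shift right by one image width
offset.put(1, 2, height); // shift down by one image height
Mat Hcentered = new Mat();
Core.gemm(offset, H, 1, new Mat(), 0, Hcentered); // Hcentered = offset * H (pre-multiply)
Mat result3x3 = new Mat();
Imgproc.warpPerspective(image1, result3x3, Hcentered, new Size(3 * width, 3 * height));
// copy the reference image into the middle cell of the 3x3 canvas
Mat roi = result3x3.submat(height, 2 * height, width, 2 * width);
image2.copyTo(roi);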
