I'm trying to use the OpenCV FAST detector with a threshold on Android. I found a similar solved problem here. I converted the keypoints to a list after the detect method, as suggested, but it still doesn't work for me. In my case I want to detect the keypoints on my camera frame:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    MatOfKeyPoint points = new MatOfKeyPoint();
    Mat mat = inputFrame.rgba();
    FeatureDetector fast = FeatureDetector.create(FeatureDetector.FAST);
    fast.detect(mat, points);

    // Sort and select 500 best keypoints
    List<KeyPoint> listOfKeypoints = points.toList();
    Collections.sort(listOfKeypoints, new Comparator<KeyPoint>() {
        @Override
        public int compare(KeyPoint kp1, KeyPoint kp2) {
            // Sort them in descending order, so the best response KPs will come first
            return (int) (kp2.response - kp1.response);
        }
    });
    List<KeyPoint> listOfBestKeypoints = listOfKeypoints.subList(0, 500);
    points.fromList(listOfBestKeypoints);

    Scalar redcolor = new Scalar(255, 0, 0);
    Mat mRgba = mat.clone();
    Imgproc.cvtColor(mat, mRgba, Imgproc.COLOR_RGBA2RGB, 4);
    Features2d.drawKeypoints(mRgba, points, mRgba, redcolor, 3);
    return mRgba;
}
The problem is that my listOfKeypoints remains null. If I don't try to filter the keypoints this way, the code works fine, but it is too slow.
What am I doing wrong here?
Thanks.
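For the threshold itself (the original goal of the question), the Java FeatureDetector wrapper has no setter. A workaround often mentioned for the 2.4-era API is to load the detector parameters from a small YAML file via read(). This is only a sketch under that assumption; the exact parameter names and file format can be checked by calling fast.write(somePath) first and inspecting the generated file:

// Hypothetical location for the parameter file; any writable path works.
File paramsFile = new File(getCacheDir(), "fast_params.yml");

// "threshold" and "nonmaxSuppression" are assumed to be the FAST parameter
// names; verify them against the output of FeatureDetector.write().
String settings = "%YAML:1.0\nthreshold: 40\nnonmaxSuppression: 1\n";
try (FileWriter writer = new FileWriter(paramsFile)) {
    writer.write(settings);
} catch (IOException e) {
    e.printStackTrace();
}

FeatureDetector fast = FeatureDetector.create(FeatureDetector.FAST);
fast.read(paramsFile.getAbsolutePath()); // load the custom threshold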
I found out that my list is null only the first time onCameraFrame is called. So I can make it work by using a class variable and populating my list only from the second call onwards.
private int count=0;
And then in onCameraFrame:
if (count != 0) {
    // Sort and select 500 best keypoints
    List<KeyPoint> listOfKeypoints = points.toList();
    Collections.sort(listOfKeypoints, new Comparator<KeyPoint>() {
        @Override
        public int compare(KeyPoint kp1, KeyPoint kp2) {
            // Sort them in descending order, so the best response KPs will come first
            return (int) (kp2.response - kp1.response);
        }
    });
    List<KeyPoint> listOfBestKeypoints = listOfKeypoints.subList(0, 500);
    points.fromList(listOfBestKeypoints);
}
count++;
This way it works, but I still don't understand why the list is null on the first call of onCameraFrame. Any ideas?
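Independently of the counter, a guard on the size of the keypoint list avoids both the first-frame problem and the extra state, whatever the underlying cause is (most likely the detector simply finds no keypoints on the very first, not yet valid, frame). A minimal sketch reusing the names from the code above:

List<KeyPoint> listOfKeypoints = points.toList();
if (!listOfKeypoints.isEmpty()) {
    Collections.sort(listOfKeypoints, new Comparator<KeyPoint>() {
        @Override
        public int compare(KeyPoint kp1, KeyPoint kp2) {
            // Descending by response, so the strongest keypoints come first
            return (int) (kp2.response - kp1.response);
        }
    });
    // Never ask for more keypoints than were actually detected
    int n = Math.min(500, listOfKeypoints.size());
    points.fromList(listOfKeypoints.subList(0, n));
}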
I am creating an app to recognize a building or parts of a building to overlay certain highlights.
At first I thought about using Vuforia and Unity like everyone else does, but I feel like it does not give me the freedom I need, especially with the free version.
My logic goes a bit deeper than just using a target image, so my idea was to use Android Studio and OpenCV.
I am at a point where I can show feature matching with steps like
Calib3d.findHomography(pts1Mat, pts2Mat, Calib3d.RANSAC, 10, outPutMask, 2000, 0.995);
to get good matches and then use
Features2d.drawMatches(imgFromFile, keyPoints1, imgFromFrame, keyPoints2, better_matches_mat, outputImg);
But at the moment I am somewhat out of ideas on how to translate the seemingly easy Python code you often find into Android/Java.
Things I need to do at this point:
Extract descriptors/keypoints from known images of the building so the app does not need to calculate them for each frame (I will take many photographs); see the persistence sketch after this list
Highlight matching area (box or color highlights on contour)
Get rid of false positives (it finds matches even though I have the camera pointed at some random object)
The framerate is rather low at the moment with drawMatches; since I don't really need the drawing, I hope the framerate will improve when "just" calculating matches
I am trying to resize the frames to frameResolution/2 or frameResolution/4 before working with them, but the matches get worse
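For the first item of the list above, the ORB descriptors (a CV_8U Mat with one row per keypoint) can be computed once per reference photograph and written to disk, then loaded back instead of being recomputed every frame. A rough sketch with plain Java I/O; the method names and file path are just examples:

// Persist ORB descriptors so they are computed once per reference photo.
void saveDescriptors(Mat descriptors, String path) throws IOException {
    byte[] buffer = new byte[(int) (descriptors.total() * descriptors.elemSize())];
    descriptors.get(0, 0, buffer);
    DataOutputStream out = new DataOutputStream(new FileOutputStream(path));
    out.writeInt(descriptors.rows());
    out.writeInt(descriptors.cols());
    out.write(buffer);
    out.close();
}

Mat loadDescriptors(String path) throws IOException {
    DataInputStream in = new DataInputStream(new FileInputStream(path));
    int rows = in.readInt();
    int cols = in.readInt();
    byte[] buffer = new byte[rows * cols];
    in.readFully(buffer);
    in.close();
    Mat descriptors = new Mat(rows, cols, CvType.CV_8U);
    descriptors.put(0, 0, buffer);
    return descriptors;
}

The reference keypoints are still needed for the homography step, so their coordinates can be written next to the descriptors in the same way.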
Some of my code:
public Mat matching(Mat matFrame, int viewMode, int resizeMode) {
    if (viewMode == VIEW_MODE_FEATURES) {
        initMatching();
        if (!imageIsAnalyzed) {
            detectImageFromFile();
        }
        detectFrame(matFrame, resizeMode);
        featureMatching();
        outPutMat = drawingOutputMat();
    }
    return outPutMat;
}

private void initMatching() {
    detector = ORB.create();
    descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
}
private void featureMatching() {
    matcher.knnMatch(descriptor1, descriptor2, matches, 2);

    // Ratio test to keep only good matches
    for (MatOfDMatch matOfDMatch : matches) {
        if (matOfDMatch.toArray()[0].distance / matOfDMatch.toArray()[1].distance < 0.9) {
            good_matches.add(matOfDMatch.toArray()[0]);
        }
    }
    //....
    for (int i = 0; i < good_matches.size(); i++) {
        pts1.add(keyPoints1.toList().get(good_matches.get(i).queryIdx).pt);
        pts2.add(keyPoints2.toList().get(good_matches.get(i).trainIdx).pt);
    }
    //....
    Calib3d.findHomography(pts1Mat, pts2Mat, Calib3d.RANSAC, 10, outPutMask, 2000, 0.995);

    // outPutMask contains zeros and ones indicating which matches survive the RANSAC filter
    better_matches = new LinkedList<DMatch>();
    for (int i = 0; i < good_matches.size(); i++) {
        if (outPutMask.get(i, 0)[0] != 0.0) {
            better_matches.add(good_matches.get(i));
        }
    }
}
private void detectFrame(Mat matFrame, int resizeMode) {
    imgFromFrame = matFrame;
    Imgproc.resize(imgFromFrame, imgFromFrame, new Size(matFrame.width() / resizeMode, matFrame.height() / resizeMode));
    descriptor2 = new Mat();
    keyPoints2 = new MatOfKeyPoint();
    detector.detect(imgFromFrame, keyPoints2);
    descriptor.compute(imgFromFrame, keyPoints2, descriptor2);
}

private Mat drawingOutputMat() {
    //Drawing Output
    outputImg = new Mat();
    better_matches_mat = new MatOfDMatch();
    better_matches_mat.fromList(better_matches);
    //this will draw all matches
    Features2d.drawMatches(imgFromFile, keyPoints1, imgFromFrame, keyPoints2, better_matches_mat, outputImg);
    //Instead of the drawing matches I will need some classification and some overlay on the output
    return outputImg;
}
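For the "highlight matching area" item of the list above, a common approach is to project the corners of the reference image into the frame with the homography and draw the resulting quadrilateral. A rough sketch, assuming the homography returned by findHomography above is kept in a Mat named H, and reusing imgFromFile and imgFromFrame:

// Corners of the reference image, in its own coordinate system.
Mat objCorners = new Mat(4, 1, CvType.CV_32FC2);
objCorners.put(0, 0, 0, 0);
objCorners.put(1, 0, imgFromFile.cols(), 0);
objCorners.put(2, 0, imgFromFile.cols(), imgFromFile.rows());
objCorners.put(3, 0, 0, imgFromFile.rows());

// Project them into the camera frame.
Mat sceneCorners = new Mat();
Core.perspectiveTransform(objCorners, sceneCorners, H);

// Draw the projected outline on the frame.
Scalar green = new Scalar(0, 255, 0);
for (int i = 0; i < 4; i++) {
    double[] p1 = sceneCorners.get(i, 0);
    double[] p2 = sceneCorners.get((i + 1) % 4, 0);
    Imgproc.line(imgFromFrame, new Point(p1[0], p1[1]), new Point(p2[0], p2[1]), green, 3);
}

This also helps with the false-positive item: if the projected quadrilateral is degenerate (self-intersecting or tiny), or if there are only a handful of better_matches, the detection can simply be rejected for that frame.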
I hope some of you can help me to figure out my next steps and how I should continue.
Thanks in advance.
I am doing a project where I need to detect musical elements on stave lines, and I am at the point where I know what duration a note element has (quarter, eighth, etc.), and I want to detect the center of the note-head so that I can find out what note it is (C, D, etc.) based on its location on the stave lines.
The problem that I have is that I don't know exactly where to start.
I was thinking about some template-matching using full and empty ovals as a template and the element Mat as a source.
Does anyone have a better or more optimal solution?
Examples of element Mats from which I want to find the note-head (example images omitted).
Project on GitHub if anyone is interested
https://github.com/AmbroziePaval/OMR
Implementation using template matching for one element (note) at a time.
The example searches for all quarters and draws the center points in green.
Code:
public Point getAproximateCenterNoteHeadPoint(Mat noteMat) {
noteMat.convertTo(noteMat, CvType.CV_32FC1);
Mat fullNoteHeadMat = Imgcodecs.imread(DatasetPaths.FULL_HEAD_TEMPLATE.getPath());
if (fullNoteHeadMat.channels() == 3) {
Imgproc.cvtColor(fullNoteHeadMat, fullNoteHeadMat, Imgproc.COLOR_BGR2GRAY);
}
fullNoteHeadMat.convertTo(fullNoteHeadMat, CvType.CV_32FC1);
Mat result = new Mat();
result.create(noteMat.width(), noteMat.height(), CvType.CV_32FC1);
double threshold = 0.7;
Imgproc.matchTemplate(noteMat, fullNoteHeadMat, result, Imgproc.TM_CCOEFF_NORMED);
Imgproc.threshold(result, result, threshold, 255, Imgproc.THRESH_TOZERO);
Core.MinMaxLocResult minMaxLocResult = Core.minMaxLoc(result);
if (minMaxLocResult.maxVal > threshold) {
Point maxLoc = minMaxLocResult.maxLoc;
return new Point(maxLoc.x + fullNoteHeadMat.width() / 2, maxLoc.y + fullNoteHeadMat.height() / 2);
}
return null;
}
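A possible usage of the method above to produce the green center points mentioned earlier (a sketch; noteMat and outputImage are placeholder names):

// Draw the detected note-head center as a small filled green circle, if found.
Point center = getAproximateCenterNoteHeadPoint(noteMat);
if (center != null) {
    Imgproc.circle(outputImage, center, 3, new Scalar(0, 255, 0), -1);
}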
Implementation using template matching for all elements at a time, as @Alexander Reynolds suggested in the comments of the question:
public List<Point> findAllNoteHeadCenters(Mat imageMat, List<Rect> elementRectangles) {
imageMat.convertTo(imageMat, CvType.CV_32FC1);
Mat fullNoteHeadMat = Imgcodecs.imread(DatasetPaths.FULL_HEAD_TEMPLATE.getPath());
if (fullNoteHeadMat.channels() == 3) {
Imgproc.cvtColor(fullNoteHeadMat, fullNoteHeadMat, Imgproc.COLOR_BGR2GRAY);
}
fullNoteHeadMat.convertTo(fullNoteHeadMat, CvType.CV_32FC1);
Mat result = new Mat();
result.create(imageMat.width(), imageMat.height(), CvType.CV_32FC1);
double threshold = 0.75;
Imgproc.matchTemplate(imageMat, fullNoteHeadMat, result, Imgproc.TM_CCOEFF_NORMED);
Imgproc.threshold(result, result, threshold, 255, Imgproc.THRESH_TOZERO);
List<Point> centers = new ArrayList<>();
Set<Rect> foundCenterFor = new HashSet<>();
while (true) {
Core.MinMaxLocResult minMaxLocResult = Core.minMaxLoc(result);
if (minMaxLocResult.maxVal > threshold) {
Point maxLoc = minMaxLocResult.maxLoc;
Optional<Rect> containingRect = getPointContainingRect(maxLoc, elementRectangles);
if (containingRect.isPresent() && !foundCenterFor.contains(containingRect.get())) {
centers.add(new Point(maxLoc.x + fullNoteHeadMat.width() / 2, maxLoc.y + fullNoteHeadMat.height() / 2));
foundCenterFor.add(containingRect.get());
}
Imgproc.floodFill(result, new Mat(), minMaxLocResult.maxLoc, new Scalar(0));
} else {
break;
}
}
return centers;
}
Try using a Chamfer-based distance transform to find the center of your note-head. The algorithm passes over the image twice to calculate the distance of each object pixel to the nearest edge. The center point of your object will be the one with the greatest distance assigned.
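A rough Java sketch of that idea, assuming the note-head has already been isolated as a binary white-on-black Mat called noteHeadBinary (constant names can differ slightly between OpenCV versions):

// Distance of every foreground pixel to the nearest background pixel.
Mat dist = new Mat();
Imgproc.distanceTransform(noteHeadBinary, dist, Imgproc.DIST_L2, 3);

// The pixel with the largest distance is the innermost point of the blob,
// which works well as an estimate of the note-head center.
Core.MinMaxLocResult mm = Core.minMaxLoc(dist);
Point center = mm.maxLoc;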
I started using OpenCV a few weeks ago. I would like to know if there is a function for finding the brightest contour from a list of contours and drawing only that one. So far I have managed to convert to grayscale, threshold the image, and use the findContours function to find all the contours in the image.
I tried using the minMaxLoc function but can't find out how it is used in Java.
public void process(Mat rgbaImage) {
    Imgproc.threshold(rgbaImage, rgbaImage, 230, 255, Imgproc.THRESH_BINARY);
    Imgproc.findContours(rgbaImage, contours, mHierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    /* for(int id = 0; id < contours.size(); id++) {
        double area = Imgproc.contourArea(contours.get(id));
        if (area > 8000){
            Log.i(TAG1, "contents founds at id" + id);
        }
    } */
}
If your "brightest" means the average color that is brightest, you can use cv::mean(Mat src, Mat mask).
Sadly I only know the C++ OpenCV implementation, but I think the Java version is almost the same as the C++ one.
C++ Example:
Mat src; // This is your source image
vector<vector<Point>> contours; // This is your array of contours
vector<Vec4i> hierarchy;

// Find the contours in the image
findContours(src.clone(), contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);

int brightestIdx = -1;
double brightestColor = -1;

for(int i = 0; i < contours.size(); i++)
{
    // First, make a mask image of each contour
    Mat mask(src.rows, src.cols, CV_8U, Scalar(0));
    drawContours(mask, contours, i, Scalar(255), CV_FILLED);

    // Second, calculate the average brightness inside the mask
    Scalar m = mean(src, mask);

    // Finally, compare the current average with the best one so far
    if(m[0] > brightestColor)
    {
        brightestColor = m[0];
        brightestIdx = i;
    }
}

// Now you've found the brightest index.
// Do whatever you want with it.
Mat brightest_only(src.rows, src.cols, CV_8U, Scalar(0));
drawContours(brightest_only, contours, brightestIdx, Scalar(255), 1);
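A rough Java translation of the same idea for the process() method in the question (a sketch; it assumes the mean is taken on a grayscale copy made before thresholding, here called grayImage, and reuses the contours list filled by findContours):

// Find the contour with the highest mean intensity inside its filled mask.
int brightestIdx = -1;
double brightestMean = -1;
for (int i = 0; i < contours.size(); i++) {
    Mat mask = Mat.zeros(grayImage.size(), CvType.CV_8U);
    Imgproc.drawContours(mask, contours, i, new Scalar(255), -1); // -1 = filled
    double m = Core.mean(grayImage, mask).val[0];
    if (m > brightestMean) {
        brightestMean = m;
        brightestIdx = i;
    }
}
if (brightestIdx >= 0) {
    // Draw only the brightest contour.
    Imgproc.drawContours(rgbaImage, contours, brightestIdx, new Scalar(255), 2);
}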
My objective here is simply to detect the seat no matter where the image is positioned. To make things even simpler, this is the only image that will be shown on the screen, but the position of the image may change. The user may move it right, left, up or down, and maybe show only part of the image.
I read this thread that shows how to 'brute-force' the image to detect a subset of an image, but when I tried it, it took me 100+ seconds (a really long time, even though I'm not looking for real time), and I also think my challenge is simpler.
Q: What should be my approach here? I've never tried anything with image processing and ready to go this path (if its applicable here).
Thanks!
This is the image that will be shown on the screen (only part of it might be visible, say the user moved it all the way to the right and it shows only the rear wheel with the seat):
The subset image is always like this:
Given JavaCV and OpenCV, this code snippet does the job:
public class RunTest1
{
public static void main(String args[])
{
IplImage src = cvLoadImage("C:\\Users\\Nespresso\\Desktop\\cervelo.jpg",0);
IplImage tmp = cvLoadImage("C:\\Users\\Nespresso\\Desktop\\subImage.jpg",0);
IplImage result = cvCreateImage(cvSize(src.width()-tmp.width()+1, src.height()-tmp.height()+1), IPL_DEPTH_32F, 1);
cvZero(result);
//Match Template Function from OpenCV
cvMatchTemplate(src, tmp, result, CV_TM_CCORR_NORMED);
double[] min_val = new double[2];
double[] max_val = new double[2];
CvPoint minLoc = new CvPoint();
CvPoint maxLoc = new CvPoint();
//Get the Max or Min Correlation Value
cvMinMaxLoc(result, min_val, max_val, minLoc, maxLoc, null);
System.out.println(Arrays.toString(min_val));
System.out.println(Arrays.toString(max_val));
CvPoint point = new CvPoint();
point.x(maxLoc.x()+tmp.width());
point.y(maxLoc.y()+tmp.height());
cvRectangle(src, maxLoc, point, CvScalar.WHITE, 2, 8, 0);//Draw a Rectangle for Matched Region
cvShowImage("Lena Image", src);
cvWaitKey(0);
cvReleaseImage(src);
cvReleaseImage(tmp);
cvReleaseImage(result);
}
}
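One note on the partially visible case: the maximum correlation returned by cvMinMaxLoc can serve as a rough confidence score to reject frames where the seat is not present at all; with CV_TM_CCOEFF_NORMED the peak is usually easier to threshold than with CV_TM_CCORR_NORMED. A small sketch reusing the variables above (the 0.9 cut-off is an arbitrary starting point to tune):

// Only accept the match if the normalized correlation peak is high enough.
if (max_val[0] > 0.9) {
    // Draw the rectangle for the matched region, as above.
    cvRectangle(src, maxLoc, point, CvScalar.WHITE, 2, 8, 0);
} else {
    System.out.println("Template not found with sufficient confidence");
}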
I'm trying to stitch two images together, using the OpenCV Java API. However, I get the wrong output and I cannot work out the problem. I use the following steps:
1. detect features
2. extract features
3. match features.
4. find homography
5. find perspective transform
6. warp perspective
7. 'stitch' the 2 images into a combined image.
But somewhere I'm going wrong. I think it's the way I'm combining the 2 images, but I'm not sure. I get 214 good feature matches between the 2 images, but cannot stitch them.
public class ImageStitching {
static Mat image1;
static Mat image2;
static FeatureDetector fd;
static DescriptorExtractor fe;
static DescriptorMatcher fm;
public static void initialise(){
fd = FeatureDetector.create(FeatureDetector.BRISK);
fe = DescriptorExtractor.create(DescriptorExtractor.SURF);
fm = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
//images
image1 = Highgui.imread("room2.jpg");
image2 = Highgui.imread("room3.jpg");
//structures for the keypoints from the 2 images
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
//structures for the computed descriptors
Mat descriptors1 = new Mat();
Mat descriptors2 = new Mat();
//structure for the matches
MatOfDMatch matches = new MatOfDMatch();
//getting the keypoints
fd.detect(image1, keypoints1);
fd.detect(image2, keypoints2);
//getting the descriptors from the keypoints
fe.compute(image1, keypoints1, descriptors1);
fe.compute(image2,keypoints2,descriptors2);
//getting the matches the 2 sets of descriptors
fm.match(descriptors2,descriptors1, matches);
//turn the matches to a list
List<DMatch> matchesList = matches.toList();
Double maxDist = 0.0; //keep track of max distance from the matches
Double minDist = 100.0; //keep track of min distance from the matches
//calculate max & min distances between keypoints
for(int i=0; i<matchesList.size();i++){
Double dist = (double) matchesList.get(i).distance;
if (dist<minDist) minDist = dist;
if(dist>maxDist) maxDist=dist;
}
System.out.println("max dist: " + maxDist );
System.out.println("min dist: " + minDist);
//structure for the good matches
LinkedList<DMatch> goodMatches = new LinkedList<DMatch>();
//use only the good matches (i.e. whose distance is less than 3*min_dist)
for(int i=0;i<matchesList.size();i++){
if(matchesList.get(i).distance<3*minDist){
goodMatches.addLast(matchesList.get(i));
}
}
//structures to hold points of the good matches (coordinates)
LinkedList<Point> objList = new LinkedList<Point>(); // image1
LinkedList<Point> sceneList = new LinkedList<Point>(); //image 2
List<KeyPoint> keypoints_objectList = keypoints1.toList();
List<KeyPoint> keypoints_sceneList = keypoints2.toList();
//putting the points of the good matches into above structures
for(int i = 0; i<goodMatches.size(); i++){
objList.addLast(keypoints_objectList.get(goodMatches.get(i).queryIdx).pt);
sceneList.addLast(keypoints_sceneList.get(goodMatches.get(i).trainIdx).pt);
}
System.out.println("\nNum. of good matches" +goodMatches.size());
MatOfDMatch gm = new MatOfDMatch();
gm.fromList(goodMatches);
//converting the points into the appropriate data structure
MatOfPoint2f obj = new MatOfPoint2f();
obj.fromList(objList);
MatOfPoint2f scene = new MatOfPoint2f();
scene.fromList(sceneList);
//finding the homography matrix
Mat H = Calib3d.findHomography(obj, scene);
//LinkedList<Point> cornerList = new LinkedList<Point>();
Mat obj_corners = new Mat(4,1,CvType.CV_32FC2);
Mat scene_corners = new Mat(4,1,CvType.CV_32FC2);
obj_corners.put(0,0, new double[]{0,0});
obj_corners.put(1,0, new double[]{image1.cols(),0});
obj_corners.put(2,0, new double[]{image1.cols(),image1.rows()});
obj_corners.put(3,0, new double[]{0,image1.rows()});
Core.perspectiveTransform(obj_corners, scene_corners, H);
//structure to hold the result of the homography matrix
Mat result = new Mat();
//size of the new image - i.e. image 1 + image 2
Size s = new Size(image1.cols()+image2.cols(),image1.rows());
//using the homography matrix to warp the two images
Imgproc.warpPerspective(image1, result, H, s);
int i = image1.cols();
Mat m = new Mat(result,new Rect(i,0,image2.cols(), image2.rows()));
image2.copyTo(m);
Mat img_mat = new Mat();
Features2d.drawMatches(image1, keypoints1, image2, keypoints2, gm, img_mat, new Scalar(254,0,0),new Scalar(254,0,0) , new MatOfByte(), 2);
//creating the output file
boolean imageStitched = Highgui.imwrite("imageStitched.jpg",result);
boolean imageMatched = Highgui.imwrite("imageMatched.jpg",img_mat);
}
public static void main(String args[]){
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
initialise();
}
}
I cannot embed images or post more than 2 links because of reputation points, so I've linked the incorrectly stitched image and an image showing the matched features between the 2 images (to give an understanding of the issue):
incorrect stitched image: http://oi61.tinypic.com/11ac01c.jpg
detected features: http://oi57.tinypic.com/29m3wif.jpg
It seems that you have a lot of outliers that make the estimation of the homography incorrect. So you can use the RANSAC method, which iteratively rejects those outliers.
Not much effort is needed for that; just pass a method parameter to the findHomography function:
Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 3);
Edit
Then make sure that the images given to the detector are 8-bit grayscale images, as mentioned here.
The "incorrectly stitched image" you post looks like having a bad conditioned H matrix. Apart from +dervish suggestions, run:
cv::determinant(H) > 0.01
to check whether your H matrix is "usable". If the matrix is badly conditioned, you get the effect you are showing.
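In the Java API used in the question, the same check could look like this (a minimal sketch; H is the homography computed with findHomography above):

// Reject a degenerate or badly conditioned homography before warping.
// The 0.01 cut-off mirrors the suggestion above and is heuristic.
if (Math.abs(Core.determinant(H)) < 0.01) {
    System.out.println("Homography looks degenerate; skip stitching this pair");
}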
You are drawing onto a 2x2 canvas; if that's the case, you won't see plenty of stitching configurations, i.e. it's fine for image A on the left of image B but not otherwise. Try drawing the output onto a 3x3 canvas instead, using the following snippet:
// Use the Homography Matrix to warp the images, but offset it to the
// center of the output canvas. Careful to pre-multiply, not post-multiply.
cv::Mat Offset = (cv::Mat_<double>(3,3) << 1, 0, width,
                                           0, 1, height,
                                           0, 0, 1);
H = Offset * H;
cv::Mat result;
cv::warpPerspective(mat_l,
result,
H,
cv::Size(3*width, 3*height));
// Copy the reference image to the center of the 3x3 output canvas.
cv::Mat roi = result.colRange(width,2*width).rowRange(height,2*height);
mat_r.copyTo(roi);
where width and height are those of the input images, supposedly both of the same size. Note that this warping assumes mat_l stays unchanged (flat) and mat_r is warped to be stitched onto it.
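For reference, a rough Java translation of the snippet above (a sketch only; it assumes image1 plays the role of mat_l, image2 the role of mat_r, both width x height in size, and reuses the H computed earlier):

int width = image1.cols(), height = image1.rows();

// Translation that shifts the warp into the middle cell of a 3x3 canvas;
// pre-multiplied with the homography, as in the C++ snippet.
Mat offset = Mat.eye(3, 3, CvType.CV_64F);
offset.put(0, 2, width);
offset.put(1, 2, height);
Mat Hoff = new Mat();
Core.gemm(offset, H, 1, new Mat(), 0, Hoff);

// Warp image1 onto the large canvas.
Mat result = new Mat();
Imgproc.warpPerspective(image1, result, Hoff, new Size(3 * width, 3 * height));

// Copy the reference image into the center cell of the canvas.
Mat roi = result.submat(new Rect(width, height, width, height));
image2.copyTo(roi);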