I want to make an ellipse mask for cropping an image so that only the contents inside the ellipse are shown.
Could you inspect my code?
public static Mat cropImage(Mat imageOrig, MatOfPoint contour){
Rect rect = Imgproc.boundingRect(contour);
MatOfPoint2f contour2f = new MatOfPoint2f(contour.toArray());
RotatedRect boundElps = Imgproc.fitEllipse(contour2f);
Mat out = imageOrig.submat(rect);
// the line function is working
Imgproc.line(out, new Point(0,0), new Point(out.width(), out.height()), new Scalar(0,0,255), 5);
// but not this one
Imgproc.ellipse(out, boundElps, new Scalar(255, 0, 0), 99);
return out;
}//cropImage
It doesn't seem to work at all. I added the line call to check that I'm drawing on the right image: the line shows up, but there's no ellipse.
Here's a sample output of my cropImage function.
TIA
You're computing the ellipse in the imageOrig coordinate system, but you're drawing it on the cropped image.
If you want to show the ellipse on the crop, you need to translate the ellipse center to account for the shift introduced by the crop (the top-left corner of rect), something like:
boundElps.center.x -= rect.x; boundElps.center.y -= rect.y;
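Putting it together, a minimal sketch of the corrected function (same API as your code; the stroke thickness is reduced to 2 purely for visibility):
public static Mat cropImage(Mat imageOrig, MatOfPoint contour) {
    Rect rect = Imgproc.boundingRect(contour);
    MatOfPoint2f contour2f = new MatOfPoint2f(contour.toArray());
    RotatedRect boundElps = Imgproc.fitEllipse(contour2f);
    // shift the ellipse center into the submat's coordinate system
    boundElps.center.x -= rect.x;
    boundElps.center.y -= rect.y;
    Mat out = imageOrig.submat(rect);
    Imgproc.ellipse(out, boundElps, new Scalar(255, 0, 0), 2);
    return out;
}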
You can try this out:
RotatedRect rRect = Imgproc.minAreaRect(contour2f);
Imgproc.ellipse(out, rRect, new Scalar(255, 0, 0), 3);
You should check the minimum requirements for using fitEllipse, as shown in this post.
The function fitEllipse requires at least 5 points.
Note: Although the reference I mention is for Python, I hope you can do the same for Java.
for cnt in contours:
area = cv2.contourArea(cnt)
# Probably this can help but not required
if area < 2000 or area > 4000:
continue
# This is the check I'm referring to
if len(cnt) < 5:
continue
ellipse = cv2.fitEllipse(cnt)
cv2.ellipse(roi, ellipse, (0, 255, 0), 2)
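A rough Java equivalent of that check (a sketch only; contours is your List<MatOfPoint> and roi is the Mat being drawn on, mirroring the Python snippet):
for (MatOfPoint cnt : contours) {
    // fitEllipse needs at least 5 points
    if (cnt.total() < 5) {
        continue;
    }
    RotatedRect ellipse = Imgproc.fitEllipse(new MatOfPoint2f(cnt.toArray()));
    Imgproc.ellipse(roi, ellipse, new Scalar(0, 255, 0), 2);
}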
Hope it helps!
Related
I am new to Android. I am using OpenCV to detect the face and mouth of a person. It is not detecting the mouth correctly. Can you help me with this?
Here is my code:
mJavaDetectorLip = loadClassifier(R.raw.haarcascade_mcs_mouth,
        "haarcascade_mcs_mouth.xml", cascadeDir);
......
Rect liparea = new Rect(new Point(20,20),new Point(mGray.width() - 20,
mGray.height() - 20 ));
lipArea(mJavaLip,liparea,100);
......
Here is my code:
private Mat lipArea(CascadeClassifier clasificator, Rect area, int size) {
Mat template = new Mat();
Mat mROI = mGray.submat(area);
MatOfRect mouths = new MatOfRect();
Point lips = new Point();
//isolate the mouth region first
clasificator.detectMultiScale(mROI, mouths, 1.1, 2, Objdetect.CASCADE_FIND_BIGGEST_OBJECT
| Objdetect.CASCADE_SCALE_IMAGE, new Size(30, 30), new Size());
Rect[] mouthArray = mouths.toArray();
for (int i = 0; i < mouthArray.length;) {
Rect e = mouthArray[i];
e.x = area.x + e.x;
e.y = area.y + e.y;
Point center1 = new Point(e.x + mouthArray[i].width * 0.5,
e.y + mouthArray[i].height * 0.5);
int radius = (int) Math.round(mouthArray[i].width / 2);
Imgproc.circle(mRgba, center1, radius, new Scalar(255, 0, 0), 4, 8, 0);
return template;
}
return template;
}
It is not staying in one place, it is moving around the whole face.
It is an expected behavior: the features of a mouth are very limited, so there is a high chance of false positives. For example, your eyes have similar features to your lips. To mitigate this issue, the OpenCV docs suggest that we must first detect the faces in a given frame; if there are multiple, choose a single one depending upon the area of the face rect or some other parameter. After successfully detecting the face, divide the face rect into halves and search for the lips in the lower half only.
This would significantly increase your accuracy, because the Haar features for face are pretty complex and well trained. Narrowing down your search domain from the whole frame to lower half of your face would save time as well.
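A rough sketch of that idea, assuming a face classifier mJavaDetectorFace loaded the same way as your mouth classifier (the names here are illustrative):
MatOfRect faces = new MatOfRect();
mJavaDetectorFace.detectMultiScale(mGray, faces);
for (Rect face : faces.toArray()) {
    // search for the mouth in the lower half of the face only
    Rect lowerHalf = new Rect(face.x, face.y + face.height / 2,
            face.width, face.height / 2);
    MatOfRect mouths = new MatOfRect();
    mJavaDetectorLip.detectMultiScale(mGray.submat(lowerHalf), mouths);
    for (Rect m : mouths.toArray()) {
        // translate back to full-frame coordinates before drawing
        Point tl = new Point(lowerHalf.x + m.x, lowerHalf.y + m.y);
        Point br = new Point(tl.x + m.width, tl.y + m.height);
        Imgproc.rectangle(mRgba, tl, br, new Scalar(0, 255, 0), 2);
    }
}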
I'm working on an Android app that determines which font is used in a text image. So I need to extract every character from the image, and I don't know how to do it precisely. Furthermore, when I process an image I get one result... but my classmate gets a different one (for example, more or less noise). The problems with character detection are:
1) it also detects some noise blobs in the image and shows them in rectangles (I thought about detectMultiScale... but I have doubts about it; maybe there are easier ways to detect characters)
2) it detects several contours for one character (for example, the inner and outer outline of the letter "o")
And a question for the future: I'm going to create a DB with images (for now just 3 fonts) of different letters and compare them with an image of letters from a photo. Maybe someone could recommend a better way to do it.
So this is the part of the code with image processing (I'm still playing with the values for blur, threshold and Canny... but there has been no really positive result):
Imgproc.cvtColor(sImage, grayImage, Imgproc.COLOR_BGR2GRAY); // convert to grayscale
Imgproc.GaussianBlur(grayImage, blurImage, new Size(5, 5), 0); // blur
Imgproc.adaptiveThreshold(blurImage, thresImage, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 101, 39);
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.Canny(thresImage, binImage, 30, 10, 3, true); // edge detection
Imgproc.findContours(binImage, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
hierarchy.release();
Imgproc.drawContours(binImage, contours, -1, new Scalar(255, 255, 255));//, 2, 8, hierarchy, 0, new Point());
MatOfPoint2f approxCurve = new MatOfPoint2f();
//For each contour found
for (int i = 0; i < contours.size(); i++) {
//Convert contours(i) from MatOfPoint to MatOfPoint2f
MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
//Processing on mMOP2f1 which is in type MatOfPoint2f
double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
//Convert back to MatOfPoint
MatOfPoint points = new MatOfPoint(approxCurve.toArray());
// Get bounding rect of contour
Rect rect = Imgproc.boundingRect(points);
// draw enclosing rectangle (all same color, but you could use variable i to make them unique)
Imgproc.rectangle(binImage, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 255, 255), 5);
}
And a screenshot (not with the exact processing values from the code, just one with better results):
Original:
(unfortunately, I can't add more than 2 links to show more examples)
There were situations when the picture from this screen looked pretty good, but other pictures looked like shapeless blobs.
Your code is fine; you just need to make a few minor tweaks to get it working properly.
Firstly, the image size is very large; you can safely reduce it to 20% of the current size without suffering a major loss in accuracy. With a large image, all the functions run slower.
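For example, a one-line sketch using the Java API (INTER_AREA is a reasonable interpolation for downscaling):
// downscale to 20% of the original size
Mat small = new Mat();
Imgproc.resize(sImage, small, new Size(), 0.2, 0.2, Imgproc.INTER_AREA);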
You don't need to perform adaptive threshold before Canny; Canny works perfectly well on grayscale images. You need to adjust the params as:
Canny(img, threshold1=170, threshold2=250)
which yields an image as:
[Optional] If you want to de-noise the image, you can try morphological operations like erode and dilate.
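In the Java API those steps might look like this (a sketch; the Canny thresholds are the ones suggested above and the kernel size is an assumption to tune):
// Canny directly on the grayscale image, no adaptive threshold needed
Imgproc.Canny(grayImage, binImage, 170, 250);
// optional de-noising: erode then dilate (morphological opening)
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
Imgproc.erode(binImage, binImage, kernel);
Imgproc.dilate(binImage, binImage, kernel);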
Now you are ready to find the contours. The mistake in your code was using the Imgproc.RETR_TREE flag; you need to use the Imgproc.RETR_EXTERNAL flag to get only the outer contours and not the nested inner contours.
At this step you may have some unwanted small contours, which can be filtered as:
// ** Below code is for reference purposes only; consult the OpenCV docs for the exact API
int characterAreaLowerThresh = 10;
for (MatOfPoint c : contours) {
    if (Imgproc.contourArea(c) > characterAreaLowerThresh) {
        // desired contour, do whatever you want with it
        Rect r = Imgproc.boundingRect(c);
    }
}
--------------read edit below---------------
I am trying to detect the edges of the pupils and irises in various images. I am altering parameters and such, but I can only ever manage to get one iris/pupil outline correct, or I get unnecessary outlines in the background, or none at all. Are there specific parameters I should try in order to get the correct outlines? Or is there a way to crop the image to just the eyes, so the system can focus on that part?
This is my UPDATED method:
private void findPupilIris() throws IOException {
//converts and saves image in grayscale
Mat newimg = Imgcodecs.imread("/Users/.../pic.jpg");
Mat des = new Mat(newimg.rows(), newimg.cols(), newimg.type());
Mat norm = new Mat();
Imgproc.cvtColor(newimg, des, Imgproc.COLOR_BGR2HSV);
List<Mat> hsv = new ArrayList<Mat>();
Core.split(des, hsv);
Mat v = hsv.get(2); //gets the grey scale version
Imgcodecs.imwrite("/Users/Lisa-Maria/Documents/CapturedImages/B&Wpic.jpg", v); //only writes mats
CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8,8) ); //2.0, new Size(8,8)
clahe.apply(v,v);
// Imgproc.GaussianBlur(v, v, new Size(9,9), 3); //adds left pupil boundary and random circle on 'a'
// Imgproc.GaussianBlur(v, v, new Size(9,9), 13); //adds right outer iris boundary and random circle on 'a'
Imgproc.GaussianBlur(v, v, new Size(9,9), 7); //adds left outer iris boundary and random circle on left by hair
// Imgproc.GaussianBlur(v, v, new Size(7,7), 15);
Core.addWeighted(v, 1.5, v, -0.5, 0, v);
Imgcodecs.imwrite("/Users/.../after.jpg", v); //only writes mats
if (v != null) {
Mat circles = new Mat();
Imgproc.HoughCircles( v, circles, Imgproc.CV_HOUGH_GRADIENT, 2, v.rows(), 100, 20, 20, 200 );
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
System.out.println("circles.cols() " + circles.cols());
if(circles.cols() > 0) {
System.out.println("1");
for (int x = 0; x < circles.cols(); x++) {
System.out.println("2");
double vCircle[] = circles.get(0, x);
if(vCircle == null) {
break;
}
Point pt = new Point(Math.round(vCircle[0]), Math.round(vCircle[1]));
int radius = (int) Math.round(vCircle[2]);
//draw the found circle
Imgproc.circle(v, pt, radius, new Scalar(255,0,0),2); //newimg
//Imgproc.circle(des, pt, radius/3, new Scalar(225,0,0),2); //pupil
Imgcodecs.imwrite("/Users/.../Houghpic.jpg", v); //newimg
//draw the mask: white circle on black background
// Mat mask = new Mat( new Size( des.cols(), des.rows() ), CvType.CV_8UC1 );
// Imgproc.circle(mask, pt, radius, new Scalar(255,0,0),2);
// des.copyTo(des,mask);
// Imgcodecs.imwrite("/Users/..../mask.jpg", des); //newimg
Imgproc.logPolar(des, norm, pt, radius, Imgproc.WARP_FILL_OUTLIERS);
Imgcodecs.imwrite("/Users/..../Normalised.jpg",norm);
}
}
}
}
Result: hough pic
Following the discussion in comments, I am posting a general answer with some results I got on the worst-case image uploaded by the OP.
Note: The code I am posting is in Python, since it is the fastest for me to write.
Step 1. As you ask for a way to crop the image so as to focus on the eyes only, you might want to look at Face Detection. Since the image essentially only requires finding the eyes, I did the following:
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
eyes = eye_cascade.detectMultiScale(v)  # v is the value channel of the HSV image
# The result "eyes" gives you the rectangles where eyes were detected, as [x, y, w, h]
# Just for drawing
cv2.rectangle(v, (x1, y1), (x1+w1, y1+h1), (0, 255, 0), 2)
cv2.rectangle(v, (x2, y2), (x2+w2, y2+h2), (0, 255, 0), 2)
Now, once you have the bounding rectangles, you can crop the rectangles from the image like:
crop_eye1 = v[y1:y1+h1, x1:x1+w1]
crop_eye2 = v[y2:y2+h2, x2:x2+w2]
After you obtain the rectangles, I would suggest looking into different color spaces instead of RGB/BGR; HSV/Lab/Luv in particular.
Because the R, G, and B components of an object's color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant.
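For instance, a one-line sketch of such a conversion in the Java API (cropEyeBGR is a hypothetical crop taken from the original BGR image, since the conversion needs 3 channels):
Mat lab = new Mat();
Imgproc.cvtColor(cropEyeBGR, lab, Imgproc.COLOR_BGR2Lab);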
Then, once you have the eyes, it's time to equalize the contrast of the image; however, I suggest using CLAHE and playing with the clipLimit and tileGridSize parameters. Here is some code I implemented a while back in Java:
private static Mat clahe(Mat image, int clipLimit, Size size) {
    CLAHE clahe = Imgproc.createCLAHE();
    clahe.setClipLimit(clipLimit);
    clahe.setTilesGridSize(size);
    Mat destImage = new Mat();
    clahe.apply(image, destImage);
    return destImage;
}
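Usage might look like this (a sketch; the clip limit of 2 and 8x8 tiles are just starting points to tune, and cropEyeGray is a hypothetical grayscale eye crop):
Mat equalized = clahe(cropEyeGray, 2, new Size(8, 8));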
Once you are satisfied, you should sharpen the image so that HoughCircles is robust. You should look at unsharp masking. Here is the unsharp mask code I implemented in Java:
private static Mat unsharpMask(Mat inputImage, Size size, double sigma) {
    // Make sure the inputImage is gray.
    Mat blurredImage = new Mat(inputImage.rows(), inputImage.cols(), inputImage.type());
    Mat sharpenedImage = new Mat(inputImage.rows(), inputImage.cols(), inputImage.type());
    Imgproc.GaussianBlur(inputImage, blurredImage, size, sigma);
    Core.addWeighted(inputImage, 2.0D, blurredImage, -1.0D, 0.0D, sharpenedImage);
    return sharpenedImage;
}
Alternatively, you could use a bilateral filter, which is edge-preserving smoothing, or read through this for defining a custom kernel for sharpening the image.
Hope it helps and best of luck!
I want to get all the outer contours with RETR_EXTERNAL, but for some weird reason OpenCV thinks that the image border is a contour too and therefore discards all inner contours. What exactly am I doing wrong here?
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(imageA, contours, hierarchy, Imgproc.RETR_EXTERNAL,
Imgproc.CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); i++) {
double[] c = hierarchy.get(0, i);
Rect rect = Imgproc.boundingRect(contours.get(i));
Core.rectangle(image, new Point(rect.x, rect.y),
new Point(rect.x + rect.width, rect.y + rect.height),
new Scalar(0, 255, 0), 3);
}
Input (imageA was processed to this before contour-finding):
Output:
EDIT:
Problem partially solved
Inverting the pixels so that black is the background and white is the foreground helped with the image above. However, I still get inner contours on some images, like this one:
Input
Output
Your input image isn't good enough to extract the contours you want to have.
Your input contours are these (part of your image):
Each color is a single contour (and so are some of the white segments).
For the red contour I've drawn the bounding rectangle, which is the same method you used to display the contours. All the other colored contours aren't inside the red contour, but only inside its bounding rectangle; that's why they are found even though you chose to find only the outer contours.
What you really want is something like this:
but to get that result, your input image needs the lines of the ellipse to be connected, too!
For your input image it will be very hard to extract those lines without also picking up lines from the ground, but an easy approach could be to use a couple of dilation operations followed by the same number of erosion operations on your input image before extracting contours. This won't be stable for all settings though ;)
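A minimal sketch of that dilate-then-erode step (morphological closing) with the Java API; the kernel size and iteration count are assumptions you'd need to tune:
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
// dilate to connect the broken ellipse segments, then erode by the same amount
Imgproc.dilate(imageA, imageA, kernel, new Point(-1, -1), 3);
Imgproc.erode(imageA, imageA, kernel, new Point(-1, -1), 3);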
I'm trying to detect the positions of billiard balls on a table from an image taken at a perspective angle. I'm using the getPerspectiveTransform() method to find the transformation matrix, and I want to apply it to only the circles I detect using HoughCircles. I'm trying to go from a rather large trapezoidal shape to a smaller rectangular shape. I don't want to do the transformation on the image first and then run HoughCircles, because the image gets too warped for HoughCircles to provide useful results.
Here's my code:
CvMat mmat = cvCreateMat(3,3,CV_32FC1);
double srcX1 = 462;
double srcX2 = 978;
double srcX3 = 1440;
double srcX4 = 0;
double srcY = 241;
double srcHeight = 772;
double dstX = 56.8;
double dstY = 33.5;
double dstWidth = 262.4;
double dstHeight = 447.3;
CvSeq seq = cvHoughCircles(newGray, circles, CV_HOUGH_GRADIENT, 2.1d, (double)newGray.height()/40, 85d, 65d, 5, 50);
JavaCV.getPerspectiveTransform(new double[]{srcX1, srcY, srcX2,srcY, srcX3, srcHeight, srcX4, srcHeight},
new double[]{dstX, dstY, dstWidth, dstY, dstWidth, dstHeight, dstX, dstHeight}, mmat);
cvWarpPerspective(seq, seq, mmat);
for(int j=0; j<seq.total(); j++){
CvPoint3D32f point = new CvPoint3D32f(cvGetSeqElem(seq, j));
float xyr[] = {point.x(),point.y(),point.z()};
CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
int radius = Math.round(xyr[2]);
cvCircle(gray, center, 3, CvScalar.GREEN, -1, 8, 0);
cvCircle(gray, center, radius, CvScalar.BLUE, 3, 8, 0);
}
The problem is I get this error on the warpPerspective() method:
error: (-215) seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size in function cv::Mat cv::cvarrToMat(const CvArr*, bool, bool, int)
Also I guess it's worth mentioning that I'm using JavaCV, in case the method calls look a bit different than what you're used to. Thanks for any help.
Answer:
The problem with what you want to do (besides the obvious: OpenCV won't let you) is that the radius can't really be warped correctly. AFAIK the x,y coordinates are easy enough to calculate: x' = (m00*x + m01*y + m02) / (m20*x + m21*y + m22) and y' = (m10*x + m11*y + m12) / (m20*x + m21*y + m22), where m is the transformation matrix. The radius you can hack by transforming all the points of the original circle and then finding the max distance between (x', y') and those points (at least if the radius in the warped image is expected to cover all those points).
btw, mij*x above means m(i,j)*x, i.e. the matrix entry at row i, column j, times x (just to clarify)
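If you only need the warped centers, here is a small sketch using the modern OpenCV Java API rather than JavaCV (mmat3x3 stands for your 3x3 matrix from getPerspectiveTransform; the sample points are arbitrary):
// warp just the detected circle centers instead of the whole CvSeq
MatOfPoint2f centers = new MatOfPoint2f(new Point(500, 300), new Point(900, 650));
MatOfPoint2f warpedCenters = new MatOfPoint2f();
Core.perspectiveTransform(centers, warpedCenters, mmat3x3);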
End Answer.
Everything I write is according to the C++ version; I've never used JavaCV, but from what I could see it's just a wrapper that calls the native C++ lib.
CvSeq is a sequence data structure that behaves like a linked list.
The assert your application crashes at is
CV_Assert(seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size);
which means that either your seq instance is empty (total is the number of elements in the sequence) or somehow the inner seq flags are corrupted.
I'd recommend that you check the total member of your CvSeq, and the cvHoughCircles call.
All of this occurs before the actual implementation of cvWarpPerspective (it's the first line in the implementation, which only converts your CvSeq to cv::Mat), so it's not the warping but what you're doing before it.
Anyway, to understand what's wrong with cvHoughCircles, we'll need more info about the creation of newGray and circles.
Here is an example I've found on the JavaCV page (Link):
IplImage gray = cvCreateImage( cvSize( img.width, img.height ), IPL_DEPTH_8U, 1 );
cvCvtColor( img, gray, CV_RGB2GRAY );
// smooth it, otherwise a lot of false circles may be detected
cvSmooth(gray,gray,CV_GAUSSIAN,9,9,2,2);
CvMemStorage circles = CvMemStorage.create();
CvSeq seq = cvHoughCircles(gray, circles.getPointer(), CV_HOUGH_GRADIENT,
2, img.height/4, 100, 100, 0, 0);
for(int i=0; i<seq.total; i++){
float xyr[] = cvGetSeqElem(seq,i).getFloatArray(0, 3);
CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
int radius = Math.round(xyr[2]);
cvCircle(img, center.byValue(), 3, CvScalar.GREEN, -1, 8, 0);
cvCircle(img, center.byValue(), radius, CvScalar.BLUE, 3, 8, 0);
}
From what I've seen in the implementation of cvHoughCircles, the answer is saved in the circles buffer, and at the end the returned CvSeq is created from it, so if you've allocated the circles buffer wrong, it won't work.
EDIT:
As you can see, the CvSeq instance returned from cvHoughCircles is a list of point values; that is probably why the assertion failed. You cannot convert this CvSeq into a cv::Mat, because it's just not a cv::Mat. To get only the circles returned from cvHoughCircles into a cv::Mat instance, you'll need to create a new cv::Mat instance and then draw all the circles from the CvSeq onto it, as seen in the provided example above.
Then the warping will work (you'll have a cv::Mat instance, and a cv::Mat is what the function expects, rather than a CvSeq of points).
END EDIT
Here is the C++ reference for CvSeq,
and if you want to fiddle with the source code, then:
cvarrToMat is in matrix.cpp
CV_ELEM_SIZE is in types_c.h
cvWarpPerspective is in imgwarp.cpp
cvHoughCircles is in hough.cpp
I hope that will help.
BTW, your next error will probably be:
cv::warpPerspective in the C++ OpenCV asserts that dst.data != src.data,
thus
cvWarpPerspective(seq, seq, mmat);
won't work, because your source mat and destination mat reference the same data.
Not all functions in OpenCV (and in image processing in general) work in place, either because there is no in-place algorithm or because it would be slower than the out-of-place version (e.g. transposing an n*n mat works in place, but an n*m mat where n != m is harder to do in place and might be slower).
You can't assume that using the src matrix as the dst will work.
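A sketch of the out-of-place call, shown with the modern OpenCV Java API for clarity rather than JavaCV (src and transform are placeholders for your input mat and 3x3 matrix):
// warp into a separate destination mat instead of in place
Mat dst = new Mat();
Imgproc.warpPerspective(src, dst, transform, src.size());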