Enhance edges in OpenCV (Java) while removing noise

I'm trying to write an OpenCV program in Java that takes a photo of a marker with a phone and finds circular contours on the marker. I have gotten it working in the Android emulator (where the photo has perfect lighting conditions), but can't get it to work with the phone itself.
This is the marker I'm trying to capture:
After using the combination of transforming to grayscale, a Gaussian blur and a Canny edge detector, I get this output:
If I then try to find contours in the image, the number of contours returned is very high (over 1000), but they aren't closed (the edges seem to be too weak).
The contours drawn over the original image:
This is the code I use for the segmentation:
Mat processed = new Mat(original.height(), original.width(), original.type());
Imgproc.cvtColor(original, processed, Imgproc.COLOR_RGB2GRAY); // grayscale
Imgproc.GaussianBlur(processed, processed, new Size(7, 7), 0.1); // smooth to suppress noise
Imgproc.Canny(processed, processed, 50, 50 * 3, 3, false); // low/high thresholds 50/150
I have tried tweaking the various parameters (thresholds etc.), but that doesn't feel like the right solution. I suspect there has to be a way to enhance the edges returned by the Canny detector, but I haven't found one myself.
Any help is appreciated!

You want something like this?
Noise is usually present, but it can be reduced: you can separate the circles with adaptiveThreshold and then find them using the method described below. The point is that the real-world image you get from the camera may contain a whole bunch of other circles, so it is best to find all the circles, compare them all in size, and pick the 4 circles that most closely match your marker in terms of color, size and placement (a size-matching sketch follows the detector code below).
A quick Python snippet to separate the circles:
import cv2

# Load the input image (path is illustrative)
im = cv2.imread("marker.jpg")
# Make a gray version
gry = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# Adaptive threshold, so uneven lighting does not matter
ada = cv2.adaptiveThreshold(
    gry, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2
)
# Remove noise
out = cv2.medianBlur(ada, 7)
# Invert colors (not strictly needed)
out = ~out
I tested the Python code and it works; you can write an equivalent in Java or C++. I sketched the Java version below on the fly and did not test it, so it probably contains errors, but it gets the point across and should work with small changes. I also included, as the last block, code that should find the circles; working with it is tricky and requires tuning the parameters.
Java:
...
Mat gry = new Mat(), ada = new Mat(), out = new Mat(); // im comes from the elided setup above
Imgproc.cvtColor(im, gry, Imgproc.COLOR_RGBA2GRAY);
Imgproc.adaptiveThreshold(gry, ada, 255,
        Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 11, 2);
Imgproc.medianBlur(ada, out, 7);
...
and for finding circles:
Java:
...
// Configure a blob detector that accepts even rough circles
SimpleBlobDetector_Params params = new SimpleBlobDetector_Params();
params.set_filterByCircularity(true);
params.set_minCircularity(0.01f);
params.set_maxArea(50000); // important: otherwise larger blobs are filtered out
SimpleBlobDetector detector = SimpleBlobDetector.create(params);
// List of detected points
MatOfKeyPoint keyPoints = new MatOfKeyPoint();
detector.detect(ada, keyPoints);
// Draw circles on the final image
Scalar color = new Scalar(127, 0, 255);
for (KeyPoint key : keyPoints.toList()) {
    Imgproc.circle(im, key.pt, (int) (key.size / 2.0f), color, 3 /* thickness */);
}
...
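As mentioned above, once all circles are detected you still need to pick the four that belong to the marker. Here is one hedged way that size-based selection could look, reusing the keyPoints list from the block above; the 4-wide sliding window over sorted sizes and the variable names are just an illustrative heuristic, not part of the original answer.
// (requires java.util.ArrayList, Comparator, List and org.opencv.core.KeyPoint)
List<KeyPoint> kps = new ArrayList<>(keyPoints.toList());
// Sort by blob size so similar-sized circles end up next to each other
kps.sort(Comparator.comparingDouble(k -> k.size));
List<KeyPoint> marker = null;
double bestSpread = Double.MAX_VALUE;
// Slide a window of 4 over the sorted list and keep the tightest group
for (int i = 0; i + 4 <= kps.size(); i++) {
    double spread = kps.get(i + 3).size - kps.get(i).size;
    if (spread < bestSpread) {
        bestSpread = spread;
        marker = new ArrayList<>(kps.subList(i, i + 4));
    }
}
In practice you would also check color and relative placement, as the answer suggests; size alone is rarely enough on a cluttered background.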

You can look into the dilation operation in OpenCV to enhance the edges.
https://docs.opencv.org/4.x/db/df6/tutorial_erosion_dilatation.html
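For instance, a minimal sketch (assuming the processed Mat holds the Canny output from the question) that thickens the weak edges before findContours:
// Thicken / connect the weak Canny edges with a small elliptical kernel
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(3, 3));
Imgproc.dilate(processed, processed, kernel);
// A morphological close can also help bridge small gaps:
// Imgproc.morphologyEx(processed, processed, Imgproc.MORPH_CLOSE, kernel);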
Also look into the example below, which uses Canny edge detection, approxPolyDP and minEnclosingCircle; it is very close to your question. A rough sketch follows the links.
https://docs.opencv.org/4.x/da/d0c/tutorial_bounding_rects_circles.html
https://docs.opencv.org/4.x/d3/dc0/group__imgproc__shape.html#ga0012a5fdaea70b8a9970165d98722b4c
https://docs.opencv.org/4.x/d3/dc0/group__imgproc__shape.html#ga8ce13c24081bbc7151e9326f412190f1
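Roughly along the lines of that tutorial, a hedged Java sketch (again assuming processed is the dilated edge map and original is the input image from the question):
// Find external contours on the edge map, simplify each one, and fit a
// minimum enclosing circle around the simplified polygon
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(processed, contours, new Mat(),
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint contour : contours) {
    MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
    MatOfPoint2f approx = new MatOfPoint2f();
    Imgproc.approxPolyDP(curve, approx, 3, true);   // 3 px tolerance, closed curve
    Point center = new Point();
    float[] radius = new float[1];
    Imgproc.minEnclosingCircle(approx, center, radius);
    Imgproc.circle(original, center, (int) radius[0], new Scalar(0, 255, 0), 2);
}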

Related

How to calculate the number of shapes detected after thresholding

I did color segmentation on my image using Java OpenCV, and the thresholded image is shown as image 1:
I want to count the number of white spots in the thresholded image. I tried to get the count with the findContours() function, but I failed. Please help me. My code is here.
Imgproc.findContours(destination, contours, hierarchy,
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
for (int j = 0; j < contours.size(); j++) {
    sum = sum + contours.size();
}
System.out.println("Sum" + sum);
Jeru's answer is correct for this case. If you have heavier noise that morphological operations won't remove, you can apply a cutoff on the contour area before counting, something like:
count = 0
for contour in contours:
    if cv2.contourArea(contour) > minimal_area:  # cutoff below which a blob is treated as noise
        count += 1
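Since the question's code is in Java, here is a hedged equivalent using the contours list from the findContours call above; minArea is an arbitrary example value you would tune for your spot size.
double minArea = 50;   // illustrative cutoff, tune for your image
int count = 0;
for (MatOfPoint contour : contours) {
    if (Imgproc.contourArea(contour) > minArea) {
        count++;       // only blobs larger than the cutoff are counted
    }
}
System.out.println("White spots: " + count);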

OpenCV thresholding unlike expected

I would like to preprocess a given picture by thresholding it in order to then hand it over to Tesseract. I first did that using Gimp (2.8.16) and a fixed range of 130 – 255. When I then implemented it in OpenCV (3.1) using Java, I first forgot to call cvtColor resulting in a picture that still had some colors in it (these areas were white in Gimp). Besides that, the picture was as expected. However, when I implemented the corresponding call, I got a picture that was different to the one I would have expected. It seems that the areas that were colored previously are now black while the remaining picture is similar to the one I created with Gimp.
Is there anything that I am missing to create a more similar output?
The reason I am asking this question is that, unfortunately, Tesseract (with psm 6) creates quite different results for the two images:
for the one created in Gimp: "2011 1 L 0006"
for the second one created with OpenCV: "2011ÔÇö] L 0 0006 1"
Here is the code that I used:
Mat thres = new Mat();
Mat tmp = new Mat();
Imgproc.cvtColor(src, tmp, Imgproc.COLOR_BGR2GRAY); // tmp = src.clone(); in my first attempt
Imgproc.threshold(tmp, thres, 130, 255, Imgproc.THRESH_BINARY);
Imgcodecs.imwrite("output.jpg", thres);
Here are the pictures:
Given picture:
Picture created with Gimp:
First result using OpenCV:
Second result using OpenCV:
In the first case, you are thresholding a color image (tmp = src.clone() creates another copy of src, which is still a color image), which is why you get that result. In the second case you first convert to grayscale and then threshold, which gives a better result. Thresholding works best on grayscale images.
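A detail worth knowing: threshold applied to a 3-channel Mat thresholds each channel independently, which is why colored regions survived the first attempt. A minimal sketch contrasting the two attempts (the variable names are illustrative, reusing src from the question):
// First attempt: per-channel thresholding of the color image
Mat colorThresh = new Mat();
Imgproc.threshold(src, colorThresh, 130, 255, Imgproc.THRESH_BINARY);
// Second attempt: convert to grayscale first, then threshold the intensities
Mat gray = new Mat();
Mat grayThresh = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(gray, grayThresh, 130, 255, Imgproc.THRESH_BINARY);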

Optimizing performance of morphological processing

I am working on face detection using the YCbCr color space. When I apply it to a human face, there are gaps where the nose, eyes and mouth are, and the resulting patch looks like (a). To remove these gaps, I apply a morphological dilation and get the image shown in (b), but what I want is a patch like the one shown in (c). This means I want to remove the outer contours from the processed patch.
Can anyone please suggest how I can remove these outer contours?
I have a few suggestions for you, though it's hard to verify this without the actual raw images themselves. Try one of these and see if you get something meaningful.
Method #1 - Use imfill followed by imopen
One suggestion I have is to use imfill to fill in any of the holes in the image, followed by a call to imopen to perform morphological opening (i.e. erosion followed by dilation as alluded to by user Paul R). Opening (via imopen) removes any small isolated regions in the image subject to the desired structuring element.
Assuming your image is stored in the variable BW, something like this may work:
BW2 = imfill(BW, 'holes');
se = strel('square', 5);
BW2 = imopen(BW2, se);
BW2 is the final image.
Method #2 - Use bwareaopen followed by imdilate
I can also suggest using the function bwareaopen which removes objects whose areas fall under a certain amount. Try something small like an area of 80 pixels to remove those isolated areas, then use the dilation (imdilate) command that you alluded to in your post:
BW2 = bwareaopen(BW, 80);
%// Place your code for dilation here using BW2
Method #3 - Open your image with imopen then perform imdilate
One final thing I can suggest is to open your image first to remove the spurious small pixel areas, then perform your dilation code as you suggested:
se = strel('square', 5);
BW2 = imopen(BW, se);
%// Place your code for dilation here using BW2
You should do the following steps:
Fill holes => result. This fills all the holes inside the face.
Opening (erosion + dilation) => result. This erases all the small patterns outside the shape.
Even better: replace step 2 with an "opening by reconstruction", which is an erosion followed by a geodesic reconstruction. This operation does not modify the main pattern. See the result.
All these operations should be available in OpenCV or ImageJ.
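A rough OpenCV (Java) sketch of steps 1 and 2, assuming the binary skin mask sits in a Mat named bw with the face in white on a black background; the flood-fill hole-filling trick, the corner seed point and the 5x5 kernel are assumptions, not the poster's code.
// Step 1: fill holes - flood-fill the background from a corner (assumed to be
// background), invert the result, and OR it back into the mask
Mat filled = bw.clone();
Mat mask = Mat.zeros(bw.rows() + 2, bw.cols() + 2, CvType.CV_8U);
Imgproc.floodFill(filled, mask, new Point(0, 0), new Scalar(255));
Core.bitwise_not(filled, filled);
Core.bitwise_or(bw, filled, bw);
// Step 2: opening (erosion + dilation) to erase small patterns outside the face
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
Imgproc.morphologyEx(bw, bw, Imgproc.MORPH_OPEN, kernel);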

Complete the circular contour edge when using canny edge detection and the contour is not occluded

I want to binarize the bone areas (make the bone areas 255 and everything else 0),
but the gray-level distribution is not simple enough (the lower half of the image is brighter) to just pick a single threshold value. So I think detecting the complete contours and filling the spaces inside them may be an easier way.
Original image:
After applying canny edge detection:
I've tried to find a reasonable way to get the contours of these occluded bones, but failed. Please give me advice if you have any. Thank you very much.
I also need to deal with the case where two bones overlap.
(I apologize I didn't mention this in the first place.)
I'm considering how I can separate a pair of overlapping bones:
http://i.imgur.com/dI5s11L.png
Consider using Active Contours (snakes).
They compute "fuzzy" edges by considering both the local gradient and the overall "smoothness" of the curve
(this description isn't very accurate; it's just meant to convey the concept).
I have tried this in several similar cases and got good results.
The low contrast-to-noise ratio of your raw image makes the object extraction challenging, since a threshold setting may not be robust for every image. Still, I tried to extract the bones in your current figure. Two tricks are applied in my processing: (1) a non-linear transform of the image to enhance the bones whose intensity is low compared with the background; (2) padding on the border of the image at the possible bone regions after the Canny edge detector is applied. See my code below:
I=rgb2gray(I);
I=double(I);
I=I.^0.6; % non linear transform before canny edge detector
BW=edge(I,'canny');
%%% padding at the possible bone regions
BW(1,BW(2,:)==1)=1;
BW(end,BW(end-1,:)==1)=1;
BW(BW(:,2)==1,1)=1;
BW(BW(:,end-1)==1,end)=1;
%%% padding in order to fill in the bone boundaries
bw2=imfill(padarray(BW,size(BW),'symmetric'),'holes');
bw2=bw2(size(BW,1)+(1:size(BW,1)),size(BW,2)+(1:size(BW,2))); % crop back to the original size
bw2=bwareaopen(bw2,200); % remove regions that are too small
MASK=I>10; % remove the background with very low intensity
figure,imshow(bw2.*MASK)
The result:
Everything looks good except one bone boundary is a little bit messy.

Ellipse detection with OpenCV

I would like to detect ellipses with OpenCV for Android, using the Tutorial 2-Basic sample included with the OpenCV 2.4.1 package as a starting point. Note that my ellipse would be a perfect, Photoshop-drawn one.
From what I understand, using the "HoughCircles" will only find perfect (or so) circles, thus leaving ellipses out.
Any help would be much appreciated, as I am a total beginner with OpenCV.
This is what I've tried so far:
case Sample2NativeCamera.VIEW_MODE_CANNY: (ignore the Canny mode...)
capture.retrieve(mGray, Highgui.CV_CAP_ANDROID_GREY_FRAME);
Imgproc.HoughCircles(mGray, mCircles, Imgproc.CV_HOUGH_GRADIENT, 1, 20);
Log.d("Ellipse Points", " X " + mCircles.get(1,1)[0] + mCircles.get(1, 1)[1]);
break;
If you think any more info could be useful, please let me know.
One possible solution to your problem is similar to this thread: Detection of coins (and fit ellipses) on an image.
You should take a look at OpenCV's fitEllipse function.
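A minimal fitEllipse sketch, hedged: it assumes you already have a binary edge or threshold image in mGray and a color frame mRgba to draw on, and it uses the 3.x/4.x Java API (in 2.4 the drawing functions live in Core rather than Imgproc).
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(mGray, contours, new Mat(),
        Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint contour : contours) {
    if (contour.rows() >= 5) {                 // fitEllipse needs at least 5 points
        RotatedRect ellipse = Imgproc.fitEllipse(new MatOfPoint2f(contour.toArray()));
        Imgproc.ellipse(mRgba, ellipse, new Scalar(255, 0, 0), 2);
    }
}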
The parameters used in HoughCircles play a fundamental role. HoughCircles will detect not just perfect but also near-perfect circles (ellipses). I suggest you check these examples:
How to detect circles (coins) in a photo
Simple object detection using OpenCV
OpenCV dot target detection
Reshaping noisy coin into a circle form
And this answer has a decent collection of references.
If you already have an idea of the sizes of the ellipses that you're looking for, then try the following steps (a rough sketch follows the list):
Find Canny edges in the image
Use a sliding window, the size of which is the maximum length of the major axis of ellipses you're looking for.
Within the window, collect all edge pixels, pick 6 pixels at random, and use linear least squares to fit an ellipse in the general conic form.
Repeat the above step in a RANSAC-like procedure.
If there are enough inliers, you have an ellipse.
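A hedged Java sketch of that loop. Instead of a hand-rolled general-conic least squares it leans on Imgproc.fitEllipse as the per-sample estimator, and the inlier test uses the normalized ellipse equation as a rough distance proxy; the function name, sample count, tolerance and guards are illustrative.
// (requires org.opencv.core.*, org.opencv.imgproc.Imgproc and java.util.*)
// RANSAC-style ellipse search over a list of Canny edge points
static RotatedRect ransacEllipse(List<Point> edgePoints, int iterations, double tol) {
    if (edgePoints.size() < 6) return null;
    Random rng = new Random();
    RotatedRect best = null;
    int bestInliers = 0;
    for (int it = 0; it < iterations; it++) {
        // Pick 6 distinct edge pixels at random
        Set<Integer> idx = new HashSet<>();
        while (idx.size() < 6) idx.add(rng.nextInt(edgePoints.size()));
        List<Point> sample = new ArrayList<>();
        for (int i : idx) sample.add(edgePoints.get(i));
        RotatedRect cand;
        try {
            cand = Imgproc.fitEllipse(new MatOfPoint2f(sample.toArray(new Point[0])));
        } catch (CvException e) {
            continue;                          // degenerate sample, try again
        }
        double a = cand.size.width / 2.0, b = cand.size.height / 2.0;
        if (a < 1 || b < 1) continue;
        // Count inliers: points whose normalized ellipse equation is close to 1
        double cos = Math.cos(Math.toRadians(cand.angle));
        double sin = Math.sin(Math.toRadians(cand.angle));
        int inliers = 0;
        for (Point p : edgePoints) {
            double dx = p.x - cand.center.x, dy = p.y - cand.center.y;
            double u = (dx * cos + dy * sin) / a;
            double v = (-dx * sin + dy * cos) / b;
            if (Math.abs(u * u + v * v - 1.0) < tol) inliers++;
        }
        if (inliers > bestInliers) { bestInliers = inliers; best = cand; }
    }
    return best;   // null if no plausible ellipse was found
}
You would call this with the edge pixels collected inside each sliding window and accept the result only when bestInliers clears a threshold you choose.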
