I am trying to find a way to change the pixel color of my masks from black to a different color. Unfortunately, I have not been able to figure out how to do this. Essentially, what I am trying to do is take this image:
and convert the black portions to a color with the values (255, 160, 130). I have tried several methods to achieve this, including drawContours, setTo, and looping through the matrix, but all of these attempts have failed. I have included the code and the resulting outcomes below.
Draw Contours method
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Mat img = Imgcodecs.imread(
"C:\\Users\\Hassan\\Documents\\School\\Me\\COMP5900 Y\\Project\\Project\\src\\resources\\face.jpg");
Mat img_grey = new Mat();
Mat grad = new Mat(), grad_x = new Mat(), grad_y = new Mat();
Mat abs_grad_x = new Mat(), abs_grad_y = new Mat();
int ddepth = CvType.CV_32F;
int scale = 1;
int delta = 0;
Imgproc.GaussianBlur(img, img, new Size(3, 3), 0, 0, Core.BORDER_CONSTANT);
Imgproc.cvtColor(img, img_grey, Imgproc.COLOR_BGR2GRAY);
// Apply Sobel
Imgproc.Sobel(img_grey, grad_x, ddepth, 1, 0, 3, scale, delta, Core.BORDER_DEFAULT);
Imgproc.Sobel(img_grey, grad_y, ddepth, 0, 1, 3, scale, delta, Core.BORDER_DEFAULT);
// converting back to CV_8U
Core.convertScaleAbs(grad_x, abs_grad_x);
Core.convertScaleAbs(grad_y, abs_grad_y);
// Total Gradient (approximate)
Core.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
Photo.fastNlMeansDenoising(grad, grad);
Imgproc.GaussianBlur(grad, grad, new Size(3, 3), 0, 0, Core.BORDER_CONSTANT);
// isolate background
Mat background = new Mat();
Imgproc.threshold(grad, background, 2, 255, Imgproc.THRESH_BINARY);
// draw contours
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(background, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_NONE);
Mat drawing = Mat.zeros(background.size(), CvType.CV_8UC3);
List<MatOfPoint> hullList = new ArrayList<>();
for (MatOfPoint contour : contours) {
    MatOfInt hull = new MatOfInt();
    Imgproc.convexHull(contour, hull);
    Point[] contourArray = contour.toArray();
    Point[] hullPoints = new Point[hull.rows()];
    List<Integer> hullContourIdxList = hull.toList();
    for (int i = 0; i < hullContourIdxList.size(); i++) {
        hullPoints[i] = contourArray[hullContourIdxList.get(i)];
    }
    hullList.add(new MatOfPoint(hullPoints));
}
for (int i = 0; i < contours.size(); i++) {
    Scalar color = new Scalar(255, 160, 130);
    Imgproc.drawContours(drawing, contours, i, color);
    //Imgproc.drawContours(drawing, hullList, i, color );
}
Note that I also tried using Imgproc.RETR_EXTERNAL, but that produced a completely black image. Also, the HighGui window is named "flood fill"; I simply forgot to update the name.
setTo
// replace find and draw contours portion of code above
Mat out = new Mat();
background.copyTo(out);
out.setTo(new Scalar(255, 160, 130), background);
Iterating through the matrix
// replace draw contours portion of code above
for (int a = 0; a < background.rows(); a++) {
    for (int b = 0; b < background.cols(); b++) {
        if (background.get(a, b)[0] == 0) {
            //background.put(a, b, CvType.CV_16F, new Scalar(255, 160, 130));
            double[] data = {255, 160, 130};
            background.put(a, b, data);
        }
    }
}
The loop is promising, but I know it will not be efficient, as I have two other masks that I would like to update as well. Could you please suggest an efficient method that allows me to set the value for all three channels?
Thanks
I am not sure why you are doing so many operations on the image; to me it looks like you simply want to apply a mask and replace a color efficiently. If there are other complexities, then please let me know.
Below is the code you are looking for, in Java.
public static void main(String s[]) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    Mat matr = Imgcodecs.imread("/home/shariq/Desktop/test.png");
    Mat result = new Mat();
    // create a mask based on range (near-black pixels in all three channels)
    Core.inRange(matr, new Scalar(0, 0, 0), new Scalar(50, 50, 50), result);
    Imgcodecs.imwrite("/home/shariq/Desktop/test_in.png", result);
    // apply the mask with the color you are looking for; note the Scalar is in BGR order
    matr.setTo(new Scalar(130, 160, 255), result);
    Imgcodecs.imwrite("/home/shariq/Desktop/result.png", matr);
}
We create a mask for pixel values between 0 and 50 in every channel (i.e. near-black) using the inRange method.
Core.inRange(matr, new Scalar(0, 0, 0), new Scalar(50, 50, 50), result);
The mask in the result variable is then applied to the original matrix using the setTo method. The replacement color is given as a Scalar in the image's channel order, which is BGR for images loaded with imread: new Scalar(a, b, c) means Blue = a, Green = b, Red = c.
matr.setTo(new Scalar(130, 160, 255), result);
It's quite fast compared to iterating over the pixels one by one.
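Since you mentioned having two more masks to recolor, here is a minimal sketch (my addition, not tested against your images) of reusing the same inRange/setTo pattern over several files. The file names and output naming are placeholders, and the usual org.opencv imports from the snippets above are assumed.
// Sketch: apply the same mask-and-recolor step to several mask images.
String[] paths = { "mask1.png", "mask2.png", "mask3.png" }; // placeholder paths
for (String path : paths) {
    Mat m = Imgcodecs.imread(path);                 // loaded as a 3-channel BGR image
    Mat mask = new Mat();
    Core.inRange(m, new Scalar(0, 0, 0), new Scalar(50, 50, 50), mask); // select near-black pixels
    m.setTo(new Scalar(130, 160, 255), mask);       // Scalar is B, G, R
    Imgcodecs.imwrite(path.replace(".png", "_colored.png"), m);
}
Because setTo writes all three channels at once wherever the mask is non-zero, this avoids the per-pixel loop entirely.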
I am working on a program where I am trying to extract the colored squares from a puzzle. I take a frame from a video capture and then find all the contours. I then remove contours that aren't roughly square-shaped (this works all right, but I am looking for a better method). The main problem I am facing is that there are overlapping contours. I use RETR_TREE to get all contours, but when using RETR_EXTERNAL the contours become harder to detect. Is there a way I can improve the detection of squares? Or a way to remove the overlapping contours in the image?
Here is an image of where there are overlapping contours:
There were 11 contours found in this image, but I want only 9 (I drew the rects to make the overlapping a little easier to see). How can I remove the inner contours?
Here is my code:
public Mat captureFrame(Mat capturedFrame){
Mat newFrame = new Mat();
capturedFrame.copyTo(newFrame);
//Gray
Mat gray = new Mat();
Imgproc.cvtColor(capturedFrame, gray, Imgproc.COLOR_RGB2GRAY);
//Blur
Mat blur = new Mat();
Imgproc.blur(gray, blur, new Size(3,3));
//Canny image
Mat canny = new Mat();
Imgproc.Canny(blur, canny, 20, 40, 3, true);
//Dilate image to increase size of lines
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_CROSS, new Size(3,3));
Mat dilated = new Mat();
Imgproc.dilate(canny,dilated, kernel);
List<MatOfPoint> contours = new ArrayList<>();
List<MatOfPoint> squareContours = new ArrayList<>();
//find contours
Imgproc.findContours(dilated, contours, new Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_NONE);
//Remove contours that aren't close to a square shape.
//Wondering if there is a way I can improve this?
for (int i = 0; i < contours.size(); i++) {
    double area = Imgproc.contourArea(contours.get(i));
    MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
    double perimeter = Imgproc.arcLength(contour2f, true);
    //Found squareness equation on wiki...
    //https://en.wikipedia.org/wiki/Shape_factor_(image_analysis_and_microscopy)
    double squareness = 4 * Math.PI * area / Math.pow(perimeter, 2);
    //add contour to new List if it has a square shape.
    if (squareness >= 0.7 && squareness <= 0.9 && area >= 3000) {
        squareContours.add(contours.get(i));
    }
}
MatOfPoint2f approxCurve = new MatOfPoint2f();
for (int n = 0; n < squareContours.size(); n++) {
    //Convert contours(n) from MatOfPoint to MatOfPoint2f
    MatOfPoint2f contour2f = new MatOfPoint2f(squareContours.get(n).toArray());
    //Processing on mMOP2f1 which is in type MatOfPoint2f
    double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
    Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
    //Convert back to MatOfPoint
    MatOfPoint points = new MatOfPoint(approxCurve.toArray());
    // Get bounding rect of contour
    Rect rect = Imgproc.boundingRect(points);
    //length and width should be about the same
    if (Math.abs(rect.height - rect.width) < 10) {
        System.out.printf("%s , %s \n", rect.height, rect.width);
    }
    // draw enclosing rectangle (all same color, but you could use variable i to make them unique)
    Imgproc.rectangle(newFrame, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 0, 0, 255), 3);
}
return newFrame;
}
Fortunately, cv::findContours also provides a hierarchy matrix, which you have ignored in your snippet. The hierarchy is very useful for all retrieval modes other than RETR_EXTERNAL. You can find the detailed documentation of the hierarchy matrix here.
The hierarchy matrix contains an entry in the format [Next, Previous, First_Child, Parent] for each contour. You can then filter the contours with logic such as "keep only those contours whose Parent is -1"; this eliminates the sub-contours nested inside a parent contour (a short sketch of this filter follows the snippet below).
To make use of the hierarchy matrix you need to call cv::findContours as:
Mat hierarchy = new Mat();
Imgproc.findContours(dilated, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_NONE);
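As a minimal Java sketch of that Parent == -1 filter (my illustration, not part of the original answer; it assumes the contours and hierarchy variables from the call above):
// keep only top-level contours; inner (overlapping) contours have a Parent != -1
List<MatOfPoint> outerContours = new ArrayList<>();
for (int i = 0; i < contours.size(); i++) {
    double[] entry = hierarchy.get(0, i); // [Next, Previous, First_Child, Parent]
    if (entry != null && entry[3] == -1) {
        outerContours.add(contours.get(i));
    }
}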
I am writing a program which needs to detect red circle-like shapes in this picture.
I have tried Canny edge detection and findContours, but neither of them finds these red "circles". I also tried converting the image to HSV and detecting by color, but I couldn't determine a good range for this color; maybe the background color confuses it?
Here is a piece of my code with my final attempt:
Mat image = new Mat();
image = Imgcodecs.imread("image.jpg");
Mat hsvImage = new Mat();
Mat grayscaleImage = new Mat();
Mat binaryImage = new Mat();
Imgproc.blur(image, image, new Size(1, 1));
Imgproc.cvtColor(image, hsvImage, Imgproc.COLOR_BGR2HSV);
Imgproc.cvtColor(image, grayscaleImage, Imgproc.COLOR_BGR2GRAY);
Imgproc.equalizeHist(grayscaleImage, grayscaleImage);
Imgproc.Canny(grayscaleImage, grayscaleImage, 50, 150, 3,false);
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(grayscaleImage.clone(), contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
for (int id = 0; id < contours.size(); id++) {
    MatOfPoint2f mop2f = new MatOfPoint2f();
    contours.get(id).convertTo(mop2f, CvType.CV_32F);
    RotatedRect rectangle = Imgproc.minAreaRect(mop2f);
    if (rectangle.boundingRect().width > 80)
        Imgproc.drawContours(image, contours, id, new Scalar(0, 255, 0));
}
If you want to process that marked image, you really might want to detect colors. Typically this is done in HSV color-space.
Here is some C++ code to detect "red" color. The result isn't good enough to use findContours yet, but maybe after some dilation. Maybe you can convert the code to Java.
If you want to detect a different color, change the line redMask = thresholdHue(hsv, 0, 20, 50, 50); to mask = thresholdHue(hsv, yourWantedHueColorValue, 20, 50, 50);
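For instance, with the Java port of thresholdHue that the asker posted further down in this thread, detecting green (hue ≈ 120°, i.e. about 60 on OpenCV's 0–180 hue scale) would only change the second argument:
Mat greenMask = thresholdHue(hsvImage, 60, 20, 50, 50); // 60 = green in OpenCV's 0..180 hue scale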
// for example to shift a circluar hue-channel
cv::Mat shiftChannel(cv::Mat H, int shift, int maxVal = 180)
{
// CV_8UC1 only!
cv::Mat shiftedH = H.clone();
//int shift = 25; // in openCV hue values go from 0 to 180 (so have to be doubled to get to 0 .. 360) because of byte range from 0 to 255
for (int j = 0; j < shiftedH.rows; ++j)
    for (int i = 0; i < shiftedH.cols; ++i)
    {
        shiftedH.at<unsigned char>(j, i) = (shiftedH.at<unsigned char>(j, i) + shift) % maxVal;
    }
return shiftedH;
}
cv::Mat thresholdHue(cv::Mat hsvImage, int hueVal, int range = 30, int minSat = 50, int minValue = 50)
{
// hsvImage must be CV_8UC3 HSV image.
// hue val and range are in openCV's hue range (0 .. 180)
// range shouldnt be bigger than 90, because that's max (all colors), after shifting the hue channel.
// this function will
// 1. shift the hue channel, so that even colors near the border (red color!) will be detectable with same code.
// 2. threshold the hue channel around the value 90 +/- range
cv::Mat mask; // return-value
std::vector<cv::Mat> channels;
cv::split(hsvImage, channels);
int targetHueVal = 180 / 2; // we'll shift the hue-space so that the target val will always be 90 afterwards, no matter which hue value was chosen. This can be important if
int shift = targetHueVal - hueVal;
if (shift < 0) shift += 180;
cv::Mat shiftedHue = shiftChannel(channels[0], shift, 180);
// merge the channels back to hsv image
std::vector<cv::Mat> newChannels;
newChannels.push_back(shiftedHue);
newChannels.push_back(channels[1]);
newChannels.push_back(channels[2]);
cv::Mat shiftedHSV;
cv::merge(newChannels, shiftedHSV);
// threshold
cv::inRange(shiftedHSV, cv::Vec3b(targetHueVal - range, minSat, minValue), cv::Vec3b(targetHueVal + range, 255, 255), mask);
return mask;
}
int main(int argc, char* argv[])
{
cv::Mat input = cv::imread("C:/StackOverflow/Input/redCircleLikeContours.jpg");
cv::Mat redMask;
cv::Mat hsv;
cv::cvtColor(input, hsv, CV_BGR2HSV);
redMask = thresholdHue(hsv, 0, 20, 50, 50);
cv::imshow("red", redMask);
cv::imshow("input", input);
cv::imwrite("C:/StackOverflow/Output/redCircleLikeContoursMask.png", redMask);
cv::waitKey(0);
return 0;
}
Here's the result:
Here is my code in case somebody wants to take a look :)
public static void main (String args[]){
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Mat image = new Mat();
image = Imgcodecs.imread("imageorg.jpg");
if (image.empty()) System.out.println("Wrong path to image");
else System.out.println("Image is fine");
Mat hsvImage = new Mat();
Imgproc.blur(image, image, new Size(3,3));
Imgproc.cvtColor(image, hsvImage, Imgproc.COLOR_BGR2HSV);
Mat redMask = new Mat();
redMask = thresholdHue(hsvImage,0,20,50,50);
Mat kernel = new Mat();
kernel = Imgproc.getStructuringElement(Imgproc.MORPH_DILATE, new Size(2,2));
Mat dilateMat = new Mat();
Imgproc.dilate(redMask, dilateMat, kernel);
Imgcodecs.imwrite("redCircleLikeContours.png", redMask);
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(dilateMat.clone(), contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
List<MatOfPoint> removedContoursList = new ArrayList<MatOfPoint>();
for (int id = 0; id < contours.size(); id++) {
    MatOfPoint2f mop2f = new MatOfPoint2f();
    contours.get(id).convertTo(mop2f, CvType.CV_32F);
    RotatedRect rectangle = Imgproc.minAreaRect(mop2f);
    if (rectangle.boundingRect().height < 10) {
        removedContoursList.add(contours.get(id));
        System.out.println("removing: " + rectangle.boundingRect());
        contours.remove(id);
        id--;
    }
}
}
public static Mat thresholdHue(Mat hsvImage, int hueVal, int range, int minSat, int minValue)
{
Mat mask = new Mat();
List<Mat> channels = new ArrayList<Mat>();
Core.split(hsvImage, channels);
int targetHueVal = 180 / 2;
int shift = targetHueVal - hueVal;
if (shift < 0) shift += 180;
Mat shiftedHue = shiftChannel(channels.get(0), shift, 180);
List<Mat> newChannels = new ArrayList<Mat>();
newChannels.add(shiftedHue);
newChannels.add(channels.get(1));
newChannels.add(channels.get(2));
Mat shiftedHSV = new Mat();
Core.merge(newChannels, shiftedHSV);
Core.inRange(shiftedHSV, new Scalar(targetHueVal - range, minSat, minValue), new Scalar(targetHueVal + range, 255, 255), mask);
return mask;
}
private static Mat shiftChannel(Mat H, int shift, int maxVal)
{
Mat shiftedH = H.clone();
for (int j = 0; j < shiftedH.rows(); ++j)
    for (int i = 0; i < shiftedH.cols(); ++i)
    {
        shiftedH.put(j, i, (shiftedH.get(j, i)[0] + shift) % maxVal);
    }
return shiftedH;
}
I know this is a duplicate post, but I am still stuck on the implementation.
I am following some guides on the internet on how to detect a document in an image using OpenCV and Java.
The first approach I came up with uses findContours after some pre-processing (blur, edge detection). After getting all the contours, I can find the largest one and assume that it is the rectangle I am looking for, but it fails in some cases, e.g. when the document is not fully captured and is missing a corner.
After several attempts with additional processing that did not work at all, I found that the HoughLine transform makes it easier. I now have all the lines in the image, but I still do not know what to do next to define the rectangle of interest.
Here is the implementation code I have so far:
Approach 1: Using findContours
Mat grayImage = new Mat();
Mat detectedEdges = new Mat();
// convert to grayscale
Imgproc.cvtColor(frame, grayImage, Imgproc.COLOR_BGR2GRAY);
// reduce noise with a 3x3 kernel
// Imgproc.blur(grayImage, detectedEdges, new Size(3, 3));
Imgproc.medianBlur(grayImage, detectedEdges, 9);
// Imgproc.equalizeHist(detectedEdges, detectedEdges);
// Imgproc.GaussianBlur(detectedEdges, detectedEdges, new Size(5, 5), 0, 0, Core.BORDER_DEFAULT);
Mat edges = new Mat();
// canny detector, with ratio of lower:upper threshold of 3:1
Imgproc.Canny(detectedEdges, edges, this.threshold.getValue(), this.threshold.getValue() * 3, 3, true);
// makes the object in white bigger
Imgproc.dilate(edges, edges, new Mat(), new Point(-1, -1), 1); // 1
Image imageToShow = Utils.mat2Image(edges);
updateImageView(cannyFrame, imageToShow);
/// Find contours
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(edges, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
// loop over the contours
MatOfPoint2f approxCurve;
double maxArea = 0;
int maxId = -1;
for (MatOfPoint contour : contours) {
    MatOfPoint2f temp = new MatOfPoint2f(contour.toArray());
    double area = Imgproc.contourArea(contour);
    approxCurve = new MatOfPoint2f();
    Imgproc.approxPolyDP(temp, approxCurve, Imgproc.arcLength(temp, true) * 0.02, true);
    if (approxCurve.total() == 4 && area >= maxArea) {
        double maxCosine = 0;
        List<Point> curves = approxCurve.toList();
        for (int j = 2; j < 5; j++) {
            double cosine = Math.abs(angle(curves.get(j % 4), curves.get(j - 2), curves.get(j - 1)));
            maxCosine = Math.max(maxCosine, cosine);
        }
        if (maxCosine < 0.3) {
            maxArea = area;
            maxId = contours.indexOf(contour);
        }
    }
}
MatOfPoint maxMatOfPoint = contours.get(maxId);
MatOfPoint2f maxMatOfPoint2f = new MatOfPoint2f(maxMatOfPoint.toArray());
RotatedRect rect = Imgproc.minAreaRect(maxMatOfPoint2f);
System.out.println("Rect angle: " + rect.angle);
Point points[] = new Point[4];
rect.points(points);
for (int i = 0; i < 4; ++i) {
Imgproc.line(frame, points[i], points[(i + 1) % 4], new Scalar(255, 255, 25), 3);
}
Mat dest = new Mat();
frame.copyTo(dest, frame);
return dest;
Approach 2: Using HoughLine transform
// STEP 1: Edge detection
Mat grayImage = new Mat();
Mat detectedEdges = new Mat();
Vector<Point> start = new Vector<Point>();
Vector<Point> end = new Vector<Point>();
// convert to grayscale
Imgproc.cvtColor(frame, grayImage, Imgproc.COLOR_BGR2GRAY);
// reduce noise with a 3x3 kernel
// Imgproc.blur(grayImage, detectedEdges, new Size(3, 3));
Imgproc.medianBlur(grayImage, detectedEdges, 9);
// Imgproc.equalizeHist(detectedEdges, detectedEdges);
// Imgproc.GaussianBlur(detectedEdges, detectedEdges, new Size(5, 5), 0, 0, Core.BORDER_DEFAULT);
// AdaptiveThreshold -> classify as either black or white
// Imgproc.adaptiveThreshold(detectedEdges, detectedEdges, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 5, 2);
// Imgproc.Sobel(detectedEdges, detectedEdges, -1, 1, 0);
Mat edges = new Mat();
// canny detector, with ratio of lower:upper threshold of 3:1
Imgproc.Canny(detectedEdges, edges, this.threshold.getValue(), this.threshold.getValue() * 3, 3, true);
// apply gaussian blur to smoothen lines of dots
Imgproc.GaussianBlur(edges, edges, new org.opencv.core.Size(5, 5), 5);
// makes the object in white bigger
Imgproc.dilate(edges, edges, new Mat(), new Point(-1, -1), 1); // 1
Image imageToShow = Utils.mat2Image(edges);
updateImageView(cannyFrame, imageToShow);
// STEP 2: Line detection
// Do Hough line
Mat lines = new Mat();
int minLineSize = 50;
int lineGap = 10;
Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 720, (int) this.threshold.getValue(), this.minLineSize.getValue(), lineGap);
System.out.println("MinLineSize: " + this.minLineSize.getValue());
System.out.println(lines.rows());
for (int i = 0; i < lines.rows(); i++) {
    double[] val = lines.get(i, 0);
    Point tmpStartP = new Point(val[0], val[1]);
    Point tmpEndP = new Point(val[2], val[3]);
    start.add(tmpStartP);
    end.add(tmpEndP);
    Imgproc.line(frame, tmpStartP, tmpEndP, new Scalar(255, 255, 0), 2);
}
Mat dest = new Mat();
frame.copyTo(dest, frame);
return dest;
HoughLine result 1
HoughLine result 2
How can I detect the needed rectangle from the HoughLine result?
Can someone give me the next step to complete the HoughLine transform approach?
Any help is appreciated. I've been stuck on this for a while.
Thank you for reading this.
This answer is pretty much a mix of two other answers (here and here) I posted. But the pipeline I used for the other answers can be a little bit improved for your case. So I think it's worth posting a new answer.
There are many ways to achieve what you want. However, I don't think that line detection with HoughLinesP is needed here. So here is the pipeline I used on your samples:
Step 1: Detect edges
Resize the input image if it's too large (I noticed that this pipeline works better on a downscaled version of a given input image)
Blur grayscale input and detect edges with Canny filter
Step 2: Find the card's corners
Compute the contours
Sort the contours by length and only keep the largest one
Generate the convex hull of this contour
Use approxPolyDP to simplify the convex hull (this should give a quadrilateral)
Create a mask out of the approximate polygon
Return the 4 points of the quadrilateral
Step 3: Homography
Use findHomography to find the perspective transformation of your paper sheet (with the 4 corner points found at Step 2)
Warp the input image using the computed homography matrix
NOTE: Of course, once you have found the corners of the paper sheet on the downscaled version of the input image, you can easily compute the position of the corners on the full-sized input image, in order to have the best possible resolution for the warped paper sheet.
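A minimal Java sketch of that rescaling (my addition; corners and scale are assumed names, with scale being the factor used when downscaling, e.g. 0.1 as in the code further down):
// rescale corners found on the downscaled image back to the original resolution
double scale = 0.1;                       // the resize factor used earlier (assumption)
List<Point> fullSizeCorners = new ArrayList<>();
for (Point p : corners) {                 // corners: the 4 points found at Step 2 (assumed name)
    fullSizeCorners.add(new Point(p.x / scale, p.y / scale));
}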
And here is the result:
vector<Point> getQuadrilateral(Mat & grayscale, Mat& output)
{
Mat approxPoly_mask(grayscale.rows, grayscale.cols, CV_8UC1);
approxPoly_mask = Scalar(0);
vector<vector<Point>> contours;
findContours(grayscale, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
vector<int> indices(contours.size());
iota(indices.begin(), indices.end(), 0);
sort(indices.begin(), indices.end(), [&contours](int lhs, int rhs) {
return contours[lhs].size() > contours[rhs].size();
});
/// Find the convex hull object for each contour
vector<vector<Point> >hull(1);
convexHull(Mat(contours[indices[0]]), hull[0], false);
vector<vector<Point>> polygon(1);
approxPolyDP(hull[0], polygon[0], 20, true);
drawContours(approxPoly_mask, polygon, 0, Scalar(255));
imshow("approxPoly_mask", approxPoly_mask);
if (polygon[0].size() >= 4) // we found the 4 corners
{
return(polygon[0]);
}
return(vector<Point>());
}
int main(int argc, char** argv)
{
Mat input = imread("papersheet1.JPG");
resize(input, input, Size(), 0.1, 0.1);
Mat input_grey;
cvtColor(input, input_grey, CV_BGR2GRAY);
Mat threshold1;
Mat edges;
blur(input_grey, input_grey, Size(3, 3));
Canny(input_grey, edges, 30, 100);
vector<Point> card_corners = getQuadrilateral(edges, input);
Mat warpedCard(400, 300, CV_8UC3);
if (card_corners.size() == 4)
{
Mat homography = findHomography(card_corners, vector<Point>{Point(warpedCard.cols, warpedCard.rows), Point(0, warpedCard.rows), Point(0, 0), Point(warpedCard.cols, 0)});
warpPerspective(input, warpedCard, homography, Size(warpedCard.cols, warpedCard.rows));
}
imshow("warped card", warpedCard);
imshow("edges", edges);
imshow("input", input);
waitKey(0);
return 0;
}
This is C++ code, but it shouldn't be too hard to translate into Java.
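As a rough, untested sketch of such a translation (my own adaptation, not the answer author's code), the corner-finding part of getQuadrilateral could look something like this in Java:
static MatOfPoint2f getQuadrilateral(Mat edges) {
    List<MatOfPoint> contours = new ArrayList<>();
    Imgproc.findContours(edges, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
    if (contours.isEmpty()) return new MatOfPoint2f();
    // keep the contour with the most points (the longest one)
    MatOfPoint largest = contours.get(0);
    for (MatOfPoint c : contours) {
        if (c.rows() > largest.rows()) largest = c;
    }
    // convex hull of that contour
    MatOfInt hullIdx = new MatOfInt();
    Imgproc.convexHull(largest, hullIdx);
    Point[] contourPts = largest.toArray();
    Point[] hullPts = new Point[hullIdx.rows()];
    for (int i = 0; i < hullIdx.rows(); i++) {
        hullPts[i] = contourPts[(int) hullIdx.get(i, 0)[0]];
    }
    // simplify the hull; epsilon = 20 as in the C++ code, which should leave ~4 corners
    MatOfPoint2f approx = new MatOfPoint2f();
    Imgproc.approxPolyDP(new MatOfPoint2f(hullPts), approx, 20, true);
    return approx.total() >= 4 ? approx : new MatOfPoint2f();
}
The homography and warp steps then translate almost one-to-one using Calib3d.findHomography and Imgproc.warpPerspective.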
I'm implementing a function to detect circles in an image, using OpenCV for Java to identify the circles. The grayscale image does show a circle.
Here's my code:
Mat gray = new Mat();
Imgproc.cvtColor(img, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.blur(gray, gray, new Size(3, 3));
Mat edges = new Mat();
int lowThreshold = 100;
int ratio = 3;
Imgproc.Canny(gray, edges, lowThreshold, lowThreshold * ratio);
Mat circles = new Mat();
Vector<Mat> circlesList = new Vector<Mat>();
Imgproc.HoughCircles(edges, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 60, 200, 20, 30, 0);
Imshow grayIM = new Imshow("grayscale");
grayIM.showImage(edges);
However, no circles are detected. Any idea why this may be the case?
First, as Miki pointed out, HoughCircles should be applied directly on the grayscale image; it has an internal Canny edge detector of its own.
Second, the parameters of HoughCircles should be tuned to your specific type of images. There isn't a one-setting-fits-all formula.
Based on your code, this worked for me on some generated circles:
public static void main(String[] args) {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Mat img = Highgui.imread("circle-in.jpg", Highgui.CV_LOAD_IMAGE_ANYCOLOR);
Mat gray = new Mat();
Imgproc.cvtColor(img, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.blur(gray, gray, new Size(3, 3));
Mat circles = new Mat();
double minDist = 60;
// higher threshold of Canny Edge detector, lower threshold is twice smaller
double p1UpperThreshold = 200;
// the smaller it is, the more false circles may be detected
double p2AccumulatorThreshold = 20;
int minRadius = 30;
int maxRadius = 0;
// use gray image, not edge detected
Imgproc.HoughCircles(gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, minDist, p1UpperThreshold, p2AccumulatorThreshold, minRadius, maxRadius);
// draw the detected circles
Mat detected = img.clone();
for (int x = 0; x < circles.cols(); x++) {
    double[] c1xyr = circles.get(0, x);
    Point xy = new Point(Math.round(c1xyr[0]), Math.round(c1xyr[1]));
    int radius = (int) Math.round(c1xyr[2]);
    Core.circle(detected, xy, radius, new Scalar(0, 0, 255), 3);
}
Highgui.imwrite("circle-out.jpg", detected);
}
Input image with circles:
Detected circles are colored red:
Note how, in the output image, the white circle on the left was not detected, as it is very close to white. It will be detected if you set p1UpperThreshold = 20.
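For example, only that one argument changes in the call above (illustrative only):
// same call as before, just with a lower Canny upper threshold (20) so the faint circle keeps its edges
Imgproc.HoughCircles(gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, minDist, 20, p2AccumulatorThreshold, minRadius, maxRadius);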