I am trying to detect and crop circular/elliptical shapes of different sizes.
This is an example of an image I am trying to run the detection and cropping on.
Input Image
The result I am trying to get in the aforementioned image is 3 cropped images
looking like this:
segmented part 1, segmented part 2, segmented part 3
Another image could look like this: different image
Just like with the previous image, I am trying to do the same to this one.
The shapes here are dramatically smaller than in the first one.
Can this be achieved algorithmically or should I look for a machine learning solution?
Note: The following filters have been applied to the final image: Gaussian Blur, Grayscale, Threshold, Contour and Morphological Dilation.
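For reference, a minimal sketch of such a preprocessing pipeline in Java/OpenCV could look like the following (input here stands for the source Mat; the kernel sizes and the Otsu threshold are assumptions, not necessarily the exact settings used for the image above):
// assumed preprocessing: grayscale -> Gaussian blur -> threshold -> dilation
Mat gray = new Mat();
Imgproc.cvtColor(input, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
Mat binary = new Mat();
Imgproc.threshold(gray, binary, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(3, 3));
Imgproc.dilate(binary, binary, kernel);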
[EDIT]
The code I have written (not working as intended):
findReference() finds a shape in the middle of the image and returns its rectangle.
private Rect findReference(Mat inputImage) {
// clone the image
Mat original = inputImage.clone();
// find the center of the image
double[] centers = {(double)inputImage.width()/2, (double)inputImage.height()/2};
Point image_center = new Point(centers);
// finding the contours
ArrayList<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(inputImage, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
// finding the best bounding rectangle for the contour whose distance to the image center is smaller than that of the other ones
double d_min = Double.MAX_VALUE;
Rect rect_min = new Rect();
for (MatOfPoint contour : contours) {
Rect rec = Imgproc.boundingRect(contour);
// find the best candidates
if (rec.height > inputImage.height()/2 && rec.width > inputImage.width()/2){
continue;
}
Point pt1 = new Point((double)rec.x, (double)rec.y);
Point center = new Point(rec.x+(double)(rec.width)/2, rec.y + (double)(rec.height)/2);
double d = Math.sqrt(Math.pow((double)(pt1.x-image_center.x),2) + Math.pow((double)(pt1.y -image_center.y), 2));
if (d < d_min)
{
d_min = d;
rect_min = rec;
}
}
// showReference( rect_min, original);
return rect_min;
}
I use the rectangle as a reference and create a bigger one and a smaller one, so that similar shapes fit between the dimensions of the smaller and the bigger rectangle.
findAllEllipses() tries to find similar shapes fitting in the smaller and bigger rectangles. After that it draws ellipses around the found shapes.
private Mat findAllEllipses(Rect referenceRect, Mat inputImage) {
float per = 0.5f;
float perSquare = 0.05f;
Rect biggerRect = new Rect();
Rect smallerRect = new Rect();
biggerRect.width = (int) (referenceRect.width / per);
biggerRect.height = (int) (referenceRect.height / per);
smallerRect.width = (int) (referenceRect.width * per);
smallerRect.height = (int) (referenceRect.height * per);
System.out.println("reference rectangle height: " + referenceRect.height + " width: " + referenceRect.width);
System.out.println("[" + 0 +"]: biggerRect.height: " + biggerRect.height + " biggerRect.width: " + biggerRect.width);
System.out.println("[" + 0 +"]: smallerRect.height: " + smallerRect.height + " smallerRect.width: " + smallerRect.width);
//Finding Contours
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(inputImage, contours, hierarchy, Imgproc.RETR_TREE,
Imgproc.CHAIN_APPROX_SIMPLE);
System.out.println("the numbers of found contours is: " + contours.size());
int sum = 0;
//Empty rectangle
RotatedRect[] rec = new RotatedRect[contours.size()];
for (int i = 0; i < contours.size(); i++) {
rec[i] = new RotatedRect();
if(contours.get(i).toArray().length >= 5 ){
Rect foundRect = Imgproc.boundingRect(contours.get(i));
// Rect foundBigger = new Rect();
// Rect foundSmaller = new Rect();
//
// foundBigger.width = (int) (foundBigger.width + foundBigger.width * per);
// foundBigger.height = (int) (foundBigger.height + foundBigger.height * per);
//
// foundSmaller.width = (int) (foundRect.width - foundRect.width * per);
// foundSmaller.height = (int) (foundRect.height - foundRect.height * per);
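// keep the contour only if its bounding box fits between smallerRect and biggerRect,
// and its width and height differ by at most perSquare (i.e. the box is roughly square)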
if (
(biggerRect.height >= foundRect.height && biggerRect.width >= foundRect.width)
&& (smallerRect.height <= foundRect.height && smallerRect.width <= foundRect.width)
&& (((foundRect.width - foundRect.width * perSquare) <= foundRect.height) && ((foundRect.width + foundRect.width * perSquare) >= foundRect.height))
&& (((foundRect.height - foundRect.height * perSquare) <= foundRect.width) && ((foundRect.height + foundRect.height * perSquare) >= foundRect.width))
) {
System.out.println("[" + i +"]: foundRect.width: " + foundRect.width + " foundRect.height: " + foundRect.height);
System.out.println("----------------");
rec[i] = Imgproc.fitEllipse(new MatOfPoint2f(contours.get(i).toArray()));
sum++;
}
}
Scalar color_elli = new Scalar(190, 0, 0);
Imgproc.ellipse(inputImage, rec[i], color_elli, 5);
}
System.out.println("found ellipses: " + sum);
// trytest(ImageUtils.doResizeMat(outputImage),0,0);
return inputImage;
}
Unfortunately there are several variables that are hardcoded into the method.
This one is used to make the smaller and bigger rectangles (it is used as a percentage):
float per = 0.5f;
perSquare is used to keep only shapes that are close to a square (allowed fluctuation between width and height):
float perSquare = 0.05f;
This code might work on some images, while on others it will not find a single shape; as I mentioned, the shapes are circular/elliptical and of different sizes.
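One algorithmic direction that avoids the hardcoded reference rectangles is a scale-invariant roundness test per contour. This is only a sketch of the idea, not a drop-in replacement for the method above; the 0.7 circularity threshold is an assumption that would need tuning:
// circularity = 4*pi*area / perimeter^2 is 1.0 for a perfect circle
// and does not depend on the size of the shape
for (MatOfPoint contour : contours) {
    double area = Imgproc.contourArea(contour);
    MatOfPoint2f contour2f = new MatOfPoint2f(contour.toArray());
    double perimeter = Imgproc.arcLength(contour2f, true);
    if (perimeter == 0 || contour.toArray().length < 5) continue;
    double circularity = 4 * Math.PI * area / (perimeter * perimeter);
    if (circularity > 0.7) {
        RotatedRect ellipse = Imgproc.fitEllipse(contour2f);
        Imgproc.ellipse(inputImage, ellipse, new Scalar(190, 0, 0), 5);
        Rect box = Imgproc.boundingRect(contour);
        // Mat crop = inputImage.submat(box);  // crop the detected shape
    }
}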
Related
I am trying to use Mask_RCNN in Android, so I have written code for it, but the Dnn.readNetFromTensorflow(MODEL_WEIGHTS, TEXT_GRAPH) function is not able to open the model weights and text graph files. It seems the app is not able to find the path of the stored files.
I have placed these files (Text_Graph and Model_Weights) in the java folder where MainActivity.java is present, but it shows the same error as mentioned below.
I also tried adding these files to the res folder and the assets folder, but could not pass the model weight path to Dnn.readNetFromTensorflow(MODEL_WEIGHTS, TEXT_GRAPH) because its arguments are of String type, while AssetManager returns an InputStream.
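A common workaround, sketched below, is to copy each asset to the app's internal storage once and pass that absolute path to readNetFromTensorflow (the file names are the ones from the question; error handling is omitted):
// copy an asset to internal storage and return its absolute path,
// so OpenCV's file-based loaders can open it
private String assetToFile(Context context, String assetName) throws IOException {
    File outFile = new File(context.getFilesDir(), assetName);
    try (InputStream in = context.getAssets().open(assetName);
         OutputStream out = new FileOutputStream(outFile)) {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    }
    return outFile.getAbsolutePath();
}
// usage inside the Activity:
// String weights = assetToFile(this, "frozen_inference_graph.pb");
// String graph = assetToFile(this, "mask_rcnn_inception_v2_coco_2018_01_28.pbtxt");
// Net net = Dnn.readNetFromTensorflow(weights, graph);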
I am using opencv 3.4.10 and android studio 4.0.
Also, please help me with drawing contours over the image in Java. If anyone knows how to run it in Android using the model in C++ then please suggest that; I have already tried that approach as well, but was getting a build error of undefined reference to 'cv::dnn::experimental_dnn_v4::Net::~Net()'.
Any help would be appreciated. Thanks in advance.
On running the app it gives the following exception:
Caused by: CvException [org.opencv.core.CvException: cv::Exception:
OpenCV(3.4.10)
/build/3_4_pack-android/opencv/modules/dnn/src/caffe/caffe_io.cpp:1132:
error: (-2:Unspecified error) FAILED: fs.is_open(). Can't open
"./frozen_inference_graph.pb" in function 'bool
cv::dnn::ReadProtoFromBinaryFile(const char*,
google::protobuf::Message*)'
]
at org.opencv.dnn.Dnn.readNetFromTensorflow_0(Native Method)
at org.opencv.dnn.Dnn.readNetFromTensorflow(Dnn.java:659)
at com.example.imagecompressor.MainActivity.onActivityResult(MainActivity.java:189)
(the error is at this line: Net net = Dnn.readNetFromTensorflow(MODEL_WEIGHTS, TEXT_GRAPH);)
at android.app.Activity.dispatchActivityResult(Activity.java:7454)
at android.app.ActivityThread.deliverResults(ActivityThread.java:4353)
Source code is given below:
MainActivity.java
final String TEXT_GRAPH = "./mask_rcnn_inception_v2_coco_2018_01_28.pbtxt";
final String MODEL_WEIGHTS = "./frozen_inference_graph.pb";
final String CLASSES_FILE ="./mscoco_labels";
Mat tmp = new Mat(bitmap.getWidth(), bitmap.getHeight(), CV_8UC1);
Utils.bitmapToMat(bitmap, tmp);
Mat image = Imgcodecs.imread(img_path);
image=tmp;
Size size = image.size();
int cols = image.cols();
int rows = image.rows();
double h = size.height;
double w = size.width;
int hh = (int)size.height;
int ww = (int)size.width;
if(!image.empty()) {
Mat blob = Dnn.blobFromImage(image, 1.0, new Size(w, h), new Scalar(0), true, false);
// Load the network
Net net = Dnn.readNetFromTensorflow(MODEL_WEIGHTS, TEXT_GRAPH);
net.setPreferableBackend(Dnn.DNN_BACKEND_OPENCV);
net.setPreferableTarget(Dnn.DNN_TARGET_CPU);
net.setInput(blob);
ArrayList<String> outputlayers = new ArrayList<String>();
ArrayList<Mat> outputMats = new ArrayList<Mat>();
outputlayers.add("detection_out_final");
outputlayers.add("detection_masks");
net.forward(outputMats, outputlayers);
Mat numClasses = outputMats.get(0);
Mat numMasks = outputMats.get(1);
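// detection_out_final returns 7 values per detection:
// [imageId, classId, confidence, left, top, right, bottom], with normalized box coordinates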
numClasses = numClasses.reshape(1, (int) numClasses.total() / 7);
for (int i = 0; i < numClasses.rows(); ++i) {
double confidence = numClasses.get(i, 2)[0];
//System.out.println(confidence);
// Mat objectMask=outputMats.get(i);
if (confidence > 0.5) {
int classId = (int) numClasses.get(i, 1)[0];
String label = classes.get(classId) + ": " + confidence;
System.out.println(label);
int left = (int) (numClasses.get(i, 3)[0] * cols);
int top = (int) (numClasses.get(i, 4)[0] * rows);
int right = (int) (numClasses.get(i, 5)[0] * cols);
int bottom = (int) (numClasses.get(i, 6)[0] * rows);
System.out.println(left + " " + top + " " + right + " " + bottom);
left = max(0, min(left, cols - 1));
top = max(0, min(top, rows - 1));
right = max(0, min(right, cols - 1));
bottom = max(0, min(bottom, rows - 1));
final Rect box = new Rect(left, top, right - left + 1, bottom - top + 1);
//Mat objectMask(numMasks.rows(), numMasks.size[3],CV_32F, numMasks.ptr<float>(i,classId));
// Mat obj();
Mat objectMask = new Mat(numMasks.rows(), numMasks.cols(), CV_32F);
rectangle(image, new Point(box.x, box.y), new Point(box.x + box.width, box.y + box.height), new Scalar(255, 178, 50), 3);
/* String lab = format("%.2f", confidence);
if (!classes.isEmpty()){
//CV_Assert(classId < (int)classes.size());
if(classId<(int)classes.size()) {
lab = classes.get(classId) + ":" + lab;
}
}*/
Scalar color = new Scalar(rng.nextInt(256), rng.nextInt(256), rng.nextInt(256));
double maskThreshold = 0.3;
// Resize the mask, threshold, color and apply it on the image
resize(objectMask, objectMask, new Size(box.width, box.height));
Imgproc.threshold(objectMask, objectMask, 255 * maskThreshold, 255, Imgproc.THRESH_BINARY);
// Mat mask = (objectMask > maskThreshold);
Mat ili = new Mat();
multiply(image, new Scalar(0.7), ili);
Mat coloredRoi = new Mat();
add(ili, new Scalar(0.3).mul(color), coloredRoi);
coloredRoi.convertTo(coloredRoi, CV_8UC3);
// contours and hierarchy must be initialized before calling findContours
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
objectMask.convertTo(objectMask, CV_8U);
findContours(objectMask, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE);
drawContours(coloredRoi, contours, -1, color, 5, LINE_8, hierarchy, 100);
coloredRoi.copyTo(image, objectMask);
}
Mat detectedFrame = new Mat();
image.convertTo(detectedFrame, CV_8U);
Imgcodecs.imwrite("outputFile.jpg", detectedFrame);
}
}
Good night. I'm using this code to find and crop the letters in my image. However, I get this error:
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in cv::Mat::Mat
I do not know how to fix that. I've already searched about this, but I'm not finding the solution. Can anyone help me?
Mat image = Imgcodecs.imread("C:\\Users\\Me\\Desktop\\Programs\\Image2.png", Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE);
// clone the image
Mat original = image.clone();
// thresholding the image to make a binary image
Imgproc.threshold(image, image, 220, 60, Imgproc.THRESH_BINARY_INV);
// find the center of the image
double[] centers = {(double)image.width()/2, (double)image.height()/2};
Point image_center = new Point(centers);
// finding the contours
ArrayList<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(image, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
// finding the best bounding rectangle for the contour whose distance to the image center is smaller than that of the other ones
double d_min = Double.MAX_VALUE;
Rect rect_min = new Rect();
for (MatOfPoint contour : contours) {
Rect rec = Imgproc.boundingRect(contour);
// find the best candidates
if (rec.height > image.height()/2 && rec.width > image.width()/2)
continue;
Point pt1 = new Point((double)rec.x, (double)rec.y);
Point center = new Point(rec.x+(double)(rec.width)/2, rec.y + (double)(rec.height)/2);
double d = Math.sqrt(Math.pow((double)(pt1.x-image_center.x),2) + Math.pow((double)(pt1.y -image_center.y), 2));
if (d < d_min)
{
d_min = d;
rect_min = rec;
}
}
// slicing the image for result region
int pad = 5;
rect_min.x = rect_min.x - pad;
rect_min.y = rect_min.y - pad;
rect_min.width = rect_min.width + 2*pad;
rect_min.height = rect_min.height + 2*pad;
Mat result = original.submat(rect_min);
Imgcodecs.imwrite("C:\\Users\\Me\\Desktop\\Programs\\result.png", result);
The error points to this line:
Mat result = original.submat(rect_min);
It is most likely that rect_min has dimensions that are either negative (rect_min.x = rect_min.x - pad; can push x or y below 0) or larger than those of the original image (rect_min.width = rect_min.width + 2*pad; can make rect_min.x + rect_min.width exceed original.width()).
A possible fix is to crop the original image with the unmodified rect_min and then, if you want padding, use copyMakeBorder.
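A minimal sketch of that approach (the pad value and output path are only illustrative):
// crop with the unmodified rectangle, then add the padding as a constant border
Mat cropped = original.submat(rect_min);
Mat padded = new Mat();
int pad = 5;
Core.copyMakeBorder(cropped, padded, pad, pad, pad, pad, Core.BORDER_CONSTANT, new Scalar(0));
Imgcodecs.imwrite("result.png", padded);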
That means the boundaries of rect_min are going beyond the boundaries of the original image.
Maybe the padding is making it so?
You should print out the size of the original image and the size of rect_min to find out.
I'm trying to detect corners, but the coordinates I get are always off-center, and saddle points are detected multiple times.
I tried cornerHarris, cornerMinEigenVal, preCornerDetect, goodFeaturesToTrack, and cornerEigenValsAndVecs, but they all seem to lead to the same result. I haven't tried findChessboardCorners because my corners are not laid out in a nice grid of n×m, are not all saddle-type, and for many more reasons.
What I have now:
Given the (pre-processed) camera image below with some positive, negative, and saddle corners:
After cornerHarris(img, energy, 20, 9, 0.1) (I increased blockSize to 20 for illustrative purposes but small values don't work either) I get this image:
It seems to detect 10 corners but the way they are positioned is odd. I superimposed this image on the original to show my problem:
The point of highest matching energy is offset towards the inside of the corner and there is a plume pointing away from the corner. The saddle corners seem to generate four separate plumes all superimposed.
Indeed, when I perform a corner-search using this energy image, I get something like:
What am I doing wrong and how can I detect corners accurately like in this mock image?
[EDIT] MCVE:
public class CornerTest {
static {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
}
private static Mat energy = new Mat();
private static Mat idx = new Mat();
public static void main(String... args) {
Mat byteImage = Highgui.imread("KXw7O.png");
if (byteImage.channels() > 1)
Imgproc.cvtColor(byteImage, byteImage, Imgproc.COLOR_BGR2GRAY);
// Preprocess
Mat floatImage = new Mat();
byteImage.convertTo(floatImage, CvType.CV_32F);
// Corner detect
Mat imageToShow = findCorners(floatImage);
// Show in GUI
imageToShow.convertTo(byteImage, CvType.CV_8U);
BufferedImage bufImage = new BufferedImage(byteImage.width(), byteImage.height(), BufferedImage.TYPE_BYTE_GRAY);
byte[] imgArray = ((DataBufferByte)bufImage.getRaster().getDataBuffer()).getData();
byteImage.get(0, 0, imgArray);
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
frame.getContentPane().add(new JLabel(new ImageIcon(bufImage)));
frame.pack();
frame.setVisible(true);
}
private static Mat findCorners(Mat image) {
Imgproc.cornerHarris(image, energy, 20, 9, 0.1);
// Corner-search:
int minDistance = 16;
Core.MinMaxLocResult minMaxLoc = Core.minMaxLoc(
energy.submat(20, energy.rows() - 20, 20, energy.rows() - 20));
float thr = (float)minMaxLoc.maxVal / 4;
Mat tmp = energy.reshape(1, 1);
Core.sortIdx(tmp, idx, 16); // 16 = CV_SORT_EVERY_ROW | CV_SORT_DESCENDING
int[] idxArray = new int[idx.cols()];
idx.get(0, 0, idxArray);
float[] energyArray = new float[idx.cols()];
energy.get(0, 0, energyArray);
int n = 0;
for (int p : idxArray) {
if (energyArray[p] == -1) continue;
if (energyArray[p] < thr) break;
n++;
int x = p % image.cols();
int y = p / image.cols();
// Exclude a disk around this corner from potential future candidates
int u0 = Math.max(x - minDistance, 0) - x;
int u1 = Math.min(x + minDistance, image.cols() - 1) - x;
int v0 = Math.max(y - minDistance, 0) - y;
int v1 = Math.min(y + minDistance, image.rows() - 1) - y;
for (int v = v0; v <= v1; v++)
for (int u = u0; u <= u1; u++)
if (u * u + v * v <= minDistance * minDistance)
energyArray[p + u + v * image.cols()] = -1;
// A corner is found!
Core.circle(image, new Point(x, y), minDistance / 2, new Scalar(255, 255, 255), 1);
Core.circle(energy, new Point(x, y), minDistance / 2, new Scalar(minMaxLoc.maxVal, minMaxLoc.maxVal, minMaxLoc.maxVal), 1);
}
System.out.println("nCorners: " + n);
// Rescale energy image for display purpose only
Core.multiply(energy, new Scalar(255.0 / minMaxLoc.maxVal), energy);
// return image;
return energy;
}
}
I am wondering how to draw a bounding box around contours using JavaCV. I know the area in pixels and the center point. I also found a way to use the pixel width to find the distance. I feel a bounding box would be more accurate for finding the pixel width, and therefore the distance, than what I am doing. Any help would be great, or if you know another way to find the distance that would be great too. Thanks...
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.opencv_core.CvMemStorage;
import org.bytedeco.javacpp.opencv_core.IplImage;
import org.bytedeco.javacpp.opencv_videoio.CvCapture;
import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_imgproc.*;
public class Webcam {
public static void main(String[] args) throws Exception {
CvCapture capture = opencv_videoio.cvCreateCameraCapture(0);
IplImage img1, imghsv, imgbin;
CvScalar minc = cvScalar(95,125,75,0), maxc = cvScalar(145,255,255,0);
CvSeq contour1 = new CvSeq(), contour2;
CvMemStorage storage = CvMemStorage.create();
CvMoments moments = new CvMoments(Loader.sizeof(CvMoments.class));
double areaMax = 1000, areaC = 0;
double m01, m10, m_area, focal, width, obj_width, obj_height;
double distance;
//focal is (pixel width * distance in inches) / object width
focal = 144.4;
//Real objects width in inches
obj_width = 3.5;
//Real objects height in inches
obj_height = 3.5;
int posX=0, posY=0;
int cRad = 100;
while(true)
{
img1 = opencv_videoio.cvQueryFrame(capture);
opencv_imgproc.cvSmooth(img1, img1, CV_MEDIAN, 13, 0, 0, 0);
imgbin = IplImage.create(cvGetSize(img1), 8, 1);
imghsv = IplImage.create(cvGetSize(img1), 8, 3);
if(img1 == null) break;
cvCvtColor(img1, imghsv, CV_BGR2HSV);
cvInRangeS(imghsv, minc, maxc, imgbin);
contour1 = new CvSeq();
areaMax = 1000;
cvFindContours(imgbin, storage, contour1, Loader.sizeof(CvContour.class), CV_RETR_LIST, CV_LINK_RUNS, cvPoint(0,0));
contour2 = contour1;
while(contour1 != null && !contour1.isNull())
{
areaC = cvContourArea(contour1, CV_WHOLE_SEQ, 1);
if(areaC > areaMax)
{
areaMax = areaC;
}
contour1 = contour1.h_next();
}
while(contour2 != null && !contour2.isNull())
{
areaC = cvContourArea(contour2, CV_WHOLE_SEQ, 1);
if(areaC < areaMax)
{
cvDrawContours(imgbin, contour2, CV_RGB(0,0,0),CV_RGB(0,0,0),0,CV_FILLED,8,cvPoint(0,0));
}
contour2 = contour2.h_next();
}
cvMoments(imgbin, moments, 1);
m10 = cvGetSpatialMoment(moments, 1, 0);
m01 = cvGetSpatialMoment(moments, 0, 1);
m_area = cvGetCentralMoment(moments, 0, 0);
posX = (int) (m10/m_area);
posY = (int) (m01/m_area);
if(posX > 0 && posY > 0)
{
cRad = (int) (100 / (5000/m_area));
cvCircle(img1, cvPoint(posX, posY), 5, cvScalar(0,255,0,0), 9, 0, 0);
}
//Change numbers after m_area to size of object
width = java.lang.Math.sqrt((m_area/(obj_height*obj_width)));
distance = (obj_width * focal) / width;
cvFlip(img1, img1, 1);
cvFlip(imgbin, imgbin , 1);
opencv_highgui.cvShowImage("Color",img1);
opencv_highgui.cvShowImage("CF",imgbin);
char c = (char) opencv_highgui.cvWaitKey(15);
if(c == 27) break;
if(c == 'q')
{
System.out.print("Width in pixels ");
System.out.println(width);
System.out.print("Distance in inches ");
System.out.println(distance);
}
}
}
}
This is what I have; this is what I want. I am able to find all the blue contours and have the background turned black. I would just like to draw a bounding box around the blue pixels to better find the distance to the object and to make sure we are tracking the right object.
If you want to draw a bounding box around a contour, you can do that by just using:
Rect rect = opencv_imgproc.boundingRect(contour);
opencv_imgproc.rectangle(src, rect, Scalar.GREEN);
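Note that the question's code uses the old C-style API (CvSeq contours), so inside the existing contour loop the equivalent calls would look roughly like this (a sketch, untested against that exact code):
// for each contour that is kept, draw its bounding box on the camera frame;
// box.width() is the pixel width you can plug into the distance formula
CvRect box = cvBoundingRect(contour2, 0);
cvRectangle(img1,
        cvPoint(box.x(), box.y()),
        cvPoint(box.x() + box.width(), box.y() + box.height()),
        cvScalar(0, 255, 0, 0), 2, 8, 0);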
I'm using OpenCV and Java to find circles in an image; I have the image below so far. I'm using Hough to find the circles with code like this:
public static Vector<Mat> circles(Mat img){
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
long start_time = System.nanoTime();
Imgproc.resize(img, img, new Size(450,250));
Mat gray = new Mat();
Imgproc.cvtColor(img, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.blur(gray, gray, new Size(3, 3));
Mat edges = new Mat();
int lowThreshold = 40;
int ratio = 3;
Imgproc.Canny(gray, edges, lowThreshold, lowThreshold * ratio);
Mat circles = new Mat();
Vector<Mat> circlesList = new Vector<Mat>();
Imgproc.HoughCircles(edges, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 60, 200, 20, 30, 0 );
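// parameters: dp = 1, minDist = 60, param1 (Canny high threshold) = 200,
// param2 (accumulator threshold) = 20, minRadius = 30, maxRadius = 0 (no upper limit)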
System.out.println("#rows " + circles.rows() + " #cols " + circles.cols());
double x = 0.0;
double y = 0.0;
int r = 0;
for( int i = 0; i < circles.rows(); i++ )
{
double[] data = circles.get(i, 0);
for(int j = 0 ; j < data.length ; j++){
x = data[0];
y = data[1];
r = (int) data[2];
}
Point center = new Point(x,y);
// circle center
Core.circle( img, center, 3, new Scalar(0,255,0), -1);
// circle outline
Core.circle( img, center, r, new Scalar(0,0,255), 1);
Imshow im1 = new Imshow("Hough");
im1.showImage(img);
Rect bbox = new Rect((int)Math.abs(x-r), (int)Math.abs(y-r), (int)2*r, (int)2*r);
Mat croped_image = new Mat(img, bbox);
Imgproc.resize(croped_image, croped_image, new Size(160,160));
circlesList.add(croped_image);
Imshow m2 = new Imshow("cropedImage");
m2.showImage(croped_image);
}
long end_time = System.nanoTime();
long duration = (end_time - start_time)/1000000; //divide by 1000000 to get milliseconds.
System.out.println("duration : " + duration * 0.001 + " s");
return circlesList;
}
But it always detects only one circle.
My question is: how can I detect all the circles in an image using Java/OpenCV?
Note:
1. I'm using a Mat called circles in the HoughCircles function parameters, because the function requires a Mat in Java.
2. I'm using OpenCV version 2.4.11.
The circles are saved in the columns of the Mat circles.
Try replacing the for loop with:
for( int i = 0; i < circles.cols(); i++)
{
double[] data = circles.get(0,i);
...
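For clarity, a sketch of the whole corrected loop (the drawing and cropping code is essentially the question's own; the crop rectangle may still need clamping to the image bounds):
for (int i = 0; i < circles.cols(); i++) {
    double[] data = circles.get(0, i); // each column holds {x, y, radius}
    double x = data[0];
    double y = data[1];
    int r = (int) data[2];
    Point center = new Point(x, y);
    Core.circle(img, center, 3, new Scalar(0, 255, 0), -1); // circle center
    Core.circle(img, center, r, new Scalar(0, 0, 255), 1);  // circle outline
    Rect bbox = new Rect((int) (x - r), (int) (y - r), 2 * r, 2 * r);
    Mat cropped = new Mat(img, bbox);
    Imgproc.resize(cropped, cropped, new Size(160, 160));
    circlesList.add(cropped);
}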