OpenCV - Canny edge detection with adaptive threshold parameter for video (Java)

I am trying to use Canny edge detection on each frame in a video. I can use it without a problem, but because each image is different, the threshold parameters in the Canny method will need to be tailored to that image.
I have gotten good advice on here about calculating the median and then using values 33% above and below it as the high and low threshold parameters in the Canny method.
Something like:
Imgproc.Canny(gray, fullCanny, thirdAboveMedian, thirdBelowMedian);
So I have tried calculating the median and the percentiles and plugging them in, but my calculations must be wrong somewhere, as I am only getting a black screen with these values.
Here is the code I used to work it out:
Mat fullCanny = new Mat();
fullCanny = gray.clone();
fullCanny.reshape(0, 1);

double median;
double thirdAboveMedian;
double thirdBelowMedian;

int[] histogram = hist(fullCanny);

if (histogram.length % 2 == 0) {
    median = (histogram[histogram.length / 2] + histogram[histogram.length / 2 - 1]) / 2;
    thirdAboveMedian = histogram[(histogram.length / 2) + (histogram.length / 2 / 3)];
    thirdBelowMedian = histogram[(histogram.length / 2) - (histogram.length / 2 / 3)];
} else {
    median = histogram[histogram.length / 2];
    thirdAboveMedian = histogram[(histogram.length / 2) + (histogram.length / 2 / 3)];
    thirdBelowMedian = histogram[(histogram.length / 2) - (histogram.length / 2 / 3)];
}

System.out.println("median is " + median);
System.out.println("thirdAboveMedian is " + thirdAboveMedian);
System.out.println("thirdBelowMedian is " + thirdBelowMedian);

// run edge detection on the blurred gray image and display it on fullCanny mat
Imgproc.Canny(gray, fullCanny, thirdAboveMedian, thirdBelowMedian);
And the hist method:
public static int[] hist(Mat img) {
    // array for intensities
    int[] hist = new int[256];
    byte[] data = new byte[img.rows() * img.cols() * img.channels()];
    img.get(0, 0, data);
    for (int i = 0; i < data.length; i++) {
        hist[data[i] & 0xff]++;
    }
    return hist;
}
Thanks
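A minimal sketch of one likely fix, for reference: the code above indexes the 256-bin histogram by position, so it reads bin counts rather than pixel intensities. The median intensity can instead be found by walking the cumulative histogram. This assumes an 8-bit single-channel Mat and the hist method above; medianFromHist is a hypothetical helper, not part of the original code.
// Sketch: median intensity from a 256-bin histogram via a cumulative count walk.
static double medianFromHist(int[] hist, long totalPixels) {
    long cumulative = 0;
    for (int intensity = 0; intensity < hist.length; intensity++) {
        cumulative += hist[intensity];
        if (cumulative >= (totalPixels + 1) / 2) {
            return intensity;
        }
    }
    return 255;
}

// Usage with the "33% above and below" rule of thumb:
int[] histogram = hist(gray);
double median = medianFromHist(histogram, (long) gray.rows() * gray.cols());
double lower = Math.max(0, (1.0 - 0.33) * median);
double upper = Math.min(255, (1.0 + 0.33) * median);
Imgproc.Canny(gray, fullCanny, lower, upper);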

Related

Perlin Noise repeating pattern

My problem is that my Perlin noise repeats itself very obviously in very small spaces. Here is an image of what is going on. I know that this does happen after a certain point with all Perlin noise, but it seems to be happening almost immediately with mine. I believe it is caused by my really awful pseudorandom gradient generator, but I'm not sure. My code is below.
As a side note, my Perlin noise seems to generate very small values, between -0.2 and +0.2, and I think this is also caused by my pseudorandom gradient generator.
If anyone has any advice on improving this part of my code, please feel free to tell me. Any ideas would be helpful right now.
Thanks to everyone in advance!
import java.util.Random;

public class Perlin {
    int[] p = new int[255];

    public Perlin() {
        for (int i = 0; i < p.length; i++)
            p[i] = i;
        shuffle(p);
    }

    int[][] grads = {
        {1,0},{0,1},{-1,0},{0,-1},
        {1,1},{-1,1},{1,-1},{-1,-1}
    };

    public double perlin(double x, double y) {
        int unitX = (int) Math.floor(x) & 255; // decide unit square
        int unitY = (int) Math.floor(y) & 255; // decide unit square
        double relX = x - Math.floor(x); // relative x position
        double relY = y - Math.floor(y); // relative y position

        // bad pseudorandom gradient -- what I think is causing the problems
        int units = unitX + unitY;
        int[] gradTL = grads[p[units] % grads.length];
        int[] gradTR = grads[p[units + 1] % grads.length];
        int[] gradBL = grads[p[units + 1] % grads.length];
        int[] gradBR = grads[p[units + 2] % grads.length];

        // distance from edges to point, relative x and y inside the unit square
        double[] vecTL = {relX, relY};
        double[] vecTR = {relX - 1, relY};
        double[] vecBL = {relX, relY - 1};
        double[] vecBR = {relX - 1, relY - 1};

        // dot product
        double tl = dot(gradTL, vecTL);
        double tr = dot(gradTR, vecTR);
        double bl = dot(gradBL, vecBL);
        double br = dot(gradBR, vecBR);

        // Perlin's fade curve
        double u = fade(relX);
        double v = fade(relY);

        // lerping the faded values
        double x1 = lerp(tl, tr, u);
        double y1 = lerp(bl, br, u);

        // ditto
        return lerp(x1, y1, v);
    }

    public double dot(int[] grad, double[] dist) {
        return (grad[0] * dist[0]) + (grad[1] * dist[1]);
    }

    public double lerp(double start, double end, double rate) {
        return start + rate * (end - start);
    }

    public double fade(double t) {
        return t * t * t * (t * (t * 6 - 15) + 10);
    }

    public void shuffle(int[] p) {
        Random r = new Random();
        for (int i = 0; i < p.length; i++) {
            int n = r.nextInt(p.length - i);
            // do swap thing
            int place = p[i];
            p[i] = p[i + n];
            p[i + n] = place;
        }
    }
}
A side note on my gradient generator: I know Ken Perlin used 255 because he was working with bits; I just picked it randomly. I don't think changing it has any effect on the patterns.
Your intuition is correct. You calculate:
int units = unitX+unitY;
and then use that as the base of all your gradient table lookups. This guarantees that you get the same values along lines with slope -1, which is exactly what we see assuming (0, 0) is the upper-left corner.
I would suggest using a real hash function to combine your coordinates: xxHash, Murmur3, or even things like CRC32 (which isn't meant to be a hash) would be much better than what you're doing. You could also implement Perlin's original hash function, although it has known issues with anisotropy.
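For illustration, a minimal sketch of that last suggestion, adapted to the code in the question; it assumes p is enlarged to 256 shuffled entries (0..255) so the & 255 masks can never overflow:
// Sketch: hash the two lattice coordinates together instead of adding them.
// The nested permutation-table lookup breaks the diagonal correlation
// that unitX + unitY produces.
int hash(int x, int y) {
    return p[(p[x & 255] + y) & 255];
}

// e.g. picking the four corner gradients:
// int[] gradTL = grads[hash(unitX,     unitY)     % grads.length];
// int[] gradTR = grads[hash(unitX + 1, unitY)     % grads.length];
// int[] gradBL = grads[hash(unitX,     unitY + 1) % grads.length];
// int[] gradBR = grads[hash(unitX + 1, unitY + 1) % grads.length];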

How to convert C++ implementation of GLCM into Java?

I got the following snippet from GitHub to compute the gray level co-occurrence matrix (GLCM) through OpenCV:
float energy = 0, contrast = 0, homogenity = 0, IDM = 0, entropy = 0, mean1 = 0;
int row = img.rows, col = img.cols;
Mat gl = Mat::zeros(256, 256, CV_32FC1);

// creating glcm matrix with 256 levels, radius=1 and in the horizontal direction
for (int i = 0; i < row; i++)
    for (int j = 0; j < col - 1; j++)
        gl.at<float>(img.at<uchar>(i, j), img.at<uchar>(i, j + 1)) =
            gl.at<float>(img.at<uchar>(i, j), img.at<uchar>(i, j + 1)) + 1;

// normalizing glcm matrix for parameter determination
gl = gl + gl.t();
gl = gl / sum(gl)[0];
The code above is in C++. I need to convert it into Java, but I'm stuck on this line:
gl.at<float>(img.at<uchar>(i,j),img.at<uchar>(i,j+1)) = gl.at<float>(img.at<uchar>(i,j),img.at<uchar>(i,j+1)) + 1;
Can someone help me out with this?
The calculation of a 256x256 symmetric gray level co-occurrence matrix of image img (of class Mat) corresponding to an offset "one pixel to the right" may be implemented in Java through OpenCV as follows:
Mat gl = Mat.zeros(256, 256, CvType.CV_64F);
Mat glt = gl.clone();
for (int y = 0; y < img.rows(); y++) {
    for (int x = 0; x < img.cols() - 1; x++) {
        int i = (int) img.get(y, x)[0];
        int j = (int) img.get(y, x + 1)[0];
        double[] count = gl.get(i, j);
        count[0]++;
        gl.put(i, j, count);
    }
}
Core.transpose(gl, glt);
Core.add(gl, glt, gl);
Scalar sum = Core.sumElems(gl);
Core.divide(gl, sum, gl);
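Once gl is normalized, the features declared in the C++ snippet follow from plain sums over the matrix. As a hedged sketch, two of them (energy and contrast) could be computed like this, assuming the 256x256 CV_64F gl from above; per-element Mat.get access is slow but keeps the example simple:
// Sketch: energy (angular second moment) and contrast from the normalized GLCM.
double energy = 0, contrast = 0;
for (int i = 0; i < 256; i++) {
    for (int j = 0; j < 256; j++) {
        double p = gl.get(i, j)[0]; // co-occurrence probability
        energy += p * p;
        contrast += (i - j) * (i - j) * p;
    }
}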
There are a number of publicly available libraries for computing GLCMs and extracting Haralick features from them in Java, for example GLCM2 and JFeatureLib.

Java Convolution

Hi, I am in need of some help. I need to write a convolution method from scratch that takes the following inputs: an int[][] kernel and a BufferedImage inputImage. I can assume that the kernel has size 3x3.
My approach is to do the following:
convolve inner pixels
convolve corner pixels
convolve outer pixels
In the program that I will post below, I believe I convolve the inner pixels, but I am a bit lost on how to convolve the corner and outer pixels. I am aware that the corner pixels are at (0,0), (width-1,0), (0,height-1) and (width-1,height-1). I think I know how to approach the problem, but I am not sure how to execute it in writing. Please be aware that I am very new to programming. Any assistance will be very helpful to me.
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;

public class Problem28 {
    // maximum value of a sample
    private static final int MAX_VALUE = 255;
    // minimum value of a sample
    private static final int MIN_VALUE = 0;

    public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
    }

    public BufferedImage convolveInner(double center, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // inner pixels
        for (int x = 1; x < width - 1; x++) {
            for (int y = 1; y < height - 1; y++) {
                // get pixels at x, y
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                int innerred = (int) center * red;
                int innergreen = (int) center * green;
                int innerblue = (int) center * blue;
                Color newPixelColor = new Color(innerred, innergreen, innerblue);
                int newRgbvalue = newPixelColor.getRGB();
                inputImage1.setRGB(x, y, newRgbvalue);
            }
        }
        return inputImage1;
    }

    public BufferedImage convolveEdge(double edge, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage2 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // inner pixels
        for (int x = 0; x < width - 1; x++) {
            for (int y = 0; y < height - 1; y++) {
                // get pixels at x, y
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                int innerred = (int) edge * red;
                int innergreen = (int) edge * green;
                int innerblue = (int) edge * blue;
                Color newPixelColor = new Color(innerred, innergreen, innerblue);
                int newRgbvalue = newPixelColor.getRGB();
                inputImage2.setRGB(x, y, newRgbvalue);
            }
        }
        return inputImage2;
    }

    public BufferedImage convolveCorner(double corner, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage3 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // inner pixels
        for (int x = 0; x < width - 1; x++) {
            for (int y = 0; y < height - 1; y++) {
                // get pixels at x, y
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                int innerred = (int) corner * red;
                int innergreen = (int) corner * green;
                int innerblue = (int) corner * blue;
                Color newPixelColor = new Color(innerred, innergreen, innerblue);
                int newRgbvalue = newPixelColor.getRGB();
                inputImage3.setRGB(x, y, newRgbvalue);
            }
        }
        return inputImage3;
    }

    public static void main(String[] args) {
        DrawingKit dk = new DrawingKit("Compositor", 1000, 1000);
        BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
        Problem28 c = new Problem28();
        BufferedImage p5 = c.convolve();
        dk.drawPicture(p5, 0, 100);
    }
}
I changed the code a bit, but the output comes out black. What did I do wrong?
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;

public class Problem28 {
    // maximum value of a sample
    private static final int MAX_VALUE = 255;
    // minimum value of a sample
    private static final int MIN_VALUE = 0;

    public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // for every pixel
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                double gray = 0;
                // multiply every value of kernel with corresponding image pixel
                for (int i = 0; i < 3; i++) {
                    for (int j = 0; j < 3; j++) {
                        int imageX = (x - 3/2 + i + width) % width;
                        int imageY = (x - 3/2 + j + height) % height;
                        int RGB = inputImage.getRGB(imageX, imageY);
                        int GRAY = RGB & 0xff;
                        gray += GRAY * kernel[i][j];
                    }
                }
                int out;
                out = (int) Math.min(Math.max(gray * 1, 0), 255);
                inputImage1.setRGB(x, y, new Color(out, out, out).getRGB());
            }
        }
        return inputImage1;
    }

    public static void main(String[] args) {
        int[][] newArray = {{1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}};
        DrawingKit dk = new DrawingKit("Problem28", 1000, 1000);
        BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
        Problem28 c = new Problem28();
        BufferedImage p2 = c.convolve(newArray, p1);
        dk.drawPicture(p2, 0, 100);
    }
}
Welcome ewuzz! I wrote a convolution using CUDA about a week ago, and the majority of my experience is with Java, so I feel qualified to provide advice on this problem.
Rather than writing all of the code for you, the best way to solve this large problem is to discuss the individual elements. You mentioned you are very new to programming. As the programs you write become more complex, it's essential to write small working snippets before combining them into a large successful program (or to iteratively add snippets). With this being said, it's already apparent you're trying to debug a ~100 line program, and this approach will cost you time in most cases.
The first point to discuss is the general approach you mentioned. If you think about the program, what is the simplest and most repeated step? Obviously this is the kernel/mask step, so we can start from there. When you convolve each pixel, you are performing a similar operation, regardless of the position (corner, edge, inside). While there are special steps necessary for these edge cases, they share the same underlying steps. If you write code for each of these cases separately, you will have to update the code in multiple (three) places with each adjustment, and it will make the whole program more difficult to grasp.
To support my point above, when I pasted your code into IntelliJ, it immediately flagged the duplicated code: the (yellow) red flag of using the same code in multiple places.
The concrete way to fix this problem is to combine the three convolve methods into a single one and use if statements for the edge cases as necessary.
Our pseudocode with this change:
convolve(kernel, inputImage)
    for each pixel in the image
        convolve the single pixel and check edge cases
    endfor
end
That seems pretty basic, right? If we can successfully check the edge cases, then this extremely simple logic will work. I left it so general above to show that "convolve the single pixel and check edge cases" is a logical group. That makes it a good candidate for extracting into a method, which could look like:
private void convolvePixel(int x, int y, int[][] kernel, BufferedImage input, BufferedImage output)
Now, to implement the method above, we will need to break it into a few steps, which we may then break into more steps as necessary. We need to look at the input image, accumulate the kernel-weighted values of the neighboring pixels where possible, and then set the result in the output image. For brevity I will only write pseudocode from here.
convolvePixel(x, y, kernel, input, output)
    accumulation = 0
    for each row of kernel applicable pixels
        for each column of kernel applicable pixels
            if this neighboring pixel location is within the image boundaries then
                input color = get the color at this neighboring pixel
                adjusted value = input color * relative kernel mask value
                accumulation += adjusted value
            else
                // handle this somehow, mentioned below
            endif
        endfor
    endfor
    set output pixel to accumulation, assuming this convolution method does not require normalization
end
The pseudocode above is already relatively long. When implementing, you could extract methods for the if and else cases, but you should be fine with this structure.
There are a few ways to handle the edge case in the else above. Your assignment probably specifies a requirement, but the fancy way is to tile around and pretend there's another instance of the same image next to this input image. Wikipedia explains three possibilities:
Extend - The nearest border pixels are conceptually extended as far as necessary to provide values for the convolution. Corner pixels are extended in 90° wedges. Other edge pixels are extended in lines.
Wrap - (The method I mentioned) The image is conceptually wrapped (or tiled) and values are taken from the opposite edge or corner.
Crop - Any pixel in the output image which would require values from beyond the edge is skipped. This method can result in the output image being slightly smaller, with the edges having been cropped.
A huge part of becoming a successful programmer is researching on your own. If you read about these methods, work through them on paper, run your convolvePixel method on single pixels, and compare the output to your results by hand, you will find success.
Summary:
Start by cleaning up your code before anything else.
Group the same code into one place.
Hammer out a small chunk (convolving a single pixel). Print out the result and the input values and verify they are correct.
Draw out edge/corner cases.
Read about ways to solve edge cases and decide what fits your needs.
Try implementing the else case through the same form of testing.
Call your convolveImage method with the loop, using the convolvePixel method you know works. Done!
You can look up pseudocode and even specific code to solve the exact problem, so I focused on providing general insight and strategies I have developed through my degree and personal experience. Good luck and please let me know if you want to discuss anything else in the comments below.
Java code for multiple blurs via convolution.
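To make the pseudocode above concrete, here is a minimal sketch of convolvePixel using the "Extend" strategy (clamping neighbor coordinates to the image border). It deliberately deviates from the earlier signature in one way: the kernel is double[][] rather than int[][], because an int kernel filled with 1/9 entries evaluates to all zeros under integer division, which by itself produces a black output (the second version's imageY computation also reuses x where y is intended). It assumes the MIN_VALUE/MAX_VALUE constants from the question.
// Sketch: convolve one pixel, clamping out-of-bounds neighbors ("Extend").
static void convolvePixel(int x, int y, double[][] kernel,
                          BufferedImage input, BufferedImage output) {
    double r = 0, g = 0, b = 0;
    int half = kernel.length / 2;
    for (int i = 0; i < kernel.length; i++) {
        for (int j = 0; j < kernel[i].length; j++) {
            // clamp the neighbor coordinates to the image boundaries
            int nx = Math.min(Math.max(x + j - half, 0), input.getWidth() - 1);
            int ny = Math.min(Math.max(y + i - half, 0), input.getHeight() - 1);
            int rgb = input.getRGB(nx, ny);
            r += ((rgb >> 16) & 0xff) * kernel[i][j];
            g += ((rgb >> 8) & 0xff) * kernel[i][j];
            b += (rgb & 0xff) * kernel[i][j];
        }
    }
    // clamp the accumulated samples back into the valid sample range
    int rr = (int) Math.min(Math.max(r, MIN_VALUE), MAX_VALUE);
    int gg = (int) Math.min(Math.max(g, MIN_VALUE), MAX_VALUE);
    int bb = (int) Math.min(Math.max(b, MIN_VALUE), MAX_VALUE);
    output.setRGB(x, y, (0xff << 24) | (rr << 16) | (gg << 8) | bb);
}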

ArUco Axis Swap while drawing 3dAxis

I'm currently trying to develop an ArUco cube detector for a project. The goal is to have a more stable and accurate pose estimation without using a large ArUco board. For this to work, however, I need to know the orientation of each of the markers. Using the draw3dAxis method, I discovered that the X and Y axes did not consistently appear in the same location. Here is a video demonstrating the issue: https://youtu.be/gS7BWKm2nmg
It seems to be a problem with the Rvec detection. There is a clear shift in the first two values of the Rvec, which stay fairly consistent until the axis swaps. When this axis swap happens, the values can change by a magnitude anywhere from 2 to 6. The ArUco library does try to deal with rotations, as shown in the Marker.calculateMarkerId() method:
/**
 * Return the id read in the code inside a marker. Each marker is divided into 7x7 regions,
 * of which the inner 5x5 contain info; the border should always be black. This function
 * assumes that the code has been extracted previously.
 * @return the id of the marker
 */
protected int calculateMarkerId() {
    // check all the rotations of code
    Code[] rotations = new Code[4];
    rotations[0] = code;
    int[] dists = new int[4];
    dists[0] = hammDist(rotations[0]);
    int[] minDist = {dists[0], 0};
    for (int i = 1; i < 4; i++) {
        // rotate
        rotations[i] = Code.rotate(rotations[i - 1]);
        dists[i] = hammDist(rotations[i]);
        if (dists[i] < minDist[0]) {
            minDist[0] = dists[i];
            minDist[1] = i;
        }
    }
    this.rotations = minDist[1];
    if (minDist[0] != 0) {
        return -1; // matching id not found
    } else {
        this.id = mat2id(rotations[minDist[1]]);
    }
    return id;
}
and MarkerDetector.detect() does call that method and uses the getRotations() method:
// identify the markers
for (int i = 0; i < nCandidates; i++) {
    if (toRemove.get(i) == 0) {
        Marker marker = candidateMarkers.get(i);
        Mat canonicalMarker = new Mat();
        warp(in, canonicalMarker, new Size(50, 50), marker.toList());
        marker.setMat(canonicalMarker);
        marker.extractCode();
        if (marker.checkBorder()) {
            int id = marker.calculateMarkerId();
            if (id != -1) {
                // rotate the points of the marker so they are always in the same order,
                // no matter the camera orientation
                Collections.rotate(marker.toList(), 4 - marker.getRotations());
                newMarkers.add(marker);
            }
        }
    }
}
The full source code for the ArUco library is here: https://github.com/sidberg/aruco-android/blob/master/Aruco/src/es/ava/aruco/MarkerDetector.java
If anyone has any advice or solutions, I'd be very grateful. Please contact me if you have any questions.
I did find the problem. It turns out that the Marker class has a rotations variable that can be used to rotate the axis to align with the orientation of the marker. I wrote the following method in the Utils class:
protected static void alignToId(Mat rotation, int codeRotation) {
    // get the matrix corresponding to the rotation vector
    Mat R = new Mat(3, 3, CvType.CV_64FC1);
    Calib3d.Rodrigues(rotation, R);
    codeRotation += 1;

    // create the matrix to rotate around the Z axis
    double[] rot = {
            Math.cos(Math.toRadians(90) * codeRotation), -Math.sin(Math.toRadians(90) * codeRotation), 0,
            Math.sin(Math.toRadians(90) * codeRotation),  Math.cos(Math.toRadians(90) * codeRotation), 0,
            0, 0, 1
    };

    // multiply both matrices
    Mat res = new Mat(3, 3, CvType.CV_64FC1);
    double[] prod = new double[9];
    double[] a = new double[9];
    R.get(0, 0, a);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            prod[3 * i + j] = 0;
            for (int k = 0; k < 3; k++) {
                prod[3 * i + j] += a[3 * i + k] * rot[3 * k + j];
            }
        }

    // convert the matrix back to a vector with Rodrigues
    res.put(0, 0, prod);
    Calib3d.Rodrigues(res, rotation);
}
and I called it from the Marker.calculateExtrinsics method:
Utils.alignToId(Rvec, this.getRotations());
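As a side note, the hand-rolled 3x3 multiplication could also be delegated to OpenCV. A minimal alternative sketch, assuming the same R, rot, and rotation variables as in alignToId above:
// Sketch: express res = R * rotZ with Core.gemm instead of manual loops.
Mat rotZ = new Mat(3, 3, CvType.CV_64FC1);
rotZ.put(0, 0, rot); // the row-major Z-rotation array built above
Mat res = new Mat();
Core.gemm(R, rotZ, 1, Mat.zeros(3, 3, CvType.CV_64FC1), 0, res);
Calib3d.Rodrigues(res, rotation);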

Capturing an A4 size document. Can OpenCV do this in Android?

This is my first question on Stack Overflow.
I'm a software engineer by profession (Java, C#), and I have zero knowledge of image processing and Android-related technologies. I'm writing an Android application for my master's thesis to help visually impaired people in my country read a document from their Android smartphones, in our native language.
I have selected A4 as the sample document size. The app should eventually focus on the document automatically once the whole A4 page is in the camera's view (an audible notification should be given to the user), and it should then capture that image.
I then plan to run the document through the Tesseract engine for OCR. (Someone else is doing the text-to-speech part of this application.)
I googled through a couple of applications and came across the OpenCV documentation. http://docs.opencv.org/opencv_tutorials.pdf explains something about "Creating Bounding boxes and circles for contours", which looks like it's going to be my lifesaver.
My MSc project is a 300-hour part-time project, so I fear I will end up with nothing after spending that time converting C++/Python examples to Java by myself to learn OpenCV. I went through JavaCV as well, but it looks like it's still at a growing stage, so most probably I will have to convert the examples myself.
What I wanted to ask the experts is whether OpenCV can really do a thing like this.
Thanks in advance!
Edit: I had a look at the link in the comment and am trying to port the C++ example to Java. Here is what I have so far. There are still a couple of things to do, though...
int thresh = 50, N = 11;

// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
static double angle(Point pt1, Point pt2, Point pt0) {
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    return (dx1 * dx2 + dy1 * dy2)
            / Math.sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
}

public void find_squares(Mat image, Vector<Vector<Point>> squares) {
    // blur will enhance edge detection
    Mat blurred = new Mat();
    Imgproc.medianBlur(image, blurred, 9);

    Mat gray0 = new Mat(blurred.size(), CvType.CV_8U);
    Mat gray = new Mat();
    // Vector<Vector<Point> > contours;
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();

    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++) {
        int[] ch = {c, 0};
        // Core.mixChannels(blurred, 1, gray0, 1, ch, 1);
        List<Mat> src = new ArrayList<Mat>();
        src.add(blurred);
        List<Mat> dest = new ArrayList<Mat>();
        dest.add(gray0);
        MatOfInt a = new MatOfInt(ch);
        Core.mixChannels(src, dest, a);

        // try several threshold levels
        final int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++) {
            // Use Canny instead of zero threshold level!
            // Canny helps to catch squares with gradient shading
            if (l == 0) {
                Imgproc.Canny(gray0, gray, 10, 20, 3, false);
                // Dilate helps to remove potential holes between edge segments
                Point point = new Point(-1, -1);
                Imgproc.dilate(gray, gray, new Mat(), point, 1);
            } else {
                // TODO
                // gray = gray0 >= (l+1) * 255 / threshold_level;
            }

            // Find contours and store them in a list. //TODO
            Imgproc.findContours(gray, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

            // Test contours
            MatOfPoint2f approx = new MatOfPoint2f();
            for (int i = 0; i < contours.size(); i++) {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                double epsilon = Imgproc.arcLength(new MatOfPoint2f(contours.get(i).toArray()), true);
                epsilon *= 0.02;
                Imgproc.approxPolyDP(new MatOfPoint2f(contours.get(i).toArray()), approx, epsilon, true);

                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                if (/*TODO*/ approx.size().area() == 4 &&
                        Math.abs(Imgproc.contourArea(approx)) > 1000 &&
                        Imgproc.isContourConvex(/*TODO*/ approx)) {
                    double maxCosine = 0;
                    for (int j = 2; j < 5; j++) {
                        double cosine = Math.abs(angle(approx[j % 4], approx[j - 2], approx[j - 1]));
                        maxCosine = /*TODO*/ MAX(maxCosine, cosine);
                    }
                    if (maxCosine < 0.3)
                        squares./*TODO*/push_back(approx);
                }
            }
        }
    }
}
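For the remaining TODOs, here are hedged sketches of one way to express the C++ idioms in the OpenCV Java API (variable names follow the port above; these are untested suggestions, not the original author's code):
// TODO: gray = gray0 >= (l+1) * 255 / threshold_level;
Imgproc.threshold(gray0, gray, (l + 1) * 255.0 / threshold_level, 255, Imgproc.THRESH_BINARY);

// TODO: approx.size().area() == 4 -- test the vertex count instead
boolean hasFourCorners = approx.total() == 4;

// TODO: approx[j % 4] etc. -- a MatOfPoint2f is not indexable; convert it first
Point[] pts = approx.toArray();
double cosine = Math.abs(angle(pts[j % 4], pts[j - 2], pts[j - 1]));
maxCosine = Math.max(maxCosine, cosine);

// TODO: squares.push_back(approx) -- collect the approximated quad's points
squares.add(new Vector<Point>(java.util.Arrays.asList(approx.toArray())));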
Just to answer the question: yes, this can be done in OpenCV (among many other things), and I have completed the project that I explained in the question. I also voted up Abid's answer for the link he provided. :)
