OpenCV Java Harris Corner Detection

I am developing an Android application and I want to make use of Harris corner detection. I want to draw the detected corners, but I cannot seem to find documentation for the Java version.
My code so far:
Mat inputImage = inputFrame.rgba();
Imgproc.cornerHarris(inputImage, inputImage, 7, 5, 0.05, Imgproc.BORDER_DEFAULT);
How can I detect and display the corners?

For Java you can try this piece of code.
private void Harris(Mat Scene, Mat Object, int thresh) {
    // Harris corner detection: corners whose normalized response exceeds `thresh`
    // are marked with a circle. Both inputs must be single-channel (grayscale) Mats.
    Mat Harris_scene = new Mat();
    Mat Harris_object = new Mat();

    Mat harris_scene_norm = new Mat(), harris_object_norm = new Mat();
    Mat harris_scene_scaled = new Mat(), harris_object_scaled = new Mat();

    int blockSize = 9;
    int apertureSize = 5;
    double k = 0.1;

    // Harris response for each pixel
    Imgproc.cornerHarris(Scene, Harris_scene, blockSize, apertureSize, k);
    Imgproc.cornerHarris(Object, Harris_object, blockSize, apertureSize, k);

    // Normalize the responses to [0, 255] so they can be thresholded and displayed
    Core.normalize(Harris_scene, harris_scene_norm, 0, 255, Core.NORM_MINMAX, CvType.CV_32FC1, new Mat());
    Core.normalize(Harris_object, harris_object_norm, 0, 255, Core.NORM_MINMAX, CvType.CV_32FC1, new Mat());

    Core.convertScaleAbs(harris_scene_norm, harris_scene_scaled);
    Core.convertScaleAbs(harris_object_norm, harris_object_scaled);

    // Draw a circle at every location whose response exceeds the threshold
    for (int j = 0; j < harris_scene_norm.rows(); j++) {
        for (int i = 0; i < harris_scene_norm.cols(); i++) {
            if ((int) harris_scene_norm.get(j, i)[0] > thresh) {
                Imgproc.circle(harris_scene_scaled, new Point(i, j), 5, new Scalar(0), 2, 8, 0);
            }
        }
    }

    for (int j = 0; j < harris_object_norm.rows(); j++) {
        for (int i = 0; i < harris_object_norm.cols(); i++) {
            if ((int) harris_object_norm.get(j, i)[0] > thresh) {
                Imgproc.circle(harris_object_scaled, new Point(i, j), 5, new Scalar(0), 2, 8, 0);
            }
        }
    }
}
I just wrote this code by following the example here.
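To call this from the Android camera callback you first need single-channel input, since cornerHarris does not accept an RGBA frame. A minimal sketch (my own illustration, assuming the OpenCV 3.x Android bindings where circle lives in Imgproc; the threshold of 150 is just a starting value to tune):
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat gray = inputFrame.gray();   // cornerHarris needs a single-channel 8U/32F image
    Mat rgba = inputFrame.rgba();   // color frame used for display

    Mat harris = new Mat();
    Imgproc.cornerHarris(gray, harris, 9, 5, 0.1);

    Mat harrisNorm = new Mat();
    Core.normalize(harris, harrisNorm, 0, 255, Core.NORM_MINMAX, CvType.CV_32FC1, new Mat());

    // Mark the strong corners directly on the frame that gets displayed.
    // Note: per-pixel get() is slow in Java; for real time, copy the Mat into a float[] first.
    int thresh = 150;
    for (int y = 0; y < harrisNorm.rows(); y++) {
        for (int x = 0; x < harrisNorm.cols(); x++) {
            if ((int) harrisNorm.get(y, x)[0] > thresh) {
                Imgproc.circle(rgba, new Point(x, y), 5, new Scalar(0, 255, 0, 255), 2);
            }
        }
    }
    return rgba;
}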

I know this is not ideal, but it is also not so bad: you can look at the C++ documentation and examples, and the translation to Java is usually straightforward.
One example: the Harris corner detector tutorial (you did not mention your version; this one is from v2.4).

If anyone is still looking for the OpenCV Java samples, you can find them at the following links.
Complete Java Samples
https://github.com/opencv/opencv/tree/master/samples/java/tutorial_code
Motion Tracking
https://github.com/opencv/opencv/tree/master/samples/java/tutorial_code/TrackingMotion
Harris Corner Detection
https://github.com/opencv/opencv/blob/master/samples/java/tutorial_code/TrackingMotion/harris_detector/CornerHarrisDemo.java

Related

Why does the dilate function give different results when the parameters are the same?

I am trying to dilate three images of characters in Java OpenCV. I found that even for the same character with the same font and size, the result after dilation is different. So I tried with the same image, and the result is still different. Here is my test code.
for (int j = 0; j < 3; j++) {
    Mat InputSrc = openFile("src\\myOpencv\\ocr\\crop1.png");

    Mat tempImg = new Mat();
    Imgproc.cvtColor(InputSrc, tempImg, Imgproc.COLOR_BGR2GRAY);
    Imgproc.threshold(tempImg, tempImg, 0, 255, Imgproc.THRESH_OTSU);
    imageViewer.show(tempImg, "src");

    // Note: new Mat(5, 5, CvType.CV_8U) allocates the kernel without initializing
    // its elements, so its contents (and therefore the dilation result) can differ
    // from run to run.
    Mat kernal5 = new Mat(5, 5, CvType.CV_8U);
    Point midPoint = new Point(-1, -1);
    Scalar scalarOne = new Scalar(1);

    Mat binImg2 = new Mat();
    Imgproc.dilate(tempImg, binImg2, kernal5, midPoint, 1, 1, scalarOne);
    imageViewer.show(binImg2, "dilate");
}
Thanks.
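A possible fix (my own suggestion, assuming the intent was a 5x5 kernel of ones): build the kernel explicitly so its contents are fully defined, for example with Imgproc.getStructuringElement or Mat.ones; the dilation result is then the same on every run.
// A reproducible 5x5 rectangular kernel; Mat.ones(5, 5, CvType.CV_8U) works as well.
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5));

Mat dilated = new Mat();
Imgproc.dilate(tempImg, dilated, kernel, new Point(-1, -1), 1);
imageViewer.show(dilated, "dilate");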

HoughLinesP not detecting lines in OpenCV Android

I am working with OpenCV 3.0 for Android. I have an image in which I want to detect the angle of the hands inside circular dials; for that I am using HoughLinesP to detect the hands.
Here is the code:
Mat imgSource = new Mat(), imgCirclesOut = new Mat(), imgLinesOut = new Mat();
// convert to grayscale
Imgproc.cvtColor(Image, imgSource, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(imgSource, imgSource, new Size(9, 9), 2, 2);

int threshold = 0;
int minLineSize = 0;
int lineGap = 0;
Imgproc.HoughLinesP(imgSource, imgLinesOut, 1, Math.PI / 180, threshold, minLineSize, lineGap);

for (int j = 0; j < imgLinesOut.cols(); j++)
{
    double[] vec = imgLinesOut.get(0, j);
    Point pt1, pt2;
    pt1 = new Point(vec[0], vec[1]);
    pt2 = new Point(vec[2], vec[3]);
    Imgproc.line(Image, pt1, pt2, new Scalar(0, 0, 255), 3, Core.LINE_AA, 0);
}
But the result is:
What I need is the angle of the hands in these circles. Any help regarding this issue is highly appreciated. Thanks in advance.
Edit
I have updated my code to this:
Mat imgSource = new Mat(), imgCirclesOut = new Mat(), imgLinesOut = new Mat();
Imgproc.GaussianBlur(Image, imgSource, new Size(5, 5), 2, 2);

int threshold = 20;
int minLineSize = 0;
int lineGap = 10;

Imgproc.Canny(imgSource, imgSource, 70, 100);
Imgproc.HoughLinesP(imgSource, imgLinesOut, 1, Math.PI / 180, threshold, minLineSize, lineGap);

for (int j = 0; j < imgLinesOut.cols(); j++)
{
    double[] vec = imgLinesOut.get(0, j);
    Point pt1, pt2;
    pt1 = new Point(vec[0], vec[1]);
    pt2 = new Point(vec[2], vec[3]);
    Imgproc.line(imgSource, pt1, pt2, new Scalar(0, 0, 255), 3, Core.LINE_AA, 0);
}
As suggested by @Micka, there is no need to convert the image to grayscale (I removed cvtColor). I also decreased the GaussianBlur kernel size to 5, and I added Canny for edge detection.
The resulting blurred image is:
Detecting lines can be a problem in such small images, since you have too few points to fill the Hough accumulator properly.
I propose to use a different approach:
Segment each circle (dial)
Extract the largest dark blob (hand)
Below is a simple implementation of this idea. The code is in C++, but you can easily port to Java, or at least use as a reference.
#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char**)
{
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
    Mat3b res;
    cvtColor(img, res, COLOR_GRAY2BGR);

    // Find dials
    vector<Vec3f> circles;
    HoughCircles(img, circles, CV_HOUGH_GRADIENT, 1, img.cols / 10, 400, 40);

    // For each dial
    for (int i = 0; i < circles.size(); ++i)
    {
        // Segment the dial
        Mat1b dial(img.size(), uchar(255));
        Mat1b mask(img.size(), uchar(0));
        circle(mask, Point(circles[i][0], circles[i][1]), circles[i][2], Scalar(255), CV_FILLED);
        img.copyTo(dial, mask);

        // Apply threshold and open
        Mat1b bin;
        threshold(dial, bin, 127, 255, THRESH_BINARY_INV);
        Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
        morphologyEx(bin, bin, MORPH_OPEN, kernel);

        // Get min area rect
        vector<Point> points;
        findNonZero(bin, points);
        RotatedRect r = minAreaRect(points);

        // Draw min area rect
        Point2f pts[4];
        r.points(pts);
        for (int j = 0; j < 4; ++j) {
            line(res, pts[j], pts[(j + 1) % 4], Scalar(0, 255, 0), 1);
        }
    }

    imshow("Result", res);
    waitKey();

    return 0;
}
Starting from this image:
I find hands here:
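Since the snippet above is C++, here is a rough Java port of the same steps (a sketch only, assuming the OpenCV 3.x Java bindings, where circle and the Hough functions live in Imgproc; the HoughCircles parameters are copied from the C++ version and may need tuning). The angle of each hand can then be read from the RotatedRect:
Mat img = Imgcodecs.imread("path_to_image", Imgcodecs.IMREAD_GRAYSCALE);

// Find dials
Mat circles = new Mat();
Imgproc.HoughCircles(img, circles, Imgproc.HOUGH_GRADIENT, 1, img.cols() / 10, 400, 40, 0, 0);

// For each dial
for (int i = 0; i < circles.cols(); i++) {
    double[] c = circles.get(0, i); // x, y, radius

    // Segment the dial with a filled circular mask
    Mat dial = new Mat(img.size(), CvType.CV_8UC1, new Scalar(255));
    Mat mask = Mat.zeros(img.size(), CvType.CV_8UC1);
    Imgproc.circle(mask, new Point(c[0], c[1]), (int) c[2], new Scalar(255), -1);
    img.copyTo(dial, mask);

    // Keep the dark pixels (the hand) and clean them up
    Mat bin = new Mat();
    Imgproc.threshold(dial, bin, 127, 255, Imgproc.THRESH_BINARY_INV);
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
    Imgproc.morphologyEx(bin, bin, Imgproc.MORPH_OPEN, kernel);

    // Fit a rotated rectangle around the remaining blob
    MatOfPoint points = new MatOfPoint();
    Core.findNonZero(bin, points);
    RotatedRect r = Imgproc.minAreaRect(new MatOfPoint2f(points.toArray()));

    double handAngle = r.angle; // OpenCV reports the rect angle in [-90, 0); convert as needed
}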
for (int j = 0; j < imgLinesOut.rows(); j++)
This gives the number of detected lines to iterate over: in the OpenCV 3.x Java bindings, HoughLinesP stores one line per row of the output Mat, so loop over imgLinesOut.rows() and read each line with imgLinesOut.get(j, 0).
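Once a line's endpoints are available, the hand angle asked about in the question can be computed with atan2. A small sketch of the drawing loop under the 3.x row layout described above (my own illustration):
for (int j = 0; j < imgLinesOut.rows(); j++) {
    double[] vec = imgLinesOut.get(j, 0); // x1, y1, x2, y2
    Point pt1 = new Point(vec[0], vec[1]);
    Point pt2 = new Point(vec[2], vec[3]);
    Imgproc.line(imgSource, pt1, pt2, new Scalar(0, 0, 255), 3, Core.LINE_AA, 0);

    // Angle of the segment in degrees, measured from the positive x axis
    double angle = Math.toDegrees(Math.atan2(pt2.y - pt1.y, pt2.x - pt1.x));
}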

OpenCV on Android - Imgproc.minEnclosingCircle and Core.circle

I'm having a problem with the relation between the Imgproc.minEnclosingCircle method and the Core.circle method. Below is what part of my code looks like:
/// Start of Code
// init
List<MatOfPoint2f> contours2f = new ArrayList<MatOfPoint2f>();
List<MatOfPoint2f> polyMOP2f = new ArrayList<MatOfPoint2f>();
List<MatOfPoint> polyMOP = new ArrayList<MatOfPoint>();

Rect[] boundRect = new Rect[contours.size()];
Point[] mPusat = new Point[contours.size()];
float[] mJejari = new float[contours.size()];

// initialize the Lists ?
for (int i = 0; i < contours.size(); i++) {
    contours2f.add(new MatOfPoint2f());
    polyMOP2f.add(new MatOfPoint2f());
    polyMOP.add(new MatOfPoint());
}

// Convert to MatOfPoint2f + approximate contours to polygons + get bounding rects and circles
for (int i = 0; i < contours.size(); i++) {
    contours.get(i).convertTo(contours2f.get(i), CvType.CV_32FC2);
    Imgproc.approxPolyDP(contours2f.get(i), polyMOP2f.get(i), 3, true);
    polyMOP2f.get(i).convertTo(polyMOP.get(i), CvType.CV_32S);
    boundRect[i] = Imgproc.boundingRect(polyMOP.get(i));
    Imgproc.minEnclosingCircle(polyMOP2f.get(i), mPusat[i], mJejari);
}

// Draw polygonal contours + boundingRects + circles
for (int i = 0; i < contours.size(); i++) {
    Imgproc.drawContours(image3, polyMOP, i, green, 1);
    Core.rectangle(image3, boundRect[i].tl(), boundRect[i].br(), green, 2);
    Core.circle(image3, mPusat[i], (int) mJejari[i], green, 3);
}
/// End of Code
I tried to run the program, but it throws a java.lang.NullPointerException.
Then I tried to modify the Imgproc.minEnclosingCircle() call a bit, like this:
...
Imgproc.minEnclosingCircle(polyMOP2f.get(i), mPusat[i], tempJejari);
mJejari[i] = tempJejari[0];
...
but it failed too.
My question is: it looks like the Imgproc.minEnclosingCircle() method requires the radius (in my code, mJejari) in the form of a float[] array, but the Core.circle() method requires the radius as an integer. So is there any way to adapt the output of Imgproc.minEnclosingCircle() so that it can be passed to Core.circle()? Thanks in advance.
PS: some sources I used for the code:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
How to convert MatOfPoint to MatOfPoint2f in opencv java api
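For reference, the usual pattern (my own sketch, not from the original post) is to pass a freshly constructed Point for the center and a one-element float[] for the radius, then round the radius for Core.circle. Note that new Point[contours.size()] only allocates an array of null references, which would explain the NullPointerException when mPusat[i] is passed to minEnclosingCircle.
for (int i = 0; i < contours.size(); i++) {
    Point center = new Point();       // minEnclosingCircle writes the center into this object
    float[] radius = new float[1];    // and the radius into radius[0]
    Imgproc.minEnclosingCircle(polyMOP2f.get(i), center, radius);

    mPusat[i] = center;
    mJejari[i] = radius[0];

    // Core.circle expects an int radius, so round the float value
    Core.circle(image3, mPusat[i], Math.round(mJejari[i]), green, 3);
}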

Capturing an A4 size document. Can OpenCV do this in Android?

This is my first question on Stackoverflow.
I'm a software engineer by profession (Java, C#) and I have zero knowledge of image processing and Android-related technologies. I'm writing an Android application for my master's thesis to help visually impaired people in my country read documents from their Android smartphones, in our native language.
I have selected A4 as the document size. The app should eventually focus automatically on the document once the whole A4 page is in the camera's view (with an audible notification to the user), and then capture that image.
Then I plan to run the captured document through the Tesseract engine for OCR. (Someone else is doing the text-to-speech part of this application.)
I googled through a couple of approaches and came across the OpenCV documentation. http://docs.opencv.org/opencv_tutorials.pdf explains something about "Creating Bounding boxes and circles for contours", which looks like it is going to be my lifesaver.
My MSc project is a 300-hour part-time project, so I fear I will end up with nothing after spending my time converting C++/Python examples to Java on my own just to learn OpenCV. I looked at JavaCV as well, but it seems to still be in a growing stage, so most probably I will have to convert the examples myself.
What I wanted to ask the experts is whether OpenCV can really do something like this.
Thanks in advance!
Edit: I had a look at the link in the comments and am trying to port the C++ example to Java. Here is what I have so far; there are still a couple of things to do, though...
int thresh = 50, N = 11;

// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
static double angle(Point pt1, Point pt2, Point pt0)
{
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    return (dx1*dx2 + dy1*dy2) / Math.sqrt((dx1*dx1 + dy1*dy1) * (dx2*dx2 + dy2*dy2) + 1e-10);
}

public void find_squares(Mat image, Vector<Vector<Point>> squares)
{
    // blur will enhance edge detection
    Mat blurred = new Mat();
    Imgproc.medianBlur(image, blurred, 9);

    Mat gray0 = new Mat(blurred.size(), CvType.CV_8U);
    Mat gray = new Mat();
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();

    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        // copy channel c of blurred into gray0
        // C++: mixChannels(&blurred, 1, &gray0, 1, ch, 1);
        int[] ch = { c, 0 };
        List<Mat> src = new ArrayList<Mat>();
        src.add(blurred);
        List<Mat> dest = new ArrayList<Mat>();
        dest.add(gray0);
        Core.mixChannels(src, dest, new MatOfInt(ch));

        // try several threshold levels
        final int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++)
        {
            if (l == 0)
            {
                // Use Canny instead of zero threshold level!
                // Canny helps to catch squares with gradient shading
                Imgproc.Canny(gray0, gray, 10, 20, 3, false);
                // Dilate helps to remove potential holes between edge segments
                Imgproc.dilate(gray, gray, new Mat(), new Point(-1, -1), 1);
            }
            else
            {
                // was TODO - port of: gray = gray0 >= (l+1) * 255 / threshold_level;
                Imgproc.threshold(gray0, gray, (l + 1) * 255.0 / threshold_level, 255, Imgproc.THRESH_BINARY);
            }

            // Find contours and store them in a list
            contours.clear();
            Imgproc.findContours(gray, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

            // Test contours
            MatOfPoint2f approx = new MatOfPoint2f();
            for (int i = 0; i < contours.size(); i++)
            {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
                double epsilon = Imgproc.arcLength(contour2f, true) * 0.02;
                Imgproc.approxPolyDP(contour2f, approx, epsilon, true);

                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                Point[] approxPts = approx.toArray();
                if (approxPts.length == 4 &&
                    Math.abs(Imgproc.contourArea(approx)) > 1000 &&
                    Imgproc.isContourConvex(new MatOfPoint(approxPts)))
                {
                    double maxCosine = 0;
                    for (int j = 2; j < 5; j++)
                    {
                        double cosine = Math.abs(angle(approxPts[j % 4], approxPts[j - 2], approxPts[j - 1]));
                        maxCosine = Math.max(maxCosine, cosine);
                    }
                    if (maxCosine < 0.3)
                        squares.add(new Vector<Point>(Arrays.asList(approxPts))); // was TODO: squares.push_back(approx)
                }
            }
        }
    }
}
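For reference, a short usage sketch (my own illustration, not part of the original question) showing how find_squares might be called and the detected quadrilaterals drawn; imread lives in Highgui on OpenCV 2.4 and in Imgcodecs on 3.x, and the file path is just a placeholder:
Mat image = Highgui.imread("/sdcard/document.jpg"); // Imgcodecs.imread(...) on OpenCV 3.x
Vector<Vector<Point>> squares = new Vector<Vector<Point>>();
find_squares(image, squares);

// Draw each detected quadrilateral (Core.polylines on 2.4, Imgproc.polylines on 3.x)
for (Vector<Point> square : squares) {
    MatOfPoint m = new MatOfPoint(square.toArray(new Point[0]));
    Core.polylines(image, Arrays.asList(m), true, new Scalar(0, 255, 0), 2);
}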
Just to answer the question: yes, this can be done in OpenCV (among many other things), and I have completed the project that I explained in the question. I also voted up Abid's answer for the link he provided. :)

Android OpenCV Find contours

I need to extract the largest contour of an image.
This is the code I'm currently using, gathered from a few snippets online:
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(outerBox, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

double maxArea = -1;
int maxAreaIdx = -1;
for (int idx = 0; idx < contours.size(); idx++) {
    Mat contour = contours.get(idx);
    double contourarea = Imgproc.contourArea(contour);
    if (contourarea > maxArea) {
        maxArea = contourarea;
        maxAreaIdx = idx;
    }
}
and it seems to work. However, I'm not quite sure how to go on from here.
I tried using Imgproc.floodFill, but I'm not quite sure how; this function requires a mask Mat of the same size as the original Mat plus 2 pixels horizontally and 2 vertically.
When I ran it on the contour contours.get(maxAreaIdx), it gave me an error.
The code:
Mat mask = Mat.zeros(contour.rows() + 2, contour.cols() + 2, CvType.CV_8UC1);
int area = Imgproc.floodFill(contour, mask, new Point(0,0), new Scalar(255, 255, 255));
The error:
11-18 19:07:49.406: E/cv::error()(3117): OpenCV Error: Unsupported format or combination of formats () in void cvFloodFill(CvArr*, CvPoint, CvScalar, CvScalar, CvScalar, CvConnectedComp*, int, CvArr*), file /home/oleg/sources/opencv/modules/imgproc/src/floodfill.cpp, line 621
So basically my question is: after finding the contour with the largest area, how can I "highlight" it? I want everything else to be black and the contour to be white.
Thanks!
You can use the drawContours function in OpenCV: http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=drawcontours#drawcontours
Or you can use this implementation in C++ (you can find the Java equivalent in the OpenCV docs; just search for OpenCV plus the name of the function):
Mat src = imread("your image");
int row = src.rows;
int col = src.cols;

// Create contours
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Mat src_copy = src.clone();
findContours(src_copy, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE);

// Create mask: white inside the contour, black outside
Mat_<uchar> mask(row, col);
for (int j = 0; j < row; j++)
    for (int i = 0; i < col; i++)
    {
        if (pointPolygonTest(contours[0], Point2f(i, j), false) >= 0)
            mask(j, i) = 255;
        else
            mask(j, i) = 0;
    }
Try contours[1], contours[2], ... to find the biggest one.
This is for displaying your contour:
namedWindow("Contour",CV_WINDOW_AUTOSIZE);
imshow("Contour", mask);
