I'm new to JavaCV. My task is to find the symbols in an image and generate a separate picture for each symbol.
Firstly, I have this picture:
Then I do thresholding and get this:
I'm trying to use cvFindContours and then draw a rectangle around each symbol; my code:
String filename = "captcha.jpg";
IplImage firstImage = cvLoadImage(filename);

Mat src = imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
Mat dst = new Mat();
threshold(src, dst, 200, 255, 0);
imwrite("out.jpg", dst);

IplImage iplImage = cvLoadImage("out.jpg", CV_8UC1);
CvMemStorage memStorage = CvMemStorage.create();
CvSeq contours = new CvSeq();
cvFindContours(iplImage, memStorage, contours, Loader.sizeof(CvContour.class),
        CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

CvSeq ptr;
CvRect bb;
for (ptr = contours; ptr != null; ptr = ptr.h_next()) {
    bb = cvBoundingRect(ptr);
    cvRectangle(firstImage, cvPoint(bb.x(), bb.y()),
            cvPoint(bb.x() + bb.width(), bb.y() + bb.height()),
            CvScalar.BLACK, 2, 0, 0);
}
cvSaveImage("result.jpg", firstImage);
I want to get output like this: but this is what I really get:
Please, I need your help.
You're using the "out.jpg" image for finding contours.
When you save the dst Mat as "out.jpg", the JPEG format quantizes your original pixel data and introduces noise into the image.
Save dst to "out.png" instead of "out.jpg", or use the dst Mat directly with findContours().
Source Code ADDED: C++ version
string filename = "captcha.jpg";
Mat src = imread(filename);
Mat gray;
cvtColor(src, gray, CV_BGR2GRAY);
Mat thres;
threshold(gray, thres, 200, 255, 0);
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(thres.clone(), contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
Mat firstImage = src.clone();
for(int i = 0; i < contours.size(); i++)
{
Rect r = boundingRect(contours[i]);
rectangle(firstImage, r, CV_RGB(255, 0, 0), 2);
}
imwrite("result.png", firstImage);
Related
I use OpenCV (Java) and I want to change the color of all the non-transparent pixels in my Mat to black (the colors are defined in ARGB_8888, but it could be another format).
How can I do that?
Mat mat = new Mat();
bmp32 = bitmap.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp32, mat);
// and then ????
Thanks !
// allocate space as mixChannels requires so
Mat bgr = new Mat(mat.size(), CvType.CV_8UC3);
Mat alpha = new Mat(mat.size(), CvType.CV_8U);
List<Mat> src = new ArrayList<Mat>();
src.add(mat);
List<Mat> dst = new ArrayList<Mat>();
dst.add(alpha);
dst.add(bgr);
// A->alpha, RGB->bgr in reverse order
MatOfInt fromTo = new MatOfInt(0,0, 1,3, 2,2, 3,1);
Core.mixChannels(src, dst, fromTo);
// now, make non-transparent parts black
Mat mask = new Mat();
Core.compare(alpha, new Scalar(0), mask, Core.CMP_EQ);
Mat res = new Mat();
Core.bitwise_and(bgr, bgr, res, mask);
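If you then need the result back as an Android Bitmap (an extra step that is not part of the answer above, assuming the org.opencv.android.Utils class from the question is available), one possible follow-up is:
// Utils.matToBitmap expects RGB(A) channel order, so swap the BGR channels first
Mat rgba = new Mat();
Imgproc.cvtColor(res, rgba, Imgproc.COLOR_BGR2RGBA);
Bitmap out = Bitmap.createBitmap(rgba.cols(), rgba.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(rgba, out);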
I have the following Python code:
ret,thresh = cv2.threshold(gray,5,255,cv2.THRESH_TOZERO)
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(11,11))
kernel2 = np.ones((3,3),np.uint8)
erosion = cv2.erode(thresh,kernel2,iterations = 1)
dilation = cv2.dilate(erosion,kernel1,iterations = 7)
I'm trying to convert it to Java. This is my current version:
double thresh = Imgproc.threshold(gray, gray, 5, 255, Imgproc.THRESH_TOZERO);
Mat kernel1 = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(11, 11));
Mat kernel2 = Mat.ones(3, 3, CvType.CV_8U);
Mat erosion = new Mat();
Imgproc.erode(gray, erosion, kernel2);
Mat dilation = new Mat();
Imgproc.dilate(erosion, dilation, kernel1);
Right now I'm unable to find where the thresh value should be applied, and I'm also not using the iterations parameter for the Imgproc.erode and Imgproc.dilate methods, because in that case the method signature also requires an additional Point anchor parameter which I do not have right now.
How to properly convert this code to Java?
UPDATED
I do the following translation:
Mat erosion = new Mat();
Imgproc.erode(gray, erosion, kernel2, new Point(), 1);
Mat dilation = new Mat();
Imgproc.dilate(erosion, dilation, kernel1, new Point(), 7);
and it looks like it is working as expected, but I'm not sure about new Point(). Is it a correct translation?
Here are the dilate declarations in C++/Python/Java:
C++: void dilate(InputArray src, OutputArray dst, InputArray kernel,
Point anchor=Point(-1,-1),
int iterations=1,
int borderType=BORDER_CONSTANT,
const Scalar& borderValue=morphologyDefaultBorderValue() )
Python: cv2.dilate(src, kernel[, dst[, anchor[, iterations[, borderType[, borderValue]]]]]) → dst
Java: static void dilate(Mat src, Mat dst, Mat kernel, Point anchor, int iterations)
In the Java binding there is no default value for anchor, so you have to pass it explicitly if you want to use the iterations parameter. Also note that new Point() creates (0, 0), which anchors the kernel at its top-left corner, while the C++/Python default is the kernel center, so passing new Point(-1, -1) is the better choice.
Imgproc.dilate(src, dst, kernel, new Point(-1,-1), iterations);
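Putting it together, a faithful translation of the Python snippet would then look roughly like this (a sketch reusing the variable names from the question; note that in the Python code thresh is the thresholded output image, which in this Java version is simply written back into gray):
// the return value corresponds to Python's ret: the threshold value that was used
double retVal = Imgproc.threshold(gray, gray, 5, 255, Imgproc.THRESH_TOZERO);
Mat kernel1 = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(11, 11));
Mat kernel2 = Mat.ones(3, 3, CvType.CV_8U);

Mat erosion = new Mat();
Imgproc.erode(gray, erosion, kernel2, new Point(-1, -1), 1);      // iterations = 1

Mat dilation = new Mat();
Imgproc.dilate(erosion, dilation, kernel1, new Point(-1, -1), 7); // iterations = 7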
I recently started working with the Java bindings for OpenCV to make a quick and dirty project to do template matching. Basically I am trying to read a set of jpg images (saved in MS Paint) into Mats and then use template matching to find their locations from a screen shot taken with Java.Robot.
When it comes time to do the template matching, this error is thrown:
OpenCV Error: Assertion failed ((depth == CV_8U || depth == CV_32F)
&& type == _templ.type() && _img.dims() <= 2) in cv::matchTemplate
After searching, it looks like the issue is that the two Mats I am trying to use do not have the same "type". What I am not sure of is what this refers to. I assume it is the Mat's CvType: if I print out the CvType of the image I get a type() of 4 == CvType.CV_32SC1, and for my template I get a type() of 20 == CvType.CV_32SC3.
But I feel like this is not the correct type() to compare; I have a feeling it refers to how the data is stored in the Mat, but I have no good links to back this up, just recollections from many SO searches.
Here is my code for loading my jpg images into a Mat:
Mat pic_ = Imgcodecs.imread("MyPath\\image.jpg");
pic_.convertTo(pic_, CvType.CV_32SC1);
Here the second line turns my type() from 20 to 16, though as per my last comment I don't think this is the proper way to alter the Mat to match the image... because convertTo'ing this Mat to match the type of the screenshot (below) does not fix the error.
Here is how I am creating the image Mat
Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage screenShot = rob.createScreenCapture(screenRect);
Mat screenImage = bufferedImageToMat(screenShot);
So I first take a screenshot with Java.Robot.createScreenCapture, and then convert it to a Mat with:
private Mat bufferedImageToMat(BufferedImage inBuffImg)
{
    BufferedImage image = new BufferedImage(inBuffImg.getWidth(), inBuffImg.getHeight(), BufferedImage.TYPE_INT_RGB);
    Graphics2D g2d = image.createGraphics();
    g2d.drawImage(inBuffImg, 0, 0, null);
    g2d.dispose();
    Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_32SC1);
    int[] data = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    mat.put(0, 0, data);
    return mat;
}
From what I could tell, the BufferedImage created by Robot is of type BufferedImage.TYPE_3BYTE_BGR, which gives me the error "DataBufferInt cannot be cast to DataBufferByte" when trying to get the pixel data. So, per the linked question, I redraw the BufferedImage as type BufferedImage.TYPE_INT_RGB and pull out the data as a DataBufferInt.
So in all, should I be trying to match the Mat.type(), or does my problem lie elsewhere? If not, how can I alter either of the Mats so that they can be used with Imgproc.matchTemplate properly?
I feel like the easiest solution would be to convert the image loaded from file to match the screenshot Mat.
EDIT: The exact section of code that gives the error is below
// Mat imageTemplate is a function argument; the loaded jpg image
// Take a picture of the screen
Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage screenShot = rob.createScreenCapture(screenRect);
Mat screenImage = bufferedImageToMat(screenShot);
// Create the result matrix
int result_cols = screenImage.cols() - imageTemplate.cols() + 1;
int result_rows = screenImage.rows() - imageTemplate.rows() + 1;
Mat result = new Mat(result_rows, result_cols, CvType.CV_32SC1);
newStatus("ScreenType: " + screenImage.type());
newStatus("TemplaType: " + imageTemplate.type());
// Choose a matching method
int matchMethod = Imgproc.TM_SQDIFF_NORMED;
// Do the Matching and Normalize
Imgproc.matchTemplate(screenImage, imageTemplate, result, matchMethod);
// Error occurs on previous line
As #Miki pointed out in the comments, the answer was getting the channel type of the image and the template to match. I ended up changing my bufferedImageToMat function.
private Mat bufferedImageToMat(BufferedImage inBuffImg)
{
    // TYPE_3BYTE_BGR stores each pixel as three bytes in BGR order,
    // which lines up with a CV_8UC3 Mat
    BufferedImage image = new BufferedImage(inBuffImg.getWidth(), inBuffImg.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
    Graphics2D g2d = image.createGraphics();
    g2d.drawImage(inBuffImg, 0, 0, null);
    g2d.dispose();
    Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
    byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    mat.put(0, 0, data);
    return mat;
}
My templates are read in as CvType.CV_8UC3, so it was just a matter of creating a Mat from the screen image with this type!
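As a side note, if this comes up again, a quick way to see what is going on is to print both types in a human-readable form right before the matchTemplate call (just a debugging sketch; CvType.typeToString is part of the Java bindings):
// matchTemplate needs both Mats to have depth CV_8U or CV_32F and the same type
System.out.println("screen:   " + CvType.typeToString(screenImage.type()));
System.out.println("template: " + CvType.typeToString(imageTemplate.type()));
// convertTo() only changes the depth; a channel mismatch (e.g. 1 vs 3 channels)
// has to be fixed with Imgproc.cvtColor instead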
As the title suggests, I am interested in getting the HSV value of a specific pixel using Java CV. This sounds easy enough, and it seems to be straightforward in C++ or Python, but I simply can't figure out how to do it in Java. I am pretty new to OpenCV, and if I decide to do more projects using this library I will definitely write them in C++ or Python.
For reference, my goal is to do a color analysis of an object that has varying levels of lighting. The end goal is to be able to take an image of something like a t-shirt and be able to say "this t shirt is x% red".
Here is some of the code I was using. Surprisingly, inRange() takes much longer than just looping through every pixel and getting the RGB values one by one. I want to be able to do exactly this, just in the HSV color space. If you know of a better way to accomplish this goal, please let me know, as this has destroyed my entire Saturday. Thanks!
Scalar min = new Scalar(22,11,3);
Scalar max = new Scalar(103,87,74);
int sum = 0;
System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
File input = new File("bluesample.jpg");
BufferedImage image = ImageIO.read(input);
byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
Mat mat1 = new Mat(image.getHeight(),image.getWidth(),CvType.CV_8UC3);
mat.put(0, 0, data);
Core.inRange(mat, min, max, mat1);
System.out.println(mat1.total());
System.out.println(mat1.total());
for (int i = 0; i < mat1.rows(); i++){
    for (int j = 0; j < mat1.cols(); j++){
        sum += mat1.get(j, i, data);
    }
}
System.out.println(sum/mat1.total());
EDIT:
try {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    File input = new File("singlehsvpix.jpg");
    BufferedImage image = ImageIO.read(input);
    byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
    mat.put(0, 0, data);
    Mat mat1 = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC1);
    Imgproc.cvtColor(mat, mat1, Imgproc.COLOR_RGB2HSV);
    System.out.println(mat1.dump());
    byte[] data1 = new byte[mat1.rows() * mat1.cols() * (int) (mat1.elemSize())];
    mat1.get(0, 0, data1);
    //BufferedImage image1 = new BufferedImage(mat1.cols(), mat1.rows(), BufferedImage.TYPE_BYTE_GRAY);
    BufferedImage image1 = new BufferedImage(mat1.cols(), mat1.rows(), 5);
    image1.getRaster().setDataElements(0, 0, mat1.cols(), mat1.rows(), data1);
    File output = new File("PLS!.jpg");
    ImageIO.write(image1, "jpg", output);
    System.out.println(mat1.get(0, 0, data1)); // RELEVANT LINE
    System.out.println("Done");
} catch (Exception e) {
    System.out.println("Error: " + e.getMessage());
}
Is printing:
[ 54, 213, 193]
3
Done
For this pic, 54, 213, 193 are the BGR values... I guess I don't understand enough about OpenCV to know why my mat1.get is printing 3
So, you want to convert RGB to HSV.
Imgproc.cvtColor(im_rgb, im_hsv, Imgproc.COLOR_RGB2HSV);
Then process it as you like.
Edit: in your code, change mat to mat1:
for (int i = 0; i < mat1.rows(); i++){
    for (int j = 0; j < mat1.cols(); j++){
        sum += mat.get(j, i, data); //this line
    }
}
System.out.println(sum/mat1.total());
You are adding up values from the original matrix, not the converted one.
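About the mat1.get(0, 0, data1) call printing 3: that overload copies the pixel data into the byte array and returns the number of bytes it copied (3 for a single CV_8UC3 pixel); it does not return the pixel values themselves. A minimal sketch for reading one pixel's HSV values, assuming mat1 already holds the converted HSV image:
// get(row, col) without a buffer returns the channel values as a double[]
double[] hsv = mat1.get(0, 0);   // {H, S, V}
double h = hsv[0];               // hue is in [0, 179] for 8-bit Mats
double s = hsv[1];
double v = hsv[2];
System.out.println("H=" + h + " S=" + s + " V=" + v);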
I am working on real-time text detection and recognition with OpenCV4Android. The recognition part is completely done. However, I have a question about text detection. I'm using the MSER FeatureDetector for detecting text.
This is the real-time part that calls the method:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
carrierMat = inputFrame.gray();
carrierMat = General.MSER(carrierMat);
return carrierMat;
}
And this is the basic MSER implementation:
private static FeatureDetector fd = FeatureDetector.create(FeatureDetector.MSER);
private static MatOfKeyPoint mokp = new MatOfKeyPoint();
private static Mat edges = new Mat();
public static Mat MSER(Mat mat) {
//for mask
Imgproc.Canny(mat, edges, 400, 450);
fd.detect(mat, mokp, edges);
//for drawing keypoints
Features2d.drawKeypoints(mat, mokp, mat);
return mat;
}
It works fine for finding text with the edges mask.
I would like to draw rectangles around the clusters, like this:
or this:
You can assume that I have the right points.
As you can see, the fd.detect() method returns a MatOfKeyPoint. Hence I've tried this method for drawing rectangles:
public static Mat MSER_(Mat mat) {
fd.detect(mat, mokp);
KeyPoint[] refKp = mokp.toArray();
Point[] refPts = new Point[refKp.length];
for (int i = 0; i < refKp.length; i++) {
refPts[i] = refKp[i].pt;
}
MatOfPoint2f refMatPt = new MatOfPoint2f(refPts);
MatOfPoint2f approxCurve = new MatOfPoint2f();
//Processing on mMOP2f1 which is in type MatOfPoint2f
double approxDistance = Imgproc.arcLength(refMatPt, true) * 0.02;
Imgproc.approxPolyDP(refMatPt, approxCurve, approxDistance, true);
//Convert back to MatOfPoint
MatOfPoint points = new MatOfPoint(approxCurve.toArray());
// Get bounding rect
Rect rect = Imgproc.boundingRect(points);
// draw enclosing rectangle (all same color, but you could use variable i to make them unique)
Imgproc.rectangle(mat, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), Detect_Color_, 5);
//Features2d.drawKeypoints(mat, mokp, mat);
return mat;
}
But when it reaches the Imgproc.arcLength() call, it suddenly stops. I also tried giving Imgproc.approxPolyDP() a fixed approxDistance value like 0.1, but that doesn't really work well.
So how can I draw rectangles around the detected text?
I tested your code and had exactly the same problem. For now I still can't find the cause.
But I found a project that uses both "MSER" and "Morphological" methods; you can find it here.
The project has a very simple structure and the author put the text detection in the "onCameraFrame" method, just like you. I implemented the method from that project and it worked, but the result was still not very good.
If you are looking for a better text detection tool, here are two of them:
Stroke Width Transform (SWT): a whole new method for finding text areas. It's fast and efficient, but it is only available in C++ or Python. You can find some examples here.
Class-specific Extremal Regions using the ERFilter class: an advanced version of MSER. Unfortunately, it is only available in OpenCV 3.0.0-dev, so you can't use it in the current version of OpenCV4Android. The documentation is here.
To be honest, I am new to this area (2 months), but I hope this information can help you finish your project.
(update: 2015/9/13)
I've translated a C++ method from a post. It works far better than the first GitHub project I mentioned.
Here is the code:
public void apply(Mat src, Mat dst) {
    if (dst != src) {
        src.copyTo(dst);
    }

    Mat img_gray = new Mat();
    Imgproc.cvtColor(src, img_gray, Imgproc.COLOR_RGB2GRAY);

    Mat img_sobel = new Mat();
    Imgproc.Sobel(img_gray, img_sobel, CvType.CV_8U, 1, 0, 3, 1, 0, Core.BORDER_DEFAULT);

    Mat img_threshold = new Mat();
    Imgproc.threshold(img_sobel, img_threshold, 0, 255, Imgproc.THRESH_OTSU + Imgproc.THRESH_BINARY);

    Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(17, 3));
    Imgproc.morphologyEx(img_threshold, img_threshold, Imgproc.MORPH_CLOSE, element);
    // Does the trick

    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(img_threshold, contours, hierarchy, 0, 1); // 0 = RETR_EXTERNAL, 1 = CHAIN_APPROX_NONE

    List<MatOfPoint> contours_poly = new ArrayList<MatOfPoint>(contours.size());
    contours_poly.addAll(contours);

    MatOfPoint2f mMOP2f1 = new MatOfPoint2f();
    MatOfPoint2f mMOP2f2 = new MatOfPoint2f();

    for (int i = 0; i < contours.size(); i++) {
        if (contours.get(i).toList().size() > 100) {
            contours.get(i).convertTo(mMOP2f1, CvType.CV_32FC2);
            Imgproc.approxPolyDP(mMOP2f1, mMOP2f2, 3, true);
            mMOP2f2.convertTo(contours_poly.get(i), CvType.CV_32S);
            Rect appRect = Imgproc.boundingRect(contours_poly.get(i));
            if (appRect.width > appRect.height) {
                Imgproc.rectangle(dst, new Point(appRect.x, appRect.y),
                        new Point(appRect.x + appRect.width, appRect.y + appRect.height),
                        new Scalar(255, 0, 0));
            }
        }
    }
}
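For completeness, this is roughly how the translated method could be wired into the onCameraFrame callback from the question. This is my own sketch, not part of the translated code; the frame is converted to 3 channels first because apply() starts with COLOR_RGB2GRAY.
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat rgb = new Mat();
    Imgproc.cvtColor(inputFrame.rgba(), rgb, Imgproc.COLOR_RGBA2RGB);
    Mat boxes = new Mat();
    apply(rgb, boxes);   // boxes is a copy of the frame with the text rectangles drawn on it
    return boxes;
}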