OpenCV thresholding not behaving as expected - Java

I would like to preprocess a given picture by thresholding it in order to then hand it over to Tesseract. I first did that using Gimp (2.8.16) with a fixed range of 130 – 255. When I then implemented it in OpenCV (3.1) using Java, I at first forgot to call cvtColor, resulting in a picture that still had some colors in it (these areas were white in Gimp); apart from that, the picture was as expected. However, when I added the corresponding call, I got a picture that was different from the one I had expected: the areas that were colored previously are now black, while the rest of the picture is similar to the one I created with Gimp.
Is there anything that I am missing to create a more similar output?
The reason I am asking this question is that, unfortunately, Tesseract (with psm 6) creates quite different results for the two images:
for the one created in Gimp: "2011 1 L 0006"
for the second one created with OpenCV: "2011ÔÇö] L 0 0006 1"
Here is the code that I used:
Mat thres = new Mat();
Mat tmp = new Mat();
Imgproc.cvtColor(src, tmp, Imgproc.COLOR_BGR2GRAY); // tmp = src.clone(); in my first attempt
Imgproc.threshold(tmp, thres, 130, 255, Imgproc.THRESH_BINARY);
Imgcodecs.imwrite("output.jpg", thres);
Here are the pictures:
Given picture:
Picture created with Gimp:
First result using OpenCV:
Second result using OpenCV:

In the first case, you are doing the thresholding on a color image (tmp = src.clone() creates another copy of src, which is a color image), so threshold() is applied to each channel independently and you get a result like that. In the second case, you first convert to grayscale and then threshold, which gives a better result. Thresholding works best on grayscale images.
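To make that concrete, here is a minimal sketch contrasting the two calls (it reuses src from the question; the other variable names are illustrative): applied directly to the 3-channel source, threshold() binarizes each channel on its own, which is why some regions keep a color cast, while converting to grayscale first yields a single-channel binary image.
// Per-channel threshold on the BGR image: B, G and R are thresholded independently.
Mat colorThresh = new Mat();
Imgproc.threshold(src, colorThresh, 130, 255, Imgproc.THRESH_BINARY);
// Grayscale first, then threshold: a single-channel black-and-white result.
Mat gray = new Mat();
Mat binary = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(gray, binary, 130, 255, Imgproc.THRESH_BINARY);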

Related

Enhance edges in OpenCV (Java) while removing noise

I'm trying to write an OpenCV program in Java that takes a photo of a marker with a phone and finds circular contours on the marker. I have gotten it working in the Android emulator (where the photo has perfect lighting conditions), but can't get it to work with the phone itself.
This is the marker I'm trying to capture:
After using the combination of transforming to grayscale, a Gaussian blur and a Canny edge detector, I get this output:
If I then try to find the contours on the image, the number of contours returned is really high (over 1000), but they aren't closed (because the edges seem to be too weak).
The contours drawn over the original image:
This is the code I use for the segmentation:
Mat processed = new Mat(original.height(), original.width(), original.type());
Imgproc.cvtColor(original, processed, Imgproc.COLOR_RGB2GRAY);
Imgproc.GaussianBlur(processed, processed, new Size(7, 7), 0.1);
Imgproc.Canny(processed, processed, 50, 50*3, 3, false);
I have tried tweaking the different parameters (thresholds etc.), but feel like that isn't the ideal solution. I'm thinking there has to be a way to enhance the edges returned by the canny detector, but I haven't found anything myself.
Any help is appreciated!
You want something like this?
Noise is usually present, but it can be reduced. You can separate the circles with adaptiveThreshold and then find them using the method I described below. The point is, the real-world image you get from the camera may contain a whole bunch of other circles, so it might be best to find all the circles, compare them all in size, and pick the 4 circles that are most similar to your marker in terms of color, size and placement.
A quick python code to distinguish the circles:
import cv2

# im is the BGR input image (e.g. loaded with cv2.imread)
# Make a gray version
gry = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# Thresh
ada = cv2.adaptiveThreshold(
    gry, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2
)
# Remove noises
out = cv2.medianBlur(ada, 7)
# Invert colors (Not needed)
out = ~out
I tested the Python code and it works; you can find an equivalent for Java or C++. I have tried to write the equivalent Java code, but I wrote it on the fly and did not test it, so it probably has errors; still, it gets the point across, and with a few small changes it should work. I also wrote the code that should find the circles as the last block. Working with it is tricky and requires adjusting the parameters.
Java:
...
Imgproc.cvtColor(im, gry, Imgproc.COLOR_RGBA2GRAY);
Imgproc.adaptiveThreshold(gry, ada, 255,
Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 11, 2);
Imgproc.medianBlur(ada, out, 7);
...
and for finding circles:
Java:
...
SimpleBlobDetector_Params params = new SimpleBlobDetector_Params();
params.set_filterByCircularity(true);
params.set_minCircularity(0.01f);
params.set_maxArea(50000);//important
SimpleBlobDetector detector = SimpleBlobDetector.create(params);
// List of detected points
MatOfKeyPoint keyPoints = new MatOfKeyPoint();
detector.detect(ada, keyPoints);
// Draw circles on final image
Scalar color = new Scalar(127, 0, 255);
for (KeyPoint key: keyPoints.toList()) {
Imgproc.circle(im, key.pt, (int) (key.size / 2.0f), color, 3/*Thickness*/);
}
...
You can look into the dilation operation in OpenCV to enhance the edges.
https://docs.opencv.org/4.x/db/df6/tutorial_erosion_dilatation.html
Also look into this example below with Canny edge detection, approxPolyDP and minEnclosingCircle. It is very close to your question.
https://docs.opencv.org/4.x/da/d0c/tutorial_bounding_rects_circles.html
https://docs.opencv.org/4.x/d3/dc0/group__imgproc__shape.html#ga0012a5fdaea70b8a9970165d98722b4c
https://docs.opencv.org/4.x/d3/dc0/group__imgproc__shape.html#ga8ce13c24081bbc7151e9326f412190f1
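To make the dilation suggestion concrete, here is a rough, untested Java sketch in the spirit of the linked tutorials: it thickens the Canny output (the processed Mat from the question's code) with a small elliptical kernel, then fits a minimum enclosing circle to each contour. The kernel size and contour retrieval mode are assumptions you would need to tune.
// Thicken the weak edges so broken contours close up.
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(3, 3));
Imgproc.dilate(processed, processed, kernel);
// Fit an enclosing circle to each external contour (cf. the bounding-circles tutorial).
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(processed, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint contour : contours) {
    MatOfPoint2f points = new MatOfPoint2f(contour.toArray());
    Point center = new Point();
    float[] radius = new float[1];
    Imgproc.minEnclosingCircle(points, center, radius);
    Imgproc.circle(original, center, (int) radius[0], new Scalar(0, 255, 0), 2);
}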

Mask image in opencv java

I need to convert near white pixels to white and near black pixels to black.
I found a code snippet in python on how to do it.
import cv2 as cv
import numpy as np

# image is the BGR input image (e.g. loaded with cv.imread)
hsv=cv.cvtColor(image,cv.COLOR_BGR2HSV)
# Define lower and upper limits of what we call "brown"
brown_lo=np.array([10,0,0])
brown_hi=np.array([20,255,255])
# Mask image to only select browns
mask=cv.inRange(hsv,brown_lo,brown_hi)
# Change image to red where we found brown
image[mask>0]=(0,0,255)
I have converted it to Java as below.
Mat temp= new Mat();
Imgproc.cvtColor(src,temp,COLOR_BGR2HSV);
Scalar low= new Scalar(10,0,0);
Scalar high= new Scalar(20,255,255);
Mat mask = new Mat();
inRange(temp,low,high,mask);
But I am facing a problem converting the statement below to Java, and there is no good OpenCV documentation in Java with samples.
image[mask>0]=(0,0,255)
Could somebody help on how to convert above statement to java...?
I have tried setTo but it is not giving the desired behaviour (screenshot attached below). Refer to https://stackoverflow.com/a/50215020/12643143 for the expected result.
src.setTo(new Scalar(0,0,255),mask);
I recommend using setTo(). This method can set all the pixels in a Mat. If an optional mask argument is specified, then all the pixels that have a corresponding non-zero pixel in the mask will be set.
Thus the python statement
image[mask>0]=(0,0,255)
can be substituted in Java by:
image.setTo(new Scalar(0, 0, 255), mask);
where image has to be a Mat object.
Answer to the question
As @Rabbid76 mentioned, setTo is the correct way to do this. However, if you want specific logic like image[mask>127]=(0,0,255), then threshold the mask first (Imgproc.threshold(grey, grey, 127, 255, THRESH_BINARY);) and then use setTo, as in the sketch below.
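A minimal sketch of that idea (untested; grey is assumed to be a single-channel mask):
// Binarize the mask so only pixels above 127 are selected...
Mat binMask = new Mat();
Imgproc.threshold(grey, binMask, 127, 255, Imgproc.THRESH_BINARY);
// ...then recolor exactly those pixels, the equivalent of image[mask>127]=(0,0,255).
image.setTo(new Scalar(0, 0, 255), binMask);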
Solution to my problem
Actually my problem was not due to setTo. It was a logic mismatch between how I read/write the Mat in my code vs. the post I referred to.
I am posting the solution to the problem I faced so that it might help newbies like me.
Problem in reading Image
The post uses Imgcodecs.imread() to read the image into a Mat in BGR format.
Whereas I was loading a bitmap using bitmapToMat with type CV_8UC4 as below, which reads the image into the Mat in RGBA format.
Mat src = new Mat(bitmap.getHeight(), bitmap.getWidth(), CV_8UC4);
org.opencv.android.Utils.bitmapToMat(bitmap, src);
The fix is to convert the format properly.
Mat src = new Mat(bitmap.getHeight(), bitmap.getWidth(), CV_8UC3); // notice 3 channels
org.opencv.android.Utils.bitmapToMat(bitmap, src);
Imgproc.cvtColor(src, hsv, COLOR_RGB2HSV); // Convert RGB to HSV. COLOR_RGBA2HSV does not exist, hence we load the image as CV_8UC3 (3 channels: R, G, B).
Problem in writing the Image
Just as there are differences in reading between bitmapToMat and imread, the same applies to writing. Imgcodecs.imwrite() writes a BGR image, whereas I have to convert back to RGB format for matToBitmap to work, e.g. Imgproc.cvtColor(rgb, rgb, Imgproc.COLOR_BGR2RGB);
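A minimal sketch of that write path (untested; bgr stands for the processed BGR Mat and the Bitmap setup is illustrative):
// matToBitmap() expects RGB(A), while imwrite() expects BGR, so convert first.
Mat rgb = new Mat();
Imgproc.cvtColor(bgr, rgb, Imgproc.COLOR_BGR2RGB);
Bitmap out = Bitmap.createBitmap(rgb.cols(), rgb.rows(), Bitmap.Config.ARGB_8888);
org.opencv.android.Utils.matToBitmap(rgb, out);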

opencv perspectiveTransform produces incorrect transformation despite good homography inliers

I am having a problem trying to get perspectiveTransform() to produce results I can make sense of.
I am writing an image matching application. Most of these images are paintings. What I am matching is whole-to-whole, whole-to-part, part-to-whole or even part-to-part, and the resolutions will be different. Because of this relationship, the "object"/"scene" terminology typically used doesn't fit, since the object can in fact be the scene and vice versa. So I use "query image" for the query and "target indexed image" for the image I'm matching against.
I have been following various OpenCV tutorials on matching one image to another and then using perspectiveTransform to place a bounding box on the identified image... but I'm running into problems.
Image 1 - Whole-Part - Result of image matching: The query image (left), target pre-indexed img (right)
In the image added to this we can see I have a whole-part relationship.
The images have been scaled to a max edge of 1000 and converted to greyscale as part of the SIFT process which preceded this.
Query image dimensions x=1000, y=750
Idx image dimensions x=667, y=1000
Initial Flann matches: 501
After Lowe's 2nd nn ratio: 48 matches
RANSAC inliers: 37 matches
The code..
homography = Calib3d.findHomography(idxMatOfPoint2f, queryMatOfPoint2f, Calib3d.RANSAC, 5, mask, 2000, 0.995);
Mat query_corners = new Mat(4, 1, CvType.CV_32FC2);
Mat idx_corners = new Mat(4, 1, CvType.CV_32FC2);
query_corners.put(0, 0, new double[]{0, 0});
query_corners.put(1, 0, new double[]{queryImage.cols() - 1, 0});
query_corners.put(2, 0, new double[]{queryImage.cols() - 1, queryImage.rows() - 1});
query_corners.put(3, 0, new double[]{0, queryImage.rows() - 1});
Core.perspectiveTransform(query_corners, idx_corners, homography);
The result of this code gives the following data (original x,y : transformed x,y )
Corners - Top-left = 0.0,0.0 : 163.84683227539062,167.56898498535156
Corners - Top-right = 999.0,0.0 : 478.38623046875,169.61349487304688
Corners - Bot-right = 999.0,749.0 : 491.45220947265625,411.24688720703125
Corners - Bot-left = 0.0,749.0 : 162.11233520507812,411.5089416503906
Now clearly the points drawn on the image are wrong - but selecting which image to draw them on means I have already determined this. However, what I find odd is that the box is the entire size of the query image, transformed into the space of the 2nd image; I wasn't expecting the box to shrink in size and shape in a way that doesn't even seem to match the first image.
The transformed x,y just do not make any sense to me. Can anyone shed any light on this please?
Image 2 - Part-Whole - Result of image matching: The query image (left), target pre-indexed img (right)
Looking at image 2, where the query is a part and the target idx image is the whole, gives:
Initial Flann matches: 500
After Lowe's 2nd nn ratio: 21
RANSAC inliers: 17
query image dimensions x=1000, y=750
idx image dimensions x=1000, y=609
Corners - Top-left = 0.0,0.0 : -1228.55224609375,-923.1514282226562
Corners - Top-right = 999.0,0.0 : 3561.064453125,-930.8649291992188
Corners - Bot-right = 999.0,749.0 : 2768.0224609375,1934.417236328125
Corners - Bot-left = 0.0,749.0 : -699.1375732421875,2089.652587890625
Again this just makes absolutely no sense to me. -1228? But both images are only 1000 across and the query is wholly contained in the target idx image.
This last image shows the frustration in this.
Image 3 - Whole-Whole
Here we can see the perspective-transformed corners are just way off - the result is actually smaller than the image being matched to... It seems the perspective transformation function is returning almost random results.
Can anyone spot what I am doing wrong? Am I mis-understanding the perspective transformation?
Thanks to Micka... The answer for the issue with perspectiveTransform() is that the query and pre-indexed image points were swapped over in the findHomography() call. The following call gives the correct result for a matched image.
homography = Calib3d.findHomography(queryMatOfPoint2f, idxMatOfPoint2f, Calib3d.RANSAC, 5, mask, 2000, 0.995);
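The direction matters because findHomography(srcPoints, dstPoints) estimates the matrix that maps the first argument's points onto the second's; with the query points passed first, the resulting homography takes query-image coordinates into idx-image coordinates, which is exactly what the corner transform needs:
// With H estimated as query -> idx, the query corners land in the idx image's frame.
Core.perspectiveTransform(query_corners, idx_corners, homography);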
However, the homography is letting through a curious set of matches which shouldn't be allowed.
I'll post a new question, as the perspective transform issue is now solved.

Working with DrJava - How can I load and alter a jpeg?

I'm a complete beginner to programming and I've been trying to figure this out for a while, but I'm lost. There are a few different versions of the question, but I think I can figure the rest out after I have one finished piece of code, so I'm just going to explain this one. The first part asks to write a program using DrJava that will display an image, wait for a user response, and then reduce the image to have only 4 levels per color channel. It goes on to say this:
"What we want to do is reduce each color channel from the range 0-255 (8 bits) to the range 0-3 (2 bits). We can do this by dividing the color channel value by 64. However, since our actual display still uses 1 byte per color channel, a values 0-3 will all look very much like black (very low color intensity). To make it look right, we need to scale the values back up to the original range (multiply by 64). Note that, if integer division is used, this means that only 4 color channel values will occur: 0, 64, 128 and 192, imitating a 2-bit color palate."
I don't even get where I'm supposed to put the picture and get it to load from. Basically I need it explained like I'm five. Thanks in advance!
Java API documentation will be your best resource.
You can read a BufferedImage via the function ImageIO.read(File).
A BufferedImage is an Image, so you can display it as part of a JLabel or JButton.
A BufferedImage can be created with different ColorModels: RGB, BGR, ARGB, one byte per colour, indexed colours and so on. Here you want to copy one BufferedImage into another one with a different ColorModel.
Basically you can create a new BufferedImage with the differing ColorModel and call:
Graphics g = otherImg.getGraphics();
g.drawImage(originalImg, ...);
ImageIO.write(otherImg, ...);
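To tie it together for the assignment's 4-levels-per-channel part, here is a rough, untested sketch (the file names are placeholders and the display/wait-for-user part is omitted): read the image with ImageIO, integer-divide each channel by 64 and scale back up, then write the result.
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class Posterize {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("input.jpg")); // placeholder path
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                // Integer division by 64 maps 0-255 to 0-3; multiplying back
                // leaves only the values 0, 64, 128 and 192 per channel.
                r = (r / 64) * 64;
                g = (g / 64) * 64;
                b = (b / 64) * 64;
                // Keep the original alpha bits, replace the color bits.
                img.setRGB(x, y, (rgb & 0xFF000000) | (r << 16) | (g << 8) | b);
            }
        }
        ImageIO.write(img, "png", new File("output.png")); // placeholder path
    }
}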

How to specify behavior of Java BufferedImage resize: need min for pixel rows instead of averaging

I would like to resize a Java BufferedImage, making it smaller vertically but without using any type of averaging, so that if a pixel-row is "blank" (white) in the source image, there will be a white pixel-row in the corresponding position of the destination image: the "min" operation. The default algorithms (specified in getScaledInstance) do not allow me a fine-grained enough control. I would like to implement the following logic:
for each pixel row in the w-pixels wide destination image, d = pixel[w]
find the corresponding j pixel rows of the source image, s[][] = pixel[j][w]
write the new line of pixels, so that d[i] = min(s[j][i]) over all j, i
I have been reading up on RescaleOp, but have not figured out how to implement this functionality -- it is admittedly a weird type of scaling. Can anyone give me pointers on how to do this? In the worst case, I figure I can just create the destination BufferedImage and copy the pixels following the pseudocode, but I was wondering if there is a better way.
The RescaleOp methods include a parameter called RenderingHints. There is a hint called KEY_INTERPOLATION that decides the color to use when scaling an image.
If you use the value VALUE_INTERPOLATION_NEAREST_NEIGHBOR for the KEY_INTERPOLATION, Java will use the original colors, rather than using some type of algorithm to recalculate the new colors.
So, instead of white lines turning gray or becoming some mix of colors, you'll get either white lines or no lines at all. It all depends on the scaling factor and whether it's an even or odd row. For example, if you are scaling by half, then each 1-pixel-high horizontal line has at least a 50% chance of appearing in the new image. However, if the white lines were two pixels in height, you'd have a 100% chance of the white line appearing.
This is probably the closest you're going to get besides writing your own scaling method. Unfortunately, I don't see any other hints that might help further.
To implement your own scaling method, you could create a new class that implements the BufferedImageOp interface, and implement the filter() method. Use getRGB() and setRGB() on the BufferedImage object to get the pixels from the original image and set the pixels on the new image.
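A rough, untested sketch of that do-it-yourself route (a plain helper method rather than a full BufferedImageOp, assuming the usual java.awt.image imports and an integer vertical shrink factor), following the d[i] = min(s[j][i]) pseudocode from the question:
// Shrink 'src' vertically by 'factor', taking the per-channel minimum
// over each block of source rows (so darker pixels win within a block).
static BufferedImage minShrinkVertical(BufferedImage src, int factor) {
    int w = src.getWidth();
    int h = src.getHeight() / factor;
    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int minR = 255, minG = 255, minB = 255;
            for (int j = 0; j < factor; j++) {
                int rgb = src.getRGB(x, y * factor + j);
                minR = Math.min(minR, (rgb >> 16) & 0xFF);
                minG = Math.min(minG, (rgb >> 8) & 0xFF);
                minB = Math.min(minB, rgb & 0xFF);
            }
            dst.setRGB(x, y, (minR << 16) | (minG << 8) | minB);
        }
    }
    return dst;
}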
