I am working on a program to detect wrinkles in an image taken with a high-resolution camera.
The project is currently in its starting phase. I have performed the following steps so far:
Convert the image to grayscale and enhance its contrast (histogram equalization).
Remove noise using a Gaussian blur.
Apply adaptive thresholding to detect the wrinkles.
Use dilation to thicken the detected wrinkles and join the disparate elements of a single wrinkle as much as possible.
Remove noise by finding the contours and removing the ones with smaller areas.
Here is the code:
package Wrinkle.Detection;

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

public class DetectWrinkle {

    private Mat sourceImage;
    private Mat destinationImage;
    private Mat thresh;

    public void detectUsingThresh(String filename) {
        sourceImage = Highgui.imread(filename, Highgui.CV_LOAD_IMAGE_GRAYSCALE);

        // Contrast enhancement via histogram equalization
        Mat contrast = new Mat(sourceImage.rows(), sourceImage.cols(), sourceImage.type());
        Imgproc.equalizeHist(sourceImage, contrast);
        Highgui.imwrite("wrinkle_contrast.jpg", contrast);

        // Remove noise
        destinationImage = new Mat(contrast.rows(), contrast.cols(), contrast.type());
        Imgproc.GaussianBlur(contrast, destinationImage, new Size(31, 31), 0);
        Highgui.imwrite("wrinkle_Blur.jpg", destinationImage);

        // Apply adaptive threshold
        thresh = new Mat();
        Imgproc.adaptiveThreshold(destinationImage, thresh, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
                Imgproc.THRESH_BINARY_INV, 99, 10);
        Highgui.imwrite("wrinkle_threshold.jpg", thresh);

        // Dilation with a tall rectangular kernel (7 x 13)
        Mat element1 = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(2 * 3 + 1, 2 * 6 + 1));
        Imgproc.dilate(thresh, thresh, element1);
        Highgui.imwrite("wrinkle_thresh_dilation.jpg", thresh);

        // Find contours (RETR_FLOODFILL requires a 32-bit signed single-channel image)
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Mat image32S = new Mat();
        Mat threshClone = thresh.clone();
        threshClone.convertTo(image32S, CvType.CV_32SC1);
        Imgproc.findContours(image32S, contours, new Mat(), Imgproc.RETR_FLOODFILL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Find contours with smaller area and fill them with black (removing further noise)
        Imgproc.cvtColor(thresh, thresh, Imgproc.COLOR_GRAY2BGR);
        for (int c = 0; c < contours.size(); c++) {
            double value = Imgproc.contourArea(contours.get(c));
            if (value < 500) {
                Imgproc.drawContours(thresh, contours, c, new Scalar(0, 0, 0), -1);
            }
        }
        Highgui.imwrite("wrinkle_contour_fill.jpg", thresh);
    }

    public static void main(String[] args) {
        DetectWrinkle dw = new DetectWrinkle();
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        String imagefile = "wrinkle_guo (1).bmp";
        dw.detectUsingThresh(imagefile);
    }
}
Question:
As you can see from the images below, a single wrinkle on the skin gets broken into separate small elements. I am trying to connect those elements with dilation so that the complete wrinkle shows up. Once that is done, I remove the noise by detecting the contours, calculating each contour's area, and removing the contours whose area is below a particular value.
However, this is not giving me a proper result, so I suspect there is a better way to join the broken wrinkle elements. Please help me solve this.
And please pardon me if there is anything wrong with the question, as I really need a solution and I am a newbie here.
Following are the images:
Input image
After getting contours and removing noise by finding contour area
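One common way to join broken fragments is a morphological closing (dilation followed by erosion), which bridges small gaps without permanently thickening the strokes. Below is a minimal sketch, assuming the thresh Mat from the code above; the kernel size, shape and orientation are assumptions that need tuning per image:

// Sketch: morphological closing to bridge gaps between fragments of one wrinkle.
// A tall, narrow kernel favors joining fragments along mostly-vertical wrinkles;
// resize or rotate the kernel (or try MORPH_ELLIPSE) for other orientations.
Mat closeKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 15));
Imgproc.morphologyEx(thresh, thresh, Imgproc.MORPH_CLOSE, closeKernel);
Highgui.imwrite("wrinkle_thresh_close.jpg", thresh);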
Related
I am trying to segment a simple image using the watershed function provided by BoofCV in Java. So I have written (copied, edited and adjusted) the following code:
package alltestshere;

import java.awt.image.BufferedImage;
import java.util.List;

import boofcv.alg.filter.binary.BinaryImageOps;
import boofcv.alg.filter.binary.Contour;
import boofcv.alg.filter.binary.GThresholdImageOps;
import boofcv.alg.segmentation.watershed.WatershedVincentSoille1991;
import boofcv.factory.segmentation.FactorySegmentationAlg;
import boofcv.gui.ListDisplayPanel;
import boofcv.gui.binary.VisualizeBinaryData;
import boofcv.gui.image.ShowImages;
import boofcv.io.UtilIO;
import boofcv.io.image.ConvertBufferedImage;
import boofcv.io.image.UtilImageIO;
import boofcv.struct.ConnectRule;
import boofcv.struct.image.GrayS32;
import boofcv.struct.image.GrayU8;

public class examp {
    public static void main(String args[]) {
        // load the image and convert it into a usable format
        BufferedImage image = UtilImageIO.loadImage(UtilIO.pathExample("C:\\Users\\Caterina\\Downloads\\boofcv\\data\\example\\shapes\\shapes02.png"));
        GrayU8 input = ConvertBufferedImage.convertFromSingle(image, null, GrayU8.class);

        // declare some of my working data
        GrayU8 binary = new GrayU8(input.width, input.height);
        GrayS32 markers = new GrayS32(input.width, input.height);

        // select a global threshold using Otsu's method
        GThresholdImageOps.threshold(input, binary, GThresholdImageOps.computeOtsu(input, 0, 255), true);

        // through multiple erosions you can obtain the sure foreground and use it as markers to segment the image
        GrayU8 filtered = new GrayU8(input.width, input.height);
        GrayU8 filtered2 = new GrayU8(input.width, input.height);
        GrayU8 filtered3 = new GrayU8(input.width, input.height);
        BinaryImageOps.erode8(binary, 1, filtered);
        BinaryImageOps.erode8(filtered, 1, filtered2);
        BinaryImageOps.erode8(filtered2, 1, filtered3);

        // count how many markers you have (one for every foreground part, +1 for the background)
        int numRegions = BinaryImageOps.contour(filtered3, ConnectRule.EIGHT, markers).size() + 1;

        // detect foreground regions using an 8-connect rule
        List<Contour> contours = BinaryImageOps.contour(binary, ConnectRule.EIGHT, markers);

        // the watershed function takes the original b&w image and the markers as input
        WatershedVincentSoille1991 watershed = FactorySegmentationAlg.watershed(ConnectRule.FOUR);
        watershed.process(input, markers);

        // get the result of the watershed as output
        GrayS32 output = watershed.getOutput();

        // display the results
        BufferedImage visualBinary = VisualizeBinaryData.renderBinary(input, false, null);
        BufferedImage visualFiltered = VisualizeBinaryData.renderBinary(filtered3, false, null);
        BufferedImage visualLabel = VisualizeBinaryData.renderLabeledBG(markers, contours.size(), null);
        BufferedImage outLabeled = VisualizeBinaryData.renderLabeledBG(output, numRegions, null);

        ListDisplayPanel panel = new ListDisplayPanel();
        panel.addImage(visualBinary, "Binary Original");
        panel.addImage(visualFiltered, "Binary Filtered");
        panel.addImage(visualLabel, "Markers");
        panel.addImage(outLabeled, "Watershed");
        ShowImages.showWindow(panel, "Watershed");
    }
}
This code, however, does not work well. Specifically, instead of colouring the foreground objects with different colours and leaving the background as it is, it splits the whole image into regions, where each region consists of one foreground object plus some part of the background, and paints that entire region with the same colour (picture 3). So, what am I doing wrong?
I am uploading the Original Picture, the Markers Picture and the Watershed Picture.
Thanks in advance,
Katerina
You get this result because you are not providing the background as a region. The markers you pass to the watershed are only the contours of your shapes. Since the background is not a region of its own, the watershed algorithm splits it equally between the regions. It is split equally because, in your original (binary) image, every shape is at the same distance from the background.
If you want to get the background as another region, provide the watershed algorithm with some points of the background as markers, for example the corners of the image.
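A minimal sketch of that idea, assuming the variables from the question's code; the label bookkeeping is an assumption (BoofCV's contour() labels the blobs starting at 1, so the next free label is used for the background):

// Sketch: seed the image corners as one extra background region
// before running the watershed, so the background competes as its own region.
int backgroundLabel = numRegions; // next unused label after the blob labels
markers.set(0, 0, backgroundLabel);
markers.set(input.width - 1, 0, backgroundLabel);
markers.set(0, input.height - 1, backgroundLabel);
markers.set(input.width - 1, input.height - 1, backgroundLabel);
watershed.process(input, markers);
// adjust the region count passed to renderLabeledBG accordingly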
I have a captured image; the image contains a table. I want to crop the table out of that image.
This is a sample image.
Can someone suggest what can be done?
I have to use it in Android.
Use a Hough transform to find the lines in the image.
OpenCV can easily do this and has Java bindings. See the tutorial on this page for something very similar:
https://docs.opencv.org/3.4.1/d9/db0/tutorial_hough_lines.html
Here is the Java code provided in the tutorial:
import org.opencv.core.*;
import org.opencv.core.Point;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

class HoughLinesRun {
    public void run(String[] args) {
        // Declare the output variables
        Mat dst = new Mat(), cdst = new Mat(), cdstP;
        String default_file = "../../../../data/sudoku.png";
        String filename = ((args.length > 0) ? args[0] : default_file);
        // Load an image
        Mat src = Imgcodecs.imread(filename, Imgcodecs.IMREAD_GRAYSCALE);
        // Check if image is loaded fine
        if (src.empty()) {
            System.out.println("Error opening image!");
            System.out.println("Program Arguments: [image_name -- default " + default_file + "] \n");
            System.exit(-1);
        }
        // Edge detection
        Imgproc.Canny(src, dst, 50, 200, 3, false);
        // Copy edges to the images that will display the results in BGR
        Imgproc.cvtColor(dst, cdst, Imgproc.COLOR_GRAY2BGR);
        cdstP = cdst.clone();
        // Standard Hough Line Transform
        Mat lines = new Mat(); // will hold the results of the detection
        Imgproc.HoughLines(dst, lines, 1, Math.PI / 180, 150); // runs the actual detection
        // Draw the lines
        for (int x = 0; x < lines.rows(); x++) {
            double rho = lines.get(x, 0)[0],
                   theta = lines.get(x, 0)[1];
            double a = Math.cos(theta), b = Math.sin(theta);
            double x0 = a * rho, y0 = b * rho;
            Point pt1 = new Point(Math.round(x0 + 1000 * (-b)), Math.round(y0 + 1000 * (a)));
            Point pt2 = new Point(Math.round(x0 - 1000 * (-b)), Math.round(y0 - 1000 * (a)));
            Imgproc.line(cdst, pt1, pt2, new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
        }
        // Probabilistic Line Transform
        Mat linesP = new Mat(); // will hold the results of the detection
        Imgproc.HoughLinesP(dst, linesP, 1, Math.PI / 180, 50, 50, 10); // runs the actual detection
        // Draw the lines
        for (int x = 0; x < linesP.rows(); x++) {
            double[] l = linesP.get(x, 0);
            Imgproc.line(cdstP, new Point(l[0], l[1]), new Point(l[2], l[3]), new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
        }
        // Show results
        HighGui.imshow("Source", src);
        HighGui.imshow("Detected Lines (in red) - Standard Hough Line Transform", cdst);
        HighGui.imshow("Detected Lines (in red) - Probabilistic Line Transform", cdstP);
        // Wait and Exit
        HighGui.waitKey();
        System.exit(0);
    }
}

public class HoughLines {
    public static void main(String[] args) {
        // Load the native library.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        new HoughLinesRun().run(args);
    }
}
lines or linesP will contain the found lines. Instead of drawing them (as in the example), you will want to process them a little further:
Sort the found lines by slope.
The two largest clusters will be the horizontal lines and the vertical lines.
For the horizontal lines, calculate and sort by the y intercept. In image coordinates y grows downward, so the smallest y intercept describes the top of the table and the largest y intercept is the bottom.
For the vertical lines, calculate and sort by the x intercept. The largest x intercept is the right side of the table and the smallest x intercept is the left side.
You'll now have the four table borders (and, by intersecting them, the corner coordinates) and can do standard image manipulation to crop/rotate etc. OpenCV can help you with this step too. A sketch of the line grouping follows below.
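A minimal sketch of that grouping, assuming the lines Mat produced by HoughLines in the tutorial code above; the angle tolerances are assumptions to tune. A polar line is x*cos(theta) + y*sin(theta) = rho, so its y-intercept is rho/sin(theta) and its x-intercept is rho/cos(theta):

// needs: import java.util.ArrayList; import java.util.Comparator; import java.util.List;
// Sketch: split HoughLines results into near-horizontal and near-vertical lines,
// then sort by intercept to find the outermost table borders.
List<double[]> horizontal = new ArrayList<>();
List<double[]> vertical = new ArrayList<>();
for (int i = 0; i < lines.rows(); i++) {
    double rho = lines.get(i, 0)[0], theta = lines.get(i, 0)[1];
    if (Math.abs(theta - Math.PI / 2) < 0.1) {         // theta near 90 degrees: horizontal line
        horizontal.add(new double[]{rho / Math.sin(theta), rho, theta}); // store y-intercept first
    } else if (theta < 0.1 || theta > Math.PI - 0.1) { // theta near 0 or 180 degrees: vertical line
        vertical.add(new double[]{rho / Math.cos(theta), rho, theta});   // store x-intercept first
    }
}
horizontal.sort(Comparator.comparingDouble(l -> l[0])); // first = top border, last = bottom border
vertical.sort(Comparator.comparingDouble(l -> l[0]));   // first = left border, last = right border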
Convert your image to grayscale.
Threshold your image to drop noise.
Find the minimum area rect of the non-blank pixels.
In Python the code would look like:
import cv2
import numpy as np
img = cv2.imread('table.jpg')
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 222, 255, cv2.THRESH_BINARY )
# write out the thresholded image to debug the 222 value
cv2.imwrite("thresh.png", thresh)
indices = np.where(thresh != 255)
coords = np.array([(b,a) for a, b in zip(*(indices[0], indices[1]))])
# coords = cv2.convexHull(coords)
rect = cv2.minAreaRect(coords)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(img, [box], 0, (0, 0, 255), 2)
cv2.imwrite("box.png", img)
For me this produces the following image.
If your image didn't have the red squares it would be a tighter fit.
I will start by stating that I'm slowly going insane. I am trying to extract contours from an image and compute their centers of mass using Java and OpenCV.
For all the inner contours, the results are correct, however for the outer (largest) contour, the centroid is way, way off. The input image, the code and the output result are all below. OpenCV version is 3.1.
Others have had this problem and the suggestions were to:
Check if the contour is closed. It is, I checked.
Use Canny to detect edges before extracting contours. I don't understand why that's necessary, but I tried it and the result is that it messes up the tree hierarchy since it generates two contours for each edge, which is not something I want.
The input image is very large (27 MB), and the weird part is that when I resized it to 1000x800 the center of mass was suddenly computed correctly. However, I need to be able to process the image at its original resolution.
package com.philrovision.dxfvision.matching;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;
import org.testng.annotations.Test;

/**
 * @author rhobincu
 */
public class MomentsNGTest {

    @Test
    public void testOpenCvMoments() {
        Mat image = Imgcodecs.imread("moments_fail.png");
        Mat channel = new Mat();
        Core.extractChannel(image, channel, 1);

        Mat mask = new Mat();
        Imgproc.threshold(channel, mask, 191, 255, Imgproc.THRESH_BINARY);
        Mat filteredMask = new Mat();
        Imgproc.medianBlur(mask, filteredMask, 5);

        List<MatOfPoint> allContours = new ArrayList<>();
        Mat hierarchy = new Mat();
        Imgproc.findContours(filteredMask, allContours, hierarchy, Imgproc.RETR_TREE,
                Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));

        // pick the contour with the largest area
        MatOfPoint largestContour = allContours.stream().max((c1, c2) -> {
            double area1 = Imgproc.contourArea(c1);
            double area2 = Imgproc.contourArea(c2);
            if (area1 < area2) {
                return -1;
            } else if (area1 > area2) {
                return 1;
            }
            return 0;
        }).get();

        Mat debugCanvas = new Mat(image.size(), CvType.CV_8UC3);
        Imgproc.drawContours(debugCanvas, Arrays.asList(largestContour), -1, new Scalar(255, 255, 255), 3);
        Imgproc.drawMarker(debugCanvas, getCenterOfMass(largestContour), new Scalar(255, 255, 255));

        Rect boundingBox = Imgproc.boundingRect(largestContour);
        Imgproc.rectangle(debugCanvas, boundingBox.br(), boundingBox.tl(), new Scalar(0, 255, 0), 3);

        System.out.printf("Bounding box area is: %f and contour area is: %f", boundingBox.area(),
                Imgproc.contourArea(largestContour));
        Imgcodecs.imwrite("output.png", debugCanvas);
    }

    private static Point getCenterOfMass(MatOfPoint contour) {
        Moments moments = Imgproc.moments(contour);
        return new Point(moments.m10 / moments.m00, moments.m01 / moments.m00);
    }
}
Input: (full image here)
Output:
STDOUT:
Bounding box area is: 6460729,000000 and contour area is: 5963212,000000
The centroid is drawn close to the upper left corner, outside the contour.
As mentioned in the comment discussion, it looks like the issue you're having was reported specifically against the Java implementation on OpenCV's GitHub. It was eventually solved with this simple pull request: there were some unnecessary int castings.
Possible solutions, then:
Upgrading OpenCV should fix you up.
You can edit your library files with the fix (it simply removes an (int) cast on a few lines).
Define your own function to calculate the centroids.
If you're bored and want to figure out option 3, it's actually not a difficult calculation:
Centroids of a contour are usually calculated from image moments. As shown on that page, a moment M_ij can be defined on images as:
M_ij = sum_x sum_y (x^i * y^j * I(x, y))
and the centroid of a binary shape is
(x_c, y_c) = (M_10/M_00, M_01/M_00)
Note that M_00 = sum_x sum_y (I(x, y)) which, in a binary 0 and 1 image, is just the number of white pixels. If your contourArea is working as you stated in the comments, you can simply use that as M_00. Then note also that M_10 is just the sum of the x values corresponding to white pixels and M_01 with the y values. These can be easily calculated, and you can define your own centroid function with the contours.
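A minimal Java sketch of option 3, assuming the filteredMask binary image and largestContour from the question's code. Note that the per-pixel Mat.get loop is slow on a 27 MB image; bulk-reading the mask into a byte[] would be faster, but this keeps the idea plain:

// Sketch: centroid from raw moments of the filled contour mask.
// M00 = number of white pixels, M10 = sum of their x, M01 = sum of their y.
Mat contourMask = Mat.zeros(filteredMask.size(), CvType.CV_8UC1);
Imgproc.drawContours(contourMask, Arrays.asList(largestContour), -1, new Scalar(255), -1); // -1 thickness fills
double m00 = 0, m10 = 0, m01 = 0;
for (int y = 0; y < contourMask.rows(); y++) {
    for (int x = 0; x < contourMask.cols(); x++) {
        if (contourMask.get(y, x)[0] > 0) { // white pixel inside the contour
            m00 += 1;
            m10 += x;
            m01 += y;
        }
    }
}
Point centroid = new Point(m10 / m00, m01 / m00);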
I'm trying to code a class to seam carve images in the x and y directions. The x direction is working, and to reduce in the y direction I thought I would simply rotate the image 90°, run the same code over the already rescaled image (rescaled in the x direction only), and afterwards rotate it back to its initial state.
I found something with AffineTransform and tried it. It actually produced a rotated image, but it messed up the colors and I don't know why.
This is all the code:
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class example {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws IOException {
        BufferedImage imgIn = ImageIO.read(new File("landscape.jpg"));
        BufferedImage imgIn2 = imgIn;

        AffineTransform tx = new AffineTransform();
        tx.rotate(Math.PI / 2, imgIn2.getWidth() / 2, imgIn2.getHeight() / 2); // (radians, anchorX, anchorY)
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
        BufferedImage last = op.filter(imgIn2, null); // (source, destination)
        ImageIO.write(last, "JPEG", new File("distortedColors.jpg"));
    }
}
Just alter the filename in
BufferedImage imgIn = ImageIO.read(new File("landscape.jpg")); and try it.
When executed, you get four images: a heatmap, an image with the seams drawn in, a rescaled image, and a last image meant to test whether the rotation worked. It should show a rotated image, but the colors come out distorted...
Help would be greatly appreciated!
EDIT:
The problem is with the AffineTransformOp. You need
AffineTransformOp.TYPE_NEAREST_NEIGHBOR
instead of the TYPE_BILINEAR you have now.
The second paragraph of the documentation hints at this.
This class uses an affine transform to perform a linear mapping from
2D coordinates in the source image or Raster to 2D coordinates in the
destination image or Raster. The type of interpolation that is used is
specified through a constructor, either by a RenderingHints object or
by one of the integer interpolation types defined in this class. If a
RenderingHints object is specified in the constructor, the
interpolation hint and the rendering quality hint are used to set the
interpolation type for this operation.
The color rendering hint and
the dithering hint can be used when color conversion is required. Note
that the following constraints have to be met: The source and
destination must be different. For Raster objects, the number of bands
in the source must be equal to the number of bands in the destination.
So this works
AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
It seems like there's a color conversion happening due to passing null to op.filter(imgIn2, null);.
If you change it like that it should work:
BufferedImage last = new BufferedImage( imgIn2.getWidth(), imgIn2.getHeight(), imgIn2.getType() );
op.filter(imgIn2, last );
Building on what bhavya said...
Keep it simple: use the destination dimensions expected from the operation:
AffineTransformOp op = new AffineTransformOp(transform, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
BufferedImage destinationImage = op.filter(bImage, op.createCompatibleDestImage(bImage, null));
I am developing an Android app that calculates the sum of all the points on the visible domino tiles, shown in the picture, using OpenCV for Android.
The problem is that I can't find a way to filter out the other contours and count only the dots I see on the dominoes. I tried Canny edge detection followed by HoughCircles, but with no result, as I don't have a straight top-down view of the tiles and HoughCircles detects perfect circles only :)
Here is my code:
public Mat onCameraFrame(Mat inputFrame) {
    inputFrame.copyTo(mRgba);

    Mat grey = new Mat();
    // Make it greyscale
    Imgproc.cvtColor(mRgba, grey, Imgproc.COLOR_RGBA2GRAY);

    // init contours arraylist
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>(200);

    // blur and threshold
    Imgproc.GaussianBlur(grey, grey, new Size(9, 9), 10);
    Imgproc.threshold(grey, grey, 80, 150, Imgproc.THRESH_BINARY);

    // because findContours modifies the image, I back it up
    Mat greyCopy = new Mat();
    grey.copyTo(greyCopy);
    Imgproc.findContours(greyCopy, contours, new Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_NONE);
    // Now I have my contours perfectly

    MatOfPoint2f mMOP2f1 = new MatOfPoint2f();
    // a list for only the selected contours
    List<MatOfPoint> SelectedContours = new ArrayList<MatOfPoint>(400);
    for (int i = 0; i < contours.size(); i++) {
        if (/* here I should put a condition that distinguishes my spots,
               e.g.: the contour is black inside and forms a black disk */) {
            SelectedContours.add(contours.get(i));
        }
    }
    Imgproc.drawContours(mRgba, SelectedContours, -1, new Scalar(255, 0, 0, 255), 1);
    return mRgba;
}
EDIT:
One unique feature of my contours after thresholding is that they are totally black inside. Is there any way I could calculate the mean color/intensity for a given contour?
There is a similar problem and possible solution on SO, titled Detection of coins (and fit ellipses) on an image. There you will find some recommendations about OpenCV's fitEllipse function.
You should take a look at this for more info on OpenCV's fitEllipse.
Also, to detect only black elements in an image, you can use the HSV color model to find only black colors. You can find an explanation here.
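As for the EDIT question, the mean intensity inside a contour can be measured with Core.mean and a per-contour mask. A minimal sketch, assuming the contours and SelectedContours lists from the question's code; greyOriginal is a hypothetical pre-threshold copy of the greyscale frame (the question's code overwrites grey in place when thresholding), and the darkness threshold is a guess to tune:

// Sketch: mean grey level inside contour i via a filled mask.
Mat dotMask = Mat.zeros(greyOriginal.size(), CvType.CV_8UC1);
Imgproc.drawContours(dotMask, contours, i, new Scalar(255), -1); // -1 thickness fills the contour
double meanIntensity = Core.mean(greyOriginal, dotMask).val[0];  // masked mean over the contour interior
if (meanIntensity < 60) { // a dark interior suggests a domino dot
    SelectedContours.add(contours.get(i));
}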