Center of mass computation yields wrong results in OpenCV - Java

I will start by stating that I'm slowly going insane. I am trying to extract contours from an image and compute their centers of mass using Java and OpenCV.
For all the inner contours, the results are correct, however for the outer (largest) contour, the centroid is way, way off. The input image, the code and the output result are all below. OpenCV version is 3.1.
Others have had this problem and the suggestions were to:
Check if the contour is closed. It is, I checked.
Use Canny to detect edges before extracting contours. I don't understand why that's necessary, but I tried it and the result is that it messes up the tree hierarchy since it generates two contours for each edge, which is not something I want.
The input image is very large (27MB), and the weird part is that when I resized it to 1000x800 the center of mass was suddenly computed correctly; however, I need to be able to process the image at the original resolution.
package com.philrovision.dxfvision.matching;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;
import org.testng.annotations.Test;

/**
 * @author rhobincu
 */
public class MomentsNGTest {

    @Test
    public void testOpenCvMoments() {
        Mat image = Imgcodecs.imread("moments_fail.png");
        Mat channel = new Mat();
        Core.extractChannel(image, channel, 1);
        Mat mask = new Mat();
        Imgproc.threshold(channel, mask, 191, 255, Imgproc.THRESH_BINARY);
        Mat filteredMask = new Mat();
        Imgproc.medianBlur(mask, filteredMask, 5);

        List<MatOfPoint> allContours = new ArrayList<>();
        Mat hierarchy = new Mat();
        Imgproc.findContours(filteredMask, allContours, hierarchy, Imgproc.RETR_TREE,
                Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));

        MatOfPoint largestContour = allContours.stream().max((c1, c2) -> {
            double area1 = Imgproc.contourArea(c1);
            double area2 = Imgproc.contourArea(c2);
            if (area1 < area2) {
                return -1;
            } else if (area1 > area2) {
                return 1;
            }
            return 0;
        }).get();

        Mat debugCanvas = new Mat(image.size(), CvType.CV_8UC3);
        Imgproc.drawContours(debugCanvas, Arrays.asList(largestContour), -1, new Scalar(255, 255, 255), 3);
        Imgproc.drawMarker(debugCanvas, getCenterOfMass(largestContour),
                new Scalar(255, 255, 255));
        Rect boundingBox = Imgproc.boundingRect(largestContour);
        Imgproc.rectangle(debugCanvas, boundingBox.br(), boundingBox.tl(), new Scalar(0, 255, 0), 3);

        System.out.printf("Bounding box area is: %f and contour area is: %f", boundingBox.area(),
                Imgproc.contourArea(largestContour));
        Imgcodecs.imwrite("output.png", debugCanvas);
    }

    private static Point getCenterOfMass(MatOfPoint contour) {
        Moments moments = Imgproc.moments(contour);
        return new Point(moments.m10 / moments.m00, moments.m01 / moments.m00);
    }
}
Input: (full image here)
Output:
STDOUT:
Bounding box area is: 6460729,000000 and contour area is: 5963212,000000
The centroid is drawn close to the upper left corner, outside the contour.

As mentioned in the comment discussion, it looks like the issue you're having was reported specifically against the Java implementation on OpenCV's GitHub. It was eventually solved with a simple pull request: there were some unnecessary int casts.
Possible solutions, then:
Upgrading OpenCV should fix you up.
You can edit your library files with the fix (it's simply removing an (int) cast on a few lines).
Define your own function to calculate the centroid.
If you want to work out option 3 yourself, it's actually not a difficult calculation:
Centroids of a contour are usually calculated from image moments. As shown on that page, a moment M_ij can be defined on images as:
M_ij = sum_x sum_y (x^i * y^j * I(x, y))
and the centroid of a binary shape is
(x_c, y_c) = (M_10/M_00, M_01/M_00)
Note that M_00 = sum_x sum_y I(x, y), which, in a binary 0/1 image, is just the number of white pixels. If contourArea is working as you stated in the comments, you can simply use that as M_00. Note also that M_10 is just the sum of the x values of the white pixels, and M_01 the sum of the y values. Both are easy to compute, so you can define your own centroid function.
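For example, here is a minimal sketch of such a hand-rolled centroid over the binary mask (filteredMask in the code above) rather than the contour itself; it copies one row at a time because per-pixel Mat.get calls would be very slow on an image this large:

// Sketch only: centroid from raw image moments over a binary 8-bit mask
// (white pixels are non-zero, as produced by THRESH_BINARY).
private static Point centroidFromMask(Mat mask) {
    double m00 = 0, m10 = 0, m01 = 0;
    byte[] row = new byte[mask.cols()];
    for (int y = 0; y < mask.rows(); y++) {
        mask.get(y, 0, row); // bulk-copy one row
        for (int x = 0; x < row.length; x++) {
            if (row[x] != 0) { // white pixel
                m00 += 1;      // M_00: pixel count
                m10 += x;      // M_10: sum of x values
                m01 += y;      // M_01: sum of y values
            }
        }
    }
    return new Point(m10 / m00, m01 / m00);
}

If contourArea is reliable for you, you could substitute it for m00.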

Related

How to watershed(segment) an image in Java with BoofCV?

I am trying to segment a simple image using the watershed function provided by BoofCV in Java. So I have written (copied, edited and adjusted) the following code:
package alltestshere;

import boofcv.alg.filter.binary.BinaryImageOps;
import boofcv.alg.filter.binary.Contour;
import boofcv.alg.filter.binary.GThresholdImageOps;
import boofcv.gui.ListDisplayPanel;
import boofcv.gui.binary.VisualizeBinaryData;
import boofcv.gui.image.ShowImages;
import boofcv.io.UtilIO;
import boofcv.io.image.ConvertBufferedImage;
import boofcv.io.image.UtilImageIO;
import boofcv.struct.ConnectRule;
import boofcv.struct.image.GrayS32;
import boofcv.struct.image.GrayU8;
import java.awt.image.BufferedImage;
import java.util.List;
import boofcv.alg.segmentation.watershed.WatershedVincentSoille1991;
import boofcv.factory.segmentation.FactorySegmentationAlg;
import boofcv.gui.feature.VisualizeRegions;

public class examp {

    public static void main(String args[]) {
        // load and convert the image into a usable format
        BufferedImage image = UtilImageIO.loadImage(UtilIO.pathExample("C:\\Users\\Caterina\\Downloads\\boofcv\\data\\example\\shapes\\shapes02.png"));
        GrayU8 input = ConvertBufferedImage.convertFromSingle(image, null, GrayU8.class);

        // declare some of my working data
        GrayU8 binary = new GrayU8(input.width, input.height);
        GrayS32 markers = new GrayS32(input.width, input.height);

        // select a global threshold using Otsu's method
        GThresholdImageOps.threshold(input, binary, GThresholdImageOps.computeOtsu(input, 0, 255), true);

        // through multiple erosions you can obtain the sure foreground and use it as markers to segment the image
        GrayU8 filtered = new GrayU8(input.width, input.height);
        GrayU8 filtered2 = new GrayU8(input.width, input.height);
        GrayU8 filtered3 = new GrayU8(input.width, input.height);
        BinaryImageOps.erode8(binary, 1, filtered);
        BinaryImageOps.erode8(filtered, 1, filtered2);
        BinaryImageOps.erode8(filtered2, 1, filtered3);

        // count how many markers you have (one for every foreground part + 1 for the background)
        int numRegions = BinaryImageOps.contour(filtered3, ConnectRule.EIGHT, markers).size() + 1;

        // detect foreground regions using an 8-connect rule
        List<Contour> contours = BinaryImageOps.contour(binary, ConnectRule.EIGHT, markers);

        // watershed takes the original b&w image and the markers as input
        WatershedVincentSoille1991 watershed = FactorySegmentationAlg.watershed(ConnectRule.FOUR);
        watershed.process(input, markers);

        // get the results of the watershed as output
        GrayS32 output = watershed.getOutput();

        // display the results
        BufferedImage visualBinary = VisualizeBinaryData.renderBinary(input, false, null);
        BufferedImage visualFiltered = VisualizeBinaryData.renderBinary(filtered3, false, null);
        BufferedImage visualLabel = VisualizeBinaryData.renderLabeledBG(markers, contours.size(), null);
        BufferedImage outLabeled = VisualizeBinaryData.renderLabeledBG(output, numRegions, null);

        ListDisplayPanel panel = new ListDisplayPanel();
        panel.addImage(visualBinary, "Binary Original");
        panel.addImage(visualFiltered, "Binary Filtered");
        panel.addImage(visualLabel, "Markers");
        panel.addImage(outLabeled, "Watershed");
        ShowImages.showWindow(panel, "Watershed");
    }
}
This code, however, does not work well. Instead of colouring the foreground objects with different colours and leaving the background as it is, it splits the whole image into regions, where each region consists of one foreground object plus some part of the background, and paints each such region in a single colour (picture 3). So, what am I doing wrong?
I am uploading the Original Picture, the Markers Picture and the Watershed Picture.
Thanks in advance,
Katerina
You get this result because you are not providing the background as a region. The markers you pass to watershed are only the contours of your shapes. Since the background isn't a region of its own, the watershed algorithm splits it equally among the shape regions. It is split equally because in your original (binary) image every shape is at the same distance from the background.
If you want to get the background as another region, provide the watershed algorithm with some background points as markers, for example the image corners.
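A minimal sketch of that change, using the variables from your code; it assumes the shape regions were labelled 1..n by BinaryImageOps.contour, so numRegions (n + 1) is a free id for the background:

// Sketch: seed the four image corners as an extra background region
// before running the watershed (GrayS32.set takes x, y, value).
int backgroundId = numRegions;
markers.set(0, 0, backgroundId);
markers.set(input.width - 1, 0, backgroundId);
markers.set(0, input.height - 1, backgroundId);
markers.set(input.width - 1, input.height - 1, backgroundId);
watershed.process(input, markers);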

Cropping an image with an outline

I have a captured image; the image contains a table, and I want to crop the table out of that image.
This is a sample image.
Can someone suggest what can be done?
I have to use it in Android.
Use a Hough transform to find the lines in the image.
OpenCV can do this easily and has Java bindings. See the tutorial on this page on how to do something very similar.
https://docs.opencv.org/3.4.1/d9/db0/tutorial_hough_lines.html
Here is the java code provided in the tutorial:
import org.opencv.core.*;
import org.opencv.core.Point;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

class HoughLinesRun {
    public void run(String[] args) {
        // Declare the output variables
        Mat dst = new Mat(), cdst = new Mat(), cdstP;
        String default_file = "../../../../data/sudoku.png";
        String filename = ((args.length > 0) ? args[0] : default_file);
        // Load an image
        Mat src = Imgcodecs.imread(filename, Imgcodecs.IMREAD_GRAYSCALE);
        // Check if image is loaded fine
        if (src.empty()) {
            System.out.println("Error opening image!");
            System.out.println("Program Arguments: [image_name -- default "
                    + default_file + "] \n");
            System.exit(-1);
        }
        // Edge detection
        Imgproc.Canny(src, dst, 50, 200, 3, false);
        // Copy edges to the images that will display the results in BGR
        Imgproc.cvtColor(dst, cdst, Imgproc.COLOR_GRAY2BGR);
        cdstP = cdst.clone();
        // Standard Hough Line Transform
        Mat lines = new Mat(); // will hold the results of the detection
        Imgproc.HoughLines(dst, lines, 1, Math.PI / 180, 150); // runs the actual detection
        // Draw the lines
        for (int x = 0; x < lines.rows(); x++) {
            double rho = lines.get(x, 0)[0],
                    theta = lines.get(x, 0)[1];
            double a = Math.cos(theta), b = Math.sin(theta);
            double x0 = a * rho, y0 = b * rho;
            Point pt1 = new Point(Math.round(x0 + 1000 * (-b)), Math.round(y0 + 1000 * (a)));
            Point pt2 = new Point(Math.round(x0 - 1000 * (-b)), Math.round(y0 - 1000 * (a)));
            Imgproc.line(cdst, pt1, pt2, new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
        }
        // Probabilistic Line Transform
        Mat linesP = new Mat(); // will hold the results of the detection
        Imgproc.HoughLinesP(dst, linesP, 1, Math.PI / 180, 50, 50, 10); // runs the actual detection
        // Draw the lines
        for (int x = 0; x < linesP.rows(); x++) {
            double[] l = linesP.get(x, 0);
            Imgproc.line(cdstP, new Point(l[0], l[1]), new Point(l[2], l[3]), new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
        }
        // Show results
        HighGui.imshow("Source", src);
        HighGui.imshow("Detected Lines (in red) - Standard Hough Line Transform", cdst);
        HighGui.imshow("Detected Lines (in red) - Probabilistic Line Transform", cdstP);
        // Wait and Exit
        HighGui.waitKey();
        System.exit(0);
    }
}

public class HoughLines {
    public static void main(String[] args) {
        // Load the native library.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        new HoughLinesRun().run(args);
    }
}
lines or linesP will contain the found lines. Instead of drawing them (as in the example), you will want to process them a little further:
Sort the found lines by slope (sketched below).
The two largest clusters will be the horizontal lines and the vertical lines.
For the horizontal lines, calculate and sort by the y intercept. Since image coordinates put the origin in the top-left corner, the smallest y intercept describes the top of the table and the largest y intercept the bottom.
For the vertical lines, calculate and sort by the x intercept. The smallest x intercept is the left side of the table and the largest x intercept the right side.
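A rough Java sketch of that clustering and intercept logic, assuming lines is the Mat filled by Imgproc.HoughLines in the tutorial code above (each row holds rho and theta), with java.util.ArrayList, Collections and List imported:

// Sketch: split (rho, theta) lines into near-horizontal and near-vertical
// clusters, then take the extreme intercepts as the table edges.
List<Double> yIntercepts = new ArrayList<>(); // from near-horizontal lines
List<Double> xIntercepts = new ArrayList<>(); // from near-vertical lines
for (int i = 0; i < lines.rows(); i++) {
    double rho = lines.get(i, 0)[0];
    double theta = lines.get(i, 0)[1];
    if (Math.abs(theta - Math.PI / 2) < Math.PI / 6) {
        // near-horizontal: x*cos(theta) + y*sin(theta) = rho, so y at x = 0 is:
        yIntercepts.add(rho / Math.sin(theta));
    } else if (theta < Math.PI / 6 || theta > 5 * Math.PI / 6) {
        // near-vertical: x at y = 0 is:
        xIntercepts.add(rho / Math.cos(theta));
    }
}
double top = Collections.min(yIntercepts);    // smallest y intercept
double bottom = Collections.max(yIntercepts); // largest y intercept
double left = Collections.min(xIntercepts);
double right = Collections.max(xIntercepts);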
You'll now have the coordinates of the four table corners and can do standard image manipulation to crop/rotate etc. OpenCV can help you with this step too.
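For instance, a minimal sketch of the crop/deskew step via a perspective warp; the output size w x h is a hypothetical name, and the corners are built from the intercepts in the sketch above:

// Sketch: warp the table quadrilateral into an axis-aligned w x h image.
MatOfPoint2f srcCorners = new MatOfPoint2f(
        new Point(left, top), new Point(right, top),
        new Point(right, bottom), new Point(left, bottom));
MatOfPoint2f dstCorners = new MatOfPoint2f(
        new Point(0, 0), new Point(w, 0), new Point(w, h), new Point(0, h));
Mat transform = Imgproc.getPerspectiveTransform(srcCorners, dstCorners);
Mat table = new Mat();
Imgproc.warpPerspective(src, table, transform, new Size(w, h));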
Convert your image to grayscale.
Threshold your image to drop noise.
Find the minimum area rect of the non-blank pixels.
In Python, the code would look like this:
import cv2
import numpy as np
img = cv2.imread('table.jpg')
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 222, 255, cv2.THRESH_BINARY )
# write out the thresholded image to debug the 222 value
cv2.imwrite("thresh.png", thresh)
indices = np.where(thresh != 255)
coords = np.array([(b,a) for a, b in zip(*(indices[0], indices[1]))])
# coords = cv2.convexHull(coords)
rect = cv2.minAreaRect(coords)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(img, [box], 0, (0, 0, 255), 2)
cv2.imwrite("box.png", img)
For me this produces the following image.
If your image didn't have the red squares it would be a tighter fit.
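Since the question needs this from Java/Android, here is a hedged Java translation of the same idea (the OpenCV Android SDK exposes the same org.opencv API; the 222 threshold and file names are carried over from the Python sketch):

// Sketch: find the minimum-area rectangle around all non-blank pixels.
Mat img = Imgcodecs.imread("table.jpg");
Mat gray = new Mat();
Imgproc.cvtColor(img, gray, Imgproc.COLOR_BGR2GRAY);
Mat thresh = new Mat();
// THRESH_BINARY_INV makes non-blank pixels white so findNonZero picks them up
Imgproc.threshold(gray, thresh, 222, 255, Imgproc.THRESH_BINARY_INV);
MatOfPoint points = new MatOfPoint();
Core.findNonZero(thresh, points);
RotatedRect rect = Imgproc.minAreaRect(new MatOfPoint2f(points.toArray()));
Point[] box = new Point[4];
rect.points(box);
for (int i = 0; i < 4; i++) { // draw the box, as in the Python version
    Imgproc.line(img, box[i], box[(i + 1) % 4], new Scalar(0, 0, 255), 2);
}
Imgcodecs.imwrite("box.png", img);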

Rotating BufferedImage changes its colors

I'm trying to code a class to seam carve images in x and y direction. The x direction is working, and to reduce the y direction I thought about simply rotating the image 90°, running the same code over the already rescaled image (rescaled in x direction only) and, after that, rotating it back to its initial state.
I found something with AffineTransform and tried it. It actually produced a rotated image, but messed up the colors and I don't know why.
This is all the code:
import java.awt.image.BufferedImage;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.io.File;
import java.io.IOException;
import javafx.scene.paint.Color;
import javax.imageio.ImageIO;

public class example {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws IOException {
        BufferedImage imgIn = ImageIO.read(new File("landscape.jpg"));
        BufferedImage imgIn2 = imgIn;
        AffineTransform tx = new AffineTransform();
        tx.rotate(Math.PI / 2, imgIn2.getWidth() / 2, imgIn2.getHeight() / 2); // (radians, anchorX, anchorY)
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
        BufferedImage last = op.filter(imgIn2, null); // (source, destination)
        ImageIO.write(last, "JPEG", new File("distortedColors.jpg"));
    }
}
Just alter the filename in
BufferedImage imgIn = ImageIO.read(new File("landscape.jpg")); and try it.
When executed, you get four images: a heatmap, an image with seams in it, a rescaled image and the rotated test image. The last image is a test to see if the rotation worked; it shows a rotated image, but with distorted colors...
Help would be greatly appreciated!
EDIT:
The problem is with the AffineTransformOp. You need
AffineTransformOp.TYPE_NEAREST_NEIGHBOR
instead of the TYPE_BILINEAR you have now.
The second paragraph of the documentation hints at this:
This class uses an affine transform to perform a linear mapping from
2D coordinates in the source image or Raster to 2D coordinates in the
destination image or Raster. The type of interpolation that is used is
specified through a constructor, either by a RenderingHints object or
by one of the integer interpolation types defined in this class. If a
RenderingHints object is specified in the constructor, the
interpolation hint and the rendering quality hint are used to set the
interpolation type for this operation.
The color rendering hint and
the dithering hint can be used when color conversion is required. Note
that the following constraints have to be met: The source and
destination must be different. For Raster objects, the number of bands
in the source must be equal to the number of bands in the destination.
So this works
AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
It seems like there's a color conversion happening due to passing null to op.filter(imgIn2, null);.
If you change it like that it should work:
BufferedImage last = new BufferedImage( imgIn2.getWidth(), imgIn2.getHeight(), imgIn2.getType() );
op.filter(imgIn2, last );
Building on what bhavya said: keep it simple and use the destination image dimensions the operation expects:
AffineTransformOp op = new AffineTransformOp(transform, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
BufferedImage destinationImage = op.filter(bImage, op.createCompatibleDestImage(bImage, null));

Connect broken lines of detected wrinkles in an image using Java opencv

I am working on a program to detect wrinkles in an image taken from a high resolution camera.
Currently the project is in its starting phase. I have performed the following steps until now:
Convert to grayscale and contrast the image.
Remove noise using Gaussian Blur.
Apply Adaptive threshold to detect wrinkles.
Use dilation to enhance the size of the detected wrinkle and join disparate elements of a single wrinkle as much as possible.
Remove noise by finding the contours and removing the ones with smaller areas.
Here is the code:
package Wrinkle.Detection;

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

public class DetectWrinkle {

    private Mat sourceImage;
    private Mat destinationImage;
    private Mat thresh;

    public void detectUsingThresh(String filename) {
        sourceImage = Highgui.imread(filename, Highgui.CV_LOAD_IMAGE_GRAYSCALE);

        // contrast
        Mat contrast = new Mat(sourceImage.rows(), sourceImage.cols(), sourceImage.type());
        Imgproc.equalizeHist(sourceImage, contrast);
        Highgui.imwrite("wrinkle_contrast.jpg", contrast);

        // remove noise
        destinationImage = new Mat(contrast.rows(), contrast.cols(), contrast.type());
        Imgproc.GaussianBlur(contrast, destinationImage, new Size(31, 31), 0);
        Highgui.imwrite("wrinkle_Blur.jpg", destinationImage);

        // apply adaptive threshold
        thresh = new Mat();
        Imgproc.adaptiveThreshold(destinationImage, thresh, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
                Imgproc.THRESH_BINARY_INV, 99, 10);
        Highgui.imwrite("wrinkle_threshold.jpg", thresh);

        // dilation
        Mat element1 = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(2 * 3 + 1, 2 * 6 + 1));
        Imgproc.dilate(thresh, thresh, element1);
        Highgui.imwrite("wrinkle_thresh_dilation.jpg", thresh);

        // find contours
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Mat image32S = new Mat();
        Mat threshClone = thresh.clone();
        threshClone.convertTo(image32S, CvType.CV_32SC1);
        Imgproc.findContours(image32S, contours, new Mat(), Imgproc.RETR_FLOODFILL,
                Imgproc.CHAIN_APPROX_SIMPLE);

        // find contours with smaller area and color them black (removing further noise)
        Imgproc.cvtColor(thresh, thresh, Imgproc.COLOR_GRAY2BGR);
        for (int c = 0; c < contours.size(); c++) {
            double value = Imgproc.contourArea(contours.get(c));
            if (value < 500) {
                Imgproc.drawContours(thresh, contours, c, new Scalar(0, 0, 0), -1);
            }
        }
        Highgui.imwrite("wrinkle_contour_fill.jpg", thresh);
    }

    public static void main(String[] args) {
        DetectWrinkle dw = new DetectWrinkle();
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        String imagefile = "wrinkle_guo (1).bmp";
        dw.detectUsingThresh(imagefile);
    }
}
Question:
As you can see from the images below, a single wrinkle on the skin gets broken into separate small elements. I am trying to connect those elements into the complete wrinkle using dilate. Once that is done, I remove noise by detecting the contours, calculating each contour's area and removing the contours whose area is below a particular value.
However, this is not giving me a proper result, so I feel there could be a better way of joining the broken wrinkle elements. Please help me solve this.
And please pardon me if there is anything wrong with the question; I really need a solution and I am a newbie here.
Following are the images :
Input image
After getting contours and removing noise by finding contour area

Mapping different colors to points on a plane in Java3D

I am trying to write a Java3D application which emulates what you would see on a spectrogram such as this: http://en.wikipedia.org/wiki/File:Spectrogram-19thC.png. My main difficulty right now is figuring out how to actually display amplitude values on the plane the way the spectrogram does: using different colors to designate different intensities at each time-frequency coordinate ((x, y) point) on the spectrogram.
Here is the current version of my code. My constructor takes as an argument a 2D array containing time-frequency coordinates. I create a plane representing the spectrogram and set up some for loops to use when actually displaying the amplitude values. I know how to calculate the amplitude values for each point on the 2D plane, and I have an idea of how to assign ranges of amplitude values to colors (although I haven't coded these yet).
My main problem is with displaying these colors on the plane itself. While Java3D allows users to specify colors for an entire object (using the Appearance, ColoringAttributes and Color3f classes), I haven't seen any examples of mapping different colors to different points on an object, which is what I want to do. I want to be able to control the color of each pixel that makes up the plane, so that instead of making it completely blue, red, etc, I can color each pixel differently depending on the amplitude value I calculate for that point.
Does anyone know if this is possible in Java3D? If so, suggestions of how it can be done or links to resources that deal with it would be greatly appreciated. I am fairly new to Java3D, so any advice would be great.
I should add that my motivation for doing this in Java3D is to ultimately generate a spatiotemporal time-frequency graph using 2D planes (i.e., stacking up the 2D spectrograms to create a 3D image).
import java.util.*;
import java.applet.Applet;
import java.awt.BorderLayout;
import java.awt.Frame;
import java.awt.event.*;
import java.awt.GraphicsConfiguration;
import javax.vecmath.*;
import javax.media.j3d.*;
import com.sun.j3d.utils.universe.*;
import com.sun.j3d.utils.applet.*;
import com.sun.j3d.utils.behaviors.vp.*;

public class DrawTimeFrequencyGraph extends Applet {

    public DrawTimeFrequencyGraph(float[][] timeFrequencyArray) {
        /* Create the Canvas3D and BranchGroup objects needed for the scene graph */
        setLayout(new BorderLayout());
        GraphicsConfiguration config = SimpleUniverse.getPreferredConfiguration();
        Canvas3D canvas3D = new Canvas3D(config);
        add("Center", canvas3D);
        BranchGroup scene = new BranchGroup();

        /* Create a 2D plane to represent the graph.
         * Note: the (x, y, z) coordinates are currently hardcoded, but should be set up
         * to reflect the min and max frequency and time values in the 2D array */
        QuadArray timeFrequencyGraph = new QuadArray(4, GeometryArray.COORDINATES);
        timeFrequencyGraph.setCoordinate(0, new Point3f(-10f, 0f, 0f));
        timeFrequencyGraph.setCoordinate(1, new Point3f(0f, 0f, 0f));
        timeFrequencyGraph.setCoordinate(2, new Point3f(0f, 3f, 0f));
        timeFrequencyGraph.setCoordinate(3, new Point3f(-10f, 3f, 0f));

        /* Set up the appearance of the plane, i.e. graph values on it using various colors */
        for (int i = 0; i < timeFrequencyArray.length; i++) {
            for (int j = 0; j < timeFrequencyArray[i].length; j++) {
                /* TO DO: calculate amplitude values, map them to colors and figure out how to
                 * map the colors to points on the plane that has been created. */
            }
        }
    }
}
Texture mapping: http://webdocs.cs.ualberta.ca/~anup/Courses/604/NOTES/J3Dtexture.pdf
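A hedged sketch of that texture-mapping route, reusing the quad and scene from the question; amplitudeToColor (an amplitude-to-packed-RGB mapping) and width/height (the array dimensions) are hypothetical names, and TextureLoader comes from com.sun.j3d.utils.image:

// Sketch: paint the amplitude grid into a BufferedImage, then drape it
// over the quad as a texture (one texel per time-frequency cell).
BufferedImage tex = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        tex.setRGB(i, j, amplitudeToColor(timeFrequencyArray[i][j])); // hypothetical helper
    }
}
Texture texture = new TextureLoader(tex).getTexture();
Appearance appearance = new Appearance();
appearance.setTexture(texture);

QuadArray quad = new QuadArray(4,
        GeometryArray.COORDINATES | GeometryArray.TEXTURE_COORDINATE_2);
// ... setCoordinate(0..3, ...) as in the question, then map the texture corners:
quad.setTextureCoordinate(0, 0, new TexCoord2f(0f, 0f));
quad.setTextureCoordinate(0, 1, new TexCoord2f(1f, 0f));
quad.setTextureCoordinate(0, 2, new TexCoord2f(1f, 1f));
quad.setTextureCoordinate(0, 3, new TexCoord2f(0f, 1f));
scene.addChild(new Shape3D(quad, appearance));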
You might also look at JFreeChart, which includes an XYBlockRenderer. Arbitrary color palettes may be constructed using a suitable PaintScale, referenced here and here.
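For the pure-2D case, a rough JFreeChart sketch along those lines (toy data; the amplitude-to-color bands in the LookupPaintScale are arbitrary):

import java.awt.Color;
import org.jfree.chart.ChartFrame;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.axis.NumberAxis;
import org.jfree.chart.plot.XYPlot;
import org.jfree.chart.renderer.LookupPaintScale;
import org.jfree.chart.renderer.xy.XYBlockRenderer;
import org.jfree.data.xy.DefaultXYZDataset;

public class SpectrogramChart {
    public static void main(String[] args) {
        // toy data: row 0 = time (x), row 1 = frequency (y), row 2 = amplitude (z)
        double[][] data = { { 0, 1, 2 }, { 0, 0, 0 }, { 0.2, 0.5, 0.9 } };
        DefaultXYZDataset dataset = new DefaultXYZDataset();
        dataset.addSeries("spectrogram", data);

        // map amplitude ranges to colors
        XYBlockRenderer renderer = new XYBlockRenderer();
        LookupPaintScale scale = new LookupPaintScale(0.0, 1.0, Color.BLACK);
        scale.add(0.33, Color.BLUE);
        scale.add(0.66, Color.RED);
        renderer.setPaintScale(scale);

        XYPlot plot = new XYPlot(dataset, new NumberAxis("time"),
                new NumberAxis("frequency"), renderer);
        new ChartFrame("Spectrogram", new JFreeChart(plot)).setVisible(true);
    }
}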
