I am trying to segment a simple image using the watershed function provided by BoofCV in Java, so I have written (copied, edited, and adjusted) the following code:
package alltestshere;
import boofcv.alg.filter.binary.BinaryImageOps;
import boofcv.alg.filter.binary.Contour;
import boofcv.alg.filter.binary.GThresholdImageOps;
import boofcv.gui.ListDisplayPanel;
import boofcv.gui.binary.VisualizeBinaryData;
import boofcv.gui.image.ShowImages;
import boofcv.io.UtilIO;
import boofcv.io.image.ConvertBufferedImage;
import boofcv.io.image.UtilImageIO;
import boofcv.struct.ConnectRule;
import boofcv.struct.image.GrayS32;
import boofcv.struct.image.GrayU8;
import java.awt.image.BufferedImage;
import java.util.List;
import boofcv.alg.segmentation.watershed.WatershedVincentSoille1991;
import boofcv.factory.segmentation.FactorySegmentationAlg;
import boofcv.gui.feature.VisualizeRegions;
public class examp {
public static void main( String args[] ) {
// load and convert the image into a usable format
BufferedImage image = UtilImageIO.loadImage(UtilIO.pathExample("C:\\Users\\Caterina\\Downloads\\boofcv\\data\\example\\shapes\\shapes02.png"));
// convert into a usable format
GrayU8 input = ConvertBufferedImage.convertFromSingle(image, null, GrayU8.class);
//declare some of my working data
GrayU8 binary = new GrayU8(input.width,input.height);
GrayS32 markers = new GrayS32(input.width,input.height);
// Select a global threshold using Otsu's method.
GThresholdImageOps.threshold(input, binary, GThresholdImageOps.computeOtsu(input, 0, 255),true);
//through multiple erosions you can obtain the sure foreground and use it as markers in order to segment the image
GrayU8 filtered = new GrayU8 (input.width, input.height);
GrayU8 filtered2 = new GrayU8 (input.width, input.height);
GrayU8 filtered3 = new GrayU8 (input.width, input.height);
BinaryImageOps.erode8(binary, 1, filtered);
BinaryImageOps.erode8(filtered, 1, filtered2);
BinaryImageOps.erode8(filtered2, 1, filtered3);
//count how many markers you have (one for every foreground part, +1 for the background)
int numRegions = BinaryImageOps.contour(filtered3, ConnectRule.EIGHT, markers).size()+1 ;
// Detect foreground blobs using an 8-connect rule
List<Contour> contours = BinaryImageOps.contour(binary, ConnectRule.EIGHT, markers);
//Watershed function, which takes as input the original b&w image and the markers
WatershedVincentSoille1991 watershed = FactorySegmentationAlg.watershed(ConnectRule.FOUR);
watershed.process(input, markers);
//get the results of the watershed as output
GrayS32 output = watershed.getOutput();
// display the results
BufferedImage visualBinary = VisualizeBinaryData.renderBinary(input, false, null);
BufferedImage visualFiltered = VisualizeBinaryData.renderBinary(filtered3, false, null);
BufferedImage visualLabel = VisualizeBinaryData.renderLabeledBG(markers, contours.size(), null);
BufferedImage outLabeled = VisualizeBinaryData.renderLabeledBG(output, numRegions, null);
ListDisplayPanel panel = new ListDisplayPanel();
panel.addImage(visualBinary, "Binary Original");
panel.addImage(visualFiltered, "Binary Filtered");
panel.addImage(visualLabel, "Markers");
panel.addImage(outLabeled, "Watershed");
ShowImages.showWindow(panel,"Watershed");
}
}
This code, however, does not work well. Instead of colouring the foreground objects with different colours and leaving the background as it is, it splits the whole image into regions, where each region consists of a single foreground object together with part of the background, and paints all of that with one colour (picture 3). So, what am I doing wrong?
I am uploading the Original Picture, the Markers Picture and the Watershed Picture.
Thanks in advance,
Katerina
You get this result because you are not processing the background as a region. The markers you provide to the watershed are only the contours of your shapes. Since the background is not a region of its own, the watershed algorithm splits it equally among the shape regions. The split is equal because, in your binary image, every shape sits at the same distance from the background.
If you want the background to be its own region, then give the watershed algorithm some background points as markers as well, for example the image corners.
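For example, here is a minimal sketch of that idea, to be placed right after the markers are labeled in the code above (the corner coordinates and the label bookkeeping are assumptions for illustration, not something BoofCV prescribes):
// Hypothetical addition: seed the background as its own region by marking
// the four corners with the next unused label before running the watershed.
int bgLabel = contours.size() + 1; // labels 1..contours.size() belong to the shapes
markers.set(0, 0, bgLabel);
markers.set(input.width - 1, 0, bgLabel);
markers.set(0, input.height - 1, bgLabel);
markers.set(input.width - 1, input.height - 1, bgLabel);
numRegions = bgLabel; // shapes plus the new background region, for the visualization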
Related
I will start by stating that I'm slowly going insane. I am trying to extract contours from an image and compute their centers of mass using Java and OpenCV.
For all the inner contours, the results are correct, however for the outer (largest) contour, the centroid is way, way off. The input image, the code and the output result are all below. OpenCV version is 3.1.
Others have had this problem and the suggestions were to:
Check if the contour is closed. It is, I checked.
Use Canny to detect edges before extracting contours. I don't understand why that's necessary, but I tried it and the result is that it messes up the tree hierarchy since it generates two contours for each edge, which is not something I want.
The input image is very large (27MB), and the weird part is that when I resized it to 1000x800, the center of mass was suddenly computed correctly. However, I need to be able to process the image at the original resolution.
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.philrovision.dxfvision.matching;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;
import org.testng.annotations.Test;
/**
*
* @author rhobincu
*/
public class MomentsNGTest {
@Test
public void testOpenCvMoments() {
Mat image = Imgcodecs.imread("moments_fail.png");
Mat channel = new Mat();
Core.extractChannel(image, channel, 1);
Mat mask = new Mat();
Imgproc.threshold(channel, mask, 191, 255, Imgproc.THRESH_BINARY);
Mat filteredMask = new Mat();
Imgproc.medianBlur(mask, filteredMask, 5);
List<MatOfPoint> allContours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(filteredMask, allContours, hierarchy, Imgproc.RETR_TREE,
Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
MatOfPoint largestContour = allContours.stream().max((c1, c2) -> {
double area1 = Imgproc.contourArea(c1);
double area2 = Imgproc.contourArea(c2);
if (area1 < area2) {
return -1;
} else if (area1 > area2) {
return 1;
}
return 0;
}).get();
Mat debugCanvas = new Mat(image.size(), CvType.CV_8UC3);
Imgproc.drawContours(debugCanvas, Arrays.asList(largestContour), -1, new Scalar(255, 255, 255), 3);
Imgproc.drawMarker(debugCanvas, getCenterOfMass(largestContour),
new Scalar(255, 255, 255));
Rect boundingBox = Imgproc.boundingRect(largestContour);
Imgproc.rectangle(debugCanvas, boundingBox.br(), boundingBox.tl(), new Scalar(0, 255, 0), 3);
System.out.printf("Bounding box area is: %f and contour area is: %f", boundingBox.area(), Imgproc.contourArea(
largestContour));
Imgcodecs.imwrite("output.png", debugCanvas);
}
private static Point getCenterOfMass(MatOfPoint contour) {
Moments moments = Imgproc.moments(contour);
return new Point(moments.m10 / moments.m00, moments.m01 / moments.m00);
}
}
Input: (full image here)
Output:
STDOUT:
Bounding box area is: 6460729,000000 and contour area is: 5963212,000000
The centroid is drawn close to the upper left corner, outside the contour.
As mentioned in the comment discussion, it looks like the issue you're having was reported specifically against the Java implementation on OpenCV's GitHub. It was eventually solved with this simple pull request: there were some unnecessary int casts.
Possible solutions then:
Upgrading OpenCV should fix you up.
You can edit your library files with the fix (it's simply removing an (int) cast on a few lines).
Define your own function to calculate the centroids.
If you're bored and want to figure out option 3, it's actually not a difficult calculation:
Centroids of a contour are usually calculated from image moments. As shown on that page, a moment M_ij can be defined on images as:
M_ij = sum_x sum_y (x^i * y^j * I(x, y))
and the centroid of a binary shape is
(x_c, y_c) = (M_10/M_00, M_01/M_00)
Note that M_00 = sum_x sum_y (I(x, y)), which, in a binary 0-and-1 image, is just the number of white pixels. If contourArea is working as you stated in the comments, you can simply use that as M_00. Then note that M_10 is just the sum of the x values of the white pixels, and M_01 the sum of the y values. These are easy to compute, so you can define your own centroid function.
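For instance, here is a minimal sketch of that calculation done directly on the binary mask from the question (the helper name is made up, and it assumes an 8-bit single-channel Mat in which white pixels are non-zero):
// Sketch: raw moments of a binary mask. m00 counts the white pixels,
// m10 and m01 sum their x and y coordinates respectively.
private static Point centroidFromMask(Mat mask) {
    double m00 = 0, m10 = 0, m01 = 0;
    for (int y = 0; y < mask.rows(); y++) {
        for (int x = 0; x < mask.cols(); x++) {
            if (mask.get(y, x)[0] != 0) {
                m00 += 1;
                m10 += x;
                m01 += y;
            }
        }
    }
    return new Point(m10 / m00, m01 / m00); // (M_10/M_00, M_01/M_00)
}
For an image as large as yours, per-pixel Mat.get calls will be slow; the practical variant pulls each row into a byte[] with mask.get(y, 0, rowBuffer) and scans that instead.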
I'm trying to code a class to seam carve images in the x and y directions. The x direction is working, and to reduce in the y direction I thought about simply rotating the image 90°, running the same code over the already rescaled image (rescaled in the x direction only), and then rotating it back to its initial state.
I found something with AffineTransform and tried it. It actually produced a rotated image, but messed up the colors and I don't know why.
This is all the code:
import java.awt.image.BufferedImage;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.io.File;
import java.io.IOException;
import javafx.scene.paint.Color;
import javax.imageio.ImageIO;
public class example {
/**
* #param args the command line arguments
*/
public static void main(String[] args) throws IOException {
// TODO code application logic here
BufferedImage imgIn = ImageIO.read(new File("landscape.jpg"));
BufferedImage imgIn2 = imgIn;
AffineTransform tx = new AffineTransform();
tx.rotate(Math.PI/2, imgIn2.getWidth() / 2, imgIn2.getHeight() / 2); //(radians, anchorX, anchorY)
AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
BufferedImage last = op.filter(imgIn2, null); //(source, destination)
ImageIO.write(last, "JPEG", new File("distortedColors.jpg"));
}
}
Just alter the filename in
BufferedImage imgIn = ImageIO.read(new File("landscape.jpg")); and try it.
When executed, you get four images: a heatmap, an image with the seams drawn in, a rescaled image, and a final test image to check whether the rotation worked. That last image should show a rotated picture, but the colors are distorted...
Help would be greatly appreciated!
EDIT:
The problem is with the AffineTransformOp. You need:
AffineTransformOp.TYPE_NEAREST_NEIGHBOR
instead of the TYPE_BILINEAR you have now.
The second paragraph of the documentation hints at this.
This class uses an affine transform to perform a linear mapping from
2D coordinates in the source image or Raster to 2D coordinates in the
destination image or Raster. The type of interpolation that is used is
specified through a constructor, either by a RenderingHints object or
by one of the integer interpolation types defined in this class. If a
RenderingHints object is specified in the constructor, the
interpolation hint and the rendering quality hint are used to set the
interpolation type for this operation.
The color rendering hint and
the dithering hint can be used when color conversion is required. Note
that the following constraints have to be met: The source and
destination must be different. For Raster objects, the number of bands
in the source must be equal to the number of bands in the destination.
So this works:
AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
It seems like there's a color conversion happening because null is passed as the destination in op.filter(imgIn2, null).
If you change it like this, it should work:
BufferedImage last = new BufferedImage( imgIn2.getWidth(), imgIn2.getHeight(), imgIn2.getType() );
op.filter(imgIn2, last );
Building on what bhavya said...
Keep it simple: use the dimensions expected by the operation:
AffineTransformOp op = new AffineTransformOp(transform, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
BufferedImage destinationImage = op.filter(bImage, op.createCompatibleDestImage(bImage, null));
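One more pitfall for the rotate-carve-rotate-back plan in the question: rotating a non-square image about its own center clips the result, because the destination keeps the old dimensions. A common idiom is to give the destination swapped width and height; here is a hypothetical helper along those lines (same imports as the example above; a sketch, not tested against the seam-carving code):
// Rotate 90 degrees clockwise into a destination with swapped dimensions so
// nothing is clipped. src.getType() can be 0 (TYPE_CUSTOM) for some decoded
// images, in which case we fall back to TYPE_INT_RGB.
static BufferedImage rotate90Clockwise(BufferedImage src) {
    int type = src.getType() == 0 ? BufferedImage.TYPE_INT_RGB : src.getType();
    BufferedImage dst = new BufferedImage(src.getHeight(), src.getWidth(), type);
    AffineTransform tx = new AffineTransform();
    tx.translate(src.getHeight(), 0); // shift right so the rotated image lands in view
    tx.rotate(Math.PI / 2);           // quarter turn clockwise
    AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
    return op.filter(src, dst);
}
Rotating back is the mirror transform: translate(0, src.getWidth()) followed by rotate(-Math.PI / 2).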
I am working on a program to detect wrinkles in an image taken from a high resolution camera.
Currently the project is in its starting phase. I have performed the following steps until now:
Convert to grayscale and contrast the image.
Remove noise using Gaussian Blur.
Apply Adaptive threshold to detect wrinkles.
Use dilation to enhance the size of the detected wrinkle and join disparate elements of a single wrinkle as much as possible.
Remove noise by finding the contours and removing the ones with lesser areas.
Here is the code:
package Wrinkle.Detection;
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;
public class DetectWrinkle {
private Mat sourceImage;
private Mat destinationImage;
private Mat thresh;
public void detectUsingThresh(String filename) {
sourceImage = Highgui.imread(filename, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
//Contrast
Mat contrast = new Mat(sourceImage.rows(), sourceImage.cols(), sourceImage.type());
Imgproc.equalizeHist(sourceImage, contrast);
Highgui.imwrite("wrinkle_contrast.jpg", contrast);
//Remove Noise
destinationImage = new Mat(contrast.rows(), contrast.cols(), contrast.type());
Imgproc.GaussianBlur(contrast, destinationImage,new Size(31,31), 0);
Highgui.imwrite("wrinkle_Blur.jpg", destinationImage);
//Apply Adaptive threshold
thresh = new Mat();
Imgproc.adaptiveThreshold(destinationImage, thresh, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 99, 10);
Highgui.imwrite("wrinkle_threshold.jpg", thresh);
// dilation
Mat element1 = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(2*3+1, 2*6+1));
Imgproc.dilate(thresh, thresh, element1);
Highgui.imwrite("wrinkle_thresh_dilation.jpg", thresh);
//Find contours
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat image32S = new Mat();
Mat threshClone = thresh.clone();
threshClone.convertTo(image32S, CvType.CV_32SC1);
Imgproc.findContours(image32S, contours, new Mat(), Imgproc.RETR_FLOODFILL,Imgproc.CHAIN_APPROX_SIMPLE);
//Find contours with smaller area and color them black (removing further noise)
Imgproc.cvtColor(thresh, thresh, Imgproc.COLOR_GRAY2BGR);
for (int c=0; c<contours.size(); c++) {
double value = Imgproc.contourArea(contours.get(c));
if(value<500){
Imgproc.drawContours(thresh, contours, c, new Scalar(0, 0, 0), -1);
}
}
Highgui.imwrite("wrinkle_contour_fill.jpg", thresh);
}
public static void main(String[] args) {
DetectWrinkle dw = new DetectWrinkle();
System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
String imagefile = "wrinkle_guo (1).bmp";
dw.detectUsingThresh(imagefile);
}
}
Question:
As you can see from the images shown below, a single wrinkle on the skin gets broken down into separate small elements. I am trying to connect those elements into the complete wrinkle by using dilate. Once that is done, I remove the noise by first detecting the contours, calculating each contour's area, and then removing the contours whose area is below a certain value.
However, this is not giving me a proper result, so I feel there could be some better way of joining the broken wrinkle elements (one option is sketched after the images below). Please help me solve this.
And please pardon me if there is anything wrong with the question as I really need a solution and I am a newbie here.
Following are the images :
Input image
After getting contours and removing noise by finding contour area
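One variant of the joining step that may be worth a try (a sketch, not a verified fix): a morphological close, i.e. a dilation followed by an erosion, fuses nearby fragments without permanently inflating them the way a bare dilate does. In the same 2.4-era API as the code above, with a kernel size that is only a starting guess to tune:
// Sketch: close small gaps between wrinkle fragments. The tall elliptical
// kernel mirrors the 7x13 rectangle used for the dilation in the code above.
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(7, 13));
Imgproc.morphologyEx(thresh, thresh, Imgproc.MORPH_CLOSE, kernel);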
I have been struggling to find and answer to this issue. I am trying to change the color of a pixel in a large BufferedImage with the imageType of TYPE_BYTE_BINARY. By default when I create the image it will create a black image which is fine but I cannot seem to be able to change pixel color to white.
This is the basic idea of what I want to do.
BufferedImage bi = new BufferedImage(dim[0], dim[1], BufferedImage.TYPE_BYTE_BINARY);
bi.setRGB(x, y, 255);
This seems weird to me as a TYPE_BYTE_BINARY image will not have RGB color, so I know that that is not the correct solution.
Another idea that I had was to create multiple bufferedImage TYPE_BYTE_BINARY with the createGraphics() method and then combine all of those buffered images into one large bufferedImage but I could not find any information about that when using the TYPE_BYTE_BINARY imageType.
When reading up on this I came across people saying that you need to use createGraphics() method on the BufferedImage but I don't want to do that as it will use up too much memory.
I came across this link http://docs.oracle.com/javase/7/docs/api/java/awt/image/Raster.html, specifically the method createPackedRaster() (the second overload). This seems like it might be on the right track.
Are those the only options to be able to edit a TYPE_BYTE_BINARY image? Or is there another way that is similar to the way that python handles 1 bit depth images?
In python this is all that needs to be done.
im = Image.new("1", (imageDim, imageDim), "white")
picture = im.load()
picture[x, y] = 0 # 0 or 1 to change color black or white
All help or guidance is appreciated.
It all works. I am able to get a white pixel in the image:
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.awt.Color;
import java.io.File;
public class MakeImage
{
public static void main(String[] args)
{
BufferedImage im = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_BINARY);
im.setRGB(10, 10, Color.WHITE.getRGB());
try
{
ImageIO.write(im, "png", new File("image.png"));
}
catch (IOException e)
{
System.out.println("Some exception occurred: " + e);
}
}
}
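If you want something closer to the Python picture[x, y] = 0/1 idiom from the question, here is a sketch of the raster route (assuming that for TYPE_BYTE_BINARY band 0 carries one bit per pixel, 0 = black and 1 = white):
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
public class RasterBits
{
    public static void main(String[] args)
    {
        BufferedImage im = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_BINARY);
        WritableRaster raster = im.getRaster();
        raster.setSample(10, 10, 0, 1); // x, y, band, value -> one white pixel
    }
}
This skips the RGB round trip of setRGB and writes straight into the 1-bit storage.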
I am using the program ImageResizer with the XBR4x algorithm to upscale .gif images from an old 2D game from 32x32 to 48x48.
The exact procedure:
Manually rename all images to .jpeg because the program won't open .gif
Resize the images, they are saved by the program as .bmp
Manually rename the images to .gif again.
The problem:
When looking at the images in Paint they look very good, but when drawn in my RGB BufferedImage they suddenly all have a white/grey ~1px border, which is not the background color; the images are placed directly next to each other. As I have a whole mosaic of those images, the white borders are a no-go.
Image 32x32:
Image 48x48 after upscaling:
Ingame screenshot of 4 of those earth images with white borders:
The question:
How do those borders originate? And if that cannot be answered, are there more reliable methods of upscaling low-resolution game images that make them look less pixelated?
I think that is an artifact of the image resizing algorithm; the borders are actually visible on the upscaled image before it is combined, if you look at it in XnView, for example.
The best way to fix that would be to use another tool to resize the image, one which lets you control such borderline effects. But if you have to use this one, you can still work around the problem by constructing a 3x3 grid of the original image (which would be 96x96), scaling it up to 144x144, and then cutting out the central 48x48 piece. This eliminates the borderline effects.
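That workaround can also be sketched directly in Java, in case you end up scaling in code rather than in the tool (a hypothetical helper; Image.getScaledInstance stands in for whatever scaler you actually use, and the imports match the demo below):
// Tile the source 3x3, scale the whole grid, then crop the central tile,
// which has no outer edges and therefore no border artifacts.
static BufferedImage scaleWithoutBorder(BufferedImage src, int targetW, int targetH) {
    int w = src.getWidth(), h = src.getHeight();
    BufferedImage grid = new BufferedImage(3 * w, 3 * h, BufferedImage.TYPE_INT_ARGB);
    Graphics g = grid.getGraphics();
    for (int row = 0; row < 3; row++)
        for (int col = 0; col < 3; col++)
            g.drawImage(src, col * w, row * h, null);
    g.dispose();
    BufferedImage scaledGrid = new BufferedImage(3 * targetW, 3 * targetH, BufferedImage.TYPE_INT_ARGB);
    Graphics g2 = scaledGrid.getGraphics();
    g2.drawImage(grid.getScaledInstance(3 * targetW, 3 * targetH, Image.SCALE_AREA_AVERAGING), 0, 0, null);
    g2.dispose();
    return scaledGrid.getSubimage(targetW, targetH, targetW, targetH); // central tile
}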
The border is a result of the scaling procedure performed by the mentioned tool. Consider this demo, which shows tiles based on the scaled image from the question next to tiles based on a scaled image created with Image.getScaledInstance().
Note that if you choose to stay with your own scaling method, check out The Perils of Image.getScaledInstance() for more optimized solutions.
import java.awt.Graphics;
import java.awt.GraphicsEnvironment;
import java.awt.Image;
import java.awt.Transparency;
import java.awt.image.BufferedImage;
import java.net.URL;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JLabel;
import javax.swing.JOptionPane;
import javax.swing.JPanel;
public class TestImageScale {
public static void main(String[] args) {
try {
BufferedImage original = ImageIO.read(new URL(
"http://i.stack.imgur.com/rY2i8.gif"));
Image scaled = original.getScaledInstance(48, 48,
Image.SCALE_AREA_AVERAGING);
BufferedImage scaledOP = ImageIO.read(new URL(
"http://i.stack.imgur.com/Argxi.png"));
BufferedImage tilesOP = buildTiles(scaledOP, 3, 3);
BufferedImage tiles = buildTiles(scaled, 3, 3);
JPanel panel = new JPanel();
panel.add(new JLabel(new ImageIcon(tilesOP)));
panel.add(new JLabel(new ImageIcon(tiles)));
JOptionPane.showMessageDialog(null, panel,
"Tiles: OP vs getScaledInstance",
JOptionPane.INFORMATION_MESSAGE);
} catch (Exception e) {
JOptionPane.showMessageDialog(null, e.getMessage(), "Failure",
JOptionPane.ERROR_MESSAGE);
e.printStackTrace();
}
}
static BufferedImage buildTiles(Image tile, int rows, int columns) {
int width = tile.getWidth(null);
int height = tile.getHeight(null);
BufferedImage dest = GraphicsEnvironment
.getLocalGraphicsEnvironment()
.getDefaultScreenDevice()
.getDefaultConfiguration()
.createCompatibleImage(width * rows, height * columns,
Transparency.TRANSLUCENT);
Graphics g = dest.getGraphics();
for (int row = 0; row < rows; row++) {
for (int col = 0; col < columns; col++) {
g.drawImage(tile, row * width, col * height, null);
}
}
g.dispose();
return dest;
}
}
Just a wild guess: do the original images have an alpha channel (or do you implicitly create one when resizing)? When resizing an image with alpha, the scaling process may assume the area outside the image to be transparent, and the border pixels may become partially transparent, too.
I emailed Hawkynt, the developer of the tool, and it seems the error is not in the tool but in Microsoft's implementation, and he fixed it (actually even bigger tools like Multiple Image Resizer .NET have the problem). This is what he said about his program:
"When you entered width and/or height manually, the image got resized by the chosen algorithm where everything went fine.
Afterwards I used the resample command from GDI+ which implements a Microsoft version of the bicubic resize algorithm.
This implementation is flawed, so it produces one pixel on the left and upper side for images under 300px.
I fixed it by simply making the resized image one pixel larger than wanted and shifting it to the left and up one pixel, so the white border is no longer visible and the target image hast he expected dimensions."