algorithm, image processing, flood fill - java

I am trying to process an X-ray image.
The task is to paint each bone a different color. I've used a Canny filter, Otsu binarization, and morphological image processing such as erosion to get this effect:
Now I need to find an algorithm to color each bone. I was thinking about using connected-component labeling or flood fill, but these algorithms require a closed area to fill with a color, and in my image there are also "almost closed" areas to color. I tried to "close each bone" with dilation, but it doesn't work.
And now I am completely stuck on what to do with it and how to color the bones.

You can try to vectorize your image. I have done something similar, and after running a simple vectorization, the connected components were easy to fill.
You can vectorize your input directly by, e.g., running Marching Squares on it. It will also create an edge image.
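As a rough sketch of the fill step (not of the vectorization itself), a queue-based flood fill over a binary edge mask could look like this; the mask layout and seed point are assumptions for illustration:

import java.awt.Point;
import java.util.ArrayDeque;
import java.util.Deque;

public class FloodFillSketch {
    // Fills the background region connected to the seed with the given label.
    // 'mask' is true on edge pixels; 'labels' starts zeroed and receives region ids.
    static void fill(boolean[][] mask, int[][] labels, int seedX, int seedY, int label) {
        int h = mask.length, w = mask[0].length;
        Deque<Point> queue = new ArrayDeque<>();
        queue.add(new Point(seedX, seedY));
        while (!queue.isEmpty()) {
            Point p = queue.poll();
            if (p.x < 0 || p.y < 0 || p.x >= w || p.y >= h) continue; // out of bounds
            if (mask[p.y][p.x] || labels[p.y][p.x] != 0) continue;    // edge or already labeled
            labels[p.y][p.x] = label;
            queue.add(new Point(p.x + 1, p.y));
            queue.add(new Point(p.x - 1, p.y));
            queue.add(new Point(p.x, p.y + 1));
            queue.add(new Point(p.x, p.y - 1));
        }
    }
}

Running this once per seed (one seed per bone region) and mapping each label to a color gives the per-bone coloring; the catch, as you note, is that the regions must actually be closed.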

While this might not be the exact thing you're looking for, I would recommend a simple edge-finding algorithm. The way I would do this (which may not be the best or the most efficient) is to extract the image into a 2D array of pixels. Compare the RGB values of each pixel to its neighboring pixels, and color it brighter when the difference is higher. To calculate the difference, use the 3D version of the 2D Pythagorean distance formula: find the "distance" between the RGB values, multiply it by a constant to keep the values between 0 and 255, and replace the pixel being compared with a pixel whose value is the average of this number over the 8 surrounding pixels.
If this is done correctly, it should produce a result similar to, if not identical to, the one you present here.
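A minimal sketch of that idea, assuming a BufferedImage input (dividing by 8 keeps the average in range; values are clamped to 255 just in case):

import java.awt.image.BufferedImage;

public class EdgeSketch {
    // Brightness of each output pixel = average color distance to its 8 neighbors.
    static BufferedImage edgeMap(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                double sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if (dx != 0 || dy != 0)
                            sum += distance(src.getRGB(x, y), src.getRGB(x + dx, y + dy));
                int v = (int) Math.min(255, sum / 8);
                dst.setRGB(x, y, (v << 16) | (v << 8) | v);
            }
        }
        return dst;
    }

    // 3D Euclidean ("Pythagorean") distance between two packed RGB values.
    static double distance(int rgb1, int rgb2) {
        int r = ((rgb1 >> 16) & 0xFF) - ((rgb2 >> 16) & 0xFF);
        int g = ((rgb1 >> 8) & 0xFF) - ((rgb2 >> 8) & 0xFF);
        int b = (rgb1 & 0xFF) - (rgb2 & 0xFF);
        return Math.sqrt(r * r + g * g + b * b);
    }
}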

Related

How To Check If Two BufferedImages Are Equal Ignoring Color White?

I have one BufferedImage image1 and BufferedImage image2, and I want to know if they are equal.
image1 is made beforehand and stored in an image file, which I load using ImageIO. However, image2 is made on the spot, so it is pretty much guaranteed that they have different sizes. What I do know is that image2 will equal one of 9 different image1's.
So, what I want to do is check whether they are the same image, ignoring all the white pixels on the edges; the images are different sizes, so if I compared all the pixels they would differ no matter what. If you're wondering why there is white on the edges: the images are numbers, so the remaining space is white.
If you want to make it simpler, the color of the real image will always be black, but I would prefer a generic solution (taking all colors into account) so I can use the concepts later.
private boolean equals(BufferedImage image1, BufferedImage image2) {
    // This is what I want to fill out.
}
What I first tried was to find the first non-white pixel of image1 and the first non-white pixel of image2, and then check the rows after that to see if everything is equal. However, the images are pretty big, and this approach takes more than O(n^2). I need a faster way.
Most probably there is no significantly faster way using this approach. You can use edge detection, but those algorithms aren't really faster either.
I would try to work with bounding boxes for each image (number).
If it is possible to save image1 at exactly the size of the number, that would be the way to go. Just shrink the image to the real size of the number and save that image to disk. You can then shrink image2 to its bounding box too, and the comparison is quite simple and fast.
If shrinking is not an option, calculating the bounding box is. Go through the image array and detect the topmost and leftmost non-white pixel in both images. You then get at least the bounding edges for the top and left sides, which is all you need to compare the images. (If the images can differ in size, you need the whole bounding box.)
By the way, you don't need to run in O(n^2). If you detect the topmost or leftmost pixel in both images, you can set an offset to work from, and you only need to find a single difference to state that the numbers are different. You can also use logic to determine which number it must be, based on simple tests. For example, take the numbers one (1) and zero (0): where zero has white pixels in its middle part, one must have black pixels there, and vice versa. So checking areas where the numbers are definitely black or white lets you estimate the number in the image by testing up to 9 areas.
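A sketch of the bounding-box computation (assuming the background is pure white, 0xFFFFFF; a tolerance threshold would be more robust in practice):

import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class BoundsSketch {
    // Returns the bounding box of all non-white pixels, or null for a blank image.
    static Rectangle boundingBox(BufferedImage img) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                if ((img.getRGB(x, y) & 0xFFFFFF) != 0xFFFFFF) { // not white
                    minX = Math.min(minX, x); minY = Math.min(minY, y);
                    maxX = Math.max(maxX, x); maxY = Math.max(maxY, y);
                }
            }
        }
        return maxX < 0 ? null : new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}

Comparing the two images cropped to their bounding boxes (via getSubimage) then reduces the problem to a plain pixel-by-pixel loop.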

displaying characters in a dot matrix

I've built a matrix of LEDs controlled by a Java program on my Raspberry Pi. I want to display characters on this matrix, so I need to convert characters to a two-dimensional boolean array (each LED is represented by one boolean).
The only way I can think of to do this is to design a separate matrix for each existing character, but that is way too much work.
Is there any way to do this differently?
You could rasterize (draw) a given font at a given point size using something like AWT or FreeType and then examine the image to see which pixels/LEDs should be on or off.
This will break down as the font size gets smaller. Below some point, you're probably better off coming up with the matrixes yourself rather than pouring a bunch of effort into something that doesn't work.
OTOH, "render-and-read" would be Much Less Boring... so YMMV.
You could load a monochrome image for each character, with a pixel size matching your LED matrix, and check with two for loops whether the pixel at each position is black (true) or white (false).
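A sketch of the render-and-read approach with AWT (the monospaced font and the matrix dimensions are assumptions; a real LED layout may need different font metrics):

import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CharRasterSketch {
    // Renders one character into a cols x rows image and samples it into a boolean matrix.
    static boolean[][] rasterize(char c, int cols, int rows) {
        BufferedImage img = new BufferedImage(cols, rows, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, cols, rows);
        g.setColor(Color.BLACK);
        g.setFont(new Font(Font.MONOSPACED, Font.PLAIN, rows)); // rough fit to the matrix height
        g.drawString(String.valueOf(c), 0, rows - 1);           // baseline near the bottom edge
        g.dispose();

        boolean[][] led = new boolean[rows][cols];
        for (int y = 0; y < rows; y++)
            for (int x = 0; x < cols; x++)
                led[y][x] = (img.getRGB(x, y) & 0xFFFFFF) != 0xFFFFFF; // dark pixel => LED on
        return led;
    }
}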

How to extract a rectangular object from an image in Java

I have a photo of a paper that I hold up to my webcam, and want to minimize the area of the photo to just the paper. This way, my OCR program will potentially be more accurate, as well as conceivably faster.
I have taken a couple of steps thus far to isolate the paper from the background.
First, I use Canny edge detection with high thresholds. This produces a two-color representation of the edges of my image, on which I can see a rounded rectangle among some other artifacts that happen to have sharp edges in the background.
Next, I use a Hough transform to draw the vectors with over 100 point hits, in polar coordinates, on a black background. The resulting image is as shown:
See that large (the largest), almost-rectangular figure right in the middle? That's the paper I'm holding. I need to isolate that trapezoid as a polygon, or somehow otherwise get the coordinates of its vertices.
I can use these coordinates on the original image to isolate a PNG of the paper and nothing else.
I would also highly appreciate it if you could answer any of these three sub-questions:
- How do you find the locations of the intersections of these lines on the image? (see the sketch after this list)
- How would I get rid of any lines that don't form the central trapezoidal polygon?
- With these points, is there anything better than a convex hull that would let me get only the trapezoidal/rectangular region of the image?
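As a rough sketch for the first sub-question: each Hough line in polar form (rho, theta) satisfies x*cos(theta) + y*sin(theta) = rho, so two lines intersect where this 2x2 linear system has a solution (a hypothetical helper, not tied to any particular Hough library):

public class HoughSketch {
    // Intersection of two lines given in Hough (rho, theta) form; null if (nearly) parallel.
    static double[] intersect(double rho1, double theta1, double rho2, double theta2) {
        double a1 = Math.cos(theta1), b1 = Math.sin(theta1);
        double a2 = Math.cos(theta2), b2 = Math.sin(theta2);
        double det = a1 * b2 - a2 * b1;
        if (Math.abs(det) < 1e-9) return null; // lines are (nearly) parallel
        double x = (rho1 * b2 - rho2 * b1) / det;
        double y = (a1 * rho2 - a2 * rho1) / det;
        return new double[] { x, y };
    }
}

Discarding intersections that fall outside the image bounds is a cheap first filter for the second sub-question, since stray lines tend to meet far from the paper.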
Here is another example, in which my program produced a better image:

what values of an image should I use to produce a Haar wavelet?

I currently have a Java program that will get the RGB values of each of the pixels in an image. I also have a method to calculate a Haar wavelet on a 2D matrix of values. However, I don't know which values I should give to the method that calculates the Haar wavelet. Should I average each pixel's RGB values and compute a Haar wavelet on that, or maybe just use one of R, G, B?
I am trying to create a unique fingerprint for an image. I read elsewhere that this is a good method, as I can take the dot product of two wavelets to see how similar the images are to each other.
Please let me know which values I should compute a Haar wavelet on.
Thanks
Jess
You should regard the R/G/B components as different images: create one matrix for each of R, G and B, then apply the wavelet to each of them independently.
You then reconstruct the R/G/B images from the 3 wavelet-compressed channels and finally combine them into a 3-channel bitmap.
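A sketch of the separation step (the Haar transform itself is whatever 2D method you already have):

import java.awt.image.BufferedImage;

public class ChannelSketch {
    // Splits an image into three matrices, one per channel: [0]=R, [1]=G, [2]=B.
    static double[][][] splitChannels(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        double[][][] ch = new double[3][h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                ch[0][y][x] = (rgb >> 16) & 0xFF; // R
                ch[1][y][x] = (rgb >> 8) & 0xFF;  // G
                ch[2][y][x] = rgb & 0xFF;         // B
            }
        }
        return ch;
    }
}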
Since eznme didn't answer your question (you want fingerprints; he explains compression and reconstruction), here's a method you'll often come across:
You separate color and brightness information (chrominance and luma), and weigh them differently. Sometimes you'll even throw away the chrominance and just use the luma part. This reduces the size of your fingerprint significantly (roughly by a factor of three) and takes into account how we perceive an image: mainly by local brightness, not by absolute color. As a bonus you gain some robustness against color manipulation of the image.
The separation can be done in different ways, e.g. transforming your RGB image to YUV or YIQ color space. If you only want to keep the luma component, these two color spaces are equivalent. However, they encode the chrominance differently.
Here's the linear transformation for the luma Y from RGB:
Y = 0.299*R + 0.587*G + 0.114*B
When you take a look at the mathematics, you notice that we're doing nothing other than creating a grayscale image, taking into account that we perceive green as brighter than red, and red as brighter than blue, when they are all numerically equal.
In case you want to keep a bit of chrominance information, in order to keep your fingerprint as concise as possible you could reduce the resolution of the two chrominance components U and V (each actually 8 bits). You could join them both into one 8-bit value by reducing their information to 4 bits each and combining them with the shift operator (I don't know how that works in Java). The chrominance should carry less weight than the luma in the final fingerprint-distance calculation (the dot product you mentioned).
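A sketch of both steps, the luma formula above and one possible 4-bit packing of U and V (the packing layout is just one choice; in Java the shift operators work as shown):

public class FingerprintSketch {
    // Luma of a packed RGB pixel, using the weights given above.
    static double luma(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return 0.299 * r + 0.587 * g + 0.114 * b;
    }

    // Packs U and V (each 0..255) into one byte: high nibble U, low nibble V.
    static int packChroma(int u, int v) {
        return ((u >> 4) << 4) | (v >> 4); // keep only the top 4 bits of each component
    }
}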

How to specify behavior of Java BufferedImage resize: need min for pixel rows instead of averaging

I would like to resize a Java BufferedImage, making it smaller vertically but without using any type of averaging, so that if a pixel row is "blank" (white) in the source image, there will be a white pixel row in the corresponding position of the destination image: the "min" operation. The default algorithms (specified in getScaledInstance) do not give me fine-grained enough control. I would like to implement the following logic:
for each pixel row in the w-pixels wide destination image, d = pixel[w]
find the corresponding j pixel rows of the source image, s[][] = pixel[j][w]
write the new line of pixels, so that d[i] = min(s[j][i]) over all j, i
I have been reading about RescaleOp, but have not figured out how to implement this functionality; it is admittedly a weird type of scaling. Can anyone give me pointers on how to do this? In the worst case, I figure I can just create the destination BufferedImage and copy the pixels following the pseudocode, but I was wondering if there is a better way.
The RescaleOp constructors include a parameter of type RenderingHints. There is a hint called KEY_INTERPOLATION that decides the color to use when scaling an image.
If you use the value VALUE_INTERPOLATION_NEAREST_NEIGHBOR for the KEY_INTERPOLATION, Java will use the original colors, rather than using some type of algorithm to recalculate the new colors.
So, instead of white lines turning to gray or some mix of colors, you'll get either white lines or no lines at all. It all depends on the scaling factor and whether a row is even or odd. For example, if you are scaling by half, then each 1-pixel horizontal line has at least a 50% chance of appearing in the new image. However, if the white lines were two pixels in height, you'd have a 100% chance of the white line appearing.
This is probably the closest you're going to get besides writing your own scaling method. Unfortunately, I don't see any other hints that might help further.
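For reference, a sketch of scaling with that hint via Graphics2D (the fixed TYPE_INT_RGB is an assumption; src.getType() may be TYPE_CUSTOM and cannot always be reused):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class ScaleSketch {
    static BufferedImage scaleNearest(BufferedImage src, int newW, int newH) {
        BufferedImage dst = new BufferedImage(newW, newH, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
        g.drawImage(src, 0, 0, newW, newH, null); // scales to the destination size
        g.dispose();
        return dst;
    }
}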
To implement your own scaling method, you could create a new class that implements the BufferedImageOp interface, and implement the filter() method. Use getRGB() and setRGB() on the BufferedImage object to get the pixels from the original image and set the pixels on the new image.
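And a sketch of the "min" scaling itself, following the pseudocode in the question (grayscale input assumed, vertical shrink only; a full BufferedImageOp would wrap this in filter()):

import java.awt.image.BufferedImage;

public class MinScaleSketch {
    // Each destination row holds, per column, the minimum over the 'factor'
    // source rows that map onto it: d[i] = min over j of s[j][i].
    static BufferedImage shrinkVerticalMin(BufferedImage src, int factor) {
        int w = src.getWidth(), h = src.getHeight() / factor;
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int min = 255;
                for (int j = 0; j < factor; j++) {
                    min = Math.min(min, src.getRGB(x, y * factor + j) & 0xFF); // gray level
                }
                dst.setRGB(x, y, (min << 16) | (min << 8) | min);
            }
        }
        return dst;
    }
}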
