how to map negative values to rgb components? - java

I want to apply the Haar transformation to a colored image. For this I will have to apply the Haar function to the red, green and blue components separately. According to my understanding, the Haar function is averaging and differencing, so the red, green and blue component values become negative in some cases (while performing the differencing). Once I get negative values, I cannot map them back to an R/G/B component. How do I solve this problem? I am implementing the Haar function in Java, without using any library to compute the transformation. Please help.

You have two choices. Either change your representation so you're not using unsigned 8-bit bytes anymore, or add a fixed offset such as 128. The appropriate choice depends on how you will process the result.
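For illustration, here is a minimal Java sketch of both options (the class and method names are mine, not from the question): keep the coefficients in signed ints while processing, and only add the 128 offset when a coefficient must become a displayable 0-255 value.

// Hypothetical sketch: one averaging/differencing pass on a single channel.
public class HaarMapping {

    // Option 1: keep working in signed ints instead of unsigned 8-bit bytes.
    static int[] haarStep(int[] channel) {
        int half = channel.length / 2;
        int[] out = new int[channel.length];
        for (int i = 0; i < half; i++) {
            out[i]        = (channel[2 * i] + channel[2 * i + 1]) / 2; // average
            out[half + i] = (channel[2 * i] - channel[2 * i + 1]) / 2; // difference, may be negative
        }
        return out;
    }

    // Option 2: offset by 128 (and clamp) only when a coefficient is displayed.
    static int toDisplayable(int coeff) {
        return Math.max(0, Math.min(255, coeff + 128));
    }
}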

Related

color shift image to match a palette

I'd like to know if anyone knows an algorithm that changes properties like saturation, color, brightness etc. of an image such that it matches a color palette, but still looks good enough.
It would be nice if anyone could share any ideas on how to do that.
My goal is to achieve a better result when dithering the image, because my palette only contains 20 colors.
When I used my simple C++ dithering on this input image (cropped from yours):
And used distinct colors as the palette from this:
I got this output:
The squares at the top are the palette used (it looks like more than 20 colors to me). As you can see, the output is much better than yours, but your dithered result also looks too pixelated: was it downsampled and then zoomed?
Using just 2 shades each of R, G, B, C, M, Y plus 3 grayscales gives 15-color dithering:
The more shades and combinations of RGB, the better the result. Here are all the combinations of 3 shades, 3^3 = 27 colors:
[Edit1] algorithm
handle each line of the image separately
So process your image by horizontal or vertical lines. You need signed temp variables r0,g0,b0 set to zero before processing each line, and a palette pal[] holding your allowed colors.
for each pixel (of the processed line)
Extract its r,g,b and add it to r0,g0,b0, then find the closest color to (r0,g0,b0) in your palette pal[] and subtract the chosen r,g,b from r0,g0,b0.
something like this:
for (y=0;y<height;y++)
{
    r0=0; g0=0; b0=0;                   // residual accumulator, reset per line
    for (x=0;x<width;x++)
    {
        // pick wanted color and add it to the residual
        col=pixel(x,y);
        r0+=col.r;
        g0+=col.g;
        b0+=col.b;
        // choose the palette color closest to the accumulated value
        ix=0; col=color(r0,g0,b0);      // color() builds an RGB triple
        for (i=0;i<palette_size;i++)
            if (color_distance(col,pal[i])<color_distance(col,pal[ix]))
                ix=i;
        col=pal[ix];
        // render and subtract the chosen color
        pixel(x,y)=col;
        r0-=col.r;
        g0-=col.g;
        b0-=col.b;
    }
}
The choice of the closest color can be sped up significantly with a LUT[r][g][b] table.
This approach is fast and simple, but far from the best visually.
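Since the question is about Java, here is a hypothetical sketch of such a LUT, quantizing each channel to 5 bits so the table stays small (32x32x32 entries); pal[i] is assumed to be an {r,g,b} triple:

// Hypothetical sketch: precompute the nearest palette index for a quantized RGB cube.
static int[][][] buildLut(int[][] pal) {
    int[][][] lut = new int[32][32][32];
    for (int r = 0; r < 32; r++)
        for (int g = 0; g < 32; g++)
            for (int b = 0; b < 32; b++)
                lut[r][g][b] = closestIndex(pal, r << 3, g << 3, b << 3);
    return lut;
}

static int closestIndex(int[][] pal, int r, int g, int b) {
    int best = 0;
    long bestD = Long.MAX_VALUE;
    for (int i = 0; i < pal.length; i++) {
        long dr = r - pal[i][0], dg = g - pal[i][1], db = b - pal[i][2];
        long d = dr * dr + dg * dg + db * db; // squared Euclidean distance
        if (d < bestD) { bestD = d; best = i; }
    }
    return best;
}

// At dither time (clamp r0,g0,b0 to 0..255 first, since the accumulated
// residual can leave that range):
// ix = lut[r0c >> 3][g0c >> 3][b0c >> 3];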

algorithm, image processing, flood fill

I am trying to process an X-ray image.
The task is to paint each bone a different color. I've used a Canny filter, Otsu binarization and morphological operations such as erosion to get this effect:
Now I need to find an algorithm to color each bone. I was thinking about using connected-component labeling or flood fill, but these algorithms require a closed area to fill with a color, and in my image there are also "almost closed" areas to color. I tried to "close each bone" with dilation, but it doesn't work.
Now I completely do not know what to do with it and how to color the bones.
You can try to vectorize your image. I have done something similar, and after running a simple vectorization the connected components were easy to fill.
You can vectorize your input directly by e.g. running Marching Squares on it. It will also create an edge image.
While this might not be the exact thing you're looking for, I would recommend a simple edge-finding algorithm. The way I would do this (which may not be the best or the most efficient) is to extract the image into a 2D array of pixels. Compare the RGB values of each pixel to those of its neighboring pixels, and color it brighter the larger the difference is. To calculate the difference, use the 3D version of the 2D Pythagorean distance formula on the RGB values, scale the result to keep it between 0 and 255, and replace each pixel with the average of this distance over its 8 surrounding pixels.
If this is done correctly, it should produce a result similar, if not identical, to the one you present here.
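A rough Java sketch of that idea (the method names are mine): each output pixel becomes the average 3D "Pythagorean" RGB distance to its 8 neighbours, scaled into 0-255 and written out as a grayscale edge strength.

import java.awt.image.BufferedImage;

static BufferedImage edgeMap(BufferedImage in) {
    int w = in.getWidth(), h = in.getHeight();
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    // the largest possible RGB distance is sqrt(3 * 255^2) ~ 441.7
    double scale = 255.0 / Math.sqrt(3.0 * 255.0 * 255.0);
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int p = in.getRGB(x, y);
            double sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0) continue;
                    sum += rgbDistance(p, in.getRGB(x + dx, y + dy));
                }
            int v = (int) Math.round(scale * sum / 8.0); // average over the 8 neighbours
            out.setRGB(x, y, (v << 16) | (v << 8) | v);
        }
    return out;
}

static double rgbDistance(int a, int b) {
    int dr = ((a >> 16) & 0xFF) - ((b >> 16) & 0xFF);
    int dg = ((a >> 8) & 0xFF) - ((b >> 8) & 0xFF);
    int db = (a & 0xFF) - (b & 0xFF);
    return Math.sqrt(dr * dr + dg * dg + db * db); // 3D Pythagorean distance
}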

How to normalize RGB value with reference RGB values

I want to get the RGB values of an image under many lighting conditions. To get a somewhat neutral scenario, I want to normalize the RGB values against the RGB values of some predefined images.
Let me explain. I have 6 predefined images and I know their exact average RGB values. I will take a picture of the unknown image under different lighting conditions, and I will also take pictures of the 6 predefined images under the same conditions. My goal is to define a normalization formula by comparing the known reference RGB values of the predefined images to the values computed from the camera picture. With this normalization parameter I will calibrate the RGB values of the unknown picture, so that I can get an average RGB value from the unknown picture in a neutral manner, irrespective of the lighting conditions.
How can I achieve this easily in Java?
Is the reason you are doing this to truly normalize RGB, or are you trying to normalize the images to have similar brightness? If your goal is simply brightness, then I would convert to a color standard that has a brightness component and normalize only the brightness component.
From there you can take the new image in the other color space and convert it back to RGB if you like.
The steps (but not in Java):
1) Convert RGBImage --> YUVImage
2) Normalize the YUVImage using the Y component
3) Convert Normalized(YUVImage) --> Normalized(RGBImage)
In this way you can implement normalization of the brightness using the algorithm described here.
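A rough Java sketch of steps 1-3 (method names are mine). Because Y is a linear combination of R, G and B, scaling all three channels by the same factor scales Y by that factor, so the explicit YUV round-trip can be folded into one pass:

import java.awt.image.BufferedImage;

// Scale the image so its mean luma Y matches targetY (e.g. the mean luma
// of a reference image; targetY is an assumption, not from the question).
static void normalizeBrightness(BufferedImage img, double targetY) {
    int w = img.getWidth(), h = img.getHeight();
    double meanY = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int p = img.getRGB(x, y);
            meanY += 0.299 * ((p >> 16) & 0xFF) + 0.587 * ((p >> 8) & 0xFF) + 0.114 * (p & 0xFF);
        }
    meanY /= (double) w * h;
    double s = targetY / meanY;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int p = img.getRGB(x, y);
            int r = clamp((int) Math.round(((p >> 16) & 0xFF) * s));
            int g = clamp((int) Math.round(((p >> 8) & 0xFF) * s));
            int b = clamp((int) Math.round((p & 0xFF) * s));
            img.setRGB(x, y, (p & 0xFF000000) | (r << 16) | (g << 8) | b); // keep alpha
        }
}

static int clamp(int v) { return Math.max(0, Math.min(255, v)); }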
ELSE, you can average the averages for each channel and use those as the numerators of the normalization factors for your new images, calculating each channel separately.
For differing lighting situations, a linear RGB correction is all that is required: simply multiply each of the R, G, B values by a constant derived for each channel.
If there were only one reference color, it would be easy: multiply by the reference color and divide by the captured color. For example, if your reference color was (240,200,120) but your image measured (250,190,150), you would multiply red by 240/250, green by 200/190, and blue by 120/150. Use the same constants for every pixel in the image.
With multiple colors to match, you'll have to average the correction factors to arrive at a single set of constants. Greater weight needs to be given to the brighter colors: for example, if you had a reference of (200,150,20) and it measured (190,140,10), you'd be trying to double the amount of blue, which could be very far off. The simplest method is to sum all the reference values and divide by the sum of the measured values.
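A minimal Java sketch of that sum-based method (names are mine; reference and measured hold the six known and captured average colors):

import java.awt.Color;

// One correction factor per channel: sum of reference values divided by
// sum of measured values, applied uniformly to every pixel.
static double[] correctionFactors(Color[] reference, Color[] measured) {
    double refR = 0, refG = 0, refB = 0, mR = 0, mG = 0, mB = 0;
    for (int i = 0; i < reference.length; i++) {
        refR += reference[i].getRed();   mR += measured[i].getRed();
        refG += reference[i].getGreen(); mG += measured[i].getGreen();
        refB += reference[i].getBlue();  mB += measured[i].getBlue();
    }
    return new double[] { refR / mR, refG / mG, refB / mB };
}

static int correctPixel(int rgb, double[] k) {
    int r = clamp((int) Math.round(((rgb >> 16) & 0xFF) * k[0]));
    int g = clamp((int) Math.round(((rgb >> 8) & 0xFF) * k[1]));
    int b = clamp((int) Math.round((rgb & 0xFF) * k[2]));
    return (rgb & 0xFF000000) | (r << 16) | (g << 8) | b; // keep alpha
}

static int clamp(int v) { return Math.max(0, Math.min(255, v)); }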

Assigning java colors

I'm building an application in Java where different patches of land consist of sets of parcels. I want to color different patches with different colors. My map presents the patches' parcels; however, two patches can have parcels in common, and in that case I would like to color those parcels with some intermediate color (a mix of the two patches' colors). As I don't know how many patches are going to be selected, I have to assign random colors to them in a loop, and then resolve the colors for the intersected parcels. Any idea about the best possible way to do this in Java?
For blending the colors, you can get the individual R, G, B values via getRed(), getBlue() etc. or getColorComponents(), take the average of each, then create a new color. You could also average the HSB values, or play with alpha (transparency), drawing each original color with an alpha of 0.5. However, at some point all these subtle blendings become difficult for the user to figure out; instead, you might want to use some pattern (like stripes) with the original colors.
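A minimal sketch of the averaging approach with java.awt.Color:

import java.awt.Color;

static Color blend(Color a, Color b) {
    return new Color((a.getRed()   + b.getRed())   / 2,
                     (a.getGreen() + b.getGreen()) / 2,
                     (a.getBlue()  + b.getBlue())  / 2);
}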
I've experimented with many ways to pick "good" random colors and haven't had much success. The best technique used HSB values. Let's say you need 5 colors: divide 360 degrees by 5 and pick hues "around" the color wheel at those angles. This works o.k. if you like really bright, fully saturated colors.
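A sketch of that hue-wheel idea; note that Color.getHSBColor takes the hue as a 0-1 fraction rather than degrees:

import java.awt.Color;

// n evenly spaced, fully saturated, full-brightness hues.
static Color[] wheelColors(int n) {
    Color[] colors = new Color[n];
    for (int i = 0; i < n; i++)
        colors[i] = Color.getHSBColor((float) i / n, 1.0f, 1.0f);
    return colors;
}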
IMO, consider hard-coding at least the "most common" parcel colors into something that looks good, and pick colors at random as little as possible.
I don't know an obvious solution to this problem, but I suppose you can use the average of the RGB components.
For example, if you have two colors in RGB notation, A(100,0,0) and B(0,100,0), the resulting color will be C(50,50,0).
Note: in that case you keep the invariant "the intersection of two equal colors is the same color".

what values of an image should I use to produce a haar wavelet?

I currently have a Java program that will get the RGB values for each of the pixels in an image. I also have a method to calculate a Haar wavelet on a 2D matrix of values. However, I don't know which values I should give to the method that calculates the Haar wavelet. Should I average each pixel's RGB values and compute a Haar wavelet on that, or maybe just use one of R, G, B?
I am trying to create a unique fingerprint for an image. I read elsewhere that this is a good method, as I can take the dot product of two wavelets to see how similar the images are to each other.
Please let me know what values I should compute a Haar wavelet on.
Thanks
Jess
You should regard the R/G/B components as different images: create one matrix each for R, G and B, then apply the wavelet to those independently.
You then reconstruct the R/G/B images from the 3 wavelet-compressed channels and finally combine those into a 3-channel bitmap.
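A short Java sketch of the splitting step (haar2d stands for your existing 2D Haar method; the name is hypothetical):

import java.awt.image.BufferedImage;

// Split a BufferedImage into three matrices, one per channel, so the
// 2D Haar transform can be applied to each independently.
static int[][][] splitChannels(BufferedImage img) {
    int w = img.getWidth(), h = img.getHeight();
    int[][][] ch = new int[3][h][w]; // [0]=R, [1]=G, [2]=B
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int p = img.getRGB(x, y);
            ch[0][y][x] = (p >> 16) & 0xFF;
            ch[1][y][x] = (p >> 8) & 0xFF;
            ch[2][y][x] = p & 0xFF;
        }
    return ch;
}
// usage: int[][][] ch = splitChannels(img);
//        haar2d(ch[0]); haar2d(ch[1]); haar2d(ch[2]);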
Since eznme didn't answer your question (you want fingerprints, he explains compression and reconstruction), here's a method you'll often come across:
You separate the color and brightness information (chrominance and luma), and weigh them differently. Sometimes you'll even throw away the chrominance and just use the luma part. This reduces the size of your fingerprint significantly (roughly by a factor of three) and takes into account how we perceive an image: mainly by local brightness, not by absolute color. As a bonus, you gain some robustness against color manipulations of the image.
The separation can be done in different ways, e.g. transforming your RGB image to YUV or YIQ color space. If you only want to keep the luma component, these two color spaces are equivalent. However, they encode the chrominance differently.
Here's the linear transformation for the luma Y from RGB:
Y = 0.299*R + 0.587*G + 0.114*B
When you take a look at the mathematics, you notice that we're doing nothing other than creating a grayscale image, taking into account that we perceive green as brighter than red, and red as brighter than blue, when they are all numerically equal.
In case you want to keep a bit of chrominance information while keeping your fingerprint as concise as possible, you could reduce the resolution of the two U, V components (each actually 8 bits). You could join them both into one 8-bit value by reducing their information to 4 bits each and combining them with the shift operator (a Java sketch follows). The chrominance should weigh less than the luma in the final fingerprint-distance calculation (the dot product you mentioned).
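In Java, that packing could look like this (a sketch of the 4-bit split described above, using the shift operators >> and <<):

// u and v are assumed to be 8-bit chrominance values (0..255).
static int packUV(int u, int v) {
    int u4 = (u >> 4) & 0x0F;  // keep the top 4 bits of U
    int v4 = (v >> 4) & 0x0F;  // keep the top 4 bits of V
    return (u4 << 4) | v4;     // one 8-bit value: UUUUVVVV
}

static int[] unpackUV(int packed) {
    int u = ((packed >> 4) & 0x0F) << 4; // back to 8-bit range (low bits are lost)
    int v = (packed & 0x0F) << 4;
    return new int[] { u, v };
}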
