I want to get the RGB values of an image under many lighting conditions. To get a somewhat neutral baseline, I want to normalize the RGB values against the RGB values of some predefined images.
Let me explain. I have 6 predefined images and I know their exact average RGB values. I will take a picture of the unknown image under different lighting conditions, and I will also take pictures of the 6 predefined images under the same conditions. My goal is to derive a normalization formula by comparing the known reference RGB values of the predefined images to the values computed from the camera pictures. With this normalization parameter I will calibrate the RGB values of the unknown picture, so that I can get an average RGB value from the unknown picture in a neutral manner, irrespective of the lighting condition.
How can I achieve this easily in Java?
Is the reason you are doing this to truly normalize RGB, or are you trying to normalize the images to have similar brightness? If your goal is simply brightness, then I would convert to a color standard that has a brightness component and normalize only the brightness component.
From there you can take the new image in the other color space and convert it back to RGB if you like.
The steps (but not in Java):
1) Convert: RGBImage --> YUVImage
2) Normalize the Y component of the YUVImage
3) Convert: Normalized(YUVImage) --> Normalized(RGBImage)
In this way you implement normalization of the brightness only, along the lines of the sketch below.
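A minimal Java sketch of those three steps, assuming a BufferedImage input and the common BT.601 YUV coefficients; LumaNormalizer and targetY are made-up names, and targetY is whatever mean brightness you want to hit (e.g. 128, or the mean luma of a reference shot):

import java.awt.image.BufferedImage;

public final class LumaNormalizer {

    // Scales the Y (brightness) of every pixel so the image's mean luma
    // matches targetY, leaving the U/V chrominance untouched.
    public static BufferedImage normalize(BufferedImage src, double targetY) {
        int w = src.getWidth(), h = src.getHeight();

        // Pass 1: mean luma of the whole image.
        double meanY = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                meanY += luma(src.getRGB(x, y));
        meanY /= (double) w * h;
        double gain = targetY / meanY; // one brightness factor for the whole image

        // Pass 2: RGB -> YUV, scale Y, YUV -> RGB.
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int p = src.getRGB(x, y);
                double r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                double Y = 0.299 * r + 0.587 * g + 0.114 * b;
                double U = 0.492 * (b - Y);
                double V = 0.877 * (r - Y);
                Y *= gain; // normalize only the brightness component
                int r2 = clamp(Y + 1.140 * V);
                int g2 = clamp(Y - 0.395 * U - 0.581 * V);
                int b2 = clamp(Y + 2.032 * U);
                out.setRGB(x, y, (r2 << 16) | (g2 << 8) | b2);
            }
        return out;
    }

    private static double luma(int p) {
        return 0.299 * ((p >> 16) & 0xFF) + 0.587 * ((p >> 8) & 0xFF) + 0.114 * (p & 0xFF);
    }

    private static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }
}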
Otherwise, you can average the per-channel averages of your reference images and use those as the numerators of the normalization factors for your new images, computing each channel separately.
For differing lighting situations, a linear RGB correction is all that is required. Simply multiply each of the R,G,B values by a constant derived for each channel.
If there was only one reference color, it would be easy - multiply by the reference color, and divide by the captured color. For example if your reference color was (240,200,120) but your image measured (250,190,150) - you would multiply red by 240/250, green by 200/190, and blue by 120/150. Use the same constants for every pixel in the image.
With multiple colors to match you'll have to average the correction factors to arrive at a single set of constants. Greater weighting needs to be given to the brighter colors; for example, if you had a reference of (200,150,20) and it measured (190,140,10), you'd be trying to double the amount of blue, which could be very far off. The simplest method is to sum all the reference values and divide by the sum of the measured values, as sketched below.
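A small Java sketch of that summing scheme; reference[i] and measured[i] are hypothetical {R,G,B} average triples for the i-th calibration patch:

public final class LinearRgbCorrection {

    // Returns one multiplicative factor per channel: sum(reference) / sum(measured).
    // Brighter patches dominate the sums, which gives them the greater weighting
    // mentioned above.
    public static double[] factors(int[][] reference, int[][] measured) {
        double[] refSum = new double[3], measSum = new double[3];
        for (int i = 0; i < reference.length; i++)
            for (int c = 0; c < 3; c++) {
                refSum[c] += reference[i][c];
                measSum[c] += measured[i][c];
            }
        double[] f = new double[3];
        for (int c = 0; c < 3; c++)
            f[c] = refSum[c] / measSum[c];
        return f;
    }

    // Applies one factor to one channel value, clamping to 0..255.
    public static int correct(int value, double factor) {
        return (int) Math.max(0, Math.min(255, Math.round(value * factor)));
    }

    public static void main(String[] args) {
        // Single-reference case from the answer: (240,200,120) measured as (250,190,150).
        int[][] ref  = { { 240, 200, 120 } };
        int[][] meas = { { 250, 190, 150 } };
        double[] f = factors(ref, meas);
        System.out.println(correct(250, f[0]) + " " + correct(190, f[1]) + " "
                + correct(150, f[2])); // prints 240 200 120
    }
}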
I have a processing algorithm which performs well if I process each color channel separately, but when I tried to process the whole pixel value, things got messed up and the results were not good. Now I want to isolate the 3 color channels from the pixel value (excluding alpha) and then work on the new value (the 3 channels).
How can I do that in C++? Note that I tried the RGB_565 bitmap format, which is not a good solution, and that I want to merge the RGB into a 24-bit variable.
You can access each channel separately. The exact way depends on the actual pixel format.
ANDROID_BITMAP_FORMAT_RGBA_8888: each pixel is 4 bytes long, and the layout pattern is RGBARGBA..., i.e. the 1st byte of a pixel is the red component, the 2nd is green, the 3rd is blue, and the 4th is the alpha component.
ANDROID_BITMAP_FORMAT_RGB_565: each pixel is 2 bytes long, stored in native endianness, so the color components can be extracted as follows:
red   = (u16_pix >> 11) & 0x1f; /* top 5 bits */
green = (u16_pix >> 5)  & 0x3f; /* middle 6 bits - green gets the extra bit */
blue  = (u16_pix >> 0)  & 0x1f; /* bottom 5 bits */
ANDROID_BITMAP_FORMAT_RGBA_4444: deprecated because of its poor quality; you shouldn't even think about this one.
ANDROID_BITMAP_FORMAT_A_8: 1 byte per pixel, designed for alpha-only or grayscale images. It is probably not what you are looking for.
Note that Android has no 24 bpp format; you must choose a 32 bpp or 16 bpp one. About your algorithm: there are two alternatives. Your code may access the individual components right inside the packed pixel value, or you may deinterleave the packed pixels into a few planes, i.e. arrays, each of which holds only one channel. After processing, you can interleave them again into one of the supported formats, or transform them to some other format you are interested in. A sketch of the deinterleave/interleave round trip follows.
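The question asks for C++, but the indexing is identical in any language; here is the round trip for the RGBA_8888 layout described above, sketched in Java for consistency with the rest of this thread (in C++ you would use uint8_t buffers and the same arithmetic). The method name and the processing hook are made up for illustration:

// Deinterleave a packed RGBA_8888 byte buffer into one plane per channel,
// process the planes, then interleave them back. n is the pixel count.
static void processPlanar(byte[] rgba, int n) {
    byte[] r = new byte[n], g = new byte[n], b = new byte[n], a = new byte[n];
    for (int i = 0; i < n; i++) {
        r[i] = rgba[4 * i];     // 1st byte: red
        g[i] = rgba[4 * i + 1]; // 2nd byte: green
        b[i] = rgba[4 * i + 2]; // 3rd byte: blue
        a[i] = rgba[4 * i + 3]; // 4th byte: alpha
    }

    // ... run your per-channel algorithm on r, g and b here ...

    for (int i = 0; i < n; i++) { // interleave back into the packed buffer
        rgba[4 * i]     = r[i];
        rgba[4 * i + 1] = g[i];
        rgba[4 * i + 2] = b[i];
        rgba[4 * i + 3] = a[i];
    }
}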
I am trying to process an X-ray image.
The task is to paint each bone a different color. I've used a Canny filter, Otsu binarization, and morphological image processing such as erosion to get this effect:
Now I need to find an algorithm to color each bone. I was thinking about using connected-component labeling or flood fill, but those algorithms require a closed area to fill with a color, and in my image there are also "almost closed" areas to color. I tried to close each bone with dilation, but it doesn't work.
And now I completely do not know what to do with it or how to color the bones.
You can try to vectorize your image. I have done something similar, and after running a simple vectorization the connected components were easy to fill.
You can vectorize your input directly by e.g. running Marching Squares on it. It will also create an edge image.
While this might not be the exact thing you're looking for, I would recommend a simple edge-finding algorithm. The way I would do this (which may not be the best or the most efficient) is to extract the image into a 2D array of pixels. You can then compare the RGB values of each pixel to those of its neighboring pixels and color it brighter where the difference is higher. To calculate the difference, use the 3D version of the 2D Pythagorean distance formula: find the "distance" between the two RGB values and multiply it by a constant to keep the result between 0 and 255. Then replace each pixel with the average of this number over its 8 surrounding pixels.
If this is done correctly, it should produce a result similar to, if not identical to, the one you present here.
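A rough Java sketch of that procedure, with made-up class and method names; the 1/sqrt(3) factor scales the maximum possible RGB distance (sqrt(3 * 255^2), about 441) back into the 0..255 range:

import java.awt.image.BufferedImage;

public final class EdgeStrength {

    public static BufferedImage edges(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        // Border pixels are left black for simplicity.
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                int p = src.getRGB(x, y);
                double sum = 0;
                for (int dy = -1; dy <= 1; dy++)      // 8-neighborhood
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0) continue;
                        sum += dist(p, src.getRGB(x + dx, y + dy));
                    }
                // Average distance, scaled so the result stays in 0..255.
                int v = (int) Math.min(255, (sum / 8) / Math.sqrt(3));
                out.setRGB(x, y, (v << 16) | (v << 8) | v);
            }
        return out;
    }

    // 3-D "Pythagorean" (Euclidean) distance between two packed RGB pixels.
    private static double dist(int p, int q) {
        int dr = ((p >> 16) & 0xFF) - ((q >> 16) & 0xFF);
        int dg = ((p >> 8) & 0xFF) - ((q >> 8) & 0xFF);
        int db = (p & 0xFF) - (q & 0xFF);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }
}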
I want to apply the Haar transformation to a colored image. For this I will have to apply the Haar function to the red, green and blue components separately. According to my understanding, the Haar function is averaging and differencing, so the red, green and blue component values become negative in some cases (while performing the differencing). Once I get negative values, I cannot map back to an R/G/B component. How do I solve this problem? I am implementing the Haar function in Java, without using any library to compute the Haar transformation. Please help.
You have two choices. Either change your representation so you're not using unsigned 8-bit bytes anymore, or add a fixed offset such as 128. The appropriate choice depends on how you will process the result.
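A quick sketch of both choices for one 1-D averaging/differencing pass, with made-up method names (integer division keeps the sketch simple; a real implementation might use doubles):

// Choice 1: keep results in plain int arrays, where negatives are fine.
static int[] haarStep(int[] channel) {
    int n = channel.length; // assumed even
    int[] out = new int[n];
    for (int i = 0; i < n / 2; i++) {
        out[i]         = (channel[2 * i] + channel[2 * i + 1]) / 2; // average
        out[n / 2 + i] = (channel[2 * i] - channel[2 * i + 1]) / 2; // difference, may be negative
    }
    return out;
}

// Choice 2: add a fixed offset of 128 when a difference must become a
// displayable 0..255 pixel value again.
static int toDisplayable(int difference) {
    return Math.max(0, Math.min(255, difference + 128));
}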
I'm building an application in Java where different patches of land consist of sets of parcels. I want to color different patches with different colors. My map presents the patches' parcels; however, two patches can have parcels in common. In that case I would like to color those parcels with some intermediate color (a mix of the two patches' colors). As I don't know how many patches are going to be selected, I have to assign random colors to them in a loop and then resolve colors for the intersected parcels. Any idea about the best possible way to do this in Java?
For blending the colors, you can get the individual R, G, B values via either getRed(), getBlue(), etc. or getColorComponents(), take the average for each, and then create a new Color. You could also average the HSB values. You could also play with alpha (transparency), drawing each original color with an alpha of 0.5. However, at some point all these subtle blendings become difficult for the user to figure out. Instead, you might want to use some pattern (like stripes) with the original colors.
I've experimented with many ways to pick "good" random colors, and haven't had much success. The best technique used HSB values. Let's say you need 5 colors: divide 360 degrees by 5 and pick hues around the color wheel at those angles. This works OK if you like really bright, fully saturated colors.
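In Java that maps directly onto Color.getHSBColor, which takes the hue as a fraction of the full circle rather than in degrees; a minimal sketch (wheelColors is a made-up name):

import java.awt.Color;

// Picks n fully saturated colors with hues evenly spaced around the wheel.
static Color[] wheelColors(int n) {
    Color[] colors = new Color[n];
    for (int i = 0; i < n; i++)
        colors[i] = Color.getHSBColor(i / (float) n, 1f, 1f); // 360/n degrees apart
    return colors;
}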
IMO, consider hard-coding at least the "most common" parcel colors into something that looks good. Pick colors at random as little as possible.
I don't know an obvious solution to this problem, but I suppose you can use the average of the RGB components.
For example, if you have two colors in RGB notation, A(100,0,0) and B(0,100,0), the resulting color will be C(50,50,0).
Note: in that case you preserve the invariant "the intersection of two equal colors is the same color".
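A minimal sketch of that averaging with java.awt.Color (blend is a made-up name):

import java.awt.Color;

// Averages the channels of two colors: blend(A(100,0,0), B(0,100,0)) gives C(50,50,0).
static Color blend(Color a, Color b) {
    return new Color((a.getRed()   + b.getRed())   / 2,
                     (a.getGreen() + b.getGreen()) / 2,
                     (a.getBlue()  + b.getBlue())  / 2);
}
// blend(c, c) returns c, which is exactly the invariant noted above.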
I currently have a Java program that will get the RGB values of each of the pixels in an image. I also have a method to calculate a Haar wavelet on a 2D matrix of values. However, I don't know which values I should give to the method that calculates the Haar wavelet. Should I average each pixel's RGB values and compute a Haar wavelet on that, or maybe just use one of R, G, B?
I am trying to create a unique fingerprint for an image. I read elsewhere that this is a good method, as I can take the dot product of 2 wavelets to see how similar two images are to each other.
Please let me know which values I should compute a Haar wavelet on.
You should regard the R/G/B components as different images: create one matrix each for R, G and B, then apply the wavelet to those independently.
You then reconstruct the R/G/B images from the 3 wavelet-compressed channels and finally combine those into a 3-channel bitmap.
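A minimal sketch of the per-channel split, assuming a BufferedImage input and a made-up method name; each returned plane can then be fed to a 2-D wavelet routine independently:

import java.awt.image.BufferedImage;

// Splits an image into three matrices: planes[0] = R, planes[1] = G, planes[2] = B.
static double[][][] splitChannels(BufferedImage img) {
    int w = img.getWidth(), h = img.getHeight();
    double[][][] planes = new double[3][h][w];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int p = img.getRGB(x, y);
            planes[0][y][x] = (p >> 16) & 0xFF; // red
            planes[1][y][x] = (p >> 8) & 0xFF;  // green
            planes[2][y][x] = p & 0xFF;         // blue
        }
    return planes;
}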
Since eznme didn't answer your question (you want fingerprints, he explains compression and reconstruction), here's a method you'll often come across:
You separate color and brightness information (chrominance and luma), and weigh them differently. Sometimes you'll even throw away the chrominance and just use the luma part. This reduces the size of your fingerprint significantly (~factor three) and takes into account how we perceive an image - mainly by local brightness, not by absolute color. As a bonus you gain some robustness concerning color manipulation of the image.
The separation can be done in different ways, e.g. transforming your RGB image to YUV or YIQ color space. If you only want to keep the luma component, these two color spaces are equivalent. However, they encode the chrominance differently.
Here's the linear transformation for the luma Y from RGB:
Y = 0.299*R + 0.587*G + 0.114*B
When you take a look at the mathematics, you notice that we're doing nothing other than creating a grayscale image, taking into account that we perceive green as brighter than red, and red as brighter than blue, when they are all numerically equal.
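In Java, that is straightforward to compute from a packed ARGB pixel; a minimal sketch:

// Luma of a packed ARGB pixel, per Y = 0.299*R + 0.587*G + 0.114*B.
static double luma(int argb) {
    int r = (argb >> 16) & 0xFF;
    int g = (argb >> 8) & 0xFF;
    int b = argb & 0xFF;
    return 0.299 * r + 0.587 * g + 0.114 * b;
}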
In case you want to keep a bit of chrominance information in order to keep your fingerprint as concise as possible, you can reduce the resolution of the two U, V components (each actually 8 bits). You could join them both into one 8-bit value by reducing their information to 4 bits each and combining them with the shift operator (a Java sketch of this follows below). The chrominance should weigh less than the luma in the final fingerprint-distance calculation (the dot product you mentioned).
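A sketch of that packing in Java, assuming U and V have already been mapped into the 0..255 range (packUV and unpackUV are made-up names):

// High nibble = U quantized to 4 bits, low nibble = V quantized to 4 bits.
static int packUV(int u, int v) {
    return ((u >> 4) << 4) | (v >> 4);
}

// Unpacking recovers the values with 4 bits of precision lost.
static int[] unpackUV(int packed) {
    return new int[] { packed & 0xF0, (packed & 0x0F) << 4 };
}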