Getting all RGB colors of an image - java

So far I have this:
BufferedImage image = ImageIO.read(
        new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));
int w = image.getWidth();
int h = image.getHeight();
int[] dataBuffInt = image.getRGB(0, 0, w, h, null, 0, w);
Color c = new Color(dataBuffInt[100]);
System.out.println(c.getRed());   // = (dataBuffInt[100] >> 16) & 0xFF
System.out.println(c.getGreen()); // = (dataBuffInt[100] >> 8) & 0xFF
System.out.println(c.getBlue());  // = dataBuffInt[100] & 0xFF
System.out.println(c.getAlpha()); // = (dataBuffInt[100] >> 24) & 0xFF
Earlier, I tried putting the getRed, getGreen, and getBlue calls in a for loop, but it only ever showed the same RGB value. How do I get all the RGB values in an image, given that I want to store them in different arrays?

I'm not entirely clear on the question, but assuming you mean unique RGB values, just loop and use a java.util.Set implementation, which maintains uniqueness:
Set<Color> colors = new HashSet<Color>();
for (int datum : dataBuffInt) {
    colors.add(new Color(datum));
}
System.out.println(String.format("%d different colors", colors.size()));
Or if you mean separate components?
Set<Integer> reds = new HashSet<Integer>();
Set<Integer> greens = new HashSet<Integer>();
Set<Integer> blues = new HashSet<Integer>();
for (int datum : dataBuffInt) {
    Color color = new Color(datum);
    reds.add(color.getRed());
    greens.add(color.getGreen());
    blues.add(color.getBlue());
}
System.out.println(String.format("reds: %d greens: %d blues: %d", reds.size(), greens.size(), blues.size()));

Are you certain that, in your for loop, you were using the index variable to access the array and not a fixed value like 100? When I run your code with a for loop I see different values:
for (int i = 0; i < dataBuffInt.length; i++) {
    Color c = new Color(dataBuffInt[i]);
    System.out.println("COLOR");
    System.out.println(c.getRed());   // = (dataBuffInt[i] >> 16) & 0xFF
    System.out.println(c.getGreen()); // = (dataBuffInt[i] >> 8) & 0xFF
    System.out.println(c.getBlue());  // = dataBuffInt[i] & 0xFF
    System.out.println(c.getAlpha()); // = (dataBuffInt[i] >> 24) & 0xFF
    System.out.println();
}
If you want unique colors you could build a set one pixel at a time:
final BufferedImage image = ImageIO.read(new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));
final Set<Color> uniqueColors = new HashSet<Color>(image.getWidth() * image.getHeight());
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        final int rgb = image.getRGB(x, y);
        uniqueColors.add(new Color(rgb));
    }
}
for (final Color color : uniqueColors) {
    System.out.println(MessageFormat.format("red: {0}, green: {1}, blue: {2}, alpha: {3}",
            color.getRed(),
            color.getGreen(),
            color.getBlue(),
            color.getAlpha()));
}
Or use your existing code and dump the array into a set.
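A minimal sketch of that last suggestion (the helper name is mine, not from the answer): since getRGB already hands you packed ARGB ints, the raw int is a perfectly good set element, and you avoid allocating a Color per pixel.

```java
import java.util.HashSet;
import java.util.Set;

public class UniqueColors {
    // Collect the distinct packed ARGB values from a pixel array,
    // e.g. the int[] returned by BufferedImage.getRGB(0, 0, w, h, null, 0, w).
    static Set<Integer> uniqueArgb(int[] dataBuffInt) {
        Set<Integer> unique = new HashSet<>();
        for (int argb : dataBuffInt) {
            unique.add(argb);
        }
        return unique;
    }
}
```

Wrap the entries in new Color(argb) later only if you actually need the component accessors.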

Related

Java - Remove pixels below a certain alpha value

I have an image with a lot of anti-aliased lines in it, and I'm trying to remove pixels that fall below a certain alpha-channel threshold (anything at or above the threshold gets converted to full 255 alpha). I've got this coded up and working; it's just not as fast as I would like when running it on large images. Does anyone have an alternative method they could suggest?
// This will convert all pixels with alpha >= minAlpha to alpha 255
public static void flattenImage(BufferedImage inSrcImg, int minAlpha)
{
    // loop through all the pixels in the image
    for (int y = 0; y < inSrcImg.getHeight(); y++)
    {
        for (int x = 0; x < inSrcImg.getWidth(); x++)
        {
            // get the current pixel (with alpha channel)
            Color c = new Color(inSrcImg.getRGB(x, y), true);
            // if the alpha value is at or above the threshold, convert it to full 255
            if (c.getAlpha() >= minAlpha)
            {
                inSrcImg.setRGB(x, y, new Color(c.getRed(), c.getGreen(), c.getBlue(), 255).getRGB());
            }
            // otherwise set it to 0
            else
            {
                inSrcImg.setRGB(x, y, new Color(0, 0, 0, 0).getRGB()); // fully transparent
            }
        }
    }
}
Per @BenoitCoudour's comments I've modified the code accordingly, but it appears to affect the resulting RGB values of the pixels. Any idea what I might be doing wrong?
public static void flattenImage(BufferedImage src, int minAlpha)
{
    int w = src.getWidth();
    int h = src.getHeight();
    int[] rgbArray = src.getRGB(0, 0, w, h, null, 0, w);
    for (int i = 0; i < w * h; i++)
    {
        int a = (rgbArray[i] >> 24) & 0xff;
        int r = (rgbArray[i] >> 16) & 0xff;
        int b = (rgbArray[i] >> 8) & 0xff;
        int g = rgbArray[i] & 0xff;
        if (a >= minAlpha) { rgbArray[i] = (255 << 24) | (r << 16) | (g << 8) | b; }
        else { rgbArray[i] = (0 << 24) | (r << 16) | (g << 8) | b; }
    }
    src.setRGB(0, 0, w, h, rgbArray, 0, w);
}
What may slow you down is the instantiation of a Color object for every pixel.
Please see this answer on how to iterate over the pixels of a BufferedImage and access the alpha channel: https://stackoverflow.com/a/6176783/3721907
I'll just paste the code below
public Image alpha2gray(BufferedImage src) {
    if (src.getType() != BufferedImage.TYPE_INT_ARGB)
        throw new RuntimeException("Wrong image type.");
    int w = src.getWidth();
    int h = src.getHeight();
    // packed ARGB ints, one per pixel
    int[] srcBuffer = src.getRGB(0, 0, w, h, null, 0, w);
    int[] dstBuffer = new int[w * h];
    for (int i = 0; i < w * h; i++) {
        int a = (srcBuffer[i] >> 24) & 0xff;
        dstBuffer[i] = a | a << 8 | a << 16;
    }
    return Toolkit.getDefaultToolkit().createImage(new MemoryImageSource(w, h, dstBuffer, 0, w));
}
This is very close to what you want to achieve.
You have a theoretical complexity of O(n), which you can optimize further by performing byte manipulation.
You can go further and use threads (this is an embarrassingly parallel problem), but since most user machines have at most 8 physical threads it will not get you very far. You could add another level of optimization on top by processing the image in chunks sized to fit the memory buffers and cache levels of your system.
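To illustrate the threading suggestion, here is a sketch (class and method names are mine, not from the answer) that splits the packed-int version of flattenImage across rows with parallel streams; it assumes the same int[] layout as the code above and, unlike it, leaves R, G, and B genuinely untouched.

```java
import java.util.stream.IntStream;

public class ParallelFlatten {
    // Apply the alpha threshold to a packed ARGB array in place, one row per task.
    static void flatten(int[] rgbArray, int w, int h, int minAlpha) {
        IntStream.range(0, h).parallel().forEach(y -> {
            for (int x = 0; x < w; x++) {
                int i = y * w + x;
                int a = (rgbArray[i] >>> 24) & 0xFF;
                int rgb = rgbArray[i] & 0x00FFFFFF; // keep R, G, B untouched
                // at or above threshold: force alpha 255; below: force alpha 0
                rgbArray[i] = (a >= minAlpha ? 0xFF000000 : 0x00000000) | rgb;
            }
        });
    }
}
```

Each row is an independent unit of work, so there is no shared mutable state beyond disjoint slices of the array.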
As I already mentioned, this is an embarrassingly parallel problem, so the best solution is GPU programming.
You can follow this tutorial on simple image processing with CUDA and change the filter code to something like this:
__global__ void flatten(unsigned char* input_image, unsigned char* output_image,
                        int width, int height, unsigned char threshold) {
    const unsigned int offset = blockIdx.x * blockDim.x + threadIdx.x;
    const int currentoffset = offset * 4; // 4 bytes (RGBA) per pixel
    if (offset < width * height) {
        unsigned char output_red, output_green, output_blue, output_alpha;
        if (input_image[currentoffset + 3] >= threshold) {
            output_red   = input_image[currentoffset];
            output_green = input_image[currentoffset + 1];
            output_blue  = input_image[currentoffset + 2];
            output_alpha = 255;
        } else {
            output_red   = 0;
            output_green = 0;
            output_blue  = 0;
            output_alpha = 0;
        }
        output_image[currentoffset]     = output_red;
        output_image[currentoffset + 1] = output_green;
        output_image[currentoffset + 2] = output_blue;
        output_image[currentoffset + 3] = output_alpha;
    }
}
If you are set on using Java, you have here a great answer on how to get started using Java with an NVIDIA GPU.

identify Blank Image Using Java

Hi, how can we identify a blank image (an all-white image)?
BufferedImage im = ImageIO.read(new File("samplePath"));
The image I am passing is empty, with some height and width, and I want to identify that.
This link should help you get started:
Get RGB values of a BufferedImage
In particular, this is the relevant part:
BufferedImage image = ImageIO.read(
new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));
int w = image.getWidth();
int h = image.getHeight();
int[] dataBuffInt = image.getRGB(0, 0, w, h, null, 0, w);
Color c = new Color(dataBuffInt[100]);
System.out.println(c.getRed()); // = (dataBuffInt[100] >> 16) & 0xFF
System.out.println(c.getGreen()); // = (dataBuffInt[100] >> 8) & 0xFF
System.out.println(c.getBlue()); // = (dataBuffInt[100] >> 0) & 0xFF
System.out.println(c.getAlpha()); // = (dataBuffInt[100] >> 24) & 0xFF
Then go through all the entries and make sure there are some different values.
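A sketch of that check (the helper is hypothetical, not from the linked answer): if every pixel equals the first one, the image is a single flat color, e.g. all white.

```java
public class BlankCheck {
    // Returns true when every pixel in the packed ARGB array has the same value,
    // i.e. the image is one flat color (blank).
    static boolean isBlank(int[] dataBuffInt) {
        for (int argb : dataBuffInt) {
            if (argb != dataBuffInt[0]) {
                return false;
            }
        }
        return true;
    }
}
```

To test specifically for white, additionally compare dataBuffInt[0] against 0xFFFFFFFF (opaque white).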

Incorrect result of image subtraction

I wanted to subtract two images pixel by pixel to check how similar they are. The images are the same size; one is a little darker, and apart from brightness they don't differ. But I get those little dots in the result. Did I subtract the two images right? Both are bmp files.
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class Main2 {
    public static void main(String[] args) throws Exception {
        BufferedImage image1 = ImageIO.read(new File("1.bmp"));
        BufferedImage image2 = ImageIO.read(new File("2.bmp"));
        BufferedImage image3 = new BufferedImage(image1.getWidth(), image1.getHeight(), image1.getType());
        int color;
        for (int x = 0; x < image1.getWidth(); x++)
            for (int y = 0; y < image1.getHeight(); y++) {
                color = Math.abs(image2.getRGB(x, y) - image1.getRGB(x, y));
                image3.setRGB(x, y, color);
            }
        ImageIO.write(image3, "bmp", new File("image.bmp"));
    }
}
Image 1
Image 2
Result
The problem here is that you can't subtract the colors directly. Each pixel is represented by one int value. This int value consists of 4 bytes. These 4 bytes represent the color components ARGB, where
A = Alpha
R = Red
G = Green
B = Blue
(Alpha is the opacity of the pixel, and always 255 (that is, the maximum value) in BMP images).
Thus, one pixel may be represented by
(255, 0, 254, 0)
When you subtract another pixel from this one, like (255, 0, 255, 0), the green byte will underflow: it would become -1. But since this byte is part of ONE integer, the borrow propagates into the neighboring byte, and the resulting color will be something like
(255, 0, 254, 0) -
(255, 0, 255, 0) =
(255, 255, 255, 0)
and thus, be far from what you would expect in this case.
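The borrow is easy to reproduce with the packed ints from this example:

```java
public class UnderflowDemo {
    public static void main(String[] args) {
        int p0 = 0xFF00FE00; // (255, 0, 254, 0)
        int p1 = 0xFF00FF00; // (255, 0, 255, 0)
        int naive = p0 - p1; // the green byte borrows across byte boundaries
        // 0xFFFFFF00 unpacks to (255, 255, 255, 0), not the (0, 0, 1, 0) you might expect
        System.out.println(Integer.toHexString(naive)); // prints ffffff00
    }
}
```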
The key point is that you have to split your color into the A,R,G and B components, and perform the computation on these components. In the most general form, it may be implemented like this:
int argb0 = image0.getRGB(x, y);
int argb1 = image1.getRGB(x, y);
int a0 = (argb0 >> 24) & 0xFF;
int r0 = (argb0 >> 16) & 0xFF;
int g0 = (argb0 >> 8) & 0xFF;
int b0 = (argb0 ) & 0xFF;
int a1 = (argb1 >> 24) & 0xFF;
int r1 = (argb1 >> 16) & 0xFF;
int g1 = (argb1 >> 8) & 0xFF;
int b1 = (argb1 ) & 0xFF;
int aDiff = Math.abs(a1 - a0);
int rDiff = Math.abs(r1 - r0);
int gDiff = Math.abs(g1 - g0);
int bDiff = Math.abs(b1 - b0);
int diff =
(aDiff << 24) | (rDiff << 16) | (gDiff << 8) | bDiff;
result.setRGB(x, y, diff);
Since these are grayscale images, the computations done here are somewhat redundant: For grayscale images, the R, G and B components are always equal. And since the opacity is always 255, it does not have to be treated explicitly here. So for your particular case, it should be sufficient to simplify this to
int argb0 = image0.getRGB(x, y);
int argb1 = image1.getRGB(x, y);
// Here the 'b' stands for 'blue' as well
// as for 'brightness' :-)
int b0 = argb0 & 0xFF;
int b1 = argb1 & 0xFF;
int bDiff = Math.abs(b1 - b0);
int diff =
(255 << 24) | (bDiff << 16) | (bDiff << 8) | bDiff;
result.setRGB(x, y, diff);
You did not "subtract one pixel from the other" correctly. getRGB returns "an integer pixel in the default RGB color model (TYPE_INT_ARGB)". What you are seeing is an "overflow" from one byte into the next, and thus from one color into the next.
Suppose you compute 804020 - 404120: the result is 3FFF00. The difference in the G component is only 1, but it gets output as FF, with the borrow corrupting R as well.
The correct procedure is to split the return value from getRGB into separate red, green, and blue, subtract each one, make sure they fit into unsigned bytes again (I guess your Math.abs is okay) and then write out a reconstructed new RGB value.
I found this, which does what you want. It seems to do the same thing, and it may be more "correct" than your code. I assume it's possible to extract the source code.
http://tutorial.simplecv.org/en/latest/examples/image-math.html
/Fredrik Wahlgren

Not sure how to implement the following algorithm

I am trying to implement histogram/image equalization on a colored image. I am not sure if I have implemented it correctly, because the screen just goes black every time I apply it to a bitmap image. The algorithm is called histogram equalization.
The part of my code that does the Histogram Equalization calculation:
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        A = (pixels[index] >> 24) & 0xFF;
        R = (pixels[index] >> 16) & 0xFF;
        G = (pixels[index] >> 8) & 0xFF;
        B = pixels[index] & 0xFF;
        R = Math.round(((R - cumR[minR]) / (cumR[maxR] - cumR[minR])) * 255);
        G = Math.round(((G - cumG[minG]) / (cumG[maxG] - cumG[minG])) * 255);
        B = Math.round(((B - cumB[minB]) / (cumB[maxB] - cumB[minB])) * 255);
        returnBitmap.setPixel(x, y, Color.argb(A, R, G, B));
        ++index;
    }
}
The image appears black once my code is applied; why doesn't it display an equalized image?
You're not calculating the histograms correctly. You shouldn't have a histogram slot for each pixel; you have one for each value [0..255]. You want to count how many pixels have a given value, not the total "value" of red.
Here's a good way to get the histogram(and cumulative) for an image. It should get you started on the right path.
// generate histogram channels
// histogram arrays should be [0...255]
for (int i = 0; i < pixels.length; i++) {
    R = (pixels[i] >> 16) & 0xFF;
    G = (pixels[i] >> 8) & 0xFF;
    B = pixels[i] & 0xFF;
    histoR[R]++;
    histoG[G]++;
    histoB[B]++;
}
// generate cumulative histograms (each entry is a running sum)
cumR[0] = histoR[0];
cumG[0] = histoG[0];
cumB[0] = histoB[0];
for (int i = 1; i < histoR.length; i++) {
    cumR[i] = histoR[i] + cumR[i - 1];
    cumG[i] = histoG[i] + cumG[i - 1];
    cumB[i] = histoB[i] + cumB[i - 1];
}
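What the histogram code stops short of is the mapping step itself. Assuming cumR/cumG/cumB really are running sums (true CDFs), a common per-channel recipe builds a 256-entry lookup table like this (the class is a sketch of mine; the normalization is one standard choice, not the only one):

```java
public class Equalize {
    // Build a 256-entry lookup table from a cumulative histogram (CDF).
    // totalPixels is the number of pixels in the image.
    static int[] buildLut(int[] cum, int totalPixels) {
        // cdfMin is the smallest non-zero CDF value
        int cdfMin = 0;
        for (int c : cum) {
            if (c > 0) { cdfMin = c; break; }
        }
        int[] lut = new int[256];
        if (totalPixels <= cdfMin) { // flat image: nothing to stretch
            for (int v = 0; v < 256; v++) lut[v] = v;
            return lut;
        }
        for (int v = 0; v < 256; v++) {
            // classic equalization formula, clamped at 0 for empty low bins
            lut[v] = Math.max(0, Math.round(255f * (cum[v] - cdfMin) / (totalPixels - cdfMin)));
        }
        return lut;
    }
}
```

Then each output channel value is simply lut[inputValue]; note the float arithmetic, which avoids the all-black result that integer division would produce.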
After some research, I was able to find a histogram-equalization example for Java that uses a LUT, and it is a better option than converting to another color space such as YUV.
With minimal modification, I was able to use the following code:
Histogram Equalization for Java

Convert alpha channel in white when saving to JPEG with ImageWriter

I'm converting a png image to jpeg with the following snippet of code:
ByteArrayOutputStream image1baos = new ByteArrayOutputStream();
image1 = resizeImage(cropImage(image1, rect1), 150);
ImageWriter writer = null;
Iterator<ImageWriter> iter = ImageIO.getImageWritersByFormatName("jpg");
if (iter.hasNext()) {
    writer = iter.next();
}
ImageOutputStream ios = ImageIO.createImageOutputStream(image1baos);
writer.setOutput(ios);
// set the compression quality
ImageWriteParam iwparam = new MyImageWriteParam();
iwparam.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
iwparam.setCompressionQuality(0.2f);
// write image 1
writer.write(null, new IIOImage(image1, null, null), iwparam);
ios.flush();
// set image 1
c.getItem1().setImageData(image1baos.toByteArray());
I'd like to convert the alpha channel to white, not black as it becomes by default, but I couldn't find a way to do that. I'd appreciate any help!
My solution is ugly and probably slow, but it's a solution :)
BufferedImage img = <your image>
for (int i = 0; i < img.getWidth(); i++) {
    for (int j = 0; j < img.getHeight(); j++) {
        // get argb from pixel
        int coli = img.getRGB(i, j);
        int a = (coli >> 24) & 0xFF;
        int r = (coli >> 16) & 0xFF;
        int g = (coli >> 8) & 0xFF;
        int b = coli & 0xFF;
        coli = 0; // clear all channels before rebuilding the pixel
        // do what you want with a, r, g and b; in your case:
        a = 0xFF;
        // save argb
        coli |= a << 24;
        coli |= r << 16;
        coli |= g << 8;
        coli |= b;
        img.setRGB(i, j, coli);
    }
}
Of course, you can reduce the code by 60% if you just need to adjust the alpha channel. I kept all the RGB handling for further reference.
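For the original question, a less pixel-by-pixel alternative (a sketch of mine, not part of the answer above) is to composite the image onto an opaque white canvas with Graphics2D before handing it to the JPEG writer; the writer then never sees (and blackens) transparent pixels.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class WhiteBackground {
    // Paint src over an opaque white canvas; the result has no alpha channel,
    // so transparent areas come out white in the JPEG instead of black.
    static BufferedImage flattenToWhite(BufferedImage src) {
        BufferedImage out = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, src.getWidth(), src.getHeight());
        g.drawImage(src, 0, 0, null); // alpha blending happens here
        g.dispose();
        return out;
    }
}
```

Pass the returned image to writer.write(...) in place of the original; semi-transparent pixels are blended against white rather than clipped to fully opaque.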
