I just need to check three pixel values on the whole image (Image instance). I'd really like to do this without allocating an array of pixels.
Is that possible? Something like BufferedImage's getRGB()?
Yes. You are on the right track with the getRGB method (a method of BufferedImage, not of Graphics), as it returns a single int with the three components (R, G and B) packed into it. To split that single int into three ints, you can do one of two things:
1. Use the Color class's built-in constructor:
int rgb = img.getRGB(0, 0); //Get the packed RGB value of pixel 0,0
Color c = new Color(rgb);   //c now exposes the r, g, and b values.
2. Build the decoder yourself:
int rgb = img.getRGB(0, 0); //Get the packed RGB value of pixel 0,0
int r = (rgb >> 16) & 0xFF;
int g = (rgb >> 8) & 0xFF;
int b = rgb & 0xFF;
Both approaches give you the r, g, and b values of the pixel, which you can then use.
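For the original goal of checking just three pixel values, a minimal sketch could look like this (assuming img is, or has been drawn into, a BufferedImage; the three coordinates are placeholders):

import java.awt.Color;

// Check three individual pixels without allocating a full pixel array
int[][] points = { {0, 0}, {10, 20}, {30, 40} };
for (int[] p : points) {
    Color c = new Color(img.getRGB(p[0], p[1]));
    System.out.println(p[0] + "," + p[1] + " -> "
            + c.getRed() + " " + c.getGreen() + " " + c.getBlue());
}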
I don't understand how the WritableRaster class in Java works. I tried looking at the documentation, but I don't understand how it takes values from an array of pixels, and I'm not sure what the array of pixels consists of.
Let me explain what I want to do: Shamir's Secret Sharing on images. For that I need to read an image into a BufferedImage. I take a secret image and create shares by running a 'function' on each pixel of the image (basically changing the pixel values by something).
Snippet:
int w = image.getWidth();
int h = image.getHeight();
for (int i = 0; i < h; i++)
{
    for (int j = 0; j < w; j++)
    {
        int pixel = image.getRGB(j, i);
        int red = (pixel >> 16) & 0xFF;
        int green = (pixel >> 8) & 0xFF;
        int blue = (pixel) & 0xFF;
        pixels[j][i] = share1(red, green, blue);
    }
}
// Now, taking those rgb values, I change them using some function and return an int value. Something like this:
public int share1(int r, int g, int b)
{
    int a1 = rand.nextInt(primeNumber);
    int total1 = r + g + b + a1;
    int new_pixel = total1 % primeNumber;
    return new_pixel;
}
This 2D array pixels now has all the new color values, right? But I want to build an image using these new values. So here is what I did:
First, I converted this pixels array to a list.
Now this list has the pixel values of the new image. But to build an image using the RasterObj.setPixels() method, I need an array with separate RGB values [I MIGHT BE WRONG HERE!].
So I take the individual values of the list, derive r, g, b values from each, and put them consecutively into a new 1D array pixelvector, something like this: (r1,g1,b1,r2,g2,b2,r3,g3,b3,...).
The size of the list is w*h, because it contains a single value per pixel.
BUT the size of the new array pixelvector becomes w*h*3, since it contains the r, g and b values of each pixel.
Then, to form the image, I do this:
Snippet:
BufferedImage image_share1 = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
WritableRaster rast = (WritableRaster) image_share1.getData();
rast.setPixels(0, 0, w, h, pixelvector);
image_share1.setData(rast);
ImageIO.write(image_share1,"JPG",new File("share1.jpg"));
If I put an array with just single pixel values into the setPixels() method, it does not return from that function! But if I put an array with separate r, g, b values, it returns. Doing the same thing for share1, share2, etc., I am getting nothing but shades of blue, so I am not even sure I will be able to reconstruct the image..
PS - This might look like very foolish code, I know. But I had just one day to do this and learn about images in Java, so I am doing the best I can.
Thanks..
A Raster (like WritableRaster and its subclasses) consists of a SampleModel and a DataBuffer. The SampleModel describes the sample layout (is it pixel packed, pixel interleaved, band interleaved? how many bands? etc.) and the dimensions, while the DataBuffer describes the actual storage (are the samples bytes, shorts or ints, signed or unsigned? a single array or one array per band? etc.).
For BufferedImage.TYPE_INT_RGB the samples will be pixel packed (all 3 R, G and B samples packed into a single int for each pixel), and the data/transfer type will be DataBuffer.TYPE_INT.
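If you want to see this for yourself, here is a quick sketch that inspects the layout of a TYPE_INT_RGB image (the 4x3 size is arbitrary):

import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;
import java.awt.image.SampleModel;

BufferedImage img = new BufferedImage(4, 3, BufferedImage.TYPE_INT_RGB);
SampleModel sm = img.getSampleModel();
DataBuffer db = img.getRaster().getDataBuffer();

System.out.println(sm.getClass().getSimpleName());           // SinglePixelPackedSampleModel
System.out.println(sm.getNumBands());                         // 3 (R, G and B)
System.out.println(db.getDataType() == DataBuffer.TYPE_INT);  // true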
Sorry for not answering your question regarding WritableRaster.setPixels(...) directly, but I don't think it's the method you are looking for (in most cases, it's not). :-)
For your goal, I think what you should do is something like:
// Pixels in TYPE_INT_RGB format
// (ie. 0xFFrrggbb, where rr is the red byte as two hex digits, gg the green byte, etc)
int[] pixelvector = new int[w * h];
BufferedImage image_share1 = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
WritableRaster rast = image_share1.getRaster(); // Faster! No copy, and live updated
rast.setDataElements(0, 0, w, h, pixelvector);
// No need to call setData, as we modified image_share1 via its raster
ImageIO.write(image_share1,"JPG",new File("share1.jpg"));
I'm assuming the rest of your code for modifying each pixel value is correct. :-)
But just a tip: You'll make it easier for yourself (and faster due to less conversion) if you use a 1D array instead of a 2D array. I.e.:
int[] pixels = new int[w * h]; // instead of int[][] pixels = new int[w][h];
// ...
for (int y = 0; y < h; y++) {
for (int x = 0; x < w; x++) {
// ...
pixels[y * w + x] = share1(red, green, blue); // instead of pixels[x][y];
}
}
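And since your share1 function produces a single value per pixel, one way to make the result render as gray instead of "shades of blue" (this replication step is my suggestion, not something from your code) is to copy that value into all three channels before handing the array to setDataElements:

// v is the single share value for pixel (x, y); assumed here to fit in 0-255
int v = share1(red, green, blue);
// Replicate it into the R, G and B bytes so the share renders as gray;
// written unshifted it would only fill the low (blue) byte
pixels[y * w + x] = (v << 16) | (v << 8) | v;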
I want to extract the R,G and B values of the pixels of an image. I do it in two ways.
File img_file = new File("../foo.png");
BufferedImage img = ImageIO.read(img_file);
1st method (which works fine):
img.getRaster().getPixel(i, j, rgb);
2nd method (which throws IllegalArgumentException("More than one component per pixel")):
red = img.getColorModel().getRed(img.getRGB(i, j));
What is the reason for this behaviour?
Normally when I want to extract RGB from a BufferedImage I do something like this:
File img_file = new File("../foo.png");
BufferedImage img = ImageIO.read(img_file);
Color color = new Color(img.getRGB(i,j));
int red = color.getRed();
Based on the JavaDocs
An IllegalArgumentException is thrown if pixel values for this
ColorModel are not conveniently representable as a single int
It would suggest that pixel values for the underlying color model are not conveniently representable as a single int (for a PNG this is typically a ComponentColorModel-backed image), which is why the second method throws.
You may also want to take a look at this answer for some more details
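If you do want to go through the ColorModel, a variant that avoids the exception is to hand it the raster's native data elements instead of a packed int (a sketch, reusing img, i and j from the question):

// getRed(Object) accepts the raster's native pixel data, so it also works
// for color models whose pixels don't fit into a single int
Object data = img.getRaster().getDataElements(i, j, null);
int red = img.getColorModel().getRed(data);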
Typically, you would simply take the int packed pixel from the image and use Color to generate a Color representation and then extract the values from there...
First, get the int packed value of the pixel at x/y...
int pixel = img.getRGB(i, j);
Use this to construct a Color object...
Color color = new Color(pixel, true); // True if you care about the alpha value...
Extract the R, G, B values...
int red = color.getRed();
int green = color.getGreen();
int blue = color.getBlue();
Now, you could simply do some bit maths, but this is simpler and more readable - IMHO
I have a BufferedImage of TYPE_BYTE_GRAY and I need to get the pixel value at x,y. I know I can't use getRGB, as it converts the value to the default RGB color model, so how do I go about it? Many thanks!
Get a java.awt.image.Raster from the BufferedImage by invoking its getData() method.
Then use
int getSample(int x, int y, int b)
on the returned object, where b is the index of the band (color channel).
For gray scale
b = 0.
For RGB image
b = 0 ==>> R channel,
b = 1 ==>> G channel,
b = 2 ==>> B channel.
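A minimal sketch of reading one gray pixel this way (gray, x and y are assumed to already exist):

import java.awt.image.BufferedImage;
import java.awt.image.Raster;

// gray is a BufferedImage of TYPE_BYTE_GRAY; x and y are the pixel coordinates
Raster raster = gray.getData();         // a copy; use gray.getRaster() for a live view
int value = raster.getSample(x, y, 0);  // band 0 is the single gray channel, 0-255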
I guess what you're looking for is the math to get one number that represents the gray value of an RGB pixel. There are a few different ways; here are some of them:
The lightness method averages the most prominent and least prominent
colors: (max(R, G, B) + min(R, G, B)) / 2.
The average method simply averages the values: (R + G + B) / 3.
The luminosity method is a more sophisticated version of the average
method. It also averages the values, but it forms a weighted average
to account for human perception. We’re more sensitive to green than
other colors, so green is weighted most heavily. The formula for
luminosity is 0.21 R + 0.71 G + 0.07 B.
Reference : http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
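As code, the three formulas look like this (a sketch, assuming r, g and b are ints in the 0-255 range):

int lightness  = (Math.max(r, Math.max(g, b)) + Math.min(r, Math.min(g, b))) / 2;
int average    = (r + g + b) / 3;
int luminosity = (int) (0.21 * r + 0.71 * g + 0.07 * b);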
Provided that you have a BufferedImage named grayImg whose type is TYPE_BYTE_GRAY
int width = grayImg.getWidth();
int height = grayImg.getHeight();
byte[] dstBuff = ((DataBufferByte) grayImg.getRaster().getDataBuffer()).getData();
Then the gray value at (x,y) would simply be:
dstBuff[x+y*width] & 0xFF;
I want to do a simple color to grayscale conversion using java.awt.image.BufferedImage. I'm a beginner in the field of image processing, so please forgive if I confused something.
My input image is an RGB 24-bit image (no alpha), I'd like to obtain a 8-bit grayscale BufferedImage on the output, which means I have a class like this (details omitted for clarity):
public class GrayscaleFilter {
private BufferedImage colorFrame;
private BufferedImage grayFrame =
new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
I've successfully tried out two conversion methods so far, the first being:
private BufferedImageOp grayscaleConv =
new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
protected void filter() {
grayscaleConv.filter(colorFrame, grayFrame);
}
And the second being:
protected void filter() {
WritableRaster raster = grayFrame.getRaster();
for(int x = 0; x < raster.getWidth(); x++) {
for(int y = 0; y < raster.getHeight(); y++){
int argb = colorFrame.getRGB(x,y);
int r = (argb >> 16) & 0xff;
int g = (argb >> 8) & 0xff;
int b = (argb ) & 0xff;
int l = (int) (.299 * r + .587 * g + .114 * b);
raster.setSample(x, y, 0, l);
}
}
}
The first method works much faster, but the image produced is very dark, which means I'm losing information, and that is unacceptable. (There is some color conversion mapping used between the grayscale and sRGB ColorModels, called tosRGB8LUT, which doesn't work well for me as far as I can tell, but I'm not sure; I just suppose those values are used.) The second method works slower, but the effect is very nice.
Is there a method of combining those two, eg. using a custom indexed ColorSpace for ColorConvertOp? If yes, could you please give me an example?
Thanks in advance.
public BufferedImage getGrayScale(BufferedImage inputImage){
BufferedImage img = new BufferedImage(inputImage.getWidth(), inputImage.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
Graphics g = img.getGraphics();
g.drawImage(inputImage, 0, 0, null);
g.dispose();
return img;
}
There's an example here which differs from your first example in one small aspect, the parameters to ColorConvertOp. Try this:
protected void filter() {
BufferedImageOp grayscaleConv =
new ColorConvertOp(colorFrame.getColorModel().getColorSpace(),
grayFrame.getColorModel().getColorSpace(), null);
grayscaleConv.filter(colorFrame, grayFrame);
}
Try modifying your second approach: instead of working on a single pixel, retrieve an array of ARGB int values, convert those, and set them back, as sketched below.
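A sketch of that idea, reusing colorFrame and grayFrame from the question (the per-image batching is the only change; the luminance formula is the one from your second method):

protected void filter() {
    int w = colorFrame.getWidth();
    int h = colorFrame.getHeight();

    // Grab all ARGB values in one call instead of one getRGB call per pixel
    int[] argb = colorFrame.getRGB(0, 0, w, h, null, 0, w);
    int[] luma = new int[w * h];
    for (int i = 0; i < argb.length; i++) {
        int r = (argb[i] >> 16) & 0xff;
        int g = (argb[i] >> 8) & 0xff;
        int b = argb[i] & 0xff;
        luma[i] = (int) (.299 * r + .587 * g + .114 * b);
    }
    // Write the whole gray plane back in one call (band 0 of the gray raster)
    grayFrame.getRaster().setSamples(0, 0, w, h, 0, luma);
}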
The second method is based on the pixel's luminance and therefore obtains more favorable visual results. It could be sped up a little by replacing the expensive floating-point arithmetic used to calculate l with a lookup array or hash table.
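For instance, a sketch using fixed-point lookup tables in place of the floating-point multiplications (the 1024 scale factor is an arbitrary choice that keeps everything in integer math):

// Build the tables once; each entry is the weighted contribution, pre-scaled by 1024
int[] lutR = new int[256], lutG = new int[256], lutB = new int[256];
for (int i = 0; i < 256; i++) {
    lutR[i] = (int) (0.299 * i * 1024);
    lutG[i] = (int) (0.587 * i * 1024);
    lutB[i] = (int) (0.114 * i * 1024);
}
// Inside the pixel loop, replace the floating-point expression with:
int l = (lutR[r] + lutG[g] + lutB[b]) >> 10;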
Here is a solution that has worked for me in some situations.
Take the image height y, the image width x, the image color depth m, and the integer bit size n. This only works if (2^m)/(x*y*2^n) >= 1.
Keep an n-bit integer total for each color channel as you process the initial grayscale values. Divide each total by x*y to get the average value avr[channel] of each channel, then add (192 - avr[channel]) to each pixel in each channel.
Keep in mind that this approach probably won't have the same level of quality as standard luminance approaches, but if you're looking for a compromise between speed and quality, and don't want to deal with expensive floating point operations, it may work for you.
I'm trying to color individual pixels in a BufferedImage (TYPE_INT_RGB) using setRGB(), but I'm not sure how to format the RGB values. I want the result as a single integer. Is there a method that will take three int values (red, green, blue) and return a correctly formatted integer for setRGB()?
new Color(red, green, blue).getRGB();
Assuming you have ints r, g, and b, you can do:
int pixel = (r << 16) | (g << 8) | b;
This is because pixels in a BufferedImage are stored as 4-byte ints. The 4 bytes represent alpha, red, green, and blue, in that order (the alpha byte is ignored for TYPE_INT_RGB). So, if you shift red left by two bytes and green left by one byte, then bitwise-OR r, g, and b together, you get a valid pixel to use with setRGB().
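Putting both suggestions together, a short usage sketch (image, x and y are assumed to already exist):

// Pack the components into a single int and write it back to the image;
// new Color(r, g, b).getRGB() gives the same result with the alpha byte set to 0xFF
int rgb = (r << 16) | (g << 8) | b;
image.setRGB(x, y, rgb);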