BufferedImage: Set all pixels white where alpha = 0 - java

I have a BufferedImage and would like to set all pixels that are fully transparent to be fully-transparent white (rather than transparent blank, or whatever may be in the source file). I can of course loop through the entire image using getRGB and setRGB, but is there some other way that would be much faster?

You can set the pixels like this:
public void setRGB(int startX, int startY, int w, int h,
                   int[] rgbArray, int offset, int scansize)
This method sets an array of integer pixels in the default RGB color model (TYPE_INT_ARGB) and default sRGB color space into a portion of the image data. Color conversion takes place if the default model does not match the image ColorModel. There are only 8 bits of precision for each color component when using this method. With a specified coordinate (x, y) in this image, the ARGB pixel can be accessed in this way:
pixel = rgbArray[offset + (y-startY)*scansize + (x-startX)];
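Building on that, a single bulk read-modify-write pass is usually much faster than per-pixel getRGB/setRGB calls, because it avoids one method call (and one color-model lookup) per pixel. A minimal sketch of the approach (class and method names are mine):

```java
import java.awt.image.BufferedImage;

public class TransparentToWhite {
    // Sets every fully transparent pixel to transparent white (0x00FFFFFF)
    // using one bulk getRGB/setRGB round trip.
    public static void whitenTransparent(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        int[] pixels = img.getRGB(0, 0, w, h, null, 0, w);
        for (int i = 0; i < pixels.length; i++) {
            if ((pixels[i] >>> 24) == 0) {   // alpha == 0
                pixels[i] = 0x00FFFFFF;      // fully transparent white
            }
        }
        img.setRGB(0, 0, w, h, pixels, 0, w);
    }
}
```

Note that for image types other than TYPE_INT_ARGB, the bulk getRGB/setRGB still performs color conversion each way, so a Raster-level approach could be faster still.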

I can't say for sure if it is faster, but take a look at the ColorConvertOp class.
I haven't used it personally, but it might be what you are looking for.

Related

Weird RGB value from java BufferedImage getRGB()

I'm trying to get the RGB value from a grayscale image, and it returns a wrong(?) RGB value. Here is the code:
Color color = new Color(image.getRGB(0, 0));
System.out.print(color.getRed());
System.out.print(color.getGreen());
System.out.print(color.getBlue());
In the color picker I was using, the first pixel's RGB value is R:153, G:153, B:153, but my code prints
203203203
Why does this happen? Also, the MATLAB grayscale value for the exact same pixel is 153. Am I doing this wrong?
This is because image.getRGB(x, y) by definition returns ARGB values in sRGB colorspace.
From the JavaDoc:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace. Color conversion takes place if this default model does not match the image ColorModel.
Matlab and other tools likely use a linear RGB or gray color space, and this is why the values are different.
You can get the same values from Java if the image is gray scale (TYPE_BYTE_GRAY), by accessing the Raster and its getDataElements method.
Object pixel = raster.getDataElements(0, 0, null); // x, y, data array (initialized if null)
If the image is TYPE_BYTE_GRAY, pixel will be a byte array with a single element.
int grayValue = ((byte[]) pixel)[0] & 0xff;
This value will be 153 in your case.
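Putting that together, a small helper can read the raw stored gray value without any sRGB conversion (the class name is mine; this assumes a TYPE_BYTE_GRAY image):

```java
import java.awt.image.BufferedImage;
import java.awt.image.Raster;

public class GrayPixelDemo {
    // Returns the raw stored gray value at (x, y), bypassing the
    // linear-gray-to-sRGB conversion that getRGB performs.
    // Assumes the image is TYPE_BYTE_GRAY (one byte per pixel).
    public static int rawGray(BufferedImage img, int x, int y) {
        Raster raster = img.getRaster();
        byte[] pixel = (byte[]) raster.getDataElements(x, y, null);
        return pixel[0] & 0xff;
    }
}
```

For midtone values, the linear-to-sRGB conversion brightens the result, which is why 153 comes out as roughly 203 through getRGB.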
Just try this
System.out.println(image.getRaster().getSample(0, 0, 0)); //R
System.out.println(image.getRaster().getSample(0, 0, 1)); //G
System.out.println(image.getRaster().getSample(0, 0, 2)); //B
Here, getSample(int x, int y, int b) returns the sample in the specified band for the pixel located at (x, y) as an int (according to the Raster documentation).
Parameters:
x - the X coordinate of the pixel location
y - the Y coordinate of the pixel location
b - the band to return; b = 0, 1, 2 for R, G, B in an RGB image (note that a TYPE_BYTE_GRAY image has only band 0)
and also take a look at BufferedImage getRGB vs Raster getSample

Java: Custom BufferedImage / More than RGBA

I want to save not only Red, Green, Blue and Alpha in my Image.
Every pixel also needs Z-Depth information, like, how far it was away from the camera.
Furthermore, I need to display the Image in a JFrame, so I can't use my custom Image class, but instead I need a BufferedImage or a subclass of it.
The Z-Depth shouldn't be visible in the JFrame. I just want to store it.
I've read much about the BufferedImage class.
I assume that I will have to extend classes like SampleModel or ColorModel, but I can't figure out how that should be done.
A nice solution would be to instantiate a new BufferedImage but with a custom Pixelclass that also stores depth, without actually extending the BufferedImage.
But any solution and any idea will be appreciated!
Does anybody know, which classes I have to extend, in order to save more information in every Pixel?
Well, I still don't understand why putting extra data into the BufferedImage would be more "flexible and extensible" in this case, so I'd probably just go with something like this myself:
public class DeepImage {
    private final BufferedImage image;
    private final float[] zIndex; // Must be image.width * image.height long
    // TODO: Constructors and accessors as needed...
}
But of course, you could put the Z-depth into a BufferedImage, if you really like to:
public class DeepBufferedImage extends BufferedImage {
    private final float[] zIndex;
    public DeepBufferedImage(final BufferedImage image, final float[] zIndex) {
        super(image.getColorModel(), image.getRaster(), image.getColorModel().isAlphaPremultiplied(), null);
        if (zIndex.length != image.getWidth() * image.getHeight()) {
            throw new IllegalArgumentException("bad zIndex length");
        }
        this.zIndex = zIndex; // Obviously not safe, but we'll go for performance here
    }
    public float getZ(int x, int y) { return zIndex[y * getWidth() + x]; }
    public void setZ(int x, int y, float z) { zIndex[y * getWidth() + x] = z; }
    public float[] getZ(int x, int y, int w, int h) {...} // region variants left as an exercise
    public void setZ(int x, int y, int w, int h, float[] z) {...}
}
The above class would work just like a normal BufferedImage for all cases, except it also happens to have a Z-index for each pixel.
Alternatively, you could make the Z-index part of the DataBuffer/SampleModel/Raster, but I think that would also require a custom ColorModel (or ColorSpace), and require quite a huge effort. These classes don't normally work well with "mixed" data types. Of course you could pack all your samples into one long per pixel, to avoid mixing data types:
long rgbaz = (long) getRGB(x, y) << 32 | (Float.floatToIntBits(getZ(x, y)) & 0xFFFFFFFFL);
But, this would obviously kill performance.
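For what it's worth, if you did go the packed-long route, the bit-twiddling needs an explicit widening cast before the shift (an int shifted by 32 is unchanged in Java). A hypothetical helper, with names of my choosing:

```java
public class PackedRgbaz {
    // Packs a 32-bit ARGB value and a float Z depth into one long.
    public static long pack(int argb, float z) {
        return ((long) argb << 32) | (Float.floatToIntBits(z) & 0xFFFFFFFFL);
    }
    // Recovers the ARGB half from the high 32 bits.
    public static int argb(long packed) {
        return (int) (packed >>> 32);
    }
    // Recovers the Z depth from the low 32 bits.
    public static float z(long packed) {
        return Float.intBitsToFloat((int) packed);
    }
}
```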
So, in short, I just don't see the benefit from doing that. Especially as you don't want to visualize anything but the RGBA values, and there is also no file format I know of, that would support such a pixel layout*.
All of that said, there might still be a reason for you to implement this, I just don't see the need for it, given the requirements mentioned so far. Could of course be that I am missing something important. :-)
*) The TIFF format would support storing it, if you were using float samples for all 5 (R,G,B,A,Z) channels, or the packed long (64 bits) samples. However, I think it would be a lot simpler (and be a lot more compatible), to just store a normal 32 bit (8,8,8,8) RGBA image, and then a separate 1 channel float image in a multipage TIFF.
I believe that you are looking for ImageComponent3D, which allows you to define a 3D array of pixels.
ImageComponent3D(int format, int width, int height, int depth)
Constructs a 3D image component object using the specified format, width, height and depth.
ImageComponent3D Documentation

Get RGB components from ColorModel

I want to extract the R,G and B values of the pixels of an image. I do it in two ways.
File img_file = new File("../foo.png");
BufferedImage img = ImageIO.read(img_file);
1st method (which works fine):
img.getRaster().getPixel(i, j, rgb);
2nd method (which throws an IllegalArgumentException("More than one component per pixel")):
red = img.getColorModel().getRed(img.getRGB(i, j));
What is the reason for this behaviour?
Normally when I want to extract RGB from a BufferedImage I do something like this:
File img_file = new File("../foo.png");
BufferedImage img = ImageIO.read(img_file);
Color color = new Color(img.getRGB(i,j));
int red = color.getRed();
Based on the JavaDocs
An IllegalArgumentException is thrown if pixel values for this
ColorModel are not conveniently representable as a single int
It would suggest that pixel values for this image's ColorModel are not representable as a single int, which is why getRed(int) cannot be used here.
You may also want to take a look at this answer for some more details
Typically, you would simply take the int packed pixel from the image and use Color to generate a Color representation and then extract the values from there...
First, get the int packed value of the pixel at x/y...
int pixel = img.getRGB(i, j);
Use this to construct a Color object...
Color color = new Color(pixel, true); // True if you care about the alpha value...
Extract the R, G, B values...
int red = color.getRed();
int green = color.getGreen();
int blue = color.getBlue();
Now you could simply do some bit maths, but this is simpler and is more readable - IMHO
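For completeness, the bit maths mentioned above amounts to shifting and masking the packed TYPE_INT_ARGB value (helper names are mine):

```java
public class RgbBits {
    // Extracts the four 8-bit components from a packed ARGB int.
    public static int alpha(int argb) { return (argb >>> 24) & 0xFF; }
    public static int red(int argb)   { return (argb >> 16) & 0xFF; }
    public static int green(int argb) { return (argb >> 8)  & 0xFF; }
    public static int blue(int argb)  { return argb & 0xFF; }
}
```

This is exactly what Color.getRed() and friends do internally, so the Color-based version trades a tiny allocation for readability.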

Edit pixel values

How do I edit the pixel values of an image in Java? Is there a method to change pixel values?
For example:
BufferedImage image = ...
image.setRGB(x, y, 0);
From documentation:
void setRGB(int x, int y, int rgb)
//Sets a pixel in this BufferedImage to the specified RGB value.
In BufferedImage:

public void setRGB(int x, int y, int rgb)

Sets a pixel in this BufferedImage to the specified RGB value. The pixel is assumed to be in the default RGB color model, TYPE_INT_ARGB, and default sRGB color space. For images with an IndexColorModel, the index with the nearest color is chosen.
http://download.oracle.com/javase/6/docs/api/java/awt/image/BufferedImage.html
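Going the other way, the int that setRGB expects can be packed from separate components; a small sketch (class and method names are mine):

```java
import java.awt.image.BufferedImage;

public class PixelEditor {
    // Packs separate A/R/G/B components (each 0-255) into the
    // TYPE_INT_ARGB int that setRGB expects.
    public static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }
    public static void setPixel(BufferedImage img, int x, int y,
                                int a, int r, int g, int b) {
        img.setRGB(x, y, pack(a, r, g, b));
    }
}
```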

How do I add spotlights to an image

I have an image that I want to show some 'spotlights' on, like they do on TV. The rest of the image should be darker than the original, and the person that I'm spotlighting should be normal. I have the x,y and radius of the spotlight, but I'm not sure how to change the brightness at that location.
Also, if I have two spotlights and they intersect, the intersection should be brighter than either of the spotlights.
Use RescaleOp on the original image and its subimages. Given a BufferedImage (called biDest) that contains the image, filter it with new RescaleOp(0.6f, 0f, null) to make it darker. Then, to add a (rectangular) spotlight, call the following:
public void spotLight(int x, int y, int w, int h)
{
    BufferedImage i = biDest.getSubimage(x, y, w, h);
    RescaleOp rescale = new RescaleOp(SPOTLIGHT_BRIGHTNESS, 0, null);
    rescale.filter(i, i);
    repaint();
}
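To see the whole flow end to end, here is a self-contained variant (not the answerer's exact code) that darkens a copy of the image and then pastes the original, full-brightness pixels back into the spotlight rectangle, which avoids the rounding loss of darkening and re-brightening:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class SpotlightDemo {
    // Returns a darkened copy of src with one full-brightness
    // rectangular "spotlight" at (x, y) of size w x h.
    public static BufferedImage withSpotlight(BufferedImage src,
                                              int x, int y, int w, int h) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        new RescaleOp(0.6f, 0f, null).filter(src, out); // darken everything
        Graphics2D g = out.createGraphics();
        // Paste the untouched original pixels back into the spotlight area
        g.drawImage(src.getSubimage(x, y, w, h), x, y, null);
        g.dispose();
        return out;
    }
}
```

For overlapping spotlights that get brighter where they intersect, you could instead accumulate a per-pixel scale factor and apply it in one pass.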
A simple way is to convert the color to HSL, lower L to darken, increase to lighten, then convert back to RGB and set the pixel.
http://www.mpa-garching.mpg.de/MPA-GRAPHICS/hsl-rgb.html
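Java's standard library offers HSB rather than HSL, but the same darken/lighten trick works with java.awt.Color's built-in conversions; a per-pixel sketch (the class name is mine):

```java
import java.awt.Color;

public class Darken {
    // Darkens one packed RGB pixel by scaling brightness in HSB space.
    // factor should be in [0, 1]; 1.0 leaves the pixel unchanged.
    public static int darken(int rgb, float factor) {
        Color c = new Color(rgb);
        float[] hsb = Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null);
        return Color.HSBtoRGB(hsb[0], hsb[1], hsb[2] * factor);
    }
}
```

Note that HSBtoRGB returns a fully opaque pixel (alpha 0xFF), so alpha would need to be carried separately if it matters.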
