Program can't grayscale certain images? - java

I am trying to create a program that applies a grayscale filter over a chosen image for my computer science class.
I found the following code in a tutorial; it demonstrates the grayscale algorithm, where the R, G, and B values of every pixel in the image are replaced with their average.
import java.io.File;
import java.io.IOException;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;

public class Grayscale {
    public static void main(String args[]) throws IOException {
        BufferedImage img = null;
        File f = null;

        // read image
        try {
            f = new File("D:\\Image\\Taj.jpg");
            img = ImageIO.read(f);
        } catch (IOException e) {
            System.out.println(e);
        }

        // get image width and height
        int width = img.getWidth();
        int height = img.getHeight();

        // convert to grayscale
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int p = img.getRGB(x, y);
                int a = (p >> 24) & 0xff;
                int r = (p >> 16) & 0xff;
                int g = (p >> 8) & 0xff;
                int b = p & 0xff;

                // calculate average
                int avg = (r + g + b) / 3;

                // replace RGB value with avg
                p = (a << 24) | (avg << 16) | (avg << 8) | avg;
                img.setRGB(x, y, p);
            }
        }

        // write image
        try {
            f = new File("D:\\Image\\Output.jpg");
            ImageIO.write(img, "jpg", f);
        } catch (IOException e) {
            System.out.println(e);
        }
    } // main() ends here
} // class ends here
The problem is that the program does not properly apply the grayscale filter to certain images. For example, the code can properly apply the filter to this image, producing a grayscale result. But the following image of a rainbow looks like this with the grayscale filter applied to it.
Why are red, green, blue, and pink still showing with the filter applied? My understanding is that when the R, G, and B values of a pixel are all the same, the result should be a shade of gray.

From the JavaDoc of BufferedImage.setRGB()
"Sets a pixel in this BufferedImage to the specified RGB value. The pixel is assumed to be in the default RGB color model, TYPE_INT_ARGB, and default sRGB color space. For images with an IndexColorModel, the index with the nearest color is chosen."
To solve this, create a new BufferedImage with a suitable (non-indexed) type and the same dimensions as the original image, and write the pixels to that instead of back to the original BufferedImage.
BufferedImage targetImage = new BufferedImage(img.getWidth(),
        img.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
Write the pixels to this image instead:
targetImage.setRGB(x, y, p);
Then save this new image:
ImageIO.write(targetImage, "jpg", f);
As a note, a more accurate way to convert a colour image to grayscale is to convert the RGB pixels to the YUV colour space and use the luminance (Y) value rather than the plain average of R, G, and B, because the eye perceives the brightness of red, green, and blue differently.
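For illustration, a small sketch of that weighted approach using the common Rec. 601 luma coefficients in place of the plain average (the variable names assume the same loop as the code above):
// weighted luminance (Rec. 601): the eye is most sensitive to green, least to blue
int lum = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
// pack the same luminance into R, G and B, keeping the original alpha
p = (a << 24) | (lum << 16) | (lum << 8) | lum;
targetImage.setRGB(x, y, p);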

Related

Converting Grayscale values from .csv to BufferedImage

I'm attempting to convert a .csv file containing grayscale values to an image using BufferedImage.
The CSV is read into pixArray[] initially, in which all values are doubles.
I am trying to create a 100x100px output image with the code
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        image.setRGB(x, y, (int) Math.round(pixArray[y]));
    }
}

File file_out = new File("output.png");
try {
    ImageIO.write(image, "png", file_out);
} catch (IOException e) {
    e.printStackTrace();
}
but all I get as output is a 100x100 black square.
I've tried alternatives to TYPE_BYTE_GRAY and to the PNG output format with no success, and can't find what is producing this error.
It should be
int g = (int) Math.round(pixArray[y]);
image.setRGB(x, y, new Color(g, g, g).getRGB());
What your current code is doing is setting the alpha to the pixel value but leaving the color components all zero.
Posting an alternative solution. While Jim's answer is correct and works, it is also one of the slowest* ways to put sample values into a gray scale BufferedImage.
A BufferedImage of TYPE_BYTE_GRAY doesn't need all the conversion to and from RGB colors. To put the gray values directly into the image, go through the image's raster:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
WritableRaster raster = image.getRaster();

for (int y = 0; y < height; y++) {
    int value = (int) Math.round(pixArray[y]);
    for (int x = 0; x < width; x++) {
        raster.setSample(x, y, 0, value);
    }
}
*) Slow because of creating excessive throw-away Color instances, but mostly due to color space conversion to/from sRGB color space. Probably not very noticeable in a 100x100 image, but if you try 1000x1000 or larger, you will notice.
PS: I also re-arranged the loops to loop over x in the inner loop. This is normally faster, especially when reading values, due to data locality and caching in modern CPUs. In your case, it matters mostly because you only need to compute (round, cast) the value for each row.

Can't get correct alpha values from individual BufferedImage pixels

I need to get RGBA values, modify the RGB components, then set the RGB (and A) values again.
However, getAlpha() seems to return 255 for every pixel, even though I know the image I'm loading has transparent pixels in it.
Here's an SSCCE
import javax.imageio.ImageIO;
import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class TestImage {
    public static void main(String args[]) {
        try {
            BufferedImage image = ImageIO.read(new File("/home/affy/Pictures/salamenceavatar.png"));
            BufferedImage buff = new BufferedImage(
                    image.getWidth(),
                    image.getHeight(),
                    BufferedImage.TYPE_INT_ARGB
            );
            Graphics2D g = buff.createGraphics();
            g.drawImage(image, 0, 0, null);

            for (int x = 0; x < image.getWidth(); x++) {
                for (int y = 0; y < image.getHeight(); y++) {
                    Color c = new Color(buff.getRGB(x, y));
                    int alpha = c.getAlpha();
                    System.out.println(alpha);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Use the Color(int, boolean) constructor instead, passing it true to tell it you want to extract the alpha value from the supplied packed int:
Color c = new Color(buff.getRGB(x, y), true);
From the JavaDocs...
Creates an sRGB color with the specified combined RGBA value consisting of the alpha component in bits 24-31, the red component in bits 16-23, the green component in bits 8-15, and the blue component in bits 0-7. If the hasalpha argument is false, alpha is defaulted to 255.
Parameters:
rgba - the combined RGBA components
hasalpha - true if the alpha bits are valid; false otherwise
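As an aside, the alpha byte can also be read straight from the packed int returned by getRGB(), without constructing a Color at all; a minimal sketch:
int argb = buff.getRGB(x, y);      // packed as 0xAARRGGBB
int alpha = (argb >>> 24) & 0xff;  // unsigned shift pulls the alpha out of bits 24-31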

Removing BufferedImage pixel values and/or setting them transparent

I have been working with the Polygon class, trying to set the pixel values inside the polygon to transparent or remove them altogether if that is possible. However, I have hit a wall: I am storing the values as packed RGB ints and don't know how to make a pixel transparent or removed via this method.
Additionally, I would like to do the reverse: keep the pixels inside the polygon and delete those outside, so that I am left with only the pixels contained within the polygon. I have searched around for this before but to no avail.
I did attempt to create an SSCCE to make this easier to work with for anyone taking the time to help, but since it is part of a much larger programme, creating one is taking some time; once I have one working to better demonstrate the problem I will edit this post.
Thank you to anyone taking the time to help me with this problem.
Below is the code I am currently using to segment the pixels contained within an already specified polygon. It is very similar to the way I set pixels outside the polygon to transparent (only with the if-statement branches swapped to remove a segment of the image, and returning newImage rather than saving it), and that version works perfectly. However, when I do it this way to save the pixels contained in the polygon, it doesn't save for some reason.
public void saveSegment(int tabNum, BufferedImage img) {
    segmentation = new GUI.Segmentation();
    Polygon p = new Polygon();
    Color pixel;

    p = createPolygon(segmentation);
    int height = img.getHeight();
    int width = img.getWidth();
    newImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);

    // loop through the image to fill the 2d array up with the segmented pixels
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // if the pixel is inside the polygon
            if (p.contains(x, y)) {
                pixel = new Color(img.getRGB(x, y));
                // set pixel equal to the RGB value of the pixel being looked at
                int r = pixel.getRed();   // red component 0...255
                int g = pixel.getGreen(); // green component 0...255
                int b = pixel.getBlue();  // blue component 0...255
                int a = pixel.getAlpha(); // alpha (transparency) component 0...255
                int col = (a << 24) | (r << 16) | (g << 8) | b;
                newImage.setRGB(x, y, col);
            } else {
                pixel = new Color(img.getRGB(x, y));
                int a = 0; // alpha (transparency) component 0...255
                int col = (a << 24);
                newImage.setRGB(x, y, col);
            }
        }
    }

    try {
        // then save as image once all in correct order
        ImageIO.write(newImage, "bmp", new File("saved-Segment.bmp"));
        JOptionPane.showMessageDialog(null, "New image saved successfully");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
An easier way is to use Java2D's clipping capability:
BufferedImage cutHole(BufferedImage image, Polygon holeShape) {
    BufferedImage newImage = new BufferedImage(
            image.getWidth(), image.getHeight(), image.getType());
    Graphics2D g = newImage.createGraphics();
    Rectangle entireImage =
            new Rectangle(image.getWidth(), image.getHeight());
    Area clip = new Area(entireImage);
    clip.subtract(new Area(holeShape));
    g.clip(clip);
    g.drawImage(image, 0, 0, null);
    g.dispose();
    return newImage;
}

BufferedImage clipToPolygon(BufferedImage image, Polygon polygon) {
    BufferedImage newImage = new BufferedImage(
            image.getWidth(), image.getHeight(), image.getType());
    Graphics2D g = newImage.createGraphics();
    g.clip(polygon);
    g.drawImage(image, 0, 0, null);
    g.dispose();
    return newImage;
}
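A hypothetical call site, assuming sourceImage is a loaded BufferedImage and selection is the polygon built elsewhere in the programme:
BufferedImage segment = clipToPolygon(sourceImage, selection);   // keep only the pixels inside the polygon
ImageIO.write(segment, "png", new File("saved-Segment.png"));    // PNG preserves the transparent pixels, which the BMP writer may not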

Convert 2D array in Java to Image

I need to convert a 2D array of pixel intensity data of a grayscale image back to an image. I tried this:
BufferedImage img = new BufferedImage(
        regen.length, regen[0].length, BufferedImage.TYPE_BYTE_GRAY);
for (int x = 0; x < regen.length; x++) {
    for (int y = 0; y < regen[x].length; y++) {
        img.setRGB(x, y, (int) Math.round(regen[x][y]));
    }
}
File imageFile = new File("D:\\img\\conv.bmp");
ImageIO.write(img, "bmp", imageFile);
where "regen" is a 2D double array. I am getting an output which is similar but not exact. There are few pixels that are totally opposite to what it must be (I get black color for a pixel which has a value of 255). Few gray shades are also taken as white. Can you tell me what is the mistake that I am doing?
Try some code like this:
public void writeImage(int Name) {
    String path = "res/world/PNGLevel_" + Name + ".png";
    BufferedImage image = new BufferedImage(color.length, color[0].length, BufferedImage.TYPE_INT_RGB);
    for (int x = 0; x < color.length; x++) {
        for (int y = 0; y < color[0].length; y++) {
            image.setRGB(x, y, color[x][y]);
        }
    }
    File ImageFile = new File(path);
    try {
        ImageIO.write(image, "png", ImageFile);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
BufferedImage.TYPE_BYTE_GRAY is unsigned and non-indexed. Moreover,
When data with non-opaque alpha is stored in an image of this type, the color data must be adjusted to a non-premultiplied form and the alpha discarded, as described in the AlphaComposite documentation.
At a minimum you need to preclude sign extension and mask off all but the lowest eight bits of the third parameter to setRGB(). Sample data that reproduces the problem would be dispositive.
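For illustration, a minimal sketch of that fix (assuming regen holds values in the 0-255 range): mask the rounded value down to a single byte and pack it into all three colour channels before handing it to setRGB().
int gray = ((int) Math.round(regen[x][y])) & 0xFF;  // mask off sign extension / stray high bits
int rgb = (gray << 16) | (gray << 8) | gray;        // the same value in R, G and B
img.setRGB(x, y, rgb);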

Image Processing in Java

I want to extract the pixel values of a JPEG image using Java, and I need to store them in an array (bufferdArray) for further manipulation. How can I extract the pixel values from a JPEG image?
Have a look at BufferedImage.getRGB().
Here is a stripped-down instructional example of how to pull apart an image to do a conditional check/modify on the pixels. Add error/exception handling as necessary.
public static BufferedImage exampleForSO(BufferedImage image) {
    BufferedImage imageIn = image;
    BufferedImage imageOut =
            new BufferedImage(imageIn.getWidth(), imageIn.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    int width = imageIn.getWidth();
    int height = imageIn.getHeight();
    int[] imageInPixels = imageIn.getRGB(0, 0, width, height, null, 0, width);
    int[] imageOutPixels = new int[imageInPixels.length];
    for (int i = 0; i < imageInPixels.length; i++) {
        int inR = (imageInPixels[i] & 0x00FF0000) >> 16;
        int inG = (imageInPixels[i] & 0x0000FF00) >> 8;
        int inB = (imageInPixels[i] & 0x000000FF) >> 0;
        if (conditionChecker_inRinGinB) {
            // modify
        } else {
            // don't modify
        }
    }
    imageOut.setRGB(0, 0, width, height, imageOutPixels, 0, width);
    return imageOut;
}
The easiest way to get a JPEG into a java-readable object is the following:
BufferedImage image = ImageIO.read(new File("MyJPEG.jpg"));
BufferedImage provides methods for getting RGB values at exact pixel locations in the image (X-Y integer coordinates), so it'd be up to you to figure out how you want to store that in a single-dimensional array, but that's the gist of it.
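For example, one possible layout (a sketch, not the only option) is row-major order, where the packed ARGB value of each pixel lands at index y * width + x:
int width = image.getWidth();
int height = image.getHeight();
int[] pixels = new int[width * height];
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        pixels[y * width + x] = image.getRGB(x, y);  // packed ARGB int for this pixel
    }
}
The getRGB(0, 0, width, height, null, 0, width) overload used in the snippet above fills such an array in a single call.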
There is a way of taking a buffered image and converting it into an integer array, where each integer in the array represents the rgb value of a pixel in the image.
int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
The interesting thing is, when an element in the integer array is edited, the corresponding pixel in the image is as well.
In order to find a pixel in the array from a set of x and y coordinates, you would use this method.
public void setPixel(int x, int y, int rgb) {
    pixels[y * image.getWidth() + x] = rgb;
}
Even with the multiplication and addition of coordinates, it is still faster than using the setRGB() method in the BufferedImage class.
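A matching read helper (a sketch assuming the same pixels array and image fields) uses the same index arithmetic:
public int getPixel(int x, int y) {
    return pixels[y * image.getWidth() + x];  // packed RGB int at (x, y)
}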
EDIT:
Also keep in mind that the image's type needs to be TYPE_INT_RGB, which it isn't by default. It can be converted by creating a new image of the same dimensions and of type TYPE_INT_RGB, then using the new image's graphics object to draw the original image onto it.
public BufferedImage toIntRGB(BufferedImage image) {
    if (image.getType() == BufferedImage.TYPE_INT_RGB)
        return image;
    BufferedImage newImage = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
    newImage.getGraphics().drawImage(image, 0, 0, null);
    return newImage;
}