Image Processing in Java

I want to extract the pixel values of a JPEG image using the Java language, and I need to store them in an array (bufferdArray) for further manipulation. How can I extract the pixel values from the JPEG image format?

Have a look at BufferedImage.getRGB().
Here is a stripped-down instructional example of how to pull apart an image to do a conditional check/modify on the pixels. Add error/exception handling as necessary.
public static BufferedImage exampleForSO(BufferedImage image) {
    BufferedImage imageIn = image;
    BufferedImage imageOut =
        new BufferedImage(imageIn.getWidth(), imageIn.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    int width = imageIn.getWidth();
    int height = imageIn.getHeight();
    int[] imageInPixels = imageIn.getRGB(0, 0, width, height, null, 0, width);
    int[] imageOutPixels = new int[imageInPixels.length];
    for (int i = 0; i < imageInPixels.length; i++) {
        int inR = (imageInPixels[i] & 0x00FF0000) >> 16;
        int inG = (imageInPixels[i] & 0x0000FF00) >> 8;
        int inB = (imageInPixels[i] & 0x000000FF);
        if (conditionChecker_inRinGinB) { // placeholder: replace with a real test on inR/inG/inB
            // modify the pixel here, then store the modified ARGB value
            imageOutPixels[i] = imageInPixels[i];
        } else {
            // don't modify; copy the pixel through unchanged
            imageOutPixels[i] = imageInPixels[i];
        }
    }
    imageOut.setRGB(0, 0, width, height, imageOutPixels, 0, width);
    return imageOut;
}
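For a concrete (hypothetical) illustration of how the skeleton fills in, here is a minimal variant where the placeholder condition and the "modify" step are made up for the example: pixels whose average brightness exceeds 200 are darkened, everything else is copied through unchanged.
import java.awt.image.BufferedImage;

public class PixelThresholdExample {
    public static BufferedImage darkenBrightPixels(BufferedImage imageIn) {
        int width = imageIn.getWidth();
        int height = imageIn.getHeight();
        BufferedImage imageOut = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
        int[] pixels = imageIn.getRGB(0, 0, width, height, null, 0, width);
        for (int i = 0; i < pixels.length; i++) {
            int a = (pixels[i] >> 24) & 0xFF;
            int r = (pixels[i] >> 16) & 0xFF;
            int g = (pixels[i] >> 8) & 0xFF;
            int b = pixels[i] & 0xFF;
            if ((r + g + b) / 3 > 200) { // arbitrary threshold chosen for the example
                r /= 2; g /= 2; b /= 2;  // "modify": darken bright pixels
            }
            pixels[i] = (a << 24) | (r << 16) | (g << 8) | b;
        }
        imageOut.setRGB(0, 0, width, height, pixels, 0, width);
        return imageOut;
    }
}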

The easiest way to get a JPEG into a java-readable object is the following:
BufferedImage image = ImageIO.read(new File("MyJPEG.jpg"));
BufferedImage provides methods for getting RGB values at exact pixel locations in the image (X-Y integer coordinates), so it'd be up to you to figure out how you want to store that in a single-dimensional array, but that's the gist of it.
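If you do want the pixels in a single-dimensional array, a common convention (sketched here, not the only option) is row-major order, where pixel (x, y) lands at index y * width + x:
BufferedImage image = ImageIO.read(new File("MyJPEG.jpg"));
int width = image.getWidth();
int height = image.getHeight();
int[] pixels = new int[width * height];
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        pixels[y * width + x] = image.getRGB(x, y); // packed ARGB int
    }
}
Equivalently, image.getRGB(0, 0, width, height, null, 0, width) fills such an array in a single call.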

There is a way of taking a buffered image and converting it into an integer array, where each integer in the array represents the rgb value of a pixel in the image.
int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
The interesting thing is, when an element in the integer array is edited, the corresponding pixel in the image is as well.
In order to address a pixel in the array from a set of x and y coordinates, you would use a method like this.
public void setPixel(int x, int y, int rgb) {
    pixels[y * image.getWidth() + x] = rgb;
}
Even with the multiplication and addition of coordinates, it is still faster than using the setRGB() method in the BufferedImage class.
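For completeness, the matching read accessor would follow the same indexing convention (a sketch; getPixel is not part of the original answer's code):
public int getPixel(int x, int y) {
    return pixels[y * image.getWidth() + x];
}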
EDIT:
Also keep in mind that the image's type needs to be TYPE_INT_RGB, which it isn't by default. It can be converted by creating a new image of the same dimensions with type TYPE_INT_RGB, and then using the new image's graphics object to draw the original image onto it.
public BufferedImage toIntRGB(BufferedImage image) {
    if (image.getType() == BufferedImage.TYPE_INT_RGB)
        return image;
    BufferedImage newImage = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
    newImage.getGraphics().drawImage(image, 0, 0, null);
    return newImage;
}

Related

Program can't grayscale certain images?

I am trying to create a program that applies a grayscale filter over a chosen image for my computer science class.
I found the following code in a tutorial; it demonstrates the grayscale algorithm, where the R, G, and B values of every pixel in the image are replaced with the average of the three.
import java.io.File;
import java.io.IOException;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;

public class Grayscale {
    public static void main(String args[]) throws IOException {
        BufferedImage img = null;
        File f = null;

        // read image
        try {
            f = new File("D:\\Image\\Taj.jpg");
            img = ImageIO.read(f);
        } catch (IOException e) {
            System.out.println(e);
        }

        // get image width and height
        int width = img.getWidth();
        int height = img.getHeight();

        // convert to grayscale
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int p = img.getRGB(x, y);
                int a = (p >> 24) & 0xff;
                int r = (p >> 16) & 0xff;
                int g = (p >> 8) & 0xff;
                int b = p & 0xff;
                // calculate average
                int avg = (r + g + b) / 3;
                // replace RGB value with avg
                p = (a << 24) | (avg << 16) | (avg << 8) | avg;
                img.setRGB(x, y, p);
            }
        }

        // write image
        try {
            f = new File("D:\\Image\\Output.jpg");
            ImageIO.write(img, "jpg", f);
        } catch (IOException e) {
            System.out.println(e);
        }
    } // main() ends here
} // class ends here
The problem is, the program does not properly apply the grayscale filter over certain images. For example, the code properly converts one test image to grayscale, but an image of a rainbow still shows red, green, blue, and pink after the filter is applied. Why are those colours showing with the filter over them? My understanding is that when the R, G, and B values of a pixel are the same, a gray colour should be produced?
From the JavaDoc of BufferedImage.setRGB()
"Sets a pixel in this BufferedImage to the specified RGB value. The pixel is assumed to be in the default RGB color model, TYPE_INT_ARGB, and default sRGB color space. For images with an IndexColorModel, the index with the nearest color is chosen."
To solve this, create a new BufferedImage with the required color space, the same dimensions as the original image, and write the pixels to that, not back to the original BufferedImage.
BufferedImage targetImage = new BufferedImage(img.getWidth(),
        img.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
write the pixels to this image instead...
targetImage.setRGB(x, y, p);
then save this new image..
ImageIO.write(targetImage, "jpg", f);
As a note, the more accurate way to convert a colour image to grey scale is to convert the RGB pixels to the YUV colour space and then use the luminance (Y) value, rather than the average of R, G, and B. This is because the brightnesses of R, G, and B are weighted differently by the human eye.
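As a sketch of that idea, the loop body in the question's code could compute a weighted luminance instead of the average; the 0.299/0.587/0.114 weights (BT.601) are an assumption of this example, not something from the original answer:
int p = img.getRGB(x, y);
int a = (p >> 24) & 0xff;
int r = (p >> 16) & 0xff;
int g = (p >> 8) & 0xff;
int b = p & 0xff;
// luminance: Y = 0.299 R + 0.587 G + 0.114 B (BT.601 weights, assumed here)
int lum = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
targetImage.setRGB(x, y, (a << 24) | (lum << 16) | (lum << 8) | lum);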

Converting Grayscale values from .csv to BufferedImage

I'm attempting to convert a .csv file containing grayscale values to an image using BufferedImage.
The csv is read into pixArray[] initially, in which all values are doubles.
I am attempting to use BufferedImage to create a 100x100px output image with the code
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        image.setRGB(x, y, (int) Math.round(pixArray[y]));
    }
}

File file_out = new File("output.png");
try {
    ImageIO.write(image, "png", file_out);
} catch (IOException e) {
    e.printStackTrace();
}
but all I have as output is a 100x100 black square.
I've tried alternatives to TYPE_BYTE_GRAY with no success, as well as the png format for output, and can't find what is producing this error.
It should be
int g = (int)Math.round(pixArray[y]);
image.setRGB(x,y,new Color(g,g,g).getRGB());
What your current code is doing is putting the gray value into only the lowest byte of the packed RGB int (the blue component), leaving the other colour components zero, so the resulting pixels come out (nearly) black.
Posting an alternative solution. While Jim's answer is correct and works, it is also one of the slowest* ways to put sample values into a gray scale BufferedImage.
A BufferedImage of TYPE_BYTE_GRAY doesn't need all the conversion to and from RGB colors. To put the gray values directly into the image, go through the image's raster:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
WritableRaster raster = image.getRaster();

for (int y = 0; y < height; y++) {
    int value = (int) Math.round(pixArray[y]);
    for (int x = 0; x < width; x++) {
        raster.setSample(x, y, 0, value);
    }
}
*) Slow because of creating excessive throw-away Color instances, but mostly due to color space conversion to/from sRGB color space. Probably not very noticeable in a 100x100 image, but if you try 1000x1000 or larger, you will notice.
PS: I also re-arranged the loops to loop over x in the inner loop. This is normally faster, especially when reading values, due to data locality and caching in modern CPUs. In your case, it matters mostly because you only need to compute (round, cast) the value for each row.

What order does PixelGrabber put pixels into the array in java?

What order does PixelGrabber put pixels into the array in java? Does it take the pixels along the width of the image first? Or along the height of the image first?
public static int[] convertImgToPixels(Image img, int width, int height) {
    int[] pixel = new int[width * height];
    PixelGrabber pixels = new PixelGrabber(img, 0, 0, width, height, pixel, 0, width);
    try {
        pixels.grabPixels();
    } catch (InterruptedException e) {
        throw new IllegalStateException("Interrupted Waiting for Pixels");
    }
    if ((pixels.getStatus() & ImageObserver.ABORT) != 0) {
        throw new IllegalStateException("Image Fetch Aborted");
    }
    return pixel;
}
The code example provided by the documentation has the following for loops:
for (int j = 0; j < h; j++) {
    for (int i = 0; i < w; i++) {
        handlesinglepixel(x + i, y + j, pixels[j * w + i]);
    }
}
The access pixels[j * w + i] shows that it goes along the row first, then along the columns: it grabs the pixels along the width first.
I'm pretty sure it uses row-major order, but the easiest way to check is to actually grab the pixels, set a sequence of them to a particular color (for easy identification), and then save them out to an image. If the colored strip appears vertical, then the order is column-major; otherwise it is row-major. You can use code like the following to convert the int[] back to an image:
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    // setRGB accepts the packed ARGB ints directly. Note that getData() returns
    // a *copy* of the raster, so writing to that copy would not change the image.
    image.setRGB(0, 0, width, height, pixels, 0, width);
    return image;
}
Also, I use ((DataBufferInt) img.getRaster().getDataBuffer()).getData() to quickly grab the pixels of the image. Any modifications to that int[] will be reflected in the image and vice versa. And that is row-major for sure.
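A minimal sketch of that trick (assuming the image type is TYPE_INT_RGB or TYPE_INT_ARGB; other types back their rasters with a different DataBuffer class and the cast would fail):
BufferedImage img = new BufferedImage(640, 480, BufferedImage.TYPE_INT_ARGB);
int[] data = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();

// Row-major: pixel (x, y) lives at index y * width + x,
// and writing to the array writes straight into the image.
data[20 * img.getWidth() + 10] = 0xFFFF0000; // opaque red at (10, 20)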

Java create BufferedImage with float precision

I created a map editor in Java. The problem is, I have steps for every byte value, so the map isn't smooth. Is it possible to change the BufferedImage raster data to float data and draw in float precision on it?
To answer your question, yes, you can create a BufferedImage with float precision. It is however a little unclear if this will help you solve your problem.
In any case, here's working example code for creating a BufferedImage with float precision:
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.*;

public class FloatImage {
    public static void main(String[] args) {
        // Define dimensions and layout of the image
        int w = 300;
        int h = 200;
        int bands = 4; // 4 bands for ARGB, 3 for RGB etc
        int[] bandOffsets = {0, 1, 2, 3}; // length == bands, 0 == R, 1 == G, 2 == B and 3 == A

        // Create a TYPE_FLOAT sample model (specifying how the pixels are stored)
        SampleModel sampleModel = new PixelInterleavedSampleModel(DataBuffer.TYPE_FLOAT, w, h, bands, w * bands, bandOffsets);

        // ...and data buffer (where the pixels are stored)
        DataBuffer buffer = new DataBufferFloat(w * h * bands);

        // Wrap it in a writable raster
        WritableRaster raster = Raster.createWritableRaster(sampleModel, buffer, null);

        // Create a color model compatible with this sample model/raster (TYPE_FLOAT)
        // Note that the number of bands must equal the number of color components in the
        // color space (3 for RGB) + 1 extra band if the color model contains alpha
        ColorSpace colorSpace = ColorSpace.getInstance(ColorSpace.CS_sRGB);
        ColorModel colorModel = new ComponentColorModel(colorSpace, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_FLOAT);

        // And finally create an image with this raster
        BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);

        System.out.println("image = " + image);
    }
}
For map elevation data, using a single band (bands = 1; bandOffsets = {0};) and a grayscale color space (ColorSpace.CS_GRAY) and no transparency may make more sense.
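A sketch of that single-band, grayscale variant, built the same way as the example above:
int w = 300;
int h = 200;
int bands = 1; // a single band holding the elevation/gray sample
int[] bandOffsets = {0};

SampleModel sampleModel = new PixelInterleavedSampleModel(DataBuffer.TYPE_FLOAT, w, h, bands, w * bands, bandOffsets);
DataBuffer buffer = new DataBufferFloat(w * h * bands);
WritableRaster raster = Raster.createWritableRaster(sampleModel, buffer, null);

ColorSpace gray = ColorSpace.getInstance(ColorSpace.CS_GRAY);
// no alpha band, not premultiplied
ColorModel colorModel = new ComponentColorModel(gray, false, false, Transparency.OPAQUE, DataBuffer.TYPE_FLOAT);
BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);

// Float samples can then be written directly:
raster.setSample(10, 20, 0, 0.5f);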

Removing BufferedImage pixel values and/or setting them transparent

I have been working with the Polygon class and trying to set the pixel values inside the polygon to transparent, or remove them altogether if that is possible. However, I have hit a bit of a wall, as I am storing the values as RGB ints and don't know how I would be able to make a pixel transparent/removed via this method.
Additionally, I would like to do the same thing in reverse: keep the pixels inside the polygon and delete those outside, so as to be left with only the pixels contained within the polygon. I have searched around for this before, but to no avail.
I did attempt to create an SSCCE for this to make it easier to work with and view for anyone taking the time to help. However, as it's part of a much larger programme I am working on, creating one is proving to take some time; once I have one working to better demonstrate the problem, I will edit this post.
Thank you to anyone taking the time to help me with this problem.
Below is the code I am currently using to segment the pixels that are contained within an already specified polygon. It is extremely similar to the way I set pixels outside the polygon to transparent, only with the if-statement arguments swapped around to remove a segment of the image, and with a return of newImage instead of the image-saving code, and that works perfectly. However, when I do it this way to save the pixels contained in the polygon, it doesn't save for some reason.
public void saveSegment(int tabNum, BufferedImage img) {
    segmentation = new GUI.Segmentation();
    Polygon p = createPolygon(segmentation);
    Color pixel;

    int height = img.getHeight();
    int width = img.getWidth();
    newImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);

    // loop through the image to fill the 2d array up with the segmented pixels
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // if the pixel is inside the polygon
            if (p.contains(x, y)) {
                // set pixel equal to the RGB value of the pixel being looked at
                pixel = new Color(img.getRGB(x, y));
                int r = pixel.getRed();   // red component 0...255
                int g = pixel.getGreen(); // green component 0...255
                int b = pixel.getBlue();  // blue component 0...255
                int a = pixel.getAlpha(); // alpha (transparency) component 0...255
                int col = (a << 24) | (r << 16) | (g << 8) | b;
                newImage.setRGB(x, y, col);
            } else {
                int a = 0; // alpha (transparency) component 0...255
                int col = (a << 24);
                newImage.setRGB(x, y, col);
            }
        }
    }

    try {
        // then save as image once all in correct order
        ImageIO.write(newImage, "bmp", new File("saved-Segment.bmp"));
        JOptionPane.showMessageDialog(null, "New image saved successfully");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
An easier way is to use Java2D's clipping capability:
BufferedImage cutHole(BufferedImage image, Polygon holeShape) {
    BufferedImage newImage = new BufferedImage(
        image.getWidth(), image.getHeight(), image.getType());
    Graphics2D g = newImage.createGraphics();

    Rectangle entireImage =
        new Rectangle(image.getWidth(), image.getHeight());
    Area clip = new Area(entireImage);
    clip.subtract(new Area(holeShape));

    g.clip(clip);
    g.drawImage(image, 0, 0, null);
    g.dispose();
    return newImage;
}

BufferedImage clipToPolygon(BufferedImage image, Polygon polygon) {
    BufferedImage newImage = new BufferedImage(
        image.getWidth(), image.getHeight(), image.getType());
    Graphics2D g = newImage.createGraphics();

    g.clip(polygon);
    g.drawImage(image, 0, 0, null);
    g.dispose();
    return newImage;
}
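A hypothetical usage sketch (the file names and the triangle are made up for the example). Note that the output format needs to support an alpha channel for the transparency to survive saving; PNG does, while BMP generally does not, which is also worth checking against the saving problem described in the question:
BufferedImage source = ImageIO.read(new File("input.png"));

// a made-up triangle for the example
Polygon triangle = new Polygon(
    new int[] {50, 150, 100},
    new int[] {150, 150, 50},
    3);

BufferedImage clipped = clipToPolygon(source, triangle);

// PNG keeps the alpha channel
ImageIO.write(clipped, "png", new File("clipped.png"));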
