Java Histogram Equalisation - Can't get Pixel from Image Raster - java

I have been building a histogram equalisation method. I've used this question as a foundation to build on. However, I cannot get this code to run, and Google hasn't helped me find the issue. I pass in a JPG BufferedImage object. I first display the image so I can see what I'm working with, and then process it. However, it ALWAYS fails on the line int valueBefore=img.getRaster().getPixel(x, y,iarray)[0]; and I'm not sure why. The error I get is Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1, but I cannot see why it gives this error; the picture is there and filled with pixels!
public BufferedImage hisrogramNormatlisation(BufferedImage img) {
    // To view image we're working on
    JFrame frame = new JFrame();
    frame.getContentPane().setLayout(new FlowLayout());
    frame.getContentPane().add(new JLabel(new ImageIcon(img)));
    frame.pack();
    frame.setVisible(true);

    int width = img.getWidth();
    int height = img.getHeight();
    int anzpixel = width * height;
    int[] histogram = new int[255];
    int[] iarray = new int[1];
    int i = 0;

    // Create histogram
    for (int x = 50; x < width; x++) {
        for (int y = 50; y < height; y++) {
            int valueBefore = img.getRaster().getPixel(x, y, iarray)[0];
            histogram[valueBefore]++;
            System.out.println("here");
        }
    }

    int sum = 0;
    float[] lut = new float[anzpixel];
    for (i = 0; i < 255; ++i) {
        sum += histogram[i];
        lut[i] = sum * 255 / anzpixel;
    }

    i = 0;
    for (int x = 1; x < width; x++) {
        for (int y = 1; y < height; y++) {
            int valueBefore = img.getRaster().getPixel(x, y, iarray)[0];
            int valueAfter = (int) lut[valueBefore];
            iarray[0] = valueAfter;
            img.getRaster().setPixel(x, y, iarray);
            i = i + 1;
        }
    }
    return img;
}
Error description:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1
at java.awt.image.ComponentSampleModel.getPixel(ComponentSampleModel.java:n)
at java.awt.image.Raster.getPixel(Raster.java:n)
at MainApp.hisrogramNormatlisation(MainApp.java: * line described *)
at MainApp.picture(MainApp.java:n)
at MainApp.<init>(Main.java:n)
at MainApp.main(Main.java:n)

The stack trace you posted says your out of range index is 1.
The exception isn't thrown where you think it is.
getPixel(int x, int y, int[] iarray) fills iarray with the intensity values of the pixel. For an RGB image there are at least three intensity values, one per channel; for RGB with alpha there are four. Your iarray has size 1, so when the raster tries to store the additional values it throws an IndexOutOfBoundsException.
Increase the size of iarray and the exception will be gone.
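For example, a minimal sketch (reusing the question's width, height and histogram variables) that sizes the buffer from the raster's band count:
// Sketch only: size the buffer from the raster's band count,
// so getPixel has room for every channel (3 for RGB, 4 for ARGB).
int[] iarray = new int[img.getRaster().getNumBands()];
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int valueBefore = img.getRaster().getPixel(x, y, iarray)[0]; // first band only
        histogram[valueBefore]++;   // note: an 8-bit band needs 256 histogram slots
    }
}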

Don't use getPixel(), but getSample().
So your code would be: final int valueBefore = img.getRaster().getSample(x, y, 0); or even histogram[img.getRaster().getSample(x, y, 0)]++;
Btw, you may want to check the image type first in order to determine the number of channels/bands and do this process for each channel.
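A rough sketch of that per-band version (assuming 8-bit samples, hence 256 bins) might look like:
// Sketch only: one histogram per band, using getSample instead of getPixel.
int bands = img.getRaster().getNumBands();    // 3 for RGB, 4 for ARGB, 1 for grayscale
int[][] histograms = new int[bands][256];     // 8-bit samples assumed
for (int b = 0; b < bands; b++) {
    for (int x = 0; x < img.getWidth(); x++) {
        for (int y = 0; y < img.getHeight(); y++) {
            histograms[b][img.getRaster().getSample(x, y, b)]++;
        }
    }
}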

Related

Run each pixel through a rectangular PGraphics (Processing)

I have a square PGraphics where I want to read the color of each pixel. I do this with the following lines:
size(1080,1080, P2D);
pg = createGraphics(width, height, P2D);
…then in draw:
pg.loadPixels();
for (int y = 0; y < pixel; y++) {
  for (int x = 0; x < pixel; x++) {
    int ix = int(size * x);
    int iy = int(size * y);
    color c = pg.pixels[iy * width + ix];
However, if I now change the size so that the PGraphics or my sketch window is no longer square, I get an ArrayIndexOutOfBoundsException. I know that you can also read the pixels from rectangular sketches, for example with:
pixels[x + y * width] = …etc
That works fine – but what am I missing above if I change the size to, let's say, size(1920, 1080, P2D)?
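For what it's worth, a minimal sketch of that row-major indexing applied to a rectangular buffer (ignoring the pixel/size variables from the excerpt and using the buffer's own width and height) would be:
pg.loadPixels();
for (int y = 0; y < pg.height; y++) {
  for (int x = 0; x < pg.width; x++) {
    color c = pg.pixels[x + y * pg.width];  // index with the buffer's width, not the sketch's
    // ... use c ...
  }
}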

Java Convolution

Hi, I am in need of some help. I need to write a convolution method from scratch that takes the following inputs: an int[][] kernel and a BufferedImage inputImage. I can assume that the kernel has size 3x3.
My approach is to do the follow:
convolve inner pixels
convolve corner pixels
convolve outer pixels
In the program that I will post below I believe I convolve the inner pixels, but I am a bit lost at how to convolve the corner and outer pixels. I am aware that the corner pixels are at (0,0), (width-1,0), (0, height-1) and (width-1,height-1). I think I know how to approach the problem but I'm not sure how to execute it in writing. Please be aware that I am very new to programming :/ Any assistance will be very helpful to me.
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;

public class Problem28 {
    // maximum value of a sample
    private static final int MAX_VALUE = 255;
    // minimum value of a sample
    private static final int MIN_VALUE = 0;

    public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
    }

    public BufferedImage convolveInner(double center, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // inner pixels
        for (int x = 1; x < width - 1; x++) {
            for (int y = 1; y < height - 1; y++) {
                // get pixels at x, y
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                int innerred = (int) center * red;
                int innergreen = (int) center * green;
                int innerblue = (int) center * blue;
                Color newPixelColor = new Color(innerred, innergreen, innerblue);
                int newRgbvalue = newPixelColor.getRGB();
                inputImage1.setRGB(x, y, newRgbvalue);
            }
        }
        return inputImage1;
    }

    public BufferedImage convolveEdge(double edge, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage2 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // inner pixels
        for (int x = 0; x < width - 1; x++) {
            for (int y = 0; y < height - 1; y++) {
                // get pixels at x, y
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                int innerred = (int) edge * red;
                int innergreen = (int) edge * green;
                int innerblue = (int) edge * blue;
                Color newPixelColor = new Color(innerred, innergreen, innerblue);
                int newRgbvalue = newPixelColor.getRGB();
                inputImage2.setRGB(x, y, newRgbvalue);
            }
        }
        return inputImage2;
    }

    public BufferedImage convolveCorner(double corner, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage3 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // inner pixels
        for (int x = 0; x < width - 1; x++) {
            for (int y = 0; y < height - 1; y++) {
                // get pixels at x, y
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                int innerred = (int) corner * red;
                int innergreen = (int) corner * green;
                int innerblue = (int) corner * blue;
                Color newPixelColor = new Color(innerred, innergreen, innerblue);
                int newRgbvalue = newPixelColor.getRGB();
                inputImage3.setRGB(x, y, newRgbvalue);
            }
        }
        return inputImage3;
    }

    public static void main(String[] args) {
        DrawingKit dk = new DrawingKit("Compositor", 1000, 1000);
        BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
        Problem28 c = new Problem28();
        BufferedImage p5 = c.convolve();
        dk.drawPicture(p5, 0, 100);
    }
}
I changed the code a bit, but the output comes out as black. What did I do wrong?
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;

public class Problem28 {
    // maximum value of a sample
    private static final int MAX_VALUE = 255;
    // minimum value of a sample
    private static final int MIN_VALUE = 0;

    public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // for every pixel
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                double gray = 0;
                // multiply every value of kernel with corresponding image pixel
                for (int i = 0; i < 3; i++) {
                    for (int j = 0; j < 3; j++) {
                        int imageX = (x - 3 / 2 + i + width) % width;
                        int imageY = (x - 3 / 2 + j + height) % height;
                        int RGB = inputImage.getRGB(imageX, imageY);
                        int GRAY = (RGB) & 0xff;
                        gray += (GRAY * kernel[i][j]);
                    }
                }
                int out;
                out = (int) Math.min(Math.max(gray * 1, 0), 255);
                inputImage1.setRGB(x, y, new Color(out, out, out).getRGB());
            }
        }
        return inputImage1;
    }

    public static void main(String[] args) {
        int[][] newArray = {{1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}};
        DrawingKit dk = new DrawingKit("Problem28", 1000, 1000);
        BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
        Problem28 c = new Problem28();
        BufferedImage p2 = c.convolve(newArray, p1);
        dk.drawPicture(p2, 0, 100);
    }
}
Welcome ewuzz! I wrote a convolution using CUDA about a week ago, and the majority of my experience is with Java, so I feel qualified to provide advice for this problem.
Rather than writing all of the code for you, the best way to approach a program of this size is to discuss its individual elements. You mentioned you are very new to programming. As the programs you write become more complex, it's essential to write small working snippets before combining them into one large program (or to add snippets iteratively). With that said, it's already apparent you're trying to debug a ~100 line program in one go, and this approach will cost you time in most cases.
The first point to discuss is the general approach you mentioned. If you think about the program, what is the simplest and most repeated step? Obviously it is the kernel/mask step, so we can start from there. When you convolve each pixel, you perform a similar operation regardless of its position (corner, edge, inside). While special steps are necessary for the edge cases, they share the same underlying logic. If you write code for each of these cases separately, you will have to update the code in multiple (three) places with each adjustment, and it will make the whole program more difficult to grasp.
To support the point above: when I pasted your code into IntelliJ, it highlighted the duplicated code across the three convolve methods, a clear warning sign that the same logic lives in multiple places.
The concrete way to fix this problem is to combine the three convolve methods into a single one and use if statements for edge-cases as necessary.
Our pseudocode with this change:
convolve(kernel, inputImage)
    for each pixel in the image
        convolve the single pixel and check edge cases
    endfor
end
That seems pretty basic, right? If we can successfully check the edge cases, then this extremely simple logic will work. The reason I left it so general above is to show that "convolve the single pixel and check edge cases" is one logically grouped step. That makes it a good candidate for extracting into a method, which could look like:
private void convolvePixel(int x, int y, int[][] kernel, BufferedImage input, BufferedImage output)
Now, to implement the method above, we will need to break it into a few steps, which we may then break into more steps if necessary. We'll need to look at the input image, accumulate the kernel-weighted values of the neighbouring pixels where possible, and then set the result in the output image. For brevity I will only write pseudocode from here.
convolvePixel(x, y, kernel, input, output)
    accumulation = 0
    for each row of kernel applicable pixels
        for each column of kernel applicable pixels
            if this neighboring pixel location is within the image boundaries then
                input color = get the color at this neighboring pixel
                adjusted value = input color * relative kernel mask value
                accumulation += adjusted value
            else
                // handle this somehow, mentioned below
            endif
        endfor
    endfor
    set output pixel as accumulation, assuming this convolution method does not require normalization
end
The pseudocode above is already relatively long. When implementing, you could extract methods for the if and else cases, but you should be fine with this structure.
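To make that concrete, here is a rough Java sketch of the pseudocode (illustrative only; it reuses the signature from above, reads the low byte as the gray value like your code does, and simply skips out-of-bounds neighbours):
// Illustrative sketch of the pseudocode above, not a finished solution.
// Out-of-bounds neighbours are simply skipped here ("crop" handling).
private void convolvePixel(int x, int y, int[][] kernel, BufferedImage input, BufferedImage output) {
    int accumulation = 0;
    for (int i = 0; i < 3; i++) {                    // kernel row
        for (int j = 0; j < 3; j++) {                // kernel column
            int nx = x + i - 1;                      // neighbouring pixel coordinates
            int ny = y + j - 1;
            if (nx >= 0 && nx < input.getWidth() && ny >= 0 && ny < input.getHeight()) {
                int gray = input.getRGB(nx, ny) & 0xff;   // low byte as gray, as in your code
                accumulation += gray * kernel[i][j];
            }
            // else: neighbour lies outside the image and is ignored
        }
    }
    int out = Math.min(Math.max(accumulation, 0), 255);   // clamp; no normalisation applied
    output.setRGB(x, y, new Color(out, out, out).getRGB());
}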
There are a few ways to handle the edge case in the else above. Your assignment probably specifies a requirement, but the fancy way is to tile around and pretend there's another instance of the same image next to the input image. Wikipedia explains three possibilities:
Extend - The nearest border pixels are conceptually extended as far as necessary to provide values for the convolution. Corner pixels are extended in 90° wedges. Other edge pixels are extended in lines.
Wrap - (The method I mentioned) The image is conceptually wrapped (or tiled) and values are taken from the opposite edge or corner; a small indexing snippet follows this list.
Crop - Any pixel in the output image which would require values from beyond the edge is skipped. This method can result in the output image being slightly smaller, with the edges having been cropped.
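If you go with the wrap approach, the neighbour coordinates from the sketch above can simply be wrapped with a modulo (again just a sketch, using the same variable names):
// Sketch: wrap neighbour coordinates around the image edges ("tiling").
int nx = (x + i - 1 + input.getWidth()) % input.getWidth();
int ny = (y + j - 1 + input.getHeight()) % input.getHeight();
int gray = input.getRGB(nx, ny) & 0xff;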
A huge part of becoming a successful programmer is researching on your own. If you read about these methods, work through them on paper, run your convolvePixel method on single pixels, and compare the output to your results by hand, you will find success.
Summary:
Start by cleaning up your code before anything else.
Group the same code into one place.
Hammer out a small chunk (convolving a single pixel). Print out the result and the input values and verify they are correct.
Draw out edge/corner cases.
Read about ways to solve edge cases and decide what fits your needs.
Try implementing the else case through the same form of testing.
Call your convolveImage method with the loop, using the convolvePixel method you know works. Done!
You can look up pseudocode and even specific code to solve the exact problem, so I focused on providing general insight and strategies I have developed through my degree and personal experience. Good luck and please let me know if you want to discuss anything else in the comments below.
Java code for multiple blurs via convolution.

Pixelated Video with Processing

I'm trying to load a video and then display it in a pixelated manner. It worked once after loading for a very long time, but then it stopped working: just a black screen, nothing comes up, and there is no error message. I wonder what's going wrong. Thanks.
import processing.video.*;

Movie movie;
int videoScale = 8;
int cols, rows;

void setup() {
  size(640, 360);
  background(0);
  movie = new Movie(this, "movie.mp4");
  movie.loop();
  cols = width / videoScale;
  rows = height / videoScale;
}

void draw() {
  movie.loadPixels();
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      int x = i * videoScale;
      int y = j * videoScale;
      color c = movie.pixels[i + j * movie.width];
      fill(c);
      noStroke();
      rect(x, y, videoScale, videoScale);
    }
  }
}

// Called every time a new frame is available to read
void movieEvent(Movie movie) {
  movie.read();
}
You may be sampling from the wrong place here:
color c = movie.pixels[i + j * movie.width];
First off, i is your column counter, which is the x dimension, and j is your row counter, the y dimension.
Secondly, you probably want to sample at the same scale, and therefore need to multiply by videoScale. You already have the x,y variables for that, so try sampling like this:
color c = movie.pixels[y * movie.width + x];
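Put together, the sampling loop in draw() would look roughly like this (just a sketch built from your own variables; it assumes the movie frame is at least as large as the sketch window):
void draw() {
  movie.loadPixels();
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      int x = i * videoScale;
      int y = j * videoScale;
      // sample the movie at the scaled position, row by row
      color c = movie.pixels[y * movie.width + x];
      fill(c);
      noStroke();
      rect(x, y, videoScale, videoScale);
    }
  }
}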
Alternatively, you can use a PGraphics instance as a frame buffer to draw into at a smaller scale (resample), then draw the small buffer at a larger scale:
import processing.video.*;

Movie movie;
int videoScale = 8;
int cols, rows;
PGraphics resized;

void setup() {
  size(640, 360);
  background(0);
  noSmooth(); // remove aliasing
  movie = new Movie(this, "transit.mov");
  movie.loop();
  cols = width / videoScale;
  rows = height / videoScale;
  // set up a smaller buffer to draw into
  resized = createGraphics(cols, rows);
  resized.beginDraw();
  resized.noSmooth(); // remove aliasing
  resized.endDraw();
}

void draw() {
  // draw the video resized smaller into the buffer
  resized.beginDraw();
  resized.image(movie, 0, 0, cols, rows);
  resized.endDraw();
  // draw the small buffer resized bigger
  image(resized, 0, 0, movie.width, movie.height);
}

// Called every time a new frame is available to read
void movieEvent(Movie movie) {
  movie.read();
}

Some RGB values seem not to be convertible into a Color Object

I have this kind of method:
public void SaveImageOntoObject(String filepath) throws IOException {
    BufferedImage image = ImageIO.read(getClass().getResourceAsStream(filepath));
    this.width = image.getWidth();
    this.height = image.getHeight();
    this.ResetPointInformation();
    for (int row = 0; row < width; row++) {
        for (int col = 0; col < height; col++) {
            this.PointInformation[row][col] = new Color(image.getRGB(col, row));
        }
    }
}
It takes the filepath of an image as input, converts the RGB value of each pixel into a Color object, and then stores it in the two-dimensional array PointInformation of the object the method was called on.
Now to my problem:
While some pictures work like a charm, others leave me with the following error:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds!
at sun.awt.image.ByteInterleavedRaster.getDataElements(ByteInterleavedRaster.java:318)
at java.awt.image.BufferedImage.getRGB(BufferedImage.java:888)
at Drawing.Object2D.SaveImageOntoObject(Object2D.java:75) (that's the class my method is called on)
Why is that? It seems like Java is not able to convert certain RGB values into Colors?
Could you tell me how I can make it work?
The error message actually says it: "index out of bounds". It seems that you confused your coordinates and their bounds. getRGB takes x (range 0 .. width-1) as its first parameter and y (range 0 .. height-1) as its second.
this.width = image.getWidth();
this.height = image.getHeight();
for (int row = 0; row < height; row++) {   // swapped the ...
    for (int col = 0; col < width; col++) { // ... bounds
        this.PointInformation[row][col] = new Color(image.getRGB(col, row));
    }
}
Your first example image has width = height, so the problem doesn't show there.

Taking a picture as input, making it greyscale, and then outputting it

I'm attempting to take a picture as input, manipulate it (I specifically want to make it greyscale), and then output the new image. This is a snippet of the code that I'm editing in order to do so, but I'm getting stuck. Any ideas of what I can change or do next would be greatly appreciated!
public boolean recieveFrame(Image frame) {
    int width = frame.width();
    int height = frame.height();
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            Color c1 = frame.get(i, j);
            double greyScale = (double) ((Color.red * .3) + (Color.green * .59) + (Color.blue * .11));
            Color newGrey = Color.greyScale(greyScale);
            frame.set(i, j, newGrey);
        }
    }
    boolean shouldStop = displayImage(frame);
    return shouldStop;
}
I'm going to try to stick as close as possible to what you already have. So, I'll assume that you are looking for how to do pixel-level processing on an Image, rather than just looking for a technique that happens to work for converting to greyscale.
The first step is that you need the image to be a BufferedImage. This is what you get by default from ImageIO, but if you have some other type of image, you can create a BufferedImage and paint the other image into it first:
BufferedImage buffer = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics2D g = buffer.createGraphics();
g.drawImage(image, 0, 0, null); // null ImageObserver is fine for an already-loaded image
g.dispose();
Then, you can operate on the pixels like this:
public void makeGrey(BufferedImage image) {
    for (int x = 0; x < image.getWidth(); ++x) {
        for (int y = 0; y < image.getHeight(); ++y) {
            Color c1 = new Color(image.getRGB(x, y));
            int grey = (int) (c1.getRed() * 0.3
                    + c1.getGreen() * 0.59
                    + c1.getBlue() * .11
                    + .5);
            Color newGrey = new Color(grey, grey, grey);
            image.setRGB(x, y, newGrey.getRGB());
        }
    }
}
Note that this code is horribly slow. A much faster option is to extract all the pixels from the BufferedImage into an int[], operate on that, and then set it back into the image. This uses the other versions of the setRGB()/getRGB() methods that you'll find in the javadoc.
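For example, a rough sketch of that bulk approach, using the array variants of getRGB()/setRGB() (the method name here is just illustrative):
// Sketch only: fetch all pixels at once, convert in the array, write them back.
public static void makeGreyFast(BufferedImage image) {
    int w = image.getWidth();
    int h = image.getHeight();
    int[] pixels = image.getRGB(0, 0, w, h, null, 0, w);   // packed ARGB values for the whole image
    for (int i = 0; i < pixels.length; i++) {
        int rgb = pixels[i];
        int r = (rgb >> 16) & 0xff;
        int g = (rgb >> 8) & 0xff;
        int b = rgb & 0xff;
        int grey = (int) (r * 0.3 + g * 0.59 + b * 0.11 + 0.5);
        pixels[i] = (rgb & 0xff000000) | (grey << 16) | (grey << 8) | grey;  // keep the alpha byte
    }
    image.setRGB(0, 0, w, h, pixels, 0, w);                // write the whole array back
}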
