I'm looping over an image and summing the values of all its pixels to create an integral image. To keep the latest total easily available, I created the the_sum variable, to which I add the value of each pixel.
If you know how an integral image works, you know that every pixel in such an image contains the sum of all the pixels before it plus its own value.
Hence:
integral_image[x][y][0] = (the_sum[0] += (pixel & 0x00FF0000) >> 16);
I increase the sum and assign it to the current pixel. The NetBeans IDE, however, warns me that I'm not reading from the_sum.
Something in the algorithm may be broken, but I'm not sure what. Is my approach wrong, or is this a false positive reported by NetBeans?
To avoid misunderstanding, this is the whole method:
/* Generate an integral image. Every pixel of such an image contains the sum of the colors of all
the pixels before it plus its own.
*/
public static double[][][] integralImage(BufferedImage image) {
int w = image.getWidth();
int h = image.getHeight();
double[][][] integral_image = new double[w][h][3];
double[] the_sum = new double[3];
for (int x = 0; x < w; x++) {
for (int y = 0; y < h; y++) {
int pixel = image.getRGB(x, y);
integral_image[x][y][0] = (the_sum[0] += (pixel & 0x00FF0000) >> 16);
integral_image[x][y][1] = (the_sum[1] += (pixel & 0x0000FF00) >> 8);
integral_image[x][y][2] = (the_sum[2] += pixel & 0x000000FF);
}
}
return integral_image;
}
Yes, += returns the newly assigned value. It is a false positive from NetBeans.
At run time, the result of the assignment expression is the value of the variable after the assignment has occurred.
http://docs.oracle.com/javase/specs/jls/se8/html/jls-15.html#jls-15.26
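A minimal, self-contained example (mine, not from the original post) showing that a compound assignment evaluates to the stored value:
public class AssignmentValueDemo {
    public static void main(String[] args) {
        double[] theSum = new double[1];
        // The compound assignment updates theSum[0] AND evaluates
        // to the new value, so it can be assigned onward in one step.
        double copy = (theSum[0] += 42.0);
        System.out.println(copy);      // 42.0
        System.out.println(theSum[0]); // 42.0
    }
}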
The problem: I am trying to identify plants in a drone image. The plants in the image overlap and are of different shapes and sizes. I am unsure where to look or what kinds of methods to use for this, and I am looking for suggestions.
I have taken snippets from a full image of the fields and attempted to: use convolutional neural networks to train an XML file that was used in a Java program; apply 3x3 convolution matrices such as sharpen and edge detection (specifically focused on Sobel and Laplace); and identify each plant by taking its RGB values and greying out all others. I have focused my efforts on the third method, identifying their individual RGB values, but this is difficult because the plants all have different values.
This is the current code I use to scan and remove irrelevant RGB values:
I only store red and blue because, funnily enough, the green values of the plants and of the background are nearly identical.
Here is an image of the field:
I also have a few other images of the ways I have tried so far:
The result of using the Sobel operator:
//important data
int pixX = 330; //resolution of image
int pixY = 370;
int pixT = pixX * pixY; //total amount of pixels
int xloc = 0;//used to locate pixels to remove
int yloc = 0;
File test = new File("FILE.png");
int x = 0, y = 0; //Current XY values
int colorsplit[][] = new int[2][pixT]; //red and blue values will be stored in this 2D array
BufferedImage image = null;
try {
image = ImageIO.read(test); //read the image once, before the loop (it was being re-read on every iteration)
for (int c = 0; c < pixT - 1; c = c + 2) { //I use c+2 to jump 2 pixels to make the process faster
int clr = image.getRGB(x, y); //packed ARGB value of the pixel
int red = (clr & 0x00ff0000) >> 16; //bit shifting for red values
int blue = clr & 0x000000ff; //Blue values
for (int n = 0; n < 4; n++) { //store the same red/blue values for two side-by-side pixels (efficiency measure)
switch (n) {//switch to store Red Blue codes
case 0:
colorsplit[0][c] = red;//store
break;
case 1:
colorsplit[1][c] = blue;//store
break;
case 2:
colorsplit[0][c + 1] = red;//store
break;
case 3:
colorsplit[1][c + 1] = blue;//store
}//end switch
}//end for (n)
x = x + 2;
if (x == pixX) {//Going up in XY values to cover all values
x = 0;
y++;
}//end if
}//end for
...//end of try, catch IOException
for (int c = 0; c < pixT; c++) { //Starting to identify redundant pixels
if (colorsplit[0][c] > 200 || colorsplit[1][c] > 100) { //parameters of redundant pixels
xloc = c % pixX;
yloc = c / pixX;//locating XY pixels
image.setRGB(xloc, yloc, 0); //Setting redundant pixels to black
System.out.println(xloc + "," + yloc + " set"); //confirmation text
}//end if
}//end for
I do not get many errors; the problem is more that the program does not work as intended.
Edit: Turns out most things were fine, but one thing I didn't do was combine multiple identification techniques. I ended up doing an RGB reduction to keep only the R values, followed by a Sobel operator, and then pixel-by-pixel analysis to filter out the redundant pixels.
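For reference, here is a rough sketch of that final pipeline. The file names, the threshold of 100, and the class name are placeholders of mine, not values from the original program:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class PlantPipelineSketch {
    public static void main(String[] args) throws IOException {
        BufferedImage src = ImageIO.read(new File("field.png")); // placeholder file name
        int w = src.getWidth(), h = src.getHeight();

        // Step 1: RGB reduction - keep only the red channel.
        int[][] red = new int[w][h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                red[x][y] = (src.getRGB(x, y) >> 16) & 0xFF;

        // Step 2: 3x3 Sobel operator on the red channel (kernel row = y offset, column = x offset).
        int[][] gx = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
        int[][] gy = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int x = 1; x < w - 1; x++) {
            for (int y = 1; y < h - 1; y++) {
                int sx = 0, sy = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        sx += gx[j + 1][i + 1] * red[x + i][y + j];
                        sy += gy[j + 1][i + 1] * red[x + i][y + j];
                    }
                }
                int mag = Math.min(255, (int) Math.hypot(sx, sy));
                // Step 3: pixel-by-pixel filtering - blank out weak edge responses.
                int v = (mag > 100) ? mag : 0; // 100 is an illustrative threshold
                out.setRGB(x, y, (v << 16) | (v << 8) | v);
            }
        }
        ImageIO.write(out, "png", new File("field_edges.png"));
    }
}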
Hi, I am in need of some help. I need to write a convolution method from scratch that takes the following inputs: an int[][] kernel and a BufferedImage inputImage. I can assume that the kernel has size 3x3.
My approach is to do the following:
convolve inner pixels
convolve corner pixels
convolve outer pixels
In the program that I will post below, I believe I convolve the inner pixels, but I am a bit lost on how to convolve the corner and outer pixels. I am aware that the corner pixels are at (0,0), (width-1,0), (0,height-1) and (width-1,height-1). I think I know how to approach the problem, but I'm not sure how to execute it in writing. Please be aware that I am very new to programming :/ Any assistance will be very helpful to me.
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;
public class Problem28 {
// maximum value of a sample
private static final int MAX_VALUE = 255;
//minimum value of a sample
private static final int MIN_VALUE = 0;
public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
// TODO: combine the inner, edge, and corner passes here; returning null for now so the class compiles
return null;
}
public BufferedImage convolveInner(double center, BufferedImage inputImage) {
int width = inputImage.getWidth();
int height = inputImage.getHeight();
BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//inner pixels
for (int x = 1; x < width - 1; x++) {
for (int y = 1; y < height - 1; y ++) {
//get pixels at x, y
int colorValue = inputImage.getRGB(x, y);
Color pixelColor = new Color(colorValue);
int red = pixelColor.getRed();
int green = pixelColor.getGreen();
int blue = pixelColor.getBlue();
int innerred = (int) (center * red); //parentheses so the cast applies to the product, not just 'center'
int innergreen = (int) (center * green);
int innerblue = (int) (center * blue);
Color newPixelColor = new Color(innerred, innergreen, innerblue);
int newRgbvalue = newPixelColor.getRGB();
inputImage1.setRGB(x, y, newRgbvalue);
}
}
return inputImage1;
}
public BufferedImage convolveEdge(double edge, BufferedImage inputImage) {
int width = inputImage.getWidth();
int height = inputImage.getHeight();
BufferedImage inputImage2 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//inner pixels
for (int x = 0; x < width - 1; x++) {
for (int y = 0; y < height - 1; y ++) {
//get pixels at x, y
int colorValue = inputImage.getRGB(x, y);
Color pixelColor = new Color(colorValue);
int red = pixelColor.getRed();
int green = pixelColor.getGreen();
int blue = pixelColor.getBlue();
int innerred = (int) (edge * red); //parentheses so the cast applies to the product
int innergreen = (int) (edge * green);
int innerblue = (int) (edge * blue);
Color newPixelColor = new Color(innerred, innergreen, innerblue);
int newRgbvalue = newPixelColor.getRGB();
inputImage2.setRGB(x, y, newRgbvalue);
}
}
return inputImage2;
}
public BufferedImage convolveCorner(double corner, BufferedImage inputImage) {
int width = inputImage.getWidth();
int height = inputImage.getHeight();
BufferedImage inputImage3 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//inner pixels
for (int x = 0; x < width - 1; x++) {
for (int y = 0; y < height - 1; y ++) {
//get pixels at x, y
int colorValue = inputImage.getRGB(x, y);
Color pixelColor = new Color(colorValue);
int red = pixelColor.getRed();
int green = pixelColor.getGreen();
int blue = pixelColor.getBlue();
int innerred = (int) (corner * red); //parentheses so the cast applies to the product
int innergreen = (int) (corner * green);
int innerblue = (int) (corner * blue);
Color newPixelColor = new Color(innerred, innergreen, innerblue);
int newRgbvalue = newPixelColor.getRGB();
inputImage3.setRGB(x, y, newRgbvalue);
}
}
return inputImage3;
}
public static void main(String[] args) {
DrawingKit dk = new DrawingKit("Compositor", 1000, 1000);
BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
Problem28 c = new Problem28();
int[][] kernel = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}}; //placeholder identity kernel so the call compiles
BufferedImage p5 = c.convolve(kernel, p1);
dk.drawPicture(p5, 0, 100);
}
}
I changed the code a bit, but the output comes out as black. What did I do wrong?
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;
public class Problem28 {
// maximum value of a sample
private static final int MAX_VALUE = 255;
//minimum value of a sample
private static final int MIN_VALUE = 0;
public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
int width = inputImage.getWidth();
int height = inputImage.getHeight();
BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//for every pixel
for (int x = 0; x < width; x ++) {
for (int y = 0; y < height; y ++) {
int colorValue = inputImage.getRGB(x,y);
Color pixelColor = new Color(colorValue);
int red = pixelColor.getRed();
int green = pixelColor.getGreen();
int blue = pixelColor.getBlue();
double gray = 0;
//multiply every value of kernel with corresponding image pixel
for (int i = 0; i < 3; i ++) {
for (int j = 0; j < 3; j ++) {
int imageX = (x - 3/2 + i + width) % width;
int imageY = (x -3/2 + j + height) % height;
int RGB = inputImage.getRGB(imageX, imageY);
int GRAY = (RGB) & 0xff;
gray += (GRAY*kernel[i][j]);
}
}
int out;
out = (int) Math.min(Math.max(gray * 1, 0), 255);
inputImage1.setRGB(x, y, new Color(out,out,out).getRGB());
}
}
return inputImage1;
}
public static void main(String[] args) {
int[][] newArray = {{1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}};
DrawingKit dk = new DrawingKit("Problem28", 1000, 1000);
BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
Problem28 c = new Problem28();
BufferedImage p2 = c.convolve(newArray, p1);
dk.drawPicture(p2, 0, 100);
}
}
Welcome ewuzz! I wrote a convolution using CUDA about a week ago, and the majority of my experience is with Java, so I feel qualified to offer advice on this problem.
Rather than writing all of the code for you, the best way to solve this large problem is to discuss the individual elements. You mentioned you are very new to programming. As the programs you write become more complex, it's essential to write small working snippets before combining them into one large successful program (or to add snippets iteratively). With this said, it's already apparent you're trying to debug a ~100-line program, and that approach will cost you time in most cases.
The first point to discuss is the general approach you mentioned. If you think about the program, what is the simplest and most repeated step? Obviously it is the kernel/mask step, so we can start from there. When you convolve each pixel, you perform a similar operation regardless of its position (corner, edge, inside). While there are special steps necessary for these edge cases, they share the same underlying steps. If you write code for each of these cases separately, you will have to update the code in multiple (three) places with each adjustment, and it will make the whole program more difficult to grasp.
To support my point above, here's what happened when I pasted your code into IntelliJ: its duplicate-code inspection immediately flagged the same code appearing in multiple places, a (yellow) red flag.
The concrete way to fix this problem is to combine the three convolve methods into a single one and use if statements for edge-cases as necessary.
Our pseudocode with this change:
convolve(kernel, inputImage)
for each pixel in the image
convolve the single pixel and check edge cases
endfor
end
That seems pretty basic, right? If we can successfully check the edge cases, then this extremely simple logic will work. The reason I left it so general above is to show how "convolve the single pixel and check edge cases" is a logically grouped step. That makes it a good candidate for extracting into a method, which could look like:
private void convolvePixel(int x, int y, int[][] kernel, BufferedImage input, BufferedImage output)
Now, to implement the method above, we will need to break it into a few steps, which we may then break into more steps if necessary. We'll need to look at the input image, accumulate the values for each pixel using the kernel where possible, and then set the result in the output image. For brevity I will only write pseudocode from here.
convolvePixel(x, y, kernel, input, output)
accumulation = 0
for each row of kernel applicable pixels
for each column of kernel applicable pixels
if this neighboring pixel location is within the image boundaries then
input color = get the color at this neighboring pixel
adjusted value = input color * relative kernel mask value
accumulation += adjusted value
else
//handle this somehow, mentioned below
endif
endfor
endfor
set output pixel as accumulation, assuming this convolution method does not require normalization
end
The pseudocode above is already relatively long. When implementing it, you could write separate methods for the if and else cases, but you should be fine with this structure.
There are a few ways to handle the edge case in the else branch above. Your assignment probably specifies a requirement, but the fancy way is to tile around and pretend there's another instance of the same image next to this input image. Wikipedia describes three possibilities:
Extend - The nearest border pixels are conceptually extended as far as necessary to provide values for the convolution. Corner pixels are extended in 90° wedges. Other edge pixels are extended in lines.
Wrap - (The method I mentioned) The image is conceptually wrapped (or tiled) and values are taken from the opposite edge or corner.
Crop - Any pixel in the output image which would require values from beyond the edge is skipped. This method can result in the output image being slightly smaller, with the edges having been cropped.
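For concreteness, here is one possible Java rendering of convolvePixel using the Wrap strategy. The gray-extraction and clamping details are assumptions on my part, and the method is meant to be dropped into the Problem28 class from the question (it reuses its MAX_VALUE/MIN_VALUE constants):
// A sketch of the pseudocode above, assuming a 3x3 kernel and an
// effectively grayscale image, as in the original code.
private static void convolvePixel(int x, int y, int[][] kernel,
                                  BufferedImage input, BufferedImage output) {
    int width = input.getWidth();
    int height = input.getHeight();
    int accumulation = 0;
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            // Wrap: out-of-bounds neighbors come from the opposite edge.
            int imageX = (x - 1 + i + width) % width;
            int imageY = (y - 1 + j + height) % height;
            int gray = input.getRGB(imageX, imageY) & 0xFF; // low byte as gray
            accumulation += gray * kernel[i][j];
        }
    }
    // Clamp to [MIN_VALUE, MAX_VALUE] and write an opaque gray pixel.
    int out = Math.min(MAX_VALUE, Math.max(MIN_VALUE, accumulation));
    output.setRGB(x, y, 0xFF000000 | (out << 16) | (out << 8) | out);
}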
A huge part of becoming a successful programmer is researching on your own. If you read about these methods, work through them on paper, run your convolvePixel method on single pixels, and compare the output to your results by hand, you will find success.
Summary:
Start by cleaning up your code before anything else.
Group the same code into one place.
Hammer out a small chunk (convolving a single pixel). Print out the result and the input values and verify they are correct.
Draw out edge/corner cases.
Read about ways to solve edge cases and decide what fits your needs.
Try implementing the else case through the same form of testing.
Call your convolve method with the loop, using the convolvePixel method you know works. Done!
You can look up pseudocode and even specific code to solve the exact problem, so I focused on providing general insight and strategies I have developed through my degree and personal experience. Good luck and please let me know if you want to discuss anything else in the comments below.
This is part of a project for my computer science class. One of the tasks is to take an image and reflect it. I've already initialized a picture called image. When I run this method, instead of reflecting the picture, it mirrors it.
public void reflect()
{
//Creating a for loop to get all of the x values for the image object
for(int x = 0; x < image.getWidth(); x++)
{
//Creating a nested for loop to get all of the y values for each x value
for(int y = 0; y < image.getHeight(); y++)
{
//Getting a pixel object for the given x and y value
Pixel pixelObj = image.getPixel(x, y);
//I'm pretty sure this next line is where I'm screwing up.
//It's probably really simple, but I can't figure it out.
Pixel newPixel = image.getPixel(image.getWidth()-x-1, y);
//This sets the color values of the new pixel to the ones of the old pixel
newPixel.setRed(pixelObj.getRed());
newPixel.setGreen(pixelObj.getGreen());
newPixel.setBlue(pixelObj.getBlue());
}
}
image.show();
}
You have to swap the corresponding pixel values. Currently, you overwrite the right half of the image before its original pixel values have been saved anywhere, so there is nothing left to copy to the left half.
Below I've illustrated what I mean by "swapping" values instead of just assigning them uni-directionally:
//Getting a pixel object for the given x and y value
Pixel pixelObj = image.getPixel(x, y);
Pixel oppositePixel = image.getPixel(image.getWidth()-x-1, y);
//Store the RGB values of the opposite pixel temporarily
int redValue = oppositePixel.getRed();
int greenValue = oppositePixel.getGreen();
int blueValue = oppositePixel.getBlue();
//This swaps the color values of the new pixel to the ones of the old pixel
oppositePixel.setRed(pixelObj.getRed());
oppositePixel.setGreen(pixelObj.getGreen());
oppositePixel.setBlue(pixelObj.getBlue());
pixelObj.setRed(redValue);
pixelObj.setGreen(greenValue);
pixelObj.setBlue(blueValue);
If you swap pixels both ways in each round, it's sufficient to loop x from 0 to image.getWidth() / 2.
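Putting the swap and the half-width loop together, the whole method would look something like this (a sketch that assumes the same Pixel and image API as the question):
public void reflect()
{
    //Only walk the left half; each pass fixes one pixel and its mirror partner
    for(int x = 0; x < image.getWidth() / 2; x++)
    {
        for(int y = 0; y < image.getHeight(); y++)
        {
            Pixel pixelObj = image.getPixel(x, y);
            Pixel oppositePixel = image.getPixel(image.getWidth()-x-1, y);
            //Store the RGB values of the opposite pixel temporarily
            int redValue = oppositePixel.getRed();
            int greenValue = oppositePixel.getGreen();
            int blueValue = oppositePixel.getBlue();
            //Swap the color values of the two pixels
            oppositePixel.setRed(pixelObj.getRed());
            oppositePixel.setGreen(pixelObj.getGreen());
            oppositePixel.setBlue(pixelObj.getBlue());
            pixelObj.setRed(redValue);
            pixelObj.setGreen(greenValue);
            pixelObj.setBlue(blueValue);
        }
    }
    image.show();
}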
Have a look at how it's done in ImageJ's ImageProcessor class as a reference.
Another solution is to use a matrix transformation and scale by -1 in the x direction. See the ImageJ scale op for a more elaborate example using the ImgLib2 library for image processing in Java.
I'm looking to use a very crude heightmap I've created in Photoshop to define a tiled isometric grid for me:
Map:
http://i.imgur.com/jKM7AgI.png
I'm aiming to loop through every pixel in the image and convert the colour of that pixel to a scale of my choosing, for example 0-100.
At the moment I'm using the following code:
try
{
final File file = new File("D:\\clouds.png");
final BufferedImage image = ImageIO.read(file);
for (int x = 0; x < image.getWidth(); x++)
{
for (int y = 0; y < image.getHeight(); y++)
{
int clr = image.getRGB(x, y) / 99999;
if (clr <= 0)
clr = -clr;
System.out.println(clr);
}
}
}
catch (IOException ex)
{
// Deal with exception
}
This works to an extent: the black pixel at position 0 is 167 and the white pixel at position 999 is 0. However, when I insert certain pixels into the image, I get slightly odd results; for example, a gray pixel that's very close to white returns over 100 when I would expect it to be in single digits.
Is there an alternate solution I could use that would yield more reliable results?
Many thanks.
Since it's a grayscale map, the RGB components will all be the same value (each in the range 0-255), so just take one out of the packed integer and find what percentage of 255 it is:
int clr = (int) ((image.getRGB(x, y) & 0xFF) / 255.0 * 100);
System.out.println(clr);
getRGB returns all the channels packed into one int, so you shouldn't do arithmetic on it directly. Maybe use the norm of the RGB vector instead?
for (int x = 0; x < image.getWidth(); ++x) {
for (int y = 0; y < image.getHeight(); ++y) {
final int rgb = image.getRGB(x, y);
final int red = ((rgb & 0xFF0000) >> 16);
final int green = ((rgb & 0x00FF00) >> 8);
final int blue = ((rgb & 0x0000FF) >> 0);
// Norm of RGB vector mapped to the unit interval.
final double intensity =
Math.sqrt(red * red + green * green + blue * blue)
/ Math.sqrt(3 * 255 * 255);
}
}
Note that there is also the java.awt.Color class, which can be instantiated with the int returned by getRGB and provides getRed, getGreen, and getBlue methods if you don't want to do the bit manipulation yourself.
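For example, the body of the loop above could be written with Color accessors instead of bit masks (a sketch; assumes java.awt.Color is imported and x, y, image are in scope as in the loop above):
// Same intensity computation as above, using Color accessors.
Color c = new Color(image.getRGB(x, y));
final double intensity =
        Math.sqrt(c.getRed() * c.getRed()
                + c.getGreen() * c.getGreen()
                + c.getBlue() * c.getBlue())
        / Math.sqrt(3 * 255 * 255);
final int scaled = (int) (intensity * 100); // 0-100 scale, as the question asks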
What is the fastest way to get the RGB value of each pixel of a BufferedImage?
Right now I am getting the RGB values using two for loops, as shown in the code below, but it takes too long: the nested loop runs a total of 479,999 times for my image. If I used a 16-bit image, this number would be even higher!
I need a faster way to get the pixel values.
Here is the code I am currently trying to work with:
BufferedImage bi = ImageIO.read(new File("C:\\images\\Sunset.jpg"));
int countloop = 0;
for (int x = 0; x < bi.getWidth(); x++) {
for (int y = 0; y < bi.getHeight(); y++) {
Color c = new Color(bi.getRGB(x, y));
System.out.println("red=="+c.getRed()+" green=="+c.getGreen()+" blue=="+c.getBlue()+" countloop="+countloop++);
}
}
I don't know if this might help, and I haven't tested it yet, but you can get the RGB values this way:
BufferedImage bi=ImageIO.read(new File("C:\\images\\Sunset.jpg"));
int[] pixel;
for (int y = 0; y < bi.getHeight(); y++) {
for (int x = 0; x < bi.getWidth(); x++) {
pixel = bi.getRaster().getPixel(x, y, new int[3]);
System.out.println(pixel[0] + " - " + pixel[1] + " - " + pixel[2] + " - " + (bi.getWidth() * y + x));
}
}
As you can see, you don't have to initialize a new Color inside the loop. I also inverted the width/height loops, as suggested by onemasse, so the counter can be derived from data I already have.
By changing from a bunch of individual getRGB calls to one big getRGB that copies the entire image into an array, the execution time dropped by an order of magnitude, from 33,000 milliseconds to 3,200 milliseconds, while the time to create the array was only 31 milliseconds.
No doubt about it: one big read into an array plus direct indexing of the array is much faster than many individual reads.
Edit: the performance difference appears to be related to a breakpoint set at the end of the class. While the breakpoint was outside the loop, every line of code within the class appears to have been tested against it. Changing to individual gets does NOT improve speed.
Since the code is still correct, the remainder of the answer may still be of use.
Old read statement
colorRed=new Color(bi.getRGB(x,y)).getRed();
Read statement to copy an entire image into an array
int[] rgbData = bi.getRGB(0,0, bi.getWidth(), bi.getHeight(),
null, 0,bi.getWidth());
The getRGB into an array puts all three color values into a single array element, so the individual colors must be extracted by bit-shifting and masking with an "and". The y coordinate must be multiplied by the width of the image.
Code to read individual colors out of the array
colorRed=(rgbData[(y*bi.getWidth())+x] >> 16) & 0xFF;
colorGreen=(rgbData[(y*bi.getWidth())+x] >> 8) & 0xFF;
colorBlue=(rgbData[(y*bi.getWidth())+x]) & 0xFF;
You should loop the rows in the outer loop and the columns in the inner. That way you'll avoid cache misses.
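In other words, swap the two loops from the question's code; a minimal sketch, assuming the usual row-major backing raster:
// Row-major traversal: y outer, x inner, so successive getRGB calls
// read neighboring values from the image's backing array.
for (int y = 0; y < bi.getHeight(); y++) {
    for (int x = 0; x < bi.getWidth(); x++) {
        int pixel = bi.getRGB(x, y);
        // ...process pixel...
    }
}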
Did you try BufferedImage.getRGB(int, int, int, int, int[], int, int)?
Something like:
int[] rgb = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), new int[bi.getWidth() * bi.getHeight()], 0, bi.getWidth());
Didn't try, so not sure if it's faster.
edit
Having looked at the code, it probably isn't, but it's worth a shot.
I found a solution here:
https://alvinalexander.com/blog/post/java/getting-rgb-values-for-each-pixel-in-image-using-java-bufferedi
BufferedImage bi = ImageIO.read(new File("C:\\images\\Sunset.jpg"));
for (int x = 0; x < bi.getWidth(); x++) {
for (int y = 0; y < bi.getHeight(); y++) {
int pixel = bi.getRGB(x, y);
int red = (pixel >> 16) & 0xff;
int green = (pixel >> 8) & 0xff;
int blue = (pixel) & 0xff;
System.out.println("red: " + red + ", green: " + green + ", blue: " + blue);
}
}