I tried to implement Sobel edge detection in Java.
It kind of works, but I get a lot of seemingly random noise.
I loaded the image as a BufferedImage and converted it to a greyscale image first (via an algorithm I found online). After that I calculate the edges in the x and y directions.
This is my code:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class Sobel {
    static int[] sobel_x = { 1, 0, -1,
                             2, 0, -2,
                             1, 0, -1 };
    static int[] sobel_y = { 1,  2,  1,
                             0,  0,  0,
                            -1, -2, -1 };

    public static void main(String argc[]) throws IOException {
        BufferedImage imgIn = ImageIO.read(new File("test.jpeg"));
        BufferedImage imgGrey = greyscale(imgIn);
        ImageIO.write(imgGrey, "PNG", new File("greyscale.jpg"));
        BufferedImage edgesX = edgeDetection(imgGrey, sobel_x);
        ImageIO.write(edgesX, "PNG", new File("edgesX.jpg"));
        BufferedImage edgesY = edgeDetection(imgGrey, sobel_y);
        ImageIO.write(edgesY, "PNG", new File("edgesY.jpg"));
        BufferedImage sobel = sobel(edgesX, edgesY);
        ImageIO.write(sobel, "PNG", new File("sobel.jpg"));
    }

    private static BufferedImage sobel(BufferedImage edgesX, BufferedImage edgesY) {
        BufferedImage result = new BufferedImage(edgesX.getWidth(), edgesX.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        int height = result.getHeight();
        int width = result.getWidth();
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                int tmp = Math.abs(edgesX.getRGB(x, y) + Math.abs(edgesY.getRGB(x, y)));
                result.setRGB(x, y, tmp);
            }
        }
        return result;
    }

    private static BufferedImage edgeDetection(BufferedImage img, int[] kernel) {
        int height = img.getHeight();
        int width = img.getWidth();
        BufferedImage result = new BufferedImage(width - 1, height - 1, BufferedImage.TYPE_BYTE_GRAY);
        for (int x = 1; x < width - 1; x++) {
            for (int y = 1; y < height - 1; y++) {
                int[] tmp = { img.getRGB(x-1, y-1), img.getRGB(x, y-1), img.getRGB(x+1, y-1),
                              img.getRGB(x-1, y),   img.getRGB(x, y),   img.getRGB(x+1, y),
                              img.getRGB(x-1, y+1), img.getRGB(x, y+1), img.getRGB(x+1, y+1) };
                int value = convolution(kernel, tmp);
                result.setRGB(x, y, value);
            }
        }
        return result;
    }

    private static int convolution(int[] kernel, int[] pixel) {
        int result = 0;
        for (int i = 0; i < pixel.length; i++) {
            result += kernel[i] * pixel[i];
        }
        return result / 9;
    }

    private static BufferedImage greyscale(BufferedImage img) {
        // get image width and height
        int width = img.getWidth();
        int height = img.getHeight();
        // convert to grayscale
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int p = img.getRGB(x, y);
                int a = (p >> 24) & 0xff;
                int r = (p >> 16) & 0xff;
                int g = (p >> 8) & 0xff;
                int b = p & 0xff;
                // calculate average
                int avg = (r + g + b) / 3;
                // replace RGB value with avg
                p = (a << 24) | (avg << 16) | (avg << 8) | avg;
                img.setRGB(x, y, p);
            }
        }
        return img;
    }
}
And this is an example of the noise I'm talking about, an edge-detected image of Lena:
I don't know why I get all this noise.
Any advice is appreciated.
You have to make the following changes.
In convolution, take the absolute value of the result:
private static int convolution(int[] kernel, int[] pixel) {
    int result = 0;
    for (int i = 0; i < pixel.length; i++) {
        result += kernel[i] * pixel[i];
    }
    return Math.abs(result) / 9;
}
In edgeDetection, read only the grey byte from each pixel and write the result to all three channels:
private static BufferedImage edgeDetection(BufferedImage img, int[] kernel) {
    int height = img.getHeight();
    int width = img.getWidth();
    BufferedImage result = new BufferedImage(width - 1, height - 1, BufferedImage.TYPE_INT_RGB);
    for (int x = 1; x < width - 1; x++) {
        for (int y = 1; y < height - 1; y++) {
            int[] tmp = { img.getRGB(x-1, y-1) & 0xff, img.getRGB(x, y-1) & 0xff, img.getRGB(x+1, y-1) & 0xff,
                          img.getRGB(x-1, y)   & 0xff, img.getRGB(x, y)   & 0xff, img.getRGB(x+1, y)   & 0xff,
                          img.getRGB(x-1, y+1) & 0xff, img.getRGB(x, y+1) & 0xff, img.getRGB(x+1, y+1) & 0xff };
            int value = convolution(kernel, tmp);
            result.setRGB(x, y, 0xff000000 | (value << 16) | (value << 8) | value);
        }
    }
    return result;
}
And finally, declare the result images as TYPE_INT_RGB:
BufferedImage result = new BufferedImage(edgesX.getWidth(), edgesX.getHeight(), BufferedImage.TYPE_INT_RGB);
BufferedImage result = new BufferedImage(width -1, height -1, BufferedImage.TYPE_INT_RGB);
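With those two declarations changed, the sobel() combination step should also read the grey value back out of the packed pixels instead of adding the raw getRGB() integers. A minimal sketch of that last step (the gradient magnitude is approximated here as |gx| + |gy| and clamped to 255, which is one common approximation, not the only option):
private static BufferedImage sobel(BufferedImage edgesX, BufferedImage edgesY) {
    int width = edgesX.getWidth();
    int height = edgesX.getHeight();
    BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            // all three channels hold the same grey value, so the low byte is enough
            int gx = edgesX.getRGB(x, y) & 0xff;
            int gy = edgesY.getRGB(x, y) & 0xff;
            // approximate the gradient magnitude and clamp it to the 0-255 range
            int value = Math.min(255, gx + gy);
            result.setRGB(x, y, 0xff000000 | (value << 16) | (value << 8) | value);
        }
    }
    return result;
}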
Related
I need to implement image blur.
To do this, I have to use a two-dimensional double array as the convolution matrix and implement the following:
each coefficient of the convolution matrix must be multiplied by the colour value of the corresponding neighbour of the current pixel;
I need to handle values that go outside the 0 to 255 range.
I tried to create and fill the matrix with the 1/9 values given in the task, but I don't understand what should be multiplied by what.
Has anyone solved this?
public static void main(String[] args) throws IOException {
    BufferedImage image = ImageIO.read(new File("image.jpg"));
    WritableRaster raster = image.getRaster();
    int width = raster.getWidth();
    int height = raster.getHeight();
    final int colorsCountInRgb = 3;
    final int colorMaximum = 255;
    int[] pixel = new int[colorsCountInRgb];
    double[][] matrix = new double[3][3];
    double matrixMultiplier = 1 / 9d;
    for (int i = 0; i < matrix.length; i++) {
        for (int j = 0; j < matrix.length; j++) {
            matrix[i][j] = matrixMultiplier;
        }
    }
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            raster.getPixel(x, y, pixel);
            raster.setPixel(x, y, pixel);
        }
    }
    ImageIO.write(image, "png", new File("out.png"));
}
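For illustration, here is a minimal sketch of the missing convolution step, reusing the variables from the code above (image, matrix, pixel, colorsCountInRgb, colorMaximum). It assumes a plain 3-band RGB image, writes into a copy of the raster so already-blurred pixels are not reused, skips the outermost row and column of pixels, and clamps each channel to the 0-255 range:
WritableRaster source = image.getRaster();
WritableRaster target = source.createCompatibleWritableRaster();
target.setRect(source);                      // keep the untouched border pixels
int[] neighbour = new int[colorsCountInRgb];
double[] sum = new double[colorsCountInRgb];
for (int y = 1; y < height - 1; y++) {
    for (int x = 1; x < width - 1; x++) {
        java.util.Arrays.fill(sum, 0);
        // multiply each kernel coefficient by the matching neighbour of (x, y)
        for (int ky = -1; ky <= 1; ky++) {
            for (int kx = -1; kx <= 1; kx++) {
                source.getPixel(x + kx, y + ky, neighbour);
                for (int c = 0; c < colorsCountInRgb; c++) {
                    sum[c] += matrix[ky + 1][kx + 1] * neighbour[c];
                }
            }
        }
        // clamp each channel to the valid 0-255 range
        for (int c = 0; c < colorsCountInRgb; c++) {
            pixel[c] = (int) Math.min(colorMaximum, Math.max(0, Math.round(sum[c])));
        }
        target.setPixel(x, y, pixel);
    }
}
image.setData(target);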
I would like to create an image filter and have read the following Wikipedia article. I wanted to test the example from Wikipedia, but I get an incorrect result.
https://en.wikipedia.org/wiki/Kernel_(image_processing)
(For some reason I cannot upload images)
Result:
https://imgur.com/FiYFuZS
Expected result:
https://upload.wikimedia.org/wikipedia/commons/2/20/Vd-Rige1.png
I've also read the following source and still do not know how to fix it :/
Bluring a Java buffered image
URL url = new URL("https://upload.wikimedia.org/wikipedia/commons/5/50/Vd-Orig.png");
BufferedImage image = ImageIO.read(url);
float[][] kernel = {
    { 0, -1,  0},
    {-1,  4, -1},
    { 0, -1,  0}
};
int w = image.getWidth();
int h = image.getHeight();
// Center point
int cx = kernel.length / 2;
int cy = kernel[0].length / 2;
BufferedImage cImage = new BufferedImage(w, h, image.getType());
for (int x = 0; x < w; x++) {
    for (int y = 0; y < h; y++) {
        float r = 0;
        float g = 0;
        float b = 0;
        for (int dx = -cx; dx <= cx; dx++) {
            for (int dy = -cy; dy <= cy; dy++) {
                float e = kernel[dx + cx][dy + cy];
                int xImage = x + dx;
                int yImage = y + dy;
                if (xImage < 0 || xImage >= w || yImage < 0 || yImage >= h) {
                    continue;
                }
                Color pixel = new Color(image.getRGB(xImage, yImage));
                r += pixel.getRed() * e;
                g += pixel.getGreen() * e;
                b += pixel.getBlue() * e;
            }
        }
        // Boundaries
        r = Math.min(255, Math.max(0, r));
        g = Math.min(255, Math.max(0, g));
        b = Math.min(255, Math.max(0, b));
        Color newPixel = new Color((int) r, (int) g, (int) b);
        cImage.setRGB(x, y, newPixel.getRGB());
    }
}
ImageIO.write(cImage, "png", Files.newOutputStream(Path.of("c.png")));
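For reference, the same kernel can also be applied with the built-in java.awt.image.ConvolveOp and Kernel classes and compared against the manual loop above. A rough sketch (EDGE_NO_OP handles the borders slightly differently from the skip used above, and the source is drawn into a TYPE_INT_RGB image first to avoid surprises with indexed PNG colour models):
// same 3x3 kernel, flattened row by row as ConvolveOp expects
float[] flat = {
     0, -1,  0,
    -1,  4, -1,
     0, -1,  0
};
BufferedImage rgb = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
rgb.getGraphics().drawImage(image, 0, 0, null);
ConvolveOp op = new ConvolveOp(new Kernel(3, 3, flat), ConvolveOp.EDGE_NO_OP, null);
BufferedImage reference = op.filter(rgb, null);
ImageIO.write(reference, "png", Files.newOutputStream(Path.of("c_reference.png")));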
I need to iterate through each pixel of a Bitmap image. This is my code, but it's too slow:
Bitmap img = ......;
int imgWidth = img.getWidth();
int imgHeight = img.getHeight();

for (int i = 0; i < imgWidth; i++) {
    for (int j = 0; j < imgHeight; j++) {
        int color = img.getPixel(i, j);
        int R = android.graphics.Color.red(color);
        int G = android.graphics.Color.green(color);
        int B = android.graphics.Color.blue(color);
        // do something
    }
}
I have solved this problem. This is my new code:
int[] colors = new int[img.getWidth() * img.getHeight()];
img.getPixels(colors, 0, img.getWidth(), 0, 0, img.getWidth(), img.getHeight());
for (int i = 0; i < colors.length; i++) {
    int y = i / img.getWidth();
    int x = i % img.getWidth();
    int R = android.graphics.Color.red(colors[i]);
    int G = android.graphics.Color.green(colors[i]);
    int B = android.graphics.Color.blue(colors[i]);
}
The new solution takes about 8 seconds, and the old solution takes 27 seconds.
When I give it a picture with salt-and-pepper noise, it returns an image losing all detail, and I don't know what's wrong with my code:
public class Q1 {
    public static void main(String[] args) throws IOException {
        BufferedImage img = ImageIO.read(new File("task1input.png"));
        // get dimensions
        int maxHeight = img.getHeight();
        int maxWidth = img.getWidth();
        // create 2D array for new picture
        int pictureFile[][] = new int[maxHeight][maxWidth];
        for (int i = 0; i < maxHeight; i++) {
            for (int j = 0; j < maxWidth; j++) {
                pictureFile[i][j] = img.getRGB(j, i);
            }
        }
        int output[][] = new int[maxHeight][maxWidth];
        // Apply mean filter
        for (int v = 1; v < maxHeight; v++) {
            for (int u = 1; u < maxWidth; u++) {
                // compute filter result for position (u,v)
                int sum = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        if (u + j >= 0 && v + i >= 0 && u + j < maxWidth && v + i < maxHeight) {
                            int p = pictureFile[v + i][u + j];
                            sum = sum + p;
                        }
                    }
                }
                int q = (int) (sum / 9);
                output[v][u] = q;
            }
        }
        // Turn the 2D array back into an image
        BufferedImage theImage = new BufferedImage(
                maxHeight,
                maxWidth,
                BufferedImage.TYPE_INT_RGB);
        int value;
        for (int y = 1; y < maxHeight; y++) {
            for (int x = 1; x < maxWidth; x++) {
                value = output[y][x];
                theImage.setRGB(y, x, value);
            }
        }
        File outputfile = new File("task1output3x3.png");
        ImageIO.write(theImage, "png", outputfile);
    }
}
getRGB "Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB)" so therefore you have to extract the R, G, and B and add them separately; then put them back together. One way is this
int pixel = pictureFile[v + i][u + j];
int rr = (pixel & 0x00ff0000) >> 16, rg = (pixel & 0x0000ff00) >> 8, rb = pixel & 0x000000ff;
sumr += rr;
sumg += rg;
sumb += rb;
then to put them back together
sumr /= 9; sumg /= 9; sumb /= 9;
newPixel = 0xff000000 | (sumr << 16) | (sumg << 8) | sumb;
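Putting those pieces together, the filter loop could look roughly like this (a sketch reusing the variables from the question's code; the outermost rows and columns are simply skipped here):
for (int v = 1; v < maxHeight - 1; v++) {
    for (int u = 1; u < maxWidth - 1; u++) {
        int sumr = 0, sumg = 0, sumb = 0;
        for (int j = -1; j <= 1; j++) {
            for (int i = -1; i <= 1; i++) {
                // pull the channels out of the packed ARGB int before summing
                int pixel = pictureFile[v + i][u + j];
                sumr += (pixel & 0x00ff0000) >> 16;
                sumg += (pixel & 0x0000ff00) >> 8;
                sumb += pixel & 0x000000ff;
            }
        }
        // average each channel and repack into a single ARGB pixel
        sumr /= 9; sumg /= 9; sumb /= 9;
        output[v][u] = 0xff000000 | (sumr << 16) | (sumg << 8) | sumb;
    }
}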
Hi, I was wondering how to flip an image horizontally. For a practice task I was given code that reads an image and converts it to an image indicating its brightness from 0-5, and I had to flip the image.
This is my code for reading an image and drawing it:
public int[][] readImage(String url) throws IOException
{
    // fetch the image
    BufferedImage img = ImageIO.read(new URL(url));
    // create the array to match the dimensions of the image
    int width = img.getWidth();
    int height = img.getHeight();
    int[][] imageArray = new int[width][height];
    // convert the pixels of the image into brightness values
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            // get the pixel at (x,y)
            int rgb = img.getRGB(x, y);
            Color c = new Color(rgb);
            int red = c.getRed();
            int green = c.getGreen();
            int blue = c.getBlue();
            // convert to greyscale
            float[] hsb = Color.RGBtoHSB(red, green, blue, null);
            int brightness = (int) Math.round(hsb[2] * (PIXEL_CHARS.length - 1));
            imageArray[x][y] = brightness;
        }
    }
    return imageArray;
}
public void draw() throws IOException
{
    int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
    for (int i = 0; i < array.length; i++)
    {
        for (int pic = 0; pic < array[i].length; pic++)
        {
            if (array[pic][i] == 0)
            {
                System.out.print("X");
            }
            else if (array[pic][i] == 1)
            {
                System.out.print("8");
            }
            else if (array[pic][i] == 2)
            {
                System.out.print("0");
            }
            else if (array[pic][i] == 3)
            {
                System.out.print(":");
            }
            else if (array[pic][i] == 4)
            {
                System.out.print(".");
            }
            else if (array[pic][i] == 5)
            {
                System.out.print(" ");
            }
            else
            {
                System.out.print("error");
                break;
            }
        }
        System.out.println();
    }
}
and this is the code I tried to write to flip it horizontally:
void mirrorUpDown()
{
    int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
    int i = 0;
    for (int x = 0; x < array.length; x++)
    {
        for (int y = 0; y < array[i].length; y++)
        {{
            int temp = array[x][y];
            array[x][y] = array[-x][y];
            array[array[i].length - x][y] = temp;
        }
        }
    }
}
I get an error:
unreported exception java.io.IOException; must be caught or declared to be thrown
I'd actually do it this way...
BufferedImage flip(BufferedImage sprite) {
    BufferedImage img = new BufferedImage(sprite.getWidth(), sprite.getHeight(), BufferedImage.TYPE_INT_ARGB);
    for (int xx = sprite.getWidth() - 1; xx >= 0; xx--) {
        for (int yy = 0; yy < sprite.getHeight(); yy++) {
            // column xx of the source lands at the mirrored column of the target
            img.setRGB(sprite.getWidth() - 1 - xx, yy, sprite.getRGB(xx, yy));
        }
    }
    return img;
}
Just a loop whose x starts at the end of the first image and places its RGBA value at the mirrored position in the second image. Clean, easy code :)
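For example, it can be used like this (assuming the usual javax.imageio.ImageIO and java.io.File imports; the file names are just placeholders):
BufferedImage sprite = ImageIO.read(new File("sprite.png"));
BufferedImage mirrored = flip(sprite);
ImageIO.write(mirrored, "png", new File("sprite_flipped.png"));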
Add a throws IOException clause to the mirrorUpDown() function.
Also check the function from which you are calling these methods: does it handle the exception? Either the call is enclosed in a try/catch block, or that function is also declared to throw IOException (one of the two must be there).
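For example, a minimal sketch of the two options:
// Option 1: let the exception propagate by declaring it on the method
void mirrorUpDown() throws IOException
{
    int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
    // ...
}

// Option 2: handle it where the method is called
try
{
    mirrorUpDown();
}
catch (IOException e)
{
    e.printStackTrace();
}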
How is your image supposed to know it should get its data from imageArray?
Instead, you should access the raster of your image and modify the data in it.
void flip(BufferedImage image) {
    WritableRaster raster = image.getRaster();
    int h = raster.getHeight();
    int w = raster.getWidth();
    int x0 = raster.getMinX();
    int y0 = raster.getMinY();
    for (int x = x0; x < x0 + w; x++) {
        for (int y = y0; y < y0 + h / 2; y++) {
            // swap the pixel at (x, y) with its vertical mirror
            // (size the buffers by the band count so images with alpha also work)
            int[] pix1 = new int[raster.getNumBands()];
            pix1 = raster.getPixel(x, y, pix1);
            int[] pix2 = new int[raster.getNumBands()];
            pix2 = raster.getPixel(x, y0 + h - 1 - (y - y0), pix2);
            raster.setPixel(x, y, pix2);
            raster.setPixel(x, y0 + h - 1 - (y - y0), pix1);
        }
    }
}
Sorry about posting this over a year later, but it should help someone at some stage.
try {
    java.awt.image.BufferedImage bi = javax.imageio.ImageIO.read(getClass().getResource("Your image bro.jpg"));
    int[] h = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), null, 0, bi.getWidth());
    int[] h1 = new int[h.length];
    System.out.println("" + h.length);
    for (int j = 0; 500 > j; j++) {
        for (int i = 500; i > 0; i--) {
            h1[j * 500 + (500 - i)] = h[(j * 500) + (i - 1)];
        }
    }
    bi.setRGB(0, 0, bi.getWidth(), bi.getHeight(), h1, 0, bi.getWidth());
} catch (Exception e) {
    e.printStackTrace();
}
Let's break the code down.
java.awt.image.BufferedImage bi =javax.imageio.ImageIO.read(getClass().getResource("Your image bro.jpg"));
This tries to read the image and stores it in the BufferedImage variable bi.
int[] h = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), null, 0, bi.getWidth());
int [] h1 = new int[h.length];
This instantiates two arrays: h is the original RGB array, and h1 will be the horizontally flipped RGB array.
for (int j = 0; 500 > j; j++) {
    for (int i = 500; i > 0; i--) {
        h1[j * 500 + (500 - i)] = h[(j * 500) + (i - 1)];
    }
}
Let's look at one line in particular more closely:
h1[j * 500 + (500 - i)] = h[(j * 500) + (i - 1)];
Images are scanned from position 0,0 to width,height, but the pixels are returned in one contiguous array. Thus we use pseudo-2D indexing to flip the image: j*500 selects the row (the y value) and (500-i) selects the mirrored column (the x value).
bi.setRGB(0, 0, bi.getWidth(), bi.getHeight(), h1, 0, bi.getWidth());
Finally, the image gets stored back into the BufferedImage variable.
Note that the 500 constant refers to the x resolution of the image. For example, a 1920 x 1080 image would use a maximum value of 1920. The logic is yours to adapt.
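A sketch of the same loop with the hard-coded 500 replaced by the actual image dimensions, so it works for any size:
int width = bi.getWidth();
int height = bi.getHeight();
for (int j = 0; j < height; j++) {
    for (int i = 0; i < width; i++) {
        // column i of the source row lands at the mirrored column (width - 1 - i)
        h1[j * width + (width - 1 - i)] = h[j * width + i];
    }
}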