I need to implement an image blur.
To do this, I have to use a two-dimensional double array as the convolution matrix and implement the following: each coefficient of the convolution matrix must be multiplied by the color value of the corresponding neighbor of the current pixel (the one being modified), and the products summed.
I also need to handle values that go outside the 0 to 255 range.
I created and filled the matrix with the 1/9 values given in the task, but then I don't understand what should be multiplied by what.
Has anyone solved this?
public static void main(String[] args) throws IOException {
    BufferedImage image = ImageIO.read(new File("image.jpg"));
    WritableRaster raster = image.getRaster();
    int width = raster.getWidth();
    int height = raster.getHeight();

    final int colorsCountInRgb = 3;
    final int colorMaximum = 255;
    int[] pixel = new int[colorsCountInRgb];

    // 3x3 convolution matrix filled with 1/9 (box blur)
    double[][] matrix = new double[3][3];
    double matrixMultiplier = 1 / 9d;
    for (int i = 0; i < matrix.length; i++) {
        for (int j = 0; j < matrix.length; j++) {
            matrix[i][j] = matrixMultiplier;
        }
    }

    // so far this only reads every pixel and writes it back unchanged
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            raster.getPixel(x, y, pixel);
            raster.setPixel(x, y, pixel);
        }
    }

    ImageIO.write(image, "png", new File("out.png"));
}
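A minimal sketch of the convolution step being described, under these assumptions: it uses the 3×3 averaging matrix and the variables from the code above, it skips the one-pixel border so every pixel has a full neighborhood, and it reads from an untouched copy of the raster so already-blurred values are not fed back in:

// copy of the original pixels to read from; blurred values go into the image's own raster
WritableRaster source = image.copyData(null);
int[] neighbor = new int[colorsCountInRgb];
double[] sum = new double[colorsCountInRgb];
for (int y = 1; y < height - 1; y++) {
    for (int x = 1; x < width - 1; x++) {
        java.util.Arrays.fill(sum, 0);
        for (int i = -1; i <= 1; i++) {
            for (int j = -1; j <= 1; j++) {
                // each matrix coefficient times the matching neighbor, per color channel
                source.getPixel(x + j, y + i, neighbor);
                for (int c = 0; c < colorsCountInRgb; c++) {
                    sum[c] += matrix[i + 1][j + 1] * neighbor[c];
                }
            }
        }
        for (int c = 0; c < colorsCountInRgb; c++) {
            // clamp the accumulated value to 0..255
            pixel[c] = (int) Math.max(0, Math.min(colorMaximum, Math.round(sum[c])));
        }
        raster.setPixel(x, y, pixel);
    }
}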
I wanted to flip the original image horizontally and create the flipped image in the same folder, but no new image is created. Thanks in advance.
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class ImagesFlipHorizontally {

    public static void main(String[] args) throws IOException {
        File location1 = new File("E:\\Users/Peter/Downloads/moon1.jpg");
        BufferedImage image = ImageIO.read(location1);
        File location2 = new File("E:\\Users/Peter/Downloads/moon1mirror.jpg");
        int width = image.getWidth();
        int height = image.getHeight();
        BufferedImage mirror = mirrorimage(image, width, height);
        ImageIO.write(mirror, "jpg", location2);
    }

    private static BufferedImage mirrorimage(BufferedImage img, int w, int h) {
        BufferedImage horizontallyflipped = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        for (int xx = w - 1; xx > 0; xx--) {
            for (int yy = 0; yy < h; yy++) {
                img.setRGB(w - xx, yy, img.getRGB(xx, yy));
            }
        }
        return horizontallyflipped;
    }
}
Besides the loop bound (it would have to run down to xx >= 0, otherwise column 0 is skipped), the swapping is never actually done: the loop writes into img, but the empty horizontallyflipped image is returned. Swapping the columns in place works:
private static BufferedImage mirrorimage(BufferedImage img, int w, int h) {
    // swap each column with its mirror column, working in place on img
    for (int yy = 0; yy < h; yy++) {
        for (int xx = 0; xx < w / 2; xx++) {
            int c = img.getRGB(xx, yy);
            img.setRGB(xx, yy, img.getRGB(w - 1 - xx, yy));
            img.setRGB(w - 1 - xx, yy, c);
        }
    }
    return img;
}
There are faster approaches, like Flip Image with Graphics2D.
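A rough sketch of that idea (not the code from that answer; it assumes drawImage with a negative width, which mirrors the image while drawing, and it needs java.awt.Graphics2D):

// TYPE_INT_RGB because the result is written as JPEG, which has no alpha channel
BufferedImage flipped = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics2D g = flipped.createGraphics();
// drawing from x = w with width -w mirrors the image horizontally
g.drawImage(img, w, 0, -w, h, null);
g.dispose();
return flipped;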
I tried to implement Sobel edge detection in Java.
It kind of works, but I get a lot of seemingly random noise.
I load the image as a BufferedImage and convert it to a greyscale image first (via an algorithm I found online). After that I calculate the edges in the x and y direction.
This is my code:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class Sobel {

    static int[] sobel_x = { 1, 0, -1,
                             2, 0, -2,
                             1, 0, -1 };

    static int[] sobel_y = {  1,  2,  1,
                              0,  0,  0,
                             -1, -2, -1 };

    public static void main(String argc[]) throws IOException {
        BufferedImage imgIn = ImageIO.read(new File("test.jpeg"));

        BufferedImage imgGrey = greyscale(imgIn);
        ImageIO.write(imgGrey, "PNG", new File("greyscale.jpg"));

        BufferedImage edgesX = edgeDetection(imgGrey, sobel_x);
        ImageIO.write(edgesX, "PNG", new File("edgesX.jpg"));

        BufferedImage edgesY = edgeDetection(imgGrey, sobel_y);
        ImageIO.write(edgesY, "PNG", new File("edgesY.jpg"));

        BufferedImage sobel = sobel(edgesX, edgesY);
        ImageIO.write(sobel, "PNG", new File("sobel.jpg"));
    }

    private static BufferedImage sobel(BufferedImage edgesX, BufferedImage edgesY) {
        BufferedImage result = new BufferedImage(edgesX.getWidth(), edgesX.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        int height = result.getHeight();
        int width = result.getWidth();
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                int tmp = Math.abs(edgesX.getRGB(x, y) + Math.abs(edgesY.getRGB(x, y)));
                result.setRGB(x, y, tmp);
            }
        }
        return result;
    }

    private static BufferedImage edgeDetection(BufferedImage img, int[] kernel) {
        int height = img.getHeight();
        int width = img.getWidth();
        BufferedImage result = new BufferedImage(width - 1, height - 1, BufferedImage.TYPE_BYTE_GRAY);
        for (int x = 1; x < width - 1; x++) {
            for (int y = 1; y < height - 1; y++) {
                int[] tmp = { img.getRGB(x - 1, y - 1), img.getRGB(x, y - 1), img.getRGB(x + 1, y - 1),
                              img.getRGB(x - 1, y),     img.getRGB(x, y),     img.getRGB(x + 1, y),
                              img.getRGB(x - 1, y + 1), img.getRGB(x, y + 1), img.getRGB(x + 1, y + 1) };
                int value = convolution(kernel, tmp);
                result.setRGB(x, y, value);
            }
        }
        return result;
    }

    private static int convolution(int[] kernel, int[] pixel) {
        int result = 0;
        for (int i = 0; i < pixel.length; i++) {
            result += kernel[i] * pixel[i];
        }
        return result / 9;
    }

    private static BufferedImage greyscale(BufferedImage img) {
        // get image width and height
        int width = img.getWidth();
        int height = img.getHeight();

        // convert to grayscale
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int p = img.getRGB(x, y);

                int a = (p >> 24) & 0xff;
                int r = (p >> 16) & 0xff;
                int g = (p >> 8) & 0xff;
                int b = p & 0xff;

                // calculate average
                int avg = (r + g + b) / 3;

                // replace RGB value with avg
                p = (a << 24) | (avg << 16) | (avg << 8) | avg;
                img.setRGB(x, y, p);
            }
        }
        return img;
    }
}
And this is an example of the noise I'm talking about:
An image of Lena:
I don't know why I get all this noise.
Any advice is appreciated.
You have to make the following changes:
In convolution, take the absolute value of the result:
private static int convolution(int[] kernel, int[] pixel) {
    int result = 0;
    for (int i = 0; i < pixel.length; i++) {
        result += kernel[i] * pixel[i];
    }
    return Math.abs(result) / 9;
}
In edgeDetection, keep only one channel of each greyscale pixel (& 0xff) and write the computed value to all three channels of the output:
private static BufferedImage edgeDetection(BufferedImage img, int[] kernel) {
    int height = img.getHeight();
    int width = img.getWidth();
    BufferedImage result = new BufferedImage(width - 1, height - 1, BufferedImage.TYPE_INT_RGB);
    for (int x = 1; x < width - 1; x++) {
        for (int y = 1; y < height - 1; y++) {
            // mask with 0xff so only one greyscale channel goes into the convolution
            int[] tmp = { img.getRGB(x - 1, y - 1) & 0xff, img.getRGB(x, y - 1) & 0xff, img.getRGB(x + 1, y - 1) & 0xff,
                          img.getRGB(x - 1, y) & 0xff,     img.getRGB(x, y) & 0xff,     img.getRGB(x + 1, y) & 0xff,
                          img.getRGB(x - 1, y + 1) & 0xff, img.getRGB(x, y + 1) & 0xff, img.getRGB(x + 1, y + 1) & 0xff };
            int value = convolution(kernel, tmp);
            // write the edge value into all three channels
            result.setRGB(x, y, 0xff000000 | (value << 16) | (value << 8) | value);
        }
    }
    return result;
}
And finally, declare the result images as TYPE_INT_RGB:
BufferedImage result = new BufferedImage(edgesX.getWidth(), edgesX.getHeight(), BufferedImage.TYPE_INT_RGB);
BufferedImage result = new BufferedImage(width - 1, height - 1, BufferedImage.TYPE_INT_RGB);
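This goes beyond the changes above, but the same & 0xff masking would also be needed where the two gradient images are combined in sobel; a rough sketch of that loop body, under that assumption:

// read back one channel of each gradient image, add the magnitudes, clamp, and repack
int gx = edgesX.getRGB(x, y) & 0xff;
int gy = edgesY.getRGB(x, y) & 0xff;
int tmp = Math.min(255, gx + gy);
result.setRGB(x, y, 0xff000000 | (tmp << 16) | (tmp << 8) | tmp);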
I'm trying to create a program that, when an image is selected, inverts its colors.
But when I run the code, the BufferedImage changes the RGB value I just assigned to it.
Here is the code that inverts the image.
image is a static BufferedImage.
public static void saveImage(File input, File output) throws IOException {
    image = ImageIO.read(input);
    for (int x = 0; x < image.getWidth(); x++) {
        for (int y = 0; y < image.getHeight(); y++) {
            boolean isTransparent = isTransparent(x, y);
            if (!isTransparent) {
                Color color = new Color(image.getRGB(x, y));
                int r = 255 - color.getRed();
                int g = 255 - color.getGreen();
                int b = 255 - color.getBlue();
                color = new Color(r, g, b);
                int rgb = color.getRGB();
                image.setRGB(x, y, rgb);
                System.out.println(rgb + " --> " + image.getRGB(x, y));
            }
        }
    }
    ImageIO.write(image, "png", output);
}

public static boolean isTransparent(int x, int y) {
    int pixel = image.getRGB(x, y);
    return (pixel >> 24) == 0x00;
}
I have the following Java code:
public static BufferedImage createImage(byte[] data, int width, int height)
{
    BufferedImage res = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    byte[] rdata = ((DataBufferByte) res.getRaster().getDataBuffer()).getData();
    for (int y = 0; y < height; y++) {
        int yi = y * width;
        for (int x = 0; x < width; x++) {
            rdata[yi] = data[yi];
            yi++;
        }
    }
    return res;
}
Is there a faster way to do this?
In C++ I would use memcpy, but in Java?
Or maybe it is possible to initialize the result image with the passed data directly?
Well, to copy the array quickly you can use System.arraycopy:
System.arraycopy(data, 0, rdata, 0, height * width);
I don't know about initializing the BufferedImage to start with though, I'm afraid.
Have you tried:
res.getRaster().setDataElements(0, 0, width, height, data);
?
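For reference, a minimal sketch of createImage using that call, assuming data holds exactly width * height greyscale bytes in row-major order:

public static BufferedImage createImage(byte[] data, int width, int height) {
    BufferedImage res = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    // copies the packed greyscale bytes into the raster in a single call
    res.getRaster().setDataElements(0, 0, width, height, data);
    return res;
}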