I have the following Java code:
public static BufferedImage createImage(byte[] data, int width, int height)
{
BufferedImage res = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
byte[] rdata = ((DataBufferByte)res.getRaster().getDataBuffer()).getData();
for (int y = 0; y < height; y++) {
int yi = y * width;
for (int x = 0; x < width; x++) {
rdata[yi] = data[yi];
yi++;
}
}
return res;
}
Is there a faster way to do this?
In C++ I would use memcpy, but in Java?
Or maybe it is possible to initialize the result image with the passed data directly?
Well, to copy the array quickly you can use System.arraycopy:
System.arraycopy(data, 0, rdata, 0, height * width);
I don't know about initializing the BufferedImage to start with though, I'm afraid.
Have you tried:
res.getRaster().setDataElements(0, 0, width, height, data);
?
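For reference, combining that with the original method boils down to something like this (a minimal sketch, assuming data holds at least width * height bytes, one per pixel):
public static BufferedImage createImage(byte[] data, int width, int height) {
    BufferedImage res = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    // copy the grey bytes straight into the image's raster
    res.getRaster().setDataElements(0, 0, width, height, data);
    return res;
}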
Related
I have the following code:
Bitmap bitmap = Bitmap.createBitmap(WIDTH, HEIGHT, Bitmap.Config.ARGB_4444);
for (int y = 0; y < HEIGHT; y++) {
for (int x = 0; x < WIDTH; x++) {
int index = y * WIDTH + x;
bitmap.setPixel(x, y, Color.argb(255, 0, mask[index],0)); // mask is an array of int between 0 and 255
}
}
It works properly: I get my bitmap, but this code is extremely slow.
I tried to replace it with:
Bitmap bitmap = Bitmap.createBitmap(WIDTH, HEIGHT, Bitmap.Config.ARGB_4444);
bitmap.setPixels(mask, 0, WIDTH, 0, 0, WIDTH, HEIGHT);
but this is not working. I get a black image.
Can anybody help?
Thanks!
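Just a guess, but setPixels expects packed ARGB colour ints rather than the raw 0-255 intensities in mask, so something along these lines might be what you're after (a sketch assuming you want the same opaque-green colour as in your per-pixel loop):
Bitmap bitmap = Bitmap.createBitmap(WIDTH, HEIGHT, Bitmap.Config.ARGB_4444);
int[] colors = new int[WIDTH * HEIGHT];
for (int index = 0; index < colors.length; index++) {
    // pack each mask value into an opaque green ARGB colour, exactly as in the per-pixel loop
    colors[index] = Color.argb(255, 0, mask[index], 0);
}
bitmap.setPixels(colors, 0, WIDTH, 0, 0, WIDTH, HEIGHT);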
I tried to implement the Sobel edge detection in java.
It kind of works but I get a lot of seemingly random noise...
I loaded the image as a BufferedImage and converted it to a greyscale image first (via an algorithm I found online). After that I calculate the edges in the x and y directions.
This is my code:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public class Sobel {
static int [] sobel_x = {1, 0, -1,
2, 0, -2,
1, 0, -1};
static int [] sobel_y = {1, 2, 1,
0, 0, 0,
-1, -2, -1};
public static void main(String argc[]) throws IOException {
BufferedImage imgIn = ImageIO.read(new File("test.jpeg"));
BufferedImage imgGrey = greyscale(imgIn);
ImageIO.write(imgGrey, "PNG", new File("greyscale.jpg"));
BufferedImage edgesX = edgeDetection(imgGrey, sobel_x);
ImageIO.write(edgesX, "PNG", new File("edgesX.jpg"));
BufferedImage edgesY = edgeDetection(imgGrey, sobel_y);
ImageIO.write(edgesY, "PNG", new File("edgesY.jpg"));
BufferedImage sobel = sobel(edgesX,edgesY);
ImageIO.write(sobel, "PNG", new File("sobel.jpg"));
}
private static BufferedImage sobel (BufferedImage edgesX, BufferedImage edgesY){
BufferedImage result = new BufferedImage(edgesX.getWidth(), edgesX.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
int height = result.getHeight();
int width = result.getWidth();
for(int x = 0; x < width ; x++){
for(int y = 0; y < height; y++){
int tmp = Math.abs(edgesX.getRGB(x, y) + Math.abs(edgesY.getRGB(x, y)));
result.setRGB(x, y, tmp);
}
}
return result;
}
private static BufferedImage edgeDetection(BufferedImage img, int[] kernel){
int height = img.getHeight();
int width = img.getWidth();
BufferedImage result = new BufferedImage(width -1, height -1, BufferedImage.TYPE_BYTE_GRAY);
for(int x = 1; x < width -1 ; x++){
for(int y = 1; y < height - 1; y++){
int [] tmp = {img.getRGB(x-1, y-1),img.getRGB(x, y-1),img.getRGB(x+1, y-1),img.getRGB(x-1, y),img.getRGB(x, y),img.getRGB(x+1, y),img.getRGB(x-1, y+1),img.getRGB(x, y+1),img.getRGB(x+1, y+1)};
int value = convolution (kernel, tmp);
result.setRGB(x,y, value);
}
}
return result;
}
private static int convolution (int [] kernel, int [] pixel){
int result = 0;
for (int i = 0; i < pixel.length; i++){
result += kernel[i] * pixel[i];
}
return result / 9;
}
private static BufferedImage greyscale(BufferedImage img){
//get image width and height
int width = img.getWidth();
int height = img.getHeight();
//convert to grayscale
for(int y = 0; y < height; y++){
for(int x = 0; x < width; x++){
int p = img.getRGB(x,y);
int a = (p>>24)&0xff;
int r = (p>>16)&0xff;
int g = (p>>8)&0xff;
int b = p&0xff;
//calculate average
int avg = (r+g+b)/3;
//replace RGB value with avg
p = (a<<24) | (avg<<16) | (avg<<8) | avg;
img.setRGB(x, y, p);
}
}
return img;
}
}
And this is an example of the noise I'm talking about (image omitted):
An image of Lena (image omitted):
I don't know why I get all this noise.
Any advice is appreciated.
You have to make the following changes:
In convolution, take the absolute value:
private static int convolution (int [] kernel, int [] pixel){
int result = 0;
for (int i = 0; i < pixel.length; i++){
result += kernel[i] * pixel[i];
}
return (int)(Math.abs(result) / 9);
}
In edgeDetection, apply the value to all three channels:
private static BufferedImage edgeDetection(BufferedImage img, int[] kernel){
int height = img.getHeight();
int width = img.getWidth();
BufferedImage result = new BufferedImage(width -1, height -1, BufferedImage.TYPE_INT_RGB);
for(int x = 1; x < width -1 ; x++){
for(int y = 1; y < height - 1; y++){
int [] tmp = {img.getRGB(x-1, y-1)&0xff,img.getRGB(x, y-1)&0xff,img.getRGB(x+1, y-1)&0xff,
img.getRGB(x-1, y)&0xff,img.getRGB(x, y)&0xff,img.getRGB(x+1, y)&0xff,img.getRGB(x-1, y+1)&0xff,
img.getRGB(x, y+1)&0xff,img.getRGB(x+1, y+1)&0xff};
int value = convolution (kernel, tmp);
result.setRGB(x,y, 0xff000000|(value<<16)|(value<<8)|value);
}
}
return result;
}
And finally, declare the result images as INT_RGB:
BufferedImage result = new BufferedImage(edgesX.getWidth(), edgesX.getHeight(), BufferedImage.TYPE_INT_RGB);
BufferedImage result = new BufferedImage(width -1, height -1, BufferedImage.TYPE_INT_RGB);
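The sobel() combine step isn't shown in the answer; a minimal sketch of what it might look like after these changes (reading one channel back out and clamping the sum is my own addition, not part of the answer above):
private static BufferedImage sobel(BufferedImage edgesX, BufferedImage edgesY) {
    BufferedImage result = new BufferedImage(edgesX.getWidth(), edgesX.getHeight(), BufferedImage.TYPE_INT_RGB);
    for (int x = 0; x < result.getWidth(); x++) {
        for (int y = 0; y < result.getHeight(); y++) {
            int gx = edgesX.getRGB(x, y) & 0xff;   // all three channels hold the same edge value
            int gy = edgesY.getRGB(x, y) & 0xff;
            int value = Math.min(255, gx + gy);    // clamp so the sum stays a valid channel value
            result.setRGB(x, y, 0xff000000 | (value << 16) | (value << 8) | value);
        }
    }
    return result;
}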
Hi, I was wondering how to flip an image horizontally. For a practice task I was given code that reads an image and converts it to a representation of its brightness from 0-5, and I had to flip an image.
This is my code for reading an image and drawing it:
public int[][] readImage(String url) throws IOException
{
// fetch the image
BufferedImage img = ImageIO.read(new URL(url));
// create the array to match the dimensions of the image
int width = img.getWidth();
int height = img.getHeight();
int[][] imageArray = new int[width][height];
// convert the pixels of the image into brightness values
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
// get the pixel at (x,y)
int rgb = img.getRGB(x,y);
Color c = new Color(rgb);
int red = c.getRed();
int green = c.getGreen();
int blue = c.getBlue();
// convert to greyscale
float[] hsb = Color.RGBtoHSB(red, green, blue, null);
int brightness = (int)Math.round(hsb[2] * (PIXEL_CHARS.length - 1));
imageArray[x][y] = brightness;
}
}
return imageArray;
}
public void draw() throws IOException
{
int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
for(int i=0; i<array.length; i++)
{
for(int pic=0; pic<array[i].length; pic++)
{
if(array[pic][i] == 0)
{
System.out.print("X");
}
else if(array[pic][i] == 1)
{
System.out.print("8");
}
else if(array[pic][i] == 2)
{
System.out.print("0");
}
else if(array[pic][i] == 3)
{
System.out.print(":");
}
else if(array[pic][i] == 4)
{
System.out.print(".");
}
else if (array[pic][i] == 5)
{
System.out.print(" ");
}
else
{
System.out.print("error");
break;
}
}
System.out.println();
}
}
And this is the code I tried to write to flip it horizontally:
void mirrorUpDown()
{
int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
int i = 0;
for (int x = 0; x < array.length; x++)
{
for (int y = 0; y < array[i].length; y++)
{{
int temp = array[x][y];
array[x][y]= array[-x][y];
array[array[i].length-x][y]=temp;
}
}
}
}
I get an error:
unreported exception java.io.IOException;
must be caught or declared to be thrown
I'd actually do it this way...
BufferedImage flip(BufferedImage sprite){
BufferedImage img = new BufferedImage(sprite.getWidth(),sprite.getHeight(),BufferedImage.TYPE_INT_ARGB);
// include column 0 and mirror onto width-1-xx so every column lands in bounds
for(int xx = sprite.getWidth()-1;xx>=0;xx--){
for(int yy = 0;yy < sprite.getHeight();yy++){
img.setRGB(sprite.getWidth()-1-xx, yy, sprite.getRGB(xx, yy));
}
}
return img;
}
Just a loop whose x starts at the end of the first image and places its RGBA value at the flipped position in the second image. Clean, easy code :)
In the function mirrorUpDown(), add a throws IOException clause.
Also check the function from which you call these methods: does it handle the exception? Either that call is enclosed in a try/catch block, or the calling function is also declared to throw IOException (one of the two should be there).
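A minimal sketch of that declaration change (the body is unchanged; the caller then has the same catch-or-declare choice):
void mirrorUpDown() throws IOException {
    int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
    // ... flipping logic as before ...
}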
How is your image supposed to know it should get its data from imageArray?
Instead, you should access the raster of your image and modify the data in it.
void flip(BufferedImage image) {
WritableRaster raster = image.getRaster();
int h = raster.getHeight();
int w = raster.getWidth();
int x0 = raster.getMinX();
int y0 = raster.getMinY();
for (int x = x0; x < x0 + w; x++){
for (int y = y0; y < y0 + h / 2; y++){
int[] pix1 = new int[3];
pix1 = raster.getPixel(x, y, pix1);
int[] pix2 = new int[3];
pix2 = raster.getPixel(x, y0 + h - 1 - (y - y0), pix2);
raster.setPixel(x, y, pix2);
raster.setPixel(x, y0 + h - 1 - (y - y0), pix1);
}
}
return;
}
Sorry about posting this here over a year later, but it should aid someone at some stage.
try{
java.awt.image.BufferedImage bi = javax.imageio.ImageIO.read(getClass().getResource("Your image bro.jpg")) ;
int[] h = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), null, 0, bi.getWidth());
int [] h1 = new int[h.length];
System.out.println(""+h.length);
for(int j = 0;500>j;j++){
for(int i = 500;i>0;i--){
h1[j*500+(500-i)] = h[(j*500)+(i-1)];
}
}
bi.setRGB(0, 0, bi.getWidth(), bi.getHeight(), h1, 0, bi.getWidth());
}
catch(Exception e){e.printStackTrace();}
Let's break the code down:
java.awt.image.BufferedImage bi =javax.imageio.ImageIO.read(getClass().getResource("Your image bro.jpg"));
This tries to read the image and stores it in the BufferedImage variable bi.
int[] h = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), null, 0, bi.getWidth());
int [] h1 = new int[h.length];
This instantiates two arrays: h is the original RGB array and h1 will be the horizontally flipped RGB array.
for(int j = 0;500>j;j++){
for(int i = 500;i>0;i--){
h1[j*500+(500-i)] = h[(j*500)+(i-1)];
}
}
Let's look at one part in particular more closely:
h1[j*500+(500-i)] = h[(j*500)+(i-1)];
Images are scanned from position (0, 0) to (width, height), but the pixels come back as one contiguous array. Thus we use pseudo-array indexing to flip the image: j*500 selects the row (the y value) and (500-i) selects the column (the x value).
bi.setRGB(0, 0, bi.getWidth(), bi.getHeight(), h1, 0, bi.getWidth());
Finally, the image gets stored back into the BufferedImage variable.
Note that the hard-coded 500 refers to the image's x resolution (the snippet as written also uses it for the row count, so it assumes a 500 x 500 image). For a 1920 x 1080 image, for example, the width would be 1920. The logic is yours to adapt.
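As a sketch, the same flip with the hard-coded 500 replaced by the image's actual dimensions (my generalisation, not part of the snippet above):
int width = bi.getWidth();
int height = bi.getHeight();
int[] h = bi.getRGB(0, 0, width, height, null, 0, width);
int[] h1 = new int[h.length];
for (int j = 0; j < height; j++) {
    for (int i = width; i > 0; i--) {
        // mirror each row: column (i-1) of the source becomes column (width-i) of the copy
        h1[j * width + (width - i)] = h[(j * width) + (i - 1)];
    }
}
bi.setRGB(0, 0, width, height, h1, 0, width);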
I am using the following code:
public static BufferedImage enlarge(BufferedImage image, int n) {
int w = n * image.getWidth();
int h = n * image.getHeight();
BufferedImage enlargedImage =
new BufferedImage(w, h, image.getType());
for (int y=0; y < h; ++y)
for (int x=0; x < w; ++x)
enlargedImage.setRGB(x, y, image.getRGB(x/n, y/n));
return enlargedImage;
}
However, I wish to use it for a greyscale image. Does BufferedImage have an equivalent of setRGB and getRGB for the intensity?
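As far as I know there is no intensity-specific accessor on BufferedImage itself, but the underlying raster exposes per-band samples. A minimal sketch, assuming a single-band greyscale image such as TYPE_BYTE_GRAY (the name enlargeGrey is just for illustration):
public static BufferedImage enlargeGrey(BufferedImage image, int n) {
    int w = n * image.getWidth();
    int h = n * image.getHeight();
    BufferedImage enlarged = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    // Raster and WritableRaster live in java.awt.image
    Raster src = image.getRaster();
    WritableRaster dst = enlarged.getRaster();
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            // band 0 is the single grey channel; samples are the raw intensities
            dst.setSample(x, y, 0, src.getSample(x / n, y / n, 0));
    return enlarged;
}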
My real goal is to read the values from a graph in GIF format into some meaningful data structure, but in order to get started I need to be able to read the colour of each pixel of the GIF in question.
In order to test this I want to save the segment of the GIF I am reading to a file for visual analysis, but am having trouble.
After reading this post I attempted to do something similar; however, my output GIF always comes out completely black.
Can anyone tell me what I've misunderstood?
BufferedImage bi = ImageIO.read(new URL("http://upload.wikimedia.org/wikipedia/commons/3/36/Sunflower_as_GIF.gif"));
int x = 100;
int y = 100;
int width = 100;
int height = 100;
int[] data = grabPixels(bi, x, y, width, height);
BufferedImage img = createImage(data, width, height);
ImageIO.write(img, "gif", new File("part.gif"));
...
private int[] grabPixels(BufferedImage img, int x, int y, int width, int height)
{
try
{
PixelGrabber pg = new PixelGrabber(img, x, y, width, height, true);
pg.grabPixels();
if ((pg.getStatus() & ImageObserver.ABORT) != 0)
throw new RuntimeException("image fetch aborted or errored");
return convertPixels((int[]) pg.getPixels(), width, height);
}
catch (InterruptedException e)
{
throw new RuntimeException("interrupted waiting for pixels", e);
}
}
public int[] convertPixels(int[] pixels, int width, int height)
{
int[] newPix = new int[width * height * 3];
int n = 0;
for (int j = 0; j < height; j++)
{
for (int i = 0; i < width; i++)
{
int pixel = pixels[j * width + i];
newPix[n++] = (pixel >> 16) & 0xff;
newPix[n++] = (pixel >> 8) & 0xff;
newPix[n++] = (pixel) & 0xff;
}
}
return newPix;
}
private BufferedImage createImage(int[] pixels, int width, int height)
{
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
WritableRaster raster = (WritableRaster) image.getData();
raster.setPixels(0, 0, width, height, pixels);
return image;
}
All black sounds like zeros, as if the image hadn't loaded yet. You might check the result returned by grabPixels() or specify a timeout. Once you have a BufferedImage, you could use getRaster() and work with the WritableRaster.
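For what it's worth, a minimal sketch of createImage that writes into the live raster returned by getRaster(), rather than the copy returned by getData(), might look like this:
private BufferedImage createImage(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    WritableRaster raster = image.getRaster();      // the image's own raster, not a copy
    raster.setPixels(0, 0, width, height, pixels);  // pixels holds 3 samples (R, G, B) per pixel
    return image;
}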