I am trying to run a simple Java program that does the following: extract pixel data from a given image, then use this data to create a new image of the same type. The problem is that when I read back the pixel data of the created image, the pixel values differ from the ones I wrote into it. This happens not only for .jpg images but also for some .png images (so it is not restricted to a single image type).
Here is my code:
package com.alex;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public class Test {
    public static void main(String[] args) {
        try {
            // Read source image
            BufferedImage img = ImageIO.read(new File("D:/field.png"));
            int width = img.getWidth();
            int height = img.getHeight();
            int[] imagePixels = new int[width * height];
            img.getRGB(0, 0, width, height, imagePixels, 0, width);
            // Create copy image
            BufferedImage destImg = new BufferedImage(img.getWidth(), img.getHeight(), img.getType());
            destImg.setRGB(0, 0, img.getWidth(), img.getHeight(), imagePixels, 0, img.getWidth());
            File out = new File("D:/test.png");
            ImageIO.write(destImg, "png", out);
            // Extract copy image pixels
            BufferedImage copy = ImageIO.read(new File("D:/test.png"));
            int width1 = copy.getWidth();
            int height1 = copy.getHeight();
            int[] extractedPixels = new int[width1 * height1];
            copy.getRGB(0, 0, width1, height1, extractedPixels, 0, width1);
            System.out.println("The 2 dimensions are " + imagePixels.length + " " + extractedPixels.length);
            // Compare the pixels from the 2 images
            int k = 0;
            for (int i = 0; i < imagePixels.length; i++) {
                if (imagePixels[i] != extractedPixels[i]) {
                    k++;
                }
            }
            System.out.println("Number of different pixels was: " + k);
        } catch (IOException e) {
            System.out.println("Exception was thrown during reading of image: " + e.getMessage());
        }
    }
}
Unfortunately, quite often and unpredictably, the pixel data of the two images differ. Could someone please help me find a method so that, at least for a given image type, the values don't get modified?
Edit: Here is an image that fails in the above process.
Make sure you are using the correct color model for reading and writing.
According to the BufferedImage.getRGB() documentation,
Returns an array of integer pixels in the default RGB color model (TYPE_INT_ARGB) and default sRGB color space, from a portion of the image data. Color conversion takes place if the default model does not match the image ColorModel. There are only 8-bits of precision for each color component in the returned data when using this method. With a specified coordinate (x, y) in the image, the ARGB pixel can be accessed in this way:
pixel = rgbArray[offset + (y-startY)*scansize + (x-startX)];
[Edit]
You need to use the constructor BufferedImage(width, height, imageType, IndexColorModel), as indicated in the Javadoc for your image type (TYPE_BYTE_BINARY):
When this type is used as the imageType argument to the BufferedImage constructor that takes an imageType argument but no ColorModel argument, a 1-bit image is created with an IndexColorModel with two colors in the default sRGB ColorSpace: {0, 0, 0} and {255, 255, 255}.
Images with 2 or 4 bits per pixel may be constructed via the BufferedImage constructor that takes a ColorModel argument by supplying a ColorModel with an appropriate map size.
(emphasis mine)
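As an illustration, here is a minimal sketch of that idea (one possible approach, not the only fix): instead of round-tripping the pixels through getRGB()/setRGB() and the default sRGB model, give the copy the same ColorModel as the source and transfer the raw raster data. It requires the java.awt.image.ColorModel and java.awt.image.WritableRaster imports in addition to the ones in the question.
BufferedImage img = ImageIO.read(new File("D:/field.png"));
ColorModel cm = img.getColorModel();
WritableRaster raster = cm.createCompatibleWritableRaster(img.getWidth(), img.getHeight());
BufferedImage destImg = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
destImg.setData(img.getRaster()); // copy the raw samples instead of ARGB-converted values
ImageIO.write(destImg, "png", new File("D:/test.png"));
Because no color conversion happens, the written samples should match the original exactly (subject to the output format, of course; writing to JPEG is still lossy).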
Try this: first, print the values at the same index in the two arrays.
If the results differ, the problem is in your color model; if they are the same, your comparison is not working.
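For example (a hypothetical debugging snippet, not part of the original answer), printing the first mismatching value in hexadecimal makes it easy to see whether only some components, such as alpha, are being altered:
for (int i = 0; i < imagePixels.length; i++) {
    if (imagePixels[i] != extractedPixels[i]) {
        // %08X prints the full ARGB value, so the differing component is visible
        System.out.printf("First difference at %d: %08X vs %08X%n", i, imagePixels[i], extractedPixels[i]);
        break;
    }
}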
What I want to do is very simple: I have an image which is basically a single-color image with alpha.
EDIT: since my post was immediately tagged as a duplicate, here are the main differences from the other post:
The other topic's image has several colors, mine has only one
The accepted answer there is the one I implemented... and I'm saying I have an issue with it because it's too slow
It means that all pixels are either:
White (255;255;255) and Transparent (0 alpha)
or Black (0;0;0) and Opaque (alpha between 0 and 255)
Alpha is not always the same, I have different shades of black.
My image is produced with Photoshop with the following settings:
Mode: RGB color, 8 Bits/Channel
Format: PNG
Compression: Smallest / Slow
Interlace: None
Considering it has only one color, I suppose I could use other modes (such as grayscale, maybe?), so tell me if you have suggestions about that.
What I want to do is REPLACE the black color with another color in my Java application.
Reading the other topics, it seems that changing the ColorModel is the right thing to do, but I'm honestly totally lost on how to do it correctly.
Here is what I did:
public Test() {
    BufferedImage image = null, newImage;
    IndexColorModel newModel;
    WritableRaster newRaster;
    try {
        image = ImageIO.read(new File("E:\\MyImage.png"));
    }
    catch (IOException e) {
        e.printStackTrace();
    }
    newModel = createColorModel();
    newRaster = newModel.createCompatibleWritableRaster(image.getWidth(), image.getHeight());
    newImage = new BufferedImage(newModel, newRaster, false, null);
    newImage.getGraphics().drawImage(image, 0, 0, null);
}

private IndexColorModel createColorModel() {
    int size = 256;
    byte[] r = new byte[size];
    byte[] g = new byte[size];
    byte[] b = new byte[size];
    byte[] a = new byte[size];
    for (int i = 0; i < size; i++) {
        r[i] = (byte) 21;
        g[i] = (byte) 0;
        b[i] = (byte) 149;
        a[i] = (byte) i;
    }
    return new IndexColorModel(16, size, r, g, b, a);
}
This produces the expected result.
newImage is the same as image, but with the new color (21;0;149).
But there is a flaw: the following line is too slow:
newImage.getGraphics().drawImage(image, 0, 0, null);
Doing this on a big image can take up to a few seconds, while I need this to be instantaneous.
I'm pretty sure I'm not doing this the right way.
Could you tell me how to achieve this efficiently?
Please consider that my image will always be single color with alpha, so suggestions about the image format are welcome.
If your input image is always in palette + alpha mode, using an IndexColorModel compatible with the one from your createColorModel(), you should be able to do:
BufferedImage image ...; // As before
IndexColorModel newModel = createColorModel();
BufferedImage newImage = new BufferedImage(newModel, image.getRaster(), newModel.isAlphaPremultiplied(), null);
This will create a new image with the new color model, but re-using the raster from the original. No creation of large data arrays or drawing is required, so it should be very fast. But note that image and newImage will now share the backing buffer, so any changes drawn into one of them will be reflected in the other (probably not a problem in your case).
PS: You might need to tweak your createColorModel() method to get the exact result you want, but as I don't have your input file, I can't verify whether it works or not.
I have this method that I found on stackoverflow:
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, width, height, pixels);
    return image;
}
using it like this:
image = getImageFromArray(dstpixels,img.getWidth(this),img.getHeight(this));
In order to debug, I printed out the width, height and length of dstpixels; here are the results:
700 389
272300
still I get this error
Exception in thread "AWT-EventQueue-0" java.lang.ArrayIndexOutOfBoundsException: 272300
on this exact line
raster.setPixels(0,0,width,height,pixels);
What am I missing?
It looks like Raster doesn't treat the pixels array as one in which each element represents a single pixel. It treats it as an array in which each element contains a single sample (one component) of a pixel.
So if it is an ARGB type of image, the pixel array will contain the info about the first pixel in its first four elements (at indexes [0,1,2,3]), where
R will be stored at position [0]
G at position [1]
B at position [2]
and A (alpha) at position [3].
Info about the second pixel will be placed at indexes [4,5,6,7], the third at [8,9,10,11], and so on.
So the main problem from your question can be solved by allocating an int[] pixel array 4 times larger than the number of pixels for an ARGB type of image (3 times larger for RGB).
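For instance, if the source data is the packed layout produced by getRGB() (one 0xAARRGGBB int per pixel), a small conversion along these lines (a sketch, assuming that packed layout) produces an array that raster.setPixels() accepts for a TYPE_INT_ARGB image:
// Hypothetical helper: expands packed 0xAARRGGBB ints (as returned by getRGB())
// into the R, G, B, A sample layout expected by setPixels() for TYPE_INT_ARGB.
static int[] toSamples(int[] packed) {
    int[] samples = new int[packed.length * 4];
    for (int i = 0; i < packed.length; i++) {
        samples[i * 4]     = (packed[i] >> 16) & 0xFF;  // R
        samples[i * 4 + 1] = (packed[i] >> 8)  & 0xFF;  // G
        samples[i * 4 + 2] =  packed[i]        & 0xFF;  // B
        samples[i * 4 + 3] = (packed[i] >>> 24) & 0xFF; // A
    }
    return samples;
}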
Another problem in your code is that image.getData()
Returns the image as one large tile. The Raster returned is a copy of the image data is not updated if the image is changed.
(emphasis mine)
so manipulating data from that raster will not affect the image. To update the image with data from the raster, you need to add image.setData(raster); in your getImageFromArray method, like this:
public static Image getImageFromArray(int[] pixels, int w, int h) {
    BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, w, h, pixels);
    image.setData(raster); // <-- add this line
    return image;
}
OR don't use image.getData() at all and instead manipulate the raster used by the image directly. You can get it via image.getRaster().
Demo:
public static void main(String[] args) {
    int width = 200, height = 300;
    // array needs to be 4 times larger than the amount of pixels
    int[] pixels = new int[4 * width * height];
    for (int i = 0; i < pixels.length; i++) {
        //if (i % 4 == 0) { pixels[i] = 255; } // R, default 0
        //if (i % 4 == 1) { pixels[i] = 255; } // G, default 0
        if (i % 4 == 2) { pixels[i] = 255; }   // B, default 0
        // Alpha
        if (i % 4 == 3) {
            pixels[i] = (int) (255 * (i / 4 % width) / (double) width);
        }
    }
    Image image = getImageFromArray(pixels, width, height);
    showImage(image);
}

public static void showImage(Image img) {
    JFrame frame = new JFrame();
    frame.setLayout(new FlowLayout());
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    JLabel lab = new JLabel(new ImageIcon(img));
    frame.add(lab);
    frame.pack();
    frame.setVisible(true);
}

public static Image getImageFromArray(int[] pixels, int w, int h) {
    BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    WritableRaster raster = image.getRaster();
    raster.setPixels(0, 0, w, h, pixels);
    return image;
}
In order to debug I printed out Width, Height and length of dstpixels, here are the results: 700 389 272300
And
still I get this error Exception in thread "AWT-EventQueue-0" java.lang.ArrayIndexOutOfBoundsException: 272300
If the array size is N, then the index of the first element is 0 and the index of the last element is N-1 (in your case 0 and 272299)
Seems that one of your parameters should be 272300-1!
The exception is telling you that something accesses index 272300, which won't work if that is the SIZE of the array; the last valid index is, as said, 272300-1.
In other words: always read the exception message carefully, it tells you all you need to know!
While working on a Java application which requires rendering sprites, I thought that, instead of loading a .png or .jpg file as an Image or BufferedImage, I could load a byte[] array containing indices into a color palette (16 colors per palette, so two pixels per byte), then render that.
The method I currently have generates a BufferedImage from the byte[] array and color palette while initializing, taking extra time to initialize but running smoothly after that, which works fine, but there are only 4 sprites in the program so far. I'm worried that when there are 100+ sprites, storing all of them as BufferedImages will be too taxing on the memory. And not only would that mean 1 BufferedImage per sprite, but actually 1 image for each sprite/palette combination I'd want to use.
This function creates the BufferedImage:
protected BufferedImage genImage(ColorPalette cp, int width, int height) { // Generate the BufferedImage to render from the byte[]
    BufferedImage ret = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB); // Create the image to return
    for (int j = 0; j < height; j++) { // Run a for loop for each pixel
        for (int i = 0; i < width; i++) {
            int index = (j * width + i) / 2; // Get the index of the needed byte
            int value = image[index] & 0x00ff; // Convert to "unsigned byte", i.e. int
            byte thing; // Declare the actual color index as a byte
            if (i % 2 == 0) thing = (byte) ((value & 0b11110000) >>> 4); // Even index (the 1st pixel included, since it starts at 0): take the high 4 bits
            else thing = (byte) (value & 0b00001111); // Odd index: take the low 4 bits
            ret.setRGB(i, j, cp.getColor(thing & 0x00ff).getRGB()); // Set the pixel in the image to the value from the color palette
        }
    }
    return ret;
}
And this one actually renders it to the screen:
public void render(Graphics g, int x, int y) { // Graphics to render to and x/y coords
    g.drawImage(texture, x, y, TILE_WIDTH, TILE_HEIGHT, null); // Render it
}
I've experimented with another method that renders from the byte[] directly, without the need for a BufferedImage, which should theoretically save memory by avoiding a BufferedImage per sprite, but it ended up being very, very slow. It took several seconds to render each frame with at most 25 sprites on the screen! Note that g is a Graphics object.
private void drawSquare(int x, int y, int scale, Color c) { // Draw each "pixel" to scale
    if (g == null) { // If null, quit
        return;
    }
    g.setColor(c); // Set the color
    for (int i = x; i < x + scale; i++) { // Loop through each pixel
        if (i < 0) continue;
        for (int j = y; j < y + scale; j++) {
            if (j < 0) continue;
            g.fillRect(x, y, scale, scale); // Fill the rect to make the "pixel"
        }
    }
}

public void drawBytes(byte[] image, int x, int y, int width, int height, int scale, ColorPalette palette) { // Draw a byte[] image with the given x/y coords, width/height, scale and color palette
    if (image.length < width * height / 2) { // If the image is too small, exit
        return;
    }
    for (int j = 0; j < height; j++) { // Loop through each pixel
        for (int i = 0; i < width; i++) {
            int index = (j * width + i) / 2; // Get index
            int value = image[index]; // Get the byte
            byte thing; // Get the high or low nibble depending on even/odd
            if (i % 2 == 0) thing = (byte) ((value & 0b11110000) >>> 4);
            else thing = (byte) (value & 0b00001111);
            drawSquare((int) (x + scale * i), (int) (y + scale * j), scale, palette.getColor(thing)); // Draw the pixel
        }
    }
}
So is there a more efficient way to render these byte[] arrays without the need for BufferedImages? Or will it really not be problematic to have several hundred BufferedImages loaded into memory?
EDIT: I've also tried the no-BufferedImage method, but with g being one large BufferedImage to which everything is rendered and which is then rendered to the Canvas. The primary difference is that g.fillRect(... is changed to g.setRGB(... in that method, but it was similarly slow.
EDIT: The images I'm dealing with are 16x16 and 32x32 pixels.
If memory usage is your main concern, I'd use BufferedImages with IndexColorModel (TYPE_BYTE_BINARY). This would perfectly reflect your byte[] image and ColorPalette, and waste very little memory. They will also be reasonably fast to draw.
This approach will use about 1/8th of the memory used by the initial use of TYPE_INT_RGB BufferedImages, because we retain the 4 bits per pixel, instead of 32 bits (an int is 32 bits) per pixel (plus some overhead for the palette, of course).
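To make that concrete for the 16x16 sprites mentioned in the question: 16 * 16 pixels at 4 bits each is 128 bytes of pixel data, versus 16 * 16 * 4 bytes = 1024 bytes for an int-based image, an 8x reduction before counting the small palette overhead.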
public static void main(String[] args) {
    byte[] palette = new byte[16 * 3]; // 16 color palette without alpha
    byte[] pixels = new byte[(16 * 16 * 4) / 8]; // 16 * 16 * 4 bit
    Random random = new Random(); // For test purposes, just fill arrays with random data
    random.nextBytes(palette);
    random.nextBytes(pixels);

    // Create ColorModel & Raster from palette and pixels
    IndexColorModel cm = new IndexColorModel(4, 16, palette, 0, false, -1); // -1 for no transparency
    DataBufferByte buffer = new DataBufferByte(pixels, pixels.length);
    WritableRaster raster = Raster.createPackedRaster(buffer, 16, 16, 4, null);

    // Create BufferedImage from CM and Raster
    final BufferedImage image = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
    System.out.println("image: " + image); // "image: BufferedImage#...: type = 12 ..."

    SwingUtilities.invokeLater(new Runnable() {
        @Override
        public void run() {
            JFrame frame = new JFrame("Foo");
            frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
            frame.add(new JLabel(new ImageIcon(image)));
            frame.pack();
            frame.setVisible(true);
        }
    });
}
The above code will create fully opaque (Transparency.OPAQUE) images that will occupy the entire 16 x 16 pixel block.
If you want bitmask (Transparency.BITMASK) transparency, where every pixel is either fully opaque or fully transparent, just change the last parameter of the IndexColorModel constructor to the palette index you want to be fully transparent:
int transparentIndex = ...;
IndexColorModel cm = new IndexColorModel(4, 16, palette, 0, false, transparentIndex);
// ...everything else as above
This will allow your sprites to have any shape you want.
If you want translucent pixels (Transparency.TRANSLUCENT), where pixels can be semi-transparent, you can also have that. You will then have to change the palette array to 16 * 4 entries, and include a sample for the alpha value as the 4th sample for each entry (quadruple). Then invoke the IndexColorModel constructor with the last parameter set to true (hasAlpha):
byte[] palette = new byte[16 * 4]; // 16 color palette with alpha (translucency)
// ...
IndexColorModel cm = new IndexColorModel(4, 16, palette, 0, true); // true for palette with alpha samples
// ...everything else as above
This will allow smoother gradients between the transparent and non-transparent parts of the sprites. But with only 16 colors in the palette, you won't have many entries available for transparency.
Note that it is possible to re-use the Rasters and IndexColorModels here, in all of the above examples, to save further memory for images using the same palette, or even images using the same image data with different palettes. There's one caveat, though: the images sharing rasters will be "live views" of each other, so if you make any changes to one, you will change them all. But if your images are never changed, you can exploit this fact.
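As a small sketch of that sharing (paletteA and paletteB are hypothetical 16 * 3 byte palette arrays like the one above, and pixels is the packed 4-bit data), the same pixel data can be displayed with two different palettes without copying it:
DataBufferByte buffer = new DataBufferByte(pixels, pixels.length);
WritableRaster shared = Raster.createPackedRaster(buffer, 16, 16, 4, null);

IndexColorModel cmA = new IndexColorModel(4, 16, paletteA, 0, false, -1);
IndexColorModel cmB = new IndexColorModel(4, 16, paletteB, 0, false, -1);

// Two images, one set of pixel data; drawing into either one changes both.
BufferedImage spriteA = new BufferedImage(cmA, shared, cmA.isAlphaPremultiplied(), null);
BufferedImage spriteB = new BufferedImage(cmB, shared, cmB.isAlphaPremultiplied(), null);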
That said, the above really is a compromise between saving memory and having "reasonable" performance. If performance (i.e. frames per second) is more important, just ignore the memory usage and create BufferedImages that are compatible with your graphics card's/the OS's native pixel layout. You can do this by using gfxConfig.createCompatibleImage(...), where gfxConfig is a GraphicsConfiguration obtained from a visible component (component.getGraphicsConfiguration()) or from the local GraphicsEnvironment.
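For completeness, a minimal sketch of that alternative (sprite here stands for one of the indexed images created above): convert the sprite once into a display-compatible image and draw that from then on:
GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDefaultConfiguration();
BufferedImage compatible = gc.createCompatibleImage(sprite.getWidth(), sprite.getHeight(), Transparency.BITMASK);
Graphics2D g2 = compatible.createGraphics();
g2.drawImage(sprite, 0, 0, null); // one-time conversion
g2.dispose();
// "compatible" now matches the screen's native pixel layout and draws fast.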
Pre: Receives a buffered image and a number of pixels to remove
Post: creates and returns a copy of the received image with the given number of the image's remaining pixels removed
I am having trouble with this method because I need to remove random pixels... so far I have only made a new copy of the image, but I need to change it so that the given number of pixels is removed... can anyone help?
public static BufferedImage removePixels(BufferedImage img, int numToRemove)
{
    // so far what I have gotten
    BufferedImage copy = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_ARGB);
    copy.getGraphics().drawImage(img, 0, 0, null);
    return copy;
}
bufferedImage.setRGB(int x, int y, int rgb) and bufferedImage.getRGB(int x, int y) might be what you're looking for.
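As a sketch of how that could look (one interpretation, assuming the TYPE_INT_ARGB copy from the question, where "removing" a pixel means making it fully transparent):
public static BufferedImage removePixels(BufferedImage img, int numToRemove) {
    BufferedImage copy = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_ARGB);
    copy.getGraphics().drawImage(img, 0, 0, null);
    java.util.Random rnd = new java.util.Random();
    for (int n = 0; n < numToRemove; n++) {
        int x = rnd.nextInt(copy.getWidth());
        int y = rnd.nextInt(copy.getHeight());
        copy.setRGB(x, y, 0); // alpha 0 = fully transparent, i.e. the pixel is "removed"
    }
    return copy;
}
Note that this simple version may pick the same coordinate more than once; if exactly numToRemove distinct pixels must be cleared, the already-cleared positions would need to be tracked.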
I want to create an Image from a 2D array. I used the BufferedImage concept to construct the image, but there is a difference between the original image and the constructed image, as shown in the image below.
I am using the following code
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

/**
 *
 * @author pratibha
 */
public class ConstructImage {
    int[][] PixelArray;

    public ConstructImage() {
        try {
            BufferedImage bufferimage = ImageIO.read(new File("D:/q.jpg"));
            int height = bufferimage.getHeight();
            int width = bufferimage.getWidth();
            PixelArray = new int[width][height];
            for (int i = 0; i < width; i++) {
                for (int j = 0; j < height; j++) {
                    PixelArray[i][j] = bufferimage.getRGB(i, j);
                }
            }

            /////// create Image from this PixelArray
            BufferedImage bufferImage2 = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int Pixel = PixelArray[x][y] << 16 | PixelArray[x][y] << 8 | PixelArray[x][y];
                    bufferImage2.setRGB(x, y, Pixel);
                }
            }
            File outputfile = new File("D:\\saved.jpg");
            ImageIO.write(bufferImage2, "jpg", outputfile);
        }
        catch (Exception ee) {
            ee.printStackTrace();
        }
    }

    public static void main(String args[]) {
        ConstructImage c = new ConstructImage();
    }
}
You get an ARGB value from getRGB and setRGB takes an ARGB value, so doing this is enough:
bufferImage2.setRGB(x, y, PixelArray[x][y]);
From the API of BufferedImage.getRGB:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace.
...and from the API of BufferedImage.setRGB:
Sets a pixel in this BufferedImage to the specified RGB value. The pixel is assumed to be in the default RGB color model, TYPE_INT_ARGB, and default sRGB color space.
On the other hand, I would recommend that you paint the image instead:
Graphics g = bufferImage2.getGraphics();
g.drawImage(bufferimage, 0, 0, null);
g.dispose();
Then you wouldn't need to worry about any color model etc.