How do I create an Image from a byte array? - java

I want to be able to take the data of an image as an array, modify it, and then use that array to create a modified image. Here is what I attempted:
public class Blue {
public static void main (String [] args) throws AWTException, IOException {
Robot robot = new Robot ();
Dimension d = Toolkit.getDefaultToolkit().getScreenSize();
BufferedImage img = robot.createScreenCapture(new Rectangle(0,0,d.width,d.height));
int[] pixels = ((DataBufferInt)img.getRaster().getDataBuffer()).getData();
int[] newPixels = IntStream.range(0,pixels.length).parallel().filter(i ->{
int p = pixels[i];
// get red
int r = (p >> 16) & 0xff;
// get green
int g = (p >> 8) & 0xff;
// get blue
int b = p & 0xff;
return b >= 200;
}).toArray();
int[] output = new int[pixels.length];
for(int i = 0; i<newPixels.length; i++) {
output[newPixels[i]] = 0x0000FF;
}
File f = new File("Result.jpg");
ByteBuffer byteBuffer = ByteBuffer.allocate(output.length * 4);
for (int i = 0; i< output.length; i++) {
byteBuffer.putInt(output[i]);
}
byte[] array = byteBuffer.array();
InputStream stream = new ByteArrayInputStream(array);
BufferedImage image1 = ImageIO.read(stream);
System.out.println(image1.getWidth());
ImageIO.write(image1, "png", f);
}
}
Here is how it works:
1. The robot takes a screen capture of the screen, which is stored in a BufferedImage.
2. The data of the image is stored in an integer array.
3. An IntStream is used to select all pixel locations that correspond to sufficiently blue pixels.
4. These blue pixels are placed in an array called output at the same locations they were taken from; the rest of the array has value 0.
5. A destination file for my modified image is created.
6. I create a byte buffer that is 4 times the length of the output array, and data from the output array is placed in it.
7. I create a byte array from the buffer, then create an input stream with it.
8. Finally, I read the stream to create an Image from it.
9. I use System.out.println() to print some data from the image to see if the image exists.
Step 9 is where the problem shows up.
I keep getting a NullPointerException, which means the image doesn't exist; it is null.
I don't understand what I did wrong.
I tried using ByteArrayInputStream instead of InputStream, but that doesn't work either. Then I thought that maybe the first couple of bytes encode the format information for the image, so I tried copying them over to the output array, but that didn't solve the problem either. I am not sure why my byte array isn't turning into an image.

To summarize the comments in an answer: the problem is that you have an array of "raw" pixels, and try to pass that to ImageIO.read(). ImageIO.read() reads images stored in a defined file format, like PNG, JPEG or TIFF, while the pixel array is just pixels; it contains no information about image dimensions, color model, compression etc. If no plugin is found for the input, the method returns null (hence the NullPointerException).
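To illustrate the difference: ImageIO.read() only succeeds once the bytes are in a real file format. Here is a small standalone sketch (the tiny 4x3 image is made up for illustration) that encodes a BufferedImage to PNG bytes in memory and decodes them again:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class EncodedRoundTrip {
    public static void main(String[] args) throws IOException {
        BufferedImage img = new BufferedImage(4, 3, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0x0000FF); // one blue pixel

        // Encode to a real file format (PNG) in memory; now the bytes carry
        // dimensions, color model and compression info
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(img, "png", out);

        // These bytes ImageIO.read *can* decode; raw pixel bytes it cannot
        BufferedImage decoded = ImageIO.read(new ByteArrayInputStream(out.toByteArray()));
        System.out.println(decoded.getWidth() + "x" + decoded.getHeight());
    }
}
```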
To create a BufferedImage from the pixel array, you could create a raster around the array, pick a suitable color model and create a new BufferedImage using the constructor taking a Raster and ColorModel parameter. You can see how to do that in one of my other answers.
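As a sketch of that raster-based approach (assuming 0xRRGGBB-packed int pixels; the names and the tiny 2x2 array are illustrative):

```java
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.DataBufferInt;
import java.awt.image.DirectColorModel;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class PixelsToImage {
    public static void main(String[] args) {
        int w = 2, h = 2;
        int[] pixels = {0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF}; // 0xRRGGBB

        // Wrap the existing array; no pixel data is copied
        DataBufferInt buffer = new DataBufferInt(pixels, pixels.length);
        WritableRaster raster = Raster.createPackedRaster(
                buffer, w, h, w, new int[] {0xFF0000, 0x00FF00, 0x0000FF}, null);

        // Color model matching the 0xRRGGBB packing above
        ColorModel cm = new DirectColorModel(24, 0xFF0000, 0x00FF00, 0x0000FF);
        BufferedImage image = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);

        System.out.println(image.getRGB(1, 1) == 0xFFFFFFFF); // white, opaque alpha
    }
}
```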
However, as you already have a BufferedImage and access to its pixels, it's much easier (and cheaper CPU/memory wise) to just reuse that.
You can replace your code with the following (see comments for details and relation to your steps):
public class Blue {
public static void main (String [] args) throws AWTException, IOException {
// 1. Create screen capture
Robot robot = new Robot ();
Dimension d = Toolkit.getDefaultToolkit().getScreenSize();
BufferedImage img = robot.createScreenCapture(new Rectangle(0, 0, d.width, d.height));
// 2: Get backing array
int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
// 3: Find all "sufficiently blue" pixels
int[] bluePixels = IntStream.range(0, pixels.length).parallel()
.filter(i -> (pixels[i] & 0xff) >= 200).toArray(); // parentheses needed: >= binds tighter than &
// 4a: Clear all pixels to opaque black
for (int i = 0; i < pixels.length; i++) {
pixels[i] = 0xFF000000;
}
// 4b: Set all blue pixels to opaque blue
for (int i = 0; i < bluePixels.length; i++) {
pixels[bluePixels[i]] = 0xFF0000FF;
}
// 5: Make sure the file extension matches the file format for less confusion... 😀
File f = new File("result.png");
// 9: Print & write image (steps 6-8 is not needed)
System.out.println(img);
ImageIO.write(img, "png", f);
}
}

Related

How to replace a color by another in a BufferedImage?

What I want to do is very simple : I have an image which is basically a single color image with alpha.
EDIT : since my post has been light speed tagged as duplicate, here are the main differences with the other post :
The other topic's image has several colors, mine has only one
The owner accepted answer is the one I implemented... and I'm saying I have an issue with it because it's too slow
It means that all pixels are either :
White (255;255;255) and Transparent (0 alpha)
or Black (0;0;0) and Opaque (alpha between 0 and 255)
Alpha is not always the same, I have different shades of black.
My image is produced with Photoshop with the following settings :
Mode : RGB color, 8 Bits/Channel
Format : PNG
Compression : Smallest / Slow
Interlace : None
Considering it has only one color I suppose I could use other modes (such as Grayscale maybe ?) so tell me if you have suggestions about that.
What I want to do is REPLACE the Black color by another color in my java application.
Reading the other topics it seems that changing the ColorModel is the good thing to do, but I'm honestly totally lost on how to do it correctly.
Here is what I did :
public Test() {
BufferedImage image, newImage;
IndexColorModel newModel;
WritableRaster newRaster;
try {
image = ImageIO.read(new File("E:\\MyImage.png"));
}
catch (IOException e) { e.printStackTrace(); return; }
newModel = createColorModel();
newRaster = newModel.createCompatibleWritableRaster(image.getWidth(), image.getHeight());
newImage = new BufferedImage(newModel, newRaster, false, null);
newImage.getGraphics().drawImage(image, 0, 0, null);
}
private IndexColorModel createColorModel() {
int size = 256;
byte[] r = new byte[size];
byte[] g = new byte[size];
byte[] b = new byte[size];
byte[] a = new byte[size];
for (int i = 0; i < size; i++) {
r[i] = (byte) 21;
g[i] = (byte) 0;
b[i] = (byte) 149;
a[i] = (byte) i;
}
return new IndexColorModel(16, size, r, g, b, a);
}
This produces the expected result.
newImage is the same as image, but with the new color (21;0;149).
But there is a flaw : the following line is too slow :
newImage.getGraphics().drawImage(image, 0, 0, null);
Doing this on a big image can take up to a few seconds, while I need this to be instantaneous.
I'm pretty sure I'm not doing this the right way.
Could you tell me how to achieve this goal efficiently?
Please consider that my image will always be single color with alpha, so suggestions about image format are welcomed.
If your input image is always in palette + alpha mode, using an IndexColorModel compatible with the one from your createColorModel(), you should be able to do:
BufferedImage image ...; // As before
IndexColorModel newModel = createColorModel();
BufferedImage newImage = new BufferedImage(newModel, image.getRaster(), newModel.isAlphaPremultiplied(), null);
This will create a new image with the new color model, but re-using the raster from the original. No creation of large data arrays or drawing is required, so it should be very fast. But note: image and newImage will now share the backing buffer, so any changes drawn to one of them will be reflected in the other (probably not a problem in your case).
PS: You might need to tweak your createColorModel() method to get the exact result you want, but as I don't have your input file, I can't verify whether it works or not.

Exporting an image from screen(viewport) in java OpenGL

I am a newbie in OpenGL programming. I am making a Java program with OpenGL, in which I drew many cubes. I now want to implement a screenshot function in my program, but I just couldn't make it work. The situation is as follows:
I used an FPSAnimator to refresh my drawable at 60 fps
I drew dozens of cubes inside my Display.
I added a KeyListener to my panel, if I pressed the alt key, the program will run the following method :
public static void exportImage() {
int[] bb = new int[Constants.PanelSize.width*Constants.PanelSize.height*4];
IntBuffer ib = IntBuffer.wrap(bb);
ib.position(0);
Constants.gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
Constants.gl.glReadPixels(0,0,Constants.PanelSize.width,Constants.PanelSize.height,GL2.GL_RGBA,GL2.GL_UNSIGNED_BYTE,ib);
System.out.println(Constants.gl.glGetError());
ImageExport.savePixelsToPNG(bb,Constants.PanelSize.width,Constants.PanelSize.height, "imageFilename.png");
}
// Constant is a class which I store all my global variables in static type
The output in the console was 0, which means no errors. I printed the contents of the buffer and they were all zeros.
I checked the output file and it was only 1kB.
What should I do? Are there any good suggestions for me to export the screen contents to an image file using OpenGL? I heard that there are several libraries available but I don't know which one is suitable. Any help is appreciated T_T (plz forgive me if I have any grammatical mistakes ... )
You can do something like this, supposing you are drawing to the default framebuffer:
protected void saveImage(int width, int height) {
try {
GL4 gl4 = GLContext.getCurrentGL().getGL4();
BufferedImage screenshot = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
Graphics graphics = screenshot.getGraphics();
ByteBuffer buffer = GLBuffers.newDirectByteBuffer(width * height * 4);
gl4.glReadBuffer(GL_BACK);
gl4.glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
for (int h = 0; h < height; h++) {
for (int w = 0; w < width; w++) {
graphics.setColor(new Color((buffer.get() & 0xff), (buffer.get() & 0xff),
(buffer.get() & 0xff)));
buffer.get();
graphics.drawRect(w, height - 1 - h, 1, 1);
}
}
BufferUtils.destroyDirectBuffer(buffer);
File outputfile = new File("D:\\Downloads\\texture.png");
ImageIO.write(screenshot, "png", outputfile);
} catch (IOException ex) {
Logger.getLogger(EC_DepthPeeling.class.getName()).log(Level.SEVERE, null, ex);
}
}
Essentially you create a BufferedImage and a direct buffer. Then you use Graphics to render the content of the back buffer pixel by pixel into the BufferedImage.
You need the additional buffer.get() because the fourth byte is the alpha value, and you need height - 1 - h to flip the image vertically (OpenGL rows start at the bottom).
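The flipping and alpha-skipping logic can be sketched in isolation; here the ByteBuffer simply stands in for the glReadPixels result, so no OpenGL context is needed (the 2x2 red/blue content is made up for illustration):

```java
import java.awt.image.BufferedImage;
import java.nio.ByteBuffer;

public class FlipReadback {
    public static void main(String[] args) {
        int width = 2, height = 2;
        // Fake RGBA read-back: OpenGL delivers rows bottom-up,
        // so the first row in the buffer is the *bottom* row (red here)
        ByteBuffer buffer = ByteBuffer.allocate(width * height * 4);
        byte[] red  = {(byte) 255, 0, 0, (byte) 255};
        byte[] blue = {0, 0, (byte) 255, (byte) 255};
        for (int i = 0; i < width; i++) buffer.put(red);
        for (int i = 0; i < width; i++) buffer.put(blue);
        buffer.flip();

        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (int h = 0; h < height; h++) {
            for (int w = 0; w < width; w++) {
                int r = buffer.get() & 0xff, g = buffer.get() & 0xff, b = buffer.get() & 0xff;
                buffer.get(); // skip the alpha byte
                img.setRGB(w, height - 1 - h, (r << 16) | (g << 8) | b); // flip vertically
            }
        }
        System.out.println(Integer.toHexString(img.getRGB(0, 0))); // top-left: the blue row
    }
}
```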
Edit: of course, you need to read the pixels at a moment when the frame actually contains what you are looking for.
You have several options:
trigger a boolean variable and call it directly from the display method, at the end, when everything you wanted has been rendered
disable the automatic buffer swapping, call from the key listener the display() method, read the back buffer and enable the swapping again
call from the key listener the same code you would call in the display
You could use Robot class to take screenshot:
BufferedImage screenshot = new Robot().createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
ImageIO.write(screenshot, "png", new File("screenshot.png"));
There are two things to consider:
You take the screenshot from the whole screen, so you could determine the coordinates of your viewport and capture only the part of interest.
Something can reside on top of your viewport (another window), so the viewport could be hidden by it. This is unlikely, but it can happen.
When you use buffers with LWJGL, they almost always need to be directly allocated. The OpenGL library doesn't understand how to interface with Java arrays; for the underlying memory operations to work, they need to be applied to natively allocated (or, in this context, directly allocated) memory.
If you're using LWJGL 3.x, that's pretty simple:
//Check the math, because for an image array, given that Ints are 4 bytes, I think you can just allocate without multiplying by 4.
IntBuffer ib = org.lwjgl.BufferUtils.createIntBuffer(Constants.PanelSize.width * Constants.PanelSize.height);
And if that function isn't available, this should suffice:
//Here you actually *do* have to multiply by 4.
IntBuffer ib = java.nio.ByteBuffer.allocateDirect(Constants.PanelSize.width * Constants.PanelSize.height * 4).asIntBuffer();
And then you do your normal code:
Constants.gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
Constants.gl.glReadPixels(0, 0, Constants.PanelSize.width, Constants.PanelSize.height, GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, ib);
System.out.println(Constants.gl.glGetError());
int[] bb = new int[Constants.PanelSize.width * Constants.PanelSize.height];
ib.get(bb); //Stores the contents of the buffer into the int array.
ImageExport.savePixelsToPNG(bb, Constants.PanelSize.width, Constants.PanelSize.height, "imageFilename.png");

Quickly Render Bytes to Canvas with a Color Palette

While working on a Java application which requires rendering sprites, I thought that, instead of loading a .png or .jpg file as an Image or BufferedImage, I could load up a byte[] array containing indices for a color palette (16 colors per palette, so two pixels per byte), then render that.
The method I currently have generates a BufferedImage from the byte[] array and color palette while initializing, taking extra time to initialize but running smoothly after that, which works fine, but there are only 4 sprites in the program so far. I'm worried that when there are 100+ sprites, storing all of them as BufferedImages will be too taxing on the memory. And not only would that mean 1 BufferedImage per sprite, but actually 1 image for each sprite/palette combination I'd want to use.
This function creates the BufferedImage:
protected BufferedImage genImage(ColorPalette cp, int width, int height){ //Function to generate BufferedImage to render from the byte[]
BufferedImage ret = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB); //Create the Image to return
for(int j=0; j<height; j++){ //Run a for loop for each pixel
for(int i=0; i<width; i++){
int index = (j * width + i)/2; //Get the index of the needed byte
int value = image[index] & 0x00ff; //Convert to "unsigned byte", or int
byte thing; //declare actual color index as byte
if(i % 2 == 0)thing = (byte)((value & 0b11110000) >>> 4); //If it's an even index(since it starts with 0, this includes the 1st one), get the first 4 bits of the value
else thing = (byte)(value & 0b00001111); //If it's odd, get the last four bits
ret.setRGB(i, j, cp.getColor(thing & 0x00ff).getRGB()); //Set the pixel in the image to the value in the Color Palette
}
}
return ret;
}
And this one actually renders it to the screen:
public void render(Graphics g, int x, int y){ //Graphics to render to and x/y coords
g.drawImage(texture, x, y, TILE_WIDTH, TILE_HEIGHT, null); //Render it
}
I've experimented with another method that renders from the byte[] directly w/o the need for a BufferedImage, which should theoretically succeed in saving memory by avoiding use of a BufferedImage for each sprite, but it ended up being very, very slow. It took several seconds to render each frame w/ at most 25 sprites to render on the screen! Note that g is a Graphics object.
private void drawSquare(int x, int y, int scale, Color c){ //Draw each "pixel" to scale
if(g == null){ //If null, quit
return;
}
g.setColor(c); //Set the color
for(int i=x; i<x+scale; i++){ //Loop through each pixel
if(i<0)continue;
for(int j=y; j<y+scale; j++){
if(j<0)continue;
g.fillRect(x, y, scale, scale); //Fill the rect to make the "pixel"
}
}
}
public void drawBytes(byte[] image, int x, int y, int width, int height, int scale, ColorPalette palette){ //Draw a byte[] image with given byte[], x/y coords, width/height, scale, and color palette
if(image.length < width * height / 2){ //If the image is too small, exit
return;
}
for(int j=0; j<height; j++){ //Loop through each pixel
for(int i=0; i<width; i++){
int index = (j * width + i)/2; //Get index
int value = image[index]; //get the byte
byte thing; //get the high or low value depending on even/odd
if(i % 2 == 0)thing = (byte)((value & 0b11110000) >>> 4);
else thing = (byte)(value & 0b00001111);
drawSquare((int)(x + scale * i), (int)(y + scale * j), scale, palette.getColor(thing)); //draw the pixel
}
}
}
So is there a more efficient way to render these byte[] arrays without the need for a BufferedImage per sprite? Or will it really not be problematic to have several hundred BufferedImages loaded into memory?
EDIT: I've also tried doing the no-BufferedImage methods, but with g as the one large BufferedImage to which everything is rendered, and is then rendered to the Canvas. The primary difference is that g.fillRect(... is changed to g.setRGB(... in that method, but it was similarly slow.
EDIT: The images I'm dealing with are 16x16 and 32x32 pixels.
If memory usage is your main concern, I'd use BufferedImages with IndexColorModel (TYPE_BYTE_BINARY). This would perfectly reflect your byte[] image and ColorPalette, and waste very little memory. They will also be reasonably fast to draw.
This approach will use about 1/8th of the memory used by the initial use of TYPE_INT_RGB BufferedImages, because we retain the 4 bits per pixel, instead of 32 bits (an int is 32 bits) per pixel (plus some overhead for the palette, of course).
public static void main(String[] args) {
byte[] palette = new byte[16 * 3]; // 16 color palette without alpha
byte[] pixels = new byte[(16 * 16 * 4) / 8]; // 16 * 16 * 4 bit
Random random = new Random(); // For test purposes, just fill arrays with random data
random.nextBytes(palette);
random.nextBytes(pixels);
// Create ColorModel & Raster from palette and pixels
IndexColorModel cm = new IndexColorModel(4, 16, palette, 0, false, -1); // -1 for no transparency
DataBufferByte buffer = new DataBufferByte(pixels, pixels.length);
WritableRaster raster = Raster.createPackedRaster(buffer, 16, 16, 4, null);
// Create BufferedImage from CM and Raster
final BufferedImage image = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
System.out.println("image: " + image); // "image: BufferedImage#...: type = 12 ..."
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
JFrame frame = new JFrame("Foo");
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
frame.add(new JLabel(new ImageIcon(image)));
frame.pack();
frame.setVisible(true);
}
});
}
The above code will create fully opaque (Transparency.OPAQUE) images, that will occupy the entire 16 x 16 pixel block.
If you want bitmask (Transparency.BITMASK) transparency, where all pixels are either fully opaque or fully transparent, just change the last parameter in the IndexColorModel constructor to the palette index you want to be fully transparent.
int transparentIndex = ...;
IndexColorModel cm = new IndexColorModel(4, 16, palette, 0, false, transparentIndex);
// ...everything else as above
This will allow your sprites to have any shape you want.
If you want translucent pixels (Transparency.TRANSLUCENT), where pixels can be semi-transparent, you can also have that. You will then have to change the palette array to 16 * 4 entries, and include a sample for the alpha value as the 4th sample for each entry (quadruple). Then invoke the IndexColorModel constructor with the last parameter set to true (hasAlpha):
byte[] palette = new byte[16 * 4]; // 16 color palette with alpha (translucency)
// ...
IndexColorModel cm = new IndexColorModel(4, 16, palette, 0, true); // true for palette with alpha samples
// ...everything else as above
This will allow smoother gradients between the transparent and non-transparent parts of the sprites. But with only 16 colors in the palette, you won't have many entries available for transparency.
Note that it is possible to re-use the Rasters and IndexColorModels here, in all of the above examples, to save further memory for images using the same palette, or even images using the same image data with different palettes. There's one caveat though, that is the images sharing rasters will be "live views" of each other, so if you make any changes to one, you will change them all. But if your images are never changed, you could exploit this fact.
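A minimal sketch of that sharing idea: two hypothetical palettes (a red ramp and a blue ramp, both made up here) over one and the same 4-bit packed raster, so the pixel data exists only once:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.IndexColorModel;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class SharedRasterDemo {
    public static void main(String[] args) {
        byte[] pixels = new byte[(4 * 4 * 4) / 8]; // 4x4 pixels, 4 bits each
        pixels[0] = 0x10; // pixel (0,0) gets palette index 1

        DataBufferByte buffer = new DataBufferByte(pixels, pixels.length);
        WritableRaster raster = Raster.createPackedRaster(buffer, 4, 4, 4, null);

        // Two palettes (IndexColorModel copies the arrays, so reuse is safe)
        byte[] r = new byte[16], g = new byte[16], b = new byte[16];
        for (int i = 0; i < 16; i++) r[i] = (byte) (i * 17); // red ramp
        IndexColorModel redPalette = new IndexColorModel(4, 16, r, g, b);
        for (int i = 0; i < 16; i++) { r[i] = 0; b[i] = (byte) (i * 17); } // blue ramp
        IndexColorModel bluePalette = new IndexColorModel(4, 16, r, g, b);

        // Two images over the *same* raster: no pixel data is duplicated
        BufferedImage redImage = new BufferedImage(redPalette, raster, false, null);
        BufferedImage blueImage = new BufferedImage(bluePalette, raster, false, null);
        System.out.println(Integer.toHexString(redImage.getRGB(0, 0)));  // index 1 via red ramp
        System.out.println(Integer.toHexString(blueImage.getRGB(0, 0))); // same index, blue ramp
    }
}
```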
That said, the above really is a compromise between saving memory and having "reasonable" performance. If performance (ie. frames per second) is more important, just ignore the memory usage, and create BufferedImages that are compatible with your graphics card's/the OS's native pixel layout. You can do this by using gfxConfig.createCompatibleImage(...), where gfxConfig is a GraphicsConfiguration (obtained from a component's getGraphicsConfiguration() or from the local GraphicsEnvironment).
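A sketch of obtaining such a compatible image; the headless fallback to a plain BufferedImage is an addition of mine so the snippet also runs without a screen:

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

public class CompatibleImageDemo {
    public static void main(String[] args) {
        BufferedImage img;
        if (GraphicsEnvironment.isHeadless()) {
            // No screen available: fall back to a plain image
            img = new BufferedImage(320, 200, BufferedImage.TYPE_INT_RGB);
        } else {
            // Image layout matches the screen, so drawing it avoids pixel conversion
            GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                    .getDefaultScreenDevice().getDefaultConfiguration();
            img = gc.createCompatibleImage(320, 200, Transparency.OPAQUE);
        }
        System.out.println(img.getWidth() + "x" + img.getHeight());
    }
}
```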

Make background transparent in image (example image attached)

How can I remove the white background color from an image, i.e. make the image background transparent?
These are two images:
Image 1 is: text with a white background
Image 2 is: a green (or close to green) image
I need to make the white background of image 1 transparent.
Here is an example that makes the BLUE color transparent in an image.
There we have an image with a blue background, and we want to display it in an Applet with a white background. All we have to do is look for the blue color with the "alpha bits" set to opaque and make those pixels transparent.
You can adapt it to make the WHITE color transparent.
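Adapted to this question, here is a minimal sketch that copies an RGB image into an ARGB one, mapping white to fully transparent (the tiny 2x2 input image is made up for illustration):

```java
import java.awt.image.BufferedImage;

public class WhiteToTransparent {
    public static void main(String[] args) {
        // A tiny stand-in for "image 1": white background with one green pixel
        BufferedImage src = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 2; y++)
            for (int x = 0; x < 2; x++)
                src.setRGB(x, y, 0xFFFFFF);
        src.setRGB(1, 1, 0x00FF00);

        // Copy into an ARGB image, mapping white to fully transparent
        BufferedImage out = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y) & 0xFFFFFF;
                out.setRGB(x, y, rgb == 0xFFFFFF ? 0 : 0xFF000000 | rgb);
            }
        }
        System.out.println(out.getRGB(0, 0) >>> 24); // alpha of a white pixel
        System.out.println(out.getRGB(1, 1) >>> 24); // alpha of the green pixel
    }
}
```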
Also you can check this answer.
The question is whether you want to make the background transparent at runtime, or just for this one image. If just this one, you should edit the image with a tool like IrfanView: open the image, save it as PNG, tick "save transparent color", untick "use window background color", press enter, and then select the white color.
You can do it easily with Photoshop.
Using the magic wand or lasso tool, select the area of the image you want to be transparent, and then just hit the DELETE key.
For additional information check this link.
You can also check this tutorial.
Here is a useful video.
When I had to solve that problem, I used a buffer...
private IntBuffer buffer;
public void toBuffer(File tempFile){
final Bitmap temp = BitmapFactory.decodeFile(tempFile.getAbsolutePath());
buffer.rewind(); // set the buffer's index to 0
temp.copyPixelsToBuffer(buffer);
}
Then you can simply edit all pixels (int values) in your buffer (as mentioned by https://stackoverflow.com/users/3850595/jordi-castilla), setting ARGB 0xFFFFFFFF (opaque white) to 0x00?????? (alpha 0; any color would suit, it's fully transparent anyway).
So here's the code to edit transparency on a buffer:
public void convert(){
for (int dy = 0; dy < imgHeight; dy++){
for (int dx = 0; dx < imgWidth; dx++){
int px = buffer.get();
int result = px;
if (px == 0xFFFFFFFF){ // only adjust alpha when the color is opaque white
result = px & 0x00FFFFFF; // clear the alpha bits: fully transparent
}
int pos = buffer.position();
buffer.put(pos - 1, result); // write back in place
}
}
}
Later you write the buffer back into a converted file:
public void copyBufferIntoImage(File tempFile) throws IOException {
buffer.rewind();
Bitmap temp = Bitmap.createBitmap(imgWidth, imgHeight,
Config.ARGB_8888);
temp.copyPixelsFromBuffer(buffer);
FileOutputStream out = new FileOutputStream(tempFile);
temp.compress(Bitmap.CompressFormat.PNG, 90, out);
out.flush();
out.close();
}
Maybe you still want to know how to map the buffer:
public void mapBuffer(final File tempFile, long size) throws IOException {
RandomAccessFile aFile = new RandomAccessFile(tempFile, "rw");
aFile.setLength(4 * size); // 4 bytes per int
FileChannel fc = aFile.getChannel();
buffer = fc.map(FileChannel.MapMode.READ_WRITE, 0, fc.size())
.asIntBuffer();
}

What type of array is required in the WritableRaster method setPixels()?

I don't understand how the WritableRaster class of Java works. I tried looking at the documentation but don't understand how it takes values from an array of pixels. Plus, I am not sure what the array of pixels consists of.
Here I explain.
What I want to do is: Shamir's Secret Sharing on images. For that I need to fetch an image into a BufferedImage. I take a secret image and create shares by running a 'function' on each pixel of the image (basically changing the pixel values).
Snippet:
int w = image.getWidth();
int h = image.getHeight();
for (int i = 0; i < h; i++)
{
for (int j = 0; j < w; j++)
{
int pixel = image.getRGB(j, i);
int red = (pixel >> 16) & 0xFF;
int green = (pixel >> 8) & 0xFF;
int blue = (pixel) & 0xFF;
pixels[j][i] = share1(red, green, blue);
// Now taking those rgb values. I change them using some function and return an int value. Something like this:
public int share1 (int r, int g, int b)
{
int a1 = rand.nextInt(primeNumber);
int total1 = r + g + b + a1;
int new_pixel = total1 % primeNumber;
return new_pixel;
}
// This 2D array pixels now has all the new color values, right? But I want to build an image using these new values. So what I did is:
First converted this pixels array to a list.
Now this list has pixel values of the new image. But to build an image using RasterObj.setPixels() method, I need an array with RGB values [I MIGHT BE WRONG HERE!]
So I take the individual values of the list, find the rgb values, and put them consecutively in a new 1D array pixelvector, something like this: (r1,g1,b1,r2,g2,b2,r3,g3,b3,...)
The size of the list is w*h, because it contains a single value per pixel.
BUT the size of the new array pixelvector will be w*h*3, since it contains separate r, g, b values for each pixel.
Then to form the image I do this:
BufferedImage image_share1 = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
WritableRaster rast = (WritableRaster) image_share1.getData();
rast.setPixels(0, 0, w, h, pixelvector);
image_share1.setData(rast);
ImageIO.write(image_share1,"JPG",new File("share1.jpg"));
If I put an array with just single pixel values into setPixels(), it does not return from that function! But if I put an array with separate r,g,b values, it returns. But doing the same thing for share1, share2 etc., I get nothing but shades of blue, so I am not even sure I will be able to reconstruct the image.
PS: This might look like very foolish code, I know. But I had just one day to do this and learn about images in Java, so I am doing the best I can.
Thanks..
A Raster (like WriteableRaster and its subclasses) consists of a SampleModel and a DataBuffer. The SampleModel describes the sample layout (is it pixel packed, pixel interleaved, band interleaved? how many bands? etc...) and dimensions, while the DataBuffer describes the actual storage (are the samples bytes, short, ints, signed or unsigned? single array or array per band? etc...).
For BufferedImage.TYPE_INT_RGB the samples will be pixel packed (all 3 R, G and B samples packed into a single int for each pixel), and data/transfer type DataBuffer.TYPE_INT.
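For example, packing and unpacking the three 8-bit samples of such a pixel-packed int (the sample values here are arbitrary):

```java
public class PackedRgb {
    public static void main(String[] args) {
        int r = 0x12, g = 0x34, b = 0x56;

        // Pack the three 8-bit samples into one TYPE_INT_RGB pixel
        int packed = (r << 16) | (g << 8) | b;
        System.out.println(Integer.toHexString(packed));

        // Unpack again: shift the sample down and mask off the rest
        System.out.println(((packed >> 16) & 0xFF) == r);
    }
}
```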
Sorry for not answering your question regarding WritableRaster.setPixels(...) directly, but I don't think it's the method you are looking for (in most cases, it's not). :-)
For your goal, I think what you should do is something like:
// Pixels in TYPE_INT_RGB format
// (ie. 0xFFrrggbb, where rr is two bytes red, gg two bytes green etc)
int[] pixelvector = new int[w * h];
BufferedImage image_share1 = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
WritableRaster rast = image_share1.getRaster(); // Faster! No copy, and live updated
rast.setDataElements(0, 0, w, h, pixelvector);
// No need to call setData, as we modified image_share1 via it's raster
ImageIO.write(image_share1,"JPG",new File("share1.jpg"));
I'm assuming the rest of your code for modifying each pixel value is correct. :-)
But just a tip: You'll make it easier for yourself (and faster due to less conversion) if you use a 1D array instead of a 2D array. I.e.:
int[] pixels = new int[w * h]; // instead of int[][] pixels = new int[w][h];
// ...
for (int y = 0; y < h; y++) {
for (int x = 0; x < w; x++) {
// ...
pixels[y * w + x] = share1(red, green, blue); // instead of pixels[x][y];
}
}
