I have a byte[] that contains ARGB image data directly. I am trying to find the most performant way to transform this into a BufferedImage without unnecessary iterations; essentially, I'd like to configure the BufferedImage with the right raster and color model so that it uses this memory area directly.
My current approach is this:
BufferedImage toBufferedImageAbgr(int width, int height, byte[] abgrData) {
int bitMasks[] = new int[]{0xf};
DataBuffer dataBuffer = new DataBufferByte(abgrData, width * height * 4, 0);
int[] masks = new int[]{0xff, 0xff, 0xff, 0xff};
DirectColorModel byteColorModel = new DirectColorModel(8,
0xff, // Red
0xff, // Green
0xff, // Blue
0xff // Alpha
);
SampleModel sampleModel = new SinglePixelPackedSampleModel(DataBuffer.TYPE_BYTE, width, height, masks);
WritableRaster raster = Raster.createWritableRaster(sampleModel, dataBuffer, null);
BufferedImage image = new BufferedImage(byteColorModel, raster, false, null);
return image;
}
I keep playing around with the color model, the bands and all that, but I can't figure out the right configuration for this relatively simple problem.
When I inspect the output image, it unfortunately looks bad: it comes out as a grayscale image with patterns (screenshots of the broken output and the original reference image are omitted here). The following variant uses an interleaved raster and a ComponentColorModel instead:
BufferedImage toBufferedImageAbgr(int width, int height, byte[] abgrData) {
DataBuffer dataBuffer = new DataBufferByte(abgrData, width * height * 4, 0);
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
new int[] {8, 8, 8, 8}, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE); // TRANSLUCENT, since the model carries alpha
WritableRaster raster = Raster.createInterleavedRaster(
dataBuffer, width, height, width * 4, 4, new int[] {3, 2, 1, 0}, null);
BufferedImage image = new BufferedImage(colorModel, raster, false, null);
return image;
}
Motivation:
My goal is to convert an AWT BufferedImage to an SWT ImageData in the most efficient way. The typical answer to this question is a pixel-by-pixel conversion of the whole picture, i.e. a nested loop over all width × height pixels. It would be much more efficient if the two could exchange the whole pixel matrix as it is. BufferedImage seems to be very flexible in specifying in detail how colors and alpha are encoded.
To provide some wider context: I wrote an on-demand SVG icon rasterizer using Apache Batik, but it is for an SWT (Eclipse) application. Batik renders only to a java.awt.image.BufferedImage, while SWT components require an org.eclipse.swt.graphics.Image.
Their backing raster objects, java.awt.image.Raster and org.eclipse.swt.graphics.ImageData, represent exactly the same thing: they are just wrappers around a 2D array of byte values representing pixels. If I can make one or the other use the same color encoding, voilà, I can reuse the backing array as it is.
I got pretty far; this works:
// define a blank "canvas" for the Batik transcoder, for the SVG to be rasterized into
public BufferedImage createCanvasForBatik(int w, int h) {
    return new BufferedImage(w, h, BufferedImage.TYPE_4BYTE_ABGR);
}
// convert AWT's BufferedImage to SWT's ImageData to be made into SWT Image later
public ImageData convertToSWT(BufferedImage bufferedImage) {
    DataBuffer db = bufferedImage.getRaster().getDataBuffer(); // getRaster(), not getData(), to avoid copying the pixels
    byte[] matrix = ((DataBufferByte) db).getData();
    PaletteData palette =
        new PaletteData(0x0000FF, 0x00FF00, 0xFF0000); // BGR byte order
    // the last argument contains the byte[] with the image data
    int w = bufferedImage.getWidth();
    int h = bufferedImage.getHeight();
    ImageData swtimgdata = new ImageData(w, h, 32, palette);
    swtimgdata.data = matrix; // ImageData has all fields public!
    // ImageData swtimgdata = new ImageData(w, h, 32, palette, 4, matrix); ..also works
    return swtimgdata;
}
It all works except transparency :(
It looks like ImageData requires (always?) the alpha to be in a separate raster (ImageData.alphaData) from the color raster (ImageData.data); both are byte[] types.
Is there a way to make ImageData accept an ARGB model, that is, alpha mixed in with the other colors? I doubt it, so I went the other way: making BufferedImage use separate arrays (a.k.a. rasters or "bands") for the colors and for the alpha. The ComponentColorModel and a banded Raster seem to be intended exactly for this.
So far I got here:
public BufferedImage createCanvasForBatik(int w, int h) {
    ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_sRGB);
    int[] nBits = {8, 8, 8, 8}; // ??
    ComponentColorModel colorModel = new ComponentColorModel(cs, nBits, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
    WritableRaster raster = Raster.createBandedRaster(
        DataBuffer.TYPE_BYTE, w, h, 4, new Point(0, 0));
    boolean isPremultiplied = false;
    Hashtable<?, ?> properties = null;
    return new BufferedImage(colorModel, raster, isPremultiplied, properties);
}
That creates a separate raster (band) for the alpha, but also one for every color separately, so I end up with 4 bands (4 rasters), which is again unusable for an SWT Image. Is it possible to create a banded raster with just 2 bands: one for the colors in RGB or BGR, and one for the alpha only?
I don't know SWT in detail, but based on my understanding of the API docs, the below should work:
The trick is to use a custom DataBuffer implementation that masquerades as a "banded" buffer, but internally uses a combination of interleaved RGB and separate alpha array for storage. This works nicely with the standard BandedSampleModel. You will lose any chance of special (hardware) optimizations that are normally applied to BufferedImages using this model, but that should not matter as you are using SWT for display anyway.
I suggest you create your SWT image first, and then "wrap" the color and alpha arrays from the SWT image in the custom data buffer. If you do it this way, Batik should render directly to your SWT image, and you can just throw away the BufferedImage afterwards (if this is not practical, you can of course do it the other way around as well, but you may need to expose the internal arrays of the custom data buffer class below, to create the SWT image).
Code (important parts are the SWTDataBuffer class and createImage method):
import java.awt.*;
import java.awt.color.ColorSpace;
import java.awt.image.*;
import javax.swing.*;

public class SplitDataBufferTest {
/** Custom DataBuffer implementation using separate arrays for RGB and alpha.*/
public static class SWTDataBuffer extends DataBuffer {
private final byte[] rgb; // RGB or BGR interleaved
private final byte[] alpha;
public SWTDataBuffer(byte[] rgb, byte[] alpha) {
super(DataBuffer.TYPE_BYTE, alpha.length, 4); // Masquerade as banded data buffer
if (alpha.length * 3 != rgb.length) {
throw new IllegalArgumentException("Bad RGB/alpha array lengths");
}
this.rgb = rgb;
this.alpha = alpha;
}
@Override
public int getElem(int bank, int i) {
switch (bank) {
case 0:
case 1:
case 2:
return rgb[i * 3 + bank] & 0xFF; // mask to return the unsigned value, as DataBufferByte does
case 3:
return alpha[i] & 0xFF;
}
throw new IndexOutOfBoundsException(String.format("bank %d >= number of banks, %d", bank, getNumBanks()));
}
@Override
public void setElem(int bank, int i, int val) {
switch (bank) {
case 0:
case 1:
case 2:
rgb[i * 3 + bank] = (byte) val;
return;
case 3:
alpha[i] = (byte) val;
return;
}
throw new IndexOutOfBoundsException(String.format("bank %d >= number of banks, %d", bank, getNumBanks()));
}
}
public static void main(String[] args) {
// These are given from your SWT image
int w = 300;
int h = 200;
byte[] rgb = new byte[w * h * 3];
byte[] alpha = new byte[w * h];
// Create an empty BufferedImage around the SWT image arrays
BufferedImage image = createImage(w, h, rgb, alpha);
// Just to demonstrate that it works
System.out.println("image: " + image);
paintSomething(image);
showIt(image);
}
private static BufferedImage createImage(int w, int h, byte[] rgb, byte[] alpha) {
DataBuffer buffer = new SWTDataBuffer(rgb, alpha);
// SampleModel sampleModel = new BandedSampleModel(DataBuffer.TYPE_BYTE, w, h, 4); // If SWT data is RGB, you can use simpler constructor
SampleModel sampleModel = new BandedSampleModel(DataBuffer.TYPE_BYTE, w, h, w,
new int[] {2, 1, 0, 3}, // Band indices for BGRA
new int[] {0, 0, 0, 0});
WritableRaster raster = Raster.createWritableRaster(sampleModel, buffer, null);
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
}
private static void showIt(final BufferedImage image) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
JFrame frame = new JFrame("Test");
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
JLabel label = new JLabel(new ImageIcon(image));
label.setOpaque(true);
label.setBackground(Color.GRAY);
frame.add(label);
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
});
}
private static void paintSomething(BufferedImage image) {
int w = image.getWidth();
int h = image.getHeight();
int qw = w / 4;
int qh = h / 4;
Graphics2D g = image.createGraphics();
g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
g.setColor(Color.ORANGE);
g.fillOval(0, 0, w, h);
g.setColor(Color.RED);
g.fillRect(5, 5, qw, qh);
g.setColor(Color.WHITE);
g.drawString("R", 5, 30);
g.setColor(Color.GREEN);
g.fillRect(5 + 5 + qw, 5, qw, qh);
g.setColor(Color.BLACK);
g.drawString("G", 5 + 5 + qw, 30);
g.setColor(Color.BLUE);
g.fillRect(5 + (5 + qw) * 2, 5, qw, qh);
g.setColor(Color.WHITE);
g.drawString("B", 5 + (5 + qw) * 2, 30);
g.dispose();
}
}
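For completeness, here is a rough, untested sketch of the "SWT image first" direction suggested above. The SWT types (ImageData, PaletteData, Image) and the scanline-pad argument are assumptions about your SWT setup; createImage refers to the method in the class above:
// Allocate the SWT-side arrays; scanlinePad = 1 keeps the color array exactly
// w * h * 3 bytes, matching the BandedSampleModel above (which assumes no row padding)
int w = 300;
int h = 200;
byte[] rgb = new byte[w * h * 3];
byte[] alpha = new byte[w * h];

PaletteData palette = new PaletteData(0x0000FF, 0x00FF00, 0xFF0000); // B, G, R byte order
ImageData swtData = new ImageData(w, h, 24, palette, 1, rgb);
swtData.alphaData = alpha; // public field, as noted earlier

// Let Batik/AWT render straight into the SWT-backed arrays
BufferedImage canvas = createImage(w, h, rgb, alpha);
// ...transcode the SVG into canvas here...

// Finally, on the SWT side (display is your SWT Display):
// Image swtImage = new Image(display, swtData);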
I have a byte array of type TYPE_4BYTE_ABGR, and I know its width and height. I want to turn it into a BufferedImage. Any ideas?
The fastest way to create a BufferedImage from a byte array in TYPE_4BYTE_ABGR form is to wrap the array in a DataBufferByte and create an interleaved WritableRaster from that. This will make sure there are no additional byte array allocations. Then create the BufferedImage from the raster and a matching color model:
public static void main(String[] args) {
int width = 300;
int height = 200;
int samplesPerPixel = 4; // This is the *4BYTE* in TYPE_4BYTE_ABGR
int[] bandOffsets = {3, 2, 1, 0}; // This is the order (ABGR) part in TYPE_4BYTE_ABGR
byte[] abgrPixelData = new byte[width * height * samplesPerPixel];
DataBuffer buffer = new DataBufferByte(abgrPixelData, abgrPixelData.length);
WritableRaster raster = Raster.createInterleavedRaster(buffer, width, height, samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
System.out.println("image: " + image); // Should print: image: BufferedImage#<hash>: type = 6 ...
}
Note, however, that this image will be "unmanaged" (some HW accelerations will be disabled), because you have direct access to the pixel array.
To avoid this, create the WritableRaster without the pixels, and copy the pixels into it. This will use twice as much memory, but will keep the image "managed" and thus give possibly better display performance:
// Skip creating the data buffer
WritableRaster raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE, width, height, samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
raster.setDataElements(0, 0, width, height, abgrPixelData);
// ...rest of code as above.
You could even do this (which might be more familiar):
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
WritableRaster raster = image.getRaster();
raster.setDataElements(0, 0, width, height, abgrPixelData);
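If it helps, the two variants above can be packaged into a single helper; the method name and the boolean flag here are just illustrative:
// Sketch only: either wraps the array directly (unmanaged) or copies it (managed),
// following the two approaches described above.
static BufferedImage toBufferedImageAbgr(int width, int height, byte[] abgrPixelData, boolean wrapArray) {
    int samplesPerPixel = 4;
    int[] bandOffsets = {3, 2, 1, 0}; // bytes are A, B, G, R; bands are R, G, B, A

    WritableRaster raster;
    if (wrapArray) {
        DataBuffer buffer = new DataBufferByte(abgrPixelData, abgrPixelData.length);
        raster = Raster.createInterleavedRaster(buffer, width, height,
            samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
    } else {
        raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE, width, height,
            samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
        raster.setDataElements(0, 0, width, height, abgrPixelData);
    }

    ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
        true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
    return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
}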
Might not be very efficient, but a BufferedImage can be converted to another type this way:
public static BufferedImage convertToType(BufferedImage image, int type) {
BufferedImage newImage = new BufferedImage(image.getWidth(), image.getHeight(), type);
Graphics2D graphics = newImage.createGraphics();
graphics.drawImage(image, 0, 0, null);
graphics.dispose();
return newImage;
}
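For example, to get a TYPE_INT_ARGB copy of an ABGR image (abgrImage here is just an assumed existing image):
// abgrImage is assumed to be an existing TYPE_4BYTE_ABGR image
BufferedImage argbCopy = convertToType(abgrImage, BufferedImage.TYPE_INT_ARGB);
System.out.println(argbCopy.getType() == BufferedImage.TYPE_INT_ARGB); // prints true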
As for the method you want implemented: you would have to know the width or height of the image to convert a byte[] to a BufferedImage.
Edit:
One way is converting the byte[] to int[] (data type TYPE_INT_ARGB) and using setRGB:
int[] dst = new int[width * height];
for (int i = 0, j = 0; i < dst.length; i++) {
int a = src[j++] & 0xff;
int b = src[j++] & 0xff;
int g = src[j++] & 0xff;
int r = src[j++] & 0xff;
dst[i] = (a << 24) | (r << 16) | (g << 8) | b;
}
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
image.setRGB(0, 0, width, height, dst, 0, width);
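Wrapped into a self-contained method (the method name is just for illustration), the same conversion looks like this:
// Converts TYPE_4BYTE_ABGR-ordered bytes (A, B, G, R per pixel) to a TYPE_INT_ARGB image
static BufferedImage abgrBytesToArgbImage(byte[] src, int width, int height) {
    int[] dst = new int[width * height];
    for (int i = 0, j = 0; i < dst.length; i++) {
        int a = src[j++] & 0xff;
        int b = src[j++] & 0xff;
        int g = src[j++] & 0xff;
        int r = src[j++] & 0xff;
        dst[i] = (a << 24) | (r << 16) | (g << 8) | b;
    }
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    image.setRGB(0, 0, width, height, dst, 0, width);
    return image;
}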
I need to take an int array and turn it into a BufferedImage. I really don't have any background on this subject and I learned it all from the internet, so here's what I'm trying to do:
Create an array from a BufferedImage (done), turn this array into an IntBuffer (done; later I'll need to do some operations on the image through the IntBuffer), put the changed values from the IntBuffer into a new array (done), and turn this array into a WritableRaster.
(If something isn't right in my understanding of the process, please tell me.)
Here's the line where I deal with the WritableRaster:
WritableRaster newRaster= newRaster.setPixels(0, 0, width, height, matrix);
Eclipse marks this as a mistake and says "Type mismatch: cannot convert from void to WritableRaster".
Please help! I'm a bit lost.
Also, sorry for my bad English.
EDIT:
The matrix:
int height = img.getHeight();
int width = img.getWidth();
int[] matrix = new int[width * height];
The part of the code where I try to insert values into the Raster:
BufferedImage finalImg = new BufferedImage(width,height, BufferedImage.TYPE_INT_RGB);
WritableRaster newRaster= (WritableRaster)finalImg.getData();
newRaster.setPixels(0, 0, width, height, matrix);
The error message:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 10769
at java.awt.image.SinglePixelPackedSampleModel.setPixels(Unknown Source)
at java.awt.image.WritableRaster.setPixels(Unknown Source)
You can create a WritableRaster and/or BufferedImage from an int array like this:
int w = 300;
int h = 200;
int[] matrix = new int[w * h];
// ...manipulate the matrix...
DataBufferInt buffer = new DataBufferInt(matrix, matrix.length);
int[] bandMasks = {0xFF0000, 0xFF00, 0xFF, 0xFF000000}; // ARGB (yes, ARGB, as the masks are R, G, B, A always) order
WritableRaster raster = Raster.createPackedRaster(buffer, w, h, w, bandMasks, null);
System.out.println("raster: " + raster);
ColorModel cm = ColorModel.getRGBdefault();
BufferedImage image = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
System.err.println("image: " + image);
Alternatively, you can let the ColorModel create a compatible raster for you and copy your values into its backing buffer:
ColorModel cm = ColorModel.getRGBdefault();
int w = 300;
int h = 200;
WritableRaster raster = cm.createCompatibleWritableRaster(w, h);
DataBufferInt buffer = (DataBufferInt) raster.getDataBuffer();
int[] bufferData = buffer.getData();
int[] array = new int[2400];
Random random = new Random();
for (int i = 0; i < 2400; i++) {
array[i] = random.nextInt(2);
}
System.arraycopy(array, 0, bufferData, 0, (array.length < bufferData.length ? array.length : bufferData.length));
BufferedImage image = new BufferedImage(cm, raster, false, null);
FileOutputStream fos = new FileOutputStream("D:\\abc\\OCR\\" + "LearningRaster" + ".png");
ImageIO.write(image, "PNG", fos);
fos.close();
setPixels returns void:
public void setPixels(int x, int y, int w, int h, int[] iArray)
so you need to create the Raster first and then set the pixels on it:
WritableRaster newRaster = Raster.createWritableRaster(…);
newRaster.setPixels(0, 0, width, height, matrix);
You need to put one int per sample, that is, one per band of the color model (3 for RGB, 4 for ARGB). So for ARGB the matrix size must be
int[] matrix = new int[width * height * 4];
See more about WritableRaster in the Oracle documentation: the WritableRaster API reference and its code examples.
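Putting the pieces together for the original TYPE_INT_RGB example, an untested sketch could look like this (note getRaster() instead of getData(), and 3 samples per pixel for RGB; img is the source image from the question):
int width = img.getWidth();
int height = img.getHeight();
int[] matrix = new int[width * height * 3]; // 3 samples (R, G, B) per pixel for TYPE_INT_RGB
// ...fill matrix...

BufferedImage finalImg = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
WritableRaster newRaster = finalImg.getRaster(); // getData() would only return a copy
newRaster.setPixels(0, 0, width, height, matrix);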
In this question, Convert RGB to CMYK, I got a way to convert an RGB int array to a CMYK byte array. Now I hope to convert an ARGB int array to a CMYKA byte array directly, instead of working with the resulting CMYK array and adding the extra alpha channel afterwards. Is it possible?
I tried to use 4 band masks to create the raster, like this:
WritableRaster raster = Raster.createPackedRaster(db, imageWidth, imageHeight, imageWidth, new int[]{0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000}, null);
But I got an error: "Numbers of source Raster bands and source color space components do not match". I understand this comes from the fact that the source color space only has 3 components. I am just wondering if it's possible to create some kind of 4-component color space or something to work around this.
This is the current version I am working with:
public static byte[] RGB2CMYK(ICC_ColorSpace cmykColorSpace, int[] rgb, int imageWidth, int imageHeight, boolean hasAlpha) {
DataBuffer db = new DataBufferInt(rgb, rgb.length);
WritableRaster raster = Raster.createPackedRaster(db, imageWidth, imageHeight, imageWidth, new int[]{0x00ff0000, 0x0000ff00, 0x000000ff}, null);
ColorSpace sRGB = ColorSpace.getInstance(ColorSpace.CS_sRGB);
ColorConvertOp cco = new ColorConvertOp(sRGB, cmykColorSpace, null);
WritableRaster cmykRaster = cco.filter(raster, null);
byte[] cmyk = (byte[])cmykRaster.getDataElements(0, 0, imageWidth, imageHeight, null);
if(!hasAlpha) return cmyk;
byte[] cmyka = new byte[rgb.length*5];
for(int i = 0, j = 0, k = 0; i < rgb.length; i++) {
cmyka[k++] = cmyk[j++];
cmyka[k++] = cmyk[j++];
cmyka[k++] = cmyk[j++];
cmyka[k++] = cmyk[j++];
cmyka[k++] = (byte)(rgb[i]>>24 & 0xff);
}
return cmyka;
}
I figured out a way to do this:
// Convert an RGB (optionally ARGB) int array to a CMYK byte array
public static byte[] RGB2CMYK(ICC_ColorSpace cmykColorSpace, int[] rgb, int imageWidth, int imageHeight, boolean hasAlpha) {
DataBuffer db = new DataBufferInt(rgb, rgb.length);
int[] bandMasks = new int[]{0x00ff0000, 0x0000ff00, 0x000000ff};
ColorSpace sRGB = ColorSpace.getInstance(ColorSpace.CS_sRGB);
ColorConvertOp cco = new ColorConvertOp(sRGB, cmykColorSpace, null);
ColorModel cm = null;
WritableRaster cmykRaster = null;
if(hasAlpha) {
cm = ColorModel.getRGBdefault();
bandMasks = new int[]{0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000};
} else
cm = new DirectColorModel(24, 0x00ff0000, 0x0000ff00, 0x000000ff);
WritableRaster raster = Raster.createPackedRaster(db, imageWidth, imageHeight, imageWidth, bandMasks, null);
BufferedImage rgbImage = new BufferedImage(cm, raster, false, null);
BufferedImage cmykImage = cco.filter(rgbImage, null);
cmykRaster = cmykImage.getRaster();
return (byte[])cmykRaster.getDataElements(0, 0, imageWidth, imageHeight, null);
}
I also found out that it's much faster to run the filter on a BufferedImage than on a Raster. It might be due to some hardware acceleration.
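For reference, a usage sketch; the helper name and the ICC profile path parameter are only illustrative, and you would pass the path of a real CMYK profile:
// Wraps the conversion above; assumes the image is TYPE_INT_ARGB (backed by a DataBufferInt)
static byte[] toCmyk(BufferedImage argbImage, String iccProfilePath, boolean hasAlpha) throws IOException {
    ICC_ColorSpace cmykColorSpace = new ICC_ColorSpace(ICC_Profile.getInstance(iccProfilePath));
    int[] argb = ((DataBufferInt) argbImage.getRaster().getDataBuffer()).getData(); // no copy
    return RGB2CMYK(cmykColorSpace, argb, argbImage.getWidth(), argbImage.getHeight(), hasAlpha);
}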
Is there any faster way to add padding pixels to a BufferedImage than drawing it centered on a larger BufferedImage?
BufferedImage has a constructor where you get to specify a WritableRaster.
Looking at a default buffered image that stores each pixel in an int, it uses an IntegerInterleavedRaster internally.
For the ColorModel you can use ColorModel.getRGBdefault().
int imageWidth = 638, imageHeight = 480;
int dataImageWidth = 640;
SampleModel sm = new SinglePixelPackedSampleModel(DataBuffer.TYPE_INT, imageWidth, imageHeight, dataImageWidth,
    new int[] { 0xff0000, 0xff00, 0xff, 0xff000000 }); // R, G, B, A masks, matching getRGBdefault()
DataBuffer db = new DataBufferInt(dataImageWidth * imageHeight);
WritableRaster r = Raster.createWritableRaster(sm, db, new Point());
BufferedImage image = new BufferedImage(ColorModel.getRGBdefault(), r, false, null);
Notice the scanlineStride in SinglePixelPackedSampleModel (second last parameter).
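To illustrate the effect of the stride (a rough sketch continuing the snippet above): pixel (x, y) of the visible image maps to buffer element y * dataImageWidth + x, so the last dataImageWidth - imageWidth ints of every row are untouched padding:
int x = 100, y = 50;
image.setRGB(x, y, 0xFFFF0000); // write opaque red through the image...
System.out.println(Integer.toHexString(db.getElem(y * dataImageWidth + x))); // ...and read it back at the strided index: ffff0000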
Another much simpler approach is to use BufferedImage's getSubimage method.
BufferedImage fullImage = new BufferedImage(dataImageWidth, imageHeight, BufferedImage.TYPE_INT_RGB); // or whichever type you need
BufferedImage subImage = fullImage.getSubimage(0, 0, imageWidth, imageHeight); // shares the same data array, no copy
Create an ImageIcon using the BufferedImage and add the Icon to a JLabel. Then you can just add a Border to the label to get your desired padding.
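A minimal sketch of that idea, assuming image is the BufferedImage in question and the padding amount is arbitrary:
JLabel label = new JLabel(new ImageIcon(image));
label.setBorder(BorderFactory.createEmptyBorder(10, 10, 10, 10)); // 10 px of padding on each side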
To defer centering until rendering, I like this approach due to finnw, where this is a suitable component:
private BufferedImage image;
....
@Override
public void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2d = (Graphics2D) g;
g2d.translate(this.getWidth() / 2, this.getHeight() / 2);
g2d.translate(-image.getWidth() / 2, -image.getHeight() / 2);
g2d.drawImage(image, 0, 0, null);
}