In this question, Convert RGB to CMYK, I found a way to convert an RGB int array to a CMYK byte array. Now I'd like to convert an ARGB int array directly into a CMYKA byte array, instead of taking the resulting CMYK array and adding the extra alpha channel afterwards. Is that possible?
I tried to use four band masks to create the raster, like this:
WritableRaster raster = Raster.createPackedRaster(db, imageWidth, imageHeight, imageWidth, new int[]{0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000}, null);
But I got an error: "Numbers of source Raster bands and source color space components do not match." I understand this comes from the fact that the source color space only has 3 components. I'm just wondering whether it's possible to create some kind of four-component color space, or some other workaround.
This is the current version I am working with:
public static byte[] RGB2CMYK(ICC_ColorSpace cmykColorSpace, int[] rgb, int imageWidth, int imageHeight, boolean hasAlpha) {
    DataBuffer db = new DataBufferInt(rgb, rgb.length);
    WritableRaster raster = Raster.createPackedRaster(db, imageWidth, imageHeight, imageWidth, new int[]{0x00ff0000, 0x0000ff00, 0x000000ff}, null);
    ColorSpace sRGB = ColorSpace.getInstance(ColorSpace.CS_sRGB);
    ColorConvertOp cco = new ColorConvertOp(sRGB, cmykColorSpace, null);
    WritableRaster cmykRaster = cco.filter(raster, null);
    byte[] cmyk = (byte[]) cmykRaster.getDataElements(0, 0, imageWidth, imageHeight, null);
    if (!hasAlpha) return cmyk;
    byte[] cmyka = new byte[rgb.length * 5];
    for (int i = 0, j = 0, k = 0; i < rgb.length; i++) {
        cmyka[k++] = cmyk[j++];
        cmyka[k++] = cmyk[j++];
        cmyka[k++] = cmyk[j++];
        cmyka[k++] = cmyk[j++];
        cmyka[k++] = (byte) (rgb[i] >> 24 & 0xff);
    }
    return cmyka;
}
I figured out a way to do this:
// Convert RGB to CMYK (or ARGB to CMYKA when hasAlpha is true)
public static byte[] RGB2CMYK(ICC_ColorSpace cmykColorSpace, int[] rgb, int imageWidth, int imageHeight, boolean hasAlpha) {
    DataBuffer db = new DataBufferInt(rgb, rgb.length);
    int[] bandMasks = new int[]{0x00ff0000, 0x0000ff00, 0x000000ff};
    ColorSpace sRGB = ColorSpace.getInstance(ColorSpace.CS_sRGB);
    ColorConvertOp cco = new ColorConvertOp(sRGB, cmykColorSpace, null);
    ColorModel cm = null;
    WritableRaster cmykRaster = null;
    if (hasAlpha) {
        cm = ColorModel.getRGBdefault();
        bandMasks = new int[]{0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000};
    } else {
        cm = new DirectColorModel(24, 0x00ff0000, 0x0000ff00, 0x000000ff);
    }
    WritableRaster raster = Raster.createPackedRaster(db, imageWidth, imageHeight, imageWidth, bandMasks, null);
    BufferedImage rgbImage = new BufferedImage(cm, raster, false, null);
    BufferedImage cmykImage = cco.filter(rgbImage, null);
    cmykRaster = cmykImage.getRaster();
    return (byte[]) cmykRaster.getDataElements(0, 0, imageWidth, imageHeight, null);
}
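For context, here is a hypothetical call site for the method above; the ICC profile path, the input file name, and the use of ImageIO/getRGB are illustrative and not from the original post:
// Hypothetical usage of RGB2CMYK above (profile path and input file are made up).
ICC_Profile profile = ICC_Profile.getInstance("USWebCoatedSWOP.icc"); // any CMYK ICC profile
ICC_ColorSpace cmykColorSpace = new ICC_ColorSpace(profile);
BufferedImage input = ImageIO.read(new File("input.png"));
int[] argb = input.getRGB(0, 0, input.getWidth(), input.getHeight(), null, 0, input.getWidth()); // packed ARGB ints
byte[] cmyka = RGB2CMYK(cmykColorSpace, argb, input.getWidth(), input.getHeight(), true);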
I also found out that it's much faster to call filter on a BufferedImage than on a Raster; there might be some hardware acceleration involved.
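To illustrate how that claim could be checked, here is a rough micro-benchmark sketch (not from the original post) that reuses cco, raster and rgbImage from the no-alpha branch above, where the raster has the three RGB bands; actual numbers will vary by JVM and platform:
// Time the Raster overload against the BufferedImage overload on the same sRGB data.
long t0 = System.nanoTime();
WritableRaster viaRaster = cco.filter(raster, null);   // Raster overload
long t1 = System.nanoTime();
BufferedImage viaImage = cco.filter(rgbImage, null);   // BufferedImage overload
long t2 = System.nanoTime();
System.out.printf("filter(Raster): %d ms, filter(BufferedImage): %d ms%n",
        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);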
I have a byte[] that contains ARGB image data directly. I am trying to find the most performant way to turn this into a BufferedImage without unnecessary iterations. Essentially, I'd like to configure the BufferedImage with the right raster and color model so that it uses this memory area directly.
My current approach is this:
BufferedImage toBufferedImageAbgr(int width, int height, byte[] abgrData) {
    int bitMasks[] = new int[]{0xf};
    DataBuffer dataBuffer = new DataBufferByte(abgrData, width * height * 4, 0);
    int[] masks = new int[]{0xff, 0xff, 0xff, 0xff};
    DirectColorModel byteColorModel = new DirectColorModel(8,
            0xff, // Red
            0xff, // Green
            0xff, // Blue
            0xff  // Alpha
    );
    SampleModel sampleModel = new SinglePixelPackedSampleModel(DataBuffer.TYPE_BYTE, width, height, masks);
    WritableRaster raster = Raster.createWritableRaster(sampleModel, dataBuffer, null);
    BufferedImage image = new BufferedImage(byteColorModel, raster, false, null);
    return image;
}
I keep playing around with the color model and the bands, but I can't figure out the right configuration for this relatively simple problem.
When I inspect the output image, it unfortunately looks wrong: a grayscale image with repeating patterns instead of the original. A configuration that does match the ABGR byte layout, using an interleaved raster and a ComponentColorModel, looks like this:
BufferedImage toBufferedImageAbgr(int width, int height, byte[] abgrData) {
    DataBuffer dataBuffer = new DataBufferByte(abgrData, width * height * 4, 0);
    ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
            new int[] {8, 8, 8, 8}, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE); // TRANSLUCENT because the model has an alpha channel
    WritableRaster raster = Raster.createInterleavedRaster(
            dataBuffer, width, height, width * 4, 4, new int[] {3, 2, 1, 0}, null);
    BufferedImage image = new BufferedImage(colorModel, raster, false, null);
    return image;
}
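A quick way to sanity-check the result (only a sketch; the output file name is made up):
// Wrap some ABGR bytes and write the image out to inspect it visually.
byte[] abgrData = new byte[width * height * 4]; // A, B, G, R per pixel
// ...fill abgrData...
BufferedImage image = toBufferedImageAbgr(width, height, abgrData);
System.out.println(image); // should report a 4-band sRGB ComponentColorModel
ImageIO.write(image, "png", new File("abgr-check.png"));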
I am making an app for exposure fusion but I have a small hiccup on my ZTE test device. On the Pixel emulator, I am able to take an Image from the ImageReader and convert it to a Mat and then back to a Bitmap to be displayed in an ImageView. This is the code:
int width = image.getWidth();
int height = image.getHeight();
Image.Plane yPlane = image.getPlanes()[0];
Image.Plane uPlane = image.getPlanes()[1];
Image.Plane vPlane = image.getPlanes()[2];
ByteBuffer yBuffer = yPlane.getBuffer();
ByteBuffer uBuffer = uPlane.getBuffer();
ByteBuffer vBuffer = vPlane.getBuffer();
int ySize = yBuffer.remaining();
int uSize = uBuffer.remaining();
int vSize = vBuffer.remaining();
int uvPixelStride = uPlane.getPixelStride();
Mat yuvMat = new Mat(height + (height / 2), width, CvType.CV_8UC1);
byte[] data = new byte[ySize + uSize + vSize];
yBuffer.get(data, 0, ySize);
uBuffer.get(data, ySize, uSize);
vBuffer.get(data, ySize + uSize, vSize);
if (uvPixelStride == 1) {
    yuvMat.put(0, 0, data);
    Mat rgb = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC4);
    Imgproc.cvtColor(yuvMat, rgb, Imgproc.COLOR_YUV420p2RGBA);
    return rgb;
}
Now, for the ZTE, the pixel stride for the U and V planes is 2, but I can't seem to get it to display correctly.
This is the code I'm using right now:
Mat yuv = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC1);
yuv.put(0, 0, data);
Mat rgb = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC4);
Imgproc.cvtColor(yuv, rgb, Imgproc.COLOR_YUV2RGBA_NV21);
return rgb;
Any help would be greatly appreciated.
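For what it's worth, a common way to handle a pixel stride of 2 is to repack the planes into an NV21 buffer while honoring both the row stride and the pixel stride, and only then convert. The following is just a sketch of that approach (the method name is made up), not the original code:
// Repack a YUV_420_888 Image into an NV21 byte[] regardless of pixel stride.
private static byte[] toNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];

    // Copy the Y plane row by row (the row stride may be larger than the width).
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuffer = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    int pos = 0;
    for (int row = 0; row < height; row++) {
        yBuffer.position(row * yRowStride);
        yBuffer.get(nv21, pos, width);
        pos += width;
    }

    // Interleave V and U samples (NV21 expects V first), honoring the UV strides.
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();
    int uvRowStride = image.getPlanes()[1].getRowStride();
    int uvPixelStride = image.getPlanes()[1].getPixelStride();
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int index = row * uvRowStride + col * uvPixelStride;
            nv21[pos++] = vBuffer.get(index);
            nv21[pos++] = uBuffer.get(index);
        }
    }
    return nv21;
}
The resulting buffer can then be put into a Mat with height + height / 2 rows, as in the first snippet, and converted with Imgproc.COLOR_YUV2RGBA_NV21.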
I have a byte array with type TYPE_4BYTE_ABGR, and I know its width and height. I want to turn it into a BufferedImage; any ideas?
The fastest way to create a BufferedImage from a byte array in TYPE_4BYTE_ABGR form is to wrap the array in a DataBufferByte and create an interleaved WritableRaster from that. This makes sure there are no additional byte array allocations. Then create the BufferedImage from the raster and a matching color model:
public static void main(String[] args) {
    int width = 300;
    int height = 200;
    int samplesPerPixel = 4;          // This is the *4BYTE* in TYPE_4BYTE_ABGR
    int[] bandOffsets = {3, 2, 1, 0}; // This is the order (ABGR) part in TYPE_4BYTE_ABGR
    byte[] abgrPixelData = new byte[width * height * samplesPerPixel];
    DataBuffer buffer = new DataBufferByte(abgrPixelData, abgrPixelData.length);
    WritableRaster raster = Raster.createInterleavedRaster(buffer, width, height, samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
    ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
    BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
    System.out.println("image: " + image); // Should print: image: BufferedImage#<hash>: type = 6 ...
}
Note, however, that this image will be "unmanaged" (some hardware acceleration will be disabled), because you have direct access to the pixel array.
To avoid this, create the WritableRaster without the pixels and copy the pixels into it. This will use twice as much memory, but keeps the image "managed" and thus possibly gives better display performance:
// Skip creating the data buffer
WritableRaster raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE, width, height, samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
raster.setDataElements(0, 0, width, height, abgrPixelData);
// ...rest of code as above.
You could even do this (which might be more familiar):
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
WritableRaster raster = image.getRaster();
raster.setDataElements(0, 0, width, height, abgrPixelData);
Might not be very efficient, but a BufferedImage can be converted to another type this way:
public static BufferedImage convertToType(BufferedImage image, int type) {
    BufferedImage newImage = new BufferedImage(image.getWidth(), image.getHeight(), type);
    Graphics2D graphics = newImage.createGraphics();
    graphics.drawImage(image, 0, 0, null);
    graphics.dispose();
    return newImage;
}
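For instance, to get a TYPE_INT_ARGB copy of the 4-byte ABGR image from the question (the variable name abgrImage is illustrative):
// Illustrative call: the pixels are converted by drawing into the new image.
BufferedImage argbImage = convertToType(abgrImage, BufferedImage.TYPE_INT_ARGB);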
As for the method you want implemented: you would have to know the width and height of the image to convert a byte[] to a BufferedImage.
Edit:
One way is converting the byte[] to int[] (data type TYPE_INT_ARGB) and using setRGB:
int[] dst = new int[width * height];
for (int i = 0, j = 0; i < dst.length; i++) {
    int a = src[j++] & 0xff;
    int b = src[j++] & 0xff;
    int g = src[j++] & 0xff;
    int r = src[j++] & 0xff;
    dst[i] = (a << 24) | (r << 16) | (g << 8) | b;
}
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
image.setRGB(0, 0, width, height, dst, 0, width);
I need to take an int array and turn it into a BufferedImage. I really don't have any background on this subject and I learned it all from the internet, so here's what I'm trying to do:
Create an array from a BufferedImage (done), turn this array into an IntBuffer (done; later I'll need to do some operations on the image through the IntBuffer), put the changed values from the IntBuffer into a new array (done), and turn this array into a WritableRaster.
(If something isn't right in my understanding of the process, please tell me.)
Here's the line where I deal with the WritableRaster:
WritableRaster newRaster= newRaster.setPixels(0, 0, width, height, matrix);
Eclipse marks this as an error and says "Type mismatch: cannot convert from void to WritableRaster".
Please help! I'm a bit lost.
Also, sorry for my bad English.
EDIT:
The matrix:
int height=img.getHeight();
int width=img.getWidth();
int[]matrix=new int[width*height];
The part of the code where I try to insert values into the Raster:
BufferedImage finalImg = new BufferedImage(width,height, BufferedImage.TYPE_INT_RGB);
WritableRaster newRaster= (WritableRaster)finalImg.getData();
newRaster.setPixels(0, 0, width, height, matrix);
The error message:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 10769
at java.awt.image.SinglePixelPackedSampleModel.setPixels(Unknown Source)
at java.awt.image.WritableRaster.setPixels(Unknown Source)
You can create a WritableRaster and/or BufferedImage from an int array like this:
int w = 300;
int h = 200;
int[] matrix = new int[w * h];
// ...manipulate the matrix...
DataBufferInt buffer = new DataBufferInt(matrix, matrix.length);
int[] bandMasks = {0xFF0000, 0xFF00, 0xFF, 0xFF000000}; // ARGB (yes, ARGB, as the masks are R, G, B, A always) order
WritableRaster raster = Raster.createPackedRaster(buffer, w, h, w, bandMasks, null);
System.out.println("raster: " + raster);
ColorModel cm = ColorModel.getRGBdefault();
BufferedImage image = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
System.err.println("image: " + image);
Or, alternatively, let the color model create a compatible raster and copy your data into its backing buffer:
ColorModel cm = ColorModel.getRGBdefault();
int w = 300;
int h = 200;
WritableRaster raster = cm.createCompatibleWritableRaster(w, h);
DataBufferInt buffer = (DataBufferInt) raster.getDataBuffer();
int[] bufferData = buffer.getData();
int[] array = new int[2400];
Random random = new Random();
for (int i = 0; i < 2400; i++) {
    array[i] = random.nextInt(2);
}
System.arraycopy(array, 0, bufferData, 0, (array.length < bufferData.length ? array.length : bufferData.length));
BufferedImage image = new BufferedImage(cm, raster, false, null);
FileOutputStream fos = new FileOutputStream("D:\\abc\\OCR\\" + "LearningRaster" + ".png");
ImageIO.write(image, "PNG", fos);
fos.close();
setPixels returns void:
public void setPixels(int x, int y, int w, int h, int[] iArray)
so you need to create the raster first and then set the pixels on it:
WritableRaster newRaster= WritableRaster.createWritableRaster(…);
newRaster.setPixels(0, 0, width, height, matrix);
You need to put one int per band per pixel; how many bands depends on the color model (4 for ARGB, 3 for the TYPE_INT_RGB image in your code). For ARGB, the matrix size must be
int[] matrix = new int[width * height * 4]
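As an illustration (not part of the original answer), expanding a packed ARGB int[] into the per-band layout that setPixels expects for the default RGB color model could look like this; packed is assumed to hold one 0xAARRGGBB value per pixel:
// Expand packed ARGB ints into R, G, B, A samples (the band order of ColorModel.getRGBdefault()).
int[] samples = new int[width * height * 4];
for (int i = 0, s = 0; i < packed.length; i++) {
    samples[s++] = (packed[i] >> 16) & 0xff;  // R
    samples[s++] = (packed[i] >> 8) & 0xff;   // G
    samples[s++] = packed[i] & 0xff;          // B
    samples[s++] = (packed[i] >>> 24) & 0xff; // A
}
newRaster.setPixels(0, 0, width, height, samples);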
See the Oracle documentation for WritableRaster for more details and code examples.
I'm taking a screenshot with the Robot class and converting the BufferedImage into an int array. Then I want to convert the int array back to a BufferedImage, but that gives an error. This is my code:
Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
BufferedImage printscreen = robot.createScreenCapture(new Rectangle(screen));
int[] pixels = ((DataBufferInt) printscreen.getRaster().getDataBuffer()).getData();
BufferedImage image = new BufferedImage(screen.width, screen.height, BufferedImage.TYPE_INT_RGB);
WritableRaster raster = (WritableRaster) image.getRaster();
raster.setPixels(0, 0, screen.width, screen.height, pixels);
But I get an ArrayIndexOutOfBoundsException: 2073600. Why?
I'm getting the exception on this line:
raster.setPixels(0, 0, screen.width, screen.height, pixels);
EDIT: It works if I change the second BufferedImage's type to TYPE_BYTE_GRAY.
The packed ARGB array can instead be wrapped in a raster directly:
int[] bitMasks = new int[]{0xFF0000, 0xFF00, 0xFF, 0xFF000000};
SinglePixelPackedSampleModel sm = new SinglePixelPackedSampleModel(
DataBuffer.TYPE_INT, width, height, bitMasks);
DataBufferInt db = new DataBufferInt(pixels, pixels.length);
WritableRaster wr = Raster.createWritableRaster(sm, db, new Point());
BufferedImage image = new BufferedImage(ColorModel.getRGBdefault(), wr, false, null);
I changed it to:
getRaster().getPixels(0, 0, screen.width, screen.height, pixels)
and it works! Thanks for the help anyway.
An ArrayIndexOutOfBoundsException occurs when you try to access an element at an index beyond the size of the array. In this case, you're passing the array to the setPixels method, which, according to its javadocs, doesn't explicitly check the bounds or size of the array, so you should do that yourself before calling the method, e.g.
if (x >= 0 && x < arr.length) {
    // some code
}
This is the relevant code from the SampleModel class, which WritableRaster uses:
public int[] getPixels(int x, int y, int w, int h,
                       int iArray[], DataBuffer data) {
    int pixels[];
    int Offset = 0;
    if (iArray != null)
        pixels = iArray;
    else
        pixels = new int[numBands * w * h];
    for (int i = y; i < (h + y); i++) {
        for (int j = x; j < (w + x); j++) {
            for (int k = 0; k < numBands; k++) {
                pixels[Offset++] = getSample(j, i, k, data);
            }
        }
    }
    return pixels;
}
The pixels array in raster.setPixels(0, 0, screen.width, screen.height, pixels) must hold width * height * 3 samples when the image type is BufferedImage.TYPE_INT_RGB.
int[] samples = new int[screen.width * screen.height * 3];
// ...fill samples with one R, G and B value per pixel...
BufferedImage image = new BufferedImage(screen.width, screen.height, BufferedImage.TYPE_INT_RGB);
WritableRaster raster = image.getRaster();
raster.setPixels(0, 0, screen.width, screen.height, samples);