I receive 16-bit grayscale images from a device. The images are delivered in an uncompressed raw format. Here is an 8-byte example of how a 2×2 image looks in this format (LSB first, i.e. little-endian, as the code below assumes):
21 27    33 F6    28 F3    27 F2
-----    -----    -----    -----
(0,0)    (1,0)    (0,1)    (1,1)    <- pixel (x,y)
I need to compress the images using the Kakadu JPEG2000 library, which exposes a Java ImageWriter implementation. The ImageWriter.write method expects a RenderedImage as input, so I'm using the following code to create a BufferedImage from the raw image data:
int[] rasterData = new int[width * height];
int rawBufferOffset = 0;
for (int i = 0; i < rasterData.length; i++) {
    // Mask both bytes to avoid sign extension of negative byte values
    rasterData[i] = ((rawBuffer[rawBufferOffset + 1] & 0xFF) << 8) | (rawBuffer[rawBufferOffset] & 0xFF);
    rawBufferOffset += 2;
}
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_GRAY);
image.getRaster().setPixels(0, 0, width, height, rasterData);
The code works, but it's obviously not the most efficient way to do this conversion.
I was thinking about creating a RenderedImage implementation that uses rawBuffer directly as the image's raster data source. Can anyone suggest how to do that, or suggest another method for this conversion?
The most straightforward way is probably to use a ByteBuffer to swap the byte order, and create a new short array to hold the pixel data.
Then wrap the (short) pixel data in a DataBufferUShort. Create a matching WritableRaster and ColorModel, and finally create a BufferedImage from this. This image should be identical to the image in your code above (BufferedImage.TYPE_USHORT_GRAY), but be slightly faster to create, as you only copy the pixels once (as opposed to twice in your code).
int w = 2;
int h = 2;
int stride = 1;
byte[] rawBytes = {0x21, 0x27, 0x33, (byte) 0xF6, 0x28, (byte) 0xF3, (byte) 0x27, (byte) 0xF2};

// Swap byte order and copy the data into a short array
short[] rawShorts = new short[rawBytes.length / 2];
ByteBuffer.wrap(rawBytes)
        .order(ByteOrder.LITTLE_ENDIAN)
        .asShortBuffer()
        .get(rawShorts);

DataBuffer dataBuffer = new DataBufferUShort(rawShorts, rawShorts.length);
WritableRaster raster = Raster.createInterleavedRaster(dataBuffer, w, h, w * stride, stride, new int[]{0}, null);
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
Another, slightly more convoluted, but probably faster way (as you don't copy the backing pixel array at all) is to create a custom SampleModel that works with the LSB-first (little-endian) byte data but exposes it as TYPE_USHORT. This will create a TYPE_CUSTOM image.
int w = 2, h = 2, stride = 2;
byte[] rawBytes = {0x21, 0x27, 0x33, (byte) 0xF6, 0x28, (byte) 0xF3, (byte) 0x27, (byte) 0xF2};
DataBuffer dataBuffer = new DataBufferByte(rawBytes, rawBytes.length);
SampleModel sampleModel = new ComponentSampleModel(DataBuffer.TYPE_USHORT, w, h, stride, w * stride, new int[] {0}) {
    @Override
    public Object getDataElements(int x, int y, Object obj, DataBuffer data) {
        if ((x < 0) || (y < 0) || (x >= width) || (y >= height)) {
            throw new ArrayIndexOutOfBoundsException("Coordinate out of bounds!");
        }

        // Simplified, as we only support TYPE_USHORT
        int numDataElems = getNumDataElements();
        int pixelOffset = y * scanlineStride + x * pixelStride;

        short[] sdata;
        if (obj == null) {
            sdata = new short[numDataElems];
        }
        else {
            sdata = (short[]) obj;
        }

        for (int i = 0; i < numDataElems; i++) {
            // Combine two consecutive bytes, LSB first (little-endian)
            sdata[i] = (short) (data.getElem(bankIndices[i], pixelOffset + bandOffsets[i] + 1) << 8
                    | data.getElem(bankIndices[i], pixelOffset + bandOffsets[i]));
        }

        return sdata;
    }
};
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
WritableRaster raster = Raster.createWritableRaster(sampleModel, dataBuffer, null);
BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
I don't really see a reason for creating a RenderedImage subclass for this.
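For reference, handing either image to the JPEG2000 writer should then work like with any javax.imageio plugin. A minimal sketch; the "jpeg2000" format name and the output file name are my assumptions, check how the Kakadu plugin actually registers itself:

// Sketch only: "jpeg2000" and "image.jp2" are assumptions, not from the Kakadu docs.
// Uses javax.imageio.ImageIO, javax.imageio.ImageWriter,
// javax.imageio.stream.ImageOutputStream and java.util.Iterator.
Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("jpeg2000");
ImageWriter writer = writers.next();
ImageOutputStream output = ImageIO.createImageOutputStream(new File("image.jp2"));
writer.setOutput(output);
writer.write(image); // ImageWriter.write accepts any RenderedImage
writer.dispose();
output.close();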
Related
I have a byte array in TYPE_4BYTE_ABGR format, and I know its width and height. I want to convert it to a BufferedImage. Any ideas?
The fastest way to create a BufferedImage from a byte array in TYPE_4BYTE_ABGR form, is to wrap the array in a DataBufferByte and create an interleaved WritableRaster from that. This will make sure there are no additional byte array allocations. Then create the BufferedImage from the raster, and a matching color model:
public static void main(String[] args) {
    int width = 300;
    int height = 200;
    int samplesPerPixel = 4; // This is the *4BYTE* in TYPE_4BYTE_ABGR
    int[] bandOffsets = {3, 2, 1, 0}; // This is the order (ABGR) part in TYPE_4BYTE_ABGR

    byte[] abgrPixelData = new byte[width * height * samplesPerPixel];

    DataBuffer buffer = new DataBufferByte(abgrPixelData, abgrPixelData.length);
    WritableRaster raster = Raster.createInterleavedRaster(buffer, width, height, samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
    ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
    BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);

    System.out.println("image: " + image); // Should print: image: BufferedImage@<hash>: type = 6 ...
}
Note, however, that this image will be "unmanaged" (some HW acceleration will be disabled), because you have direct access to the pixel array.
To avoid this, create the WritableRaster without the pixels, and copy the pixels into it. This will use twice as much memory, but keeps the image "managed", and thus possibly gives better display performance:
// Skip creating the data buffer
WritableRaster raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE, width, height, samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
raster.setDataElements(0, 0, width, height, abgrPixelData);
// ...rest of code as above.
You could even do this (which might be more familiar):
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
WritableRaster raster = image.getRaster();
raster.setDataElements(0, 0, width, height, abgrPixelData);
Might not be very efficient, but a BufferedImage can be converted to another type this way:
public static BufferedImage convertToType(BufferedImage image, int type) {
    BufferedImage newImage = new BufferedImage(image.getWidth(), image.getHeight(), type);
    Graphics2D graphics = newImage.createGraphics();
    graphics.drawImage(image, 0, 0, null);
    graphics.dispose();
    return newImage;
}
About the method you want implemented: you would have to know the width and height of the image to convert a byte[] to a BufferedImage.
Edit:
One way is to convert the byte[] to an int[] (data type TYPE_INT_ARGB) and use setRGB:
int[] dst = new int[width * height];
for (int i = 0, j = 0; i < dst.length; i++) {
    int a = src[j++] & 0xff;
    int b = src[j++] & 0xff;
    int g = src[j++] & 0xff;
    int r = src[j++] & 0xff;
    dst[i] = (a << 24) | (r << 16) | (g << 8) | b;
}
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
image.setRGB(0, 0, width, height, dst, 0, width);
I've written a Java method, but I have to use it in an Android project. Can someone help me convert it to Android, or tell me what I should do?
public Image getImage() {
    ColorModel cm = grayColorModel();
    if (n == 1) { // in case it's an 8 bit/pixel image
        return Toolkit.getDefaultToolkit().createImage(new MemoryImageSource(w, h, cm, pixData, 0, w));
    }
    return null; // other bit depths are not handled in this snippet
}
protected ColorModel grayColorModel() {
    byte[] r = new byte[256];
    for (int i = 0; i < 256; i++) {
        r[i] = (byte) (i & 0xff);
    }
    return new IndexColorModel(8, 256, r, r, r);
}
For instance, to convert a grayscale image (byte array, imageSrc) to a drawable:
byte[] imageSrc = [...];

// That's where the RGBA array goes.
byte[] imageRGBA = new byte[imageSrc.length * 4];
int i;
for (i = 0; i < imageSrc.length; i++) {
    imageRGBA[i * 4] = imageRGBA[i * 4 + 1] = imageRGBA[i * 4 + 2] = ((byte) ~imageSrc[i]); // Invert the source bits
    imageRGBA[i * 4 + 3] = -1; // 0xff, that's the alpha.
}

// Now put these nice RGBA pixels into a Bitmap object
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(imageRGBA));
The code may differ depending on the input format.
How can I convert a BufferedImage to a Mat in OpenCV?
I'm using the Java wrapper for OpenCV (not JavaCV). As I am new to OpenCV, I have some problems understanding how Mat works.
I want to do something like this (based on Ted W.'s reply):
BufferedImage image = ImageIO.read(b.getClass().getResource("Lena.png"));
int rows = image.getWidth();
int cols = image.getHeight();
int type = CvType.CV_16UC1;

Mat newMat = new Mat(rows, cols, type);

for (int r = 0; r < rows; r++) {
    for (int c = 0; c < cols; c++) {
        newMat.put(r, c, image.getRGB(r, c));
    }
}

Highgui.imwrite("Lena_copy.png", newMat);
This doesn't work. Lena_copy.png is just a black picture with the correct dimensions.
I was also trying to do the same thing, because I needed to combine images processed with two libraries. What I tried was putting the byte[] into the Mat instead of RGB values, and it worked! So what I did was:
1. Convert the BufferedImage to a byte array with:
byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
2. Then you can simply put it into the Mat, if you set the type to CV_8UC3:
image_final.put(0, 0, pixels);
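Put together, a minimal sketch of both steps; the assumption is that the BufferedImage really is TYPE_3BYTE_BGR, so the backing buffer is a byte[] in the BGR order that CV_8UC3 expects:

// Assumes image.getType() == BufferedImage.TYPE_3BYTE_BGR
byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
Mat image_final = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
image_final.put(0, 0, pixels);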
Edit:
You can also try to do the inverse, as in this answer.
Don't want to deal with a big pixel array? Simply use this:
BufferedImage to Mat
public static Mat BufferedImage2Mat(BufferedImage image) throws IOException {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    ImageIO.write(image, "jpg", byteArrayOutputStream);
    byteArrayOutputStream.flush();
    return Imgcodecs.imdecode(new MatOfByte(byteArrayOutputStream.toByteArray()), Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
}
Mat to BufferedImage
public static BufferedImage Mat2BufferedImage(Mat matrix) throws IOException {
    MatOfByte mob = new MatOfByte();
    Imgcodecs.imencode(".jpg", matrix, mob);
    return ImageIO.read(new ByteArrayInputStream(mob.toArray()));
}
Note: though the cost is mostly negligible, this way you get a reliable solution at the price of encoding + decoding, so you lose some performance (generally 10 to 20 milliseconds). JPEG encoding loses some image quality and is also slow (it may take 10 to 20 ms). BMP is lossless and fast (1 or 2 ms) but requires a little more memory (negligible). PNG is lossless but takes a little more time to encode than BMP. Using BMP should fit most cases, I think.
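A sketch of the same method using BMP instead, assuming your ImageIO installation has a BMP writer (the JDK bundles one); only the format name changes:

public static Mat BufferedImage2MatBmp(BufferedImage image) throws IOException {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    ImageIO.write(image, "bmp", byteArrayOutputStream); // lossless, and much faster than "jpg"
    return Imgcodecs.imdecode(new MatOfByte(byteArrayOutputStream.toByteArray()), Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
}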
This one worked fine for me, and it takes 0 to 1 ms to perform:
public static Mat bufferedImageToMat(BufferedImage bi) {
    // Note: this assumes the image's raster is backed by a byte[]
    // (e.g. TYPE_3BYTE_BGR); for int-backed types the cast will fail
    Mat mat = new Mat(bi.getHeight(), bi.getWidth(), CvType.CV_8UC3);
    byte[] data = ((DataBufferByte) bi.getRaster().getDataBuffer()).getData();
    mat.put(0, 0, data);
    return mat;
}
I use the following code in my program.
protected Mat img2Mat(BufferedImage in) {
    Mat out;
    byte[] data;
    int r, g, b;

    if (in.getType() == BufferedImage.TYPE_INT_RGB) {
        out = new Mat(in.getHeight(), in.getWidth(), CvType.CV_8UC3);
        data = new byte[in.getWidth() * in.getHeight() * (int) out.elemSize()];
        int[] dataBuff = in.getRGB(0, 0, in.getWidth(), in.getHeight(), null, 0, in.getWidth());
        for (int i = 0; i < dataBuff.length; i++) {
            // getRGB packs pixels as 0xAARRGGBB; store them in BGR order for OpenCV
            data[i * 3] = (byte) (dataBuff[i] & 0xFF);             // B
            data[i * 3 + 1] = (byte) ((dataBuff[i] >> 8) & 0xFF);  // G
            data[i * 3 + 2] = (byte) ((dataBuff[i] >> 16) & 0xFF); // R
        }
    } else {
        out = new Mat(in.getHeight(), in.getWidth(), CvType.CV_8UC1);
        data = new byte[in.getWidth() * in.getHeight() * (int) out.elemSize()];
        int[] dataBuff = in.getRGB(0, 0, in.getWidth(), in.getHeight(), null, 0, in.getWidth());
        for (int i = 0; i < dataBuff.length; i++) {
            // Extract channels as ints (no byte cast, which would sign-extend)
            r = (dataBuff[i] >> 16) & 0xFF;
            g = (dataBuff[i] >> 8) & 0xFF;
            b = dataBuff[i] & 0xFF;
            data[i] = (byte) ((0.21 * r) + (0.71 * g) + (0.07 * b));
        }
    }
    out.put(0, 0, data);
    return out;
}
I found a solution here.
The solution is similar to Andriy's, but it uses the FlyCapture2 C++ API:
Camera c;
c.Connect();
c.StartCapture();

Image f2Img, cf2Img;
c.RetrieveBuffer(&f2Img);
f2Img.Convert(FlyCapture2::PIXEL_FORMAT_BGR, &cf2Img);

unsigned int rowBytes = (double) cf2Img.GetReceivedDataSize() / (double) cf2Img.GetRows();
cv::Mat opencvImg = cv::Mat(cf2Img.GetRows(), cf2Img.GetCols(), CV_8UC3, cf2Img.GetData(), rowBytes);
To convert from a BufferedImage to a Mat, I use the method below:
public static Mat img2Mat(BufferedImage image) {
    image = convertTo3ByteBGRType(image);
    byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
    mat.put(0, 0, data);
    return mat;
}
Before converting to a Mat, I change the type of the BufferedImage to TYPE_3BYTE_BGR, because for some BufferedImage types the data buffer is backed by an int[] rather than a byte[], and the cast to DataBufferByte in ((DataBufferByte) image.getRaster().getDataBuffer()).getData() would break the code.
Below is the method for converting to TYPE_3BYTE_BGR.
private static BufferedImage convertTo3ByteBGRType(BufferedImage image) {
    BufferedImage convertedImage = new BufferedImage(image.getWidth(), image.getHeight(),
            BufferedImage.TYPE_3BYTE_BGR);
    convertedImage.getGraphics().drawImage(image, 0, 0, null);
    return convertedImage;
}
When you use the JavaCV wrapper from the bytedeco library (version 1.5.3), you can use Java2DFrameUtils.
Simple usage is:
import org.bytedeco.javacv.Java2DFrameUtils;
...
BufferedImage img = ImageIO.read(new File("some/image.jpg"));
Mat mat = Java2DFrameUtils.toMat(img);
Note: don't mix different wrappers; the bytedeco Mat is different from the OpenCV Mat.
One simple way would be to create a new Mat using
Mat newMat = new Mat(rows, cols, type);
then get the pixel values from your BufferedImage and put them into newMat using
newMat.put(row, col, pixel);
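Spelled out, that loop might look like the sketch below; note that Mat.put takes (row, col) while BufferedImage.getRGB takes (x, y), and the 8-bit BGR type (CV_8UC3) is an assumption:

Mat newMat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
for (int row = 0; row < newMat.rows(); row++) {
    for (int col = 0; col < newMat.cols(); col++) {
        int rgb = image.getRGB(col, row); // x = col, y = row
        newMat.put(row, col, rgb & 0xFF, (rgb >> 8) & 0xFF, (rgb >> 16) & 0xFF); // B, G, R
    }
}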
You can do it in OpenCV as follows:
File f4 = new File("aa.png");
Mat mat = Highgui.imread(f4.getAbsolutePath());
I have a byte array containing pixel values from a .bmp file. It was generated by doing this:
BufferedImage readImage = ImageIO.read(new File(fileName));
byte imageData[] = ((DataBufferByte)readImage.getData().getDataBuffer()).getData();
Now I need to recreate the .bmp image. I tried to make a BufferedImage and set the pixels of the WritableRaster by calling the setPixels method, but there I have to provide an int[], float[], or double[] array. Maybe I need to convert the byte array into one of these, but I don't know how to do that. I also tried the setDataElements method, but I am not sure how to use it either.
Can anyone explain how to create a bmp image from a byte array?
Edit: @Perception
This is what I have done so far:
private byte[] getPixelArrayToBmpByteArray(byte[] pixelData, int width, int height, int depth) throws Exception {
    int[] pixels = byteToInt(pixelData);
    BufferedImage image = null;
    if (depth == 8) {
        image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    }
    else if (depth == 24) {
        image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    }
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, width, height, pixels);
    image.setData(raster);
    return getBufferedImageToBmpByteArray(image);
}

private byte[] getBufferedImageToBmpByteArray(BufferedImage image) {
    byte[] imageData = null;
    try {
        ByteArrayOutputStream bas = new ByteArrayOutputStream();
        ImageIO.write(image, "bmp", bas);
        imageData = bas.toByteArray();
        bas.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return imageData;
}

private int[] byteToInt(byte[] data) {
    int[] ints = new int[data.length];
    for (int i = 0; i < data.length; i++) {
        ints[i] = data[i] & 0xff; // one int per byte
    }
    return ints;
}
You need to pack three bytes into each integer you make. Depending on the format of the buffered image, this will be 0xRRGGBB.
byteToInt will have to consume three bytes like this:
private int[] byteToInt(byte[] data) {
    int[] ints = new int[data.length / 3];

    int byteIdx = 0;
    for (int pixel = 0; pixel < ints.length; pixel++) {
        int rByte = data[byteIdx++] & 0xFF;
        int gByte = data[byteIdx++] & 0xFF;
        int bByte = data[byteIdx++] & 0xFF;
        ints[pixel] = (rByte << 16) | (gByte << 8) | bByte;
    }
    return ints;
}
You can also wrap the array with ByteBuffer.wrap(arr, offset, length) and read packed values with getInt().
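A small sketch of that idea; note that getInt() consumes four bytes per call, so it fits 4-byte pixel layouts rather than the 3-byte RGB data above (data is assumed to be the raw byte array):

ByteBuffer buffer = ByteBuffer.wrap(data); // optionally .order(ByteOrder.LITTLE_ENDIAN)
int[] ints = new int[data.length / 4];
for (int i = 0; i < ints.length; i++) {
    ints[i] = buffer.getInt(); // reads four bytes per call
}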
Having just a byte array is not enough. You also need to construct a header (if you are reading from a raw format, such as inside a DICOM file).
I'm trying to convert an RGB image to a grayscale image.
The method that does this task is the following:
public BufferedImage rgbToGrayscale(BufferedImage in) {
    int width = in.getWidth();
    int height = in.getHeight();

    BufferedImage grayImage = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    WritableRaster raster = grayImage.getRaster();

    int[] rgbArray = new int[width * height];
    in.getRGB(0, 0, width, height, rgbArray, 0, width);

    int[] outputArray = new int[width * height];
    int red, green, blue, gray;

    for (int i = 0; i < (height * width); i++) {
        red = (rgbArray[i] >> 16) & 0xff;
        green = (rgbArray[i] >> 8) & 0xff;
        blue = (rgbArray[i]) & 0xff;

        gray = (int) ((0.30 * red) + (0.59 * green) + (0.11 * blue));

        if (gray < 0)
            gray = 0;
        if (gray > 255)
            gray = 255;

        outputArray[i] = (gray & 0xff);
    }

    raster.setPixels(0, 0, width, height, outputArray);
    return grayImage;
}
I have a method that saves the pixel values in a file:
public void writeImageValueToFile(BufferedImage in, String fileName) {
    int width = in.getWidth();
    int height = in.getHeight();

    try {
        FileWriter fstream = new FileWriter(fileName + ".txt");
        BufferedWriter out = new BufferedWriter(fstream);

        int[] grayArray = new int[width * height];
        in.getRGB(0, 0, width, height, grayArray, 0, width);

        for (int i = 0; i < (height * width); i++) {
            out.write((grayArray[i] & 0xff) + "\n");
        }
        out.close();
    } catch (Exception e) {
        System.err.println("Error: " + e.getMessage());
    }
}
The problem I have is that the RGB value I get from my method is always bigger than the expected one.
I created an image and filled it with the color 128, 128, 128. According to the first method, if I print the outputArray's data, I get:
r, g, b = 128, 128, 128. Final = 127 ---> correct :D
However, when I call the second method, I get the RGB value 187, which is incorrect.
Any suggestion?
Thanks!!!
Take a look at javax.swing.GrayFilter; it uses the RGBImageFilter class to accomplish the same thing and has a very similar implementation. It may make your life simpler.
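A quick sketch of the usual GrayFilter usage; originalImage is assumed to be any java.awt.Image, the boolean controls brightening and the int is the gray percentage:

// Filter any java.awt.Image to grayscale via GrayFilter
ImageFilter filter = new GrayFilter(true, 50);
ImageProducer producer = new FilteredImageSource(originalImage.getSource(), filter);
Image grayImage = Toolkit.getDefaultToolkit().createImage(producer);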
I'm not an expert at these things, but aren't RGB values stored as hex (base 16)? If so, the problem lies in your assumption that the operation & 0xff will cause your int to be stored/handled as base 16. It is just a notation, and default int usage in strings will always be base 10.
int a = 200;
a = a & 0xff;
System.out.println(a);
// output
200
You need to use an explicit base16 toString() method.
System.out.println(Integer.toHexString(200));
// output
c8