Converting `BufferedImage` to `Mat` in OpenCV - java

How can I convert a BufferedImage to a Mat in OpenCV?
I'm using the Java wrapper for OpenCV (not JavaCV). As I am new to OpenCV, I have some problems understanding how Mat works.
I want to do something like this (based on Ted W.'s reply):
BufferedImage image = ImageIO.read(b.getClass().getResource("Lena.png"));
int rows = image.getWidth();
int cols = image.getHeight();
int type = CvType.CV_16UC1;
Mat newMat = new Mat(rows, cols, type);
for (int r = 0; r < rows; r++) {
    for (int c = 0; c < cols; c++) {
        newMat.put(r, c, image.getRGB(r, c));
    }
}
Highgui.imwrite("Lena_copy.png", newMat);
This doesn't work. Lena_copy.png is just a black picture with the correct dimensions.

I was also trying to do the same thing, because I needed to combine images processed with two different libraries. Instead of putting packed RGB values into the Mat, I tried putting a byte[] in, and it worked! So what I did was:
1. Convert the BufferedImage to a byte array with:
byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
2. Then you can simply put it into the Mat, provided the Mat's type is set to CV_8UC3:
image_final.put(0, 0, pixels);
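Putting the two steps together, a minimal sketch of this approach might look like the following (it assumes the BufferedImage is already of TYPE_3BYTE_BGR so its raster is byte-backed; the file name is just an example):
// Minimal sketch, assuming a byte-backed (TYPE_3BYTE_BGR) BufferedImage.
BufferedImage image = ImageIO.read(new File("Lena.png")); // example input
byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
Mat image_final = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
image_final.put(0, 0, pixels);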
Edit:
You can also try to do the inverse, as in this answer.

Don't want to deal with a big pixel array? Simply use this.
BufferedImage to Mat
public static Mat BufferedImage2Mat(BufferedImage image) throws IOException {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    ImageIO.write(image, "jpg", byteArrayOutputStream);
    byteArrayOutputStream.flush();
    return Imgcodecs.imdecode(new MatOfByte(byteArrayOutputStream.toByteArray()),
            Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
}
Mat to BufferedImage
public static BufferedImage Mat2BufferedImage(Mat matrix) throws IOException {
    MatOfByte mob = new MatOfByte();
    Imgcodecs.imencode(".jpg", matrix, mob);
    return ImageIO.read(new ByteArrayInputStream(mob.toArray()));
}
Note: this approach is reliable, but because it goes through encoding + decoding you lose a little performance, typically 10 to 20 milliseconds. JPG encoding loses some image quality and is also slow (it may take 10 to 20 ms). BMP is lossless and fast (1 or 2 ms) but requires a little more memory, which is negligible. PNG is lossless too, but takes a little longer to encode than BMP. Using BMP should fit most cases, I think.
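For example, switching the intermediate format only means changing the format strings; a BMP variant of the same helper could look like this (a sketch, reusing the Imgcodecs flag from above; note that ImageIO's BMP writer may not handle images with an alpha channel):
public static Mat bufferedImage2MatBmp(BufferedImage image) throws IOException {
    // Same approach as above, but with lossless (and faster) BMP as the intermediate format.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    ImageIO.write(image, "bmp", out);
    return Imgcodecs.imdecode(new MatOfByte(out.toByteArray()),
            Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
}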

This one worked fine for me, and it takes 0 to 1 ms to run.
public static Mat bufferedImageToMat(BufferedImage bi) {
    // Assumes a byte-backed image (e.g. TYPE_3BYTE_BGR); for int-backed types
    // the cast to DataBufferByte would fail.
    Mat mat = new Mat(bi.getHeight(), bi.getWidth(), CvType.CV_8UC3);
    byte[] data = ((DataBufferByte) bi.getRaster().getDataBuffer()).getData();
    mat.put(0, 0, data);
    return mat;
}

I use the following code in my program.
protected Mat img2Mat(BufferedImage in) {
    Mat out;
    byte[] data;
    int r, g, b;

    if (in.getType() == BufferedImage.TYPE_INT_RGB) {
        out = new Mat(in.getHeight(), in.getWidth(), CvType.CV_8UC3);
        data = new byte[in.getWidth() * in.getHeight() * (int) out.elemSize()];
        int[] dataBuff = in.getRGB(0, 0, in.getWidth(), in.getHeight(), null, 0, in.getWidth());
        for (int i = 0; i < dataBuff.length; i++) {
            // getRGB() packs pixels as 0xAARRGGBB; OpenCV expects BGR byte order.
            data[i * 3] = (byte) (dataBuff[i] & 0xFF);             // blue
            data[i * 3 + 1] = (byte) ((dataBuff[i] >> 8) & 0xFF);  // green
            data[i * 3 + 2] = (byte) ((dataBuff[i] >> 16) & 0xFF); // red
        }
    } else {
        out = new Mat(in.getHeight(), in.getWidth(), CvType.CV_8UC1);
        data = new byte[in.getWidth() * in.getHeight() * (int) out.elemSize()];
        int[] dataBuff = in.getRGB(0, 0, in.getWidth(), in.getHeight(), null, 0, in.getWidth());
        for (int i = 0; i < dataBuff.length; i++) {
            r = (dataBuff[i] >> 16) & 0xFF;
            g = (dataBuff[i] >> 8) & 0xFF;
            b = dataBuff[i] & 0xFF;
            // Luminosity weights for the grayscale conversion.
            data[i] = (byte) ((0.21 * r) + (0.71 * g) + (0.07 * b));
        }
    }
    out.put(0, 0, data);
    return out;
}
Reference: here

I found a solution here.
The solution is similar to Andriy's.
Camera c;
c.Connect();
c.StartCapture();
Image f2Img, cf2Img;
c.RetrieveBuffer(&f2Img);
f2Img.Convert( FlyCapture2::PIXEL_FORMAT_BGR, &cf2Img );
unsigned int rowBytes = (double)cf2Img.GetReceivedDataSize()/(double)cf2Img.GetRows();
cv::Mat opencvImg = cv::Mat( cf2Img.GetRows(), cf2Img.GetCols(), CV_8UC3, cf2Img.GetData(),rowBytes );

To convert from BufferedImage to Mat I use the method below:
public static Mat img2Mat(BufferedImage image) {
    image = convertTo3ByteBGRType(image);
    byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
    mat.put(0, 0, data);
    return mat;
}
Before converting into a Mat, I change the type of the BufferedImage to TYPE_3BYTE_BGR, because for some BufferedImage types the data buffer is backed by an int[] rather than a byte[], so the cast in ((DataBufferByte) image.getRaster().getDataBuffer()).getData(); would break the code.
Below is the method for converting to TYPE_3BYTE_BGR.
private static BufferedImage convertTo3ByteBGRType(BufferedImage image) {
    BufferedImage convertedImage = new BufferedImage(image.getWidth(), image.getHeight(),
            BufferedImage.TYPE_3BYTE_BGR);
    convertedImage.getGraphics().drawImage(image, 0, 0, null);
    return convertedImage;
}
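A quick, hypothetical usage example (the file name is just an example): because img2Mat normalizes the image to TYPE_3BYTE_BGR first, it also works for int-backed types such as TYPE_INT_RGB, where a direct DataBufferByte cast would throw a ClassCastException.
BufferedImage img = ImageIO.read(new File("Lena.png")); // example input
Mat mat = img2Mat(img);
System.out.println(mat.rows() + "x" + mat.cols() + ", type " + mat.type());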

When you use the bytedeco library (version 1.5.3) as the JavaCV wrapper, you can use Java2DFrameUtils.
Simple usage is:
import org.bytedeco.javacv.Java2DFrameUtils;
...
BufferedImage img = ImageIO.read(new File("some/image.jpg"));
Mat mat = Java2DFrameUtils.toMat(img);
Note: don't mix different wrappers; the bytedeco Mat is a different class from the OpenCV Mat.

One simple way would be to create a new Mat using
Mat newMat = new Mat(rows, cols, type);
then get the pixel values from your BufferedImage and put into newMat using
newMat.put(row, col, pixel);
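Fleshed out, a minimal per-pixel sketch might look like this (slow, but explicit; CV_8UC3 and the BGR byte order are assumptions, and note that rows correspond to the image height and columns to the width):
BufferedImage image = ImageIO.read(new File("Lena.png")); // example input
Mat newMat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
for (int row = 0; row < newMat.rows(); row++) {
    for (int col = 0; col < newMat.cols(); col++) {
        int argb = image.getRGB(col, row); // x = col, y = row
        byte b = (byte) (argb & 0xFF);
        byte g = (byte) ((argb >> 8) & 0xFF);
        byte r = (byte) ((argb >> 16) & 0xFF);
        newMat.put(row, col, new byte[] { b, g, r }); // OpenCV expects BGR order
    }
}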

You can do it in OpenCV as follows:
File f4 = new File("aa.png");
Mat mat = Highgui.imread(f4.getAbsolutePath());
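Note that Highgui.imread comes from the OpenCV 2.4 Java bindings; in OpenCV 3 and later, image reading and writing moved to the Imgcodecs class, so the equivalent would be roughly:
File f4 = new File("aa.png");
Mat mat = Imgcodecs.imread(f4.getAbsolutePath());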

Related

How to convert a JavaFX image to a OpenCV matrix?

I am unable to successfully convert a javafx.scene.image.Image to an org.opencv.core.Mat. The resulting matrix produces a black image. I've not used PixelReader before, so I am unsure whether or not I am using it correctly.
Here is my code:
public static Mat imageToMat(Image image) {
    int width = (int) image.getWidth();
    int height = (int) image.getHeight();
    byte[] buffer = new byte[width * height * 3];

    PixelReader reader = image.getPixelReader();
    WritablePixelFormat format = WritablePixelFormat.getByteBgraInstance();
    reader.getPixels(0, 0, width, height, format, buffer, 0, 0);

    Mat mat = new Mat(height, width, CvType.CV_8UC3);
    mat.put(0, 0, buffer);
    return mat;
}
Any help/solutions would be greatly appreciated! :) Thank you.
That stuff is still a bit awkward. I've found two working solutions. I'll just post my OpenCvUtils class; hope it helps until someone comes up with a better solution:
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.ByteArrayInputStream;
import java.net.URISyntaxException;
import java.nio.file.Paths;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.image.Image;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;
public class OpenCvUtils {
    /**
     * Convert a Mat object (OpenCV) into the corresponding Image for JavaFX.
     *
     * @param frame
     *            the {@link Mat} representing the current frame
     * @return the {@link Image} to show
     */
    public static Image mat2Image(Mat frame) {
        // create a temporary buffer
        MatOfByte buffer = new MatOfByte();
        // encode the frame in the buffer, according to the PNG format
        Imgcodecs.imencode(".png", frame, buffer);
        // build and return an Image created from the image encoded in the buffer
        return new Image(new ByteArrayInputStream(buffer.toArray()));
    }

    public static Mat image2Mat(Image image) {
        BufferedImage bImage = SwingFXUtils.fromFXImage(image, null);
        return bufferedImage2Mat(bImage);
    }

    // http://www.codeproject.com/Tips/752511/How-to-Convert-Mat-to-BufferedImage-Vice-Versa
    public static Mat bufferedImage2Mat(BufferedImage in) {
        Mat out;
        byte[] data;
        int r, g, b;
        int height = in.getHeight();
        int width = in.getWidth();

        if (in.getType() == BufferedImage.TYPE_INT_RGB || in.getType() == BufferedImage.TYPE_INT_ARGB) {
            out = new Mat(height, width, CvType.CV_8UC3);
            data = new byte[height * width * (int) out.elemSize()];
            int[] dataBuff = in.getRGB(0, 0, width, height, null, 0, width);
            for (int i = 0; i < dataBuff.length; i++) {
                data[i * 3 + 2] = (byte) ((dataBuff[i] >> 16) & 0xFF);
                data[i * 3 + 1] = (byte) ((dataBuff[i] >> 8) & 0xFF);
                data[i * 3] = (byte) ((dataBuff[i] >> 0) & 0xFF);
            }
        } else {
            out = new Mat(height, width, CvType.CV_8UC1);
            data = new byte[height * width * (int) out.elemSize()];
            int[] dataBuff = in.getRGB(0, 0, width, height, null, 0, width);
            for (int i = 0; i < dataBuff.length; i++) {
                r = (dataBuff[i] >> 16) & 0xFF;
                g = (dataBuff[i] >> 8) & 0xFF;
                b = (dataBuff[i] >> 0) & 0xFF;
                data[i] = (byte) ((0.21 * r) + (0.71 * g) + (0.07 * b)); // luminosity
            }
        }
        out.put(0, 0, data);
        return out;
    }

    public static String getOpenCvResource(Class<?> clazz, String path) {
        try {
            return Paths.get(clazz.getResource(path).toURI()).toString();
        } catch (URISyntaxException e) {
            throw new RuntimeException(e);
        }
    }

    // Convert image to Mat
    // alternate version http://stackoverflow.com/questions/21740729/converting-bufferedimage-to-mat-opencv-in-java
    public static Mat bufferedImage2Mat_v2(BufferedImage im) {
        im = toBufferedImageOfType(im, BufferedImage.TYPE_3BYTE_BGR);
        // Convert INT to BYTE
        //im = new BufferedImage(im.getWidth(), im.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
        // Convert bufferedimage to byte array
        byte[] pixels = ((DataBufferByte) im.getRaster().getDataBuffer()).getData();
        // Create a Matrix the same size of image
        Mat image = new Mat(im.getHeight(), im.getWidth(), CvType.CV_8UC3);
        // Fill Matrix with image values
        image.put(0, 0, pixels);
        return image;
    }

    private static BufferedImage toBufferedImageOfType(BufferedImage original, int type) {
        if (original == null) {
            throw new IllegalArgumentException("original == null");
        }
        // Don't convert if it already has correct type
        if (original.getType() == type) {
            return original;
        }
        // Create a buffered image
        BufferedImage image = new BufferedImage(original.getWidth(), original.getHeight(), type);
        // Draw the image onto the new buffer
        Graphics2D g = image.createGraphics();
        try {
            g.setComposite(AlphaComposite.Src);
            g.drawImage(original, 0, 0, null);
        } finally {
            g.dispose();
        }
        return image;
    }
}
Thanks to Nikos Paraskevopoulos for suggesting setting the scanlineStride parameter of the PixelReader::getPixels() method; this has solved it. :)
Working code below:
public static Mat imageToMat(Image image) {
    int width = (int) image.getWidth();
    int height = (int) image.getHeight();
    byte[] buffer = new byte[width * height * 4];

    PixelReader reader = image.getPixelReader();
    WritablePixelFormat<ByteBuffer> format = WritablePixelFormat.getByteBgraInstance();
    reader.getPixels(0, 0, width, height, format, buffer, 0, width * 4);

    Mat mat = new Mat(height, width, CvType.CV_8UC4);
    mat.put(0, 0, buffer);
    return mat;
}
You need to convert: Mat > BufferedImage > FXImage
private Image mat2Image(Mat src)
{
    BufferedImage image = ImageConverter.toImage(src);
    return SwingFXUtils.toFXImage(image, null);
}
Class:
public class ImageConverter {
    /**
     * Converts/writes a Mat into a BufferedImage.
     *
     * @param src Mat of type CV_8UC3 or CV_8UC1
     * @return BufferedImage of type TYPE_3BYTE_BGR or TYPE_BYTE_GRAY
     */
    public static BufferedImage toImage(Mat src) {
        if (src != null) {
            int cols = src.cols();
            int rows = src.rows();
            int elemSize = (int) src.elemSize();
            byte[] data = new byte[cols * rows * elemSize];
            int type;
            src.get(0, 0, data); // copy the Mat's pixel data into the byte array

            switch (src.channels()) {
                case 1:
                    type = BufferedImage.TYPE_BYTE_GRAY;
                    break;
                case 3:
                    type = BufferedImage.TYPE_3BYTE_BGR;
                    // bgr to rgb
                    byte b;
                    for (int i = 0; i < data.length; i = i + 3) {
                        b = data[i];
                        data[i] = data[i + 2];
                        data[i + 2] = b;
                    }
                    break;
                default:
                    return null;
            }

            BufferedImage bimg = new BufferedImage(cols, rows, type);
            bimg.getRaster().setDataElements(0, 0, cols, rows, data);
            return bimg;
        }
        return null;
    }
}
Following the solution above, it may also be necessary to convert the format from four channels (CvType.CV_8UC4) to three channels (CvType.CV_8UC3), depending on what you ultimately need. For example, if I read a xx.jpg image, it is in RGB format.
if (isRGB)
Imgproc.cvtColor(mat,mat,Imgproc.COLOR_RGBA2RGB);
//or...COLOR_BGR2RGB,COLOR_BGRA2RGB,COLOR_BGR2BGRA

Java RenderedImage implementation with custom raw DataBuffer

I receive 16-bit grayscale images from a device; the images are delivered in an uncompressed raw format. Here is an 8-byte example of how a 2x2 image looks in this format (MSB first):
21 27      33 F6      28 F3      27 F2
-----      -----      -----      -----
pixel 0,0  pixel 1,0  pixel 0,1  pixel 1,1   (x,y)
I need to compress the images using the Kakadu JPEG2000 library, which exposes a Java ImageWriter implementation. The ImageWriter.write method expects a RenderedImage as input, so I'm using the following code to create a BufferedImage from the raw image data:
int[] rasterData = new int[width * height];
int rawBufferOffset = 0;
for (int i = 0; i < rasterData.length; i++) {
    rasterData[i] = ((int) rawBuffer[rawBufferOffset + 1] << 8) | ((int) rawBuffer[rawBufferOffset] & 0xFF);
    rawBufferOffset += 2;
}

BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_GRAY);
image.getRaster().setPixels(0, 0, width, height, rasterData);
The code works, but it's obviously not the best method for this conversion. I was thinking about creating a RenderedImage implementation that uses the rawBuffer as the image raster data source. Can anyone suggest how to do so, or suggest any other method for this conversion?
The most straightforward way is probably to use a ByteBuffer to swap the byte order, and create a new short array to hold the pixel data.
Then wrap the (short) pixel data in a DataBufferUShort. Create a matching WritableRaster and ColorModel, and finally create a BufferedImage from this. This image should be identical to the image in your code above (BufferedImage.TYPE_USHORT_GRAY), but be slightly faster to create, as you only copy the pixels once (as opposed to twice in your code).
int w = 2;
int h = 2;
int stride = 1;
byte[] rawBytes = {0x21, 0x27, 0x33, (byte) 0xF6, 0x28, (byte) 0xF3, (byte) 0x27, (byte) 0xF2};
short[] rawShorts = new short[rawBytes.length / 2];
ByteBuffer.wrap(rawBytes)
.order(ByteOrder.LITTLE_ENDIAN)
.asShortBuffer()
.get(rawShorts);
DataBuffer dataBuffer = new DataBufferUShort(rawShorts, rawShorts.length);
WritableRaster raster = Raster.createInterleavedRaster(dataBuffer, w, h, w * stride, stride, new int[]{0}, null);
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
Another, slightly more convoluted, but probably faster way (as you don't copy the backing pixel array at all), is to create a custom SampleModel that works with MSB (little endian) byte data, but exposes them as TYPE_USHORT. This will create a TYPE_CUSTOM image.
int w = 2, h = 2, stride = 2;
byte[] rawBytes = {0x21, 0x27, 0x33, (byte) 0xF6, 0x28, (byte) 0xF3, (byte) 0x27, (byte) 0xF2};
DataBuffer dataBuffer = new DataBufferByte(rawBytes, rawBytes.length);
SampleModel sampleModel = new ComponentSampleModel(DataBuffer.TYPE_USHORT, w, h, stride, w * stride, new int[] {0}) {
    @Override
    public Object getDataElements(int x, int y, Object obj, DataBuffer data) {
        if ((x < 0) || (y < 0) || (x >= width) || (y >= height)) {
            throw new ArrayIndexOutOfBoundsException("Coordinate out of bounds!");
        }
        // Simplified, as we only support TYPE_USHORT
        int numDataElems = getNumDataElements();
        int pixelOffset = y * scanlineStride + x * pixelStride;

        short[] sdata;
        if (obj == null) {
            sdata = new short[numDataElems];
        } else {
            sdata = (short[]) obj;
        }

        for (int i = 0; i < numDataElems; i++) {
            sdata[i] = (short) (data.getElem(bankIndices[i], pixelOffset + bandOffsets[i] + 1) << 8 |
                    data.getElem(bankIndices[i], pixelOffset + bandOffsets[i]));
        }
        return sdata;
    }
};
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
WritableRaster raster = Raster.createWritableRaster(sampleModel, dataBuffer, null);
BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
I don't really see a reason for creating a RenderedImage subclass for this.

Is it possible to use java.awt.Image in an Android application?

I've written a Java method, but I have to use this method in an Android project. Can someone help me convert it to Android, or tell me what I should do?
public Image getImage() {
    ColorModel cm = grayColorModel();
    if (n == 1) { // in case it's an 8 bit/pixel image
        return Toolkit.getDefaultToolkit().createImage(new MemoryImageSource(w, h, cm, pixData, 0, w));
    } // endif
}
protected ColorModel grayColorModel()
{
    byte[] r = new byte[256];
    for (int i = 0; i < 256; i++)
        r[i] = (byte) (i & 0xff);
    return (new IndexColorModel(8, 256, r, r, r));
}
For instance, to convert a grayscale image (byte array, imageSrc) to drawable:
byte[] imageSrc= [...];
// That's where the RGBA array goes.
byte[] imageRGBA = new byte[imageSrc.length * 4];
int i;
for (i = 0; i < imageSrc.length; i++) {
    // Invert the source bits
    imageRGBA[i * 4] = imageRGBA[i * 4 + 1] = imageRGBA[i * 4 + 2] = (byte) ~imageSrc[i];
    imageRGBA[i * 4 + 3] = -1; // 0xff, that's the alpha.
}

// Now put these nice RGBA pixels into a Bitmap object
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(imageRGBA));
Code may differ depending on the input format.

How to make bmp image from pixel byte array in java

I have a byte array containing pixel values from a .bmp file. It was generated by doing this:
BufferedImage readImage = ImageIO.read(new File(fileName));
byte imageData[] = ((DataBufferByte)readImage.getData().getDataBuffer()).getData();
Now I need to recreate the .bmp image. I tried to make a BufferedImage and set the pixels of the WritableRaster by calling the setPixels method. But there I have to provide an int[], float[] or double[] array. Maybe I need to convert the byte array into one of these. But I don't know how to do that. I also tried the setDataElements method. But I am not sure how to use this method either.
Can anyone explain how to create a bmp image from a byte array?
Edit: @Perception
This is what I have done so far:
private byte[] getPixelArrayToBmpByteArray(byte[] pixelData, int width, int height, int depth) throws Exception {
    int[] pixels = byteToInt(pixelData);
    BufferedImage image = null;
    if (depth == 8) {
        image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    } else if (depth == 24) {
        image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    }
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, width, height, pixels);
    image.setData(raster);
    return getBufferedImageToBmpByteArray(image);
}
private byte[] getBufferedImageToBmpByteArray(BufferedImage image) {
    byte[] imageData = null;
    try {
        ByteArrayOutputStream bas = new ByteArrayOutputStream();
        ImageIO.write(image, "bmp", bas);
        imageData = bas.toByteArray();
        bas.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return imageData;
}
private int[] byteToInt(byte[] data) {
    int[] ints = new int[data.length];
    for (int i = 0; i < data.length; i++)
        ints[i] = data[i];
    return ints;
}
You need to pack three bytes into each integer you make. Depending on the format of the buffered image, this will be 0xRRGGBB.
byteToInt will have to consume three bytes like this:
private int[] byteToInt(byte[] data) {
    int[] ints = new int[data.length / 3];
    int byteIdx = 0;
    for (int pixel = 0; pixel < ints.length; pixel++) {
        int rByte = data[byteIdx++] & 0xFF;
        int gByte = data[byteIdx++] & 0xFF;
        int bByte = data[byteIdx++] & 0xFF;
        ints[pixel] = (rByte << 16) | (gByte << 8) | bByte;
    }
    return ints;
}
You can also use ByteBuffer.wrap(arr, offset, length).getInt().
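As a rough sketch of that idea (it assumes 4 bytes per pixel, e.g. ARGB data, rather than the 3-byte layout used above):
// Reads groups of 4 bytes as big-endian ints; assumes the array length is a multiple of 4.
// Requires java.nio.ByteBuffer and java.nio.IntBuffer.
IntBuffer intBuffer = ByteBuffer.wrap(data).asIntBuffer();
int[] ints = new int[intBuffer.remaining()];
intBuffer.get(ints);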
Having just a byte array is not enough. You also need to construct a header (if you are reading from a raw format, such as inside a DICOM file).

Java equivalent of JavaScript's Canvas getImageData

I'm porting an HTML5 Canvas sample to Java. So far so good, until I got to this function call:
Canvas.getContext('2d').getImageData(0, 0, 100, 100).data
I googled for a while and found this page of the canvas specification
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#pixel-manipulation
After reading it, I created the function below:
public int[] getImageDataPort(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();

    int[] ret = new int[width * height * 4];
    int idx = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int color = image.getRGB(x, y);
            ret[idx++] = getRed(color);
            ret[idx++] = getGreen(color);
            ret[idx++] = getBlue(color);
            ret[idx++] = getAlpha(color);
        }
    }
    return ret;
}
public int getRed(int color) {
    return (color >> 16) & 0xFF;
}

public int getGreen(int color) {
    return (color >> 8) & 0xFF;
}

public int getBlue(int color) {
    return (color >> 0) & 0xFF;
}

public int getAlpha(int color) {
    return (color >> 24) & 0xFF;
}
Is there any class in the Java graphics API that has this function built in, or should I use the one I created?
I think the closest thing you'll find in the standard Java API is the Raster class. You can get hold of a WritableRaster (used for low-level image manipulation) through BufferedImage.getRaster. The Raster class then provides methods such as getSamples which fills an int[] with image data.
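For illustration, a small sketch of the getSamples route (one band at a time; band numbering follows the image's sample model):
Raster raster = image.getRaster();
int w = image.getWidth(), h = image.getHeight();
int[] red = raster.getSamples(0, 0, w, h, 0, (int[]) null);   // band 0
int[] green = raster.getSamples(0, 0, w, h, 1, (int[]) null); // band 1
int[] blue = raster.getSamples(0, 0, w, h, 2, (int[]) null);  // band 2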
Thanks aioobe, I've looked at the WritableRaster class and found the getPixels function, which does exactly what I needed. The final result is:
public int[] getImageDataPort(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[] ret = null;
    ret = image.getRaster().getPixels(0, 0, width, height, ret);
    return ret;
}
The only problem that may happen is when image.getType() isn't a type that supports alpha, unlike the code in the question; that results in a smaller int[] ret. But one can simply convert the image type with:
public BufferedImage convertType(BufferedImage image, int type) {
    BufferedImage ret = new BufferedImage(image.getWidth(), image.getHeight(), type);
    ColorConvertOp xformOp = new ColorConvertOp(null);
    xformOp.filter(image, ret);
    return ret;
}
Try
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(bi, "jpg", baos);
where bi is a BufferedImage.
