I am trying to get frames from a GIF using OpenCV. I found Convert each animated GIF frame to a separate BufferedImage and used the second suggestion. I modified it slightly to return an array of Mats instead of BufferedImages.
I tried two methods to get BufferedImages from the GIF; each presented different problems.
With the previous thread's suggestion:
BufferedImage fImage=ir.read(i);
The program throws an "ArrayIndexOutOfBoundsException: 4096".
With the original code from the previous thread:
BufferedImage fImage=ir.getRawImageType(i).createBufferedImage(ir.getWidth(i),ir.getHeight(i));
Each frame is a monotone color (not all black, though), and the Mat derived from the BufferedImage is empty.
System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
ArrayList<Mat> frames = new ArrayList<Mat>();
ImageReader ir = new GIFImageReader(new GIFImageReaderSpi());
ir.setInput(ImageIO.createImageInputStream(new File("ronPaulTestImage.gif")));
for (int i = 0; i < ir.getNumImages(true); i++) {
    BufferedImage fImage = ir.read(i);
    //BufferedImage fImage = ir.getRawImageType(i).createBufferedImage(ir.getWidth(i), ir.getHeight(i));
    fImage = toBufferedImageOfType(fImage, BufferedImage.TYPE_3BYTE_BGR);
    //byte[] pixels = ((DataBufferByte) fImage.getRaster().getDataBuffer()).getData();
    Mat m = new Mat();
    //m.put(0, 0, pixels);
    m.put(0, 0, ((DataBufferByte) fImage.getRaster().getDataBuffer()).getData());
    if (i == 40) {
        // a test, writes the mat and the image at the specified frame to files, exits
        ImageIO.write(fImage, "jpg", new File("TestError.jpg"));
        Imgcodecs.imwrite("TestErrorMat.jpg", m);
        System.exit(0);
    }
}
Here is the gif I used
Following Spektre's advice, I found a better GIF, which fixed the monochromatic BufferedImages. The lack of viewable Mats was caused by my use of the default constructor when declaring the Mat.
Working Code
public static ArrayList<Mat> getFrames(File gif) throws IOException {
    ArrayList<Mat> frames = new ArrayList<Mat>();
    ImageReader ir = new GIFImageReader(new GIFImageReaderSpi());
    ir.setInput(ImageIO.createImageInputStream(gif));
    for (int i = 0; i < ir.getNumImages(true); i++) {
        BufferedImage fImage = ir.read(i);
        fImage = toBufferedImageOfType(fImage, BufferedImage.TYPE_3BYTE_BGR);
        byte[] pixels = ((DataBufferByte) fImage.getRaster().getDataBuffer()).getData();
        Mat m = new Mat(fImage.getHeight(), fImage.getWidth(), CvType.CV_8UC3);
        m.put(0, 0, pixels);
        if (i == 15) { // a test, writes the mat and the image at the specified frame to files, exits
            ImageIO.write(fImage, "jpg", new File("TestError.jpg"));
            Imgcodecs.imwrite("TestErrorMat.jpg", m);
            System.exit(0);
        }
        frames.add(m);
    }
    return frames;
}
I am not using libs for GIF, Java, or OpenCV, but the ArrayIndexOutOfBoundsException: 4096 means that the LZW dictionary is not being cleared properly. Your GIF is buggy: I tested it, and it contains errors; not enough clear codes are present for some frames. If your GIF decoder does not check for and handle such a case, it simply crashes, because its dictionary grows beyond the GIF limit of 4096 entries (12-bit codes).
Try another GIF, not a buggy one...
I have tested your GIF: it has around 7 clear codes per frame and contains 941 errors in total (the absence of clear codes results in dictionary overruns).
If you have the source code for the GIF decoder, find the part of the decoder where a new item is added to the dictionary and add
if (dictionary_items < 4096)
before it. If you ignore the wrong entries, the image still looks OK; most likely the encoder that created this file was not properly written.
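To illustrate where that guard belongs, here is a hypothetical, minimal sketch of an LZW decode loop in Java. It is not the code of any real GIF decoder: GIF specifics such as variable code width, clear codes, and the end-of-information code are omitted, and the class and method names are made up for this example. The guard is the one `if` Spektre describes.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LzwGuardDemo {
    static final int MAX_DICT_SIZE = 4096; // GIF's 12-bit code limit

    // Decodes a sequence of LZW codes over an initial single-byte dictionary (0..255).
    static byte[] decode(int[] codes) {
        List<byte[]> dict = new ArrayList<>();
        for (int i = 0; i < 256; i++) {
            dict.add(new byte[] { (byte) i });
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] prev = dict.get(codes[0]);
        out.write(prev, 0, prev.length);
        for (int i = 1; i < codes.length; i++) {
            byte[] entry = codes[i] < dict.size()
                    ? dict.get(codes[i])
                    : append(prev, prev[0]); // the KwKwK special case
            out.write(entry, 0, entry.length);
            if (dict.size() < MAX_DICT_SIZE) { // the guard: never grow past 4096 entries
                dict.add(append(prev, entry[0]));
            }
            prev = entry;
        }
        return out.toByteArray();
    }

    static byte[] append(byte[] a, byte b) {
        byte[] r = Arrays.copyOf(a, a.length + 1);
        r[a.length] = b;
        return r;
    }
}
```

Without that `if`, a stream that is missing clear codes keeps adding entries until a code indexes past slot 4096, which is exactly the ArrayIndexOutOfBoundsException: 4096 above.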
With bytedeco OpenCV, a simple way to get the first frame of a GIF (in Scala):
import java.awt.image.{BufferedImage, DataBufferByte}
import java.io.ByteArrayInputStream
import com.sun.imageio.plugins.gif.{GIFImageReader, GIFImageReaderSpi}
import javax.imageio.ImageIO
import javax.swing.JFrame
import org.bytedeco.javacv.{CanvasFrame, OpenCVFrameConverter}
import org.bytedeco.opencv.opencv_core.Mat
import org.opencv.core.CvType
def toMat(bi: BufferedImage): Mat = {
val convertedImage = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_3BYTE_BGR)
val graphics = convertedImage.getGraphics
graphics.drawImage(bi, 0, 0, null)
graphics.dispose()
val data = convertedImage.getRaster.getDataBuffer.asInstanceOf[DataBufferByte].getData
val mat = new Mat(convertedImage.getHeight, convertedImage.getWidth, CvType.CV_8UC3)
mat.data.put(data: _*)
mat
}
def show(image: Mat, title: String): Unit = {
val canvas = new CanvasFrame(title, 1)
canvas.setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE)
canvas.showImage(new OpenCVFrameConverter.ToMat().convert(image))
}
val imageByteArrayOfGif = Array(....) // can be downloaded with java.net.HttpURLConnection
val ir = new GIFImageReader(new GIFImageReaderSpi())
val in = new ByteArrayInputStream(imageByteArrayOfGif)
ir.setInput(ImageIO.createImageInputStream(in))
val bi = ir.read(0) // first frame of gif
val mat = toMat(bi)
show(mat, "buffered2mat")
Related
I have an issue converting TIFF files to JPEGs with JAI. This is my code:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
TIFFDecodeParam param = null;
ImageDecoder dec = ImageCodec.createImageDecoder("tiff", new FileSeekableStream(inPath), param);
RenderedImage op = dec.decodeAsRenderedImage(0);
JPEGEncodeParam jpgparam = new JPEGEncodeParam();
jpgparam.setQuality(67);
ImageEncoder en = ImageCodec.createImageEncoder("jpeg", baos, jpgparam);
en.encode(op);
Mostly this code works fine, but with some images I get the following error:
java.lang.RuntimeException: Only 1, or 3-band byte data may be written.
at com.sun.media.jai.codecimpl.JPEGImageEncoder.encode(JPEGImageEncoder.java:142)
I can't find any related problems on here, and I have no idea how to fix it. The images that throw this error have a high resolution (9000 × 7000 or more) and are mostly scans of old pictures.
Image with this ColorModel works:
ColorModel:
#pixelBits = 24
numComponents = 3
color space = java.awt.color.ICC_ColorSpace@21981a50
transparency = 1 has alpha = false
isAlphaPre = false
This not:
ColorModel:
#pixelBits = 16
numComponents = 1
color space = java.awt.color.ICC_ColorSpace@88a30ad
transparency = 1 has alpha = false
isAlphaPre = false
I tried reading the JPEG standard, and it is not readily clear whether this is a limitation of the JPEG format or just the encoder.
The encoder provided with Java only encodes 1- or 3-band byte data, and in your case the failing images are 16-bit grayscale. One way to solve this, as it appears you have done, is to save the image using a PNG encoder; however, PNG will not support the compression quality parameter.
The other way to handle this is to convert your image to an 8-bit grayscale image.
I made a simple example to test this w/out JAI.
public static void main(String[] args) throws Exception {
    BufferedImage img = new BufferedImage(256, 256, BufferedImage.TYPE_USHORT_GRAY);
    Iterator<ImageWriter> writers = ImageIO.getImageWritersBySuffix("jpg");
    while (writers.hasNext()) {
        ImageWriter writer = writers.next();
        ImageOutputStream ios = ImageIO.createImageOutputStream(new File("junk.jpg"));
        writer.setOutput(ios);
        writer.write(img);
    }
}
The simplest way I can see to convert it is to create a new image and draw to it with a graphics.
BufferedImage img2 = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
Graphics g = img2.getGraphics();
g.drawImage(img, 0, 0, null);
g.dispose();
Then img2 can be saved as JPG.
This is my first time posting a question. I've been building an object classification program using TensorFlow Lite for an Android device (Java). I already built the same program in Python with Keras, so I converted the model to .tflite form and used it on Android. But the result is quite different from what I got in Python. I suspect the image processing before inference is incorrect.
In Python, the image processing before inference is below:
img = cv2.imread(image_dir)/255.
img = cv2.resize(img, (64,64), interpolation = cv2.INTER_AREA)
img_list.append(img)
img_list = np.array(img_list).astype(np.float32)
model.predict(img_list, batch_size=512, verbose=0)
In Android (Java), the image processing before inference is below:
Mat img = new Mat();
Utils.bitmapToMat(bitmap, img, true);
Imgproc.cvtColor(img, img, Imgproc.COLOR_RGB2GRAY); // to grayscale
Size size = new Size(64, 64);
Imgproc.resize(img, img, size, Imgproc.INTER_AREA); // resize
Bitmap dst = Bitmap.createBitmap(img.width(), img.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(img, dst); // convert to bitmap

int imageTensorIndex = 0;
DataType imageDataType = tensorFlowLiteModel.getInputTensor(imageTensorIndex).dataType(); // tensorFlowLiteModel is the Interpreter (loaded from XXXX.tflite) that also runs inference
TensorImage tfImage = new TensorImage(imageDataType);
tfImage.load(dst);

int probabilityTensorIndex = 0;
int[] probabilityShape = tensorFlowLiteModel.getOutputTensor(probabilityTensorIndex).shape();
DataType probabilityDataType = tensorFlowLiteModel.getOutputTensor(probabilityTensorIndex).dataType();
outputProbabilityBuffer = TensorBuffer.createFixedSize(probabilityShape, probabilityDataType);

tensorFlowLiteModel.run(tfImage.getBuffer(), outputProbabilityBuffer.getBuffer().rewind());
It actually ran and showed results, but they differ from the Python (Keras) results.
I suspect the image processing before inference is wrong.
I also know how to process the image using a ByteBuffer, but that didn't work because of OOM (out of memory) errors, so I want to use the TensorImage class.
If you know how to deal with this problem, please let me know.
I am attempting to perform a basic kernel convolution pass on an image using the BufferedImageOp interface in java.awt.image. This is the code I have:
BufferedImage img = null;
File f = null;
// read image
try {
    f = new File("keys.JPG");
    img = ImageIO.read(f);
} catch (IOException e) {
    System.out.println(e);
}

float[] gaussian = {
    1/16f, 1/8f, 1/16f,
    1/8f,  1/4f, 1/8f,
    1/16f, 1/8f, 1/16f,
};

BufferedImageOp op = new ConvolveOp(new Kernel(3, 3, gaussian));
BufferedImage dest = op.filter(img, null);

File outputfile = new File("image.jpg");
ImageIO.write(dest, "jpg", outputfile);
My code attempts to load the image keys.JPG, convolve it with the Gaussian blur kernel, and save the result to the file image.jpg. When I run the code, it processes for a bit, then terminates and saves the image successfully, but when I compare the original and the new images, they are identical.
Judging by code examples online, my code should work. Am I missing something?
Thanks
As @haraldK mentioned, my image was too large to notice a difference. The code works as expected.
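A quick way to confirm the blur really is applied, regardless of image size, is to run the same kernel over a tiny synthetic image where the effect is obvious in the raw numbers. A sketch (the class name is made up for this example):

```java
import java.awt.image.BufferedImage;
import java.awt.image.BufferedImageOp;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class BlurCheck {

    // Applies the same 3x3 Gaussian kernel as the question's code.
    public static BufferedImage blur(BufferedImage img) {
        float[] gaussian = {
            1/16f, 1/8f, 1/16f,
            1/8f,  1/4f, 1/8f,
            1/16f, 1/8f, 1/16f,
        };
        BufferedImageOp op = new ConvolveOp(new Kernel(3, 3, gaussian));
        return op.filter(img, null);
    }

    public static void main(String[] args) {
        // A single white pixel on black: the blur must dim the center
        // and brighten its neighbors, which is easy to verify numerically.
        BufferedImage img = new BufferedImage(9, 9, BufferedImage.TYPE_INT_RGB);
        img.setRGB(4, 4, 0xFFFFFF);
        BufferedImage dest = blur(img);
        System.out.println("center before=255, after=" + (dest.getRGB(4, 4) & 0xFF));
        System.out.println("neighbor before=0, after=" + (dest.getRGB(3, 4) & 0xFF));
    }
}
```

If those values change, ConvolveOp is working; on a large photograph the same change is simply too subtle to see by eye.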
I'm trying to hide a message inside a .gif for a steganography project.
I've converted the input GIF to an ArrayList of BufferedImages and applied my steganography algorithm.
But I came across an issue converting the ArrayList of BufferedImages back to a .gif.
I used this GifSequenceWriter class to convert the BufferedImages list to a new .gif, after getting the original delay between frames from the original GIF's metadata.
File encoded_img = new File("output.gif");
ImageOutputStream output = new FileImageOutputStream(encoded_img);
GifSequenceWriter writer = new GifSequenceWriter(output, frames.get(0).getType(), delayTimeMS, true);
writer.writeToSequence(frames.get(0));
for (int k = 1; k < frames.size(); k++) {
    writer.writeToSequence(frames.get(k));
}
writer.close();
output.close();
But the resulting .gif looks really bad. I've saved the individual frames with and without the steganography algorithm applied, and they look fine. You can check out an example of the original image, the 10 saved frames, and the resulting .gif here.
Is there a better way to create .gifs in Java?
Thanks in advance.
There's a problem with the GifSequenceWriter when using palette images (BufferedImage.TYPE_BYTE_INDEXED with IndexColorModel). It will create metadata based on a default 216-color palette (the web-safe palette), which is clearly different from the colors in your image.
The problematic lines in GifSequenceWriter:
ImageTypeSpecifier imageTypeSpecifier = ImageTypeSpecifier.createFromBufferedImageType(imageType);
imageMetaData = gifWriter.getDefaultImageMetadata(imageTypeSpecifier, imageWriteParam);
Instead, the metadata should be based on the color palette in the index color model of your image. But the good news is, it works fine without it.
You can simply use:
GifSequenceWriter writer = new GifSequenceWriter(output, BufferedImage.TYPE_INT_ARGB, delayTimeMS, true);
...and the writer will automatically create the palette as needed, from your actual image data.
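For comparison, the same round trip can be done with plain ImageIO and no helper class at all; the writer then derives each frame's palette from the actual image data. A minimal sketch (the class and method names are made up; note it sets no per-frame delay and no loop metadata, which is what GifSequenceWriter adds on top):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.List;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class MinimalGifWriter {

    // Writes the frames as a multi-frame GIF. Passing null metadata everywhere
    // lets the GIF plugin generate defaults, including a palette per frame.
    public static void write(List<BufferedImage> frames, File out) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("gif").next();
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(out)) {
            writer.setOutput(ios);
            writer.prepareWriteSequence(null); // null: default stream metadata
            for (BufferedImage frame : frames) {
                writer.writeToSequence(new IIOImage(frame, null, null), null);
            }
            writer.endWriteSequence();
        } finally {
            writer.dispose();
        }
    }
}
```

This confirms the frames themselves survive the trip; the delay and looping still have to come from image metadata, which is exactly what the fixed GifSequenceWriter below handles.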
It's also possible to fix the GifSequenceWriter, to accept an ImageTypeSpecifier instead of the int imageType, however, this will only work if all frames use the same palette, I think:
public GifSequenceWriter(
ImageOutputStream outputStream,
ImageTypeSpecifier imageTypeSpecifier,
int timeBetweenFramesMS,
boolean loopContinuously) throws IIOException, IOException {
// my method to create a writer
gifWriter = getWriter();
imageWriteParam = gifWriter.getDefaultWriteParam();
imageMetaData = gifWriter.getDefaultImageMetadata(imageTypeSpecifier, imageWriteParam);
// ... rest of the method unchanged.
Usage:
ColorModel cm = firstImage.getColorModel();
ImageTypeSpecifier imageType = new ImageTypeSpecifier(cm, cm.createCompatibleSampleModel(1, 1));
GifSequenceWriter writer = new GifSequenceWriter(output, imageType, delayTimeMS, true);
I'm trying to read parts of a big image in Java. The image is more than 700 MB. I have used this code, which normally reads pixels without loading the whole image into memory:
Rectangle sourceRegion = new Rectangle(0, 0, 512, 512); // The region you want to extract
ImageInputStream stream = ImageIO.createImageInputStream(new File("/home/dhoha/Downloads/BreastCancer.jp2")); // File or input stream
final Iterator<ImageReader> readers = ImageIO.getImageReaders(stream);
if (readers.hasNext()) {
    ImageReader reader = readers.next();
    reader.setInput(stream, true);
    ImageReadParam param = reader.getDefaultReadParam();
    param.setSourceRegion(sourceRegion); // Set region
    BufferedImage image = reader.read(0, param); // Will read only the region specified
}
However, I got the error:
Exception in thread "main" java.lang.IllegalArgumentException: Dimensions (width=95168 height=154832) are too large
at java.awt.image.SampleModel.<init>(SampleModel.java:130)
at java.awt.image.ComponentSampleModel.<init>(ComponentSampleModel.java:146)
at java.awt.image.PixelInterleavedSampleModel.<init>(PixelInterleavedSampleModel.java:87)
at com.sun.media.imageioimpl.plugins.jpeg2000.J2KRenderedImageCodecLib.createSampleModel(J2KRenderedImageCodecLib.java:741)
at com.sun.media.imageioimpl.plugins.jpeg2000.J2KRenderedImageCodecLib.createOriginalSampleModel(J2KRenderedImageCodecLib.java:729)
at com.sun.media.imageioimpl.plugins.jpeg2000.J2KRenderedImageCodecLib.<init>(J2KRenderedImageCodecLib.java:261)
at com.sun.media.imageioimpl.plugins.jpeg2000.J2KImageReaderCodecLib.read(J2KImageReaderCodecLib.java:364)
at testJai2.test3.main(test3.java:21)
Any help reading parts of this big image would be appreciated.
There are different ways to load parts of an image into memory and process them afterwards. You can try the following method to read fragments:
public static BufferedImage readFragment(InputStream stream, Rectangle rect) throws IOException {
    ImageInputStream imageStream = ImageIO.createImageInputStream(stream);
    ImageReader reader = ImageIO.getImageReaders(imageStream).next();
    ImageReadParam param = reader.getDefaultReadParam();
    param.setSourceRegion(rect);
    reader.setInput(imageStream, true, true);
    BufferedImage image = reader.read(0, param);
    reader.dispose();
    imageStream.close();
    return image;
}
And calling it like this:
URL url = new URL("..."); // You can use your own stream instead of URL
Image chunk = readFragment(url.openStream(), new Rectangle(150, 150, 300, 250));
This is marked as a correct answer in this thread.
You can use this technique to finally read the whole image into the memory if you need by doing some simple calculations.
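Those "simple calculations" amount to iterating the source region over a grid of tiles, clamped to the image bounds. A sketch building on the readFragment idea above (class names and the tile size are placeholders, and how much decoding the reader actually skips depends on the format plugin):

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.function.Consumer;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class TileReader {

    // Reads a single region, clamped to the image bounds.
    public static BufferedImage readTile(File file, Rectangle region) throws IOException {
        try (ImageInputStream stream = ImageIO.createImageInputStream(file)) {
            ImageReader reader = ImageIO.getImageReaders(stream).next();
            try {
                reader.setInput(stream, true, true);
                Rectangle bounds = new Rectangle(reader.getWidth(0), reader.getHeight(0));
                ImageReadParam param = reader.getDefaultReadParam();
                param.setSourceRegion(region.intersection(bounds)); // clamp to image
                return reader.read(0, param);
            } finally {
                reader.dispose();
            }
        }
    }

    // Visits every tile of the image in row-major order.
    public static void forEachTile(File file, int tileSize, Consumer<BufferedImage> visitor) throws IOException {
        int width, height;
        try (ImageInputStream stream = ImageIO.createImageInputStream(file)) {
            ImageReader reader = ImageIO.getImageReaders(stream).next();
            reader.setInput(stream, true, true);
            width = reader.getWidth(0);
            height = reader.getHeight(0);
            reader.dispose();
        }
        for (int y = 0; y < height; y += tileSize) {
            for (int x = 0; x < width; x += tileSize) {
                visitor.accept(readTile(file, new Rectangle(x, y, tileSize, tileSize)));
            }
        }
    }
}
```

Reopening the stream per tile is wasteful but keeps each read independent; for many tiles you would keep one reader open and only vary the source region.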
EDIT:
The resolution of the image you are trying to process (95168 × 154832) is larger than a Java array can hold: as the stack trace shows, the SampleModel constructor rejects dimensions whose product exceeds Integer.MAX_VALUE, and the JPEG 2000 reader creates a SampleModel for the full image dimensions before the source region is applied. So basically you will not be able to read the image this way.
What you can do is use a library called ImgLib2. Here you can find some examples. ImgLib2 can back the (big) image data with multiple arrays, so it can handle images larger than ImageIO can.