Hello, I'm using https://github.com/artclarke/humble-video to take a thumbnail from a video.
So far I have successfully managed to take a snapshot at the start of the video with the following method:
private static Path generateThumbnail(final Path videoFile)
throws InterruptedException, IOException {
final Demuxer demuxer = Demuxer.make();
demuxer.open(videoFile.toString(), null, false, true, null, null);
int streamIndex = -1;
Decoder videoDecoder = null;
String rotate = null;
final int numStreams = demuxer.getNumStreams();
for (int i = 0; i < numStreams; ++i) {
final DemuxerStream stream = demuxer.getStream(i);
final KeyValueBag metaData = stream.getMetaData();
final Decoder decoder = stream.getDecoder();
if (decoder != null
&& decoder.getCodecType() == MediaDescriptor.Type.MEDIA_VIDEO) {
videoDecoder = decoder;
streamIndex = i;
rotate = metaData.getValue("rotate", KeyValueBag.Flags.KVB_NONE);
break;
}
}
if (videoDecoder == null) {
throw new IOException("Not a valid video file");
}
videoDecoder.open(null, null);
final MediaPicture picture = MediaPicture.make(videoDecoder.getWidth(),
videoDecoder.getHeight(), videoDecoder.getPixelFormat());
final MediaPictureConverter converter = MediaPictureConverterFactory
.createConverter(MediaPictureConverterFactory.HUMBLE_BGR_24, picture);
final MediaPacket packet = MediaPacket.make();
BufferedImage image = null;
MUX : while (demuxer.read(packet) >= 0) {
if (packet.getStreamIndex() != streamIndex) {
continue;
}
int offset = 0;
do {
// decode() returns the number of bytes it consumed from the packet
offset += videoDecoder.decode(picture, packet, offset);
if (picture.isComplete()) {
image = converter.toImage(null, picture);
break MUX;
}
} while (offset < packet.getSize());
}
if (image == null) {
throw new IOException("Unable to find a complete video frame");
}
if (rotate != null) {
final AffineTransform transform = new AffineTransform();
transform.translate(0.5 * image.getHeight(), 0.5 * image.getWidth());
transform.rotate(Math.toRadians(Double.parseDouble(rotate)));
transform.translate(-0.5 * image.getWidth(), -0.5 * image.getHeight());
final AffineTransformOp op = new AffineTransformOp(transform,
AffineTransformOp.TYPE_BILINEAR);
image = op.filter(image, null);
}
final Path target = videoFile.getParent()
.resolve(videoFile.getFileName() + ".thumb.jpg");
final double mul;
if (image.getWidth() > image.getHeight()) {
mul = 216 / (double) image.getWidth();
} else {
mul = 216 / (double) image.getHeight();
}
final int newW = (int) (image.getWidth() * mul);
final int newH = (int) (image.getHeight() * mul);
final Image thumbnailImage = image.getScaledInstance(newW, newH,
Image.SCALE_SMOOTH);
image = new BufferedImage(newW, newH, BufferedImage.TYPE_INT_BGR);
final Graphics2D g2d = image.createGraphics();
g2d.drawImage(thumbnailImage, 0, 0, null);
g2d.dispose();
ImageIO.write(image, "jpeg", target.toFile());
return target.toAbsolutePath();
}
Now, what I want to do is take the snapshot 2 seconds after the video starts. Is that possible? I have tried using the Demuxer's seek method, but with no luck.
Here is what I tried. The seek method from the library is:
public int seek(int stream_index, long min_ts, long ts, long max_ts, int flags);
Its parameters are:
stream_index: index of the stream which is used as the time base reference
min_ts: smallest acceptable timestamp
ts: target timestamp
max_ts: largest acceptable timestamp
My implementation was:
final int success = demuxer.seek(streamIndex, 0, 700, 99999999, VideoJNI.Demuxer_SEEK_FRAME_get());
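For reference, here is a rough, unverified sketch of how a 2-second seek could be expressed. The timestamps passed to seek are interpreted in the stream's time base, so the sketch converts 2 seconds into those units first; the getTimeBase()/Rational calls are my assumptions about the humble-video API, and the flags argument is simply left at 0:

// Sketch only: convert 2 seconds into the stream's time-base units and seek there
// before running the read/decode loop from generateThumbnail above.
final DemuxerStream stream = demuxer.getStream(streamIndex);
final Rational timeBase = stream.getTimeBase();      // e.g. 1/90000
final long target = 2L * timeBase.getDenominator() / timeBase.getNumerator();
final int result = demuxer.seek(streamIndex, 0, target, target, 0);
if (result < 0) {
    throw new IOException("Seeking to 2 seconds failed: " + result);
}
// The demuxer typically lands on the key frame at or before the target, so you may
// still have to decode and discard pictures whose timestamp is below 'target'.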
I've got some code from user xil3 that merges two bitmaps horizontally. Does anyone know how I can make it do this vertically instead?
public Bitmap combineImages(Bitmap c, Bitmap s, String loc) { // can add a 3rd parameter 'String loc' if you want to save the new image - left some code to do that at the bottom
Bitmap cs = null;
int width, height = 0;
if (c.getHeight() > s.getHeight()) {
width = c.getWidth() + s.getWidth();
height = c.getHeight();
} else {
width = c.getWidth() + s.getWidth();
height = s.getHeight();
}
cs = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_4444);
Canvas comboImage = new Canvas(cs);
comboImage.drawBitmap(c, 0f, 0f, null);
comboImage.drawBitmap(s, c.getWidth(), 0f, null);
// this is an extra bit I added, just in case you want to save the new image somewhere and then return the location
String tmpImg = String.valueOf(System.currentTimeMillis()) + ".png";
OutputStream os = null;
try {
os = new FileOutputStream(loc + tmpImg);
cs.compress(Bitmap.CompressFormat.PNG, 100, os);
} catch (IOException e) {
Log.e("combineImages", "problem combining images", e);
}
return cs;
}
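For reference, a minimal vertical variant of that method would take the wider of the two widths, sum the heights, and offset the second bitmap in y instead of x. This is my own untested adaptation of xil3's code, with the file-saving part left out:

// Sketch: stack bitmap s below bitmap c instead of drawing it beside it.
public Bitmap combineImagesVertically(Bitmap c, Bitmap s) {
    int width = Math.max(c.getWidth(), s.getWidth());
    int height = c.getHeight() + s.getHeight();
    Bitmap cs = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas comboImage = new Canvas(cs);
    comboImage.drawBitmap(c, 0f, 0f, null);
    comboImage.drawBitmap(s, 0f, c.getHeight(), null);   // y offset instead of x offset
    return cs;
}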
I ended up finding a new piece of code here which stacks them vertically:
private Bitmap mergeMultiple(ArrayList<Bitmap> parts) {
int w = 0, h = 0;
for (int i = 0; i < parts.size(); i++) {
// track the widest bitmap and the total height, including the 100px gaps between parts
w = Math.max(w, parts.get(i).getWidth());
h += parts.get(i).getHeight();
if (i > 0) {
h += 100;
}
}
Bitmap temp = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(temp);
Paint paint = new Paint();
paint.setColor(Color.WHITE);
int top = 0;
for (int i = 0; i < parts.size(); i++) {
// offset each bitmap by the height of the previous one plus the 100px gap
top = (i == 0 ? 0 : top + parts.get(i - 1).getHeight() + 100);
canvas.drawBitmap(parts.get(i), 0f, top, paint);
}
return temp;
}
In my Java Spring MVC web application, users can upload images that are displayed on web pages. Currently, I am using the JAI API to resize uploaded images that are above a specified width, using the following util class:
public class ImageUtil
{
private static RenderingHints hints;
static
{
hints = new RenderingHints(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BICUBIC);
hints.put(RenderingHints.KEY_COLOR_RENDERING,RenderingHints.VALUE_COLOR_RENDER_QUALITY);
hints.put(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
/*
* this block is to silence the warning that we're not using JAI
* native acceleration but are using the pure java implementation.
*/
Properties p = new Properties(System.getProperties());
p.put("com.sun.media.jai.disableMediaLib", "true");
System.setProperties(p);
}
public static byte[] getScaledInstance(byte[] image, int maxWidth)
{
InputStream in = new ByteArrayInputStream(image);
BufferedImage img = null;
try
{
img = ImageIO.read(in);
double scale = (double) maxWidth / img.getWidth();
if ( scale > 1.0d )
{
//return getByteArray(getScaledUpGraphics(img, scale));
return image;
}
else if (scale > 0.5d && scale < 1.0d)
{
return getByteArray(getScaledDownByGraphics(img, scale));
}
else if (scale <= 0.5d)
{
return getByteArray(getScaledDownByJAI(img, scale));
}
// else scale == 1.0d; no change required.
}
catch (IOException e)
{
e.printStackTrace();
}
return image;
}
public static byte[] getByteArray(BufferedImage img)
{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] imageInByte = null;
try
{
ImageIO.write( img, "jpg", baos );
baos.flush();
imageInByte = baos.toByteArray();
baos.close();
}
catch (IOException e)
{
e.printStackTrace();
}
return imageInByte;
}
/**
* See http://www.digitalsanctuary.com/tech-blog/java/how-to-resize-uploaded-images-using-java-better-way.html
* This instance seems to produce quality images ONLY when you are
* scaling down to something less than 50% of the original size.
* @param img
* @param scale
* @return the scaled image
*/
private static BufferedImage getScaledDownByJAI(BufferedImage img, double scale)
{
if(scale > 1.0d) {
throw new RuntimeException("Can't scale according to " + scale + " : This method only scales down.");
}
PlanarImage originalImage = PlanarImage.wrapRenderedImage(img);
// now resize the image
ParameterBlock paramBlock = new ParameterBlock();
paramBlock.addSource(originalImage); // The source image
paramBlock.add(scale); // The xScale
paramBlock.add(scale); // The yScale
paramBlock.add(0.0); // The x translation
paramBlock.add(0.0); // The y translation
RenderedOp resizedImage = JAI.create("SubsampleAverage", paramBlock, hints);
return resizedImage.getAsBufferedImage();
}
/**
* This method produces high quality images when target scale is greater
* than 50% of the original.
* @param img
* @param scale
* @return the scaled image
*/
private static BufferedImage getScaledDownByGraphics(BufferedImage img, double scale)
{
final float scaleFactor = 0.8f;
BufferedImage ret = (BufferedImage)img;
int w = img.getWidth();
int h = img.getHeight();
int targetWidth = (int)(img.getWidth() * scale);
int targetHeight = (int)(img.getHeight() * scale);
int loopCount = 0;
int maxLoopCount = 20;
BufferedImage tmp;
do
{
if (w > targetWidth) {
w *= scaleFactor;
if (w < targetWidth) {
w = targetWidth;
}
}
if (h > targetHeight) {
h *= scaleFactor;
if (h < targetHeight) {
h = targetHeight;
}
}
tmp = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics2D g2 = tmp.createGraphics();
g2.addRenderingHints(hints);
g2.drawImage(ret, 0, 0, w, h, null);
g2.dispose();
ret = tmp;
if(++loopCount > maxLoopCount) {
throw new RuntimeException("Hit maximum loop count " + maxLoopCount);
}
} while (w != targetWidth || h != targetHeight);
return ret;
}
}
The code works fine for images whose width is more than the specified maximum width. But I am trying to resize all uploaded images to a web-standard size, irrespective of their width. That way I can preserve the aspect ratio and always get an image at the web-standard size.
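One way to get there, since the scale factor is already computed, is to stop special-casing scale > 1.0 and always resize to the target width. Below is a sketch under those assumptions; the method name is mine, it reuses the hints field and getByteArray helper from the class above, and a single Graphics2D pass like this is fine for up-scaling and moderate down-scaling (for large reductions you would still want to route through getScaledDownByJAI or getScaledDownByGraphics):

public static byte[] getScaledToWidth(byte[] image, int targetWidth) throws IOException {
    BufferedImage img = ImageIO.read(new ByteArrayInputStream(image));
    double scale = (double) targetWidth / img.getWidth();
    if (scale == 1.0d) {
        return image;                                     // already the requested width
    }
    int w = targetWidth;
    int h = (int) Math.round(img.getHeight() * scale);    // preserve the aspect ratio
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    Graphics2D g2 = out.createGraphics();
    g2.addRenderingHints(hints);
    g2.drawImage(img, 0, 0, w, h, null);                  // scales up or down in one step
    g2.dispose();
    return getByteArray(out);
}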
So I have an image file that has a volcano on it. Everything else is 0xFFFF00FF (opaque magenta). I want to replace every pixel that contains that color with 0 (transparent). So far my method looks like this:
public static BufferedImage replace(BufferedImage image, int target, int preferred) {
int width = image.getWidth();
int height = image.getHeight();
BufferedImage newImage = new BufferedImage(width, height, image.getType());
int color;
for (int i = 0; i < width; i++) {
for (int j = 0; j < height; j++) {
color = image.getRGB(i, j);
if (color == target) {
newImage.setRGB(i, j, preferred);
}
else {
newImage.setRGB(i, j, color);
}
}
}
return newImage;
}
This works fine but seems VERY slow. I have seen someone do this another way, but I have no idea what was going on. If someone knows a better way to do this, I would very much like to hear it.
While I haven't had a chance to test this thoroughly yet, using a LookupOp may well benefit from acceleration:
public class ColorMapper
extends LookupTable {
private final int[] from;
private final int[] to;
public ColorMapper(Color from,
Color to) {
super(0, 4);
this.from = new int[] {
from.getRed(),
from.getGreen(),
from.getBlue(),
from.getAlpha(),
};
this.to = new int[] {
to.getRed(),
to.getGreen(),
to.getBlue(),
to.getAlpha(),
};
}
@Override
public int[] lookupPixel(int[] src,
int[] dest) {
if (dest == null) {
dest = new int[src.length];
}
int[] newColor = (Arrays.equals(src, from) ? to : src);
System.arraycopy(newColor, 0, dest, 0, newColor.length);
return dest;
}
}
Using it is as easy as creating a LookupOp:
Color from = Color.decode("#ff00ff");
Color to = new Color(0, true);
BufferedImageOp lookup = new LookupOp(new ColorMapper(from, to), null);
BufferedImage convertedImage = lookup.filter(image, null);
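One assumption worth making explicit: the table above is built for four components (RGBA), so LookupOp will typically reject a source image that has no alpha channel. If your input might be, say, TYPE_INT_RGB, converting it first should be enough (my own addition, not part of the original answer):

// Make sure the source has an alpha channel before applying the 4-component lookup.
BufferedImage argb = new BufferedImage(image.getWidth(), image.getHeight(),
        BufferedImage.TYPE_INT_ARGB);
argb.createGraphics().drawImage(image, 0, 0, null);
BufferedImage converted = lookup.filter(argb, null);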
To avoid iterating through the pixels, change the underlying ColorModel. Here is an example. Below is the snippet where the author takes the original BufferedImage and applies the new color model.
private static BufferedImage createImage() {
int width = 200;
int height = 200;
// Generate the source pixels for our image
// Lets just keep it to a simple blank image for now
byte[] pixels = new byte[width * height];
DataBuffer dataBuffer = new DataBufferByte(pixels, width*height, 0);
SampleModel sampleModel = new SinglePixelPackedSampleModel(
DataBuffer.TYPE_BYTE, width, height, new int[] {(byte)0xf});
WritableRaster raster = Raster.createWritableRaster(
sampleModel, dataBuffer, null);
return new BufferedImage(createColorModel(0), raster, false, null);
}
private static ColorModel createColorModel(int n) {
// Create a simple color model with all values mapping to
// a single shade of gray
// nb. this could be improved by reusing the byte arrays
byte[] r = new byte[16];
byte[] g = new byte[16];
byte[] b = new byte[16];
for (int i = 0; i < r.length; i++) {
r[i] = (byte) n;
g[i] = (byte) n;
b[i] = (byte) n;
}
return new IndexColorModel(4, 16, r, g, b);
}
private BufferedImage image = createImage();
image = new BufferedImage(createColorModel(e.getX()), image.getRaster(), false, null);
You can get the pixels[] array of the buffered image like so (note this only works for int-backed image types such as TYPE_INT_RGB or TYPE_INT_ARGB):
int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
and then set your colors like so:
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
color = pixels[y * width + x];
if (color == target) {
pixels[y * width + x] = preferred;
}
}
}
This is a slight speed-up over using setRGB().
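Putting that together with the original method, a version that works directly on the backing array might look like the sketch below. The assumption to be aware of is that getRaster().getDataBuffer() is only a DataBufferInt for int-backed image types, so the sketch copies into TYPE_INT_ARGB first:

public static BufferedImage replaceFast(BufferedImage image, int target, int preferred) {
    // Draw into an int-backed image so the raster is guaranteed to use a DataBufferInt.
    BufferedImage result = new BufferedImage(image.getWidth(), image.getHeight(),
            BufferedImage.TYPE_INT_ARGB);
    result.getGraphics().drawImage(image, 0, 0, null);
    int[] pixels = ((DataBufferInt) result.getRaster().getDataBuffer()).getData();
    for (int i = 0; i < pixels.length; i++) {
        if (pixels[i] == target) {
            pixels[i] = preferred;   // write straight into the backing array
        }
    }
    return result;
}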
I'm using Play! Framework 1.x; one of its useful tools is the Images class, which allows me to resize an image on the fly.
Here's the code from Images.resize:
/**
* Resize an image
* @param originalImage The image file
* @param to The destination file
* @param w The new width (or -1 to proportionally resize) or the maxWidth if keepRatio is true
* @param h The new height (or -1 to proportionally resize) or the maxHeight if keepRatio is true
* @param keepRatio if true, resize will keep the original image ratio and use w and h as max dimensions
*/
public static void resize(File originalImage, File to, int w, int h, boolean keepRatio) {
try {
BufferedImage source = ImageIO.read(originalImage);
int owidth = source.getWidth();
int oheight = source.getHeight();
double ratio = (double) owidth / oheight;
int maxWidth = w;
int maxHeight = h;
if (w < 0 && h < 0) {
w = owidth;
h = oheight;
}
if (w < 0 && h > 0) {
w = (int) (h * ratio);
}
if (w > 0 && h < 0) {
h = (int) (w / ratio);
}
if(keepRatio) {
h = (int) (w / ratio);
if(h > maxHeight) {
h = maxHeight;
w = (int) (h * ratio);
}
if(w > maxWidth) {
w = maxWidth;
h = (int) (w / ratio);
}
}
String mimeType = "image/jpeg";
if (to.getName().endsWith(".png")) {
mimeType = "image/png";
}
if (to.getName().endsWith(".gif")) {
mimeType = "image/gif";
}
// out
BufferedImage dest = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Image srcSized = source.getScaledInstance(w, h, Image.SCALE_SMOOTH);
Graphics graphics = dest.getGraphics();
graphics.setColor(Color.WHITE);
graphics.fillRect(0, 0, w, h);
graphics.drawImage(srcSized, 0, 0, null);
ImageWriter writer = ImageIO.getImageWritersByMIMEType(mimeType).next();
ImageWriteParam params = writer.getDefaultWriteParam();
FileImageOutputStream toFs = new FileImageOutputStream(to);
writer.setOutput(toFs);
IIOImage image = new IIOImage(dest, null, null);
writer.write(null, image, params);
toFs.flush();
toFs.close();
writer.dispose();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
Here's how I use it:
File old = new File("1.jpg");
File n = new File("output.jpg");
Images.resize(old, n, 800, 800, true);
The original image 1.jpg:
And the output.jpg:
Can anyone explain what's going on here? Thanks!
I've seen this too, and believe it's a JRE bug. I worked around it by not using getScaledInstance and instead scaling while drawing.
Ah. I was too slow to answer. But yes, it is most likely a bug. Came across this problem myself. You have to work around it. Try scaling the image as Waldheinz said. That was what I did too.
This is the link I used to do my scaling: http://www.rgagnon.com/javadetails/java-0243.html, so you have more references in addition to Waldheinz's.
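For completeness, "scaling while drawing" just means letting Graphics2D resample the source at the target size instead of going through getScaledInstance. A sketch of what the // out section of the resize method could look like (untested; variable names match the method above):

// Sketch: draw the source directly at w x h instead of using getScaledInstance.
BufferedImage dest = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics2D g = dest.createGraphics();
g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
        RenderingHints.VALUE_INTERPOLATION_BILINEAR);
g.setColor(Color.WHITE);
g.fillRect(0, 0, w, h);
g.drawImage(source, 0, 0, w, h, null);   // resamples while drawing
g.dispose();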
I want to be able to take an animated GIF as input, count the frames (and perhaps other metadata), and convert each to a BufferedImage.
How can I do this?
If you want all the frames to be the same size (for optimized GIFs) try something like this:
try {
String[] imageatt = new String[]{
"imageLeftPosition",
"imageTopPosition",
"imageWidth",
"imageHeight"
};
ImageReader reader = (ImageReader)ImageIO.getImageReadersByFormatName("gif").next();
ImageInputStream ciis = ImageIO.createImageInputStream(new File("house2.gif"));
reader.setInput(ciis, false);
int noi = reader.getNumImages(true);
BufferedImage master = null;
for (int i = 0; i < noi; i++) {
BufferedImage image = reader.read(i);
IIOMetadata metadata = reader.getImageMetadata(i);
Node tree = metadata.getAsTree("javax_imageio_gif_image_1.0");
NodeList children = tree.getChildNodes();
for (int j = 0; j < children.getLength(); j++) {
Node nodeItem = children.item(j);
if(nodeItem.getNodeName().equals("ImageDescriptor")){
Map<String, Integer> imageAttr = new HashMap<String, Integer>();
for (int k = 0; k < imageatt.length; k++) {
NamedNodeMap attr = nodeItem.getAttributes();
Node attnode = attr.getNamedItem(imageatt[k]);
imageAttr.put(imageatt[k], Integer.valueOf(attnode.getNodeValue()));
}
if(i==0){
master = new BufferedImage(imageAttr.get("imageWidth"), imageAttr.get("imageHeight"), BufferedImage.TYPE_INT_ARGB);
}
master.getGraphics().drawImage(image, imageAttr.get("imageLeftPosition"), imageAttr.get("imageTopPosition"), null);
}
}
ImageIO.write(master, "GIF", new File( i + ".gif"));
}
} catch (IOException e) {
e.printStackTrace();
}
None of the answers here are correct and suitable for animation. There are many problems in each solution, so I wrote something that actually works with all GIF files. For instance, it takes into account the actual width and height of the image instead of taking the width and height of the first frame and assuming that frame fills the entire canvas; unfortunately it's not that simple. Second, it doesn't leave any transparent pixels. Third, it takes into account disposal methods. Fourth, it gives you the delays between frames (multiply by 10 if you want to use them with Thread.sleep()).
private ImageFrame[] readGif(InputStream stream) throws IOException{
ArrayList<ImageFrame> frames = new ArrayList<ImageFrame>(2);
ImageReader reader = (ImageReader) ImageIO.getImageReadersByFormatName("gif").next();
reader.setInput(ImageIO.createImageInputStream(stream));
int lastx = 0;
int lasty = 0;
int width = -1;
int height = -1;
IIOMetadata metadata = reader.getStreamMetadata();
Color backgroundColor = null;
if(metadata != null) {
IIOMetadataNode globalRoot = (IIOMetadataNode) metadata.getAsTree(metadata.getNativeMetadataFormatName());
NodeList globalColorTable = globalRoot.getElementsByTagName("GlobalColorTable");
NodeList globalScreeDescriptor = globalRoot.getElementsByTagName("LogicalScreenDescriptor");
if (globalScreeDescriptor != null && globalScreeDescriptor.getLength() > 0){
IIOMetadataNode screenDescriptor = (IIOMetadataNode) globalScreeDescriptor.item(0);
if (screenDescriptor != null){
width = Integer.parseInt(screenDescriptor.getAttribute("logicalScreenWidth"));
height = Integer.parseInt(screenDescriptor.getAttribute("logicalScreenHeight"));
}
}
if (globalColorTable != null && globalColorTable.getLength() > 0){
IIOMetadataNode colorTable = (IIOMetadataNode) globalColorTable.item(0);
if (colorTable != null) {
String bgIndex = colorTable.getAttribute("backgroundColorIndex");
IIOMetadataNode colorEntry = (IIOMetadataNode) colorTable.getFirstChild();
while (colorEntry != null) {
if (colorEntry.getAttribute("index").equals(bgIndex)) {
int red = Integer.parseInt(colorEntry.getAttribute("red"));
int green = Integer.parseInt(colorEntry.getAttribute("green"));
int blue = Integer.parseInt(colorEntry.getAttribute("blue"));
backgroundColor = new Color(red, green, blue);
break;
}
colorEntry = (IIOMetadataNode) colorEntry.getNextSibling();
}
}
}
}
BufferedImage master = null;
boolean hasBackround = false;
for (int frameIndex = 0;; frameIndex++) {
BufferedImage image;
try{
image = reader.read(frameIndex);
}catch (IndexOutOfBoundsException io){
break;
}
if (width == -1 || height == -1){
width = image.getWidth();
height = image.getHeight();
}
IIOMetadataNode root = (IIOMetadataNode) reader.getImageMetadata(frameIndex).getAsTree("javax_imageio_gif_image_1.0");
IIOMetadataNode gce = (IIOMetadataNode) root.getElementsByTagName("GraphicControlExtension").item(0);
NodeList children = root.getChildNodes();
int delay = Integer.valueOf(gce.getAttribute("delayTime"));
String disposal = gce.getAttribute("disposalMethod");
if (master == null){
master = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
master.createGraphics().setColor(backgroundColor);
master.createGraphics().fillRect(0, 0, master.getWidth(), master.getHeight());
hasBackround = image.getWidth() == width && image.getHeight() == height;
master.createGraphics().drawImage(image, 0, 0, null);
}else{
int x = 0;
int y = 0;
for (int nodeIndex = 0; nodeIndex < children.getLength(); nodeIndex++){
Node nodeItem = children.item(nodeIndex);
if (nodeItem.getNodeName().equals("ImageDescriptor")){
NamedNodeMap map = nodeItem.getAttributes();
x = Integer.valueOf(map.getNamedItem("imageLeftPosition").getNodeValue());
y = Integer.valueOf(map.getNamedItem("imageTopPosition").getNodeValue());
}
}
if (disposal.equals("restoreToPrevious")){
BufferedImage from = null;
for (int i = frameIndex - 1; i >= 0; i--){
if (!frames.get(i).getDisposal().equals("restoreToPrevious") || frameIndex == 0){
from = frames.get(i).getImage();
break;
}
}
{
ColorModel model = from.getColorModel();
boolean alpha = from.isAlphaPremultiplied();
WritableRaster raster = from.copyData(null);
master = new BufferedImage(model, raster, alpha, null);
}
}else if (disposal.equals("restoreToBackgroundColor") && backgroundColor != null){
if (!hasBackround || frameIndex > 1){
master.createGraphics().fillRect(lastx, lasty, frames.get(frameIndex - 1).getWidth(), frames.get(frameIndex - 1).getHeight());
}
}
master.createGraphics().drawImage(image, x, y, null);
lastx = x;
lasty = y;
}
{
BufferedImage copy;
{
ColorModel model = master.getColorModel();
boolean alpha = master.isAlphaPremultiplied();
WritableRaster raster = master.copyData(null);
copy = new BufferedImage(model, raster, alpha, null);
}
frames.add(new ImageFrame(copy, delay, disposal, image.getWidth(), image.getHeight()));
}
master.flush();
}
reader.dispose();
return frames.toArray(new ImageFrame[frames.size()]);
}
And the ImageFrame class:
import java.awt.image.BufferedImage;
public class ImageFrame {
private final int delay;
private final BufferedImage image;
private final String disposal;
private final int width, height;
public ImageFrame (BufferedImage image, int delay, String disposal, int width, int height){
this.image = image;
this.delay = delay;
this.disposal = disposal;
this.width = width;
this.height = height;
}
public ImageFrame (BufferedImage image){
this.image = image;
this.delay = -1;
this.disposal = null;
this.width = -1;
this.height = -1;
}
public BufferedImage getImage() {
return image;
}
public int getDelay() {
return delay;
}
public String getDisposal() {
return disposal;
}
public int getWidth() {
return width;
}
public int getHeight() {
return height;
}
}
Right, I have never done anything even slightly like this before, but a bit of Googling and fiddling in Java got me this:
public ArrayList<BufferedImage> getFrames(File gif) throws IOException{
ArrayList<BufferedImage> frames = new ArrayList<BufferedImage>();
ImageReader ir = new GIFImageReader(new GIFImageReaderSpi());
ir.setInput(ImageIO.createImageInputStream(gif));
for(int i = 0; i < ir.getNumImages(true); i++)
frames.add(ir.getRawImageType(i).createBufferedImage(ir.getWidth(i), ir.getHeight(i)));
return frames;
}
Edit: see Ansel Zandegran's modification to my answer.
To split an animated GIF into separate BufferedImage frames:
try {
ImageReader reader = ImageIO.getImageReadersByFormatName("gif").next();
File input = new File("input.gif");
ImageInputStream stream = ImageIO.createImageInputStream(input);
reader.setInput(stream);
int count = reader.getNumImages(true);
for (int index = 0; index < count; index++) {
BufferedImage frame = reader.read(index);
// Here you go
}
} catch (IOException ex) {
// An I/O problem has occurred
}
Alex's answer covers most cases, but it does have a couple of problems. It doesn't handle transparency correctly (at least according to common convention), and it applies the current frame's disposal method to the previous frame, which is incorrect. Here's a version that does handle those cases correctly:
private ImageFrame[] readGIF(ImageReader reader) throws IOException {
ArrayList<ImageFrame> frames = new ArrayList<ImageFrame>(2);
int width = -1;
int height = -1;
IIOMetadata metadata = reader.getStreamMetadata();
if (metadata != null) {
IIOMetadataNode globalRoot = (IIOMetadataNode) metadata.getAsTree(metadata.getNativeMetadataFormatName());
NodeList globalScreenDescriptor = globalRoot.getElementsByTagName("LogicalScreenDescriptor");
if (globalScreenDescriptor != null && globalScreenDescriptor.getLength() > 0) {
IIOMetadataNode screenDescriptor = (IIOMetadataNode) globalScreenDescriptor.item(0);
if (screenDescriptor != null) {
width = Integer.parseInt(screenDescriptor.getAttribute("logicalScreenWidth"));
height = Integer.parseInt(screenDescriptor.getAttribute("logicalScreenHeight"));
}
}
}
BufferedImage master = null;
Graphics2D masterGraphics = null;
for (int frameIndex = 0;; frameIndex++) {
BufferedImage image;
try {
image = reader.read(frameIndex);
} catch (IndexOutOfBoundsException io) {
break;
}
if (width == -1 || height == -1) {
width = image.getWidth();
height = image.getHeight();
}
IIOMetadataNode root = (IIOMetadataNode) reader.getImageMetadata(frameIndex).getAsTree("javax_imageio_gif_image_1.0");
IIOMetadataNode gce = (IIOMetadataNode) root.getElementsByTagName("GraphicControlExtension").item(0);
int delay = Integer.valueOf(gce.getAttribute("delayTime"));
String disposal = gce.getAttribute("disposalMethod");
int x = 0;
int y = 0;
if (master == null) {
master = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
masterGraphics = master.createGraphics();
masterGraphics.setBackground(new Color(0, 0, 0, 0));
} else {
NodeList children = root.getChildNodes();
for (int nodeIndex = 0; nodeIndex < children.getLength(); nodeIndex++) {
Node nodeItem = children.item(nodeIndex);
if (nodeItem.getNodeName().equals("ImageDescriptor")) {
NamedNodeMap map = nodeItem.getAttributes();
x = Integer.valueOf(map.getNamedItem("imageLeftPosition").getNodeValue());
y = Integer.valueOf(map.getNamedItem("imageTopPosition").getNodeValue());
}
}
}
masterGraphics.drawImage(image, x, y, null);
BufferedImage copy = new BufferedImage(master.getColorModel(), master.copyData(null), master.isAlphaPremultiplied(), null);
frames.add(new ImageFrame(copy, delay, disposal));
if (disposal.equals("restoreToPrevious")) {
BufferedImage from = null;
for (int i = frameIndex - 1; i >= 0; i--) {
if (!frames.get(i).getDisposal().equals("restoreToPrevious") || frameIndex == 0) {
from = frames.get(i).getImage();
break;
}
}
master = new BufferedImage(from.getColorModel(), from.copyData(null), from.isAlphaPremultiplied(), null);
masterGraphics = master.createGraphics();
masterGraphics.setBackground(new Color(0, 0, 0, 0));
} else if (disposal.equals("restoreToBackgroundColor")) {
masterGraphics.clearRect(x, y, image.getWidth(), image.getHeight());
}
}
reader.dispose();
return frames.toArray(new ImageFrame[frames.size()]);
}
private class ImageFrame {
private final int delay;
private final BufferedImage image;
private final String disposal;
public ImageFrame(BufferedImage image, int delay, String disposal) {
this.image = image;
this.delay = delay;
this.disposal = disposal;
}
public BufferedImage getImage() {
return image;
}
public int getDelay() {
return delay;
}
public String getDisposal() {
return disposal;
}
}
There is a good description of how GIF animations work in this ImageMagick tutorial.
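As a quick usage sketch (my own, not part of the answer): delayTime in the GIF metadata is stored in hundredths of a second, so playing the decoded frames back looks roughly like this, where render(...) stands in for whatever drawing code you have:

// Hypothetical playback loop over the decoded frames.
ImageReader reader = ImageIO.getImageReadersByFormatName("gif").next();
reader.setInput(ImageIO.createImageInputStream(new File("animation.gif")));
ImageFrame[] frames = readGIF(reader);
for (ImageFrame frame : frames) {
    render(frame.getImage());               // placeholder for your own drawing code
    Thread.sleep(frame.getDelay() * 10L);   // delayTime is in 1/100ths of a second
}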
I wrote a GIF image decoder on my own and released it under the Apache License 2.0 on GitHub. You can download it here: https://github.com/DhyanB/Open-Imaging. Example usage:
void example(final byte[] data) throws Exception {
final GifImage gif = GifDecoder.read(data);
final int width = gif.getWidth();
final int height = gif.getHeight();
final int background = gif.getBackgroundColor();
final int frameCount = gif.getFrameCount();
for (int i = 0; i < frameCount; i++) {
final BufferedImage img = gif.getFrame(i);
final int delay = gif.getDelay(i);
ImageIO.write(img, "png", new File(OUTPATH + "frame_" + i + ".png"));
}
}
The decoder supports GIF87a, GIF89a, animation, transparency and interlacing. Frames will have the width and height of the image itself and will be placed at the correct position on the canvas. It respects frame transparency and disposal methods. Check out the project description for more details, such as the handling of background colors.
Additionally, the decoder doesn't suffer from this ImageIO bug: ArrayIndexOutOfBoundsException: 4096 while reading gif file.
I'd be happy to get some feedback. I've been testing with a representative set of images, but some real field testing would be good.
Using c24w's solution, replace:
frames.add(ir.getRawImageType(i).createBufferedImage(ir.getWidth(i), ir.getHeight(i)));
With:
frames.add(ir.read(i));
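Putting c24w's method and that change together (and swapping the internal GIFImageReader classes for the public ImageIO lookup, which is my own adjustment), the result is roughly:

public ArrayList<BufferedImage> getFrames(File gif) throws IOException {
    ArrayList<BufferedImage> frames = new ArrayList<BufferedImage>();
    ImageReader ir = ImageIO.getImageReadersByFormatName("gif").next();
    ir.setInput(ImageIO.createImageInputStream(gif));
    for (int i = 0; i < ir.getNumImages(true); i++) {
        frames.add(ir.read(i));   // decode each frame instead of allocating empty images
    }
    return frames;
}

Note that this only gives you each frame's raw pixel data; for optimized GIFs whose frames are partial updates you still need the compositing logic from the longer answers above.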