How to re-encode a movie file with Xuggler - Java

This must be a very stupid question, but how does one recode with Xuggler?
Simplified, I have:
IMediaReader reader = ToolFactory.makeReader(sourceUrl);
IMediaWriter writer = ToolFactory.makeWriter(url, reader);
MediaSegmenterWriter writerListener = new MediaSegmenterWriter();
writer.open();
while (reader.readPacket() == null) {
    // pump packets until end of stream; the attached listeners do the actual work
}
Now, I want to recode the file in the reader to another bitrate and resolution. How do I do that? On creating the writer I have tried to add IMediaStreams with a copy of the original coder with the necessary changes, but that does not work:
int numStreams = reader.getContainer().getNumStreams();
for (int i = 0; i < numStreams; i++) {
    final IStream stream = reader.getContainer().getStream(i);
    final IStreamCoder coder = stream.getStreamCoder();
    IStreamCoder newCoder = IStreamCoder.make(IStreamCoder.Direction.ENCODING, coder);
    if (newCoder == null) {
        continue;
    }
    writer.getContainer().addNewStream(i);
    int streams = writer.getContainer().getNumStreams();
    System.out.println("Current amount of streams in writer: " + streams);
    System.out.println("Coder: " + coder.toString());
    if (coderSetting != null && newCoder != null) {
        if (newCoder.getCodecType().equals(ICodec.Type.CODEC_TYPE_VIDEO)) {
            newCoder.setWidth(320);
            newCoder.setHeight(240);
        }
        IStream outputStream = writer.getContainer().getStream(i);
        outputStream.setStreamCoder(newCoder);
        newCoder.open();
    }
}
But this just gives the same result as leaving the code out (i.e., the original 1920x1080).
I also tried adding a listener to the writer and replacing the coder there, but I either got an error ("coder already opened") or saw no effect (tried in onOpen, onAddStream, and onOpenCoder).
I looked for tutorials, but none seem to cover this simple operation.
Any help would be REALLY appreciated!!!

In order to resize the content as well as recode it, you need to create a MediaToolAdapter, something like:
private static class MediaResizer extends MediaToolAdapter {
    private IVideoResampler videoResampler = null;
    private int mediaHeight;
    private int mediaWidth;

    public MediaResizer(int aHeight, int aWidth) {
        mediaWidth = aWidth;
        mediaHeight = aHeight;
    }

    @Override
    public void onVideoPicture(IVideoPictureEvent event) {
        // onVideoPicture only fires for video streams, so an audio-only
        // job never reaches this point and needs no special casing.
        IVideoPicture pic = event.getPicture();
        // Lazily create the resampler once the source dimensions are known.
        if (videoResampler == null) {
            videoResampler = IVideoResampler.make(mediaWidth, mediaHeight,
                    pic.getPixelType(), pic.getWidth(), pic.getHeight(), pic.getPixelType());
        }
        IVideoPicture out = IVideoPicture.make(pic.getPixelType(), mediaWidth, mediaHeight);
        videoResampler.resample(out, pic);
        // Forward a fresh event carrying the resized picture downstream.
        IVideoPictureEvent asc = new VideoPictureEvent(event.getSource(), out, event.getStreamIndex());
        super.onVideoPicture(asc);
        out.delete();
    }
}
You add this as a listener to your reader, and then add your writer as a listener to the resizer, so the writer receives the resized frames. It should look something like:
IMediaReader reader = ToolFactory.makeReader(sourceUrl);
MediaResizer resizer = new MediaResizer(240, 320); // target height, width
IMediaWriter currentWriter = ToolFactory.makeWriter(destinationDir, reader);
reader.addListener(resizer);
resizer.addListener(currentWriter);
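That handles the resolution. For the other half of the question, the bitrate, one approach (a sketch, not from the original answer; the H.264 codec choice and the 500 kbit/s figure are assumptions) is to skip the makeWriter(url, reader) shortcut and declare the output stream yourself, so its coder can be configured before the first packet is written:
IMediaWriter writer = ToolFactory.makeWriter(url);
ICodec codec = ICodec.findEncodingCodec(ICodec.ID.CODEC_ID_H264);
writer.addVideoStream(0, 0, codec, 320, 240);
// The stream's coder is not opened until the first write, so it can still be tuned here.
IStreamCoder outCoder = writer.getContainer().getStream(0).getStreamCoder();
outCoder.setBitRate(500000); // target bitrate in bits per second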

@Muhammad Umar,
Maybe he means:
@Override
public void onVideoPicture(IVideoPictureEvent event) {
    // Logger.info("onAddStream(): now I am in VideoConverter.onVideoPicture().....");
    IVideoPicture pic = event.getPicture();
    if (videoResampler == null) {
        videoResampler = IVideoResampler.make(VIDEO_WIDTH, VIDEO_HEIGHT,
                pic.getPixelType(), pic.getWidth(), pic.getHeight(),
                pic.getPixelType());
    }
    IVideoPicture out = IVideoPicture.make(pic.getPixelType(), VIDEO_WIDTH, VIDEO_HEIGHT);
    videoResampler.resample(out, pic);
    IVideoPictureEvent asc = new VideoPictureEvent(event.getSource(), out, event.getStreamIndex());
    super.onVideoPicture(asc);
    out.delete();
}

Related

Merge/Mux multiple mp4 video files on Android

I have a series of mp4 files saved on the device that need to be merged together to make a single mp4 file.
video_p1.mp4 video_p2.mp4 video_p3.mp4 > video.mp4
The solutions I have researched, such as the mp4parser framework, use deprecated code.
The best solution I could find is to use a MediaMuxer and MediaExtractor.
The code runs, but my videos are not merged (only the content in video_p1.mp4 is displayed, and it is in landscape orientation, not portrait).
Can anyone help me sort this out?
public static boolean concatenateFiles(File dst, File... sources) {
    if ((sources == null) || (sources.length == 0)) {
        return false;
    }
    boolean result;
    MediaExtractor extractor = null;
    MediaMuxer muxer = null;
    try {
        // Set up MediaMuxer for the destination.
        muxer = new MediaMuxer(dst.getPath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        // Copy the samples from MediaExtractor to MediaMuxer.
        boolean sawEOS = false;
        //int bufferSize = MAX_SAMPLE_SIZE;
        int bufferSize = 1 * 1024 * 1024;
        int frameCount = 0;
        int offset = 100;
        ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
        MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
        long timeOffsetUs = 0;
        int dstTrackIndex = -1;
        for (int fileIndex = 0; fileIndex < sources.length; fileIndex++) {
            int numberOfSamplesInSource = getNumberOfSamples(sources[fileIndex]);
            // Set up MediaExtractor to read from the source.
            extractor = new MediaExtractor();
            extractor.setDataSource(sources[fileIndex].getPath());
            // Set up the tracks.
            SparseIntArray indexMap = new SparseIntArray(extractor.getTrackCount());
            for (int i = 0; i < extractor.getTrackCount(); i++) {
                extractor.selectTrack(i);
                MediaFormat format = extractor.getTrackFormat(i);
                if (dstTrackIndex < 0) {
                    dstTrackIndex = muxer.addTrack(format);
                    muxer.start();
                }
                indexMap.put(i, dstTrackIndex);
            }
            long lastPresentationTimeUs = 0;
            int currentSample = 0;
            while (!sawEOS) {
                bufferInfo.offset = offset;
                bufferInfo.size = extractor.readSampleData(dstBuf, offset);
                if (bufferInfo.size < 0) {
                    sawEOS = true;
                    bufferInfo.size = 0;
                    timeOffsetUs += (lastPresentationTimeUs + 0);
                }
                else {
                    lastPresentationTimeUs = extractor.getSampleTime();
                    bufferInfo.presentationTimeUs = extractor.getSampleTime() + timeOffsetUs;
                    bufferInfo.flags = extractor.getSampleFlags();
                    int trackIndex = extractor.getSampleTrackIndex();
                    if ((currentSample < numberOfSamplesInSource) || (fileIndex == sources.length - 1)) {
                        muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
                    }
                    extractor.advance();
                    frameCount++;
                    currentSample++;
                    Log.d("tag2", "Frame (" + frameCount + ") " +
                            "PresentationTimeUs:" + bufferInfo.presentationTimeUs +
                            " Flags:" + bufferInfo.flags +
                            " TrackIndex:" + trackIndex +
                            " Size(KB) " + bufferInfo.size / 1024);
                }
            }
            extractor.release();
            extractor = null;
        }
        result = true;
    }
    catch (IOException e) {
        result = false;
    }
    finally {
        if (extractor != null) {
            extractor.release();
        }
        if (muxer != null) {
            muxer.stop();
            muxer.release();
        }
    }
    return result;
}

public static int getNumberOfSamples(File src) {
    MediaExtractor extractor = new MediaExtractor();
    int result;
    try {
        extractor.setDataSource(src.getPath());
        extractor.selectTrack(0);
        result = 0;
        while (extractor.advance()) {
            result++;
        }
    }
    catch (IOException e) {
        result = -1;
    }
    finally {
        extractor.release();
    }
    return result;
}
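Before switching tools, it is worth noting one likely culprit in the code above: sawEOS is declared once outside the file loop and never reset, so after the first clip hits end-of-stream the copy loop never runs again, and only the first clip is written (only one muxer track is ever added, too). Below is a hedged sketch of the per-file loop with that fixed; it assumes the same surrounding fields as the question's code (muxer, dstBuf, bufferInfo, timeOffsetUs, and a dstTrackIndex already added from the first clip's format), and it only works when all clips share identical codec settings:
for (int fileIndex = 0; fileIndex < sources.length; fileIndex++) {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(sources[fileIndex].getPath());
    extractor.selectTrack(0); // assumes one (video) track per clip
    boolean sawEOS = false;   // reset for every source file
    long lastPresentationTimeUs = 0;
    while (!sawEOS) {
        bufferInfo.offset = 0;
        bufferInfo.size = extractor.readSampleData(dstBuf, 0);
        if (bufferInfo.size < 0) {
            sawEOS = true; // end of this clip; move on to the next one
        } else {
            lastPresentationTimeUs = extractor.getSampleTime();
            bufferInfo.presentationTimeUs = lastPresentationTimeUs + timeOffsetUs;
            bufferInfo.flags = extractor.getSampleFlags();
            muxer.writeSampleData(dstTrackIndex, dstBuf, bufferInfo);
            extractor.advance();
        }
    }
    timeOffsetUs += lastPresentationTimeUs; // shift the next clip's timestamps
    extractor.release();
}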
I'm using this library for muxing videos: ffmpeg-android-java
gradle dependency:
implementation 'com.writingminds:FFmpegAndroid:0.3.2'
Here's how I use it in my project to mux video and audio in Kotlin: VideoAudioMuxer
So basically it works like ffmpeg in the terminal, but you're inputting your command to a method as an array of strings, along with a listener:
ffmpeg.execute(arrayOf("-i", videoPath, "-i", audioPath, "$targetPath.mp4"), object : ExecuteBinaryResponseHandler() {
You'll have to search for how to merge videos with ffmpeg and convert the commands into an array of strings for the arguments you need.
You could probably do almost anything, since ffmpeg is a very powerful tool.
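For the merge itself, one plausible set of arguments looks like this (a hypothetical sketch, in Java for consistency with the rest of this page; listFile is an assumed text file listing the clips as lines like file 'video_p1.mp4', and the callbacks come from FFmpegAndroid's ExecuteBinaryResponseHandler):
// Concatenate without re-encoding, using ffmpeg's concat demuxer.
String[] cmd = {
        "-f", "concat", "-safe", "0",
        "-i", listFile.getPath(),   // assumed File pointing at list.txt
        "-c", "copy",
        outputFile.getPath()        // assumed destination, e.g. video.mp4
};
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
    @Override
    public void onSuccess(String message) {
        // merged successfully
    }
    @Override
    public void onFailure(String message) {
        // inspect ffmpeg's output to see what went wrong
    }
});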

Why does the stream position go to the end

I have a csv file. After I overwrite one line with the Write method, every subsequent write to the file is appended at the end of the file instead of going to the specific line.
using System.Collections;
using System.Collections.Generic;
using UnityEngine.UI;
using UnityEngine;
using System.Text;
using System.IO;

public class LoadQuestion : MonoBehaviour
{
    int index;
    string path;
    FileStream file;
    StreamReader reader;
    StreamWriter writer;
    public Text City;
    public string[] allQuestion;
    public string[] addedQuestion;

    private void Start()
    {
        index = 0;
        path = Application.dataPath + "/Files/Questions.csv";
        allQuestion = File.ReadAllLines(path, Encoding.GetEncoding(1251));
        file = new FileStream(path, FileMode.Open, FileAccess.ReadWrite);
        writer = new StreamWriter(file, Encoding.GetEncoding(1251));
        reader = new StreamReader(file, Encoding.GetEncoding(1251));
        writer.AutoFlush = true;
        List<string> _questions = new List<string>();
        for (int i = 0; i < allQuestion.Length; i++)
        {
            char status = allQuestion[i][0];
            if (status == '0')
            {
                _questions.Add(allQuestion[i]);
            }
        }
        addedQuestion = _questions.ToArray();
        City.text = ParseToCity(addedQuestion[0]);
    }

    private string ParseToCity(string current)
    {
        string _city = "";
        string[] data = current.Split(';');
        _city = data[2];
        return _city;
    }

    private void OnApplicationQuit()
    {
        writer.Close();
        reader.Close();
        file.Close();
    }

    public void IKnow()
    {
        string[] quest = addedQuestion[index].Split(';');
        int indexFromFile = int.Parse(quest[1]);
        string questBeforeAnsver = "";
        for (int i = 0; i < quest.Length; i++)
        {
            if (i == 0)
            {
                questBeforeAnsver += "1";
            }
            else
            {
                questBeforeAnsver += ";" + quest[i];
            }
        }
        Debug.Log("indexFromFile : " + indexFromFile);
        for (int i = 0; i < allQuestion.Length; i++)
        {
            if (i == indexFromFile)
            {
                writer.Write(questBeforeAnsver);
                break;
            }
            else
            {
                reader.ReadLine();
            }
        }
        reader.DiscardBufferedData();
        reader.BaseStream.Seek(0, SeekOrigin.Begin);
        if (index < addedQuestion.Length - 1)
        {
            index++;
        }
        City.text = ParseToCity(addedQuestion[index]);
    }
}
The lines in the file look like this:
0;0;Africa
0;1;London
0;2;Paris
The bottom line is that this is a game, and only questions whose status is 0 (i.e., unanswered) are loaded from the file. If during the game the user clicks that he knows the answer, the corresponding line in the file is found and overwritten, only with status 1 instead of 0, so when the game is replayed that question will not be loaded.
What happens for me is that the first question is overwritten successfully, but all subsequent ones are simply appended at the end of the file:
1;0;Africa
0;1;London
0;2;Paris1;1;London1;2;Paris
What's wrong?

Memory Consumption is always increasing in Humble-Video

I have been using humble-video in a live-streaming project to convert FLV to MP4. I've noticed that the Java process running the humble-video code keeps growing in memory usage when watched with the top command.
To isolate it, I changed the humble-video demo source code and put the segmentFile function in an infinite loop; the process's memory usage again grew continuously under top. It passed 2.5 GiB after about 30 minutes of running.
I expect the process's memory consumption to stay stable somewhere around 40-50 MB, not to keep increasing.
Do you have any idea about that?
I've resolved the problem.
The problem is that the garbage collector does not clear the WeakReferences promptly, so JNIMemoryManager does not delete the underlying native objects. Calling System.gc() after every iteration helps, but it is not the real solution.
The real solution is to call delete() on each native object at the end of each iteration. Some objects you may not expect get created during execution, so check which objects are created with JNIMemoryManager.getMgr().dumpMemoryLog(); and how many objects are still alive with JNIMemoryManager.getMgr().getNumPinnedObjects();
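As a compact illustration of that checking loop (a sketch; the segmentFile call and its arguments are just stand-ins for whatever work one iteration does):
JNIMemoryManager mgr = JNIMemoryManager.getMgr();
mgr.setMemoryDebugging(true);
for (int i = 0; i < 100; i++) {
    // hypothetical arguments; see the full function below
    segmentFile(input, output, 0, 10, 0, 0, null, null, null);
    // after each pass, see how many native objects are still pinned
    System.out.println("pinned after iteration " + i + ": " + mgr.getNumPinnedObjects());
}
mgr.dumpMemoryLog(); // details of anything still alive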
The last state of the segmentFile function is as below, and memory consumption now stays at about 80 MiB after 15 minutes of running.
private void segmentFile(String input, String output, int hls_start,
        int hls_time, int hls_list_size, int hls_wrap, String hls_base_url,
        String vFilter,
        String aFilter) throws InterruptedException, IOException {
    JNIMemoryManager.getMgr().setMemoryDebugging(true);
    Demuxer demuxer = Demuxer.make();
    demuxer.open(input, null, false, true, null, null);
    // we're forcing this to be HTTP Live Streaming for this demo.
    Muxer muxer = Muxer.make(output, null, "hls");
    muxer.setProperty("start_number", hls_start);
    muxer.setProperty("hls_time", hls_time);
    muxer.setProperty("hls_list_size", hls_list_size);
    muxer.setProperty("hls_wrap", hls_wrap);
    if (hls_base_url != null && hls_base_url.length() > 0)
        muxer.setProperty("hls_base_url", hls_base_url);
    MuxerFormat format = MuxerFormat.guessFormat("mp4", null, null);
    /**
     * Create bit stream filters if we are asked to.
     */
    BitStreamFilter vf = vFilter != null ? BitStreamFilter.make(vFilter) : null;
    BitStreamFilter af = aFilter != null ? BitStreamFilter.make(aFilter) : null;
    int n = demuxer.getNumStreams();
    DemuxerStream[] demuxerStreams = new DemuxerStream[n];
    Decoder[] decoders = new Decoder[n];
    List<MuxerStream> muxerStreamList = new ArrayList<MuxerStream>();
    for (int i = 0; i < n; i++) {
        demuxerStreams[i] = demuxer.getStream(i);
        decoders[i] = demuxerStreams[i].getDecoder();
        Decoder d = decoders[i];
        if (d != null) {
            // neat; we can decode. Now let's see if this decoder can fit into the mp4 format.
            if (!format.getSupportedCodecs().contains(d.getCodecID())) {
                throw new RuntimeException("Input filename (" + input + ") contains at least one stream with a codec not supported in the output format: " + d.toString());
            }
            if (format.getFlag(MuxerFormat.Flag.GLOBAL_HEADER))
                d.setFlag(Coder.Flag.FLAG_GLOBAL_HEADER, true);
            d.open(null, null);
            muxerStreamList.add(muxer.addNewStream(d));
        }
    }
    muxer.open(null, null);
    n = muxer.getNumStreams();
    MuxerStream[] muxerStreams = new MuxerStream[n];
    Coder[] coder = new Coder[n];
    for (int i = 0; i < n; i++) {
        muxerStreams[i] = muxer.getStream(i);
        if (muxerStreams[i] != null) {
            coder[i] = muxerStreams[i].getCoder();
        }
    }
    MediaPacket packet = MediaPacket.make();
    while (demuxer.read(packet) >= 0) {
        /**
         * Now we have a packet, but we can only write packets that had decoders we knew what to do with.
         */
        final Decoder d = decoders[packet.getStreamIndex()];
        if (packet.isComplete() && d != null) {
            // check to see if we are using bit stream filters, and if so, filter the audio
            // or video.
            if (vf != null && d.getCodecType() == Type.MEDIA_VIDEO)
                vf.filter(packet, null);
            else if (af != null && d.getCodecType() == Type.MEDIA_AUDIO)
                af.filter(packet, null);
            muxer.write(packet, false);
        }
    }
    // It is good practice to close demuxers when you're done to free
    // up file handles. Humble will EVENTUALLY detect if nothing else
    // references this demuxer and close it then, but get in the habit
    // of cleaning up after yourself, and your future girlfriend/boyfriend
    // will appreciate it.
    muxer.close();
    demuxer.close();
    muxer.delete();
    demuxer.delete();
    packet.delete();
    format.delete();
    if (vf != null)
        vf.delete();
    if (af != null)
        af.delete();
    muxer = null;
    demuxer = null;
    packet = null;
    format = null;
    vf = null;
    af = null;
    for (int i = 0; i < muxerStreams.length; i++) {
        if (muxerStreams[i] != null) {
            muxerStreams[i].delete();
            muxerStreams[i] = null;
        }
        if (coder[i] != null) {
            coder[i].delete();
            coder[i] = null;
        }
    }
    for (int i = 0; i < demuxerStreams.length; i++) {
        if (demuxerStreams[i] != null) {
            demuxerStreams[i].delete();
            demuxerStreams[i] = null;
        }
        if (decoders[i] != null) {
            decoders[i].delete();
            decoders[i] = null;
        }
    }
    for (MuxerStream muxerStream : muxerStreamList) {
        if (muxerStream != null) {
            muxerStream.delete();
        }
    }
    muxerStreamList.clear();
    muxerStreamList = null;
    System.out.println("number of alive objects:" + JNIMemoryManager.getMgr().getNumPinnedObjects());
}

Capture only one thumbnail image from a video

I am working on generating thumbnail images from a video. I am able to do it, but I need only one thumbnail image per video, and what I get is several images taken at different times in the video. I have used the following code to generate the thumbnails. Please suggest what I should modify in the code below to get only one thumbnail, from the middle of the video. The code I used is as follows (I have used Xuggler):
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.MediaListenerAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IVideoPictureEvent;
import com.xuggle.xuggler.Global;

public class Main {
    public static final double SECONDS_BETWEEN_FRAMES = 10;
    private static final String inputFilename = "D:\\k\\Knock On Wood Lesson.flv";
    private static final String outputFilePrefix = "D:\\pix\\";
    // The video stream index, used to ensure we display frames from one and
    // only one video stream from the media container.
    private static int mVideoStreamIndex = -1;
    // Time of last frame write
    private static long mLastPtsWrite = Global.NO_PTS;
    public static final long MICRO_SECONDS_BETWEEN_FRAMES =
            (long) (Global.DEFAULT_PTS_PER_SECOND * SECONDS_BETWEEN_FRAMES);

    public static void main(String[] args) {
        IMediaReader mediaReader = ToolFactory.makeReader(inputFilename);
        // stipulate that we want BufferedImages created in BGR 24bit color space
        mediaReader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);
        mediaReader.addListener(new ImageSnapListener());
        // read out the contents of the media file and
        // dispatch events to the attached listener
        while (mediaReader.readPacket() == null);
    }

    private static class ImageSnapListener extends MediaListenerAdapter {
        public void onVideoPicture(IVideoPictureEvent event) {
            if (event.getStreamIndex() != mVideoStreamIndex) {
                // if the selected video stream id is not yet set, go ahead and
                // select this lucky video stream
                if (mVideoStreamIndex == -1) {
                    mVideoStreamIndex = event.getStreamIndex();
                } // no need to show frames from this video stream
                else {
                    return;
                }
            }
            // if uninitialized, back date mLastPtsWrite to get the very first frame
            if (mLastPtsWrite == Global.NO_PTS) {
                mLastPtsWrite = event.getTimeStamp() - MICRO_SECONDS_BETWEEN_FRAMES;
            }
            // if it's time to write the next frame
            if (event.getTimeStamp() - mLastPtsWrite
                    >= MICRO_SECONDS_BETWEEN_FRAMES) {
                String outputFilename = dumpImageToFile(event.getImage());
                // indicate file written
                double seconds = ((double) event.getTimeStamp())
                        / Global.DEFAULT_PTS_PER_SECOND;
                System.out.printf("at elapsed time of %6.3f seconds wrote: %s\n",
                        seconds, outputFilename);
                // update last write time
                mLastPtsWrite += MICRO_SECONDS_BETWEEN_FRAMES;
            }
        }

        private String dumpImageToFile(BufferedImage image) {
            try {
                String outputFilename = outputFilePrefix
                        + System.currentTimeMillis() + ".png";
                ImageIO.write(image, "png", new File(outputFilename));
                return outputFilename;
            } catch (IOException e) {
                e.printStackTrace();
                return null;
            }
        }
    }
}
This is how you can do it:
public class ThumbsGenerator {
    private static void processFrame(IVideoPicture picture, BufferedImage image) {
        try {
            File file = new File("C:\\snapshot\\thumbnailpic.png"); // name of pic
            ImageIO.write(image, "png", file);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @SuppressWarnings("deprecation")
    public static void main(String[] args) throws NumberFormatException, IOException {
        String filename = "your_video.mp4";
        if (!IVideoResampler.isSupported(IVideoResampler.Feature.FEATURE_COLORSPACECONVERSION))
            throw new RuntimeException("you must install the GPL version of Xuggler (with IVideoResampler support) for this demo to work");
        IContainer container = IContainer.make();
        if (container.open(filename, IContainer.Type.READ, null) < 0)
            throw new IllegalArgumentException("could not open file: " + filename);
        String seconds = container.getDuration() / (1000000 * 2) + ""; // time of thumbnail: half the duration, in seconds
        int numStreams = container.getNumStreams();
        // and iterate through the streams to find the first video stream
        int videoStreamId = -1;
        IStreamCoder videoCoder = null;
        for (int i = 0; i < numStreams; i++) {
            // find the stream object
            IStream stream = container.getStream(i);
            // get the pre-configured decoder that can decode this stream;
            IStreamCoder coder = stream.getStreamCoder();
            if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
                videoStreamId = i;
                videoCoder = coder;
                break;
            }
        }
        if (videoStreamId == -1)
            throw new RuntimeException("could not find video stream in container: " + filename);
        if (videoCoder.open() < 0)
            throw new RuntimeException("could not open video decoder for container: " + filename);
        IVideoResampler resampler = null;
        if (videoCoder.getPixelType() != IPixelFormat.Type.BGR24) {
            resampler = IVideoResampler.make(videoCoder.getWidth(),
                    videoCoder.getHeight(), IPixelFormat.Type.BGR24,
                    videoCoder.getWidth(), videoCoder.getHeight(),
                    videoCoder.getPixelType());
            if (resampler == null)
                throw new RuntimeException("could not create color space resampler for: " + filename);
        }
        IPacket packet = IPacket.make();
        IRational timeBase = container.getStream(videoStreamId).getTimeBase();
        System.out.println("Timebase " + timeBase.toString());
        long timeStampOffset = (timeBase.getDenominator() / timeBase.getNumerator())
                * Integer.parseInt(seconds);
        System.out.println("TimeStampOffset " + timeStampOffset);
        long target = container.getStartTime() + timeStampOffset;
        container.seekKeyFrame(videoStreamId, target, 0);
        boolean isFinished = false;
        while (container.readNextPacket(packet) >= 0 && !isFinished) {
            if (packet.getStreamIndex() == videoStreamId) {
                IVideoPicture picture = IVideoPicture.make(videoCoder.getPixelType(),
                        videoCoder.getWidth(), videoCoder.getHeight());
                int offset = 0;
                while (offset < packet.getSize()) {
                    int bytesDecoded = videoCoder.decodeVideo(picture, packet, offset);
                    if (bytesDecoded < 0) {
                        System.err.println("WARNING!!! got no data decoding " +
                                "video in one packet");
                    }
                    offset += bytesDecoded;
                    // once we have a complete picture from the packet, convert and save it
                    if (picture.isComplete()) {
                        IVideoPicture newPic = picture;
                        if (resampler != null) {
                            newPic = IVideoPicture.make(resampler.getOutputPixelFormat(),
                                    picture.getWidth(), picture.getHeight());
                            if (resampler.resample(newPic, picture) < 0)
                                throw new RuntimeException("could not resample video from: " + filename);
                        }
                        if (newPic.getPixelType() != IPixelFormat.Type.BGR24)
                            throw new RuntimeException("could not decode video as BGR 24 bit data in: " + filename);
                        BufferedImage javaImage = Utils.videoPictureToImage(newPic);
                        processFrame(newPic, javaImage);
                        isFinished = true;
                    }
                }
            }
        }
        if (videoCoder != null) {
            videoCoder.close();
            videoCoder = null;
        }
        if (container != null) {
            container.close();
            container = null;
        }
    }
}
I know this is an old question but I found the same piece of tutorial code while playing with Xuggler today. The reason you are getting multiple thumbnails is due to the following line:
public static final double SECONDS_BETWEEN_FRAMES = 10;
This variable specifies the number of seconds between calls to dumpImageToFile. So a frame thumbnail will be written at 0.00 seconds, at 10.00 seconds, at 20.00 seconds, and so on:
if (event.getTimeStamp() - mLastPtsWrite >= MICRO_SECONDS_BETWEEN_FRAMES)
To get a frame thumbnail from the middle of the video you can calculate the duration of the video using more Xuggler capability which I found in a tutorial at JavaCodeGeeks. Then change your code in the ImageSnapListener to only write a single frame once the IVideoPictureEvent event timestamp exceeds the calculated mid point.
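For reference, here is a rough sketch of that approach (hedged: wroteThumbnail and midPointUs are assumed new fields in the listener, and IContainer.getDuration() reports microseconds, matching event.getTimeStamp()):
// Query the duration once, up front, with a separate IContainer.
IContainer probe = IContainer.make();
if (probe.open(inputFilename, IContainer.Type.READ, null) < 0)
    throw new IllegalArgumentException("could not open: " + inputFilename);
long midPointUs = probe.getDuration() / 2; // microseconds
probe.close();

// Then, inside ImageSnapListener.onVideoPicture, write exactly one frame:
if (!wroteThumbnail && event.getTimeStamp() >= midPointUs) {
    dumpImageToFile(event.getImage());
    wroteThumbnail = true;
}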
I hope that helps anyone who stumbles across this question.

Loading an animated image to a BufferedImage array

I'm trying to implement animated textures into an OpenGL game seamlessly. I made a generic ImageDecoder class to translate any BufferedImage into a ByteBuffer. It works perfectly for now, though it doesn't load animated images.
I'm not trying to load an animated image as an ImageIcon. I need the BufferedImage to get an OpenGL-compliant ByteBuffer.
How can I load every frame of an animated image as a BufferedImage array?
On a similar note, how can I get the animation rate / period?
Does Java handle APNG?
The following code is an adaptation of my own implementation to accommodate the "into array" part.
The problem with GIFs is that there are different disposal methods which have to be considered if you want this to work with all of them. The code below tries to compensate for that. For example, there is a special implementation for the "doNotDispose" mode, which takes all frames from start to N and paints them on top of each other into a BufferedImage.
The advantage of this method over the one posted by chubbsondubs is that it does not have to wait for the gif animation delays, but can be done basically instantly.
// Wrapped in a method for completeness; the method name and byte[] source are illustrative.
private BufferedImage[] loadFrames(byte[] data) throws IOException
{
    BufferedImage[] array = null;
    ImageInputStream imageInputStream = ImageIO.createImageInputStream(new ByteArrayInputStream(data)); // or any other source stream
    Iterator<ImageReader> imageReaders = ImageIO.getImageReaders(imageInputStream);
    while (imageReaders.hasNext())
    {
        ImageReader reader = imageReaders.next();
        try
        {
            reader.setInput(imageInputStream);
            int frames = reader.getNumImages(true);
            array = new BufferedImage[frames];
            for (int frameId = 0; frameId < frames; frameId++)
            {
                BufferedImage image;
                int w = reader.getWidth(0);
                int h = reader.getHeight(0);
                int fw = reader.getWidth(frameId);
                int fh = reader.getHeight(frameId);
                if (h != fh || w != fw)
                {
                    GifMeta gm = getGifMeta(reader.getImageMetadata(frameId));
                    // disposalMethodNames: "none", "doNotDispose", "restoreToBackgroundColor", "restoreToPrevious"
                    if ("doNotDispose".equals(gm.disposalMethod))
                    {
                        image = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
                        Graphics2D g = (Graphics2D) image.getGraphics();
                        for (int f = 0; f <= frameId; f++)
                        {
                            gm = getGifMeta(reader.getImageMetadata(f));
                            if ("doNotDispose".equals(gm.disposalMethod))
                            {
                                g.drawImage(reader.read(f), null, gm.imageLeftPosition, gm.imageTopPosition);
                            }
                            else
                            {
                                // XXX "Unimplemented disposalMethod (" + getName() + "): " + gm.disposalMethod
                            }
                        }
                        g.dispose();
                    }
                    else
                    {
                        image = reader.read(frameId);
                        // XXX "Unimplemented disposalMethod (" + getName() + "): " + gm.disposalMethod
                    }
                }
                else
                {
                    image = reader.read(frameId);
                }
                if (image == null)
                {
                    throw new NullPointerException();
                }
                array[frameId] = image;
            }
        }
        finally
        {
            reader.dispose();
        }
    }
    return array;
}
private final static class GifMeta
{
    String disposalMethod = "none";
    int imageLeftPosition = 0;
    int imageTopPosition = 0;
    int delayTime = 0;
}

private GifMeta getGifMeta(IIOMetadata meta)
{
    GifMeta gm = new GifMeta();
    final IIOMetadataNode gifMeta = (IIOMetadataNode) meta.getAsTree("javax_imageio_gif_image_1.0");
    NodeList childNodes = gifMeta.getChildNodes();
    for (int i = 0; i < childNodes.getLength(); ++i)
    {
        IIOMetadataNode subnode = (IIOMetadataNode) childNodes.item(i);
        if (subnode.getNodeName().equals("GraphicControlExtension"))
        {
            gm.disposalMethod = subnode.getAttribute("disposalMethod");
            gm.delayTime = Integer.parseInt(subnode.getAttribute("delayTime"));
        }
        else if (subnode.getNodeName().equals("ImageDescriptor"))
        {
            gm.imageLeftPosition = Integer.parseInt(subnode.getAttribute("imageLeftPosition"));
            gm.imageTopPosition = Integer.parseInt(subnode.getAttribute("imageTopPosition"));
        }
    }
    return gm;
}
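Incidentally, the delayTime that getGifMeta collects also answers the animation-rate question: in GIF metadata it is stored in hundredths of a second, so the per-frame period in milliseconds is delayTime * 10.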
I don't think Java supports APNG by default, but you can use a 3rd-party library to parse it:
http://code.google.com/p/javapng/source/browse/trunk/javapng2/src/apng/com/sixlegs/png/AnimatedPngImage.java?r=300
That might be your easiest method. As for getting the frames from an animated gif, you have to register an ImageObserver:
new ImageIcon(url).setImageObserver(new ImageObserver() {
    public boolean imageUpdate(Image img, int infoFlags, int x, int y, int width, int height) {
        if ((infoFlags & ImageObserver.FRAMEBITS) == ImageObserver.FRAMEBITS) {
            // another frame was loaded; do something with it
        }
        return true; // keep receiving updates until the image is fully loaded
    }
});
This loads asynchronously on another thread so imageUpdate() won't be called immediately. But it will be called for each frame as it parses it.
http://docs.oracle.com/javase/1.4.2/docs/api/java/awt/image/ImageObserver.html
