Tapping the Android screen solves media skipping, how to fix? - java

I wrote an Android app that plays multi-track audio files and it works perfectly in the emulator. On the device, it plays for a few seconds and then starts skipping and popping every few seconds. If I continuously tap the screen in the dead space of the app, the skipping doesn't occur, and it recurs about 5 seconds after the screen tapping ceases. I presume this has something to do with thread priority, but I log the thread priority in the play loop and it never changes.
I'm hoping that somebody can tell me either:
a hack where I can simulate a screen tap every second, so that I can run a beta test without the app skipping, or
a way to debug activity/thread/etc. priority, since the behavior suggests my thread priority is changing even though the value I log in the play loop never does.
Here is how the player code is executed:
private class DecodeOperation extends AsyncTask<Void, Void, Void> {
@Override
protected Void doInBackground(Void... values) {
AudioTrackPlayer.this.decodeLoop();
return null;
}
@Override
protected void onPreExecute() {
}
@Override
protected void onProgressUpdate(Void... values) {
}
}
Here is the relevant player code:
private void decodeLoop()
{
ByteBuffer[] codecInputBuffers;
ByteBuffer[] codecOutputBuffers;
// extractor gets information about the stream
extractor = new MediaExtractor();
try {
extractor.setDataSource(this.mUrlString);
} catch (Exception e) {
mDelegateHandler.onRadioPlayerError(AudioTrackPlayer.this);
return;
}
MediaFormat format = extractor.getTrackFormat(0);
String mime = format.getString(MediaFormat.KEY_MIME);
// the actual decoder
codec = MediaCodec.createDecoderByType(mime);
codec.configure(format, null /* surface */, null /* crypto */, 0 /* flags */);
codec.start();
codecInputBuffers = codec.getInputBuffers();
codecOutputBuffers = codec.getOutputBuffers();
// get the sample rate to configure AudioTrack
int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
Log.i(LOG_TAG,"mime "+mime);
Log.i(LOG_TAG,"sampleRate "+sampleRate);
// create our AudioTrack instance
audioTrack = new AudioTrack(
AudioManager.STREAM_MUSIC,
sampleRate,
AudioFormat.CHANNEL_OUT_5POINT1,
AudioFormat.ENCODING_PCM_16BIT,
AudioTrack.getMinBufferSize (
sampleRate,
AudioFormat.CHANNEL_OUT_5POINT1,
AudioFormat.ENCODING_PCM_16BIT
),
AudioTrack.MODE_STREAM
);
// start playing, we will feed you later
audioTrack.play();
extractor.selectTrack(0);
// start decoding
final long kTimeOutUs = 10000;
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean sawInputEOS = false;
boolean sawOutputEOS = false;
int noOutputCounter = 0;
int noOutputCounterLimit = 50;
while (!sawOutputEOS && noOutputCounter < noOutputCounterLimit && !doStop) {
//Log.i(LOG_TAG, "loop ");
noOutputCounter++;
if (!sawInputEOS) {
inputBufIndex = codec.dequeueInputBuffer(kTimeOutUs);
bufIndexCheck++;
// Log.d(LOG_TAG, " bufIndexCheck " + bufIndexCheck);
if (inputBufIndex >= 0) {
ByteBuffer dstBuf = codecInputBuffers[inputBufIndex];
int sampleSize =
extractor.readSampleData(dstBuf, 0 /* offset */);
//Log.d(LOG_TAG, "SampleLength = " + String.valueOf(sampleSize));
long presentationTimeUs = 0;
if (sampleSize < 0) {
Log.d(LOG_TAG, "saw input EOS.");
sawInputEOS = true;
sampleSize = 0;
} else {
presentationTimeUs = extractor.getSampleTime();
}
// can throw illegal state exception (???)
codec.queueInputBuffer(
inputBufIndex,
0 /* offset */,
sampleSize,
presentationTimeUs,
sawInputEOS ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0);
if (!sawInputEOS) {
extractor.advance();
}
}
else
{
Log.e(LOG_TAG, "inputBufIndex " +inputBufIndex);
}
}
int res = codec.dequeueOutputBuffer(info, kTimeOutUs);
if (res >= 0) {
//Log.d(LOG_TAG, "got frame, size " + info.size + "/" + info.presentationTimeUs);
if (info.size > 0) {
noOutputCounter = 0;
}
int outputBufIndex = res;
ByteBuffer buf = codecOutputBuffers[outputBufIndex];
final byte[] chunk = new byte[info.size];
buf.get(chunk);
buf.clear();
audioTrack.write(chunk,0,chunk.length);
if(this.mState != State.Playing)
{
mDelegateHandler.onRadioPlayerPlaybackStarted(AudioTrackPlayer.this);
}
this.mState = State.Playing;
codec.releaseOutputBuffer(outputBufIndex, false /* render */);
if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
Log.d(LOG_TAG, "saw output EOS.");
sawOutputEOS = true;
}
} else if (res == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
codecOutputBuffers = codec.getOutputBuffers();
Log.d(LOG_TAG, "output buffers have changed.");
} else if (res == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
MediaFormat oformat = codec.getOutputFormat();
Log.d(LOG_TAG, "output format has changed to " + oformat);
} else {
Log.d(LOG_TAG, "dequeueOutputBuffer returned " + res);
}
}
Log.d(LOG_TAG, "stopping...");
relaxResources(true);
this.mState = State.Stopped;
doStop = true;
// attempt reconnect
if(sawOutputEOS)
{
try {
AudioTrackPlayer.this.play();
return;
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
if(noOutputCounter >= noOutputCounterLimit)
{
mDelegateHandler.onRadioPlayerError(AudioTrackPlayer.this);
}
else
{
mDelegateHandler.onRadioPlayerStopped(AudioTrackPlayer.this);
}
}

Have you monitored the CPU frequency while your application is running? The CPU governor is probably scaling the CPU up on touch and scaling it back down on a timer. Increasing the priority of your background thread to THREAD_PRIORITY_DEFAULT will probably fix the issue; the default priority for an AsyncTask is quite low and not appropriate for audio.
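As a minimal sketch (not from the question's code, and assuming the decode loop stays inside the AsyncTask shown above), the priority can be raised at the top of doInBackground; THREAD_PRIORITY_AUDIO is another option for playback threads:
@Override
protected Void doInBackground(Void... values) {
    // AsyncTask workers default to THREAD_PRIORITY_BACKGROUND; raise the
    // priority so CPU governor scaling and UI activity affect playback less.
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_DEFAULT);
    AudioTrackPlayer.this.decodeLoop();
    return null;
}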
You could also increase the size of the AudioTrack's buffer to some multiple of the value returned by getMinBufferSize; that method only returns the minimum possible buffer for the class to operate and does not guarantee smooth playback.
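A rough sketch of that change, reusing the constructor arguments from the question (the 4x multiplier is an arbitrary starting point, not something the question specifies):
int minBufferSize = AudioTrack.getMinBufferSize(
        sampleRate,
        AudioFormat.CHANNEL_OUT_5POINT1,
        AudioFormat.ENCODING_PCM_16BIT);
// Larger buffers trade startup latency for resistance to scheduling hiccups.
audioTrack = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        sampleRate,
        AudioFormat.CHANNEL_OUT_5POINT1,
        AudioFormat.ENCODING_PCM_16BIT,
        4 * minBufferSize,
        AudioTrack.MODE_STREAM);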

Related

Merge/Mux multiple mp4 video files on Android

I have a series of mp4 files saved on the device that need to be merged together to make a single mp4 file.
video_p1.mp4 video_p2.mp4 video_p3.mp4 > video.mp4
The solutions I have researched, such as the mp4parser framework, use deprecated code.
The best solution I could find is using a MediaMuxer and MediaExtractor.
The code runs but my videos are not merged (only the content in video_p1.mp4 is displayed and it is in landscape orientation, not portrait).
Can anyone help me sort this out?
public static boolean concatenateFiles(File dst, File... sources) {
if ((sources == null) || (sources.length == 0)) {
return false;
}
boolean result;
MediaExtractor extractor = null;
MediaMuxer muxer = null;
try {
// Set up MediaMuxer for the destination.
muxer = new MediaMuxer(dst.getPath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// Copy the samples from MediaExtractor to MediaMuxer.
boolean sawEOS = false;
//int bufferSize = MAX_SAMPLE_SIZE;
int bufferSize = 1 * 1024 * 1024;
int frameCount = 0;
int offset = 100;
ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
long timeOffsetUs = 0;
int dstTrackIndex = -1;
for (int fileIndex = 0; fileIndex < sources.length; fileIndex++) {
int numberOfSamplesInSource = getNumberOfSamples(sources[fileIndex]);
// Set up MediaExtractor to read from the source.
extractor = new MediaExtractor();
extractor.setDataSource(sources[fileIndex].getPath());
// Set up the tracks.
SparseIntArray indexMap = new SparseIntArray(extractor.getTrackCount());
for (int i = 0; i < extractor.getTrackCount(); i++) {
extractor.selectTrack(i);
MediaFormat format = extractor.getTrackFormat(i);
if (dstTrackIndex < 0) {
dstTrackIndex = muxer.addTrack(format);
muxer.start();
}
indexMap.put(i, dstTrackIndex);
}
long lastPresentationTimeUs = 0;
int currentSample = 0;
while (!sawEOS) {
bufferInfo.offset = offset;
bufferInfo.size = extractor.readSampleData(dstBuf, offset);
if (bufferInfo.size < 0) {
sawEOS = true;
bufferInfo.size = 0;
timeOffsetUs += (lastPresentationTimeUs + 0);
}
else {
lastPresentationTimeUs = extractor.getSampleTime();
bufferInfo.presentationTimeUs = extractor.getSampleTime() + timeOffsetUs;
bufferInfo.flags = extractor.getSampleFlags();
int trackIndex = extractor.getSampleTrackIndex();
if ((currentSample < numberOfSamplesInSource) || (fileIndex == sources.length - 1)) {
muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
}
extractor.advance();
frameCount++;
currentSample++;
Log.d("tag2", "Frame (" + frameCount + ") " +
"PresentationTimeUs:" + bufferInfo.presentationTimeUs +
" Flags:" + bufferInfo.flags +
" TrackIndex:" + trackIndex +
" Size(KB) " + bufferInfo.size / 1024);
}
}
extractor.release();
extractor = null;
}
result = true;
}
catch (IOException e) {
result = false;
}
finally {
if (extractor != null) {
extractor.release();
}
if (muxer != null) {
muxer.stop();
muxer.release();
}
}
return result;
}
public static int getNumberOfSamples(File src) {
MediaExtractor extractor = new MediaExtractor();
int result;
try {
extractor.setDataSource(src.getPath());
extractor.selectTrack(0);
result = 0;
while (extractor.advance()) {
result ++;
}
}
catch(IOException e) {
result = -1;
}
finally {
extractor.release();
}
return result;
}
I'm using this library for muxing videos: ffmpeg-android-java
gradle dependency:
implementation 'com.writingminds:FFmpegAndroid:0.3.2'
Here's how I use it in my project to mux video and audio in Kotlin: VideoAudioMuxer
So basically it works like ffmpeg in a terminal, but you pass your command to a method as an array of strings, along with a listener.
ffmpeg.execute(arrayOf("-i", videoPath, "-i", audioPath, "$targetPath.mp4"), object : ExecuteBinaryResponseHandler() {
You'll have to look up how to merge videos in ffmpeg and convert the command into an array of strings for the arguments you need.
You could probably do almost anything, since ffmpeg is a very powerful tool.
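For example, a hedged sketch in Java (the paths, the ffmpeg instance and the concat filter arguments are illustrative assumptions, not part of the answer above; the concat filter re-encodes, so the inputs do not have to share identical encoding parameters):
// Merge three clips into one file with ffmpeg's concat filter.
String[] cmd = {
        "-i", part1Path, "-i", part2Path, "-i", part3Path,
        "-filter_complex",
        "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[outv][outa]",
        "-map", "[outv]", "-map", "[outa]",
        outputPath
};
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
    @Override
    public void onSuccess(String message) {
        Log.d("merge", "done: " + message);
    }
    @Override
    public void onFailure(String message) {
        Log.e("merge", "ffmpeg failed: " + message);
    }
});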

ImageSwitcher takes several seconds when Grow heap happens (already down-scaled)

I've already read a lot of posts but still can't solve it.
I have an app with a problem related to OOM. It shows a picture stored on a server (PNG, approximately 1.33 MB, 3072*1728 pixels). My heap size is 125 MB,
obtained with: long maxMemory = rt.maxMemory();
I start a thread (an AsyncTask) to decode the image and manage it with a handler.
I also down-scale the picture when it is too big. The image does display on the local device.
But the log still shows "Grow heap (frag case) to 10.389MB for 2654224-byte allocation",
and it can take 2~10 seconds to collect the garbage.
That is a bad user experience. Has anyone met the same problem? Any suggestions?
public class DownloadImageTask extends AsyncTask<String, Void, Bitmap> {
public static String TAG = "DownloadImageTask";
public enum DOWNLOAD_STATE {
eIdle, eProgressing, eError, eSuccess,
}
private DOWNLOAD_STATE state = DOWNLOAD_STATE.eIdle;
private Bitmap dlBitmap;
public DownloadImageTask(String imageUrl) {
execute(imageUrl);
}
public DOWNLOAD_STATE getDownloadState() {
return state;
}
public Bitmap getDownloadBitmap() {
return dlBitmap;
}
public Drawable getDownloadDrawable() {
@SuppressWarnings("deprecation")
Drawable dlDrawable = new BitmapDrawable(dlBitmap);
return dlDrawable;
}
protected Bitmap doInBackground(String... urls) {
String urldisplay = urls[0];
try {
dlBitmap = getBitmap(urldisplay);
if(dlBitmap==null)
state = DOWNLOAD_STATE.eError;
else
state = DOWNLOAD_STATE.eSuccess;
} catch (Exception e) {
dlBitmap = null;
state = DOWNLOAD_STATE.eError;
}
return dlBitmap;
}
protected void onPostExecute(Bitmap result) {
result = null;
}
public Bitmap getBitmap(String urldisplay) {
try {
Log.d(TAG,"getBitmap File");
InputStream in = new java.net.URL(urldisplay).openStream();
if(in == null)
Log.d(TAG,"in null");
// first decode, get the length & width of picture,didn't load the picture into memory
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inJustDecodeBounds = true;
BitmapFactory.decodeStream(in, null, opts);
//in.close();
//compute the SampleSize (power of 2 is great)
int sampleSize = computeSampleSize(opts, -1, 1920 * 1080);//monitor limitation
Log.d(TAG,"samplesize ="+sampleSize);
// second decode,set sample size , generate thumbnail
in = new java.net.URL(urldisplay).openStream();
opts = new BitmapFactory.Options();
opts.inPreferredConfig = Bitmap.Config.RGB_565;
opts.inSampleSize = sampleSize;
opts.inInputShareable = true;
opts.inPurgeable = true;
Bitmap bmp = BitmapFactory.decodeStream(in, null, opts);
in.close();
return bmp;
} catch (Exception err) {
Log.e(TAG, "error: " + err.toString());
return null;
}
}
public static int computeSampleSize(BitmapFactory.Options options,
int minSideLength, int maxNumOfPixels) {
Log.d(TAG,"computeSampleSize");
int initialSize = computeInitialSampleSize(options, minSideLength,
maxNumOfPixels);
int roundedSize;
if (initialSize <= 8) {
roundedSize = 1;
while (roundedSize < initialSize) {
roundedSize <<= 1;
}
} else {
roundedSize = (initialSize + 7) / 8 * 8;
}
return roundedSize;
}
private static int computeInitialSampleSize(BitmapFactory.Options options,
int minSideLength, int maxNumOfPixels) {
Log.d(TAG,"computeInitialSampleSize");
try{
double w = options.outWidth;
double h = options.outHeight;
int lowerBound = (maxNumOfPixels == -1) ? 1 : (int) Math.ceil(Math
.sqrt(w * h / maxNumOfPixels));
int upperBound = (minSideLength == -1) ? 128 : (int) Math.min(
Math.floor(w / minSideLength), Math.floor(h / minSideLength));
if (upperBound < lowerBound) {
// return the larger one when there is no overlapping zone.
return lowerBound;
}
if ((maxNumOfPixels == -1) && (minSideLength == -1)) {
return 1;
} else if (minSideLength == -1) {
return lowerBound;
} else {
return upperBound;
}
}catch (Exception e){
Log.d(TAG,"computeInitialSampleSize err:"+e.toString());
return 1;
}
}
}
And I show the picture in the ImageSwitcher:
myImageSwitcher.setImageDrawable(task.getDownloadDrawable());

MediaCodec H264 Encoder not working on Snapdragon 800 devices

I have written a H264 Stream Encoder using the MediaCodec API of Android. I tested it on about ten different devices with different processors and it worked on all of them, except on Snapdragon 800 powered ones (Google Nexus 5 and Sony Xperia Z1). On those devices I get the SPS and PPS and the first Keyframe, but after that mEncoder.dequeueOutputBuffer(mBufferInfo, 0) only returns MediaCodec.INFO_TRY_AGAIN_LATER. I already experimented with different timeouts, bitrates, resolutions and other configuration options, to no avail. The result is always the same.
I use the following code to initialise the Encoder:
mBufferInfo = new MediaCodec.BufferInfo();
encoder = MediaCodec.createEncoderByType("video/avc");
MediaFormat mediaFormat = MediaFormat.createVideoFormat("video/avc", 640, 480);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 768000);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, mEncoderColorFormat);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 10);
encoder.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
where the selected color format is:
MediaCodecInfo.CodecCapabilities capabilities = mCodecInfo.getCapabilitiesForType(MIME_TYPE);
for (int i = 0; i < capabilities.colorFormats.length && selectedColorFormat == 0; i++)
{
int format = capabilities.colorFormats[i];
switch (format) {
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar:
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedSemiPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_TI_FormatYUV420PackedSemiPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_QCOM_FormatYUV420SemiPlanar:
selectedColorFormat = format;
break;
default:
LogHandler.e(LOG_TAG, "Unsupported color format " + format);
break;
}
}
And I get the data by doing
ByteBuffer[] inputBuffers = mEncoder.getInputBuffers();
ByteBuffer[] outputBuffers = mEncoder.getOutputBuffers();
int inputBufferIndex = mEncoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0)
{
// fill inputBuffers[inputBufferIndex] with valid data
ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(rawFrame);
mEncoder.queueInputBuffer(inputBufferIndex, 0, rawFrame.length, 0, 0);
LogHandler.e(LOG_TAG, "Queue Buffer in " + inputBufferIndex);
}
while(true)
{
int outputBufferIndex = mEncoder.dequeueOutputBuffer(mBufferInfo, 0);
if (outputBufferIndex >= 0)
{
Log.d(LOG_TAG, "Queue Buffer out " + outputBufferIndex);
ByteBuffer buffer = outputBuffers[outputBufferIndex];
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0)
{
// Config Bytes means SPS and PPS
Log.d(LOG_TAG, "Got config bytes");
}
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_SYNC_FRAME) != 0)
{
// Marks a Keyframe
Log.d(LOG_TAG, "Got Sync Frame");
}
if (mBufferInfo.size != 0)
{
// adjust the ByteBuffer values to match BufferInfo (not needed?)
buffer.position(mBufferInfo.offset);
buffer.limit(mBufferInfo.offset + mBufferInfo.size);
int nalUnitLength = 0;
while((nalUnitLength = parseNextNalUnit(buffer)) != 0)
{
switch(mVideoData[0] & 0x0f)
{
// SPS
case 0x07:
{
Log.d(LOG_TAG, "Got SPS");
break;
}
// PPS
case 0x08:
{
Log.d(LOG_TAG, "Got PPS");
break;
}
// Key Frame
case 0x05:
{
Log.d(LOG_TAG, "Got Keyframe");
}
//$FALL-THROUGH$
default:
{
// Process Data
break;
}
}
}
}
mEncoder.releaseOutputBuffer(outputBufferIndex, false);
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0)
{
// Stream is marked as done,
// break out of while
Log.d(LOG_TAG, "Marked EOS");
break;
}
}
else if(outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED)
{
outputBuffers = mEncoder.getOutputBuffers();
Log.d(LOG_TAG, "Output Buffer changed " + outputBuffers);
}
else if(outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED)
{
MediaFormat newFormat = mEncoder.getOutputFormat();
Log.d(LOG_TAG, "Media Format Changed " + newFormat);
}
else if(outputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
{
// No Data, break out
break;
}
else
{
// Unexpected State, ignore it
Log.d(LOG_TAG, "Unexpected State " + outputBufferIndex);
}
}
Thanks for your help!
You need to set the presentationTimeUs parameter in your call to queueInputBuffer.
Most encoders ignore this and you can encode for streaming without issues. The encoder used for Snapdragon 800 devices doesn't.
This parameter represents the recording time of your frame and therefore needs to increase by the number of microseconds between the frame that you want to encode and the previous frame.
If the parameter is set to the same value as the previous frame, the encoder drops the frame.
If the parameter is set to too small a value (e.g. 100000 on a 30 FPS recording), the quality of the encoded frames drops.
encodeCodec.queueInputBuffer(inputBufferIndex, 0, input.length, (System.currentTimeMillis() - startMs) * 1000, 0);
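If you prefer not to use the wall clock, a minimal sketch is to derive the timestamp from a frame counter instead (frameIndex and FRAME_RATE here are assumed variables, not from the question):
// 1,000,000 us per second divided by the frame rate gives the per-frame step,
// so the presentation time increases strictly from one frame to the next.
long presentationTimeUs = frameIndex * 1000000L / FRAME_RATE;
mEncoder.queueInputBuffer(inputBufferIndex, 0, rawFrame.length, presentationTimeUs, 0);
frameIndex++;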

Java video output can only be opened using QuickTime; no other media player can open it

I have a program that captures the screen and then turns those images into a movie (using JpegImagesToMovie.java, customized to work with .png files). I have tried multiple extensions: .mov, .mp4, .avi (I am open to trying others). However, regardless of which extension I use, when I try to open the file with Windows Media Player I get the following error:
Windows Media Player encountered a problem while playing the file.
Error code C00D11B1
I've also tried opening the file using VLC but that produces the following error:
No suitable decoder module:
VLC does not support the audio or video format "twos". Unfortunately there is no way for you to fix this.
Opening the file using QuickTime does work.
So the question is: how can I produce a video file that can be opened with most, if not all, media players?
Here is my JpegImagesToMovie.java
package maple;
/*
* #(#)JpegImagesToMovie.java 1.3 01/03/13
*
* Copyright (c) 1999-2001 Sun Microsystems, Inc. All Rights Reserved.
*
* Sun grants you ("Licensee") a non-exclusive, royalty free, license to use,
* modify and redistribute this software in source and binary code form,
* provided that i) this copyright notice and license appear on all copies of
* the software; and ii) Licensee does not utilize the software in a manner
* which is disparaging to Sun.
*
* This software is provided "AS IS," without a warranty of any kind. ALL
* EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY
* IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
* NON-INFRINGEMENT, ARE HEREBY EXCLUDED. SUN AND ITS LICENSORS SHALL NOT BE
* LIABLE FOR ANY DAMAGES SUFFERED BY LICENSEE AS A RESULT OF USING, MODIFYING
* OR DISTRIBUTING THE SOFTWARE OR ITS DERIVATIVES. IN NO EVENT WILL SUN OR ITS
* LICENSORS BE LIABLE FOR ANY LOST REVENUE, PROFIT OR DATA, OR FOR DIRECT,
* INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL OR PUNITIVE DAMAGES, HOWEVER
* CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF THE USE OF
* OR INABILITY TO USE SOFTWARE, EVEN IF SUN HAS BEEN ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*
* This software is not designed or intended for use in on-line control of
* aircraft, air traffic, aircraft navigation or aircraft communications; or in
* the design, construction, operation or maintenance of any nuclear
* facility. Licensee represents and warrants that it will not use or
* redistribute the Software for such purposes.
*/
import java.io.*;
import java.util.*;
import java.awt.Dimension;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import javax.media.*;
import javax.media.control.*;
import javax.media.protocol.*;
import javax.media.datasink.*;
import javax.media.format.RGBFormat;
import javax.media.format.VideoFormat;
/**
* This program takes a list of JPEG image files and convert them into a
* QuickTime movie.
*/
public class JpegImagesToMovie implements ControllerListener, DataSinkListener {
static private Vector<String> getImageFilesPathsVector(
String imagesFolderPath) {
File imagesFolder = new File(imagesFolderPath);
String[] imageFilesArray = imagesFolder.list();
Vector<String> imageFilesPathsVector = new Vector<String>();
for (String imageFileName : imageFilesArray) {
if (!imageFileName.toLowerCase().endsWith("png"))
continue;
imageFilesPathsVector.add(imagesFolder.getAbsolutePath()
+ File.separator + imageFileName);
}
return imageFilesPathsVector;
}
public boolean doIt(int width, int height, int frameRate,
Vector<String> inFiles, MediaLocator outML) {
ImageDataSource ids = new ImageDataSource(width, height, frameRate,
inFiles);
Processor p;
try {
System.err
.println("- create processor for the image datasource ...");
p = Manager.createProcessor(ids);
} catch (Exception e) {
System.err
.println("Yikes! Cannot create a processor from the data source.");
return false;
}
p.addControllerListener(this);
// Put the Processor into configured state so we can set
// some processing options on the processor.
p.configure();
if (!waitForState(p, Processor.Configured)) {
System.err.println("Failed to configure the processor.");
return false;
}
// Set the output content descriptor to QuickTime.
p.setContentDescriptor(new ContentDescriptor(
FileTypeDescriptor.QUICKTIME));// FileTypeDescriptor.MSVIDEO
// Query for the processor for supported formats.
// Then set it on the processor.
TrackControl tcs[] = p.getTrackControls();
Format f[] = tcs[0].getSupportedFormats();
if (f == null || f.length <= 0) {
System.err.println("The mux does not support the input format: "
+ tcs[0].getFormat());
return false;
}
tcs[0].setFormat(f[0]);
System.err.println("Setting the track format to: " + f[0]);
// We are done with programming the processor. Let's just
// realize it.
p.realize();
if (!waitForState(p, Controller.Realized)) {
System.err.println("Failed to realize the processor.");
return false;
}
// Now, we'll need to create a DataSink.
DataSink dsink;
if ((dsink = createDataSink(p, outML)) == null) {
System.err
.println("Failed to create a DataSink for the given output MediaLocator: "
+ outML);
return false;
}
dsink.addDataSinkListener(this);
fileDone = false;
System.err.println("start processing...");
// OK, we can now start the actual transcoding.
try {
p.start();
dsink.start();
} catch (IOException e) {
System.err.println("IO error during processing");
return false;
}
// Wait for EndOfStream event.
waitForFileDone();
// Cleanup.
try {
dsink.close();
} catch (Exception e) {
}
p.removeControllerListener(this);
System.err.println("...done processing.");
return true;
}
/**
* Create the DataSink.
*/
DataSink createDataSink(Processor p, MediaLocator outML) {
DataSource ds;
if ((ds = p.getDataOutput()) == null) {
System.err
.println("Something is really wrong: the processor does not have an output DataSource");
return null;
}
DataSink dsink;
try {
System.err.println("- create DataSink for: " + outML);
dsink = Manager.createDataSink(ds, outML);
dsink.open();
} catch (Exception e) {
System.err.println("Cannot create the DataSink: " + e);
return null;
}
return dsink;
}
Object waitSync = new Object();
boolean stateTransitionOK = true;
/**
* Block until the processor has transitioned to the given state. Return
* false if the transition failed.
*/
boolean waitForState(Processor p, int state) {
synchronized (waitSync) {
try {
while (p.getState() < state && stateTransitionOK)
waitSync.wait();
} catch (Exception e) {
}
}
return stateTransitionOK;
}
/**
* Controller Listener.
*/
public void controllerUpdate(ControllerEvent evt) {
if (evt instanceof ConfigureCompleteEvent
|| evt instanceof RealizeCompleteEvent
|| evt instanceof PrefetchCompleteEvent) {
synchronized (waitSync) {
stateTransitionOK = true;
waitSync.notifyAll();
}
} else if (evt instanceof ResourceUnavailableEvent) {
synchronized (waitSync) {
stateTransitionOK = false;
waitSync.notifyAll();
}
} else if (evt instanceof EndOfMediaEvent) {
evt.getSourceController().stop();
evt.getSourceController().close();
}
}
Object waitFileSync = new Object();
boolean fileDone = false;
boolean fileSuccess = true;
/**
* Block until file writing is done.
*/
boolean waitForFileDone() {
synchronized (waitFileSync) {
try {
while (!fileDone)
waitFileSync.wait();
} catch (Exception e) {
}
}
return fileSuccess;
}
/**
* Event handler for the file writer.
*/
public void dataSinkUpdate(DataSinkEvent evt) {
if (evt instanceof EndOfStreamEvent) {
synchronized (waitFileSync) {
fileDone = true;
waitFileSync.notifyAll();
}
} else if (evt instanceof DataSinkErrorEvent) {
synchronized (waitFileSync) {
fileDone = true;
fileSuccess = false;
waitFileSync.notifyAll();
}
}
}
public static void main(String args[]) {
// changed this method a bit
if (args.length == 0)
prUsage();
// Parse the arguments.
int i = 0;
int width = -1, height = -1, frameRate = -1;
Vector<String> inputFiles = new Vector<String>();
String rootDir = null;
String outputURL = null;
while (i < args.length) {
if (args[i].equals("-w")) {
i++;
if (i >= args.length)
prUsage();
width = new Integer(args[i]).intValue();
} else if (args[i].equals("-h")) {
i++;
if (i >= args.length)
prUsage();
height = new Integer(args[i]).intValue();
} else if (args[i].equals("-f")) {
i++;
if (i >= args.length)
prUsage();
// new Integer(args[i]).intValue();
frameRate = Integer.parseInt(args[i]);
} else if (args[i].equals("-o")) {
i++;
if (i >= args.length)
prUsage();
outputURL = args[i];
} else if (args[i].equals("-i")) {
i++;
if (i >= args.length)
prUsage();
rootDir = args[i];
} else {
System.out.println(".");
prUsage();
}
i++;
}
if (rootDir == null) {
System.out
.println("Since no input (-i) forder provided, assuming this JAR is inside JPEGs folder.");
rootDir = (new File(".")).getAbsolutePath();
}
inputFiles = getImageFilesPathsVector(rootDir);
if (inputFiles.size() == 0)
prUsage();
if (outputURL == null) {
outputURL = (new File(rootDir)).getAbsolutePath() + File.separator
+ "pngs2movie.mov";
}
if (!outputURL.toLowerCase().startsWith("file:///")) {
outputURL = "file:///" + outputURL;
}
// Check for output file extension.
if (!outputURL.toLowerCase().endsWith(".mov")) {
prUsage();
outputURL += ".mov";
System.out
.println("outputURL should be ending with mov. Making this happen.\nNow outputURL is: "
+ outputURL);
}
if (width < 0 || height < 0) {
prUsage();
System.out.println("Trying to guess movie size from first image");
BufferedImage firstImageInFolder = getFirstImageInFolder(rootDir);
width = firstImageInFolder.getWidth();
height = firstImageInFolder.getHeight();
System.out.println("width = " + width);
System.out.println("height = " + height);
}
// Check the frame rate.
if (frameRate < 1)
frameRate = 30;
// Generate the output media locators.
MediaLocator oml;
if ((oml = createMediaLocator(outputURL)) == null) {
System.err.println("Cannot build media locator from: " + outputURL);
System.exit(0);
}
JpegImagesToMovie imageToMovie = new JpegImagesToMovie();
imageToMovie.doIt(width, height, frameRate, inputFiles, oml);
System.exit(0);
}
private static BufferedImage getFirstImageInFolder(String rootDir) {
File rootFile = new File(rootDir);
String[] list = (rootFile).list();
BufferedImage bufferedImage = null;
for (String filePath : list) {
if (!filePath.toLowerCase().endsWith(".png")
&& !filePath.toLowerCase().endsWith(".png")) {
continue;
}
try {
bufferedImage = ImageIO.read(new File(rootFile
.getAbsoluteFile() + File.separator + filePath));
break;
} catch (IOException e) {
e.printStackTrace();
}
}
return bufferedImage;
}
static void prUsage() {
System.err
.println("Usage: java JpegImagesToMovie [-w <width>] [-h <height>] [-f <frame rate>] [-o <output URL>] -i <input JPEG files dir Path>");
// System.exit(-1);
}
/**
* Create a media locator from the given string.
*/
@SuppressWarnings("unused")
public static MediaLocator createMediaLocator(String url) {
MediaLocator ml;
if (url.indexOf(":") > 0 && (ml = new MediaLocator(url)) != null)
return ml;
if (url.startsWith(File.separator)) {
if ((ml = new MediaLocator("file:" + url)) != null)
return ml;
} else {
String file = "file:" + System.getProperty("user.dir")
+ File.separator + url;
if ((ml = new MediaLocator(file)) != null)
return ml;
}
return null;
}
// /////////////////////////////////////////////
//
// Inner classes.
// /////////////////////////////////////////////
/**
* A DataSource to read from a list of JPEG image files and turn that into a
* stream of JMF buffers. The DataSource is not seekable or positionable.
*/
class ImageDataSource extends PullBufferDataSource {
ImageSourceStream streams[];
ImageDataSource(int width, int height, int frameRate,
Vector<String> images) {
streams = new ImageSourceStream[1];
streams[0] = new PngImageSourceStream(width, height, frameRate, images);
}
public void setLocator(MediaLocator source) {
}
public MediaLocator getLocator() {
return null;
}
/**
* Content type is of RAW since we are sending buffers of video frames
* without a container format.
*/
public String getContentType() {
return ContentDescriptor.RAW;
}
public void connect() {
}
public void disconnect() {
}
public void start() {
}
public void stop() {
}
/**
* Return the ImageSourceStreams.
*/
public PullBufferStream[] getStreams() {
return streams;
}
/**
* We could have derived the duration from the number of frames and
* frame rate. But for the purpose of this program, it's not necessary.
*/
public Time getDuration() {
return DURATION_UNKNOWN;
}
public Object[] getControls() {
return new Object[0];
}
public Object getControl(String type) {
return null;
}
}
/**
* The source stream to go along with ImageDataSource.
*/
class ImageSourceStream implements PullBufferStream {
Vector<String> images;
int width, height;
VideoFormat format;
int nextImage = 0; // index of the next image to be read.
boolean ended = false;
public ImageSourceStream(int width, int height, int frameRate,
Vector<String> images) {
this.width = width;
this.height = height;
this.images = images;
format = new VideoFormat(VideoFormat.JPEG, new Dimension(width,
height), Format.NOT_SPECIFIED, Format.byteArray,
(float) frameRate);
}
/**
* We should never need to block assuming data are read from files.
*/
public boolean willReadBlock() {
return false;
}
/**
* This is called from the Processor to read a frame worth of video
* data.
*/
public void read(Buffer buf) throws IOException {
// Check if we've finished all the frames.
if (nextImage >= images.size()) {
// We are done. Set EndOfMedia.
System.err.println("Done reading all images.");
buf.setEOM(true);
buf.setOffset(0);
buf.setLength(0);
ended = true;
return;
}
String imageFile = (String) images.elementAt(nextImage);
nextImage++;
System.err.println(" - reading image file: " + imageFile);
// Open a random access file for the next image.
RandomAccessFile raFile;
raFile = new RandomAccessFile(imageFile, "r");
byte data[] = null;
// Check the input buffer type & size.
if (buf.getData() instanceof byte[])
data = (byte[]) buf.getData();
// Check to see the given buffer is big enough for the frame.
if (data == null || data.length < raFile.length()) {
data = new byte[(int) raFile.length()];
buf.setData(data);
}
// Read the entire JPEG image from the file.
raFile.readFully(data, 0, (int) raFile.length());
System.err.println(" read " + raFile.length() + " bytes.");
buf.setOffset(0);
buf.setLength((int) raFile.length());
buf.setFormat(format);
buf.setFlags(buf.getFlags() | Buffer.FLAG_KEY_FRAME);
// Close the random access file.
raFile.close();
}
/**
* Return the format of each video frame. That will be JPEG.
*/
public Format getFormat() {
return format;
}
public ContentDescriptor getContentDescriptor() {
return new ContentDescriptor(ContentDescriptor.RAW);
}
public long getContentLength() {
return 0;
}
public boolean endOfStream() {
return ended;
}
public Object[] getControls() {
return new Object[0];
}
public Object getControl(String type) {
return null;
}
}
class PngImageSourceStream extends ImageSourceStream {
public PngImageSourceStream(int width, int height, int frameRate,
Vector<String> images) {
super(width, height, frameRate, images);
// configure the new format as RGB format
format = new RGBFormat(new Dimension(width, height),
Format.NOT_SPECIFIED, Format.byteArray, frameRate,
24, // 24 bits per pixel
1, 2, 3); // red, green and blue masks when data are in the form of byte[]
}
public void read(Buffer buf) throws IOException {
// Check if we've finished all the frames.
if (nextImage >= images.size()) {
// We are done. Set EndOfMedia.
System.err.println("Done reading all images.");
buf.setEOM(true);
buf.setOffset(0);
buf.setLength(0);
ended = true;
return;
}
String imageFile = (String) images.elementAt(nextImage);
nextImage++;
System.err.println(" - reading image file: " + imageFile);
// read the PNG image
BufferedImage image = ImageIO.read(new File(imageFile));
boolean hasAlpha = image.getColorModel().hasAlpha();
Dimension size = format.getSize();
// convert 32-bit RGBA to 24-bit RGB
byte[] imageData = convertTo24Bit(hasAlpha, image.getRaster().getPixels(0, 0, size.width, size.height, (int[]) null));
buf.setData(imageData);
System.err.println(" read " + imageData.length + " bytes.");
buf.setOffset(0);
buf.setLength(imageData.length);
buf.setFormat(format);
buf.setFlags(buf.getFlags() | Buffer.FLAG_KEY_FRAME);
}
private void convertIntByteToByte(int[] src, int srcIndex, byte[] out, int outIndex) {
// Note: the int[] returned by bufferedImage.getRaster().getPixels()
// is an int[]
// where each int is the value for one color i.e. the first 4 ints
// contain the RGBA values for the first pixel
int r = src[srcIndex];
int g = src[srcIndex + 1];
int b = src[srcIndex + 2];
out[outIndex] = (byte) (r & 0xFF);
out[outIndex + 1] = (byte) (g & 0xFF);
out[outIndex + 2] = (byte) (b & 0xFF);
}
private byte[] convertTo24Bit(boolean hasAlpha, int[] input) {
int dataLength = input.length;
int newSize = (hasAlpha ? dataLength * 3 / 4 : dataLength);
byte[] convertedData = new byte[newSize];
// for every 4 int values of the original array (RGBA) write 3
// bytes (RGB) to the output array
// if there is no alpha (i.e. RGB image) then just convert int to byte
for (int i = 0, j = 0; i < dataLength; i += 3, j += 3) {
convertIntByteToByte(input, i, convertedData, j);
if (hasAlpha) {
i++; // skip an extra byte if the original image has an
// extra int for transparency
}
}
return convertedData;
}
}
}
And I make the video by doing the following:
public void makeVideo (String movFile) throws MalformedURLException {
JpegImagesToMovie imageToMovie = new JpegImagesToMovie();
Vector<String> imgList = new Vector <String>();
File f = new File(JavCapture.tmpLocation + "\\tmp\\");
File[] fileList = f.listFiles();
for (int i = 0; i < fileList.length; i++) {
imgList.add(fileList[i].getAbsolutePath());
}
MediaLocator ml;
if ((ml = imageToMovie.createMediaLocator(movFile)) == null) {
System.exit(0);
}
setWidth();
setHeight();
imageToMovie.doIt(width, height, (1000/125), imgList, ml);
}

Capture only one thumbnail image from a video

I am working on generating thumbnail images from a video. I am able to do it, but I need only one thumbnail image from a video, and what I get is several images taken at different times in the video. I have used the following code to generate the thumbnails. Please suggest what I should modify in the code below to get only one thumbnail from the middle of the video. The code I used is as follows (I have used Xuggler):
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.MediaListenerAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IVideoPictureEvent;
import com.xuggle.xuggler.Global;
public class Main {
public static final double SECONDS_BETWEEN_FRAMES = 10;
private static final String inputFilename = "D:\\k\\Knock On Wood Lesson.flv";
private static final String outputFilePrefix = "D:\\pix\\";
// The video stream index, used to ensure we display frames from one and
// only one video stream from the media container.
private static int mVideoStreamIndex = -1;
// Time of last frame write
private static long mLastPtsWrite = Global.NO_PTS;
public static final long MICRO_SECONDS_BETWEEN_FRAMES =
(long) (Global.DEFAULT_PTS_PER_SECOND * SECONDS_BETWEEN_FRAMES);
public static void main(String[] args) {
IMediaReader mediaReader = ToolFactory.makeReader(inputFilename);
// stipulate that we want BufferedImages created in BGR 24bit color space
mediaReader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);
mediaReader.addListener(new ImageSnapListener());
// read out the contents of the media file and
// dispatch events to the attached listener
while (mediaReader.readPacket() == null);
}
private static class ImageSnapListener extends MediaListenerAdapter {
public void onVideoPicture(IVideoPictureEvent event) {
if (event.getStreamIndex() != mVideoStreamIndex) {
// if the selected video stream id is not yet set, go ahead an
// select this lucky video stream
if (mVideoStreamIndex == -1) {
mVideoStreamIndex = event.getStreamIndex();
} // no need to show frames from this video stream
else {
return;
}
}
// if uninitialized, back date mLastPtsWrite to get the very first frame
if (mLastPtsWrite == Global.NO_PTS) {
mLastPtsWrite = event.getTimeStamp() - MICRO_SECONDS_BETWEEN_FRAMES;
}
// if it's time to write the next frame
if (event.getTimeStamp() - mLastPtsWrite
>= MICRO_SECONDS_BETWEEN_FRAMES) {
String outputFilename = dumpImageToFile(event.getImage());
// indicate file written
double seconds = ((double) event.getTimeStamp())
/ Global.DEFAULT_PTS_PER_SECOND;
System.out.printf("at elapsed time of %6.3f seconds wrote: %s\n",
seconds, outputFilename);
// update last write time
mLastPtsWrite += MICRO_SECONDS_BETWEEN_FRAMES;
}
}
private String dumpImageToFile(BufferedImage image) {
try {
String outputFilename = outputFilePrefix
+ System.currentTimeMillis() + ".png";
ImageIO.write(image, "png", new File(outputFilename));
return outputFilename;
} catch (IOException e) {
e.printStackTrace();
return null;
}
}
}
}
This is how you can do it:
public class ThumbsGenerator {
private static void processFrame(IVideoPicture picture, BufferedImage image) {
try {
File file = new File("C:\\snapshot\\thumbnailpic.png"); // name of the output picture
ImageIO.write(image, "png", file);
} catch (Exception e) {
e.printStackTrace();
}
}
@SuppressWarnings("deprecation")
public static void main(String[] args) throws NumberFormatException,IOException {
String filename = "your_video.mp4";
if (!IVideoResampler.isSupported(IVideoResampler.Feature.FEATURE_COLORSPACECONVERSION))
throw new RuntimeException("you must install the GPL version of Xuggler (with IVideoResampler support) for this demo to work");
IContainer container = IContainer.make();
if (container.open(filename, IContainer.Type.READ, null) < 0)
throw new IllegalArgumentException("could not open file: "
+ filename);
String seconds=container.getDuration()/(1000000*2)+""; // time of thumbnail
int numStreams = container.getNumStreams();
// and iterate through the streams to find the first video stream
int videoStreamId = -1;
IStreamCoder videoCoder = null;
for (int i = 0; i < numStreams; i++) {
// find the stream object
IStream stream = container.getStream(i);
// get the pre-configured decoder that can decode this stream;
IStreamCoder coder = stream.getStreamCoder();
if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
videoStreamId = i;
videoCoder = coder;
break;
}
}
if (videoStreamId == -1)
throw new RuntimeException(
"could not find video stream in container: " + filename);
if (videoCoder.open() < 0)
throw new RuntimeException(
"could not open video decoder for container: " + filename);
IVideoResampler resampler = null;
if (videoCoder.getPixelType() != IPixelFormat.Type.BGR24) {
resampler = IVideoResampler.make(videoCoder.getWidth(), videoCoder
.getHeight(), IPixelFormat.Type.BGR24, videoCoder
.getWidth(), videoCoder.getHeight(), videoCoder
.getPixelType());
if (resampler == null)
throw new RuntimeException(
"could not create color space resampler for: "
+ filename);
}
IPacket packet = IPacket.make();
IRational timeBase = container.getStream(videoStreamId).getTimeBase();
System.out.println("Timebase " + timeBase.toString());
long timeStampOffset = (timeBase.getDenominator() / timeBase.getNumerator())
* Integer.parseInt(seconds);
System.out.println("TimeStampOffset " + timeStampOffset);
long target = container.getStartTime() + timeStampOffset;
container.seekKeyFrame(videoStreamId, target, 0);
boolean isFinished = false;
while(container.readNextPacket(packet) >= 0 && !isFinished ) {
if (packet.getStreamIndex() == videoStreamId) {
IVideoPicture picture = IVideoPicture.make(videoCoder
.getPixelType(), videoCoder.getWidth(), videoCoder
.getHeight());
int offset = 0;
while (offset < packet.getSize()) {
int bytesDecoded = videoCoder.decodeVideo(picture, packet,
offset);
if (bytesDecoded < 0) {
System.err.println("WARNING!!! got no data decoding " +
"video in one packet");
}
offset += bytesDecoded;
// check whether the decoder has given us a complete picture
if (picture.isComplete()) {
IVideoPicture newPic = picture;
if (resampler != null) {
newPic = IVideoPicture.make(resampler
.getOutputPixelFormat(), picture.getWidth(),
picture.getHeight());
if (resampler.resample(newPic, picture) < 0)
throw new RuntimeException(
"could not resample video from: "
+ filename);
}
if (newPic.getPixelType() != IPixelFormat.Type.BGR24)
throw new RuntimeException(
"could not decode video as BGR 24 bit data in: "
+ filename);
BufferedImage javaImage = Utils.videoPictureToImage(newPic);
processFrame(newPic, javaImage);
isFinished = true;
}
}
}
}
if (videoCoder != null) {
videoCoder.close();
videoCoder = null;
}
if (container != null) {
container.close();
container = null;
}
} }
I know this is an old question but I found the same piece of tutorial code while playing with Xuggler today. The reason you are getting multiple thumbnails is due to the following line:
public static final double SECONDS_BETWEEN_FRAMES = 10;
This variable specifies the number of seconds between calls to dumpImageToFile. So a frame thumbnail will be written at 0.00 seconds, at 10.00 seconds, at 20.00 seconds, and so on:
if (event.getTimeStamp() - mLastPtsWrite >= MICRO_SECONDS_BETWEEN_FRAMES)
To get a thumbnail from the middle of the video, you can calculate the duration of the video using more of Xuggler's API, which I found in a tutorial at JavaCodeGeeks. Then change your code in the ImageSnapListener to write only a single frame once the IVideoPictureEvent timestamp exceeds the calculated midpoint.
I hope that helps anyone who stumbles across this question.
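For what it's worth, here is a rough sketch of that idea (it reuses inputFilename and outputFilePrefix from the question and assumes IContainer.getDuration() returns microseconds, consistent with the Global.DEFAULT_PTS_PER_SECOND timestamps used above):
// Read the duration once, then grab the first frame at or past the midpoint.
IContainer container = IContainer.make();
if (container.open(inputFilename, IContainer.Type.READ, null) < 0)
    throw new IllegalArgumentException("could not open file: " + inputFilename);
final long midpointUs = container.getDuration() / 2;
container.close();
IMediaReader mediaReader = ToolFactory.makeReader(inputFilename);
mediaReader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);
mediaReader.addListener(new MediaListenerAdapter() {
    private boolean written = false;
    public void onVideoPicture(IVideoPictureEvent event) {
        // write exactly one image: the first frame at or after the midpoint
        if (!written && event.getTimeStamp() >= midpointUs) {
            try {
                ImageIO.write(event.getImage(), "png",
                        new File(outputFilePrefix + "middle.png"));
                written = true;
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
});
// For brevity this reads to the end of the file; you could also stop the
// loop as soon as the single thumbnail has been written.
while (mediaReader.readPacket() == null);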
