Skip frames in video playback with MediaExtractor and MediaCodec - java

I'm using MediaExtractor/MediaCodec to decode a video and render it to a TextureView. As a template, I've used the code from: https://github.com/vecio/MediaCodecDemo/blob/master/src/io/vec/demo/mediacodec/DecodeActivity.java
I'd like to be able to play the video back at 2x speed. Luckily, the media encoding/decoding is fast enough that I can accomplish this by letting MediaCodec decode every frame and then only rendering every other frame to the screen. However this doesn't feel like a great solution, particularly if you want to increase the playback speed by an arbitrary factor. For example, at 10x speed the codec is not able to decode frames fast enough to play back every 10th frame at 30 fps.
So instead I'd like to control playback by calling MediaExtractor.advance() multiple times to skip frames that don't need to be decoded. For example:
...
mDecoder.queueInputBuffer(inIndex, 0, sampleSize, mExtractor.getSampleTime(), 0);
for (int i = 0; i < playbackSpeedIncrease; i++) {
    mExtractor.advance();  // skip the next samples instead of decoding them
}
...
With this code, in theory the extractor should only extract every nth frame, where n is defined by the variable 'playbackSpeedIncrease'. For example, if n = 5, this should advance past frames 1-4 and only extract frame 5.
However, this does not work in practice. When I run this code, the image rendered to the screen is distorted.
Does anyone know why this is? Any suggestions for better ways to playback videos at an arbitrary speed increase?

You can't generally do this with AVC video.
The encoded video has "key" (or "sync" or "I") frames that hold full images, but the frames between key frames hold "diffs" from the previous frames. You can find some articles at wikipedia about video encoding methods, e.g. this one. You're getting nasty video because you skipped a key frame and now the diffs are being computed against the wrong image.
If you've ever seen video fast-forward smoothly but fast-reverse clunkily, e.g. on a TiVo, this is why: the video decoder plays forward quickly, but in reverse it just plays the I-frames, holding them on screen long enough to get the desired rate. At "faster" forward/reverse it evens out because the device is just playing I-frames. You can do something similar by watching for the SAMPLE_FLAG_SYNC flag on the frames you get from MediaExtractor.
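For illustration, here is a rough sketch of that key-frame-only approach, reusing the question's mExtractor/mDecoder names and assuming the surrounding decode loop and buffer setup from the linked DecodeActivity example:

// Sketch: feed only sync (key) frames to the decoder for "I-frame only" fast playback.
// Assumes mExtractor, mDecoder and inputBuffers are set up as in the linked DecodeActivity.
int inIndex = mDecoder.dequeueInputBuffer(10000);
if (inIndex >= 0) {
    ByteBuffer buffer = inputBuffers[inIndex];           // from mDecoder.getInputBuffers()
    int sampleSize = mExtractor.readSampleData(buffer, 0);
    if (sampleSize < 0) {
        // No more samples: signal end of stream.
        mDecoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
    } else {
        mDecoder.queueInputBuffer(inIndex, 0, sampleSize, mExtractor.getSampleTime(), 0);
        // Advance until the extractor sits on the next key frame (or the stream ends),
        // so the frames in between are never decoded.
        while (mExtractor.advance()
                && (mExtractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) == 0) {
            // skip non-sync samples
        }
    }
}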
In general you're limited to either playing the video as fast as the device can decode it, or playing just the key frames. (If you know enough about the layout of a specific video in a specific encoding you may be able to do better, e.g. play the I and P but not the B, but I'm not sure that's even possible in AVC.)
The frequency of I frames is determined by the video encoder. It tends to be one or two per second, but if you get videos from different sources then you can expect the GOP size (group-of-pictures, i.e. how many frames there are between I-frames) to vary.

Related

How do I analyze a golf shot using the CameraX API in Android

I am creating an Android app which will be used to analyze golf shots. I am using the CameraX API to get frames into a SurfaceView and the analyze callback for further processing.
Initially I am using Google ML Kit object detection to detect the ball, but the video lags so much that most of the frames are skipped.
How can I use the frames in real time when there may only be 2 or 3 frames containing the ball, because the shot is so fast?
I have tried gathering the frames as bitmaps into an ArrayList and then using that list for further processing, but the number of frames varies for a 5-second video.
Should I record the video first for a constant time of 5 seconds and then extract frames from it?
Note: the above implementation is just a starting point; later I need to detect the shot in real time, which I tried by detecting the ball plus detecting a specific sound, but in vain.
Furthermore, I need to calculate the frame rate as well, which will later be used to calculate the speed of the ball.
Please suggest any better implementation.
Thanks

Screen recorder, issue when converting

I'm using BufferedImage to capture screen images and then the JpegImagesToMovie class, which I found online, to convert them to .mov.
When I run the output file, it plays at super speed rather than the original speed I recorded at. Can someone tell me what I need to do in order to get a real-time-speed video?
You probably need to put a Thread.sleep(1000 / fps) between frame captures (Thread.sleep takes milliseconds, so 1 / fps would round down to zero). Try and see if that works.
You are possibly using Oracle's example to convert images to .mov. The problem is that the size of the generated file is very, very large. You need to move to something more efficient and with a bit more abstraction. How about using Xuggler to build the screen recorder?
Now, to the question of making the movie slower: you need to reduce the frame rate. If you need 60 fps, one second has to be shared by 60 frames, so for n fps each frame should be shown for 1/n seconds, i.e. your capture thread should sleep roughly 1000/n milliseconds between frames.
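As a rough sketch (assuming the screen is grabbed with java.awt.Robot, as in typical screen-recorder examples; the 25 fps target and 5-second duration are arbitrary), pacing the capture loop looks something like this:

import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

public class CaptureLoop {
    public static void main(String[] args) throws AWTException, InterruptedException {
        int fps = 25;                        // target frame rate
        long frameMillis = 1000L / fps;      // time budget per frame
        Robot robot = new Robot();
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        List<BufferedImage> frames = new ArrayList<>();

        for (int i = 0; i < fps * 5; i++) {  // capture roughly 5 seconds
            long start = System.currentTimeMillis();
            frames.add(robot.createScreenCapture(screen));
            long elapsed = System.currentTimeMillis() - start;
            if (elapsed < frameMillis) {
                Thread.sleep(frameMillis - elapsed);  // pace the capture to the target fps
            }
        }
        // Pass 'frames' (and the same fps value) to the encoder, e.g. JpegImagesToMovie.
    }
}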

video-steganography in java

I need to create steganographic videos (videos with data hidden in them) for my project.
I need to carry this out by extracting all the frames from a video, hiding data in selected frames by replacing bits in the LSB of the pixel color values, and then encoding all the frames to create a new video (note that lossless formats are required here, otherwise I might end up losing the hidden data).
My research led me to use Xuggler for manipulating videos, the PNG format to save the extracted images since it is lossless (handling them as BufferedImage objects), and AVI video files.
As of now I am able to extract all frames from a video and encode my hidden data in the LSBs.
But I am having problems creating the new AVI video file using Xuggler. When I extract the frames from the new video they lose the hidden data, and I don't understand how to get this right and keep the data intact. This could be due to some lossy compression technique being used to create the new video. The size of the new video does not matter to me. I also can't find the correct codec_id to create the new video. I am extensively using the Xuggler tutorial available on its wiki.
decode and capture frames http://build.xuggle.com/view/Stable/job/xuggler_jdk5_stable/ws/workingcopy/src/com/xuggle/mediatool/demos/DecodeAndCaptureFrames.java
I can post my code as required...
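For reference, a minimal sketch of the LSB replacement the question describes, operating on a single BufferedImage frame (method and variable names are made up for illustration):

import java.awt.image.BufferedImage;

public class LsbEmbed {
    // Hide one bit in the least significant bit of the blue channel of pixel (x, y).
    static void embedBit(BufferedImage frame, int x, int y, int bit) {
        int rgb = frame.getRGB(x, y);
        int blue = rgb & 0xFF;
        blue = (blue & 0xFE) | (bit & 0x01);            // replace the LSB
        frame.setRGB(x, y, (rgb & 0xFFFFFF00) | blue);
    }

    // Read the hidden bit back out of the same pixel.
    static int extractBit(BufferedImage frame, int x, int y) {
        return frame.getRGB(x, y) & 0x01;
    }
}

Any lossy re-encoding of the frame is liable to change exactly these low-order bits, which is why the answer below rules this technique out for MPEG-style codecs.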
The problem is in the algorithm you are using. MPEG and other common video compression techniques are lossy, so you will lose data when you convert the frames back into a video stream. With lossy video codecs you cannot use LSB techniques for steganography.
Instead, what you can do is change the motion vectors of the video in some way to hide the steganographic data. The problem with this is that Xuggler, being a higher-level API, might not give you a way to find or alter the motion vectors of the P/B frames. FFmpeg, which Xuggler uses, does have an option to visualize motion vectors, so your best bet for a motion-vector algorithm is to alter the source code of FFmpeg, since it is an open-source project. Do reply back if you find a better way to work with motion vectors.
Well, there is a simpler video steganography method: you can refer to "Real steganography with TrueCrypt".
But if you really want to go with MPEG video compression, you can refer to the wonderful paper "Steganography in Compressed Video Stream"; the problem still remains extracting and manipulating the motion vectors.

How do I control the speed of an animated GIF?

I would like to control the speed of an animated GIF in a Java applet. Is there a way to do this? If not, is there a way to access the data of an animated GIF so the applet can draw the animation image by image on its own?
I think that the frame rate is embedded into the GIF. You could somehow extract the images from the GIF, but that's harder than starting with the individual images and animating them in JS, which is harder than recreating the GIF with your preferred frame rate.
If you're going to use the GIF only once and the frame rate isn't going to change, just recreate the GIF. If you need to change the speed based on inputs from your applet, you could use the approach here. It alternates between two GIFs, but there's nothing stopping you from loading in PNGs and cycling through an array of those.
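A minimal sketch of that idea, assuming the frames have already been loaded as BufferedImages (e.g. via ImageIO.read) and using a Swing Timer so the delay can be changed at runtime:

import javax.swing.*;
import java.awt.*;
import java.awt.image.BufferedImage;

// Cycles through an array of frames at an adjustable delay.
class FrameAnimator extends JPanel {
    private final BufferedImage[] frames;   // e.g. individual PNGs extracted from the GIF
    private final Timer timer;
    private int current = 0;

    FrameAnimator(BufferedImage[] frames, int delayMillis) {
        this.frames = frames;
        this.timer = new Timer(delayMillis, e -> {
            current = (current + 1) % frames.length;
            repaint();
        });
        timer.start();
    }

    void setDelay(int delayMillis) {        // change the playback speed at runtime
        timer.setDelay(delayMillis);
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(frames[current], 0, 0, null);
    }
}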
The animated GIF format consists of data for each frame along with a delay value (how long to show that frame). The delay is separate for each frame, stored as two bytes, and measured in hundredths of a second.
Netscape (back when it was the web) couldn't show frames faster than 10 per second, so lots of tools just said screw it and set the delay for all frames to 0. Lots of old GIFs and old tools have kept these screwed-up frame delays around.
With faster computers and browsers, this was worked around by checking whether any frame had a delay <= 50 ms (20+ fps); if so, the delay was increased to 100 ms (10 fps).
In principle, the best solution would be to fix the GIF you're using so it has accurate frame delays. If that isn't viable, use that same old workaround: break the frames out of the animated GIF and do the animation yourself, defaulting to a 100 ms delay whenever the specified delay is <= 50 ms. This will give you the same behavior you see in most web browsers.
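A sketch of reading the per-frame delays with javax.imageio and applying that workaround (the file name is just a placeholder):

import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.metadata.IIOMetadata;
import javax.imageio.stream.ImageInputStream;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import java.io.File;

public class GifDelays {
    public static void main(String[] args) throws Exception {
        ImageInputStream in = ImageIO.createImageInputStream(new File("animation.gif"));
        ImageReader reader = ImageIO.getImageReaders(in).next();
        reader.setInput(in);

        int frames = reader.getNumImages(true);
        for (int i = 0; i < frames; i++) {
            IIOMetadata meta = reader.getImageMetadata(i);
            Node root = meta.getAsTree("javax_imageio_gif_image_1.0");
            NodeList children = root.getChildNodes();
            for (int j = 0; j < children.getLength(); j++) {
                Node node = children.item(j);
                if ("GraphicControlExtension".equals(node.getNodeName())) {
                    // delayTime is stored in hundredths of a second
                    int delay = Integer.parseInt(
                            node.getAttributes().getNamedItem("delayTime").getNodeValue());
                    if (delay <= 5) delay = 10;   // same workaround browsers use (<=50 ms -> 100 ms)
                    System.out.println("Frame " + i + ": " + (delay * 10) + " ms");
                }
            }
        }
        reader.dispose();
        in.close();
    }
}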
I read about this a while ago; I think most of the details are mentioned on Wikipedia (including the animated GIF format and the per-frame delays). If you really want some solid references, I can dig them up for you.

Produce a simple Video - uncompressed, frame by frame

I need an algorithm to write frames (pictures) into a file that can be read by video-cutting/producing software.
So I have frames, and I want to feed them into a function/method.
Let's do it in Java.
How can I do this?
Is there a simple way I can write video files without using any system codecs?
I just need an uncompressed video with a constant frame rate (25 fps or 50 fps)
that will take my true-color pictures (2D arrays of colors), so that I can work with that video in my video program.
I never found a file format that fits.
Can you help me?
Greetings from Austria, and thanks. Flo.
Depending on the program you want to use to further process your movie, you can also simply create PNGs (or TGAs or BMPs) for the individual frames. VirtualDub, for example, can use an image sequence as the frames of a movie.
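For example, a minimal sketch of dumping frames as a numbered PNG sequence (the file-name pattern and the orientation of the color array are assumptions):

import javax.imageio.ImageIO;
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class FrameWriter {
    // Convert one of the question's 2D color arrays into a BufferedImage.
    static BufferedImage toImage(Color[][] pixels) {
        BufferedImage img = new BufferedImage(pixels.length, pixels[0].length, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < pixels.length; x++)
            for (int y = 0; y < pixels[0].length; y++)
                img.setRGB(x, y, pixels[x][y].getRGB());
        return img;
    }

    // Write each frame as frame_00000.png, frame_00001.png, ...
    // Tools like VirtualDub can open such a sequence as a 25/50 fps video.
    static void writeFrames(BufferedImage[] frames, File dir) throws IOException {
        for (int i = 0; i < frames.length; i++) {
            ImageIO.write(frames[i], "png", new File(dir, String.format("frame_%05d.png", i)));
        }
    }
}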
The AVI container format can contain streams of uncompressed video, of which there are many types to choose from. Have a look at http://fourcc.org/ for the RGB and YUV formats, and at http://www.alexander-noe.com/video/documentation/avi.pdf for details on the AVI file format.
