Is there an FFmpeg command where, if we pass a video file, it produces a keyframe at every scene change? A keyframe, to my understanding, is a series of files (image or video) for a video, which can be used for playing on hover of the video. Kindly let me know if we can do this?
Is there an FFmpeg command where, if we pass a video file, it produces a keyframe at every scene change?
Well, it depends on the codec and on what you are calling a scene. x264 has the scenecut parameter for adjusting scene-change sensitivity. However, what x264 calls a scene may not be the same thing you call a scene.
A Michael Bay movie, for example, has a hard cut every 4 or 5 seconds. x264 may consider every such cut a scene. Anything cleverer than a cut or a fade, ffmpeg will not handle.
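That said, if what you want is one image per detected cut, ffmpeg's select filter exposes a per-frame scene-change score you can threshold; a sketch (the 0.4 threshold is just a starting point you would tune per source):

ffmpeg -i input.mp4 -vf "select='gt(scene,0.4)'" -vsync vfr scene_%04d.jpg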
A keyframe, to my understanding, is a series of files (image or video) for a video, which can be used for playing on hover of the video. Kindly let me know if we can do this?
No, not at all.
A keyframe is a single frame, not a series of frames or files, and it has nothing to do with "hover". A keyframe is just an independent frame, meaning you can decode it on its own without first having to decode any frames it may reference.
Video compression does not simply encode every frame in full. It encodes a frame, then for the next frame encodes only the parts that changed. This is called a "predicted frame", and it is not decodable without first decoding the frame it references. A keyframe is just a frame that does not reference any other frames.
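And if the goal is merely to control where keyframes land, ffmpeg can force them explicitly; a sketch that forces one at least every 2 seconds (the interval here is arbitrary):

ffmpeg -i input.mp4 -c:v libx264 -force_key_frames "expr:gte(t,n_forced*2)" output.mp4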
Some players do optimize hover previews by showing keyframes, because keyframes are faster to decode than predicted frames. But this is 100% a player optimization, and not all players do it.
To me, this sounds like an XY problem.
Related
I'm using MediaExtractor/MediaCodec to decode a video and render it to a TextureView. As a template, I've used the code from: https://github.com/vecio/MediaCodecDemo/blob/master/src/io/vec/demo/mediacodec/DecodeActivity.java
I'd like to be able to play back the video at 2x speed. Luckily, the media encoding/decoding is fast enough that I can accomplish this by letting MediaCodec decode every frame and then only rendering every other frame to the screen. However, this doesn't feel like a great solution, particularly if you want to increase playback speed by an arbitrary factor. For example, at 10x speed the codec is not able to decode frames fast enough to play back every 10th frame at 30 fps.
So instead I'd like to control playback by calling MediaExtractor.advance() multiple times to skip frames that don't need to be decoded. For example:
...
mDecoder.queueInputBuffer(inIndex, 0, sampleSize, mExtractor.getSampleTime(), 0);
// Advance the extractor n times so only every nth frame gets decoded.
for (int i = 0; i < playbackSpeedIncrease; i++) {
    mExtractor.advance();
}
...
With this code, in theory the extractor should only extract every nth frame, where n is defined by the variable 'playbackSpeedIncrease'. For example, if n = 5, this should advance past frames 1-4 and only extract frame 5.
However, this does not work in practice. When I run this code, the image rendered to the screen is distorted.
Does anyone know why this is? Any suggestions for better ways to playback videos at an arbitrary speed increase?
You can't generally do this with AVC video.
The encoded video has "key" (or "sync" or "I") frames that hold full images, but the frames between key frames hold "diffs" from the previous frames. You can find some articles on Wikipedia about video encoding methods, e.g. this one. You're getting nasty video because you skipped a key frame and now the diffs are being computed against the wrong image.
If you've ever seen video fast-forward smoothly but fast-reverse clunkily, e.g. on a TiVo, this is why: the video decoder plays forward quickly, but in reverse it just plays the I-frames, holding them on screen long enough to get the desired rate. At "faster" forward/reverse it evens out because the device is just playing I-frames. You can do something similar by watching for the SAMPLE_FLAG_SYNC flag on the frames you get from MediaExtractor.
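For example, a minimal sketch of that check, assuming mExtractor is the android.media.MediaExtractor from the question:

// Skip forward until the extractor is positioned on a sync (key) frame,
// which can be decoded without any reference frames.
while (mExtractor.advance()) {
    if ((mExtractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0) {
        break;
    }
}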
In general you're limited to either playing the video as fast as the device can decode it, or playing just the key frames. (If you know enough about the layout of a specific video in a specific encoding you may be able to do better, e.g. play the I and P but not the B, but I'm not sure that's even possible in AVC.)
The frequency of I frames is determined by the video encoder. It tends to be one or two per second, but if you get videos from different sources then you can expect the GOP size (group-of-pictures, i.e. how many frames there are between I-frames) to vary.
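If you do go the key-frames-only route, MediaExtractor can also jump straight to a sync frame instead of advancing sample by sample; a sketch, where stepUs is whatever playback interval you are targeting (a hypothetical variable, not from the question's code):

// Seek to the next sync (key) frame at or after the requested time.
long targetUs = mExtractor.getSampleTime() + stepUs;
mExtractor.seekTo(targetUs, MediaExtractor.SEEK_TO_NEXT_SYNC);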
I have to take in a video file (.mpg, .avi, .mov, etc.) using JMF, and when the user stops (or pauses) the video I need to grab that frame. If I can get it into a frame buffer then I'm golden (or even save the frame as an image file like JPG), since once I have the frame I just need to get the RGB values from its pixels (I've already written a method for that).
My issue here is that I have no experience with JMF, but I have a source file that opens a window and lets me browse for a video file, which only seems to work half the time.
I gather this is a bit of a tall order, as I'm pretty much in the dark on how to do this, and everything I've tried looking up has not been of any real help. If someone knows a link with some example code, that would be wonderful.
Thanks.
I'm developing a Google App Engine application. One of its features is splitting an animated .gif image into separate frames. I searched a lot for a way to do it and finally found a solution. Unfortunately the solution is based on ImageReader, and I can't use that on the server, because:
javax.imageio.ImageReader is not supported by Google App Engine's Java runtime environment
Are there any other ways to decode GIF-image without this class?
First, something about frames themselves. There are two possible meanings of "splitting an animated .gif image into separate frames". 1) Literally: a frame is a frame in the sense of the animated GIF format. The problem is that the frames which constitute an animated GIF are related. Each frame's disposal method dictates what to do with the previous frame when drawing the current one; you can override it, fill with the background color before drawing the new frame, or do whatever you think appropriate. And if you think that is complicated, what about the transparency of frames, or the logical position at which to draw each frame?
If we go along this road, there is no need for a dedicated ImageReader: just read the relevant parts of the image, copy each frame's data, and save it along with a header and color palette. The consequence is that the resulting image might look weird and meaningless. Look at the example below:
The first frame
The second frame
And the original
You can see the second frame doesn't look so good. The truth is, the second frame is a transparent one which builds on top of the first frame (this animated GIF contains only 2 frames). You are expected to see through the second frame, and together they make the animation.
Now let's look at the second meaning of splitting an animated .gif image into separate frames. 2) In this case, a frame is actually a composite built on top of the previous frames, and it is what we actually see when viewing an animated GIF. To achieve this, we have to take into account the history of the frame loop, the logical position of each frame, and the transparency of the frames themselves.
Let's see what we get now:
The first frame
The second frame
Now the first frame is the same as in the first situation, but the second frame is constructed on top of the first one and it's not transparent anymore.
In the second case, we do have to decode and re-encode the frames to achieve the desired result. Besides looking nicer, another good thing about this is that you can save the resulting images in any format the encoder supports.
The examples in this post were generated by the GIF-related part of iCafe.
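For illustration only, here is a rough Java sketch of that compositing step, using bare int[] ARGB buffers so nothing depends on javax.imageio. The GifFrame type is hypothetical (a real decoder such as iCafe supplies the pixel data and offsets), and disposal-method handling is omitted for brevity:

class GifFrame {
    int left, top, width, height; // logical position and size within the canvas
    int[] pixels;                 // ARGB pixels for this frame only
}

// Draw one frame onto a copy of the running canvas, honoring transparency.
static int[] composite(int[] canvas, int canvasWidth, GifFrame f) {
    int[] out = canvas.clone(); // keep the previous composite as history
    for (int y = 0; y < f.height; y++) {
        for (int x = 0; x < f.width; x++) {
            int argb = f.pixels[y * f.width + x];
            if ((argb >>> 24) != 0) { // skip fully transparent pixels
                out[(f.top + y) * canvasWidth + (f.left + x)] = argb;
            }
        }
    }
    return out;
}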
I would like to control the speed of an animated GIF in a Java applet. Is there a way to do this? If not, is there a way to access the data of an animated GIF so the applet can draw the animation image by image on its own?
I think the frame rate is embedded in the GIF. You could somehow extract the images from the GIF, but that's harder than starting with the individual images and animating them in JS, which is harder than recreating the GIF with your preferred frame rate.
If you're going to use the GIF only once and the frame rate isn't going to change, just recreate the GIF. If you need to change the speed based on inputs from your applet, you could use the approach here. It alternates between two GIFs, but there's nothing stopping you from loading in PNGs and cycling through an Array of those.
The animated GIF format consists of data for each frame along with a delay value (how long to show that frame). The delay is separate for each frame, stored as two bytes, and measured in hundredths of a second.
Netscape (back when it was the web) couldn't show frames faster than 10 per second, so lots of tools just said screw it and set the delay for every frame to 0. Lots of old GIFs and old tools have kept these screwed-up frame delays around.
With faster computers and browsers, this was worked around by checking whether any frame had a delay <= 50 ms (20+ fps); if so, the delay was increased to 100 ms (10 fps).
In principle, the best solution is to fix the GIF you're using so it has accurate frame delays. If that isn't viable, use the same old workaround: break the frames out of the animated GIF and do the animation yourself, defaulting to a 100 ms delay whenever the specified delay is <= 50 ms. This will give you the same behavior you see in most web browsers.
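A sketch of that rule in Java; the 50 ms threshold and 100 ms default mirror common browser behavior, but exact values vary by implementation:

// GIF stores each frame's delay in hundredths of a second.
static int normalizedDelayMs(int delayCentiseconds) {
    int ms = delayCentiseconds * 10;
    return (ms <= 50) ? 100 : ms; // treat suspiciously small delays as 100 ms
}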
I read about this a while ago; I think most of the details are mentioned on Wikipedia, including the animated GIF format and the per-frame delays. If you really want some solid references, I can dig them up for you.
Dear reader,
I'm stuck on one of my concepts.
I'm making a program with which classroom children can measure themselves.
This is what the program includes:
- 1 webcam (only used for a simple webcam view)
- 2 phidgets (don't mind these)
So, this is my plan: I'll draw a rectangle on the webcam view and make it repaint itself constantly. When the repainting is stopped by one of the phidgets, the rectangle's value will be returned in centimeters or meters.
I've already written the code for the rectangle that repaints itself, and this was my result:
(It's a roundRectangle; the lines are kind of hard to see in this image, sorry about that.)
As you can see, the background is now simply black. I want to set the background of this JFrame to a webcam view (if possible) and then draw the rectangle over the webcam view instead of the black background.
I've already looked into JMF, FMJ and such, but I'm getting errors even after checking my webcam path and adding the needed JAR libraries. So I want to try other options.
So:
- I simply want to open my webcam and use it as the background (yes, a live stream, if possible in some way), and then draw this rectangle over it.
I'm thus wondering if this is possible, or if there are other options for me to achieve this.
Hope you understand my situation, and please ask if anything's unclear.
EDIT:
I got my camera to open through Java now. The running camera is of type "Process".
This is where I got the code to open my camera: http://www.linglom.com/2007/06/06/how-to-run-command-line-or-execute-external-application-from-java/
I adjusted it a little so it opens my camera instead.
But now I'm wondering: is it possible to set a process as the background of a JFrame? Or can I somehow add the process to a JPanel and then add that to a JFrame?
I've tried several things without any success.
As it is now, when I run my program it opens the measuring frame and the camera view separately.
But the goal is to fuse them and have the repainting rectangle paint over the camera view.
Help much appreciated!
I don't think it's a matter of setting a webcam stream as the background for your interface. More likely, you need to create a media player component, add it to your GUI, and then overlay your rectangles on top of that component.
As you probably know from searching for Java webcam solutions on Stack Overflow already, it's not easy, but hopefully the JMF Specs and API Guide will help you through it. The API Guide is a PDF and has sections on receiving media streams, as well as sample code.
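As a rough sketch of that overlay idea, here is a Swing panel that paints the latest webcam frame as its background and the measuring rectangle on top. The capture side is left open: currentFrame would be filled in by whatever grabs images from the webcam (JMF or otherwise), followed by a call to repaint():

import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

class WebcamOverlayPanel extends JPanel {
    volatile BufferedImage currentFrame; // updated by your capture code

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (currentFrame != null) {
            // Latest webcam frame as the background
            g.drawImage(currentFrame, 0, 0, getWidth(), getHeight(), null);
        }
        // Measuring rectangle drawn on top
        Graphics2D g2 = (Graphics2D) g;
        g2.setColor(Color.GREEN);
        g2.setStroke(new BasicStroke(3f));
        g2.drawRoundRect(40, 40, getWidth() - 80, getHeight() - 80, 20, 20);
    }
}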