I am developing an application to monitor 20 video streams at a time. I will have a JFrame containing 20 boxes (e.g. JPanels) to display the 20 streams. I am able to load and decode a stream using Xuggler, but how can I display it on a Swing JPanel?
I am able to play sound on a SourceDataLine; my only problem is how to display 20 * 30 = 600 video frames per second on Swing components.
Also, Xuggler outputs decoded frames in the YUV420P pixel format. Is there significant overhead in converting these to RGB, creating a BufferedImage, and displaying it on a Swing component?
Please guide me on this. I want to display 20 video streams at a time in Swing components.
Here's some code I Googled that will convert a YUV420 file to BufferedImage frames. You can use this as a base for what you want.
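The snippet itself isn't reproduced here, but the core of any such code is the standard BT.601 integer conversion. A minimal sketch, assuming one YUV420p frame packed into a single byte array (Y plane, then U, then V):

import java.awt.image.BufferedImage;

static BufferedImage yuv420ToImage(byte[] yuv, int w, int h) {
    BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    int ySize = w * h, uvSize = ySize / 4;
    for (int row = 0; row < h; row++) {
        for (int col = 0; col < w; col++) {
            int uvIndex = (row / 2) * (w / 2) + (col / 2); // U and V are subsampled 2x2
            int c = (yuv[row * w + col] & 0xFF) - 16;
            int d = (yuv[ySize + uvIndex] & 0xFF) - 128;
            int e = (yuv[ySize + uvSize + uvIndex] & 0xFF) - 128;
            int r = clamp((298 * c + 409 * e + 128) >> 8);
            int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
            int b = clamp((298 * c + 516 * d + 128) >> 8);
            img.setRGB(col, row, (r << 16) | (g << 8) | b);
        }
    }
    return img;
}

static int clamp(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }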
You probably won't be able to process 600 video frames a second on a typical PC anyway. You'll have to measure how many video frames you can convert per second, and drop the rest of the frames.
Probably the best way to process 20 video feeds is to have 20 threads, each grabbing a video frame, converting it to a BufferedImage, and passing the BufferedImage to the Event Dispatch Thread (EDT) for Swing to draw on the corresponding JPanel. By the time a thread comes back to grab its next video frame, you'll have automatically dropped the frames that the PC didn't have time to process.
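A minimal sketch of that structure, with grabAndConvert() standing in as a placeholder for your Xuggler decode-and-convert step:

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

// Panel that repaints whenever the decoder thread hands it a new frame.
class VideoPanel extends JPanel {
    private BufferedImage frame;
    void setFrame(BufferedImage img) { frame = img; repaint(); } // call only on the EDT
    @Override protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (frame != null) g.drawImage(frame, 0, 0, getWidth(), getHeight(), null);
    }
}

// One loop like this per stream, each on its own thread.
void startStream(VideoPanel panel) {
    new Thread(() -> {
        while (true) {
            BufferedImage image = grabAndConvert();                  // blocks until the next frame is decoded
            SwingUtilities.invokeLater(() -> panel.setFrame(image)); // all Swing work stays on the EDT
        }
    }).start();
}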
Is there an FFMPEG command where, if we pass a video file, it produces a keyframe at every scene change? And a keyframe, to my understanding, is a series of (image or video) files for a video, which can be used for playing on hover of the video. Kindly let me know if we can do this.
Is there an FFMPEG command where, if we pass a video file, it produces
a keyframe at every scene change?
Well, it depends on the codec, and on what you are calling a scene. x264 has the scenecut parameter for adjusting scene-change sensitivity. However, what x264 calls a scene may not be the same thing you call a scene.
A Michael Bay movie, for example, has a hard cut every 4 or 5 seconds, and x264 may consider every such cut a scene. Anything more subtle than a cut or a fade, ffmpeg will not handle.
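If ffmpeg's notion of a scene is close enough for your purposes, its select filter can dump one frame at each detected scene change; the 0.4 threshold below is just a starting point to tune, not a magic value:

ffmpeg -i input.mp4 -vf "select='gt(scene,0.4)'" -vsync vfr scene_%04d.png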
And a keyframe, to my understanding, is a series of (image or video)
files for a video, which can be used for playing on hover of the
video. Kindly let me know if we can do this.
No, not at all.
A key frame is a single frame, not a series of frames or files. It also has nothing to do with "hover". A keyframe is simply an independent frame, meaning you can decode it on its own, without first having to decode any other frames.
Video compression does not encode every frame independently. It encodes one frame in full, and then for the next frame encodes only the parts that changed. That is called a "predicted frame", and it is not decodable without first decoding the frame it references. A key frame is simply a frame that does not reference any other frames.
Some players make an optimization where they preview keyframes on hover, because keyframes are faster to decode than predicted frames. But that is 100% a player optimization, and not all players do it.
To me, this sounds like an XY problem.
I'm using MediaExtractor/MediaCodec to decode a video and render it to a TextureView. As a template, I've used the code from: https://github.com/vecio/MediaCodecDemo/blob/master/src/io/vec/demo/mediacodec/DecodeActivity.java
I'd like to be able to play back the video at 2x speed. Luckily, the decoding is fast enough that I can accomplish this by letting MediaCodec decode every frame and then only rendering every other frame to the screen. However, this doesn't feel like a great solution, particularly if you want to increase the playback rate by an arbitrary factor. For example, at 10x speed the codec cannot decode frames fast enough to play every 10th frame at 30 fps.
So instead I'd like to control playback by calling MediaExtractor.advance() multiple times to skip frames that don't need to be decoded. For example:
...
mDecoder.queueInputBuffer(inIndex, 0, sampleSize, mExtractor.getSampleTime(), 0);
// Skip ahead so that only every nth sample reaches the decoder.
for (int i = 0; i < playbackSpeedIncrease; i++) {
    mExtractor.advance();
}
...
With this code, in theory the extractor should only extract every nth frame, where n is defined by the variable 'playbackSpeedIncrease'. For example, if n = 5, this should advance past frames 1-4 and only extract frame 5.
However, this does not work in practice. When I run this code, the image rendered to the screen is distorted.
Does anyone know why this is? Any suggestions for better ways to playback videos at an arbitrary speed increase?
You can't generally do this with AVC video.
The encoded video has "key" (or "sync" or "I") frames that hold full images, but the frames between key frames hold "diffs" from the previous frames. You can find articles on Wikipedia about video encoding methods. You're getting nasty video because you skipped a key frame, and now the diffs are being computed against the wrong image.
If you've ever seen video fast-forward smoothly but fast-reverse clunkily, e.g. on a TiVo, this is why: the video decoder plays forward quickly, but in reverse it just plays the I-frames, holding them on screen long enough to get the desired rate. At "faster" forward/reverse it evens out because the device is just playing I-frames. You can do something similar by watching for the SAMPLE_FLAG_SYNC flag on the frames you get from MediaExtractor.
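A rough sketch of that approach, replacing the fixed advance() loop from the question (stepUs here is an assumed value derived from your speed factor, not something from the original code):

// Option 1: advance sample by sample until we land on a sync (key) frame.
do {
    if (!mExtractor.advance()) break; // end of stream
} while ((mExtractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) == 0);

// Option 2: jump by time straight to the next sync frame.
mExtractor.seekTo(mExtractor.getSampleTime() + stepUs, MediaExtractor.SEEK_TO_NEXT_SYNC);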
In general you're limited to either playing the video as fast as the device can decode it, or playing just the key frames. (If you know enough about the layout of a specific video in a specific encoding you may be able to do better, e.g. play the I and P but not the B, but I'm not sure that's even possible in AVC.)
The frequency of I frames is determined by the video encoder. It tends to be one or two per second, but if you get videos from different sources then you can expect the GOP size (group-of-pictures, i.e. how many frames there are between I-frames) to vary.
I'm developing an AppEngine application. One of its features is splitting an animated .gif image into separate frames. I searched a lot for a way to do it and finally found a solution. Unfortunately the solution is based on ImageReader, and I can't use that on the server because:
javax.imageio.ImageReader is not supported by Google App Engine's Java
runtime environment
Are there any other ways to decode GIF-image without this class?
First, something about frames themselves. There are two possible meanings of splitting an animated .gif image into separate frames. 1) Literally, a frame is a frame in the sense of the animated GIF format. The problem is that the frames which constitute an animated GIF are related: the disposal method of each frame dictates what to do with the previous frame before drawing the current one. You can override it, fill with the background color before drawing the new frame, or do whatever you think appropriate. If you think that situation is complicated, what about the transparency of frames, or the logical position at which each frame is drawn?
If we go along this road, there is no need for a dedicated ImageReader: just read the relevant parts of the image, copy each frame's data, and save it along with a header and color palette. The consequence is that the resulting images might look weird and meaningless. Look at the example below:
The first frame
The second frame
And the original
You can see the second frame doesn't look so good. The truth is, the second frame is a transparent one which builds on top of the first frame (this animated GIF contains only 2 frames). You are expected to see through the second frame, and together they make the animation.
Now let's look at the second meaning of splitting an animated .gif image into separate frames. 2) In this case, a frame is actually a composite built on top of the previous frames, and it is what we actually see when viewing an animated GIF. To achieve this, we have to take into account the disposal methods of the earlier frames, the logical position of each frame, and the transparency of the frames themselves.
Let's see what we get now:
The first frame
The second frame
Now the first frame is the same as in the first situation, but the second frame is constructed on top of the first one and it's not transparent anymore.
In the second case, we do have to decode and re-encode the frames to achieve the desired result. Besides looking right, another good thing about this is that you can save the resulting images in any format the encoder supports.
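A minimal sketch of that compositing step, assuming each raw frame has already been decoded to a BufferedImage together with its logical offset (GifFrame is a hypothetical holder class; this covers only the "draw over previous" disposal method, and restore-to-background/previous would need extra bookkeeping):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

List<BufferedImage> composeFrames(List<GifFrame> rawFrames, int logicalWidth, int logicalHeight) {
    BufferedImage canvas = new BufferedImage(logicalWidth, logicalHeight, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = canvas.createGraphics();
    List<BufferedImage> composed = new ArrayList<>();
    for (GifFrame f : rawFrames) {
        g.drawImage(f.image, f.x, f.y, null); // transparent pixels leave earlier content visible
        BufferedImage snapshot = new BufferedImage(logicalWidth, logicalHeight, BufferedImage.TYPE_INT_ARGB);
        snapshot.getGraphics().drawImage(canvas, 0, 0, null);
        composed.add(snapshot);
    }
    g.dispose();
    return composed;
}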
The examples in this post were generated by the GIF-related part of iCafe.
I would like to control the speed of an animated GIF in a Java applet. Is there a way to do this? If not, is there a way to access the data of an animated GIF so the applet can draw the animation image by image on its own?
I think the frame rate is embedded in the GIF. You could somehow extract the images from the GIF, but that's harder than starting with the individual images and animating them yourself, which in turn is harder than recreating the GIF with your preferred frame rate.
If you're going to use the GIF only once and the frame rate isn't going to change, just recreate the GIF. If you need to change the speed based on input from your applet, you could use the approach here. It alternates between two GIFs, but there's nothing stopping you from loading PNGs and cycling through an array of those.
The animated GIF format consists of data for each frame along with a delay value (how long to show that frame). The delay is separate for each frame; it is stored as two bytes and measured in hundredths of a second.
Netscape (back when it was the web) couldn't show frames faster than 10 per second, so lots of tools just said "screw it" and set the delay for all frames to 0. Plenty of old GIFs, and old tools, have kept these broken frame delays around.
With faster computers and browsers, this was worked around by checking whether any frame had a delay <= 50 ms (20+ fps). If it did, the delay was increased to 100 ms (10 fps).
In principle, the best solution would be to fix the GIF you're using so it has accurate frame delays. If that isn't viable, use the same old workaround: break the frames out of the animated GIF, do the animation yourself, and default to a 100 ms delay whenever the specified delay is <= 50 ms. This will give you the same behavior you see in most web browsers.
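In code, the workaround is only a couple of lines once you have the two delay bytes from a frame's Graphic Control Extension (the variable names below are placeholders):

// The delay is a little-endian 16-bit count of hundredths of a second.
int delayCs = (delayLow & 0xFF) | ((delayHigh & 0xFF) << 8);
int delayMs = delayCs * 10;
if (delayMs <= 50) delayMs = 100; // browser-style workaround: treat "too fast" as 10 fps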
I read about this a while ago. I think most of the details are mentioned on Wikipedia (including the animated GIF format and the per-frame delays). If you really want solid references, I can dig them up for you.
I am working with VLCJ. I want to play a video in which every frame's display duration is recorded in a separate file. So I plan to modify the input module of VLCJ to read the video content file and the frame-time file at the same time. The result should be VLCJ playing the video frame by frame, with the frame-time file deciding how long each frame is shown.
To implement this, does anybody know which modules of the VLCJ source code should be modified?
I'm not even sure you can do this with libvlc. Do you mean the amount of time each frame should display on screen, or the time it actually does? If you mean should, then you should just be able to calculate this from the frame rate.
If you mean does, then you might want to look at an API such as Xuggler, which works with each frame directly.
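For illustration, a sketch of the frame-by-frame idea outside of vlc entirely, assuming the frame-time file holds one millisecond value per line (decodeNextFrame() and showFrame() are placeholders for your decode and draw steps):

import java.awt.image.BufferedImage;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

static void playWithSidecarTimes() throws Exception {
    List<String> lines = Files.readAllLines(Paths.get("frametimes.txt"));
    for (String line : lines) {
        BufferedImage frame = decodeNextFrame();   // e.g. via Xuggler, one frame at a time
        showFrame(frame);                          // draw to your video surface
        Thread.sleep(Long.parseLong(line.trim())); // hold the frame for its recorded duration
    }
}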