I want to continuously capture the entire desktop inside a Java application. As I'm capturing, I'd like to chunk the stream of data into small video files (mp4, WebM) for storage. From my research, it would seem that the Java Robot class and the FFmpeg tool are my best options. However, Robot seems best suited to obtaining still images, not video. FFmpeg seems like it may support this, but I've struggled to find definitive documentation. I'm looking to emulate what can be done through Chrome's getUserMedia and desktopCapture APIs along with the MediaStreamRecorder JavaScript library. Does anyone have a suggestion for a similar and elegant solution in Java?
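One pattern that fits this description is to let Robot produce the frames and hand encoding and chunking to an external ffmpeg process over a pipe. Below is a rough sketch, not a definitive implementation: the frame rate, segment length, and file names are illustrative, and PNG-over-pipe is simple but slow, so real capture rates will only be approximate.

```java
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.OutputStream;
import javax.imageio.ImageIO;

public class ScreenChunkRecorder {
    public static void main(String[] args) throws Exception {
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        Robot robot = new Robot();

        // image2pipe reads a stream of PNGs from stdin; the segment muxer
        // splits the encoded output into numbered ~10-second mp4 chunks.
        Process ffmpeg = new ProcessBuilder(
                "ffmpeg", "-f", "image2pipe", "-framerate", "10", "-c:v", "png", "-i", "-",
                "-c:v", "libx264", "-pix_fmt", "yuv420p",
                "-f", "segment", "-segment_time", "10", "-reset_timestamps", "1",
                "chunk-%03d.mp4")
            .redirectError(ProcessBuilder.Redirect.INHERIT)
            .start();

        try (OutputStream toFfmpeg = ffmpeg.getOutputStream()) {
            long end = System.currentTimeMillis() + 60_000;   // record for one minute
            while (System.currentTimeMillis() < end) {
                BufferedImage frame = robot.createScreenCapture(screen);
                ImageIO.write(frame, "png", toFfmpeg);        // one PNG per frame
                Thread.sleep(100);                            // roughly 10 fps
            }
        }                                                      // closing stdin ends the recording
        ffmpeg.waitFor();
    }
}
```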
I was able to follow the examples of how to encode video with io.humble easily enough. But the only example I can find that includes audio simply encodes the audio at the beginning of the video. I can't figure out how to encode samples at arbitrary locations; using setTimestamp doesn't do anything.
Here is the example I found:
https://www.javatips.net/api/myLib-master/myLib.AGPLv3/myLib.humble.test/src/test/java/com/ttProject/humble/test/BeepSoundTest.java
If I modify the beepSamples() method to increase the "sampleNum" value, I can create a longer tone. But calling the method multiple times, setting samples.setTimestamp() to other values, or calling setTimestamp() on the packets all do nothing.
No matter what I do, the audio always shows up at the beginning of the video.
Ultimately, I want to be able to load arbitrary mp3 files containing various audio clips and merge them into the audio stream of the video at specific timestamps. But I can't even get this example to encode at different points in the video stream.
Unfortunately, the author of this tool is not interested in maintaining it or providing examples. Luckily, I found JavaCV, an alternative that turned out to be really easy to use.
So to anyone else having this problem, I recommend switching to JavaCV. Other options are JCodec and Xuggler, but Xuggler is deprecated (same author as io.humble) and JCodec is reportedly slow and produces much larger files.
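As a sketch of what this looks like with JavaCV's FFmpegFrameRecorder: one way to position a clip is to precede it with silence, since the encoder assigns audio timestamps from the running sample count. The file name, rates, and tone are illustrative, and the avcodec import path differs between JavaCV versions, so treat this as an assumption-laden starting point rather than a recipe.

```java
import org.bytedeco.ffmpeg.global.avcodec; // org.bytedeco.javacpp.avcodec in older JavaCV
import org.bytedeco.javacv.FFmpegFrameRecorder;
import java.nio.ShortBuffer;

public class AudioAtOffset {
    public static void main(String[] args) throws Exception {
        int sampleRate = 44100, channels = 1;
        // Audio-only recorder; use the width/height constructor to carry video too.
        FFmpegFrameRecorder rec = new FFmpegFrameRecorder("out.mp4", channels);
        rec.setFormat("mp4");
        rec.setAudioCodec(avcodec.AV_CODEC_ID_AAC);
        rec.setSampleRate(sampleRate);
        rec.start();

        // Two seconds of silence, then one second of a 1 kHz tone. Where a
        // clip lands is controlled by how many samples precede it.
        short[] silence = new short[sampleRate * 2];
        short[] tone = new short[sampleRate];
        for (int i = 0; i < tone.length; i++) {
            tone[i] = (short) (Math.sin(2 * Math.PI * 1000 * i / sampleRate) * 8000);
        }
        rec.recordSamples(sampleRate, channels, ShortBuffer.wrap(silence));
        rec.recordSamples(sampleRate, channels, ShortBuffer.wrap(tone));

        rec.stop();
        rec.release();
    }
}
```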
If you need support with these kinds of projects: I maintain a fork of Xuggler (https://github.com/olivierayache/xuggle-xuggler) and can provide help on these topics.
I have two files: audio (mp3 or wav) and video (mp4 or avi) with the same duration. I want to merge them and send the result to the front end.
Which Java library will help me implement that?
If you mean you want to merge the audio and the video on the server side, so that the merged video can then be streamed to the client, then using ffmpeg via a wrapper may be the easiest approach.
The ffmpeg command line is widely used, and it is easy to ask for and receive answers about any particular syntax. Using a Java wrapper approach lets you leverage this syntax and gives you the flexibility to use other ffmpeg functionality in the future if you need it.
A popular, up-to-date Java wrapper is available here:
https://github.com/bramp/ffmpeg-cli-wrapper
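As a sketch of the merge with that wrapper, assuming its multi-input builder support: copy the video stream untouched and re-encode the mp3 track to AAC. Paths and codec choices are illustrative, and if the video file carries its own audio track you may also need explicit -map arguments.

```java
import net.bramp.ffmpeg.FFmpeg;
import net.bramp.ffmpeg.FFmpegExecutor;
import net.bramp.ffmpeg.FFprobe;
import net.bramp.ffmpeg.builder.FFmpegBuilder;

public class MergeAudioVideo {
    public static void main(String[] args) throws Exception {
        FFmpeg ffmpeg = new FFmpeg("/usr/bin/ffmpeg");
        FFprobe ffprobe = new FFprobe("/usr/bin/ffprobe");

        FFmpegBuilder builder = new FFmpegBuilder()
            .setInput("video.mp4")
            .addInput("audio.mp3")
            .overrideOutputFiles(true)
            .addOutput("merged.mp4")
                .setVideoCodec("copy") // keep the video stream as-is
                .setAudioCodec("aac")  // re-encode the mp3 track
                .done();

        new FFmpegExecutor(ffmpeg, ffprobe).createJob(builder).run();
    }
}
```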
If you actually want to stream the audio and the video to the browser separately and do the merging there, then, provided you are not worried about an exact match (e.g. needing to sync audio to speech to keep it in lip sync), you can simply start the audio player and video player simultaneously and the browser will play both together. This worked on all major browsers I tested for a project several years ago, and I am not aware of anything changing that would stop it working.
I am a beginner in Java. I recently studied JMF (Java Media Framework) from this link and learned how to play a video file in Java programs using JMF. Now what I need to do is capture frames from given video files, process them using some image-processing algorithm, and then send those frames to the player for display. Can anyone please suggest how to do that?
I have already read this link.
If you're not required to use JMF, it is probably worthwhile to consider other options at this point. Unfortunately, Xuggle/Xuggler is apparently on hiatus - but if the state of its last release will work for you, they have a Frame Capture Demo that should be a good starting point.
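For reference, that demo boils down to something like the following sketch with Xuggler's mediatool API, which delivers each decoded frame as a BufferedImage you can run through your image-processing step (the file name is illustrative):

```java
import java.awt.image.BufferedImage;
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.MediaListenerAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IVideoPictureEvent;

public class FrameGrabber {
    public static void main(String[] args) {
        IMediaReader reader = ToolFactory.makeReader("input.mp4");
        reader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);

        reader.addListener(new MediaListenerAdapter() {
            @Override
            public void onVideoPicture(IVideoPictureEvent event) {
                BufferedImage frame = event.getImage();
                // process the frame here, then hand it to your display code
            }
        });

        // readPacket() decodes one packet per call; null means "not done yet"
        while (reader.readPacket() == null) {
        }
    }
}
```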
If you are sticking with JMF, perhaps Accessing Individual Decoded Video Frames will point you in the right direction with its info on using a pass-through codec. Note that you'll need to search for a copy of FrameAccess.java if you want the demo code for this option (the link seems to be broken on that page).
We have a Java web application where users can upload all kinds of files, including any kind of video file. Now we want to let them stream the video files they own, so I need to verify that they are the owner and then stream the video, and possibly a preview as well.
Do I need to convert these video files before streaming, and where should I look to get started?
The best video playback/encoding library I have ever seen is ffmpeg. It plays everything you throw at it. (It is used by MPlayer.) It is written in C, but I found some Java wrappers; see the transcoding sketch after the list.
FFMPEG-Java: A Java wrapper around ffmpeg using JNA.
jffmpeg: This one integrates with JMF.
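On the conversion question: browsers only play a handful of codec/container combinations reliably, so a common approach is to normalize every upload to an H.264/AAC mp4 at upload time. A minimal sketch, assuming the ffmpeg binary is on the path; -movflags +faststart moves the index to the front of the file so playback can begin before the download finishes, and a preview could be produced the same way by adding a duration limit such as -t 10.

```java
import java.io.IOException;

public class Transcoder {
    public static void transcode(String in, String out)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "ffmpeg", "-y", "-i", in,
                "-c:v", "libx264", "-c:a", "aac",
                "-movflags", "+faststart",
                out)
            .inheritIO() // let ffmpeg's progress log show on the console
            .start();
        if (p.waitFor() != 0) {
            throw new IOException("ffmpeg failed for " + in);
        }
    }
}
```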
I am creating a simple video editing application using Java, JNI, C, and FFmpeg. The UI is being designed in Java, and I am talking to FFmpeg from a C file via JNI. Does anyone have suggestions for the best way to save part of a video file using FFmpeg? I am letting users choose the parts of the video to save. What I am thinking right now is to loop through all of the packets and decode each frame (if I need to encode to a different format), then save the frames to a file, seeking to different parts of the video based on the start and stop points of the user's crops. If this doesn't make sense, I would be glad to clear it up. Any ideas are much appreciated, as I am just looking for the most efficient and correct way to do this. Thanks!
Use Xuggler? It will do it all for you without your having to figure out the JNI bindings.
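Alternatively, when a frame-exact cut isn't required, ffmpeg can extract a section with a stream copy, skipping the decode/re-encode loop entirely. A sketch of driving that from Java (paths and times are illustrative; putting -ss before -i seeks to the nearest earlier keyframe, so cuts are fast but not frame-accurate):

```java
import java.io.IOException;

public class ClipSaver {
    // Copy `duration` seconds starting at `start` (e.g. "00:01:30")
    // into a new file without re-encoding.
    public static void saveClip(String in, String out, String start, String duration)
            throws IOException, InterruptedException {
        int exit = new ProcessBuilder(
                "ffmpeg", "-y", "-ss", start, "-i", in,
                "-t", duration, "-c", "copy", out)
            .inheritIO()
            .start()
            .waitFor();
        if (exit != 0) {
            throw new IOException("ffmpeg failed for " + in);
        }
    }
}
```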