I need an algorithm to write frames (pictures) into a file that can be read by some video-cutting/producing software.
So I have frames, and I want to pass them as input to a function/method.
Let's do it in Java.
How can I do this?
Is there a simple way to write video files without using any system codecs?
I just need an uncompressed video with a constant frame rate (25 fps or 50 fps)
that will take my true-color pictures (2D arrays of colors), so that I can work with that video in my video program.
I have never found a file format that fits.
Can you help me?
Greetings from Austria, and thanks. Flo.
Depending on the program you want to use to further process your movie, you can also simply create PNGs (or TGAs or BMPs) for the single frames; VirtualDub, for example, can use an image sequence as the frames of a movie.
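If you go that route, plain javax.imageio is enough. A minimal sketch, assuming the frames already exist as BufferedImage objects (the class name and file-naming scheme are just illustrative):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class FrameSequenceWriter {

    /**
     * Writes each frame as a zero-padded, numbered PNG (frame_00000.png, ...)
     * so tools like VirtualDub can import the sequence at a fixed frame rate.
     */
    public static void writeFrames(BufferedImage[] frames, File outputDir) throws IOException {
        for (int i = 0; i < frames.length; i++) {
            File out = new File(outputDir, String.format("frame_%05d.png", i));
            ImageIO.write(frames[i], "png", out);   // PNG is lossless, so colors survive untouched
        }
    }
}

The frame rate (25 or 50 fps) is then chosen when the editing program imports the image sequence; it is not stored in the images themselves.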
The AVI container format can contain streams of uncompressed video, of which there are many types to choose from. Have a look at http://fourcc.org/ for the RGB and YUV formats, and at http://www.alexander-noe.com/video/documentation/avi.pdf for details on the AVI file format.
Related
I'm using BufferedImage to capture the screen images and then the JpegImagesToMovie class, which I found online, to convert them to .mov.
When I run the output file, it plays at super speed instead of the original speed I recorded at. Can someone tell me what I need to do to get a real-time video?
You probably need to put a Thread.sleep(1000 / fps) between captures (Thread.sleep takes milliseconds, so that is one frame interval). Try that and see if it works.
You are possibly using Oracle's example to convert images to .mov. The problem is that the generated file is very, very large. You need to move to something more efficient and with a bit more abstraction. How about using Xuggler to build the screen recorder?
Now, to the question of making the movie play slower: you need to match the capture rate to the frame rate. If you need 60 fps, your one second has to be shared by 60 frames, so for n fps you need a sleep of (1/n) seconds per captured frame (a rough capture loop is sketched below).
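A rough sketch of such a paced capture loop; recordFrame is a made-up placeholder for whatever you do with each frame (write it to disk, feed it to an encoder, etc.):

import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

public class PacedScreenCapture {

    public static void capture(int fps, int totalFrames) throws AWTException, InterruptedException {
        Robot robot = new Robot();
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        long frameMillis = 1000L / fps;                  // e.g. 60 fps -> about 16 ms per frame

        for (int i = 0; i < totalFrames; i++) {
            long start = System.currentTimeMillis();
            BufferedImage frame = robot.createScreenCapture(screen);
            recordFrame(frame);                          // placeholder: hand the frame to your encoder

            long elapsed = System.currentTimeMillis() - start;
            if (elapsed < frameMillis) {
                Thread.sleep(frameMillis - elapsed);     // sleep out the rest of the frame interval
            }
        }
    }

    private static void recordFrame(BufferedImage frame) {
        // hypothetical hook: write the frame to disk or feed it to the movie writer
    }
}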
I want to stream desktop screen captures using sockets.
I don't know the exact way to do this, so I went with AWT's Robot :)
Robot robot = new Robot();
BufferedImage image = robot.createScreenCapture(screenRectangle);
The problem is that the images coming from the Robot are too large to stream.
A 1440x900 capture is about 0.3 MB, and I can't transfer it fast enough to create a smooth 24 fps stream.
Currently I'm using a TCP socket, because I had problems cutting the image into multiple parts and sending them over UDP.
Probably this isn't the right method, but what is? How are HD video streams transferred?
Thanks in advance
I think you'll need an external library to create the video (it may be platform dependent).
The approach with images is simple, but you'll need to send every frame. When you use a video codec the size is smaller, because it sends only some of the frames in full and the others contain only the changed part of the picture (a rough sketch of that idea is at the end of this answer).
See here:
http://en.wikipedia.org/wiki/Key_frame
http://en.wikipedia.org/wiki/I-frame
Here are some open-source libs I just googled:
https://code.google.com/p/java-screen-recorder/
http://www.xuggle.com/xuggler/
I think you can also find some libs to create a video stream from images...
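To illustrate the "send only what changed" idea without a real codec, here is a rough sketch that computes the bounding box of the pixels that differ between two consecutive frames; only that sub-image (plus its offset) would need to be sent:

import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class FrameDiff {

    /**
     * Returns the bounding box of all pixels that differ between two frames of
     * identical size, or null if the frames are identical.
     */
    public static Rectangle changedRegion(BufferedImage previous, BufferedImage current) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = -1, maxY = -1;

        for (int y = 0; y < current.getHeight(); y++) {
            for (int x = 0; x < current.getWidth(); x++) {
                if (previous.getRGB(x, y) != current.getRGB(x, y)) {
                    if (x < minX) minX = x;
                    if (y < minY) minY = y;
                    if (x > maxX) maxX = x;
                    if (y > maxY) maxY = y;
                }
            }
        }
        if (maxX < 0) {
            return null;                       // nothing changed this frame
        }
        return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}

The receiver keeps the previous full frame and pastes the received region back in at its offset; you would still send a full "key" frame every so often so that one lost packet doesn't corrupt everything that follows.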
How are HD video streams transferred?
Typically as a video stream, which a 'group of images' is not. Video codecs often have clever ways to compress groups of images further, e.g. by only storing the part of the next frame that differs from the previous one.
You might also want to look into encoding the images as high-compression JPEGs (see the sketch below).
Having said that, I doubt you'll get a very good transfer rate at that size in pixels.
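For the JPEG idea, ImageIO lets you choose the compression quality explicitly instead of taking the writer's default. A minimal sketch (the class name is illustrative; a quality around 0.3f is just a starting point to tune against your bandwidth):

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.MemoryCacheImageOutputStream;

public class JpegCompressor {

    /** Encodes a frame as JPEG at the given quality (0.0 = smallest, 1.0 = best). */
    public static byte[] toJpeg(BufferedImage frame, float quality) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality);

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writer.setOutput(new MemoryCacheImageOutputStream(bytes));
        writer.write(null, new IIOImage(frame, null, null), param);
        writer.dispose();
        return bytes.toByteArray();            // send these bytes over the socket instead of the raw image
    }
}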
Like SoundCloud and Zippyshare, how can I generate an audio waveform image using Java? Are there any frameworks or open-source libraries available for such a case?
I want to generate an audio waveform as an image, so that upon loading a track, its waveform image is loaded as well.
Start with this answer. The "further processing.." in this case might be to add each instantaneous value to a GeneralPath, then (scale that path to fit within the painting area and) draw it.
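To make the GeneralPath idea concrete, here is a rough sketch. It assumes a 16-bit signed, little-endian PCM WAV file (other encodings need a different sample conversion), Java 9+ for readAllBytes, and draws one path point per horizontal pixel using the first channel only:

import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.GeneralPath;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class WaveformImage {

    public static void render(File wavFile, File pngOut, int width, int height) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(wavFile);
        AudioFormat fmt = in.getFormat();                      // assumed: 16-bit signed little-endian PCM
        byte[] raw = in.readAllBytes();
        in.close();

        int frameSize = fmt.getFrameSize();                    // bytes per sample frame (all channels)
        int totalFrames = raw.length / frameSize;

        GeneralPath path = new GeneralPath();
        for (int x = 0; x < width; x++) {
            int frameIndex = (int) ((long) x * totalFrames / width);
            int offset = frameIndex * frameSize;               // first channel of that frame
            int lo = raw[offset] & 0xFF;
            int hi = raw[offset + 1];                          // sign-extended high byte
            int sample = (hi << 8) | lo;                       // signed 16-bit value

            double y = height / 2.0 - (sample / 32768.0) * (height / 2.0);
            if (x == 0) path.moveTo(x, y); else path.lineTo(x, y);
        }

        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, width, height);
        g.setColor(Color.BLUE);
        g.setStroke(new BasicStroke(1f));
        g.draw(path);
        g.dispose();
        ImageIO.write(img, "png", pngOut);
    }
}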
Here are two questions with answers on SO:
Java Program to create a PNG waveform for an audio file
How can I draw sound data from my wav file?
I like my answer to the second question the best because it explains how to do it in all cases, but it doesn't give code.
There are lots of other answers, too.
I need to create steganographic videos (videos with data hidden in them) for my project.
I need to carry this out by extracting all the frames from a video, hiding data in selected frames by replacing bits in the LSB of the pixel color values, and then encoding all the frames to create a new video (note that lossless formats are required here, otherwise I might end up losing the hidden data).
My research motivated me to use Xuggler for manipulating videos, the PNG format for saving the extracted images since it is lossless (handling them as BufferedImage objects), and AVI video files.
As of now I am able to extract all frames from a video and encode my hidden data in the LSBs.
But I am having problems creating the new AVI video file using Xuggler: when I extract the frames from the new video, they have lost the hidden data. I don't understand how to get this right and keep the data intact. This could be due to a lossy compression technique being used to create the new video; the size of the new video does not matter to me. I also can't find the correct codec_id to create the new video. I am extensively using the Xuggler tutorial available on the wiki.
decode and capture frames http://build.xuggle.com/view/Stable/job/xuggler_jdk5_stable/ws/workingcopy/src/com/xuggle/mediatool/demos/DecodeAndCaptureFrames.java
I can post my code as required...
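(For context, the LSB replacement step described in the question typically looks something like the sketch below. It assumes TYPE_INT_RGB BufferedImage frames, uses only the blue channel's least significant bit, one payload bit per pixel, and the class name is just illustrative. As the answer below explains, these bits only survive if the frames are re-encoded with a lossless codec.)

import java.awt.image.BufferedImage;

public class LsbEmbedder {

    /**
     * Hides the payload bytes in the least significant bit of the blue channel,
     * one bit per pixel, scanning the frame row by row. The frame must have at
     * least payload.length * 8 pixels.
     */
    public static void embed(BufferedImage frame, byte[] payload) {
        int width = frame.getWidth();
        int bitIndex = 0;

        for (byte b : payload) {
            for (int bit = 7; bit >= 0; bit--) {
                int x = bitIndex % width;
                int y = bitIndex / width;
                int rgb = frame.getRGB(x, y);
                int payloadBit = (b >> bit) & 1;
                rgb = (rgb & ~1) | payloadBit;        // overwrite the blue channel's LSB
                frame.setRGB(x, y, rgb);
                bitIndex++;
            }
        }
    }

    /** Reads the hidden bytes back out; the payload length must be known to the extractor. */
    public static byte[] extract(BufferedImage frame, int length) {
        int width = frame.getWidth();
        byte[] out = new byte[length];
        int bitIndex = 0;

        for (int i = 0; i < length; i++) {
            int value = 0;
            for (int bit = 7; bit >= 0; bit--) {
                int x = bitIndex % width;
                int y = bitIndex / width;
                value = (value << 1) | (frame.getRGB(x, y) & 1);
                bitIndex++;
            }
            out[i] = (byte) value;
        }
        return out;
    }
}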
The problem is in the algorithm you are using: since MPEG and the other common video compression techniques are lossy, you will lose data when you convert the frames back into a video stream. So with lossy video codecs you cannot use LSB techniques for steganography.
Instead, what you can do is change the motion vectors of the video in some way to hide the steganographic data. The problem with this is that Xuggler, being a higher-level API, might not give you a way to find/alter the motion vectors of the P/B frames. FFmpeg, which Xuggler uses, does have an option to visualize the motion vectors, so your best bet for a motion-vector algorithm is to alter the source code of FFmpeg, as it is an open-source project. Do reply back if you find a better way to get at the motion vectors.
Well, there is a simpler video steganography method.
You can refer to Real steganography with truecrypt
But if you really want to go with MPEG video compression, you can refer to the wonderful paper Steganography in Compressed Video Stream, though the problem of extracting and manipulating the motion vectors still remains.
Updated:
Since my original request appears to be almost impossible, what's the next simplest solution? Invoke the swftools app? Make a JNI call to the ffmpeg lib?
Original:
This is related to "how to extract flash frames programmatically" but I am constrained to Java libraries only (and no JNI calls to C please). This also implies no calls to console apps like swftools. I'm looking for a pure Java (or at least JVM) solution.
Or if you want to do it directly from Java, you can use this code. It uses Xuggler, an open-source library for encoding and decoding video from Java. Xuggler does use JNI behind the scenes to call FFmpeg, but when using it that's completely invisible to you. Hope that helps.
Art
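For reference, the usual Xuggler mediatool pattern for grabbing decoded frames looks roughly like the sketch below. It is written from memory against Xuggler's com.xuggle.mediatool API rather than the exact code linked above, so verify the class and method names against the Xuggler documentation:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.MediaListenerAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IVideoPictureEvent;

public class FrameGrabber {

    public static void main(String[] args) throws Exception {
        IMediaReader reader = ToolFactory.makeReader("input.flv");
        // Ask Xuggler to hand us frames as BufferedImages we can save directly.
        reader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);

        reader.addListener(new MediaListenerAdapter() {
            private int frameNumber = 0;

            @Override
            public void onVideoPicture(IVideoPictureEvent event) {
                try {
                    BufferedImage frame = event.getImage();
                    ImageIO.write(frame, "png", new File("frame_" + frameNumber++ + ".png"));
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        });

        // readPacket() returns null until the end of the file (or an error).
        while (reader.readPacket() == null) {
            // keep decoding
        }
    }
}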
It is possible... if you can accept the end result being a one-frame FLV file (and not a proper image, like PNG or JPEG). What is a single-frame video, really? It's an image! This may very well give you the functionality you are looking for, although it might seem a bit strange.
What you need to do is parse the FLV file. It's actually a very simple format. First, read up on some basic video compression terms. Second, read up on the FLV file format specification on Adobe's site.
Roughly, an example FLV file would look like this inside:
'FLV' header
Meta data
Frame 0 - Audio
Frame 1 - Video I-Frame (all information to create a full image)
Frame 2 - Video P-Frame (just differential from last frame)
Frame 3 - Video P-Frame (just differential from last frame)
Frame 4 - Audio
Frame 5 - Video I-Frame (the pattern then repeats)
...
Frame n
EOF
So, you'll search for the video I-frame you want as a picture. That, along with a basic FLV file header, is all you need. Our output (be it a socket or a file) will be just:
'FLV' header
Frame 0 - Video I-Frame (all information to create a full image)
This all can be done in Java without any special tools. I've done it myself. For real.
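A rough outline of that parsing step, written against the published FLV tag layout (9-byte header, then tags, each followed by a 4-byte PreviousTagSize; video tags have type 9, and a keyframe is marked by frame type 1 in the high nibble of the first payload byte). Treat it as a sketch, not a finished tool:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class FlvKeyframeExtractor {

    /** Copies the FLV header plus the first video keyframe tag into a new one-frame FLV. */
    public static void extractFirstKeyframe(String inFile, String outFile) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(inFile));
             FileOutputStream out = new FileOutputStream(outFile)) {

            byte[] header = new byte[9];                    // 'FLV', version, flags, data offset
            in.readFully(header);
            header[4] = 0x01;                               // flags: video only in our output
            out.write(header);
            out.write(new byte[4]);                         // PreviousTagSize0 is always 0

            in.skipBytes(4);                                // skip the input's PreviousTagSize0
            while (true) {
                int tagType = in.read();                    // 8 = audio, 9 = video, 18 = script
                if (tagType < 0) break;                     // end of file, no keyframe found

                int dataSize = (in.read() << 16) | (in.read() << 8) | in.read();
                byte[] rest = new byte[7];                  // timestamp (4) + stream id (3)
                in.readFully(rest);
                byte[] data = new byte[dataSize];
                in.readFully(data);
                in.skipBytes(4);                            // this tag's PreviousTagSize

                boolean isKeyframe = tagType == 9 && ((data[0] >> 4) & 0x0F) == 1;
                if (isKeyframe) {
                    out.write(tagType);
                    out.write(new byte[] { (byte) (dataSize >> 16), (byte) (dataSize >> 8), (byte) dataSize });
                    out.write(new byte[7]);                 // timestamp 0, stream id 0
                    out.write(data);
                    int tagSize = 11 + dataSize;            // tag header (11 bytes) + payload
                    out.write(new byte[] { (byte) (tagSize >> 24), (byte) (tagSize >> 16),
                                           (byte) (tagSize >> 8), (byte) tagSize });
                    break;
                }
            }
        }
    }
}

For the older FLV codecs (Sorenson H.263, VP6) such a single keyframe tag is decodable on its own; for H.264/AVC you would also have to copy the AVC sequence-header tag that precedes it.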
Note that this approach applies only to FLV files, and not F4V (or other MP4-based file formats.)
Note that playback in SWF is controlled by ActionScript and declarative transformations, so to view that "first frame" properly in all cases you'd need to emulate the whole player. I'd say: call external tools. What's the problem with them anyway? Is it something religion-mandated?