Xuggler encoding and muxing - java

I'm trying to use Xuggler (which I believe uses ffmpeg under the hood) to do the following:
Accept a raw MJPEG video bitstream (from a small TTL serial camera) and encode/transcode it to H.264; and
Accept a raw audio bitstream (from a microphone) and encode it to AAC; then
Mux the two (audio and video) bitstreams together into an MPEG-TS container
I've watched/read some of their excellent tutorials, and so far here's what I've got:
// I'll worry about implementing this functionality later; it
// involves querying native device drivers.
byte[] nextMjpeg = getNextMjpegFromSerialPort();

// I'll worry about implementing this functionality as well;
// I'm simply providing these for thoroughness.
BufferedImage mjpeg = MjpegFactory.newMjpeg(nextMjpeg);

// Specify an H.264 video stream (how?)
String h264Stream = "???";

IMediaWriter writer = ToolFactory.makeWriter(h264Stream);
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264);
writer.encodeVideo(0, mjpeg);
For one, I think I'm close here, but it's still not correct, and I've only gotten this far by reading the video code examples (not the audio; I can't find any good audio examples).
Literally, I'll be getting byte-level access to the raw video and audio feeds coming into my Xuggler implementation. But for the life of me I can't figure out how to get them into H.264/AAC/MPEG-TS format. Thanks in advance for any help here.

Looking at this Xuggler sample code, the following should work to encode video as H.264 and mux it into an MPEG2-TS container:
IMediaWriter writer = ToolFactory.makeWriter("output.ts");
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, width, height);
for (...)
{
    BufferedImage mjpeg = ...;
    writer.encodeVideo(0, mjpeg);
}
The container type is guessed from the file extension; the codec is specified explicitly.
To mux audio and video, you would do something like this:
writer.addVideoStream(videoStreamIndex, 0, videoCodec, width, height);
writer.addAudioStream(audioStreamIndex, 0, audioCodec, channelCount, sampleRate);
while (... have more data ...)
{
    BufferedImage videoFrame = ...;
    long videoFrameTime = ...; // this is the time to display this frame
    writer.encodeVideo(videoStreamIndex, videoFrame, videoFrameTime, DEFAULT_TIME_UNIT);

    short[] audioSamples = ...; // the size of this array should be number of samples * channelCount
    long audioSamplesTime = ...; // this is the time to play back this bit of audio
    writer.encodeAudio(audioStreamIndex, audioSamples, audioSamplesTime, DEFAULT_TIME_UNIT);
}
In this case I believe your code is responsible for interleaving the audio and video: you want to call either encodeAudio() or encodeVideo() on each pass through the loop, based on which available data (a chunk of audio samples or a video frame) has the earlier timestamp.
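For example, a minimal sketch of that interleaving logic (haveVideo()/haveAudio(), the fetch methods and the next*Time() lookups are placeholders for your own buffering code, not part of the Xuggler API):
while (haveVideo() || haveAudio())
{
    if (!haveAudio() || (haveVideo() && nextVideoTime() <= nextAudioTime()))
    {
        // the video frame is due first (or the audio has run out)
        long time = nextVideoTime();
        writer.encodeVideo(videoStreamIndex, fetchVideoFrame(), time, DEFAULT_TIME_UNIT);
    }
    else
    {
        // the audio chunk is due first (or the video has run out)
        long time = nextAudioTime();
        writer.encodeAudio(audioStreamIndex, fetchAudioSamples(), time, DEFAULT_TIME_UNIT);
    }
}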
There is another, lower-level API you may end up using, based on IStreamCoder, which gives more control over various parameters. I don't think you will need to use that.
To answer the specific questions you asked:
(1) "Encode a BufferedImage (M/JPEG) into a h.264 stream" - you already figured that out, writer.addVideoStream(..., ICodec.ID.CODEC_ID_H264) makes sure you get the H.264 codec. To get a transport stream (MPEG2 TS) container, simply call makeWriter() with a filename with a .ts extension.
(2) "Figure out what the "BufferedImage-equivalent" for a raw audio feed is" - that is either a short[] or an IAudioSamples object (both seem to work, but IAudioSamples has to be constructed from an IBuffer which is much less straightforward).
(3) "Encode this audio class into an AAC audio stream" - call writer.addAudioStream(..., ICodec.ID.CODEC_ID_AAC, channelCount, sampleRate)
(4) "multiplex both stream into the same MPEG-TS container" - call makeWriter() with a .ts filename, which sets the container type. For correct audio/video sync you probably need to call encodeVideo()/encodeAudio() in the correct order.
P.S. Always pass the earliest audio/video available first. For example, if you have audio chunks which are 440 samples long (at 44000 Hz sample rate, 440 / 44000 = 0.01 seconds), and video at exactly 25fps (1 / 25 = 0.04 seconds), you would give them to the writer in this order:
video0 # 0.00 sec
audio0 # 0.00 sec
audio1 # 0.01 sec
audio2 # 0.02 sec
audio3 # 0.03 sec
video1 # 0.04 sec
audio4 # 0.04 sec
audio5 # 0.05 sec
... and so forth
Most playback devices are probably ok with the stream as long as the consecutive audio/video timestamps are relatively close, but this is what you'd do for a perfect mux.
P.P.S. There are a few docs you may want to refer to: the Xuggler class diagram, ToolFactory, IMediaWriter, and ICodec.

I think you should look at GStreamer: http://gstreamer.freedesktop.org/ You would have to look for a plugin that can capture the camera input, pipe it to the libx264 and AAC plugins, and then pass both through an mpegts muxer.
A pipeline in GStreamer would look like:
v4l2src queue-size=15 ! video/x-raw,framerate=25/1,width=384,height=576 ! \
avenc_mpeg4 name=venc \
alsasrc ! audio/x-raw,rate=48000,channels=1 ! audioconvert ! lamemp3enc name=aenc \
avimux name=mux ! filesink location=rec.avi venc. ! mux. aenc. ! mux.
In this pipeline, MPEG-4 and MP3 encoders are used and the stream is muxed into AVI. You should be able to find plugins for libx264 and AAC; a sketch follows below. Let me know if you need further pointers.
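For the H.264/AAC/MPEG-TS combination you asked about, an untested sketch of the pipeline might be (x264enc, voaacenc and mpegtsmux come from the "ugly"/"bad" plugin sets; caps and queue placement may need tuning):
gst-launch-1.0 v4l2src ! videoconvert ! x264enc tune=zerolatency ! queue ! \
mpegtsmux name=mux ! filesink location=rec.ts \
alsasrc ! audioconvert ! voaacenc ! queue ! mux.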

Related

How to get the length of an audio file from a link to a website in Kotlin / Java?

I'm trying to get the length of an audio file. Sadly I run into some issues trying to retrieve that file. This is my code (which is in Kotlin):
val inputStream = AudioSystem.getAudioInputStream(URL(url))
val format = inputStream.format
val durationSeconds = inputStream.frameLength / format.frameRate
lengthTicks = (durationSeconds * 20).toDouble()
The link I use is https://cdn.bandithemepark.net/lindburgh/HyperionHal.mp3
When my code runs, I get "UnsupportedAudioFileException: URL of unsupported format".
I am unsure why I am getting this error, since MP3 looks like a pretty normal file format to me. I also tried using an MP4 file, but I got the same error with that. Does anybody know what is happening here?
According to the docs:
The provided reference implementation of this API supports the following features:
Audio file formats: AIFF, AU and WAV
Music file formats: MIDI Type 0, MIDI Type 1, and Rich Music Format (RMF)
So it does look like MP3 and MP4 are not supported. You'll most likely need a library/plugin.
Deciding on which one you might need is beyond the scope of SO, as that would be an opinion-based answer and is not considered acceptable.
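That said, purely as an illustration of what plugging in such a library looks like, here is a hedged sketch using the JavaZoom MP3SPI library (the Maven coordinates com.googlecode.soundlibs:mp3spi and the "duration" property are MP3SPI specifics, not JDK API):
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioSystem;
import java.net.URL;

public class Mp3Duration {
    public static void main(String[] args) throws Exception {
        // With mp3spi on the classpath, its AudioFileReader registers itself
        // with AudioSystem automatically and can parse MP3 headers
        URL url = new URL("https://cdn.bandithemepark.net/lindburgh/HyperionHal.mp3");
        AudioFileFormat fmt = AudioSystem.getAudioFileFormat(url);
        // MP3SPI exposes the track length as the "duration" property, in microseconds
        Long micros = (Long) fmt.properties().get("duration");
        if (micros != null) {
            System.out.println("Duration: " + (micros / 1_000_000.0) + " s");
        }
    }
}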

Photos being stored on Amazon S3 bucket through Java's AWS SDK gain 1 pixel in height

I've encountered this strange issue while trying to transfer images from my Java webserver to my Amazon S3 bucket. I am developing a mobile application for both Android and iOS (with Swift), which allows users to upload images from their device to the server.
The images (Bitmaps for Android, and UIImage for iOS) are both converted to Base64 before being sent to my webserver (written in Java to conform to HTTP/1.1 standards). The data arrives at the server intact: printing the Base64 before it is sent and after it arrives shows the two are identical. From there I decode the String into a byte array. I have isolated the issue to something in the actual sending process to the Amazon S3 bucket. Take the following snippet for example:
//image is a byte array, and imageid a String
InputStream is = new ByteArrayInputStream(image);
AmazonS3 s3 = new AmazonS3Client();
ObjectMetadata meta = new ObjectMetadata();
meta.setContentType("image/jpeg");
meta.setContentLength(image.length);
s3.putObject(new PutObjectRequest("bucket", "images/" + imageid + ".jpg", is, meta));
This code works fine for Android, where the images upload at 1080x1080 without an issue. However, when I try to upload an image from an iPhone, the resolution becomes 1080x1081. Confused by this, I decided to try saving the image locally as well as sending it to the server, using ImageIO. Strangely enough, using:
BufferedImage img = ImageIO.read(new ByteArrayInputStream(image));
File f = new File("image.jpg"); //Removed file creation for brevity
ImageIO.write(img, "jpg", f);
When I used the "identify" command from Imagick on the local image, the output is:
image.jpg JPEG 1080x1080 1080x1080+0+0 8-bit DirectClass 53.4KB 0.000u 0:00.000
However when I transfer the image from the S3 bucket and do the same thing, I get:
image2.jpg JPEG 1080x1081 1080x1081+0+0 8-bit DirectClass 56.2KB 0.000u 0:00.000
Again, this only occurs with uploads from iOS. The image itself is still valid and opens without an issue. For example:
[Image: face-down camera photo]
(Picture was taken while the camera was facing down, hence the dark image)
I suspect the issue may have something to do with differences in how the images are encoded into Base64. I use the Bouncy Castle library for the encoding on Android, as well as the decoding on the webserver, and an NSData function in Swift, as shown below:
let imageData = UIImageJPEGRepresentation(normImg, 0.4)
let base64 = NSString(data: imageData!.base64EncodedDataWithOptions(NSDataBase64EncodingOptions(rawValue: 0)), encoding: NSUTF8StringEncoding) as! String
What confuses me the most is how ~3KB of data (which makes sense for 1080 extra pixels at 3 bytes per RGB pixel) seems to just be appearing out of nowhere, and why it seems to be isolated to iOS.
Hopefully I was clear enough with my question and the debugging I have done to provide as much information as possible. If you need any more information, please let me know!
Edit: Just ran a test and uploaded a 1080x1080 image by command line to the S3 bucket. It also adds the extra pixels.
Edit 2: Decided to change how the file uploading process worked. It is no longer converted to Base64; the NSData of the UIImage is converted into a byte (UInt8) array and uploaded. This still leads to the same result of having the extra pixel, which rules out my original speculation that it had something to do with the Base64 encoding. Android still functions properly, on the other hand, which leads me to believe that the binary data of the image is not completely valid for whatever reason. I believe the EXIF data is formatted differently between Android and iOS, though I don't know what would cause this.
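One more check worth running (a hedged sketch using the same AWS SDK v1 client as above; the bucket and key are illustrative): download the object back from S3 and compare digests of the uploaded and downloaded bytes. If they match, the extra row was already in the iOS JPEG before upload; if they differ, something between the server and S3 altered the bytes.
import java.io.InputStream;
import java.security.MessageDigest;
import java.util.Arrays;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

// 'image' is the byte[] that was uploaded earlier
MessageDigest md = MessageDigest.getInstance("MD5");
byte[] uploaded = md.digest(image);

AmazonS3 s3 = new AmazonS3Client();
md.reset();
try (InputStream in = s3.getObject("bucket", "images/" + imageid + ".jpg").getObjectContent()) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1) {
        md.update(buf, 0, n); // digest the downloaded bytes as they stream in
    }
}
byte[] downloaded = md.digest();
System.out.println(Arrays.equals(uploaded, downloaded) ? "bytes identical" : "bytes differ");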

Write region (100x100px) to huge file without reading the target jpeg

Is it actually possible to write a region (a small 100x100 px block) into an image (250k x 250k px) without reading the whole target image? My region is only 100 px square, and I'd like to store it at a certain location in a huge JPEG.
Thanks for your hints,
Durin
This is probably not what you are looking for, but I'm adding the answer, should anyone else need a solution. :-)
The ImageIO API does support writing a region into a file. However, this support is format specific, and as already pointed out by other answers, JPEG (and most other compressed formats) is not such a format.
public void replacePixelsTest(BufferedImage replacement) throws IOException {
    // Should point to an existing image, in a format supporting pixel replacement (not tested)
    File target = new File("path/to/file.tif");

    // Find a writer, using the suffix of the existing file
    // (FileUtils.suffix is a small helper that returns the file name extension)
    ImageWriter writer = ImageIO.getImageWritersBySuffix(FileUtils.suffix(target)).next();
    ImageWriteParam param = writer.getDefaultWriteParam();

    ImageOutputStream output = ImageIO.createImageOutputStream(target);
    writer.setOutput(output);

    // Test if the writer supports replacing pixels
    if (writer.canReplacePixels(0)) {
        // Set the region we want to replace
        writer.prepareReplacePixels(0, new Rectangle(0, 0, 100, 100));
        // The replacement image is clipped against the region prepared above
        writer.replacePixels(replacement, param);
        // We're done updating the image
        writer.endReplacePixels();
    }
    else {
        // If the writer doesn't support it, we're out of luck...
    }

    output.close(); // You probably want this in a finally block, but it clutters the example...
}
For a raw format like BMP you would just need to know where to write to.
But JPEG is a (lossy) compressed format. You would have to keep the data coherent with the compression algorithm, so writing something into the middle of the image would require the format to support it. I don't know JPEG in detail, but I don't think this is a feature of it.
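To illustrate the raw-format point with a sketch (this assumes a headerless 8-bit grayscale file where you know the row width; a real BMP would additionally need the header offset, bottom-up row order and row padding taken into account):
import java.io.IOException;
import java.io.RandomAccessFile;

// Overwrite a tile of pixels in place by seeking to each affected row
static void patchRaw(RandomAccessFile file, long rowWidth, long x, long y, byte[][] tile) throws IOException {
    for (int row = 0; row < tile.length; row++) {
        file.seek((y + row) * rowWidth + x); // offset of the tile's start within this row
        file.write(tile[row]);
    }
}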

How to play 3gp file in Java?

This is a Java question.
I'm trying to use Fobs4jmf to play a 3GP file. I can see the video, but there is no sound.
Is there any solution?
I also tried a newer library called Xuggler, but I only see how to manipulate and modify video rather than play a video file. Is it possible to use it to play video and sound?
Here is the audio file that cannot be played by Fobs4jmf (pure sound file):
http://gonow.no-ip.org/example.3gp
Thanks
Your example.3gp reports its properties as follows (using KMP Player):
C:\Documents and Settings\DEVELOPER\Desktop\example.3gp
General
Complete name : C:\Documents and Settings\DEVELOPER\Desktop\example.3gp
Format : MPEG-4
Format profile : 3GPP Media Release 4
Codec ID : 3gp4
File size : 5.23 KiB
Duration : 3s 460ms
Overall bit rate : 12.4 Kbps
Audio #1
ID : 1
Format : AMR
Format/Info : Adaptive Multi-Rate
Format profile : Narrow band
Codec ID : samr
Duration : 3s 460ms
Bit rate mode : Constant
Bit rate : 5 200 bps
Channel(s) : 1 channel
Sampling rate : 8 000 Hz
Resolution : 16 bits
Stream size : 2.20 KiB (42%)
Title : SoundHandle
Writing library :
You can use the libVLC/VLC library via VLCJ, which is supposed to be able to open any media format and container. But the problem is that the GPL version of libVLC/VLC doesn't support the AMR audio format used in the 3GP container, per the following statement (from Wikipedia):
To use AMR as audio codec, VLC and FFmpeg must be compiled with AMR
support. This is because the AMR license is not compatible with the
VLC license.
Moreover, when referring to this message http://mailman.videolan.org/pipermail/vlc-devel/2011-February/078807.html, it says:
In any case, parsing AMR is done by libavformat and libavcodec from
the FFmpeg project, not directly by the VideoLAN project.
Going through the message threads above, even a build that is able to open the AMR audio format (via the restricted version) has problems with AMR file seeking:
http://mailman.videolan.org/pipermail/vlc-devel/2011-February/078814.html
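If the AMR restriction is acceptable (or you use a build compiled with AMR support), a minimal headless playback sketch with vlcj might look like this (it assumes a local VLC installation and the uk.co.caprica:vlcj dependency; this is the vlcj 4.x API, and older versions differ):
import uk.co.caprica.vlcj.factory.MediaPlayerFactory;
import uk.co.caprica.vlcj.player.base.MediaPlayer;

public class Play3gp {
    public static void main(String[] args) throws Exception {
        MediaPlayerFactory factory = new MediaPlayerFactory();
        MediaPlayer player = factory.mediaPlayers().newMediaPlayer();
        player.media().play("example.3gp"); // audio-only 3GP plays without a video surface
        Thread.sleep(5000); // crude: wait for the short clip to finish
        player.release();
        factory.release();
    }
}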

Can I use libjpeg to read JPEGs with an alpha channel?

There seems to be some debate about whether JPEGs with alpha channels are valid or not. The answer I had always understood to be correct is the one in the JPEG FAQ, which is essentially "no". (This is reiterated in another question on Stack Overflow.)
However, Java's JPEGImageWriter in Sun's ImageIO library will happily write and read grayscale and RGB images with an alpha channel, even though there are virtually no applications on Linux I've tried so far that will load such JPEGs correctly. This has been reported in the past as a bug, but Sun's response is that these are valid files:
This is not an Image I/O bug, but rather a deficiency in the other applications
the submitter mentions. The IIO JPEGImageWriter is able to write images with
a color model that contains an alpha channel (referred to in the IJG native
source code as the "NIFTY" color spaces, such as RGBA, YCbCrA, etc.), but many applications are not aware of these color spaces. So even though these images
written by the IIO JPEG writer are compliant with the JPEG specification (which
is blind to the various color space possiblities), some applications may not
recognize color spaces that contain an alpha channel and may throw an
error or render a corrupted image, as the submitter describes.
Developers wishing to maintain compatibility with these other alpha-unaware
applications should write images that do not contain an alpha channel (such as
TYPE_INT_RGB). Developers who want the capability of writing/reading an image
containing an alpha channel in the JPEG format can do so using the Image I/O
API, but need to be aware that many native applications out there are not quite
compliant with the YCbCrA and RGBA formats.
For more information, see the Image I/O JPEG Metadata Format Specification and Usage Notes:
http://java.sun.com/j2se/1.4.1/docs/api/javax/imageio/metadata/doc-files/jpeg_metadata.html
Closing as "not a bug".
xxxxx#xxxxx 2003-03-24
I'm working with a Java application that creates files like these, and want to write some C code that will load these as fast as possible. (Essentially the problem is that the Java ImageIO library is remarkably slow at decompressing these files, and we would like to replace the loader with native code via JNI that improves this - it's a performance bottleneck at the moment.)
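For reference, the Java side that produces these files is nothing exotic; a minimal sketch (on the older Sun JDKs in question, the JPEG writer accepts a TYPE_INT_ARGB image directly):
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class WriteRgbaJpeg {
    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        g.setColor(new Color(255, 0, 0, 128)); // semi-transparent red
        g.fillOval(10, 10, 80, 80);
        g.dispose();
        // Writes a 4-component JPEG that many libjpeg-based apps misread as CMYK
        ImageIO.write(img, "jpeg", new File("rgba.jpg"));
    }
}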
There are some example files here - apologies to anyone who's coulrophobic:
http://mythic-beasts.com/~mark/example-jpegs/
And here you can see the results of trying to view the grayscale+alpha and RGB+alpha images with various bits of Linux software that I believe use libjpeg:
[Image: grayscale image with alpha channel viewed with various programs - http://mythic-beasts.com/~mark/all-alpha-bridges.png (source: mark at mythic-beasts.com)]
So it looks as if the color space is just being misinterpreted in each case. The only allowed values in jpeglib.h are:
/* Known color spaces. */
typedef enum {
    JCS_UNKNOWN,   /* error/unspecified */
    JCS_GRAYSCALE, /* monochrome */
    JCS_RGB,       /* red/green/blue */
    JCS_YCbCr,     /* Y/Cb/Cr (also known as YUV) */
    JCS_CMYK,      /* C/M/Y/K */
    JCS_YCCK       /* Y/Cb/Cr/K */
} J_COLOR_SPACE;
... which doesn't look promising.
If I load these images with a slightly modified version of example.c from libjpeg, the values of cinfo.jpeg_color_space and cinfo.out_color_space for each image after reading the header are as follows:
gray-normal.jpg: jpeg_color_space is JCS_GRAYSCALE, out_color_space is JCS_GRAYSCALE
gray-alpha.jpg: jpeg_color_space is JCS_CMYK, out_color_space is JCS_CMYK
rgb-normal.jpg: jpeg_color_space is JCS_YCbCr, out_color_space is JCS_RGB
rgb-alpha.jpg: jpeg_color_space is JCS_CMYK, out_color_space is JCS_CMYK
So, my questions are:
Can libjpeg be used to correctly read these files?
If not, is there an alternative C library I can use which will cope with them?
Obviously there are at least two other solutions to the more general problem:
Change the software to output normal JPEGs + a PNG file representing the alpha channel
Somehow improve the performance of Sun's ImageIO
... but the first would involve a lot of code changes, and it's not clear how to go about the latter. In any case, I think that the question of how to use libjpeg to load such files is likely to be one of more general interest.
Any suggestions would be much appreciated.
Even if you store your images as 4-channel JPEGs, there's no standardized way I know of to specify the color format in the JPEG file.
The JFIF standard assumes YCbCr.
Have you already tried libjpeg-turbo? It is supposed to be able to decode RGBA and there is already a Java wrapper for it.
I have tried running libjpeg-turbo on color images that have an alpha channel and have been saved as JPEG with Java's ImageIO.
This is how I compiled libjpeg-turbo for linux 64-bit:
$ autoreconf -fiv
$ mkdir build
$ cd build
$ sh ../configure --with-java CPPFLAGS="-I$JAVA_HOME/include -I$JAVA_HOME/include/linux"
$ make
$ cd ..
$ mkdir my-install
$ cd build
$ make install prefix=$PWD/../my-install libdir=$PWD/../my-install/lib64
This is how I run Fiji with the correct library path and classpath, to include libjpeg-turbo:
$ cd Programming/fiji
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PWD/../java/libjpeg-turbo/libjpeg-turbo/my-install/lib64
$ ./fiji -cp $PWD/../java/libjpeg-turbo/libjpeg-turbo/my-install/classes/turbojpeg.jar
This is a small Jython script to read such JPEG+alpha files:
######
path = "/home/albert/Desktop/t2/trakem2.1263462814399.1347985440.1111111/trakem2.mipmaps/0/17.07may04b_GridID02043_Insertion001_00013gr_00005sq_00014ex.tif.jpg"
from org.libjpegturbo.turbojpeg import TJDecompressor, TJ
from java.io import File, FileInputStream
from java.awt.image import BufferedImage
from jarray import zeros
# ImagePlus and ColorProcessor come from ImageJ/Fiji (the ij packages);
# the original listing used them without importing them
from ij import ImagePlus
from ij.process import ColorProcessor
f = File(path)
fis = FileInputStream(f)
b = zeros(fis.available(), 'b')
print len(b)
fis.read(b)
fis.close()
d = TJDecompressor(b)
print d.getWidth(), d.getHeight()
bi = d.decompress(d.getWidth(), d.getHeight(), BufferedImage.TYPE_INT_ARGB, 0)
ImagePlus("that", ColorProcessor(bi)).show()
####
The problem: no matter what flag I use (the '0' above in the call to decompress) from the TJ class (see http://libjpeg-turbo.svn.sourceforge.net/viewvc/libjpeg-turbo/trunk/java/doc/org/libjpegturbo/turbojpeg/TJ.html), I cannot get the jpeg to load.
Here's the error message:
Started turbojpeg.py at Thu Jun 02 12:36:58 EDT 2011
Traceback (most recent call last):
File "", line 15, in
at org.libjpegturbo.turbojpeg.TJDecompressor.decompressHeader(Native Method)
at org.libjpegturbo.turbojpeg.TJDecompressor.setJPEGImage(TJDecompressor.java:89)
at org.libjpegturbo.turbojpeg.TJDecompressor.(TJDecompressor.java:58)
at sun.reflect.GeneratedConstructorAccessor10.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.python.core.PyReflectedConstructor.constructProxy(PyReflectedConstructor.java:210)
java.lang.Exception: java.lang.Exception: tjDecompressHeader2(): Could not determine subsampling type for JPEG image
So it appears that either libjpeg-turbo cannot read JPEGs with alpha as saved by ImageIO, or there is a very non-obvious setting in the call to "decompress" that I am unable to grasp.
