How to play a 3GP file in Java?

This is a Java question.
I tried using Fobs4jmf to play a 3GP file. I can see the video, but there is no sound.
Is there any solution?
I also tried a newer library called Xuggler, but I have only found examples of manipulating and modifying video rather than playing a video file. Is it possible to use it to play video and sound?
Here is an audio file that cannot be played by Fobs4jmf (a pure sound file):
http://gonow.no-ip.org/example.3gp
Thanks

Inspecting your example.3gp (using KMP Player) reports the following properties:
C:\Documents and Settings\DEVELOPER\Desktop\example.3gp
General
Complete name : C:\Documents and Settings\DEVELOPER\Desktop\example.3gp
Format : MPEG-4
Format profile : 3GPP Media Release 4
Codec ID : 3gp4
File size : 5.23 KiB
Duration : 3s 460ms
Overall bit rate : 12.4 Kbps
Audio #1
ID : 1
Format : AMR
Format/Info : Adaptive Multi-Rate
Format profile : Narrow band
Codec ID : samr
Duration : 3s 460ms
Bit rate mode : Constant
Bit rate : 5 200 bps
Channel(s) : 1 channel
Sampling rate : 8 000 Hz
Resolution : 16 bits
Stream size : 2.20 KiB (42%)
Title : SoundHandle
Writing library :
You can use the libVLC/VLC library via VLCJ, which is supposed to be able to open just about any media format and container. The problem, however, is that the GPL build of libVLC/VLC doesn't support the AMR audio format used in your 3GP container, per the following statement (from Wikipedia):
To use AMR as audio codec, VLC and FFmpeg must be compiled with AMR
support. This is because the AMR license is not compatible with the
VLC license.
Moreover, this message http://mailman.videolan.org/pipermail/vlc-devel/2011-February/078807.html notes:
In any case, parsing AMR is done by libavformat and libavcodec from
the FFmpeg project, not directly by the VideoLAN project.
Going through the message thread above: even if a build with the restricted modules can open AMR audio, it still has problems seeking within AMR files:
http://mailman.videolan.org/pipermail/vlc-devel/2011-February/078814.html
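If you can arrange for a VLC build with AMR support (or accept the missing audio for now), playback through VLCJ itself is straightforward. A minimal sketch, assuming the vlcj 4.x API and a local VLC installation discoverable on the library path; the file name is just an example:
import javax.swing.JFrame;
import uk.co.caprica.vlcj.player.component.EmbeddedMediaPlayerComponent;

public class VlcjPlayer {
    public static void main(String[] args) {
        // Embeds a native VLC media player inside a Swing window
        JFrame frame = new JFrame("3GP Player");
        EmbeddedMediaPlayerComponent player = new EmbeddedMediaPlayerComponent();
        frame.setContentPane(player);
        frame.setSize(640, 480);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
        // VLC picks the demuxer/decoders itself; codec support depends on the VLC build
        player.mediaPlayer().media().play("example.3gp");
    }
}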

Related

Accessing GRIB data using GRIB2Tools causes IndexOutOfBoundsException

I'm manipulating GRIB2 forecast files and I'm having trouble using the GRIB2Tools library.
I have an Array[Byte] representing the content of a GRIB2 dataset. Because I want to be able to get the value at a specific location, I wrote this variable's content to a file, which I then load as an InputStream to use with getValueAtLocation(id, lat, long) and/or interpolateValueAtLocation(id, lat, long). I can read the file's metadata perfectly, but as soon as I call either of those two methods, I get an IndexOutOfBoundsException.
Here is the Scala code I use to write the GRIB2 bytes array (variable bytes) on a file and then load it as an InputStream:
// Write the GRIB2 byte array (variable bytes) to a file...
val file: File = new File("my-data.grib")
FileUtils.writeByteArrayToFile(file, bytes) // Apache Commons IO; returns Unit
// ...then read it back and feed it to Grib2Tools
val input = new FileInputStream("my-data.grib")
val grib: RandomAccessGribFile = new RandomAccessGribFile("my-grib", "my-data.grib")
grib.importFromStream(input, 0)
According to the README.md I am doing it right, aren't I?
Then I can easily get those metadata from the GRIB2 (using some code of the GRIB2FileTest.java):
Body format : GRIB2
Date: 12.10.2021
Time: 9:0.0
Generating centre: 85
Forecast time: 5
Parameter category: 0
Parameter number: 0
Covered area:
from (latitude, longitude): 51.47, 348.0
to: (latitude, longitude): 37.5, 16.0
When calling getValueAtLocation(id, lat, long) and interpolateValueAtLocation(id, lat, long) with id = 0, lat = 48, and long = 2 (which seems to be fine given the metadata), I get this:
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:551)
at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:327)
at com.ph.grib2tools.grib2file.RandomAccessGribFile.interpolateValueAt(RandomAccessGribFile.java:196)
at com.ph.grib2tools.grib2file.RandomAccessGribFile.interpolateValueAtLocation(RandomAccessGribFile.java:133)
The faulting line seems to be this one, at RandomAccessGribFile.java:196:
float val11 = sec5.calcValue(ByteBuffer.wrap(data).getShort((jidx1*gridDefinition.numberPointsLon+iidx1)*bytesperval));
Am I doing something wrong, or is there an issue with the library source code or with my GRIB file? The file comes from a national forecast agency and should be fine. The structure of the GRIB2 file is shown in the attached screenshot (from the Panoply software).
After some research with the developer of the Grib2Tools library, it turned out that the issue was coming from my GRIB file. My country's national forecast organization, Météo France, adds a bitmap on top of the data to indicate whether or not a value is available at given coordinates. This feature wasn't supported by Grib2Tools, which led to the data being misread and to errors at execution. Bitmap support will be added soon thanks to the developer.
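As an aside: since importFromStream() accepts any InputStream, the temporary file in the question looks avoidable. A hypothetical sketch in Java, mirroring the constructor and importFromStream() calls from the Scala snippet, and assuming the stream is the only place Grib2Tools reads the bytes from:
import java.io.ByteArrayInputStream;
import com.ph.grib2tools.grib2file.RandomAccessGribFile;

public class GribFromBytes {
    public static RandomAccessGribFile load(byte[] bytes) {
        // Same constructor arguments as in the question's snippet
        RandomAccessGribFile grib = new RandomAccessGribFile("my-grib", "my-data.grib");
        // Feed the in-memory bytes directly instead of a FileInputStream
        grib.importFromStream(new ByteArrayInputStream(bytes), 0);
        return grib;
    }
}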

How to get the length of an audio file from a link to a website in Kotlin / Java?

I'm trying to get the length of an audio file. Sadly I run into some issues trying to retrieve that file. This is my code (which is in Kotlin):
val inputStream = AudioSystem.getAudioInputStream(URL(url))
val format = inputStream.format
val durationSeconds = inputStream.frameLength / format.frameRate
lengthTicks = (durationSeconds * 20).toDouble()
The link I use is https://cdn.bandithemepark.net/lindburgh/HyperionHal.mp3
When my code runs, I get "UnsupportedAudioFileException: URL of unsupported format".
I am unsure why I am getting this error, since MP3 looks like a pretty normal file format to me. I also tried using an MP4 file, but I got the same error with that. Does anybody know what is happening here?
According to the docs:
The provided reference implementation of this API supports the following features:
Audio file formats: AIFF, AU and WAV
Music file formats: MIDI Type 0, MIDI Type 1, and Rich Music Format (RMF)
So it does look like MP3 and MP4 are not supported. You'll most likely need a library/plugin.
Deciding on which one you might need is beyond the scope of SO, as that would be an opinion-based answer and is not considered acceptable.
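For illustration only, a hedged sketch of one such plugin approach: JavaZoom's MP3SPI (together with its JLayer and Tritonus dependencies) registers an MP3 reader with javax.sound.sampled and exposes the track duration, in microseconds, under the "duration" property key. The URL is the one from the question; the property key is MP3SPI-specific:
import java.net.URL;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioSystem;

public class Mp3Duration {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://cdn.bandithemepark.net/lindburgh/HyperionHal.mp3");
        // With MP3SPI on the classpath this no longer throws UnsupportedAudioFileException
        AudioFileFormat format = AudioSystem.getAudioFileFormat(url);
        // MP3SPI stores the duration in microseconds in the format's property map
        Long micros = (Long) format.properties().get("duration");
        if (micros != null) {
            double seconds = micros / 1_000_000.0;
            System.out.println("Duration: " + seconds + " s (" + (seconds * 20) + " ticks)");
        }
    }
}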

Can I read FDX-B RFID using Android smartphone?

My pet has an ISO 11784/5 FDX-B tag, which is theoretically an RFID tag, but can I read this tag using my Android smartphone? Which properties should I use in my custom app to read this tag?
NFC (13.56 MHz) is a subset of RFID, but FDX-B uses 134.2 kHz, so the two are not compatible.
But there are cheap 125 kHz RFID readers available (around 12€) which can read the tag. Here is a tutorial that shows how you could connect one to an Arduino: https://www.instructables.com/id/Arduino-and-RFID-from-seeedstudio/

How to detect streams and types within a media container

I have a Java servlet method that evaluates an uploaded file to determine its type, and the file is then passed on to ffmpeg for processing. Tika is used to detect the type of file. Right now it seems that either Tika, or my implementation of it, can only determine the container rather than the stream within the container, which leads to a detection issue where the container might be MP4 but the stream within is AAC audio. In my Java code I have the following:
// Tika returns a MIME type string such as "video/3gpp"
String fileType = tika.detect(uploadedFile);
String[] mediaAndType = fileType.split("/");
media = mediaAndType[0];     // e.g. "video"
mediatype = mediaAndType[1]; // e.g. "3gpp"
I have a file with the extension .m4a that Tika detects as "video/3gpp"
and MediaInfo detects as:
Format : MPEG-4
Format profile : 3GPP Media Release 4
Codec ID : 3gp4 (isom/3gp4)
There is only one stream in the file:
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : mp4a-40-2
If I can detect that the file has only an audio format stream in the container, then I can hand it off to ffmpeg for audio processing vs video processing (which throws an error when processing the file above).
I chose Tika originally because it is simple, fast, and easy to implement, so if there is a way to use Tika to get what I want, that would be best, but I am open to other tools that can determine the number and types of streams within the container. I tried using the MP4Parser in Tika but it didn't return anything useful.
So is there a better way/tool to detect the format of the stream within the container that is fairly lightweight?
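One lightweight route, sketched here as a suggestion rather than a definitive answer: since ffmpeg is already part of the workflow, its companion ffprobe binary can enumerate the streams inside a container. The file name is an example, and ffprobe is assumed to be on the PATH:
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class StreamProbe {
    public static void main(String[] args) throws Exception {
        // Prints one "codec_name,codec_type" line per stream, e.g. "aac,audio"
        Process p = new ProcessBuilder(
                "ffprobe", "-v", "error",
                "-show_entries", "stream=codec_name,codec_type",
                "-of", "csv=p=0",
                "input.m4a")
            .redirectErrorStream(true)
            .start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        }
        p.waitFor();
    }
}
A container holding a single audio stream would then show no "video" entries, which is enough to route the file to audio rather than video processing.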

Xuggler encoding and muxing

I'm trying to use Xuggler (which I believe uses ffmpeg under the hood) to do the following:
Accept a raw MJPEG video bitstream (from a small TTL serial camera) and encode/transcode it to H.264; and
Accept a raw audio bitstream (from a microphone) and encode it to AAC; then
Mux the two (audio and video) bitstreams together into an MPEG-TS container
I've watched/read some of their excellent tutorials, and so far here's what I've got:
// I'll worry about implementing this functionality later, but
// involves querying native device drivers.
byte[] nextMjpeg = getNextMjpegFromSerialPort();
// I'll also worry about implementing this functionality as well;
// I'm simply providing these for thoroughness.
BufferedImage mjpeg = MjpegFactory.newMjpeg(nextMjpeg);
// Specify a h.264 video stream (how?)
String h264Stream = "???";
IMediaWriter writer = ToolFactory.makeWriter(h264Stream);
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264);
writer.encodeVideo(0, mjpeg);
For one, I think I'm close here, but it's still not correct; and I've only gotten this far by reading the video code examples (not the audio - I can't find any good audio examples).
Literally, I'll be getting byte-level access to the raw video and audio feeds coming into my Xuggler implementation. But for the life of me I can't figure out how to get them into an h.264/AAC/MPEG-TS format. Thanks in advance for any help here.
Looking at this Xuggler sample code, the following should work to encode video as H.264 and mux it into an MPEG-TS container:
IMediaWriter writer = ToolFactory.makeWriter("output.ts");
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, width, height);
for (...)
{
BufferedImage mjpeg = ...;
writer.encodeVideo(0, mjpeg);
}
The container type is guessed from the file extension; the codec is specified explicitly.
To mux audio and video, you would do something like this:
writer.addVideoStream(videoStreamIndex, 0, videoCodec, width, height);
writer.addAudioStream(audioStreamIndex, 0, audioCodec, channelCount, sampleRate);
while (... have more data ...)
{
BufferedImage videoFrame = ...;
long videoFrameTime = ...; // this is the time to display this frame
writer.encodeVideo(videoStreamIndex, videoFrame, videoFrameTime, DEFAULT_TIME_UNIT);
short[] audioSamples = ...; // the size of this array should be number of samples * channelCount
long audioSamplesTime = ...; // this is the time to play back this bit of audio
writer.encodeAudio(audioStreamIndex, audioSamples, audioSamplesTime, DEFAULT_TIME_UNIT);
}
In this case I believe your code is responsible for interleaving the audio and video: you want to call either encodeAudio() or encodeVideo() on each pass through the loop, based on which available data (a chunk of audio samples or a video frame) has the earlier timestamp.
There is another, lower-level API you may end up using, based on IStreamCoder, which gives more control over various parameters. I don't think you will need to use that.
To answer the specific questions you asked:
(1) "Encode a BufferedImage (M/JPEG) into a h.264 stream" - you already figured that out, writer.addVideoStream(..., ICodec.ID.CODEC_ID_H264) makes sure you get the H.264 codec. To get a transport stream (MPEG2 TS) container, simply call makeWriter() with a filename with a .ts extension.
(2) "Figure out what the "BufferedImage-equivalent" for a raw audio feed is" - that is either a short[] or an IAudioSamples object (both seem to work, but IAudioSamples has to be constructed from an IBuffer which is much less straightforward).
(3) "Encode this audio class into an AAC audio stream" - call writer.addAudioStream(..., ICodec.ID.CODEC_ID_AAC, channelCount, sampleRate)
(4) "multiplex both stream into the same MPEG-TS container" - call makeWriter() with a .ts filename, which sets the container type. For correct audio/video sync you probably need to call encodeVideo()/encodeAudio() in the correct order.
P.S. Always pass the earliest audio/video available first. For example, if you have audio chunks which are 440 samples long (at 44000 Hz sample rate, 440 / 44000 = 0.01 seconds), and video at exactly 25fps (1 / 25 = 0.04 seconds), you would give them to the writer in this order:
video0 # 0.00 sec
audio0 # 0.00 sec
audio1 # 0.01 sec
audio2 # 0.02 sec
audio3 # 0.03 sec
video1 # 0.04 sec
audio4 # 0.04 sec
audio5 # 0.05 sec
... and so forth
Most playback devices are probably ok with the stream as long as the consecutive audio/video timestamps are relatively close, but this is what you'd do for a perfect mux.
P.P.S. There are a few docs you may want to refer to: the Xuggler class diagram, ToolFactory, IMediaWriter, and ICodec.
I think you should look at GStreamer: http://gstreamer.freedesktop.org/ You would have to look for plugins that can capture the camera input, pipe it to the libx264 and AAC plugins, and then pass both through an mpegts muxer.
A GStreamer pipeline would look like this:
v4l2src queue-size=15 ! video/x-raw,framerate=25/1,width=384,height=576 ! \
avenc_mpeg4 name=venc \
alsasrc ! audio/x-raw,rate=48000,channels=1 ! audioconvert ! lamemp3enc name=aenc \
avimux name=mux ! filesink location=rec.avi venc. ! mux. aenc. ! mux.
In this pipeline, MPEG-4 and MP3 encoders are used and the streams are muxed into AVI. You should be able to find plugins for libx264 and AAC. Let me know if you need further pointers.
