How to get audio data from an MP3? - java

I'm working on an application that has to process audio files. When using MP3 files I'm not sure how to handle the data (the data I'm interested in is the audio bytes, the ones that represent what we hear).
If I'm using a WAV file I know I have a 44-byte header and then the data. When it comes to an MP3, I've read that it is composed of frames, each frame containing a header and audio data. Is it possible to get all the audio data from an MP3 file?
I'm using Java (I've added MP3SPI, JLayer, and Tritonus) and I'm able to get the bytes from the file, but I'm not sure what these bytes represent or how to handle them.

From the documentation for MP3SPI:
File file = new File(filename);
AudioInputStream in = AudioSystem.getAudioInputStream(file);
AudioInputStream din = null;
AudioFormat baseFormat = in.getFormat();
// Describe the decoded target format: 16-bit signed little-endian PCM,
// same sample rate and channel count as the MP3 stream.
AudioFormat decodedFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                                            baseFormat.getSampleRate(),
                                            16,
                                            baseFormat.getChannels(),
                                            baseFormat.getChannels() * 2,
                                            baseFormat.getSampleRate(),
                                            false);
// The MP3SPI/JLayer decoder supplies a stream that delivers decoded PCM data.
din = AudioSystem.getAudioInputStream(decodedFormat, in);
You then just read data from din - it will be the "raw" data as per decodedFormat. (See the docs for AudioFormat for more information.)
(Note that this sample code doesn't close the stream or anything like that - use appropriate try/finally blocks as normal.)
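As a rough sketch of that read loop (reusing the din and in streams from the snippet above; the buffer size is arbitrary):

byte[] buffer = new byte[4096];
try {
    int bytesRead;
    while ((bytesRead = din.read(buffer, 0, buffer.length)) != -1) {
        // buffer[0 .. bytesRead) now holds decoded PCM bytes:
        // 16-bit signed, little-endian, channels interleaved (per decodedFormat above).
    }
} finally {
    din.close();
    in.close();
}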

The data that you want are the actual samples, while MP3 represents the data differently. So, as everyone else has said, you need a library to decode the MP3 data into actual samples for your purpose.
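If it helps to see what those bytes mean: with the 16-bit signed little-endian decodedFormat above, every two bytes form one sample, and for stereo the channels are interleaved (left, right, left, right, ...). A small sketch, reusing the buffer and bytesRead variables from the read loop above:

// Convert the raw bytes of one decoded chunk into 16-bit sample values.
short[] samples = new short[bytesRead / 2];
for (int i = 0; i < samples.length; i++) {
    int lo = buffer[2 * i] & 0xFF;   // low byte (treated as unsigned)
    int hi = buffer[2 * i + 1];      // high byte (carries the sign)
    samples[i] = (short) ((hi << 8) | lo);
}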

As mentioned in the other answers, you need a decoder to decode MP3 into regular audio samples.
One popular option would be JavaLayer (LGPL).
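As a sketch of that route (note: the Converter class and its convert(String, String) method are quoted from memory from the JLayer distribution; treat them as an assumption and check the JLayer javadoc before relying on this):

import javazoom.jl.converter.Converter;

public class Mp3ToWav {
    public static void main(String[] args) throws Exception {
        // Decode the MP3 frames to PCM and write them out as a WAV file,
        // which can then be read with plain javax.sound.sampled.
        new Converter().convert("input.mp3", "decoded.wav");
    }
}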

Related

Issues with SourceDataLine format support

I have an application written in Java in which I need to play audio. I used OpenAL (with the java-openal library) for the task; however, I would like to use WSOLA, which is not supported by OpenAL directly. I found a nice Java-native library called TarsosDSP which has support for WSOLA.
The library uses standard Java APIs for audio output. The issue occurs during the SourceDataLine setup:
IllegalArgumentException: No line matching interface SourceDataLine supporting format PCM_UNSIGNED 16000.0 Hz, 16 bit, mono, 2 bytes/frame, little-endian is supported.
I made sure the issue is not caused by the lack of permissions (ran it as root on Linux + tried it on Windows 10) and there are no other SourceDataLines used in the project.
After tinkering with the format I found out that it is accepted when it's changed from PCM_UNSIGNED to PCM_SIGNED. It seems like a minor issue, since shifting the byte range from unsigned to signed should be pretty easy. However, it's weird that it's not supported natively.
So, is there some solution in which I wouldn't have to modify the source data?
Thanks, Jan
You don't have to move the byte range by hand. After you've created an AudioInputStream, you create a second AudioInputStream with a signed format that is connected to the first, unsigned stream. If you then read the data using the signed stream, the Sound API automatically converts the format. This way you don't need to modify the source data.
File fileWithUnsignedFormat;
AudioInputStream sourceInputStream;
AudioInputStream targetInputStream;
AudioFormat sourceFormat;
AudioFormat targetFormat;
SourceDataLine sourceDataLine;

sourceInputStream = AudioSystem.getAudioInputStream(fileWithUnsignedFormat);
sourceFormat = sourceInputStream.getFormat();

// Same parameters as the source; only the encoding changes to PCM_SIGNED.
targetFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                               sourceFormat.getSampleRate(),
                               sourceFormat.getSampleSizeInBits(),
                               sourceFormat.getChannels(),
                               sourceFormat.getFrameSize(),
                               sourceFormat.getFrameRate(),
                               false);

// The Sound API converts the unsigned samples to signed on the fly.
targetInputStream = AudioSystem.getAudioInputStream(targetFormat, sourceInputStream);

DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, targetFormat);
sourceDataLine = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
sourceDataLine.open(targetFormat);
sourceDataLine.start();

// schematic
targetInputStream.read(byteArray, 0, byteArray.length);
sourceDataLine.write(byteArray, 0, byteArray.length);
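To make the schematic part concrete, a minimal playback loop might look like this (byteArray is just a buffer you allocate yourself; add exception handling as usual):

byte[] byteArray = new byte[4096];
int n;
while ((n = targetInputStream.read(byteArray, 0, byteArray.length)) != -1) {
    sourceDataLine.write(byteArray, 0, n);  // blocks until the line has consumed the data
}
sourceDataLine.drain();  // let the remaining buffered audio play out
sourceDataLine.stop();
sourceDataLine.close();
targetInputStream.close();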

How can I write the contents of a SourceDataLine to a file?

I am modifying an application that plays audio data so that it writes the data to a file instead. As currently implemented, a byte array is filled dynamically, and the contents of this buffer are written to a SourceDataLine each time it is filled. I basically want to write that buffer out to a file in a specified format.
I have read through this official tutorial and came across this code snippet for writing audio data to a file:
File fileOut = new File(someNewPathName);
AudioFileFormat.Type fileType = fileFormat.getType();
if (AudioSystem.isFileTypeSupported(fileType, audioInputStream)) {
    AudioSystem.write(audioInputStream, fileType, fileOut);
}
I see from the API documentation that I can construct an AudioInputStream using a TargetDataLine; however, in my case I have a SourceDataLine. I don't know how to get the data from my byte array into the TargetDataLine, since it implements the read() method instead of write(). Other uses of AudioInputStream in that and other documentation treat it as a way of reading from a file, so I'm a little confused by its use with AudioSystem.write().
So, how can I get the data from a SourceDataLine, or from the buffer directly, into a TargetDataLine or AudioInputStream so that it can be written out to a file?
Use the byte[] to establish a ByteArrayInputStream
Provide the BAIS to AudioSystem.getAudioInputStream(InputStream)
Use the AIS in AudioSystem.write(..) (see the sketch below)
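A minimal sketch under the assumption that the buffer holds raw 16-bit PCM whose AudioFormat you already know; because a raw buffer carries no file header for AudioSystem to detect, this variant wraps the BAIS with the AudioInputStream(InputStream, AudioFormat, long) constructor before handing it to AudioSystem.write:

import java.io.ByteArrayInputStream;
import java.io.File;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

// 'audioBytes' is the buffer you have been feeding to the SourceDataLine,
// 'format' is the AudioFormat the line was opened with (both assumed known here).
ByteArrayInputStream bais = new ByteArrayInputStream(audioBytes);
long frameLength = audioBytes.length / format.getFrameSize();
AudioInputStream ais = new AudioInputStream(bais, format, frameLength);
AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("out.wav"));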

Audio Streaming from disk in java Servlets

To stream an audio file I have implemented the following code, but I am getting an exception:
javax.sound.sampled.UnsupportedAudioFileException: could not get audio input stream from input file
at javax.sound.sampled.AudioSystem.getAudioInputStream(AudioSystem.java:1170)
Can anyone help me, please?
try {
    // From file
    System.out.println("hhhhhhhhhhhhhhhh");
    AudioInputStream stream = AudioSystem.getAudioInputStream(new File("C:\\track1.mp3"));
    System.out.println("stream created");
    AudioFormat format = stream.getFormat();
    if (format.getEncoding() != AudioFormat.Encoding.PCM_SIGNED) {
        System.out.println("in if");
        format = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                format.getSampleRate(),
                format.getSampleSizeInBits() * 2,
                format.getChannels(),
                format.getFrameSize() * 2,
                format.getFrameRate(),
                true); // big endian
        stream = AudioSystem.getAudioInputStream(format, stream);
    }

    // Create line
    SourceDataLine.Info info = new DataLine.Info(
            SourceDataLine.class, stream.getFormat(),
            ((int) stream.getFrameLength() * format.getFrameSize()));
    SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
    line.open(stream.getFormat());
    line.start();

    // Continuously read and play chunks of audio
    int numRead = 0;
    byte[] buf = new byte[line.getBufferSize()];
    while ((numRead = stream.read(buf, 0, buf.length)) >= 0) {
        int offset = 0;
        while (offset < numRead) {
            offset += line.write(buf, offset, numRead - offset);
        }
    }
    line.drain();
    line.stop();
}
That you're doing this job in a servlet class gives me the impression that your intent is to play the mp3 file whenever someone visits your website, and that the visitor should hear this mp3 file.
If true, I'm sorry to say, but you're approaching this entirely wrong. Java servlet code runs on the webserver machine, not on the webbrowser machine. Whenever someone visits your website, the mp3 file would only be played on the webserver machine. That is usually a physically completely different machine at the other side of the network connection, and the visitor is never going to hear the music.
You want to send the mp3 file raw (unmodified, byte by byte) from the webserver to the webbrowser, without massaging it with some Java Audio API, and instruct the webbrowser to play the file. The easiest way is to just drop the mp3 file in the public webcontent (where your HTML/JSP files also are) and use the HTML <embed> tag to embed it in your HTML/JSP file. The example below assumes the MP3 file to be in the same folder as the HTML/JSP file:
<embed src="file.mp3" autostart="true"></embed>
That's all; this is supported in practically every browser, and it will show a player as well.
If the MP3 file is, by business requirement, stored outside the public webcontent, then you may indeed need a servlet for it, but the servlet should do absolutely nothing more than get an InputStream of it in some way and write it unmodified to the OutputStream of the HttpServletResponse the usual Java IO way. You only need to set the HTTP Content-Type header to audio/mpeg beforehand and, if possible, also the HTTP Content-Length header. Then point the src to the servlet's URL instead.
<embed src="mp3servlet" autostart="true"></embed>
The default Java AudioInputStream does not support mp3 files. You have to plug in MP3SPI to let it decode mp3.
Also, what do you mean by streaming? This code will play the audio file, not stream it as in internet radio streaming.

How can I play an ALAW file in a Java application?

I need to be able to play ALAW files in a Java (desktop) application.
I've tried to follow the example at:
How to play audio in Java Application
I've created a File object from the ALAW file (which exists, according to a check) and sent that File to a method where the first thing that happens is this:
AudioInputStream ais = AudioSystem.getAudioInputStream(file);
But this is where the execution stops, since I get this exception:
javax.sound.sampled.UnsupportedAudioFileException: could not get audio input stream from input file
I see that there is a way to convert ALAW files if the check (ais.getFormat().getEncoding() == AudioFormat.Encoding.ALAW) is true, but how can I get there if it's not even possible to create the AudioInputStream?
Anyone who has worked with ALAW files and has an idea of what I should do?
Is there a way to convert the ALAW files programmatically before calling AudioSystem.getAudioInputStream(file)?
I really need to make this work!
Get the existing file format from your AudioInputStream.
filepath is a String with the path to your file, which you can obtain, for example, like this:
String filename="x.y";
File file = new File(filename);
String filepath=file.getCanonicalPath();
Then the main conversion is done by:
AudioInputStream inputStream = AudioSystem.getAudioInputStream(new File(filepath));
AudioFormat format = inputStream.getFormat();
AudioInputStream convertedInputStream;
After that, add a condition that checks whether your file's encoding is ALAW or ULAW and converts it to PCM, which can be played by the sound card:
if ((format.getEncoding() == AudioFormat.Encoding.ULAW)
        || (format.getEncoding() == AudioFormat.Encoding.ALAW)) {
    AudioFormat tmp = new AudioFormat(
            AudioFormat.Encoding.PCM_SIGNED,
            format.getSampleRate(),
            format.getSampleSizeInBits() * 2,
            format.getChannels(),
            format.getFrameSize() * 2,
            format.getFrameRate(),
            true);
    convertedInputStream = AudioSystem.getAudioInputStream(tmp, inputStream);
    format = tmp;
}
This code will convert the ALAW/ULAW format of your AudioInputStream to PCM_SIGNED.
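To actually hear it, a minimal sketch that feeds the converted stream to a SourceDataLine (reusing the convertedInputStream and format variables from the snippet above; exception handling omitted):

DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
line.open(format);
line.start();

byte[] buffer = new byte[4096];
int n;
while ((n = convertedInputStream.read(buffer, 0, buffer.length)) != -1) {
    line.write(buffer, 0, n);
}
line.drain();
line.close();
convertedInputStream.close();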
JMF will help in this case.
http://www2.sys-con.com/itsg/virtualcd/java/archives/0503/decarmo/index.html

How can I decode OGG vorbis data from a ByteBuffer?

The libraries I have found so far only have methods to decode from a file or InputStream. I have a ByteBuffer with OGG Vorbis data and I need it decoded to PCM without having to write it to a file first.
There seem to be 2 parts to this problem.
1) Getting Java Sound to deal with OGG Vorbis format.
2) Avoiding the File.
For (1), the Java Sound API allows the addition of extra formats via the Service Provider Interface. The idea is to put an encoder/decoder into a Jar and use a standard path and format of file to identify the class that does the encoding/decoding.
For (2), it is simply a matter of supplying an InputStream and the required AudioFormat to the relevant static methods of AudioSystem. E.g. (pseudo code):
byte[] b = byteBuffer.array();
ByteArrayInputStream bais = new ByteArrayInputStream(b);
AudioInputStream aisOgg = AudioSystem.getAudioInputStream(bais);
AudioInputStream aisPcm = AudioSystem.getAudioInputStream(pcmAudioFormat, aisOgg);
You can use ByteArrayInputStream, which is a subclass of InputStream.
If your stream is very large you will probably have to write it to a file.
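One caveat: ByteBuffer.array() only works for array-backed (non-direct) buffers, so a slightly more defensive sketch copies the remaining bytes out first (pcmAudioFormat is assumed, as in the pseudo code above, and an OGG Vorbis SPI must be on the classpath for getAudioInputStream to recognize the data):

// Works for direct buffers too: copy the remaining bytes into a plain array.
byte[] data = new byte[byteBuffer.remaining()];
byteBuffer.get(data);
ByteArrayInputStream bais = new ByteArrayInputStream(data);
AudioInputStream oggStream = AudioSystem.getAudioInputStream(bais);
AudioInputStream pcmStream = AudioSystem.getAudioInputStream(pcmAudioFormat, oggStream);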
