I've confirmed my program "works" with a variety of signed-integer output formats. However, when I attempt to use 32-bit floating point, the output file's metadata reports 32-bit signed integer, and this results in broken playback.
Here's my audio format:
AudioFormat audioFormat = new AudioFormat(AudioFormat.Encoding.PCM_FLOAT,
        48000, // Hz sample rate
        32,    // bits per sample
        2,     // channels
        8,     // bytes per frame
        48000, // Hz frame rate
        false); // little-endian
This is sent to a processor function (which I've confirmed "works" using other output formats):
public void mixToFile(AudioFormat format,
String outputPath,
int totalFrames) throws Exception {
ByteBuffer outputBytes = byteBufferOf(mix()); // the big show
AudioInputStream ais = new AudioInputStream(
new ByteArrayInputStream(outputBytes.array()), format,
totalFrames
);
AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File(outputPath));
}
The result is failure; the file's metadata reports 32-bit signed int:
Playing WAVE '/tmp/output.wav' : Signed 32 bit Little Endian, Rate 48000 Hz, Stereo
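The header can also be inspected from Java itself. A quick diagnostic sketch (if the writer really emitted a signed-integer WAV, this prints PCM_SIGNED rather than PCM_FLOAT):

// Read back the header Java actually wrote.
AudioFileFormat written = AudioSystem.getAudioFileFormat(new File("/tmp/output.wav"));
System.out.println(written.getFormat().getEncoding());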
I'm looking for the Java equivalent of How to write wav file with 32-bit float data?,
which I've dealt with before by manually setting the wFormatTag field in the 'fmt ' chunk to WAVE_FORMAT_IEEE_FLOAT (3) when writing a RIFF container.
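For reference, that manual approach looks roughly like this (a sketch from memory; the helper name is mine, and it assumes interleaved 32-bit float samples follow the header):

// Write a minimal RIFF/WAVE header with wFormatTag = 3 (WAVE_FORMAT_IEEE_FLOAT).
// DataOutputStream writes big-endian, so reverseBytes produces the little-endian
// fields that RIFF requires.
static void writeFloatWavHeader(DataOutputStream out, int dataLen,
                                int sampleRate, int channels) throws IOException {
    int byteRate = sampleRate * channels * 4; // 4 bytes per 32-bit float sample
    out.writeBytes("RIFF");
    out.writeInt(Integer.reverseBytes(36 + dataLen));            // RIFF chunk size
    out.writeBytes("WAVE");
    out.writeBytes("fmt ");
    out.writeInt(Integer.reverseBytes(16));                      // fmt chunk size
    out.writeShort(Short.reverseBytes((short) 3));               // WAVE_FORMAT_IEEE_FLOAT
    out.writeShort(Short.reverseBytes((short) channels));
    out.writeInt(Integer.reverseBytes(sampleRate));
    out.writeInt(Integer.reverseBytes(byteRate));
    out.writeShort(Short.reverseBytes((short) (channels * 4)));  // block align
    out.writeShort(Short.reverseBytes((short) 32));              // bits per sample
    out.writeBytes("data");
    out.writeInt(Integer.reverseBytes(dataLen));
}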
Is it possible to achieve this using AudioSystem.write, AudioInputStream and AudioFormat?
Related
Trying to get the frame size of an audio file, I am getting -1 instead. I tried to look up the meaning of this result in the JavaDoc, but it doesn't say anything useful. Here's the source code:
import javazoom.spi.mpeg.sampled.file.MpegAudioFileReader;
/*....*/
File file = new File("/home/songs/audio.mp3");
MpegAudioFileReader mpegAudioFileReader = new MpegAudioFileReader();
AudioInputStream audioInputStream = mpegAudioFileReader.getAudioInputStream(file);
AudioFormat format = audioInputStream.getFormat();
int frameSize = format.getFrameSize();   // frameSize = -1
float frameRate = format.getFrameRate(); // frameRate = 38.28125
Inspecting the format object gives this: MPEG1L3 44100.0 Hz, unknown bits per sample, stereo, unknown frame size, 38.28125 frames/second.
I do not know why the frame size is unknown, although it does appear in my audio file's properties.
Any help is more than appreciated. Thanks.
getFormat() etc. is implemented by the MPEG SPI provider, so it returns whatever that library supplies - presumably they left the frame size blank or were unable to extract it.
If you read a .wav file instead, you will probably get a concrete value (e.g. 2):
try {
    audioInputStream = AudioSystem.getAudioInputStream(new File(".......wav"));
    System.out.println(audioInputStream.getFormat().getFrameSize());
} catch (Exception e) {
    e.printStackTrace();
}
Other notes: I don't see the frame size in your file-properties display; what's shown there is the sample/bit rate, so be careful to distinguish the two.
But for mp3 you have to live with that.
You can also create your own format if that helps (I don't know your application):
AudioFormat format = audioInputStream.getFormat();
AudioFormat newFormat = new AudioFormat(
        AudioFormat.Encoding.PCM_SIGNED,
        format.getSampleRate(),
        16,                       // bits per sample
        format.getChannels(),
        format.getChannels() * 2, // frame size: 2 bytes per channel
        format.getSampleRate(),
        false);                   // little-endian
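A sketch of how that format could then be applied - assuming a decoder such as the MP3 SPI is on the classpath, AudioSystem can convert the stream, and the converted stream reports a concrete frame size:

// Hypothetical usage: decode the MPEG stream to 16-bit signed PCM.
AudioInputStream decodedStream = AudioSystem.getAudioInputStream(newFormat, audioInputStream);
System.out.println(decodedStream.getFormat().getFrameSize()); // channels * 2 bytes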
I am writing an Android application which sends recorded sound to a server, and I need to adapt its format to the one required. I was told that the server's audio format is specified by the javax.sound.sampled.AudioFormat constructor with the following parameters: AudioFormat(44100, 8, 1, true, true), meaning the sound should have a 44100 Hz sample rate, an 8-bit sample size, a mono channel, and be signed and encoded in big-endian byte order. My question is: how can I convert my recorded sound to that format? I think the biggest problem might be Android's 16-bit restriction on the smallest sample size.
You can record 44100 Hz 8-bit audio directly with AudioRecord, specifying the format in the constructor:
int bufferSize = Math.max(
        AudioRecord.getMinBufferSize(44100,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_8BIT),
        ENOUGH_SIZE_FOR_BUFFER); // your own minimum buffer size constant
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_8BIT, bufferSize);
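One defensive check worth adding (my suggestion, not in the original snippet): on devices that reject 8-bit recording, construction fails silently and the recorder is left uninitialized, so it's worth verifying before use:

// If the device rejected 8-bit recording, the recorder never initializes.
if (audioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
    // fall back to 16-bit recording (see below)
}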
then pull data from audioRecord, using read(byte[], int, int) method:
byte[] myBuf = new byte[bufferSize];
audioRecord.startRecording();
while (audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
    int l = audioRecord.read(myBuf, 0, myBuf.length);
    if (l > 0) {
        // process data
    }
}
In this case the data in the buffer will be just what you want: 8-bit, mono, 44100 Hz.
But some devices may not support 8-bit recording. In that case you can record the data in 16-bit format and obtain it using the read(short[], int, int) method. You then need to convert the samples down to 8-bit yourself:
short[] recordBuf = new short[bufferSize];
byte[] myBuf = new byte[bufferSize];
...
int l = audioRecord.read(recordBuf, 0, recordBuf.length);
if (l > 0) {
    for (int i = 0; i < l; i++) {
        myBuf[i] = (byte) (recordBuf[i] >> 8); // keep the high byte: 16-bit -> 8-bit
    }
    // process data
}
Using the same approach, you can convert any PCM format to any other format.
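For instance, here is a sketch of the same idea applied to a different conversion (the buffer name is illustrative; it assumes a 16-bit interleaved stereo buffer called stereoBuf):

// Mix interleaved stereo down to mono by averaging each frame's two samples.
short[] monoBuf = new short[stereoBuf.length / 2];
for (int i = 0; i < monoBuf.length; i++) {
    monoBuf[i] = (short) ((stereoBuf[2 * i] + stereoBuf[2 * i + 1]) / 2);
}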
I am trying to play an audio stream that is returned to me by a server via UDP. The server uses DPCM to encode the audio, so every byte contains two audio samples. When I play the audio with 8 bits/sample everything works fine, but when I try 16 bits, using AudioFormat DPCM = new AudioFormat(8000,16,1,true,false);, the clip is shorter and not as clear. What am I doing wrong?
ByteArrayOutputStream sound_buffer = new ByteArrayOutputStream();
clientRequest = new DatagramPacket(sound_request_buffer, sound_request_buffer.length);
server.send(clientRequest);
for (int i = 0; i < 100; i++) {
    buffer = new byte[128];
    serverResponse = new DatagramPacket(buffer, buffer.length);
    client.receive(serverResponse);
    sound_buffer.write(buffer, 0, serverResponse.getLength()); // only the bytes received
}
byte[] encoded_sound = sound_buffer.toByteArray();
byte[] decoded_sound = new byte[2 * encoded_sound.length];
// decode the first byte: each nibble is a 4-bit delta biased by 8
byte msnibble = (byte) ((encoded_sound[0] >> 4) & 0x0F);
decoded_sound[0] = (byte) (msnibble - 8);
byte lsnibble = (byte) (encoded_sound[0] & 0x0F);
decoded_sound[1] = (byte) (decoded_sound[0] + lsnibble - 8);
for (int i = 1; i < encoded_sound.length; i++) {
    msnibble = (byte) ((encoded_sound[i] >> 4) & 0x0F);
    decoded_sound[2 * i] = (byte) (decoded_sound[2 * i - 1] + msnibble - 8);
    lsnibble = (byte) (encoded_sound[i] & 0x0F);
    decoded_sound[2 * i + 1] = (byte) (decoded_sound[2 * i] + lsnibble - 8);
}
AudioFormat DPCM = new AudioFormat(8000, 8, 1, true, false);
SourceDataLine lineOut = AudioSystem.getSourceDataLine(DPCM);
lineOut.open(DPCM, decoded_sound.length);
lineOut.start();
lineOut.write(decoded_sound, 0, decoded_sound.length);
The problem is that you are giving the SourceDataLine 8-bit audio and telling it to play it as if it were 16-bit audio. This halves the playback time, because each frame now consumes two bytes instead of one. It also garbles the sample values, since every pair of unrelated 8-bit samples gets reinterpreted as a single 16-bit sample (I haven't tested your example, though).
The AudioFormat doesn't format the audio; it tells the SourceDataLine how your audio is currently formatted so that it plays it correctly.
I'm not really sure what you want to do, and I guess it would depend on why you want 16-bit audio. You might need to request 16-bit audio from the server instead of 8-bit, or you might not even need the audio to be 16-bit.
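If it turns out you really do need a 16-bit line, one option (a sketch of my suggestion, not tested against your stream) is to widen each decoded 8-bit sample to 16 bits before playback:

// Widen signed 8-bit samples to 16-bit little-endian, matching
// new AudioFormat(8000, 16, 1, true, false).
byte[] wide_sound = new byte[2 * decoded_sound.length];
for (int i = 0; i < decoded_sound.length; i++) {
    short s = (short) (decoded_sound[i] << 8); // scale up to the 16-bit range
    wide_sound[2 * i] = (byte) (s & 0xff);     // low byte first (little-endian)
    wide_sound[2 * i + 1] = (byte) (s >> 8);   // high byte
}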
How can I convert a wav file in Java?
AudioFormat targetFormat = new AudioFormat(
        sourceFormat.getEncoding(),
        fTargetFrameRate,
        16,
        sourceFormat.getChannels(),
        sourceFormat.getFrameSize(),
        fTargetFrameRate,
        false);
which results in an exception:
java.lang.IllegalArgumentException: Unsupported conversion:
ULAW 8000.0 Hz, 16 bit, mono, 1 bytes/frame, from ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame
Is this possible in Java? I need to get a 16-bit wav file from an 8-bit one.
Here is a method that converts an 8-bit uLaw encoded binary file into a 16-bit PCM WAV file using built-in Java methods.
import java.io.*;
import javax.sound.sampled.*;
import javax.sound.sampled.AudioFileFormat.Type;
import javax.sound.sampled.AudioFormat.Encoding;

public static void convertULawFileToWav(String filename) {
    File file = new File(filename);
    if (!file.exists())
        return;
    try {
        // uLaw is 1 byte per frame at 8000 frames/second
        long numFrames = file.length();
        AudioFormat audioFormat = new AudioFormat(Encoding.ULAW, 8000, 8, 1, 1, 8000, true);
        AudioInputStream audioInputStream = new AudioInputStream(new FileInputStream(file), audioFormat, numFrames);
        // Expand the uLaw stream to 16-bit signed PCM before writing, so the
        // resulting WAV really is 16-bit rather than 8-bit uLaw in a container.
        AudioFormat pcmFormat = new AudioFormat(Encoding.PCM_SIGNED, 8000, 16, 1, 2, 8000, false);
        AudioInputStream pcmStream = AudioSystem.getAudioInputStream(pcmFormat, audioInputStream);
        AudioSystem.write(pcmStream, Type.WAVE, new File("C:\\file.wav"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Look at this one: Conversion of Audio Format - it is similar to your issue and suggests looking at http://docs.oracle.com/javase/6/docs/api/javax/sound/sampled/AudioSystem.html
You can always use FFmpeg, http://ffmpeg.org/, to do the conversion; your Java program can invoke FFmpeg externally.
FFmpeg works on all major operating systems.
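A sketch of what that invocation might look like (file names and flags are illustrative; ffmpeg must be on the PATH):

// Shell out to ffmpeg: raw 8-bit uLaw in, 16-bit PCM WAV out.
ProcessBuilder pb = new ProcessBuilder(
        "ffmpeg", "-f", "mulaw", "-ar", "8000", "-ac", "1",
        "-i", "input.ulaw", "-acodec", "pcm_s16le", "output.wav");
pb.inheritIO(); // show ffmpeg's console output
int exitCode = pb.start().waitFor();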
I'm trying to generate sound with Java. Ultimately I want to continuously send sound to the sound card, but for now I'd settle for sending a single sound wave.
So I filled an array with 44100 signed integers representing a simple sine wave, and I would like to send it to my sound card, but I just can't get it to work.
int samples = 44100; // 44100 samples/s
int[] data = new int[samples];
// Generate all samples
for (int i = 0; i < samples; ++i)
{
    data[i] = (int) (Math.sin((double) i / (double) samples * 2 * Math.PI) * (Integer.MAX_VALUE / 2));
}
And I send it to a sound line using:
AudioFormat format = new AudioFormat(Encoding.PCM_SIGNED, 44100, 16, 1, 1, 44100, false);
Clip clip = AudioSystem.getClip();
AudioInputStream inputStream = new AudioInputStream(ais,format,44100);
clip.open(inputStream);
clip.start();
My problem resides between these two code snippets. I just can't find a way to convert my int[] to an input stream!
Firstly, I think you want short samples rather than int:
short[] data = new short[samples];
because your AudioFormat specifies 16-bit samples. short is 16 bits wide, but int is 32.
An easy way to convert it to a stream is:
1. Allocate a ByteBuffer
2. Populate it using putShort calls
3. Wrap the resulting byte[] in a ByteArrayInputStream
4. Create an AudioInputStream from the ByteArrayInputStream and format
Example:
float frameRate = 44100f; // 44100 frames/s
int channels = 2;
double duration = 1.0;
int sampleBytes = Short.SIZE / 8;
int frameBytes = sampleBytes * channels;
AudioFormat format =
        new AudioFormat(Encoding.PCM_SIGNED,
                frameRate,
                Short.SIZE,
                channels,
                frameBytes,
                frameRate,
                true);
int nFrames = (int) Math.ceil(frameRate * duration);
int nSamples = nFrames * channels;
int nBytes = nSamples * sampleBytes;
ByteBuffer data = ByteBuffer.allocate(nBytes);
double freq = 440.0;
// Generate all samples
for (int i = 0; i < nFrames; ++i)
{
    double value = Math.sin((double) i / (double) frameRate * freq * 2 * Math.PI) * Short.MAX_VALUE;
    for (int c = 0; c < channels; ++c) {
        int index = (i * channels + c) * sampleBytes;
        data.putShort(index, (short) value);
    }
}
AudioInputStream stream =
        new AudioInputStream(new ByteArrayInputStream(data.array()), format, nFrames);
Clip clip = AudioSystem.getClip();
clip.open(stream);
clip.start();
clip.drain();
Note: I changed your AudioFormat to stereo, because it threw an exception when I requested a mono line. I also increased the frequency of your waveform to something in the audible range.
Update - the previous modification (writing directly to the data line) was not necessary - using a Clip works fine. I have also introduced some variables to make the calculations clearer.
If you want to play a simple sound, you should use a SourceDataLine.
Here's an example:
import javax.sound.sampled.*;

public class Sound implements Runnable {

    //Specify the format as
    //44100 samples per second (sample rate)
    //16-bit samples,
    //Mono sound,
    //Signed values,
    //Big-Endian byte order
    final AudioFormat format = new AudioFormat(44100f, 16, 1, true, true);

    //Your output line that sends the audio to the speakers
    SourceDataLine line;

    public Sound() {
        try {
            line = AudioSystem.getSourceDataLine(format);
            line.open(format);
            line.start();
        } catch (LineUnavailableException oops) {
            oops.printStackTrace();
        }
        new Thread(this).start();
    }

    public void run() {
        //a buffer to store the audio samples
        byte[] buffer = new byte[1000];
        int bufferposition = 0;
        //a counter to generate the samples
        long c = 0;
        //The period of your sine wave in samples (440.0 Hz in this case)
        double wavelength = 44100.0 / 440.0;
        while (true) {
            //Generate a sample
            short sample = (short) (Math.sin(2 * Math.PI * c / wavelength) * 32000);
            //Split the sample into two bytes and store them in the buffer (big-endian)
            buffer[bufferposition] = (byte) (sample >>> 8);
            bufferposition++;
            buffer[bufferposition] = (byte) (sample & 0xff);
            bufferposition++;
            //if the buffer is full, send it to the speakers
            if (bufferposition >= buffer.length) {
                line.write(buffer, 0, buffer.length);
                //Reset the buffer
                bufferposition = 0;
            }
            //Increment the counter
            c++;
        }
    }

    public static void main(String[] args) {
        new Sound();
    }
}
In this example you're continuously generating a sine wave, but you can use this code to play sound from any source you want. You just have to make sure the samples are formatted correctly. In this case, I'm using raw, uncompressed 16-bit samples at a sample rate of 44100 Hz. However, if you want to play audio from a file, you can use a Clip object:
public void play(File file) throws Exception {
    Clip clip = AudioSystem.getClip();
    clip.open(AudioSystem.getAudioInputStream(file));
    clip.loop(1); // plays the clip twice (initial pass plus one loop); use start() for a single play
}
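One caveat (my addition): a Clip plays asynchronously and play() returns immediately, so you may want to release the line once playback stops, e.g. by registering a listener right after opening the clip:

// Close the line automatically when playback finishes.
clip.addLineListener(event -> {
    if (event.getType() == LineEvent.Type.STOP) {
        clip.close();
    }
});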