I am using TarsosDSP to extract features from audio, and I have been able to extract MFCC features. To extract other features such as zero-crossing rate and pitch, do I need to define a new AudioDispatcher, or should I use the same dispatcher and add another AudioProcessor? An example would help.
I can do it separately for MFCC and pitch:
final List<float[]> mfccList = new ArrayList<>(200);
String file3 = source + '/' + file2.getName();
int sampleRate = 44100;
int bufferSize = 8192;
int bufferOverlap = 128;
AudioDispatcher dispatcher = AudioDispatcherFactory.fromPipe(file3, sampleRate, bufferSize, bufferOverlap);
final MFCC mfcc = new MFCC(bufferSize, sampleRate, 40, 50, 300, 3000);
dispatcher.addAudioProcessor(mfcc);
dispatcher.addAudioProcessor(new AudioProcessor() {
    @Override
    public void processingFinished() {
    }

    @Override
    public boolean process(AudioEvent audioEvent) {
        mfcc.process(audioEvent);
        final float[] audio_float = mfcc.getMFCC();
        //mfccList.add(audio_float);
        System.out.print(Arrays.toString(audio_float));
        return true;
    }
});
I would like to save all the features in an array as [mfcc,pitch,zcr]
Use the same dispatcher with a new AudioProcessor.
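For example, here is a rough, untested sketch along those lines. The fromPipe call and MFCC settings are copied from the question's code; the YIN choice and the [mfcc..., pitch, zcr] row layout are just illustrative; the zero-crossing rate is computed by hand in a final custom AudioProcessor. Processors run in the order they are added, so the MFCC processor must be added first:
final List<float[]> features = new ArrayList<>();
final float[] lastPitch = new float[1];
AudioDispatcher dispatcher = AudioDispatcherFactory.fromPipe(file3, sampleRate, bufferSize, bufferOverlap);
final MFCC mfcc = new MFCC(bufferSize, sampleRate, 40, 50, 300, 3000);
dispatcher.addAudioProcessor(mfcc);
// pitch: remember the latest estimate for the current buffer
dispatcher.addAudioProcessor(new PitchProcessor(
        PitchProcessor.PitchEstimationAlgorithm.YIN, sampleRate, bufferSize,
        new PitchDetectionHandler() {
            @Override
            public void handlePitch(PitchDetectionResult result, AudioEvent e) {
                lastPitch[0] = result.getPitch(); // -1 when no pitch was detected
            }
        }));
// last processor: compute the zero-crossing rate by hand and assemble [mfcc..., pitch, zcr]
dispatcher.addAudioProcessor(new AudioProcessor() {
    @Override
    public boolean process(AudioEvent audioEvent) {
        float[] buf = audioEvent.getFloatBuffer();
        int crossings = 0;
        for (int i = 1; i < buf.length; i++) {
            if ((buf[i - 1] >= 0) != (buf[i] >= 0)) {
                crossings++;
            }
        }
        float zcr = crossings / (float) buf.length;
        float[] mfccs = mfcc.getMFCC();
        float[] row = Arrays.copyOf(mfccs, mfccs.length + 2);
        row[mfccs.length] = lastPitch[0];
        row[mfccs.length + 1] = zcr;
        features.add(row);
        return true;
    }

    @Override
    public void processingFinished() {
    }
});
dispatcher.run(); // blocks; or run it on a thread: new Thread(dispatcher, "Audio dispatching").start();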
I am using SimpleLameLibForAndroid to convert a PCM file, created with the AudioRecord class in Android, to MP3. I read the PCM file, encoded it into MP3, and then wrote it to a file. The resulting MP3 file is not correct: it has a lot of noise on it, and it is really hard to tell that it is the recorded PCM file.
These are the recorded audio's specifications (PCM file):
private static final int RECORDER_SAMPLERATE = 8000;
private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
int BufferElements2Rec = 1024; // want to play 2048 (2K) since 2 bytes we use only 1024
int BytesPerElement = 2; // 2 bytes in 16bit format
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
RECORDER_SAMPLERATE, RECORDER_CHANNELS,
RECORDER_AUDIO_ENCODING, BufferElements2Rec * BytesPerElement);
And this is my code that uses liblame to encode the MP3 and write it to a file:
//Encoder.Builder(int inSamplerate,int outChannel,int outSampleRate,int outBitrate)
Encoder en = new Encoder.Builder(8000, 1,8000,128).quality(7).create();
private int PCM_BUF_SIZE = 8192;
private int MP3_SIZE = 8192;
private void readFile() {
File pcm = new File("/sdcard/voice8K16bitmono.pcm");
File mp3 = new File("/sdcard/BOOOB.mp3");
pcm.setReadable(true);
mp3.setWritable(true);
try {
InputStream is = new FileInputStream(pcm);
BufferedInputStream bis = new BufferedInputStream(is);
bis.skip(44);//skip pcm header
OutputStream os = new FileOutputStream(mp3);
FileOutputStream fos = new FileOutputStream(mp3);
int n_bytes_read ;
int n_bytes_write;
int i;
byte mp3_buffer[] = new byte[MP3_SIZE];
byte pcm_buffer1[] = new byte[PCM_BUF_SIZE * 2];
do {
n_bytes_read = bis.read(pcm_buffer1 , 0 , PCM_BUF_SIZE);
if (n_bytes_read == 0){
n_bytes_write = en.flush(mp3_buffer);
}
else{
n_bytes_write = en.encodeBufferInterleaved(byte2short(pcm_buffer1) ,n_bytes_read , mp3_buffer);
}
bof.write(mp3_buffer, 0, PCM_BUF_SIZE);
} while (n_bytes_read > 0);
bis.close();
fos.close();
is.close();
en.close();
}catch (IOException e) {
e.printStackTrace();
}
}
private short[] byte2short(byte[] pcm_buffer1) {
short[] shorts = new short[pcm_buffer1.length/2];
ByteBuffer.wrap(pcm_buffer1).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
return shorts;
}
How can I fix this code? Are the buffer sizes correct? Is using a BufferedInputStream correct? And so on...
I implemented a PCM to MP3 encoder just yesterday for my application using lame. I suggest not using SimpleLameLibForAndroid and instead adding lame to your project yourself. If you are using Android Studio, here is a good guide to get you started on that if you haven't done NDK before.
http://www.shaneenishry.com/blog/2014/08/17/ndk-with-android-studio/
As for implementing lame itself, below is a really good guide that I followed to get my application up and running. Use the wrapper.c from the .zip at the top of the page. This exposes useful methods so that you can avoid all the nasty Stream and Buffer stuff.
http://developer.samsung.com/technical-doc/view.do?v=T000000090
When all is said and done, the actual calls to the lame encoder are super simple as follows.
For initializing (use whatever settings you like):
public static final int NUM_CHANNELS = 1;
public static final int SAMPLE_RATE = 16000;
public static final int BITRATE = 64;
public static final int MODE = 1;
public static final int QUALITY = 7;
...
initEncoder(NUM_CHANNELS, SAMPLE_RATE, BITRATE, MODE, QUALITY);
For encoding (very easy):
int result = encodeFile(pcm.getPath(), mp3.getPath());
if (result == 0) {
//success
}
And of course destroy the encoder when done with destroyEncoder().
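For reference, the Java-side declarations behind those calls could look roughly like this; the class name, library name and the void return types are assumptions on my part, so check them against the wrapper.c you actually build:
public class LameJni {
    static {
        // hypothetical name of the .so built from wrapper.c and the lame sources
        System.loadLibrary("lamemp3");
    }
    public static native void initEncoder(int numChannels, int sampleRate,
                                          int bitRate, int mode, int quality);
    public static native int encodeFile(String sourcePath, String targetPath);
    public static native void destroyEncoder();
}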
I've created a byte-array WebSocket that receives audio chunks in real time from the client's mic (navigator.getUserMedia). I'm already recording this stream to a WAV file on the server some time after the WebSocket stops receiving new byte arrays. The following code represents the current situation.
WebSocket
@OnMessage
public void message(byte[] b) throws IOException{
if(byteOutputStream == null) {
byteOutputStream = new ByteArrayOutputStream();
byteOutputStream.write(b);
} else {
byteOutputStream.write(b);
}
}
Thread that stores the WAV file
public void store(){
byte b[] = byteOutputStream.toByteArray();
try {
AudioFormat audioFormat = new AudioFormat(44100, 16, 1, true, true);
ByteArrayInputStream byteStream = new ByteArrayInputStream(b);
AudioInputStream audioStream = new AudioInputStream(byteStream, audioFormat, b.length);
DateTime date = new DateTime();
File file = new File("/tmp/"+date.getMillis()+ ".wav");
AudioSystem.write(audioStream, AudioFileFormat.Type.WAVE, file);
audioStream.close();
} catch (IOException e) {
e.printStackTrace();
}
}
But instead of recording a WAV file, my goal with this WebSocket is to process audio in real time using the YIN pitch detection algorithm implemented in the TarsosDSP library. In other words, I basically want to execute the PitchDetectorExample, but using the data from the WebSocket instead of the default audio device (the OS mic). The following code shows how PitchDetectorExample currently initializes live audio processing using the mic line provided by the OS.
private void setNewMixer(Mixer mixer) throws LineUnavailableException, UnsupportedAudioFileException {
if(dispatcher!= null){
dispatcher.stop();
}
currentMixer = mixer;
float sampleRate = 44100;
int bufferSize = 1024;
int overlap = 0;
final AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, true);
final DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, format);
TargetDataLine line;
line = (TargetDataLine) mixer.getLine(dataLineInfo);
final int numberOfSamples = bufferSize;
line.open(format, numberOfSamples);
line.start();
final AudioInputStream stream = new AudioInputStream(line);
JVMAudioInputStream audioStream = new JVMAudioInputStream(stream);
// create a new dispatcher
dispatcher = new AudioDispatcher(audioStream, bufferSize, overlap);
// add a processor
dispatcher.addAudioProcessor(new PitchProcessor(algo, sampleRate, bufferSize, this));
new Thread(dispatcher,"Audio dispatching").start();
}
Is there a way to treat the WebSocket data as a TargetDataLine, so that it can be hooked up with the AudioDispatcher and PitchProcessor? Somehow, I need to send the byte arrays received from the WebSocket to the audio processing thread.
Other ideas on how to reach this objective are welcome. Thanks!
I'm not sure you need an AudioDispatcher. If you know how the bytes are encoded (PCM, 16 bits, little-endian, mono?), then you can convert them to floating point in real time and feed them to the pitch detector algorithm. In your WebSocket you can do something like this (and forget about the input streams and the AudioDispatcher):
int index;
byte[] buffer = new byte[2048];
float[] floatBuffer = new float[1024];
FastYin detector = new FastYin(44100, 1024);
// 16-bit, little-endian, signed, mono; the exact format class depends on the TarsosDSP version
AudioFloatConverter converter = AudioFloatConverter.getConverter(new AudioFormat(44100, 16, 1, true, false));

public void message(byte[] b){
    for (int i = 0; i < b.length; i++) {
        buffer[index] = b[i];
        index++;
        if (index == 2048) {
            //convert the byte buffer to floats
            converter.toFloatArray(buffer, floatBuffer);
            float pitch = detector.getPitch(floatBuffer).getPitch();
            //here you have your pitch info that you can use
            index = 0;
        }
    }
}
You do need to watch the number of bytes that have passed: since two bytes represent one float (if 16-bit PCM encoding is used), you need to start on even bytes. The endianness and sample rate are also important.
Regards
Joren
I'm working with sounds at the moment.
I obtain my byte[]s by reading a WAV file and skipping the first 44 bytes (according to the WAV specification, the data begins at byte 44).
I use 22500 Hz and export it via Audacity to 16-bit PCM.
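For illustration, reading one such byte[] can be as simple as the sketch below; the path is a placeholder, and skipping exactly 44 bytes only holds for a canonical PCM WAV header with no extra chunks.
private static byte[] readWavData(String path) throws IOException {
    byte[] wav = Files.readAllBytes(Paths.get(path)); // java.nio.file.Files / Paths
    return Arrays.copyOfRange(wav, 44, wav.length);   // drop the 44-byte RIFF/fmt header
}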
My method looks like this:
private static final AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_UNSIGNED,22500f,16,1,2,2,false);
public static void play(Sound s) throws LineUnavailableException, IOException
{
int selectedSample = (int) (Math.random() * ( s.samples.length ));
Clip clip = AudioSystem.getClip();
AudioInputStream ais;
ais = new AudioInputStream(new ByteArrayInputStream(s.samples[selectedSample]),format,s.samples[selectedSample].length);
clip.open (ais);
clip.start ();
}
My Sound class looks like this:
public class Sound
{
public byte[][] samples;
float pitchLow;
float pitchHigh;
float volumeLow;
float volumeHigh;
byte chance;
}
What am I doing wrong? I don't get a LineUnavailableException, and I don't hear a thing.
I am a university student. I am developing a music identification system for my final year project. According to the "Robust Audio Fingerprint Extraction Algorithm Based on 2-D Chroma" research paper, the following functions need to be included in my system.
Capture Audio Signal ----> Framing Window (hanning window) -----> FFT ----->
High Pass Filter -----> etc.....
I was able to code the audio capture function, and I applied an FFT API to the code as well. But I am confused about how to apply the Hanning window function to my code. Can someone please help me with this? Tell me where I need to add this function and how I need to add it to the code.
Here is my audio capturing code with the FFT applied:
private class RecordAudio extends AsyncTask<Void, double[], Void> {
@Override
protected Void doInBackground(Void... params) {
started = true;
try {
DataOutputStream dos = new DataOutputStream(
new BufferedOutputStream(new FileOutputStream(
recordingFile)));
int bufferSize = AudioRecord.getMinBufferSize(frequency,
channelConfiguration, audioEncoding);
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
frequency, channelConfiguration, audioEncoding,
bufferSize);
short[] buffer = new short[blockSize];
double[] toTransform = new double[blockSize];
long t = System.currentTimeMillis();
long end = t + 15000;
audioRecord.startRecording();
double[] w = new double[blockSize];
while (started && System.currentTimeMillis() < end) {
int bufferReadResult = audioRecord.read(buffer, 0,
blockSize);
for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
toTransform[i] = (double) buffer[i] / 32768.0;
dos.writeShort(buffer[i]);
}
// new part
toTransform = hanning (toTransform);
transformer.ft(toTransform);
publishProgress(toTransform);
}
audioRecord.stop();
dos.close();
} catch (Throwable t) {
Log.e("AudioRecord", "Recording Failed");
}
return null;
}
These links provide the Hanning window algorithm and code snippets:
WindowFunction.java
Hanning - MATLAB
The following code is what I used to apply the Hanning function in my application, and it works for me:
public double[] hanningWindow(double[] recordedData) {
    // iterate over every sample in the buffer
    for (int n = 0; n < recordedData.length; n++) {
        // multiply the sample by the Hann window coefficient for this index
        recordedData[n] *= 0.5 * (1 - Math.cos((2 * Math.PI * n)
                / (recordedData.length - 1)));
    }
    // return the windowed buffer to the FFT function
    return recordedData;
}
First of all, I think you should consider fixing your FFT length. If I understand your code correctly, you are currently using some kind of minimum buffer size as the FFT length as well. The FFT length has a huge effect on the performance and resolution of your calculation.
The WindowFunction.java you linked can generate an array that should be the same length as your FFT length (blockSize in your case, I think). You should then multiply each sample of your buffer by the window value at the same index in that array.
This should be done before the FFT.
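A minimal sketch of that idea, reusing the blockSize and toTransform names from the question (hannWindow and applyWindow are just hypothetical helper names):
/** Precompute the Hann window once, outside the recording loop. */
private static double[] hannWindow(int length) {
    double[] window = new double[length];
    for (int n = 0; n < length; n++) {
        window[n] = 0.5 * (1.0 - Math.cos(2.0 * Math.PI * n / (length - 1)));
    }
    return window;
}

/** Multiply each sample by the window coefficient with the same index, in place. */
private static void applyWindow(double[] samples, double[] window) {
    for (int n = 0; n < samples.length && n < window.length; n++) {
        samples[n] *= window[n];
    }
}
Call hannWindow(blockSize) once before the while loop (it could replace the unused double[] w array in the question), and call applyWindow(toTransform, window) right before transformer.ft(toTransform).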
I'm trying to generate sound with Java. Eventually I want to continuously send sound to the sound card, but for now I would settle for sending a single sound wave.
So, I filled an array with 44100 signed integers representing a simple sine wave, and I would like to send it to my sound card, but I just can't get it to work.
int samples = 44100; // 44100 samples/s
int[] data = new int[samples];
// Generate all samples
for ( int i=0; i<samples; ++i )
{
data[i] = (int) (Math.sin((double)i/(double)samples*2*Math.PI)*(Integer.MAX_VALUE/2));
}
And I send it to a sound line using:
AudioFormat format = new AudioFormat(Encoding.PCM_SIGNED, 44100, 16, 1, 1, 44100, false);
Clip clip = AudioSystem.getClip();
AudioInputStream inputStream = new AudioInputStream(ais,format,44100);
clip.open(inputStream);
clip.start();
My problem lies between these two code snippets: I just can't find a way to convert my int[] into an input stream!
Firstly I think you want short samples rather than int:
short[] data = new short[samples];
because your AudioFormat specifies 16-bit samples. A short is 16 bits wide, but an int is 32 bits.
An easy way to convert it to a stream is:
Allocate a ByteBuffer
Populate it using putShort calls
Wrap the resulting byte[] in a ByteArrayInputStream
Create an AudioInputStream from the ByteArrayInputStream and format
Example:
float frameRate = 44100f; // 44100 samples/s
int channels = 2;
double duration = 1.0;
int sampleBytes = Short.SIZE / 8;
int frameBytes = sampleBytes * channels;
AudioFormat format =
new AudioFormat(Encoding.PCM_SIGNED,
frameRate,
Short.SIZE,
channels,
frameBytes,
frameRate,
true);
int nFrames = (int) Math.ceil(frameRate * duration);
int nSamples = nFrames * channels;
int nBytes = nSamples * sampleBytes;
ByteBuffer data = ByteBuffer.allocate(nBytes);
double freq = 440.0;
// Generate all samples
for ( int i=0; i<nFrames; ++i )
{
double value = Math.sin((double)i/(double)frameRate*freq*2*Math.PI)*(Short.MAX_VALUE);
for (int c=0; c<channels; ++ c) {
int index = (i*channels+c)*sampleBytes;
data.putShort(index, (short) value);
}
}
AudioInputStream stream =
new AudioInputStream(new ByteArrayInputStream(data.array()), format, nFrames);
Clip clip = AudioSystem.getClip();
clip.open(stream);
clip.start();
clip.drain();
Note: I changed your AudioFormat to stereo, because it threw an exception when I requested a mono line. I also increased the frequency of your waveform to something in the audible range.
Update - the previous modification (writing directly to the data line) was not necessary - using a Clip works fine. I have also introduced some variables to make the calculations clearer.
If you want to play a simple Sound, you should use a SourceDataLine.
Here's an example:
import javax.sound.sampled.*;
public class Sound implements Runnable {
//Specify the Format as
//44100 samples per second (sample rate)
//16-bit samples,
//Mono sound,
//Signed values,
//Big-Endian byte order
final AudioFormat format=new AudioFormat(44100f,16,1,true,true); //one channel, to match the comments above and the single sample written per frame
//Your output line that sends the audio to the speakers
SourceDataLine line;
public Sound(){
try{
line=AudioSystem.getSourceDataLine(format);
line.open(format);
}catch(LineUnavailableException oops){
oops.printStackTrace();
}
new Thread(this).start();
}
public void run(){
//a buffer to store the audio samples
byte[] buffer=new byte[1000];
int bufferposition=0;
//a counter to generate the samples
long c=0;
//The pitch of your sine wave (440.0 Hz in this case)
double wavelength=44100.0/440.0;
while(true){
//Generate a sample
short sample=(short) (Math.sin(2*Math.PI*c/wavelength)*32000);
//Increment the sample counter (this has to happen inside the loop)
c++;
//Split the sample into two bytes (big-endian) and store them in the buffer
buffer[bufferposition]=(byte) (sample>>>8);
bufferposition++;
buffer[bufferposition]=(byte) (sample & 0xff);
bufferposition++;
//if the buffer is full, send it to the speakers
if(bufferposition>=buffer.length){
line.write(buffer,0,buffer.length);
line.start();
//Reset the buffer
bufferposition=0;
}
}
}
public static void main(String[] args){
new Sound();
}
}
In this example you're continuously generating a sine wave, but you can use this code to play sound from any source you want. You just have to make sure that you format the samples right. In this case, I'm using raw, uncompressed 16-bit samples at a sample rate of 44100 Hz. However, if you want to play audio from a file, you can use a Clip object:
public void play(File file) throws LineUnavailableException, UnsupportedAudioFileException, IOException {
Clip clip=AudioSystem.getClip();
clip.open(AudioSystem.getAudioInputStream(file));
clip.loop(1);
}