I've written this small signal generating method. My goal is to generate a beep with a slight time delay between the two channels (left and right) or a slight difference in gain between the channels.
Currently I create the delay by filling a buffer with zeros for one channel and values for the second, and further down swapping the behavior between the channels. (If you have any tips or ideas on how to do this better, they would be appreciated.)
The next stage is doing something similar with the gain. I have seen that Java gives built-in gain control via FloatControl:
FloatControl gainControl =
(FloatControl) sdl.getControl(FloatControl.Type.MASTER_GAIN);
But I am not sure how to control the gain for each channel separately. Is there a built in way to do this?
Would I need two separate streams, one for each channel? If so how do I play them simultaneously?
I am rather new to sound programming, if there are better ways to do this please let me know. Any help is very much appreciated.
This is my code so far:
public static void generateTone(int delayR, int delayL, double gainRightDB, double gainLeftDB)
        throws LineUnavailableException, IOException {
    // in Hz, number of samples in one second
    int sampleRate = 100000; // let sample rate and frequency be the same
    // how much to add to each side:
    double gainLeft = 100;  // Math.pow(10.0, gainLeftDB / 20.0);
    double gainRight = 100; // Math.pow(10.0, gainRightDB / 20.0);
    // click duration = 0.08 s
    double duration = 0.08;
    double durationInSamples = Math.ceil(duration * sampleRate);
    // single delay window duration = 225 us
    double baseDelay = 0.000225;
    double samplesPerDelay = Math.ceil(baseDelay * sampleRate);
    byte[] buf = new byte[sampleRate * 4]; // one second of audio in total
    AudioFormat af = new AudioFormat(sampleRate, 16, 2, true, true); // 100000 Hz, 16 bit, 2 channels, signed, big-endian
    SourceDataLine sdl = AudioSystem.getSourceDataLine(af);
    sdl.open(af);
    sdl.start();
    // only one should be delayed at a time
    int delayRight = delayR;
    int delayLeft = delayL;
    int freq = 1000;
    /*
     * NOTE:
     * The buffer holds data in groups of 4. Every 4 bytes represent a single frame. The first 2 bytes
     * are for the left side, the other two are for the right. We take 2 each time because of the 16-bit sample size.
     */
    for (int i = 0; i < sampleRate * 4; i++) {
        double time = ((double) i) / ((double) sampleRate);
        // Left side:
        if (i >= delayLeft * samplesPerDelay * 4              // when the left side plays
                && i % 4 < 2                                  // access first two bytes in frame
                && i <= (delayLeft * 4 * samplesPerDelay)
                        + (4 * durationInSamples))            // make sure to stop after the delay window
            buf[i] = (byte) ((1 + gainLeft) * Math.sin(2 * Math.PI * freq * time));  // sound in left ear
        // Right side:
        else if (i >= delayRight * samplesPerDelay * 4        // time for right side
                && i % 4 >= 2                                 // use second 2 bytes
                && i <= (delayRight * 4 * samplesPerDelay)
                        + (4 * durationInSamples))            // stop after the delay window
            buf[i] = (byte) ((1 + gainRight) * Math.sin(2 * Math.PI * freq * time)); // sound in right ear
    }
    for (byte b : buf)
        System.out.print(b + " ");
    System.out.println();
    sdl.write(buf, 0, buf.length);
    sdl.drain();
    sdl.stop();
    sdl.close();
}
How far apart did you want your beeps? I wrote a program that makes sine beeps sound up to a couple hundred frames (at 44100 fps) apart, and posted it with source code here; you are welcome to inspect/copy/rewrite it.
At such low levels of separation, the sound remains fused, perceptually, but can start to move toward one ear or the other. I wrote this because I wanted to compare volume panning with delay-based panning. In order to be able to flexibly test multiple files, the code is slightly more modular than what you have started with. I'm not going to claim what I wrote is any better, though.
One class takes a mono PCM (range is floats, -1 to 1) array and converts it to a stereo array with the desired frame delay between the channels. That same class can also split the mono file into a stereo file where the only difference is volume, and has a third option where you can use a combination of delay and volume differences when you turn the mono data to stereo.
Mono file:   F1, F2, F3, ...
Stereo file: F1L, F1R, F2L, F2R, F3L, F3R, ...
but if you add a delay of, say, 2 frames to the right:
Stereo file: F1L, 0, F2L, 0, F3L, F1R, F4L, F2R, ...
Where F is a normalized float (between -1 and 1) representing an audio wave.
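The interleaving described above could be sketched roughly like this. This is a minimal, hypothetical helper of my own; the class and parameter names are made up and it is not the code from the linked post:

```java
import java.util.Arrays;

public class StereoDelay {
    /**
     * Convert a mono PCM float array (values in -1..1) to an interleaved
     * stereo array, applying a per-channel frame delay and volume factor.
     */
    static float[] monoToStereo(float[] mono, int delayL, int delayR,
                                float volL, float volR) {
        int frames = mono.length + Math.max(delayL, delayR);
        float[] stereo = new float[frames * 2];
        for (int i = 0; i < mono.length; i++) {
            stereo[(i + delayL) * 2]     = mono[i] * volL; // left sample
            stereo[(i + delayR) * 2 + 1] = mono[i] * volR; // right sample
        }
        return stereo;
    }

    public static void main(String[] args) {
        // delay the right channel by 2 frames, equal volume on both sides
        float[] s = monoToStereo(new float[]{1f, 0.5f}, 0, 2, 1f, 1f);
        System.out.println(Arrays.toString(s));
    }
}
```

Setting the volume factors differently while keeping both delays at zero gives pure volume panning; combining both gives the mixed mode described above.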
Making the first mono array of a beep is just a matter of using a sine function pretty much as you do. You might 'round off the edges' by ramping the volume over the course of some frames to minimize the clicks that come from the discontinuities of suddenly starting or stopping.
Another class was written whose sole purpose is to output stereo float arrays via a SourceDataLine. Volume is handled by multiplying the audio output by a factor that ranges from 0 to 1. The normalized values are multiplied by 32767 to convert them to signed shorts, and the shorts broken into bytes for the format that I use (16-bit, 44100 fps, stereo, little-endian).
Having an array-playing audio class is kind of neat. The arrays for it are a lot like Clips, but you have direct access to the data. With this class, you can build and reuse many sound arrays. I think I have some code included that loads a wav file into this DIY clip, as well.
There is more discussion of this code on this thread at Java-Gaming.org.
I eventually used some of what I learned here to make a simplified real-time 3D sound system. The "best" way to set up something like this, though, depends on your goals. For my 3D, for example, I wrote a delay tool that allows separate reads from stereo left and right, and the audio mixer & playback is more involved than the simple array-to-SourceDataLine player.
I’m (still) working on a small project controlling DMX-lights (using Art-Net).
At the moment I’m working on the “Movement-generator” and what I basically do is to use sine and cosine to calculate the DMX values (0-255) for the pan- and tilt-channel, like with this method:
public void runSineMovement() {
    double degrees = x;
    double radians = Math.toRadians(degrees);
    double sine = Math.sin(radians);
    double dmxValue = (int) ((sine * 127) + 127);
    dmxValuesOBJ.setDmxValuesInArray(1, (int) dmxValue);
    SendArtnet.SendArtnetNow();
    x = x + 1;
    if (x > 360) {
        x = 1;
    }
}
where x starts at 1.
I then have a ScheduledExecutorService that will call that method on a regular interval, like this:
int speed = 100;
ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();
executorService.scheduleAtFixedRate(SineMovement::runSineMovement, 0, 100000 * speed, TimeUnit.NANOSECONDS);
The above is working just fine; the moving head (the tilt channel in this example) moves perfectly. Now I want to use the fine channel, that is, go from 8-bit to 16-bit (from one channel to two channels controlling tilt) so I can get smooth movement even at very slow speed. Remember, the fine channel has to go from 0 to 255 before the coarse channel can go to 1, then the fine channel runs 0 to 255 again before the coarse channel goes to 2, and so on.
Earlier I built a movement generator with a triangle effect, where I looped from 0 up to 65,536 and back to 0 and so on, and on every run I calculated the coarse channel (counter / 256) and the fine channel (counter % 256), and that approach works just fine.
Any ideas on how to approach this when using sine and cosine when generating the effect? Can I use the approach from the triangle-generator calculating “coarse” and “fine” using division and modulus?
EDIT: When thinking about it, I don't think the fine channel should have the shape of a sine wave. The fine value would (if using sine) go very, very fast, both up and down, and that will mess things up if the coarse value is still going up. I guess the fine channel should always have a sawtooth shape: a sawtooth from zero to max while coarse is going up, and from max to zero while coarse is going down. Does that make sense?
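One way to approach this, sketched under the assumption that the fixture interprets coarse/fine as the high and low byte of a single 16-bit position (the method name is mine): scale the sine to the full 16-bit range first and then split it with shift and mask, exactly as with the triangle generator. The fine byte then sawtooths automatically, in the direction the coarse byte is moving.

```java
public class PanTilt16Bit {
    /** Map a sine position (by degrees) to {coarse, fine} DMX bytes. */
    static int[] sineToCoarseFine(double degrees) {
        double sine = Math.sin(Math.toRadians(degrees));
        int value = (int) Math.round((sine + 1.0) / 2.0 * 65535); // 0..65535
        int coarse = value >> 8;   // high byte -> coarse channel
        int fine = value & 0xFF;   // low byte  -> fine channel
        return new int[]{coarse, fine};
    }

    public static void main(String[] args) {
        for (int deg = 0; deg <= 360; deg += 90) {
            int[] cf = sineToCoarseFine(deg);
            System.out.println(deg + " deg -> coarse=" + cf[0] + " fine=" + cf[1]);
        }
    }
}
```

So yes, division/modulus (or shift/mask, which is the same thing for 256) carries over directly from the triangle generator; the only change is that the sine is scaled to 0..65535 before splitting.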
Thanks 😊
Off topic: let me start by saying Java is completely new to me. I've been programming for over 15 years and have never had a need for it beyond modifying others' codebases, so please forgive my ignorance and possibly improper terminology. I'm also not very familiar with RF, so if I'm way out in left field here, please let me know!
I'm building an SDR (Software Defined Radio) radio transmitter, and while I can successfully transmit on a frequency, when I send the stream (either from the device's microphone or bytes from a tone generator), what is coming through my handheld receiver sounds like static.
I believe this to be due to my receiver being set up to receive NFM (Narrowband Frequency Modulation) and WFM (Wideband Frequency Modulation) while the transmission coming from my SDR is sending raw, unmodulated data.
My question is: how do I modulate audio bytes (i.e. an InputStream) so that the resulting bytes are modulated in FM (Frequency Modulation) or AM (Amplitude Modulation), which I can then transmit through the SDR?
I can't seem to find a class or package that handles modulation (eventually I'm going to have to modulate WFM, FM, AM, SB, LSB, USB, DSB, etc.) despite there being quite a few open-source SDR codebases, but if you know where I can find this, that basically answers this question. Everything I've found so far has been for demodulation.
This is a class I've built around Xarph's answer here on StackOverflow. It simply returns a byte array containing a simple, unmodulated audio signal, which can then be used to play sound through speakers (or transmit over an SDR; but because the result is not properly modulated, it doesn't come through correctly on the receiver's end, which is what I'm having trouble figuring out).
public class ToneGenerator {

    public static byte[] generateTone() {
        return generateTone(60, 1000, 8000);
    }

    public static byte[] generateTone(double duration) {
        return generateTone(duration, 1000, 8000);
    }

    public static byte[] generateTone(double duration, double freqOfTone) {
        return generateTone(duration, freqOfTone, 8000);
    }

    public static byte[] generateTone(double duration, double freqOfTone, int sampleRate) {
        int numSamples = (int) Math.ceil(duration * sampleRate);
        double[] sample = new double[numSamples];
        byte[] generatedSnd = new byte[2 * numSamples];

        for (int i = 0; i < numSamples; ++i) { // Fill the sample array
            sample[i] = Math.sin(freqOfTone * 2 * Math.PI * i / sampleRate);
        }

        // convert to 16 bit PCM sound array
        // (assumes the sample buffer is normalized)
        int idx = 0;
        int i = 0;
        int ramp = numSamples / 20; // Amplitude ramp as a percent of sample count

        for (i = 0; i < ramp; ++i) { // Ramp amplitude up (to avoid clicks)
            double dVal = sample[i];
            // Ramp up to maximum
            final short val = (short) (dVal * 32767 * i / ramp);
            // in 16 bit wav PCM, first byte is the low order byte
            generatedSnd[idx++] = (byte) (val & 0x00ff);
            generatedSnd[idx++] = (byte) ((val & 0xff00) >>> 8);
        }

        for (; i < numSamples - ramp; ++i) { // Max amplitude for most of the samples
            double dVal = sample[i];
            // scale to maximum amplitude
            final short val = (short) (dVal * 32767);
            generatedSnd[idx++] = (byte) (val & 0x00ff);
            generatedSnd[idx++] = (byte) ((val & 0xff00) >>> 8);
        }

        for (; i < numSamples; ++i) { // Ramp amplitude down
            double dVal = sample[i];
            // Ramp down to zero
            final short val = (short) (dVal * 32767 * (numSamples - i) / ramp);
            generatedSnd[idx++] = (byte) (val & 0x00ff);
            generatedSnd[idx++] = (byte) ((val & 0xff00) >>> 8);
        }

        return generatedSnd;
    }
}
An answer to this doesn't necessarily need to be code; theory and an understanding of how FM or AM modulation works when it comes to processing a byte array and converting it to the proper format would probably be more valuable, since I'll have to implement more modes in the future.
There is a lot that I don't know about radio. But I think I can say a couple things about the basics of modulation and the problem at hand given the modicum of physics that I have and the experience of coding an FM synthesizer.
First off, I think you might find it easier to work with the source signal's PCM data points if you convert them to normalized floats (ranging from -1f to 1f), rather than working with shorts.
The target frequency of the receiver, 510-1700 kHz (AM radio) is significantly faster than the sample rate of the source sound (presumably 44.1kHz). Assuming you have a way to output the resulting data, the math would involve taking a PCM value from your signal, scaling it appropriately (IDK how much) and multiplying the value against the PCM data points generated by your carrier signal that corresponds to the time interval.
For example, if the carrier signal were 882 kHz, you would multiply a sequence of 20 carrier-signal values with each source-signal value before moving on to the next source value. Again, my ignorance: the tech may have some sort of smoothing algorithm for the transition between the source-signal data points. I really don't know whether that's the case, or at what stage it occurs.
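That multiplication can be sketched as below. This is pure illustration with made-up names, matching only the hold-and-multiply idea described above; it makes no claim about what real SDR front-ends do:

```java
public class AmSketch {
    /**
     * Naive amplitude modulation: out[n] = (1 + m * source) * carrier(n).
     * fs is the output (carrier-domain) sample rate; m is the modulation index.
     * Each source value is held for 'upsample' carrier samples.
     */
    static double[] modulate(double[] source, int upsample,
                             double carrierHz, double m, double fs) {
        double[] out = new double[source.length * upsample];
        for (int n = 0; n < out.length; n++) {
            double s = source[n / upsample]; // hold source value across carrier samples
            double carrier = Math.sin(2 * Math.PI * carrierHz * n / fs);
            out[n] = (1 + m * s) * carrier;  // source scales the carrier's envelope
        }
        return out;
    }

    public static void main(String[] args) {
        double[] src = {0.0, 1.0, -1.0};
        double[] out = modulate(src, 20, 882_000, 0.5, 17_640_000);
        System.out.println("output samples: " + out.length);
    }
}
```

With a modulation index of 0.5, the envelope swings between 0.5x and 1.5x the carrier amplitude, which is the "scaling appropriately" step whose exact factor I said I don't know.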
For FM, we have carrier signals in the MHz range, so we are talking orders of magnitude more data being generated per each source signal value than with AM. I don't know the exact algorithm used but here is a simple conceptual way to implement frequency modulation of a sine that I used with my FM synthesizer.
Let's say you have a table with 1000 data points that represents a single sine wave ranging from -1f to 1f, and a cursor that repeatedly traverses the table. If the cursor advanced exactly 1 data point at 44100 fps and delivered the values at that rate, the resulting tone would be 44.1 Hz, yes? But you can also traverse the table in intervals larger than 1, for example 1.5. When the cursor lands between two table values, you can use linear interpolation to determine the value to output. A cursor increment of 1.5 would result in the sine wave being pitched at 66.15 Hz.
What I think is happening with FM is that this cursor increment is continuously varied, and the amount it is varied depends on some sort of scaling from the source signal translated into a range of increments.
The specifics of the scaling are unknown to me. But suppose a signal is being transmitted with a carrier of 10 MHz and ranges ~1% (roughly from 9.9 MHz to 10.1 MHz); the normalized source signal would be mapped by some sort of algorithm where a PCM value of -1 matches an increment that traverses the carrier wave slowly enough to produce the lower frequency, and +1 matches an increment that produces the higher frequency. So, if an increment of 1 delivers 10 MHz, maybe a source-wave PCM value of -1 elicits a cursor increment of 0.99, a value of -0.5 an increment of 0.995, a value of +0.5 an increment of 1.005, and a value of +1 a cursor increment of 1.01.
This is pure speculation on my part as to the relationship between the source PCM values and how they are used to modulate the carrier frequency. But maybe it helps give a concrete image of the basic mechanism?
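The cursor idea above can be written out as a toy sketch. To be clear, this is my own illustration of the mechanism, not any standard FM transmitter algorithm; all names are invented:

```java
public class FmCursorSketch {
    /** Read a wavetable at a fractional cursor position with linear interpolation. */
    static double readTable(double[] table, double cursor) {
        int i = (int) cursor;
        double frac = cursor - i;
        double a = table[i % table.length];
        double b = table[(i + 1) % table.length];
        return a + frac * (b - a);
    }

    /** One output sample per source value; the source nudges the cursor increment. */
    static double[] modulate(double[] carrierTable, double[] source,
                             double baseIncrement, double depth) {
        double[] out = new double[source.length];
        double cursor = 0;
        for (int n = 0; n < source.length; n++) {
            out[n] = readTable(carrierTable, cursor);
            // source -1 -> increment * (1 - depth), +1 -> increment * (1 + depth)
            cursor = (cursor + baseIncrement * (1 + depth * source[n])) % carrierTable.length;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] table = new double[1000];
        for (int i = 0; i < 1000; i++) table[i] = Math.sin(2 * Math.PI * i / 1000);
        System.out.println(readTable(table, 250.0)); // top of the sine cycle
    }
}
```

With depth = 0.01 this reproduces the 0.99..1.01 increment range speculated about above.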
(I use something similar, employing a cursor to iterate over wav PCM data points at arbitrary increments, in AudioCue (a class for playing back audio data based on the Java Clip), for real time frequency shifting. Code line 1183 holds the cursor that iterates over the PCM data that was imported from the wav file, with the variable idx holding the cursor increment amount. Line 1317 is where we fetch the audio value after incrementing the cursor. Code lines 1372 has the method readFractionalFrame() which performs the linear interpolation. Real time volume changes are also implemented, and I use smoothing on the values that are provided from the public input hooks.)
Again, IDK if any sort of smoothing is used between source signal values or not. In my experience a lot of the tech involves filtering and other tricks of various sorts that improve fidelity or processing calculations.
I have 10 lists of points (each point is a time-amplitude pair), where each list belongs to a known frequency.
So I have a class InputValue with two fields, sampleDate (long) and sampleValue (double), and 10 lists: List samples800Hz, samples400Hz, and so on.
The 800 Hz list contains about 1600 points per second (not a fixed value, because the data sampler can have unpredictable delays), the 400 Hz list about 800 points per second, and so on.
How can I:
Generate sound from a list of points?
Mix several or all lists into one sound?
If I got it right, I need to resample each list to one sample rate (can a Java AudioFormat take custom sample rates like 1600, or should I use standard ones, where the lowest is 8000?) and then fill the sample buffer like:
AudioFormat af = new AudioFormat((float) 1600, 8, 1, true, false);
SourceDataLine sdl = AudioSystem.getSourceDataLine(af);
byte[] buf = new byte[1];
sdl.open();
sdl.start();
for (int i = 0; i < 1600; i++) {
    buf[0] = ???
    sdl.write(buf, 0, 1);
}
sdl.drain();
sdl.stop();
But how can I tell the sdl that my amplitude value belongs to some frequency? And how can I mix different frequencies?
BTW, can I, instead of resampling each list, create 10 audio formats with different sample rates (1600 for 800 Hz, 800 for 400 Hz, and so on) and later mix the 10 SourceDataLines into one?
It sounds like you're trying to use a wavetable for your sound generation. If you're simply recreating an 800 Hz tone, this is easy:
static int sample = 0;

for (int i = 0; i < 1600; i++) {
    buf[i] = samples800Hz[sample];
    sample = (sample + 1) % SAMPLES_800HZ_SIZE;
}
Let's say you want to combine an 800 Hz and a 1600 Hz tone: just add them together (you may have to scale the values so they don't clip):
static int sample1 = 0, sample2 = 0;

for (int i = 0; i < 1600; i++) {
    // Multiply each sample by 0.5; this gives us a 50% mix between the two
    buf[i] = (samples800Hz[sample1] * 0.5) + (samples1600Hz[sample2] * 0.5);
    sample1 = (sample1 + 1) % SAMPLES_800HZ_SIZE;
    sample2 = (sample2 + 1) % SAMPLES_1600HZ_SIZE;
}
Now, my answer doesn't consider how many frames your system processes per callback; you'll have to figure that out on your own. Also, if you want multiple tone generation without endlessly making lists, I would urge you to look up wavetable oscillators. A wavetable basically means creating one array holding a single cycle of a tone and then adjusting the speed/phase at which you read the table to generate the desired frequency.
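For the record, the increment that produces a desired pitch from a single-cycle table is freq * tableSize / sampleRate. A minimal sketch of such an oscillator (names are mine, not from any library):

```java
public class WavetableOsc {
    /** Generate n samples of 'freq' Hz from a single-cycle table via linear interpolation. */
    static double[] oscillate(double[] table, double freq, double fs, int n) {
        double inc = freq * table.length / fs; // table positions advanced per output sample
        double[] out = new double[n];
        double phase = 0;
        for (int k = 0; k < n; k++) {
            int i = (int) phase;
            double frac = phase - i;
            out[k] = table[i] + frac * (table[(i + 1) % table.length] - table[i]);
            phase = (phase + inc) % table.length;
        }
        return out;
    }

    public static void main(String[] args) {
        // one cycle in 4 points; freq = fs / 4 reproduces the table exactly
        double[] table = {0, 1, 0, -1};
        System.out.println(java.util.Arrays.toString(oscillate(table, 1, 4, 4)));
    }
}
```

This is how one table can serve every frequency, which avoids keeping 10 separate sample lists at different rates.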
Good luck!
I want to find the fundamental frequency of the human voice in an Android application. I'm calculating it with this FFT class and this Complex class.
My code to calculate FFT is this:
public double calculateFFT(byte[] signal)
{
    final int mNumberOfFFTPoints = 1024;
    double mMaxFFTSample;
    double temp;
    Complex[] y;
    Complex[] complexSignal = new Complex[mNumberOfFFTPoints];
    double[] absSignal = new double[mNumberOfFFTPoints/2];

    for (int i = 0; i < mNumberOfFFTPoints; i++) {
        temp = (double) ((signal[2*i] & 0xFF) | (signal[2*i+1] << 8)) / 32768.0F;
        complexSignal[i] = new Complex(temp, 0.0);
    }

    y = FFT.fft(complexSignal);

    mMaxFFTSample = 0.0;
    int mPeakPos = 0;
    for (int i = 0; i < (mNumberOfFFTPoints/2); i++)
    {
        absSignal[i] = Math.sqrt(Math.pow(y[i].re(), 2) + Math.pow(y[i].im(), 2));
        if (absSignal[i] > mMaxFFTSample)
        {
            mMaxFFTSample = absSignal[i];
            mPeakPos = i;
        }
    }

    return ((1.0 * sampleRate) / (1.0 * mNumberOfFFTPoints)) * mPeakPos;
}
and I get the same values as in:
How do I obtain the frequencies of each value in an FFT?
Is it possible to find the fundamental frequency from these values? Can someone help me?
Thanks in advance.
Fundamental frequency detection for human voice is an active area of research, as the references below suggest. Your approach must be carefully designed and must depend on the nature of the data.
For example if your source is a person singing a single note, with no music or other background sounds in the recording, a modified peak detector might give reasonable results.
If your source is generalized human speech, you will not get a unique fundamental frequency for anything other than the individual formants within the speech.
The graph below illustrates an easy detection problem. It shows the frequency spectrum of a female soprano holding a B-flat-3 (Bb3) note. The fundamental frequency of Bb3 is 233 Hz but the soprano is actually singing a 236 Hz fundamental (the left-most and highest peak.) A simple peak detector yields the correct fundamental frequency in this case.
The graph below illustrates one of the challenges of fundamental frequency detection, even for individually sung notes, let alone for generalized human speech. It shows the frequency spectrum of a female soprano holding an F4 note. The fundamental frequency of F4 is 349 Hz but the soprano is actually singing a 360 Hz fundamental (the left-most peak.)
However, in this case, the highest peak is not the fundamental, but rather the first harmonic at 714 Hz. Your modified peak detector would have to contend with these cases.
In generalized human speech, the concept of fundamental frequency is not really applicable to any subset of longer duration than each individual formant within the speech. This is because the frequency spectrum of generalized human speech is highly time-variant.
See these references:
Speech Signal Analysis
Human Speech Formants
Fundamental frequency detection
FFT, graphs, and audio data from Sooeet.com FFT calculator
Sounds like you've already chosen a solution (FFTs) to your problem. I'm no DSP expert, but I'd venture that you're not going to get very good results this way. See a much more detailed discussion here: How do you analyse the fundamental frequency of a PCM or WAV sample?
If you do choose to stick with this method:
Consider using more than 1024 points if you need accuracy at lower frequencies; remember, a (spoken) human voice is surprisingly low.
Choose your sampling frequency wisely, and apply a low-pass filter if you can. There's a reason that telephones have a bandwidth of only ~3 kHz; the rest is not truly necessary for hearing human voices.
Then, examine the first half of your output values and pick the biggest peak at the lowest frequency. This is where the hard part is: there may be several. (Further peaks should appear at the harmonics, i.e. fixed multiples, of this one, but that is hard to check as your buckets are not of a useful size here.) This is the range of frequencies within which the true fundamental hopefully lies.
Again though, maybe worth thinking of the other ways of solving this as FFT might give you disappointing results in the real world.
My code for this is:
public double calculateFFT(double[] signal)
{
    final int mNumberOfFFTPoints = 1024;
    final float sampleRate = 44100;
    double[] magnitude = new double[mNumberOfFFTPoints/2];
    DoubleFFT_1D fft = new DoubleFFT_1D(mNumberOfFFTPoints);
    double[] fftData = new double[mNumberOfFFTPoints*2];
    int max_index = -1;
    double max_magnitude = -1;

    for (int i = 0; i < mNumberOfFFTPoints; i++) {
        //fftData[2 * i] = buffer[i+firstSample];
        fftData[2 * i] = signal[i]; // to check
        fftData[2 * i + 1] = 0;
    }
    // transform once, after the whole buffer has been filled
    fft.complexForward(fftData);

    for (int i = 0; i < mNumberOfFFTPoints/2; i++) {
        magnitude[i] = Math.sqrt((fftData[2*i] * fftData[2*i]) + (fftData[2*i + 1] * fftData[2*i + 1]));
        if (max_magnitude < magnitude[i]) {
            max_magnitude = magnitude[i];
            max_index = i;
        }
    }

    return sampleRate * (double) max_index / (double) mNumberOfFFTPoints;
}
Is the returned value my fundamental frequency?
An FFT maximum returns the peak bin frequency, which may not be the fundamental frequency but rather the FFT result bin nearest an overtone or harmonic of the fundamental. A longer FFT using more data will give you more closely spaced FFT result bins, and thus a bin probably nearer the peak frequency. You might also be able to interpolate the peak if it is between bins. But if you are dealing with a signal that has strong harmonic content, such as voice or music, then you may need to use a pitch detection/estimation algorithm instead of an FFT peak algorithm.
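One common way to interpolate between bins, for what it's worth, is parabolic interpolation over the peak bin and its two neighbors. The sketch below is generic and not tied to any particular FFT library; the names are mine:

```java
public class PeakInterp {
    /**
     * Fractional bin offset of the true peak, estimated by fitting a parabola
     * through the magnitudes at bins k-1, k, k+1 (m2 must be the local max).
     * Returns a value in roughly [-0.5, 0.5]; add it to the peak index k.
     */
    static double parabolicOffset(double m1, double m2, double m3) {
        return 0.5 * (m1 - m3) / (m1 - 2 * m2 + m3);
    }

    public static void main(String[] args) {
        // symmetric neighbors -> the peak lies exactly on the bin (offset ~ 0)
        System.out.println(parabolicOffset(1.0, 3.0, 1.0));
        // right neighbor larger -> the true peak leans toward bin k+1 (offset > 0)
        System.out.println(parabolicOffset(1.0, 3.0, 2.0));
    }
}
```

The refined frequency is then (k + offset) * Fs / N instead of k * Fs / N, which tightens the estimate without a longer FFT.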
I have recorded an array[1024] of data from the mic on my Android phone and passed it through a 1D forward DFT of the real data (setting a further 1024 entries to 0). I saved the array to a text file and repeated this 8 times.
I got back 16384 results. I opened the text file in Excel and made a graph to see what it looked like (x = index of array, y = size of number returned). There are some massive spikes (both positive and negative) in magnitude around 110 and 232, and small spikes continuing in that fashion until around 1817 and 1941, where the spikes get big again, then drop off.
My problem is that wherever I look for help on the topic, it mentions getting the real and imaginary numbers, while I only have a 1D array that I got back from the method I used from Piotr Wendykier's class:
DoubleFFT_1D.realForwardFull(audioDataArray); // from the library JTransforms.
My question is: What do I need to do to this data to return a frequency?
The sound recorded was me playing an 'A' on the bottom string (5th fret) of my guitar (at roughly 440 Hz).
The complex data is interleaved, with real components at even indices and imaginary components at odd indices, i.e. the real components are at index 2*i, the imaginary components are at index 2*i+1.
To get the magnitude of the spectrum at index i, you want:
re = fft[2*i];
im = fft[2*i+1];
magnitude[i] = sqrt(re*re+im*im);
Then you can plot magnitude[i] for i = 0 to N / 2 to get the power spectrum. Depending on the nature of your audio input you should see one or more peaks in the spectrum.
To get the approximate frequency of any given peak you can convert the index of the peak as follows:
freq = i * Fs / N;
where:
freq = frequency in Hz
i = index of peak
Fs = sample rate in Hz (e.g. 44100 Hz, or whatever you are using)
N = size of FFT (e.g. 1024 in your case)
Note: if you have not previously applied a suitable window function to the time-domain input data then you will get a certain amount of spectral leakage and the power spectrum will look rather "smeared".
To expand on this further, here is pseudo-code for a complete example where we take audio data and identify the frequency of the largest peak:
N = 1024 // size of FFT and sample window
Fs = 44100 // sample rate = 44.1 kHz
data[N] // input PCM data buffer
fft[N * 2] // FFT complex buffer (interleaved real/imag)
magnitude[N / 2] // power spectrum
// capture audio in data[] buffer
// ...
// apply window function to data[]
// ...
// copy real input data to complex FFT buffer
for i = 0 to N - 1
fft[2*i] = data[i]
fft[2*i+1] = 0
// perform in-place complex-to-complex FFT on fft[] buffer
// ...
// calculate power spectrum (magnitude) values from fft[]
for i = 0 to N / 2 - 1
re = fft[2*i]
im = fft[2*i+1]
magnitude[i] = sqrt(re*re+im*im)
// find largest peak in power spectrum
max_magnitude = -INF
max_index = -1
for i = 0 to N / 2 - 1
if magnitude[i] > max_magnitude
max_magnitude = magnitude[i]
max_index = i
// convert index of largest peak to frequency
freq = max_index * Fs / N