I am using Processing 3 with the Beads library to analyse a number of samples, but each time I run the analysis on the same data I get very different results. Here's the sample and analysis setup:
import beads.*;
import org.jaudiolibs.beads.*;
AudioContext ac;
GranularSamplePlayer sample;
Gain gain;
ShortFrameSegmenter sfs;
FFT fft;
PowerSpectrum ps;
Frequency f;
SpectralPeaks sp;
float[][] meanHarmonics;
int numPeaks = 6;
void setup() {
  size(1600, 900);
  ac = new AudioContext();
  ac.start();
  println(dataPath("") + "1.wav");
  sample = new GranularSamplePlayer(ac, SampleManager.sample(dataPath("") + "\\1.wav"));
  gain = new Gain(ac, 1, 1);
  // input chaining
  gain.addInput(sample);
  ac.out.addInput(gain);
  // setup analysis
  // break audio into more manageable chunks
  sfs = new ShortFrameSegmenter(ac);
  sfs.addInput(sample);
  // fast Fourier transform to analyse the harmonic spectrum
  fft = new FFT();
  sfs.addListener(fft);
  // PowerSpectrum turns the raw FFT output into a power spectrum
  ps = new PowerSpectrum();
  fft.addListener(ps);
  // Frequency tries to determine the strongest frequency in the wave,
  // which is the fundamental that determines the pitch of the sound
  f = new Frequency(44100.0f);
  ps.addListener(f);
  // listens for harmonics
  sp = new SpectralPeaks(ac, numPeaks);
  ps.addListener(sp);
  meanHarmonics = new float[numPeaks][2];
  // initialise meanHarmonics
  for(int i = 0; i < numPeaks; i++) {
    for(int j = 0; j < 2; j++) {
      meanHarmonics[i][j] = 0;
    }
  }
  ac.out.addDependent(sfs);
  int startTime = millis();
  int loops = 0;
  float meanFrequency = 0.0;
  while(millis() - startTime < 1500) {
    loops++;
    if(loops == 1) {
      sample.start(0);
    }
    Float inputFrequency = f.getFeatures();
    if(inputFrequency != null) {
      meanFrequency += inputFrequency;
    }
    float[][] harmonics = sp.getFeatures();
    if(harmonics != null) {
      for(int feature = 0; feature < numPeaks; feature++) {
        // harmonic must be in human audible range
        // and its amplitude must be large enough to be audible
        if(harmonics[feature][0] < 20000.0 && harmonics[feature][1] > 0.01) {
          // average out the frequencies
          meanHarmonics[feature][0] += harmonics[feature][0];
          // average out the amplitudes
          meanHarmonics[feature][1] += harmonics[feature][1];
        }
      }
    }
  }
  float maxAmp = 0.0;
  float freq = 0.0;
  sample.pause(true);
  meanFrequency /= loops;
  println(meanFrequency);
  for(int feature = 0; feature < numPeaks; feature++) {
    meanHarmonics[feature][0] /= loops;
    meanHarmonics[feature][1] /= loops;
    if(meanHarmonics[feature][1] > maxAmp) {
      freq = meanHarmonics[feature][0];
      maxAmp = meanHarmonics[feature][1];
    }
    println(meanHarmonics[feature][0] + " " + meanHarmonics[feature][1]);
  }
  println(freq + " " + meanFrequency);
  println();
}
I run the FFT for a set amount of time, during which I sum the frequency returned by the Frequency object and the SpectralPeaks features.
At the end I divide the accumulated frequencies and amplitudes to obtain the means. I also try to find the fundamental frequency in the SpectralPeaks array by picking the frequency with the largest amplitude.
But every time I run my program I get a different result, both from SpectralPeaks and Frequency (and their values also differ from each other).
Here are some example values:
1st run:
Spectral Peaks features:
914.84863 0.040409338
844.96295 0.033234257
816.0808 0.027509697
664.9141 0.022158746
633.3232 0.019597264
501.93716 0.01606628
Spectral Peaks fundamental: 914.84863
Frequency: 1028.1572
2nd run, same sample:
Spectral Peaks features:
1023.4123 0.03913592
1109.2562 0.031178929
967.0786 0.026673868
721.2698 0.021666735
629.9294 0.018046249
480.82416 0.014858524
Spectral Peaks fundamental: 1023.4123
Frequency: 1069.3387
Also, the value returned by Frequency is often NaN; I don't understand why that is.
The reason your code returns different values is that it samples and analyzes the audio at different moments each run. Once you start playing the audio, you have no control over when Float inputFrequency = f.getFeatures(); gets executed.
A better approach is not to use millis(): replace the while loop with a for loop and use ac.runForNMillisecondsNonRealTime(). This way you know exactly that you are performing the analysis for 1500 milliseconds.
//while(millis() - startTime < 1500) {
for(int i = 0; i < numPeaks; i++) {
  ac.runForNMillisecondsNonRealTime(1500/numPeaks);
  Float inputFrequency = f.getFeatures();
  if(inputFrequency != null) {
    meanFrequency += inputFrequency;
  }
  float[][] harmonics = sp.getFeatures();
  if(harmonics != null) {
    for(int feature = 0; feature < numPeaks; feature++) {
      // harmonic must be in human audible range
      // and its amplitude must be large enough to be audible
      if(harmonics[feature][0] < 20000.0 && harmonics[feature][1] > 0.01) {
        // average out the frequencies
        meanHarmonics[feature][0] += harmonics[feature][0];
        // average out the amplitudes
        meanHarmonics[feature][1] += harmonics[feature][1];
      }
    }
  }
}
Related
I created a simple neural network; in order to actually train it, I need to know in which direction the weights and biases should be tweaked. I've read some articles on the topic, but I'm not great at math, and the only thing I understood is that the cost function (which I managed to get working) needs to be minimized. It would be great if someone could at least tell me in theory how this works. If required, I can also post more of the code. The minimize function should eventually replace evolve():
import java.util.Random;

public class Neuron {
  Neuron[] input;
  float[] weight;
  float bias;
  Float value = null;

  public Neuron(Neuron[] input) {
    this.input = input;
    weight = new float[input.length];
    setRandom();
  }

  public void setValue(float val) {
    this.value = val;
  }

  public float getValue() {
    if(this.value == null) {
      return calculate();
    }
    else {
      return this.value;
    }
  }

  private float calculate() {
    float res = 0;
    for(int i = 0; i < input.length; i++) {
      res += input[i].getValue() * weight[i];
    }
    res -= bias;
    return sigmoid(res);
  }

  private void setRandom() {
    Random rand = new Random();
    float max = 0;
    for(int i = 0; i < weight.length; i++) {
      weight[i] = rand.nextFloat();
      max += weight[i];
    }
    this.bias = max * 0.8f - rand.nextFloat();
  }

  public void evolve() {
    Random rand = new Random();
    for(int i = 0; i < weight.length; i++) {
      weight[i] += rand.nextFloat() - 0.5f;
    }
    this.bias += rand.nextFloat() - 0.5f;
  }

  public static float sigmoid(float x) {
    return (float)(1/( 1 + Math.pow(Math.E,(-1*(double)x))));
  }
}
Cost function is basically a function of the difference between the real datapoints and your predictions (i.e. it's your penalty). Say for argument's sake, your neural network is f(x) = 2x + 1. Now, say your observed real datapoint is x = 1, y = 4. Therefore your prediction (f(1)) is 3.
If your cost function is the absolute difference between actual observed value and prediction i.e. |f(x) - y| the value of your cost function is 1 (for x = 1) and you would need to minimize this cost function. However, if your cost function is 100 - |f(x) - y| you would want to maximize it. In this cost function your maximum reward is 100.
So your weights and bias need to move in the direction that would get you closer to minimizing your penalty and maximizing your reward. The closer your prediction is to the observed dataset value, the higher the reward and smaller the penalty.
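To make that concrete, here is a minimal Java sketch of one gradient-descent step on a single weight, using a numerical estimate of the slope. Everything here (minimizeStep, costFor, the learning rate) is an illustrative assumption, not part of the Neuron class above:

import java.util.function.DoubleUnaryOperator;

// One gradient-descent step: nudge the weight both ways to estimate the
// slope of the cost, then move against the slope to reduce the penalty.
static float minimizeStep(float weight, DoubleUnaryOperator costFor) {
  double h = 1e-4;            // small nudge used for the slope estimate
  double learningRate = 0.01; // step size (a tunable assumption)
  // slope of the cost at the current weight: (cost(w+h) - cost(w-h)) / 2h
  double slope = (costFor.applyAsDouble(weight + h)
                - costFor.applyAsDouble(weight - h)) / (2 * h);
  // step in the direction that decreases the cost
  return (float) (weight - learningRate * slope);
}

Applying such a step to every weight and the bias, over and over, walks the network toward lower cost; backpropagation computes the same slopes analytically and far more efficiently, but the numerical version shows the idea.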
Notes:
This is a gross oversimplification of the math involved but it should help you get started. Also read about overfitting in machine learning.
For understanding machine-learning theory, Cross Validated would be a better forum.
I need to get the amplitude of a signal at a certain frequency.
I use the FFTAnalysis function below, but I get the whole spectrum. How can I modify this to get the amplitude of the signal at a certain frequency?
For example I have:
data = array of 1024 points;
If I use FFTAnalysis I get an FFTdata array of 1024 points.
But I need only FFTdata[454], for instance.
public static float[] FFTAnalysis(short[] AVal, int Nvl, int Nft) {
  double TwoPi = 6.283185307179586;
  int i, j, n, m, Mmax, Istp;
  double Tmpr, Tmpi, Wtmp, Theta;
  double Wpr, Wpi, Wr, Wi;
  double[] Tmvl;
  float[] FTvl;

  n = Nvl * 2;
  Tmvl = new double[n];
  FTvl = new float[Nvl];

  for (i = 0; i < Nvl; i++) {
    j = i * 2; Tmvl[j] = 0; Tmvl[j+1] = AVal[i];
  }

  i = 1; j = 1;
  while (i < n) {
    if (j > i) {
      Tmpr = Tmvl[i]; Tmvl[i] = Tmvl[j]; Tmvl[j] = Tmpr;
      Tmpr = Tmvl[i+1]; Tmvl[i+1] = Tmvl[j+1]; Tmvl[j+1] = Tmpr;
    }
    i = i + 2; m = Nvl;
    while ((m >= 2) && (j > m)) {
      j = j - m; m = m >> 1;
    }
    j = j + m;
  }

  Mmax = 2;
  while (n > Mmax) {
    Theta = -TwoPi / Mmax; Wpi = Math.sin(Theta);
    Wtmp = Math.sin(Theta / 2); Wpr = Wtmp * Wtmp * 2;
    Istp = Mmax * 2; Wr = 1; Wi = 0; m = 1;

    while (m < Mmax) {
      i = m; m = m + 2; Tmpr = Wr; Tmpi = Wi;
      Wr = Wr - Tmpr * Wpr - Tmpi * Wpi;
      Wi = Wi + Tmpr * Wpi - Tmpi * Wpr;

      while (i < n) {
        j = i + Mmax;
        Tmpr = Wr * Tmvl[j] - Wi * Tmvl[j-1];
        Tmpi = Wi * Tmvl[j] + Wr * Tmvl[j-1];
        Tmvl[j] = Tmvl[i] - Tmpr; Tmvl[j-1] = Tmvl[i-1] - Tmpi;
        Tmvl[i] = Tmvl[i] + Tmpr; Tmvl[i-1] = Tmvl[i-1] + Tmpi;
        i = i + Istp;
      }
    }
    Mmax = Istp;
  }

  for (i = 0; i < Nft; i++) {
    j = i * 2; FTvl[Nft - i - 1] = (float) Math.sqrt((Tmvl[j]*Tmvl[j]) + (Tmvl[j+1]*Tmvl[j+1]));
  }

  return FTvl;
}
The Goertzel algorithm (or filter) is similar to computing the magnitude for just 1 bin of an FFT.
The Goertzel algorithm is identical to 1 bin of an FFT, except for numerical artifacts, if the period of the frequency is an exact submultiple of your Goertzel filter's length. Otherwise there are some added scalloping effects from a rectangular window of non-periodic-in-aperture size, and how that window relates to the phase of the input.
Multiplying by a complex sinusoid and taking the magnitude of the complex sum is also computationally similar to a Goertzel, except that the Goertzel does not require separately calling (or looking up) a trig library function for every point, as it usually includes a trig recursion as part of its algorithm.
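As a rough illustration, a single-bin Goertzel magnitude can be computed like this in Java (a sketch under the usual textbook formulation; the method name and parameters are illustrative):

// Magnitude at one target frequency via the Goertzel recursion.
// freq is the target frequency in Hz, sampleRate the input rate in Hz.
static double goertzelMagnitude(double[] samples, double freq, double sampleRate) {
  double w = 2.0 * Math.PI * freq / sampleRate;
  double coeff = 2.0 * Math.cos(w); // trig is evaluated once, not per sample
  double s1 = 0, s2 = 0;
  for (double sample : samples) {
    double s0 = sample + coeff * s1 - s2; // the trig recursion
    s2 = s1;
    s1 = s0;
  }
  // combine the last two filter states into the magnitude
  return Math.sqrt(s1 * s1 + s2 * s2 - coeff * s1 * s2);
}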
You'd multiply the input data by a (complex) sine wave and integrate the result.
Multiplying by a complex sine amounts to a frequency shift: you want to shift the target frequency down to 0 Hz. The integration is a low-pass filtering step, with the bandwidth being the inverse of the sampling length.
You then end up with a complex number, which is the same number you would have found in the FFT bin for this frequency (because in essence this is what the FFT does).
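For illustration, here is a minimal Java sketch of that multiply-and-integrate idea (the method name and parameters are assumptions for the example, not from any library):

// Magnitude of the signal at one target frequency, by multiplying with a
// complex sinusoid (frequency shift to 0 Hz) and summing (the integration).
static double magnitudeAt(double[] samples, double freq, double sampleRate) {
  double re = 0, im = 0;
  for (int n = 0; n < samples.length; n++) {
    double phase = 2.0 * Math.PI * freq * n / sampleRate;
    re += samples[n] * Math.cos(phase); // real part of x[n] * e^(-i*phase)
    im -= samples[n] * Math.sin(phase); // imaginary part
  }
  return Math.sqrt(re * re + im * im); // magnitude of the complex sum
}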
The fast Fourier transform (FFT) is a clever way of doing many discrete Fourier transforms very quickly. As such, the FFT is designed for when one needs a lot of frequencies from the input. If you want just one frequency, the DFT is the way to go (as otherwise you're wasting resources).
The DFT for a single bin $k$ is defined as: $X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}$
So, in pseudocode:
samples = [#,#,#,#...]
N = length(samples);        // number of samples
SAMPLE_RATE = 44100;        // samples per second (an example value)
FREQ = 440;                 // frequency to detect
k = FREQ * N / SAMPLE_RATE; // DFT bin index corresponding to FREQ
PI = 3.14159;
E = 2.718;
DFT = 0i; // this is a complex number
for(int sampleNum=0; sampleNum<N; sampleNum++){
  DFT += samples[sampleNum] * E^( (-2*PI*1i*k*sampleNum) / N ); // Note that "i" here means imaginary
}
The resulting variable DFT will be a complex number representing the real and imaginary values of the chosen frequency.
Task: An unfair die (6 sides) is rolled n times. The probability of rolling a 1 is p1, of a 2 is p2, and so on. Write a computer program that, given n (n < 100), the probability set (p1,p2,p3,p4,p5,p6) and $x \in [n,600n]$, finds the probability that the sum of the dice values is less than x. The program cannot run for more than 5 minutes. This is an extra question that will give me extra points, but so far nobody has done it. I guess a beginner computer scientist like me can learn from this code too, since I found no help on biased dice on the web and came up with a roulette-like solution. I also kind of wanted to show the world my way.
I have 2 solutions, using geometric and statistical probability.
My questions are: 1) Is it correct when I do it like this, or am I going wrong somewhere?
2) Which one do you think gives the better answer, geometric or statistical probability?
My intuition says it is the geometric one, because it is more reliable.
I think my code is giving me the correct answer: usually more than 0.99.
I wanted somebody to check my work since I'm not sure at all, and I wanted to share this code with others.
I prefer Java since it is much faster than R with loops, but I also gave R code for the statistical version; they are very similar, so I hope that is not a problem.
Java code :
import java.util.ArrayList;

public class Statistical_prob_lisayl2_geometrical {

  public static double mean(ArrayList<Double> a) {
    double sum = 0;
    int len = a.size();
    for (int i = 0; i < len; i++) {
      sum = sum + a.get(i);
    }
    return (sum/len);
  }

  public static double geom_prob(double p1,double p2,double p3,double p4,double p5,double p6){
    ArrayList<Double> prob_values = new ArrayList<Double>();
    int repeatcount = 1000000;
    int[] options = {1,2,3,4,5,6};
    int n = 50;
    double[] probabilities = {p1,p2,p3,p4,p5,p6};
    for (int i = 0; i < repeatcount; i++) { // a lot of repeats for a better statistical estimate
      int sum = 0; // for each repeat, the sum is accumulated here
      for (int j = 0; j < n; j++) { // each repeat consists of n casts of the die
        double probability_value = 0; // running total of the side probabilities
        double instant_probability = Math.random(); // random draw that selects the die value
        for (int k = 0; k < 6; k++) { // with 6 sides, walk through the probabilities like a roulette wheel
          probability_value = probability_value + probabilities[k]; // sum the probabilities to find which section the draw belongs to
          if (probability_value > instant_probability) {
            sum = sum + options[k]; // the draw falls into side k's section, so add that side's value to the sum
            break; // break the loop, because it would give us false values otherwise
          }
        }
      }
      double length1 = (600*n)-n-(sum-n); // length of possible x values minus length of sum
      double length2 = 600*n-n;
      prob_values.add(length1/length2); // geometric probability l1/l2
    }
    return mean(prob_values); // mean of the ArrayList with 1000000 numbers in it
  }

  public static double stat_prob(double p1,double p2,double p3,double p4,double p5,double p6){
    ArrayList<Double> prob_values = new ArrayList<Double>();
    int repeatcount = 1000000;
    int[] options = {1,2,3,4,5,6};
    int n = 50;
    double[] probabilities = {p1,p2,p3,p4,p5,p6};
    int count = 0;
    for (int i = 0; i < repeatcount; i++) {
      int sum = 0;
      for (int j = 0; j < n; j++) {
        double probability_value = 0;
        double instant_probability = Math.random();
        for (int k = 0; k < 6; k++) {
          probability_value = probability_value + probabilities[k];
          if (probability_value > instant_probability) {
            sum = sum + options[k];
            break;
          }
        }
      }
      int x = (int)Math.round(Math.random()*(600*n-n)+n);
      if (x > sum) {
        count = count + 1;
      }
    }
    double probability = (double)count/(double)repeatcount;
    return probability;
  }

  public static void main(String[] args) {
    System.out.println(stat_prob(0.1,0.1,0.1,0.1,0.3,0.3));
    System.out.println(geom_prob(0.1,0.1,0.1,0.1,0.3,0.3));
  }
}
R code:
repeatcount = 100000
options = c(1,2,3,4,5,6)
n = 50
probabilities = c(1/10,1/10,1/10,1/10,3/10,3/10)
count = 0
for (i in 1:repeatcount) {
  sum = 0
  for (j in 1:n) {
    probability_value = 0
    instant_probability = runif(1,0,1)
    for (k in 1:6) {
      probability_value = probability_value + probabilities[k]
      if (probability_value > instant_probability) {
        sum = sum + options[k]
        break
      }
    }
  }
  x = runif(1,n,600*n)
  if (x > sum) {
    count = count + 1
  }
}
count
probability = count/repeatcount
probability
Is this what you are trying to do?
n <- 50 # number of rolls in a trial
k <- 100000 # number if trials in the simulation
x <- 220 # cutoff for calculating P(X<x)
p <- c(1/10,1/10,1/10,1/10,3/10,3/10) # distribution of p-side
X <- sapply(1:k,function(i)sum(sample(1:6,n,replace=T,prob=p)))
P <- sum(X<x)/length(X) # probability that X < x
par(mfrow=c(1,2))
hist(X)
plot(quantile(X,probs=seq(0,1,.01)),seq(0,1,.01),type="l",xlab="x",ylab="P(X < x)")
lines(c(x,x,0),c(0,P,P),col="red",lty=2)
This makes sense because the expected value of a single roll is
E(s) = 1*0.1 + 2*0.1 + 3*0.1 + 4*0.1 + 5*0.3 + 6*0.3 = 4.3
Since you are simulating 50 rolls, the expected value of the total should be 50*4.3 = 215, which is almost exactly what it is.
The slow step, below, runs in about 3.5s on my system. Obviously the actual time will depend on the number of trials in the simulation, and the speed of your computer, but 5 min is absurd...
system.time(X <- sapply(1:k,function(i)sum(sample(1:6,n,replace=T,prob=p))))
# user system elapsed
# 3.20 0.00 3.24
I am trying to write a small Discrete Fourier Transform in Java to find the magnitude spectrum of a clean 400 Hz sine signal (1 second of PCM signed-short samples).
So first I calculate the DFT for the complex values:
public void berechneDFT(int abtastwerte) {
  int i;
  int N = abtastwerte;
  ReX = new double[N/2+1];
  ImX = new double[N/2+1];
  TextFileOperator tfo = new TextFileOperator(file.substring(0, file.length()-4)+"_DFT.txt");
  try {
    tfo.openOutputStream();
    tfo.writeString("ReX ImX\n");
  } catch (FileNotFoundException e) {
    e.printStackTrace();
  }
  // compute the real and imaginary parts
  for (i=0, ReX[i] = 0, ImX[i] = 0; i <= N/2; i++)
  {
    for(int n=0; n < N; n++)
    {
      ReX[i] += x[n] * Math.cos( (2.0 * Math.PI * n * i) / (double) N);
      ImX[i] += - (x[n] * Math.sin( (2.0 * Math.PI * n * i) / (double) N));
    }
    tfo.writeString(ReX[i] +" "+ImX[i]+"\n");
  }
  x = null;
  tfo.closeOutputStream(); // flush
  System.out.println("Components computed.");
}
And then I try to calculate the magnitude spectrum:
public void berechneBetragsSpektrum() {
  int N = ReX.length;
  TextFileOperator tfo = new TextFileOperator("betragsspektrum_400hz.txt");
  try {
    tfo.openOutputStream();
  } catch (FileNotFoundException e) {
    e.printStackTrace();
  }
  double powerAtFreq;
  int marker = 0;
  double maxPowerAtFreq = 0;
  for(int i=0; i < N; i++)
  {
    double A1 = ReX[i] * ReX[i];
    double A2 = ImX[i] * ImX[i];
    powerAtFreq = Math.sqrt(A1+A2);
    if(powerAtFreq > maxPowerAtFreq)
    {
      maxPowerAtFreq = powerAtFreq;
      marker = i;
    }
    tfo.writeString(powerAtFreq+"\n");
  }
  tfo.closeOutputStream();
  System.out.println("Strongest frequency: "+(marker)+" Hz");
}
But for some reason I only get the result of 400 Hz in 'marker' if I choose to check all 16000 samples. Shouldn't I also see the peak at 400 Hz if I choose only 800 samples, since with 800 I could see 800/2 = 400 Hz as the maximum frequency?
I guess some little thing must be wrong with the code, because if I choose 800 samples I get 20 Hz, and for 1600 samples I get 40 Hz, which is always 1/40 of the sample rate.
What am I missing or doing wrong? The results are strange.
Note that if I do the inverse DFT with the complex values I can reconstruct the audio signal again!
The answer to the question is that the indices in the DFT and magnitude spectrum are bin numbers, not frequencies in Hz: they are relative frequencies that still have to be converted. Bin i corresponds to i * sampleRate / N Hz. With 800 samples of a 16 kHz recording, the peak at index 20 is therefore 20 * 16000 / 800 = 400 Hz, exactly where it should be; only when N equals the sample rate (here, 16000 samples of a one-second clip) does the bin index happen to equal the frequency in Hz.
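A tiny helper makes the conversion explicit (a sketch; the name binToFrequency is illustrative):

// Convert a DFT bin index into a frequency in Hz.
// For 800 samples at 16000 Hz: binToFrequency(20, 16000, 800) == 400.0
static double binToFrequency(int binIndex, double sampleRate, int n) {
  return binIndex * sampleRate / n;
}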
I'm currently working in Java for Android. I am trying to implement the FFT in order to build a kind of frequency viewer.
I was actually able to do it, but the display is not fluid at all.
I added some traces in order to check the processing time of each part of my code, and the FFT takes about 300 ms to be applied to my complex array of 4096 elements. I need it to take less than 100 ms, as the thread that displays the frequencies is refreshed every 100 ms. I reduced the initial array so that the FFT works on only 1024 elements, and then it is fast enough, but the result is degraded.
Does someone have an idea?
I used the default fft.java and Complex.java classes that can be found on the internet.
For information, my code computing the FFT is the following:
int bytesPerSample = 2;
Complex[] x = new Complex[bufferSize/2];
for (int index = 0; index < bufferReadResult - bytesPerSample + 1; index += bytesPerSample)
{
  // 16 bits = 2 bytes: assemble each signed 16-bit sample from the byte buffer
  double sample = 0;
  for (int b = 0; b < bytesPerSample; b++) {
    int v = buffer[index + b];
    if (b < bytesPerSample - 1 || bytesPerSample == 1) {
      v &= 0xFF;
    }
    sample += v << (b * 8);
  }
  double sample32 = 100 * (sample / 32768.0); // don't know the use of this computation...
  x[index/bytesPerSample] = new Complex(sample32, 0);
}

Complex[] tx = new Complex[1024];
// reduce the size of the signal in order to improve the FFT processing time
for (int i = 0; i < x.length/4; i++)
{
  tx[i] = new Complex(x[i*4].re(), 0);
}

// signal retrieval thanks to the FFT
fftRes = FFT.fft(tx);
I don't know Java, but your way of converting between your input data and an array of complex values seems very convoluted. You're building two arrays of complex data where only one is necessary.
Also it smells like your complex real and imaginary values are doubles. That's way over the top for what you need, and ARMs are veeeery slow at double arithmetic anyway. Is there a complex class based on single-precision floats?
Thirdly, you're performing a complex FFT on real data by filling the imaginary part of your complex values with zero. Whilst the result will be correct, it is twice as much work straight off (unless the routine is clever enough to spot that, which I doubt). If possible, perform a real FFT on your data and save half your time.
And then as Simon says there's the whole issue of avoiding garbage collection and memory allocation.
Also it looks like your FFT has no preparatory step. This means that the routine FFT.fft() is calculating the complex exponentials every time. The longest part of the FFT calculation is working out the complex exponentials, which is a shame because for any given FFT length the exponentials are constants; they don't depend on your input data at all. In the real-time world we use FFT routines where we calculate the exponentials once at the start of the program, and the actual FFT then takes that constant array as one of its inputs. I don't know if your FFT class can do something similar.
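As a rough sketch of such a preparatory step (the names prepareTwiddles, twiddleCos and twiddleSin are illustrative, not part of your FFT class):

// One-time setup for a fixed FFT length n: the complex exponentials
// (twiddle factors) depend only on n, never on the input signal.
static float[] twiddleCos, twiddleSin;

static void prepareTwiddles(int n) {
  twiddleCos = new float[n / 2];
  twiddleSin = new float[n / 2];
  for (int k = 0; k < n / 2; k++) {
    double kth = -2.0 * Math.PI * k / n;
    twiddleCos[k] = (float) Math.cos(kth);
    twiddleSin[k] = (float) Math.sin(kth);
  }
}

The recombination loop can then look the factors up (for a sub-FFT of length N within a full length maxN, the index is k * (maxN / N)) instead of calling Math.cos and Math.sin on every pass.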
If you do end up going to something like FFTW then you're going to have to get used to calling C code from your Java. Also make sure you get a version that supports (I think) NEON, ARM's answer to SSE, AVX and Altivec. It's worth ploughing through their release notes to check. Also I strongly suspect that FFTW will only be able to offer a significant speed up if you ask it to perform an FFT on single precision floats, not doubles.
Google luck!
--Edit--
I meant of course 'good luck'. Give me a real keyboard quick, these touchscreen ones are unreliable...
First, thanks for all your answers.
I followed them and made two tests:
First, I replaced the double used in my Complex class by float. The result is just a bit better, but not enough.
Then I rewrote the fft method so as not to use Complex anymore, but a two-dimensional float array instead. For each row of this array, the first column contains the real part, and the second one the imaginary part.
I also changed my code to instantiate the float array only once, in the onCreate method.
And the result... is worse! Now it takes a little more than 500 ms instead of 300 ms.
I don't know what to do now.
You can find below the initial fft function, and then the one I rewrote.
Thanks for your help.
// compute the FFT of x[], assuming its length is a power of 2
public static Complex[] fft(Complex[] x) {
  int N = x.length;

  // base case
  if (N == 1) return new Complex[] { x[0] };

  // radix 2 Cooley-Tukey FFT
  if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2 : " + N); }

  // fft of even terms
  Complex[] even = new Complex[N/2];
  for (int k = 0; k < N/2; k++) {
    even[k] = x[2*k];
  }
  Complex[] q = fft(even);

  // fft of odd terms
  Complex[] odd = even; // reuse the array
  for (int k = 0; k < N/2; k++) {
    odd[k] = x[2*k + 1];
  }
  Complex[] r = fft(odd);

  // combine
  Complex[] y = new Complex[N];
  for (int k = 0; k < N/2; k++) {
    double kth = -2 * k * Math.PI / N;
    Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
    y[k] = q[k].plus(wk.times(r[k]));
    y[k + N/2] = q[k].minus(wk.times(r[k]));
  }
  return y;
}
public static float[][] fftf(float[][] x) {
  /**
   * x[][0] = real part
   * x[][1] = imaginary part
   */
  int N = x.length;

  // base case
  if (N == 1) return new float[][] { x[0] };

  // radix 2 Cooley-Tukey FFT
  if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2 : " + N); }

  // fft of even terms
  float[][] even = new float[N/2][2];
  for (int k = 0; k < N/2; k++) {
    even[k] = x[2*k];
  }
  float[][] q = fftf(even);

  // fft of odd terms
  float[][] odd = even; // reuse the array
  for (int k = 0; k < N/2; k++) {
    odd[k] = x[2*k + 1];
  }
  float[][] r = fftf(odd);

  // combine
  float[][] y = new float[N][2];
  double kth, wkcos, wksin;
  for (int k = 0; k < N/2; k++) {
    kth = -2 * k * Math.PI / N;
    //Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
    wkcos = Math.cos(kth); // real part
    wksin = Math.sin(kth); // imaginary part
    // y[k] = q[k].plus(wk.times(r[k]));
    y[k][0] = (float) (q[k][0] + wkcos * r[k][0] - wksin * r[k][1]);
    y[k][1] = (float) (q[k][1] + wkcos * r[k][1] + wksin * r[k][0]);
    // y[k + N/2] = q[k].minus(wk.times(r[k]));
    y[k + N/2][0] = (float) (q[k][0] - (wkcos * r[k][0] - wksin * r[k][1]));
    y[k + N/2][1] = (float) (q[k][1] - (wkcos * r[k][1] + wksin * r[k][0]));
  }
  return y;
}
Actually, I think I don't understand everything.
First, about Math.cos and Math.sin: how do you want me to avoid computing them each time? Do you mean that I should compute all the values only once (e.g. store them in an array) and use them for each computation?
Second, about the N % 2 check: indeed it's not very useful; I could do the test before calling the function.
Third, about Simon's advice: I mixed what he said and what you said, which is why I replaced Complex with a two-dimensional float[][]. If that was not what he suggested, then what was it?
Lastly, I'm not an FFT expert, so what do you mean by performing a "real FFT"? Do you mean that my imaginary part is useless? If so, I'm not sure, because later in my code I compute the magnitude of each frequency, i.e. sqrt(real[i]*real[i] + imag[i]*imag[i]), and I think my imaginary part is not equal to zero...
Thanks!