FFT for a single frequency - java

I need to get the amplitude of a signal at a certain frequency.
I use the FFTAnalysis function below, but it gives me the whole spectrum. How can I modify it to get the amplitude of the signal at one specific frequency?
For example, I have:
data = array of 1024 points;
If I use FFTAnalysis I get an FFTdata array of 1024 points.
But I only need FFTdata[454], for instance:
public static float[] FFTAnalysis(short[] AVal, int Nvl, int Nft) {
    double TwoPi = 6.283185307179586;
    int i, j, n, m, Mmax, Istp;
    double Tmpr, Tmpi, Wtmp, Theta;
    double Wpr, Wpi, Wr, Wi;
    double[] Tmvl;
    float[] FTvl;

    n = Nvl * 2;
    Tmvl = new double[n];
    FTvl = new float[Nvl];

    for (i = 0; i < Nvl; i++) {
        j = i * 2; Tmvl[j] = 0; Tmvl[j+1] = AVal[i];
    }

    i = 1; j = 1;
    while (i < n) {
        if (j > i) {
            Tmpr = Tmvl[i]; Tmvl[i] = Tmvl[j]; Tmvl[j] = Tmpr;
            Tmpr = Tmvl[i+1]; Tmvl[i+1] = Tmvl[j+1]; Tmvl[j+1] = Tmpr;
        }
        i = i + 2; m = Nvl;
        while ((m >= 2) && (j > m)) {
            j = j - m; m = m >> 1;
        }
        j = j + m;
    }

    Mmax = 2;
    while (n > Mmax) {
        Theta = -TwoPi / Mmax; Wpi = Math.sin(Theta);
        Wtmp = Math.sin(Theta / 2); Wpr = Wtmp * Wtmp * 2;
        Istp = Mmax * 2; Wr = 1; Wi = 0; m = 1;

        while (m < Mmax) {
            i = m; m = m + 2; Tmpr = Wr; Tmpi = Wi;
            Wr = Wr - Tmpr * Wpr - Tmpi * Wpi;
            Wi = Wi + Tmpr * Wpi - Tmpi * Wpr;

            while (i < n) {
                j = i + Mmax;
                Tmpr = Wr * Tmvl[j] - Wi * Tmvl[j-1];
                Tmpi = Wi * Tmvl[j] + Wr * Tmvl[j-1];

                Tmvl[j] = Tmvl[i] - Tmpr; Tmvl[j-1] = Tmvl[i-1] - Tmpi;
                Tmvl[i] = Tmvl[i] + Tmpr; Tmvl[i-1] = Tmvl[i-1] + Tmpi;
                i = i + Istp;
            }
        }
        Mmax = Istp;
    }

    for (i = 0; i < Nft; i++) {
        j = i * 2; FTvl[Nft - i - 1] = (float) Math.sqrt((Tmvl[j]*Tmvl[j]) + (Tmvl[j+1]*Tmvl[j+1]));
    }

    return FTvl;
}

The Goertzel algorithm (or filter) is similar to computing the magnitude for just 1 bin of an FFT.
The Goertzel algorithm is identical to 1 bin of an FFT, except for numerical artifacts, if the period of the frequency is an exact submultiple of your Goertzel filter's length. Otherwise there are some added scalloping effects from a rectangular window whose size is not periodic in the aperture, and these depend on how that window relates to the phase of the input.
Multiplying by a complex sinusoid and taking the magnitude of the complex sum is also computationally similar to a Goertzel, except that the Goertzel does not require separately calling (or looking up) a trig library function for every point, as it usually includes a trig recursion as part of its algorithm.
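For illustration, here is a minimal Goertzel sketch in Java (not from the original answer; the method name and the real-valued double[] input are my assumptions). It computes the magnitude for one bin k, where k = targetFrequency * N / sampleRate:
// Goertzel magnitude for bin k of an N-point block of real samples.
public static double goertzelMagnitude(double[] samples, int k) {
    int n = samples.length;
    double w = 2.0 * Math.PI * k / n;
    double coeff = 2.0 * Math.cos(w); // trig recursion coefficient, computed once
    double s1 = 0.0, s2 = 0.0;
    for (int i = 0; i < n; i++) {
        double s0 = samples[i] + coeff * s1 - s2; // no per-sample trig calls
        s2 = s1;
        s1 = s0;
    }
    // squared magnitude of the bin is s1^2 + s2^2 - coeff*s1*s2
    return Math.sqrt(s1 * s1 + s2 * s2 - coeff * s1 * s2);
}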

You'd multiply the input data by a (complex) sine wave and integrate the result.
Multiplying by a complex sine is equivalent to a frequency shift; you want to shift the target frequency down to 0 Hz. The integration is a low-pass filtering step, with a bandwidth equal to the inverse of the sampling length.
You then end up with a complex number, which is the same number you would have found in the FFT bin for this frequency (because in essence this is what the FFT does).
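A minimal Java sketch of that approach (the method name and the sample-rate parameter are assumptions for the example, not from the answer):
// Magnitude of a real signal at one frequency: multiply by a complex sinusoid
// (frequency shift to 0 Hz) and sum (the low-pass/integration step).
public static double singleBinMagnitude(short[] samples, double frequencyHz, double sampleRate) {
    double re = 0.0, im = 0.0;
    for (int t = 0; t < samples.length; t++) {
        double angle = 2.0 * Math.PI * frequencyHz * t / sampleRate;
        re += samples[t] * Math.cos(angle);
        im -= samples[t] * Math.sin(angle);
    }
    return Math.sqrt(re * re + im * im); // same value an FFT bin at this frequency would give
}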

The fast fourier transform (FFT) is a clever way of doing many discrete fourier transforms very quickly. As such, the FFT is designed for when one needs a lot of frequencies from the input. If you want just one frequency, the DFT is the way to go (as otherwise you're wasting resources).
The DFT for a single bin k is defined as:
         N-1
  X[k] = SUM { x[n] * E^(-2*PI*i*n*k / N) }
         n=0
So, in pseudocode:
samples = [#,#,#,#...];        // N samples
N = length(samples);
SAMPLE_RATE = 44100;           // samples per second (assumed for the example)
FREQ = 440;                    // frequency to detect, in Hz
k = FREQ * N / SAMPLE_RATE;    // bin index corresponding to FREQ
PI = 3.14159;
E = 2.71828;
DFT = 0 + 0i;                  // this is a complex number
for (int sampleNum = 0; sampleNum < N; sampleNum++) {
    DFT += samples[sampleNum] * E^( (-2*PI*1i*k*sampleNum) / N ); // Note that "i" here means imaginary
}
The resulting variable DFT will be a complex number representing the real and imaginary values of the chosen frequency.

Related

Reducing time complexity of bilinear interpolation

I use bilinear interpolation in my android application. It runs perfectly, but takes a lot of time to get the result.
I tested it with xi and yi each containing 259,920 points. The response times were: a Galaxy Note 4 takes 3 seconds,
and an HTC One M8 takes about 8 seconds! So what can I change or use to reduce the time?
The code I use for bilinear interpolation:
public static double[] BiInterp(Mat z, ArrayList<Double> xi, ArrayList<Double> yi) {
    // Declare matrix indices
    int xi_i, yi_i;
    // Initialize output vector
    double zi[] = new double[xi.size()];
    double s00, s01, s10, s11;

    for (int i = 0; i < xi.size(); i++) { // Note: xi.length = yi.length !
        xi_i = xi.get(i).intValue(); // X index without round
        yi_i = yi.get(i).intValue(); // Y index without round
        if (xi_i < z.rows() - 1 && yi_i < z.cols() - 1 && xi_i >= 0 && yi_i >= 0) {
            // Four neighbors of sample pixel
            s00 = z.get(xi_i, yi_i)[0];     s01 = z.get(xi_i, yi_i + 1)[0];
            s10 = z.get(xi_i + 1, yi_i)[0]; s11 = z.get(xi_i + 1, yi_i + 1)[0];

            int neighbor_no = 4; // As bilinear interpolation takes 4 neighbors
            double A[][] = new double[neighbor_no][neighbor_no];
            A[0][0] = xi_i;     A[0][1] = yi_i;     A[0][2] = xi_i * yi_i;             A[0][3] = 1;
            A[1][0] = xi_i;     A[1][1] = yi_i + 1; A[1][2] = xi_i * (yi_i + 1);       A[1][3] = 1;
            A[2][0] = xi_i + 1; A[2][1] = yi_i;     A[2][2] = (xi_i + 1) * yi_i;       A[2][3] = 1;
            A[3][0] = xi_i + 1; A[3][1] = yi_i + 1; A[3][2] = (xi_i + 1) * (yi_i + 1); A[3][3] = 1;

            GaussianElimination solveE = new GaussianElimination();
            double b[] = {s00, s01, s10, s11};
            double x[] = solveE.solve(A, b);
            zi[i] = xi.get(i) * x[0] + yi.get(i) * x[1] + xi.get(i) * yi.get(i) * x[2] + x[3];
        }
    }
    return zi;
}
and I use Gaussian elimination to solve the system of 4 unknowns:
private static final double EPSILON = 1e-10;

// Gaussian elimination with partial pivoting
public static double[] solve(double[][] A, double[] b) {
    int N = b.length;
    for (int p = 0; p < N; p++) {
        // find pivot row and swap
        int max = p;
        for (int i = p + 1; i < N; i++) {
            if (Math.abs(A[i][p]) > Math.abs(A[max][p])) {
                max = i;
            }
        }
        double[] temp = A[p];
        A[p] = A[max];
        A[max] = temp;
        double t = b[p];
        b[p] = b[max];
        b[max] = t;

        // singular or nearly singular
        if (Math.abs(A[p][p]) <= EPSILON) {
            throw new RuntimeException("Matrix is singular or nearly singular");
        }

        // pivot within A and b
        for (int i = p + 1; i < N; i++) {
            double alpha = A[i][p] / A[p][p];
            b[i] -= alpha * b[p];
            for (int j = p; j < N; j++) {
                A[i][j] -= alpha * A[p][j];
            }
        }
    }

    // back substitution
    double[] x = new double[N];
    for (int i = N - 1; i >= 0; i--) {
        double sum = 0.0;
        for (int j = i + 1; j < N; j++) {
            sum += A[i][j] * x[j];
        }
        x[i] = (b[i] - sum) / A[i][i];
    }
    return x;
}
As you can see in the bilinear code, I read the pixel intensities directly from the Mat object. When I use a plain Java matrix instead, it takes much less time (about 1 second on the Note 4), but converting the Mat image to a matrix takes 4 seconds, so I preferred to keep using Mat.
Here is a simpler bilinear interpolation algorithm:
Find the fractional part of yi and use it to interpolate between s00 and s01 to find s0, and between s10 and s11 to find s1
Find the fractional part of xi and use it to interpolate between s0 and s1 to find zi
Basically you are decomposing it into three simple linear interpolations. You can visualize it as an H shape. First you interpolate down the left and right posts of the H to get values part way down each. Then you interpolate along the cross-beam to get the final value in the middle.
The code would be something like this:
xi_i = xi.get(i).intValue(); // X index without round
yi_i = yi.get(i).intValue(); // Y index without round
if (xi_i < z.rows() - 1 && yi_i < z.cols() - 1 && xi_i >= 0 && yi_i >= 0) {
    // Four neighbors of sample pixel
    s00 = z.get(xi_i, yi_i)[0];     s01 = z.get(xi_i, yi_i + 1)[0];
    s10 = z.get(xi_i + 1, yi_i)[0]; s11 = z.get(xi_i + 1, yi_i + 1)[0];

    // find fractional part of yi:
    double yi_frac = yi.get(i) - (double) yi_i;
    // interpolate between s00 and s01 to find s0:
    double s0 = s00 + ((s01 - s00) * yi_frac);
    // interpolate between s10 and s11 to find s1:
    double s1 = s10 + ((s11 - s10) * yi_frac);
    // find fractional part of xi:
    double xi_frac = xi.get(i) - (double) xi_i;
    // interpolate between s0 and s1 to find zi:
    zi[i] = s0 + ((s1 - s0) * xi_frac);
}
You could also speed the whole thing up (at the expense of accuracy) by using fixed point integers instead of doubles.
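As a rough illustration of that fixed-point idea (my own sketch, not code from the answer; it assumes the pixel values are small integers such as 0..255 so the intermediate products fit):
// Same three linear interpolations, but with 16.16 fixed-point fractions instead of doubles.
int yi_fp = (int) ((yi.get(i) - yi_i) * 65536.0); // fractional part of yi in 16.16
int xi_fp = (int) ((xi.get(i) - xi_i) * 65536.0); // fractional part of xi in 16.16
int is00 = (int) s00, is01 = (int) s01, is10 = (int) s10, is11 = (int) s11;
int s0 = (is00 << 16) + (is01 - is00) * yi_fp;    // interpolate along y, result in 16.16
int s1 = (is10 << 16) + (is11 - is10) * yi_fp;
long z16 = ((long) s0 << 16) + (long) (s1 - s0) * xi_fp; // interpolate along x, result in 32.32
zi[i] = z16 / 4294967296.0;                       // back to double only for the output array
Whether this wins anything over doubles depends on the device; the main speedup still comes from dropping the per-pixel 4x4 solve.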

How to mix PCM audio sources (Java)?

Here's what I'm working with right now:
for (int i = 0, numSamples = soundBytes.length / 2; i < numSamples; i += 2)
{
    // Get the samples.
    int sample1 = ((soundBytes[i] & 0xFF) << 8) | (soundBytes[i + 1] & 0xFF);   // Automatically converts to unsigned int 0...65535
    int sample2 = ((outputBytes[i] & 0xFF) << 8) | (outputBytes[i + 1] & 0xFF); // Automatically converts to unsigned int 0...65535

    // Normalize for simplicity.
    float normalizedSample1 = sample1 / 65535.0f;
    float normalizedSample2 = sample2 / 65535.0f;
    float normalizedMixedSample = 0.0f;

    // Apply the algorithm.
    if (normalizedSample1 < 0.5f && normalizedSample2 < 0.5f)
        normalizedMixedSample = 2.0f * normalizedSample1 * normalizedSample2;
    else
        normalizedMixedSample = 2.0f * (normalizedSample1 + normalizedSample2) - (2.0f * normalizedSample1 * normalizedSample2) - 1.0f;

    int mixedSample = (int) (normalizedMixedSample * 65535);

    // Replace the sample in soundBytes array with this mixed sample.
    soundBytes[i] = (byte) ((mixedSample >> 8) & 0xFF);
    soundBytes[i + 1] = (byte) (mixedSample & 0xFF);
}
As far as I can tell, it's an accurate representation of the algorithm defined on this page: http://www.vttoth.com/CMS/index.php/technical-notes/68
However, just mixing a sound with silence (all 0's) results in a sound that very obviously doesn't sound right; maybe it's best to describe it as higher-pitched and louder.
I'd appreciate help in determining whether I'm implementing the algorithm correctly, or whether I simply need to go about it a different way (a different algorithm/method).
In the linked article the author assumes A and B to represent entire streams of audio. More specifically, X means the maximum abs value of all of the samples in stream X, where X is either A or B. So what his algorithm does is scan the entirety of both streams to compute the max abs sample of each, and then scale things so that the output theoretically peaks at 1.0. You'll need to make multiple passes over the data in order to implement this algorithm, and if your data is streaming in then it simply will not work.
Here is an example of how I think the algorithm is supposed to work. It assumes that the samples have already been converted to floating point, to sidestep the issue of your conversion code being wrong (I'll explain what is wrong with that later):
double[] samplesA = ConvertToDouble(samples1);
double[] samplesB = ConvertToDouble(samples2);
double A = ComputeMaxPeak(samplesA);
double B = ComputeMaxPeak(samplesB);
// Z always equals 1, which is an un-useful bit of information.
double Z = A + B - A * B;
// really need to find a value x such that xA+xB=1, which I think is:
double x = 1 / (Math.sqrt(A) * Math.sqrt(B));
// Now mix and scale the samples
double[] samples = MixAndScale(samplesA, samplesB, x);
Mixing and scaling:
double[] MixAndScale(double[] samplesA, double[] samplesB, double scalingFactor)
{
    double[] result = new double[samplesA.length];
    for (int i = 0; i < samplesA.length; i++) {
        result[i] = scalingFactor * (samplesA[i] + samplesB[i]);
    }
    return result;
}
Computing the max peak:
double ComputeMaxPeak(double[] samples)
{
    double max = 0;
    for (int i = 0; i < samples.length; i++)
    {
        double x = Math.abs(samples[i]);
        if (x > max)
            max = x;
    }
    return max;
}
And conversion. Notice how I'm using short so that the sign bit is properly maintained:
double[] ConvertToDouble(byte[] bytes)
{
    double[] samples = new double[bytes.length / 2];
    for (int i = 0; i < samples.length; i++)
    {
        // Assemble the 16-bit sample so the sign bit is properly maintained.
        short tmp = (short) ((bytes[i * 2] << 8) | (bytes[i * 2 + 1] & 0xFF));
        samples[i] = tmp / 32767.0;
    }
    return samples;
}
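To complete the round trip, here is a sketch of the reverse conversion (my addition, not part of the original answer), writing the mixed doubles back out as 16-bit big-endian PCM with simple clipping:
byte[] ConvertToBytes(double[] samples)
{
    byte[] bytes = new byte[samples.length * 2];
    for (int i = 0; i < samples.length; i++)
    {
        double clipped = Math.max(-1.0, Math.min(1.0, samples[i])); // guard against slight overshoot
        short s = (short) (clipped * 32767.0);
        bytes[i * 2]     = (byte) ((s >> 8) & 0xFF); // high byte first, matching ConvertToDouble
        bytes[i * 2 + 1] = (byte) (s & 0xFF);
    }
    return bytes;
}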

Simple bandpass filter in java

I am trying to write a simple band pass filter following the instructions in this book. My code creates a blackman window, and combines two low pass filter kernels to create a band pass filter kernel using spectral inversion, as described in the second example here (table 16-2).
I am testing my code by comparing it with the results I get in matlab. When I test the methods that create a blackman window and a low pass filter kernel separately, I get results that are close to what I see in matlab (up to some digits after the decimal point - I attribute the error to java double variables rounding issues), but my band pass filter kernel is incorrect.
Tests I ran:
1. Created a blackman window and compared it with what I get in matlab - all good.
2. Created a low pass filter using this window with my code and fir1(N, Fc1/(Fs/2), win, flag); in matlab (see full code below). I think the results are correct, although the error grows as Fc1 increases (why?).
3. Created a band pass filter using my code and fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag); in matlab - the results are completely off.
4. Filtered my data using my code and the kernel generated by matlab - all good.
So - why is my band pass filter kernel off? What did I do wrong?
I think I either have a bug or fir1 uses a different algorithm, but I can't check because the article referenced in its documentation is not publicly available.
This is my matlab code:
Fs = 200; % Sampling Frequency
N = 10; % Order
Fc1 = 1.5; % First Cutoff Frequency
Fc2 = 7.5; % Second Cutoff Frequency
flag = 'scale'; % Sampling Flag
% Create the window vector for the design algorithm.
win = blackman(N+1);
% Calculate the coefficients using the FIR1 function.
b = fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag);
Hd = dfilt.dffir(b);
res = filter(Hd, data);
This is my java code (I believe the bug is in bandPassKernel):
/**
 * See - http://www.mathworks.com/help/signal/ref/blackman.html
 * @param length
 * @return
 */
private static double[] blackmanWindow(int length) {
    double[] window = new double[length];
    double factor = Math.PI / (length - 1);
    for (int i = 0; i < window.length; ++i) {
        window[i] = 0.42d - (0.5d * Math.cos(2 * factor * i)) + (0.08d * Math.cos(4 * factor * i));
    }
    return window;
}

private static double[] lowPassKernel(int length, double cutoffFreq, double[] window) {
    double[] ker = new double[length + 1];
    double factor = Math.PI * cutoffFreq * 2;
    double sum = 0;
    for (int i = 0; i < ker.length; i++) {
        double d = i - length/2;
        if (d == 0) ker[i] = factor;
        else ker[i] = Math.sin(factor * d) / d;
        ker[i] *= window[i];
        sum += ker[i];
    }
    // Normalize the kernel
    for (int i = 0; i < ker.length; ++i) {
        ker[i] /= sum;
    }
    return ker;
}

private static double[] bandPassKernel(int length, double lowFreq, double highFreq) {
    double[] ker = new double[length + 1];
    double[] window = blackmanWindow(length + 1);

    // Create a band reject filter kernel using a high pass and a low pass filter kernel
    double[] lowPass = lowPassKernel(length, lowFreq, window);

    // Create a high pass kernel for the high frequency
    // by inverting a low pass kernel
    double[] highPass = lowPassKernel(length, highFreq, window);
    for (int i = 0; i < highPass.length; ++i) highPass[i] = -highPass[i];
    highPass[length / 2] += 1;

    // Combine the filters and invert to create a bandpass filter kernel
    for (int i = 0; i < ker.length; ++i) ker[i] = -(lowPass[i] + highPass[i]);
    ker[length / 2] += 1;

    return ker;
}

private static double[] filter(double[] signal, double[] kernel) {
    double[] res = new double[signal.length];
    for (int r = 0; r < res.length; ++r) {
        int M = Math.min(kernel.length, r + 1);
        for (int k = 0; k < M; ++k) {
            res[r] += kernel[k] * signal[r - k];
        }
    }
    return res;
}
And this is how I use my code:
double[] kernel = bandPassKernel(10, 1.5d / (200/2), 7.5d / (200/2));
double[] res = filter(data, kernel);
I ended up implementing Matlab's fir1 function in Java. My results are quite accurate.
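For comparison, here is a hedged sketch of a more direct construction: a windowed difference of two sinc low-pass kernels, with a final scaling so the gain near the centre of the passband is roughly 1. It assumes cutoff frequencies expressed as a fraction of the sample rate (0 to 0.5), reuses the blackmanWindow method above, and is not a verbatim port of fir1, so it may not match Matlab exactly:
// Band-pass FIR kernel built directly as the difference of two windowed sinc kernels.
private static double[] windowedSincBandPass(int length, double lowFreq, double highFreq) {
    double[] window = blackmanWindow(length + 1);
    double[] ker = new double[length + 1];
    for (int i = 0; i <= length; i++) {
        double d = i - length / 2.0;
        double high = (d == 0) ? 2 * Math.PI * highFreq : Math.sin(2 * Math.PI * highFreq * d) / d;
        double low  = (d == 0) ? 2 * Math.PI * lowFreq  : Math.sin(2 * Math.PI * lowFreq * d) / d;
        ker[i] = (high - low) * window[i];
    }
    // Normalize so the gain at the centre of the passband is approximately 1
    double fc = (lowFreq + highFreq) / 2.0;
    double gain = 0;
    for (int i = 0; i <= length; i++) {
        gain += ker[i] * Math.cos(2 * Math.PI * fc * (i - length / 2.0));
    }
    for (int i = 0; i <= length; i++) {
        ker[i] /= gain;
    }
    return ker;
}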

FFT library in android Sdk [closed]

I am working on an Android project. I need an FFT algorithm to process the Android accelerometer data. Is there an FFT library available in the Android SDK?
You can use this class, which is fast enough for real-time audio analysis:
public class FFT {

    int n, m;

    // Lookup tables. Only need to recompute when size of FFT changes.
    double[] cos;
    double[] sin;

    public FFT(int n) {
        this.n = n;
        this.m = (int) (Math.log(n) / Math.log(2));

        // Make sure n is a power of 2
        if (n != (1 << m))
            throw new RuntimeException("FFT length must be power of 2");

        // precompute tables
        cos = new double[n / 2];
        sin = new double[n / 2];

        for (int i = 0; i < n / 2; i++) {
            cos[i] = Math.cos(-2 * Math.PI * i / n);
            sin[i] = Math.sin(-2 * Math.PI * i / n);
        }
    }

    public void fft(double[] x, double[] y) {
        int i, j, k, n1, n2, a;
        double c, s, t1, t2;

        // Bit-reverse
        j = 0;
        n2 = n / 2;
        for (i = 1; i < n - 1; i++) {
            n1 = n2;
            while (j >= n1) {
                j = j - n1;
                n1 = n1 / 2;
            }
            j = j + n1;

            if (i < j) {
                t1 = x[i];
                x[i] = x[j];
                x[j] = t1;
                t1 = y[i];
                y[i] = y[j];
                y[j] = t1;
            }
        }

        // FFT
        n1 = 0;
        n2 = 1;

        for (i = 0; i < m; i++) {
            n1 = n2;
            n2 = n2 + n2;
            a = 0;

            for (j = 0; j < n1; j++) {
                c = cos[a];
                s = sin[a];
                a += 1 << (m - i - 1);

                for (k = j; k < n; k = k + n2) {
                    t1 = c * x[k + n1] - s * y[k + n1];
                    t2 = s * x[k + n1] + c * y[k + n1];
                    x[k + n1] = x[k] - t1;
                    y[k + n1] = y[k] - t2;
                    x[k] = x[k] + t1;
                    y[k] = y[k] + t2;
                }
            }
        }
    }
}
Warning: this code appears to be derived from the class linked in the next answer, and is covered by a GPLv2 license.
Using the class at: https://www.ee.columbia.edu/~ronw/code/MEAPsoft/doc/html/FFT_8java-source.html
Short explanation: call fft() providing x as your amplitude data and y as an all-zeros array; after the function returns, the power of the first bin is P[0] = x[0]^2 + y[0]^2.
Complete explanation: the FFT is a complex transform; it takes N complex numbers and produces N complex numbers. So x[0] is the real part of the first number and y[0] is the imaginary part. This function computes in place, so when it returns, x and y hold the real and imaginary parts of the transform.
One typical usage is to calculate the power spectrum of audio. Your audio samples only have a real part, so the imaginary part is 0. To calculate the power spectrum you add the squares of the real and imaginary parts: P[k] = x[k]^2 + y[k]^2.
Also it's important to notice that the Fourier transform of real-valued input is symmetric: the magnitude at bin k equals the magnitude at bin N-k. Bin 0 holds the 0 Hz (DC) component, and bin N/2 corresponds to half the sampling rate (e.g. if your sampling rate is 44000 Hz, bin N/2 corresponds to 22 kHz).
Full procedure:
1. Create an array p[n] of 512 bins, filled with zeros
2. Collect 1024 audio samples and write them into x
3. Set y[n] = 0 for all n
4. Calculate fft(x, y)
5. Accumulate p[n] += x[n]^2 + y[n]^2 for n = 0 to 511
6. Go to step 2 for another batch (after 50 batches, go to the next step)
7. Plot p
8. Go to step 1
Then adjust the fixed numbers to your taste.
The number 512 defines the analysis window; I won't explain it here, just avoid reducing it too much.
The number 1024 must always be double the previous number.
The number 50 defines your update rate: if your sampling rate is 44000 samples per second, the plot updates roughly every 1024*50/44000, which is about 1.16 seconds.
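A minimal sketch of that procedure using the FFT class above (the collectSamples helper is hypothetical, standing in for however you read 1024 audio samples into the array):
FFT fft = new FFT(1024);             // precomputes the twiddle tables once
double[] p = new double[512];
for (int batch = 0; batch < 50; batch++) {
    double[] x = new double[1024];
    double[] y = new double[1024];   // imaginary part is all zeros
    collectSamples(x);               // hypothetical helper: fills x with 1024 audio samples
    fft.fft(x, y);                   // in-place transform
    for (int n = 0; n < 512; n++) {
        p[n] += x[n] * x[n] + y[n] * y[n];   // accumulate power of the first 512 bins
    }
}
// plot p here, then reset it and go back to collecting batches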
kissfft is a decent enough library that compiles on android. It has a more versatile license than FFTW (even though FFTW is admittedly better).
You can find an android binding for kissfft in libgdx https://github.com/libgdx/libgdx/blob/0.9.9/extensions/gdx-audio/src/com/badlogic/gdx/audio/analysis/KissFFT.java
Or if you would like a pure Java based solution try jTransforms
https://sites.google.com/site/piotrwendykier/software/jtransforms
Use this class (the one that EricLarch's answer is derived from).
Usage Notes
This function replaces your input arrays with the FFT output.
Input
N = the number of data points (the size of your input array, must be a power of 2)
X = the real part of your data to be transformed
Y = the imaginary part of the data to be transformed
i.e. if your input is
(1+8i, 2+3i, 7-i, -10-3i)
N = 4
X = (1, 2, 7, -10)
Y = (8, 3, -1, -3)
Output
X = the real part of the FFT output
Y = the imaginary part of the FFT output
To get your classic FFT graph, you will want to calculate the magnitude of the real and imaginary parts.
Something like:
public double[] fftCalculator(double[] re, double[] im) {
    if (re.length != im.length) return null;
    FFT fft = new FFT(re.length);
    fft.fft(re, im);
    double[] fftMag = new double[re.length];
    for (int i = 0; i < re.length; i++) {
        fftMag[i] = Math.pow(re[i], 2) + Math.pow(im[i], 2);
    }
    return fftMag;
}
Also see this StackOverflow answer for how to get frequencies if your original input was magnitude vs. time.
Yes, there is JTransforms, which is maintained on GitHub here and available from Maven here.
Use it with:
compile group: 'com.github.wendykierp', name: 'JTransforms', version: '3.1'
But with more recent Gradle versions you need to use something like:
dependencies {
    ...
    implementation 'com.github.wendykierp:JTransforms:3.1'
}
@J Wang
Your output magnitude seems better than the answer given on the thread you have linked; however, that is still the magnitude squared. The magnitude of a complex number
z = a + ib
is calculated as
|z| = sqrt(a^2 + b^2)
The answer in the linked thread suggests that for pure real inputs the outputs
should use a^2 or a for the output, because the values satisfy
a_(i+N/2) = -a_(i),
with b_(i) = a_(i+N/2), meaning the imaginary part in their table is in the second
half of the output table.
I.e. the second half of the output table, for an input table of reals, is the conjugate of the real part...
so z = a - ia, giving a magnitude
|z| = sqrt(2a^2) = sqrt(2)*a
so it is worth noting the scaling factors...
I would recommend looking all this up in a book or on the wiki to be sure.
Unfortunately the top answer only works for arrays whose size is a power of 2, which is very limiting.
I used the JTransforms library and it works perfectly; you can compare it to the function used by Matlab.
Here is my code, with comments referencing how matlab transforms any signal and gets the frequency amplitudes (https://la.mathworks.com/help/matlab/ref/fft.html).
First, add the following to your build.gradle (app):
implementation 'com.github.wendykierp:JTransforms:3.1'
And here is the code for transforming a simple sine wave; it works like a charm.
// Uses org.jtransforms.fft.DoubleFFT_1D from JTransforms
double Fs = 8000;
double T = 1 / Fs;
int L = 1600;
double freq = 338;

double sinValue_re_im[] = new double[L * 2]; // because this FFT takes an array whose positions alternate between real and imaginary
for (int i = 0; i < L; i++) {
    sinValue_re_im[2 * i] = Math.sin(2 * Math.PI * freq * (i * T)); // real part
    sinValue_re_im[2 * i + 1] = 0;                                  // imaginary part
}

// matlab
// tf = fft(y1);
DoubleFFT_1D fft = new DoubleFFT_1D(L);
fft.complexForward(sinValue_re_im);
double[] tf = sinValue_re_im.clone();

// matlab
// P2 = abs(tf/L);
double[] P2 = new double[L];
for (int i = 0; i < L; i++) {
    double re = tf[2 * i] / L;
    double im = tf[2 * i + 1] / L;
    P2[i] = Math.sqrt(re * re + im * im);
}

// P1 = P2(1:L/2+1);
double[] P1 = new double[L / 2]; // single-sided: the second half of P2 mirrors the first half
System.arraycopy(P2, 0, P1, 0, L / 2);

// P1(2:end-1) = 2*P1(2:end-1);
for (int i = 1; i < P1.length - 1; i++) {
    P1[i] = 2 * P1[i];
}

// f = Fs*(0:(L/2))/L;
double[] f = new double[L / 2 + 1];
for (int i = 0; i < L / 2 + 1; i++) {
    f[i] = Fs * ((double) i) / L;
}

How can I transfer a discrete set of data into the frequency domain and back (preferably losslessly)

I would like to take an array of bytes of roughly size 70-80k and transform them from the time domain to the frequency domain (probably using a DFT). I have been following the Wikipedia article as a guide and have gotten this code so far:
for (int k = 0; k < windows.length; k++) {
    double imag = 0.0;
    double real = 0.0;
    for (int n = 0; n < data.length; n++) {
        double val = (data[n])
                * Math.exp(-2.0 * Math.PI * n * k / data.length)
                / 128;
        imag += Math.cos(val);
        real += Math.sin(val);
    }
    windows[k] = Math.sqrt(imag * imag + real * real);
}
and as far as I know, that finds the magnitude of each frequency window/bin. I then go through the windows and find the one with the highest magnitude. I add a flag to that frequency to be used when reconstructing the signal. I check to see if the reconstructed signal matches my original data set. If it doesn't find the next highest frequency window and flag that to be used when reconstructing the signal.
Here is the code I have for reconstructing the signal which I'm mostly certain is very wrong (it is supposed to perform an IDFT):
for (int n = 0; n < data.length; n++) {
    double imag = 0.0;
    double real = 0.0;
    sinValue[n] = 0;
    for (int k = 0; k < freqUsed.length; k++) {
        if (freqUsed[k]) {
            double val = (windows[k] * Math.exp(2.0 * Math.PI * n * k / data.length));
            imag += Math.cos(val);
            real += Math.sin(val);
        }
    }
    sinValue[n] = imag * imag + real * real;
    sinValue[n] /= data.length;
    newData[n] = (byte) (127 * sinValue[n]);
}
freqUsed is a boolean array used to mark whether or not a frequency window should be used when reconstructing the signal.
Anyway, here are the problems that arise:
Even if all of the frequency windows are used, the signal is not reconstructed. This may be due to the fact that ...
Sometimes the value of Math.exp() is too high and thus returns infinity. This makes it difficult to get accurate calculations.
While I have been following wiki as a guide, it is hard to tell whether or not my data is meaningful. This makes it hard to test and identify problems.
Aside from the problem itself: I am fairly new to this and do not fully understand everything, so any help or insight is appreciated. Thanks for taking the time to read all of that, and thanks ahead of time for any help you can provide. Even if you think I'm doing this in the worst way possible, I'd like to know. Thanks again.
EDIT:
So I updated my code to look like:
for (int k = 0; k < windows.length; k++) {
    double imag = 0.0;
    double real = 0.0;
    for (int n = 0; n < data.length; n++) {
        double val = (-2.0 * Math.PI * n * k / data.length);
        imag += data[n] * -Math.sin(val);
        real += data[n] * Math.cos(val);
    }
    windows[k] = Math.sqrt(imag * imag + real * real);
}
for the original transform and :
for (int n = 0; n < data.length; n++) {
    double imag = 0.0;
    double real = 0.0;
    sinValue[n] = 0;
    for (int k = 0; k < freqUsed.length; k++) {
        if (freqUsed[k]) {
            double val = (2.0 * Math.PI * n * k / data.length);
            imag += windows[k] * -Math.sin(val);
            real += windows[k] * Math.cos(val);
        }
    }
    sinValue[n] = Math.sqrt(imag * imag + real * real);
    sinValue[n] /= data.length;
    newData[n] = (byte) (Math.floor(sinValue[n]));
}
for the inverse transform. I am still concerned that it isn't quite working correctly, though: I generated an array holding a single sine wave, and it can't even decompose/reconstruct that. Any insight into what I'm missing?
Yes, your code (for both DFT and IDFT) is broken. You are confusing the issue of how to use the exponential. The DFT can be written as:
        N-1
 X[k] = SUM { x[n] . exp(-j * 2 * pi * n * k / N) }
        n=0
where j is sqrt(-1). That can be expressed as:
        N-1
 X[k] = SUM { (x_real[n] * cos(2*pi*n*k/N) + x_imag[n] * sin(2*pi*n*k/N))
        n=0    + j.(x_imag[n] * cos(2*pi*n*k/N) - x_real[n] * sin(2*pi*n*k/N)) }
which in turn can be split into:
             N-1
 X_real[k] = SUM { x_real[n] * cos(2*pi*n*k/N) + x_imag[n] * sin(2*pi*n*k/N) }
             n=0

             N-1
 X_imag[k] = SUM { x_imag[n] * cos(2*pi*n*k/N) - x_real[n] * sin(2*pi*n*k/N) }
             n=0
If your input data is real-only, this simplifies to:
             N-1
 X_real[k] = SUM { x[n] * cos(2*pi*n*k/N) }
             n=0

             N-1
 X_imag[k] = SUM { x[n] * -sin(2*pi*n*k/N) }
             n=0
So in summary, you don't need both the exp and the cos/sin.
As well as the points that @Oli correctly makes, you also have a fundamental misunderstanding about conversion between time and frequency domains. Your real input signal becomes a complex signal in the frequency domain. You should not be taking the magnitude of this and converting back to the time domain (this will actually give you the time domain autocorrelation if done correctly, but this is not what you want). If you want to be able to reconstruct the time domain signal then you must keep the complex frequency domain signal as it is (i.e. separate real/imaginary components) and do a complex-to-real IDFT to get back to the time domain.
E.g. your forward transform should look something like this:
for (int k = 0; k < windows.length; k++) {
    double imag = 0.0;
    double real = 0.0;
    for (int n = 0; n < data.length; n++) {
        double val = (-2.0 * Math.PI * n * k / data.length);
        imag += data[n] * -Math.sin(val);
        real += data[n] * Math.cos(val);
    }
    windows[k].real = real;
    windows[k].imag = imag;
}
where windows is defined as an array of complex values.
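To close the loop, here is a sketch of the matching inverse transform (my addition, following the same conventions; it assumes windows[k] keeps the separate real and imaginary parts as above):
for (int n = 0; n < data.length; n++) {
    double sum = 0.0;
    for (int k = 0; k < windows.length; k++) {
        double val = 2.0 * Math.PI * n * k / data.length;
        // real part of windows[k] * exp(+j*val); for the spectrum of a real signal
        // the imaginary parts cancel, so this recovers the original samples
        sum += windows[k].real * Math.cos(val) - windows[k].imag * Math.sin(val);
    }
    newData[n] = (byte) Math.round(sum / data.length);
}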
