If (in Java) I have a double[] array of audio samples, ranging from -1 to 1, which I can play, and which I have generated to sound like a plucked guitar string, is there any way I can simulate the effect of an amplifier's distortion on these samples?
I apologize for the vagueness of the term "distortion", but I'm referring to any effect similar to setting a guitar amplifier to "distortion". What I already have sounds like an acoustic guitar, or an electric guitar with no distortion (set to "clean"), so how can I alter the array to sound more like what you would expect from an electric guitar in a rock or metal setting?
The current set of samples is calculated using the following method:
double[] samples = new double[duration]; // duration = seconds * sampleRate
int period = (int) ((float) sampleRate / (float) frequency);
double[] buf = new double[period]; // a ring buffer used for the sound generation
int count = 0, c1 = 1, c2 = 2;
for (int i = 0; i < duration; i++) {
    if (count >= period) count = 0;
    if (c1 >= period) c1 = 0;
    if (c2 >= period) c2 = 0;
    if (i < period) {
        buf[count] = rand.nextDouble() * 2 - 1; // rand being a Random
    } else {
        buf[count] = (buf[c1] + buf[c2]) / 2;
    }
    samples[i] = buf[count];
    count++;
    c1++;
    c2++;
}
There are three main types of distortion:
Hard distortion: simple clipping of the signal
for (int i = 0; i < duration; i++) {
    double sample = samples[i] * gain_pre;
    if (sample > clip)
        sample = clip;
    else if (sample < -clip)
        sample = -clip;
    samples[i] = sample * gain_post;
}
Normal distortion: exponential smooth scaling of the signal
double max = 1.0 / (1.0 - Math.exp(-gain_pre));
for (int i = 0; i < duration; i++) {
    double sample = samples[i] * gain_pre;
    double z = (sample < 0.0) ? (-1.0 + Math.exp(sample))
                              : (1.0 - Math.exp(-sample));
    samples[i] = z * max * gain_post;
}
Soft distortion: same as above, but using arc-tan (presumably more aggressive)
double max = 1.0 / Math.atan(gain_pre);
for (int i = 0; i < duration; i++) {
    samples[i] = Math.atan(samples[i] * gain_pre) * max * gain_post;
}
Variables:
gain_pre and gain_post: Pre-gain and Post-gain parameters
clip: maximum value for hard-distortion signal
samples: the sample sequence you calculate
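For example, applying the hard-clipping variant to the samples generated above might look like this (the gain and clip values are only illustrative starting points, not part of the answer):
double gain_pre = 8.0;   // illustrative: drive the signal well past the clip level
double clip = 0.5;       // illustrative: clip at half of full scale
double gain_post = 0.7;  // illustrative: bring the level back down afterwards
for (int i = 0; i < samples.length; i++) {
    double sample = samples[i] * gain_pre;
    if (sample > clip) sample = clip;
    else if (sample < -clip) sample = -clip;
    samples[i] = sample * gain_post;
}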
References / more info:
http://cp-gfx.sourceforge.net/ (download the source code and look in /src/effects/)
https://en.wikipedia.org/wiki/Distortion_(music)#Theory_and_circuits
I use bilinear interpolation in my Android application. It runs correctly, but takes a long time to produce the result.
I tested it when xi = 259,920 and yi also = 259,920. The response time was about 3 seconds on a Galaxy Note 4 and about 8 seconds on an HTC One M8. What can I change or use to reduce the time?
The code I use for bilinear interpolation:
public static double[] BiInterp(Mat z, ArrayList<Double> xi, ArrayList<Double> yi) {
    // Declare matrix indices
    int xi_i, yi_i;
    // Initialize output vector
    double zi[] = new double[xi.size()];
    double s00, s01, s10, s11;
    for (int i = 0; i < xi.size(); i++) { // Note: xi.size() == yi.size()
        xi_i = xi.get(i).intValue(); // X index without rounding
        yi_i = yi.get(i).intValue(); // Y index without rounding
        if (xi_i < z.rows() - 1 && yi_i < z.cols() - 1 && xi_i >= 0 && yi_i >= 0) {
            // Four neighbors of the sample pixel
            s00 = z.get(xi_i, yi_i)[0];
            s01 = z.get(xi_i, yi_i + 1)[0];
            s10 = z.get(xi_i + 1, yi_i)[0];
            s11 = z.get(xi_i + 1, yi_i + 1)[0];
            int neighbor_no = 4; // bilinear interpolation takes 4 neighbors
            double A[][] = new double[neighbor_no][neighbor_no];
            A[0][0] = xi_i;     A[0][1] = yi_i;     A[0][2] = xi_i * yi_i;             A[0][3] = 1;
            A[1][0] = xi_i;     A[1][1] = yi_i + 1; A[1][2] = xi_i * (yi_i + 1);       A[1][3] = 1;
            A[2][0] = xi_i + 1; A[2][1] = yi_i;     A[2][2] = (xi_i + 1) * yi_i;       A[2][3] = 1;
            A[3][0] = xi_i + 1; A[3][1] = yi_i + 1; A[3][2] = (xi_i + 1) * (yi_i + 1); A[3][3] = 1;
            GaussianElimination solveE = new GaussianElimination();
            double b[] = {s00, s01, s10, s11};
            double x[] = solveE.solve(A, b);
            zi[i] = xi.get(i) * x[0] + yi.get(i) * x[1] + xi.get(i) * yi.get(i) * x[2] + x[3];
        }
    }
    return zi;
}
and I use Gaussian elimination to solve the system of 4 unknowns:
private static final double EPSILON = 1e-10;
// Gaussian elimination with partial pivoting
public static double[] solve(double[][] A, double[] b) {
int N = b.length;
for (int p = 0; p < N; p++) {
// find pivot row and swap
int max = p;
for (int i = p + 1; i < N; i++) {
if (Math.abs(A[i][p]) > Math.abs(A[max][p])) {
max = i;
}
}
double[] temp = A[p];
A[p] = A[max];
A[max] = temp;
double t = b[p];
b[p] = b[max];
b[max] = t;
// singular or nearly singular
if (Math.abs(A[p][p]) <= EPSILON) {
throw new RuntimeException("Matrix is singular or nearly singular");
}
// pivot within A and b
for (int i = p + 1; i < N; i++) {
double alpha = A[i][p] / A[p][p];
b[i] -= alpha * b[p];
for (int j = p; j < N; j++) {
A[i][j] -= alpha * A[p][j];
}
}
}
// back substitution
double[] x = new double[N];
for (int i = N - 1; i >= 0; i--) {
double sum = 0.0;
for (int j = i + 1; j < N; j++) {
sum += A[i][j] * x[j];
}
x[i] = (b[i] - sum) / A[i][i];
}
return x;
}
As you can see in the bilinear code, I take the pixel intensities directly from the Mat object. When I used a plain matrix (2D array) instead, it took much less time: about 1 second on the Note 4.
But converting the Mat image to a matrix takes 4 seconds, so I preferred to use Mat.
Here is a simpler bilinear interpolation algorithm:
Find the fractional part of yi and use it to interpolate between s00 and s01 to find s0, and between s10 and s11 to find s1
Find the fractional part of xi and use it to interpolate between s0 and s1 to find zi
Basically you are decomposing it into three simple linear interpolations. You can visualize it as an H shape. First you interpolate down the left and right posts of the H to get values part way down each. Then you interpolate along the cross-beam to get the final value in the middle.
The code would be something like this:
xi_i = xi.get(i).intValue(); // X index without round
yi_i = yi.get(i).intValue(); // Y index without round
if (xi_i < z.rows() - 1 && yi_i < z.cols() - 1 && xi_i >= 0 && yi_i >= 0) {
// Four neighbors of sample pixel
s00 = z.get(xi_i, yi_i)[0];
s01 = z.get(xi_i, yi_i + 1)[0];
s10 = z.get(xi_i + 1, yi_i)[0];
s11 = z.get(xi_i + 1, yi_i + 1)[0];
// find fractional part of yi:
double yi_frac = yi.get(i) - (double)yi_i;
// interpolate between s00 and s01 to find s0:
double s0 = s00 + ((s01 - s00) * yi_frac);
// interpolate between s10 and s11 to find s1:
double s1 = s10 + ((s11 - s10) * yi_frac);
// find fractional part of xi:
double xi_frac = xi.get(i) - (double)xi_i;
// interpolate between s0 and s1 to find zi:
zi[i] = s0 + ((s1 - s0) * xi_frac);
}
You could also speed the whole thing up (at the expense of accuracy) by using fixed point integers instead of doubles.
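As a rough illustration of the fixed-point idea (this helper and its names are assumptions, not the poster's code), the three linear interpolations can be done in 16.16 fixed point once the pixel intensities have been copied into plain int values:
// Hypothetical helper: bilinear interpolation in 16.16 fixed point.
// s00..s11 are integer pixel intensities (e.g. 8-bit grey values);
// xFrac and yFrac are the fractional parts of the coordinates scaled by 65536.
static int biInterpFixed(int s00, int s01, int s10, int s11, int xFrac, int yFrac) {
    // interpolate along y on both rows
    int s0 = s00 + (int) (((long) (s01 - s00) * yFrac) >> 16);
    int s1 = s10 + (int) (((long) (s11 - s10) * yFrac) >> 16);
    // interpolate along x between the two intermediate values
    return s0 + (int) (((long) (s1 - s0) * xFrac) >> 16);
}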
I need to get the amplitude of a signal at a certain frequency.
I use the FFTAnalysis function below, but I get the whole spectrum. How can I modify it to get the amplitude of the signal at a certain frequency?
For example I have:
data = array of 1024 points;
If I use FFTAnalysis I get an FFTdata array of 1024 points.
But I only need FFTdata[454], for instance.
public static float[] FFTAnalysis(short[] AVal, int Nvl, int Nft) {
double TwoPi = 6.283185307179586;
int i, j, n, m, Mmax, Istp;
double Tmpr, Tmpi, Wtmp, Theta;
double Wpr, Wpi, Wr, Wi;
double[] Tmvl;
float[] FTvl;
n = Nvl * 2;
Tmvl = new double[n];
FTvl = new float[Nvl];
for (i = 0; i < Nvl; i++) {
j = i * 2; Tmvl[j] = 0; Tmvl[j+1] = AVal[i];
}
i = 1; j = 1;
while (i < n) {
if (j > i) {
Tmpr = Tmvl[i]; Tmvl[i] = Tmvl[j]; Tmvl[j] = Tmpr;
Tmpr = Tmvl[i+1]; Tmvl[i+1] = Tmvl[j+1]; Tmvl[j+1] = Tmpr;
}
i = i + 2; m = Nvl;
while ((m >= 2) && (j > m)) {
j = j - m; m = m >> 1;
}
j = j + m;
}
Mmax = 2;
while (n > Mmax) {
Theta = -TwoPi / Mmax; Wpi = Math.sin(Theta);
Wtmp = Math.sin(Theta / 2); Wpr = Wtmp * Wtmp * 2;
Istp = Mmax * 2; Wr = 1; Wi = 0; m = 1;
while (m < Mmax) {
i = m; m = m + 2; Tmpr = Wr; Tmpi = Wi;
Wr = Wr - Tmpr * Wpr - Tmpi * Wpi;
Wi = Wi + Tmpr * Wpi - Tmpi * Wpr;
while (i < n) {
j = i + Mmax;
Tmpr = Wr * Tmvl[j] - Wi * Tmvl[j-1];
Tmpi = Wi * Tmvl[j] + Wr * Tmvl[j-1];
Tmvl[j] = Tmvl[i] - Tmpr; Tmvl[j-1] = Tmvl[i-1] - Tmpi;
Tmvl[i] = Tmvl[i] + Tmpr; Tmvl[i-1] = Tmvl[i-1] + Tmpi;
i = i + Istp;
}
}
Mmax = Istp;
}
for (i = 0; i < Nft; i++) {
j = i * 2; FTvl[Nft - i - 1] = (float) Math.sqrt((Tmvl[j]*Tmvl[j]) + (Tmvl[j+1]*Tmvl[j+1]));
}
return FTvl;
}
The Goertzel algorithm (or filter) is similar to computing the magnitude for just 1 bin of an FFT.
The Goertzel algorithm is identical to 1 bin of an FFT, except for numerical artifacts, if the period of the frequency is an exact submultiple of your Goertzel filter's length. Otherwise there are some added scalloping effects from a rectangular window of non-periodic-in-aperture size, and how that window relates to the phase of the input.
Multiplying by a complex sinusoid and taking the magnitude of the complex sum is also computationally similar to a Goertzel, except that the Goertzel does not require separately calling (or looking up) a trig library function for every point, as it usually includes a trig recursion as part of its algorithm.
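A minimal Goertzel sketch in Java, assuming all you want is the magnitude of the bin nearest a target frequency (the method and parameter names are illustrative, not from the answer above):
static double goertzelMagnitude(double[] samples, double targetFreq, double sampleRate) {
    int n = samples.length;
    // nearest DFT bin to the target frequency
    int k = (int) Math.round(n * targetFreq / sampleRate);
    double omega = 2.0 * Math.PI * k / n;
    double coeff = 2.0 * Math.cos(omega);
    double s1 = 0.0, s2 = 0.0;
    for (int i = 0; i < n; i++) {
        double s0 = samples[i] + coeff * s1 - s2; // the trig recursion
        s2 = s1;
        s1 = s0;
    }
    // power of the bin; the square root is comparable to |FFT[k]|
    return Math.sqrt(s1 * s1 + s2 * s2 - coeff * s1 * s2);
}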
You'd multiply a (complex) sine wave on the input data, and integrate the result.
Multiplying by a complex sine amounts to a frequency shift: you want to shift the target frequency down to 0 Hz. The integration is a low-pass filtering step, with the bandwidth being the inverse of the sampling length.
You then end up with a complex number, which is the same number you would have found in the FFT bin for this frequency (because in essence this is what the FFT does).
The fast fourier transform (FFT) is a clever way of doing many discrete fourier transforms very quickly. As such, the FFT is designed for when one needs a lot of frequencies from the input. If you want just one frequency, the DFT is the way to go (as otherwise you're wasting resources).
The DFT of an N-point signal x[n], evaluated at bin k, is defined as:
       N-1
X[k] = SUM { x[n] * exp(-j * 2 * pi * n * k / N) }
       n=0
where j is sqrt(-1).
So, in pseudocode:
samples = [#,#,#,#...];
N = samples.length;
FREQ = 440;   // frequency (bin) to detect; for a buffer spanning exactly one second
              // the bin index equals the frequency in Hz, otherwise use
              // k = FREQ * N / sampleRate as the bin index
PI = 3.14159;
E = 2.718;
DFT = 0i;     // this is a complex number
for(int sampleNum=0; sampleNum<N; sampleNum++){
    DFT += samples[sampleNum] * E^( (-2*PI*1i*FREQ*sampleNum) / N ); // "i" here means imaginary
}
The resulting variable DFT will be a complex number representing the real and imaginary values of the chosen frequency.
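To get the amplitude the question asks for, take the magnitude of that complex result; continuing the pseudocode above:
amplitude = sqrt( real(DFT)^2 + imag(DFT)^2 );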
I am trying to write a simple band pass filter following the instructions in this book. My code creates a blackman window, and combines two low pass filter kernels to create a band pass filter kernel using spectral inversion, as described in the second example here (table 16-2).
I am testing my code by comparing it with the results I get in Matlab. When I test the methods that create a Blackman window and a low pass filter kernel separately, I get results that are close to what I see in Matlab (up to some digits after the decimal point; I attribute the error to Java double rounding), but my band pass filter kernel is incorrect.
Tests I ran:
Created a Blackman window and compared it with what I get in Matlab - all good.
Created a low pass filter using this window with my code and fir1(N, Fc1/(Fs/2), win, flag); in Matlab (see full code below). I think the results are correct, although the error grows as Fc1 gets bigger (why?).
Created a band pass filter using my code and fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag); in Matlab - the results are completely off.
Filtered my data using my code and the kernel generated by matlab - all good.
So - why is my band pass filter kernel off? What did I do wrong?
I think I either have a bug or fir1 uses a different algorithm, but I can't check because the article referenced in its documentation is not publicly available.
This is my matlab code:
Fs = 200; % Sampling Frequency
N = 10; % Order
Fc1 = 1.5; % First Cutoff Frequency
Fc2 = 7.5; % Second Cutoff Frequency
flag = 'scale'; % Sampling Flag
% Create the window vector for the design algorithm.
win = blackman(N+1);
% Calculate the coefficients using the FIR1 function.
b = fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag);
Hd = dfilt.dffir(b);
res = filter(Hd, data);
This is my java code (I believe the bug is in bandPassKernel):
/**
 * See - http://www.mathworks.com/help/signal/ref/blackman.html
 * @param length
 * @return
 */
private static double[] blackmanWindow(int length) {
double[] window = new double[length];
double factor = Math.PI / (length - 1);
for (int i = 0; i < window.length; ++i) {
window[i] = 0.42d - (0.5d * Math.cos(2 * factor * i)) + (0.08d * Math.cos(4 * factor * i));
}
return window;
}
private static double[] lowPassKernel(int length, double cutoffFreq, double[] window) {
double[] ker = new double[length + 1];
double factor = Math.PI * cutoffFreq * 2;
double sum = 0;
for (int i = 0; i < ker.length; i++) {
double d = i - length/2;
if (d == 0) ker[i] = factor;
else ker[i] = Math.sin(factor * d) / d;
ker[i] *= window[i];
sum += ker[i];
}
// Normalize the kernel
for (int i = 0; i < ker.length; ++i) {
ker[i] /= sum;
}
return ker;
}
private static double[] bandPassKernel(int length, double lowFreq, double highFreq) {
double[] ker = new double[length + 1];
double[] window = blackmanWindow(length + 1);
// Create a band reject filter kernel using a high pass and a low pass filter kernel
double[] lowPass = lowPassKernel(length, lowFreq, window);
// Create a high pass kernel for the high frequency
// by inverting a low pass kernel
double[] highPass = lowPassKernel(length, highFreq, window);
for (int i = 0; i < highPass.length; ++i) highPass[i] = -highPass[i];
highPass[length / 2] += 1;
// Combine the filters and invert to create a bandpass filter kernel
for (int i = 0; i < ker.length; ++i) ker[i] = -(lowPass[i] + highPass[i]);
ker[length / 2] += 1;
return ker;
}
private static double[] filter(double[] signal, double[] kernel) {
double[] res = new double[signal.length];
for (int r = 0; r < res.length; ++r) {
int M = Math.min(kernel.length, r + 1);
for (int k = 0; k < M; ++k) {
res[r] += kernel[k] * signal[r - k];
}
}
return res;
}
And this is how I use my code:
double[] kernel = bandPassKernel(10, 1.5d / (200/2), 7.5d / (200/2));
double[] res = filter(data, kernel);
I ended up implementing Matlab's fir1 function in Java. My results are quite accurate.
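For reference, here is a minimal sketch of the windowed-sinc band-pass design that fir1 performs for this case (my own reconstruction of the approach, assuming the 'scale' flag and cutoffs normalized to the Nyquist frequency as fir1 expects, not the actual port): build the ideal band-pass impulse response as the difference of two sinc low-pass responses, apply the window, then scale for unit gain at the centre of the pass-band.
static double[] bandPassFir1(int order, double lowCut, double highCut, double[] window) {
    int n = order + 1;       // number of taps; window.length must also be order + 1
    double[] h = new double[n];
    double m = order / 2.0;  // centre of the kernel
    for (int i = 0; i < n; i++) {
        double d = i - m;
        // ideal band-pass = difference of two ideal (sinc) low-pass responses
        double hi = (d == 0) ? highCut : Math.sin(Math.PI * highCut * d) / (Math.PI * d);
        double lo = (d == 0) ? lowCut : Math.sin(Math.PI * lowCut * d) / (Math.PI * d);
        h[i] = (hi - lo) * window[i];
    }
    // scale so the magnitude response is 1 at the centre of the pass-band
    double fc = (lowCut + highCut) / 2.0;
    double re = 0, im = 0;
    for (int i = 0; i < n; i++) {
        re += h[i] * Math.cos(Math.PI * fc * i);
        im -= h[i] * Math.sin(Math.PI * fc * i);
    }
    double gain = Math.sqrt(re * re + im * im);
    for (int i = 0; i < n; i++) {
        h[i] /= gain;
    }
    return h;
}
Called as bandPassFir1(10, 1.5 / 100.0, 7.5 / 100.0, blackmanWindow(11)), this should come reasonably close to the Matlab coefficients above.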
I am trying to write a small Discrete Fourier Transform in Java to find the magnitude spectrum of a clean 400 Hz sine signal (1 second of signed-short PCM).
So first I calculate the DFT for the complex values:
public void berechneDFT(int abtastwerte) {
int i;
int N = abtastwerte;
ReX = new double[N/2+1];
ImX = new double[N/2+1];
TextFileOperator tfo = new TextFileOperator(file.substring(0, file.length()-4)+"_DFT.txt");
try {
tfo.openOutputStream();
tfo.writeString("ReX ImX\n");
} catch (FileNotFoundException e) {
e.printStackTrace();
}
// compute the real and imaginary parts
for (i=0, ReX[i] = 0, ImX[i] = 0; i <= N/2; i++)
{
for(int n=0; n < N; n++)
{
ReX[i] += x[n] * Math.cos( (2.0 * Math.PI * n * i) / (double) N);
ImX[i] += - (x[n] * Math.sin( (2.0 * Math.PI * n * i) / (double) N));
}
tfo.writeString(ReX[i] +" "+ImX[i]+"\n");
}
x = null;
tfo.closeOutputStream(); // flush
System.out.println("Anteile berechnet.");
}
And then I try to calculate the magnitude spectrum:
public void berechneBetragsSpektrum() {
int N = ReX.length;
TextFileOperator tfo = new TextFileOperator("betragsspektrum_400hz.txt");
try {
tfo.openOutputStream();
} catch (FileNotFoundException e) {
e.printStackTrace();
}
double powerAtFreq;
int marker = 0;
double maxPowerAtFreq = 0;
for(int i=0; i < N; i++)
{
double A1 = ReX[i] * ReX[i];
double A2 = ImX[i] * ImX[i];
powerAtFreq = Math.sqrt(A1+A2);
if(powerAtFreq > maxPowerAtFreq)
{
maxPowerAtFreq = powerAtFreq;
marker = i;
}
tfo.writeString(powerAtFreq+"\n");
}
tfo.closeOutputStream();
System.out.println("Stärkste Frequenz: "+(marker)+" Hz");
}
But for some reason I only get a result of 400 Hz in 'marker' if I choose to analyse all 16000 samples. Shouldn't I also see the peak at 400 Hz if I only choose 800 samples, since with 800 samples I could see up to 800/2 = 400 Hz as the maximum frequency?
I guess some little thing must be wrong with the code, because if I choose 800 samples I get 20 Hz, and for 1600 samples I get 40 Hz, which is always 1/40 of the number of samples.
What am I missing or doing wrong? The results are strange.
Note that if I do the inverse DFT with the complex values I can reconstruct the audio signal again!
The answer to the question is that the indices in the Fourier transform / magnitude spectrum are bin numbers, i.e. relative frequencies, which still need to be converted to their actual values in Hz.
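For example (a hedged illustration, with variable names assumed to be in scope): each bin of the DFT above spans sampleRate / abtastwerte Hz, where abtastwerte is the number of samples that were transformed, so the strongest frequency is:
double binWidthHz = (double) sampleRate / abtastwerte; // Hz per bin
System.out.println("Strongest frequency: " + (marker * binWidthHz) + " Hz");
With 16000 samples at a 16 kHz sample rate the bin width is 1 Hz, which is why the index happens to read directly as 400 Hz in that case.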
I would like to take an array of bytes of roughly 70-80k in size and transform them from the time domain to the frequency domain (probably using a DFT). I have been following the Wikipedia article and have gotten this code so far.
for (int k = 0; k < windows.length; k++) {
    double imag = 0.0;
    double real = 0.0;
    for (int n = 0; n < data.length; n++) {
        double val = (data[n]) * Math.exp(-2.0 * Math.PI * n * k / data.length) / 128;
        imag += Math.cos(val);
        real += Math.sin(val);
    }
    windows[k] = Math.sqrt(imag * imag + real * real);
}
and as far as I know, that finds the magnitude of each frequency window/bin. I then go through the windows and find the one with the highest magnitude. I flag that frequency to be used when reconstructing the signal, and check whether the reconstructed signal matches my original data set. If it doesn't, I find the next highest frequency window and flag that one to be used as well.
Here is the code I have for reconstructing the signal which I'm mostly certain is very wrong (it is supposed to perform an IDFT):
for (int n = 0; n < data.length; n++) {
    double imag = 0.0;
    double real = 0.0;
    sinValue[n] = 0;
    for (int k = 0; k < freqUsed.length; k++) {
        if (freqUsed[k]) {
            double val = (windows[k] * Math.exp(2.0 * Math.PI * n * k / data.length));
            imag += Math.cos(val);
            real += Math.sin(val);
        }
    }
    sinValue[n] = imag * imag + real * real;
    sinValue[n] /= data.length;
    newData[n] = (byte) (127 * sinValue[n]);
}
freqUsed is a boolean array used to mark whether or not a frequency window should be used when reconstructing the signal.
Anyway, here are the problems that arise:
Even if all of the frequency windows are used, the signal is not reconstructed. This may be due to the fact that ...
Sometimes the value of Math.exp() is too high and thus returns infinity. This makes it difficult to get accurate calculations.
While I have been following wiki as a guide, it is hard to tell whether or not my data is meaningful. This makes it hard to test and identify problems.
Aside from the problem itself:
I am fairly new to this and do not fully understand everything. Thus, any help or insight is appreciated. Thanks for even taking the time to read all of that and thanks ahead of time for any help you can provide. Any help really would be good, even if you think I'm doing this the most worst awful way possible, I'd like to know. Thanks again.
EDIT:
So I updated my code to look like:
for (int k = 0; k < windows.length; k++) {
double imag = 0.0;
double real = 0.0;
for (int n = 0; n < data.length; n++) {
double val = (-2.0 * Math.PI * n * k / data.length);
imag += data[n]*-Math.sin(val);
real += data[n]*Math.cos(val);
}
windows[k] = Math.sqrt(imag * imag + real * real);
}
for the original transform, and:
for (int n = 0; n < data.length; n++) {
double imag = 0.0;
double real = 0.0;
sinValue[n] = 0;
for (int k = 0; k < freqUsed.length; k++) {
if (freqUsed[k]) {
double val = (2.0 * Math.PI * n * k / data.length);
imag += windows[k]*-Math.sin(val);
real += windows[k]*Math.cos(val);
}
}
sinValue[n] = Math.sqrt(imag* imag + real * real);
sinValue[n] /= data.length;
newData[n] = (byte) (Math.floor(sinValue[n]));
}
for the inverse transform. I am still concerned that it isn't quite working correctly, though: I generated an array holding a single sine wave and it can't even decompose/reconstruct that. Any insight as to what I'm missing?
Yes, your code (for both DFT and IDFT) is broken. You are confusing the issue of how to use the exponential. The DFT can be written as:
N-1
X[k] = SUM { x[n] . exp(-j * 2 * pi * n * k / N) }
n=0
where j is sqrt(-1). That can be expressed as:
       N-1
X[k] = SUM { (x_real[n] * cos(2*pi*n*k/N) + x_imag[n] * sin(2*pi*n*k/N))
       n=0    + j * (x_imag[n] * cos(2*pi*n*k/N) - x_real[n] * sin(2*pi*n*k/N)) }
which in turn can be split into:
N-1
X_real[k] = SUM { x_real[n] * cos(2*pi*n*k/N) + x_imag[n] * sin(2*pi*n*k/N) }
n=0
N-1
X_imag[k] = SUM { x_imag[n] * cos(2*pi*n*k/N) - x_real[n] * sin(2*pi*n*k/N) }
n=0
If your input data is real-only, this simplifies to:
N-1
X_real[k] = SUM { x[n] * cos(2*pi*n*k/N) }
n=0
N-1
X_imag[k] = SUM { x[n] * -sin(2*pi*n*k/N) }
n=0
So in summary, you don't need both the exp and the cos/sin.
As well as the points that @Oli correctly makes, you also have a fundamental misunderstanding about conversion between time and frequency domains. Your real input signal becomes a complex signal in the frequency domain. You should not be taking the magnitude of this and converting back to the time domain (this will actually give you the time domain autocorrelation if done correctly, but this is not what you want). If you want to be able to reconstruct the time domain signal then you must keep the complex frequency domain signal as it is (i.e. separate real/imaginary components) and do a complex-to-real IDFT to get back to the time domain.
E.g. your forward transform should look something like this:
for (int k = 0; k < windows.length; k++) {
double imag = 0.0;
double real = 0.0;
for (int n = 0; n < data.length; n++) {
double val = (-2.0 * Math.PI * n * k / data.length);
imag += data[n]*-Math.sin(val);
real += data[n]*Math.cos(val);
}
windows[k].real = real;
windows[k].imag = imag;
}
where windows is defined as an array of complex values.
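A matching complex-to-real inverse DFT could then look roughly like this (a sketch under the assumption that windows holds one complex bin per input sample, i.e. windows.length == data.length; it is not part of the original answer):
for (int n = 0; n < data.length; n++) {
    double sum = 0.0;
    for (int k = 0; k < windows.length; k++) {
        double val = 2.0 * Math.PI * n * k / data.length;
        // real part of windows[k] * exp(+j*val); for a real input signal the
        // imaginary parts cancel out across the sum
        sum += windows[k].real * Math.cos(val) - windows[k].imag * Math.sin(val);
    }
    newData[n] = (byte) Math.round(sum / data.length);
}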