How to implement an Equalizer - java

I know there are a lot of questions about equalizers on SO, but I didn't find what I was looking for. What I want to do is an equalizer for modifying audio samples, used something like this:
equalizer.eqAudio(audiosamples, band, gain)
I'm not sure if that is the exact interface I want, because I know little about DSP in terms of implementing it (I have used filters, limiters and compressors, but never made them).
So, Googling about this, I read that I must apply an FFT to the samples so I get the data per frequency range instead of per-sample amplitudes, process it the way I want, and then take the inverse FFT to get audio samples back. I looked for an implementation of this FFT and found JTransforms for Java. This library provides FFT implementations as well as a related algorithm, the Discrete Cosine Transform (DCT).
My questions are :
Well, am I on the right track?
Since the FFT gives me data about frequency, I should pass the FFT algorithm a chunk of samples. How big must this chunk be?
Is there a good book about DSP programming that explains equalizers?
Thanks!

FFT wouldn't be my first choice for audio equalisation. I would default to building an EQ with IIR or FIR filters. FFT might be useful for special circumstances.
A commonly recommended reference is the Cookbook Formulae for Audio EQ Biquad Filter Coefficients.
A Java tutorial for programming biquad filters: http://arachnoid.com/BiQuadDesigner/index.html
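To make that concrete, here is a minimal sketch of a single peaking-EQ band built from the cookbook's peaking-filter formulas (the class and method names are my own illustration, not from any library):

// Minimal sketch of one peaking-EQ band, coefficients per the RBJ cookbook
// formulas linked above. Names here are illustrative, not a real library API.
public class PeakingBiquad {
    private final double b0, b1, b2, a1, a2; // coefficients, normalised by a0
    private double x1, x2, y1, y2;           // previous inputs/outputs (state)

    public PeakingBiquad(double sampleRate, double centerHz, double q, double gainDb) {
        double A = Math.pow(10, gainDb / 40);            // amplitude from dB gain
        double w0 = 2 * Math.PI * centerHz / sampleRate; // center frequency, rad/sample
        double alpha = Math.sin(w0) / (2 * q);
        double a0 = 1 + alpha / A;
        b0 = (1 + alpha * A) / a0;
        b1 = -2 * Math.cos(w0) / a0;
        b2 = (1 - alpha * A) / a0;
        a1 = -2 * Math.cos(w0) / a0;
        a2 = (1 - alpha / A) / a0;
    }

    // Direct Form I: feed samples through one at a time.
    public double process(double x) {
        double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
}

A multi-band EQ is then just several of these in series, one per band, each with its own center frequency and gain.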
Is there a good book about DSP programming that explains equalizers?
Understanding Digital Signal Processing is a good introduction to DSP. There are chapters on FIR and IIR filters.
Introduction to Digital Filters with Audio Applications by Julius O. Smith III.
Graphic Equalizer Design Using Higher-Order Recursive Filters by Martin Holters and Udo Zölzer is a short paper detailing one EQ filter design approach.

There are many different ways to build an equalizer, and as Shannon explains, the IIR/FIR filter way is one of them. However, if your goal is to quickly get an equalizer up and running, going the FFT way might be easier for you, as there exists a wealth of reference implementations.
As to your question of FFT size, it depends on what frequency resolution you want your equalizer to have. If you choose a size of 16, you will get 9 unique channels in the frequency domain, equally spaced from 0 to fs/2: the 1st is centered at 0 Hz and the 9th at fs/2 Hz (those two are purely real; the 7 in between are complex). And note, some implementations return 16 channels, where the high part is a mirrored and complex-conjugated version of the low part.
As to the implementation of the equalizer functionality, multiply each channel by the wanted gain. If the spectrum has the mirrored part, mirror the gains as well; if this is not done, the result of the following IFFT will not be a real-valued signal. After the multiplication, apply the IFFT.
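To make this concrete, here is a minimal sketch using JTransforms (the library mentioned in the question). It assumes JTransforms' packed real-FFT layout and the recent package naming (org.jtransforms; older releases use edu.emory.mathcs.jtransforms); eqAudio and binGains are illustrative names:

import org.jtransforms.fft.DoubleFFT_1D;

public class FftEq {
    // Apply per-bin gains to one chunk of samples, in place.
    // binGains must have length chunk.length / 2 + 1 (bin 0 up to Nyquist).
    public static void eqAudio(double[] chunk, double[] binGains) {
        int n = chunk.length;                // must be even
        DoubleFFT_1D fft = new DoubleFFT_1D(n);
        fft.realForward(chunk);              // packed: chunk[0]=Re(0), chunk[1]=Re(n/2)

        chunk[0] *= binGains[0];             // DC bin (purely real)
        chunk[1] *= binGains[n / 2];         // Nyquist bin (purely real)
        for (int k = 1; k < n / 2; k++) {
            chunk[2 * k] *= binGains[k];     // real part of bin k
            chunk[2 * k + 1] *= binGains[k]; // imaginary part of bin k
        }
        fft.realInverse(chunk, true);        // back to the time domain, scaled
    }
}

Because realForward stores only the non-redundant half of the spectrum, the mirroring described above is handled implicitly. Scaling both the real and imaginary parts by the same gain changes the magnitude without touching the phase; in practice you would also window the chunks and overlap-add them to avoid clicks at chunk boundaries.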
As to the difference between an FFT-based and a filter-based equalizer, remember that an FFT is simply a fast way of calculating a set of FIR filters with sinusoids as impulse responses, critically sampled (downsampled by the filter length) and with evenly spaced center frequencies.
Regards

Related

Level out FFT graph (Processing)

I am trying to make a music visualizer in Processing (not that that part is super important), and I'm using a fast Fourier transform through Minim. It's working perfectly (reading the data), but there is a large spike on the left (bass) end. What's the best way to 'level' this out?
My source code is here, if you want to take a look.
Thanks in advance,
-tlf
The spectrum you show looks fairly typical of a complex musical sound where you have a complex section at lower frequencies, but also some clear harmonics emerging from the low frequency mess. And, actually, these harmonics are atypically clear... music in general is complicated. Sometimes, for example, if a flute is playing a single clear note one will get a single nice peak or two, but it's much more common that transients and percussive sounds lead to a very complicated spectrum, especially at low frequencies.
Comparing directly to the video, it seems to me that the video is a bit odd. My guess is that the spectrum it shows is either a zoom into a small section of the spectrum far from zero, or just a graphical algorithm that's driven by the music but doesn't correspond to an actual spectrum. That is, if you really want something to look very similar to this video, you'll need more than the spectrum, though the spectrum will likely be a good starting point. Here are a few points to note:
1) there is a prominent peak which occasionally appears right above the "N" in the word anchor. A single dominant peak should be clear in the audio as an approximately pure tone.
2) occasionally there's another peak that varies temporally with this peak, which would normally be a sign that the second peak is a harmonic, but many times this second peak isn't there.
3) A good example of odd behavior is at 2:26. This time just follows a little laser sound effect, and then there's basically a quiet hiss. A hiss should be a broad-spectrum sound without peaks, often weighted toward lower frequencies. At 2:26, though, there's just this single large peak above the "N" with nothing else.
It turns out what I had to do was multiply the data by
Math.log(i + 2) / 3
where i is the index of the data being referenced, zero-indexed from the left (bass).
You can see this in context here
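Spelled out as a loop, that is (a sketch; spectrum is assumed to hold the per-band magnitudes read from Minim's FFT):

// Hypothetical helper applying the weighting above: log(i + 2) / 3 is small
// for low indices (bass) and grows slowly with i, so the left spike levels out.
static void levelOut(float[] spectrum) {
    for (int i = 0; i < spectrum.length; i++) {
        spectrum[i] *= Math.log(i + 2) / 3.0;
    }
}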

frequency / pitch detection for dummies

While there are many questions on this site dealing with the concept of pitch detection... they all deal with this magical FFT, with which I am not familiar. I am trying to build an Android application that needs to implement pitch detection. I have absolutely no understanding of the algorithms that are used to do this.
It can't be that hard, can it? There are around 8 billion guitar tuner apps on the Android market, after all.
Can someone help?
The FFT is not really the best way to implement pitch detection or pitch tracking. One issue is that the loudest frequency is not always the fundamental frequency. Another is that the FFT, by itself, requires a pretty large amount of data and processing to obtain the resolution you need to tune an instrument, so it can appear slow to respond (i.e. latency). Yet another issue is that the result of an FFT is not exactly intuitive to work with: you get an array of complex numbers and you have to know how to interpret them.
If you really want to use an FFT, here is one approach:
Low-pass your signal. This will help prevent noise and higher harmonics from creating spurious results. Conceivably, you could skip this step and instead weight your results towards the lower bins of the FFT. For some instruments with strong fundamental frequencies, this might not be necessary.
Window your signal. Windows should be at least 4096 samples in size. Larger is better, to a point, because it gives you better frequency resolution; if you go too large, it will end up increasing your computation time and latency. The Hann function is a good choice for your window. http://en.wikipedia.org/wiki/Hann_function
FFT the windowed signal as often as you can. Even overlapping windows are good.
The results of the FFT are complex numbers. Find the magnitude of each one using sqrt(real^2 + imag^2). The index in the FFT array with the largest magnitude is the index of your peak frequency.
You may want to average multiple FFTs for more consistent results.
How do you calculate the frequency from the index? Well, let's say you've got a window of size N. After you FFT, you will have N complex numbers. If your peak is the nth one, and your sample rate is 44100, then your peak frequency will be near 44100*n/N. Why near? Well, you have an error of up to half a bin, i.e. (44100/2)*1/N. For a window size of 4096, this is about 5.4 Hz -- easily audible at A440. You can improve on that by 1. taking phase into account (I've only described how to take magnitude into account), 2. using larger windows (which will increase latency and processing requirements, as the FFT is an N log N algorithm), or 3. using a better algorithm like YIN: http://www.ircam.fr/pcm/cheveign/pss/2002_JASA_YIN.pdf
You can skip the windowing step and just break the audio into discrete chunks of however many samples you want to analyze. This is equivalent to using a rectangular window, which works, but you may get more noise in your results.
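Putting the steps above together, a minimal Java sketch (the FFT here comes from JTransforms; the Hann window, the 4096-sample buffer and the peak picking follow the steps above, and all names are illustrative):

import org.jtransforms.fft.DoubleFFT_1D;

public class PeakPitch {
    // Hann-window one chunk, FFT it, and return the frequency of the loudest bin.
    public static double estimateHz(double[] samples, double sampleRate) {
        int n = samples.length;               // e.g. 4096
        double[] buf = new double[2 * n];     // interleaved re/im for complexForward
        for (int i = 0; i < n; i++) {
            double hann = 0.5 * (1 - Math.cos(2 * Math.PI * i / (n - 1)));
            buf[2 * i] = samples[i] * hann;   // real part, windowed; imag stays 0
        }
        new DoubleFFT_1D(n).complexForward(buf);

        int peak = 1;
        double peakMag = 0;
        for (int k = 1; k < n / 2; k++) {     // skip DC, ignore the mirrored half
            double re = buf[2 * k], im = buf[2 * k + 1];
            double mag = Math.sqrt(re * re + im * im);
            if (mag > peakMag) { peakMag = mag; peak = k; }
        }
        return sampleRate * peak / n;         // bin index -> Hz
    }
}

Keep in mind this returns the loudest bin, not necessarily the fundamental; the refinements above (phase, averaging, YIN) exist precisely to deal with that.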
BTW: many of those tuner apps license code from third parties, such as z-plane and iZotope.
Update: If you want C source code and a full tutorial for the FFT method, I've written one. The code compiles and runs on Mac OS X, and should be convertible to other platforms pretty easily. It's not designed to be the best, but it is designed to be easy to understand.
A Fast Fourier Transform changes a function from the time domain to the frequency domain. So instead of f(t), where f is the signal you are getting from the microphone and t is the time index of that signal, you get g(θ), where g is the FFT of f and θ is a frequency. Once you have g(θ), you just need to find the θ with the highest amplitude, meaning the "loudest" frequency. That will be the primary pitch of the sound you are picking up.
As for actually implementing the FFT, if you google "fast fourier transform sample code", you'll get a bunch of examples.

Analyzing Sound in a WAV file

I am trying to analyze a movie file by splitting it up into camera shots and then trying to determine which shots are more important than others. One of the factors I am considering in a shot's importance is how loud the volume is during that part of the movie. To do this, I am analyzing the corresponding sound file. I'm having trouble determining how "loud" a shot is because I don't think I fully understand what the data in a WAV file represents.
I read the file into an audio buffer using a method similar to that described in this post.
Having already split the corresponding video file into shots, I am now trying to find which shots are louder than others in the WAV file. I am trying to do this by extracting each sample in the file like this:
double amplitude = (double)((audioData[i] & 0xff) | (audioData[i + 1] << 8)); // low byte unsigned, high byte sign-extended: one signed 16-bit little-endian sample
Some of the other posts I have read seem to indicate that I need to apply a Fast Fourier Transform to this audio data to get the amplitude, which makes me wonder what the values I have extracted actually represent. Is what I'm doing correct? My sound file format is a 16-bit mono PCM with a sampling rate of 22,050 Hz. Should I be doing something with this 22,050 value when I am trying to analyze the volume of the file? Other posts suggest using Root Mean Square to evaluate loudness. Is this required, or just a more accurate way of doing it?
The more I look into this the more confused I get. If anyone could shed some light on my mistakes and misunderstandings, I would greatly appreciate it!
The FFT has nothing to do with volume and everything to do with frequencies. To find out how loud a scene is on average, simply average the sampled values. Depending on whether you get the data as signed or unsigned values in your language, you might have to take the absolute value first so that negative amplitudes don't cancel out the positive ones, but that's pretty much it. If you don't get the results you were expecting, it must have to do with the way you are extracting the individual values in line 20.
That said, there are a few refinements that might or might not affect your task. Perceived loudness, amplitude and acoustic power are in fact related in non-linear ways, but as long as you are only trying to get a rough estimate of how much is "going on" in the audio signal, I doubt that this is relevant for you. And of course, humans hear different frequencies better or worse; for instance, bats emit ultrasound squeals that would be absolutely deafening to us, but luckily we can't hear them at all. But again, I doubt this is relevant to your task, since frequencies above half the sampling rate (about 11 kHz for your 22,050 Hz file) cannot be represented in a PCM WAV file at all.
I don't know the level of accuracy you want, but a simple RMS (and perhaps simple filtering of the signal) is all many similar applications would need.
RMS will be much better than peak amplitude. Using peak amplitudes is like determining the brightness of an image based on the brightest pixel, rather than averaging.
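As an illustration, a minimal RMS sketch over 16-bit little-endian mono PCM, reusing the byte extraction from the question (the method name is mine):

// Root-mean-square amplitude of one chunk of 16-bit little-endian mono PCM.
// Compare the values for different chunks to rank shots by loudness.
public static double rms(byte[] audioData, int from, int to) {
    double sumSquares = 0;
    int count = 0;
    for (int i = from; i + 1 < to; i += 2) {
        // low byte unsigned, high byte sign-extended: one signed 16-bit sample
        double sample = (audioData[i] & 0xff) | (audioData[i + 1] << 8);
        sumSquares += sample * sample;
        count++;
    }
    return count == 0 ? 0 : Math.sqrt(sumSquares / count);
}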
If you want to filter the signal or weigh it to perceived loudness, then you would need the sample rate for that.
An FFT should not be required unless you want to do complex frequency analysis as well. The ear does not respond to sounds at different frequencies and amplitudes linearly, so you could use an FFT to weight the loudness estimate by frequency for another degree of accuracy.

OpenCV performance on template matching

I'm trying to do template matching, basically, in Java. I used a straightforward algorithm to find matches. Here is the code:
minSAD = VALUE_MAX;
// loop through the search image
for ( int x = 0; x <= S_rows - T_rows; x++ ) {
    for ( int y = 0; y <= S_cols - T_cols; y++ ) {
        SAD = 0.0;
        // loop through the template image
        for ( int i = 0; i < T_rows; i++ )
            for ( int j = 0; j < T_cols; j++ ) {
                pixel p_SearchIMG = S[x+i][y+j];
                pixel p_TemplateIMG = T[i][j];
                SAD += abs( p_SearchIMG.Grey - p_TemplateIMG.Grey );
            }
        // save the best found position; this check must sit inside the
        // y loop so that every candidate position is tested
        if ( minSAD > SAD ) {
            minSAD = SAD;
            position.bestRow = x;
            position.bestCol = y;
            position.bestSAD = SAD;
        }
    }
}
But this is a very slow approach. I tested it with a 768 × 1280 image and a 384 × 640 subimage, and it takes ages.
Does OpenCV's ready-made cvMatchTemplate() perform template matching much faster?
You will find OpenCV's cvMatchTemplate() is much quicker than the method you have implemented. What you have created is a statistical template matching method. It is the most common and the easiest to implement, but it is extremely slow on large images. Let's look at the basic maths: you have an image that is 768 × 1280 and you loop through each of its pixels minus the edge (as that is your template's limit), so (768 - 384) × (1280 - 640) = 384 × 640 = 245,760 positions; for each of these you loop through every pixel of your template (another 245,760 operations). So before you add any maths inside your loop, you already have 245,760 × 245,760 = 60,397,977,600 operations. Over 60 billion operations just to loop through your image; it's surprising how quickly machines can do this.
Remember, however, it is really 245,760 × (245,760 × maths operations), so there are many more operations than that.
Now, cvMatchTemplate() uses a Fourier-analysis template matching operation. This works by applying a Fast Fourier Transform (FFT) to the image, by which the signals that make up the pixel changes in intensity are segmented into the corresponding waveforms. The method is hard to explain well, but the image is transformed into a signal representation of complex numbers. If you wish to understand more, search Google for the fast Fourier transform. The same operation is performed on the template, and the signals that form the template are used to filter out any other signals from your image.
In simple terms, it suppresses all features within the image that do not have the same features as your template. The image is then converted back using an inverse fast Fourier transform to produce an image where high values mean a match and low values mean the opposite. This image is often normalised, so that 1 represents a match and values at or near 0 mean the object is nowhere near.
Be warned, though: if the object is not in the image and the result is normalised, false detections will occur, as the highest value calculated will still be treated as a match. I could go on for ages about how the method works and the benefits or problems that can occur, but...
The reasons this method is so fast are: 1) OpenCV is highly optimised C++ code. 2) The FFT is easy for your processor to handle, as most have the ability to perform this operation in hardware; GPUs are designed to perform millions of FFT operations every second, because these calculations are just as important in high-performance gaming graphics and video encoding. 3) The number of operations required is far smaller.
In summary, the statistical template matching method is slow and takes ages, whereas OpenCV's FFT-based cvMatchTemplate() is quick and highly optimised.
Statistical template matching will not produce errors if an object is not there, whereas the OpenCV FFT method can, unless care is taken in its application.
I hope this gives you a basic understanding and answers your question.
Cheers
Chris
[EDIT]
To further answer your questions:
Hi,
cvMatchTemplate() can work with CCOEFF_NORMED, CCORR_NORMED and SQDIFF_NORMED, as well as the non-normalised versions of these. This page shows the kind of results you can expect and gives you code to play with:
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut6.html#Step%202
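For reference, a minimal sketch with the OpenCV Java bindings (the calls are from the OpenCV Java API; search and template are assumed to be already-loaded grayscale Mats):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class TemplateMatch {
    // Slide the template over the search image with the normalised correlation
    // coefficient; the location of the maximum is the best match position.
    public static Core.MinMaxLocResult bestMatch(Mat search, Mat template) {
        int rows = search.rows() - template.rows() + 1;
        int cols = search.cols() - template.cols() + 1;
        Mat result = new Mat(rows, cols, CvType.CV_32FC1);
        Imgproc.matchTemplate(search, template, result, Imgproc.TM_CCOEFF_NORMED);
        return Core.minMaxLoc(result);    // use maxLoc as the match position
    }
}

For the SQDIFF variants you would take minLoc instead of maxLoc, since there a smaller value means a better match.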
The three methods are well cited, and many papers are available through Google Scholar; I have provided a few below. Each one simply uses a different equation to find the correlation between the FFT signals that form the template and the FFT signals present within the image. The correlation coefficient tends to yield better results in my experience and is easier to find references for. Sum of squared differences is another method that can be used, with comparable results. I hope some of these help:
Fast normalized cross correlation for defect detection
Du-Ming Tsai; Chien-Ta Lin
Pattern Recognition Letters, Volume 24, Issue 15, November 2003, Pages 2625-2631
Template Matching using Fast Normalised Cross Correlation
Kai Briechle; Uwe D. Hanebeck
Relative performance of two-dimensional speckle-tracking techniques: normalized correlation, non-normalized correlation and sum-absolute-difference
Friemel, B.H.; Bohs, L.N.; Trahey, G.E.
Proceedings of the 1995 IEEE Ultrasonics Symposium
A Class of Algorithms for Fast Digital Image Registration
Barnea, Daniel I.; Silverman, Harvey F.
IEEE Transactions on Computers, Feb. 1972
It is often preferable to use the normalised versions of these methods, as anything that equals 1 is a match; however, if no object is present you can get false positives. The method works fast simply due to the way it is implemented. The operations involved are ideal for the processor architecture, which means each operation can be completed in a few clock cycles rather than shifting memory and information around over several clock cycles. Processors have been solving FFT problems for many years now, and, as I said, there is inbuilt hardware to do so. Hardware-based is always faster than software, and the statistical method of template matching is basically software-based. Good reading on the hardware can be found here:
Digital signal processor
Although it is a wiki page, the references are worth a look; effectively this is the hardware that performs FFT calculations.
A new Approach to Pipeline FFT Processor
Shousheng He; Mats Torkelson;
A favourite of mine, as it shows what's happening inside the processor.
An Efficient Locally Pipelined FFT Processor
Liang Yang; Kewei Zhang; Hongxia Liu; Jin Huang; Shitan Huang;
These papers really show how complex the FFT is when implemented; however, the pipelining of the process is what allows the operation to be performed in a few clock cycles. This is the reason real-time vision-based systems utilise FPGAs (processors you can design yourself to implement a specific task), as their architecture can be made extremely parallel, and pipelining is easier to implement.
Although I must mention that for the FFT of an image you are actually using FFT2, which is the FFT of the horizontal plane followed by the FFT of the vertical plane, just so there is no confusion when you find references to it. I cannot say I have expert knowledge of how the equations and the FFT are implemented; I have tried to find good guides, yet finding one is so hard that I haven't yet found one I can understand. One day I may understand them, but for now I have a good understanding of how they work and the kind of results that can be expected.
Other than this I can't really help you more. If you want to implement your own version or understand how it works, it's time to hit the library. But I warn you: the OpenCV code is so well optimised that you will struggle to increase its performance. However, who knows, you may figure out a way to gain better results. All the best, and good luck.
Chris

Audio analyzer for finding songs pitch

Is there any way to analyze a song's pitch programmatically? For example, I know most players show a graph or bar, and if the song's pitch is high at time t, the bar goes up at time t, something like that. Is there any utility/tool/API to determine a song's pitch, so that we can map it to a bar which goes up and down?
Thanks for any help
Naive but robust: transform a modest length segment into Fourier space and find the peaks. Repeat as necessary.
Speed may be an issue, so choose the segment length as a power of 2 so that you can use the Fast Fourier Transform which is, well, fast.
Lots of related stuff on SO already. Try: https://stackoverflow.com/search?q=Fourier+transform
Well, unfortunately I'm not really an expert on audio with the iPhone, but I can point you towards a couple good resources.
Core Audio is probably going to be a big thing in what you want to do: http://developer.apple.com/iphone/library/documentation/MusicAudio/Conceptual/CoreAudioOverview/Introduction/Introduction.html
As well, the Audio Toolbox may be of some help: http://developer.apple.com/iphone/library/navigation/Frameworks/Media/AudioToolbox/index.html
If you have a developer account, there are plenty of people on the forums that can help you: https://devforums.apple.com/community/iphone
To find the current pitch of a song, you need to learn about the Discrete Time Fourier Transform. To find the tempo, you need autocorrelation.
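As a sketch of the autocorrelation idea (illustrative code, not a library API): correlate the signal with delayed copies of itself, and the lag with the strongest self-similarity gives the period.

// Minimal autocorrelation sketch: for each candidate lag, sum the products of
// the signal with a delayed copy of itself. A peak at lag L means the signal
// repeats every L samples, i.e. a period of L / sampleRate seconds.
static int strongestLag(double[] x, int minLag, int maxLag) {
    int bestLag = minLag;
    double best = Double.NEGATIVE_INFINITY;
    for (int lag = minLag; lag <= maxLag; lag++) {
        double sum = 0;
        for (int i = 0; i + lag < x.length; i++) {
            sum += x[i] * x[i + lag];
        }
        if (sum > best) { best = sum; bestLag = lag; }
    }
    return bestLag;
}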
I think what you may be speaking of is a graphic equalizer, which displays the amplitude of different frequency ranges in an audio signal at a given time. It is normally equipped with controls to modify the amplitudes between the given frequency ranges. Here's an example. Is that sort of what you're thinking of?
EDIT: Also, your numerous tags don't really give any indication of what language you might be using here, so I can't really suggest any specific techniques or libraries.
