I know this question has been asked ad nauseam, but somehow I can't make it work properly. I generated a single 440 Hz sine wave with unit amplitude. After the FFT, the bin at 440 Hz has a distinct peak, but the value just isn't right: I'd expect to see 0 dB since I'm dealing with a unit-amplitude sine wave, yet the level I compute is well above 0 dB. The formula I'm using is simply
    for (int i = 0; i < N/2; i++)
    {
        mag = sqrt((Real[i]*Real[i] + Img[i]*Img[i])/(N*0.54)); // 0.54 correction for a Hamming window
        Mag[i] = 10 * log(mag);
    }
I should point out that the total energy in the time domain equals the total energy in the frequency domain (Parseval's theorem), so I know that my FFT routine itself is fine.
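For reference, the Parseval check I mean looks like this (sketched in numpy for brevity; the signal names and parameters are illustrative, not my actual code):

    import numpy as np

    fs = 44100                       # sample rate in Hz
    N = 4096                         # FFT length
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 440 * t)  # unit-amplitude 440 Hz sine

    X = np.fft.fft(x)

    time_energy = np.sum(x**2)
    freq_energy = np.sum(np.abs(X)**2) / N  # numpy's unnormalized FFT needs the 1/N

    print(time_energy, freq_energy)  # the two agree to rounding error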
Any help is much appreciated.
Many common (but not all) FFT libraries scale the FFT result of a unit-amplitude sinusoid by the length of the FFT. This maintains Parseval's equality, since a longer sinusoid represents more total energy than a shorter one of the same amplitude.
If you don't want that scaling when using one of these libraries, divide by the FFT length before computing the magnitude in dB. For a real signal there is one more factor to watch: the sine's energy is split between the positive- and negative-frequency bins, so a one-sided magnitude spectrum also needs a factor of 2 if you want the peak at 0 dB, as in the sketch below.
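A minimal numpy sketch of that (parameters are illustrative; N is chosen so 440 Hz lands exactly on a bin, fs/N = 10 Hz, to avoid scalloping loss):

    import numpy as np

    fs = 44100
    N = 4410                                   # 440 Hz falls exactly on bin 44
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 440 * t)            # unit-amplitude sine

    X = np.fft.rfft(x)                         # one-sided spectrum
    mag = 2.0 * np.abs(X) / N                  # 2/N so the peak reads ~1.0
    mag_db = 20 * np.log10(np.maximum(mag, 1e-12))  # 20*log10 for an amplitude

    print(mag_db.max())                        # ~0 dB at the 440 Hz bin

Note also that dB for an amplitude is 20*log10; C's log() is the natural logarithm.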
I've been struggling with this again for work. It seems that a lot of software routines and books are a bit sloppy about FFT normalization. The best summary I have is: energy needs to be conserved, which is Parseval's theorem. Also, when coding this in Python you can easily lose an element and not know it: numpy slicing is not inclusive of the stop index.
Here's a minimal numpy sketch of how I normalize the FFT properly (a one-sided amplitude spectrum, corrected for the window's coherent gain; the helper name is mine):
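    import numpy as np

    def normalized_spectrum(x, fs, window=None):
        """One-sided amplitude spectrum, normalized so a unit-amplitude
        sine reads 1.0 (0 dB) at its bin. The window correction divides
        by the coherent gain (the mean of the window), e.g. ~0.54 for
        a Hamming window."""
        N = len(x)
        if window is None:
            window = np.ones(N)
        cg = window.mean()                    # coherent gain of the window
        X = np.fft.rfft(x * window)
        mag = 2.0 * np.abs(X) / (N * cg)      # factor 2: one-sided spectrum
        mag[0] /= 2.0                         # DC is not split between two bins
        if N % 2 == 0:
            mag[-1] /= 2.0                    # ...and neither is Nyquist
        freqs = np.fft.rfftfreq(N, d=1.0/fs)
        return freqs, mag

    # Careful with slicing here: numpy slices exclude the stop index,
    # so x[0:N//2] really is the first half and no element goes missing.
    fs, N = 44100, 4410                       # 440 Hz lands exactly on bin 44
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 440 * t)

    freqs, mag = normalized_spectrum(x, fs, np.hamming(N))
    print(freqs[mag.argmax()], 20 * np.log10(mag.max()))  # ~440.0, ~0 dB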
Normalization can be done in many different ways, depending on the window, the number of samples, etc.
Common trick: take the FFT of a known signal and normalize by the value of the peak. Say in the above example your peak is 123; if you want it to be 1, then divide it (and all results obtained with this algorithm) by 123.
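A sketch of that calibration trick in numpy (the signals and parameters are made up for illustration; the point is that the reference runs through exactly the same window/FFT pipeline as the real data):

    import numpy as np

    fs = 44100
    N = 4410                                   # bin width fs/N = 10 Hz
    t = np.arange(N) / fs
    window = np.hamming(N)

    ref = np.sin(2 * np.pi * 440 * t)          # known unit-amplitude sine
    peak = np.abs(np.fft.rfft(ref * window)).max()  # the "123" from the text

    x = 0.5 * np.sin(2 * np.pi * 1000 * t)     # hypothetical test signal
    mag = np.abs(np.fft.rfft(x * window)) / peak    # calibrated spectrum
    print(mag.max())                           # ~0.5, i.e. about -6 dB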