Hi All,

It is common knowledge that an FFT followed by an iFFT is an alternative to time-domain convolution. However, many articles explain this method on the basis that multiplication in the frequency domain replaces convolution in the time domain. True, but why would one use multiplication in the first place if the aim is filtering (as opposed to general convolution)? I can just discard (zero) the bins (thus creating a stop band), then apply the iFFT and sample as required, including any rate conversion.

Is there any flaw in my thoughts?

Kaz

Hi kaz,

You are totally right. However, there can be a resolution problem for very short signals. For example, if only a few points remain after discarding the stop-band bins, you cannot obtain the time-domain equivalent of the filtered signal. You can also google:

"frequency domain channelizer".

True, resolution is an issue, but my main concern is that it converts a stream into blocks with phase discontinuities and also takes far too many resources. I don't see why anyone would do an FFT then iFFT instead of time-domain filtering. There might be some use cases that prefer going to the frequency domain...

Actually what you have said (discarding bins, i.e. an ideal stop band filter) requires an infinitely long filter response in the time domain, right?

No.

We get FFT bins across ±0.5 Fs. We target our cutoff point, and I don't see the issue of the mystery infinity... Processing would be block-based but could be made streaming, I assume. Resolution could be an issue.

The issue is that putting a zero at the FFT bin center only zeros the frequency at that point. The frequencies between that bin center and the next bin center aren't zero. So, yes, it requires an infinitely long filter response to truly zero all frequencies from a desired cutoff point to fill a stopband with zero response. The FFT bins are only the samples at that frequency, and not the frequencies between, just like a time sample represents only that time, and not what happens between it and the adjacent samples.
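To make this concrete, here is a quick NumPy sketch (not from the thread; the tone frequencies and block size are arbitrary). A tone exactly on a bin center vanishes when that bin is zeroed, but a tone halfway between bin centers leaks into every bin and survives:

```python
import numpy as np

N = 64
n = np.arange(N)
# Tone exactly on bin 10: zeroing bin 10 (and its mirror) removes it entirely.
on_bin = np.cos(2 * np.pi * 10 * n / N)
# Tone halfway between bins 10 and 11: its energy leaks into every bin.
off_bin = np.cos(2 * np.pi * 10.5 * n / N)

def zero_bins(x, bins):
    """Zero the listed FFT bins and return the real part of the iFFT."""
    X = np.fft.fft(x)
    X[list(bins)] = 0.0
    return np.fft.ifft(X).real

res_on = zero_bins(on_bin, [10, N - 10])
res_off = zero_bins(off_bin, [10, 11, N - 11, N - 10])

print(np.max(np.abs(res_on)))   # numerical zero: the on-bin tone is gone
print(np.max(np.abs(res_off)))  # a large residual remains from leakage
```

The off-bin tone is not removed even though the four bins nearest to it are zeroed, which is exactly the "samples between bin centers" point above.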

Filtering with FFTs is somewhat nuanced, especially if one wants to filter a vector longer than the length of one FFT.

Actually my question is not about the equivalence of time-domain versus frequency-domain filtering, as that depends on many variables including the choice of time-domain filter... and not about efficiency, but about the concept of FFT *multiplication versus just discarding bins*. I don't see any reason to worry about the edge bin, as time-domain filters suffer the same issue in the transition band.

I have never used FFT/iFFT for filtering, as (on FPGAs) it would be crazy to do that. But it helps to know that discarding bins is indeed a filter. For example, if I want to consider aliasing from some time-domain decimated output that is eventually expected to go through an FFT where some bins are discarded, then any aliasing into those discarded bins can be ignored.

The root of your question seems to be about the word "filter". I have a filter in my air conditioner, which obviously has nothing to do with signal processing. But even within signal processing, we can talk about *moving median* filtering, *time-invariant* filtering, *linear* or *non-linear* filtering, etc.

Let's suppose that you are limiting your attention to *linear and time-invariant filtering.* Using an FFT of finite length, *multiplying its output points by predetermined constants, and inverse transforming* is clearly linear filtering, but it is not *time-invariant* filtering.

Let x_n be the input to the filter and let N be the FFT block length. Suppose x_n is composed entirely of frequency components that are periodic with period N: a component of period N, another of period N/2, another of period N/3, etc., plus a constant term. Then if you break the signal x_n into blocks and compute an FFT X_k on a block of data, the k*th* FFT bin will contain the coefficient of the complex exponential with period N/k, for every bin. The FFT Y_k that results from multiplying each bin by a fixed coefficient is then the FFT of a filtered sequence y_n with the same periodicity characteristics as the input x_n, and that's what you get from the iFFT.

So that's a linear time-invariant filter, but only for *some* inputs x_n: the ones that are entirely periodic with period N. That's not all sequences; in fact, such inputs are very rare.
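The failure for non-periodic inputs can be checked numerically: bin-by-bin multiplication is a *circular* convolution, which matches ordinary linear, time-invariant convolution only where the block's wrap-around doesn't reach. A NumPy sketch (the random signal and the 32-tap filter are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
h = rng.standard_normal(32)   # some FIR impulse response, length M = 32
x = rng.standard_normal(N)

# Bin-by-bin multiplication of the spectra = CIRCULAR convolution of x and h.
y_circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)).real
# Ordinary (linear, time-invariant) convolution, truncated to N samples.
y_lin = np.convolve(x, h)[:N]

# The first M-1 samples differ: the tail of the block wraps around.
print(np.max(np.abs(y_circ[:31] - y_lin[:31])))   # large wrap-around error
print(np.max(np.abs(y_circ[31:] - y_lin[31:])))   # numerical zero
```

Only the last N - (M-1) samples of the block agree with true LTI filtering; the first M-1 are contaminated by the circular wrap.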

In the early days of signal processing, Thomas Stockham and Gordon Sande independently discovered that the FFT algorithm could perform linear time-invariant filtering of *any* signal by using FFTs to perform circular convolutions. Both of their methods are well described in about a hundred different books and papers. I'm pretty sure you understand that.
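For reference, here is a minimal NumPy sketch of one of those block-convolution methods, overlap-save; the function name, block size, and test signal are illustrative, not taken from any particular text:

```python
import numpy as np

def overlap_save(x, h, nfft=256):
    """FFT-based FIR filtering of a long signal (overlap-save method).
    Produces the same output as np.convolve(x, h)[:len(x)]."""
    M = len(h)
    step = nfft - (M - 1)           # new input samples consumed per block
    H = np.fft.fft(h, nfft)         # filter spectrum, computed once
    buf = np.zeros(nfft)            # carries M-1 samples of history
    y = np.empty(0)
    for start in range(0, len(x), step):
        block = x[start:start + step]
        buf = np.concatenate([buf[nfft - (M - 1):], block,
                              np.zeros(step - len(block))])
        Y = np.fft.ifft(np.fft.fft(buf) * H).real
        # First M-1 outputs are corrupted by circular wrap-around; drop them.
        y = np.concatenate([y, Y[M - 1:M - 1 + len(block)]])
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
h = rng.standard_normal(64)       # an arbitrary 64-tap FIR
ref = np.convolve(x, h)[:len(x)]  # direct time-domain convolution
```

By overlapping the blocks by M-1 samples and discarding the wrapped outputs, the circular convolutions stitch together into an exact linear, time-invariant filtering of the whole stream.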

What do you mean by "discard the bin"? Do you mean shorten the IFFT to smaller N? Or do you mean zeroing the bins you are "discarding"?

The behavior of zeroing a bin doesn't change whether you call it filtering or "discarding"; the process doesn't know the difference.

It depends on the use case. In the LTE iFFT (downlink) we populate the subcarrier bins (the two sides of the iFFT input) and set the centre bins to zero, which becomes a stop band in the spectrum of the time-domain signal.

In the uplink we do an FFT to recover the subcarriers from the side bins, but discard the centre bins.

In either case it helps to view these as filters, and they are.

That could be OK. As Mr. Charles Rader @CharlieRader explained below, your subcarriers are most probably periodic with N, i.e. they exactly coincide with the centers of the frequency bins?

Are you saying you are *generating* a signal that you know is exactly centered in some FFT bin, and you want to zero that bin to remove it from the output?

FWIW, chaining an FFT and iFFT for filtering on an FPGA is actually a thing. It helps that many (most?) FPGAs these days have hard DSP blocks for multiply-accumulate, which makes this pretty gate-efficient as well.

Not in the IDFT/IFFT case. If you do an IFFT, the zeroed terms aren't in the sum, so no cos/sin terms at those frequencies are summed in the synthesis, and kaz is right. Another issue is that the results aren't good to the ear (for example).

In the IDFT this is not the case. If you put in zero terms, there are no cos/sin terms to sum at those frequencies; no intermediate frequencies are summed at all. Imagine an 8-point IDFT/IFFT: if you put 0 in all terms, the result is all zeros.

The ideal filter has an acausal (non-causal) impulse response, which constrains its implementation. If your application does not require returning to the time domain (e.g., band-specific feature extraction), you can adopt the ideal filter. Otherwise, an approximation of the ideal frequency response (such as a Butterworth filter) should be used.
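The acausality is easy to see by taking the iFFT of a brick-wall mask: the impulse response is a periodic sinc whose two-sided tails span the entire block. A NumPy sketch (N and the cutoff are arbitrary choices):

```python
import numpy as np

N = 1024
H = np.zeros(N)
H[:103] = 1.0      # ideal low-pass mask: DC plus 102 positive-frequency bins
H[-102:] = 1.0     # mirrored negative-frequency bins, for a real response
h = np.fft.ifft(H).real

# h is a periodic sinc: it peaks at n = 0, but its tails are two-sided
# (acausal) and extend across the entire block.
print(h[0])          # main-lobe peak = 205/1024, about 0.2
print(abs(h[N // 2]))  # even the farthest sample is still nonzero
```

Truncating those tails to get a realizable FIR filter is exactly what turns the brick wall into an approximation with ripple and a finite transition band.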

The thread here appears to speak to some of the shortcomings of this approach:

matlab - Why is it a bad idea to filter by zeroing out FFT bins? - Signal Processing Stack Exchange

I think the first reply sort of touches on points made in other responses here.

Plenty of ideas, thanks. Here is my attempt at it, using Octave to compare. I don't see a significant problem (though my question is not about equivalence):

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
x = randn(1,1000);
y = fft(x);
yy = zeros(1,1000);
yy(1:100) = y(1:100);         % keep positive-frequency pass band
yy(901:1000) = y(901:1000);   % keep the mirrored negative-frequency bins
xx = real(ifft(yy));
%%%%%%%%%%%%%%%%%%%%%%%%%%%
h = fir1(100,100/1000*2);     % FIR with the matching cutoff (0.1*Fs)
test = filter(h,1,x);         % time domain filtering
plot(xx); hold;
plot(test(51:end),'g-');      % skip the FIR group delay (50 samples)

The original question was about the impact on filtering. I'd think the criterion would be accuracy relative to time-domain convolution, or its equivalent, multiplication in the frequency domain. I used your last script to look at that a bit, as follows:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
x = randn(1,10000);
y = fft(x);
yy = zeros(1,10000);
yy(1:1000) = y(1:1000);
yy(9001:10000) = y(9001:10000);
xx = real(ifft(yy));
%%%%%%%%%%%%%%%%%%%%%%%%%%%
h = fir1(1000,1000/10000*2);
test = filter(h,1,x);   % time domain filtering
%plot(xx); hold;
%plot(test(501:end),'g-');
t_fp = fopen( "t.csv", "w+" );
for idx = 1:( columns(xx) - 500 )
  idx_0 = ( idx + 500 );              % offset by the FIR group delay (500)
  val_fft = xx( idx );
  val_fir = test( idx_0 );
  diff_raw = ( val_fir - val_fft );
  diff_percent = abs( 100.0 * ( diff_raw / val_fft ) );
  fprintf( t_fp, "%6d, %20.12f, %20.12f, %20.12f, %20.12f\n", idx, val_fft, val_fir, diff_raw, diff_percent );
endfor
fclose( t_fp );
%%%%%%%%%%%%%%%%%%%%%%%%%%%

hopefully done correctly...

I don't have plotting capability where Octave runs, so I sent the output to a file for plotting in Excel, and then generated these three plots:

The upper-left plot, one trace over the other, doesn't look dramatically different, but looking at the raw difference (lower left), or even more so the % difference (upper right), we see quite a different story. I assume these errors are the predictable effects of a non-equivalent method, i.e. discarding bins and IFFT versus convolution. Would those levels of % error really be acceptable?

Differences are expected, and in this case they depend on my choice of fir1.

If you use an impulse input and take the FFT in both cases, you will find that the FFT method with zeroing gives an ideal brick wall. I believe that is the reference for various time-domain filters. I am not familiar with using multiplication factors in the frequency domain (rather than zeroing), but I guess it could be an approach to mirror a given time-domain filter (reverse engineering).

I am still convinced that FFT with zeroing is an ideal filter for a given symbol, but not when it comes to block-by-block processing or the resolution effect. Practically, no sensible hardware designer (ASIC or FPGA) will go that way, as it is an overkill of resources, but it is useful to be aware of it conceptually when an FFT/iFFT already exists for whatever purpose.

Confused a bit by this. From everything I've read, and the replies here, zeroing an FFT bin appears to be a brick wall only for the signal component associated with that bin, not outside it. So a brick wall with lots of holes :) I could have misunderstood things, though.

You are right about your brick-wall thoughts. The important issue is the passband bins: how would two transforms (FFT then iFFT) faithfully reproduce the original pass band? That is for research.

The easiest thing to do is to try it and compare. Both filtering approaches are very easy to do in MATLAB or just about any other signal-processing software tool. I think this is one of those topics that takes longer to discuss than to do. At the very least, once you conduct a few experiments, it'll lead to a more focused, data-backed discussion.

Ahh, good, I failed to read down far enough the first time. I see some example code now!

Coding is indeed the best way, and then comes its interpretation. After all, zeroing FFT bins leads to a brick-wall filter by the classic definition of a "frequency filter". There should be no objection to that, but "filter" as a concept may include air-conditioning filters, car filters, nose filters, etc. Even here on a DSP forum we need a filter. In such cases coding wouldn't help.

The practical engineering answer to your question of why use this FFT/IFFT technique at all is, in a word, efficiency. For filters that are extremely low-pass in nature, e.g. 5000 frequency bins where only 20 are non-zero, it is far more efficient to compute the FFTs than to implement brute-force convolution. To your other point on why not just zero out the FFT bins as opposed to using frequency-domain multiplication: they are the same thing. When you zero something out, you are multiplying by a mask of 1's and 0's representing the ideal low-pass, high-pass, or band-pass filter. To be precise, though, that works for FIR filter implementations. For IIR filters it's a bit different, as you have to use FFT ratios, but that's digressing a bit...
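The "same thing" point is trivial to verify numerically: zeroing a set of bins produces exactly the result of multiplying the spectrum by a 0/1 mask. A NumPy sketch (the sizes and band edges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(512)
X = np.fft.fft(x)

# An ideal low-pass "filter" expressed as a mask of ones and zeros.
mask = np.zeros(512)
mask[:50] = 1.0    # positive-frequency pass band
mask[-49:] = 1.0   # mirrored negative-frequency pass band

# "Zeroing out" the stop-band bins directly...
X_zeroed = X.copy()
X_zeroed[50:-49] = 0.0
# ...gives the identical spectrum to frequency-domain multiplication:
#   X_zeroed == X * mask  (element by element)
```

So "discarding bins" is just frequency-domain multiplication with a particularly simple (brick-wall) set of coefficients.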