Downsampling an IQ signal

Started by tomb18, 6 years ago · 10 replies · latest reply 6 years ago · 1606 views

Forgive my newbie questions. I always thought that to downsample a signal you should first apply a low-pass filter and then downsample to prevent any aliasing.

So I am collecting IQ data from a radio that is sampling at 2,000,000 samples per second. I decided to try downsampling by 2 by dropping every second sample from the IQ stream.

The resulting spectrum was of course half the bandwidth, but otherwise looked identical: no extra peaks or artifacts. If anything, the downsampled data looked lower in intensity.

So do I have to low pass filter the signals or not?

Thanks, Tom

Reply by achesir, October 12, 2018

Please check your statement "I [downsample] by 2 by dropping every second sample from the IQ stream. The resulting spectrum [has] of course half the bandwidth but otherwise looked identical, no extra peaks or artifacts.":

Dropping samples does *not* decrease the bandwidth of the signal, unless you happen to drop only those samples that imply high-frequency components of your signal!

It is always the case that if the bandwidth of a signal is B, the sampling rate of that signal must meet or exceed 2B unless one is willing to tolerate aliasing.

For example: if your signal of interest has a bandwidth of 750 kHz, and your original sampling rate is 2 MHz, then before you drop the sample rate in half (by discarding every other sample, otherwise known as "decimating by 2"), you must first filter the signal so that only 500 kHz of signal remains.

On the other hand: if your original signal has a bandwidth of only 500 kHz, and your original sampling rate is 2 MHz, then you can simply drop every other sample without loss of the integrity of the signal, since your new sampling rate is still at least as high as twice the bandwidth.
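To make the filter-then-decimate recipe concrete, here is a minimal numpy sketch (the windowed-sinc design, the 101-tap length, and the test tone are my own illustrative choices, not anything specified above): design a half-band low-pass, filter the complex IQ stream, then keep every other sample.

```python
import numpy as np

def lowpass_fir(num_taps, cutoff):
    """Windowed-sinc low-pass FIR; cutoff is normalized so 1.0 = Nyquist."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = cutoff * np.sinc(cutoff * n) * np.hamming(num_taps)
    return h / h.sum()                      # unity gain at DC

def decimate_by_2(x):
    """Anti-alias filter to the new +/- fs/4 band, then drop every other sample."""
    h = lowpass_fir(101, 0.5)               # keep |f| < fs/4, the new Nyquist
    return np.convolve(x, h, mode="same")[::2]

# A tone well inside the kept band survives decimation at the same frequency
N = 4096
tone = np.exp(2j * np.pi * 200 * np.arange(N) / N)   # bin 200 of a 4096-pt FFT
y = decimate_by_2(tone)                              # bin 200 of a 2048-pt FFT
```

Skipping the `lowpass_fir` step and just writing `x[::2]` is exactly the "drop every second sample" experiment from the original post, and it is only safe when the upper half of the band is already empty.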

Reply by timburnett, October 12, 2018

Aliasing (higher frequency signals appearing at lower frequencies, potentially on top of signals already at those frequencies) will still occur with IQ downsampling. If the upper half of your spectrum is quiet then you won't notice much effect from the downsampling in the lower half of the spectrum. But if that was always going to be the case then you could just sample at half your current rate to begin with. If you think that you might sometimes get significant interference at frequencies more than half your sample rate, I would recommend low pass filtering prior to downsampling (or equivalently, use a decimating low pass filter structure). 

Reply by tomb18, October 12, 2018


I passed this question on to a friend who is an expert in DSP, and he pointed out what effect aliasing has on the resulting spectrum.

When I downsampled by 2, I noticed there were no extra peaks, but the amplitude of the signals got smaller. When I experimented some more and downsampled by 4, the signal peaks got smaller still. So I wondered why. Well, aliasing. As he explained it, the noise floor itself folds into the new passband. Each factor of 2 results in a 3 dB rise in the noise floor, but the signals don't change. So by the time I got to downsampling by 10, the noise floor had come up by around 10 dB and the weaker signals had disappeared into it.
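That per-octave effect can be reproduced in a few lines of numpy (my own sketch, with an arbitrary tone frequency and noise level): if the time span of each FFT is held fixed, the tone's peak power drops 6 dB for every halving of the rate while the folded noise floor drops only 3 dB, so the signal sinks about 3 dB into the noise per factor of 2.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1 << 16
n = np.arange(N)
# Bin-centred complex tone plus complex white noise across the whole band
x = np.exp(2j * np.pi * 100 * n / 4096) + 0.01 * (
    rng.standard_normal(N) + 1j * rng.standard_normal(N))

def peak_to_floor_db(x, nfft):
    """Tone peak power over the median per-bin power, averaged over segments."""
    segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    p = np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2, axis=0)
    return 10 * np.log10(p.max() / np.median(p))

snr_full = peak_to_floor_db(x, 4096)        # full rate
snr_half = peak_to_floor_db(x[::2], 2048)   # decimated by 2, same time span
# snr_full - snr_half comes out near 3 dB
```

The tone sits exactly on an FFT bin both before and after decimation, so the shrinking peak-above-floor is purely the noise-folding effect described above, not spectral leakage.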

One thing to keep in mind is that this is coming from a software defined radio, so there will always be signals outside the passband, even if it's just noise.

It's one thing to read about this, but actually trying it and then understanding what is happening is another. In everything I have read, there was never a mention of the noise itself, which of course is a signal too! So now I see why you must filter first.

Reply by kaz, October 12, 2018

Just adding a few more notes. I think aliasing is always discussed in terms of any power beyond the intended Nyquist band, be it signal or noise.

You may allow aliasing as long as it doesn't hit your signal band and you can still apply a final chop in one go.

Before an ADC, all of the electromagnetic spectrum may alias, including - I imagine - all man-made signals anywhere in the spectrum, as well as God-made signals such as X-rays, cosmic rays and light waves reaching the antenna. It must be an extremely nasty and complex signal??

I imagine the RF engineer will first make sure to choose the right antenna, then apply an analogue filter (band-pass), or down-convert to baseband and apply an LPF. All that to keep the ADC happy and only sample the target signal band. The digital engineer (DSP, FPGA, ASIC) receives this signal at the given ADC sampling rate and may decide to downsample. What power is left outside the band is decided by the RF engineer, and if he is a nice person he may have already filtered everything off brickwall-style, in which case I wouldn't care about a digital filter and might just decimate directly and lose my job. Not realistic...

There is still the issue of quantisation noise. Does it alias, and can it be filtered?

I doubt it. When we apply a filter before the actual decimation, we still leave quantisation noise all over and then decimate directly. I don't think we can filter quantisation noise, as it is inherent, and I don't think it aliases. I believe it is not really noise as such but a distortion.

Reply by dszabo, October 12, 2018

Aren't delta-sigma modulators entirely based on the idea of reshaping and filtering quantization noise?

Reply by Rick Lyons, October 12, 2018

Hi tomb18.

Ignoring any aliasing issues, let's say you have 1024 time-domain samples of a signal, you perform a 1024-pt FFT on those samples, and you see that some spectral magnitude peak has a value of M1. If you then downsample your time samples by D = 2 and perform a 512-pt FFT, you'll see that the downsampled spectral peak has a magnitude of M2 = M1/D = M1/2. The spectral magnitude of a signal is proportional to the number of samples used by the FFT to compute that spectral magnitude.
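This scaling is easy to verify numerically; a complex tone placed exactly on a bin makes it exact (a numpy sketch with my own choice of N and bin, not Rick's code):

```python
import numpy as np

N, k = 1024, 64
x = np.exp(2j * np.pi * k * np.arange(N) / N)   # tone exactly on bin k

m1 = np.abs(np.fft.fft(x)).max()        # 1024-pt FFT peak: M1 = 1024
m2 = np.abs(np.fft.fft(x[::2])).max()   # 512-pt FFT of decimated signal: M2 = 512
# m1 / m2 == D == 2: the peak magnitude halves when you decimate by 2
```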

Reply by kaz, October 12, 2018

Hi Rick,

That is a valid point if the OP did not take it into account. 

Your comments, however, raised some thoughts with me. The FFT, in my opinion, should not scale based on resolution, but it does. I personally believe the FFT as a tool should be cleared of this scaling bias. We can apply Parseval's theorem, which states that power in and out of the FFT should be the same since it is the same signal. Then we wouldn't get this effect when we decimate. For my work on OFDM symbols I always force the FFT or IFFT towards unity power gain.
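For what it's worth, numpy's FFT exposes exactly this choice: the default scaling carries a factor-of-N power gain, while `norm="ortho"` makes the transform unitary so Parseval's theorem holds with no extra factor (a quick check of my own, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# Default scaling: frequency-domain power is N times larger than time-domain
X = np.fft.fft(x)
p_time = np.sum(np.abs(x) ** 2)
p_freq = np.sum(np.abs(X) ** 2) / len(x)     # needs an explicit 1/N

# Unitary ("ortho") scaling divides by sqrt(N), so power matches directly
Xu = np.fft.fft(x, norm="ortho")
p_freq_unitary = np.sum(np.abs(Xu) ** 2)
```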

Reply by dszabo, October 12, 2018

I would advocate for a unity-gain FFT as well, but it only applies a scalar and wouldn't affect the shape.

Reply by dszabo, October 12, 2018

Not to contradict your friend, but I would have to say yes and no. To the best of my knowledge, "spectrum" is a term that is widely treated as having a concrete definition but is largely implementation dependent. Oftentimes I see the term applied because the chosen operation looks like what we think of as a spectrum. It is difficult to comment on what you are seeing without more information, but I am sure that the amplitude of your baseband signal should not be reduced because of decimation. This should be easily verified using a thought experiment.

For example, let's say we create a low-frequency discrete-time signal with a peak amplitude of 0 dBFS and add white noise with a peak amplitude of -40 dBFS. Let's look exclusively at the time-domain signal for this experiment. Now let's say we decimate the signal by a factor of 2. Do you expect the amplitude of either signal to change? No. With respect to the low-frequency signal, by design no aliasing has occurred, so we get the same signal at half the sample rate. The white noise also has the same amplitude and is still just white noise. Even though aliasing has occurred, the noise is uncorrelated across the spectrum, so there is a balanced amount of constructive and destructive interference, statistically speaking.

So why might we expect to see reductions in "peaks" in the "spectrum"? One possible explanation comes to mind. Let's say you use the magnitude of a 512-point FFT as your definition of spectrum. At 2 MHz, this corresponds to 256 us of data, providing a frequency "resolution" of ~3.9 kHz. If you did the same after decimating to 1 MHz, you now have 512 us of data with a resolution of ~1.95 kHz. Unless your signal is perfectly contained at 3.9 kHz intervals, which is probably unreasonable, your signal is now going to be spread over twice as many bins. This means that your "peaks" will drop by about a factor of 2. The problem isn't that you are halving your signal; it's that you are increasing your frequency resolution by including more time data.
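The resolution arithmetic above checks out (numbers as in the paragraph: a 512-point FFT at 2 MHz and then at 1 MHz):

```python
nfft = 512
for fs in (2_000_000, 1_000_000):
    span_us = nfft * 1e6 / fs   # time span covered by one FFT window, in us
    res_hz = fs / nfft          # bin spacing, i.e. the frequency "resolution"
    print(f"{fs/1e6:g} MHz: {span_us:g} us of data, {res_hz:g} Hz per bin")
# 2 MHz gives 256 us and ~3.9 kHz bins; 1 MHz gives 512 us and ~1.95 kHz bins
```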

The noise side of things gets a little counterintuitive because, as you have noted, the "spectrum" won't do what you might necessarily think. As we pointed out in the previous thought experiment, the noise signal level doesn't drop when we resample it, because it is uncorrelated. However, because the noise was already spread across all 512 bins at 2 MHz, it will also be spread across all 512 bins at 1 MHz, with roughly the same magnitude. Your SNR hasn't changed, though.

So why does the noise floor drop when you low-pass filter it? Well, because you filtered it. You are increasing the SNR by attenuating the noise in the part of the spectrum you aren't using. That's good DSP. It wasn't so much that you 'needed' to low-pass filter before decimation; it's that you should filter the undesirable frequency range regardless.

Reply by tomb18, October 12, 2018

Well, I have solved the whole downsampling issue that has perplexed me for some time.

As I mentioned in my first post, I didn't see any obvious aliasing because I wasn't looking at this the right way. So I did more research and a whole bunch of experimentation. This is what I did:

First of all, I am collecting IQ samples from a software defined radio. The IQ samples are delivered at 1.920 MSPS, resulting in a spectrum bandwidth of 1.920 MHz. I wanted to decimate this down to 192 kSPS so as to increase the resolution when people zoom into the spectrum. What I finally did was to use a lowpass filter on the I and Q streams separately, set for a cutoff of 96 kHz with 1025 taps. This took some experimentation, since the software I am using for the filter design needed to be set for a cutoff of 48 kHz! I didn't know this, so it was only by asking why it wasn't working that I decided to keep lowering the cutoff to see if it helped. Finally at 48 kHz I saw it working! Anyway, now that the I and Q were filtered, I downsampled each by a factor of 10 and got my 192 kHz wide spectrum.

Now, was it working?  Well, all I needed to do was hook up a signal generator.  I tuned the radio for 14.0 MHz and the signal generator, for the same.  As I tuned the radio +- 96kHz, the moment it went outside of this, you could see the signal appearing at the opposite side of the spectrum.  Voila, an alias of the signal!  As I continued to tune, this alias reduced in intensity until it was completely gone by tuning 6Khz more.  So effectively my decimation by 10 of a 1.920 MSPS signal to 192kSPS provides an un-aliased spectral bandwidth of 180kHz. It's one thing to read the theory, but everything comes into place when you see an actual aliased signal appear!