Hi all!
I usually see 24-bit digitizers offering user-selectable output data rates such as 50, 100, or 200 samples per second.
Taking a look at the code, I can see the input signal (0 to 200 Hz bandwidth) gets sampled at 30 kHz, then filtered and decimated by several cascaded blocks (low pass + decimation).
I supposed the 30 kSPS signal would be downsampled step by step through the available user-selectable output data rates, i.e. 5000, 2000, 1000, 500, ... SPS, so I expected the following blocks:
- input at 30 kSPS: low-pass filter + decimation to 5000 SPS
- input at 5 kSPS: low-pass filter + decimation to 2000 SPS
- input at 2 kSPS: low-pass filter + decimation to 1000 SPS
and so on.
In fact, for every ODR selection the input sample rate is the same (30000 SPS), but there is a different filter chain for each output data rate:
- for 2000 SPS: FIR low pass (51 taps) + decimation by 5, then FIR low pass (239 taps) + decimation by 3, giving 2000 SPS output.
- for 1000 SPS: FIR (63 taps) + decimation by 5, then FIR (55 taps) + decimation by 3, then FIR (223 taps) + decimation by 2.
In the above (real) example both paths run at 6 kSPS after the first stage, yet they use two different first-stage filters.
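To make it concrete, here are the two chains as I read them from the code. This is only a sketch: the tap counts and decimation factors are the real ones, but the actual filter coefficients are unknown to me, so I'm using generic `scipy.signal.firwin` lowpass designs as placeholders.

```python
import numpy as np
from scipy import signal

def lp_decimate(x, numtaps, factor):
    """Generic placeholder stage: lowpass FIR, then keep every `factor`-th sample.
    Cutoff is set a bit below the new Nyquist to leave a transition band."""
    h = signal.firwin(numtaps, 0.8 / factor)
    return signal.lfilter(h, 1.0, x)[::factor]

fs_in = 30000
x = np.random.randn(fs_in)      # 1 second of wideband test signal at 30 kSPS

# 2000 SPS path: 30 kSPS -> 6 kSPS -> 2 kSPS
y2k = lp_decimate(x, 51, 5)
y2k = lp_decimate(y2k, 239, 3)

# 1000 SPS path: 30 kSPS -> 6 kSPS -> 2 kSPS -> 1 kSPS
y1k = lp_decimate(x, 63, 5)
y1k = lp_decimate(y1k, 55, 3)
y1k = lp_decimate(y1k, 223, 2)

print(len(y2k))   # 2000 samples from 1 s of input -> 2000 SPS
print(len(y1k))   # 1000 samples from 1 s of input -> 1000 SPS
```

Both paths pass through 6 kSPS after the first stage, but with a 51-tap filter on one path and a 63-tap filter on the other.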
Why? I can't understand...
Because the final output filter response has to be different for each rate, it is possible that the designer of those two first stages wanted a narrower passband (or more stopband attenuation, or some other property) in one case than in the other. This could be a tradeoff to make the later-stage filters easier in the higher-decimation case, for example, or simply to control aliasing better.
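As a rough illustration of what the extra taps can buy, here are two hypothetical stand-ins for the first stage (the real coefficients are unknown, so these are plain Hamming-windowed `firwin` designs with the same nominal cutoff): the 63-tap filter has a sharper transition band, i.e. it reaches deep attenuation at a lower frequency than the 51-tap one.

```python
import numpy as np
from scipy import signal

# Hypothetical stand-ins for the two first-stage designs (decimate-by-5,
# same nominal cutoff): 51 taps vs 63 taps.
h51 = signal.firwin(51, 0.16)   # cutoff in units of Nyquist
h63 = signal.firwin(63, 0.16)

def edge_at(h, db=-50.0):
    """Lowest frequency (in units of Nyquist) where the magnitude
    response first falls below `db` dB."""
    w, H = signal.freqz(h, worN=8192)
    mag = 20 * np.log10(np.abs(H) + 1e-12)
    return w[np.argmax(mag < db)] / np.pi

# The 63-tap design crosses -50 dB at a lower frequency (narrower transition).
print(edge_at(h51), edge_at(h63))
```

With the window method, the stopband depth is mostly set by the window, while more taps shrink the transition width; a real design might instead trade those off differently per path, which is exactly the kind of choice that leads to different first-stage filters at the same rate.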