DSPRelated.com
Forums

OFDM receiver IQ processing

Started by jekain314 2 weeks ago · 8 replies · latest reply 2 weeks ago · 172 views

The OFDM receiver front end operates on a real signal from the antenna. The receiver's I/Q mixer both downconverts the received signal to baseband and transforms the real signal into separate in-phase and quadrature paths, which are then digitized. These two digitized paths form the real and imaginary components of the complex input vector for the FFT. The sampling on both paths must occur at the signal bandwidth in order to reap the computational benefits of the FFT.

My question is: why not sample the real received signal at twice the bandwidth (Nyquist) and use an FFT with real inputs at the receiver? Would this not successfully recover the QAM coefficients, at the penalty of 2X the ADC sample rate? The benefit would be simpler receiver hardware with a single ADC channel, or the ability to process two separate data streams with the same multichannel ADC hardware.
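To make the question concrete, here is a toy numpy sketch of the comparison I have in mind (ideal channel, no noise; the FFT sizes and the choice of a low IF at half the bandwidth are just assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # subcarriers / complex-baseband FFT size
levels = np.array([-3., -1., 1., 3.])     # toy 16-QAM grid
qam = rng.choice(levels, N) + 1j * rng.choice(levels, N)
qam[0] = 0                                # DC subcarrier left null (typical)
qam[N // 2] = 0                           # edge subcarrier null; it would land on the real path's DC bin

# Path A: I/Q mixer to complex baseband, N complex samples per symbol
tx_bb = np.fft.ifft(qam, N)               # what the dual-ADC I/Q front end delivers
rx_a  = np.fft.fft(tx_bb, N)              # N-point complex FFT recovers the QAM values

# Path B: mix to a low IF of B/2 so the band occupies 0..B, one ADC at 2x the bandwidth
mapped = np.fft.fftshift(qam)             # subcarrier k -> bin k + N/2 of a 2N-point FFT
spec = np.zeros(N + 1, dtype=complex)     # rfft-style bins 0..N
spec[:N] = mapped
tx_real = np.fft.irfft(spec, 2 * N)       # 2N real samples per symbol (single ADC at 2x BW)
rx_b = np.fft.ifftshift(np.fft.rfft(tx_real)[:N])   # real-input FFT, reordered to baseband bin order

print(np.allclose(rx_a, rx_b))            # True: identical QAM estimates either way
```

Both paths hand the same constellation estimates to the rest of the receiver; the real path just needs 2N real samples per symbol and cannot use the bins that land at DC and Nyquist.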

Reply by Slartibartfast, April 26, 2025

Mixing the signal to complex baseband has a number of advantages, including reduction of the sample rate of the baseband processing, reduction of complexity of the FFT, etc. Usually the subcarrier modulations in OFDM are complex-valued, e.g., PSK or QAM, which is much simpler to manage at complex baseband. If you look at the overall implementation complexity and power consumption of the two approaches, the use of complex baseband is usually far superior, which is why it has been done that way for decades.

Reply by jekain314, April 26, 2025

Mixing the signal to complex baseband has a number of advantages, including reduction of the sample rate of the baseband processing ...

ADCs have increased in sample rate by 1000X in just the last 20 years (see the chart below, from Mercury). Yet the OFDM receiver's ADC samples at exactly the signal bandwidth because of the FFT constraints. The dual-input (I, Q) FFT allows sampling at the bandwidth instead of at 2X the bandwidth (Nyquist) because its input is complex rather than real. The estimated QAM values (constellation grid points) that carry the data will be identical for both designs. Note also that the signal received at the antenna is real. It could be downconverted to baseband and sampled at 2X the bandwidth to get the same results, using a single ADC channel.

.... reduction of complexity of the FFT, etc.

It's not clear that the FFT computational burden is different for either case: both cases must estimate 2N values for an N-subcarrier design. In one case we estimate 2N values from two N-sample datasets, and in the other case we estimate 2N values from a single 2N-sample dataset. I'm not sure what the "etc." items are that you mention!

 Usually the subcarrier modulations in OFDM are complex-valued, e.g., PSK or QAM, which is much simpler to manage at complex baseband.

The A, B values that make up the complex vector at the transmitter's IFFT input are just grid locations on an mQAM constellation diagram. The A, B values estimated at the receiver will be identical for either case and can be treated identically after the FFT, e.g., channel equalization, FEC, and bit mapping. The single-channel receiver would work with all legacy OFDM signals.

 If  you look at the overall implementation complexity and power consumption of the two approaches the use of complex baseband is usually far superior, which is why it has been done that way for decades.

The motivation is saving an ADC channel with minimal penalty; massive MIMO seems to be the trend in new architectures. Is a single ADC/filtering path at 40 MHz more power/complexity than dual ADC/filtering paths at 20 MHz? Take a look at the TI architecture below (similar multi-channel architectures are available from ADI, Mercury/Intel, and NS). The as-presented 2-ADC-path architecture is surely designed for I, Q from a single transmitter... would freeing up an ADC path at the receiver offer a 2X throughput improvement?

... which is why it has been done that way for decades.

A lot has changed in these decades (actually >50 years): ADC capability, massive/MU MIMO, direct-RF trends... yet the IFFT/FFT Tx/Rx architecture really hasn't changed a bit in this period!

[Image: adc_history_76858.jpg (ADC sample-rate history chart)]


[Image: ti 2rx2txchip_21727.jpg (TI 2-RX/2-TX transceiver architecture)]


Reply by Slartibartfast, April 26, 2025

How many ADCs are used is independent of the question you asked. Whether you need one or two ADCs just affects where the downconversion is done. Whether the receiver uses direct downconversion or a sampled IF is still a reasonable architecture tradeoff given the overall requirements and is independent of the question you asked. A real-valued sampled IF will still need downconversion (digitally), and it's nearly always simpler to manage a complex digital downconversion than to try to tune to sampled baseband (rather than IF) with a single ADC. The analog filtering can get really difficult in that case as well.

Once the signal is at complex baseband it can be decimated to a lower sample rate and synchronized for the FFT.   If the architecture is carefully chosen this results in less complexity (smaller FFT) and lower power consumption due to the lower sample rate after decimation.
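For illustration, a sampled-IF front end might do something like the following (a rough numpy sketch; the sample rates, IF, decimation factor, and filter are arbitrary assumptions, not any particular receiver's values):

```python
import numpy as np

# Rough sketch: real sampled-IF capture -> complex digital downconversion ->
# low-pass filter -> decimate, so the OFDM FFT runs at the lower rate.
fs, f_if, D = 80e6, 20e6, 4                 # ADC rate, IF centre, decimation factor (placeholders)
n = np.arange(8192)
x_if = np.cos(2 * np.pi * f_if * n / fs)    # stand-in for the real IF samples

lo    = np.exp(-2j * np.pi * f_if * n / fs)         # complex local oscillator
x_bb  = x_if * lo                                   # signal near DC plus an image at -2*f_if

taps  = 101                                         # windowed-sinc low-pass, cutoff ~ fs/(2*D)
m     = np.arange(taps) - taps // 2
h     = np.sinc(m / D) / D * np.hamming(taps)
x_lpf = np.convolve(x_bb, h, mode="same")           # rejects the image and out-of-band energy
x_dec = x_lpf[::D]                                  # complex baseband at fs/D, ready for the FFT

print(x_dec.shape, x_dec.dtype)                     # (2048,) complex128
```

The FFT then only has to run at fs/D, which is where the complexity and power savings come from.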

These are architecture tradeoffs that have been managed for thirty years or so, so there's a reason the majority (maybe all) of OFDM demodulators use a complex-input FFT. That doesn't keep anybody from doing it a different way; it'll just change the usual tradeoffs.



Reply by kaz, April 25, 2025

Surely you know that the modulation schemes used in OFDM are I/Q based. So why do you want to kill the I/Q, and what modulation scheme are you thinking of to recover the symbols if you use one channel at double speed?

Reply by jekain314, April 26, 2025

Surely you know that the modulation schemes used in OFDM are I/Q based. So why do you want to kill the I/Q ...

The modulation scheme is mQAM, where we map a bit sequence to an A, B grid location selected from a constellation: 4QAM = QPSK, 16QAM, 256QAM, ... It is preferred that any new receiver work with the legacy transmission infrastructure. Whether we call them I/Q, A/B grid points, or A+jB is just semantics: we transmit 2N values for an N-subcarrier design, and we must estimate these transmitted values at the receiver.
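As a toy example of that bit-to-grid mapping (Gray coding per axis is assumed here; it is a common convention but not the only one):

```python
import numpy as np

# Toy 16-QAM mapper: 4 bits -> one A, B grid point, 2 bits per axis.
# The per-axis Gray code is an assumed convention for illustration.
gray2 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def map16qam(bits):
    """bits: flat sequence of 0/1 with length a multiple of 4."""
    b = np.asarray(bits).reshape(-1, 4)
    A = np.array([gray2[(int(i), int(q))] for i, q in zip(b[:, 0], b[:, 1])])
    B = np.array([gray2[(int(i), int(q))] for i, q in zip(b[:, 2], b[:, 3])])
    return A + 1j * B                       # the complex values handed to the transmitter's IFFT

print(map16qam([1, 0, 0, 1,  0, 0, 1, 1]))  # -> [ 3.-1.j -3.+1.j]
```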

what modulation scheme are you thinking of to recover the symbols if you use one channel at double speed.

The receiver's estimated constellation grid points (A/B or I/Q) will be identical in both cases. If you go back to the definition of the Fourier series, computing the A/B terms of each A·cos(wt) + B·sin(wt) tone is just the solution of a set of 2N simultaneous equations: for N tones there are 2N A/B values to estimate. Recovering the A/B terms at the receiver via the FFT is the demodulation. The receiver would downconvert the received signal (single channel) to baseband, filter out the two-times-carrier parasitic product, and then sample with the ADC at 2X the bandwidth. The 2N real samples per symbol are input to a 2N-point real-input FFT, which yields the 2N values ...
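A toy numpy version of that "solve for the A and B coefficients" view (sizes arbitrary, ideal noiseless samples assumed):

```python
import numpy as np

# One symbol built from tones A_k*cos(2*pi*k*n/M) + B_k*sin(2*pi*k*n/M),
# sampled M = 2N times over the symbol period. The 2(N-1) coefficients of
# the usable tones are pinned down by the samples, and the real-input FFT
# is just the fast solver for those simultaneous equations.
N = 32                                      # FFT half-size; tones k = 1..N-1 carry data
M = 2 * N                                   # real samples per symbol
rng = np.random.default_rng(1)
A = rng.choice([-3., -1., 1., 3.], N - 1)   # toy "grid" cos coefficients
B = rng.choice([-3., -1., 1., 3.], N - 1)   # toy "grid" sin coefficients

n = np.arange(M)
k = np.arange(1, N)[:, None]
x = (A[:, None] * np.cos(2 * np.pi * k * n / M)
     + B[:, None] * np.sin(2 * np.pi * k * n / M)).sum(axis=0)

X = np.fft.rfft(x)                          # bins 0..N
A_hat = 2 * X.real[1:N] / M                 # recovered cos coefficients
B_hat = -2 * X.imag[1:N] / M                # recovered sin coefficients
print(np.allclose(A_hat, A), np.allclose(B_hat, B))   # True True
```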

Reply by kaz, April 26, 2025

What are A and B? I thought you were receiving one ADC channel.

If you mean one ADC channel (real only), then you need to shift not its centre frequency to DC but its lowest edge to DC, and you get double the bandwidth in the digital domain. Then you can apply a real FFT and get the positive subcarriers only. The transmitter will have more problems because it must restrict its bandwidth to avoid occupying double the bandwidth; a sharp filter at DC will be needed. Moreover, the effect on some other Tx processes, such as predistortion, would be significant.

If you are thinking of two ADC channels (A and B) not treated as complex, then that compares to I/Q except that the computations are real/real rather than real/imag, both at the mixer and at the FFT. I don't see any immediate benefit.

Reply by Lito844, April 26, 2025

Have you thought about how the downmixing and the pre-ADC front-end filtering may differ in function and requirements between the real-input and complex-input scenarios?

Reply by tangus, April 26, 2025

The nature of OFDM is that the inter-symbol interference (ISI) cancels out exactly at integer multiples (N) of the symbol rate. In the frequency domain, the zero-ISI crossings occur only at integer multiples of the symbol rate when converted to baseband. Hence the symbols are orthogonal.

Note that the inverse Fourier transform of an OFDM signal sampled at N times the symbol rate results in a brick-wall-filtered response (rect function), so each symbol is uniquely identifiable, since the inverse Fourier transform of a sinc is a rect function. The same is true vice versa: the inverse transform of a rect function/brick-wall filter is a sinc function. This enables you to completely recover the signal after performing an IFFT. There are some guard bands that prevent the OFDM signals from overlapping.

[Image: sinc_function_fourier_transform_75118.png (sinc/rect Fourier transform pair)]
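A quick numerical check that the subcarriers really do separate exactly at integer bin spacing (toy sizes, ideal sampling over one full symbol period assumed):

```python
import numpy as np

# Inner products of complex exponentials spaced by an integer number of bins
# over one symbol: nonzero only on the diagonal, which is why the FFT pulls
# the subcarriers apart exactly when the symbol is sampled over a full period.
N = 64
n = np.arange(N)

def subcarrier(k):
    return np.exp(2j * np.pi * k * n / N)

gram = np.array([[abs(np.vdot(subcarrier(k), subcarrier(l)))
                  for l in range(4)] for k in range(4)])
print(np.round(gram, 6))    # N on the diagonal, ~0 everywhere else
```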


If you sample at 2x the symbol rate before the I/Q-to-baseband conversion, you don't really benefit and may actually sample other OFDM signals that sit at roughly 2x the symbol rate away at RF, and hence experience ISI. For example, an IEEE 802.11a/n channel may be around 2.48 GHz for the higher channel numbers, and if the sample rate is only a small percentage higher than 2x the symbol rate, you may pick up interference from 802.11ax (Wi-Fi 6) channels at 5 GHz. This can easily happen if the oscillator driving the sampler or I/Q downconverter is imperfectly tuned, or because of temperature-induced frequency drift in a TCXO or other oscillator.

[Image: ofdm-guard-bands_45542.png (OFDM guard bands)]