OFDM with real-only transmission

Started by jekain314 · 17 replies · 134 views

Consider a 64-subcarrier OFDM waveform with 20 MHz bandwidth. At the transmitter, we generate 64 complex numbers A(n) + jB(n), n = 1, 2, 3, ..., 64, using the bits-to-QAM mapping. This 64-element data vector is input to the IFFT at the transmitter to generate 64 complex time-history sample points. The time-history samples come out at 20 MSPS because the FFT fixes the sample rate at the number of subcarriers times the subcarrier spacing. We use an IQ mixer to upconvert the baseband time histories (both real and imaginary) to the carrier frequency, sum the upconverted real and imaginary signals, and send the result across a channel to the receiver. At the receiver, a second IQ mixer downconverts the received signal and separates the real and imaginary parts. These are input to an FFT, which regenerates the A(n) and B(n) values.

Would it be possible to fill the IFFT input vector with 128 complex data points where the second half is the reversed complex conjugate of the first half? The IFFT would then generate 128 real-only time-history points -- but the sample rate is now 40 MSPS. We upconvert as before (single-channel mixer only) and send to the receiver. The receiver downconverts with a single-channel mixer and samples at 40 MSPS to recreate the baseband time history of 128 real digital samples. We use a 128-point FFT to recreate the 64 unique A(n), B(n) pairs.
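The real-output construction above is easy to sanity-check numerically. The following Python sketch uses a hand-rolled inverse DFT (so no FFT library is assumed), with arbitrary stand-in values for the QAM symbols:

```python
import cmath
import random

def idft(X):
    """Inverse DFT: x[n] = (1/N) * sum_k X[k] * exp(+j*2*pi*k*n/N)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

random.seed(1)
N = 128
X = [0j] * N
# Fill bins 1..63 with arbitrary QAM-like symbols (stand-ins for A(n) + jB(n)).
for k in range(1, N // 2):
    X[k] = complex(random.choice([-1, 1]), random.choice([-1, 1]))
    # Mirror as conjugates: X[N-k] = conj(X[k]); bins 0 and N/2 stay real (zero here).
    X[N - k] = X[k].conjugate()

x = idft(X)
max_imag = max(abs(v.imag) for v in x)
print(max_imag)  # tiny (~1e-13): all 128 time samples are real to machine precision
```

Note that bins 0 (DC) and N/2 must themselves be real for the output to be exactly real; here they are simply left at zero.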

Potential benefits:  

1) We would use reduced-cost single-channel mixers and a single DAC/ADC -- attractive for low-cost applications such as IoT.

2) Keeping the legacy IQ-mixer/dual-ADC configuration would allow a second independent data channel to be transmitted so as to double the data rate at the expense of doubled sample rates -- while maintaining the bandwidth. The receiver sync steps (timing, phase, frequency) would be simplified for the second channel. The preamble and pilot-subcarrier allocation could be shared between the two channels.


Potential costs:

1) Doubling the sample rate imposes a cost. However, today's ADCs/DACs can easily handle the doubled rate.

2) Computation time. A 128-point FFT takes roughly twice the work of a 64-point FFT: the cost ratio for a 2N-point versus an N-point FFT is 2N log2(2N) / (N log2 N), which is 2.33 for N = 64 (2.20 for N = 1024).
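The arithmetic behind those ratios, as a quick check:

```python
import math

def fft_cost_ratio(N):
    # (2N * log2(2N)) / (N * log2(N)): relative cost of a 2N- vs an N-point FFT
    return (2 * N * math.log2(2 * N)) / (N * math.log2(N))

print(round(fft_cost_ratio(64), 2), round(fft_cost_ratio(1024), 2))  # 2.33 2.2
```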

I believe the cyclic prefix, forward error correction, equalization, and receiver phase/frequency sync methods would work the same as before.

This seems too obvious a solution for doubling the throughput --- what am I missing?

Reply by Slartibartfast, June 18, 2024

If you're already upsampled to 40 MHz, you can accomplish the same thing by mixing the usual IFFT from baseband to fs/4 with a simple +/-1 multiplication mixer.  Then take the real part and send it out a single DAC at 40MHz.

In practice, however, this probably won't work well because the images at the output of the DAC won't have enough guardband between them to allow for suitable filtering before transmission.

These days high-speed DACs are not unusual or expensive, so mixing to some digital IF with lots of guardband and filtering opportunities is not difficult to achieve.   e.g., even with a 100 MHz sample rate (which is still pretty low these days), mixing digitally to fs/4 would put the center of the signal BW at 25 MHz, with a 40MHz gap (guardband) to the next image.   That should be sufficient for image filtering before the PA.
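The fs/4 trick can be checked numerically: the mix by exp(j*pi*n/2) = j^n needs only sign flips and I/Q selection. A small Python sketch (the 3 MHz tone and 40 MHz rate are illustrative, not from the thread):

```python
import cmath

fs, f0, N = 40e6, 3e6, 256   # illustrative: 3 MHz complex tone, 40 MHz sample rate
x = [cmath.exp(2j * cmath.pi * f0 * k / fs) for k in range(N)]   # complex baseband
# Reference: full complex mix to fs/4, then keep the real part.
ref = [(x[k] * cmath.exp(2j * cmath.pi * 0.25 * k)).real for k in range(N)]
# Multiplierless version: j**n cycles through 1, j, -1, -j, so the real part of
# x[n]*j**n is just +I, -Q, -I, +Q repeating -- sign flips only, no multiplier.
I = [v.real for v in x]
Q = [v.imag for v in x]
mixed = [(I[k], -Q[k], -I[k], Q[k])[k % 4] for k in range(N)]
err = max(abs(a - b) for a, b in zip(ref, mixed))
print(err)  # near machine precision: the two mixers produce identical samples
```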

BTW, none of these techniques, including what you proposed, would double the throughput.

Reply by jekain314, June 18, 2024
I was under the impression that the IQ mixer has the ability to send two independent time-history signals, s1(t) and s2(t), within the same bandwidth.

From the transmitter's IQMixer we get one real output:

Tx(t) = s1(t) Cos(wt) + s2(t) Sin(wt)

From the receiver's IQMixer (Zero IF formulation) we get two real outputs:

Rx1(t) = Tx(t) Cos(wt) = s1(t) Cos^2(wt) + s2(t) Sin(wt) Cos(wt)

Rx2(t) = Tx(t) Sin(wt) = s1(t) Sin(wt) Cos(wt) + s2(t) Sin^2(wt) 

trig identities .. 

Rx1(t) = 0.5 ( s1(t) + s1(t) Cos(2wt) + s2(t) Sin(2wt) )

Rx2(t) = 0.5 ( s2(t) - s2(t) Cos(2wt) + s1(t) Sin(2wt) )

Low-pass filtering Rx1(t) and Rx2(t) recovers s1(t) and s2(t). Sample these at 40 MHz and pass them through two separate 128-point FFTs to recover two independent sets of A(n), B(n) values.
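For what it's worth, the trig identities above do check out numerically. A minimal Python sketch for the simplest case, constant s1 and s2, with the low-pass filter replaced by an average over an integer number of carrier periods (which nulls the 2wt terms exactly):

```python
import math

fs, fc, N = 1000.0, 100.0, 1000   # illustrative: 1000 samples = 100 carrier periods
s1, s2 = 0.7, -0.3                # two independent (here constant) signals
t = [k / fs for k in range(N)]
tx  = [s1 * math.cos(2*math.pi*fc*ti) + s2 * math.sin(2*math.pi*fc*ti) for ti in t]
rx1 = [v * math.cos(2*math.pi*fc*ti) for v, ti in zip(tx, t)]
rx2 = [v * math.sin(2*math.pi*fc*ti) for v, ti in zip(tx, t)]
# Averaging over whole carrier periods kills the Cos(2wt)/Sin(2wt) terms;
# the factor of 2 undoes the 0.5 from the identities.
rec1 = 2 * sum(rx1) / N
rec2 = 2 * sum(rx2) / N
print(round(rec1, 9), round(rec2, 9))  # 0.7 -0.3
```

This verifies the algebra only; whether it adds capacity is the point debated in the replies that follow.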

Reply by Slartibartfast, June 18, 2024

At baseband, the real part has even symmetry about the y-axis.   Likewise the quadrature (imaginary) part has odd symmetry about the y-axis.   So each has really 1/2 the bandwidth capacity of the total signal when centered at baseband.   If your signal is 20 MHz wide at baseband, the real signal has 10 MHz unique BW that is reflected across the y-axis, and likewise the imaginary signal has 10 MHz of unique BW with the opposite symmetry across the y-axis.  Together they provide 20 MHz total unique BW when at baseband.

When you mix that up to an IF frequency away from the axis, the real (or imaginary) component will each have the full 20 MHz BW of unique signal energy, so either will support all of the information contained in the signal.  In other words, once mixed away from baseband, the two channels are now essentially redundant copies of each other.   This is why we can throw away either of the real or imaginary channels, transmit the remaining channel, and recover the entire 20 MHz BW signal in the receiver. 

The general idea is that the BW of the signal is proportional to the information capacity, regardless of whether it is at baseband or IF or RF or wherever.   Mixing it away from baseband does not double the throughput or increase the information capacity.

Reply by jekain314, June 19, 2024

Slartibartfast -- thanks for your patience. I admit I didn't follow your semantic explanation. With OFDM, at the transmitter, we start in the frequency domain: we manufacture the QAM-modulated tones, the IFFT takes us to the time domain at baseband, and then the mixer upconverts to the carrier frequency.

In my prior math expression, we can define two independent signals: 

s1(t) = A1 Cos(w0 t) + B1 Sin(w0 t)

s2(t) = A2 Cos(w0 t) + B2 Sin(w0 t)

These are two independent QAM-modulated single tones at w0 where (A1,B1) and (A2,B2) are bit-mapped from a constellation. The two (A,B) pairs are independent so represent unique information. 

At the transmitter's IQMixer, we form the single real signal: 

Tx = s1(t) Cos(wc t) + s2(t) Sin(wc t)    where wc >> w0. 

At the receiver's IQMixer, we generate Tx Cos(wc t) and Tx Sin(wc t). These are two separate entities. If we pass each through a separate low-pass filter, we are left with s1(t)/2 and s2(t)/2, from which we can separately recover both (A,B) pairs. With the QAM-modulated single tones, we don't need to add in the complexity of the IFFT/FFT, but I believe the 2X data rate can be argued.

Reply by Slartibartfast, June 19, 2024

I'm very familiar with OFDM as well as single-carrier modulation.  I've been developing modems for nearly forty years, so it's all pretty familiar.

Unless s1(t) and s2(t) have some sort of orthogonality they won't be separable if they occupy the same frequency (w0) at the same time.   Take a QAM constellation, form A1 and B1 (which are presumably the real and imaginary components of the signal, although you don't say that), then add A2 and B2. This seems to be the case you're describing, at least initially.  Once they're added together they are no longer separable without some sort of orthogonality (time, frequency, whatever).  In your expressions for s1 and s2 there is no imaginary part, the real and imaginary parts seem to be simply added together, which is also problematic as the orthogonality that normally separates the A and B components is then lost as well. 

In OFDM the orthogonality of the complex constellation components is preserved, and the orthogonality of the subcarriers is also preserved, throughout the modulation and demodulation process. This is what the IFFT/FFT pairs do. Complex quantities with real and imaginary components are preserved throughout. This is how the data is able to be recovered in the demodulator. Your expressions are all real-valued, so it is not clear how you maintain the original orthogonality of the QAM signals, let alone add any new information.
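The orthogonality being described is easy to see numerically: correlating a real symbol against cos and sin over an integer number of cycles isolates A and B, which is exactly what each FFT bin does. A Python sketch with arbitrary symbol values:

```python
import math

N, cycles = 96, 3        # samples per symbol; the tone completes 3 full cycles
A, B = 0.8, -0.5         # one subcarrier's I and Q symbol values (arbitrary)
s = [A * math.cos(2*math.pi*cycles*n/N) + B * math.sin(2*math.pi*cycles*n/N)
     for n in range(N)]
# Correlate against cos and sin: over whole cycles, sum(cos*sin) = 0 while
# sum(cos^2) = sum(sin^2) = N/2, so each correlation isolates one component.
A_hat = 2 / N * sum(v * math.cos(2*math.pi*cycles*n/N) for n, v in enumerate(s))
B_hat = 2 / N * sum(v * math.sin(2*math.pi*cycles*n/N) for n, v in enumerate(s))
print(round(A_hat, 9), round(B_hat, 9))  # 0.8 -0.5
```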

Reply by Slartibartfast, June 21, 2024
I thought I should follow up a little bit better than my last response.   There's a link here on comp.dsp that at least shows some of the basic concepts of using complex-valued signals at IF and baseband and the complex-valued math of the mixing operations. 


I hope that helps a little bit.  It's not a trivial topic when you first encounter it.

I also wanted to point out that Information Theory lets us determine the theoretical throughput capacity of various modulations and codings. So we know that we can already operate fairly close to the theoretical limit of throughput for things like QAM using appropriate FEC coding. This suggests there really isn't an opportunity to double the throughput without re-inventing Information Theory and throwing out its current principles.

Reply by jekain314, June 21, 2024

Eric -- your credentials are indeed impressive. And I agree that suggesting a 2x datarate improvement for a 50-yr-old tech is a bit like anti-gravity boots. My background is image processing and GNC (>55yrs) so I am a bit handicapped. 

Returning to the high-school math, it's hard to question that the two signals s1(t) and s2(t) can be independently transmitted across the same bandwidth. Tx(t) = s1(t) Cos(wt) + s2(t) Sin(wt) indeed contains two terms that are orthogonal -- but s1 and s2 don't require this.

The problem lies in the recovery of the phase and amplitude of s1, s2 at the receiver even if they are sampled beyond Nyquist. We know these signals can be "recovered" (per Nyquist) but actually doing it is a bit tricky. 

Consider s1(t) = Mag Cos(w0 t + phi), with w0 << w. Since we are creating this at the transmitter, we can also create s1I(t) = Mag Sin(w0 t + phi). Now, if we send s1(t) (in-phase) and s1I(t) (quadrature) using the transmitter's IQ-mixer magic, the computation of the phase/amplitude (and A, B) at the receiver is trivial and can be done on a per-sample basis. This is what motivates sending the complex signals across the wireless channel. Can we say that we have traded a 2x throughput improvement for this benefit?
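The "trivial per-sample" recovery from an I/Q pair can be sketched in a few lines of Python (the magnitude, phase, and frequency values are hypothetical):

```python
import math

Mag, phi = 1.3, 0.4            # hypothetical symbol magnitude and phase
w0 = 2 * math.pi * 1e6         # hypothetical subcarrier frequency
estimates = []
for t in (0.0, 1e-7, 2e-7):    # three arbitrary sample instants
    sI = Mag * math.cos(w0 * t + phi)   # in-phase sample s1(t)
    sQ = Mag * math.sin(w0 * t + phi)   # quadrature sample s1I(t)
    mag_est = math.hypot(sI, sQ)                              # amplitude per sample
    phi_est = (math.atan2(sQ, sI) - w0 * t) % (2 * math.pi)   # phase per sample
    estimates.append((round(mag_est, 9), round(phi_est, 9)))
print(estimates)  # every pair is (1.3, 0.4)
```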

If we have real-valued s1(t) and s2(t) at the receiver (sampled at >2X their bandwidth), can we use alternative signal processing methods to solve for the A,B pairs? I have a suggestion if we can get through the above anti-gravity issues. 

Reply by Slartibartfast, June 21, 2024

It looks to me like you're just describing using complex values.   Sin and Cos are orthogonal to each other, which is how orthogonality is maintained modulating or mixing complex-valued quantities, like the constellation for a QAM signal or the subcarriers for an OFDM signal.

If what you have in mind is not mixing s1 and s2 fully to baseband, but keeping the signal where it is still completely real-valued, you can still recover s1 and s2; it just takes a bit more work. Usually the signal is just mixed fully to baseband (i.e., the BW centered at zero Hz), and s1 and s2 are recovered as the real and imaginary parts of the signal.

At least I think that's what you're describing.

Reply by jekain314, June 21, 2024

The s1(t) and s2(t) are two totally independent communication signals, just like two separate MIMO streams. If I can send 54 Mbps on s1(t), then I can also simultaneously send 54 Mbps on s2(t) within the same shared spectrum. There are no complex (I/Q) parallel paths anywhere during transmission or reception. The throughput gain comes from re-use of the freed-up IQ-mixer channel for the second stream -- but at the expense of a faster sample rate. The phase/amplitude (and A,B symbols) are separately estimated for s1(t) and s2(t) using only sampling of the real signals at >2x the bandwidth.

In my simulations, I use a lot of oversampling (e.g., 100 MHz for a 20 MHz signal bandwidth). Because the A,B estimation is recursive (processing is per-sample), I can skip samples (e.g., skip measurements with high values) so as to reduce PAPR -- without reducing the optimality of the solution.

Because the processing is recursive (not batch like the FFT), we observe the prediction/correction steps during the symbol sampling. This allows adaptive estimation of the multipath delay spread -- it's clearly visible in the incremental symbol processing -- so there is no need for a CP; just extend the sampling.

But the most interesting aspect is subsampling of the direct RF. Normally, we ZIF the RF via the mixer or a DDC (SDR) so as to reduce the processing requirements. I find that, if we sufficiently bandpass-filter the RF so only the signal spectrum remains, then we can subsample the RF so that mixing is not required. That is, we could sample a 2.4 GHz signal at 120 MHz, preserving the phase/amplitude without concern over the nonlinear impacts of the mixing.
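The bandpass-subsampling claim in the last paragraph can be illustrated in Python: a band-limited RF tone sampled below its carrier frequency lands, undistorted, at a predictable alias frequency (the 2.41 GHz tone is illustrative, and ideal band-limiting is assumed):

```python
import math

fs  = 120e6    # ADC sample rate
frf = 2.41e9   # RF tone inside the (assumed ideally filtered) signal band
# The samples of the RF tone are identical to those of its alias in the
# first Nyquist zone: |frf - m*fs| with m the nearest integer multiple.
falias = abs(frf - round(frf / fs) * fs)
direct = [math.cos(2*math.pi*frf    * k / fs) for k in range(256)]
alias  = [math.cos(2*math.pi*falias * k / fs) for k in range(256)]
err = max(abs(a - b) for a, b in zip(direct, alias))
print(falias / 1e6, err)  # 10.0 (MHz) and err near machine precision
```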

Reply by kaz, June 20, 2024

When changing the iFFT input from 64 samples to 128 by appending the reversed, conjugated 64 samples, you will get a copy of the same 64 samples extended to twice the bandwidth. If you then discard the last 64 you get back your information and bandwidth, but as I/Q, not real-only. So no benefit.

I guess you are wrongly mapping positive/negative frequencies to I/Q. 

The I/Q concept is better understood without the iFFT/FFT, e.g., in a QPSK modem that generates I/Q data.

The I/Q spectrum must be asymmetric and has to be upconverted well away from DC; then the real-only part can be taken to a channel, thus carrying information from both the I and Q channels. The upconverter shifts the negative and positive frequencies to positive, higher frequencies.

Reply by jekain314, June 21, 2024

thanks for noting this confusion.

I was (erroneously) referring to the FFTW software implementation of the IFFT. There, the input array is a complex array (each array element contains two values) -- but of course, you cannot use negative array indices. For an OFDM IFFT simulation, we may want, for example, to represent 64 subcarriers. If you fill the FFTW input array with 64 complex values, you will get an output array of 64 complex values. The FFTW output array contains the real and imaginary sampled time-histories. With OFDM, these two independent time-histories are placed into the IQMixer for upconversion and transmission to the receiver.

But what if we want an FFTW output of 128 real-only values? For FFTW, the input array has the first 64 values as before, but the second half of the array holds a conjugated, reversed copy of the first half -- more precisely, input[N-k] = conj(input[k]) for k = 1..N/2-1, with input[0] and input[N/2] real. The output array will be length 128, but the imaginary components will all be zero.

Reply by kaz, June 21, 2024

This is my understanding of your idea as a Matlab (or Octave) model; please correct me if wrong -- I don't see a zero Q output:


x = randn(1,64)+j*rand(1,64);

xx=[x fliplr(conj(x))];

y = ifft(xx);

plot(real(y)); hold


Reply by jekain314, June 21, 2024

So sorry kaz --- I'm not a Matlab programmer.

I hand-bang this stuff using Visual Studio and C#/.NET, calling FFTW through a wrapper. The issue may be the mechanization of FFTW -- it's very well documented and widely used. In C#:

Complex[] input = new Complex[FFTLength];  //uses Complex type in .net

Complex[] output = new Complex[FFTLength];

input[12] = new Complex(0.5, 0.5);   //set the single frequency

input[FFTLength-12] = new Complex(0.5, -0.5);  //second-half for real

DFT.IFFT(input, output);   //do the FFTW IFFT

This produces an FFTLength time sequence with a single frequency. Can likely try with Matlab -- but its IFFT may use a different input array structure. 

I will be happy to send along a VisualStudio .sln if you want to run this from the source.

Reply by kaz, June 21, 2024

If you download Octave (a free GNU tool that is the unwanted clone of Matlab), it takes a few minutes. Then in the Octave GUI just copy-paste my code and you are done.

Reply by Slartibartfast, June 21, 2024

What is your motivation for only wanting real-valued output? Previously you mentioned using only one DAC, but that's already pretty easy if the complex-valued signal is mixed to a digital IF within a single DAC's sample-frequency range. This is typically what's done in modern modulators.

If I understand correctly what you're suggesting (and I'm not sure I do), your method would work, too, it's just more complex.   It doesn't add any information-carrying capacity.

Reply by jekain314, June 21, 2024

I think the long play is robustness/graceful degradation to channel disturbance at fixed BER and spectrum use. The real-only-path recursive solutions provide a ton of additional insight into the channel behavior while maintaining the end-of-symbol constellation-error maps available with legacy OFDM. A minimal-device IoT solution is also of interest.

Numerical complexity: the stuff I am doing is O(N) per digital sample. The FFT is O(N log N), but only after the symbol sampling is complete.

Do you have a radio diagram showing how you do the complex tx/rx with only a single DAC?

Reply by Slartibartfast, June 21, 2024

FWIW, most channel effects of interest to comm systems that are processed in a demodulator are linear, e.g., multipath reflections, doppler, pathloss, etc., so there's no advantage to processing real-only.   Most channel models are developed to be translated to baseband (i.e., complex-valued) for this reason.  OFDM is generally always processed at baseband with complex-valued signals for this reason...it's easier and less complex to do with no loss of performance.   OFDM exists specifically to be robust to multipath effects, and complex-valued equalizers have been around forever for single-carrier modulations as well.  You may have an uphill battle to demonstrate an advantage over existing methods, which are pretty mature.

You can search on Digital Downconversion (DDC) or Digital Upconversion (DUC) to get a lot of information on using a single ADC or DAC in a receiver or modulator.   It is pretty much the normal modem architecture these days for most applications.  The signal flow can be the same as a traditional system, just the conversion is done at IF instead of at baseband.  It is not unusual to digitize a wide bandwidth with a single ADC and digitally tune multiple channels into multiple demodulators from the same sample stream, and similarly transmit multiple channels from independent modulators into a single DAC.

This article is a reasonable basic discussion of the architecture:


There is even a standards organization working on standardizing how people use Digital Intermediate Frequency processing to make equipment interoperable at the ADC or DAC interface.