Greetings,

Looking at the techniques for timing recovery (symbol synchronization) brought a few questions to mind.

Let me pick on the early-late method. A typical explanation of the approach begins with something like the following: "The algorithm behind this method takes three samples spaced by sampling duration Ts, each within the current symbol of duration T."

This troubles me. Our goal is to determine where the center of the symbol is, because it corresponds to the optimal sampling instant. But if we don't know where the center is, then how can we be assured that each sample we choose will be "within the current symbol"? That is, how can we know that one or more of those three samples doesn't actually belong to the previous or next symbol?

This raises a couple of other questions. For one, in what way are we not synchronized? What I mean is: relative to the sampling instants, are the symbols translated in time or rotated in time? And in either case, what causes this? While it's easy enough to understand how a carrier may not be synchronized with the sampling at the receiver, once we establish carrier synchronization I would have thought the symbols would be synchronized as well. Is the problem one of jitter in the transmitter (translation), or a phase shift in the transmitter (rotation), or something else?

Thanks so much for any and all explanations.

Walt

It seems you are asking about the very concept of symbol timing recovery as well as carrier recovery.

They are two distinct issues, though recovery of either assists the other.

The concept of symbol timing recovery is to sample at the centre of a symbol.

This could be implemented at the origin (ADC level) by shifting the sampling clock or, perhaps surprisingly, later in the digital domain by interpolating between existing samples to create new ones at the best-SNR instants.

Carrier recovery is a separate issue; its purpose is to centre the signal on DC, since the spectrum shifts sideways due to oscillator offsets between Tx and Rx.

Hi, it has been a while since I used these algorithms, so I may be wrong, but my understanding is this:

You estimate the symbol duration T and sample the incoming signal (optimally, the matched-filtered baseband signal) at three instants:

k*T-Ts, k*T, k*T+Ts (Ts << T)

the sample at k*T is your output sample; the other two deliver information on your timing. With optimally timed synchronization, the samples at the instants k*T-Ts and k*T+Ts have the same amplitude ON AVERAGE (expected value)! The samples at every symbol can vary depending on the symbol values. With inter-symbol interference of less than one symbol, the symbol combinations [1, 1, 1] or [-1, 1, -1] should deliver equal values at k*T+-Ts around the center symbol. Non-symmetric combinations like [1, 1, -1] will not deliver equal values for k*T+-Ts around the center symbol.

The key is that for a statistically independent series of symbol values, the averaged (or low-passed) values at k*T+-Ts need to be equal.

The non-equality of these values can be used as a phase detector in a PLL that synchronizes the symbol clock and eliminates any remaining timing phase offset.
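To make the averaging idea concrete, here is a toy simulation of it. This is only a sketch under invented assumptions: a triangular pulse (the matched-filter output of a rectangular pulse), random BPSK symbols, and made-up parameters (`SPS`, `N_SYM`, the `early_late_error` helper) that are not from any real system.

```python
import random

random.seed(0)

SPS = 16      # samples per symbol (hypothetical oversampling factor)
N_SYM = 2000  # number of random test symbols

# Triangular pulse: the matched-filter output of a rectangular pulse,
# peaking at the symbol center with zero ISI at neighbouring centers.
pulse = [1.0 - abs(i - SPS) / SPS for i in range(2 * SPS + 1)]

symbols = [random.choice([-1.0, 1.0]) for _ in range(N_SYM)]

# Oversampled baseband signal: superposition of shifted, weighted pulses.
sig = [0.0] * (N_SYM * SPS + len(pulse))
for k, a in enumerate(symbols):
    for i, p in enumerate(pulse):
        sig[k * SPS + i] += a * p

def early_late_error(signal, offset, ts=2):
    """Average of |early| - |late| over all symbols, for samples taken
    ts sample periods before and after the assumed symbol center."""
    acc = 0.0
    for k in range(1, N_SYM - 1):
        c = k * SPS + offset  # assumed symbol-center index
        acc += abs(signal[c - ts]) - abs(signal[c + ts])
    return acc / (N_SYM - 2)

# True symbol centers sit at k*SPS + SPS (the pulse peak).
e_centered = early_late_error(sig, SPS)      # averages out to ~0
e_early    = early_late_error(sig, SPS - 4)  # sampling too early
e_late     = early_late_error(sig, SPS + 4)  # sampling too late
print(e_centered, e_early, e_late)
```

Individual symbols give non-zero differences (the non-symmetric combinations Markus mentions), but the average is near zero only at the true center, and its sign tells you which way you are off.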

This is a nice overview on Timing recovery:

http://combrexelle.perso.enseeiht.fr/sujets/articl...

Hope I helped

Markus

Adding some information on top of what kaz has said.

The early-late method provides an error signal that indicates if you are in advance of or delayed from the exact symbol instant. This method should be used in a feedback system, because the error signal and the actual error have a non-linear relation. Therefore, you must have a time-adjust mechanism placed before the error signal generator, and a loop filter between them to adjust the dynamics of this feedback system. The loop filter parameters must be adjusted so the error signal goes toward zero as slowly as acceptable, and usually with small or no overshoot.

If the sample period of your input signal is much smaller than the symbol period, you can limit your time-adjust mechanism to advancing or delaying the symbol sampling instant in multiples of the sample period. This means you do not need to create new samples at time instants between existing samples to obtain a fine time adjustment. If that is not the case, you should use an adjustable fractional delay filter or, as kaz said, adjust your ADC sample time.
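A toy closed-loop sketch of the whole-sample-step case: the "loop filter" here is reduced to a bare proportional gain, and every parameter (`SPS`, `GAIN`, `TS`, the triangular test pulse) is invented for illustration, not taken from a real modem design.

```python
import random

random.seed(1)

SPS = 32      # samples per symbol: heavily oversampled, so the loop can
              # adjust timing in whole-sample steps (hypothetical values)
N_SYM = 4000
TS = 2        # early/late spacing in sample periods
GAIN = 4.0    # loop gain mapping detector output to sample-index steps

# Triangular pulse (matched-filter output of a rectangular pulse).
pulse = [1.0 - abs(i - SPS) / SPS for i in range(2 * SPS + 1)]
symbols = [random.choice([-1.0, 1.0]) for _ in range(N_SYM)]
sig = [0.0] * (N_SYM * SPS + len(pulse))
for k, a in enumerate(symbols):
    for i, p in enumerate(pulse):
        sig[k * SPS + i] += a * p

TRUE_OFFSET = SPS          # true symbol centers sit at k*SPS + SPS
offset = TRUE_OFFSET + 10  # start with a deliberate timing error

for k in range(2, N_SYM - 2):
    c = k * SPS + offset
    err = abs(sig[c - TS]) - abs(sig[c + TS])  # early-late detector
    # Crude first-order loop: scale the error and round to whole samples
    # (a real design would put a proper loop filter here).
    offset -= int(round(GAIN * err))
    offset = max(TS + 1, min(2 * SPS, offset))  # keep indices in range

print(offset - TRUE_OFFSET)  # residual timing error in samples
```

The per-symbol error is noisy and non-linear, exactly as described above, but the feedback still walks the sampling index onto the symbol center and holds it there.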

I recommend Gardner, F. M., “Interpolation in Digital Modems – Part 1”, 1993. It is an easily available paper.
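For a flavour of the fractional-delay idea that paper covers, here is a minimal 4-point (cubic Lagrange) interpolator sketch; the function name `frac_delay` and the sinusoidal test signal are my own inventions, not from the paper.

```python
import math

def frac_delay(x, i, mu):
    """Cubic Lagrange interpolation of sequence x at fractional index
    i + mu (0 <= mu < 1), using the four samples x[i-1] .. x[i+2]."""
    t = mu
    return (x[i - 1] * (-t * (t - 1) * (t - 2) / 6)
            + x[i] * ((t + 1) * (t - 1) * (t - 2) / 2)
            + x[i + 1] * (-(t + 1) * t * (t - 2) / 2)
            + x[i + 2] * ((t + 1) * t * (t - 1) / 6))

# Check against a slow sinusoid sampled on an integer grid.
f = 0.05  # cycles per sample
x = [math.sin(2 * math.pi * f * n) for n in range(64)]
est = frac_delay(x, 10, 0.3)             # value "between" samples 10 and 11
true = math.sin(2 * math.pi * f * 10.3)  # ground truth at that instant
print(abs(est - true))
```

This is how a digital timing loop can place its output sample at an instant that does not exist on the ADC grid, instead of moving the ADC clock itself.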

The symbol clocks at each end of the link, the Tx and Rx, are independent and so will have random phase relationships to each other. They also won't be exactly the same frequency due to drift, oscillator stability and tolerance, etc., so the relative phase relationship will likely be changing with time as well. This is why synchronization is necessary.

In most modulations carrier phase and symbol timing are independent, i.e., there is no relationship or interdependence between them. The carrier will have the same random phase and frequency offset problems that the symbol timing has. Usually you have to synchronize timing first, because most carrier phase error detectors rely on sampling the transmitted symbols.

The early-late method is just one method of timing error detection, and there are many others. Any detector that reliably provides a usable error curve can be used to feed a timing control loop.

I hope that helps a little bit. There is a lot of literature in this area but not all of it is very straightforward to understand.

Slartibartfast, I especially appreciate your explanation of why the symbols are not synchronized even after the carriers have been made to be so.

What I still do not understand (despite understanding *how* algorithms such as Early/Late or Gardner work) is how you can be assured that the samples you are feeding to these methods are all from the same symbol. How do you know where a symbol begins so that you can know your samples are all from that symbol, as opposed to some samples from that symbol and some samples from the symbol before or after?

Thanks again,

Walt

You don't need to know the symbol boundary. You target the peaks and dips, and as long as there are plenty of transitions you can keep tracking the symbol timing.

As kaz mentioned, timing error detectors like early-late just need to know the symbol period, T, and then provide a suitable error curve vs. phase error for up to +/- T/2. In other words, the error detector typically provides zero output at the symbol center, increasing (positive) output for phase error in one direction and decreasing (negative) output in the other direction. There are a lot of TEDs that do this, and some popular ones, like Gardner's, rely on transitions, so they only produce useful output on roughly half the symbols, which is sufficient to steer the loop.
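For comparison with early-late, here is a toy sketch of the Gardner detector mentioned earlier in the thread, which multiplies the mid-symbol sample by the difference of the surrounding symbol samples and therefore relies on transitions. All parameters and the triangular test pulse are invented for illustration.

```python
import random

random.seed(3)

SPS = 32      # samples per symbol (hypothetical)
N_SYM = 2000

# Triangular pulse (matched-filter output of a rectangular pulse).
pulse = [1.0 - abs(i - SPS) / SPS for i in range(2 * SPS + 1)]
symbols = [random.choice([-1.0, 1.0]) for _ in range(N_SYM)]
sig = [0.0] * (N_SYM * SPS + len(pulse))
for k, a in enumerate(symbols):
    for i, p in enumerate(pulse):
        sig[k * SPS + i] += a * p

def gardner_error(d):
    """Averaged Gardner-style TED output when sampling d samples
    off-center: mid-symbol sample times the symbol-sample difference.
    Without a transition the difference is ~0, so no output."""
    acc = 0.0
    for k in range(2, N_SYM - 1):
        c = k * SPS + SPS + d    # assumed center of symbol k
        mid = sig[c - SPS // 2]  # sample halfway between symbol centers
        acc += mid * (sig[c] - sig[c - SPS])
    return acc / (N_SYM - 3)

# Odd S-curve: zero at the correct instant, signed when off-center.
e0 = gardner_error(0)
e_early = gardner_error(-8)  # sampling early -> negative here
e_late = gardner_error(8)    # sampling late  -> positive here
print(e0, e_early, e_late)
```

Only the symbols that actually change contribute to the sum, yet the averaged curve still crosses zero at the symbol center, which is all the loop needs.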

Thank you kaz and Slartibartfast for the follow-ups. I understand now that it's not a problem if the samples span more than one symbol. If they do, the error signal will still drive us to adjust the sampling instant toward our goal. That was really throwing me off.

I truly appreciate everyone's input, and I'll check out the indicated references.

Walt