## Error in frequency estimation for low frequencies

Started 2 months ago · 28 replies · latest reply 1 month ago · 208 views

Hello everyone

We want to use a chirp z-transform to determine the frequency of a signal more accurately. It works quite well, but we see that the accuracy is much worse at very low frequencies than at higher frequencies.

We ran some simple Python simulations and found that the position of the peak after the Chirp-Z transform is highly dependent on the initial phase of the signal.

Simulated sine signals with different phase:

Chirp z-transform output without windowing:

Zoomed in peak values:
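For reference, a minimal Python sketch of the kind of simulation described above (parameters are illustrative; assumes SciPy >= 1.8 for scipy.signal.czt):

```python
import numpy as np
from scipy.signal import czt

fs = 1000.0            # sample rate in Hz
N = 256                # frame length (covers only ~1.3 cycles of the 5 Hz tone)
f0 = 5.0               # true tone frequency, low relative to fs
t = np.arange(N) / fs

# Zoom the chirp z-transform into a narrow band around f0
M = 512                # number of CZT output points
f1, f2 = 2.0, 8.0      # analysis band in Hz
w = np.exp(-2j * np.pi * (f2 - f1) / (M * fs))   # ratio between output points
a = np.exp(2j * np.pi * f1 / fs)                 # starting point on the unit circle

freqs = f1 + (f2 - f1) * np.arange(M) / M
peaks = []
for phase in np.linspace(0.0, np.pi, 8):
    x = np.sin(2 * np.pi * f0 * t + phase)
    X = czt(x, M, w, a)
    peaks.append(freqs[np.argmax(np.abs(X))])

# The estimated peak wanders with the initial phase of the tone
print(min(peaks), max(peaks))
```

With only a fraction of a cycle separating the tone from DC, the negative-frequency image overlaps the positive one, so the peak location shifts noticeably as the phase changes.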

We cannot currently explain in detail why the accuracy of the frequency determination is so much worse at low frequencies. Does anyone have any idea what the reason for this is and how the problem can be circumvented?

Thank you and best regards

Ueli

[ - ]

If your inputs are real-valued sinusoids this is a common problem, since the frequency response can be thought of as folding, or reflecting, across DC, which some refer to as aliasing or interference.  It's all the same thing, essentially, but it's usually what causes the biggest trouble at low frequencies for real-valued signals.

You can convert to an analytic signal with a complex mixer, which helps because it removes the reflection of the frequency response across DC (0 Hz).

Frequency Estimation is a very well-studied area and there are quite a few different techniques used ranging from simple to complex, depending on how much processing complexity you can tolerate to trade off for accuracy or bias or variance.   Most of them work with DFTs rather than Chirp-Z, but it's an option you can consider if needed.

[ - ]
It's better to have exact formulas for real valued signals instead of approximation formulas for complex valued tones.  Also, your conversion process will introduce errors.

Candan's Tweaks of Jacobsen's Frequency Approximation

Have you recognized exact formulas are possible, or are you still in denial?

So, if you have a high SNR, particularly for a single tone signal, that's what you want to use.
[ - ]

Hello Cedron

Thanks for linking your blog posts in this and also the other reply. They are great but I need more time to dig deeper here.

Maybe your exact formulas could be a good solution to get more accurate results for very low frequencies. Your time domain formulas could also be an approach here.

[ - ]
You have lots of options.  In my limited experience with an inexpensive radar unit the variation in frequency from cycle to cycle exceeded the approximation error introduced by simpler formulas.  Yes, you can measure individual cycles.

With the DFT frequency formulas you are going to get the best results with sampling near four points per cycle and your results are going to be the average frequency across your sampling frame.  The wider the stance on the signal, the better your resolution, but also the greater your latency.  I do not recommend sampling frames smaller than two cycles.

You may consider applying a notch filter to improve results, especially if you are going to take a time domain approach, which I don't think will help in this case.  Any second tone will mess with the results.  This takes you down the rabbit hole of the impossible task of trying to define "instantaneous frequency".

It all depends on the quality and nature of your signal and your latency tolerance.

Perhaps you could show a graph of a few cycles of your sample data.

This graph shows clearly where using complex formulas for a real signal veers off the CRLB.  It should give you an idea, but I recommend that you do your own implementation and testing.

The IEEE has still not recognized my formulas, though they are clearly superior to what they have.  They want me to pay to publish.  Screw it, I'm a hobbyist, they should pay me to write a book.  Please, every IEEE member reading this, send an email.

Thanks.
[ - ]

Hello Cedron

You're right, it makes sense that I share some real world data. We sample the time domain signal after the real baseband mixer. We are using a high-pass filter to reduce the antenna coupling and a low-pass filter as an anti-aliasing filter. At the moment we then use a Hanning window and an FFT to measure the beat frequency. This works very well, but especially in the short range (very low frequencies) we see the effect I described in the initial post, which makes the frequency measurement less accurate compared to measurements at higher frequencies.

RAW ADC Data, Single Object @ 1m:

RAW ADC Data, Single Object @ 0.1m:

The lowest frequency of the signal comes from the antenna coupling and can be calibrated out by measuring it in a situation without any targets in the field of view and subtracting it.

Our goal is to measure lower frequencies with good accuracy as well.

[ - ]

Ouch, yes, you may have other effects happening with near-field antenna coupling that could be causing trouble.

It sounds like you may be using a linear-FM waveform with a de-ramp on receive mixer?  Or some processing flow that results in frequency proportional to distance?  If so, is the de-ramp mixer analog or digital?

Regardless, once the signal is digitized it can be mixed to complex baseband with a complex-valued mixer (often a simple +/-1 multiplication by fs/4), which moves fs/4 to 0 Hz, and 0 Hz to -fs/4.  This has a number of advantages, including moving the original DC to the band edge where it can be suppressed with an LPF.  This reduces any distortion due to any residual DC that may be present at the input.

This does a linear translation of frequencies, so no information is lost, just a translation that can be easily removed in interpreting results for range estimation.   This does move the lowest frequencies to the lower (negative frequency) band edge, so that any effects that may have been due to low frequency will not appear there. With a complex-valued signal all frequencies, including zero, have the same responses, so any remaining issues will be due to other effects that may be able to be corrected elsewhere.

This is essentially what was mentioned much earlier, but another possibility for doing it this way is that for a linear-FM waveform if the raw (un-de-ramped) signal is mixed to complex baseband like this then the de-ramp function can be done with a multiplication in the frequency domain with a quadratic phase matching the transmit FM rate.  This is a common methodology for signal processing in linear-FM radars, as it eliminates any distortion due to the analog mixer.   That may or may not apply to your system, but thought I'd throw it out in case it's useful, or at least something to think about that might help.
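A minimal Python sketch of the fs/4 complex downconversion described above (tone frequency, filter length, and cutoff are illustrative choices, not from the thread):

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 1000.0
n = np.arange(4096)
x = np.cos(2 * np.pi * 100.0 * n / fs)        # real input tone at 100 Hz

# Mix by -fs/4: multiply by exp(-j*pi*n/2), i.e. the sequence [1, -j, -1, j, ...]
lo = np.exp(-1j * np.pi / 2 * n)
y = x * lo     # +100 Hz -> -150 Hz, the -100 Hz image -> -350 Hz, DC -> -250 Hz

# Low-pass filter keeps the band of interest while suppressing
# the translated image and the translated DC at the band edge
h = firwin(101, 200.0, fs=fs)
y = lfilter(h, 1.0, y)
```

After this translation the spectrum is one-sided around the tone of interest, so the reflection across DC no longer interferes, and the frequency shift is a known constant that can be removed when interpreting the range estimate.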

[ - ]

Hello Slartibartfast

As mentioned above, I believe we have some issues to resolve in our measurement system before we can work on the accuracy itself.

We are using a PLL controlled sawtooth ramp with an analog baseband mixer for de-ramping to get the beat frequency. There is also an LPF and an HPF in the signal chain to avoid saturation due to antenna coupling and aliasing effects.

Your suggestion to mix the digitized signal up to complex baseband sounds very interesting and could solve the initially described problem.

[ - ]

That architecture limits you somewhat, and will have its own error and distortion sources, but there's still likely room for improvement.  Mixing the center frequency to DC (or a simple fs/4 digital downconversion) for complex-valued processing will at least get rid of the sin(x)/x reflection across DC.  You'll also be able to more easily apply a number of algorithms that may be useful.

[ - ]
That looks like nice high-SNR data to me.  The close-up indicates that you are not going to have good success with the nearly instantaneous time domain formulas.

I would apply a pseudo-differentiation pass on this to get rid of the drifting.  My favorite technique is the one from this early article:

Exponential Smoothing with a Wrinkle

I like to use a 50% weighting, so the two passes are:

y_f[n+1] = 0.5 * ( y_f[n] + x[n] )     [run forward]

y_b[n-1] = 0.5 * ( y_b[n] + x[n] )     [run backward]

y[n] = y_b[n-1] - y_f[n+1]

= smoothed from future - smoothed from past
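As a rough Python sketch of this two-pass smoothing difference (my reading of the recursions above, with the backward pass mirroring the forward one; `pseudo_diff` is an illustrative name, not from the article):

```python
import numpy as np

def pseudo_diff(x, w=0.5):
    """y[n] = y_b[n-1] - y_f[n+1]: smoothed-from-future minus smoothed-from-past.

    Acts as a crude differentiator: it removes DC and flattens slow drift.
    """
    N = len(x)
    y_f = np.zeros(N)           # exponentially smoothed, running forward
    y_b = np.zeros(N)           # exponentially smoothed, running backward
    for n in range(N - 1):
        y_f[n + 1] = w * (y_f[n] + x[n])
    for n in range(N - 1, 0, -1):
        y_b[n - 1] = w * (y_b[n] + x[n])
    # Element i of the result corresponds to n = i + 1
    return y_b[:-2] - y_f[2:]
```

On a constant input the interior of the output is (numerically) zero, and on a linear ramp it is a constant, consistent with the later remark that each such pass removes DC and turns a linear trend into a DC level.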

Leave at least a ten sample margin on each end for your DFT frame within your smoothing frame.  Then you can use four points per cycle for your upper expectation and use the improved three bin real formula.

If you want to get fancy and do content based framing, then line up on a whole + one half number of cycles with about four points per cycle, then calculate the two surrounding bin values and apply the two bin formula.  I've used 3 1/2 cycles with 16 point DFTs in the past and it has worked well.  Approximation formulas, like Jacobsen's, don't do as well with low N.

When you are looking at the comparison chart I posted, remember CD quality is around 90 dB on the horizontal axis.

Any estimated tone can be subtracted out by estimating its phase and magnitude as well, then using the bin value formulas to generate the DFT image for the tone, which can be subtracted out of your working DFT spectrum.

DFT Bin Value Formulas for Pure Real Tones
[ - ]

Hello Cedron

Thanks for sharing your ideas and for your great feedback. I will study it in more detail and try to implement it in the end application.

[ - ]
You're welcome.  I appreciate your positive response; such responses are rare.

Any of the exact formulas should solve your phase dependency problem.  For a pure generated signal, your results should be good to the precision of your variables.  Your real data looks really smooth already so noise distortion should not be an issue.

With these formulas and techniques you have the ability, if you have the computation time, to pick apart your signal into the individual constituents.  Furthermore, with adaptive framing you can get much higher resolution on your high frequency signals.

Each differentiating pass removes the DC component and turns a linear into a DC, a parabola into linear, etc.  It also mitigates higher frequency tones and local white noise.  This means after you do it a few times you can subsample (decimate) the signal and have a much smaller DFT/FFT to calculate without any material loss in the quality of the results.  I recommend doing this if you are studying subframes smaller than your large wavelengths.

If you are studying subframes and doing fresh DFTs, then removing other estimated tones is more efficiently done in the signal domain instead of the frequency domain.

One word of caution if you are going to use the Bin Value formulas.  If the frequency is very close to a whole integer number of cycles per frame, then simply create that bin and zero the rest.  The bin value formulas have singularities at whole frequencies, as those are actually the degenerate case.  For ultra precise work, there is a variation of the bin value formula that is more accurate near whole frequencies.

An Alternative Form of the Pure Real Tone DFT Bin Value Formula

I don't see a need for it unless you want better than about eight significant digits precision, otherwise just test to see if you are within 0.000001 or so of a whole number frequency.

There are two reasons to use the bin value formulas as part of your processing.  One is to remove the interference effects of other tones on a DFT patch you are currently analyzing, especially if they are nearby.  Second is to use as basis vectors to calculate the phase and amplitude parameters for a tone.

I am quite serious about the IEEE being totally ignorant of these approaches.  When I first found these formulas I did a bunch of emailing.  Back then you got more replies.  The common refrain was "Exact formulas are impossible, I'm not even going to look at your equations."  The most adamant in this position was Jacobsen.  IMO, he failed the IEEE badly.  I'm still waiting for acknowledgement and an apology from him.  Not holding my breath though.

The IEEE has no conduit for introducing this material to the membership.  Theoretically, some member could rewrite my articles and submit them for publication.  However, journals only want to publish original material, preferably by prestigious authors, and this is a real problem.

https://physicsdiscussionforum.org/opinion-piece-o...

Without Academics validating each other's work, bad publications aren't corrected and remain on the record unquestioned.  There is now something called the "replication crisis" going on which has cast doubt on all research.

If I were an IEEE member, I would feel ripped off.  I am quite serious about every IEEE member reading this sending them an email.  They never replied to any of mine and I've given up on them.  I'd like to see them embarrassed about this.

Not only will exact equations give you better results, but they also have a qualitative descriptive aspect that does not exist with an approximation function.  It's funny that 'exact' vs 'approximate' is so tough to grasp while 'lossless' vs 'lossy' is well understood.  Or confusing and conflating approximation with estimation.

Please report back which techniques you end up using and how they work out.  I think that would be valuable feedback for others to see.  My article read counts have gone up a bit in the last few days so I think what I am saying here is having an impact.  Feel free to send me an email (address in my bio, or use the contact form in my user profile) if you have small questions not worthy of public airing.  I am also curious about your specification requirements, you seem to be shooting for extreme accuracy.

I hope this works out well for you.

P.S.  My novel Physics formulas are much more important.

https://physicsdiscussionforum.org/why-i-think-gen...

Same systemic response.  "Don't bother us, we already know everything."

P.P.S. If you are tracking a tone frame to frame, use the center value as the reference point for your phase function.  That's where the best fit will line up with the signal the best, errors are balanced on the edges.

[ - ]

Hello Cedron

I appreciate your enthusiasm to support newbies like me and also what you have implemented in your blog articles.

We now have a lot of input and need to dig deeper to find the best solution to optimize our algorithms. One problem I see might be the limited computing power, as the implementation is done on an MCU.

I also think that the accurate measurement of the frequency itself is not the only problem in our FMCW system. Slartibartfast's posts below also point out some interesting things that can hopefully help eliminate the problems created by our measurement system before we work on the measurement accuracy.

Thank you for your offer to contact you directly. This might actually make sense to discuss topics that are not necessarily public.

[ - ]
Honestly, it's more frustration than enthusiasm, but I still hope it is helpful.  There is no point in doing the extra calculations if the data doesn't support it quality wise, or the situation.

With a DFT, to get frequencies, your minimum is going to be DFT+Jacobsen's.  My original real valued three bin formula is similar in form to Jacobsen's approximation.  Both are a pair of three element dot products followed by a complex division.  The difference being that the Jacobsen coefficients are the real values 0,1,2 which allow for implicit multiplication and mine are complex valued and you need a lookup table of vectors to dot with based on the bin number.

Away from DC and Nyquist, you will get nearly as good results with Jacobsen's since the conjugate peak is further away.  At half Nyquist (4 points per cycle) the interference is nearly void.  A hybrid approach just requires some if logic.
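For reference, the Jacobsen-style three-bin estimator discussed above can be sketched as follows (a minimal version for a complex tone, without Candan's bias correction; `jacobsen_delta` is an illustrative name):

```python
import numpy as np

def jacobsen_delta(X, k):
    """Fractional-bin offset of a tone near peak bin k, from three DFT bins.

    delta = Re[(X[k-1] - X[k+1]) / (2*X[k] - X[k-1] - X[k+1])]
    """
    num = X[k - 1] - X[k + 1]
    den = 2 * X[k] - X[k - 1] - X[k + 1]
    return float(np.real(num / den))

# Example: complex tone at 10.3 bins in a 64-point DFT
N = 64
f = 10.3
x = np.exp(2j * np.pi * f * np.arange(N) / N)
X = np.fft.fft(x)
k = int(np.argmax(np.abs(X)))          # peak magnitude bin
f_est = k + jacobsen_delta(X, k)       # refined frequency in bins
```

For a clean complex tone this recovers the fractional offset to well within a hundredth of a bin; with a real-valued tone near DC the conjugate peak biases it, which is the interference being discussed.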

Both frequency formulas do well on filtering out the tails of other peaks so applying a window is not necessary.  My Candan's tweaks article clearly shows that if you have more than about 30 samples in your frame then doing the extra exact tweak or using my exact formulas is not going to improve results any, the approximation is close enough.

I came at DSP from an audio application perspective concentrating on real valued signals.  Many here come from a RADAR background which is generally complex valued signals and much higher frequencies.  I can't really help you with your ancillary issues.

Now that I seem to have gotten noticed, let's see if the IEEE contacts me.  They are still lacking this material and many still seem to believe that exact formulas are impossible.  You would think that finding exact formulas should be noteworthy even if in many cases the extra accuracy is immaterial.
[ - ]

Thanks for your reply, and yes, you are right: we are working with real-valued sinusoids.

The end application is an FMCW radar where we have no option to use a complex mixer.

I already thought that this is a common problem, but I did not know how to name it correctly. I think I will find much more related information now that I can expand my research based on your description.

[ - ]

If you aren't resource limited, FMCW can be mixed and processed as a complex-valued signal, if that's an option to help get around the problem.   I think others have provided links showing some approaches to that method.

If that's possible for your application then the low-frequency signals generally have the same characteristics as any other, so they don't need to be treated separately for refining frequency estimation.   At that point if what you're doing works well at the other frequencies it will likely also work well at low frequencies.   If you need more performance then the literature is full of a wide variety of algorithms that offer practical tradeoffs for processing complexity or output accuracy, bias, or variance.

The above was written in a general sense with a DFT in mind.  Using the CZT comes with the usual cautions in interpreting the results the more the CZT deviates from a DFT.   I suspect that you're on top of that already since you started out on that path.

[ - ]

We usually work with a DFT but started some research with the CZT, where we then saw the problem from the initial post.

I will further check your idea and search for the links you recommended.

[ - ]

Can you tell us which approach you ended up using?

[ - ]

Hello Cedron

I've now tested different approaches including digital mixing, autocorrelation, and simple time domain phase measurements, but in the end I see the same effect for all tested algorithms.

It looks like the algorithm to measure the frequency works quite well, but the problem seems to be in the FMCW system itself.

In the short range we have coupling between the RX/TX channels and also a reflection caused by the plastic cover used for the system.

For short detection ranges I always get a combination of the channel coupling, the cover reflection, and the real reflection. Furthermore, it seems that I see not only the direct reflection of the object but also reflections at 2x and 3x the distance due to multiple reflections.

In the end I have a signal that is composed of multiple sine components with different magnitudes and phases, which also change over temperature, which makes it quite hard, for example, to simply remove the coupling and cover reflection by calibrating them out.

In the short range of an FMCW system I have only one or two full sine cycles of the signal, with changing phases in the combined real signal.

The algorithm works well for objects at greater distances but tends to measure wrong distances in the short range because I can no longer separate all the signal components. Maybe an adaptive or iterative approach could give better results here, but I do not see options to implement this in the design due to processing power restrictions.

So at the moment I've not found a solution to get rid of the effects, but I also do not think that the problem lies in the frequency measurement itself. It is more in the raw data that I have available for processing.

[ - ]
Thanks for the detailed update.  You seem to have covered the "calibrate for it" option I was going to suggest.  The only other thing I can suggest is that if you resample your data on a frame very close to the two cycles you have then the 2x and 3x artifacts should be "invisible".  At the ideal four points per cycle that is only eight points you have to select.  Two basis vectors and you will have a phase calculation; the equivalent of a single bin calculation.

For a more accurate frequency calculation, frame 2 1/2 cycles, then calculate the bins for 2 cycles and 3 cycles and use my two bin real frequency formula.  This will put your 2x on a whole bin minimizing leakage and your 3x at bin 7 1/2, so with 16 sample points it will be close to the Nyquist.
[ - ]

I suspect you are getting aliasing effects between the positive and negative frequencies for a real signal (given the finite time duration, the aliasing is the interaction of the sinc that is convolved with each tone in the frequency domain).  You can mitigate that by using the Hilbert transform to convert your signal to a positive-frequency-only (complex) signal prior to your processing. This too has increased error as you approach DC, but you can reduce that to any level (at the expense of delay and time to settle) by increasing the complexity of the Hilbert transform used. A simple test in MATLAB / Octave or Python's scipy.signal can be done using the hilbert function, since hilbert(x) returns the complex analytic (positive frequency only) representation of x.
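A quick Python check of this (tone frequency and frame length are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 12.0 * t + 0.7)   # real low-frequency tone

z = hilbert(x)    # analytic signal: the negative-frequency half is removed
X = np.fft.fft(z)
freqs = np.fft.fftfreq(len(z), 1 / fs)
neg_energy = np.sum(np.abs(X[freqs < 0]) ** 2)
pos_energy = np.sum(np.abs(X[freqs > 0]) ** 2)
print(neg_energy / pos_energy)           # essentially zero
```

With the conjugate peak gone, a peak-interpolation frequency estimate on z no longer depends on the tone's initial phase the way it does for the raw real signal.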

[ - ]

Hello Dan

Interesting input, thank you very much. We will study it in more detail in combination with the feedback of Cedron.

[ - ]

If this is for FMCW radar, this article describes the complex-baseband architecture as another option to get the complex analytic signal I described: https://www.ti.com/lit/wp/spyy007/spyy007.pdf?ts=1...

Also this paper describes several popular techniques for frequency estimation for practical FMCW radar applications.

https://www.atlantis-press.com/article/6924.pdf

For FMCW you don't actually have a pure tone and may have situations of not only background noise but multiple targets as well as DC offsets with practical hardware, so be sure those cases are included in your evaluation if operation under those situations is important.

[ - ]

Hello Dan

Thanks for sharing the two documents. I was not aware of the TI paper, which clears up a lot about the differences between the two architectures.

[ - ]
You could use a DFT and formulas that are exact for real valued signals.  I disagree with Dan, I would say the conjugate peak is 'interfering' not 'aliasing'.

You have three choices amongst my blog articles:

Exact Frequency Formula for a Pure Real Tone in a DFT

Two Bin Exact Frequency Formulas for a Pure Real Tone in a DFT

Improved Three Bin Exact Frequency Formula for a Pure Real Tone in a DFT

The advantage of exact formulas is they work for extremely small sample and DFT sizes.  If you have a pure tone signal and want to calculate with less latency and fewer calculations, then my time domain formulas might be better.  You can find them in my overview article:

[ - ]

Can't you use windowing to reduce the effect of rendering the signal periodic by using a DFT ?

Do you have time to do a kind of dichotomic search with various frequencies and various initial phases ?

Can you increase the length of the vector for low frequencies to cover more than 1 period ?

Can you accumulate several periods of signal to average out the effect of the initial phase ?

[ - ]

Hello bloublou

We are using windowing in the end application but see the same effect there for low frequencies.

As mentioned above, our end application is an FMCW radar with a real mixer in the RF path. It is possible to increase the length of the vector, but this will not help in an FMCW radar as long as we can't increase the bandwidth of the sweep, which is limited by regulations.

[ - ]