Biquad Optimization

Started by jtp_1960 · 8 years ago · 5 replies · latest reply 8 years ago · 581 views

I would like to try to optimize a bit further the biquad filters intended for low sample rates (44.1/48 kHz). Here's the data:

fs = 44100.0;
p1 = exp(-1.0/(fs*3180e-6))
p2 = exp(-1.0/(fs*75e-6))
z1 = exp(-1.0/(fs*318e-6))
z2 = exp(-1.0/(fs*0.0)) // <-- unused zero slot; -1/0 is -inf in IEEE math, so this evaluates to 0
and then get the num/den
d[0] = 1.0;
d[1] = -(p1+p2);
d[2] = p1*p2;
n[0] = 1.0;
n[1] = -(z1+z2);
n[2] = z1*z2;
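For reference, the steps above can be collected into one function. This is only a sketch: the `Biquad` struct, the helper names, and the unity-gain normalization at 1 kHz are my assumptions (RIAA responses are conventionally referenced to 0 dB at 1 kHz; the thread itself only mentions "normalize"):

```cpp
#include <array>
#include <cmath>
#include <complex>

// Coefficient container; n = numerator, d = denominator (names from the post).
struct Biquad { std::array<double, 3> n, d; };

// Linear magnitude of the biquad at a given frequency, evaluated on the
// unit circle at z = e^{j*2*pi*hz/fs}.
double biquadMagAt(const Biquad& f, double fs, double hz)
{
    const double pi = 3.14159265358979323846;
    const std::complex<double> zi =
        std::exp(std::complex<double>(0.0, -2.0 * pi * hz / fs));
    const std::complex<double> num = f.n[0] + f.n[1] * zi + f.n[2] * zi * zi;
    const std::complex<double> den = f.d[0] + f.d[1] * zi + f.d[2] * zi * zi;
    return std::abs(num) / std::abs(den);
}

// Build the MZT biquad from the RIAA time constants; z2 is the free
// "unused zero slot" parameter being tuned in the thread.
Biquad riaaMzt(double fs, double z2)
{
    const double p1 = std::exp(-1.0 / (fs * 3180e-6));
    const double p2 = std::exp(-1.0 / (fs * 75e-6));
    const double z1 = std::exp(-1.0 / (fs * 318e-6));

    Biquad f;
    f.d = { 1.0, -(p1 + p2), p1 * p2 };
    f.n = { 1.0, -(z1 + z2), z1 * z2 };

    // Assumed normalization: scale the numerator for unity gain at 1 kHz.
    const double g = 1.0 / biquadMagAt(f, fs, 1000.0);
    for (double& c : f.n) c *= g;
    return f;
}
```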

The best fit I get now (by just using the normally unused zero 'slot') is < ±0.3 dB for 44.1 kHz and ±0.17 dB for 48 kHz (range 20 Hz - 20 kHz). Fixing the magnitude error for the higher-sample-rate filters isn't necessary, since for the 96 kHz filter I have ±0.0099 dB, for 192 kHz < ±0.0008 dB and for 384 kHz < ±0.00004 dB (range 20 Hz - 20 kHz).

The best fit I know of for a 2nd-order RIAA filter is ±0.2239207 dB (44.1 kHz) and ±0.1395898 dB (48 kHz), obtained by Robert Orban, but his method of getting there is something not many are able to repeat.

Since I have not touched p1, p2 or z1 yet, I would like to see whether adjusting those along with z2 might improve the result. What is the simplest but still effective fitting method I could try?

So far, to find a good value for z2, I just compared the resulting magnitude against the analog model at a couple of frequency points and looped the calculation (fix the s coeff -> calculate z coeff -> normalize -> magnitude -> compare -> ...) for as long as it took to find the minimum error at those points. I think this isn't good for a real-time system? I could improve the loop by using a 'binary search' for the suitable values (the suitable numbers come from a fairly narrow range). Any thoughts?
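For the single-parameter case, one simple option is a golden-section search, a close cousin of the binary-search idea above. A minimal sketch, where the objective `err` is a placeholder for the peak magnitude error over the comparison frequencies (not shown here):

```cpp
#include <cmath>
#include <functional>

// Golden-section search for the minimum of a unimodal 1-D function on
// [lo, hi]. Each iteration shrinks the bracket by ~0.618, so convergence
// is geometric, much like a binary search on the error landscape.
double goldenMin(std::function<double(double)> err,
                 double lo, double hi, double tol = 1e-12)
{
    const double gr = (std::sqrt(5.0) - 1.0) / 2.0;  // ~0.618
    double a = lo, b = hi;
    double c = b - gr * (b - a);
    double d = a + gr * (b - a);
    while (b - a > tol) {
        if (err(c) < err(d)) { b = d; }  // minimum lies in [a, d]
        else                 { a = c; }  // minimum lies in [c, b]
        c = b - gr * (b - a);
        d = a + gr * (b - a);
    }
    return 0.5 * (a + b);
}
```

The loop above re-evaluates `err` at both probe points each iteration for clarity; a production version would reuse one of the two evaluations.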

Are there any open-source (C++) libraries for this kind of optimization task that I could examine or even use?


[ - ]
Reply by drmike · March 29, 2016

I would look at steepest descent (also called gradient descent) and "genetic search".  
The first is a good math algorithm which you can modify to work in real time.  The second is more useful if you are willing to let "optimal" follow the problem with a little lag.  

I don't follow your rule for "goodness", but as long as you have one, either of these methods will help you zero in on the "best" set of parameters that fits your rules for "goodness of fit".

There are a lot of open source C++  libraries for both of these methods.

Dr. mike

[ - ]
Reply by jtp_1960 · March 29, 2016

Thanks, I'll check those methods.

[ - ]
Reply by Tim Wescott · March 29, 2016

I was thinking happy thoughts as I read your post until it got to the "embedded" part -- then I kind of ground to a halt.

For an embedded system where you want the user to be able to enter any old frequency response, in terms of continuous-time poles and zeros, AND have the thing conform to the implied specifications, then go with what Dr. Mike says.

For an embedded system where you just want to give the user some knobs to twiddle until things sound right, and you don't particularly care if the resulting parameters apply at higher sampling rates, then just vary the time constants directly, use your above method, and let the chips fall where they may.

For an embedded system where you give the user just one slider (which would be what -- "tone" control?), you may be able to build up a table of pole and zero locations vs. slider position, and do linear interpolation between them.  You could maybe extend this to two "sliders", but I suspect that three would be a stretch.

Note: when you're comparing response to response, make sure you take the action of your DAC and any following reconstruction filter into account, and make sure that you know whether you just care about amplitude response or if you care about phase as well.  You care (I assume) about the whole system response and not just the inner DSP part, so you have to take these outer things into account.

[ - ]
Reply by jtp_1960 · March 29, 2016


Maybe my English is taking this somewhere not intended.

The 'analog model' I'm comparing the filter response against isn't actually analog, but a calculation of the magnitude response from the same base time constants the final filter is built from.

double analogMagnitudeAtHz(double Fc, double t1, double t2, double t3)
{
    const double pi = 3.14159265358979323846;
    const double w2 = 4.0 * pi * pi * Fc * Fc;   // (2*pi*Fc)^2
    return 10.0 * log10(1.0 + 1.0 / (w2 * t2 * t2))
         - 10.0 * log10(1.0 + w2 * t1 * t1)
         - 10.0 * log10(1.0 + 1.0 / (w2 * t3 * t3));
}

where t1-t3 are the time constants (p1, p2, z1) and Fc is the frequency the magnitude is evaluated at.

AFAIK, the error I'm trying to minimize comes from the s-to-z transformation (MZT in my case) and the filter realization (DF). I'm using double or long double in all the math, so rounding errors shouldn't be a problem. The target is equalization in the digital domain that is as accurate as possible for every sample rate one can pick.
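A companion to analogMagnitudeAtHz for the realized digital filter could look like the sketch below, so the two magnitudes can be differenced to form the error being minimized; the function name and the evaluation on the unit circle are my own framing of the comparison step:

```cpp
#include <cmath>
#include <complex>

// Magnitude (in dB) of the realized biquad at frequency Fc, evaluated at
// z = e^{j*2*pi*Fc/fs}. Coefficient arrays n[] (numerator) and d[]
// (denominator) follow the naming from the first post.
double digitalMagnitudeAtHz(const double n[3], const double d[3],
                            double Fc, double fs)
{
    const double pi = 3.14159265358979323846;
    const std::complex<double> zi =
        std::exp(std::complex<double>(0.0, -2.0 * pi * Fc / fs));
    const std::complex<double> num = n[0] + n[1] * zi + n[2] * zi * zi;
    const std::complex<double> den = d[0] + d[1] * zi + d[2] * zi * zi;
    return 20.0 * std::log10(std::abs(num) / std::abs(den));
}
```

The fitting error at Fc is then digitalMagnitudeAtHz(...) minus analogMagnitudeAtHz(...), taken over the comparison frequencies.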

Also, neither the optimization nor the other parameters are accessible to the user (it's possible to adjust the frequencies by hand, but those are fixed frequencies set by the RIAA / non-RIAA standards). My idea is that when the user selects a playback equalization (RIAA or non-RIAA) from a dropdown list (or changes a frequency from one of three switches), the time constants and sample rate are updated at code level --> a new filter is built in real time using the MZT transfer method and then optimized before being taken into use.

Using fixed coefficients isn't an option in this implementation.

[ - ]
Reply by Tim Wescott · March 29, 2016

I would suggest that if, between sampling rates and entries in your drop-down menu, you have a finite number of possibilities, you just store fixed coefficients for those options in your code.

Then, if you really must give the user "twiddle knobs", let those be the only thing that launches the optimization.

Of course, if you want a fixed number of equalization options in the drop-down, but at any arbitrary sampling rate, you're back to running the optimizer for each one.