Occam's razor, Physics example

Started by Cedron 2 months ago · 3 replies · latest reply 1 month ago · 181 views
From the Wikipedia article:

"Suppose an event has two possible explanations. The explanation that requires the fewest assumptions is usually correct.
However, Occam's razor only applies when the simple explanation and complex explanation both work equally well. If a more complex explanation does a better job than a simpler one, then you should use the complex explanation."

If you have an interest in Physics, please read the following post.

   Why I think General Relativity is an approximation

It pertains to two of my blog articles on this site.  For those of you who aren't familiar, I have an overview article with links to all my articles.  Everybody with an interest in DSP should be familiar with the material found in the "Fundamentals" sections.  The same math is also needed for Physics.  You will find my Physics articles referenced in the "Off Topic" section, and there are also links in the link above.

My horse in this race is the simpler answer.  Occam's razor does not say, as is often misunderstood, "The simplest answer wins."  What it really says is that the answer with the fewest assumptions will tend to be the better one.

In terms of assumptions for this case, the simpler explanation assumes an inverse square law, while the more complicated one requires the strong equivalence principle.  The two are mutually contradictory.

I'm wondering if some of the Physics guys in this forum want to take the GR side, the one that goes against Occam's razor.

Any DSP examples of Occam's razor out there?
Reply by kaz, April 15, 2024

Interesting philosophical principles.

In design debugging, we face many such situations.

Example: you design a module and test it to be working. It is then integrated with other modules in the system, and the system fails. Possible explanations:


- wrong observation by the testing person

- wrong interpretation by the testing person

- your design is the cause of the failure

- the integrating system is the cause of the failure, i.e. someone else's failure

- both your design and the system cause the failure, with many persons involved

I see that the first case has the fewest assumptions, while the last one is more complex and more likely to be correct, but it is not specific.

So it doesn't help, I am afraid... I will have to accept the burden of proof.

Reply by Cedron, April 15, 2024
That's an interesting example.  It could apply to hardware or software.  I don't think there is any expectation of correlation between levels in a hierarchy and the complexity of those levels.  I can think of simple systems with complex components, and complex systems with simple components.  Either way, if unit testing passes and system testing fails, well, "Houston, we got a problem."

A similar example to the Physics one I posed would be deciding whether an approximation or an exact formula should be used when calculating a frequency from DFT bin values.  It is different, though, because sometimes the simpler approximation, say Jacobsen's, will be just as good as an exact formula, say Candan's second tweak, depending on how much noise is present and what type it is.  In practice, it also depends on the tolerance of the specification.

    "Candan's Tweaks of Jacobsen's Frequency Approximation"

For low noise applications with tight requirements, it might even be worth taking the extra step to do a "Projection" formula.

    "Three Bin Exact Frequency Formulas for a Pure Complex Tone in a DFT"

The white noise error variance is slightly lower, but it takes a lot of calculations to get it.  Is it worth it?

In the case of my frequency formulas, the assumption is that there is a single pure tone in the DFT.  If your sample size is larger, or there is noise, or other tones are present, the exact formulas will generally not be worth it.  This still follows Occam's razor, yet has nothing to do with complexity determining the best answer.
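To make the trade-off concrete, here is a minimal sketch of the three-bin estimators being compared, based on the standard published forms: Jacobsen's ratio used directly as the fractional bin offset, and a corrected version that inverts the ratio through an arctangent, which I believe corresponds to Candan's exact correction for a single noiseless complex tone.  Function names and the test frequency are my own choices for illustration.

```python
import numpy as np

def three_bin_ratio(X, k):
    # Jacobsen's real-valued ratio from the peak bin and its two neighbors.
    num = X[k - 1] - X[k + 1]
    den = 2.0 * X[k] - X[k - 1] - X[k + 1]
    return np.real(num / den)

def jacobsen(X, k):
    # Jacobsen's approximation: the ratio itself is the fractional offset.
    return k + three_bin_ratio(X, k)

def candan_exact(X, k):
    # Arctan-corrected form: for a pure noiseless complex tone the ratio
    # equals tan(pi*delta/N)/tan(pi/N), so inverting it recovers delta exactly.
    N = len(X)
    r = three_bin_ratio(X, k)
    return k + (N / np.pi) * np.arctan(np.tan(np.pi / N) * r)

# Demo: a pure complex tone at 10.3 bins in a 64-point DFT.
N, f = 64, 10.3
x = np.exp(2j * np.pi * f * np.arange(N) / N)
X = np.fft.fft(x)
k = int(np.argmax(np.abs(X)))
# Both estimates land near 10.3; the arctan form is exact here.
print(jacobsen(X, k), candan_exact(X, k))
```

With no noise the extra arctangent buys a few parts in ten thousand of a bin; whether that matters is exactly the specification-tolerance question above.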
Reply by Cedron, May 14, 2024
I thought this post might get a little more attention than it has, especially from R B-J.  Which one of these equations better describes reality?


Here are the graphs with r/k as the horizontal axis.  For r values greater than 10*k the two are practically indistinguishable.


Trying to get physicists to address this is even more difficult than getting the IEEE to recognize my exact formulas and vector-based phase and amplitude calculations.  This is what is most problematic about "publish or perish" culture: all value is placed on "origination" and none on "validation".  What could possibly go wrong?