
Occam's Razor - when has it been wrong?

Status
Not open for further replies.
I don't mean just in the sense of ideal gas law or modern physics vs. classical physics.

Well, thus far Occam's razor has been wrong in nearly every facet of the sciences, economics, politics, etc.

Occam's razor states that the simplest theory is the correct one, while reality is rooted in complexity.



So, pretty much every theory we have ever had in any of these areas... biogenesis, evolution, atomic theory and the periodic table, relativity, quantum mechanics, string theory and force unification.
 
Occam's razor is not simply "the simplest theory is correct".

If there are two explanations A and B, and all else is equal (both explanations offer answers, both have equal backing, both are equally clear, both are just as valid, both have equal supporting evidence), then (and ONLY then) can you apply Occam's razor: the simpler of A or B should be taken preferentially.

It makes logical sense: if two explanations are equally valid, then the less complex one is less likely to have something wrong with it, by virtue of there being less that can go wrong with it.
 
Now if only people would use it like that! That's up there with people calling every snag a catch 22, imo.
 
My understanding of Occam's Razor is not so much that the simpler explanation is "correct" when all else is equal, but rather that you should start with the simpler explanation, as it is the easiest one to test the validity of. The approach is to use the minimum number of parameters you can to characterize whatever system you are trying to describe, and only add more parameters as needed.

In the current mode of scientific reasoning, every theory technically must have predictive power that is testable (ergo String "Theory" is not a theory yet, but that's another story). It is not enough to simply explain existing phenomena; a theory must also make testable predictions. And it is with this latter requirement, albeit often ignored by the public, that the principle of Occam's Razor is useful.

Consider a very simplified scenario where you have 2 data points that can be plotted on a 2D graph; let these represent some known phenomena. Now, how would you go about explaining these 2 phenomena? Well, you can have a simple theory that is basically a linear response on said graph, or you can have a more complicated theory that is, say, a 4th-order polynomial. Both theories would explain the 2 existing data points, but they also make different predictions.

Next you need to go about testing the predictions. With the linear theory, it is easy to disprove it if it's incorrect (remember, you can never technically prove a theory) with just one more measurement, since 2 data points uniquely constrain what the linear response can be, and thus its predictions.

With the 4th-order polynomial theory, though, you will need a total of 5 separate data points just to constrain the parameters, and at least 6 to test its validity. Since you already have 2 to start with, you will need to conduct 4 different experiments that measure the system at different places on the graph. This can potentially be very time-consuming and expensive to do, if it is doable at all. Things only get more difficult as you have more and more parameters that need to be constrained in order to have a meaningful theory.
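This asymmetry is easy to make concrete. Below is a small pure-Python sketch with made-up data points (the specific coordinates are my own illustration, not from the discussion above): two points pin down a line completely, so its prediction at any new x is fixed and falsifiable, while a 4th-order polynomial through the same two points is underdetermined and can predict almost anything.

```python
# Hypothetical data: two observed points (x, y) on a 2D graph.
points = [(1.0, 3.0), (2.0, 5.0)]

# A linear theory y = a*x + b has 2 parameters, so 2 points fix it uniquely.
(x1, y1), (x2, y2) = points
a = (y2 - y1) / (x2 - x1)   # slope
b = y1 - a * x1             # intercept

def linear(x):
    return a * x + b

# The linear theory's prediction at any new x is now fixed in advance,
# so a single extra measurement can falsify it.
print(linear(3.0))  # 7.0

# A 4th-order theory has 5 parameters and is underdetermined by 2 points:
# adding to the line any quartic that vanishes at x1 and x2 yields another
# "theory" that explains the same data but predicts something different.
def quartic(x):
    return linear(x) + (x - x1) * (x - x2) * x**2

print(quartic(1.0), quartic(2.0))  # 3.0 5.0 -- same fit to the 2 data points
print(quartic(3.0))                # 25.0    -- wildly different prediction
```

Since infinitely many such quartic corrections exist, the complex theory makes no unique prediction until enough extra measurements constrain its parameters, which is exactly why it is the more expensive theory to test.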

Hence you see why you would want to start with the simpler explanation first, given that you have no special insight into the inherent nature of your system. You would go about testing the simple theory's predictions, and modify the theory as you go if experiments show its predictions to be incorrect.

Is this the best way to do things? Well, I don't know, but it's not the ONLY way at least. It's just one approach that has been found productive. It is important to remember that scientific theories and laws are meant only as approximations of reality. They are by no means the truth of the universe, no matter how good they seem to be, and should be modified or even discarded as evidence dictates.
 
It's just the principle of parsimony. There are infinitely many mathematical descriptions for any set of data. Parsimony is the arbitrary selection of the one with the fewest parameters, based on the idea that adding more model terms gives no additional information. This is a strictly philosophical, non-scientific exercise which allows one to select one model from the infinitely many possible models, nothing more. It's often shown to be the incorrect choice when additional data eventually become available.
 
In some areas, like statistics and machine learning, it can be mathematically justified. The observations/measurements will have some noise in them, and too complex a model will end up modeling the noise along with the real signal and will thus generalize poorly. E.g. if you have 1000 measurements that are more or less linear, a linear model will almost always have much better predictive performance than a model based on a 999-degree polynomial, which will "explain" the observed data perfectly but behave wildly everywhere else. In AI/machine learning this problem is known as overfitting, and learning models typically have some mechanism for dealing with it: the error you're trying to minimize may include a penalty term tied to the complexity of the model (say, the number of parameters or their size), cross-validation, etc.
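A scaled-down version of that 1000-points-vs-999-degree-polynomial scenario can be sketched in pure Python. The data and noise values below are made up for the demonstration: 6 training points from an underlying linear law, fit once with a 2-parameter line and once with a degree-5 Lagrange interpolant (the "maximally complex" model that reproduces the training data exactly), then compared on 2 held-out points.

```python
# Illustrative data: y = 2x + 1 plus hand-picked noise (values are made up).
xs = [float(i) for i in range(8)]
noise = [0.4, -0.3, 0.5, -0.4, 0.3, -0.5, 0.1, -0.2]
ys = [2 * x + 1 + e for x, e in zip(xs, noise)]

train_x, train_y = xs[:6], ys[:6]  # used to fit both models
test_x, test_y = xs[6:], ys[6:]    # held out to test predictions

# Simple model: least-squares line (2 parameters).
n = len(train_x)
mx, my = sum(train_x) / n, sum(train_y) / n
a = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) / \
    sum((x - mx) ** 2 for x in train_x)
b = my - a * mx

def line(x):
    return a * x + b

# Complex model: degree-5 Lagrange interpolant (6 parameters), which
# reproduces the training data exactly -- noise included.
def interp(x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(train_x, train_y)):
        w = 1.0
        for j, xj in enumerate(train_x):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

def mse(model, pts_x, pts_y):
    return sum((model(x) - y) ** 2 for x, y in zip(pts_x, pts_y)) / len(pts_x)

print(mse(interp, train_x, train_y))  # 0: "explains" the observed data perfectly
print(mse(line, train_x, train_y))    # small but nonzero
print(mse(line, test_x, test_y))      # stays small away from the data
print(mse(interp, test_x, test_y))    # enormous: the model fit the noise
```

The interpolant wins on the training data by construction, but its extrapolation error explodes off the grid, which is the overfitting failure mode that penalty terms and cross-validation are designed to catch.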
 