Simple is Better than Complex ... Right?

I noticed upon re-reading my post on Price Series Characterization that I boldly state: "Simple is better than complex". I did not give voice to the corollary: "Simple is not easy". Is this just a matter of opinion?

William of Ockham, famous for his Razor, argued:
"Plurality must never be posited without necessity" and
"It is futile to do with more things that which can be done with fewer".

Einstein is reputed to have said:
"Everything should be made as simple as possible, but not simpler".

Da Vinci said: "Simplicity is the ultimate sophistication".

August company, but it still sounds like opinion to me!

Here's a Stanford paper on the subject.

Let's make a couple of qualitative arguments that support the case that simple is better than complex:

Points of Failure. The more elements in a system, the more opportunities for something to go wrong. To make this a bit more mathematical, the probability of failure is:

$P = 1 - \prod_{i=1}^{n}\bar{p}_{i}$ where $\bar{p}_{i}$ is the probability of element $i$ not failing.

Obviously, $0 < \bar{p}_{i} < 1$, so each additional element increases the overall probability of failure. This assumes (!) that elements fail independently, i.e. that adding an element has no effect on the probability of failure of any other element.
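A minimal sketch in Python makes the compounding visible. The function name and the reliability figures are hypothetical, and the independence assumption above is baked in:

```python
from math import prod

def failure_probability(reliabilities):
    """P(system fails) = 1 - product of per-element survival probabilities.
    Assumes element failures are independent."""
    return 1 - prod(reliabilities)

# Hypothetical: each element works 99% of the time.
print(failure_probability([0.99] * 1))   # ≈ 0.01
print(failure_probability([0.99] * 10))  # ≈ 0.096
print(failure_probability([0.99] * 50))  # ≈ 0.395
```

Even with very reliable parts, fifty of them in series fail almost 40% of the time.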

Stability and Chaos. The greater the number of elements in a system and the more interconnected they are, the less stable it is and the less predictable its modes of failure become. If there are $N$ elements in the system then there are $N(N-1)$ possible pairwise interactions (counting each direction separately).
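A trivial sketch (the function name is mine) shows how quickly that quadratic count grows:

```python
def interaction_count(n):
    """Ordered pairwise interactions among n elements: n * (n - 1)."""
    return n * (n - 1)

for n in (2, 5, 10, 100):
    print(f"{n} elements -> {interaction_count(n)} interactions")
# 2 -> 2, 5 -> 20, 10 -> 90, 100 -> 9900
```

Doubling the number of elements roughly quadruples the number of interactions you need to reason about.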

Comparing simple and complex trading systems, absolute performance combined with the change from in-sample to out-of-sample might tell us something about the optimum level of complexity.

Friction. At the interface between components in a system there is loss. In a mechanical device, energy is turned to heat via physical friction. In an informational device, lags and noise consume information.

I took a look at the Akaike Information Criterion / Bayesian Information Criterion to see what insights these measures offer on the topic. In simple terms, they are ranking systems that allow a modeller to compare one model to another, taking into account both goodness of fit and the number of parameters. They say nothing in an absolute sense; rather, they answer the question: does the improvement in my model's fit gained by adding a parameter offset the reduction in degrees of freedom?
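A sketch of the idea in Python, using the standard least-squares forms of AIC and BIC (the data here is fabricated for illustration: a truly linear relationship plus noise, fitted with polynomials of increasing degree):

```python
import math
import numpy as np

def aic(rss, n, k):
    """AIC for a least-squares fit with Gaussian errors (lower is better)."""
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    """BIC: like AIC but penalizes extra parameters more heavily for n > 7."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical data: underlying truth is linear, so extra polynomial
# terms can only chase noise.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)

scores = {}
for degree in (1, 3, 7):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1  # number of fitted parameters
    scores[degree] = (aic(rss, x.size, k), bic(rss, x.size, k))
    print(f"degree {degree}: AIC={scores[degree][0]:.1f}  BIC={scores[degree][1]:.1f}")
```

Higher-degree fits always shrink the residual sum of squares a little, but the parameter penalty means the criteria favour the simple linear model when the data really is linear.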

I cannot find a proof that "Simple is Better than Complex". So it remains a question of personal philosophy. Me? I am simple!