LukeProg of CommonSenseAtheism fame created a bit of a row when he declared that Solomonoff Induction largely rules out theism, then expanded on the theme:
If I want to pull somebody away from magical thinking, I don’t need to mention atheism. Instead, I teach them Kolmogorov complexity and Bayesian updating. I show them the many ways our minds trick us. I show them the detailed neuroscience of human decision-making. I show them that we can see (in the brain) a behavior being selected up to 10 seconds before a person is consciously aware of ‘making’ that decision. I explain timelessness.
The CSA community had several reasons to get riled up about these statements, and the objections took several forms:
- The focus on Solomonoff Induction/Kolmogorov Complexity is obscurantist, relying on arcane technical terminology.
- The author is ignoring deductive arguments that support theist claims.
- The author has joined a cult.
- Inductive claims based on Solomonoff/Kolmogorov are no different from Reasoning to the Best Explanation.
I think all of these critiques are partially valid (though I don’t think there are any good reasons for believing theism is true), but the fourth one, which I contributed, was a personal realization for me. Though I have been fascinated by topics related to Kolmogorov complexity since the early 90s, I don’t think they are directly applicable to the question of theism versus atheism. Whether we are discussing the historical validity of Biblical claims or the logical consistency of extensions to notions of omnipotence or omniscience, I can’t think of a way that these highly mathematical concepts apply directly.
But what are we talking about? Solomonoff Induction, Kolmogorov Complexity, Minimum Description Length, Algorithmic Information Theory, and related ideas are formalizations of the principle of William of Occam (variously Ockham) known as Occam’s Razor: given multiple explanations of a phenomenon, one should prefer the simpler. The notion that the most parsimonious explanation is preferable existed only as a heuristic until the 20th Century, when statistics began to merge with computational theory through information theory. I’m not aware of any scientist who described resolving a trade-off between contending theories by an appeal to Occam’s Razor. Yet the intuition that the principle mattered remained part of the mathematical and scientific zeitgeist until it was formalized by people like Kolmogorov.
The concepts are admittedly deep in their mathematical formulations, but at heart is the notion that all logical procedures can be reduced to a computational model, and a computational model running on a standardized computer called a Turing Machine can be expressed as a string of binary digits. One can imagine many different programs that produce the same output given the same input; in fact, we can pad a program with arbitrarily many no-ops (no operations) and still get the same output from the computer. Moreover, we can guess that the structure of the program for a given string is essentially a model of the output string, compressing the underlying data into the form of the program. So, among all of the programs that generate a string, the shortest program is, as Occam’s Razor suggests, the most parsimonious way of generating the string.
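Though the true shortest program is uncomputable, the core intuition is easy to demonstrate: any general-purpose compressor gives a computable upper bound on a string’s Kolmogorov complexity, so a regular string admits a much shorter description than a random one. A minimal sketch, using Python’s zlib purely as a stand-in for the ideal (uncomputable) compressor:

```python
import os
import zlib

def compressed_len(s: bytes) -> int:
    # The length of the zlib output is a computable upper bound on
    # Kolmogorov complexity, which itself cannot be computed exactly.
    return len(zlib.compress(s, 9))

structured = b"01" * 500      # a highly regular 1000-byte string
random_ish = os.urandom(1000) # incompressible with high probability

print(compressed_len(structured))  # far below 1000: a short "program" exists
print(compressed_len(random_ish))  # near (or above) 1000: no short description
```

The structured string is fully captured by the tiny rule "repeat 01", which is exactly what the compressor exploits; the random bytes have no such rule to find.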
What comes next is rather impressive: the shortest program among all of the possible programs is also the most likely to continue to produce the “right” output if the string is continued. In other words, as a friend coined (and as appears in my book Teleology), “compression is truth” in that the most compressed and compact program is also the best predictor of the future based on the existing evidence. The formalization of these concepts across statistics, computational theory, and recently into philosophy, represents a crowning achievement of information theory in the 20th Century.
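One way to see why the shortest consistent program is the best predictor is to weight every candidate program by 2 to the power of minus its length and average their predictions, which is the essence of Solomonoff’s prior. A toy sketch, with the program class restricted (purely for illustration) to repeating binary patterns:

```python
from itertools import product

def prob_next_zero(observed: str, max_len: int = 8) -> float:
    """Weight each repeating binary pattern of length <= max_len that is
    consistent with the observed prefix by 2**-length (a Solomonoff-style
    prior), and return the posterior probability that the next bit is 0."""
    weight_zero = weight_total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            pattern = "".join(bits)
            # Repeat the pattern far enough to cover one bit past the data.
            stream = pattern * (len(observed) // length + 2)
            if stream.startswith(observed):
                w = 2.0 ** -length
                weight_total += w
                if stream[len(observed)] == "0":
                    weight_zero += w
    return weight_zero / weight_total

print(prob_next_zero("010101"))  # the short pattern "01" dominates: ~0.96
```

Longer patterns that also fit the data carry exponentially less weight, so the mixture’s prediction is dominated by the shortest consistent description, mirroring the claim that the most compressed program is the best predictor.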
I use these ideas regularly in machine learning, and related ideas inform concepts like Support Vector Machines, yet I don’t see a direct connection to human argumentation about complex ideas. Moreover, and I am hesitant to admit this, I am not convinced that human neural anatomy implements much more than vague approximations of these notions (and primarily in relatively low-level perceptual processes).
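For a flavor of how these ideas operate in practice, the Minimum Description Length principle turns model selection into a comparison of code lengths: a model earns its keep only if the bits required to state it are repaid by the bits it saves in describing the data. A minimal sketch on a toy coin-flip model (the 0.5·log2(n) parameter cost is the standard MDL approximation for one real-valued parameter):

```python
import math

def code_length_fair(bits: str) -> float:
    # Under the parameter-free fair-coin model, each symbol costs 1 bit.
    return float(len(bits))

def code_length_bernoulli(bits: str) -> float:
    # Two-part code: encode the data under a fitted Bernoulli(p) model,
    # plus 0.5 * log2(n) bits to transmit the parameter p itself.
    n, k = len(bits), bits.count("1")
    p = k / n
    if p in (0.0, 1.0):
        data_bits = 0.0
    else:
        data_bits = -(k * math.log2(p) + (n - k) * math.log2(1 - p))
    return data_bits + 0.5 * math.log2(n)

biased = "1" * 90 + "0" * 10
print(code_length_fair(biased))       # 100.0 bits
print(code_length_bernoulli(biased))  # shorter: MDL prefers the biased model
```

For a balanced string the parameter cost buys nothing and the fair-coin model wins, so MDL automatically penalizes complexity that the data do not support.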
So does Solomonoff Induction rule out theism? Only indirectly, in that it may give us confidence in a solid theoretical basis for other conceptual processes that interact more directly with the evidence for and against it.
I plan on elaborating on algorithmic information theory and its implications in future posts.