Category: Cognitive Science

Bostrom on the Hardness of Evolving Intelligence

At 38,000 feet somewhere above Missouri, returning from a one-day trip to Washington D.C., it is easy to take Nick Bostrom’s point, in his paper How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects, that bird flight is not the end-all of what is possible for airborne objects and mechanical contrivances like airplanes. His efforts to bound and distinguish the evolution of intelligence as either Hard or Not-Hard run up against significant barriers, however. As a practitioner of the art, I find that the similarity between a purely physical phenomenon like flying and something as complex as human intelligence falls flat.

But Bostrom is not taking flying as more than a starting point for arguing that there is an engineer-able possibility for intelligence. And that possibility might be bounded by a number of current and foreseeable limitations, not least of which is that computer simulations of evolution require a certain amount of computing power and representational detail in order to be a sufficient simulation. His conclusion is that we may need as much as another 100 years of improvements in computing technology just to get to a point where we might succeed at a massive-scale evolutionary simulation (I’ll leave it to the reader to investigate his additional arguments concerning convergent evolution and observer selection effects).

Bostrom dismisses as pessimistic the assumption that a sufficient simulation would, in fact, require a highly detailed emulation of some significant portion of the real environment and the history of organism-environment interactions:

A skeptic might insist that an abstract environment would be inadequate for the evolution of general intelligence, believing instead that the virtual environment would need to closely resemble the actual biological environment in which our ancestors evolved … However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible. It is, on the contrary, plausible that it would be more efficient to use an artificial selection environment, one quite unlike that of our ancestors, an environment specifically designed to promote adaptations that increase the type of intelligence we are seeking to evolve.

Unfortunately, I don’t see any easy way to bound the combined complexity of the needed substrate for evolutionary action (be it artificial organisms or just artificial neuronal networks) and the complexity of defining the necessary artificial environment for achieving the requested goal. If anything, the task becomes at least as hard, and perhaps harder: we can define a physical system much more easily than an abstract adaptive landscape designed to “promote…abstract reasoning and general problem-solving skills.”

Randomness and Meaning

The impossibility of the Chinese Room has implications across the board for understanding what meaning means. Mark Walker’s paper “On the Intertranslatability of all Natural Languages” describes how the translation of words and phrases may be achieved:

  1. Through a simple correspondence scheme (word for word)
  2. Through “syntactic” expansion of the languages to accommodate concepts that have no obvious equivalence (“optometrist” => “doctor for eye problems”, etc.)
  3. Through incorporation of foreign words and phrases as “loan words”
  4. Through “semantic” expansion where the foreign word is defined through its coherence within a larger knowledge network.

An example for (4) is the word “lepton” where many languages do not have a corresponding concept and, in fact, the concept is dependent on a bulwark of advanced concepts from particle physics. There may be no way to create a superposition of the meanings of other words using (2) to adequately handle “lepton.”

These problems arise again in trying to understand how children acquire meaning while learning a language. As Walker points out, learning a second language must involve the same kinds of steps as learning translations, so any simple correspondence theory has to be supplemented.

So how do we make adequate judgments about meanings and so rapidly learn words, often initially with a coarse granularity but later with increasingly sharp levels of focus? What procedure is required for expanding correspondence theories to operate in larger networks? Methods like Latent Semantic Analysis and Random Indexing show how this can be achieved in ways that are illuminating about human cognition. In each case, the methods provide insights into how relatively simple transformations of terms and their occurrence contexts can be viewed as providing a form of “triangulation” about the meaning of words. And, importantly, this level of triangulation is sufficient for these methods to do very human-like things. Both methods can pass the TOEFL exam, for instance, and Latent Semantic Analysis is at the heart of automatic essay grading approaches that have sufficiently high success rates that they are widely used by standardized test makers.

How do they work? I’ll just briefly describe Random Indexing, since I recently presented the concept at the Big Data Science meetup at SGI in Fremont, California. In Random Indexing, we simply create a randomized sparse vector for each word we encounter in a large collection of texts. The vector can be binary as a first approximation, so something like:

The: 0000000000000100000010000000000000000001000000000000000…

quick: 000100000000000010000000000001000000000110000000000000…

fox: 0000000000000000000000100000000000000000000000000100100…

Now, as I encounter a given word in the text, I just add up the random vectors for the words around it to create a new “context” vector that is still sparse, but less so than the component parts. What is interesting about this approach is that if you consider the vectors as points in a space whose dimensionality equals the vector length, then words that have similar meanings tend to cluster in that space. Latent Semantic Analysis achieves a similar clustering using some rather complex linear algebra. A simple approximation of the LSA approach is also at the heart of Google’s PageRank algorithm, though operating on link structure rather than word co-occurrences.
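The mechanics can be sketched in a few lines of Python. This is a minimal illustration rather than any production implementation: it uses sparse ternary (+1/-1) index vectors in the style of Kanerva’s formulation instead of the purely binary vectors shown above, and the dimensionality, sparsity, window size, and word-seeded randomization are all arbitrary choices for the sketch.

```python
import random
import zlib
from collections import defaultdict

DIM = 512      # length of every vector
NONZERO = 10   # number of nonzero entries in an index vector

def index_vector(word):
    """Deterministic sparse random vector for a word: mostly zeros,
    with a few +1/-1 entries at word-seeded random positions."""
    rng = random.Random(zlib.crc32(word.encode()))
    vec = [0] * DIM
    for pos in rng.sample(range(DIM), NONZERO):
        vec[pos] = rng.choice((-1, 1))
    return vec

def random_index(tokens, window=2):
    """For each occurrence of a word, add the index vectors of its
    neighbors into that word's accumulated context vector."""
    context = defaultdict(lambda: [0] * DIM)
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                neighbor = index_vector(tokens[j])
                ctx = context[word]
                for k in range(DIM):
                    ctx[k] += neighbor[k]
    return context

def cosine(a, b):
    """Similarity of two context vectors as the angle between them."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm if norm else 0.0
```

Words that occur in similar neighborhoods (“cat” and “dog” in parallel sentences, say) end up with context vectors pointing in similar directions, which is the clustering effect described above.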

So how do we solve the TOEFL test using an approach like Random Indexing? A large collection of texts is analyzed to create a Random Index; then, for a sample question like:

In line 5, the word “pronounced” most closely means

  1. evident
  2. spoken
  3. described
  4. unfortunate

The question and the question text are converted into a context vector using the same random vectors as the index, and then the answer vectors are compared to see which is closest in the index space. This is remarkably inexpensive to compute, requiring just an inner product between the context vectors for question and answer.
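That scoring step can be sketched as follows, assuming a `context` mapping from words to vectors like the one a Random Index produces. The vectors in the usage example below are tiny hand-made stand-ins chosen for illustration, not real index vectors:

```python
def cosine(a, b):
    """Normalized inner product between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm if norm else 0.0

def best_answer(question_words, candidates, context):
    """Sum the context vectors of the question's words, then return
    the candidate whose own context vector lies closest in the
    index space."""
    dim = len(next(iter(context.values())))
    qvec = [0.0] * dim
    for word in question_words:
        for k, v in enumerate(context.get(word, [0] * dim)):
            qvec[k] += v
    return max(candidates,
               key=lambda c: cosine(qvec, context.get(c, [0] * dim)))
```

With toy vectors in which “pronounced” and “evident” share contexts, `best_answer(["pronounced"], ["evident", "spoken"], context)` picks “evident.” The whole computation is a handful of inner products, which is what makes the method so cheap.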

A method for compact coding using Algorithmic Information Theory can also be used to achieve similar results, demonstrating the wide applicability of context-based analysis to understanding how intertranslatability and language learning depend on the rich contexts of word usage.
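One concrete way to see the compact-coding idea at work is Normalized Compression Distance, which substitutes a real compressor for the uncomputable Kolmogorov complexity terms in the algorithmic information distance. Whether this matches the specific method alluded to above is an assumption on my part, but it shows how shared content makes two texts mutually compressible:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for closely related
    inputs, approaching 1 for unrelated ones. zlib stands in for the
    uncomputable Kolmogorov complexity K()."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Two passages that use words in similar contexts compress well when concatenated, so their distance is small; unrelated passages gain little from sharing a compressor’s dictionary.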

On the Soul-Eyes of Polar Bears

I sometimes reference a computational linguistics factoid that appears to be now lost in the mists of early DoD Tipster program research: Chinese linguists only agree on the segmentation of texts into words about 80% of the time. We can find some qualitative agreement on the problematic nature of the task, but the 80% is widely smeared out among the references that I can now find. It should be no real surprise, though, because even English with white-space tokenization resists easy characterization of words versus phrases: “New York” and “New York City” are almost words in themselves, though just given white-space tokenization are also phrases. Phrases lift out with common and distinct usage, however, and become more than the sum of their parts; it would be ridiculously noisy to match a search for “York” against “New York” because no one in the modern world attaches semantic significance to the “York” part of the phrase. It exists as a whole and the nature of the parts has dissolved against this holism.

John Searle’s Chinese Room argument came up again today. My son was waxing, as he does, in a discussion about mathematics and order, and suggested a poverty of our considerations of the world as being purely and completely natural. He meant in the sense of “materialism” and “naturalism” meaning that there are no mystical or magical elements to the world in a metaphysical sense. I argued that there may nonetheless be something that is different and indescribable by simple naturalistic calculi: there may be qualia. It led, in turn, to a qualification of what is unique about the human experience and hence on to Searle’s Chinese Room.

And what happens in the Chinese Room? Well, without knowledge of Chinese, you are trapped in a room with a large collection of rules for converting Chinese questions into Chinese answers. As slips of Chinese questions arrive, you consult the rule book and spit out responses. Searle’s point was that it is silly to argue that the algorithm embodied by the room really understands Chinese and that the notion of “Strong AI” (artificial intelligence is equivalent to human intelligence insofar as there is behavioral equivalence between the two) falls short of the meaning of “strong.” This is a correlate to the Turing Test in a way, which also posits a thought experiment with computer and human interlocutors who are remotely located.

The arguments against the Chinese Room range from complaints that there is no other way to establish intelligence to the claim that given sensory-motor relationships with the objects the symbols represent, the room could be considered intentional. I don’t dispute any of these arguments, however. Instead, I would point out that the initial specification of the gedankenexperiment fails in the assumption that the Chinese Room is actually able to produce adequate outputs for the range of possible inputs. In fact, while even the linguists disagree about the nature of Chinese words, every language can be used to produce utterances that have never been uttered before. Chomsky’s famous “colorless green ideas sleep furiously” shows the problem with clarity. It is the infinitude of language and its inherent ambiguity that makes the Chinese Room an inexact metaphor. A Chinese questioner could ask how the “soul-eyes of polar bears beam into the hearts of coal miners,” and the system would fail like enjambing precision German machinery fed tainted oil. Yeah, German machinery enjambs just like polar bears beam.

So the argument stands in its opposition to Strong AI given its initial assumptions, but fails given real qualifications of those assumptions.

NOTE: There is possibly a formal argument embedded in here in that a Chomsky grammar that is recursively enumerable has infinite possible productions but that an algorithm can be devised to accommodate those productions given Turing completeness. Such an algorithm is in principle only, however, and does require a finite symbol alphabet. While the Chinese characters may be finite, the semantic and pragmatic metadata are not clearly so.

Teleology, Chapter 5

Harry spent most of that summer involved in the Santa Fe Sangre de Cristo Church, first with the church summer camp, then with the youth group. He seemed happy and spent the evenings text messaging with his new friends. I was jealous in a way, but refused to let it show too much. Thursdays he was picked up by the church van and went to watch movies in a recreation center somewhere. I looked out one afternoon as the van arrived and could see Sarah’s bright hair shining through the high back window of the van.

Mom explained that they seemed to be evangelical, meaning that they liked to bring as many new worshippers into the religion as possible through outreach and activities. Harry didn’t talk much about his experiences. He was too much in the thick of things to be concerned with my opinions, I think, and snide comments were brushed aside with a beaming smile and a wave. “You just don’t understand,” Harry would dismissively tell me.

I was reading so much that Mom would often demand that I get out of the house on weekend evenings after she had encountered me splayed on the couch straight through lunch and into the shifting evening sunlight passing through the high windows of our thick-walled adobe. I would walk then, often for hours, snaking up the arroyos towards the mountains, then wend my way back down, traipsing through the thick sand until it was past dinner time.

It was during this time period that I read cyberpunk authors and became intrigued with the idea that someday, one day, perhaps computing machines would “wake up” and start to think on their own. I knew enough about computers that I could not even conceive of how that could possibly come about. My father had once described for me a simple guessing game that learned. If the system couldn’t guess your choice of animal, it would concede and use the correct answer to expand its repertoire. I had called it “learning by asking” at the time but only saw it as a simple game and never connected it to the problem of human learning.

Yet now the concept made some sense as an example of how an intelligent machine could escape from the confines of just producing the outputs that it was programmed to produce. Yet there were still confines; the system could never just reconfigure the rules system or decide to randomly guess when it got bored (or even get bored). There was something profound missing from our understanding of human intelligence.

Purposefulness seemed to be the missing attribute that we had and that machines did not. We were capable of making choices by a mechanism of purposefulness that transcended simple programmable rules systems, I hypothesized, and also traced that purpose back to more elementary programming that was part of our instinctive, animal core. There was a philosophical problem with this scheme, though, that I recognized early on; if our daily systems of learning and thought were just elaborations of logical games like that animal learning game, and the purpose was embedded more deeply, what natural rules governed that deeper thing, and how could it be fundamentally different than the higher-order rules?

I wanted to call this core “instinct” and even hypothesized that if it could be codified it would bridge the gap between truly thinking and merely programmed machines. But the alternative to instinct being a logical system seemed to be assigning it supernatural status and that wasn’t right for several reasons.

First, the commonsense notion of instinct associated with doing primitive things like eating, mating and surviving seemed far removed from the effervescent and transcendent ideas about souls that were preached by religions. I wanted to understand the animating principle behind simple ideas like wanting to eat and strategizing about how to do it—hardly the core that ascends to heaven in Christianity and other religions I was familiar with. It was also shared across all animals and even down to the level of truly freaky things like viruses and prions.

The other problem was that any answer of supernaturalism struck me as leading smack into an intellectual brick wall because we could explain and explain until we get to the core of our beings and then just find this billiard ball of God-light. Somehow, though, that billiard ball had to emanate energy or little logical arms to affect the rules systems by which we made decisions; after all, purposefulness can’t just be captive in the billiard ball but has to influence the real world, and at that point we must be able to characterize those interactions and guess a bit at the structure of the billiard ball.

So the simplest explanation seemed to be that the core, instinct, was a logically describable system shaped by natural processes and equipped with rules that governed how to proceed. Those rules didn’t need to be simple or even easily explainable, but they needed to be capable of explanation. Any other scheme I could imagine involved a problem of recursion, with little homunculi trapped inside other homunculi and ultimately powered by a billiard ball of cosmic energy.

I tried to imagine what the religious thought about this scheme of explanation but found what I had heard from Harry to be largely incompatible with any sort of explanation. Instead, the discussion was devoid of any sort of detailed analysis or arguments concerning human intelligence. There was a passion play between good and evil forces, the notion of betraying or denying the creator god, and an unexplained transmigration of souls, being something like our personalities or identities. If we wanted to ask a question about why someone had, say, committed a crime, it was due to supernatural influences that acted through their personalities. More fundamental questions like how someone learned to speak a language, which I thought was pretty amazing, were not apparently subject to the same supernatural processes, but might be explained with a simple recognition of the eminence of God’s creation. So moral decisions were subject to evil while basic cognition was just an example of supernatural good in this scheme of things, with the latter perhaps subject to the arbitrary motivations of the creator being.

Supernaturalism was an appeal to non-explanation driven by a conscious desire to not look for answers. “God’s Will” was the refrain for this sort of reasoning and it was counterproductive to understanding how intelligence worked or had come about.

God was the end of all thought. The terminus of bland emptiness. A void.

But if natural processes were responsible, then the source of instinct was evolutionary in character. Evolution led to purpose, but in a kind of secondhand way. The desire to reproduce did not directly result in complex brains or those elaborate plumes on birds that showed up in biology textbooks. It was a distal effect built on a basic platform of sensing and reacting and controlling the environment. That seemed obvious enough but was just the beginning of the puzzle for me. It also left the possibility of machines “waking up” far too distant a possibility since evolution worked very slowly in the biological world.

I suddenly envisioned computer programs competing with each other to solve specific problems in massive phalanxes. Each program varied slightly from the others in minute details. One could print “X” while another could print “Y”. The programs that did better would then be replicated into the next generation. Gradually the programs would converge on solving a problem using a simple evolutionary scheme. There was an initial sense of elegant simplicity, though the computing system to carry the process out seemed at least as large as the internet itself. There was a problem, however. The system required a central governor to carry out the replication of the programs, their mutation and to measure the success of the programs. It would also have to kill off, to reap, the losers. That governor struck me as remarkably god-like in its powers, sitting above the population of actors and defining the world in which they acted. It was also inevitable that the solutions at which programs would arrive would be completely shaped by the adaptive landscape that they were presented with; though they were competing against one another, their behavior was mediated through an outside power. It was like a game show in a way and didn’t have the kind of direct competition that real evolutionary processes inherently have.

A solution required that the governor process go away, that the individual programs replicate themselves and that even that replication process be subject to variation and selection. Moreover, the selection process had to be very broadly defined based on harvesting resources in order to replicate, not based on an externally defined objective function. Under those circumstances, the range of replicating machines—automata—could be as vast as the types of flora and fauna on Earth itself.

As I trudged up the arroyo, I tried to imagine the number of insects, bacteria, spores, plants and vines in even this relatively sparse desert. A cricket began singing in a nearby mesquite bush, joining the chorus of other crickets in the late summer evening. The light of the moon was beginning to glow behind a mountain ridge. Darkness was coming fast and I could hear coyotes start calling further up the wash towards St. John’s College.

As I returned home, I felt as though I was the only person walking in the desert that night, isolated in the dark spaces that separated the haphazard Santa Fe roads, yet I also was warmed with the idea that there was a solution to the problem of purpose embedded deeply in our biology and that could be recreated in a laboratory of sorts, given a vastly complex computing system larger than the internet itself. That connection to a deep truth seemed satisfying in a way that the weird language of religion had never felt. We could know and understand our own nature through reason, through experiments and through simulation, and even perhaps create a completely new form of intelligence that had its own kind of soul derived from surviving generations upon generations of replications.

But did we, like gods, have the capacity to apprehend this? I recalled my Hamlet: The paragon of animals, indeed. A broad interpretation of the Biblical Fall as a desire to be like God lent a metaphorical flavor to this nascent notion. Were we reaching out to try to become like a creator god of sorts through the development of intelligent technologies and biological manipulation? If we did create a self-aware machine that seemed fully human-like, it would certainly support the idea that we were creators of new souls.

I was excited about this line of thinking as I slipped into the living room where Mom and Harry were watching a crime drama on TV. Harry would not understand this, I realized, and would lash out at me for being terrifically weird if I tried to discuss it with him. The distance between us had widened to the point that I would avoid talking directly to him. It felt a bit like the sense of loss after Dad died, though without the sense of finality that death brought with it. Harry and I could recover, I thought, reconnecting later on in life and reconciling our divergent views.

A commercial came and I stared at the back of his head like I had done so often, trying to burrow into his skull with my mind. “Harry, Harry!” I called in my thoughts. He suddenly turned around with his eyes bulging and a crooked smile erupting across his face.

“What?” he asked.

It still worked.

On the Non-Simulation of Human Intelligence

There is a curious dilemma that pervades much machine learning research. The solutions that we are trying to devise are supposed to minimize behavioral error by formulating the best possible model (or collection of competing models). This is also the assumption of evolutionary optimization, whether natural or artificial: optimality is the key to efficiently outcompeting alternative structures, alternative alleles, and alternative conceptual models. The dilemma is whether such optimality is applicable to the notoriously error-prone, conceptually flexible, and inefficient reasoning of people. In other words, is machine learning at all like human learning?

I came across a paper called “Multi-Armed Bandit Bayesian Decision Making” while trying to understand what Ted Dunning is planning to talk about at the Big Data Science Meetup at SGI in Fremont, CA a week from Saturday (I’ll be talking as well). The paper contains a remarkable admission concerning this point:

Human behaviour is after all heavily influenced by emotions, values, culture and genetics; as agents operating in a decentralised system humans are notoriously bad at coordination. It is this fact that motivates us to develop systems that do coordinate well and that operate outside the realms of emotional biasing. We use Bayesian Probability Theory to build these systems specifically because we regard it as common sense expressed mathematically, or rather ‘the right thing to do’.

The authors go on to suggest that such systems should therefore be seen as corrective assistants for the limitations of human cognitive processes! Machines can put the rational back into reasoned decision-making. But that is really not what machine learning is used for today. Instead, machine learning is used where human decision-making processes are unavailable due to the physical limitations of including humans “in the loop,” the scale of the data involved, or the tediousness of the tasks at hand.
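For concreteness, here is what a Bayesian bandit looks like in miniature. This is Thompson sampling over Bernoulli-payoff arms, a standard textbook approach offered as an illustrative sketch, not the specific algorithm of the paper quoted above:

```python
import random

def thompson_sampling(true_rates, rounds=3000, seed=0):
    """Bernoulli bandit solved by Thompson sampling: hold a Beta
    posterior over each arm's payoff rate, sample once from every
    posterior, and pull the arm whose sample is largest."""
    rng = random.Random(seed)
    wins = [1] * len(true_rates)    # Beta(1, 1) uniform priors
    losses = [1] * len(true_rates)
    pulls = [0] * len(true_rates)
    for _ in range(rounds):
        samples = [rng.betavariate(w, lo) for w, lo in zip(wins, losses)]
        arm = samples.index(max(samples))
        pulls[arm] += 1
        if rng.random() < true_rates[arm]:   # simulated payoff
            wins[arm] += 1
        else:
            losses[arm] += 1
    return pulls
```

The posterior sampling balances exploration and exploitation without any emotional biasing at all: arms with uncertain estimates still get occasional pulls, but play concentrates on the arm with the best observed payoff.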

For example, automatic parts-of-speech tagging could be done by row after row of professional linguists who mark up the text with the correct parts of speech. Where occasionally great ambiguity arises, they would have meetings to reach agreement on the correct assignment. This kind of thing is still done. I worked with a company that creates conceptual models of the biological results expressed in research papers. The models are created by PhD biologists who are trained in the conceptual ontology they have developed over the years through a process of arguing and consensus development. Yahoo! originally used teams of ontologists to classify web pages. Automatic machine translation is still unacceptable for most professional translation tasks, though it can be useful for gisting.

So the argument that the goal of these systems is to overcome the cognitive limitations of people is mostly incorrect, I think. Instead, the real reason why we explore topics like Bayesian probability theory for machine learning is that the mathematics gives us traction against the problems. For instance, we could try to study the way experts make decisions about parts-of-speech and create a rules system that contained every little rule. This would be an “expert system,” but even the creation of such a system requires careful assessment of massive amounts of detail. That scalability barrier rises again and emotional biases are not much at play except where they result in boredom and ennui due to sheer tedium.

Eusociality, Errors, and Behavioral Plasticity

I encountered an error in E.O. Wilson’s The Social Conquest of Earth where the author intended to assert an alternative to “kin selection” but instead repeated “multilevel selection,” which is precisely what he wanted to draw a distinction with. I am sympathetic, however, if for no other reason than I keep finding errors and issues with my own books and papers.

The critical technical discussion from Nature concerning the topic is available here. As a technical discussion, it is fraught with details like how halictid bees appear to live socially but are in fact solitary animals that co-exist in tunnel arrangements.

Despite the focus on “spring-loaded traits” as determiners for haplodiploid animals like bees and wasps, the problem of big-brained behavioral plasticity keeps coming up in Wilson’s book. Humanity is a pinnacle because of taming fire, because of the relative levels of energy available in animal flesh versus plant matter, and because of our ability to outrun prey over long distances (yes, our identity emerges from marathon running). But these are solutions that correlate with the rapid growth of our craniums.

So if behavioral plasticity is so very central to who we are, we are faced with an awfully complex problem in trying to simulate that behavior. We can expect that there must be phalanxes of genes involved in setting our developmental path (our nature and the substrate for our nurture). We should, indeed, expect that almost no cognitive capacity is governed by a small set of genes, and that all the relevant genes work in networks through polygeny, epistasis, and related effects (pleiotropy). And we can expect no easy answers as a result, except to assert that AI is exactly as hard as we should have expected, and progress will be inevitably slow in understanding the mind, the brain, and the way we interact.

From Ethics to Hypercomputation

Toby Ord of Giving What We Can has other interests, including ones that connect back to Solomonoff inference and algorithmic information theory. Specifically, Ord worked earlier on topics related to hypercomputation or, more simply put, the notion that there may be computational systems that exceed the capabilities of Turing Machines.

Turing Machines are abstract computers that can compute logical functions, and the question that has dominated theoretical computer science is what is computable and what is incomputable. The Kolmogorov Complexity of a string is the length of the minimal specification needed to compute the string given a certain computational model (a program), and that quantity is itself incomputable. Yet a compact representation is a minimalist model that can, in turn, lead to optimal prediction of the underlying generator.
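The incomputability is one-sided, though: any real compressor yields a computable upper bound on Kolmogorov complexity, because the fixed decompressor plus the compressed string together constitute a program that reproduces the string. A small illustration, with zlib as a convenient stand-in for a minimal description:

```python
import zlib

def k_upper_bound(s: bytes) -> int:
    """Computable upper bound on K(s): the length of a compressed
    encoding that a fixed decompressor can expand back into s."""
    return len(zlib.compress(s, 9))
```

A highly patterned string like `b"ab" * 1000` compresses to a few dozen bytes, while patternless data stays near its original length; that gap is the sense in which compact representations capture the structure of their generator.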

Wouldn’t it be astonishing if there were, in fact, computational systems that exceeded the limits of computability? That’s what Ord’s work set out to investigate, though there have been detractors.

Radical Triangulation

Donald Davidson argued that descriptive theories of semantics suffered from untenable complications that could, in turn, be solved by a holistic theory of meaning. Holism, in this sense, is due to the dependency of words and phrases as part of a complex linguistic interchange. He proposed “triangulation” as a solution, where we zero-in on a tentatively held belief about a word based on other beliefs about oneself, about others, and about the world we think we know.

This seems daringly obvious, but it is merely the starting point of the hard work of what mechanisms and steps are involved in fixing the meaning of words through triangulation. There are certainly some predispositions that are innate and fit nicely with triangulation. These are subsumed under The Principle of Charity and even the notion of the Intentional Stance in how we regard others like us.

Fixing meaning via model-making has some curious results. The language used to discuss aesthetics and art tends to borrow from other fields (“The narrative of the painting,” “The functional grammar of the architecture.”) Religious and spiritual terminology often has extremely porous models: I recently listened to Episcopalians discuss the meaning of “grace” for almost an hour with great glee but almost no progress; it was the belief that they were discussing something of ineffable greatness that was moving to them. Even seemingly simple scientific ideas become elaborately complex for both children and adults: we begin with atoms as billiard balls that mutate into mini solar systems that become vibrating clouds of probabilistic wave-particles around groups of properties in energetic suspension by virtual particle exchange.

Can we apply more formal models to the task of staking out this method of triangulation? For Davidson, language was both compositional and holistic, so it stands to reason that optimizing each vector of the triangulation can be rephrased as maximizing the agreement between the existing belief and new beliefs about terms and meaning, the models we hold about others’ beliefs about the terms, and any empirical facts or related desiderata that are in play. And here we may have an application of Solomonoff Induction, again, as an extension to Bayesian model-making. How do I choose to order the meaning signals from each of my belief sources? Under what circumstances do I reorder them or abandon an existing model in an aha moment? If the meta-model for ordering and triangulation is a striving for parsimony, then radical revisionism by reorganizing the underlying explanatory model is optimal when it follows Solomonoff-like principles.

“Optimality” might be straining credulity here, especially given the above description of arguments about the meaning of “grace,” but there may be a modified sense of the word: the mathematical purity of a Solomonoff result is implemented in cognition as a kind of heuristic that tends toward good results in the face of extremely noisy signals.

Learning around the Non Sequiturs

If Solomonoff Induction and its conceptual neighbors have not yet found application in enhancing human reasoning, there are definitely areas where they have potential value. Automatic, unsupervised learning of sequential patterns is an intriguing application, and one that fits closely with the sequence-inference problem at the heart of algorithmic information theory.

Pragmatically, one area where this kind of system might be useful is the problem of how children learn the interchangeability of words, which is the basic operation of grammaticality. Given a sequence of words or symbols, what sort of information is available for figuring out the grammatical groupings? Not much beyond memories of repetitions, often inferred implicitly.

Could we apply some variant of Solomonoff Induction at this point? Recall that we want to find the most compact explanation for the observed symbol stream. Recall also that the form of the explanation is a computer program of some sort that consists of logical functions. It turns out that a procedure that, for every possible sequence, finds the absolutely most compact program is uncomputable. “Uncomputable” (or incomputable) here is a mathematical result tied to the halting problem: to certify that a program is the shortest, we would have to run every shorter candidate program, and some of those candidates never halt, with no general way of identifying them in advance. Being uncomputable is not a death sentence, however. We can come up with approximate methods that pursue the same goal, because any method that incrementally compresses the explanatory program gets closer to the hypothetical best program.

Sequitur, by Nevill-Manning and Witten, is an example of a procedure that approximates this kind of algorithmic-information optimization for string sequences. In the Sequitur algorithm, when repetitions occur in a sequence, a new rule is created that encompasses the repeated grouping. Rules can be abandoned as well, when they no longer cover more than one occurrence because some exemplars have been absorbed by other rules. A slightly more sophisticated variant creates a new rule only when the number of bits required to represent both the rules and the encoded string is lower with the rule than without it. This approach comes even closer to Solomonoff Induction but requires more calculation of the probabilities and information content of the rules and sequences.
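
A drastically simplified, offline sketch of the digram-replacement idea (closer to byte-pair encoding than to the real online Sequitur, which enforces digram uniqueness and rule utility incrementally) might look like this:

```python
# Simplified digram replacement: repeatedly turn the most frequent adjacent
# pair into a grammar rule until no pair repeats. Rule names like "R0" are
# multi-character strings, so they cannot collide with single input symbols.
def infer_rules(seq):
    rules = {}
    symbols = list(seq)
    next_rule = 0
    while True:
        counts = {}
        for a, b in zip(symbols, symbols[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
        if not counts:
            break
        pair, n = max(counts.items(), key=lambda kv: kv[1])
        if n < 2:
            break
        name = f"R{next_rule}"
        next_rule += 1
        rules[name] = pair
        # Rewrite the sequence, replacing the chosen pair with the rule name.
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(name)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols, rules

compressed, rules = infer_rules("abcabcabc")
print(compressed, rules)
```

On a string like "abcabcabc" the repeated grouping "abc" ends up captured by nested rules, and the top-level sequence becomes much shorter than the input, which is the grammar-as-compression intuition the full algorithm exploits.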

I’ve applied this latter approach in various contexts, including using the method to try to explain human guessing performance on predicting symbol sequences. Neural correlates of this capability likely exist, since abstractly related capabilities are used for edge detection in visual processing (an edge is a compact “guess” drawn from a noisy region of a visual field). Finding a grammar-inference brain region (perhaps in Wernicke’s area) that is also used for a range of sequencing problems would support the argument that Solomonoff-like capabilities have a biological reality.

Solomonoff Induction, Truth, and Theism

LukeProg of CommonSenseAtheism fame created a bit of a row when he declared that Solomonoff Induction largely rules out theism, and went on to expand on the theme:

If I want to pull somebody away from magical thinking, I don’t need to mention atheism. Instead, I teach them Kolmogorov complexity and Bayesian updating. I show them the many ways our minds trick us. I show them the detailed neuroscience of human decision-making. I show them that we can see (in the brain) a behavior being selected up to 10 seconds before a person is consciously aware of ‘making’ that decision. I explain timelessness.

There were several reasons for the CSA community to get riled up about these statements and they took on several different forms:

  • The focus on Solomonoff Induction/Kolmogorov Complexity is obscurantist, leaning on abstruse technical terminology.
  • The author is ignoring deductive arguments that support theist claims.
  • The author has joined a cult.
  • Inductive claims based on Solomonoff/Kolmogorov are no different from Reasoning to the Best Explanation.

I think all of these critiques are partially valid, though I don’t think there are any good reasons for thinking theism is true; the fourth one (which I contributed) was a personal realization for me. Though I have been fascinated with topics related to Kolmogorov complexity since the early 90s, I don’t think they are directly applicable to the question of theism versus atheism. Whether we are discussing the historical validity of Biblical claims or the logical consistency of extensions to notions of omnipotence or omniscience, I can’t think of a way that these highly mathematical concepts have direct application.

But what are we talking about? Solomonoff Induction, Kolmogorov Complexity, Minimum Description Length, Algorithmic Information Theory, and related ideas are formalizations of the idea of William of Occam (variously, Ockham) known as Occam’s Razor: given multiple explanations of a given phenomenon, one should prefer the simpler explanation. This notion that the most parsimonious explanation is preferable existed only as a heuristic until the 20th Century, when statistics began to merge with computational theory through information theory. I’m not aware of any scientist describing a trade-off between contending theories that was resolved by an appeal to Occam’s Razor, yet the intuition that the principle mattered remained part of the mathematical and scientific zeitgeist until it was formalized by people like Solomonoff and Kolmogorov.

The concepts are admittedly deep in their mathematical formulations, but at heart is the notion that all logical procedures can be reduced to a computational model. And a computational model running on a standardized computer called a Turing Machine can be expressed as a string of numbers that can be reduced to binary numbers. One can imagine that there are many programs that can produce the same output given the same input. In fact, we can pad the bit stream with any number of no-ops (no-operations) and still get the same output from the computer. Moreover, we can guess that the structure of the program for a given string is essentially a model of the output string, compressing the underlying data into the form of the program. So, among all of the programs for a string, the shortest program is, as Occam’s Razor predicts, the most parsimonious way of generating the string.
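
A toy model (nothing like a real Turing machine, but enough to make the point) is to treat “repeat pattern p, k times” as a program for a string; several such programs emit the same string, and the one with the shortest pattern is the most parsimonious:

```python
# Toy "programming language": a program is (pattern, repeats), and running
# it produces pattern * repeats. Many distinct programs emit the same string.
def run(program):
    pattern, repeats = program
    return pattern * repeats

def programs_for(target):
    # Enumerate every (prefix, count) program whose output is the target.
    progs = []
    for plen in range(1, len(target) + 1):
        if len(target) % plen == 0:
            candidate = (target[:plen], len(target) // plen)
            if run(candidate) == target:
                progs.append(candidate)
    return progs

progs = programs_for("abababab")
print(progs)                                  # includes ("ab", 4) and ("abababab", 1)
shortest = min(progs, key=lambda p: len(p[0]))
print(shortest)                               # ("ab", 4): the most parsimonious
```

The last, trivial program ("abababab", 1) is the analogue of simply restating the data; the shortest one is the analogue of a genuine model of it.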

What comes next is rather impressive: the shortest program among all of the possible programs is also the most likely to continue to produce the “right” output if the string is continued. In other words, as a friend coined it (and as appears in my book Teleology), “compression is truth”: the most compressed and compact program is also the best predictor of the future based on the existing evidence. The formalization of these concepts across statistics, computational theory, and recently philosophy represents a crowning achievement of 20th-Century information theory.
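
The “compression is truth” slogan can be sketched directly (a hedged illustration, with an ordinary compressor standing in for the uncomputable ideal): among candidate continuations of a sequence, prefer the one that keeps the whole sequence most compressible.

```python
import zlib

# Compressed length stands in for shortest-program length. Among candidate
# continuations, the one that keeps the whole sequence most compressible is
# the compression-based "prediction."
def compressed_cost(s):
    return len(zlib.compress(s.encode(), 9))

def best_continuation(history, candidates):
    return min(candidates, key=lambda c: compressed_cost(history + c))

history = "ab" * 64
print(best_continuation(history, ["abababab", "qqqqqqqq"]))
```

The pattern-preserving continuation extends an existing regularity the compressor already models, so it costs fewer bits than a continuation that introduces new structure; that is the prediction-by-parsimony link in miniature.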

I use these ideas regularly in machine learning, and related ideas inform concepts like Support Vector Machines, yet I don’t see a direct connection to human argumentation about complex ideas. Moreover, and I am hesitant to admit this, I am not convinced that human neural anatomy implements much more than vague approximations of these notions (and primarily in relatively low-level perceptual processes).

So does Solomonoff Induction rule out theism? Only indirectly, in that it may help us feel confident about a solid theoretical basis for other conceptual processes that interact more directly with the evidence for and against.

I plan on elaborating on algorithmic information theory and its implications in future posts.