Evolution, Rationality, and Artificial Intelligence

We now know that our cognitive faculties are not perfectly rational; indeed, our cultural memory regularly reflects that fact. Still, we have often thought we might get a handle on what it means to be rational by developing models of what good thinking might be like and applying them in political, philosophical, and scientific discourse. These models were built on nascent ideas like the logical coherence of arguments, internal consistency, a minimum of tautologies, and consistency with empirical data.

But an interesting and quite basic question remains: why should we be able to formulate logical rules and build increasingly impressive systems of theory and observation, given our complex evolutionary history? We have big brains, sure, but they evolved to manage social relationships and find resources, not to understand the algebraic topology of prime numbers or the statistical oddities of quantum mechanics. Yet they seem well suited to these newer and more abstract tasks.

Alvin Plantinga, a theist and modern philosopher whose work has touched everything from epistemology to philosophy of religion, formulated his Evolutionary Argument Against Naturalism (EAAN) as a kind of complaint that the likelihood of rationality arising from evolutionary processes is very low. (Strictly, he is most concerned with the probability of “reliability,” by which he means that most of our conclusions and observations are true, but I am substituting rationality for this, with an additional Bayesian overlay.)
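That Bayesian overlay can be made concrete with a toy calculation. The sketch below is not from Plantinga; it is a minimal illustration, with entirely hypothetical numbers, of how the posterior probability of naturalism-plus-evolution shifts depending on how likely reliable faculties are under it:

```python
# Hypothetical Bayesian sketch of the EAAN-style worry. All probabilities
# are illustrative placeholders, not values drawn from Plantinga's argument.
# N = naturalism-plus-evolution, R = our faculties appear reliable.

def posterior_n_given_r(prior_n, p_r_given_n, p_r_given_not_n):
    """Bayes' rule: P(N|R) = P(R|N)P(N) / [P(R|N)P(N) + P(R|~N)P(~N)]."""
    num = p_r_given_n * prior_n
    denom = num + p_r_given_not_n * (1.0 - prior_n)
    return num / denom

# If reliable faculties are unlikely given naturalism (Plantinga's claim),
# then observing reliability drags the posterior for naturalism down:
low = posterior_n_given_r(prior_n=0.5, p_r_given_n=0.1, p_r_given_not_n=0.9)

# If adaptation tends to track truth (the evolutionary rejoinder), the same
# observation leaves naturalism no worse off than before:
high = posterior_n_given_r(prior_n=0.5, p_r_given_n=0.9, p_r_given_not_n=0.9)

print(round(low, 2), round(high, 2))  # → 0.1 0.5
```

The philosophical dispute, on this framing, is entirely about which likelihood term is right; the arithmetic itself is uncontroversial.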

Plantinga mostly wants to argue that our faculties may be rational because God made them rather than because a natural process did. The response from an evolutionary perspective is fairly simple: evolution is an adaptive process, and adapting to a series of niche signals means not getting those signals wrong. Technical issues do arise here concerning how specific adaptations can yield more general rational faculties, but we can, at least in principle, imagine (and investigate) bridge rules that extend from complex socialization to the deep complexities of modern morality and the Leviathan state, or from optimizing spear throwing to shooting rockets into orbit.

I’ve always held that Good Old Fashioned AI, which tries to use decision trees created by specification, falls into a trap similar to Plantinga’s. By expecting the procedures of mind to be largely rational, such systems produce a brittle response to the world that is as impotent as Plantinga’s “hyperbolic doubt” about naturalism. If so, though, it raises the possibility that the only path to the behavioral plasticity and careful balance of rationality and irrationality that we see as uniquely human is to simulate a significant portion of our entire evolutionary history. This might be formulated as an Evolutionary Argument Against AI (EAAAI), but I don’t think of it as a defeater like that; it is something more like an Evolutionary Argument for the Complexity of AI (and I’ll stop playing with the acronyms now).
