Category: Cognitive Science

Simulated Experimental Morality

I’m deep in Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. It’s also only about the third book I’ve tried to read exclusively on the iPad, but I am finally getting used to the platform. The core thesis of Pinker’s book is something that I have been experimentally testing on people for several years: our moral faculties and decision-making are gradually improving. For Pinker, the thesis is built up elaborately from basic estimates of death rates due to war and homicide in non-state societies versus state societies. It comes with an uncomfortable inversion of the myth of the noble savage: primitive people had a lot to fight about and often did.

My first contact with the notion that morality is changing and improving was Richard Dawkins’s observation in The God Delusion that most modern Westerners feel very uncomfortable with the firebombing of Tokyo in World War II, the saturation bombing of Hanoi, nuclear attacks on civilian populations, or treating people inhumanely based on race or ethnicity, yet that wasn’t the case just decades ago. More moral drift can be seen in changing sentiments about the right of gay people to marry. Experimentally, then, I would pose simple trolley problems over dinner or in conversation and then ask whether anyone would condone a nuclear attack against a civilian population. The first response to the latter is always “no,” which reflects a gut moral sentiment, though a few people have agreed that it may be “permissible” (to use the language of these kinds of dilemmas) in response to a similar attack and when “command and control assets” may be mixed into the target area. But that reluctant permissibility always follows the initial revulsion.

Pinker’s book suggested another type of experimental simulation, however. In it he describes how the foraging behavior of chimpanzees in sparse woods results in males often traveling alone at the edges of their ranges. Neighboring groups of chimps will systematically kill these loners when they have a 3-to-1 advantage in numbers. I’m curious whether the sparseness of resources and population is at the heart of the violence, and whether the same holds for the violence patterns of hunter-gatherers. If so, it seems plausible to try to simulate the evolution of moral behavior as population density and interconnectedness increase. When population density is low and there are memes/genes that trade off cooperation against raiding for resources, the raiding genes persist in equilibrium alongside in-group cooperation: lone foragers are easy targets and reprisal is rare. As population density increases, the raiding genes simply die out because cooperation becomes a non-zero-sum game: repeated encounters make the returns to cooperating compound while the returns to raiding shrink.
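Pinker doesn’t spell out such a simulation, so here is a minimal sketch of the idea using replicator dynamics in Python. The payoff functions, the density values, and the step size are all assumptions of mine, chosen only to reproduce the qualitative claim: raiding pays when density is low and reprisal is rare, while cooperation pays once density makes interactions repeat.

```python
# A minimal sketch of the density/cooperation dynamic described
# above. All numbers here are illustrative assumptions, not
# anything estimated from Pinker's data.

def payoffs(density):
    """Return (cooperator payoff, raider payoff) at a given density.

    The linear forms are arbitrary: cooperation compounds with
    contact, while raiding decays as lone targets thin out and
    retaliation becomes more likely.
    """
    coop = 1.0 + 2.0 * density
    raid = 2.0 - 1.5 * density
    return coop, raid

def step(x_coop, density, rate=0.1):
    """One round of replicator dynamics on the cooperator share."""
    coop, raid = payoffs(density)
    avg = x_coop * coop + (1.0 - x_coop) * raid
    return x_coop + rate * x_coop * (coop - avg)

for density in (0.1, 0.3, 0.6, 0.9):
    x = 0.5  # start with an even split of cooperators and raiders
    for _ in range(500):
        x = step(x, density)
    print(f"density={density:.1f} -> long-run cooperator share ~ {x:.2f}")
```

With these payoffs the crossover sits near a density of 0.29, so the sparse regime settles into raiding and everything denser converges on cooperation. An obvious enrichment would be agents on a spatial lattice with local interaction and mutation rather than a single well-mixed population.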

There is an enormous amount of variability possible in a simulation like this, but I suspect that, given almost any starting conditions, morality is simply inevitable.

Evolution, Rationality, and Artificial Intelligence

We now know that our cognitive faculties are not perfectly rational. Indeed, our cultural memory has regularly reflected that fact. But we often thought we might be getting a handle on what it means to be rational by developing models of what good thinking might look like and using them in political, philosophical, and scientific discourse. The models were based on nascent ideas like the logical coherence of arguments, internal consistency, the avoidance of tautology, and consistency with empirical data.

But an interesting and quite basic question is why we should be able to formulate logical rules and create increasingly impressive systems of theory and observation, given our complex evolutionary history. We have big brains, sure, but they evolved to manage social relationships and find resources, not to understand the algebraic topology of prime numbers or the statistical oddities of quantum mechanics. Yet they seem well suited to these newer and more abstract tasks.

Alvin Plantinga, a theist and modern philosopher whose work has touched everything from epistemology to philosophy of religion, formulated his Evolutionary Argument Against Naturalism (EAAN) as a kind of complaint that the likelihood of rationality arising from evolutionary processes is very low. (Really he is most concerned with the probability of “reliability,” by which he means that most of our conclusions and observations are true, but I am substituting rationality for this, with an additional Bayesian overlay.)
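For reference, the argument is usually stated probabilistically; the notation below follows Plantinga’s standard presentation, though the gloss on it is mine:

```latex
% Plantinga's core claim, with R = "our cognitive faculties are
% reliable", N = naturalism, and E = "we arose through evolution":
P(R \mid N \wedge E) \;\text{ is low, or at best inscrutable}
```

The evolutionary reply sketched in the next paragraph amounts to claiming that this conditional probability is in fact high, at least for fitness-relevant beliefs, because selection penalizes systematically false readings of the environment.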

Plantinga mostly wants to argue that our faculties may be rational because God made them so, rather than through a natural process. The response to this from an evolutionary perspective is fairly simple: evolution is an adaptive process, and adapting to a series of niche signals means not getting those signals wrong. Technical issues arise here concerning how specific adaptations can result in more general rational faculties, but we can, at least in principle, imagine (and investigate) bridge rules that extend from complex socialization out to the deep complexities of modern morality and the Leviathan state, and from optimizing spear throwing out to shooting rockets into orbit.

I’ve always held that Good Old-Fashioned AI, which tries to use decision trees created by specification, falls into a trap similar to Plantinga’s. By expecting the procedures of mind to be largely rational, such systems respond to the world with a brittleness that is as impotent as Plantinga’s “hyperbolic doubt” about naturalism. If so, though, this raises the possibility that the only path to the kind of behavioral plasticity and careful balance of rationality and irrationality that we see as uniquely human is to simulate a significant portion of our evolutionary history. This might be formulated as an Evolutionary Argument Against AI (EAAAI), though I don’t think of it as a defeater like that; it is something more like an Evolutionary Argument for the Complexity of AI (and I’ll stop playing with the acronyms now).
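To make that brittleness concrete, here is a toy of my own construction, not any real system: a hand-specified decision tree for a hypothetical guard agent that answers confidently within the cases its designer anticipated and is helpless outside them.

```python
# A toy, hand-specified "GOFAI"-style decision tree (purely
# illustrative). It is confident inside the anticipated cases and
# fails on anything else.

def threat_response(actor: str, armed: bool, distance_m: float) -> str:
    if actor == "intruder":
        if armed:
            return "raise alarm"
        return "challenge verbally" if distance_m < 5 else "observe"
    if actor == "staff":
        return "ignore"
    # Everything the designer did not anticipate falls through here.
    raise ValueError(f"no rule for actor={actor!r}")

print(threat_response("intruder", armed=False, distance_m=3.0))
try:
    threat_response("visitor", armed=False, distance_m=3.0)
except ValueError as err:
    print("brittle failure:", err)
```

An evolved or learned policy degrades more gracefully precisely because it is shaped by a distribution of situations rather than an enumeration of them.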