Category: Evolutionary

Evolutionary Oneirology

I was recently contacted by a startup that is developing a dream-recording app. The startup wants to automatically extract semantic relationships and correlate the narratives that dreamers type into their phones. I assume that the goal is to help the user understand their dreams. But why should we assume that dreams are understandable? We now know that waking cognition is unreliable, that meta-cognitive strategies influence decision-making, that base rate fallacies are the norm, that perceptions are shaped by apophenia, that framing and language choices dominate decision-making under ambiguity, and that moral judgments are driven by impulse and feeling rather than any rational calculus.

Yet there are some remarkable consistencies in dream content that have led to elaborate theorizing down through the ages. Dreams, by being cryptic, want to be explained. But the content of dreams, when sorted out, leads us less to Kekulé's ring or to Freud and Jung, and more to the question of why there is so much anxiety in dreams. The evolutionary theory of dreaming proposed by the Finnish researcher Antti Revonsuo tries to explain the overrepresentation of threats and fear in dreams by suggesting that the brain relives conflict events as a form of implicit learning. Evidence in support of the theory includes observations that threatening dreams increase in frequency for children who have experienced trauma, together with the cross-cultural persistence of threatening dream content (and likely cross-species persistence, as anyone who has watched a cat twitch in deep sleep suspects). To date, however, the question of whether these dream cycles result in learning or improved responses to future conflict remains unanswered.

I turned down consulting for the startup because of time constraints, but the topic of dream anxiety comes back to me every few years when I startle out of one of those recurring dreams where I haven't studied for the final exam and am desperately pawing through a sheaf of blank paper trying to find my notes. I apparently still haven't learned enough about deadlines, just as my ancient ancestors never learned enough about saber-toothed tiger stalking patterns.

Simulated Experimental Morality

I’m deep in Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. It’s also only about the third book I’ve tried to read exclusively on the iPad, but I am finally getting used to the platform. The core thesis of Pinker’s book is something that I have been experimentally testing on people for several years: our moral faculties and decision-making are gradually improving. For Pinker, the thesis is built up elaborately from estimates of death rates due to war and homicide in non-state versus state societies. It comes with an uncomfortable inversion of the nobility of the savage mind: primitive people had a lot to fight about and often did.

My first contact with the notion that morality is changing and improving was Richard Dawkins’s observation in The God Delusion that most modern Westerners feel very uncomfortable with the firebombing of Tokyo in World War II, the saturation bombing of Hanoi, nuclear attack against civilian populations, or treating people inhumanely based on race or ethnicity. Yet that wasn’t the case just decades ago. More moral drift can be seen in changing sentiments concerning the right of gay people to marry. Experimentally, then, I would ask over dinner or in conversation about simple trolley problems and then move on to whether anyone would condone a nuclear attack against civilian populations. There is always a first response of “no” to the latter, reflecting a gut moral sentiment, though a few people have agreed that it might be “permissible” (to use the language of these dilemmas) in response to a similar attack, and when “command and control assets” might be mixed into the attack area. But that gentle permissibility always follows the initial revulsion.

Pinker’s book suggested another type of experimental simulation, however. In it he describes how the foraging behavior of chimpanzees in sparse woodland often leaves males traveling alone at the edges of their ranges. Neighboring groups of chimps will systematically kill these loners when they have a 3-to-1 advantage in numbers. I’m curious whether the sparseness of resources and population is at the heart of the violence, and whether the same dynamic underlies the violence patterns of hunter-gatherers. If so, it seems plausible to try to simulate the evolution of moral behavior as population density and interconnectedness increase. When population density is low and there are memes/genes that trade off cooperation against raiding for resources, the raiding variants persist in equilibrium alongside cooperation with in-group members. As population density increases, the raiding variants simply die out because cooperation becomes a positive-sum game.
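
A minimal sketch of the kind of simulation I have in mind, in Python, with entirely made-up payoff values: each agent carries a heritable strategy, either cooperate or raid; a density parameter controls both how many encounters an agent has per generation and how likely a raid target is to be a lone forager rather than backed by allies (a crude stand-in for the chimps' 3-to-1 rule); and reproduction is proportional to accumulated payoff. None of the constants are calibrated to anything; they exist only to exercise the logic.

```python
import random

# Strategy labels for the two heritable behaviors in the toy model.
COOPERATE, RAID = "cooperate", "raid"

def step(pop, density, rng,
         b=0.4,        # per-encounter benefit of mutual cooperation
         loot=1.0,     # raider's gain from hitting a lone target
         loss=0.8,     # victim's loss in a successful raid
         reprisal=1.2, # raider's cost when the target has allies nearby
         mutation=0.01):
    """Advance the population one generation.

    density sets both the number of encounters per agent and the chance
    that a raid target is backed by in-group allies (1 - 1/density).
    All payoff constants are illustrative guesses, not published values.
    """
    n = len(pop)
    fitness = [1.0] * n  # baseline subsistence payoff for everyone

    for i in range(n):
        for _ in range(density):            # denser worlds mean more encounters
            j = rng.randrange(n)
            if j == i:
                continue
            if pop[i] == COOPERATE and pop[j] == COOPERATE:
                fitness[i] += b
                fitness[j] += b
            elif pop[i] == RAID and pop[j] == COOPERATE:
                if rng.random() < 1.0 / density:   # target is a lone forager
                    fitness[i] += loot
                    fitness[j] -= loss
                else:                              # target's group drives the raider off
                    fitness[i] -= reprisal
            # raider-vs-raider encounters are treated as a wash

    # Reproduction in proportion to payoff, with a little mutation.
    floor = min(fitness)
    weights = [f - floor + 0.01 for f in fitness]
    offspring = rng.choices(pop, weights=weights, k=n)
    return [
        (RAID if s == COOPERATE else COOPERATE) if rng.random() < mutation else s
        for s in offspring
    ]

def raider_fraction(pop):
    return sum(s == RAID for s in pop) / len(pop)

if __name__ == "__main__":
    rng = random.Random(42)
    for density in (1, 2, 4, 8):
        pop = [rng.choice([COOPERATE, RAID]) for _ in range(200)]
        for _ in range(300):
            pop = step(pop, density, rng)
        print(f"density={density}: raider fraction after 300 generations = {raider_fraction(pop):.2f}")
```

If the intuition is right, sweeping the density parameter should show raiders hanging on at the sparse end and washing out at the dense end, and the payoff constants are easy to perturb to see how robust that outcome is.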

There is an enormous amount of variability possible in a simulation like this, but I suspect that, given almost any initial starting conditions, morality is simply inevitable.