Toby Ord of Giving What We Can has other interests, including ones that connect back to Solomonoff inference and algorithmic information theory. Specifically, Ord worked earlier on topics related to hypercomputation or, more simply put, the notion that there may be computational systems that exceed the capabilities of Turing Machines.
Turing Machines are abstract computers capable of computing any effectively calculable function, and the question of what is computable and what is not has dominated theoretical computer science. The Kolmogorov complexity of a string is the length of the shortest program that outputs the string on a fixed universal machine, and that quantity is itself incomputable. Yet such a compact representation is a minimalist model that can, in turn, lead to optimal prediction of the underlying generator's future output.
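Though the true Kolmogorov complexity is incomputable, any off-the-shelf compressor gives a computable upper bound: the compressed output is one particular description from which the string can be regenerated. A minimal Python sketch using zlib (illustrative only; zlib's bound is crude compared to the true minimum):

```python
import os
import zlib

def compressed_length(s: bytes) -> int:
    """A computable upper bound on Kolmogorov complexity via compression.

    The true Kolmogorov complexity is incomputable; a real compressor
    only bounds it from above, relative to its own decompressor.
    """
    return len(zlib.compress(s, 9))

# A highly regular string compresses far below its raw length,
# while random-looking bytes barely compress at all.
regular = b"ab" * 1000
random_ish = os.urandom(2000)

print(compressed_length(regular))     # tiny: the pattern has a short description
print(compressed_length(random_ish))  # near 2000: no shorter description found
```

The gap between the two results is exactly what Kolmogorov complexity formalizes: structure admits a short description, while randomness does not.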
Wouldn’t it be astonishing if there were, in fact, computational systems that exceeded the limits of computability? That’s what Ord’s work set out to investigate, though there have been detractors.
LukeProg interviews Toby Ord, Oxford philosopher and founder of Giving What We Can, about what may be an even harder problem than the existential dilemma over being itself: how do we overcome uncertainty in formulating an ethical system?
Conversations from the Pale Blue Dot: Toby Ord
There are few definitive answers, of course. How could there be? But Toby does a great job of drawing a map of deontological theories (rule-based, including “divine command theory”), virtue ethics, and consequentialist theories (utilitarianism, etc.). He’s partial to the latter (and even thinks most ethical systems might be unified as consequentialist), and his foundation has worked to refine the calculations of how an individual’s contributions to specific charities can result in future reductions in human suffering.
Here’s a quandary, though: is exuberant excess sometimes necessary for enhancing future productivity that might lead to greater reductions in human suffering? I may just be hoping to justify my failure to follow Toby’s example or, at least, to make up for it later in life.
Gleaned from Pinker’s The Better Angels of Our Nature, the Dutch experiments concerning the “broken windows hypothesis” are illuminating. The hypothesis dates to the 1980s, when criminologists Wilson and Kelling suggested that broken windows in an abandoned building might signal to other vandals that breaking windows is permissible. The theory, though widely disputed among criminologists, informed increased enforcement efforts in the United States in the 1990s that correlated with the remarkable reductions in the crime rate that have continued into the current decade.
What of the Dutch experiments? In artificial setups where people can, for instance, litter fliers, they litter more when the environment is already littered or the surrounding buildings are covered with graffiti. Small acts of theft are also more common in a disordered environment.
If our moral sentiments are so heavily shaped by the physical environment, it takes little convincing that our moral predispositions are socially influenced as well. Teenagers and college students are case studies.
But what of positive influences? If graffiti enhances criminality, and a neutral environment is, well, neutral, is it possible that a beautiful, inspiring environment would promote positive morality?
In many cities and towns, artistic murals are applied to high-graffiti areas with the express purpose of eliminating graffiti, for example. Can astonishing architecture do similar things? Following the Dutch experimental setup, it would be easy to place fliers on bicycles around art galleries and interesting buildings, then monitor the littering rates. There are obvious problems with this methodology: the people who live and work in some areas may have educational, class, and other differences from those who frequent areas more prone to littering and graffiti. Careful experimental design could help to untangle those factors and establish whether we behave better in better surroundings.
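The basic comparison such a field study would produce is a difference in littering rates between two environments. As a sketch of how that might be analyzed (the counts below are invented purely for illustration), a two-proportion z-test:

```python
from math import sqrt, erf

def two_proportion_z(litter_a: int, n_a: int, litter_b: int, n_b: int):
    """One-sided two-proportion z-test: is littering likelier in A than B?"""
    p_a, p_b = litter_a / n_a, litter_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (litter_a + litter_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # One-sided p-value from the standard normal CDF
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# Hypothetical counts: 48 of 100 fliers littered near a graffiti-covered
# wall versus 25 of 100 near a mural (not real data).
z, p = two_proportion_z(48, 100, 25, 100)
print(round(z, 2), round(p, 4))
```

The confounds mentioned above (education, class, foot traffic) are exactly what this naive comparison ignores; a real design would need matched sites or randomized assignment of conditions to locations.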
Eric X. Li in The New York Times argues that:
America and China view their political systems in fundamentally different ways: whereas America sees democratic government as an end in itself, China sees its current form of government, or any political system for that matter, merely as a means to achieving larger national ends.
The implication developed further is that:
The West seems incapable of becoming less democratic even when its survival may depend on such a shift.
Li’s argument develops the idea that the repression of the Tiananmen movement in China was a strategic move that resulted in the underlying political stability for the current economic growth wave in China!
It’s hindsight bias, though, that builds on a kind of utilitarianism that asserts that there are political and social values that outweigh the rights of individuals (and that are predictable in their output). Indeed, Li asserts as much with:
The fundamental difference between Washington’s view and Beijing’s is whether political rights are considered God-given and therefore absolute or whether they should be seen as privileges to be negotiated based on the needs and conditions of the nation.
God is, of course, unnecessary in this equation. What is more relevant is whether an individual’s rights to freedom–whether of conscience or property–should supersede the desires of the state or collective. This was expressed as endowment by the Creator in the Declaration of Independence, but no particular justification was provided in the Constitution. Rights are simply taken as good in the US Constitution, laid out in the same floor-plan as the powers and limitations of the Judiciary and the Legislative Branch, and even limited in some cases, as where property seizure with just compensation is allowable.
Critically, protecting the individual through negative rights, when contrasted with asserting political power to achieve uncertain future goals, is less likely to harm anyone and shows the inherent weaknesses of proactive utilitarian ethics. Eric Li should take this to heart.
For fun, I decided to try writing a partial post using Apple’s iBooks Author. The application runs on Mac OS X Lion and is available for free. It appears to be derivative of Keynote, which explains Apple’s rapid development of the authoring tool.
There are some limitations, though. I couldn’t embed equations from Word for Mac 2011 without converting them into images. It also publishes only to the iBookstore, although you can export to PDF (as below). There are few PDF export options, however, and the metadata and labeling include Apple logos.
Tearing apart the .iba format via unzip revealed a collection of .jpg and .tiff images, a binary color array, and an .xml specification of the project. Fairly simple, but it does not include the compiled .epub file that the iBookstore generally takes.
Total elapsed time: 1 hour (including download/installation). With improvements to the software and with more experience, that should be halved.
I recently re-read Mark Alan Walker’s manuscript (unpublished?), A Neo-Irenaean Theodicy: Evolution, Playing God and Becoming Gods. The argument is straightforward and expands on the theodicy of Irenaeus: God created evil as part of the process of letting His children–humanity–develop their own moral faculties on the way to becoming gods themselves. This quiet trick, contra Augustinian theodicy, made it fashionable to treat The Fall as somewhat metaphorical, a reading inverted by the reclamation of the potential for moral perfection in Mary and Jesus.
Professor Walker’s paper takes Irenaeus further by suggesting that the obligation of becoming like God extends perhaps to genetic manipulation of ourselves, for if having bigger, better brains makes us less likely to sin and more like God, then enhancement becomes a moral obligation. The argument seems to prescribe even more radical actions, too: are theists morally obligated, following our ascension as gods, to create new universes? Are simulations mandatory? Should Christians begin now?
I’m deep in Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. It’s also only about the third book I’ve tried to read exclusively on the iPad, but I am finally getting used to the platform. The core thesis of Pinker’s book is something I have been experimentally testing on people for several years: our moral faculties and decision-making are gradually improving. For Pinker, the thesis is built up elaborately from basic estimates of death rates due to war and homicide in non-state versus state societies. It comes with an uncomfortable inversion of the nobility of the savage mind: primitive people had a lot to fight about and often did.
My first contact with the notion that morality is changing and improving was Richard Dawkins’s observation in The God Delusion that most modern Westerners feel very uncomfortable with the firebombing of Tokyo in World War II, the saturation bombing of Hanoi, nuclear attack against civilian populations, or treating people inhumanely based on race or ethnicity. Yet that wasn’t the case just decades ago. More moral drift can be seen in changing sentiments concerning the rights of gay people to marry. Experimentally, then, I would ask, over dinner or conversation, about simple moral trolley experiments and then move on to whether anyone would condone a nuclear attack against civilian populations. The first response to the latter is always “no,” reflecting a gut moral sentiment, though a few people have agreed that it may be “permissible” (to use the language of these kinds of dilemmas) in response to a similar attack, when “command and control assets” may be mixed into the target area. But that grudging permissibility always follows the initial revulsion.
Pinker’s book suggested another type of experimental simulation, however. In it he describes how the foraging behavior of many chimpanzees in sparse woods results in males often traveling alone at the edges of their populations. Neighboring groups of chimps will systematically kill these loners when they have a 3-to-1 advantage in numbers. I’m curious whether the sparseness of resources and population is at the heart of the violence, and whether the same holds for the violence patterns of hunter-gatherers. If so, it seems plausible to try to simulate the evolution of moral behavior as population density and interconnectedness increase. When population density is low and there are memes/genes that trade off cooperation against raiding for resources, the raiding genes persist in equilibrium alongside cooperation with ingroup members. As population density increases, the raiding genes die out because cooperation becomes a positive-sum game.
There is an enormous amount of variability possible in a simulation like this, but I suspect that, given almost any initial starting conditions, morality is simply inevitable.
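A minimal version of such a simulation can be written as simple replicator dynamics. The payoff structure below is invented for illustration, not drawn from Pinker: cooperators earn a surplus that grows with density (more encounters, more positive-sum exchange), while the raiding payoff is diluted by density (victims are better defended and retaliation is likelier).

```python
import random

def simulate(density: float, generations: int = 200, seed: int = 0) -> float:
    """Toy replicator dynamics for 'raider' vs 'cooperator' strategies.

    Returns the final fraction of cooperators. Payoffs are illustrative
    assumptions: cooperation benefits scale with density and with how
    many cooperators there are to trade with; raiding pays a fixed loot
    bonus that shrinks as density (and thus defense) rises.
    """
    rng = random.Random(seed)
    frac_coop = 0.5  # start with an even split
    for _ in range(generations):
        coop_payoff = 1.0 + 2.0 * density * frac_coop  # network benefits
        raid_payoff = 2.0 * (1.0 - density)            # loot, diluted by defense
        mean = frac_coop * coop_payoff + (1 - frac_coop) * raid_payoff
        # Discrete replicator update: strategies grow with relative fitness,
        # plus a little mutation noise
        frac_coop = frac_coop * coop_payoff / mean
        frac_coop = min(1.0, max(0.0, frac_coop + rng.gauss(0, 0.001)))
    return frac_coop

for d in (0.1, 0.5, 0.9):
    print(d, round(simulate(d), 2))
```

Under these assumed payoffs, raiding dominates at low density and collapses at high density, matching the intuition in the paragraph above; whether that robustness survives richer spatial models is exactly what a fuller simulation would test.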
Dystopian literature is mostly about the unintended consequences of technological change. Cory Doctorow expands on this theme related to technological singularities on Boing Boing:
Indeed, it seems to me that in literature, the Singularity’s role is to serve as a straw-man for critiquing technology as a one-sided panacea.
Fair enough. Literature and drama are all about conflicts and Man vs. Technology is at least one of the primary conflicts of the modern age.
But why is it that we are drawn to this notion of some kind of transcendent mechanism that relieves us of the struggles of everyday existence? It’s a central theme of Hinduism (get off the wheel of existence), Buddhism (existence is void; free the mind of your very desire for it), and Christianity and Islam (post-life existence is better and more perfect). I think it arises from the same predisposition for magical thinking, combined with hope, that is part of imaginative play among children. In play, the child creates an imagined and utopian existence where their alter egos typically overcome all obstacles. There are a few sex differences that are partly conditioning and likely partly biological, but the patterns are remarkably utopian in the dispositions of the children’s play avatars.
The translation of this into adult formulations of heavens filled with inchoate goodness and light (or many virgins), or even an emptiness that defies ordinary characterization, is just an extension of this urge to play. In a technological world, singularities are the secular equivalent, but with the additional propellant of observed technological change that surrounds all of us.
We now know that our cognitive faculties are not perfectly rational. Indeed, our cultural memory has regularly reflected that fact. But we often thought we might be getting a handle on what it means to be rational by developing models of what good thinking might look like and using them in political, philosophical, and scientific discourse. The models were based on nascent ideas like the logical coherence of arguments, internal consistency, the avoidance of tautology, and consistency with empirical data.
But an interesting and quite basic question is why we should be able to formulate logical rules and create increasingly impressive systems of theory and observation given our complex evolutionary history. We have big brains, sure, but they evolved to manage social relationships and find resources–not to understand the distribution of prime numbers or the statistical oddities of quantum mechanics–yet they seem well suited to these newer and more abstract tasks.
Alvin Plantinga, a theist and modern philosopher whose work has touched everything from epistemology to philosophy of religion, formulated his Evolutionary Argument Against Naturalism (EAAN) as a kind of complaint that the likelihood of rationality arising from evolutionary processes is very low (really he is most concerned with the probability of “reliability,” by which he means that most of our conclusions and observations are true, but I am substituting rationality for this with an additional Bayesian overlay).
Plantinga mostly wants to argue that our faculties are rational because God made them rather than because a natural process produced them. The response from an evolutionary perspective is fairly simple: evolution is an adaptive process, and adapting to a series of niche signals involves not getting those signals wrong. Technical issues arise concerning how specific adaptations can yield more general rational faculties, but we can, at least in principle, imagine (and investigate) bridge rules that extend from complex socialization to the deep complexities of modern morality and the Leviathan state, and from optimizing spear throwing to shooting rockets into orbit.
I’ve always held that Good Old Fashioned AI, which tries to use decision trees created by specification, falls into a trap similar to Plantinga’s. By expecting the procedures of mind to be largely rational, such systems produce a brittle response to the world that is as impotent as Plantinga’s “hyperbolic doubt” about naturalism. If so, though, it suggests that the only path to the behavioral plasticity and careful balance of rationality and irrationality that we see as uniquely human may run through simulating a significant portion of our entire evolutionary history. This might be formulated as an Evolutionary Argument Against AI (EAAAI), but I don’t think of it as a defeater like that; it is something more like an Evolutionary Argument for the Complexity of AI (and I’ll stop playing with the acronyms now).