Category: evolution

Zebras with Machine Guns

I was just rereading some of the literature on Plantinga’s Evolutionary Argument Against Naturalism (EAAN) as a distraction from trying to write too much on ¡Reconquista!, since it looks like I am on a much faster trajectory to finishing the book than I had thought. EAAN is a curious little argument that some have dismissed as a resurgent example of scholastic theology. It has some newer trappings that we see in modern historical method, however, especially in the use of Bayes’ Theorem to establish the warrant of beliefs by trying to cast those warrants as probabilities.

A critical part of Plantinga’s argument hinges on the notion that evolutionary processes optimize for behavior and not necessarily for belief. Therefore, it is plausible that an individual could hold false beliefs that are nonetheless adaptive. For instance, Plantinga gives the example of a man who desires to be eaten by tigers but always feels hopeless when confronted by a given tiger because he doesn’t feel worthy of that particular tiger, so he runs away and looks for another one. This may seem like a strange conjunction of beliefs and actions that happen to result in the man surviving, but we know from modern psychology that people can form elaborate justifications for perceived events and wild metaphysics to coordinate those justifications.

If that is the case, for Plantinga, the evolutionary consequence is that we should not trust our belief in our reasoning faculties because they are effectively arbitrary. There are dozens of responses to this argument that dissect it along many different dimensions. I’ve previously showcased Branden Fitelson and Elliott Sober’s Plantinga’s Probability Arguments Against Evolutionary Naturalism from 1997, which I think is one of the most complete examinations of the structure of the argument. There are two critical points that I think emerge from Fitelson and Sober. First, there is the sober reminder of the inherent frailty of scientific method that needs to be kept in mind. Science is an evolving work involving many minds operating, when at its best, in a social network that reduces biases and methodological overshoots. It should be seen as a tentative foothold against “global skepticism.”

The second, and critical, take-away from that response is more nuanced, however. The notion that our beliefs can be arbitrarily disconnected from adaptive behavior in an evolutionary setting, like the tiger survivor, requires a very different kind of evolution than we theorize. Fitelson and Sober point out that if anything were possible, zebras might have developed machine guns to defend against lions rather than just cryptic stripes. Instead, the sieve of possible solutions to adaptive problems is built on the genetic and phenotypic variants that came before. This limits the range of arbitrary, non-true beliefs that can be compatible with an adaptive solution. If the joint probability of true belief and adaptive behavior is much higher than the alternative, which we might guess is true, then there is a greater probability that our faculties are reliable. In fact, we could argue, using a parsimony argument that extends Bayesian analysis to the general case of optimal inductive models (Sober actually works on this issue extensively), that there are classes of inductive solutions that, by eliminating add-ons, predictively outperform those solutions that have extra assumptions and entities. So, P(not getting eaten | true belief that tigers are threats) >> P(not getting eaten | false beliefs about tigers), especially when updated over time. I would be remiss if I didn’t mention that William of Ockham of Ockham’s Razor fame was a scholastic theologian, so if Plantinga’s argument is revisiting those old angels-on-the-head-of-a-pin-style arguments, it might be opposed by a fellow scholastic.
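The inequality above can be sketched as a toy Bayesian update. This is a minimal sketch with invented survival probabilities for the two hypotheses; none of these numbers come from Fitelson and Sober:

```python
# Toy Bayesian update comparing two hypotheses about our faculties:
# R = "beliefs track truth", A = "beliefs arbitrary but behavior adaptive".
# All probabilities below are invented stand-ins for illustration.

def update(prior_r, p_survive_r, p_survive_a, n_events):
    """Posterior P(R) after observing n_events survival events."""
    p_r = prior_r
    for _ in range(n_events):
        num = p_r * p_survive_r
        p_r = num / (num + (1 - p_r) * p_survive_a)
    return p_r

# Even a modest per-encounter edge for true belief compounds over time.
print(update(0.5, 0.99, 0.60, 1))   # small shift after one encounter
print(update(0.5, 0.99, 0.60, 20))  # near certainty after twenty
```

The point is only that repeated updating amplifies any systematic advantage of truth-tracking faculties, which is the “especially when updated over time” clause made concrete.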

The Obsessive Dreyfus-Hawking Conundrum

I’ve been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over, so I inched up into triple-digit speeds in the M6. Because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F-117s practiced touch-and-goes at Holloman in the background, and then driving my carpool truck out to the high energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before trying to track down the trailhead and starting on my hike. (I would have run the thing but wanted to go to lunch later and didn’t have access to a shower. Neither restaurant nor diner deserves an après-run moi.) And then I was on the trail and I kept stopping and taking plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus. In English, the syntax and semantics are less constrained and the pragmatics more pronounced, but the emotional high is much the same.

With the recent death of Hubert Dreyfus at Berkeley it seems an opportune time to consider the uniquely human capabilities that are involved in each of these creative ventures. Uniquely, I suggest, because we can’t yet imagine what it would be like for a machine to do the same kinds of intelligent tasks. Yet, from Stephen Hawking through to Elon Musk, influential minds are worried about what might happen if we develop machines that rise to the level of human consciousness. This might be considered a science fiction-like speculation since we have little basis for conjecture beyond the works of pure imagination. We know that mechanization displaces workers, for instance, and think it will continue, but what about conscious machines?

For Dreyfus, the human mind is too embodied and situational to be considered an encodable thing representable by rules and algorithms. Much like the trajectory of a species through an evolutionary landscape, the mind is, in some sense, an encoded reflection of the world in which it lives. Taken further, the evolutionary parallel becomes even more relevant in that it is embodied in a sensory and physical identity, a product of a social universe, and an outgrowth of some evolutionary ping pong through contingencies that led to greater intelligence and self-awareness.

Obsession with whatever cultivars, whatever traits and tendencies, that lead to this riot of wordplay and software refinement is a fine example of how this moves away from the fears of Hawking and towards the impossibilities of Dreyfus. We might imagine that we can simulate our way to the kernel of instinct and emotion that makes such things possible. We might also claim that we can disconnect the product of the effort from these internal states and the qualia that defy easy description, since the books and the new technologies have only a desultory correspondence to the process by which they are created. But I doubt it. It’s more likely that getting from great automatic speech recognition or image classification to the general AI that makes us fearful is a longer hike than we currently imagine.

Traitorous Reason, Facts, and Analysis

Obama’s post-election press conference was notable for its continued demonstration of adult discourse and values. Especially notable:

This office is bigger than any one person and that’s why ensuring a smooth transition is so important. It’s not something that the constitution explicitly requires but it is one of those norms that are vital to a functioning democracy, similar to norms of civility and tolerance and a commitment to reason and facts and analysis.

But ideology in American politics (and elsewhere) has the traitorous habit of undermining every one of those norms. It always begins with undermining the facts in pursuit of manipulation. Just before the election, the wizardly Aron Ra took to YouTube to review VP-elect Mike Pence’s bizarre grandstanding in Congress in 2002:

And just today, Trump lashed out at the cast of Hamilton for lecturing Mike Pence, at the end of a show, on his anti-LGBTQ stances, also a matter of ideology and belief.

Astonishing as this seems, we live in an imperfect world being drawn very slowly, and in fits and starts, away from tribal and xenophobic tendencies. My wife received a copy of a letter from now-deceased family that contained an editorial from the Shreveport Journal in the 1960s that (with its embedded review of a The Worker editorial) simultaneously attacked segregationist violence and the rhetoric of Alabama governor George Wallace, claimed that communists were influencing John F. Kennedy and the civil rights movement, demanded the jailing of communists, and suggested that the federal government should take over Alabama:


The accompanying letter was also concerned over the fate of children raised as Unitarians, amazingly enough, and how they could possibly be moral people. It then concluded with a recommendation to vote for Goldwater.

Is it any wonder that the accompanying cultural revolutions might lead to the tearing down of the institutions that were used to justify the deviation away from “reason and facts and analysis”?

But I must veer to the positive here: this brief blip is a passing retrenchment of these old tendencies that the Millennials and their children will look back on with fond amusement, the way I remember Ronald Reagan.

Motivation, Boredom, and Problem Solving

In the New York Times Stone column, James Blachowicz of Loyola challenges the assumption that the scientific method is uniquely distinguishable from other ways of thinking and problem solving we regularly employ. In his example, he lays out how writing poetry involves some kind of alignment of words that conform to the requirements of the poem. Whether actively aware of the process or not, the poet is solving constraint satisfaction problems concerning formal requirements like meter and structure, linguistic problems like parts-of-speech and grammar, semantic problems concerning meaning, and pragmatic problems like referential extension and symbolism. Scientists do the same kinds of things in fitting a theory to data. And, in Blachowicz’s analysis, there is no special distinction between scientific method and other creative methods like the composition of poetry.

We can easily see how this extends to ideas like musical composition and, indeed, extends with even more constraints that range from formal through to possibly the neuropsychology of sound. I say “possibly” because there remains uncertainty on how much nurture versus nature is involved in the brain’s reaction to sounds and music.

In terms of a computational model of this creative process, if we presume that there is an objective function that governs possible fits to the given problem constraints, then we can clearly optimize towards a maximum fit. For many of the constraints there are, however, discrete parameterizations (which part of speech? which word?) that are not like curve fitting to scientific data. In fairness, discrete parameters occur there, too, especially in meta-analyses of broad theoretical possibilities (Quantum loop gravity vs. string theory? What will we tell the children?). The discrete parameterizations blow up the search space with their combinatorics, demonstrating on the one hand why we are so damned amazing, and on the other hand why a controlled randomization method like evolutionary epistemology’s blind search and selective retention gives us potential traction in the face of this curse of dimensionality. The blind search is likely tempered by active human engagement, though. Certainly the poet or the scientist would agree; they are using learned skills, maybe some intellectual talent of unknown origin, and experience on how to traverse the wells of improbability in finding the best fit for the problem. This certainly resembles pre-training in deep learning, though on a much more pervasive scale, including feedback from categorical model optimization into the generative basis model.
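Blind variation and selective retention can be sketched with a deliberately silly toy: random single-character mutations against a fixed target, keeping only changes that don’t reduce fit. The target string and alphabet are invented for illustration; the serious point is that selection sidesteps the combinatorial blowup of exhaustive search:

```python
import random

# Toy "blind variation and selective retention": random one-character
# mutations are retained only when they don't hurt the fit, avoiding the
# 27**len(target) exhaustive search space entirely.

def evolve(target, alphabet="abcdefghijklmnopqrstuvwxyz ", seed=0):
    rng = random.Random(seed)
    guess = [rng.choice(alphabet) for _ in target]
    score = sum(g == t for g, t in zip(guess, target))
    steps = 0
    while score < len(target):
        i = rng.randrange(len(target))
        old = guess[i]
        guess[i] = rng.choice(alphabet)      # blind variation
        new_score = sum(g == t for g, t in zip(guess, target))
        if new_score >= score:
            score = new_score                # selective retention
        else:
            guess[i] = old                   # discard harmful variation
        steps += 1
    return "".join(guess), steps

word, steps = evolve("fit the constraints")
print(word, steps)  # the target string, plus the number of mutations tried
```

Thousands of blind mutations suffice where enumerating all candidate strings never could, which is the traction against the curse of dimensionality mentioned above.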

But does this extend outwards to other ways in which we form ideas? We certainly know that motivated reasoning is involved in key aspects of our belief formation, which plays strongly into how we solve these constraint problems. We tend to actively look for confirmations and avoid disconfirmations of fit. We positively bias recency of information, or repeated exposures, and tend to only reconsider in much slower cycles.

Also, as the constraints of certain problem domains become, in turn, extensions that can result in change—where there is a dynamic interplay between belief and success—the fixity of the search space itself is no longer guaranteed. Broad human goals like the search for meaning are an example of that. In come complex human factors, like how boredom correlates with motivation and ideological extremism (overview, here, journal article, here).

This latter data point concerning boredom crosses from mere bias that might preclude certain parts of a search space into motivation that focuses it, and that optimizes for novelty seeking and other behaviors.

Quantum Field Is-Oughts

Sean Carroll’s Oxford lecture on Poetic Naturalism is worth watching (below). In many ways it just reiterates several common themes. First, it reinforces the is-ought barrier between values and observations about the natural world. It does so with particular depth, though, by identifying how coarse-grained theories at different levels of explanation can be equally compatible with quantum field theory. Second, and related, he shows how entropy is an emergent property of atomic theory and the interactions of quantum fields (that we think of as particles much of the time) and, importantly, that we can project the same notion of boundary conditions that result in entropy forward into the future, resulting in a kind of effective teleology. That is, there can be some boundary conditions for the evolution of large-scale particle systems that form into configurations that we can label purposeful or purposeful-like. I still like the term “teleonomy” to describe this alternative notion, but the language largely doesn’t matter except as an educational and distinguishing tool against the semantic embeddings of old scholastic monks.

Finally, the poetry aspect resolves in value theories of the world. Many are compatible with descriptive theories, and our resolution of them is through opinion, reason, communications, and, yes, violence and war. There is no monopoly of policy theories, religious claims, or idealizations that hold sway. Instead we have interests and collective movements, and the above, all working together to define our moral frontiers.


Bayesianism and Properly Basic Belief

Xu and Tenenbaum, in Word Learning as Bayesian Inference (Psychological Review, 2007), develop a very simple Bayesian model of how children (and even adults) build semantic associations based on accumulated evidence. In short, they find contrastive elimination approaches as well as connectionist methods unable to explain the patterns that are observed. Specifically, the most salient problem with these other methods is that they lack the rapid transition that is seen when three exemplars are presented for a class of objects associated with a word versus one exemplar. Adults and kids (the former even more so) just get word meanings faster than those other models can easily show. Moreover, a space of contending hypotheses that are weighted according to their Bayesian statistics provides an escape from the all-or-nothing of hypothesis elimination and some of the “soft” commitment properties that connectionist models provide.

The mathematical trick for the rapid transition is rather interesting. They formulate a “size principle” that weights the likelihood of a given hypothesis (this object is most similar to a “feb,” for instance, rather than the many other object sets that are available) according to a scaling that is exponential in the number of consistent examples. Hence the rapid transition:

Hypotheses with smaller extensions assign greater probability than do larger hypotheses to the same data, and they assign exponentially greater probability as the number of consistent examples increases.

It should be noted that they don’t claim that the psychological or brain machinery implements exactly this algorithm. As is usual in these matters, it is instead likely that whatever machinery is involved, it simply has at least these properties. It may very well be that connectionist architectures can do the same but that existing approaches to connectionism simply don’t do it quite the right way. So other methods may need to be tweaked to get closer to the observed learning of people in these word tasks.
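The size principle can be sketched in a few lines. The priors and extension sizes below are invented stand-ins, not values from Xu and Tenenbaum, but they show the rapid one-to-three-exemplar transition:

```python
# Sketch of the "size principle": under strong sampling, the likelihood of
# hypothesis h after n consistent examples scales as (1/|h|)**n, so
# hypotheses with smaller extensions win exponentially fast.

def posterior(hypotheses, n):
    """hypotheses: {name: (prior, extension_size)} -> normalized posteriors."""
    scores = {name: prior * (1.0 / size) ** n
              for name, (prior, size) in hypotheses.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# "feb" might name a narrow class (10 objects) or a broad class (100
# objects) that is strongly favored a priori.
hyps = {"narrow": (0.05, 10), "broad": (0.95, 100)}
print(posterior(hyps, 1))  # broad still ahead after one example
print(posterior(hyps, 3))  # narrow dominates after three examples
```

With one exemplar the strong prior on the broad class survives; by three exemplars the exponential likelihood has overwhelmed it, which is the abrupt shift the paper reports in human learners.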

So what can this tell us about epistemology and belief? Classical foundationalism might be formulated as: a belief is “basic” or “justified” if it is self-evident or evident to our senses. Other beliefs may therefore be grounded by those basic beliefs. A more modern reformulation might substitute “incorrigible” for “justified,” with incorrigibility carrying the added requirement that believing the proposition guarantees that it is in fact true.

Here’s Alvin Plantinga laying out a case for why justification and incorrigibility have a range of problems, problems serious enough for Plantinga that he suspects god belief could just as easily be a basic belief, allowing for the kinds of presuppositional Natural Theology (think: I look around me and the hand of God is obvious) that are at the heart of some of the loftier claims concerning the viability or non-irrationality of god belief. It even provides a kind of coherent interpretative framework for historical interpretation.

Plantinga positions the problem of properly basic belief then as an inductive problem:

And hence the proper way to arrive at such a criterion is, broadly speaking, inductive. We must assemble examples of beliefs and conditions such that the former are obviously properly basic in the latter, and examples of beliefs and conditions such that the former are obviously not properly basic in the latter. We must then frame hypotheses as to the necessary and sufficient conditions of proper basicality and test these hypothesis by reference to those examples. Under the right conditions, for example, it is clearly rational to believe that you see a human person before you: a being who has thoughts and feelings, who knows and believes things, who makes decisions and acts. It is clear, furthermore, that you are under no obligation to reason to this belief from others you hold; under those conditions that belief is properly basic for you.

He goes on to conclude that this opens up the god hypothesis as providing this kind of coherence mechanism:

By way of conclusion then: being self-evident, or incorrigible, or evident to the senses is not a necessary condition of proper basicality. Furthermore, one who holds that belief in God is properly basic is not thereby committed to the idea that belief in God is groundless or gratuitous or without justifying circumstances. And even if he lacks a general criterion of proper basicality, he is not obliged to suppose that just any or nearly any belief—belief in the Great Pumpkin, for example—is properly basic. Like everyone should, he begins with examples; and he may take belief in the Great Pumpkin as a paradigm of irrational basic belief.

So let’s assume that the word learning mechanism based on this Bayesian scaling is representative of our human inductive capacities. Now this may or may not be broadly true. It is possible that it is true of words but not other domains of perceptual phenomena. Nevertheless, given this scaling property, the relative inductive truth of a given proposition (a meaning hypothesis) is strictly Bayesian. Moreover, this doesn’t succumb to problems of verificationism because it only claims relative truth. What counts as properly basic is then the scaled set of contending explanatory hypotheses, and the god hypothesis has to compete with other explanations like evolutionary theory (for human origins), empirical evidence of materialism (for explanations contra supernatural ones), perceptual mistakes (ditto), myth scholarship, textual analysis, influence of parental belief exposure, the psychology of wish fulfillment, the pragmatic triumph of science, etc. etc.

And so we can stick to a relative scaling of hypotheses as to what constitutes basicality or justified true belief. That’s fine. We can continue to argue the previous points as to whether they support or override one hypothesis or another. But the question Plantinga raises as to what ethics to apply in making those decisions is important. He distinguishes different reasons why one might want to believe more true things than others (broadly), or hold some things as properly basic rather than others, or, more to the point, why philosophers feel the need to pin god-belief as irrational. But we succumb to a kind of unsatisfying relativism insofar as the space of these hypotheses is not, in fact, weighted in a manner that most reflects the known facts. The relativism gets deeper when the weighting is washed out by wish fulfillment, pragmatism, aspirations, and personal insights that lack falsifiability. That is at least distasteful, maybe aretaically so (in Plantinga’s framework), but probably more teleologically so in that it influences other decision-making and the conflicts and real harms societies may cause.

Non-Cognitivist Trajectories in Moral Subjectivism

When I say that “greed is not good” the everyday mind creates a series of images and references, from Gordon Gekko’s inverse proposition to general feelings about inequality and our complex motivations as people. There is a network of feelings and, perhaps, some facts that might be recalled or searched for to justify the position. As a moral claim, though, it might most easily be considered connotative rather than cognitive in that it suggests a collection of secondary emotional expressions and networks of ideas that support or deny it.

I mention this (and the theories that are consonant with this kind of reasoning are called non-cognitivist and, variously, emotive and expressive), because there is a very real tendency to reduce moral ideas to objective versus subjective, especially in atheist-theist debates. I recently watched one such debate between Matt Dillahunty and an Orthodox priest where the standard litany revolved around claims about objectivity versus subjectivity of truth. Objectivity of truth is often portrayed as something like, “without God there is no basis for morality. God provides moral absolutes. Therefore atheists are immoral.” The atheists inevitably reply that the scriptural God is a horrific demon who slaughters His creation and condones slavery and other ideas that are morally repugnant to the modern mind. And then the religious descend into what might be called “advanced apologetics” that try to diminish, contextualize, or dismiss such objections.

But we are fairly certain, regardless of the tradition, that there are inevitable nuances to any kind of moral structure. Thou shalt not kill gets revised to thou shalt not murder. So we have to parse manslaughter in pursuit of a greater good against any rules-based approach to such a simplistic commandment. Not eating shellfish during a famine has less human expansiveness but nevertheless carries similar objective antipathy.

I want to avoid invoking the Euthyphro dilemma here and instead focus on the notion that there might be an inevitability to certain moral proscriptions and even virtues given an evolutionary milieu. This was somewhat the floorplan of Sam Harris, but I’ll try to project the broader implications of species-level fitness functions onto a more local theory, specifically Gibbard’s fact-prac worlds, where the trajectories of normative, non-cognitive statements like “greed is not good” align with sets of perceptions of the world and options for implementing activities that strengthen the engagement with the moral assertion. The assertion is purely subjective but it derives out of a correspondence with incidental phenomena and a coherence with other ideations and aspirations. It is mostly non-cognitive in the sense that it expresses emotional primitives rather than simple truth propositions. It has a number of interesting properties, however, most notably that the fact-prac sets of constraints that surround these trajectories are movable, resulting in the kinds of plasticity and moral “evolution” that we see around us, like “slavery is bad” and “gay folks should not be discriminated against.” As an investigative tool, then, such a theory has important verificational value. As presented by Gibbard, however, these collections of constraints that guide the trajectories of moral approaches to simple moral commandments, admonishments, or statements need further strengthening to meet the moral landscape “ethical naturalism” that asserts that certain moral attitudes result in improved species outcomes and are therefore axiomatically possible and sensibly rendered as objective.

And it does this without considering moral propositions at all.

A Critique of Pure Randomness

The notion of randomness brings about many interesting considerations. For statisticians, randomness is a series of events with chances that are governed by a distribution function. In everyday parlance, equally-likely means random, while an even more common semantics is based on both how unlikely and how unmotivated an event might be (“That was soooo random!”). In physics, there are only certain physical phenomena that can be said to be truly random, including the probability of a given nucleus decomposing into other nuclei via fission. The exact position of a quantum thingy is equally random when its momentum is nailed down, and vice-versa. Vacuums have a certain chance of spontaneously creating matter, too, and that chance appears to be perfectly random. In algorithmic information theory, a random sequence of bits is a sequence that can’t be represented by a smaller descriptive algorithm–it is incompressible. Strangely enough, we simulate random number generators using a compact algorithm with a complicated series of steps that lead to an almost impossible-to-follow trajectory through a deterministic space of possibilities; it’s acceptably random so long as the algorithm parameters can’t be easily reverse engineered and the next “random” number guessed.
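The incompressibility intuition can be sketched crudely with an off-the-shelf compressor standing in for Kolmogorov complexity (which is uncomputable, so this is only an upper-bound heuristic), alongside a compact deterministic PRNG of exactly the kind described above:

```python
import random
import zlib

# A patterned sequence compresses to almost nothing, while the output of
# a pseudorandom generator looks incompressible -- even though the
# generator itself is a tiny deterministic algorithm. zlib is only a
# rough stand-in for true (uncomputable) Kolmogorov complexity.

patterned = b"01" * 5000                     # 10,000 bytes of pure pattern
rng = random.Random(42)                      # compact, seeded PRNG
pseudo = bytes(rng.randrange(256) for _ in range(10000))

print(len(zlib.compress(patterned)))  # tiny: the pattern is the algorithm
print(len(zlib.compress(pseudo)))     # close to 10,000: no pattern found
```

The irony the paragraph points at is visible here: the “incompressible-looking” bytes actually have a very short description, namely the generator plus its seed.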

One area where we often speak of randomness is in biological evolution. Random mutations lead to change and to deleterious effects like dead-end evolutionary experiments. Or so we hypothesized. The exact mechanisms of the transmission of inheritance and of mutations were unknown to Darwin, but soon, in the evolutionary synthesis, notions like random genetic drift and the role of ionizing radiation and other external factors became exciting candidates for the explanation of the variation required for evolution to function. Amusingly, arguing largely from a stance that might be called a fallacy of incredulity, creationists have often seized on the logical disconnect they perceive between the appearance of purpose, both in our lives and in the mechanisms of biological existence, and the assumption of underlying randomness and non-directedness, taking it as evidence of the paucity of arguments from randomness.

I give you Stephen Talbott in The New Atlantis, Evolution and the Illusion of Randomness, wherein he unpacks the mounting evidence and the philosophical implications of jumping genes, self-modifying genetic regulatory frameworks, transposons, and the likelihood that randomness, in the strong sense of cosmic ray trajectories bouncing around in cellular nuclei, is simply the wrong picture. Randomness is at best a minor contribution to evolutionary processes. We are not just purposeful at the social, personal, systemic, cellular, and sub-cellular levels; we are also purposeful through time in the transmission of genetic information and the modification thereof.

This opens a wildly new avenue for considering certain normative claims that anti-evolutionists bring to the table, such as the claim that a mechanistic universe devoid of central leadership is meaningless and allows for any behavior to be equally acceptable. This hoary chestnut is ripe to the point of rot, of course, but the response to it should be much more vibrant than the usual retorts. The evolution of social and moral outcomes can be every bit as inevitable as if they were designed, because co-existence and greater group success (yes, I wrote it) is a potential well on the fitness landscape. And, equally, we need to stop being so reluctant to claim that there is a purposefulness to life, a teleology, and simply make sure that we are according the proper mechanistic feel to that teleology. Fine, call it teleonomy, or even an urge to existence. A little poetry might actually help here.

Informational Chaff and Metaphors

I received word last night that our scholarship has received over 1400 applications, which definitely surprised me. I had worried that the regional restriction might be too limiting but Agricultural Sciences were added in as part of STEM so that probably magnified the pool.

Dan Dennett of Tufts and Deb Roy at MIT draw parallels between informational transparency in our modern world and biological mechanisms in Scientific American (March 2015, 312:3). Their article, Our Transparent Future (related video here; you have to subscribe to read the full article), starts with Andrew Parker’s theory that the Cambrian Explosion may have been tied to the availability of light as cloud cover lifted and seas became transparent. An evolutionary arms race began for the development of sensors that could warn against predators, and predators that could acquire more prey.

They continue on, drawing parallels to biological processes, including squid ink and how a similar notion, chaff, was used to mask radar signatures as aircraft became weapons of war. The explanatory mouthful of the multiple independently targetable reentry vehicle (MIRV) with dummy warheads to counter anti-ballistic missiles was likewise a deceptive way of reducing the risk of interception. So Dennett and Roy “predict the introduction of chaff made of nothing but megabytes of misinformation,” designed to deceive search engines about the nature of real information.

This is a curious idea. Search engine optimization (SEO) is a whole industry that combines consulting with tricks and tools to try to raise the position of vendors in the Google rankings. Being in the first page of listings can be make-or-break for retail vendors, and they pay to try to make that happen. The strategies are based around trying to establish links to the vendor from individuals and other pages to try to game the PageRank algorithm. In turn, Google has continued to optimize to reduce the effectiveness of these links, trying to establish whether hand- or machine-created content with links looks like real, valuable information or just promotional materials. This is, in some ways, the opposite of informational chaff. The goal is not to hide the content in plain sight, but to make it more discoverable. “Information scent” was a concept introduced at Xerox PARC when I was there and it applies here.
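The PageRank idea itself reduces to a short power iteration. The toy graph and the conventional 0.85 damping factor below are illustrative only, not anything like Google’s production system, but they show why inbound links raise a page’s rank and hence why link farms try to game it:

```python
# Minimal PageRank power iteration over a toy link graph. A page's rank
# is the stationary share of a "random surfer" who follows links with
# probability `damping` and teleports uniformly otherwise.

def pagerank(links, damping=0.85, iters=50):
    """links: {page: [pages it links to]} -> {page: rank}."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += damping * share  # each out-link passes rank along
        rank = new
    return rank

# "vendor" accumulates rank because both other pages link to it.
graph = {"a": ["vendor"], "b": ["vendor"], "vendor": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # vendor
```

Seen this way, SEO link-building is just an attempt to add edges pointing at the vendor node, and Google’s counter-optimization is an attempt to discount edges that look manufactured.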

But what of chaff? Perhaps the best example that I can think of is the idea of “drowning in paper” that lawyers occasionally describe, on TV or otherwise, where huge piles of non-digitized materials are dumped in the hopes that the criminal or civil needle-in-the-haystack will be impossible to find. This depends heavily on the limited time individuals have to ingest the materials, and is equally countered by scanning and OCR services that produce accessible forms of the data. Dennett and Roy point out that more sophisticated search engines (and I’ll add other analytic tools) can counter efforts at chaff.

More broadly, though, we get to the issue of whether evolutionary metaphors provide us with any new insights into the changing role of information in an interconnected and digitized society. I’m not altogether sure. It is routinely argued that the existence of early computing machines led to cognitive science as we have known it, conflating problem solving with algorithms and describing the brain in terms of hardware and software. Is evolutionary adaptation equally influential in steering weapons design or informational secrecy strategy? I think we are probably cunning enough (thanks, evolution) about proximate threats and consequences that there might not be much to learn from metaphorical analysis of this type.

Evolutionary Optimization and Environmental Coupling

Carl Shulman and Nick Bostrom argue about anthropic principles in “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” (Journal of Consciousness Studies, 2012, 19:7-8), focusing on how arguments that human-level intelligence should be easy to automate rest on assumptions about what “easy” means, assumptions shaped by observational selection effects (we observe ourselves to be intelligent, so the evolution of intelligence seems likely).
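
The selection-effect worry can be made concrete with a toy simulation. Everything here is invented for illustration, the per-planet probability especially:

```python
import random

# Observation selection effect, simulated: however rare intelligence is
# per planet, every observer necessarily finds themselves on a planet
# where it arose, so the local observation says nothing about the base
# rate. P_INTELLIGENCE is an invented illustrative prior, not an estimate.
random.seed(1)
P_INTELLIGENCE = 0.001
N_PLANETS = 100_000

planets = [random.random() < P_INTELLIGENCE for _ in range(N_PLANETS)]
base_rate = sum(planets) / N_PLANETS           # what is actually true
observer_planets = [p for p in planets if p]   # where observers can exist
sampled_rate = sum(observer_planets) / len(observer_planets)  # always 1.0
```

However small the prior gets, `sampled_rate` stays pinned at 1.0: an observer reasoning only from “intelligence arose here” will always conclude it is likely, which is the bias Shulman and Bostrom are probing.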

Yet the analysis of this presumption is blocked by a prior consideration: given that we are intelligent, we should be able to achieve artificial, simulated intelligence. If this is not, in fact, true, then determining whether the assumption of our own intelligence being highly probable is warranted becomes irrelevant, because we may not be able to demonstrate that artificial intelligence is achievable anyway. About this, the authors are dismissive of any requirement for simulating the environment that organisms and species optimize against:

In the limiting case, if complete microphysical accuracy were insisted upon, the computational requirements would balloon to utterly infeasible proportions. However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible. It is, on the contrary, plausible that it would be more efficient to use an artificial selection environment, one quite unlike that of our ancestors, an environment specifically designed to promote adaptations that increase the type of intelligence we are seeking to evolve (say, abstract reasoning and general problem-solving skills as opposed to maximally fast instinctual reactions or a highly optimized visual system).

Why is this “unlikely”? The argument is that there are classes of mental function that can be compartmentalized away from the broader, known evolutionary provocateurs. For instance, the Red Queen argument concerning sexual optimization in the face of significant parasitism is dismissed as merely a distraction from real intelligence:

And as mentioned above, evolution scatters much of its selection power on traits that are unrelated to intelligence, such as Red Queen’s races of co-evolution between immune systems and parasites. Evolution will continue to waste resources producing mutations that have been reliably lethal, and will fail to make use of statistical similarities in the effects of different mutations. All these represent inefficiencies in natural selection (when viewed as a means of evolving intelligence) that it would be relatively easy for a human engineer to avoid while using evolutionary algorithms to develop intelligent software.

Inefficiencies? Really? We know that sexual dimorphism and competition are essential to the evolution of advanced species. Even the growth of brain size and creative capabilities are likely tied to sexual competition, so why should we think they can be uncoupled? Instead, we are left with a blocker to the core argument: simulated evolution may, in fact, not be capable of producing sufficient complexity for intelligence as we know it without, in turn, a sufficiently complex simulated fitness function to evolve against. Observational effects aside, if we don’t get this right, we need not worry about the problem of whether there are ten or ten billion planets suitable for life out there.
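
To put the point in code: in any evolutionary algorithm, the fitness function plays the role of the environment, and whatever the function does not measure cannot be selected for. A toy genetic algorithm makes this explicit (all parameters are invented):

```python
import random

# Toy evolutionary algorithm: the fitness function *is* the simulated
# environment, so nothing richer than what it measures can emerge.
random.seed(0)

def evolve(fitness, genome_len=20, pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_len)] ^= 1   # point mutation
            children.append(child)
        pop = survivors + children             # survivors persist (elitism)
    return max(pop, key=fitness)

# A deliberately thin "environment": count the ones in the genome.
# Evolution dutifully optimizes it, and nothing else.
best = evolve(fitness=sum)
```

A one-max fitness function reliably evolves strings of ones and nothing more; the open question that Shulman and Bostrom wave away is how much richer the function must be before something like general intelligence can fall out of it.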