Evolutionary Optimization and Environmental Coupling

Carl Shulman and Nick Bostrom argue about anthropic principles in “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” (Journal of Consciousness Studies, 2012, 19:7-8), focusing on how models that assume human-level intelligence should be easy to automate are built on a foundation of assumptions about what “easy” means, assumptions colored by observational bias (we observe that we are intelligent, so the evolution of intelligence seems likely).
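
To make the selection effect concrete, here is a minimal sketch of my own (not a model from the paper, and the probabilities are arbitrary placeholders): across simulated “planets” with very different intrinsic difficulties of evolving intelligence, observers who can only sample the worlds that produced them always see intelligence, so the bare observation tells them little about how hard the process was.

```python
import random

# Illustrative only: each "difficulty" is a hypothetical chance that a planet
# ever evolves intelligence; the numbers are mine, not estimates from the paper.
random.seed(42)

difficulties = [1e-1, 1e-3, 1e-6]
planets_per_class = 1_000_000

for p in difficulties:
    # Which planets end up producing intelligent observers?
    observers = sum(1 for _ in range(planets_per_class) if random.random() < p)
    outside_view = observers / planets_per_class  # the true base rate
    # Observers sample only the worlds that produced them, so from the inside
    # intelligence is always observed, whether it was easy or wildly unlikely.
    inside_view = 1.0 if observers > 0 else None
    print(f"difficulty {p:.0e}: base rate ~{outside_view:.1e}, "
          f"observer's view {inside_view}")
```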

Yet the analysis of this presumption is blocked by a prior consideration: given that we are intelligent, we should be able to achieve artificial, simulated intelligence. If that is not in fact true, then determining whether the assumption that our own intelligence was highly probable is warranted becomes moot, because we may not be able to demonstrate that artificial intelligence is achievable anyway. On this point, the authors are dismissive of any requirement to simulate the environment against which organisms and species are optimized:

In the limiting case, if complete microphysical accuracy were insisted upon, the computational requirements would balloon to utterly infeasible proportions. However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible. It is, on the contrary, plausible that it would be more efficient to use an artificial selection environment, one quite unlike that of our ancestors, an environment specifically designed to promote adaptations that increase the type of intelligence we are seeking to evolve (say, abstract reasoning and general problem-solving skills as opposed to maximally fast instinctual reactions or a highly optimized visual system).

Why is this “unlikely”? The argument is that there are classes of mental function that can be compartmentalized away from the broader, known evolutionary provocateurs.… Read the rest
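
What such a compartmentalized, artificial selection environment might look like in practice can be sketched, under generous assumptions, as an evolutionary loop whose “environment” is nothing but an abstract scoring function. The toy task below (matching a bit pattern) and all of its parameters are mine, chosen only to show how little of nature such a setup retains.

```python
import random

random.seed(0)

# A deliberately abstract "environment": fitness is performance on a symbolic
# task (matching a target bit pattern), with no physics, ecology, or
# organism-environment history simulated at all. Target and parameters are
# arbitrary placeholders.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Truncation selection: the fitter half reproduces with mutation.
    parents = population[: len(population) // 2]
    population = parents + [mutate(random.choice(parents)) for _ in parents]

print(f"best fitness {fitness(max(population, key=fitness))}/{len(TARGET)} "
      f"after {generation + 1} generations")
```

The open question, of course, is whether capacities selected in so thin an environment bear any relation to the general intelligence the quote hopes to evolve.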

Alien Singularities and Great Filters

Nick Bostrom at Oxford’s Future of Humanity Institute takes on Fermi’s question “Where are they?” in a new paper on the possibility of life on other planets. The paper posits probability filters (Great Filters) that may lie in our past or may still lie ahead, and that limit the likelihood of the outcome we currently observe: our own, ahem, intelligent life. If a Great Filter sits in our past (say, the event of abiogenesis or the prokaryote-to-eukaryote transition), then the lack of alien contact so far is somewhat explained: our existence is of very low probability. Moreover, we should expect not to find life on Mars.

If, however, the Great Filter lies in our future, then we might see life all over the place (including on the theme of his paper, Mars). Primitive life would be abundant, but the Filter would sit somewhere ahead of us, where we annihilate ourselves, which explains why Fermi’s They are not here even while strange little things thrive on Mars and beyond. It is only advanced life that gets squeezed out by the Filter.
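
The worry can be put in simple Bayesian terms. The sketch below uses my own purely illustrative priors and likelihoods, not Bostrom’s figures: if early steps like abiogenesis were the hard part, independently arisen Martian life should be rare, so actually finding it shifts weight toward a Filter that still lies ahead of us.

```python
# Hypothetical priors and likelihoods -- illustrative numbers, not Bostrom's.
prior = {"filter_in_past": 0.5, "filter_in_future": 0.5}

# If the Filter is an early step (e.g. abiogenesis), independently arisen
# Martian life should be rare; if the Filter lies ahead, easy early steps
# make such a find much more likely.
likelihood_life_on_mars = {"filter_in_past": 0.01, "filter_in_future": 0.30}

evidence = sum(prior[h] * likelihood_life_on_mars[h] for h in prior)
posterior = {h: prior[h] * likelihood_life_on_mars[h] / evidence for h in prior}

for h, p in posterior.items():
    print(f"P({h} | life found on Mars) = {p:.2f}")
# With these toy numbers the discovery pushes the Filter-in-our-future
# hypothesis from 0.50 to roughly 0.97 -- which is why Bostrom argues that
# finding life on Mars would be bad news.
```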

Bostrom’s Simulation Hypothesis provides a potential way out of this largely pessimistic perspective. If civilizations are very likely to achieve simulation capabilities sufficient to create artificial universes before they conquer the vast interstellar voids needed to travel and signal at adequate intensity, then their “exit strategy” may be a benign incorporation into artificial realities, one that forestalls corporeal destruction by other means. It seems unlikely that every advanced civilization would “give up” physical being under these circumstances (in Teleology there are hold-outs from the singularity, though they eventually die out), which would leave a sparse subset of possibilities for active alien contact.… Read the rest

Bostrom on the Hardness of Evolving Intelligence

At 38,000 feet somewhere above Missouri, returning from a one-day trip to Washington D.C., it is easy to take Nick Bostrom’s point, in his paper How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects, that bird flight is not the end-all of what is possible for airborne objects and mechanical contrivances like airplanes. His effort to bound and classify the evolution of intelligence as either Hard or Not-Hard runs up against significant barriers, however. As a practitioner of the art, I find that the analogy between a purely physical phenomenon like flying and something as complex as human intelligence falls flat.

But Bostrom takes flying as no more than a starting point for arguing that intelligence is, in principle, engineerable. That possibility may be bounded by a number of current and foreseeable limitations, not least of which is that computer simulations of evolution require a certain amount of computing power and representational detail to count as sufficient simulations. His conclusion is that we may need as much as another 100 years of improvements in computing technology just to reach a point where a massive-scale evolutionary simulation might succeed (I’ll leave it to the reader to investigate his additional arguments concerning convergent evolution and observer selection effects).
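
The shape of that “100 years” claim is straightforward arithmetic: under sustained exponential growth in compute, the time to close a gap grows only logarithmically with its size. The figures below are placeholders of my own to show the calculation, not the estimates from the paper.

```python
import math

def years_until_feasible(required_flops, available_flops, doubling_years=2.0):
    """Years of exponential growth needed before `required_flops` is affordable.

    Assumes sustained Moore's-law-style doubling; both inputs are placeholders.
    """
    shortfall = required_flops / available_flops
    return doubling_years * math.log2(shortfall) if shortfall > 1 else 0.0

# Hypothetical numbers for illustration only: a simulation demanding 10^40
# operations per second against 10^25 available today is a 15-order-of-
# magnitude gap, or roughly a century at a two-year doubling time.
print(years_until_feasible(required_flops=1e40, available_flops=1e25))
```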

Bostrom dismisses as pessimistic the assumption that a sufficient simulation would, in fact, require a highly detailed emulation of some significant portion of the real environment and the history of organism-environment interactions:

A skeptic might insist that an abstract environment would be inadequate for the evolution of general intelligence, believing instead that the virtual environment would need to closely resemble the actual biological environment in which our ancestors evolved … However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible.

Read the rest