Tagged: Nick Bostrom

Evolutionary Optimization and Environmental Coupling

Carl Shulman and Nick Bostrom take up anthropic reasoning in “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” (Journal of Consciousness Studies, 2012, 19:7-8), focusing on how arguments that human-level intelligence should be easy to achieve artificially rest on assumptions about what “easy” means that are skewed by observation selection effects (we observe ourselves to be intelligent, so the evolution of intelligence looks more probable to us than it may actually be).

Yet the analysis of this presumption is blocked by a prior consideration: given that we are intelligent, can we in fact achieve artificial, simulated intelligence? If not, then determining whether the assumption of our own intelligence being highly probable is warranted becomes irrelevant, because we may be unable to demonstrate that artificial intelligence is achievable at all. On this point, the authors are dismissive of any requirement to simulate the environment against which organisms and species were optimized:

In the limiting case, if complete microphysical accuracy were insisted upon, the computational requirements would balloon to utterly infeasible proportions. However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible. It is, on the contrary, plausible that it would be more efficient to use an artificial selection environment, one quite unlike that of our ancestors, an environment specifically designed to promote adaptations that increase the type of intelligence we are seeking to evolve (say, abstract reasoning and general problem-solving skills as opposed to maximally fast instinctual reactions or a highly optimized visual system).

Why is this “unlikely”? The argument is that there are classes of mental function that can be compartmentalized away from the broader, known evolutionary provocateurs. For instance, the Red Queen dynamic, in which sexual reproduction is maintained in the face of significant parasite pressure, is dismissed as merely a distraction from real intelligence:

And as mentioned above, evolution scatters much of its selection power on traits that are unrelated to intelligence, such as Red Queen’s races of co-evolution between immune systems and parasites. Evolution will continue to waste resources producing mutations that have been reliably lethal, and will fail to make use of statistical similarities in the effects of different mutations. All these represent inefficiencies in natural selection (when viewed as a means of evolving intelligence) that it would be relatively easy for a human engineer to avoid while using evolutionary algorithms to develop intelligent software.

Inefficiencies? Really? We know that sexual dimorphism and competition are essential to the evolution of advanced species. Even the growth of brain size and creative capabilities are likely tied to sexual competition, so why should we think they can be uncoupled? Instead, we are left with a blocker to the core argument: simulated evolution may not be capable of producing sufficient complexity to yield intelligence as we know it without, in turn, a sufficiently complex simulated fitness function to evolve against. Observational effects aside, if we don’t get this right, we need not worry about whether there are ten or ten billion planets suitable for life out there.
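The point about coupling can be made concrete with a toy sketch. In a host-parasite Red Queen race, an organism’s fitness is defined against a coevolving opponent population rather than a fixed target, so the adaptive landscape moves under the evolver’s feet. The following minimal simulation is my own illustration, not anything from the Shulman and Bostrom paper; all names and parameters are invented.

```python
# Toy Red Queen coevolution: host fitness depends on the current
# parasite population, and vice versa -- there is no fixed fitness
# function to optimize against. Illustrative sketch only.
import random

random.seed(0)
GENOME_LEN = 16

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def host_fitness(host, parasites):
    # Hosts score for each bit on which they differ from a parasite
    # (escaping infection); the score moves as parasites evolve.
    return sum(sum(h != p for h, p in zip(host, par)) for par in parasites)

def parasite_fitness(parasite, hosts):
    # Parasites are rewarded for matching hosts: the mirror image.
    return sum(sum(h == p for h, p in zip(host, parasite)) for host in hosts)

def evolve(pop, fitness, opponents, mut_rate=0.05):
    # Truncation selection: keep the fitter half, mutate to refill.
    scored = sorted(pop, key=lambda g: fitness(g, opponents), reverse=True)
    survivors = scored[: len(pop) // 2]
    children = [[b ^ (random.random() < mut_rate) for b in parent]
                for parent in survivors]
    return survivors + children

hosts = [random_genome() for _ in range(20)]
parasites = [random_genome() for _ in range(20)]
for generation in range(50):
    hosts = evolve(hosts, host_fitness, parasites)
    parasites = evolve(parasites, parasite_fitness, hosts)

# The "best" host is only best relative to this generation's parasites.
best = max(hosts, key=lambda h: host_fitness(h, parasites))
print(len(best), host_fitness(best, parasites))
```

The design point is the one at issue in the post: to score a host at all, the simulation must carry the parasite population along with it. Strip the parasites out and there is nothing left to be fit against.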

Alien Singularities and Great Filters

Nick Bostrom of Oxford’s Future of Humanity Institute takes on Fermi’s question “Where are they?” in a new paper on the possibility of life on other planets. The paper posits probability filters (Great Filters) that may have existed in our past or might still lie ahead, and that limit the likelihood of the outcome we currently observe: our own, ahem, intelligent life. If a Great Filter lies in our past (say, abiogenesis or the prokaryote-to-eukaryote transition), then we can somewhat explain the lack of alien contact thus far: our existence is of very low probability. Moreover, we can expect not to find life on Mars.

If, however, the Great Filter lies in our future, then we might see life all over the place (including the theme of his paper, Mars). Primitive life is abundant, but the Great Filter sits somewhere ahead of us, where we annihilate ourselves, thus explaining why Fermi’s They are not here while little strange things thrive on Mars and beyond. Only advanced life gets squeezed out by the Filter.
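The logic of the argument can be put in a toy Bayesian calculation. Split the chance that a planet yields a visible galactic civilization into a past factor (dead matter to technological life) and a future factor (surviving to expansion); an empty sky only tells us the product is tiny. Finding independent life on Mars would be evidence against the “life is hard” branch. The numbers below are illustrative placeholders of mine, not estimates from Bostrom’s paper.

```python
# Toy sketch of the Great Filter update; all probabilities invented.
# Hypothesis A: the filter is in our past  (life is hard to start).
# Hypothesis B: the filter is in our future (life is easy to start).
# Both are consistent with a silent sky, since p_past * p_future is
# tiny either way.
prior_A, prior_B = 0.5, 0.5

# Likelihood of finding independent primitive life on Mars under each:
# very unlikely if life is hard, quite likely if life is easy.
p_mars_life_given_A = 0.001
p_mars_life_given_B = 0.5

# Bayes' rule after observing life on Mars:
evidence = prior_A * p_mars_life_given_A + prior_B * p_mars_life_given_B
post_A = prior_A * p_mars_life_given_A / evidence
post_B = prior_B * p_mars_life_given_B / evidence

print(round(post_B, 3))  # probability the filter lies ahead of us
```

With these placeholder numbers the posterior mass shifts almost entirely onto the future-filter branch, which is exactly why Bostrom argues that finding even microbes on Mars would be bad news.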

Bostrom’s Simulation Hypothesis provides a potential way out of this largely pessimistic perspective. If civilizations have a very high probability of achieving simulation capabilities sufficient to create artificial universes before conquering the vast interstellar voids needed to travel and signal at adequate intensity, then their “exit strategy” may be a benign incorporation into artificial realities, one that prevents corporeal destruction by other means. It seems unlikely that every advanced civilization would “give up” physical being under these circumstances (in Teleology there are hold-outs from the singularity, though they eventually die out), which would leave a sparse subset of possible alien contacts. That finite but nonzero probability is likely at least as great as the probability that an advanced civilization simply succumbs to destruction by its other technologies, which suggests they are out there, but they just don’t care.

And that leaves us one exit strategy that is not as abhorrent as the future Great Filter might suggest.

NOTE: Thanks to the great Steve Diamond for his initial query as to whether Bostrom’s Simulation Hypothesis impacts the Great Filter hypothesis.

Bostrom on the Hardness of Evolving Intelligence

At 38,000 feet somewhere above Missouri, returning from a one-day trip to Washington, D.C., it is easy to take Nick Bostrom’s point, made in his paper How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects, that bird flight is not the end-all of what is possible for airborne objects and mechanical contrivances like airplanes. His effort to bound and distinguish the evolution of intelligence as either Hard or Not-Hard runs up against significant barriers, however. As a practitioner of the art, I find that drawing similarities between a purely physical phenomenon like flight and something as complex as human intelligence falls flat.

But Bostrom takes flying as no more than a starting point for arguing that intelligence is an engineerable possibility. And that possibility might be bounded by a number of current and foreseeable limitations, not least that computer simulations of evolution require a certain amount of computing power and representational detail to be sufficient. His conclusion is that we may need as much as another 100 years of improvements in computing technology just to reach the point where a massive-scale evolutionary simulation might succeed (I’ll leave it to the reader to investigate his additional arguments concerning convergent evolution and observer selection effects).
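The shape of that century-scale conclusion is easy to see in a back-of-envelope cost model: total operations scale as population size times generations times the cost of simulating one organism’s lifetime. Every figure below is an invented placeholder of mine, not a number from the paper, but the arithmetic shows why orders-of-magnitude hardware growth is the deciding variable.

```python
# Back-of-envelope cost of a massive evolutionary simulation.
# All figures are illustrative placeholders, not Bostrom's estimates.
population   = 1e9   # simultaneously simulated organisms
generations  = 1e6   # generations of selection
ops_per_life = 1e20  # operations per organism-lifetime (nervous
                     # system plus enough environment to score it)

total_ops = population * generations * ops_per_life  # ~1e35 operations

# At a sustained 1e18 ops/sec (roughly an exascale machine):
seconds = total_ops / 1e18
years = seconds / (3600 * 24 * 365)
print(f"~{years:.0e} years at exascale")
```

With these placeholders the run takes on the order of a billion years on today’s fastest machines; closing a gap of nine orders of magnitude at a doubling every two years takes roughly sixty years, which is the style of reasoning behind an estimate measured in decades to a century.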

Bostrom dismisses as pessimistic the assumption that a sufficient simulation would, in fact, require a highly detailed emulation of some significant portion of the real environment and the history of organism-environment interactions:

A skeptic might insist that an abstract environment would be inadequate for the evolution of general intelligence, believing instead that the virtual environment would need to closely resemble the actual biological environment in which our ancestors evolved … However, such extreme pessimism seems unlikely to be well founded; it seems unlikely that the best environment for evolving intelligence is one that mimics nature as closely as possible. It is, on the contrary, plausible that it would be more efficient to use an artificial selection environment, one quite unlike that of our ancestors, an environment specifically designed to promote adaptations that increase the type of intelligence we are seeking to evolve.

Unfortunately, I don’t see any easy way to bound the combined complexity of the needed substrate for evolutionary action (be it artificial organisms or just artificial neuronal networks) and the complexity of defining the artificial environment necessary for achieving the intended goal. That makes the problem at least as hard, and perhaps harder, in that we can define a physical system much more easily than an abstract adaptive landscape designed to “promote…abstract reasoning and general problem-solving skills.”