Tagged: simulation hypothesis

Simulator Superputz

The simulation hypothesis is perhaps a bit more interesting than how to add clusters of neural network nodes to do a simple reference resolution task, but it is also less testable. This is the nature of big questions; they would otherwise have been resolved by now. Nevertheless, some theory and experimental analysis have been undertaken on the question of whether we are living in a simulation, all based on the assumption that the strangeness of quantum and relativistic realities might be a result of limited computing power in the grand simulator machine. For instance, in a virtual reality game, only the walls that you, as a player, can see need to be calculated and rendered. The walls that are out of sight exist only as a virtual map in the computer’s memory or persisted to longer-term storage. Likewise, the behavior of virtual microscopic phenomena need not be calculated so long as the macroscopic results can be rendered, like the fire patterns in a virtual torch.
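The rendering trick described above is essentially lazy evaluation: detail is computed only when observed, and a compact description stands in for everything else. A toy sketch (in Python, with all names invented purely for illustration, not how any game engine actually works):

```python
import hashlib

class LazyWorld:
    """Toy model of on-demand rendering: detail exists only once observed."""

    def __init__(self, seed: str = "universe-42"):
        self.seed = seed      # compact description persisted in "memory"
        self.rendered = {}    # cache of cells already computed in detail

    def observe(self, cell: tuple) -> str:
        # Detail is generated deterministically from the seed the first
        # time a cell is observed, then cached; unobserved cells cost nothing.
        if cell not in self.rendered:
            key = f"{self.seed}:{cell}".encode()
            self.rendered[cell] = hashlib.sha256(key).hexdigest()[:8]
        return self.rendered[cell]

world = LazyWorld()
detail = world.observe((3, 1))            # computed on first look
assert world.observe((3, 1)) == detail    # cached: consistent on re-observation
assert len(world.rendered) == 1           # the rest of the map was never rendered
```

Because the detail is a deterministic function of the seed, repeated observations stay consistent without the simulator ever rendering the unobserved parts of the map.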

So one way of explaining physics conundrums like delayed-choice quantum erasers, Bell’s inequality, or ER = EPR might be to claim that these phenomena are the results of a low-fidelity simulation necessitated by the limits of the simulator computer. I think the likelihood that this is true is low, however, because we can imagine an infinitely large cosmos that merely includes our universe simulation as a mote within it. Low-fidelity simulation constraints might give experimental guidance, but the same results are equally well explained by simply accepting indeterminacy and non-locality as fundamental features of our universe.

It’s worth considering, however, what we should think about the nature of the simulator, given this potentially devious (and poorly coded) little Matrix we find ourselves trapped in. There are some striking alternatives. To make this easier, I’ll use the following abbreviations:

S = Simulator (creator of simulation)

U = Simulation

SC = Simulation Computer (whatever the simulation runs on)

MA = Morally Aware (the perception, rightly or wrongly, that judgments and choices influence simulation-level phenomena)

US = Simulatees

CA = Conscious Awareness (the perception that one is aware of stuff)

So let’s get started:

  1. S is unaware of events in U due to limited monitoring resources.
  2. S is unaware of events in U due to lack of interest.
  3. S is incapable of conscious awareness of U (S is some kind of automatic system).
  4. Limited monitoring resources (1) seem unlikely to be a constraint: given the scale and complexity of U, monitoring would cost less than U itself and could simply be tuned to filter categories of active interest. So S must either lack interest (2) or be incapable of awareness (3).
  5. We can dismiss (3) because it leads to an infinite regress about the nature of simulators in general: the Simulation Hypothesis originates in the probability that we humans, who possess CA and some form of MA, will create ever-better simulations in the future. There is no version of the hypothesis in which S is pure automation lacking CA and some form of MA.
  6. Given (2), why would S lack interest in U? Perhaps S created a large ensemble of universes and is only interested in long-term outcomes. But maybe S is just a putz.
  7. For (6), if S is MA, then S is wrong to create a U that supports the evolution of US, insofar as S allows for CA and MA in US combined with radical uncertainty in U.
  8. Conclusion: S is a putz or this ain’t a simulation.

Theists can squint and see the problem here. We might add 7.5: it’s certainly wrong of S to actively burn, drown, imprison, enslave, and murder CA and MA US. If S is doing 7.5, that makes S a superputz.

In my novel, Teleology, the creation of another simulated universe by a first one was a religious imperative. The entities saw, once in contact with their S, that it must be the ultimate fulfillment of purpose for them to become S. Yet their S was very concerned with their U and would have objected to even cleanly pulling the plug on their U. He did lack instrumentation (1) into U, but built a great deal of it after discovering that there was evidence of CA. He was no putz.

Humbly Evolving in a Non-Simulated Universe

The New York Times seems to be catching up to me, first with an interview of Alvin Plantinga by Gary Gutting in The Stone on February 9th, and then with notes on Bostrom’s Simulation Hypothesis in the Sunday Times.

I didn’t see anything new in the Plantinga interview, but it prompted me to review my previous argument that adaptive fidelity combined with adaptive plasticity must raise the probability of rationality at a rate much greater than the contributions that would be “deceptive” or even mildly cognitively or perceptually biased. Worth reading is Branden Fitelson and Elliott Sober’s very detailed analysis of Plantinga’s Evolutionary Argument Against Naturalism (EAAN), here. Most interesting are the opening paragraphs of Section 3, which I reproduce here because they make a critical point that should surprise no one but often does:

Although Plantinga’s arguments don’t work, he has raised a question that needs to be answered by people who believe evolutionary theory and who also believe that this theory says that our cognitive abilities are in various ways imperfect. Evolutionary theory does say that a device that is reliable in the environment in which it evolved may be highly unreliable when used in a novel environment. It is perfectly possible that our mental machinery should work well on simple perceptual tasks, but be much less reliable when applied to theoretical matters. We hasten to add that this is possible, not inevitable. It may be that the cognitive procedures that work well in one domain also work well in another; Modus Ponens may be useful for avoiding tigers and for doing quantum physics.

Anyhow, if evolutionary theory does say that our ability to theorize about the world is apt to be rather unreliable, how are evolutionists to apply this point to their own theoretical beliefs, including their belief in evolution? One lesson that should be extracted is a certain humility—an admission of fallibility. This will not be news to evolutionists who have absorbed the fact that science in general is a fallible enterprise. Evolutionary theory just provides an important part of the explanation of why our reasoning about theoretical matters is fallible.

Far from showing that evolutionary theory is self-defeating, this consideration should lead those who believe the theory to admit that the best they can do in theorizing is to do the best they can. We are stuck with the cognitive equipment that we have. We should try to be as scrupulous and circumspect about how we use this equipment as we can. When we claim that evolutionary theory is a very well confirmed theory, we are judging this theory by using the fallible cognitive resources we have at our disposal. We can do no other.

And such humility helps to dismiss arguments about the arrogance of science and scientism.

On the topic of Bostrom’s Simulation Hypothesis, I remain skeptical that we live in a simulated universe.

Keep Suspicious and Carry On

I’ve previously argued that it is unlikely that resource-constrained simulations can achieve levels of fidelity adequate for what we observe around us. This argument combined computational irreducibility with assumptions about the complexity of the evolutionary trajectories of living beings. The observed contingency of the evolutionary process may also argue against any kind of “intelligent” organizing principle, though not against simulation itself.

Leave it to physicists to envision a test of the Bostrom hypothesis that we are living in a computer simulation. Martin Savage and his colleagues look at quantum chromodynamics (QCD) and current methods for simulating QCD. They conclude that if we are, in fact, living in a simulation, then we might observe specific inconsistencies arising from the finite computing power available to the universe as a whole. Those inconsistencies would show up, specifically, in the distribution of cosmic ray energies. Note that if the distribution is not unusual, the universe could be either a simulation (just a sophisticated one) or a truly physical one (free-running, not hosted on another entity’s computational framework). Only if the distribution is unusual might the universe be a simulation.

Bostrom and Computational Irreducibility


Nick Bostrom elevated philosophical concerns to the level of the popular press with his paper, Are You Living in a Computer Simulation? which argues that:

at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

A critical prerequisite of (3) is that human brains can be simulated in some way. And a co-requisite of that requirement is that the environment must be at least partially simulated in order for the brain simulations to believe in the sensorium that they experience:

If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.

Bostrom’s efforts to minimize the required information content don’t ring true, however. In order for a perceived universe to exhibit even “local” consistency, large-scale phenomena must be simulated with perfect accuracy. Even if, as Bostrom suggests, noticed inconsistencies can be rewritten out of the brains of the simulated individuals, those inconsistencies would eventually have to be resolved into a consistent universe.

Further, creating local consistency without emulating quantum-level phenomena requires first computing the macroscopic phenomena that would be a consequence of those quantum events. Many of these macroscopic physical behaviors are suspected of being essentially irreducible to anything other than the particulate ensemble evolution itself; in other words, there is no analytic macroscopic model that reflects reality without performing the entire simulation. This restates Wolfram’s notion of computational irreducibility.
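Wolfram’s canonical illustration of computational irreducibility is the Rule 30 cellular automaton. A minimal sketch (assuming cyclic boundaries for simplicity): as far as anyone knows, there is no closed-form shortcut to the state after t steps, short of running all t steps.

```python
def rule30_step(cells):
    """One step of Wolfram's Rule 30: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def evolve(cells, steps):
    # Believed computationally irreducible: to learn the state after
    # `steps` iterations, you must actually perform all of them.
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

state = [0] * 64
state[32] = 1               # single seed cell
later = evolve(state, 50)   # no known analytic shortcut to this state
```

The rule is trivial to state, yet predicting its output seems to demand the full simulation, which is exactly the burden a simulator hoping to skip quantum-level computation would face for any irreducible macroscopic process.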

Taken together, Bostrom’s simulated world seems less likely, and his quoted defeater, “Simulating the entire universe down to the quantum level is obviously infeasible,” appears instead to describe a critical requirement for the current observable universe. Therefore, it is unlikely that we live in a simulated universe.