Simulator Superputz

The simulation hypothesis is perhaps a bit more interesting than how to add clusters of neural network nodes to do a simple reference resolution task, but it is also less testable. That is the nature of big questions; they would otherwise have been resolved by now. Nevertheless, some theoretical and experimental analysis has been undertaken on the question of whether we are living in a simulation, all of it based on the assumption that the strangeness of quantum and relativistic realities might be a result of limited computing power in the grand simulator machine. For instance, in a virtual reality game, only the walls that you, as a player, can see need to be calculated and rendered. The walls that are out of sight exist only as a virtual map in the computer's memory or are persisted to longer-term storage. Likewise, the behavior of virtual microscopic phenomena need not be calculated so long as the macroscopic results can be rendered, like the fire patterns in a virtual torch.
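
Something like frustum or occlusion culling makes the point concrete. The sketch below is a toy Python illustration of my own (not from the post): wall geometry persists as cheap data in the world map, while the expensive rendering work runs lazily, only for walls inside the player's field of view. All names and numbers are illustrative.

```python
# Toy sketch of "render only what the observer can see": the world exists as
# lightweight records, and costly rendering happens lazily for visible walls only.
from dataclasses import dataclass


@dataclass
class Wall:
    wall_id: int
    angle_from_player: float  # degrees, relative to where the player is facing


def expensive_render(wall: Wall) -> str:
    # Stand-in for the costly work: lighting, textures, microscopic detail, etc.
    return f"rendered wall {wall.wall_id}"


def render_frame(walls: list[Wall], field_of_view: float = 90.0) -> list[str]:
    """Only walls inside the player's field of view pay the rendering cost;
    everything else remains nothing more than entries in the world map."""
    visible = [w for w in walls if abs(w.angle_from_player) <= field_of_view / 2]
    return [expensive_render(w) for w in visible]


world = [Wall(i, angle) for i, angle in enumerate([-170, -30, 0, 45, 120])]
print(render_frame(world))  # only the walls at -30, 0, and 45 degrees get rendered
```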

So one way of explaining physics conundrums like delayed-choice quantum erasers, Bell's inequality, or ER = EPR might be to claim that these phenomena are artifacts of a low-fidelity simulation necessitated by the limits of the simulator's computer. I think the likelihood that this is true is low, however, because we can just as easily imagine an infinitely large cosmos that includes our simulated universe as a mote within it. Low-fidelity simulation constraints might offer some experimental guidance, but the same results could equally be accommodated by simply accepting indeterminacy and non-locality as fundamental features of our universe.

It’s worth considering, however, what we should make of the nature of the simulator, given the potentially devious (and poorly coded) little Matrix we find ourselves trapped in.… Read the rest

Ambiguously Slobbering Dogs

I was initially dismissive of this note from Google Research on improving machine translation with deep learning networks by adding a sentence-level network. My goodness, they’ve rediscovered anaphora and co-reference resolution! The next thing they’ll try is some kind of network-based slot-filler ontology to carry gender metadata. But their goal was to add a framework to their existing recurrent neural network architecture that supports a weak, sentence-level resolution of translational ambiguities while still allowing the TPU/GPU accelerators they have built to run efficiently. It’s a hack, but one that potentially solves yet another corner of the translation problem and might yield a few percent further improvement in translation quality.
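
To make that concrete, here is a toy numpy sketch of my reading of the idea, not Google's actual architecture: a single pooled sentence-level vector is mixed into each step of an otherwise ordinary token-level recurrence, so every output decision gets weak access to sentence-wide context. The names, sizes, and random weights are illustrative placeholders, not a trained model.

```python
import numpy as np

# Illustrative sizes; real systems are vastly larger.
EMBED, HIDDEN, VOCAB = 16, 32, 100
rng = np.random.default_rng(0)

EMBED_TABLE = rng.normal(scale=0.1, size=(VOCAB, EMBED))  # token embeddings
W_in = rng.normal(scale=0.1, size=(EMBED, HIDDEN))        # token input -> hidden
W_rec = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))      # recurrent weights
W_ctx = rng.normal(scale=0.1, size=(EMBED, HIDDEN))       # sentence summary -> hidden
W_out = rng.normal(scale=0.1, size=(HIDDEN, VOCAB))       # hidden -> output logits


def translate_tokens(source_ids: list[int]) -> list[int]:
    """Token-level recurrence, with a pooled sentence-level vector mixed into
    every step so each output decision can see weak, sentence-wide context."""
    x = EMBED_TABLE[source_ids]        # (T, EMBED)
    sentence_ctx = x.mean(axis=0)      # crude whole-sentence summary vector
    h = np.zeros(HIDDEN)
    outputs = []
    for x_t in x:
        h = np.tanh(x_t @ W_in + h @ W_rec + sentence_ctx @ W_ctx)
        outputs.append(int(np.argmax(h @ W_out)))
    return outputs


print(translate_tokens([5, 17, 42, 8]))  # four arbitrary token ids -> four output ids
```

The point of the sketch is purely structural: the token-level loop is untouched, and the sentence-level signal enters as one extra additive term, which is the sort of cheap augmentation that keeps an accelerator-friendly architecture efficient.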

But consider the following sentences:

The dog had the ball. It was covered with slobber.

The dog had the ball. It was thinking about lunch while it played.

In these cases the anaphora gets resolved by semantics, and the resolution seems to us, as native speakers, like a largely automatic and subconscious process. If we had to translate the sentences into a second language, however, we could articulate specific reasons for correctly assigning the “It” to the ball in the first pair. Well, it might be possible for the dog to be covered with slobber, but we would guess that the writer would intentionally avoid that ambiguity. The second pair could conceivably be ambiguous if, in the broader context, the ball were some intelligent entity controlling the dog. Still, when our guesses are limited to the sentence pairs in isolation, we assign the obvious interpretations. Moreover, we can resolve giant, honking passage-level ambiguities with ease, even when the author is showing off by not resolving the co-referents until obscenely late in the text.… Read the rest
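
To make that semantic cue explicit, here is a toy, rule-based sketch of my own (not anything the Google system does): each candidate antecedent carries an animacy feature, each predicate declares what it prefers of its subject, and “It” goes to the best-matching candidate. Real co-reference systems are far richer; this only spells out the “thinking requires an animate thinker” intuition.

```python
# Candidate antecedents from "The dog had the ball." with a crude animacy feature.
CANDIDATES = {"dog": {"animate": True}, "ball": {"animate": False}}

# Selectional preferences of the predicates in the two example pairs.
PREDICATE_PREFERS_ANIMATE = {
    "was covered with slobber": None,   # no strong constraint: either could be slobbered on
    "was thinking about lunch": True,   # thinking wants an animate subject
}


def resolve_it(predicate: str) -> str:
    preference = PREDICATE_PREFERS_ANIMATE[predicate]
    if preference is None:
        # No semantic constraint: fall back to the writer's presumed intent,
        # here hard-coded to the less ambiguous reading, the ball.
        return "ball"
    matches = [name for name, feats in CANDIDATES.items()
               if feats["animate"] == preference]
    return matches[0]


print(resolve_it("was covered with slobber"))  # -> ball
print(resolve_it("was thinking about lunch"))  # -> dog
```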