Sentience is Physical

Sentience is all the rage these days. With large language models (LLMs) built on deep learning neural networks, the question-answering behavior of these systems takes on a curious approximation of talking with a smart person. Recently a member of Google’s AI team was fired after declaring one of their systems sentient. His offense? Violating public disclosure rules. I and many others who have a firm understanding of how these systems work (they predict next words from their previous productions crossed with the question token stream) are quick to dismiss the claims of sentience. But what does sentience really amount to, and how can we determine whether a machine becomes sentient?
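To make that mechanism concrete, here is a minimal sketch of the generation loop, assuming greedy decoding; `model` and `tokenizer` are hypothetical stand-ins rather than any particular library’s interface.

```python
# A minimal sketch of next-token prediction with greedy decoding.
# `model` and `tokenizer` are hypothetical stand-ins, not a real API.

def generate(model, tokenizer, prompt, max_new_tokens=50):
    tokens = tokenizer.encode(prompt)           # the question token stream
    for _ in range(max_new_tokens):
        probs = model.next_token_probs(tokens)  # map of token id -> probability
        next_token = max(probs, key=probs.get)  # greedy: take the likeliest token
        tokens.append(next_token)               # previous productions feed back in
        if next_token == tokenizer.eos_id:      # stop at end-of-sequence
            break
    return tokenizer.decode(tokens)
```

Everything the system “says” falls out of a loop like this; there is no further mechanism behind the curtain.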

Note that there are those who differentiate sentience (being able to have feelings) from sapience (being able to have thoughts) and consciousness (some private, subjective, phenomenal sense of self). I am willing to blend them together a bit, since the topic here isn’t narrowly trying to address, say, the ethics of animal treatment, where the distinctions can be useful.

First, we have the “imitation game” Turing-test-style approach to the question of how we might ever determine whether a machine has become sentient. If a remote machine can fool a human into believing it is a person, it must be as intelligent as a person and therefore sentient, as we presume people are. But this is a limited goal line. If the interaction covers only a narrow domain, like solving your cable internet installation problems, we don’t think of that as a sentient machine. Even over the larger domain of open-ended question answering, if the human doesn’t hit upon a revealing kind of error that a machine might make but a human would not, we remain unconvinced that the target is sentient.…
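For what it’s worth, the game itself is easy to mechanize. Here is a toy harness for the setup, where the `judge` object and respondent callables are purely illustrative, not any real benchmark’s API.

```python
# A toy harness for the imitation game: a judge converses with an unseen
# respondent and must guess whether it is human. All names here are
# illustrative stand-ins.

import random

def imitation_game(judge, human, machine, rounds=5):
    respondent, truth = random.choice([(human, "human"), (machine, "machine")])
    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)          # judge probes for telltale errors
        reply = respondent(question, transcript)  # the hidden party answers
        transcript.append((question, reply))
    verdict = judge.guess(transcript)             # "human" or "machine"
    return verdict == truth                       # False means the judge was fooled
```

Nothing in the harness constrains the domain of the questions, which is exactly where the trouble lies: a pass over cable-installation chat proves far less than a pass over open-ended conversation.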

On Woo-Woo and Schrödinger’s Cat

Michael Shermer and Sam Harris got together with an audience at Caltech to beat up on Deepak Chopra and a “storyteller” named Jean Houston in The Future of God debate hosted by ABC News. And Deepak, behind his crystal-embellished eyewear, got uncharacteristically angry in return, especially at Shermer’s assertion that Deepak is just talking “woo-woo.”

But is there any basis for the woo-woo that Deepak is weaving? As it turns out, he is building on some fairly impressive work by Stuart Hameroff, MD, of the University of Arizona and Sir Roger Penrose of Oxford University. Under development for more than 25 years, this work has most recently been summed up in their 2014 paper, “Consciousness in the universe: A review of the ‘Orch OR’ theory,” which is available for free (but not the commentaries, alas). Deepak was even invited to comment on the paper in Physics of Life Reviews, though the content of his commentary was challenged as being somewhat orthogonal, or even contradictory, to the main argument.

To start somewhere near the beginning: Penrose became obsessed with the limits of computation in the late 1980s. The Halting Problem sums up his concerns about the idea that human minds could be isomorphic with computational devices: no algorithm can decide, for every program, whether that program eventually halts, yet Penrose argues (following Gödel) that human mathematicians can see truths that no fixed formal procedure can derive. To Penrose, there seems to be something that lets minds break free of the limits of “mere” Turing-complete computation. Whatever that something is, it should be physical and reside within the structure of the brain itself. Hameroff and Penrose would also like that something to explain consciousness and all of its confusing manifestations, for surely consciousness is part of that brain operation.
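For readers who want the contradiction spelled out, here is a toy rendering of the diagonalization at the heart of the Halting Problem; the `halts` oracle is the piece that cannot exist.

```python
# Sketch of the halting-problem diagonalization. Assume, for contradiction,
# an oracle halts(program, data) that returns True iff program(data) halts.

def halts(program, data):
    """Hypothetical halting oracle; no Turing machine can implement this."""
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # the oracle says it halts, so loop forever
            pass
    return            # the oracle says it loops, so halt immediately

# diagonal(diagonal) is contradictory: if it halts, the oracle said it
# halts, so it loops; if it loops, the oracle said it loops, so it halts.
# No such halts() can exist.
```

Penrose’s wager is that human mathematical insight somehow sidesteps limits of this kind, which is why he goes hunting for non-computable physics in the brain.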

Now, getting at some necessary and sufficient sorts of explanations for this new model requires looking at Hameroff’s medical specialty: anesthesiology.…