Instrumentality and Terror in the Uncanny Valley

I got an Apple HomePod the other day. I have several AirPlay speakers already, two in one house and a third in my separate office. The latter, a Naim Mu-So, combines AirPlay with internet radio and Bluetooth, but I mostly use it for the streaming radio features (KMozart, KUSC, Capital Public Radio, etc.). The HomePod’s Siri implementation, combined with Apple Music, lets me voice-control playlists and experiment with music that I wouldn’t generally have bothered to buy and own. I can now sample at my leisure without needing to broadcast from a phone, tablet, or computer. Steve Reich, Bill Evans, Thelonious Monk, Bach organ mixes, variations of Tristan and Isolde, and, yesterday, when I asked for “workout music,” I was gifted with Springsteen’s Born to Run, which I would never have associated with working out, but now I have dying on the mean streets of New Jersey with Wendy in some absurd drag-race conflagration replaying over and over again in my head.

Right after setup, I had a strange experience. I was shooting random play thoughts to Siri, then refining them and testing the limits. There are many, as reviewers have noted. Items easily found in Apple Music occasionally stump Siri on the HomePod, but simple requests and control of a few HomeKit devices work acceptably. The strange experience was my own trepidation over barking commands at the device, especially when I was repeating myself: “Hey Siri. Stop. Play Bill Evans. Stop. Play Bill Evans’ Peace Piece.” (Oh my, homophony, what will happen? It works.) I found myself treating Siri as a bit of a human being in that I didn’t want to tell her to do a trivial task that I had just asked her to perform. A person would become irritated, and we naturally avoid that kind of repetitious behavior when asking others to perform tasks for us. Unless there is an employer-employee relationship where that kind of repetition is part of the work duties, it is not something we do among friends, family, acquaintances, or co-workers. It is rude. It is demanding. It treats people insensitively as instruments for fulfilling one’s trivial goals.

I found this odd. I occasionally play video games with lifelike visual and auditory representations of characters, but I rarely ask them to do things that involve selecting from an open-ended collection of possibilities, since most interactions with non-player entities are channeled by the needs of the storyline. They are never “eerie” in the sense that the research on uncanny valley effects uses the term. This is likely mediated by context and expectations: I’m playing a game, it is on a flat projection, and my expectations are never violated by the actions within the storyline.

But why, then, did Siri as a repetitious slave elicit such concern?

There are only a handful of studies designed to better understand the nature of the uncanny valley eeriness effect. One of the more interesting investigates the relationship between our thoughts of death and the appearance of uncanny wax figures or androids. Karl MacDorman’s Androids as an Experimental Apparatus: Why Is There an Uncanny Valley and Can We Exploit It? draws on Terror Management Theory to account for our reactions to eeriness in objects. Specifically, the work builds on the idea that we have developed cultural and psychological mechanisms to distract ourselves from the anxiety of death. These mechanisms take different forms in their details but largely involve:

… a dual-component cultural anxiety buffer consisting of (a) a cultural world-view — a humanly constructed symbolic conception of reality that imbues life with order, permanence, and stability; a set of standards through which individuals can attain a sense of personal value; and some hope of either literally or symbolically transcending death for those who live up to these standards of value; and (b) self-esteem, which is acquired by believing that one is living up to the standards of value inherent in one’s cultural worldview.

This theory has been tested in various ways, largely by priming subjects with death-related terms (coffin, buried, murder, etc.) and looking at the impact that exposure to these terms has on other decision-making after short delays. In this particular study, an android face and a similar human face were shown to participants, and their reactions were examined for evidence that the android affected subsequent choices. For instance, participants chose between a charismatic speech and a “relationship-oriented” speech by a politician; our terror response hypothetically causes us to prefer the charismatic leader more when we are under threat. Another form of testing involved ambiguous word-completion puzzles. For instance, the subject is presented with COFF _ _ and asked to choose an E or an I for the next letter (COFFEE versus COFFIN), or MUR _ _ R (MURMUR or MURDER). Other “uncanny” word stems (_ _ EEPY: CREEPY versus SLEEPY; ST _ _ _ GE: STRANGE versus STORAGE) were also included, as were controls with no such associations. The android presentation resulted in statistically significant increases in uncanny word completions, as well as in the combined uncanny/death set, though the death words alone did not reach significance.
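To make the scoring concrete, here is a minimal sketch, in Python, of how one might tally those ambiguous completions and test whether the android prime shifts them toward the uncanny alternatives. This is not the study’s actual analysis; the stems, word lists, and group tallies below are hypothetical and purely illustrative.

```python
# Illustrative sketch only: scoring ambiguous word-stem completions and
# comparing two priming conditions (android face vs. human face).
# The target sets and counts are hypothetical, not MacDorman's data.
from scipy.stats import fisher_exact

# Each stem can be completed as an uncanny/death word or a neutral word.
TARGETS = {
    "COFF__":  {"uncanny": "COFFIN",  "neutral": "COFFEE"},
    "MUR__R":  {"uncanny": "MURDER",  "neutral": "MURMUR"},
    "__EEPY":  {"uncanny": "CREEPY",  "neutral": "SLEEPY"},
    "ST___GE": {"uncanny": "STRANGE", "neutral": "STORAGE"},
}

def score(responses):
    """Count how many of a participant's completions fall in the uncanny set."""
    return sum(
        1 for stem, word in responses.items()
        if word.upper() == TARGETS[stem]["uncanny"]
    )

# One hypothetical participant's completions: two uncanny, two neutral.
participant = {"COFF__": "coffin", "MUR__R": "murmur",
               "__EEPY": "creepy", "ST___GE": "storage"}
print(score(participant))  # -> 2

# Hypothetical group tallies: [uncanny completions, neutral completions]
android_group = [38, 22]   # participants shown the android face
control_group = [24, 36]   # participants shown the human face

# Simple 2x2 test of whether the android prime shifts completion rates.
odds_ratio, p_value = fisher_exact([android_group, control_group])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```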

And what about my fear of treating others instrumentally? It may fall in a similar category, but one driven by the “set of standards through which individuals can attain a sense of personal value.” I am sensitive to mistreating others, both as a barely conscious recognition of their humanity and as a set of heuristic guidelines that fire automatically, a kind of nagging conscience. I will note, however, that after a few days I appear to have become desensitized to the concern. Siri, please turn off the damn noise.
