Category: Psychology

Instrumentality and Terror in the Uncanny Valley

I got an Apple HomePod the other day. I have several AirPlay speakers already, two in one house and a third in my separate office. The latter, a Naim Mu-So, combines AirPlay with internet radio and Bluetooth, but I mostly use it for the streaming radio features (KMozart, KUSC, Capital Public Radio, etc.). The HomePod’s Siri implementation combined with Apple Music allows me to voice control playlists and experiment with music that I wouldn’t generally have bothered to buy and own. I can now sample at my leisure without needing to broadcast via a phone or tablet or computer. Steve Reich, Bill Evans, Thelonious Monk, Bach organ mixes, variations of Tristan and Isolde, and, yesterday, when I asked for “workout music,” I was gifted with Springsteen’s Born to Run, which I would never have associated with working out, but now I have dying on the mean streets of New Jersey with Wendy in some absurd drag race conflagration replaying over and over again in my head.

Right after setup, I had a strange experience. I was shooting random play thoughts to Siri, then refining them and testing the limits. There are many, as reviewers have noted. Items easily found in Apple Music occasionally turn out to be failures for Siri on the HomePod, but simple requests and control of a few HomeKit devices work acceptably. The strange experience was my own trepidation over barking commands at the device, especially when I was repeating myself: “Hey Siri. Stop. Play Bill Evans. Stop. Play Bill Evans’ Peace Piece.” (Oh my, homophony, what will happen? It works.) I found myself treating Siri as a bit of a human being in that I didn’t want to tell her to do a trivial task that I had just asked her to perform. A person would become irritated, and we naturally avoid that kind of repetitious behavior when asking others to perform tasks for us. Unless there is an employer-employee relationship where that kind of repetition is part of the work duties, it is not something we do among friends, family, acquaintances, or co-workers. It is rude. It is demanding. It treats people insensitively as instruments for fulfilling one’s trivial goals.

I found this odd. I occasionally play video games with lifelike visual and auditory representations of characters, but I rarely ask them to do things that involve selecting from an open-ended collection of possibilities, since most interactions with non-player entities are channeled by the needs of the storyline. They are never “eerie” as the research on uncanny valley effects refers to them. This is likely mediated by context and expectations. I’m playing a game and it is on a flat projection. My expectations are never violated by the actions within the storyline.

But why then did Siri as a repetitious slave elicit such a concern?

There are only a handful of studies designed to better understand the nature of the uncanny valley eeriness effect. One of the more interesting investigates the relationship between our thoughts of death and the appearance of uncanny wax figures or androids. Karl MacDorman’s Androids as an Experimental Apparatus: Why Is There an Uncanny Valley and Can We Exploit It? examines the connection between Terror Management Theory and our reactions to eeriness in objects. Specifically, the work builds on the idea that we have developed different cultural and psychological mechanisms to distract ourselves from the anxiety of death concerns. These take different forms in their details but largely involve:

… a dual-component cultural anxiety buffer consisting of (a) a cultural world-view — a humanly constructed symbolic conception of reality that imbues life with order, permanence, and stability; a set of standards through which individuals can attain a sense of personal value; and some hope of either literally or symbolically transcending death for those who live up to these standards of value; and (b) self-esteem, which is acquired by believing that one is living up to the standards of value inherent in one’s cultural worldview.

This theory has been tested in various ways, largely by priming with death-related terms (coffin, buried, murder, etc.) and looking at the impact that exposure to these terms has on other decision-making after short delays. In this particular study, an android face and a similar human face were shown to study participants and their reactions were examined for evidence that the android affected their subsequent choices; for instance, preference for a charismatic speech versus a “relationship-oriented” speech by a politician. Our terror response hypothetically causes us to prefer the charismatic leader more when we are under threat. Another form of testing involved ambiguous word-completion puzzles. For instance, the subject is presented with COFF _ _ and asked to choose an E or an I for the next letter (COFFEE versus COFFIN), or MUR _ _ R (MURMUR or MURDER). Other “uncanny” word sets, such as _ _ EEPY (CREEPY, SLEEPY) and ST _ _ _ GE (STRANGE, STORAGE), were also included, as were controls that had no such associations. The android presentation resulted in statistically significant increases in completions from the uncanny word set as well as from the combined uncanny/death word sets, though the death words alone did not reach statistical significance.

And what about my fear of treating others instrumentally? It may fall into a similar category, though driven by the “set of standards through which individuals can attain a sense of personal value.” I am sensitive to mistreating others as both a barely-conscious recognition of their humanity and as a set of heuristic guidelines that fire automatically as a form of nagging conscience. I will note, however, that after a few days I appear to have become desensitized to the concern. Siri, please turn off the damn noise.

Black and Gray Boxes with Autonomous Meta-Cognition

Vijay Pande of VC Andreessen Horowitz (who passed on my startups twice but, hey, it’s just business!) has a relevant article in The New York Times concerning fears of the “black box” of deep learning and related methods: is the lack of explainability and the limited capacity for interrogating the underlying decision making a deal-breaker for applications to critical areas like medical diagnosis or parole decisions? His point is simple, and related to the previous post’s suggestion of the potential limitations of our capacity to truly understand many aspects of human cognition. Even the doctor may only be able to point to a nebulous collection of clinical experiences when it comes to certain observational aspects of the job, like reading images for indicators of cancer. At least the algorithm has been trained on a significantly larger collection of data than the doctor could ever encounter in a professional lifetime.

So the human is almost as much a black box (maybe a gray box?) as the algorithm. One difference that needs to be considered, however, is that the deep learning algorithm might make unexpected errors when confronted with unexpected inputs. The classic example from the early history of artificial neural networks involved a DARPA test of detecting military tanks in photographs. The formulation of the story, somewhere between apocryphal and legendary, is that there was a difference in cloud cover between the tank images and the non-tank images. The end result was that the system performed spectacularly on the training and test data sets but then failed miserably on new data that lacked the cloud cover factor. I recalled this slightly differently recently and substituted film grain for the cloudiness. In any case, it became a discussion point about the limits of data-driven learning, showing how radically incorrect solutions can emerge without a careful understanding of how the systems work.

How can the fears of radical failure be reduced? In medicine we expect automated decision making to be backed up by a doctor who serves as a kind of meta-supervisor. When the diagnosis or prognosis looks dubious, the doctor will order more tests or countermand the machine. When the parole board sees the parole recommendation, they always have the option of ignoring it based on special circumstances. In each case, it is the presence of some anomaly in the recommendation or the input data that would lead to reconsideration. Similarly, it is certainly possible to automate that scrutiny at a meta-level. In machine learning, regularization is used to dampen the influence of noisy and outlying data points in an effort to prevent overtraining or overfitting. In much the same way, that kind of statistical scrutiny of the inputs can provide warnings and clues about data viability. And, in turn, unusual outputs that are statistically unlikely given the history of the machine’s decisions can trigger warnings about anomalous results.
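A minimal sketch of that output-side scrutiny, assuming nothing about the underlying model: the function name, the z-score test, the threshold, and the score ranges below are illustrative choices of mine, not a clinical or legal standard.

```python
import numpy as np

def flag_anomalous_output(history, new_score, z_threshold=3.0):
    """Flag a model output that is statistically unlikely given the
    history of the machine's prior decisions (simple z-score test)."""
    history = np.asarray(history, dtype=float)
    mean, std = history.mean(), history.std()
    if std == 0.0:
        # Degenerate history: any deviation at all is worth a second look.
        return new_score != mean
    return abs(new_score - mean) / std > z_threshold

# Hypothetical usage: a recommendation far outside the model's usual
# range is routed to a human meta-supervisor rather than acted on.
past_scores = np.random.normal(loc=0.2, scale=0.05, size=500)
print(flag_anomalous_output(past_scores, 0.21))  # False: ordinary output
print(flag_anomalous_output(past_scores, 0.85))  # True: escalate for review
```

The same test, pointed at incoming features rather than at the output score, covers the input-data side of the warning system.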

Deep Simulation in the Southern Hemisphere

I’m unusually behind in my postings due to travel. I’ve been prepping for and am now deep inside a fresh pass through New Zealand after two years away. The complexity of the place seems to have a certain draw for me that has lured me back, yet again, to backcountry tramping amongst the volcanoes and glaciers, and to leisurely beachfront restaurants painted with eruptions of summer flowers fueled by the regular rains.

I recently wrote a technical proposal that rounded up a number of the most recent advances in deep learning neural networks. In each case, as with Google’s transformer architecture, there is a modest enhancement based on recognizing a deficit in the performance of one of the two broad types of networks, recurrent and convolutional.

An old question is whether we learn anything about human cognition if we just simulate it using some kind of automatically learning mechanism. That is, if we use a model acquired through some kind of supervised or unsupervised learning, can we say we know anything about the original mind and its processes?

We can at least say that the learning methodology appears to be capable of achieving the technical result we were looking for. But it also might mean something a bit different: that there is not much more interesting going on in the original mind. In this radical corner sits the idea that cognitive processes in people are tactical responses left over from early human evolution. All you can learn from them is that they may be biased and tilted towards that early human condition, but beyond that things just are the way they turned out.

If we take this position, then, we might have to discard certain aspects of the social sciences. We can no longer expect to discover some hauntingly elegant deep structure to our minds, language, or personalities. Instead, we are just about as deep as we are shallow, nothing more.

I’m alright with this, but I think it is premature to draw the more radical conclusion. In the same vein that there are tactical advances, small steps that improve simple systems in solving more complex problems, the harder questions of self-reflectivity and “general AI” have so far had limited traction. It’s perfectly reasonable to expect that, with further development, the approaches and architectures that do reveal insights into deep cognitive “simulation” will, in turn, uncover some of that deep structure.

The Universal Roots of Fantasyland

Intellectual history and cultural criticism always teeter on the brink of totalism. So it was when Christopher Hitchens was forced to defend the hyperbolic subtitle of God Is Not Great: How Religion Poisons Everything. The complaint was always the same: everything, really? Or when Neil Postman downplayed the early tremors of the internet in his 1985 Amusing Ourselves to Death. Email couldn’t be anything more than another movement towards entertainment and celebrity. So it is no surprise that Kurt Andersen’s Fantasyland: How America Went Haywire: A 500-Year History is open to similar charges.

Andersen’s thesis is easily digestible: we built a country on fantasies. From the earliest charismatic stirrings of the Puritans to the patent medicines of the 19th century, through to the counterculture of the 1960s, and now with an incoherent insult comedian and showman as president, America has thrived on inventing wild, fantastical narratives that coalesce into movements. Andersen’s detailed analysis is breathtaking as he pulls together everything from linguistic drift to the psychology of magical thinking to justify his thesis.

Yet his thesis might be too narrow. It is not a uniquely American phenomenon. When Andersen mentions cosplay, he fails to identify its Japanese contributions, including the word itself. In the California Gold Rush, he sees economic fantasies driving a generation to unmoor themselves from their merely average lives. Yet the conquistadores had sought to enrich themselves, God, and country long before Americans were forming their shining cities on hills. And in mid-19th-century Europe, while the Americans panned in the Sierra, romanticism was throwing off the oppressive yoke of Enlightenment rationality as the West became increasingly exposed to enigmatic Asian cultures. By the 20th century, Weimar Berlin was a hotbed of cultural fantasies that dovetailed with the rise of Nazism and a fantastical theory of race, German volk culture, and Indo-European mysticism. In India, film has been the starting point for many political careers. The religion of Marxism produced Heroic Realism as the stained glass of its Communist cathedrals.

Is America unique, or is it simply human nature to strive for what has not yet existed and, in so doing, create and live in alternative fictions that transcend the mundanity of ordinary reality? If the latter, then Andersen’s thesis still stands, but not as a singular evolution. Cultural change is driven by equal parts fantasy and reality. Exploration and expansion were paired with fantastical justifications from religious and literary sources. The growth of an entertainment industry was two-thirds market-driven commerce and one-third creativity. The World Wide Web was originally developed to exchange scientific information but was carrying porn from nearly the moment it began.

To be fair, Chapter 32 (America Versus the Godless Civilized World: Why Are We So Exceptional) provides an argument for the exceptionalism of America, at least in terms of religiosity. The pervasiveness of religious belief in America is unlike that of nearly all other developed nations, and the variation and creativity of those beliefs seem to defy economic and social science predictions about how religions shape modern life across nations. In opposition, however, is a following chapter on postmodernism in academia that again shows how a net wider than America is needed to explain anti-rationalist trends. From Foucault and Continental philosophy we see the trend towards fantasy; Anglo-American analytical philosophy has determinedly moved towards probabilistic formulations of epistemology and more and more scientism.

So what is the explanation of irrationality, whether uniquely American or more universal? In Fantasyland, Andersen pins the blame on the persistence of intense religiosity in America. Why America alone remains a mystery, but the consequence is that the adolescent transition away from belief in fairytales never occurs, and there is a bleed-over effect into the acceptance of alternative formulations of reality:

The UC Berkeley psychologist Alison Gopnik studies the minds of small children and sees them as little geniuses, models of creativity and innovation. “They live twenty-four/seven in these crazy pretend worlds,” she says. “They have a zillion different imaginary friends.” While at some level, they “know the difference between imagination and reality…it’s just they’d rather live in imaginary worlds than in real ones. Who could blame them?” But what happens when that set of mental habits persists into adulthood too generally and inappropriately? A monster under the bed is true for her, the stuffed animal that talks is true for him, speaking in tongues and homeopathy and vaccines that cause autism and Trilateral Commission conspiracies are true for them.

This analysis extends the umbrella of religious theories built around instincts for perceiving purposeful action to an unceasing escalation of imaginary realities to buttress these personified habits of mind. It’s a strange preoccupation for many of us, though we can be accused of being coastal elites (or worse) just for entertaining such thoughts.

Fantasyland doesn’t end on a positive note but I think the broader thesis just might. We are all so programmed, I might claim. Things slip and slide, politics see and saw, but there seems to be a gradual unfolding of more rights and more opportunity for the many. Theocracy has always lurked in the basement of the American soul, but the atavistic fever dream has been eroded by a cosmopolitan engagement with the world. Those who long for utopia get down to the business of non-zero-sum interactions with a broader clientele and drift away, their certitude fogging until it lifts and a more conscientious idealization of what is and what can be takes over.

Brain Gibberish with a Convincing Heart

Elon Musk believes that direct brain interfaces will help people better transmit ideas to one another in addition to just allowing thought-to-text generation. But there is a fundamental problem with this idea. Let’s take Hubert Dreyfus’ conception of the way meaning works as being tied to a more holistic view of our social interactions with others. Hilary Putnam would probably agree with this perspective, though now I am speaking for two dead philosophers of mind. We can certainly conclude that my mental states when thinking about the statement “snow is white” are, borrowing from Putnam who borrows from Quine, different from a German speaker’s when thinking “Schnee ist weiß.” The orthography, grammar, and pronunciation are different to begin with. Then there is what seems to transpire when I think about that statement: mild visualizations of white snow-laden rocks above a small stream, for instance, or, just now, Joni Mitchell’s “As snow gathers like bolts of lace/Waltzing on a ballroom girl.” Positing some central, logical ground that merely asserts such a statement as a propositional truth shared in some kind of mental interlingua doesn’t do justice to the complexities of what the statement entails.

Religious and political terminology is notoriously elastic. Indeed, for the former, it hardly even seems coherent to talk about the concept of supernatural things or events. If they are detectable by any sense other than some kind of unverifiable gnosis, then they are at least natural in that they are manifesting in the observable world. So “supernatural” imposes a barrier that seems to preclude any kind of discussion using ordinary language. The only thing left is a collection of metaphysical assumptions that, lacking any sort of reference, must merely conform to the patterns of synonymy, metonymy, and the other language games that we ordinarily reserve for discernible events and things. And, of course, where unverifiable gnosis holds sway, it is not public knowledge and therefore seems mainly to serve as a social mechanism for attracting attention to oneself.

Politics takes on a similar quality, with it often said to be a virtue if a leader can translate complex policies into simple sound bites. But, as we see in modern American politics, what instead happens is that abstract fear signaling is the primary currency to try to motivate (and manipulate) the voter. The elasticity of a concept like “freedom” is used to polarize the sides of political negotiation that almost always involves the management of winners and losers and the dividing line between them. Fear mixes with complex nostalgia about times that never were, or were more nuanced than most recall, and jeremiads serve to poison the well of discourse.

So, if I were to have a brain interface, it might be trainable to write words for me by listening to the regular neural firing patterns that accompany my typing or speaking, but I doubt it would provide some kind of direct transmission or telepathy between people that would carry any more content than those written or spoken forms. Instead, the inscrutable and non-referential abstractions bound up with complex ideas would arrive still tied to my own holistic meaning network and at odds with the receiver’s, and that would just be gibberish to any other mind. Worse still, such a system might also be able to convey raw emotion from person to person, amplifying the fear or joy component of an idea without being able to transmit the specifics of the thoughts. And that would be worse than mere gibberish: it would be gibberish with a convincing heart.

The Obsessive Dreyfus-Hawking Conundrum

I’ve been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over, so I inched up into triple-digit speeds in the M6. Because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F-117s practiced touch-and-gos at Holloman in the background, and then driving my carpool truck out to the high energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before trying to track down the trailhead and starting on my hike. (I would have run the thing but wanted to go to lunch later and didn’t have access to a shower. Neither restaurant nor diners deserve an après-run moi.) And then I was on the trail and I kept stopping and taking plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus. In English, the syntax and semantics are less constrained and the pragmatics more pronounced, but the emotional high is much the same.

With the recent death of Hubert Dreyfus at Berkeley it seems an opportune time to consider the uniquely human capabilities that are involved in each of these creative ventures. Uniquely, I suggest, because we can’t yet imagine what it would be like for a machine to do the same kinds of intelligent tasks. Yet, from Stephen Hawking through to Elon Musk, influential minds are worried about what might happen if we develop machines that rise to the level of human consciousness. This might be considered a science fiction-like speculation since we have little basis for conjecture beyond the works of pure imagination. We know that mechanization displaces workers, for instance, and think it will continue, but what about conscious machines?

For Dreyfus, the human mind is too embodied and situational to be considered an encodable thing representable by rules and algorithms. Much like the trajectory of a species through an evolutionary landscape, the mind is, in some sense, an encoded reflection of the world in which it lives. Taken further, the evolutionary parallel becomes even more relevant in that it is embodied in a sensory and physical identity, a product of a social universe, and an outgrowth of some evolutionary ping pong through contingencies that led to greater intelligence and self-awareness.

Obsession with the cultivars, the traits and tendencies, that lead to this riot of wordplay and software refinement is a fine example of how this moves away from the fears of Hawking and towards the impossibilities of Dreyfus. We might imagine that we can simulate our way to the kernel of instinct and emotion that makes such things possible. We might also claim that we can disconnect the product of the effort from these internal states and the qualia that defy easy description. The books and the new technologies have only desultory correspondence to the process by which they are created. But I doubt it. It’s more likely that getting from great automatic speech recognition or image classification to the general AI that makes us fearful is a longer hike than we currently imagine.

Tweak, Memory

Artificial Neural Networks (ANNs) were, from early on in their formulation as Threshold Logic Units (TLUs) or Perceptrons, mostly focused on non-sequential decision-making tasks. With the invention of back-propagation training methods, the application to static presentations of data became somewhat fixed as a methodology. During the 90s Support Vector Machines became the rage and then Random Forests and other ensemble approaches held significant mindshare. ANNs receded into the distance as a quaint, historical approach that was fairly computationally expensive and opaque when compared to the other methods.

But Deep Learning has brought the ANN back through a combination of improvements, both minor and major. The most important enhancements include pre-training of the networks as auto-encoders prior to pursuing error-based training using back-propagation or Contrastive Divergence with Gibbs Sampling. The other critical enhancement derives from Schmidhuber and others’ work in the 90s on managing temporal presentations to ANNs so they can effectively process sequences of signals. This latter development is critical for processing speech, written language, grammar, changes in video state, etc. Back-propagation without some form of recurrent network structure or memory management washes out the error signal that is needed for adjusting the weights of the networks. And it should be noted that increased computational firepower from GPUs and custom chips has accelerated training performance enough that experimental cycles are within the range of the doable.
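A minimal sketch of the recurrent idea, assuming PyTorch as the framework; the class name, layer sizes, and toy task below are illustrative choices of mine, not a reconstruction of any particular system. The point is simply that the LSTM’s cell state carries early inputs forward so the error signal at the end of a sequence can still adjust the weights that processed its beginning.

```python
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    """Toy recurrent model: read a sequence of feature vectors and emit
    one label for the whole sequence. The LSTM cell state is the 'memory
    management' that keeps early inputs from washing out."""
    def __init__(self, input_size=8, hidden_size=32, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                 # x: (batch, time, features)
        outputs, (h_n, c_n) = self.lstm(x)
        return self.head(h_n[-1])         # classify from the final hidden state

# Back-propagation through time: the loss at the sequence end still
# reaches the weights that processed the first timestep.
model = SequenceClassifier()
x = torch.randn(4, 20, 8)                 # 4 sequences, 20 timesteps, 8 features
labels = torch.tensor([0, 1, 1, 0])
loss = nn.CrossEntropyLoss()(model(x), labels)
loss.backward()
```

The gating machinery inside the LSTM is exactly the kind of architectural tweak discussed below: a remedy for the washed-out error signal rather than a claim about how brains do it.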

Note that these are what might be called “computer science” issues rather than “brain science” issues. Researchers are drawing rough analogies between some observed properties of real neuronal systems (neurons fire and connect together) but then are pursuing a more abstract question as to how a very simple computational model of such neural networks can learn. And there are further analogies that start building up: learning is due to changes in the strength of neural connections, for instance, and neurons fire after suitable activation. Then there are cognitive properties of human minds that might be modeled, as well, which leads us to a consideration of working memory in building these models.

It is this latter consideration of working memory that is critical to holding stimuli presentations long enough that neural connections can process them and learn from them. Schmidhuber et al.’s methodology (LSTM) is as ad hoc as most CS approaches in that it observes a limitation of a computational architecture and the algorithms that operate within it, and then tries to remedy the limitation through architectural variations. There tends to be a tinkering and tweaking that goes on in the gradual evolution of these kinds of systems until something starts working. Theory walks hand-in-hand with practice in applied science.

Given that, however, it should be noted that there are researchers who are attempting to create a more biologically plausible architecture that solves some of the issues with working memory and training neural networks. For instance, Frank, Loughry, and O’Reilly at the University of Colorado have been developing a computational model that emulates the circuits connecting the frontal cortex and the basal ganglia. The model uses an elaborate series of activating and inhibiting connections to maintain perceptual stimuli in working memory, and it shows excellent performance on specific temporal presentation tasks. In its attempt to preserve a degree of fidelity to known brain science, it does lose some of the simplicity that purely CS-driven architectures provide, but I think it has a better chance of helping overcome another vexing problem for ANNs. Specifically, the slow learning properties of ANNs bear only scant resemblance to much human learning. We don’t require many, many presentations of a given stimulus in order to learn it; often, one presentation is sufficient. Reconciling the slow tuning of ANN models, even recurrent ones, with this property of human-like intelligence remains an open issue, and more biology may be the key.

Desire and Other Matters

From the frothy mind of Jeff Koons

“What matters?” is a surprisingly interesting question. I think about it constantly, since it weighs in whenever I am plotting future choices, though I often seem to be more autopilot than consequentialist in these deliberations. It is an essential first consideration when trying to value one option versus another. I can narrow the question a bit to “what ideas matter?” This immediately sets aside the broad reality of actions that meaningfully improve lives, like helping others, but still leaves a solid core of concepts that are valued more abstractly. Does the traditional Western liberal tradition really matter? Do social theories? Are less intellectually embellished virtues like consistency and trust more relevant and applicable than notions like, well, consequentialism?

Maybe it amounts to how to value certain intellectual systems against others?

Some are obviously more true than others. “Dowsing belief systems” are less effective, in a certain sense, than “planetary science belief systems.” Yet there is a broader range of issues at work.

But there are some areas of the liberal arts that have a vexing relationship with the modern mind. Take linguistics. The field ranges from catalogers of disappearing languages to theorists concerned with how to structure syntactic trees. Among the latter are the linguists who have followed Noam Chomsky’s paradigm, which explains language using a hierarchy of formal syntactic systems, all with recursion as a central feature. What is interesting is that the theory has had very few impacts. It is very simple at its surface: languages are all alike in that they involve phrasal groups that embed in deep hierarchies. The specific ways in which the phrases and their relative embeddings take place may differ among languages, but they are alike in this abstract way.
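As a toy illustration of that recursion claim (the grammar and code below are a minimal sketch of my own, not Chomsky’s formalism or any linguist’s actual analysis): a single self-referential rule is enough to let phrases embed inside phrases to arbitrary depth.

```python
import random

# Minimal phrase-structure grammar with one recursive rule: a noun
# phrase may contain a relative clause that itself contains another
# noun phrase, so embedding can nest to arbitrary depth.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],   # second option recurses
    "VP": [["V", "NP"], ["V"]],
    "N":  [["cat"], ["dog"], ["linguist"]],
    "V":  [["saw"], ["chased"], ["studied"]],
}

def expand(symbol, depth=0, max_depth=4):
    """Expand a symbol; cap recursion depth so generation terminates."""
    if symbol not in GRAMMAR:
        return [symbol]
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        options = [options[0]]   # force the non-recursive expansion
    words = []
    for part in random.choice(options):
        words.extend(expand(part, depth + 1, max_depth))
    return words

print(" ".join(expand("S")))
# e.g. "the dog that chased the cat saw the linguist"
```

The same skeleton, with different phrase orders and categories, covers the abstract sameness the paradigm claims across languages.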

And likewise we have to ask what the impact is of scholarship like René Girard’s theory of mimesis. The theory has a Victorian feel about it: a Freudian/Jungian essential psychological tendency girds all that we know, experience, and see. Violence is the triangulation of wanton desire as we try to mimic one another. That triangulation was suppressed—sublimated, if you will—by sacrifice that refocused the urge to violence on the sacrificial object. It would be unusual for such a theory to rise above the speculative scholarship that only queasily embraces empiricism without some prodding.

But maybe it is enough that ideas are influential at some level. So we have Ayn Rand, liberally called out by American economic conservatives, at least until they are reminded of Rand’s staunch atheism. And we have Peter Thiel, from the PayPal mafia to the recent Gawker lawsuits, justifying his Facebook angel round based on Girard’s theory of mimesis. So we are all slaves of our desire to like, indirectly, a bunch of crap on the internet. But at least it is theoretically sound.

Subtly Motivating Reasoning

Continuing with the general theme of motivated reasoning, there are some rather interesting results reported in the New Republic. Specifically, Ian Anson of the University of Maryland, Baltimore County, found that political partisans reinforced their perspectives on the state of the U.S. economy more strongly when they were given “just the facts” rather than a strong partisan statement combined with the facts. The effect held even when the partisan statements aligned with their own partisan perspectives.

The author concludes that people, in constructing their views of the causal drivers of the economy, believe that they are unbiased in their understanding of the underlying mechanisms. The barefaced partisan statements interrupt that construction process, perhaps, or at least distract from it. Dr. Anson points out that subtly manufacturing consent therefore makes for better partisan fellow travelers.

There are a number of theories concerning how meanings get incorporated into our semantic systems, and whether the idea of meaning itself is any better than simply discussing reference. Moreover, we can gauge the uncertainty we must have concerning complex systems. They seem to form a hierarchy, with the actors in our daily lives and the motivations of those we have long histories with in the mostly predictable camp. Next, we may have good knowledge about a field or area of interest that we have been trained in. When this framework has a scientific basis, we also rate our knowledge as largely reliable, but we know the limits of that knowledge. It is in predicting futures and large-scale policy that we become subject to the difficulty of integrating complex signals into a cohesive framework. The partisans supply factoids and surround them with causal reasoning. We weigh those against alternatives and hold them as tentative. But then we have to exist in a political life as well, and it’s not enough to just proclaim our man or woman or party as great and worthy of our vote and love; we must also justify that consideration.

I speculate now that it may be possible to wage war against partisan bias by employing the exact methods described as effective by Dr. Anson. Specifically, if in any given presentation of economic data there was one fact presented that appeared to undermine the partisan position otherwise described by the data, would it lead to a general weakening of the mental model in the reader’s head? For instance, compare the following two paragraphs:

The unemployment rate has decreased from a peak of 10% in 2009 to 4.7% in June of 2016. This rate doesn’t reflect the broader, U-6, rate of nearly 10% that includes the underemployed and others who are not seeking work. Wages have been down or stagnant over the same period.

Versus:

The unemployment rate has decreased from a peak of 10% in 2009 to 4.7% in June of 2016. This rate doesn’t reflect the broader, U-6, rate of nearly 10% that includes the underemployed and others who are not seeking work. Wages have been down or stagnant over the same period even while consumer confidence and spending have risen to an 11-month high.

The second paragraph adds an accurate but upbeat and contradictory signal to the more subtle gloom of the first paragraph. Of course, partisan hacks will naturally avoid doing this kind of thing. Marketers and salespeople don’t let the negative signals creep in if they can avoid it, but I would guess that a subtle contradiction embedded in the signal would disrupt the conspiracy theorists and the bullshit artists alike.

Euhemerus and the Bullshit Artist

Sailing down through the Middle East, past the monuments of Egypt and the wild African coast, and then on into the Indian Ocean, past Arabia Felix, Euhemerus came upon an island. Maybe he came upon it. Maybe he sailed. He was perhaps—yes, perhaps; who can say?—sailing for Cassander in deconstructing the memory of Alexander the Great. And that island, Panchaea, held a temple of Zeus with a written history of the deeds of men who became the Greek gods.

They were elevated, they became fixed in the freckled amber of ancient history, their deeds escalated into myths and legends. And, likewise, the ancient tribes of the Levant brought their El and Yahweh, and Asherah and Baal, and then the Zoroastrians influenced the diaspora in refuge in Babylon, until they returned having found dualism, elemental good and evil, and then reimagined their original pantheon down through monolatry and into monotheism. These great men and women were reimagined into something transcendent and, ultimately, barely understandable.

Even the rational Yankee in Twain’s Connecticut Yankee in King Arthur’s Court realizes almost immediately why he will soon rule over the medieval world when he is declared a wild dragon upon being presented to the court. He waits for someone to point out that he doesn’t resemble a dragon, but the medieval mind does not seem to question the reasonableness of mythic claims, even in the presence of contrary evidence.

So it goes with the human mind.

And even today we have Fareed Zakaria justifying his use of the term “bullshit artist” for Donald Trump. Trump’s logorrhea is punctuated by so many incomprehensible and contradictory statements that it becomes a mythic whirlwind. He lets slip, now and again, that his method is deliberate:

DT: Therefore, he was the founder of ISIS.

HH: And that’s, I’d just use different language to communicate it, but let me close with this, because I know I’m keeping you long, and Hope’s going to kill me.

DT: But they wouldn’t talk about your language, and they do talk about my language, right?

Bullshit artist is the modern way of saying what Euhemerus was trying to say in his fictional “Sacred History.” Yet we keep getting entranced by these coordinated maelstroms of utter crap, from World Net Daily to Infowars to Fox News to Rush Limbaugh. Only the old Stephen Colbert could contend with it through his own mythical inversion of bullshit. Mockery seems the right approach, but it doesn’t seem to have a great deal of impact on the conspiratorial mind.