Category: Science

The Obsessive Dreyfus-Hawking Conundrum

I’ve been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over, so I inched up into triple digit speeds in the M6. Because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F-117s practiced touch-and-goes at Holloman in the background, and then driving my carpool truck out to the high energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before trying to track down the trailhead and starting on my hike. (I would have run the thing but wanted to go to lunch later and didn’t have access to a shower. Neither restaurants nor diners deserve an après-run moi.) And then I was on the trail and I kept stopping and taking plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus. In English, the syntax and semantics are less constrained and the pragmatics more pronounced, but the emotional high is much the same.

With the recent death of Hubert Dreyfus at Berkeley it seems an opportune time to consider the uniquely human capabilities that are involved in each of these creative ventures. Uniquely, I suggest, because we can’t yet imagine what it would be like for a machine to do the same kinds of intelligent tasks. Yet, from Stephen Hawking through to Elon Musk, influential minds are worried about what might happen if we develop machines that rise to the level of human consciousness. This might be considered a science fiction-like speculation since we have little basis for conjecture beyond the works of pure imagination. We know that mechanization displaces workers, for instance, and think it will continue, but what about conscious machines?

For Dreyfus, the human mind is too embodied and situational to be considered an encodable thing representable by rules and algorithms. Much like the trajectory of a species through an evolutionary landscape, the mind is, in some sense, an encoded reflection of the world in which it lives. Taken further, the evolutionary parallel becomes even more relevant in that it is embodied in a sensory and physical identity, a product of a social universe, and an outgrowth of some evolutionary ping pong through contingencies that led to greater intelligence and self-awareness.

Obsession with whatever cultivars, whatever traits and tendencies lead to this riot of wordplay and software refinement, is a fine example of how this moves away from the fears of Hawking and towards the impossibilities of Dreyfus. We might imagine that we can simulate our way to the kernel of instinct and emotion that makes such things possible. We might also claim that we can disconnect the product of the effort from these internal states and the qualia that defy easy description; after all, the books and the new technologies have only desultory correspondence to the process by which they are created. But I doubt it. It’s more likely that getting from great automatic speech recognition or image classification to the general AI that makes us fearful is a longer hike than we currently imagine.

The Ethics of Knowing

In the modern American political climate, I’m constantly finding myself at sea in trying to unravel the motivations and thought processes of the Republican Party. The best summation I can arrive at involves the obvious manipulation of the electorate—but that is not terrifically new—combined with a persistent avoidance of evidence and facts.

In my day job, I research a range of topics trying to get enough of a grasp on what we do and do not know such that I can form a plan that innovates from the known facts towards the unknown. Here are a few recent investigations:

  • What is the state of thinking about the origins of logic? Logical rules fall into broad classes that range from the uncontroversial (modus tollens, propositional logic, predicate calculus) to the speculative (multivalued and fuzzy logic, or quantum logic, for instance). In most cases we make an assumption based on linguistic convention that they are true and then demonstrate their extension, despite the observation that they are tautological (a tiny truth-table check of that claim for modus tollens appears after this list). Synthetic knowledge has no similar limitations but is assumed to be girded by the logical basics.
  • What were the early Christian heresies, how did they arise, and what was their influence? Marcion of Sinope is perhaps the most interesting one of these, in parallel with the Gnostics, asserting that the cruel tribal god of the Old Testament was distinct from the New Testament Father, and proclaiming perhaps (see various discussions) a docetic Jesus figure. The leading “mythicists” like Robert Price are invaluable in this analysis (ignore first 15 minutes of nonsense). The thin braid of early Christian history and the constant humanity that arises in morphing the faith before settling down after Nicaea (well, and then after Martin Luther) remind us that abstractions and faith have a remarkable persistence in the face of cultural change.
  • How do mathematical machines take on so many forms while achieving the same abstract goals? Machine learning, as a reification of human-like learning processes, can imitate neural networks (or an extreme sketch and caricature of what we know about real neural systems), or can be just a parameter slicing machine like Support Vector Machines or ID3, or can be a Bayesian network or mixture model of parameters. We call them generative or non-generative, we categorize them as to discrete or continuous decision surfaces, and we label them in a range of useful ways. But why should they all achieve similar outcomes with similar ranges of error? Indeed, Random Forests were the belles of the ball until Deep Learning took the tiara.
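As a tiny illustration of the tautology claim from the first item above (a throwaway truth-table check of my own, nothing more):

# Modus tollens, ((p -> q) & !q) -> !p, comes out TRUE under every assignment.
implies <- function(a, b) !a | b
tt <- expand.grid(p = c(TRUE, FALSE), q = c(TRUE, FALSE))
all(with(tt, implies(implies(p, q) & !q, !p)))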

In each case, I try to work my way, as carefully as possible, through the thicket of historical and intellectual concerns that provide point and counterpoint to the ideas. It feels ethically wrong to make a short, fast judgment about any such topics. I can’t imagine doing anything less with a topic as fraught as the US health care system. It’s complex, indeed, Mr. President.

So, I tracked down a foundational paper on this idea of ethics and epistemology. It dates to 1877 and provides a grounding for why and when we should believe in anything. William Clifford’s paper, The Ethics of Belief, tracks multiple lines of argumentation and the consequences of believing without clarity. Even tentative clarity comes with moral risk, as Clifford shows in his thought experiments.

In summary, though, there is no more important statement than Clifford’s final assertion that it is wrong to believe without sufficient evidence. It’s that simple. And it’s even more wrong to act on those beliefs.

Traitorous Reason, Facts, and Analysis

Obama’s post-election press conference was notable for its continued demonstration of adult discourse and values. Especially notable:

This office is bigger than any one person and that’s why ensuring a smooth transition is so important. It’s not something that the constitution explicitly requires but it is one of those norms that are vital to a functioning democracy, similar to norms of civility and tolerance and a commitment to reason and facts and analysis.

But ideology in American politics (and elsewhere) has the traitorous habit of undermining every one of those norms. It always begins with undermining the facts in pursuit of manipulation. Just before the election, the wizardly Aron Ra took to YouTube to review VP-elect Mike Pence’s bizarre grandstanding in Congress in 2002:

And just today, Trump lashed out at the cast of Hamilton for lecturing Mike Pence on his anti-LGBTQ stands, also related to ideology and belief, at the end of a show.

Astonishing as this seems, we live in an imperfect world being drawn very slowly, and in fits and starts, away from tribal and xenophobic tendencies. My wife received a copy of a letter from now-deceased family that contained an editorial from the Shreveport Journal in the 1960s that (with its embedded The Worker editorial review) simultaneously attacked segregationist violence and the rhetoric of Alabama governor George Wallace, claimed that communists were influencing John F. Kennedy and the civil rights movement, demanded the jailing of communists, and suggested the federal government should take over Alabama:

[Scanned image: Shreveport Journal editorial from the 1960s]

The accompanying letter was also concerned over the fate of children raised as Unitarians, amazingly enough, and how they could possibly be moral people. It then concluded with a recommendation to vote for Goldwater.

Is it any wonder that the accompanying cultural revolutions might lead to the tearing down of the institutions that were used to justify the deviation away from “reason and facts and analysis?”

But I must veer to the positive here: this brief blip is a passing retrenchment of these old tendencies that the Millennials and their children will look back on with fond amusement, the way I remember Ronald Reagan.

Quantum Field Is-Oughts

Sean Carroll’s Oxford lecture on Poetic Naturalism is worth watching (below). In many ways it just reiterates several common themes. First, it reinforces the is-ought barrier between values and observations about the natural world. It does so with particular depth, though, by identifying how coarse-grained theories at different levels of explanation can be equally compatible with quantum field theory. Second, and related, he shows how entropy is an emergent property of atomic theory and the interactions of quantum fields (that we think of as particles much of the time) and, importantly, that we can project the same notion of boundary conditions that result in entropy into the future resulting in a kind of effective teleology. That is, there can be some boundary conditions for the evolution of large-scale particle systems that form into configurations that we can label purposeful or purposeful-like. I still like the term “teleonomy” to describe this alternative notion, but the language largely doesn’t matter except as an educational and distinguishing tool against the semantic embeddings of old scholastic monks.

Finally, the poetry aspect resolves in value theories of the world. Many are compatible with descriptive theories, and our resolution of them is through opinion, reason, communications, and, yes, violence and war. There is no monopoly of policy theories, religious claims, or idealizations that hold sway. Instead we have interests and collective movements, and the above, all working together to define our moral frontiers.

 

The Goldilocks Complexity Zone

Since my time in the early 90s at Santa Fe Institute, I’ve been fascinated by the informational physics of complex systems. What are the requirements of an abstract system that is capable of complex behavior? How do our intuitions about complex behavior or form match up with mathematical approaches to describing complexity? For instance, we might consider a snowflake complex, but it is also regular in its structure, driven by an interaction between crystal growth and the surrounding air. The classic examples of coastlines and fractal self-symmetry also seem complex but are not capable of complex behavior.

So what is a good way of thinking about complexity? There is actually a good range of ideas about how to characterize complexity. Seth Lloyd rounds up many of them, here. The intuition that drives many of them is that complexity seems to be associated with distributions of relationships and objects that are somehow juxtaposed between a single state and a uniformly random set of states. Complex things, be they living organisms or computers running algorithms, should exist in a Goldilocks zone when each part is examined and those parts are somehow summed up to a single measure.

We can easily construct a complexity measure that captures some of these intuitions. Let’s look at three strings of characters:

x = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

y = menlqphsfyjubaoitwzrvcgxdkbwohqyxplerz

z = the fox met the hare and the fox saw the hare

Now we would likely all agree that y and z are more complex than x, and I suspect most would agree that y looks like gibberish compared with z. Of course, y could be a sequence of weirdly coded measurements or something, or encrypted such that the message appears random. Let’s ignore those possibilities for our initial attempt at defining a complexity measure. We can see right away that an approach using basic information theory doesn’t help much. Algorithmic informational complexity will be highest for y, as will entropy:

H(s) = -\sum_i p_i \log p_i

for each sequence s composed out of an alphabet with character counts that give probabilities p_i (natural logarithms, which is what the R package below uses by default). So we get: H(x) = 0, H(y) = 3.199809, and H(z) = 2.3281. Here’s some sample R code using the “entropy” package if you want to calculate yourself:

library(entropy)   # provides entropy(); method = "ML" is the plug-in estimator

z <- "the fox met the hare and the fox saw the hare"
zt <- table(strsplit(z, '')[[1]])   # character frequency counts
entropy(zt, method = "ML")          # natural-log entropy by default

Note that the alphabet of each string is slightly different, but the missing characters between them don’t matter since their probabilities are 0.

We can just arbitrarily scale entropy by the maximum entropy possible for the same length string (that is, \log n for a string of length n whose characters are all distinct), like this:

H_{norm}(s) = \frac{H(s)}{\log n}

This is somewhat like channel efficiency in communications theory, I think. And then just turn this into a parabolically-scaled measure that centers at 0.5:

C(s) = k \, H_{norm}(s) \left(1 - H_{norm}(s)\right)

where k is an arbitrary non-zero scaling parameter.
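Here’s a rough R sketch of that measure; the complexity() helper and the choice k = 4 (which makes the maximum possible value 1) are mine, just for illustration:

# Parabolic complexity: zero for the constant string x, and higher for the
# English sentence z than for the gibberish y, whose normalized entropy is
# close to 1.
library(entropy)
complexity <- function(s, k = 4) {
  chars <- strsplit(s, '')[[1]]
  H <- entropy(table(chars), method = "ML")   # natural-log entropy
  Hnorm <- H / log(length(chars))             # scale by max entropy, log(n)
  k * Hnorm * (1 - Hnorm)
}

x <- "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
y <- "menlqphsfyjubaoitwzrvcgxdkbwohqyxplerz"
z <- "the fox met the hare and the fox saw the hare"
sapply(list(x = x, y = y, z = z), complexity)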

But this calculation is only considering the individual character frequencies, not the composition of the characters into groupings. So we can consider pairs of characters in this same calculation, or triples, etc. And also, just looking at these n-gram sequences doesn’t capture potentially longer range repetitious structures. So we can gradually ladle on grammars as the counting mechanism. Now, if our measure of complexity is really going to capture what we intuitively consider to be complex, all of these different levels of connections within the string or other organized piece of information must be present.
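One way to sketch that extension (again just an illustrative helper of mine, continuing with z and the entropy package from above, not one of Lloyd’s measures) is to compute the same parabolic score over overlapping character n-grams and then combine the levels:

# Same measure over overlapping n-grams; longer n starts to reward repeated
# chunks like "the " rather than single-letter frequencies.
ngram_complexity <- function(s, n = 2, k = 4) {
  chars <- strsplit(s, '')[[1]]
  if (length(chars) < n + 1) return(0)
  grams <- sapply(seq_len(length(chars) - n + 1),
                  function(i) paste(chars[i:(i + n - 1)], collapse = ""))
  H <- entropy(table(grams), method = "ML")
  Hnorm <- H / log(length(grams))   # max entropy: all n-grams distinct
  k * Hnorm * (1 - Hnorm)
}

# crude multi-level score: average the character, bigram, and trigram values
mean(sapply(1:3, function(n) ngram_complexity(z, n)))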

This general program is present in every one of Seth Lloyd’s complexity metrics in various ways and even comes into play in discussions of consciousness, though many use mutual information rather than entropy per se. Here’s Max Tegmark using a variation on Giulio Tononi’s Phi concept from Integrated Information Theory to demonstrate that integration is a key component of consciousness and how that might be calculated for general physical systems.
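For a flavor of the mutual-information versions of these ideas (a hand-rolled calculation over adjacent characters of z, not Tegmark’s or Tononi’s Phi), here is how much, on average, one character of the sentence tells you about the next:

# Mutual information between adjacent characters: I(X;Y) = H(X) + H(Y) - H(X,Y).
# A string whose parts were statistically independent would score near zero.
z <- "the fox met the hare and the fox saw the hare"
chars <- strsplit(z, '')[[1]]
joint <- table(head(chars, -1), tail(chars, -1))   # counts of adjacent pairs
H <- function(counts) { p <- counts[counts > 0] / sum(counts); -sum(p * log(p)) }
H(rowSums(joint)) + H(colSums(joint)) - H(joint)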

Bayesianism and Properly Basic Belief

Xu and Tenenbaum, in Word Learning as Bayesian Inference (Psychological Review, 2007), develop a very simple Bayesian model of how children (and even adults) build semantic associations based on accumulated evidence. In short, they find contrastive elimination approaches as well as connectionist methods unable to explain the patterns that are observed. Specifically, the most salient problem with these other methods is that they lack the rapid transition that is seen when three exemplars are presented for a class of objects associated with a word versus one exemplar. Adults and kids (the former even more so) just get word meanings faster than those other models can easily show. Moreover, a space of contending hypotheses that are weighted according to their Bayesian statistics provides an escape from the all-or-nothing of hypothesis elimination while retaining some of the “soft” commitment properties that connectionist models provide.

The mathematical trick for the rapid transition is rather interesting. They formulate a “size principle” that weights the likelihood of a given hypothesis (this object is most similar to a “feb,” for instance, rather than the many other object sets that are available) according to a scaling that is exponential in the number of exposures. Hence the rapid transition:

Hypotheses with smaller extensions assign greater probability than do larger hypotheses to the same data, and they assign exponentially greater probability as the number of consistent examples increases.
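A minimal sketch of how that plays out numerically (the hypothesis names, extension sizes, and flat prior below are made up purely for illustration):

# Size principle: the likelihood of hypothesis h after n consistent examples
# is (1 / size(h))^n, so narrower hypotheses pull ahead exponentially fast.
size_principle <- function(sizes, n, prior = rep(1 / length(sizes), length(sizes))) {
  post <- prior * (1 / sizes)^n
  post / sum(post)               # normalized posterior over the hypotheses
}

sizes <- c(dalmatians = 10, dogs = 20, animals = 40)   # hypothetical extension sizes
round(size_principle(sizes, n = 1), 3)   # one example: broader hypotheses linger
round(size_principle(sizes, n = 3), 3)   # three examples: the narrow one dominates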

It should be noted that they don’t claim that the psychological or brain machinery implements exactly this algorithm. As is usual in these matters, it is instead likely that whatever machinery is involved, it simply has at least these properties. It may very well be that connectionist architectures can do the same but that existing approaches to connectionism simply don’t do it quite the right way. So other methods may need to be tweaked to get closer to the observed learning of people in these word tasks.

So what can this tell us about epistemology and belief? Classical foundationalism might be formulated as the claim that something is a “basic” or “justified” belief if it is self-evident or evident to our senses. Other beliefs may therefore be grounded by those basic beliefs. And a more modern reformulation might substitute “incorrigible” for “justified,” with the layered meaning of incorrigibility built on the necessity that, if the proposition is believed, it is in fact true.

Here’s Alvin Plantinga laying out a case for why justification and incorrigibility have a range of problems, problems serious enough for Plantinga that he suspects that god belief could just as easily be a basic belief, allowing for the kinds of presuppositional Natural Theology (think: I look around me and the hand of God is obvious) that are at the heart of some of the loftier claims concerning the viability or non-irrationality of god belief. It even provides a kind of coherent interpretative framework for historical interpretation.

Plantinga positions the problem of properly basic belief then as an inductive problem:

And hence the proper way to arrive at such a criterion is, broadly speaking, inductive. We must assemble examples of beliefs and conditions such that the former are obviously properly basic in the latter, and examples of beliefs and conditions such that the former are obviously not properly basic in the latter. We must then frame hypotheses as to the necessary and sufficient conditions of proper basicality and test these hypotheses by reference to those examples. Under the right conditions, for example, it is clearly rational to believe that you see a human person before you: a being who has thoughts and feelings, who knows and believes things, who makes decisions and acts. It is clear, furthermore, that you are under no obligation to reason to this belief from others you hold; under those conditions that belief is properly basic for you.

He goes on to conclude that this opens up the god hypothesis as providing this kind of coherence mechanism:

By way of conclusion then: being self-evident, or incorrigible, or evident to the senses is not a necessary condition of proper basicality. Furthermore, one who holds that belief in God is properly basic is not thereby committed to the idea that belief in God is groundless or gratuitous or without justifying circumstances. And even if he lacks a general criterion of proper basicality, he is not obliged to suppose that just any or nearly any belief—belief in the Great Pumpkin, for example—is properly basic. Like everyone should, he begins with examples; and he may take belief in the Great Pumpkin as a paradigm of irrational basic belief.

So let’s assume that the word learning mechanism based on this Bayesian scaling is representative of our human inductive capacities. Now this may or may not be broadly true. It is possible that it is true of words but not other domains of perceptual phenomena. Nevertheless, given this scaling property, the relative inductive truth of a given proposition (a meaning hypothesis) is strictly Bayesian. Moreover, this doesn’t succumb to problems of verificationism because it only claims relative truth. What counts as properly basic is then a matter of scaled, contending explanatory hypotheses, and the god hypothesis has to compete with other explanations like evolutionary theory (for human origins), empirical evidence of materialism (for explanations contra supernatural ones), perceptual mistakes (ditto), myth scholarship, textual analysis, influence of parental belief exposure, the psychology of wish fulfillment, the pragmatic triumph of science, etc. etc.

And so we can stick to a relative scaling of hypotheses as to what constitutes basicality or justified true belief. That’s fine. We can continue to argue the previous points as to whether they support or override one hypothesis or another. But the question Plantinga raises as to what ethics to apply in making those decisions is important. He distinguishes different reasons why one might want to believe more true things than others (broadly) or maybe some things as properly basic rather than others, or, more correctly, why philosophers feel the need to pin god-belief as irrational. But we succumb to a kind of unsatisfying relativism insofar as the space of these hypotheses is not, in fact, weighted in a manner that most reflects the known facts. The relativism gets deeper when the weighting is washed out by wish fulfillment, pragmatism, aspirations, and personal insights that lack falsifiability. That is at least distasteful, maybe aretaically so (in Plantinga’s framework), but probably more teleologically so in that it influences other decision-making and the conflicts and real harms societies may cause.

Rationality and the Intelligibility of Philosophy

There is a pervasive meme in the physics community that holds as follows: there are many physical phenomena that don’t correspond in any easy way to our ordinary experiences of life on earth. We have wave-particle duality wherein things behave like waves sometimes and particles other times. We have simultaneous entanglement of physically distant things. We have quantum indeterminacy and the emergence of stuff out of nothing. The tiny world looks like some kind of strange hologram with bits connected together by virtual strings. We have a universe that began out of nothing and that begat time itself. It is, in this framework, worthwhile to recognize that our everyday experiences are not necessarily useful (and are often confounding) when trying to understand the deep new worlds of quantum and relativistic physics.

And so it is worthwhile to ask whether many of the “rational” queries that have been made down through time have any intelligible meaning given our modern understanding of the cosmos. For instance, if we were to state the premise “all things are either contingent or necessary” that underlies a poor form of the Kalam Cosmological Argument, we can immediately question the premise itself. And a failed premise leads to a failed syllogism. Maybe the entanglement of different things is piece-part of the entanglement of large-scale space time, and that the insights we have so far are merely shadows of the real processes acting behind the scenes? Who knows what happened before the Big Bang?

In other words, do the manipulations of logic and the assumptions built into the terms lead us to empty and destructive conclusions? There is no reason not to suspect that and therefore the bits of rationality that don’t derive from empirical results are immediately suspect. This seems to press for a more coherence-driven view of epistemology, one which accords with known knowledge but adjusts automatically as semantics change.

There is an interesting mental exercise concerning why we should be able to even undertake these empirical discoveries and all their seemingly non-sensible results that are nevertheless fashioned into a cohesive picture of the physical world (and increasingly the mental one). Are we not making an assumption that our brains are capable of rational thinking given our empirical understanding of our evolved pasts? Plantinga’s Evolutionary Argument Against Naturalism tries, for instance, to upend this perspective by claiming it is highly unlikely that a random process of evolution could produce reliable mental faculties because it would be focused too much on optimization for survival. This makes no sense empirically, however, since we have good evidence for evolution and we have good evidence for reliable mental faculties when subjected to the crucible of group examination and scientific process. We might be deluding ourselves, it’s true, but there are too many artifacts of scientific understanding and progress to take that terribly seriously.

So we get back to coherence and watchful empiricism. No necessity for naturalism as an ideology. It’s just the only thing that currently makes sense.

A Critique of Pure Randomness

The notion of randomness brings about many interesting considerations. For statisticians, randomness is a series of events with chances that are governed by a distribution function. In everyday parlance, equally-likely means random, while an even more common semantics is based on both how unlikely and how unmotivated an event might be (“That was soooo random!”). In physics, there are only certain physical phenomena that can be said to be truly random, including the probability of a given nucleus decomposing into other nuclei via fission. The exact position of a quantum thingy is equally random when its momentum is nailed down, and vice-versa. Vacuums have a certain chance of spontaneously creating matter, too, and that chance appears to be perfectly random. In algorithmic information theory, a random sequence of bits is a sequence that can’t be represented by a smaller descriptive algorithm; it is incompressible. Strangely enough, we simulate random number generators using a compact algorithm that has a complicated series of steps that lead to an almost impossible-to-follow trajectory through a deterministic space of possibilities; it’s acceptable if it is random enough that the algorithm parameters can’t be easily reverse engineered and the next “random” number guessed.
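As a toy illustration of that last point, here is a linear congruential generator in R; the lcg() helper is mine and the constants are the commonly cited Numerical Recipes choices. It is a completely deterministic recurrence, yet its outputs look random unless you know the parameters:

# Linear congruential generator: x[i+1] = (a * x[i] + c) mod m.
# Deterministic and repeatable, but hard to predict without a, c, and m.
lcg <- function(n, seed = 42, a = 1664525, c = 1013904223, m = 2^32) {
  out <- numeric(n)
  x <- seed
  for (i in seq_len(n)) {
    x <- (a * x + c) %% m
    out[i] <- x / m          # scale into [0, 1)
  }
  out
}

round(lcg(5), 4)   # same seed, same "random" sequence every time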

One area where we often speak of randomness is in biological evolution. Random mutations lead to change and to deleterious effects like dead-end evolutionary experiments. Or so we hypothesized. The exact mechanisms of the transmission of inheritance and of mutations were unknown to Darwin, but soon, in the evolutionary synthesis, notions like random genetic drift and the role of ionizing radiation and other external factors became exciting candidates for the explanation of the variation required for evolution to function. Amusingly, arguing largely from a stance that might be called a fallacy of incredulity, creationists have often seized on a logical disconnect they perceive between the appearance of purpose, both in our lives and in the mechanisms of biological existence, and the assumption of underlying randomness and non-directedness, and offered it as evidence for the paucity of arguments from randomness.

I give you Stephen Talbott in The New Atlantis, Evolution and the Illusion of Randomness, wherein he unpacks the mounting evidence and the philosophical implications of jumping genes, self-modifying genetic regulatory frameworks, transposons, and the likelihood that randomness, in the strong sense of cosmic ray trajectories bouncing around in cellular nuclei, is simply wrong. Randomness is at best a minor contribution to evolutionary processes. We are not just purposeful at the social, personal, systemic, cellular, and sub-cellular levels; we are also purposeful through time around the transmission of genetic information and the modification thereof.

This opens a wildly new avenue for considering certain normative claims that anti-evolutionists bring to the table, such as the claim that a mechanistic universe devoid of central leadership is meaningless and allows for any behavior to be equally acceptable. This hoary chestnut is ripe to the point of rot, of course, but the response to it should be much more vibrant than the usual retorts. The evolution of social and moral outcomes can be every bit as inevitable as if they were designed, because co-existence and greater group success (yes, I wrote it) form a potential well on the fitness landscape. And, equally, we need to stop being so reluctant to claim that there is a purposefulness to life, a teleology, but simply make sure that we are according the proper mechanistic feel to that teleology. Fine, call it teleonomy, or even an urge to existence. A little poetry might actually help here.

Informational Chaff and Metaphors

I received word last night that our scholarship has received over 1400 applications, which definitely surprised me. I had worried that the regional restriction might be too limiting but Agricultural Sciences were added in as part of STEM so that probably magnified the pool.

Dan Dennett of Tufts and Deb Roy at MIT draw parallels between informational transparency in our modern world and biological mechanism in Scientific American (March 2015, 312:3). Their article, Our Transparent Future (related video here; you have to subscribe to read the full article), starts with Andrew Parker’s theory that the Cambrian Explosion may have been tied to the availability of light as cloud cover lifted and seas became transparent. An evolutionary arms race began for the development of sensors that could warn against predators, and predators that could acquire more prey.

They continue on, drawing parallels to biological processes, including the concept of squid ink and how a similar notion, chaff, was used to mask radar signatures as aircraft became weapons of war. The explanatory mouthful of the multiple independently targetable reentry vehicle (MIRV), with dummy warheads to counter anti-ballistic missiles, was likewise a deceptive way of reducing the risk of interception. So Dennett and Roy “predict the introduction of chaff made of nothing but megabytes of misinformation,” designed to deceive search engines about the nature of real information.

This is a curious idea. Search engine optimization (SEO) is a whole industry that combines consulting with tricks and tools to try to raise the position of vendors in the Google rankings. Being on the first page of listings can be make-or-break for retail vendors, and they pay to try to make that happen. The strategies are based around trying to establish links to the vendor from individuals and other pages to try to game the PageRank algorithm. In turn, Google has continued to optimize to reduce the effectiveness of these links, trying to establish whether hand- or machine-created content with links looks like real, valuable information or just promotional materials. This is, in some ways, the opposite of informational chaff. The goal is not to hide the content in plain sight, but to make it more discoverable. “Information scent” was a concept introduced at Xerox PARC when I was there and it applies here.
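As a toy sketch of what those links are gaming (a bare-bones power-iteration PageRank of my own, nothing like Google’s production ranking), note how page 1’s score rises simply because more pages point at it:

# Toy PageRank by power iteration. adj[i, j] = 1 means page j links to page i.
pagerank <- function(adj, d = 0.85, iters = 50) {
  n <- nrow(adj)
  out <- pmax(colSums(adj), 1)        # out-degree (dangling pages treated crudely)
  M <- sweep(adj, 2, out, "/")        # each page splits its vote among its outlinks
  r <- rep(1 / n, n)
  for (i in seq_len(iters)) r <- (1 - d) / n + d * (M %*% r)
  as.vector(r)
}

adj <- matrix(0, 4, 4)
adj[2, 1] <- adj[3, 2] <- adj[1, 3] <- adj[1, 4] <- 1   # pages 3 and 4 link to page 1
pagerank(adj)   # page 1 ends up with the largest share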

But what of chaff? Perhaps the best example that I can think of is the idea of “drowning in paper” that lawyers occasionally describe, on TV or otherwise, where huge piles of non-digitized materials are dumped in the hopes that the criminal or civil needle-in-the-haystack will be impossible to find. This is highly dependent on the temporal limitations of individuals to ingest the materials, and is equally countered by OCR and scanning services to produce accessible forms of data. Dennett and Roy point out that more sophisticated search engines (and I’ll add other analytic tools) can counter efforts at chaff.

More broadly, though, we get to the issue of whether evolutionary metaphors provide us with any new insights into the changing role of information in an interconnected and digitized society. I’m not altogether sure. It is routinely argued that the existence of early computing machines led to cognitive science as we have known it, conflating problem solving with algorithms and describing the brain’s hardware and software. Is evolutionary adaptation equally influential in steering weapons design or informational secrecy strategy? I think we are probably cunning enough (thanks, evolution) about proximate threats and consequences that there might not be much to learn from metaphorical analysis of this type.

Language Games

On The Thinking Atheist, C.J. Werleman promotes the idea that atheists can’t be Republicans based on his new book. Why? Well, for C.J. it’s because the current Republican platform is not grounded in any kind of factual reality. Supply-side economics, Libertarianism, economic stimuli vs. inflation, Iraqi WMDs, Laffer curves, climate change denial—all are grease for the wheels of a fantastical alternative reality where macho small businessmen lift all boats with their steely gaze, the earth is forever resilient to our plunder, and simple truths trump obscurantist science. Watch out for the reality-based community!

Is politics essentially religion in that it depends on ideology not grounded in reality, spearheaded by ideologues who serve as priests for building policy frameworks?

Likely. But we don’t really seem to base our daily interactions on rationality either. 538 Science tells us that it has taken decades to arrive at the conclusion that vitamin supplements are probably of little use to those of us lucky enough to live in the developed world. Before that we latched onto indirect signaling about vitamin C, E, D, B12, and others to decide how to proceed. The thinking typically took on familiar patterns: someone heard or read that vitamin X is good for us/I’m skeptical/why not?/maybe there are negative side-effects/it’s expensive anyway/forget it. The language games are at all levels in promoting, doubting, processing, and reinforcing the microclaims for each option. We embrace signals about differences and nuances but it often takes many months and collections of those signals in order to make up our minds. And then we change them again.

Among the well educated, I’ve variously heard the wildest claims about the effectiveness of chiropractors, pseudoscientific remedies, the role of immunizations in autism (not due to preservatives in this instance; due to immune responses themselves), and how karma works in software development practice.

And what about C.J.’s central claims? Well I haven’t read the book and don’t plan to, so I can only build on what he said during the interview. If we require evidence for our political beliefs as much as we require it for our religious perspective we probably need to have a scheme for how to rank the likelihood of different beliefs and policy commitments. For instance, C.J. follows the continued I-told-you-so approach of Paul Krugman in his comments on fiscal stimulus; not enough was done and there is no evidence of inflationary pressure. Well and good that fiscal stimulus as a macro-economic stabilizer has been established in the most recent economic past. The non-appearance of inflation was somewhat surprising, actually, but is now the retrospective majority opinion of economists concerned with such matters. It was a cause for concern, however, as were the problematic bailouts that softened the consequences (if not rewarded them) of risky behavior in pursuit of broader stability.

The language game theory of politics and religion accounts for most of the uncertainty and chaos that drives thinking about politics and economics. We learn the rules (social and pragmatic impact as well as grammatical rules) and the game pieces (words, phrases, and concepts) early on. They don’t have firm referential extension, of course. In fact, they never really do. But they cohere more and more over time unless radically disrupted, and even then they try to recohere against the tangle of implications as the dust settles. This is Wittgensteinian and anti-Positivist, but it is also somewhat value-free in that there is no sense of why one language game should be preferable to another.

For C.J., there is a clear demarcation that facts trump fantasy, and our lives and society would be better served by factually-derived policies and factually enervated perspectives on the claims of most religions. But it is far less clear to me as to how to apply some rationalist overlay to the problem of politics that would have consistent and meaningful improvements in our lives and society save the obvious one of improving general education and thinking.

I recently irritated and frustrated my teen son in questioning him about some claims he was making about bad teachers in the local school system. The irritation came as I probed into various rumors about a teacher who had been fired because she was, according to him, sexist and graded boys poorly. It turns out he only had a handful of rumors about everything from the teacher’s firing to the sexism. It looked more likely that one of his friends made it up in conjunction with other boys who were doing poorly in the teacher’s class. They created a meme in a language game and it propagated. My son was defensive about the possibility that the whole story was true, and I admitted it was possible but sufficiently unlikely as to not warrant concern. The attachment of levels of likely veracity and valuation was ultimately the only difference in the end.

I apologized for making him mad but didn’t apologize for my skepticism and, later, there were signals that his network of beliefs had been moved a bit, the vile evil sexist teacher drifting out of focus among the other shades of consideration.

And that is how the language game is played.