Category: Atheism

Bright Sarcasm in the Classroom

That old American tradition, the Roman Salute

When a Pew Research Center poll discovered a shocking divide between self-identifying Republicans/GOP-leaning Independents and their Democratic Party opposites on the question of the value of higher education, the commentariat went apeshit. Here’s a brief rundown of sources, left, center, and right, and what they decided are the key issues:

  • National Review: Higher education has eroded the Western canon and turned into a devious plot to rob our children of good thinking, spiked with avocado toast.
  • Paul Krugman at The New York Times: Conservative tribal identification leads to opposition to climate science and evolution, and further toward a “grim” anti-intellectualism.
  • New Republic: There is no evidence that college kids’ political views are changed by higher education, nor that conservative-minded professors are much maltreated on campus, so the conservative complaints are just overblown anti-liberal hype that, they point out, has some very negative consequences.

I would make a slightly more radical claim than Krugman, for instance, and one that is pointedly opposed to Simonson at National Review. In higher education we see not just a dedication to science but an active program of criticizing and deconstructing ideas like the Western canon as central to higher thought. In history, great man theories have been broken down into smart and salient compartments that explore the many ways in which groups and individuals, genders and ideas, all were part of fashioning the present. These changes, largely late-20th-century academic inventions, have broken up the monopolies on how concepts of law, order, governance, and the worth of people were once formulated. This must be anti-conservative in the pure sense that there is little to be conserved from older ideas, except as objects of critique. We need only stroll through the grotesque history of Social Darwinism, psychological definitions of homosexuality as a mental disorder, or anthropological theories of race and values to get a sense of why academic pursuits, in becoming more critically influenced by a burgeoning and democratizing populace, were obligated to refine what is useful, intellectually valuable, and less wrong. The process will continue, too.

The consequences are far-reaching. Higher education correlates strongly with liberal values, and those values tend to correlate with valuing reason and fairness over tradition and security. That means atheism has a greater foothold, and science as a primary means of truth discovery takes precedence over the older and uglier angels of our nature. The enhanced creativity that arises from better knowledge of the world and from accurate, careful assessment leads, in turn, to knowledge generation and technological innovation derived almost exclusively from a broad engagement with ideas. This can cause problems when ordering Italian sandwiches.

Is there, or should there be, any antidote to the disjunctive opinions on the value of higher learning? Polarized disagreements on the topic can lead to societal consequences that are reactive and precipitous, which is what all three sources are warning about in various ways. But the larger goals of conservatives should be easily met through the mechanism that most of them would agree is always open: form, build, and attend ideologically attuned colleges. There are dozens of Christian colleges with charters that should meet some of their expectations. If these institutions are good for them and for society as a whole, they just need to do a better job of explaining that to America. Then, like consumers flocking from Microsoft to Apple, the great public and private institutions will lose the student debt dollar to these other options and, finally, indoctrination in all that bright sarcasm will end in the classroom. Maybe, then, everyone will agree that the earth is only a few thousand years old and that coal demand proceeds from supply.

Traitorous Reason, Facts, and Analysis

Obama’s post-election press conference was notable for its continued demonstration of adult discourse and values. Especially notable:

This office is bigger than any one person and that’s why ensuring a smooth transition is so important. It’s not something that the constitution explicitly requires but it is one of those norms that are vital to a functioning democracy, similar to norms of civility and tolerance and a commitment to reason and facts and analysis.

But ideology in American politics (and elsewhere) has the traitorous habit of undermining every one of those norms. It always begins with undermining the facts in pursuit of manipulation. Just before the election, the wizardly Aron Ra took to YouTube to review VP-elect Mike Pence’s bizarre grandstanding in Congress in 2002.

And just today, Trump lashed out at the cast of Hamilton for lecturing Mike Pence, at the end of a performance, on his anti-LGBTQ stances, also matters of ideology and belief.

Astonishing as this seems, we live in an imperfect world being drawn very slowly, and in fits and starts, away from tribal and xenophobic tendencies. My wife received a copy of a letter from a now-deceased family member that contained an editorial from the Shreveport Journal in the 1960s that (with its embedded review of a The Worker editorial) simultaneously attacked segregationist violence and the rhetoric of Alabama governor George Wallace, claimed that communists were influencing John F. Kennedy and the civil rights movement, demanded the jailing of communists, and suggested the federal government should take over Alabama.


The accompanying letter was also concerned, amazingly enough, about the fate of children raised as Unitarians and how they could possibly be moral people. It then concluded with a recommendation to vote for Goldwater.

Is it any wonder that the accompanying cultural revolutions might lead to the tearing down of the institutions that were used to justify the deviation from “reason and facts and analysis”?

But I must veer to the positive here: this brief blip is a passing retrenchment of those old tendencies, one that the Millennials and their children will look back on with fond amusement, the way I remember Ronald Reagan.

Quantum Field Is-Oughts

Sean Carroll’s Oxford lecture on Poetic Naturalism is worth watching. In many ways it just reiterates several common themes. First, it reinforces the is-ought barrier between values and observations about the natural world. It does so with particular depth, though, by identifying how coarse-grained theories at different levels of explanation can be equally compatible with quantum field theory. Second, and related, he shows how entropy is an emergent property of atomic theory and the interactions of quantum fields (which we think of as particles much of the time) and, importantly, that we can project the same notion of boundary conditions that result in entropy into the future, resulting in a kind of effective teleology. That is, there can be some boundary conditions for the evolution of large-scale particle systems that form into configurations that we can label purposeful or purposeful-like. I still like the term “teleonomy” to describe this alternative notion, but the language largely doesn’t matter except as an educational and distinguishing tool against the semantic embeddings of old scholastic monks.
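
For reference, the standard statistical-mechanics statement of that emergence is Boltzmann’s formula, a textbook result rather than anything specific to Carroll’s lecture:

```latex
% Boltzmann's entropy: S counts the microstates Omega compatible with a
% coarse-grained macrostate; k_B is Boltzmann's constant.
S = k_B \ln \Omega
% The arrow of time then enters as a boundary condition, not a law:
% given a low-entropy past, overwhelmingly probable trajectories have
% dS/dt >= 0 toward the future; the "effective teleology" above projects
% the same boundary-condition logic forward.
```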

Finally, the poetry aspect resolves in value theories of the world. Many are compatible with descriptive theories, and our resolution of them is through opinion, reason, communication, and, yes, violence and war. There is no monopoly of policy theories, religious claims, or idealizations that holds sway. Instead we have interests and collective movements, and all of the above, working together to define our moral frontiers.


Bayesianism and Properly Basic Belief

Xu and Tenenbaum, in Word Learning as Bayesian Inference (Psychological Review, 2007), develop a very simple Bayesian model of how children (and even adults) build semantic associations based on accumulated evidence. In short, they find contrastive elimination approaches as well as connectionist methods unable to explain the patterns that are observed. Specifically, the most salient problem with these other methods is that they lack the rapid transition that is seen when three exemplars are presented for a class of objects associated with a word versus one exemplar. Adults and kids (the former even more so) just get word meanings faster than those other models can easily show. Moreover, a space of contending hypotheses weighted according to their Bayesian statistics provides an escape from the all-or-nothing of hypothesis elimination while retaining some of the “soft” commitment properties that connectionist models provide.

The mathematical trick for the rapid transition is rather interesting. They formulate a “size principle” that weights the likelihood of a given hypothesis (this object is most similar to a “feb,” for instance, rather than the many other object sets that are available) according to a scaling that is exponential in the number of exposures. Hence the rapid transition:

Hypotheses with smaller extensions assign greater probability than do larger hypotheses to the same data, and they assign exponentially greater probability as the number of consistent examples increases.
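
To see the mechanics, here is a minimal sketch of the size principle, with made-up extension sizes (the paper derives hypothesis spaces from similarity judgments) and a flat prior assumed:

```python
# Likelihood of n consistent examples under hypothesis h is (1/|h|)^n,
# so smaller hypotheses win exponentially fast as examples accumulate.
# The extension sizes below are hypothetical, purely for illustration.

hypotheses = {
    "dalmatians": 10,   # subordinate class (small extension)
    "dogs": 100,        # basic-level class
    "animals": 1000,    # superordinate class (large extension)
}

def posterior(n_examples: int) -> dict:
    """Posterior over hypotheses after n consistent examples, flat prior."""
    scores = {h: (1.0 / size) ** n_examples for h, size in hypotheses.items()}
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

for n in (1, 3):
    print(n, posterior(n))
# n=1: "dalmatians" holds ~0.90 of the mass; n=3: ~0.999.
# That jump is the rapid transition the quote describes.
```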

It should be noted that they don’t claim that the psychological or brain machinery implements exactly this algorithm. As is usual in these matters, it is instead likely that whatever machinery is involved simply has at least these properties. It may very well be that connectionist architectures can do the same but that existing approaches to connectionism simply don’t do it quite the right way. So other methods may need to be tweaked to get closer to the observed learning of people in these word tasks.

So what can this tell us about epistemology and belief? Classical foundationalism might be formulated as the claim that something is a “basic” or “justified” belief if it is self-evident or evident to our senses, and that other beliefs may therefore be grounded in those basic beliefs. A more modern reformulation might substitute “incorrigible” for “justified,” with the layered meaning of incorrigibility built on the necessity that, given the proposition, it is in fact true.

Here’s Alvin Plantinga laying out a case for why justification and incorrigibility have a range of problems, problems serious enough for Plantinga that he suspects god belief could just as easily be a basic belief, allowing for the kind of presuppositional Natural Theology (think: I look around me and the hand of God is obvious) that is at the heart of some of the loftier claims concerning the viability or non-irrationality of god belief. It even provides a kind of coherent interpretative framework for historical interpretation.

Plantinga then positions the problem of properly basic belief as an inductive problem:

And hence the proper way to arrive at such a criterion is, broadly speaking, inductive. We must assemble examples of beliefs and conditions such that the former are obviously properly basic in the latter, and examples of beliefs and conditions such that the former are obviously not properly basic in the latter. We must then frame hypotheses as to the necessary and sufficient conditions of proper basicality and test these hypotheses by reference to those examples. Under the right conditions, for example, it is clearly rational to believe that you see a human person before you: a being who has thoughts and feelings, who knows and believes things, who makes decisions and acts. It is clear, furthermore, that you are under no obligation to reason to this belief from others you hold; under those conditions that belief is properly basic for you.

He goes on to conclude that this opens up the god hypothesis as providing this kind of coherence mechanism:

By way of conclusion then: being self-evident, or incorrigible, or evident to the senses is not a necessary condition of proper basicality. Furthermore, one who holds that belief in God is properly basic is not thereby committed to the idea that belief in God is groundless or gratuitous or without justifying circumstances. And even if he lacks a general criterion of proper basicality, he is not obliged to suppose that just any or nearly any belief—belief in the Great Pumpkin, for example—is properly basic. Like everyone should, he begins with examples; and he may take belief in the Great Pumpkin as a paradigm of irrational basic belief.

So let’s assume that the word learning mechanism based on this Bayesian scaling is representative of our human inductive capacities. Now this may or may not be broadly true; it is possible that it is true of words but not of other domains of perceptual phenomena. Nevertheless, given this scaling property, the relative inductive truth of a given proposition (a meaning hypothesis) is strictly Bayesian. Moreover, this doesn’t succumb to the problems of verificationism because it only claims relative truth. Properly basic, or basic, is then the scaled set of contending explanatory hypotheses, and the god hypothesis has to compete with other explanations like evolutionary theory (for human origins), empirical evidence of materialism (for explanations contra supernatural ones), perceptual mistakes (ditto), myth scholarship, textual analysis, the influence of parental belief exposure, the psychology of wish fulfillment, the pragmatic triumph of science, etc. etc.

And so we can stick to a relative scaling of hypotheses as to what constitutes basicality or justified true belief. That’s fine. We can continue to argue the previous points as to whether they support or override one hypothesis or another. But the question Plantinga raises as to what ethics to apply in making those decisions is important. He distinguishes different reasons why one might want to believe more true things than others (broadly), or hold some things as properly basic rather than others, or, more correctly, why philosophers feel the need to pin god-belief as irrational. But we succumb to a kind of unsatisfying relativism insofar as the space of these hypotheses is not, in fact, weighted in a manner that best reflects the known facts. The relativism gets deeper when the weighting is washed out by wish fulfillment, pragmatism, aspirations, and personal insights that lack falsifiability. That is at least distasteful, maybe aretaically so (in Plantinga’s framework), but probably more teleologically so in that it influences other decision-making and the conflicts and real harms societies may cause.

Lucifer on the Beach

I picked up a whitebait pizza while stopped along the West Coast of New Zealand tonight. Whitebait are tiny little swarming immature fish that can be scooped out of estuarial river flows using big-mouthed nets. They run, they dart, and it is illegal to change river exit points to try to channel them for capture. Hence, whitebait is semi-precious, commanding NZD70-130/kg, which explains why there was a size limit on my pizza: only the small one was available.

By the time I was finished, the sky had aged from cinereal to iron in a satire of the vivid, watch-me colors of CNN International flashing Donald Trump’s linguistic indirection across the television. I crept out, setting my headlamp to red LEDs designed to minimally interfere with night vision. Just up away from the coast, hidden in the impossible tangle of cold rainforest, there was a glow worm dell. A few tourists conjured with flashlights facing the ground to avoid upsetting the tiny Arachnocampa luminosa that clung to the walls inside the dark garden. They were like faint stars composed into irrelevant constellations, with only the human mind to blame for any observed patterns.

And the light, what light: like the white-light LEDs only recently invented, but a light that doesn’t flicker or change, steady under the calmest observation. Driven by luciferin and luciferase, these tiny creatures lure a few scant light-seeking creatures to their doom as food for absorption, until they emerge to mate, briefly, lay eggs, and then die.

Lucifer again, named properly from the Latin as the light-bringer. The chemical basis for bioluminescence was largely isolated in the middle of the 20th century, yet there is this biblical stigma hanging over the term, one that really makes no sense at all. The translation of “morning star” or some other such nonsense into Latin got corrupted into a proper name by a process of word conversion (this isn’t metonymy or anything like that; I’m not sure there is a word for it other than “mistake”). So much for some kind of divine literalism tracking mechanism that preserves perfection. Even Jesus got rendered as lucifer in some passages.

But nothing new here. Demon comes from the Greek daimon, and Christianity tried to, well, demonize all the ancient spirits during the monolatry-to-monotheism transition. The spirits of the air that were in constant flux for the Hellenists, and then the Romans, needed to be suppressed and given an oppositional position in the Christian soteriology. Even “Satan” may have been borrowed from Persian court drama, as a kind of spy or informant, after the exile.

Oddly, we are left with a kind of naming magic for the truly devout who might look at those indifferent little glow worms with some kind of castigating eye, corrupted by a semantic chain that is as kinked as the popular culture epithets of Lucifer himself.

Free Will and Thermodynamic Warts

The Stone at The New York Times is a great resource for insights into both contemporary and rather ancient discussions in philosophy. Here’s William Irwin at King’s College discoursing on free will and moral decision-making. The central problem is one that we all discussed in high school: if our atomistic world is deterministic, in that there is a chain of causation from one event to another (contingent, in the last post’s terms), then even our mental processes must be caused, and there is no free will in the expected sense (“libertarian free will” in the literature). This can be overcome by the simplest fix of proposing a non-material soul that somehow interacts with the material being and is inherently non-deterministic. But this results in a dualism of matter and mind that doesn’t seem justifiable by any empirical results. For instance, we know that decision-making has a neuropsychological basis because we know about the effects of brain lesions, neurotransmitters, and even how smells can influence decisions. Irwin also claims that the realization of the potential loss of free will leaves us awash in a sense of hopelessness at the simultaneous loss of the metaphysical reality of an objective moral system. Without free will we seem off the hook for our decisions.

Compatibilists will disagree, and might even cite quantum indeterminacy as a rescue donut for pulling some notion of free will up out of the deep ocean of Irwin’s despair. But the fix is perhaps even easier than that. Even though we might recognize that there are chains of causation at a microscopic scale, the macroscopic combinations of these events, even without quantum indeterminacy, become predictable only along broad contours of probabilistic outcomes. We start with complex initial conditions and things just get worse from there. By the time we get to exceedingly complex organisms deciding things, we also have elaborate control cycles, influenced by childhood training, religion, and reason, that cope with this ambiguity and complexity. The metaphysical reality of morality or free will may be gone, but there is no need for fictionalism. They are empirically real, and any sense of loss is tied merely to overcoming the illusions arising from these incompatibilities between everyday reasoning and a deeper appreciation of the world as it is, thermodynamic warts and all.
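
A toy illustration, mine rather than Irwin’s: even a one-line deterministic rule loses practical predictability when initial conditions are known only approximately, which is the sense in which macroscopic chains of causation fog out.

```python
# The chaotic logistic map x' = 4x(1-x) roughly doubles small errors at
# each step, so two nearly identical starting points soon decorrelate.

def logistic_trajectory(x0: float, steps: int) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000, 50)
b = logistic_trajectory(0.400000001, 50)  # differs in the 9th decimal
for t in (0, 10, 30, 50):
    print(t, abs(a[t] - b[t]))
# By t=50 the trajectories differ at order 1: determinism without
# predictability, and no quantum indeterminacy required.
```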

Rationality and the Intelligibility of Philosophy

There is a pervasive meme in the physics community that holds as follows: there are many physical phenomena that don’t correspond in any easy way to our ordinary experiences of life on earth. We have wave-particle duality, wherein things behave like waves sometimes and particles other times. We have simultaneous entanglement of physically distant things. We have quantum indeterminacy and the emergence of stuff out of nothing. The tiny world looks like some kind of strange hologram with bits connected together by virtual strings. We have a universe that began out of nothing and that begat time itself. It is, in this framework, worthwhile to recognize that our everyday experiences are not necessarily useful (and are often confounding) when trying to understand the deep new worlds of quantum and relativistic physics.

And so it is worthwhile to ask whether many of the “rational” queries that have been made down through time have any intelligible meaning given our modern understanding of the cosmos. For instance, if we state the premise “all things are either contingent or necessary,” which underlies a poor form of the Kalam Cosmological Argument, we can immediately question the premise itself. And a failed premise leads to a failed syllogism. Maybe the entanglement of different things is piece-part of the entanglement of large-scale space-time, and the insights we have so far are merely shadows of the real processes acting behind the scenes. Who knows what happened before the Big Bang?

In other words, do the manipulations of logic and the assumptions built into the terms lead us to empty and destructive conclusions? There is no reason not to suspect that, and therefore the bits of rationality that don’t derive from empirical results are immediately suspect. This seems to press for a more coherence-driven view of epistemology, one which accords with known knowledge but adjusts automatically as semantics change.

There is an interesting mental exercise concerning why we should be able even to undertake these empirical discoveries and all their seemingly non-sensible results that are nevertheless fashioned into a cohesive picture of the physical world (and, increasingly, the mental one). Are we not making an assumption that our brains are capable of rational thinking, given our empirical understanding of our evolved pasts? Plantinga’s Evolutionary Argument Against Naturalism tries, for instance, to upend this perspective by claiming it is highly unlikely that a random process of evolution could produce reliable mental faculties, because it would be focused too much on optimization for survival. This makes no sense empirically, however, since we have good evidence for evolution and good evidence for reliable mental faculties when subjected to the crucible of group examination and the scientific process. We might be deluding ourselves, it’s true, but there are too many artifacts of scientific understanding and progress to take that terribly seriously.

So we get back to coherence and watchful empiricism. No necessity for naturalism as an ideology. It’s just the only thing that currently makes sense.

Non-Cognitivist Trajectories in Moral Subjectivism

When I say that “greed is not good,” the everyday mind creates a series of images and references, from Gordon Gekko’s inverse proposition to general feelings about inequality and our complex motivations as people. There is a network of feelings and, perhaps, some facts that might be recalled or searched for to justify the position. As a moral claim, though, it might most easily be considered connotative rather than cognitive, in that it suggests a collection of secondary emotional expressions and networks of ideas that support or deny it.

I mention this (the theories consonant with this kind of reasoning are called non-cognitivist and, variously, emotivist and expressivist) because there is a very real tendency to reduce moral ideas to objective versus subjective, especially in atheist-theist debates. I recently watched one such debate between Matt Dillahunty and an Orthodox priest where the standard litany revolved around claims about the objectivity versus subjectivity of truth. Objectivity of truth is often portrayed as something like: “without God there is no basis for morality. God provides moral absolutes. Therefore atheists are immoral.” The atheists inevitably reply that the scriptural God is a horrific demon who slaughters His creation and condones slavery and other ideas that are morally repugnant to the modern mind. And then the religious descend into what might be called “advanced apologetics” that try to diminish, contextualize, or dismiss such objections.

But we are fairly certain, regardless of the tradition, that there are inevitable nuances to any kind of moral structure. Thou shalt not kill gets revised to thou shalt not murder, so we have to parse manslaughter in pursuit of a greater good against any rules-based approach to such a simplistic commandment. Not eating shellfish during a famine has less human expansiveness but nevertheless carries similar objective antipathy.

I want to avoid invoking the Euthyphro dilemma here and instead focus on the notion that there might be an inevitability to certain moral proscriptions, and even virtues, given an evolutionary milieu. This was somewhat the floor plan of Sam Harris, but I’ll try to project the broader implications of species-level fitness functions onto a more local theory, specifically Gibbard’s fact-prac worlds, where the trajectories of normative, non-cognitive statements like “greed is not good” align with sets of perceptions of the world and options for implementing activities that strengthen engagement with the moral assertion. The assertion is purely subjective, but it derives from a correspondence with incidental phenomena and a coherence with other ideations and aspirations. It is mostly non-cognitive in the sense that it expresses emotional primitives rather than simple truth propositions. It has a number of interesting properties, however, most notably that the fact-prac constraints that surround these trajectories are movable, resulting in the kinds of plasticity and moral “evolution” that we see around us, like “slavery is bad” and “gay folks should not be discriminated against.” As an investigative tool, then, such a theory has important verificational value. As presented by Gibbard, however, these collections of constraints that guide the trajectories of moral approaches to simple moral commandments, admonishments, or statements need further strengthening to meet the moral landscape “ethical naturalism” that asserts that certain moral attitudes result in improved species outcomes and are therefore axiomatically possible and sensibly rendered as objective.

And it does this without considering moral propositions at all.

A Critique of Pure Randomness

The notion of randomness brings about many interesting considerations. For statisticians, randomness is a series of events with chances that are governed by a distribution function. In everyday parlance, equally likely means random, while an even more common semantics is based on both how unlikely and how unmotivated an event might be (“That was soooo random!”). In physics, there are only certain physical phenomena that can be said to be truly random, including the probability of a given nucleus decomposing into other nuclei via fission. The exact position of a quantum thingy is equally random when its momentum is nailed down, and vice versa. Vacuums have a certain chance of spontaneously creating matter, too, and that chance appears to be perfectly random. In algorithmic information theory, a random sequence of bits is a sequence that can’t be represented by a smaller descriptive algorithm: it is incompressible. Strangely enough, we simulate random number generators using a compact algorithm with a complicated series of steps that lead to an almost impossible-to-follow trajectory through a deterministic space of possibilities; it’s acceptable if it is random enough that the algorithm parameters can’t easily be reverse-engineered and the next “random” number guessed.
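
As a concrete taste of that compact-algorithm trickery, here is a classic linear congruential generator (the constants are the well-known Numerical Recipes choices); it is deterministic, compressible, and emphatically not secure randomness:

```python
# A pseudo-random generator is a small deterministic algorithm whose
# trajectory is merely hard to follow. This LCG is NOT cryptographically
# secure; its parameters CAN be recovered from a handful of outputs.

class LCG:
    def __init__(self, seed: int):
        self.state = seed
        self.m = 2**32          # modulus
        self.a = 1664525        # multiplier (Numerical Recipes)
        self.c = 1013904223     # increment  (Numerical Recipes)

    def next_float(self) -> float:
        # Fully deterministic update; the "randomness" lives only in the
        # observer's inability to track the trajectory.
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

rng = LCG(seed=42)
print([round(rng.next_float(), 3) for _ in range(5)])
# Same seed, same sequence, every time: compressible, hence not random
# in the algorithmic information sense.
```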

One area where we often speak of randomness is in biological evolution. Random mutations lead to change and to deleterious effects like dead-end evolutionary experiments. Or so we hypothesized. The exact mechanisms of the transmission of inheritance and of mutation were unknown to Darwin, but soon, in the evolutionary synthesis, notions like random genetic drift and the role of ionizing radiation and other external factors became exciting candidates for explaining the variation required for evolution to function. Amusingly, arguing largely from a stance that might be called a fallacy of incredulity, creationists have often seized on a logical disconnect they perceive between the appearance of purpose, both in our lives and in the mechanisms of biological existence, and the assumption of underlying randomness and non-directedness, taking it as evidence of the paucity of arguments from randomness.

I give you Stephen Talbott in The New Atlantis, Evolution and the Illusion of Randomness, wherein he unpacks the mounting evidence and the philosophical implications of jumping genes, self-modifying genetic regulatory frameworks, and transposons, and the likelihood that randomness in the strong sense of cosmic-ray trajectories bouncing around in cellular nuclei is simply the wrong picture. Randomness is at best a minor contributor to evolutionary processes. We are not just purposeful at the social, personal, systemic, cellular, and sub-cellular levels; we are also purposeful through time, around the transmission of genetic information and the modification thereof.

This opens a wildly new avenue for considering certain normative claims that anti-evolutionists bring to the table, such as the claim that a mechanistic universe devoid of central leadership is meaningless and allows for any behavior to be equally acceptable. This hoary chestnut is ripe to the point of rot, of course, but the response to it should be much more vibrant than the usual retorts. The evolution of social and moral outcomes can be every bit as inevitable as if they were designed, because co-existence and greater group success (yes, I wrote it) is a potential well on the fitness landscape. And, equally, we need to stop being so reluctant to claim that there is a purposefulness to life, a teleology, and simply make sure that we are according the proper mechanistic feel to that teleology. Fine, call it teleonomy, or even an urge to existence. A little poetry might actually help here.

The Rise and Triumph of the Bayesian Toolshed

In Asimov’s Foundation, psychohistory is the mathematical treatment of history, sociology, and psychology to predict the future of human populations. Asimov was inspired by Gibbon’s Decline and Fall of the Roman Empire, which postulated that Roman society was weakened by Christianity’s focus on the afterlife and lacked the pagan attachment to Rome as an ideal that needed defending. Psychohistory detects seeds of ideas and social movements that are predictive of the end of the galactic empire, creating foundations to preserve human knowledge against a coming Dark Age.

Applying statistics and mathematical analysis to human choices is a core feature of economics, but Richard Carrier’s massive tome, On the Historicity of Jesus: Why We Might Have Reason for Doubt, may be one of the first comprehensive applications to historical analysis (following his other related work). Amusingly, Carrier’s thesis dovetails with Gibbon’s own suggestion, though there is a certain irony to a civilization dying because of a fictional being.

Carrier’s method uses Bayesian analysis to approach a complex historical problem that has a remarkably impoverished collection of source material. First-century A.D. (C.E. if you like; I agree with Carrier that any baggage about the convention is irrelevant) sources are simply non-existent or sufficiently contradictory that the background knowledge of paradoxography (tall tales), rampant messianism, and general political happenings at the time leads to a likelihood that Jesus was made up. Carrier constructs the argument around equivalence classes of prior events that then reduce or strengthen the evidential materials (a posteriori). And he does this without ablating the richness of the background information. Indeed, his presentation and analysis of works like Inanna’s Descent into the Underworld and its relationship to the Ascension of Isaiah are both didactic and beautiful in capturing the way ancient minds seem to have worked.
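
For the flavor of the method, here is a minimal sketch of the odds form of Bayes’ theorem that such an analysis runs on. The numbers are invented for illustration, not Carrier’s actual estimates:

```python
# Posterior odds = prior odds * product of likelihood ratios, one ratio
# per item of evidence: P(e|H) / P(e|not H).

def update_odds(prior_odds: float, likelihood_ratios: list) -> float:
    """Multiply prior odds for a hypothesis by each likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Hypothetical: prior odds 1:3 for historicity, and three pieces of
# evidence each judged twice as likely under the alternative (ratio 0.5).
odds = update_odds(1 / 3, [0.5, 0.5, 0.5])
probability = odds / (1 + odds)
print(f"posterior odds {odds:.3f}, probability {probability:.3f}")
# posterior odds 0.042, probability 0.040
```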

We’ve come a long way from Gibbon’s era: we now have mathematical tools directly influencing historical arguments. The notion of inference and probability has always played a role in history, but perhaps never so directly. All around us we have the sharpening of our argumentation, whether in policymaking, in history, or in law. Even the arts and humanities are increasingly impacted by scientific and technological change and the metaphors that emerge from it. Perhaps not a Cathedral of Computation, but modestly, at least, a new toolshed.