Category: Atheism

The Universal Roots of Fantasyland

Intellectual history and cultural criticism always teeter on the brink of totalism. So it was when Christopher Hitchens was forced to defend the hyperbolic subtitle of God Is Not Great: How Religion Poisons Everything. The complaint was always the same: everything, really? Or when Neil Postman downplayed the early tremors of the internet in his 1985 Amusing Ourselves to Death: email couldn't be anything more than another movement towards entertainment and celebrity. So it is no surprise that Kurt Andersen's Fantasyland: How America Went Haywire: A 500-Year History is open to similar charges.

Andersen’s thesis is easily digestible: we built a country on fantasies. From the earliest charismatic stirrings of the Puritans to the patent medicines of the 19th century, through to the counterculture of the 1960s, and now with an incoherent insult comedian and showman as president, America has thrived on inventing wild, fantastical narratives that coalesce into movements. Andersen’s detailed analysis is breathtaking as he pulls together everything from linguistic drift to the psychology of magical thinking to justify his thesis.

Yet his thesis might be too narrow. It is not a uniquely American phenomenon. When Andersen mentions cosplay, he fails to identify its Japanese origins, including the word itself. In the California Gold Rush, he sees economic fantasies driving a generation to unmoor themselves from their merely average lives. Yet the conquistadores had sought to enrich themselves, God, and country long before Americans were founding their shining cities on hills. And in mid-19th-century Europe, while the Americans panned in the Sierra, romanticism was throwing off the oppressive yoke of Enlightenment rationality as the West became increasingly exposed to enigmatic Asian cultures. By the 20th century, Weimar Berlin was a hotbed of cultural fantasies that dovetailed with the rise of Nazism and a fantastical theory of race, German Volk culture, and Indo-European mysticism. In India, film has been the starting point for many politicians. The religion of Marxism led to Heroic Realism as the stained glass of the Communist cathedrals.

Is America unique or is it simply human nature to strive for what has not yet existed and, in so doing, create and live in alternative fictions that transcend the mundanity of ordinary reality? If the latter, then Andersen's thesis still stands but not as a singular evolution. Cultural change is driven by equal parts fantasy and reality. Exploration and expansion were paired with fantastical justifications from religious and literary sources. The growth of an entertainment industry was two-thirds market-driven commerce and one-third creativity. The World Wide Web was originally developed to exchange scientific information but was being used to trade porn almost from the moment it began.

To be fair, Chapter 32 ("America Versus the Godless Civilized World: Why Are We So Exceptional") provides an argument for the exceptionalism of America, at least in terms of religiosity. The pervasiveness of religious belief in America is unlike that of nearly any other developed nation, and the variation and creativity of those beliefs seem to defy economic and social science predictions about how religions shape modern life across nations. In opposition, however, is a later chapter on postmodernism in academia that again shows how a net wider than America is needed to explain anti-rationalist trends. From Foucault and Continental philosophy we see the trend towards fantasy; Anglo-American analytic philosophy has determinedly moved towards probabilistic formulations of epistemology and more and more scientism.

So what explains irrationality, whether uniquely American or more universal? In Fantasyland Andersen pins the blame on the persistence of intense religiosity in America. Why America alone should be so afflicted remains a mystery, but the consequence is that the adolescent transition away from belief in fairy tales never occurs, and there is a bleed-over effect into the acceptance of alternative formulations of reality:

The UC Berkeley psychologist Alison Gopnik studies the minds of small children and sees them as little geniuses, models of creativity and innovation. “They live twenty-four/seven in these crazy pretend worlds,” she says. “They have a zillion different imaginary friends.” While at some level, they “know the difference between imagination and reality…it’s just they’d rather live in imaginary worlds than in real ones. Who could blame them?” But what happens when that set of mental habits persists into adulthood too generally and inappropriately? A monster under the bed is true for her, the stuffed animal that talks is true for him, speaking in tongues and homeopathy and vaccines that cause autism and Trilateral Commission conspiracies are true for them.

This analysis extends the umbrella of theories of religion built around our instinct for perceiving purposeful agency: imaginary realities escalate without end in order to buttress these personifying habits of mind. It's a strange preoccupation for many of us, though we can be accused of being coastal elites (or worse) just for entertaining such thoughts.

Fantasyland doesn't end on a positive note, but I think the broader thesis just might. We are all so programmed, I might claim. Things slip and slide, politics seesaw, but there seems to be a gradual unfolding of more rights and more opportunity for the many. Theocracy has always lurked in the basement of the American soul, but the atavistic fever dream has been eroded by a cosmopolitan engagement with the world. Those who long for utopia get down to the business of non-zero-sum interactions with a broader clientele and drift away, their certitude fogging until it lifts and a more conscientious idealization of what is and what can be takes over.

Simulator Superputz

The simulation hypothesis is perhaps a bit more interesting than how to add clusters of neural network nodes to do a simple reference resolution task, but it is also less testable. This is the nature of big questions, since they would otherwise have been resolved by now. Nevertheless, some theory and experimental analysis have been undertaken on the question of whether we are living in a simulation, all based on the assumption that the strangeness of quantum and relativistic realities might be a result of limited computing power in the grand simulator machine. For instance, in a virtual reality game, only the walls that you, as a player, can see need to be calculated and rendered. The walls that are out of sight exist only as map data in the computer's memory or persisted to longer-term storage. Likewise, the behavior of virtual microscopic phenomena need not be calculated so long as the macroscopic results can be rendered, like the fire patterns in a virtual torch.
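The analogy can be made concrete with a small sketch of lazy, observation-driven computation. This is a minimal illustration only, assuming a toy world; the region names and helper functions below are hypothetical, not any real engine's API.

```python
# A minimal sketch of "render only what is observed": detail for a region is
# computed lazily, only when an observer looks at it. Everything else stays as
# cheap map data, the way out-of-view walls live only in a game engine's memory.

from functools import lru_cache

world_map = {"throne_room": "stone walls, torch", "cellar": "dirt floor, barrels"}

@lru_cache(maxsize=None)
def render_region(name: str) -> str:
    # The expensive, high-fidelity computation happens only on first observation.
    description = world_map[name]
    return f"[high-fidelity render of {name}: {description}]"

def observe(player_view: str) -> str:
    # Only the region currently in view gets the costly treatment.
    return render_region(player_view)

print(observe("throne_room"))   # computed now
# "cellar" is never rendered unless someone looks at it.
```

The design point is simply that unobserved structure can be kept at low fidelity without any observer inside the world noticing, which is the intuition behind the physics claims that follow.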

So one way of explaining physics conundrums like delayed-choice quantum erasure, Bell's inequality, or ER = EPR might be to claim that these sorts of phenomena are the results of a low-fidelity simulation necessitated by the limits of the simulator computer. I think the likelihood that this is true is low, however, because we can imagine that there exists an infinitely large cosmos that merely includes our universe simulation as a mote within it. Low-fidelity simulation constraints might give experimental guidance, but the same results could just as well be explained by accepting indeterminacy and non-locality as fundamental features of our universe.

It's worth considering, however, what we should think about the nature of the simulator given this potentially devious (and poorly coded) little Matrix that we find ourselves trapped in. There are some striking alternatives. To make this easier, I'll use the following abbreviations:

S = Simulator (creator of simulation)

U = Simulation

SC = Simulation Computer (whatever the simulation runs on)

MA = Morally Aware (the perception, rightly or wrongly, that judgments and choices influence simulation-level phenomena)

US = Simulatees

CA = Conscious Awareness (the perception that one is aware of stuff)

So let’s get started:

  1. S is unaware of events in U due to limited monitoring resources.
  2. S is unaware of events in U due to lack of interest.
  3. S is incapable of conscious awareness of U (S is some kind of automatic system).
  4. It seems unlikely that limited monitoring resources would be a constraint given the scale and complexity of U: monitoring would cost far less than U itself and could simply be tuned to filter for active categories of interest. So S must either lack interest (2) or be incapable of awareness (3).
  5. We can dismiss (3) due to an infinite regress on the nature of the simulator in general, since the origin of the Simulation Hypothesis is the probability that we humans will create ever-better simulations in the future. There is no other simulation hypothesis that involves pure automation of S lacking CA and some form of MA.
  6. Given (2), why would S lack interest in U? Perhaps S created a large ensemble of universes and is only interested in long-term outcomes. But maybe S is just a putz.
  7. For (6), if S is MA, then S is wrong to create a U that supports the evolution of US, insofar as S allows for CA and MA in US combined with radical uncertainty in U.
  8. Conclusion: S is a putz or this ain’t a simulation.

Theists can squint and see the problem here. We might add 7.5: it's certainly wrong of S to actively burn, drown, imprison, enslave, and murder CA and MA US. If S is doing 7.5, that makes S a superputz.

In my novel, Teleology, the creation of another simulated universe by a first one was a religious imperative. The entities saw, once in contact with their S, that it must be the ultimate fulfillment of purpose for them to become S. Yet their S was very concerned with their U and would have objected to even cleanly pulling the plug on their U. He did lack instrumentation (1) into U, but built a great deal of it after discovering that there was evidence of CA. He was no putz.

Bright Sarcasm in the Classroom

That old American tradition, the Roman Salute

When a Pew Research Center poll discovered a shocking divide between self-identifying Republicans/GOP-leaning Independents and their Democratic Party opposites on the question of the value of higher education, the commentariat went apeshit. Here's a brief rundown of sources, left, center, and right, and what they decided are the key issues:

  • National Review: Higher education has eroded the Western canon and turned into a devious plot to rob our children of good thinking, spiked with avocado toast.
  • Paul Krugman at New York Times: Conservative tribal identification leads to opposition to climate change science or evolution, and further towards a “grim” anti-intellectualism.
  • New Republic: There is no evidence that college kids' political views are changed by higher education, nor that conservative-minded professors are much maltreated on campus, so the conservative complaints are just overblown anti-liberal hype that, they point out, has some very negative consequences.

I would make a slightly more radical claim than Krugman, for instance, and one that is pointedly opposed to Simonson at National Review. In higher education we see not just a dedication to science but an active program of criticizing and deconstructing ideas like the Western canon as central to higher thought. In history, great man theories have been broken down into smart and salient compartments that explore the many ways in which groups and individuals, genders and ideas, all were part of fashioning the present. These changes, largely late 20th century academic inventions, have broken up the monopolies on how concepts of law, order, governance, and the worth of people were once formulated. This must be anti-conservative in the pure sense that there is little to be conserved from older ideas, except as objects of critique. We need only stroll through the grotesque history of Social Darwinism, psychological definitions of homosexuality as a mental disorder, or anthropological theories of race and values to get a sense for why academic pursuits, in becoming more critically influenced by a burgeoning and democratizing populace, were obligated to refine what is useful, intellectually valuable, and less wrong. The process will continue, too.

The consequences are far reaching. Higher education necessarily correlates with liberal values, and those values tend to correlate with valuing reason and fairness over tradition and security. That means that atheism has a greater foothold and science as a primary means of truth discovery takes precedence over the older and uglier angels of our nature. The enhanced creativity that arises from better knowledge of the world and from accurate, careful assessment then leads, in turn, to knowledge generation and technological innovation derived almost exclusively from a broad engagement with ideas. This can cause problems when ordering Italian sandwiches.

Is there or should there be any antidote to the disjunctive opinions on the value of higher learning? Polarized disagreements on the topic can lead to societal consequences that are reactive and precipitous, which is what all three sources are warning about in various ways. But the larger goals of conservatives should be easily met through the mechanism that most of them would agree is always open: form, build, and attend ideologically attuned colleges. There are at least dozens of Christian colleges with various charters that should meet some of their expectations. If these institutions are good for them and society as a whole, they just need to do a better job of explaining that to America. Then, as consumers once flocked from Microsoft to Apple, the great public and private institutions will lose the student debt dollar to these other options and, finally, indoctrination in all that bright sarcasm will end in the classroom. Maybe, then, everyone will agree that the earth is only a few thousand years old and that coal demand proceeds from supply.

Traitorous Reason, Facts, and Analysis

Obama's post-election press conference was notable for its continued demonstration of adult discourse and values. Especially notable:

This office is bigger than any one person and that’s why ensuring a smooth transition is so important. It’s not something that the constitution explicitly requires but it is one of those norms that are vital to a functioning democracy, similar to norms of civility and tolerance and a commitment to reason and facts and analysis.

But ideology in American politics (and elsewhere) has the traitorous habit of undermining every one of those norms. It always begins with undermining the facts in pursuit of manipulation. Just before the election, the wizardly Aron Ra took to YouTube to review VP-elect Mike Pence's bizarre grandstanding in Congress in 2002.

And just today, Trump lashed out at the cast of Hamilton for lecturing Mike Pence at the end of a show on his anti-LGBTQ stances, also a matter of ideology and belief.

Astonishing as this seems, we live in an imperfect world being drawn very slowly, and in fits and starts, away from tribal and xenophobic tendencies. My wife received a copy of a letter from now-deceased family members that contained an editorial from the Shreveport Journal in the 1960s that (with its embedded The Worker editorial review) simultaneously attacked segregationist violence and the rhetoric of Alabama governor George Wallace, claimed that communists were influencing John F. Kennedy and the civil rights movement, demanded the jailing of communists, and suggested the federal government should take over Alabama:

[Shreveport Journal editorial from the 1960s]

The accompanying letter was also concerned over the fate of children raised as Unitarians, amazingly enough, and how they could possibly be moral people. It then concluded with a recommendation to vote for Goldwater.

Is it any wonder that the accompanying cultural revolutions might lead to the tearing down of the institutions that were used to justify the deviation from "reason and facts and analysis"?

But I must veer to the positive here: this brief blip is a passing retrenchment of these old tendencies, one that the Millennials and their children will look back on with fond amusement, the way I remember Ronald Reagan.

Quantum Field Is-Oughts

Sean Carroll's Oxford lecture on Poetic Naturalism is worth watching. In many ways it just reiterates several common themes. First, it reinforces the is-ought barrier between values and observations about the natural world. It does so with particular depth, though, by identifying how coarse-grained theories at different levels of explanation can be equally compatible with quantum field theory. Second, and related, he shows how entropy is an emergent property of atomic theory and the interactions of quantum fields (which we think of as particles much of the time) and, importantly, that we can project the same notion of boundary conditions that gives rise to entropy into the future, resulting in a kind of effective teleology. That is, there can be boundary conditions for the evolution of large-scale particle systems that form into configurations we can label purposeful or purposeful-like. I still like the term "teleonomy" to describe this alternative notion, but the language largely doesn't matter except as an educational and distinguishing tool against the semantic embeddings of old scholastic monks.

Finally, the poetry aspect resolves into value theories of the world. Many are compatible with descriptive theories, and our resolution of them is through opinion, reason, communication, and, yes, violence and war. There is no monopoly of policy theories, religious claims, or idealizations that hold sway. Instead we have interests and collective movements, along with all of the above, working together to define our moral frontiers.


Bayesianism and Properly Basic Belief

Xu and Tenenbaum, in "Word Learning as Bayesian Inference" (Psychological Review, 2007), develop a very simple Bayesian model of how children (and even adults) build semantic associations based on accumulated evidence. In short, they find contrastive elimination approaches as well as connectionist methods unable to explain the patterns that are observed. Specifically, the most salient problem with these other methods is that they lack the rapid transition that is seen when three exemplars are presented for a class of objects associated with a word versus one exemplar. Adults and kids (the former even more so) just get word meanings faster than those other models can easily show. Moreover, a space of contending hypotheses weighted according to their Bayesian statistics provides an escape from the all-or-nothing of hypothesis elimination while retaining some of the "soft" commitment properties that connectionist models provide.

The mathematical trick for the rapid transition is rather interesting. They formulate a “size principle” that weights the likelihood of a given hypothesis (this object is most similar to a “feb,” for instance, rather than the many other object sets that are available) according to a scaling that is exponential in the number of exposures. Hence the rapid transition:

Hypotheses with smaller extensions assign greater probability than do larger hypotheses to the same data, and they assign exponentially greater probability as the number of consistent examples increases.

It should be noted that they don’t claim that the psychological or brain machinery implements exactly this algorithm. As is usual in these matters, it is instead likely that whatever machinery is involved, it simply has at least these properties. It may very well be that connectionist architectures can do the same but that existing approaches to connectionism simply don’t do it quite the right way. So other methods may need to be tweaked to get closer to the observed learning of people in these word tasks.
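As a toy illustration of the size principle, here is a minimal Bayesian word-learner sketch. The hypothesis names and extension sizes are invented for illustration and are not the stimuli or parameters from Xu and Tenenbaum's paper; only the weighting scheme, a likelihood of (1/|h|)^n for n consistent examples under hypothesis h, follows their formulation.

```python
# Toy sketch of the size principle: narrower hypotheses gain posterior mass
# exponentially fast as consistent examples accumulate. The hypothesis space
# below is hypothetical, chosen only to show the rapid transition.

hypotheses = {          # candidate extensions for a novel word like "feb"
    "dalmatians": 8,    # narrow extension (8 members)
    "dogs": 60,         # mid-level extension
    "animals": 500,     # broad extension
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # flat prior for simplicity

def posterior(n_examples: int) -> dict:
    """Posterior over hypotheses after n examples consistent with all of them."""
    scores = {h: prior[h] * (1.0 / size) ** n_examples
              for h, size in hypotheses.items()}
    z = sum(scores.values())
    return {h: round(s / z, 4) for h, s in scores.items()}

print(posterior(1))  # broader hypotheses still retain noticeable mass
print(posterior(3))  # nearly all mass has collapsed onto the narrowest hypothesis
```

With one example the broad hypotheses keep some weight; by three examples the posterior has all but collapsed onto the narrowest consistent extension, which is the rapid transition the quoted passage describes.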

So what can this tell us about epistemology and belief? Classical foundationalism might be formulated as follows: something is a "basic" or "justified" belief if it is self-evident or evident to our senses. Other beliefs may therefore be grounded by those basic beliefs. And a more modern reformulation might substitute "incorrigible" for "justified," with incorrigibility built on the necessity that, given belief in the proposition, it is in fact true.

Here's Alvin Plantinga laying out a case for why justification and incorrigibility have a range of problems, problems serious enough for Plantinga that he suspects god belief could just as easily be a basic belief, allowing for the kinds of presuppositional Natural Theology (think: I look around me and the hand of God is obvious) that is at the heart of some of the loftier claims concerning the viability or non-irrationality of god belief. It even provides a kind of coherent framework for historical interpretation.

Plantinga then positions the problem of properly basic belief as an inductive one:

And hence the proper way to arrive at such a criterion is, broadly speaking, inductive. We must assemble examples of beliefs and conditions such that the former are obviously properly basic in the latter, and examples of beliefs and conditions such that the former are obviously not properly basic in the latter. We must then frame hypotheses as to the necessary and sufficient conditions of proper basicality and test these hypothesis by reference to those examples. Under the right conditions, for example, it is clearly rational to believe that you see a human person before you: a being who has thoughts and feelings, who knows and believes things, who makes decisions and acts. It is clear, furthermore, that you are under no obligation to reason to this belief from others you hold; under those conditions that belief is properly basic for you.

He goes on to conclude that this opens up the god hypothesis as providing this kind of coherence mechanism:

By way of conclusion then: being self-evident, or incorrigible, or evident to the senses is not a necessary condition of proper basicality. Furthermore, one who holds that belief in God is properly basic is not thereby committed to the idea that belief in God is groundless or gratuitous or without justifying circumstances. And even if he lacks a general criterion of proper basicality, he is not obliged to suppose that just any or nearly any belief—belief in the Great Pumpkin, for example—is properly basic. Like everyone should, he begins with examples; and he may take belief in the Great Pumpkin as a paradigm of irrational basic belief.

So let's assume that the word learning mechanism based on this Bayesian scaling is representative of our human inductive capacities. Now this may or may not be broadly true. It is possible that it is true of words but not of other domains of perceptual phenomena. Nevertheless, given this scaling property, the relative inductive truth of a given proposition (a meaning hypothesis) is strictly Bayesian. Moreover, this doesn't succumb to problems of verificationism because it only claims relative truth. "Properly basic" then becomes a matter of scaled, contending explanatory hypotheses, and the god hypothesis has to compete with other explanations like evolutionary theory (for human origins), empirical evidence of materialism (for explanations contra supernatural ones), perceptual mistakes (ditto), myth scholarship, textual analysis, the influence of parental belief exposure, the psychology of wish fulfillment, the pragmatic triumph of science, and so on.

And so we can stick to a relative scaling of hypotheses as to what constitutes basicality or justified true belief. That's fine. We can continue to argue the previous points as to whether they support or override one hypothesis or another. But the question Plantinga raises as to what ethics to apply in making those decisions is important. He distinguishes different reasons why one might want to believe more true things than others (broadly), or why some things might count as properly basic rather than others, or, more pointedly, why philosophers feel the need to label god-belief as irrational. But we succumb to a kind of unsatisfying relativism insofar as the space of these hypotheses is not, in fact, weighted in a manner that best reflects the known facts. The relativism gets deeper when the weighting is washed out by wish fulfillment, pragmatism, aspirations, and personal insights that lack falsifiability. That is at least distasteful, maybe aretaically so (in Plantinga's framework), but probably more so teleologically, in that it influences other decision-making and the conflicts and real harms societies may cause.

Lucifer on the Beach

I picked up a whitebait pizza while stopped along the West Coast of New Zealand tonight. Whitebait are tiny little swarming immature fish that can be scooped out of estuarial river flows using big-mouthed nets. They run, they dart, and it is illegal to change river exit points to try to channel them for capture. Hence, whitebait is semi-precious, commanding NZD 70-130/kg, which explains why there was a size limit on my pizza: only the small one was available.

By the time I was finished the sky had aged from cinereal to iron in a satire of the vivid, watch-me colors of CNN International flashing Donald Trump's linguistic indirection across the television. I crept out, setting my headlamp to red LEDs designed to minimally interfere with night vision. Just up away from the coast, hidden in the impossible tangle of cold rainforest, there was a glow worm dell. A few tourists conjured with flashlights facing the ground to avoid upsetting the tiny Arachnocampa luminosa that clung to the walls inside the dark garden. They were like faint stars composed into irrelevant constellations, with only the human mind to blame for any observed patterns.

And the light, what light, like recently invented white-light LEDs, but a light that doesn't flicker or change, and is steady under the calmest observation. Driven by luciferin and luciferase, these tiny creatures lure a few scant light-seeking creatures to their doom as food for absorption, until they emerge to mate, briefly, lay eggs, and then die.

Lucifer again, the name properly derived from the Latin for light-bringer: the chemical basis for bioluminescence was largely isolated in the middle of the 20th century. Yet there is this biblical stigma hanging over the term, one that really makes no sense at all. The translation of morning star or some other such nonsense into Latin got corrupted into a proper name by a process of word conversion (this isn't metonymy or something like that; I'm not sure there is a word for it other than "mistake"). So much for some kind of divine literalism tracking mechanism that preserves perfection. Even Jesus got rendered as lucifer in some passages.

But nothing new here. Demon comes from the Greek daimon, and Christianity tried to, well, demonize all the ancient spirits during the transition from monolatry to monotheism. The spirits of the air that were in constant flux for the Hellenists, then the Romans, needed to be suppressed and given an oppositional position within the Christian soteriology. Even "Satan" may have been borrowed from Persian court drama as a kind of spy or informant after the exile.

Oddly, we are left with a kind of naming magic for the truly devout who might look at those indifferent little glow worms with some kind of castigating eye, corrupted by a semantic chain that is as kinked as the popular culture epithets of Lucifer himself.

Free Will and Thermodynamic Warts

The Stone at The New York Times is a great resource for insights into both contemporary and rather ancient discussions in philosophy. Here's William Irvin at King's College discoursing on free will and moral decision-making. The central problem is one that we all discussed in high school: if our atomistic world is deterministic in that there is a chain of causation from one event to another (contingent in the last post), and therefore even our mental processes must be caused, then there is no free will in the expected sense ("libertarian free will" in the literature). This can be overcome by the simplest fix of proposing a non-material soul that somehow interacts with the material being and is inherently non-deterministic. This results in a dualism of matter and mind that doesn't seem justifiable by any empirical results. For instance, we know that decision-making does appear to have a neuropsychological basis because we know about the effects of lesioning brains, of neurotransmitters, and even of how smells can influence decisions. Irvin also claims that the realization of the potential loss of free will leaves us awash in some sense of hopelessness at the simultaneous loss of the metaphysical reality of an objective moral system. Without free will we seem off the hook for our decisions.

Compatibilists will disagree, and might even cite quantum indeterminacy as a rescue donut for pulling some notion of free will up out of the deep ocean of Irvin's despair. But the fix is perhaps even easier than that. Even though we might recognize that there are chains of causation at a microscopic scale, the macroscopic combinations of these events (even without quantum indeterminacy) become predictable only along broad contours of probabilistic outcomes. We start with complex initial conditions and things just get worse from there. By the time we get to exceedingly complex organisms deciding things, we also have elaborate control cycles influenced by childhood training, religion, and reason that cope with this ambiguity and complexity. The metaphysical reality of morality or free will may be gone, but there is no need for fictionalism. They are empirically real, and any sense of loss is tied merely to overcoming the illusions arising from these incompatibilities between everyday reasoning and the deeper appreciation of the world as it is, thermodynamic warts and all.

Rationality and the Intelligibility of Philosophy

There is a pervasive meme in the physics community that holds as follows: there are many physical phenomena that don't correspond in any easy way to our ordinary experiences of life on earth. We have wave-particle duality wherein things behave like waves sometimes and particles other times. We have simultaneous entanglement of physically distant things. We have quantum indeterminacy and the emergence of stuff out of nothing. The tiny world looks like some kind of strange hologram with bits connected together by virtual strings. We have a universe that began out of nothing and that begat time itself. It is, in this framework, worthwhile to recognize that our everyday experiences are not necessarily useful (and are often confounding) when trying to understand the deep new worlds of quantum and relativistic physics.

And so it is worthwhile to ask whether many of the "rational" queries that have been made down through time have any intelligible meaning given our modern understanding of the cosmos. For instance, if we were to state the premise "all things are either contingent or necessary" that underlies a poor form of the Kalam Cosmological Argument, we can immediately question the premise itself. And a failed premise leads to a failed syllogism. Maybe the entanglement of different things is piece-part of the entanglement of large-scale spacetime, and the insights we have so far are merely shadows of the real processes acting behind the scenes? Who knows what happened before the Big Bang?

In other words, do the manipulations of logic and the assumptions built into the terms lead us to empty and destructive conclusions? There is no reason not to suspect that, and therefore the bits of rationality that don't derive from empirical results are immediately suspect. This seems to press for a more coherence-driven view of epistemology, one that accords with known knowledge but adjusts automatically as semantics change.

There is an interesting mental exercise concerning why we should be able to even undertake these empirical discoveries and all their seemingly non-sensible results that are nevertheless fashioned into a cohesive picture of the physical world (and increasingly the mental one). Are we not making an assumption that our brains are capable of rational thinking given our empirical understanding of our evolved pasts? Plantinga’s Evolutionary Argument Against Naturalism tries, for instance, to upend this perspective by claiming it is highly unlikely that a random process of evolution could produce reliable mental faculties because it would be focused too much on optimization for survival. This makes no sense empirically, however, since we have good evidence for evolution and we have good evidence for reliable mental faculties when subjected to the crucible of group examination and scientific process. We might be deluding ourselves, it’s true, but there are too many artifacts of scientific understanding and progress to take that terribly seriously.

So we get back to coherence and watchful empiricism. No necessity for naturalism as an ideology. It’s just the only thing that currently makes sense.

Non-Cognitivist Trajectories in Moral Subjectivism

When I say that "greed is not good" the everyday mind creates a series of images and references, from Gordon Gekko's inverse proposition to general feelings about inequality and our complex motivations as people. There is a network of feelings and, perhaps, some facts that might be recalled or searched for to justify the position. As a moral claim, though, it might most easily be considered connotative rather than cognitive in that it suggests a collection of secondary emotional expressions and networks of ideas that support or deny it.

I mention this (the theories that are consonant with this kind of reasoning are called non-cognitivist and, variously, emotive and expressive) because there is a very real tendency to reduce moral ideas to objective versus subjective, especially in atheist-theist debates. I recently watched one such debate between Matt Dillahunty and an Orthodox priest where the standard litany revolved around claims about objectivity versus subjectivity of truth. Objectivity of truth is often portrayed as something like, "without God there is no basis for morality. God provides moral absolutes. Therefore atheists are immoral." The atheists inevitably reply that the scriptural God is a horrific demon who slaughters His creation and condones slavery and other ideas that are morally repugnant to the modern mind. And then the religious descend into what might be called "advanced apologetics" that try to diminish, contextualize, or dismiss such objections.

But we are fairly certain, regardless of the tradition, that there are inevitable nuances to any kind of moral structure. Thou shalt not kill gets revised to thou shalt not murder. So we have to parse manslaughter in pursuit of a greater good against any rules-based approach to such a simplistic commandment. Not eating shellfish during a famine has less human expansiveness but nevertheless carries similar objective antipathy.

I want to avoid invoking the Euthyphro dilemma here and instead focus on the notion that there might be an inevitability to certain moral proscriptions and even virtues given an evolutionary milieu. This was somewhat the floor plan of Sam Harris, but I'll try to project the broader implications of species-level fitness functions onto a more local theory, specifically Gibbard's fact-prac worlds, where the trajectories of normative, non-cognitive statements like "greed is not good" align with sets of perceptions of the world and options for implementing activities that strengthen engagement with the moral assertion. The assertion is purely subjective, but it derives from a correspondence with incidental phenomena and a coherence with other ideations and aspirations. It is mostly non-cognitive in the sense that it expresses emotional primitives rather than simple truth propositions. It has a number of interesting properties, however, most notably that the fact-prac set of constraints surrounding these trajectories is movable, resulting in the kinds of plasticity and moral "evolution" that we see around us, like "slavery is bad" and "gay folks should not be discriminated against." So as an investigative tool, such a theory has important verificational value. As presented by Gibbard, however, these collections of constraints that guide the trajectories of moral approaches to simple moral commandments, admonishments, or statements need further strengthening to meet the moral landscape "ethical naturalism" that asserts that certain moral attitudes result in improved species outcomes and are therefore axiomatically possible and sensibly rendered as objective.

And it does this without considering moral propositions at all.