Category: Science

Gravity and the Dark Star

Totality in Nebraska

I began at 5 AM from the Broomfield Aloft hotel, strategically situated in a sterile “new urban” office park cum apartment complex along the connecting freeway between Denver and Boulder. The whole weekend was fucked in a way: colleges across Colorado were moving in for a Monday start, half of Texas was here already, and most of Colorado planned to head north to the zone of totality. I split off I-25 around Loveland and had success using US 85 northbound through Cheyenne. Continuing up 85 was the original plan, but that fell apart when 85 came to a crawl in the vast prairie lands of Wyoming. I dodged south and east, then (dodging will be a continuing theme), and entered Nebraska’s panhandle with middling traffic.

I achieved totality on schedule north of Scottsbluff. And it was spectacular. A few fellow adventurers were hanging out along the outflow lane of an RV dump at a state recreation area. One guy flew his drone around a bit; maybe he wanted B-roll for other purposes. I got out fast, but not fast enough, and dodged my way through lane closures designed to provide access from feeder roads. The Nebraska troopers were great, I should add, always willing to wave to us science and spectacle immigrants. Meanwhile, SiriusXM spewed various Sibelius pieces that had “sun” in their names, while the Grateful Dead channel gave us a half dozen versions of Dark Star, the quintessential jam song dating to the band’s early, psychedelic era.

Was it worth it? I think so, though one failed dodge that left me in a ten mile bumper-to-bumper crawl in rural Nebraska with a full bladder tested my faith in the stellar predictability of gravity. Gravity remains an enigma in many ways, though the perfection of watching the corona flare around the black hole sun shows just how unenigmatic it can be in the macroscopic sphere.

But reconciling gravity with quantum-scale phenomena remains remarkably elusive and was the beginning of the decades-long detour through string theory which, admittedly, some have characterized as “fake science” due to our inability to find testable aspects of the theory. Yet there are some interesting recent developments that, though not directly string theoretic, have a relationship to the quantum symmetries that, in turn, led to stringiness.

So I give you Juan Maldacena and Leonard Susskind’s suggestion that ER = EPR. This is a rather remarkable conclusion that unites quantum and relativistic realities, based on a careful look at the symmetry between two theoretical outcomes at two very different scales. So how does it work? In a nutshell, the claim is that quantum entanglement (EPR) and the Einstein-Rosen bridges (ER) of general relativity are two descriptions of the same underlying connection. Just like the science fiction idea of wormholes connecting distant things together to facilitate faster-than-light travel, ER connects singularities like black holes together. And the correlations that occur between black holes are just like the correlations between entangled quanta. Neither is amenable to FTL travel or signaling, due to Lorentzian traversability issues (the former) or Bell’s Inequality (the latter).
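As a concrete touchstone (my own sketch of the standard example, not a quotation from Maldacena and Susskind), compare a maximally entangled Bell pair of spins with the thermofield double state that describes a pair of entangled black holes:

|\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)

|\mathrm{TFD}\rangle = \frac{1}{\sqrt{Z}} \sum_n e^{-\beta E_n / 2} \, |n\rangle_L \, |n\rangle_R

The two states have the same basic entangled form. In anti-de Sitter space the second one describes two black holes joined by an Einstein-Rosen bridge, and the ER = EPR conjecture reads that bridge as the gravitational face of the same entanglement the Bell pair exhibits.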

Today was just a shadow, classically projected, maybe just slightly twisted by the gravity wells, not some wormhole wending its way through space and time. It is worth remembering, though, that the greatest realization of 20th century physics is that reality really isn’t in accord with our everyday experiences. Suns and moons kind of are, briefly and ignoring fusion in the sun, but reality is almost mystically entangled with itself, a collection of vibrating potentialities that extends out everywhere and then, unexpectedly, is connected together in another way that defies these standard hypothetical representations and the very notion of spatial connectivity.

Bright Sarcasm in the Classroom

That old American tradition, the Roman Salute

When a Pew research poll discovered a shocking divide between self-identifying Republicans/GOP-leaning Independents and their Democratic Party opposites on the question of the value of higher education, the commentariat went apeshit. Here’s a brief rundown of sources, left, center, and right, and what they decided are the key issues:

  • National Review: Higher education has eroded the Western canon and turned into a devious plot to rob our children of good thinking, spiked with avocado toast.
  • Paul Krugman at New York Times: Conservative tribal identification leads to opposition to climate change science or evolution, and further towards a “grim” anti-intellectualism.
  • New Republic: There is no evidence that college kids’ political views are changed by higher education, and conservative-minded professors aren’t much maltreated on campus either, so the conservative complaints are just overblown anti-liberal hype that, they point out, has some very negative consequences.

I would make a slightly more radical claim than Krugman, for instance, and one that is pointedly opposed to Simonson at National Review. In higher education we see not just a dedication to science but an active program of criticizing and deconstructing ideas like the Western canon as central to higher thought. In history, great man theories have been broken down into smart and salient compartments that explore the many ways in which groups and individuals, genders and ideas, all were part of fashioning the present. These changes, largely late 20th century academic inventions, have broken up the monopolies on how concepts of law, order, governance, and the worth of people were once formulated. This must be anti-conservative in the pure sense that there is little to be conserved from older ideas, except as objects of critique. We need only stroll through the grotesque history of Social Darwinism, psychological definitions of homosexuality as a mental disorder, or anthropological theories of race and values to get a sense for why academic pursuits, in becoming more critically influenced by a burgeoning and democratizing populace, were obligated to refine what is useful, intellectually valuable, and less wrong. The process will continue, too.

The consequences are far reaching. Higher education correlates necessarily with liberal values and those values tend to correlate more with valuing reason and fairness over tradition and security. That means that atheism has a greater foothold and science as a primary means of truth discovery takes precedence over the older and uglier angels of our nature. The enhanced creativity that arises from better knowledge of the world and accurate and careful assessment then, in turn, leads to knowledge generation and technological innovation that is derived almost exclusively from a broad engagement with ideas. This can cause problems when ordering Italian sandwiches.

Is there or should there be any antidote to the disjunctive opinions on the value of higher learning? Polarized disagreements on the topic can lead to societal consequences that are reactive and precipitous, which is what all three sources are warning about in various ways. But the larger goals of conservatives should be easily met through the mechanism that most of them would agree is always open: form, build, and attend ideologically-attuned colleges. There are at least dozens of Christian colleges that have various charters that should meet some of their expectations. If these institutions are good for them and society as a whole, they just need to do a better job of explaining that to America. Then, like the consumer flocking from Microsoft to Apple, the great public and private institutions will lose the student debt dollar to these other options and, finally, indoctrination in all that bright sarcasm will end in the classroom. Maybe, then, everyone will agree that the earth is only a few thousand years old and that coal demand proceeds from supply.

Zebras with Machine Guns

I was just rereading some of the literature on Plantinga’s Evolutionary Argument Against Naturalism (EAAN) as a distraction from trying to write too much on ¡Reconquista!, since it looks like I am on a much faster trajectory to finishing the book than I had thought. EAAN is a curious little argument that some have dismissed as a resurgent example of scholastic theology. It has some newer trappings that we see in modern historical method, however, especially in the use of Bayes’ Theorem to establish the warrant of beliefs by trying to cast those warrants as probabilities.

A critical part of Plantinga’s argument hinges on the notion that evolutionary processes optimize for behavior and not necessarily for true belief. Therefore, it is plausible that an individual could hold false beliefs that are nonetheless adaptive. For instance, Plantinga gives the example of a man who desires to be eaten by tigers but always feels hopeless when confronted by a given tiger because he doesn’t feel worthy of that particular tiger, so he runs away and looks for another one. This may seem like a strange conjunction of beliefs and actions that happen to result in the man surviving, but we know from modern psychology that people can form elaborate justifications for perceived events and wild metaphysics to coordinate those justifications.

If that is the case, for Plantinga, the evolutionary consequence is that we should not trust our belief in the reliability of our reasoning faculties because they are effectively arbitrary. There are dozens of responses to this argument that dissect it along many different dimensions. I’ve previously showcased Branden Fitelson and Elliott Sober’s “Plantinga’s Probability Arguments Against Evolutionary Naturalism” from 1997, which I think is one of the most complete examinations of the structure of the argument. There are two critical points that I think emerge from Fitelson and Sober. First, there is the sober reminder of the inherent frailty of scientific method that needs to be kept in mind. Science is an evolving work involving many minds operating, when at its best, in a social network that reduces biases and methodological overshoots. It should be seen as a tentative foothold against “global skepticism.”

The second, and critical, take-away from that response is more nuanced, however. The notion that our beliefs can be arbitrarily disconnected from adaptive behavior in an evolutionary setting, like the tiger survivor, requires a very different kind of evolution than we theorize. Fitelson and Sober point out that if anything were possible, zebras might have developed machine guns to defend against lions rather than just cryptic stripes. Instead, the sieve of possible solutions to adaptive problems is built on the genetic and phenotypic variants that came before. This will limit the range of arbitrary, non-true beliefs that can be compatible with an adaptive solution. If the joint probability of true belief and adaptive behavior is much higher than the alternative, which we might guess is true, then there is a greater probability that our faculties are reliable. In fact, we could argue, using a parsimony argument that extends Bayesian analysis to the general case of optimal inductive models (Sober actually works on this issue extensively), that there are classes of inductive solutions that, by eliminating add-ons, predictively outperform those solutions that have extra assumptions and entities. So, P(not getting eaten | true belief that tigers are threats) >> P(not getting eaten | false beliefs about tigers), especially when updated over time. I would be remiss if I didn’t mention that William of Ockham, of Ockham’s Razor fame, was a scholastic theologian, so if Plantinga’s argument is revisiting those old angels-on-the-head-of-a-pin-style arguments, it might be opposed by a fellow scholastic.
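To make the “updated over time” point concrete, here is a minimal toy sketch in R (my own illustration with invented likelihoods, not a calculation from Fitelson and Sober) that updates the probability that our faculties are reliable after surviving successive tiger encounters:

# Toy Bayesian update: P(reliable faculties) after surviving repeated encounters.
# The likelihoods below are invented purely for illustration.
p_survive_given_reliable   <- 0.99  # true beliefs about tigers usually keep you alive
p_survive_given_unreliable <- 0.50  # arbitrary false beliefs only sometimes do

posterior_reliable <- function(n_encounters, prior = 0.5) {
  num <- prior * p_survive_given_reliable^n_encounters
  den <- num + (1 - prior) * p_survive_given_unreliable^n_encounters
  num / den
}

sapply(c(1, 5, 10, 20), posterior_reliable)
# climbs from about 0.66 after one encounter toward 1 after twenty

Under these assumed numbers the posterior does what the inequality suggests: repeated survival makes the reliable-faculties hypothesis dominate.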

The Obsessive Dreyfus-Hawking Conundrum

I’ve been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over, so I inched up into triple digit speeds in the M6. Because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F117s practiced touch-and-gos at Holloman in the background, and then driving my carpool truck out to the high energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before trying to track down the trailhead and starting on my hike. (I would have run the thing but wanted to go to lunch later and didn’t have access to a shower. Neither restaurants nor diners deserve an après-run moi.) And then I was on the trail and I kept stopping and taking plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus. In English, the syntax and semantics are less constrained and the pragmatics more pronounced, but the emotional high is much the same.

With the recent death of Hubert Dreyfus at Berkeley it seems an opportune time to consider the uniquely human capabilities that are involved in each of these creative ventures. Uniquely, I suggest, because we can’t yet imagine what it would be like for a machine to do the same kinds of intelligent tasks. Yet, from Stephen Hawking through to Elon Musk, influential minds are worried about what might happen if we develop machines that rise to the level of human consciousness. This might be considered a science fiction-like speculation since we have little basis for conjecture beyond the works of pure imagination. We know that mechanization displaces workers, for instance, and think it will continue, but what about conscious machines?

For Dreyfus, the human mind is too embodied and situational to be considered an encodable thing representable by rules and algorithms. Much like the trajectory of a species through an evolutionary landscape, the mind is, in some sense, an encoded reflection of the world in which it lives. Taken further, the evolutionary parallel becomes even more relevant in that it is embodied in a sensory and physical identity, a product of a social universe, and an outgrowth of some evolutionary ping pong through contingencies that led to greater intelligence and self-awareness.

Obsession with whatever cultivars, whatever traits and tendencies, lead to this riot of wordplay and software refinement is a fine example of how this moves away from the fears of Hawking and towards the impossibilities of Dreyfus. We might imagine that we can simulate our way to the kernel of instinct and emotion that makes such things possible. We might also claim that we can disconnect the product of the effort from these internal states and the qualia that defy easy description, since the books and the new technologies have only desultory correspondence to the process by which they are created. But I doubt it. It’s more likely that getting from great automatic speech recognition or image classification to the general AI that makes us fearful is a longer hike than we currently imagine.

The Ethics of Knowing

In the modern American political climate, I’m constantly finding myself at sea in trying to unravel the motivations and thought processes of the Republican Party. The best summation I can arrive at involves the obvious manipulation of the electorate—but that is not terrifically new—combined with a persistent avoidance of evidence and facts.

In my day job, I research a range of topics trying to get enough of a grasp on what we do and do not know such that I can form a plan that innovates from the known facts towards the unknown. Here are a few recent investigations:

  • What is the state of thinking about the origins of logic? Logical rules form into broad classes that range from the uncontroversial (modus tollens, propositional logic, predicate calculus) to the speculative (multivalued and fuzzy logic, or quantum logic, for instance). In most cases we make an assumption based on linguistic convention that they are true and then demonstrate their extension, despite the observation that they are tautological. Synthetic knowledge has no similar limitations but is assumed to be girded by the logical basics.
  • What were the early Christian heresies, how did they arise, and what was their influence? Marcion of Sinope is perhaps the most interesting one of these, in parallel with the Gnostics, asserting that the cruel tribal god of the Old Testament was distinct from the New Testament Father, and proclaiming perhaps (see various discussions) a docetic Jesus figure. The leading “mythicists” like Robert Price are invaluable in this analysis (ignore first 15 minutes of nonsense). The thin braid of early Christian history and the constant humanity that arises in morphing the faith before settling down after Nicaea (well, and then after Martin Luther) reminds us that abstractions and faith have a remarkable persistence in the face of cultural change.
  • How do mathematical machines take on so many forms while achieving the same abstract goals? Machine learning, as a reification of human-like learning processes, can imitate neural networks (or an extreme sketch and caricature of what we know about real neural systems), or can be just a parameter slicing machine like Support Vector Machines or ID3, or can be a Bayesian network or mixture model of parameters. We call them generative or non-generative, we categorize them by discrete or continuous decision surfaces, and we label them in a range of useful ways. But why should they all achieve similar outcomes with similar ranges of error? Indeed, Random Forests were the belles of the ball until Deep Learning took their tiara. (A small comparison sketch follows this list.)
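As a small illustration of that convergence, here is a sketch in R comparing two very different learning machines on the stock iris data (using the randomForest and e1071 packages; the split and seed are arbitrary choices of mine, not anything from the discussion above):

library(randomForest)  # install.packages(c("randomForest", "e1071")) if needed
library(e1071)

set.seed(42)
train_idx <- sample(nrow(iris), 100)          # simple train/test split
train <- iris[train_idx, ]
test  <- iris[-train_idx, ]

rf <- randomForest(Species ~ ., data = train) # ensemble of decision trees
sv <- svm(Species ~ ., data = train)          # kernel-based margin machine

mean(predict(rf, test) == test$Species)       # forest accuracy on held-out data
mean(predict(sv, test) == test$Species)       # SVM accuracy

Two mechanisms with almost nothing in common internally usually land within a few percentage points of each other on a problem like this, which is exactly the puzzle the bullet points at.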

In each case, I try to work my way, as carefully as possible, through the thicket of historical and intellectual concerns that provide point and counterpoint to the ideas. It feels ethically wrong to make a short, fast judgment about any such topics. I can’t imagine doing anything less with a topic as fraught as the US health care system. It’s complex, indeed, Mr. President.

So, I tracked down a foundational paper on this idea of ethics and epistemology. It dates to 1877 and provides a grounding for why and when we should believe in anything. William Clifford’s paper, The Ethics of Belief, tracks multiple lines of argumentation and the consequences of believing without clarity. Even tentative clarity comes with moral risk, as Clifford shows in his thought experiments.

In summary, though, there is no more important statement than Clifford’s final assertion that it is wrong to believe without sufficient evidence. It’s that simple. And it’s even more wrong to act on those beliefs.

Traitorous Reason, Facts, and Analysis

Obama’s post-election press conference was notable for its continued demonstration of adult discourse and values. Especially notable:

This office is bigger than any one person and that’s why ensuring a smooth transition is so important. It’s not something that the constitution explicitly requires but it is one of those norms that are vital to a functioning democracy, similar to norms of civility and tolerance and a commitment to reason and facts and analysis.

But ideology in American politics (and elsewhere) has the traitorous habit of undermining every one of those norms. It always begins with undermining the facts in pursuit of manipulation. Just before the election, the wizardly Aron Ra took to YouTube to review VP-elect Mike Pence’s bizarre grandstanding in Congress in 2002.

And just today, Trump lashed out at the cast of Hamilton for lecturing Mike Pence on his anti-LGBTQ stands, also related to ideology and belief, at the end of a show.

Astonishing as this seems, we live in an imperfect world being drawn very slowly, and in fits and starts, away from tribal and xenophobic tendencies. My wife received a copy of a letter from now-deceased family that contained an editorial from the Shreveport Journal in the 1960s that (with its embedded review of a The Worker editorial) simultaneously attacked segregationist violence and the rhetoric of Alabama governor George Wallace, claimed that communists were influencing John F. Kennedy and the civil rights movement, demanded the jailing of communists, and suggested the federal government should take over Alabama.


The accompanying letter was also concerned over the fate of children raised as Unitarians, amazingly enough, and how they could possibly be moral people. It then concluded with a recommendation to vote for Goldwater.

Is it any wonder that the accompanying cultural revolutions might lead to the tearing down of the institutions that were used to justify the deviation away from “reason and facts and analysis?”

But I must veer to the positive here: this brief blip is a passing retrenchment of these old tendencies that the Millennials and their children will look back on with fond amusement, the way I remember Ronald Reagan.

Quantum Field Is-Oughts

Sean Carroll’s Oxford lecture on Poetic Naturalism is worth watching. In many ways it just reiterates several common themes. First, it reinforces the is-ought barrier between values and observations about the natural world. It does so with particular depth, though, by identifying how coarse-grained theories at different levels of explanation can be equally compatible with quantum field theory. Second, and related, he shows how entropy is an emergent property of atomic theory and the interactions of quantum fields (that we think of as particles much of the time) and, importantly, that we can project the same notion of boundary conditions that result in entropy into the future, resulting in a kind of effective teleology. That is, there can be some boundary conditions for the evolution of large-scale particle systems that form into configurations we can label purposeful or purposeful-like. I still like the term “teleonomy” to describe this alternative notion, but the language largely doesn’t matter except as an educational and distinguishing tool against the semantic embeddings of old scholastic monks.

Finally, the poetry aspect resolves in value theories of the world. Many are compatible with descriptive theories, and our resolution of them is through opinion, reason, communications, and, yes, violence and war. There is no monopoly of policy theories, religious claims, or idealizations that hold sway. Instead we have interests and collective movements, and the above, all working together to define our moral frontiers.

 

The Goldilocks Complexity Zone

Since my time in the early 90s at Santa Fe Institute, I’ve been fascinated by the informational physics of complex systems. What are the requirements of an abstract system that is capable of complex behavior? How do our intuitions about complex behavior or form match up with mathematical approaches to describing complexity? For instance, we might consider a snowflake complex, but it is also regular in its structure, driven by an interaction between crystal growth and the surrounding air. The classic examples of coastlines and fractal self-symmetry also seem complex but are not capable of complex behavior.

So what is a good way of thinking about complexity? There is actually a good range of ideas about how to characterize complexity; Seth Lloyd rounds up many of them. The intuition that drives most of them is that complexity seems to be associated with distributions of relationships and objects that are somehow positioned between a single state and a uniformly random set of states. Complex things, be they living organisms or computers running algorithms, should exist in a Goldilocks zone when each part is examined and those parts are somehow summed up to a single measure.

We can easily construct a complexity measure that captures some of these intuitions. Let’s look at three strings of characters:

x = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

y = menlqphsfyjubaoitwzrvcgxdkbwohqyxplerz

z = the fox met the hare and the fox saw the hare

Now we would likely all agree that y and z are more complex than x, and I suspect most would agree that y looks like gibberish compared with z. Of course, y could be a sequence of weirdly coded measurements or something, or encrypted such that the message appears random. Let’s ignore those possibilities for our initial attempt at defining a complexity measure. We can see right away that an approach using basic information theory doesn’t help much. Algorithmic informational complexity will be highest for y, as will entropy:

H(s) = -\sum_i p_i \log p_i

for each sequence s composed out of an alphabet with character counts that give the probabilities p_i (natural logarithm, to match the R package below). So we get: H(x) = 0, H(y) = 3.199809, and H(z) = 2.3281. Here’s some sample R code using the “entropy” package if you want to calculate yourself:

> z = "the fox met the hare and the fox saw the hare"
> zt = table(strsplit(z, '')[[1]])
> entropy(zt, method="ML")

Note that the alphabet of each string is slightly different, but the missing characters between them don’t matter since their probabilities are 0.

We can just arbitrarily scale entropy by the maximum entropy possible for the same length string like this:

H_{norm}(s) = \frac{H(s)}{\log n}

for a string s of length n. This is somewhat like channel efficiency in communications theory, I think. And then just turn this into a parabolically-scaled measure that centers at 0.5:

C(s) = k \, H_{norm}(s) \left(1 - H_{norm}(s)\right)

where k is an arbitrary non-zero scaling parameter.

But this calculation is only considering the individual character frequencies, not the composition of the characters into groupings. So we can consider pairs of characters in this same calculation, or triples, etc. And also, just looking at these n-gram sequences doesn’t capture potentially longer range repetitious structures. So we can gradually ladle on grammars as the counting mechanism. Now, if our measure of complexity is really going to capture what we intuitively consider to be complex, all of these different levels of connections within the string or other organized piece of information must be present.
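Here is a minimal sketch in R of this measure (the function name, the n-gram windowing, and the choice k = 4, which just normalizes the peak to 1, are my own illustrative choices):

# Parabolic complexity measure: normalized entropy pushed through k * h * (1 - h).
library(entropy)

scaled_complexity <- function(s, n_gram = 1, k = 4) {
  chars <- strsplit(s, '')[[1]]
  if (n_gram > 1) {
    # slide a window over the string to build n-gram tokens
    idx <- seq_len(length(chars) - n_gram + 1)
    chars <- sapply(idx, function(i) paste(chars[i:(i + n_gram - 1)], collapse = ''))
  }
  counts <- table(chars)
  h <- entropy(counts, method = "ML")       # Shannon entropy in nats
  h_max <- log(length(chars))               # maximum entropy for this many tokens
  h_norm <- if (h_max > 0) h / h_max else 0 # normalized to [0, 1]
  k * h_norm * (1 - h_norm)                 # parabola peaking at h_norm = 0.5
}

x <- "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
y <- "menlqphsfyjubaoitwzrvcgxdkbwohqyxplerz"
z <- "the fox met the hare and the fox saw the hare"

sapply(list(x = x, y = y, z = z), scaled_complexity)             # single characters
sapply(list(x = x, y = y, z = z), scaled_complexity, n_gram = 2) # character pairs

On single characters this gives roughly 0 for x, 0.42 for y, and 0.95 for z, which matches the intuition that the sentence is the most complex of the three.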

This general program is present in every one of Seth Lloyd’s complexity metrics in various ways and even comes into play in discussions of consciousness, though many use mutual information rather than entropy per se. Here’s Max Tegmark using a variation on Giulio Tononi’s Phi concept from Integrated Information Theory to demonstrate that integration is a key component of consciousness and how that might be calculated for general physical systems.

Bayesianism and Properly Basic Belief

Xu and Tenenbaum, in Word Learning as Bayesian Inference (Psychological Review, 2007), develop a very simple Bayesian model of how children (and even adults) build semantic associations based on accumulated evidence. In short, they find contrastive elimination approaches as well as connectionist methods unable to explain the patterns that are observed. Specifically, the most salient problem with these other methods is that they lack the rapid transition that is seen when three exemplars are presented for a class of objects associated with a word versus one exemplar. Adults and kids (the former even more so) just get word meanings faster than those other models can easily show. Moreover, a space of contending hypotheses that are weighted according to their Bayesian statistics provides an escape from the all-or-nothing of hypothesis elimination and some of the “soft” commitment properties that connectionist models provide.

The mathematical trick for the rapid transition is rather interesting. They formulate a “size principle” that weights the likelihood of a given hypothesis (this object is most similar to a “feb,” for instance, rather than the many other object sets that are available) according to a scaling that is exponential in the number of exposures. Hence the rapid transition:

Hypotheses with smaller extensions assign greater probability than do larger hypotheses to the same data, and they assign exponentially greater probability as the number of consistent examples increases.
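A toy version of that scaling (my own illustration of the size principle, not Xu and Tenenbaum’s actual model or numbers) shows the transition directly. Take two nested hypotheses for a new word, a small class of 10 objects and a superordinate class of 100 objects, with the likelihood of n consistent examples under a hypothesis of extension size |h| equal to (1/|h|)^n:

# Size principle: smaller hypotheses gain exponentially with each consistent example.
size_likelihood <- function(h_size, n) (1 / h_size)^n

posterior_small <- function(n, small = 10, large = 100, prior_small = 0.5) {
  num <- prior_small * size_likelihood(small, n)
  den <- num + (1 - prior_small) * size_likelihood(large, n)
  num / den
}

sapply(1:3, posterior_small)
# about 0.91 after one example, 0.99 after two, 0.999 after three

One consistent example leaves real uncertainty; three all but settle it, which is the rapid transition the model is built to capture.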

It should be noted that they don’t claim that the psychological or brain machinery implements exactly this algorithm. As is usual in these matters, it is instead likely that whatever machinery is involved, it simply has at least these properties. It may very well be that connectionist architectures can do the same but that existing approaches to connectionism simply don’t do it quite the right way. So other methods may need to be tweaked to get closer to the observed learning of people in these word tasks.

So what can this tell us about epistemology and belief? Classical foundationalism might be formulated as the claim that something is a “basic” or “justified” belief if it is self-evident or evident to our senses, and that other beliefs may therefore be grounded in those basic beliefs. A more modern reformulation might substitute “incorrigible” for “justified,” with the layered meaning of incorrigibility built on the necessity that, given the proposition, it is in fact true.

Here’s Alvin Plantinga laying out a case for why justification and incorrigibility have a range of problems, problems serious enough for Plantinga that he suspects god belief could just as easily be a basic belief, allowing for the kinds of presuppositional Natural Theology (think: I look around me and the hand of God is obvious) that are at the heart of some of the loftier claims concerning the viability or non-irrationality of god belief. It even provides a kind of coherent interpretative framework for historical interpretation.

Plantinga positions the problem of properly basic belief then as an inductive problem:

And hence the proper way to arrive at such a criterion is, broadly speaking, inductive. We must assemble examples of beliefs and conditions such that the former are obviously properly basic in the latter, and examples of beliefs and conditions such that the former are obviously not properly basic in the latter. We must then frame hypotheses as to the necessary and sufficient conditions of proper basicality and test these hypotheses by reference to those examples. Under the right conditions, for example, it is clearly rational to believe that you see a human person before you: a being who has thoughts and feelings, who knows and believes things, who makes decisions and acts. It is clear, furthermore, that you are under no obligation to reason to this belief from others you hold; under those conditions that belief is properly basic for you.

He goes on to conclude that this opens up the god hypothesis as providing this kind of coherence mechanism:

By way of conclusion then: being self-evident, or incorrigible, or evident to the senses is not a necessary condition of proper basicality. Furthermore, one who holds that belief in God is properly basic is not thereby committed to the idea that belief in God is groundless or gratuitous or without justifying circumstances. And even if he lacks a general criterion of proper basicality, he is not obliged to suppose that just any or nearly any belief—belief in the Great Pumpkin, for example—is properly basic. Like everyone should, he begins with examples; and he may take belief in the Great Pumpkin as a paradigm of irrational basic belief.

So let’s assume that the word learning mechanism based on this Bayesian scaling is representative of our human inductive capacities. Now this may or may not be broadly true. It is possible that it is true of words but not of other domains of perceptual phenomena. Nevertheless, given this scaling property, the relative inductive truth of a given proposition (a meaning hypothesis) is strictly Bayesian. Moreover, this doesn’t succumb to the problems of verificationism because it only claims relative truth. Proper basicality then becomes a matter of scaled, contending explanatory hypotheses, and the god hypothesis has to compete with other explanations like evolutionary theory (for human origins), empirical evidence of materialism (for explanations contra supernatural ones), perceptual mistakes (ditto), myth scholarship, textual analysis, the influence of parental belief exposure, the psychology of wish fulfillment, the pragmatic triumph of science, etc.

And so we can stick to a relative scaling of hypotheses as to what constitutes basicality or justified true belief. That’s fine. We can continue to argue the previous points as to whether they support or override one hypothesis or another. But the question Plantinga raises as to what ethics to apply in making those decisions is important. He distinguishes different reasons why one might want to believe more true things than others (broadly), or maybe some things as properly basic rather than others, or, more correctly, why philosophers feel the need to pin god-belief as irrational. But we succumb to a kind of unsatisfying relativism insofar as the space of these hypotheses is not, in fact, weighted in a manner that most reflects the known facts. The relativism gets deeper when the weighting is washed out by wish fulfillment, pragmatism, aspirations, and personal insights that lack falsifiability. That is at least distasteful, maybe aretaically so (in Plantinga’s framework), but probably more teleologically so in that it influences other decision-making and the conflicts and real harms societies may cause.

Rationality and the Intelligibility of Philosophy

There is a pervasive meme in the physics community that holds as follows: there are many physical phenomena that don’t correspond in any easy way to our ordinary experiences of life on earth. We have wave-particle duality wherein things behave like waves sometimes and particles other times. We have simultaneous entanglement of physically distant things. We have quantum indeterminacy and the emergence of stuff out of nothing. The tiny world looks like some kind of strange hologram with bits connected together by virtual strings. We have a universe that began out of nothing and that begat time itself. It is, in this framework, worthwhile to recognize that our everyday experiences are not necessarily useful (and are often confounding) when trying to understand the deep new worlds of quantum and relativistic physics.

And so it is worthwhile to ask whether many of the “rational” queries that have been made down through time have any intelligible meaning given our modern understanding of the cosmos. For instance, if we were to state the premise “all things are either contingent or necessary” that underlies a poor form of the Kalam Cosmological Argument, we can immediately question the premise itself. And a failed premise leads to a failed syllogism. Maybe the entanglement of different things is piece-part of the entanglement of large-scale space-time, and the insights we have so far are merely shadows of the real processes acting behind the scenes. Who knows what happened before the Big Bang?

In other words, do the manipulations of logic and the assumptions built into the terms lead us to empty and destructive conclusions? There is no reason not to suspect that and therefore the bits of rationality that don’t derive from empirical results are immediately suspect. This seems to press for a more coherence-driven view of epistemology, one which accords with known knowledge but adjusts automatically as semantics change.

There is an interesting mental exercise concerning why we should be able to even undertake these empirical discoveries and all their seemingly non-sensible results that are nevertheless fashioned into a cohesive picture of the physical world (and increasingly the mental one). Are we not making an assumption that our brains are capable of rational thinking given our empirical understanding of our evolved pasts? Plantinga’s Evolutionary Argument Against Naturalism tries, for instance, to upend this perspective by claiming it is highly unlikely that a random process of evolution could produce reliable mental faculties because it would be focused too much on optimization for survival. This makes no sense empirically, however, since we have good evidence for evolution and we have good evidence for reliable mental faculties when subjected to the crucible of group examination and scientific process. We might be deluding ourselves, it’s true, but there are too many artifacts of scientific understanding and progress to take that terribly seriously.

So we get back to coherence and watchful empiricism. No necessity for naturalism as an ideology. It’s just the only thing that currently makes sense.