Category: epistemology

Rationality and the Intelligibility of Philosophy

There is a pervasive meme in the physics community that holds as follows: there are many physical phenomena that don’t correspond in any easy way to our ordinary experiences of life on earth. We have wave-particle duality, wherein things behave like waves sometimes and particles at other times. We have entanglement between physically distant things. We have quantum indeterminacy and the emergence of stuff out of nothing. The tiny world looks like some kind of strange hologram with bits connected together by virtual strings. We have a universe that began out of nothing and that begat time itself. It is, in this framework, worthwhile to recognize that our everyday experiences are not necessarily useful (and are often confounding) when trying to understand the deep new worlds of quantum and relativistic physics.

And so it is worthwhile to ask whether many of the “rational” queries made down through time have any intelligible meaning given our modern understanding of the cosmos. For instance, if we state the premise “all things are either contingent or necessary” that underlies a poor form of the Kalam Cosmological Argument, we can immediately question the premise itself. And a failed premise leads to a failed syllogism. Maybe the entanglement of different things is part and parcel of the entanglement of large-scale spacetime, and the insights we have so far are merely shadows of the real processes acting behind the scenes? Who knows what happened before the Big Bang?

In other words, do the manipulations of logic and the assumptions built into its terms lead us to empty and destructive conclusions? There is no reason not to suspect that, and therefore the bits of rationality that don’t derive from empirical results are immediately suspect. This seems to press for a more coherence-driven view of epistemology, one that accords with known knowledge but adjusts automatically as semantics change.

There is an interesting mental exercise concerning why we should be able to undertake these empirical discoveries at all, with their seemingly non-sensible results that are nevertheless fashioned into a cohesive picture of the physical world (and increasingly the mental one). Are we not assuming that our brains are capable of rational thinking, given our empirical understanding of our evolved pasts? Plantinga’s Evolutionary Argument Against Naturalism tries, for instance, to upend this perspective by claiming it is highly unlikely that a random process of evolution could produce reliable mental faculties, because evolution optimizes for survival rather than for truth. This makes no sense empirically, however, since we have good evidence for evolution and good evidence for reliable mental faculties when they are subjected to the crucible of group examination and scientific process. We might be deluding ourselves, it’s true, but there are too many artifacts of scientific understanding and progress to take that terribly seriously.

So we get back to coherence and watchful empiricism. No necessity for naturalism as an ideology. It’s just the only thing that currently makes sense.

A Critique of Pure Randomness

The notion of randomness brings about many interesting considerations. For statisticians, randomness is a series of events whose chances are governed by a distribution function. In everyday parlance, “random” often just means equally likely, while an even more common usage is based on both how unlikely and how unmotivated an event might be (“That was soooo random!”). In physics, only certain phenomena can be said to be truly random, such as the moment at which a given nucleus decays into other nuclei. The exact position of a quantum thingy is equally random when its momentum is nailed down, and vice versa. Vacuums have a certain chance of spontaneously creating matter, too, and that chance appears to be perfectly random. In algorithmic information theory, a random sequence of bits is one that can’t be represented by a smaller descriptive algorithm: it is incompressible. Strangely enough, we simulate random number generators using a compact algorithm whose complicated series of steps traces an almost impossible-to-follow trajectory through a deterministic space of possibilities; it is acceptable for it to be “random enough” that the algorithm’s parameters can’t easily be reverse engineered and the next “random” number guessed.
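That last point can be made concrete with the simplest workhorse of simulated randomness, a linear congruential generator. This is a minimal sketch of the idea, using the classic Numerical Recipes constants; the seed value is arbitrary:

```python
# A linear congruential generator: a tiny, fully deterministic
# algorithm whose output stream nevertheless looks unpredictable
# unless you know its parameters. The constants are the classic
# "Numerical Recipes" LCG values; the seed is arbitrary.

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n outputs of x_{k+1} = (a * x_k + c) mod m."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# The same seed always reproduces the same "random" stream, which is
# exactly why such sequences are compressible in the algorithmic
# information theory sense: the generator itself is a short
# description of an arbitrarily long output.
print(lcg(seed=42, n=3))
```

Cryptographic generators differ only in making the reverse engineering computationally infeasible, not in being any less deterministic.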

One area where we often speak of randomness is in biological evolution. Random mutations lead to change and to deleterious effects like dead-end evolutionary experiments. Or so we hypothesized. The exact mechanisms of the transmission of inheritance and of mutation were unknown to Darwin, but soon, in the evolutionary synthesis, notions like random genetic drift and the role of ionizing radiation and other external factors became exciting candidates for explaining the variation required for evolution to function. Amusingly, arguing largely from a stance that might be called a fallacy of incredulity, creationists have often seized on a perceived logical disconnect: the appearance of purpose both in our lives and in the mechanisms of biological existence sits uneasily, they claim, with the assumption of underlying randomness and non-directedness, and they take this tension as evidence of the poverty of arguments from randomness.

I give you Stephen Talbott in The New Atlantis, Evolution and the Illusion of Randomness, wherein he unpacks the mounting evidence and the philosophical implications of jumping genes, self-modifying genetic regulatory frameworks, and transposons, and argues that randomness in the strong sense of cosmic-ray trajectories bouncing around in cellular nuclei is likely simply wrong. Randomness is at best a minor contribution to evolutionary processes. We are not just purposeful at the social, personal, systemic, cellular, and sub-cellular levels; we are also purposeful through time in the transmission of genetic information and the modification thereof.

This opens a wildly new avenue for considering certain normative claims that anti-evolutionists bring to the table, such as the claim that a mechanistic universe devoid of central leadership is meaningless and allows any behavior to be equally acceptable. This hoary chestnut is ripe to the point of rot, of course, but the response to it should be much more vibrant than the usual retorts. The evolution of social and moral outcomes can be every bit as inevitable as if they were designed, because co-existence and greater group success (yes, I wrote it) is a potential well on the fitness landscape. And, equally, we need to stop being so reluctant to claim that there is a purposefulness to life, a teleology, and simply make sure that we accord the proper mechanistic feel to that teleology. Fine, call it teleonomy, or even an urge to existence. A little poetry might actually help here.

The Rise and Triumph of the Bayesian Toolshed

In Asimov’s Foundation, psychohistory is the mathematical treatment of history, sociology, and psychology to predict the future of human populations. Asimov was inspired by Gibbon’s Decline and Fall of the Roman Empire, which postulated that Roman society was weakened by Christianity’s focus on the afterlife and came to lack the pagan attachment to Rome as an ideal that needed defending. Psychohistory detects the seeds of ideas and social movements that predict the end of the galactic empire, creating foundations to preserve human knowledge against a coming Dark Age.

Applying statistics and mathematical analysis to human choices is a core feature of economics, but Richard Carrier’s massive tome, On the Historicity of Jesus: Why We Might Have Reason for Doubt, may be one of the first comprehensive applications to historical analysis (following his other related work). Amusingly, Carrier’s thesis dovetails with Gibbon’s own suggestion, though there is a certain irony to a civilization dying because of a fictional being.

Carrier’s methods use Bayesian analysis to approach a complex historical problem that has a remarkably impoverished collection of source material. First-century A.D. (C.E. if you like; I agree with Carrier that any baggage about the convention is irrelevant) sources are simply non-existent or sufficiently contradictory that the background knowledge of paradoxography (tall tales), rampant messianism, and the general political happenings of the time leads to a substantial likelihood that Jesus was made up. Carrier constructs the argument around equivalence classes of prior events that then reduce or strengthen the evidential materials (a posteriori). And he does this without ablating the richness of the background information. Indeed, his presentation and analysis of works like Inanna’s Descent into the Underworld and its relationship to the Ascension of Isaiah are both didactic and beautiful in capturing the way ancient minds seem to have worked.
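The machinery itself is just Bayes’ theorem applied to a binary hypothesis. The sketch below uses invented placeholder numbers of my own, not Carrier’s published estimates: a prior drawn from a reference class of comparable savior figures, updated by how well the surviving evidence fits historicity versus myth:

```python
# Bayes' theorem for a binary hypothesis (historical vs. mythical).
# All probabilities here are illustrative placeholders, not
# Carrier's actual estimates.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

prior = 0.33                          # hypothetical reference-class prior
p = posterior(prior,
              p_evidence_if_true=0.4,   # sources, if Jesus was historical
              p_evidence_if_false=0.8)  # sources, if Jesus was mythical
print(round(p, 3))  # → 0.198: evidence fitting myth better drags the posterior down
```

The structure, not the numbers, is the point: equivalence classes fix the prior, and each piece of evidence enters as a likelihood ratio.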

We’ve come a long way from Gibbon’s era: we now have mathematical tools directly influencing historical arguments. The notion of inference and probability has always played a role in history, but perhaps never so directly. All around us we see the sharpening of our argumentation, whether in policymaking, in history, or in law. Even the arts and humanities are increasingly impacted by scientific and technological change and the metaphors that emerge from it. Perhaps not a Cathedral of Computation, but modestly, at least, a new toolshed.

Language Games

On The Thinking Atheist, C.J. Werleman promotes the idea that atheists can’t be Republicans based on his new book. Why? Well, for C.J. it’s because the current Republican platform is not grounded in any kind of factual reality. Supply-side economics, Libertarianism, economic stimuli vs. inflation, Iraqi WMDs, Laffer curves, climate change denial—all are grease for the wheels of a fantastical alternative reality where macho small businessmen lift all boats with their steely gaze, the earth is forever resilient to our plunder, and simple truths trump obscurantist science. Watch out for the reality-based community!

Is politics essentially religion in that it depends on ideology not grounded in reality, spearheaded by ideologues who serve as priests for building policy frameworks?

Likely. But we don’t really seem to base our daily interactions on rationality either. 538 Science tells us that it has taken decades to arrive at the conclusion that vitamin supplements are probably of little use to those of us lucky enough to live in the developed world. Before that we latched onto indirect signaling about vitamin C, E, D, B12, and others to decide how to proceed. The thinking typically took on familiar patterns: someone heard or read that vitamin X is good for us/I’m skeptical/why not?/maybe there are negative side-effects/it’s expensive anyway/forget it. The language games are at all levels in promoting, doubting, processing, and reinforcing the microclaims for each option. We embrace signals about differences and nuances but it often takes many months and collections of those signals in order to make up our minds. And then we change them again.

Among the well educated, I’ve variously heard the wildest claims about the effectiveness of chiropractors, pseudoscientific remedies, the role of immunizations in autism (not due to preservatives in this instance; due to immune responses themselves), and how karma works in software development practice.

And what about C.J.’s central claims? Well, I haven’t read the book and don’t plan to, so I can only build on what he said during the interview. If we require evidence for our political beliefs as much as we require it for our religious perspectives, we probably need a scheme for ranking the likelihood of different beliefs and policy commitments. For instance, C.J. follows the continued I-told-you-so approach of Paul Krugman in his comments on fiscal stimulus: not enough was done, and there is no evidence of inflationary pressure. Well and good that fiscal stimulus as a macro-economic stabilizer has been established in the most recent economic past. The non-appearance of inflation was somewhat surprising, actually, but is now the retrospective majority opinion of economists concerned with such matters. It was a cause for concern at the time, however, as were the problematic bailouts that softened the consequences of risky behavior (if not rewarded it) in pursuit of broader stability.

The language game theory of politics and religion accounts for most of the uncertainty and chaos that drives thinking about politics and economics. We learn the rules (social and pragmatic impact as well as grammatical rules) and the game pieces (words, phrases, and concepts) early on. They don’t have firm referential extension, of course. In fact, they never really do. But they cohere more and more over time unless radically disrupted, and even then they try to recohere against the tangle of implications as the dust settles. This is Wittgensteinian and anti-Positivist, but it is also somewhat value-free in that it gives no sense of why one language game should be preferable to another.

For C.J., there is a clear demarcation in which facts trump fantasy, and our lives and society would be better served by factually derived policies and by perspectives that let the facts deflate the claims of most religions. But it is far less clear to me how to apply some rationalist overlay to the problem of politics in a way that would yield consistent and meaningful improvements in our lives and society, save the obvious one of improving general education and thinking.

I recently irritated and frustrated my teen son by questioning him about some claims he was making about bad teachers in the local school system. The irritation came as I probed into various rumors about a teacher who had been fired because she was, according to him, sexist and graded boys poorly. It turned out he only had a handful of rumors to support everything from the teacher’s firing to the sexism. It looked more likely that one of his friends made it up, in conjunction with other boys who were doing poorly in the teacher’s class. They created a meme in a language game and it propagated. My son was defensive about the possibility of the whole story, and I admitted it was possible but sufficiently unlikely as to not warrant concern. The attachment of levels of likely veracity and valuation was ultimately the only difference between us.

I apologized for making him mad but didn’t apologize for my skepticism and, later, there were signals that his network of beliefs had been moved a bit, the vile evil sexist teacher drifting out of focus among the other shades of consideration.

And that is how the language game is played.

Humbly Evolving in a Non-Simulated Universe

The New York Times seems to be catching up to me, first with an interview of Alvin Plantinga by Gary Gutting in The Stone on February 9th, and then with notes on Bostrom’s Simulation Hypothesis in the Sunday Times.

I didn’t see anything new in the Plantinga interview, but I reviewed my previous argument that adaptive fidelity combined with adaptive plasticity must raise the probability of rationality at a rate much greater than the contributions that would be “deceptive” or even mildly cognitively or perceptually biased. Worth reading is Branden Fitelson and Elliott Sober’s very detailed analysis of Plantinga’s Evolutionary Argument Against Naturalism (EAAN), here. Most interesting are the beginning paragraphs of Section 3, which I reproduce here because they make a critical addition that should surprise no one but often does:

Although Plantinga’s arguments don’t work, he has raised a question that needs to be answered by people who believe evolutionary theory and who also believe that this theory says that our cognitive abilities are in various ways imperfect. Evolutionary theory does say that a device that is reliable in the environment in which it evolved may be highly unreliable when used in a novel environment. It is perfectly possible that our mental machinery should work well on simple perceptual tasks, but be much less reliable when applied to theoretical matters. We hasten to add that this is possible, not inevitable. It may be that the cognitive procedures that work well in one domain also work well in another; Modus Ponens may be useful for avoiding tigers and for doing quantum physics.

Anyhow, if evolutionary theory does say that our ability to theorize about the world is apt to be rather unreliable, how are evolutionists to apply this point to their own theoretical beliefs, including their belief in evolution? One lesson that should be extracted is a certain humility—an admission of fallibility. This will not be news to evolutionists who have absorbed the fact that science in general is a fallible enterprise. Evolutionary theory just provides an important part of the explanation of why our reasoning about theoretical matters is fallible.

Far from showing that evolutionary theory is self-defeating, this consideration should lead those who believe the theory to admit that the best they can do in theorizing is to do the best they can. We are stuck with the cognitive equipment that we have. We should try to be as scrupulous and circumspect about how we use this equipment as we can. When we claim that evolutionary theory is a very well confirmed theory, we are judging this theory by using the fallible cognitive resources we have at our disposal. We can do no other.

And such humility helps to dismiss arguments about the arrogance of science and scientism.

On the topic of Bostrom’s Simulation Hypothesis, I remain skeptical that we live in a simulated universe.

Predicting Black Swans

Nassim Taleb’s second edition of The Black Swan argues, not unpersuasively, that rare, cataclysmic events dominate ordinary statistics. Indeed, he notes that almost all wealth accumulation is based on long-tail distributions in which a small number of individuals reap unexpected rewards. The downsides are equally challenging: he notes that casinos lose money not in gambling, where the statistics are governed by Gaussians (the house always wins), but instead when tigers attack, when workers sue, and when other external factors intervene.

Black Swan Theory adds an interesting challenge to modern inference theories like Algorithmic Information Theory (AIT) that anticipate a predictable universe. Even variant coding approaches like Minimum Description Length (MDL) theory modify the anticipatory model based on relatively smooth error functions rather than high-kurtosis distributions of variable change. And for the most part, for the regular events of life and our sensoriums, that is adequate. It is only when we start to look at rare existential threats that we begin to worry about Black Swans and inference.
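The kurtosis point can be illustrated with a toy comparison (my own construction, not drawn from Taleb): in a thin-tailed Gaussian world the largest 1% of events contributes a sliver of the total, while in a heavy-tailed world it can dominate:

```python
# Toy illustration of heavy tails: what fraction of the total is
# contributed by the largest 1% of draws? Gaussian vs. Pareto.
import random

random.seed(0)

def share_of_top_1pct(samples):
    """Fraction of the total contributed by the largest 1% of |draws|."""
    s = sorted(abs(x) for x in samples)
    k = max(1, len(s) // 100)
    return sum(s[-k:]) / sum(s)

n = 100_000
gaussian = [random.gauss(0, 1) for _ in range(n)]
# Pareto with alpha = 1.5: finite mean but infinite variance.
heavy = [random.paretovariate(1.5) for _ in range(n)]

print(share_of_top_1pct(gaussian))  # a few percent of the total
print(share_of_top_1pct(heavy))    # a large chunk of the total
```

A learner whose error model assumes the first regime will be systematically blindsided in the second.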

How might we modify the typical formulations of AIT and the trade-offs between model complexity and data to accommodate the exceedingly rare? Several approaches are possible. First, if we are combining a predictive model with a resource accumulation criterion, we can simply pad out the model’s memory by reducing kurtosis risk through additional resource accumulation; any downside is mitigated by storing nuts for a rainy day. That is a good strategy for moderately rare events like weather change, droughts, and whatnot. But what about even rarer events like little ice ages and dinosaur-extinction-level meteorite hits? An alternative strategy is to maintain sufficient diversity in the face of radical unknowns that coping becomes a species-level achievement.

Any underlying objective function for these radical events has to sacrifice fidelity to temporally local conditions in order to cope with the outliers. Even a simple model based on populations of inductive machines with variant parameterizations wins when the population converges on the best outcomes. Focusing exclusively on the rare downside loses only in the medium term. Yet Taleb claims the rare dominates the smoothly predictable. Under the alternative strategy, that suggests an addition to AIT-based decision theory: a longer-term, multi-horizon valuation that promotes diversity against short-term gains.
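Here is a toy model of that multi-horizon point, entirely my own invention and calibrated to nothing: specialists compound faster in normal times but are ruined by rare shocks, while a mixed population sacrifices short-term growth to survive them:

```python
# Toy "diversity as insurance" model. Agents either specialize
# (higher payoff in normal times, ruin in a shock) or hedge
# (slower growth, shock-proof). All parameters are invented
# for illustration.
import random

random.seed(1)

def run(population, rounds, shock_prob=0.02):
    """population: list of 'specialist'/'hedger'. Returns total wealth."""
    wealth = {i: 1.0 for i in range(len(population))}
    for _ in range(rounds):
        shock = random.random() < shock_prob
        for i, kind in enumerate(population):
            if kind == "specialist":
                wealth[i] = 0.0 if shock else wealth[i] * 1.05
            else:  # hedger: slower growth, survives shocks
                wealth[i] *= 1.02
    return sum(wealth.values())

specialists = ["specialist"] * 100
mixed = ["specialist"] * 50 + ["hedger"] * 50
# Over 500 rounds a shock is near-certain, wiping out the pure
# specialist population while the mixed one retains its hedgers.
print(run(specialists, 500), run(mixed, 500))
```

The design choice worth noting: the mixed population wins not by predicting the shock but by carrying variants that are indifferent to it, which is the species-level version of a multi-horizon objective.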

Contingency and Irreducibility

Thomas Nagel returns to defend his doubt concerning the completeness (if not the efficacy) of materialism in the explanation of mental phenomena in the New York Times. He quickly lays out the possibilities:

  1. Consciousness is an easy product of neurophysiological processes
  2. Consciousness is an illusion
  3. Consciousness is a fluke side-effect of other processes
  4. Consciousness is a divine property supervened on the physical world

Nagel arrives at a conclusion that all four are incorrect and that a naturalistic explanation is possible that isn’t “merely” (1), but that is at least (1), yet something more. I previously commented on the argument, here, but the refinement of the specifications requires a more targeted response.

Let’s call Nagel’s new perspective Theory 1+ for simplicity. What form might 1+ take on? For Nagel, the notion seems to be a combination of Chalmers-style qualia combined with a deep appreciation for the contingencies that factor into the personal evolution of individual consciousness. The latter is certainly redundant in that individuality must be absolutely tied to personal experiences and narratives.

We might be able to get some traction on this concept by looking to biological evolution, though “ontogeny recapitulates phylogeny” is about as close as we can get to the topic: any kind of evolutionary psychology must look for patterns that reinforce the interpretation of basic aspects of cognitive evolution (sex, reproduction, etc.) rather than explore the more numinous aspects of conscious development. So we might instead look for parallel theories that focus on the uniqueness of outcomes and that reify temporal evolution without reference to controlling biology, and we arrive at ideas like uncomputability as a backstop. More specifically, we can explore ideas like computational irreducibility to support the development of Nagel’s new theory: insofar as the environment lapses towards weak predictability, a consciousness that self-observes, regulates, and builds many complex models and metamodels is superior to one that does not.
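Computational irreducibility has a famously compact poster child in Wolfram’s elementary cellular automata. The sketch below runs Rule 110, offered here purely as an analogy, not as a claim about consciousness; as far as anyone knows, there is no shortcut to the state at step n except computing all n steps:

```python
# Rule 110, a one-dimensional cellular automaton that is believed
# to be computationally irreducible: predicting step n generally
# requires simulating every intermediate step.

RULE = 110

def step(cells, rule=RULE):
    """One update of an elementary cellular automaton on a ring."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # single seed cell
for _ in range(16):
    cells = step(cells)
print("".join("#" if c else "." for c in cells))
```

The interesting point for Nagel’s project is that even a fully mechanistic, fully specified system can be unpredictable in practice from within.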

I think we already knew that, though. Perhaps Nagel has been too much the philosopher and too little involved in the sciences that surround and animate modern theories of learning and adaptation to see the movement towards the exits?

Towards an Epistemology of Uncertainty (the “I Don’t Know” club)

Today there was an acute overlay of reinforcing ideas when I encountered Sylvia McLain’s piece in Occam’s Corner on The Guardian, drawing out Niall Ferguson for deriving Keynesianism from Keynes’ gayness. And just when I was digesting Lee Smolin’s new book, Time Reborn: From the Crisis in Physics to the Future of the Universe.

The intersection was a tutorial in the limits of expansive scientism and in conclusions that lead to unexpected outcomes. Down that path we get to euthanasia and forced sterilization, or, in Ferguson’s case, just a perception of senility. The fix to this kind of programme is fairly simple: doubt. I doubt that there is any coherent model that connects sexual orientation to economic theory. I doubt that selective breeding and euthanasia can do anything more than lead to inbreeding depression. Or, for Smolin, I doubt that the scientific conclusions we have reached so far are the end of the road.

That wasn’t too hard, was it?

The I Don’t Know club is pretty easy to join. All one needs is intellectual honesty and earnestness.

A Paradigm of Guessing

The most interesting thing I’ve read this week comes from Jürgen Schmidhuber’s paper, Algorithmic Theories of Everything, which should be provocative enough to pique the most jaded of interests. And the quote is from well into the paper:

The first number is 2, the second is 4, the third is 6, the fourth is 8. What is the fifth? The correct answer is “250,” because the nth number is n^5 - 5n^4 - 15n^3 + 125n^2 - 224n + 120. In certain IQ tests, however, the answer “250” will not yield maximal score, because it does not seem to be the “simplest” answer consistent with the data (compare [73]). And physicists and others favor “simple” explanations of observations.
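The quoted polynomial is easy to verify directly:

```python
# Check that Schmidhuber's quintic really does pass through
# (1, 2), (2, 4), (3, 6), (4, 8) while giving 250 at n = 5.

def f(n):
    return n**5 - 5 * n**4 - 15 * n**3 + 125 * n**2 - 224 * n + 120

print([f(n) for n in range(1, 6)])  # → [2, 4, 6, 8, 250]
```

Infinitely many such curves fit any finite data set; the IQ test smuggles in a simplicity prior, which is exactly Schmidhuber’s point.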

And this is the beginning and the end of logical positivism. How can we assign truth to inductive judgments without crossing from fact to value, and what should that value system be?

The Churches of Evil

The New York Times continues to mine the dark territory between religious belief and atheism in a series of articles in the opinion section, the most recent being Gary Gutting’s thoughtful meditation on agnosticism, ways of knowing, and the contributions of religion to individual lives and society. In response, Penn Jillette and others discuss atheism as a religion-like venture.

We can dissect Gutting’s argument while still being generous to his overall thrust. It is certainly true that, aside from the specific knowledge claims of religious people, there are traditions of practice that result in positive outcomes for religious folk. But when we drill into the knowledge dimension, Gutting props up Alvin Plantinga and Richard Swinburne as representing “the role of evidence and argument” in advanced religious argument. He might have done better to restrict the statement to “argument” in this case, because both philosophers focus primarily on argument in their philosophical works. Evidence remains elusively private in the eyes of the believer.

Interestingly, many of the arguments of both are simply arguments against a counter-assumption that anticipates a secular universe. For instance, Plantinga shows that the Logical Problem of Evil is not decisive, concluding that evil (neglect “natural evil” for the moment) is not logically incompatible with omnibenevolence, omnipotence, and omniscience. But, and here we get back to Gutting, this does nothing to persuade us that the rapacious cruelty of Yahweh, much less the moral evil expressed in the new concept of Hell in the New Testament, is anything more than logically possible. The human dimension and the appropriate moral outrage are unabated, and we loop back to Gutting’s generosity towards the religious: shouldn’t we extend equal generosity to the scriptural problem of evil as expressed in everything from the Hebrew Bible through to the Book of Mormon? That is, after all, where everyday believers pick the cherries of their arguments.

Of course, such realizations are not an argument for atheism per se, for why concern oneself with such barbarism if it is all simple hooey? Instead, it is the moral character of such deities that is put in question by the scriptural analysis of evil. The religious should actively discourage reference and reverence to the works that define them. Gutting hints at this in his extended critique:

There are serious moral objections to aspects of some religions.  But many believers rightly judge that their religion has great moral value for them, that it gives them access to a rich and fulfilling life of love.  What is not justified is an exclusivist or infallibilist reading of this belief, implying that the life of a given religion is the only or the best way toward moral fulfillment for everyone, or that there is no room for criticism of the religion’s moral stances.

I would argue that the first statement takes precedence over issues of exclusivity and infallibility, however, and not just based on contemporary applications of Christian or Muslim belief concerning gay rights or whether infidels must die. Instead, the extensive cruelty and moral evil expressed in most ancient texts must be addressed by religious believers distancing themselves from those traditions. The modern, evolved versions of the faiths could easily declare themselves “derived from the best teachings of Christ” or “the most loving aspects of Islam” and so forth, thus also ensuring that they don’t have to confront the grotesque shades haunting their own traditions.