Tagged: epistemology

Brain Gibberish with a Convincing Heart

Elon Musk believes that direct brain interfaces will help people transmit ideas to one another more effectively, beyond just enabling thought-to-text generation. But there is a fundamental problem with this idea. Let’s take Hubert Dreyfus’ conception of meaning as tied to a holistic view of our social interactions with others. Hilary Putnam would probably agree with this perspective, though now I am speaking for two dead philosophers of mind. We can certainly conclude that my mental states when thinking about the statement “snow is white” are, borrowing from Putnam who borrows from Quine, different from a German person thinking “Schnee ist weiß.” The orthography, grammar, and pronunciation are different to begin with. Then there is what seems to transpire when I think about that statement: mild visualizations of white snow-laden rocks above a small stream, for instance, or, just now, Joni Mitchell’s “As snow gathers like bolts of lace/Waltzing on a ballroom girl.” Positing some logical ground that merely asserts such a statement as a propositional truth shared in a mental interlingua does little justice to the complexities of what the statement entails.

Religious and political terminology is notoriously elastic. Indeed, for the former, it hardly even seems coherent to talk about the concept of supernatural things or events. If they are detectable by any sense other than some kind of unverifiable gnosis, then they are at least natural in that they manifest in the observable world. So “supernatural” imposes a barrier that seems to preclude any kind of discussion using ordinary language. The only thing left is a collection of metaphysical assumptions that, lacking any sort of reference, must merely conform to the patterns of synonymy, metonymy, and other language games that we ordinarily reserve for discernible events and things. And, of course, where unverifiable gnosis holds sway, it is not public knowledge and therefore seems mainly to serve as a social mechanism for attracting attention to oneself.

Politics takes on a similar quality: it is often said to be a virtue if a leader can translate complex policies into simple sound bites. But, as we see in modern American politics, what instead happens is that abstract fear signaling becomes the primary currency for motivating (and manipulating) the voter. The elasticity of a concept like “freedom” is used to polarize the sides of a political negotiation that almost always involves managing winners and losers and the dividing line between them. Fear mixes with complex nostalgia for times that never were, or were more nuanced than most recall, and jeremiads poison the well of discourse.

So, if I were to have a brain interface, it might be trainable to write words for me by listening to the regular neural firing patterns that accompany my typing or speaking, but I doubt it would provide some kind of direct transmission or telepathy between people carrying any more content than those written or spoken forms. Instead, the inscrutable, non-referential abstractions that make up complex ideas would arrive tangled together, clashing with the receiving mind’s existing holistic meaning network. And that would just be gibberish to any other mind. Worse still, such a system might also be able to convey raw emotion from person to person, amplifying the fear or joy component of an idea without being able to transmit the specifics of the thoughts. And that would be worse than mere gibberish: it would be gibberish with a convincing heart.

Bayesianism and Properly Basic Belief

[Image: Kircher, diagram of the names of God]

Xu and Tenenbaum, in Word Learning as Bayesian Inference (Psychological Review, 2007), develop a very simple Bayesian model of how children (and even adults) build semantic associations based on accumulated evidence. In short, they find both contrastive elimination approaches and connectionist methods unable to explain the patterns that are observed. Specifically, the most salient problem with these other methods is that they lack the rapid transition in generalization that is seen when three exemplars of a word’s object class are presented versus just one. Adults and kids (the former even more so) just get word meanings faster than those other models can easily show. Moreover, a space of contending hypotheses weighted by their Bayesian posterior probabilities provides an escape from the all-or-nothing character of hypothesis elimination while retaining some of the “soft” commitment properties that connectionist models provide.

The mathematical trick for the rapid transition is rather interesting. They formulate a “size principle” that weights the likelihood of a given hypothesis (this object is a “feb,” for instance, rather than a member of one of the many other available object sets) by the size of its extension, with an effect that is exponential in the number of exposures. Hence the rapid transition:

Hypotheses with smaller extensions assign greater probability than do larger hypotheses to the same data, and they assign exponentially greater probability as the number of consistent examples increases.
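
To make the size principle concrete, here is a minimal sketch in Python. This is my own toy construction, not Xu and Tenenbaum’s actual model; the hypothesis names and extension sizes are invented. The likelihood of n consistent examples under a hypothesis whose extension holds s objects is taken to be (1/s)^n, so the narrowest consistent hypothesis comes to dominate exponentially fast:

```python
# Toy sketch of the size principle (not Xu & Tenenbaum's implementation).
# Hypotheses are candidate word extensions; the sizes are invented.
hypotheses = {
    "dalmatians": 10,    # narrow extension
    "dogs": 100,         # basic-level extension
    "animals": 1000,     # broad extension
}

def posterior(n_examples, prior=None):
    """Posterior over hypotheses after n examples consistent with all of them.

    Size principle: the likelihood of drawing n examples from a hypothesis
    whose extension contains s objects is (1/s)**n, so smaller hypotheses
    gain exponentially more weight as consistent examples accumulate.
    """
    if prior is None:
        prior = {h: 1.0 / len(hypotheses) for h in hypotheses}
    scores = {h: prior[h] * (1.0 / size) ** n_examples
              for h, size in hypotheses.items()}
    total = sum(scores.values())
    return {h: score / total for h, score in scores.items()}

for n in (1, 3):
    print(n, posterior(n))
# n=1: "dalmatians" leads but "dogs" still holds about 9% of the mass.
# n=3: "dalmatians" takes ~99.9% of the mass -- the rapid transition.
```

Note that the prior never changes between one example and three; the exponential sharpening comes entirely from the likelihood term.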

It should be noted that they don’t claim that the psychological or brain machinery implements exactly this algorithm. As is usual in these matters, it is instead likely that whatever machinery is involved simply has at least these properties. It may very well be that connectionist architectures can do the same but that existing approaches to connectionism simply don’t do it quite the right way. So other methods may need to be tweaked to get closer to the observed word learning of people in these tasks.

So what can this tell us about epistemology and belief? Classical foundationalism might be formulated as follows: something is a “basic” or “justified” belief if it is self-evident or evident to our senses. Other beliefs may then be grounded in those basic beliefs. A more modern reformulation might substitute “incorrigible” for “justified,” with the layered meaning of incorrigibility built on the requirement that, given that the proposition is believed, it is in fact true.

Here’s Alvin Plantinga laying out a case for why justification and incorrigibility have a range of problems, problems serious enough for Plantinga that he suspects god belief could just as easily be a basic belief, allowing for the kind of presuppositional Natural Theology (think: I look around me and the hand of God is obvious) that is at the heart of some of the loftier claims concerning the viability or non-irrationality of god belief. It even provides a kind of coherent interpretative framework for history.

Plantinga then positions the problem of properly basic belief as an inductive one:

And hence the proper way to arrive at such a criterion is, broadly speaking, inductive. We must assemble examples of beliefs and conditions such that the former are obviously properly basic in the latter, and examples of beliefs and conditions such that the former are obviously not properly basic in the latter. We must then frame hypotheses as to the necessary and sufficient conditions of proper basicality and test these hypotheses by reference to those examples. Under the right conditions, for example, it is clearly rational to believe that you see a human person before you: a being who has thoughts and feelings, who knows and believes things, who makes decisions and acts. It is clear, furthermore, that you are under no obligation to reason to this belief from others you hold; under those conditions that belief is properly basic for you.

He goes on to conclude that this opens up the god hypothesis as providing this kind of coherence mechanism:

By way of conclusion then: being self-evident, or incorrigible, or evident to the senses is not a necessary condition of proper basicality. Furthermore, one who holds that belief in God is properly basic is not thereby committed to the idea that belief in God is groundless or gratuitous or without justifying circumstances. And even if he lacks a general criterion of proper basicality, he is not obliged to suppose that just any or nearly any belief—belief in the Great Pumpkin, for example—is properly basic. Like everyone should, he begins with examples; and he may take belief in the Great Pumpkin as a paradigm of irrational basic belief.

So let’s assume that the word learning mechanism based on this Bayesian scaling is representative of our human inductive capacities. This may or may not be broadly true; it is possible that it holds for words but not for other domains of perceptual phenomena. Nevertheless, given this scaling property, the relative inductive truth of a given proposition (a meaning hypothesis) is strictly Bayesian. Moreover, this doesn’t succumb to the problems of verificationism because it only claims relative truth. What counts as properly basic is then the scaled field of contending explanatory hypotheses, and the god hypothesis has to compete with other explanations like evolutionary theory (for human origins), empirical evidence of materialism (for explanations contra supernatural ones), perceptual mistakes (ditto), myth scholarship, textual analysis, the influence of parental belief exposure, the psychology of wish fulfillment, the pragmatic triumph of science, and so on.

And so we can stick to a relative scaling of hypotheses as to what constitutes basicality or justified true belief. That’s fine. We can continue to argue the previous points as to whether they support or override one hypothesis or another. But the question Plantinga raises as to what ethics to apply in making those decisions is important. He distinguishes different reasons why one might want to believe more true things than false ones (broadly), or to take some things as properly basic rather than others, or, more pointedly, why philosophers feel the need to pin god-belief as irrational. But we succumb to a kind of unsatisfying relativism insofar as the space of these hypotheses is not, in fact, weighted in a manner that best reflects the known facts. The relativism gets deeper when the weighting is washed out by wish fulfillment, pragmatism, aspirations, and personal insights that lack falsifiability. That is at least distasteful, maybe aretaically so (in Plantinga’s framework), but probably more teleologically so in that it influences other decision-making and the conflicts and real harms societies may cause.

The Unreasonable Success of Reason

[Image: May 2012 eclipse refracted through a lemon tree]

Math and natural philosophy were discovered several times in human history: Classical Greece, Medieval Islam, Renaissance Europe. Arguably, the latter two were strongly influenced by the former, but even so they built additional explanatory frameworks. Moreover, the explosion that arose from Europe became the Enlightenment and the modern edifice of science and technology.

So, on the eve of an eclipse that sufficiently darkened the skies of Northern California, it is worth noting the unreasonable success of reason. The gods are not angry. The spirits are not threatening us over a failure to properly propitiate their symbolic requirements. Instead, the mathematics worked predictively and perfectly to explain a wholly natural phenomenon.

But why should the mathematics work so exceptionally well? It could be otherwise, as Eugene Wigner’s marvelous 1960 paper, The Unreasonable Effectiveness of Mathematics in the Natural Sciences, points out:

All the laws of nature are conditional statements which permit a prediction of some future events on the basis of the knowledge of the present, except that some aspects of the present state of the world, in practice the overwhelming majority of the determinants of the present state of the world, are irrelevant from the point of view of the prediction.

A possible explanation of the physicist’s use of mathematics to formulate his laws of nature is that he is a somewhat irresponsible person. As a result, when he finds a connection between two quantities which resembles a connection well-known from mathematics, he will jump at the conclusion that the connection is that discussed in mathematics simply because he does not know of any other similar connection.

Galileo’s rocks fall at the same rate, but only provided they are not unduly flat and light. Pieces of paper and feathers definitely do not, instead drifting insouciantly along channels of heat and air towards the ground. Yet we assume the law applies, and a secondary explanation (air resistance) is brought in to account for deviations from the central tendency expressed by a relatively simple proportionality, as the toy simulation below suggests. But what of the geographical variations in the gravitational field of the Earth? These were mapped extensively during the Cold War to improve the accuracy of ballistic missiles. Another complex suite of variables that we ignore until the noise they contribute is overridden by the requirements of a specific technological application.
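
To see how much work that secondary explanation does, here is a toy numerical sketch (my own construction; the masses and drag coefficients are invented for illustration) comparing a dense object, for which the ideal vacuum law t = sqrt(2h/g) nearly holds, with a light one, for which quadratic air drag dominates:

```python
import math

def fall_time(mass_kg, drag_coeff, height_m=10.0, dt=1e-4, g=9.81):
    """Time to fall height_m with quadratic drag: m*dv/dt = m*g - k*v**2."""
    v = y = t = 0.0
    while y < height_m:
        a = g - (drag_coeff / mass_kg) * v * v  # net downward acceleration
        v += a * dt
        y += v * dt
        t += dt
    return t

ideal = math.sqrt(2 * 10.0 / 9.81)                   # vacuum law: ~1.43 s
rock = fall_time(mass_kg=1.0, drag_coeff=1e-3)       # ~1.43 s, drag negligible
feather = fall_time(mass_kg=0.005, drag_coeff=1e-3)  # ~1.9 s, drag dominates
print(ideal, rock, feather)
```

The rock lands on the ideal schedule to within a percent or so; the feather-like object takes roughly a third longer, and the simple proportionality survives only because we book the difference under “air resistance.”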

And it all works well enough that we soldier on, our television signals carried from perfectly still geosynchronous satellites that are actually sliding through their orbits at breakneck speed in order to preserve the illusion of absolute stillness (a quick calculation of that speed follows below). It even raises a further question: are there phenomena so complex that we cannot easily characterize their underlying mathematics, or that are simply uncharacterizable in terms of analytic formulations?
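
As a parting aside, that breakneck speed falls straight out of Kepler’s third law. Here is a quick back-of-the-envelope sketch using standard values for Earth’s gravitational parameter and the sidereal day:

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1           # sidereal day, s

# Kepler's third law for a circular orbit: T^2 = 4*pi^2 * r^3 / GM
r = (GM * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
v = 2 * math.pi * r / T

print(f"orbital radius: {r / 1000:.0f} km")  # ~42164 km from Earth's center
print(f"orbital speed:  {v:.0f} m/s")        # ~3075 m/s of "perfect stillness"
```

About three kilometers per second, roughly nine times the speed of sound, just to stand perfectly still.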