Follow the Paths

There is a little corner of philosophical inquiry that asks how our knowledge is ultimately justified. Epistemological foundationalism rests on the idea that if we keep tracing justifications for our beliefs, we can literally get to the bottom of it all: a foundation of basic beliefs that need no further support. So, for instance, if we ask why we think there is a planet called Earth, we can find reasons for that belief that go beyond just “’cause I know!”, like “I sense the ground beneath my feet” and “I’ve learned empirically verified facts about the planet during my education that have been validated by space missions.” Then, in turn, we need to justify the idea that empiricism is a valid way of attaining knowledge with something like, “It’s shown to be reliable over time.” But this notion of reliability is itself changeable and variable, since scientific insights and theories have shifted depending on the domain in question and the timeframe. And why should we in fact trust our senses as reliable (or mostly reliable), given what we know about hallucinations, apophenia, and optical illusions?

There is also a curious argument in philosophy that parallels this skepticism about the reliability of our perceptions, reason, and the “warrants” for our beliefs, called the Evolutionary Argument Against Naturalism (EAAN). I’ve previously discussed some aspects of EAAN, and it is, amazingly, still discussed in academic circles. In a nutshell, it asserts that reliable reasoning can’t have simply evolved, because evolution selects for survival-promoting behavior and does not reliably deliver good, truthful ways of thinking about the world.

While it may seem obvious that the evolutionary algorithm neither delivers nor guarantees completely reliable faculties for discerning true things from false things, the notion of epistemological pragmatism is a direct parallel to evolutionary search (as Fitelson and Sober hint).… Read the rest
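To make that parallel concrete, here is a minimal sketch (my own toy illustration, not anything from Fitelson and Sober) of evolutionary search in which candidate “beliefs” survive according to pragmatic success (how well they predict the environment) rather than by any direct test of their truth:

```python
# A toy model of the pragmatism/evolution parallel: candidate "beliefs"
# (here, just numeric parameters) are selected on prediction error alone.
# All names and parameters are invented for illustration.
import random

TRUE_VALUE = 3.7          # the hidden fact about the "world"
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_SCALE = 0.1

def fitness(belief):
    """Pragmatic success: smaller prediction error means higher fitness."""
    return -abs(belief - TRUE_VALUE)

population = [random.uniform(-10, 10) for _ in range(POPULATION_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the half of the population that predicts best.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION_SIZE // 2]
    # Reproduction with mutation: offspring are perturbed copies.
    offspring = [b + random.gauss(0, MUTATION_SCALE) for b in survivors]
    population = survivors + offspring

print(f"best belief after selection: {max(population, key=fitness):.3f}")
```

Selection here only ever sees prediction error, yet the surviving beliefs drift toward the hidden true value; that is the pragmatist’s point in miniature.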

Evolving Visions of Chaotic Futures

Most artificial intelligence researchers consider it unlikely that a robot apocalypse or some kind of technological singularity is coming anytime soon. I’ve said as much, too. Guessing about the likelihood of distant futures is fraught with uncertainty; current trends are almost impossible to extrapolate.

But if we must, what are the best ways to guess about the future? In the late 1950s the Delphi method was developed: gather a group of experts on a given topic and have them answer questions anonymously, then iteratively publish the group results back and ask for feedback and revisions. Similar methods have been developed for face-to-face group decision making, like Kevin O’Connor’s approach to generating ideas in The Map of Innovation: generate ideas, give participants votes equaling a third of the number of unique ideas, and keep iterating until there is a consensus. More broadly, such methods are called “nominal group techniques.”
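A single voting round of that kind is easy to picture in code. Here is a minimal sketch, with invented ideas and random ballots standing in for real participants:

```python
# One nominal-group voting round in the spirit of O'Connor's method: each
# participant gets votes equal to a third of the number of unique ideas.
# The idea list and ballots below are invented for illustration.
from collections import Counter
import random

ideas = ["idea-A", "idea-B", "idea-C", "idea-D", "idea-E", "idea-F"]
participants = 5
votes_per_person = max(1, len(ideas) // 3)  # a third of the unique ideas

tally = Counter()
for _ in range(participants):
    # Each participant spends their votes on distinct favorites; random
    # choices here stand in for real preferences.
    for choice in random.sample(ideas, votes_per_person):
        tally[choice] += 1

# Rank the ideas; low scorers would be dropped before the next round.
for idea, votes in tally.most_common():
    print(f"{idea}: {votes} vote(s)")
```

Dropping the low scorers and re-voting on the survivors is the iteration that drives the group toward consensus.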

Most recently, the notion of prediction markets has been applied to internal and external decision making. In prediction markets, a similar voting strategy is used, but one based on either fake or real money, which forces participants toward a risk-averse allocation of assets.
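Under the hood, many such markets are run by an automated market maker rather than simple vote counting. As one concrete mechanism, here is a minimal sketch of Hanson’s logarithmic market scoring rule (LMSR); the liquidity parameter and the trade below are invented for illustration, and real markets differ in their details:

```python
# A minimal sketch of Hanson's logarithmic market scoring rule (LMSR),
# one common automated market maker for prediction markets.
import math

B = 100.0  # liquidity parameter: higher means prices move more slowly

def cost(shares):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return B * math.log(sum(math.exp(q / B) for q in shares))

def prices(shares):
    """Instantaneous price of each outcome; prices sum to 1 like probabilities."""
    total = sum(math.exp(q / B) for q in shares)
    return [math.exp(q / B) / total for q in shares]

# Two-outcome market ("event happens" vs. "does not"), starting flat.
q = [0.0, 0.0]
print("initial prices:", prices(q))  # [0.5, 0.5]

# A trader buys 50 shares of outcome 0; their payment is the change in cost.
before = cost(q)
q[0] += 50.0
print("trade cost:", cost(q) - before)
print("new prices:", prices(q))  # price of outcome 0 rises above 0.5
```

The appealing property is that the instantaneous prices always sum to one, so they can be read directly as the market’s implied probabilities.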

Interestingly, we know that optimal inference based on past experience can be codified using algorithmic information theory, but the fundamental problem with any kind of probabilistic argument is that much of the change we observe in society is non-linear with respect to its underlying drivers, and the signals needed to track those drivers are imperfect. As the mildly misanthropic Nassim Taleb pointed out in The Black Swan, the only place where prediction takes on smooth statistical regularity is in Las Vegas, which is why one shouldn’t bother to gamble.… Read the rest
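For reference, the codification alluded to above is Solomonoff induction. Stated compactly (my summary of the standard formulation, not something from the original post):

```latex
% Solomonoff's universal prior: a string x is weighted by every program p
% that makes a universal prefix machine U output a string beginning with x,
% with shorter programs counting more.
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}

% Prediction is then just conditioning on what has been seen so far:
M(x_{n+1} \mid x_1 \dots x_n) = \frac{M(x_1 \dots x_n x_{n+1})}{M(x_1 \dots x_n)}
```

The non-linearity complaint survives even this formulation: the prior is uncomputable in practice, and the data we actually get are noisy samples of the drivers we care about.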