The Noble Gases of Social Theory

“Intellectually inert” is an insult I reserve for vast elaborations that present little in the way of new knowledge. I use it sparingly and with hesitation. Ross Douthat usually doesn’t sink to that level, though he does tend to be obsessed with vague theories about the breakdown of traditional (read “conservative”) societal mores and their consequences for modern America.

But his recent blog post “Social Liberalism as Class Warfare” is so numbing in its rhetorical elaborations that “intellectually inert” was the only phrase that came to mind after slogging my way through it. So what’s the gist of the post?

  1. Maybe rich, smart folks pushed through divorce and abortion because they thought it made them freer.
  2. But poor, not-so-smart folks lacked sufficient self-control to use these tools wisely.
  3. Therefore, the rich, smart folks inadvertently made poor, not-so-smart folks engage in adverse behaviors that tore up traditional families.
  4. And we get increased income and social inequality as a result.

An alternative argument might be:

  1. Folks kept getting smarter and better educated (everyone).
  2. They wanted to be free of old stuffy traditions.
  3. There were no good, new traditions that took their place, and insufficient touchstones of the “elite” values in the cultural ecosystems of the underclass.
  4. And we get increased income and social inequality as a result.

And here we get to the crux of my suggestion of inertness: it doesn’t matter whether the unintended consequences of iconoclasm differentially impact socioeconomic strata. What matters is what can actually be done about it that is voluntary rather than imposed. After all, voluntary action is what the meritocracy of educated folks engages in by Douthat’s own calculus of assortative mating. And it won’t be that Old Time Religion, because of (1) and (2) above.

Parsimonious Portmanteaus

Meaning is a problem. We think we might know what something means but we keep being surprised by the facts, research, and logical difficulties that surround the notion of meaning. Putnam’s Representation and Reality runs through a few different ways of thinking about meaning, though without reaching any definitive conclusions beyond what meaning can’t be.

Children are a useful touchstone concerning meaning because we know that they acquire linguistic skills and consequently at least an operational understanding of meaning. And how they do so is rather interesting: first, presume that whole objects are the first topics for naming; next, assume that syntactic differences lead to semantic differences (“the dog” refers to the class of dogs while “Fido” refers to the instance); finally, prefer that linguistic differences point to semantic differences. Paul Bloom slices and dices the research in his Précis of How Children Learn the Meanings of Words, calling into question many core assumptions about the learning of words and meaning.
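The three preferences can be caricatured as a toy, rule-based word learner. This is my own sketch for illustration, not Bloom’s model; the function name, the scene representation, and the `(kind, referent)` encoding are all invented:

```python
# Toy sketch of the three word-learning preferences: whole-object bias,
# syntax-to-semantics mapping, and mutual exclusivity. All names and the
# scene representation are invented for illustration.

def learn_word(utterance, scene_objects, lexicon):
    """Map any novel word in `utterance` to a novel object in the scene."""
    words = utterance.split()
    # Mutual exclusivity: only words not already in the lexicon are candidates,
    # and only objects not already named can serve as referents.
    novel_words = [w for w in words if w not in lexicon and w != "the"]
    known = {ref for _, ref in lexicon.values()}
    novel_objects = [o for o in scene_objects if o not in known]
    for w in novel_words:
        if not novel_objects:
            break
        # Whole-object bias: the referent is a whole object, not a part.
        referent = novel_objects.pop(0)
        # Syntax signals semantics: a determiner ("the dog") marks a kind,
        # a bare name ("Fido") marks an individual.
        idx = words.index(w)
        kind = "kind" if idx > 0 and words[idx - 1] == "the" else "individual"
        lexicon[w] = (kind, referent)
    return lexicon

lexicon = {}
learn_word("the dog", ["dog1"], lexicon)
learn_word("Fido", ["dog1", "dog2"], lexicon)
print(lexicon)
```

The second call illustrates mutual exclusivity: since “dog1” already has a name, the bare name “Fido” is mapped to the still-unnamed “dog2” as an individual.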

These preferences become useful if we want to try to formulate an algorithm that assigns meaning to objects or groups of objects. Probabilistic Latent Semantic Analysis (PLSA), for example, assumes that words are signals from underlying probabilistic topic models and then derives those models by estimating all of the probabilities from the available signals. The outcome lacks labels, however: the “meaning” is expressed purely in terms of co-occurrences of terms. Reconciling an approach like PLSA with the observations about children’s meaning acquisition presents some difficulties. The process seems too slow, for example, which was always a complaint about connectionist architectures of artificial neural networks as well. As Bloom points out, kids don’t make many errors concerning meaning, and when they do, they rapidly compensate.

Predicting Black Swans

Nassim Taleb’s second edition of The Black Swan argues, not unpersuasively, that rare, cataclysmic events dominate ordinary statistics. Indeed, he notes that almost all wealth accumulation is based on long-tail distributions in which a small number of individuals reap unexpected rewards. The downsides are equally consequential: he notes that casinos lose money not at gambling, where the statistics are governed by Gaussians (the house always wins), but when tigers attack, when workers sue, and when other external factors intervene.

Black Swan Theory adds an interesting challenge to modern inference frameworks like Algorithmic Information Theory (AIT) that attribute predictability to the universe. Even variant coding approaches like Minimum Description Length (MDL) theory update the anticipatory model based on relatively smooth error functions rather than high-kurtosis distributions of variable change. And for the most part, for the regular events of life and our sensoriums, that is adequate. It is only when we start to look at rare existential threats that we begin to worry about Black Swans and inference.
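The thin-tail versus fat-tail contrast behind that worry can be made concrete with a small illustration of my own (not from Taleb or the AIT literature): sample excess kurtosis of a Gaussian draw versus a heavy-tailed Pareto draw, using only the standard library.

```python
import random
import statistics

# Excess kurtosis compares a distribution's tail weight to a Gaussian's
# (which has excess kurtosis 0). Heavy tails make it blow up, which is
# why smooth, Gaussian-calibrated error models understate rare extremes.
random.seed(42)

def excess_kurtosis(xs):
    m = statistics.fmean(xs)
    var = statistics.fmean([(x - m) ** 2 for x in xs])
    return statistics.fmean([(x - m) ** 4 for x in xs]) / var ** 2 - 3.0

gauss = [random.gauss(0.0, 1.0) for _ in range(100_000)]
# Pareto with alpha = 1.5 has infinite variance: a few extreme draws
# dominate the whole sample.
pareto = [random.paretovariate(1.5) for _ in range(100_000)]

print(excess_kurtosis(gauss))   # close to 0
print(excess_kurtosis(pareto))  # orders of magnitude larger
```

The Gaussian estimate hovers near zero no matter how large the sample; the Pareto estimate is dominated by its few largest draws, which is precisely the regime where smooth error updates mislead.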

How might we modify the typical formulations of AIT and the trade-offs between model complexity and data to accommodate the exceedingly rare? Several approaches are possible. First, if we are combining a predictive model with a resource-accumulation criterion, we can simply pad out the model memory by reducing kurtosis risk through additional resource accumulation; any downside is mitigated by storing nuts for a rainy day. That is a good strategy for moderately rare events like weather changes, droughts, and whatnot. But what about even rarer events like little ice ages and dinosaur-extinction-level meteorite hits? An alternative strategy is to maintain sufficient diversity in the face of radical unknowns that coping becomes a species-level achievement.
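The diversity strategy can be caricatured with a toy Monte Carlo of my own devising (the niche count, shock rate, and horizon are all arbitrary): rare shocks wipe out one “niche” at a time, so a population spread across niches outlives a homogeneous one, no matter how well the latter has padded its reserves against moderate shocks.

```python
import random

# Toy illustration of diversity as a hedge against radical unknowns:
# a catastrophe occasionally annihilates everyone in one random niche.
random.seed(7)

NICHES = 4
P_CATASTROPHE = 0.01   # per-generation chance one random niche is wiped out
GENERATIONS = 200
TRIALS = 2000

def survives(population_niches):
    """True if at least one occupied niche remains after all generations."""
    alive = set(population_niches)
    for _ in range(GENERATIONS):
        if random.random() < P_CATASTROPHE:
            alive.discard(random.randrange(NICHES))
            if not alive:
                return False
    return True

homogeneous = sum(survives({0}) for _ in range(TRIALS)) / TRIALS
diverse = sum(survives(set(range(NICHES))) for _ in range(TRIALS)) / TRIALS
print(homogeneous, diverse)
```

A single-niche population dies the first time its one niche is hit, while the diverse population must lose every niche before going extinct, so its survival rate is dramatically higher over the same horizon.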