Category: Evolutionary

Contingency and Irreducibility

Thomas Nagel returns, in the New York Times, to defend his doubt concerning the completeness—if not the efficacy—of materialism in the explanation of mental phenomena. He quickly lays out the possibilities:

  1. Consciousness is an easy product of neurophysiological processes
  2. Consciousness is an illusion
  3. Consciousness is a fluke side-effect of other processes
  4. Consciousness is a divine property supervened on the physical world

Nagel concludes that all four are incorrect and that a naturalistic explanation is possible: one that isn’t “merely” (1), but that includes (1) and something more. I previously commented on the argument here, but the refinement of his specifications requires a more targeted response.

Let’s call Nagel’s new perspective Theory 1+ for simplicity. What form might 1+ take? For Nagel, the notion seems to be a combination of Chalmers-style qualia and a deep appreciation for the contingencies that factor into the personal evolution of individual consciousness. The latter is certainly redundant, in that individuality must be tied absolutely to personal experiences and narratives.

We might be able to get some traction on this concept by looking to biological evolution, though “ontogeny recapitulates phylogeny” is about as close as we can get to the topic: any kind of evolutionary psychology must look for patterns that reinforce the interpretation of basic aspects of cognitive evolution (sex, reproduction, etc.) rather than explore the more numinous aspects of conscious development. So we might instead look for parallel theories that focus on the uniqueness of outcomes and that reify temporal evolution without reference to controlling biology, and we arrive at ideas like uncomputability as a backstop. More specifically, we can explore ideas like computational irreducibility to support the development of Nagel’s new theory: insofar as the environment lapses toward weak predictability, a consciousness that self-observes, regulates, and builds many complex models and metamodels is superior to one that does not.
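To make computational irreducibility concrete, here is a minimal sketch (my illustration, not Nagel’s) using Wolfram’s Rule 30 cellular automaton: as far as anyone knows, the only way to learn the cells at step n is to simulate all n steps, which is precisely the weak predictability invoked above.

```python
def rule30_step(cells):
    """One synchronous update of Rule 30 on a ring of cells:
    new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Grow the famously chaotic triangle from a single seed cell. There is no
# known closed-form shortcut to row n; every intermediate row must be computed.
cells = [0] * 31
cells[15] = 1
for _ in range(16):
    print(''.join('#' if c else '.' for c in cells))
    cells = rule30_step(cells)
```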

I think we already knew that, though. Perhaps Nagel has been too much a philosopher, and too little involved in the sciences that surround and animate modern theories of learning and adaptation, to see the movement towards the exits?


Red Queens of Hearts

An incomplete area of study in philosophy and science is the hows and whys of social cooperation. We can easily assume that social organisms gain benefits in terms of the propagation of genes by speculating about the consequences of social interactions versus individual ones, but translating that speculation into deep insights has remained a continuing research program. The consequences couldn’t be more significant because we immediately gain traction on the Naturalistic Fallacy and build a bridge towards a clearer understanding of human motivation in arguing for a type of Moral Naturalism that embodies much of the best we know and hope for from human history.

So worth tracking are continued efforts to understand how competition can be outdone by cooperation in the most elementary and mathematical sense. The superlatively named Freeman Dyson (who doesn’t want to be a free man?) cast a cloud of doubt on the ability of cooperation to be a working strategy when he and colleague William Press analyzed the payoff matrices of iterated prisoner’s dilemma games and discovered a class of play strategies called “Zero-Determinant” strategies that allow a player to extort a winning payoff regardless of the opponent’s strategy. Hence the concern that there is a large corner of the adaptive topology where strong-arming always wins: evolutionary search must seek out that corner, winners must accumulate there, and cooperation is thereby ruled out as a prominent feature of evolutionary success.
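The Press-Dyson effect is easy to reproduce numerically. Below is a minimal sketch (mine, with the conventional payoffs T=5, R=3, P=1, S=0 assumed) that computes long-run payoffs for two memory-one strategies from the stationary distribution of their joint Markov chain; the extortionate vector is the often-quoted example with extortion factor 3.

```python
import numpy as np

# Conventional iterated prisoner's dilemma payoffs, T > R > P > S.
R, S, T, P = 3, 0, 5, 1
fx = np.array([R, S, T, P])  # X's payoff in joint states CC, CD, DC, DD
fy = np.array([R, T, S, P])  # Y's payoff in the same states

def long_run_payoffs(p, q):
    """Average payoffs for memory-one strategies p (player X) and q (player Y).

    p[i] = Pr(X cooperates | last state i), states ordered CC, CD, DC, DD
    from X's point of view; q uses Y's point of view, so its CD and DC
    entries are swapped to share X's indexing.
    """
    qx = [q[0], q[2], q[1], q[3]]
    M = np.array([[pi * qi, pi * (1 - qi), (1 - pi) * qi, (1 - pi) * (1 - qi)]
                  for pi, qi in zip(p, qx)])
    w, v = np.linalg.eig(M.T)                    # stationary distribution is the
    s = np.real(v[:, np.argmin(np.abs(w - 1))])  # left eigenvector for eigenvalue 1
    s /= s.sum()
    return fx @ s, fy @ s

extort3 = [11/13, 1/2, 7/26, 0]  # zero-determinant extortion, factor 3
all_c = [1, 1, 1, 1]             # an unconditional cooperator
print(long_run_payoffs(extort3, all_c))  # ~(3.73, 1.91): the extorter wins
```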

But that can’t reflect the reality we think we see, where cooperation in primates and in eusocial organisms seems to be the precursor to the kinds of virtues that are reflected in moral, religious, and ethical traditions. So what might be missing in this analysis? Christoph Adami and Arend Hintze at Michigan State may have some of the answers in their paper, Evolutionary instability of zero-determinant strategies demonstrates that winning is not everything. The reasons for instability are several, but one key reason is that in the pairwise encounters between individual players, geography interferes with the capacity of one player to exploit the others; the mathematics breaks down because individuals cannot reliably recognize one another. Interestingly, though, any method for improving recognition memory, or for “tagging” other players for enhanced recognition, itself becomes subject to an evolutionary arms race. And this is a Red Queen effect: running forever to stay in place through development and counter-strategies.

Expanding towards a consideration of the ethical and moral consequences of cooperation brings us to the Red Queen of Hearts.

Novelty in the Age of Criticism

Lower Manhattan panorama because I am in Jersey City, NJ as I write this, with an awesomely aesthetic view.

Gary Gutting of Notre Dame and the New York Times knows how to incite an intellectual riot, as demonstrated by his most recent piece for The Stone, Mozart vs. the Beatles. “High art” is superior to “low art” because of its “stunning intellectual and emotional complexity.” He sums up:

My argument is that this distinctively aesthetic value is of great importance in our lives and that works of high art achieve it much more fully than do works of popular art.

But what makes up these notions of complexity and distinctive aesthetic value? One might try to enumerate those values in a list. Alternatively, one might claim that time serves as a sieve for the values that Gutting says make one work of art superior to another, which leaves open the possibility that the enumerated list is incomplete yet still a useful retrospective system of valuation.

I previously argued in a 1994 paper (published in 1997), Complexity Formalisms, Order and Disorder in the Structure of Art, that simplicity and random chaos exist in a careful balance in art, a balance that reflects the underlying grammatical systems we use to predict the environment. Jürgen Schmidhuber took the approach further by applying algorithmic information theory to novelty-seeking behavior that leads, in turn, to aesthetically pleasing models. The reflection of this behavioral optimization in our sideline preoccupations emerges as art, with the ultimate causation machine of evolution driving the proximate consequences for men and women.
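A crude way to see that balance is to let a general-purpose compressor stand in for algorithmic complexity (a sketch of the idea only, not the formalisms in the paper): strict repetition compresses to almost nothing, pure noise barely compresses at all, and a theme-with-variations pattern, the kind of structure art traffics in, lands in between.

```python
import random
import zlib

def complexity_ratio(data: bytes) -> float:
    """Compressed size over raw size: near 0 for pure order, near 1 for noise."""
    return len(zlib.compress(data, 9)) / len(data)

rng = random.Random(1)
motif = b"CDEFGABC"

ordered = motif * 125                                   # strict repetition
noise = bytes(rng.randrange(256) for _ in range(1000))  # incompressible chaos
varied = bytes(rng.choice(motif) if rng.random() < 0.3 else motif[i % len(motif)]
               for i in range(1000))                    # theme with variations

for name, data in [("ordered", ordered), ("varied", varied), ("noise", noise)]:
    print(f"{name:>8}: {complexity_ratio(data):.2f}")
```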

But let’s get back to the flaw I see in Gutting’s argument, one that fits better with Schmidhuber’s approach: much of what is important in art is cultural novelty. Picasso is not aesthetically superior to the detailed hyper-reality of the Dutch Masters, for instance, but is notable for his cultural deconstruction of the role of art as photography and reproduction took hold. The simplicity and unstructured chaos of the Abstract Expressionists is culturally significant in the same way. Importantly, changes in technology are essential to changes in artistic outlook, from the aforementioned role of photography in diminishing the aesthetic value of hand renderings to the application of electronic instruments in Philip Glass symphonies. Is Mozart better than Glass or Stravinsky? Using this newer standard for aesthetics, no, because Mozart was working skillfully (and perhaps brilliantly) but within the harmonic model of Classical composition and Classical forms. He was one of many. Wagner and Debussy, by comparison, changed the aural landscape, and by the time of tone rows and aleatoric composition, conventional musical aesthetics had been largely abandoned, if only fleetingly.

Modernism and postmodernism in prose and poetry follow similar trajectories, but I think there may have been a counter-opposing force to novelty seeking in much prose literature. That force is the requirement for narrative stories about human experience, which is not a critical component of music or visual art. Human experience has a temporal flow and a spatial unity. When novelists break these requirements in complex ways, the writing becomes increasingly difficult to comprehend (perhaps a bit like aleatoric music?), so novelists more often cling to convention while using other prose tools and stylistic fireworks to enhance the reader’s aesthetic valuations. Novelty hits less often, but when it does it tends to pose greater challenges. Poetry has, by comparison, been more experimental in form and concept.

And architecture? Gutting’s Chartres versus Philip Johnson?

So, returning to Gutting: I have largely been arguing about the difficulty of calling one piece of what Gutting might declare high art aesthetically superior to another piece of high art. But my point is that if we use cultural novelty as the primary yardstick, then we need to reorder the valuations. Early rock and roll pioneers, early blues artists, early modern jazz impresarios—all the legends we can think of—get top billing alongside Debussy. The inventors of heavy metal, rap, and electronica live proudly with the Baroque masters. They will likely survive the test-of-time criterion, too, because of the invention of recording technologies, which were not available to the Baroque composers.

Curiouser and Curiouser

Jürgen Schmidhuber’s work on algorithmic information theory and curiosity is worth a few takes, if not more, for the researcher has done something that is both flawed and rather brilliant at the same time. The flaws emerge when we start to look deeply into the motivations for ideas like beauty (are symmetry and noncomplex encoding enough to explain sexual attraction? Well-understood evolutionary psychology is probably a better bet), but the core of his argument is worth considering.

If induction is an essential component of learning (and we might suppose it is, for argument’s sake), then why continue to examine different parameterizations of possible models for induction? Why be creative about how to explain things, as we expect, and even idolize, of scientists?

So let us assume that induction is explained by the compression of patterns into better and better models using an information-theoretic approach. Given this, Schmidhuber makes the startling leap that better compression and better models are best achieved by information-harvesting behavior that involves finding novelty in the environment. Thus curiosity. Thus the implementation of action in support of ideas.

I proposed a similar model around 1994 to explain aesthetic preferences for mid-ordered complex systems of notes, brush-strokes, and the like, but Schmidhuber’s approach has the benefit of not just characterizing the limitations and properties of aesthetic systems but also justifying them. We find things interesting because we are programmed to find novelty, and we are programmed to find novelty because we want to optimize our predictive apparatus. The best optimization actively seeks along the contours of the perceivable (and quantifiable) universe, isolating unknown patterns to improve the current model.
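Here is one way to make the leap concrete, with zlib standing in for the predictive apparatus (a toy proxy, not Schmidhuber’s actual compression-progress formulation): score a new observation by how many bytes of encoding the agent’s past experience saves. Learnable regularity scores high; unmodelable noise scores near zero, and that differential is the curiosity signal steering information harvesting.

```python
import random
import zlib

def csize(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def curiosity_score(history: bytes, obs: bytes) -> int:
    """Bytes saved by encoding obs in light of history: the cost of obs alone
    minus its conditional cost, csize(history + obs) - csize(history)."""
    return csize(obs) - (csize(history + obs) - csize(history))

history = b"the rain in spain stays mainly in the plain " * 10
patterned = b"the rain in spain stays mainly in the plain "
noise = bytes(random.Random(2).randrange(256) for _ in range(len(patterned)))

print(curiosity_score(history, patterned))  # large: regularity the history models
print(curiosity_score(history, noise))      # near zero: chaos defeats the model
```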

Chinese Feudal Wasps

In Fukuyama’s The Origins of Political Order, the author points out that Chinese feudalism was not at all like European feudalism. In the latter, vassals were often unrelated to lords and the relationship between them was consensual and renewed annually. Only later did patriarchal lineages become important in preserving the line of descent among the lords. But that was not the case in China where extensive networks of blood relations dominated the lord-vassal relationship; the feudalism was more like tribalism and clans than the European model, but with Confucianism layered on top.

So when E.O. Wilson, still intellectually agile in his twilight years, describes the divide between kin selection and multi-level selection in the New York Times, we start to see a similar pattern of explanation for both models at a far more basic level than the happenstances of Chinese versus European cultures. Kin selection predicts that genetic co-representation can lead an individual to self-sacrifice in an evolutionary sense (from the loss of breeding possibilities in Hymenoptera like bees and ants, through to sacrificial behavior like standing watch against predators and thus becoming a target, too). This is the traditional explanation, and the one that fits the Chinese model well. But we also have the multi-level selection model, which posits that selection operates at the group level, too. Under kin selection there is no good explanation for the European feudal tradition unless the vassals were inbred with their lords, which seems unlikely in such a large, diverse cohort. Consolidating power among the lords and intermarrying practices possibly did result in inbreeding depression later on, but the overall model was one based on social ties, not genetic familiarity.
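The quantitative core of kin selection is Hamilton’s rule: a gene for self-sacrifice can spread when r·b > c, with r the genetic relatedness, b the benefit to the recipient, and c the cost to the altruist. A one-function sketch with illustrative numbers (mine, not Wilson’s):

```python
def hamilton_favors(r: float, b: float, c: float) -> bool:
    """Hamilton's rule: altruism is selected for when r * b > c."""
    return r * b > c

# A full sibling shares r = 0.5 of your genes by descent, so a sacrifice
# that saves one sibling (b = 1, c = 1) is not favored, but saving three is.
print(hamilton_favors(0.5, 1, 1))  # False
print(hamilton_favors(0.5, 3, 1))  # True
```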

China represents the opposite: a continuous flow of power to related individuals as the number of political units filtered down from thousands to just a few over several hundred years during the “Axial Age.” It was kin selection all over the place, though the rapacious nature of those affiliations would later lead to the rise of bureaucracies and the institutionalization of professional management with less allegiance to any family line.

But the existence of both models as successful transitional societies points to the failure of any exclusivity of one model versus the other. Either we are somewhat kin selective, somewhat multi-level selective, or just too contingent to support such sweeping claims. The data has to sort it out, and the only thing we can really be certain about is the contingent nature of science itself.

Bats and Belfries

Thomas Nagel proposes a radical form of skepticism in his new book, Mind and Cosmos, continuing a trajectory through subjective experience and moral realism first begun with bats zigging and zagging among the homunculi of dualism reimagined in the form of qualia. The skepticism involves disputing materialistic explanations and proposing, instead, that teleological ones of an unspecified form will likely apply; how else could his subtitle, which paints the “Neo-Darwinian Conception of Nature” as likely false, hold true?

Nagel is searching for a non-religious explanation, of course, because merely animating nature through fiat is hardly an explanation at all; any sort of powerful, non-human entelechy could be gaming us and the universe in a non-coherent fashion. But what parameters might support his argument? Since he apparently requires a “significant likelihood” argument to hold sway in support of the origins of life, for instance, we might ask what kind of thinking could carry us from inanimate matter to goal-directed behavior with a significant likelihood of that outcome. The parameters might involve the conscious coordination of the events leading towards the emergence of goal-directed life, thus presupposing a consciousness that is not our own. We are back, then, to our non-human entelechy looming like an alien or a strange creator deity (which is not desirable to Nagel). We might also consider the possibility that there are properties of the universe itself that result in self-organization, properties that either we don’t yet know or are only beginning to understand. Elliott Sober’s critique suggests that the 2nd Law of Thermodynamics results in what I might call “patterned” behavior while not becoming “goal-directed” per se. Yet it is precisely the capacity for self-organization, beginning at the borderline of energy harvesting mediated by the 2nd Law, that yields some of the clearest examples of physical structures emerging from simpler forms, and in an inevitable way; that is, with a “significant likelihood” of occurrence. Can we, in fact, draw a meaningful distinction between an inevitable self-organizing crystal process and an evolutionary one that involves large populations of entities interacting together? It seems to me that if we can conceive of the first, we can attribute an equal or better weight of probability to the second.

Are there other options? Could the form of this “new teleology” that is non-materialistic in nature achieve other insights that are significantly likely? One possibility would be a physical property or process that showed a unique affinity for goal-directed behavior and could not be explained by bridging rules straddling known neuropsychological and evolutionary models. Such a phenomenon would be recognized by its resilience to explanation, I think: “there are no effective explanations for creativity; it is a uniquely human quality.” Yet we just don’t have any compelling examples. Creativity does not appear to be completely resilient to explanation, nor does any other human mental process.

Our bats, our belfries, remain uniquely our own.

Talking Musical Heads

David Byrne gets all scientifical in the most recent Smithsonian, digging into the developmental and evolved neuropsychiatry of musical enjoyment. Now, you may ask yourself, how did DB get so clinical about the emotions of music? And you may ask yourself, how did he get here? And you may ask yourself, how did this music get written?

…one can envision a day when all types of music might be machine-generated. The basic, commonly used patterns that occur in various genres could become the algorithms that guide the manufacture of sounds. One might view much of corporate pop and hip-hop as being machine-made—their formulas are well established, and one need only choose from a variety of available hooks and beats, and an endless recombinant stream of radio-friendly music emerges. Though this industrial approach is often frowned on, its machine-made nature could just as well be a compliment—it returns musical authorship to the ether. All these developments imply that we’ve come full circle: We’ve returned to the idea that our universe might be permeated with music.

It seems fairly obvious that the music I’m listening to right now (Arvo Pärt) could be generated automatically, but just hasn’t been so far. And this points to the future Byrne envisions, where we are permeated with music and the contrast with silence is the most sophisticated distinction that can be drawn.
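Byrne’s “endless recombinant stream” is almost trivially easy to sketch. The toy below is my illustration, not anything Byrne specifies: the hook and beat libraries are invented placeholders for a genre’s established formulas, stitched together at random into radio-friendly bars.

```python
import random

# Hypothetical libraries standing in for a genre's stock formulas.
HOOKS = ["C-E-G", "A-F-C", "G-B-D", "E-G-C"]
BEATS = ["boom-bap", "four-on-the-floor", "trap-hat", "backbeat"]

def make_track(bars=8, seed=None):
    """Return a 'song' as a list of (hook, beat) bars, recombined at random."""
    rng = random.Random(seed)
    return [(rng.choice(HOOKS), rng.choice(BEATS)) for _ in range(bars)]

for hook, beat in make_track(bars=4, seed=42):
    print(f"{hook} over {beat}")
```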

Reciprocity and Abstraction

Fukuyama’s suggestion is intriguing but needs further development and empirical support before it can be considered more than a hypothesis. To be mildly repetitive, ideology derived from scientific theories should be subject to even more scrutiny than religious-political ideologies if for no other reason than it can be. But in order to drill down into the questions surrounding how reciprocal altruism might enable the evolution of linguistic and mental abstractions, we need to simplify the problems down to basics, then work outward.

So let’s start with reciprocal altruism as a mere mathematical game. The prisoner’s dilemma is a case study: you and a compatriot are accused of a heinous crime and put in separate rooms. If you each admit to the crime, you both get 1 year in prison (the cooperation strategy). If you each deny involvement while fingering the other, the evidence is equivocal and you both get 3 years (mutual defection). But if one of you denies and fingers while the other admits, the denier walks free while the admitter gets 6 years. What does one do as a “rational actor” in order to minimize penalization? The only solution is to betray your friend while denying involvement (deny, deny, deny): whatever he does, you serve less time, walking instead of serving 1 year if he admits, and serving 3 years instead of 6 if he fingers you. Treating his choices as equally likely, denying averages 1/2·0 + 1/2·3 = 1.5 years versus 1/2·1 + 1/2·6 = 3.5 years for admitting.

In other words, it doesn’t pay to cooperate.

But that isn’t the “iterated” version of the game. In the iterated prisoner’s dilemma the game is played over and over again. What strategy is best then? An early empirical result, from Axelrod’s tournaments, showed that “tit for tat” works impressively well between two actors. In tit-for-tat you don’t need much memory of your co-conspirator’s past behavior. It suffices to do in the current round whatever they did in the last round. If they defected, you defect to punish them. If they cooperated, you cooperate.
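A quick simulation makes the point, using the prison-years payoffs from the example above (fewer years is better; a minimal sketch, mine): tit for tat never exploits anyone, yet across repeated rounds a pair of tit-for-tat players serves far less time than a pair of chronic defectors.

```python
# Prison years for (my move, their move); C = cooperate, D = defect.
YEARS = {('C', 'C'): 1, ('C', 'D'): 6, ('D', 'C'): 0, ('D', 'D'): 3}

def tit_for_tat(opp_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opp_history[-1] if opp_history else 'C'

def always_defect(opp_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, years_a, years_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's past
        years_a += YEARS[(a, b)]
        years_b += YEARS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return years_a, years_b

print(play(tit_for_tat, tit_for_tat))      # (100, 100): mutual cooperation
print(play(always_defect, always_defect))  # (300, 300): mutual punishment
print(play(tit_for_tat, always_defect))    # (303, 297): exploited once, then parity
```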

But this is just two actors and robust payoff matrices. What if we expand the game to include hundreds of interacting agents, all competing for mating privileges and access to resources? Fukuyama’s claim is being applied to human prehistory, after all. How does a more complex competitive-cooperative landscape change these simple games and lead to an upward trajectory of abstraction, induction, abduction, or other mechanisms that feed into cognitive processes and then into linguistic ones? We can bound the problem in the following way: each actor needs at least as many bits as there are other actors just to track who defected in the last interaction. And since there are observable limitations to identifying defection (cheating) with regard to mating opportunities and other complex human behaviors, we can expand the bits requirement to floating-point representations that cast past behavior as an estimate of the likelihood of future defection. Next, you have to maintain an individual statistical model of each participant to better estimate their likelihood of defection versus cooperation (hundreds of estimates and variables). You also need a vast array of predictive neural structures tuned to various social cues (Did he just flirt with my girlfriend? Did he just suck up to the headman?).
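The bookkeeping that paragraph implies might look like the sketch below (a hypothetical class of my own devising): one smoothed, floating-point cooperation estimate per partner rather than a single defection bit, updated from noisy observations.

```python
from collections import defaultdict

class PartnerLedger:
    """Per-partner running estimates of cooperation probability.

    Each entry stores [cooperations + 1, observations + 2], a Laplace-smoothed
    count, so a stranger starts at a neutral estimate of 0.5.
    """
    def __init__(self):
        self._counts = defaultdict(lambda: [1, 2])

    def observe(self, partner, cooperated):
        c, n = self._counts[partner]
        self._counts[partner] = [c + int(cooperated), n + 1]

    def p_cooperate(self, partner):
        c, n = self._counts[partner]
        return c / n

ledger = PartnerLedger()
for outcome in (True, True, False, True):
    ledger.observe("og", outcome)
print(round(ledger.p_cooperate("og"), 2))        # 0.67 after 3 of 4 cooperations
print(round(ledger.p_cooperate("stranger"), 2))  # 0.5: no information yet
```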

We do seem to end up with big brains, just as Vonnegut predicted and lamented in Galapagos, though contra Vonnegut, whether those big brains translate into species-wide destruction is less about prediction and more about policy choices. Still, Fukuyama is better than most historians in that he neither succumbs to atheoretical reporting (ODTAA: history as just “one damn thing after another”) nor fixates on the support of a central theory that forces the interpretation of the historical record (OMEX: “one more example of X”).

Science, Pre-science, and Religion

Francis Fukuyama in The Origins of Political Order: From Prehuman Times to the French Revolution draws a bright line from reciprocal altruism to abstract reasoning, and then through to religious belief:

Game theory…suggests that individuals who interact with one another repeatedly tend to gravitate toward cooperation with those who have shown themselves to be honest and reliable, and shun those who have behaved opportunistically. But to do this effectively, they have to be able to remember each other’s past behavior and to anticipate likely future behavior based on an interpretation of other people’s motives.

Then, language allows transmission of historical patterns (largely gossip in tight-knit social groups) and abstractions about ethical behaviors until, ultimately:

The ability to create mental models and to attribute causality to invisible abstractions is in turn the basis for the emergence of religion.

But this can’t be the end of the line. Insofar as abstract beliefs can attribute repetitive weather patterns to Olympian gods, or consolidate moral reasoning to a monotheistic being, the same mechanisms of abstraction must be the basis for scientific reasoning as well. Either that or the cognitive capacities for linguistic abstraction and game theory are not cross-applicable to scientific thinking, which seems unlikely.

So the irony of assertions that science is just another religion is that they certainly share a similar initial cognitive evolution, while nevertheless diverging in their dependence on faith and supernatural expectations, on the one hand, and channeling the predictive models along empirical contours on the other.

Bloodless Technonomy

The last link provided in the previous post leads down a rabbit hole. The author translates a Chinese report and then translates the data into geospatial visualizations and pie charts, sure, but he also begins very rapidly to layer on his ideological biases. He is part of the “AltRight” movement with a focus on human biodiversity. The memes of AltRight are largely racially charged, if not racist, defined around an interpretation of Darwinism that anoints difference and worships a kind of biological determinism. The thought cycles are large, elliptical constructs that play with sociobiology and evolutionary psychology to describe why inequities exist in the human world. Fair enough, though we can quibble over whether any scientisms rise far enough out of the dark waters of data to move policy more than a hair either way. And we can also argue against the interpretations of biology that nurtured the claims, especially the ever-present meme that inter-human competition is somehow discernible as Darwinian at all. That is the realm of the Social Darwinists and Fascists, and the realm of evil given the most basic assumptions about others. It also demands an explanation of cooperation at a higher level than the superficial realization that kin selection might have a role in primitive tribal behavior. To be fair, of course, it has parallels in attempts to tie Freudian roles to capitalism and desire, or in the deeper contours of Marxist ideology.

But this war of ideologies, of intellectual histories, of grasping at ever-deeper ways of reinterpreting the goals and desires of political actors, might be coming to an end in a kind of bloodless, technocratic way. Specifically, surveillance, monitoring, and data analysis can potentially erode the theologies of policy into refined understandings of how groups react to changes in laws, regulations, incentives, taxes, and entitlements.

How will this work?

Let’s take gerrymandering as an example. There is an uncomplicated competition for power involved in reengineering political districts, and it can be solved fairly easily via algorithms that remove human decision making from the process (see the Wikipedia articles on splitline algorithms and isoperimetric quotients, and the sketch following the list below). A similar approach that uses experimentation and non-ideological mechanisms can be applied to many (though not all) divisive political problems:

  • Global warming controversial? Apply cap-and-trade or other CO2 reductions at half optimal strength (as argued by proponents). Surveil outcomes and establish a decision criterion for next steps.
  • Health care reform unappetizing? Create smaller-scale laboratories to identify what works and what doesn’t (say, like Massachusetts). Identify the social goods and bads and expand where appropriate.
  • Welfare systems under the microscope? Reform and reimplement using state and community block grants to test alternative ways of solving the problem. Leave existing system intact until the data is in.
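As promised above, here is a minimal sketch of the isoperimetric quotient, the compactness score an algorithmic redistricting scheme can optimize in place of partisan judgment (illustrative polygons only; real districts would come from GIS data):

```python
import math

def isoperimetric_quotient(poly):
    """4 * pi * Area / Perimeter**2 for a simple polygon of (x, y) vertices.

    Equals 1.0 for a circle; values near zero flag the sprawling, tentacled
    shapes characteristic of gerrymandered districts.
    """
    area = perim = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        area += x1 * y2 - x2 * y1                # shoelace formula
        perim += math.hypot(x2 - x1, y2 - y1)
    return 4 * math.pi * (abs(area) / 2) / perim ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]
print(round(isoperimetric_quotient(square), 3))  # 0.785: reasonably compact
print(round(isoperimetric_quotient(sliver), 3))  # 0.031: a red flag
```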

Behavioral economics somewhat foreshadows this future of outcome- and data-driven policy development. I’ve coined the term “technonomy” based on Pittendrigh’s notion of “teleonomy” to capture this idea of basing policy decisions on experimental and data-driven methodologies, and to distinguish it from technocracy (it also appears to have other meanings already that involve “synergies” and other vacuous crap). If the AltRight want to deny a specific government action based on racial theories, or if the Very Left want to spend more to correct for a perceived injustice, or if Libertarians want a return to a gold standard, then all that is required is for the groups to design a policy laboratory that controls the variables of interest well enough that the theory can be tested. It will require enormous creativity that goes beyond conspiracy theories and mere kvetching, but would certainly be more informative than the current guerrilla wars of partisan intellectual rage.