Category: Evolutionary

The Obsessive Dreyfus-Hawking Conundrum

I’ve been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over the horizon, so I inched up into triple-digit speeds in the M6. Because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F-117s practiced touch-and-gos at Holloman in the background, and then driving my carpool truck out to the high energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before trying to track down the trailhead and starting on my hike. (I would have run the thing but wanted to go to lunch later and didn’t have access to a shower. Neither restaurants nor diners deserve an après-run moi.) And then I was on the trail and I kept stopping to take plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus. In English, the syntax and semantics are less constrained and the pragmatics more pronounced, but the emotional high is much the same.

With the recent death of Hubert Dreyfus at Berkeley it seems an opportune time to consider the uniquely human capabilities that are involved in each of these creative ventures. Uniquely, I suggest, because we can’t yet imagine what it would be like for a machine to do the same kinds of intelligent tasks. Yet, from Stephen Hawking through to Elon Musk, influential minds are worried about what might happen if we develop machines that rise to the level of human consciousness. This might be considered a science fiction-like speculation since we have little basis for conjecture beyond the works of pure imagination. We know that mechanization displaces workers, for instance, and think it will continue, but what about conscious machines?

For Dreyfus, the human mind is too embodied and situational to be considered an encodable thing representable by rules and algorithms. Much like the trajectory of a species through an evolutionary landscape, the mind is, in some sense, an encoded reflection of the world in which it lives. Taken further, the evolutionary parallel becomes even more relevant in that it is embodied in a sensory and physical identity, a product of a social universe, and an outgrowth of some evolutionary ping pong through contingencies that led to greater intelligence and self-awareness.

This obsession, whatever cultivars, whatever traits and tendencies lead to this riot of wordplay and software refinement, is a fine example of how the question moves away from the fears of Hawking and towards the impossibilities of Dreyfus. We might imagine that we can simulate our way to the kernel of instinct and emotion that makes such things possible. We might also claim that we can disconnect the product of the effort from these internal states and the qualia that defy easy description, as if the books and the new technologies have only desultory correspondence to the process by which they are created. But I doubt it. It’s more likely that getting from great automatic speech recognition or image classification to the general AI that makes us fearful is a longer hike than we currently imagine.

Evolving Visions of Chaotic Futures

Most artificial intelligence researchers think it unlikely that a robot apocalypse or some kind of technological singularity is coming anytime soon. I’ve said as much, too. Guessing about the likelihood of distant futures is fraught with uncertainty; current trends are almost impossible to extrapolate.

But if we must, what are the best ways for guessing about the future? In the late 1950s the Delphi method was developed. Get a group of experts on a given topic and have them answer questions anonymously. Then iteratively publish back the group results and ask for feedback and revisions. Similar methods have been developed for face-to-face group decision making, like Kevin O’Connor’s approach to generating ideas in The Map of Innovation: generate ideas and give participants votes equaling a third of the number of unique ideas. Keep iterating until there is a consensus. More broadly, such methods are called “nominal group techniques.”
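The voting loop O’Connor describes is mechanical enough to sketch. Here is a minimal Python rendering of that iterative winnowing; the function name and the rule for culling trailing ideas are my own choices, not from The Map of Innovation:

```python
import math
from collections import Counter

def nominal_group_vote(ideas, preferences, consensus=0.5, max_rounds=10):
    """Iteratively narrow a field of ideas by group voting.

    Each round, every participant casts ceil(len(ideas) / 3) votes for
    their top-ranked surviving ideas; the trailing half of the field is
    culled until one idea holds at least `consensus` of the votes cast.
    `preferences` maps each participant to their ranked list of ideas.
    """
    ideas = list(ideas)
    leader = ideas[0]
    for _ in range(max_rounds):
        votes_per_person = math.ceil(len(ideas) / 3)
        tally = Counter()
        for ranked in preferences.values():
            # vote for your most-preferred ideas still in the running
            for idea in [i for i in ranked if i in ideas][:votes_per_person]:
                tally[idea] += 1
        leader, leader_votes = tally.most_common(1)[0]
        if len(ideas) == 1 or leader_votes >= consensus * sum(tally.values()):
            return leader
        # cull the trailing half of the field and iterate
        ideas = [i for i, _ in tally.most_common(max(1, len(ideas) // 2))]
    return leader
```

The anonymity of Delphi lives in the fact that only the tally, never the individual ballots, feeds back into the next round.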

Most recently, the notion of prediction markets has been applied to internal and external decision making. In prediction markets, a similar voting strategy is used, but based on either fake or real money, forcing participants towards a risk-averse allocation of assets.

Interestingly, we know that optimal inference based on past experience can be codified using algorithmic information theory, but the fundamental problem with any kind of probabilistic argument is that much change that we observe in society is non-linear with respect to its underlying drivers and that the signals needed are imperfect. As the mildly misanthropic Nassim Taleb pointed out in The Black Swan, the only place where prediction takes on smooth statistical regularity is in Las Vegas, which is why one shouldn’t bother to gamble. Taleb’s approach is to look instead at minimizing the impact of shocks (or hedging them in financial markets).

But maybe we can learn something from philosophical circles. For instance, Evolutionary Epistemology (EE), as formulated by Donald Campbell, Sir Karl Popper, and others, posits that blind variation and selective retention are central to knowledge formation. Combined with optimal induction, this leads to random processes being injected into any kind of predictive optimization. We do this in evolutionary algorithms like Genetic Algorithms, Evolutionary Programming, Genetic Programming, and Evolution Strategies, as well as in related approaches like Simulated Annealing. But EE also suggests that there are several levels of learning by variation/retention, from the phylogenetic learning of species through to the mental processes of higher organisms. We speculate and run trial-and-error continuously, repeating loops of what-ifs in our minds in an effort to optimize our responses in the future. It’s confounding as hell, but we do remarkable things that machines can’t yet do, like folding towels or learning to bake bread.
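Blind variation and selective retention is exactly the loop the simplest of these algorithms runs. A minimal sketch, assuming a bitstring genome and a caller-supplied fitness function (this is the textbook (1+1) scheme, not any particular library’s API):

```python
import random

def one_plus_one_ea(fitness, length=20, generations=500, p_mut=None, seed=0):
    """Blind variation and selective retention in miniature: flip bits
    at random (variation), keep the child only if it is no worse than
    the parent (retention). This is the textbook (1+1) EA."""
    rng = random.Random(seed)
    p_mut = p_mut if p_mut is not None else 1.0 / length
    parent = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        # blind variation: each bit flips independently with probability p_mut
        child = [1 - b if rng.random() < p_mut else b for b in parent]
        # selective retention: elitist acceptance, never lose ground
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

# OneMax as the fitness function: the count of 1-bits (sum does the job)
best = one_plus_one_ea(sum, length=20, generations=2000)
```

The variation is blind in the literal sense: the mutation operator knows nothing about the fitness landscape, and all of the apparent direction comes from retention alone.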

This noosgeny-recapitulates-ontogeny-recapitulates-phylogeny (just made that up) can be exploited in a variety of ways for abductive inference about the future. We can, for instance, use evolutionary optimization with a penalty for complexity that simulates the informational trade-off of AIT-style inductive optimality. Further, the noosgeny component (by which I mean the internalized mental trial-and-error) can reduce phylogenetic waste in simulations by providing speculative modeling that retains the “parental” position on the fitness landscape before committing to a next generation of potential solutions, allowing for further probing of complex adaptive landscapes.
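That speculative, parent-retaining probe with an informational trade-off can be sketched directly: a (1+λ) evolution strategy that keeps the “parental” position on the fitness landscape while λ offspring probe around it, scoring each candidate as loss plus a complexity penalty standing in for the AIT-style trade-off. All names and the penalty weight here are illustrative:

```python
import random

def one_plus_lambda_es(loss, complexity, init, lam=8, sigma=0.3,
                       penalty=0.1, generations=200, seed=1):
    """(1+lambda) evolution strategy with a complexity penalty.

    The parent keeps its position on the fitness landscape while `lam`
    speculative offspring probe around it with Gaussian noise; an
    offspring replaces the parent only if its penalized score is
    strictly better (elitist retention)."""
    rng = random.Random(seed)
    score = lambda x: loss(x) + penalty * complexity(x)
    parent = list(init)
    for _ in range(generations):
        # speculative probing: the parent is never discarded outright
        offspring = [[g + rng.gauss(0, sigma) for g in parent]
                     for _ in range(lam)]
        challenger = min(offspring, key=score)
        if score(challenger) < score(parent):
            parent = challenger
    return parent
```

The elitism is the point: holding the parental position means a generation of bad speculation costs nothing but time, which is the phylogenetic waste reduction described above.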

On Woo-Woo and Schrödinger’s Cat

Michael Shermer and Sam Harris got together with an audience at Caltech to beat up on Deepak Chopra and a “storyteller” named Jean Houston in The Future of God debate hosted by ABC News. And Deepak got uncharacteristically angry back behind his crystal-embellished eyewear, especially at Shermer’s assertion that Deepak is just talking “woo-woo.”

But is there any basis for the woo-woo that Deepak is weaving? As it turns out, he is building on some fairly impressive work by Stuart Hameroff, MD, of the University of Arizona and Sir Roger Penrose of Oxford University. Under development for more than 25 years, this work was most recently summed up in their 2014 paper, “Consciousness in the universe: A review of the ‘Orch OR’ theory,” available for free (but not the commentaries, alas). Deepak was even invited to comment on the paper in Physics of Life Reviews, though the content of his commentary was challenged as being somewhat orthogonal or contradictory to the main argument.

To start somewhere near the beginning, Penrose became obsessed with the limits of computation in the late 80s. The Halting Problem sums up his concerns about the idea that human minds can possibly be isomorphic with computational devices. To Penrose, there seems to be something that allows for breaking free of the limits of “mere” Turing Complete computation. Whatever that something is, it should be physical and reside within the structure of the brain itself. Hameroff and Penrose would also like that something to explain consciousness and all of its confusing manifestations, for surely consciousness is part of that brain operation.

Now, to get at some necessary and sufficient sorts of explanations for this new model requires looking at Hameroff’s medical specialty: anesthesiology. Anyone who has had surgery has had the experience of consciousness going away while body function continues on, still mediated by brain activities. So certain drugs like halothane erase consciousness through some very targeted action. Next, consider that certain prokaryotes have internally coordinated behaviors without the presence of a nervous system. Finally, consider that it looks like most neurons do not integrate and fire like the classic model (and the model that artificial neural networks emulate), but instead have some very strange and random activation behaviors in the presence of the same stimuli.

How do these relate? Hameroff has been very focused on one particular component of the internal architecture of neural cells: microtubules, or MTs. These are very small compared to cellular scale and enormously numerous in neurons (on the order of 10^9). They are just cylindrical polymers with some specific chemical properties. They are also small enough (25nm in diameter) that it might be possible that quantum effects are present in their architecture. There is some very recent evidence to this effect based on strange reactions of MTs to tiny currents of varying frequencies used to probe them. Also, anesthetics appear to bind to MTs. Indeed, MTs could also provide a memory substrate that is orders of magnitude greater than the traditional interneuron concept of how memories form.

But what does this have to do with consciousness, beyond the idea that MTs get interfered with by anesthetics and therefore might be around or part of the machinery that we label conscious? They also appear to be related to Alzheimer’s disease, but this could just be related to the same machinery. Well, this is where we get woo-woo-ey. If consciousness is not just an epiphenomenon arising from standard brain function as a molecular computer, and it is also not some kind of dualistic soul overlay, then maybe it is something that is there but is not a classical computer. Hence quantum effects.

So Sir Penrose has been promoting a rather wild conjecture called the Diósi-Penrose theory that puts an upper limit on the amount of time a quantum superposition can survive. It does this based on some arguments I don’t fully understand but that integrate gravity with quantum phenomena to suggest that the mass displaced by the superposed wave functions snaps the superposition into wave collapse. So Schrödinger’s cat dies or lives very quickly even without an observer because there are a lot of superposed quantum particles in a big old cat and therefore very rapid resolution of the wave function evolution (10^-24s). Single particles can live in superposition for much longer because the mass difference between their wave functions is very small.
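The collapse-time scaling in that paragraph can be written down compactly. In the standard presentation of the Diósi–Penrose proposal, the expected lifetime τ of a superposition is set by the gravitational self-energy E_G of the difference between the superposed mass distributions:

```latex
\tau \approx \frac{\hbar}{E_G}
```

A cat-sized mass in superposition has a large E_G, so τ shrinks to something like the 10^-24 s quoted above; a lone particle’s E_G is minuscule, so its superposition can persist far longer.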

Hence the OR in “Orch OR” stands for Objective Reduction: wave functions are subject to collapse by probing, but they also collapse just because they are unstable in that state. The reduction is objective, not subjective. The “Orch” stands for “Orchestrated.” And there is the seat of consciousness in the Hameroff-Penrose theory. In MTs, little wave function collapses are constantly occurring, and the presence of superposition means quantum computing can occur. And the presence of quantum computing means that non-classical computation can take place and maybe even be more than Turing Complete.

Now the authors are careful to suggest that these are actually proto-conscious events and that only their large-scale orchestration leads to what we associate with consciousness per se. Otherwise they are just quantum superpositions that collapse, maybe with 1 qubit of resolution under the right circumstances.

At least we know the cat has a fate now. That fate is due to an objective event, too, and not some added woo-woo from the strange world of quantum phenomena. And the cat’s curiosity is part of the same conscious machinery.

Artsy Women

A pervasive commitment to ambiguity. That’s the most compelling sentence I can think of to describe the best epistemological stance concerning the modern world. We have, at best, some fairly well-established local systems that are reliable. We have consistency that may, admittedly, only pertain to some local system that is relatively smooth or has a modicum of support for the most general hypotheses that we can generate.

It’s not nihilistic to believe these things. It’s prudent and, when carefully managed, it’s productive.

And with such prudence we can tear down the semantic drapery that commands attention at every turn, from the grotesqueries of the political sphere that seek to command us through emotive hyperbole to the witchdoctors of religious canons who want us to immanentize some silly Middle Eastern eschaton or shoot up a family-planning clinic.

It is all nonsense. We are perpetuating and inventing constructs that cling to our contingent neurologies like mold, impervious to the broadest implications and best thinking we can muster. That’s normal, I suppose, for that is the sub rosa history of our species. But only beneath the firmament, while there is hope above and inventiveness and the creation of a new honor that derives from fairness and not from reactive disgust.

In opposition to the structures that we know and live with—that we tolerate—there is both clarity in this cocksure target and a certainty that, at least, we can deconstruct the self-righteousness and build a new sensibility to (at least) equality if not some more grand vision.

I picked up Laura Marling’s Short Movie last week and propagated it to various cars. It is only OK, but it joins a rather large collection of recent female musicians in my music archive. Indeed, the women have outnumbered the men at this point: St. Vincent, Joni Mitchell, Joanna Newsom, Hole, P.J. Harvey, Gwen Stefani, Courtney Love (sans Hole), Ani DiFranco, Joan Armatrading, Lily Allen, Valerie June. I’m particularly fascinated by female artists because they are unsung or underrepresented in our brief human history, and maybe also because my wife is one. But more than some progressive political commitment, female voices simply discuss things in different ways than male voices do.

Where there is perhaps an evolutionary inevitability for a perspective of pursuit and desire from men, for women there is the rage against social and familial expectations, of abuse, of being pursued, and of the complex relationship with the power of men. These aspects make for new thoughts that would not arise in male arts.


A Critique of Pure Randomness

The notion of randomness brings about many interesting considerations. For statisticians, randomness is a series of events with chances that are governed by a distribution function. In everyday parlance, equally likely means random, while an even more common semantics is based on both how unlikely and how unmotivated an event might be (“That was soooo random!”). In physics, only certain physical phenomena can be said to be truly random, including the probability of a given nucleus decomposing into other nuclei via fission. The exact position of a quantum thingy is equally random when its momentum is nailed down, and vice-versa. Vacuums have a certain chance of spontaneously creating matter, too, and that chance appears to be perfectly random. In algorithmic information theory, a random sequence of bits is a sequence that can’t be represented by a smaller descriptive algorithm–it is incompressible. Strangely enough, we simulate random number generators using a compact algorithm with a complicated series of steps that lead to an almost impossible-to-follow trajectory through a deterministic space of possibilities; it is acceptable for this to be only random enough that the algorithm parameters can’t be easily reverse engineered and the next “random” number guessed.
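That compact-algorithm trick is easy to see in miniature. A linear congruential generator is about the smallest such deterministic rule (the multiplier and increment below are the widely published Numerical Recipes constants; this is a toy for illustration, not a cryptographically safe generator):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: a tiny deterministic rule whose
    trajectory through its state space merely looks random."""
    state = seed % m
    while True:
        state = (a * state + c) % m
        yield state / m  # scale to [0, 1)

# same seed, same "random" sequence, every time
gen = lcg(42)
sample = [next(gen) for _ in range(3)]
```

The whole point is that nothing here is random: rerun it with the same seed and the identical sequence falls out, which is exactly why such generators are called pseudo-random.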

One area where we often speak of randomness is in biological evolution. Random mutations lead to change and to deleterious effects like dead-end evolutionary experiments. Or so we hypothesized. The exact mechanisms of inheritance and mutation were unknown to Darwin, but soon, in the evolutionary synthesis, notions like random genetic drift and the role of ionizing radiation and other external factors became exciting candidates for explaining the variation required for evolution to function. Amusingly, arguing largely from a stance that might be called a fallacy of incredulity, creationists have often seized on a logical disconnect they perceive between the appearance of purpose, both in our lives and in the mechanisms of biological existence, and the assumption of underlying randomness and non-directedness, offering it as evidence for the paucity of arguments from randomness.

I give you Stephen Talbott in The New Atlantis, “Evolution and the Illusion of Randomness,” wherein he unpacks the mounting evidence and the philosophical implications of jumping genes, self-modifying genetic regulatory frameworks, and transposons, and the likelihood that randomness in the strong sense, of cosmic ray trajectories bouncing around in cellular nuclei, is simply the wrong picture. Randomness is at best a minor contribution to evolutionary processes. We are not just purposeful at the social, personal, systemic, cellular, and sub-cellular levels; we are also purposeful through time around the transmission of genetic information and the modification thereof.

This opens a wildly new avenue for considering certain normative claims that anti-evolutionists bring to the table, such as that a mechanistic universe devoid of central leadership is meaningless and allows for any behavior to be equally acceptable. This hoary chestnut is ripe to the point of rot, of course, but the response to it should be much more vibrant than the usual retorts. The evolution of social and moral outcomes can be every bit as inevitable as if they were designed, because co-existence and greater group success (yes, I wrote it) is a potential well on the fitness landscape. And, equally, we need to stop being so hesitant to claim that there is a purposefulness to life, a teleology, and simply make sure that we are according the proper mechanistic feel to that teleology. Fine, call it teleonomy, or even an urge to existence. A little poetry might actually help here.

On Killing Kids


Mark S. Smith’s The Early History of God is a remarkable piece of scholarship. I was recently asked what I read for fun and had to admit that I have been on a trajectory towards reading books that have, on average, more footnotes than text. J.P. Mallory’s In Search of the Indo-Europeans kindly moves the notes to the end of the volume. Smith’s Chapter 5, Yahwistic Cult Practices, and particularly Section 3, The mlk sacrifice, are illuminating on the widespread belief that killing children could propitiate the gods. This practice was likely widespread among the Western Semitic peoples, including the Israelites and Canaanites (Smith’s preference for Western Semitic is to lump the two together ca. 1200 BC because they appear to have been culturally the same, possibly made distinct after the compilation of OT following the Exile).

I recently argued with some young street preachers about violence and horror in Yahweh’s name and by His command while waiting outside a rock shop in Old Sacramento. Human sacrifice came up, too, with the apologetics being that, despite the fact that everyone was bad back then, the Chosen People did not perform human sacrifice and therefore were marginally better than the other people around them. They passed quickly on the topic of slavery, which was wise for rhetorical purposes, because slavery was widespread and acceptable. I didn’t remember the particulars of the examples of human sacrifice in OT, but recalled them broadly, to which they responded that there were translation and interpretation errors with “burnt offering” and “fire offerings of first borns” that, of course, immediately contradicted their assertion of the acceptance and perfection of the scriptures.

More interesting, though, is the question of why human sacrifice might be so pervasive, whether among Yahwists, Carthaginians, or Aztecs. On Patheos, Chris Hallquist comments on the brilliant Is God a Moral Compromiser? by Thom Stark (free PDF!) that runs through the attitudes concerning the efficacy of human sacrifice for achieving military goals. And maybe you can control the weather or make your crops grow better. Killing one’s own kids sets up a dilemma for evolutionary psychology in that it immediately reduces your genetic representation. So the commitment to the gods must override the commitment to family and its role as a proxy for biology. Sacrificing other people’s children is less incomprehensible, though it affects the tribe and larger political constructs as well.

Looking at the story of Abraham and the emergence out of the Yahwistic cultic mlk, we might see even more evidence of the effect of multi-level selection overriding the individual’s biological urges. Order, obedience, tribal practices, and one’s identity as part of the group overrule preservation and familial bonds, and society gradually emerges as the lawmaker and orchestrator of human interactions.

Just So Disruptive

The “just so” story is a pejorative for evolutionary explanations of cultural or physical traits. Things are “just so” when the explanation is unfalsifiable and theoretically fitted to current observations. Less controversial and pejorative is the essential character of evolutionary process, where there is no doubt that genetic alternatives will mostly fail. The ones that survive this crucible are disruptive to the status quo, sure, but these disruptions tend to be geographically or sexually isolated from the main population anyway, so they are more an expansion than a disruption; little competition is tooth-and-claw, and species mostly survive versus the environment, not one another.

Jill Lepore of Harvard subjects business theory to a similar crucible in the New Yorker, questioning Clayton Christensen’s classic argument in The Innovator’s Dilemma that businesses are unwilling to adapt to changing markets because they are making rational business decisions to maximize profits. After analyzing core business cases from Christensen’s books, Lepore concludes that the argument holds little water and that its predictions are both poor and inapplicable to other areas like journalism and college education.

Central to her critique is her analysis of the “just so” nature of disruptive innovation:

Christensen has compared the theory of disruptive innovation to a theory of nature: the theory of evolution. But among the many differences between disruption and evolution is that the advocates of disruption have an affinity for circular arguments. If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. (“Stop being afraid of failure and start embracing it,” the organizers of FailCon, an annual conference, implore, suggesting that, in the era of disruption, innovators face unprecedented challenges. For instance: maybe you made the wrong hires?) When an established company succeeds, that’s only because it hasn’t yet failed. And, when any of these things happen, all of them are only further evidence of disruption.

But her critiques of Christensen are not actually of modern start-up culture and its celebration of disruption and failure (except obliquely and culturally). Instead, Lepore is mostly concerned with steam shovels, 3.5″ disk drives, and mass transit.

And that’s where the evolutionary comparison comes in again. Where multiple experimental tests can be applied to a problem or, as economists put it, a need, minor variation is the standard mechanism (as Lepore asserts). Major variation is the exception, and small tweaks like externalizing new businesses (Kresge’s Kmart, etc.) are inconclusive in their effectiveness. But in start-up world, everything is externalized, from risks to rewards.

So is there a take-away from the filtered and refined view of innovation and reinvention? Perhaps only that disruption may be best handled through complete externalization of the innovation process; make strategic investments and nurture businesses based on their own market opportunities. The best we may be able to do is play a volume game where we do exactly what the venture capitalist does and accept that only 1 in 10 investments will thrive or excel. We don’t need just so stories then, just a realization that we can’t read the tea leaves of the past as just so stories for the future.
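The volume game is simple enough to simulate. Here is a Monte Carlo sketch using the 1-in-10 hit rate from above and a hypothetical 20x payoff on a hit (the payoff multiple is my own illustrative assumption, not a claim about actual venture returns):

```python
import random

def portfolio_multiple(n=10, p_hit=0.1, hit_return=20.0, miss_return=0.0,
                       trials=10_000, seed=3):
    """Monte Carlo sketch of the venture 'volume game': n equal bets,
    each paying hit_return with probability p_hit, else miss_return.
    Returns the mean portfolio multiple across simulated portfolios."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        payoff = sum(hit_return if rng.random() < p_hit else miss_return
                     for _ in range(n))
        total += payoff / n
    return total / trials
```

Under those assumptions the expected portfolio multiple is just p_hit times hit_return, or 2x; the wide variance from portfolio to portfolio is the part that just so stories paper over.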

Humbly Evolving in a Non-Simulated Universe

The New York Times seems to be catching up to me, first with an interview of Alvin Plantinga by Gary Gutting in The Stone on February 9th, and then with notes on Bostrom’s Simulation Hypothesis in the Sunday Times.

I didn’t see anything new in the Plantinga interview, but reviewed my previous argument that adaptive fidelity combined with adaptive plasticity must raise the probability of rationality at a rate that is much greater than the contributions that would be “deceptive” or even mildly cognitively or perceptually biased. Worth reading is Branden Fitelson and Elliott Sober’s very detailed analysis of Plantinga’s Evolutionary Argument Against Naturalism (EAAN), here. Most interesting are the beginning paragraphs of Section 3, which I reproduce here because it is a critical addition that should surprise no one but often does:

Although Plantinga’s arguments don’t work, he has raised a question that needs to be answered by people who believe evolutionary theory and who also believe that this theory says that our cognitive abilities are in various ways imperfect. Evolutionary theory does say that a device that is reliable in the environment in which it evolved may be highly unreliable when used in a novel environment. It is perfectly possible that our mental machinery should work well on simple perceptual tasks, but be much less reliable when applied to theoretical matters. We hasten to add that this is possible, not inevitable. It may be that the cognitive procedures that work well in one domain also work well in another; Modus Ponens may be useful for avoiding tigers and for doing quantum physics.

Anyhow, if evolutionary theory does say that our ability to theorize about the world is apt to be rather unreliable, how are evolutionists to apply this point to their own theoretical beliefs, including their belief in evolution? One lesson that should be extracted is a certain humility—an admission of fallibility. This will not be news to evolutionists who have absorbed the fact that science in general is a fallible enterprise. Evolutionary theory just provides an important part of the explanation of why our reasoning about theoretical matters is fallible.

Far from showing that evolutionary theory is self-defeating, this consideration should lead those who believe the theory to admit that the best they can do in theorizing is to do the best they can. We are stuck with the cognitive equipment that we have. We should try to be as scrupulous and circumspect about how we use this equipment as we can. When we claim that evolutionary theory is a very well confirmed theory, we are judging this theory by using the fallible cognitive resources we have at our disposal. We can do no other.

And such humility helps to dismiss arguments about the arrogance of science and scientism.

On the topic of Bostrom’s Simulation Hypothesis, I remain skeptical that we live in a simulated universe.

In Like Flynn

The exceptionally interesting James Flynn explains the cognitive history of the past century and what it means in terms of human intelligence in this TED talk:

What does the future hold? While we might decry the “twitch” generation and their inundation by social media, gaming stimulation, and instant interpersonal engagement, the recent slowing of the Flynn Effect might be just a pause before another ramp-up over the next 100 years.

Perhaps most intriguing is the discussion of the ability to think in terms of hypotheticals as a core component of ethical reasoning. Ethics is about gaming outcomes and also about empathizing with others. The influence of media as a delivery mechanism for narratives about others emerged just as those changes in cognitive capabilities were beginning to mature in the 20th Century. Widespread media had a compounding effect on the core abstract thinking capacity, and with the expansion of smartphones and informational flow, we may only have a few generations to go before the necessary ingredients for good ethical reasoning are widespread even in hard-to-reach areas of the world.

Contingency and Irreducibility

Thomas Nagel returns to defend his doubt concerning the completeness—if not the efficacy—of materialism in the explanation of mental phenomena in the New York Times. He quickly lays out the possibilities:

  1. Consciousness is an easy product of neurophysiological processes
  2. Consciousness is an illusion
  3. Consciousness is a fluke side-effect of other processes
  4. Consciousness is a divine property supervened on the physical world

Nagel arrives at a conclusion that all four are incorrect and that a naturalistic explanation is possible that isn’t “merely” (1), but that is at least (1), yet something more. I previously commented on the argument, here, but the refinement of the specifications requires a more targeted response.

Let’s call Nagel’s new perspective Theory 1+ for simplicity. What form might 1+ take? For Nagel, the notion seems to be a combination of Chalmers-style qualia and a deep appreciation for the contingencies that factor into the personal evolution of individual consciousness. The latter is certainly redundant in that individuality must be absolutely tied to personal experiences and narratives.

We might be able to get some traction on this concept by looking to biological evolution, though “ontogeny recapitulates phylogeny” is about as close as we can get to the topic because any kind of evolutionary psychology must be looking for patterns that reinforce the interpretation of basic aspects of cognitive evolution (sex, reproduction, etc.) rather than explore the more numinous aspects of conscious development. So we might instead look for parallel theories that focus on the uniqueness of outcomes, that reify the temporal evolution without reference to controlling biology, and we get to ideas like uncomputability as a backstop. More specifically, we can explore ideas like computational irreducibility to support the development of Nagel’s new theory; insofar as the environment lapses towards weak predictability, a consciousness that self-observes, regulates, and builds many complex models and metamodels is superior to those that do not.
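Computational irreducibility has a standard poster child worth keeping in mind here: Wolfram’s Rule 30 cellular automaton, whose future states are, as far as anyone knows, obtainable only by running every intermediate step. A small sketch with periodic boundaries (the function name and grid size are my own choices):

```python
def rule30_step(cells):
    """One step of Wolfram's Rule 30 with periodic boundaries.
    New cell = left XOR (center OR right); the pattern this unfolds is
    a standard example of apparent computational irreducibility."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# grow the triangle from a single live cell
row = [0] * 15
row[7] = 1
for _ in range(5):
    row = rule30_step(row)
```

There is no known closed-form shortcut to step 5 here; you compute steps 1 through 4 or you do not get there, which is the weak predictability the paragraph leans on.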

I think we already knew that, though. Perhaps Nagel has been too much a philosopher and too little involved in the sciences that surround and animate modern theories of learning and adaptation to see the movement towards the exits?