A Most Porous Barrier

Whenever a scientific—or even quasi-scientific—theory is invented, there are those who take an expansive view of it, broadly applying it to other areas of thought. This is perhaps inherent in the metaphorical nature of these kinds of thought patterns. Thus we see Darwinian theory influenced by Adam Smith’s “invisible hand” of economic optimization. From Darwin we get Spencer’s Social Darwinism. And E.O. Wilson’s sociobiology leads to evolutionary psychology, hard on the heels of an activist’s pitcher of ice water.

The is-ought barrier tends towards porousness, allowing the smuggling of insights and metaphors lifted from the natural world as explanatory footwork for our complex social and political interactions. After all, we are as natural as we are social. But at the same time, we know that science is best when it is tentative and subject to infernal levels of revision and reconsideration. Decisions about social policy derived from science, and especially those that have significant human impact, should be cushioned by a tentative level of trust as well.

E.O. Wilson’s most recent book, Genesis: The Deep Origin of Societies, is a continuation of his late conversion to what is now referred to as “multi-level selection,” where natural selection is believed to operate at multiple levels, from genes to whole societies. It remains a controversial theory that has been under development and under siege since Darwin’s time, when the mechanism of inheritance was not understood.

The book is brief and offers little, if any, new material beyond his The Social Conquest of Earth, which was significantly denser and contained notes derived from his controversial 2010 Nature paper arguing that kin selection was overstated as a gene-level explanation of altruism and sacrifice within eusocial species.… Read the rest

Metaphors as Bridges to the Future

David Lewis’s (I’m coming to accept this new convention for s-ending possessives!) solution to Putnam’s semantic indeterminacy is that we have a network of concepts that interrelate in a manner that is consistent under probing. We know from cognitive psychology that, as we read, texts that bridge unfamiliar concepts from paragraph to paragraph help us settle those ideas into the network, sometimes tentatively, and sometimes requiring some kind of theoretical reorganization as we learn more. Then there are some concepts that have special referential magnetism and serve as piers for the bridges.

You can see these same kinds of bridging semantics being applied in the quest to solve some of our most difficult and unresolved scientific conundrums. Quantum physics has presented strangeness from its very beginning, and the various interpretations of that strangeness, and the efforts to reconcile it with our everyday logic, remain incomplete. So it is not surprising that efforts to unravel the strange in quantum physics often appeal to Einstein’s descriptive approach to deciphering the strange problems of electromagnetic wave propagation that ultimately led to Special and then General Relativity.

Two recent approaches that borrow from the Einstein model are Carlo Rovelli’s Relational Quantum Mechanics and David Albert’s How to Teach Quantum Mechanics. Both are quite explicit in drawing comparisons to the relativity approach; Einstein, in merging space and time, and in realizing that gravitation and acceleration were locally indistinguishable, introduced an explanation that defied our expectations of ordinary, Newtonian physical interactions. Time was no longer a fixed universal but became locked to observers and their relative motion, and to space itself.

Yet the two quantum approaches are decidedly different, as well. For Rovelli, there is no observer-independent state to quantum affairs.… Read the rest

Doubt at the Limit

I seem to have a central theme running through many of my recent posts related to the demarcation between science and non-science, and to the limits of what rationality allows insofar as we care about such limits. This is not purely abstract, though, as we can see in today’s anti-science movements, whether anti-vaccination activists, flat Earthers, climate change deniers, or intelligent design proponents. Just today, Ars Technica reports on an event held by the first of these, in close proximity to a massive measles outbreak; the speakers ranged from a “disgraced former gastroenterologist” to an angry rabbi. Efforts to counter them, in the form of a letter from a county supervisor and another rabbi, may have had an impact on the broader community, but probably not on the die-hards of the movement.

Meanwhile, Lee McIntyre at Boston University suggests what we are missing in these engagements in a great piece in Newsweek. McIntyre applies the same argument to flat Earthers that I have applied to climate change deniers: what we need to reinforce is the value and, importantly, the limits inherent in scientific reasoning. Insisting, for example, that climate change science is 100% squared away just fuels the micro-circuits in the so-called meta-cognitive strategies regions of the brains of climate change deniers. Instead, McIntyre recommends that science engage the public in thinking about the limits of science, showing how doubt and process lead us to usable conclusions about topics that are suddenly fashionably in dispute.

No one knows whether this approach is superior to alternatives like the letter-writing by authorities in the vaccination seminar case, and it is certainly a longer-term effort in that it needs to build against entrenched ideas and opinions, but it at least argues for a new methodology.… Read the rest

Causing Incoherence to Exist

I was continuing a discussion on Richard Carrier vs. the Apologists, but the format of the blog posting system made a detailed conversation difficult, so I decided to continue here. My core argument is that the premises of Kalam are incoherent. I also think some of the responses are as well.

But what do we mean by incoherent?

Richard interpreted that to mean logically impossible, but my intent was that incoherence is a property of the semantics of the words. Statements are incoherent when they don’t make sense or only make sense with a very narrow and unwarranted reading of the statement. The following argument follows a fairly standard analytic tradition analysis of examining the meaning of statements. I am currently fond of David Lewis’s school of thought on semantics, where the meaning of words exists as a combination of mild referential attachment, coherence within a network of other words, and, importantly, the fact that some words within that network achieve what is called “reference magnetism,” in that they are tied to reality in significant ways and pull at the meaning of other words.

For instance, consider Premise 1 of a modern take on Kalam:

All things that begin to exist have a cause.

OK, so what does begin to exist mean? And how about cause? Let’s unpack “begin to exist” first. We might say in our everyday world of people that, say, cars begin to exist at some point. But when is that point? For instance, is it latent in the design for the car? Is it when the body panels are attached on the assembly line? Is it when the final system is capable of car behavior? That is, when all the parts that were in fact designed are fully operational?… Read the rest

Two Points on Penrose, and One On Motivated Reasoning

Sir Roger Penrose is, without doubt, one of the most interesting polymaths of recent history. Even where I find his ideas fantastical, they are most definitely worth reading and understanding. Sean Carroll’s Mindscape podcast interview with Penrose from early January of this year is a treat.

I’ve previously discussed the Penrose-Hameroff conjectures concerning wave function collapse and their implication of quantum operations in the microtubule structure of the brain. I also used the conjecture in a short story. But the core driver for Penrose’s original conjecture, namely that algorithmic processes can’t explain human consciousness, has always been a claim in search of support. Equally difficult is pushing consciousness into the sphere of quantum phenomena, which tend to show random, rather than directed, behavior. Randomness doesn’t clearly relate to the “hard problem” of consciousness, which is about the experience of being conscious.

But take the idea that, since mathematicians can prove things that are blocked by Gödel incompleteness, our brains must be different from Turing machines or collections of them. Our brains are likely messy and not theorem-proving machines per se, despite operating according to logico-causal processes. Indeed, throw in an active analog to biological evolution based on variation-and-retention of ideas and insights, one that might actually have a bit of pseudo-randomness associated with it, and there is no reason to doubt that we are capable of the kind of system transcendence that Penrose is looking for.

Note that this doesn’t in any way impact the other horn of Penrose-Hameroff concerning the measurement problem in quantum theory, but there is no reason to suspect that quantum collapse is necessary for consciousness. It might flow the other way, though, and Penrose has created the Penrose Institute to look experimentally for evidence about these effects.… Read the rest

Narcissism, Nonsense and Pseudo-Science

I recently began posting pictures of our home base in Sedona to Instagram (check it out in the column to the right). It’s been a strange trip. If you are not familiar with how Instagram works, it’s fairly simple: you post pictures, and other Instagram members can “follow” you and you can follow them, meaning that you see their pictures and can tap a little heart icon to show you like their pictures. My goal, if I have one, is simple: I like the Northern Arizona mountains and deserts, and I like thinking about the composition of photographs. I’m also interested in the gear and techniques involved in taking and processing pictures. I did, however, market my own books on the platform—briefly, and with apologies.

But Instagram, like Facebook, is a world unto itself.

Shortly after starting on the platform, I received follows from blond Russian beauties who appear to be marketing online sex services. I have received odd follows from variations on the same name, accounts with no content on their pages that disappear after a day or two if I don’t follow them back. Though I don’t have any definitive evidence, I suspect these might be bots. I have received follows from people who seemed to be marketing themselves as, well, people—including one who bait-and-switched with good landscape photography. They are typically attractive young people, often showing off their six-pack abs, trying to build a following with the goal of making money off of Instagram. Maybe they plan to show off products or reference them, thus becoming “influencers” in the lingo of social media. Maybe they are trying to fund their travel experiences by reaping revenue from the advertisers that co-exist with their popularity in their image feed.… Read the rest

Theoretical Reorganization

Sean Carroll of Caltech takes on the philosophy of science in his paper, Beyond Falsifiability: Normal Science in a Multiverse, as part of a larger conversation on modern theoretical physics and experimental methods. Carroll breaks down the problems with Popper’s falsification criterion and arrives at a more pedestrian Bayesian formulation for how to view science. Theories arise, theories get their priors amplified or deflated, that prior support changes—often, for Carroll, for reasons of coherence with other theories and considerations—and, in the best case, the posterior support improves with better experimental data.
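The mechanics of that Bayesian picture can be sketched in a few lines. The theory names and likelihood numbers below are hypothetical placeholders of my own, not anything from Carroll’s paper; the point is only how prior support gets amplified or deflated by data:

```python
# Sketch of Bayesian comparison between rival theories (hypothetical numbers).
# Each theory assigns a likelihood to the observed data; Bayes' rule turns
# prior support into posterior support.

def posterior(priors, likelihoods):
    """Normalize prior * likelihood over the competing theories."""
    unnorm = {t: priors[t] * likelihoods[t] for t in priors}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

priors = {"theory_A": 0.5, "theory_B": 0.5}       # equal initial credence
likelihoods = {"theory_A": 0.8, "theory_B": 0.2}  # A fits the data better

post = posterior(priors, likelihoods)
print(post)  # theory_A's support is amplified, theory_B's deflated
```

Coherence considerations would enter this sketch as adjustments to the priors themselves, before any data arrives, which is exactly where the simple picture starts to strain.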

Continuing with the previous posts’ work on expanding Bayes via AIT considerations, the non-continuous changes to a group of scientific theories that arrive with new theories or data require a better model than merely adjusting priors. How exactly does coherence play a part in theory formation? If we treat each theory as a binary string that encodes a Turing machine, then the best theory, inductively, is the shortest machine that accepts the data. But we know that there is no machine that can compute that shortest machine, so there needs to be an algorithm that searches through the state space to try to locate a minimal machine. Meanwhile, the data may be varying, and the machine may need to incorporate other machines that improve the coverage of the original machine or are driven by other factors, as Carroll points out:

We use our taste, lessons from experience, and what we know about the rest of physics to help guide us in hopefully productive directions.

The search algorithm is clearly not just brute force in examining every micro variation in the consequences of changing bits in the machine. Instead, large reusable blocks of subroutines get reparameterized or reused with variation.… Read the rest

Free Will and Algorithmic Information Theory

I was recently looking for examples of applications of algorithmic information theory, also commonly called algorithmic information complexity (AIC). After all, for a theory to be sound is one thing, but when it is sound and valuable it moves to another level. So, first, let’s review the broad outline of AIC. AIC begins with the problem of randomness, specifically random strings of 0s and 1s. We can readily see that given any sort of encoding in any base, strings of characters can be reduced to a binary sequence. Likewise integers.

Now, AIC states that there are often many Turing machines that could generate a given string and, since we can represent those machines as bit sequences as well, there is at least one machine that has the shortest bit sequence while still producing the target string. In fact, if that shortest machine is as long as the string itself or a bit longer (given some machine encoding requirements), then the string is said to be AIC-random. In other words, no compression of the string is possible.
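The true shortest-machine length is uncomputable, but any off-the-shelf compressor gives an upper bound on a string’s description length, which is enough to illustrate the randomness definition. A minimal sketch using Python’s zlib as a stand-in for the ideal minimal machine (the stand-in is my assumption for illustration, not part of the theory):

```python
import os
import zlib

def compressed_ratio(s: bytes) -> float:
    """Length of the zlib-compressed string relative to the original.
    Ratios near (or above) 1.0 mean this compressor found no shorter
    description, i.e. the string looks AIC-random to it."""
    return len(zlib.compress(s, 9)) / len(s)

patterned = b"01" * 5000        # highly regular: a tiny "machine" regenerates it
random_ish = os.urandom(10000)  # incompressible in practice

print(compressed_ratio(patterned))   # far below 1.0
print(compressed_ratio(random_ish))  # at or slightly above 1.0
```

The asymmetry in the two ratios is the practical shadow of the definition above: compressibility just is the existence of a generator shorter than the string.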

Moreover, we can generalize this generator-machine idea to claim that, given some set of strings that represent the data of a given phenomenon (let’s say natural occurrences), the smallest generator machine that covers all the data is a “theoretical model” of the data and the underlying phenomenon. An interesting outcome of this theory is that it can be shown that there is, in fact, no algorithm (or meta-machine) that can find the smallest generator for an arbitrary sequence. This is related to the undecidability of the halting problem and, through it, to Gödel incompleteness.
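Because no algorithm can find the smallest generator in general, any practical search has to restrict itself to a program space where everything halts. A toy sketch of that move, using a made-up “language” of my own in which a program is just a pattern plus a repeat count (deliberately nothing like real Turing machines, but enough to make shortest-description search concrete):

```python
# Toy shortest-description search. A "program" is a (pattern, repeats) pair
# whose output is pattern * repeats. Every program in this language halts,
# which is exactly the restriction a practical AIC approximation must impose.

def shortest_repeat_program(target: str):
    best = (target, 1)               # trivial program: emit the string verbatim
    best_cost = len(target) + 1      # pattern length plus one slot for the count
    for plen in range(1, len(target) + 1):
        if len(target) % plen == 0:
            pattern = target[:plen]
            if pattern * (len(target) // plen) == target:
                cost = plen + 1
                if cost < best_cost:
                    best, best_cost = (pattern, len(target) // plen), cost
    return best

print(shortest_repeat_program("abababab"))  # -> ('ab', 4): compressible
print(shortest_repeat_program("abcdefgh"))  # -> ('abcdefgh', 1): random, to this language
```

The second result shows the cost of the restriction: strings this tiny language can’t compress get declared random even when a richer language could do better, which is the trade-off every computable stand-in for AIC makes.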

In terms of applications, Gregory Chaitin, one of the originators of the core ideas of AIC, has proposed that the theory sheds light on questions of meta-mathematics, and specifically that it demonstrates that mathematics is a quasi-empirical pursuit capable of producing new methods rather than being idealistically derived from analytic first principles.… Read the rest

Structure and Causality in Political Revolutions

Can political theories be tested like scientific ones? And if they can, does it matter? Alexis Papazoglou argues in the New Republic that, even if they can be tested, testing matters less than other factors in a political theory’s success. In his signal case, the conflict between anti-globalist populists and the conventional international order, he questions whether the ongoing experiment will yield clear outcomes that somehow determine the viability of one theory versus the other. Papazoglou breaks down the conflict as parallel to the notion that scientific processes ultimately win on falsifiability and rationality: in science, as per Kuhn’s landmark The Structure of Scientific Revolutions, the process is driven more by paradigmatic agendas and powerful leaders than by calculated rationality.

The scientific process may have been all of those things, of course, and may continue to be so in the future, but there are ongoing developments that make it less likely that sociological factors will dominate. And this is why the comparison with political theories is perhaps wrongheaded. There may be a community of political theorists, but they are hardly the primary architects and spectators of politics, unlike science and scientists. We are all political actors, yet very few have the time or inclination to look carefully at the literature on the threat of a successful authoritarian Chinese civilization versus Western liberal democracy, for instance. But we are not all scientific actors, despite being governed by the reality of the world around us. Politics yells and seethes while science quietly attends a conference. Even the consequences of science are often so gradualistic in their unfolding that we barely notice them; see the astonishing progress on cancer survival in recent decades, and note the need for economic discounting for global climate change, where the slow creep of existential threat is somehow given a dollar value.… Read the rest

Incompressibility and the Mathematics of Ethical Magnetism

One of the most intriguing aspects of the current U.S. border crisis is the way that human rights and American decency get articulated in the public sphere of discourse. An initial pull is raw emotion and empathy, then there are counterweights where the long-term consequences of existing policies are weighed against the exigent effects of the policy, and then there are crackpot theories of “crisis actors” and whatnot as bizarro-world distractions. But, if we accept the general thesis of our enlightenment values carrying us ever forward into increasing rights for all, reduced violence and war, and the closing of the curtain on the long human history of despair, poverty, and hunger, we must also ask more generally how this comes to be. Steven Pinker certainly has rounded up some social theories, but what kind of meta-ethics might be at work that seems to push human civilization towards these positive outcomes?

Per the last post, I take the position that we can potentially formulate meaningful sentences about what “ought” to be done, and that those sentences are meaningful precisely because they are grounded in the semantics we derive from real-world interactions. How does this work? Well, we can invoke the Cornell Realists’ argument that the semantics of a word like “ought” is not as flexible as Moore’s Open Question argument suggests. Indeed, if we instead look at the natural world and the theories that we have built up about it (generally “scientific theories” but also, perhaps, “folk scientific ideas” or “developing scientific theories”), certain concepts take on the character of being so-called “joints of reality.” That is, they are less changeable than other concepts and become referential magnets that have an elite status among the concepts we use for the world.… Read the rest