Be Persistent and Evolve

If we think about the evolution of living things, we generally start from the idea that evolution requires replicators, variation, and selection. But what if we loosened that up to the more everyday semantics of the word “evolution,” as when we talk about the evolution of galaxies or of societies or of crystals? Each changes, grows, contracts, and has some kind of persistence that is mediated by a range of internal and external forces. For crystals, the availability of heat and access to the necessary chemicals are key. For galaxies, elements and gravity and nuclear forces are paramount. In societies, technological invention and social revolution overlay the human replicators and their biological evolution. Should we make a leap and just declare that there is some kind of impetus or law to the universe such that, when there are composable subsystems and composition constraints, there will be an exploration of the allowed state space for composition? Does this add to our understanding of the universe?

Wong et al. say exactly that in “On the roles of function and selection in evolving systems” in PNAS. The paper reminds me of the various efforts to explain genetic information growth given raw conceptions of entropy, and, indeed, some of those papers appear in the citations. It was once considered an intriguing problem how organisms become increasingly complex in the face of, well, the grinding dissolution of entropy. It wasn’t really that hard for most scientists: Earth receives an enormous load of solar energy that supports the push of informational systems towards negentropy. But, to the earlier point about composability and constraints, the energy arrives in proportions that support the persistence of systems that are complex.… Read the rest

Entanglements: Collected Short Works

Now available in Kindle, softcover, and hardcover versions, Entanglements assembles a decade of short works by author, scientist, entrepreneur, and inventor Mark William Davis.

The fiction includes an intimate experimental triptych on the evolution of sexual identities. A genre-defying poetic meditation on creativity and environmental holocaust competes with conventional science fiction about quantum consciousness and virtual worlds. A postmodern interrogation of the intersection of storytelling and film rounds out the collected works as a counterpoint to an introductory dive into the ethics of altruism.

The nonfiction is divided into topics ranging from literary theory to philosophical concerns of religion, science, and artificial intelligence. Legal theories are magnified to examine the meaning of liberty and autonomy. A qualitative mathematics of free will is developed over the course of two essays and contextualized as part of the algorithm of evolution. What meaning really amounts to is always a central concern, whether discussing politics, culture, or ideas.

The works show the author’s own evolution in his thinking about our entanglement with reality as driven by underlying metaphors that transect science, reason, and society. For Davis, metaphors and the constellations of words that help frame them are the raw materials of thought, and their evolution and refinement form the central narrative of our growth as individuals in a webwork of societies and systems.

Entanglements is for readers who are in love with ideas and the networks of language that support and innervate them. It is a metalinguistic swim along a polychromatic reef of thought where fiction and nonfictional analysis coexist like coral and fish in a greater ecosystem.

Mark William Davis is the author of three dozen scientific papers and patents in cognitive science, search, machine translation, and even the structure of art.… Read the rest

Time at Work

Time is a strange concept according to several strains of science and related philosophical concerns. We have this everyday, medium-macroscopic set of ideas about how there is an undiscovered country of the future, a now we are experiencing, and a past that we remember or model based on accumulated historical facts. When we venture into extensions of conceptual ideas like an infinite past or sequenced events, we reason about what their properties might be by excluding contradictory compositions of properties and using other kinds of limiting semantics to constrain a mental model of those concepts.

But that isn’t the weirder stuff. The weirder stuff is the result of a collision of measurement and scientific theory.

Take, for instance, the oft-described reversibility of Newtonian physics. We have an equation for an object’s motion that can be run backward in time. But entropy in large ensembles of things in motion is not reversible because of an odd property of energy dissipation into the environment that arises from micro-interactions. Some say this creates an “arrow of time” in the face of these reversible equations.

But this is an odd way of characterizing mathematical statements that represent the uniformity of physical interactions. The idea of “reversibility” is just a matter of a computational representation of processes that, in fact, always flow forward in time. Running t from 0 to -∞ in an equation has no real relationship to any physical phenomenon. So the reversibility of mathematical forms is just an interesting fact.
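
To see the point in symbols (my worked example, not from the original post): take one-dimensional free fall under gravity.

```latex
% Newtonian free fall and its time-reversed form:
\ddot{x}(t) = -g, \qquad x(t) = x_0 + v_0 t - \tfrac{1}{2} g t^2
% Substitute t -> -t:
x(-t) = x_0 - v_0 t - \tfrac{1}{2} g t^2
% This is again a solution of \ddot{x} = -g, now with initial velocity -v_0:
% the law is indifferent to the direction of t, even though the physical
% process it models only ever runs forward.
```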

We can bind up space and time, as well, which also provokes feelings of incongruity when we start to talk about gravitational effects on relative elapsed time, or relative speed effects.… Read the rest

A Learning Smorgasbord

Compliments of a discovery by Futurism, the paper The Autodidactic Universe, by a smorgasbord of contemporary science and technology thinkers, caught my attention for several reasons. First was Jaron Lanier as a co-author. I knew Jaron’s dad, Ellery, when I was a researcher at NMSU’s now-defunct Computing Research Laboratory. Ellery had returned to school to get his psychology PhD during retirement. In an odd coincidence, my brother, after becoming emancipated in his teens, rented a trailer next to the geodesic dome that Jaron helped design and that Ellery lived in. Ellery may have been his landlord, but I am not certain of that.

The paper is an odd piece of kit that I read over two days in fits and starts, with intervening powerlifting interludes (I recently maxed out my Bowflex and am considering next steps!). It initially has the feel of physicists reaching into machine learning as if the domain specialists clearly missed something that the hardcore physical scientists have known all along. But that concern dissipated fairly quickly, and the paper settled into showing isomorphisms between various physical theories and the state evolution of neural networks. OK, no big deal. Perhaps they were taken by the realization that the mathematics of tensors is a useful way to describe network matrices and gradient-descent learning. They then riffed on that and looked at the broader similarities between the temporal evolution of learning and quantum field theory, approaches to quantum gravity, and cosmological ideas.
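
To make the tensor-and-gradient-descent point concrete, here is a minimal sketch (my own toy, not code from the paper) of learning as a discrete-time evolution law acting on a network’s state:

```python
import numpy as np

# Toy illustration (my sketch, not the paper's code): gradient descent as a
# discrete-time evolution law for a network's state, here a weight vector w.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # observations
w_true = np.array([1.5, -2.0, 0.5])      # hidden "law" generating the data
y = X @ w_true

w = np.zeros(3)                           # initial state of the system
lr = 0.1                                  # learning rate ~ time step

for t in range(200):
    grad = X.T @ (X @ w - y) / len(X)     # gradient of mean-squared error
    w -= lr * grad                        # state update: w(t+1) = w(t) - lr * grad

print(np.round(w, 3))                     # approaches w_true
```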

The paper, being a smorgasbord, then investigates the time evolution of graphs using the lens of graph theory. The core realization, as I gleaned it, is that some graphs are more complex (visually as well as in the diversity of their internal connectivity) while others are pointlessly uniform or empty.… Read the rest
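
For a hedged illustration of that complexity contrast (my choice of measure, not necessarily the paper’s), the Shannon entropy of a graph’s degree distribution separates uniform and empty graphs from more diverse ones:

```python
import math
from collections import Counter

# Hypothetical complexity proxy (my choice, not necessarily the paper's):
# Shannon entropy of a graph's degree distribution. Empty and perfectly
# uniform graphs score zero; diverse connectivity scores higher.
def degree_entropy(edges, n):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.get(i, 0) for i in range(n))
    return -sum((c / n) * math.log2(c / n) for c in counts.values() if c)

print(degree_entropy([], 6))                                      # empty: 0.0
print(degree_entropy([(0,1),(1,2),(2,3),(3,4),(4,5),(5,0)], 6))   # uniform ring: 0.0
print(degree_entropy([(0,1),(0,2),(0,3),(0,4),(1,2)], 6))         # hub + isolate: > 0
```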

Causing Incoherence to Exist

I was continuing a discussion of Richard Carrier vs. the Apologists, but the format of the blog posting system made a detailed conversation difficult, so I decided to continue here. My core argument is that the premises of Kalam are incoherent. I also think some of the responses to it are as well.

But what do we mean by incoherent?

Richard interpreted that to mean logically impossible, but my intent was that incoherence is a property of the semantics of the words. Statements are incoherent when they don’t make sense or only make sense with a very narrow and unwarranted reading. The argument below follows a fairly standard analytic-tradition examination of the meaning of statements. I am currently fond of David Lewis’s school of thought on semantics, where the meaning of words exists as a combination of mild referential attachment and coherence within a network of other words, and where, importantly, some words within that network achieve what is called “reference magnetism”: they are tied to reality in significant ways and pull at the meaning of other words.

For instance, consider Premise 1 of a modern take on Kalam:

All things that begin to exist have a cause.

OK, so what does begin to exist mean? And how about cause? Let’s unpack “begin to exist” first. We might say, in our everyday world of people, that cars begin to exist at some point. But when is that point? For instance, is it latent in the design for the car? Is it when the body panels are attached on the assembly line? Is it when the final system is capable of car behavior? That is, when all the parts that were in fact designed are fully operational?… Read the rest

Two Points on Penrose, and One On Motivated Reasoning

Sir Roger Penrose is, without doubt, one of the most interesting polymaths of recent history. Even where I find his ideas fantastical, they are most definitely worth reading and understanding. Sean Carroll’s Mindscape podcast interview with Penrose from early January of this year is a treat.

I’ve previously discussed the Penrose-Hameroff conjectures concerning wave function collapse and their implication of quantum operations in the microtubule structure of the brain. I also used the conjecture in a short story. But the core driver for Penrose’s original conjecture, namely that algorithmic processes can’t explain human consciousness, has always been a claim in search of support. Equally difficult is pushing consciousness into the sphere of quantum phenomena that tend to show random, rather than directed, behavior. Randomness doesn’t clearly relate to the “hard problem” of consciousness, which is about the experience of being conscious.

But take the idea that, since mathematicians can see the truth of statements that Gödel incompleteness blocks formal systems from proving, our brains must be different from Turing machines or collections of them. Our brains are likely messy and not theorem-proving machines per se, despite operating according to logico-causal processes. Indeed, throw in an active analog to biological evolution based on variation-and-retention of ideas and insights, one that might actually have a bit of pseudo-randomness associated with it, and there is no reason to doubt that we are capable of the kind of system transcendence that Penrose is looking for.

Note that this doesn’t in any way impact the other horn of Penrose-Hameroff concerning the measurement problem in quantum theory, but there is no reason to suspect that quantum collapse is necessary for consciousness. It might flow the other way, though, and Penrose has created the Penrose Institute to look for experimental evidence of these effects.… Read the rest

Theoretical Reorganization

Sean Carroll of Caltech takes on the philosophy of science in his paper, Beyond Falsifiability: Normal Science in a Multiverse, as part of a larger conversation on modern theoretical physics and experimental methods. Carroll breaks down the problems with Popper’s falsification criterion and arrives at a more pedestrian Bayesian formulation for how to view science. Theories arise, theories get their priors amplified or deflated, that prior support changes due to—often, for Carroll—coherence with other theories and considerations, and, in the best case, the posterior support improves with better experimental data.
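
In standard textbook form, the updating rule Carroll has in mind is:

```latex
% Bayesian updating of credence in a theory T given data D:
P(T \mid D) \;=\; \frac{P(D \mid T)\, P(T)}{P(D)}
% P(T): the prior, where coherence with the rest of physics enters;
% P(D | T): the likelihood of the data under the theory;
% P(T | D): the posterior support that better experiments improve.
```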

Continuing with the previous posts’ work on expanding Bayes via considerations from algorithmic information theory (AIT), the non-continuous changes to a group of scientific theories that arrive with new theories or data require some better model than just adjusting priors. How exactly does coherence play a part in theory formation? If we treat each theory as a binary string that encodes a Turing machine, then the best theory, inductively, is the shortest machine that accepts the data. But we know that there is no machine that can compute that shortest machine, so there needs to be an algorithm that searches through the state space to try to locate the minimal machine. Meanwhile, the data may be varying, and the machine may need to incorporate other machines that help improve the coverage of the original machine or are driven by other factors, as Carroll points out:

We use our taste, lessons from experience, and what we know about the rest of physics to help guide us in hopefully productive directions.

The search algorithm is clearly not just a brute-force examination of every micro-variation in the consequences of changing bits in the machine. Instead, large reusable blocks of subroutines get reparameterized or reused with variation.… Read the rest
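
A toy rendering of that inductive criterion (my sketch; true Kolmogorov complexity is uncomputable, so any real search is heuristic) scores each candidate theory by model bits plus the bits needed to encode its residual errors, preferring the shortest total:

```python
import math

# Toy two-part description length (my sketch, not Carroll's or anyone's
# actual method): Cost(theory) = bits to state it + bits for its errors.
def description_length(params, residuals, bits_per_param=32):
    model_bits = bits_per_param * len(params)
    # crude residual code: larger errors cost more bits
    data_bits = sum(math.log2(1 + abs(r) * 1000) for r in residuals)
    return model_bits + data_bits

xs = list(range(20))
ys = [3 * x + 1 for x in xs]                    # data secretly linear

theories = {
    "constant": ([sum(ys) / len(ys)], lambda p, x: p[0]),
    "linear":   ([3.0, 1.0],          lambda p, x: p[0] * x + p[1]),
}
for name, (params, f) in theories.items():
    residuals = [y - f(params, x) for x, y in zip(xs, ys)]
    print(name, round(description_length(params, residuals), 1))
# The linear "machine" pays more model bits but wins on total length.
```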

Simulator Superputz

The simulation hypothesis is perhaps a bit more interesting than how to add clusters of neural network nodes to do a simple reference resolution task, but it is also less testable. That is the nature of big questions: they would otherwise have been resolved by now. Nevertheless, some theory and experimental analysis have been undertaken for the question of whether or not we are living in a simulation, all based on an assumption that the strangeness of quantum and relativistic realities might be a result of limited computing power in the grand simulator machine. For instance, in a virtual reality game, only the walls that you, as a player, can see need to be calculated and rendered. The walls that are out of sight exist only as a virtual map in the computer’s memory or persisted to longer-term storage. Likewise, the behavior of virtual microscopic phenomena need not be calculated insofar as the macroscopic results can be rendered, like the fire patterns in a virtual torch.
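
That rendering trick is essentially lazy evaluation. A minimal sketch, assuming a hypothetical world model (nothing here comes from an actual game engine):

```python
import random

# Minimal sketch of "render only what's observed" (hypothetical example):
# world regions are computed lazily, on first observation, and cached;
# unobserved regions are never simulated at all.
class LazyWorld:
    def __init__(self, seed=42):
        self.seed = seed
        self.rendered = {}                  # only observed cells exist here

    def observe(self, x, y):
        if (x, y) not in self.rendered:     # compute on demand
            rng = random.Random(hash((self.seed, x, y)))
            self.rendered[(x, y)] = rng.choice(["wall", "torch", "empty"])
        return self.rendered[(x, y)]

world = LazyWorld()
print(world.observe(0, 0))     # this cell now "exists"
print(len(world.rendered))     # 1 -- the rest of the universe was never computed
```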

So one way of explaining physics conundrums like delayed-choice quantum erasers, Bell’s inequality, or ER = EPR might be to claim that these sorts of phenomena are the results of a low-fidelity simulation necessitated by the limits of the simulator computer. I think the likelihood that this is true is low, however, because we can imagine that there exists an infinitely large cosmos that merely includes our universe simulation as a mote within it. Low-fidelity simulation constraints might give experimental guidance, but the results could equally be accommodated by accepting indeterminacy and non-locality as fundamental features of our universe.

It’s worth considering, however, what we should think about the nature of the simulator, given this potentially devious (and poorly coded) little Matrix that we find ourselves trapped in.… Read the rest

Gravity and the Dark Star

I began at 5 AM from the Broomfield Aloft hotel, strategically situated in a sterile “new urban” office park cum apartment complex along the connecting freeway between Denver and Boulder. The whole weekend was fucked in a way: college students across Colorado were moving in for a Monday start, half of Texas was here already, and most of Colorado planned to head north to the zone of totality. I split off I-25 around Loveland and had success using US 85 northbound through Cheyenne. Continuing up 85 was the original plan, but that fell apart when 85 came to a crawl in the vast prairie lands of Wyoming. I dodged south and east, then (dodging will be a continuing theme), and entered Nebraska’s panhandle with middling traffic.

I achieved totality on schedule north of Scottsbluff. And it was spectacular. A few fellow adventurers were hanging out along the outflow lane of an RV dump at a state recreation area. One guy flew his drone around a bit; maybe he wanted B-roll for other purposes. I got out fast, but not fast enough, and dodged my way through lane closures designed to provide access from feeder roads. The Nebraska troopers were great, I should add, always willing to wave to us science and spectacle immigrants. Meanwhile, SiriusXM spewed various Sibelius pieces that had “sun” in their names, while the Grateful Dead channel gave us a half dozen versions of Dark Star, the band’s quintessential jam song, dating to its early, psychedelic era.

Was it worth it? I think so, though one failed dodge that left me in a ten-mile bumper-to-bumper crawl in rural Nebraska with a full bladder tested my faith in the stellar predictability of gravity.… Read the rest

Quantum Field Is-Oughts

Sean Carroll’s Oxford lecture on Poetic Naturalism is worth watching (below). In many ways it just reiterates several common themes. First, it reinforces the is-ought barrier between values and observations about the natural world. It does so with particular depth, though, by identifying how coarse-grained theories at different levels of explanation can be equally compatible with quantum field theory. Second, and related, he shows how entropy is an emergent property of atomic theory and the interactions of quantum fields (that we think of as particles much of the time) and, importantly, that we can project the same notion of boundary conditions that result in entropy into the future, resulting in a kind of effective teleology. That is, there can be some boundary conditions for the evolution of large-scale particle systems that form into configurations that we can label purposeful or purposeful-like. I still like the term “teleonomy” to describe this alternative notion, but the language largely doesn’t matter except as an educational and distinguishing tool against the semantic embeddings of old scholastic monks.
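
A toy version of the coarse-graining point (my sketch, not Carroll’s): entropy is nowhere in the microdynamics; it appears once microstates are binned into macrostates and counted:

```python
from itertools import product
from math import log
from collections import Counter

# Toy coarse-graining (my sketch): 4 particles, each in the left or right
# half of a box. The microdynamics know nothing of entropy; it emerges when
# we bin microstates by the macrostate "number of particles on the left".
microstates = list(product("LR", repeat=4))            # 16 equally likely
macro = Counter(state.count("L") for state in microstates)

for n_left, W in sorted(macro.items()):
    print(f"{n_left} on the left: W = {W:2d}, S = ln W = {log(W):.2f}")
# The mixed macrostates (W = 6) carry the most entropy, which is why large
# systems drift toward them: more microstates, not any new force.
```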

Finally, the poetry aspect resolves in value theories of the world. Many are compatible with descriptive theories, and our resolution of them comes through opinion, reason, communication, and, yes, violence and war. No monopoly of policy theories, religious claims, or idealizations holds sway. Instead we have interests and collective movements, and all of the above, working together to define our moral frontiers.… Read the rest