Quintessence of Rust

¡Reconquista! is making the rounds looking for representation. This is a first for me. I’ve been strongly attracted to the idea of disintermediating publishing, music, film, transportation, business, and anything else that comes along; my Silicon Valley persona sees disruption as a virtue, for better or worse. But why not learn from the mainstream model for the trajectory of books and ideas?

Meanwhile, nothing sits still. I’m journeying through Steven Pinker’s latest data dump, Enlightenment Now. My bookshelf crawls with his books, including Learnability and Cognition, Language Learnability and Cognitive Development, The Stuff of Thought, Words and Rules, and The Language Instinct, while my Kindle app holds The Better Angels of Our Nature and the new tome. I also have a novel by his wife, Rebecca Newberger Goldstein, though I have never read any of her more scholarly materials on Spinoza.

Enlightenment Now (shouldn’t it have an exclamation point?) gives an upbeat shine to these days of social media anger and the jeremiads about a fracturing America. Pinker is a polished writer who should be required reading, if only for his clarity and perseverance in structuring support for mature theses. There is some overlap with Better Angels, but it’s easy to skip past the repetition and still find new insights.

And I have a new science fiction/cyberpunk series under development, tentatively titled Quintessence of Rust. Robots, androids, transhumans, AIs, and a future world where reality, virtual existences, and constructed fictions all vie for political and social relevance. Early experimentation with alternative voices and story arc tuning has shown positive results. There is even some concept art executed with an iPad Pro + Pencil combined with Procreate (amusing that a firm would choose that name!).


Instrumentality and Terror in the Uncanny Valley

I got an Apple HomePod the other day. I have several AirPlay speakers already, two in one house and a third in my separate office. The latter, a Naim Mu-So, combines AirPlay with internet radio and Bluetooth, but I mostly use it for the streaming radio features (KMozart, KUSC, Capital Public Radio, etc.). The HomePod’s Siri implementation combined with Apple Music lets me voice-control playlists and experiment with music that I wouldn’t generally have bothered to buy and own. I can now sample at my leisure without needing to broadcast via a phone or tablet or computer. Steve Reich, Bill Evans, Thelonious Monk, Bach organ mixes, variations of Tristan and Isolde, and, yesterday, when I asked for “workout music,” I was gifted with Springsteen’s Born to Run. I would never have associated it with working out, but now I have dying on the mean streets of New Jersey with Wendy in some absurd drag-race conflagration replaying over and over again in my head.

Right after setup, I had a strange experience. I was shooting random play thoughts to Siri, then refining them and testing the limits. There are many, as reviewers have noted. Items easily found in Apple Music occasionally fail for Siri on the HomePod, but simple requests and control of a few HomeKit devices work acceptably. The strange experience was my own trepidation over barking commands at the device, especially when I was repeating myself: “Hey Siri. Stop. Play Bill Evans. Stop. Play Bill Evans’ Peace Piece.” (Oh my, homophony, what will happen? It works.) I found myself treating Siri as a bit of a human being in that I didn’t want to tell her to do a trivial task that I had just asked her to perform. A person would become irritated, and we naturally avoid that kind of repetitious behavior when asking others to perform tasks for us. Unless there is an employer-employee relationship where that kind of repetition is part of the work duties, it is not something we do among friends, family, acquaintances, or co-workers. It is rude. It is demanding. It treats people insensitively as instruments for fulfilling one’s trivial goals.

I found this odd. I occasionally play video games with lifelike visual and auditory representations of characters, but I rarely ask them to do things that involve selecting from an open-ended collection of possibilities, since most interactions with non-player entities are channeled by the needs of the storyline. They are never “eerie” as the research on uncanny valley effects refers to them. This is likely mediated by context and expectations. I’m playing a game and it is on a flat projection. My expectations are never violated by the actions within the storyline.

But why then did Siri as a repetitious slave elicit such a concern?

There are only a handful of studies designed to better understand the nature of the uncanny valley eeriness effect. One of the more interesting studies investigates the relationship between our thoughts of death and the appearance of uncanny wax figures or androids. Karl MacDorman’s Androids as an Experimental Apparatus: Why Is There an Uncanny Valley and Can We Exploit It? investigates the relationship between Terror Management Theory and our reactions to eeriness in objects. Specifically, the work builds on the idea that we have developed different cultural and psychological mechanisms to distract ourselves from the anxiety of death concerns. These take different forms in their details but largely involve:

… a dual-component cultural anxiety buffer consisting of (a) a cultural world-view — a humanly constructed symbolic conception of reality that imbues life with order, permanence, and stability; a set of standards through which individuals can attain a sense of personal value; and some hope of either literally or symbolically transcending death for those who live up to these standards of value; and (b) self-esteem, which is acquired by believing that one is living up to the standards of value inherent in one’s cultural worldview.

This theory has been tested in various ways, largely through priming with death-related terms (coffin, buried, murder, etc.) and looking at the impact that exposure to these terms has on other decision-making after short delays. In this particular study, an android face and a similar human face were shown to study participants, and their reactions were examined for evidence that the android affected their subsequent choices. For instance, a charismatic speech versus a “relationship-oriented” speech by a politician: our terror response hypothetically causes us to prefer the charismatic leader more when we are under threat. Another form of testing involved ambiguous word-completion puzzles. For instance, the subject is presented with COFF_ _ and asked to choose an E or an I for the next letter (COFFEE versus COFFIN), or MUR _ _ R (MURMUR or MURDER). Other “uncanny” word sets, _ _ EEPY (CREEPY, SLEEPY) and ST _ _ _ GE (STRANGE, STORAGE), were also included, as were controls that had no such associations. The android presentation resulted in statistically significant increases in the uncanny word set as well as combined uncanny/death word presentations, though the death words alone were not statistically significant.
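As a concrete illustration, the completion tallying in such a study might look like the sketch below. The fragments and word pairs follow the ones described above, but the response data, function names, and category labels are all hypothetical:

```python
# Hypothetical scoring sketch for an ambiguous word-completion task.
# Each fragment admits a death- or uncanny-related completion and a
# neutral one; we count how often the charged completion was chosen.
TARGET_COMPLETIONS = {
    "COFF__": {"target": "COFFIN", "neutral": "COFFEE", "category": "death"},
    "MUR__R": {"target": "MURDER", "neutral": "MURMUR", "category": "death"},
    "__EEPY": {"target": "CREEPY", "neutral": "SLEEPY", "category": "uncanny"},
    "ST___GE": {"target": "STRANGE", "neutral": "STORAGE", "category": "uncanny"},
}

def score_responses(responses):
    """Tally charged (death/uncanny) versus neutral completions."""
    counts = {"death": 0, "uncanny": 0, "neutral": 0}
    for fragment, word in responses.items():
        spec = TARGET_COMPLETIONS[fragment]
        if word == spec["target"]:
            counts[spec["category"]] += 1
        else:
            counts["neutral"] += 1
    return counts

# One invented participant, hypothetically primed by the android face:
primed = {"COFF__": "COFFIN", "MUR__R": "MURMUR",
          "__EEPY": "CREEPY", "ST___GE": "STRANGE"}
print(score_responses(primed))  # {'death': 1, 'uncanny': 2, 'neutral': 1}
```

The actual study then compares such tallies statistically between the android-primed and human-face groups.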

And what about my fear of treating others instrumentally? It may fall in a similar category, but due to the “set of standards through which individuals can attain a sense of personal value.” I am sensitive to mistreating others as both a barely-conscious recognition of their humanity and as a set of heuristic guidelines that fire automatically as a form of nagging conscience. I will note, however, that after a few days I appear to have become desensitized to the concern. Siri, please turn off the damn noise.

Black and Gray Boxes with Autonomous Meta-Cognition

Vijay Pande of VC Andreessen Horowitz (who passed on my startups twice but, hey, it’s just business!) has a relevant article in The New York Times concerning fears of the “black box” of deep learning and related methods: is the lack of explainability and the limited capacity for interrogating the underlying decision making a deal-breaker for applications to critical areas like medical diagnosis or parole decisions? His point is simple, and related to the previous post’s suggestion of the potential limitations of our capacity to truly understand many aspects of human cognition. Even the doctor may only be able to point to a nebulous collection of clinical experiences when it comes to certain observational aspects of the job, as in reading images for indicators of cancer. At least the algorithm has been trained on a significantly larger collection of data than the doctor could ever encounter in a professional lifetime.

So the human is almost as much a black box (maybe a gray box?) as the algorithm. One difference that needs to be considered, however, is that the deep learning algorithm might make unexpected errors when confronted with unexpected inputs. The classic example from the early history of artificial neural networks involved a DARPA test of detecting military tanks in photographs. The story, somewhere between apocryphal and legendary, holds that there was a difference in the cloud cover between the tank images and the non-tank images. The end result was that the system performed spectacularly on the training and test data sets but then failed miserably on new data that lacked the cloud cover factor. I recently recalled this slightly differently, substituting film grain for the cloudiness. In any case, it became a discussion point about the limits of data-driven learning, showing how radically incorrect solutions can arise without a careful understanding of how the systems work.

How can the fears of radical failure be reduced? In medicine we expect automated decision making to be backed up by a doctor who serves as a kind of meta-supervisor. When the diagnosis or prognosis looks dubious, the doctor will order more tests or countermand the machine. When the parole board sees the parole recommendation, they always have the option of ignoring it based on special circumstances. In each case, it is the presence of some anomaly in the recommendation or the input data that would lead to reconsideration. It is certainly possible to automate that scrutiny at a meta-level. In machine learning, statistical regularization is used to damp the influence of noisy or outlying data elements in an effort to prevent overtraining or overfitting. In much the same way, the regularization process can provide warnings and clues about data viability. And, in turn, unusual outputs that are statistically unlikely given the history of the machine’s decisions can trigger warnings about anomalous results.
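A minimal sketch of that last idea, assuming nothing fancier than a z-score test of a new output against the history of past decisions; the threshold and the score data are illustrative inventions, not anything from a deployed system:

```python
# Meta-level scrutiny sketch: flag any new model output score that is
# statistically unlikely given the history of the machine's decisions.
import statistics

def flag_anomalous(history, new_score, z_threshold=3.0):
    """Return True if new_score lies more than z_threshold standard
    deviations from the mean of the historical scores."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_score != mean
    return abs(new_score - mean) / stdev > z_threshold

# Hypothetical history of diagnostic confidence scores:
history = [0.71, 0.68, 0.74, 0.70, 0.69, 0.72, 0.73, 0.70]
print(flag_anomalous(history, 0.71))  # within the usual range -> False
print(flag_anomalous(history, 0.12))  # far outside the history -> True
```

In practice the flag would route the case to the human meta-supervisor rather than block the decision outright.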

Deep Simulation in the Southern Hemisphere

I’m unusually behind in my postings due to travel. I’ve been prepping for, and am now deep inside, a fresh pass through New Zealand after two years away. The complexity of the place seems to have a certain draw for me that has lured me back, yet again, to backcountry tramping amongst the volcanoes and glaciers, and to leisurely beachfront restaurants painted with eruptions of summer flowers fueled by the regular rains.

I recently wrote a technical proposal that rounded up a number of the most recent advances in deep learning neural networks. In each case, as with Google’s transformer architecture, there is a modest enhancement based on a recognized deficit in the performance of one of two broad types of networks, recurrent and convolutional.

An old question is whether we learn anything about human cognition if we just simulate it using some kind of automatically learning mechanism. That is, if we use a model acquired through some kind of supervised or unsupervised learning, can we say we know anything about the original mind and its processes?

We can at least say that the learning methodology appears to be capable of achieving the technical result we were looking for. But it also might mean something a bit different: that there is not much more interesting going on in the original mind. In this radical corner sits the idea that cognitive processes in people are tactical responses left over from early human evolution. All you can learn from them is that they may be biased and tilted towards that early human condition, but beyond that things just are the way they turned out.

If we take this position, then, we might have to discard certain aspects of the social sciences. We can no longer expect to discover some hauntingly elegant deep structure to our minds, language, or personalities. Instead, we are just about as deep as we are shallow, nothing more.

I’m alright with this, but think it is premature to draw the more radical conclusion. In the same vein that there are tactical advances—small steps—that improve simple systems in solving more complex problems, the harder questions of self-reflectivity and “general AI” still have had limited traction. It’s perfectly reasonable that upon further development, the right approaches and architectures that do reveal insights into deep cognitive “simulation” will, in turn, uncover some of that deep structure.

The Universal Roots of Fantasyland

Intellectual history and cultural criticism always teeter on the brink of totalism. So it was when Christopher Hitchens was forced to defend the hyperbolic subtitle of God Is Not Great: How Religion Poisons Everything. The complaint was always the same: everything, really? Or when Neil Postman downplayed the early tremors of the internet in his 1985 Amusing Ourselves to Death. Email couldn’t be anything more than another movement towards entertainment and celebrity. So it is no surprise that Kurt Andersen’s Fantasyland: How America Went Haywire: A 500-Year History is open to similar charges.

Andersen’s thesis is easily digestible: we built a country on fantasies. From the earliest charismatic stirrings of the Puritans to the patent medicines of the 19th century, through to the counterculture of the 1960s, and now with an incoherent insult comedian and showman as president, America has thrived on inventing wild, fantastical narratives that coalesce into movements. Andersen’s detailed analysis is breathtaking as he pulls together everything from linguistic drift to the psychology of magical thinking to justify his thesis.

Yet his thesis might be too narrow. It is not a uniquely American phenomenon. When Andersen mentions cosplay, he fails to identify its Japanese contributions, including the word itself. In the California Gold Rush, he sees economic fantasies driving a generation to unmoor themselves from their merely average lives. Yet the conquistadores had sought to enrich themselves, God, and country while Americans were forming their shining cities on hills. And in mid-19th-century Europe, while the Americans panned in the Sierra, romanticism was throwing off the oppressive yoke of Enlightenment rationality as the West became increasingly exposed to enigmatic Asian cultures. By the 20th century, Weimar Berlin was a hotbed of cultural fantasies that dovetailed with the rise of Nazism and a fantastical theory of race, German volk culture, and Indo-European mysticism. In India, film has been the starting point for many politicians. The religion of Marxism led to Heroic Realism as the stained glass of the Communist cathedrals.

Is America unique, or is it simply human nature to strive for what has not yet existed and, in so doing, create and live in alternative fictions that transcend the mundanity of ordinary reality? If the latter, then Andersen’s thesis still stands, but not as a singular evolution. Cultural change is driven by equal parts fantasy and reality. Exploration and expansion were paired with fantastical justifications from religious and literary sources. The growth of an entertainment industry was two-thirds market-driven commerce and one-third creativity. The World Wide Web was originally built to exchange scientific information but was exchanging porn from nearly the moment it began.

To be fair, Chapter 32 (America Versus the Godless Civilized World: Why Are We So Exceptional?) provides an argument for the exceptionalism of America, at least in terms of religiosity. The pervasiveness of religious belief in America is unlike nearly all other developed nations, and the variation and creativity of those beliefs seem to defy economic and social science predictions about how religions shape modern life across nations. In opposition, however, is a following chapter on postmodernism in academia that again shows how a net wider than America is needed to explain anti-rationalist trends. From Foucault and Continental philosophy we see the trend towards fantasy; Anglo-American analytical philosophy has determinedly moved towards probabilistic formulations of epistemology and more and more scientism.

So what is the explanation of irrationality, whether uniquely American or more universal? In Fantasyland Andersen pins the blame on the persistence of intense religiosity in America. Why America alone remains a mystery, but the consequence is that the adolescent transition from belief in fairytales never occurs and there is a bleed-over effect into the acceptance of alternative formulations of reality:

The UC Berkeley psychologist Alison Gopnik studies the minds of small children and sees them as little geniuses, models of creativity and innovation. “They live twenty-four/seven in these crazy pretend worlds,” she says. “They have a zillion different imaginary friends.” While at some level, they “know the difference between imagination and reality…it’s just they’d rather live in imaginary worlds than in real ones. Who could blame them?” But what happens when that set of mental habits persists into adulthood too generally and inappropriately? A monster under the bed is true for her, the stuffed animal that talks is true for him, speaking in tongues and homeopathy and vaccines that cause autism and Trilateral Commission conspiracies are true for them.

This analysis extends the umbrella of religious theories built around instincts for perceiving purposeful action to an unceasing escalation of imaginary realities to buttress these personified habits of mind. It’s a strange preoccupation for many of us, though we can be accused of being coastal elites (or worse) just for entertaining such thoughts.

Fantasyland doesn’t end on a positive note, but I think the broader thesis just might. We are all so programmed, I might claim. Things slip and slide, politics seesaw, but there seems to be a gradual unfolding of more rights and more opportunity for the many. Theocracy has always lurked in the basement of the American soul, but the atavistic fever dream has been eroded by a cosmopolitan engagement with the world. Those who long for utopia get down to the business of non-zero-sum interactions with a broader clientele and drift away, their certitude fogging until it lifts and a more conscientious idealization of what is and what can be takes over.

I, Robot and Us

What happens if artificial intelligence (AI) technologies become significant economic players? The topic has come up in various ways for the past thirty years, perhaps longer. One model, the so-called technological singularity, posits that self-improving machines may be capable of a level of knowledge generation and disruption that will eliminate humans from economic participation. How far out this singularity might be is a matter of speculation, but I have my doubts that we really understand intelligence enough to start worrying about the impacts of such radical change.

Barring something essentially unknowable because we lack sufficient priors to make an informed guess, we can use evidence of the impact of mechanization on certain economic sectors, like agribusiness or transportation manufacturing, to try to plot out how mechanization might impact other sectors. Aghion, Jones, and Jones’ Artificial Intelligence and Economic Growth takes a deep dive into the topic. The math is not particularly hard, though the reasons for many of the equations are tied up in macro- and microeconomic theory that requires a specialist’s understanding to fully grok.

Of special interest are the potential limiting roles of inputs and organizational competition. For instance, automation speed-ups may be limited by human bottlenecks within the economic activity. This may extend even further due to fundamental physical limits on a given activity. The pointed example is that power plants are limited by thermodynamics; no amount of additional mechanization can change that. Other factors related to inputs or the complexity of a certain stage of production may also drag economic growth down to a capped, limiting level.
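A toy way to see the bottleneck point, my own Amdahl’s-law-style illustration rather than an equation from the paper: if only a fraction of a production process can be automated, total gains are capped by the remainder, no matter how fast the automated part becomes.

```python
# If a fraction p of a process is sped up by a factor s, the overall
# speedup is 1 / ((1 - p) + p / s), which is bounded above by
# 1 / (1 - p) as s grows without limit.

def overall_speedup(p, s):
    """Speedup of the whole process when fraction p runs s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

for s in (10, 100, 1_000_000):
    print(s, round(overall_speedup(0.9, s), 2))
# Even a millionfold speedup on 90% of the work leaves the whole
# process capped near 1 / (1 - 0.9) = 10x.
```

The non-automatable 10% plays the role of the thermodynamic or human-input limits discussed above.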

Organizational competition and intellectual property considerations come into play as well. While the authors suggest that corporations will remain relevant, they should become more horizontal by eliminating much of the middle tier of management and outsourcing components of their productivity. The labor consequences are less dire than some singularity speculations: certain low-knowledge workers may gain influence in economic activities because they remain essential to the production chain, and their value and salaries rise. They become more fluid, as well, because they can operate as freelancers and thus have a broader impact.

This kind of specialization and outsized knowledge influence, whether by low- or high-knowledge workers, is a kind of singularity in itself. Consider the influence of the printing press in disseminating knowledge, or the impact of radio and television. The economic costs of moving humans around to convey ideas or to entertain evaporate or shrink, but the influence is then limited to highly regarded specialists who are competing to get a slice of the public’s attention. Similarly, the knowledge worker who is not easily replaced by a machine becomes the star of the new AI economy. This may be happening already, with rumors of astronomical compensation for certain AI experts percolating out of Silicon Valley.

Simulator Superputz

The simulation hypothesis is perhaps a bit more interesting than how to add clusters of neural network nodes to do a simple reference resolution task, but it is also less testable. This is the nature of big questions since they would otherwise have been resolved by now. Nevertheless, some theory and experimental analysis has been undertaken for the question of whether or not we are living in a simulation, all based on an assumption that the strangeness of quantum and relativistic realities might be a result of limited computing power in the grand simulator machine. For instance, in a virtual reality game, only the walls that you, as a player, can see need to be calculated and rendered. The other walls that are out of sight exist only as a virtual map in the computer’s memory or persisted to longer-term storage. Likewise, the behavior of virtual microscopic phenomena need not be calculated insofar as the macroscopic results can be rendered, like the fire patterns in a virtual torch.

So one way of explaining physics conundrums like delayed choice quantum erasers, Bell’s inequality, or ER = EPR might be to claim that these sorts of phenomena are the results of a low-fidelity simulation necessitated by the limits of the simulator computer. I think the likelihood that this is true is low, however, because we can imagine that there exists an infinitely large cosmos that merely includes our universe simulation as a mote within it. Low-fidelity simulation constraints might give experimental guidance, but the results could also be supported by just living with the indeterminacy and non-locality as fundamental features of our universe.

It’s worth considering, however, what we should think about the nature of the simulator given this potentially devious (and poorly coded) little Matrix that we find ourselves trapped in. There are some striking alternatives. To make this easier, I’ll use the following abbreviations:

S = Simulator (creator of simulation)

U = Simulation

SC = Simulation Computer (whatever the simulation runs on)

MA = Morally Aware (the perception, rightly or wrongly, that judgments and choices influence simulation-level phenomena)

US = Simulatees

CA = Conscious Awareness (the perception that one is aware of stuff)

So let’s get started:

  1. S is unaware of events in U due to limited monitoring resources.
  2. S is unaware of events in U due to lack of interest.
  3. S is incapable of conscious awareness of U (S is some kind of automatic system).
  4. It seems unlikely that limited monitoring resources would be a constraint given the scale and complexity of U because they would be of a lower cost than U and simply tuned to filter active categories of interest, so S must either lack interest (2) or be incapable of awareness (3).
  5. We can dismiss (3) due to an infinite regress on the nature of the simulator in general, since the origin of the Simulation Hypothesis is the probability that we humans will create ever-better simulations in the future. There is no other simulation hypothesis that involves pure automation of S lacking CA and some form of MA.
  6. Given (2), why would S lack interest in U? Perhaps S created a large ensemble of universes and is only interested in long-term outcomes. But maybe S is just a putz.
  7. For (6), if S is MA, then S is wrong to create U that supports the evolution of US insofar as S allows for CA and MA in US combined with radical uncertainty in U.
  8. Conclusion: S is a putz or this ain’t a simulation.

Theists can squint and see the problem here. We might add 7.5: it’s certainly wrong of S to actively burn, drown, imprison, enslave, and murder CA and MA US. If S is doing 7.5, that makes S a superputz.

In my novel, Teleology, the creation of another simulated universe by a first one was a religious imperative. The entities saw, once in contact with their S, that it must be the ultimate fulfillment of purpose for them to become S. Yet their S was very concerned with their U and would have objected to even cleanly pulling the plug on their U. He did lack instrumentation (1) into U, but built a great deal of it after discovering that there was evidence of CA. He was no putz.

Ambiguously Slobbering Dogs

I was initially dismissive of this note from Google Research on improving machine translation via deep learning networks by adding in a sentence-level network. My goodness, they’ve rediscovered anaphora and co-reference resolution! Next thing they will try is some kind of network-based slot-filler ontology to carry gender metadata. But their goal was to add a framework to their existing recurrent neural network architecture that would support a weak, sentence-level resolution of translational ambiguities while still allowing the TPU/GPU accelerators they have created to function efficiently. It’s a hack, but one that potentially solves yet another corner of the translation problem and might yield a few percent further improvement in translation quality.

But consider the following sentences:

The dog had the ball. It was covered with slobber.

The dog had the ball. It was thinking about lunch while it played.

In these cases, the anaphora gets resolved by semantics, and the resolution seems largely an automatic and subconscious process to us as native speakers. If we had to translate these into a second language, however, we would be able to articulate specific reasons for correctly assigning the “It” to the ball in the first pair. Well, it might be possible for the dog to be covered with slobber, but we would guess the sentence writer would intentionally avoid that ambiguity. The second pair could conceivably be ambiguous if, in the broader context, the ball were some intelligent entity controlling the dog. Still, when our guesses are limited to the sentence pairs in isolation, we assign the obvious interpretations. Moreover, we can resolve giant, honking passage-level ambiguities with ease, where the author is showing off by not resolving the co-referents until obscenely late in the text.

In combination, we can see the obvious problem with sentence-level “attention” calculations. The relevant context has to move with the text and can stretch well beyond a sentence.
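To see why semantics carries the load here, consider a deliberately hard-coded, hypothetical heuristic for the slobber example. Real systems learn these compatibility preferences from data; every table, name, and preference below is invented purely for illustration:

```python
# Toy pronoun resolver: pick an antecedent for "It" by checking which
# candidates are semantically compatible with the predicate, then
# applying a writer's-intent preference when both are possible.

# Which predicates plausibly apply to which referents (invented data).
COMPATIBLE = {
    "covered with slobber": {"ball", "dog"},   # both possible
    "thinking about lunch": {"dog"},           # needs an animate referent
}
# Tie-breaker: writers avoid the ambiguous reading (invented preference).
PREFERRED = {"covered with slobber": "ball"}

def resolve_it(candidates, predicate):
    """Return the chosen antecedent for 'It', or None if none fit."""
    viable = [c for c in candidates if c in COMPATIBLE.get(predicate, set())]
    if len(viable) > 1 and predicate in PREFERRED:
        return PREFERRED[predicate]
    return viable[0] if viable else None

print(resolve_it(["dog", "ball"], "covered with slobber"))  # ball
print(resolve_it(["dog", "ball"], "thinking about lunch"))  # dog
```

The point of the Google hack is to approximate this kind of judgment with a sentence-level network rather than explicit tables, and the passage-level cases remain out of reach for both.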

Gravity and the Dark Star

Totality in Nebraska

I began at 5 AM from the Broomfield Aloft hotel, strategically situated in a sterile “new urban” office park cum apartment complex along the connecting freeway between Denver and Boulder. The whole weekend was fucked in a way: colleges across Colorado were moving in for a Monday start, half of Texas was here already, and most of Colorado planned to head north to the zone of totality. I split off I-25 around Loveland and had success using US 85 northbound through Cheyenne. Continuing up 85 was the original plan, but that fell apart when 85 came to a crawl in the vast prairie lands of Wyoming. I dodged south and east, then, (dodging will be a continuing theme) and entered Nebraska’s panhandle with middling traffic.

I achieved totality on schedule north of Scottsbluff. And it was spectacular. A few fellow adventurers were hanging out along the outflow lane of an RV dump at a state recreation area. One guy flew his drone around a bit. Maybe he wanted B roll for other purposes. I got out fast, but not fast enough, and dodged my way through lane closures designed to provide access from feeder roads. The Nebraska troopers were great, I should add, always willing to wave to us science and spectacle immigrants. Meanwhile, SiriusXM spewed various Sibelius pieces that had “sun” in their name, while the Grateful Dead channel gave us a half dozen versions of Dark Star, the band’s quintessential jam song, dating to their early, psychedelic era.

Was it worth it? I think so, though one failed dodge that left me in a ten mile bumper-to-bumper crawl in rural Nebraska with a full bladder tested my faith in the stellar predictability of gravity. Gravity remains an enigma in many ways, though the perfection of watching the corona flare around the black hole sun shows just how unenigmatic it can be in the macroscopic sphere.

But reconciling gravity with quantum-scale phenomena remains remarkably elusive and was the beginning of the decades-long detour through string theory, which, admittedly, some have characterized as “fake science” due to our inability to find testable aspects of the theory. Yet there are some interesting recent developments that, though not directly string theoretic, have a relationship to the quantum symmetries that, in turn, led to stringiness.

So I give you Juan Maldacena and Leonard Susskind’s suggestion that ER = EPR. This is a rather remarkable conclusion that unites quantum and relativistic realities, based on a careful look at the symmetry between two theoretical outcomes at the two different scales. So how does it work? In a nutshell, the claim is that quantum entanglement is identical to relativistic entanglement. Just like the science fiction idea of wormholes connecting distant things together to facilitate faster-than-light travel, ER connects singularities like black holes together. And the correlations that occur between black holes are just like the correlations between entangled quanta. Neither permits FTL travel or signaling, due to Lorentzian traversability issues in the former case and Bell’s inequality in the latter.
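On the Bell’s inequality side, a standard CHSH computation (a textbook illustration, not anything from Maldacena and Susskind’s paper) shows how entangled-pair correlations exceed the bound that any local classical theory must obey:

```python
# For a spin-singlet pair, quantum mechanics predicts correlation
# E(a, b) = -cos(a - b) between measurement angles a and b. The CHSH
# combination of four such correlations is capped at 2 for any local
# classical theory, but entangled pairs reach 2 * sqrt(2).
import math

def E(a, b):
    """Quantum correlation for a singlet state at analyzer angles a, b."""
    return -math.cos(a - b)

# The standard angle choices that maximize the violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(round(S, 3), ">", 2)  # 2.828 > 2
```

The correlations are real, yet no signal can be sent through them, which is the “latter” half of the non-signaling claim above.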

Today was just a shadow, classically projected, maybe just slightly twisted by the gravity wells, not some wormhole wending its way through space and time. It is worth remembering, though, that the greatest realization of 20th century physics is that reality really isn’t in accord with our everyday experiences. Suns and moons kind of are, briefly and ignoring fusion in the sun, but reality is almost mystically entangled with itself, a collection of vibrating potentialities that extend out everywhere, and then, unexpectedly, the potentialities are connected together in another way that defies these standard hypothetical representations and the very notion of space connectivities.

Fantastical Places and the Ethics of Architecture

Lemuria was a hypothetical answer to the problem of lemurs in Madagascar and India. It was a connective tissue for the naturalism observed during the formative years of naturalism itself. Only a few years had passed since Darwin’s On the Origin of Species came out, and the patterns of observations that drove Darwin’s daring hypothesis were resonating throughout the European intellectual landscape. Years later, the Pangaea supercontinent would replace the temporary placeholder of Lemuria, and the concept would be relegated to mythologized abstractions alongside Atlantis and, well, Hyperborea.

I’m in Lemuria right now, but it is a different fantastical place. In this case, I’m in the Lemuria Earthship Biotecture near Taos, New Mexico. I rented it out on a whim. I needed to travel to Colorado to drop off some birthday cards for our son and thought I might come by and observe this ongoing architectural experiment that I’ve been tracking for decades but never visited. I was surprised to find that I could rent a unit.

First, though, you have to get here, which involves crossing the Rio Grande Gorge:

Once I arrived, I encountered throngs of tourists, including an extended Finnish family that I had to eavesdrop on to guess the language they were speaking. The Earthship project has a long history, but it is always a history of trying to create sustainable, off-the-grid structures that maximize the use of disposable aspects of our society. So the walls are tires filled with dirt or cut wine bottles embedded in cement. Photovoltaics charge batteries and gray water (shower and washing water) is reused to flush toilets and grow food plants. Black water (toilet water) flows into leachfields that support landscape plants. Rainwater is captured from the roof to fill the gray water reservoirs. And, amazingly, it all works very well.

Here’s my video on arrival at Lemuria. There is wind noise when I’m on the roof, but it dies off when I get inside.

Architecture and ethics have always had an uneasy truce. At the most basic, there are the ethical limits of not deceiving clients about materials, costs, or functionality. But the harder questions build around aesthetic value versus functional value. A space that is sculptural like a Calatrava train station or Frank Gehry music hall is a space that values aesthetics at least as highly as functionality. There is no reusability in curved magnesium panels.

Where experiments like the Earthship thrive is in finding a weighted balance that gives functional and sustainable solutions precedence over the purely conceptual aspects of architecture. What could be is grounded by ethical stewardship.

Lemuria is standing up to a heavy downpour quite well right now as the monsoonal storms lash over the high plateau. I think I can hear the water flowing into the cisterns and an occasional pump pushing water through filters. It almost seems more fantastical that we don’t build houses like this.