Running, Ancient Roman Science, Arizona Dive Bars, and Lightning Machine Learning

I just returned from running in Chiricahua National Monument, Sedona, the Painted Desert, and Petrified Forest National Park, taking advantage of the late spring before the heat becomes too intense. Even so, by the time I reached Massai Point in Chiricahua through 90+ degree canyons, I had only around a liter of water left, and I still had to slow down and walk out after running short of liquid nourishment two-thirds of the way down. There is an eerie, uncertain nausea that hits when hydration runs low under high stress. Cliffs and steep ravines take on a wolfish quality. The mind works to keep the feet from stumbling, and the lips grow serrated edges of parched skin that bite off without relieving the dryness.

I would remember that days later as I prepped to overnight with a wilderness permit in Petrified Forest only to discover that my Osprey Exos pack frame had somehow been bent, likely due to excessive manhandling by airport checked baggage weeks earlier. I considered my options and drove eighty miles to Flagstaff to replace the pack, then back again.

I arrived in time to join Dr. Richard Carrier in an unexpected dive bar in Holbrook, Arizona as the sunlight turned to amber and a platoon of Navajo pool sharks descended on the place for billiards and beers. I had read that Dr. Carrier would be stopping there, and it was convenient to my next excursion, so I picked up signed copies of his new book, The Scientist in the Early Roman Empire, as well as his classic, On the Historicity of Jesus, which remains part of the controversial samizdat of so-called “Jesus mythicism.”

If there is a distinguishing characteristic of OHJ, it is the application of Bayes’ theorem to the problems of historical method.… Read the rest
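Carrier’s Bayesian apparatus reduces, at bottom, to Bayes’ theorem applied to hypotheses about the past. A minimal sketch in Python, with every prior and likelihood invented purely for illustration (none of these numbers come from Carrier):

```python
# Bayes' theorem for a historical hypothesis h given evidence e:
#   P(h|e) = P(h) P(e|h) / (P(h) P(e|h) + P(~h) P(e|~h))

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis h after seeing evidence e."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Invented numbers: prior of 0.3, evidence twice as likely if h is true.
print(posterior(0.3, 0.8, 0.4))  # → 0.4615...
```

The interesting arguments in the book are, of course, about what the inputs should be, not about the arithmetic itself.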

Instrumentality and Terror in the Uncanny Valley

I got an Apple HomePod the other day. I have several AirPlay speakers already, two in one house and a third in my separate office. The latter, a Naim Mu-So, combines AirPlay with internet radio and Bluetooth, but I mostly use it for the streaming radio features (KMozart, KUSC, Capital Public Radio, etc.). The HomePod’s Siri implementation, combined with Apple Music, lets me voice-control playlists and experiment with music that I wouldn’t generally have bothered to buy and own. I can now sample at my leisure without needing to broadcast via a phone or tablet or computer. Steve Reich, Bill Evans, Thelonious Monk, Bach organ mixes, variations of Tristan and Isolde, and, yesterday, when I asked for “workout music,” I was gifted with Springsteen’s Born to Run, which I would never have associated with working out, but now I have dying on the mean streets of New Jersey with Wendy in some absurd drag-race conflagration replaying over and over again in my head.

Right after setup, I had a strange experience. I was shooting random play thoughts to Siri, then refining them and testing the limits. There are many, as reviewers have noted. Items easily found in Apple Music occasionally fail for Siri on the HomePod, but simple requests and control of a few HomeKit devices work acceptably. The strange experience was my own trepidation over barking commands at the device, especially when I was repeating myself: “Hey Siri. Stop. Play Bill Evans. Stop. Play Bill Evans’ Peace Piece.” (Oh my, homophony, what will happen? It works.) I found myself treating Siri as a bit of a human being in that I didn’t want to tell her to do a trivial task that I had just asked her to perform.… Read the rest

Black and Gray Boxes with Autonomous Meta-Cognition

Vijay Pande of VC Andreessen Horowitz (who passed on my startups twice but, hey, it’s just business!) has a relevant article in The New York Times concerning fears of the “black box” of deep learning and related methods: is the lack of explainability and the limited capacity for interrogating the underlying decision-making a deal-breaker for applications in critical areas like medical diagnosis or parole decisions? His point is simple, and related to the previous post’s suggestion of the potential limits of our capacity to truly understand many aspects of human cognition. Even the doctor may only be able to point to a nebulous collection of clinical experiences when it comes to certain observational aspects of their job, as in reading images for indicators of cancer. At least the algorithm has been trained on a significantly larger collection of data than the doctor could ever encounter in a professional lifetime.

So the human is almost as much a black box (maybe a gray box?) as the algorithm. One difference that needs to be considered, however, is that the deep learning algorithm might make unexpected errors when confronted with unexpected inputs. The classic example from the early history of artificial neural networks involved a DARPA test of detecting military tanks in photographs. The story, somewhere between apocryphal and legendary, holds that there was a difference in cloud cover between the tank images and the non-tank images. The end result was that the system performed spectacularly on the training and test data sets but then failed miserably on new data that lacked the cloud-cover confound. I recently recalled this slightly differently and substituted film grain for the cloudiness. In any case, it became a discussion point about the limits of data-driven learning, showing how radically incorrect solutions can arise without a careful understanding of how the systems work.… Read the rest
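The tank story is easy to reconstruct in miniature. The sketch below is entirely hypothetical: each “image” is reduced to two invented features, and a one-feature threshold classifier is trained on data where a spurious brightness feature (the cloud cover) perfectly tracks the label. The learner happily latches onto brightness, then collapses on fresh data where the confound is absent:

```python
import random

random.seed(0)

def make_data(n, spurious):
    """Each "image" reduced to two features: (brightness, tank_score)."""
    data = []
    for _ in range(n):
        tank = random.random() < 0.5
        score = random.gauss(1.0 if tank else 0.0, 0.5)  # weak real signal
        if spurious:  # cloudy (dark) skies co-occur with tanks in training
            bright = random.gauss(0.2 if tank else 0.8, 0.1)
        else:         # confound absent in fresh data
            bright = random.gauss(0.5, 0.2)
        data.append(((bright, score), tank))
    return data

def best_stump(data):
    """Pick the (feature, threshold, sign) with the best training accuracy."""
    best = (0.0, 0, 0.0, 1)
    for f in (0, 1):
        for (x, _) in data:
            t = x[f]
            for sign in (1, -1):
                acc = sum(((sign * (xi[f] - t) > 0) == y)
                          for (xi, y) in data) / len(data)
                if acc > best[0]:
                    best = (acc, f, t, sign)
    return best

def accuracy(stump, data):
    _, f, t, sign = stump
    return sum(((sign * (x[f] - t) > 0) == y) for (x, y) in data) / len(data)

train = make_data(100, spurious=True)
fresh = make_data(100, spurious=False)
stump = best_stump(train)
print("chosen feature:", ["brightness", "tank score"][stump[1]])
print("training accuracy:", accuracy(stump, train))   # near-perfect
print("fresh-data accuracy:", accuracy(stump, fresh)) # near chance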

Deep Simulation in the Southern Hemisphere

I’m unusually behind in my postings due to travel. I’ve been prepping for and now deep inside a fresh pass through New Zealand after two years away. The complexity of the place seems to have a certain draw for me that has lured me back, yet again, to backcountry tramping amongst the volcanoes and glaciers, and to leisurely beachfront restaurants painted with eruptions of summer flowers fueled by the regular rains.

I recently wrote a technical proposal that rounded up a number of the most recent advances in deep learning neural networks. In each case, as with Google’s transformer architecture, the advance is a modest enhancement motivated by a recognized deficit in the performance of one of the two broad types of networks, recurrent and convolutional.
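For reference, the core of the transformer is scaled dot-product attention, which lets every position in a sequence attend directly to every other position in one step, rather than passing information along a recurrent chain. A minimal NumPy sketch (the shapes and random values here are arbitrary illustrations, not any production configuration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # how much each position attends
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len = 4
Q, K, V = (rng.standard_normal((seq_len, 8)) for _ in range(3))
out, w = attention(Q, K, V)
print(out.shape)       # one output vector per sequence position
print(w.sum(axis=-1))  # each row of attention weights sums to 1
```

The practical trick is that this is all dense matrix arithmetic, which is exactly what TPU/GPU accelerators are built to chew through.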

An old question is whether we learn anything about human cognition if we just simulate it using some kind of automatically learning mechanism. That is, if we use a model acquired through some kind of supervised or unsupervised learning, can we say we know anything about the original mind and its processes?

We can at least say that the learning methodology appears to be capable of achieving the technical result we were looking for. But it also might mean something a bit different: that there is not much more interesting going on in the original mind. In this radical corner sits the idea that cognitive processes in people are tactical responses left over from early human evolution. All you can learn from them is that they may be biased and tilted towards that early human condition, but beyond that things just are the way they turned out.

If we take this position, then, we might have to discard certain aspects of the social sciences.… Read the rest

I, Robot and Us

What happens if artificial intelligence (AI) technologies become significant economic players? The topic has come up in various ways for the past thirty years, perhaps longer. One model, the so-called technological singularity, posits that self-improving machines may be capable of a level of knowledge generation and disruption that will eliminate humans from economic participation. How far out this singularity might be is a matter of speculation, but I have my doubts that we really understand intelligence enough to start worrying about the impacts of such radical change.

Barring something essentially unknowable because we lack sufficient priors to make an informed guess, we can use evidence of the impact of mechanization on certain economic sectors, like agribusiness or transportation manufacturing, to try to plot out how mechanization might impact other sectors. Aghion, Jones, and Jones’s Artificial Intelligence and Economic Growth takes a deep dive into the topic. The math is not particularly hard, though the reasons for many of the equations are tied up in macro- and microeconomic theory that requires a specialist’s understanding to fully grok.

Of special interest are the potential limiting roles of inputs and organizational competition. For instance, automation speed-ups may be limited by human limitations within the economic activity. This may extend even further due to fundamental limitations of physics for a given activity. The pointed example is that power plants are limited by thermodynamics; no amount of additional mechanization can change that. Other factors related to inputs or the complexity of a certain stage of production may also drag economic growth down to a capped level.
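The bottleneck logic can be caricatured in a few lines. The sketch below is not the paper’s actual model; it just treats tasks as essential complements (output equals the minimum across tasks), with growth rates I invented, and shows aggregate growth converging to the rate of the slowest, least automatable task:

```python
# Output = min over essential tasks; all productivity growth rates invented.
years = 50
rates = {"automatable compute":  1.30,   # 30%/yr productivity growth
         "logistics":            1.05,   # 5%/yr
         "thermodynamics-bound": 1.01}   # physics-limited, 1%/yr
levels = {"automatable compute": 0.1,    # starts far behind the others
          "logistics":           1.0,
          "thermodynamics-bound": 1.0}

output = []
for _ in range(years):
    for task in rates:
        levels[task] *= rates[task]
    output.append(min(levels.values()))  # essential complements

print(output[1] / output[0])    # early growth: 30%, driven by automation
print(output[-1] / output[-2])  # long-run growth: 1%, the physics bottleneck
```

For the first decade or so the fast-automating task is the binding constraint and output booms; once it overtakes the rest, growth settles onto the thermodynamic floor, which is the Baumol-style point the authors develop formally.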

Organizational competition and intellectual property considerations come into play, as well. While the authors suggest that corporations will remain relevant, they should become more horizontal by eliminating much of the middle tier of management and outsourcing components of their productivity.… Read the rest

Ambiguously Slobbering Dogs

I was initially dismissive of this note from Google Research on improving machine translation via Deep Learning Networks by adding in a sentence-level network. My goodness, they’ve rediscovered anaphora and co-reference resolution! Next thing they will try is some kind of network-based slot-filler ontology to carry gender metadata. But their goal was to add a framework to their existing recurrent neural network architecture that would support a weak, sentence-level resolution of translational ambiguities while still allowing the TPU/GPU accelerators they have created to function efficiently. It’s a hack, but one that potentially solves yet another corner of the translation problem and might result in a few percent further improvements in the quality of the translation.

But consider the following sentences:

The dog had the ball. It was covered with slobber.

The dog had the ball. It was thinking about lunch while it played.

In these cases, the anaphora gets resolved by semantics, and the resolution seems largely automatic and subconscious to us as native speakers. If we had to translate these into a second language, however, we would be able to articulate specific reasons for assigning the “It” to the ball in the first pair. Well, it might be possible for the dog to be covered with slobber, but we would guess the writer would intentionally avoid that ambiguity. The second pair could conceivably be ambiguous if, in the broader context, the ball were some intelligent entity controlling the dog. Still, when our guesses are limited to the sentence pairs in isolation, we would assign the obvious interpretations. Moreover, we can resolve giant, honking passage-level ambiguities with ease, even where the author is showing off by not resolving the co-referents until obscenely late in the text.… Read the rest
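One way to see what the resolution machinery has to do is a toy selectional-restriction scorer. Everything here, the animacy lexicon and the per-predicate requirements, is hand-invented for illustration; a real system has to learn such preferences from data rather than hard-code them:

```python
# Invented animacy lexicon and predicate requirements, for illustration only.
ANIMATE = {"dog": True, "ball": False}
NEEDS_ANIMATE_SUBJECT = {
    "was covered with slobber": False,  # an inanimate subject is the usual reading
    "was thinking about lunch": True,   # the thinker must be animate
}

def resolve(predicate, candidates):
    """Rank antecedent candidates for "It" by semantic compatibility."""
    if NEEDS_ANIMATE_SUBJECT[predicate]:
        return [c for c in candidates if ANIMATE[c]]
    # Inanimate-compatible predicate: prefer the inanimate reading, but keep
    # the animate one, since a slobber-covered dog remains possible.
    return sorted(candidates, key=lambda c: ANIMATE[c])

print(resolve("was covered with slobber", ["dog", "ball"]))  # ['ball', 'dog']
print(resolve("was thinking about lunch", ["dog", "ball"]))  # ['dog']
```

Even this toy makes the asymmetry visible: one predicate eliminates a candidate outright, the other only reorders a preference, which is exactly the kind of soft judgment the sentence-level network is being asked to approximate.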

The Obsessive Dreyfus-Hawking Conundrum

I’ve been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over, so I inched up into triple-digit speeds in the M6. Because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F-117s practiced touch-and-goes at Holloman in the background, and then driving my carpool truck out to the high energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before trying to track down the trailhead and starting on my hike. (I would have run the thing but wanted to go to lunch later and didn’t have access to a shower. Neither restaurant nor diners deserve an après-run moi.) And then I was on the trail and I kept stopping and taking plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus.… Read the rest

Tweak, Memory

Artificial Neural Networks (ANNs) were, from early on in their formulation as Threshold Logic Units (TLUs) or Perceptrons, mostly focused on non-sequential decision-making tasks. With the invention of back-propagation training methods, the application to static presentations of data became somewhat fixed as a methodology. During the 90s, Support Vector Machines became the rage, and then Random Forests and other ensemble approaches held significant mindshare. ANNs receded into the distance as a quaint, historical approach that was fairly computationally expensive and opaque compared to the other methods.

But Deep Learning has brought the ANN back through a combination of improvements, both minor and major. The most important enhancements include pre-training the networks as auto-encoders before pursuing error-based training using back-propagation or Contrastive Divergence with Gibbs sampling. The other critical enhancement derives from the work of Schmidhuber and others in the 90s on managing temporal presentations to ANNs so that they can effectively process sequences of signals. This latter development is critical for processing speech, written language, grammar, changes in video state, etc. Back-propagation without some form of recurrent network structure or memory management washes out the error signal that is needed for adjusting the weights of the networks. And it should be noted that increased compute firepower using GPUs and custom chips has accelerated training performance enough that experimental cycles are within the range of the doable.
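The washing-out is easy to demonstrate. In a chain of sigmoid units, back-propagation multiplies the error signal by w · σ′(z) at each layer, and σ′ never exceeds 0.25, so the gradient shrinks roughly geometrically with depth. A toy one-unit-per-layer sketch, with random weights and an arbitrary depth:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
depth = 30
a = 0.5      # activation entering the chain
grad = 1.0   # d(output)/d(input), accumulated by the chain rule
for _ in range(depth):
    w = rng.normal(0.0, 1.0)   # one random weight per layer in this toy chain
    z = w * a + 0.1            # pre-activation (arbitrary small bias)
    a = sigmoid(z)
    grad *= w * a * (1.0 - a)  # da/d(prev a) = w * sigmoid'(z)

print(abs(grad))  # vanishingly small after 30 layers
```

Recurrent architectures with explicit memory, such as Schmidhuber and Hochreiter’s LSTM, exist precisely to route the error signal around this geometric decay.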

Note that these are what might be called “computer science” issues rather than “brain science” issues. Researchers are drawing rough analogies between some observed properties of real neuronal systems (neurons fire and connect together) but then are pursuing a more abstract question as to how a very simple computational model of such neural networks can learn.… Read the rest

The Ethics of Knowing

In the modern American political climate, I’m constantly finding myself at sea in trying to unravel the motivations and thought processes of the Republican Party. The best summation I can arrive at involves the obvious manipulation of the electorate—but that is not terrifically new—combined with a persistent avoidance of evidence and facts.

In my day job, I research a range of topics trying to get enough of a grasp on what we do and do not know such that I can form a plan that innovates from the known facts towards the unknown. Here are a few recent investigations:

  • What is the state of thinking about the origins of logic? Logical rules form into broad classes that range from the uncontroversial (modus tollens, propositional logic, predicate calculus) to the speculative (multivalued and fuzzy logic, or quantum logic, for instance). In most cases we make an assumption based on linguistic convention that they are true and then demonstrate their extension, despite the observation that they are tautological. Synthetic knowledge has no similar limitations but is assumed to be girded by the logical basics.
  • What were the early Christian heresies, how did they arise, and what was their influence? Marcion of Sinope is perhaps the most interesting of these, in parallel with the Gnostics, asserting that the cruel tribal god of the Old Testament was distinct from the New Testament Father, and proclaiming perhaps (see various discussions) a docetic Jesus figure. The leading “mythicists” like Robert Price are invaluable in this analysis (ignore the first 15 minutes of nonsense). The thin braid of early Christian history and the constant humanity that arises in morphing the faith before settling down after Nicaea (well, and then after Martin Luther) reminds us that abstractions and faith have a remarkable persistence in the face of cultural change.
Read the rest

Twilight of the Artistic Mind

Kristen Stewart, of Twilight fame, co-authored a paper on using deep learning neural networks in a new movie she is directing. The basic idea is very old, but the details and scale are more recent. If you take an artificial neural network and have it autoencode the input stream through a bottleneck, you can then submit any stimulus and get some reflection of the training in the output. The output can be quite surreal, too, because the effect of bottlenecking combined with other optimizations results in an exaggeration of the features that define the input data set. If the input is images, the output will contain echoes of those images.
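A linear version of the idea can be sketched without any training at all: a linear autoencoder with a k-dimensional bottleneck settles on the same subspace as rank-k PCA, so an SVD shows how the bottleneck keeps only the dominant structure of the training data and reconstructs everything else from it. The data below is synthetic, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10, 2
# Training data that genuinely lives near a 2-D subspace, plus a little noise.
basis = rng.standard_normal((k, d))
X = rng.standard_normal((n, k)) @ basis + 0.05 * rng.standard_normal((n, d))
X -= X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
encode = Vt[:k].T   # d -> k bottleneck (optimal linear encoder)
decode = Vt[:k]     # k -> d reconstruction

def autoencode(x):
    return x @ encode @ decode

train_err = np.linalg.norm(X - autoencode(X)) / np.linalg.norm(X)
novel = rng.standard_normal(d)  # a stimulus unlike the training data
out = autoencode(novel)
print(train_err)                                    # small: data fits the bottleneck
print(np.linalg.norm(out) / np.linalg.norm(novel))  # much of the novelty is discarded
```

A deep, nonlinear autoencoder behaves analogously but with far stranger reconstructions, which is where the surreal, exaggerated echoes come from.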

For Stewart’s effort, the goal was to transfer her highly stylized concept art into the movie scene. So they trained the network on her concept image and then submitted frames from the film to the network. The result reflected aspects of the original stylized image and the input image, not surprisingly.

There has been a long meditation on the unique status of art and music as a human phenomenon since the beginning of the modern era. The efforts at actively deconstructing the expectations of art play against a background of conceptual genius or divine inspiration. The abstract expressionists and the aleatoric composers show this as a radical 20th Century urge to re-imagine what art might be when freed from the strictures of formal ideas about subject, method, and content.

Is there any significance to the current paper? Not much. The bottom line was that a great deal of tweaking was required to achieve a result that was subjectively pleasing and fit the production goals of the film.… Read the rest