Category: AI

The Obsessive Dreyfus-Hawking Conundrum

I’ve been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over the horizon, so I inched up into triple-digit speeds in the M6. Because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F-117s practiced touch-and-goes at Holloman in the background, and then driving my carpool truck out to the high energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before trying to track down the trailhead and starting on my hike. (I would have run the thing but wanted to go to lunch later and didn’t have access to a shower. Neither restaurants nor diners deserve an après-run moi.) And then I was on the trail, and I kept stopping to take plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus. In English, the syntax and semantics are less constrained and the pragmatics more pronounced, but the emotional high is much the same.

With the recent death of Hubert Dreyfus at Berkeley, it seems an opportune time to consider the uniquely human capabilities that are involved in each of these creative ventures. Uniquely, I suggest, because we can’t yet imagine what it would be like for a machine to do the same kinds of intelligent tasks. Yet, from Stephen Hawking through to Elon Musk, influential minds are worried about what might happen if we develop machines that rise to the level of human consciousness. This might be considered science fiction-like speculation, since we have little basis for conjecture beyond works of pure imagination. We know that mechanization displaces workers, for instance, and think it will continue, but what about conscious machines?

For Dreyfus, the human mind is too embodied and situational to be considered an encodable thing representable by rules and algorithms. Much like the trajectory of a species through an evolutionary landscape, the mind is, in some sense, an encoded reflection of the world in which it lives. Taken further, the evolutionary parallel becomes even more relevant in that it is embodied in a sensory and physical identity, a product of a social universe, and an outgrowth of some evolutionary ping pong through contingencies that led to greater intelligence and self-awareness.

Obsession with whatever cultivars, whatever traits and tendencies, lead to this riot of wordplay and software refinement is a fine example of how this moves away from the fears of Hawking and towards the impossibilities of Dreyfus. We might imagine that we can simulate our way to the kernel of instinct and emotion that makes such things possible. We might also claim that we can disconnect the product of the effort from these internal states and the qualia that defy easy description, so that the books and the new technologies have only a desultory correspondence to the process by which they are created. But I doubt it. It’s more likely that getting from great automatic speech recognition or image classification to the general AI that makes us fearful is a longer hike than we currently imagine.

Tweak, Memory

Artificial Neural Networks (ANNs) were, from early on in their formulation as Threshold Logic Units (TLUs) or Perceptrons, mostly focused on non-sequential decision-making tasks. With the invention of back-propagation training methods, the application to static presentations of data became somewhat fixed as a methodology. During the 90s, Support Vector Machines became all the rage, and then Random Forests and other ensemble approaches held significant mindshare. ANNs receded into the distance as a quaint, historical approach that was fairly computationally expensive and opaque compared to the other methods.

But Deep Learning has brought the ANN back through a combination of improvements, both minor and major. The most important enhancements include pre-training the networks as auto-encoders prior to pursuing error-based training using back-propagation, or Contrastive Divergence with Gibbs Sampling. The other critical enhancement derives from the work of Schmidhuber and others in the 90s on managing temporal presentations to ANNs so they can effectively process sequences of signals. This latter development is critical for processing speech, written language, grammar, changes in video state, and so on. Back-propagation without some form of recurrent network structure or memory management washes out the error signal that is needed for adjusting the weights of the networks. And it should be noted that increased compute firepower using GPUs and custom chips has accelerated training performance enough that experimental cycles are within the range of the doable.
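
To make the washing-out point concrete, here is a minimal numpy sketch; the weight scale, the network size, and the sequence length are arbitrary choices of mine for illustration, not anything from a particular paper. Push an error vector back through a plain linear recurrence and watch its norm collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy recurrent weight matrix with spectral radius well below 1 (an
# assumption for illustration; real trained networks need not look like this).
W = rng.normal(scale=0.1, size=(16, 16))

grad = np.ones(16)           # error signal arriving at the final timestep
for t in range(50):          # push it back through 50 timesteps
    grad = W.T @ grad        # backprop through the linear recurrence
    if t % 10 == 9:
        print(f"after {t + 1:2d} steps, |grad| = {np.linalg.norm(grad):.2e}")

# The norm collapses toward zero: temporally distant inputs receive
# essentially no error signal, which is the problem LSTMs were built to fix.
```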

Note that these are what might be called “computer science” issues rather than “brain science” issues. Researchers draw rough analogies with some observed properties of real neuronal systems (neurons fire and connect together) but then pursue a more abstract question: how can a very simple computational model of such neural networks learn? And there are further analogies that build up from there: learning is due to changes in the strength of neural connections, for instance, and neurons fire after suitable activation. Then there are cognitive properties of human minds that might be modeled as well, which leads us to a consideration of working memory in building these models.

It is this latter consideration of working memory that is critical to holding stimulus presentations long enough that neural connections can process them and learn from them. Schmidhuber et al.’s methodology (LSTM) is as ad hoc as most CS approaches in that it observes a limitation with a computational architecture and the algorithms that operate within that architecture and then tries to remedy the limitation through architectural variations. There tends to be a tinkering and tweaking that goes on in the gradual evolution of these kinds of systems until something starts working. Theory walks hand-in-hand with practice in applied science.
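
For the curious, here is a bare sketch of a single LSTM-style step. The gate arrangement follows the standard textbook formulation, but the params dictionary and the shapes are my own scaffolding rather than Hochreiter and Schmidhuber’s training machinery.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM step: gates decide what to forget, what to write, and
    what to expose. params maps names to weights (assumed shapes:
    W_* is (hidden, hidden + input), b_* is (hidden,))."""
    z = np.concatenate([h_prev, x])
    f = sigmoid(params["W_f"] @ z + params["b_f"])   # forget gate
    i = sigmoid(params["W_i"] @ z + params["b_i"])   # input gate
    o = sigmoid(params["W_o"] @ z + params["b_o"])   # output gate
    g = np.tanh(params["W_g"] @ z + params["b_g"])   # candidate memory
    c = f * c_prev + i * g    # cell state: the "working memory" that can
                              # carry a signal across many timesteps
    h = o * np.tanh(c)        # hidden state handed to the next layer/step
    return h, c
```

The point of the gating is visible right in the cell-state line: because the update is additive rather than a pure matrix squashing, an error signal can flow back along the cell state without being washed out at every step.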

Given that, however, it should be noted that there are researchers attempting to create a more biologically plausible architecture that solves some of the issues with working memory and training neural networks. For instance, Frank, Loughry, and O’Reilly at the University of Colorado have been developing a computational model that emulates the circuits connecting the frontal cortex and the basal ganglia. The model uses an elaborate series of activating and inhibiting connections to maintain perceptual stimuli in working memory, and it shows excellent performance on specific temporal presentation tasks. In its attempt to preserve a degree of fidelity to known brain science, it does lose some of the simplicity that purely CS-driven architectures provide, but I think it has a better chance of helping overcome another vexing problem for ANNs. Specifically, the slow learning properties of ANNs bear only scant resemblance to much human learning. We don’t require many, many presentations of a given stimulus in order to learn it; often, one presentation is sufficient. Reconciling the slow tuning of ANN models, even recurrent ones, with this property of human-like intelligence remains an open issue, and more biology may be the key.

The Ethics of Knowing

In the modern American political climate, I’m constantly finding myself at sea in trying to unravel the motivations and thought processes of the Republican Party. The best summation I can arrive at involves the obvious manipulation of the electorate—but that is not terrifically new—combined with a persistent avoidance of evidence and facts.

In my day job, I research a range of topics trying to get enough of a grasp on what we do and do not know such that I can form a plan that innovates from the known facts towards the unknown. Here are a few recent investigations:

  • What is the state of thinking about the origins of logic? Logical rules fall into broad classes that range from the uncontroversial (modus tollens, propositional logic, predicate calculus) to the speculative (multivalued and fuzzy logics, or quantum logic, for instance). In most cases we assume, based on linguistic convention, that the rules are true and then demonstrate their extensions, despite the observation that they are tautological. Synthetic knowledge has no similar limitations but is assumed to be girded by these logical basics.
  • What were the early Christian heresies, how did they arise, and what was their influence? Marcion of Sinope is perhaps the most interesting of these, in parallel with the Gnostics, asserting that the cruel tribal god of the Old Testament was distinct from the New Testament Father, and proclaiming perhaps (see various discussions) a docetic Jesus figure. The leading “mythicists” like Robert Price are invaluable in this analysis (ignore the first 15 minutes of nonsense). The thin braid of early Christian history and the constant humanity that arises in morphing the faith before it settled down after Nicaea (well, and then after Martin Luther) remind us that abstractions and faith have a remarkable persistence in the face of cultural change.
  • How do mathematical machines take on so many forms while achieving the same abstract goals? Machine learning, as a reification of human-like learning processes, can imitate neural networks (or an extreme sketch and caricature of what we know about real neural systems), or can be just a parameter-slicing machine like Support Vector Machines or ID3, or can be a Bayesian network or mixture model of parameters. We call them generative or non-generative, we categorize them by discrete or continuous decision surfaces, and we label them in a range of useful ways. But why should they all achieve similar outcomes with similar ranges of error? Indeed, Random Forests were the belles of the ball until Deep Learning took their tiara. (See the sketch after this list.)
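
On that third question, a quick empirical sketch makes the point; the synthetic dataset and the hyperparameters below are arbitrary choices of mine, not a benchmark. Point an SVM, a Random Forest, and a small neural network at the same problem and they usually land within a few points of one another.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# A synthetic problem stands in for any "real" classification task here.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM (parameter slicing)": SVC(),
    "Random Forest (ensemble)": RandomForestClassifier(n_estimators=200,
                                                       random_state=0),
    "Small neural network": MLPClassifier(hidden_layer_sizes=(64, 32),
                                          max_iter=1000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name:28s} accuracy = {model.score(X_test, y_test):.3f}")
```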

In each case, I try to work my way, as carefully as possible, through the thicket of historical and intellectual concerns that provide point and counterpoint to the ideas. It feels ethically wrong to make a short, fast judgment about any such topics. I can’t imagine doing anything less with a topic as fraught as the US health care system. It’s complex, indeed, Mr. President.

So, I tracked down a foundational paper on this idea of ethics and epistemology. It dates to 1877 and provides a grounding for why and when we should believe in anything. William Clifford’s paper, The Ethics of Belief, tracks multiple lines of argumentation and the consequences of believing without clarity. Even tentative clarity comes with moral risk, as Clifford shows in his thought experiments.

In summary, though, there is no more important statement than Clifford’s final assertion that it is wrong to believe without sufficient evidence. It’s that simple. And it’s even more wrong to act on those beliefs.

Twilight of the Artistic Mind

Deep Dream Generated Image: deepdreamgenerator.com

Kristen Stewart, of Twilight fame, co-authored a paper on using deep learning neural networks in her new movie that she is directing. The basic idea is very old, but the details and scale are more recent. If you take an artificial neural network and have it autoencode the input stream with bottlenecking, you can then submit any stimulus and get some reflection of the training in the output. The output can be quite surreal, too, because the effect of bottlenecking combined with other optimizations results in an exaggeration of the features that define the input data set. If the input is images, the output will contain echoes of those images.
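
For a sense of the mechanics, here is a minimal PyTorch sketch of a bottleneck autoencoder. The layer sizes, the random stand-in data, and the training loop are my own illustrative assumptions, not the setup used for the film.

```python
import torch
from torch import nn

# Toy bottleneck autoencoder: 784-dimensional inputs (e.g. flattened images)
# squeezed through a 32-unit code and reconstructed on the other side.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 32),  nn.ReLU(),   # the bottleneck
    nn.Linear(32, 128),  nn.ReLU(),
    nn.Linear(128, 784), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

train_batch = torch.rand(64, 784)      # stand-in for real training images
for _ in range(200):                   # train it to reconstruct its inputs
    optimizer.zero_grad()
    loss = loss_fn(model(train_batch), train_batch)
    loss.backward()
    optimizer.step()

# Any new stimulus now comes back filtered through what the network learned:
novel = torch.rand(1, 784)
with torch.no_grad():
    echo = model(novel)                # contains "echoes" of the training set
```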

For Stewart’s effort, the goal was to transfer her highly stylized concept art into the movie scene. So they trained the network on her concept image and then submitted frames from the film to the network. The result reflected aspects of the original stylized image and the input image, not surprisingly.

There has been a long meditation on the unique status of art and music as a human phenomenon since the beginning of the modern era. The efforts at actively deconstructing the expectations of art play against a background of conceptual genius or divine inspiration. The abstract expressionists and the aleatoric composers show this as a radical 20th Century urge to re-imagine what art might be when freed from the strictures of formal ideas about subject, method, and content.

Is there any significance to the current paper? Not a great deal. The bottom line was that there was a great deal of tweaking to achieve a result that was subjectively pleasing and fit with the production goals of the film. That is a long way from automated art and perhaps mostly reflects the ability of artificial neural networks to encode complex transformations that are learned directly from examples. I was reminded of the Nadsat filters available for Unix in the 90s that transformed text into the fictional argot of A Clockwork Orange. Other examples were available, too. The difference was that these were hand-coded while the film example learned from examples. Not hard to do in the language case, though, and likely easier in certain computational aspects due to the smaller range of symbol values.

So it’s a curiosity at best, but plaudits to Stewart for trying new things in her film efforts.

Apprendre à traduire

Google Translate has always been a useful tool for getting awkward gists of short texts. The method used was based on building a phrase-based statistical translation model. To do this, you gather up “parallel” texts, that is, existing human translations. You then “align” them by trying to find the most likely corresponding phrases in each sentence or set of sentences. (Often, between languages, fewer or more sentences will be used to express the same ideas.) Once you have that collection of phrasal translation candidates, you can guess the most likely translation of a new sentence by looking up the sequence of likely phrase groups that corresponds to that sentence. IBM was the progenitor of this approach in the late 1980s.
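
A toy sketch of the lookup step, with a phrase table I invented for illustration, shows how mechanical the decoding really is once alignment has done its work:

```python
# Invented toy phrase table: source phrase -> [(target phrase, probability)]
phrase_table = {
    "the cat":   [("le chat", 0.8), ("la chatte", 0.2)],
    "sat on":    [("s'est assis sur", 0.7), ("était sur", 0.3)],
    "the mat":   [("le tapis", 0.9), ("la natte", 0.1)],
}

def translate(sentence, table, max_len=3):
    """Greedy left-to-right decoding: at each position, grab the longest
    known source phrase and emit its most probable translation."""
    words, out, i = sentence.split(), [], 0
    while i < len(words):
        for span in range(min(max_len, len(words) - i), 0, -1):
            chunk = " ".join(words[i:i + span])
            if chunk in table:
                out.append(max(table[chunk], key=lambda t: t[1])[0])
                i += span
                break
        else:
            out.append(words[i])   # unseen word: pass it through untranslated
            i += 1
    return " ".join(out)

print(translate("the cat sat on the mat", phrase_table))
# -> "le chat s'est assis sur le tapis"
```

Real systems add a target-language model and beam search over competing segmentations, but the spirit is the same: probability-weighted lookup, not understanding.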

It’s simple and elegant, but it was always criticized for telling us very little about language. Other methods that use techniques like interlingual transfer and parsers showed a more linguist-friendly face. In these methods, the source language is parsed into a parse tree, and then that parse tree is converted into a generic representation of the meaning of the sentence. Next, a generator uses that representation to create a surface-form rendering in the target language. The interlingua is meant to be something like the deep meaning of linguistic theories, though the computer science versions of it tended to look a lot like ontological representations with fixed meanings. Flexibility was never the strong suit of these approaches, but their flaws ran much deeper than that.

For one, nobody was able to build a robust parser for any particular language. Next, the ontology was never vast enough to accommodate the rich productivity of real human language. Generators, being the inverse of the parser, remained only toy projects in the computational linguistic community. And, at the end of the day, no functional systems were built.

Instead, the statistical methods plodded along but had their own limitations. For instance, the translation of a never-before-seen sentence consisting of never-before-seen phrases is the null set. Rare and strange words in the data have problems too, because they have very low probabilities and are swamped by well-represented candidates that lack the nuances of the rarer form. The model doesn’t care, of course; the probabilities rule everything. So you need more and more data. But then you get noisy data mixed in with the good data, distorting the probabilities. And you have to handle completely new words and groupings like proper nouns and numbers that are due to the unique productivity of these classes of forms.

So, where to go from here? For Google and its recent commitment to Deep Learning, the answer was to apply Deep Learning Neural Network approaches. The approach threw every little advance of recent history at the problem, to pretty good effect. For instance, to cope with novel and rare words, they broke the input text up into sub-word letter groupings. The segmentation of the groupings was itself based on a learned model of the most common break-ups of terms, though the pieces didn’t necessarily correspond to syllables or other common linguistic expectations. Sometimes they also used character-level models. The models were then combined into an ensemble, which is a common way of overcoming brittleness and overtraining on subsets of the data set. They used GPUs in some cases, as well as reduced-precision arithmetic, to speed up the training of the models. They also used an attention-based intermediary between the encoder layers and the decoder layers so that the decoder could focus on the most relevant parts of the source sentence rather than on the broader context all at once.
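
Here is a bare numpy sketch of what an attention step does; the dot-product scoring and the dimensions are simplifications of mine, not Google’s exact formulation. Score each encoder state against the current decoder state, softmax the scores, and hand the decoder a weighted blend of the source.

```python
import numpy as np

def attend(decoder_state, encoder_states):
    """Dot-product attention: weight each source position by its relevance
    to the current decoder state and return the blended context vector."""
    scores = encoder_states @ decoder_state        # one score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over the source
    context = weights @ encoder_states             # weighted sum of encoder states
    return context, weights

# Toy example: 5 source positions, 8-dimensional states.
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5, 8))
decoder_state = rng.normal(size=8)
context, weights = attend(decoder_state, encoder_states)
print(weights.round(2))   # most of the mass lands on a few source positions
```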

The results improved translation quality by as much as 60% over the baseline phrase-based approach and, interestingly, came close to the average human translator’s performance. Is this enough? Not at all. You are not going to translate poetry this way any time soon. The productivity of human language and the open classes of named entities remain a barrier. The subtleties of pragmatics might still vex any data-driven approach, at least until there are a few examples in the corpora. And there might need to be a multi-sensory model somehow merged with the purely linguistic one to help manage some translation candidates. For instance, knowing the way in which objects fall could help move a translation from “plummeted to the ground” to “settled to the ground.”

Still, data-driven methods continue to reshape the intelligent machines of the future.

Boredom and Being a Decider

Seth Lloyd and I have rarely converged (read: absolutely never) on a realization, but his remarkable 2013 paper on free will and halting problems does, in fact, converge on a paper I wrote around 1986 for an undergraduate Philosophy of Language course. I was, at the time, very taken by Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter’s poetic excursion around the topic of recursion, vertical structure in ricercars, and various other topics that stormed about in his book. For me, when combined with other musings on halting problems, it led to the conclusion that the halting problem could be probabilistically solved by an observer who decides when the recursion is too repetitive or too deep. It prescribes, in effect, an overlay algorithm that guesses at the odds of another algorithm halting when subjected to a time or resource constraint. Thus we have a boredom algorithm.

I thought this was rather brilliant at the time, and I ended up having a one-on-one with my prof, who scoffed at GEB as a “serious” philosophical work. I had thought it was all psychedelically transcendent and had no deep understanding of more serious philosophical work beyond the papers by Kripke, Quine, and Davidson that we had been tasked to read. So I plead undergraduateness. Nevertheless, he had extended the invitation, and we clashed over the concept of teleology and directedness in evolutionary theory. How we got to that from the original decision trees of halting or non-halting algorithms I don’t recall.

But now we have an argument that essentially recapitulates that original form, though with the help of the Hartmanis-Stearns theorem to support it. Whatever the algorithm that runs in our heads, it needs to simulate possible outcomes and try to determine what the best course of action might be (or the worst course, or just some preference). That algorithm is in wetware and is therefore perfectly deterministic. And, importantly, quantum indeterminacy doesn’t rescue us from the free-will implications of that determinism at all; randomness is just random, not decision-making. Instead, the impossibility of assessing the possible outcomes comes from one algorithm monitoring another. In a few narrow cases, it may be possible to enumerate all the stopping results of the enclosed algorithm, but in general, all you can do is greedily terminate branches in the production tree based on some kind of temporal or resource-based criterion.
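
A minimal Python sketch of the boredom algorithm, with an arbitrary step budget of my choosing standing in for the temporal or resource constraint:

```python
def bored_observer(program, max_steps=10_000):
    """Run a generator-based 'program' under a resource constraint and
    guess at the halting question rather than solve it."""
    steps = 0
    for _ in program():
        steps += 1
        if steps >= max_steps:
            return "bored: probably never halts"
    return f"halted after {steps} steps"

def halts_quickly():
    n = 0
    while n < 100:
        n += 1
        yield

def loops_forever():
    while True:
        yield

print(bored_observer(halts_quickly))   # halted after 100 steps
print(bored_observer(loops_forever))   # bored: probably never halts
```

The observer never solves the halting problem; it just runs out of patience and guesses, which is the whole point.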

Free will is neither random nor classically deterministic, but is an algorithmic constraint on the processing power to simulate reality in a conscious, but likely deterministic, head.

Startup Next

I’m thrilled to announce my new startup, Like Human. The company is focused on making significant new advances to the state of the art in cognitive computing and artificial intelligence. We will remain a bit stealthy for another six months or so and then will open up shop for early adopters.

I’m also pleased to share with you Like Human’s logo that goes by the name Logo McLogoface, or LM for short. LM combines imagery from nuclear warning signs, Robby the Robot from Forbidden Planet, and Leonardo da Vinci’s Vitruvian Man. I think you will agree about Mr. McLogoface’s agreeability:

Like Human logo: Logo McLogoface

You can follow developments at @likehumancom on Twitter, and I will make a few announcements here as well.

Motivation, Boredom, and Problem Solving

In the New York Times’ column The Stone, James Blachowicz of Loyola challenges the assumption that the scientific method is uniquely distinguishable from other ways of thinking and problem solving that we regularly employ. In his example, he lays out how writing poetry involves some kind of alignment of words that conform to the requirements of the poem. Whether actively aware of the process or not, the poet is solving constraint satisfaction problems concerning formal requirements like meter and structure, linguistic problems like parts of speech and grammar, semantic problems concerning meaning, and pragmatic problems like referential extension and symbolism. Scientists do the same kinds of things in fitting a theory to data. And, in Blachowicz’s analysis, there is no special distinction between the scientific method and other creative methods like the composition of poetry.

We can easily see how this extends to ideas like musical composition and, indeed, extends with even more constraints, ranging from formal ones through to, possibly, the neuropsychology of sound. I say “possibly” because there remains uncertainty about how much nurture versus nature is involved in the brain’s reaction to sounds and music.

In terms of a computational model of this creative process, if we presume that there is an objective function that governs possible fits to the given problem constraints, then we can clearly optimize towards a maximum fit. For many of the constraints there are, however, discrete parameterizations (which part of speech? which word?) that are not like curve fitting to scientific data. In fairness, discrete parameters occur there, too, especially in meta-analyses of broad theoretical possibilities (loop quantum gravity vs. string theory? What will we tell the children?). The discrete parameterizations blow up the search space with their combinatorics, demonstrating on the one hand why we are so damned amazing, and on the other hand why a controlled randomization method like evolutionary epistemology’s blind variation and selective retention gives us potential traction in the face of this curse of dimensionality. The blind search is likely tempered by active human engagement, though. Certainly the poet or the scientist would agree; they are using learned skills, maybe some intellectual talent of unknown origin, and experience in how to traverse the wells of improbability in finding the best fit for the problem. This certainly resembles pre-training in deep learning, though on a much more pervasive scale, including feedback from categorical model optimization into the generative basis model.
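
As a toy of blind variation and selective retention over discrete choices (the lexicon, the syllable constraint, and the scoring are all invented for the example), consider randomly mutating a line of words and keeping only the mutations that fit the constraint at least as well:

```python
import random

random.seed(0)

# Invented lexicon tagged with syllable counts; a seven-syllable line stands
# in for meter, grammar, and all the other constraints a poet juggles.
lexicon = {"moon": 1, "river": 2, "quietly": 3, "burns": 1, "under": 2,
           "the": 1, "winter": 2, "sky": 1, "falls": 1, "slowly": 2}
TARGET_SYLLABLES = 7

def score(line):
    """Selective-retention criterion: distance from the syllable target."""
    return -abs(sum(lexicon[w] for w in line) - TARGET_SYLLABLES)

def mutate(line):
    """Blind variation: swap, add, or drop a random word."""
    line = list(line)
    op = random.choice(["swap", "add", "drop"])
    if op == "swap" and line:
        line[random.randrange(len(line))] = random.choice(list(lexicon))
    elif op == "add":
        line.insert(random.randrange(len(line) + 1), random.choice(list(lexicon)))
    elif op == "drop" and len(line) > 1:
        line.pop(random.randrange(len(line)))
    return line

candidate = ["moon", "burns"]
for _ in range(500):
    variant = mutate(candidate)
    if score(variant) >= score(candidate):   # retain only non-worse variants
        candidate = variant

print(" ".join(candidate), "->", sum(lexicon[w] for w in candidate), "syllables")
```

No exhaustive search over the combinatorial space is needed; the retained variants drift toward the constraint, which is the whole trick.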

But does this extend outwards to other ways in which we form ideas? We certainly know that motivated reasoning is involved in key aspects of our belief formation, which plays strongly into how we solve these constraint problems. We tend to actively look for confirmations and avoid disconfirmations of fit. We positively bias recency of information, or repeated exposures, and tend to only reconsider in much slower cycles.

Also, when the constraints of certain problem domains are themselves changed by our efforts, so that there is a dynamic interplay between belief and success, the fixity of the search space itself is no longer guaranteed. Broad human goals like the search for meaning are an example of that. In come complex human factors, like how boredom correlates with motivation and ideological extremism (overview here, journal article here).

This latter data point concerning boredom crosses from mere bias that might preclude certain parts of a search space into motivation that focuses it, and that optimizes for novelty seeking and other behaviors.

Local Minima and Coatimundi

Even given the basic conundrum of how deep learning neural networks might cope with temporal presentations or linear sequences, there is another oddity to deep learning that only seems obvious in hindsight. One of the main enhancements to traditional artificial neural networks is a phase of unsupervised pre-training that forces each layer to try to create a generative model of the input pattern. The deep learning networks then learn a discriminative model after the initial pre-training is done, focusing on the error relative to classification versus simply recognizing the phrase or image per se.

Why this makes a difference has been the subject of some investigation. In general, there is an interplay between the smoothness of the error function and the ability of the optimization algorithms to cope with local minima. Visualize it this way: for any machine learning problem that needs to be solved, there are answers and better answers. Take visual classification. If the system (or you) gets shown an image of a coatimundi and a label that says coatimundi (heh, I’m running in New Mexico right now…), learning that image-label association involves adjusting weights assigned to different pixels in the presentation image down through multiple layers of the network that provide increasing abstractions about the features that define a coatimundi. And, importantly, that define a coatimundi versus all the other animals and non-animals.

These weight choices define an error function that is the optimization target for the network as a whole, and this error function can have many local minima. That is, by enhancing the weights supporting a coati versus a dog or a raccoon, the algorithm inadvertently leans towards a non-optimal assignment for all of them by focusing instead on a balance between them that is predestined by the previous dog and raccoon classifications (or, in general, the order of presentation).

Improvements require “escaping” these local optima in favor of a global solution that accords the best overall outcome to all the animals and a minimization of the global error. And pre-training seems to do that. It likely moves each discriminative category closer to the global possibilities because those global possibilities are initially encoded by the pre-training phase.
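
A one-dimensional caricature of the point, using a contrived error surface of my own choosing: plain gradient descent settles into whichever minimum its starting point happens to sit above, which is why an initialization that already encodes something about the global structure, as pre-training does, matters.

```python
import numpy as np

def loss(w):
    # Contrived error surface: a shallow local minimum near w = -1.6
    # and a deeper global minimum near w = +1.8.
    return 0.05 * w**4 - 0.3 * w**2 - 0.1 * w

def grad(w, eps=1e-5):
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)   # numerical gradient

def descend(w, lr=0.05, steps=2000):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

for start in (-2.0, 2.0):     # two different "initializations"
    w = descend(start)
    print(f"start {start:+.1f} -> w = {w:+.2f}, loss = {loss(w):+.3f}")

# The run that starts on the wrong side settles into the poorer minimum;
# nothing in plain gradient descent ever lifts it back out.
```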

This has the added benefit of regularizing or smoothing out the noise that is inherent in any real data set. Indeed, the two approaches appear to be closely allied in their impact on the overall machine learning process.

New Behaviorism and New Cognitivism

Deep Learning now dominates discussions of intelligent systems in Silicon Valley. Jeff Dean’s discussion of its role in the Alphabet product lines and initiatives shows the dominance of the methodology. Pushing the limits of what Artificial Neural Networks have been able to do has been driven by certain algorithmic enhancements and by the ability to run weight-training algorithms at much higher speeds and over much larger data sets. Google even developed specialized hardware to assist.

Broadly, though, we see mostly pattern recognition problems like image classification and automatic speech recognition being impacted by these advances. Natural language parsing has also recently had some improvements from Fernando Pereira’s team. The incremental improvements using these methods should not be minimized but, at the same time, the methods don’t emulate key aspects of what we observe in human cognition. For instance, the networks train incrementally and lack the kinds of rapid transitions that we observe in human learning and thinking.

In a strong sense, the models that Deep Learning uses can be considered Behaviorist in that they rely almost exclusively on feature presentation with a reward signal. The internal details of how modularity or specialization arise within the network layers are interesting but secondary to the broad use of back-propagation or Gibbs sampling combined with autoencoding. This is a critique that goes back to the early days of connectionism, of course, and is part of why it was somewhat sidelined after an initial heyday in the late eighties. Then came statistical NLP, then came hybrid methods, then a resurgence of corpus methods, all the while with image processing getting more and more into the hand-crafted modular space.

But we can see some interesting developments that start to stir more Cognitivism into this stew. Recurrent Neural Networks provide the temporal behavior that feedforward NNs lack, and Long Short-Term Memory (LSTM) NNs help to overcome specific limitations of recurrent NNs, like the disconnection between temporally distant signals and the reward patterns.

Still, the modularity and rapid learning transitions elude us. While these methods are enhancing the ability to learn the contexts around specific events (and even the unique variability of contexts), that learning still requires many exposures to get right. We might consider our language or vision modules to be learned over evolutionary history and so not expect learning within a lifetime from scratch to result in similarly structured modules, but the differences remain not merely quantitative but significantly qualitative. A New Cognitivism requires more work to rise from this New Behaviorism.