Subtly Motivating Reasoning

Continuing on with the general theme of motivated reasoning, there are some rather interesting results reported in New Republic, here. Specifically, Ian Anson of the University of Maryland, Baltimore County, found that political partisans reinforced their perspectives on the state of the U.S. economy more strongly when they were given “just the facts” than when they were given a strong partisan statement combined with the facts. The effect held even when the partisan statements aligned with their own partisan perspectives.

The author concludes that people, in constructing their views of the causal drivers of the economy, believe that they are unbiased in their understanding of the underlying mechanisms. The barefaced partisan statements interrupt that construction process, perhaps, or at least distract from it. Dr. Anson points out that subtly manufacturing consent therefore makes for better partisan fellow travelers.

There are a number of theories concerning how meanings get incorporated into our semantic systems, and whether the idea of meaning itself is any more useful than simply discussing reference. Moreover, we can rate or gauge the uncertainty we must carry concerning complex systems. These systems seem to form a hierarchy, with the actors in our daily lives and the motivations of those we have long histories with in the mostly-predictable camp. Next, we may have good knowledge about a field or area of interest in which we have been trained. When this framework has a scientific basis, we rate our knowledge as largely reliable, but we also know the limits of that knowledge. It is with predictive futures and large-scale policy that we become subject to the difficulty of integrating complex signals into a cohesive framework. The partisans supply factoids and surround them with causal reasoning.… Read the rest

Startup Next

I’m thrilled to announce my new startup, Like Human. The company is focused on making significant new advances to the state of the art in cognitive computing and artificial intelligence. We will remain a bit stealthy for another six months or so and then will open up shop for early adopters.

I’m also pleased to share with you Like Human’s logo that goes by the name Logo McLogoface, or LM for short. LM combines imagery from nuclear warning signs, Robby the Robot from Forbidden Planet, and Leonardo da Vinci’s Vitruvian Man. I think you will agree about Mr. McLogoface’s agreeability:

[Image: Like Human’s logo, Logo McLogoface]

You can follow developments at @likehumancom on Twitter, and I will make a few announcements here as well.… Read the rest

Euhemerus and the Bullshit Artist

Sailing down through the Middle East, past the monuments of Egypt and the wild African coast, and then on into the Indian Ocean, past Arabia Felix, Euhemerus came upon an island. Maybe he came upon it. Maybe he sailed. He was perhaps—yes, perhaps; who can say?—sailing for Cassander in deconstructing the memory of Alexander the Great. And that island, Panchaea, held a temple of Zeus with a written history of the deeds of men who became the Greek gods.

They were elevated, they became fixed in the freckled amber of ancient history, their deeds escalated into myths and legends. And, likewise, the ancient tribes of the Levant brought their El and Yahweh, and Asherah and Baal, and then the Zoroastrians influenced the diaspora in refuge in Babylon, until they returned having found dualism, elemental good and evil, and then reimagined their origins, moving from a pantheon down through monolatry and into monotheism. These great men and women were reimagined into something transcendent and, ultimately, barely understandable.

Even the rational Yankee in Twain’s Connecticut Yankee in King Arthur’s Court realizes almost immediately that he will soon rule over the medieval world when he is declared a wild dragon upon being presented to the court. He waits for someone to point out that he doesn’t resemble a dragon, but the medieval mind does not seem to question the reasonableness of the mythic claims, even in the presence of contrary evidence.

So it goes with the human mind.

And even today we have Fareed Zakaria justifying his use of the term “bullshit artist” for Donald Trump. Trump’s logorrhea is punctuated by so many incomprehensible and contradictory statements that it becomes a mythic whirlwind. He lets slip, now and again, that his method is deliberate:

DT: Therefore, he was the founder of ISIS.

Read the rest

Motivation, Boredom, and Problem Solving

In the New York Times’ The Stone column, James Blachowicz of Loyola challenges the assumption that the scientific method is uniquely distinguishable from the other ways of thinking and problem solving we regularly employ. In his example, he lays out how writing poetry involves aligning words to conform to the requirements of the poem. Whether actively aware of the process or not, the poet is solving constraint satisfaction problems concerning formal requirements like meter and structure, linguistic problems like parts of speech and grammar, semantic problems concerning meaning, and pragmatic problems like referential extension and symbolism. Scientists do the same kinds of things in fitting a theory to data. And, in Blachowicz’s analysis, there is no special distinction between the scientific method and other creative methods like the composition of poetry.
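To make the constraint-satisfaction framing concrete, here is a minimal sketch, entirely my own toy and not Blachowicz’s, that treats composing a single five-syllable line as a brute-force search over a small hypothetical lexicon, with a crude vowel-group heuristic standing in for real prosody:

```python
import itertools
import re

def syllables(word):
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

# Hypothetical toy lexicon, tagged by part of speech.
LEXICON = {
    "ADJ":  ["pale", "luminous", "quiet", "restless"],
    "NOUN": ["moon", "river", "lantern", "sparrow"],
    "VERB": ["burns", "wavers", "sleeps", "dissolves"],
}

TEMPLATE = ["ADJ", "NOUN", "VERB"]   # grammatical constraint
TARGET_SYLLABLES = 5                 # formal (metrical) constraint

# Brute-force constraint satisfaction over the discrete space.
for words in itertools.product(*(LEXICON[pos] for pos in TEMPLATE)):
    if sum(syllables(w) for w in words) == TARGET_SYLLABLES:
        print(" ".join(words))
```

Each printed line satisfies both the grammatical template and the metrical constraint; real composition layers on the semantic and pragmatic constraints as well, which rapidly explodes the search space.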

We can easily see how this extends to ideas like musical composition, which adds even more constraints, ranging from the formal through to, possibly, the neuropsychology of sound. I say “possibly” because there remains uncertainty about how much nurture versus nature is involved in the brain’s reaction to sounds and music.

In terms of a computational model of this creative process, if we presume that there is an objective function that governs possible fits to the given problem constraints, then we can clearly optimize towards a maximum fit. For many of the constraints there are, however, discrete parameterizations (which part of speech? which word?) that are not like curve fitting to scientific data. In fairness, discrete parameters occur there, too, especially in meta-analyses of broad theoretical possibilities (loop quantum gravity vs. string theory? What will we tell the children?). The discrete parameterizations blow up the search space with their combinatorics, demonstrating on the one hand why we are so damned amazing, and on the other hand why a controlled randomization method like evolutionary epistemology’s blind variation and selective retention gives us potential traction in the face of this curse of dimensionality.… Read the rest
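As a toy illustration of blind variation and selective retention on a discrete space (a sketch of mine, with an arbitrary string-matching objective standing in for a real fitness function), random one-symbol mutations are retained only when they do not decrease fitness:

```python
import random

random.seed(1)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "blind variation"  # arbitrary stand-in for an objective function

def fitness(candidate):
    # Discrete, non-smooth objective: count positions matching the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Blind variation: replace one random position with a random symbol.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

current = "".join(random.choice(ALPHABET) for _ in TARGET)
steps = 0
while current != TARGET:
    steps += 1
    mutant = mutate(current)
    if fitness(mutant) >= fitness(current):  # selective retention
        current = mutant
print(f"converged to {current!r} in {steps} blind mutations")
```

Even in this tiny space the blind search converges in a few thousand mutations, because retention prunes the combinatorics that would swamp exhaustive enumeration.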

Soul Optimization

I just did a victory lap around the wooden columns in my kitchen and demanded high-fives all around: Against Superheroes is done. Well, technically it just cleared the first hurdle. Core writing is complete at 100,801 words. I will now do two editorial passes and then send it to my editor for clean-up. Finally, I’ll get some feedback from my wife before sending it out for independent review.

I try to write according to a daily schedule but I have historically been an inconsistent worker. I track everything using a spreadsheet and it doesn’t look pretty:

[Chart: daily word-count tracking for Against Superheroes, showing long gaps between writing sessions]

Note the long gaps. The gaps are problematic for several reasons, not the least of which is that I have to go back and read everything again to return to form. The gaps arrive with excuses, then get amplified by more excuses, then get massaged into to-do lists, and then always get resolved by unknown forces. Maybe they are unknowable.

The one consistency that I have found is that I always start strong and finish strong, bursts of enthusiasm for the project arriving with runner’s high on the trail, or while waiting in traffic. The plot thickets open to luxuriant fields. When I’m in the gap periods I distract myself too easily, finding the deep research topics an easy way to justify an additional pause of days, then weeks, sometimes months.

I guess I should resolve to find my triggers and work to overcome these tendencies, but I’m not certain that it matters. There is no rush, and those exuberant starts and ends are perhaps enough of a reward that no deeper optimization of my soul is needed.… Read the rest

Quantum Field Is-Oughts

Sean Carroll’s Oxford lecture on Poetic Naturalism is worth watching (below). In many ways it just reiterates several common themes. First, it reinforces the is-ought barrier between values and observations about the natural world. It does so with particular depth, though, by identifying how coarse-grained theories at different levels of explanation can be equally compatible with quantum field theory. Second, and relatedly, he shows how entropy is an emergent property of atomic theory and the interactions of quantum fields (which we think of as particles much of the time) and, importantly, that we can project the same notion of boundary conditions that result in entropy into the future, resulting in a kind of effective teleology. That is, there can be some boundary conditions for the evolution of large-scale particle systems that form into configurations we can label purposeful or purposeful-like. I still like the term “teleonomy” to describe this alternative notion, but the language largely doesn’t matter except as an educational and distinguishing tool against the semantic embeddings of old scholastic monks.
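As a minimal sketch of the coarse-graining behind that emergence (my own toy example, not Carroll’s), consider N particles in a box, coarse-grained to a single macrostate variable, the count in the left half; the Boltzmann entropy is just the log of the number of microstates realizing each macrostate:

```python
from math import comb, log

N = 100  # particles, coarse-grained by how many occupy the left half of the box

for n_left in (0, 10, 25, 50):
    microstates = comb(N, n_left)  # ways a microstate can realize this macrostate
    entropy = log(microstates)     # Boltzmann entropy with k_B = 1
    print(f"{n_left:3d} particles in left half: S = {entropy:6.2f}")
```

The evenly mixed macrostate commands overwhelmingly more microstates, which is why systems drift toward it; fixing a low-entropy boundary condition in the past, or projecting one into the future as Carroll suggests, is what gives the evolution its apparent direction.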

Finally, the poetry aspect resolves in value theories of the world. Many are compatible with descriptive theories, and our resolution of them is through opinion, reason, communications, and, yes, violence and war. There is no monopoly of policy theories, religious claims, or idealizations that hold sway. Instead we have interests and collective movements, and the above, all working together to define our moral frontiers.

… Read the rest

Local Minima and Coatimundi

Even given the basic conundrum of how deep learning neural networks might cope with temporal presentations or linear sequences, there is another oddity to deep learning that only seems obvious in hindsight. One of the main enhancements to traditional artificial neural networks is a phase of unsupervised pre-training that forces each layer to try to create a generative model of the input pattern. The deep learning networks then learn a discriminative model after the initial pre-training is done, focusing on the error relative to classification rather than simply reconstructing the phrase or image per se.

Why this makes a difference has been the subject of some investigation. In general, there is an interplay between the smoothness of the error function and the ability of the optimization algorithms to cope with local minima. Visualize it this way: for any machine learning problem that needs to be solved, there are answers and better answers. Take visual classification. If the system (or you) gets shown an image of a coatimundi and a label that says coatimundi (heh, I’m running in New Mexico right now…), learning that image-label association involves adjusting weights assigned to different pixels in the presented image down through multiple layers of the network that provide increasing abstractions of the features that define a coatimundi. And, importantly, that define a coatimundi versus all the other animals and non-animals.

These weight choices define an error function that is the optimization target for the network as a whole, and this error function can have many local minima. That is, by enhancing the weights supporting a coati versus a dog or a raccoon, the algorithm inadvertently leans towards a non-optimal assignment for all of them by focusing instead on a balance between them that is predestined by the previous dog and raccoon classifications (or, in general, the order of presentation).… Read the rest
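To see the local-minima problem in its barest form (a one-dimensional toy of my own, vastly simpler than a real network’s error surface), plain gradient descent lands in different basins depending on where it starts, much as presentation order can bias which basin the weights settle into:

```python
def loss(w):
    # A non-convex "error function": a shallow basin near w = -1
    # and a deeper one near w = 2.
    return (w + 1) ** 2 * (w - 2) ** 2 - 0.5 * w

def grad(w, eps=1e-6):
    # Numerical gradient keeps the sketch self-contained.
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)

def descend(w, lr=0.01, steps=2000):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

for start in (-2.0, 3.0):
    w = descend(start)
    print(f"start {start:+.1f} -> w = {w:+.3f}, loss = {loss(w):.3f}")
```

The descent from -2.0 gets trapped in the shallower basin even though a better minimum exists, which is the predicament that generative pre-training appears to ease by starting the discriminative phase from a better-placed region of weight space.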

Dates, Numbers, and Canadian Makings

On the 21st of June, 1997, which was the solstice, my wife and I married. We celebrated that date again today, but it is not the solstice this year because the solstice drifts around the calendar. And I also crossed the 100,000-word mark on Against Superheroes, moving towards the resolution of a novel that could, conceivably, have no ending. There are always more mythologies to be explored.

Just last week I was in Banff, Canada, sitting quietly with my bear spray and a little titanium cook pot. I didn’t have to deploy the mace, and was relieved I also didn’t have to endure twelve hours of wolf stalking like this Canadian woman.

And while I was north of the US border, I learned that a Canadian animated film I was involved with had been released to Amazon Prime Video. I am just an Executive Producer of the film, which means I had no creative input, but I am really pleased with the result. Ironically, I couldn’t watch this Canadian product while in Canada, just an hour from the studio that produced it. But rest assured that Christmas will be saved in the end!… Read the rest

New Behaviorism and New Cognitivism

Deep Learning now dominates discussions of intelligent systems in Silicon Valley. Jeff Dean’s discussion of its role in the Alphabet product lines and initiatives shows the dominance of the methodology. Pushing the limits of what artificial neural networks are able to do has been driven by certain algorithmic enhancements and by the ability to run weight-training algorithms at much higher speeds and over much larger data sets. Google even developed specialized hardware to assist.

Broadly, though, we see mostly pattern recognition problems like image classification and automatic speech recognition being impacted by these advances. Natural language parsing has also recently had some improvements from Fernando Pereira’s team. The incremental improvements using these methods should not be minimized but, at the same time, the methods don’t emulate key aspects of what we observe in human cognition. For instance, the networks train incrementally and lack the kinds of rapid transitions that we observe in human learning and thinking.

In a strong sense, the models that Deep Learning uses can be considered Behaviorist in that they rely almost exclusively on feature presentation with a reward signal. The internal details of how modularity or specialization arise within the network layers are interesting but secondary to the broad use of back-propagation or Gibbs sampling combined with autoencoding. This is a critique that goes back to the early days of connectionism, of course, and explains why it was somewhat sidelined after an initial heyday in the late eighties. Then came statistical NLP, then came hybrid methods, then a resurgence of corpus methods, all the while with image processing getting more and more into the hand-crafted modular space.
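For readers who have not seen one, here is a minimal autoencoder sketch (a numpy toy of mine, nowhere near a production deep network) that makes the Behaviorist reading concrete: features are presented, and the only training signal is the reconstruction error propagated back through the weights:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 8))          # 200 presented feature vectors, 8 features each

n_in, n_hidden = 8, 3             # compress 8 features through 3 hidden units
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_in))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    H = sigmoid(X @ W1)           # encode
    Y = H @ W2                    # decode (linear output layer)
    err = Y - X                   # reconstruction error is the training signal
    # Back-propagation of the squared-error gradient:
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * H * (1 - H)
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

print("final mean squared reconstruction error:", float((err ** 2).mean()))
```

Stacking such layers and then replacing the reconstruction objective with a classification loss gives, in rough outline, the back-propagation-plus-autoencoding recipe mentioned above.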

But we can see some interesting developments that start to stir more Cognitivism into this stew.… Read the rest