Category: Psychology

Motivation, Boredom, and Problem Solving

In The New York Times' The Stone column, James Blachowicz of Loyola challenges the assumption that the scientific method is uniquely distinguishable from the other ways of thinking and problem solving we regularly employ. In his example, he lays out how writing poetry involves some kind of alignment of words that conforms to the requirements of the poem. Whether actively aware of the process or not, the poet is solving constraint satisfaction problems concerning formal requirements like meter and structure, linguistic problems like parts of speech and grammar, semantic problems concerning meaning, and pragmatic problems like referential extension and symbolism. Scientists do the same kinds of things in fitting a theory to data. And, in Blachowicz's analysis, there is no special distinction between the scientific method and other creative methods like the composition of poetry.

We can easily see how this extends to ideas like musical composition and, indeed, extends with even more constraints that range from formal through to possibly the neuropsychology of sound. I say “possibly” because there remains uncertainty on how much nurture versus nature is involved in the brain’s reaction to sounds and music.

In terms of a computational model of this creative process, if we presume that there is an objective function that governs possible fits to the given problem constraints, then we can clearly optimize towards a maximum fit. For many of the constraints, however, there are discrete parameterizations (which part of speech? which word?) that are not like curve fitting to scientific data. In fairness, discrete parameters occur there, too, especially in meta-analyses of broad theoretical possibilities (loop quantum gravity vs. string theory? What will we tell the children?). The discrete parameterizations blow up the search space with their combinatorics, demonstrating on the one hand why we are so damned amazing, and on the other hand why a controlled randomization method like evolutionary epistemology's blind variation and selective retention gives us potential traction in the face of this curse of dimensionality. Blind search alone is likely too weak a model for active human engagement, though. Certainly the poet or the scientist would agree; they are using learned skills, maybe some intellectual talent of unknown origin, and experience in traversing the wells of improbability to find the best fit for the problem. This certainly resembles pre-training in deep learning, though on a much more pervasive scale, including feedback from categorical model optimization into the generative base model.
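
To make the blind-variation-and-selective-retention idea concrete, here is a toy Python sketch over a discrete "poetic" constraint space. The word slots, syllable counts, and the target constraint are all invented for illustration; the point is only the loop of blind mutation plus retention of fits.

```python
import random

# Hypothetical discrete "poem line" problem: choose one word per slot so that
# the line satisfies a formal syllable-count constraint.
SLOTS = [
    ["cat", "tiger", "elephant"],      # subject
    ["sits", "wanders", "sleeps"],     # verb
    ["quietly", "alone", "below"],     # modifier
]
SYLLABLES = {"cat": 1, "tiger": 2, "elephant": 3, "sits": 1, "wanders": 2,
             "sleeps": 1, "quietly": 3, "alone": 2, "below": 2}
TARGET_SYLLABLES = 5  # the formal constraint the "poet" must satisfy

def fitness(line):
    # Closer to the target syllable count is better (0 is a perfect fit)
    return -abs(sum(SYLLABLES[w] for w in line) - TARGET_SYLLABLES)

def blind_variation_selective_retention(steps=500, seed=0):
    rng = random.Random(seed)
    current = [rng.choice(slot) for slot in SLOTS]
    for _ in range(steps):
        # Blind variation: mutate one slot at random
        candidate = list(current)
        i = rng.randrange(len(SLOTS))
        candidate[i] = rng.choice(SLOTS[i])
        # Selective retention: keep the variant only if it fits at least as well
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current

line = blind_variation_selective_retention()
print(line, fitness(line))
```

The combinatorics here are trivial (27 states), but each added slot multiplies the space, which is the dimensionality curse the blind search is hedging against.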

But does this extend outwards to other ways in which we form ideas? We certainly know that motivated reasoning is involved in key aspects of our belief formation, which plays strongly into how we solve these constraint problems. We tend to actively look for confirmations and avoid disconfirmations of fit. We positively bias recency of information, or repeated exposures, and tend to only reconsider in much slower cycles.

Also, as the constraints of certain problem domains themselves become subject to change (where there is a dynamic interplay between belief and success), the fixity of the search space is no longer guaranteed. Broad human goals like the search for meaning are an example of that. In come complex human factors, like how boredom correlates with motivation and ideological extremism (overview here; journal article here).

This latter data point concerning boredom crosses from mere bias that might preclude certain parts of a search space into motivation that focuses it, and that optimizes for novelty seeking and other behaviors.

Quantum Field Is-Oughts

Sean Carroll's Oxford lecture on Poetic Naturalism is worth watching (below). In many ways it just reiterates several common themes. First, it reinforces the is-ought barrier between values and observations about the natural world. It does so with particular depth, though, by identifying how coarse-grained theories at different levels of explanation can be equally compatible with quantum field theory. Second, and related, he shows how entropy is an emergent property of atomic theory and the interactions of quantum fields (which we think of as particles much of the time) and, importantly, that we can project the same notion of boundary conditions that give rise to entropy into the future, resulting in a kind of effective teleology. That is, there can be boundary conditions for the evolution of large-scale particle systems that form into configurations we can label purposeful or purposeful-like. I still like the term "teleonomy" to describe this alternative notion, but the language largely doesn't matter except as an educational and distinguishing tool against the semantic embeddings of old scholastic monks.

Finally, the poetry aspect resolves in value theories of the world. Many are compatible with descriptive theories, and our resolution of them is through opinion, reason, communications, and, yes, violence and war. There is no monopoly of policy theories, religious claims, or idealizations that hold sway. Instead we have interests and collective movements, and the above, all working together to define our moral frontiers.


New Behaviorism and New Cognitivism

Deep Learning now dominates discussions of intelligent systems in Silicon Valley. Jeff Dean's discussion of its role in the Alphabet product lines and initiatives shows the dominance of the methodology. Pushing the limits of what Artificial Neural Networks have been able to do has been driven by certain algorithmic enhancements and the ability to run weight-training algorithms at much higher speeds and over much larger data sets. Google even developed specialized hardware to assist.

Broadly, though, we see mostly pattern recognition problems like image classification and automatic speech recognition being impacted by these advances. Natural language parsing has also recently had some improvements from Fernando Pereira’s team. The incremental improvements using these methods should not be minimized but, at the same time, the methods don’t emulate key aspects of what we observe in human cognition. For instance, the networks train incrementally and lack the kinds of rapid transitions that we observe in human learning and thinking.

In a strong sense, the models that Deep Learning uses can be considered Behaviorist in that they rely almost exclusively on feature presentation with a reward signal. The internal details of how modularity or specialization arise within the network layers are interesting but secondary to the broad use of back-propagation or Gibbs sampling combined with autoencoding. This is a critique that goes back to the early days of connectionism, of course, and is why it was somewhat sidelined after an initial heyday in the late eighties. Then came statistical NLP, then came hybrid methods, then a resurgence of corpus methods, all the while with image processing getting more and more into the hand-crafted modular space.

But we can see some interesting developments that start to stir more Cognitivism into this stew. Recurrent Neural Networks provided interesting temporal behavior that might be lacking in some feedforward NNs, and Long Short-Term Memory (LSTM) NNs help to overcome some specific limitations of recurrent NNs, like the disconnection between temporally distant signals and the reward patterns.
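
A minimal numpy sketch of a single LSTM memory cell step shows the gating machinery that lets a signal bridge temporally distant events. The dimensions and random weights here are toy assumptions, not a trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a basic LSTM memory cell.

    The gates decide what to forget, what to write, and what to expose,
    which is how the cell carries a signal across long temporal gaps.
    """
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0:n])          # forget gate
    i = sigmoid(z[n:2 * n])      # input gate
    o = sigmoid(z[2 * n:3 * n])  # output gate
    g = np.tanh(z[3 * n:4 * n])  # candidate cell values
    c = f * c_prev + i * g       # cell state: gated memory update
    h = o * np.tanh(c)           # hidden state exposed to the next layer
    return h, c

# Toy dimensions (hypothetical): 3 inputs, 2 hidden units
rng = np.random.default_rng(0)
n_in, n_hid = 3, 2
W = rng.normal(size=(4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(5):               # run a short input sequence through the cell
    x = rng.normal(size=n_in)
    h, c = lstm_step(x, h, c, W, b)
print(h, c)
```

The cell state c is the part that can persist nearly unchanged when the forget gate saturates near one, which is the trick that relieves the temporal-disconnection problem of plain recurrent NNs.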

Still, the modularity and rapid learning transitions elude us. While these methods are enhancing the ability to learn the contexts around specific events (and even the unique variability of contexts), that learning still requires many exposures to get right. We might consider our language or vision modules to be learned over evolutionary history and so not expect learning within a lifetime from scratch to result in similarly structured modules, but the differences remain not merely quantitative but significantly qualitative. A New Cognitivism requires more work to rise from this New Behaviorism.

Evolving Visions of Chaotic Futures

Most artificial intelligence researchers consider it unlikely that a robot apocalypse or some kind of technological singularity is coming anytime soon. I've said as much, too. Guessing about the likelihood of distant futures is fraught with uncertainty; current trends are almost impossible to extrapolate.

But if we must, what are the best ways for guessing about the future? In the late 1950s the Delphi method was developed. Get a group of experts on a given topic and have them answer questions anonymously. Then iteratively publish back the group results and ask for feedback and revisions. Similar methods have been developed for face-to-face group decision making, like Kevin O’Connor’s approach to generating ideas in The Map of Innovation: generate ideas and give participants votes equaling a third of the number of unique ideas. Keep iterating until there is a consensus. More broadly, such methods are called “nominal group techniques.”
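
The Delphi feedback loop can be sketched in a few lines of Python. The revision rule here (experts moving part-way toward the published median each round) is a hypothetical stand-in; real panels revise however they see fit:

```python
import statistics

def delphi(estimates, pull=0.5, tol=1.0, max_rounds=20):
    """Iteratively publish the group median back to anonymous experts,
    who each revise their estimate part-way toward it, until the spread
    of opinion falls below a tolerance."""
    for round_no in range(1, max_rounds + 1):
        med = statistics.median(estimates)
        # Anonymous feedback: every expert moves `pull` of the way to the median
        estimates = [e + pull * (med - e) for e in estimates]
        if max(estimates) - min(estimates) < tol:
            return med, round_no
    return statistics.median(estimates), max_rounds

# Five experts guess when some event will occur (years from now)
consensus, rounds = delphi([5.0, 8.0, 12.0, 20.0, 35.0])
print(consensus, rounds)
```

With this toy rule the group converges geometrically on the initial median, which also illustrates the standard criticism of Delphi: the mechanism manufactures agreement, not accuracy.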

Most recently, the notion of prediction markets has been applied to internal and external decision making. In prediction markets, a similar voting strategy is used, but based on either fake or real money, forcing participants towards a risk-averse allocation of assets.
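
One common automated mechanism behind prediction markets is Hanson's logarithmic market scoring rule. Here is a minimal two-outcome sketch in Python; the liquidity parameter and trade size are made-up values for illustration:

```python
import math

class LMSRMarket:
    """Two-outcome prediction market using Hanson's logarithmic market
    scoring rule. b controls liquidity: larger b means prices move less
    per share bought."""
    def __init__(self, b=100.0):
        self.b = b
        self.q = [0.0, 0.0]  # outstanding shares for outcomes 0 and 1

    def cost(self, q):
        # The market maker's cost function C(q) = b * log(sum(exp(q_i / b)))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        # Instantaneous price = the market's current probability of the outcome
        total = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        # A trader pays the difference in the cost function
        new_q = list(self.q)
        new_q[outcome] += shares
        paid = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return paid

m = LMSRMarket(b=100.0)
print(m.price(1))          # 0.5 before any trades
paid = m.buy(1, 50.0)      # a trader backs outcome 1
print(m.price(1), paid)
```

Because traders pay real (or play) money along a convex cost curve, pushing the price toward an outcome you don't actually believe in costs you, which is the risk discipline the voting schemes above lack.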

Interestingly, we know that optimal inference based on past experience can be codified using algorithmic information theory, but the fundamental problem with any kind of probabilistic argument is that much change that we observe in society is non-linear with respect to its underlying drivers and that the signals needed are imperfect. As the mildly misanthropic Nassim Taleb pointed out in The Black Swan, the only place where prediction takes on smooth statistical regularity is in Las Vegas, which is why one shouldn’t bother to gamble. Taleb’s approach is to look instead at minimizing the impact of shocks (or hedging them in financial markets).

But maybe we can learn something from philosophical circles. For instance, Evolutionary Epistemology (EE), as formulated by Donald Campbell, Sir Karl Popper, and others, posits that blind variation and selective retention are central to knowledge formation. Combined with optimal induction, this leads to random processes being injected into any kind of predictive optimization. We do this in evolutionary algorithms like Genetic Algorithms, Evolutionary Programming, Genetic Programming, and Evolution Strategies, as well as in related approaches like Simulated Annealing. But EE also suggests that there are several levels of learning by variation/retention, from the phylogenetic learning of species through to the mental processes of higher organisms. We speculate and trial-and-error continuously, repeating loops of what-ifs in our minds in an effort to optimize our responses in the future. It's confounding as hell, but we do remarkable things that machines can't yet do, like folding towels or learning to bake bread.

This noosgeny-recapitulates-ontogeny-recapitulates-phylogeny (just made that up) can be exploited in a variety of ways for abductive inference about the future. We can, for instance, use evolutionary optimization with a penalty for complexity that simulates the informational trade-off of AIT-style inductive optimality. Further, the noosgeny component (by which I mean the internalized mental trial-and-error) can reduce phylogenetic waste in simulations by providing speculative modeling that retains the “parental” position on the fitness landscape before committing to a next generation of potential solutions, allowing for further probing of complex adaptive landscapes.
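
One way to make that concrete is a simple evolution-strategy sketch in Python: the parental position on the fitness landscape is retained each generation unless an offspring strictly improves on it, and fitness charges a crude per-parameter complexity penalty standing in for the AIT-style trade-off. The polynomial target, penalty weight, and mutation rates are all hypothetical:

```python
import random

# Hypothetical target: data generated by a noise-free quadratic
DATA = [(x, 2.0 * x * x - 3.0 * x + 1.0) for x in range(-5, 6)]
PENALTY = 0.5  # complexity charge per coefficient (a crude MDL-style term)

def fitness(coeffs):
    # Negative squared error on the data, minus the complexity penalty
    err = sum((sum(c * x**i for i, c in enumerate(coeffs)) - y) ** 2
              for x, y in DATA)
    return -err - PENALTY * len(coeffs)

def mutate(coeffs, rng):
    # Blind variation: jitter coefficients, occasionally grow or shrink the model
    child = [c + rng.gauss(0, 0.1) for c in coeffs]
    if rng.random() < 0.1 and len(child) < 6:
        child.append(rng.gauss(0, 0.1))
    elif rng.random() < 0.1 and len(child) > 1:
        child.pop()
    return child

def evolve(generations=3000, offspring=10, seed=1):
    rng = random.Random(seed)
    parent = [0.0]                        # start with the simplest model
    for _ in range(generations):
        brood = [mutate(parent, rng) for _ in range(offspring)]
        best = max(brood, key=fitness)
        # Retain the parental position unless an offspring strictly improves
        if fitness(best) > fitness(parent):
            parent = best
    return parent

model = evolve()
print(len(model), [round(c, 2) for c in model])
```

Holding the parent in place while a brood of speculative what-ifs probes the landscape is exactly the noosgeny move: trial-and-error runs in simulation before any commitment to the next generation.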

The Linguistics of Hate

Right-wing authoritarianism (RWA) and social dominance orientation (SDO) are measures of personality traits and tendencies. To measure them, you ask people to rate statements like:

Superior groups should dominate inferior groups

The withdrawal from tradition will turn out to be a fatal fault one day

People rate their opinions on these questions using a 1 to 5 scale from Definitely Disagree to Strongly Agree. These scales have their detractors but they also demonstrate some useful and stable reliability across cultures.

Note that while both of these measures tend to be higher in American self-described “conservatives,” they also can be higher for leftist authoritarians and they may even pop up for subsets of attitudes among Western social liberals about certain topics like religion. Haters abound.

I used the R packages twitteR, tm, wordcloud, SnowballC, and a few others, and grabbed a few thousand tweets that contained the #DonaldJTrump hashtag. A quick scan of them showed the standard properties of tweets, like repetition through retweeting, heavy use of hashtags, and, of course, the use of #DonaldJTrump as part of anti-Trump sentiments (something about a cocaine-use video). But, filtering them down, there were definite standouts that seemed to support an RWA/SDO orientation. Here are some examples:

The last great leader of the White Race was #trump #trump2016 #donaldjtrump #DonaldTrump2016 #donaldtrump

Just a wuss who cant handle the defeat so he cries to GOP for brokered Convention. # Trump #DonaldJTrump

I am a PROUD Supporter of #DonaldJTrump for the Highest Office in the land. If you don’t like it, LEAVE!

#trump army it’s time, we stand up for family, they threaten trumps family they threaten us, lock and load, push the vote…

Not surprising, but the density of them shows a real aggressiveness that somewhat shocked me. So let's assume that Republicans make up around 29% of the US population, and that Trump is getting around 40% of their votes in the primary season; then we have an angry RWA/SDO-focused subpopulation of around 12% of the US population (0.29 × 0.40 ≈ 0.12).

That seems to fit with results from an online survey of RWA, reported here. An interesting open question is whether there is a spectrum of personality types that is genetically predisposed, or whether childhood exposures to ideas and modes of childrearing are more likely the cause of these patterns (and their cross-cultural applicability).

Here are some interesting additional resources:

Bilewicz, Michal, et al. "When Authoritarians Confront Prejudice: Differential Effects of SDO and RWA on Support for Hate-Speech Prohibition." Political Psychology (2015).

Sylwester, Karolina, and Matthew Purver. "Twitter Language Use Reflects Psychological Differences between Democrats and Republicans." PLoS ONE 10(9): e0137422 (2015). doi:10.1371/journal.pone.0137422.

The latter has a particularly good overview of RWA/SDO, other measures like openness, etc., and Twitter as an analytics tool.

Finally, below is some R code for Twitter analytics that I am developing. It is derivative of sample code like here and here, but reorients the function structure and adds deletion of Twitter hashtags to focus on the supporting language. There are some other enhancements like codeset normalization. All uses and reuses are welcome. I am starting to play with building classifiers and using Singular Value Decomposition to pull apart various dominating factors and relationships in the term structure. Ultimately, however, human intervention is needed to identify pro vs. anti tweets, as well as phrasal patterns that are more indicative of RWA/SDO than bags-of-words can indicate.
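
To illustrate the SVD direction mentioned above (in Python rather than R, with a made-up toy term-document matrix; real matrices come out of the pipeline below):

```python
import numpy as np

# Hypothetical term-document matrix: rows are terms, columns are tweets
terms = ["great", "leader", "wuss", "cries", "proud", "vote"]
tdm = np.array([
    [2, 0, 0, 1],   # great
    [1, 0, 0, 1],   # leader
    [0, 2, 0, 0],   # wuss
    [0, 1, 0, 0],   # cries
    [1, 0, 2, 0],   # proud
    [0, 0, 1, 2],   # vote
], dtype=float)

# SVD factors the matrix into term factors, factor strengths, and document mixes
U, s, Vt = np.linalg.svd(tdm, full_matrices=False)

# The leading singular vectors capture the dominant term/document relationships
dominant_terms = sorted(zip(terms, np.abs(U[:, 0])), key=lambda p: -p[1])
print("singular values:", np.round(s, 2))
print("terms loading on factor 1:", dominant_terms[:3])
```

Truncating to the top few factors is the usual way to pull apart dominating relationships in the term structure, though, as noted, sorting pro from anti tweets still takes human eyes.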

Also, here are wordclouds generated for #hillaryclinton and #DonaldJTrump, respectively. The Trump wordcloud was distorted by some kind of repetitive robotweeting that dominated the tweets.





library(twitteR)
library(tm)
library(wordcloud)
library(SnowballC)

# (Assumes Twitter API authentication has already been set up
# via setup_twitter_oauth.)

tweets.grabber <- function(searchTerm, num, verbose = FALSE) {
  #Grab the tweets
  djtTweets <- searchTwitter(searchTerm, num)

  #Use a handy helper function to put the tweets into a dataframe
  tw.df <- twListToDF(djtTweets)

  RemoveDots <- function(tweet) {
    gsub("[\\.\\,\\;]+", " ", tweet)
  }

  RemoveLinks <- function(tweet) {
    tweet <- gsub("http:[^ $]+", "", tweet)
    gsub("https:[^ $]+", "", tweet)
  }

  RemoveAtPeople <- function(tweet) {
    gsub("@\\w+", "", tweet)
  }

  #Delete hashtags to focus on the supporting language
  RemoveHashtags <- function(tweet) {
    gsub("#\\w+", "", tweet)
  }

  #Codeset normalization: strip non-ASCII characters
  FixCharacters <- function(tweet) {
    iconv(tweet, to = "ASCII", sub = "")
  }

  CleanTweets <- function(tweet) {
    s1 <- RemoveLinks(tweet)
    s2 <- RemoveAtPeople(s1)
    s3 <- RemoveDots(s2)
    s4 <- RemoveHashtags(s3)
    FixCharacters(s4)
  }

  tweets <- as.vector(sapply(tw.df$text, CleanTweets))
  if (verbose) print(tweets)
  tweets
}

generateCorpus <- function(df, pstopwords = character(0)) {
  tw.corpus <- Corpus(VectorSource(df))
  tw.corpus <- tm_map(tw.corpus, content_transformer(removePunctuation))
  tw.corpus <- tm_map(tw.corpus, content_transformer(tolower))
  tw.corpus <- tm_map(tw.corpus, removeWords, stopwords('english'))
  if (length(pstopwords) > 0) {
    tw.corpus <- tm_map(tw.corpus, removeWords, pstopwords)
  }
  tw.corpus
}

corpus.stats <- function(tweets) {
  corpus <- generateCorpus(tweets)
  doc.m <- TermDocumentMatrix(corpus, control = list(minWordLength = 1))
  dm <- as.matrix(doc.m)
  # calculate the frequency of words
  v <- sort(rowSums(dm), decreasing = TRUE)
  data.frame(word = names(v), freq = v)
}

#Generate the wordcloud
wordcloud.generate <- function(d, min.freq) {
  wordcloud(d$word, d$freq, scale = c(4, 0.3), min.freq = min.freq,
            colors = brewer.pal(8, "Paired"))
}

djttweets <- tweets.grabber("#DonaldJTrump", 2000, verbose = TRUE)
djtcorpus <- corpus.stats(djttweets)
wordcloud.generate(djtcorpus, 3)

On Woo-Woo and Schrödinger’s Cat

Michael Shermer and Sam Harris got together with an audience at Caltech to beat up on Deepak Chopra and a “storyteller” named Jean Houston in The Future of God debate hosted by ABC News. And Deepak got uncharacteristically angry back behind his crystal-embellished eyewear, especially at Shermer’s assertion that Deepak is just talking “woo-woo.”

But is there any basis for the woo-woo that Deepak is weaving? As it turns out, he is building on some fairly impressive work by Stuart Hameroff, MD, of University of Arizona and Sir Roger Penrose of Oxford University. Under development for more than 25 years, this work has most recently been summed up in their 2014 paper, “Consciousness in the universe: A review of the ‘Orch OR’ theory” available for free (but not the commentaries, alas). Deepak was even invited to comment on the paper in Physics of Life Reviews, though the content of his commentary was challenged as being somewhat orthogonal or contradictory to the main argument.

To start somewhere near the beginning, Penrose became obsessed with the limits of computation in the late 80s. The Halting Problem sums up his concerns about the idea that human minds can possibly be isomorphic with computational devices. To Penrose, there seems to be something that allows for breaking free of the limits of “mere” Turing-complete computation. Whatever that something is, it should be physical and reside within the structure of the brain itself. Hameroff and Penrose would also like that something to explain consciousness and all of its confusing manifestations, for surely consciousness is part of that brain operation.

Now, to get at some necessary and sufficient sorts of explanations for this new model requires looking at Hameroff’s medical specialty: anesthesiology. Anyone who has had surgery has had the experience of consciousness going away while body function continues on, still mediated by brain activities. So certain drugs like halothane erase consciousness through some very targeted action. Next, consider that certain prokaryotes have internally coordinated behaviors without the presence of a nervous system. Finally, consider that it looks like most neurons do not integrate and fire like the classic model (and the model that artificial neural networks emulate), but instead have some very strange and random activation behaviors in the presence of the same stimuli.

How do these relate? Hameroff has been very focused on one particular component of the internal architecture of neural cells: microtubules, or MTs. These are very small (compared to cellular scale) and there are millions in neurons (10^9 or so). They are just cylindrical polymers with some specific chemical properties. They are also small enough (25nm in diameter) that it might be possible that quantum effects are present in their architecture. There is some very recent evidence to this effect based on strange reactions of MTs to tiny currents of varying frequencies used to probe them. Also, anesthetics appear to bind to MTs. Indeed, MTs could also provide a memory substrate that is orders of magnitude greater than the traditional interneuron concept of how memories form.

But what does this have to do with consciousness, beyond the idea that MTs get interfered with by anesthetics and therefore might be around or part of the machinery that we label conscious? They also appear to be related to Alzheimer’s disease, but this could just be related to the same machinery. Well, this is where we get woo-woo-ey. If consciousness is not just an epiphenomenon arising from standard brain function as a molecular computer, and it is also not some kind of dualistic soul overlay, then maybe it is something that is there but is not a classical computer. Hence quantum effects.

So Sir Penrose has been promoting a rather wild conjecture called the Diósi-Penrose theory that puts an upper limit on the amount of time a quantum superposition can survive. It does this based on some arguments I don’t fully understand, but that integrate gravity with quantum phenomena to suggest that the mass displaced by the superposed wave functions snaps the superposition into wave collapse. So Schrödinger’s cat dies or lives very quickly even without an observer, because there are a lot of superposed quantum particles in a big old cat and therefore very rapid resolution of the wave function evolution (10^-24 s). Single particles can live in superposition for much longer because the mass difference between their wave functions is very small.
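
The collapse-time estimate at the heart of the proposal can at least be stated compactly (a sketch of the standard Diósi-Penrose formulation, not the derivation):

```latex
\tau \;\approx\; \frac{\hbar}{E_G}
```

Here E_G is the gravitational self-energy of the difference between the two superposed mass distributions. A cat's worth of displaced mass makes E_G enormous and the lifetime tau fantastically short (the 10^-24 s above), while a lone particle's tiny mass difference lets its superposition persist far longer.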

Hence the OR in “Orch OR” stands for Objective Reduction: wave functions are subject to collapse by probing, but they also collapse just because they are unstable in that state. The reduction is objective and not subjective. The “Orch” stands for “Orchestrated.” And there is the seat of consciousness in the Hameroff-Penrose theory. In MTs, little wave function collapses are constantly occurring, and the presence of superposition means quantum computing can occur. And the presence of quantum computing means that non-classical computation can take place and maybe even be more than Turing Complete.

Now the authors are careful to suggest that these are actually proto-conscious events and that only their large-scale orchestration leads to what we associate with consciousness per se. Otherwise they are just quantum superpositions that collapse, maybe with 1 qubit of resolution under the right circumstances.

At least we know the cat has a fate now. That fate is due to an objective event, too, and not some added woo-woo from the strange world of quantum phenomena. And the cat’s curiosity is part of the same conscious machinery.

Bayesianism and Properly Basic Belief

Xu and Tenenbaum, in Word Learning as Bayesian Inference (Psychological Review, 2007), develop a very simple Bayesian model of how children (and even adults) build semantic associations based on accumulated evidence. In short, they find contrastive elimination approaches as well as connectionist methods unable to explain the patterns that are observed. Specifically, the most salient problem with these other methods is that they lack the rapid transition that is seen when three exemplars are presented for a class of objects associated with a word versus one exemplar. Adults and kids (the former even more so) just get word meanings faster than those other models can easily show. Moreover, a space of contending hypotheses that are weighted according to their Bayesian statistics provides an escape from the all-or-nothing of hypothesis elimination and some of the “soft” commitment properties that connectionist models provide.

The mathematical trick for the rapid transition is rather interesting. They formulate a “size principle” that weights the likelihood of a given hypothesis (this object is most similar to a “feb,” for instance, rather than the many other object sets that are available) according to a scaling that is exponential in the number of exposures. Hence the rapid transition:

Hypotheses with smaller extensions assign greater probability than do larger hypotheses to the same data, and they assign exponentially greater probability as the number of consistent examples increases.
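
A toy Python version of the size principle makes the rapid transition visible. The hypothesis names and extension sizes below are invented for illustration; the prior is uniform and the likelihood is the (1/|h|)^n scaling quoted above:

```python
# Three nested hypotheses for what a novel word like "feb" might mean, with
# hypothetical extension sizes: specific kind, basic-level class, broad class.
HYPOTHESES = {"dalmatians": 5, "dogs": 20, "animals": 100}

def posterior(n_examples):
    """Posterior over hypotheses after n consistent examples, with a uniform
    prior and the size-principle likelihood (1/|h|)^n."""
    weights = {h: (1.0 / size) ** n_examples for h, size in HYPOTHESES.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

print(posterior(1))  # belief still spread across the nested hypotheses
print(posterior(3))  # belief snaps to the smallest consistent hypothesis
```

With one example the smallest hypothesis merely leads; with three consistent examples its exponential advantage makes the posterior all but collapse onto it, which is the rapid transition the elimination and connectionist models miss.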

It should be noted that they don’t claim that the psychological or brain machinery implements exactly this algorithm. As is usual in these matters, it is instead likely that whatever machinery is involved, it simply has at least these properties. It may very well be that connectionist architectures can do the same but that existing approaches to connectionism simply don’t do it quite the right way. So other methods may need to be tweaked to get closer to the observed learning of people in these word tasks.

So what can this tell us about epistemology and belief? Classical foundationalism might be formulated as: something is a “basic” or “justified” belief if it is self-evident or evident to our senses. Other beliefs may therefore be grounded by those basic beliefs. And a more modern reformulation might substitute “incorrigible” for “justified,” with the layered meaning of incorrigibility built on the necessity that, given the proposition, it is in fact true.

Here’s Alvin Plantinga laying out a case for why justification and incorrigibility have a range of problems, problems serious enough for Plantinga that he suspects that god belief could just as easily be a basic belief, allowing for the kinds of presuppositional Natural Theology (think: I look around me and the hand of God is obvious) that is at the heart of some of the loftier claims concerning the viability or non-irrationality of god belief. It even provides a kind of coherent interpretative framework for historical interpretation.

Plantinga positions the problem of properly basic belief then as an inductive problem:

And hence the proper way to arrive at such a criterion is, broadly speaking, inductive. We must assemble examples of beliefs and conditions such that the former are obviously properly basic in the latter, and examples of beliefs and conditions such that the former are obviously not properly basic in the latter. We must then frame hypotheses as to the necessary and sufficient conditions of proper basicality and test these hypotheses by reference to those examples. Under the right conditions, for example, it is clearly rational to believe that you see a human person before you: a being who has thoughts and feelings, who knows and believes things, who makes decisions and acts. It is clear, furthermore, that you are under no obligation to reason to this belief from others you hold; under those conditions that belief is properly basic for you.

He goes on to conclude that this opens up the god hypothesis as providing this kind of coherence mechanism:

By way of conclusion then: being self-evident, or incorrigible, or evident to the senses is not a necessary condition of proper basicality. Furthermore, one who holds that belief in God is properly basic is not thereby committed to the idea that belief in God is groundless or gratuitous or without justifying circumstances. And even if he lacks a general criterion of proper basicality, he is not obliged to suppose that just any or nearly any belief—belief in the Great Pumpkin, for example—is properly basic. Like everyone should, he begins with examples; and he may take belief in the Great Pumpkin as a paradigm of irrational basic belief.

So let’s assume that the word learning mechanism based on this Bayesian scaling is representative of our human inductive capacities. Now this may or may not be broadly true. It is possible that it is true of words but not other domains of perceptual phenomena. Nevertheless, given this scaling property, the relative inductive truth of a given proposition (a meaning hypothesis) is strictly Bayesian. Moreover, this doesn’t succumb to the problems of verificationism because it only claims relative truth. What is properly basic is then a matter of the scaled, contending explanatory hypotheses, and the god hypothesis has to compete with other explanations like evolutionary theory (for human origins), empirical evidence of materialism (for explanations contra supernatural ones), perceptual mistakes (ditto), myth scholarship, textual analysis, influence of parental belief exposure, the psychology of wish fulfillment, the pragmatic triumph of science, etc., etc.

And so we can stick to a relative scaling of hypotheses as to what constitutes basicality or justified true belief. That’s fine. We can continue to argue the previous points as to whether they support or override one hypothesis or another. But the question Plantinga raises as to what ethics to apply in making those decisions is important. He distinguishes different reasons why one might want to believe more true things than others (broadly), or maybe some things as properly basic rather than others, or, more correctly, why philosophers feel the need to pin god-belief as irrational. But we succumb to a kind of unsatisfying relativism insofar as the space of these hypotheses is not, in fact, weighted in a manner that most reflects the known facts. The relativism gets deeper when the weighting is washed out by wish fulfillment, pragmatism, aspirations, and personal insights that lack falsifiability. That is at least distasteful, maybe aretaically so (in Plantinga’s framework), but probably more teleologically so in that it influences other decision-making and the conflicts and real harms societies may cause.

Artsy Women

A pervasive commitment to ambiguity. That’s the most compelling sentence I can think of to describe the best epistemological stance concerning the modern world. We have, at best, some fairly well-established local systems that are reliable. We have consistency that may, admittedly, only pertain to some local system that is relatively smooth or has a modicum of support for the most general hypotheses that we can generate.

It’s not nihilistic to believe these things. It’s prudent and, when carefully managed, it’s productive.

And with such prudence we can tear down the semantic drapery that commands attention at every turn, from the grotesqueries of the political sphere that seek to command us through emotive hyperbole to the witchdoctors of religious canons who want us to immanentize some silly Middle Eastern eschaton or shoot up a family-planning clinic.

It is all nonsense. We are perpetuating and inventing constructs that cling to our contingent neurologies like mold, impervious to the broadest implications and best thinking we can muster. That’s normal, I suppose, for that is the sub rosa history of our species. But only beneath the firmament, while there is hope above and inventiveness and the creation of a new honor that derives from fairness and not from reactive disgust.

In opposition to the structures that we know and live with—that we tolerate—there is both clarity in this cocksure target and a certainty that, at least, we can deconstruct the self-righteousness and build a new sensibility to (at least) equality if not some more grand vision.

I picked up Laura Marling’s Short Movie last week and propagated it to various cars. It is only OK, but it joins a rather large collection of recent female musicians in my music archive. Indeed, the women have outnumbered the men at this point: St. Vincent, Joni Mitchell, Joanna Newsom, Hole, P.J. Harvey, Gwen Stefani, Courtney Love (sans Hole), Ani DiFranco, Joan Armatrading, Lily Allen, Valerie June. I’m particularly fascinated by female artists because they are unspoken or underrepresented in our brief human history, and maybe also because my wife is one. But more than some progressive political commitment, female voices simply discuss things in different ways than male voices do.

Where there is perhaps an evolutionary inevitability for a perspective of pursuit and desire from men, for women there is the rage against social and familial expectations, of abuse, of being pursued, and of the complex relationship with the power of men. These aspects make for new thoughts that would not arise in male arts.


A Soliloquy for Volcanoes and Nearest Neighbors

Tongariro National Park: Emerald Lake

A German kid caught me talking to myself yesterday. It was my fault, really. I was trying to break a hypnotic, trance-like repetition of exactly what I was going to say to the tramper's hut warden about two hours away. OK, more specifically, I had left the Waihohonu camp site in Tongariro National Park at 7:30AM and was planning to walk out that day. To put this into perspective, it's 28.8 km (17.9 miles) with elevation changes of around 900m, including a ridiculous final assault above Red Crater at something like 60 degrees along a stinking volcanic ridge line. And, to make things extra lovely, there was hail, then snow, then torrential downpours punctuated by hail again—a lovely tramp in the New Zealand summer—all in a full pack.

But anyway, enough bragging about my questionable judgement. I was driven by thoughts of a hot shower and the duck à l'orange at Chateau Tongariro while, hands numbed to unfeeling, I arrested myself with trekking poles down through muddy canyons. I was talking to myself. I was trying to stop repeating to myself why I didn't want the campsite I had reserved for the night. This is the opposite of glorious runner's high. This is when all the extra blood from one's brain is obsessed with either making leg muscles go or watching how the feet will fall. I also had the hood of my rain fly up over my little Marmot ball cap. I was in full regalia, too, with the shifting rub of my Gore-Tex rain pants a constant presence throughout the day. I didn't notice him easing up on me as I carried on about one-shot learning as some kind of trance-breaking ritual.

We exchanged pleasantries and he meandered on. With his tiny little day pack it was clear he had just come up from the car park at Mangatepopo for a little jaunt. Eurowimp. I caught up with him later slathering some kind of meat product on white bread trailside and pushed by, putting off my own lunch of jerky, chili-tuna, crackers, and glorious spring water, gulp after gulp, until an hour onward. He didn't bring up the glossolalic soliloquy incident.

My mantra was simple: artificial neural networks, including deep learning approaches, require massive learning cycles and huge numbers of exemplars to learn. In the classic test, tens of thousands of handwritten digit images (0 to 9) are categorized as to which number they are. Deep learning systems have gotten to 99% accuracy on that problem, actually besting average human performance. Yet they require a huge training corpus to pull this off, combined with many CPU hours to optimize the models on that corpus. We humans do much better than that with our neural systems.

So we get this recently lauded effort, One-Shot Learning of Visual Concepts, which uses an extremely complicated Bayesian mixture modeling approach that combines stroke exemplars in order to classify foreign, never-before-seen characters (in, say, Bengali or Ethiopic script) after only one exposure to the stimulus. In other words, if I show you some weird character with some curves and arcs and a vertical bar in it, you can find similar ones in a test set quite handily, but machines really can't. A deep learning model could be trained on every known example in a long, laborious process, but when exposed to a new script like Amharic or a Cherokee syllabary, its generalizations break down. A simple comparison approach is to use a nearest neighbor match or vote. That is, simply create vectors of the image pixels starting at the top left and compare the new image vector against each stored example vector using a distance measure like an inner product. Similar things look the same and have similar pixel patterns, right? Well, except they are rotated. They are shifted. They are enlarged and shrunken.
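To make that brittleness concrete, here is a minimal sketch of the raw-pixel nearest-neighbor match in Python. The toy 3×3 glyphs and names are mine, standing in for real character bitmaps: an exact copy classifies correctly, and then the very same stroke shifted one pixel over matches the wrong exemplar.

```python
import numpy as np

def nearest_neighbor_label(train_images, train_labels, query):
    """Classify `query` by the label of the closest training image.

    Images are flattened into pixel vectors; "closest" is Euclidean
    distance here, which for normalized binary images behaves like the
    inner-product match described above.
    """
    X = train_images.reshape(len(train_images), -1).astype(float)
    q = query.reshape(-1).astype(float)
    dists = np.linalg.norm(X - q, axis=1)
    return train_labels[np.argmin(dists)]

# Toy 3x3 "glyphs": a vertical bar and a horizontal bar.
vbar = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
hbar = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]])
train = np.stack([vbar, hbar])
labels = np.array(["vertical", "horizontal"])

# An unshifted copy matches its own exemplar...
print(nearest_neighbor_label(train, labels, vbar))     # vertical

# ...but shift the same stroke one pixel left and no pixels overlap
# with the stored vertical bar; it now lands closer to the horizontal.
shifted = np.array([[1, 0, 0], [1, 0, 0], [1, 0, 0]])
print(nearest_neighbor_label(train, labels, shifted))  # horizontal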

And then it hit me that the crazy-complex stroke model could be simplified quite radically by building a similar collection of stroke primitives as splines and then looking at the K nearest neighbors in the stroke space. So a T is two strokes drawn from the primitives collection, with a central junction and the horizontal lying atop the vertical. This builds on the stroke-based intuition of the paper's authors (basically, all written scripts have strokes as a central feature, and we as writers and readers understand the line-ness of them from experience with our own script).
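A rough sketch of what that might look like, with a hypothetical stroke encoding of my own invention (a primitive id plus a center position) and a brute-force set-matching distance standing in for a proper spline comparison:

```python
from itertools import permutations

# Hypothetical encoding: each stroke is (primitive_id, x, y), where
# primitive_id indexes a shared library of spline primitives
# (vertical, horizontal, arc, ...) and (x, y) is the stroke's center.

def stroke_distance(a, b):
    """Distance between two characters in 'stroke space'.

    A crude set-matching cost: try every pairing of strokes and keep
    the cheapest. Mismatched primitive types pay a penalty; matched
    ones pay their positional offset; unmatched strokes (when the
    stroke counts differ) pay one penalty each.
    """
    if len(a) > len(b):
        a, b = b, a
    PENALTY = 10.0
    best = float("inf")
    for perm in permutations(b, len(a)):
        cost = (len(b) - len(a)) * PENALTY
        for (pa, xa, ya), (pb, xb, yb) in zip(a, perm):
            if pa != pb:
                cost += PENALTY
            else:
                cost += abs(xa - xb) + abs(ya - yb)
        best = min(best, cost)
    return best

def k_nearest(query, exemplars, k=1):
    """Labels of the k exemplars nearest to `query` in stroke space."""
    ranked = sorted(exemplars, key=lambda e: stroke_distance(query, e[1]))
    return [label for label, _ in ranked[:k]]

V, H = "vertical", "horizontal"   # primitive ids
T_CHAR = [(V, 1, 1), (H, 1, 0)]   # a 'T': horizontal atop a vertical
L_CHAR = [(V, 0, 1), (H, 1, 2)]   # an 'L': horizontal at the base
exemplars = [("T", T_CHAR), ("L", L_CHAR)]

# A slightly displaced T still lands nearest the T exemplar, where a
# raw-pixel match could miss entirely.
query = [(V, 1.2, 1.1), (H, 0.9, 0.1)]
print(k_nearest(query, exemplars))  # ['T']
```

The point of the design is that translation, and to a degree scaling, become small positional offsets in stroke space rather than catastrophic pixel misalignments.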

I may have to try this out. I should note, also in critique of this antithesis of runner's high (tramping doldrums?), that I was also deeply concerned that there were so many damn contending voices and thoughts racing around my head in the face of such incredible scenery. Why did I feel the need to distract my mind from its obsessions with something so humanly trivial? At least, I suppose, the distraction was interesting enough that it was worth the effort.

Lucifer on the Beach

I picked up a whitebait pizza while stopped along the West Coast of New Zealand tonight. Whitebait are tiny little swarming immature fish that can be scooped out of estuarial river flows using big-mouthed nets. They run, they dart, and it is illegal to alter river exit points to try to channel them for capture. Hence, whitebait is semi-precious, commanding NZD70–130/kg, which explains why there was a size limit on my pizza: only the small one was available.

By the time I was finished, the sky had aged from cinereal to iron in a satire of the vivid, watch-me colors of CNN International flashing Donald Trump's linguistic indirection across the television. I crept out, setting my headlamp to its red LEDs, designed to minimally interfere with night vision. Just up away from the coast, hidden in the impossible tangle of cold rainforest, there was a glow worm dell. A few tourists conjured with flashlights facing the ground to avoid upsetting the tiny Arachnocampa luminosa that clung to the walls inside the dark garden. They were like faint stars composed into irrelevant constellations, with only the human mind to blame for any observed patterns.

And the light, what light: like the white-light LEDs only recently invented, but a light that doesn't flicker or change, steady under the calmest observation. Driven by luciferin and luciferase, these tiny creatures lure a few scant light-seeking insects to their doom as food, until they emerge to mate, briefly, lay eggs, and then die.

Lucifer again, named properly from the Latin as the light-bringer. The chemical basis for bioluminescence was largely isolated in the middle of the 20th century, yet a biblical stigma hangs over the term, one that really makes no sense at all. A translation of "morning star" or some other such nonsense into Latin got corrupted into a proper name by a process of word conversion (this isn't metonymy or anything like that; I'm not sure there is a word for it other than "mistake"). So much for some kind of divine literalism tracking mechanism that preserves perfection. Even Jesus got rendered as lucifer in some passages.

But nothing new here. "Demon" comes from the Greek daimōn, and Christianity tried to, well, demonize all the ancient spirits during the transition from monolatry to monotheism. The spirits of the air that were in constant flux for the Hellenists, and then the Romans, needed to be suppressed and given an oppositional position in the Christian soteriology. Even "Satan" may have been borrowed from Persian court drama, as a kind of spy or informant, after the exile.

Oddly, we are left with a kind of naming magic for the truly devout who might look at those indifferent little glow worms with some kind of castigating eye, corrupted by a semantic chain that is as kinked as the popular culture epithets of Lucifer himself.