Tweak, Memory

Artificial Neural Networks (ANNs) were, from early on in their formulation as Threshold Logic Units (TLUs) or Perceptrons, mostly focused on non-sequential decision-making tasks. With the invention of back-propagation training methods, applying them to static presentations of data became somewhat fixed as a methodology. During the 1990s, Support Vector Machines became the rage, and then Random Forests and other ensemble approaches held significant mindshare. ANNs receded into the distance as a quaint historical approach that was fairly computationally expensive and opaque compared with the other methods.

But Deep Learning has brought the ANN back through a combination of improvements, both minor and major. The most important enhancements include pre-training the networks as auto-encoders prior to pursuing error-based training using back-propagation or Contrastive Divergence with Gibbs Sampling. The other critical enhancement derives from Schmidhuber and others' work in the 90s on managing temporal presentations to ANNs so they can effectively process sequences of signals. This latter development is critical for processing speech, written language, grammar, changes in video state, and so on. Back-propagation without some form of recurrent network structure or memory management washes out the error signal that is needed for adjusting the weights of the networks. And it should be noted that increased compute firepower using GPUs and custom chips has accelerated training performance enough that experimental cycles are within the range of doable.
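The "washing out" of the error signal can be sketched with a toy calculation. In a simple linear recurrent chain, the error reaching the earliest time step is a product of per-step derivatives, so a recurrent weight below 1.0 shrinks it exponentially while a weight above 1.0 blows it up. This is a minimal illustration with hypothetical numbers, not any particular network:

```python
def error_signal_at_first_step(recurrent_weight: float, steps: int) -> float:
    """Magnitude of a unit error signal after back-propagating
    through `steps` time steps of a linear recurrent chain."""
    signal = 1.0
    for _ in range(steps):
        # Each step of back-propagation multiplies the error
        # by the (here constant) recurrent derivative.
        signal *= recurrent_weight
    return signal

print(error_signal_at_first_step(0.9, 50))  # washes out toward zero
print(error_signal_at_first_step(1.1, 50))  # explodes instead
```

Gated architectures like Schmidhuber and Hochreiter's LSTM address exactly this by keeping that per-step factor close to 1.0 along a protected memory path.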

Note that these are what might be called “computer science” issues rather than “brain science” issues. Researchers are drawing rough analogies between some observed properties of real neuronal systems (neurons fire and connect together) but then are pursuing a more abstract question as to how a very simple computational model of such neural networks can learn.

The Inevitability of Cultural Appropriation

Picasso in Native Headdress

I’m on a TGV from Paris to Monaco. The sun was out this morning and the Jardin des Tuileries was filled with homages in tulips to various still lifes at the Louvre. Two days ago, at the Musée du quai Branly—Jacques Chirac, I saw the Picasso Primitif exhibition that showcased the influence of indigenous arts on Picasso’s work through the years, often by presenting statues from Africa or Papua New Guinea side-by-side with examples of Picasso’s efforts over time. If you never made the connection between his cubism and the statuary of Chad (like me), it is eye-opening. He wasn’t particularly culturally sensitive—like everyone else until at least the 1960s—because the fascinating people and their cultural works were largely aesthetic objects to him. If he was aware of the significance of particular pieces (and he might have been), it was something he rarely acknowledged or discussed. The photos that tie Picasso to the African statues are the primary thread of the exhibition, each one, taken at his California atelier or in Paris or whatnot, inscribed by the curators with a dainty red circle or oval to highlight a grainy African statue lurking in the background. Sometimes they provide a blow-up in case you can’t quite make it out. It is only with a full Native American headdress given to Picasso by the actor Gary Cooper that we see him actively mugging for a camera and providing weight to the show’s theme. Then Brigitte Bardot is leaning over him at the California studio and her cleavage renders the distant red oval uninteresting.

I am writing daily about things I don’t fully understand but try to imbue with a sense of character, of interest, and even of humor.