Harry spent most of that summer involved in the Santa Fe Sangre de Cristo Church, first with the church summer camp, then with the youth group. He seemed happy and spent the evenings text messaging with his new friends. I was jealous in a way, but refused to let it show too much. Thursdays he was picked up by the church van and went to watch movies in a recreation center somewhere. I looked out one afternoon as the van arrived and could see Sarah’s bright hair shining through the high back window of the van.
Mom explained that they seemed to be evangelical, meaning that they liked to bring as many new worshippers into the religion as possible through outreach and activities. Harry didn’t talk much about his experiences. He was too much in the thick of things to be concerned with my opinions, I think, and my snide comments were brushed aside with a beaming smile and a wave. “You just don’t understand,” Harry would dismissively tell me.
I was reading so much that Mom would often demand that I get out of the house on weekend evenings after she had encountered me splayed on the couch straight through lunch and into the shifting evening sunlight passing through the high windows of our thick-walled adobe. I would walk then, often for hours, snaking up the arroyos towards the mountains, then wend my way back down, traipsing through the thick sand until it was past dinner time.
It was during this time period that I read cyberpunk authors and became intrigued with the idea that someday, one day, perhaps computing machines would “wake up” and start to think on their own. I knew enough about computers that I could not even conceive of how that could possibly come about. My father had once described for me a simple guessing game that learned. If the system couldn’t guess your choice of animal, it would concede and use the correct answer to expand its repertoire. I had called it “learning by asking” at the time but only saw it as a simple game and never connected it to the problem of human learning.
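The guessing game my father described is, in essence, a binary tree of yes/no questions that grows a new branch every time it loses; a minimal sketch of that "learning by asking" idea (all the animal names and questions here are illustrative, not from any actual program):

```python
# A minimal sketch of the learning guessing game: a binary tree of
# yes/no questions with animal names at the leaves. When the program
# guesses wrong, it asks for a distinguishing question and grows.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text  # a question, or an animal name at a leaf
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def play(node, answer_fn):
    """Walk the tree; answer_fn(question) -> bool stands in for the player."""
    while not node.is_leaf():
        node = node.yes if answer_fn(node.text) else node.no
    return node  # the program's guess

def learn(leaf, new_animal, question, new_answer_is_yes):
    """After a wrong guess, split the leaf with a distinguishing question."""
    old = Node(leaf.text)
    new = Node(new_animal)
    leaf.text = question
    leaf.yes, leaf.no = (new, old) if new_answer_is_yes else (old, new)

# Start knowing one animal; concede and expand the repertoire with a second.
root = Node("cat")
guess = play(root, lambda q: True)
if guess.text != "dog":
    learn(guess, "dog", "Does it bark?", True)
```

The system never steps outside its tree: it only ever grows by the one mechanism it was given, which is exactly the confinement the next paragraph turns on.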
Yet now the concept made some sense as an example of how an intelligent machine could escape from the confines of just producing the outputs that it was programmed to produce. But there were still confines; the system could never reconfigure its own rules or decide to guess randomly when it got bored (or even get bored at all). There was something profound missing from our understanding of human intelligence.
Purposefulness seemed to be the missing attribute that we had and that machines did not. We were capable of making choices by a mechanism of purposefulness that transcended simple programmable rules systems, I hypothesized, and that purpose traced back to more elementary programming that was part of our instinctive, animal core. There was a philosophical problem with this scheme, though, that I recognized early on: if our daily systems of learning and thought were just elaborations of logical games like that animal-guessing game, and the purpose was embedded more deeply, what natural rules governed that deeper thing, and how could it be fundamentally different from the higher-order rules?
I wanted to call this core “instinct” and even hypothesized that if it could be codified it would bridge the gap between truly thinking and merely programmed machines. But the alternative to instinct being a logical system seemed to be assigning it supernatural status and that wasn’t right for several reasons.
First, the commonsense notion of instinct associated with doing primitive things like eating, mating and surviving seemed far removed from the effervescent and transcendent ideas about souls that were preached by religions. I wanted to understand the animating principle behind simple ideas like wanting to eat and strategizing about how to do it—hardly the core that ascends to heaven in Christianity and other religions I was familiar with. It was also shared across all animals and even down to the level of truly freaky things like viruses and prions.
The other problem was that any answer of supernaturalism struck me as leading smack into an intellectual brick wall because we could explain and explain until we got to the core of our beings and then just found this billiard ball of God-light. Somehow, though, that billiard ball had to emanate energy or little logical arms to affect the rules systems by which we made decisions; after all, purposefulness can’t just be captive in the billiard ball but has to influence the real world, and at that point we must be able to characterize those interactions and guess a bit at the structure of the billiard ball.
So the simplest explanation seemed to be that the core, instinct, was a logically describable system shaped by natural processes and equipped with rules that governed how to proceed. Those rules didn’t need to be simple or even easily explainable, but they needed to be capable of explanation. Any other scheme I could imagine involved a problem of recursion, with little homunculi trapped inside other homunculi and ultimately powered by a billiard ball of cosmic energy.
I tried to imagine what the religious thought about this scheme of explanation but found what I had heard from Harry to be largely incompatible with any sort of explanation. Instead, the discussion was devoid of any sort of detailed analysis or arguments concerning human intelligence. There was a passion play between good and evil forces, the notion of betraying or denying the creator god, and an unexplained transmigration of souls, the souls being something like our personalities or identities. If we wanted to ask a question about why someone had, say, committed a crime, it was due to supernatural influences that acted through their personalities. More fundamental questions, like how someone learned to speak a language, which I thought was pretty amazing, were not apparently subject to the same supernatural processes, but might be explained with a simple recognition of the eminence of God’s creation. So moral decisions were subject to evil while basic cognition was just an example of supernatural good in this scheme of things, with the latter perhaps subject to the arbitrary motivations of the creator being.
Supernaturalism was an appeal to non-explanation driven by a conscious desire to not look for answers. “God’s Will” was the refrain for this sort of reasoning and it was counterproductive to understanding how intelligence worked or had come about.
God was the end of all thought. The terminus of bland emptiness. A void.
But if natural processes were responsible, then the source of instinct was evolutionary in character. Evolution led to purpose, but in a kind of secondhand way. The desire to reproduce did not directly result in complex brains or those elaborate plumes on birds that showed up in biology textbooks. It was a distal effect built on a basic platform of sensing and reacting and controlling the environment. That seemed obvious enough but was just the beginning of the puzzle for me. It also left the possibility of machines “waking up” far too distant, since evolution worked very slowly in the biological world.
I suddenly envisioned computer programs competing with each other to solve specific problems in massive phalanxes. Each program varied slightly from the others in minute details. One could print “X” while another could print “Y”. The programs that did better would then be replicated into the next generation. Gradually the programs would converge on solving a problem using a simple evolutionary scheme. There was an initial sense of elegant simplicity, though the computing system to carry the process out seemed at least as large as the internet itself. There was a problem, however. The system required a central governor to carry out the replication of the programs, their mutation and to measure the success of the programs. It would also have to kill off, to reap, the losers. That governor struck me as remarkably god-like in its powers, sitting above the population of actors and defining the world in which they acted. It was also inevitable that the solutions at which programs would arrive would be completely shaped by the adaptive landscape that they were presented with; though they were competing against one another, their behavior was mediated through an outside power. It was like a game show in a way and didn’t have the kind of direct competition that real evolutionary processes inherently have.
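In modern terms, the scheme I was picturing is a simple genetic algorithm, with the governor acting as replicator, mutator, judge and reaper all at once; a toy illustration (the target string, alphabet, and parameters are all assumed for the example):

```python
# A toy version of the governor-driven scheme: a central loop measures
# each program, replicates the fittest with mutation, and reaps the rest.
# The "programs" are just strings scored against a fixed target output;
# target, alphabet, and parameters are illustrative, not from the text.
import random

TARGET = "XY"
ALPHABET = "XYZW"

def fitness(prog):
    # The governor's external objective: positions matching the target.
    return sum(a == b for a, b in zip(prog, TARGET))

def mutate(prog, rate=0.2):
    # Each character varies slightly from its parent's.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in prog)

def evolve(pop_size=50, generations=100, seed=0):
    random.seed(seed)
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        survivors = pop[: pop_size // 2]  # the governor reaps the losers
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return pop[0]

best = evolve()
```

Every solution the population converges on is dictated by the fitness function the governor imposes from outside, which is precisely the god-like role the passage objects to.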
A solution required that the governor process go away, that the individual programs replicate themselves and that even that replication process be subject to variation and selection. Moreover, the selection process had to be very broadly defined based on harvesting resources in order to replicate, not based on an externally defined objective function. Under those circumstances, the range of replicating machines—automata—could be as vast as the types of flora and fauna on Earth itself.
As I trudged up the arroyo, I tried to imagine the number of insects, bacteria, spores, plants and vines in even this relatively sparse desert. A cricket began singing in a nearby mesquite bush, joining the chorus of other crickets in the late summer evening. The light of the moon was beginning to glow behind a mountain ridge. Darkness was coming fast and I could hear coyotes start calling further up the wash towards St. John’s College.
As I returned home, I felt as though I was the only person walking in the desert that night, isolated in the dark spaces that separated the haphazard Santa Fe roads, yet I was also warmed with the idea that there was a solution to the problem of purpose embedded deeply in our biology and that it could be recreated in a laboratory of sorts, given a vastly complex computing system larger than the internet itself. That connection to a deep truth seemed satisfying in a way that the weird language of religion had never felt. We could know and understand our own nature through reason, through experiments and through simulation, and even perhaps create a completely new form of intelligence that had its own kind of soul derived from surviving generations upon generations of replications.
But did we, like gods, have the capacity to apprehend this? I recalled my Hamlet: The paragon of animals, indeed. A broad interpretation of the Biblical Fall as a desire to be like God lent a metaphorical flavor to this nascent notion. Were we reaching out to try to become like a creator god of sorts through the development of intelligent technologies and biological manipulation? If we did create a self-aware machine that seemed fully human-like, it would certainly support the idea that we were creators of new souls.
I was excited about this line of thinking as I slipped into the living room where Mom and Harry were watching a crime drama on TV. Harry would not understand this, I realized, and would lash out at me for being terrifically weird if I tried to discuss it with him. The distance between us had widened to the point that I would avoid talking directly to him. It felt a bit like the sense of loss after Dad died, though without the sense of finality that death brought with it. Harry and I could recover, I thought, reconnecting later on in life and reconciling our divergent views.
A commercial came and I stared at the back of his head like I had done so often, trying to burrow into his skull with my mind. “Harry, Harry!” I called in my thoughts. He suddenly turned around with his eyes bulging and a crooked smile erupting across his face.
“What?” he asked.
It still worked.