Category: Existential Threats

I, Robot and Us

What happens if artificial intelligence (AI) technologies become significant economic players? The topic has come up in various ways for the past thirty years, perhaps longer. One model, the so-called technological singularity, posits that self-improving machines may be capable of a level of knowledge generation and disruption that eliminates humans from economic participation. How far out this singularity might be is a matter of speculation, but I have my doubts that we understand intelligence well enough to start worrying about the impacts of such radical change.

Barring something essentially unknowable because we lack sufficient priors to make an informed guess, we can use evidence of the impact of mechanization on certain economic sectors, like agribusiness or transportation manufacturing, to try to plot out how mechanization might impact other sectors. Aghion, Jones, and Jones's Artificial Intelligence and Economic Growth takes a deep dive into the topic. The math is not particularly hard, though the reasons for many of the equations are tied up in macro- and microeconomic theory that requires a specialist's understanding to fully grok.

Of special interest are the potential limiting roles of inputs and of organizational competition. For instance, automation speed-ups may be constrained by the human participants in an economic activity, and sometimes by the fundamental physics of the activity itself. The pointed example is that power plants are limited by thermodynamics; no amount of additional mechanization can change that. Other factors tied to inputs, or to the complexity of a certain stage of production, may likewise drag economic growth down to a capped, limiting level.
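To make that capping intuition concrete, here is a toy sketch (my own, not the authors' model, with all parameters invented for illustration): a two-task economy where output is a CES aggregate of an automated task and a human-limited task. When the tasks are complements, aggregate growth converges to the growth rate of the slowest task no matter how fast the automated one improves.

    import numpy as np

    # Toy two-task economy: output is a CES aggregate of an automated task A
    # and a human-limited task H. With an elasticity of substitution below one
    # (rho < 0), the tasks are complements and the slow task caps growth.
    rho = -1.0                       # CES exponent; invented for illustration
    g_auto, g_human = 0.20, 0.01     # per-period productivity growth rates
    A, H = 1.0, 1.0
    output = []
    for t in range(200):
        output.append((0.5 * A**rho + 0.5 * H**rho) ** (1.0 / rho))
        A *= 1 + g_auto              # automation improves quickly...
        H *= 1 + g_human             # ...human-limited work barely improves
    growth = np.diff(np.log(output))
    print(f"early growth {growth[:5].mean():.3f}, late growth {growth[-5:].mean():.3f}")
    # Late growth approaches g_human: the non-automatable task caps the economy.

However fast A grows, output per period ends up rising at roughly one percent, which is the Baumol-style drag the paragraph above describes.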

Organizational competition and intellectual property considerations come into play as well. While the authors suggest that corporations will remain relevant, they expect firms to become more horizontal, eliminating much of the middle tier of management and outsourcing components of their production. The labor consequences are less dire than some singularity speculations: certain low-knowledge workers may gain influence in economic activities because they remain essential to the production chain, so their value and salaries rise. They become more fluid as well, because they can operate as freelancers and thus have a broader impact.

This kind of specialization and outsized knowledge influence, whether by low- or high-knowledge workers, is a kind of singularity in itself. Consider the influence of the printing press in disseminating knowledge, or the impact of radio and television. The economic costs of moving humans around to convey ideas or to entertain shrink or vanish, but influence is then limited to highly regarded specialists competing for a slice of the public's attention. Similarly, the knowledge worker who is not easily replaced by a machine becomes the star of the new AI economy. This may be happening already, with rumors of astronomical compensation for certain AI experts percolating out of Silicon Valley.

Inclement Science

Found at 6,500 feet in New Mexico’s Organ Mountains this morning, driven into an old log, facing White Sands Missile Range:

Can't help but think it is a statement on the threat to climate science and missions like Jason-3, but more likely someone just lost it on the trail and a good soul pushed the pin into the wood for potential rediscovery.

The Obsessive Dreyfus-Hawking Conundrum

I've been obsessed lately. I was up at 5 A.M. yesterday and drove to Ruidoso to do some hiking (trails T93 to T92, if interested). The San Augustin Pass was desolate as the sun began breaking over, so I inched up into triple-digit speeds in the M6, because that is what the machine is made for. Booming across White Sands Missile Range, I recalled watching base police work with National Park Rangers to chase oryx down the highway while early F-117s practiced touch-and-gos at Holloman in the background, and driving my carpool truck out to the high energy laser site or desert ship to deliver documents.

I settled into Starbucks an hour and a half later and started writing on ¡Reconquista!, cranking out thousands of words before tracking down the trailhead and starting my hike. (I would have run the thing but wanted to go to lunch later and didn't have access to a shower. Neither restaurant nor diners deserve an après-run moi.) And then I was on the trail, and I kept stopping to take plot and dialogue notes, revisiting little vignettes and annotating enhancements that I would later salt into the main text over lunch. And I kept rummaging through the development of the characters, refining and sifting the facts of their lives through different sets of sieves until they took on both a greater valence within the story arc and, often, more comedic value.

I was obsessed and remain so. It is a joyous thing to be in this state, comparable only to working on large-scale software systems when the hours melt away and meals slip as one cranks through problem after problem, building and modulating the subsystems until the units begin to sing together like a chorus. In English, the syntax and semantics are less constrained and the pragmatics more pronounced, but the emotional high is much the same.

With the recent death of Hubert Dreyfus at Berkeley, it seems an opportune time to consider the uniquely human capabilities that are involved in each of these creative ventures. Uniquely, I suggest, because we can't yet imagine what it would be like for a machine to do the same kinds of intelligent tasks. Yet, from Stephen Hawking through to Elon Musk, influential minds are worried about what might happen if we develop machines that rise to the level of human consciousness. This might be considered science fiction-like speculation, since we have little basis for conjecture beyond works of pure imagination. We know that mechanization displaces workers, for instance, and think it will continue, but what about conscious machines?

For Dreyfus, the human mind is too embodied and situational to be considered an encodable thing representable by rules and algorithms. Much like the trajectory of a species through an evolutionary landscape, the mind is, in some sense, an encoded reflection of the world in which it lives. Taken further, the evolutionary parallel becomes even more relevant in that it is embodied in a sensory and physical identity, a product of a social universe, and an outgrowth of some evolutionary ping pong through contingencies that led to greater intelligence and self-awareness.

Obsession with the cultivars, the traits and tendencies, that lead to this riot of wordplay and software refinement is a fine example of how this moves away from the fears of Hawking and towards the impossibilities of Dreyfus. We might imagine that we can simulate our way to the kernel of instinct and emotion that makes such things possible. We might also claim that we can disconnect the product of the effort from these internal states and the qualia that defy easy description; the books and the new technologies have, after all, only desultory correspondence to the process by which they are created. But I doubt it. It's more likely that getting from great automatic speech recognition or image classification to the general AI that makes us fearful is a longer hike than we currently imagine.

The IQ of Machines

Perhaps idiosyncratic to some is my focus in the previous post on the theoretical background to machine learning that derives predominantly from algorithmic information theory and, in particular, Solomonoff's theory of induction. I do note that there are other theories that can be brought to bear, including Vapnik's Structural Risk Minimization and Valiant's PAC-learning theory. Moreover, perceptrons, vector quantization methods, and so forth derive from completely separate principles that can then be cast into more fundamental problems in information geometry and physics.
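For readers who want the one formula at the center of that framework, the Solomonoff prior weights every program p whose output on a universal prefix machine U begins with the string x:

    M(x) = Σ_{p : U(p) = x*} 2^(−|p|)

where |p| is the program's length in bits. Short programs, i.e. simple hypotheses, dominate the sum, which is the formal version of the Occam's razor argument against overfitting.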

Artificial General Intelligence (AGI) is then perhaps the hard problem on the horizon, one in which I claim there has been little significant progress in the past twenty years or so. That is not to say that I am not an enthusiastic student of the topic and the field, just that I don't see risk levels from intelligent AIs rising to what we should consider a real threat. The question of how to grade threats deserves deeper treatment, of course, and is at the heart of everything from so-called "nanny state" interventions in food and product safety to how to construct policy around global warming. Luckily, and unlike both of those topics, killer AIs don't threaten us at all quite yet.

But what about simply characterizing what AGIs might look like and how we can even tell when they arise? Mildly interesting is Shane Legg and Joel Veness' idea of an Artificial Intelligence Quotient, or AIQ, that they expand on in An Approximation of the Universal Intelligence Measure. This measure is derived from, voilà, exactly the kind of algorithmic information theory (AIT) and compression arguments that I led with in the slide deck. Is this the only theory around for AGI? Pretty much, but different perspectives tend to lead to slightly different focuses. For instance, there is little need to discuss AIT when dealing with deep learning neural networks; we instead discuss statistical regularization and bottlenecking, which can be thought of as proxies for model compression.

So how can intelligent machines be characterized by something like AIQ? Well, the conclusion won’t be surprising. Intelligent machines are those machines that function well in terms of achieving goals over a highly varied collection of environments. This allows for tractable mathematical treatments insofar as the complexity of the landscapes can be characterized, but doesn’t really give us a good handle on what the particular machines might look like. They can still be neural networks or support vector machines, or maybe even something simpler, and through some selection and optimization process have the best performance over a complex topology of reward-driven goal states.
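As a heavily simplified sketch of how such a measure could be computed (my own toy, not Legg and Veness's actual benchmark, which samples programs for a simple reference machine): draw random environments, weight each by two to the minus a crude complexity proxy, and score the agent by its complexity-weighted average reward.

    import random

    def toy_aiq(agent, n_envs=1000, seed=0):
        # Toy stand-in for the universal intelligence measure: each environment
        # is a random reward table over (state, action) pairs, and its size in
        # states serves as a crude proxy for Kolmogorov complexity.
        rng = random.Random(seed)
        total, weights = 0.0, 0.0
        for _ in range(n_envs):
            n_states = rng.randint(1, 8)
            rewards = [[rng.random() for _ in range(4)] for _ in range(n_states)]
            w = 2.0 ** -n_states          # simpler environments weigh more
            score = sum(rewards[s][agent(s)] for s in range(n_states)) / n_states
            total += w * score
            weights += w
        return total / weights

    print(toy_aiq(lambda state: 0))       # score a trivially simple agent

The point of the weighting is exactly the conclusion above: an agent scores well only by doing well across many environments, with simple environments counting the most.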

So still no reason to panic, but some interesting ideas that shed greater light on the still mysterious idea of intelligence and the human condition.

Machine Learning and the Coming Robot Apocalypse

Slides from a talk I gave today on current advances in machine learning are available in PDF, below. The agenda is pretty straightforward: starting with some theory about overfitting based on algorithmic information theory, we proceed through a taxonomy of ML types (not exhaustive), then dip into ensemble learning and deep learning approaches. An analysis of the difficulty of various problems and the performance we get from various algorithms follows. We end with a discussion of whether we should be frightened about the progress we see around us.


Download the PDF file.

Alien Singularities and Great Filters

Nick Bostrom at Oxford's Future of Humanity Institute takes on Fermi's question "Where are they?" in a new paper on the possibility of life on other planets. The paper posits probability filters (Great Filters) that may have existed in the past or might be still to come, and that limit the likelihood of the outcome we currently observe: our own, ahem, intelligent life. If a Great Filter existed in our past (say, the event of abiogenesis or the prokaryote-to-eukaryote transition), then we can somewhat explain the lack of alien contact thus far: our existence is of very low probability. Moreover, we can expect not to find life on Mars.

If, however, the Great Filter lies in our future, then we might see life all over the place (including the theme of his paper, Mars). Primitive life is abundant, but the Great Filter sits somewhere ahead of us, where we annihilate ourselves, thus explaining why Fermi's They are not here while strange little things thrive on Mars and beyond. It is only advanced life that gets squeezed out by the Filter.

Bostrom's Simulation Hypothesis provides a potential way out of this largely pessimistic perspective. If there is a very high probability that civilizations achieve sufficient simulation capabilities to create artificial universes before conquering the vast interstellar voids needed to move around and signal with adequate intensity, it is equally possible that their "exit strategy" is a benign incorporation into artificial realities that prevents corporeal destruction by other means. It seems unlikely that every advanced civilization would "give up" physical being under these circumstances (in Teleology there are hold-outs from the singularity, though they eventually die out), which would mean that a sparse subset of active alien contact possibilities might remain. That finite but nonzero probability is likely at least as great as the probability of an advanced civilization simply succumbing to destruction by other technological means, which suggests they are out there but they just don't care.

And that leaves us one exit strategy that is not as abhorrent as the future Great Filter might suggest.

NOTE: Thanks to the great Steve Diamond for his initial query about whether Bostrom's Simulation Hypothesis impacts the Great Filter hypothesis.

Spurting into the Undiscovered Country

There was glop on the windows of the International Space Station. Outside. It was algae. How? Now that is unclear, but there is a recent tradition of arguing against abiogenesis here on Earth and arguing for ideas like panspermia, where biological material keeps raining down on the planet, carried by comets and meteorites, trapped in crystal matrices. And there may be evidence that some of that happened, if only within the local system, between Mars and Earth.

Panspermia includes as a subset the idea of Directed Panspermia whereby some alien intelligence for some reason sends biological material out to deliberately seed worlds with living things. Why? Well, maybe it is a biological prerogative or an ethical stance. Maybe they feel compelled to do so because they are in some dystopian sci-fi narrative where their star is dying. One last gasping hope for alien kind!

Directed Panspermia as an explanation for life on Earth only pushes the problem of abiogenesis back to other ancient suns and other times, and implicitly posits that some of the great known achievements of life on Earth, like multicellular forms, are less spectacularly improbable than the initial events of proto-life as we hypothesize it might have been. Still, great minds have spent great mental energy on the topic, to the point that elaborate schemes involving solar sails have been proposed so that we may someday engage in Directed Panspermia as needed. I give you:

Mautner, M.; Matloff, G. (1979). "Directed Panspermia: A Technical Evaluation of Seeding Nearby Solar Systems." Journal of the British Interplanetary Society 32: 419.

So we take solar sails and bioengineered lifeforms in tiny capsules. The solar sails are large and thin. They carry the tiny capsules into stellar formations, slowing down due to friction. They survive thousands of years while exposed to thousands of rads of interstellar radiation, without the benefit of magnetic fields or atmospheric shielding. And once in a great while (after all, space is vast) they start a new ecosystem. Indeed, maybe some eukaryotes are included to sidestep that big probability barrier on the way to multicellular organisms, specialization, and all that.

The why of all this is interesting. Here is the list from Section 9 of the paper used to create an ethics of “Life”:

  1. Life is a process of the active self-propagation of organized molecular patterns.
  2. The patterns of organic terrestrial Life are embodied in biomolecular structures that actively reproduce through cycles of genetic code and protein action.
  3. But action that leads to a selected outcome is functionally equivalent to the pursuit of a purpose.
  4. Where there is Life there is therefore a purpose. The object inherent in Life is self-propagation.
  5. Humans share the self-propagating DNA/protein biophysics of all cellular organisms, and therefore share with the family of organic Life a common purpose.
  6. Assuming free will, the human purpose must be self-defined. From our identity with Life derives the human purpose to forever safeguard and propagate Life. In this pursuit human action will establish Life as a governing force in nature.
  7. The human purpose defines the axioms of ethics. Moral good is that which promotes Life, and evil is that which destroys Life.
  8. Life, in the complexity of its structures and processes, is unique amongst the hierarchy of structures in Nature. This unites the family of Life and raises it above the inanimate universe.
  9. Biology is possible only by a precise coincidence of the laws of physics. Thereby the physical universe itself also comes to a special point in the living process.
  10. New life-forms who are most fit survive and reproduce best. This tautology, judgement of fitness to survive by survival itself, is the logic of Life. The mechanisms of Life may forever change, but the logic of Life is forever permanent.
  11. Survival is best secured by expansion in space, and biological progress is best assured by adaptation to diverse multiple worlds. This process will foster biological and human/machine coevolution. In the latter, control must always remain with organic-based intelligences, who have vested interests to continue our organic life-form. When the future is subject to conscious control, the conscious will to continue Life must itself be forever propagated.
  12. The human purpose and the destiny of Life are intertwined. The results can light up the galaxy with life, and affect the future patterns of the universe. When the living pattern pervades nature, human existence will have attained a cosmic purpose.

Many of these points can be scrutinized both for their logical entailments and, yes, for a bit of fun. OK, let's get started. The paper deals effectively with any complaints about teleology in 3-5 by arguing that action which merely appears purposeful is functionally equivalent to purposeful action without actually being the same thing. Fair enough. Teleonomy is a fine term to deploy in these circumstances.

So then we get to 6. Couldn't we equally say that the purpose of human life is to safeguard human life to the exclusion of other life forms? Deploying the Red Queen Hypothesis concerning the evolution of sexuality, for instance, would mean that we should be engaged in a carefully orchestrated battle against the parasites that continuously lay siege to us. And, indeed, we are, with minor victories against Ebola just today. What would our Red Queen alternative to 6 look like? Maybe:

6. Assuming free will, the human purpose must be self-defined. From our identity with Life derives the human purpose to forever safeguard Life such that it maintains the highest order of achievements by living things and their preservation against contending living organisms. In this pursuit human action will establish Life as a governing force in nature.

It might be argued that this is too limiting, because the advanced state of human existence is necessarily tied to the panoply of parasitic threats that we evolved "around," and that they should therefore be embraced as part of the tough love of life itself, but such an ethics among humans would be considered ridiculous and cruel. Propagate the Ebola virus because it holds a seat among the host of heavenly threats?

Among the other problems with this list (and they are manifold) is 11, whereby survival, being a good thing for Life (capitals per the original), is best promoted by expansion into space. It's a kind of biological Manifest Destiny: go up, young biome, go up! For one, this assumes there is nothing really out there. Our life, though possibly seeded from space, is clearly vastly different, having been magnified through multiple probability lenses into the aggressive earthly forms of today. It could wreak havoc on indigenous forms already out there, a kind of infectious plague against the natives. If we value Life, shouldn't we also value existing Life?

And we get down to the overall goal in 12. Is a “cosmic purpose” a desirable goal for human life? It sounds good at the surface, but we generally regard more narrowly focused goals as ethical goods, like building better societies for our children and eradicating those pesky biological parasites that used to wipe them out in large numbers. If we have a cosmic purpose, built upon our strivings in this universe, it might be best served by survival, true, but it might be best if that survival is more intimately human than the spurting of our seeds throughout the undiscovered country of the future.

Inching Towards Shannon’s Oblivion

Following Bill Joy's concerns over the future world of nanotechnology, biological engineering, and robotics in his 2000 essay Why the Future Doesn't Need Us, it has become fashionable to worry over "existential threats" to humanity. Nuclear power and weapons used to be dreadful enough, and clearly remain in the top five, but these rapidly developing technologies, asteroids, and global climate change have joined Oppenheimer's misquoted "destroyer of all things" in portending our doom. Here's Max Tegmark, Stephen Hawking, and others in the Huffington Post warning again about artificial intelligence:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

I almost always begin my public talks on Big Data and intelligent systems with a presentation on industrial revolutions that progresses through Robert Gordon's phases and then highlights Paul Krugman's argument that Big Data and the intelligent-systems improvements we are seeing potentially represent the next industrial revolution. I am usually less enthusiastic about the timeline than nonspecialists, but after giving a talk at PASS Business Analytics on Friday in San Jose, I stuck around to listen in on a highly technical talk concerning statistical regularization and deep learning, and I found myself enthused about the topic once again.

Deep learning uses artificial neural networks to classify information, but is distinct from traditional ANNs in that the systems are pre-trained using auto-encoders to acquire a general knowledge of the data domain. To be clear, though, most of the problems tackled so far are "subsymbolic" image recognition and speech problems. Still, the improvements have been fairly impressive given some pretty simple ideas. First, the pre-training is accompanied by systematic bottlenecking of the number of nodes available for learning. Second, the amount that each node fires is kept low to avoid overfitting to nodes with dominating magnitudes. Together, these let the auto-encoders learn the patterns without supervised training, after which the network can be trained faster and more easily to associate those patterns with classes.
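Here is a minimal sketch of those two ideas together (my own toy in numpy, with invented hyperparameters, not any particular published architecture): a tied-weight autoencoder with a narrow bottleneck, trained to reconstruct its input while an activity penalty keeps each unit's firing low.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 64))      # toy unlabeled data, 64 features

    n_hidden = 8                        # bottleneck: far fewer nodes than inputs
    W = rng.normal(scale=0.1, size=(64, n_hidden))
    lam, lr = 0.1, 0.01                 # sparsity weight, learning rate (invented)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(200):
        H = sigmoid(X @ W)              # encode through the narrow bottleneck
        X_hat = H @ W.T                 # tied-weight linear decode
        err = X_hat - X
        # Gradient of 0.5*||err||^2 plus an activity penalty lam*sum(H) that
        # keeps each unit's firing low (sigmoid outputs are nonnegative).
        dH = err @ W + lam
        dW = X.T @ (dH * H * (1 - H)) + err.T @ H
        W -= lr * dW / len(X)

    print("reconstruction MSE:", float(np.mean((sigmoid(X @ W) @ W.T - X) ** 2)))

The bottleneck forces the hidden layer to compress the data, and the activity penalty spreads the representation across units rather than letting a few dominate; no labels appear anywhere, which is what makes this a pre-training step.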

I still have my doubts concerning the threat timeline, however. For one, these are mostly sub-symbolic systems that are not capable of the kinds of self-directed modifications that many fear could lead to exponential self-improvement. Second, the tasks seeing improvements are not new, just relatively well-known classification problems. Finally, the improvements, while impressive, are incremental. There is probably a meaningful threat profile that can be converted into a decision tree for when action is needed. For global climate change there are consensus estimates about sea-level change, for instance. For Evil AI, I think we need to wait for a single act of out-of-control machine intelligence before spending excessively on containment, policy, or regulation. In the meantime, though, keep a close eye on your laptop.

And then there’s the mild misanthropy of Claude Shannon, possibly driven by living too long in New Jersey:

I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.

Predicting Black Swans

Nassim Taleb's second edition of The Black Swan argues, not unpersuasively, that rare, cataclysmic events dominate ordinary statistics. Indeed, he notes that almost all wealth accumulation is based on long-tail distributions where a small number of individuals reap unexpected rewards. The downsides are equally challenging: he notes that casinos lose money not in gambling, where the statistics are governed by Gaussians (the house always wins), but instead when tigers attack, when workers sue, and when other external factors intervene.

Black Swan Theory poses an interesting challenge to modern inference frameworks like Algorithmic Information Theory (AIT) that anticipate a predictable universe. Even variant coding approaches like Minimum Description Length (MDL) theory modify the anticipatory model based on relatively smooth error functions rather than high-kurtosis distributions of change. And for the most part, for the regular events of life and our sensoriums, that is adequate. It is only when we start to look at rare existential threats that we begin to worry about Black Swans and inference.
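For concreteness, the MDL recipe picks the hypothesis H that minimizes the two-part code length

    L(H) + L(D | H)

the bits needed to state the model plus the bits needed to encode the data given the model. A Black Swan is precisely a datum that the second term prices as outrageously expensive under every hypothesis the first term keeps affordable.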

How might we modify the typical formulations of AIT, and the trade-off between model complexity and data, to accommodate the exceedingly rare? Several approaches are possible. First, if we are combining a predictive model with a resource accumulation criterion, we can simply pad out the model's reserves, reducing kurtosis risk through additional resource accumulation; any downside is mitigated by storing nuts for a rainy day. That is a good strategy for moderately rare events like weather changes, droughts, and whatnot. But what about even rarer events, like little ice ages and dinosaur-extinction-level meteorite hits? An alternative strategy is to maintain sufficient diversity in the face of radical unknowns that coping becomes a species-level achievement.

Any underlying objective function for these radical events has to sacrifice fidelity to temporally local conditions in order to cope with the outliers. Even a simple model based on populations of inductive machines with variant parameterizations wins when the population converges on the best outcomes; focusing exclusively on the rare downside costs only medium-term losses. And Taleb claims the rare dominates the smoothly predictable. Under the alternative strategy, that suggests an addition to AIT-based decision theory: a longer-term, multi-horizon valuation that promotes diversity against short-term gains.
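Here is a toy simulation of that claim (all distributions and parameters invented for illustration): agents compound small Gaussian gains punctuated by rare, catastrophic shocks, and a diverse population of exposure levels is compared against a monoculture tuned to calm conditions.

    import random

    def survivors(exposures, steps=10_000, seed=1):
        # Each agent compounds wealth; exposure near 1.0 wins in calm times,
        # while lower exposure hedges (stores nuts for a rainy day).
        rng = random.Random(seed)
        wealth = [1.0] * len(exposures)
        for _ in range(steps):
            if rng.random() < 0.001:              # rare Black Swan event
                shock = -0.99                     # catastrophic for the exposed
            else:
                shock = rng.gauss(0.001, 0.01)    # ordinary Gaussian drift
            for i, e in enumerate(exposures):
                wealth[i] *= max(1 + e * shock, 0.0)
        return sum(w > 1e-6 for w in wealth)      # count agents avoiding ruin

    # A monoculture tuned to the typical case versus a diverse population:
    print("monoculture survivors:", survivors([1.0] * 10))
    print("diverse survivors:    ", survivors([i / 10 for i in range(1, 11)]))

Typically the fully exposed agents are wiped out by the handful of swans while the hedged portion of the diverse population persists, which is the multi-horizon, diversity-promoting valuation in miniature.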


Minimizing Existential Toaster Threats

Philosophy in the modern world has strived for a sense of relevance as the sciences ("natural philosophy") have become dominant. But philosophy may have found a footing in the complicated space between technological advances and defining human virtues, with efforts to address and understand change and its impact on human existence. These efforts have included the ethics of biological manipulation and, critically, existential threats to humanity, including climate change, artificial intelligence, and genetic engineering.

I mention all this because I'm really writing about cars but need to fit the discussion somehow into the theme of this blog. So the existential threat of climate change means we need to pollute less and burn less fossil fuel. More tactically, however, my wife and I also needed to buy a new toaster because our five-year-old Oster four-burner unit was failing. There was therefore only one solution to this dilemma: take the brand-new Tesla S 70 miles away to the foothills of the Sierra on a test drive and, yes, buy a new toaster.

I had taken delivery of our Tesla S Performance 85 with Tech Package a week before, but hadn't really had an opportunity to drive it because of work obligations that kept me firmly planted in front of a computer monitor. Apart from a brief outing, it mostly sat charging in the garage (at the slowish pace of a 120V circuit; Tesla did not deliver my dual-charger station in time, and I haven't had the 100A circuit installed to support it either), so when Saturday came, I realized it was an opportunity to justify a longish trek: test the drivability of the car and seek out the Tesla "supercharger" stations that promise rapid charging in 30 minutes to an hour. I had several choices of supercharging stations, including the outlet mall at Gilroy, Harris Ranch (basically some truck stops between San Francisco and Los Angeles), and the Folsom (a suburb of Sacramento) outlet mall. Why outlet malls were chosen for these installations I do not know.

Roadrash

The journey began poorly, however, when I scraped a wheel against a curb at a Starbucks, leaving a small flap of rubber protruding from one of the Michelin Pilot Sports. The flap oscillated and rattled at highway speeds, and I had to pull over and inspect. Luckily my wife, as usual, had a wood-and-turquoise-handled folding knife handy in her purse. I cut away the rubber and we proceeded north towards Sacramento.

Driving impressions: Quiet at freeway speeds, though not as quiet as I might have expected, due to road and wind noise. The technology suite is impressive, with such amenities as free internet radio via 3G connectivity (currently paid for by Tesla), web browsing, Google-maps-based nav, SiriusXM radio, speech recognition, and smart innovations like garage door openers that pop up via GPS as the car approaches a programmed garage. The torque and acceleration are astonishing and definitely rival or exceed contemporary supercars (save a Veyron or Aventador), but driving hot and fast is discouraged by the psychology of electric cars, where calm and slow maximize battery life.

Lunch took two hours at Ten 22 in Old Sacramento, starting with duck confit and fig pizzette, followed by warm spinach and truffle salad, and finished with Marin brie and honeycomb with Earl Grey. We almost didn't want to leave, much less buy a toaster, but the charging station was only 20 miles away, and we found it after crawling around the ring of the outlet mall for 10 minutes. It turned out another gray S was charging there, though the other three stations were unoccupied. I spoke briefly with the owner, then re-encountered him 20 minutes later, after we had recharged to maximum range, chatting with another couple who were just hooking up their wine-colored S.

Continuing on to the Galleria at Roseville, I was disappointed to find only two ChargePoint stations for the entire mall (both occupied by Volts), but at least we got a toaster while minimizing at least one existential threat.
