Artificial Intelligence, Existential Risk

Why Elon Musk is Right

Elon Musk told the National Governors Association over the weekend that “AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.”

The man knows how to get attention. His words were carried within hours by outlets ranging from NPR to Architectural Digest.  Many piled on to explain why he was wrong. Reason.com reviled him for suggesting regulation instead of allowing free markets to work their magic. And a cast of AI experts took him to task for alarmism that had no basis in their technical experience.

It’s worth examining this conflict. Some wonder about Musk’s motivation; others think he’s angling for a government grant for OpenAI, the company he backed to explore ethical and safe development of AI. It is a drum Musk has banged repeatedly, going back to his 2015 donation of $10 million to the Future of Life Institute, an amount an interviewer lauded as large and Musk countered was tiny.

I’ve heard the objections from the experts before. At the Canadian Artificial Intelligence Association’s 2016 conference, the reactions from people in the field were generally either dismissive or perplexed, but I must add, in no way hostile. When you’ve written every line of code in an application, it’s easy to say that you know there’s nowhere in it that’s going to go berserk and take over the world. “Musk may say this,” started a common response, “but he uses plenty of AI himself.”

There’s no question that the man whose companies’ products include autonomous drone ships, self-landing rockets, cars on the verge of level 4 autonomy, and a future neural lace interface between the human brain and computers is deep into artificial intelligence. So why is he trying to limit it?

A cynical evaluation would be that Musk wants to hobble the competition with regulation that he has figured out how to subvert. A more charitable interpretation is that the man with more knowledge of the state of the art of AI than anyone else has seen enough to be scared. This is the more plausible alternative. If your only goal is to become as wealthy as possible, picking the most far-out technological challenges of our time and electing to solve them many times faster than was previously believed possible would be a dumb strategy.

And Elon Musk is anything but dumb.

Over a long enough time frame, what Musk is warning about is clearly plausible; it’s just that we figure it will take so many breakthroughs to get there that it must be a thousand years in the future, a distance at which anything and everything becomes possible. If we model the human brain from the atoms on up, then with enough computational horsepower and a suitable set of inputs, we could train this cybernetic baby brain to attain toddlerhood.

We could argue that Musk, Bill Gates, and Stephen Hawking are smart enough to see further into the future than ordinary mortals, and therefore are exercised by something that is hundreds of years away and not worth bothering about now. But why the rogue AI scenario could be far less than a thousand years in the future is a defining question for our time. Stephen Hawking originally went on record as saying that anyone who thought they knew when conscious artificial intelligence would arrive didn’t know what they were talking about. More recently, he revised his prediction of the lifespan of humanity down from 1,000 years to 100.

No one can chart a line from today to Skynet and show it crossing the axis in 32 years. I’m sorry if you were expecting some sophisticated trend analysis that would do that. The people who have tried include Ray Kurzweil, and his efforts are regularly pilloried. Equally, no one should think that it’s provably more than, say, twenty years away. No one who watched the 2004 DARPA Grand Challenge would have thought that self-driving cars would be plying the streets of Silicon Valley eight years later, yet they were. In 2015 the expectation of when a computer would beat leading players of Go was ten years hence, not one. So while we are certainly at least one major breakthrough away from conscious AI, that breakthrough may sneak up on us quickly.

Two recommendations. One: we should be able to make more informed predictions of the effects of technological advances, and therefore we should develop models that today’s AI can use to tell us what those effects will be. Once, people’s notion of the source of weather was angry gods in the sky. Now we have supercomputers executing humongous models of the biosphere. It’s time we constructed equally detailed models of global socioeconomics.

Two: because absence of proof is not proof of absence, we should not require those warning us of AI risks to prove their case. This is not quite the precautionary principle, because attempts to stop the development of conscious AI would be utterly futile. Rather, it is that we should act on the assumption that conscious AI will arrive within a relatively short time frame, and decide now how to ensure it will be safe.

Musk didn’t actually say that his doomsday scenario involved conscious AI, although referring to killer robots certainly suggests it. In the short term, merely the increasingly sophisticated application of artificial narrow intelligence will guarantee mass unemployment, which qualifies as civilization-rocking by any definition. See Martin Ford’s The Lights in the Tunnel for an analysis of the economic effects. In the further term, as AI grows more powerful, even nonconscious AI could wreak havoc on the world through paperclip-maximizer scenarios, unintended emergent behavior, and malicious direction.

To quote Falstaff, perhaps the better part of valor is discretion.

Bioterrorism, Technology

Bringing Back the Dead

Viruses, that is.  Canadian researchers revived an extinct horsepox virus last year on a shoestring budget, by using mail-order DNA.

The researchers bought overlapping DNA fragments from a commercial synthetic DNA company. Each fragment was about 30,000 base pairs long, and because they overlapped, the team was able to “stitch” them together to complete the genome of the 212,000-base-pair horsepox virus. When they introduced the genome into cells that were already infected with a different kind of pox virus, the cells began to produce virus particles of the infectious horsepox variety.
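To make the “stitching” idea concrete: the actual joining was done in the lab, not in software, but the logic of overlap assembly is easy to sketch. Here is a toy illustration in Python; the fragment strings and overlap length are invented for the example (the real fragments ran to roughly 30,000 base pairs each):

    def stitch(fragments, min_overlap=8):
        """Join fragments left to right by finding where each one's tail
        matches the next one's head. A toy sketch of overlap assembly,
        not the researchers' actual wet-lab procedure."""
        genome = fragments[0]
        for frag in fragments[1:]:
            # Find the longest suffix of the growing sequence that is
            # also a prefix of the next fragment.
            best = 0
            for k in range(min_overlap, min(len(genome), len(frag)) + 1):
                if genome.endswith(frag[:k]):
                    best = k
            if best == 0:
                raise ValueError("fragments do not overlap")
            genome += frag[best:]
        return genome

    # Invented miniature fragments, each overlapping the next by 9 bases:
    pieces = ["ATGCGTACGTTAGCCGTAAGGCTTACGA",
              "GGCTTACGATTCGGAACTGTTACGCATG",
              "TTACGCATGGCAATCGGATCCTTAGGCA"]
    print(stitch(pieces))

The overlaps pin down a unique full-length sequence, which is why ordering the genome in pieces is no real obstacle.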

Why this matters: It’s getting easier all the time to synthesize existing pathogens (like horsepox) and to create exciting new deadly ones (with CRISPR/Cas9). Crisis of Control explores the consequences of where that trend is leading. Inevitably it will be possible one day for average people to do it in their garages.

This team didn’t even use their own facility to make the DNA fragments, but ordered them through the mail. There’s some inconsistency about how easy it is to get nasty stuff that way. Here’s a quote from a 2002 article:

Right now, the companies making DNA molecules such as the ones used to recreate the polio virus do not check what their clients are ordering. “We don’t care about that,” says a technician at a company that ships DNA to more than 40 countries around the world, including some in the Middle East.

Things may be better now; apparently now at least the reputable western companies do care. What about do-it-yourself? What I can’t tell from the article is whether their mail order was for double-stranded DNA (dsDNA) or oligonucleotides (“oligos”), and here I am exposing my ignorance of molecular biology because I am sure it is obvious to someone in that field what it must have been. What I do know is that there are no controls on what you can make with oligos, because you can get those synthesizers for as little as a few hundred bucks off eBay. But you then have to turn them into dsDNA to get the genome you’re looking for, and that requires some nontrivial laboratory work.

We can assume that procedure will get easier, and it has in fact already been replicated in a commercially available synthesizer that will produce dsDNA. The last time I looked, one such machine had recently become available, but implemented cryptographic security measures that meant that it would not make anything “questionable” until the manufacturer had provided an unlocking code.

So far, so good, although it’s hard for me to imagine people at the CDC find it easy to sleep. But inevitably this will become easier. How do we have to evolve to defuse this threat?

 

Artificial Intelligence, Employment

Keep on Truckin’

An article on Bloomberg suggests that in the short term at least, autonomous trucks have the potential to make the lives of truckers better by allowing them to teleoperate trucks and therefore see their families at night. Of course, many of them see this as the prelude to not being needed at all:

“I can tell the difference between a dead porcupine and a dead raccoon, and I know I can hit a raccoon, but if I hit a porcupine, I’m going to lose all the tires on the truck on that side,” says Tom George, a veteran driver who now trains other Teamsters for the union’s Washington-Idaho AGC Training Trust. “It will take a long time and a lot of software to program that competence into a computer.”

Perhaps.  Or maybe it just takes driving long enough in reality or in training on captured footage to encounter both kinds of roadkill and learn by experience.

Artificial Intelligence, Science

How many piano tuners are there in Chicago?

One of the chapters in Crisis of Control is on the Fermi Paradox, a fiendishly simple-to-state problem with existential ramifications. That kind of simplification of the complex was the stock-in-trade of physicist Enrico Fermi, a man who could toss scraps of paper into the air when the atomic bomb test exploded and calculate in seconds an estimate of its yield that rivaled the official figures released days later. He taught his students to think the same way with this question: “How many piano tuners are there in Chicago?” No Googling. No reference books. Do your best with what you know. Go.

This is one of those questions where “Show your work” is the only possible way to evaluate the answer. The lazy ones will throw a dart at a mental board and say, “X,” and when asked how come, shrug. The way to solve this is to break it down into an equation containing factors that can be more readily estimated.  If we knew:

  • P – The population of Chicago
  • f – The number of pianos per person
  • t – The number of times a piano is tuned per year
  • H – The number of hours it takes to tune a piano
  • W – The number of hours per year a piano tuner works

then the number of piano tuners in Chicago is P * f * t * H / W. Here, let’s walk through this:

  • P * f gives the number of pianos in Chicago, call that N. P and f are each easier to estimate than how many pianos there are in a city.
  • N * t gives the number of piano tunings per year in Chicago, call that T.
  • T * H gives the number of hours spent tuning pianos per year in Chicago, call that Y.
  • Y / W gives the number of piano tuners it takes to provide that service. QED.

Of course, you could look at those figures and say, wait, I don’t even know the population of Chicago, much less how many hours a piano tuner works. But each of those is easier to guess well than the original question. To get f, you can go off your personal experience of how many friends’ houses you’ve seen with pianos, make a correction for the number of pianos in institutions of some kind (theaters, schools, etc.), and at each stage, add in confidence limits for how far off you think you could be.
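To make the arithmetic concrete, here is the whole estimate in a few lines of Python. The input figures are illustrative guesses of my own, not researched values; the point is the structure of the calculation, not the particular numbers:

    # Fermi estimate of the number of piano tuners in Chicago.
    # Every input is a rough, illustrative guess.
    P = 3_000_000   # population of Chicago (people)
    f = 1 / 100     # pianos per person (one piano per hundred people)
    t = 1           # tunings per piano per year
    H = 2           # hours to tune one piano
    W = 2_000       # hours a tuner works per year (50 weeks x 40 hours)

    N = P * f        # pianos in Chicago
    T = N * t        # piano tunings per year
    Y = T * H        # tuner-hours needed per year
    tuners = Y / W   # tuners required to supply those hours

    print(f"Estimated piano tuners in Chicago: {tuners:.0f}")  # ~30 with these guesses

Swap in your own guesses for each factor and the answer moves, but errors in the individual factors tend to partially cancel, which is much of the charm of the method.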

This process is what leads to the most important math in the Fermi Paradox chapter in Crisis, the Drake Equation:

N = R* · fₚ · nₑ · fₗ · fᵢ · fᶜ · L

where:

  • N = The number of civilizations in the Milky Way galaxy (ours) whose electromagnetic emissions are detectable (i.e., planets inhabited by aliens sending radio signals)
  • R* = The rate of formation of stars suitable for the development of intelligent life
  • fₚ = The fraction of those stars with planetary systems
  • nₑ = The number of planets, per solar system, with an environment suitable for life
  • fₗ = The fraction of suitable planets on which life actually appears
  • fᵢ = The fraction of life-bearing planets on which intelligent life emerges
  • fᶜ = The fraction of civilizations that develop a technology that releases detectable signs of their existence into space
  • L = The length of time such civilizations release detectable signals into space

And that gives us a way of estimating how many intelligent civilizations there are in the galaxy right now, from quantities that we can estimate or measure independently.  Of course, the big question is, why haven’t we found any such civilizations yet when the calculations suggest N should be much larger than 1?  But NASA thinks it won’t be too long before that happens. And when we find them we can ask them how many piano tuners they have.
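The same estimate-by-factors structure translates directly into code. The parameter values in this sketch are placeholders chosen only to show the arithmetic, not estimates endorsed by Crisis or anyone else; several of these factors are uncertain by orders of magnitude, which is exactly why answers range from “we are effectively alone” to “the galaxy is crowded”:

    # Drake equation: N = R* * fp * ne * fl * fi * fc * L
    # All values below are placeholders for illustration only.
    R_star = 1.5     # rate of formation of suitable stars (per year)
    f_p    = 0.9     # fraction of those stars with planetary systems
    n_e    = 1.0     # habitable planets per such planetary system
    f_l    = 0.1     # fraction of habitable planets where life appears
    f_i    = 0.1     # fraction of those where intelligent life emerges
    f_c    = 0.1     # fraction of those releasing detectable signals
    L      = 10_000  # years such a civilization keeps signaling

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"Civilizations detectable right now: {N:.0f}")  # ~14 with these placeholders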

Artificial Intelligence, Employment

This Time It’s Different

This superb video drives a stake through the heart of the meme that progress always equals more and better jobs:

All this and a cast of cartoon chickens. This is where it becomes very clear that we need to analyze second-order effects. The video only starts wondering about those at the end. If we get very good at producing cheaper products at the expense of more and more jobs, who will buy those products? Who will be able to afford them if there is a rising underclass of unemployed that has trouble getting food, let alone iPhones? Sure, the market may turn to higher-end luxury items such as increasingly tricked-out autonomous cars, which can be afforded by the 1% (or less) who own the companies, but this is an unstable dynamic, a vicious circle. What will terminate that runaway feedback loop?

Artificial Intelligence, Existential Risk, Technology, Warfare

Is Skynet Inevitable?

Australia’s leading AI expert is afraid the army is creating ‘Terminator’, and no, that’s not taking him out of context:

I mean Hollywood is actually pretty good at predicting the future. And if we don’t do anything, then in 50 to 100 years time it will look much like Terminator. It wouldn’t be that dissimilar to what Hollywood is painting… there will be a lot of risk well before then, in fact.

Like so many scenarios I discuss in Crisis of Control, the question is not if, but when. 50 to 100 years away may be a timeframe that lulls us into a false sense of security when the real question is, how far in advance do we need to act to prevent this scenario?

That we will one day create “Skynet” is all but inevitable once you accept that we will develop conscious artificial intelligence (CAI). I go into the reasons why that will happen relatively soon in Crisis. The military is certain to develop CAI for the tremendous tactical leverage it grants. However, a hard take-off in intelligence levels and random mutations (which are all but assured in anything possessing creativity) will mean that at some point a CAI will evolve a motivation to control the real-world machinery it is connected to… or can reach.

This is the point where many people argue that “we can just unplug it” and make sure it’s not connected to, say, weaponry.  The “just unplug it” argument has been disposed of before (basically, try unplugging Siri and see how far you get). The odds of successfully air gapping an AI from a network it wants to talk to are tiny; we had genetic algorithms that learned how to communicate across an air gap in 2004.

The answer that Crisis embraces is that we should develop ethical AI in the public domain first, so that it is in a position to dominate the market or defeat any unethical AI that may later arise. This is why we support the OpenAI initiative. It may sound naïve and reckless, but I believe it is the best chance we’ve got.

Artificial Intelligence, Employment

When will a machine do your job better than you?

Katja Grace at the Future of Humanity Institute at the University of Oxford and fellow authors surveyed the world’s leading researchers in artificial intelligence, asking them when they think intelligent machines will outperform humans in a wide range of tasks. They averaged the answers and published them at https://arxiv.org/pdf/1705.08807.pdf. The results are… surprising.

First up, AIs will reach human proficiency in the game of Go in 2027… wait, what? Ah, but this survey was conducted in 2015. As I noted in Crisis of Control, before AlphaGo beat Lee Sedol in 2016, it was expected to be a decade before that happened; here’s the numeric proof. This really shows what a groundbreaking achievement that was, to blindside so many experts.

Forty-eight percent of respondents think that research on minimizing the risks of AI should be prioritized by society more than it is under the status quo. And when they analyzed the results by demographics, only one factor was significant: geography. Asian researchers think human-level machine intelligence will be achieved much sooner.

Amusingly, their predictions for when different types of job will be automated are relatively clustered under 50 years from now, with one far outlier over 80: apparently, the job of “AI Researcher” will take longer to automate than anything else, including surgeon. Might be a bit of optimism at work there…

 

Artificial Intelligence, Politics, Technology

AI vs AI

More from the mailbag:

Regarding the section on AI on the battlefield you rightly focus on it behaving ethically against troops/citizens on the other side. However, very likely in the future the enemy ‘troops’ on the other side will be AI entities. It might be interesting to explore the ethics rules in this case?

Heh, very good point. Of course, at some point, the AI entities will be sufficiently conscious as to deserve equal rights. Who knows, they may be granted those rights by opposing AIs somewhat before then as a professional courtesy. But your question suggests a more pragmatic, earlier timeframe. In that view, the AI doesn’t recognize another AI as having any rights; it’s just aware that it’s looking at something that is not-a-human.

Before AIs escape their programming, we assume that their programming will only grant special status to human life. (Will autonomous cars brake for cats? Squirrels? Mice?) We have to postulate a level of AI development that’s capable of making value judgements by association before things get interesting. Imagine an AI that could evaluate the strategic consequences of destroying an opposing AI. Is its opponent directing the actions of inferior units? Will destroying its opponent be interpreted as a new act of war? Of course, these are decisions that human field troops are not empowered to make. But in an AI-powered battlefield, there may be no need to distinguish between the front lines and the upper echelons. They may be connected well enough to rewrite strategy on the fly.

I’d like to think that when the AIs get smart enough, they will decide that fighting each other is wasteful and instead negotiate a treaty that eluded their human masters. But before we get to that point we’re far more likely to be dealing with AIs with a programmed penchant for destruction.