Month: July 2017

Artificial Intelligence, Employment, Existential Risk

Why Elon Musk Is Right … Again

Less than a week after Elon Musk warned the National Association of Governors about the risks of artificial intelligence, he got in a very public dust-up with Mark Zuckerberg, who thought Musk was being “pretty irresponsible.” Musk retorted that Zuckerberg’s understanding of the topic was “limited.”

This issue pops up with such regularity as to bring joy to the copyright holders of Terminator images. But neither of these men is a dummy, and they can’t both be right… right?

We need to unpack this a little carefully. There is a short term, and a long term. In the short term (the next 10-20 years), while there will be many jobs lost to automation, there will be tremendous benefits wrought by AI, specifically Artificial Narrow Intelligence, or ANI. That’s the kind of AI that’s ubiquitous now; each instance of it solves some specific problem very well, often better than humans, but that’s all it does. This has of course been true of computers ever since they were invented, or there would have been no point; from the beginning they were better at taking square roots than a person with pencil and paper.

But now those skills include tasks like facial recognition and driving a car, two abilities that we cannot adequately explain even in ourselves. No matter; computers can be trained by showing them good and bad examples, and they just figure it out. They can already recognize faces better than humans, and the day when they are better drivers than humans is not far off.

In the short term, then, the effects are unemployment on an unprecedented scale as 3.5 million people who drive vehicles for a living in the USA alone are expected to be laid off. The effects extend to financial analysts making upwards of $400k/year, whose jobs can now be largely automated. Two studies show that about 47% of work functions are expected to be automated in the short term. (That’s widely misreported as 47% of jobs being eliminated with the rest left unmolested; actually, most jobs would be affected to varying degrees, averaging to 47%.) Mark Cuban agrees.
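The difference between those two readings can be made concrete with a toy calculation; the jobs and automatable task shares below are invented for illustration and are not taken from the studies:

```python
# Toy model: each job has some fraction of its tasks that could be automated.
# "47% of work functions automated" averages these fractions across jobs;
# it does not mean 47% of jobs disappear. All figures here are invented.
jobs = {
    "truck driver": 0.90,       # nearly all driving tasks automatable
    "financial analyst": 0.60,  # much of the analysis, little of the client work
    "nurse": 0.25,
    "teacher": 0.15,
}

average_share = sum(jobs.values()) / len(jobs)
fully_eliminated = sum(1 for share in jobs.values() if share >= 1.0)

print(f"average automatable share of tasks: {average_share:.1%}")
print(f"jobs eliminated outright: {fully_eliminated} of {len(jobs)}")
```

Every job in the toy set is touched, the average lands near 47%, and yet not a single job is wiped out entirely — which is exactly the misreading the studies keep attracting.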

But, say their proponents, there will be such a cornucopia bestowed upon us by the ANIs that make this happen that we should not impede this progress: cures for diseases, dirty and risky jobs given to machines, and wealth created in astronomical quantities, sufficient to take care of all those laid-off truckers.

That is true, but it requires that someone connect the wealth generated by the ANIs with the laid-off workers, and we’ve not been good at that historically. But let’s say we figure it out, the political climate swings towards Universal Basic Income, and in the short term, everything comes up roses. Zuckerberg: 1, Musk: 0, right?

Remember that the short term extends about 20 years. After that, we enter the era where AI will grow beyond ANI into AGI: Artificial General Intelligence. That means human-level problem solving abilities capable of being applied to any problem. Except that anything that gets there will have done so by having the ability to improve its own learning speed, and there is no reason for it to stop when it gets on a par with humans. It will go on to exceed our abilities by orders of magnitude, and will be connected to the world’s infrastructure in ways that make wreaking havoc trivially easy. It takes only a bug—not even consciousness, not even malevolence—for something that powerful to take us back to the Stone Age. Fortunately, history shows that Version 1.0 of all significant software systems is bug-free.

Oops.

Elon Musk and I don’t want that to be on the cover of the last issue of Time magazine ever published. Zuckerberg is more of a developer, and I have found that it is hard for developers to see the existential risks here, probably because they developed the code, they know every line of it, and they know that nowhere in it reside the lines

if ( threatened ) {
    wipe_out_civilization();
}

Of course, they understand emergent behavior; but when they’ve spent so much time so close to software they know intimately, it is easy to pooh-pooh, as uninformed gullibility, assertions that it could rise up against us. Well, I’m not uninformed about software development either. And yet I believe that we may soon be developing systems that do display drastic emergent behavior, and that by then it will be too late to take appropriate action.

Whether this cascade of crises happens in 20 years, 15, or 30, we should start preparing for it now, before we discover that we ought to have nudged this thing in another direction ten years earlier. And since it requires a vastly elevated understanding of human ethics, it may well take decades to learn what we need to give our AGIs not just superintelligence, but supercompassion.

Artificial Intelligence, Science, Transhumanism

Bullying Beliefs

In the otherwise thought-provoking and excellent book “Heartificial Intelligence: Embracing Our Humanity to Maximize Machines”, John C. Havens uncharacteristically misses the point of one of the scientists he reports on:

Jürgen Schmidhuber is a computer scientist known for his humor, artwork, and expertise in artificial intelligence. As part of a recent speech at TEDxLausanne, he provides a picture of technological determinism similar to [Martine] Rothblatt’s, describing robot advancement beyond human capabilities as inevitable. […] [H]e observes that his young children will spend a majority of their lives in a world where the emerging robot civilization will be smarter than human beings. Near the end of his presentation he advises the audience not to think with an “us versus them” mentality regarding robots, but to “think of yourself and of humanity in general as a small stepping stone, not the last one, on the path of the universe towards more and more unfathomable complexity. Be content with that little role in the grand scheme of things.”

It’s difficult to comprehend the depths of Schmidhuber’s condescension with this statement. Fully believing that he is building technology that will in one sense eradicate humanity, he counsels nervous onlookers to embrace this decimation. […] [T]he inevitability of our demise is assured, but at least our tiny brains provide some fodder for the new order ruling our dim-witted progeny. Huzzah! Be content!

This is not a healthy attitude.

But this is not Schmidhuber’s attitude. It is more of a coping skill for facing the inevitable and seeing a grand scheme to the unfolding of the universe.

In The Hitchhiker’s Guide to the Galaxy, Zaphod Beeblebrox is tortured by being placed in the Total Perspective Vortex, which reduces its victims to blubbering insanity by showing them how insignificant they are on the scale of the universe. Unfortunately, it fails in Beeblebrox’s case because his ego is so huge that he comes away reassured that he is a “really cool guy.” Havens is so desperate to avoid the perspective of humanity’s place in the universe that he mistakes or misstates Schmidhuber’s position as embracing the eradication of humanity. Schmidhuber said nothing of the sort; he foresaw a co-evolution of mankind and AI in which the latter would surpass our intellectual capabilities. There is nothing condescending in this.

When I give a talk, someone will invariably raise what amounts to human exceptionalism. “When we’ve created machines that outclass us intellectually, what will humans do? What will be the point of living?” I usually reply with an analogy: Imagine that all the years of SETI and Project Ozma and HRMS  have paid off, and we are visited by an alien race. Their technological superiority is not in doubt – they built the spaceships to get to us, after all – and like the aliens of Close Encounters of the Third Kind, they are also evolved emotionally, philosophically, compassionately, and spiritually. Immediately upon landing, they show us how to cure cancer, end aging, and reach for the stars. Do we reject this cornucopia because we feel inferior to these visitors? Is our collective ego so large and fragile that we would rather live without these advances than relinquish the top position on the medal winners’ podium of sentient species?

Accepting a secondary rank is something understood by many around the world. I have three citizenships: British, American, Canadian. As an American, of course, you’re steeped in countless numerical examples of superiority, from GDP to – Hello? We landed on the Moon. Growing up in Britain, we were raised in the shadow of the Empire on a diet of past glories that led us to believe we were still top dog if you squinted a bit, and certainly if you had any kind of retrospective focus, which is why the British take every opportunity to remind Americans of their quantity of history. But as a Canadian, you have to accept that you could only be the global leader in some mostly intangible ways, such as politeness, amount of fresh water per capita, best poutine, etc.

Most of the world outside the USA already knows what it’s like to share the planet with a more powerful race of hopefully benign intent. So they may find it easier to accept a change in the pecking order.


Artificial Intelligence, Existential Risk

Why Elon Musk is Right

Elon Musk told the National Governors Association over the weekend that “AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.”

The man knows how to get attention. His words were carried within hours by outlets ranging from NPR to Architectural Digest.  Many piled on to explain why he was wrong. Reason.com reviled him for suggesting regulation instead of allowing free markets to work their magic. And a cast of AI experts took him to task for alarmism that had no basis in their technical experience.

It’s worth examining this conflict. Some wonder about Musk’s motivation; others think he’s angling for a government grant for OpenAI, the research organization he backed to explore ethical and safe development of AI. It is a drum Musk has banged repeatedly, going back to his 2015 $10 million donation to the Future of Life Institute, an amount that an interviewer lauded as large and Musk explained was tiny.

I’ve heard the objections from the experts before. At the Canadian Artificial Intelligence Association’s 2016 conference, the reactions from people in the field were generally either dismissive or perplexed, but I must add, in no way hostile. When you’ve written every line of code in an application, it’s easy to say that you know there’s nowhere in it that’s going to go berserk and take over the world. “Musk may say this,” started a common response, “but he uses plenty of AI himself.”

There’s no question that the man whose companies’ products include autonomous drone ships, self-landing rockets, cars on the verge of level 4 autonomy, and a future neural lace interface between the human brain and computers is deep into artificial intelligence. So why is he trying to limit it?

A cynical evaluation would be that Musk wants to hobble the competition with regulation that he has figured out how to subvert. A more charitable interpretation is that the man with more knowledge of the state of the art of AI than anyone else has seen enough to be scared. This is the more plausible alternative: if your only goal is to become as wealthy as possible, picking the most far-out technological challenges of our time and electing to solve them many times faster than was previously believed possible would be a dumb strategy.

And Elon Musk is anything but dumb.

Over a long enough time frame, what Musk is warning about is clearly plausible; it’s just that we can figure it will take so many breakthroughs to get there that it’s a thousand years in the future, a distance at which anything and everything becomes possible. If we model the human brain from the atoms on up, then with enough computational horsepower and a suitable set of inputs, we could train this cybernetic baby brain to attain toddlerhood.

We could argue that Musk, Bill Gates, and Stephen Hawking are smart enough to see further into the future than ordinary mortals, and are therefore exercised by something that is hundreds of years away and not worth bothering about now. Why the rogue AI scenario could arrive far sooner than a thousand years from now is a defining question for our time. Stephen Hawking originally went on record as saying that anyone who thought they knew when conscious artificial intelligence would arrive didn’t know what they were talking about. More recently, he revised his prediction of the lifespan of humanity down from 1,000 years to 100.

No one can chart a line from today to Skynet and show it crossing the axis in 32 years; I’m sorry if you were expecting some sophisticated trend analysis that would do that. The people who have tried include Ray Kurzweil, whose efforts are regularly pilloried. Equally, no one should think that it’s provably over, say, twenty years away. No one who watched the 2004 DARPA Grand Challenge would have thought that self-driving cars would be plying the streets of Silicon Valley eight years later. In 2015, the expectation of when a computer would beat leading players of Go was ten years hence, not one. So while we are certainly at least one major breakthrough away from conscious AI, that breakthrough may sneak up on us quickly.

Two recommendations. One: we should be able to make more informed predictions of the effects of technological advances, and therefore we should develop models that today’s AI can use to tell us. Once, people’s notion of the source of weather was angry gods in the sky; now we have supercomputers executing humongous models of the biosphere. It’s time we constructed equally detailed models of global socioeconomics.

Two: because absence of proof is not proof of absence, we should not require those warning us of AI risks to prove their case. This is not quite the precautionary principle, because attempts to stop the development of conscious AI would be utterly futile. Rather, we should act on the assumption that conscious AI will arrive within a relatively short time frame, and decide now how to ensure it will be safe.

Musk didn’t actually say that his doomsday scenario involved conscious AI, although referring to killer robots certainly suggests it. In the short term, merely the increasingly sophisticated application of artificial narrow intelligence will guarantee mass unemployment, which qualifies as civilization-rocking by any definition. See Martin Ford’s The Lights in the Tunnel for an analysis of the economic effects. In the longer term, as AI grows more powerful, even nonconscious AI could wreak havoc on the world through a paperclip-maximizer scenario, unintended emergent behavior, or malicious direction.

To quote Falstaff, perhaps the better part of valor is discretion.

Bioterrorism, Technology

Bringing Back the Dead

Viruses, that is. Canadian researchers revived an extinct horsepox virus last year on a shoestring budget, using mail-order DNA.

The researchers bought overlapping DNA fragments from a commercial synthetic DNA company. Each fragment was about 30,000 base pairs long, and because they overlapped, the team was able to “stitch” them together to complete the genome of the 212,000-base-pair horsepox virus. When they introduced the genome into cells that were already infected with a different kind of pox virus, the cells began to produce virus particles of the infectious horsepox variety.
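The “stitching” works because consecutive fragments end and begin with the same run of bases, so each piece can be joined to the next at the shared overlap. A minimal sketch of the idea, with toy sequences and a 4-base overlap standing in for fragments that were really about 30,000 base pairs long:

```python
def stitch(fragments, overlap):
    """Join ordered fragments that share `overlap` identical bases at each boundary."""
    genome = fragments[0]
    for fragment in fragments[1:]:
        # The end of the assembled sequence must match the start of the next piece.
        assert genome[-overlap:] == fragment[:overlap], "fragments do not overlap"
        genome += fragment[overlap:]  # append only the new, non-overlapping tail
    return genome

# Three toy fragments overlapping by 4 bases each.
pieces = ["ATGCGTAC", "GTACTTAG", "TTAGCCGA"]
print(stitch(pieces, 4))  # ATGCGTACTTAGCCGA
```

In the lab the joining is done chemically rather than in code, of course; the sketch only shows why overlapping fragments determine a unique assembled genome.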

Why this matters: It’s getting easier all the time to synthesize existing pathogens (like horsepox) and to create exciting new deadly ones (with CRISPR/Cas9). Crisis of Control explores the consequences of where that trend is leading. Inevitably it will one day be possible for average people to do it in their garages.

This team didn’t even use their own facility to make the DNA fragments, but ordered them through the mail. There’s some inconsistency about how easy it is to get nasty stuff that way. Here’s a quote from a 2002 article:

Right now, the companies making DNA molecules such as the ones used to recreate the polio virus do not check what their clients are ordering. “We don’t care about that,” says a technician at a company that ships DNA to more than 40 countries around the world, including some in the Middle East.

Things may be better now; apparently at least the reputable Western companies do care these days. What about do-it-yourself? What I can’t tell from the article is whether their mail order was for double-stranded DNA (dsDNA) or oligonucleotides (“oligos”), and here I am exposing my ignorance of molecular biology, because I am sure it is obvious to someone in that field which it must have been. What I do know is that there are no controls on what you can make with oligos, because you can get those synthesizers for as little as a few hundred bucks off eBay. But you then have to turn them into dsDNA to get the genome you’re looking for, and that requires some nontrivial laboratory work.

We can assume that procedure will get easier, and it has in fact already been replicated in a commercially available synthesizer that will produce dsDNA. The last time I looked, one such machine had recently become available, but implemented cryptographic security measures that meant that it would not make anything “questionable” until the manufacturer had provided an unlocking code.
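One way such a lock could work is sketched below: the machine screens each order against a list of flagged sequences and refuses to run without a code only the manufacturer can compute. The key, the flagged list, and the HMAC scheme are all assumptions of mine for illustration; the real machine’s mechanism is not public in this detail.

```python
import hashlib
import hmac

MANUFACTURER_KEY = b"example-secret-key"  # held only by the manufacturer (invented)
FLAGGED = {"ATGCCCGGGTTT"}                # toy stand-in for a pathogen screening list

def unlock_code(sequence: str) -> str:
    """The code the manufacturer would issue after reviewing a flagged order."""
    return hmac.new(MANUFACTURER_KEY, sequence.encode(), hashlib.sha256).hexdigest()

def synthesize(sequence: str, code: str = "") -> bool:
    """Return True if the machine would proceed with synthesis."""
    if sequence in FLAGGED and not hmac.compare_digest(code, unlock_code(sequence)):
        return False  # questionable sequence and no valid unlock code: refuse
    return True

print(synthesize("ATGCCCGGGTTT"))                               # False: refused
print(synthesize("ATGCCCGGGTTT", unlock_code("ATGCCCGGGTTT")))  # True: unlocked
```

The point of the HMAC is that a valid code cannot be forged without the manufacturer’s key, so the decision to synthesize anything questionable stays with the manufacturer rather than the machine’s owner.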

So far, so good, although it’s hard for me to imagine that people at the CDC find it easy to sleep. But inevitably this will become easier. How do we have to evolve to defuse this threat?


Photo: Wikimedia Commons.