Category: Existential Risk

Artificial Intelligence, Bioterrorism, Existential Risk, Technology, Transhumanism

Is Big Brother Inevitable?

Art Kleiner, writing in Strategy+Business, cited much-reported research showing that a deep neural network had learned to classify sexual orientation from facial images better than people can, and went on to describe some alarming applications of the technology:

The Chinese government is reportedly considering a system to monitor how its citizens behave. There is a pilot project under way in the city of Hangzhou, in Zhejiang province in East China. “A person can incur black marks for infractions such as fare cheating, jaywalking, and violating family-planning rules,” reported the Wall Street Journal in November 2016. “Algorithms would use a range of data to calculate a citizen’s rating, which would then be used to determine all manner of activities, such as who gets loans, or faster treatment at government offices, or access to luxury hotels.”

It is no surprise that China would come up with the most blood-curdling uses of AI to control its citizens. Speculations as to how this may be inventively gamed or creatively sidestepped by said citizens are welcome.

But the more ominous point to ponder is whether this is in the future for everyone. Some societies will employ this as an extension of their natural proclivity for surveillance (I’m looking at you, Great Britain), because they can. But when technology makes it easier for people of average means to construct weapons of global destruction, will we end up following China’s lead just to secure our own society? Or can we become a race that is both secure and free?

Artificial Intelligence, Existential Risk, Technology

A.I. Joe

When I wrote in Crisis of Control about the danger of AI in the military being developed with an inadequate ethical foundation, I was hopeful that there would at least be more time to act before the military ramped up its development. That may not be the case, according to this article in TechRepublic:

…advances in machine learning and Artificial Intelligence (AI) represent a turning point in the use of automation in warfare … many of the most transformative applications of AI have not yet been addressed.

Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, who graciously provided one of the endorsements on my book, has been warning against this trend for some time:

Unfortunately, for the humanity that means development of killer robots, unsupervised drones and other mechanisms of killing people in an automated process.

“Killer robots” is of course a sensational catchphrase, but it captures enough attention to make it serviceable to both Yampolskiy and Elon Musk. And while scenarios centered on AIs roaming the cloud and striking us through existing infrastructure are far more likely, roving killer robots aren’t entirely out of the question either.

I see the open development of ethical AI as the only way to beat the entrenched money and power behind the creation of unethical AI.


Photo Credit: Wikimedia Commons
Artificial Intelligence, Employment, Existential Risk

Why Elon Musk Is Right … Again

Less than a week after Elon Musk warned the National Association of Governors about the risks of artificial intelligence, he got in a very public dust-up with Mark Zuckerberg, who thought Musk was being “pretty irresponsible.” Musk retorted that Zuckerberg’s understanding of the topic was “limited.”

This issue pops up with such regularity as to bring joy to the copyright holders of Terminator images. But neither of these men is a dummy, and they can’t both be right… right?

We need to unpack this a little carefully. There is a short term, and a long term. In the short term (the next 10-20 years), while there will be many jobs lost to automation, there will be tremendous benefits wrought by AI, specifically Artificial Narrow Intelligence, or ANI. That’s the kind of AI that’s ubiquitous now; each instance of it solves some specific problem very well, often better than humans, but that’s all it does. This has of course been true of computers ever since they were invented, or there would have been no point to them; from the beginning they were better at taking square roots than a person with pencil and paper.

But now those skills include tasks like facial recognition and driving a car, two abilities we cannot even adequately explain how we perform ourselves. No matter; computers can be trained by showing them good and bad examples, and they figure it out for themselves. They can recognize faces better than humans now, and the day when they are better drivers than humans is not far off.
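
For readers who want a concrete picture of what “training by showing good and bad examples” means, here is a minimal sketch in Python using the scikit-learn library on invented toy data. Real face recognizers use deep networks and millions of images, but the principle is the same: the program is never given the rule; it infers one from labeled examples.

# Toy supervised learning: infer a hidden rule from labeled examples (illustrative only).
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # 200 invented examples with 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the hidden rule the model must discover

model = LogisticRegression().fit(X[:150], y[:150])  # train on 150 labeled examples
print("accuracy on unseen examples:", model.score(X[150:], y[150:]))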

In the short term, then, the effect is unemployment on an unprecedented scale, as 3.5 million people who drive vehicles for a living in the USA alone are expected to be laid off. The effects extend to financial analysts making upwards of $400k/year, whose jobs can now be largely automated. Two studies show that about 47% of work functions are expected to be automated in the short term. (That’s widely misreported as 47% of jobs being eliminated with the rest left unmolested; actually, most jobs would be affected to varying degrees, averaging to 47%.) Mark Cuban agrees.
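
A toy calculation, with invented numbers purely for illustration, shows why those two readings differ:

# Invented automatable-task fractions for five hypothetical jobs (not real data).
automatable_share = [0.90, 0.60, 0.45, 0.30, 0.10]

average_share = sum(automatable_share) / len(automatable_share)
fully_eliminated = sum(s == 1.0 for s in automatable_share) / len(automatable_share)

print(round(average_share, 2))   # 0.47 -> "47% of work functions automated"
print(fully_eliminated)          # 0.0  -> yet none of these jobs disappears outright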

But, say their proponents, the ANIs that make this happen will bestow such a cornucopia upon us that we should not impede their progress: cures for diseases, dirty and risky jobs given to machines, and wealth created in astronomical quantities, sufficient to take care of all those laid-off truckers.

That is true, but it requires that someone connect the wealth generated by the ANIs with the laid-off workers, and we’ve not been good at that historically. But let’s say we figure it out, the political climate swings towards Universal Basic Income, and in the short term, everything comes up roses. Zuckerberg: 1, Musk: 0, right?

Remember that the short term extends about 20 years. After that, we enter the era where AI will grow beyond ANI into AGI: Artificial General Intelligence. That means human-level problem solving abilities capable of being applied to any problem. Except that anything that gets there will have done so by having the ability to improve its own learning speed, and there is no reason for it to stop when it gets on a par with humans. It will go on to exceed our abilities by orders of magnitude, and will be connected to the world’s infrastructure in ways that make wreaking havoc trivially easy. It takes only a bug—not even consciousness, not even malevolence—for something that powerful to take us back to the Stone Age. Fortunately, history shows that Version 1.0 of all significant software systems is bug-free.

Oops.

Elon Musk and I don’t want that to be on the cover of the last issue of Time magazine ever published. Zuckerberg is more of a developer, and I have found that it is hard for developers to see the existential risks here, probably because they developed the code, they know every line of it, and they know that nowhere in it do lines like these reside:

if ( threatened ) {
    wipe_out_civilization();
}

Of course, they understand about emergent behavior; but when they’ve spent so much time so close to software that they know intimately, it is easy to pooh-pooh assertions that it could rise up against us as uninformed gullibility. Well, I’m not uninformed about software development either. And yet I believe that we could soon be developing systems that do display drastic emergent behavior, and that by then it will be too late to take appropriate action.

Whether this cascade of crisis happens in 20 years, 15, or 30, we should start preparing for it now before we discover that we ought to have nudged this thing in another direction ten years earlier. And since it requires a vastly elevated understanding of human ethics, it may well take decades to learn what we need to make our AGIs have not just superintelligence, but supercompassion.

Artificial Intelligence, Existential Risk

Why Elon Musk is Right

Elon Musk told the National Governors Association over the weekend that “AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.”

The man knows how to get attention. His words were carried within hours by outlets ranging from NPR to Architectural Digest.  Many piled on to explain why he was wrong. Reason.com reviled him for suggesting regulation instead of allowing free markets to work their magic. And a cast of AI experts took him to task for alarmism that had no basis in their technical experience.

It’s worth examining this conflict. Some wonder as to Musk’s motivation; others think he’s angling for a government grant for OpenAI, the company he backed to explore ethical and safe development of AI. It is a drum Musk has banged repeatedly, going back to his 2015 $10 million donation to the Future of Life Institute, an amount that an interviewer lauded as large and Musk explained was tiny.

I’ve heard the objections from the experts before. At the Canadian Artificial Intelligence Association’s 2016 conference, the reactions from people in the field were generally either dismissive or perplexed, but I must add, in no way hostile. When you’ve written every line of code in an application, it’s easy to say that you know there’s nowhere in it that’s going to go berserk and take over the world. “Musk may say this,” started a common response, “but he uses plenty of AI himself.”

There’s no question that the man whose companies’ products include autonomous drone ships, self-landing rockets, cars on the verge of level 4 autonomy, and a future neural lace interface between the human brain and computers is deep into artificial intelligence. So why is he trying to limit it?

A cynical evaluation would be that Musk wants to hobble the competition with regulation that he has figured out how to subvert. A more charitable interpretation is that the man with more knowledge of the state of the art of AI than anyone else has seen enough to be scared. This is the more plausible alternative: if your only goal is to become as wealthy as possible, picking the most far-out technological challenges of our time and electing to solve them many times faster than was previously believed possible would be a dumb strategy.

And Elon Musk is anything but dumb.

Over a long enough time frame, what Musk is warning about is clearly plausible; it’s just that we can figure it will take so many breakthroughs to get there that it’s a thousand years in the future, a distance at which anything and everything becomes possible. If we model the human brain from the atoms on up, then with enough computational horsepower and a suitable set of inputs we could train this cybernetic baby brain to attain toddlerhood.

We could argue that Musk, Bill Gates, and Stephen Hawking are smart enough to see further into the future than ordinary mortals and therefore are exercised by something that’s hundreds of years away and not worth bothering about now. Why the rogue AI scenario could be far less than a thousand years in the future is a defining question for our time. Stephen Hawking originally went on record as saying that anyone who thought they knew when conscious artificial intelligence would arrive didn’t know what they were talking about. More recently, he revised his prediction of the lifespan of humanity down from 1000 years to 100.

No one can chart a line from today to Skynet and show it crossing the axis in 32 years. I’m sorry if you were expecting some sophisticated trend analysis that would do that. The people who have tried include Ray Kurzweil, and his efforts are regularly pilloried. Equally, no one should think that it’s provably more than, say, twenty years away. No one who watched the 2004 DARPA Grand Challenge would have thought that self-driving cars would be plying the streets of Silicon Valley eight years later. In 2015 the expectation was that a computer beating the leading players of Go was ten years away, not one. So while we are certainly at least one major breakthrough away from conscious AI, that breakthrough may sneak up on us quickly.

Two recommendations. One, we should be able to make more informed predictions of the effects of technological advances, and therefore we should develop models that today’s AI can use to tell us what those effects will be. Once, people’s notion of the source of weather was angry gods in the sky. Now we have supercomputers executing humungous models of the biosphere. It’s time we constructed equally detailed models of global socioeconomics.

Two, because absence of proof is not proof of absence, we should not require those warning us of AI risks to prove their case. This is not quite the precautionary principle, because attempts to stop the development of conscious AI would be utterly futile. Rather, it is that we should act on the assumption that conscious AI will arrive within a relatively short time frame, and decide now how to ensure it will be safe.

Musk didn’t actually say that his doomsday scenario involved conscious AI, although referring to killer robots certainly suggests it. In the short term, merely the increasingly sophisticated application of artificial narrow intelligence will guarantee mass unemployment, which qualifies as civilization-rocking by any definition. See Martin Ford’s The Lights in the Tunnel for an analysis of the economic effects. In the further term, as AI grows more powerful, even nonconscious AI could wreak havoc on the world through the paperclip hypothesis, unintended emergent behavior, and malicious direction.

To quote Falstaff, perhaps the better part of valor is discretion.

Artificial Intelligence, Existential Risk, Technology, Warfare

Is Skynet Inevitable?

Australia’s leading AI expert is afraid the army is creating ‘Terminator’, and no, that’s not taking him out of context:

I mean Hollywood is actually pretty good at predicting the future. And if we don’t do anything, then in 50 to 100 years time it will look much like Terminator. It wouldn’t be that dissimilar to what Hollywood is painting… there will be a lot of risk well before then, in fact.

As with so many scenarios I discuss in Crisis of Control, the question is not if, but when. A timeframe of 50 to 100 years may lull us into a false sense of security when the real question is: how far in advance do we need to act to prevent this scenario?

That we will one day create “Skynet” is all but inevitable once you accept that we will develop conscious artificial intelligence (CAI). I go into the reasons why that will happen relatively soon in Crisis. The military is certain to develop CAI for the tremendous tactical leverage it grants. However, the hard take-off in intelligence levels and random mutations (which are all but assured in anything possessing creativity) will mean that at some point a CAI will evolve a motivation to control the real world machinery it is connected to… or can reach.

This is the point where many people argue that “we can just unplug it” and make sure it’s not connected to, say, weaponry. The “just unplug it” argument has been disposed of before (basically, try unplugging Siri and see how far you get). The odds of successfully air-gapping an AI from a network it wants to talk to are tiny; we had genetic algorithms that learned how to communicate across an air gap in 2004.

The answer that Crisis embraces is that we should develop ethical AI in the public domain first, so that it is in a position to dominate the market or defeat any unethical AI that may later arise. This is why we support the OpenAI initiative. It may sound naïve and reckless, but I believe it is the best chance we’ve got.

Artificial Intelligence, Existential Risk

The Future of Human Cusp

I received this helpful comment from a reader:

Your book does a fantastic job covering a large number of related subjects very well and we are on the same page on virtually all of them. That said when I am for example talking with someone about how automation will shortly lead to massive unemployment I need to recommend a book for them to read, I find myself leaning toward a different book “Rise of the Robots” because many/most of the people I interact with can’t handle all of the topics you bring up in one book and can only focus on one topic at a time, e.g. automation replacing jobs. I really appreciate your overarching coverage but you might want to also create several targeted books for each main topic.

He makes a very good point. Trying to hit a market target with a book like this is like fighting jello. I am aiming for a broad readership, one that’s mostly educated but nontechnical. Someone with experience building Machine Learning tools would find the explanation of neural networks plodding, and many scientists would be chafing at the analogies for exponential growth.

For better or worse, however, I deliberately created a broad view of the topic, because I found too many writings were missing vital points in considering only a narrow issue. Martin Ford’s books (I prefer The Lights in the Tunnel) do get very well into the economic impact of automation but don’t touch on the social and philosophical questions raised by AIs approaching consciousness, or the dangers of bioterrorism. And I find these issues to be all interconnected.

So what I was going for here was an introduction to the topic that would be accessible to the layperson, a sort of Beginner’s Guide to the Apocalypse. There will be more books, but I’m not going to try to compete with Ford or anyone else who can deploy more authorial firepower on a narrow subtopic. I will instead be looking to build the connection between the technical and nontechnical worlds.

Artificial Intelligence, Existential Risk, Science, Technology, The Singularity

Rebuttal to “The AI Misinformation Epidemic”

Anyone in a field of expertise can agree that the press doesn’t cover them as accurately as they would like. Sometimes that falls within the limits of what a layperson can reasonably be expected to learn about a field in a thirty-minute interview; sometimes it’s rank sensationalism. Zachary Lipton, a PhD candidate at UCSD, takes aim at the press coverage of AI in The AI Misinformation Epidemic, and its followup, but uses a blunderbuss and takes out a lot of innocent bystanders.

He rants against media sensationalism and misinformation but provides no examples to debate other than a Vanity Fair article about Elon Musk. He goes after Kurzweil in particular but doesn’t say what specifically he disagrees with Kurzweil on. He says that Kurzweil’s date for the Singularity is made up but Kurzweil has published his reasoning and Lipton doesn’t say what data or assertions in that reasoning he disagrees with. He says the Singularity is a nebulous concept, which is bound to be true in the minds of many laypeople who have heard the term but he references Kurzweil immediately adjacent to that assertion and yet doesn’t say what about Kurzweil’s vision is wrong or unfounded, dismissing it instead as “religion,” which apparently means he doesn’t have to be specific.

He says that there are many people making pronouncements in the field who are unqualified to do so but doesn’t name anyone aside from Kurzweil and Vanity Fair, nor does he say what credentials should be required to qualify someone. Kurzweil, having invented the optical character reader and music synthesizer, is not qualified?

He takes no discernible stand on the issue of AI safety, yet his scornful tone is readily interpreted as pooh-poohing not just utopianism but alarmism as well. That puts him at odds with Stephen Hawking, whose academic credentials are beyond question. Nick Bostrom also comes in for attack in the comments, for no specified reason but with the implication that since he makes AI dangers digestible for the masses he is therefore sensationalist. Perhaps that is why I reacted so much to this article.

There is of course hype, but it is hard to tell exactly where. In 2015, an article claiming that a computer would beat the world champion Go player within a year would have been roundly dismissed as hype. In 2009, any article asserting that self-driving vehicles would be ready for public roads within five years would have been overreaching. Kurzweil has a good track record of predictions; they just tend to run behind schedule. The point is, if an assertion about an existential threat turns out to be well founded but we ignore it because existential threats have always appeared over-dramatized, then it will be too late to say, “Oops, missed one.” We have to take this stuff seriously.

Artificial Intelligence, Existential Risk, Science, Technology

Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse

Vanity Fair describes a meeting between Elon Musk and Demis Hassabis, a leading creator of advanced artificial intelligence, which likely propelled Musk’s alarm about AI:

Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).

Mostly about Musk, the article is replete with Crisis of Control tropes that are now playing out in the real world far sooner than even I had thought likely. Musk favors opening AI development and getting to super-AI before government or “tech elites” – even when the elites are Google or Facebook.

Artificial Intelligence, Existential Risk, Science

Preparing for the Biggest Change

The repercussions of the January Asilomar Principles meeting continue to reverberate:

Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

[…]

As AI professor Roman Yampolskiy told me, “Design of human-level AI will be the most impactful event in the history of humankind. It is impossible to over-prepare for it.”

This is about effects versus probability. It evokes what I said in Crisis of Control: that the probability of being killed by an asteroid impact is roughly the same as that of dying in a plane crash, because the expected toll is the probability of the event happening times the number of people it would kill. Advanced AI could affect us more profoundly than climate change, could require even longer to prepare for, and could happen sooner. All that adds up to taking this seriously, starting now.
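
As a back-of-the-envelope sketch of that expected-risk arithmetic, with round, made-up figures chosen only to illustrate the comparison (not the numbers from the book):

# Personal risk from a cause = (chance the event happens) x (fraction of humanity it kills).
# All figures below are invented, purely for illustration.
p_impact, fraction_killed = 1e-5, 0.25   # extremely rare, but kills a quarter of everyone
p_die_in_plane_crash = 2.5e-6            # far more frequent events, each killing very few

print("asteroid impact:", p_impact * fraction_killed)  # 2.5e-06
print("plane crash:", p_die_in_plane_crash)            # 2.5e-06 -- comparable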

Artificial Intelligence, Existential Risk

2027: Ethicists Lose Battle with Omnipotent AI

The UK’s Guardian produced this little set piece that neatly summarizes many of the issues surrounding AI-as-existential-threat. The smug ethicist brought in to teach a blossoming AI is more interested in defending human exceptionalism (and the “Chinese Room” argument), but is eventually backed into a corner, stating that “You can’t rely on humanity to provide a model for humanity. That goes without saying.” Meanwhile the AI is bent on proving the “hard take-off” hypothesis…