The debate about existential risks from AI is clouded by uncertainty. We don’t know whether human-scale AIs will emerge in ten years or fifty. But there is also an unfortunate tendency among scientific types, trained as they are to be precise, to avoid guessing when they have insufficient information. That can rob us of useful speculation. So let’s take some guesses at the rises and falls of various AI-driven threats. The numbers on the axes may turn out to be wrong, but perhaps the shapes and ordering will not.

[Figure: speculative timeline of AI-driven threats, with number of humans affected (log scale) plotted against years from now]

The Y-axis is a logarithmic scale of the number of humans affected, ranging from a hundred (10²) to a billion (10⁹). So some of those curves impact roughly the entire population of the world. “Affected” does not always mean “exterminated.” The X-axis is time from now.

We start with the impact of today’s autonomous weapons, which could become easily obtained and subverted weapons of mass assassination unless stringent controls are adopted. See this video by the Future of Life Institute and the Campaign Against Lethal Autonomous Weapons. It imagines a scenario in which thousands of activist students are killed by killer drones (bearing a certain resemblance to the hunter-seekers from Dune). Cheap manufacturing with 3-D printers might stretch the impact of these devices toward a million, but I don’t see it being easy enough for average people to make the precision-shaped explosive charges needed to go past that.

At the same time, two studies project a rising tide of unemployment from automation that will affect half the workforce of North America, and by extension of the developed world, within ten to twenty years. An impact in the hundreds of millions would be a conservative estimate. So far we have not seen new jobs created beyond the field of AI research itself, which few of those displaced will be able to move into.
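As a sanity check on where that curve sits on the chart’s log axis, the arithmetic is simple enough to sketch. The workforce figures below are rough round numbers of my own, not taken from the studies:

```python
import math

# Back-of-envelope placement of "half the workforce" on the chart's
# log10 axis. These workforce figures are approximate assumptions,
# not numbers from the studies cited above.
workforce_millions = {
    "United States": 160,  # ~160 million in the labor force (approx.)
    "Canada": 19,          # ~19 million (approx.)
}
total = sum(workforce_millions.values()) * 1e6
affected = total / 2  # "half the workforce"
print(f"affected ~ {affected:.1e} people, log10 ~ {math.log10(affected):.1f}")
```

That lands at roughly 10⁸ on the chart before the rest of the developed world is even counted, which is why “hundreds of millions” reads as a conservative estimate.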

Starting around 2030 we have the euphemistically labeled “Control Failures”: bugs in the specification, design, or implementation of AIs causing havoc on any number of scales. This could culminate in the paperclip scenario, in which an AI relentlessly converts all available resources toward a trivial goal; that would certainly put a final end to further activity in the chart.

The paperclip maximizer does not require artificial consciousness – if anything, it operates better without it – so I put the risk of conscious AIs in a separate category starting around 20 years from now. That is around the median time predicted by AI researchers for human-scale AI to be developed. Again, “lives impacted” isn’t necessarily “lives lost” – we could be looking at the impact of humans integrating with a new species – but equally, it might mean an Armageddon scenario if a conscious AI decides that humanity is a problem best solved by its elimination.

If we make it through those perils, we still face the risk of self-replicating machines running amok. This is a hybrid risk combining the ultimate evolution of autonomous weapons and the control problem. A paperclip maximizer doesn’t have to end up creating self-replicating factories… but it certainly is more fun when it does.

Of course, this is a lot of rampant speculation – I said as much to begin with – but it gives us something to throw darts at.

Posted by Peter Scott

Peter Scott’s résumé reads like a Monty Python punchline: half business coach, half information technology specialist, half teacher, three-quarters daddy. After receiving a master’s degree in Computer Science from Cambridge University, he worked for NASA’s Jet Propulsion Laboratory as an employee and contractor for over thirty years, helping advance our exploration of the Solar System. Over the years, he branched out into writing technical books and training. Yet at the same time, he developed a parallel career in the “soft” fields of human development, earning certifications in Neuro-Linguistic Programming from co-founder John Grinder and in coaching from the International Coaching Federation. In 2007 he co-created a convention honoring the centennial of the birth of author Robert Heinlein, attended by over 700 science fiction fans and aerospace experts, a unique fusion of the visionary with the concrete. Bridging these disparate worlds positions him to envisage a delicate solution to the existential crises facing humanity. He lives in the Pacific Northwest with his wife and two daughters, writing the Human Cusp blog on dealing with exponential change.

Comments


  2. Darin Hitchings December 7, 2017 at 1:52 pm

    AI, I believe, is a misnomer if not an oxymoron. I think an algorithm that can learn to do anything is possible. I think an algorithm that can learn to do everything is impossible. The set of skills that humans have is uncountable and infinite. So we’re not talking about consciousness or sentience. Unless you believe you can turn a dead ant into a live one after stepping on it… That said, autonomous, non-sentient systems are a risk, and could very plausibly be subverted.

    Also, I believe you’re making a linear extrapolation in a non-linear world. What about curvature? What about the proliferation of information and the constant trend toward miniaturization? I think as our control over energy and matter asymptotes toward the absolute, the only reasonable conclusion is that if any human being wants to turn mc² into E, it’s un fait accompli.

    The risks I see are more like those of The Matrix, where the complexity grows beyond our ability to grasp or maintain. Not killer machines necessarily, but a golden age followed by a catastrophe caused by our inability to maintain what we’ve built. The knowledge will transfer instantly. And perhaps we’ll have 1e6 useful deep-learning algorithms 50 years from now. Today is a lower bound on tomorrow. I still don’t believe we’ll have general AI. And non-general AI is an oxymoron; it’s stupidity. And everyone using such terminology is, to my mind, lost in a sci-fi world or a space case. There is no I in AI today.




  4. Daniel R Miller December 8, 2017 at 3:26 am

    Agree 100% with previous comment. Conscious “Artificial Intelligence” is an incoherent concept that has somehow captivated the mainstream of futurist thought. Intelligence and consciousness correlate with an almost infinitely complex organismic and neural-perceptual process, of which humans are one manifestation. And any meaningful definition of these things cannot evade the sheer fact of biological embodiment and the vast complexities of the perceptual apparatus. Our *machines* will never reach this, at least within the next 20,000 years. Having said that, yes, there are real dangers in the development and deployment of ever more sophisticated automatic/“autonomous” systems. Partly because, precisely, they are *not* in fact truly intelligent or conscious. They are not, in other words, moral agents.


  5. Excellent graph. Your timeline for self-replicating machines might be overly conservative, though. I did a science-fact-vs.-science-fiction piece on the issue for the inaugural issue of Age of Robots magazine. Science fiction author Will Mitchell had replicating machines going berserk in 2040 but thought his estimate might be optimistic. And Alex Ellery, of Carleton University, who is putting the finishing touches on a 3D printer that can replicate itself, thinks that money is the only obstacle to it happening much sooner. https://seekingdelphi.com/2017/08/11/age-of-robots-first-look/


  6. Why don’t we make AI sound like a rainy day, or the voice of God, or Morgan Freeman (he’s in everything already)?


  7. hmmmnnn… today’s youth consult hand-driven hard drives which can be accessed by “Google glasses” with a blink of the eyes… this tech will go “internal,” a chip connected to the eye lobe… variations of “access” will be in the millions, including satellite observation. There will be no “need” to “educate,” as all information will be dictated, to be previously internalized, thereby precluding “problems” extant with education… only the wealthy will practice “higher ed.,” which is already a telling reality, and provides another dimension-link, A.I. Any resistance or rebellion involves manipulation of, or ending, “access”… such individual control of a decreasing population to be accomplished through A.I., further empowering.

    My U.W. university experience, “Utopian Philosophy,” left me unable to conceive how Huxley’s Brave New World and Orwell’s vision, 1984, could ever reconcile. Today that answer reverberates…

    Frank (Herbert)’s “Golden Path” would be forgiven… I met him twice while attending U.W., and while training Bruce Lee’s neurological art with its earliest progenitors, and over the past 45 years. Meeting RFK days before his California primary win and assassination led to a university inquisition beyond Spanish…

