Month: December 2017

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced a video of our nearly hour-and-a-half interview on the Human Cusp topics (there’s an index!).

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?
Artificial Intelligence, Technology, The Singularity, Warfare

Timeline For Artificial Intelligence Risks

The debate about existential risks from AI is clouded in uncertainty. We don’t know whether human-scale AIs will emerge in ten years or fifty. But there’s also an unfortunate tendency among scientific types to avoid any kind of guessing when they have insufficient information, because they’re trained to be precise. That can rob us of useful speculation. So let’s take some guesses at the rises and falls of various AI-driven threats.  The numbers on the axes may turn out to be wrong, but maybe the shapes and ordering will not.

[Chart: projected AI-driven risks over the coming decades, number of humans affected (log scale) vs. time from now]

The Y-axis is a logarithmic scale of the number of humans affected, ranging from a hundred (10²) to a billion (10⁹). So some of those curves impact roughly the entire population of the world. “Affected” does not always mean “exterminated.” The X-axis is time from now.
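To make the layout of that chart concrete, here is a minimal, hypothetical sketch in Python/matplotlib of how such a timeline could be plotted. The curve shapes, dates, and magnitudes below are illustrative placeholders loosely echoing the estimates in this post, not data from the original chart.

```python
# Illustrative sketch of the risk-timeline chart described above.
# All curve shapes, dates, and heights are placeholders, not data.
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(0, 40, 400)  # X-axis: years from now

def bump(t, start, peak, width, height):
    """A single rise-and-fall curve peaking at `peak` years (illustrative only)."""
    return height * np.exp(-((t - peak) ** 2) / (2 * width ** 2)) * (t >= start)

# Rough ordering of the threats discussed in this post (placeholder numbers).
risks = {
    "Autonomous weapons":          bump(years, 0,  8,  5, 1e6),
    "Unemployment from automation": bump(years, 5, 15,  8, 5e8),
    "Control failures":            bump(years, 13, 25, 8, 1e9),
    "Conscious AI":                bump(years, 20, 30, 8, 1e9),
    "Self-replicating machines":   bump(years, 25, 38, 8, 1e9),
}

fig, ax = plt.subplots()
for label, curve in risks.items():
    ax.plot(years, curve, label=label)

ax.set_yscale("log")          # logarithmic Y-axis, as in the chart
ax.set_ylim(1e2, 1e9)         # a hundred to a billion humans affected
ax.set_xlabel("Years from now")
ax.set_ylabel("Humans affected")
ax.legend()
plt.show()
```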

We start out with the impact of today’s autonomous weapons, which could become easily obtained and subverted weapons of mass assassination unless stringent controls are adopted. See this video by the Future of Life Institute and the Campaign Against Lethal Autonomous Weapons, which imagines a scenario where thousands of activist students are killed by killer drones (bearing a certain resemblance to the hunter-seekers from Dune). Cheap manufacturing with 3-D printers might stretch the impact of these devices toward a million, but I don’t see it becoming easy enough for average people to make precision-shaped explosive charges for the impact to go beyond that.

At the same time, a rising tide of unemployment from automation is projected by two studies to affect half the workforce of North America, and by extension of the developed world, within ten to twenty years. An impact in the hundreds of millions would be a conservative estimate. So far we have not seen new jobs created outside the field of AI research itself, a field that few of those displaced will be able to move into.

Starting around 2030 we have the euphemistically labeled “Control Failures,” the result of bugs in the specification, design, or implementation of AIs causing havoc on any number of scales. This could culminate in the paperclip maximizer scenario, which would certainly put a final end to further activity on the chart.

The paperclip maximizer does not require artificial consciousness – if anything, it operates better without it – so I put the risk of conscious AIs in a separate category starting around 20 years from now. That’s around the median time predicted by AI researchers for human-scale AI to be developed. Again, “lives impacted” isn’t necessarily “lives lost” – we could be looking at the impact of humans integrating with a new species – but equally, it might mean an Armageddon scenario if a conscious AI decides that humanity is a problem best solved by its elimination.

If we make it through those perils, we still face the risk of self-replicating machines running amok. This is a hybrid risk combining the ultimate evolution of autonomous weapons and the control problem. A paperclip maximizer doesn’t have to end up creating self-replicating factories… but it certainly is more fun when it does.

Of course, this is a lot of rampant speculation – I said as much to begin with – but it gives us something to throw darts at.