Category: Existential Risk

Categories: Artificial Intelligence, Existential Risk

The Future of Human Cusp

I received this helpful comment from a reader:

Your book does a fantastic job covering a large number of related subjects very well, and we are on the same page on virtually all of them. That said, when I am, for example, talking with someone about how automation will shortly lead to massive unemployment and need to recommend a book for them to read, I find myself leaning toward a different book, “Rise of the Robots”, because many/most of the people I interact with can’t handle all of the topics you bring up in one book and can only focus on one topic at a time, e.g. automation replacing jobs. I really appreciate your overarching coverage, but you might want to also create several targeted books for each main topic.

He makes a very good point. Trying to hit a market target with a book like this is like fighting jello. I am aiming for a broad readership, one that’s mostly educated but nontechnical. Someone with experience building machine learning tools would find the explanation of neural networks plodding, and many scientists would chafe at the analogies for exponential growth.

For better or worse, however, I deliberately took a broad view of the topic, because I found that too many writings were missing vital points by considering only a narrow issue. Martin Ford’s books (I prefer The Lights in the Tunnel) cover the economic impact of automation very well but don’t touch on the social and philosophical questions raised by AIs approaching consciousness, or the dangers of bioterrorism. And I find all of these issues interconnected.

So what I was going for here was an introduction to the topic that would be accessible to the layperson, a sort of Beginner’s Guide to the Apocalypse. There will be more books, but I’m not going to try to compete with Ford or anyone else who can deploy more authorial firepower on a narrow subtopic. I will instead be looking to build the connection between the technical and nontechnical worlds.

Categories: Artificial Intelligence, Existential Risk, Science, Technology, The Singularity

Rebuttal to “The AI Misinformation Epidemic”

Anyone in a field of expertise can agree that the press doesn’t cover that field as accurately as its practitioners would like. Sometimes the lapses fall within the limits of what a layperson can reasonably be expected to learn about a field in a thirty-minute interview; sometimes they are rank sensationalism. Zachary Lipton, a PhD candidate at UCSD, takes aim at the press coverage of AI in The AI Misinformation Epidemic and its follow-up, but he uses a blunderbuss and takes out a lot of innocent bystanders.

He rants against media sensationalism and misinformation but provides no examples to debate other than a Vanity Fair article about Elon Musk. He goes after Kurzweil in particular but doesn’t say what, specifically, he disagrees with Kurzweil about. He says that Kurzweil’s date for the Singularity is made up, but Kurzweil has published his reasoning, and Lipton doesn’t say which data or assertions in that reasoning he disagrees with. He says the Singularity is a nebulous concept, which is bound to be true in the minds of many laypeople who have heard the term; yet he references Kurzweil immediately adjacent to that assertion without saying what about Kurzweil’s vision is wrong or unfounded, dismissing it instead as “religion,” which apparently means he doesn’t have to be specific.

He says that there are many people making pronouncements in the field who are unqualified to do so, but he doesn’t name anyone aside from Kurzweil and Vanity Fair, nor does he say what credentials should be required to qualify someone. Kurzweil, having invented the optical character reader and music synthesizer, is not qualified?

He takes no discernible stand on the issue of AI safety, yet his scornful tone is readily interpreted as pooh-poohing not just utopianism but alarmism as well. That puts him at odds with Stephen Hawking, whose academic credentials are beyond question. Nick Bostrom also comes in for attack in the comments, for no specified reason but with the implication that because he makes AI dangers digestible for the masses, he is therefore a sensationalist. Perhaps that is why I reacted so strongly to this article.

There is of course hype, but it is hard to tell exactly where. In 2015, an article claiming that a computer would beat the world champion Go player within a year would have been roundly dismissed as hype. In 2009, any article asserting that self-driving vehicles would be ready for public roads within five years would have been called overreaching. Kurzweil has a good track record of predictions; they just tend to run behind schedule. The point is, if an assertion about an existential threat turns out to be well founded but we ignore it because existential threats have always appeared over-dramatized, then it will be too late to say, “Oops, missed one.” We have to take this stuff seriously.

Categories: Artificial Intelligence, Existential Risk, Science, Technology

Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse

Vanity Fair describes a meeting between Elon Musk and Demis Hassabis, a leading creator of advanced artificial intelligence, that likely fueled Musk’s alarm about AI:

Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).

The article, mostly about Musk, is replete with Crisis of Control tropes that are now playing out in the real world far sooner than even I had thought likely. Musk favors opening up AI development and getting to super-AI before governments or “tech elites” do – even when the elites are Google or Facebook.

Categories: Artificial Intelligence, Existential Risk, Science

Preparing for the Biggest Change

The repercussions of the January Asilomar Principles meeting continue to reverberate:

Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

[…]

As AI professor Roman Yampolskiy told me, “Design of human-level AI will be the most impactful event in the history of humankind. It is impossible to over-prepare for it.”

This is about effect versus probability. It evokes what I said in Crisis of Control: the risk of being killed by an asteroid impact is roughly the same as that of dying in a plane crash, once you measure risk as the probability of the event happening times the number of people killed. Advanced AI could affect us more profoundly than climate change, could require even longer to prepare for, and could happen sooner. All that adds up to taking this seriously, starting now.
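To make that expected-impact arithmetic concrete, here is a minimal Python sketch. The event frequencies and casualty figures are placeholders chosen purely to illustrate the shape of the comparison, not sourced statistics:

```python
# Expected annual toll = (expected events per year) x (lives lost per event).
# For a very rare event, the yearly rate is effectively its yearly probability.
# All numbers below are illustrative placeholders, not sourced statistics.

def expected_annual_toll(events_per_year: float, deaths_per_event: float) -> float:
    """Expected deaths per year: event frequency times impact per event."""
    return events_per_year * deaths_per_event

# A civilization-scale asteroid strike: extremely rare, enormous impact.
asteroid = expected_annual_toll(events_per_year=1e-6, deaths_per_event=5e9)

# Fatal plane crashes: far more frequent, far smaller impact per event.
planes = expected_annual_toll(events_per_year=25, deaths_per_event=200)

print(f"asteroid: {asteroid:,.0f} expected deaths/year")  # 5,000
print(f"planes:   {planes:,.0f} expected deaths/year")    # 5,000
```

With numbers in that ballpark, the rare catastrophe and the familiar hazard carry comparable expected tolls, which is exactly why the rare one deserves equal seriousness.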

Categories: Artificial Intelligence, Existential Risk

2027: Ethicists Lose Battle with Omnipotent AI

The UK’s Guardian produced this little set piece that neatly summarizes many of the issues surrounding AI-as-existential-threat. The smug ethicist brought in to teach a blossoming AI is more interested in defending human exceptionalism (and the “Chinese Room” argument) than in teaching, but is eventually backed into a corner, stating that “You can’t rely on humanity to provide a model for humanity. That goes without saying.” Meanwhile the AI is bent on proving the “hard take-off” hypothesis – that an AI able to improve its own ability to improve would leave gradual growth behind almost overnight…
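For readers new to the term, here is a toy Python loop showing the shape of the hard take-off argument. The growth rates are arbitrary placeholders; only the curve matters:

```python
# A toy illustration of "hard take-off": an AI that improves its own
# ability to improve. The rates below are arbitrary placeholders;
# the point is the shape of the curve, not the specific numbers.

capability = 1.0     # current capability, in arbitrary units
improvement = 0.01   # fraction by which each cycle boosts capability

for cycle in range(1, 11):
    capability *= 1 + improvement
    improvement *= 2   # a better AI gets better at improving itself
    print(f"cycle {cycle:2d}: capability {capability:8.2f}")

# Early cycles look nearly flat; within a handful of iterations the
# gains explode. A "soft" take-off would hold `improvement` constant.
```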


Categories: Employment, Existential Risk, Psychology, The Singularity

Existential risk and coaching: A Manifesto

My article in the November 2016 issue of Coaching World brought an email from Pierre Dussault, who has been writing about many of the same issues that I covered in Crisis of Control. His thoughtful manifesto is a call to the International Coaching Federation to extend the reach and capabilities of the coaching profession so that its effect on individual consciousness can make a global impact. I would urge you to read it here.

Categories: Bioterrorism, Employment, Existential Risk, Politics, Psychology

Crisis of Control: The Book

The first book in the Human Cusp series has just been published: Crisis of Control: How Artificial Superintelligences May Destroy or Save the Human Race. Paperback will be available within two weeks.

Many thanks to my reviewers, friends, and especially my publisher, Jim Gifford, who has made this book so beautiful. As a vehicle for delivering my message, I could not have asked for more.

Categories: Artificial Intelligence, Existential Risk

Wired Talks AI

The November issue of Wired – guest edited by President Obama, no less – contains the responses of thought leaders to six challenges issued by Obama. One of those was “Ensure that artificial intelligence helps us rather than hurts us,” and the response came from Facebook’s Mark Zuckerberg:

Whoever cares about saving lives should be optimistic about the difference that AI can make. If we slow down progress in deference to unfounded concerns, we stand in the way of real gains.

As I say in Crisis of Control, I’m not for limiting development of artificial intelligence. That would be a first-order thinking response to its existential threat. It would be futile. But it would also be counterproductive. AI is essential to the survival of the human race. It also happens to be the possible end of the human race. To an aficionado of story, it seems like we’re in someone’s idea of a suspense thriller. You couldn’t write something more gripping if you tried. Unfortunately, the stakes are humanity.