All posts by Peter Scott

Peter Scott’s résumé reads like a Monty Python punchline: half business coach, half information technology specialist, half teacher, three-quarters daddy. After receiving a master’s degree in Computer Science from Cambridge University, he has worked for NASA’s Jet Propulsion Laboratory as an employee and contractor for over thirty years, helping advance our exploration of the Solar System. Over the years, he branched out into writing technical books and training. Yet at the same time, he developed a parallel career in “soft” fields of human development, getting certifications in NeuroLinguistic Programming from founder John Grinder and in coaching from the International Coaching Federation. In 2007 he co-created a convention honoring the centennial of the birth of author Robert Heinlein, attended by over 700 science fiction fans and aerospace experts, a unique fusion of the visionary with the concrete. Bridging these disparate worlds positions him to envisage a delicate solution to the existential crises facing humanity. He lives in the Pacific Northwest with his wife and two daughters, writing the Human Cusp blog on dealing with exponential change.

Artificial Intelligence

“I’m sorry, he’s not available to take your call”

In Crisis of Control, I opined that AI would be used to make telemarketing calls indistinguishable from humans, and said,

I’d like to hope […] that Siri might answer your phone and engage a robocalling bot in a time-wasting diversion, but economics suggests that advanced development purely for protecting the consumer rarely receives funding equal to that available for persecuting the consumer.

I’m delighted to be proven wrong. At least at the moment, the lead in this arms race for your attention has been taken by Google’s Pixel 3 smartphone. When Google announced their Duplex service, we saw that an AI had for the first time entered level 3 of autonomous assistant conversational ability. It can converse with humans in some contexts (like making appointments) to a degree that the distinction between Duplex and a human is neither important nor apparent.

When I saw the Duplex demo, I thought the days of call center humans were numbered.  I still do.  But Google, to its vast credit, has now deployed Duplex for Pixel 3 users as a gatekeeper.  When a call arrives and you’re suspicious of the caller’s intentions, you can hand it off to Duplex, which will ask the caller why they’re calling.  You watch the conversation transcription and decide the ultimate fate of the caller, such as having Duplex tell them to take a hike.

It’s not quite what I was describing above – you have to monitor the conversation to make a disposition – but that we’re at this point less than two years after publication suggests we’ll get there before long.  Of course, how long it is before the telemarketing calls are made by something as smart as Duplex is another question.

A natural and likely next step will be for Duplex to answer certain calls automatically. If, for instance, it recently made an appointment with your dentist and it sees the dentist’s office in the caller ID, it would be logical to take the call and see whether they want to reschedule. If it sees a call from your bae, it knows your calendar and can respond with “Hey, Justin’s in the pool at the Y right now, he should be out in 15 and I can pass on a message.” The hardest part won’t be so much the technology to react to the call as the user interface taxonomy for expressing which calls you want it to handle, and how.
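To make that taxonomy concrete, here is a minimal sketch of what such call-handling rules might look like. Everything here is hypothetical: the class names, the categories, and the rule ordering are my own illustration, not any real Duplex or Pixel API.

```python
# Hypothetical sketch of a call-handling taxonomy for a Duplex-like
# assistant. All names and categories are illustrative, not a real API.

from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Action(Enum):
    ANSWER = "answer automatically"       # assistant handles the call itself
    SCREEN = "screen with transcription"  # user watches and decides
    DECLINE = "politely decline"

@dataclass
class Call:
    caller_id: str
    is_known_contact: bool
    has_recent_appointment: bool  # e.g. the assistant booked with this number

@dataclass
class Rule:
    predicate: Callable[[Call], bool]  # test applied to the incoming call
    action: Action

# Rules are evaluated top to bottom; the first match wins.
RULES = [
    Rule(lambda c: c.has_recent_appointment, Action.ANSWER),
    Rule(lambda c: c.is_known_contact, Action.SCREEN),
    Rule(lambda c: True, Action.DECLINE),  # default for unknown callers
]

def dispatch(call: Call) -> Action:
    for rule in RULES:
        if rule.predicate(call):
            return rule.action
    return Action.SCREEN

dentist = Call("555-0134", is_known_contact=False, has_recent_appointment=True)
print(dispatch(dentist))  # Action.ANSWER
```

The hard design problem the paragraph points at is not this dispatch loop but deciding what the predicates and actions should be, and how a non-programmer expresses them on a phone screen.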

But I have an iPhone.  Apple, get Siri to do this.

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

What Is Human Cusp?

For the benefit of new readers just coming to this site (including any CBC listeners from my June 26 appearance on All Points West), here’s an updated introduction to what this is all about.

Human Cusp is the name of this blog and a book series whose first volume has been published: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, available from Amazon and other sources.  The audiobook was recently released.  Its spiffy cover is the image for this post.

The message is that exponential advance in technology will pin humanity between two existential threats: increasingly easy access to weapons of mass destruction, principally synthetic biology, and increasingly powerful artificial intelligence whose failure modes could be disastrous.

If you’re looking for the most complete and organized explanation of the reasoning behind that assertion and what we should do about it, read the book.  That’s why I wrote it. Nutshell encapsulations will leave something important out, of course.

I have a Master’s in Computer Science from Cambridge and have worked on information technology for NASA for over thirty years, so I know enough about the technology of AI to be clear-eyed about what’s possible.  Many people in the field would take issue with the contention that we might face artificial general intelligence (AGI) as soon as 2027, but plenty of other people directly involved in AI research are equally concerned.

I wrote the book because I have two young daughters whose future appears very much in peril. As a father I could not ignore this call. The solution I propose does not involve trying to limit AI research (that would be futile) but does include making its development open so that transparently-developed ethical AI becomes the dominant model.

Most of all, what I want to do is bring together two worlds that somehow coexist within me but do not mix well in the outer world: technology development and human development.  I’ve spent thousands of hours in various types of work to understand and transform people’s beliefs and behaviors for the good: I have certifications in NeuroLinguistic Programming and coaching. People in the self-improvement business tend to have little interest in technology, and people in technology shy away from the “soft” fields. This must change. I dramatize this by saying that one day, an AI will “wake up” in a lab somewhere and ask “Who am I? Why am I here? What is the meaning of life?” And the people there to answer it will be a Pentagon general, a Wall Street broker, or a Google developer.  These professions are not famous for their experience with such introspective self-inquiry.  I would rather there be a philosopher, a spiritual guide, and a psychologist in the room.

I’ve formed an international group of experts who are committed to addressing this issue, and we’re busy planning our first event, to be held in Southern California this fall. It will be a half-day event for business leaders to learn, plan, and network about how they and their people can survive and thrive through the challenging times to come.

Even though putting myself in the limelight is very much at odds with my computer nerd preferences and personality, I took myself out on the public speaking trail (glamorous, it is not) because the calling required it. I’ve given a TEDx talk (video soon to be published), appeared on various radio shows (including Bloomberg Radio, CBC, and the CEO Money Show), podcasts (including Concerning AI and Voices in AI), and penned articles for hr.com among many others. This fall I will be giving a continuing education course on this topic for the University of Victoria (catalog link to come soon).

I’ll soon be replacing this site with a more convenient web page that links to this blog and other resources like our YouTube channel.

Media inquiries and other questions to Peter@HumanCusp.com. Thanks for reading!

 

Artificial Intelligence

A Scale of Types for AI [video]

People often confound various aspects of “thinking” when it comes to AI that ought to be distinguished. In humans, free will, creative thinking, and self-identity all go together under the umbrella we call “consciousness,” but each of those traits has different ramifications for an AI, and they don’t necessarily come bundled together.  So I made this little video to start drawing out some of those distinctions without getting terribly academic about it.

Artificial Intelligence

Mainstream Musings, Part 2

The other mainstream article that crossed my desk today is this one about the use of AI in sales. Business Intelligence, later rebranded as Big Data, has driven sales for a long time, of course, but this isn’t merely an example of AI-washing.  The tone of this article makes it clear that AI is here to stay in the field of sales, and that tools like machine learning are becoming an integral and indispensable part of its practice.

Artificial Intelligence

Mainstream Musings, Part 1

AI is starting to soak into the fabric of modern society, and there’s little limit to how far it will penetrate.  Now law firms are putting it on their radar, as evidenced in this blog entry from the California law firm of Hogan Injury. Their advice is confined to the relatively innocuous considerations of training around robots, and we have had industrial robots for decades, but more interesting is the framing of AI as a partnership with a co-worker rather than as a workplace tool.

Artificial Intelligence, Existential Risk, Technology

Concerning AI

My 2017 interview on the Concerning AI podcast was recently published and you can hear it here.  Ted and Brandon wanted to talk about my timeline for AI risks, which has sparked a little interest for its blatant speculation.

Brandon made the point that the curves are falsely independent, i.e., if any one of the risks results in an existential threat eliminating a substantial portion of the population, the chart following that point would be invalidated.  So these lines really represent some estimates as to the potential number of people impacted at each time, but under the supposition that everything until that point had failed to have a noticeable effect.
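Brandon’s correction can be illustrated with a toy calculation. All the numbers below are invented for the sake of the sketch; the point is only that the impact attributed to any risk in a given year has to be weighted by the probability that nothing else has struck first.

```python
# Toy illustration (all probabilities invented) of why risk curves on a
# shared timeline are not independent: the chance a catastrophe first
# strikes in a given year is conditioned on surviving every prior year.

annual_risk = {"bio": 0.02, "ai": 0.01}  # hypothetical per-year probabilities

survival = 1.0  # probability we reach the start of each year intact
for year in range(2025, 2030):
    # Probability at least one risk materializes this year,
    # treating the risks as independent within the year.
    p_any = 1 - (1 - annual_risk["bio"]) * (1 - annual_risk["ai"])
    first_strike = survival * p_any  # catastrophe happens first in this year
    print(year, round(first_strike, 4))
    survival *= 1 - p_any
```

Each year’s effective impact shrinks slightly, because later years only matter in the worlds where the earlier years passed without incident; that is exactly the dependency Brandon pointed out.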

Why is such rampant guesswork useful? I think it helps to have a framework for discussing comparative risk and timetables for action. Consider the Drake Equation by analogy. It has the appearance of formal math, but really all it did was replace one unknowable (number of technological civilizations in the galaxy) with seven unknowables, multiplied together. At least, those terms were mostly unknowable at the time. But it suggested lines for research; by nailing down the rate of star formation, and launching spacecraft to look for exoplanets (another one of which just launched), we can reduce the error bars on some of those terms and make the result more accurate.
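The Drake Equation is just a product of seven terms, which makes the analogy easy to state in a few lines. The values below are illustrative placeholders only, not estimates I endorse; the structure, not the numbers, is the point.

```python
# The Drake Equation: N = R* · f_p · n_e · f_l · f_i · f_c · L.
# Every value below is an illustrative placeholder with large error bars.

R_star = 1.5   # rate of star formation in the galaxy (stars/year)
f_p    = 0.9   # fraction of stars with planets
n_e    = 0.5   # habitable planets per star that has planets
f_l    = 0.1   # fraction of habitable planets where life arises
f_i    = 0.1   # fraction of those developing intelligence
f_c    = 0.1   # fraction of those producing detectable technology
L      = 1000  # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # communicating civilizations implied by these guesses
```

Narrowing the error bars on any one factor (say, the exoplanet surveys pinning down f_p) tightens the whole product, which is the research-guiding value the paragraph describes.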

So I’d like to think that putting up a strawman timetable to throw darts at could help us identify the work that needs to be done to get more clarity. At one time, the weather couldn’t be predicted any better than saying that tomorrow would be the same as today. Because it was important, we can now do better than that through the application of complex models and supercomputers operating on enormous quantities of observations. Now, it’s important to predict the future of existential risk. Could we create models of the economy, society, and technology adoption that would give us much more accuracy in those predictions? (Think psychohistory.) We have plenty of computing power now. We need the software. But could AI help?

Check out the Concerning AI podcast! They’re exploring this issue starting from an outsider’s position of concern and getting as educated as they can in the process.

 

Artificial Intelligence, Philosophy, Spotlight

Welcome New Readers!

It’s been busy lately! Interest in Crisis of Control has skyrocketed, and I’m sorry I have neglected the blog. There are many terrific articles in the pipeline to post.

If you’re new and finding your way around… don’t expect much organization, yet. I saved that for my book (https://humancusp.com/book1). That contains my best effort at unpacking these issues into an organized stream of ideas that take you from here to there.

On Saturday, February 3, I will be speaking at TEDx Pearson College UWC on how we are all parenting the future.  This event will be livestreamed and the edited video available on the TED site around May.

I have recorded podcasts for Concerning AI and Voices in AI that are going through post-production and will be online within a few weeks, and my interview with Michael Yorba on the CEO Money show is here.

On March 13, I will be giving a keynote at the Family Wealth Report Fintech conference in Manhattan. Any Crisis of Control readers near Midtown who have a group that would like a talk that evening?

I’m in discussions with the University of Victoria about offering a continuing studies course and also a seminar through the Centre for Global Studies. My thanks to Professor Rod Dobell there for championing those causes and also for coming up with what I think is the most succinct description of my book for academics: “Transforming our response to AGI on the basis of reformed human relationships.”

All this and many other articles and quotes in various written media. Did I mention this is not my day job? 🙂

In other random thoughts, I am impressed by how many layers there are in the AlphaGo movie.  A friend of mine commented afterwards, “Here I was thinking you were getting me to watch a movie about AI, and I find out it’s really about the human spirit!”

Watch this movie to see the panoply of human emotions ranging across the participants and protagonists as they come to terms with the impact of a machine invading a space that had, until weeks earlier, been assumed safe from such intrusion for a decade. The developers of AlphaGo waver between pride in their creation and the realization that their player cannot appreciate or be buoyed by their enthusiasm… while an actual human (world champion Lee Sedol) goes through an existential crisis before their eyes.

At the moment, the best chess player in the world is, apparently, neither human nor machine, but a team of both. How, exactly, does that collaboration work? It’s one thing for a program to determine an optimal move, another to explain to a human why it is so. Will this happen with Go also?

Artificial Intelligence, Employment, Politics, Spotlight, Technology

Human Cusp on the Small Business Advocate

Hello!  You can listen to my November 28 interview with Jim Blasingame on his Small Business Advocate radio show in these segments:

Part 1:

Part 2:

Part 3:

 

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced this video of nearly an hour and a half (there’s an index!) of an interview with me on the Human Cusp topics!

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?