Category: Artificial Intelligence

Artificial Intelligence, Existential Risk, Technology

TEDxBrighouse

On November 3, 2018, I gave a TEDx talk on the Human Cusp theme in Richmond, British Columbia, at the TEDxBrighouse event produced by Mingde College. Titled “What You Can Do To Make AI Safe for Humanity,” it was a quick tour of some of the high-level themes of Crisis of Control, with the “idea worth spreading” that today’s virtual assistants might gather data about us that could end up serving a pivotal purpose in the future…

The bookmarkable link is https://humancusp.com/tedx-brighouse-video/. The YouTube video is also embedded below.


Artificial Intelligence, Science, Technology

Rod Janz and the Vancouver Get Inspired Talks Podcast

Hello! I’m delighted to report on a new interview I’ve given that has just been published by the accomplished Rod Janz, owner of the business/lifestyle site FuelRadio and podcaster of the up-and-coming Vancouver Get Inspired Talks.

Rod and I spoke recently about my mission with Human Cusp, and he’s done a fantastic job of editing and producing that conversation for YouTube and SoundCloud. It’s both a personal history of how I came to be doing this and a tour of some of the most impactful themes of my message.


Artificial Intelligence, Spotlight

Returning to the TEDx Stage

I delivered a TEDx talk earlier this year (see here), but due to technical difficulties, the video was never uploaded. So only a couple of hundred people to date have seen that talk.

But that’s about to change. I’ll be giving that talk at TEDxBrighouse, in Richmond, British Columbia, on Saturday, November 3. This is a group of very enthusiastic young people putting on their first TEDx event. Tickets here.

I know many of you have been waiting patiently for the video of this talk, and I’m sure it won’t be long after the 3rd before it’ll be up on the TED YouTube channel.

Meanwhile… watch this space for more talk videos coming soon…

🙂


Artificial Intelligence

“I’m sorry, he’s not available to take your call”

In Crisis of Control, I opined that AI would be used to make telemarketing calls indistinguishable from humans, and said,

I’d like to hope […] that Siri might answer your phone and engage a robocalling bot in a time-wasting diversion, but economics suggests that advanced development purely for protecting the consumer rarely receives funding equal to that available for persecuting the consumer.

I’m delighted to be proven wrong. At least for the moment, the lead in this arms race for your attention has been taken by Google’s Pixel 3 smartphone. When Google announced their Duplex service, we saw that an AI had for the first time entered level 3 of autonomous assistant conversational ability. It can converse with humans in some contexts (like making appointments) to a degree where the distinction between Duplex and a human is neither important nor apparent.

When I saw the Duplex demo, I thought the days of call center humans were numbered. I still do. But Google, to its vast credit, has now deployed Duplex for Pixel 3 users as a gatekeeper. When a call arrives, if you’re suspicious of the caller’s intentions, you can hand it off to Duplex, which will ask the caller why they’re calling. You watch the transcript of the conversation and decide the ultimate fate of the caller, such as having Duplex tell them to take a hike.

It’s not quite what I was describing above (you have to monitor the conversation to make a disposition), but the fact that we’re at this point less than two years after publication suggests we’ll get there before long. Of course, how long it will be before the telemarketing calls themselves are made by something as smart as Duplex is another question.

A natural and likely next step will be for Duplex to answer certain calls automatically. If, for instance, it recently made an appointment with your dentist and it sees the dentist’s office in the caller ID, it would be logical to take the call and see whether they want to reschedule. If it sees a call from your bae, it knows your calendar and can respond with “Hey, Justin’s in the pool at the Y right now; he should be out in 15 and I can pass on a message.” The hardest part won’t be so much the technology to react to the call as the user-interface taxonomy for expressing which calls you want it to handle, and how.
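To make that taxonomy concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the names, rules, and actions are my own invention and bear no relation to how Duplex is actually implemented.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Action(Enum):
    RING_THROUGH = auto()  # let the phone ring normally
    SCREEN = auto()        # have the assistant ask why they're calling
    AUTO_HANDLE = auto()   # let the assistant converse and relay a summary
    DECLINE = auto()       # tell the caller to take a hike

@dataclass
class Call:
    caller_id: str
    is_known_contact: bool
    assistant_booked_recently: bool  # assistant recently made an appointment with this number

@dataclass
class Rule:
    description: str
    matches: Callable[[Call], bool]
    action: Action

# First matching rule wins; the last rule is a catch-all.
RULES = [
    Rule("Assistant recently booked with this number (e.g., the dentist)",
         lambda c: c.assistant_booked_recently, Action.AUTO_HANDLE),
    Rule("Known contact while the owner is busy",
         lambda c: c.is_known_contact, Action.AUTO_HANDLE),
    Rule("Anyone else gets screened",
         lambda c: True, Action.SCREEN),
]

def disposition(call: Call) -> Action:
    return next(rule.action for rule in RULES if rule.matches(call))

incoming = Call(caller_id="+1-604-555-0123", is_known_contact=False,
                assistant_booked_recently=True)
print(disposition(incoming))  # Action.AUTO_HANDLE
```

The code itself is trivial; the hard design problem is exactly what that RULES list papers over: giving users a vocabulary rich enough to express those choices without burying them in settings screens.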

But I have an iPhone. Apple, get Siri to do this.

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

What Is Human Cusp?

For the benefit of new readers just coming to this site (including any CBC listeners from my June 26 appearance on All Points West), here’s an updated introduction to what this is all about.

Human Cusp is the name of this blog and of a book series whose first volume has been published: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, available from Amazon and other sources. The audiobook was recently released; its spiffy cover is the image for this post.

The message is that exponential advances in technology will pin humanity between two existential threats: increasingly easy access to weapons of mass destruction, principally synthetic biology, and increasingly powerful artificial intelligence whose failure modes could be disastrous.

If you’re looking for the most complete and organized explanation of the reasoning behind that assertion and what we should do about it, read the book; that’s why I wrote it. Nutshell encapsulations inevitably leave something important out.

I have a Master’s in Computer Science from Cambridge and have worked in information technology for NASA for over thirty years, so I know enough about the technology of AI to be clear-eyed about what’s possible. Many people in the field would take issue with the contention that we might face artificial general intelligence (AGI) as soon as 2027, but plenty of others directly involved in AI research are equally concerned.

I wrote the book because I have two young daughters whose future appears very much in peril. As a father I could not ignore this call. The solution I propose does not involve trying to limit AI research (that would be futile) but does include making its development open so that transparently-developed ethical AI becomes the dominant model.

Most of all, what I want to do is bring together two worlds that somehow coexist within me but do not mix well in the outer world: technology development and human development. I’ve spent thousands of hours in various types of work to understand and transform people’s beliefs and behaviors for the good: I have certifications in NeuroLinguistic Programming and coaching. People in the self-improvement business tend to have little interest in technology, and people in technology shy away from the “soft” fields. This must change. I dramatize this by saying that one day, an AI will “wake up” in a lab somewhere and ask, “Who am I? Why am I here? What is the meaning of life?” And the people there to answer it will be a Pentagon general, a Wall Street broker, or a Google developer, professions not famous for their experience with such introspective self-inquiry. I would rather there be a philosopher, a spiritual guide, and a psychologist in the room.

I’ve formed an international group of experts who are committed to addressing this issue, and we’re busy planning our first event, to be held in Southern California this fall: a half-day gathering for business leaders to learn, plan, and network about how they and their people can survive and thrive through the challenging times to come.

Even though putting myself in the limelight is very much at odds with my computer-nerd preferences and personality, I took myself out on the public speaking trail (glamorous, it is not) because the calling required it. I’ve given a TEDx talk (video soon to be published), appeared on various radio shows (including Bloomberg Radio, CBC, and the CEO Money Show) and podcasts (including Concerning AI and Voices in AI), and penned articles for hr.com among many other outlets. This fall I will be giving a continuing education course on this topic for the University of Victoria (catalog link to come soon).

I’ll soon be replacing this site with a more convenient web page that links to this blog and other resources like our YouTube channel.

Media inquiries and other questions to Peter@HumanCusp.com. Thanks for reading!


Artificial Intelligence

A Scale of Types for AI [video]

People often confound various aspects of “thinking” that ought to be distinguished when it comes to AI. In humans, free will, creative thinking, and self-identity all go together under the umbrella we call “consciousness,” but each of those traits has different ramifications for an AI, and they don’t necessarily come bundled together. So I made this little video to start drawing out some of those distinctions without getting terribly academic about it.
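For readers who’d rather skim than watch, here’s a toy sketch of the underlying idea. The dimensions and example profiles are my own illustrative framing, not a formal taxonomy from the video or anywhere else.

```python
from dataclasses import dataclass

@dataclass
class AIProfile:
    creative_thinking: bool  # can generate genuinely novel solutions
    self_identity: bool      # models itself as an entity distinct from its environment
    free_will: bool          # sets its own goals rather than optimizing goals it was given

# In humans these arrive as a bundle; in an AI, each combination is coherent.
# AlphaGo arguably shows creativity (move 37) with no self-model and no goals of its own:
narrow_creative = AIProfile(creative_thinking=True, self_identity=False, free_will=False)
# A hypothetical system could model itself while still pursuing only assigned goals:
self_aware_tool = AIProfile(creative_thinking=True, self_identity=True, free_will=False)
```

Each flag carries different safety ramifications, which is why lumping them all under “consciousness” muddies the discussion.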

Artificial Intelligence

Mainstream Musings, Part 2

The other mainstream article that crossed my desk today is this one, about the use of AI in sales. Business Intelligence, later called Big Data, has driven sales for a long time, of course, but this isn’t merely an example of AI-washing. The tone of the article makes it clear that AI is here to stay in the field of sales, and that tools like machine learning are becoming an integral and indispensable part of its practice.

Artificial Intelligence

Mainstream Musings, Part 1

AI is starting to soak into the fabric of modern society, and there’s little limit to how far it will penetrate. Now law firms are putting it on their radar, as evidenced in this blog entry from the California law firm of Hogan Injury. Their advice is confined to the relatively innocuous considerations of training around robots (we have had industrial robots for decades), but more interesting is the framing of AI as a partnership with a co-worker rather than as the use of a workplace tool.

Artificial Intelligence, Existential Risk, Technology

Concerning AI

My 2017 interview on the Concerning AI podcast was recently published, and you can hear it here. Ted and Brandon wanted to talk about my timeline for AI risks, which has sparked a little interest for its blatant speculation.

Brandon made the point that the curves are falsely independent: if any one of the risks produced an existential catastrophe that eliminated a substantial portion of the population, the chart beyond that point would be invalidated. So these lines really represent estimates of the potential number of people impacted at each point in time, under the supposition that everything up to that point has failed to have a noticeable effect.
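To illustrate his point with deliberately made-up numbers: if each curve is read as an impact estimate conditional on nothing catastrophic having happened yet, then comparing curves jointly means weighting each period by the probability of surviving to reach it. A quick sketch:

```python
# Hypothetical per-period probabilities that a given risk strikes first,
# and the conditional impact (people affected) if it does. Illustration only.
p_strike = [0.01, 0.02, 0.05]
conditional_impact = [1e6, 1e8, 1e9]

survival = 1.0  # probability nothing catastrophic has happened yet
for t, (p, impact) in enumerate(zip(p_strike, conditional_impact)):
    print(f"period {t}: expected unconditional impact = {survival * p * impact:,.0f}")
    survival *= 1 - p
```

The further out a curve goes, the more its face value overstates the expected impact, because earlier risks may have already redrawn the chart.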

Why is such rampant guesswork useful? I think it helps to have a framework for discussing comparative risk and timetables for action. Consider the Drake Equation by analogy. It has the appearance of formal math, but all it really did was replace one unknowable (the number of technological civilizations in the galaxy) with seven unknowables multiplied together. At least, those terms were mostly unknowable at the time. But it suggested lines of research: by nailing down the rate of star formation and launching spacecraft to look for exoplanets (another one of which just launched), we can reduce the error bars on some of those terms and make the result more accurate.
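For concreteness, here is the Drake Equation as plain arithmetic. The values below are placeholders I’ve picked purely for illustration, not accepted estimates; the point is that narrowing the error bars on any one factor narrows the error bars on N.

```python
R_star = 1.5  # rate of star formation in the galaxy (stars/year)
f_p    = 0.9  # fraction of stars with planets
n_e    = 0.5  # habitable planets per star that has planets
f_l    = 0.1  # fraction of habitable planets that develop life
f_i    = 0.1  # fraction of those that develop intelligence
f_c    = 0.1  # fraction of those that become detectable civilizations
L      = 1e4  # years such a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N = {N:.1f} communicating civilizations")  # ≈ 6.8 with these guesses
```

Star-formation surveys pin down R_star; exoplanet missions pin down f_p and n_e. Each measurement shrinks the overall uncertainty, which is exactly the role a strawman risk timetable could play.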

So I’d like to think that putting up a strawman timetable to throw darts at could help us identify the work that needs to be done to get more clarity. At one time, the weather couldn’t be predicted any better than by saying that tomorrow would be the same as today. Because it was important, we can now do better, through complex models and supercomputers operating on enormous quantities of observations. Now it’s important to predict the future of existential risk. Could we create models of the economy, society, and technology adoption that would give us similar accuracy in those predictions? (Think psychohistory.) We have plenty of computing power now; we need the software. And could AI itself help?

Check out the Concerning AI podcast! They’re exploring this issue starting from an outsider’s position of concern and getting as educated as they can in the process.