Month: February 2018

Artificial Intelligence, Philosophy, Spotlight

Welcome New Readers!

It’s been busy lately! Interest in Crisis of Control has skyrocketed, and I’m sorry I have neglected the blog. There are many terrific articles in the pipeline to post.

If you’re new and finding your way around… don’t expect much organization yet. I saved that for my book (https://humancusp.com/book1), which contains my best effort at unpacking these issues into an organized stream of ideas that takes you from here to there.

On Saturday, February 3, I will be speaking at TEDx Pearson College UWC on how we are all parenting the future. This event will be livestreamed, and the edited video will be available on the TED site around May.

I have recorded podcasts for Concerning AI and Voices in AI that are going through post-production and will be online within a few weeks, and my interview with Michael Yorba on the CEO Money show is here.

On March 13, I will be giving a keynote at the Family Wealth Report Fintech conference in Manhattan. Are there any Crisis of Control readers near Midtown with a group that would like a talk that evening?

I’m in discussions with the University of Victoria about offering a continuing studies course and also a seminar through the Centre for Global Studies. My thanks to Professor Rod Dobell there for championing those causes and also for coming up with what I think is the most succinct description of my book for academics: “Transforming our response to AGI on the basis of reformed human relationships.”

All this and many other articles and quotes in various written media. Did I mention this is not my day job? 🙂

In other random thoughts, I am impressed by how many layers there are in the AlphaGo movie.  A friend of mine commented afterwards, “Here I was thinking you were getting me to watch a movie about AI, and I find out it’s really about the human spirit!”

Watch this movie to see the panoply of human emotions across the participants and protagonists as they come to terms with the impact of a machine invading a space that had, until weeks earlier, been assumed safe from such intrusion for another decade. The developers of AlphaGo waver between pride in their creation and the realization that their player cannot appreciate or be buoyed by their enthusiasm… while an actual human (world champion Lee Sedol) goes through an existential crisis before their eyes.

At the moment, the best chess player in the world is, apparently, neither human nor machine, but a team of both. How, exactly, does that collaboration work? It’s one thing for a program to determine an optimal move, another to explain to a human why it is so. Will this happen with Go also?