Category: Artificial Intelligence

Artificial Intelligence · Philosophy · Spotlight

Welcome New Readers!

It’s been busy lately! Interest in Crisis of Control has skyrocketed, and I’m sorry I’ve neglected the blog. There are many terrific articles in the pipeline waiting to post.

If you’re new and finding your way around… don’t expect much organization yet. I saved that for my book (https://humancusp.com/book1), which contains my best effort at unpacking these issues into an organized stream of ideas that takes you from here to there.

On Saturday, February 3, I will be speaking at TEDx Pearson College UWC on how we are all parenting the future. The event will be livestreamed, and the edited video will be available on the TED site around May.

I have recorded podcasts for Concerning AI and Voices in AI that are going through post-production and will be online within a few weeks, and my interview with Michael Yorba on the CEO Money show is here.

On March 13, I will be giving a keynote at the Family Wealth Report Fintech conference in Manhattan. Are there any Crisis of Control readers near Midtown with a group that would like a talk that evening?

I’m in discussions with the University of Victoria about offering a continuing studies course and also a seminar through the Centre for Global Studies. My thanks to Professor Rod Dobell there for championing those causes and also for coming up with what I think is the most succinct description of my book for academics: “Transforming our response to AGI on the basis of reformed human relationships.”

All this and many other articles and quotes in various written media. Did I mention this is not my day job? 🙂

In other random thoughts, I am impressed by how many layers there are in the AlphaGo movie.  A friend of mine commented afterwards, “Here I was thinking you were getting me to watch a movie about AI, and I find out it’s really about the human spirit!”

Watch this movie to see the panoply of human emotions ranging across the participants and protagonists as they come to terms with the impact of a machine invading a space that had, until weeks earlier, been assumed to be safe from such intrusion for a decade. The developers of AlphaGo waver between pride in their creation and the realization that their player cannot appreciate or be buoyed by their enthusiasm… but an actual human (world champion Lee Sedol) is going through an existential crisis before their eyes.

At the moment, the best chess player in the world is, apparently, neither human nor machine, but a team of both. How, exactly, does that collaboration work? It’s one thing for a program to determine an optimal move, another to explain to a human why it is so. Will this happen with Go also?
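That gap between choosing a move and explaining it is easy to demonstrate. Here is a minimal sketch using the python-chess library (the Stockfish path below is an assumption for illustration): the engine hands back its chosen move and a numeric evaluation, but nothing resembling a reason a human could learn from.

import chess
import chess.engine

# Assumes a local Stockfish binary; the path below is illustrative.
engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")
board = chess.Board()  # the starting position

# The engine picks a move...
result = engine.play(board, chess.engine.Limit(time=1.0))
print("Best move:", result.move)

# ...and can report a numeric score, but no human-digestible "why."
info = engine.analyse(board, chess.engine.Limit(depth=15))
print("Evaluation:", info["score"])

engine.quit()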

Artificial Intelligence · Employment · Politics · Spotlight · Technology

Human Cusp on the Small Business Advocate

Hello! You can listen to my November 28 interview with Jim Blasingame on his Small Business Advocate radio show in these segments:

Part 1:

Part 2:

Part 3:


Artificial Intelligence · Bioterrorism · Employment · Existential Risk · Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced a video of our interview on the Human Cusp topics, nearly an hour and a half long (and there’s an index!).

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?
Artificial Intelligence · Technology · The Singularity · Warfare

Timeline For Artificial Intelligence Risks

The debate about existential risks from AI is clouded in uncertainty. We don’t know whether human-scale AIs will emerge in ten years or fifty. But there’s also an unfortunate tendency among scientific types to avoid any kind of guessing when they have insufficient information, because they’re trained to be precise. That can rob us of useful speculation. So let’s take some guesses at the rises and falls of various AI-driven threats.  The numbers on the axes may turn out to be wrong, but maybe the shapes and ordering will not.

[Chart: speculative timeline of AI-driven threats, plotting humans affected (log scale) against years from now]

The Y-axis is a logarithmic scale of the number of humans affected, ranging from a hundred (10²) to a billion (10⁹). So some of those curves impact roughly the entire population of the world. “Affected” does not always mean “exterminated.” The X-axis is time from now.
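Since the chart itself may not survive every repost, here is a rough sketch of how it could be redrawn with matplotlib. Every curve and number below is an illustrative guess, standing in for the (equally speculative) originals.

import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(0, 50, 500)  # X-axis: years from now

def threat(t, onset, peak, scale):
    # A rough rise-and-fall curve for one threat's impact over time.
    return scale * np.exp(-((t - peak) / (peak - onset)) ** 2)

# Illustrative guesses only, as in the original chart.
plt.semilogy(years, threat(years, 0, 8, 1e6), label="Autonomous weapons")
plt.semilogy(years, threat(years, 3, 15, 5e8), label="Unemployment")
plt.semilogy(years, threat(years, 13, 25, 1e9), label="Control failures")
plt.semilogy(years, threat(years, 20, 35, 1e9), label="Conscious AI")
plt.semilogy(years, threat(years, 28, 45, 1e9), label="Self-replicating machines")
plt.ylim(1e2, 1e9)
plt.xlabel("Years from now")
plt.ylabel("Humans affected (log scale)")
plt.legend()
plt.show()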

We start out with the impact of today’s autonomous weapons, which could become easily obtained and subverted weapons of mass assassination unless stringent controls are adopted. See this video by the Future of Life Institute and the Campaign Against Lethal Autonomous Weapons. It imagines a scenario where thousands of activist students are killed by killer drones (bearing a certain resemblance to the hunter-seekers from Dune). Cheap manufacturing with 3-D printers might stretch the impact of these devices towards a million, but I don’t see it becoming easy enough for average people to make precision-shaped explosive charges for the impact to go much past that.

At the same time, a rising tide of unemployment from automation is projected by two studies to affect half the workforce of North America, and by extension of the developed world, within ten to twenty years. An impact in the hundreds of millions would be a conservative estimate. So far we have not seen new jobs created beyond the field of AI research, which few of those displaced will be able to move into.

Starting around 2030 we have the euphemistically labeled “Control Failures”: the result of bugs in the specifications, design, or implementation of AIs causing havoc on any number of scales. This could culminate in the paperclip scenario, in which an AI single-mindedly optimizing a trivial goal consumes everything in its path; that would certainly put a final end to further activity in the chart.

The paperclip maximizer does not require artificial consciousness – if anything, it operates better without it – so I put the risk of conscious AIs in a separate category starting around 20 years from now. That’s around the median time predicted by AI researchers for human-scale AI to be developed. Again, “lives impacted” isn’t necessarily “lives lost” – we could be looking at the impact of humans integrating with a new species – but equally, it might mean an Armageddon scenario if a conscious AI decides that humanity is a problem best solved by its elimination.

If we make it through those perils, we still face the risk of self-replicating machines running amok. This is a hybrid risk combining the ultimate evolution of autonomous weapons and the control problem. A paperclip maximizer doesn’t have to end up creating self-replicating factories… but it certainly is more fun when it does.

Of course, this is a lot of rampant speculation – I said as much to begin with – but it gives us something to throw darts at.

Artificial Intelligence · Bioterrorism · Existential Risk · Technology · Transhumanism

Is Big Brother Inevitable?

Art Kleiner, writing in Strategy+Business, cited much-reported research that a deep neural network had learned to classify sexuality from facial images better than people can, and went on to describe some alarming applications of the technology:

The Chinese government is reportedly considering a system to monitor how its citizens behave. There is a pilot project under way in the city of Hangzhou, in Zhejiang province in East China. “A person can incur black marks for infractions such as fare cheating, jaywalking, and violating family-planning rules,” reported the Wall Street Journal in November 2016. “Algorithms would use a range of data to calculate a citizen’s rating, which would then be used to determine all manner of activities, such as who gets loans, or faster treatment at government offices, or access to luxury hotels.”
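To make the mechanics concrete, here is a minimal sketch of the kind of points system the Journal describes. The infractions, weights, and thresholds below are all invented for illustration; the real system’s data and rules are not public.

# Hypothetical black-mark weights; the real values are unknown.
INFRACTION_POINTS = {
    "fare_cheating": 10,
    "jaywalking": 5,
    "family_planning_violation": 50,
}

def citizen_rating(infractions, base=1000):
    # Start from a base rating and subtract black marks.
    return base - sum(INFRACTION_POINTS.get(i, 0) for i in infractions)

def gets_loan(rating, threshold=900):
    # The rating then gates access to "all manner of activities."
    return rating >= threshold

rating = citizen_rating(["jaywalking", "fare_cheating"])
print(rating, gets_loan(rating))  # 985 True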

It is no surprise that China would come up with the most blood-curdling uses of AI to control its citizens. Speculations as to how this may be inventively gamed or creatively sidestepped by said citizens are welcome.

But the more ominous point to ponder is whether this is in the future for everyone. Some societies will employ this as an extension of their natural proclivity for surveillance (I’m looking at you, Great Britain), because they can. But when technology makes it easier for people of average means to construct weapons of global destruction, will we end up following China’s lead just to secure our own society? Or can we become a race that is both secure and free?

Artificial Intelligence · Existential Risk · Technology

A.I. Joe

When I wrote in Crisis of Control about the danger of AI in the military being developed with an inadequate ethical foundation, I was hopeful that there would at least be more time to act before the military ramped up its development. That may not be the case, according to this article in TechRepublic:

…advances in machine learning and Artificial Intelligence (AI) represent a turning point in the use of automation in warfare … many of the most transformative applications of AI have not yet been addressed.

Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, who graciously provided one of the endorsements on my book, has been warning against this trend for some time:

Unfortunately, for the humanity that means development of killer robots, unsupervised drones and other mechanisms of killing people in an automated process.

“Killer robots” is of course a sensational catchphrase, but it captures attention well enough to make it serviceable to both Yampolskiy and Elon Musk. And while scenarios centered on AIs roaming the cloud and striking us through existing infrastructure are far more likely, roving killer robots aren’t entirely out of the question either.

I see the open development of ethical AI as the only way to beat the entrenched money and power behind the creation of unethical AI.


Photo Credit: Wikimedia Commons
Artificial Intelligence · Employment · Existential Risk

Why Elon Musk Is Right … Again

Less than a week after Elon Musk warned the National Association of Governors about the risks of artificial intelligence, he got in a very public dust-up with Mark Zuckerberg, who thought Musk was being “pretty irresponsible.” Musk retorted that Zuckerberg’s understanding of the topic was “limited.”

This issue pops up with such regularity as to bring joy to the copyright holders of Terminator images. But neither of these men is a dummy, and they can’t both be right… right?

We need to unpack this a little carefully. There is a short term, and a long term. In the short term (the next 10-20 years), while there will be many jobs lost to automation, there will be tremendous benefits wrought by AI, specifically Artificial Narrow Intelligence, or ANI. That’s the kind of AI that’s ubiquitous now; each instance of it solves some specific problem very well, often better than humans, but that’s all it does. Of course, this has been true of computers ever since they were invented, or there would have been no point; from the beginning they were better at taking square roots than a person with pencil and paper.

But now those skills include tasks like facial recognition and driving a car, two abilities we cannot even adequately explain how we perform ourselves. Never mind; computers can be trained by showing them good and bad examples, and they just figure it out. They can already recognize faces better than humans, and the day when they are better drivers than humans is not far off.
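For the curious, here is what “training by showing good and bad examples” looks like at its smallest, using scikit-learn’s bundled handwritten-digit images as a stand-in for faces. Note that no rule for recognizing any digit appears anywhere in the code; the model infers the mapping from labeled examples.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Labeled examples: images of digits plus the digit each one shows.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network "just figures it out" from the examples.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("Accuracy on digits it has never seen:", model.score(X_test, y_test))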

In the short term, then, the effects are unemployment on an unprecedented scale as 3.5 million people who drive vehicles for a living in the USA alone are expected to be laid off. The effects extend to financial analysts making upwards of $400k/year, whose jobs can now be largely automated. Two studies show that about 47% of work functions are expected to be automated in the short term. (That’s widely misreported as 47% of jobs being eliminated with the rest left unmolested; actually, most jobs would be affected to varying degrees, averaging to 47%.) Mark Cuban agrees.
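The difference between the two readings is easy to see with a toy calculation; the five jobs and automatable shares below are made up purely for illustration.

# Fraction of each (hypothetical) job's tasks that could be automated.
automatable_share = {"driver": 0.90, "analyst": 0.70, "nurse": 0.20,
                     "teacher": 0.15, "plumber": 0.40}

average_share = sum(automatable_share.values()) / len(automatable_share)
jobs_eliminated = sum(s >= 1.0 for s in automatable_share.values())

print(f"Average share of work automated per job: {average_share:.0%}")  # 47%
print(f"Jobs eliminated outright: {jobs_eliminated} of {len(automatable_share)}")  # 0 of 5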

But their proponents say the ANIs that make this happen will bestow such a cornucopia upon us that we should not impede this progress: cures for diseases, dirty and risky jobs given to machines, and wealth created in astronomical quantities, sufficient to take care of all those laid-off truckers.

That is true, but it requires that someone connect the wealth generated by the ANIs with the laid-off workers, and we’ve not been good at that historically. But let’s say we figure it out, the political climate swings towards Universal Basic Income, and in the short term, everything comes up roses. Zuckerberg: 1, Musk: 0, right?

Remember that the short term extends about 20 years. After that, we enter the era where AI will grow beyond ANI into AGI: Artificial General Intelligence. That means human-level problem solving abilities capable of being applied to any problem. Except that anything that gets there will have done so by having the ability to improve its own learning speed, and there is no reason for it to stop when it gets on a par with humans. It will go on to exceed our abilities by orders of magnitude, and will be connected to the world’s infrastructure in ways that make wreaking havoc trivially easy. It takes only a bug—not even consciousness, not even malevolence—for something that powerful to take us back to the Stone Age. Fortunately, history shows that Version 1.0 of all significant software systems is bug-free.

Oops.
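To see why “no reason to stop at parity” matters, consider a toy model in which capability compounds on itself. The starting point and improvement rate below are made-up numbers; the point is the shape of the curve, which sails through the human level without pausing.

HUMAN_LEVEL = 1.0
capability = 0.01  # arbitrary starting point, far below human
rate = 0.5         # assumed 50% improvement per year, compounding

for year in range(30):
    capability *= 1 + rate  # each generation improves the next
    if capability >= HUMAN_LEVEL:
        print(f"Year {year}: {capability:,.1f}x human level")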

Elon Musk and I don’t want that to be on the cover of the last issue of Time magazine ever published. Zuckerberg is more of a developer, and I have found that it is hard for developers to see the existential risks here, probably because they developed the code, they know every line of it, and they know that nowhere in it resides the lines

if ( threatened ) {
    wipe_out_civilization();
}

Of course, they understand about emergent behavior; but when they’ve spent so much time so close to software that they know intimately, it is easy to pooh-pooh assertions that it could rise up against us as uninformed gullibility. Well, I’m not uninformed about software development either. And yet I believe that we could soon be developing systems that do display drastic emergent behavior, and that by then it will be too late to take appropriate action.
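For anyone who thinks emergent behavior is hand-waving, the classic concrete example fits in a few lines: Conway’s Game of Life, where a handful of trivial rules produce a “glider” that travels across the grid, behavior written nowhere in the code.

from collections import Counter

def step(live):
    # Count the neighbors of every live cell...
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # ...then apply Life's birth and survival rules.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a "glider"
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same shape, shifted one cell diagonally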

Whether this cascade of crises happens in 15, 20, or 30 years, we should start preparing for it now, before we discover that we ought to have nudged this thing in another direction ten years earlier. And since it requires a vastly elevated understanding of human ethics, it may well take decades to learn what we need to give our AGIs not just superintelligence, but supercompassion.

Artificial Intelligence · Science · Transhumanism

Bullying Beliefs

In the otherwise thought-provoking and excellent book Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, John C. Havens uncharacteristically misses the point of one of the scientists he reports on:

Jürgen Schmidhuber is a computer scientist known for his humor, artwork, and expertise in artificial intelligence. As part of a recent speech at TEDxLausanne, he provides a picture of technological determinism similar to [Martine] Rothblatt’s, describing robot advancement beyond human capabilities as inevitable. […] [H]e observes that his young children will spend a majority of their lives in a world where the emerging robot civilization will be smarter than human beings. Near the end of his presentation he advises the audience not to think with an “us versus them” mentality regarding robots, but to “think of yourself and of humanity in general as a small stepping stone, not the last one, on the path of the universe towards more and more unfathomable complexity. Be content with that little role in the grand scheme of things.”

It’s difficult to comprehend the depths of Schmidhuber’s condescension with this statement. Fully believing that he is building technology that will in one sense eradicate humanity, he counsels nervous onlookers to embrace this decimation. […] [T]he inevitability of our demise is assured, but at least our tiny brains provide some fodder for the new order ruling our dim-witted progeny. Huzzah! Be content!

This is not a healthy attitude.

But this is not Schmidhuber’s attitude. It is more of a coping skill for facing the inevitable and seeing a grand scheme to the unfolding of the universe.

In The Hitchhiker’s Guide to the Galaxy, Zaphod Beeblebrox is tortured by being placed in the Total Perspective Vortex, which reduces its victims to blubbering insanity by showing them how insignificant they are on the scale of the universe. Unfortunately it fails in Beeblebrox’s case because his ego is so huge that he comes away reassured that he was a “really cool guy.” Havens is so desperate to avoid the perspective of humanity’s place in the universe that he mistakes or misstates Schmidhuber’s position as embracing the eradication of humanity. Schmidhuber said nothing of the sort, but foresaw a co-evolution of mankind and AI where the latter would surpass our intellectual capabilities. There is nothing condescending in this.

When I give a talk, someone will invariably raise what amounts to human exceptionalism. “When we’ve created machines that outclass us intellectually, what will humans do? What will be the point of living?” I usually reply with an analogy: Imagine that all the years of SETI and Project Ozma and HRMS have paid off, and we are visited by an alien race. Their technological superiority is not in doubt – they built the spaceships to get to us, after all – and like the aliens of Close Encounters of the Third Kind, they are also evolved emotionally, philosophically, compassionately, and spiritually. Immediately upon landing, they show us how to cure cancer, end aging, and reach for the stars. Do we reject this cornucopia because we feel inferior to these visitors? Is our collective ego so large and fragile that we would rather live without these advances than relinquish the top position on the medal winners’ podium of sentient species?

Accepting a secondary rank is something that’s understood by many around the world. I have three citizenships: British, American, and Canadian. As an American, of course, you’re steeped in countless numerical examples of superiority, from GDP to – hello? We landed on the Moon. Growing up in Britain, we were raised in the shadow of the Empire on a diet of past glories that led us to believe we were still top dog if you squinted a bit, and certainly if you had any kind of retrospective focus, which is why the British take every opportunity possible to remind Americans of their quantity of history. But as a Canadian, you have to accept that you could only be the global leader in some mostly intangible ways, such as politeness, amount of fresh water per capita, best poutine, etc.

Most of the world outside the USA already knows what it’s like to share the planet with a more powerful race of hopefully benign intent. So they may find it easier to accept a change in the pecking order.