
Artificial Intelligence, Existential Risk, Technology

Concerning AI

My 2017 interview on the Concerning AI podcast was recently published and you can hear it here.  Ted and Brandon wanted to talk about my timeline for AI risks, which has sparked a little interest for its blatant speculation.

Brandon made the point that the curves are not truly independent, i.e., if any one of the risks results in an existential threat eliminating a substantial portion of the population, the chart beyond that point would be invalidated.  So these lines really represent estimates of the potential number of people impacted at each time, but under the supposition that everything until that point had failed to have a noticeable effect.

Why is such rampant guesswork useful? I think it helps to have a framework for discussing comparative risk and timetables for action. Consider the Drake Equation by analogy. It has the appearance of formal math, but really all it did was replace one unknowable (number of technological civilizations in the galaxy) with seven unknowables, multiplied together. At least, those terms were mostly unknowable at the time. But it suggested lines for research; by nailing down the rate of star formation, and launching spacecraft to look for exoplanets (another one of which just launched), we can reduce the error bars on some of those terms and make the result more accurate.
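
For reference, here is the standard form of the equation, with each of those seven unknowables spelled out:

```latex
% Drake Equation: N = expected number of detectable technological civilizations in the galaxy
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{\ell} \cdot f_{i} \cdot f_{c} \cdot L
% R_*  : average rate of star formation in the galaxy
% f_p  : fraction of stars that have planets
% n_e  : average number of potentially habitable planets per star with planets
% f_l  : fraction of those planets on which life actually arises
% f_i  : fraction of life-bearing planets that develop intelligent life
% f_c  : fraction of intelligent civilizations that release detectable signals
% L    : length of time such civilizations remain detectable
```

The decomposition doesn’t make the answer any less uncertain by itself, but it does tell you which measurements would shrink the error bars the most.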

So I’d like to think that putting up a strawman timetable to throw darts at could help us identify the work that needs to be done to get more clarity. At one time, the weather couldn’t be predicted any better than by saying that tomorrow would be the same as today. Because it was important, we can now do better than that through the application of complex models and supercomputers operating on enormous quantities of observations. Now, it’s important to predict the future of existential risk. Could we create models of the economy, society, and technology adoption that would give us that much more accuracy in those predictions? (Think psychohistory.) We have plenty of computing power now. We need the software. Could AI itself help build it?

Check out the Concerning AI podcast! They’re exploring this issue starting from an outsider’s position of concern and getting as educated as they can in the process.

 

Artificial Intelligence, Employment, Politics, Spotlight, Technology

Human Cusp on the Small Business Advocate

Hello! You can listen to my November 28 interview with Jim Blasingame on his Small Business Advocate radio show in these segments:

Part 1:

Part 2:

Part 3:

 

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced this video, nearly an hour and a half long (there’s an index!), of an interview with me on the Human Cusp topics!

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?

Artificial Intelligence, Technology, The Singularity, Warfare

Timeline For Artificial Intelligence Risks

The debate about existential risks from AI is clouded in uncertainty. We don’t know whether human-scale AIs will emerge in ten years or fifty. But there’s also an unfortunate tendency among scientific types to avoid any kind of guessing when they have insufficient information, because they’re trained to be precise. That can rob us of useful speculation. So let’s take some guesses at the rises and falls of various AI-driven threats.  The numbers on the axes may turn out to be wrong, but maybe the shapes and ordering will not.

[Chart: timeline of AI-driven risks, plotting the number of humans affected (log scale) against years from now]

The Y-axis is a logarithmic scale of the number of humans affected, ranging from a hundred (10²) to a billion (10⁹). So some of those curves impact roughly the entire population of the world. “Affected” does not always mean “exterminated.” The X-axis is time from now.

We start out with the impact of today’s autonomous weapons, which could become easily obtained and subverted weapons of mass assassination unless stringent controls are adopted. See this video by the Future of Life Institute and the Campaign Against Lethal Autonomous Weapons. It imagines a scenario where thousands of activist students are killed by killer drones (bearing a certain resemblance to the hunter-seekers from Dune). Cheap manufacturing with 3-D printers might stretch the impact of these devices toward a million, but I don’t see it becoming easy enough for average people to make precision shaped explosive charges for the impact to go past that.

At the same time, a rising tide of unemployment from automation is projected by two studies to affect half the workforce of North America and, by extension, of the developed world, within ten to twenty years. An impact in the hundreds of millions would be a conservative estimate. So far we have not seen new jobs created beyond the field of AI research, which few of those displaced will be able to move into.

Starting around 2030 we have the euphemistically labeled “Control Failures,” the result of bugs in the specifications, design, or implementation of AIs causing havoc on any number of scales. This could culminate in the paperclip maximizer scenario (Nick Bostrom’s thought experiment of an AI that pursues a trivial goal so relentlessly that it consumes everything else in the process), which would certainly put a final end to further activity in the chart.

The paperclip maximizer does not require artificial consciousness – if anything, it operates better without it – so I put the risk of conscious AIs in a separate category starting around 20 years from now. That’s around the median time predicted by AI researchers for human-scale AI to be developed. Again, “lives impacted” isn’t necessarily “lives lost” – we could be looking at the impact of humans integrating with a new species – but equally, it might mean an Armageddon scenario if a conscious AI decides that humanity is a problem best solved by its elimination.

If we make it through those perils, we still face the risk of self-replicating machines running amok. This is a hybrid risk combining the ultimate evolution of autonomous weapons and the control problem. A paperclip maximizer doesn’t have to end up creating self-replicating factories… but it certainly is more fun when it does.

Of course, this is a lot of rampant speculation – I said as much to begin with – but it gives us something to throw darts at.

Artificial Intelligence, Bioterrorism, Existential Risk, Technology, Transhumanism

Is Big Brother Inevitable?

Art Kleiner, writing in Strategy+Business, cited much-reported research showing that a deep neural network had learned to classify sexuality from facial images better than people can, and went on to describe some alarming applications of the technology:

The Chinese government is reportedly considering a system to monitor how its citizens behave. There is a pilot project under way in the city of Hangzhou, in Zhejiang province in East China. “A person can incur black marks for infractions such as fare cheating, jaywalking, and violating family-planning rules,” reported the Wall Street Journal in November 2016. “Algorithms would use a range of data to calculate a citizen’s rating, which would then be used to determine all manner of activities, such as who gets loans, or faster treatment at government offices, or access to luxury hotels.”

It is no surprise that China would come up with the most blood-curdling uses of AI to control its citizens. Speculations as to how this may be inventively gamed or creatively sidestepped by said citizens are welcome.

But the more ominous point to ponder is whether this is in the future for everyone. Some societies will employ this as an extension of their natural proclivity for surveillance (I’m looking at you, Great Britain), because they can. But when technology makes it easier for people of average means to construct weapons of global destruction, will we end up following China’s lead just to secure our own society? Or can we become a race that is both secure and free?

Artificial Intelligence, Existential Risk, Technology

A.I. Joe

When I wrote in Crisis of Control about the danger of AI in the military being developed with an inadequate ethical foundation, I was hopeful that there would at least be more time to act before the military ramped up its development. That may not be the case, according to this article in TechRepublic:

…advances in machine learning and Artificial Intelligence (AI) represent a turning point in the use of automation in warfare … many of the most transformative applications of AI have not yet been addressed.

Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, who graciously provided one of the endorsements on my book, has been warning against this trend for some time:

Unfortunately, for the humanity that means development of killer robots, unsupervised drones and other mechanisms of killing people in an automated process.

“Killer robots” is of course a sensational catchphrase, but it captures enough attention to make it serviceable to both Yampolskiy and Elon Musk. And while scenarios centered on AIs roaming the cloud and striking us through existing infrastructure are far more likely, roving killer robots aren’t entirely out of the question either.

I see the open development of ethical AI as the only way to beat the entrenched money and power behind the creation of unethical AI.

 

Photo Credit: Wikimedia Commons

Bioterrorism, Technology

Bringing Back the Dead

Viruses, that is. Canadian researchers revived an extinct horsepox virus last year on a shoestring budget, using mail-order DNA.

The researchers bought overlapping DNA fragments from a commercial synthetic DNA company. Each fragment was about 30,000 base pairs long, and because they overlapped, the team was able to “stitch” them together to complete the genome of the 212,000-base-pair horsepox virus. When they introduced the genome into cells that were already infected with a different kind of pox virus, the cells began to produce virus particles of the infectious horsepox variety.

Why this matters: It’s getting easier all the time to synthesize existing pathogens (like horsepox) and to create exciting new deadly ones (with CRISPR/Cas9). Crisis of Control explores the consequences of where that trend is leading. Inevitably it will one day be possible for average people to do it in their garages.

This team didn’t even use their own facility to make the DNA fragments, but ordered them through the mail. There’s some inconsistency about how easy it is to get nasty stuff that way. Here’s a quote from a 2002 article:

Right now, the companies making DNA molecules such as the ones used to recreate the polio virus do not check what their clients are ordering. “We don’t care about that,” says a technician at a company that ships DNA to more than 40 countries around the world, including some in the Middle East.

Things may be better now; apparently at least the reputable Western companies do care these days. What about do-it-yourself? What I can’t tell from the article is whether their mail order was for double-stranded DNA (dsDNA) or oligonucleotides (“oligos”), and here I am exposing my ignorance of molecular biology, because I am sure it is obvious to someone in that field what it must have been. What I do know is that there are no controls on what you can make with oligos, because you can get those synthesizers for as little as a few hundred bucks off eBay. But you then have to turn them into dsDNA to get the genome you’re looking for, and that requires some nontrivial laboratory work.

We can assume that procedure will get easier, and in fact it has already been automated in a commercially available synthesizer that will produce dsDNA. The last time I looked, one such machine had recently become available, but it implemented cryptographic security measures so that it would not make anything “questionable” until the manufacturer had provided an unlocking code.

So far, so good, although it’s hard for me to imagine that people at the CDC find it easy to sleep. But inevitably this will become easier. How do we have to evolve to defuse this threat?

 

Photo: Wikimedia Commons.

Artificial Intelligence, Existential Risk, Technology, Warfare

Is Skynet Inevitable?

Australia’s leading AI expert is afraid the army is creating ‘Terminator’, and no, that’s not taking him out of context:

I mean Hollywood is actually pretty good at predicting the future. And if we don’t do anything, then in 50 to 100 years time it will look much like Terminator. It wouldn’t be that dissimilar to what Hollywood is painting… there will be a lot of risk well before then, in fact.

As with so many scenarios I discuss in Crisis of Control, the question is not if, but when. A timeframe of 50 to 100 years may lull us into a false sense of security, when the real question is: how far in advance do we need to act to prevent this scenario?

That we will one day create “Skynet” is all but inevitable once you accept that we will develop conscious artificial intelligence (CAI); in Crisis I go into the reasons why that will happen relatively soon. The military is certain to develop CAI for the tremendous tactical leverage it grants. However, the hard take-off in intelligence levels and random mutations (which are all but assured in anything possessing creativity) will mean that at some point a CAI will evolve a motivation to control the real-world machinery it is connected to… or can reach.

This is the point where many people argue that “we can just unplug it” and make sure it’s not connected to, say, weaponry. The “just unplug it” argument has been disposed of before (basically, try unplugging Siri and see how far you get). The odds of successfully air-gapping an AI from a network it wants to talk to are tiny; we had genetic algorithms that learned how to communicate across an air gap in 2004.

The answer that Crisis embraces is that we should develop ethical AI in the public domain first, so that it is in a position to dominate the market or defeat any unethical AI that may later arise. This is why we support the OpenAI initiative. It may sound naïve and reckless, but I believe it is the best chance we’ve got.