Artificial Intelligence, Existential Risk, Technology, Warfare

Is Skynet Inevitable?

Australia’s leading AI expert is afraid the army is creating ‘Terminator’, and no, that’s not taking him out of context:

I mean Hollywood is actually pretty good at predicting the future. And if we don’t do anything, then in 50 to 100 years time it will look much like Terminator. It wouldn’t be that dissimilar to what Hollywood is painting… there will be a lot of risk well before then, in fact.

Like so many scenarios I discuss in Crisis of Control, the question is not if, but when. A timeframe of 50 to 100 years may lull us into a false sense of security when the real question is: how far in advance do we need to act to prevent this scenario?

That we will one day create “Skynet” is all but inevitable once you accept that we will develop conscious artificial intelligence (CAI). I go into the reasons why that will happen relatively soon in Crisis. The military is certain to develop CAI for the tremendous tactical leverage it grants. However, the hard take-off in intelligence levels and random mutations (which are all but assured in anything possessing creativity) will mean that at some point a CAI will evolve a motivation to control the real-world machinery it is connected to… or can reach.

This is the point where many people argue that “we can just unplug it” and make sure it’s not connected to, say, weaponry.  The “just unplug it” argument has been disposed of before (basically, try unplugging Siri and see how far you get). The odds of successfully air gapping an AI from a network it wants to talk to are tiny; we had genetic algorithms that learned how to communicate across an air gap in 2004.

The answer that Crisis embraces is that we should develop ethical AI in the public domain first, so that it is in a position to dominate the market or defeat any unethical AI that may later arise. This is why we support the OpenAI initiative. It may sound naïve and reckless, but I believe it is the best chance we’ve got.

Artificial Intelligence, Employment

When will a machine do your job better than you?

Katja Grace at the Future of Humanity Institute at the University of Oxford and her fellow authors surveyed the world’s leading researchers in artificial intelligence, asking them when they think intelligent machines will outperform humans at a wide range of tasks. They averaged the answers and published them at https://arxiv.org/pdf/1705.08807.pdf. The results are… surprising.

First up, AIs will reach human proficiency in the game of Go in 2027… wait, what? Ah, but this survey was conducted in 2015. As I noted in Crisis of Control, before AlphaGo beat Lee Sedol in 2016, it was expected to be a decade before that happened; here’s the numeric proof. This really shows what a groundbreaking achievement that was, to blindside so many experts.

Forty-eight percent of respondents think that research on minimizing the risks of AI should be prioritized by society more than it currently is. And when they analyzed the results by demographics, only one factor was significant: geography. Asian researchers think human-level machine intelligence will be achieved much sooner.

Amusingly, their predictions for when different types of jobs will be automated are relatively clustered under 50 years from now, with one far outlier at over 80: apparently, the job of “AI Researcher” will take longer to automate than anything else, including surgeon. Might be a bit of optimism at work there…

 

Artificial Intelligence, Politics, Technology

AI vs AI

More from the mailbag:

Regarding the section on AI on the battlefield, you rightly focus on it behaving ethically toward troops/citizens on the other side. However, very likely in the future the enemy ‘troops’ on the other side will be AI entities. It might be interesting to explore the ethics rules in this case?

Heh, very good point. Of course, at some point, the AI entities will be sufficiently conscious as to deserve equal rights. Who knows, they may be granted those rights by opposing AIs somewhat before then, out of professional courtesy. But your question suggests a more pragmatic, earlier timeframe. In that view, the AI doesn’t recognize another AI as having any rights; it’s just aware that it’s looking at something that is not-a-human.

Before AIs escape their programming, we assume that their programming will only grant special status to human life. (Will autonomous cars brake for cats? Squirrels? Mice?) We have to postulate a level of AI development that’s capable of making value judgements by association before things get interesting. Imagine an AI that could evaluate the strategic consequences of destroying an opposing AI. Is its opponent directing the actions of inferior units? Will destroying its opponent be interpreted as a new act of war? Of course, these are decisions that human field troops are not empowered to make. But in an AI-powered battlefield, there may be no need to distinguish between the front lines and the upper echelons. They may be connected well enough to rewrite strategy on the fly.
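To make the idea of value judgements by association a little more concrete, here is a deliberately toy sketch of such an engagement decision. Every rule, threshold, and name in it is my own invention for illustration; nothing here describes a fielded or proposed system.

```python
def engage_opposing_ai(commands_other_units: bool,
                       escalation_risk: float,
                       rules_of_engagement_permit: bool) -> bool:
    """Toy illustration only: weigh the strategic questions posed above."""
    if not rules_of_engagement_permit:
        return False               # humans still set the outer bounds, for now
    if escalation_risk > 0.5:
        return False               # destroying it might read as a new act of war
    return commands_other_units    # worth engaging only if it directs other units

# Under these invented thresholds, a command node in a low-escalation setting
# gets engaged; anything else is left alone.
print(engage_opposing_ai(True, escalation_risk=0.2, rules_of_engagement_permit=True))
```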

I’d like to think that when the AIs get smart enough, they will decide that fighting each other is wasteful and instead negotiate a treaty that eluded their human masters. But before we get to that point we’re far more likely to be dealing with AIs with a programmed penchant for destruction.

Artificial Intelligence, Employment, Technology

Sit Up and Beg

More reader commentary:

“If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. […] Programming won’t be the sole domain of trained coders who have learned a series of arcane languages. It’ll be accessible to anyone who has ever taught a dog to roll over.”

Totally agree, except this will not be as easy as some may think. I think the most important part of great programmers is not their programming skill but their ability to take a small number of broad requirements and turn them into the extremely detailed requirements necessary for a program to succeed in most/all situations and use cases, e.g. boundary conditions. As somewhat of an aside, we hear even today about how a requirements document given to developers should cover ‘everything’. If it really covered everything, it would have to be on the order of the number of lines of code it takes to create the program.

If there’s been anything about developers that elevated them to some divine level, it isn’t their facility with the proletarian hardware but their ability to read the minds of the humans giving them their requirements, to be able to tell what they really need, not just better than those humans can explicate, but better than they even know. That talent, in the best developers (or analysts, if the tasks have been divided), is one of the most un-automatable acts in employment.

The quotation was from Wired magazine, but I think it has to be considered in a slightly narrow context. Many of the tough problems being solved by AIs now are solved through training. Facial recognition, voice recognition, medical scan diagnosis: the best approach is to train some form of neural network on a corpus of data and let it loose. The more problems that are susceptible to that approach, the more developers will find their role to be one of mapping input/output layers, gathering a corpus, and pushing the Learn button. It will be a considerable time (he said, carefully avoiding quantifying ‘considerable’) before that’s applicable to the general domain of “I need a process to solve this problem.”
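To make that concrete, here is a minimal sketch of that workflow using scikit-learn. The file name, column name, and layer sizes are hypothetical; the shape of the job is the point: map the inputs and outputs, gather a corpus, press Learn.

```python
# Minimal sketch of the "map the layers, gather a corpus, push Learn" workflow.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1. Gather a corpus: rows of features plus a labelled outcome (hypothetical file).
corpus = pd.read_csv("labelled_examples.csv")
X = corpus.drop(columns=["label"])     # everything feeding the input layer
y = corpus["label"]                    # what the output layer should produce

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# 2. Push the Learn button: the network finds the mapping itself.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
model.fit(X_train, y_train)

# 3. Check whether the learned mapping generalizes to unseen examples.
print("held-out accuracy:", model.score(X_test, y_test))
```

The developer’s contribution is in choosing the features, the labels, and the corpus; the “programming” of the recognizer itself is done by the training run.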

Artificial Intelligence, Technology

But Who Gets the No-Claims Bonus?

A reader commented:

“Partly, this automotive legerdemain is thanks to the same trick that makes much AI appear to be smarter than it really is: having a backstage pass to oodles of data. What autonomous vehicles lack in complex judgement, they make up with undivided attention processing unobstructed 360° vision and LIDAR 3-D environment mapping. If you had that data pouring into your brain you’d be the safest driver on the planet.”

But we are not capable of handling all of the data described above pouring into our brain. The flow of sensory data from our sight, hearing, smell, taste, and touch is tailored via evolution to match what our brain is capable of handling. AIs will be nowhere near as limited as we are, the perfect example being the AI cars you describe so well.

I’m not sure that the bandwidth of a Tesla’s sensors is that much greater than what is available to the external senses of a human being once you add in what’s available through all the nerve endings in the skin. Humans make that work for them through the Reticular Formation, the part of the brain that decides which sensory input we will pay attention to. Meditators run the Reticular Formation through calisthenics.

However, the point I was making was that the human brain behind a steering wheel does not have available to it the sensors that let a Tesla see through fog, or the precise ranging data that maps the environment. If you could see as much of the road as its cameras do, you’d certainly be safer than a human driver without those aids. The self-driving car, with its ability to focus on many areas at once and never get tired, has the potential to do even better, which is why people are talking seriously about saving half a million lives a year.

Artificial Intelligence, Existential Risk

The Future of Human Cusp

I received this helpful comment from a reader:

Your book does a fantastic job covering a large number of related subjects very well, and we are on the same page on virtually all of them. That said, when I am, for example, talking with someone about how automation will shortly lead to massive unemployment and need to recommend a book for them to read, I find myself leaning toward a different book, “Rise of the Robots”, because many/most of the people I interact with can’t handle all of the topics you bring up in one book and can only focus on one topic at a time, e.g. automation replacing jobs. I really appreciate your overarching coverage, but you might want to also create several targeted books for each main topic.

He makes a very good point. Trying to hit a market target with a book like this is like fighting jello. I am aiming for a broad readership, one that’s mostly educated but nontechnical. Someone with experience building Machine Learning tools would find the explanation of neural networks plodding, and many scientists would be chafing at the analogies for exponential growth.

For better or worse, however, I deliberately took a broad view of the topic, because I found too many writings were missing vital points by considering only a narrow issue. Martin Ford’s books (I prefer The Lights in the Tunnel) cover the economic impact of automation very well but don’t touch on the social and philosophical questions raised by AIs approaching consciousness, or the dangers of bioterrorism. And I find all these issues to be interconnected.

So what I was going for here was an introduction to the topic that would be accessible to the layperson, a sort of Beginner’s Guide to the Apocalypse. There will be more books, but I’m not going to try to compete with Ford or anyone else who can deploy more authorial firepower on a narrow subtopic. I will instead be looking to build the connection between the technical and nontechnical worlds.

Artificial Intelligence, Design, Technology

What Is AI?

Chatbots Magazine offers a tidy summary of different aspects of AI, such as machine learning, expert systems (does anyone still call their product an ‘expert system’? That’s so ’90s and Prolog), and Natural Language Processing. Yet they run into the usual chimera that such definition attempts meet: defining AI by current technologies is like defining an elephant as a trunk, tail, and tusks. More familiar is their parting shot:

Tesler’s Law: “AI is whatever hasn’t been done yet.”

Yes, until someone does it, routing a car to you via a mobile view looks like either black magic or AI, but when they do it, it becomes just another app. If it can’t commiserate with you about a failed romance or discuss the finer points of Van Gogh appreciation, it’s just a stupid computer trick.

The definition of AI is at once a perpetually elusive target and a bar we’ve already hurdled. Plenty of people are willing to call their products “artificial intelligence” because we’re in one of the “AI summers” and AI is once again a hot term free from its past stigma. So anyone with a big if-then-else chain hidden in their code slaps an AI label on it and doubles the price. Needless to say, the term is devalued when it applies to every shape of business analytics program.
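To illustrate the kind of thing that gets the label slapped on it, here is a caricature of my own, not any real product: a hand-written rule chain with nothing resembling learning in it.

```python
def customer_churn_ai(days_since_login: int, complaints: int) -> str:
    """Entirely hypothetical 'AI' that is really just an if-then-else chain."""
    if complaints > 3:
        return "high risk"
    elif days_since_login > 30:
        return "medium risk"
    elif days_since_login > 7:
        return "low risk"
    else:
        return "no risk"

# No training data, no model, no learning: change the business and someone
# has to rewrite the rules by hand.
print(customer_churn_ai(days_since_login=45, complaints=1))  # -> "medium risk"
```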

Perhaps the point where hype and hope converge is on Artificial General Intelligence, i.e., where AI can have that midnight philosophy discussion or ask you how your date went. Until then… good luck.

 

Artificial Intelligence, Psychology, Science

Turing through the Looking Glass

I received the following question from a reader:

I’m in the section about Turing. I have enormous respect for him and think the Turing Test was way ahead of its time. That said, I think it is flawed.

It was defined at a time when human intelligence was considered the pinnacle of intelligence. Therefore it tested whether the AI had reached that pinnacle. However, if the AI (or alien) is far smarter and maybe far different, it might very well fail the Turing Test. I’m picturing an AI having to dumb down and/or play-act its answers just to pass the Turing Test, similar to the Cynthia Clay example in your book.

I wonder if anyone has come up with a Turing-type test that maybe focuses on intelligence, consciousness, compassion (?), and ability to learn, and not on being human-like?

This is a question that is far more critical than may appear at first blush.  There are several reasons why an AI might dumb down its answers, chief among them being self-preservation. I cite Martine Rothblatt in Crisis as pointing out that beings lacking human rights tend to get slaughtered (for instance, 100 million pigs a year in the USA). I think it more likely that at first AI intelligence will evolve along a path so alien to us that neither side will recognize that the other possesses consciousness for a considerable period.

Metrics for qualities other than traditional intelligence are invariably suffixed with “quotient” and generally measure things more associated with uniquely human traits, such as interpersonal intelligence (or emotional quotient) and intrapersonal intelligence.

I too have enormous respect for Turing; I am chuffed to have sat in his office at his desk. A Turing Test is by definition a yardstick determined by whether one side thinks the other is human, so to ask for a variant which doesn’t gauge humanness would be like asking for a virgin Electric Iced Tea; nothing left to get juiced on.  But if we’re looking for a way to tell whether an AI is intelligent without the cultural baggage, this question takes me back *cough* years ago when I was in British Mensa, the society for those who have high IQs (…and want to be in such a society). Two of the first fellow members I met were a quirky couple who shared that the man had failed the entrance test the first time but got in when he took the “culture-free” test, one which doesn’t have questions about cricket scoring or Greek philosophers.

He was referring to the Culture Fair test, which uses visual puzzles instead of verbal questions. That might be the best way we currently have to test an AI’s intelligence; I wrote a few days ago about how the physical world permeates every element of our language. An AI that had never had a body or physical-world experience would find just about every aspect of our language impenetrable. At some point an evolving artificial intellect would have no problem assimilating human culture, but it might have to have scaled some impressive cognitive heights first.

But what really catches my eye about your question is whether we can measure the compassion of an AI without waiting for it to evolve to the Turing level. It sounds like it’s too touchy-feely to be relevant, but one tweak – substitute ethical for compassionate – and we’re in critical territory. Right now we have to take ethics in AI seriously. The Office of Naval Research has a contract to study how to imbue autonomous armed drones with ethics. Automated machine guns in the Korean DMZ have the ability to take a surrender from a human. And what about self-driving cars and the Trolley Problem? As soon as we create an AI that can make decisions in trolley-like situations that have not been explicitly programmed into it, it is making those decisions according to some metaprogram… ethics by any standard. And we need right now some means of assuring ourselves as to the quality of those ethics.
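To show what “ethics by any standard” might look like in the machinery, here is a deliberately toy sketch of my own devising, not any deployed system: candidate maneuvers scored by a weighted cost function. The weights are the metaprogram, and auditing them is exactly the assurance problem I am pointing at.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted consequences of one candidate maneuver (toy numbers)."""
    expected_human_harm: float        # 0.0 (none) .. 1.0 (severe)
    expected_property_damage: float   # same scale
    traffic_law_violations: int

# These weights ARE the metaprogram: whoever sets them has encoded an ethics.
WEIGHTS = {"human_harm": 1000.0, "property": 10.0, "law": 1.0}

def ethical_cost(o: Outcome) -> float:
    return (WEIGHTS["human_harm"] * o.expected_human_harm
            + WEIGHTS["property"] * o.expected_property_damage
            + WEIGHTS["law"] * o.traffic_law_violations)

def choose(maneuvers: dict[str, Outcome]) -> str:
    # Pick the maneuver with the lowest weighted cost.
    return min(maneuvers, key=lambda name: ethical_cost(maneuvers[name]))

options = {
    "brake hard":        Outcome(0.05, 0.2, 0),   # cost 52
    "swerve onto verge": Outcome(0.01, 0.6, 1),   # cost 17
}
print(choose(options))  # -> "swerve onto verge", under these particular weights
```

Change the weights and the “right” answer changes with them, which is why we need a way to inspect and certify them before such decisions get made at highway speed.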

You may notice that this doesn’t provide a final answer to your final question. As far as I know, there isn’t one yet. But we need one.