Artificial Intelligence, Employment

When will a machine do your job better than you?

Katja Grace of the Future of Humanity Institute at the University of Oxford and her fellow authors surveyed the world’s leading researchers in artificial intelligence, asking them when they think intelligent machines will outperform humans at a wide range of tasks. They averaged the answers and published them at https://arxiv.org/pdf/1705.08807.pdf. The results are… surprising.

First up, AIs will reach human proficiency in the game of Go in 2027… wait, what? Ah, but this survey was conducted in 2015. As I noted in Crisis of Control, before AlphaGo beat Lee Sedol in 2016, it was expected to be a decade before that happened; here’s the numeric proof. This really shows what a groundbreaking achievement that was, to blindside so many experts.

Forty-eight percent of respondents think that research on minimizing the risks of AI should be prioritized by society more than it is now. And when they analyzed the results by demographics, only one factor was significant: geography. Asian researchers think human-level machine intelligence will be achieved much sooner:

[Chart from the paper: predicted timelines for human-level machine intelligence, broken down by respondents’ region]

Amusingly, their predictions for when different types of jobs will be automated cluster fairly tightly under 50 years from now, with one far outlier at over 80 years. Apparently, the job of “AI Researcher” will take longer to automate than anything else, including surgeon. Might be a bit of optimism at work there…

 

Artificial Intelligence, Politics, Technology

AI vs AI

More from the mailbag:

Regarding the section on AI on the battlefield, you rightly focus on it behaving ethically toward troops/citizens on the other side. However, very likely in the future the enemy ‘troops’ on the other side will be AI entities. It might be interesting to explore the ethics rules in this case?

Heh, very good point. Of course, at some point, the AI entities will be sufficiently conscious as to deserve equal rights. Who knows, they may be granted those rights by opposing AIs somewhat before then, as a professional courtesy. But your question suggests a more pragmatic, earlier timeframe. In that view, the AI doesn’t recognize another AI as having any rights; it’s just aware that it’s looking at something that is not-a-human.

Before AIs escape their programming, we assume that their programming will only grant special status to human life. (Will autonomous cars brake for cats? Squirrels? Mice?) We have to postulate a level of AI development that’s capable of making value judgements by association before things get interesting. Imagine an AI that could evaluate the strategic consequences of destroying an opposing AI. Is its opponent directing the actions of inferior units? Will destroying its opponent be interpreted as a new act of war? Of course, these are decisions that human field troops are not empowered to make. But in an AI-powered battlefield, there may be no need to distinguish between the front lines and the upper echelons. They may be connected well enough to rewrite strategy on the fly.

I’d like to think that when the AIs get smart enough, they will decide that fighting each other is wasteful and instead negotiate a treaty that eluded their human masters. But before we get to that point we’re far more likely to be dealing with AIs with a programmed penchant for destruction.

Artificial Intelligence, Employment, Technology

Sit Up and Beg

More reader commentary:

“If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. […] Programming won’t be the sole domain of trained coders who have learned a series of arcane languages. It’ll be accessible to anyone who has ever taught a dog to roll over.”

Totally agree, except this will not be as easy as some may think. I think the most important attribute of great programmers is not their programming skill but their ability to take a small number of broad requirements and turn them into the extremely detailed requirements necessary for a program to succeed in most or all situations and use cases, e.g. boundary conditions. As somewhat of an aside, we hear even today about how a requirements document given to developers should cover ‘everything’. If it really covered everything, it would have to be on the order of the number of lines of code it takes to create the program.

If there’s anything about developers that elevates them to some divine level, it isn’t their facility with the proletarian hardware but their ability to read the minds of the humans giving them their requirements: to tell what they really need, not just better than those humans can explicate it, but better than they even know it themselves. That talent, in the best developers (or analysts, if the tasks have been divided), is one of the most un-automatable skills in employment.

The quotation was from Wired magazine, but I think it has to be considered in a slightly narrower context. Many of the tough problems being solved by AIs now are solved through training. Facial recognition, voice recognition, medical scan diagnosis: for each of these, the best approach is to train some form of neural network on a corpus of data and let it loose. The more problems that are susceptible to that approach, the more developers will find their role to be one of mapping input/output layers, gathering a corpus, and pushing the Learn button. It will be a considerable time (he said, carefully avoiding quantifying ‘considerable’) before that’s applicable to the general domain of “I need a process to solve this problem.”
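
To make that concrete, here is a minimal sketch of the “map the inputs and outputs, gather a corpus, push the Learn button” workflow. The library (scikit-learn) and the digit-recognition task are my own illustrative choices, not anything prescribed above:

    # A minimal sketch of the "gather a corpus, push the Learn button" workflow.
    # scikit-learn and its bundled digits dataset are illustrative choices only.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # The "corpus": images of handwritten digits (inputs) and their labels (outputs).
    digits = load_digits()
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    # Map the input/output layers by choosing a network shape...
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)

    # ...and push the Learn button.
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

The developer’s judgement here lives almost entirely in choosing the corpus and the input/output mapping; the learning itself is a button press.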

Artificial Intelligence, Technology

But Who Gets the No-Claims Bonus?

A reader commented:

“Partly, this automotive legerdemain is thanks to the same trick that makes much AI appear to be smarter than it really is: having a backstage pass to oodles of data. What autonomous vehicles lack in complex judgement, they make up with undivided attention processing unobstructed 360° vision and LIDAR 3-D environment mapping. If you had that data pouring into your brain you’d be the safest driver on the planet.”

But we are not capable of handling all of the data described above pouring into our brains. The flow of sensory data from our sight, hearing, smell, taste and touch is tailored via evolution to match what our brain is capable of handling. AIs will be nowhere near as limited as we are, with the perfect example being the AI cars you describe so well.

I’m not sure that the bandwidth of a Tesla’s sensors is that much greater than what is available to the external senses of a human being when you add in what’s available through all the nerve endings in the skin. Humans make that work for them through the Reticular Formation, part of the brain that decides what sensory input we will pay attention to. Meditators run the Reticular Formation through calisthenics.

However, the point I was making was that the human brain behind a steering wheel does not have available to it the sensors that let a Tesla see through fog, or the precise ranging data that maps the environment. If you could see as much of the road as its cameras do, you’d certainly be safer than a human driver without those aids. The self-driving car, with its ability to focus on many areas at once and never get tired, has the potential to do even better, which is why people are talking seriously about saving half a million lives a year.

Artificial Intelligence, Existential Risk

The Future of Human Cusp

I received this helpful comment from a reader:

Your book does a fantastic job covering a large number of related subjects very well, and we are on the same page on virtually all of them. That said, when I am, for example, talking with someone about how automation will shortly lead to massive unemployment and need to recommend a book for them to read, I find myself leaning toward a different book, “Rise of the Robots,” because many or most of the people I interact with can’t handle all of the topics you bring up in one book and can only focus on one topic at a time, e.g. automation replacing jobs. I really appreciate your overarching coverage, but you might want to also create several targeted books, one for each main topic.

He makes a very good point. Trying to hit a target market with a book like this is like fighting jello. I am aiming for a broad readership, one that’s mostly educated but nontechnical. Someone with experience building Machine Learning tools would find the explanation of neural networks plodding, and many scientists would chafe at the analogies for exponential growth.

For better or worse, however, I deliberately took a broad view of the topic, because I found that too many writings were missing vital points by considering only a narrow issue. Martin Ford’s books (I prefer The Lights in the Tunnel) cover the economic impact of automation very well but don’t touch on the social and philosophical questions raised by AIs approaching consciousness, or the dangers of bioterrorism. And I find these issues to be all interconnected.

So what I was going for here was an introduction to the topic that would be accessible to the layperson, a sort of Beginner’s Guide to the Apocalypse. There will be more books, but I’m not going to try to compete with Ford or anyone else who can deploy more authorial firepower on a narrow subtopic. I will instead be looking to build the connection between the technical and nontechnical worlds.

Artificial Intelligence, Design, Technology

What Is AI?

Chatbots Magazine offers a tidy summary of different aspects of AI, such as machine learning, expert systems (does anyone still call their product an ‘expert system’? That’s so ’90s and Prolog), and Natural Language Processing. Yet they run into the usual chimera that such definition attempts meet: defining AI by its current technologies is like defining an elephant as a trunk, tail, and tusks. More familiar is their parting shot:

Tesler’s Law: “AI is whatever hasn’t been done yet.”

Yes, until someone does it, routing a car to you via a map view on your phone looks like either black magic or AI, but once they do it, it becomes just another app. If it can’t commiserate with you about a failed romance or discuss the finer points of Van Gogh appreciation, it’s just a stupid computer trick.

The definition of AI is at once a perpetually elusive target and a bar we’ve already cleared. Plenty of people are willing to call their products “artificial intelligence” because we’re in one of the “AI summers” and AI is once again a hot term, free from its past stigma. So anyone with a big if-then-else chain hidden in their code slaps an AI label on it and doubles the price. Needless to say, this devalues the term when it gets applied to every shape of business analytics program.
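
To make the gag concrete, here is a purely illustrative sketch of that kind of “AI”; the product, the rules, and the thresholds are all invented for the joke:

    # Purely illustrative: a hard-coded if-then-else chain dressed up as "AI".
    # The function name, rules, and thresholds are invented; no real product is implied.
    def enterprise_ai_engine(customer_spend: float, days_since_last_order: int) -> str:
        """An 'AI-powered' recommendation engine that is really a fixed rule chain."""
        if customer_spend > 10_000:
            return "offer premium support"
        elif days_since_last_order > 90:
            return "send a win-back coupon"
        elif days_since_last_order > 30:
            return "send a reminder email"
        else:
            return "do nothing"

    # Same logic as last year's version; double the price now that "AI" is on the box.
    print(enterprise_ai_engine(12_500, 10))  # offer premium support

Nothing here learns anything, which is exactly the point.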

Perhaps the point where hype and hope converge is on Artificial General Intelligence, i.e., where AI can have that midnight philosophy discussion or ask you how your date went. Until then… good luck.

 

Artificial Intelligence, Psychology, Science

Turing through the Looking Glass

I received the following question from a reader:

I’m in the section about Turing. I have enormous respect for him and think the Turing Test was way ahead of its time. That said, I think it is flawed.

It was defined at a time when human intelligence was considered the pinnacle of intelligence. Therefore it tested whether the AI had reached that pinnacle. However, if the AI (or alien) is far smarter, and maybe far different, it might very well fail the Turing Test. I’m picturing an AI having to dumb down and/or play-act its answers just to pass, similar to the Cynthia Clay example in your book.

I wonder if anyone has come up with a Turing-type test that focuses on intelligence, consciousness, compassion (?) and ability to learn, and not on being human-like?

This is a question that is far more critical than it may appear at first blush. There are several reasons why an AI might dumb down its answers, chief among them being self-preservation. I cite Martine Rothblatt in Crisis as pointing out that beings lacking human rights tend to get slaughtered (for instance, 100 million pigs a year in the USA). I think it more likely that at first AI intelligence will evolve along a path so alien to us that neither side will recognize that the other possesses consciousness for a considerable period.

Metrics for qualities other than traditional intelligence are invariably suffixed with “quotient” and generally measure things more associated with uniquely human traits, such as interpersonal intelligence (the emotional quotient) and intrapersonal intelligence.

I too have enormous respect for Turing; I am chuffed to have sat in his office, at his desk. A Turing Test is by definition a yardstick determined by whether one side thinks the other is human, so to ask for a variant that doesn’t gauge humanness would be like asking for a virgin Electric Iced Tea; nothing left to get juiced on. But if we’re looking for a way to tell whether an AI is intelligent without the cultural baggage, this question takes me back *cough* years, to when I was in British Mensa, the society for those who have high IQs (…and want to be in such a society). Two of the first fellow members I met were a quirky couple who shared that the man had failed the entrance test the first time but got in when he took the “culture-free” test, one which doesn’t have questions about cricket scoring or Greek philosophers.

He was referring to the Culture Fair test, which uses visual puzzles instead of verbal questions. That might be the best way we currently have to test an AI’s intelligence; I wrote a few days ago about how the physical world permeates every element of our language. An AI that had never had a body or physical-world experience would find just about every aspect of our language impenetrable. At some point an evolving artificial intellect would have no problem assimilating human culture, but it might have to have scaled some impressive cognitive heights first.

But what really catches my eye about your question is whether we can measure the compassion of an AI without waiting for it to evolve to the Turing level. It sounds too touchy-feely to be relevant, but one tweak – substitute “ethical” for “compassionate” – and we’re in critical territory. Right now we have to take ethics in AI seriously. The Office of Naval Research has a contract to study how to imbue autonomous armed drones with ethics. Automated machine guns in the Korean DMZ have the ability to take a surrender from a human. And what about self-driving cars and the Trolley Problem? As soon as we create an AI that can make decisions in trolley-like situations that have not been explicitly programmed into it, it is making those decisions according to some metaprogram… ethics by any standard. And we need, right now, some means of assuring ourselves of the quality of those ethics.
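
The distinction between explicit programming and a metaprogram is easy to sketch. The following is purely illustrative; the scenarios, weights, and names are invented and come from no real vehicle or weapons system. The point is that once a system ranks outcomes nobody enumerated case by case, the ranking function is an ethical position, whether or not anyone wrote it down as one:

    # Purely illustrative; the scenarios, weights, and names are invented.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str
        expected_harm_to_humans: float      # in made-up units
        expected_damage_to_property: float  # in made-up units

    def choose_action(outcomes: list[Outcome]) -> Outcome:
        """A 'metaprogram': ranks outcomes nobody listed case by case.
        The weights below encode a value judgement, i.e., an ethics."""
        def ethical_cost(o: Outcome) -> float:
            return 1000.0 * o.expected_harm_to_humans + o.expected_damage_to_property
        return min(outcomes, key=ethical_cost)

    # A trolley-like situation the programmers never explicitly anticipated:
    options = [
        Outcome("swerve into the barrier", 0.1, 50.0),
        Outcome("brake hard in lane", 0.4, 5.0),
    ]
    print(choose_action(options).description)  # "swerve into the barrier" under these weights

Change the weights and the “right” answer changes with them, which is why we need some way of auditing them.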

You may notice that this doesn’t provide a final answer to your final question. As far as I know, there isn’t one yet. But we need one.

Artificial Intelligence, Psychology

This is not a chair

Guy Claxton’s book Intelligence in the Flesh speaks to the role our bodies play in our cognition, and it’s rather more important than having trouble cogitating because your tummy’s growling:

…our brains are profoundly egocentric. They are not designed to see, hear, smell, taste and touch things as they are in themselves, but as they are in relation to our abilities and needs. What I’m really perceiving is not a chair, but a ‘for sitting’; not a ladder but a ‘for climbing’; not a greenhouse but a ‘for propagating’. Other animals, indeed other people, might extract from their worlds quite different sets of possibilities. To the cat, the chair is ‘for sleeping’, the ladder is ‘for claw sharpening’ and the greenhouse is ‘for mousing’.

This gets at a vast divide between humans and their AI successors. If you accept that an AI must, to function effectively in the human world, relate to the objects in that human world as humans do, then there is a whole universe of cognitive labels-without-language that humans employ to divide and categorize that world, learned throughout childhood as a result of exercising a body in contact with the world.

This assertion underpins some important philosophy. Roy Hornsby says that according to George Steiner, “Heidegger is saying that the notion of existential identity and that of world are completely wedded. To be at all is to be worldly. The everyday is the enveloping wholeness of being.” In other words, you can’t form an external perception of something you are immersed in. You likely do not notice it at all. We are immersed in the physical world of objects to which we ascribe identities. It is so obvious to us that it seems silly to point it out.

Of course that is a chair. But then why is it so hard for a computer to recognize all things chair-like? Because it’s stupid? No, because it’s never sat in one. Our subconscious chair-recognition algorithm includes the thought process, “What would happen if I tried to sit in that?” and that is what allows us to include all sorts of different shapes in the class of chair. And that is why getting computers to recognize chairs is such a fundamentally hard problem.

We might hope that chair recognition could be achieved through enough training, which would somehow embed the knowledge of “can this be sat in?” without our having to code it. It’s worked well enough for cats, although I would like to know whether such a system could classify a lion or tiger as feline as readily as a human can. But that training seems doomed to be restricted to the classes of images we give it, and might never make the cognitive leap, “Oh yes, I could sit on this table if I needed to,” because the system would still lack the context of a physical body that needs to sit. And our world is filled with objects that we relate to in terms of how we can use them. If we were forced to see them as irreducibly complex shapes, we might be so overwhelmed by the immensity of a cheese sandwich that it would never occur to us to eat it. Yet this is the nature of the world that any new AI will be thrust into. Babies navigate this world by narrowing their focus to a few elements: follow milk scent, suck. As each object in the world is classified by form and function, their focus opens a little wider.
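
A toy way to see that restriction (wholly illustrative; the labels and scores below are invented, not taken from any real system): a trained classifier can only ever answer in the vocabulary we gave it, and “can I sit on this?” is simply not in that vocabulary unless somebody put it there.

    # Wholly illustrative stand-in for an image classifier trained on a fixed label set.
    # The labels and the hard-coded scores are invented for the example.
    TRAINED_LABELS = ["chair", "table", "cat", "cheese sandwich"]

    def classify(image_path: str) -> dict[str, float]:
        """Pretend inference: returns one score per trained label and nothing else."""
        # A real model would compute these from pixels; we fake one plausible output.
        return dict(zip(TRAINED_LABELS, [0.03, 0.91, 0.01, 0.05]))

    scores = classify("photo_of_a_sturdy_low_table.jpg")
    print(max(scores, key=scores.get))  # "table"

    # The question a tired human actually asks ("could I sit on this if I needed to?")
    # lies outside the label set entirely; nothing in `scores` can answer it, because
    # no affordance like "sittable" was ever part of the training vocabulary.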

None of this matters in an Artificial Narrow Intelligence world, of course, where AIs never have to be-in-the-world. But the grand pursuit of Artificial General Intelligence will have to acknowledge the relationship of the human body to the objects of the world. One day, a robot is going to get tired, and it’ll need to figure out where it can sit.

Artificial Intelligence, Existential Risk, Science, Technology, The Singularity

Rebuttal to “The AI Misinformation Epidemic”

Anyone in a field of expertise can agree that the press doesn’t cover their field as accurately as they would like. Sometimes the inaccuracy falls within the limits of what a layperson can reasonably be expected to learn about a field in a thirty-minute interview; sometimes it’s rank sensationalism. Zachary Lipton, a PhD candidate at UCSD, takes aim at the press coverage of AI in The AI Misinformation Epidemic, and its followup, but uses a blunderbuss and takes out a lot of innocent bystanders.

He rants against media sensationalism and misinformation but provides no examples to debate other than a Vanity Fair article about Elon Musk. He goes after Kurzweil in particular but doesn’t say what specifically he disagrees with Kurzweil on. He says that Kurzweil’s date for the Singularity is made up, but Kurzweil has published his reasoning, and Lipton doesn’t say what data or assertions in that reasoning he disagrees with. He says the Singularity is a nebulous concept, which is bound to be true in the minds of many laypeople who have heard the term, but he references Kurzweil immediately adjacent to that assertion and yet doesn’t say what about Kurzweil’s vision is wrong or unfounded, dismissing it instead as “religion,” which apparently means he doesn’t have to be specific.

He says that there are many people making pronouncements in the field who are unqualified to do so but doesn’t name anyone aside from Kurzweil and Vanity Fair, nor does he say what credentials should be required to qualify someone. Kurzweil, having invented the optical character reader and music synthesizer, is not qualified?

He takes no discernible stand on the issue of AI safety, yet his scornful tone is readily interpreted as pooh-poohing not just utopianism but alarmism as well. That puts him at odds with Stephen Hawking, whose academic credentials are beyond question. Nick Bostrom also comes in for attack in the comments, for no specified reason but with the implication that since he makes AI dangers digestible for the masses he is therefore a sensationalist. Perhaps that is why I reacted so strongly to this article.

There is of course hype, but it is hard to tell exactly where. In 2015 an article claiming that a computer would beat the world champion Go player within a year would have been roundly dismissed as hype. In 2009 any article asserting that self-driving vehicles would be ready for public roads within five years would have been overreaching. Kurzweil has a good track record of predictions; they just tend to come true behind schedule. The point is, if an assertion about an existential threat turns out to be well founded but we ignore it because existential threats have always appeared over-dramatized, then it will be too late to say, “Oops, missed one.” We have to take this stuff seriously.