Artificial Intelligence, Psychology, Science

Turing through the Looking Glass

I received the following question from a reader:

I’m in the section about Turing.  I have enormous respect for him and think the Turing Test was way ahead of its time.  That said I think it is flawed.

It was defined at a time when human intelligence was considered the pinnacle of intelligence. Therefore it tested whether the AI had reached that pinnacle. However, if the AI (or alien) is far smarter and maybe far different, it might very well fail the Turing test. I’m picturing an AI having to dumb down and/or play-act its answers just to pass the Turing test, similar to the Cynthia Clay example in your book.

I wonder if anyone has come up with a Turing type test that maybe focuses on intelligence, consciousness, compassion (?) and ability to learn and not on being human like?

This is a question that is far more critical than it may appear at first blush. There are several reasons why an AI might dumb down its answers, chief among them being self-preservation. I cite Martine Rothblatt in Crisis as pointing out that beings lacking human rights tend to get slaughtered (for instance, 100 million pigs a year in the USA). I think it more likely that at first AI intelligence will evolve along a path so alien to us that neither side will recognize that the other possesses consciousness for a considerable period.

Metrics for qualities other than traditional intelligence are invariably suffixed with “quotient” and generally measure traits we think of as uniquely human, such as interpersonal intelligence (the emotional quotient) and intrapersonal intelligence.

I too have enormous respect for Turing; I am chuffed to have sat in his office, at his desk. A Turing Test is by definition a yardstick determined by whether one side thinks the other is human, so to ask for a variant that doesn’t gauge humanness would be like asking for a virgin Electric Iced Tea; nothing left to get juiced on. But if we’re looking for a way to tell whether an AI is intelligent without the cultural baggage, this question takes me back *cough* years, to when I was in British Mensa, the society for those who have high IQs (…and want to be in such a society). Two of the first fellow members I met were a quirky couple who shared that the man had failed the entrance test the first time but got in when he took the “culture-free” test, one that doesn’t have questions about cricket scoring or Greek philosophers.

He was referring to the Culture Fair test, which uses visual puzzles instead of verbal questions. That might be the best way we currently have to test an AI’s intelligence; I wrote a few days ago about how the physical world permeates every element of our language. An AI that had never had a body or physical-world experience would find just about every aspect of our language impenetrable. At some point an evolving artificial intellect would have problems assimilating human culture, but it might have to have scaled some impressive cognitive heights first.

But what really catches my eye about your question is whether we can measure the compassion of an AI without waiting for it to evolve to the Turing level. It sounds like it’s too touchy-feely to be relevant, but one tweak – substitute ethical for compassionate – and we’re in critical territory. Right now we have to take ethics in AI seriously. The Office of Naval Research has a contract to study how to imbue autonomous armed drones with ethics. Automated machine guns in the Korean DMZ have the ability to take a surrender from a human. And what about self-driving cars and the Trolley Problem? As soon as we create an AI that can make decisions in trolley-like situations that have not been explicitly programmed into it, it is making those decisions according to some metaprogram… ethics by any standard. And we need, right now, some means of assuring ourselves of the quality of those ethics.
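To make “some metaprogram” concrete, here is a minimal sketch of what such a decision rule could look like. Everything in it – the maneuver names, the harm estimates, the weights – is a hypothetical illustration under my own assumptions, not anyone’s actual system.

```python
# A minimal, hypothetical sketch of an ethics "metaprogram" for trolley-like
# choices. Maneuver names, harm estimates, and weights are illustrative
# assumptions, not any real vehicle's decision logic.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    harm_to_occupants: float     # estimated harm, 0.0 (none) to 1.0 (fatal)
    harm_to_bystanders: float
    breaks_traffic_law: bool

def ethical_cost(m, occupant_weight=1.0, bystander_weight=1.0, law_penalty=0.1):
    """Collapse harms and rule-breaking into one number.
    The weights *are* the ethics: change them and the car chooses differently."""
    cost = occupant_weight * m.harm_to_occupants
    cost += bystander_weight * m.harm_to_bystanders
    if m.breaks_traffic_law:
        cost += law_penalty
    return cost

def choose(maneuvers):
    return min(maneuvers, key=ethical_cost)

options = [
    Maneuver("brake hard in lane", 0.3, 0.1, False),
    Maneuver("swerve onto shoulder", 0.1, 0.4, True),
]
print(choose(options).name)   # "brake hard in lane" with these weights
```

The uncomfortable part is that the weights are the ethics: whoever sets occupant_weight against bystander_weight is making the moral call, which is exactly why we need a way to inspect and certify those numbers.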

You may notice that this doesn’t provide a final answer to your final question. As far as I know, there isn’t one yet. But we need one.

Artificial IntelligencePsychology

This is not a chair

Guy Claxton’s book Intelligence in the Flesh speaks to the role our bodies play in our cognition, and it’s rather more important than having trouble cogitating because your tummy’s growling:

…our brains are profoundly egocentric. They are not designed to see, hear, smell, taste and touch things as they are in themselves, but as they are in relation to our abilities and needs. What I’m really perceiving is not a chair, but a ‘for sitting’; not a ladder but a ‘for climbing’; not a greenhouse but a ‘for propagating’. Other animals, indeed other people, might extract from their worlds quite different sets of possibilities. To the cat, the chair is ‘for sleeping’, the ladder is ‘for claw sharpening’ and the greenhouse is ‘for mousing’.

This gets at a vast divide between humans and their AI successors. If you accept that an AI must, to function effectively in the human world, relate to the objects in that human world as humans do, then there is a whole universe of cognitive labels-without-language that humans employ to divide and categorize that world, learned throughout childhood as a result of exercising a body in contact with the world.

This assertion underpins some important philosophy. Roy Hornsby says that according to George Steiner, “Heidegger is saying that the notion of existential identity and that of world are completely wedded. To be at all is to be worldly. The everyday is the enveloping wholeness of being.” In other words, you can’t form an external perception of something you are immersed in. You likely do not notice it at all. We are immersed in the physical world of objects to which we ascribe identities. It is so obvious to us that it seems silly to point it out.

Of course that is a chair. But then why is it so hard for a computer to recognize all things chair-like? Because it’s stupid? No, because it’s never sat in one. Our subconscious chair-recognition algorithm includes the thought process, “What would happen if I tried to sit in that?” and that is what allows us to include all sorts of different shapes in the class of chair. And that is why getting computers to recognize chairs is such a fundamentally hard problem.
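To make that “what would happen if I tried to sit in that?” check concrete, here is a toy contrast between pure shape matching and an affordance test. It is a thought experiment under invented assumptions (the object properties and thresholds are made up), not a description of any real vision system.

```python
# Toy contrast: shape matching vs. an affordance check
# ("what would happen if I tried to sit on that?").
# All properties and thresholds are made-up assumptions.
from dataclasses import dataclass

@dataclass
class ObjectModel:
    name: str
    has_flat_surface: bool
    surface_height_cm: float
    supports_kg: float
    looks_like_known_chair: bool   # what an image classifier alone would report

def is_chair_by_shape(obj):
    # Brittle: only recognizes things resembling chairs in the training set.
    return obj.looks_like_known_chair

def is_sittable(obj, body_mass_kg=80.0):
    # Affordance test: a flat surface at roughly sitting height that holds my weight.
    return (obj.has_flat_surface
            and 30.0 <= obj.surface_height_cm <= 80.0
            and obj.supports_kg >= body_mass_kg)

for obj in (
    ObjectModel("office chair", True, 48.0, 120.0, True),
    ObjectModel("kitchen table", True, 75.0, 150.0, False),
):
    print(obj.name, "chair-by-shape:", is_chair_by_shape(obj),
          "sittable:", is_sittable(obj))
```

The shape matcher can only echo its training set; the affordance check happily admits the table, which is exactly the cognitive leap discussed below.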

We might find hope that chair recognition could be achieved through enough training, which would somehow embed the knowledge of “can this be sat in?” without our having to code it. It has worked well enough for cats, although I would like to know whether such a system could classify a lion or tiger as feline as readily as a human can. But that training seems doomed to be restricted to each class of images we give it, and might never make the cognitive leap, “Oh yes, I could sit on this table if I needed to.” The system would still lack the context of a physical body that needs to sit, and our world is filled with objects that we relate to in terms of how we can use them. If we were forced to see them as irreducible, complex shapes, we might be so overwhelmed by the immensity of a cheese sandwich that it would never occur to us to eat it. Yet this is the nature of the world that any new AI will be thrust into. Babies navigate this world by narrowing their focus to a few elements: follow the milk scent, suck. As each object in the world is classified by form and function, their focus opens a little wider.

None of this matters in an Artificial Narrow Intelligence world, of course, where AIs never have to be-in-the-world. But the grand pursuit of Artificial General Intelligence will have to acknowledge the relationship of the human body to the objects of the world. One day, a robot is going to get tired, and it’ll need to figure out where it can sit.

Artificial Intelligence, Existential Risk, Science, Technology, The Singularity

Rebuttal to “The AI Misinformation Epidemic”

Anyone in a field of expertise can agree that the press doesn’t cover them as accurately as they would like. Sometimes that falls within the limits of what a layperson can reasonably be expected to learn about a field in a thirty-minute interview; sometimes it’s rank sensationalism. Zachary Lipton, a PhD candidate at UCSD, takes aim at the press coverage of AI in The AI Misinformation Epidemic, and its follow-up, but uses a blunderbuss and takes out a lot of innocent bystanders.

He rants against media sensationalism and misinformation but provides no examples to debate other than a Vanity Fair article about Elon Musk. He goes after Kurzweil in particular but doesn’t say what specifically he disagrees with Kurzweil on. He says that Kurzweil’s date for the Singularity is made up, but Kurzweil has published his reasoning, and Lipton doesn’t say what data or assertions in that reasoning he disagrees with. He says the Singularity is a nebulous concept, which is bound to be true in the minds of many laypeople who have heard the term, but he references Kurzweil immediately adjacent to that assertion and yet doesn’t say what about Kurzweil’s vision is wrong or unfounded, dismissing it instead as “religion,” which apparently means he doesn’t have to be specific.

He says that there are many people making pronouncements in the field who are unqualified to do so but doesn’t name anyone aside from Kurzweil and Vanity Fair, nor does he say what credentials should be required to qualify someone. Kurzweil, having invented the optical character reader and music synthesizer, is not qualified?

He takes no discernible stand on the issue of AI safety, yet his scornful tone is readily interpreted as pooh-poohing not just utopianism but alarmism as well. That puts him at odds with Stephen Hawking, whose academic credentials are beyond question. Nick Bostrom also comes in for attack in the comments, for no specified reason but with the implication that since he makes AI dangers digestible for the masses he is therefore sensationalist. Perhaps that is why I reacted so strongly to this article.

There is of course hype, but it is hard to tell exactly where. In 2015, an article claiming that a computer would beat the world champion Go player within a year would have been roundly dismissed as hype. In 2009, any article asserting that self-driving vehicles would be ready for public roads within five years would have been overreaching. Kurzweil has a good track record of predictions; they just tend to come true behind schedule. The point is, if an assertion about an existential threat turns out to be well founded but we ignore it because existential threats have always appeared over-dramatized, then it will be too late to say, “Oops, missed one.” We have to take this stuff seriously.

Artificial Intelligence, Technology

Self-driving cars still 25 years off in the USA? No.

Bill Gurley argues on CNBC that we are 25 years away from autonomous vehicle market penetration in the USA because we’re too litigation-hungry. He concludes that AVs will instead take hold in a country like China, which has relatively uncrowded roads and an authoritarian government that can make central planning decisions.

I don’t agree. Precisely because of rampant litigation in the USA, insurers are going to do the cold, hard math (like they always do), and realize that AVs will save a passel of lives and hence be good for their book. They will therefore indemnify manufacturers or otherwise shield them from opportunistic lawsuits launched in the inevitable few cases where the cars are apparently at fault. Money will smooth the path to AV adoption.
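As a back-of-the-envelope illustration of that cold, hard math – every number here is an assumption I made up for the example, not industry data:

```python
# Hypothetical insurer arithmetic; all figures are invented for illustration.
human_crash_rate_per_million_miles = 4.2   # assumed claim-generating crashes
av_crash_rate_per_million_miles = 1.0      # assumed, if AVs prove safer
avg_claim_cost = 15_000                    # assumed average payout, dollars
miles_insured_millions = 100               # size of the insurer's book

human_cost = human_crash_rate_per_million_miles * miles_insured_millions * avg_claim_cost
av_cost = av_crash_rate_per_million_miles * miles_insured_millions * avg_claim_cost

print(f"Expected claims, human drivers: ${human_cost:,.0f}")
print(f"Expected claims, AVs:           ${av_cost:,.0f}")
print(f"Margin available to indemnify manufacturers: ${human_cost - av_cost:,.0f}")
```

If the gap looks anything like that, an insurer can afford to shield manufacturers from the occasional lawsuit and still come out well ahead.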

He also says:

The part we haven’t figured out yet, the last 3 percent, which is snow, rain, all the really, really hard stuff — it really is hard.  They have done all the easy stuff.

While I would agree that there are still some really, really hard things to work out in AVs, rain and snow aren’t among them. Sensors like radar can penetrate that stuff far more effectively than human eyesight.  Even pattern recognition in the optical spectrum could outperform humans.

The hard part is getting the cars to know when they can break the rules. A recent viral posting about how to trap AVs hints at that. When a trash truck is taking up your lane making stops and you need to cross a double yellow to get around it, will an AV be smart enough to do that? Sure, it can just sit there and let the human take manual control, but that doesn’t get us to the Uber-utopia of cars making their way unmanned around the city to their next pickup.
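For flavor, here is a hypothetical sketch of the kind of rule-override check such a planner would need; the conditions and thresholds are assumptions for illustration, not any vendor’s logic.

```python
# Hypothetical rule-override check: when may the planner cross a double yellow?
# Conditions and thresholds are illustrative assumptions.
def may_cross_double_yellow(lane_blocked, blockage_duration_s, oncoming_gap_s,
                            min_gap_s=12.0, patience_s=30.0):
    """Break the letter of the rule only when staying put is clearly worse."""
    if not lane_blocked:
        return False                      # no reason to leave the lane
    if blockage_duration_s < patience_s:
        return False                      # wait; the truck may move on
    return oncoming_gap_s >= min_gap_s    # only with a safe gap in oncoming traffic

# A trash truck stopped for 45 s and a 20 s gap in oncoming traffic:
print(may_cross_double_yellow(True, 45.0, 20.0))   # True under these assumptions
```

Encoding when not to follow the rules is where the judgment lives, and it is much harder to validate than lane-keeping.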

Artificial Intelligence, Employment

How to Prepare Your Career for Automation

This excellent article by Sam DeBrule explores how to survive the coming changes:

To position oneself to be augmented, rather than replaced by AI, one should embrace the benefits of AI enabled technology and invest in the “soft” skills that will empower her to stand out as an adaptable, personable and multi-faceted employee.

These skills are the higher-order reasoning that AI is not yet close to emulating: creative thinking, emotional intelligence, problem-solving. These are excellent arguments for people on the higher end of the IQ scale. But what does the future hold for those who are not as intellectually equipped?

Artificial Intelligence, Existential Risk, Science, Technology

Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse

Vanity Fair describes a meeting between Elon Musk and Demis Hassabis, a leading creator of advanced artificial intelligence, which likely propelled Musk’s alarm about AI:

Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).

The article, while mostly about Musk, is replete with Crisis of Control tropes that are now playing out in the real world far sooner than even I had thought likely. Musk favors opening up AI development and getting to super-AI before government or “tech elites” do – even when the elites are Google or Facebook.

Uncategorized

The Real Story: AI Reading Scientific Papers

I predicted that mass correlation of scientific papers by AI would happen much sooner than the 20 years that some in the field think it will take. Now we read that, in the course of a Watson project:

Machine learning software on a laptop can extract the critical information from scientific papers in seconds, enabling the creation of vast knowledge graphs across wide bodies of research in weeks rather than decades.
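For a sense of what extracting that information can mean at the very simplest level, here is a toy sketch that turns sentences into knowledge-graph edges. The patterns and example sentences are invented; the Watson tooling referred to above is of course far more sophisticated.

```python
# Toy relation extraction: sentences -> knowledge-graph edges.
# Patterns and example sentences are invented for illustration.
import re
from collections import defaultdict

RELATION_PATTERNS = [
    (r"(?P<a>[\w-]+) inhibits (?P<b>[\w-]+)", "inhibits"),
    (r"(?P<a>[\w-]+) activates (?P<b>[\w-]+)", "activates"),
    (r"(?P<a>[\w-]+) binds (?P<b>[\w-]+)", "binds"),
]

def extract_edges(sentences):
    graph = defaultdict(set)              # entity -> {(relation, entity), ...}
    for s in sentences:
        for pattern, relation in RELATION_PATTERNS:
            for m in re.finditer(pattern, s, flags=re.IGNORECASE):
                graph[m.group("a")].add((relation, m.group("b")))
    return graph

abstracts = [
    "GeneX inhibits ProteinY in mouse models.",
    "ProteinY activates PathwayZ under stress conditions.",
]
for entity, edges in extract_edges(abstracts).items():
    for relation, target in sorted(edges):
        print(f"{entity} --{relation}--> {target}")
```

Scale pattern-matching like this up with real language models and millions of papers, and the weeks-instead-of-decades claim starts to look plausible.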

Artificial Intelligence, Existential Risk, Science

Preparing for the Biggest Change

The repercussions of the January Asilomar Principles meeting continue to reverberate:

Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

[…]

As AI professor Roman Yampolskiy told me, “Design of human-level AI will be the most impactful event in the history of humankind. It is impossible to over-prepare for it.”

This is about effects versus probability. It evokes what I said in Crisis of Control: that the risk of being killed by an asteroid impact is roughly the same as that of dying in a plane crash, once you multiply the probability of the event happening by the number of people it kills. Advanced AI could affect us more profoundly than climate change, could require even longer to prepare for, and could happen sooner. All that adds up to taking this seriously, starting now.
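The arithmetic behind that comparison is just expected value; the figures below are arbitrary round numbers chosen so the two sides come out equal, not real asteroid or aviation statistics.

```latex
% Expected deaths per year = probability of the event times people killed.
% The numbers are illustrative assumptions only.
\[
  \mathbb{E}[\text{deaths/yr}] \;=\; P(\text{event in a year}) \times N_{\text{killed}}
\]
\[
  \underbrace{10^{-6} \times 10^{8}}_{\text{rare, catastrophic}}
  \;=\; 10^{2} \;=\;
  \underbrace{10^{-2} \times 10^{4}}_{\text{frequent, limited}}
\]
```

A once-in-a-million-years event that kills a hundred million people carries the same expected toll as a once-a-century event that kills ten thousand, which is why low-probability, high-impact risks deserve the same seriousness as familiar ones.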

Artificial Intelligence, Existential Risk

2027: Ethicists Lose Battle with Omnipotent AI

The UK’s Guardian produced this little set piece that neatly summarizes many of the issues surrounding AI-as-existential-threat. The smug ethicist brought in to teach a blossoming AI is more interested in defending human exceptionalism (and the “Chinese Room” argument), but is eventually backed into a corner, stating that “You can’t rely on humanity to provide a model for humanity. That goes without saying.” Meanwhile the AI is bent on proving the “hard take-off” hypothesis…