Category: Psychology

Artificial Intelligence, Psychology, Science

Turing through the Looking Glass

I received the following question from a reader:

I’m in the section about Turing. I have enormous respect for him and think the Turing Test was way ahead of its time. That said, I think it is flawed.

It was defined at a time when human intelligence was considered the pinnacle of intelligence. Therefore it tested whether the AI had reached that pinnacle. However, if the AI (or alien) is far smarter, and maybe far different, it might very well fail the Turing test. I’m picturing an AI having to dumb down and/or play-act its answers just to pass the Turing test, similar to the Cynthia Clay example in your book.

I wonder if anyone has come up with a Turing-type test that focuses on intelligence, consciousness, compassion (?), and ability to learn, and not on being human-like?

This question is far more critical than it may appear at first blush. There are several reasons why an AI might dumb down its answers, chief among them being self-preservation. I cite Martine Rothblatt in Crisis as pointing out that beings lacking human rights tend to get slaughtered (for instance, 100 million pigs a year in the USA). I think it more likely that, at first, AI intelligence will evolve along a path so alien to us that neither side will recognize that the other possesses consciousness for a considerable period.

Metrics for qualities other than traditional intelligence are invariably suffixed with “quotient” and generally measure uniquely human traits, such as interpersonal intelligence (the “emotional quotient”) and intrapersonal intelligence.

I too have enormous respect for Turing; I am chuffed to have sat in his office, at his desk. A Turing Test is by definition a yardstick determined by whether one side thinks the other is human, so to ask for a variant that doesn’t gauge humanness would be like asking for a virgin Electric Iced Tea: nothing left to get juiced on. But if we’re looking for a way to tell whether an AI is intelligent without the cultural baggage, this question takes me back *cough* years, to when I was in British Mensa, the society for those who have high IQs (…and want to be in such a society). Two of the first fellow members I met were a quirky couple who shared that the man had failed the entrance test the first time but got in when he took the “culture-free” test, one that doesn’t have questions about cricket scoring or Greek philosophers.

He was referring to the Culture Fair test, which uses visual puzzles instead of verbal questions. That might be the best way we currently have to test an AI’s intelligence; I wrote a few days ago about how the physical world permeates every element of our language. An AI that had never had a body or physical-world experience would find just about every aspect of our language impenetrable. At some point an evolving artificial intellect would have problems assimilating human culture, but it might have to have scaled some impressive cognitive heights first.
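To make that concrete, here is a toy sketch (my invention, not an actual Culture Fair item) of what such a test reduces to: a pattern whose rule must be induced without any language or cultural knowledge. A minimal version in Python:

    # A toy "culture-free" test item, invented for illustration: a 3x3
    # matrix of counts whose last cell is missing. The rule (each row
    # counts up by one) must be induced from the pattern alone; no
    # words, no cricket scoring, no Greek philosophers.

    matrix = [
        [1, 2, 3],
        [2, 3, 4],
        [3, 4, None],   # the test-taker must supply this cell
    ]

    def solve(matrix):
        """Induce the step from the complete rows and apply it to the gap."""
        step = matrix[0][1] - matrix[0][0]   # equals 1 in every complete row
        return matrix[2][1] + step

    print(solve(matrix))  # -> 5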

But what really catches my eye about your question is whether we can measure the compassion of an AI without waiting for it to evolve to the Turing level. It sounds too touchy-feely to be relevant, but make one tweak – substitute “ethical” for “compassionate” – and we’re in critical territory. Right now we have to take ethics in AI seriously. The Office of Naval Research has a contract to study how to imbue autonomous armed drones with ethics. Automated machine guns in the Korean DMZ can accept a surrender from a human. And what about self-driving cars and the Trolley Problem? As soon as we create an AI that can make decisions in trolley-like situations that have not been explicitly programmed into it, it is making those decisions according to some metaprogram: ethics, by any standard. And we need, right now, some means of assuring ourselves of the quality of those ethics.
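What might such a metaprogram look like? Here is a minimal sketch, assuming a crude utilitarian cost function; every name and number in it is an invented placeholder, and choosing those numbers is precisely the ethics problem:

    # Sketch of a trolley-style "metaprogram": score each available action
    # by probability-weighted harm and pick the least bad. The outcomes
    # and severities are invented placeholders; the ethics live entirely
    # in how we assign them.

    def expected_harm(action):
        """Sum of probability * severity over an action's possible outcomes."""
        return sum(p * severity for p, severity in action["outcomes"])

    def choose_action(actions):
        """Pick the action with the lowest expected harm."""
        return min(actions, key=expected_harm)

    actions = [
        {"name": "stay in lane", "outcomes": [(0.9, 5.0)]},  # likely harm ahead
        {"name": "swerve right", "outcomes": [(0.3, 8.0)]},  # rarer, worse crash
    ]

    print(choose_action(actions)["name"])  # -> "swerve right" with these numbers

Nothing in the code is controversial; everything in the numbers is.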

You may notice that this doesn’t provide a final answer to your final question. As far as I know, there isn’t one yet. But we need one.

Artificial Intelligence, Psychology

This is not a chair

Guy Claxton’s book Intelligence in the Flesh speaks to the role our bodies play in our cognition, a role rather more important than having trouble cogitating because your tummy’s growling:

…our brains are profoundly egocentric. They are not designed to see, hear, smell, taste and touch things as they are in themselves, but as they are in relation to our abilities and needs. What I’m really perceiving is not a chair, but a ‘for sitting’; not a ladder but a ‘for climbing’; not a greenhouse but a ‘for propagating’. Other animals, indeed other people, might extract from their worlds quite different sets of possibilities. To the cat, the chair is ‘for sleeping’, the ladder is ‘for claw sharpening’ and the greenhouse is ‘for mousing’.

This gets at a vast divide between humans and their AI successors. If you accept that an AI must, to function effectively in the human world, relate to the objects in that human world as humans do, then it must somehow acquire the whole universe of cognitive labels-without-language that humans employ to divide and categorize that world, labels learned throughout childhood by exercising a body in contact with the world.

This assertion underpins some important philosophy. Roy Hornsby says that according to George Steiner, “Heidegger is saying that the notion of existential identity and that of world are completely wedded. To be at all is to be worldly. The everyday is the enveloping wholeness of being.” In other words, you can’t form an external perception of something you are immersed in; you likely do not notice it at all. We are immersed in the physical world of objects to which we ascribe identities. It is all so obvious to us that it seems silly to point out.

Of course that is a chair. But then why is it so hard for a computer to recognize all things chair-like? Because it’s stupid? No; because it’s never sat in one. Our subconscious chair-recognition algorithm includes the thought process, “What would happen if I tried to sit in that?”, and that is what allows us to include all sorts of different shapes in the class of chair. And this is why getting computers to recognize chairs is such a fundamentally hard problem.

We might find hope that chair recognition could be achieved through enough training, which would somehow embed the knowledge of “can this be sat in?” without our having to code it. It’s worked well enough for cats, although I would like to know whether such a system could classify a lion or tiger as feline as readily as a human can. But that training seems doomed to be restricted to each class of images we give it, and might never make the cognitive leap, “Oh yes, I could sit on this table if I needed to.” The system would still lack the context of a physical body that needs to sit, and our world is filled with objects that we relate to in terms of how we can use them. If we were forced to see them as irreducibly complex shapes, we might be so overwhelmed by the immensity of a cheese sandwich that it would never occur to us to eat it. Yet this is the nature of the world that any new AI will be thrust into. Babies navigate this world by narrowing their focus to a few elements: follow milk scent, suck. As each object in the world is classified by form and function, their focus opens a little wider.
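As a sketch of the gap (my toy example, not anything from Claxton): a label-based recognizer can only echo its training categories, while an affordance-based one asks the body’s question, “what would happen if I tried to sit on that?” All objects and thresholds below are invented:

    # Sketch: shape labels vs. affordances. Objects and thresholds are
    # invented for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Obj:
        name: str
        label: str             # what a shape classifier was trained to output
        surface_height_m: float
        supports_weight: bool

    def is_chair_by_label(obj: Obj) -> bool:
        """Label-based: only things tagged 'chair' count."""
        return obj.label == "chair"

    def can_be_sat_on(obj: Obj) -> bool:
        """Affordance-based: would sitting on this work for a human body?"""
        return obj.supports_weight and 0.3 <= obj.surface_height_m <= 1.0

    table = Obj("kitchen table", "table", 0.75, True)
    print(is_chair_by_label(table))  # False: the label never budges
    print(can_be_sat_on(table))      # True: the affordance test makes the leap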

None of this matters in an Artificial Narrow Intelligence world, of course, where AIs never have to be-in-the-world. But the grand pursuit of Artificial General Intelligence will have to acknowledge the relationship of the human body to the objects of the world. One day, a robot is going to get tired, and it’ll need to figure out where it can sit.

Employment, Existential Risk, Psychology, The Singularity

Existential risk and coaching: A Manifesto

My article in the November 2016 issue of Coaching World brought an email from Pierre Dussault, who has been writing about many of the same issues that I covered in Crisis of Control. His thoughtful manifesto calls on the International Coaching Federation to extend the reach and capabilities of the coaching profession so that coaching’s effect on individual consciousness can make a global impact. I urge you to read it here.

Bioterrorism, Employment, Existential Risk, Politics, Psychology

Crisis of Control: The Book

The first book in the Human Cusp series has just been published: Crisis of Control: How Artificial Superintelligences May Destroy or Save the Human Race. The paperback will be available within two weeks.

Many thanks to my reviewers, my friends, and especially my publisher, Jim Gifford, who has made this book so beautiful. As a vehicle for delivering my message, I could not have asked for more.

Artificial Intelligence, Psychology

Kids Get It

Today I was giving a talk on space exploration to the eighth-grade class at my daughter’s school. Their theme for this period is ‘Identity,’ so we did some discovery questions about the identities of planets and stars. Then, because so much space exploration is about looking for life, I asked them about the identity of life. We got it down to the usual answers, like eating and pooping and reproducing. Then I said, “I see no one suggested ‘intelligence.’ Can we have life without intelligence?” It was decided that we could.

Then I asked, “Can we have intelligence without life?” There was immediate agreement and vigorous nodding.  I did a double take, and one of them helpfully explained: “AI.”  I recovered and remarked that that was not an answer I would have gotten twenty years ago.

Tomorrow’s adults have a good idea what’s coming.

Psychology

Healing Mental Trauma Stemming From Childhood

Good news about ways of healing mental trauma stemming from childhood. (Also, I’m not sure how researchers evaluate adoring rat mothers, but I assume they have their methods.)

“Studies of rat pups show enduring changes in DNA and behavior depending on whether they were raised by high or low nurturing mothers (largely measured by how much the mothers licked their pups).

“As early as the first week of life, the offspring of less nurturing mothers were more fearful and reactive to stress, and their DNA contained more methyl groups, which tend to inhibit gene expression. In other words, parenting style permanently changed their DNA — a striking example of nurture over nature.

“The researchers found they could reverse the effects of maternal deprivation by giving the rats an HDAC inhibitor called trichostatin — which removes some of the methyl tags on DNA — when they were adults. Almost magically, these anxious rats now looked and acted just like the pups of adoring mothers.

“The implication is that the harmful effects of early life experiences on gene expression are potentially reversible much later in life.”

Source: “Return to the Teenage Brain”