Guy Claxton’s book Intelligence in the Flesh speaks to the role our bodies play in our cognition, and it’s rather more important than having trouble cogitating because your tummy’s growling:
…our brains are profoundly egocentric. They are not designed to see, hear, smell, taste and touch things as they are in themselves, but as they are in relation to our abilities and needs. What I’m really perceiving is not a chair, but a ‘for sitting’; not a ladder but a ‘for climbing’; not a greenhouse but a ‘for propagating’. Other animals, indeed other people, might extract from their worlds quite different sets of possibilities. To the cat, the chair is ‘for sleeping’, the ladder is ‘for claw sharpening’ and the greenhouse is ‘for mousing’.
This gets at a vast divide between humans and their AI successors. If you accept that an AI must, to function effectively in the human world, relate to the objects of that world as humans do, then there is a whole universe of cognitive labels-without-language that humans employ to divide and categorize it, learned throughout childhood by exercising a body in contact with the world.
This assertion underpins some important philosophy. Roy Hornsby says that according to George Steiner, “Heidegger is saying that the notion of existential identity and that of world are completely wedded. To be at all is to be worldly. The everyday is the enveloping wholeness of being.” In other words, you can’t form an external perception of something you are immersed in. You likely do not notice it at all. We are immersed in the physical world of objects to which we ascribe identities. It is so obvious to us that it seems silly to point it out.
Of course that is a chair. But then why is it so hard for a computer to recognize all things chair-like? Because it’s stupid? No, because it’s never sat in one. Our subconscious chair-recognition algorithm includes the thought process, “What would happen if I tried to sit in that?” and that is what allows us to include all sorts of different shapes in the class of chair. This is why getting computers to recognize chairs is such a fundamentally hard problem.
We might find hope that chair recognition could be achieved through enough training, which would somehow embed the knowledge of “can this be sat in?” without our having to code it. It’s worked well enough for cats, although I would like to know whether that system could classify a lion or tiger as feline as readily as a human. But that training seems doomed to be restricted to each class of images we give it, and might never make the cognitive leap, “Oh yes, I could sit on this table if I needed to.” Because the system would still lack the context of a physical body that needs to sit, and our world is filled with objects that we relate to in terms of how we can use them. If we were forced to see them as irreducible complex shapes we might be so overwhelmed by the immensity of a cheese sandwich that it would never occur to us to eat it. Yet this is the nature of the world that any new AI will be thrust into. Babies navigate this world by narrowing their focus to a few elements: Follow milk scent, suck. As each object in the world is classified by form and function, their focus opens a little wider.
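The difference between the two approaches above can be sketched as a toy function. This is purely illustrative: the `WorldObject` type, its property names, and the thresholds are invented for this sketch, not drawn from any real perception system. The point is that an affordance test asks “what would happen if I tried to sit on this?” rather than “does this shape match the chair category?”, so a table can pass it too.

```python
from dataclasses import dataclass

@dataclass
class WorldObject:
    # Hypothetical physical properties an embodied agent might sense
    name: str
    surface_height_cm: float   # height of the top surface
    surface_is_flat: bool
    supports_body_weight: bool

def affords_sitting(obj: WorldObject) -> bool:
    """Affordance test: reason about the body's interaction with the
    object, not about whether its shape matches a learned category."""
    return (
        obj.surface_is_flat
        and obj.supports_body_weight
        and 20 <= obj.surface_height_cm <= 90  # roughly reachable by a seated human
    )

chair = WorldObject("chair", 45, True, True)
table = WorldObject("table", 75, True, True)
cheese_sandwich = WorldObject("cheese sandwich", 4, True, False)

print(affords_sitting(chair))            # True: a chair affords sitting
print(affords_sitting(table))            # True: so does a table, in a pinch
print(affords_sitting(cheese_sandwich))  # False: it affords eating instead
```

A shape-trained classifier would reject the table because it isn’t chair-shaped; the affordance test accepts it, which is exactly the cognitive leap the paragraph above describes.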
None of this matters in an Artificial Narrow Intelligence world, of course, where AIs never have to be-in-the-world. But the grand pursuit of Artificial General Intelligence will have to acknowledge the relationship of the human body to the objects of the world. One day, a robot is going to get tired, and it’ll need to figure out where it can sit.