Wednesday, 25 March 2015

Me, Myself and iCub - The Robot with a Self

From New Scientist 


The human self has five components. Machines now have three of them. How far away is artificial consciousness – and what does it tell us about ourselves?







What is the self? René Descartes encapsulated one idea of it in the 1600s when he wrote: "I think, therefore I am". He saw his self as a constant, the essence of his being, on which his knowledge of everything else was built. Others have very different views. Writing a century later, David Hume argued that there was no "simple and continued" self, just the flow of experience. Hume's proposal resonates with the Buddhist concept of anatta, or non-self, which holds that the idea of an unchanging self is an illusion, and at the root of much of our unhappiness.

Today, a growing number of philosophers and psychologists hold that the self is an illusion. But even if the centuries-old idea of it as essential and unchanging is misleading, there is still much to explain, for example: how you distinguish your body from the rest of the world; why you experience the world from a specific perspective, typically somewhere in the middle of your head; how you remember yourself in the past or imagine yourself in the future; and how you are able to conceive of the world from another's point of view. I believe that science is close to answering many of these questions.

Can we learn about being human from a robot? 

A key insight is that the self should be considered not as an essence, but as a set of processes – a process being a virtual machine running inside a physical one, as when a program runs on a computer. Likewise, some patterns of brain activity constitute processes that generate the human self. This fits with Hume's intuition that if you stop thinking, the self vanishes. For instance, when you fall asleep, "you", the entity brought into being by a set of active brain processes, cease to exist. However, when you awake, those same processes pick up much where they left off, providing subjective continuity.

The idea that the self emerges from a set of processes has encouraged my colleagues and me to believe we can recreate it in a robot. By deconstructing it and then attempting to build it up again piece by piece, we are learning more about what selfhood is. This is an ongoing collaboration with researchers in several European institutes, and admittedly we still have a way to go. But I'm confident we can create an artificial self, or at least as much of one as would be wise. We believe our work will help resolve the mystery at the heart of self – that it feels compellingly real, yet, when examined closely, seems to dissolve away.

iCub

Meet iCub, the state-of-the-art humanoid robot in which we are creating this sense of self. This bot has vision, hearing, touch and a proprioceptive sense that allows it to coordinate its 53 joints. It can speak and interact with its world, and it improves its performance by learning. There are currently 30 such robots in research labs around the world. At Sheffield Robotics, our iCub has a control system modelled on the brain, so that it "thinks" in ways similar to you and me. For the past four years, we have been working to give this robot a sense of self.

We first had to consider exactly how we might deconstruct selfhood in order to build it in a machine. Philosophy, psychology and neuroscience give many insights into what constitutes the human self, and how to recognise and measure aspects of it in adults, infants and even animals. Our attempt begins with psychology, but can also be mapped on to a growing understanding of how the psychological self emerges from brain activity.

William James, a founder of modern psychology, suggested that the self can be divided into "I" and "me" – the former comprising the experience of being a self, the latter the set of ideas you have about your self. In the 1990s, psychologist Ulric Neisser, a pioneer of modern cognitive psychology, went further. He identified five key aspects of self: the ecological or physically situated self, the interpersonal self, the temporally extended self, the conceptual self and the private self (see "Aspects of self"). Neisser's analysis is not the final word, but it is grounded in an understanding of human cognitive development, whereas classical philosophical views such as those of Hume and Descartes were not. It has also provided useful clues about what might be required to build up an artificial self, process-by-process.

How have we gone about creating these processes for our robot? We use an approach called neurorobotics, which means we incorporate knowledge about how real brains work into our programming. So our iCub's control system is designed to emulate key processes found in the mammalian brain. The interactions between these processes are governed by an architecture called distributed adaptive control, developed by my colleague Paul Verschure at the Catalan Institution for Research and Advanced Studies in Barcelona, Spain, and itself modelled on the cognitive organisation of the brain.

Motor babbling

Now, say we want to start building a process that emulates the human ecological self. Key to this is an awareness of one's body and how it interacts with the world. For iCub to achieve this, it needs an internal "body schema" – a process that maintains a model of its physical parts and its current body pose. Rather than programming the body schema directly, as other roboticists might do, we have given iCub the capacity to work it out. It learns by generating small, random movements and observing the consequences these have. Human babies show a similar kind of exploratory behaviour – termed motor babbling – in the womb and in early infancy, suggesting that people learn about their bodies in much the same way.
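The learning loop just described – babble, observe, remember, then invert the mapping to reach a goal – can be sketched in a few lines. This is a toy illustration only, assuming a two-joint planar arm and a nearest-neighbour lookup in place of iCub's actual body schema; all names and numbers here are invented for the example.

```python
import math
import random

# Toy two-joint planar arm. Illustrative only: the real iCub has 53 joints
# and learns a far richer body model than this lookup table.
L1, L2 = 1.0, 0.8  # link lengths (arbitrary units)

def hand_position(a1, a2):
    """Where the hand ends up for shoulder angle a1 and elbow angle a2."""
    x = L1 * math.cos(a1) + L2 * math.cos(a1 + a2)
    y = L1 * math.sin(a1) + L2 * math.sin(a1 + a2)
    return (x, y)

# "Motor babbling": issue random joint commands and record their outcomes.
random.seed(0)
experience = []  # list of ((a1, a2), (x, y)) pairs -- a crude body schema
for _ in range(2000):
    a1 = random.uniform(0, math.pi)
    a2 = random.uniform(0, math.pi)
    experience.append(((a1, a2), hand_position(a1, a2)))

def reach(target):
    """Reach a target by recalling the babbled command whose outcome was closest."""
    (a1, a2), _ = min(experience, key=lambda e: math.dist(e[1], target))
    return (a1, a2)

a1, a2 = reach((0.5, 1.2))
err = math.dist(hand_position(a1, a2), (0.5, 1.2))
print(f"reached within {err:.3f} of the target")
```

Even this caricature shows the key property: the robot is never told its own kinematics – the mapping from commands to outcomes is discovered by exploration, and then reused to act on purpose.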

Using this approach, Giorgio Metta and colleagues at the Italian Institute of Technology in Genoa, Italy, are training our iCub to distinguish self from other, a fundamental aspect of the ecological self. The motor babbling program also allows the robot to learn how to achieve a specific target pose. Combining this body model with knowledge of objects and surfaces nearby enables iCub to move around without colliding with things.

Then there's the temporally extended self. Insights about this can be found in the case of a man we will call N.N., who lost the ability to form long-term memories after an accident in the 1980s. The damage to his brain also left him completely without foresight. He described trying to imagine his future as "like swimming in the middle of a lake. There is nothing there to hold you up or do anything with." In losing his past, N.N. had also lost his future. His ecological self remained intact, but it had become marooned in the present.

This concept of time also poses a problem for our robot. Although we can channel all its sensory input into a hard drive, iCub must also be able to decide how best to use the information to make sense of the present. Peter Dominey and his group at the French Institute of Health and Medical Research in Lyon have addressed this problem. They have encoded iCub's interactions with objects and people in a way that allows it to see their relevance to present situations more clearly. However, this model uses standard computing techniques, so we are working with them to create a neurorobotic version. It will directly emulate processing in brain areas, such as the hippocampus, that are known to play a role in creating human autobiographical memory.
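The underlying principle – retrieve past episodes by their relevance to the present – can be caricatured with a toy memory that ranks stored interactions by simple feature overlap, where the real system models hippocampal processing. The episodes and features below are invented for the example.

```python
# Toy episodic memory: store past interactions, then recall the ones most
# relevant to the current situation. A stand-in for the idea only -- the
# actual model is far richer than keyword overlap.
episodes = []  # each entry: (set of context features, what happened)

def remember(context, event):
    episodes.append((set(context), event))

def recall(current_context, top_k=2):
    """Rank stored episodes by how much context they share with the present."""
    current = set(current_context)
    ranked = sorted(episodes, key=lambda ep: len(ep[0] & current), reverse=True)
    return [event for _, event in ranked[:top_k]]

remember(["ball", "table", "human"], "human rolled the ball across the table")
remember(["cup", "shelf"], "cup was placed on the shelf")
remember(["ball", "floor"], "ball bounced on the floor")

print(recall(["ball", "table"]))
# -> ['human rolled the ball across the table', 'ball bounced on the floor']
```

The point is the selection step: the past is not replayed wholesale but filtered through the present, which is what lets memory inform prediction.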

Recent brain imaging studies have confirmed what we learned from N.N.'s experience: that the same brain systems underlie our ability to recall past events as well as imagine what the future might bring. Our hope is that a model of the temporal self will provide iCub with contextual information from the past that will help it to better understand its current experience. This, in turn, should improve its ability to predict what could happen next.

Thinking about self as a set of processes, it becomes clear that some of these are connected. For example, a key aspect of the interpersonal self is empathy, which derives from a general ability to imagine oneself in another's shoes. One way humans might do this is to internally simulate what they perceive to be another's situation, using the model that underlies their own ecological self. So the interpersonal self could grow out of the ecological self. But what more is needed? We consider an important building block to be the capacity to learn by imitation.

Your ability to interpret another person's actions using your own body schema is partly down to mirror neurons – cells in your brain that fire when you perform a given movement and when you see someone else perform it. Using this insight, Yiannis Demiris at Imperial College London has extended iCub's motor babbling program into an imitation learning system. As a result, iCub can rapidly acquire new hand gestures, and learn sequences of actions involved in playing games or solving puzzles, simply by watching people perform these tasks. The system will have to be extended further to achieve empathy, so that iCub recognises and mirrors a person's emotional state as well as their movement.
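The mirror-like matching step – recognising someone else's movement as one of your own gestures – can be caricatured as a nearest-neighbour search over the robot's motor repertoire. The gestures and trajectories below are invented for illustration and are not drawn from the actual imitation system.

```python
import math

# Toy mirror-style imitation: match an observed hand trajectory against the
# robot's own motor repertoire and name the closest known gesture.
# Illustrative data only -- real trajectories are high-dimensional and noisy.
repertoire = {
    "wave":  [(0.0, 1.0), (0.2, 1.2), (0.0, 1.0), (-0.2, 1.2)],
    "point": [(0.0, 0.5), (0.3, 0.6), (0.6, 0.7), (0.9, 0.8)],
}

def trajectory_distance(a, b):
    """Total point-by-point distance between two equal-length trajectories."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def recognise(observed):
    """'Mirror' a demonstration: find the robot's own gesture that best matches."""
    return min(repertoire, key=lambda name: trajectory_distance(repertoire[name], observed))

# A slightly noisy demonstration of pointing, as the robot might perceive it.
demo = [(0.02, 0.48), (0.31, 0.62), (0.58, 0.71), (0.88, 0.79)]
print(recognise(demo))  # -> point
```

Mapping a perceived action onto one's own repertoire is what makes the observation immediately usable: once recognised as "point", the gesture can be reproduced, practised, or responded to.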

There is still plenty to do. Our models of the ecological, interpersonal and temporal selves are undoubtedly crude in comparison to what goes on in human brains. And we have yet to tackle the conceptual and private selves that would provide iCub with knowledge of what –or who – it is, and an awareness that it has an internal world not shared by others.

When we meet the challenge of making iCub's self processes more realistic, there may be some aspects of the human version that we will not want to emulate. For example, the robot's motivations and goals are essentially those we design in, and it might be wise to leave things that way, rather than allow them to evolve as they do in people.

Something else holding us back is iCub's limited understanding of language. Although our robot can recognise speech, this is not the same as understanding meaning – that requires relating words to action and objects. Our colleagues in Lyon are working on a neurorobotic solution to this problem but, for the moment, iCub is only capable of two-way conversations on a few topics, such as the game it is currently playing with you.

That said, we can see the practical potential of robots of this kind. An ecological self makes our iCub safer to be around. The temporally extended self allows it to remember the past and anticipate the future. The interpersonal self means it can conceive of, and anticipate, human needs and actions. Such a robot could work alongside people in fields from manufacturing and search-and-rescue to helping care for people with disabilities.

You might argue that our models have missed a crucial element: the "I" at the centre of James' notion of self – what we also call consciousness. But one possibility is that this arises when the other aspects of self are brought together. In other words, it may be an emergent property of a suitably configured set of self processes, rather than a distinct thing in itself. Returning to the Buddhist idea of the self as an illusion, when you strip away the different component processes, perhaps there will be nothing left.

But is it a person?

Our idea of the self is intimately tied up with our notion of what it means to be a person. Is it conceivable, then, that one day we might attribute personhood to a robot with an artificial sense of self? In the 17th century, philosopher John Locke defined a person as an entity with reason and language, possessing mental states such as beliefs, desires and intentions, capable of relationships and morally responsible for its actions. Modern philosopher Daniel Dennett at Tufts University in Massachusetts largely agrees, but with an important addition. A person, he says, is someone who is treated as a person by others. So we grant personhood to one another.

Note that neither Locke nor Dennett specifies that a person must be made of biological stuff. Even so, at this stage, our iCub falls short of the criteria required. It can reason, use language, have beliefs and intentions, and enter into relationships of a kind. We might even be inclined to judge it for the appropriateness of its actions. However, it does not yet have the full set of processes associated with a human self, so we cannot be sure that its mental states are anything like ours. Nor is it a moral being, as we commonly understand the term, because it does not base its choices on values.

Be that as it may, our everyday attribution of personhood is grounded more in direct impressions than a philosophical checklist. As Dennett says, personhood is partly in the eyes of the beholder. And, when interacting with iCub, it can feel natural to behave towards this robot as though we are taking the first steps in creating a new kind of person. Sometimes it even leaves me with the surprising feeling that "someone is home".

Aspects of self

Psychologist Ulric Neisser's multifaceted description of the self provides useful targets for a robot to emulate

Ecological self
Having a point of view; distinguishing yourself from others; having a feeling of body ownership

Interpersonal self
Self-recognition (e.g. in a mirror); seeing others as agents like you; having empathy for others

Temporally extended self
Having awareness of your personal past and future

Conceptual self
Having an idea of who you are; having a life story, personal goals, motivations and values

Private self
Having a stream of consciousness; knowing you have an inner life
