Monday, December 27, 2004

Robots teaching their creators - sounds backwards to me

Welcome to Technology Review:
"[Mitsuo] Kawato [director, ATR Computational Neuroscience Laboratories in Kyoto] loves robots not because they are cool, but because he believes they can teach him how the human brain works. 'Only when we try to reproduce brain functions in artificial machines can we understand the information processing of the brain,' he says. It’s what he calls 'understanding the brain by creating the brain.' By programming a robot to reach out and grasp an object, for instance, Kawato hopes to learn the patterns in which electrical signals flow among neurons in the brain to control a human arm."

This rather lengthy piece on MIT's TechnologyReview.com manages to miss the really interesting questions. The robot teaches Kawato nothing; Kawato builds his robot, observes its behavior, and teaches himself. There is also only a passing reference to the fact that the robot's manufactured brain and body are very imperfect models of the structures of our own bodies. Controlling a set of hydraulic cylinders to move robotic arms and hands has to be a fundamentally different thing from controlling a series of muscles to do similar things with a human body. Why should we assume there is close correspondence between a digital computer and our analog brains?

Entirely overlooked in this discussion is the problem of intention. The robot has its intentions programmed by its creator. Where do the intentions behind human actions come from? If one takes the atheistic evolutionary view, what appear to be intentions are nothing more than potentials for action developed through natural selection, which magnifies useful adaptations and weeds out useless ones. On this model, the robot is fundamentally different, because we have no idea how such spontaneous intentionality would arise in a robot.

On the other hand, suppose one takes the theistic creation view. Here we have two possibilities. Either all intentions are pre-programmed and we are mere automata, in which case the robot looks like a useful metaphor; or we are created with a suite of intentional possibilities from which we choose, what theologians call free will, or free moral agency, in which case the robot model won't work very well.

Why, you may well ask, do I raise this issue of intentions when all that Kawato and his colleagues are trying to do is understand how the brain moves the arm? Because the brain moves the arm for a reason, and changing the reason for which the arm is moved will change the way it is moved. The speed of the movement, the force behind it, how much of the body is involved in the movement: all these depend not only on the specific goal but even on our mood.

Consider picking up a glass from your desk and bringing it to your mouth to drink. The robot calculates the distances, the force needed, and an efficient route for the glass to travel. But is that really what we do? Maybe you have a cough or a tickle in your throat, and you want that glass of water brought to your lips quickly, even at the risk of spilling some of it. Maybe you are in a thoughtful mood and, having thought of sipping a bit of that Gibson, you pick up the glass slowly, gently turn it a bit from side to side, watch the little onions roll about, and then take a sip. Maybe you're checking your email and working on your morning caffeine fix at the same time; you might pick up the coffee, take a sip, and put it back down without ever looking at it, hardly even noticing that you have done it. Even a simple thing like raising a cup to the lips has a lot going on, not all of it conscious, and that isn't anything like what goes on with the robot.
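To see how mechanical the robot's side of this comparison really is, consider how motor-control researchers typically formalize "an efficient route": the classic minimum-jerk trajectory, which moves the hand from start to goal along the smoothest possible path in a fixed time. The sketch below is purely illustrative (it is not code from Kawato's lab or the article), but it shows how little room such a calculation leaves for moods or onions:

```python
def minimum_jerk(x0, xf, T, steps=10):
    """Hand positions along a minimum-jerk path from x0 to xf over duration T.

    The interpolation factor 10*tau^3 - 15*tau^4 + 6*tau^5 is the standard
    minimum-jerk profile: it starts and ends at rest, with zero acceleration
    at both endpoints, and is fully determined by start, goal, and duration.
    """
    positions = []
    for i in range(steps + 1):
        tau = i / steps                           # normalized time in [0, 1]
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
        positions.append(x0 + (xf - x0) * s)
    return positions

# Example: move the hand 0.30 m toward the glass in one second.
path = minimum_jerk(0.0, 0.30, 1.0)
```

Given the same start, goal, and duration, this calculation produces the identical movement every time; the thirsty grab and the contemplative sip come out exactly alike.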

Another issue overlooked here has to do with how we learn to control our bodies. In some experiments made famous by Maltz in his book Psycho-Cybernetics, subjects were tested on their ability to shoot free throws on a basketball court. They were then divided into two groups who practiced equal amounts of time, but one group practiced by actually throwing the ball at the hoop and the other by visualizing the process. They were then tested to see whether their performance had improved. Both groups improved, and the visualization group improved more. I'd like to see someone explain this in terms of the robot model.

Unfortunately, we are not likely to get definitive answers anytime soon. Kawato, who is in the midst of a five-year, $8 million project to upgrade his current robot, wants a program costing half a billion dollars a year for 30 years just to get a robot with the capabilities of a five-year-old child.

To anyone contemplating spending that much money for such a modest goal, I offer an old joke from India. Two brothers have not seen each other for over twenty years. One became a holy man to learn to overcome the physical world and owns nothing but his loincloth and rice bowl; the other has become a rich merchant. They meet at the ferry crossing the river to their hometown. The poor holy man walks across the river while his brother pays his fare and crosses on the ferry. When they meet again on the other side, the merchant asks how long it took to learn to walk on water, and the holy man says it has taken all of the twenty years since they parted. The merchant replies, "It takes me less time than the ferry crossing took to earn the money to pay the fare. Which of us has made the best use of his time?"
