Laugh and Your Computer Will Laugh With You, Someday
By DANIEL GOLEMAN
Published: Tuesday, January 7, 1997
THE phrase ''user friendly'' is about to take on a more literal meaning: computer scientists are creating machines that can recognize their users' most intimate moods and respond like an empathetic friend.
To be sure, the idea of a machine cognizant of that human Achilles' heel, emotion, can conjure more sinister images -- like HAL, the savvy, menacing computer in ''2001,'' whose fear that he would be unplugged led him to kill all but one of the crew members on a space mission. Yet in a development welcome to some and alarming to others, as the birth date of HAL in Arthur C. Clarke's novel approaches on Jan. 12, scientists have already constructed pieces of the technical groundwork for such machines.
While the specter of robotic Frankenstein monsters captures the popular imagination, computer scientists offer benign visions of a more humanlike technology, animating gentle cousins of Oz's Tin Man. They foresee a time when computers in automobiles will sense when drivers are getting too drowsy or impatient and so deliver wake-up messages or switch on soothing music. Empathic computer tutors will notice when their pupils are getting frustrated or overwhelmed, and so offer encouraging words and make lessons easier. Wearable computers could warn people with chronic conditions like severe asthma or high blood pressure when they are becoming too overwrought.
Bits and pieces of this emotionally attuned cyber-future already exist. Computer scientists at the Georgia Institute of Technology in Atlanta have developed a computer system that can recognize simple facial expressions of emotions like surprise and sadness. At Northwestern University in Evanston, Ill., and at Carnegie Mellon University in Pittsburgh, engineers have designed programs that converse with people and respond appropriately to their emotions. And at the Massachusetts Institute of Technology, in Cambridge, where much of the work in what is being called ''affective computing'' is under way, a computer worn around the waist monitors its wearer's every shift of mood.
No one claims that these more sensitive machines will come close to replicating full human emotion. And some skeptics question whether the work to mimic emotion in machines is worth the effort. Pat Billingsley, an expert in human-machine interfaces at the Merritt Group in Williamsburg, Mass., said: ''People don't want a computer that cares about their mood so much as one that makes what they're trying to do easier. You want a very predictable system, one you can rely on to behave the same way time after time -- you wouldn't want your computer to be too emotional.''
One impetus for building these more sensitive computers is widespread frustration with the doltishness of present models. ''Today's computers are emotionally impaired,'' said Dr. Roz Picard, a computer scientist at M.I.T. who is leading the effort there to bring emotion to the all-too-rational universe of computing. ''They blather on and on with pages of output whether or not you care. If they could recognize emotions like interest, pleasure and distress, and respond accordingly, they'd be more like an intelligent, friendly companion.''
Dr. Picard and her associates at M.I.T.'s Media Lab are developing prototypes of such sensitive machines that are not just portable, but wearable. ''A computer that monitors your emotions might be worn on your shoulders, waist, in your shoes, anywhere,'' Dr. Picard said. ''It could sense from your muscle tension or the lightness of your step how you're feeling, and alert you if, say, you're getting too stressed. Or share that information with people you wanted to know, like your doctor or your spouse.''
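In rough terms, the kind of monitoring Dr. Picard describes might be sketched as a running average of a sensor reading compared against a threshold. The sketch below is purely illustrative -- the class, the window size, the threshold and the sample readings are assumptions, not details of the M.I.T. prototypes:

```python
from collections import deque

# A toy sketch of a wearable stress monitor: keep a sliding window of
# normalized muscle-tension readings and alert when the running average
# crosses a threshold. All values here are illustrative assumptions.

class StressMonitor:
    def __init__(self, window: int = 5, threshold: float = 0.7):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def update(self, muscle_tension: float) -> bool:
        """Record a reading in [0, 1]; return True if the wearer seems too stressed."""
        self.readings.append(muscle_tension)
        average = sum(self.readings) / len(self.readings)
        return average > self.threshold

monitor = StressMonitor()
for reading in (0.4, 0.6, 0.8, 0.9, 0.95):
    if monitor.update(reading):
        print("Alert: you may be getting too stressed.")
```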
One immediate step toward warmer-seeming machines is giving them more emotionally realistic voices. Computerized speech has so far come across as, at best, a monotonous drone, but computer users may be cheered by progress in designing automated voices capable of more realistic emotional inflection. Such nuance, Dr. Picard said, ''adds flavor and meaning to what we say.'' She added: ''With these abilities computers can communicate in a more natural, pleasant way. Monotonous voice-reminder systems could vary their voices, for example, to flag urgent information.''
While warmer voices signal a small start, much of the work deals with more sophisticated aspects of emotional astuteness. Perhaps most progress has come in creating machines that can read human emotion, a technical challenge similar to having them recognize handwritten words or speech.
Emotions like fear, sadness and anger each announce themselves through a unique signature of changes in facial muscles, vocal inflection, physiological arousal and other such cues. Building on techniques of pattern recognition already used for computer comprehension of words and images, Dr. Irfan Essa, a computer scientist at Georgia Tech, has constructed a computer system that can read people's emotions from changes in their facial expression.
The system uses a special camera that converts changes in facial muscles into digitized renderings of energy patterns; the computer compares each pattern to that of the person with a neutral expression. In pilot tests with people making deliberate expressions of emotions like anger, fear and surprise, the computer read the emotions with up to 98 percent accuracy.
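Dr. Essa's system rests on detailed models of facial motion, but the underlying idea -- subtract the neutral face to get an energy pattern of movement, then find the closest stored expression pattern -- can be caricatured in a few lines. Everything below (array sizes, the distance metric, the random stand-in data) is an illustrative assumption, not the actual system:

```python
import numpy as np

# A toy sketch of template-matching emotion recognition: the difference from
# a neutral face stands in for the digitized "energy pattern" of facial
# movement, and classification picks the nearest stored expression template.

def motion_energy(frame: np.ndarray, neutral: np.ndarray) -> np.ndarray:
    """Crude stand-in for the digitized energy pattern of facial movement."""
    return np.abs(frame.astype(float) - neutral.astype(float))

def classify(frame, neutral, templates):
    """Return the label of the expression template closest to the observed energy."""
    energy = motion_energy(frame, neutral)
    return min(templates, key=lambda label: np.linalg.norm(energy - templates[label]))

# Hypothetical usage with random stand-in images.
rng = np.random.default_rng(0)
neutral = np.zeros((64, 64))
templates = {e: rng.random((64, 64)) for e in ("anger", "fear", "surprise")}
frame = templates["surprise"] + 0.01 * rng.random((64, 64))
print(classify(frame, neutral, templates))  # -> surprise
```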
But just as computers that comprehend spoken words need the speaker to enunciate clearly one word at a time, the emotion-reading computer cannot yet detect the rapid, free flow of spontaneous feelings as mirrored in the face. That, said Dr. Essa, is the next step, a more daunting technical challenge: ''What we've done so far is just the very first step in building a machine that can read emotions.''
A more elusive trick is for a computer to know how to respond once it has recognized an emotion. A prototype program for a computer that can do this has been developed at the Institute for the Learning Sciences at Northwestern University, under the direction of Dr. Andrew Ortony, a computer scientist.
''The question was, could you get a computer to reason about people's emotions, like Star Trek's Mr. Spock, who can infer anger without being able to experience it?'' said Dr. Ortony. Working with Dr. Paul O'Rorke, a computer scientist at the University of California at Irvine, Dr. Ortony designed a computer program called ''AbMal,'' which has a rational understanding of emotions. AbMal, for example, can realize that gloating occurs when a person is happy about someone else's distress, or that hope and fear arise because people anticipate success or failure.
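AbMal's reasoning can be loosely pictured as a set of if-then appraisal rules mapping a described situation to an emotion label. The toy sketch below invents its own situation fields and covers only the two examples just mentioned; it is not the program's actual representation:

```python
# A toy sketch of rule-based emotion inference in the spirit of AbMal.
# The situation fields and rules are illustrative assumptions.

def infer_emotion(situation: dict) -> str:
    """Map a simple appraisal of a situation to an emotion label."""
    if situation.get("anticipated"):
        # Hope and fear arise from anticipating success or failure.
        return "hope" if situation.get("outcome") == "success" else "fear"
    if situation.get("other_distressed") and situation.get("self_pleased"):
        # Gloating: being happy about someone else's distress.
        return "gloating"
    return "unknown"

print(infer_emotion({"other_distressed": True, "self_pleased": True}))  # gloating
print(infer_emotion({"anticipated": True, "outcome": "failure"}))       # fear
```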
Such emotionally smart programs are essential to the next step, constructing machines that react like another person to an individual's emotions -- in other words, give a semblance of empathy. One approach to creating empathic machines is being taken by Clark Elliott, a computer scientist at DePaul University in Chicago. Dr. Elliott's computer program, ''The Affective Reasoner,'' can talk with people about their emotional experiences. The program, which Dr. Elliott hopes will evolve into uses like friendly computer tutors, can comprehend simple sentences and understand the emotions they imply or describe. Then it responds like an understanding friend.
The program has 70 agents, or characters -- cartoon-like faces on a computer screen that can morph to express different emotions, like turning red and shaking to show extreme anger. ''I can say to an agent in the program, 'Sam, I'm worried about my test,' and Sam will recognize what 'worried' means,'' said Dr. Elliott. ''Sam might respond, 'Clark, you're my friend. I'm sorry you're worried. I hope your test goes well.' Right now Sam's emotional acuity is more advanced than his language ability: he doesn't know what a test is, but he knows how to respond when you're worried.''
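Sam's recognize-then-respond behavior can be caricatured as a keyword-to-emotion lookup followed by a canned reply, as in this sketch. The keyword table and reply templates are invented for illustration and are far simpler than the Affective Reasoner's emotion model:

```python
# A toy sketch of the recognize-then-respond loop: spot an emotion word,
# then answer from a canned template. All tables here are invented.

EMOTION_KEYWORDS = {"worried": "worry", "afraid": "fear", "furious": "anger"}

RESPONSES = {
    "worry": "{name}, you're my friend. I'm sorry you're worried. I hope it goes well.",
    "anger": "Boy, what a blameworthy thing that must have been, {name}!",
}

def respond(name: str, utterance: str) -> str:
    """Find the first emotion keyword in the utterance and reply like a friend."""
    for word, emotion in EMOTION_KEYWORDS.items():
        if word in utterance.lower():
            template = RESPONSES.get(emotion, "I see that you feel {emotion}, {name}.")
            return template.format(name=name, emotion=emotion)
    return f"Tell me how you feel, {name}."

print(respond("Clark", "Sam, I'm worried about my test."))
```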
In a test of the Affective Reasoner expressing different emotions, Dr. Elliott had people listen to the computer voice and then to an actor, each giving emotional nuance like anger or remorse to ambiguous or nonsensical sentences like ''I picked up katapia in Timbuktoo.''
''People could correctly guess the emotion being expressed by the actor around 50 percent of the time, while they guessed correctly with the computer about 70 percent of the time,'' Dr. Elliott said.
Virtual reality games are another arena where work has advanced, in accord with an animator's maxim that the portrayal of emotions in cartoon characters adds the illusion of life. Dr. Joseph Bates, a computer scientist at Carnegie Mellon University, has created a virtual reality game in which the human participant interacts with ''woggles,'' characters that have emotional reactions to what goes on.
''Video games and virtual reality games so far are emotional deserts,'' said Dr. Bates. ''The next challenge is to give characters emotional reactions. So the woggles have unique personalities. For instance, one woggle hates fights -- he gets sad and tries to stop them. That makes them more lifelike.''
Beyond the appeal of more cuddly or alluring machines, computer scientists see another reason to bring feelings to computing. Paradoxically, a bit of emotion might make computers smarter, their intelligence less artificial and more like that of a person.
Because computers have no intrinsic sense of what within a mass of data is more important and what is irrelevant, they can waste huge amounts of time looking at every bit of information. For that reason, experts in artificial intelligence are grappling with how to grant a computer a human-like ability to realize what information matters. And that brings them back to emotions.
''It's become clear that something is missing in the purely cognitive, just-the-facts, problem-solving approach to modeling intelligence in a computer,'' said Dr. Fonya Montalvo, a computer scientist in Nahant, Mass. ''From vision research I was doing at M.I.T., it was clear that people get a sense of what's important when they see a scene, while computers don't. They go through every bit of information without knowing what's salient. In humans it's our emotions that flag for attention what is important.''
If computers had something akin to emotion, Dr. Montalvo said, ''they could be more efficient thinking machines,'' adding, ''Artificial intelligence has largely ignored the crucial role of emotions in its models of the human mind.''
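The salience idea is easy to state computationally: rather than scanning every item in a scene, process items in order of an assigned ''emotional weight'' so that attention can stop early. In the sketch below, the weights and scene items are invented for illustration:

```python
import heapq

# A toy sketch of emotion as a salience filter: a priority queue yields the
# most emotionally weighted items first, instead of scanning everything.

def by_salience(items):
    """Yield (label, weight) pairs, most emotionally salient first."""
    heap = [(-weight, label) for label, weight in items]
    heapq.heapify(heap)
    while heap:
        neg_weight, label = heapq.heappop(heap)
        yield label, -neg_weight

scene = [("wall texture", 0.1), ("moving shadow", 0.9), ("loud noise", 0.8)]
for label, weight in by_salience(scene):
    print(label, weight)  # moving shadow, then loud noise, then wall texture
```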
As early as the 1960's, one of the first to suggest that intelligence in machines would need to mimic emotions was Dr. Herbert Simon, a Nobel laureate and pioneer in artificial intelligence. But Dr. Simon's insight was largely ignored in the field over the following 30 years, save for a few lone voices. One of those who has taken up Dr. Simon's call is Dr. Aaron Sloman, a philosopher at the School of Computer Science at the University of Birmingham in England.
The best design for robots with intelligence, Dr. Sloman said, includes ''a mechanism that can deploy resources as changing circumstances demand,'' adding, ''The parallel in the human or animal mind is emotions; a machine as smart as a human would probably need something similar.''
Dr. Sloman said: ''If there is an intelligent robot crossing a dangerous bridge, it needs a state like anxiety that will put aside other, irrelevant concerns and focus on the danger at hand. Then after it had crossed safely, it can allow its attention to roam more freely, a state something like relief.''
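Dr. Sloman's bridge example amounts to a small state machine in which an anxiety state filters the robot's agenda down to the danger at hand. The sketch below is a toy rendering of that idea; the state names and task list are assumptions, not anything from his programs:

```python
# A toy state machine for the bridge example: an "anxious" state narrows
# attention to the danger at hand; once the danger passes, attention roams.

class Robot:
    def __init__(self):
        self.state = "calm"
        self.tasks = ["map terrain", "log weather", "cross bridge"]

    def current_focus(self):
        if self.state == "anxious":
            # Put aside irrelevant concerns; attend only to the danger.
            return [t for t in self.tasks if t == "cross bridge"]
        return list(self.tasks)  # attention free to roam

robot = Robot()
robot.state = "anxious"        # dangerous bridge detected
print(robot.current_focus())   # ['cross bridge']
robot.state = "relieved"       # crossed safely
print(robot.current_focus())   # all tasks again
```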
But, said Dr. Sloman, while he and his colleagues have begun constructing artificial intelligence programs with emotion-like features, ''the practical work is far behind the theory -- what we've been able to construct so far is very primitive.''
As computer scientists labor to develop the technology that will allow computers to read and express emotions, some are debating whether the results will be more like the charming and affable C-3PO in George Lucas's ''Star Wars'' or like HAL, who, as Dr. Picard puts it, ''can not only pass the Turing Test'' -- that is, seamlessly impersonate human intelligence -- ''but also kill the person who gives it.''
One way to ensure that machines with heart stay benign, Dr. Picard proposes in an essay in ''Hal's Legacy,'' a book published this month by The M.I.T. Press, is to require that the design of emotionally astute computers place primary importance on preserving human life. But, she adds, the need for such safeguards lies far in the future.
More to the point, Dr. Picard said: ''The question is, do we really want to build an intelligent, friendly machine? I'm not sure people are ready for computers with emotions.''
Photos: ''Emotions Read by Machines'' -- A computer system that can read people's emotions from changes in their faces relies on patterns of facial movement. A special camera converts changes in facial muscles into digitized renderings of energy patterns. A neutral, emotionless facial expression is compared with the energy of the facial movements made to match certain emotions. In pilot tests with people making deliberate expressions of emotions, the computer read the emotions with up to 98 percent accuracy. (Photos by Dr. Irfan Essa/Georgia Institute of Technology)(pg. C1)

Diagram: ''Paging Lieutenant Data: Emotions Within the Machine'' -- The Affective Reasoner is a collection of computer programs, including speech and music synthesizers, that manifest themselves as agents that reason about, and to some extent ''have,'' emotions. With limited language and no general world information, but extensive knowledge about emotional life, A.R. agents are able to reason quite broadly, though shallowly, about what is important to humans. The following are lightly edited transcripts of real-time dialogues between a real researcher, Clark Elliott, and Sam, an agent.

I.
Clark: I am worried about my exam.
Sam: I am sorry to hear that you are worried about your exam. Is it soon?
Clark: Yes.
Sam: Is your exam important to you?
Clark: Yes.
Sam: Are chances of a negative outcome strong?
Clark: Yes.
Sam: I am sorry that you might be dissatisfied. You must be very worried. Let's hope that you have a successful outcome with your exam.

II.
Clark: I am really furious at Joe.
Sam: Boy, what a blameworthy thing he must have done! What do you call such an action?
Clark: Joe was stealing.
Sam: When Joe was stealing, he must have caused you distress. What is the name of the thing you wanted, or wanted to preserve?
Clark: My sandwich.
Sam: How important was your sandwich to you, Clark?
Clark: Not very important.
Sam: How surprising was it that Joe was stealing and caused you distress about your sandwich?
Clark: It was not surprising.
Sam: How bad was it that Joe was stealing?
Clark: It was not very bad, actually.
Sam: How important is your principle about stealing?
Clark: Quite important.
Sam: Hmmm. I would think that you are only somewhat annoyed at Joe. I do not understand that you are furious at Joe. After all, what Joe did was not that bad. Your sandwich is not that important to you. It is not very surprising that Joe was stealing. Perhaps your principle about stealing is very, very important to you. Is this correct?
Clark: Yes.
Sam: I will remember that about you.
(Later)
Sam: Clark, if I was stealing and it affected your sandwich, would you still be my friend?

(Source: Dr. Clark Elliott/Institute for Applied Artificial Intelligence/DePaul University)(pg. C9)