Sunday, July 26, 2009

Scientists Worry Machines May Outsmart Man

Simon and others discussed this question decades ago.


By JOHN MARKOFF

Published: July 25, 2009

A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Photo (Ken Conley/Willow Garage): This personal robot plugs itself in when it needs a charge. Servant now, master later?

Photo (General Atomics Aeronautical Systems): Predator drones, like this one in Afghanistan, still need a human hand to work, at least for now.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

The conference was organized by the Association for the Advancement of Artificial Intelligence, and in choosing Asilomar for the discussions, the group purposefully evoked a landmark event in the history of science. In 1975, the world’s leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material among organisms. Concerned about possible biohazards and ethical questions, scientists had halted certain experiments. The conference led to guidelines for recombinant DNA research, enabling experimentation to continue.

The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association.

Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.

The idea of an “intelligence explosion” in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity.

This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.

“Something new has taken place in the past five to eight years,” Dr. Horvitz said. “Technologists are providing almost religious visions, and their ideas are resonating in some ways with the same idea of the Rapture.”

The Kurzweil version of technological utopia has captured imaginations in Silicon Valley. This summer an organization called the Singularity University began offering courses to prepare a “cadre” to shape the advances and help society cope with the ramifications.

“My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines,” Dr. Horvitz said.

The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition become unshakable.

“If you wait too long and the sides become entrenched like with G.M.O.,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”

Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives.”

Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, “Oh no, sorry to hear that.”

A physician told him afterward that it was wonderful that the system responded to human emotion. “That’s a great idea,” Dr. Horvitz said he was told. “I have no time for that.”


Friday, July 24, 2009

Herbert Simon From Economist.com

Guru

Herbert Simon

Mar 20th 2009
From Economist.com

Herbert Simon (1916-2001) is most famous for what is known to economists as the theory of bounded rationality, a theory about economic decision-making that Simon himself preferred to call “satisficing”, a combination of two words: “satisfy” and “suffice”. Contrary to the tenets of classical economics, Simon maintained that individuals do not seek to maximise their benefit from a particular course of action (since they cannot assimilate and digest all the information that would be needed to do such a thing). Not only can they not get access to all the information required, but even if they could, their minds would be unable to process it properly. The human mind necessarily restricts itself. It is, as Simon put it, bounded by “cognitive limits”.

Hence people, in many different situations, seek something that is “good enough”, something that is satisfactory. Humans, for example, when in shopping mode, aspire to something that they find acceptable, although that may not necessarily be optimal. They look through things in sequence and when they come across an item that meets their aspiration level they go for it. This real-world behaviour is what Simon called satisficing.
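
The sequential rule described above is simple enough to express in a few lines of code. What follows is a minimal sketch in Python (the laptop options, their scores and the aspiration level are invented for illustration) contrasting a satisficer, who takes the first option that clears the aspiration level, with the maximising “economic man” of classical theory, who evaluates every option before choosing.

def satisfice(options, score, aspiration):
    """Simon's rule of thumb: inspect options in sequence and take the first 'good enough' one."""
    for option in options:
        if score(option) >= aspiration:
            return option
    return None  # nothing met the aspiration level

def maximise(options, score):
    """Classical 'economic man': evaluate every option and pick the best."""
    return max(options, key=score)

# Hypothetical shopping trip: laptops listed in the order they are encountered.
laptops = [
    {"name": "A", "value": 0.60},
    {"name": "B", "value": 0.80},   # first option that clears the aspiration level
    {"name": "C", "value": 0.95},   # the true optimum, which the satisficer never sees
]
score = lambda laptop: laptop["value"]

print(satisfice(laptops, score, aspiration=0.75))  # -> laptop B
print(maximise(laptops, score))                    # -> laptop C

The point of the contrast is that the satisficer's answer depends on the order of inspection and on the aspiration level rather than on a full survey of the alternatives; the rule is cheap to apply and merely “good enough”, which is exactly Simon's claim about real decision-makers.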

He applied the idea to organisations as well as to individuals. Managers do much the same thing as shoppers in a mall. “Whereas economic man maximises, selects the best alternative from among all those available to him,” he wrote, “his cousin, administrative man, satisfices, looks for a course of action that is satisfactory or ‘good enough’.” He went on to say: “Because he treats the world as rather empty and ignores the interrelatedness of all things (so stupefying to thought and action), administrative man can make decisions with relatively simple rules of thumb that do not make impossible demands upon his capacity for thought.”

In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

The principle of satisficing can also be applied to events such as filling in questionnaires. Respondents often choose satisfactory answers rather than searching for an optimum answer. Satisficing of this kind can dramatically distort the traditional statistical methods of market research.

Simon, born and raised in Milwaukee, studied economics at the University of Chicago. “My career,” he said, “was settled at least as much by drift as by choice”; an undergraduate field study developed into what became his main field of interest: decision-making within organisations. In 1949 he moved to Pittsburgh to help set up a new graduate school of industrial administration at the Carnegie Institute of Technology. He said that his work had two guiding principles: one was the “hardening of the social sciences”; the other was to bring about closer co-operation between the natural sciences and the social sciences.

Simon was a man of wide interests. He played the piano well—his mother was an accomplished pianist—and he was also a keen mountain climber. At one time he even taught an undergraduate course on the French Revolution. He was awarded the Nobel Prize for economics in 1978, to considerable surprise, since by then he had not taught economics for two decades.

Notable publications

With March, J.G., “Organisations”, John Wiley & Sons, 1958; 2nd edn, Blackwell, 1993

“Administrative Behaviour: A Study of the Decision Making Processes in Administrative Organisation”, The Macmillan Co, New York, 1947; 4th edn, Free Press, 1997