Thursday, September 18, 2014
The third era of IT
Learning from the brain
Using insights from the human mind to drive innovation in IT
Why cognitive and neuromorphic computing represent a new era for information technology.
There have so far been only two eras of computing, says John Gordon, vice president of IBM Watson Solutions: the tabulation era and the programmable computer era. Now, he says, we’re beginning to embark on a third: the era of cognitive computing.
Tabulating computers, the predecessors to modern IT systems, were large machines dedicated to a specific job, such as long division. If you wanted them to do something different, you had to take them apart and reassemble them. Then, in the 1950s, came programmable computers, which could be instructed to perform different tasks without being rebuilt.
Computers have become much more powerful in the last six decades, of course, but we are nevertheless still in the programmable computing era. The next phase, many are predicting, will be defined by computers that no longer need to be explicitly programmed but instead learn what they need to do by interacting with humans and data.
“I truly believe that the cognitive computing era will go on for the next 50 years and will be as transformational for industry as the first types of programmable computers were in the 1950s,” says Mr Gordon.
Cognitive computing
The true potential of cognitive computing was first made public in February 2011 when IBM’s Watson beat Jeopardy! champions Ken Jennings and Brad Rutter. In a nod to The Simpsons, Mr Jennings wrote on his video screen: “I for one welcome our new computer overlords.”
But these computer overlords have been a long time coming. The roots of cognitive computing stretch back to the work of artificial intelligence (AI) pioneers in the 1950s. Since then, artificial intelligence has ebbed in and out of the public consciousness, with occasional periods of progress interspersed with so-called “AI winters”.
Cognitive computing is AI redux. Researchers took the successful bits of AI, such as machine learning, natural language processing and reasoning algorithms, and turned them into something useful.
“Cognitive computing is starting to take off because we are having this confluence of the maturity of the technology, the evolution of computing power, and the ability to understand things like language, which is coming together to make this more of a reality now,” says Mr Gordon.
But the phenomenon is not purely driven by technological advances; there is also a pressing need for systems that can organise and make sense of big data. It’s no secret that digital data is growing at a clip—about 3 exabytes of data every day. With the Internet of things – where machines talk to other machines over the Internet – predicted to intensify that growth, this need is only going to become more acute.
One of the ways in which cognitive computing may help make sense of all this data is by enabling linguistic interfaces to analytics systems. Current tools, with their complex and specialised user interfaces, might be useful for data scientists and statisticians, but they’re not much good for a doctor standing at a patient’s bedside or a wealth manager advising her client on the best investments given the client’s risk appetite.
A cognitive system that ‘understands’ human language can allow professionals to interrogate data simply by asking questions. “It can propel people into the information economy because they don’t have to know how to write SQL [structured query language],” Mr Gordon says. “They just have to know how to communicate.”
Neuromorphic computing
To date, most cognitive systems work on regular computers. However, some experts believe that their true potential will only be realised when computers are made to work more like the brain, the most sophisticated platform for cognition there is.
Currently in development, so-called “neuromorphic” microprocessors work by mimicking the neurons and synapses that make up the brain. As such, computer scientists claim, they are better suited to spotting patterns within streams of data than their conventional predecessors.
Neuromorphic computers have not yet made it out of the research labs, and academic institutions including Stanford University, Heidelberg University, the University of Manchester and ETH Zurich lead the field. But companies such as IBM and Qualcomm also have very promising neuromorphic chips in R&D.
No one is precisely sure what the eventual applications of neuromorphic chips will be, but it has been suggested that they will act as the eyes, ears and nose of conventional and cognitive computers. And because they are light and power-efficient, they can be used in robots, self-driving cars and drones. They can interact with the environment in real time and spot any patterns that are out of the ordinary (such as a car coming the wrong way down a one-way street).
Neuromorphic chips might also be embedded in mobile phones, tablet computers and other handheld devices. Imagine holding your phone in front of a flower you don’t recognise. The neuromorphic chip would recognise its shape and smell and return all the relevant information you’re looking for—a search engine on steroids.
Although conventional computers are getting better at recognising objects such as the human face, they still require enormous computing power to do so. It took Google 16,000 processors to recognise a cat. Neuromorphic hardware will be able to do the same with a single low-powered chip.
In Smart machines: IBM’s Watson and the era of cognitive computing, John Kelly and Steve Hamm explain that neuromorphic computing and cognitive computing are complementary technologies, analogous to the division of labour between the right and left hemispheres of the brain. Cognitive computers are the left brain, focusing on language and analytical thinking, while neuromorphic chips are the right brain, focusing on the senses and pattern recognition.
IBM describes the combination of these capabilities as “holistic computing intelligence”. If this combination lives up to a fraction of its potential, we may indeed be embarking on a new era of computing.
When computers can learn like we do, what will the role of human beings be? Let us know your thoughts on the Future Realities group on LinkedIn, sponsored by Dassault Systèmes.
Saturday, September 6, 2014
The International Herbert A. Simon Society
the Herbert Simon Society
The International Herbert A. Simon Society, created in Torino in 2008, is named after Herbert A. Simon (1916-2001), winner of the 1978 Nobel Prize for Economics, who was the first to radically challenge the model of rationality used by neoclassical economics, a model that lay at the origin of the recent financial crisis and of the many forecasting errors made by national and international financial institutions.
http://herbertsimonsociety.org/
***** Herbert A. Simon visited Italy to give three lectures, which became the book An Empirically-based Microeconomics (1997); a Chinese translation appeared in 2009.
Turin - Wikipedia, the free encyclopedia
Publications
Documents from Pat Langley:
- Title: Heuristics for Discovery in Cognitive Science
  Caption: Pat Langley's slides: Heuristics for Discovery in Cognitive Science
  File name: LANGLEY_slides_Cognitive-Science-Society-meeting-2001.pdf
  Size: 706 kB
- Title: Heuristics for Scientific Discovery: The Legacy of Herbert Simon
  Caption: Pat Langley's chapter Heuristics for Scientific Discovery: The Legacy of Herbert Simon
  File name: LANGLEY_essay-in-Models-of-a-Man-MIT-press-2004.pdf
  Size: 107 kB
OMNI INTERVIEW: Herbert A. Simon (June 1994)
I translated this interview and published it in a small edition (June 15, 2001).
Interviewed June 1994 by Doug Stewart
January 1956: Ike was in his first term in the White House and electric typewriters were a luxury when Herbert Simon strolled into a mathematical-modeling class he was teaching at Pittsburgh's Carnegie Tech and announced he'd built a machine that could think. Simon, with two colleagues, had created what is now regarded as the first artificial-intelligence (AI) program. In finding proofs for logical theorems, it automated a task that hitherto only human logicians had been smart enough to perform. But to the future Nobel laureate, his program's most important proof was something far grander: proof the human brain wasn't so special after all.
Still teaching at what is now Carnegie-Mellon University, Simon is an academic jack-of-all-trades: computer and social scientist, cognitive psychologist, and philosopher. To Edward Feigenbaum, an AI pioneer at Stanford University, "Herb Simon is first and foremost a behavioral scientist. His genius lies in cutting through the immense complexities of human behavior to build elegantly simple models that work, that explain the data. He might well be the greatest behavioral scientist of the twentieth century."
Fast-talking and combative at 77, Simon remains an unapologetic "left winger" in the AI world he helped found. Brusque to the point of arrogance, he insists that everything a brain does can be adequately explained in terms of information processing. A computer, he argues (and Simon argues a lot), could do these things just as well.
Herb Simon has always argued. His first publication, in grade school, was a letter to the editor in the Milwaukee Journal defending atheism. A civil libertarian and New Deal Democrat, he's been known to dampen conversations at dinner parties by asking guests whether they'd prefer having real children or disease-resistant AI programs that were otherwise identical. He doesn't take criticism well, he confesses, nor is he gracious in defeat--the sort of chess player who'll lose a game, then tell his opponent the next day he'd have won but for a single move.
Until the mid Fifties, Simon was an economist and political scientist. His 1978 Nobel Prize was in economics. He helped push conventional economics beyond neat (and accurate) supply-and-demand charts and toward the real-world complexity of psychology and behavioral science. His theory of "bounded rationality" subverted the classical view that organizations always make decisions that maximize profits and that, more broadly, individuals always pick the best choice among numerous alternatives. Instead, he observed, people are saddled with too much information and not enough brain power. As a result, whether setting prices or playing chess, they settle for the first choice that's "good enough." In Darwinian terms, it's survival of the fitter.
Despite Simon's nominal shift to AI and cognitive science 40 years ago, the central question underlying all of his research has never changed: How do people make decisions? His explorations of how people wade through a mass of information by making a series of split-second decisions, like a person playing Twenty Questions, led him logically to computers. What tool could better test his theories than programs that mimicked a human's search-and-select strategies?
Unlike many of his peers, Simon isn't interested in electronic superbrains. The human brain is obviously limited in how fast and how capably it can handle information. So Simon scrupulously builds into his artificial systems those same limitations. Computers for him are merely means to an end--understanding how a brain can think.
For his first interview with Omni's Doug Stewart, Simon wore a crisp blue Mao jacket, a souvenir of a trip to China. A self-confident man, he is voluble and unrepentant about his many past pronouncements. To anyone who would challenge his assertion that creativity can be automated, he points to his office walls which are dressed up with computer-made figure drawings. Although he evidently admires the drawings, he also finds them useful as exhibits A, B, and C when making his case to skeptical visitors.
- OMNI
- How have computers taught us that "a mind could be housed in a material body"?
- Simon
- Computers can do a wide range of things people do when they say they're thinking. So computers can think. We know brains do this using neurons in some way, but nobody's quite told us the whole recipe yet. Computers offer us one way thinking can be done by a physical device. Then we see if we can program computers to do these things not just by brute force--computing very rapidly--but by using the same tricks of search and selection that humans appear to use. And they do.
- OMNI
- What is the main goal of AI?
- Simon
- AI can have two purposes. One is to use the power of computers to augment human thinking, just as we use motors to augment human or horse power. Robotics and expert systems are major branches of that. The other is to use a computer's artificial intelligence to understand how humans think, in a humanoid way. If you test your programs not merely by what they can accomplish, but how they accomplish it, then you're really doing cognitive science; you're using AI to understand the human mind.
- OMNI
- So you believe computers think?
- Simon
- My computers think all the time. They've been thinking since 1955 when we wrote the first program to get a computer to solve a problem by searching selectively through a mass of possibilities, which we think is the basis for human thinking. This program, called the Logic Theorist, would discover proofs for a theorem. We picked theorems from Whitehead and Russell's foundation work in logic, Principia Mathematica, because it happened to be on my shelf. To prove a theorem, a human mathematician will start with axioms and use them to search for a proof. The Logic Theorist did quite a similar search, we think, to end up with a proof--when it was lucky. There were no guarantees it would find one, but there are none for the human logician either.
A year or two later, we embodied these ideas in the General Problem Solver, which wasn't limited to logic. Given a problem like, "How do I get to the airport?" it starts with, "What's the difference between where I want to be and where I am now? That's a difference in location, one of 20 miles. What tools do I know that reduce differences like that? You can ride a bike, take a helicopter or taxi. If I pick a taxi, how do I find one?" Again, GPS asks, "How do you get taxis? You telephone them." And so on.
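A minimal Python sketch of the means-ends loop GPS runs, as described above; the airport "operators" and their preconditions are invented for illustration, not GPS's actual knowledge base.

```python
# A toy means-ends analysis loop in the spirit of GPS (illustrative only).
# Each entry names an operator that removes a difference, plus the
# preconditions that must be satisfied first; those preconditions become
# sub-problems solved by the same loop.

OPERATORS = {
    "at-airport":  {"op": "ride taxi",      "needs": ["have-taxi"]},
    "have-taxi":   {"op": "telephone taxi", "needs": ["know-number"]},
    "know-number": {"op": "look up number", "needs": []},
}

def solve(difference, plan):
    """Recursively remove `difference`, satisfying preconditions first."""
    entry = OPERATORS.get(difference)
    if entry is None:
        return False                      # no known tool removes this difference
    for precondition in entry["needs"]:   # set up the sub-problems first
        if not solve(precondition, plan):
            return False
    plan.append(entry["op"])              # now the operator can be applied
    return True

plan = []
if solve("at-airport", plan):
    print(" -> ".join(plan))   # look up number -> telephone taxi -> ride taxi
```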
Every time you set up a problem, it thinks of some method or tool already stored in memory that can remove the difference between where it is and where it wants to be. Each tool requires that certain conditions be met before that tool can be applied, so it then searches its memory for a tool for doing that. Eventually, it finds one it can apply: You call the taxi, it comes, you get in it, and the first thing you know you're delivered to the airport. Notice GPS doesn't try everything--not walking or a helicopter. It knows all sorts of things about walking or helicopters that help it decide they don't work in this situation.
- OMNI
- Did you tell Bertrand Russell, Principia's surviving author, what you had done with Logic Theorist?
- Simon
- Yes, and he wrote back that if we'd told him this earlier, he and Whitehead could have saved ten years of their lives. He seemed amused and, I think, pleased.
- OMNI
- Wouldn't most people feel demeaned that a computer--a primitive one by today's standards--could do what they'd devoted ten years of their lives to?
- Simon
- You know, sometimes I feel terribly demeaned that a horse can run so much faster than I can. But we've known for a long time that there are creatures bigger, stronger, and faster than we are.
- OMNI
- But Principia Mathematica was a celebrated cerebral accomplishment, nothing like an animal's brawn!
- Simon
- It's true that thinking seems a peculiarly human capability, one we're proud of. Cats and dogs think, but they think little thoughts. Why should it be demeaning to us to try to understand how we do something? That's what we're really after. How's thinking done? The farther we go in understanding ourselves, the better off we are.
Still, people feel threatened whenever the uniqueness of the human species is challenged. These kinds of people made trouble for Copernicus and Galileo when they said the earth wasn't the center of the universe, for Darwin when he said maybe various species descended from a few ancestors. I don't know that anybody's been hurt by our not being in the center of the universe, although there are some who continue to lose sleep about Darwin. We'll get used to the fact that thinking is explainable in natural terms just like the rest of our abilities. Maybe we'll take one step further and decide the important thing is that we're part of a much larger system on this pale earth, and maybe beyond, and we'd better learn how to be a part of it and stop worrying about our uniqueness.
- OMNI
- That thinking is explainable is easier to accept than the idea that machines can think.
- Simon
- In general, when we've found something to be explainable in natural terms, we've found ways of building theories for it and imitating it. The Egyptians didn't think building pyramids was such a great thing for humans to do that they shouldn't develop machines to help. Why should we think that thinking is such a great thing that machines shouldn't help us with it? And they have been helping us, for 40 years now.
- OMNI
- A program you worked on in the Seventies rediscovered Kepler's third law of planetary motion. How?
- Simon
- It took raw data, nothing else, and tried to find patterns. We called it BACON, in honor of Sir Francis, because it's inductive. Kepler in the seventeenth century knew the distances of the planets from the sun and their periods of revolution. He thought there ought to be a pattern to these numbers, and after ten years he found it. We gave BACON the same data and said look for the pattern. It saw that when the period got bigger, the distance got bigger. So it divided the two to see if the ratio might be constant. That didn't work, so it tried dividing the distance again. That didn't work either. But now it had two ratios and found that as one got larger, the other got smaller. So it tried multiplying these--maybe their product was a constant. And by golly, it was. In three tries, BACON got the answer.
- OMNI
- A lucky guess!
- Simon
- It wasn't luck at all. BACON was very selective in what it looked at. If two quantities varied together, it looked at their ratio. If they varied in opposite directions, it looked at their product. Using these simple heuristics, or rules of thumb, it found that the square of a planet's period over the cube of its distance is a constant: Kepler's third law. Using those same tricks, BACON found Ohm's law of electrical resistance. It will invent concepts like voltage, index of refraction, specific heat, and other key new ideas of eighteenth- and nineteenth-century physics and chemistry, although, of course, it doesn't know what to call them.
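The two heuristics can be tried directly on rough planetary data; a sketch assuming approximate periods and distances and a crude constancy test (the tolerance is an arbitrary choice):

```python
# BACON's heuristics, as Simon states them above: if two quantities vary
# together, examine their ratio; if they vary in opposite directions,
# examine their product. Data: orbital period P (years), distance D (AU).

P = [0.241, 0.615, 1.000, 1.881, 11.862, 29.457]   # Mercury .. Saturn
D = [0.387, 0.723, 1.000, 1.524,  5.203,  9.539]

def is_constant(xs, tol=0.05):
    """Call a term lawful if its values agree to within a few per cent."""
    return (max(xs) - min(xs)) / max(xs) < tol

# P and D vary together, so try the ratio P/D.
t1 = [p / d for p, d in zip(P, D)]
print(is_constant(t1))            # False

# t1 still rises with D, so divide by D again: P/D^2.
t2 = [t / d for t, d in zip(t1, D)]
print(is_constant(t2))            # False

# t1 rises while t2 falls, so try their product: P^2/D^3.
t3 = [a * b for a, b in zip(t1, t2)]
print(is_constant(t3))            # True -- Kepler's third law, found in three tries
```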
This tells you that using a fairly simple set of rules of thumb allows you to replicate many first-rank discoveries in physics and chemistry. It thereby gives us an explanation, one that gets more convincing every time BACON gives us another example, of how people ever made these discoveries. It gets rid of these genius theories and tells us this is a normal process. People have to be intelligent, but their discoveries are not bolts from the blue.
- OMNI
- Why are rules of thumb so important for computers and humans?
- Simon
- Take something limited like a chessboard. Every time you make a move, you're choosing from maybe 20 possibilities. If your opponent can make 20 replies, that's 400 possibilities. The 20 replies you can then make get you 8,000 possible positions. Searching through 8,000 things is already way beyond a human's limits, so you limit your search. You need rules to select which possibilities are good ones. To exhaust all the possibilities on a chessboard, a player would have to look at more positions than there are molecules in the universe. We have good evidence that grand masters seldom consider more than 100 possibilities at once.
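The arithmetic behind those figures, spelled out (the branching factor of 20 is the round number used above):

```python
# With roughly 20 legal moves per position, an exhaustive lookahead explodes
# after only a few plies, which is why selective rules of thumb matter.

branching = 20
for plies in range(1, 7):
    print(f"{plies} plies: {branching ** plies:,} positions")
# 3 plies already gives 8,000 positions -- far more than the ~100 possibilities
# Simon says a grand master seldom exceeds.
```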
- OMNI
- As computers become more powerful, are these shortcuts less important?
- Simon
- Less important than they are for humans. The best chess computer today can look at 50 million positions before making a move. Yet even with all that speed, it needs tremendous selectivity to avoid looking at 10 to the 20th power positions.
- OMNI
- You and Allen Newell wrote the world's first chess program in the Fifties. How well did it play?
- Simon
- Not well. Hubert Dreyfus, in his book What Computers Can't Do, seemed pleased that it was beaten by a ten-year-old kid. A pretty bright one, I should add. Shortly after Dreyfus observed that, he was beaten by Greenblatt's machine at MIT, but that's a different story. We were trying to explore not how a computer could play chess, but how people played it. So the program played in a humanoid way: it had goals, was highly selective, responded to cues on the board, and so on.
Later in the Sixties, George Baylor and I built MATER, a program specializing in mating situations, going in for the kill. Its criteria tested whether a given move was powerful and explored only those, never looking at more than 100 choices. Chess books report celebrated games where brilliant players made seemingly impossible mating combinations, looking eight or so moves deep. MATER found most of the same combinations.
- OMNI
- It had the same insight as the human champion, so to speak?
- Simon
- You don't have to say "so to speak"! It had the same insight as a human player. We were testing whether we had a good understanding of how human grand masters select their moves in those situations. And we did.
- OMNI
- You talk about a string of serial decisions. Don't grand masters get a chessboard's gestalt by seeing its overall pattern?
- Simon
- A Russian psychologist studying the eye movements of good chess players found that grand masters looked at all the important squares in the first five seconds and almost none of the unimportant ones. That's "getting a gestalt of a position." We wrote a little computer program that did this by following a simple rule. For starters, it picked the biggest piece near the center of the chessboard, then the program found another piece it either attacked or defended. Then the program would focus on the second piece and repeat the process. Lo and behold, it quickly looked at all the important squares and none of the unimportant ones. Show me a situation where ordinary cue-response mechanisms--call them intuitions if you like--can't reproduce those gestalt phenomena!
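A toy rendering of that scanning rule; the position, piece values, and attack/defence links below are invented, and real move generation is omitted:

```python
# Toy version of the eye-movement program Simon describes: start from the
# most valuable piece, then repeatedly jump to a piece the current one
# attacks or defends. The position below is made up for illustration.

VALUE = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}

# piece id -> (piece type, pieces it attacks or defends)
position = {
    "Qd4": ("Q", ["Nf5", "Pd6"]),
    "Nf5": ("N", ["Pd6", "Re7"]),
    "Pd6": ("P", ["Re7"]),
    "Re7": ("R", ["Qd4"]),
    "Ph2": ("P", []),          # an unimportant piece the scan never reaches
}

def scan(position):
    """Visit pieces the way the program did, stopping when nothing new turns up."""
    current = max(position, key=lambda p: VALUE[position[p][0]])  # biggest piece first
    seen = []
    while current and current not in seen:
        seen.append(current)
        links = [p for p in position[current][1] if p not in seen]
        current = links[0] if links else None
    return seen

print(scan(position))   # ['Qd4', 'Nf5', 'Pd6', 'Re7'] -- only the 'important' squares
```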
- OMNI
- But can't good players see several pieces at a glance?
- Simon
- Experiments on perception show we take in all our visual information in a very narrow area. And there's something else: A colleague, Bill Chase, and I did experiments where we took the board of a well-played game after the twentieth move, say, and let chess players look at it for five seconds. A grand master will reproduce the board almost perfectly, maybe 24 or 25 pieces correct. A weekend player will get six or seven correct. You say, "Grand masters have great vision, don't they?"
Now put the same 25 pieces on the board but completely at random, with no regard for the rules of chess. Again, the ordinary player puts six or seven pieces back. This time, the grand master puts six or seven pieces back, maybe one more. Clearly, what the grand master is seeing isn't pieces, but familiar patterns of pieces--a fianchettoed castled-king position or whatever. It's an act of recognition, just as you'd recognize your mother coming down the street. And with that recognition comes all sorts of information.
A grand master can play chess with 50 patzers, moving from board to board every few seconds, and at the end of the evening, he's won 48 of the games. How? He doesn't have time to look ahead, so he looks for cues. He plays ordinary opening moves, hardly looking at the board until he notices an opponent has created a situation he knows is an error. He recognizes it as a feature on the chessboard, just as a doctor sees a symptom and says, "Oh, you've got the measles." The grand master says, "A doubled pawn! He's in bad trouble."
- OMNI
- You've compared complex human behavior to an ant's. How so?
- Simon
- When you watch an ant follow a tortuous path across a beach, you might say, "How complicated!" Well, the ant is just trying to go home, and it's got to climb over little sand dunes and around twigs. Its path is generally pointed toward its goal, and its maneuvers are simple, local responses to its environment. To simulate an ant, you don't have to simulate that wiggly path, just the way it responds to obstacles.
Say you want to program a computer to simulate a person playing the piano. Sure, a Bach fugue or Mozart sonata is complicated. When people play them you see their fingers doing all these complicated things. But you can predict every motion if you know what the notes are. The complexity is in the notes. The fingers just do the few things that fingers can.
Maybe people doing apparently abstruse and complicated things are just dealing with an abstruse and complicated environment, that is, what they can sense in the natural environment and what they've stored in memory. What makes an expert's behavior so peculiar is the knowledge, not anything peculiar about his or her brain. We're betting that the process underlying thinking is actually very straightforward: selective search and recognition.
- OMNI
- Don't we need to know what neurons are doing?
- Simon
- Eventually. Science is built up of levels. At the bottom level, neuropsychology looks at neurons. At a higher level, you have symbolic psychology, information processing. People are trying to link these levels, but we have few clues about linkage. Nobody yet knows the neurobiological counterpart of a symbol. Until these levels are linked, I find it useful to work on symbolic-level theory, just as people do biochemistry without worrying about the atomic nucleus.
- OMNI
- You've argued that empirical knowledge, not theoretical postulates, must guide computer-system design. Why? What's the matter with theory?
- Simon
- It's claimed that you can't have an empirical computer science because these are artificial objects; therefore, they're whatever you make them. That's not so. They're whatever you can make them. You build a system you hope has a certain behavior and see if it behaves that way. That's most of what we've been doing in the last 35 years.
Not everyone agrees computer science is empirical. Theorists dream up postulates and use them to prove mathematical theorems about computers, then they hope what they've proved has something to do with the real world. But even physicists don't get their postulates out of thin air; they run experiments first. In computer science, the only way we'll know what assumptions to start with is through experience with many systems, which will tell us how the world really is. Take parallel processing: we've learned what kind of parallelism we can and can't have mostly by running experiments on parallel machines. Humans are at their best when they interact with the real world and draw lessons from the bumps and bruises they get.
- OMNI
- Is this analogous to objections you voiced to classical economics early in your career?
- Simon
- It certainly is. Economists have become so impressed with what mathematics has done for physicists that they spend much of their time building big mathematical models and worrying about their rigor. This work usually proves fruitless, because they're allowed to sit down in an armchair and put any kind of crazy assumptions they want into those models.
Not inconsequentially, I started out in political science, not economics. Political scientists have a deep respect for facts--going out and observing, which I did a lot of. When I was 19, I did a study of how people working for the Milwaukee city government made budget decisions--in particular, how they chose between planting trees and hiring a recreation director. That work led to my Ph.D. thesis and first book, Administrative Behavior, in the late Forties.
Classical economic theory assumes that decision makers, whether groups or individuals, know everything about the world and use it all to calculate the optimal way to behave. Well, in the case of a firm, there are a zillion things that firm doesn't know about its environment, two zillion things it doesn't know about possible products or marketing methods that nobody's ever thought of, and more zillions of calculations it can't make, even if it had all the facts needed to dump into the calculations. This is a ridiculous view of what goes on.
To go into a firm and evaluate the actual decision-making process, you must find out what information they have, what they choose to focus on, and how they actually process that information. That's what I've been doing all these years. That's why my AI work is a natural continuation of what I did earlier in economics. It's all an attempt to see how decision making works: first at the individual level--how is it possible to solve problems with an instrument like a human brain?--and at the group level, although I've never gotten back to that level.
- OMNI
- You've said human decision makers, instead of making the "best choice," always settle for "what's good enough." Even in choosing a spouse?
- Simon
- Certainly. There are hundreds of millions of eligible women in the world at any given time. I don't know anybody who's gone the rounds before making the choice. As a result of experience, you get an idea of which women will tolerate you and which women you will tolerate. I don't know how many women I looked at before I met my wife. I doubt it was even 1,000. By the way, I've stayed married for 56 years.
- OMNI
- Congratulations. Why did you shift from economics to AI and cognitive psychology?
- Simon
- I looked at the social sciences as fresh territory. They needed a good deal more rigor, so I studied applied mathematics and continued to study it even after I left the university. In economics, you can always turn prices and quantities into numbers, but how do you add rigor to concepts in political science like political power and natural language?
I saw the limits of using tools like differential equations to describe human behavior. By chance, I'd had contact with computers almost from the time they were invented in the Forties, and they fascinated me. At a think tank on the West Coast called the Rand Corporation, in the early Fifties, I'd seen Allen Newell and Cliff Shaw using a computer to superimpose pictures of planes flying over a map. Here was a computer doing much more than cranking out numbers; it was manipulating symbols. To me, that sounded a lot like thinking. The idea that computers could be general-purpose problem solvers was a thunderclap for me. I could use them to deal with phenomena I wanted to talk about without turning to numbers. After that, there was no turning back. Eventually we managed to build rigorous theories about human behavior. You could give a program the same stimuli you gave human test subjects. You could state your problem in English, not as equations, and let the computer form a representation of that and go to work solving it.
- OMNI
- But beneath these symbolic representations, isn't a computer just crunching numbers?
- Simon
- [loudly] No, of course the computer isn't! Open up the box of a computer, and you won't find any numbers in there. You'll find electromagnetic fields. Just as if you open up a person's brain case, you won't find symbols; you'll find neurons. You can use those things, either neurons or electromagnetic fields, to represent any patterns you like. A computer could care less whether those patterns denote words, numbers, or pictures. Sure, in one sense, there are bits inside a computer, but what's important is not that they can do fast arithmetic but that they can manipulate symbols. That's how humans can think, and that's the basic hypothesis I operate from.
- OMNI
- Are there limits to machine intelligence that don't apply to humans?
- Simon
- None that I know of, but that's an empirical question. In the long run, it's only going to be settled by experiments. If there's a boundary, we'll know about it sooner or later.
- OMNI
- Are there decisions you'd never leave to a computer, even an advanced future machine?
- Simon
- Provided I know how the computer is programmed, the answer is no. Years ago, when I flew a great deal, and particularly if I were landing at La Guardia on a bad day, I'd think, I hope there's a human pilot on board. Now, in similar weather, I say, "I hope this is being landed by a computer." Is that a switch of loyalty? No, just an estimate that computers today have advanced to the point where they can land planes more reliably than humans.
- OMNI
- Would you let a computer be the jury in a criminal trial?
- Simon
- Again, I'd want to know what that computer knew about the world, what kinds of things it was letting enter into its judgment, and how it was weighing evidence. As to whether a computer could be more accurate in judging a person's guilt, I don't lack confidence that it could be done. Standardized tests like the Minnesota Multiphasic [Personality] Inventory can already make better predictions about people than humans can. We predict how well students will do at Carnegie-Mellon using their high-school test scores and grade-point averages. When you compare those predictions with the judgments after an interview, the tests win every time.
- OMNI
- Estimating probabilities is one thing; applying human wisdom is another. Do you accept the idea of human wisdom?
- Simon
- Oh, human beings know a lot of things, some of which are true, and apply them. When we like the results, we call it wisdom. But yes, there's something called human wisdom, and if you want a computer to be a good juror, it'd better have wisdom.
Computer programs that do medical diagnosis are good enough to be called in as consultants on difficult cases. If you want to call what they have "wisdom," great. If you want to call it "intuition," great. What they really have is a lot of knowledge about a domain, the ability to retrieve that knowledge when they notice cues called symptoms--or lack of symptoms--and the ability to put it together and say, "I ruled this and that out, so the ailment is probably this." That's what doctors do.
- OMNI
- When King Solomon discovered a baby's rightful mother by ordering the infant cut in half, he invoked wisdom beyond a good medical diagnosis.
- Simon
- Let's separate the Solomon story into two parts. First: What did Solomon know that led him to believe the real mother wouldn't like such an offer? Well, he knew what everyone knew--that mothers don't like harm to come to their children. That wasn't really wisdom. Second: How did he design a test using this knowledge? That's harder, but I don't think he used wisdom to do this: I'd call it cleverness. Similar problems happen to be an area of active research right now.
Let me suggest an example that will seem remote from that, for a moment: the mutilated checkerboard problem. If you plunk 32 dominoes on a checkerboard so each domino covers two squares, you'll cover the whole board. It's easy: you put four dominoes in each row, and 4 times 8 is 32. Now if I cut out the two squares at the northwest and southeast corners, can you cover the remaining 62 squares with one less domino? If you ask test subjects to do this, they get very frustrated.
Occasionally, though, someone notices that two squares are always left uncovered and that those squares are always red, never black. That starts people thinking about color. They'll say, "Every domino covers one red and one black square. That means no matter how many dominoes I use, they'll always cover an equal number of red and black. So if I cut off two black squares, it isn't going to work because I'm always stuck with more red squares than black ones."
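The colour-counting argument can be checked mechanically; a small sketch, with squares coloured by coordinate parity:

```python
# The colour argument, checked by counting: remove two opposite corners
# (which share a colour) and compare the number of squares of each colour
# with what 31 dominoes could ever cover.

squares = {(row, col) for row in range(8) for col in range(8)}
squares -= {(0, 0), (7, 7)}          # cut off the northwest and southeast corners

black = sum((row + col) % 2 == 0 for row, col in squares)
red   = sum((row + col) % 2 == 1 for row, col in squares)
print(black, red)                    # 30 32 -- the removed squares were both black

# Each domino covers one black and one red square, so 31 dominoes would need
# 31 of each colour; with only 30 black squares left, no tiling exists.
print(min(black, red) >= 31)         # False
```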
In fact, we know how to program a computer that probably will figure this kind of thing out. When it's trying solutions to a particular problem, it will keep its eyes open for things that happen every time, like two red squares always being left. Then it will redescribe the problem in terms of that feature. That's the Solomon solution. You focus on the fact that a mother doesn't want her child harmed, and that leads you to threaten the child in order to see who reacts.
- OMNI
- Is creativity anything more than problem-solving?
- Simon
- I don't think so. What's involved in being creative? The ability to make selective searches. For that, you first need knowledge and then the ability to recognize cues indexed to that knowledge in particular situations. That lets you pull out the right knowledge at the right time. The systems we built to simulate scientific or any kind of creativity are based on those principles.
- OMNI
- What about an artist's ability to create something beautiful?
- Simon
- Like a painting? Harold Cohen, an English painter at the University of California at San Diego, wanted to understand how he painted, so he tried writing a computer program that could paint in an aesthetically acceptable fashion. This program, called AARON, has gone through many generations now. AARON today makes some really smashing drawings. I've got a number of them around my house. It's now doing landscapes in color with human figures in them [pulling a book from his shelf].
These were all done on the same day, a half hour apart. These figures seem to be interacting with each other. Aren't they amazing? There's a small random element in the program; otherwise, it would just keep reproducing the same drawing. Clearly, Cohen has fed AARON a lot of information about how to draw--don't leave too much open space, don't distribute objects too evenly, and so forth--whereas human artists have to learn these things on their own. The interesting question is, what does a computer have to know in order to create drawings that evoke the same responses from viewers that drawings by human artists evoke? What cues have to be in the picture?
- OMNI
- Why does this strike me as rather unethical?
- Simon
- I don't know. You'll have to explain it to me because it doesn't strike me as unethical.
- OMNI
- Vincent Van Gogh's great creativity supposedly sprang from his tortured soul. A computer couldn't have a soul, could it?
- Simon
- I question whether we need that hypothesis. I wouldn't claim AARON has created great art. That doesn't make AARON subhuman. One trap people fall into in this "creative genius" game is to say, "Yes, but can you do Mozart?" That isn't the right test. There are degrees of creativity. If Mozart had never lived, we would regard lesser composers as creative geniuses because we wouldn't be using Mozart as a comparison.
As to whether a human being has to be tortured to make great art, I don't know of any evidence that Picasso was tortured. I do know he had a father who taught him great technique. The technique he used as a kid just knocks your eyes out; it helped make his Blue Period possible a few years later in Paris. I don't know what that last little bit of juice is--yet. I always suspect these "soul" theories because nobody will tell me what the soul is. And if they do, we'll program one. [laughs]
Here's our friend van Gogh with his ear missing [opens another book]. I don't know whether you need a soul to paint that....The colors of these sunflowers are intense, certainly. There's a forsythia hedge I pass every morning when I walk to my office. When it blooms in the spring, especially if there's a gray sky behind it, the flowers just knock me out. I don't think that hedge has a soul. It has intensity of color, and I'm responding to that.
- OMNI
- Van Gogh shot himself soon after he painted "Wheat Field with Crows," so my emotional response to seeing it is inseparable from that knowledge. AARON's at a disadvantage in that sense.
- Simon
- Well, Cohen could invent a history for AARON. It could shoot its ear off.
- OMNI
- Can a machine automate creativity then?
- Simon
- I think AARON has. I think BACON has.
- OMNI
- Could a computer program have come up with your theory of bounded rationality?
- Simon
- [testily] In principle, yes. If you ask me if I know how to write that program this month, the answer is no.
- OMNI
- Does intuition count for anything?
- Simon
- Sure, it's terribly important. How can you tell when people have had an intuition? You give them a problem, and all of a sudden, maybe after a pause, they get the answer. And they can't tell you how they get it. Well, how do you recognize your mother? If she walked down the street, I wouldn't recognize her. You have a body of information in here [taps head] with cues connected to it. Whenever you notice a cue, it evokes some of that information. Most of the time, you can't tell what it was about the cue that evoked the information. You had an intuition, an insight. As far as I'm concerned, insight, intuition, and recognition are all synonymous.
- OMNI
- You say people never have correct intuitions in areas where they lack experience. What about child prodigies? How can a 12-year-old violin virtuoso pack so much practice into so few years?
- Simon
- They do. But when a kid 12 years old makes it on the concert circuit, it's because he or she is a kid. Was Yehudi Menuhin ever really an adult artist? We have data on this; we don't have to speculate. Either out of conviction or a desire to earn money, the teacher says, "Gee, your kid is doing well at the piano." The kid gets gratification from being complimented and from not having to do other things because they have to practice instead.
Then the teacher says, "I've brought this kid along as far as I can. You'd better find a more experienced teacher." So they find the best teacher in town. Then they go national. It goes that way without exception. A study comparing top solo musicians with people good enough to teach or play in orchestras found an enormous difference in the numbers of hours each group puts in. Does that mean you can make people into geniuses by beating them into working 80 hours a week? No. But a large percentage of the difference between human beings at these high levels is just differences in what they know and how they've practiced.
- OMNI
- So Albert Einstein didn't invent the theory of relativity in a blaze of insight, but rather prepared himself by amassing experience and learning to recognize patterns?
- Simon
- Einstein was only 26 when he invented special relativity in 1905, but do you know how old he was when he wrote his first paper on the speed of light?--15 or 16. That's the magic ten years. It turns out that the time separating people's first in-depth exposure to a field and their first world-class achievement in that field is ten years, neither more nor less by much. Einstein knew a hell of a lot about light rays and all sorts of odd information related to them by the time he turned 26.
- OMNI
- You talk about machines thinking and humans thinking as interchangeable, but could a machine simulate human emotion?
- Simon
- Some of that's already been done. Psychiatrist Kenneth Colby built a model of a paranoid patient called PARRY. Attached to some of the things in its memory are symbols that arouse fear or anger, which is the way we think emotions are triggered in humans. You hear the word father, and that stirs up fear or whatever fathers are supposed to stir up. When you talk to PARRY, the first thing you know it's getting angry at you or refusing to talk. PARRY is very hard to calm down once it gets upset.
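A toy sketch of that trigger mechanism; the loaded words, weights, threshold, and replies are invented, and the real PARRY was far more elaborate:

```python
# Toy PARRY-like responder: certain words in memory carry an affect charge,
# and once accumulated anger crosses a threshold the replies change.

TRIGGERS = {"father": 0.4, "police": 0.5, "hospital": 0.3}

class ToyParry:
    def __init__(self):
        self.anger = 0.0

    def reply(self, utterance):
        for word, charge in TRIGGERS.items():
            if word in utterance.lower():
                self.anger += charge               # emotionally loaded symbol noticed
        self.anger = max(0.0, self.anger - 0.05)   # slow decay: hard to calm down
        if self.anger > 0.6:
            return "I don't want to talk about that."
        return "Go on."

p = ToyParry()
for line in ["How are you today?", "Tell me about your father.", "Did the police get involved?"]:
    print(p.reply(line))   # "Go on.", "Go on.", then the withdrawn reply
```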
- OMNI
- But is machine simulation of an emotion the same as a machine being emotional?
- Simon
- I find that hard to decide. The issue of thinking is very clear to me because the program solves the problem the human solves, whereas so far the simulation of emotion has been much more gross. A computer that responds in the same way as people having emotions respond is not impossible, at a crude level.
Again, there's an interesting double standard. In biology, we demonstrate mutations in fruit flies and immediately conclude that, if one were clever enough, we could figure out what the millions of genes in humans do. In AI, the rule seems to be that when you tell somebody your computer plays pretty good chess, they say, "But can it do hopscotch?" They draw the line right at the particular things you've done, and don't generalize as they do in physics or biology. Now I wonder why that is.
- OMNI
- Maybe it's that AI, unlike genetics, has had a disappointing record over the last. . .
- Simon
- That was a myth that Mr. Dreyfus began promulgating back in the Seventies.
- OMNI
- He's not the only one.
- Simon
- He started it. There's a mythical history of AI in which an initial burst of activity was followed by a long period of disappointment. Richard Bellman used the metaphor of climbing a tree to get to the moon: "They've made great progress to the top of the tree, but they're not at the moon yet." If you measure the amount of research published, you'll see that this field has been progressing very nicely, thank you, for 35 years. It's the same in other sciences--physics had great years in 1900, 1905, 1913, and 1925, and mostly confusion in between. Read Niels Bohr around 1920. He was tearing his hair out: "Things are getting worse rather than better. Phenomena that were explained in 1913 are now unexplained again."
- OMNI
- Some say AI has had a disappointing record of progress. What about all the rosy predictions from AI researchers?...
- Simon
- Starting with mine. In 1957, I predicted four things would happen within ten years. First, music of aesthetic interest would be composed by a computer. Second, most psychological theories would take the form of computer programs. Third, a significant mathematical theorem would be proved by a computer. Fourth, a computer would be chess champion of the world. We could quibble about the word most in the psychological-theory predictions--our GPS program is widely accepted as are a number of others--otherwise, all but my chess prediction actually took place in the following ten years.
- OMNI
- Hmm. Isn't the music verdict pretty subjective?
- Simon
- Not at all. Hiller and Isaacson at the University of Illinois used a computer to compose the Illiac Suite and the Computer Cantata. Without identifying the music, I played records of these for several professional musicians, and they told me they found it aesthetically interesting--I didn't say it had to be great music--so that passed my test. So what's subjective?
- OMNI
- You don't back down at all on your predictions?
- Simon
- No. And on my chess prediction, I was off by a factor of four. It'll take 40 years, not 10, for a computer to be world champion. My alibi is that I thought the field was so exciting that there would be a huge increase in effort on computer chess, and there wasn't.
- OMNI
- Do you ever admit you're wrong?
- Simon
- Oh sure, I do it all the time. My wife couldn't live with me if I didn't. But on these things I wasn't wrong.
- OMNI
- Except the chess.
- Simon
- Except the chess...by a factor of four.