Taking state money to go to the United States and ask Simon a few questions of little substance,
posing for a photo, coming home to publish a book, and calling the assignment done.
The cause, his wife, Aliette, said, was pancreatic cancer.
The Polish-born French mathematician founded the field of fractal geometry about four decades ago, the first broad attempt to quantitatively investigate the notion of roughness. In 2003, a story in The Journal News described fractal geometry as "a way to measure the rough and tumble real world. Nature abounds with complex shapes, from trees to snowflakes to mountains."
Fractals now are used in many fields. Fractal equations help Hollywood computer artists create more realistic landscapes and doctors measure irregularly shaped red-blood cells to develop better medication.
"As a physicist, I was trying to find ways of representing the messiness of nature. How many flat things do we see in nature? Maybe a lake when there is no breeze and no fish, but not many more," Mandelbrot told The Journal News in 2003. "How many flat things do we see in industry? Everything! This cabinet, the wall, and so on. So industry took some very basic shapes which are rare in nature and made them everywhere."
He received the Japan Prize that year, a prestigious award that is described as being second only to the Nobel Prize. It's awarded annually by the Science and Technology Foundation of Japan to scientists whose work has "advanced the frontiers of knowledge and served the cause of peace and prosperity for mankind."
Mandelbrot shaped many of his theories during his time at IBM's Thomas J. Watson Research Center in Yorktown. Later, he became Sterling Professor Emeritus of Mathematical Sciences at Yale University.
He was born in Warsaw and emigrated to France with his family in 1936 to escape the growing German threat. They eventually settled in Lyon, where they endured World War II.
"I have had a very adventurous life," he told The Journal News in 2003. "I almost perished a number of times in my youth. I survived all kinds of complications. In the war in France I was almost killed by Germans or French police. I was 19 at the time. At 20 I was a war-hardened veteran, even though I had never seen military action, because I happened to have survived."
He is survived by his wife, two sons and three grandchildren.
The Associated Press contributed to this report.
Benoît B. Mandelbrot, a maverick mathematician who developed the field of fractal geometry and applied it to physics, biology, finance and many other fields, died on Thursday in Cambridge, Mass. He was 85.
The cause was pancreatic cancer, his wife, Aliette, said. He had lived in Cambridge.
Dr. Mandelbrot coined the term “fractal” to refer to a new class of mathematical shapes whose uneven contours could mimic the irregularities found in nature.
“Applied mathematics had been concentrating for a century on phenomena which were smooth, but many things were not like that: the more you blew them up with a microscope the more complexity you found,” said David Mumford, a professor of mathematics at Brown University. “He was one of the primary people who realized these were legitimate objects of study.”
In a seminal book, “The Fractal Geometry of Nature,” published in 1982, Dr. Mandelbrot defended mathematical objects that he said others had dismissed as “monstrous” and “pathological.” Using fractal geometry, he argued, the complex outlines of clouds and coastlines, once considered unmeasurable, could now “be approached in rigorous and vigorous quantitative fashion.”
For most of his career, Dr. Mandelbrot had a reputation as an outsider to the mathematical establishment. From his perch as a researcher for I.B.M. in New York, where he worked for decades before accepting a position at Yale University, he noticed patterns that other researchers may have overlooked in their own data, then often swooped in to collaborate.
“He knew everybody, with interests going off in every possible direction,” Professor Mumford said. “Every time he gave a talk, it was about something different.”
Dr. Mandelbrot traced his work on fractals to a question he first encountered as a young researcher: how long is the coast of Britain? The answer, he was surprised to discover, depends on how closely one looks. On a map an island may appear smooth, but zooming in will reveal jagged edges that add up to a longer coast. Zooming in further will reveal even more coastline.
“Here is a question, a staple of grade-school geometry that, if you think about it, is impossible,” Dr. Mandelbrot told The New York Times earlier this year in an interview. “The length of the coastline, in a sense, is infinite.”
In the 1950s, Dr. Mandelbrot proposed a simple but radical way to quantify the crookedness of such an object by assigning it a “fractal dimension,” an insight that has proved useful well beyond the field of cartography.
Over nearly seven decades, working with dozens of scientists, Dr. Mandelbrot contributed to the fields of geology, medicine, cosmology and engineering. He used the geometry of fractals to explain how galaxies cluster, how wheat prices change over time and how mammalian brains fold as they grow, among other phenomena.
His influence has also been felt within the field of geometry, where he was one of the first to use computer graphics to study mathematical objects like the Mandelbrot set, which was named in his honor.
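The iteration behind the Mandelbrot set is simple enough to sketch in a few lines (a minimal illustration; the viewing window, resolution, and iteration cap below are arbitrary choices): repeatedly square a complex number and add the test point, and ask whether the result stays bounded.

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Iterate z -> z*z + c from z = 0; c belongs to the set
    if the orbit stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2 the orbit must diverge
            return False
    return True

# Coarse ASCII rendering of the set on the complex plane.
for row_index in range(11):
    y = 1.2 - 0.24 * row_index
    row = ""
    for col_index in range(40):
        x = -2.0 + 0.075 * col_index
        row += "#" if in_mandelbrot(complex(x, y)) else "."
    print(row)
```

Early computer-graphics studies of the set were essentially this loop run at high resolution, with escape times mapped to colors.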
“I decided to go into fields where mathematicians would never go because the problems were badly stated,” Dr. Mandelbrot said. “I have played a strange role that none of my students dare to take.”
Benoît B. Mandelbrot (he added the middle initial himself, though it does not stand for a middle name) was born on Nov. 20, 1924, to a Lithuanian Jewish family in Warsaw. In 1936 his family fled the Nazis, first to Paris and then to the south of France, where he tended horses and fixed tools.
After the war he enrolled in the École Polytechnique in Paris, where his sharp eye compensated for a lack of conventional education. His career soon spanned the Atlantic. He earned a master’s degree in aeronautics at the California Institute of Technology, returned to Paris for his doctorate in mathematics in 1952, then went on to the Institute for Advanced Study in Princeton, N.J., for a postdoctoral degree under the mathematician John von Neumann.
After several years spent largely at the Centre National de la Recherche Scientifique in Paris, Dr. Mandelbrot was hired by I.B.M. in 1958 to work at the Thomas J. Watson Research Center in Yorktown Heights, N.Y. Although he worked frequently with academic researchers and served as a visiting professor at Harvard and the Massachusetts Institute of Technology, it was not until 1987 that he began to teach at Yale, where he earned tenure in 1999.
Dr. Mandelbrot received more than 15 honorary doctorates and served on the board of many scientific journals, as well as the Mandelbrot Foundation for Fractals. Instead of rigorously proving his insights in each field, he said he preferred to “stimulate the field by making bold and crazy conjectures” — and then move on before his claims had been verified. This habit earned him some skepticism in mathematical circles.
“He doesn’t spend months or years proving what he has observed,” said Heinz-Otto Peitgen, a professor of mathematics and biomedical sciences at the University of Bremen. And for that, he said, Dr. Mandelbrot “has received quite a bit of criticism.”
“But if we talk about impact inside mathematics, and applications in the sciences,” Professor Peitgen said, “he is one of the most important figures of the last 50 years.”
Besides his wife, Dr. Mandelbrot is survived by two sons, Laurent, of Paris, and Didier, of Newton, Mass., and three grandchildren.
When asked to look back on his career, Dr. Mandelbrot compared his own trajectory to the rough outlines of clouds and coastlines that drew him into the study of fractals in the 1950s.
“If you take the beginning and the end, I have had a conventional career,” he said, referring to his prestigious appointments in Paris and at Yale. “But it was not a straight line between the beginning and the end. It was a very crooked line.”
Smarter Than You Think
Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.
Grasping Language
Articles in this series are examining the recent advances in artificial intelligence and robotics and their potential impact on society.
Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.
Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.
“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.
The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”
NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.
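The relation inference described above can be sketched as a toy program (illustrative names and sentences only; this is not NELL's actual algorithm or data): combine known category members with text patterns previously learned to signal a relation, and propose a new fact whenever a pattern links members of the right two categories.

```python
def propose_relations(sentences, players, teams, patterns):
    """Propose (player, 'plays for', team) facts whenever a previously
    learned text pattern links a known player to a known team."""
    facts = set()
    for s in sentences:
        for player in players:
            for team in teams:
                if any(f"{player} {pat} the {team}" in s for pat in patterns):
                    facts.add((player, "plays for", team))
    return facts

players = {"Peyton Manning"}             # "football player" category members
teams = {"Indianapolis Colts"}           # "football team" category members
patterns = ["threw for", "signed with"]  # patterns learned for the relation

sentences = [
    "Peyton Manning threw for the Indianapolis Colts on Sunday.",
    "The kicker practiced alone.",
]
print(propose_relations(sentences, players, teams, patterns))
```

Note that the fact is inferred from a pattern plus category knowledge — the program never needs to read the literal sentence "Peyton Manning plays for the Colts."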
The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.
NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies — formal descriptions of concepts and relationships — to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a “semantic Web.”
Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.
For example, I.B.M.’s “question answering” machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show “Jeopardy!” Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like “U.S. presidents” and “cheeses.”
Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. “What’s exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help,” said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.
Computers that understand language, experts say, promise a big payoff someday. The potential applications range from smarter search (supplying natural-language answers to search queries, not just links to Web pages) to virtual personal assistants that can reply to questions in specific disciplines or activities like health, education, travel and shopping.
“The technology is really maturing, and will increasingly be used to gain understanding,” said Alfred Spector, vice president of research for Google. “We’re on the verge now in this semantic world.”
With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.
Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”
NELL, Dr. Mitchell explains, is designed to be able to grapple with words in different contexts, by deploying a hierarchy of rules to resolve ambiguity. This kind of nuanced judgment tends to flummox computers. “But as it turns out, a system like this works much better if you force it to learn many things, hundreds at once,” he said.
For example, the text-phrase structure “I climbed XXX” very often occurs with a mountain. But when NELL reads, “I climbed stairs,” it has previously learned with great certainty that “stairs” belongs to the category “building part.” “It self-corrects when it has more information, as it learns more,” Dr. Mitchell explained.
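The interplay of pattern mining and self-correction in the "I climbed stairs" example can be sketched as follows (a toy illustration with assumed names, not NELL's real implementation): a text pattern proposes new category members, but a candidate already placed in another category with high confidence is vetoed.

```python
import re

# Facts learned earlier with high confidence.
known = {"stairs": "building part"}

def propose_mountains(sentences):
    """Propose new 'mountain' members from the pattern 'I climbed XXX',
    letting higher-confidence prior knowledge veto a candidate."""
    proposed = []
    for s in sentences:
        m = re.search(r"I climbed (\w+)", s)
        if m and m.group(1) not in known:  # the self-correction step
            proposed.append(m.group(1))
    return proposed

print(propose_mountains(["I climbed Everest.", "I climbed stairs."]))
```

Running hundreds of categories at once is what makes the veto possible: the "building part" knowledge is only available to correct the "mountain" pattern because both are being learned together.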
NELL, he says, is just getting under way, and its growing knowledge base of facts and relations is intended as a foundation for improving machine intelligence. Dr. Mitchell offers an example of the kind of knowledge NELL cannot manage today, but may someday. Take two similar sentences, he said. “The girl caught the butterfly with the spots.” And, “The girl caught the butterfly with the net.”
A human reader, he noted, inherently understands that girls hold nets, and girls are not usually spotted. So, in the first sentence, “spots” is associated with “butterfly,” and in the second, “net” with “girl.”
“That’s obvious to a person, but it’s not obvious to a computer,” Dr. Mitchell said. “So much of human language is background knowledge, knowledge accumulated over time. That’s where NELL is headed, and the challenge is how to get that knowledge.”
A helping hand from humans, occasionally, will be part of the answer. For the first six months, NELL ran unassisted. But the research team noticed that while it did well with most categories and relations, its accuracy on about one-fourth of them trailed well behind. Starting in June, the researchers began scanning each category and relation for about five minutes every two weeks. When they find blatant errors, they label and correct them, putting NELL’s learning engine back on track.
When Dr. Mitchell scanned the “baked goods” category recently, he noticed a clear pattern. NELL was at first quite accurate, easily identifying all kinds of pies, breads, cakes and cookies as baked goods. But things went awry after NELL’s noun-phrase classifier decided “Internet cookies” was a baked good. (Its database related to baked goods or the Internet apparently lacked the knowledge to correct the mistake.)
NELL had read the sentence “I deleted my Internet cookies.” So when it read “I deleted my files,” it decided “files” was probably a baked good, too. “It started this whole avalanche of mistakes,” Dr. Mitchell said. He corrected the Internet cookies error and restarted NELL’s bakery education.
His ideal, Dr. Mitchell said, was a computer system that could learn continuously with no need for human assistance. “We’re not there yet,” he said. “But you and I don’t learn in isolation either.”
"The condensation of a group's history into a single story is particularly important. In this spirit, Benedict Anderson (1991) argued that a sense of nationhood involves the understanding of an imagined community, built on a history constructed through a careful combination of forgetting, invention, and interpretation." — James March (2010)
1999
I also discussed with him the "quality" of university education in Greater China (for example, genuine empirical scientific education has never taken root in the school system, which is why superstition and pseudo-science run so rampant) and its "quantity" (Taiwan's higher education falls short of what its people expect, while mainland China's investment in higher education remains far below what its development requires; Taiwan's so-called "pursuit of excellence" lavishes money on higher education while the quality and quantity of primary schooling and other basic education and facilities remain worrying...).
I had hoped to found a free online SIMON university. He felt there were already too many universities, and that the most important task, now and for generations to come, is to use modern communication technology to connect and share the resources of the world's strong universities with local communities.
I hope someday to compile a volume of Simon on Education; we can return to that another time.
Recently I have been reading Professor Chang Han-yu's translation of R. H. Tawney's Land and Labour in China (1995, Hsieh-chih Industrial Library series; original published in 1929), which contains much that matters: "...What the nation needs is educated men, not uneducated graduates... substance must never again be sacrificed for mass production. The emphasis should fall on teaching students to think for themselves — the more laborious course..." (Chinese translation, pp. 206-07)
R. H. Tawney was truly a master. His overall advice for China's modernization quoted a line from Faust: "Unless it springs from your own soul, you will gain nothing that refreshes the spirit." That is: whatever does not well up from your own heart can do nothing to invigorate your mind. (p. 209)
Named by Public Administration Review as "Book of the Half Century," Administrative Behavior is considered one of the most influential books on social science thinking, and was referred to by the Nobel Committee as "epoch-making."
Written for managers and other professionals who wish to understand the decision-making processes at the heart of organization and management, it is also essential reading for students in business and management, economics, sociology, psychology, computer science, government, and law.