Friday, December 20, 2013

The New Palgrave Dictionary of Economics, Second Edition, 2008

In the 1987 edition of The New Palgrave, edited by Eatwell, Milgate & Newman, Herbert A. Simon wrote several entries:
Behavioural Economics
Bounded Rationality
Causality in Economic Models
Evans, Griffith Conrad, 1887–1973
Satisficing

*****http://www.dictionaryofeconomics.com/dictionary
Search results


Your search for "herbert simon" over the entire article content within the 2008, 2009, 2010, 2011, 2012 and 2013 editions returned 26 results.
1. Simon, Herbert A. (1916–2001)

This article discusses how Simon's vision for behavioural economics (and social science generally) was found in the context of his early work in public ...
By Mie Augier. From The New Palgrave Dictionary of Economics, Second Edition, 2008
2. Ando, Albert K. (1929–2002)

Albert K. Ando was an eminent Japanese-born American economist who made many seminal contributions in a broad range of areas of economics. Born in Tokyo, ...
By Charles Yuji Horioka. From The New Palgrave Dictionary of Economics, Second Edition, 2008
3. superstars, economics of

Gigantic incomes and rare talents attract attention and elicit a search for an explanation. Sherwin Rosen has provided us with an elegant neoclassical ...
By Walter Y. Oi. From The New Palgrave Dictionary of Economics, Second Edition, 2008
4. rationality, bounded

‘Bounded rationality’ refers to rational choice that takes into account the cognitive limitations of the decision-maker – limitations of both knowledge ...
By Herbert A. Simon. From The New Palgrave Dictionary of Economics, Second Edition, 2008
5. conventionalism

Conventionalism is the methodological doctrine that asserts that explanatory ideas should not be considered true or false but merely better or worse. ...
By Lawrence A. Boland. From The New Palgrave Dictionary of Economics, Second Edition, 2008
6. Sargent, Thomas J. (born 1943)

Thomas J. Sargent is the 2011 recipient of the Nobel Prize in Economic Sciences (along with Christopher Sims). Sargent has been instrumental in the development ...
By Esther-Mirjam Sent. From The New Palgrave Dictionary of Economics, Online Edition, 2012
7. Granger–Sims causality

The concept of Granger–Sims causality is discussed in its historical context. There follows a review of the subsequent literature that explored conditions ...
By G. M. Kuersteiner. From The New Palgrave Dictionary of Economics, Second Edition, 2008
8. causality in economics and econometrics

Economics was conceived as early as the classical period as a science of causes. The philosopher–economists David Hume and J. S. Mill developed the conceptions ...
By Kevin D. Hoover. From The New Palgrave Dictionary of Economics, Second Edition, 2008
9. Selten, Reinhard (born 1930)

This article describes the main contributions to game theory and boundedly rational economic behaviour of Reinhard Selten, winner, together with John ...
By Eric van Damme. From The New Palgrave Dictionary of Economics, Second Edition, 2008
10. competition and selection

The claim that a business firm must maximize profit if it is to survive serves as an informal statement of the common conclusion of a class of theorems ...
By Sidney G. Winter. From The New Palgrave Dictionary of Economics, Second Edition, 2008

11. experimental economics, history of

Contemporary experimental economics was born in the 1950s from the combination of the experimental method used in psychology and new developments in economic ...
By Francesco Guala. From The New Palgrave Dictionary of Economics, Second Edition, 2008
12. rationality, history of the concept

This article offers a historical and methodological perspective on the concept of rationality. It gives an overview of the various interpretations of ...
By Esther-Mirjam Sent. From The New Palgrave Dictionary of Economics, Second Edition, 2008
13. efficient markets hypothesis

The efficient markets hypothesis (EMH) maintains that market prices fully reflect all available information. Developed independently by Paul A. Samuelson ...
By Andrew W. Lo. From The New Palgrave Dictionary of Economics, Second Edition, 2008
14. Williamson, Oliver E. (born 1932)

Oliver E. Williamson is the 2009 co-recipient (with Elinor Ostrom) of the Nobel Memorial Prize in Economics, awarded ‘for his ...
By Scott E. Masten. From The New Palgrave Dictionary of Economics, Online Edition, 2010
15. satisficing

‘Satisficing’ (choosing an option that meets or exceeds specified criteria but is not necessarily either unique or the best) is an alternative conception ...
By Herbert A. Simon. From The New Palgrave Dictionary of Economics, Second Edition, 2008
16. Evans, Griffith Conrad (1887–1973)

A distinguished American mathematician and pioneer mathematical economist, Evans was born on 11 May 1887 in Boston, Massachusetts. Educated in mathematics ...
By Herbert A. Simon. From The New Palgrave Dictionary of Economics, Second Edition, 2008
17. firm, theory of the

It is doubtful if there is yet general agreement among economists on the subject matter designated by the title ‘theory of the firm’, on, that is, the ...
By G.C. Archibald. From The New Palgrave Dictionary of Economics, Second Edition, 2008
18. rational behaviour

A clear distinction must be drawn between (a) the type of behaviour that might be described as rational, and (b) rational behaviour models that might ...
By Amartya Sen. From The New Palgrave Dictionary of Economics, Second Edition, 2008
19. United States, economics in (1885–1945)

The history of American economics following the founding of the American Economic Association in 1885 is not a simple linear narrative of the triumph ...
By Bradley W. Bateman. From The New Palgrave Dictionary of Economics, Second Edition, 2008
20. input–output analysis

Input–output analysis is a practical extension of the classical theory of general interdependence which views the whole economy of a region, a country ...
By Wassily Leontief. From The New Palgrave Dictionary of Economics, Second Edition, 2008
21. power

We consider the exercise of power in competitive markets for goods, labour and credit. We offer a definition of power and show that if contracts are incomplete ...
By Samuel Bowles and Herbert Gintis. From The New Palgrave Dictionary of Economics, Second Edition, 2008
22. altruism, history of the concept

This article describes the incorporation from the early 1960s of seemingly unselfish behaviour into economics. Faced with the problem of accounting for ...
By Philippe Fontaine. From The New Palgrave Dictionary of Economics, Second Edition, 2008
23. models

Philosophical analysis of the historical development of modelling, as well as the programmatic statements of the founders of modelling, support three ...
By Mary S. Morgan. From The New Palgrave Dictionary of Economics, Second Edition, 2008
24. United States, economics in (1945 to present)

After 1945, American economics was transformed as radically as in the previous half century. Economists’ involvement in the war effort compounded changes ...
By Roger E. Backhouse. From The New Palgrave Dictionary of Economics, Second Edition, 2008
25. Modigliani, Franco (1918–2003)

This article focuses on the scholarly contributions of Franco Modigliani, 1985 Nobel laureate in economics. Particular attention is given to his formulation ...
By Richard Sutch. From The New Palgrave Dictionary of Economics, Second Edition, 2008
26. Fisher, Irving (1867–1947)

Irving Fisher was born in Saugerties, New York, on 27 February 1867; he was residing in New Haven, Connecticut at the time of his death in a New York ...
By James Tobin. From The New Palgrave Dictionary of Economics, Second Edition, 2008


*****
Your search for "satisficing" over the entire article content within the 2008, 2009, 2010, 2011, 2012 and 2013 editions returned 12 results.
1. satisficing

‘Satisficing’ (choosing an option that meets or exceeds specified criteria but is not necessarily either unique or the best) is an alternative conception ...
By Herbert A. Simon. From The New Palgrave Dictionary of Economics, Second Edition, 2008
2. rational behaviour

A clear distinction must be drawn between (a) the type of behaviour that might be described as rational, and (b) rational behaviour models that might ...
By Amartya Sen. From The New Palgrave Dictionary of Economics, Second Edition, 2008
3. rationality, bounded

‘Bounded rationality’ refers to rational choice that takes into account the cognitive limitations of the decision-maker – limitations of both knowledge ...
By Herbert A. Simon. From The New Palgrave Dictionary of Economics, Second Edition, 2008
4. Simon, Herbert A. (1916–2001)

This article discusses how Simon's vision for behavioural economics (and social science generally) was found in the context of his early work in public ...
By Mie Augier. From The New Palgrave Dictionary of Economics, Second Edition, 2008
5. economic man

Economic man ‘knows the price of everything and the value of nothing’, so said because he or she calculates and then acts so as to satisfy ...
By Shaun Hargreaves-Heap. From The New Palgrave Dictionary of Economics, Second Edition, 2008
6. competition and selection

The claim that a business firm must maximize profit if it is to survive serves as an informal statement of the common conclusion of a class of theorems ...
By Sidney G. Winter. From The New Palgrave Dictionary of Economics, Second Edition, 2008
7. efficient markets hypothesis

The efficient markets hypothesis (EMH) maintains that market prices fully reflect all available information. Developed independently by Paul A. Samuelson ...
By Andrew W. Lo. From The New Palgrave Dictionary of Economics, Second Edition, 2008
8. rationality, history of the concept

This article offers a historical and methodological perspective on the concept of rationality. It gives an overview of the various interpretations of ...
By Esther-Mirjam Sent. From The New Palgrave Dictionary of Economics, Second Edition, 2008
9. case-based decision theory

Case-based decision theory was developed by Gilboa and Schmeidler. This article describes the framework and lays out the axiomatic foundations of the ...
By Ani Guerdjikova. From The New Palgrave Dictionary of Economics, Online Edition, 2009
10. market competition and selection

There is a long history in economics of using market selection arguments in defence of rationality hypotheses. According to these arguments, rational ...
By Lawrence E. Blume and David Easley. From The New Palgrave Dictionary of Economics, Second Edition, 2008
11. egalitarianism

This article surveys a variety of egalitarian theories. We look at a series of different answers to the question of what the metric of justice should ...
By Harry Brighouse and Adam Swift. From The New Palgrave Dictionary of Economics, Second Edition, 2008
12. profit and profit theory

A theory of profit should address itself to at least three questions – about the size (volume) of profit, its share in total income and about the rate ...
By Meghnad Desai. From The New Palgrave Dictionary of Economics, Second Edition, 2008

Thursday, October 17, 2013

Herbert A. Simon Dies at 84; Won a Nobel for Economics




Herbert A. Simon, an American polymath who won the Nobel in economics in 1978 with a new theory of decision making and who helped pioneer the idea that computers can exhibit artificial intelligence that mirrors human thinking, died yesterday. He was 84. 

He died at the Presbyterian University Hospital of Pittsburgh, according to an announcement by Carnegie Mellon University, which said the cause was complications after surgery last month. Mr. Simon was the Richard King Mellon University Professor of Computer Science and Psychology at the university -- a title that underscored the breadth of his interests and learning. 

Mr. Simon also won the A. M. Turing Award for his work on computer science in 1975 and the National Medal of Science in 1986. In 1993, he was awarded the American Psychological Association's award for outstanding lifetime contributions to psychology. 

In 1994, he became one of only 14 foreign scientists ever to be inducted into the Chinese Academy of Sciences and in 1995 was given awards by the International Joint Conferences on Artificial Intelligence and the American Society of Public Administration. 

Awarding him the Nobel, the Swedish Academy of Sciences cited ''his pioneering research into the decision-making process within economic organizations'' and acknowledged that ''modern business economics and administrative research are largely based on Simon's ideas.'' 

Professor Simon challenged the classical economic theory that economic behavior was essentially rational behavior in which decisions were made on the basis of all available information with a view to securing the optimum result possible for each decision maker.
Instead, Professor Simon contended that in today's complex world individuals cannot possibly process or even obtain all the information they need to make fully rational decisions. Rather, they try to make decisions that are good enough and that represent reasonable or acceptable outcomes. 

He called this less ambitious view of human decision making ''bounded rationality'' or ''intended rational behavior'' and described the results it brought as ''satisficing.''
In his book ''Administrative Behavior'' he set out the implications of this approach, rejecting the notion of an omniscient ''economic man'' capable of making decisions that bring the greatest benefit possible and substituting instead the idea of ''administrative man'' who ''satisfices -- looks for a course of action that is satisfactory or 'good enough.' '' 
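As a rough illustration of the idea (my own sketch, not Simon's formulation, with invented option names and scores), a satisficing chooser can be written in a few lines of Python: it accepts the first option whose value clears an aspiration level instead of searching for the best one.

def satisfice(alternatives, value, aspiration):
    """Return the first alternative judged 'good enough', else None."""
    for alt in alternatives:
        if value(alt) >= aspiration:
            return alt          # stop searching as soon as one option clears the bar
    return None                 # no alternative met the aspiration level

# Example: job offers scored 0-100; anything scoring 70 or more is acceptable.
offers = {"offer_a": 55, "offer_b": 72, "offer_c": 95}
print(satisfice(offers, value=offers.get, aspiration=70))   # -> offer_b; the search stops there

An optimizer would keep looking and pick offer_c; the satisficer stops at offer_b, trading the best possible outcome for a much cheaper search.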

Professor Simon's interest in decision making led him logically into the fields of computer science, psychology and political science. His belief that human decisions were made within clear constraints seemed to conform with the way that computers are programmed to resolve problems with defined parameters. 

In the mid-1950's, he teamed up with Allen Newell of the Rand Corporation to study human decision making by trying to simulate it on computers, using a strategy he called thinking aloud. 

People were asked for the general reasoning processes they went through as they solved logical problems and these were then converted into computer programs that Professor Simon and Mr. Newell thought equipped these machines with a kind of artificial intelligence that enabled them to simulate human thought rather than just perform stereotyped procedures. 

The breakthrough came in December 1955 when Professor Simon and his colleague succeeded in writing a computer program that could prove mathematical theorems taken from the Bertrand Russell and Alfred North Whitehead classic on mathematical logic, ''Principia Mathematica.'' 

The following January, Professor Simon celebrated this discovery by walking into a class and announcing to his students, ''Over the Christmas holiday, Al Newell and I invented a thinking machine.'' 

A subsequent letter to Lord Russell explaining his achievement elicited the reply: ''I am delighted to know that 'Principia Mathematica' can now be done by machinery. I wish Whitehead and I had known of this possibility before we wasted 10 years doing it by hand.''
But in a much-cited 1957 paper Professor Simon seemed to allow his own enthusiasm for artificial intelligence to run too far ahead of its more realistic possibilities. Within 10 years, he predicted, ''a digital computer will be the world's chess champion unless the rules bar it from competition,'' while within the ''visible future,'' he said, ''machines that think, that learn and that create'' will be able to handle challenges ''coextensive with the range to which the human mind has been applied.'' 

Sure enough, the I.B.M. computer Deep Blue did finally beat the world chess champion Garry Kasparov in 1997 -- about three decades after Mr. Simon had predicted the event would occur. 

Because artificial intelligence has not grown as quickly or as strongly as Professor Simon hoped, critics of his thinking argue that there are limits to what computers can achieve and that what they accomplish will always be a simulation of human thought, not creative thinking itself. As a result, Professor Simon's achievements have sparked a passionate and continuing debate about the differences between people and thinking machines. 

Born in Milwaukee on June 15, 1916, the son of German immigrants, Herbert A. Simon attended public school and entered the University of Chicago in 1933 with the intention of bringing the same rigorous methodology to the social sciences as existed in physics and other ''hard'' sciences. 

As an undergraduate his interest in decision making was aroused when he made a field study of Milwaukee's recreation department. After receiving his bachelor's degree in 1936 he became an assistant to Clarence E. Ridley of the International City Managers Association and then continued work on administrative techniques in the Bureau of Public Administration of the University of California at Berkeley. 

In 1942, he moved to the Illinois Institute of Technology and in 1943 received his doctorate from the University of Chicago for a dissertation subsequently published in 1947 as ''Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations.'' 

In 1937, he married Dorothea Pye, who survives him along with three children, Katherine Simon Frank of Minneapolis; Peter A. Simon of Bryan, Tex.; and Barbara M. Simon of Wilder, Vt.; six grandchildren, three step-grandchildren; and five great-grandchildren.
A member of the faculty of Carnegie Mellon University since 1949, Professor Simon played important roles in the formation of several departments and schools including the Graduate School of Industrial Administration, the School of Computer Science and the College of Humanities and Social Sciences' psychology department. 

He published 27 books, of which the best known today are ''Models of Bounded Rationality'' (1997), ''The Sciences of the Artificial'' (1996) and ''Administrative Behavior'' (1997). 

In 1991 he published his autobiography, ''Models of My Life,'' and remarked then about his vision of that all-vanquishing computer hunched over the chess boards of the world: ''I still feel good about my prediction. Only the time frame was a bit short.'' And so it was.
Photo: Herbert A. Simon (Ken Andreyo/Carnegie Mellon University)

Wednesday, October 16, 2013

Herbert A. Simon, 《我生活的種種模式》 (Models of My Life): Preface to the Chinese Translation



《我生活的種種模式》 (March 1999)
(Models of My Life): gist of the preface to the Chinese translation
Herbert A. Simon (司馬賀)
I have been fortunate to live in the era in which the modern electronic computer was born and in which, as a result, the field of artificial intelligence took shape. My autobiography contains many stories from those exciting years. My main hope for it is that it can offer young people who are considering a career in scientific research, or who have just entered one, some stirring pictures of the life of research. Of course, those pictures may carry the colours of a much older time, and geographically they are far from China. But a scientist's urgent desire to explore the unknown is not bound to any period, nor to any particular place on this earth. Whatever century and whatever land we live in, we all respond to that urgency, and we all feel delight and satisfaction at discovering new ideas and new things of value to humanity.

Here I would like to repeat for my friends and readers a famous saying of Confucius:
"When three people walk together, there is always one who can be my teacher."**


  • *The simplified-character edition of the book is published by 上海東方出版中心 (Shanghai Orient Publishing Center); review articles will appear on this site in the coming months.
  • **Over his lifetime Simon co-authored papers and books with nearly one hundred friends. He says in the book that many wonderful ideas live in friends' minds, an inexhaustible supply.

Monday, October 14, 2013

Herbert Simon to Hanching Chung, February 1999





A letter from Herbert Simon to Hanching Chung, circa February 1999

Dear Mr. Cheng:

I forgot to answer one of your questions in my last message -- where
to get more information about my publications. The address of my
home page on the WWW is:

http://www.psy.cmu.edu/psy/faculty/hsimon/hsimon.html

At the very end of the page you will find a cross-reference to
five other files that contain my complete bibliography, arranged
chronologically. On the home page you will also find a short list of
some recent publications, and similar lists on my other web pages
in Computer Science, Philosophy, and GSIA (our School of Business).

Sincerely yours,

Herbert A. Simon

Wednesday, June 26, 2013

Milton Friedman (by Gregory Chow 鄒至莊)

In his Nobel lecture in economics, H. A. Simon strongly criticized the contradictions in Milton Friedman's methodology.....





My Teacher Milton Friedman (Part One)

 
In a coming series of three articles, I will first describe my experience as Friedman's student at the University of Chicago, and then describe part of his research in economics and my own later research along the same lines. The main purpose of these articles is to show how economic theory can be applied to explain and solve real economic problems; they cannot cover the many other topics of Friedman's research.
Friedman as my teacher, and the discovery of the Chow test
I went to the University of Chicago as a graduate student in 1955 and took Friedman's price theory course. From the moment Friedman walked into the classroom to give his first lecture, he made a deeper impression on me than any teacher I had had before. In that first lecture he showed that economic theory can explain real economic phenomena. His thinking was sharp, and he could respond instantly to what others said. At Chicago I was also fortunate to learn from other famous economists and statisticians, but Friedman's influence on me was the greatest. Those teachers also shaped my later research on other topics, which cannot be discussed in this series of articles.
When I wrote my dissertation, "The Demand for Automobiles in the United States", Arnold Harberger was my adviser, but the first draft was discussed in Friedman's workshop, and from him I received very valuable comments. The workshop also discussed the famous study by Friedman and Meiselman, which asked whether national income Y is better explained by the money supply M or by government and other autonomous expenditure A, comparing a regression of Y on M with a regression of Y on A. They found that M explained Y better. To run the regressions Friedman had to distinguish two definitions of M: the first included only currency and demand deposits, the second added time deposits. Friedman said, "Let us call the first M1 and the second M2." That was the beginning of our use of the terms M1 and M2.
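In equation form (a schematic restatement of the comparison described above, not Friedman and Meiselman's exact specification), the exercise amounts to fitting

\[
Y_t = \alpha + \beta M_t + u_t
\qquad\text{versus}\qquad
Y_t = \alpha' + \gamma A_t + v_t ,
\]

and asking which equation explains income better, for example by comparing the two correlation coefficients or residual variances.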
Friedman suggested that I use permanent, or expected, income rather than current income to explain the demand for automobiles. I found that his permanent-income variable better explained the demand for the total stock of automobiles, while current income better explained purchases of automobiles during the year. The reason is that purchases in a given year involve saving, and saving is affected by current income. I will discuss the usefulness of permanent income a little later. Friedman's concept of expected income introduced the important notion of expectations into economics. His concept of expectations assumes that the annual change in the expected quantity equals a fraction of the difference between the previous period's realized quantity and the previous period's expected quantity; that is, when a gap is observed between what was realized and what was expected last period, the expectation is adjusted by part of that gap.
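In symbols, the adjustment rule described above (a standard statement of adaptive expectations, not a quotation from Friedman) is

\[
x^{e}_{t} - x^{e}_{t-1} = \lambda \left( x_{t-1} - x^{e}_{t-1} \right),
\qquad 0 < \lambda \le 1 ,
\]

so the expected value is revised by a fraction λ of last period's forecast error. Iterating the rule makes the expectation a geometrically weighted average of past realized values, which is how permanent income is usually constructed from past incomes.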
Friedman used the nation's expected income to explain national consumption, the research for which he was awarded the Nobel Prize. Economists had earlier found that national consumption rises by a smaller percentage than current national income. If that were so, then as national income kept rising, aggregate consumption demand would be insufficient to sustain further growth of income, producing slumps that would require government spending to cure. According to Friedman's theory of consumption, when expected income keeps rising, consumption rises in the same proportion, so no such slump need occur. The conclusion is that a free-market economy can keep growing without government intervention.
The demand for durable goods
Friedman held that many hypotheses can explain past data, so we can have confidence in a hypothesis or theory only when it can be used to predict future data. This idea influenced my continued research on the demand for automobiles in the United States. In 1958, after my dissertation was finished, my adviser Arnold Harberger decided to publish a volume of essays, written under his direction, on the demand for durable goods. Because my dissertation had already been published in 1957, I had to write another paper for him. I asked whether the demand equation I had estimated with data up to 1953 could be used to predict the data for 1954 to 1957. I needed a statistical test to answer this, and so I discovered the Chow test to answer the question.
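A minimal sketch of the classic Chow breakpoint test in Python follows (my own illustration, with invented data and variable names, assuming both sub-samples have more observations than estimated coefficients). Chow's 1960 paper also gives a predictive variant for the case, as with only four years of post-sample data, where the second sample is too short to fit its own regression; that variant is not shown here.

import numpy as np
from scipy import stats

def chow_test(X1, y1, X2, y2):
    """F test of whether a single linear regression fits both sub-samples."""
    def rss(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    k = X1.shape[1]                               # number of estimated coefficients
    n1, n2 = len(y1), len(y2)
    rss_pooled = rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    rss_split = rss(X1, y1) + rss(X2, y2)
    f = ((rss_pooled - rss_split) / k) / (rss_split / (n1 + n2 - 2 * k))
    p_value = 1.0 - stats.f.cdf(f, k, n1 + n2 - 2 * k)
    return f, p_value

# Hypothetical use: the columns of X are a constant and income; y is demand.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=30)
print(chow_test(X[:20], y[:20], X[20:], y[20:]))  # no break built in, so F should be small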
In studying the demand for automobiles I applied the acceleration principle. The demand for the total stock of automobiles is determined by income. New automobiles purchased in the current year are the change in the total stock, so their demand is determined by the change in income. Here income is analogous to velocity, and the change in income to acceleration. The durable goods added, or purchased, in the current year are determined by the change in income; this is called the acceleration principle. I later used it to explain the demand for many other durable goods, including consumer durables in the United States, China and Taiwan.
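Schematically (my own summary of the principle, not Chow's exact equations), if the desired stock of a durable depends on income, then net new purchases are the change in that stock:

\[
K^{*}_{t} = a + b\,Y_t
\qquad\Longrightarrow\qquad
\Delta K^{*}_{t} = b\,\Delta Y_t ,
\]

so the flow of new purchases depends on the change in income, just as acceleration is the change in velocity; gross purchases add replacement of worn-out units on top of this.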
Note: this article represents only the author's own views. The author recently published the book 《鄒至莊論中國經濟》 (Gregory Chow on the Chinese Economy).
Editor responsible for this article: Xu Jin (徐瑾), jin.xu@ftchinese.com

Saturday, June 15, 2013

Intuition Pumps and Other Tools for Thinking. By Daniel Dennett.





Pump priming (pump-priming fiscal policy, 水泵啟動式財政): a topic indexed in The New Palgrave Dictionary of Economics (新帕尔格雷夫经济学大词典)



On May 28 I watched this gentleman's delightfully witty talk on YouTube.


Daniel Dennett: Intuition Pumps and Other Tools for Thinking



Contemporary philosophy

Pump-primer

Tools for pondering imponderables

Intuition Pumps and Other Tools for Thinking. By Daniel Dennett. W.W. Norton; 496 pages; $28.95. Allen Lane; £20. 

“THINKING is hard,” concedes Daniel Dennett. “Thinking about some problems is so hard that it can make your head ache just thinking about thinking about them.” Mr Dennett should know. A professor of philosophy at Tufts University, he has spent half a century pondering some of the knottiest problems around: the nature of meaning, the substance of minds and whether free will is possible. His latest book, “Intuition Pumps and Other Tools for Thinking”, is a précis of those 50 years, distilled into 77 readable and mostly bite-sized chapters.
“Intuition pumps” are what Mr Dennett calls thought experiments that aim to get at the nub of concepts. He has devised plenty himself over the years, and shares some of them. But the aim of this book is not merely to show how the pumps work, but to deploy them to help readers think through some of the most profound (and migraine-inducing) conundrums.
As an example, take the human mind. The time-honoured idea that the mind is essentially a little man, or homunculus, who sits in the brain doing clever things soon becomes problematic: who does all the clever things in the little man’s brain? But Mr Dennett offers a way out of this infinite regress. Instead of a little man, what if the brain was a hierarchical system?
This pump, which Mr Dennett calls a “cascade of homunculi”, was inspired by the field of artificial intelligence (AI). An AI programmer begins by taking a problem a computer is meant to solve and breaking it down into smaller tasks, to be dealt with by particular subsystems. These, in turn, are composed of sub-subsystems, and so on. Crucially, at each level down in the cascade the virtual homunculi become a bit less clever, to a point where all it needs to do is, say, pick the larger of two numbers. Such homuncular functionalism (as the approach is known in AI circles) replaces the infinite regress with a finite one that terminates at tasks so dull and simple that they can be done by machines.
Of course the AI system is designed from the top down, by an intelligent designer, to perform a specific task. But there is no reason why something similar couldn’t be built up from the bottom. Start with nerve cells. They are certainly not conscious, at least in any interesting sense, and so invulnerable to further regress. Yet like the mindless single-cell organisms from which they have evolved (and like the dullest task-accomplishing machines), each is able to secure the energy and raw materials it needs to survive in the competitive environment of the brain. The nerve cells that thrive do so because they “network more effectively, contribute to more influential trends at the [higher] levels where large-scale human purposes and urges are discernible”.
From this viewpoint, then, the human mind is not entirely unlike Deep Blue, the IBM computer that famously won a game of chess against Garry Kasparov, the world champion. The precise architecture of Mr Kasparov’s brain certainly differs from Deep Blue’s. But it is still “a massively parallel search engine that has built up, over time, an outstanding array of heuristic pruning techniques that keep it from wasting time on unlikely branches”.
Those who insist Deep Blue and Mr Kasparov’s mind must surely be substantially different will balk at this. They may well be right. But the burden of proof, Mr Dennett argues, is on them, for they are in effect claiming that the human mind is made up of “wonder tissue” with miraculous, mind-moulding properties that are, even in principle, beyond the reach of science—an old-fashioned homunculus in all but name.
Mr Dennett’s book is not a definitive solution to such mind-benders; it is philosophy in action. Like all good philosophy, it works by getting the reader to examine deeply held but unspoken beliefs about some of our most fundamental concerns, like personal autonomy. It is not an easy read: expect to pore over some passages more than once. But given the intellectual gratification Mr Dennett’s clear, witty and mercifully jargon-free prose affords, that is a feature, not a bug.

Monday, May 27, 2013

Preface to the Chinese Translation of The Sciences of the Artificial 『人工科學通識』中文版序



This preface by Simon was salvaged from the website as it stood in 2000.

www.deming.com.tw
Updated at least five days a week: 2000/07/17 (updated July 17, 2000). Thanks (鳴謝):

Thanks to Professor Herbert A. Simon (司馬賀) for the gift of his important paper "Economics As A Historical Science", which this site will introduce in installments, and for contributing the preface to the Chinese edition of 『人工科學通識』 (The Sciences of the Artificial).


Preface to the Chinese Translation

It is a great pleasure to me to see The Sciences of the Artificial made available in the Chinese language.
In the kind of world in which we live today, we can no longer afford the luxury of limiting new knowledge to the particular language community in which it happens to originate. We understand today that knowledge, not material things, is the principal basis for human productivity and for the problem solving skills that we need to continue to live, with each other and with Nature, on our Earth.
Full exchange of knowledge between the Chinese and English language communities is especially important, because these are the most widely used languages in the world, and books that are published in both tongues will thereby become available to most of the world's people.
I wish to thank Mr. Hanching Chung warmly for his vigorous efforts both to translate The Sciences of the Artificial and to arrange for its publication. I hope that readers of the Chinese language edition will be encouraged to join in the exciting adventure of exploring the nature of the systems that play such a large role in our modern world.

Herbert A. Simon
Pittsburgh, Pennsylvania, U.S.A.
June 24, 2000



Translation is deep play
In recent months I have thrown myself into translation work. It is quite hard, yet the hardship is mixed with sweetness, and at times it is a real joy.

The translations I publish are always richer than the original: they contain not only the author's preface but also translator's notes and a translator's afterword (… for instance, I am considering merging the glossary and the index …). Western men of letters express "the genuine article" as "the real Simon Pure" (= the real or genuine person or article),
so you can see that my Herbert Simon (司馬賀) series is "the real thing". This great scientist was a master of precise expression and extremely diligent, happy to run his "genius" (economics, psychology, design and planning, complex systems, decision making and problem solving, ……) through everything with a single thread, which is why I gave this book the title 《人工科學通識》.
See the table of contents:
Chapter 1: Understanding the Natural and Artificial Worlds
Chapter 2: Economic Rationality
Chapter 3: The Psychology of Thinking: Embedding Artifice in Nature
Chapter 4: Remembering and Learning: Memory as Environment for Thought
Chapter 5: The Science of Design
Chapter 6: Social Planning: Designing the Evolving Artifact
Chapter 7: Alternative Views of Complexity; The Architecture of Complexity: Hierarchic Systems

Saturday, May 4, 2013

關於"The Tacit Dimension"的評說

Recently a book titled 常識 (Common Sense) has appeared in Taiwan.
Is it meant to echo the American bestseller of nearly 300 years ago?

In fact, with science and psychology now flourishing, "common sense" itself can be analysed in depth: why do knowledge, or superstition, become common sense?

---
I once asked Herbert Simon for his views on tacit knowledge.
As a scientist he thought there was nothing "mysterious" about Michael Polanyi's signature notion; people now know a good deal more.
Polanyi's concept of tacit knowledge was first articulated in Personal Knowledge.
... published as "The Tacit Dimension" (1966) he seeks to distinguish between ...


"..... Moreover, the figures in 'Artist Teachers, Artist Friends', being well-known people of the art world, are in effect everyone's teachers and friends. Some have been forgotten; seeing these old acquaintances again today, we recall their contributions to the Taiwan of today and feel a spontaneous warmth toward them. As for 'Forms and Colours of the Traveller's Inn', in today's age of easy travel you will find that even the fleeting traces of a passing visit can be handled as artistic creation. Do not stare doggedly at a single scene; seize the instant that is about to slip away and click the shutter quickly, on feeling, and that unexpected moment leaves behind, just in time, a frozen frame of its flight. 'Chance Encounters in the Human World' is likewise work worth studying. After all, of the many chance brushings-past in a lifetime, if the feeling is fresh, every one of those accidents may well turn out to be eternal.

In Paris, Madame 景昭, wife of the painter 朱德群 (Chu Teh-chun), asked me whether I have a plan when I paint. I said, 'No!' After a second or two of gathering myself before the blank surface, I start painting and let it go wherever it goes. Everything is chance and circumstance; when the painting is finished and something still feels unsaid, I may casually drop a touch into some corner. She said, 'Teh-chun is the same', with no plan laid out in advance.

Who would have thought that the photographer 莊靈 (Chuang Ling) likewise relies entirely on the intuitive vision of the instant, turning the strange, brushed-past accidents of foreign places into his own necessity. 'Nature Mirrored in the Heart' was also done in that mood, clicking the shutter as if improvising; the results, 'not contrived, but painted in nature's own manner', are all 'claw-prints left by chance on the mud', looked back on without reckoning east or west." -- 楚戈 (Chu Ko)



Simon regarded views of this kind, held by writers, artists and philosophers, about "intuition" or the tacit dimension as quite possibly assumptions that have never been studied in depth.
I touched on this briefly in a letter to him; a summary of his argument is set out in Administrative Behavior (《管理行為》).

 
Andrew Hsu: I had always assumed that Polanyi's 《個人知識：邁向後批判哲學》 (Personal Knowledge: Towards a Post-Critical Philosophy) was also translated by him, but on checking, the translator is 許澤民.

Hanching Chung: There is a story behind this book. A certain Academia Sinica academician had Hayek as his doctoral adviser; Hayek suggested he write on it, but he could not make sense of it and gave up. Back in Taiwan he kept insisting that this book (a collection of papers) must not be translated..... I discussed tacit knowledge with Herbert Simon; he said psychology now has a great deal of scientific knowledge about it......
 

Wednesday, May 1, 2013

Deep Learning / New frontiers of artificial intelligence (人工智慧新境): What Is Artificial Intelligence? By RICHARD POWERS, Published: February 5, 2011







The terminology adopted in the AI business seems rather overblown; take, for example, so-called Deep Learning.

10 Breakthrough Technologies 2013

Deep Learning

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.
It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.
Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.
The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.
With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.
Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.
All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.
Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”
Building a Brain
There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.
Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
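The paragraphs above can be made concrete with a tiny sketch (my own toy example, not any of the systems described in the article): one simulated neuron squashes a weighted sum into an output between 0 and 1, and an error-driven rule nudges the weights whenever the output disagrees with the label.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training set: each row is a tiny "image" of 3 digitized features,
# and the label says whether the target pattern is present (1) or not (0).
X = np.array([[0.9, 0.1, 0.8],
              [0.2, 0.9, 0.1],
              [0.8, 0.2, 0.9],
              [0.1, 0.8, 0.2]])
y = np.array([1, 0, 1, 0])

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)       # random initial "weights"
b = 0.0
lr = 0.5                                # learning rate

for _ in range(1000):                   # repeated exposure to the patterns
    out = sigmoid(X @ w + b)            # each output lies between 0 and 1
    err = out - y                       # how wrong the neuron was
    w -= lr * (X.T @ err) / len(y)      # adjust weights to shrink the error
    b -= lr * err.mean()

print(np.round(sigmoid(X @ w + b), 2))  # close to [1, 0, 1, 0] after training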
But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.

In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.
Finally, however, in the last decade Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
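The greedy, layer-by-layer idea can be caricatured as follows (a toy numpy sketch using simple autoencoders and made-up data; Hinton's actual 2006 method used restricted Boltzmann machines, and real systems are vastly larger): each layer first learns to reconstruct its own input through a narrow code, and only then is its output handed to the next layer.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_layer(data, hidden, epochs=200, lr=0.1):
    """Teach one layer to reconstruct its own input through a narrow hidden code."""
    n_in = data.shape[1]
    W_enc = rng.normal(scale=0.1, size=(n_in, hidden))
    W_dec = rng.normal(scale=0.1, size=(hidden, n_in))
    for _ in range(epochs):
        h = sigmoid(data @ W_enc)            # this layer's learned "features"
        err = h @ W_dec - data               # reconstruction error
        grad_dec = h.T @ err / len(data)
        grad_z = err @ W_dec.T * h * (1 - h)
        grad_enc = data.T @ grad_z / len(data)
        W_dec -= lr * grad_dec               # gradient steps on the squared error
        W_enc -= lr * grad_enc
    return W_enc

# Greedy layer-by-layer training: layer 2 learns features of layer 1's output,
# with no labels involved at any point.
X = rng.random((500, 20))                    # stand-in for digitized inputs
W1 = train_layer(X, hidden=10)
W2 = train_layer(sigmoid(X @ W1), hidden=5)

In the real systems the stack is repeated until the features at the top correspond to whole recognizable objects.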
Like cats. Last June, Google demonstrated one of the largest neural networks yet, with more than a billion connections. A team led by Stanford computer science professor Andrew Ng and Google Fellow Jeff Dean showed the system images from 10 million randomly selected YouTube videos. One simulated neuron in the software model fixated on images of cats. Others focused on human faces, yellow flowers, and other objects. And thanks to the power of deep learning, the system identified these discrete objects even though no humans had ever defined or labeled them.
What stunned some AI experts, though, was the magnitude of improvement in image recognition. The system correctly categorized objects and themes in the YouTube images 16 percent of the time. That might not sound impressive, but it was 70 percent better than previous methods. And, Dean notes, there were 22,000 categories to choose from; correctly slotting objects into some of them required, for example, distinguishing between two similar varieties of skate fish. That would have been challenging even for most humans. When the system was asked to sort the images into 1,000 more general categories, the accuracy rate jumped above 50 percent.
Big Data
Training the many layers of virtual neurons in the experiment took 16,000 computer processors—the kind of computing infrastructure that Google has developed for its search engine and other services. At least 80 percent of the recent advances in AI can be attributed to the availability of more computer power, reckons Dileep George, cofounder of the machine-learning startup Vicarious.
There’s more to it than the sheer size of Google’s data centers, though. Deep learning has also benefited from the company’s method of splitting computing tasks among many machines so they can be done much more quickly. That’s a technology Dean helped develop earlier in his 14-year career at Google. It vastly speeds up the training of deep-learning neural networks as well, enabling Google to run larger networks and feed a lot more data to them.
Already, deep learning has improved voice search on smartphones. Until last year, Google’s Android software used a method that misunderstood many words. But in preparation for a new release of Android last July, Dean and his team helped replace part of the speech system with one based on deep learning. Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms. Since it’s likelier to understand what was actually uttered, the result it returns is likelier to be accurate as well. Almost overnight, the number of errors fell by up to 25 percent—results so good that many reviewers now deem Android’s voice search smarter than Apple’s more famous Siri voice assistant.
For all the advances, not everyone thinks deep learning can move artificial intelligence toward something rivaling human intelligence. Some critics say deep learning and AI in general ignore too much of the brain’s biology in favor of brute-force computing.
One such critic is Jeff Hawkins, founder of Palm Computing, whose latest venture, Numenta, is developing a machine-learning system that is biologically inspired but does not use deep learning. Numenta’s system can help predict energy consumption patterns and the likelihood that a machine such as a windmill is about to fail. Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time. Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment. “Google’s attitude is: lots of data makes up for everything,” Hawkins says.
But if it doesn’t make up for everything, the computing resources a company like Google throws at these problems can’t be dismissed. They’re crucial, say deep-learning advocates, because the brain itself is still so much more complex than any of today’s neural networks. “You need lots of computational resources to make the ideas work at all,” says Hinton.
What’s Next
Although Google is less than forthcoming about future applications, the prospects are intriguing. Clearly, better image search would help YouTube, for instance. And Dean says deep-learning models can use phoneme data from English to more quickly train systems to recognize the spoken sounds in other languages. It’s also likely that more sophisticated image recognition could make Google’s self-driving cars much better. Then there’s search and the ads that underwrite it. Both could see vast improvements from any technology that’s better and faster at recognizing what people are really looking for—maybe even before they realize it.
This is what intrigues Kurzweil, 65, who has long had a vision of intelligent machines. In high school, he wrote software that enabled a computer to create original music in various classical styles, which he demonstrated in a 1965 appearance on the TV show I’ve Got a Secret. Since then, his inventions have included several firsts—a print-to-speech reading machine, software that could scan and digitize printed text in any font, music synthesizers that could re-create the sound of orchestral instruments, and a speech recognition system with a large vocabulary.
Today, he envisions a “cybernetic friend” that listens in on your phone conversations, reads your e-mail, and tracks your every move—if you let it, of course—so it can tell you things you want to know even before you ask. This isn’t his immediate goal at Google, but it matches that of Google cofounder Sergey Brin, who said in the company’s early days that he wanted to build the equivalent of the sentient computer HAL in 2001: A Space Odyssey—except one that wouldn’t kill people.
For now, Kurzweil aims to help computers understand and even speak in natural language. “My mandate is to give computers enough understanding of natural language to do useful things—do a better job of search, do a better job of answering questions,” he says. Essentially, he hopes to create a more flexible version of IBM’s Watson, which he admires for its ability to understand Jeopardy! queries as quirky as “a long, tiresome speech delivered by a frothy pie topping.” (Watson’s correct answer: “What is a meringue harangue?”)
Kurzweil isn’t focused solely on deep learning, though he says his approach to speech recognition is based on similar theories about how the brain works. He wants to model the actual meaning of words, phrases, and sentences, including ambiguities that usually trip up computers. “I have an idea in mind of a graphical way to represent the semantic meaning of language,” he says.
That in turn will require a more comprehensive way to graph the syntax of sentences. Google is already using this kind of analysis to improve grammar in translations. Natural-language understanding will also require computers to grasp what we humans think of as common-sense meaning. For that, Kurzweil will tap into the Knowledge Graph, Google’s catalogue of some 700 million topics, locations, people, and more, plus billions of relationships among them. It was introduced last year as a way to provide searchers with answers to their queries, not just links.
Finally, Kurzweil plans to apply deep-learning algorithms to help computers deal with the “soft boundaries and ambiguities in language.” If all that sounds daunting, it is. “Natural-language understanding is not a goal that is finished at some point, any more than search,” he says. “That’s not a project I think I’ll ever finish.”
Though Kurzweil’s vision is still years from reality, deep learning is likely to spur other applications beyond speech and image recognition in the nearer term. For one, there’s drug discovery. The surprise victory by Hinton’s group in the Merck contest clearly showed the utility of deep learning in a field where few had expected it to make an impact.
That’s not all. Microsoft’s Peter Lee says there’s promising early research on potential uses of deep learning in machine vision—technologies that use imaging for applications such as industrial inspection and robot guidance. He also envisions personal sensors that deep neural networks could use to predict medical problems. And sensors throughout a city might feed deep-learning systems that could, for instance, predict where traffic jams might occur.
In a field that attempts something as profound as modeling the human brain, it’s inevitable that one technique won’t solve all the challenges. But for now, this one is leading the way in artificial intelligence. “Deep learning,” says Dean, “is a really powerful metaphor for learning about the world.”

Artificial intelligence takes on university entrance exams and novel writing


[Compiled by Lin Tsui-yi (林翠儀) from wire reports] According to the Nikkei (日本經濟新聞), to explore the unlimited possibilities of "artificial intelligence", Japanese researchers plan to have a computer equipped with AI sit the entrance examination of the University of Tokyo, and to have a computer write a 4,000-character novel five years from now.
Artificial intelligence, also called machine intelligence, usually refers to the intelligence an artificial system exhibits after computation. Scientists began studying AI in the 1950s, hoping to create intelligent robots that could serve as a labour force, but the results never got beyond the "toy" stage.
In the 1970s AI research stagnated, until 1997, when IBM's computer Deep Blue defeated the reigning world chess champion in a six-game match. In 2011 IBM's Watson won the top prize of a million dollars on an American quiz show, and the development of artificial intelligence again drew wide attention.
In 2010 a Japanese shogi AI system also defeated a professional player, and researchers at the National Institute of Informatics (國立情報學研究所) have hit on the still more fanciful idea of having an AI-equipped computer sit the entrance exam of the University of Tokyo, Japan's top university.
The institute is now working with Fujitsu Laboratories (富士通研究所), which develops AI software, to let the computer attempt university entrance exam questions. The researchers say the computer can currently answer roughly 50 to 60 per cent of the questions; the hardest part is mathematics, because the computer cannot, as humans do, read the problem statement and immediately grasp what it is asking before starting to calculate. The researchers nevertheless hope to score highly on the joint entrance examination in 2016 and to pass the University of Tokyo exam in 2021.
In addition, artificial intelligence has always been thought to "lack sensibility", so the researchers are also attempting to have AI write fiction. The initial plan is for the computer to produce a science-fiction story of about 4,000 characters and to enter it in a writing competition five years from now.

 
 
Op-Ed Contributor

What Is Artificial Intelligence?


Illustrations by Vance Wellenstein
IN the category “What Do You Know?”, for $1 million: This four-year-old upstart the size of a small R.V. has digested 200 million pages of data about everything in existence and it means to give a couple of the world’s quickest humans a run for their money at their own game.
The question: What is Watson?
I.B.M.’s groundbreaking question-answering system, running on roughly 2,500 parallel processor cores, each able to perform up to 33 billion operations a second, is playing a pair of “Jeopardy!” matches against the show’s top two living players, to be aired on Feb. 14, 15 and 16. Watson is I.B.M.’s latest self-styled Grand Challenge, a follow-up to the 1997 defeat by its computer Deep Blue of Garry Kasparov, the world’s reigning chess champion. (It’s remarkable how much of the digital revolution has been driven by games and entertainment.) Yes, the match is a grandstanding stunt, baldly calculated to capture the public’s imagination. But barring any humiliating stumble by the machine on national television, it should.
Consider the challenge: Watson will have to be ready to identify anything under the sun, answering all manner of coy, sly, slant, esoteric, ambiguous questions ranging from the “Rh factor” of Scarlett’s favorite Butler or the 19th-century painter whose name means “police officer” to the rhyme-time place where Pelé stores his ball or what you get when you cross a typical day in the life of the Beatles with a crazed zombie classic. And he (forgive me) will have to buzz in fast enough and with sufficient confidence to beat Ken Jennings, the holder of the longest unbroken “Jeopardy!” winning streak, and Brad Rutter, an undefeated champion and the game’s biggest money winner. The machine’s one great edge: Watson has no idea that he should be panicking.
Open-domain question answering has long been one of the great holy grails of artificial intelligence. It is considerably harder to formalize than chess. It goes well beyond what search engines like Google do when they comb data for keywords. Google can give you 300,000 page matches for a search of the terms “greyhound,” “origin” and “African country,” which you can then comb through at your leisure to find what you need.
Asked in what African country the greyhound originated, Watson can tell you in a couple of seconds that the authoritative consensus favors Egypt. But to stand a chance of defeating Mr. Jennings and Mr. Rutter, Watson will have to be able to beat them to the buzzer at least half the time and answer with something like 90 percent accuracy.
When I.B.M.’s David Ferrucci and his team of about 20 core researchers began their “Jeopardy!” quest in 2006, their state-of-the-art question-answering system could solve no more than 15 percent of questions from earlier shows. They fed their machine libraries full of documents — books, encyclopedias, dictionaries, thesauri, databases, taxonomies, and even Bibles, movie scripts, novels and plays.
But the real breakthrough came with the extravagant addition of many multiple “expert” analyzers — more than 100 different techniques running concurrently to analyze natural language, appraise sources, propose hypotheses, merge the results and rank the top guesses. Answers, for Watson, are a statistical thing, a matter of frequency and likelihood. If, after a couple of seconds, the countless possibilities produced by the 100-some algorithms converge on a solution whose chances pass Watson’s threshold of confidence, it buzzes in.
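As a caricature of that merge-and-rank step (my own toy code; the analyzer names, weights and threshold are invented and have nothing to do with IBM's actual DeepQA architecture), every analyzer assigns scores to candidate answers, the scores are combined, and the system "buzzes in" only if the top candidate's merged confidence clears a threshold.

def merge_candidates(analyzer_scores, weights, threshold=0.6):
    """analyzer_scores: {analyzer_name: {candidate: score in [0, 1]}}."""
    totals = {}
    for name, scores in analyzer_scores.items():
        w = weights.get(name, 1.0)
        for candidate, s in scores.items():
            totals[candidate] = totals.get(candidate, 0.0) + w * s
    # normalise so the merged scores behave like rough probabilities
    z = sum(totals.values()) or 1.0
    ranked = sorted(((v / z, c) for c, v in totals.items()), reverse=True)
    confidence, best = ranked[0]
    return (best, confidence) if confidence >= threshold else (None, confidence)

scores = {
    "keyword_match": {"Egypt": 0.9, "Ethiopia": 0.4},
    "type_checker":  {"Egypt": 0.8, "Ethiopia": 0.7},
}
print(merge_candidates(scores, weights={"keyword_match": 1.0, "type_checker": 0.5}))

With these made-up numbers, "Egypt" merges to a confidence of about 0.63 and just clears the 0.6 threshold, so the system would buzz in; lower it below the threshold and it stays silent.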
This raises the question of whether Watson is really answering questions at all or is just noticing statistical correlations in vast amounts of data. But the mere act of building the machine has been a powerful exploration of just what we mean when we talk about knowing.
Who knows how Mr. Jennings and Mr. Rutter do it — puns cracked, ambiguities resolved, obscurities retrieved, links formed across every domain in creation, all in a few heartbeats. The feats of engineering involved in answering the smallest query about the world are beyond belief. But I.B.M. is betting a fair chunk of its reputation that 2011 will be the year that machines can play along at the game.
Does Watson stand a chance of winning? I would not stake my “Final Jeopardy!” nest egg on it. Not yet. Words are very rascals, and language may still be too slippery for it. But watching films of the machine in sparring matches against lesser human champions, I felt myself choking up at its heroic effort, the size of the undertaking, the centuries of accumulating groundwork, hope and ingenuity that have gone into this next step in the long human drama. I was most moved when the 100-plus parallel algorithms wiped out and the machine came up with some ridiculous answer, calling it out as if it might just be true, its cheerful synthesized voice sounding as vulnerable as that of any bewildered contestant.
It does not matter who will win this $1 million Valentine’s Day contest. We all know who will be champion, eventually. The real showdown is between us and our own future. Information is growing many times faster than anyone’s ability to manage it, and Watson may prove crucial in helping to turn all that noise into knowledge.
Dr. Ferrucci and company plan to sell the system to businesses in need of fast, expert answers drawn from an overwhelming pool of supporting data. The potential client list is endless. A private Watson will cost millions today and requires a room full of hardware. But if what Ray Kurzweil calls the Law of Accelerating Returns keeps holding, before too long, you’ll have an app for that.
Like so many of its precursors, Watson will make us better at some things, worse at others. (Recall Socrates’ warnings about the perils of that most destabilizing technology of all — writing.) Already we rely on Google to deliver to the top of the million-hit list just those pages we are most interested in, and we trust its concealed algorithms with a faith that would be difficult to explain to the smartest computer. Even if we might someday be able to ask some future Watson how fast and how badly we are cooking the earth, and even if it replied (based on the sum of all human knowledge) with 90 percent accuracy, would such an answer convert any of the already convinced or produce the political will we’ll need to survive the reply?
Still, history is the long process of outsourcing human ability in order to leverage more of it. We will concede this trivia game (after a very long run as champions), and find another in which, aided by our compounding prosthetics, we can excel in more powerful and ever more terrifying ways.
Should Watson win next week, the news will be everywhere. We’ll stand in awe of our latest magnificent machine, for a season or two. For a while, we’ll have exactly the gadget we need. Then we’ll get needy again, looking for a newer, stronger, longer lever, for the next larger world to move.
For “Final Jeopardy!”, the category is “Players”: This creature’s three-pound, 100-trillion-connection machine won’t ever stop looking for an answer.
The question: What is a human being?
Richard Powers is the author of the novel “Generosity: An Enhancement.”

Saturday, April 27, 2013

operational definition

2008.10.22: attaching David's letter.
"Dear HC,,
書已收到 這郵局的效率頗為驚人 是到府收件嗎?

《管理行為》 久聞其名 會利用時間好好享讀一"翻"
熱愛品質探求機會的思考方法 則隨緣
精實系統革命 》已經拜讀多次
再次謝謝
Best rgds
David Hsu"

My main purpose here is to talk about the "Public Administration" (行政學) volume of 《王雲五社會科學大辭典》, published from 1971 onward.
The copy in my hands is the seventh printing, 1989.
In that volume Simon is the leading figure.
It even uses Administrative Behavior (《管理行為》) to explain the notion of an "operational definition".
But the applied example it gives does not fit an operational definition at all.
The copy of 《管理行為》 I sent contains a section Simon wrote in 1995
which truly deserves to be called a masterpiece of operational definition:
roughly, how a multi-level, group-wide company should measure its intricately interwoven performance and efficiency.
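As a small, hypothetical illustration of what "operational" means here (my own example, not Simon's 1995 text referred to above), an operationally defined measure spells out the exact procedure that produces the number, so that anyone applying the same procedure to the same records must obtain the same result.

from dataclasses import dataclass
from datetime import date

@dataclass
class Order:
    promised: date
    delivered: date

def on_time_rate(orders, grace_days=0):
    """Share of orders delivered no later than the promised date plus grace_days."""
    if not orders:
        return None
    ok = sum((o.delivered - o.promised).days <= grace_days for o in orders)
    return ok / len(orders)

orders = [Order(date(2013, 4, 1), date(2013, 4, 1)),
          Order(date(2013, 4, 2), date(2013, 4, 5))]
print(on_time_rate(orders))   # 0.5 under this operational definition

Change the procedure (say, allow a two-day grace period) and you have defined a different measure; that explicitness, rather than the particular metric, is the point of an operational definition.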