Thank You, and Onward to 2012
hc's "剪貼簿" (Scrapbook) received roughly 250,000 visits in 2011.
戴明圈: A Taiwanese Deming Circle
"交情千千" 電子連絡簿(日報)
胡適的世界 The World of Hu Shih
管理學新生
Books Birdviews 書海
People 人物
品質世界 quality world
教育人
英文人行道 et cetera, et cetera .
漢語人行道:演變風貌
譯藝
英國風
日本 心得帖
亞洲
SHE健康一生
Thanks to Teacher Chao (趙老師)
This year's pre-Lunar New Year dinner will be my treat; I have asked Brother Chao, Brother Lin and the others to choose the restaurant.
(The idea of "lunch on election day" seems a good one.)
In his spare time, Chao might enjoy reading the memoir by the former wife of Ku Shao-chuan (Wellington Koo, 顧少川):
No Feast Lasts Forever《沒有不散的筵席》
Just for a laugh.
Dear HC,
"To hear the 'accent of home' and be moved to tears. That day, it seems, is drawing near as well."
Another story: One year ago, I met professor LJ Wei (a world famous bio-statistician) and said the average life span for men in Taiwan is 73.9, and I am sure I am in the 95% interval. One week ago, I met a friend in 國家衛生院, he said, don't worry, for men in Taipei, the average is 79 point something ...
Min-Te
On her radio programme, the writer Wu Tan-ju (吳淡如) recently invited a physician to answer listeners' call-in questions. To keep the doctor from being accused of illegal "remote diagnosis," staff members took the calls and relayed his answers. In an exclusive interview with TTV (台視) this morning, Wu said the programme always reminds listeners that they must go and see a doctor in person, and added that if the Department of Health (衛生署) really wants to investigate, it should start with the underground radio stations. The programme appears to have drawn scrutiny precisely because some segments allegedly amounted to the doctor diagnosing callers over the air, in one case declaring flatly that the problem was allergic rhinitis. Is that health education or a consultation? The doctors say they have their own sense of where the line is and would hardly be foolish enough to skirt the law. The Department of Health says call-in health education is perfectly acceptable, but the moment a diagnosis is made, the medical practice laws are violated. Having a part-time worker relay the answers, it adds, does not evade the penalty: what matters is how the doctor frames his remarks. Suggestions are allowed; definitive conclusions are not. Otherwise it is still a violation.
McCarthy championed the use of mathematical logic to develop artificial intelligence. In 1958 he proposed the concept of the "advice taker," which inspired later work on question-answering systems and logic programming. He also invented garbage collection, the automatic reclamation of memory no longer in use, to solve a problem facing the programming language LISP; LISP went on to become the most popular programming language in the AI field.
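As a rough illustration of what garbage collection automates, here is a minimal mark-and-sweep sketch in Python. This is an assumption on my part for illustration only; it is not McCarthy's code, and real Lisp collectors are far more sophisticated.

```python
# Minimal mark-and-sweep sketch: find cells still reachable from the roots,
# and "free" (drop) everything else. Illustrative only.
class Cell:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)  # references to other cells
        self.marked = False

def mark(cell):
    """Recursively mark every cell reachable from `cell`."""
    if cell.marked:
        return
    cell.marked = True
    for child in cell.children:
        mark(child)

def collect(heap, roots):
    """Return only the reachable cells; unreachable ones are garbage."""
    for cell in heap:
        cell.marked = False
    for root in roots:
        mark(root)
    return [cell for cell in heap if cell.marked]

# Usage: c is unreachable once nothing points to it, so it is collected.
a, b, c = Cell("a"), Cell("b"), Cell("c")
a.children.append(b)
heap = [a, b, c]
print([cell.value for cell in collect(heap, roots=[a])])  # ['a', 'b']
```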
McCarthy showed his mathematical talent early, skipping two years when he entered university as a mathematics major, and eventually earned a PhD in mathematics from Princeton University. A brief sketch of this computing pioneer:
-1927: born in Boston, USA
-1948: BS in mathematics, California Institute of Technology
-1951: PhD in mathematics, Princeton University
-1956: organizer of the Dartmouth Conference (regarded as the birth of AI as a discipline)
-1955: coined the term "artificial intelligence" in the proposal for that conference, earning him the title "father of AI"
-1958: invented the Lisp programming language (still widely used in AI today)
-c. 1960: proposed the concept of computer time-sharing
-1971: received the Turing Award for his contributions to AI
-1985: received the first "Research Excellence Award" from IJCAI (the International Joint Conference on Artificial Intelligence), effectively a lifetime-achievement award in AI
-1991: awarded the US National Medal of Science
October 2011 could be called a heartbreaking period for the technology world: Steve Jobs died on October 5, Dennis Ritchie, the father of the C language, died on October 13, and now another giant has fallen. (Portions of this article are reprinted from 36氪.)
Two Americans were awarded the 2011 Nobel Prize in economics on Monday for their research into the cause-and-effect relationship between economic policy and the broader economy as a whole.
The two men, Thomas Sargent of New York University and Christopher Sims of Princeton University, carried out their research independently in the 1970s and ‘80s, but their work “is highly relevant today as world governments and central banks seek ways to steer their economies away from another recession,” the Associated Press reports.
The Royal Swedish Academy of Sciences that awards the prize said the two economists, both 68, had developed methods for answering questions such as how GDP and inflation are affected by temporary interest rate hikes or a tax cut.
"Today, the methods developed by Sargent and Sims are essential tools in macroeconomic analysis," the academy said in its citation.
Here’s how the New York Times summed up their research: “Their work uses statistical analysis to disentangle the question of whether a policy change that happened in the past affected the economy or whether it was made in anticipation of events that policymakers thought would happen later. This research has also helped economists better understand how people’s expectations for policy affect the economy.”
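The article does not spell out the methods themselves. Sims is best known for vector autoregressions (VARs) and impulse-response analysis, so here is a minimal, hedged sketch of that kind of exercise in Python; the data are random stand-ins and the variable names are mine, not figures from the prize work.

```python
# Illustrative sketch of a small VAR with impulse responses, the kind of tool
# associated with Sims. Data and column names here are hypothetical stand-ins.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(
    rng.normal(size=(200, 3)),            # 200 "quarters" of made-up data
    columns=["gdp_growth", "inflation", "interest_rate"],
)

model = VAR(data)
results = model.fit(2)                     # VAR with two lags

# Trace how GDP growth and inflation respond over 12 quarters
# to a one-standard-deviation shock in the interest rate.
irf = results.irf(12)
irf.plot(impulse="interest_rate")
```

With real quarterly series in place of the random data, the plotted impulse responses are the "what happens to GDP and inflation after a temporary rate hike" answers the citation describes.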
Watson, the "Jeopardy!"-playing computer system, is getting a job.
WellPoint Inc. and International Business Machines Corp. are set to announce a deal on Monday for the health insurer to use the Watson technology, the first time the high-profile project will result in a commercial application.
WellPoint said it plans to use Watson's data-crunching to help suggest treatment options and diagnoses to doctors. It is part of a far broader push in the health industry to incorporate computerized guidance into care, as doctors and hospitals adopt electronic medical records and other digital tools that can record, track and check their work.
Christopher Alexander: Vienna-born English architect and theorist, he settled in the USA in 1960. Believing that there are universal 'timeless' principles of form and space in architecture, that they are firmly based on the fundamentals of human cognition, and that they can be determined by study, providing the essentials of design, he has shown that they can be found in the architecture of all periods (and indeed of all cultures). His ideas about 'paradigms' for architecture were encapsulated in his Notes on the Synthesis of Form (1964), A Pattern Language (1977), and The Timeless Way of Building (1979). With Chermayeff he published Community and Privacy (1963). Advocating that designer, builder, and user should be either one and the same, or work closely together, he promoted self-build housing, and was involved in the evolution of user-designed apartment buildings at St Quentin-en-Yvelines, near Paris (1974), and elsewhere. More recently he has observed that most of the contemporary ways of dealing with architecture have been 'insane', and that we need to find new ways in order to become 'reconnected to ourselves'. To him, Deconstructivism is 'nonsensical'. From 2002 he published, through the Center for Environmental Structure, Berkeley, CA, The Nature of Order, setting out the essence of his ideas.
He's been called the master of suspense. But Alfred Hitchcock isn't without a bit of mystery of his own. A rare collection of Hitchcock sketches was recently discovered in England.
They were storyboards from one of his movies. And they seem to offer some fascinating insights into the legendary director's creative mind. Nick Glass has the details in this week's edition of "The Revealer."
Jul 30th 2011 | from the print edition
JUDGING artistic styles, and the similarities between them, might be thought one bastion of human skill that machines could never storm. Not so, if Lior Shamir at Lawrence Technological University in Michigan is correct. A paper he has just published in Leonardo suggests that computers may have just as good an eye for style as humans do—and, in some cases, may see connections between artists that human critics have missed.
Dr Shamir, a computer scientist, presented 57 images by each of nine painters—Salvador Dalí, Giorgio de Chirico, Max Ernst, Vasily Kandinsky, Claude Monet, Jackson Pollock, Pierre-Auguste Renoir, Mark Rothko and Vincent van Gogh—to a computer, to see what it made of them. The computer broke the images into a number of so-called numerical descriptors. These descriptors quantified textures and colours, the statistical distribution of edges across a canvas, the distributions of particular types of shape, the intensity of the colour of individual points on a painting, and also the nature of any fractal-like patterns within it (fractals are features that reproduce similar shapes at different scales; the edges of snowflakes, for example).
All told, the computer identified 4,027 different numerical descriptors. Once their values had been established for each of the 513 artworks that had been fed into it, it was ready to do the analysis.
Dr Shamir’s aim was to look for quantifiable ways of distinguishing between the work of different artists. If such things could be established, it might make the task of deciding who painted what a little easier. Such decisions matter because, even excluding deliberate forgeries, there are many paintings in existence that cannot conclusively be attributed to a master rather than his pupils, or that may be honestly made copies whose provenance is now lost.
To look for such distinguishing features, Dr Shamir programmed the computer to use a statistical method that scores the strength of the distance between the values of two or more descriptors for each pair of artists. As a result, he was able to rank each of the 4,027 descriptors by how useful it was at discriminating between artists.
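The article does not give the scoring formula. One plausible version of such a ranking (my assumption, not taken from Shamir's paper) is a Fisher-style discrimination score computed per descriptor, with descriptors sorted by how well they separate the artists:

```python
# Hedged sketch: rank image descriptors by a Fisher-style discrimination score.
# The descriptor values and artist labels below are random placeholders; the
# actual weighting scheme in the study may differ.
import numpy as np

def fisher_scores(X, labels):
    """X: (n_images, n_descriptors) array; labels: artist name per image.
    Score = between-class variance / within-class variance, per descriptor."""
    classes = np.unique(labels)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[labels == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)  # avoid division by zero

# Toy usage: 513 paintings, 4,027 descriptors, 9 artists (stand-in data).
rng = np.random.default_rng(0)
X = rng.normal(size=(513, 4027))
labels = np.array([f"artist_{i % 9}" for i in range(513)])
ranking = np.argsort(fisher_scores(X, labels))[::-1]   # most discriminative first
print(ranking[:20])  # indices of the 20 most informative descriptors
```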
Surprisingly, the values of 19 of the 20 most informative descriptors showed dramatically higher similarities between Van Gogh and Pollock than between Van Gogh and painters such as Monet and Renoir, whom conventional art criticism would consider more closely related to Van Gogh’s oeuvre than Pollock is. (Dalí and Ernst, by contrast, were farther apart than expected.)
What is interesting, according to Dr Shamir, is that no single feature makes Pollock’s artistic style similar to Van Gogh’s. Instead, the connection is based on a broad set of image-content descriptors which reflect many aspects of the two artists’ styles, including a shared preference for low-level textures and shapes, and similarities in the ways they employed lines and edges.
What was intended, then, as a way of improving the ability to distinguish between different hands has also thrown up a new way of looking for stylistic similarities. Whether Pollock was actually influenced by Van Gogh, or merely happened upon a similar way of doing things through a similar artistic sensibility, is not clear. But it gives art historians a new line of investigation to pursue.
Kenneth J. Arrow is professor of economics emeritus at Stanford University and winner of the Nobel Prize for Economics in 1972. He is the youngest Nobel Laureate to have been awarded the prize in economics. Arrow also served on the White House Council of Economic Advisers under President John F. Kennedy.
Deutsche Welle: What many people around the world probably still deem impossible, and what for many experts seemed unrealistic just a few weeks back, could become reality. The US, the world's biggest economy and strongest power, may be unable to meet its debt payments within days. How big is the risk that the US will in fact default?
Kenneth Arrow: I think it's unlikely. I think that the pressures from the financial sector are going to be sufficient to avoid this. I have seen proposals such as the one by Republican Senate Minority Leader Mitch McConnell to somehow dodge the issue. I have a feeling that's how it is going to end up, but you can't be 100 percent sure. It could be that they somehow have a deadlock, in which case the debt limit will not be raised. There's a 10 percent chance that could happen.
With each day of failed political talks in Washington to raise the debt limit the US is edging closer to default and the finger pointing between Democrats and Republicans intensifies. Is the impression one gets from abroad correct that the political players are worried more about not blinking too early and scoring a political victory than about avoiding a possible fiasco?
There are a lot of factors at work here and one I think is ideology. There are some people in the Republican Party who have said they wouldn't vote for an increase in the debt limit no matter what concessions are made. They just feel the government is too big and we should cut it back and this is a very convenient weapon. So it's not entirely just about political advantage.
There is that of course in every political confrontation in every democratic country that's true. But the same Republican Minority Leader, Senator McConnell, said a few months ago my main aim in everything is to make sure that President Obama is a one-term president. So he said explicitly that political advantage is what he is concerned about. So I think there is a mixture of reasons.
Couldn't they just detach the political issue over taxes and spending for the time being and raise the debt limit simply to avert a default which is arguably in no one's interest or is this too naïve?
Of course. The whole thing to my mind is somehow a crazy idea. We have a budgetary process. We have a budget and it was passed earlier this year. Why isn't that the end of the story? Why is there a separate vote on the debt limit? When you have a budget that has certain implications for the debt you don't know exactly what they are because tax revenues at least are uncertain. So when you pass a budget you have projections, but you don't actually know what's going to happen. So the question is why don't you just pass the budget and if you need to borrow you borrow. That's all there is to it. So why is there a second vote?
This is an old thing and I don't know how far back this principle goes. But typically the debt limit has been automatically raised. It's not really controversial. So the idea that we have a vote on the debt limit is crazy in my opinion. You make a budgetary decision, you have your debate and that's it. But once it's there it's used as a political weapon and people don't want to abandon it. It's the same with the filibuster rule in the Senate, but I won't go into that.
There is disagreement over the severity and the consequences of failing to raise the debt limit or an actual default. How bad would the failure to raise the debt limit be in concrete, practical terms?
It would be bad obviously, but more for symbolic reasons. The possibility of a default is very well known and that means that the immediate consequences would be much less severe than if there is a sudden collapse coming out of nowhere as it happened for instance with the subprime mortgage fiasco. The financial system has been adjusting to this.
Second, the fundamental soundness of the United States is not really in question. The United States obviously has an edge by being able to borrow as a safe borrower and as the place where foreigners park their money in times of trouble. And we see that currently by the fact that the interest rate of US government debt is extremely low in spite of the financial problems. In fact if you go back a number of years you find that the United States has been borrowing money at low rates of interest even during prosperity and investing abroad at relatively high returns because the US is considered safe.
Now this is going to shake that somewhat. Not too much, because everybody knows that fundamentally there is no real problem and it's just a political issue. Still it does mean that the United States is a less stable country politically than was expected and it will have consequences.
So how would this play out?
I think the first reaction would be to cut something else. Social security would be a possible target because it would be a very big political signal. But at some point there will be demands as to why the holders of government bonds shouldn't suffer if poor, old people are suffering. They will be next in line. I think at that point, or even before, you will see a big rise in interest rates.
The value of government debt will start going down and of course this will affect the holdings of banks throughout the world. So I suppose there will be a tightening of the belt and I imagine there will be spillover to private enterprise. In any case interest rates will rise, that's a clear consequence, and when interest rates rise that is going to affect investment in the United States and probably abroad as well.
My feeling is there would be a slowing of the American economy and probably some of the European economies too, because the banking systems are so interlinked. Probably China will be the net gainer in all of this. They will be getting more money on the United States bonds they hold.
Interview: Michael Knigge
Editor: Rob Mudge
Thanks to KJ for forwarding this letter to me:
Yi-hsun, Chung-yung's colleague in the Japan office, wrote: "To our eternal big brother, Mr. Always Say Yes, 劉san, everyone's uncle: our deepest mourning and respect…"
We usually have little idea of the image we hold in our friends' hearts.
That is why some people, such as Mr. Pu Shao-fu (卜少夫) or Ms. Tsao Yu-fang (曹又方), chose to let their friends publicly remember them, or see them off, while they were still alive…
(On June 15 this year at Chung Yuan Christian University, 三呆's introduction of me gave me quite a start…)
***
Everyone has another side. Our two building managers each have their own quirks: one is forever studying the "numbers" on the name plates; the other, one evening two weeks ago, was propped up by two friends as he searched this concrete forest for his home…
Summer vacation always brings new interior-renovation projects, so the elevator lobbies of both buildings have been "protected." In other words, for three months you cannot see the mirrors there. According to Mr. Ackoff and others, those mirrors matter a great deal for relieving the anxiety of people who are waiting. So occasionally I do a little sabotage of the coverings, but they are promptly repaired, and amusingly, the repairer is not anyone from the renovation crew but some other public-spirited busybody.
The workers bring along another, strangely familiar culture: boxed lunches, naps in the fire stairs, and radio programmes that are even more wonderful…
***
Last night the window screen gave out, so I buried myself in the commemorative volume for the 100th anniversary of Carsun Chang's (張君勱) birth and the collected posthumous works of Mr. Chang… What impressed me most was the recollection of Mr. Chiang Fu-tsung (蔣復璁), who said that Mr. Chang had a heart that could not bear the suffering of others, and therefore refused to ride rickshaws and the like, always going on foot instead, which is why he stayed so healthy…
Herbert Simon had his own design as well. When he went to teach at Carnegie Tech (later CMU), he bought a house a moderate distance from campus and walked to school every day; by the time he was 70 he had walked the equivalent of 7.5 times around the Earth.
----
In 1986, when I was working at MOTOROLA in Neili (內壢), 三呆 suggested buying a house nearby… but then, I was never fated to have money…
Jun 30th 2011 | from the print edition
GOOGLE “information overload” and you are immediately overloaded with information: more than 7m hits in 0.05 seconds. Some of this information is interesting: for example, that the phrase “information overload” was popularised by Alvin Toffler in 1970. Some of it is mere noise: obscure companies promoting their services and even more obscure bloggers sounding off. The overall impression is at once overwhelming and confusing.
“Information overload” is one of the biggest irritations in modern life. There are e-mails to answer, virtual friends to pester, YouTube videos to watch and, back in the physical world, meetings to attend, papers to shuffle and spouses to appease. A survey by Reuters once found that two-thirds of managers believe that the data deluge has made their jobs less satisfying or hurt their personal relationships. One-third think that it has damaged their health. Another survey suggests that most managers think most of the information they receive is useless.
Commentators have coined a profusion of phrases to describe the anxiety and anomie caused by too much information: “data asphyxiation” (William van Winkle), “data smog” (David Shenk), “information fatigue syndrome” (David Lewis), “cognitive overload” (Eric Schmidt) and “time famine” (Leslie Perlow). Johann Hari, a British journalist, notes that there is a good reason why “wired” means both “connected to the internet” and “high, frantic, unable to concentrate”.
These worries are exaggerated. Stick-in-the-muds have always complained about new technologies: the Victorians fussed that the telegraph meant that “the businessman of the present day must be continually on the jump.” And businesspeople have always had to deal with constant pressure and interruptions—hence the word “business”. In his classic study of managerial work in 1973 Henry Mintzberg compared managers to jugglers: they keep 50 balls in the air and periodically check on each one before sending it aloft once more.
Yet clearly there is a problem. It is not merely the dizzying increase in the volume of information (the amount of data being stored doubles every 18 months). It is also the combination of omnipresence and fragmentation. Many professionals are welded to their smartphones. They are also constantly bombarded with unrelated bits and pieces—a poke from a friend one moment, the latest Greek financial tragedy the next.
The data fog is thickening at a time when companies are trying to squeeze ever more out of their workers. A survey in America by Spherion Staffing discovered that 53% of workers had been compelled to take on extra tasks since the recession started. This dismal trend may well continue—many companies remain reluctant to hire new people even as business picks up. So there will be little respite from the dense data smog, which some researchers fear may be poisonous.
They raise three big worries. First, information overload can make people feel anxious and powerless: scientists have discovered that multitaskers produce more stress hormones. Second, overload can reduce creativity. Teresa Amabile of Harvard Business School has spent more than a decade studying the work habits of 238 people, collecting a total of 12,000 diary entries between them. She finds that focus and creativity are connected. People are more likely to be creative if they are allowed to focus on something for some time without interruptions. If constantly interrupted or forced to attend meetings, they are less likely to be creative. Third, overload can also make workers less productive. David Meyer, of the University of Michigan, has shown that people who complete certain tasks in parallel take much longer and make many more errors than people who complete the same tasks in sequence.
What can be done about information overload? One answer is technological: rely on the people who created the fog to invent filters that will clean it up. Xerox promises to restore “information sanity” by developing better filtering and managing devices. Google is trying to improve its online searches by taking into account more personal information. (Some people fret that this will breach their privacy, but it will probably deliver quicker, more accurate searches.) A popular computer program called “Freedom” disconnects you from the web at preset times.
A second answer involves willpower. Ration your intake. Turn off your mobile phone and internet from time to time.
But such ruses are not enough. Smarter filters cannot stop people from obsessively checking their BlackBerrys. Some do so because it makes them feel important; others because they may be addicted to the “dopamine squirt” they get from receiving messages, as Edward Hallowell and John Ratey, two academics, have argued. And self-discipline can be counter-productive if your company doesn’t embrace it. Some bosses get shirty if their underlings are unreachable even for a few minutes.
Most companies are better at giving employees access to the information superhighway than at teaching them how to drive. This is starting to change. Management consultants have spotted an opportunity. Derek Dean and Caroline Webb of McKinsey urge businesses to embrace three principles to deal with data overload: find time to focus, filter out noise and forget about work when you can. Business leaders are chipping in. David Novak of Yum! Brands urges people to ask themselves whether what they are doing is constructive or a mere “activity”. John Doerr, a venture capitalist, urges people to focus on a narrow range of objectives and filter out everything else. Cristobal Conde of SunGard, an IT firm, preserves “thinking time” in his schedule when he cannot be disturbed. This might sound like common sense. But common sense is rare amid the cacophony of corporate life.
In Chapter 4 I spoke of the behavioralist movement in political science, pioneered by Charles Merriam's department at the University of Chicago. Herbert Storing edited a book, Essays on the Scientific Study of Politics, in which the leading figures of behavioralism, myself included, were each criticized in a chapter of their own. Answering that attack would require a book as thick as Administrative Behavior itself, and I never dreamed of writing such a book. In my view, Administrative Behavior is its own best defense. My judgment seems to have stood the test of time; the passing years have not dimmed the book's luster. Of course I am still accused of "positivism," as though that were some great crime, or at least a misdemeanor. And it is still quite common for people not to understand why an "ought" cannot be derived logically unless at least one "ought" appears among the premises. These difficulties, however, have little to do with Storing's book. They stem from the current fashion of using "positivism" as a term of abuse without any clear notion of what positivists actually believe.
In economics, the controversy got under way more slowly. My first forays were a few papers on tax incidence [1] and on technical change, papers that coexisted peacefully with the neoclassical framework. Then came several articles suggesting that the limits of rationality must be recognized in order to build a more realistic picture of the business firm. Those papers already supplied the material for such a challenge.
[1] The author's explanation: for example, a landlord may legally owe the tax, but by raising the rent he "shifts" it onto his tenants. - Translator's note
Scholarly articles related to Herbert Simon, organization, and complex systems:
The organization of complex systems - Simon - cited by 498
The sciences of the artificial - Simon - cited by 10,463
The architecture of complexity - Simon - cited by 2,907
May 26th 2011 | from the print edition
THE here and now are defined by astronomy and geology. Astronomy takes care of the here: a planet orbiting a yellow star embedded in one of the spiral arms of the Milky Way, a galaxy that is itself part of the Virgo supercluster, one of millions of similarly vast entities dotted through the sky. Geology deals with the now: the 10,000-year-old Holocene epoch, a peculiarly stable and clement part of the Quaternary period, a time distinguished by regular shifts into and out of ice ages. The Quaternary forms part of the 65m-year Cenozoic era, distinguished by the opening of the North Atlantic, the rise of the Himalayas, and the widespread presence of mammals and flowering plants. This era in turn marks the most recent part of the Phanerozoic aeon, the 540m-year chunk of the Earth’s history wherein rocks with fossils of complex organisms can be found. The regularity of celestial clockwork and the solid probity of rock give these co-ordinates a reassuring constancy.
Now there is a movement afoot to change humanity’s co-ordinates. In 2000 Paul Crutzen, an eminent atmospheric chemist, realised he no longer believed he was living in the Holocene. He was living in some other age, one shaped primarily by people. From their trawlers scraping the floors of the seas to their dams impounding sediment by the gigatonne, from their stripping of forests to their irrigation of farms, from their mile-deep mines to their melting of glaciers, humans were bringing about an age of planetary change. With a colleague, Eugene Stoermer, Dr Crutzen suggested this age be called the Anthropocene—“the recent age of man”.
The term has slowly picked up steam, both within the sciences (the International Commission on Stratigraphy, ultimate adjudicator of the geological time scale, is taking a formal interest) and beyond. This May statements on the environment by concerned Nobel laureates and the Pontifical Academy of Sciences both made prominent use of the term, capitalising on the way in which it dramatises the sheer scale of human activity.
The advent of the Anthropocene promises more, though, than a scientific nicety or a new way of grabbing the eco-jaded public’s attention. The term “paradigm shift” is bandied around with promiscuous ease. But for the natural sciences to make human activity central to its conception of the world, rather than a distraction, would mark such a shift for real. For centuries, science has progressed by making people peripheral. In the 16th century Nicolaus Copernicus moved the Earth from its privileged position at the centre of the universe. In the 18th James Hutton opened up depths of geological time that dwarf the narrow now. In the 19th Charles Darwin fitted humans onto a single twig of the evolving tree of life. As Simon Lewis, an ecologist at the University of Leeds, points out, embracing the Anthropocene as an idea means reversing this trend. It means treating humans not as insignificant observers of the natural world but as central to its workings, elemental in their force.
The most common way of distinguishing periods of geological time is by means of the fossils they contain. On this basis picking out the Anthropocene in the rocks of days to come will be pretty easy. Cities will make particularly distinctive fossils. A city on a fast-sinking river delta (and fast-sinking deltas, undermined by the pumping of groundwater and starved of sediment by dams upstream, are common Anthropocene environments) could spend millions of years buried and still, when eventually uncovered, reveal through its crushed structures and weird mixtures of materials that it is unlike anything else in the geological record.
The fossils of living creatures will be distinctive, too. Geologists define periods through assemblages of fossil life reliably found together. One of the characteristic markers of the Anthropocene will be the widespread remains of organisms that humans use, or that have adapted to life in a human-dominated world. According to studies by Erle Ellis, an ecologist at the University of Maryland, Baltimore County, the vast majority of ecosystems on the planet now reflect the presence of people. There are, for instance, more trees on farms than in wild forests. And these anthropogenic biomes are spread about the planet in a way that the ecological arrangements of the prehuman world were not. The fossil record of the Anthropocene will thus show a planetary ecosystem homogenised through domestication.
More sinisterly, there are the fossils that will not be found. Although it is not yet inevitable, scientists warn that if current trends of habitat loss continue, exacerbated by the effects of climate change, a dramatic wave of extinctions could follow before long.
All these things would show future geologists that humans had been present. But though they might be diagnostic of the time in which humans lived, they would not necessarily show that those humans shaped their time in the way that people pushing the idea of the Anthropocene want to argue. The strong claim of those announcing the recent dawning of the age of man is that humans are not just spreading over the planet, but are changing the way it works.
Such workings are the province of Earth-system science, which sees the planet not just as a set of places, or as the subject of a history, but also as a system of forces, flows and feedbacks that act upon each other. This system can behave in distinctive and counterintuitive ways, including sometimes flipping suddenly from one state to another. To an Earth-system scientist the difference between the Quaternary period (which includes the Holocene) and the Neogene, which came before it, is not just what was living where, or what the sea level was; it is that in the Neogene the climate stayed stable whereas in the Quaternary it swung in and out of a series of ice ages. The Earth worked differently in the two periods.
The clearest evidence for the system working differently in the Anthropocene comes from the recycling systems on which life depends for various crucial elements. In the past couple of centuries people have released quantities of fossil carbon that the planet took hundreds of millions of years to store away. This has given them a commanding role in the planet’s carbon cycle.
Although the natural fluxes of carbon dioxide into and out of the atmosphere are still more than ten times larger than the amount that humans put in every year by burning fossil fuels, the human addition matters disproportionately because it unbalances those natural flows. As Mr Micawber wisely pointed out, a small change in income can, in the absence of a compensating change in outlays, have a disastrous effect. The result of putting more carbon into the atmosphere than can be taken out of it is a warmer climate, a melting Arctic, higher sea levels, improvements in the photosynthetic efficiency of many plants, an intensification of the hydrologic cycle of evaporation and precipitation, and new ocean chemistry.
All of these have knock-on effects both on people and on the processes of the planet. More rain means more weathering of mountains. More efficient photosynthesis means less evaporation from croplands. And the changes in ocean chemistry are the sort of thing that can be expected to have a direct effect on the geological record if carbon levels rise far enough.
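To make the Micawber arithmetic above concrete, here is a toy carbon budget in round, illustrative numbers; the exact fluxes, stocks and absorbed fraction are my assumptions for the sketch, not figures from the article.

```python
# Toy illustration of why a small, one-sided addition unbalances a large two-way flow.
# All numbers are rough, illustrative round figures.
NATURAL_FLUX = 120.0      # GtC/yr exchanged each way between atmosphere and land/ocean
HUMAN_EMISSIONS = 10.0    # GtC/yr from fossil fuels, roughly one-tenth of the natural flux
ABSORBED_FRACTION = 0.5   # assume land and ocean sinks take up about half the extra carbon

atmospheric_carbon = 850.0  # GtC, a rough starting stock
for year in range(100):
    # The natural flux in and out roughly cancels; the human addition does not.
    atmospheric_carbon += HUMAN_EMISSIONS * (1 - ABSORBED_FRACTION)

extra = 100 * HUMAN_EMISSIONS * (1 - ABSORBED_FRACTION)
print(f"Extra carbon accumulated over a century: {extra:.0f} GtC")
print(f"Atmospheric stock grows from 850 to {atmospheric_carbon:.0f} GtC, "
      f"a rise of {atmospheric_carbon / 850 - 1:.0%}")
```

The point of the sketch is simply that a flow one-tenth the size of the natural exchange, left uncompensated, steadily shifts the stock it feeds.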
At a recent meeting of the Geological Society of London that was devoted to thinking about the Anthropocene and its geological record, Toby Tyrrell of the University of Southampton pointed out that pale carbonate sediments—limestones, chalks and the like—cannot be laid down below what is called a “carbonate compensation depth”. And changes in chemistry brought about by the fossil-fuel carbon now accumulating in the ocean will raise the carbonate compensation depth, rather as a warmer atmosphere raises the snowline on mountains. Some ocean floors which are shallow enough for carbonates to precipitate out as sediment in current conditions will be out of the game when the compensation depth has risen, like ski resorts too low on a warming alp. New carbonates will no longer be laid down. Old ones will dissolve. This change in patterns of deep-ocean sedimentation will result in a curious, dark band of carbonate-free rock—rather like that which is seen in sediments from the Palaeocene-Eocene thermal maximum, an episode of severe greenhouse warming brought on by the release of pent-up carbon 56m years ago.
No Dickensian insights are necessary to appreciate the scale of human intervention in the nitrogen cycle. One crucial part of this cycle—the fixing of pure nitrogen from the atmosphere into useful nitrogen-containing chemicals—depends more or less entirely on living things (lightning helps a bit). And the living things doing most of that work are now people (see chart). By adding industrial clout to the efforts of the microbes that used to do the job single-handed, humans have increased the annual amount of nitrogen fixed on land by more than 150%. Some of this is accidental. Burning fossil fuels tends to oxidise nitrogen at the same time. The majority is done on purpose, mostly to make fertilisers. This has a variety of unwholesome consequences, most importantly the increasing number of coastal “dead zones” caused by algal blooms feeding on fertiliser-rich run-off waters.
Industrial nitrogen’s greatest environmental impact, though, is to increase the number of people. Although nitrogen fixation is not just a gift of life—it has been estimated that 100m people were killed by explosives made with industrially fixed nitrogen in the 20th century’s wars—its net effect has been to allow a huge growth in population. About 40% of the nitrogen in the protein that humans eat today got into that food by way of artificial fertiliser. There would be nowhere near as many people doing all sorts of other things to the planet if humans had not sped the nitrogen cycle up.
It is also worth noting that unlike many of humanity’s other effects on the planet, the remaking of the nitrogen cycle was deliberate. In the late 19th century scientists diagnosed a shortage of nitrogen as a planet-wide problem. Knowing that natural processes would not improve the supply, they invented an artificial one, the Haber process, that could make up the difference. It was, says Mark Sutton of the Centre for Ecology and Hydrology in Edinburgh, the first serious human attempt at geoengineering the planet to bring about a desired goal. The scale of its success outstripped the imaginings of its instigators. So did the scale of its unintended consequences.
For many of those promoting the idea of the Anthropocene, further geoengineering may now be in order, this time on the carbon front. Left to themselves, carbon-dioxide levels in the atmosphere are expected to remain high for 1,000 years—more, if emissions continue to go up through this century. It is increasingly common to hear climate scientists arguing that this means things should not be left to themselves—that the goal of the 21st century should be not just to stop the amount of carbon in the atmosphere increasing, but to start actively decreasing it. This might be done in part by growing forests (see article) and enriching soils, but it might also need more high-tech interventions, such as burning newly grown plant matter in power stations and pumping the resulting carbon dioxide into aquifers below the surface, or scrubbing the air with newly contrived chemical-engineering plants, or intervening in ocean chemistry in ways that would increase the sea’s appetite for the air’s carbon.
To think of deliberately interfering in the Earth system will undoubtedly be alarming to some. But so will an Anthropocene deprived of such deliberation. A way to try and split the difference has been propounded by a group of Earth-system scientists inspired by (and including) Dr Crutzen under the banner of “planetary boundaries”. The planetary-boundaries group, which published a sort of manifesto in 2009, argues for increased restraint and, where necessary, direct intervention aimed at bringing all sorts of things in the Earth system, from the alkalinity of the oceans to the rate of phosphate run-off from the land, close to the conditions pertaining in the Holocene. Carbon-dioxide levels, the researchers recommend, should be brought back from whatever they peak at to a level a little higher than the Holocene’s and a little lower than today’s.
The idea behind this precautionary approach is not simply that things were good the way they were. It is that the further the Earth system gets from the stable conditions of the Holocene, the more likely it is to slip into a whole new state and change itself yet further.
The Earth’s history shows that the planet can indeed tip from one state to another, amplifying the sometimes modest changes which trigger the transition. The nightmare would be a flip to some permanently altered state much further from the Holocene than things are today: a hotter world with much less productive oceans, for example. Such things cannot be ruled out. On the other hand, the invocation of poorly defined tipping points is a well worn rhetorical trick for stirring the fears of people unperturbed by current, relatively modest, changes.
In general, the goal of staying at or returning close to Holocene conditions seems judicious. It remains to be seen if it is practical. The Holocene never supported a civilisation of 10 billion reasonably rich people, as the Anthropocene must seek to do, and there is no proof that such a population can fit into a planetary pot so circumscribed. So it may be that a “good Anthropocene”, stable and productive for humans and other species they rely on, is one in which some aspects of the Earth system’s behaviour are lastingly changed. For example, the Holocene would, without human intervention, have eventually come to an end in a new ice age. Keeping the Anthropocene free of ice ages will probably strike most people as a good idea.
That is an extreme example, though. No new ice age is due for some millennia to come. Nevertheless, to see the Anthropocene as a blip that can be minimised, and from which the planet, and its people, can simply revert to the status quo, may be to underestimate the sheer scale of what is going on.
Take energy. At the moment the amount of energy people use is part of what makes the Anthropocene problematic, because of the carbon dioxide given off. That problem will not be solved soon enough to avert significant climate change unless the Earth system is a lot less prone to climate change than most scientists think. But that does not mean it will not be solved at all. And some of the zero-carbon energy systems that solve it—continent-scale electric grids distributing solar energy collected in deserts, perhaps, or advanced nuclear power of some sort—could, in time, be scaled up to provide much more energy than today’s power systems do. As much as 100 clean terawatts, compared to today’s dirty 15TW, is not inconceivable for the 22nd century. That would mean humanity was producing roughly as much useful energy as all the world’s photosynthesis combined.
In a fascinating recent book, “Revolutions that Made the Earth”, Timothy Lenton and Andrew Watson, Earth-system scientists at the universities of Exeter and East Anglia respectively, argue that large changes in the amount of energy available to the biosphere have, in the past, always marked large transitions in the way the world works. They have a particular interest in the jumps in the level of atmospheric oxygen seen about 2.4 billion years ago and 600m years ago. Because oxygen is a particularly good way of getting energy out of organic matter (if it weren’t, there would be no point in breathing) these shifts increased sharply the amount of energy available to the Earth’s living things. That may well be why both of those jumps seem to be associated with subsequent evolutionary leaps—the advent of complex cells, in the first place, and of large animals, in the second. Though the details of those links are hazy, there is no doubt that in their aftermath the rules by which the Earth system operated had changed.
The growing availability of solar or nuclear energy over the coming centuries could mark the greatest new energy resource since the second of those planetary oxidations, 600m years ago—a change in the same class as the greatest the Earth system has ever seen. Dr Lenton (who is also one of the creators of the planetary-boundaries concept) and Dr Watson suggest that energy might be used to change the hydrologic cycle with massive desalination equipment, or to speed up the carbon cycle by drawing down atmospheric carbon dioxide, or to drive new recycling systems devoted to tin and copper and the many other metals as vital to industrial life as carbon and nitrogen are to living tissue. Better to embrace the Anthropocene’s potential as a revolution in the way the Earth system works, they argue, than to try to retreat onto a low-impact path that runs the risk of global immiseration.
Such a choice is possible because of the most fundamental change in Earth history that the Anthropocene marks: the emergence of a form of intelligence that allows new ways of being to be imagined and, through co-operation and innovation, to be achieved. The lessons of science, from Copernicus to Darwin, encourage people to dismiss such special pleading. So do all manner of cultural warnings, from the hubris around which Greek tragedies are built to the lamentation of King David’s preacher: “Vanity of vanities, all is vanity…the Earth abideth for ever…and there is no new thing under the sun.” But the lamentation of vanity can be false modesty. On a planetary scale, intelligence is something genuinely new and powerful. Through the domestication of plants and animals intelligence has remade the living environment. Through industry it has disrupted the key biogeochemical cycles. For good or ill, it will do yet more.
It may seem nonsense to think of the (probably sceptical) intelligence with which you interpret these words as something on a par with plate tectonics or photosynthesis. But dam by dam, mine by mine, farm by farm and city by city it is remaking the Earth before your eyes.
Anthropocene was originally coined by ecologist Eugene Stoermer but subsequently popularized by the Nobel Prize-winning scientist Paul Crutzen by analogy with the word "Holocene." The Greek roots are anthropo- meaning "human" and -cene meaning "new." Crutzen has explained, "I was at a conference where someone said something about the Holocene. I suddenly thought this was wrong. The world has changed too much. So I said: 'No, we are in the Anthropocene.' I just made up the word on the spur of the moment. Everyone was shocked. But it seems to have stuck."[6] Crutzen first used it in print in a 2000 newsletter of the International Geosphere-Biosphere Programme (IGBP), No.41. In 2008, Zalasiewicz suggested in GSA Today that an anthropocene epoch is now appropriate.[7]