Sunday, July 29, 2018

What kind of "industry course" can a university offer? "Entertainment technology"


The author's introduction is fine, but the article that follows turns to one particular industry, "entertainment technology"...
Because "industry" covers an enormous range, undergraduates can only draw on their own experience (you can search this site).
Nor will CMU bend itself to the demands and opportunities of any single industry.




Written by 謝宇程
Whatever we major in at university, whether mechanical engineering, design, civil engineering, or pharmacy, we will all eventually enter industry and go to work for a company (Note 1). The words "company" and "industry" weigh heavily on every one of us, no less than our professional skills and knowledge do: not through the macroeconomic environment at large, but through each person's own working environment, conditions, income, and prospects.
Yet what are "companies" and "industries" in essence? How do we come to understand the industry and company we will one day work in? How do we choose, how do we judge, and how do we prepare? Most students act as if these questions did not exist, and the vast majority of schools and departments take the same attitude of willful blindness.
Business and management departments are supposed to teach students to understand and face the commercial world, aren't they? But some of these departments burrow into narrow subfields such as finance, accounting, or transport and logistics, while others hold forth on strategy, leadership, innovation, and organizational culture: abstractions that float in mid-air and never touch the ground. In particular, their case studies are usually the overall business strategies of large multinational firms from Europe, America, and Japan. Such cases certainly read luxuriously, but will most students ever join those firms? Not necessarily. Is top-level strategic management thinking really the business knowledge a student of around twenty most needs and can best use? Even less likely!
If so, how can students around twenty, with little industry experience, come to understand industries and companies in the way that is most beneficial, most practical, most useful, and most immediately applicable to themselves? If a school wanted to design an "industry course" whose content most students would certainly put to use within two or three years, how might it be designed?
I recently interviewed a Taiwanese student in the "entertainment technology" program at Carnegie Mellon University. Her graduate school, the Entertainment Technology Center (ETC for short; in this article, please don't mistake the acronym for Taiwan's freeway electronic toll collection system), requires an important course in the first semester called "Introduction to the Entertainment Industry". It may be worth analyzing carefully and learning from. (Note 2)

~ What kind of "industry course" can a university offer? (Part 1)

Monday, July 16, 2018

Data mining reveals fundamental pattern of human thinking. Simon model

Intelligent Machines

Data mining reveals fundamental pattern of human thinking

Word frequency patterns show that humans process common and uncommon words in different ways, with important consequences for natural-language processing.

Back in 1935, the American linguist George Zipf made a remarkable discovery. Zipf was curious about the relationship between common words and less common ones. So he counted how often words occur in ordinary language and then ordered them according to their frequency.
This revealed a remarkable regularity. Zipf found that the frequency of a word is inversely proportional to its place in the rankings. So a word that is second in the ranking appears half as often as the most common word. The third-ranked word appears one-third as often and so on.
In English, the most popular word is the, which makes up about 7 percent of all words, followed by and, which occurs 3.5 percent of the time, and so on. Indeed, about 135 words account for half of all word appearances. So a few words appear often, while most hardly ever appear.
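Zipf's inverse-rank rule can be sketched in a few lines of Python. The 7 percent figure for "the" and the 3.5 percent figure for "and" come from the article; the helper function and its name are illustrative, not part of the study:

```python
# A minimal sketch of Zipf's prediction: a word's share of all word
# occurrences is inversely proportional to its frequency rank.

def zipf_share(rank, top_share=0.07):
    """Predicted fraction of all word occurrences for a given rank,
    assuming the top-ranked word accounts for top_share of them."""
    return top_share / rank

# Rank 2 ("and") is predicted at half the top word's share: 3.5 percent.
print(zipf_share(2))
# Rank 3 is predicted at one-third of the top word's share.
print(zipf_share(3))
```

Under this rule the product of rank and frequency is roughly constant, which is why a rank-frequency plot on log-log axes comes out as a straight line.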
But why? One intriguing possibility is that the brain processes common words differently and that studying Zipf’s distribution should reveal important insights into this brain process.
There is a problem, though. Linguists do not all agree that the statistical distribution of word frequency is the result of cognitive processes. Instead, some say the distribution is the result of statistical errors associated with low-frequency words, which can produce similar distributions.
What’s needed, of course, is a bigger study across a wide range of languages. Such a large-scale study would be more statistically powerful and so able to tease these possibilities apart.
Today, we get just such a study thanks to the work of Shuiyuan Yu and colleagues at the Communication University of China in Beijing. These guys have found Zipf’s Law in 50 languages taken from a wide range of linguistic classes, including Indo-European, Uralic, Altaic, Caucasian, Sino-Tibetan, Dravidian, Afro-Asiatic, and so on.
Yu and co say the word frequencies in these languages share a common structure that differs from the one that statistical errors would produce. What’s more, they say this structure suggests that the brain processes common words differently from uncommon ones, an idea that has important consequences for natural-language processing and the automatic generation of text.
Yu and co’s method is straightforward. They begin with two large collections of text called the British National Corpus and the Leipzig Corpus. These include samples from 50 different languages, each sample containing at least 30,000 sentences and up to 43 million words.
The researchers found that the word frequencies in all the languages follow a modified Zipf’s Law in which the distribution can be divided into three segments. “The statistical results show that Zipf’s laws in 50 languages all share a three-segment structural pattern, with each segment demonstrating distinctive linguistic properties,” say Yu and co.
This structure is interesting. Yu and co have tried to simulate it using a number of models for creating words. One model is the monkey-at-a-typewriter model, which generates random letters that form words whenever a space occurs.
This process generates a power-law distribution like Zipf’s Law. However, it cannot generate the three-segment structure that Yu and co have found. Neither can this structure be generated by errors associated with low-frequency words.
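The monkey-at-a-typewriter model described above can be simulated in a few lines. This is a rough sketch under stated assumptions: the alphabet size, the probability of hitting the space bar, and the function names are all my own choices, not values from the paper:

```python
import random
from collections import Counter

def monkey_text(n_chars, alphabet="abcde", p_space=0.2, seed=42):
    """Type n_chars random characters; each keystroke is a space with
    probability p_space, otherwise a uniformly random letter."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_chars):
        if rng.random() < p_space:
            out.append(" ")
        else:
            out.append(rng.choice(alphabet))
    return "".join(out)

# Words are the letter runs between spaces; rank them by frequency.
words = monkey_text(200_000).split()
ranked = Counter(words).most_common()

# Counts fall off steeply with rank, power-law style.
for rank in (1, 2, 4, 8, 16):
    word, count = ranked[rank - 1]
    print(rank, word, count)
```

Plotting count against rank on log-log axes gives a roughly straight line, a single power law, which is exactly the point: this model produces no three-segment structure.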
However, Yu and co are able to reproduce this structure using a model of the way the brain works called the dual-process theory. This is the idea that the brain works in two different ways.
The first is fast intuitive thinking that requires little or no reasoning. This type of thinking is thought to have evolved to allow humans to react quickly in threatening situations. It generally provides good solutions to difficult problems, such as pattern recognition, but can easily be tricked by non-intuitive situations.
However, humans are capable of much more rational thinking. This second type of thinking is slower, more calculating, and deliberate. It is this kind of thinking that allows us to solve complex problems like mathematical puzzles and so on.
The dual-process theory suggests that common words like the, and, if and so on are processed by fast, intuitive thinking and so are used more often. These words form a kind of backbone for sentences.
However, less common words and phrases like hypothesis and Zipf’s Law require much more careful thought. And because of this they occur less often.
Indeed, when Yu and co simulate this dual process, it leads to the same three-segment structure in the word frequency distribution that they measured in 50 different languages.
The first segment reflects the distribution of common words, the last segment reflects the distribution of uncommon words, and the middle segment is the result of the crossover of these two regimes. “These results show that Zipf’s Law in languages is motivated by cognitive mechanisms like dual-processing that govern human verbal behaviors,” say Yu and co.
That’s interesting work. The idea that the human brain processes information in two different ways has gained considerable momentum in recent years, not least because of the book Thinking, Fast and Slow by the Nobel Prize-winning psychologist Daniel Kahneman, who has studied this idea in detail.
A well-known problem used to trigger fast and slow thinking is this:
“A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”
The answer, of course, is 5 cents. But almost everyone has the initial inclination to think 10 cents. That’s because 10 cents feels about right. It’s the right order of magnitude and is suggested by the framing of the problem. That answer comes from the fast, intuitive side of your brain.
But it’s wrong. The right answer requires the slower, more calculating part of your brain.
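The slow, deliberate route to the right answer is plain algebra: if the ball costs x, then the bat costs x + 1.00, so x + (x + 1.00) = 1.10 and x = 0.05. A tiny sketch (the variable names are mine):

```python
# Bat-and-ball puzzle, solved the "slow thinking" way:
# ball + bat = total, and bat = ball + difference,
# so 2 * ball = total - difference.
total = 1.10
difference = 1.00
ball = round((total - difference) / 2, 2)  # 0.05, not the intuitive 0.10
bat = ball + difference                    # 1.05
print(ball, bat)
```

A quick sanity check: 1.05 + 0.05 = 1.10, and the bat is exactly one dollar more than the ball, so both conditions hold; the intuitive answer of 10 cents would make the total 1.20.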
Yu and co say the same two processes are involved in generating sentences. The fast-thinking part of your brain creates the basic structure of the sentence, the common function words (marked in bold in the original article). The other words require the slower, more calculating part of your brain.
It is this dual process that leads to the three-segmented Zipf’s Law.
That should have interesting consequences for computer scientists working on natural language processing. This field has benefited from huge advances in recent years. These have come from machine-learning algorithms but also from large databases of text gathered by companies like Google.
But generating natural language is still hard. You don’t have to chat with Siri, Cortana, or the Google Assistant for long to come up against their conversational limits.
So a better understanding of how humans generate sentences could help significantly. Zipf surely would have been fascinated.
Ref: arxiv.org/abs/1807.01855: Zipf’s Law in 50 Languages: Its Structural Pattern, Linguistic Interpretation, and Cognitive Motivation


*****



https://en.wikipedia.org/wiki/Power_law


Computer Science > Computation and Language

Existence of Hierarchies and Human's Pursuit of Top Hierarchy Lead to Power Law

The power law is ubiquitous in natural and social phenomena, and is considered as a universal relationship between the frequency and its rank for diverse social systems. However, a general model is still lacking to interpret why these seemingly unrelated systems share great similarity. Through a detailed analysis of natural language texts and simulation experiments based on the proposed 'Hierarchical Selection Model', we found that the existence of hierarchies and human's pursuit of top hierarchy lead to the power law. Further, the power law is a statistical and emergent performance of hierarchies, and it is the universality of hierarchies that contributes to the ubiquity of the power law.

Wednesday, July 4, 2018

A world-class pen pal

A great many Taiwanese books come with a "recommendation preface".
There are even superhuman recommenders: in one market segment ("literary" books) their share is at least eighty percent. Future historians of literature, looking at one such recommender's collected prefaces, will be able to infer a great deal.
Alas, the world is vast. Every person, superhuman or not, whatever their honors and however many their works, is still only one individual.
I once had a world-class pen pal. He passed away more than a decade ago, and I miss him dearly.