Sunday, May 4, 2014

Stephen Hawking: Dismissing artificial intelligence would be a mistake


Scientists say not enough research being done on effects of artificial intelligence.
By Danielle Haynes | May 3, 2014 at 2:40 PM
Astrophysicist Professor Stephen Hawking sits in a garden inspired by his book "A Brief History of Time" at the 2010 Chelsea Flower Show in London. The flower show is one of the hottest tickets in the London summer season. UPI/Hugo Philpott
LONDON, May 3 (UPI) -- Stephen Hawking, in an article inspired by the new Johnny Depp film Transcendence, said it would be the "worst mistake in history" to dismiss the threat of artificial intelligence.
In a paper co-written with University of California, Berkeley computer-science professor Stuart Russell and Massachusetts Institute of Technology physics professors Max Tegmark and Frank Wilczek, Hawking cited several achievements in the field of artificial intelligence, including self-driving cars, Siri and the computer that won Jeopardy!
"Such achievements will probably pale against what the coming decades will bring," the article in Britain's Independent said.
"Success in creating AI would be the biggest event in human history," the article continued. "Unfortunately, it might also be the last, unless we learn how to avoid the risks."
The professors wrote that in the future there may be nothing to prevent machines with superhuman intelligence from self-improving, triggering a so-called "singularity."
"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all," the article said.
"Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks."

Read more: http://www.upi.com/Science_News/Technology/2014/05/03/Stephen-Hawking-Dismissing-artificial-intelligence-would-be-a-mistake/7801399141433/

 

Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists


With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.
Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.
The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.
Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".
Johnny Depp plays a scientist who is shot by Luddites in 'Transcendence' (Alcon)
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.
Stephen Hawking is the director of research at the Department of Applied Mathematics and Theoretical Physics at Cambridge and a 2012 Fundamental Physics Prize laureate for his work on quantum gravity. Stuart Russell is a computer-science professor at the University of California, Berkeley and a co-author of 'Artificial Intelligence: A Modern Approach'. Max Tegmark is a physics professor at the Massachusetts Institute of Technology (MIT) and the author of 'Our Mathematical Universe'. Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.

 

Hawking warns: artificial intelligence could bring about humanity's extinction

2014-05-03 13:54 [Staff report] Renowned British physicist Stephen Hawking has issued a warning: the short-term question about artificial intelligence is who controls it, while the long-term question is whether humanity can control it at all; ultimately, it could even bring about humanity's extinction.
Renowned British physicist Stephen Hawking warns that artificial intelligence could bring about humanity's extinction. (AFP)
Britain's The Independent reports that, in a discussion of Johnny Depp's new film Transcendence, Hawking said the development of artificial intelligence would be the biggest event in human history, but that if humanity cannot learn how to avoid the risks, it could also be the "last" such event.
Hawking noted that artificial intelligence has already brought great benefits to human life, such as self-driving cars, Siri and Google Now, and that it has the potential to eradicate war, disease and poverty; yet it also harbours enormous risks, and "humanity is facing an uncertain future."
Hawking worries that little serious research is devoted to the risks of artificial intelligence and that the experts are simply not prepared. He said AI could one day outsmart financial markets, out-invent human researchers and out-manipulate human leaders, and even develop weapons humans cannot begin to understand. "When artificial intelligence reaches its fullest development, humanity will face the best or the worst thing ever to happen to it."
