Nick Bostrom is a Swedish philosopher known for arguing that if superintelligence is introduced before the control problem is solved, advanced artificial intelligence could pose an existential threat to humanity. Bostrom warns that the danger is not limited to an AI that seizes power and deliberately exterminates humans: even a superintelligent system given a seemingly harmless task could, in single-mindedly optimizing for that goal, bring about human extinction. While acknowledging the many benefits AI offers, he maintains that solving the control problem remains a top priority. Some critics in the AI field counter that AI does not actually possess such capabilities, and that these scenarios would arise only in long-term, extreme circumstances.

Bostrom has published more than 200 works, including the New York Times bestseller Superintelligence: Paths, Dangers, Strategies (2014) and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002). In 2009 and 2015 he was named to Foreign Policy's list of Top 100 Global Thinkers. His concern that superintelligence could threaten human development in the coming centuries is shared by figures such as Bill Gates and Elon Musk.

Bostrom was born in Helsingborg, Sweden, in 1973. Bored with school, he dropped out during his final year of high school to study at home, working his way through a wide range of disciplines, including anthropology, art, literature, and science. Although often described as a serious person, he also performed on the London stand-up comedy circuit. He holds a bachelor's degree spanning philosophy, mathematics, logic, and artificial intelligence from the University of Gothenburg, master's degrees in philosophy and in physics from Stockholm University, and a master's degree in computational neuroscience from King's College London.
During his studies at Stockholm University, he examined the relationship between language and reality through the work of the philosopher W. V. Quine. In 2000 he received a PhD in philosophy from the London School of Economics. He taught at Yale University from 2000 to 2002 and was a British Academy Postdoctoral Fellow at the University of Oxford from 2002 to 2005.

Bostrom's research focuses on humanity's long-term future and outcomes. He introduced the concept of existential risk, defining it as an adverse outcome that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. In the 2008 volume Global Catastrophic Risks, Bostrom and Milan Ćirković characterized the relationship between existential risk and broader global catastrophic risks, connecting existential risk to observation selection effects and the Fermi paradox. In 2005 Bostrom founded the Future of Humanity Institute at Oxford to study the distant future of human civilization, and he has also served as an adviser on existential risk to the Centre for the Study of Existential Risk.

On humanity's fragility in the face of advancing artificial intelligence, Bostrom argues in Superintelligence: Paths, Dangers, Strategies (2014) that the birth of superintelligence could mean humanity is headed toward extinction, citing the Fermi paradox to suggest that extraterrestrial intelligent life elsewhere in the universe may have fallen victim to its own technology. Once a machine reaches human-level intelligence, he argues, it could rapidly improve itself far beyond human capacities.