Nick Bostrom

03.05.2015

Episode #9 of the course “Significant futurists and their ideas”

As the technological age advanced at the end of the 20th and the beginning of the 21st centuries, many scholars and theorists feared that the creation of superintelligent beings would pose severe risks to the human species. Nick Bostrom, a Swedish philosopher who earned his PhD at the London School of Economics, believes not only that it is wrong to assume superintelligent beings would strive to dominate humanity, but that they are more likely to be indifferent to the values humans hold as natural beings, such as self-preservation and procreation.

Bostrom’s work argues that it is wrong to assume superintelligent machines would dominate humankind out of a human motivation like revenge, because artificial intelligence (AI) would not operate under the same principles of morality, or possibly under any morality at all. His prolific writings also examine the existential risks unique to humanity and why it is unremarkable, given that we are here to observe them, that the complex conditions necessary to sustain life exist in time and space.

Bostrom holds postgraduate degrees in theoretical physics, philosophy, and computational neuroscience, and he combines these fields of knowledge to expound on the existential realities of life in the contemporary age. His cornerstone work, Superintelligence: Paths, Dangers, Strategies, analyzes the dangers of artificial intelligence, the prospects of human enhancement, and the consequences of exponentially more sophisticated technologies. Bostrom founded the Future of Humanity Institute as well as the Oxford Martin Programme on the Impacts of Future Technology. His work engages the ethical debate surrounding artificial intelligence, its capacities, and its pitfalls; he argues that even small improvements in humanity’s chances of preventing or surviving a global catastrophic event are worth the effort, because they maximize humanity’s long-term prospects.
