Unchecked AI Will Bring On Human Extinction, with Michael Vassar

  • 6 years ago
Futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity. The only thing that could save us is due caution, and a framework installed to prevent such a thing from happening. Yet Vassar notes that AI itself isn't the greatest risk to humanity. Rather, it's "the absence of social, intellectual frameworks" through which experts making key discoveries and drawing analytical conclusions can swiftly and convincingly communicate those ideas to the public.

Read more at BigThink.com: http://goo.gl/R58APd

Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink

Transcript: If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order. People have written extensively about why it is basically analytically compulsory to conclude that in the default scenario, in the absence of major surprising developments or concerted, integrated effort, artificial intelligence will, in the long term, replace humanity.

It’s the natural, all but inevitable consequence of greater-than-human artificial intelligence that it ought to develop what Steve Omohundro has called basic AI drives, and basic AI drives basically boil down to properties of any goal-directed system. Obedience to von Neumann–Morgenstern decision theory suggests that one ought to do the things that one expects to have the best outcomes according to some value function. And that value function uniquely specifies some configuration of matter in the universe. And unless the value function that is built into an AI implicitly specifies a configuration of matter in the universe that conforms to our values, which would require a great deal of planning to make happen, then, given sufficient power, we should expect an AI to reconfigure the universe in a manner that does not preserve our values. As far as I can tell, this position is analytically compelling. It is not a position that a person can intelligently, honestly, and reasonably be uncertain about.
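The decision-theoretic claim above can be made concrete with a small sketch: a von Neumann–Morgenstern agent ranks actions by their expected utility under some value function, and simply picks the maximizer. The function names, the toy value function, and the lotteries below are all illustrative inventions for this example, not anything from Omohundro's paper or Vassar's talk.

```python
# A minimal sketch of von Neumann-Morgenstern-style expected-utility choice.
# `expected_utility`, `choose`, and the toy lotteries are hypothetical names
# made up for illustration.

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * utility(outcome) for p, outcome in lottery)

def choose(lotteries, utility):
    """A goal-directed agent picks whichever action maximizes expected utility."""
    return max(lotteries, key=lambda action: expected_utility(lotteries[action], utility))

# Toy value function and two candidate actions:
utility = lambda outcome: outcome
lotteries = {
    "safe":  [(1.0, 1.0)],               # guaranteed payoff of 1.0
    "risky": [(0.5, 3.0), (0.5, 0.0)],   # expected payoff of 1.5
}

print(choose(lotteries, utility))  # -> risky
```

The point of the argument is that nothing in this loop cares what the value function *is*: whatever `utility` encodes, the agent steers the world toward its maximum, which is why an unaligned value function plus sufficient power is the worrying combination.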

Therefore, I conclude that the major global catastrophic threat to humanity is not AI but rather the absence of social, intellectual frameworks for people quickly and easily converging on analytically compelling conclusions. Nick Bostrom, who recently wrote the book Superintelligence (Machine Superintelligence, I believe), was aware of the basic concerns associated with AI risk 20 years ago and wrote about them intelligently, in a manner that ought to be sufficiently compelling to convince any thoughtful and open-minded person. By ten years ago, practically everything that is said in Machine Intelligence had been developed intellectually into a form that ought to have convinced a person who was more skeptical and not willing to think for themselves, but who was willing to listen to other people’s thoughts and merely critically scrutinize them. But instead Bostrom had to spend ten more years becoming the director of an incredibly prestigious institute and writing an incredibly rigorous, meticulous book in order to get a still tiny number of people, and still a minority of the world’s, essentially, most analytically capable people, onto the right page on a topic that is, from a philosophy perspective, about as difficult as Plato’s issues in the Republic about how it is possible for an object to be bigger than one thing and smaller than another even though bigness and smallness are opposites. We are talking about completely trivial conclusions, and we are talking about the world’s greatest minds failing to adopt these conclusions, when they are laid out analytically, until an enormous body of prestige is placed behind them. [TRANSCRIPT TRUNCATED]


Directed/Produced by Jonathan Fowler, Elizabeth Rodd, and Dillon Fitton
