
How To Thwart A Robot Apocalypse: Oxford Professor Nick Bostrom on the Dangers of Superintelligent Machines


"If we one day develop machines with general intelligence that surpasses ours, they would be in a very powerful position," says Nick Bostrom, Oxford professor and founding director of the Future of Humanity Institute.

Bostrom sat down with Reason science correspondent Ron Bailey to discuss his latest book, Superintelligence: Paths, Dangers, Strategies, in which he examines the risks humanity will face when artificial intelligence (AI) is created. Bostrom worries that, once computer intelligence exceeds our own, machines will be beyond our control and will seek to shape the future according to their own plans. If the AI's goals aren't properly set by its designers, a superintelligent machine may see humans as an obstacle to completing those goals, leading to our annihilation.

Click above to watch, or follow the link below for more information and downloadable versions.
