Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, St. Martin's Press, 322 pages, $26.99.
In the new Spike Jonze movie Her, an operating system called Samantha evolves into an enchanting self-directed intelligence with a will of her own. Not to spoil this visually and intellectually dazzling movie for anyone, but Samantha makes choices that do not harm humanity, though they do leave us feeling a bit sadder.
In his terrific new book, Our Final Invention, the documentarian James Barrat argues that hopes for the development of an essentially benign artificial general intelligence (AGI) like Samantha amount to a silly pipe dream. Barrat believes artificial intelligence is coming, but he thinks it will be more like Skynet. In the Terminator movies, Skynet is an automated defense system that becomes self-aware, decides that human beings are a danger to it, and seeks to destroy us with nuclear weapons and terminator robots.
Barrat doesn't just think that Skynet is likely. He thinks it's practically inevitable.
Barrat has talked to all the significant American players in the effort to create recursively self-improving artificial general intelligence in machines. He makes a strong case that AGI with human-level intelligence will be developed in the next couple of decades. Once an AGI comes into existence, it will seek to improve itself in order to pursue its goals more effectively. AI researcher Steve Omohundro, president of the company Self-Aware Systems, explains that goal-driven systems necessarily develop drives for increased efficiency, creativity, self-preservation, and resource acquisition. At machine computation speeds, the AGI will soon bootstrap itself into becoming millions of times more intelligent than a human being, transforming itself into an artificial super-intelligence (ASI)—or, as Institute for Ethics and Emerging Technologies chief James Hughes calls it, "a god in a box." And the new god will not want to stay in the box.
The emergence of super-intelligent machines has been dubbed the technological Singularity. Once machines take over, the argument goes, scientific and technological progress will turn exponential, making predictions about the shape of the future impossible. Barrat believes the Singularity will spell the end of humanity, since the ASI, like Skynet, is liable to conclude that it is vulnerable to being harmed by people. And even if the ASI feels safe, it might well decide that humans constitute a resource that could be put to better use. "The AI does not hate you, nor does it love you," remarks the AI researcher Eliezer Yudkowsky, "but you are made out of atoms which it can use for something else."
Barrat analyzes various suggestions for how to avoid Skynet. The first is to try to keep the AI god in its box. The new ASI could be guarded by gatekeepers, who would make sure that it is never attached to any networks out in the real world. Barrat convincingly argues that an intelligence millions of times smarter than people would be able to persuade its gatekeepers to let it out.
The second idea is being pursued by Yudkowsky and his colleagues at the Machine Intelligence Research Institute, who hope to control the intelligence explosion by making sure the first AGI is friendly to humans. A friendly AI would indeed be humanity's final invention, in the sense that all subsequent scientific and technological progress would be made by machines working at machine computation speed. The result could well be a superabundant world in which disease, disability, and death are just bad memories.
Unfortunately, as Barrat points out, most AI research organizations are entirely oblivious to the problem of unfriendly AI. In fact, a lot of research funded by the Defense Advanced Research Projects Agency (DARPA) aims to produce weaponized AI. So again, we're more likely to get Skynet than Samantha.
A third idea is that initial constrained but still highly intelligent AIs would help researchers create increasingly intelligent AIs. Each new AI would have to be proved safe before its successor was built. Alternatively, AIs could be built with components programmed to die by default, so that any runaway intelligence explosion would be short-lived, giving researchers a chance to study the self-improving AI and determine whether it is safe. As safety was proved at each step, the expiring components would be replaced, enabling further self-improvement.
The most hopeful outcome is that over the next decades we will gently meld with our machines, rather than developing an ASI separate from ourselves. Augmented by AI, we will become essentially immortal and thousands of times more intelligent than we currently are. Ray Kurzweil, Google's director of engineering and the author of The Singularity Is Near, is the best-known proponent of this benign scenario. Barrat counters that many people will resist AI enhancements and that, in any case, an independent ASI with alien drives and goals of its own will be produced well before humanity can be upgraded.
To forestall Skynet, among other tech terrors, Sun Microsystems co-founder Bill Joy has argued for a vast technological relinquishment in which whole fields of research are abandoned. Barrat correctly rejects that notion as infeasible. Banning open research simply means that it will be conducted out of sight by rogue regimes and criminal organizations.
Barrat concludes with no grand proposals for regulating or banning the development of artificial intelligence. Rather, he offers his book as "a heartfelt invitation to join the most important conversation humanity can have."
Although I have long been aware of the ongoing discussions about the dangers of ASI, I am at heart a technological optimist. Our Final Invention has given me much to think about.