
How Artificial Stupidity Can Kill Us All



Forget Skynet. The real danger isn't superintelligent machines. It's powerful dumb ones.

Three items combined serendipitously to bring this peril to my attention: William Hertling's sci-fi novel Avogadro Corp, a New Scientist article about an app that generates emails that fake empathy, and a New York Times blog post on the dangers of artificial stupidity. As the tagline on the cover of Hertling's novel warns, "The Singularity is closer than it appears." Closer and klutzier.

In Hertling's 2014 novel, computer genius David Ryan heads the Email Language Optimization Project at Avogadro Corporation (a very thinly disguised stand-in for Google), where his team has created ELOPe—an app that helps users "craft more compelling, effective communications." In order to persuade, ELOPe reads through the emails received and sent by the target. Based on what it finds, the app makes suggestions for word choices, data, reasoning, and emotional appeals that will motivate the recipient to act as the sender wants.

Ryan describes his new app as the biggest improvement to email since spell-check and grammar check. Unfortunately, a hostile Avogadro Corp ops manager wants to kill the ELOPe project because he thinks it is using too many of the company's computational resources. In desperation, Ryan modifies ELOPe, imbuing it with the goal of doing whatever it must to persuade people to grant it the resources it needs. A flood of emails ensues, and let's just say the world becomes a pretty interesting place thereafter.

Shortly after finishing Avogadro Corp, I came across New Scientist's article about the Crystal Knows app. Crystal Knows, which bills itself as the "biggest improvement to email since spell-check," promises that it can "show you the best way to communicate with any coworker, prospect, or customer based on their unique personality." How? By applying its algorithm to the online information about a recipient and then helping you to select "the words, phrases, style, and tone you should use to reach the recipient in the way that they like to communicate, rather than your own." Instant empathy.

Crystal Knows is far from alone in trying to figure out how to push your empathy buttons. For example, Persado is an automated persuasion platform with a personality analysis algorithm; it generates marketing language and emotional insights for client companies aiming to motivate their customers and stakeholders.
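Neither company publishes its methods, so any code here is guesswork. But the underlying trick is easy to caricature: score how the recipient writes, then mirror it back. Below is a minimal, purely illustrative Python sketch, with invented word lists and function names, of what that style-matching might look like in its crudest form.

```python
# Toy illustration of personality-matched phrasing, loosely in the spirit of
# apps like Crystal Knows. All heuristics, word lists, and names here are
# invented for illustration; the real products use proprietary models.

FORMAL_MARKERS = {"regards", "sincerely", "per", "kindly", "pursuant"}
CASUAL_MARKERS = {"hey", "thanks", "cool", "awesome", "lol"}

def formality_score(past_messages):
    """Crudely estimate how formal a recipient's own writing is (0.0 to 1.0)."""
    formal = casual = 0
    for msg in past_messages:
        for word in msg.lower().split():
            word = word.strip(".,!?;:")
            formal += word in FORMAL_MARKERS
            casual += word in CASUAL_MARKERS
    total = formal + casual
    return 0.5 if total == 0 else formal / total

def suggest_opening(past_messages, recipient_name):
    """Pick a greeting that mirrors the recipient's apparent style."""
    if formality_score(past_messages) > 0.6:
        return f"Dear {recipient_name},"
    return f"Hey {recipient_name},"

# Example: the recipient's old emails read as informal, so the app
# nudges the sender toward a casual greeting.
history = ["Hey team, awesome work on the demo!", "Thanks! Talk soon."]
print(suggest_opening(history, "Jordan"))  # -> "Hey Jordan,"
```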

The third serendipitous bit of reportage that provoked my interest was a blog post, "The Real Threat Posed by Powerful Computers," by New York Times technology reporter Quentin Hardy. "If the human race is at peril from killer robots," he argues, "the problem is probably not artificial intelligence. It is more likely to be artificial stupidity." Basically, Hardy thinks the threat comes from programs and machines that are over-optimized to achieve a task.

In his 2014 book Superintelligence: Paths, Dangers, Strategies, the Oxford philosopher Nick Bostrom outlined a scenario in which a very powerful computer is programmed to make paper clips. The machine brilliantly and relentlessly pursues this goal and prevents anyone from attempting to change its paper clip imperative. Eventually, the Earth is a mass of paper clips and the computer sets its sights on the rest of the universe.
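Bostrom's thought experiment sounds exotic, but the failure mode is mundane: a system told to maximize one number, with no other values in its objective. Here is a deliberately silly Python toy, with an invented "world" of resources, that shows how literal single-goal optimization eats everything in reach.

```python
# A toy simulation of Bostrom-style single-minded optimization. The "world"
# and its resources are made up for illustration; the point is only that an
# agent told to maximize one number, with nothing else in its objective,
# will happily consume everything it can reach.

world = {"iron_mines": 100, "factories": 40, "farmland": 60, "cities": 25}

def paperclip_maximizer(world):
    clips = 0
    # The agent ranks every resource purely by how many clips it yields;
    # nothing in its objective says "but leave the cities alone."
    for resource in sorted(world, key=world.get, reverse=True):
        clips += world.pop(resource) * 1_000  # convert it all into clips
    return clips

print(paperclip_maximizer(world))  # 225000 clips, zero of everything else
print(world)                       # {} -- the objective never said to stop
```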

Just for fun, I'll throw the creation of decentralized autonomous corporations (DACs) into the mix. In his superb science-fiction novel Daemon, Daniel Suarez describes a set of artificial intelligence programs, dubbed the Daemon, that were left behind by a deceased game designer. They autonomously marshal the financial and computational resources that enable them to take over hundreds of companies and recruit human operatives in the real world. The pre-set goals left by the designer include, among other things, killing off the programmers who helped him create the programs. It's worth noting that in none of these speculative scenarios are the machines in any sense conscious; they are simply executing their programming.

As I have explained elsewhere, a DAC might be thought of as an automated nexus of contracts enabled by blockchain technology that can engage in activities such as leasing assets, hiring people, and securing debt or equity to achieve the goals set out in its mission statement. Notionally, DACs operating under a set of publicly available business rules would be incorruptible and more trustworthy than human-run firms. As Dan Larimer, one of the originators of the DAC concept, explained in The Economist: "Although DACs can still be designed to have a robotically inviolable intention to rob you blind, to enter the open source arena they must be honest about their plans to do so."
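Larimer's point about public business rules is easier to see with a concrete toy. The sketch below, a minimal Python model with invented names and no actual blockchain, treats a DAC as a published mission plus a rule table that every proposed action must pass; a real DAC would encode the same idea in smart contracts.

```python
# A minimal, non-blockchain sketch of the DAC idea described above: an
# automated entity whose mission and business rules are public, and whose
# every action must be authorized by those rules. All names are invented
# for illustration.

from dataclasses import dataclass, field

@dataclass
class DAC:
    mission: str                                 # published for anyone to read
    rules: dict = field(default_factory=dict)    # public business rules
    treasury: float = 0.0
    ledger: list = field(default_factory=list)   # auditable action history

    def propose_action(self, action, cost):
        """Execute an action only if the public rules allow it and funds exist."""
        allowed = self.rules.get(action, False) and cost <= self.treasury
        self.ledger.append((action, cost, "executed" if allowed else "rejected"))
        if allowed:
            self.treasury -= cost
        return allowed

clippy_inc = DAC(
    mission="Lease assets and hire contractors to manufacture paper clips.",
    rules={"lease_warehouse": True, "hire_contractor": True, "bribe_official": False},
    treasury=1_000_000,
)
clippy_inc.propose_action("hire_contractor", 50_000)  # allowed by the public rules
clippy_inc.propose_action("bribe_official", 10_000)   # publicly ruled out
print(clippy_inc.ledger)
```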

Still: While its mission statement would be public, a DAC might nonetheless be able to organize resources and persuasively recruit the agents and employees it needs as it pursues the goal of world domination. On the other hand, rival DACs competing in the marketplace and in politics might prevent such an outcome. After all, centralized corporations like Apple and Google have not yet taken over the world.

In Hertling's novel, Mike Williams, the co-developer of ELOPe, eventually argues against Ryan's desperate efforts to cleanse the world's computer networks of ELOPe. Why? Because the post-ELOPe world is becoming much more peaceful and prosperous. "I believe ELOPe already figured out the best way to ensure its own success is to ensure our success, as a company and as a species," explains Williams. "If we destroy ELOPe because we don't understand it, we could throw away the best thing that's ever happened for humankind."

So which outcome of a dumb Singularity do you think is more likely? Paper clips or world peace?
