Each month, there seems to be some new hysteria-inducing headline or movie about how artificial intelligence (AI) is going to steal our jobs, break our hearts, or just outright kill us all.
This is not merely a Luddite problem. No less a technophile than Elon Musk routinely invokes fear and loathing of AI, comparing it to an evil demon that must be exorcised through the force of the state. With all of this anti-AI prejudice going around, it's up to libertarians and technologists to stand up to lawmakers and academics who want to clamp down on these technologies.
People are usually afraid of new things. Whether it's the weaving loom or the smartphone boom, people have never ceased to find things to be worried about—and reasons to cajole the government into crushing new innovations.
Anti-technology arguments have been handily addressed in the past. Economists like Joseph Schumpeter pointed out that the act of creation necessarily involves destruction—destruction of old, usually outmoded and inefficient, methods of living and production. But with this destruction comes new life: new opportunities, goods and services, and new ways of living that we simply cannot live without. Economically, the number and quality of new jobs wrought by a disruptive technology almost invariably exceed those that were so jealously guarded in the past. And culturally, society has weathered the rocking storms that so many had claimed would lead to social decay and apocalypse.
But when it comes to AI, something seems different. The pace of technological change seems simply too fast. Smart machines seem just a little too much like us to inspire comfort, and the primal fear of being personally replaced becomes all too immediate.
These fears have unfortunately metastasized into what could become a full-blown technopanic on the academic and legal levels, as a new Mercatus Center study by Adam Thierer, Raymond Russell, and yours truly discusses. If we're not careful, the worst excesses of our paranoid imaginations could lead to regulations that shut us out from amazing developments in health, manufacturing, and transportation.
First, it's important to be clear about exactly what is on the line here. Stories about killer robots and inhumane futures are just that: the stuff of science fiction. The reality of artificial intelligence technologies will be at once more mundane and far more fantastic. Mundane, because when these technologies are most effective they will blend so seamlessly into our environments as to be almost imperceptible. Fantastic, because they have the potential to save millions of lives and billions of dollars, and to make our lives easier and more comfortable.
Consider manufacturing. Many people fret over the risk that robotics and AI pose to traditional jobs. But even the most alarming of the studies analyzing the impact of automation on jobs finds that the vast majority of workers will be just fine, and those who are affected may well find better jobs enhanced by automation. At the same time, McKinsey and Company estimates that AI improvements to manufacturing techniques could generate roughly $1.4 trillion in value by 2025. That huge number represents very real savings for some of the least well off among us, and could very well spell the difference between continued poverty and a chance to move up in life.
Or think about health care. Doctors already employ AI-enhanced technologies to guide precision surgery, diagnose illnesses more effectively, and even track patients' health outcomes over extended periods. These technologies may have literally saved lives, and over the next decade they are anticipated to cut costs in our out-of-control health care industry by hundreds of billions of dollars.
The very real risk that blocking AI technologies poses to millions of human lives is not even that abstract. It is as simple as allowing hundreds of thousands of preventable highway deaths each year by halting the development of driverless cars.
These are just a few of the examples that we highlight in our report. A good number of academics studying technology issues discount these advances. They believe that the risks of AI technologies—whether labor displacement, physical safety, or disparate impact and discrimination—warrant a "stop first, ask questions later" approach. The regulations that they propose would effectively chill AI research; indeed, some who advocate these positions explicitly recognize that this is the goal.
Interestingly, the traditional concerns regarding automation—namely, labor market displacement and income effects—are being increasingly outpaced by new worries about existential risks and social discrimination. On the bleaker end of the spectrum are overarching concerns about "superintelligences" and hostile "hard" AI. This is the view adopted by Musk and popularly advanced in Nick Bostrom's 2014 book, Superintelligence. Yet as we discuss in our paper, there is much disagreement in the scientific community about whether such outcomes are even physically possible. And hey, if worse comes to worst, we can always just unplug the machines.
More familiar to most readers will be the worries fueled by sociopolitical concerns of the day. A substantial portion of AI antagonism comes from critics who fear not societal apocalypse, but that algorithms and machine learning software will further entrench social gaps. For example, algorithms that produce outputs weighted toward or against any particular protected group are immediately suspect. The fear is that a society ruled by "black boxes," to use a term coined by critic Frank Pasquale, will tip the scales in potent but imperceptible ways, and thus dangerously further social injustice.
Of course, Silicon Valley is a disproportionately liberal place, so you might expect critics to view its companies as natural allies in proactively countering bias in AI technologies. Not so. AI critics believe we need to "legislate often and early" to get ahead of innovators—and force them to do whatever the government says. They have called for the creation of a plethora of new government offices and agencies to control AI technologies, ranging from a federal AI agency, to a "Federal Robotics Commission," to a "National Algorithmic Technology Safety Administration." Needless to say, as software continues to integrate AI techniques, and everything around us continues to become imbued with software, such a federal AI agency could end up having regulatory control over basically everything that surrounds you.
Some want to sneak regulation in through the courts. Law professor Danielle Keats Citron, for example, has called for a "carefully structured inquisitorial model of quality control" over algorithms and AI technologies, achieved through a legal principle she calls "technological due process." A 2014 White House report on privacy and big data seemed to nod at a beefed-up administrative investigatory process, calling upon regulators to sniff around algorithms for "discriminatory impact upon protected classes." Of course, few innovators want to openly break the law or unfairly affect certain groups of people, but such federal investigations, when not carefully structured, run the risk of becoming overzealous witch hunts.
These regulatory proposals share a common flaw: they would create far more problems than the few they seek to address. As noted above, an overbearing regulatory regime would rob us of trillions of dollars in economic growth and cost savings, vast improvements in quality of life, and millions of lives saved across the world. But even a lighter regulatory regime or liability structure could chill AI development, with much the same effect.
And there are far better ways of addressing these problems. Our machines have gotten smarter; isn't it time our regulations did the same? The old command-and-control model simply will not work, not least because much of the information that regulators would need to make informed decisions is not even apparent to the developers who work on these technologies (especially in the case of machine learning).
What should policymakers embrace instead? Humility, education, and collaboration with academics, innovators, and industry. Most concerns will be readily addressed through the market forces of competition and certification. Where large problems do present themselves—as is the case with AI technologies applied to law enforcement techniques, or the development of "smart weapons" for armies—perhaps more precaution is warranted. But in the vast majority of cases, dialogue and the normal tools of liability and legal remedies will be more than sufficient to get the job done.
The benefits of AI technologies are projected to be enormous. It's up to us to make sure that we don't allow the humans to stifle the robots.