The future is here. Driverless vehicles, drones, machine learning, and other emerging technologies offer programmable assistants able to handle mundane tasks and critical life-saving interventions alike. But not everyone is pleased. The digital Arcadia that awaits us is being fettered by the rise of the robophobes.
Robophobia exists on a continuum. At the extreme end are reactionaries who indiscriminately look to stifle all that goes beep in the night. They call for swift and pre-emptive regulations to address any imagined safety or privacy concerns, however unlikely. To the extent that they can enact their ideas, their mind-set is guaranteed to slow the pace of innovation, resulting in countless lost opportunities for economic and social progress—and, yes, even consumer safety and privacy. You'd almost suspect that this is their unstated goal.
Other cases of robophobia are milder, manifesting, for instance, in proposals for new government agencies. In a white paper published by the Brookings Institution last September, Ryan Calo, an assistant professor at the University of Washington School of Law, calls for a Federal Robotics Commission (FRC). Older agencies, he argues, don't have the expertise to "deal with the novel experiences and harms robotics enables." Furthermore, there are "distinct but related challenges that would benefit from being examined and treated together." Robots, he says, "may require investment and coordination to thrive."
Calo harbors no hidden desire to stifle new technologies behind his policy proposals. He rightly criticizes the Federal Aviation Administration (FAA) for its ham-handed drone policies, calling them "arbitrary and non-transparent." But neither is Calo a proponent of permissionless innovation, the term coined by my technology policy colleague at the Mercatus Center, Adam Thierer, for the unfettered freedom to experiment with new technologies and business models. He wants to regulate drones; he just thinks the FAA is going about it the wrong way. In his mind, an FRC would have the narrow focus and specialized expertise needed to protect us effectively.
Really, Calo is too kind to the FAA. He doesn't mention most of the questionable drone regulations the agency has proposed. The FAA has practically stopped innovation in its flight path by proposing to ban all but a handful of private-sector drones while the agency completes rules to govern the rest. Another doozy was its proposal to require drone pilots to obtain the same license as old-school airplane pilots, even though drone operators never need to set foot in an aircraft to do their jobs. The FAA's actions are badly hindering this exciting new technology, and for not-altogether-altruistic reasons. A January 15 story in The Wall Street Journal quotes Jim Williams, the head of the FAA's unmanned-aircraft office, bragging about his agency going to bat for the aerial surveyors, photographers, and moviemaking pilots who frequently lobby him to put the kibosh on commercial drone activity. "They'll let us know that, 'Hey, I'm losing all my business to these guys. They're not approved. Go investigate,'" he explains. "We will investigate those."
Would a robot commission be any better? History suggests it would not. This is not the first time a scribbler has proposed a new agency to oversee an emerging technology. Robophobia is only the most recent incarnation of a timeless reaction to scientific developments: the desire to control them.
Calo cites the Federal Railroad Administration (FRA) as a successful response to the scary new phenomenon of travel by train. But the federal government's first response to the railroads was the Interstate Commerce Commission, created in 1887; it was promptly captured by railroad companies and began promulgating anti-consumer regulations on their behalf. The FRA was established far later, through the same 1966 legislation that brought us the Department of Transportation (DOT). It is strange but telling that Calo offers the FRA and DOT as prototypes for a future Federal Robotics Commission, since both bodies suffer from rather extreme amounts of regulatory zealotry, waste, fraud, and abuse.
But there is a more fundamental reason to object to an FRC. Calo himself claims to favor something more akin to a supervisory body than a formal regulatory agency, yet he leaves the door wide open for agency power grabs and ever-expanding regulation. Bureaucrats almost always act to maximize their spheres of influence. Why wouldn't this be the case for an agency tasked with overseeing a lucrative new technology like robotics? On the flip side, what makes us think the robotics industry itself would refrain from doing what so many other industries have done before it, namely working to influence FRC regulations for its own ends?
Regulatory capture is real. Consider the Federal Communications Commission (FCC) and its war on cable television. A recent paper by Thierer and another technology policy scholar at the Mercatus Center, Brent Skorup, is a must-read for anyone interested in how robotics might fare in Calo's world. Titled "A History of Cronyism and Capture in the Information Technology Sector," the paper documents the many ways the FCC has served the private interests it was supposed to regulate rather than the "public interest" promoted by the likes of Calo.
When cable TV began to spread in the 1960s, the agency moved quickly to quash it—a naked effort to protect entrenched television broadcasters. Regulatory creep became a serious problem as the commission expanded its authority into almost every new telecommunications and media service that emerged. Predictably, the "independent" FCC eventually succumbed to the very problems that Calo's FRC ostensibly aims to rectify, such as being slow and arbitrary and constantly encroaching on areas it isn't equipped to regulate.
Calo is correct that our existing collection of regulatory agencies is ill-qualified to handle robotics policy. But adding another group of eggheads to the mix is doubling down on the problem rather than offering a solution. At a minimum, as Thierer writes, "when proposing new agencies, you need to get serious about what sort of institutional constraints you might consider putting in place to make sure that history does not repeat itself."
Innovation doesn't flourish at the hands of bureaucrats—even knowledgeable, benevolent, non-robophobic ones. It's simply impossible to anticipate what will happen when engineers, developers, and consumers take new technologies and begin to apply them in novel ways. Department of Defense engineers and early users of the agency's internal ARPANET system never dreamed that the simple packet switching network used in a handful of university research laboratories would one day be credited as the precursor to the Internet. In fact, ARPANET's administrators actually banned many of the core functions that you and I enjoy today, such as online commerce. Thierer, in his 2014 book Permissionless Innovation, quoted from the 1982 handbook at MIT's artificial intelligence lab, which stated: "It is considered illegal to use the ARPANet for anything which is not in direct support of Government business…Sending electronic mail over the ARPANet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people, and it is possible to get MIT in serious trouble with the Government agencies which manage the ARPANet."
The modern Internet does not owe its success to a brilliant policy wonk, a series of white papers, or a federal agency tasked with developing a new technology and protecting people from any conceivable harm that might arise from it. The opposite is true: It's because the Clinton administration decided to break with tradition by rejecting top-down, command-and-control regulations that the Internet as we know it was born—a product of human action, not merely of human design.
Things could easily have been different. If the overly cautious had gotten their way, the Internet's commercial potential might well have been squelched before we ever knew what we were missing. The same would be true under a Federal Robotics Commission. Progress requires us to reject robophobia and feel the digital love.