Not too long ago, conventional wisdom held that the internet should enjoy minimal government oversight precisely because it was a technology that enabled open and free speech for everyone. The remedy for hateful and offensive remarks, that 1990s-vintage argument went, was more speech—or logging off.
This principle, which can be traced back through the writings of John Milton and John Stuart Mill, was nicely captured in the U.S. Supreme Court's 1997 decision striking down certain speech-chilling provisions of the Communications Decency Act. "Through the use of chat rooms," Justice John Paul Stevens wrote, "any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer."
A generation later, Stevens' argument has not merely been discarded. It has been inverted.
Politicians now insist that the internet should be subject to increased regulations precisely because it allows that hypothetical town crier to speak with a voice that resonates farther than it could from any physical soapbox. The possibility of freewheeling online discussions has been transformed, in other words, from virtue to vice. Platforms like Facebook and YouTube increasingly face demands that they restrict content. In some cases the demands are effected through public pressure, in others through outright government censorship.
The movement to stifle online expression is still in its early stages, but it represents a fundamental threat to the principles that have allowed the internet as we know it to grow and thrive. If these efforts continue, we may soon see the end of the free and open web.
Europe vs. Big Tech
At the vanguard of the efforts to restrict online speech are, ironically, Western nations that have historically prized free expression—in particular, the European Union.
In March, members of the European Parliament approved a Copyright Directive. What's known as Article 13 of the measure (renumbered as Article 17 in the final version) will require technology companies to impose "upload filters" to scan user-provided content and remove material viewed as unlawful. If a service provider fails to delete "copyright-protected works and other subject matter," the text says, it "shall be liable for unauthorized acts of communication."
Internet pioneer Vint Cerf, World Wide Web inventor Tim Berners-Lee, Electronic Frontier Foundation co-founder John Gilmore, Wikipedia founder Jimmy Wales, and dozens of other prominent technologists denounced Article 13 as "an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users."
Their arguments failed. An amendment that would have removed the upload filter requirement from the Copyright Directive was defeated by five votes.
In the short term, that will probably grant an unintended competitive advantage to large U.S.-based companies such as Facebook, Google, and Twitter, which possess the resources to devise, test, and implement automatic filtering technologies. Smaller startups will find compliance more of a challenge. Nonprofit efforts such as Wikipedia and websites run by individuals are likely to shoulder even greater burdens.
Other E.U. measures include last year's General Data Protection Regulation (GDPR), which imposed new rules on how businesses can handle and use user data, and this year's so-called platform-to-business rules, announced in draft form in February. The latter step is less well-known but arguably reaches further: Slated to take effect in February 2020, these regulations will govern the business practices of "online platforms," a category that includes search engines, voice assistants, app stores, online marketplaces, price comparison tools, and some social media applications. Those platforms will be subject to new rules, including a ban on such "unfair practices" as suspending a user's account without explanation.
The French government applauded the draft rules in a statement, saying that the E.U. will now be able to "steer the market in the right direction"—though what the "right direction" is was left unsaid—while lamenting that they do not go further by regulating electronic devices too.
Privatizing Censorship
If Brexit actually happens and the U.K. manages to extricate itself from the European Union, Britain's internet users and businesses will be fortunate to escape the full impact of these efforts. But Westminster's elite have not been idle.
In April, the British government published a proposal, the "Online Harms White Paper," that echoes the approach of the E.U. Copyright Directive by holding tech companies, in particular social media platforms, liable for what their users post or upload. Under such a legal regime, internet censorship would be effectively privatized, as businesses would have no choice but to monitor and restrict users.
Among the online "harms" the U.K. proposes to outlaw are "extremist content," "disinformation," "violent content," and "trolling," which could include anything from what a government agency decrees to be fake news to remarks critical of the Prophet Muhammad. Another offensive category is "glamorization of weapons," which invites questions about how it may be applied to venerable British institutions like the National Museum of Arms and Armour, the nation's oldest museum.
In an April op-ed for CNN, Jeremy Wright, the U.K.'s secretary of state for digital, culture, media, and sport, argued for this adventure in online censorship by likening, without irony, adult internet users to young children. "It is similar to the principle that when you take your child to a playground, you trust that the builder made sure the equipment was safe and that no harm will come to them," Wright wrote.
The U.K.'s free market Adam Smith Institute calls the "Online Harms White Paper" illiberal and incompatible with English principles of freedom: "This proposal is about preventing Internet users from engaging in knowing and voluntary speech, and it's about recruiting vast armies of private sector policemen to patrol their thoughts."
Perhaps most problematic, the Adam Smith Institute points out, is that the U.K. proposes to restrict political speech that remains legal elsewhere in the English-speaking world. It is no exaggeration to say that "glamorization of weapons" is a popular hobby in the United States—and is fully protected by the American First and Second Amendments.
Other nations are edging in the same illiberal direction. Soon after the Christchurch, New Zealand, mosque massacre that killed 51 people, Australia enacted a new law punishing the publication or hosting of "abhorrent violent material" with up to three years in prison. According to the law, it "is immaterial whether the hosting service is provided within or outside Australia."
Read broadly, this suggests that executives of U.S. and other foreign hosting services—at least those failing to strictly censor their services for Australian audiences—could face legal peril if they visited Sydney on vacation. The law also allows television stations to broadcast violent material while prohibiting Twitter users from posting an identical video online.
That was too much even for Australia's Labor Party. "There needs to be proper consultation with not just the social media sector but also traditional media, who are also caught up by this bill and whose legitimate journalism and online news sites will also be impacted on by these laws," said Mark Dreyfus, a Labor representative and former attorney general, during the parliamentary debate. Dreyfus warned the law was being rushed through Parliament for political reasons "as this chaotic and desperate government careen[ed] toward" an election this spring.
For his part, New Zealand's Chief Censor David Shanks—yes, this is an actual government title—ruled that the video recorded by the Christchurch shooter and his accompanying manifesto both fell under the category of "objectionable" material and would be illegal to watch or read. The censorship office's classification decision said the manifesto "promotes and encourages acts of terrorism in a way that is likely to be persuasive to its intended audience." Merely viewing the document in electronic form, even if it is not downloaded to local storage, is punishable by up to 10 years in prison.
It is possible that New Zealand's censorship will prevent further extremist violence. But it is more likely that a formal ban will turn the Christchurch shooter into a kind of free speech martyr, bringing more attention to his loathsome ideology. Forbidden ideas have a tendency to draw the curious and the untethered.
The United States of Deplatforming
So far, at least, the U.S. government has yet to appoint a chief censor. But Silicon Valley's coastal elites have been eager to volunteer their services gratis.
The last year has marked a dispiriting new low in the "deplatforming," or banning from various online channels, of dissident voices. The ax fell on Infowars' Alex Jones, actor James Woods, the editorial director of AntiWar.com, the director of the Ron Paul Institute, and radio talk show host Jesse Kelly. (Some of these accounts have since been reinstated.)
Lawmakers have encouraged these social media bans. Congressional hearings have been called to interrogate tech execs on how their products are being used. Last August, Sen. Chris Murphy (D–Conn.) urged an even broader crackdown, proclaiming on Twitter that "the survival of our democracy depends on it."
Rep. Bennie Thompson (D–Miss.), chairman of the Homeland Security Committee, must have been listening. In March, Thompson sent a letter to Facebook, YouTube, Twitter, and Microsoft insisting that they remove "toxic and violent" content, even if it is legal to distribute in the United States. (The platforms already prohibit illegal content.) If the companies are "unwilling" to do so voluntarily, Thompson warned, Congress will "consider policies" to compel their cooperation. Left unexplained was how any such requirement could comply with the First Amendment.
The Fight Online Sex Trafficking Act, better known as FOSTA, ended the federal government's laissez-faire approach to internet companies when it was enacted in April 2018. Executives are now criminally liable if they own, manage, or operate a service "with the intent to promote or facilitate the prostitution of another person." The Electronic Frontier Foundation has filed a lawsuit challenging the constitutionality of FOSTA, arguing that it muzzles constitutionally protected speech and is not sufficiently tailored to comply with the First Amendment.
The World's Most Effective Censor
Whatever threats the United States faces from constitutionally challenged politicians, it remains a beacon of freedom compared to China, which can claim the dubious honor of being the world's most effective internet censor. Social media apps are blocked, political content is restricted, and activists and journalists who document human rights abuses may be arrested and held in lengthy pretrial detention. Anonymity is impeded as well, with users required to register under their real names.
The country's constitution says that "Citizens of the People's Republic of China enjoy freedom of speech, of the press, of assembly, of association, of procession and of demonstration." But the reality is that the internet in China is almost entirely subservient to government whims.
As Freedom House, a nonprofit group advocating for political freedom, reports, "websites and social media accounts are subject to deletion or closure at the request of censorship authorities, and Internet companies are required to proactively monitor and delete problematic content or face punishment." In addition, "officials systematically instruct Internet outlets to amplify content from state media and downplay news, even from some state-affiliated media, that might generate public criticism of the government." Hundreds of popular websites are blocked in the country, including Google, Facebook, WhatsApp, YouTube, Flickr, Tumblr, Dropbox, Instagram, SoundCloud, WordPress, and Pinterest.
In 2017, China reinforced its control of the web with a law that increased censorship rules and, more worryingly, required that user data be stored on the Chinese mainland. "Data localization," as it's called, means that sensitive personal records will be easily available to police and intelligence agencies. U.S.-based companies such as Airbnb and Evernote dutifully moved Chinese user data to state-controlled companies. Last year Apple announced, without elaboration, that it was shifting iCloud operations for all its mainland Chinese customers to a government-owned local partner, Guizhou-Cloud Big Data Industry.
China is not alone in its efforts to control the internet. Instead, it is leading the way among authoritarian nations. Russia and Nigeria now have similar, though less comprehensive, data localization laws.
Getting Back to Our Roots
What nearly all of these governmental intrusions have in common is that they focus their attention on the large internet companies that act as common platforms.
A small number of massive, slow-moving regulatory targets is a delightful state of affairs, at least from the perspective of Brussels or Beijing. It's far easier to pressure a few huge multinationals equipped with risk-averse legal departments than it is to control millions of unpredictable internet users, some of whom are certain to ignore bureaucratic diktats—or to invent creative ways to circumvent them.
When the U.S. government decreed that encryption was a munition—essentially a dangerous weapon subject to federal rules for exporting arms and tanks—Microsoft and Netscape complied. But programmer-activists thumbed their noses at the rules by exporting the source code of the popular PGP encryption software in book form. Others shrank the RSA encryption algorithm to three lines of code in the Perl programming language, which they gleefully wore on T-shirts. The Justice Department declined to make an example of these scofflaws.
Today, there's keen interest in homebrew gunsmithing, whether local laws permit it or not, thanks to online code repositories, such as GitHub and Defense Distributed's DefCad. These sites offer design files that allow key components of working firearms to be manufactured at home using a 3D printer.
There is no natural law of computing that says search must be centralized in Google or Baidu, social networking must happen on Facebook or WeChat, auctions must go through eBay or Alibaba, and so on. What we're accustomed to today represents a historic shift, one that's difficult to overstate, from an earlier era of the internet. From the moment of its public release at 2:56 p.m. Greenwich Mean Time on August 6, 1991, the World Wide Web was meant to be decentralized. Anyone could browse from any connected device. Every person with the technological means could set up his or her own website. The gatekeepers were gone.
It's true that centralized platforms have advantages, including improved security and better resistance to spam and abuse. They can also be quicker to build. But centralization brings costs with it, including providing a single convenient point of control for governments eager to experiment with censorship and surveillance.
There are some tantalizing hints that decentralization will return. Bitcoin and Ethereum, two blockchain-based computing platforms, are prominent examples. Solid—an open-source project backed by World Wide Web mastermind Berners-Lee—is intended to let users retain ownership of their personal data by keeping it under their own control. The Internet Archive has hosted a pair of Decentralized Web Summits in San Francisco. Prototypes of distributed search engines, wikis, and Slack-like chat programs exist.
If a decentralized internet does return, it will likely arise not merely as a response to regulatory overreach by governments but also because cryptonetworks give developers and maintainers economic incentives, in the form of digital currency, to participate. Another advantage of decentralization is developer friendliness: Twitter, especially, is notorious for disabling features that developers had relied on, and Google's habit of killing products is memorialized at KilledByGoogle.com. When no one owns a platform, that sort of thing is much less likely to happen.
Chris Dixon, an entrepreneur turned venture capitalist in Silicon Valley, wrote in a well-read February 2018 post on Medium: "Today, unaccountable groups of employees at large platforms decide how information gets ranked and filtered, which users get promoted and which get banned, and other important governance decisions. In cryptonetworks, these decisions are made by the community, using open and transparent mechanisms."
Decentralization is hardly a perfect solution to the internet's ills, but it's likely to be better than the unhappy situation we find ourselves in today.