Entrepreneur Andrew Yang has run a tech-centered campaign for the Democratic presidential nomination, positioning his Universal Basic Income proposal as a solution to rapid technological change and increasing automation. On Thursday, he released a broad plan to rein in the tech companies that he says wield unbridled influence over the American economy and society at large.
"Digital giants such as Facebook, Amazon, Google, and Apple have scale and power that renders them more quasi-sovereign states than conventional companies," the plan reads. "They're making decisions on rights that government usually makes, like speech and safety."
Yang has now joined the growing cacophony of Democrats and Republicans who want to amend Section 230 of the Communications Decency Act, the landmark law that shields social media companies from certain liabilities for content posted by their users. As Reason's Elizabeth Nolan Brown writes, it's essentially "the Internet's First Amendment."
The algorithms developed by tech companies are the root of the problem, Yang says, as they "push negative, polarizing, and false content to maximize engagement."
That's true, to an extent. Like any company or industry, social media firms have an incentive to keep consumers hooked for as long as possible. But it's also true that social media does more to boost already popular content than to amplify content nobody likes or wants to engage with. And in an age of polarization, it appears that negative content can be quite popular.
To counter the proliferation of content he does not like, Yang would require tech companies to work alongside the federal government to "create algorithms that minimize the spread of mis/disinformation," as well as "information that's specifically designed to polarize or incite individuals." Leaving aside the constitutional question, who in government gets to make these decisions? And what would prevent future administrations from using Yang's censorious architecture to label and suppress speech they find polarizing merely because they disagree with it politically?
Yang's push to alter Section 230 is similarly misguided, as he seems to think that stripping away its liability protections would somehow eliminate only bad online content. "Section 230 of the Communications Decency Act absolves platforms from all responsibility for any content published on them," he writes. "However, given the role of recommendation algorithms—which push negative, polarizing, and false content to maximize engagement—there needs to be some accountability."
Yet social media sites are already working to police content they deem harmful, as should be clear from the many Republican complaints about overzealous and biased content removal. Section 230 expressly permits tech companies to scrub "objectionable" posts "in good faith," allowing them to self-regulate.
It goes without saying that social media companies haven't done a perfect job screening content, but their failures say more about the task than about their effort. User-uploaded content is essentially an infinite stream. The algorithms tech companies use to weed out content that clashes with their terms of service regularly fail, and human screeners fall short too. Even if Facebook or Twitter or YouTube could create an algorithm that deleted only the content it was meant to delete, those companies would still come under fire for which content they find acceptable and which they don't. Dismantling Section 230 would probably discourage efforts to fine-tune the content vetting process and instead lead to broad, inflexible content restrictions.
Or, it could lead to platforms refusing to make any decisions about what they allow users to post.
"Social media services moderate content to reduce the presence of hate speech, scams, and spam," Carl Szabo, Vice President and General Counsel at the trade organization NetChoice, said in a statement. "Yang's proposal to amend Section 230 would likely increase the amount of hate speech and terrorist content online."
It's possible that Yang misunderstands the very core of the law. "We must address once and for all the publisher vs. platform grey area that tech companies have lived in for years," he writes. But that dichotomy is a fiction.
"Yang incorrectly claims a 'publisher vs. platform grey area.' Section 230 of the Communications Decency Act does not categorize online services," Szabo says. "Section 230 enables services that host user-created content to remove content without assuming liability."
Where the distinction came from is something of a mystery, as that language appears nowhere in the law. Section 230 protects sites from certain civil and criminal liabilities so long as they are not actually editing user content, and removing content does not count as editing it. A newspaper, for instance, can be held accountable for libelous statements that a reporter and editor publish, but its comment section is exempt from such liabilities. That's because the paper isn't editing the comments, though it can safely remove any it deems objectionable.
Likewise, Facebook does not become a "publisher" when it sends a piece of content down the trash chute, any more than a coffee house would suddenly become a "publisher" if it removed an offensive flier from its bulletin board.
Yang's mistaken interpretation of Section 230 is likely a result of the "dis/misinformation" about the law promoted by his fellow presidential candidates and repeated in congressional hearings. There's something deeply ironic about that.