
The Zuckerberg Hearings Prove Government Shouldn't Regulate Facebook

In the year 2018, at the height of The Russia Scare, Facebook CEO Mark Zuckerberg was hauled in front of a tribunal of tech-illiterate politicians and asked to explain himself. "It was my mistake, and I'm sorry," Zuckerberg told senators who were upset about the company's exploitation (and fumbling) of user data—which, unbeknownst to them, was social media's entire business model.

A number of panics have brought us to this preposterous place: the idea that Russian trolls on Facebook could swing the 2016 election and undermine our "democracy"; the idea that Facebook's leftward bias is so corrosive that we should regulate it like a utility; and, finally, the general way in which social media tends to reveal the ugly side of human nature—which is indeed scary but has little to do with any particular platform.

Even if one could brush aside the bipartisan preening and sound bites during the Zuckerberg hearings, one would still be subjected to an infuriating mix of ignorance and arrogance. It's true that the United States is, in large part, run by a bunch of elderly politicians completely unsuited to regulate the tech industry. The obvious lesson, though, was still lost on many. Rather than trying to elect more technocrats, we should come to terms with the fact that in an increasingly complex world, politicians will be unsuited to regulate most industries, which is why they should do so sparingly.

Not that ignorance has ever stopped senators from grandstanding. Republican Sen. John Kennedy, for instance, believes Facebook should be disciplined because its users erroneously assumed the service was free. "Your user agreement sucks," said Kennedy, describing a perfectly legal document that had already been subjected to an array of contractual regulations and was probably read by only a fraction of the social media giant's users. He went on to say: "The purpose of that user agreement is to cover Facebook's rear end. It's not to inform your users about their rights. … I don't want to vote to have to regulate Facebook, but by God I will."

So if a private entity follows the law but happens to upset the sensibilities of the United States Senate, it will, by God, be punished with some nannyistic intrusion or byzantine regulation?

Well, not really punished, right? Because of course the rent-seeking Facebook desires more regulation. For one, it would make the state partially responsible for many of the company's problems—meting out "fairness," writing its user agreements, and policing speech—but more importantly for Zuckerberg, it would add regulatory costs that Facebook could afford but upstart competition almost certainly could not.

It's a long-standing myth that corporate giants are averse to "regulations," or that those regulations always help consumers. We've already seen the hyper-regulation of health care "markets" create monopolies and undermine choice. We've seen the hyper-regulation of the banking industry inhibit competition and innovation. Politicians, often both ignorant of specifics and ideologically pliable, tend to fall under the sway of the largest companies, which end up dictating their own regulatory schedules. I mean, Sen. Lindsey Graham of South Carolina actually asked a compliant Zuckerberg to submit a list of government interferences he might embrace.

The bigger ideological problem with the Facebook circus is that our politicians are acting as if being subjected to an opinion—or an ad—they dislike is some kind of attack on an individual's rights. Not one senator will ever tell constituents: "Hey, if you don't like the way Facebook conducts itself or you're unhappy about its political bias, then leave. No one is forcing you to open or maintain an account with Facebook, much less voluntarily hand over data. And if you're constantly falling for 'fake news,' well, that's a you problem, because the state can't fix stupid."

Yet to assure senators that he could, in fact, control billions of interactions, Zuckerberg noted that in five to 10 years, his company will possess artificial intelligence technology sophisticated enough to eliminate "hate speech" and "fake news" before it is even posted. If Facebook wants to use that technology, it has the right to do so, of course. But many of us who are familiar with the expansive definition of "hate speech" and the people who curate "fake news" think, well, no, thank you. Moreover, making the platform responsible for governing the speech of billions of users would be not only dangerous but also incredibly expensive.

Sen. Ben Sasse had a good point when he told Zuckerberg that although Facebook may decide it needs to police speech, "America might be better off not having (been) policed by one company that has a really big and powerful platform." The answer to quelling the outrage mob isn't for the government to help Facebook entrench its position with some cronyistic regulation but to let Facebook fix itself or go the way of Myspace.
