
Twitter Implementing European-Style Hate Speech Bans

Jack Dorsey (Ron Sachs/SIPA/Newscom)


Twitter's leadership announced this morning that it is broadening its bans on "hateful" conduct to try to cut down on "dehumanizing" behavior.

The social media platform already bans (or attempts to ban, anyway) speech that targets an individual on the basis of race, sex, sexual orientation, and a host of other characteristics. Now it intends to crack down on broader, non-targeted speech that dehumanizes classes of people for these characteristics.

Here's how the company's blog post describes the new rules:

You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.

Definitions:

Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).

Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.

Directly under that rule, they ask for feedback. If you find this definition vague, you can let them know. They actually ask for examples of how this rule could be misapplied to punish "speech that contributes to a healthy conversation." Feel free to fill them in.

As a private platform, Twitter can decide that it does not want to make space for speech it finds unacceptable. Newspapers and other media outlets have often declined to run letters to the editor or otherwise provide platforms for speech that uses such "dehumanizing" language. It's their right to do so.

To the extent that there's a "but" here, it's about how toxic the political discussion on Twitter has already become. A large number of people actively try to get others banned for saying things they don't like, flopping and shrieking like pro soccer players at every bit of criticism in hopes of drawing a red card from the ref. If Twitter adds to the list of reasons it will censor tweets and shut down accounts, it will surely just increase the volume of people shrieking at CEO Jack Dorsey, demanding that he and Twitter do something.

Also, while this new rule is a product of the creepily named Trust and Safety Council that Twitter organized in 2016, its language echoes the broad anti–hate speech laws of the European Union and the United Kingdom. This morning Andrea O'Sullivan noted that the European Union is attempting to regulate what online companies permit and forbid. It's a lot harder to see what Twitter is doing as a voluntary response to consumer pressure when we know there are also governmental efforts to force it to censor users. And it won't just be ordinary citizens who use this rule to yell at Twitter and demand it shut down speech they don't like. Politicians certainly will as well.

Both Twitter's blog post and Wired's coverage of the rule change point to the research of Susan Benesch of The Dangerous Speech Project as an inspiration for the new rule. One might assume that an organization declaring certain types of speech actually dangerous would be pro-censorship, but that's not really what the group is about.

While The Dangerous Speech Project does say that "inhibiting" dangerous, dehumanizing speech is one way to prevent the spread of messages meant to encourage violence and hatred toward targeted groups, that's not what the group is actually encouraging. It says outright that efforts to fight "dangerous" speech "must not infringe upon freedom of speech since that is a fundamental right." It adds that "when people are prevented from expressing their grievances, they are less likely to resolve them peacefully and more likely to resort to violence."

The Dangerous Speech Project calls instead for engaging and countering bad speech with good speech. In fact, last year Benesch co-wrote an article specifically warning against online Twitter mobs that attempt to shame or retaliate against people in real life for the things that they've said, even when those things are full-on racist. When naming-and-shaming is used as a social tactic to suppress speech, she notes, it often ends up with the majority oppressing minorities. And besides, it doesn't really work:

Shaming is a familiar strategy for enforcing social norms, but online shaming often goes farther, reaching into a person's offline life to inflict punishment, such as losing a job. Tempting though it is, identifying and punishing people online should not become the primary method of regulating public discourse, since this can get out of hand in various ways. It can inflict too much pain, sometimes on people who are mistakenly identified—and in all cases it is unlikely to change the targets' minds favorably.

It's a little odd to see this group's work being used to justify suppressing people's tweets.
