Facebook is stepping up moderation against anti-Black hate speech

Facebook has started weighting anti-Black hate speech on its platform as higher priority than hate speech directed at white people, men, and Americans in an effort to address the disproportionate effects such speech has on minority groups, the company tells The Verge.

The result is that Facebook’s automated moderation systems for detecting and taking action against hate speech should now more proactively scan the site for such racist content. Meanwhile, more innocuous forms of hate speech, like those directed at white people or men in general, are deemed lower priority and left alone unless a user reports them. Facebook has internally dubbed this approach “WOW,” or “worst of the worst,” referring to the types of behavior it now wants to focus its resources on.

The effort is part of a new hate speech project within Facebook, first reported earlier today by The Washington Post, that aims to address years of inaction regarding racial discrimination on the platform. Activists, civil rights advocates, and researchers of the platform have long accused Facebook of abetting hate speech and operating a moderation system that doesn’t take into account real-world effects of bias and the way racism disproportionately affects minorities. Only in July of this year, after years of executives resisting such efforts, did Facebook say it would begin studying racial bias in its algorithms by forming new research-focused equity teams for its main app and Instagram.

Now, the company says it’s taking steps to ensure it moderates its platform to help the most vulnerable victims of hate speech and abuse — instead of treating the problem as one that affects everyone in equal measure. The new moderation changes aren’t just aimed at helping root out anti-Black hate speech, but also hate speech directed toward Muslims, Jewish people, and members of the LGBTQ+ community. 

“We know that hate speech targeted towards underrepresented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious,” says Sally Aldous, a Facebook spokesperson, in a statement given to The Verge.

“Over the past year, we’ve also updated our policies to catch more implicit hate speech, such as content depicting Blackface, stereotypes about Jewish people controlling the world, and banned holocaust denial,” Aldous adds. “Thanks to significant investments in our technology we proactively detect 95 percent of the content we remove and we continue to improve how we enforce our rules as hate speech evolves over time.”
