How Facebook’s new election rules sidestep the real problem

I. The announcements

After months of deliberations, Facebook gave its answer to the critics who have called for it to put new restrictions on political advertising. The company said it would not accept new political ads in the seven days leading up to the Nov. 3 US presidential election, but would allow those that had already been approved to continue running. The move was framed as a compromise: campaigns can continue to use Facebook for get-out-the-vote efforts through Election Day, but they’ll lose the ability to test new messages. As a result, it might be harder for candidates to spread misinformation in the final days of the campaign.

There’s a lot to say about the limits and implications of this approach. But there’s also much more to Facebook’s announcement, which included a broad set of measures intended to limit the ability of, uh, someone to spread lies about election safety, voting procedures, and the legitimacy of the outcome. 

The other big highlights include limiting forwarding in Messenger to five people per message; promoting accurate voter information at the top of Facebook and Instagram through the election; providing live, official election results through a partnership with Reuters; and adding labels to posts that attempt to declare victory before the results are official, or try to cast doubt on the outcome.

Another notable dimension of the announcements is the way they were delivered. They came not from the corporate blog but from CEO Mark Zuckerberg himself, in a Facebook post. And he struck an unusually direct note of concern:

“The US elections are just two months away, and with COVID-19 affecting communities across the country, I’m concerned about the challenges people could face when voting,” he wrote. “I’m also worried that with our nation so divided and election results potentially taking days or even weeks to be finalized, there could be an increased risk of civil unrest across the country.”

He continued: 

“This election is not going to be business as usual. We all have a responsibility to protect our democracy.”

II. The reaction

How far will Facebook’s announcements this week go to, as Zuckerberg says, protect our democracy? 

I think the moves will go a long way toward promoting voter registration and turnout. The Reuters partnership will ensure that a huge number of Americans see accurate, real-time information about the vote count. And the various policies announced to remove or label problematic posts could inject a welcome dose of reality into the more unhinged conspiracy theories about the election that are now swirling in the fever swamps.

At the same time, as Steve Kovach notes at CNBC, the policies announced Thursday have some obvious limitations. Misinformation in political advertising can continue right up until Election Day, so long as it has been running for at least a week. By the time the new restrictions kick in, mail-in voting will have been underway for weeks. And no label will be able to stop Trump from declaring that he has won, loudly and repeatedly.

Meanwhile, on Twitter, Zeynep Tufekci raises the larger point always lurking in the background of these discussions. “There are the details,” she wrote, “and there is this: Mark Zuckerberg, alone, gets to set key rules — with significant consequences — for one of the most important elections in recent history. That should not be lost in the dust of who these changes will hurt or benefit.”

I think all of that is fair, and yet I’ve struggled to land on an overall point of view on Facebook’s approach to regulating political speech. The question I keep coming back to is: what exactly is Facebook trying to solve for? 

III. The solve

By now, almost everyone accepts that social platforms have a role to play in protecting our democracy — as do average citizens, journalists, and the government itself. In 2016, all four of those groups failed in various ways, and we’ve spent much of the intervening period litigating who was most at fault, and what ought to be done about it.

One way to view Facebook’s announcements on Thursday is as an acknowledgement that when it comes to protecting our democracy, in 2020 the US government cannot be counted upon. Just this week, the president effectively told voters in North Carolina to vote twice — sending in a mail-in ballot, then showing up at the polls to vote again. He has sought to sabotage the post office to make voting by mail more difficult. He won’t commit to leaving office should he lose the election — and “jokes” about never leaving office, period.

None of these are issues a tech platform can solve. But because of their perceived power, the platforms are under strong pressure to take decisive action in response. And they are taking it seriously, Axios reported today, structuring a series of war-game exercises to prepare for various election disaster scenarios:

Facebook, Google, Twitter and Reddit are holding regular meetings with one another, with federal law enforcement — and with intelligence agencies — to discuss potential threats to election integrity.

Between March 1 and Aug. 1, Twitter practiced its response to scenarios including foreign interference, leaks of hacked materials and uncertainty following Election Day.

Meanwhile, the president continues to use the platforms in transparently anti-democratic ways. On Thursday, while still under criticism for his remarks about North Carolina, he repeated his instructions to all voters that they should both mail in a ballot and show up to vote in person. The post appeared both on Twitter and on Facebook, and both companies left it up. Twitter placed it under a warning label after determining the post could lead people to vote twice, and also prevented people from retweeting it or replying. Facebook added a label underneath saying that mail-in voting has been historically trustworthy.

The basic idea here is to allow for a maximum of political speech, and to answer the most problematic speech with more speech, in the form of labels. The platforms have offered no positive conception of what political speech on their services should be or do. Instead, they police it as beat cops, running off the worst posts while writing speeding tickets for lesser offenses.

The idea rests upon a foundational belief that both parties are good-faith actors when it comes to political speech, all available evidence to the contrary. And it’s this, more than anything else, that has resulted in Facebook’s strange contortions on the subject. As the press critic and New York University professor Jay Rosen put it: 

“The media ecosystem around one of our two major parties runs on made up claims and conspiracy theories. Facebook has institutionally committed itself to denial of this fact. It also says it has rules against spreading misinformation. The two commitments are in conflict.”

It’s in such a world that Facebook can make a host of changes to its policies in response to the actions, both actual and predicted, of President Trump, without ever saying the words “President Trump” at all. Company executives clearly feel a moral obligation to act against a grave threat to American democracy — but they cannot bring themselves to name the threat. This posture of impartiality, which Rosen calls “the view from nowhere,” has long been the default stance of the American media.

But it has been in decline for some time now, and for good reason. When you commit yourself to the view from nowhere, you will find, over and over again, that you are being played.

It’s in this sense that the steps Facebook is taking today can be viewed as positive, and also in some larger sense as being beside the point. If you are working at a big social platform and find yourself concerned about the degree to which it is enabling fascism, it’s not enough to simply adjust the boundaries of discourse.

You have to do something about the fascism.

IV. A parable

A headline from Wednesday evening in The Daily Beast: “Facebook’s Internal Black Lives Matter Debate Got So Bad Zuckerberg Had to Step In.”

The story, by Maxwell Tani and Spencer Ackerman, recounts a controversy that broke out inside the company when one of its 50,000 employees posted a short essay to its internal Workplace forum titled “In Support of Law Enforcement and Black Lives.” The essay, which was posted on Monday, sought to defend police officers in the wake of Wisconsin cops shooting Jacob Blake seven times in the back and leaving him paralyzed. Tani and Ackerman write:

The post called into question the notion of racially disparate outcomes in the criminal-justice system, argued that racism is not a serious motivation in police shootings, railed against “critical race theory,” and claimed narratives about police violence often “conveniently leave out” other factors, including whether the victim was under the influence of drugs or complied with officers’ directives. […]

“My heart goes out to the Blake family,” the staffer wrote on Friday. “It also goes out to the well-intentioned law enforcement officers who have been victimized by society’s conformity to a lie.” The staffer continued: “What if racial, economic, crime, and incarceration gaps cannot close without addressing personal responsibility and adherence to the law?”

On enterprise Facebook, just as it might have on consumer Facebook, the controversial post generated much outrage and engagement. It bubbled to the top of employees’ feeds and inspired many anguished comments. Its polite, just-asking-questions tone, coupled with its clear endorsement of a system that has terrorized Black Americans for centuries, put the company’s commitment to free speech in the workplace to the test. If left unchecked, the post threatened to undermine faith in company leadership.

On consumer Facebook, the post would have stayed up even if it had been reported. But on enterprise Facebook, the post occasioned some reflection. Zuckerberg wrote a note affirming that “systemic racism is real,” and chided “some” employees for not considering the full weight of their words on their Black colleagues. (I obtained a copy.) In response, he said, Facebook would soon move “charged topics” to “dedicated spaces” within Workplace, and added that these forums would have “clear rules and strong moderation.”

“You won’t be able to discuss highly charged content broadly in open groups,” he said. “As you know, we deeply value expression and open discussion, but I don’t believe people working here should have to be confronted with divisive conversations while they’re trying to work.”

This is a view from somewhere. It is a positive conception of how a discussion ought to take place: not just which words or symbols are allowed or disallowed, but how the discussion itself should be constructed. I have no doubt it will make Facebook a better place to work. And I wonder whether the version of Facebook the rest of us use would not benefit from similarly decisive intervention.
