What Does Freedom of Speech Look Like Online?


On Jan. 7th, Facebook (along with Twitter and YouTube) banned then-President Donald Trump indefinitely for incendiary posts relating to the Jan. 6th Capitol riot, which included calling the rioters “patriots” as well as instructing them to “Remember this day forever.” On Wednesday, May 5th, the Facebook Oversight Board released a much-anticipated decision upholding the ban and giving Facebook six months to decide whether or not it will be permanent.

The Board is a collection of 20 outside journalists, lawyers, dignitaries, activists, and policy experts—often called “Facebook’s Supreme Court”—that the company spent $130 million convening to issue third-party rulings on content moderation decisions. Facebook says it treats the Board’s rulings as binding. However, Georgetown’s April Falcon Doss notes that “the Board’s Charter includes a number of escape clauses.” Along with its decisions, the Board can also issue advice to Facebook, which the company treats only as guidance.

In its decision, the Board asserted its independence, ripping into Facebook for attempting to “avoid its responsibilities” by issuing a “vague, standardless penalty” and then outsourcing its final decision to the Board. Hours later, Facebook Vice President of Global Affairs and Communication Nick Clegg said the company will follow the Board’s decision and “determine an action that is clear and proportionate.”

But what does it say about our democracy if our national discourse is governed by an unelected Silicon Valley technocracy, giving non-binding, advisory opinions to another unelected group of Silicon Valley technocrats? 

Isn’t this censorship?

After Trump’s suspension, many conservatives reprimanded Facebook and other tech giants for what they saw as an ad hoc, inconsistent moderating process. After this recent decision, they repeated those accusations. Sen. Bill Hagerty (R-TN) spoke for many of his colleagues when he wrote:

Here we are, with an unelected and unaccountable ‘oversight board’ of academics, journalists, lawyers, and activists determining whether a former United States president who recently received 74 million votes from the American public may participate in the modern-day public square… This type of censorship regime is what I would have expected from the Chinese Communist Party, not Silicon Valley. 

Because Facebook is a private company, the law currently gives it complete authority to moderate its own forum (with the exception that it cannot host otherwise unlawful speech, like statements “directed at inciting or producing imminent lawless action”).

However, University of Chicago Professor Genevieve Lakier points out that this framework may be antiquated because platforms like Facebook and Twitter undoubtedly form part of the 21st century public square. As a result, giving them complete moderation privileges is perhaps akin to government censorship, without the accountability.

Under current law, social media companies have little reason to take their moderation responsibilities more seriously. Courts have interpreted Section 230 of the Communications Decency Act to designate them as “distributors” with a “broad shield” from liability, largely preventing private actors who are harmed from suing social media companies for damages.

Companies have abused their moderation privileges to protect their bottom lines, selling the truth to appease authoritarians all around the world. Twitter is willingly censoring criticism of India’s handling of the pandemic to placate Modi, the UN found Facebook played a “determining role” in the Rohingya genocide by failing to take down posts advocating for ethnic cleansing, and YouTube is silencing Russian dissident Alexei Navalny’s videos to appease Putin. Even when social media companies do try to take action, they tend to make ad hoc decisions in response to spontaneous outbursts of public pressure, and they have not been able to develop any consistent policy. This has led to some laughably inconsistent results: Twitter may have deplatformed Trump, but it still allows ruthless dictators like Iran’s Ali Khamenei or Russian autocrat Vladimir Putin to spew hate-filled disinformation.

Could Facebook’s Oversight Board be part of the answer?

Perhaps, but only a small part. Some observers have criticized the Board as “political theater,” a toothless gesture meant to provide a veneer of accountability with the true purpose of duping regulators into thinking Facebook can govern itself. Others question how independent a Board that Facebook has endowed with $130 million can truly be. Regardless of those concerns, there are two significant reasons why the board may provide a flawed model for online governance:

First, its authority is too limited. Board members are not necessarily Zuckerberg sycophants; indeed, the Board has ruled against Facebook in six of its ten cases to date. However, Facebook has sharply limited what it can do. The Board can only rule on content Facebook has already blocked and cannot investigate “algorithmic amplification, groups and advertisements,” meaning it can only share its input on a tiny sliver of important pending issues. Indeed, as the Washington Post Editorial Board noted, the Oversight Board’s most important analysis in the Trump case, urging more transparent moderation processes and questioning exceptions to its prohibited content policies, was nonbinding. The Board currently has the power, and the courage, to shame Facebook, but it needs far more than that to make a real impact.

Second, it simply isn’t scalable. Even if these issues in the Board’s structure were fixed, misinformation and tricky cases proliferate far faster than distinguished experts can debate and rule on them. Thus, it’s critical to focus on developing more transparent rules and systems rather than touting experts, confined to ruling on a handful of edge cases, as a real solution. A few promising examples of de-radicalization and anti-misinformation programs include Google’s Project Redirect, YouTube’s Three Rs Program, and the “circuit breakers” advocated by the Forum for Information and Democracy. These initiatives are intended to lower the virality of false and extremist content as it spreads, a potentially more impactful approach.

To be clear, for all its flaws, Facebook’s Oversight Board isn’t a terrible start; it prompts important conversations, and Facebook pays a PR cost every time it defies it. Nonetheless, as Doss puts it, “it will take more than the duct tape of high-profile advisory boards to combat systemic problems of disinformation, from their roots to where they flourish online.”

So where do we go from here?

In addition to some of the efforts referenced above, one possible approach would be to establish a public-private partnership that develops consistent policies and standards for social media companies to follow when moderating content.

There is precedent for such public-private partnerships. In 1949, the FCC introduced the Fairness Doctrine, which granted stations free use of the radio airwaves on the condition that broadcasters devote “a reasonable percentage of time to coverage of public issues; and [the] coverage of these issues must be fair in the sense that it provides an opportunity for the presentation of contrasting points of view.” This allowed broadcasters to avoid direct government interference over their content, so long as they documented their compliance with the rule.

While the Fairness Doctrine is no longer in force, it can help provide a framework for confronting the misinformation currently dominating social media. This public-private joint venture would allow private-sector companies to have input over their own futures, while also giving ordinary citizens a way to hold social media giants accountable for their actions. Silicon Valley executives, academics, federal regulators, and elected officials would together form a commission to explore, discuss, and recommend consistent regulations for moderating content.

Critically, this commission would have to include mechanisms to ensure that its decisions would be binding, along with penalties for non-compliance. 

Ultimately, the fact that social media giants like Facebook feel pressure to allow for a modicum of accountability is laudable. It demonstrates that public protests, pressure, and shame can harm a company’s bottom line and force change. But if we are to protect our democracy, we cannot allow the future of our national discourse to be governed by a Silicon Valley bureaucrat’s benevolence and magnanimity.