On July 28 I shared a post on the Facebook page of the American Humanist Association (AHA) with this caption: “Intentional sharing of potentially harmful misinformation, such as this, should constitute a federal offense that comes with a hefty monetary penalty and/or hundreds of hours in community service.”
The blurb was in response to a news article about Twitter penalizing Donald J. Trump’s eldest son, Donald Trump Jr., for sharing a video that both dismissed masks and lockdowns as measures to slow the spread of the coronavirus and promoted hydroxychloroquine as a treatment. Twitter made clear that the tweet violated the platform’s COVID-19 misinformation policy. (Don Jr.’s promotion of hydroxychloroquine and chloroquine as treatments for COVID-19 came after the Food and Drug Administration revoked its emergency use authorization for the drugs and another study found a “possible association” between the drugs and heart arrhythmias and death.)
My Facebook post quickly sparked a heated debate among AHA Facebook fans, a majority of whom expressed vociferous support for the caption. Meanwhile, a small but outspoken minority argued that “restricting speech is how fascism starts,” and the conversation soon turned into a debate about the role of privately owned social media companies in the arena of public discourse.
Incidentally, a month earlier, a coalition of nonprofit organizations called on Facebook’s advertisers to temporarily pause ad spending on Facebook and Instagram due to the company’s failure to combat hate speech on its platforms (Facebook also owns Instagram, WhatsApp, and Oculus). In a matter of days the Stop Hate for Profit campaign (orchestrated by Color of Change along with the NAACP, Common Sense Media, Anti-Defamation League, Free Press, and Sleeping Giants) picked up steam, with over 1,100 businesses and nonprofit organizations, including the American Humanist Association, jumping on board. A formerly defiant Mark Zuckerberg was forced to concede to some of the campaign’s ten demands after Facebook saw its market value drop “by more than 8 percent, amounting to about $72 billion,” according to social media and technology news platform SCommerce.
The unprecedented ad boycott came on the heels of mounting pressure on social media platforms from activists, civil rights organizations, and some Democrats to crack down on disinformation, racism, white nationalism, bigotry, bias, hate speech, and, perhaps more urgently, misleading and false claims about the 2020 presidential election.
But where Facebook lagged in curtailing disinformation, Twitter took the lead. On May 27 the platform labeled the president’s tweet claiming that mail-in ballots would lead to widespread voter fraud as “potentially misleading”—a move the president alleged was “stifling FREE SPEECH.”
Undergirding the free speech debate are long-standing jurisprudential questions that have befuddled constitutional law experts, philosophers, and, indeed, the American public: Are there constitutional limits on free speech beyond those already settled? What constitutes hate speech, and, more importantly, who gets to decide?
In June 2017, in Matal v. Tam, the US Supreme Court determined that
Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express “the thought that we hate.”
However, advocates for regulating hate speech insist that there should be a limit to free speech beyond which potentially harmful rhetoric must be punished. Richard Stengel, a former editor of Time, argued in a Washington Post op-ed last year: “[O]ur First Amendment doesn’t just protect the good guys; our foremost liberty also protects any bad actors who hide behind it to weaken our society,” adding,
All speech is not equal. And where truth cannot drive out lies, we must add new guardrails. I’m all for protecting “thought that we hate,” but not speech that incites hate. It undermines the very values of a fair marketplace of ideas that the First Amendment is designed to protect.
On the other hand, Erwin Chemerinsky, dean and professor of law at the University of California, Berkeley School of Law, argues against the regulation of hate speech, writing,
History shows that punishing hate speech risks creating martyrs and rallying support. There is no evidence that banning hate speech does anything to lessen the presence in society of racist ideas or even racist crimes. The law is clear that hate-motivated crimes can be subject to enhanced punishments; it is just the speech that is protected by the First Amendment.
The Supreme Court took a similar position in Brandenburg v. Ohio (1969), holding,
the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.
And yet some, myself included, argue that the spread of harmful or potentially harmful disinformation by public figures, and especially elected officials like the president, who have megaphones and a duty of care to their electorate, ought to carry some penalty. In a 2019 paper entitled “Disinformation Campaigns and Hate Speech: Exploring the Relationship and Programming Interventions,” researchers Lisa Reppell and Erica Shein of the International Foundation for Electoral Systems carefully outline hate speech and disinformation as two sides of the same coin, locked in a dangerously symbiotic relationship. The paper presents disinformation as a conduit and catalyst for amplifying hatred and antipathy, both of which are detrimental to public trust as well as to the democratic process and its institutions. (Merriam-Webster defines disinformation as “false information deliberately and often covertly spread [as by the planting of rumors] in order to influence public opinion or obscure the truth.”)
In October 2019, Twitter CEO Jack Dorsey announced that the platform had “made the decision to stop all political advertising on Twitter globally,” adding, “We believe political message reach should be earned, not bought.”
Since that statement, and following Twitter’s recent moves to attach fact-check links to misleading tweets, Dorsey has had to defend the company’s actions by pointing to the platform’s Civic Integrity Policy, a strict rule governing the use of Twitter in relation to certain civic processes, namely “political elections, censuses, and major referenda and ballot initiatives.” In a series of tweets, he explained the intention “to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves.”
Doubling down on this decidedly polarizing path, on March 5, 2020, the platform expanded its hateful conduct policy to “address dehumanizing speech around more complex categories like race, ethnicity, and national origin.” Presumably acting on this new mandate, Twitter banned former Ku Klux Klan grand wizard, avowed white supremacist, and anti-Semite David Duke in July, citing “repeated violations of the Twitter Rules on hateful conduct.” A month earlier, YouTube took a similar step when it banned a coterie of white supremacist and white nationalist accounts, including those of Duke, Stefan Molyneux, and Richard Spencer. Facebook, for its part, had deleted a number of white nationalist and neo-Nazi profiles and pages in the wake of the deadly alt-right and neo-Nazi Unite the Right rally in Charlottesville, Virginia, in 2017. But just last week it came under fire for failing to take down a page for a group called “Kenosha Guard,” which called for armed citizens to defend streets in the Wisconsin city where protests erupted after the police shooting of Jacob Blake, and where one armed civilian who showed up killed two people and injured a third. Zuckerberg admitted that the page violated Facebook’s policies and should have been removed.
These efforts by big tech companies to stymie hate speech have provoked the ire of many mainstream Republicans and alt-right personalities who accuse the platforms, and Twitter especially, of a “consistent pattern of political bias and censorship on the part of big tech,” as Senator Ted Cruz (R-TX) most succinctly put it.
Addressing the lingering question of whether a private tech company like Twitter has the right to ban or fact-check its users, whether for hate speech (à la Duke) or disinformation (as in the case of Don Jr.), Frank LoMonte, director of the Brechner Center for Freedom of Information at the University of Florida, deftly argues that “Nobody has a ‘free speech right’ to insist on using a non-governmental platform to convey his message.” He told Adweek,
When a government agency refuses to let a speaker speak, we call that censorship and it’s a First Amendment problem. But when a private platform like Twitter refuses to let a speaker speak, we call that “editing.” There’s no way that the publishing business could possibly work if we all had a right to demand that our letters run in the New York Times or in the New Yorker exactly the way we want them.
So, while fully aware of what a dangerous slippery slope censorship of any kind can be, I would argue that some kinds of disinformation and hate speech are simply too grave to be left without any form of penalty, be it a fine or community service, as I recommended in my Facebook post.