One of the legal foundations of the freewheeling internet is under attack in Washington, DC.
When the internet first got rolling a quarter century ago, it didn’t take long for the firms enabling millions of users to share their views with the world to notice that at least some of those shared opinions were problematic: libel, incitement to violence, hate speech, child pornography, and more. Controlling the torrent would be difficult, to say the least. When arguably unlawful content was posted, plaintiffs’ lawyers had an easy choice: (1) sue the poster, who would be hard to find and possibly judgment-proof; or (2) sue the venture-capital-rich internet platform on which the poster placed the illegal content. Most chose path #2.
The internet firms being sued didn’t like this approach. They claimed they were different from newspapers, which are generally liable for content they choose to publish, because internet providers don’t have control over the content posted online. They’re simply neutral “platforms,” taking all comers. Suing them would be like suing the post office for delivering a censorable letter.
The problem with this strategy became evident in a 1995 libel case against Prodigy, a now-defunct bulletin board service. Prodigy lost because it had previously made a half-hearted effort to clean profanity out of its postings. Therefore, said the court, it was not just a platform but a publisher, as liable for every word that appeared there as TheHumanist.com would be were I to commit some offense in this article.
Even Washington, DC, back in the more sensible ’90s, thought this outcome was nuts. If a platform makes some effort to clean things up, it’s treated worse than if it had done nothing at all? So a bipartisan coalition added Section 230 to the Communications Decency Act of 1996, making it clear that internet publishers won’t lose their platform status if they attempt to censor posted content they deem unlawful or undesirable.
Most of the big platforms (Facebook, Twitter, etc.) are now spending vast sums doing exactly this: screening out millions of postings they find offensive. They do this partly because the laws in some countries, like Germany, require it. They also do it because better, cleaner content means more money. They believe, probably correctly, that users will be less willing to spend time (and money) on a platform where the content makes them squeamish. (The nitty-gritty work of doing the actual screening is highly unpleasant; at least one contractor claims it has given him PTSD.)
The net result has been an internet enabling massive diversity of opinion, with occasional grumbling from people (e.g., Franklin Graham) whose posts are censored, and still more grumbling from people who think that some posts that should be removed aren’t (e.g., Nancy Pelosi). Sometimes the grumbling pays off, and sometimes it doesn’t.
Without Section 230, though, the internet would look vastly different. If Twitter, for example, were told, “You’ll be liable like a newspaper, and not protected like a platform, if you lift a finger to censor any tweets at all,” they would have three choices: (1) shut down their whole censorship operation, and let every piece of garbage through; (2) be prepared to spend a gazillion dollars on lawyers to defend themselves every time they choose to delete (or fail to delete) an arguably offensive tweet—not to mention the damages they’ll have to pay every time they lose; or (3) call it a day, cease operations, and let someone else deal with the hassle.
Destroying today’s 99+ percent open internet forum is exactly what Sen. Josh Hawley (R-MO) seems intent on doing. This is the same Sen. Hawley, you may recall, who recently accused a Trump judicial nominee of “anti-religious animus” for doing his job as a city attorney in enforcing an anti-discrimination ordinance. Hawley has now introduced a bill to gut Section 230, at least for large companies. The only way any large firm could enjoy its protections would be to conduct a “political neutrality audit” of its screening procedures every two years, which would then have to be approved by four out of five members of the Federal Trade Commission. How do you conduct such an audit, and how do you interpret it? No one knows—least of all the FTC. We do know that the audit is supposed to ferret out “an intent to negatively affect a political party, political candidate, or political viewpoint.” But aren’t the Nazis a political party? Isn’t “Let’s butcher all the capitalist pigs!” a political viewpoint? Isn’t screening out this kind of filth exactly what the platforms are supposed to be doing?
Hawley and many other conservatives vehemently insist that today’s screening process is biased against conservative viewpoints. They offer no data to support this, only anecdotes. Maybe they’re right—there do seem to be quite a few such anecdotes, some of them as bizarre as Facebook censoring an excerpt from the Declaration of Independence, or Twitter barring phrases that appear repeatedly in the United States Code. I’ve argued here before that censorship decisions ought to be made by accountable government officials, not by private firms. Censoring the whole internet would require a massive new bureaucracy, but perhaps there’s a less disruptive alternative: a quick, cheap, and easy appeals process, in which those who feel wronged by a private censorship decision can get a binding second opinion from a judicial official. It shouldn’t take long for a neutral party to decide whether a given post is OK or not. If you still disagree with the decision, you can go through a full-blown, years-long litigation process, but the vast majority of such appeals should be initially decided within a day or two. The cost of whatever new appeals bureaucracy is required could be paid by a fee levied on the platforms seeking Section 230 protection.
If I were king, I’d couple that new appeals process with a greater emphasis on going after the real wrongdoers: the people who post censorable material in the first place. The guy in New Zealand who distributed video of the mosque shooting was just handed a twenty-one-month jail sentence. Good! Alex Jones may be heading straight toward being bankrupted by emotional distress lawsuits from Sandy Hook parents. Good! Let’s punish the bad guys, and leave those who have given us the Information Age alone.