Ahead of the Curve: Should Fake News Be Outlawed?
Read Matthew Bulger’s response to this article, “Fake News Must Remain Legal,” here.
Shortly after the individual who shot up a church in Sutherland Springs, Texas, earlier this month was identified, news reports told us that he was a member of the “Antifa” movement and had vowed to start a civil war by “targeting white conservative churches” and causing anarchy in the United States. He carried an Antifa flag and shouted, “This is a communist revolution!”
Not long afterward we were told that none of this was true. It was yet another example of the “fake news” items that have proliferated on the internet in recent years. Their impact is impossible to measure, but it is far from negligible. Many people rely on news sources that echo their own predilections; they are far more likely to see only the fake item, and not the debunking that comes from the opposite pole.
I’m no friend of Antifa. I’d like to see that whole movement shrivel and die. But I want to see them fail on the merits of what they do—not on lies.
The source of fake news can be hard to pin down. Sometimes an item can begin life as a fact, get twisted and embellished along the way as in the game of Telephone, and eventually cross a line to become outright false. That’s not good, but it’s also not what happened in the Texas shooter case. Every indication is that someone sat down at a keyboard and banged out an utter fabrication just to promote a political point of view. The amount of good this kind of deliberate lying does for humanity, in my view, is zero.
The 2016 election was a high-water mark for fake news, at least so far. It’s impossible to determine accurately how much impact it had in electing Donald Trump, but it’s hard to resist the suspicion that the impact was significant. I know one Pennsylvanian, for example, who has very little interest in politics and rarely votes, but who made the effort to get out and vote against “crooked Hillary” because of all the awful things he saw about her on Facebook. Some of what he saw was true, but some of it was manufactured in the same way the item about the Texas shooter was.
Lies have existed since the first caveman learned how to grunt, but the internet makes things different. Fake news can amplify itself in ways not imaginable before the World Wide Web arrived. The overwhelming multiplicity of sources makes it far easier and more comfortable for people to narrow their inputs to conform to their own biases, in ways that weren’t possible when nearly everyone had access to only three television networks and a couple of daily newspapers.
Future growth in artificial intelligence technology is going to make matters even worse. Scientific American ran a piece last month called “Could AI Be the Future of Fake News and Product Reviews?” which is downright scary. Almost as scary as last summer’s story, “AI Will Make Forging Anything Entirely Too Easy.” With very little effort, whoever made up the Texas shooter story will now be able to create fake video “proof” of his participation in past Antifa riots. “Pictures don’t lie,” people used to say. Well, now they do. Changed circumstances call for changed laws.
Any law against fake news must be narrow and carefully tailored. Lots of things that are arguably “bad” should remain legal, including opinions, “embellishment,” and passing on something you don’t know to be false. I’m not trying to put the whole blogosphere in jail. But I do think that seasoned experts in free expression law could sit down and come up with some language to home in on the most vicious fabricators, like the person who made up the Texas shooter item off the top of his or her head.
A narrow law like that will not completely solve the problem. It most especially won’t solve the part of the problem emanating from other countries, like Russia, which is accused of the most massive fabrications during the 2016 campaign. Facebook said that as many as 126 million of its users may have seen content that originated from a Russian disinformation campaign. But a carefully designed law, followed by a couple of well-publicized prosecutions of egregious violators, could plant a well-deserved fear in the minds of those who lie with impunity.
We already have tort laws to deal with defamation, but they aren’t up to the task of dealing with twenty-first-century fake news. In many cases, there is not a readily identifiable victim or a quantifiable financial harm; even where there is, not every victim has the resources or the stomach to carry through with a lawsuit. There aren’t many things that any Trump family member has done that I admire, but Melania Trump’s ferocious campaign against the blogger who accused her of working for a paid escort service is one of them. Not everyone is in a position to do what she did, though.
Germany’s cabinet recently approved a law that would fine social networking sites for failing to promptly remove fake news once it has been identified as such. The European Union is looking at ways to do something like this on an EU-wide basis. It will be illuminating to see how this works out. My preference would be to pursue the perpetrators, rather than the media, but I could be wrong about that. What I think I’m right about is that we shouldn’t just roll blissfully along, letting technology make the problem worse.
There’s an old saw from the late Justice Louis Brandeis: “Sunlight is said to be the best of disinfectants.” He went on to argue that the best way to deal with false information is to crowd it out with true information, in a free marketplace of ideas. But as a matter of medical fact, sunlight is not the best of disinfectants. If you have a serious infection, just sitting outside in the sun is not what you’ll want to do. There are plenty of disinfectants that are far quicker and more effective than sunlight. And moving from Brandeis’s metaphor back to reality, the information world of 2017 is quite a bit different from the information world of 1913. Laws need to keep up.
It’s easy to glibly recite mantras like “freedom of speech” and think that settles the question. In fact, we already have laws against certain types of speech, and I don’t see many calls for getting rid of them. For example, earlier this month the FDA began cracking down on sellers of cannabis products for claiming they can combat tumors and kill cancer cells. I don’t know all the facts here. But if it turns out these sellers are just making things up, why is that OK? Why shouldn’t they be punished?
Consider the case of Andrew Wakefield. He’s the guy who faked the study linking vaccination to autism a few years ago, causing untold misery for thousands who believed him—and even for people who didn’t, but who became sick anyway because no vaccination is 100 percent perfect. How many years in prison did he serve? None, because there is no particular criminal law against what he is alleged to have done.

Even more to the point is the regulation of fake news in capital markets. Wanna make a quick buck? All you have to do is spread a fake story that XXX Corp. has a sex abuse scandal, or a production line breakdown, or an imminent patent infringement claim—right after you’ve shorted its stock. In the day or two it takes to get the truth out, you can make a killing. Pros call this free speech technique “poop and scoop.” The SEC calls it illegal, and it has a powerful tool in Rule 10b-5 for dealing with it. Just last spring, the SEC filed charges against twenty-seven respondents for spreading manufactured news. Rule 10b-5 would be a useful model for any new fake news law because of its well-litigated “scienter” requirement, which means, in plain English, that you really have to know something is fake before you get in trouble. I don’t think Rule 10b-5 can be carried over wholesale to a more general fake news law, but it’s a good starting point.
We already have laws against fake news when the victims are rich investors. When the victim is the democratic process itself, I think it’s time the rest of us got the same kind of protection.