Ahead of the Curve: What If Lie Detectors Worked?
Not everything you read on the internet is true. I know this, because I read it on the internet.
A few weeks ago I came across a piece entitled “A New AI That Detects ‘Deception’ May Bring an End to Lying as We Know It.” This followed closely on the heels of “AI System Detects ‘Deception’ in Courtroom Videos,” based on the same new technology.
Both of these headlines fall squarely in the “not true” category. The technology in question “uses computer vision to identify and classify facial micro-expressions and audio frequency analysis to pick out revealing patterns in voices.” But it’s never been used with “courtroom videos.” It’s been used with actors pretending to be in a courtroom and either lying or telling the truth. There is every reason to suspect that a person motivated to tell a real lie to preserve his or her freedom or property might display “micro-expressions” quite differently from an actor, as might a person under heavy stress who is nonetheless telling the truth. Still, the artificial intelligence (AI) did a much better job of picking out the actors’ lies than a human control group did.
This isn’t the only recent example of facial analysis being used to figure out what’s going on inside a human’s head. Another recent study reported surprisingly accurate predictions of a person’s sexual orientation from nothing more than a picture of his or her face. In China, an AI has been developed that’s pretty good at identifying which driver’s license photos belong to convicted criminals.
Despite the growing pile of data, I remain skeptical about the ability of a machine to make reliable judgments simply by looking at a face. That may make me a stuck-in-the-last-century relic, but so be it. My doubt fades, though, when I read about the rapid progress being made with machine-brain interfaces. If you want to see something jaw-dropping, take a peek at the pictures generated by an AI developed at Kyoto University that can quite literally read a person’s mind. The subject’s brain activity is recorded by a scanner while he or she focuses on a picture; by working out which patterns of activity correspond to what the subject is seeing, the AI can create a roughly accurate reproduction of the picture. I didn’t believe this was possible until I looked at the images. They’re far from perfect (the tiger looks a bit like a groundhog), but the fact that a machine can do this at all is astonishing. The first television pictures back in the 1920s were a little blurry, too, and the technology improved quickly.
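For readers who want a feel for how that can work at all, here is a toy sketch of the general decoding idea in Python: learn a mapping from recorded brain-activity patterns to the pixels of a viewed image, then apply it to activity alone. Everything in it is synthetic, and the particulars (sensor counts, pixel counts, the linear model) are my own invention for illustration, not the Kyoto group’s method.

```python
# Toy sketch of the general decoding idea: learn a map from brain-activity
# patterns to the pixels of the viewed image, then reconstruct a held-out image
# from activity alone. All data is synthetic; nothing here is the Kyoto method.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_sensors, n_pixels = 200, 64, 16 * 16

# Pretend each viewed image evokes activity that is a noisy linear mixture of its pixels.
images = rng.random((n_trials, n_pixels))          # the pictures the subject "looked at"
mixing = rng.normal(size=(n_pixels, n_sensors))    # the brain's (unknown) response pattern
activity = images @ mixing + 0.5 * rng.normal(size=(n_trials, n_sensors))

# Fit a decoder on all but the last trial, then reconstruct the held-out image
# from its recorded activity alone.
decoder = Ridge(alpha=1.0).fit(activity[:-1], images[:-1])
reconstruction = decoder.predict(activity[-1:])

similarity = np.corrcoef(reconstruction.ravel(), images[-1])[0, 1]
print(f"Correlation between reconstruction and the true image: {similarity:.2f}")
```

The Kyoto pictures come out of essentially this train-then-decode structure, just with vastly more sophisticated machinery and real brains instead of synthetic numbers.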
These results are achieved just with sensors located on the outside of the skull. That’s a bit like trying to figure out what’s going on inside a computer by putting your ear up against it. The real breakthrough comes when conductors (metal or otherwise) that communicate directly with neurons are inserted inside the brain—as is already happening.
Old-fashioned polygraphs, which simply measure physiological responses such as sweating and pulse rate, are already moderately useful. Their proponents claim they are 90 percent accurate; their detractors say the accuracy is closer to 65 percent, but even that is better than flipping a coin. When you take that as a baseline, add in what’s already happening with the facial-feature AI, and top it off with what seems eminently doable through a direct machine-brain connection, it seems more likely than not that we’re closing in on a world where lie detectors basically work. Not perfectly, perhaps, but pretty darned well.
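To put a rough number on that “stack the signals” intuition, here is a back-of-the-envelope calculation. It assumes three lie-detection signals that err independently of one another (a generous assumption) and decide by simple majority vote; the individual accuracies are illustrative guesses, not measured figures.

```python
# Back-of-the-envelope arithmetic for stacking several imperfect lie detectors.
# Assumes independent errors and a simple majority vote; accuracies are guesses.
from itertools import product

def majority_vote_accuracy(accuracies):
    """Probability that a majority of independent detectors gets the answer right."""
    total = 0.0
    for outcome in product([True, False], repeat=len(accuracies)):
        p = 1.0
        for correct, acc in zip(outcome, accuracies):
            p *= acc if correct else 1 - acc
        if sum(outcome) > len(accuracies) / 2:
            total += p
    return total

# A low-end polygraph, the facial/voice AI, and a hypothetical brain-interface reading.
print(majority_vote_accuracy([0.65, 0.75, 0.80]))   # about 0.83
```

Three mediocre detectors, erring independently, together land in the low 80s; correlated errors would drag that back down, but the direction of travel is clear.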
Is this OK?
The objective of Ahead of the Curve is to stimulate thinking about the world-changing technologies just beyond the horizon. Like it or not, highly accurate lie detection is likely to be upon us soon. Better to start thinking about it now than to wait until it hits us.
My first reaction, which may be where I wind up, is that it’s more OK for everyone else than it is for me. Not that I ever lie… (OK, that’s a lie). I’m just offended by the very idea that someone might distrust me enough to hook me up to one of these machines. I’d be terrified that they might start asking about… (if you think I’m going to finish that sentence, you’ve got another thing coming).
Polygraph evidence is virtually always inadmissible in court, precisely because it is thought to be unreliable. If the reliability increases, though, then what’s the point in keeping it out? Juries would still need to be cautioned that no machine or test is 100 percent perfect, but other less-than-perfect evidence is routinely used in the search for truth. Why not this as well?
One thing machines cannot do is force someone to answer a question. Maybe drugs can break down the will not to answer, but a drug-induced answer may be less amenable to whatever lie-detecting procedure is being employed.
Here in the United States, the Bill of Rights gives us the right not to testify against ourselves. Courts have added a gloss: juries are not supposed to draw any inference if a defendant declines to testify. That’s not a universally accepted view, though. In England and Wales, juries may be permitted to draw adverse inferences from a defendant’s silence, and most people don’t consider England and Wales to be particularly repressive places. If a person refuses to participate in a highly accurate lie detection procedure, why shouldn’t juries be able to take that refusal into account?
It’s easy to visualize horrible uses of accurate lie detection. Repressive governments or religions could use it to identify and crush opposition. But there are positive potential uses as well. Consider the single phenomenon of government corruption, especially (though not exclusively) in the developing world. The costs of corruption are impossible to quantify, because the actual money that changes hands (estimated by some as up to a trillion dollars annually) is just the tip of the iceberg. The real damage comes from all the legitimate economic players that don’t want to subject themselves to this kind of extortion. They simply stay away, letting these places stagnate. Suppose accurate lie detection were limited to the single purpose of random testing of government officials, high and low, about illegal influences on the performance of their duties. If you don’t want a machine poking around in your brain, fine—don’t become a government official. For those who do choose public service, the in terrorem effect of routine accurate lie detection is quite likely to improve their honesty. That, for me, would be an enormous plus.
Change is scary. It’s tempting to stick your head in the sand and say: “Accurate lie detection raises too many hard questions, so let’s not go there.” Too bad—we’re already on the way. Far better to start thinking about how to complete sentences like: “It’s OK to use it for ___, but not for ___.”