Ahead of the Curve: Facial Recognition Grandstanding

Members of the San Francisco Board of Supervisors don’t normally attract national press attention, but that changed two weeks ago when Supervisor Aaron Peskin garnered headlines for proposing an outright ban on the city’s use of facial recognition technology. “I have yet to be persuaded that there is any beneficial use of this technology that outweighs the potential for government actors to use it for coercive and oppressive ends,” the supervisor said.

Peskin is not alone. Last month a coalition of eighty-five groups, including the ACLU and the National Lawyers Guild, sent a letter to Amazon, Google, and Microsoft demanding that the tech giants stop selling facial recognition technology to government agencies. The letter argues that the technology, already in use in many places, could “amplify and exacerbate historical and existing bias that harms these and other over-policed and over-surveilled communities. In a world with face surveillance, people will have to fear being watched and targeted by the government for attending a protest, congregating outside a place of worship, or simply living their lives.” The letter goes on to suggest that it would “supersize the government’s ability to target and separate families living in our communities.”

Neither Supervisor Peskin nor the authors of the letter mention what the “beneficial use of this technology” might be. First and foremost, its proponents say it will be used to identify criminals and deter crime. This is not a trivial matter. Black people, Hispanics, and women are substantially more likely to be victims of serious crime than whites or men; in fact, nearly two-thirds of US murder victims are Black or Hispanic. Reducing the number of crime victims, of whatever race or gender, is one of the principal reasons why government exists. Does Peskin care about these people?

Facial recognition has been used in law enforcement ever since the profession began. Until recently, it’s been conducted exclusively by humans, who are notoriously inept at it. I don’t know whether today’s technology is that much better than what humans can achieve alone, but there is every reason to believe it ultimately can be. How much sense would it make to ban human police officers from identifying suspects based on what they look like? If the answer is “Not much sense at all,” then how much sense would it make to forbid police from identifying suspects using tools better and more accurate than unaided human eyes and memories?

There are other beneficial uses of the new facial recognition technology. A pilot program in New Delhi, India, last year used it to identify some three thousand missing children. Facial analysis software has also helped diagnose DiGeorge syndrome, a genetic disease, in African, Asian, and Latin American children whom diagnostic criteria developed for European populations tend to miss. Supervisor Peskin doesn’t mention any of this.

Some of the attacks mounted against facial recognition technology fall somewhere between hysterical and dishonest. Last summer there were lots of raucous headlines when the ACLU tested an Amazon product and found that it incorrectly identified the faces of twenty-eight members of Congress as matching those of convicted criminals. The headlines were much smaller, though, when Amazon pointed out that the test had been run at an 85 percent “level of confidence” for the matching algorithm, rather than at the 99 percent level its documentation recommended. At the recommended level, the number of Congress members matching the criminal database fell to zero.
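The arithmetic behind that correction is worth seeing. Here is a minimal Python sketch of how a confidence threshold works; the scores and the helper function are invented for illustration and are not Amazon’s actual API:

```python
# Hypothetical similarity scores (0-100) returned by a face-matching
# system when probe photos are compared against a mugshot database.
# These numbers are made up for illustration only.
match_scores = [86.2, 91.5, 87.0, 88.4, 93.1, 85.3, 90.7, 89.9]

def confident_matches(scores, threshold):
    """Keep only matches at or above the given confidence threshold."""
    return [s for s in scores if s >= threshold]

# At an 85 percent threshold, every borderline score above counts as
# a "match" -- the setting the ACLU test reportedly used.
print(len(confident_matches(match_scores, 85.0)))   # 8 false positives

# At the 99 percent threshold recommended for law-enforcement use,
# none of them survive.
print(len(confident_matches(match_scores, 99.0)))   # 0
```

Whatever name a commercial system gives this knob, the lesson is the same: a headline about false matches is meaningless without the threshold that produced them.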

Should we just let the “invisible hand of the market” decide how and when governments use facial recognition technology? Absolutely not. Facial recognition conducted by machines is definitely cheaper than that conducted by humans, and (if it’s not there already) will soon be more accurate. It should not be used in the ways the ACLU letter warns against: targeting people “attending a protest, congregating outside a place of worship, or simply living their lives.” It must be tightly regulated, and that regulation needs to be nimble enough to keep up with the pace of potential abuse.

I find myself recoiling at the thought of agreeing with Microsoft Corp. about almost anything, having invented whole new genres of profanity to hurl at some of its more frustrating products over the years. But a thoughtful article that Microsoft President Brad Smith published last December, before either the ACLU letter or the Peskin proposal appeared, hits the nail on the head.

Smith details exactly what can go wrong with facial recognition technology. It can be racially biased. It can enable “mass surveillance” that will “encroach on democratic freedoms.” It can produce results using opaque logic not traceable by any human. Its imperfect results can be applied without any common-sense human intervention to harass or punish the innocent. It is far too powerful a tool to be safely used by the unscrupulous or the ignorant.

Most businesses chafe under government regulation, but Smith is demanding it, and demanding it as soon as possible. He has a number of detailed, constructive suggestions, such as requiring transparency to “put impartial testing groups like Consumer Reports and their counterparts in a position where they can test facial recognition services for accuracy and unfair bias in a transparent and even-handed manner.” Humans, not machines, must always be in charge; as Smith puts it, “It’s critical for qualified people to review facial recognition results and make key decisions rather than simply turn them over to computers.” Court orders should be required before individuals are targeted for tracking and monitoring by the ever-growing number of cameras in public places.

Smith has lots of other suggestions, too, which may or may not be designed to give Microsoft an edge over its competitors. I can’t say at this point that his list of proposals is perfect, but I can say that we pay legislators and their staffs to make decisions on matters like this. When something has as much potential for both good and evil as facial recognition technology does, you shouldn’t just “ban” it—that’s the demagogue’s answer. You should roll up your sleeves and do the hard work.

Others before me have observed that artificial intelligence, of which facial recognition technology is just one manifestation, is like fire: useful, but dangerous. It’s easy to imagine one Neanderthal saying to another, “You’re bringing what into this cave? I don’t think so! We’re going to ban fire, because I have yet to be persuaded that there is any beneficial use of this technology that outweighs its potential use for coercive and oppressive ends.” Or something like that, in prehistoric-speak. Today we have a rather massive societal structure for regulating fire, from early education to building codes to materials science to bureaucracies dedicated to extinguishing fire when it does break out. I’m glad we chose that course, and I’ll be glad when we can start to enjoy the benefits of widespread facial recognition technology coupled with whatever regulation is necessary, which may be a lot.