The Loss of Anonymity and More? Mass Surveillance and Facial Recognition

Photo by Rob Sarmiento on Unsplash

Video surveillance has become commonplace throughout the world. From shopping centers to private warehouse facilities, and even in your own home, it’s not unusual to be on camera. But what happens when cameras stop being passive and start reporting our faces to officials?

In Singapore, this is no longer hypothetical: GovTech, a government agency, has announced plans to turn every one of the country's lampposts (over 100,000 in all) into part of a wireless sensor network equipped with facial recognition software by next year.

“We are testing out various kinds of sensors on the lampposts, including cameras that can support backend facial recognition capabilities,” a spokesman from GovTech told Reuters. “These capabilities may be used for performing crowd analytics and supporting follow-up investigation in the event of a terror incident.” While the government vowed that the technology is meant to improve people’s lives and will not be overbearing, intrusive, or unethical, the new program is raising concerns among privacy groups.

Facial recognition technology is already used in cities like Beijing and Shanghai, predominantly for law enforcement. Just last week, Chinese authorities arrested a fugitive after their facial recognition system spotted him in a crowd of 60,000 people. In Japan, foreigners leaving the country, and nationals returning home, are subject to facial screening checked against their passport photos as part of strengthened anti-terrorism measures.

Facial recognition software can be used for more than just catching criminals. Recently, a facial recognition firm in China was able to identify a man with a cognitive disorder who had been missing for over a year, reuniting him with his family. And an Indian news outlet reported that nearly 3,000 missing children in New Delhi have been located thanks to the technology.

While aiding investigators in identifying criminals, lost adults, and children may seem like an overwhelming benefit of mass video surveillance equipped with facial recognition software, the privacy and social justice implications of such systems are far too great to ignore.

Jay Stanley, senior policy analyst at the American Civil Liberties Union, told Reuters, “When you contemplate face recognition that’s everywhere, we have to think about what that’s going to mean for us. If private companies are scraping photos and combining them with personal info in order to make judgements about people—are you a terrorist, or how likely are you to be a shoplifter or anything in between—then it exposes everyone to the risk of being misidentified, or correctly identified and being misjudged.”

In the United States, mass surveillance coupled with facial recognition could be used to chill free speech by discouraging protest. The FBI already surveils groups, like Black Lives Matter, that participate in peaceful assembly. If mass facial recognition software were deployed in the US, and people knew that law enforcement could record and identify them at a protest, would they be less likely to attend?

Additionally, given a justice system that criminalizes class and race (disproportionately affecting people of color and low-income communities) and a militarized police force whose culture encourages abuses of power, should we invest in a mass surveillance system that has been found to misidentify people of color and women at higher rates? Not only does this technology risk perpetuating racial biases, but there is a strong chance it will lead to false arrests. Ultimately, innocent people will have to spend time and resources proving their innocence, if they are given the chance at all. If we are truly invested in addressing crime through cameras, perhaps we should start by attaching body-worn cameras, ones that cannot be shut off, to all of our police officers.