A Face in the Crowd: Facial Recognition Technology and the Value of Anonymity
Facial recognition technology has the power to identify every face in a crowd. The case for a ban has never been stronger.
By: Tatum Millet, Staff member
On September 29, the Detroit City Council voted to extend its contract for facial recognition technology. Despite calls to ban the city’s facial recognition system after a false positive match led to a wrongful arrest in June, the Detroit Police Commissioner defended the Police Department’s continued use of the controversial technology, emphasizing that AI-driven facial recognition software is just “one tool in the toolbox.” Meanwhile, earlier in September, Portland’s City Council unanimously passed the nation’s toughest ban on facial recognition technology, emphasizing in the text of the statute the risk of false positives within over-surveilled populations.
Facial recognition technology is new and presents many unresolved challenges. The technology works by extracting unique details from people’s faces, then comparing that facial data against faces stored in databases drawn from sources such as mugshot records, DMV photos, and even social media. Facial recognition can be used to identify people in stored photos and videos, as well as in real time. Though the technology has improved dramatically in recent years through machine learning, it still makes mistakes. But even as an imperfect tool, the technology powerfully amplifies the capacity of law enforcement to surveil, track, and identify people as they move through public life.
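To make the matching step concrete, the minimal sketch below shows the core idea in simplified form: each face is reduced to a numeric “embedding,” and two faces are declared a match when their embeddings are sufficiently similar. The vectors, names, and threshold here are hypothetical stand-ins for illustration, not any vendor’s actual system; real systems use embeddings with hundreds of dimensions produced by a neural network, but the comparison logic is the same.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(probe, gallery, threshold=0.8):
    """Return every enrolled identity whose stored embedding is at least
    `threshold`-similar to the probe face, best match first. Lowering the
    threshold catches more true matches but produces more false positives."""
    scores = ((name, cosine_similarity(probe, emb)) for name, emb in gallery.items())
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Hypothetical 4-dimensional embeddings standing in for a mugshot database.
gallery = {
    "person_a": np.array([0.9, 0.1, 0.3, 0.4]),
    "person_b": np.array([0.2, 0.8, 0.5, 0.1]),
}
probe = np.array([0.85, 0.15, 0.35, 0.38])  # face captured from a camera feed
print(find_matches(probe, gallery))  # [('person_a', 0.996...)]
```

The key design point is that the match threshold is a tunable dial: whoever operates the system, not the people scanned by it, decides how much similarity counts as a “match,” and therefore how often false positives like the one behind the Detroit arrest occur.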
Facial recognition technology presents governments with a tradeoff: efficiency benefits for law enforcement versus serious threats to privacy and civil liberties. A few U.S. cities, like Portland, have weighed the risks more heavily and banned the technology. Other cities, including Detroit, continue to expand their facial recognition systems while committing to certain limits, aiming to capture the benefits of machine learning without sacrificing privacy and civil liberties.
Yet, as protest movements flare up around the world in cities equipped with facial recognition systems, the repressive potential of the technology has become more tangible—and more chilling. Identifying and tracking each individual face in a crowd, a herculean task using traditional surveillance methods, is made cheap and effortless by advanced AI technology.
The struggle to preserve anonymity in public spaces played out vividly in Hong Kong during the 2019-2020 protests. Although officials never publicly confirmed or denied that police used facial recognition technology to monitor protests, pro-democracy protesters wore masks and carried umbrellas to conceal their faces from CCTV cameras and occasionally even targeted the cameras themselves, toppling surveillance towers throughout the city. In October 2019, Hong Kong officials banned face coverings that could conceal protesters’ identities. Even after officials issued mask mandates in response to the coronavirus crisis, Hong Kong courts upheld the mask ban as applied to unlawful public gatherings. This repressive ban raised the stakes for protesters, who face arrest if found in violation of Hong Kong’s strict Public Order Ordinance.
In Moscow, too, where facial recognition is widespread, the government uses the technology to collect information on peaceful protesters. Anti-surveillance activists protesting the installation of facial recognition cameras throughout the city started a campaign encouraging people to paint their faces using CV Dazzle, a camouflage technique that confuses facial recognition algorithms. Beyond preserving their own anonymity, activists employing the face-distorting technique offer a striking demonstration of how people adjust their behavior when subject to state surveillance. The constant threat of identification by facial recognition technology distorts the social life of communities, suppressing public gathering and political dissent.
Delineating the boundary between permissible and repressive applications of facial recognition technology becomes more difficult when the public rises up in protest to demand structural change. After the NYPD used facial recognition to arrest a Black Lives Matter protester, New York Mayor Bill de Blasio said that the city’s standards for facial recognition technology “need to be reassessed.” Unfortunately, fine-tuning the NYPD’s standards for deploying facial recognition in the future will not reverse the impact of its decision to use the technology against protesters.
Governments need not use facial recognition systems with great frequency to chill associational freedoms; the potential for such use is enough. When one Black Lives Matter protester is tracked down using facial recognition, it reveals that every protester is potentially vulnerable to such surveillance methods. Accordingly, when the police employ facial recognition, they infringe not just upon the liberty of those specifically targeted, but also upon the liberty of those who fear that one day, they themselves might be targeted.
This threat is, in part, a product of a lack of transparency and accountability in how facial recognition technology is implemented. Because the technology often draws on CCTV cameras, traffic cameras, and social media, it can be difficult to ascertain where and when faces are being scanned. In the UK, Cardiff resident Ed Bridges filed a lawsuit against the South Wales Police after he spotted a facial recognition van near the site of a peaceful protest he attended. On August 11, the UK Court of Appeal held that the police force’s use of facial recognition technology violated the rights of everyone scanned by the system, emphasizing the lack of criteria governing the technology’s deployment.
But the heart of the threat posed by facial recognition cannot be addressed merely by adjusting standards for its deployment, because the ultimate power to apply those standards rests in the hands of law enforcement itself. For populations historically targeted and surveilled by the state, proportionality guidelines do not provide substantial relief from racially discriminatory risk assessments. The threat is particularly acute because facial recognition has shown a pattern of racial and gender bias, increasing the risk that people of color will be wrongfully arrested as a result of a false match.
Protecting the fundamental rights to privacy, expression, association, and equality requires protecting the right to remain anonymous in public spaces. Under a standards-based system built on risk assessment (and, when called for, reassessment), discriminatory and repressive technology often irreparably violates these rights long before they are ultimately vindicated. The rights of those individuals most vulnerable to the oppressive power of this biased surveillance infrastructure—and to its “mistakes”—should not be sacrificed while officials squabble over standards and tech developers diversify their data sets. Rather, facial recognition should be banned in the name of safeguarding these fundamental rights. After all, those with the power to design, develop, and deploy facial recognition—not those subject to its algorithmic gaze—will be the people who determine when the technology is working correctly.
In the EU, facial recognition technology must conform to the limitations on data collection and processing set out in the EU’s data protection regulation, the GDPR. The GDPR imposes particularly strict restrictions on processing biometric data, such as facial data. Still, in view of the risk the technology poses to fundamental rights, the European Commission has not ruled out the possibility of an outright ban. A new U.S. bill proposes a ban on police use of facial recognition technology, but passing it into law will surely be a long fight. As legislation stalls, narrow Supreme Court rulings applying the Fourth and First Amendments to new surveillance technologies have left courts without much guidance on the facial recognition question.
Accordingly, any widespread ban on facial recognition technology in the next few years would need to be the product of political will. Municipal ordinances banning the technology have already succeeded in shifting public perception of facial recognition. Additionally, the use of facial recognition technology in recent protests has clarified the stakes of the debate by demonstrating that even technology subject to proportionality standards can infringe upon fundamental rights.
But unless activists and advocates succeed in holding public officials accountable for the repressive impact of facial recognition technology on communities, being just another face in the crowd will soon be a thing of the past.
Tatum Millet is a second-year student at Columbia Law School and a Staff member on the Columbia Journal of Transnational Law. She graduated from Wesleyan University in 2019. This past summer, she worked as a legal intern for the Digital Freedom Fund.