Privacy Risks in Using Facial Recognition for Contact Tracing

The ongoing pandemic has provided governments with more justification for surveilling and collecting personal data from individuals. Because human faces are highly sensitive information, any use of facial recognition by the government for contact tracing presents inherent risks to privacy and must be regulated with great care.


By: Jianchen Liu, staff member

The world is suffering from the ongoing COVID-19 pandemic. Scientific evidence suggests that testing to identify confirmed cases, quarantining those individuals, and tracing the people with whom they have had close contact is an effective way to control the spread of the virus. Because contact tracing is a critical component of this comprehensive strategy, governments have been striving to implement it effectively. At the moment, the digital tools used for contact tracing are predominantly proximity tracing tools, which use location-based or Bluetooth technology to find and trace suspected cases. However, things have changed with the advent of a new technology: facial recognition.

Facial recognition is a method of identifying or verifying an individual's identity by using their face; it can be applied to people in photographs, in video, or in real time. After a camera captures an image of a face, a computer program picks out the face's distinctive features, converts the image into a grayscale template, and automatically compares that template with others stored in an existing database. In essence, facial recognition is a digital matching technology that governments can use to identify, surveil, trace, and verify individuals.
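The matching step described above can be pictured as a nearest-neighbor search over numeric feature vectors: the system scores a probe template against every stored template and reports the closest identity above a confidence threshold. The sketch below is purely illustrative; the three-number "templates," the database of names, and the 0.9 threshold are invented for the example and bear no relation to any real facial recognition system, which would use much higher-dimensional features.

```python
import math

def cosine_similarity(a, b):
    """Score how alike two feature vectors are, from -1 (opposite) to 1 (identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the identity whose stored template is most similar to the probe,
    or None if no stored template clears the similarity threshold."""
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Hypothetical database of enrolled templates (toy values for illustration only).
enrolled = {"person_a": [0.9, 0.1, 0.3], "person_b": [0.1, 0.8, 0.2]}
print(best_match([0.9, 0.1, 0.3], enrolled))  # matches person_a
print(best_match([0.0, 0.0, 1.0], enrolled))  # no match above threshold -> None
```

The design choice that matters for the privacy discussion is the one-to-many comparison: a single captured face is scored against an entire enrolled population, which is what allows identification of people who never consented to the specific search.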

To deal with the pandemic, countries including China, Russia, India, Poland, and Japan have used, or plan to use, facial recognition technology ("FRT") for tracing and surveillance. The United States is also moving in this direction. The State of Washington passed a bill on March 31, 2020, allowing the state government to use facial recognition to identify, verify, or track individuals when certain conditions are met. A proposed California bill followed, which would have allowed government agencies to use facial recognition services to engage in surveillance; it was temporarily shelved amid public concerns about privacy risks. Even though neither bill was designed specifically to respond to COVID-19, if enacted, they could shape the way that people are traced and surveilled.

Similar efforts aimed directly at COVID-19 contact tracing are also in progress. Under the pressure of a record-high death rate, one northwestern Indiana city is reportedly considering installing a facial recognition video camera network downtown in hopes of slowing the spread of the coronavirus. Clearview AI, an infamous New York-based startup that has collected billions of photographs from social platforms and built a near-universal facial recognition system mainly for government use, is reportedly in discussions with federal and state agencies to help with contact tracing. Given these developments, we may find ourselves in a situation where individuals' privacy is at risk.

The ongoing COVID-19 pandemic may have provided governments with additional justification to surveil and trace individuals. Human faces, however, constitute highly sensitive information, and unregulated government use of FRT therefore presents inherent risks to privacy. Critics have expressed great concern about invasions of privacy by this controversial technology, and those concerns need to be addressed before governments put FRT into use.

Human faces are widely used to identify others in everyday life, and they are also a part of government-issued identity documents such as passports and driver's licenses. FRT can make individuals easily identifiable even without the help of any other identifier. What is more, with the help of FRT, governments may obtain an amount of personally identifiable information ("PII") that is quantitatively and qualitatively different from what they can gather using other devices. For example, facial surveillance and facial tracing can make individuals' location data available to the government. Such location data can reveal an individual's interests, habits, associations, and more, raising the same concerns the Supreme Court expressed in Riley v. California and Carpenter v. United States: that the data contained in a cell phone, or the location data a cell phone reveals, is quantitatively and qualitatively different from what the government could obtain using earlier investigative tools. The human face is such a central feature of identity that it may pose greater privacy risks than any other identifier.

Under the Fourth Amendment and Katz v. United States, if a surveillance technology used by the government violates an individual's reasonable expectation of privacy, government surveillance using that technology constitutes a search under the law. Without a warrant or an exception to the warrant requirement, such conduct is unconstitutional. Fourth Amendment protection, however, is subject to the third-party doctrine, under which individuals may lose that protection if they voluntarily turn over the information in question to a third party. In recent cases, the Supreme Court has expanded the Fourth Amendment to address privacy risks posed by new technologies such as long-term GPS tracking, smartphone searches, and cell phone location data. The decision in Carpenter makes clear that the third-party rule "does not by itself overcome the user's claim to Fourth Amendment protection." This decision marks the Supreme Court's latest position on the government's use of modern surveillance tools.

As mentioned above, FRT can be used by the government to identify, surveil, trace, and verify individuals. Facial identification reveals one’s identity, while facial surveillance and tracing can collect both one’s identity information and location data, which are qualitatively and quantitatively different. Under the Supreme Court’s recent decisions, government use of FRT may well constitute a search under the Fourth Amendment. 

The privacy protection regime under the Fourth Amendment, however, has its shortcomings. The Fourth Amendment does not answer the question of whether mere facial identification by the government harms an individual and thereby confers standing to sue. In particular, it is ill-suited to tailoring a regime for FRT that would place different constraints on different types of government use. The Fourth Amendment is also poorly positioned to provide regulatory guidelines for how the government may use its own databases for facial recognition purposes, or for how a private company in possession of an image database should respond to a government request for access. Ethical issues such as the reliability of FRT also need to be addressed. To fill these gaps, a new legislative framework is needed.

Policymakers should start by evaluating whether government use of FRT for surveillance or identification purposes should be totally banned. Sociological research reveals that commercially driven FRT applications like Apple Face ID help increase citizens’ familiarity with the technology and “thereby may indirectly increase acceptance levels for state surveillance uses of the technology.” Certain uses of FRT are beneficial to society. For instance, the government may use FRT to locate or identify missing persons. Therefore, a categorical ban on government use of FRT may not be appropriate. It is also necessary to distinguish between the different types of FRT and set tailored limitations on government use, establishing safeguards that will allow government agencies to use facial recognition in a manner that does not threaten individual privacy. The justification provided by the ongoing pandemic should be weighed before making a final decision. In addition, government use of FRT for regulatory purposes should be differentiated from use for criminal investigative purposes, as the ongoing pandemic may provide little if any justification for police use of FRT.

Jianchen Liu is an LL.M. student at Columbia Law School and a staff member of the Columbia Journal of Transnational Law. He graduated from East China University of Political Science and Law in 2016. Afterward, he practiced law as a litigation attorney in China for two years.
