Facial recognition technology and the Fourth Amendment
by Rushing McCarl LLP
Sep. 22, 2020
New technology has consistently challenged the law: technology evolves at a lightning pace, while the law changes much more slowly. The law often must adapt long-established frameworks to address novel problems, and legal solutions sometimes fail to keep pace with technological innovation.
One current technological challenge facing courts is how facial recognition technology (FRT) will be regulated, and how it will interact with existing legal frameworks. This Rushing McCarl update will discuss concerns raised by governmental actors’ use of FRT. Future updates will address privacy concerns raised by private parties’ use of FRT.
FRT and Its Current Uses
FRT is an algorithm-driven technology that creates a facial template capturing a person’s features and the relative distances between them, allowing faces to be consistently measured, categorized, and sorted.[1] By comparing a facial image against stored templates, the technology can generate a list of potential matches, from which a match can often be identified.[2]
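To make the matching step concrete, the sketch below shows one common way such a comparison could work. It is a minimal illustration, not any vendor’s actual system: it assumes each face has already been reduced to a numeric feature vector (a “template”), and the gallery, similarity threshold, and identities are all hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates; values near 1.0 suggest a match."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def candidate_matches(probe: np.ndarray, gallery: dict, threshold: float = 0.6, top_k: int = 5):
    """Compare a probe template against a gallery of enrolled templates.

    gallery maps identity -> stored template. Returns up to top_k
    (identity, score) pairs whose similarity clears the threshold,
    best match first: the "list of potential matches" described above.
    """
    scored = [(name, cosine_similarity(probe, template)) for name, template in gallery.items()]
    candidates = [(name, score) for name, score in scored if score >= threshold]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)[:top_k]

# Hypothetical usage: three enrolled identities and one noisy re-capture.
rng = np.random.default_rng(0)
gallery = {name: rng.standard_normal(128) for name in ("person_a", "person_b", "person_c")}
probe = gallery["person_b"] + rng.normal(0, 0.1, 128)
print(candidate_matches(probe, gallery))  # person_b should score near 1.0
```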
FRT has been used in various forms and with varying success for approximately twenty years. It was used to manage security at the Super Bowl in 2001.[3] It was also used by Baltimore police during the Freddie Gray protests, where the technology’s use led to the arrest of protestors with outstanding warrants.[4] The State of New York has used it to identify more than 10,000 people who held more than one driver’s license. And the use of FRT by U.S. Customs and Border Protection has become common: anyone who has signed up for Global Entry has submitted to FRT.
Despite its use by law enforcement, most people know FRT through social media. Facebook’s auto-tagging feature, which identifies people in photos, is used by millions of Americans daily. What is a convenience on social media, however, becomes a concern when governments use it, because the consequences of error are far greater.
FRT’s Error Rate
Because FRT is driven by learning algorithms, the images that are fed into the algorithm help determine which face types it becomes adept at categorizing and interpreting. The pictures’ lighting and quality have a profound effect on the technology’s reliability. As a result, the error rate of FRT varies.
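As a rough illustration of what an “error rate” means here, the sketch below computes the fraction of misidentifications per demographic group from labeled trials. The trial data and group labels are entirely hypothetical; real evaluations, such as NIST’s Face Recognition Vendor Tests, use large labeled datasets.

```python
from collections import defaultdict

def misidentification_rates(trials):
    """trials: iterable of (group, predicted_identity, true_identity) tuples.
    Returns each group's fraction of incorrect identifications."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in trials:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical trials: the same matcher evaluated on two groups.
trials = [
    ("group_a", "id1", "id1"), ("group_a", "id2", "id2"), ("group_a", "id3", "id9"),
    ("group_b", "id4", "id7"), ("group_b", "id5", "id8"), ("group_b", "id6", "id6"),
]
print(misidentification_rates(trials))  # approximately {'group_a': 0.33, 'group_b': 0.67}
```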
In China, the technology is reportedly adept at categorizing and interpreting Asian faces but has a high error rate for other ethnicities. In the United States, it is adept at identifying Caucasian faces but has a high error rate for non-Caucasian faces. Indeed, Detroit Police Chief James Craig stated that the DataWorks Plus software used by his agency was unreliable:
If we would use the software only [to identify subjects], we would not solve the case 95–97 percent of the time. . . . That’s if we relied totally on the software, which would be against our current policy . . . . If we were just to use the technology by itself, to identify someone, I would say 96 percent of the time it would misidentify.[5]
The risk of misidentification is so significant that, in the wake of the George Floyd protests, Amazon, IBM, and Microsoft announced that they would not sell FRT to police departments until a nationwide law regulating the technology is in place.[6]
After the recent Black Lives Matter protests, there is growing concern that racial biases exhibited by police will be magnified by the use of error-prone FRT.[7]
The Fourth Amendment and New Technology
The Fourth Amendment to the United States Constitution protects against unreasonable searches and seizures by federal or state authorities. In Katz v. United States, the Supreme Court adopted a two-part test for assessing whether a person has a reasonable expectation of privacy.[8] The test asks (1) whether the person had an actual, subjective expectation of privacy, and (2) whether that expectation is one society is prepared to recognize as reasonable.[9] When the two-part Katz test is met, a warrant is required before the search can be carried out.[10]
As technology progresses, the scope of expected privacy changes. Historical cell phone data is one example: until recently, the law was unsettled on whether a person has a reasonable expectation of privacy in their physical location data. In 2018, the Supreme Court addressed this issue in Carpenter v. United States.[11] There, the Court held that the Fourth Amendment was violated when the government obtained historical cell phone location data from mobile carriers without first obtaining a warrant.[12]
The Carpenter decision was narrow in that it addressed only the government’s acquisition of long-term location data collected by private companies. The Court did, however, recognize that as new technology develops, courts will be required to work to protect people’s privacy. The decision will influence how future courts view other new technologies.
FRT and the Fourth Amendment
It is settled law that people have no right of privacy in their faces because a person’s face is open to the public.[13] Given the Court’s approach in Carpenter, FRT systems that rely on public observation (as opposed to the privately held records at issue in Carpenter) and are used for only a short period will likely not violate the Fourth Amendment.[14]
However, there may be a Fourth Amendment challenge to the use of FRT where governmental actors use private databases to run the facial comparisons.
In January 2020, the New York Times revealed that Clearview A.I. has built a business around scraping billions of facial images from social media, creating a database that can be licensed to authorities for FRT comparison. Under the Katz test, an argument can be made that individuals have a reasonable expectation of privacy when they post images to a closed social network (as opposed to a public forum). Clearview A.I.’s obtaining those images without consent, and its subsequent use of them, may violate some state privacy laws.[15] Under Carpenter, there may also be an argument that governmental use of these images violates the Fourth Amendment because they are held by a private company (whether Clearview A.I. or the original social network) and because the person who posted an image had a reasonable expectation of privacy in it.
As of this writing, Clearview A.I. has retained the services of Floyd Abrams, a well-known First Amendment attorney, to defend its use of the images. Clearview A.I. will likely argue that its actions are a form of protected free speech.
FRT and the First Amendment
While in certain circumstances the use of FRT may not trigger Fourth Amendment concerns, it may nevertheless trigger First Amendment concerns related to the freedom of association and the right to privacy.[16]
While courts have found that law enforcement’s photographing demonstrators does not violate the First Amendment right to freedom of association, specific targeted surveillance may do so.[17] For example, the New York Police Department’s targeted use of video, photography, and undercover surveillance of Muslim Americans was held to have caused “direct, ongoing, and immediate harm.”[18]
A case-specific factual analysis will be needed to determine whether the targeted use of FRT has a chilling effect on a person’s First Amendment rights. The right to freely associate and promote minority points of view could easily be undermined through targeted FRT.
Conclusion
A protestor has no right of privacy in their face and can expect to be identified in photographs of protests. This has long been the state of the law.
FRT, however, is a much more powerful tool than a mug book. FRT-enabled drones can identify and track individuals in real time and feed immediate information to law enforcement. Given FRT’s error rate and its potential chilling effect on the exercise of First Amendment rights, policymakers will need to have a robust debate about the proper balance between privacy and public safety. It will then fall to the courts to interpret whatever regulations emerge and fit them into a framework rooted in the Constitution’s protections of privacy and free speech.
Notes
[1] Most FRT algorithms are fed images gathered from the internet. One company, Clearview A.I., has gathered billions of images from social media sites and the wider internet, creating a facial database that it can market to law enforcement for FRT purposes. As of August 11, 2020, Clearview had begun mounting an aggressive First Amendment defense of its actions. See Kashmir Hill, Facial Recognition Startup Mounts a First Amendment Defense, N.Y. Times (Aug. 11, 2020).
[2] Kevin Bonsor & Ryan Johnson, How Facial Recognition Systems Work, HowStuffWorks, online at https://electronics.howstuffworks.com/gadgets/high-tech-gadgets/facial-recognition.htm.
[3] Note that the use of FRT at the 2001 Super Bowl was met with limited success, identifying only 19 people with minor criminal records. There were some false positives. Kevin Bonsor & Ryan Johnson, supra n. 2.
[4] Benjamin Powers, Eyes over Baltimore: How Police Use Military Technology to Secretly Track You, Rolling Stone (Jan. 6, 2017).
[5] Detroit Police Chief: Facial Recognition Software Misidentifies 96% of the Time, Vice (June 29, 2020).
[6] See Jay Greene, Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM, Wash. Post (June 11, 2020), online at https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/.
[7] If FRT is used to identify a suspect and an arrest is made, the admissibility of FRT evidence will be a crucial issue in litigation. Evidentiary battles over FRT are beyond the scope of this update, but the reader should note that, at a minimum, expert testimony will likely be needed to establish the technology’s reliability. That testimony will require knowledge of how the algorithm works, how it was trained, and the quality of its training data.
[8] 389 U.S. 347 (1967).
[9] Id.
[10] Id.
[11] 138 S. Ct. 2206 (2018).
[12] Id. Readers should note that before Carpenter, law enforcement could obtain historical cell phone location data by showing that the information was relevant to an investigation and controlled by a third party. Carpenter, however, requires a warrant supported by probable cause before authorities may access such data.
[13] Katz, 389 U.S. at 351–52; United States v. Dionisio, 410 U.S. 1, 14 (1973).
[14] The reader should distinguish between technology in general public use and limited, “sense-enhancing” technology. In Kyllo v. United States, the Supreme Court held that the use of thermal imaging to gain information about the interior of a person’s home violated the Fourth Amendment. 533 U.S. 27, 33 (2001). While a person may not have a right of privacy in their face, the use of “sense-enhancing” FRT may nevertheless violate the Fourth Amendment.
[15] In fact, several cases have been brought against Clearview A.I. on these grounds.
[16] The Perpetual Line-Up: Unregulated Police Face Recognition in America, GEO. L. CTR. ON PRIVACY & TECH. 42–44 (Oct. 18, 2016).
[17] Laird v. Tatum, 408 U.S. 1 (1972); Philadelphia Yearly Meeting of Religious Soc’y of Friends v. Tate, 519 F.2d 1335, 1337–38 (3d Cir. 1975).
[18] Hassan v. City of New York, 804 F.3d 277, 292 (3d Cir. 2015).