The President’s Commission on Law Enforcement and the Administration of Justice invited EFF to testify on law enforcement use of face recognition. The Commission, which was established via Executive Order and convened by Attorney General William Barr earlier this year, is tasked with addressing the serious issues confronting law enforcement and is made up of representatives from federal law enforcement as well as police chiefs and sheriffs from around the country.
We testified orally and provided the Commission with a copy of our white paper, Face Off: Law Enforcement Use of Face Recognition Technology. The following is our oral testimony:
President’s Commission on Law Enforcement and the Administration of Justice
Hearing on Law Enforcement’s Use of Facial Recognition Technology
Oral Testimony of
Jennifer Lynch
Surveillance Litigation Director
Electronic Frontier Foundation (EFF)
April 22, 2020
Thank you very much for the opportunity to discuss law enforcement’s use of facial recognition technologies with you today. I am the surveillance litigation director at the Electronic Frontier Foundation, a 30-year-old nonprofit dedicated to the protection of civil liberties and privacy in new technologies.
In the last few years, face recognition has advanced significantly. Now, law enforcement officers can use mobile devices to capture face recognition-ready photographs of people they stop on the street; surveillance cameras and body-worn cameras boast real-time face scanning and identification capabilities; and the FBI and many other state and federal agencies have access to millions, if not hundreds of millions, of face recognition images of law-abiding Americans.
However, the adoption of face recognition technologies has occurred without meaningful oversight, without proper accuracy testing, and without legal protections to prevent misuse. This has led to the development of unproven systems that will impinge on constitutional rights and disproportionately impact people of color.
Face recognition and similar technologies make it possible to identify and track people, both in real time and in the past, including at lawful political protests and other sensitive gatherings. Widespread use of face recognition by the government—especially to identify people secretly when they walk around in public—will fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment-protected rights to speak, assemble, and associate with others. Studies have repeatedly shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny, even when they are doing absolutely nothing wrong. And this burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.
The right to speak anonymously and to associate with others without the government watching is fundamental to a democracy. And it’s not just EFF saying that—the Founding Fathers used pseudonyms in the Federalist Papers to debate what kind of government we should form in this country, and the Supreme Court has consistently recognized that anonymous speech and association are necessary for the First Amendment right to free speech to be meaningful.
Face recognition’s chilling effect is exacerbated by inaccuracies in face recognition systems. For example, the FBI’s own testing found that its face recognition system failed to detect a match from a gallery of images nearly 15% of the time. Similarly, the ACLU showed that Amazon’s face recognition product, Rekognition, which Amazon aggressively markets to law enforcement, falsely matched 28 members of Congress to mug shot photos.
The threats from face recognition will disproportionately impact people of color, both because face recognition misidentifies African Americans and ethnic minorities at higher rates than whites, and because mug shot databases include a disproportionate number of African Americans, Latinos, and immigrants.
This has real-world consequences; an inaccurate system will implicate people for crimes they didn’t commit. Using face recognition as the first step in an investigation can bias the investigation toward a particular suspect. Human backup identification, which has its own problems, frequently only confirms this bias. This means face recognition will shift the burden onto defendants to show they are not who the system says they are.
Despite these known challenges, federal and state agencies have for years failed to be transparent about their use of face recognition. For example, the public had no idea how many images were accessible to the FBI’s FACE Services Unit until Government Accountability Office reports from 2016 and 2019 revealed the Bureau can access more than 641 million images—most of which were taken for non-criminal reasons like obtaining a driver’s license or a passport.
State agencies have been just as intransigent in providing information on their face recognition systems. EFF partnered with the Georgetown Center on Privacy and Technology to survey which states currently use face recognition and with whom they share their data, a project we call “Who Has Your Face.” Many states, including Connecticut, Louisiana, Kentucky, and Alabama, failed or refused to respond to our public records requests. And other states, like Idaho and Oklahoma, told us they did not use face recognition, but other sources, such as the GAO reports and records from the American Association of Motor Vehicle Administrators (AAMVA), appear to contradict this.
Law enforcement officers have also hidden their partnerships with private companies from the public. Earlier this year, the public learned that a company called Clearview AI had been actively marketing its face recognition technology to law enforcement and claimed that more than 1,000 agencies around the country had used its services. But until the middle of January, most of the general public had never even heard of the company. Even the New Jersey Attorney General was surprised to learn—after reading the New York Times article that broke the story—that officers in his own state were using the technology and that Clearview was using his image to sell its services to other agencies.
Unfortunately, the police have been just as tight-lipped with defendants and defense attorneys about their use of face recognition. For example, in Florida, law enforcement officers have used face recognition to try to identify suspects for almost 20 years, conducting up to 8,000 searches per month. However, Florida defense attorneys are almost never told that face recognition was used in their clients’ cases. This infringes on defendants’ constitutional due process right to challenge the evidence brought against them.
Without transparency, accountability, and proper security protocols in place, face recognition systems will be subject to misuse. For example, the Baltimore Police used face recognition and social media to identify and arrest people in the protests following Freddie Gray’s death. And Clearview AI used its own face recognition technology to monitor a journalist and encouraged police officers to use it to identify family and friends.
Americans should not be forced to submit to criminal face recognition searches merely because they want to drive a car. And they shouldn’t have to fear that their every move will be tracked if the networks of surveillance cameras that already blanket many cities are linked to face recognition.
But without meaningful restrictions on face recognition, this is where we may be headed. Without protections, it could be relatively easy for governments to amass databases of images of all Americans—or work with a shady company like Clearview AI to do it for them—and then use those databases to identify and track people as they go about their daily lives.
In response to these challenges, I encourage this Commission to do two things: first, conduct a thorough nationwide study of current and proposed law enforcement practices with regard to face recognition at the federal, state, and local levels; and second, develop model policies that will meaningfully restrict law enforcement agencies’ access to and use of this technology. Once completed, both of these should be made easily available to the general public.
Thank you once again for the invitation to testify. My written testimony, a white paper I wrote on law enforcement use of face recognition, provides additional information and recommendations. I am happy to respond to questions.
The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. We work to ensure that rights and freedoms are enhanced and protected as our use of technology grows. Visit https://www.eff.org