Ongoing tests of facial recognition technology continue to show that the technology is baffled when people wear masks of the sort that have become widespread (and even mandatory) in some places during the current pandemic. Forty-one newly tested algorithms—some of which were designed to compensate for face coverings—show the same dramatically elevated error rates as those examined earlier.
The tests have important implications for privacy at a time when surveillance technology is growing increasingly pervasive—but so is mask wearing. These studies are of interest, too, in an era of political instability and growing concern over law enforcement excesses, when people may have a strong interest in making identification of opponents and protesters difficult for the powers-that-be.
The tested facial algorithms are additions to those scrutinized by the U.S. government’s National Institute of Standards and Technology (NIST) in a report issued in July. “Now that so many of us are covering our faces to help reduce the spread of COVID-19, how well do face recognition algorithms identify people wearing masks? The answer, according to a preliminary study by the National Institute of Standards and Technology (NIST), is with great difficulty,” NIST summarized its findings at the time. “Even the best of the 89 commercial facial recognition algorithms tested had error rates between 5% and 50% in matching digitally applied face masks with photos of the same person without a mask.”
While an error rate of 5 percent at the low end may not sound like much, that's under near-ideal conditions. The algorithms were tested in one-to-one settings with known subjects, as you might find at a passport checkpoint. And most of the systems suffered much higher error rates when dealing with covered faces.
Because of the growing popularity of face masks even before the pandemic, and their booming and often mandated usage since COVID-19 spread worldwide, facial recognition companies have raced to develop algorithms that can identify people despite coverings. NIST plans to test such technology in the future to see if it delivers as promised. But this latest round of algorithms isn’t part of that study.
“These algorithms were submitted to NIST after the pandemic began,” Chad Boutin, a science writer for NIST, told me by email. “However we do not have information on whether or not they were designed with face coverings in mind. The research team plans to analyze the data and issue its next [Face Recognition Vendor Test] report in the next few months, and will continue to report results on new submissions on the face mask webpage.”
But some of the 41 newly examined algorithms are very clearly intended to compensate for face mask usage.
Dahua, a Chinese company, boasts that its facial recognition technology allows for “attributes including gender, age, expression (happy, calm, surprised, sad, and angry), glasses, face masks, and beard & moustache, which makes searching and tracking subjects of interest more efficient.” In the NIST test, the error rate of Dahua’s algorithm went from 0.3 percent with an uncovered face to 7 percent with a face mask.
Likewise, Rank One insists that “accurate identification can be achieved using solely the eye and eyebrow regions of the face.” The company’s error rate went from 7 percent without masks to 35 percent with them in the NIST study.
Vigilant Solutions, which is well known for its vast license plate reader network but makes no claims about compensating for covered faces, went from a 2 percent error rate without masks to 50 percent with them.
Some of the products had error rates approaching 100 percent with covered faces, although these were generally less-accurate algorithms to begin with.
Again, these are facial recognition algorithms tested in one-to-one settings of the sort used to confirm an identity to unlock a phone or at an access control point. “Future study rounds will test one-to-many searches and add other variations designed to broaden the results further,” according to NIST.
Even before that future study, however, we know that one-to-many comparison of strangers on the street or in a crowd to databases of images is much more challenging. It can be thrown off by many factors—including age, sex, and race—even when people’s faces are uncovered.
Such “systems tend to have lower accuracies compared to verification systems, because it is harder for fixed cameras to take consistent, high-quality images of individuals moving freely through public spaces,” noted William Crumpler of the Center for Strategic and International Studies earlier this year.
Getting high-quality images is especially difficult when people are actively trying to avoid identification. And while such avoidance is difficult to pull off at a passport checkpoint or when trying to unlock a device, resisting identification is par for the course at a protest or on a sidewalk. Last year, police in the U.K. stopped pedestrians who covered their faces when they approached facial recognition cameras precisely because that was seen as an effort to thwart identification.
Now, governments are simultaneously ordering the public to mask up in public places under threat of stiff fines and fretting over the resulting impact on surveillance. “We assess the widespread use of masks for public safety could likely continue to impact the effectiveness of face recognition systems even after federal or state mandates for their use are withdrawn,” a Department of Homeland Security notice warned in May.
Given that the algorithms designed to compensate for face masks necessarily rely on the remaining exposed portions of the face—specifically, the eyes and eyebrows—donning hats and sunglasses may be all that’s necessary to curtail the effectiveness of facial recognition technology.
The technology has raised enough privacy concerns that there has been enormous pushback against its deployment. Boston and San Francisco are the largest of the U.S. cities that have banned the use of facial recognition technology by law enforcement agencies. Early in August, in a decision that could have wide ramifications for the U.K.’s growing surveillance state, a British court ruled against the use of the technology by the police.
The news that facial recognition technology can be defeated by cheap and ubiquitous face masks may well come as good news to Americans in the streets protesting biased and abusive law enforcement, or just in favor of reforming the way police do their jobs. Such news may also be welcomed in a country bitterly divided into hostile political factions. Half of the country is bound to distrust surveillance technology in the hands of whoever wins the November election.
That means large numbers of Americans should be pleased to know that they have a good chance of preserving their privacy with cheap pieces of fabric stretched across their faces.
Founded in 1968, Reason is the magazine of free minds and free markets. We produce hard-hitting independent journalism on civil liberties, politics, technology, culture, policy, and commerce. Reason exists outside of the left/right echo chamber. Our goal is to deliver fresh, unbiased information and insights to our readers, viewers, and listeners every day. Visit https://reason.com