Poll Finds Americans Trust Police Use of Facial Recognition

In May, San Francisco banned city agencies like the police and sheriff’s departments from using facial recognition technologies over concerns they would be “coercive and oppressive.” The cities of Oakland and Somerville, Massachusetts, followed suit this summer. Congress, too, is considering action.

Despite the concerns, a new poll finds that a majority of Americans—56 percent—trust law enforcement to use facial recognition responsibly. The poll, conducted by the Pew Research Center, also found that 73 percent of respondents believe the tools can accurately identify people—a view at odds with some studies of the technology’s accuracy.

The findings are surprising given recent controversies involving facial recognition. In May, Georgetown University published a report showing the New York Police Department manipulated its facial recognition system by replacing an image of a suspect that was too blurry with an image of the actor Woody Harrelson. In April, a student at Brown University was incorrectly identified as one of the Sri Lankan Easter bombers. Aaron Smith, director of data labs at Pew, says the findings are consistent with historical trends showing Americans generally trust police. “Americans are generally willing to trade off digital privacy and civil liberties when those issues are framed in the context of general public safety or preventing things like terrorist attacks,” he says.

Support for law enforcement use of facial recognition technology varied greatly by race and age. About 60 percent of whites said they trust law enforcement with the technology, but only 43 percent of black respondents did. Similarly, 67 percent of Americans over 65 trust law enforcement with the technology, as opposed to 49 percent of Americans ages 18 to 29. Registered Republicans responded more positively than registered Democrats.

However, the poll found Americans are less trusting of other uses of facial recognition technologies. Only about a third of respondents trust technology companies with facial recognition, and only 18 percent trust advertisers. Smith says the survey didn’t probe for the cause of those disparities, but he finds them surprising. He points out that the negative consequences of police incorrectly labeling someone as a possible suspect “are potentially much more profound than, say, an advertiser misidentifying you as someone else.” The results are another sign of the backlash against big technology companies, which includes multiple government investigations.

Matt Cagle, a lawyer at the ACLU of Northern California, which advocated for the San Francisco ban on facial recognition, says the survey should have included more context about how the technology would be used. When the ACLU conducted its own poll in March of California voters’ opinions on facial recognition, it asked more pointed questions, such as whether the government “should be able to monitor and track who you are and where you go using your biometric information.” In that survey, 82 percent of respondents—across age, region, and political affiliation—disagreed. “Once people know how invasive this technology is, once people know the flaws in many of these products, they reject it more than they initially did when they first heard about it,” Cagle says.

In its national survey, Pew found that nearly 75 percent of Americans believe facial recognition technologies are “at least somewhat effective” at correctly identifying individual people and just over 60 percent of Americans think the tools can effectively assess someone’s gender or race. While many news reports have pointed out the limits of these technologies, those messages haven’t necessarily affected public opinion. “The more people tell us they’ve heard about facial recognition, the better they think it is at doing its job,” says Smith.

Better algorithms and more diverse data sets have improved facial recognition significantly in recent years. In August, the New York Police Department successfully used its facial recognition technology to track down a man who allegedly left two rice cookers in a lower Manhattan subway station, prompting a bomb scare.

But numerous studies reveal these tools are far from perfect, particularly when it comes to identifying women and people of color. Software from France’s Idemia, used by police in the US, Britain, and Australia, was 10 times more likely to mix up black women’s faces than white women’s in US government tests. In 2018, researchers at MIT showed that programs developed by IBM and Microsoft were almost perfect at tagging pale-skinned men with the correct gender. With dark-skinned women, the same programs had a 20 percent error rate.

In a blog post, Pew noted that the systems are not only flawed but can be indecipherable. The machine learning algorithms at their cores are so sophisticated that they often arrive at puzzling conclusions that can’t be explained, even by the data scientists who design them.

Gretchen Greene, an AI policy adviser at Harvard’s Berkman Klein Center for Internet and Society, says not being able to explain a system’s decisions could threaten crucial rights. “In the US we have guarantees around when the police are allowed to search or come into your house or look into your car,” she says. We also have rights that protect us against excessive use of force by police officers. But what if a facial recognition system wrongly identifies someone as a suspect who is armed and dangerous and the police respond with lethal force? What if a system misidentifies someone for a search? Without a way to audit the system and see how it arrived at that decision, Greene says, “you don’t meaningfully have that right.”

Facial recognition also doesn’t work alone. It fits into a broader web of surveillance systems, from geo-tracking on cell phones to security cameras in department stores. “You could create an ever-present collection of information that I hope doesn’t happen,” says Greene, who worries that kind of intense surveillance could reach beyond crime. It could stop people from participating in political groups, meeting with certain people, or practicing their religion, all of which are constitutionally guaranteed rights. “I think we should think hard about the tradeoff we want to make between security and other things like privacy,” says Greene. At the very least, if we do decide to make that tradeoff, citizens should be aware of what they stand to gain, and what they might lose.

