There will soon be a new bobby on the beat in London: artificial intelligence.
London’s Metropolitan Police said Friday that it will use facial recognition technology to find wanted criminals and missing persons. It said the technology will be deployed at “specific locations,” each with a “bespoke watch list” of wanted people, mostly violent offenders. A spokesperson was unable to say, however, how many facial recognition systems will be used, where, or how frequently.
The Met said use of the technology would be publicized beforehand and marked by signs on site. It said the technology would not be connected to the city’s existing surveillance cameras or to cameras worn by police officers.
The announcement signals one of the West’s most significant police deployments of facial recognition, by one of the largest police forces in the democratic world. The Met has more than 30,000 officers covering the 32 boroughs of Greater London. The UK capital is already one of the most surveilled cities in the world, with some 627,000 cameras, according to CCTV.co.uk, most of them privately owned.
Seeing a major police force in a Western democracy embrace facial recognition may only embolden more repressive regimes elsewhere in the world, says Silkie Carlo, director of Big Brother Watch in the UK. She says activists in Hong Kong, Russia, or South America may find it more difficult to push back against expanded use of facial recognition as a result.
Carlo says facial recognition is already being used in dubious ways in the UK, for example to monitor the Notting Hill Carnival, a major event in British Black culture. “We talk about mission creep, but the mission is already well out of control,” she says. “It is inherently an authoritarian tool, and it lends itself to abuses of power.”
The Metropolitan Police assistant commissioner, Nick Ephgrave, said in a statement that the technology would help London police combat violent crime. “As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London,” Ephgrave said.
Civil liberties groups criticized the move. “As a highly intrusive surveillance technique, [facial recognition] can provide authorities with new opportunities to undermine democracy under the cloak of defending it,” the London-based watchdog Privacy International said in a statement.
Facial recognition has advanced rapidly in recent years, thanks to progress in machine learning. With sufficient high-quality training data and processing power, computers can now pick faces out of a crowd with high accuracy. But the technology has proven controversial, partly because of its capacity to invade people’s privacy, and partly because, without diverse training data, it can work better on some groups of people (usually white men) than on others.
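To make the matching step concrete: modern systems typically convert each face into a numerical “embedding” and compare it against precomputed embeddings for everyone on a watch list. The sketch below illustrates that general idea only; it is not the Met’s system, and the embedding model, names, and threshold are hypothetical stand-ins.

```python
import numpy as np

# Illustrative sketch of embedding-based face matching. In a real
# system, a trained neural network maps each face image to a vector;
# here we only show the comparison logic against a watch list.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Return the best watch-list match above threshold, else None.

    watchlist: list of (name, embedding) pairs computed offline.
    The threshold is illustrative, not any force's operating point.
    """
    best_name, best_score = None, threshold
    for name, ref_embedding in watchlist:
        score = cosine_similarity(face_embedding, ref_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy demo with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
watchlist = [(f"suspect_{i}", rng.normal(size=128)) for i in range(3)]
probe = watchlist[1][1] + rng.normal(scale=0.1, size=128)  # noisy re-sighting
print(match_against_watchlist(probe, watchlist))  # likely "suspect_1"
```

Everything hinges on the threshold: set it low and more wanted people are caught but more bystanders are wrongly flagged; set it high and the reverse. That trade-off is at the heart of the accuracy disputes described below.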
Several US cities, including San Francisco and Oakland in California and Somerville in Massachusetts, have banned official use of facial recognition. Meanwhile, in authoritarian countries, facial recognition is rapidly becoming a routine tool of policing and government control.
Allowing police to scan the faces of innocent bystanders in search of criminals challenges a person’s expectation of privacy in public spaces. And it is hard to imagine that the technology would not eventually spread to the dragnet of cameras that blankets London’s streets.
The Met’s move also comes as UK experts question the reliability and legality of the technology on the basis of earlier trials, and even as the government’s own privacy watchdog argues that more oversight is required.
The Metropolitan Police has previously tested facial recognition in 10 locations, including London’s West End. Officials said that in these trials, 70 percent of wanted suspects were spotted, while only one in 1,000 people was incorrectly flagged as a person of interest. But even these limited experiments have been controversial.
Researchers at the University of Essex who were given access to the trials offered a more critical assessment in a report produced in July 2019. Across six trials in which 42 people were flagged as matches, they found that “in only eight of those matches can the report authors say with absolute confidence the technology got it right.”
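The Met’s headline figures and the Essex finding are easier to reconcile with a little base-rate arithmetic: when nearly everyone scanned is not on the watch list, even a 1-in-1,000 false-flag rate can swamp the true matches. In the sketch below, only the two rates come from the Met’s claims; the crowd size and number of wanted people present are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope base-rate arithmetic (illustrative scenario,
# not the Met's actual deployment figures).

people_scanned = 10_000       # assumed crowd size at one deployment
wanted_present = 5            # assumed watch-listed people in that crowd
true_positive_rate = 0.70     # Met's claimed detection rate
false_positive_rate = 1 / 1000  # Met's claimed false-flag rate

true_alerts = wanted_present * true_positive_rate                        # 3.5
false_alerts = (people_scanned - wanted_present) * false_positive_rate  # ~10.0

precision = true_alerts / (true_alerts + false_alerts)
print(f"Share of alerts that are correct: {precision:.0%}")  # ~26%
```

On those assumptions, roughly three in four alerts would point at the wrong person, the same order of precision the Essex researchers reported: 8 confident matches out of 42, or about 19 percent.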