On the Amazon panopticon

Last year, “Amazon employees met with ICE officials … to market the company’s facial recognition technology,” the ACLU informs us. Amazon VP Brad Huseman later said “We believe the government should have the best available technology.” Then, last month, Motherboard revealed that Amazon has partnered with police departments around the country to create “a self-perpetuating surveillance network” of Ring products.

Allow me to be the umpteenth to say: what the hell, Amazon?

Amazon shareholders, tech employees, warehouse employees, and customers are all protesting this marketing of Rekognition to ICE, as well as the services Amazon provides to the infamous Palantir. More than 500 Amazon tech employees, in particular, have signed a letter of protest — but Amazon’s leadership does not yet seem willing to engage with them in good faith.

Instead, Amazon has defended itself with a “Facts on Facial Recognition with Artificial Intelligence” page, in which they seem to think the only possible problem with their technology is the possibility of false positives, and offer halfhearted half-measures such as “In all public safety and law enforcement scenarios, technology like Amazon Rekognition should only be used to narrow the field of potential matches … facial recognition software should not be used autonomously.”

The technical concerns are real enough, as shown by Orlando’s cancellation of their pilot Rekognition program. But I’m tired of tech companies acting as if they have no responsibility to the public beyond fixing their bugs and getting their tech working as intended. Sometimes the intent itself is the problem.

“I feel that society develops an immune response eventually to the bad uses of new technology, but it takes time,” Jeff Bezos has said. Which is true as far as it goes. But a corollary is that, in the interim, while society hasn’t developed immune responses, we should be especially cautious about abuses. Another is that the world’s wealthiest man should not abdicate his own nontrivial part in optimizing society’s responses to new technologies.

The question is not whether Rekognition’s technical problems are solvable. The question is whether marketing it to governments and law enforcement in order to enable ubiquitous panopticon surveillance is good for any society in the world. It’s dangerously intellectually lazy to say “if it’s currently legal it must be fine” or “the institutions of democracy will protect us from harm, therefore as a tech maven I don’t need to think or worry about any consequences.”

In reality, the law is extremely slow to react to new technologies, and our institutions are increasingly sclerotic and paralyzed — as Silicon Valley will be all too eager to tell you in other contexts. Relying on them for our “immune response” is willful negligence. Yes, technology, like fire, can be used for both good and bad; but we are rightfully far more cautious about fire in tinderbox conditions than during the rainy season, and we adjust our risk assessment accordingly. The unwillingness of tech companies to accept any responsibility for the risks they create is beyond worrying.

As I’ve said before, the only real, or at least real-time, check on tech companies is their own employees. So it’s heartening to see AWS employees push back against company policies — and worrying to see Amazon refuse to engage with them in good faith. The world expects better of Bezos and Amazon than dodging important questions about the risks of their technologies while passing them off as someone else’s department.

Facebook provides another cautionary tale. Hard as it may be to believe now, not all that long ago, they were widely respected, trusted, even beloved. A backlash against companies like Amazon and Facebook seems at first like a few minor cavils from an extremist fringe … but sometimes the pebbles of complaint suddenly accumulate into a landslide of contempt. Let’s hope Amazon sees the light before the techlash turns yet another erstwhile hero into a thoroughly modern villain.