Facebook’s Deepfake Ban Is a Solution to a Distant Problem

Wait—did Facebook just address a problem before it became a colossal nightmare?

That’s one way of looking at the company’s new policy on deepfake videos. In a blog post published Monday, Monika Bickert, Facebook’s vice president of global policy management, announced that deepfakes will join nudity, hate speech, and graphic violence on the company’s list of banned content. Over the last few years, Facebook has developed a reputation for reacting to problems only after they publicly blow up in Mark Zuckerberg’s face—whether it’s the spread of hate speech, Russian influence campaigns, or data breaches. So it’s notable that the company is taking a strong position on deepfakes before the technology has gotten sophisticated enough to create truly convincing fake videos of real people—a potentially apocalyptic scenario for humanity’s ability to tell truth from falsehood.

But there’s another way to look at the policy, which is that it doesn’t do very much to address the types of misleading videos that are already much more prevalent on the platform. (The vast majority of deepfakes currently involve crafting pornographic videos using real women’s faces—a terrible problem, but one already covered by Facebook’s ban on porn.) To run afoul of the policy, a video has to meet two criteria: it must be manipulated “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say,” and it must be “the product of artificial intelligence or machine learning.” What this leaves out, of course, is so-called “shallow fake” or “cheap fake” videos—the kind of selectively edited or out-of-context snippets that humans are already adept at creating and spreading. Bickert’s blog post explicitly says that the policy doesn’t apply to video “that has been edited solely to omit or change the order of words”—even though that can be just as misleading as a total fake. Last week, for example, a video made the rounds in which Joe Biden said, in an apparently xenophobic spirit, “Our culture is not imported from some African nation or some Asian nation.” In fact, the full quote in context made clear that Biden meant Americans have only themselves to blame for not taking sexual violence seriously enough.
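To make the two-pronged test concrete, here is a minimal sketch in Python. The function and its inputs are hypothetical illustrations of the quoted criteria, not anything from Facebook’s actual systems:

```python
# Hypothetical sketch of the policy's two-pronged test. Both prongs must
# be true for a video to fall under the ban; nothing here reflects real
# Facebook moderation code.

def banned_under_deepfake_policy(
    misleads_average_person: bool,  # prong 1: edits aren't apparent and would
                                    # mislead about what the subject said
    made_with_ai: bool,             # prong 2: product of AI / machine learning
) -> bool:
    return misleads_average_person and made_with_ai

# A selectively edited "shallow fake" misleads, but no AI was involved,
# so it falls outside the ban:
print(banned_under_deepfake_policy(misleads_average_person=True,
                                   made_with_ai=False))  # False
```

Because the ban requires both prongs, a video that misleads but was cut together by hand falls through to the fact-checking program instead.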

“I think the new ban on AI-driven deepfakes is a step in the right direction, but it’s disappointing that Facebook’s new policy apparently won’t result in the removal of provably false videos doctored with less advanced means,” said Paul Barrett, the deputy director of NYU’s Center for Business and Human Rights and an expert on political disinformation. He pointed to high-profile examples like last year’s viral video doctored to make Nancy Pelosi appear to be slurring her speech. Facebook emphasizes that this type of content is subject to its third-party fact-checking program, and that when a video is flagged as false or misleading, users must click through a prominent disclaimer before viewing or sharing it. But Facebook won’t take the post down.

It’s not hard to see why the company would be more comfortable banning deepfakes than more old-school forms of misleading video. A system that can rely on automated software to sniff out the presence of AI-enabled manipulation doesn’t depend as heavily on human determinations of what’s true and what isn’t. On the other hand, the policy exempts parody and satire, which would seem to require precisely the kind of interpretive judgment that the company abjures to the point of outsourcing fact-checking to third parties.
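That distinction is easier to see in code. Below is a rough sketch of what automated frame-by-frame screening might look like. It assumes a trained binary classifier; the untrained resnet18 here is just a stand-in, and nothing in this sketch reflects Facebook’s actual detection systems:

```python
# Sketch of automated deepfake screening: classify sampled frames, then
# average the per-frame scores. Assumes a trained detector; the untrained
# resnet18 below is a stand-in, not a real deepfake model.
import cv2                              # pip install opencv-python
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet18(num_classes=2)  # stand-in: [real, fake] classifier
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

def fake_probability(video_path: str, every_nth: int = 30) -> float:
    """Average the per-frame 'fake' score over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())  # probability of "fake"
        i += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```

A score like this can be thresholded automatically, with no human ruling on what’s true; deciding whether a clip is parody cannot.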

Beyond convenience, there is a certain logic to banning deepfakes while relying on fact-checking for old-school video manipulation. “With a shallow fake, you can release or link to the authentic content,” said Renée DiResta, a disinformation researcher (and a WIRED contributor). “With a deepfake, because it’s made of whole cloth, there is no counter or clarifying content that you can point someone to.” But, DiResta added, the reliance on fact-checking overlooks the problem of misleading or false content going viral long before it can be debunked. “One of the key complaints has been that it’s too slow—that the thing has already gone viral long before it’s been fact checked; most people don’t see the fact check; and most people, at this point, if it’s political, will disbelieve the fact check because the fact checking organization will be accused of being partisan.” The infamous Pelosi video, for instance, was viewed millions of times before Facebook got around to labeling it false.

“I think it’s a good policy,” said Sam Gregory, program director at the human rights nonprofit WITNESS, which was one of several groups that gave Facebook feedback on how to handle deepfakes. “I think it’s really important that the platforms state clearly how they’re going to handle deepfakes before they’re a widespread problem.” But, he added, “this policy is not applying to the vast majority of existing visual misinformation and disinformation.” Especially outside the US, he explained, that means material that is either doctored or intentionally mislabeled—as when years-old videos from around the world are circulated on WhatsApp (a Facebook subsidiary) in India, passed off as footage of anti-Hindu violence to stoke hatred against Muslims. Dealing with that kind of disinformation, Gregory said, is more complicated, and will require both better policies and better tools, like reverse video searches, to help users and journalists debunk hoaxes more quickly.
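The reverse video searches Gregory mentions are technically within reach. One common approach is to perceptually hash sampled frames and match them against previously indexed footage; here is a hedged sketch using the real imagehash and OpenCV libraries, with the index of known footage invented for illustration:

```python
# Sketch of reverse video search via perceptual hashing: hash sampled
# frames and compare them against hashes of previously indexed footage.
# The known_hashes index is invented for illustration.
import cv2                          # pip install opencv-python
import imagehash                    # pip install imagehash
from PIL import Image

def frame_hashes(video_path: str, every_nth: int = 30):
    """Perceptual hashes of every Nth frame of a video."""
    cap = cv2.VideoCapture(video_path)
    hashes, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes

def matches_known_footage(video_path, known_hashes, max_distance=8):
    """True if enough frames are near-duplicates of indexed footage."""
    hits = sum(
        1 for h in frame_hashes(video_path)
        if any(h - k <= max_distance for k in known_hashes)  # Hamming distance
    )
    return hits >= 3  # arbitrary threshold for this sketch
```

Because perceptual hashes tolerate recompression and resizing, the same years-old clip can be recognized even after it has been re-uploaded dozens of times.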

When it comes to deepfakes, it remains to be seen whether Facebook’s new ban will be up to the challenge. It’s a question not just of whether the platform can reliably detect AI-generated fake videos—the company is currently running a contest, along with Microsoft and academic institutions, to encourage researchers to come up with better detection methods—but also of whether users will trust its explanations of why certain content gets removed. Facebook’s enforcement of content restrictions already draws accusations of censorship and “shadowbanning,” particularly from conservatives who see a Silicon Valley liberal bias at work, despite the company’s denials and the lack of evidence that politics, rather than rule-breaking behavior, drives enforcement. There’s little reason to expect a deepfake ban to play out any differently, especially when politics is involved.

And politics is ultimately the rub, at least when it comes to misleading video in the US. It’s probably not a coincidence that Bickert’s blog post appeared right before she was set to appear at a congressional hearing on online manipulation and deception. As the 2020 election looms just 10 probably interminable months away, it’s hard to find anyone across the political spectrum who is satisfied that Facebook will play a benign role in the democratic process. With its announcement, the company seems to be trying to convince Washington, and the country, that it’s up to the task ahead. But while it deserves credit for coming up with a plan to address tomorrow’s disinformation threats, the country is still waiting for evidence that Facebook can solve the problems that have already arrived.

