When the makers of the 1994 movie Forrest Gump inserted Tom Hanks into an old film clip to make it appear the actor was shaking hands with Richard Nixon, their handiwork was considered cutting edge. Now it’s easier to doctor video. The popular Chinese app Zao allows users to easily insert faces into movie and television clips.
These sorts of artificial intelligence-generated videos, known as “deepfakes,” are worrisome. Imagine the faces of politicians, activists, or journalists superimposed into, say, a porn video. Or consider how a bully could use the technology to torment schoolmates.
For now, tells like disappearing facial features or audio glitches make deepfakes relatively easy to spot. But the technology is continually improving. Facebook CTO Mike Schroepfer worries that artificial intelligence experts are spending too much time perfecting deepfakes and not enough time finding ways to detect them.
Thursday, Facebook, Microsoft, the Partnership on AI coalition, and academics from seven universities launched a contest to encourage better ways of detecting deepfakes. Organizers of the “Deepfake Detection Challenge” did not specify prizes. The contest will run from late 2019 until spring of 2020.
After being screened, participants will get access to a collection of deepfake videos that Facebook plans to release by December. The videos will feature professional actors who have consented to have their faces used in deepfakes, though Schroepfer says the dataset will resemble real Facebook videos as much as possible. No Facebook user data will be used, and you won’t have to use Facebook’s videos to participate in the contest.
Contests are a common way to encourage researchers and hobbyists to find solutions to difficult computer science problems. Netflix famously held a competition to build a better movie recommendation engine, and the website Kaggle hosts data science competitions for a wide range of companies and organizations.
One big challenge for deepfake detection is that, as with all AI work, researchers need numerous examples of deepfakes to “train” a system to spot doctored videos. It’s relatively easy, Schroepfer says, to spot a deepfake when a system has already “seen” the original video. For example, if a system is already familiar with the original film clip used in Forrest Gump, it can tell that something has changed. But if a malicious actor records an original video and doctors it, the fake is much harder for an AI system to spot. The Facebook-created deepfakes are intended to help solve that problem.
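To make the training-data point concrete, here is a minimal sketch (not Facebook's or the contest's actual method) of the kind of supervised setup a labeled deepfake dataset enables: a frame-level classifier that learns to separate "real" from "fake" video frames. The directory layout, hyperparameters, and the choice of a ResNet-18 backbone are illustrative assumptions.

```python
# Minimal sketch: train a binary real-vs-deepfake classifier on video frames.
# Assumes frames have been extracted into frames/train/real/ and
# frames/train/fake/ (hypothetical layout). Requires torch and torchvision.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder labels each frame by its subfolder name (real / fake).
train_set = datasets.ImageFolder("frames/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the final layer
# with a two-class head: real vs. deepfake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for frames, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(frames), labels)
        loss.backward()
        optimizer.step()
```

A per-frame classifier like this is only a starting point; real detection systems also weigh temporal and audio cues across a whole clip, which is part of why large, varied example sets matter.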
Schroepfer says Facebook already implements all the detection methods it knows about, but hopes the contest will generate novel ways to spot deepfakes. The idea isn’t to create a system that will stop all deepfakes forever, but to find ways to make it harder and more expensive to create passable deepfakes.