Even the AI Behind Deepfakes Can’t Save Us From Being Duped

Last week Google released several thousand deepfake videos to help researchers build tools that use artificial intelligence to spot altered videos that could spawn political misinformation, corporate sabotage, or cyberbullying.

Google’s videos could be used to create technology that offers hope of catching deepfakes in much the way spam filters catch email spam. In reality, though, technology will only be part of the solution. That’s because deepfakes will most likely improve faster than detection methods, and because human intelligence and expertise will be needed to identify deceptive videos for the foreseeable future.

Deepfakes have captured the imagination of politicians, the media, and the public. Video manipulation and deception have long been possible, but advances in machine learning have made it easy to automatically capture a person’s likeness and stitch it onto someone else. That’s made it relatively simple to create fake porn, surreal movie mashups, and demos that point to the potential for political sabotage.

There is growing concern that deepfakes could be used to sway voters in the 2020 presidential election. A report published this month by researchers at NYU identified deepfakes as one of eight factors that may contribute to disinformation during next year’s race. A recent survey of legislation found that federal and state lawmakers are mulling around a dozen bills to tackle deepfakes. Virginia has already made it illegal to share nonconsensual deepfake porn; Texas has outlawed deepfakes that interfere with elections.

Tech companies have promoted the idea that machine learning and AI will head off such trouble, starting with simpler forms of misinformation. In his testimony to Congress last October, Mark Zuckerberg promised that AI would help Facebook identify fake news stories. This would involve using algorithms trained to distinguish between accurate and misleading text and images in posts.

The clips released last week, created in collaboration with Jigsaw, an Alphabet subsidiary focused on technology and politics, feature paid actors who agreed to have their faces swapped. The idea is that researchers will use the videos to train software to spot deepfake videos in the wild, and to benchmark the performance of their tools.
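
The mechanics are conceptually simple. As a rough sketch of how such training might look (assuming frames have been extracted from the clips and sorted into "real" and "fake" folders; the directory layout, model choice, and hyperparameters are illustrative assumptions, not Google's or any particular lab's pipeline), a detector can be set up as an ordinary binary image classifier:

```python
# Minimal, hypothetical sketch: fine-tune a pretrained image model to label
# individual video frames as real or face-swapped. Paths and settings are
# placeholders, not any research group's actual pipeline.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects frames extracted from the dataset, arranged as frames/train/real and frames/train/fake.
train_data = datasets.ImageFolder("frames/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Reuse a pretrained backbone; replace its head with a two-way real/fake classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```

Benchmarking, in this framing, means running the trained model on a held-out set of clips and reporting how often it labels them correctly.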

The clips show people doing mundane tasks: laughing or scowling into the camera; walking aimlessly down corridors; hugging awkwardly. The face-swapping ranges from convincing to easy-to-spot. Many of the faces in the clips seem ill-fitting, or melt or glitch in ways that betray digital trickery.

WIRED downloaded many of the clips and shared them with several experts. Some say that deepfakery has progressed beyond the techniques used by Google to make some of the videos.

“The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated,” says Hany Farid, a digital forensics expert at UC Berkeley who is working on deepfakes. “Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content.”

Google says it created videos that range in quality to improve training of detection algorithms. Henry Ajder, a researcher at a UK company called Deeptrace Lab, which is collecting deepfakes and building its own detection technology, agrees that it is useful to have both good and poor deepfakes for training. Google also said in the blog post announcing the video dataset that it would add deepfakes over time to account for advances in the technology.

The amount of effort being put into the development of deepfake detectors might seem to signal that a solution is on the way. Researchers are working on automated techniques for spotting videos forged by hand as well as those made with AI. These detection tools increasingly rely, like deepfakes themselves, on machine learning and large amounts of training data. Darpa, the research arm of the Defense Department, runs a program that funds researchers working on automated forgery detection tools; it is increasingly focused on deepfakes.

Much more deepfake training data should soon be available. Facebook and Microsoft are building another, larger dataset of deepfake videos, which the companies plan to release to AI researchers at a conference in December.

Sam Gregory, program director for Witness, a project that trains activists to use video evidence to expose wrongdoing, says the new deepfake videos will be useful to academic researchers. But he also warns that deepfakes shared in the wild are always likely to be more challenging to spot automatically, given how they may be compressed or remixed in ways that may trick even a well-trained detector.
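
One way researchers try to account for that gap, offered here only as a sketch rather than any particular team's method, is to degrade training frames the way sharing platforms do, for instance by re-compressing them at random quality levels before they reach the model:

```python
# Hypothetical augmentation helper: round-trip a frame through JPEG at a random
# quality setting to mimic the recompression a clip suffers when shared online.
import io
import random
from PIL import Image

def recompress(frame: Image.Image) -> Image.Image:
    """Simulate lossy sharing by saving and reloading the frame as JPEG."""
    buffer = io.BytesIO()
    quality = random.randint(30, 90)  # from heavy to mild compression
    frame.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).copy()
```

A detector trained on frames passed through this kind of augmentation is less likely to be thrown off by compression artifacts alone, though remixing and cropping pose harder problems.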

As deepfakes improve, Gregory and others say it will be necessary for humans to investigate the origins of a video or inconsistencies—a shadow out of place or the incorrect weather for a particular location—that may be imperceptible to an algorithm.

“There is a future for [automated] detection as a partial solution,” Gregory says. He believes that technical solutions could help alert users and the media to deepfakes, but adds that people need to become more savvy about new possibilities for deception.

Videos can, of course, also be manipulated to deceive without the use of AI. A report published last month by Data & Society, a nonprofit research group, notes that video manipulation already goes well beyond deepfakery. Simple modifications and edits can be just as effective in misleading people, and are harder to spot using automated tools. A recent example is the video of Nancy Pelosi that was slowed down to make it appear as if she were slurring her words.

Britt Paris, an assistant professor at Rutgers and coauthor of the Data & Society report, says the fact that Google and Facebook are releasing deepfake datasets shows they are struggling to develop technical solutions themselves.

The companies now interested in finding solutions to AI-enabled fakery have also been pushing the frontiers of the technology for their own commercial purposes. Last year Google revealed Duplex, a system that uses realistic synthetic speech to make automated calls to restaurants and stores. In January, Google released a dataset of fake speech synthesized using AI to “advance state-of-the-art research on fake audio detection.”

“Google and Facebook are outsourcing the problem,” says Paris. “It is likely clear to these tech companies that if they don’t appear as though they are trying to solve the problem, bipartisan regulation and policy might beat them to it in ways that hurt their bottom line.”

Addressing deepfakes, fake news, and misinformation will, however, involve more than training data and AI algorithms. “As forensic techniques advance, they will be used as a triage—filtering the billions of daily uploads into a more manageable number that will then require human moderators,” says Farid of UC Berkeley. “The platforms are going to have to get more serious about both the technology and human side of this effort.”
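
In code, the triage Farid describes could be as simple as the sketch below, where score_video stands in for whatever forensic model a platform runs; the function name, threshold, and file names are hypothetical placeholders:

```python
# Hypothetical triage step: score every upload with an automated detector and
# queue only the most suspicious clips for human moderators.
from typing import Callable, Iterable

def triage(videos: Iterable[str],
           score_video: Callable[[str], float],
           threshold: float = 0.8) -> list[str]:
    """Return the uploads whose suspicion score warrants human review."""
    flagged = []
    for path in videos:
        suspicion = score_video(path)  # e.g. a model's probability that the clip is fake
        if suspicion >= threshold:
            flagged.append(path)
    return flagged

# With a dummy scorer, only the high-suspicion clip is surfaced for review.
queue = triage(["clip_a.mp4", "clip_b.mp4"],
               score_video=lambda path: 0.95 if path == "clip_b.mp4" else 0.1)
print(queue)  # ['clip_b.mp4']
```

The hard part is not the filter itself but building a scoring model reliable enough that the review queue stays manageable without letting convincing fakes slip through.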

