Mark Zuckerberg didn’t mince words on a call with reporters Monday: “The bottom line here is that elections have changed significantly since 2016, and Facebook has changed too.”
It’s true: the days of Zuckerberg arguing that filter bubbles are worse in the real world than on Facebook, or dismissing the notion that social media could influence the way people vote as a “pretty crazy idea,” are long gone. Facebook, he said, has gone from being “on our back foot” to proactively seeking out threats and fighting coordinated influence operations ahead of the 2020 US presidential election.
As proof, he pointed to the slew of new efforts the company announced Monday to combat election interference and the spread of disinformation, describing the initiatives as one of his “top priorities.” But critics say he’s missing the point.
Disinformation and media manipulation researchers say Facebook’s announcements Monday left them frustrated and concerned about 2020. Though the policy updates show that Facebook understands that misinformation is a serious problem that can no longer be ignored, that message was undercut by the company’s reluctance to fully apply its own rules, particularly to politicians. What’s more, they say the new election integrity measures are riddled with loopholes and still fail to get at many of the most pressing issues they had hoped Facebook would address by this time.
“All of the tactics that were in play in 2016 are pretty much still out there,” says Joan Donovan, head of the Technology and Social Change Research Project at the Harvard Kennedy School’s Shorenstein Center.
Among the features announced Monday were new interstitials—notices that appear in front of a post—that warn users when content in their Instagram or Facebook feeds has been flagged as false by outside fact-checkers. Donovan says it makes sense to use a digital speed bump of sorts to restrict access to inaccurate content, but the notices may have the opposite effect.
“The first accounts that they choose to enforce that policy on are going to get a lot of attention,” from both the media and curious users, she explained. “We have to understand there’s going to be a bit of a boomerang effect.” She says “media manipulators” will test the system to see how Facebook responds, “and then they will innovate around them.”
Facebook did not respond to inquiries about when or where the feature would be rolled out, or whether it would apply to all content that had been rated partly or completely false by third-party fact-checkers.
Donovan says she’s not sure if the feature’s potential benefits are worth the risks of amplification, particularly since Facebook may not be able to identify and flag misleading content before it reaches people. “Taking it down two days later isn’t helpful,” nor is hiding it behind a notice, she says, “especially when it’s misinformation that’s traveling on the back of a viral news story, where we know that the first eight hours of that news story are the most consequential for people making assessments and bothering to read what the story is even about.”
Also Monday, Facebook said it would attach new labels to pages or ads run by media outlets it deems “state-controlled,” like Russia Today. It said it would also require some pages with large US-based audiences to be more transparent about who’s running them; at first the requirement applies only to verified business pages, and it will later extend to pages that run ads on social issues, elections, or politics in the US. In addition, ads that discourage people from voting will no longer be permitted.
But researchers say that these measures are too little too late. “Every announcement like this, and all the recent publicity blitz has an undercurrent of inevitability,” says David Carroll, an associate professor at Parsons School of Design known for his quest to reclaim his Cambridge Analytica data. “It shows that they still need to show that they’re doing things. One advantage to these cosmetic things is that they look like they’re significant moves, but they’re really just like pretty small user interface tweaks.” But that’s not enough at this stage, he says.
The key question, researchers say, is enforcement, what Donovan calls “the Achilles heel of all of these platform companies.” She says Facebook issues many policies related to hate speech, misinformation, and election integrity. “But if they’re not willing to enforce those rules—especially on politicians, PACs, and super PACs—then they haven’t really done anything.”
In September, Facebook said politicians would be exempt, in the name of newsworthiness, from the company’s usual rules against posting misinformation and other forms of problematic content. Earlier this month, that exemption was extended to advertisements, giving political candidates and officeholders free rein to lie in Facebook ads.
This is the “big gaping hole” Facebook’s announcement Monday failed to address, says disinformation researcher (and WIRED Ideas contributor) Renee DiResta. Facebook’s policies contradict one another, she says, simultaneously treating misinformation as a problem when it is disseminated by foreign actors but as free expression when it is posted by anyone who falls under the vague category of “politician.”
Both DiResta and Donovan also questioned whether Facebook’s new transparency measures and election integrity policies would be applied to political candidates at all. On the press call, Zuckerberg emphasized that he didn’t think it was right for a private company like Facebook to “censor” the speech of politicians, a point he argued at length last week in a speech at Georgetown University. But he noted exceptions, such as when a politician calls for violence or urges voter suppression.
Facebook was short on details about how exactly it would determine that a politician was doing so, or how it decides that a user is a politician or political candidate in the first place. Katie Harbath, Facebook’s public policy director for global elections, said the company would look at registration paperwork to determine whether campaigns are legitimate, though she offered no details about who specifically would undertake that research, how the information would be communicated to moderators, or how frequently it would be updated.
DiResta says the exemptions effectively communicate to bad actors that misinformation is allowed on Facebook, so long as you can find a way to get yourself labeled a politician or political candidate. “Anybody who’s a good troll should go and file papers to run for office at this point,” she joked. “Run for something that’s free to file for—you’re never going to get elected, but you can certainly troll the hell out of everybody else while you’re doing it.”