Twitter CEO Jack Dorsey engaged in a bit of cross-platform trolling last week, announcing that his company would ban political advertising from the platform, and adding a snarky subtweet aimed at Facebook’s stated policy of tolerance for paid political misinformation. Yet there’s something very strange about the centrality of paid ads in our ongoing debate about how to grapple with viral lies and their distorting effect on democracy. Restricting this particular form of communication may be a simple solution—and one that’s easy to import from familiar debates about campaign finance in the television era—but it holds no real promise for addressing the problem. Paid ads have very little to do with how political deception spreads online, and doing away with them will likely hamper legitimate political speech.
Any conversation about misinformation on social media must, of course, engage with Russia’s well-documented campaign of interference in the 2016 American presidential race. While we may never know with any certainty whether that campaign altered the outcome of the election, it’s clear that paid ads were not themselves a significant factor. As a recent report from the Senate Select Committee on Intelligence notes, the disinformation warriors at Russia’s Internet Research Agency spent only about $100,000 on Facebook ads over the two years leading up to the election—chicken feed in the context of national presidential campaign spending. (As a point of comparison, US representative Alexandria Ocasio-Cortez, one of the more vocal critics of Facebook’s lax approach to political ads, spent $370,000 on the platform to defend a single congressional district seat.) Moreover, very little of the IRA’s spending was on traditional political advertising: The Senate report notes that only about 5 percent of the Russian ads users saw prior to the presidential election actually referenced Hillary Clinton or Donald Trump directly.
Instead, the paid ads most often functioned to raise the profile of phony astroturf communities that formed the heart of the Russian disinformation campaign. IRA employees posing as Americans posted “organic” content to ersatz groups, which were then shared voluntarily by unwitting Facebook users. “The nearly 3,400 Facebook and Instagram advertisements the IRA purchased,” according to the SSCI report, “are comparably minor in relation to the over 61,500 Facebook posts, 116,000 Instagram posts, and 10.4 million tweets that were the original creations of IRA influence operatives, disseminated under the guise of authentic user activity.”
That dovetails with the conclusion of researchers at Oxford University who found that organic sharing of Russian disinformation dwarfed the reach of the campaign’s paid ads, with some 30 million users passing along IRA content on Facebook and Instagram from 2015 to 2017. The role of paid ads was more indirect, laying the groundwork for that sharing. The single most viewed paid ad from Russian trolls, for instance, promoted a bogus group called Back the Badge, which purported to support law enforcement officials in the face of complaints about their use of force and racialized policing practices. Other ads took the opposite approach, and promoted fake groups that mimicked real activist movements such as Black Lives Matter. Indeed, racial minorities appear to have been the groups most heavily targeted by Russia—not with ads promoting a candidate but via rhetoric designed to suppress the largely Democratic African American vote by portraying traditional electoral participation as futile and politicians from both parties as handmaidens of white supremacy.
If you’re worried about foreign governments mounting disinformation campaigns, restricting paid political advertising on social media will do almost nothing to hinder such campaigns. When foreign intelligence services want to misinform American voters, they mostly rely on Americans to do the work of spreading their messages for free.
If ad bans won’t help against foreign interference, will they at least help foster a healthier and more authentic domestic discourse? Dorsey’s argument that “political message reach should be earned, not bought” sounds reasonable enough. But if you expect ad bans to bolster outsider candidates against monied interests, you may be disappointed. Ryan Grim, a political writer for The Intercept and author of We’ve Got People: From Jesse Jackson to AOC, the End of Big Money and the Rise of a Movement, called Dorsey’s announcement “a huge blow to progressives, and a boon to big-money candidates.” Twitter and Facebook, Grim explained in a tweet, “are where candidates build and organize lists of supporters, that they then turn into donors and volunteers. If only Twitter bans ads, that’ll hurt progressive candidates but it wouldn’t be fatal. If Facebook does it too, that’s damn near fatal in this ecosystem. That’s how unknown candidates find supporters, persuade them to join their email list/contact info, then organize them.”
This may seem counterintuitive: Ad bans will apply across the board, and one might expect that a level playing field works to the greatest advantage of those without deep pockets. But ads are most valuable to outsider candidates and grassroots movements that don’t already have large audiences and high name recognition. If paid ads aren’t available as a mechanism to break through and attract initial attention and support, then the importance of the people and institutions that already have large audiences will be magnified and entrenched. The official Twitter accounts of the national party committees—@GOP and @TheDemocrats—have millions of followers. Eliminating an alternative path for outreach to voters will tend to increase dependence on those institutions and their large microphones.
More worrying is the possibility that lesser-known candidates would compensate for the loss of paid exposure by tailoring their messaging more aggressively for viral spread. What we know about the type of content that is most likely to go viral—the sort of “reach” most easily “earned”—is not reassuring: It’s provocative, sensational, outrageous, and extreme. Indeed, that’s one of the reasons online misinformation is such a serious problem: Surprising and outlandish conspiracy theories, or fake news headlines, tend to draw eyeballs and spread quickly. This applies to paid ads as well, so the incentive to produce ever more polarizing rhetoric—calculated to provoke anger rather than deliberation—will be powerful whether or not social media platforms institute Twitter-style prohibitions. But those incentives only strengthen when campaigns and outside political groups are forced to rely exclusively on virality.
Things get thornier still when you consider that Twitter’s ban covers not just traditional campaign ads but also the more nebulous category of “issue ads.” That means an enormous number of activist groups of every ideological stripe, many of which may not think of themselves as engaged in “political advertising,” will discover they’ve lost an important means of raising awareness—especially when they’re trying to take their message to groups that aren’t already politically engaged, and may not know how the issue affects them. As tech writer Will Oremus argues, what counts as a “political ad” is itself contentious, and politics often permeates even ordinary advertising. Energy companies, for instance, are fond of running elaborate ad campaigns seeking to brand themselves as ecologically friendly—messaging that implies, whether stated explicitly or not, that there’s little need for new policies or regulations to mitigate the environmental impacts of their operations. Such ads are unlikely to get classified as “issue ads”—though a campaign by an environmental group that contradicted this sort of green branding likely would be. In other words, an ad policy meant to level the playing field could in practice have the opposite effect.
Political ad bans appeal in part because they’re so simple and (apparently) straightforward, and in part because they flatter our democratic self-conception as wise and informed citizens. If we’ve been consuming and sharing misinformation, we like to think that it’s because some outside force has foisted those messages on us. It’s harsher to consider the alternative explanation: that we’re simply more receptive to information that confirms our preexisting beliefs than to information that corrects them—in other words, that we’re simply not adept at discerning truth.
Addressing the real problem will be far more difficult. In short, we’d have to figure out how to combat the organic sharing of misinformation by ordinary users without turning platforms into invasive arbiters of speech. To dodge that hard task by touting a superficially neutral ad ban should be seen for what it is: a cop-out.