The Two Myths of the Internet

On January 21, 2010, Secretary of State Hillary Rodham Clinton addressed a crowd at the Newseum in Washington, DC. She was there to proclaim the power and importance of “internet freedom.” In the previous few years, she said, online tools had enabled people all around the world to organize blood drives, plan protests, and even mobilize mass demonstrations for democracy. “A connection to global information networks is like an on-ramp to modernity,” she declared, and the US would do its part to help promote “a planet with one internet, one global community, and a common body of knowledge that benefits us all.”

Clinton’s speech acknowledged that the internet could also be a darker instrument—that its power might be hacked to evil ends, used to spew hatred or crush dissent. But her thesis rested on the core beliefs of techno-fundamentalism: that digital technologies necessarily tend toward freedom of association and speech, and that the US-based companies behind the platforms would promote American values. Democracy would spread. Borders would open. Minds would open.

Wouldn’t that have been nice? Ten years later, Clinton is a private citizen, denied the highest office she sought by a political amateur who leveraged Facebook, Twitter, and YouTube to drive enthusiasm for his nativist, protectionist, and racist agenda. Oh, and the Newseum is closing down as well. Back in 2010, Clinton had called that institution “a monument to some of our most precious freedoms.” Now it too appears to be a relic of a bygone optimism.

The second decade of the 21st century began at the apex of naivete about the potential for the internet to enhance democracy and improve the quality of life on Earth. By the end of 2019, very few people could honestly hold such a position.

There were signs, at first, that Clinton’s sanguine stance had been prescient. The speech on “internet freedom” was given almost exactly a year before the Tunisian and Egyptian uprisings of 2011. The idea was in the air, and then, it seemed, we had proof: a “Twitter Revolution” was spreading around the globe.

The evidence was faulty, though. When the protests erupted in Tunis in December 2010, many learned about them via Twitter, in English or French, as most European and American journalists did, and thus assumed that Twitter played a greater role in spreading the movement than did text messages or Al Jazeera satellite television. In fact, before the revolution, only about 200 accounts actively tweeted in Tunisia. (Twitter would not even offer its service in Arabic until 2012.) Overall, fewer than 20 percent of the country’s citizens used social media platforms of any kind. Almost all, however, used cell phones to send text messages. Unsurprisingly and unspectacularly, people used the communication tools that were available to them, just as protesters have always done.

The same was true of Egypt. When in January 2011 angry people filled the streets of Cairo, Alexandria, and Port Said, many inaccurately assumed, once again, that Twitter was more than just a specialized tool of that country’s cosmopolitan, urban, educated elites. Egypt in 2011 had fewer than 130,000 Twitter users in all. Yet this movement too would be drafted into the rhetoric of Twitter Revolution.

What Facebook, Twitter, and YouTube offered urban, elite protesters was important to the revolutions in Tunisia and Egypt, but not decisive. The platforms mostly let the rest of the world know what was going on. In the meantime, the initial success of those revolutions (which would be quickly and brutally reversed in Egypt, and just barely sustained in Tunisia to this day) allowed techno-optimists to ignore all the other factors that played more decisive roles—chiefly decades of organization among activists preparing for such an opportunity, along with particular economic and political mistakes that weakened the regimes.

The speed of those two revolutions, each leading to a leader’s ouster in a matter of weeks, also allowed spectators to disassociate them from other uprisings in 2011 that did not end so quickly or so well, or that did not end at all. While the world was watching the streets of Cairo and Tunis, protesters demanded revolution or reform in Bahrain, Lebanon, and Morocco. Morocco’s King Mohammed VI did entertain modest reforms, and a similar uprising in Libya ended more slowly, with the overthrow of dictator Moammar Gadhafi in August 2011. And, most ominously, the optimism of the protests spread to Syria, where a brutal civil war rages to this day while Bashar al-Assad remains firmly in control.

Nonetheless, an unshakable myth of the Arab Spring emerged: Pro-democratic reformers had energized a broad population through Facebook and Twitter. That’s one of the reasons why so many people took Clinton’s “internet freedom” agenda seriously for so long.

Facebook and Twitter leveraged all this good publicity to give themselves more central roles in politics and policy. At the same time, social and digital media dramatically increased their reach. By 2018, more than 35 million Egyptians (more than one-third of the population) used Facebook regularly, and more than 2 million used Twitter. Embedded in mobile phones, which grew from rare to nearly universal around the globe over this past decade, Facebook became the chief way that billions learned about the world around them.

In 2019 Facebook stands out as a powerful organizational machine; the service has, in a sense, grown into the very role that was imagined for it at the decade’s start. If you want to fill the National Mall with anti-Trump protesters, or turn out supporters for a nativist referendum, Facebook is the ideal means by which to identify like-minded people and push them to act. Its global scale, precise advertising platform, and tendency to amplify emotionally charged content have made it indispensable for political organizers of all persuasions. Indeed, it may be the most effective motivational tool ever created. The myth of 2010 seemed to have come true, at least in part.

Healthy democracies, however, demand more than motivation. They need deliberation. None of the major global digital platforms that deliver propaganda, misinformation, and news to billions are designed to foster sober, informed debate among differently minded people. They’re not optimized for the very type of discourse that we’ll need to address the crucial challenges of the next decade: migration, infectious diseases, and climate change, just to name a few.

Aligning people and firing them up with indignation can loosen civic commitments across identity lines and undermine trust in the kinds of institutions that cultivate deliberation, from schools and journalism to science. The rosy optimism of 2011 soon ebbed as the dark side of the digital revolution became too glaring to ignore.

Two political events would be the fulcra for this pivot. The first was the 2013 revelation by former intelligence contractor Edward Snowden that governments had tapped into the formerly secure channels of major data companies to track and profile citizens without their knowledge. We realized, all at once, that what might once have seemed like a “harmless” system of private surveillance—the tracking of our preferences, expressions, and desires for the sake of convenience and personalization—had been handed over to unaccountable state actors. Snowden’s whistle-blowing put the dangers of massive data surveillance into public conversation, leaving journalists and citizens sensitized to further revelations.

The next freak-out hit when the Guardian and The New York Times revealed the breadth of voter data lifted off of Facebook by a little-known, London-based consulting firm. Cambridge Analytica claimed to have a magic formula that could sort out users based on their psychology, and sold its specious assumptions to political campaigns around the world.

It was all bunk, of course, and by 2016, the game should have been up. Ted Cruz’s presidential run had fizzled, despite—or perhaps because of—its reliance on Cambridge Analytica. When CA board member Steve Bannon took control of the Donald Trump presidential campaign that summer, he brought the firm’s services with him. No one working for the Trump campaign was fooled. They didn’t need Cambridge Analytica’s two-year-old user data; they already had Facebook’s targeting power, and its staff, at their disposal. The social network was happy to connect them with the precise voters they aimed to reach through its powerful advertising system.

Sitting in the same San Antonio office as Cambridge Analytica staff, Facebook employees aided Trump as the campaign surgically segmented voters and customized messages to motivate them to donate, attend rallies, knock on doors, and ultimately vote for its candidate. Trump won the three states that put him in the Oval Office by fewer than 80,000 votes. A hundred different things influenced voters that year, but Trump’s digital campaign head, Brad Parscale, understood that Facebook’s ability to identify and motivate potential Trump voters in swing states made a difference—perhaps the key difference.

Clearly, Facebook had boosted Trump as it had Rodrigo Duterte in the Philippines and Narendra Modi in India. It helped Jair Bolsonaro, another candidate with authoritarian tendencies, win the presidency of Brazil in 2018. Bolsonaro, like Modi, had run his campaign on Facebook, YouTube, and WhatsApp—Facebook’s encrypted private messaging service.

In the meantime, news media reported on Facebook’s role in amplifying calls to genocide in Myanmar, as well as sectarian violence in India and Sri Lanka. Other services were also named as culpable in spreading destructive, hate-filled content. Reports outlined how YouTube’s recommendation engine drives videogame fans toward misogynistic and racist videos, and how Twitter has been populated with trolls and bots that amplify propaganda aimed at fracturing liberal democracies around the world.

In the end, the myth of 2010 was transformed into another myth: Where once we thought online platforms would help depose dictators all around the world, we came to think that the same technologies are predisposed to do the opposite—to empower bigots and prop up authoritarian regimes. Neither of these notions is entirely wrong. But they don’t lead us to a clear agenda for confronting excesses and concentrations of power. Technologies determine nothing. Technologies influence everything.

Facebook, with its 2.5 billion users in more than 100 languages, is unlike any communicative tool we have ever had. It should bear the brunt of our criticism and regulatory attention, but not the full extent of it. Just as we need not blame Bond villains like those who ran Cambridge Analytica for our fates, we should remember that Facebook merely amplifies and concentrates dangerous trends already extant in the world.

Technologies are not distinct from the people who use them. They are, as Marshall McLuhan told us, extensions of ourselves. As such, they will embody the biases that we apply through their design and use. No technology is neutral in design or effect. Technologies make some actions easier and others harder, and it takes extra effort to notice and correct those biases.

Facebook, Twitter, and YouTube were not invented to undermine trust in science or indoctrinate racists. They just turned out to be the best possible ways to accomplish those goals. They were invented for a better species than ours. No technology is fixed in its form or use. People shape technologies over time, and technologies shape people. It’s a complex dialectic.

We focus too little on the slow, steady degradation of our ability to think and talk like reasonable adults. The goal of right-wing propaganda is rarely to generate a measurable, short-term effect like winning an election. The goal is to alter the range of what people imagine is possible or reasonable—to push the boundaries of the acceptable. It’s a long game meant to break norms. Political success follows, but years later and in unpredictable ways.
