A New Law Makes Bots Identify Themselves—That’s the Problem

This month, California became the first state to require bots to openly identify as automated online accounts.

On July 1, nine months after being signed into law, California’s SB 1001—the “Bolstering Online Transparency,” or B.O.T., bill—came into effect. The new rule requires all bots that attempt to influence California residents’ voting or purchasing behaviors to conspicuously declare themselves. The owner or creator of the bot is responsible for prominently designating the account as automated; the platform itself is off the hook. (To see what the law might look like in practice, visit @Bot_Hertzberg—a bot created by SB 1001 author and California Senator Robert Hertzberg to showcase the need for the new rule. “I AM A BOT,” its Twitter bio reads. “Automated accounts like mine are made to misinform & exploit users.”)

Renee DiResta (@noUpside) is an Ideas contributor for WIRED, the director of research at New Knowledge, and a Mozilla fellow on media, misinformation, and trust. She is affiliated with the Berkman-Klein Center at Harvard and the Data Science Institute at Columbia University.

Bots have been used to mislead users; to artificially inflate follower counts, likes, and retweets; and to manufacture consensus on divisive issues. In 2015 and 2016, they were frequently used to make topics trend on Twitter, creating the false impression that certain stories were of great importance. To its supporters, the bot law is a respite in a time of online chaos, a sorely needed defense against the disinformation, misinformation, and filter bubbles that exist on today’s sprawling social media platforms. For regulators, it’s a chance to do something—to make social media manipulation less of a free-for-all—as the US heads into another election.

But upon deeper inspection, California’s bot bill is hollow. It’s flashy and intriguing, but it does very little to deal with the issues plaguing online communication. The law’s original intent—to shine a spotlight on malign bot activity and to increase public awareness—is noble. But in effect, the law has three major flaws: ambiguity, limited platform responsibility, and misguided enforcement.

The law’s ambiguity stems from definitions that are tough to pin down. First, it never fully answers the question “What is a bot?” The text of the law defines a bot as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” That readily calls to mind Twitter, but the broad definition also sweeps in customer service chatbots.

More importantly, what does “substantially” mean? Malign bots are often automated for some percentage of the time, whether that’s intra-day or over their life cycle, particularly if the operator wants to “age” the account so that it doesn’t look like it was created for one purpose. But the account is then taken over by a human operator when it’s time to execute an influence operation. Russia’s infamous Internet Research Agency accounts, for example, are often carelessly referred to as “bots.” In reality, their most effective and well-known personas were largely human-operated. Persona accounts run directly by people are a much more daunting problem than purely automated bots when it comes to mitigating online influence operations.

The law is tailored to address conversations that inspire purchases or sway votes, but it fails to carefully define what constitutes “influenc[ing] a vote in an election.” Is sharing legitimate news stories or voting locations a form of “influence”? After all, not all automated online accounts are malign; some are simply news or information dissemination services, while others are created by artists.

Defining the boundaries of “influence,” “political issues,” and “automated accounts” has presented a challenge for platform policy teams and lawmakers alike. Those imprecise designations continue to plague legislative efforts as Congress and other states model bot bills of their own after the California effort. Last week, Senator Feinstein introduced a bill to prohibit campaigns, parties, and in some cases PACs from using political bots to influence a campaign. The bill would also require social platforms to enforce a bot disclosure policy. But again, the bill’s definition of a bot—an “automated software program or process intended to impersonate or replicate human activity online”—encompasses far more than political Twitter bots.

The California law’s second major flaw is its failure to mandate platform responsibility. Early drafts of the bill required online platforms to facilitate the disclosure of bots and to provide users with a way to report them. Twitter, for example, might have had to introduce a “bot badge,” similar to its “verified” checkmark, and implement a reporting framework.

Debate ensued about whether all platforms—big and small—should be responsible for enacting these bot disclosure frameworks. Such frameworks might be technically feasible for large platforms like Twitter to implement, but out of reach for startups and small platforms. Ultimately, the California law moved away from putting any responsibility for disclosure in the hands of the platforms. Instead, that responsibility now falls to the creator of the bot itself. Can we really expect individuals who build malign bots to voluntarily identify their creations as such? Is it naive to expect platforms with large and potentially influential bot populations to police themselves without oversight?

To a lesser extent, this issue also exists in the federal version. Anyone who owns a “social media website” (defined as “any tool, website, application, or other media that connects users on the internet for the purpose of engaging in dialogue, sharing information, collaborating, and interacting”) must come up with a plan to require bot owners to self-identify. This poses potential challenges for small platforms.

The California law’s third flaw is its misguided enforcement mechanism. The state government is not equipped to proactively—or even retroactively—identify bots. The penalty structure is defined by California’s Unfair Competition Law, which “gives the state Attorney General broad enforcement authority to levy fines of up to $2,500 per violation, as well as equitable remedies.” An assessment by the law firm Lewis Rice notes that the bot law’s overly broad language around “intent” and “bot” could mean steep fines for small sites using automated marketing tools like chatbots. California would end up punishing the innocuous rather than the truly harmful.

It’s unclear to what extent the law has even been recognized outside of California. There does not appear to have been a sudden influx of labeled automated accounts on Twitter since its passage.

California’s bot law isn’t without merits: As it moved through the California Senate, the law raised awareness about the pervasiveness of bots online. It creates a framework that the Attorney General’s office can draw on to address specific instances of malicious bot usage. But because it requires creators to identify their own bots, the law ultimately lacks the teeth that it would need to be effective. Since California’s progressive technology laws are regularly used to inform federal efforts—and, in this case, already have influenced emerging legislation—that’s a problem.

Any future bills should keep the onus to disclose bots on the online platforms best equipped to identify and address them. As John Frank Weaver wrote in The Journal of Robotics, Artificial Intelligence & Law, “Without requiring online platforms to have a stake in the successful enforcement of the Bot Bill, the California government will be left to its own devices and resources, which are already thinly spread. Removing all responsibility from online platforms significantly reduces the bill’s chances to be successful.”

The law’s goal—to take material steps toward protecting people from being manipulated—is admirable. That is a critically important objective, and one that our lawmakers have been slow to address. Bots are just one component of a compromised online information system. Let’s acknowledge the bot bill for what it is: an imperfect step in the right direction, with a long road ahead.

