What Would Facebook Regulation Look Like? Start With the FCC

The federal government seems increasingly likely to take action against platform giants such as Facebook and Google. Antitrust intervention has emerged as the focal point of such efforts. Just listen to Mark Zuckerberg.

But can antitrust enforcement address the full range of concerns that these platforms present? How does antitrust address the problem of disinformation? Or live-streamed violence? Or hate speech? Or the role these platforms have played in the implosion of local journalism? Can antitrust extend beyond the economic marketplace and effectively protect the marketplace of ideas? Probably not.

Therefore, even if antitrust enforcement moves forward, “social welfare regulations are also required,” as Harvard’s Gene Kimmelman has argued. This is why there have also been calls to create a new regulatory agency focused on digital platforms. Such an agency would need to address not only concerns about competition but also these broader social welfare concerns. Essentially, then, we need a robust public interest framework for platform regulation.

There is already a well-established template. The Federal Communications Commission’s regulatory mandate includes not only ensuring adequate competition in the electronic media sector but also ensuring that the broader public interest is served. Within the context of this public interest standard, the FCC has pursued a variety of social welfare objectives, from reducing the digital divide to protecting children from adult content to ensuring that the public has access to a diversity of sources and viewpoints. It even has regulations in place that prohibit the broadcasting of disinformation. The FCC also has the authority to review proposed media mergers according to two criteria: 1) their implications for competition; and 2) their implications for the broader public interest.

However (and this is a hugely important however), the FCC’s ability to regulate on behalf of the public interest is largely confined to the narrow context of broadcasting. Consider, for instance, that the FCC did not even review the 2018 merger of AT&T and Time Warner, two of the largest communications companies in the world. Why? Because this colossal merger did not involve the transfer of any broadcast licenses, which is the sole trigger for the FCC’s public interest review of media mergers. In 2019, this seems incredibly anachronistic.

Our existing public interest framework for media regulation does not apply to digital platforms. Why is the application of the public interest standard so limited? Because the First Amendment restricts government intervention in the media sector. However, the system that has evolved in the US creates narrow and limited exceptions based upon the characteristics of individual technologies. For instance, broadcast regulation is permissible in part because broadcasters utilize a public resource—the broadcast spectrum, which policymakers and the courts have treated as “owned by the people.” In exchange for access to this collectively owned resource, broadcasters must abide by a range of public interest obligations, some of which may infringe upon their First Amendment freedoms.

How, then, could we expand the scope of the public interest standard so that it can be brought to bear where it now seems to be needed most: in the regulation of digital platforms? The solution involves borrowing from broadcast regulation.

Like broadcasters, many digital platforms have built their businesses on a public resource. In this case, the public resource is not spectrum but, rather, our user data. Massive aggregations of user data provide the economic engine for Facebook, Google, and beyond. For several reasons, user data can—and should—be thought of as a public resource that is “owned by the people.”

First, it is widely accepted at this point that individuals should have some form of property rights in their user data. But given that the real value of user data lies not at the individual level but, rather, in massive aggregations, a more collectively oriented property right makes sense. Second, practical challenges (and potential downsides) come with granting individuals full-fledged property rights in their user data. An individual property rights approach ignores the distinctive characteristics of user data as a resource, and it could make it more difficult to unlock the wide-ranging benefits that flow from large aggregations of user data. A more collectivist approach could better protect and preserve the value and innovations that emerge from these data aggregations.

If we understand aggregate user data as a public resource, then just as broadcast licensees must abide by public interest obligations in exchange for the privilege of monetizing the broadcast spectrum, so too should large digital platforms abide by public interest obligations in exchange for the privilege of monetizing our data.

What those obligations should look like is, of course, the next big question. But once we think of aggregate user data as a public resource, the path opens up for moving beyond antitrust enforcement and developing a regulatory framework in which digital platforms operate under obligations to serve the public interest.

