Why Mark Zuckerberg’s Oversight Board May Kill His Political Ad Policy

Mark Zuckerberg is the ultimate decisionmaker at Facebook—he is not only the CEO but also controls a majority of the voting stock, so he cannot be overruled. He always gets his way.

No example proves this more than his stance on political advertising. Per his command, Facebook allows politicians to say anything in ads short of illegalities. They can lie with impunity. It’s a controversial, maybe indefensible stand. Employees have begged him to reconsider. Legislators cite it as a big reason Facebook should be regulated. Critics charge that it’s proof that Facebook is breaking democracy. Hillary Clinton has called him an “authoritarian” because of the policy. But Zuckerberg is adamant that it’s up to Facebook’s users to determine for themselves whether politicians are lying. So, no fact-checking those ads.

At Facebook, Zuck’s word is law.

But beginning sometime later this year, Zuckerberg’s word will no longer always be the final one. After nearly two years of preparation, Facebook is almost done setting up its Oversight Board, an independent panel with the power to override Facebook’s most contentious decisions on controversial content. Today, Facebook is releasing a set of bylaws that will determine how the board will operate. (The bylaws still need to be approved by the board once it is convened.) Next month it will reveal the names of the first content arbiters, starting with around 20; the board will eventually grow to 40.

Think of the Oversight Board as a kind of Supreme Court of Facebook, the top court of appeal on what comes down from and what remains on the News Feed and Instagram. (At first, WhatsApp, Messenger, and Facebook Dating aren’t in play.) Some call it a bold experiment in corporate governance. Others say it’s an elaborate exercise in passing the buck. But whether you’re skeptical or optimistic about it, it’s undeniably a huge effort. Facebook is spending well over $100 million and building an elaborate infrastructure to support the board, both internally and for the independent trust that will operate it. Even more striking is the power Facebook is transferring to the board to determine the fate of the disputed content its members rule on. As with the Supremes, board decisions are final. Facebook has vowed to honor its rulings, even if it disagrees with them. Even if Zuckerberg disagrees.

That means a long but inexorable countdown clock has begun on Zuckerberg’s insistence on permitting paid political lying.

The Oversight Board’s bylaws set out a road map for what may become the end of his stubborn stand on political advertising. Here’s the scenario: A politician makes a bogus charge in a paid Facebook ad, falsely claiming an opponent has taken a bribe, appeared in a sex film, trafficked in drugs, or doesn’t wash their hands after visiting the bathroom. Right now, the victim of one of those lies has no recourse: If they appeal to Facebook, the company will refer to Zuckerberg’s official policy. Facebook will continue to pocket the money and promote the lie.

The point of the board is to take knotty decisions like these out of Facebook’s hands. The first cases, which could come as early as March, will either be brought by users who have exhausted appeals after Facebook took their content down or be submitted by Facebook itself. Later this year, Facebook’s product team will build a way for users to appeal to the board when the company decides to let objectionable content remain.

Submissions to the board first go to its staff for review. The board will have a sizable number of full-time staffers, including a team of case managers who will fill roles similar to those of law clerks, while others will handle administration and interactions with Facebook’s team. Those working in this instant bureaucracy will not be paid directly by Facebook but by the separate trust that the company has created, funded by an irrevocable $130 million grant.

A complaint from the target of a bogus political ad is bound to come before the board eventually, and the board will almost certainly take on the case. Or Facebook itself might send the issue to the board. After all, the issue satisfies almost all the factors Facebook itself lists for assessing important cases. (A subset of the board’s members will sit on a selection committee.) According to an explanation of the board’s charter written by Facebook, these include severity (“the content reaches or affects someone else’s voice, privacy or dignity”); public discourse (“the content spurs significant public debate and/or important political and social discourse”); and difficulty (“there is disagreement about Facebook’s decision on the content and/or the underlying policy or policies”). It’s almost as if the whole project was created specifically to rule on Zuckerberg’s stance on political ads.

Nick Clegg, the former UK deputy prime minister who is now Facebook’s head of global policy and communications, confirms this. “I think this political ads issue of how do you treat political speech is exactly the kind of thing where it would be very interesting to see how the board thinks,” he says.

Here’s what will happen when the board finally takes on the issue. Cases are judged by five-person panels of board members. Facebook’s Community Standards team—the gang that formerly had the last word in content judgments—prepares a package of information about the offending content, as well as a defense of the company’s actions or inactions. The oversight panel can ask for more information and do its own research, tapping experts in the field. Then the panel makes its ruling. One can never know for certain how it will rule, but the values it must consider include authenticity, safety, privacy, and dignity, none of which seem to be present in an ad that intentionally circulates an outright defamatory lie about someone. So my guess is that the ruling will favor the party aggrieved by a fallacious ad.

The next step is for the full board to ratify or question the panel’s decision; any ruling must be approved by the full board. The board might even decide that all its members should consider the case themselves, an option reserved for particularly significant decisions. The outcome will probably be the same.

The panel, or the full board if it’s involved, will produce a public report on its decision. And at that point it will order Facebook to remove the ad. Facebook will be obliged to comply. It’s right there in the charter: “The board’s resolution of each case will be binding.” Facebook will have a week to zap the lying ad.

And in one swoop, Mark Zuckerberg’s most dug-in position will be dug up. And out.

The Oversight Board is a direct product of Facebook’s woes after the 2016 election. In the following year, Zuckerberg thought a lot about how much responsibility came with Facebook’s role in determining the boundaries of what people posted on the platform. Regulating the speech of billions of people was a lot of responsibility—and he didn’t want it. He began thinking about corporate governance structures in which outside voices could make some of those calls.

He was in the midst of those ruminations in January 2018 when Sheryl Sandberg forwarded him an email from one of her college friends. Noah Feldman was a Harvard law professor who had been thinking about Facebook’s problems through the lens of early constitutional law. He had just finished reading a book on James Madison when, on a visit to California, he took a bike ride around the Stanford campus. The idea came to him that Facebook’s toughest calls might best be handled by a quasi-judicial unit with autonomous powers. “This is a strange thing,” admits Feldman of the Oversight Board. “I mean, Facebook is not a country, and this body looks sort of like the court.”

He sent a brief description to Sandberg, who urged him to write up a proposal. When Zuckerberg saw it, he summoned Feldman to a meeting. “Unbeknownst to me, he had been thinking for a long time about devolution of power away from Facebook,” Feldman says. Zuckerberg ultimately hired Feldman as a consultant, and the project was put in motion.

“Mark had been seeking input from a lot of different places,” says Andy O’Connell, a director on Facebook’s Global Public Policy team. “Noah’s idea was actually implementable, and other ideas were not. And it was the most detailed proposal.” (Still, many in and out of Facebook claim to have thought of it. “I can’t tell you how many people have said, ‘Glad you’re running with my idea,’” says Zoe Darmé, a manager on the project.)

By the spring of 2018, Zuckerberg was sharing his excitement about the idea with people. In an April interview that year, he told me about brainstorming a Supreme Court–like entity, whose members don’t work for Facebook but would have binding authority. “I think [it] would help make people feel like the process was more impartial on judging what content should be on the service and what’s not,” he told me.

Leading the project were two relative newcomers to Facebook, Brent Harris and Heather Moore. Facebook had hired Harris, an expert in international regulation, to become its director of governance and global affairs late in 2017. Since he had worked on adjudicating the BP oil spill in the Gulf of Mexico, he was well-placed to deal with the gushers of offensive content on Facebook’s platform. Soon after the March 2018 Cambridge Analytica scandal broke, he began focusing on the board, joined by the newly hired Moore, an attorney with an extensive background in procedural justice. She headed the effort to write the board’s charter and bylaws. “[The concept] was to bring together a group of people outside of these walls with the expertise, knowledge, and purpose to really make consequential decisions in a way that was more democratic than what was currently happening inside the company,” she says.

In keeping with the theme of independence, the project leaders created a process to seek guidance from experts in a dense series of meetings, workshops, and conferences. They ran simulations of board deliberations. All told, Facebook consulted with more than 2,200 people from 88 countries.

Last year Facebook ran a series of 20 workshops, in places like Singapore, Menlo Park, Brazil, Berlin, Nairobi, and New York City, to take feedback from activists, politicians, nonprofit groups, and even a few journalists. By the time of the New York workshop I attended, Facebook had tentatively drafted a charter, and had suggestions on the bylaws that would dictate the group’s operations. But in our two-day discussion, everything was up for grabs.

One of the longest discussions involved precedent. Facebook handles millions of appeals every year on its content decisions. The board will handle an infinitesimal slice of those, maybe 25 or 30 in its first year—and Facebook is obliged to respect its decisions only in those individual cases. For instance, in our workshop we simulated a board discussion about a Facebook decision to take down a post in which a female comedian claimed that “all men are scum.” Facebook considered it hate speech and took it down, and a public controversy ensued. If a board overruled Facebook, the post would be restored. But restoring a single post doesn’t tackle the underlying problem: that Facebook’s Community Standards are too inflexible, treating hate speech the same whether it is directed jokingly at a powerful group or harshly at a vulnerable minority.

Ultimately, Facebook came up with a process by which the board could suggest, but not force, the company to regard its decisions as precedent for other cases. Members of the board ruling on a case can ask Facebook to change its Community Standards to adhere to the decision more generally. When that happens, Facebook must consider the request but is not obligated to fulfill it. If it doesn’t change its rules, it must post a public explanation of why not.

Will Facebook take those recommendations? According to O’Connell, the Community Standards team will examine any requests from the board the same way it already considers routine changes to its rules. This means forming a committee to study the various alternatives, asking for expert opinion, and then making a decision based not only on what is right from a human rights perspective but also on what’s feasible. “We would take [the recommendations] incredibly seriously,” he says. But it won’t necessarily implement them. “A lot has to go into really understanding the implications of what might seem like a pretty straightforward decision when you’re talking about applying it across billions of posts per day, in the thousands of languages,” says O’Connell. “There’s real operational translation, data science, testing work that really has to happen.” Heather Moore agrees that there will definitely be instances where Facebook rejects the board’s recommendations. If those proliferate, people might question whether the company is really committed to oversight.

And if the board considers—and rejects—Facebook’s policy in a case involving lying in political ads, Facebook will truly be under pressure to make the decision a precedent. “That’s precisely what the system is designed to do—place excruciating pressure on us to only stick to our policy if we really are absolutely sure that it’s the right one to do,” says Clegg. The current justifications for the policy could hardly stand up if its own oversight board judged it as a violation of human rights and dignity.

Another contentious issue at the workshop I attended involved who should sit on the board. Facebook seemed to think it was … people like us, in the room—well-educated, comfortable technocrats or public policy wonks. You can bet that some of the members will come from human-rights backgrounds. Another imperative was that the board be diverse, both culturally and geographically. After considering alternatives, Facebook concluded that board members should work part-time. They will work remotely, meeting in real life no less than twice a year. Their identities will be public, though their work on individual cases will be unsigned, to prevent blowback. Because the board, especially before it reaches its full 40 members, will be so small, this could put pressure on members. If there are only one or two from a given region or culture, are they then charged with representing the millions of people who share those characteristics?

Also, there was consensus that if the board was really to be independent, Facebook would not be selecting its members. This presented a logical problem. Since Facebook was setting up the whole process, it was unavoidable that its fingerprints would be on the choices, even if it just appointed the people who would appoint the members. “We heard everywhere, people didn’t think it would be legitimate for Facebook to choose the first group of members,” says Darmé. “When we asked, well, how would you do it? there was a wide range of ideas. Do we open it up to democratic votes? No, that’s a terrible idea. Do you choose a membership committee? Oh, that sounds great, but who gets to choose the membership committee? It was a bit of a Gordian knot.”

The solution it chose was to have Facebook appoint three co-chairs, drawing from publicly submitted nominations and helped by executive search firms. Those co-chairs (who are already selected and will be announced in February) will then select their colleagues. From that point, the board will pick all subsequent members, who may serve no more than three three-year terms. That puts a huge burden on the co-chairs, who will permanently infuse their intellectual DNA into the board. “A big content issue on Facebook right now is extremist content, leading to offline violence,” says Dia Kayyali, a program manager for Witness, a social activism organization. “What if they pick someone with [a biased] sort of political background and then that person is picking everyone else?” Facebook already is regularly accused of political bias from the left, the right, and everywhere in between; there’s little reason to believe this crucial juncture will prove any different.

Another problem is that some of the best potential candidates may reject an offer because of the stigma of becoming part of Facebook’s process. One person approached to be on the board, who wanted anonymity because the request was confidential, immediately thought, Are you kidding me? “There are people who would never talk to me,” this person says. “I have a real fear they’re not going to be able to get a good board because of the reputational harm that could accrue to anyone who says yes to them.” (One positive sign: Facebook announced today that Thomas Hughes, the director of the global free expression organization Article 19, has signed on to become the director of the oversight board. He won’t rule on cases but will set up the structures that will guide the board’s operations.)

Facebook put me in touch with several people in nonprofit organizations around the world who have been advising the board. All have issues with Facebook; their support for the board might be described as extremely cautious optimism. To Facebook’s credit, during its elaborate consultation process, the company actually listened to the thousands of people it approached. “At the beginning of the process, they were going to pick all the [board members], they were going to house the support function in Facebook,” says Charles Bradley, executive director of Global Partners Digital, a UK-based company specializing in human rights and governance. “One of the questions I had was, what’s the guarantee that Mark Zuckerberg is not the trustee of the trust? Then they created this [outside] trust.”

Ultimately, everyone inside and outside Facebook agrees, the board’s reputation will rest on the independence of its decisions. If Facebook is happy with everything the board does, the board can probably be deemed a failure.

“The only route to success here is for Facebook to regret some individual decisions that the board makes but not regret creating a body that will provide genuine oversight,” says Noah Feldman. “And that’s pretty risky.”

Clegg agrees with that. “We know that the initial reaction to the oversight board and its members will basically be one of cynicism—because basically, the reaction to pretty well anything new that Facebook does is cynical,” he says. Only by making impeccable decisions—including ones that give Facebook agita—can the board gain what Clegg says is “a slow burn credibility.”

As the two-year process to create the board reaches its conclusion, one thing still nettles Facebook’s governance team: the charge that the entire exercise is a way for Facebook to duck responsibility for its toughest decisions. “I just couldn’t disagree with that more,” says Harris. “There’s no question that the company holds a responsibility on this set of issues. It makes a whole series of decisions every day of what content is allowed or what’s not. [Setting up the board] is a conscious choice of, Who do you want to empower to make that decision?”

“It’s not like we’re punting without making a decision,” adds Moore. “We have to make a decision, and then it goes to the board for a final review. We will have to explain our rationale for the decision, provide information about what happened on the platform. I think of it as like an expansion of what we’re doing rather than punting it entirely to an outside entity.”

In fact, Facebook is already thinking of further expansions of its decisionmaking delegation. According to Kate Klonick, an assistant professor at St. John’s University School of Law who has been given inside access to document the process of creating the board, Facebook envisions oversight occurring even before it implements controversial products. “The bigger picture for the board is that they’re trying to fix a problem that has always existed in all tech companies—they want to build policy directly into product.” In other words, even before Facebook builds a product using something controversial like facial recognition, for example, it may submit the concept to get the board’s imprimatur. “They want product decisions, when they’re being formulated, to go to the board beforehand to insulate what it is that they end up doing if the product screws up,” she says.

Some Facebook people are already talking about the board becoming a model for the industry. Maybe other tech companies will adopt the idea. Or even submit their own controversies to Facebook’s oversight board. “We would love to partner with organizations like Twitter and Google or YouTube, when they’re looking at these areas,” says Fay Johnson, a Facebook product manager working to implement the company’s interface with the board.

More likely, those organizations will be watching closely to see whether Facebook’s oversight board survives the skepticism directed at everything the company does these days. And perhaps the most significant test will be whether the board takes on Mark Zuckerberg’s much-despised stand on misinformation in political ads. When I ask Harris and Moore about this, the room gets quiet. Harris flashes a nervous grin. Clearly this has been a topic of considerable discussion among the wonks in Facebook’s governance sphere.

“It’s possible,” says Moore. “That’s why you see Brent’s face looking like that.” She hastens to say that it won’t happen until the board starts taking direct complaints from aggrieved users. That is, unless Facebook itself sends the issue to the board, asking its new content overlords to rule on this nettlesome question. Still, the takeaway I got is that Zuckerberg and the board he created are indeed on a collision course over political ads, even if it takes a while. (Clegg is emphatic that it won’t happen before the 2020 election, but only later, after the board “finds its feet.” A counterargument might be that making the decision when it is needed most would be instrumental in gaining footing for this experiment. The bylaws do allow Facebook to ask the board to handle an issue about ads on an expedited schedule.)

Maybe, if the board ends Mark Zuckerberg’s stance on political ads—symbolically marking the end of his total control over all things Facebook—the strong-willed founder will wonder just what he’s done by empowering a group of 40 people to decide what goes up or stays down on Facebook.

Or maybe it’s the escape route he’s been waiting for all along.

Updated 1-28-2020 2:45 pm ET: The Oversight Board’s first cases can be submitted by users or by Facebook, not just Facebook as previously stated.

